Such a situation occurs when we do not have enough long-range models. In the above example, the situation can be corrected if there is a long-range model that contains the 3-element model as one of its elements. But even then, it is not easy to identify the problem by analyzing the mistakes.
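
To make the nesting concrete, here is a minimal illustrative sketch (the Model class and its contains method are hypothetical names, not from the source) of a long-range model whose elements may themselves be smaller models:

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    """A model whose elements are atomic entities or other, smaller models."""
    name: str
    elements: list = field(default_factory=list)

    def contains(self, target: "Model") -> bool:
        """True if `target` appears anywhere inside this model."""
        return any(
            e is target or (isinstance(e, Model) and e.contains(target))
            for e in self.elements
        )

# The 3-element model embedded as one element of a long-range model.
small = Model("3-element", ["a", "b", "c"])
long_range = Model("long-range", [small, "d", "e"])
print(long_range.contains(small))  # True: the long-range model covers the small one
```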

A brain affected by XD3A is not able to predict that a model might be missing some elements. A person who can fight XD3A can anticipate such a situation and will treat any model as preliminary.

The brain builds models from the available data. Such models are built in a harmonic/logical way, but the stability of a model is no guarantee that the model will interact well with a complex external reality.

We define XD3A as a design deficiency, meaning that a brain is not able to predict the possibility of a missing element or relation in a stable (harmonic or logical) model.
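
As a rough illustration of XD3A-resistant behavior, the following sketch (PreliminaryModel and its predict method are invented names, not from the source) keeps every stable model permanently flagged as provisional, so an out-of-model query is treated as evidence of a possible missing element rather than rejected outright:

```python
from dataclasses import dataclass, field

@dataclass
class PreliminaryModel:
    """A stable model that never forgets it may be incomplete."""
    elements: set
    relations: set = field(default_factory=set)  # pairs of related elements
    provisional: bool = True   # an XD3A-resistant brain never clears this flag

    def predict(self, query):
        """Answer from the model, but flag queries that fall outside it.

        An element outside the model is taken as evidence of a possibly
        missing element, not as proof that the query itself is wrong.
        """
        if query not in self.elements:
            return None, "possible missing element: treat the model as preliminary"
        return query, "prediction from the current (still preliminary) model"

m = PreliminaryModel({"a", "b", "c"}, {("a", "b"), ("b", "c")})
print(m.predict("c"))  # known element: a normal prediction
print(m.predict("d"))  # unknown element: flagged as a possible gap
```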

Another case: a brain has a stabilized model with 100 elements. This model has already generated a large number of correct predictions. At some moment, the external reality changes, and now there are 101 elements. As we know, correcting a model means reconstructing it from scratch, with or without reusing components of the old model. This task can be so difficult that it exceeds the technical capacity of the brain. In such a situation the old model becomes fragmented, and the brain keeps using it in that form. Of course, this can produce many negative effects, including induced psychiatric disorders.
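
The following sketch illustrates this case under a loud assumption: that reconstruction cost grows with the number of possible relations, i.e. roughly quadratically in the element count. The names reconstruction_cost and update_model are hypothetical; the point is only that one extra element can push a full rebuild beyond a fixed capacity, leaving the brain with fragments:

```python
def reconstruction_cost(n_elements: int) -> int:
    # Assumed cost model: cost grows with the number of possible
    # relations, i.e. roughly quadratically in the element count.
    return n_elements * (n_elements - 1) // 2

def update_model(old_elements: list, new_element, capacity: int) -> dict:
    """Try a full rebuild; otherwise keep using the old model as fragments."""
    candidate = old_elements + [new_element]
    if reconstruction_cost(len(candidate)) <= capacity:
        # Full reconstruction from scratch (old components may or may not be reused).
        return {"status": "rebuilt", "elements": candidate}
    # XD3B: the rebuild exceeds capacity, so the old model is used fragmented.
    half = len(old_elements) // 2
    return {"status": "fragmented",
            "fragments": [old_elements[:half], old_elements[half:]]}

old = [f"e{i}" for i in range(100)]                        # the stable 100-element model
print(update_model(old, "e100", capacity=6000)["status"])  # rebuilt: capacity suffices
print(update_model(old, "e100", capacity=1000)["status"])  # fragmented: rebuild too costly
```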

We define XD3B as a design deficiency, meaning that a brain is not able to reconstruct a model once that model has been detected as wrong with respect to a new external reality. We can also express this as a brain's inability to correct an XD3A deficiency once it has been discovered.

XD3 deficiencies are widespread in the everyday activity of human beings. There is no external reference by which to verify that all the entities of the external reality are associated with the right YMs in the corresponding model. For us, the external reality exists only if it is associated with a model. Once we have activated such a model, reality is what the model says it is. We cannot step outside of our active model.

Once we have a model associated with a specific external reality, the model is considered good on the basis of the predictions it has already made. There is no guarantee that the model will continue to be good in every situation and at all times. A good-quality brain has to know this and to anticipate the negative effects associated with such a situation. So, this deficiency can be controlled by software (education, for instance).
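
One way such software control could look, as a hedged sketch (ModelMonitor is an invented name, and the window and threshold values are arbitrary): keep scoring recent predictions and flag the model for review when its success rate drops, instead of trusting it forever on past performance:

```python
from collections import deque

class ModelMonitor:
    """Track recent prediction outcomes; past success is no guarantee."""

    def __init__(self, window: int = 20, threshold: float = 0.8):
        self.outcomes = deque(maxlen=window)  # last `window` results, True/False
        self.threshold = threshold

    def record(self, correct: bool) -> str:
        self.outcomes.append(correct)
        rate = sum(self.outcomes) / len(self.outcomes)
        # A drop below the threshold warns that the model may no longer
        # match the external reality, however good it was before.
        return "ok" if rate >= self.threshold else "review model"

monitor = ModelMonitor(window=5, threshold=0.8)
for outcome in [True, True, True, True, False, False]:
    status = monitor.record(outcome)
print(status)  # "review model": recent accuracy fell below the threshold
```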

XD4: This is a deficiency associated only with image-models. It does not exist in a symbolic-model environment.

For an image-model there is no way to know the importance of an element or relation. The brain will assign importance in a more or less arbitrary way. A model can be harmonic (stable) for any importance assigned to its elements and relations.
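
A minimal sketch of XD4, under the assumption that "harmonic" can be reduced to a simple consistency test over relations (is_harmonic is an invented name, not from the source): the stability test never consults the importance weights, so any arbitrary assignment of importance leaves the model equally stable:

```python
import random

def is_harmonic(relations) -> bool:
    """Assumed stability test: no pair of elements is related both
    positively and negatively. Importance weights are never consulted."""
    signs = {}
    for a, b, sign in relations:
        key = frozenset((a, b))
        if signs.setdefault(key, sign) != sign:
            return False
    return True

elements = ["a", "b", "c"]
relations = [("a", "b", +1), ("b", "c", +1), ("a", "c", +1)]

# XD4: any assignment of importance leaves the model exactly as stable,
# so the brain's choice of weights is effectively arbitrary.
for trial in range(3):
    weights = {e: random.random() for e in elements}  # arbitrary importance
    print({e: round(w, 2) for e, w in weights.items()},
          "-> harmonic:", is_harmonic(relations))
```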