BASIC DESIGN DEFICIENCIES OF THE HUMAN BRAIN
The theory treats the brain as a technological product, and so it assumes that a designer existed who had to fulfil certain design requirements. Any technological design has some deficiencies, and in this section we shall try to identify them.
This theoretical, abstract designer lies outside the theory, and we are not interested in it. It could be "Mother Nature" or God or an extraterrestrial civilization or anything else.
These deficiencies are described here mainly for the human brain, but some can also be found in the animal brain. The design deficiencies, as MDT detects them, are:
XD1: The tendency to associate an image-model with any situation a person meets. This deficiency is explained by the "image nature" of the brain. It explains why so many persons "stay" on level 3, even though level 5 has been accessible for about 100 years. This deficiency can be corrected by education.
XD2: There is no hardware protection to prevent an uncontrolled jump from one model to another in interaction with a complex external reality. Stability in a model is a quality parameter of a brain.
Long-range models can stabilize a person, but the XD2 deficiency is not related to them. XD2 concerns the capacity to stay in a model when faced with a complex external reality. This deficiency can be corrected by software (education, for instance).
The lack of stability in a model can induce the illness called schizophrenia, because this lack of stability tends to favor short-range models. Indeed, when there is no stability in a model, the brain will make a specialized model for any particular situation met in the external reality. Such models are not able to see that some different facts can be correlated; only a long-range model can detect such a correlation. So, stability in a model is a quality parameter of a brain, and the lack of stability indicates a low-quality brain.
This deficiency can be found in the animal world too. For example, a dog has to watch a perimeter. That dog can jump from the watch-model to the food-model if it gets food from strangers. Such a dog is a low-quality dog, due to its lack of stability in the model.
Dolphins have good stability in a model, and so we consider them advanced animals.
For human beings, the lack of stability in a model is a major drawback. Such persons are not suited to any complex activity.
XD3: This is a basic deficiency. Let us start with its description, based on examples.
So, the brain interacts with an external reality and makes a harmonic model with 3 elements. If that external reality has, in fact, 4 elements, the missing element cannot be discovered from the 3-element model. As a 3-element model makes a number of wrong predictions, it is not easy to see what the problem is from an analysis of the mistakes. The reason is that, once the 3-element model is activated, reality is just the one generated by this model. There is no other reality! We cannot be outside of our active model. In such a case, the brain tries to correct the model, usually by changing the importance of some elements or relations. Sometimes this procedure works, and the brain will continue to use the 3-element model.
Such a situation occurs when we do not have enough long-range models. In the above example, the situation can be corrected if there is a long-range model which contains the 3-element model as one of its elements. But even so, by analyzing the mistakes, it is not easy to understand what the problem is.
The brain makes models based on the available data. Such models are made in a harmonic/logic way, but the stability of a model is no guarantee that the model is good in interaction with a complex external reality.
We define XD3A as a design deficiency which means that a brain is not able to predict the possibility of a missing element or relation in a stable (harmonic or logic) model.
A brain affected by XD3A is not able to predict that a model might be missing some elements. A person who can fight XD3A can predict such a situation and will treat any model as preliminary.
Another case: a brain has a stabilized model with 100 elements. This model has already generated a large number of correct predictions. At one moment the external reality changes, and now there are 101 elements. As we know, to correct a model means to reconstruct everything from scratch, with or without components from the old model. This task could be so difficult that it exceeds the technical capacity of the brain. In such a situation the old model is fragmented, and the brain uses it in this way. Of course, this can produce a lot of negative effects, including induced psychiatric disorders.
We define XD3B as a design deficiency which means that a brain is not able to reconstruct a model once that model is detected as wrong in association with a new external reality. We can also express this as the impossibility for a brain to correct an XD3A deficiency once it has been discovered.
XD3-deficiencies are widespread in the current activity of human beings. There is no reference by which to know that all the entities of the external reality are associated with the right YMs in the associated model. For us, the external reality exists only if it is associated with a model. Once we have activated such a model, reality is what the model says. We cannot be outside of our active model.
Once we have a model associated with a specific external reality, the model is considered good based on the predictions already made. There is no guarantee that the model will continue to be good in every situation and at any time. A good-quality brain has to know this and to predict some of the negative effects associated with such a situation. So, this deficiency can be controlled by software (education, for instance).
XD4: This is a deficiency associated only with image-models. It does not exist in a symbolic-model environment.
For an image-model there is no possibility to know the importance of an element or relation. The brain chooses the importance in a more or less arbitrary way. A model can be harmonic (stable) for any importance assigned to its elements and relations.
A mildly negative consequence of this deficiency is the fact that, faced with a given external reality, almost every person makes a personal image-model associated with that external reality. We will see later that, in extreme situations, this deficiency is associated with the psychiatric disorder called "paranoia".
Symbolic models do not have such problems. Once a symbolic model is made in a mathematical environment, the "law of the propagation of errors" can predict the importance of any element or relation.
For instance, if we have a complex mathematical formula, the law of the propagation of errors will tell us how much the result changes if an element is changed by, let's say, 1%.
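This idea can be sketched numerically. The small Python function below (an illustration, not part of MDT; the formula and parameter names are invented for the example) perturbs each element of a formula by 1% and reports how much the result changes, which is exactly the "importance" that a symbolic model makes measurable:

```python
def sensitivity(f, params, delta=0.01):
    """Estimate the importance of each element of a formula:
    the percentage change of the result when that element
    alone is perturbed by `delta` (default 1%)."""
    base = f(**params)
    importance = {}
    for name, value in params.items():
        perturbed = dict(params, **{name: value * (1 + delta)})
        importance[name] = 100 * (f(**perturbed) - base) / base
    return importance

# Illustrative symbolic model: f(a, b, c) = a * b**2 / c
f = lambda a, b, c: a * b ** 2 / c
result = sensitivity(f, {"a": 2.0, "b": 3.0, "c": 4.0})
```

For this illustrative formula, a 1% change in a moves the result by 1%, a 1% change in b moves it by about 2.01% (the squared term makes b the most important element), and a 1% change in c moves it by about -0.99%. No such ranking of importance is available for an image-model.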
We have already used the term "correct" in association with the importance of an element or relation in an image-model. If there is an external reality and two associated models, one image-model and one symbolic-model, and if the two models make the same predictions, then the importance associated with the elements and relations of the image-model is correct. If not, the right importance is that of the symbolic model.
The above method is not practical in most situations. In fact, there is no method to know whether we have associated the right importance with each element or relation of an image-model. This is XD4.
XD5: This deficiency is a technological one. It means that there is no hardware or software method to erase a model from the brain. A model is made forever. It can be destroyed only in an uncontrolled way, due to the biological deficiencies of the brain.
The consequences of this deficiency are huge in many practical situations. The problem is developed further in another section of this book.