X = ⨂ᵢ Xᵢ    (11)

has to be carried out so that this partitioning is preserved. Since this partitioning is arbitrary (as far as the mathematics is concerned), it is obvious that a space which is not partitioned will have many (equivalent) decompositions. On the other hand, if the partitioning is into more than two parts, then the existence of a decomposition is not guaranteed.

A slight penalty has to be paid for the use of this partitioning: instead of eventually obtaining a random cartesian product of one-dimensional spaces, we obtain an extended channel (with random input) of single-dimensional channels. If the partitioning were dropped temporarily, each such single-dimensional channel could be decomposed further into two random components. This decomposition is not unique, but one of the equivalent decompositions is particularly convenient: namely, that decomposition where we take the component out of the original X′ and that which is random to it, say V. This V (as well as the cartesian product of all such V’s, which of necessity are random) is called the linearly additive noise. The name “linearly additive” is justified because it is precisely the statistical concept isomorphic to the linear addition of vectors in orthogonal Euclidean geometry. (The proof of this last statement is not yet complete.)
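The analogy to orthogonal vector addition can be checked numerically. The following is a minimal sketch (NumPy; all variable names are illustrative, not from the text): a jointly Gaussian pair is split into the component along X′ and a residual V, and the residual turns out to be statistically orthogonal (uncorrelated, hence for Gaussians independent) to X′, mirroring the geometric projection of a vector onto an axis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Jointly Gaussian pair: y carries a component of x plus independent noise.
n = 200_000
x = rng.normal(size=n)            # the original component X'
y = 0.8 * x + rng.normal(size=n)  # the observed variable

# Project y onto x: the regression coefficient plays the role of the
# projection coefficient in orthogonal Euclidean geometry.
C = np.cov(x, y)
a = C[0, 1] / C[0, 0]
v = y - a * x                     # the "linearly additive noise" V

# V is statistically orthogonal to X'.
print(abs(np.cov(x, v)[0, 1]) < 1e-9)  # True
```

The projection makes the sample covariance of x and v vanish identically, which is the statistical counterpart of two orthogonal vectors having zero inner product.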

Denumerable Space

The procedure for this decomposition was worded to de-emphasize the possible presence of a denumerable (component of the) space. Such a component may be given outright; otherwise, it results if the space was not simply connected. Any denumerable space is zero dimensional, as may be verified easily from the full information theoretic definition of dimensionality.
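The claim that a denumerable space is zero dimensional can be illustrated with a finite-resolution sketch of the information-theoretic definition of dimensionality. In the estimate below (NumPy; the function name and the resolution ε are my own choices, not from the text), the entropy of an ε-quantization is divided by log(1/ε): for a continuous one-dimensional component the ratio approaches 1, while for a denumerable (atomic) component the entropy saturates and the ratio tends to 0.

```python
import numpy as np

def info_dimension(samples, eps):
    """Estimate dimension as H_eps / log2(1/eps) from eps-bin quantization."""
    bins = np.floor(samples / eps)
    _, counts = np.unique(bins, return_counts=True)
    p = counts / counts.sum()
    h = -np.sum(p * np.log2(p))          # entropy of the quantized variable
    return h / np.log2(1.0 / eps)

rng = np.random.default_rng(1)
eps = 1e-3
continuous = rng.uniform(size=100_000)            # one-dimensional component
discrete = rng.choice([0.0, 1.0, 2.0], 100_000)   # denumerable component

print(info_dimension(continuous, eps))  # near 1
print(info_dimension(discrete, eps))    # much smaller, tending to 0 as eps shrinks
```

For the atomic distribution the quantized entropy stays fixed at log₂ 3 no matter how fine ε becomes, so the ratio is driven to zero, which is the sense in which the denumerable space is zero dimensional.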

The obvious way of disposing of a denumerable space is to use the conventional mapping that converts a Stieltjes to a Lebesgue integral, using fixed-length segments. (It can be shown that H is invariant under such a mapping.) Unfortunately, while this mapping followed by a repetition of the preceding procedure will always solve a given problem (no new[20] denumerable component need be generated on the second pass), it provides little insight into the structure of the resulting space. On the other hand, because channels under cascading constitute a group, any such denumerable space is a representation of a denumerable group.
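The invariance of H under such a mapping can be checked for the simplest case. The sketch below (NumPy; the particular mass function is an arbitrary example of mine) spreads each atom of a discrete (Stieltjes) measure uniformly over a unit-length segment, and the differential entropy of the resulting Lebesgue density equals the Shannon entropy of the atoms.

```python
import numpy as np

# A discrete (Stieltjes) mass function on a few atoms.
p = np.array([0.5, 0.25, 0.125, 0.125])
H = -np.sum(p * np.log2(p))     # Shannon entropy of the atoms

# Map atom i onto the unit-length segment [i, i+1) with constant
# density p[i]/1, converting the Stieltjes measure to a Lebesgue one.
density = p / 1.0               # constant density on each unit segment
h = -np.sum(density * np.log2(density) * 1.0)  # differential entropy

print(H, h)  # the two entropies coincide
```

With segments of length L ≠ 1 the differential entropy would shift by log₂ L, so fixed unit-length segments are exactly what keeps H invariant.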

SUMMARY

In summary, the original metrizable topological space was decomposed into an orthogonal Euclidean space times[21] a denumerable random cartesian product of irreducible (wrt direct product) denumerable groups. Thus, since any individual component of a random cartesian product may be studied independently of the others, all that one needs to study is: (1) a Gaussian distribution on a single real axis and (2) the irreducible denumerable groups.

Finally, it should be emphasized that there are only these two ways of decomposing a metrizable topology: (1) if a (statistical) basis is given, use the symmetric-matrix diagonalization algorithm described earlier (and given in detail in the three-channels-in-cascade problem), and (2) otherwise, use a suitable network of the NPO’s with n₀ = 1. Of course, any hybrid of these two methods may be employed as well.
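Method (1), diagonalization of a symmetric matrix, can be sketched for the Gaussian case singled out in the summary. In the example below (NumPy; the covariance values are arbitrary illustrations, not from the text), an orthogonal change of basis diagonalizes the symmetric covariance, turning the correlated vector into independent Gaussians on single real axes.

```python
import numpy as np

rng = np.random.default_rng(2)

# A correlated Gaussian vector with a given symmetric covariance.
cov = np.array([[2.0, 0.8, 0.3],
                [0.8, 1.5, 0.5],
                [0.3, 0.5, 1.0]])
x = rng.multivariate_normal(np.zeros(3), cov, size=200_000)

# Diagonalize the symmetric covariance: the orthogonal eigenvector
# basis decomposes the vector into independent one-axis Gaussians.
eigvals, q = np.linalg.eigh(cov)
y = x @ q                          # coordinates in the eigenvector basis

c = np.cov(y, rowvar=False)
print(np.allclose(c, np.diag(np.diag(c)), atol=0.02))  # True: off-diagonals vanish
```

The diagonal entries of the transformed covariance match the eigenvalues, so each resulting component is a Gaussian on a single real axis, which is exactly the object the summary says remains to be studied.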

On Functional Neuron Modeling