Channel
We can now state the remaining condition required for a decomposition to be unique. The index space I has to be partitioned into exactly two parts, say I′ and I″; i.e.,
    I′ ∪ I″ = I,        (8)
    I′ ∩ I″ = φ,
such that
    dim(X′) = dim(X″),        (9)
where
    X′ = ⨂_{i ∈ I′} Xᵢ,        (10)
    X″ = ⨂_{i ∈ I″} Xᵢ.
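Conditions (8)–(10) can be checked mechanically once we note that the dimension of a tensor product is the product of the dimensions of its factors. The following sketch (the function name `is_valid_partition` and the example dimensions are hypothetical, chosen only for illustration) tests whether a proposed pair I′, I″ partitions the index set and yields factors of equal dimension:

```python
from math import prod

def is_valid_partition(dims, I1, I2):
    """Check conditions (8) and (9): I1 and I2 partition the index set,
    and the two tensor factors X', X'' have equal dimension.
    dims maps each index i to dim(X_i)."""
    I = set(dims)
    # condition (8): union is I, intersection is empty
    if set(I1) | set(I2) != I or set(I1) & set(I2):
        return False
    # condition (9): dim of a tensor product = product of factor dims
    return prod(dims[i] for i in I1) == prod(dims[i] for i in I2)

dims = {1: 2, 2: 3, 3: 3, 4: 2}                  # dim(X) = 36
print(is_valid_partition(dims, [1, 2], [3, 4]))  # 2*3 = 3*2, so True
print(is_valid_partition(dims, [1], [2, 3, 4]))  # 2 != 18, so False
```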
(If dim(X) is odd, then we have to cheat a little by putting in an extra random dummy dimension.) And then the decomposition of the space

    X = ⨂_{i ∈ I} Xᵢ        (11)
has to be carried out so that this partitioning is preserved. Since this partitioning is arbitrary (as far as the mathematics is concerned), it is obvious that a space which is not partitioned will have many (equivalent) decompositions. On the other hand, if the partitioning is into more than two parts, then the existence of a decomposition is not guaranteed.
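The arbitrariness of the partitioning can be made concrete by enumerating the balanced splittings of a small example. The sketch below (the function name `balanced_bipartitions` and the four two-dimensional factors are hypothetical illustrations, not taken from the text) lists every bipartition satisfying (8) and (9); the fact that several exist is exactly the non-uniqueness described above:

```python
from itertools import combinations
from math import prod

def balanced_bipartitions(dims):
    """Enumerate all splittings I = I' ∪ I'' with dim(X') = dim(X'')."""
    I = sorted(dims)
    found = []
    for r in range(1, len(I)):
        for I1 in combinations(I, r):
            if I1[0] != I[0]:
                continue  # keep the part containing the first index,
                          # so each split is listed only once
            I2 = tuple(i for i in I if i not in I1)
            if prod(dims[i] for i in I1) == prod(dims[i] for i in I2):
                found.append((I1, I2))
    return found

dims = {1: 2, 2: 2, 3: 2, 4: 2}   # dim(X) = 16
for I1, I2 in balanced_bipartitions(dims):
    print(I1, I2)                 # three distinct balanced splittings
```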
A slight penalty has to be paid for the use of this partitioning: instead of eventually obtaining a random cartesian product of one-dimensional spaces, we obtain an extended channel (with random input) of one-dimensional channels. It is obvious that if we were to drop the partitioning temporarily, each such one-dimensional channel would be further decomposed into two random components. This decomposition is not unique, but one of the equivalent decompositions is particularly convenient: namely, the one in which we take one component out of the original X′ and the other, say V, random with respect to it. This V (as well as the cartesian product of all such V's, which are of necessity random) is called the linearly additive noise. The name “linearly additive” is justified because it is precisely the statistical concept isomorphic to the linear addition of vectors in orthogonal Euclidean geometry. (The proof of this last statement has not yet been completed.)
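The claimed isomorphism can at least be checked numerically in its most familiar instance: for independent random variables, variances add, just as squared norms add for orthogonal vectors (the Pythagorean identity). The sketch below uses two small discrete distributions of my own choosing (the values and probabilities are hypothetical illustrations) and verifies Var(X + V) = Var(X) + Var(V) exactly:

```python
from itertools import product

# Two discrete distributions, assumed statistically independent.
X = {-1: 0.5, 1: 0.5}            # Var(X) = 1
V = {0: 0.25, 2: 0.5, 4: 0.25}   # E[V] = 2, Var(V) = 2

def mean(d):
    return sum(x * p for x, p in d.items())

def var(d):
    m = mean(d)
    return sum(p * (x - m) ** 2 for x, p in d.items())

# Distribution of the sum X + V under independence:
# the joint probability factors, so convolve the two distributions.
S = {}
for (x, px), (v, pv) in product(X.items(), V.items()):
    S[x + v] = S.get(x + v, 0) + px * pv

print(var(X) + var(V))   # 3.0
print(var(S))            # 3.0 — variances add, as squared norms do
                         # for orthogonal vectors
```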