VI. The Theory of Errors.—We are thus led to speak of the theory of errors, which is directly connected with the problem of the probability of causes. Here again we find effects, to wit, a certain number of discordant observations, and we seek to divine the causes, which are, on the one hand, the real value of the quantity to be measured; on the other hand, the error made in each isolated observation. It is necessary to calculate what is a posteriori the probable magnitude of each error, and consequently the probable value of the quantity to be measured.

But as I have just explained, we should not know how to undertake this calculation if we did not admit a priori, that is to say, before all observation, a law of probability of errors. Is there a law of errors?

The law of errors admitted by all calculators is Gauss's law, which is represented by a certain transcendental curve known under the name of 'the bell.'
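In modern notation (not Poincaré's), the bell-shaped curve is the density of the normal distribution; the symbols \(\mu\) (the true value) and \(\sigma\) (a measure of precision) are introduced here only for illustration:

```latex
\varphi(x) = \frac{1}{\sigma\sqrt{2\pi}}
             \exp\!\left( -\frac{(x-\mu)^{2}}{2\sigma^{2}} \right)
```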

But first it is proper to recall the classic distinction between systematic and accidental errors. If we measure a length with too long a meter, we shall always find too small a number, and it will be of no use to measure several times; this is a systematic error. If we measure with an accurate meter, we may, however, make a mistake; but we go wrong, now too much, now too little, and when we take the mean of a great number of measurements, the error will tend to grow small. These are accidental errors.
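The distinction can be made concrete with a small simulation; the true length, the bias, and the noise level below are invented for illustration. Averaging many readings shrinks the accidental error but leaves the systematic error untouched:

```python
import random

random.seed(0)

TRUE_LENGTH = 100.0  # hypothetical true length, in centimeters

def measure(systematic_bias, noise):
    """One measurement: the true value plus a fixed bias plus symmetric noise."""
    return TRUE_LENGTH + systematic_bias + random.gauss(0.0, noise)

# Accidental errors only: the mean of many measurements closes in on the truth.
accidental = [measure(systematic_bias=0.0, noise=0.5) for _ in range(10_000)]
mean_accidental = sum(accidental) / len(accidental)

# A systematic error (a meter that is too long always reads too short)
# never averages out, however many measurements we take.
biased = [measure(systematic_bias=-2.0, noise=0.5) for _ in range(10_000)]
mean_biased = sum(biased) / len(biased)

print(abs(mean_accidental - TRUE_LENGTH))  # small: accidental error shrinks
print(abs(mean_biased - TRUE_LENGTH))      # near 2.0: systematic error persists
```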

It is evident from the first that systematic errors can not satisfy Gauss's law; but do the accidental errors satisfy it? A great number of demonstrations have been attempted; almost all are crude paralogisms. Nevertheless, we may demonstrate Gauss's law by starting from the following hypotheses: the error committed is the result of a great number of partial and independent errors; each of the partial errors is very small and, besides, obeys any law of probability whatever, provided that the probability of a positive error is the same as that of an equal negative error. It is evident that these conditions will often, but not always, be fulfilled, and we may reserve the name of accidental for errors which satisfy them.
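The demonstration sketched here is, in modern terms, the central limit theorem. A minimal simulation under the stated hypotheses (the number and size of the partial errors are arbitrary choices for illustration): each partial error follows a symmetric law that is certainly not Gaussian, yet their sum behaves as Gauss's law predicts.

```python
import random
import statistics

random.seed(1)

N_PARTIAL = 200   # number of small partial errors per observation
N_OBS = 20_000    # number of simulated observations

def total_error():
    # Each partial error obeys an arbitrary symmetric law (here: uniform),
    # as the hypothesis requires; only symmetry and independence matter.
    return sum(random.uniform(-0.01, 0.01) for _ in range(N_PARTIAL))

errors = [total_error() for _ in range(N_OBS)]
sigma = statistics.pstdev(errors)

# Under Gauss's law, about 68.3% of errors fall within one sigma of zero.
within_one_sigma = sum(abs(e) <= sigma for e in errors) / N_OBS
print(within_one_sigma)
```

The observed fraction lands close to the Gaussian value of 0.683, although no individual partial error is Gaussian.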

We see that the method of least squares is not legitimate in every case; in general the physicists are more distrustful of it than the astronomers. This is, no doubt, because the latter, besides the systematic errors to which they and the physicists are subject alike, have to contend with an extremely important source of error which is wholly accidental; I mean atmospheric undulations. So it is very curious to hear a physicist discuss a method of observation with an astronomer. The physicist, persuaded that one good measurement is worth more than many bad ones, is before all concerned with eliminating by dint of precautions the least systematic errors, and the astronomer says to him: 'But thus you can observe only a small number of stars; the accidental errors will not disappear.'

What should we conclude? Must we continue to use the method of least squares? We must distinguish. We have eliminated all the systematic errors we could suspect; we know well there are still others, but we can not detect them; yet it is necessary to make up our mind and adopt a definitive value which will be regarded as the probable value; and for that it is evident the best thing to do is to apply Gauss's method. We have only applied a practical rule referring to subjective probability. There is nothing more to be said.
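In the simplest case, repeated measurements of a single quantity, Gauss's method of least squares reduces to taking the arithmetic mean. A minimal sketch (the readings and the crude grid search are invented for illustration; in practice one would differentiate the sum of squares directly):

```python
def least_squares_estimate(measurements, step=1e-4):
    """Find the value minimizing the sum of squared residuals by grid search."""
    lo, hi = min(measurements), max(measurements)
    candidates = [lo + step * i for i in range(int((hi - lo) / step) + 1)]
    return min(candidates, key=lambda x: sum((m - x) ** 2 for m in measurements))

measurements = [10.02, 9.98, 10.05, 9.97, 10.01]  # hypothetical readings
mean = sum(measurements) / len(measurements)

# The least-squares value agrees with the arithmetic mean.
print(least_squares_estimate(measurements), mean)
```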

But we wish to go farther and affirm that not only is the probable value so much, but that the probable error in the result is so much. This is absolutely illegitimate; it would be true only if we were sure that all the systematic errors were eliminated, and of that we know absolutely nothing. We have two series of observations; by applying the rule of least squares, we find that the probable error in the first series is half as great as in the second. The second series may, however, be better than the first, because the first perhaps is affected by a large systematic error. All we can say is that the first series is probably better than the second, since its accidental error is smaller, and we have no reason to affirm that the systematic error is greater for one of the series than for the other, our ignorance on this point being absolute.
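The caveat can be illustrated numerically; the biases and spreads below are invented for the purpose. The classical "probable error" is 0.6745 times the standard deviation (the deviation exceeded by half the observations under Gauss's law), and it sees only the internal scatter of a series, not any hidden systematic error:

```python
import random
import statistics

random.seed(2)
TRUE_VALUE = 50.0  # hypothetical quantity being measured

# Series 1: small accidental scatter, but an undetected systematic error.
series1 = [TRUE_VALUE + 0.8 + random.gauss(0.0, 0.1) for _ in range(500)]
# Series 2: twice the accidental scatter, but no systematic error.
series2 = [TRUE_VALUE + random.gauss(0.0, 0.2) for _ in range(500)]

# The rule of least squares judges only the accidental (internal) scatter...
pe1 = 0.6745 * statistics.pstdev(series1)  # probable error of one observation
pe2 = 0.6745 * statistics.pstdev(series2)
print(pe1 < pe2)  # series 1 looks better by this criterion...

# ...yet its mean is actually farther from the truth.
err1 = abs(statistics.mean(series1) - TRUE_VALUE)
err2 = abs(statistics.mean(series2) - TRUE_VALUE)
print(err1 > err2)
```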

VII. Conclusions.—In the lines which precede, I have set many problems without solving any of them. Yet I do not regret having written them, because they will perhaps invite the reader to reflect on these delicate questions.

However that may be, there are certain points which seem well established. To undertake any calculation of probability, and even for that calculation to have any meaning, it is necessary to admit, as point of departure, a hypothesis or convention which has always something arbitrary about it. In the choice of this convention, we can be guided only by the principle of sufficient reason. Unfortunately this principle is very vague and very elastic, and in the cursory examination we have just made, we have seen it take many different forms. The form under which we have met it most often is the belief in continuity, a belief which it would be difficult to justify by apodeictic reasoning, but without which all science would be impossible. Finally the problems to which the calculus of probabilities may be applied with profit are those in which the result is independent of the hypothesis made at the outset, provided only that this hypothesis satisfies the condition of continuity.