The black line represents a Law of Error easily stated in words, and one which, as we shall subsequently see, can be conceived as occurring in practice. It represents a state of things under which up to a certain distance from O, on each side, viz. to A and B, the probability of an error diminishes uniformly with the distance from O; whilst beyond these points, up to E and F, the probability of error remains constant. The dotted line represents the resultant Law of Error obtained by taking the average of the former two and two together. Now is the latter ‘better’ than the former? Under it, certainly, great errors are less frequent and intermediate ones more frequent; but then on the other hand the small errors are less frequent: is this state of things on the whole an improvement or not? This requires us to reconsider the whole question.
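The contrast just described can be exhibited by a small simulation. In the sketch below the spike width, outer limits, and masses of the two-part law are our own assumed figures, since the text's figure gives no numbers; it draws errors under such a law and tabulates how often small, middling, and great errors occur among single results and among averages of pairs:

```python
import random

random.seed(42)

# Assumed figures (not from the text): the probability falls off uniformly
# out to A on each side, then stays constant out to E.
A, E = 1.0, 5.0

def draw_error():
    """One error under the 'black line' law: a triangular spike of total
    mass 1/2 on [-A, A], plus a uniform floor of total mass 1/2 on [-E, E]."""
    if random.random() < 0.5:
        return A * (random.random() - random.random())  # triangular on [-A, A]
    return random.uniform(-E, E)

N = 400_000
singles = [draw_error() for _ in range(N)]
averaged = [(draw_error() + draw_error()) / 2 for _ in range(N)]

def freq(errs, lo, hi):
    """Fraction of errors whose magnitude falls in [lo, hi)."""
    return sum(lo <= abs(e) < hi for e in errs) / len(errs)

bands = {"small": (0.0, 0.1), "middling": (1.0, 2.0), "great": (3.0, E + 0.01)}
result = {name: (freq(singles, *b), freq(averaged, *b)) for name, b in bands.items()}
for name, (s, a) in result.items():
    print(f"{name:8s} errors: single {s:.4f}   average-of-two {a:.4f}")
```

With the figures assumed here, the averages of pairs come out with fewer great and fewer small errors, but more middling ones — exactly the ambiguity the passage raises.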

§ 29. In all the cases discussed in the previous sections the superiority of the curve of averages over that of the single results showed itself at every point. The big errors were scarcer and the small errors were commoner; it was only just at one intermediate point that the two were on terms of equality, and this point was not supposed to possess any particular significance or importance. Accordingly we had no occasion to analyse the various cases included under the general relation. It was enough to say that one was better than the other, and it was sufficient for all purposes to take the ‘modulus’ as the measure of this superiority. In fact we are quite safe in simply saying that the average of the results is better than the results taken singly.

When however we proceed in what Hume calls “the sifting humour,” and enquire why it is sufficient thus to trust to the average, we find, in addition to the considerations hitherto advanced, that some postulate was required as to the consequences of the errors we incur. It involved an estimate of what is sometimes called the ‘detriment’ of an error. It seemed to take for granted that large and small errors all stand upon the same general footing of being mischievous in their consequences, but that their evil effects increase in a greater ratio than that of their own magnitude.

§ 30. Suppose, for comparison, a case in which the importance of an error is directly proportional to its magnitude (of course we suppose positive and negative errors to balance each other in the long run): it does not appear that any advantage would be gained by taking averages. Something of this sort may be considered to prevail in cases of mere purchase and sale. Suppose that any one had to buy a very large number of yards of cloth at a constant price per yard: that he had to do this, say, five times a day for many days in succession. And conceive that the measurement of the cloth was roughly estimated on each separate occasion, with resultant errors which are as likely to be in excess as in defect. Would it make the slightest difference to him whether he paid separately for each piece; or whether the five estimated lengths were added together, their average taken, and he were charged with this average price for each piece? In the latter case the errors which will be made in the estimation of each piece will of course be less in the long run than they would be in the former: will this be of any consequence? The answer surely is that it will not make the slightest difference to either party in the bargain. In the long run, since the same parties are concerned, it will not matter whether the intermediate errors have been small or large.
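The indifference claimed in this example is at bottom an arithmetical identity: charging each piece at its estimated length, or charging the average estimate on every piece, comes to exactly the same total, even though the averaged estimates err less piece by piece. A minimal sketch, in which the price per yard, the true length, and the range of the rough estimates are all assumed figures:

```python
import random

random.seed(1)

# Assumed figures for illustration only.
PRICE, TRUE_LEN, PIECES = 3.0, 50.0, 5   # price/yard, true yards, pieces per day

total_separate = total_averaged = 0.0
abs_err_separate = abs_err_averaged = 0.0
DAYS = 2000
for _ in range(DAYS):
    # Five rough estimates a day, as likely to err in excess as in defect.
    ests = [TRUE_LEN + random.uniform(-2, 2) for _ in range(PIECES)]
    avg = sum(ests) / PIECES
    total_separate += PRICE * sum(ests)      # pay for each piece as estimated
    total_averaged += PRICE * avg * PIECES   # pay the average on every piece
    abs_err_separate += sum(abs(e - TRUE_LEN) for e in ests) / PIECES
    abs_err_averaged += abs(avg - TRUE_LEN)

print(f"total paid, piece by piece : {total_separate:.2f}")
print(f"total paid, by the average : {total_averaged:.2f}")
print(f"mean error per piece       : {abs_err_separate / DAYS:.3f} yd (separate)"
      f" vs {abs_err_averaged / DAYS:.3f} yd (averaged)")
```

The per-piece errors are indeed smaller under averaging, yet the two totals agree to the last farthing — which is the text's point: where the detriment of an error is simply proportional to its magnitude, averaging gains nothing.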

Of course nothing of this sort can be regarded as the general rule. In almost every case in which we have to make measurements we shall find that large errors are much more mischievous than small ones, that is, mischievous in a greater ratio than that of their mere magnitude. Even in purchase and sale, where different purchasers are concerned, this must be so, for the pleasure of him who is overserved will hardly equal the pain of him who is underserved. And in many cases of scientific measurement large errors may be simply fatal, in the sense that if there were no reasonable prospect of avoiding them we should not care to undertake the measurement at all.

§ 31. If we were only concerned with practical considerations we might stop at this point; but if we want to realize the full logical import of average-taking as a means to this particular end, viz. of estimating some assigned magnitude, we must look more closely into such an exceptional case as that which was indicated in the figure on [p. 493]. What we there assumed was a state of things in reference to which extremely small errors were very frequent, but that when once we got beyond a certain small range all other errors, within considerable limits, were equally likely.

It is not difficult to imagine an example which will aptly illustrate the case in point: at worst it may seem a little far-fetched. Conceive then that some firm in England received a hurried order to supply a portion of a machine, say a steam-engine, to customers at a distant place; and that it was absolutely essential that the work should be true to the tenth of an inch for it to be of any use. But conceive also that two specifications had been sent, resting on different measurements, in one of which the length of the requisite piece was described as sixty and in the other sixty-one inches. On the assumption of any ordinary law of error, whether of the binomial type or not, there can be no doubt that the firm would make the best of a very bad job by constructing a piece of 60 inches and a half: i.e. they would have a better chance of being within the requisite tenth of an inch by so doing, than by taking either of the two specifications at random and constructing it accurately to this. But if the law were of the kind indicated in our diagram,[11] then it seems equally certain that they would be less likely to be within the requisite narrow margin by so doing. As a mere question of probability,—that is, if such estimates were acted upon again and again,—there would be fewer failures encountered by simply choosing one of the conflicting measurements at random and working exactly to this, than by trusting to the average of the two.
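This last claim can be put to a numerical trial. In the sketch below every figure is assumed for illustration: a spike of very frequent small errors with a flat floor of larger ones, standing for the diagram's law, and a Gaussian law standing for the ‘ordinary’ one. It keeps only those pairs of measurements that disagree by about an inch, and then compares working exactly to one specification chosen at random against working to the mean of the two:

```python
import random

random.seed(7)

TOL = 0.1   # the work must be true to a tenth of an inch
GAP = 1.0   # the two specifications differ by one inch (60 vs 61)

def spike_and_floor():
    """The diagram's law (assumed figures): very small errors very frequent
    (spike of mass 1/2 on [-0.05, 0.05]), all larger errors equally likely
    within considerable limits (floor on [-3, 3])."""
    if random.random() < 0.5:
        return random.uniform(-0.05, 0.05)
    return random.uniform(-3.0, 3.0)

def ordinary_law():
    """An 'ordinary' law of error for contrast: Gaussian, sigma = 0.7 in."""
    return random.gauss(0.0, 0.7)

def compare(draw, trials=400_000):
    """Among measurement pairs disagreeing by about one inch, compare
    (a) working exactly to one specification chosen at random, and
    (b) working to the mean of the two (i.e. 60.5)."""
    kept = wins_random = wins_mean = 0
    for _ in range(trials):
        e1, e2 = draw(), draw()
        if abs(abs(e1 - e2) - GAP) > 0.05:   # keep only ~1-inch discrepancies
            continue
        kept += 1
        wins_random += abs(random.choice((e1, e2))) <= TOL
        wins_mean += abs((e1 + e2) / 2) <= TOL
    return wins_random / kept, wins_mean / kept

r1, m1 = compare(spike_and_floor)
r2, m2 = compare(ordinary_law)
print(f"spike-and-floor law: random spec {r1:.3f}   mean of the two {m1:.3f}")
print(f"ordinary (Gaussian): random spec {r2:.3f}   mean of the two {m2:.3f}")
```

Under the assumed spike-and-floor law the randomly chosen specification succeeds far oftener than the mean, while under the Gaussian law the mean is the better policy — the two cases the passage contrasts.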