If we apply analysis to the principle just propounded, we obtain the following rule: Let us designate by unity the part of the fortune of an individual, independent of his expectations. If we determine the different values that this fortune may have by virtue of these expectations and their probabilities, the product of these values raised respectively to the powers indicated by their probabilities will be the physical fortune which would procure for the individual the same moral advantage which he receives from the part of his fortune taken as unity and from his expectations; by subtracting unity from the product, the difference will be the increase of the physical fortune due to expectations: we will call this increase moral hope. It is easy to see that it coincides with mathematical hope when the fortune taken as unity becomes infinite in reference to the variations which it receives from the expectations. But when these variations are an appreciable part of this unity the two hopes may differ very materially among themselves.
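The rule lends itself to a short computation. The sketch below (Python; a modern illustration, not in the original) measures the fortunes with the present fortune taken as unity, and forms the moral fortune as the product of the possible fortunes raised to the powers indicated by their probabilities:

```python
import math

def moral_fortune(values, probs):
    """Product of the possible fortunes, each raised to the power
    indicated by its probability (fortunes measured with the present
    fortune taken as unity)."""
    return math.prod(v ** p for v, p in zip(values, probs))

def moral_hope(values, probs):
    """Moral hope: the moral fortune, less the unity subtracted from it."""
    return moral_fortune(values, probs) - 1.0

# An even chance of rising to 1.5 or falling to 0.5 of one's fortune:
# the mathematical hope is nil, but the moral hope is negative.
print(moral_hope([1.5, 0.5], [0.5, 0.5]))       # about -0.134

# When the variations are insensible relative to the fortune taken as
# unity, the moral hope approaches the mathematical hope (here, zero).
print(moral_hope([1.001, 0.999], [0.5, 0.5]))   # about -5e-7
```

The second print illustrates the remark that the two hopes coincide when the fortune becomes very great in reference to the variations it receives.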
This rule conduces to results conformable to the indications of common sense which can by this means be appreciated with some exactitude. Thus in the preceding question it is found that if the fortune of Paul is two hundred francs, he ought not reasonably to stake more than nine francs. The same rule leads us again to distribute the danger over several parts of a benefit expected rather than to expose the entire benefit to this danger. It results similarly that at the fairest game the loss is always greater than the gain. Let us suppose, for example, that a player having a fortune of one hundred francs risks fifty at the play of heads and tails; his fortune after his stake at the play will be reduced to eighty-seven francs, that is to say, this last sum would procure for the player the same moral advantage as the state of his fortune after the stake. The play is then disadvantageous even in the case where the stake is equal to the product of the sum hoped for, by its probability. We can judge by this of the immorality of games in which the sum hoped for is below this product. They subsist only by false reasonings and by the cupidity which they excite and which, leading the people to sacrifice their necessaries to chimerical hopes whose improbability they are not in condition to appreciate, are the source of an infinity of evils.
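Both figures can be checked numerically. The sketch below (Python; a modern check, not in the original) first recomputes the heads-and-tails example, then solves for Paul's greatest reasonable stake in the game of the earlier chapter, taken here as paying 2 francs if heads appears at the first throw, 4 francs at the second, and so on:

```python
import math

# Fair heads-and-tails: a fortune of 100 francs, 50 staked on an even chance.
after_stake = math.sqrt((100 + 50) * (100 - 50))
print(round(after_stake))   # 87: the moral fortune falls from 100 to about 87

# Paul's game (assumed from the earlier chapter): heads at throw k pays
# 2^k francs, with probability 2^-k.  Seek the greatest stake at which
# the moral fortune of a player worth 200 francs is not diminished.
def moral_log_fortune(fortune, stake, terms=60):
    return sum(2.0 ** -k * math.log(fortune - stake + 2.0 ** k)
               for k in range(1, terms + 1))

lo, hi = 0.0, 50.0
for _ in range(60):         # bisection: the sum decreases as the stake grows
    mid = (lo + hi) / 2
    if moral_log_fortune(200.0, mid) >= math.log(200.0):
        lo = mid
    else:
        hi = mid
print(lo)                   # about 8.7, a little under nine francs
```

The break-even stake comes out a little under nine francs, agreeing with the statement that Paul ought not reasonably to stake more than nine.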
The disadvantage of games of chance, the advantage of not exposing to the same danger the whole benefit that is expected, and all the similar results indicated by common sense, subsist, whatever may be the function of the physical fortune which for each individual expresses his moral fortune. It is enough that the proportion of the increase of this function to the increase of the physical fortune diminishes in the measure that the latter increases.
CHAPTER V.
CONCERNING THE ANALYTICAL METHODS OF THE CALCULUS OF PROBABILITIES.
The application of the principle which we have just expounded to the various questions of probability requires methods of calculation whose investigation has given birth to several branches of analysis, and especially to the theory of combinations and to the calculus of finite differences.
If we form the product of the binomials, unity plus the first letter, unity plus the second letter, unity plus the third letter, and so on up to n letters, and subtract unity from this developed product, the result will be the sum of the combinations of all these letters taken one by one, two by two, three by three, etc., each combination having unity for a coefficient. In order to have the number of combinations of these n letters taken s by s times, we shall observe that if we suppose these letters equal among themselves, the preceding product will become the nth power of the binomial one plus the first letter; thus the number of combinations of n letters taken s by s times will be the coefficient of the sth power of the first letter in the development of this binomial; and this number is obtained by means of the known binomial formula.
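This count is easily verified by machine. A brief check (Python; a modern illustration, not in the original) enumerates the combinations of n letters directly, compares the tally with the binomial coefficient, and confirms that the developed product, less unity, contains 2^n − 1 combinations in all:

```python
import math
from itertools import combinations

letters = "abcde"                       # n = 5 letters
n = len(letters)
for s in range(1, n + 1):
    # combinations taken s by s times, each with unity for a coefficient
    tally = sum(1 for _ in combinations(letters, s))
    assert tally == math.comb(n, s)     # the known binomial formula

# the developed product less unity: all combinations, of every size
assert sum(math.comb(n, s) for s in range(1, n + 1)) == 2 ** n - 1
```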
Attention must be paid to the respective situations of the letters in each combination, observing that if a second letter is joined to the first it may be placed in the first or second position which gives two combinations. If we join to these combinations a third letter, we can give it in each combination the first, the second, and the third rank which forms three combinations relative to each of the two others, in all six combinations. From this it is easy to conclude that the number of arrangements of which s letters are susceptible is the product of the numbers from unity to s. In order to pay regard to the respective positions of the letters it is necessary then to multiply by this product the number of combinations of n letters s by s times, which is tantamount to taking away the denominator of the coefficient of the binomial which expresses this number.
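The same check extends to arrangements. In the sketch below (Python; a modern illustration), the number of orderings of s letters is the product of the numbers from unity to s, and multiplying the number of combinations by this product gives the number of arrangements of n letters taken s by s:

```python
import math
from itertools import permutations

letters = "abcdef"                      # n = 6 letters
n, s = len(letters), 3

# arrangements: combinations with regard paid to the order of the letters
arrangements = sum(1 for _ in permutations(letters, s))

# the product of the numbers from unity to s is s!
assert arrangements == math.comb(n, s) * math.factorial(s)
# equivalently, the binomial coefficient with its denominator taken away
assert arrangements == math.factorial(n) // math.factorial(n - s)
```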
Let us imagine a lottery composed of n numbers, of which r are drawn at each draw. The probability is demanded of the drawing of s given numbers in one draw. To arrive at this let us form a fraction whose denominator will be the number of all the cases possible, or of the combinations of n letters taken r by r times, and whose numerator will be the number of all the combinations which contain the given s numbers. This last number is evidently that of the combinations of the n less s other numbers taken r less s by r less s times. This fraction will be the required probability, and we shall easily find that it can be reduced to a fraction whose numerator is the number of combinations of r numbers taken s by s times, and whose denominator is the number of combinations of n numbers taken similarly s by s times. Thus in the lottery of France, formed as is known of 90 numbers of which five are drawn at each draw, the probability of drawing a given number is 5⁄90, or 1⁄18; the lottery ought then for the equality of the play to give eighteen times the stake. The total number of combinations two by two of the 90 numbers is 4005, and that of the combinations two by two of 5 numbers is 10. The probability of the drawing of a given pair is then 10⁄4005, or 2⁄801, and the lottery ought to give four hundred and a half times the stake; it ought to give 11748 times for a given tray, 511038 times for a quaternary, and 43949268 times for a quint. The lottery is far from giving the player these advantages.
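All of these payouts follow from the fraction just described. The sketch below (Python; a modern check, not in the original) computes the probability of s given numbers in a draw of the lottery of France, verifies the reduction of the fraction, and recovers each of the figures quoted:

```python
from fractions import Fraction
from math import comb

n, r = 90, 5                            # 90 numbers, 5 drawn at each draw

def chance(s):
    # draws containing the s given numbers, over all possible draws
    p = Fraction(comb(n - s, r - s), comb(n, r))
    assert p == Fraction(comb(r, s), comb(n, s))   # the reduced fraction
    return p

assert chance(1) == Fraction(1, 18)                # 18 times the stake
assert comb(n, 2) == 4005 and comb(r, 2) == 10
assert 1 / chance(2) == Fraction(801, 2)           # 400.5 times the stake
assert 1 / chance(3) == 11748                      # a given tray
assert 1 / chance(4) == 511038                     # a quaternary
assert 1 / chance(5) == 43949268                   # a quint
```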
Suppose in an urn a white balls, b black balls, and after having drawn a ball it is put back into the urn; the probability is asked that in n number of draws m white balls and n - m black balls will be drawn. It is clear that the number of cases that may occur at each drawing is a + b. Each case of the second drawing being able to combine with all the cases of the first, the number of possible cases in two drawings is the square of the binomial a + b. In the development of this square, the square of a expresses the number of cases in which a white ball is twice drawn, the double product of a by b expresses the number of cases in which a white ball and a black ball are drawn. Finally, the square of b expresses the number of cases in which two black balls are drawn. Continuing thus, we see generally that the nth power of the binomial a + b expresses the number of all the cases possible in n draws; and that in the development of this power the term multiplied by the mth power of a expresses the number of cases in which m white balls and n - m black balls may be drawn. Dividing then this term by the entire power of the binomial, we shall have the probability of drawing m white balls and n - m black balls. The ratio of the numbers a and a + b being the probability of drawing one white ball at one draw; and the ratio of the numbers b and a + b being the probability of drawing one black ball; if we call these probabilities p and q, the probability of drawing m white balls in n draws will be the term multiplied by the mth power of p in the development of the nth power of the binomial p + q; we may see that the sum p + q is unity. This remarkable property of the binomial is very useful in the theory of probabilities. But the most general and direct method of resolving questions of probability consists in making them depend upon equations of differences. 
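The development of the binomial can be checked against a direct count of the cases. In the sketch below (Python; a modern illustration, not in the original), an urn of a = 3 white and b = 2 black balls is drawn n = 4 times, the ball being replaced, and each term of the development is compared with an enumeration of all (a+b)^n equally possible sequences:

```python
import math
from fractions import Fraction
from itertools import product

a, b, n = 3, 2, 4          # 3 white balls, 2 black balls, 4 draws, replaced
p, q = Fraction(a, a + b), Fraction(b, a + b)
assert p + q == 1          # the remarkable property of the binomial

for m in range(n + 1):
    # the term multiplied by the mth power of p in (p + q)^n
    term = math.comb(n, m) * p ** m * q ** (n - m)
    # direct count: balls 0..a-1 are white, the rest black, every sequence
    count = sum(1 for draw in product(range(a + b), repeat=n)
                if sum(1 for ball in draw if ball < a) == m)
    assert term == Fraction(count, (a + b) ** n)
```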
Comparing the successive conditions of the function which expresses the probability when we increase the variables by their respective differences, the proposed question often furnishes a very simple relation between these conditions. This relation is what is called an equation of ordinary or partial differences; ordinary when there is only one variable, partial when there are several. Let us consider some examples of this.