Three players of supposed equal ability play together on the following conditions: that one of the first two players who beats his adversary plays the third, and if he beats him the game is finished. If he is beaten, the victor plays against the second until one of the players has defeated consecutively the two others, which ends the game. The probability is demanded that the game will be finished in a certain number n of plays. Let us find the probability that it will end precisely at the nth play. For that the player who wins ought to enter the game at the play n - 1, win it, and then win the following play as well. But if in place of winning the play n - 1 he should be beaten by his adversary, who had just beaten the other player, the game would end at this play. Thus the probability that one of the players will enter the game at the play n - 1 and will win it is equal to the probability that the game will end precisely with this play; and as this player ought to win the following play in order that the game may be finished at the nth play, the probability of this last case will be only one half of the preceding one. This probability is evidently a function of the number n; this function is then equal to half of the same function when n is diminished by unity. This equality forms one of those equations called equations of ordinary finite differences.
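
In modern notation, which is introduced here only for convenience and is not Laplace's own, write \(u_n\) for the probability that the game ends precisely at the \(n\)th play; the equality just described is then the equation of finite differences
\[
u_n = \tfrac{1}{2}\, u_{n-1}.
\]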

We may easily determine by its use the probability that the game will end precisely at a certain play. It is evident that the game cannot end sooner than at the second play; and for this it is necessary that the one of the first two players who has beaten his adversary should beat the third player at the second play; the probability that the game will end at this play is ½. Hence by virtue of the preceding equation we conclude that the successive probabilities of the end of the game are ¼ for the third play, ⅛ for the fourth play, and so on; and in general ½ raised to the power n - 1 for the nth play. The sum of all these powers of ½ is unity less the last of these powers; it is the probability that the game will end at the latest in n plays.
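
With the same notation, and using only the values found above, the solution of this equation which has \(u_2 = \tfrac{1}{2}\) is
\[
u_n = \left(\tfrac{1}{2}\right)^{n-1}, \qquad n \ge 2,
\]
and the probability that the game ends at the latest in \(n\) plays is the geometric sum
\[
u_2 + u_3 + \cdots + u_n = \tfrac{1}{2} + \tfrac{1}{4} + \cdots + \left(\tfrac{1}{2}\right)^{n-1} = 1 - \left(\tfrac{1}{2}\right)^{n-1}.
\]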

Let us consider again the first and more difficult problem which may be solved by probabilities and which Pascal proposed to Fermat to solve. Two players, A and B, of equal skill play together on the conditions that the one who first shall beat the other a given number of times shall win the game and shall take the sum of the stakes at the game; after some throws the players agree to quit without having finished the game: we ask in what manner the sum ought to be divided between them. It is evident that the parts ought to be proportional to the respective probabilities of winning the game. The question is reduced then to the determination of these probabilities. They depend evidently upon the number of points which each player lacks of having attained the given number. Hence the probability of A is a function of the two numbers which we will call indices. If the two players should agree to play one throw more (an agreement which does not change their condition, provided that after this new throw the division is always made proportionally to the new probabilities of winning the game), then either A would win this throw and in that case the number of points which he lacks would be diminished by unity, or the player B would win it and in that case the number of points lacking to this last player would be less by unity. But the probability of each of these cases is ½; the function sought is then equal to one half of this function in which we diminish by unity the first index, plus one half of the same function in which the second index is diminished by unity. This equality is one of those equations called equations of partial finite differences.
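
If, again purely as a notational convenience, we write \(P(a, b)\) for the probability that A wins the stakes when A lacks \(a\) points and B lacks \(b\) points, \(a\) and \(b\) being the two indices of the text, the equality just obtained is the equation of partial finite differences
\[
P(a, b) = \tfrac{1}{2}\, P(a-1,\, b) + \tfrac{1}{2}\, P(a,\, b-1).
\]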

We are able to determine by its use the probabilities of A by setting out from the smallest numbers, and by observing that the probability, or the function which expresses it, is equal to unity when the player A does not lack a single point, that is, when the first index is zero, and that this function becomes zero with the second index. Supposing thus that the player A lacks only one point, we find that his probability is ½, ¾, ⅞, etc., according as B lacks one point, two, three, etc. Generally it is then unity less ½ raised to a power equal to the number of points which B lacks. We will suppose then that the player A lacks two points, and his probability will be found equal to ¼, ½, 11⁄16, etc., according as B lacks one point, two points, three points, etc. We will suppose again that the player A lacks three points, and so on.
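
The successive computation described here is easily carried out mechanically; the following short sketch, offered only as a modern illustration and in no way part of the original text, applies the foregoing equation of partial finite differences and its two boundary conditions, and reproduces the values ½, ¾, ⅞ and ¼, ½, 11⁄16 given above.

    from fractions import Fraction

    def prob_A_wins(a, b):
        """Probability that A wins the stakes when A lacks a points and B lacks b
        points, the players being of equal skill (each throw won with chance 1/2)."""
        if a == 0:
            # A lacks no point: the function is equal to unity.
            return Fraction(1)
        if b == 0:
            # B lacks no point: the function becomes zero with the second index.
            return Fraction(0)
        # The equation of partial finite differences of the text.
        return (Fraction(1, 2) * prob_A_wins(a - 1, b)
                + Fraction(1, 2) * prob_A_wins(a, b - 1))

    # A lacks one point: 1/2, 3/4, 7/8 according as B lacks one, two, three points.
    print([prob_A_wins(1, b) for b in (1, 2, 3)])
    # A lacks two points: 1/4, 1/2, 11/16.
    print([prob_A_wins(2, b) for b in (1, 2, 3)])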

This manner of obtaining the successive values of a quantity by means of its equation of differences is long and laborious. The geometricians have sought methods to obtain the general function of indices that satisfies this equation, so that for any particular case we need only to substitute in this function the corresponding values of the indices. Let us consider this subject in a general way. For this purpose let us conceive a series of terms arranged along a horizontal line so that each of them is derived from the preceding one according to a given law. Let us suppose this law expressed by an equation among several consecutive terms and their index, or the number which indicates the rank that they occupy in the series. This equation I call the equation of finite differences by a single index. The order or the degree of this equation is the difference of rank of its two extreme terms. We are able by its use to determine successively the terms of the series and to continue it indefinitely; but for that it is necessary to know a number of terms of the series equal to the degree of the equation. These terms are the arbitrary constants of the expression of the general term of the series or of the integral of the equation of differences.
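
As an illustration, with an equation chosen merely as an example rather than taken from the text, a linear equation of finite differences by a single index connecting the term \(y_n\) of rank \(n\) with the two terms which precede it may be written
\[
y_n = \alpha\, y_{n-1} + \beta\, y_{n-2};
\]
its two extreme terms differ in rank by two, so that the equation is of the second order, and two terms of the series, say \(y_0\) and \(y_1\), must be known before the rest can be determined; these two terms play the part of the arbitrary constants spoken of above.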

Let us imagine now below the terms of the preceding series a second series of terms arranged horizontally; let us imagine again below the terms of the second series a third horizontal series, and so on to infinity; and let us suppose the terms of all these series connected by a general equation among several consecutive terms, taken as much in the horizontal as in the vertical sense, and the numbers which indicate their rank in the two senses. This equation is called the equation of partial finite differences by two indices.
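
In the same spirit, and again only as an illustrative form, an equation of partial finite differences by two indices connecting a term \(u_{m,n}\) with its neighbours in the horizontal and in the vertical sense might read
\[
u_{m,n} = \alpha\, u_{m-1,n} + \beta\, u_{m,n-1},
\]
of which the equation found above for the problem of the two players, with \(\alpha = \beta = \tfrac{1}{2}\), is a particular case.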

Let us imagine in the same way below the plane of the preceding series a second plane of similar series, whose terms should be placed respectively below those of the first plane; let us imagine again below this second plane a third plane of similar series, and so on to infinity; let us suppose all the terms of these series connected by an equation among several consecutive terms taken in the sense of length, width, and depth, and the three numbers which indicate their rank in these three senses. This equation I call the equation of partial finite differences by three indices.

Finally, considering the matter in an abstract way and independently of the dimensions of space, let us imagine generally a system of magnitudes which should be functions of a certain number of indices, and let us suppose among these magnitudes, their differences relative to these indices, and the indices themselves, as many equations as there are magnitudes; these equations will then be equations of partial finite differences by a certain number of indices.

We are able by their use to determine successively these magnitudes. But in the same manner as the equation by a single index requires that we know a certain number of terms of the series, so the equation by two indices requires that we know one or several lines of series whose general terms should be expressed each by an arbitrary function of one of the indices. Similarly the equation by three indices requires that we know one or several planes of series, the general terms of which should be expressed each by an arbitrary function of two indices, and so on. In all these cases we shall be able by successive eliminations to determine a certain term of the series. But all the equations among which we eliminate being comprised in the same system of equations, all the expressions of the successive terms which we obtain by these eliminations ought to be comprised in one general expression, a function of the indices which determine the rank of the term. This expression is the integral of the proposed equation of differences, and the search for it is the object of integral calculus.

Taylor is the first who, in his work entitled Methodus incrementorum, has considered linear equations of finite differences. He gives the manner of integrating those of the first order with a coefficient and a last term which are functions of the index. In truth the relations of the terms of the arithmetical and geometrical progressions, which have always been taken into consideration, are the simplest cases of linear equations of differences; but they had not been considered from this point of view. It was one of those insights which, attaching themselves to general theories, lead to these theories and are consequently veritable discoveries.