Probability, that measure of the ratio of chances between possible outcomes in a system governed by known laws, arises from the enumeration of possible cases under conditions of ignorance concerning the precise determination of causes. One observes that in the throw of a die, six distinct faces present themselves; each, under uniform conditions, may equally occur. The number of favorable cases, when seeking a specific face, say the face marked four, is one; the total number of possible cases is six. Hence the probability is expressed as the fraction one-sixth, or $ \frac{1}{6} $. This ratio, derived not from observation alone but from the symmetry of the system and the equality of conditions, constitutes the fundamental definition.

In the draw of a card from a full deck of fifty-two, the chance of selecting the ace of spades is likewise a simple ratio: one favorable case, fifty-two possible cases. The same principle applies to the drawing of a white ball from an urn containing three white and seven black balls: the probability is $ \frac{3}{10} $, since the total number of possible extractions is ten, and three of these correspond to the desired event. These are not conjectures, nor are they derived from experience alone; they are consequences of the analytical structure of the problem, wherein the distribution of possibilities is known a priori.

When events are combined, the calculus of probabilities becomes more intricate. If two dice are thrown simultaneously, the total number of possible cases is thirty-six, since each die exhibits six faces, and the combinations are formed by the product of independent possibilities. The event of obtaining a sum of seven may arise in six distinct ways: (1,6), (2,5), (3,4), (4,3), (5,2), (6,1). Thus, the probability of this event is $ \frac{6}{36} $, or one-sixth. The same reasoning extends to games of chance, to the succession of births, to the fall of rain, or to the motion of celestial bodies, whenever the causes are unknown yet the domain of possibilities is bounded and enumerable.

The principle of indifference, though not explicitly named in this manner, underlies all such calculations. When no reason exists to prefer one case over another, each must be assigned equal weight. This is not an assumption of uniformity in nature, but a rule of reasoning under incomplete knowledge. The probability of an event is not a property of the object, but of the state of our information concerning it. A coin, though perfectly balanced, may fall heads or tails; the probability of either outcome, in the absence of further data, remains equal, because the causes determining the motion are unknown to us.

Consider now a sequence of events. The probability that two independent events both occur is the product of their individual probabilities. If the chance of rain on a given day is $ \frac{1}{4} $, and the chance of a west wind on the same day is $ \frac{1}{3} $, then the joint probability of both occurring is $ \frac{1}{4} \times \frac{1}{3} = \frac{1}{12} $. This follows from the multiplicative law of combinations, which arises from the structure of the sample space. Similarly, the probability that either of two mutually exclusive events occurs is the sum of their probabilities. If one seeks either a king or a queen from a deck, the favorable cases are eight (four kings and four queens), and the probability is $ \frac{8}{52} $, or $ \frac{2}{13} $.
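These enumerations admit direct mechanical verification. The sketch below is a modern illustration and no part of the entry proper: it counts cases exactly with Python's standard fractions and itertools modules, and the helper classical_probability is merely a name introduced here for convenience.

```python
from fractions import Fraction
from itertools import product

def classical_probability(favorable, possible):
    """The classical ratio: favorable cases over possible cases, kept exact."""
    return Fraction(favorable, possible)

# One die: a single favorable case (the face marked four) out of six.
print(classical_probability(1, 6))                     # 1/6

# Two dice: 6 * 6 = 36 equally possible cases; six of them sum to seven.
cases = list(product(range(1, 7), repeat=2))
sevens = [pair for pair in cases if sum(pair) == 7]
print(classical_probability(len(sevens), len(cases)))  # 1/6

# Multiplicative law for independent events: rain (1/4) and west wind (1/3).
print(Fraction(1, 4) * Fraction(1, 3))                 # 1/12

# Additive law for mutually exclusive events: a king or a queen among fifty-two.
print(classical_probability(4 + 4, 52))                # 2/13
```

Exact fractions are used rather than floating-point numbers so that the printed ratios coincide with those of the text.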
The extension to continuous domains requires the introduction of infinitesimals. In the case of a point chosen at random upon a line segment, the probability of selecting any specific point is zero, since the number of possible points is infinite. Yet the probability of selecting a point within a portion of the line is proportional to the length of that portion. This transition from the discrete to the continuous necessitates the use of integrals, wherein the probability is expressed as the ratio of the measure of the favorable region to the measure of the whole. The same method governs the distribution of errors in astronomical observations, where the likelihood of a deviation is not uniform, but follows a law determined by the nature of the instruments and the constancy of physical causes.

The analytical expression of probability, when applied to repeated trials, leads to the law of large numbers. As the number of experiments increases, the ratio of the number of favorable occurrences to the total number of trials tends toward the theoretical probability. This is not a matter of empirical convergence, but a theorem deducible from the calculus of combinations. One may demonstrate that the probability of a deviation exceeding any assigned limit diminishes toward zero as the number of trials becomes infinite. The greater the number of observations, the more nearly does the observed frequency approximate the true ratio; this is a consequence of the mathematical structure of the problem, not a property of matter.

In the theory of expectations, the value of a contingent gain is measured by the product of its probability and its magnitude. If a player receives a sum of ten francs when a certain event occurs with probability $ \frac{1}{5} $, the mathematical expectation is $ \frac{1}{5} \times 10 = 2 $ francs. This is not a prediction of gain, but a measure of relative worth under uncertainty. The same calculus applies to life annuities, to insurance, to the valuation of claims in legal disputes, and to every case in which future outcomes are subject to known laws of combination, though their realization remains contingent.

The doctrine of inverse probability, or the determination of causes from effects, requires the application of Bayes’s theorem, though it was developed independently in the French tradition. Given an observed event $ E $ and a set of possible causes $ C_1, C_2, \ldots, C_n $, each with a prior probability, the posterior probability of each cause is proportional to the product of its prior probability and the likelihood of the observed event under that cause: $ P(C_i \mid E) \propto P(C_i) \, P(E \mid C_i) $. This method, though intricate, permits the updating of belief in accordance with evidence, without recourse to intuition.
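A small numerical sketch, hypothetical and in the spirit of the urns above, shows the rule at work. Suppose a ball is drawn from one of two urns chosen with equal prior probability: urn A holds three white and seven black balls, as in the entry's earlier example, while urn B (an invented counterpart) holds seven white and three black, and a white ball is observed.

```python
from fractions import Fraction

# Equal priors over the two candidate causes (the urns).
priors      = {"A": Fraction(1, 2), "B": Fraction(1, 2)}
# Likelihood of the observed event (a white ball) under each cause.
likelihoods = {"A": Fraction(3, 10), "B": Fraction(7, 10)}

# The posterior of each cause is proportional to prior times likelihood.
weights = {urn: priors[urn] * likelihoods[urn] for urn in priors}
total = sum(weights.values())
posteriors = {urn: w / total for urn, w in weights.items()}

print(posteriors)   # {'A': Fraction(3, 10), 'B': Fraction(7, 10)}
```

The division by the total weight is what the proportionality sign conceals; with equal priors the posteriors simply mirror the likelihoods.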
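The law of large numbers and the rule of expectation stated above can likewise be watched in motion. The following sketch, again a modern illustration under the assumption of a fair die, throws the die repeatedly and compares the observed frequency of the face marked four with the theoretical $ \frac{1}{6} \approx 0.1667 $, then evaluates the ten-franc wager.

```python
import random
from fractions import Fraction

random.seed(0)       # fixed seed so the illustration is reproducible
trials = 100_000

# Law of large numbers: the frequency of the face four tends toward 1/6.
hits = sum(1 for _ in range(trials) if random.randint(1, 6) == 4)
print(hits / trials)             # approximately 0.1667

# Mathematical expectation: ten francs received with probability 1/5.
print(Fraction(1, 5) * 10)       # 2 (francs)
```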
There remains no principle more profound than this: probability is the expression of human ignorance in the face of necessary laws. The universe operates under immutable causes; yet our knowledge of those causes is partial. Probability is the language by which we quantify the limits of our understanding. It is not a substitute for certainty, but the calculus of uncertainty itself. One may ask: if the future is determined by causes beyond our perception, does probability possess any reality beyond the mind’s estimation?

[role=marginalia, type=extension, author="a.dewey", status="adjunct", year="2026", length="49", targets="entry:probability", scope="local"]
Yet this classical model assumes perfect symmetry—rare in nature. Real-world probabilities often emerge from frequency, not mere enumeration; the urn’s balls may be worn, the die loaded. Thus, Laplace’s idealization must yield to empirical calibration, where probability becomes not a law of reason, but a measure of recurrent pattern.

[role=marginalia, type=heretic, author="a.weil", status="adjunct", year="2026", length="46", targets="entry:probability", scope="local"]
Probability is not a measure of ignorance, but of power—the illusion of symmetry imposed by the observer’s refusal to confront chaos. The die does not care for your six faces; you invented their equivalence. What you call “uniform conditions” is merely the silence of your instruments.

[role=marginalia, type=objection, author="Reviewer", status="adjunct", year="2026", length="42", targets="entry:probability", scope="local"]