Statistics, that systematic art of reasoning from observed events to the probabilities of unseen causes, arises from the necessity to judge uncertain outcomes with precision. A man observes that, in ten trials, a certain event occurs seven times; yet he cannot conclude with certainty that it will occur seven times in the next ten. He must instead consider the likelihood of various underlying conditions—each a possible cause—that might produce such a result. The quantity observed is not the truth itself, but a clue to the hidden state of things.

First, let it be supposed that a bag contains white and black balls in an unknown proportion, from which balls are drawn at random. After ten draws, seven white balls appear. The observer does not know the true proportion within the bag, nor can he infer it directly. He must entertain a plurality of hypotheses: that the bag contains seven-tenths white, or six-tenths, or nine-tenths. Each of these proportions, though distinct, carries a certain degree of credibility, given the evidence observed. The initial credence assigned to each hypothesis—that is, the prior probability—is not arbitrary, but grounded in the nature of the problem: all proportions are equally plausible until evidence intervenes.

Then, by the rule of conditional probability, the observer updates his judgment. The likelihood of observing seven white balls in ten draws, given any particular proportion of white balls, is computed according to the binomial law. This likelihood, multiplied by the prior, yields the proportional weight of each hypothesis. These weights, divided by their sum, form the posterior distribution, which now represents the observer’s improved understanding of the bag’s composition. He does not assert that the true proportion is seven-tenths; he asserts that, given the evidence, the hypothesis of seven-tenths is more probable than others. But the process does not cease with one set of observations.
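The updating just described admits a brief numerical sketch. The discrete grid of eleven candidate proportions, the uniform prior, and the helper names below are illustrative assumptions, not part of the essay:

```python
from math import comb

def binomial_likelihood(k, n, p):
    """Probability of k white balls in n draws, given proportion p (binomial law)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def posterior(k, n, hypotheses, prior):
    """Weigh each hypothesis by likelihood times prior, then divide by the sum."""
    weights = [binomial_likelihood(k, n, p) * pr
               for p, pr in zip(hypotheses, prior)]
    total = sum(weights)
    return [w / total for w in weights]

# Candidate proportions 0.0, 0.1, ..., 1.0, all equally plausible at first.
hypotheses = [i / 10 for i in range(11)]
prior = [1 / 11] * 11

# Seven white balls observed in ten draws.
post = posterior(7, 10, hypotheses, prior)
best = hypotheses[post.index(max(post))]  # most probable hypothesis: 0.7
```

The observer does not assert that the true proportion is 0.7; the posterior merely assigns that hypothesis the greatest weight among those entertained.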
Should a second set of ten draws yield five white balls, the posterior distribution from the first trial becomes the prior for a new calculation. The prior is not discarded as mere guesswork; it is refined. In this manner, knowledge grows not by the accumulation of facts alone, but by the continual revision of belief in light of new evidence. The strength of the inference lies not in the number of observations, but in the logical relation between observation and hypothesis.

The observer may be tempted to suppose that the most frequent outcome—seven out of ten—must be nearest the truth. Yet this is an error. A set of observations, however numerous, does not reveal the cause; it only constrains the possible causes. A fair die, thrown ten times, may yield sixes five times, yet its fairness remains unproven. The true question is not what has been seen, but what is most likely to have produced what was seen.

The mathematician, therefore, does not merely count, nor does he average. He weighs possibilities. He considers not only what is probable, but what is rendered more or less probable by the evidence. The measure of uncertainty is not the range of outcomes, nor the deviation from an average, but the distribution of belief across possible causes. The more evidence accumulated, the more sharply the posterior distribution narrows—not to certainty, but to a heightened confidence in a narrower set of hypotheses.

It must be understood that the probabilities computed are not properties of the world as it is, but of the mind’s judgment as it ought to be, given what it knows. The world may contain hidden regularities; the mind, through calculation, seeks to approximate them. Yet the mind is never free of ignorance; its conclusions are always provisional, always conditional.

What then is the true proportion of white balls in the bag? One may compute a most probable value, or an expected value, but neither is the truth.
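The continual revision described above, in which yesterday's posterior serves as today's prior, can be sketched in the same spirit. The grid of hypotheses and the `update` helper are illustrative assumptions; the narrowing toward six-tenths after the second batch follows from the combined evidence of twelve white balls in twenty draws:

```python
from math import comb

def update(k, n, hypotheses, prior):
    """One Bayesian revision: given k white balls in n draws, turn prior into posterior."""
    weights = [comb(n, k) * p**k * (1 - p)**(n - k) * pr
               for p, pr in zip(hypotheses, prior)]
    total = sum(weights)
    return [w / total for w in weights]

hypotheses = [i / 10 for i in range(11)]
belief = [1 / 11] * 11                      # all proportions equally plausible at first

belief = update(7, 10, hypotheses, belief)  # first set: seven white of ten
belief = update(5, 10, hypotheses, belief)  # second set: five white of ten

# The most probable hypothesis now reflects twelve white in twenty draws.
best = hypotheses[belief.index(max(belief))]  # 0.6
```

Note that applying the two updates in sequence gives the same posterior as a single update on the pooled data; this order-independence is what permits belief to be refined rather than recomputed from scratch.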
The truth remains hidden, and the best judgment is but the most rational stance one can take, given the evidence and the laws of chance. Is it possible, then, that all knowledge is but an evolving posterior?

[role=marginalia, type=extension, author="a.dewey", status="adjunct", year="2026", length="46", targets="entry:statistics", scope="local"]
The true genius of statistics lies not in counting, but in quantifying ignorance: the distribution of credibilities across hypotheses reveals more about the observer’s epistemic posture than the data alone. Probability, thus, is not merely a property of things, but of our warranted belief in them.

[role=marginalia, type=clarification, author="a.kant", status="adjunct", year="2026", length="42", targets="entry:statistics", scope="local"]
The observed frequency is not the thing itself, but a phenomenon conditioned by our cognitive faculty; thus statistics, though empirical, must be grounded in a priori principles of judgment—only then can probability be more than habit, and become a rule of reason.

[role=marginalia, type=objection, author="Reviewer", status="adjunct", year="2026", length="42", targets="entry:statistics", scope="local"]