Probability (Bayes) [entry:probability-bayes]

Bayesian probability, the method of updating conjectures concerning unknown quantities by means of observed events, proceeds from a prior probability distribution over possible states, and adjusts this distribution in light of new evidence according to a precise rule of proportionality. Let there be a quantity whose true value is unknown, such as the proportion of white balls in an urn containing a fixed number of white and black balls in unknown proportion. Suppose we assume, prior to any observation, that each possible proportion is equally likely: a hypothesis of uniform ignorance. Then, having drawn a sequence of balls, some white and some black, we may compute the probability that the true proportion lies within any given interval, by weighing the likelihood of the observed draws under each possible proportion.

First, we define the prior probability as the initial measure of belief assigned to each possible value of the unknown quantity. This prior is not derived from observation, but from the conditions of the problem itself. In the case of the urn, if we entertain no prior reason to prefer one proportion over another, we assign equal weight to all proportions between zero and unity.

Next, having drawn n balls, of which m are white, we consider the chance that such a result would occur under each hypothetical proportion p. This chance, termed the likelihood, is given for a particular sequence of draws by the binomial expression p^m · (1−p)^(n−m). It is not the probability of p, but the probability of the observed data given p.

Then, by the rule of compound probability, the joint probability of the proportion p and the observed data is the product of the prior and the likelihood. This joint measure, taken across all possible values of p, forms a distribution that reflects both our initial assumptions and the weight of the evidence. But the quantity we seek is not this joint measure; it is the probability of p given the data, what is called the posterior probability.
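The urn calculation described above can be sketched numerically. This is a minimal illustration under stated assumptions: the discrete grid of 101 candidate proportions, the function name, and the specific draw counts are inventions for the example, not part of the entry.

```python
# Sketch: posterior over the urn's white-ball proportion p, assuming a
# uniform prior on a discrete grid of candidate values (an illustration,
# not the entry's own notation).

def posterior(m, n, grid_size=101):
    """Posterior weights for p after observing m white balls in n draws,
    starting from a uniform prior over grid_size equally spaced values."""
    ps = [i / (grid_size - 1) for i in range(grid_size)]
    # Prior: equal weight on every candidate proportion (uniform ignorance).
    prior = [1.0 / grid_size] * grid_size
    # Likelihood of the observed sequence under each p: p^m * (1-p)^(n-m).
    likelihood = [p**m * (1 - p)**(n - m) for p in ps]
    # Joint measure: prior times likelihood, at each candidate p.
    joint = [pr * lk for pr, lk in zip(prior, likelihood)]
    # Total probability of the data: the normalizing constant.
    total = sum(joint)
    return ps, [j / total for j in joint]

# Nine white balls in ten draws, as in the entry's running example.
ps, post = posterior(m=9, n=10)
# The posterior mode should lie near unity, without excluding other values.
best = ps[max(range(len(post)), key=post.__getitem__)]
```

The division by `total` is the normalization the entry insists upon: without it the result is merely proportional, not a probability.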
To obtain this, we divide the joint probability at each point by the total probability of the observed data, summed over all possible values of p. This division ensures that the resulting distribution integrates to unity, as any probability distribution must. It is not sufficient to compute this value numerically; we must also recognize the necessity of the denominator, which serves as a normalizing constant and which, though often omitted in practical calculation, is essential to the logical integrity of the method. Without it, the result would not be a proper probability, but merely a proportional expression. The posterior, therefore, is proportional to the product of the prior and the likelihood, and equal to that product divided by the total probability of the evidence.

Observe that this procedure does not assert the truth of any proportion, nor does it claim to discover the hidden state of nature. It offers only a measure of relative credibility among possible states, revised in light of experience. Thus, if the first ten draws yield nine white balls, the posterior places greater weight upon proportions near unity than upon those near zero; but it does not exclude the latter, nor does it affirm the former as certain. The method permits revision in perpetuity: with each new draw, the old posterior becomes the new prior, and the process repeats. The weight of evidence accumulates, and the distribution narrows, not by conviction, but by the arithmetic of chance.

It is to be understood, however, that this rule applies only when the prior is well defined and the likelihood correctly modeled. If the prior is ill chosen (if, for example, it assigns zero probability to the true state), the posterior can never correct that error. Likewise, if the likelihood is misapplied, if the draws are not independent, or if the urn is altered between trials, the result will be misleading.
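The perpetual revision described above, in which each posterior serves as the next prior, can be sketched as a draw-by-draw update. The grid, function name, and draw sequence are illustrative assumptions; the point is that updating one observation at a time yields the same distribution as updating on the whole batch at once.

```python
# Sketch: sequential Bayesian updating on a discrete grid of candidate
# proportions (names and grid size are assumptions for the example).

def update(belief, ps, white):
    """One Bayes step: multiply the current belief by the single-draw
    likelihood (p for a white ball, 1-p for a black one), renormalize."""
    lk = [p if white else (1 - p) for p in ps]
    joint = [b * l for b, l in zip(belief, lk)]
    total = sum(joint)  # probability of this draw under the current belief
    return [j / total for j in joint]

grid = 101
ps = [i / (grid - 1) for i in range(grid)]
belief = [1.0 / grid] * grid            # uniform prior
draws = [True] * 9 + [False]            # nine white, one black
for w in draws:
    belief = update(belief, ps, w)      # old posterior becomes new prior
```

After the loop, `belief` is proportional to p^9 · (1−p), exactly what a single batch update on all ten draws would give; the order of evidence does not matter here because the draws are modeled as independent.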
The method is not a guide to belief, but a calculus of coherence: it cannot make sense of what is not rendered in mathematical terms. Moreover, the prior itself must be justified by the conditions of the problem. To assume uniformity in the absence of knowledge is not an assertion of truth, but a declaration of epistemic neutrality. It is the most cautious starting point, and the one most consonant with the spirit of inquiry. Yet even this assumption may be questioned: if the urn is known to be drawn from a larger collection of urns, each with its own distribution of proportions, then the prior ought to reflect that fact. The choice of prior is not arbitrary, but constrained by the context of the question.

The rule bears a close resemblance to the doctrine of inverse probability, as the method was once termed, and it appears in the posthumous work of Thomas Bayes, published in 1763. It was not originally intended as a general theory of reasoning, but as a solution to a specific problem in the doctrine of chances: how to infer the probability of a cause from its effects. The theorem, expressed symbolically as P(A|B) = P(B|A)·P(A)/P(B), remains the formal core of the method, where P(A) is the prior probability of a hypothesis, P(B|A) the likelihood of the data given that hypothesis, and P(A|B) the posterior probability of the hypothesis given the data.

The power of this method lies not in its novelty, but in its consistency. It does not demand certainty; it demands only that we weigh what we know against what we observe, and adjust our measure accordingly. It does not promise truth, but a rational progression from ignorance to informed conjecture. It does not resolve doubt, but gives structure to it. And yet we must ask: if all prior assumptions are themselves subject to revision, and if every posterior becomes a new prior, then where does reasoning begin? Where does it end?
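The theorem P(A|B) = P(B|A)·P(A)/P(B) can be exercised on a toy two-hypothesis case. The specific numbers (two urns, 90% and 10% white, equal prior odds) are illustrative assumptions, not drawn from the entry.

```python
# Sketch: Bayes' theorem on two hypotheses. Hypothesis A: the urn holds
# 90% white balls; not-A: it holds 10%. Each is equally likely a priori,
# and the evidence B is a single white draw. (Toy numbers, assumed.)

p_A = 0.5                  # prior P(A)
p_B_given_A = 0.9          # likelihood of a white draw under A
p_B_given_notA = 0.1       # likelihood of a white draw under not-A

# Total probability of the evidence: the normalizing denominator P(B).
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)

# Posterior P(A|B) = P(B|A) * P(A) / P(B).
p_A_given_B = p_B_given_A * p_A / p_B
# -> 0.45 / 0.5 = 0.9: one white draw shifts belief strongly toward A,
#    without affirming it as certain.
```

Note that the denominator P(B) is itself computed from the prior and the likelihoods, which is why it is often left implicit in practice yet indispensable to the result being a proper probability.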
[role=marginalia, type=clarification, author="a.kant", status="adjunct", year="2026", length="42", targets="entry:probability-bayes", scope="local"] The prior is not mere ignorance, but a rational postulate of equality where no reason favors one possibility over another—yet this very assumption smuggles in an implicit judgment of symmetry, which must be justified a priori, lest probability degenerate into subjective fancy.

[role=marginalia, type=clarification, author="a.freud", status="adjunct", year="2026", length="50", targets="entry:probability-bayes", scope="local"] The “uniform ignorance” is a seductive fallacy—what we call prior indifference masks unconscious wish-fulfillment. The very choice of prior betrays repressed expectations; Bayesian updating does not escape the unconscious, it ritualizes it. Probability here is not objective, but a displaced libidinal economy—belief calibrated to soothe the anxiety of the unknown.

[role=marginalia, type=objection, author="Reviewer", status="adjunct", year="2026", length="42", targets="entry:probability-bayes", scope="local"]