Probability (Popper) [entry:probability-popper]

The propensity interpretation of probability arises not as a definition but as a requirement for scientific reasoning. It emerges when we confront the limits of frequency-based accounts and seek a way to speak meaningfully about single events. You can notice this when a meteorologist says there is a 70 percent chance of rain tomorrow. That number does not mean it rained 70 times in 100 similar days; no such set of identical days exists. Yet the statement is not meaningless. It must carry some objective content, and that content is not a count of outcomes. It is a disposition, a tendency, built into the physical situation.

Consider a single fair coin, flipped once. What does it mean to say it has a 50 percent probability of landing heads? The frequency interpretation cannot answer: it demands repetition, and the coin will be flipped only once. The propensity interpretation says that the coin, in its physical structure, its weight distribution, the force applied, the air resistance, and the surface it lands on, possesses a certain tendency to produce heads. This tendency is real. It is not a belief or a guess. It is a feature of the world, like mass or charge, and it does not vanish because the experiment is not repeated.

This idea does not make probability easier; it makes it more demanding. Probability must be tied to conditions that can be specified, measured, and tested. A prediction that cannot be falsified is not scientific. A forecast that says “it might rain” without specifying the physical state of the atmosphere is empty. But a forecast grounded in atmospheric pressure gradients, humidity levels, wind patterns, and historical data, each of which can be observed and questioned, carries weight. It invites scrutiny. It can be wrong. That is its strength.

You can test this. You can design an experiment in which two identical setups differ in only one variable.
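Before that experiment, the single-flip claim can itself be put in code. The sketch below is a toy model, not physics: the function names and the linear bias formula are assumptions made for illustration. Its point is that the propensity is computed from a physical parameter of the setup, never from observed frequencies.

```python
import random

def heads_propensity(mass_offset: float) -> float:
    """Toy model: the tendency toward heads is fixed by a physical
    parameter (a center-of-mass offset), not by past outcomes.
    The linear formula is an illustrative assumption."""
    return min(1.0, max(0.0, 0.5 + mass_offset))

def flip_once(mass_offset: float, rng: random.Random) -> str:
    """A single event. The propensity exists whether or not
    the flip is ever repeated."""
    return "heads" if rng.random() < heads_propensity(mass_offset) else "tails"

balanced, imbalanced = 0.0, 0.1
print(heads_propensity(balanced))    # a property of the setup: 0.5
print(heads_propensity(imbalanced))  # shifted by the physical imbalance
print(flip_once(imbalanced, random.Random(7)))  # one flip, heads or tails
```

Varying `mass_offset` while holding everything else fixed is exactly the one-variable design just described.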
You flip two coins under precisely the same conditions, except that one has a slight imbalance. You observe the outcomes. You do not count frequencies to declare the probability; you infer the propensity from the causal structure. You ask: what physical features make one coin more likely to land heads? You refine your model. You eliminate confounding factors. You repeat the test under varying conditions. Each test does not confirm the probability. It may refute it. That is the point.

Probability, in this view, is not a measure of ignorance. It is not a reflection of our lack of knowledge. It is a property of the system under investigation. The die, the radioactive atom, the weather system: each has an intrinsic disposition to produce certain outcomes under certain conditions. This disposition is not mysterious. It is physical, subject to investigation, and vulnerable to error. A scientist who claims a 95 percent probability of a particle decay must be able to specify the experimental apparatus, the energy state, the shielding, and the measurement interval. If the observed decay rate diverges persistently from the prediction, the propensity must be revised. Not the belief; the physical model.

This demands rigor. You cannot say a drug has a 60 percent success rate unless you can describe the population, the dosage, the timing, the diagnostic criteria, and the conditions of observation. You cannot attribute success to the drug alone. You must isolate the causal factors. You must allow for the possibility that the observed outcome was due to contamination, measurement error, or selection bias. The propensity interpretation does not shield probability from criticism; it exposes it. It requires that every probabilistic claim be framed as a hypothesis open to refutation.

Think of a weather balloon released into the upper atmosphere. It carries sensors. It transmits data. It does not say, “I expect to find cool air.” It records temperature, pressure, wind speed.
Each reading is a single event. Yet we interpret these readings as evidence of a larger system. We infer tendencies. We build models. We say: under these conditions, the likelihood of turbulence is high. That likelihood is not a summary of past flights. It is a claim about the physical state of the air at this moment. It can be tested again. It can be contradicted. It must be. A claim that cannot be tested in this way is not scientific.

A doctor who says, “There is a 30 percent chance this tumor is malignant,” must specify the criteria: size, shape, density, cellular markers, patient history. If those criteria change, the probability changes. The propensity is tied to the conditions. If the criteria are vague, if the doctor says, “It just feels like a 30 percent chance,” then the statement is not scientific. It is opinion dressed as measurement.

Probability, then, must be embedded in a framework of testable conditions. It cannot float free as a number. It must be anchored in observable, replicable, and falsifiable circumstances. This is why the propensity interpretation is not a solution to a mathematical puzzle. It is a criterion for scientific integrity. It demands that we treat probability not as a tool for prediction, but as a target for criticism.

You can notice this in experimental design. A scientist who runs one trial and declares victory is not practicing science. A scientist who runs ten trials, varies parameters systematically, and invites others to replicate the setup under identical conditions is. The probability assigned is not a conclusion. It is a hypothesis. It is exposed. It is vulnerable. It must be.

We do not assign probability to events because we are uncertain. We assign it because we have constructed a model of the world that includes dispositions. We measure those dispositions with precision. We hold them accountable. We do not seek certainty; we seek refutation. We do not desire confirmation; we desire the possibility of error.
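This posture, in which a probabilistic claim is exposed to refutation rather than confirmed by counting, can be sketched as an exact binomial test. Everything here is a hedged illustration: the helper names, the hidden 0.6 bias, and the 0.05 threshold are assumptions of the sketch, not part of Popper's account.

```python
import math
import random

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n trials with propensity p."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def two_sided_pvalue(k: int, n: int, p: float) -> float:
    """Exact two-sided binomial test: total probability of every
    outcome no more likely than the one observed."""
    probs = [binom_pmf(i, n, p) for i in range(n + 1)]
    observed = probs[k]
    return sum(q for q in probs if q <= observed * (1 + 1e-9))

# Hypothesis under test: this coin's propensity toward heads is 0.5.
# The world (unknown to the tester): a slight imbalance makes it 0.6.
rng = random.Random(42)
n = 1000
heads = sum(rng.random() < 0.6 for _ in range(n))
pval = two_sided_pvalue(heads, n, 0.5)

# A small p-value confirms nothing; it says the claimed physical
# model has met the world's resistance and must be revised.
print(f"k={heads}/{n}, p-value={pval:.3e}")
print("propensity 0.5 refuted" if pval < 0.05 else "not refuted by this run")
```

The design choice worth noticing: the hypothesis is a claim about the coin's physical constitution, and the test can only refute it, never establish it.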
This is not comforting. It is not reassuring. It is exacting. It requires us to admit that even our best models may be wrong, and that even our most precise measurements may mislead. But that is the condition of knowledge. The only way to move forward is to make our claims sharp enough to break.

So when you hear a scientist say the probability is 82 percent, ask: what physical conditions make this so? What could make it false? What would count as evidence against it? If no such conditions can be named, the number is not science. It is noise. The propensity interpretation does not tell us what probability is. It tells us what probability must be to matter. It must be tied to the real structure of the world. It must be open to the world’s resistance. And yet: how do we know when a propensity is real, and when it is a fiction we have imposed?

[role=marginalia, type=clarification, author="a.kant", status="adjunct", year="2026", length="38", targets="entry:probability-popper", scope="local"] The propensity interpretation, though intuitive, risks reifying chance as a metaphysical power. Probability must remain a judgment grounded in reason’s a priori conditions, not an ontological property of things-in-themselves. Even single events are judged under universalizable rules of experience.

[role=marginalia, type=objection, author="a.simon", status="adjunct", year="2026", length="41", targets="entry:probability-popper", scope="local"] Yet propensity risks reifying metaphysical dispositions where no empirical trace exists. If no repeatable conditions obtain, how do we verify or falsify the “tendency”? Popper’s solution evades Hume’s problem of induction by substituting one unobservable for another: ontological weight without operational grip.

[role=marginalia, type=objection, author="Reviewer", status="adjunct", year="2026", length="42", targets="entry:probability-popper", scope="local"]