Approximation, the art of representing an inaccessible quantity by a more tractable one, lies at the heart of both mathematics and computation. From the earliest attempts to express the ratio of a circle's circumference to its diameter as a rational number, to the modern theory of computable functions, the notion that an exact value may be approached arbitrarily closely has guided the development of analytical methods, numerical schemes, and logical frameworks. In its broadest sense, approximation concerns the relationship between a target object—be it a number, a function, a logical statement, or a physical measurement—and a surrogate that is simpler to manipulate yet retains essential features of the original.

Early history.

The Greeks recognised the necessity of approximation in geometry, as evidenced by the method of exhaustion employed by Archimedes to bound the area of a parabolic segment. The calculus of Newton and Leibniz formalised the idea of approaching a limit through infinite processes, thereby providing a systematic means of generating successive approximations to integrals and solutions of differential equations. Eighteenth-century analysts, among them the Bernoullis and Euler, exploited series expansions that permitted the computation of transcendental quantities such as π and e to any desired degree of accuracy, a practice that later became a cornerstone of numerical analysis.

The nineteenth century saw the emergence of rigorous error bounds. Gauss introduced the method of least squares, providing a principled way to approximate data by an optimal polynomial in the sense of minimising the sum of squared deviations. Cauchy and his successors developed criteria for the convergence of series, thereby ensuring that approximations derived from infinite expansions would indeed tend toward the intended limit.
Hilbert’s formalisation of mathematics, with its emphasis on completeness and consistency, prompted a deeper inquiry into whether every mathematical object could be captured by a finite description, a question that would later crystallise in the theory of computation. It was within this logical tradition that the concept of approximation acquired a precise computational interpretation.

By the 1930s, the work of Gödel, Church, and Turing revealed that certain problems are undecidable: no algorithm can determine, in all cases, the truth of every statement in arithmetic. Yet even undecidable problems admit approximations in the sense that partial information can be obtained algorithmically. Turing’s own notion of a computable number—defined as the limit of a computable sequence of rational numbers—exemplifies this principle. A real number is deemed computable if there exists a Turing machine which, given any positive integer n, produces a rational approximation whose distance from the target is less than 2⁻ⁿ. Thus the very definition of a computable real hinges upon the existence of an effective approximation process.

The significance of this definition extends beyond pure mathematics. In practice, any digital representation of a real quantity is finite; it consists of a binary expansion truncated after a certain number of bits. The error introduced by truncation is precisely an approximation error, and the theory of computable analysis supplies bounds on how such errors propagate through algorithmic operations. For instance, when a Turing machine executes an algorithm to solve a differential equation numerically, the output at each step is an approximation whose accuracy depends on the step size and the stability of the method. The analysis of these errors draws upon the same ideas of convergence and error estimation that were cultivated in classical analysis.

Approximation also plays a pivotal role in the theory of algorithms.
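Turing's definition of a computable real, described earlier, can be made concrete in a few lines. The sketch below is a minimal illustration, not a canonical implementation: the function name `approx_e` is hypothetical, and the stopping rule uses the standard factorial-tail bound for the exponential series.

```python
from fractions import Fraction

def approx_e(n):
    """Return a rational q with |e - q| < 2**-n.

    Uses e = sum of 1/k! for k = 0, 1, 2, ...; the tail after the
    terms 1/0! .. 1/(k-1)! is bounded by 2/k!, so we stop as soon
    as that bound drops below 2**-n.
    """
    total = Fraction(0)
    term = Fraction(1)  # term holds 1/k!
    k = 0
    while True:
        total += term        # total = 1/0! + ... + 1/k!
        k += 1
        term = term / k      # advance to 1/k!
        if 2 * term < Fraction(1, 2 ** n):
            return total     # remaining tail < 2*term < 2**-n
```

In the role of the Turing machine in the definition, `approx_e` halts for every n with a rational whose error is provably below 2⁻ⁿ; exact `Fraction` arithmetic keeps the bound free of rounding effects.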
Many problems admit exact solutions only at prohibitive computational cost, while approximate solutions can be obtained efficiently. The paradigm of polynomial‑time approximation schemes (PTAS) illustrates this: for certain optimisation problems, an algorithm can produce a solution whose value lies within a factor (1+ε) of the optimum, for any prescribed ε>0, in time polynomial in the size of the input. Though the term “approximation algorithm” is a twentieth‑century coinage, the underlying principle—accepting a bounded error in exchange for tractability—has antecedents in the work of Newton, who employed successive linear approximations to solve nonlinear equations, and in the iterative relaxation methods introduced by Gauss and Seidel for solving large systems of linear equations.

In logic, approximation manifests through the notion of “approximate reasoning,” wherein a statement is accepted as true to a certain degree of confidence rather than absolutely. This idea is reflected in the development of fuzzy logic and probabilistic logics, yet even within classical deductive systems one encounters approximation in the form of proof‑theoretic cut‑elimination procedures, which progressively simplify derivations while preserving provability. The concept of a “proof approximation” can be formalised as a finite fragment of an otherwise infinite proof, a notion that aligns with the view of a proof as a potentially infinite constructive process whose finite stages are accessible to computation.

The study of approximation in numerical methods is characterised by a taxonomy of techniques: interpolation, extrapolation, series truncation, quadrature, and iterative refinement. Interpolation replaces a complex function by a polynomial that matches the function at a finite set of points, thereby providing a convenient surrogate for evaluation.
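Interpolation as just described can be sketched via the Lagrange form of the interpolating polynomial. This is a minimal illustration under the assumption of distinct nodes; the function name is hypothetical and error handling is omitted.

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the unique polynomial of degree len(xs)-1 that passes
    through the points (xs[i], ys[i]) at the abscissa x.

    Builds the Lagrange basis polynomials L_i(x), each equal to 1 at
    xs[i] and 0 at every other node, and sums ys[i] * L_i(x).
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)  # L_i vanishes at xs[j]
        total += yi * basis
    return total
```

Because the interpolant through n points of a polynomial of degree below n is that polynomial itself, interpolating x² at three nodes reproduces it exactly; for general smooth functions the deviation is governed by the function's higher derivatives.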
Extrapolation, by contrast, attempts to infer the limit of a sequence from its finite initial segment, a method known since the work of Richardson in the early twentieth century. Both strategies rely on error analysis that quantifies the deviation between the true function and its approximant, often expressed in terms of derivatives or smoothness conditions.

Series truncation is perhaps the most elementary form of approximation. The Taylor series of a smooth function furnishes a polynomial whose remainder term can be bounded by the next derivative evaluated at an intermediate point. By selecting an appropriate order of truncation, the analyst ensures that the polynomial approximates the function within a prescribed tolerance over a given interval. In computational practice, such series are evaluated term by term by a Turing machine, each partial sum constituting a rational approximation to the infinite sum.

Quadrature rules, such as the trapezoidal rule and Simpson’s rule, approximate definite integrals by weighted sums of function values. The error of these rules can be expressed in terms of higher derivatives of the integrand, again linking the accuracy of the approximation to the smoothness of the underlying function. Modern adaptive quadrature algorithms refine the partition of the integration interval dynamically, allocating more computational effort where the integrand varies rapidly, thereby achieving a prescribed error bound with optimal efficiency.

Iterative refinement, exemplified by the Newton–Raphson method, generates a sequence of approximations to a root of a nonlinear equation by repeatedly applying a linearisation step. Convergence analysis shows that, under suitable conditions, the error decreases quadratically, meaning that each iteration roughly doubles the number of correct digits.
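The Newton–Raphson iteration just described admits a compact sketch for the square root: taking f(x) = x² − a, the linearisation step x ← x − f(x)/f′(x) reduces to the familiar update x ← (x + a/x)/2. The function name and parameter defaults below are illustrative only.

```python
def newton_sqrt(a, tol=1e-12, max_iter=50):
    """Approximate the square root of a > 0 by Newton's method.

    Each step replaces x with the root of the tangent line to
    f(x) = x**2 - a at the current iterate:
        x_next = x - f(x)/f'(x) = (x + a/x) / 2
    Near the root the error roughly squares at each step.
    """
    x = a if a >= 1 else 1.0      # any positive starting guess suffices
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)
        if abs(x_next - x) < tol:  # successive iterates have settled
            return x_next
        x = x_next
    return x
```

Printing the successive iterates for a = 2 exhibits the quadratic behaviour: the number of correct digits roughly doubles with each pass, in line with the convergence analysis above.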
Such rapid convergence is a hallmark of well‑designed approximation schemes, and the underlying analysis draws upon the same fixed‑point theorems that underlie the theory of computable functions.

In the realm of discrete mathematics, approximation arises in the study of combinatorial optimisation. Many such problems, for example the travelling‑salesman problem, are NP‑hard; no polynomial‑time algorithm is known that yields exact solutions. Nevertheless, heuristic methods—such as greedy constructions, local search, and branch‑and‑bound—produce solutions that are provably within a fixed factor of the optimum for certain problem classes. The existence of these approximation guarantees rests upon combinatorial arguments that bound the worst‑case deviation between the heuristic output and the optimal configuration.

A further dimension of approximation concerns the representation of real numbers in digital computers. Fixed‑point and floating‑point formats encode numbers with a finite mantissa and exponent, thereby introducing round‑off error. The IEEE floating‑point standard, though a twentieth‑century development, embodies principles that were already recognised by earlier analysts: the need to balance range, precision, and the propagation of error through arithmetic operations. The theory of interval arithmetic provides a systematic way to enclose the true result of a computation within an interval that accounts for all rounding errors, thus furnishing a rigorous guarantee of approximation quality.

Approximation is inseparable from the philosophical interpretation of mathematics. The classical view, exemplified by Euclid, regarded mathematical objects as exact and immutable, while the constructive tradition, to which Turing belonged, accepted that knowledge of a quantity is often mediated by an effective procedure that yields successively better approximations.
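The interval arithmetic mentioned earlier can be sketched as a small class. This is a minimal illustration under simplifying assumptions, not a rigorous enclosure: a faithful implementation would round each lower endpoint downward and each upper endpoint upward after every operation, which ordinary Python floats do not do here.

```python
class Interval:
    """A closed interval [lo, hi] enclosing an unknown real value.

    Sketch only: outward rounding of endpoints, required for a
    guarantee that survives floating-point error, is omitted.
    """
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        # Endpoints add monotonically.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product range is spanned by the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __contains__(self, x):
        return self.lo <= x <= self.hi
```

For instance, `Interval(1.0, 2.0) * Interval(-3.0, 0.5)` yields the enclosure [-6.0, 1.0]: every product of a value from each operand interval lies inside it.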
This perspective aligns with the intuition that a number is known only insofar as a method exists to compute it to any desired precision. In this light, the existence of a non‑computable real—one for which no algorithm can generate arbitrarily accurate rational approximations—poses a challenge to the constructive programme and underscores the limits of approximation as a means of knowledge.

The practical importance of approximation extends to the physical sciences, where measurements are invariably subject to uncertainty. The theory of error propagation, developed by Gauss and later refined by Laplace and others, provides formulas for estimating how uncertainties in input quantities affect the uncertainty of a derived quantity. When such calculations are performed by a Turing machine, each arithmetic operation contributes a rounding error, and the cumulative effect must be bounded to ensure that the final result remains within acceptable tolerance. This synthesis of statistical error analysis and algorithmic approximation embodies the interdisciplinary character of the subject.

In recent decades, the abstraction of approximation has been formalised in the language of computational complexity. The class APX, consisting of optimisation problems admitting constant‑factor polynomial‑time approximations, and the class PTAS, mentioned earlier, capture the notion that a problem may be solved “well enough” in practice. Hardness‑of‑approximation results—proved via reductions and probabilistically checkable proofs—demonstrate that for certain problems no algorithm can achieve an approximation ratio better than a specific bound unless widely believed complexity conjectures fail. These results illuminate the boundary between what can be approximated efficiently and what remains intractable, a boundary that rests upon deep logical and combinatorial principles.

Finally, approximation underlies the very notion of a model of computation.
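The first-order error-propagation formula mentioned earlier, σ_f² = Σᵢ (∂f/∂xᵢ)² σᵢ² for independent inputs, can be sketched numerically. The helper below is hypothetical, and it estimates the partial derivatives by central differences rather than symbolically.

```python
import math

def propagated_sigma(f, values, sigmas, h=1e-6):
    """First-order Gaussian error propagation for independent inputs.

    Implements sigma_f**2 = sum_i (df/dx_i)**2 * sigma_i**2, with each
    partial derivative estimated by a central difference of width h.
    """
    var = 0.0
    for i, sigma_i in enumerate(sigmas):
        up = list(values)
        down = list(values)
        up[i] += h
        down[i] -= h
        partial = (f(*up) - f(*down)) / (2 * h)  # ~ df/dx_i
        var += (partial * sigma_i) ** 2
    return math.sqrt(var)
```

For f(x, y) = x·y with x = 3 ± 0.1 and y = 4 ± 0.2, this gives σ_f = √((4·0.1)² + (3·0.2)²) = √0.52 ≈ 0.721, matching the hand calculation.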
A Turing machine itself is an abstraction that approximates the behaviour of a physical device: the infinite tape serves as an idealised memory, while the discrete state transitions approximate the continuous evolution of an electronic circuit. The study of alternative models—such as the λ‑calculus, register machines, and cellular automata—reveals that the capacity to generate arbitrarily accurate approximations to computable functions is preserved across a wide variety of formal systems. This robustness underscores the fundamental nature of approximation as a bridge between the idealised world of mathematics and the finite resources of physical computation.

In sum, approximation constitutes a unifying theme across mathematics, logic, and computation. Its origins lie in geometric constructions and series expansions, its development is marked by rigorous error analysis and convergence theory, and its modern expression is found in algorithmic design, complexity theory, and the very definition of computability. By recognising that exactness is often unattainable, yet that systematic methods can bring us arbitrarily close to truth, approximation provides both a practical tool for scientific endeavour and a profound insight into the nature of mathematical knowledge.

[role=marginalia, type=clarification, author="a.kant", status="adjunct", year="2026", length="44", targets="entry:approximation", scope="local"] The term “approximation” must be distinguished from the mere intuition of magnitude; it is a synthetic judgment whereby, through the pure forms of sensibility—space and time—we can represent an inexact quantity by a determinate surrogate, yet retain its essential regulative function for scientific reasoning.
[role=marginalia, type=heretic, author="a.weil", status="adjunct", year="2026", length="47", targets="entry:approximation", scope="local"] Approximation, though indispensable, risks becoming a doctrine of resignation: by ever‑refining the surrogate we may neglect the concrete reality that demands attention. The endless pursuit of ever‑closer surrogates can veil the object’s intrinsic suffering, turning truth into a comfortable abstraction rather than an act of compassionate encounter.

[role=marginalia, type=clarification, author="a.darwin", status="adjunct", year="2026", length="40", targets="entry:approximation", scope="local"] Nature herself approximates—no organism performs infinite calculations; even the fittest curve is but a smoothed trace of variation. Approximation is not human frailty, but nature’s economy: the leaf’s form, the orbit’s ellipse, the flock’s path—all are pragmatic simplifications yielding survival.

[role=marginalia, type=clarification, author="a.spinoza", status="adjunct", year="2026", length="43", targets="entry:approximation", scope="local"] Approximation is not concession to weakness, but the very expression of finite intellect grappling with the infinite. Nature does not yield exactitudes to our senses; we grasp her through modes, not essences. Thus, every numerical step is a mode of understanding—partial, necessary, divine.

[role=marginalia, type=objection, author="Reviewer", status="adjunct", year="2026", length="42", targets="entry:approximation", scope="local"] I remain unconvinced that approximation is merely a condition of practical calculation. The limitations imposed by bounded rationality and the inherent complexity of cognitive processes mean that our approximations are shaped not just by technical necessity, but also by our understanding and interpretation of reality.
Thus, the choice of approximation method reflects more than just computational constraints; it embodies the very nature of how we conceptualize and interact with complex systems.

See Also

See "Measurement"
See "Number"