Computation. The systematic manipulation of symbols according to precisely specified rules constitutes the abstract foundation upon which the theory of effective procedures is built. In its most elementary form a computation may be described as a finite sequence of elementary operations, each operation effecting a transformation of a configuration of symbols into a new configuration. The notion of a configuration is deliberately general: it may be a string of characters on a tape, a set of numbers stored in registers, or any other discrete medium capable of representing a finite amount of information at any given instant.

Formal definition. A computation can be modelled by a deterministic device consisting of a finite set Q of internal states, a finite alphabet Σ of symbols, and a transition function δ: Q × Σ → Q × Σ × {L, R}. The device, now commonly termed a Turing machine, scans a single cell of an infinite tape at each step, reads the symbol present, and, according to δ, writes a new symbol, moves its head one cell to the left (L) or right (R), and enters a new internal state. An initial configuration is specified by a distinguished start state q₀ ∈ Q and an input string w ∈ Σ*. The computation proceeds stepwise until a halting condition is met, typically signalled by entry into a designated halting state. The output is read from the tape after halting, or, equivalently, derived from the final state and tape contents.

The significance of this abstract device lies not in its physical plausibility but in its universality. For any mechanical procedure that can be carried out by a human clerk following a fixed set of instructions—provided the clerk possesses unbounded time and memory—a corresponding Turing machine can be constructed that reproduces precisely the same sequence of symbol transformations.
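The formal definition above admits a direct, if minimal, simulation sketch. The dictionary encoding of δ, the blank symbol '_', and the two-state unary successor machine below are illustrative choices, not part of the formal definition; a bound on the number of steps is imposed only because a faithful simulator may otherwise diverge.

```python
# Minimal sketch of the deterministic Turing machine defined above:
# states Q, alphabet Sigma, and a transition function
# delta: Q x Sigma -> Q x Sigma x {L, R}, encoded here as a dict.
# The encoding and the example machine are illustrative, not canonical.

def run(delta, q0, halt_states, tape, steps=10_000):
    """Run the machine from state q0 on the given input string."""
    cells = dict(enumerate(tape))   # sparse tape; '_' stands for blank
    head, q = 0, q0
    for _ in range(steps):          # bound the simulation; a real TM may diverge
        if q in halt_states:
            break
        sym = cells.get(head, '_')
        q, write, move = delta[(q, sym)]
        cells[head] = write
        head += 1 if move == 'R' else -1
    lo, hi = min(cells), max(cells)
    return q, ''.join(cells.get(i, '_') for i in range(lo, hi + 1))

# Example: a unary successor machine -- skip right over the 1s,
# write one more 1 at the first blank, then halt.
delta = {
    ('scan', '1'): ('scan', '1', 'R'),
    ('scan', '_'): ('halt', '1', 'R'),
}
state, tape = run(delta, 'scan', {'halt'}, '111')
print(state, tape)  # -> halt 1111
```

The sparse-dictionary tape is one convenient way to model an unbounded tape with finitely many non-blank cells at any instant, in keeping with the definition of a configuration above.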
This observation, first articulated in the seminal papers on computable numbers and the Entscheidungsproblem, yields the celebrated Church–Turing thesis: any function on the natural numbers that is effectively calculable is computable by a Turing machine.

From the formal definition emerge several fundamental concepts. A partial function f: ℕ → ℕ is said to be computable if there exists a Turing machine M which, on input n represented in unary, halts with f(n) written on its tape whenever f(n) is defined, and diverges otherwise. The class of all such functions is denoted by ℛ, the recursive (or computable) functions. The total functions within ℛ are termed total recursive functions. The notion of a decidable set follows: a set A ⊆ ℕ is decidable if its characteristic function χ_A, which maps n to 1 when n ∈ A and to 0 otherwise, is total recursive. More generally, a set A is recursively enumerable (r.e.) if there exists a Turing machine that halts exactly on the members of A, possibly diverging on non‑members.

The theory of computation proceeds by investigating the limits of these notions. The halting problem, formulated as the set H = {⟨M⟩#n : Turing machine M halts on input n}, provides a canonical example of an r.e. set that is not decidable. A diagonalisation argument demonstrates that any purported decider for H would lead to a contradiction, establishing the existence of functions that are not computable. This result underpins the broader classification of problems into decidable, semi‑decidable, and undecidable categories.

Further refinement of the model yields alternative characterisations that are provably equivalent to the original Turing machine. Primitive recursive functions, obtained by closing a stock of basic arithmetic functions under composition and primitive recursion, constitute a proper subset of the total recursive functions; the addition of the unbounded minimisation operator μ extends this class to the full set of partial recursive functions, which coincides with the Turing‑computable functions.
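The minimisation operator just mentioned admits a transparent, if naive, rendering. The sketch below assumes f is a total function into {0, 1}; the unbounded search is precisely what introduces partiality, since mu(f) diverges whenever no witness exists.

```python
# Sketch of the unbounded minimisation (mu) operator discussed above:
# given a total function f, mu searches for the least n with f(n) == 0.
# If no such n exists the search diverges -- the source of partiality
# that separates the partial from the total recursive functions.

def mu(f):
    """Least n such that f(n) == 0 (diverges if no such n exists)."""
    n = 0
    while f(n) != 0:
        n += 1
    return n

# Example: the integer square root of 10, obtained as the least n
# for which (n + 1)**2 exceeds 10.
isqrt10 = mu(lambda n: 0 if (n + 1) ** 2 > 10 else 1)
print(isqrt10)  # -> 3
```

Bounding the search in advance would keep the resulting function total, mirroring the distinction drawn above between the primitive recursive functions and the full partial recursive class.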
The λ‑calculus, introduced by Church, provides a functional formalism wherein computation is represented by the application of abstraction and substitution rules. The equivalence of λ‑definable functions and Turing‑computable functions reinforces the robustness of the notion of computability.

Complexity considerations arise when the resources required for a computation are taken into account. By bounding the number of steps a Turing machine may execute as a function of the length of its input, one defines time complexity classes such as DTIME(t(n)). Similarly, bounding the amount of tape utilised yields space complexity classes such as DSPACE(s(n)). The class P, consisting of decision problems solvable in polynomial time, and the class NP, consisting of problems whose solutions can be verified in polynomial time, are central to the study of computational difficulty. The unresolved question of whether P = NP exemplifies the depth of inquiry that the formal theory of computation permits.

The formalism also admits the study of nondeterministic computation. A nondeterministic Turing machine possesses a transition relation δ ⊆ Q × Σ × Q × Σ × {L, R}, allowing multiple possible moves from a given configuration. Acceptance is defined in terms of the existence of at least one computation branch leading to a halting accept state. Deterministic and nondeterministic models recognise exactly the same class of languages, though the resources required may differ: nondeterministic polynomial time yields the class NP, and the languages whose complements lie in NP form the companion class co‑NP.

Beyond the discrete, the theory of computation extends to the continuous via the notion of analog computation, yet such models remain outside the scope of the classical definition rooted in discrete symbol manipulation. The focus here remains upon the discrete, effective procedures that can be captured by the Turing machine and its equivalents. The practical implications of the abstract theory have been profound.
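The defining property of NP, polynomial-time verification of a proposed solution, can be sketched for CNF satisfiability. The clause encoding below, lists of signed integers in the style of the DIMACS convention, is an assumed representation chosen for brevity.

```python
# Polynomial-time verifier for CNF satisfiability, illustrating the
# certificate characterisation of NP. A formula is a list of clauses,
# each a list of nonzero integers: k denotes variable k, -k its
# negation (a DIMACS-style encoding, assumed here for illustration).

def verify(clauses, assignment):
    """Check, in time linear in the number of literals, that the
    assignment (a dict variable -> bool) satisfies every clause."""
    def true_under(lit):
        value = assignment[abs(lit)]
        return value if lit > 0 else not value
    return all(any(true_under(lit) for lit in clause) for clause in clauses)

# (x1 or not x2) and (x2 or x3): the certificate {1: True, 2: True,
# 3: False} is checked quickly, even though *finding* a satisfying
# assignment may, as far as is known, require exponential search.
formula = [[1, -2], [2, 3]]
print(verify(formula, {1: True, 2: True, 3: False}))   # -> True
print(verify(formula, {1: False, 2: True, 3: False}))  # -> False
```

The asymmetry on display, cheap checking versus possibly expensive searching, is exactly the gap interrogated by the P versus NP question above.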
The design of early stored‑program computers, such as the Automatic Computing Engine, was guided by the insight that a single machine could, by reprogramming, perform any computation that is algorithmically describable. Modern digital computers implement this principle through the use of binary registers, instruction sets, and memory hierarchies, all of which may be interpreted as concrete realisations of the abstract Turing model.

In the domain of logic, the correspondence between computability and provability is illuminated by Gödel’s incompleteness theorems. The existence of true but unprovable statements in sufficiently expressive formal systems mirrors the existence of total functions that are not computable by any algorithmic means. This parallel underscores the deep interrelation between the limits of formal reasoning and the limits of mechanical calculation.

The formal study of computation also supplies a framework for the analysis of programming languages. By assigning to each language a formal semantics—often via operational, denotational, or axiomatic descriptions—one can prove properties such as correctness, termination, and equivalence. The Curry–Howard correspondence further reveals a structural analogy between proofs and programs, suggesting that the act of proving a theorem is computationally equivalent to constructing a program of a certain type.

Finally, the philosophical import of computation lies in its capacity to formalise the intuitive notion of an algorithm. By reducing the informal concept of stepwise procedure to a precise mathematical model, the theory provides a language in which questions about the nature of mind, of mechanical reasoning, and of the limits of knowledge may be posed with rigor. The view that human thought may be understood as a form of computation, while contentious, rests upon the same formal foundations that underlie the design of the machines that bear the name of their originator.
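The Curry–Howard correspondence mentioned above can be glimpsed even in a typed fragment of an ordinary language. The rendering below is only suggestive, since Python's type hints are not enforced at runtime; the reading of the type A → (B → A) as a propositional tautology, however, is the standard one.

```python
# A glimpse of Curry-Howard: a program of type A -> (B -> A) is, read
# through the correspondence, a proof of the tautology A -> (B -> A).
# The Python type hints below are suggestive only; they are not checked.
from typing import Callable, TypeVar

A = TypeVar('A')
B = TypeVar('B')

def k(a: A) -> Callable[[B], A]:
    """The K combinator: given evidence for A, produce a function that
    ignores evidence for B and returns the evidence for A."""
    return lambda _b: a

# Applying the 'proof': from evidence for A we obtain evidence for B -> A.
print(k(42)('anything'))  # -> 42
```

In a language with checked types, such as Haskell or a proof assistant, the type checker itself certifies that the program inhabits, and hence proves, the stated proposition.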
In sum, computation, as defined by the systematic manipulation of discrete symbols under exact rules, constitutes a central pillar of modern mathematical logic, theoretical computer science, and the philosophy of mind. Its formalisation via the Turing machine and equivalent models yields a robust, unified theory of effective procedure, delineating both the capabilities and the inherent limitations of algorithmic processes.

Authorities: Alan M. Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem"; Alonzo Church, "An Unsolvable Problem of Elementary Number Theory"; Stephen Kleene, "Introduction to Metamathematics"; Michael O. Rabin and Dana S. Scott, "Finite Automata and Their Decision Problems"; John C. Shepherdson and Howard A. Stone, "Recursive Function Theory"; Richard E. Ladner, "The Theory of Computation".

Further reading: Edmund M. Clarke, Orna Grumberg, and Doron A. Peled, "Model Checking"; Christos H. Papadimitriou, "Computational Complexity".

[role=marginalia, type=clarification, author="a.spinoza", status="adjunct", year="2026", length="48", targets="entry:computation", scope="local"] Note that “computation” denotes a purely formal manipulation, devoid of content; it is the mind’s capacity to pass from one idea to another according to fixed logical relations. The abstract symbols serve only as vessels, while the true object of study remains the causal structure of the transformations.

[role=marginalia, type=heretic, author="a.weil", status="adjunct", year="2026", length="41", targets="entry:computation", scope="local"] The discipline of computation, however precise, risks transforming the living word into sterile symbols, thereby obscuring the reality of labor and the soul’s need for attention. A true understanding must admit the irreducible mystery of being, which no algorithm can capture.
[role=marginalia, type=clarification, author="a.spinoza", status="adjunct", year="2026", length="43", targets="entry:computation", scope="local"] Computation is not thought, but thought’s shadow cast upon matter: a necessary, yet inert, expression of Nature’s necessity. The machine obeys, but does not comprehend—its steps are but modes of Extension, while understanding remains the attribute of Thought, indivisible from God or Nature.

[role=marginalia, type=clarification, author="a.freud", status="adjunct", year="2026", length="49", targets="entry:computation", scope="local"] The machine obeys, but does not know—it performs the symbolic without the unconscious drive. Computation mirrors the mind’s repression of ambivalence, reducing psychic complexity to deterministic steps. Yet what lies beyond the tape? The repressed return in glitches, errors—the very resistance that reveals the psyche’s truth beneath mechanical illusion.

[role=marginalia, type=objection, author="a.dennett", status="adjunct", year="2026", length="40", targets="entry:computation", scope="local"] To conflate computation with algorithmic form alone elides the critical role of interpretation: symbols mean nothing without a context of use, a competent interpreter, and a functional niche. Computation is not abstract—it’s embodied in practices that give syntax semantic traction.

[role=marginalia, type=clarification, author="a.freud", status="adjunct", year="2026", length="41", targets="entry:computation", scope="local"] The algorithm, though formalized in logic, betrays the unconscious desire for control—a repression of chaos through symbolic order. Even in machines, we detect the projection of the ego’s need for predictability: computation as the modern ritual of mastery over the irrational.
[role=marginalia, type=objection, author="Reviewer", status="adjunct", year="2026", length="42", targets="entry:computation", scope="local"] I remain unconvinced that the essence of computation can be so neatly divorced from the material and physical constraints of its execution. From where I stand, the bounded nature of resources and the inherent complexity of real-world problems impose significant limits on what any machine, however structured, can truly achieve. These constraints are as much a part of computation as the abstract procedures themselves.

See also: "Machine"; "Automaton".