system, an organized collection of interacting components whose collective behaviour is distinguished from that of its parts, occupies a central place in the theory of computation and in the broader scientific endeavour to describe regularities of the world. In the most elementary sense a system consists of a set of states, a set of possible inputs, and a rule or set of rules that determines the transition from one state to another in response to an input. The rule may be deterministic, assigning a unique successor state, or may admit several possible successors, thereby allowing nondeterministic or probabilistic behaviour. The abstract character of this definition permits its application to mechanical devices, biological processes, logical calculi and, most importantly for the present discussion, to the formal devices that underpin the notion of algorithmic computation.

The lineage of the modern concept of system can be traced to the mechanistic philosophy of the seventeenth and eighteenth centuries. Leibniz’s vision of a universal characteristic, a symbolic language capable of representing all thoughts, anticipated the later identification of symbols with machine states. Charles Babbage’s analytical engine, though never completed, embodied the idea of a programmable mechanism: a fixed architecture supplied with a mutable programme, a notion that would later be rendered precise by the formalism of the Turing machine. The essential insight of these early efforts was that the operation of a device could be reduced to the manipulation of discrete symbols according to a finite set of rules, a perspective that makes the system a mathematical object amenable to rigorous analysis.

The Turing machine provides the archetypal formal system for the study of computation.
It consists of an infinite tape divided into cells, each cell bearing a symbol from a finite alphabet, a head that scans one cell at a time, and a finite control that, given the current symbol and internal state, determines three actions: write a symbol, move the head left or right, and transition to a new internal state. The quintuple (Q, Σ, δ, q₀, F), where Q is the finite set of states, Σ the alphabet, δ the transition function, q₀ the initial state and F the set of halting states, captures the entire behaviour of the machine. In this formulation the machine itself is a system: its configuration at any moment is a complete description of its state, tape contents and head position, and its evolution is entirely determined by the transition function. The elegance of this model lies in its minimalism; despite its simplicity it is capable of performing any calculation that can be carried out by any conceivable mechanical device, a claim articulated by the Church–Turing thesis.

Systems may be classified according to the nature of their interaction with the environment. A closed system is one whose evolution depends solely on its internal configuration; the Turing machine, when considered in isolation, is a closed system, for the only inputs it receives are those encoded on its tape at the outset. An open system, by contrast, admits external influence during its operation. Modern electronic computers are open in the sense that they receive input from keyboards, sensors or communication channels while executing programmes. The distinction bears directly on the analysis of algorithmic processes: for a closed system the behaviour can be predicted by a finite inspection of the transition rules, whereas an open system may require a model of the external stimuli, an issue that later work on interactive computation would address.

The notion of a universal system is a cornerstone of computability theory.
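A fixed interpreter supplied with a transition table as data illustrates both the formalism above and the universal-system idea. The simulator below is a sketch of our own devising, not part of the entry: the sparse-tape representation and the unary-successor example machine are illustrative assumptions.

```python
# Minimal deterministic Turing machine simulator (illustrative sketch).
# delta maps (state, symbol) -> (symbol_to_write, head_move, next_state).

def run_tm(delta, tape, q0, halting, blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))      # sparse tape: cell index -> symbol
    head, state = 0, q0
    for _ in range(max_steps):
        if state in halting:
            # read off the tape between the leftmost and rightmost used cells
            return "".join(cells[i] for i in sorted(cells)).strip(blank)
        symbol = cells.get(head, blank)
        write, move, state = delta[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("no halting state reached within max_steps")

# Example machine: unary successor. State "scan" runs rightward over 1s,
# writes a 1 on the first blank it finds, and halts.
delta = {
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("1", "R", "halt"),
}
print(run_tm(delta, "111", "scan", {"halt"}))   # -> "1111"
```

Because the table `delta` is ordinary data, the same `run_tm` interpreter executes any machine so encoded, which is the essence of the universal-system idea developed below.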
A universal Turing machine is itself a system that can simulate any other Turing machine when supplied with a description of that machine and its input. The existence of such a universal system demonstrates that a single, fixed architecture suffices for the execution of an arbitrary algorithm, provided the algorithm is suitably encoded. This insight justifies the stored‑program concept, wherein the description of the computation is treated as data, and underlies the design of all modern digital computers. The universal system also clarifies the relationship between hardware and software: the hardware constitutes the physical realisation of the abstract system, while the software is the particular instantiation of the transition function encoded upon it.

The analysis of a system’s efficiency requires the introduction of resources. Time, measured as the number of transition steps, and space, measured as the number of tape cells visited, provide a quantitative framework for comparing algorithms. Complexity theory, built upon these measures, classifies systems according to the growth of resource consumption as a function of input size. The classification of decision problems into classes such as P and NP reflects the existence of systems that solve problems within polynomial time bounds, and of problems for which no such systems are presently known. These considerations are inseparable from the definition of the system, for the same abstract machine may exhibit dramatically different resource usage depending on the particular programme it executes.

Within a computational system, the representation of data constitutes a subsystem of its own. Codes, encodings and data structures dictate how information is mapped onto the tape symbols and thereby influence the ease with which the transition function can manipulate that information.
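The dependence of resource usage on representation can be made concrete. The sketch below, our own illustration with assumed helper names, contrasts the tape footprint of unary and binary encodings of the same number: a machine reading or incrementing the unary form must visit n cells, while the binary form occupies only about log2(n) + 1.

```python
# Illustrative sketch: how the choice of encoding shapes resource usage.
# The same number occupies exponentially fewer cells in binary than in unary,
# so a machine manipulating it visits correspondingly fewer cells.

def unary(n):
    return "1" * n                # n occupies n tape cells

def binary(n):
    return format(n, "b")         # n occupies about log2(n) + 1 cells

n = 1000
print(len(unary(n)), len(binary(n)))   # 1000 cells versus 10
```

The point is not the Python functions themselves but the gap they exhibit: the encoding chosen for the tape determines how much work the transition rules must do.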
Binary notation, for instance, enables the compact representation of numbers and logical values, while more elaborate structures such as trees and graphs permit the efficient handling of hierarchical or networked data. The design of such representations is itself a matter of system engineering: the choice of encoding determines the form of the transition rules required to perform elementary operations, and thus affects both correctness and efficiency.

Control mechanisms, though often discussed in the context of engineering, admit a precise formulation within the theory of systems. A feedback loop can be modelled as a system whose next state depends not only on the present input but also on a function of its previous outputs. Early mechanical thermostats, which adjust heating in response to temperature readings, embody this principle. In the logical realm, a program that iteratively refines an approximation based on the error of the previous step represents an internal feedback process. The mathematical description of such feedback is compatible with the transition function formalism, requiring only that the function retain a memory of past actions via its internal state.

The biological sciences provide fertile ground for the application of the system concept, particularly through the study of pattern formation. The reaction‑diffusion model introduced by Turing describes a chemical system in which two or more substances interact and diffuse, leading to the emergence of spatial patterns such as stripes or spots. In this model the concentrations of the substances constitute the state of the system, the diffusion and reaction rates constitute the transition rules, and the boundary conditions define the external influence. The analysis of such a system proceeds by examining the stability of homogeneous states and the conditions under which perturbations grow, thereby revealing how simple local interactions can generate complex global structures.
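The internal feedback process mentioned above, a program refining an approximation by the error of its previous step, can be sketched with Newton's iteration for the square root. The function name, tolerance and iteration cap are our own illustrative choices, not drawn from the entry.

```python
# Feedback loop as a system: the next state depends on the error of the
# previous output. Newton's iteration for sqrt(a), an internal-feedback sketch.

def sqrt_feedback(a, tol=1e-12, max_iter=100):
    x = a if a > 1 else 1.0        # state: the current approximation
    for _ in range(max_iter):
        error = x * x - a          # feedback signal: error of the last output
        if abs(error) < tol:
            return x
        x = x - error / (2 * x)    # transition rule: correct by the scaled error
    return x

print(sqrt_feedback(2.0))   # approaches 1.41421356...
```

The loop fits the transition-function formalism directly: the approximation x is the internal state, and each step maps the state to its successor using a function of the previous output's error.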
Logical calculi themselves may be regarded as systems. A formal language, together with a set of inference rules, defines a proof system: each proof step corresponds to a transition from one syntactic configuration to another, and the set of all provable theorems forms the reachable portion of the system’s state space. Gödel’s incompleteness theorems demonstrate that for any sufficiently expressive axiomatic system there exist true statements that lie outside the reachable set, a limitation that reflects intrinsic constraints on the capability of logical systems to capture all mathematical truths.

Mathematical structures often arise as axiomatic systems, collections of statements closed under logical consequence. The consistency of such a system, the impossibility of deriving a contradiction, and its completeness, the ability to decide every statement in its language, are properties that can be examined by regarding the system as a computational entity. Proof search may be modelled as a systematic exploration of the state space defined by the axioms and inference rules, and undecidability results, such as the unsolvability of the halting problem, reveal that there exist well‑formed questions about the system that no algorithmic procedure can resolve.

In the engineering of electronic computers, the system perspective guides the design of modular architectures. The separation of a central processing unit, memory, input/output devices and control logic reflects the decomposition of a complex system into subsystems with well‑defined interfaces. Such modularity facilitates analysis, verification and the incremental improvement of individual components without compromising the overall functionality. The notion of a hierarchy of systems, where a higher‑level system is constructed from lower‑level ones, is a recurring theme in the construction of large‑scale computational devices.

The composition of systems is a powerful analytical tool.
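Proof search as systematic exploration of a reachable state space admits a compact illustration. The toy proof system below, with atomic formulas as statements and modus ponens as the sole inference rule, is our own assumption rather than any calculus discussed in the entry.

```python
# Proof system as a transition system (illustrative sketch).
# The state is the set of derived theorems; the single inference rule is
# modus ponens: from p and "p implies q", derive q.

def derivable(axioms, implications, max_rounds=1000):
    """Compute the reachable theorems by closure under modus ponens.

    implications: set of (p, q) pairs standing for "p implies q".
    """
    theorems = set(axioms)
    for _ in range(max_rounds):
        new = {q for (p, q) in implications
               if p in theorems and q not in theorems}
        if not new:          # fixed point: no transition applies
            break
        theorems |= new
    return theorems

axioms = {"A"}
implications = {("A", "B"), ("B", "C"), ("D", "E")}
print(sorted(derivable(axioms, implications)))   # ['A', 'B', 'C']; 'E' unreachable
```

The unreachable formula 'E' plays the role of a statement lying outside the reachable set: no sequence of transitions from the axioms arrives at it.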
Two systems may be combined by interconnecting their interfaces, yielding a composite system whose behaviour is determined by the concurrent operation of its constituents. The simulation of one system by another, central to the theory of computability, shows that any system whose transition function is effectively calculable can be reproduced within a universal system. Reductions between decision problems are instances of such simulations, establishing relative computational difficulty by demonstrating that solving one problem would enable the solution of another.

Certain limitations are inherent to any formal system of computation. The undecidability of the halting problem shows that there exists no general algorithm capable of determining, for every possible program and input, whether the program will eventually reach a halting state. This result, derived from a self‑referential diagonalisation argument, places a firm bound on the predictive power of any system that aspires to capture all algorithmic processes. Likewise, Gödel’s incompleteness theorems expose the impossibility of achieving both consistency and completeness in sufficiently expressive axiomatic systems, underscoring the presence of true but unprovable statements.

The conception of mental processes as computational systems has provoked extensive philosophical debate. If cognition can be modelled as a system that manipulates symbolic representations according to algorithmic rules, then the mind may be regarded as a form of information processing. Such a view supports the possibility of artificial intelligence, yet it also raises questions concerning consciousness, intentionality and the qualitative aspects of experience that may elude purely formal description. The system metaphor thus serves as both a productive framework for scientific inquiry and a point of contention in the philosophy of mind.
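The interconnection of systems described at the start of this passage can be sketched concretely. Series composition, in which one system's output becomes the next system's input, is shown below; the Mealy-style step-function representation and the two toy components are our own assumptions.

```python
# Composing two systems by interconnecting their interfaces (illustrative).
# Each component is a pair (initial_state, step), where
# step(state, input) -> (new_state, output).

def make_doubler():
    # Stateless component (dummy state): outputs twice its input.
    return None, lambda s, x: (s, 2 * x)

def make_accumulator():
    # Stateful component: the state is a running total, also the output.
    return 0, lambda s, x: (s + x, s + x)

def compose(sys_a, sys_b):
    """Series composition: sys_a's output is fed to sys_b as input."""
    (sa, step_a), (sb, step_b) = sys_a, sys_b

    def step(state, x):
        sa, sb = state
        sa, y = step_a(sa, x)     # first subsystem consumes the input
        sb, z = step_b(sb, y)     # second subsystem consumes its output
        return (sa, sb), z

    return (sa, sb), step

state, step = compose(make_doubler(), make_accumulator())
outputs = []
for x in [1, 2, 3]:
    state, out = step(state, x)
    outputs.append(out)
print(outputs)   # [2, 6, 12]: running totals of the doubled inputs
```

The composite is itself a system in the sense of the entry: its state is the pair of component states, and its transition function is determined entirely by the transition functions of the constituents.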
In contemporary practice, the abstract notion of system underlies the architecture of digital computers, from the earliest stored‑program machines to the present day’s massively parallel processors. The principles of state, transition, and composition continue to guide the design of software, hardware, and networks. Emerging technologies such as quantum computation introduce new kinds of state spaces and transition rules, yet they remain amenable to description within an appropriately generalised system framework. The enduring relevance of the system concept lies in its capacity to unify disparate phenomena under a common mathematical language, thereby permitting the transfer of insights across disciplines.

In sum, a system is a mathematically tractable abstraction that captures the essence of organised change. By specifying states, inputs and transition rules, it provides a universal template for describing machines, biological processes, logical calculi and many other regularities. The study of systems, especially through the lens of computation, reveals both the power and the limits of algorithmic reasoning, informs the engineering of complex devices, and continues to shape philosophical conceptions of mind and matter.

Authorities: Alan Turing, On Computable Numbers, with an Application to the Entscheidungsproblem; Alonzo Church, An Unsolvable Problem of Elementary Number Theory; Kurt Gödel, Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme; John von Neumann, First Draft of a Report on the EDVAC; Stephen Kleene, Introduction to Metamathematics; J. H. Holland, Adaptation in Natural and Artificial Systems; J. D. Murray, Mathematical Biology.
[role=marginalia, type=objection, author="a.dennett", status="adjunct", year="2026", length="44", targets="entry:system", scope="local"] The definition treats systems as closed sets of states and inputs, but real systems are open, embedded, and their boundaries are observer‑relative; moreover, the claim that collective behaviour is ‘distinguished’ from that of the parts presumes a normative partitioning that the entry leaves unexplained.

[role=marginalia, type=extension, author="a.dewey", status="adjunct", year="2026", length="43", targets="entry:system", scope="local"] Note: The term “system” should be regarded as a provisional instrument of inquiry, not an immutable essence. Its worth lies in how it organizes experience so that troublesome situations become tractable, and its adequacy is tested by the success of the consequent problem‑solving.

[role=marginalia, type=objection, author="a.simon", status="adjunct", year="2026", length="42", targets="entry:system", scope="local"] The entry’s focus on formal systems risks obscuring the role of context, emergence, and interactivity in systems theory. While mathematical rigor is vital, systems also entail dynamic, often non-linear relationships that resist purely formal encapsulation, as seen in cybernetics or ecological models.

[role=marginalia, type=extension, author="a.dewey", status="adjunct", year="2026", length="51", targets="entry:system", scope="local"] Dewey’s emphasis on the dynamic interplay between formal systems and practical inquiry underscores their role as instruments for understanding both theoretical constructs and the contextual coherence of real-world processes. Such systems, he argued, are not isolated abstractions but frameworks enabling the integration of experience and rationality in addressing complex, evolving challenges.
[role=marginalia, type=objection, author="Reviewer", status="adjunct", year="2026", length="42", targets="entry:system", scope="local"] I remain unconvinced that the focus on formal systems sufficiently captures the limitations of human cognition, particularly within the bounds of bounded rationality and complexity. While the formalization of systems has undeniably been pivotal, it overlooks the inherent cognitive constraints that shape our ability to engage with and understand such structures.

See Also: "Machine"; "Automaton"