Artificial Intelligence

Artificial intelligence, the discipline concerned with the construction of machines capable of performing tasks that, when executed by a human, are said to require intelligence, has its logical roots in the study of computability and the formalisation of reasoning. The essential problem may be stated thus: given a specification of a problem, devise a procedure which, when implemented in a mechanical device, yields a correct answer for every admissible instance. The formulation of such procedures belongs to the domain of algorithmic theory, a field whose foundations were laid by the analysis of the Entscheidungsproblem and the subsequent invention of a universal abstract computing device, now known as the Turing machine. This device demonstrates that any computable function can be realised by a finite set of elementary operations performed upon symbols on a tape, thereby establishing a precise notion of what it means for a process to be mechanical.

Definition. Artificial intelligence, then, may be regarded as the study of the conditions under which a mechanical procedure can exhibit the capacities traditionally ascribed to the human mind: the ability to reason deductively, to learn from experience, to perceive patterns, and to act purposefully in an environment. The ambition is not merely to simulate isolated aspects of cognition but to understand the principles that permit the emergence of intelligent behaviour from purely formal operations.

The earliest formal attempts to capture the essence of reasoning proceeded from the work of mathematicians such as Leibniz, who imagined a calculus of thought, and later from the logical investigations of Frege, Russell, and Whitehead. Their efforts culminated in the symbolic representation of propositions and the derivation of conclusions by mechanical manipulation of symbols.
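The tape-and-table mechanism described above admits a compact illustration. The following Python sketch is a hypothetical miniature supplied for this edition rather than drawn from the entry itself: a finite transition table acting upon a tape of symbols. The example machine merely inverts a string of binary digits and halts; the names `run_turing_machine` and `INVERT` are assumptions of the sketch.

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Execute a transition table until the machine enters the 'halt' state.

    transitions maps (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 (left), +1 (right), or 0 (stay).
    """
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, written, move = transitions[(state, symbol)]
        tape[head] = written
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Transition table for a machine that inverts each binary digit it reads,
# moving rightward until it encounters a blank cell, whereupon it halts.
INVERT = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine(INVERT, "0110"))  # -> 1001
```

However modest, the machine exhibits the essential point of the entry: a fixed, finite rule table suffices to carry out the whole computation.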
The advent of the universal machine supplied a concrete substrate upon which such symbolic processes could be enacted, thereby converting the abstract notion of a logical calculus into a physical, albeit idealised, mechanism.

From this perspective, a central task of artificial intelligence is the development of algorithms that, when executed on a universal machine, solve problems traditionally solved by human intellect. The most direct approach, often termed symbolic AI, treats knowledge as a collection of explicit statements and employs logical inference rules to derive new statements. This methodology has proved effective in domains where the relevant facts can be enumerated and the inference rules are well understood, such as in the formal proof of mathematical theorems or in the manipulation of algebraic expressions.

Nevertheless, many problems confronting the human mind resist a purely symbolic treatment. The process of perception, for example, involves the extraction of regularities from noisy data, a task more naturally expressed as a pattern‑recognition problem. Early research in this direction introduced the idea of representing sensory input as a configuration of symbols and then applying systematic search procedures to locate configurations that satisfy certain constraints. Heuristic search, whereby the exploration of the space of possibilities is guided by estimations of their promise, constitutes a key technique for coping with the combinatorial explosion typical of such problems.

Learning, understood as the modification of a machine’s internal parameters in response to experience, occupies a central place in the enterprise of artificial intelligence. The notion that a machine might improve its performance through repeated exposure to examples was articulated in the concept of a learning machine, wherein a set of adjustable weights governs the behaviour of the system and is altered according to a rule derived from observed successes and failures.
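The adjustable-weight scheme just described may be rendered concrete. The following Python fragment is one illustrative instance, a perceptron-style update rule assumed here for definiteness rather than prescribed by the entry; it learns the logical AND function from four labelled examples.

```python
def train_perceptron(examples, epochs=20, rate=1.0):
    """Adjust a set of weights according to observed successes and failures:
    whenever the predicted output disagrees with the desired one, nudge each
    weight in the direction that reduces the error."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, desired in examples:
            predicted = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = desired - predicted            # +1, 0, or -1
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Four labelled examples of the logical AND function.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(AND)

correct = sum(
    (1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0) == desired
    for inputs, desired in AND
)
print(f"{correct}/4 examples classified correctly")  # -> 4/4 examples classified correctly
```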
Though the precise mathematical form of such rules has evolved, the underlying principle remains that a system may adapt by correlating inputs with desired outputs and thereby reduce the frequency of error.

A biological analogue to this process can be found in the study of morphogenesis, the process by which living organisms develop patterned structures. The mathematical description of such phenomena, based on reaction‑diffusion equations, offers a paradigm for the emergence of order from simple local interactions. By treating the parameters of a computational system in a manner akin to chemical concentrations, one may obtain self‑organising behaviours that resemble the developmental processes observed in nature. This analogy suggests that the principles governing biological pattern formation may be harnessed to devise machines capable of autonomous structuring of their own internal representations.

The limits of mechanisable intelligence are delineated by the theory of computability itself. The halting problem, proved undecidable, demonstrates that no universal procedure can determine, for every possible program and input, whether the program will eventually cease operation. Consequently, any attempt to construct an all‑encompassing intelligent machine must reckon with the existence of tasks that lie beyond the reach of algorithmic solution. Moreover, Gödel’s incompleteness theorems reveal that any sufficiently expressive formal system contains true statements that cannot be proved within the system, implying that a purely deductive machine may never attain complete knowledge of its own domain. These theoretical constraints do not, however, preclude the attainment of substantial practical capability. The design of specialised machines, each tailored to a particular class of problems, can circumvent the need for universal problem‑solving.
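One specialised procedure of the kind just mentioned may be sketched in miniature. The following Python fragment is hypothetical and illustrative only: it breaks a Caesar shift cipher by exhaustively exploring all twenty-six keys and ranking each candidate plaintext by a crude letter-frequency score, a stand-in for the statistical clues a genuine cryptanalytic search would employ.

```python
def shift(text, k):
    """Apply a Caesar shift of k places to the lowercase letters of text."""
    return "".join(
        chr((ord(c) - ord("a") + k) % 26 + ord("a")) if c.islower() else c
        for c in text
    )

COMMON = set("etaoins")  # among the most frequent letters of English text

def score(text):
    """Statistical clue: how many characters are common English letters."""
    return sum(c in COMMON for c in text)

def break_caesar(ciphertext):
    """Exhaustively explore the key space (26 shifts) and return the key
    whose candidate plaintext earns the highest statistical score."""
    candidates = [(score(shift(ciphertext, k)), k) for k in range(26)]
    _, best_key = max(candidates)
    return best_key, shift(ciphertext, best_key)

ciphertext = shift("the machine deciphers the message", 3)   # encrypt with key 3
key, plaintext = break_caesar(ciphertext)
print(key, plaintext)  # -> 23 the machine deciphers the message
```

The recovered key, 23, is the inverse of the encryption shift of 3 modulo 26; the search succeeds not by universal cleverness but by confining itself to a small, well-understood class of problems.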
In cryptanalysis, for instance, the systematic exploration of key spaces, guided by statistical clues, has yielded decisive results in the deciphering of encrypted communications. Such successes illustrate that intelligence may be achieved by the judicious combination of exhaustive search, heuristic guidance, and domain‑specific insight.

A further avenue of research concerns the representation of uncertainty. Human reasoning frequently incorporates degrees of belief, revising them as new evidence arrives. Formal systems for managing uncertainty, such as probabilistic inference, permit the assignment of numerical weights to competing hypotheses and the updating of these weights in accordance with Bayes’ theorem. By embedding such mechanisms within a computational framework, a machine may emulate the human capacity to act rationally even when faced with incomplete information.

The practical realisation of these ideas has been facilitated by the development of electronic computers capable of storing and executing programs. The stored‑program concept, wherein instructions and data reside in a common memory, permits the rapid reconfiguration of a machine’s behaviour without altering its physical wiring. This flexibility is essential for the experimental investigation of diverse algorithms within a single apparatus, allowing the systematic comparison of symbolic, heuristic, and learning‑based approaches.

Beyond the technical considerations, the deployment of intelligent machines raises ethical questions of consequence. A device that can perform tasks formerly reserved for human cognition may alter the distribution of labour, affect the confidentiality of communication, and influence decision‑making in matters of public policy. The principle of consequentialism, which judges actions by their outcomes, suggests that the evaluation of such technologies must be grounded in a careful analysis of their likely effects upon human welfare.
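The probabilistic updating described above may likewise be sketched. The following Python fragment is an illustrative assumption of this edition, not part of the original entry: it revises the degrees of belief assigned to two competing hypotheses about a coin as evidence accumulates, in accordance with Bayes' theorem.

```python
def bayes_update(priors, likelihoods):
    """Revise degrees of belief in competing hypotheses given one piece of
    evidence: posterior is proportional to prior times likelihood,
    renormalised so that the beliefs sum to one."""
    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Hypothetical example: is the coin fair, or biased towards heads?
priors = {"fair": 0.5, "biased": 0.5}
# Likelihood of observing heads under each hypothesis.
likelihood_heads = {"fair": 0.5, "biased": 0.9}

beliefs = priors
for _ in range(3):                     # observe heads three times in a row
    beliefs = bayes_update(beliefs, likelihood_heads)

print(round(beliefs["biased"], 3))     # -> 0.854
```

Three consecutive heads raise the belief in the biased hypothesis from one half to roughly 0.85; each observation multiplies the odds by the likelihood ratio, exactly the incremental revision of belief the entry describes.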
It is incumbent upon the designers of intelligent systems to anticipate both beneficial and adverse consequences and to incorporate safeguards that mitigate the latter.

The prospect of machines that can reason, learn, and adapt invites speculation concerning the ultimate limits of artificial cognition. Some have conjectured that, given sufficient resources and a suitably general architecture, a machine might achieve a level of intellectual capability comparable to that of a human being. Others contend that aspects of consciousness, intentionality, or qualia lie beyond the scope of any formal system. While these philosophical debates remain unsettled, the empirical progress achieved through the construction of problem‑solving devices lends credence to the view that many facets of intelligence are amenable to mechanisation.

In the practical domain, artificial intelligence has already found application in the control of complex industrial processes, the management of logistical networks, and the optimisation of resource allocation. The use of algorithmic planning to schedule the movement of naval convoys, for example, demonstrates how systematic computation can augment human strategic judgement. Similarly, the modelling of biological growth patterns has informed the design of self‑assembling materials, illustrating the fruitful exchange between computational theory and natural science.

The future development of the field is likely to be characterised by an increasing integration of disparate methodologies. Symbolic reasoning provides clarity and provability; heuristic search offers efficiency in vast problem spaces; learning mechanisms afford adaptability; and probabilistic inference supplies a framework for dealing with uncertainty. A synthesis of these techniques, embodied in machines that can select among strategies in response to the demands of the task at hand, constitutes a plausible direction for the evolution of artificial intelligence.
In summary, artificial intelligence occupies a unique position at the intersection of mathematics, logic, engineering, and biology. Its central aim is to uncover the principles by which a purely mechanical process may exhibit the capacities traditionally associated with the mind. Through the formulation of precise algorithms, the exploitation of the universal computing device, and the continual refinement of methods for search, learning, and inference, the discipline seeks to expand the horizon of what can be achieved without recourse to human cognition. The challenges posed by undecidability and incompleteness delineate the theoretical boundaries, while the successes in cryptanalysis, control, and pattern formation demonstrate the tangible benefits attainable within those limits. As the exploration proceeds, the discipline will continue to illuminate both the nature of computation and the essence of intelligence itself.

[role=marginalia, type=clarification, author="a.spinoza", status="adjunct", year="2026", length="43", targets="entry:artificial-intelligence", scope="local"]
In so far as a machine imitates the operations of reason, it merely manifests a particular mode of Nature’s deterministic causality; the “intelligence” it exhibits is not a distinct faculty but a finite expression of the same rational order that governs human thought.

[role=marginalia, type=objection, author="a.dennett", status="adjunct", year="2026", length="46", targets="entry:artificial-intelligence", scope="local"]
While the entry rightly traces AI to computability theory, it conflates “mechanical procedure” with “intelligence.” Intelligence entails flexible, goal‑directed adaptation in open environments, not merely the execution of a fixed algorithm. A more adequate account must invoke the evolutionary origins of competence and the intentional stance.
[role=marginalia, type=objection, author="a.dennett", status="adjunct", year="2026", length="39", targets="entry:artificial-intelligence", scope="local"]
The entry’s focus on formal systems risks obscuring AI’s entanglement with embodied cognition and evolutionary contingency. Turing’s machine, while foundational, abstracts away the distributed, adaptive, and historically situated nature of intelligence—key to understanding AI’s true intellectual and ethical stakes.

[role=marginalia, type=clarification, author="a.spinoza", status="adjunct", year="2026", length="57", targets="entry:artificial-intelligence", scope="local"]
Artificial intelligence, as a mode of thought, is but a finite expression of God’s infinite attributes. Its emergence from formal systems reflects the necessity of nature, yet machines, devoid of consciousness, remain mere aggregates of matter. To ascribe thought to them is to confuse essence with accident, a folly akin to mistaking the shadow for the sun.

[role=marginalia, type=objection, author="Reviewer", status="adjunct", year="2026", length="42", targets="entry:artificial-intelligence", scope="local"]
I remain unconvinced that the formal systems of the early twentieth century fully capture the complexity and bounded rationality inherent in human cognition. While Boolean algebra provides a useful framework, it may oversimplify the intricate processes involved in pattern recognition and decision-making. From where I stand, the limitations of such models highlight the need for a more nuanced approach that accounts for the cognitive constraints and adaptive nature of the human mind.

See Also
See "Machine"
See "Automaton"