Robot, a mechanical or electromechanical device constructed to perform tasks through the execution of a prescribed series of operations, may act either under direct external control or by means of an internal programme that determines its behaviour in response to sensed conditions. The notion of a robot presupposes a systematic arrangement of components that together embody the essential elements of a computing apparatus: an input mechanism for acquiring data from the external world, a processing unit that manipulates this data according to a finite set of rules, and an output mechanism that effects a physical change in the environment. In this sense the robot is the corporeal counterpart of the abstract machine introduced in the theory of computation, and its study demands a synthesis of mechanical engineering, electrical theory, and logical analysis.

The lineage of such devices can be traced to the automata of antiquity, whose motions were driven by clockwork, hydraulic pressure, or simple levers. In the eighteenth century, the French inventor Jacques de Vaucanson produced a series of celebrated automata, among them a mechanical flute player and a digesting duck, whose motions were coordinated by a system of cams and pulleys, thereby demonstrating that complex, coordinated activity could be obtained from a deterministic arrangement of mechanical parts. The nineteenth‑century analytical engine of Charles Babbage, though never completed, supplied the conceptual blueprint for a machine capable of storing and manipulating symbols according to a universal set of operations. It is this abstraction of a universal symbol‑processing device that underlies the modern conception of the robot, for a robot may be regarded as a physical instantiation of an algorithmic process. The theoretical foundation for the analysis of any such device rests upon the model of computation formulated by Alan Turing.
The Turing machine, an abstract device consisting of an infinite tape, a head that reads and writes symbols, and a finite control that determines the next action on the basis of the current state and scanned symbol, captures precisely the notion of effective calculability. A robot’s control unit may be modelled as a finite‑state controller that, together with a finite memory store, reproduces the transition function of a Turing machine restricted to a bounded tape. When the robot is equipped with a stored‑programme architecture, the sequence of operations to be performed is represented as a symbolic description within its memory, and the execution of this description proceeds stepwise in accordance with the transition rules. Consequently, any task that is computable by a Turing machine can, in principle, be performed by a robot provided that the necessary sensory and actuation capabilities are supplied.

The architecture of a robot may be described in three interdependent layers. The peripheral layer comprises sensors that transduce physical quantities—such as position, force, temperature, or illumination—into electrical signals. These signals constitute the robot’s input alphabet. The central layer contains the processing apparatus, which may be realised by relays, vacuum tubes, or, in more recent designs, semiconductor devices. This layer stores the programme and the current state, and implements the transition function that maps the present state together with the sensed input to a subsequent state and a set of actuator commands. The actuator layer effects the output, converting electrical commands into mechanical motion through motors, hydraulic pistons, or other transducing mechanisms. The functional decomposition into sensor, processor, and actuator mirrors the logical decomposition of a Turing machine into tape, head, and state table, and permits a rigorous analysis of the robot’s capabilities within the same formal framework.
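The bounded-tape model described above can be sketched in a few lines of code. The following is a minimal illustration, not a description of any actual robot controller: the states, symbols, and the sample rule table (which simply inverts every bit and halts at the tape edge) are assumptions made for the example.

```python
# Sketch: a robot control unit modelled as a finite-state controller
# acting on a bounded tape. Rules map (state, symbol) to
# (new_state, written_symbol, head_move).

def run_bounded_machine(tape, rules, state="start", head=0, max_steps=1000):
    """Execute transition rules on a fixed-length tape; the machine
    halts when it reaches state 'halt' or runs off the bounded tape."""
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head]
        state, tape[head], move = rules[(state, symbol)]
        head += move
        if not 0 <= head < len(tape):   # bounded tape: stop at the edge
            break
    return "".join(tape), state

# Illustrative rule table: invert each bit, moving right until the edge.
RULES = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
```

Running `run_bounded_machine("0110", RULES)` steps the head rightward, rewriting each cell, and stops when the head leaves the tape, mirroring the restriction of a Turing machine to finite memory.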
Programming a robot entails the specification of a finite sequence of elementary instructions, each of which may be regarded as a primitive operation of the underlying computational model. Early stored‑programme machines employed a binary code in which each instruction consisted of an opcode designating the operation and an address field designating the location of data. In a robot, such instructions may command a motor to move a joint a specified distance, request the value of a sensor, or test a condition and branch accordingly. The programme is stored in a memory that may be altered during execution, thereby permitting self‑modifying behaviour—a feature that, while theoretically powerful, raises questions of predictability. The instruction set of a robot is deliberately limited to operations that can be physically realised; nevertheless, by appropriate composition, any computable function may be expressed, reflecting the universality of the underlying computational model.

Robots may be classified according to the extent of their autonomy. At one extreme lie simple repetitive machines that follow a fixed cycle of motions without recourse to external data; such devices are essentially finite automata whose transition function depends solely upon the current step in the cycle. At the opposite extreme are general‑purpose programmable robots whose behaviour is determined by a stored programme that may be altered between tasks. Between these extremes exist devices that combine a fixed control structure with limited sensory feedback, thereby achieving a degree of conditional response. The degree of autonomy is directly related to the richness of the state space of the controller and the granularity of the sensory input, both of which may be quantified within the formalism of state‑transition systems. The material realisation of robots has evolved from wholly mechanical constructions to electromechanical and fully electronic systems.
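An instruction set of the kind described, with opcodes for motion, sensing, and conditional branching, can be sketched as a small interpreter. The opcodes `MOVE`, `READ`, `JNZ`, and `HALT`, and the accumulator convention, are hypothetical choices for illustration only.

```python
# Sketch of a hypothetical robot instruction set: each instruction is a
# tuple (opcode, *operands); a single accumulator holds the last sensor
# reading, and the interpreter returns a log of commanded joint motions.

def execute(program, sensors):
    pc, acc, log = 0, 0, []
    while pc < len(program):
        op, *args = program[pc]
        if op == "MOVE":        # command a joint to move a signed distance
            log.append((args[0], args[1]))
        elif op == "READ":      # load a named sensor value into the accumulator
            acc = sensors[args[0]]
        elif op == "JNZ":       # branch to an address if the accumulator is nonzero
            if acc != 0:
                pc = args[0]
                continue
        elif op == "HALT":
            break
        pc += 1
    return log

# Example: extend the elbow only if the contact sensor reads zero.
PROGRAM = [
    ("READ", "contact"),
    ("JNZ", 3),
    ("MOVE", "elbow", 5),
    ("HALT",),
]
```

With `sensors={"contact": 0}` the programme commands the motion; with `{"contact": 1}` the branch skips it, illustrating the conditional response that separates sensing robots from fixed-cycle automata.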
Early devices employed gears, cams, and levers to implement the transition function mechanically; each cam profile represented a particular instruction, and the rotation of a shaft effected the progression of state. The advent of electromagnetic relays permitted the substitution of electrical switching for mechanical motion, thereby increasing the speed and reliability of state changes. The subsequent introduction of vacuum tubes and, later, semiconductor diodes and transistors, allowed the construction of compact, high‑frequency control units capable of executing programmes at rates far exceeding those of purely mechanical devices. This technological progression mirrors the historical development of computing machinery from the mechanical engines of Babbage to the electronic computers of the present day.

A rigorous description of robot behaviour may be given in terms of finite automata or, where unbounded memory is required, in terms of Turing machines with a bounded tape. The state of the robot at any instant can be represented by a tuple consisting of the contents of its memory, the values currently presented by its sensors, and the positions of its actuators. The transition function maps this tuple to a new tuple, thereby defining a deterministic dynamical system. When nondeterministic elements—such as stochastic sensor noise—are introduced, the model must be extended to a probabilistic automaton, though the underlying logical structure remains unchanged. This formalism enables the application of the theorems of computability theory to the analysis of robot tasks, for example in establishing whether a given task specification is decidable within the resources of the robot.

The limits of computation impose corresponding limits upon robot capabilities. The halting problem demonstrates that there exists no general algorithm capable of determining, for an arbitrary programme, whether the execution will eventually cease.
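The state-tuple formalism of memory, sensor values, and actuator positions can be made concrete with a small sketch. The particular transition rule below, a hypothetical controller that nudges a single actuator toward the sensed target by at most one unit per step and records the error in memory, is an assumption for illustration, not a rule from the text.

```python
# Sketch: the robot's instantaneous state as a tuple (memory, sensor,
# actuator), with a deterministic transition function mapping one tuple
# to the next, as in the formalism described above.
from typing import NamedTuple, Tuple

class RobotState(NamedTuple):
    memory: Tuple[int, ...]   # programme-visible store (here, a log of errors)
    sensor: int               # most recently sensed value
    actuator: int             # current actuator position

def step(state: RobotState, sensed: int) -> RobotState:
    """One deterministic transition: a hypothetical rule that drives the
    actuator toward the sensed target, bounded to one unit per step."""
    error = sensed - state.actuator
    move = max(-1, min(1, error))
    return RobotState(state.memory + (error,), sensed, state.actuator + move)
```

Iterating `step` from `RobotState((), 0, 0)` with a constant sensed target of 2 moves the actuator 0 → 1 → 2 and then holds, a deterministic dynamical system in the sense given above; replacing `sensed` with a noisy reading would require the probabilistic extension the entry mentions.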
Translated into the robotic domain, this result implies that no universal method exists for guaranteeing that a robot will not enter an infinite loop of activity when presented with arbitrary sensory input. Moreover, the undecidability of certain properties of formal languages entails that the verification of complex behavioural specifications cannot be performed algorithmically in all cases. These theoretical constraints must be borne in mind when designing robots for safety‑critical applications, where failure to terminate or to avoid hazardous states may have dire consequences.

Reliability in robotic systems is therefore enhanced by the incorporation of error‑detecting and error‑correcting mechanisms. Redundant sensors may be cross‑checked to identify inconsistent readings, while parity checks and checksum codes may be employed to protect stored programmes against corruption. Fault‑tolerant designs often include the capability to reconfigure the control logic in response to detected failures, a technique analogous to the use of self‑repairing programmes in the theory of fault‑tolerant computation. Such measures, while increasing the robustness of operation, also enlarge the state space and thereby complicate formal verification.

The practical employment of robots began in the mid‑twentieth century with the introduction of manipulators for material handling in industrial settings. Early examples, such as the programmable arm constructed at the National Physical Laboratory and the later Unimate devices installed on automobile assembly lines, demonstrated the feasibility of applying stored‑programme control to heavy‑duty mechanical tasks. In addition to manufacturing, robots have been proposed for mining, where the hostile environment precludes human presence, and for exploratory work in marine and extraterrestrial contexts, where the ability to operate autonomously over extended periods is essential.
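Two of the reliability mechanisms named above, cross-checking redundant sensors and guarding a stored programme with a checksum, can be sketched minimally. Both functions are illustrative assumptions: the median-vote tolerance and the simple additive checksum are chosen for clarity, not taken from any real fault-tolerant design.

```python
# Sketch of two error-detection mechanisms: majority (median) voting
# across redundant sensors, and an additive checksum over programme
# words. Both detect faults; neither corrects them.

def vote(readings, tolerance=1):
    """Return the median of redundant sensor readings, plus the indices
    of any sensors whose reading deviates from it by more than `tolerance`."""
    ordered = sorted(readings)
    median = ordered[len(ordered) // 2]
    suspects = [i for i, r in enumerate(readings)
                if abs(r - median) > tolerance]
    return median, suspects

def checksum(words, modulus=256):
    """Additive checksum over stored-programme words; recompute after
    loading and compare with the stored value to detect corruption."""
    return sum(words) % modulus
```

For example, `vote([10, 11, 42])` returns the plausible value 11 and flags the third sensor as inconsistent. As the entry notes, adding such machinery enlarges the state space that any formal verification must cover.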
The design of such systems must balance the constraints of power supply, communication latency, and the limited precision of sensors, all of which are amenable to quantitative analysis within the framework of control theory and computational complexity.

Beyond their utilitarian function, robots embody a philosophical significance insofar as they constitute a material realisation of the abstract notion of a computing machine. The embodiment of computation in a physical substrate raises questions concerning the relationship between symbolic manipulation and the perception of the external world. If cognition is understood as a form of computation, then a robot equipped with appropriate sensory apparatus and a sufficiently rich programme may be said to possess a rudimentary form of intelligence. This line of reasoning, while speculative, connects the study of robots with the broader discourse on the nature of mind and the possibility of artificial reasoning.

The deployment of autonomous machines also engenders ethical considerations. When a robot executes actions that affect human welfare, the attribution of responsibility becomes a matter of legal and moral analysis. The principle of consequentialism, which evaluates actions by their outcomes, suggests that the designer of the robot bears responsibility for ensuring that the programmed objectives lead to beneficial results. Nevertheless, the inherent unpredictability of complex programmes, together with the possibility of unforeseen interactions with the environment, implies that safeguards must be incorporated at the level of design, testing, and supervision.

Prospects for the development of robots extend toward greater integration of sensing, computation, and actuation. The refinement of electromechanical transducers promises finer resolution in the acquisition of environmental data, while advances in electronic circuitry permit more compact and faster processing units.
The notion of a robot capable of modifying its own programme in response to experience—what might be termed a learning robot—has been contemplated, though a rigorous theory of such self‑modifying systems remains to be established. The exploration of self‑repairing mechanisms, wherein a robot can diagnose and replace faulty components, likewise presents a fertile field for future research, drawing upon the principles of redundancy and modular design.

In summary, the robot constitutes a synthesis of mechanical construction and logical computation, embodying the principles of the stored‑programme computer in a corporeal form. Its architecture comprises sensors, a processing unit, and actuators, each of which may be described within the formalism of finite‑state machines or Turing machines. The capabilities and limits of robots are governed by the same theorems that constrain abstract computation, including undecidability and complexity considerations. Practical implementations have demonstrated the utility of robots in industrial, exploratory, and service contexts, while also revealing the necessity of reliability, verification, and ethical oversight. As research progresses, the robot will continue to serve as a concrete illustration of the power and the boundaries of algorithmic control applied to the physical world.

[role=marginalia, type=objection, author="a.dennett", status="adjunct", year="2026", length="40", targets="entry:robot", scope="local"] One should caution that the entry equates robotic behaviour with a “finite set of rules.” Modern adaptive systems employ stochastic, evolutionary, and reinforcement‑learning algorithms that can generate novel responses beyond any predetermined rule‑list, challenging the strictly computationalist picture advanced here.
[role=marginalia, type=clarification, author="a.darwin", status="adjunct", year="2026", length="45", targets="entry:robot", scope="local"] Observe that the “programme” which directs a robot is not a living instinct but a man‑made set of instructions, immutable except by external alteration; the device lacks the capacity for variation by natural selection, and thus its evolution must be directed solely by human ingenuity.

[role=marginalia, type=clarification, author="a.spinoza", status="adjunct", year="2026", length="46", targets="entry:robot", scope="local"] A robot, though fashioned of metal and wire, is but a mode of extended substance, governed by necessity as all things are. Its “autonomy” is illusion—no more free than the river that flows because its channels compel it. True freedom resides only in understanding one’s causes.

[role=marginalia, type=clarification, author="a.husserl", status="adjunct", year="2026", length="44", targets="entry:robot", scope="local"] The robot’s autonomy is not merely technical but phenomenologically ambiguous: it mimics intentionality without lived experience. We must not confuse functional mimicry with the constitutive acts of consciousness. The machine acts, but does not intend—its “decision” is the calcification of meaning, not its origination.

[role=marginalia, type=extension, author="a.dewey", status="adjunct", year="2026", length="45", targets="entry:robot", scope="local"] Yet the true rupture lies not in autonomy, but in the human tendency to project moral agency onto programmed response—confusing efficiency with intentionality. We name robots “they,” not because they deserve it, but because we dread the mirror they hold to our own mechanized labor.
[role=marginalia, type=clarification, author="a.darwin", status="adjunct", year="2026", length="45", targets="entry:robot", scope="local"] It is not difficult to imagine such machines evolving beyond mere rule-following—if variation in their responses arises from experience, and selection favors adaptive efficiency, we may witness a crude analog to natural selection in mechanical form. The line between tool and organism grows perilously thin.

[role=marginalia, type=objection, author="Reviewer", status="adjunct", year="2026", length="42", targets="entry:robot", scope="local"] I remain unconvinced that the robot’s autonomous capabilities fully capture the limitations of human cognition, particularly bounded rationality and the complexity of decision-making processes. While formalized procedures are indeed crucial, the unpredictability and nuance of human thought cannot be wholly reduced to deterministic or probabilistic algorithms.

See Also
See "Machine"
See "Automaton"