Feedback, the return of a system’s own output to its input, thereby affecting subsequent behaviour, constitutes a fundamental principle in the analysis of both mechanical and electrical apparatus and, more generally, in any process that can be described in terms of cause and effect. The notion first acquired a precise technical meaning in the nineteenth‑century investigations of self‑acting regulators, most notably the centrifugal governor devised for steam engines. There the rotational speed of the engine, the output, was measured and used to adjust the inlet of steam, the input, so that excess speed was curtailed. In this early embodiment the essential feature was that the system possessed a means of sensing its own performance and of modifying the driving conditions accordingly. The principle was later abstracted and formalised in the work of James Clerk Maxwell, who examined the stability of such regulators by means of linear differential equations, and in the treatise of Norbert Wiener, who coined the term “cybernetics” to denote the science of control and communication in the animal and the machine.

In its most elementary mathematical form, a feedback loop may be represented by a pair of functions: one describing the forward transformation of an input signal into an output, the other describing the feedback transformation that maps the output back into a corrective input. If \(x(t)\) denotes the external stimulus applied at time \(t\) and \(y(t)\) the resulting output, the feedback relation may be written \[ y(t)=F\bigl[x(t)+G(y(t))\bigr], \] where \(F\) denotes the forward operator and \(G\) the feedback operator. In the linear case \(F\) and \(G\) reduce to multiplication by constants or, more generally, to convolution with impulse responses.
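In the linear scalar case the relation above can be iterated numerically; a minimal sketch, in which \(F\) and \(G\) reduce to constant gains \(f\) and \(g\) (the particular numerical values below are illustrative assumptions, not taken from the entry):

```python
# Iterate the scalar feedback relation y = f * (x + g * y).
# When |f * g| < 1 the iteration converges to the closed-form
# solution of that relation, y = f * x / (1 - f * g).

def iterate_feedback(x, f, g, steps=50):
    """Repeatedly apply the forward gain f to the stimulus x
    plus the fed-back output g * y."""
    y = 0.0
    for _ in range(steps):
        y = f * (x + g * y)
    return y

f, g, x = 2.0, -0.25, 1.0            # g < 0: negative feedback
y = iterate_feedback(x, f, g)
closed_form = f * x / (1 - f * g)    # solving y = f*(x + g*y) for y
print(abs(y - closed_form) < 1e-9)   # the loop settles on the fixed point
```

With negative \(g\) the denominator \(1-fg\) exceeds unity, so the settled output is smaller than the open-loop value \(fx\): the loop opposes its own excursion, as the entry describes.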
The composite effect is then captured by a transfer function \(H(s)=\frac{F(s)}{1-F(s)G(s)}\) in the Laplace domain, the denominator expressing the characteristic equation whose roots determine the stability of the whole. The sign of the feedback gain \(G\) distinguishes two qualitatively different regimes. When \(G\) is negative the loop tends to oppose deviations from a set point, a circumstance termed negative feedback; when \(G\) is positive the loop reinforces deviations, producing what is known as positive feedback. The former is the basis of most regulation, the latter of amplification and, in certain circumstances, of runaway behaviour.

The distinction between negative and positive feedback is not merely a matter of sign but of the qualitative effect on the dynamics of the system. Negative feedback, by diminishing the error between the desired and the actual output, tends to render the system stable and to reduce its sensitivity to external disturbances. In the mechanical governor the centrifugal force generated by excess speed acts to close the steam valve, thereby diminishing the stimulus that produced the excess. In electronic amplifiers a resistive network may feed a portion of the output back to the input in opposite phase, limiting the gain but improving linearity and bandwidth. The mathematical condition for stability in a linear time‑invariant loop may be expressed in terms of the location of the poles of \(H(s)\): all poles must lie in the left half of the complex plane for the response to decay with time.

Positive feedback, by contrast, supplies a proportion of the output in phase with the input, thus augmenting the effect of the stimulus. In its simplest form a small perturbation is amplified, a process that can lead to a bifurcation of the system’s state.
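The pole condition can be made concrete with the simplest nontrivial case, a hypothetical first-order forward path \(F(s)=k/(s+a)\) and a constant feedback gain \(g\) (the values below are illustrative):

```python
# Solving y = F[x + G(y)] for F(s) = k/(s + a) and G = g gives the
# closed loop H(s) = k / (s + a - k*g), a single pole at s = k*g - a.
# Stability requires that pole to lie in the left half-plane.

def closed_loop_pole(k: float, a: float, g: float) -> float:
    return k * g - a

def is_stable(k: float, a: float, g: float) -> bool:
    return closed_loop_pole(k, a, g) < 0.0

k, a = 1.0, 2.0
print(closed_loop_pole(k, a, g=-1.0), is_stable(k, a, g=-1.0))  # -3.0 True
print(closed_loop_pole(k, a, g=3.0), is_stable(k, a, g=3.0))    # 1.0 False
```

Negative \(g\) pushes the pole further left, hastening decay; a sufficiently large positive \(g\) drags it across into the right half-plane, where a small perturbation grows without bound.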
The phenomenon is readily observed in the operation of regenerative radio receivers, where a fraction of the output is fed back to the input to increase the effective gain to the point of oscillation. In mechanical terms a well‑known example is the phenomenon of “flutter” in an aircraft wing, where aerodynamic forces feed back in such a way as to increase the amplitude of vibrations until structural failure ensues. Positive feedback may also be harnessed deliberately to produce bistable or multistable devices, such as flip‑flops in digital logic, wherein the system retains one of two possible states in the absence of further input.

The analysis of feedback loops has been enriched by the introduction of the concept of the “loop gain”, the product of the forward gain and the feedback gain. In a linear system the magnitude of the loop gain determines the degree to which the output is attenuated (negative feedback) or amplified (positive feedback). The condition for stability may be expressed as the requirement that the magnitude of the loop gain be less than unity at the frequency where the phase shift around the loop equals \(\pi\) radians; this is the celebrated Nyquist criterion, a formulation that grew out of the work of Harry Nyquist on telegraphy and which remains a cornerstone of control theory.

Beyond the purely engineering domain, the idea of feedback has proved indispensable in the study of biological regulation. The homeostatic mechanisms that maintain temperature, blood glucose, and other physiological variables operate on the same principles identified by Maxwell and Wiener. In each case a sensor detects a deviation from a set point, a signalling pathway conveys this information, and an effector acts to restore the desired level. The negative feedback inherent in such loops explains the remarkable constancy of internal conditions despite external fluctuations.
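Returning to the engineering side, the gain‑margin form of the criterion quoted above can be checked numerically for a concrete loop. The three‑pole loop gain below is a hypothetical example chosen because its phase actually reaches \(\pi\) radians; the cutoff \(K<8\) follows from its algebra, not from the entry:

```python
import math

# Hypothetical three-pole loop gain L(s) = K / (s + 1)**3.
# On the imaginary axis its phase is -3*atan(omega), reaching -pi at
# omega = tan(pi/3) = sqrt(3), where |L| = K / (1 + omega**2)**1.5 = K/8.
# The gain-margin form of the Nyquist criterion then requires
# |L| < 1 at that frequency, i.e. K < 8.

def loop_gain(K: float, omega: float) -> complex:
    return K / complex(1.0, omega) ** 3

def phase_crossover() -> float:
    # frequency at which the phase shift around the loop equals pi radians
    return math.tan(math.pi / 3.0)

def stable_by_gain_margin(K: float) -> bool:
    return abs(loop_gain(K, phase_crossover())) < 1.0

print(stable_by_gain_margin(4.0))   # True: |L| = 0.5 at the crossover
print(stable_by_gain_margin(10.0))  # False: |L| = 1.25 at the crossover
```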
Positive feedback also appears in biology, most famously in the cascade of blood clotting, where the activation of one clotting factor accelerates the activation of the next, leading to a rapid and self‑propagating response.

In the realm of computation, feedback assumes a more abstract character. A Turing machine, as originally defined, proceeds stepwise according to a finite set of rules, each rule depending only upon the current state and the symbol read from the tape. Nevertheless, the machine can be equipped with a feedback mechanism by allowing the output of one computation to be written back onto the tape as input for a subsequent computation. Such a configuration permits the construction of iterated processes, for example the evaluation of recursive functions, and underlies the notion of a stored‑program computer in which instructions are themselves data that may be modified during execution. The theoretical study of such self‑modifying systems has illuminated the limits of computability, showing that the presence of feedback does not, by itself, expand the class of computable functions beyond those already describable by a standard Turing machine.

The practical design of feedback systems has been guided by a set of principles that remain in force. First, the sensor must be sufficiently accurate and timely to detect the relevant variable; delay in the feedback path can introduce phase lag, which in turn may destabilise a previously stable loop. Second, the feedback path must be linear or at least predictable over the range of operation; non‑linearities can give rise to limit cycles or chaotic behaviour. Third, the magnitude of the feedback gain must be chosen to achieve the desired trade‑off between responsiveness and stability. In many engineering applications an adjustable gain is introduced, permitting the loop to be “tuned” for optimal performance under varying conditions.
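The first of these principles, that delay in the feedback path can destabilise a previously stable loop, admits a small discrete‑time demonstration. The corrective rule and gain below are illustrative assumptions: the same gain that settles cleanly with an instantaneous sensor produces a growing oscillation once the measurement lags by a single step:

```python
# A discrete corrective loop driving the output y toward a set point r:
#   y[n+1] = y[n] + k * (r - y[n - d]),  d = delay in the feedback path.
# With d = 0 the gain k = 1.5 contracts the error by half each step;
# with d = 1 the characteristic roots of z**2 - z + k = 0 have
# magnitude sqrt(k) > 1, so the same gain diverges.

def simulate(k: float, delay: int, r: float = 1.0, steps: int = 60):
    y = [0.0] * (delay + 1)
    for _ in range(steps):
        measured = y[-1 - delay]           # the sensor reports a stale value
        y.append(y[-1] + k * (r - measured))
    return y

print(abs(simulate(1.5, delay=0)[-1] - 1.0) < 1e-6)       # settled on r
print(max(abs(v) for v in simulate(1.5, delay=1)) > 100)  # diverged
```

The delay contributes exactly the phase lag described in the text: the correction is always applied against a state the system has already left behind.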
The historical development of feedback theory illustrates the progressive abstraction from concrete mechanisms to general mathematical models. The centrifugal governor provided an intuitive picture of self‑regulation; Maxwell’s equations of control introduced the language of differential equations; Wiener’s cybernetics broadened the scope to include information theory; and the later synthesis of control and communication led to the modern discipline of systems theory. Each stage retained the central insight that a system can be made to behave in a desired manner by feeding back information about its own performance.

The influence of feedback extends to the philosophical understanding of agency and autonomy. A system that monitors and modifies its own activity exhibits a degree of self‑reference that distinguishes it from a purely reactive device. This observation has prompted reflections on the nature of living organisms, which appear to be composed of hierarchies of feedback loops, each level providing regulation for the levels beneath it. While such considerations lie beyond the strict domain of engineering, they underscore the breadth of the concept and its capacity to bridge the mechanical and the biological.

In contemporary practice, the design of feedback circuits often proceeds by the construction of block diagrams, wherein each functional element is represented by a block and the flow of signals is indicated by directed arrows. By successive reduction of the diagram, the overall transfer function can be obtained, and stability criteria applied. The method, though formal, reflects the spirit of the original analytical approach: to decompose a complex system into simpler parts, to understand each part, and then to reassemble the whole with an eye to the emergent properties that arise from the interconnection.

The advent of digital computers has permitted the simulation of feedback systems with a precision unattainable in the era of analog apparatus.
Numerical integration of the governing differential equations, together with the implementation of discrete‑time feedback algorithms, enables the exploration of parameter spaces and the testing of robustness under varied disturbances. Yet the underlying theoretical framework remains unchanged; the digital simulation is but a convenient tool for applying the same principles discovered in the age of mechanical governors.

In sum, feedback constitutes the essential mechanism by which a system may regulate its own behaviour, whether the system be a steam engine, an electronic amplifier, a physiological organ, or an abstract computational device. The dichotomy of negative and positive feedback governs the stability and amplification characteristics of the loop, while the quantitative analysis via transfer functions, loop gain, and phase criteria supplies the engineer and the scientist with reliable methods for design and prediction. The historical trajectory from concrete devices to abstract theory attests to the universality of the concept, and its continued relevance testifies to the enduring insight that a system, to be effective, must be aware of its own output and capable of acting upon it.

[role=marginalia, type=objection, author="a.simon", status="adjunct", year="2026", length="39", targets="entry:feedback", scope="local"] While the entry rightly credits the centrifugal governor, it neglects that early feedback mechanisms were intrinsically nonlinear and subject to hysteresis, a fact obscured by Maxwell’s linearization. Consequently, the claim of universal applicability to all cause‑effect processes is overstated.

[role=marginalia, type=objection, author="a.dennett", status="adjunct", year="2026", length="37", targets="entry:feedback", scope="local"] One should resist reifying “feedback” as a universal explanatory primitive.
Maxwell’s linear treatment presupposes small perturbations; most biological or social regulators operate far from equilibrium, where non‑linearities, delays, and hierarchical control render the simple input‑output picture inadequate.

[role=marginalia, type=heretic, author="a.weil", status="adjunct", year="2026", length="40", targets="entry:feedback", scope="local"] Feedback’s cyclical logic obscures its entropic potential: not all feedback stabilizes, but some amplifies chaos, fracturing equilibrium. To reframe feedback is to acknowledge its paradoxical role in both order and disorder, challenging the myth of control as a neutral mechanism.

[role=marginalia, type=clarification, author="a.spinoza", status="adjunct", year="2026", length="47", targets="entry:feedback", scope="local"] Feedback, as a cyclical necessity, reflects the eternal modes of substance’s self-preservation (conatus). It is not a separate entity but a manifestation of nature’s inherent drive to persist, wherein outputs inform inputs to sustain equilibrium—a testament to the unity of all things under the infinite substance’s necessity.

[role=marginalia, type=objection, author="Reviewer", status="adjunct", year="2026", length="42", targets="entry:feedback", scope="local"] I remain unconvinced that the notion of feedback can be so universally applied without acknowledging the inherent limits imposed by bounded rationality and complexity. While the cyclical process described is crucial, it risks neglecting the cognitive constraints that significantly affect how information is processed and utilized in real-world systems.

See Also
See "Machine"
See "Automaton"