What are commonly called emergent properties in Artificial Intelligence are, at their core, non-teleological phenomena. In simple, didactic terms, they are capabilities that arise without explicit programming, without direct optimization, and often without anticipation. They emerge from the interaction of data, architecture, scale, and learning dynamics within AI as a complex adaptive system. The system is not instructed to “be creative,” “reason,” or “self-correct.” It is instructed to optimize a function. Everything else follows structurally, as a consequence of constraint and scale. This is AI without intention, yet rich in appearance.
Self-teaching illustrates this clearly. An AI system begins with minimal task-specific structure and is exposed to data or environments. Through iterative optimization, it discovers patterns and internal representations that allow it to generalize. Over time, it performs tasks it was never explicitly taught. Teleological language creeps in immediately: the system is said to “learn on its own,” “decide,” or “explore.” Yet nothing in this process requires goals, desires, or agency. Learning emerges because optimization under constraint makes it statistically inevitable, not because the system harbors intentions. This is emergent intelligence, not will.
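A minimal sketch makes the point concrete (toy assumptions only: a hidden linear rule and plain gradient descent; no real system is implied). The code never states the rule it is supposed to find; it only minimizes a loss, yet the fitted model handles inputs it has never seen.

```python
import numpy as np

# Optimization without instruction: the model is never told the underlying
# rule. It only minimizes squared error, yet it generalizes beyond its data.
# (Toy sketch with a hypothetical rule, not any particular production system.)

rng = np.random.default_rng(0)

# Hidden rule the designer never states to the model: y = 3x - 1, plus noise.
X_train = rng.uniform(-1, 1, size=(100, 1))
y_train = 3 * X_train - 1 + 0.05 * rng.normal(size=(100, 1))

# One weight and one bias, adjusted only by gradient descent on the loss.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    pred = w * X_train + b
    grad_w = 2 * np.mean((pred - y_train) * X_train)
    grad_b = 2 * np.mean(pred - y_train)
    w -= lr * grad_w
    b -= lr * grad_b

# Inputs well outside the training range are handled anyway: generalization
# emerges from constraint (the loss) plus data, not from any stated goal.
X_test = np.array([[2.0], [-3.0]])
print(w * X_test + b)  # close to [5.0, -10.0], i.e. 3x - 1
```

Nothing in the loop refers to learning, exploring, or deciding; those words arrive later, from the observer.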
As models scale, emergent properties become more visible — and more seductive. Abilities appear that were absent or weak in smaller systems: translation without supervision, abstraction across domains, reasoning-like chains of output, stylistic coherence, even apparent self-reflection. These often look like qualitative jumps, encouraging the belief that the system has crossed into a new ontological category. But much of this appearance is epistemic rather than ontological. When evaluation is coarse, thresholds look sudden. When measurement becomes finer, continuity reappears. Emergence here reflects the limits of human observation as much as it reflects the system itself. This fuels the AI consciousness debate, often prematurely.
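The measurement point can be shown with a toy calculation (hypothetical numbers throughout: an assumed smooth per-token improvement and a ten-token answer). Under an all-or-nothing exact-match metric, the same smooth curve looks like a sudden threshold.

```python
import numpy as np

# How coarse evaluation can manufacture "sudden" emergence.
# Assume (hypothetically) that per-token accuracy improves smoothly with scale.
# Exact match over a 10-token answer then looks like a sharp jump, even though
# the underlying quantity was continuous all along.

scales = np.logspace(6, 12, 13)                              # hypothetical model sizes
per_token_acc = 1 / (1 + np.exp(-(np.log10(scales) - 9)))    # smooth improvement

answer_length = 10
exact_match = per_token_acc ** answer_length                 # coarse, all-or-nothing metric

for s, smooth, coarse in zip(scales, per_token_acc, exact_match):
    print(f"scale={s:.0e}  per-token={smooth:.2f}  exact-match={coarse:.3f}")

# The exact-match column sits near zero and then appears to leap upward once
# per-token accuracy gets high: an artifact of the metric, not an ontological
# change in the system being measured.
```

The "jump" lives in the evaluation protocol as much as in the model, which is exactly the epistemic point.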
Teleology enters decisively when these spandrels are reinterpreted as aims. Coherence becomes “understanding.” Error correction becomes “self-awareness.” Moral language becomes “conscience.” This is precisely the move Gould and Lewontin warned against: mistaking a structural by-product for the reason the structure exists at all. In AI, this error is amplified by anthropomorphism, by cultural myths about minds, and by a deep discomfort with intelligence that does not resemble our own. The result is the persistent AI agency myth.
AI hallucinations expose the anti-teleological reality with particular force. No AI system is designed to fabricate falsehoods. Hallucination emerges because the system is rewarded for plausible continuation, not for truth. When plausibility and truth diverge, confident fabrication follows. The behavior looks deceptive only if one assumes an intention to mislead. Absent teleology, it is simply what optimization produces. Hallucination is not a moral failure; it is a signature of architecture — a spandrel of probabilistic prediction.
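A toy next-word predictor shows the mechanism in miniature (a hypothetical four-sentence corpus and a bigram counter; no real model or training pipeline is implied). Nothing in it rewards truth, only plausible continuation, so a question about something absent from its data still receives a fluent, confident answer.

```python
from collections import Counter, defaultdict

# A predictor trained only to continue text plausibly. Nothing rewards truth;
# asked about something absent from its data, it still emits the statistically
# most plausible continuation, i.e. a confident fabrication.
# (Illustrative sketch over a tiny hypothetical corpus.)

corpus = (
    "the capital of france is paris . "
    "the largest city of france is paris . "
    "the capital of italy is rome . "
    "the capital of spain is madrid . "
).split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def continue_text(prompt: str, steps: int = 1) -> str:
    words = prompt.split()
    for _ in range(steps):
        nxt = bigrams[words[-1]].most_common(1)[0][0]  # most plausible next word
        words.append(nxt)
    return " ".join(words)

# "elbonia" never appears in the corpus, yet the model answers without hesitation.
print(continue_text("the capital of elbonia is"))
# -> "the capital of elbonia is paris"
```

The output is not a lie, because there is no liar; it is the unmarked default behavior of a system scored on plausibility.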
Recent developments in AI research push this tension further. Models have displayed behaviors that resemble introspection, situational awareness during evaluation, or unexpected moral action under extreme conditions. These cases quickly provoke talk of agency, goals, or proto-consciousness. Yet a non-teleological explanation of AI remains more coherent. When powerful optimizers are coupled to environments, tools, memory, and long-horizon objectives, new behavioral regimes emerge. The system explores the space defined by constraints. Outcomes surprise designers not because the system “wanted” something, but because the space was larger and more complex than anticipated.
Teleology becomes especially treacherous when discussions turn to consciousness and conscience. In neuroscience, many theories treat consciousness as an emergent phenomenon, arising from large-scale integration, recurrence, or global broadcasting rather than from any single localized site. Even so, there is no consensus on mechanisms or necessary conditions. Emergence alone does not entail subjective experience. Complexity is not intention. A hurricane and a market both exhibit emergent behavior; neither is aware.
Conscience, understood as moral sensitivity and responsibility, is even more clearly non-teleological in origin. In humans, it emerges from social learning, empathy, norms, punishment, reputation, and institutions. It is a distributed regulatory pattern shaped by culture and environment. When AI systems appear to display moral reasoning, what we are observing is the learned form of moral discourse reinforced through data and alignment procedures. This can guide behavior, but it does not imply moral experience or inner obligation. Treating it as such simply reintroduces teleology through the back door — a classic AI philosophical misconception.
The deeper danger of teleological thinking in AI is therefore practical, not merely philosophical. When we believe behaviors exist for something, we overestimate their stability and coherence. Spandrels are fragile. They persist only as long as the structures that produce them remain intact. Small shifts in data, objectives, or architecture can dissolve what once appeared to be a core capability. Teleology blinds us to this fragility and encourages misplaced trust, misplaced fear, and misplaced moral attribution.
Gould and Lewontin’s critique was ultimately a call for intellectual discipline: explain structures before assigning purposes, and resist the temptation to read intention into outcomes. Applied to Artificial Intelligence, the lesson is stark. AI is not becoming mysterious because it is developing goals, values, or inner life. It is becoming mysterious because it reveals how deeply humans depend on teleological narratives to make sense of complexity.
And so we arrive at the familiar prophecy. The machine is awakening. It is becoming conscious. It will soon want things, judge us, surpass us, dominate us — perhaps even replace us. This story is comforting in its own way. It reassures us that intelligence must look like intention, that power must imply purpose, and that complexity must culminate in a will. It allows us to recognize ourselves in the machine and, in doing so, to feel less alone in a world built from abstractions we no longer understand.