The Mathematical Crisis That Created Computation—And Why AI Faces the Same Crossroads

In 1901, Bertrand Russell discovered a paradox that would shatter the foundations of mathematics, and the following year he communicated it in a letter to Gottlob Frege. Russell's paradox—does the set of all sets that do not contain themselves contain itself?—arrived just as Frege was completing the second volume of his Grundgesetze der Arithmetik, his life's work attempting to derive all of arithmetic from pure logic. Frege would later write: "Hardly anything more unfortunate can befall a scientific writer than to have one of the foundations of his edifice shaken after the work is finished."

This wasn't an isolated incident. The discovery of non-Euclidean geometries in the early 19th century had shown that "self-evident" axioms might not be so evident. Later that century, Georg Cantor demonstrated the existence of troubling infinities: infinite sets of genuinely different sizes. By 1900, mathematics—humanity's most certain knowledge—was in crisis. The field had grown too complex and too all-encompassing for intuition alone to be sufficient grounds for certainty.

The response to this crisis would reshape civilization. In attempting to save mathematics from paradox, a generation of logicians accidentally invented the theoretical foundations of computation. Today, as we grapple with AI systems whose behavior we cannot predict or verify, we face a remarkably parallel crisis. Our AI systems, like mathematics in 1900, have outgrown our ability to reason about them informally. And the resolution, I believe, will follow the same path.

David Hilbert, speaking at the International Congress of Mathematicians in Paris in 1900, laid out 23 problems for the new century. By the 1920s, he had distilled his ambitions into a more fundamental program: to establish all of mathematics on a finite, complete, and consistent set of axioms. His Entscheidungsproblem—the decision problem—asked for an algorithm that could determine, for any mathematical statement, whether it is provable from the axioms.

Three brilliant minds would tackle this problem, and in doing so, create the modern world.

In 1931, Kurt Gödel published his incompleteness theorems, showing that any consistent formal system powerful enough to describe arithmetic must contain true statements it cannot prove. Moreover, Gödel's proof contained something remarkable: he showed how to encode logical statements as numbers, allowing mathematical systems to reason about themselves.
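
To make the encoding idea concrete, here is a minimal sketch of Gödel numbering in Haskell. The tiny alphabet and symbol codes are my own illustrative choices, not Gödel's original assignment; the point is only that an entire formula becomes a single natural number that arithmetic itself can talk about.

```haskell
import Data.List (foldl')

-- Illustrative codes for a tiny logical alphabet (not Gödel's original assignment).
code :: Char -> Integer
code '0' = 1
code 'S' = 2   -- successor
code '=' = 3
code '(' = 4
code ')' = 5
code 'x' = 6   -- a variable
code _   = 0   -- anything else gets a throwaway code in this toy version

-- An infinite list of primes via trial-division sieve (fine for short formulas).
primes :: [Integer]
primes = sieve [2 ..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- The Gödel number of a formula: p1^c1 * p2^c2 * ... for symbol codes c1, c2, ...
godelNumber :: String -> Integer
godelNumber formula = foldl' (*) 1 (zipWith (^) primes (map code formula))

main :: IO ()
main = print (godelNumber "S(0)=x")   -- one number standing for the whole formula
```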

By 1936, Alonzo Church at Princeton had developed the lambda calculus, a formal system for expressing computation through function abstraction and application, and he used it to show that no such algorithm exists. That same year, his student Alan Turing published "On Computable Numbers, with an Application to the Entscheidungsproblem."

Turing's approach was different. He imagined a mechanical device—a tape, a head that could read and write symbols, and a finite set of states. This machine could compute anything computable. More remarkably, he proved there existed a universal machine that could simulate any other Turing machine. In solving a problem about mathematical decidability, Turing had laid the theoretical foundation for the stored-program computer.
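
For readers who like to see the moving parts, here is a minimal sketch of such a machine in Haskell: a tape, a read/write head, and a finite transition table. The example machine, which flips 0s and 1s until it reaches a blank, is my own toy, not one from Turing's paper.

```haskell
import qualified Data.Map as M

data Move = L | R

type State  = String
type Symbol = Char
-- (state, symbol read) -> (new state, symbol written, head movement)
type Table  = M.Map (State, Symbol) (State, Symbol, Move)

-- The tape: symbols left of the head (nearest first) and the head symbol plus the rest.
data Tape = Tape [Symbol] [Symbol]

step :: Table -> State -> Tape -> Maybe (State, Tape)
step table st (Tape ls rs) =
  let (cur, rest) = case rs of
        (c:cs) -> (c, cs)
        []     -> ('_', [])          -- '_' stands for a blank cell
  in case M.lookup (st, cur) table of
       Nothing             -> Nothing   -- no rule for this situation: halt
       Just (st', out, mv) -> Just (st', move mv (Tape ls (out : rest)))

move :: Move -> Tape -> Tape
move R (Tape ls (c:cs)) = Tape (c:ls) cs
move R (Tape ls [])     = Tape ('_':ls) []
move L (Tape (l:ls) rs) = Tape ls (l:rs)
move L (Tape [] rs)     = Tape [] ('_':rs)

-- Run until no rule applies.
run :: Table -> State -> Tape -> Tape
run table st tape = maybe tape (uncurry (run table)) (step table st tape)

-- Toy machine: in state "flip", write the complement of what it reads and move right.
flipper :: Table
flipper = M.fromList
  [ (("flip", '0'), ("flip", '1', R))
  , (("flip", '1'), ("flip", '0', R)) ]

main :: IO ()
main = let Tape left right = run flipper "flip" (Tape [] "010011")
       in putStrLn (reverse left ++ right)   -- prints "101100"
```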

It is worth emphasizing that Alan Turing was not trying to build a computer. He was trying to solve a mathematical problem about decidability. The computer emerged from pure mathematical necessity.

The deep connection between mathematics and computation would be made more explicit decades later through the Curry-Howard correspondence. First observed by Haskell Curry in the 1930s and made precise by William Howard in the late 1960s, this remarkable equivalence revealed that proofs and programs are the same mathematical objects viewed through different lenses: propositions correspond to types, proofs correspond to programs, and simplifying a proof corresponds to evaluating a program.

This is not merely a theoretical curiosity: when a program type-checks, we have mechanically verified a mathematical truth. Even in mainstream languages like Java or C++, when we write a function from integers to strings, we are showing, at least up to non-termination and runtime exceptions, that a string can be constructed from any integer.
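
As a small illustration of the correspondence, here is a Haskell sketch (the function names are mine, chosen for readability): each type signature can be read as a proposition, and a total program inhabiting that type is a proof of it.

```haskell
-- Reading types as propositions: a pair is a conjunction, a function arrow is
-- an implication, and writing a (total) program of a type proves the proposition.

-- "A and B implies B and A": commutativity of conjunction.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- "If A implies B and B implies C, then A implies C": transitivity of implication.
implyTrans :: (a -> b) -> (b -> c) -> (a -> c)
implyTrans f g = g . f

-- The example from the text: a function from Int to String is evidence that a
-- string can be constructed from any integer (ignoring non-termination).
describe :: Int -> String
describe n = "the integer " ++ show n

main :: IO ()
main = putStrLn (describe 42)
```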

Moreover, if types are propositions and programs are proofs, then richer type systems let us prove stronger theorems about our code. This insight inspired increasingly sophisticated type systems: Rust's ownership types rule out data races and use-after-free errors at compile time; languages like Kotlin and Swift eliminate null pointer exceptions through non-nullable types; Haskell and Scala can track effects like database access in the type system itself. These aren't academic curiosities. They are mathematical guarantees that entire classes of bugs cannot exist in programs that compile, and they stem directly from treating programming as applied proof theory.
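
Here is a small sketch of two of those guarantees, using Haskell since it is one of the languages mentioned above (the names lookupUser and greeting are illustrative, not from any real codebase): absence is explicit in the type, so a missing value must be handled before it can be used, and a function whose type does not mention IO cannot perform effects such as printing or database access.

```haskell
import qualified Data.Map as M

type UserId = Int
newtype User = User { userName :: String }

-- Absence is explicit in the type: Maybe User instead of a nullable pointer,
-- so the compiler forces every caller to handle the Nothing case.
lookupUser :: M.Map UserId User -> UserId -> Maybe User
lookupUser = flip M.lookup

-- This function's type says it performs no effects: it cannot read a file,
-- query a database, or print, and the compiler enforces that.
greeting :: Maybe User -> String
greeting (Just u) = "Hello, " ++ userName u
greeting Nothing  = "Hello, stranger"

-- Effects are confined to types that mention IO.
main :: IO ()
main =
  let users = M.fromList [(1, User "Ada")]
  in putStrLn (greeting (lookupUser users 2))   -- prints "Hello, stranger"
```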

Now consider our current moment. We've built large language models of extraordinary capability whose behavior we cannot precisely predict. We deploy them in critical applications without formal guarantees. We hope that alignment techniques will keep them beneficial, but we have no mathematical framework for even stating what alignment means, let alone proving it holds.

All this makes me think of the parallels to 1901. Just as mathematics then rested on uncertain foundations, AI today operates without formal grounding. And just as the mathematical crisis forced a return to formal foundations, our AI crisis will, I believe, force the same reckoning.

When an AI agent acts in the world—writing code, designing systems, making decisions, actuating robotic arms—it's performing computations. And computation, as was shown in the 20th century, is something we can reason about mathematically. Every action by an AI agent can, in principle, be given a formal specification, even if the system itself operates through statistical rather than logical means.
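
As a deliberately toy illustration of what such a specification might look like, here is a Haskell sketch of the general pattern rather than any real agent framework: the only way to obtain a VerifiedAction is to pass a policy check, so the executor's type guarantees that unchecked actions can never run. The policy itself, rejecting shell commands that invoke rm, is a hypothetical stand-in for a genuine machine-checked obligation.

```haskell
-- Toy sketch: a type as a specification. Only `main` is exported, so the
-- VerifiedAction constructor stays private and `verify` is the sole gate.
module Main (main) where

newtype Command = Command String

-- Hypothetical policy standing in for a real, machine-checked proof obligation.
satisfiesPolicy :: Command -> Bool
satisfiesPolicy (Command c) = "rm" `notElem` words c

-- Evidence that a command has been checked; unobtainable except via `verify`.
newtype VerifiedAction = VerifiedAction Command

verify :: Command -> Either String VerifiedAction
verify cmd
  | satisfiesPolicy cmd = Right (VerifiedAction cmd)
  | otherwise           = Left "command rejected by policy"

-- The executor's type only admits actions carrying that evidence.
execute :: VerifiedAction -> IO ()
execute (VerifiedAction (Command c)) = putStrLn ("executing: " ++ c)

main :: IO ()
main = case verify (Command "echo hello") of
  Right action -> execute action
  Left reason  -> putStrLn reason
```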

Many reading this will resist the idea. But that resistance mirrors the opposition that formalization itself faced in the late 19th and early 20th centuries. Henri Poincaré dismissed formal logic as sterile, Leopold Kronecker rejected Cantor's work on infinite sets as meaningless ("God made the integers; all else is the work of man"), and even David Hilbert initially believed intuition sufficient for the foundations of mathematics. All of this is strikingly similar to claims today that LLMs can achieve superintelligence without being grounded in formal systems.

The foundational crisis in mathematics taught us that formal systems have limits (Gödel), but within those limits, they provide certainty. It showed that computation itself can be formalized and reasoned about (Turing). It revealed that proofs and programs are fundamentally the same (Curry-Howard).

These aren't just theoretical insights. They formed the foundation of the twentieth-century computer revolution. We cannot prove everything about AI systems, but we can prove what matters. And just as the mathematical crisis gave us a theory of computation, the shaky foundations of AI we face today will push us towards a theory of verifiable intelligence that will serve as the basis of the world we build in the coming decades.

The question remains whether we'll learn from history—building verification into AI from the ground up—or repeat it, waiting for a crisis to force our hand.

Alexander Coward, CEO