In 2009, Tony Hoare stood at the QCon conference in London and confessed to a crime.
“I call it my billion-dollar mistake,” he said. “It was the invention of the null reference in 1965. I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement.”
Hoare was 75 at the time, a Turing Award winner, one of the most decorated computer scientists in history. He estimated his convenient shortcut had caused “a billion dollars of pain and damage” through the crashes, security vulnerabilities, and debugging hours it had inflicted on the industry over four decades.
He died in March 2026 at age 92. By then, the damage estimate had aged poorly; if anything, a billion dollars was a serious understatement.
The Original Sin
In 1965, Tony Hoare was designing the type system for ALGOL W. He needed a way to represent the absence of a value. He could have chosen a different path — a special type, a tagged union, an explicit “maybe” wrapper. But null was easy. One extra bit per pointer, or just a reserved zero address. The compiler barely had to know about it. Every existing pointer type automatically gained a “no value” state for free.
The simplicity was seductive. And it spread.
C (1972), C++ (1985), and Java (1995) all imported the concept without meaningful modification. In each of these languages, any pointer or reference can be null, and the compiler offers no protection. Call a method on a null reference and you get a runtime crash — a NullPointerException in Java, a segfault in C. The program trusted that the value would be there. It wasn’t.
In Java, every reference type — every String, every List, every object you can create — can be null. There is no language-level way to declare that a value cannot be null. The compiler will not warn you. The type String in Java means “a String, or null.” Always. By definition. You discover this fact at runtime, usually in production.
As of Java 28 in 2026, this remains true. One of the most widely-deployed enterprise languages in the world still has no built-in null safety in its type system, sixty years after the mistake was made.
The Era of Denial
The early decades of programming language design treated null as a solved problem — not in the sense that it was handled well, but in the sense that everyone had accepted the crash as an acceptable failure mode.
C’s approach was the most honest about it: you own the pointer, you track whether it’s valid, you crash if you’re wrong. No runtime exception, just a segfault and a core dump. The assumption was that skilled programmers would get it right.
C++ inherited all of C’s pointer behavior and added references, which cannot be null — but also added raw pointers, which can. The result was a language where the safe choice (reference) and the dangerous choice (pointer) lived side by side, and plenty of code used the dangerous one.
Java’s contribution to the story was to make it invisible. Java eliminated pointer arithmetic. It gave you automatic garbage collection. It wrapped everything in objects and presented a cleaner, safer surface. And then, underneath all that safety, it kept null. Java developers in the 1990s routinely discovered this the hard way — a NullPointerException stack trace at some deeply inconvenient moment in production — and learned to defensively check everything, always. This habit became muscle memory. It should never have needed to be.
Haskell’s Answer: Make the Absence Explicit
In the late 1980s, a different tradition was quietly arriving at a cleaner solution.
Haskell, the purely functional language that emerged from an academic committee starting in 1987, had no null reference. Instead, it had a type called Maybe.
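The definition is the one in the Prelude; the lookup function below is a hypothetical example of how it gets used:

```haskell
-- Maybe is defined in the Prelude as:
--   data Maybe a = Nothing | Just a

-- A hypothetical lookup whose type admits failure explicitly.
findNickname :: Int -> Maybe String
findNickname 1 = Just "Ada"
findNickname _ = Nothing

-- Pattern matching must cover both constructors;
-- the compiler rejects any use that ignores Nothing.
greet :: Maybe String -> String
greet (Just name) = "Hello, " ++ name
greet Nothing     = "Hello, stranger"

main :: IO ()
main = putStrLn (greet (findNickname 2))
```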
Maybe String is either Just "some value" or Nothing. These are not two states of the same type — they are two constructors of an explicit sum type. The compiler knows the difference. If you have a Maybe String and you need a String, the compiler refuses to compile your program until you handle both possibilities. You cannot forget. The forgetting is not possible.
This is what Hoare’s billion-dollar mistake looked like fixed: not through discipline or defensive coding, but through a type system that makes the absence of a value structurally impossible to ignore.
The same idea exists in other functional languages under other names — Option in OCaml and Scala, Maybe in Elm — along with the closely related Result and Either types, which also carry a reason for the absence. The insight predates even Haskell: ML had an option type first. But Haskell made it canonical.
The Modern Languages Take a Side
The new languages of the 2010s arrived with a clear position on null: it was a mistake, and they were going to fix it. The question was how.
Swift (2014): Non-Null by Default
When Apple unveiled Swift at WWDC in June 2014, null safety was a first-class feature from version 1.0. In Swift, every type is non-nullable by default. A String is always a String. If you want to allow the absence of a value, you write String?, which is syntactic sugar for Optional<String>.
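A short sketch with hypothetical values:

```swift
let name: String = "Ada"      // a String is always a String
var nickname: String? = nil   // Optional<String>: a value, or nothing

// Using nickname as a plain String without unwrapping is a compile error.
// The compiler requires explicit handling:
if let n = nickname {
    print("Hi, \(n)")
} else {
    print("no nickname")
}

// Or supply a default with nil-coalescing:
let display = nickname ?? "anonymous"
print(display)
```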
Force-unwrapping with ! is allowed — nickname! — but it is visible in the code as a signal that the developer is asserting the value is present and accepting the crash if they’re wrong. It is a code smell in Swift code review. The compiler refuses to compile code that uses an optional as non-optional without some form of explicit acknowledgment.
Kotlin (2016): Fixing Java’s Original Sin
Kotlin, designed by JetBrains and reaching 1.0 in February 2016, had a specific mission embedded in its design: fix Java’s most painful failure while remaining 100% Java-interoperable.
Every type X in Kotlin is non-nullable. X? is the nullable variant. The compiler enforces this distinction at compile time.
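A minimal sketch, with hypothetical values:

```kotlin
fun main() {
    val name: String = "Ada"     // non-nullable: assigning null won't compile
    var nickname: String? = null // the nullable variant

    // println(nickname.length)  // compile error: nickname may be null
    println(nickname?.length)          // safe call: prints "null"
    println(nickname?.length ?: 0)     // Elvis operator default: prints "0"

    nickname = "grace"
    println(nickname?.length ?: 0)     // prints "5"
    println(name)
}
```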
The ?. safe call operator and ?: Elvis operator are arguably the most elegant parts of Kotlin’s design. They let you chain operations through potentially-null values without nested null checks, and the compiler tracks nullability through the entire chain.
The interoperability story with Java introduced one complexity: Java types crossing the boundary have unknown nullability (Kotlin calls them “platform types”), so the null safety guarantee weakens slightly at the Java/Kotlin interface. But for pure Kotlin code, the guarantee is complete.
Rust: No Null, Full Stop
Rust (1.0 in May 2015) went further than Swift or Kotlin: it has no null at all. There is no null keyword and no nil anywhere in the language. Instead, the standard library defines Option&lt;T&gt;, an enum with exactly two variants, Some(T) and None.
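The definition (as it appears, simplified, in the standard library) plus a hypothetical lookup:

```rust
// Defined in the standard library (simplified):
//   enum Option<T> { None, Some(T) }

// A hypothetical lookup: the return type itself says "maybe nothing".
fn find_nickname(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("Ada"),
        _ => None,
    }
}

fn main() {
    // match forces both variants to be handled; there is no way
    // to reach the &str inside without going through Some/None.
    match find_nickname(2) {
        Some(n) => println!("Hi, {n}"),
        None => println!("no nickname"),
    }
}
```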
The distinction between Rust and the others is subtle but important. In Kotlin and Swift, null safety is enforced by the type system, but null and nil still exist as values. In Rust, Option<T> and T are completely different types at the compiler level. There is no conversion between them that doesn’t require explicit handling. This is stricter in a formal sense: you cannot accidentally stumble into a null through interoperability, legacy code, or unsafe casts.
The Partial Fixes
Not every language made a clean decision.
Go: A New Language, Old Habits
Go launched in 2009, the same year Hoare gave his billion-dollar-mistake talk. Go is a modern language, designed from scratch at Google, with a clear mandate to improve on C’s problems.
It kept nil.
Go has no Option type. Pointer types, interfaces, and several built-in types (maps, slices, channels, functions) can be nil. A nil pointer dereference is a runtime panic, not a compile-time error. The compiler does not check this. Go deliberately chose not to adopt the Option type model — the language’s designers prioritized simplicity over the expressiveness required for ML-family type safety.
The decision is defensible. Go is a pragmatic language that values small specification size and easy learnability. But the tradeoff is real: Go programs can panic at runtime on nil pointer dereferences in ways that Rust programs cannot.
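A minimal sketch of the hazard, using a hypothetical lookup that follows the common Go idiom of returning a nil pointer for "not found":

```go
package main

import "fmt"

type User struct {
	Name string
}

// Hypothetical lookup: nil is the idiomatic "not found" result,
// and the compiler does not force callers to check for it.
func findUser(id int) *User {
	if id == 1 {
		return &User{Name: "Ada"}
	}
	return nil
}

func main() {
	u := findUser(2)
	// fmt.Println(u.Name) // would panic here: nil pointer dereference
	if u == nil {
		fmt.Println("no user")
		return
	}
	fmt.Println(u.Name)
}
```

The nil check is convention, not contract; forget it and the failure surfaces only at runtime.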
C# 8.0: A 17-Year Retrofit
C# launched in 2002, seven years after Java, inheriting all the same null behavior. For 17 years it lived with the same problem.
In 2019, C# 8.0 introduced nullable reference types. string is treated as non-nullable; string? is nullable. The compiler generates warnings (configurable as errors) when you might dereference a nullable reference without checking.
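A short sketch with hypothetical values (the commented-out line is the one the compiler flags):

```csharp
#nullable enable
using System;

string name = "Ada";       // non-nullable: assigning null produces a warning
string? nickname = null;   // nullable reference type

// Console.WriteLine(nickname.Length); // warning CS8602: possible null dereference
if (nickname != null)
{
    Console.WriteLine(nickname.Length); // flow analysis knows this is safe
}
Console.WriteLine(name);
```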
The important caveat: this feature is opt-in per project, not language-default. Existing codebases must explicitly enable it. You retrofit it file by file, suppressing or fixing warnings as you go. It is a genuinely useful improvement — but it is not a clean guarantee. A string marked non-nullable in a C# 8.0+ project can still technically receive null from older code, reflection, or serialization. The soundness guarantees are weaker than Kotlin’s or Swift’s.
The challenge of retrofitting null safety onto a 17-year-old language is a different problem from designing it in from the start. C# threaded that needle reasonably well, but the seams show.
Dart: Sound Null Safety as an Optimization Target
Dart, Google’s language for Flutter, added null safety in Dart 2.12 (2021), with an explicit design goal that distinguished it from C#: soundness. The Dart team defined “sound null safety” to mean that if the type system says a value is non-null, the compiler can prove it — and that proof is strong enough that the compiler can eliminate null checks entirely in optimized code.
The consequence is that Dart’s null safety is not just a developer experience improvement — it produces faster compiled output. A non-nullable type requires no null check anywhere in the compiled binary. That is a different class of guarantee than an opt-in warning.
Gleam, the statically typed language for the Erlang BEAM VM that reached 1.0 in 2024, also uses non-nullable types by default, treating the absence of a value through the Result and Option types it inherits from its ML lineage.
The Same Problem, Five Solutions
Here is the same scenario — a function that may or may not find a user’s name — written in five languages with five different levels of protection:
Java (no protection):
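(All five snippets use a hypothetical findUserName lookup; names and values are illustrative.) In Java, nothing in the signature warns the caller:

```java
public class UserLookup {
    // The return type String silently means "a String, or null".
    static String findUserName(int id) {
        return id == 1 ? "Ada" : null;
    }

    public static void main(String[] args) {
        String name = findUserName(2); // returns null; the compiler says nothing
        // name.toUpperCase() here would throw NullPointerException at runtime
        System.out.println(name);      // prints "null"
    }
}
```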
Kotlin (compile-time enforcement):
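The same hypothetical lookup in Kotlin, where the return type String? declares the possible absence up front:

```kotlin
fun findUserName(id: Int): String? = if (id == 1) "Ada" else null

fun main() {
    val name = findUserName(2)
    // println(name.uppercase())  // compile error: name may be null
    println(name?.uppercase() ?: "unknown user") // prints "unknown user"
}
```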
Swift (compile-time enforcement):
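The same hypothetical lookup in Swift, where absence is part of the signature and unwrapping is enforced:

```swift
func findUserName(id: Int) -> String? {
    return id == 1 ? "Ada" : nil
}

// The compiler refuses direct use of the optional; it must be unwrapped.
if let name = findUserName(id: 2) {
    print(name.uppercased())
} else {
    print("unknown user")
}
```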
Rust (no null, enum type):
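The same hypothetical lookup in Rust, where there is no null to return and Option is the only way to express "maybe no name":

```rust
fn find_user_name(id: u32) -> Option<String> {
    if id == 1 { Some("Ada".to_string()) } else { None }
}

fn main() {
    // Both variants must be handled before the String can be touched.
    match find_user_name(2) {
        Some(name) => println!("{}", name.to_uppercase()),
        None => println!("unknown user"),
    }
}
```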
Haskell (no null, Maybe type):
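The same hypothetical lookup in Haskell, where Maybe String makes the absence explicit in the type:

```haskell
import Data.Char (toUpper)

findUserName :: Int -> Maybe String
findUserName 1 = Just "Ada"
findUserName _ = Nothing

main :: IO ()
main = case findUserName 2 of
  Just name -> putStrLn (map toUpper name)
  Nothing   -> putStrLn "unknown user"
```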
Five languages. One problem. The Java version compiles without a single warning, runs, and crashes at runtime. The other four either refuse to compile without explicit handling or structurally prevent the failure from occurring.
Sixty Years Later
The arc of this story is remarkable. In 1965, a single design shortcut — “simply because it was so easy to implement” — embedded a failure mode into the DNA of nearly every mainstream language for the next five decades.
By the 2010s, language designers had developed clean solutions that prove at compile time whether a value might be absent. Haskell demonstrated the theory in the 1980s. Swift, Kotlin, and Rust delivered it to mainstream developers at scale. Dart built a version strong enough to use as a compiler optimization target.
And yet the language most widely deployed in enterprise software — Java — still has not fixed it. Optional<T> is available, but nothing enforces its use. A codebase that mixes Optional and raw nulls is worse than a codebase that doesn’t use Optional at all, because the inconsistency is invisible to the compiler. Project Valhalla and related Java improvements have addressed other pain points, but the null question remains structurally unresolved.
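A hypothetical sketch of that inconsistency: both styles compile side by side, and nothing in the language tells callers which one to trust.

```java
import java.util.Optional;

public class MixedStyles {
    // New style: absence is explicit in the return type.
    static Optional<String> findNickname(int id) {
        return id == 1 ? Optional.of("Ada") : Optional.empty();
    }

    // Legacy style: the same absence, invisible in the signature.
    static String findLegacyNickname(int id) {
        return id == 1 ? "Ada" : null;
    }

    public static void main(String[] args) {
        // The two styles coexist freely, and Optional is itself
        // a reference type -- so it can be null too:
        Optional<String> broken = null; // compiles without complaint
        System.out.println(findNickname(2).orElse("unknown"));
    }
}
```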
The billion-dollar mistake has been solved. The solution has been available, proven, and industrially deployed for over a decade. The remarkable thing is how selectively the industry has adopted it.
Tony Hoare spent 44 years watching the consequences of his 1965 shortcut accumulate. He lived long enough to see most of the new languages fix it properly. He didn’t live quite long enough to see Java do the same.
Explore the languages mentioned in this post: Java, Kotlin, Rust, Swift. Each has full examples with Docker so you can run them today without installing anything.