The history of systems programming is, in part, a long procession of people standing up to announce that they had built a better C++.
They weren’t wrong. Many of them had built something better in important ways. Better generics. Better memory model. Better syntax. Better tooling. And yet, the procession continues — because C++ is still here, still near the top of virtually every language ranking, still powering the software that runs the planet.
TIOBE’s January 2026 index puts C++ at #4, roughly where it has sat for the index’s entire history. It powers Unreal Engine, Chrome, V8, the Windows kernel, macOS core components, MySQL, MongoDB, and most of the high-performance infrastructure that underlies the modern internet. Billions of lines of C++ exist in production. No safe migration path exists for that code. And C++ — uniquely among entrenched languages — has a habit of absorbing its challengers’ best ideas and making them its own.
That is the story of why killing C++ is so hard. But it is not the whole story, because something genuinely different may be happening now.
The Unkillable Language
Before cataloguing the challengers, it’s worth understanding why the job is so difficult.
C++’s moat has three distinct walls.
The first is installed base. Billions of lines of C++ code run in production systems that have been hardened, optimized, and debugged over decades. Rewriting them is not just expensive — it is dangerous. Every line of that code encodes accumulated knowledge: edge cases handled, performance characteristics tuned, subtle invariants maintained. That knowledge doesn’t migrate automatically.
The second is ecosystem depth. Libraries, frameworks, debuggers, profilers, static analyzers, compilers (GCC, Clang, MSVC), and build systems have been built and refined around C++ for forty years. A new language doesn’t just need to be better than C++ — it needs to be better enough to justify rebuilding the entire toolchain alongside it.
The third wall — and this is the one that defeats most challengers before they start — is C++’s own evolution. The language that shipped in 1985 barely resembles the language in use today. C++11 added lambdas, type inference via auto, smart pointers, and move semantics — ideas borrowed directly from languages that had been criticizing C++ for lacking them. C++17 added structured bindings and std::optional. C++20 added concepts, coroutines, and ranges — features that closed the gap with functional languages that had been pointing at these absences for a decade. C++23 continued the trend.
Every time a competitor says “we have better generics,” the C++ committee reads the proposal and schedules a working group. This is maddening for language designers and remarkable for language survival.
The Early Challengers
D (2001) — C++ Done Right
Walter Bright had been writing C and C++ compilers since the 1980s. He knew the language’s internals better than almost anyone. In 2001, he released D, explicitly framed as what C++ should have been.
The critique was well-founded. D had a cleaner module system, better generics, an optional garbage collector, a safer type system, and built-in unit testing that didn’t require an external framework. It got rid of the header file mess. It made template metaprogramming legible rather than an arcane art. Three compilers exist today — DMD (the reference implementation), LDC (LLVM-based), and GDC (GCC-based) — and the language is still actively developed after more than two decades.
But D never achieved mass adoption. The most frequently cited reason is interoperability. D can call C code, but interfacing with existing C++ libraries — the thing that defines C++ programmers’ daily lives — was never seamless enough to justify migration. If you want to use a large C++ codebase, you write C++. D’s improvements weren’t enough to compensate for starting over from zero in a world where the existing investment was enormous.
D answered the question “what if C++ were designed better?” The market turned out not to care enough about that question.
Vala (2006) — The GNOME Experiment
Vala came from a different angle. GNOME developers were writing GTK+ applications in C — a painful experience, because GObject (the GTK object system) requires enormous amounts of boilerplate to implement classes, signals, and reference counting manually. Vala gave them a C#-like syntax and compiled it down to GObject C, which then linked like any C library.
The cleverness of Vala is real: you get modern syntax and the GObject reference counting model with no runtime overhead and no new runtime to ship. GNOME applications like Gedit and GNOME Files shipped production code in Vala.
But Vala’s story is a cautionary tale about niche commitment. It never escaped the GTK ecosystem. The compiler is maintained by a small team. The language’s entire value proposition depends on GObject, which limits its applicability to essentially one desktop toolkit. Vala proved that you can build a technically interesting language with a narrow enough scope that it survives without ever becoming consequential beyond that scope.
Nim (2008, 1.0 in 2019) — The Language Enthusiasts Love
Nim is the language that makes systems programmers nod approvingly when they read about it, then never quite migrate to.
It compiles to C, C++, or JavaScript. It offers Python-like, indentation-sensitive syntax. Its macro system enables genuinely powerful metaprogramming — macros that operate on the AST and let you extend the language’s semantics in ways most languages can’t match. The ARC and ORC memory management schemes give you deterministic, GC-free memory handling with ergonomics approaching those of garbage-collected languages.
Nim has real production users. Status, which builds an Ethereum client, chose Nim partly for performance and partly for the macro system. Some game developers use it for scripting. The community is small but devoted and technically sophisticated.
The problem Nim has never solved is the chicken-and-egg problem that afflicts every small-community language: the ecosystem is small because the community is small, and the community is small partly because the ecosystem is small. Nim’s improvements over C++ are real but mostly syntactic and ergonomic — they make C++-style programming more pleasant without fundamentally changing what becomes possible. That turns out not to be enough to pull developers out of their existing toolchains.
The Genuine Breakthrough
Rust (1.0 in May 2015) — The First Credible Challenger in Decades
Everything before Rust was offering variations on the same proposition: “C++, but nicer.” Rust offered something categorically different.
The borrow checker — Rust’s compile-time ownership and lifetime analysis — is not a nicer syntax. It is not a better standard library. It is a machine-checked guarantee that memory-safety violations and data races cannot occur in safe Rust code. Not discouraged. Not warned against. Impossible to compile.
This matters because the problem it solves is not theoretical. A 2019 Microsoft study found that approximately 70% of their CVEs were memory safety bugs. A Google analysis of severe Chrome security bugs found the same figure — around 70% traceable to memory safety issues. These are not the kind of statistics you can argue with by pointing at developer discipline or better code review practices. They represent the fundamental cost of manually managed memory in large codebases written by large teams over long time periods.
The policy response reflects how seriously governments and industry have taken this. The U.S. NSA issued guidance in 2022 recommending memory-safe languages, citing Rust by name. The White House Office of the National Cyber Director published a report in 2024 recommending Rust for new systems programming work. That is an unusual level of governmental attention to a programming language choice, and it reflects a genuine cost that has accumulated for decades.
Rust has been the most admired language in the Stack Overflow Developer Survey every year since 2016 — nine consecutive years as of the 2024 survey. The caveats matter: actual adoption is still modest compared to C++. Rust’s learning curve is genuinely steep, the borrow checker requires new mental models, and migrating existing C++ code to Rust is difficult. But unlike every previous C++ challenger, Rust is growing — in the Linux kernel, in the Windows kernel, in Android’s security-critical components, in infrastructure code at companies that have decades of C++ history.
The difference is the value proposition. Every prior language said “we’re easier.” Rust says “we’re the only way to prove safety.” That is a categorically different argument.
The Current Generation
Zig (first public 2016, not yet 1.0 as of 2026) — The Anti-C++
Zig is the most interesting reframe in the current generation, because it doesn’t try to compete with C++ on C++’s terms at all.
Where Rust says “we’ll make systems programming safe,” Zig says “we’ll make it honest.” The argument isn’t against C++’s complexity per se — it’s against C’s accumulated deceptions. Illegal behavior is caught by runtime checks in Zig’s safe build modes rather than silently invoking undefined behavior. No hidden control flow. No hidden memory allocations. No preprocessor macros — comptime replaces them with ordinary Zig code that runs at compile time, which means the metaprogramming layer is the same language as everything else. If a function can fail, that failure is visible at the call site; nothing can silently swallow an error.
Zig is not trying to be safe in Rust’s formal sense. You can write unsafe code in Zig. But the language forces that unsafety to be explicit and visible, which is a significant improvement over C’s invisible undefined behavior. Cross-compilation is a first-class feature — Zig ships its own libc implementations and can target essentially any platform from any platform, which has made it attractive as a C/C++ cross-compilation toolchain even for projects not written in Zig itself.
The production usage is real: Zig is the foundation of Bun, the JavaScript runtime that has attracted significant attention for its performance, and the Zig compiler itself is now self-hosted, written in Zig. The Zig Software Foundation, a non-profit funded by donations and corporate sponsorship, stewards the language’s development. But Zig has not yet reached 1.0 — a fact that limits enterprise adoption regardless of technical merit.
Carbon (July 2022, still experimental) — Learning from Everyone Else’s Mistakes
Google engineer Chandler Carruth unveiled Carbon at CppNorth in Toronto in July 2022, and the pitch was unlike any prior C++ challenger’s.
Carbon is not trying to kill C++. It is not trying to be better C++. It is trying to give C++ codebases a migration path.
The distinction matters enormously. Every prior challenger asked C++ programmers to throw away their existing code and start over. Carbon’s answer is bidirectional interoperability — you can call Carbon from C++ and call C++ from Carbon without wrappers and without a translation layer. The vision is that a team with a ten-million-line C++ codebase could begin writing new modules in Carbon, migrate existing files incrementally, and end up with a mixed codebase that gradually shifts over years rather than a big-bang rewrite.
This design reflects a direct lesson from Rust’s adoption experience. Rust’s memory safety guarantees are its defining feature — but they also make incremental adoption of Rust in C++ codebases genuinely difficult. The ownership model doesn’t compose smoothly with C++ calling conventions. Large organizations with enormous C++ investments have found Rust compelling in principle but painful to adopt in practice, precisely because the codebases are too large to rewrite and too interconnected to separate cleanly.
Carbon’s bet is that a language with lower adoption friction could succeed where technically superior languages have stalled. Whether it will succeed is genuinely unknown — as of early 2026, Carbon has no production deployments, and its viability depends on whether the C++ community coalesces behind it in a way it has not yet demonstrated. But the argument itself is the most sophisticated one any C++ challenger has made.
Hylo/Val (2022) — The Academic Edge
Hylo, which began as Val, comes from EPFL and has former Swift compiler engineers involved in its design. Its core idea — “mutable value semantics” — is a strict discipline that sits between functional programming’s immutability and traditional object-oriented mutation. Every value is either uniquely owned or immutably shared; there is no place for aliased mutable state.
Hylo is not production-ready and makes no pretense of being so. It is a research language exploring whether this ownership model can provide safety guarantees in a different way than Rust’s borrow checker. The connection to Swift’s former engineers is notable — Swift’s value semantics story is one of the more successful recent examples of bringing ownership-style thinking to a mainstream audience. Hylo is asking whether that story can be taken further.
The Pattern
Four decades of challengers reveal something consistent: every successful C++ challenger offered something genuinely new, not just nicer syntax.
C beat assembly with portability — the same code could target different hardware. C++ beat C with zero-cost abstractions — object-oriented programming with no runtime overhead compared to hand-written C. These were not ergonomic improvements. They were new capabilities.
D offered better generics and a cleaner module system. Vala offered GTK ergonomics. Nim offered Python-ish syntax. These are meaningful improvements, but they are improvements of degree, not of category. They made the same kinds of programs easier to write without changing what became possible or provable about those programs.
Rust’s compile-time safety proof is a categorical difference. It allows organizations to make a specific and verifiable claim about their software: that an entire class of bugs cannot exist. That claim has real economic value — it is directly measurable in reduced CVE counts and reduced incident response costs. It is not a convenience improvement. It is a correctness guarantee.
Zig’s explicit resource management and Rust’s borrow checker both represent this kind of categorical improvement: not “we’re easier,” but “we rule out — or at least make visible — classes of failure that C++ leaves invisible.”
Carbon’s bet is different again — not categorical technical superiority, but categorical adoption superiority. Whether that is a sufficient proposition remains the open question.
C++ Isn’t Standing Still
It would be a mistake to tell this story as if C++ is a static target.
C++26 is in development now, with proposals addressing safety annotations, improved compile-time evaluation, and pattern matching. The Safe C++ proposal — building on decades of experience with C++ safety research and informed by Rust’s work — is attempting to add a safety profile to C++ that provides similar guarantees to safe Rust without abandoning existing code. Herb Sutter and others have been pushing C++ Profiles, which would allow codebases to opt into stricter subsets of C++ with enforcement guarantees.
Whether these efforts arrive in time and at sufficient quality to satisfy the organizations currently moving toward Rust is a live question. C++ has always managed to incorporate good ideas from its competitors. The question is whether the pace of the standards committee — historically measured in years and decades — can match the urgency of a software security environment that is now attracting national government attention.
What Actually Changes the Outcome
The most honest conclusion from four decades of data is this: the question was never whether any language would kill C++. C++ will not be killed. It will be retired, gradually, the same way COBOL has been retired — slowly, incompletely, over a generation, while trillions of dollars of existing investment keeps running.
The real question is whether the next generation of systems programming carves out enough of C++’s territory that new projects increasingly choose something else. And that is already happening.
Rust is in the Linux kernel. It is in the Windows kernel. It is in Android’s security-sensitive components. Google, Microsoft, and Amazon have all made substantial commitments to Rust for new systems code. The NSA and White House have recommended it in formal policy guidance. These are not the signs of a language on the fringe. They are the signs of a language that has found the argument compelling enough to change decisions at organizations that have decades of C++ investment.
Zig is winning converts among C programmers with large codebases and limited interest in Rust’s learning curve. Carbon is the most sophisticated attempt to solve the migration problem directly.
None of this will happen quickly. The installed C++ base will be running in 2050. But the portion of new systems programming work written in C++ is shrinking, for the first time in forty years, in a way that looks durable.
That is what success looks like against an unkillable language. Not a coup — an erosion.
Dig deeper: CodeArchaeology has language guides for Rust, Zig, Carbon, Nim, and C++ itself — each with Docker-based Hello World examples so you can run the code today without installing anything.