
Mojo

A Python superset designed for AI/ML workloads, combining Python syntax with systems-level performance through MLIR compilation.

Created by Chris Lattner at Modular Inc.

Paradigm Multi-paradigm: Imperative, Object-Oriented, Functional, Systems
Typing Static and Dynamic, Strong, Inferred
First Appeared 2023
Latest Version Mojo 0.26.1 (January 2026)

Mojo is a programming language designed to bridge the gap between Python’s ease of use and systems-level performance. Created by Chris Lattner—the mind behind LLVM, Clang, and Swift—Mojo aims to be a full superset of Python while delivering performance comparable to C++ and Rust.

History & Origins

Mojo emerged from a fundamental frustration in the AI/ML world: Python is the dominant language for machine learning, but it’s far too slow for production inference and training workloads. The industry has relied on writing performance-critical code in C++ and CUDA, then wrapping it with Python bindings—creating the infamous “two-language problem.”

Chris Lattner and Tim Davis founded Modular Inc. in 2022 with the mission to fix AI infrastructure. Rather than building another framework on top of Python, they decided to build a new language that would be a strict superset of Python while adding the performance features that AI workloads demand.

The MLIR Foundation

Mojo is built on MLIR (Multi-Level Intermediate Representation), a compiler infrastructure that Lattner originally created at Google. MLIR provides a flexible framework for representing code at multiple levels of abstraction—from high-level Python-like operations down to machine-specific instructions. This gives Mojo a unique advantage: it can optimize code across the entire stack, from algorithm-level transformations to hardware-specific optimizations.

Why “Mojo”?

The name plays on the idea of giving Python programmers their “mojo”—the ability to write high-performance code without leaving the Python ecosystem. The fire emoji (🔥) is the language’s unofficial mascot, and .🔥 is actually a valid file extension alongside the more practical .mojo.

Key Features

Python Superset

Mojo’s most important design goal is full Python compatibility:

  • All valid Python code should eventually work in Mojo
  • Python developers can gradually adopt Mojo features
  • Existing Python libraries can be imported and used directly (see the interop sketch below)
  • The syntax is familiar—def, class, for, if all work as expected
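
Python interop goes through a bridge module in the standard library. The sketch below assumes a CPython installation with NumPy available in the environment Mojo is linked against:

from python import Python

def main():
    # Import an installed CPython module through Mojo's interop layer.
    np = Python.import_module("numpy")
    arr = np.arange(5)
    print(arr * 2)  # NumPy semantics apply: [0 2 4 6 8]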

Systems-Level Performance

Mojo adds features that enable C++/Rust-level performance (a short sketch follows the list):

  • fn functions: Strict, compiled functions with mandatory type annotations
  • struct types: Value-semantic types laid out in memory like C structs
  • var declarations: Explicitly declared mutable variables with ownership semantics
  • SIMD operations: First-class support for vectorized computation
  • Manual memory management: When you need direct control
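
A minimal sketch that puts a few of these together: a value-semantic struct, a strict fn, and a SIMD vector. Exact syntax (for example the @value decorator and constructor conventions) has shifted between Mojo releases, so treat this as illustrative rather than definitive:

@value
struct Point:
    var x: Float64
    var y: Float64

    fn length_squared(self) -> Float64:
        # A strict fn: typed arguments, compiled ahead of time.
        return self.x * self.x + self.y * self.y

fn double_lanes(v: SIMD[DType.float32, 4]) -> SIMD[DType.float32, 4]:
    # One vector multiply covers all four lanes at once.
    return v * 2.0

def main():
    var p = Point(3.0, 4.0)
    print(p.length_squared())                        # 25.0
    print(double_lanes(SIMD[DType.float32, 4](1, 2, 3, 4)))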

Progressive Typing

Mojo supports both dynamic and static typing:

# Dynamic (Python-style)
def greet(name):
    print("Hello, " + name)

# Static (Mojo-style)
fn greet(name: String):
    print("Hello, " + name)

def functions behave like Python with dynamic types. fn functions require type annotations and provide compile-time guarantees.
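
As a small illustration of those compile-time guarantees (a toy example, not taken from the Mojo documentation):

fn add(x: Int, y: Int) -> Int:
    return x + y

def main():
    print(add(2, 3))        # OK: prints 5
    # print(add("2", 3))    # Rejected at compile time: "2" is not an Int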

Ownership and Borrowing

Inspired by Rust, Mojo has an ownership system for memory safety; a short example follows the list:

  • Owned values: The function takes ownership of the argument
  • Borrowed values: The function gets a read-only reference (default)
  • Mutable references: The function can modify the argument (inout)
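
A minimal sketch of the three conventions, written with the keywords described above (the default borrowed convention, inout, and owned). Later Mojo releases have renamed some of these keywords, so check the current documentation:

fn show(s: String):
    # Default convention: a read-only (borrowed) reference.
    print("borrowed:", s)

fn shout(inout s: String):
    # inout: the function mutates the caller's value in place.
    s += "!"

fn consume(owned s: String):
    # owned: the function takes ownership of the value.
    print("consumed:", s)

def main():
    var msg: String = "hello"
    show(msg)
    shout(msg)        # msg is now "hello!"
    consume(msg^)     # ^ explicitly transfers ownership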

Performance Claims

Modular has demonstrated remarkable performance numbers:

  • 68,000x faster than Python for the Mandelbrot benchmark kernel
  • Comparable to C++ for systems-level tasks
  • Near-CUDA performance for GPU workloads without writing CUDA
  • Zero-overhead abstractions for compiled code paths

These numbers come from Mojo’s ability to compile directly to optimized machine code through MLIR, bypassing the Python interpreter entirely for fn-annotated code.
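
As a toy illustration (not one of Modular's published benchmarks), an explicitly typed fn like the one below compiles to a straight machine-code loop, whereas the equivalent Python goes through the interpreter on every iteration:

fn sum_squares(n: Int) -> Int:
    var total: Int = 0
    for i in range(n):
        total += i * i
    return total

def main():
    print(sum_squares(1000000))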

The Two-Language Problem

The AI/ML industry has long suffered from needing two languages:

  1. Python for prototyping, research, and high-level logic
  2. C++/CUDA for production performance

This creates friction: researchers write in Python, then engineers rewrite in C++. Mojo aims to eliminate this divide by being one language that serves both purposes.

Current Status

As of early 2026, Mojo is rapidly evolving:

  • The standard library is open source under Apache 2.0
  • The compiler remains closed source (planned for open-sourcing by end of 2026)
  • Installation is via Pixi/conda package manager
  • Available on Linux (x86_64) and macOS (Apple Silicon)
  • Mojo 1.0 is targeted for H1 2026

While still pre-1.0, Mojo has generated significant interest in the AI/ML community as a potential successor to Python for performance-critical workloads.

Mojo vs. Other Languages

  • Compared to Python: Same syntax for simple code, dramatically faster for compiled paths
  • Compared to C++: Similar performance, far more accessible syntax, Python interop
  • Compared to Rust: Similar safety features, but built for AI/ML rather than systems programming
  • Compared to Julia: Both target scientific computing, but Mojo's Python compatibility is stronger

Mojo represents a bold bet: that one language can span from scripting convenience to systems-level performance. Whether it succeeds depends on achieving full Python compatibility while maintaining its performance advantages—a challenge that Chris Lattner’s track record suggests is achievable.

Timeline

2022
Modular Inc. founded by Chris Lattner (creator of LLVM, Swift, and Clang) and Tim Davis; Mojo development begins, built on MLIR (Multi-Level Intermediate Representation)
2023
Mojo publicly announced and launched in playground preview
2024
Mojo open-sources standard library under Apache 2.0; available via conda/pixi
2025
The Path to Mojo 1.0 announced; rapid feature development continues
2026
Mojo 0.26.1 released; 1.0 targeted for H1 2026

Notable Uses & Legacy

Modular MAX Platform

Mojo powers the MAX inference engine for deploying AI models with performance rivaling hand-tuned CUDA kernels.

AI/ML Performance

Mojo achieves up to 68,000x speedup over Python for certain computational kernels through MLIR compilation.

Scientific Computing

Systems-level performance with Python syntax makes Mojo attractive for numerical computing and data processing.

Hardware Acceleration

Direct access to SIMD, GPU kernels, and custom accelerators through low-level programming constructs.

Language Influence

Influenced By

Python, Rust, C++, Swift

Running Today

Run examples using the official Docker image:

docker pull codearchaeology/mojo:latest

Example usage:

docker run --rm -v $(pwd):/app -w /app codearchaeology/mojo:latest mojo hello.mojo
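
For reference, a minimal hello.mojo to pair with the command above:

def main():
    print("Hello, Mojo!")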
