Open your browser’s developer console right now and type 0.1 + 0.2. Go ahead, I’ll wait.
You expected 0.3, didn’t you? Instead, you got:
0.30000000000000004
This isn’t a JavaScript bug. It’s not a browser quirk. It’s a fundamental consequence of how computers represent numbers—and different programming languages have made radically different choices about how to handle it.
Understanding these choices explains why banks run COBOL instead of Node.js, why scientific computing favors certain languages, and why that rounding error in your e-commerce checkout might be costing you money.
The Problem: Binary Can’t Represent 0.1
Here’s the core issue: computers think in binary (base-2), but humans think in decimal (base-10).
In decimal, some fractions can’t be represented exactly. Try writing 1/3 as a decimal—you get 0.333333… repeating forever. We accept this limitation without much thought.
Binary has the same problem, but with different numbers. The decimal value 0.1 (one-tenth) cannot be represented exactly in binary. It becomes:
0.0001100110011001100110011001100110011001100110011...
That pattern repeats infinitely. When a computer stores 0.1 in a 64-bit floating-point number, it has to round—and the stored value is actually:
0.1000000000000000055511151231257827021181583404541015625
When you add two slightly-wrong numbers together, you get a slightly-wrong result. Hence: 0.1 + 0.2 = 0.30000000000000004.
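You can inspect the stored value yourself. A small sketch using Python's decimal module (used here purely as an inspection tool) converts the float to its exact binary value:

```python
from decimal import Decimal

# Decimal(float) exposes the exact value the 64-bit float actually stores
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
```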
Three Approaches to Numeric Handling
Programming languages have developed three fundamentally different strategies for dealing with numbers:
| Approach | How It Works | Trade-off |
|---|---|---|
| Fixed-size binary (IEEE 754) | Hardware-accelerated, 32 or 64 bits | Fast but inexact for decimals |
| Fixed-point decimal | Stores exact decimal digits | Exact for money, limited range |
| Arbitrary precision | Grows as needed, limited by memory | Exact and unlimited, but slower |
Let’s see how seven different languages implement these approaches.
1. JavaScript: The Worst of All Worlds
For most of its history, JavaScript had exactly one numeric type: the 64-bit IEEE 754 floating-point number. No integer type, no decimal type, no options.
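A minimal sketch you can paste into the console (all behavior below follows from IEEE 754 doubles):

```javascript
// Every classic JavaScript number is a 64-bit IEEE 754 double
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// "Integers" are doubles too: exact only up to 2^53 - 1
console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991
console.log(9007199254740992 === 9007199254740993); // true (!)
```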
Why JavaScript Made This Choice
JavaScript was created in 10 days in 1995 for simple web scripting. Using a single numeric type simplified the language. The assumption was that precise calculations would happen on servers, not in browsers.
Three decades later, JavaScript runs banking apps, cryptocurrency exchanges, and e-commerce checkouts. The single numeric type that seemed convenient in 1995 is now a constant source of bugs.
The Workaround
Modern JavaScript offers BigInt for arbitrary-precision integers, but no built-in solution for decimals:
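A sketch of BigInt and the common integer-cents workaround (variable names are illustrative):

```javascript
// BigInt (ES2020): arbitrary-precision integers, marked with a trailing n
const big = 9007199254740993n;
console.log(big + 1n);        // 9007199254740994n -- exact

// A common workaround for money: do all arithmetic in integer cents
const priceCents = 1999n;     // $19.99
console.log(priceCents * 3n); // 5997n, i.e. $59.97

// Note: BigInt and Number don't mix; 1n + 0.5 throws a TypeError
```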
2. Java: Fixed Types with Library Escape Hatches
Java provides multiple numeric types with explicit sizes, plus library classes for when precision matters.
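A sketch of the primitive types and their silent failure modes (class and variable names are illustrative):

```java
public class Primitives {
    public static void main(String[] args) {
        int i = Integer.MAX_VALUE;     // 32-bit signed
        System.out.println(i + 1);     // -2147483648: overflow wraps silently

        long l = 9223372036854775807L; // 64-bit signed
        System.out.println(l);

        double d = 0.1 + 0.2;          // IEEE 754 double
        System.out.println(d);         // 0.30000000000000004
    }
}
```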
BigDecimal: The Financial Solution
Java’s BigDecimal class provides arbitrary-precision decimal arithmetic:
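A sketch of BigDecimal in use (the class name is illustrative):

```java
import java.math.BigDecimal;

public class ExactMoney {
    public static void main(String[] args) {
        // String constructor: the exact decimal value you wrote
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.2");
        System.out.println(a.add(b));      // 0.3 -- exact

        // double constructor: captures the float's imprecision
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}
```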
Note the string constructor—using new BigDecimal(0.1) would capture the floating-point imprecision.
Why Java Made This Choice
Java balanced performance with safety. Primitive types use hardware acceleration for speed, while library classes provide precision when needed. The programmer explicitly chooses the trade-off.
3. Python: The Best of Multiple Worlds
Python takes a pragmatic approach: integers are arbitrary-precision by default, floats follow IEEE 754, and the decimal module provides exact decimal arithmetic when needed.
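A quick sketch of the defaults:

```python
# Integers are arbitrary-precision by default: no overflow, ever
print(2 ** 100)    # 1267650600228229401496703205376

# Floats are IEEE 754 doubles, same as everywhere else
print(0.1 + 0.2)   # 0.30000000000000004
```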
The Decimal Module
Python’s decimal module implements IBM’s General Decimal Arithmetic specification:
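A sketch of the module in use (the 50-digit precision setting is an arbitrary choice for illustration):

```python
from decimal import Decimal, getcontext

# Construct from strings so no float error sneaks in
print(Decimal("0.1") + Decimal("0.2"))   # 0.3 -- exact

# Precision is configurable (the default is 28 significant digits)
getcontext().prec = 50
print(Decimal(1) / Decimal(7))
# 0.14285714285714285714285714285714285714285714285714
```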
The Fraction Module
Python can even represent exact fractions:
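A sketch of Fraction arithmetic:

```python
from fractions import Fraction

print(Fraction(1, 3) + Fraction(1, 4))  # 7/12 -- exact
print(Fraction(2, 4))                   # 1/2 -- automatically reduced
print(Fraction(1, 3) * 3)               # 1 -- exactly
```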
Why Python Made This Choice
Python prioritizes programmer productivity and correctness over raw speed. Arbitrary-precision integers eliminate overflow bugs. The explicit decimal and fractions modules let programmers opt into exact arithmetic when needed.
4. C: Explicit Control, No Safety Net
C provides exactly what the hardware provides—nothing more, nothing less. You get IEEE 754 floating-point and fixed-size integers. Overflow and precision loss happen silently.
Why C Made This Choice
C was designed in 1972 to write operating systems. It provides direct access to hardware capabilities without abstraction overhead. The programmer is responsible for understanding the limitations.
For arbitrary precision in C, you need external libraries like GMP (GNU Multiple Precision Arithmetic Library).
5. COBOL: Why Banks Trust It With Your Money
COBOL takes a fundamentally different approach: fixed-point decimal arithmetic. Numbers are stored as exact decimal digits, not binary approximations.
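A minimal sketch (the program and data names are illustrative; the edited output field is one common convention for displaying an implied decimal point):

```cobol
       IDENTIFICATION DIVISION.
       PROGRAM-ID. ADD-MONEY.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 WS-A    PIC 9(5)V99 VALUE 0.10.
       01 WS-B    PIC 9(5)V99 VALUE 0.20.
       01 WS-SUM  PIC 9(5)V99.
       01 WS-OUT  PIC ZZZZ9.99.
       PROCEDURE DIVISION.
           ADD WS-A TO WS-B GIVING WS-SUM.
           MOVE WS-SUM TO WS-OUT.
           DISPLAY WS-OUT.
      *>   Typically displays "    0.30" -- exactly 0.30, no drift
           STOP RUN.
```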
The PIC 9(5)V99 declaration means: 5 digits before the decimal point, 2 after. The V marks where the decimal point goes—it’s not stored, just implied.
How COBOL Decimal Arithmetic Works
COBOL stores numbers in packed decimal (COMP-3) format:
- Each decimal digit takes 4 bits (a nibble)
- The decimal point position is fixed at compile time
- Arithmetic operates on the decimal representation directly
This means:
- 0.10 + 0.20 = 0.30 (exactly!)
- No binary conversion errors
- Results match what humans expect from calculator math
Why COBOL Made This Choice
COBOL was designed in 1959 for business data processing. Financial calculations must be exact—a bank can’t tell customers their balance is $100.00000000000001. The machines COBOL targeted (IBM mainframes) had special hardware for binary-coded decimal arithmetic, making this approach fast as well as accurate.
This is why 95% of ATM transactions still run through COBOL systems. It’s not inertia—it’s that COBOL’s numeric handling is genuinely better suited for financial calculations than IEEE 754 floating-point.
6. REXX: Arbitrary-Precision Decimal by Default
REXX takes decimal arithmetic to its logical extreme: all arithmetic is arbitrary-precision decimal, with programmer-controlled precision.
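A sketch (the 50-digit setting is an arbitrary illustration):

```rexx
/* All REXX values are strings; all arithmetic is decimal */
say 0.1 + 0.2      /* 0.3 -- exact */

numeric digits 50  /* precision is under programmer control */
say 1 / 3
/* 0.33333333333333333333333333333333333333333333333333 */
```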
How REXX Works
In REXX, all values are strings. When you perform arithmetic, REXX:
- Parses the string as a decimal number
- Performs the operation using decimal arithmetic
- Returns the result as a string
This sounds slow, but it’s remarkably practical for scripting tasks where correctness matters more than speed.
Why REXX Made This Choice
Mike Cowlishaw designed REXX at IBM in 1979 with a radical goal: make programming easy for humans. Human-friendly decimal arithmetic—the kind you’d do on paper or a calculator—was part of that vision.
Cowlishaw later created the IBM General Decimal Arithmetic specification, which influenced Python’s decimal module and other implementations.
7. Common Lisp: Exact Rational Numbers
Lisp takes yet another approach: in addition to floating-point and arbitrary-precision integers, it supports exact rational numbers as a built-in type.
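A sketch at the REPL (results shown are standard Common Lisp behavior):

```lisp
;; Dividing integers that don't divide evenly yields an exact ratio
(/ 1 3)            ; => 1/3
(+ 1/3 1/4)        ; => 7/12
(/ 2 4)            ; => 1/2 -- automatically reduced

;; Floats are still available, and still IEEE 754
(+ 0.1d0 0.2d0)    ; => 0.30000000000000004d0
```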
How Rationals Work
Lisp stores fractions as a numerator/denominator pair:
- 1/3 is stored as the integers 1 and 3
- Operations produce new fractions: 1/3 + 1/4 = 7/12
- Fractions are automatically reduced: 2/4 becomes 1/2
This means calculations that should produce exact results do produce exact results:
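A sketch:

```lisp
;; Ten dimes make exactly one dollar
(* 10 1/10)            ; => 1

;; Three thirds sum to exactly one
(+ 1/3 1/3 1/3)        ; => 1

;; The floating-point version drifts
(+ 0.1d0 0.1d0 0.1d0)  ; => 0.30000000000000004d0
```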
Why Lisp Made This Choice
Lisp was created in 1958 for artificial intelligence research. Symbolic computation—manipulating expressions exactly—was more important than raw numeric speed. Rational numbers fit naturally into this paradigm.
Comparison Table
Here’s how our seven languages handle the same calculations:
| Language | 0.1 + 0.2 | Large Integer | Exact 1/3 | Approach |
|---|---|---|---|---|
| JavaScript | 0.30000000000000004 | Loses precision | No | IEEE 754 only |
| Java | 0.30000000000000004 | Fixed 64-bit | BigDecimal | IEEE 754 + libraries |
| Python | 0.30000000000000004 | Exact | Decimal/Fraction modules | Multiple options |
| C | 0.30000000000000004 | Silent overflow | No (need GMP) | IEEE 754, explicit |
| COBOL | 0.30 | Fixed size | Fixed decimal | Packed decimal |
| REXX | 0.3 | Exact | Configurable precision | Arbitrary decimal |
| Lisp | 0.30000000000000004 (float) or 3/10 (ratio) | Exact | Yes (1/3) | Multiple types |
Real-World Consequences
The $327 Million Mars Climate Orbiter
In 1999, NASA’s Mars Climate Orbiter was lost because one team worked in metric units (newton-seconds) while another used US customary units (pound-force seconds). A type system that distinguished between units—like Ada’s strong typing—could have caught this at compile time.
The Patriot Missile Failure
During the Gulf War, a Patriot missile battery failed to intercept an incoming Scud missile, killing 28 soldiers. The cause: accumulated floating-point error in the system clock. After 100 hours of operation, the timing error reached 0.34 seconds—enough for the Scud to travel half a kilometer.
The Vancouver Stock Exchange
In 1982, the Vancouver Stock Exchange index was initialized at 1000. The index was recalculated thousands of times a day, and each result was truncated (not rounded) to three decimal places. After 22 months of accumulated truncation error, the index stood at 524—about half its correct value. Recomputing with correct arithmetic raised it to roughly 1098 overnight.
Every E-Commerce Site Ever
If your shopping cart calculates totals with floating-point math:
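A sketch of the failure and the usual integer-cents fix (variable names are illustrative):

```javascript
// Three $0.10 items in the cart, summed with float math:
const total = 0.1 + 0.1 + 0.1;
console.log(total);                    // 0.30000000000000004

// Safer: keep money in integer cents, convert only for display
const cents = 10 + 10 + 10;            // 30 -- exact
console.log((cents / 100).toFixed(2)); // "0.30"
```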
That trailing 0.00000000000000004 looks harmless, but errors like it accumulate, and a total sitting on a rounding boundary can round the wrong way. Multiply by millions of transactions, and you have a real problem—or a lawsuit.
Choosing the Right Approach
Use IEEE 754 Floating-Point When:
- Speed matters more than exact decimal representation
- You’re doing scientific/engineering calculations
- Small rounding errors are acceptable
- You’re working with graphics, physics simulations, or machine learning
Use Fixed-Point Decimal When:
- You’re handling money
- Regulatory compliance requires exact results
- You need to match human/calculator arithmetic
- Results must be auditable and reproducible
Use Arbitrary Precision When:
- You need integers larger than 64 bits (cryptography)
- Exact results matter more than performance
- You’re doing symbolic/mathematical computation
- The problem domain requires configurable precision
Key Takeaways
- 0.1 + 0.2 ≠ 0.3 is not a bug—it’s a consequence of binary floating-point representation
- Different languages make different trade-offs between speed, precision, and ease of use
- COBOL’s dominance in banking isn’t legacy inertia—its decimal arithmetic is genuinely superior for financial calculations
- JavaScript’s single numeric type is a historical accident that continues to cause bugs in production
- Libraries and alternative types exist for when you need exact arithmetic—use them
- Understand your domain: scientific computing tolerates small errors; financial computing doesn’t
Try It Yourself
Every language in this article is documented on CodeArchaeology with runnable Docker examples:
- JavaScript — IEEE 754 floating-point only
- Java — Fixed types with BigDecimal library
- Python — Arbitrary integers with decimal module
- C — Explicit hardware types
- COBOL — Fixed-point decimal arithmetic
- REXX — Arbitrary-precision decimal
- Common Lisp — Exact rational numbers
Or explore our encyclopedia of 1,200+ programming languages to see how other languages handle numeric computation.
Have you encountered floating-point bugs in production? Share your war stories on GitHub.