Why 0.1 + 0.2 ≠ 0.3: How Programming Languages Handle Numbers Differently

Open your browser’s developer console right now and type 0.1 + 0.2. Go ahead, I’ll wait.

You expected 0.3, didn’t you? Instead, you got:

0.30000000000000004

This isn’t a JavaScript bug. It’s not a browser quirk. It’s a fundamental consequence of how computers represent numbers—and different programming languages have made radically different choices about how to handle it.

Understanding these choices explains why banks run COBOL instead of Node.js, why scientific computing favors certain languages, and why that rounding error in your e-commerce checkout might be costing you money.


The Problem: Binary Can’t Represent 0.1

Here’s the core issue: computers think in binary (base-2), but humans think in decimal (base-10).

In decimal, some fractions can’t be represented exactly. Try writing 1/3 as a decimal—you get 0.333333… repeating forever. We accept this limitation without much thought.

Binary has the same problem, but with different numbers. The decimal value 0.1 (one-tenth) cannot be represented exactly in binary. It becomes:

0.0001100110011001100110011001100110011001100110011...

That pattern repeats infinitely. When a computer stores 0.1 in a 64-bit floating-point number, it has to round—and the stored value is actually:

0.1000000000000000055511151231257827021181583404541015625

When you add two slightly-wrong numbers together, you get a slightly-wrong result. Hence: 0.1 + 0.2 = 0.30000000000000004.
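You can inspect the stored value yourself. Python's Decimal constructor, for example, expands a binary double into its exact decimal digits:

```python
from decimal import Decimal

# Decimal(float) converts the stored binary64 value exactly, digit for digit
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

print(repr(0.1 + 0.2))   # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```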


Three Approaches to Numeric Handling

Programming languages have developed three fundamentally different strategies for dealing with numbers:

Approach                     | How It Works                        | Trade-off
Fixed-size binary (IEEE 754) | Hardware-accelerated, 32 or 64 bits | Fast but inexact for decimals
Fixed-point decimal          | Stores exact decimal digits         | Exact for money, limited range
Arbitrary precision          | Grows as needed, limited by memory  | Exact and unlimited, but slower

Let’s see how seven different languages implement these approaches.


1. JavaScript: The Worst of All Worlds

For most of its history, JavaScript had exactly one numeric type: the 64-bit IEEE 754 floating-point number. No integers, no decimals, no options. (BigInt, standardized in 2020, is the lone modern exception—more on it below.)

// The classic problem
console.log(0.1 + 0.2);           // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);   // false

// It gets worse
console.log(0.1 + 0.7);           // 0.7999999999999999
console.log(1.0 - 0.9);           // 0.09999999999999998

// Large integers lose precision too
console.log(9999999999999999);    // 10000000000000000 (!)

Why JavaScript Made This Choice

JavaScript was created in 10 days in 1995 for simple web scripting. Using a single numeric type simplified the language. The assumption was that precise calculations would happen on servers, not in browsers.

Three decades later, JavaScript runs banking apps, cryptocurrency exchanges, and e-commerce checkouts. The single numeric type that seemed convenient in 1995 is now a constant source of bugs.

The Workaround

Modern JavaScript offers BigInt for arbitrary-precision integers, but no built-in solution for decimals:

// BigInt works for integers
const big = 9999999999999999n;
console.log(big);  // 9999999999999999n (correct!)

// For decimals, use libraries or scale to integers
const priceInCents = 199;  // $1.99 as integer cents

Learn JavaScript →


2. Java: Fixed Types with Library Escape Hatches

Java provides multiple numeric types with explicit sizes, plus library classes for when precision matters.

// Primitive types with fixed precision
int i = 2147483647;           // 32-bit, max ~2.1 billion
long l = 9223372036854775807L; // 64-bit, max ~9.2 quintillion
float f = 0.1f + 0.2f;        // 32-bit float: 0.3 (luck!)
double d = 0.1 + 0.2;         // 64-bit double: 0.30000000000000004

// The floating-point problem exists
System.out.println(0.1 + 0.2);        // 0.30000000000000004
System.out.println(0.1 + 0.2 == 0.3); // false

BigDecimal: The Financial Solution

Java’s BigDecimal class provides arbitrary-precision decimal arithmetic:

import java.math.BigDecimal;

BigDecimal a = new BigDecimal("0.1");
BigDecimal b = new BigDecimal("0.2");
BigDecimal sum = a.add(b);

System.out.println(sum);                    // 0.3 (exact!)
System.out.println(sum.equals(new BigDecimal("0.3"))); // true

Note the string constructor—using new BigDecimal(0.1) would capture the floating-point imprecision.

Why Java Made This Choice

Java balanced performance with safety. Primitive types use hardware acceleration for speed, while library classes provide precision when needed. The programmer explicitly chooses the trade-off.

Learn Java →


3. Python: The Best of Multiple Worlds

Python takes a pragmatic approach: integers are arbitrary-precision by default, floats follow IEEE 754, and the decimal module provides exact decimal arithmetic when needed.

# Integers: arbitrary precision, just works
huge = 10 ** 100
print(huge)  # a googol: 1 followed by 100 zeros, exact

# Floats: same problem as everywhere
print(0.1 + 0.2)        # 0.30000000000000004
print(0.1 + 0.2 == 0.3) # False

The Decimal Module

Python’s decimal module implements IBM’s General Decimal Arithmetic specification:

from decimal import Decimal, getcontext

# Exact decimal arithmetic
a = Decimal('0.1')
b = Decimal('0.2')
print(a + b)        # 0.3 (exact!)
print(a + b == Decimal('0.3'))  # True

# Configurable precision
getcontext().prec = 50
print(Decimal(1) / Decimal(3))
# 0.33333333333333333333333333333333333333333333333333

The Fraction Module

Python can even represent exact fractions:

from fractions import Fraction

one_third = Fraction(1, 3)
print(one_third * 3)  # 1 (exact, not 0.9999...)

# Convert from float (warning: captures imprecision)
print(Fraction(0.1))  # 3602879701896397/36028797018963968
# Convert from string (exact)
print(Fraction('0.1'))  # 1/10

Why Python Made This Choice

Python prioritizes programmer productivity and correctness over raw speed. Arbitrary-precision integers eliminate overflow bugs. The explicit decimal and fractions modules let programmers opt into exact arithmetic when needed.

Learn Python →


4. C: Explicit Control, No Safety Net

C provides exactly what the hardware provides—nothing more, nothing less. You get IEEE 754 floating-point and fixed-size integers. Overflow and precision loss happen silently.

#include <stdio.h>

int main() {
    // The familiar problem
    double sum = 0.1 + 0.2;
    printf("%.17f\n", sum);  // 0.30000000000000004

    // Signed integer overflow: undefined behavior in C,
    // though in practice usually a silent wraparound
    int max = 2147483647;
    printf("%d\n", max + 1);  // typically -2147483648 (!)

    // Float precision loss
    float f = 16777216.0f;
    printf("%.1f\n", f + 1.0f);  // 16777216.0 (1 was lost!)

    return 0;
}

Why C Made This Choice

C was designed in 1972 to write operating systems. It provides direct access to hardware capabilities without abstraction overhead. The programmer is responsible for understanding the limitations.

For arbitrary precision in C, you need external libraries like GMP (GNU Multiple Precision Arithmetic Library).

Learn C →


5. COBOL: Why Banks Trust It With Your Money

COBOL takes a fundamentally different approach: fixed-point decimal arithmetic. Numbers are stored as exact decimal digits, not binary approximations.

IDENTIFICATION DIVISION.
PROGRAM-ID. MONEY-MATH.

DATA DIVISION.
WORKING-STORAGE SECTION.
01 PRICE        PIC 9(5)V99 VALUE 19.99.
01 TAX-RATE     PIC V999    VALUE 0.075.
01 TAX-AMOUNT   PIC 9(5)V99.
01 TOTAL        PIC 9(5)V99.

PROCEDURE DIVISION.
    COMPUTE TAX-AMOUNT = PRICE * TAX-RATE.
    COMPUTE TOTAL = PRICE + TAX-AMOUNT.
    DISPLAY "Price: $" PRICE.
    DISPLAY "Tax:   $" TAX-AMOUNT.
    DISPLAY "Total: $" TOTAL.
    STOP RUN.

The PIC 9(5)V99 declaration means: 5 digits before the decimal point, 2 after. The V marks where the decimal point goes—it’s not stored, just implied.

How COBOL Decimal Arithmetic Works

With the common COMP-3 usage clause, COBOL stores numbers in packed decimal format:

  • Each decimal digit takes 4 bits (a nibble)
  • The decimal point position is fixed at compile time
  • Arithmetic operates on the decimal representation directly

This means:

  • 0.10 + 0.20 = 0.30 (exactly!)
  • No binary conversion errors
  • Results match what humans expect from calculator math
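The same fixed-point idea can be imitated in any language by scaling amounts to integers. Here is a minimal Python sketch (the SCALE constant and to_fixed helper are ours, not COBOL's) that mirrors the two implied decimal places of PIC 9(5)V99:

```python
# Fixed-point money: keep amounts as integer cents so arithmetic is exact.
SCALE = 100  # two implied decimal places, like the V in PIC 9(5)V99

def to_fixed(s: str) -> int:
    """Parse a decimal string like '19.99' into scaled-integer cents."""
    whole, _, frac = s.partition('.')
    return int(whole) * SCALE + int(frac.ljust(2, '0')[:2])

price = to_fixed('19.99')      # 1999 cents
tax = price * 75 // 1000       # 7.5% tax; // truncates, as COMPUTE without ROUNDED does
total = price + tax            # 2148 cents
print(f"Total: ${total // SCALE}.{total % SCALE:02d}")  # Total: $21.48
```

Note that the truncating division reproduces COBOL's default behavior: extra digits beyond the declared decimal places are simply dropped unless ROUNDED is specified.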

Why COBOL Made This Choice

COBOL was designed in 1959 for business data processing. Financial calculations must be exact—a bank can’t tell customers their balance is $100.00000000000001. The machines COBOL targeted (IBM mainframes) had special hardware for binary-coded decimal arithmetic, making this approach fast as well as accurate.

This is why 95% of ATM transactions still run through COBOL systems. It’s not inertia—it’s that COBOL’s numeric handling is genuinely better suited for financial calculations than IEEE 754 floating-point.

Learn COBOL →


6. REXX: Arbitrary-Precision Decimal by Default

REXX takes decimal arithmetic to its logical extreme: all arithmetic is arbitrary-precision decimal, with programmer-controlled precision.

/* REXX - Arbitrary precision decimal */
say 0.1 + 0.2           /* 0.3 (exact!) */

/* Default precision is 9 digits */
say 1/3                 /* 0.333333333 */

/* But you can set any precision you want */
numeric digits 50
say 1/3
/* 0.33333333333333333333333333333333333333333333333333 */

/* Work with pi to 40 decimal places */
numeric digits 45
pi = 3.1415926535897932384626433832795028841971
say pi * 2
/* 6.2831853071795864769252867665590057683942 */

How REXX Works

In REXX, all values are strings. When you perform arithmetic, REXX:

  1. Parses the string as a decimal number
  2. Performs the operation using decimal arithmetic
  3. Returns the result as a string

This sounds slow, but it’s remarkably practical for scripting tasks where correctness matters more than speed.
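Python's decimal module descends from the same Cowlishaw specification, so you can approximate REXX's behavior with it—a rough sketch, using REXX's default of nine significant digits:

```python
from decimal import Decimal, getcontext

getcontext().prec = 9                    # REXX's default: NUMERIC DIGITS 9
print(Decimal('0.1') + Decimal('0.2'))   # 0.3 (exact)
print(Decimal(1) / Decimal(3))           # 0.333333333

getcontext().prec = 50                   # the analog of NUMERIC DIGITS 50
print(Decimal(1) / Decimal(3))
# 0.33333333333333333333333333333333333333333333333333
```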

Why REXX Made This Choice

Mike Cowlishaw designed REXX at IBM in 1979 with a radical goal: make programming easy for humans. Human-friendly decimal arithmetic—the kind you’d do on paper or a calculator—was part of that vision.

Cowlishaw later created the IBM General Decimal Arithmetic specification, which influenced Python’s decimal module and other implementations.

Learn REXX →


7. Common Lisp: Exact Rational Numbers

Lisp takes yet another approach: in addition to floating-point and arbitrary-precision integers, it supports exact rational numbers as a built-in type.

;; Integers: arbitrary precision
(* 1000000000000000000000 1000000000000000000000)
;; => 1000000000000000000000000000000000000000000

;; Rationals: exact fractions
(+ 1/10 2/10)      ;; => 3/10 (exact ratio, not 0.3)
(* 1/3 3)          ;; => 1 (exact, not 0.9999...)

;; Rationals stay exact through complex calculations
(+ 1/7 2/7 3/7 1/7)  ;; => 1

;; Convert to float only when needed
(float 3/10)       ;; => 0.3

How Rationals Work

Lisp stores fractions as a numerator/denominator pair:

  • 1/3 is stored as the integers 1 and 3
  • Operations produce new fractions: 1/3 + 1/4 = 7/12
  • Fractions are automatically reduced: 2/4 becomes 1/2

This means calculations that should produce exact results do produce exact results:

;; Computing interest exactly
(defvar principal 1000)
(defvar rate 1/12)  ; Monthly rate as fraction
(defvar interest (* principal rate))
;; => 250/3 (exactly 83.333...)

;; Compare to floating point (1.0 is a single-float by default)
(* 1000 (/ 1.0 12))
;; => 83.333336 (approximate)

Why Lisp Made This Choice

Lisp was created in 1958 for artificial intelligence research. Symbolic computation—manipulating expressions exactly—was more important than raw numeric speed. Rational numbers fit naturally into this paradigm.

Learn Common Lisp →


Comparison Table

Here’s how our seven languages handle the same calculations:

Language    | 0.1 + 0.2                                   | Large integers  | Exact 1/3                  | Approach
JavaScript  | 0.30000000000000004                         | Loses precision | No                         | IEEE 754 only
Java        | 0.30000000000000004                         | Fixed 64-bit    | No (BigDecimal must round) | IEEE 754 + libraries
Python      | 0.30000000000000004                         | Exact           | Yes (Fraction)             | Multiple options
C           | 0.30000000000000004                         | Silent overflow | No (need GMP)              | IEEE 754, explicit
COBOL       | 0.30                                        | Fixed size      | No (fixed decimal)         | Packed decimal
REXX        | 0.3                                         | Exact           | No (rounds at set digits)  | Arbitrary decimal
Common Lisp | 0.30000000000000004 (float) or 3/10 (ratio) | Exact           | Yes (1/3)                  | Multiple types

Real-World Consequences

The $327 Million Mars Climate Orbiter

In 1999, NASA’s Mars Climate Orbiter was lost because one team used metric units and another used imperial. A type system that distinguished between units—like Ada’s strong typing—would have caught this at compile time.

The Patriot Missile Failure

During the Gulf War, a Patriot missile battery failed to intercept an incoming Scud missile, killing 28 soldiers. The cause: accumulated floating-point error in the system clock. After 100 hours of operation, the timing error reached 0.34 seconds—enough for the Scud to travel half a kilometer.
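The arithmetic behind that failure is easy to reproduce. Here is a back-of-the-envelope Python sketch, following the GAO report's description of 0.1 chopped to fit a 24-bit fixed-point register (the Scud velocity is our rough assumption):

```python
import math

# The Patriot's clock counted tenths of a second and multiplied the count
# by 0.1 held in a 24-bit fixed-point register. Per the GAO report, that
# amounts to chopping 0.1 after 23 significant fractional bits.
stored_tenth = math.floor(0.1 * 2**23) / 2**23
error_per_tick = 0.1 - stored_tenth            # about 0.000000095 seconds

ticks = 100 * 3600 * 10                        # clock ticks in 100 hours
drift = ticks * error_per_tick
print(f"clock drift after 100 hours: {drift:.2f} s")   # ~0.34 s

scud_speed_m_s = 1676                          # rough Scud velocity (assumed)
print(f"tracking error: {drift * scud_speed_m_s:.0f} m")  # ~575 m
```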

The Vancouver Stock Exchange

In 1982, the Vancouver Stock Exchange index was initialized at 1000. The index was recalculated after every trade—and truncated, rather than rounded, to three decimal places each time. After 22 months of accumulated truncation error, the index stood at 524—roughly half its correct value. When the exchange recomputed the index with proper rounding, it jumped to about 1098 overnight.

Every E-Commerce Site Ever

If your shopping cart calculates totals with floating-point math:

// Price: $19.99, converted to integer cents
19.99 * 100               // 1998.9999999999998
Math.floor(19.99 * 100)   // 1998 (a cent just vanished!)

Truncate that float to whole cents and a penny silently disappears. Multiply by millions of transactions, and you have a real problem—or a lawsuit.
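These errors also compound over repeated operations. A quick Python sketch contrasting float accumulation with the integer-cents approach:

```python
# Accumulate a ten-cent charge one million times, two ways.
total_float = 0.0   # IEEE 754 double
total_cents = 0     # integer cents, the fixed-point approach
for _ in range(1_000_000):
    total_float += 0.10
    total_cents += 10

print(total_float)        # slightly above 100000.0 -- the error compounded
print(total_cents / 100)  # 100000.0 exactly
```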


Choosing the Right Approach

Use IEEE 754 Floating-Point When:

  • Speed matters more than exact decimal representation
  • You’re doing scientific/engineering calculations
  • Small rounding errors are acceptable
  • You’re working with graphics, physics simulations, or machine learning
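Even then, never compare floating-point results with ==; compare within a tolerance instead. Shown here in Python, where math.isclose does exactly that:

```python
import math

a = 0.1 + 0.2
print(a == 0.3)              # False: exact equality is the wrong tool
print(math.isclose(a, 0.3))  # True: relative tolerance (1e-9 by default)

# the same check by hand with an absolute tolerance
print(abs(a - 0.3) < 1e-9)   # True
```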

Use Fixed-Point Decimal When:

  • You’re handling money
  • Regulatory compliance requires exact results
  • You need to match human/calculator arithmetic
  • Results must be auditable and reproducible

Use Arbitrary Precision When:

  • You need integers larger than 64 bits (cryptography)
  • Exact results matter more than performance
  • You’re doing symbolic/mathematical computation
  • The problem domain requires configurable precision

Key Takeaways

  1. 0.1 + 0.2 ≠ 0.3 is not a bug—it’s a consequence of binary floating-point representation

  2. Different languages make different trade-offs between speed, precision, and ease of use

  3. COBOL’s dominance in banking isn’t legacy inertia—its decimal arithmetic is genuinely superior for financial calculations

  4. JavaScript’s single numeric type is a historical accident that continues to cause bugs in production

  5. Libraries and alternative types exist for when you need exact arithmetic—use them

  6. Understand your domain: scientific computing tolerates small errors; financial computing doesn’t


Try It Yourself

Every language in this article is documented on CodeArchaeology with runnable Docker examples:

  • JavaScript — IEEE 754 floating-point only
  • Java — Fixed types with BigDecimal library
  • Python — Arbitrary integers with decimal module
  • C — Explicit hardware types
  • COBOL — Fixed-point decimal arithmetic
  • REXX — Arbitrary-precision decimal
  • Common Lisp — Exact rational numbers

Or explore our encyclopedia of 1,200+ programming languages to see how other languages handle numeric computation.


Have you encountered floating-point bugs in production? Share your war stories on GitHub.
