Steve Marlow
5 min read · May 4, 2025

Is Floating-Point Math Broken? Demystifying Computer Arithmetic

Why floating-point math in programming leads to unexpected results and how to handle it effectively for reliable code.

Have you ever added 0.1 and 0.2 in your code, only to get something like 0.30000000000000004? If you're nodding, you're not alone—it's a classic gotcha in programming that makes floating-point math seem downright unreliable. In this article, we'll unpack why this happens, how computers handle numbers under the hood, and practical ways to work around these quirks. Whether you're a beginner debugging weird results or a seasoned developer optimizing calculations, understanding floating-point arithmetic can save you headaches and build your confidence in writing robust code.

What Is Floating-Point Math and Why Does It Matter?

Floating-point math is the way computers represent and perform calculations with decimal numbers, like 3.14 or 0.001. Unlike integers, which are straightforward whole numbers, floating-point values approximate real numbers because computers use binary (base-2) systems, not decimal (base-10). This approximation leads to tiny errors that can accumulate, causing results that don't match what you'd expect on paper.
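
You can see the approximation directly by asking for more digits than the default output shows; toFixed is a built-in JavaScript and TypeScript method that formats a number with a fixed count of digits after the decimal point:

// Printing extra digits reveals the nearest binary values actually stored.
console.log((0.1).toFixed(20));        // 0.10000000000000000555
console.log((0.2).toFixed(20));        // 0.20000000000000001110
console.log((0.1 + 0.2).toFixed(20));  // 0.30000000000000004441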

Think of it like measuring a piece of string with a ruler that's only marked in inches: you'll get a consistent answer, but if you need millimeters, you'll end up rounding. In programming, this matters because financial apps, scientific simulations, and even game physics rely on accurate decimals. Ignoring these quirks can lead to bugs that are hard to spot, like off-by-a-fraction errors in a banking system.

The Culprit: How Computers Store Floating-Point Numbers

At the heart of the problem is the IEEE 754 standard, which most programming languages follow for floating-point arithmetic. This standard defines how numbers are represented in binary: a sign bit, an exponent, and a mantissa (or significand). Essentially, it's like scientific notation in binary—great for a wide range of values, but not perfect for every decimal.
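
If you're curious what those three fields look like for a real value, a few lines of TypeScript can dump the raw bits of a double; DataView and Uint8Array are standard JavaScript features, and the field widths shown are the IEEE 754 double-precision layout:

// Write 0.1 into an 8-byte buffer (big-endian) and read back the raw bits.
const buffer = new ArrayBuffer(8);
new DataView(buffer).setFloat64(0, 0.1);
const bits = Array.from(new Uint8Array(buffer))
    .map((byte) => byte.toString(2).padStart(8, "0"))
    .join("");

console.log(bits.slice(0, 1));   // 1 sign bit
console.log(bits.slice(1, 12));  // 11 exponent bits
console.log(bits.slice(12));     // 52 significand (mantissa) bits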

For example, the decimal 0.1 can't be represented exactly in binary; it's a repeating fraction, similar to how 1/3 is 0.333... in decimal. When you add 0.1 and 0.2, the computer approximates both, and their sum might not hit exactly 0.3. Let's see this in action with some TypeScript code:

function addFloats(a: number, b: number): number {
    return a + b;
}

console.log(addFloats(0.1, 0.2));  // Outputs something like 0.30000000000000004

Here, the result isn't precisely 0.3 due to the binary approximation. This isn't a bug in your language; it's just how floating-point works. Most modern languages, like JavaScript or Python, use double-precision floats by default, which offer about 15 to 17 significant decimal digits of precision, but that's still not infinite.

Common Pitfalls and Real-World Examples

Floating-point errors pop up in unexpected places. In a game, they might cause a character to clip through a wall due to tiny positioning inaccuracies. In data analysis, they could skew statistical results if not handled properly.

  • Precision loss in loops: Repeated additions can amplify errors. For instance, adding 0.1 ten times might not equal 1.0 exactly.
  • Comparisons gone wrong: Never check whether two floats are exactly equal; use a tolerance instead (see the sketch after this list). It's like comparing heights: "is this person exactly 6 feet?" is less useful than "is this person about 6 feet?".
  • Currency calculations: Avoid floats for money; use integers representing cents or specialized libraries to prevent rounding errors that could cost real dollars.
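
To make the first two pitfalls concrete, here's a small sketch; approximatelyEqual is a hypothetical helper written for this article, and the 1e-9 tolerance is an arbitrary illustrative choice, not a universal constant:

// Repeated additions let tiny errors pile up.
let total = 0;
for (let i = 0; i < 10; i++) {
    total += 0.1;
}
console.log(total);          // 0.9999999999999999
console.log(total === 1.0);  // false

// Compare with a tolerance instead of exact equality.
function approximatelyEqual(a: number, b: number, epsilon: number = 1e-9): boolean {
    return Math.abs(a - b) < epsilon;
}

console.log(approximatelyEqual(total, 1.0));  // true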

To illustrate a fix, here's a simple TypeScript function that rounds results to a specific decimal place:

function safeAdd(a: number, b: number, decimals: number = 2): number {
    // Scale up, round to the nearest integer, then scale back down.
    const factor = 10 ** decimals;
    return Math.round((a + b) * factor) / factor;
}

console.log(safeAdd(0.1, 0.2));  // Outputs 0.3

This approach isn't foolproof but helps in scenarios where you need readable outputs.
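
If all you need is a readable string rather than a corrected number, the built-in toFixed method covers that case; note that it returns a string, not a number:

const sum = 0.1 + 0.2;
console.log(sum);            // 0.30000000000000004
console.log(sum.toFixed(2)); // "0.30" (a string, formatted for display only)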

Practical Tips for Handling Floating-Point Arithmetic

So, how do you minimize these issues? Start by choosing the right tool for the job:

  • Use libraries: For precise decimal work, libraries like Big.js or Decimal.js in JavaScript handle arbitrary-precision arithmetic, much like using a high-precision scale in a lab (see the sketch after this list).
  • Decimal types: In Java, reach for BigDecimal instead of standard floats; in C#, the built-in decimal type serves the same purpose.
  • Avoid unnecessary operations: Chain fewer calculations to reduce error buildup—it's like minimizing handoffs in a relay race to keep the baton from slipping.
  • Testing and validation: Always include tests that check for floating-point anomalies, such as asserting values are within a small epsilon range.
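
As a sketch of the library route mentioned above, here's what an exact decimal sum might look like with Big.js, assuming the package is installed (npm install big.js); the calls shown (the Big constructor, plus, eq, toString) follow the library's documented basics, but check the docs for your version:

import Big from "big.js";

// Big.js keeps values as decimal digits, so 0.1 and 0.2 are represented exactly.
const result = new Big(0.1).plus(0.2);

console.log(result.toString());       // "0.3"
console.log(result.eq(new Big(0.3))); // true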

In summary, floating-point math isn't "broken"; it's a deliberate trade-off that exchanges exact decimal representation for speed and range. Understanding its limitations lets you write more reliable code, whether you're building apps or crunching data.

Wrapping Up: Embrace the Imperfections

Floating-point arithmetic is a fundamental part of computing, quirks and all. By grasping how it works and applying smart workarounds, you can avoid common pitfalls and create more accurate programs. Next time you encounter a mysterious decimal discrepancy, remember it's not the computer being difficult—it's just doing its best with binary tools. Dive deeper into resources like the IEEE 754 specification or experiment with code to solidify your knowledge.