


Floating-point numbers are a fast and efficient way to store and work with numbers, but they come with a range of pitfalls that have surely stumped many fledgling programmers — perhaps some experienced programmers, too! The classic example demonstrating the pitfalls of floats goes like this:

>>> 0.1 + 0.2 == 0.3
False

Seeing this for the first time can be disorienting. But don't throw your computer in the trash bin. This behavior is correct!

This article will show you why floating-point errors like the one above are common, why they make sense, and what you can do to deal with them in Python.


Your Computer is a Liar... Sort Of

You've seen that 0.1 + 0.2 is not equal to 0.3, but the madness doesn't stop there. Here are some more confounding examples:

>>> 0.2 + 0.2 + 0.2 == 0.6
False

>>> 1.1 + 2.2 == 3.3
False

>>> 1.2 + 2.4 + 3.6 == 7.2
False

The issue isn't restricted to equality comparisons, either:

>>> 0.1 + 0.2 <= 0.3
False

>>> 10.4 + 20.8 > 31.2
True

>>> 0.8 - 0.1 > 0.7
True

So what's going on? Is your computer lying to you? It sure looks like it, but there's more going on beneath the surface.

When you type the number 0.1 into the Python interpreter, it gets stored in memory as a floating-point number. There's a conversion that takes place when this happens. 0.1 is a decimal in base 10, but floating-point numbers are stored in binary. In other words, 0.1 gets converted from base 10 to base 2.

The resulting binary number may not accurately represent the original base 10 number. 0.1 is one example. Its binary representation is \(0.0\overline{0011}\). That is, 0.1 becomes an infinitely repeating fraction when written in base 2, just as the fraction ⅓ becomes the infinitely repeating decimal \(0.\overline{3}\) when written in base 10.



Computer memory is finite, so the infinitely repeating binary fraction representation of 0.1 gets rounded to a finite fraction. The exact value depends on the floating-point format used; Python's float is almost always the IEEE 754 double-precision (64-bit) format. One way to see the floating-point value that gets stored for 0.1 is to use the .as_integer_ratio() method for floats to get the numerator and denominator of the floating-point representation:

>>> numerator, denominator = (0.1).as_integer_ratio()
>>> f"0.1 ≈ {numerator} / {denominator}"
'0.1 ≈ 3602879701896397 / 36028797018963968'

Now use format() to show the fraction accurate to 55 decimal places:

>>> format(numerator / denominator, ".55f")
'0.1000000000000000055511151231257827021181583404541015625'

So 0.1 gets rounded to a number slightly larger than its true value.

🐍
Learn more about number methods like .as_integer_ratio() in my article 3 Things You Might Not Know About Numbers in Python.

This error, known as floating-point representation error, happens way more often than you might realize.


Representation Error is Really Common

There are three reasons that a number gets rounded when represented as a floating-point number:

  1. The number has more significant digits than the floating-point format allows.
  2. The number is irrational.
  3. The number is rational but has a non-terminating binary representation.

64-bit floating-point numbers are good for about 16 or 17 significant digits. Any number with more significant digits gets rounded. Irrational numbers, like π and e, can't be represented by any terminating fraction in any integer base. So again, no matter what, irrational numbers will get rounded when stored as floats.
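
You can see both effects in the interpreter. The literal below has 20 significant digits, so Python stores the nearest float, and math.pi is only the closest 64-bit float to π, whose true decimal expansion continues 3.14159265358979323846...:

>>> # More significant digits than a 64-bit float can hold
>>> 0.12345678901234567890
0.12345678901234568

>>> # The closest float to pi, good to about 16 significant digits
>>> import math
>>> math.pi
3.141592653589793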

These two situations create an infinite set of numbers that can't be exactly represented as a floating-point number. But unless you're a chemist dealing with tiny numbers, or a physicist dealing with astronomically large numbers, you're unlikely to run into these problems.

What about non-terminating rational numbers, like 0.1 in base 2? This is where you'll encounter most of your floating-point woes, and thanks to the math that determines whether or not a fraction terminates, you'll brush up against representation error more often than you think.

In base 10, a fraction in lowest terms terminates if its denominator is a product of powers of the prime factors of 10. The two prime factors of 10 are 2 and 5, so fractions like ½, ¼, ⅕, ⅛, and ⅒ all terminate, but ⅓, ⅐, and ⅑ do not. In base 2, however, there is only one prime factor: 2. So only fractions whose lowest-terms denominator is a power of 2 terminate. As a result, fractions like ⅓, ⅕, ⅙, ⅐, ⅑, and ⅒ are all non-terminating when expressed in binary.
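
If you'd like to check this yourself, here's a small sketch (the helper name is mine): a fraction in lowest terms has a terminating binary expansion exactly when its denominator is a power of two:

>>> from fractions import Fraction

>>> def terminates_in_binary(fraction):
...     """True if the fraction's binary expansion terminates."""
...     # Fraction reduces to lowest terms automatically, so just
...     # check whether the denominator is a power of two
...     d = fraction.denominator
...     return d & (d - 1) == 0
...
>>> terminates_in_binary(Fraction(1, 8))   # 1/8 = 0.001 in binary
True
>>> terminates_in_binary(Fraction(1, 10))  # 1/10 repeats forever
False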

You can now understand the original example in this article. 0.1, 0.2, and 0.3 all get rounded when converted to floating-point numbers:

>>> # -----------vvvv  Display with 17 significant digits
>>> format(0.1, ".17g")
'0.10000000000000001'

>>> format(0.2, ".17g")
'0.20000000000000001'

>>> format(0.3, ".17g")
'0.29999999999999999'

When 0.1 and 0.2 are added, the result is a number slightly larger than 0.3:

>>> 0.1 + 0.2
0.30000000000000004

Since 0.1 + 0.2 is slightly larger than 0.3, and 0.3 gets represented by a number slightly smaller than itself, the expression 0.1 + 0.2 == 0.3 evaluates to False.
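
You can see both effects with format():

>>> # The sum is stored as a number slightly larger than 0.3...
>>> format(0.1 + 0.2, ".20f")
'0.30000000000000004441'

>>> # ...while 0.3 itself is stored as a slightly smaller number
>>> format(0.3, ".20f")
'0.29999999999999998890'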

Floating-point representation error is something every programmer in every language needs to be aware of and know how to handle. It's not specific to Python. You can see the result of printing 0.1 + 0.2 in many different languages over at Erik Wiffin's aptly named website 0.30000000000000004.com.



How To Compare Floats in Python

So, how do you deal with floating-point representation errors when comparing floats in Python? The trick is to avoid checking for equality. Never use ==, >=, or <= with floats. Use the math.isclose() function instead:

>>> import math
>>> math.isclose(0.1 + 0.2, 0.3)
True

math.isclose() checks whether the first argument is acceptably close to the second. But what exactly does that mean? The key idea is to examine the distance between the two arguments, which is the absolute value of their difference:

>>> a = 0.1 + 0.2
>>> b = 0.3
>>> abs(a - b)
5.551115123125783e-17

If abs(a - b) is smaller than some percentage of the larger of abs(a) or abs(b), then a is considered sufficiently close to b to be "equal" to b. This percentage is called the relative tolerance. You can specify it with the rel_tol keyword argument of math.isclose(), which defaults to 1e-9. In other words, if abs(a - b) is less than 1e-9 * max(abs(a), abs(b)), then a and b are considered "close" to each other. This guarantees that a and b agree to about nine significant digits.
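
You can verify the relative check by hand:

>>> a, b = 0.1 + 0.2, 0.3
>>> # The same test math.isclose() applies with its default rel_tol
>>> abs(a - b) <= 1e-9 * max(abs(a), abs(b))
True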

You can change the relative tolerance if you need to:

>>> math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-20)
False

Of course, the relative tolerance depends on constraints set by the problem you're solving. For most everyday applications, however, the default relative tolerance should suffice.

There's a problem if one of a or b is zero and rel_tol is less than one, however. In that case, no matter how close the nonzero value is to zero, the relative tolerance guarantees that the check for closeness will always fail. In this case, using an absolute tolerance works as a fallback:

>>> # Relative check fails!
>>> # ---------------vvvv  Relative tolerance
>>> # ----------------------vvvvv  max(0, 1e-10)
>>> abs(0 - 1e-10) < 1e-9 * 1e-10
False

>>> # Absolute check works!
>>> # ---------------vvvv  Absolute tolerance
>>> abs(0 - 1e-10) < 1e-9
True

math.isclose() will do this check for you automatically. The abs_tol keyword argument determines the absolute tolerance. However, abs_tol defaults to 0.0, so you'll need to set it manually if you need to check how close a value is to zero.
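
For example:

>>> math.isclose(1e-10, 0.0)
False
>>> math.isclose(1e-10, 0.0, abs_tol=1e-9)
True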

All in all, math.isclose() returns the result of the following comparison, which combines the relative and absolute tests into a single expression:

abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

math.isclose() was introduced in PEP 485 and has been available since Python 3.5.
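
If it helps to see everything in one place, here's a minimal pure-Python sketch of that comparison (the function name is mine; the real math.isclose() is implemented in C and also handles infinities and NaN):

>>> def isclose_sketch(a, b, rel_tol=1e-9, abs_tol=0.0):
...     """Simplified stand-in for math.isclose(), for illustration."""
...     return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
...
>>> isclose_sketch(0.1 + 0.2, 0.3)
True
>>> isclose_sketch(1e-10, 0.0, abs_tol=1e-9)
True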


When Should You Use math.isclose()?

In general, you should use math.isclose() whenever you need to compare floating-point values. Replace == with math.isclose():

>>> # Don't do this:
>>> 0.1 + 0.2 == 0.3
False

>>> # Do this instead:
>>> math.isclose(0.1 + 0.2, 0.3)
True

You also need to be careful with >= and <= comparisons. Handle the equality separately using math.isclose() and then check the strict comparison:

>>> a, b, c = 0.1, 0.2, 0.3

>>> # Don't do this:
>>> a + b <= c
False

>>> # Do this instead:
>>> math.isclose(a + b, c) or (a + b < c)
True

Various alternatives to math.isclose() exist. If you use NumPy, you can leverage numpy.allclose() and numpy.isclose():

>>> import numpy as np

>>> # Use numpy.allclose() to check if two arrays are equal
>>> # to each other within a tolerance.
>>> np.allclose([1e10, 1e-7], [1.00001e10, 1e-8])
False

>>> np.allclose([1e10, 1e-8], [1.00001e10, 1e-9])
True

>>> # Use numpy.isclose() to check if the elements of two arrays
>>> # are equal to each other within a tolerance
>>> np.isclose([1e10, 1e-7], [1.00001e10, 1e-8])
array([ True, False])

>>> np.isclose([1e10, 1e-8], [1.00001e10, 1e-9])
array([ True, True])

Keep in mind that the default relative and absolute tolerances are not the same as those of math.isclose(). The default relative tolerance for both numpy.allclose() and numpy.isclose() is 1e-05, and the default absolute tolerance for both is 1e-08.
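
Both functions accept rtol and atol keyword arguments if you need different tolerances. Note that NumPy's check is asymmetric: it tests abs(a - b) <= atol + rtol * abs(b), so swapping the arguments can change the result:

>>> # Tighter tolerances make the first pair fail the check
>>> np.allclose([1e10, 1e-8], [1.00001e10, 1e-9], rtol=1e-6, atol=0.0)
False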

math.isclose() is especially useful for unit tests, although there are some alternatives. Python's built-in unittest module has a unittest.TestCase.assertAlmostEqual() method. However, that method only uses an absolute difference test. It's also an assertion, meaning that failures raise an AssertionError, making it unsuitable for comparisons in your business logic.
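
Here's what that looks like in practice. This is a minimal sketch (the test class and method names are mine); by default, assertAlmostEqual() checks that round(a - b, 7) == 0:

import unittest

class TestFloatSums(unittest.TestCase):
    def test_sum_is_almost_equal(self):
        # Passes: (0.1 + 0.2) - 0.3 rounds to zero at 7 decimal places
        self.assertAlmostEqual(0.1 + 0.2, 0.3)

if __name__ == "__main__":
    unittest.main()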

A great alternative to math.isclose() for unit testing is the pytest.approx() function from the pytest package. Unlike math.isclose(), pytest.approx() only takes one argument — namely, the value you expect:

>>> import pytest
>>> 0.1 + 0.2 == pytest.approx(0.3)
True

pytest.approx() has rel and abs keyword arguments for setting the relative and absolute tolerances. The default values are different from math.isclose(), however: rel defaults to 1e-6 and abs defaults to 1e-12.
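
For instance, tightening both tolerances makes the familiar comparison fail again. You have to lower abs as well, because the default absolute tolerance of 1e-12 would otherwise let the check pass:

>>> 0.1 + 0.2 == pytest.approx(0.3, rel=1e-20, abs=0)
False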

If the argument passed to pytest.approx() is array-like, meaning it's a Python iterable like a list or a tuple, or even a NumPy array, then the comparison behaves similarly to numpy.allclose() and checks whether the two arrays are equal within the tolerances:

>>> import numpy as np                                                          
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == pytest.approx(np.array([0.3, 0.6])) 
True

pytest.approx() will even work with dictionary values:

>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == pytest.approx({'a': 0.3, 'b': 0.6})
True

Floating-point numbers are great for working with numbers whenever absolute precision isn't needed. They are fast and memory efficient. But if you do need precision, then there are some alternatives to floats that you should consider.




Floating-Point Alternatives That Are Precise

Python's standard library offers two numeric types that provide exact arithmetic for situations where floats are inadequate: Decimal and Fraction.

The Decimal Type

The Decimal type can store decimal values exactly with as much precision as you need. By default, Decimal preserves 28 significant figures, but you can change this to whatever you need to suit the specific problem you're solving:

>>> # Import the Decimal type from the decimal module
>>> from decimal import Decimal

>>> # Values are represented exactly so no rounding error occurs
>>> Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
True

>>> # By default 28 significant figures are preserved
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')

>>> # You can change the significant figures if needed
>>> from decimal import getcontext
>>> getcontext().prec = 6  # Use 6 significant figures
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
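
One caveat: construct Decimal values from strings (or integers), not from floats. A float has already been rounded by the time Decimal sees it, so the representation error comes along for the ride:

>>> # The float 0.1 is already rounded, and Decimal faithfully
>>> # preserves that rounded value
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')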

You can read more about the Decimal type in the Python docs.

The Fraction Type

Another alternative to floating-point numbers is the Fraction type. Fraction can store rational numbers exactly and overcomes representation error issues encountered by floating-point numbers:

>>> # import the Fraction type from the fractions module
>>> from fractions import Fraction

>>> # Instantiate a Fraction with a numerator and denominator
>>> Fraction(1, 10)
Fraction(1, 10)

>>> # Values are represented exactly so no rounding error occurs
>>> Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
True
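
The same from-float caveat applies here: Fraction(0.1) captures the float's rounded value exactly, while a string gives you the number you meant:

>>> # The float 0.1 carries its representation error with it
>>> Fraction(0.1)
Fraction(3602879701896397, 36028797018963968)

>>> # Strings are parsed exactly
>>> Fraction("0.1")
Fraction(1, 10)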

Both Fraction and Decimal offer numerous benefits over standard floating-point values. However, these benefits come at a price: reduced speed and higher memory consumption. If you don't need absolute precision, you're better off sticking with floats. But for things like financial and mission-critical applications, the tradeoffs incurred by Fraction and Decimal may be worthwhile.


Conclusion

Floating-point values are both a blessing and a curse. They offer fast arithmetic operations and efficient memory use at the cost of inaccurate representation. In this article, you learned:

  • Why floating-point numbers are imprecise
  • Why floating-point representation error is common
  • How to correctly compare floating-point values in Python
  • How to represent numbers precisely using Python's Fraction and Decimal types

If you learned something new, then there might be even more that you don't know about numbers in Python. For example, did you know the int type isn't the only integer type in Python? Find out what the other integer type is and other little-known facts about numbers in my article 3 Things You Might Not Know About Numbers in Python.



Thanks to Brian Okken for helping catch an issue with one of the pytest.approx() examples.