Binary Floating Point Approximations

Let’s talk about binary floating point approximations! This is how computers represent decimal numbers using only 1s and 0s (binary). While it sounds simple enough, some pretty wild things can happen when working with these approximations. For example, let’s say you want to store 0.1. In decimal form, this is a simple fraction: 1/10 = 0.1. But in binary form it becomes 0.0001100110011001100… with the block 0011 repeating on and on forever, and no finite number of bits (32 or 64) can hold an infinite expansion. So instead of the true value, your computer stores the closest value it can represent, and that tiny rounding error can lead to some pretty wild results, like this:

# Import numpy for a single precision (32-bit) float, and Decimal to show
# the exact value it stores
import numpy as np
from decimal import Decimal

# The closest single precision (32-bit) value to 0.1
single = np.float32(0.1)

# Decimal() reveals the exact number that is actually stored in those 32 bits
print(Decimal(float(single)))

# Output: 0.100000001490116119384765625

# np.float32 gives us a 32-bit float, and passing it back through float() and
# Decimal() shows every digit of the value the computer really holds.
# Computers cannot store most decimal fractions exactly, so they store the
# nearest binary fraction instead, and that is what you see printed above.

As you can see, our approximation is pretty close to the actual decimal value of 0.1… but not quite there! And that’s just with single precision; now look what happens when we use double precision (64 bits) instead:

# This script shows the double precision (64-bit) approximation of 0.1.

# Import Decimal to display the exact value stored in a Python float
from decimal import Decimal

# Ordinary Python floats are IEEE 754 double precision (64 bits)
decimal_float = 0.1

# Print the exact value the computer actually stores for 0.1
print(Decimal(decimal_float))

# Output: 0.1000000000000000055511151231257827021181583404541015625

# The output is the exact double precision (64-bit) value stored for 0.1.
# It is much closer to 0.1 than the single precision value above, but it is
# still not an exact representation of 0.1, due to the limitations of
# floating-point numbers.

Wow, that’s a lot of digits! But even with all those extra bits, we still can’t represent 0.1 exactly in binary form. And it’s not just 0.1; plenty of other everyday numbers get the same treatment, which is why you might see some strange results when working with floating point numbers:

# This script demonstrates that other everyday numbers are approximated too.

# Import Decimal to display the exact values stored in Python floats
from decimal import Decimal

# Print the exact double precision value stored for 1/3
print(Decimal(1/3))

# Print the exact double precision value stored for 0.3
print(Decimal(0.3))

# Output:
# 0.333333333333333314829616256247390992939472198486328125
# 0.299999999999999988897769753748434595763683319091796875

# 1/3 repeats forever in decimal and in binary, so the stored value is only
# the nearest 64-bit approximation. 0.3 terminates in decimal but repeats in
# binary, so it gets rounded to the nearest representable value as well.

As you can see, neither number survives the trip into binary unscathed! We can write 1/3 and 0.3 with a handful of decimal digits, but in binary form we have to settle for the nearest 64-bit approximation. The sneaky part is that Python usually hides that error when it prints a number:

# The following script shows how display rounding hides the approximation error.

# Printing 1/7 shows 17 significant digits, the shortest string that rounds
# back to the stored double precision value
print(1/7)

# Output: 0.14285714285714285

# Printing 0.1 looks perfectly clean, even though the stored value is really
# 0.1000000000000000055511151231257827021181583404541015625
print(0.1)

# Output: 0.1

# Python prints the shortest decimal string that maps back to the stored
# binary value, so the rounding error is invisible here.
# But the error is still there, waiting to show up in your arithmetic.
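
That hidden error shows up as soon as you start doing arithmetic. Here’s a minimal sketch of the classic surprises, assuming nothing beyond plain Python:

# Adding two approximations produces a third approximation, and the tiny
# errors do not cancel out
print(0.1 + 0.2)

# Output: 0.30000000000000004

# Which means an “obvious” equality check fails
print(0.1 + 0.2 == 0.3)

# Output: False

# Small errors also accumulate: adding 0.1 ten times does not give exactly 1.0
total = 0.0
for _ in range(10):
    total += 0.1
print(total)

# Output: 0.9999999999999999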

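And if you’re wondering where the error comes from in the first place: the binary expansion of 1/10 really does repeat forever, just like 1/3 does in decimal. Here’s a tiny sketch of the base-2 long division, assuming plain Python and purely illustrative variable names:

# Generate the first 20 binary digits of 1/10 by repeated doubling:
# at each step, the integer part of value * 2 is the next binary digit
value = 1 / 10
bits = []
for _ in range(20):
    value *= 2
    bit = int(value)       # 1 if the doubled value reached 1.0, otherwise 0
    bits.append(str(bit))
    value -= bit           # keep only the fractional part and continue

print("0." + "".join(bits))

# Output: 0.00011001100110011001

# After the first 0, the block 0011 just keeps repeating, so no finite number
# of bits can ever capture 0.1 exactly, which is why the computer has to round.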

So what can you do about it? Well, there are a few things you can try:

1. Use more bits! If your computer supports it (and most modern computers do), use double precision instead of single precision for better accuracy.

2. Avoid using floating point numbers whenever possible. Instead, use integer math or fixed-point arithmetic (for example, counting whole cents instead of fractional dollars) to avoid the problems associated with binary floating point approximations; there’s a short sketch of this after the list.

3. Be aware of the limitations of binary floating point and adjust your expectations accordingly. Don’t expect perfect results; instead, aim for “good enough” and be prepared to accept some level of error in your calculations.
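
To make remedies 2 and 3 concrete, here’s a minimal sketch, assuming plain Python with the standard library’s math module and made-up example prices:

# Remedy 2: keep money as integer cents so the arithmetic is exact
prices_in_cents = [1999, 499, 1250]   # $19.99, $4.99, $12.50 (made-up values)
total_cents = sum(prices_in_cents)    # integer addition: exactly 3748, no rounding
print(total_cents / 100)              # convert to dollars only for display: 37.48

# Remedy 3: when you do use floats, compare with a tolerance instead of ==
import math

result = 0.1 + 0.2
print(result == 0.3)                  # False, because of the rounding error
print(math.isclose(result, 0.3))      # True: equal within a small relative tolerance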
