Whoever dares use double for monetary amounts can just pack up and leave

Let's look at the phenomena first. When working with the two floating-point types, float and double, strange results occasionally pop up; I don't know whether you've ever noticed them. Let me give a few common examples:

Typical phenomenon (1): a comparison defies expectations

System.out.println( 1f == 0.9999999f );  // Prints: false
System.out.println( 1f == 0.99999999f ); // Prints: true  Wait, what?!
Typical phenomenon (2): a type conversion defies expectations

float f = 1.1f;
double d = (double) f;
System.out.println(f); // Prints: 1.1
System.out.println(d); // Prints: 1.100000023841858  Wait, what?!
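Where does that long tail come from? One way to peek at what is actually stored: the BigDecimal double constructor (BigDecimal itself is covered near the end of this article) preserves the exact binary value rather than a rounded string. A minimal illustrative sketch:

import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // new BigDecimal(double) keeps the exact binary value of its argument
        System.out.println(new BigDecimal(1.1f)); // 1.10000002384185791015625
        System.out.println(new BigDecimal(1.1));  // 1.100000000000000088817841970012523233890533447265625
    }
}

Neither float nor double can represent 1.1 exactly; each stores the nearest binary fraction it can, and widening the float to a double merely exposes more digits of that approximation.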
Typical phenomenon (3): basic arithmetic defies expectations

System.out.println( 0.2 + 0.7 ); // Prints: 0.8999999999999999  Wait, what?!
Typical phenomenon (4): incrementing defies expectations

float f1 = 8455263f;
for (int i = 0; i < 10; i++) {
    System.out.println(f1);
    f1++;
}
// Prints: 8455263.0
// Prints: 8455264.0
// Prints: 8455265.0
// Prints: 8455266.0
// Prints: 8455267.0
// Prints: 8455268.0
// Prints: 8455269.0
// Prints: 8455270.0
// Prints: 8455271.0
// Prints: 8455272.0

float f2 = 84552631f;
for (int i = 0; i < 10; i++) {
    System.out.println(f2);
    f2++;
}
// Prints: 8.4552632E7  Wait, didn't we just add 1?
// Prints: 8.4552632E7  Wait, didn't we just add 1?
// Prints: 8.4552632E7  Wait, didn't we just add 1?
// Prints: 8.4552632E7  Wait, didn't we just add 1?
// Prints: 8.4552632E7  Wait, didn't we just add 1?
// Prints: 8.4552632E7  Wait, didn't we just add 1?
// Prints: 8.4552632E7  Wait, didn't we just add 1?
// Prints: 8.4552632E7  Wait, didn't we just add 1?
// Prints: 8.4552632E7  Wait, didn't we just add 1?
// Prints: 8.4552632E7  Wait, didn't we just add 1?
See? Even in these simple scenarios, floating-point numbers (both double and float) struggle to meet our needs, so plenty of hidden pitfalls are lying in wait when we use them to solve real problems!

No wonder the technical director put it so bluntly: if anyone dares to use floating-point types (double/float) for things like commodity amounts, order transactions, or currency calculations, they can pack up and leave!

 

So where does the problem come from?
Let's take the first typical phenomenon as an example and analyze it:

System.out.println( 1f == 0.99999999f );
Comparing 1 and 0.99999999 directly in code actually prints true!

 

What does this tell us? It tells us that the computer cannot distinguish between these two numbers at all. Why is that?

Let's think about it briefly:

The two floating-point numbers we typed in are just the decimal values our human eyes see. Under the hood, however, the computer does not calculate in decimal: as anyone who has studied computer fundamentals knows, everything ultimately boils down to binary strings of 0s and 1s, like 010100100100110011011.

So to understand what is really going on, we should convert these two decimal floating-point numbers into binary and take a look.

How to convert a decimal floating-point number to binary, and how to do the arithmetic, is basic number-system material covered in courses like "Principles of Computer Organization", so I won't rehash it here. (We convert to the IEEE 754 single-precision 32-bit format, which is the precision corresponding to the float type.)

1.0 (decimal)
    ↓
00111111 10000000 00000000 00000000 (binary)
    ↓
0x3F800000 (hexadecimal)
0.99999999 (decimal)
    ↓
00111111 10000000 00000000 00000000 (binary)
    ↓
0x3F800000 (hexadecimal)

Sure enough, the underlying binary representations of these two decimal floating-point numbers are exactly the same. No wonder == judges them equal and returns true!

As for 1f == 0.9999999f, which returns false as expected, let's also convert the two numbers to binary and see what's going on:

1.0 (decimal)
    ↓
00111111 10000000 00000000 00000000 (binary)
    ↓
0x3F800000 (hexadecimal)
0.9999999 (decimal)
    ↓
00111111 01111111 11111111 11111110 (binary)
    ↓
0x3F7FFFFE (hexadecimal)
Sure enough, this time the binary representations are indeed different, which is why the comparison gives the sensible answer.
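You can verify these bit patterns yourself with the standard Float.floatToIntBits method, which exposes a float's raw IEEE 754 bits as an int. A minimal sketch:

public class FloatBits {
    public static void main(String[] args) {
        // Print the raw IEEE 754 bit patterns as hexadecimal
        System.out.println(Integer.toHexString(Float.floatToIntBits(1.0f)));        // 3f800000
        System.out.println(Integer.toHexString(Float.floatToIntBits(0.99999999f))); // 3f800000, same bits as 1.0f!
        System.out.println(Integer.toHexString(Float.floatToIntBits(0.9999999f)));  // 3f7ffffe, different
    }
}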

So why is the underlying binary representation of 0.99999999 given as 00111111 10000000 00000000 00000000?

Isn't that exactly the bit pattern of the floating-point number 1.0?

This brings us to the precision of floating-point numbers.

The accuracy of floating point numbers!
Anyone who has studied "Principles of Computer Organization" knows that floating-point numbers are stored in computers according to the IEEE 754 floating-point standard, which represents them in a form of scientific notation:

V = (-1)^S × M × 2^E

Once the three pieces of information are given, the sign (S), the exponent (E), and the mantissa (M), a floating-point number is completely determined. The in-memory storage layouts of the two floating-point types, float and double, are therefore as follows:

float:  | S (1 bit) | E (8 bits)  | M (23 bits) |
double: | S (1 bit) | E (11 bits) | M (52 bits) |
1. Sign part (S):

0 means positive, 1 means negative.

2. Exponent part (E):

For a float, the exponent field is 8 bits; since the exponent can be negative or positive, the representable exponent range is -127 to 128 (the extreme values are reserved for special cases such as zero, subnormals, infinity, and NaN).
For a double, the exponent field is 11 bits, giving a representable exponent range of -1023 to 1024 (again with the extremes reserved).
3. Mantissa part (M):

The precision of a floating-point number is determined by the number of bits in the mantissa:

For float-type floating-point numbers, the mantissa is 23 bits; 2^23 = 8388608, a 7-digit number, so a float offers only 6 to 7 significant decimal digits of precision.
For double-type floating-point numbers, the mantissa is 52 bits; 2^52 = 4503599627370496, a 16-digit number, so a double offers only 15 to 16 significant decimal digits of precision.
So the value 0.99999999f above clearly exceeds the precision the float type can carry, and problems become unavoidable.
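This precision limit also explains typical phenomenon (4): around 84552631, adjacent float values are 8 apart, so adding 1 simply rounds back to the same number. A small sketch to check this with the standard Math.ulp method, which returns the gap between a float and the next larger representable float:

public class UlpDemo {
    public static void main(String[] args) {
        // 2^24 + 1 is not representable as a float; it rounds back to 2^24
        System.out.println(16777216f == 16777217f); // Prints: true

        // Math.ulp returns the spacing between adjacent representable floats
        System.out.println(Math.ulp(8455263f));  // Prints: 1.0 -> f1++ lands exactly on the next float
        System.out.println(Math.ulp(84552631f)); // Prints: 8.0 -> +1 is rounded away, so f2 never changes
    }
}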

How to solve the precision problem
So what should we do in high-precision scenarios such as commodity amounts, transaction values, and currency calculations?

Method 1: Use strings or arrays to handle numbers with many digits

Anyone who has ground through algorithm problems for campus recruiting knows that representing big numbers with strings or arrays is a classic problem-solving technique.

For example, the classic interview question: implement addition, subtraction, multiplication, and other operations on two big numbers of arbitrary length.

In that case, we represent such big numbers as strings or arrays and manually simulate the calculation process according to the rules of arithmetic, handling carries, borrows, signs, and all the rest along the way. It gets genuinely complicated, so I won't fully expand on it in this article, but a small taste is sketched below.
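Here is a minimal illustrative sketch (the addStrings helper is hypothetical, written just for this example) that adds two non-negative decimal integers of arbitrary length given as strings, simulating column addition with a carry:

// Walk both strings from the least significant digit, carrying as we go
public static String addStrings(String a, String b) {
    StringBuilder sb = new StringBuilder();
    int i = a.length() - 1, j = b.length() - 1, carry = 0;
    while (i >= 0 || j >= 0 || carry > 0) {
        int sum = carry;
        if (i >= 0) sum += a.charAt(i--) - '0';
        if (j >= 0) sum += b.charAt(j--) - '0';
        sb.append((char) ('0' + sum % 10));
        carry = sum / 10;
    }
    return sb.reverse().toString();
}

// Example: addStrings("99999999999999999999", "1") returns "100000000000000000000"

Subtraction, signs, and multiplication follow the same pencil-and-paper simulation, just with more bookkeeping.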

Method 2: Java's big-number classes are a great help

The JDK has already thought about floating-point calculation precision for us and provides big-number classes dedicated to high-precision numeric computation, ready for us to use.

As mentioned in my earlier article on digging into the Java source code, Java's big-number classes live in the java.math package:

 

As you can see, the commonly used BigInteger and BigDecimal are powerful tools for processing high-precision numerical calculations.

// Prefer the String constructor: new BigDecimal(double) would carry over
// the binary approximation error already baked into the double
BigDecimal num3 = new BigDecimal("1.0");
BigDecimal num4 = new BigDecimal("0.99999999");
// Compare with compareTo; == would only compare object references
System.out.println( num3.compareTo(num4) == 0 ); // Prints: false

BigDecimal num1 = new BigDecimal("0.2");
BigDecimal num2 = new BigDecimal("0.7");

// Add
System.out.println( num1.add(num2) ); // Prints: 0.9

// Subtract
System.out.println( num2.subtract(num1) ); // Prints: 0.5

// Multiply
System.out.println( num1.multiply(num2) ); // Prints: 0.14

// Divide
System.out.println( num2.divide(num1) ); // Prints: 3.5
Of course, big-number classes like BigInteger and BigDecimal are nowhere near as fast as the primitive types, and the cost is comparatively high, so whether to use them should be weighed against the actual scenario.
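One extra caveat worth knowing when adopting BigDecimal: divide throws an ArithmeticException when the quotient does not terminate, so currency code usually passes an explicit scale and RoundingMode. A small sketch:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class DivideDemo {
    public static void main(String[] args) {
        BigDecimal one = new BigDecimal("1");
        BigDecimal three = new BigDecimal("3");
        // one.divide(three) would throw: "Non-terminating decimal expansion"
        System.out.println(one.divide(three, 2, RoundingMode.HALF_UP)); // Prints: 0.33
    }
}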

 

Origin: blog.csdn.net/guodashen007/article/details/106409903