SlightlyLoony
Tera Contributor
07-13-2010
06:37 AM
Consider this little piece of code:
var x = 0.1;
var y = 0.2;
var z = 0.3;
if (z == x + y)
    gs.log('Of course 0.3 equals 0.3!');
else
    gs.log('What a moronic computer!');
What do you think it will print out?
Run it, and you'll get 'What a moronic computer!' Apparently 0.3 doesn't equal 0.3! What's going on here?
Let's make one small change to the code to find out:
var x = 0.1;
var y = 0.2;
var z = 0.3;
gs.log('x + y: ' + (x + y));
gs.log('z: ' + z);
if (z == x + y)
    gs.log('Of course 0.3 equals 0.3!');
else
    gs.log('What a moronic computer!');
Run that, and you'll see that 0.1 + 0.2 = 0.30000000000000004, not 0.3!
You may be surprised to learn that this is not a bug; it's a feature of floating point numeric representation.
Many people, even many experienced programmers, get fooled by math with floating point numbers. There seems to be a widespread preconceived notion that computers are really good with numbers, but in fact conventional computers are only inherently good with integers. As soon as you start working with fractional or decimal values, things start to fall apart. There are several technical reasons for this, but the one most frequently encountered is simple enough to understand: many fractional values cannot be exactly represented in a floating point number. If this seems weird to you, consider that even in the decimal numbers you're familiar with, many values cannot be represented exactly. For example, one third is 0.3333... but that's certainly not an exact representation. Floating point numbers in a computer suffer from the same problem, just with different numbers (mainly because floating point numbers are represented internally in binary, not decimal). If you're interested in the gory technical details, read this (a classic) and this for an introduction. And yes, there are whole books on the topic.
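You can see the inexactness directly by asking JavaScript to print more digits than it normally shows. Here's a quick sketch you can run in any JavaScript console (in a ServiceNow script you'd use gs.log instead of console.log):

```javascript
// The sum of 0.1 and 0.2 is not the double closest to 0.3:
var sum = 0.1 + 0.2;
console.log(sum.toString()); // "0.30000000000000004"

// And 0.1 itself looks exact only because toString() prints just
// enough digits to round-trip the value. Forcing 20 decimal places
// reveals that the stored binary value is not exactly one tenth:
console.log((0.1).toFixed(20));
```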
But for the moment let's ignore the underlying cause of the unexpected result, and talk instead about what can be done about it. Here are the most common techniques:
- Rounding: Using methods like those I discussed the other day, round the result of every arithmetic operation. This is by far the easiest method to implement, and it works well in JavaScript when your numbers require fewer than (roughly) 15 decimal digits of precision. Most everyday problems fall into this category, but some applications do not (this happens especially often with financial applications).
- Use scaled integers: For instance, instead of representing $35.85 as 35.85, represent it as 3585 cents. This plays to your computer's strength: integer arithmetic. Because JavaScript always represents numbers in floating point, you'll need to use Math.round() to round the result of every arithmetic operation to the nearest integer. This method works great when the values you're representing have some definite "smallest value" (as in our example of American currency, where the smallest value is one cent, or $0.01).
- Represent values as a ratio: This is a very flexible solution, but much harder to implement. Libraries are available for some programming languages, but not (to my knowledge) for JavaScript. With this solution, we'd represent 83.00543 as 8300543/100000, and any computations would be done exactly the same way you'd do fractional math by hand.
- Use arbitrary precision math libraries: This is another very flexible solution, and quite appropriate for applications where you need more than 15 or so decimal digits of precision. These are very challenging libraries to implement, but fortunately there are several JavaScript libraries available in open source. One word of warning: arbitrary precision math can be very, very slow — thousands of times slower than JavaScript's built-in math.
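The rounding technique can be sketched with a small helper (roundTo is an illustrative name, not a built-in):

```javascript
// Round a value to a given number of decimal places.
// Reliable when the values involved need fewer than ~15
// significant decimal digits.
function roundTo(value, places) {
    var factor = Math.pow(10, places);
    return Math.round(value * factor) / factor;
}

var x = 0.1, y = 0.2, z = 0.3;
// Rounding the sum to 10 decimal places discards the tiny
// binary representation error before the comparison:
if (roundTo(x + y, 10) === z) {
    console.log('Of course 0.3 equals 0.3!');
}
```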
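The scaled-integer technique, applied to the currency example, might look like this (toCents and toDollars are hypothetical helper names):

```javascript
// Represent dollar amounts as whole cents, so that all the
// intermediate arithmetic is exact integer math.
function toCents(dollars) {
    return Math.round(dollars * 100);
}
function toDollars(cents) {
    return cents / 100;
}

var price = toCents(35.85); // 3585
var tax   = toCents(2.51);  // 251
var total = price + tax;    // 3836 -- exact integer addition
console.log(toDollars(total)); // 38.36
```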
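The ratio technique can be sketched in miniature as well. This is a toy version of the idea under the assumption that numerators and denominators stay small enough to remain exact integers (all function names here are made up for illustration):

```javascript
// Greatest common divisor, used to keep fractions reduced.
function gcd(a, b) { return b === 0 ? a : gcd(b, a % b); }

// A fraction is just a pair of integers, stored in lowest terms.
function fraction(num, den) {
    var g = gcd(num, den);
    return { num: num / g, den: den / g };
}

// a/b + c/d = (a*d + c*b) / (b*d), then reduce.
function addFractions(f1, f2) {
    return fraction(f1.num * f2.den + f2.num * f1.den, f1.den * f2.den);
}

var oneTenth  = fraction(1, 10); // exactly 0.1
var twoTenths = fraction(2, 10); // exactly 0.2, reduced to 1/5
var result = addFractions(oneTenth, twoTenths);
console.log(result.num + '/' + result.den); // 3/10 -- exactly 0.3
```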