OK, I see. I can post this every day, since there are always some people who have never heard of IEEE 754:
At least this time they're not blaming JavaScript for it.
yeah i hate that, it's obviously typescript's fault
nah man, it's V8's fault
I dunno mine goes really fast but it's expensive to fill up in this economy.
I've seen junior devs stumped by this, but this is very, very basic CS, wtf. Literally the stuff you learn in your first classes, ffs. What tf are they teaching now, how to plug a computer into a power outlet?!
I just went through an intro Java course last year. 0.1 + 0.2 was the example used for why floats/doubles shouldn't be used if you want 100% accuracy, and I think 0.2 + 0.3 was the example used beforehand to show floating point numbers adding correctly. I think I've gotten that same demonstration in three or four different languages I've taken classes for, but the last one was an advanced course with one of the other languages as a prerequisite.

I can't 100% blame the jr devs though. Most of the classes I've taken, from about 3rd grade to my senior year of college, were more focused on us passing the tests than on actually learning the information. I've only had one professor who didn't give final tests; they assigned final projects that made us actually apply what we learned all year.
Somewhat unrelated, but why this happens wasn't touched on at all in the six different programming-specific classes I've taken, except in my computer architecture and assembly class, which isn't part of the CS degree. I accepted that it happened, but no one explained why beyond saying "it just does" until I was taught how floating point numbers are converted into binary representations.
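For anyone who hasn't seen the classroom demo in one place, here's a minimal Java sketch (the class name is mine) showing both classic cases from the course mentioned above:

```java
public class FloatingPointDemo {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // false
        System.out.println(0.2 + 0.3);        // 0.5
        System.out.println(0.2 + 0.3 == 0.5); // true -- here the rounding errors happen to cancel
    }
}
```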
Or in elementary school. Write the decimal expansion of 1/3. Oops, wrong answer, it's off by a tiny bit no matter where you stop. HumanScript is so silly, right guys!
pfft 0.3̅
I mean, the meme kinda checks out here. You look at it. It's weird. You do some math to figure it out.
Mantissa? I hardly know ‘er!
Some numbers that are non-repeating in base 10 are repeating in base 2.
For example, 1/10 in binary is 0.000110011001100110011… repeating. Computers can't store decimal directly, because the hardware registers store binary "bits", not decimal "digits".
It works the other way around too: numbers that repeat in decimal may not repeat in a different base. For example, 1/3 = 0.333333… in base 10, but it's 0.4 in base 12 (because 3 divides 12).
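If you want to watch that expansion fall out of the arithmetic, here's a small Java sketch (binaryFraction is my name for it) that does ordinary long division in base 2:

```java
public class BinaryExpansion {
    // First n base-2 digits of p/q: double the remainder,
    // emit the integer part as the next bit.
    static String binaryFraction(int p, int q, int n) {
        StringBuilder sb = new StringBuilder("0.");
        for (int i = 0; i < n; i++) {
            p *= 2;
            sb.append(p / q); // next binary digit (0 or 1)
            p %= q;           // remainder feeds the next step
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(binaryFraction(1, 10, 24)); // 0.000110011001100110011001
    }
}
```

The remainder cycles through the same values forever (2, 4, 8, 6, 2, …), which is exactly why the bit pattern repeats.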
It's just that different bases have different sets of numbers whose inverses are recurring. Base 17, for example, is particularly garbage, and has been nicknamed "suboptimal".
Isn't any prime base garbage?
We use base 2 because on/off is simple for transistors, but without that limitation, any anti-prime (highly composite) base would be much better.
Prime bases are bad when it comes to whole number ratios (and a few other random division-related things). Otherwise, they're basically the same
Now, let's talk non-integer bases
Edit: I brain-farted and said something silly.
The rational numbers aren't defined that way. They're any number that can be expressed as p/q for integers p and q, with q ≠ 0.
0.1 can always be written as 1 / 10 no matter what base you represent the numbers with
Yeah, I stumbled on this one. Totally forgot about non-repeating numbers :|
They're only talking about infinitely repeating (rational) and terminating (also rational) expansions here.
> Some numbers that are non-repeating in base 10 are repeating in base 2.
Please name one number that is repeating in base 2 but not in base 10. I'm sure there are none, because 2 is a factor of 10, while 5 is not a factor of 2.
0.1, as mentioned. It's non-repeating in base 10 and repeating in base 2; that's the direction they claimed, not the other way around.
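For what it's worth, the general rule behind that factor argument: a fraction in lowest terms terminates in base b exactly when every prime factor of its denominator divides b. A tiny Java checker, sketched under that rule (names are mine):

```java
public class Terminates {
    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    // Does 1/q (q in lowest terms) terminate in base b?
    // Strip from q every factor it shares with b; it terminates iff nothing is left.
    static boolean terminates(long q, long b) {
        for (long g = gcd(q, b); g > 1; g = gcd(q, b)) q /= g;
        return q == 1;
    }

    public static void main(String[] args) {
        System.out.println(terminates(10, 2));  // false: 1/10 repeats in binary
        System.out.println(terminates(10, 10)); // true:  0.1 is exact in decimal
        System.out.println(terminates(3, 12));  // true:  1/3 = 0.4 in base 12
        System.out.println(terminates(2, 10));  // true:  halves are fine in decimal
    }
}
```

That's also why prime bases like 17 are "garbage" for this: only denominators that are powers of the base itself ever terminate.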
Only for 64- and 80-bit floats, not 32- or 16-bit.
If arbitrary precision is truly necessary, then why not go with something like BigDecimal? Yes, it's slow, and yes, it takes up a lot of space, but it will be precise.
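For the record, that's java.math.BigDecimal in Java, and the string constructor is the one you want for exact decimal input; a minimal sketch:

```java
import java.math.BigDecimal;

public class DecimalMath {
    public static void main(String[] args) {
        // String constructor: the exact decimal value, not the nearest double.
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.2");
        BigDecimal sum = a.add(b);

        System.out.println(sum);                                  // 0.3
        // Use compareTo, not equals -- equals also compares scale,
        // so 0.3 and 0.30 would not be "equal".
        System.out.println(sum.compareTo(new BigDecimal("0.3"))); // 0
    }
}
```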
Good way to weed out jr devs who suggest doing money calculations with floats.
What's the solution? Do everything with integers (cents) and insert the decimal point at the end of all the calculations?
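That's one standard approach, yes: keep amounts as integer counts of the smallest currency unit and round only where a business rule says to. A minimal sketch (the 8.25% tax rate is made up for illustration):

```java
public class Money {
    public static void main(String[] args) {
        long priceCents = 1999;                          // $19.99, stored as cents
        long taxCents = Math.round(priceCents * 0.0825); // hypothetical 8.25% tax,
                                                         // rounded once, explicitly
        long totalCents = priceCents + taxCents;         // integer math: exact
        System.out.printf("$%d.%02d%n", totalCents / 100, totalCents % 100); // $21.64
    }
}
```

BigDecimal (mentioned above) is the other common answer, for when amounts or rates don't fit neatly into whole cents.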
Well
Decimal -> Binary
0.5 -> 0.1
0.3 -> 0.01001100110011001100110011001100110011001100110011001100110011...
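And you can watch where that infinite expansion gets cut off by dumping the actual bit pattern of the double; a small sketch (the field splitting is mine):

```java
public class DoubleBits {
    public static void main(String[] args) {
        long bits = Double.doubleToLongBits(0.3);
        // Pad to all 64 bits, since toBinaryString drops leading zeroes.
        String s = String.format("%64s", Long.toBinaryString(bits)).replace(' ', '0');
        // IEEE 754 double: 1 sign bit | 11 exponent bits | 52 mantissa bits
        System.out.println(s.substring(0, 1) + " " + s.substring(1, 12) + " " + s.substring(12));
        // 0 01111111101 0011001100110011001100110011001100110011001100110011
    }
}
```

The repeating 0011 pattern simply stops after 52 mantissa bits, and that truncation is the whole story.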
Tell me you don't understand floating point numbers without telling me you don't understand floating point numbers. Also, see "What Every Computer Scientist Should Know About Floating-Point Arithmetic".
When you keep adding and subtracting 0.1 at equal but random rates, and the result gradually changes
It's actually not guaranteed to be exactly 0.5 either. A 64-bit double still has only 53 bits of significand precision; you just trust the zeroes on the end more.
0.2 -> 3,602,879,701,896,397 / 18,014,398,509,481,984 > 0.2
0.3 -> 5,404,319,552,844,595 / 18,014,398,509,481,984 < 0.3
their sum is 9,007,199,254,740,992 / 18,014,398,509,481,984 = 2^53 / 2^54
= 0.5 exactly. The two rounding errors cancel, so in this particular case the stored result really is exactly 0.5.
The limit as x→ username checks out.
Can you do the same for 0.1 + 0.2 please?
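Sure. This time both inputs round up, so the errors add instead of cancelling: the true sum of the two stored values lands exactly halfway between 0.3's double and the next double up, and round-ties-to-even pushes it up, which is why you get 0.30000000000000004. You can pull the exact stored values out of Java yourself with the BigDecimal(double) constructor:

```java
import java.math.BigDecimal;

public class ExactValues {
    public static void main(String[] args) {
        // new BigDecimal(double) shows the exact binary value stored in the double.
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(new BigDecimal(0.2));
        // 0.200000000000000011102230246251565404236316680908203125
        System.out.println(new BigDecimal(0.1 + 0.2));
        // 0.3000000000000000444089209850062616169452667236328125
        System.out.println(new BigDecimal(0.3));
        // 0.299999999999999988897769753748434595763683319091796875
    }
}
```

The last two lines are the point: the double you get from 0.1 + 0.2 and the double you get from writing 0.3 are two different numbers, so == says false.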