Puts inconsistency & machine epsilon management
-
Great!
I now understand! I'll need to redefine my points and vectors with arrays of double floats if I want to reach machine-epsilon accuracy! Or use SketchUp's ones, but with SketchUp precision.
What I still don't understand is why it wouldn't round when calling .to_i (so maybe using .round instead will be the solution to keep working with the Geom module). Thanks a lot for the time you spent on my problem, Aerilius!
ako
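(For reference, the `.to_i` behaviour asked about above comes down to truncation versus rounding; a quick sketch in plain Ruby:)

```ruby
# to_i truncates toward zero; round goes to the nearest integer.
x = 3.9999999999999996   # a Float just below 4.0
puts x.to_i    # => 3 (truncation drops everything after the point)
puts x.round   # => 4 (nearest integer)
```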
-
@abakobo said:
how can i "puts" the real float that's in memory and not a rounded one?
To answer this final part of your original question...
The only utterly reliable way to do this is to print the value in binary! Each number base has values that it can exactly represent, and values that it can't. We're all familiar with this in base ten, for example...
1.0 / 3.0 = 0.3333333...etc...
There is simply no exact representation of 1/3, no matter how many decimal places you use. The same is true of base 2 (binary) - for example, a float cannot exactly represent the decimal value 0.1. The fraction 1/10 is infinitely recurring in binary...
0b0001 / 0b1010 = 0b0.0001100110011001100...etc...
In effect, there is no such number available to a float, regardless of the number of bits - and it has to make do with the nearest approximation. Therefore, any time you change the number base, there will be a rounding error unless BOTH number bases can exactly represent the value. It's important to realise that this is a fundamental property of the way numbers are represented - greater precision yields closer approximations, but will never solve the problem entirely.
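This base-conversion error is easy to demonstrate (on Ruby 1.9+, where Float#to_s shows full precision):

```ruby
# Neither 0.1 nor 0.2 is exactly representable in binary,
# so their sum misses the (equally unrepresentable) 0.3.
puts(0.1 + 0.2 == 0.3)   # => false
puts "%.20f" % 0.1       # reveals the stored approximation of 1/10
```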
So in your example case, where apparently 4.0.to_i == 3, what may be happening is that the 'true' binary value is ever so slightly less than 4.0, but is a value with no exact decimal representation. The nearest 'non-recurring' decimal just happens to be the integer 4 - so this is what 'puts' displays.
This kind of thing happens all the time, it's just that the 'integer/not-integer' case tends to make the problem more visible.
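A sketch of that situation (the exact value the OP hit will differ - this just constructs a Float one step below 4.0):

```ruby
x = 4.0 - 3.0e-16  # rounds to the nearest Float below 4.0
puts "%.15g" % x   # => 4   (what 15-digit formatting displays)
puts "%.17g" % x   # 17 significant digits reveal the value below 4.0
puts x.to_i        # => 3   (truncation, not rounding)
```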
(NB - this is also the reason that Ruby has the Rational class of number, so that fractions with recurring decimals can be represented precisely.) Until we become a race of technology-enhanced cyborgs who can all natively do maths in binary, this problem is always going to come up when we use float values anywhere in a user interface - the very act of displaying the value can change the value being displayed!
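A quick illustration of that Rational exactness (on Ruby 1.8 you'd first need `require 'rational'`):

```ruby
# Rational stores the fraction exactly, so 1/3 * 3 really is 1 -
# no binary approximation is ever involved.
r = Rational(1, 3)
puts r           # => 1/3
puts r * 3       # => 1/1
puts r * 3 == 1  # => true
```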
-
Will using Kernel.sprintf() help ?
> n = 4.000000000000003
4.000000000000003
> n.class
Float
> sprintf( "%.15f", n )
4.000000000000003
-
The 'sprintf' method used may or may not be more precise than the standard 'to_s' method, I'm not sure - but it will still never completely circumvent the problem of decimal values that binary cannot represent.
For example, your value '4.000...003' is just a String as far as the Ruby parser is concerned, and it gets converted to binary to make a Float. When you print it, it converts back to decimal. At no point can you know whether the Float is really holding that exact number, or whether it's just that the conversion routines happen to be exactly reversible.
You can't even do a simple test whether the stored value is precise or not - Ruby would always tell you that it is, because you can only compare with another Float that has also been ultimately converted from a decimal string.
Whether or not to round or format depends a lot on what you want to use the number for. For end users, 'decimalised' values are much more intuitive to deal with, and they are the 'norm' - but for a developer trying to locate rounding errors, this can be very deceptive.
Rounding the Float value itself won't always help either - the Float#round method rounds to a fixed number of decimal places - but since even 0.1 can't be exactly represented in binary, you will often still end up with an approximation.
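To see that in action:

```ruby
# Even after rounding, the stored value is still only an approximation:
x = (0.1 + 0.2).round(1)
puts x            # displays 0.3
puts "%.20f" % x  # but the bits hold a value just below 0.3
puts x == 0.3     # => true only because both literals map to the
                  #    same binary approximation
```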
I have seen some debuggers/editors that do have a setting to always print the actual binary number that is ultimately held in memory - as there is a fixed number of bits in a Float, and 2 is a factor of 10, it IS always POSSIBLE to translate the exact binary value to decimal within a bounded number of digits. When I've used these, it has been quite shocking to see what the CPU does with the values that we type in!
Very few applications do this because it's counter-intuitive to type in a value and then be told it's something different on the display. Scientific and financial applications often use deviously complex routines for what seem like simple calculations, so that they can emulate the maths being done in an 'all decimal' number space and prevent this kind of issue.
Although often misunderstood and complained about, I have to say that I think the handling of rounding errors by the SU API is very well done - I think there would be many more threads like this from frustrated developers if we had to manage 'epsilon' for ourselves!
-
In case it might be useful to anyone experiencing similar issues with numeric precision, I've tarted up a little module that I've used for a long time to peek inside number values.
IEEE754.rb
There are methods to take any Float, Integer or numeric String (including hex/binary), trim precision to either 32 or 64 bits, and then show:
- The Float value represented by those 32/64 bits.
- Signed and unsigned integer represented by the same bit pattern.
- Raw bytes that make up the value in hex and binary.
- Breakdown of the IEEE754 construction of the float (IEEE754 is the international standard for such things - the Wikipedia entry is a good start for understanding how Floats are built).
- Infinities, NaNs and denormal values.
I designed it with another API in mind, where Ruby is interfacing to external code that can only handle 32bit floats - but it's turned out to be quite a useful tool whenever rounding errors, infinities, 'Not a Number' etc. are causing woe - and also when interfacing to external DLLs to check that all that horrible String packing/unpacking is working as expected.
Probably not so useful for the OP's problems, as for 64-bit double-precision floats (Ruby's standard), decimal values cannot be displayed any more accurately than usual (though it may hint at the case where the binary has an infinitely repeating fraction).
No restrictions on use, distribution etc. - nor is there any kind of warranty (caveat emptor!!)
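For anyone without the attachment to hand, the core idea can be sketched in a few lines of plain Ruby (this is not the module itself, just the pack/unpack trick it is built on):

```ruby
# 'G' packs a Float as a big-endian 64-bit IEEE754 double;
# 'B64' unpacks it as a string of 64 bits, MSB first.
def float_bits(f)
  bits = [f].pack('G').unpack('B64').first
  { sign: bits[0], exponent: bits[1, 11], mantissa: bits[12, 52] }
end

p float_bits(1.0)  # mantissa is all zeros - 1.0 is exact in binary
p float_bits(0.1)  # mantissa shows the endlessly repeating pattern of 1/10
```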
-
Hi,
I have come across a problem which seems to be related to this issue.
> r1 = (20.0/1000)/0.8
0.025
> r1.class
Float
> "%.2f" % r1
0.02
=> INCORRECT
However;
> "%.2f" % (20.0/(0.8*1000))
0.03
=> CORRECT
> r2 = 0.025
0.025
> r2.class
Float
> "%.2f" % r2
0.03
=> CORRECT
> "%.2f" % 0.025
0.03
=> CORRECT
Also:
> r1 <=> 0.025
-1
=> INCORRECT
> r2 <=> 0.025
0
=> CORRECT
> r1 <=> r2
-1
=> INCORRECT
So it is clearly something to do with how the number is stored in binary and the order of operators can affect the result as alluded to above in this thread.
Any ideas on how to manage this? It seems quite a fundamental flaw if it comes up when making calculations at a pretty basic level of precision. It doesn't seem practical to re-order potential calculations to push the precision one way or another, because the values of variables may not be known in advance.
-
@archidave said:
it seems quite a fundamental flaw
"Fundamental" - that is the key word. The problem is just inherent in any numeric system with a finite amount of precision. Without hugely complex (i.e. slow) assessment of the "rules of mathematics", programs will always work this way. When we do maths using our brains as the computer, we implicitly use all those extra rules without even realising we are doing it - so the quirks of floating point maths come as a nasty surprise to most programmers at some point in their career.
If we add the width of an atom to the diameter of the solar system, logic tells us that we must have made the original number larger - but to a CPU, all it sees is that the pattern of bits in a register probably didn't get changed, and that makes the end result "equal" to the start value despite what the semantics of the equation tell us.
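That atom-vs-solar-system effect can be demonstrated directly:

```ruby
# Adding a tiny value to a huge one can leave the bit pattern unchanged.
big   = 1.0e16   # spacing between adjacent Floats here is 2.0
small = 1.0
lost = (big + small) - big
puts lost   # => 0.0 - the small addend vanished entirely
# Adding the small values together first keeps them above the noise floor:
kept = (small + small + big) - big
puts kept   # => 2.0
```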
This can be overcome with special processing that does interpret the "meaning" of the maths. For example, if you were to do the same maths as your example using Ruby's "Rational" class of numbers, you would get exactly the expected answer - but the cost in extra CPU load is huge and would likely outweigh the advantages in most cases.
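For instance, redoing the calculation from the example above in exact rational arithmetic (Ruby 1.8 needs `require 'rational'` first):

```ruby
# The same sum as r1 = (20.0/1000)/0.8, but kept as exact fractions:
r = Rational(20, 1000) / Rational(8, 10)
puts r        # => 1/40 - exactly 0.025, no rounding anywhere
puts r == Rational(1, 40)   # => true
puts r.to_f   # converting back to Float reintroduces the approximation
```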
@archidave said:
It doesn't seem practical to re-order potential calculation
I do a fair bit of Audio DSP programming, where equations are often used within feedback loops at high sample rates, and it is incredible the difference that a simple swap of instructions can make - beautiful smooth filtering one way, blown speakers/eardrums another! How do I know which to use? - if I'm lucky, Google will find me the most stable algorithm, otherwise it's mostly just good old trial and error!
It's an area of computer science that fills volumes of textbooks and research papers every year - and sadly, there are very few 'shortcuts' that us mere mortals can rely on. But to make the most of the available precision, this kind of code re-factoring is indeed the most "practical" solution.
There are a few 'rules of thumb', though. For example, when adding and subtracting, the result is usually most reliable when the two values are similar in magnitude - but, of course, some algorithms just don't have an easy way to arrange that!
-
@archidave said:
Any ideas on how to manage this? - it seems quite a fundamental flaw if this comes up when making calculations at a pretty basic level of precision.
If you haven't come across this website already I highly recommend the read:
http://floating-point-gui.de/
@archidave said:
It doesn't seem practical to re-order potential calculations to push the precision one way or another because the values of variables may not be known in advance.
If high precision is required you might want to look into the special classes in Ruby to handle such. You have Rational for instance.
-
FWIW, your first example:
r1 = (20.0/1000)/0.8
puts r1
>> 0.024999999999999998
.. so, therefore:
"%.2f" % r1
>> 0.02
.. is CORRECT, not incorrect.
-
Thanks all,
Thom, that link looks very useful thanks, makes me think I should dig out my A-level computer science notes too ...
Dan, I still get
> r1 = (20.0/1000)/0.8
0.025
> puts r1
0.025
How do you output the full precision in order to understand what is going on?
I don't have the Rational class in SU8 (I'm on OS X 10.6.8 so can't go any higher for the time being)
FYI: RUBY_VERSION => 1.8.5
Do I need Ruby 2.0 for that?
Dave
-
@archidave said:
I don't have the Rational class in SU8 (I'm on OS X 10.6.8 so can't go any higher for the time being)
FYI: RUBY_VERSION => 1.8.5
Do I need Ruby 2.0 for that?
Indeed. We used Ruby 1.8 with just the core lib up to and including SU2013. In SU2014 we upgraded to Ruby 2.0 and included most of the stdlib.
-
There are some class-level constant settings for the Float class. List them:
> Float.constants
Example:
> Float::DIG
15
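One practical use of those constants is an epsilon-based comparison instead of ==. This helper is a sketch (the name and the tolerance factor of 16 are arbitrary choices, not anything from the API):

```ruby
# Compare two Floats within a relative tolerance derived from
# Float::EPSILON (the gap between 1.0 and the next Float).
def nearly_equal?(a, b, tol = Float::EPSILON * 16)
  (a - b).abs <= tol * [a.abs, b.abs, 1.0].max
end

r1 = (20.0 / 1000) / 0.8
puts r1 == 0.025               # => false - bitten by rounding again
puts nearly_equal?(r1, 0.025)  # => true
```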