Weird rounding?

Quick and easy question!
I have a piece of code with a series of "if" statements, and in half of the instances a calculated value (let's call it Q) is assigned to a variable (x).
Later in the code, as a boolean check, I have something like "if (x-Q <= 0)", where x was assigned the value Q earlier. However, when this check runs, abs(x-Q) does not equal 0! The difference x-Q usually lands somewhere between -10 and 10, while Q itself is on the order of 7.5*10^7. So the subtraction is x (7.5*10^7) minus Q (7.5*10^7), and it is not coming out to zero. I'm sure it has to do with significant figures and precision, but, without sharing my code, is there an obvious solution to these rounding oddities?
Hi!

Just a question - why do you need to check if (x-Q <= 0) when you know that you have previously set x = Q?

What I usually do, when I want to check whether two variables have (close to) the same value, is to calculate (using your variable names) abs(x/Q-1) and then allow the ratio x/Q a small deviation from 1 (due to possible floating-point rounding error, or to allow for some genuine variation in the values, etc.), i.e. if ( abs(x/Q-1) > 1e-5 ), where the value 1e-5 would need to be chosen appropriately ...
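The same relative-error test can be sketched outside Igor; here is a short Python illustration (the values are made up to match the scale described in the question, not taken from the original code):

```python
# Relative-error check, mirroring abs(x/Q - 1) from above.
# IEEE 754 doubles behave the same way in Igor and Python.
Q = 7.5e7          # the computed value, on the order of 7.5*10^7
x = Q + 3.0        # simulate a small absolute error of a few units

print(x - Q)                   # 3.0 -- nonzero, even though x "equals" Q
print(abs(x / Q - 1) > 1e-5)   # False: relative error is only ~4e-8
```

An absolute difference of 3 looks alarming, but relative to 7.5*10^7 it is tiny, which is exactly what the ratio test captures.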

But probably there are other (better) ways to do that ...

--
Gregor K
ALOISA Beamline
Elettra Synchrotron
You always have rounding errors, meaning that if you subtract two seemingly equal numbers, you never get exactly zero. That said, your example of a +/- 10 error on two numbers in the 1e7 range sounds a little big. Nevertheless, the solution is to use a relative test such as if (abs((A-B)/A) < 1e-6) or something similar. Also be careful with using x and q as names, since they both have special meanings in Igor.
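For what it's worth, Python ships this kind of relative test in its standard library as math.isclose; a quick sketch with invented numbers matching the scale in the question (not Igor code, just an illustration of the same idea):

```python
import math

A = 7.5e7
B = A + 3.0   # hypothetical pair differing by a few units, as in the question

print(A - B == 0)                        # False: exact equality fails
print(abs((A - B) / A) < 1e-6)           # True: relative error is ~4e-8
print(math.isclose(A, B, rel_tol=1e-6))  # True: library form of the same test
```

Whatever language you are in, the pattern is the same: compare the difference scaled by the magnitude of the operands, never the raw difference against zero.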
The error of something around 1 part in 10^6 suggests that your test is using a single-precision wave, not a variable. Variables come only in double precision, which has a precision of about one part in 10^16. If you assign a number having more than about 8 digits to a single-precision wave:
Make/O junk = 123456789
print/D junk[0]
  123456792
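(As an aside, this single-precision truncation is not Igor-specific; you can reproduce it in Python by forcing the value through a 32-bit float with the standard struct module. Illustration only, not Igor code:)

```python
import struct

def to_float32(v):
    """Round-trip v through an IEEE 754 single-precision float."""
    return struct.unpack('f', struct.pack('f', v))[0]

# float32 carries a 24-bit significand, about 7 decimal digits;
# near 1.2e8 representable values are spaced 8 apart.
print(to_float32(123456789))   # 123456792.0 -- same result as the Igor wave
```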

indeed, you don't get back what you think you put in because of floating-point truncation. Another surprise is a loop:
Function test()

    Variable i
    for (i = 0; i < 1; i+=0.1)
        print "i=",i
    endfor
end

The output is like this:
i= 0
i= 0.1
i= 0.2
i= 0.3
i= 0.4
i= 0.5
i= 0.6
i= 0.7
i= 0.8
i= 0.9
i= 1

But, wait! I said i<1, not i<=1! The trick is that 0.1 cannot be exactly represented as a binary floating-point value; in base 2 it is an infinitely repeating fraction. Here is another look at the problem:
Function test()

    Variable i
    for (i = 0; i < 1; i+=0.1)
        printf "i = %.18g\r", i
    endfor
end

Now the output is:
i = 0
i = 0.100000000000000006
i = 0.200000000000000011
i = 0.300000000000000044
i = 0.400000000000000022
i = 0.5
i = 0.599999999999999978
i = 0.699999999999999956
i = 0.799999999999999933
i = 0.899999999999999911
i = 0.999999999999999889

Now you can see that it does, indeed, stop at less than 1.0!

Notice that 0.5 is exactly represented. It is a power of 1/2, so it can be represented exactly in binary. You will get the expected result if you increment by 0.25, another integer power of 1/2.
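The same off-by-one loop shows up in any language that uses IEEE 754 doubles; here is the equivalent in Python (an illustration, not Igor code):

```python
# Count how many times a loop incrementing by 0.1 runs before reaching 1.0.
i = 0.0
count = 0
while i < 1.0:      # same condition as the Igor for loop above
    count += 1
    i += 0.1        # 0.1 is not exactly representable in binary

print(count)            # 11 passes, not the 10 you might expect
print(f"{0.1:.18g}")    # 0.100000000000000006
```

After ten additions the accumulated sum is just under 1.0, so the loop body executes an eleventh time, exactly as in the Igor output above.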

See https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

The article is called "What Every Computer Scientist Should Know About Floating-Point Arithmetic" but these days, it should just say "scientist". Virtually every scientist these days is using a computer, and will run into floating-point truncation errors some day.

John Weeks
WaveMetrics, Inc.
support@wavemetrics.com
Thank you for all of your responses.
johnweeks, I had specified the waves as /D previously. Thank you for all the extra info; very helpful.
If they really are double precision (still!), then we would have to see the details of the computations that led to the values you are comparing. Even if your numbers are double precision, if the value is the result of a small difference of large numbers, you can lose precision:
print/D 0.1 - 0.099999999999
  1.00000563385549e-12

Note that here the loss of precision has crept up into the region of 5 parts in 10^6, even though the original numbers have a precision of around 1 part in 10^16.
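This cancellation effect reproduces bit-for-bit in any IEEE 754 double-precision arithmetic; a Python sketch of the same subtraction:

```python
# Catastrophic cancellation: subtracting two nearly equal doubles.
a = 0.1
b = 0.099999999999
diff = a - b                   # the exact mathematical answer is 1.0e-12

print(diff == 1e-12)           # False -- the result is not exactly 1e-12
print(abs(diff / 1e-12 - 1))   # relative error on the order of 1e-5
```

The rounding error in each operand is around one part in 10^16 of 0.1, but after the subtraction that same absolute error is measured against a result of 1e-12, so the relative error balloons by many orders of magnitude.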

John Weeks
WaveMetrics, Inc.
support@wavemetrics.com
I would think that this could also be attacked through a combination of taking a log and setting a defined number of significant digits that designate equality. Whether such an approach is faster or better ... that is left as an exercise.
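One way to sketch the "defined number of significant digits" idea in Python (the helper name sig_round is made up for illustration; the thread's own code is Igor):

```python
from math import floor, log10

def sig_round(v, ndigits):
    """Round v to ndigits significant decimal digits (hypothetical helper)."""
    if v == 0:
        return 0.0
    return round(v, ndigits - 1 - floor(log10(abs(v))))

Q = 7.5e7
x = Q + 3.0                                  # small error, as in the question
print(sig_round(x, 6) == sig_round(Q, 6))    # True: equal to 6 sig figs
```

One caveat: two values straddling a rounding boundary can still compare unequal even when they differ by almost nothing, so a relative-tolerance test is usually the more robust choice.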

Ultimately, I'd try to avoid comparisons of floating-point values in the code entirely. Instead, I'd set a (boolean) flag at the exact point where Q is set to the (supposedly constant) variable. Later, I would test for the truth of the boolean flag. Something like this ...

variable itisset = 0
if ...
   // case where Qq is set to xx
   Qq = xx // note the caution about using Q and x
   itisset = 1
else
   // case where Qq is not set to xx
   ...
   itisset = 0 // just to be sure
endif

// ... do later tests on itisset rather than (Qq - xx)


--
J. J. Weimer
Chemistry / Chemical & Materials Engineering, UAH
johnweeks wrote:
The error of something around 1 part in 10^6 suggests that your test is using a single-precision wave, not a variable. [...]

Great!