
Re: [Mingw-users] msvcrt printf bug

On 2017-01-20 02:53+0100 Emanuel Falkenauer wrote:

>> I think everybody (apart maybe from the OP) agrees how floating point
>> numbers behave. Keith makes a good point about rounding. Can I toss in
>> another feature that changing the compiler optimisation level often
>> reorders instructions meaning that rounding errors accumulate in
>> different ways. So changing the optimisation level often slightly
>> changes the numerical answers. :-\
> I agree that it could well (or even should?) be the case... but it's not
> in my case - to my own pleasant surprise.
> I can build with -O3 to get the most juice for releases, or with -O0 to
> debug... my logs are still the same (spare for actual bugs). I even
> compile with -mtune=native -march=native -mpopcnt -mmmx -mssse3 -msse4.2
> (native: I'm on Xeons), although I doubt very much the Borland compiler
> knows anything about those optimizations... and yet the latter's logs
> are still identical to MinGW's.
> Honestly it beats me as well... but I'm sure glad it's the case! :-)

Interesting thread.


Floating-point representation and arithmetic are exact only for a
subset of numbers (e.g., an integer times a positive or negative power
of two). If you are using arbitrary floating-point numbers in your
calculations, then I frankly cannot understand your result, since time
and again I have seen examples where compiler optimization changes the
answer in the 15th place (if you are really lucky; it is often much
worse than that if you suffer any significance loss, which is a common
problem when solving systems of linear equations). Also consider a
number similar to 0.4321499999999999999999999, which rounds to either
0.4321 or 0.4322 depending on minute floating-point errors.  If such
numbers appear in your log output, even heavy rounding is not going to
make your logs agree across optimization levels.  Perhaps your logs
contain no arbitrary floating-point numbers and instead hold only
exact floating-point answers (such as integers, half-integers, etc.),
or no decimal floating-point output at all?  For example, your log
could simply say that a certain category of answer was achieved
without giving exact details, and you should indeed get that same
result regardless of optimization level, provided you are
well-protected against floating-point errors (e.g., make no logical
decisions based on floating-point comparisons).

>> Emanuel - The one thing I cannot grasp is that you have built s/w with a
>> range of toolchains, but you are very focussed on obtaining exactly the
>> same numerical answers - seemingly to the level of false precision - for
>> each build.

@ Everybody:

Here I have to agree with Emanuel that sometimes such a result is
desirable for testing purposes.  One example of this I have run into
is the positions and velocities of the planets (planetary ephemerides)
that are distributed by JPL in both binary and ASCII forms.  When
converting between the two forms for debugging purposes, you would
like to start with the binary form (64-bit double-precision IEEE
floating-point numbers), convert to ASCII form and back again, and get
no bit flips at all in the binary results.  It turns out that with gcc
and the C library on x86_64 Linux (where intermediate floating-point
results can be held in 80-bit registers), this result was obtained.
Apparently the C library converted the binary format to decimal ASCII
with sufficient additional (likely 80-bit) precision that the result
was exact to something like 20 places.  And if my ASCII representation
included those guard digits, that was sufficient for the result to be
converted back exactly to the 64-bit floating-point representation. I
could also take the ASCII results distributed by JPL (which apparently
also had sufficient guard digits, likely because they were produced on
x86_64 hardware with a decent C library that took advantage of that
80-bit floating-point precision) and convert them exactly to the
distributed binary form.  So that was a very nice round-trip test for
extremely large masses of floating-point numbers.

By the way, I tried the same round-trip binary-to-ASCII-to-binary
ephemeris test using MinGW gcc on Wine, and the upshot was that I
discovered a bug (#28422) in Wine's implementation of the scanf family
of functions (they were actually using 32-bit floating-point numbers
for the conversion at the time, so it was a significant bug), which
has subsequently been fixed.  After that Wine fix, my round-trip test
worked on that platform as well.

In sum, if you have some scanf-type conversion from ASCII to binary
floating-point representation, or some printf-type conversion from
binary to ASCII, that is not done using the maximum possible precision
for the hardware, people like Emanuel and me who are keen on testing
will come back to haunt you!

Alan W. Irwin

Astronomical research affiliation with Department of Physics and Astronomy,
University of Victoria (astrowww.phys.uvic.ca).

Programming affiliations with the FreeEOS equation-of-state
implementation for stellar interiors (freeeos.sf.net); the Time
Ephemerides project (timeephem.sf.net); PLplot scientific plotting
software package (plplot.sf.net); the libLASi project
(unifont.org/lasi); the Loads of Linux Links project (loll.sf.net);
and the Linux Brochure Project (lbproject.sf.net).

Linux-powered Science
