Re: [Mingw-users] msvcrt printf bug
-----BEGIN PGP SIGNED MESSAGE-----
This thread now seems to have run its course; sadly, the level of
widespread ignorance it has demonstrated is depressing.
On 18/01/17 10:00, tei.andu@xxxxxxxxxxxx wrote:
> Emanuel, thank you very much for stepping in. I am extremely happy
> that you found my code useful.
Great that he finds it useful; depressing that neither of you cares
in the slightest about accuracy; rather, you are both chasing the
grail of "consistent inaccuracy". (I don't have a problem with
that as an objective, but please do not claim that it represents
"accuracy"; it is entirely speculative; anything but accurate).
> I started learning C by myself in 2010. I work in embedded,
> specifically digital power conversion and industrial control
> systems. KH Man, thank you for your advice again. I am not a student,
> but I am still learning.
And still have much to learn, apparently, especially regarding the
representable precision of floating point numbers.
> I agree that this is a non-issue for most users. For me the matter is
> closed. I will use cygwin when I need a more accurate printf.
You don't need to do that -- just compile your MinGW code with any
of the options, or define any of the feature test macros, which
enable our alternative printf() function suite, and you should get
output which is just as consistently INACCURATE as cygwin's, (for,
AFAIK, the conversion algorithm employed is the same).
Yes, I deliberately said "consistently inaccurate"; see, cygwin's
printf() is ABSOLUTELY NOT more accurate than MinGW's, (or even
Microsoft's, probably, for that matter). You keep stating these
(sadly all too widely accepted) myths:
> Every valid floating point representation that is not NaN or inf
> corresponds to an exact, non recurring fraction representation in
In the general case, this is utter and absolute nonsense! Sure,
there are a few cases where it may be true -- cases which are
limited to those where the number of significant decimal digits
generated does not exceed the precision representable in the
available binary digits of the underlying data type, AND the
significant digits of the represented value, when considered as
an unsigned integer, (after conversion of the exponent from base
two to base ten), are evenly divisible by ten, with no remainder.
In ALL other cases, (by far the majority), the value represented
cannot be expressed exactly within the decimal precision which
those binary digits support.
> There is no reason why printf shouldn't print that exact
> representation when needed, as the glibc printf does.
Pragmatically, there is every reason. For a binary representation
with N binary digits of precision, the equivalent REPRESENTABLE
decimal precision is limited to a MAXIMUM of N * log10(2) decimal
digits; for the standard data types, this equates to:
- 4-byte (float): 24 binary digits = 7.225 decimal digits
- 8-byte (double): 53 binary digits = 15.955 decimal digits
- 10-byte (long double*): 64 binary digits = 19.266 decimal digits
[*] MSVC, (hence MSVCRT.DLL), does not support the 10-byte long
double data type; their "long double" is indistinguishable from
the 8-byte double.
Now, for RELIABLY ACCURATE output, we CANNOT achieve any better
than floor( N * log10(2) ) decimal digits of precision. Sure, you
can generate more, by appending arbitrary less significant binary digits to the original representation of the value; for each such
extra binary digit there are two (equally valid) choices, so you
increase uncertainty in the output value by a factor of two for
each such digit -- and by a factor of ten for those extra bits
required for each full decimal digit added, after the bits of
the actual original representation have been exhausted. Thus,
your claimed accuracy, in those extra digits, is bogus; it
trails exponentially to zero, as you add more of them. In
reality, you are representing just one of an (ultimately)
infinite number of alternative realities, each equally valid,
(and each equally fictitious), depending on what specific,
arbitrarily chosen bit pattern you adopt, to extend the limited
representation from which you started, (and which is the extent
to which anything realistically accurate is actually known).
Incidentally, you do make a valid point regarding conversion of
INPUT from string to binary. Take the 4-byte float, for example:
its maximum reliable OUTPUT precision is only seven significant
decimal digits, but seven decimal digits is insufficient to
fully represent the full 24 binary digits within the underlying
float data type -- you need 7.225 (realistically eight) decimal
digits for that, and nine are recommended to address potential
rounding disparity. Thus, if your output is to be read back,
so as to regenerate the identical binary representation, you
will likely need to emit two digits beyond the maximum
reliable output precision. That's absolutely fine, provided
you understand that these extra digits are dependent on some
arbitrary choice, and do not represent significance in the
original data -- you certainly have absolutely no justification
for claiming them to be "accurate".
Public key available from keys.gnupg.net
Key fingerprint: C19E C018 1547 DE50 E1D4 8F53 C0AD 36C6 347E 5A3F
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.20 (GNU/Linux)
-----END PGP SIGNATURE-----
MinGW-users mailing list