
Re: [Mingw-users] msvcrt printf bug

On 19/01/17 14:09, Keith Marshall wrote:

On 19/01/17 11:19, Peter Rockett wrote:
On 19/01/17 08:21, tei.andu@xxxxxxxxxxxx wrote:
Keith, have a look here please:
Quote from the article:
Every binary floating-point number has an exact decimal equivalent 
which can be expressed as a decimal string of finite length.

When I started this, I didn't know about this article. Also another 
in-depth look at float to decimal conversion from the same author:
I think the above is exactly the sort of misleading website Keith was
complaining about.
Indeed, yes.  I stumbled across that same website several years
ago: I immediately dismissed it as the gigantic pile of bovine
manure which it so clearly represents.  Sadly, there are far too
many mathematically challenged programmers who are gullible enough
to accept such crap at face value; (and far too much similarly
ill-informed nonsense pervading the internet).

If you take the binary representation of DBL_MAX, clear the least
significant bit of the mantissa and calculate the decimal equivalent,
you will get another 309-digit number. But compared to the decimal
equivalent of DBL_MAX, I think you will find that the bottom ~294
digits will have changed. If you got this number from a calculation,
what does it tell you about the reliability of the bottom 294 digits,
if the smallest possible change to the binary number produces such a
massive shift?
It tells us, as I've said before, that those excess ~294 digits are
meaningless garbage, having no mathematical significance whatsoever.
Thinking about this further, I believe there is another level of subtlety here.

Again taking the DBL_MAX example from http://www.exploringbinary.com/number-of-decimal-digits-in-a-binary-fraction/, the logic goes:

DBL_MAX is 1.1111111111111111111111111111111111111111111111111111 x 2^1023

And so:

1.1111111111111111111111111111111111111111111111111111 x 2^1023

= (2 - 2^-52) x 2^1023

= 17976931348623...58368 (309 digits worth)

The second step - the conversion from (2 - 2^-52) x 2^1023 to this monster 309-digit decimal number, and therefore the validity of the trailing 294 digits - is unquestionably correct. All 294 digits are perfectly good! The logical flaw in the above argument is actually the very first step, which equates the IEEE-format binary number to (2 - 2^-52) x 2^1023. This is not an equation, it is a logical equivalence (it should really be a '<=>' symbol)! (How do you assign a discrete data structure to a number?) It thus follows that any reasoning about the IEEE-format binary number via its decimal equivalent has to be tempered by some key conditions, because the first step has fundamentally changed the problem.

Given the subtlety of this, I am not surprised many people miss the point and erroneously assert things like "every binary floating-point number has an exact decimal equivalent". The equivalent number written in terms of powers of 2 does indeed have an exact decimal equivalent; but this does not extend 'upstream' to the original binary number. The second stage of the reasoning is absolutely correct, but a correct piece of logic following a flawed logical step is still false. The net conclusion, of course, is that we still only have 15-16 significant digits, and any further digits are false precision due to the finite width of the binary mantissa.

From some unpromising beginnings, I have actually found this a very useful thread. I now have a clearer formal understanding of why so many people get floating point numbers wrong. I have also been corrected about recent versions of gcc not reordering instructions during optimisation - that is really useful to know!


Put another way, if you do arithmetic at this end of the floating
point scale, the smallest change you can make is ~10^294. Thus only
~15 of the decimal digits are significant - the rest are completely
uncertain. Passing to the decimal equivalents ultimately clouds the
issue. Floating point arithmetic is done in binary, not in decimal.

I suspect the OP's conceptual problem lies in viewing every float in
splendid isolation rather than as part of a computational system.
This is why printing to false precision has not attracted much uptake
here. There is a fundamental difference between digits per se and
digits that actually carry useful information!
Exactly so.  Nicely elucidated, Peter.  Thank you.

Or another take: If you plot possible floating point representations
on a real number line, you will have gaps between the points. The OP
is trying print out numbers that fall in the gaps!
And, taken to its ultimate conclusion, infinitely many real numbers
fall in each gap: each of these is an equally speculative
misrepresentation of the reality conveyed by the actual bits encoding the number at one or other end of the gap.


MinGW-users mailing list
