Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit
- Date: Mon, 7 Jan 2019 10:28:31 +0000
- From: Luke Kenneth Casson Leighton <lkcl@xxxxxxxx>
- Subject: Re: Rebuilding the entire Debian archive twice on arm64 hardware for fun and profit
On Sun, Jan 6, 2019 at 11:46 PM Steve McIntyre <steve@xxxxxxxxxx> wrote:
> [ Please note the cross-post and respect the Reply-To... ]
> Hi folks,
> This has taken a while in coming, for which I apologise. There's a lot
> of work involved in rebuilding the whole Debian archive, and many many
> hours spent analysing the results. You learn quite a lot, too! :-)
> I promised way back before DC18 that I'd publish the results of the
> rebuilds that I'd just started. Here they are, after a few false
> starts. I've been rebuilding the archive *specifically* to check if we
> would have any problems building our 32-bit Arm ports (armel and
> armhf) using 64-bit arm64 hardware. I might have found other issues
> too, but that was my goal.
steve, this is probably as good a time as any to mention a very
specific issue with binutils (ld) that has been slowly and inexorably
creeping up on *all* distros - both 64 and 32 bit - where the 32-bit
arches are beginning to hit the issue first.
it's a 4GB variant of the "640k should be enough for anyone" problem,
as applied to linking.
i spoke with dr stallman a couple of weeks ago and confirmed that in
the original version of ld that he wrote, he very specifically made
sure that it ONLY allocated memory up to the amount of *physical*
RAM actually available (i.e. only went into swap as an absolute last
resort), and, secondly, that the number of object files loaded into
memory at any one time was kept, again, to the minimum that the
spare resident RAM could handle.
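the policy described above can be sketched in a few lines of shell
(a hypothetical illustration, not actual ld code — the
"link_mem_estimate_kb" figure is an invented per-job estimate, and
MemAvailable is a Linux-specific /proc/meminfo field):

```shell
#!/bin/sh
# sketch of the policy: only attempt a fully in-memory link if the job
# fits in *physical* RAM; otherwise ask ld to stop caching symbol tables.
link_mem_estimate_kb=2000000   # hypothetical estimate for this link job

# MemAvailable: the kernel's estimate of RAM usable without swapping
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)

if [ "$avail_kb" -lt "$link_mem_estimate_kb" ]; then
    # job would not stay resident: trade link speed for memory footprint
    extra_ldflags="-Wl,--no-keep-memory"
fi
echo "linking with: ${extra_ldflags:-default flags}"
```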
some... less-experienced people, somewhere in the late 1990s, ripped
all of that code out ["what's all this crap? why are we not just
relying on swap? 4GB of swap will surely be enough for anybody!"]
by 2008 i experienced a complete melt-down on a 2GB system when
compiling webkit. i tracked it down to having accidentally enabled
"-g -g -g" in the Makefile, which i had done specifically for one
file, forgot about it, and accidentally recompiled everything.
that resulted in an absolute thrashing meltdown that nearly took out
the entire laptop.
the problem is that the linker phase in any application is so heavy
on cross-references that the moment the memory allocated by the linker
goes outside of the boundary of the available resident RAM it is
ABSOLUTELY GUARANTEED to go into permanent sustained thrashing.
i cannot emphasise enough how absolutely critical it is for EVERY
distribution that this gets fixed.
resources world-wide are being completely wasted (power, time, and the
destruction of HDDs and SSDs) because systems which should only really
take an hour to do a link are instead often taking FIFTY times longer
due to swap thrashing.
not only that, but the poor design of ld is beginning to stop certain
packages from even *linking* on 32-bit systems! firefox i heard now
requires SEVEN GIGABYTES during the linker phase!
and it's down to this very short-sighted decision to remove code
written by dr stallman, back in the late 1990s.
it would be extremely useful to confirm that 32-bit builds can in fact
be completed, simply by adding "-Wl,--no-keep-memory" to any 32-bit
builds that are failing at the linker phase due to lack of memory.
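for anyone wanting to try this, a minimal sketch of how the flag gets
through to ld (the compiler driver passes anything after "-Wl," straight
to the linker; GNU ld's --no-keep-memory tells it not to cache the
symbol tables of input files in memory, trading link speed for a much
smaller footprint):

```shell
# append GNU ld's --no-keep-memory via the compiler driver's LDFLAGS
LDFLAGS="${LDFLAGS:-} -Wl,--no-keep-memory"
export LDFLAGS
echo "$LDFLAGS"

# or directly on a single link command:
# gcc -o foo foo.o bar.o -Wl,--no-keep-memory
```

ld also has a related --reduce-memory-overheads option that may be worth
trying alongside it.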
however *please do not make the mistake of thinking that this is
specifically a 32-bit problem*. resources are being wasted on 64-bit
systems by them going into massive thrashing, just as much as they are
on 32-bit ones: it's just that if it happens on a 32-bit system, a hard
failure results (the link simply runs out of address space).
somebody needs to take responsibility for fixing binutils: the
maintainer of binutils needs help, as he does not understand the
severity of the problem.