Re: [PATCH 0/4] dropping support for older curl

On Wed, Aug 09 2017, Johannes Schindelin jotted:

> Hi Peff,
> On Wed, 9 Aug 2017, Jeff King wrote:
>> This is a resurrection of the thread from April:
>>   https://public-inbox.org/git/20170404025438.bgxz5sfmrawqswcj@xxxxxxxxxxxxxxxxxxxxx/
> As before, I would like to point out that people running with older cURL
> are most likely not at liberty to change the system libraries.
> I know that I didn't when I was working on a very expensive microscope
> whose only certified control computer ran a very old version of CentOS,
> and I really needed to install Git on it.
> In such a case, it is often preferable to be able to build against an old
> cURL -- even if some of the fancier features might be broken, and even if
> some minor compile errors need to be fixed.
> I know I was happy to compile Git against an ancient cURL back then.
> Just so you understand where I come from when I would like to caution
> against dropping support for older cURL unless it *really* adds an
> *enormous* amount of maintenance burden.
> I mean, if we even go out of our way to support the completely outdated
> and obsolete .git/branches/ for what is likely a single user, it may not
> be the worst to keep those couple of #ifdef guards to keep at least
> nominal support for older cURLs?

I too often compile on ancient CentOS crap where I need a newer
library and upgrading the system library is not an option, and the
problem you're describing is easily solved.

You grab the source RPM for e.g. curl, search-replace both the package
name and the installation paths to something else, e.g. name it
avar-curl and install it in /usr/local/avar-curl/{lib,bin,include}, then
make your new git package {Requires,BuildRequires}: avar-curl{,-dev}.

You then get a brand-new curl on your system without touching anything
that needs the ancient system curl, because your new custom curl lives
under separate paths; you then compile the package you actually wanted
against it.
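The renaming step described above can be sketched roughly as follows. The package name "avar-curl", the /usr/local/avar-curl prefix, and the tiny stand-in spec file are this sketch's own assumptions, not anything prescribed by the thread; a real curl.spec (from `rpm -i curl-*.src.rpm`) is much larger, but the search-replace is the same idea:

```shell
# Minimal stand-in spec so this sketch is self-contained; a real one
# would come from the distro's source RPM.
cat > curl.spec <<'EOF'
Name: curl
Prefix: /usr
%configure --prefix=/usr
EOF

# Rename the package, then move every installation path under the
# custom prefix so it cannot collide with the system curl:
sed -e 's/^Name: curl$/Name: avar-curl/' \
    -e 's|/usr|/usr/local/avar-curl|g' \
    curl.spec > avar-curl.spec

cat avar-curl.spec

# Then (not run here): rpmbuild -ba avar-curl.spec, install the result,
# and point git's own spec at it with
#   BuildRequires: avar-curl-devel
#   Requires: avar-curl
```

After that, building git against the relocated library is a matter of passing the custom include/lib paths to its build (e.g. via CURL_CONFIG or CFLAGS/LDFLAGS), which is exactly the "compile the package you actually wanted against those" step.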

Is it painless? No; of course it would be easier for me if I could just
"yum upgrade" and every package were tested against all 10-year-old
versions of its dependencies, but it's often not realistic to expect
that.

It usually takes no more than 10 minutes to give a package this
treatment, since I can usually grab an SRPM that already works for that
OS version; I just need to change the name and installation paths.

At $WORK we have hundreds of RPMs that have been given this treatment
for one reason or another.

Some of those exist because upstream has stopped supporting the stuff
found on our systems. In some cases it's trivial to fix and they're
willing to take a patch, but in other cases it's reasonable of them to
say "just upgrade". Looking at the diffstat of this series, I think
this is such a case, especially given Jeff's argument in