Re: [PATCH] convert: avoid malloc of original file size
- Date: Thu, 7 Mar 2019 21:52:37 -0500
- From: Jeff King <peff@xxxxxxxx>
On Fri, Mar 08, 2019 at 10:26:24AM +0900, Junio C Hamano wrote:
> Jeff King <peff@xxxxxxxx> writes:
> > As discussed there, I do think this only solves half the problem, as the
> > smudge filter has the same issue in reverse. That's more complicated to
> > fix, and AFAIK nobody is working on it. But I don't think there's any
> > reason not to pick up this part in the meantime.
> Yeah, I agree that the reverse direction shares the same issue.
> I am not sure 0 is a good initial value in this direction, either;
> I'd rather clip to min(len, core.bigfilethreshold) or something like
> that, to avoid regressing the more normal use cases.
That was my initial thought, too, but Joey's benchmarks show it doesn't
seem to make a big difference either way. In his numbers the cost did
become measurable for a 1GB file, but even with clipping we'd still not
end up with "hint == len" in that case (we'd probably need one or two
doublings to get there).
I also think running a real (non-condensing) filter on a 1GB file is
already a pretty unlikely corner case.
> But let's queue this and see what happens.
Sounds good to me.