
Re: git gc --auto yelling at users where a repo legitimately has >6700 loose objects




On Fri, Jan 12 2018, Duy Nguyen jotted:

> On Fri, Jan 12, 2018 at 4:33 AM, Ævar Arnfjörð Bjarmason
> <avarab@xxxxxxxxx> wrote:
>> For those rusty on git-gc's defaults, this is what it looks like in this
>> scenario:
>>
>>  1. User runs "git pull"
>>  2. git gc --auto is called, there are >6700 loose objects
>>  3. it forks into the background, tries to prune and repack, objects
>>     older than gc.pruneExpire (2.weeks.ago) are pruned.
>>  4. At the end of all this, we check *again* whether we have >6700
>>     objects; if we do, we print "run 'git prune'" to .git/gc.log, and
>>     will just emit that error for the next day before trying again,
>>     at which point we unlink the gc.log and retry; see gc.logExpiry.
>>
>> Right now I've just worked around this by setting gc.pruneExpire to a
>> lower value (4.days.ago). But there's a larger issue to be addressed
>> here, and I'm not sure how.
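(Concretely, that workaround is a one-liner; shown here for illustration, the default being 2.weeks.ago:)

```shell
# Let gc --auto prune loose objects sooner than the 2-week default,
# at the cost of a smaller safety window for uncommitted work:
git config gc.pruneExpire 4.days.ago
```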
>>
>> When the warning was added in [1] it didn't know to detach to the
>> background yet, that came in [2], shortly after came gc.log in [3].
>>
>> We could add another gc.auto-like limit, which could be set at some
>> higher value than gc.auto. "Hey, if I have more than 6700 loose
>> objects, prune the ones older than 2 weeks, but if at the end there's
>> still >6700 I don't want to hear about it unless there's >6700*N".
>
> Yes, it's about time we made too_many_loose_objects() more accurate
> and complain less, especially when the complaint is useless.
>
>> I thought I'd just add that, but the details of how to pass that message
>> around get nasty. With that solution we *also* don't want git gc to
>> start churning in the background once we reach >6700 objects, so we need
>> something like gc.logExpiry which defers the gc until the next day. We
>> might need to create .git/gc-waitabit.marker, ew.
>
> Hmm.. could we save the info from the last run to help the next one?
> If the last gc --auto (which does try to remove some loose objects)
> leaves 6700 objects still loose, then it's "clear" that the next run
> may also leave them loose. If we save that number somewhere (in
> gc.log too?), too_many_loose_objects() can read it back, subtract it
> from the estimate, and may decide not to run gc at all, since the
> number of loose-and-prunable objects is below the threshold.
>
> The problem is of course that these 6700 will gradually become
> prunable over time. We can't just subtract the same constant forever.
> Perhaps we can do something based on gc.pruneExpire?
>
> Say gc.pruneExpire specifies keeping objects for two weeks; we assume
> these objects' creation times are spread evenly over 14 days. So
> after one day, 6700/14 of them should be prunable and count toward
> the too_many_loose_objects() estimate. A gc --auto run two weeks
> after the first run would count all the leftover loose objects as
> prunable again.
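That decay can be sketched with plain shell arithmetic. All numbers below are hypothetical, and "leftover" stands for the count the previous gc --auto would have saved, which is not something git records today:

```shell
# Hypothetical sketch of the proposed adjustment: assume the objects
# the last gc left behind age into prunability evenly over the
# gc.pruneExpire window, and subtract the not-yet-prunable remainder
# from the current estimate before comparing against gc.auto.
leftover=6700        # loose objects the previous gc could not prune
expire_days=14       # from gc.pruneExpire (2.weeks.ago)
days_since_gc=1      # time elapsed since that gc run

still_unprunable=$(( leftover * (expire_days - days_since_gc) / expire_days ))
current_loose=7000   # hypothetical current loose-object estimate
adjusted=$(( current_loose - still_unprunable ))
echo "adjusted count: $adjusted (vs. raw $current_loose)"
```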
>
>> More generally, these hard limits seem contrary to what the user cares
>> about. E.g. I suspect that most of these loose objects come from
>> branches since deleted in upstream, whose objects could have a different
>> retention policy.
>
> Er.. what retention policy? I think gc.pruneExpire is the only thing
> that can keep loose objects around?

You answered this yourself in
CACsJy8CUYosOGK5tn0C=t=SkbS-fyaSxp536zx+9jh_O+WNaEQ@xxxxxxxxxxxxxx:
yes, I mean loose objects from branch deletions.

More generally, the reason we even have the 2-week limit is to strike a
good trade-off between performance and not losing work that someone
e.g. "git add"-ed but never committed.

I'm suggesting (though I don't know if it's worth it, especially given
Jeff's comments) that a smarter approach might be to track where the
objects came from (e.g. by keeping reflogs for deleted upstream
branches for $expiry_time).

Then we could immediately delete loose objects we got from upstream
branches (or delete them more aggressively), while treating objects that
were originally created in the local repository differently.
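As an illustration of where these objects come from, here is a throwaway-repo sketch (not anything gc does today) showing how a deleted branch leaves unreachable loose objects behind:

```shell
# A commit on a deleted branch becomes an unreachable loose object,
# which is exactly the churn described in this thread.
repo=$(mktemp -d)
git -C "$repo" -c init.defaultBranch=main init -q
git -C "$repo" config user.name example
git -C "$repo" config user.email example@example.com
git -C "$repo" commit -q --allow-empty -m base
git -C "$repo" checkout -q -b topic
git -C "$repo" commit -q --allow-empty -m topic-work
git -C "$repo" checkout -q -
git -C "$repo" branch -q -D topic
# The topic commit is now referenced only by reflogs; once those
# expire (gc does this per gc.reflogExpireUnreachable), it is truly
# unreachable and fsck reports it:
git -C "$repo" reflog expire --expire=now --all
git -C "$repo" fsck --unreachable
```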

>> But now I have git-gc on some servers yelling at users on every pull
>> command:
>>
>>    warning: There are too many unreachable loose objects; run 'git prune' to remove them.
>
> Why do we yell at the users when some maintenance thing is supposed to
> be done on the server side? If this is the case, should gc have some
> way to yell at the admin instead?

Sorry, I didn't clarify this: it's a shared server (a rollout system
with staged checkouts) that users log into to stage/test a rollout from
the git repo, so it's not the git server.

Because it's a shared repo there's a lot more loose-object churn,
mostly due to pulling more often (and thus more branches that later get
deleted), but also from rebasing and the like in the rollout repo.