Re: reftable: new ref storage format
- Date: Thu, 13 Jul 2017 12:56:54 -0700
- From: Stefan Beller <sbeller@xxxxxxxxxx>
- Subject: Re: reftable: new ref storage format
On Thu, Jul 13, 2017 at 12:32 PM, Jeff King <peff@xxxxxxxx> wrote:
> On Wed, Jul 12, 2017 at 05:17:58PM -0700, Shawn Pearce wrote:
>> We've been having scaling problems with an insane number of references
>> (>866k), so I started thinking a lot about improving ref storage.
>> I've written a simple approach, and implemented it in JGit.
>> Performance is promising:
>> - 62M packed-refs compresses to 27M
>> - 42.3 usec lookup
> Exciting. I'd love for us to have a simple-ish on-disk structure that
> scales well and doesn't involve a dependency on a third-party database.
> Let me see what holes I can poke in your proposal, though. :)
>> ### Problem statement
>> Some repositories contain a lot of references (e.g. android at 866k,
>> rails at 31k). The existing packed-refs format takes up a lot of
>> space (e.g. 62M), and does not scale with additional references.
>> Lookup of a single reference requires linearly scanning the file.
> I think the linear scan is actually an implementation short-coming. Even
> though the records aren't fixed-length, the fact that newlines can only
> appear as end-of-record is sufficient to mmap and binary search a
> packed-refs file (you just have to backtrack a little when you land in
> the middle of a record).
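Peff's mmap-and-binary-search idea can be sketched like this (an illustrative Python sketch, not git's code; `record_at` and `find_ref` are made-up names): each probe lands at an arbitrary byte offset and backtracks to the preceding newline to recover a whole record.

```python
def record_at(buf: bytes, pos: int):
    """Backtrack from an arbitrary offset to the start of the
    enclosing record; return (start, end, line)."""
    start = buf.rfind(b"\n", 0, pos) + 1   # 0 when pos is in the first record
    end = buf.find(b"\n", start)
    if end == -1:
        end = len(buf)
    return start, end, buf[start:end]

def find_ref(buf: bytes, name: bytes):
    """Binary search a name-sorted '<sha1> <refname>' buffer."""
    lo, hi = 0, len(buf)
    while lo < hi:
        start, end, line = record_at(buf, (lo + hi) // 2)
        sha, _, ref = line.partition(b" ")
        if ref == name:
            return sha
        if ref < name:
            lo = end + 1                   # skip past this record
        else:
            hi = start
    return None

# Toy data, sorted by refname as packed-refs is:
refs = {
    b"refs/heads/maint":  b"a" * 40,
    b"refs/heads/master": b"b" * 40,
    b"refs/tags/v1.0":    b"c" * 40,
}
buf = b"".join(b"%s %s\n" % (sha, name) for name, sha in sorted(refs.items()))
```

This works precisely because every record is self-contained, which is the property the delta-compressed format gives up.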
Except that a record is a "delta" to the previous record, so it's not
just finding a record, but reconstructing it. Example records:

  varint( prefix_length )
  varint( (suffix_length << 2) | type )

First record (a 16-byte name with no shared prefix, e.g. refs/heads/maint):
  prefix_length = 0, then 16 << 2 | 0x01
Next record (refs/heads/master, sharing the 13-byte prefix "refs/heads/ma"):
  prefix_length = 13, then 4 << 2 | 0x01

Now if you found the second one, you cannot reconstruct its
real name (refs/heads/master) without knowing the name
of the first. The name of the first is easy because its prefix_length
is 0. If it also had a prefix_length != 0 you'd have to go back further.
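That sequential dependency is easy to see in a toy encoder/decoder (a sketch of the varint scheme quoted above, not JGit's actual code; `REF_VALUE = 0x01` is assumed from the type bits in the example):

```python
def write_varint(out: bytearray, v: int) -> None:
    while v >= 0x80:
        out.append((v & 0x7F) | 0x80)
        v >>= 7
    out.append(v)

def read_varint(buf: bytes, pos: int):
    v = shift = 0
    while True:
        b = buf[pos]; pos += 1
        v |= (b & 0x7F) << shift
        if not (b & 0x80):
            return v, pos
        shift += 7

REF_VALUE = 0x01   # assumed record-type bits, as in the example above

def encode(names):
    out, prev = bytearray(), b""
    for name in names:
        p = 0
        while p < min(len(prev), len(name)) and prev[p] == name[p]:
            p += 1
        suffix = name[p:]
        write_varint(out, p)
        write_varint(out, (len(suffix) << 2) | REF_VALUE)
        out += suffix
        prev = name
    return bytes(out)

def decode(buf):
    names, prev, pos = [], b"", 0
    while pos < len(buf):
        prefix_len, pos = read_varint(buf, pos)
        word, pos = read_varint(buf, pos)
        suffix_len = word >> 2             # low 2 bits are the type
        name = prev[:prefix_len] + buf[pos:pos + suffix_len]
        pos += suffix_len
        names.append(name)
        prev = name                        # record N needs record N-1's name
    return names

names = [b"refs/heads/maint", b"refs/heads/master"]
encoded = encode(names)
# The second record alone is just prefix_length=13, (4 << 2) | 0x01,
# "ster" -- meaningless without the first record's decoded name.
```

Restart points exist exactly to bound how far back that walk has to go.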
>> - Occupy less disk space for large repositories.
> Good goal. Just to play devil's advocate, the simplest way to do that
> with the current code would be to gzip packed-refs (and/or store sha1s
> as binary). That works against the "mmap and binary search" plan,
> though. :)
Given the compression by delta-ing the name against the previous record,
and the fact that Gerrit numbers its changes sequentially (so ref names
share long prefixes), I think this format would trump a "dumb" zip.
(GitHub having sequentially numbered pull requests would also compress
well under this scheme.)
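As a rough illustration of why sequentially numbered refs compress so well under name delta-ing (a toy experiment, not a benchmark; the one-byte length fields are a simplification of the varints):

```python
import zlib

# GitHub-style numbered refs; Gerrit change refs behave similarly.
names = [b"refs/pull/%d/head" % n for n in range(1, 10001)]
plain = b"\n".join(names)

def prefix_compress(names):
    out, prev = bytearray(), b""
    for name in names:
        p = 0
        while p < min(len(prev), len(name)) and prev[p] == name[p]:
            p += 1
        out.append(p)                # toy: 1-byte prefix length
        out.append(len(name) - p)    # toy: 1-byte suffix length
        out += name[p:]              # only the differing tail is stored
        prev = name
    return bytes(out)

delta = prefix_compress(names)
gz = zlib.compress(plain, 9)
print(len(plain), len(delta), len(gz))
```

Unlike the gzip'd file, the delta form stays decodable from any restart point, which is what keeps it searchable.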
>> ## File format
> OK, let me try to summarize to see if I understand.
When Shawn presented the proposal, a couple of colleagues here
were as excited as I was, but the obvious question was why Shawn
did not give the whole format in BNF, top down:
> The reftable file is a sequence of blocks, each of which contains a
> finite set of heavily-compressed refs. You have to read each block
> sequentially,

Each block may have restart points that allow for intra-block
binary search.

> but since they're a fixed size, that's still a
> constant-time operation (I'm ignoring the "restarts" thing for now). You
> find the right block by reading the index.
or by reading the footer at the end. If the footer and the index
disagree on the block size (say, one bit flipped), the CRC in the
footer can give us more guidance.
> So lookup really is more
> like O(block_size * log(n/block_size)), but block_size being a constant,
> it drops out to O(log n).
There is also an index block, so you can binary search across blocks;
lookup is
  O( log(block_count) + log(intra_block_restart_points) + small linear scan ).
There are two binary searches, and the block size is an interesting
knob to look at when weighing the trade-offs.
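The two binary searches can be sketched like this (an assumed miniature of the structure, with toy block and restart sizes, not the actual reftable layout):

```python
import bisect

BLOCK_SIZE = 4        # names per block (toy value)
RESTART_EVERY = 2     # every 2nd name in a block is a restart point

names = sorted(b"refs/heads/br%04d" % n for n in range(20))
blocks = [names[i:i + BLOCK_SIZE] for i in range(0, len(names), BLOCK_SIZE)]
index = [blk[0] for blk in blocks]       # first name of each block

def lookup(name: bytes) -> bool:
    # 1st binary search: which block could hold the name?
    b = bisect.bisect_right(index, name) - 1
    if b < 0:
        return False
    blk = blocks[b]
    restarts = list(range(0, len(blk), RESTART_EVERY))
    # 2nd binary search: last restart point whose name <= target.
    r = bisect.bisect_right([blk[i] for i in restarts], name) - 1
    if r < 0:
        return False
    # Small linear scan between restart points.
    for i in range(restarts[r], min(restarts[r] + RESTART_EVERY, len(blk))):
        if blk[i] == name:
            return True
        if blk[i] > name:
            break
    return False
```

A bigger block means a shorter index but a longer intra-block search, which is the trade-off mentioned above.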