<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">


<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    
    <title>CHK Optimized index &mdash; Bazaar 2.6.0 documentation</title>
    
    <link rel="stylesheet" href="_static/default.css" type="text/css" />
    <link rel="stylesheet" href="_static/pygments.css" type="text/css" />
    
    <script type="text/javascript">
      var DOCUMENTATION_OPTIONS = {
        URL_ROOT:    './',
        VERSION:     '2.6.0',
        COLLAPSE_INDEX: false,
        FILE_SUFFIX: '.html',
        HAS_SOURCE:  true
      };
    </script>
    <script type="text/javascript" src="_static/jquery.js"></script>
    <script type="text/javascript" src="_static/underscore.js"></script>
    <script type="text/javascript" src="_static/doctools.js"></script>
    <link rel="shortcut icon" href="_static/bzr.ico"/>

    <link rel="top" title="Bazaar 2.6.0 documentation" href="index.html" />
    <link rel="up" title="Plans" href="plans.html" />
    <link rel="next" title="1   Nested Trees" href="nested-trees.html" />
    <link rel="prev" title="Bazaar Windows Shell Extension Options" href="tortoise-strategy.html" />
<link rel="stylesheet" href="_static/bzr-doc.css" type="text/css" />
 
  </head>
  <body>
    <div class="related">
      <h3>Navigation</h3>
      <ul>
        <li class="right" style="margin-right: 10px">
          <a href="nested-trees.html" title="1   Nested Trees"
             accesskey="N">next</a></li>
        <li class="right" >
          <a href="tortoise-strategy.html" title="Bazaar Windows Shell Extension Options"
             accesskey="P">previous</a> |</li>
<li><a href="http://bazaar.canonical.com/">
    <img src="_static/bzr icon 16.png" /> Home</a>&nbsp;|&nbsp;</li>
<li><a href="http://doc.bazaar.canonical.com/en/">Documentation</a>&nbsp;|&nbsp;</li>

        <li><a href="index.html">Developer Document Catalog (2.6.0)</a> &raquo;</li>

          <li><a href="plans.html" accesskey="U">Plans</a> &raquo;</li> 
      </ul>
    </div>  

    <div class="document">
      <div class="documentwrapper">
        <div class="bodywrapper">
          <div class="body">
            
  <div class="section" id="chk-optimized-index">
<h1>CHK Optimized index<a class="headerlink" href="#chk-optimized-index" title="Permalink to this headline">¶</a></h1>
<p>Our current btree style index is nice as a general index, but it is not optimal
for Content-Hash-Key based content. With CHK, the keys themselves are hashes,
which means they are randomly distributed (similar keys do not refer to
similar content), and they do not compress well. However, we can create an
index which takes advantage of these abilities, rather than suffering from
them. Even further, there are specific advantages provided by
<tt class="docutils literal"><span class="pre">groupcompress</span></tt>, because of how individual items are clustered together.</p>
<p>Btree indexes also rely on zlib compression in order to get their compact
size, and further have to try hard to fit things into a compressed 4k page.
When the key is a sha1 hash, we would not expect to get better than 20 bytes
per key, which is the same size as the binary representation of the hash. This
means we could write an index format that gets approximately the same on-disk
size, without having the overhead of <tt class="docutils literal"><span class="pre">zlib.decompress</span></tt>. Some thought would
still need to be put into how to efficiently access these records remotely.</p>
<div class="section" id="required-information">
<h2>Required information<a class="headerlink" href="#required-information" title="Permalink to this headline">¶</a></h2>
<p>For a given groupcompress record, we need to know the offset and length of the
compressed group in the .pack file, and the start and end of the content inside
the uncompressed group. The absolute minimum is slightly less, but this is a
good starting point. The other thing to consider is that for 1M revisions and
1M files, we&#8217;ll probably have 10-20M CHK pages, so we want to make sure we
have an index that can scale up efficiently.</p>
<ol class="arabic simple">
<li>A compressed sha hash is 20-bytes</li>
<li>Pack files can be &gt; 4GB; we could use an 8-byte (64-bit) pointer, or we
could store a 5-byte pointer for a cap at 1TB. 8-bytes still seems like
overkill, even if it is the natural next size up.</li>
<li>An individual group would never be longer than 2^32, but they will often
be bigger than 2^16. 3 bytes for length (16MB) would be the minimum safe
length, and may not be safe if we expand groups for large content (like ISOs).
So probably 4-bytes for group length is necessary.</li>
<li>A given start offset has to fit in the group, so another 4-bytes.</li>
<li>Uncompressed length of record is based on original size, so 4-bytes is
expected as well.</li>
<li>That leaves us with 20+8+4+4+4 = 40 bytes per record (see the sketch below). At
the moment, btree compression gives us closer to 38.5 bytes per record. We
don&#8217;t have perfect compression, but we also don&#8217;t have &gt;4GB pack files (and if
we did, the first 4GB are all under the 2^32 barrier :).</li>
</ol>
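<p>As a rough illustration of that byte budget (this is just the arithmetic above
expressed with Python&#8217;s <tt class="docutils literal"><span class="pre">struct</span></tt> module, not a format bzr actually implements):</p>
<div class="highlight-python"><div class="highlight"><pre>import struct

# 20-byte sha1, 8-byte group start, 4-byte group length,
# then 4-byte start and 4-byte length inside the uncompressed group.
NAIVE_RECORD = struct.Struct('!20sQIII')

assert NAIVE_RECORD.size == 40      # 20+8+4+4+4, as in the list above
</pre></div>
</div>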
<p>If we wanted to go back to the <em>minimal</em> amount of data that we would need to
store:</p>
<ol class="arabic">
<li><p class="first">8 bytes of a sha hash are generally going to be more than enough to fully
determine the entry (see <a class="reference internal" href="#partial-hash">Partial hash</a>). We could support some amount of
collision in an index record, in exchange for resolving it inside the
content. At least in theory, we don&#8217;t <em>have</em> to record the whole 20-bytes
for the sha1 hash. (8-bytes gives us less than 1 in 1000 chance of
a single collision for 10M nodes in an index)</p>
</li>
<li><p class="first">We could record the start and length of each group in a separate location,
and then have each record reference the group by an &#8216;offset&#8217;. This is because
we expect to have many records in the same group (something like 10k or so,
though we&#8217;ve fit &gt;64k under some circumstances). At a minimum, we have one
record per group so we have to store at least one reference anyway. So the
maximum overhead is just the size and cost of the dereference (and normally
will be much much better than that.)</p>
</li>
<li><p class="first">If a group reference is an 8-byte start, and a 4-byte length, and we have
10M keys, but get at least 1k records per group, then we would have 10k
groups.  So we would need 120kB to record all the group offsets, and then
each individual record would only need a 2-byte group number, rather than a
12-byte reference.  We could be safe with a 4-byte group number, but if
each group is ~1MB, 64k groups is 64GB. We can start with 2-byte, but leave
room in the header info to indicate if we have more than 64k group entries.
Also, current grouping creates groups of 4MB each, which would require
256GB of group data to create 64k groups. And our current chk pages compress down to
less than 100 bytes each (average is closer to 40 bytes), so 256GB of
group data would amount to 2.7 billion CHK records. (This will change if
we start to use CHK for text records, as they do not compress down as
small.) Using 100 bytes for each of 10M chk records, we have 1GB of compressed chk
data, split into 4MB groups or 250 total groups. Still &lt;&lt; 64k groups.
Conversions could create 1 chk record at a time, creating a group for each,
but they would be foolish to not commit a write group after 10k revisions
(assuming 6 CHK pages each).</p>
</li>
<li><p class="first">We want to know the start-and-length of a record in the decompressed
stream. This could actually be moved into a mini-index inside the group
itself. Initial testing showed that storing an expanded &#8220;key =&gt;
start,offset&#8221; consumed a considerable amount of compressed space. (about
30% of final size was just these internal indices.) However, we could move
to a pure &#8220;record 1 is at location 10-20&#8221;, and then our external index
would just have a single &#8216;group entry number&#8217;.</p>
<p>There are other internal forces that would give a natural cap of 64k
entries per group. So without much loss of generality, we could probably get
away with a 2-byte &#8216;group entry&#8217; number (which then generates an 8-byte
offset + endpoint as a header in the group itself).</p>
</li>
<li><p class="first">So for 1M keys, an ideal chk+group index would be:</p>
<blockquote>
<div><ol class="loweralpha simple">
<li>6-byte hash prefix</li>
<li>2-byte group number</li>
<li>2-byte entry in group number</li>
<li>a separate lookup of 12-byte group number to offset + length</li>
<li>a variable width mini-index that splits X bits of the key. (to maintain
small keys, low chance of collision, this is <em>not</em> redundant with the
value stored in (a)) This should then dereference into a location in
the index. This should probably be a 4-byte reference. It is unlikely,
but possible, to have an index &gt;16MB. With a 10-byte entry, it only
takes 1.6M chk nodes to do so.  At the smallest end, this will probably
be a 256-way (8-bits) fan out, at the high end it could go up to
64k-way (16-bits) or maybe even 1M-way (20-bits). (64k-way should
handle up to 5-16M nodes and still allow a cheap &lt;4k read to find the
final entry.)</li>
</ol>
</div></blockquote>
</li>
</ol>
<p>So the max size for the optimal groupcompress+chk index with 10M entries would be:</p>
<div class="highlight-python"><div class="highlight"><pre>10 * 10M (entries) + 64k * 12 (group) + 64k * 4 (mini index) = 101 MiB
</pre></div>
</div>
<p>So roughly 101MiB, which breaks down as ~100MiB for the actual entries, 0.75MiB for the
group records, and 0.25MiB for the mini index.</p>
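<p>As a quick sanity check of those numbers (plain arithmetic matching the figures
above, not bzrlib code):</p>
<div class="highlight-python"><div class="highlight"><pre>entries = 10 * 1000 * 1000        # 10M chk records
entry_bytes = 6 + 2 + 2           # hash prefix + group number + entry-in-group
groups = 64 * 1024                # worst-case 64k groups
group_record = 8 + 4              # start + length of each group in the .pack
fanout_slots = 64 * 1024          # 64k-way mini index of 4-byte pointers

total = (entries * entry_bytes    # ~100MB of entries
         + groups * group_record  # 0.75MiB of group records
         + fanout_slots * 4)      # 0.25MiB of mini index
print(total)                      # 101048576 bytes, the ~101M figure above
</pre></div>
</div>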
<ol class="arabic">
<li><p class="first">Looking up a key would involve:</p>
<ol class="loweralpha">
<li><p class="first">Read <tt class="docutils literal"><span class="pre">XX</span></tt> bytes to get the header, and various config for the index.
Such as length of the group records, length of mini index, etc.</p>
</li>
<li><p class="first">Find the offset in the mini index for the first YY bits of the key. Read
the 4 byte pointer stored at that location (which may already be in the
first content if we pre-read a minimum size.)</p>
</li>
<li><p class="first">Jump to the location indicated, and read enough bytes to find the
correct 12-byte record. The mini-index only indicates the start of
records that start with the given prefix. A 64k-way index resolves 10M
records down to 160 possibilities. So at 12 bytes each, reading them all
would cost 1920 bytes.</p>
</li>
<li><p class="first">Determine the offset for the group entry, which is the known <tt class="docutils literal"><span class="pre">start</span> <span class="pre">of</span>
<span class="pre">groups</span></tt> location + 12B*offset number. Read its 12-byte record.</p>
</li>
<li><p class="first">Switch to the .pack file, and read the group header to determine where in
the stream the given record exists. At this point, you have enough
information to read the entire group block. For local ops, you could
only read enough to get the header, and then only read enough to
decompress just the content you want to get at.</p>
<p>Using an offset, you also don&#8217;t need to decode the entire group header.
If we assume that things are stored in fixed-size records, you can jump
to exactly the entry that you care about, and read its 8-byte
(start,length in uncompressed) info.  If we wanted more redundancy we
could store the 20-byte hash, but the content can verify itself.</p>
</li>
<li><p class="first">If the size of these mini headers becomes critical (8 bytes per record
is 8% overhead for 100 byte records), we could also compress this mini
header. Changing the number of bytes per entry is unlikely to be
efficient: groups are standardized at 4MiB wide, which is &gt;&gt;64KiB for
a 2-byte offset, so 3 bytes would be enough as long as we never store an
ISO as a single entry in the content. Variable width also isn&#8217;t a big
win, since base-128 hits 4-bytes at just 2MiB.</p>
<p>For minimum size without compression, we could only store the 4-byte
length of each node. Then to compute the offset, you have to sum all
previous nodes. We require &lt;64k nodes in a group, so it is up to 256KiB
for this header, but we would lose partial reads.  This should still be
cheap in compiled code (needs tests, as you can&#8217;t do partial info), and
would also have the advantage that fixed width would be highly
compressible itself. (Most nodes are going to have a length that fits
1-2 bytes.)</p>
<p>An alternative form would be to use the base-128 encoding.  (If the MSB
is set, then the next byte needs to be added to the current value
shifted by 7*n bits.) This encodes 4GiB in 5 bytes, but stores 127B in 1
byte, and 2MiB in 3 bytes. If we only stored 64k entries in a 4 MiB
group, the average size can only be 64B, which fits in a single byte
length, so 64KiB for this header, or only 1.5% overhead. We also don&#8217;t
have to compute the offset of <em>all</em> nodes, just the ones before the one
we want, which is similar to what we have to do to get the actual
content out.</p>
</li>
</ol>
</li>
</ol>
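<p>A minimal sketch of that lookup sequence, using the 10-byte entry layout from
item 5 above (the index reader and its helpers here are illustrative, not bzrlib
API; the text also discusses an 8-byte-prefix, 12-byte-entry variant):</p>
<div class="highlight-python"><div class="highlight"><pre>import struct

ENTRY = struct.Struct('!6sHH')   # 6-byte hash prefix, group number, entry-in-group
GROUP = struct.Struct('!QI')     # 8-byte start + 4-byte length in the .pack file
FAN_BITS = 16                    # 64k-way mini index of 4-byte pointers

def lookup(index, key20):
    """Return (group_start, group_length, entry_number) for a 20-byte sha1."""
    header = index.read_header()                   # (a) lengths, layout, fan-out
    bucket = struct.unpack('!H', key20[:2])[0]     # first FAN_BITS bits of the key
    lo, hi = header.mini_index[bucket:bucket + 2]  # (b) fan-out + 1 boundary offsets
    prefix = key20[:6]
    for raw in index.read_entries(lo, hi):         # (c) scan the ~160 candidates
        entry_prefix, group_no, entry_no = ENTRY.unpack(raw)
        if entry_prefix == prefix:
            raw_group = index.read_group_record(group_no)   # (d) 12-byte record
            start, length = GROUP.unpack(raw_group)
            return start, length, entry_no         # (e) caller reads the .pack group
    raise KeyError(key20)
</pre></div>
</div>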
</div>
<div class="section" id="partial-hash">
<h2>Partial Hash<a class="headerlink" href="#partial-hash" title="Permalink to this headline">¶</a></h2>
<p>The size of the index is dominated by the individual entries (the 1M records).
Saving 1 byte there saves 1MB overall, which is the same as the group entries
and mini index combined. If we can change the index so that it can handle
collisions gracefully (have multiple records for a given collision), then we
can shrink the number of bytes we need overall. Also, if we aren&#8217;t going to
put the full 20-bytes into the index, then some form of graceful handling of
collisions is recommended anyway.</p>
<p>The current structure does this just fine, in that the mini-index dereferences
you to a &#8220;list&#8221; of records that start with that prefix. It is assumed that
those would be sorted, but we could easily have multiple records. To resolve
the exact record, you can read both records, and compute the sha1 to decide
between them. This has performance implications, as you are now decoding 2x
the records to get at one.</p>
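<p>A sketch of that disambiguation step (the <tt class="docutils literal"><span class="pre">read_record</span></tt> callable here is a
stand-in for fetching and decompressing a candidate from its group, not an existing API):</p>
<div class="highlight-python"><div class="highlight"><pre>from hashlib import sha1

def resolve_collision(candidates, key20, read_record):
    """Of the entries sharing a stored prefix, return the one whose content
    actually hashes to the full 20-byte key."""
    for entry in candidates:
        content = read_record(entry)        # the 2x decode cost mentioned above
        if sha1(content).digest() == key20:
            return entry, content
    raise KeyError(key20)
</pre></div>
</div>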
<p>The chance of <tt class="docutils literal"><span class="pre">n</span></tt> texts colliding with a hash space of <tt class="docutils literal"><span class="pre">H</span></tt> is generally
given as:</p>
<div class="highlight-python"><div class="highlight"><pre>1 - e ^(-n^2 / 2 H)
</pre></div>
</div>
<p>Or if you use <tt class="docutils literal"><span class="pre">H</span> <span class="pre">=</span> <span class="pre">2^h</span></tt>, where <tt class="docutils literal"><span class="pre">h</span></tt> is the number of bits:</p>
<div class="highlight-python"><div class="highlight"><pre><span class="mi">1</span> <span class="o">-</span> <span class="n">e</span> <span class="o">^</span><span class="p">(</span><span class="o">-</span><span class="n">n</span><span class="o">^</span><span class="mi">2</span> <span class="o">/</span> <span class="mi">2</span><span class="o">^</span><span class="p">(</span><span class="n">h</span><span class="o">+</span><span class="mi">1</span><span class="p">))</span>
</pre></div>
</div>
<p>For 1M keys and 4-bytes (32-bit), the chance of collision is for all intents
and purposes 100%.  Rewriting the equation to give the number of bits (<tt class="docutils literal"><span class="pre">h</span></tt>)
needed versus the number of entries (<tt class="docutils literal"><span class="pre">n</span></tt>) and the desired collision rate
(<tt class="docutils literal"><span class="pre">epsilon</span></tt>):</p>
<div class="highlight-python"><div class="highlight"><pre><span class="n">h</span> <span class="o">=</span> <span class="n">log_2</span><span class="p">(</span><span class="o">-</span><span class="n">n</span><span class="o">^</span><span class="mi">2</span> <span class="o">/</span> <span class="n">ln</span><span class="p">(</span><span class="mi">1</span><span class="o">-</span><span class="n">epsilon</span><span class="p">))</span> <span class="o">-</span> <span class="mi">1</span>
</pre></div>
</div>
<p>The denominator <tt class="docutils literal"><span class="pre">ln(1-epsilon)</span></tt> == <tt class="docutils literal"><span class="pre">-epsilon</span></tt> for small values (even &#64;0.1
== -0.105, and we are assuming we want a much lower chance of collision than
10%). So we have:</p>
<div class="highlight-python"><div class="highlight"><pre>h = log_2(n^2/epsilon) - 1 = 2 log_2(n) - log_2(epsilon) - 1
</pre></div>
</div>
<p>Given that <tt class="docutils literal"><span class="pre">epsilon</span></tt> will often be very small and <tt class="docutils literal"><span class="pre">n</span></tt> very large, it can
be more convenient to transform it into <tt class="docutils literal"><span class="pre">epsilon</span> <span class="pre">=</span> <span class="pre">10^-E</span></tt> and <tt class="docutils literal"><span class="pre">n</span> <span class="pre">=</span> <span class="pre">10^N</span></tt>,
which gives us:</p>
<div class="highlight-python"><div class="highlight"><pre>h = 2 * log_2(10^N) - 2 log_2(10^-E) - 1
h = log_2(10) (2N + E) - 1
h ~ 3.3 (2N + E) - 1
</pre></div>
</div>
<p>Or if we use number of bytes <tt class="docutils literal"><span class="pre">h</span> <span class="pre">=</span> <span class="pre">8H</span></tt>:</p>
<div class="highlight-python"><div class="highlight"><pre>H ~ 0.4 (2N + E)
</pre></div>
</div>
<p>This admits a nice interpretation. For every order of
magnitude we want to increase the number of keys (at the same chance of
collision), we need ~1 byte (0.8); for every two orders of magnitude we want
to reduce the chance of collision, we need the same extra ~1 byte. So with 8
bytes, you have 20 orders of magnitude to work with (2N + E = 20): 10^10 keys with a
near-guaranteed collision at one extreme, or 10 keys with a 10^-18 chance of
collision at the other.</p>
<p>Putting this in a different form, we could make <tt class="docutils literal"><span class="pre">epsilon</span> <span class="pre">==</span> <span class="pre">1/n</span></tt>. This gives
us an interesting simplified form:</p>
<div class="highlight-python"><div class="highlight"><pre>h = log_2(n^3) - 1 = 3 log_2(n) - 1
</pre></div>
</div>
<p>writing <tt class="docutils literal"><span class="pre">n</span></tt> as <tt class="docutils literal"><span class="pre">10^N</span></tt>, and <tt class="docutils literal"><span class="pre">h</span> <span class="pre">=</span> <span class="pre">8H</span></tt>:</p>
<div class="highlight-python"><div class="highlight"><pre>h = 3 N log_2(10) - 1 =~ 10 N - 1
H ~ 1.25 N
</pre></div>
</div>
<p>So to have a one in a million chance of collision using 1 million keys, you
need ~59 bits, or slightly more than 7 bytes. For 10 million keys and a one in
10 million chance of any of them colliding, you can use 9 (8.6) bytes. With 10
bytes, we have a one in a 100M chance of getting a collision in 100M keys
(substituting back, the original equation says the chance of collision is 4e-9
for 100M keys when using 10 bytes.)</p>
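<p>Those worked examples are easy to verify with a few lines of Python (the exact
birthday bound plus the bit-count formula derived above):</p>
<div class="highlight-python"><div class="highlight"><pre>from math import expm1, log2

def collision_chance(n, bits):
    """Chance of any collision among n keys in a 2^bits space."""
    return -expm1(-n * n / 2.0 ** (bits + 1))

def bits_needed(n, epsilon):
    """h = log_2(n^2 / epsilon) - 1, valid for small epsilon."""
    return log2(n * n / epsilon) - 1

print(bits_needed(10 ** 6, 10 ** -6))       # ~58.8 bits: the '~59 bits' above
print(bits_needed(10 ** 7, 10 ** -7) / 8)   # ~8.6 bytes
print(collision_chance(10 ** 8, 80))        # ~4e-9 for 100M keys at 10 bytes
</pre></div>
</div>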
<p>Given that the only cost for a collision is reading a second page and ensuring
the sha hash actually matches, we could actually use a fairly &#8220;high&#8221; collision
rate. A chance of 1 in 1000 that you will collide in an index with 1M keys is
certainly acceptable.  (Note that this isn&#8217;t a 1 in 1000 chance that each of
those keys will be a collision, but a 1 in 1000 chance that you will have a
<em>single</em> collision anywhere in the index.)  Using a collision chance of 10^-3,
and number of keys 10^6, means we need (12+3)*0.4 = 6 bytes. For 10M keys, you
need (14+3)*0.4 = 6.8, call it 7. We get that extra byte from the
<tt class="docutils literal"><span class="pre">mini-index</span></tt>. In an index with a lot of keys, you want a bigger
fan-out up front anyway, which gives you more bytes consumed and extends your
effective key width.</p>
<p>Taking one more look at <tt class="docutils literal"><span class="pre">H</span> <span class="pre">~</span> <span class="pre">0.4</span> <span class="pre">(2N</span> <span class="pre">+</span> <span class="pre">E)</span></tt>, you can rearrange it and
see that for every order of magnitude more keys you insert, your chance of
collision goes up by 2 orders of magnitude. Even so, for 100M keys, 8 bytes
gives you a 1 in 10,000 chance of collision, and that is achieved with a 16-bit
fan-out (64k-way); for 100M keys we would likely want at least a 20-bit fan-out
anyway.</p>
<p>You can also see this from the original equation with a bit of rearranging:</p>
<div class="highlight-python"><div class="highlight"><pre>epsilon = 1 - e^(-n^2 / 2^(h+1))
epsilon = 1 - e^(-(2^N)^2 / (2^(h+1))) = 1 - e^(-(2^(2N))(2^-(h+1)))
        = 1 - e^(-(2^(2N - h - 1)))
</pre></div>
</div>
<p>(Here <tt class="docutils literal"><span class="pre">n</span> <span class="pre">=</span> <span class="pre">2^N</span></tt>.) You want <tt class="docutils literal"><span class="pre">2N</span> <span class="pre">-</span> <span class="pre">h</span> <span class="pre">-</span> <span class="pre">1</span></tt> to be very negative, so that
<tt class="docutils literal"><span class="pre">2^(2N-h-1)</span></tt> is very close to zero and <tt class="docutils literal"><span class="pre">1-e^(-0)</span> <span class="pre">=</span> <span class="pre">0</span></tt>. You can also see
that if you double the number of source texts, you need four times the hash
space (2 more bits) to keep the same chance of collision.</p>
</div>
<div class="section" id="scaling-sizes">
<h2>Scaling Sizes<a class="headerlink" href="#scaling-sizes" title="Permalink to this headline">¶</a></h2>
<div class="section" id="scaling-up">
<h3>Scaling up<a class="headerlink" href="#scaling-up" title="Permalink to this headline">¶</a></h3>
<p>We have said we want to be able to scale to a tree with 1M files and 1M
commits. With a 255-way fan out for chk pages, you need 2 internal nodes,
and a leaf node with 16 items. (You maintain 2 internal nodes up until 16.5M
nodes, when you get another internal node, and your leaf nodes shrink down to
1 again.) If we assume every commit averages 10 changes (large, but possible,
especially with large merges), then you get 1 root + 10*(1 internal + 1 leaf
node) per commit or 21 nodes per commit. At 1M revisions, that is 21M chk
nodes. So to support the 1Mx1M project, we really need to consider having up
to 100M chk nodes.</p>
<p>Even if you went up to 16M tree nodes, that only bumps us up to 31M chk
nodes. Though it also scales by number of changes, so if you had a huge churn,
and had 100 changes per commit and a 16M node tree, you would have 301M chk
nodes. Note that 8 bytes (64-bits) in the prefix still only gives us a 0.27%
chance of collision (1 in 370). Or if you had 370 projects of that size, with
all different content, <em>one</em> of them would have a collision in the index.</p>
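<p>The arithmetic behind those node-count estimates, as a small helper (the
fan-out depths and per-commit counts are taken from the two paragraphs above;
this is an estimate, not bzrlib code):</p>
<div class="highlight-python"><div class="highlight"><pre>def chk_nodes(revisions, changes_per_commit, depth=3):
    """Estimate id_to_entry chk pages written: one shared root per commit,
    plus (depth - 1) freshly written nodes per change."""
    per_commit = 1 + changes_per_commit * (depth - 1)
    return revisions * per_commit

print(chk_nodes(10 ** 6, 10))             # 21M chk nodes for the 1Mx1M case
print(chk_nodes(10 ** 6, 10, depth=4))    # 31M once another internal level is needed
print(chk_nodes(10 ** 6, 100, depth=4))   # 301M with 100 changes per commit
</pre></div>
</div>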
<p>We also should consider that you have the <tt class="docutils literal"><span class="pre">(parent_id,basename)</span> <span class="pre">=&gt;</span> <span class="pre">file_id</span></tt>
map that takes up its own set of chk pages, but testing seems to indicate that
it is only about 1/10th the size of the <tt class="docutils literal"><span class="pre">id_to_entry</span></tt> map. (Rename, add and delete
are much less common than content changes.)</p>
<p>As a point of reference, one of the largest projects today, OOo, has only 170k
revisions, and something less than 100k files (and probably 4-5 changes per
commit, but their history has very few merges, being a conversion from CVS).
At 100k files, they are probably just starting to hit 2-internal nodes, so
they would end up with 10 pages per commit (as a fair-but-high estimate), and
at 170k revs, that would be 1.7M chk nodes.</p>
</div>
<div class="section" id="scaling-down">
<h3>Scaling down<a class="headerlink" href="#scaling-down" title="Permalink to this headline">¶</a></h3>
<p>While it is nice to scale to a 16M file tree with 1M commits (100M total
changes), it is also important to scale efficiently to more <em>real world</em>
scenarios. Most projects will fall into the 255-64k file range, which is where
you have one internal node and 255 leaf nodes (1-2 chk nodes per change), and
a modest number of changes per commit (10 is generally a high figure). At 50k
revisions, that would give you roughly 50k*10 = 500k chk nodes. (Note that all
of python has 303k chk nodes, all of launchpad has 350k, and mysql-5.1 in gc255
rather than gc255big had 650k chk nodes, [depth=3].)</p>
<p>So for these trees, scaling to 1M nodes is more than sufficient, and allows us
to use a 6-byte prefix per record. At a minimum, group records could use a
4-byte start and 3-byte length, but honestly, they are a tiny fraction of the
overall index size, and it isn&#8217;t really worth the implementation cost of being
flexible here. We can keep a field in the header for the group record layout
(8, 4) and for now just assert that this size is fixed.</p>
</div>
</div>
<div class="section" id="other-discussion">
<h2>Other discussion<a class="headerlink" href="#other-discussion" title="Permalink to this headline">¶</a></h2>
<div class="section" id="group-encoding">
<h3>group encoding<a class="headerlink" href="#group-encoding" title="Permalink to this headline">¶</a></h3>
<p>In the above scheme we store the group locations as an 8-byte start, and
4-byte length. We could theoretically just store a 4-byte length, and then you
have to read all of the groups and add them up to determine the actual start
position. The trade off is a direct jump-to-location versus storing 3x the
data. Given that even with 64k groups you would need only 0.75MiB to store them,
versus the 120MB for the actual entries, this seems to be no real overhead.
Especially when you consider that 10M chk nodes should fit in only 250 groups,
so the total data is actually only 3KiB. Then again, if it were only 1KiB it is
obvious that you would read the whole thing in one pass. But again, see the
pathological &#8220;conversion creating 1 group per chk page&#8221; issue.</p>
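<p>To make the trade-off concrete, here is a sketch of the &#8216;lengths only&#8217;
alternative (names are illustrative): with explicit (start, length) records you
can jump straight to a group, while storing only 4-byte lengths forces a
cumulative sum before the first seek.</p>
<div class="highlight-python"><div class="highlight"><pre># With explicit records:  start, length = group_records[group_no]
# With lengths only, the starts have to be reconstructed first:

def starts_from_lengths(lengths):
    """Recover each group's start offset from the list of group lengths."""
    starts, offset = [], 0
    for length in lengths:
        starts.append(offset)
        offset += length
    return starts
</pre></div>
</div>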
<p>Also, we might want to support more than 64k groups in a given index when we
get to the point of storing file content in a CHK index. A lot of the analysis
about the number of groups is based on the 100 byte compression of CHK nodes,
which would not be true with file-content. While we should compress well, I don&#8217;t
expect us to compress <em>that</em> well. Launchpad shows that the average size of a
content record is about 500-600 bytes (after you filter out the ~140k that are
NULL content records). At that size, you expect to get approx 7k records per
group, down from 40k. Going further, though, you also want to split groups
earlier, since you end up with better compression. So with 100,000 unique file
texts, you end up with ~100 groups. With 1M revisions &#64; 10 changes each, you
have 10M file texts, and would end up at 10,485 groups. So it seems that
64k groups is still more than enough head room. You would need to fit only 100
entries per group (and have 10M file texts) before you start getting into
trouble. Something to keep an eye on, but unlikely to be something
that is strictly a problem.</p>
<p>Still reasonable to have a record in the header indicating that index entries
use a 2-byte group entry pointer, and allow it to scale to 3 (we may also find
a win scaling it down to 1 in the common case of &lt;250 groups). Note that if
you have the full 4MB groups, it takes 256 GB of compressed content to fill
64k groups. And our groups are currently scaled so that we require at least
1-2MB before they can be considered &#8216;full&#8217;.</p>
</div>
<div class="section" id="variable-length-index-entries">
<h3>variable length index entries<a class="headerlink" href="#variable-length-index-entries" title="Permalink to this headline">¶</a></h3>
<p>The above had us store 8-bytes of sha hash, 2 bytes of group number, and
2 bytes for record-in-group. However, since we have the variable-pointer
mini-index, we could consider having those values be &#8216;variable length&#8217;. So
when you read the bytes between the previous-and-next record, you have a
parser that can handle variable width. The main problem is that to encode
start/stop of record takes some bytes, and at 12-bytes for a record, you don&#8217;t
have a lot of space to waste on an &#8220;end-of-entry&#8221; indicator. The easiest would
be to store things in base-128 (high bit indicates the next byte also should
be included).</p>
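<p>For reference, a minimal base-128 encoder/decoder of the kind described
(groupcompress has its own code for this, so treat it purely as an illustration
of the scheme):</p>
<div class="highlight-python"><div class="highlight"><pre>def encode_base128(value):
    """Encode a non-negative int, 7 bits per byte, high bit set = more follows."""
    out = bytearray()
    while True:
        value, low = divmod(value, 128)
        if value:
            out.append(low + 128)      # set the continuation bit
        else:
            out.append(low)
            return bytes(out)

def decode_base128(data, pos=0):
    """Return (value, new_pos) for a base-128 int starting at pos in data."""
    value, factor = 0, 1
    while True:
        byte = data[pos]
        pos += 1
        value += (byte % 128) * factor
        factor *= 128
        if byte // 128 == 0:           # continuation bit clear: last byte
            return value, pos

assert len(encode_base128(127)) == 1                    # 127B fits in one byte
assert len(encode_base128(2 * 1024 * 1024 - 1)) == 3    # just under 2MiB: 3 bytes
assert len(encode_base128(2 * 1024 * 1024)) == 4        # 4 bytes from 2MiB upward
</pre></div>
</div>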
</div>
<div class="section" id="storing-uncompressed-offset-length">
<h3>storing uncompressed offset + length<a class="headerlink" href="#storing-uncompressed-offset-length" title="Permalink to this headline">¶</a></h3>
<p>To get the smallest index possible, we would store only a 2-byte &#8216;record indicator&#8217;
inside the index, and then assume that it can be decoded once we&#8217;ve read the
actual group. This is certainly possible, but it represents yet another layer
of indirection before you can actually get content. If we went with
variable-length index entries, we could probably get most of the benefit with
a variable-width start-of-entry value. The length-of-content is already being
stored as a base128 integer starting at the second byte of the uncompressed
data (the first being the record type, fulltext/delta). It complicates some of
our other processing, since we would then only know how much to decompress to
get to the start of the record.</p>
<p>Another intriguing possibility would be to store the <em>end</em> of the record in
the index, and then in the data stream store the length and type information
at the <em>end</em> of the record, rather than at the beginning (or possibly at both
ends). Storing it at the end is a bit unintuitive when you think about reading
in the data as a stream and figuring out the information (you have to read to the
end, then seek back). But a given GC block does store the
length-of-uncompressed-content, which means we can trivially decompress, jump
to the end, and then walk-backwards for everything else.</p>
<p>Given that every byte in an index entry costs 10MiB in a 10M index, it is
worth considering. At 4MiB for a block, base 128 takes 4 bytes to encode the
last 50% of records (those beyond 2MiB), 3 bytes for everything from 16KiB =&gt;
2MiB.  So the expected size is, for all intents and purposes, 3.5 bytes.  (Just
an unfortunate effect of where the encoding boundaries fall relative to the
block size.) If we capped the data at 2MB, the expected size drops to just under 3
bytes. Note that a flat 3 bytes could decode up to 16MiB, which would be much
better for our purpose, but wouldn&#8217;t let us write groups that had a record
after 16MiB, which doesn&#8217;t work for the ISO case. Though it works <em>absolutely</em>
fine for the CHK inventory cases (what we have today).</p>
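<p>A quick check of that expected size (pure arithmetic, assuming record offsets
are spread evenly over the block):</p>
<div class="highlight-python"><div class="highlight"><pre>MiB = 1024 * 1024
BLOCK = 4 * MiB                          # one full 4MiB group

# bytes per encoded offset, weighted by how much of the block needs that many
expected = (1 * 128                      # 1 byte covers offsets 0..127
            + 2 * (16 * 1024 - 128)      # 2 bytes up to 16KiB
            + 3 * (2 * MiB - 16 * 1024)  # 3 bytes up to 2MiB
            + 4 * (BLOCK - 2 * MiB)      # 4 bytes for the top half
            ) / float(BLOCK)
print(expected)                          # ~3.5 bytes, as claimed above
</pre></div>
</div>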
</div>
<div class="section" id="null-content">
<h3>null content<a class="headerlink" href="#null-content" title="Permalink to this headline">¶</a></h3>
<p>At the moment, we have a lot of records in our per-file graph that refer to
empty content. We get one for every symlink and directory, every time that
they change. This isn&#8217;t specifically relevant for CHK pages, but for
efficiency we could certainly consider setting &#8220;group = 0 entry = 0&#8221; to mean
that this is actually a no-content entry. It means the group block itself
doesn&#8217;t have to hold a record for it, etc. Alternatively we could use
&#8220;group=FFFF entry = FFFF&#8221; to mean the same thing.</p>
</div>
<div class="section" id="vf-keys">
<h3><tt class="docutils literal"><span class="pre">VF.keys()</span></tt><a class="headerlink" href="#vf-keys" title="Permalink to this headline">¶</a></h3>
<p>At the moment, some apis expect that you can list the references by reading
all of the index. We would like to get away from this anyway, as it doesn&#8217;t
scale particularly well. However, with this format, we no longer store the
exact value for the content. The content is self describing, and we <em>would</em> be
storing enough to uniquely decide which node to read. Though that is actually
contained in just 4-bytes (2-byte group, 2-byte group entry).</p>
<p>We use <tt class="docutils literal"><span class="pre">VF.keys()</span></tt> during &#8216;pack&#8217; and &#8216;autopack&#8217; to avoid asking for content
we don&#8217;t have, and to put a counter on the progress bar. For the latter, we
can just use <tt class="docutils literal"><span class="pre">index.key_count()</span></tt>; for the former, we could just properly
handle <tt class="docutils literal"><span class="pre">AbsentContentFactory</span></tt>.</p>
</div>
<div class="section" id="more-than-64k-groups">
<h3>More than 64k groups<a class="headerlink" href="#more-than-64k-groups" title="Permalink to this headline">¶</a></h3>
<p>Doing a streaming conversion all at once is still something to consider, as it
would default to creating all chk pages in separate groups (300-400k of them, easily).
However, just making the number of group block entries variable, and allowing
the pointer in each entry to be variable should suffice. At 3 bytes for the
group pointer, we can refer to 16.7M groups. It does add complexity, but it is
likely necessary to allow for arbitrary cases.</p>
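<p>A sketch of how a header field could choose the group-pointer width (an
illustrative policy, not an existing format field):</p>
<div class="highlight-python"><div class="highlight"><pre>def group_pointer_width(group_count):
    """Smallest whole-byte pointer that can address every group."""
    width = 1
    while 256 ** width &lt; group_count:
        width += 1
    return width

assert group_pointer_width(250) == 1           # typical small index
assert group_pointer_width(64 * 1024) == 2     # the common case above
assert group_pointer_width(400 * 1000) == 3    # streaming conversion, 16.7M max
</pre></div>
</div>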
</div>
</div>
</div>


          </div>
        </div>
      </div>
      <div class="sphinxsidebar">
        <div class="sphinxsidebarwrapper">
  <h3><a href="index.html">Table Of Contents</a></h3>
  <ul>
<li><a class="reference internal" href="#">CHK Optimized index</a><ul>
<li><a class="reference internal" href="#required-information">Required information</a></li>
<li><a class="reference internal" href="#partial-hash">Partial Hash</a></li>
<li><a class="reference internal" href="#scaling-sizes">Scaling Sizes</a><ul>
<li><a class="reference internal" href="#scaling-up">Scaling up</a></li>
<li><a class="reference internal" href="#scaling-down">Scaling down</a></li>
</ul>
</li>
<li><a class="reference internal" href="#other-discussion">Other discussion</a><ul>
<li><a class="reference internal" href="#group-encoding">group encoding</a></li>
<li><a class="reference internal" href="#variable-length-index-entries">variable length index entries</a></li>
<li><a class="reference internal" href="#storing-uncompressed-offset-length">storing uncompressed offset + length</a></li>
<li><a class="reference internal" href="#null-content">null content</a></li>
<li><a class="reference internal" href="#vf-keys"><tt class="docutils literal"><span class="pre">VF.keys()</span></tt></a></li>
<li><a class="reference internal" href="#more-than-64k-groups">More than 64k groups</a></li>
</ul>
</li>
</ul>
</li>
</ul>

  <h4>Previous topic</h4>
  <p class="topless"><a href="tortoise-strategy.html"
                        title="previous chapter">Bazaar Windows Shell Extension Options</a></p>
  <h4>Next topic</h4>
  <p class="topless"><a href="nested-trees.html"
                        title="next chapter">1&nbsp;&nbsp;&nbsp;Nested Trees</a></p>
  <h3>This Page</h3>
  <ul class="this-page-menu">
    <li><a href="_sources/improved_chk_index.txt"
           rel="nofollow">Show Source</a></li>
  </ul>
<div id="searchbox" style="display: none">
  <h3>Quick search</h3>
    <form class="search" action="search.html" method="get">
      <input type="text" name="q" />
      <input type="submit" value="Go" />
      <input type="hidden" name="check_keywords" value="yes" />
      <input type="hidden" name="area" value="default" />
    </form>
    <p class="searchtip" style="font-size: 90%">
    Enter search terms or a module, class or function name.
    </p>
</div>
<script type="text/javascript">$('#searchbox').show(0);</script>
        </div>
      </div>
      <div class="clearer"></div>
    </div>
    <div class="related">
      <h3>Navigation</h3>
      <ul>
        <li class="right" style="margin-right: 10px">
          <a href="nested-trees.html" title="1   Nested Trees"
             >next</a></li>
        <li class="right" >
          <a href="tortoise-strategy.html" title="Bazaar Windows Shell Extension Options"
             >previous</a> |</li>
<li><a href="http://bazaar.canonical.com/">
    <img src="_static/bzr icon 16.png" /> Home</a>&nbsp;|&nbsp;</li>
<li><a href="http://doc.bazaar.canonical.com/en/">Documentation</a>&nbsp;|&nbsp;</li>

        <li><a href="index.html">Developer Document Catalog (2.6.0)</a> &raquo;</li>

          <li><a href="plans.html" >Plans</a> &raquo;</li> 
      </ul>
    </div>
    <div class="footer">
        &copy; Copyright 2009-2011 Canonical Ltd.
      Created using <a href="http://sphinx-doc.org/">Sphinx</a> 1.2.3.
    </div>
  </body>
</html>