<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Incremental push/pull &#8212; Bazaar 2.7.0 documentation</title>
  </head>
  <body>
    <div class="document">
      <div class="body" role="main">
  <div class="section" id="incremental-push-pull">
<h1>Incremental push/pull<a class="headerlink" href="#incremental-push-pull" title="Permalink to this headline">¶</a></h1>
<p>This use case covers pulling in or pushing out some number of revisions which
is typically a small fraction of the number already present in the target
repository. Pushing and pulling are defined as branch level operations for ease
of interaction with VCS systems that have no repository abstraction (such as
bzr-svn or GNU Arch) but within bzrlib’s core they are currently the
responsibility of the Repository object.</p>
<div class="section" id="functional-requirements">
<h2>Functional Requirements<a class="headerlink" href="#functional-requirements" title="Permalink to this headline">¶</a></h2>
<dl class="docutils">
<dt>A push or pull operation must:</dt>
<dd><ul class="first last simple">
<li>Copy all the data to reconstruct the selected revisions in the target
branch. This is the goal of push and pull after all.</li>
<li>Reject corrupt data. As bzr has no innate mechanism for discarding corrupted
data, corrupted data should not be incorporated accidentally.</li>
</ul>
</dd>
</dl>
</div>
<div class="section" id="factors-which-should-add-work-for-push-pull">
<h2>Factors which should add work for push/pull<a class="headerlink" href="#factors-which-should-add-work-for-push-pull" title="Permalink to this headline">¶</a></h2>
<blockquote>
<div><ul class="simple">
<li>Baseline overhead: The time to connect to both branches.</li>
<li>Actual new data in the revisions being pulled (drives the amount of data to
move around, includes the commit messages etc)</li>
<li>Number of revisions in the two repositories (scaling affects the
determination of what revisions to move around).</li>
</ul>
</div></blockquote>
</div>
<div class="section" id="push-pull-overview">
<h2>Push/pull overview<a class="headerlink" href="#push-pull-overview" title="Permalink to this headline">¶</a></h2>
<ol class="arabic simple">
<li>New data is identified in the source repository.</li>
<li>That data is read from the source repository.</li>
<li>The same data is verified and written to the target repository in such a
manner that it’s not visible to readers until it’s ready for use. (A sketch of
how these steps compose follows this list.)</li>
</ol>
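<p>A minimal sketch of how the three steps compose; the objects and method
names here are hypothetical, not bzrlib’s actual API:</p>
<pre class="literal-block">
# Illustrative composition of the three steps above; all names invented.
def pull(source, target, wanted_tips):
    missing = source.missing_revisions(target, wanted_tips)  # 1. identify
    stream = source.read_revision_data(missing)              # 2. read
    # 3. verify and write: the target keeps the new data invisible to
    #    readers until all of it has been validated and stored.
    target.verify_and_insert(stream)
</pre>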
<div class="section" id="new-data-identification">
<h3>New data identification<a class="headerlink" href="#new-data-identification" title="Permalink to this headline">¶</a></h3>
<p>We have a single top-level data object: revisions. Everything else is
subordinate to revisions, so determining the revisions to propagate should be
all that’s needed. This depends on revisions with partial data - such as those
with no signature - being flagged in some efficient manner.</p>
<p>We could do this in two ways: determine the revisions to sync and the signatures to sync in two passes, or change the ‘value’ of a revision implicitly when the signature differs, e.g. by using Merkle hash trees with the signature data as a separate component, so that new signatures are naturally identified for syncing.</p>
<p>We want to exchange only data proportional to the number of new revisions and
signatures in the system. One way to achieve this for revisions is to
walk the graph out from the desired tips until the surface-area intersection is
found. For signatures a set difference seems to be needed, as there is no DAG of signatures: the presence of one has no implications for the presence of another, so a full pass over the set of signatures would be required to confirm that no new signatures are needed (let alone replaced signatures).</p>
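<p>As an illustration, the revision walk can be a breadth-first traversal back
from the desired tips that stops at revisions the target already has, so it
touches only the new region of the graph plus its boundary. A minimal sketch,
assuming a hypothetical <code class="docutils literal">target_has</code>
membership test; the <code class="docutils literal">get_parent_map</code> shape
mirrors bzrlib’s graph API, but treat all of this as illustrative:</p>
<pre class="literal-block">
from collections import deque

def revisions_to_send(source_graph, target_has, tips):
    to_send = set()
    pending = deque(tips)
    while pending:
        rev_id = pending.popleft()
        if rev_id in to_send or target_has(rev_id):
            continue               # already seen, or already in the target
        parent_map = source_graph.get_parent_map([rev_id])
        if rev_id not in parent_map:
            continue               # a ghost: absent from the source too
        to_send.add(rev_id)
        pending.extend(parent_map[rev_id])
    return to_send
</pre>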
<p>IFF we can determine ‘new revisions’ and ‘new signatures’ without full graph access, then we can scale acceptably for push and pull.</p>
<p>Ghosts are revisions which are not present in a particular repository. Filling ghosts refers to removing ghosts in the target repository when the ghost is present in the source repository. Filling ghosts can be either an explicit or implicit action. The common case is no ghosts.</p>
<div class="section" id="set-synchronisation-approaches">
<h4>Set synchronisation approaches<a class="headerlink" href="#set-synchronisation-approaches" title="Permalink to this headline">¶</a></h4>
<p>A set synchronisation approach is one which synchronises two sets without
regard for innate structure. This can be very efficient, but it requires a new
node to be processed with every commit. Caching the results of the various
set-based syncs I’ve seen is possible, but because the data structures look
different depending on the tip revision being synced up to, the cache would
need to be very complex. I recommend not using such an approach for the common
case pull because of this failure to scale. We can use such an approach for
synchronisation of new signatures and ghosts, which should be an explicit
option in both cases.</p>
</div>
<div class="section" id="dag-synchronisation-approaches">
<h4>DAG synchronisation approaches<a class="headerlink" href="#dag-synchronisation-approaches" title="Permalink to this headline">¶</a></h4>
<p>A DAG-based approach to synchronisation is one that uses the DAG structure to
determine the difference in present nodes. It can as a result operate from the
tip of the DAG backwards. A DAG-based approach should allow incremental access
to data and not require a full-graph scan for incremental operations.</p>
</div>
<div class="section" id="file-level-scaling">
<h4>File level scaling<a class="headerlink" href="#file-level-scaling" title="Permalink to this headline">¶</a></h4>
<p>We should read roughly as much of the revision level graph as is needed from
each repository to determine the node difference.  If requested we should
perform a detailed scan to pick up ghost revisions and revisions which have had
signatures added. This should not be the default as it requires full history
access in both cases.</p>
<p>Expected file IO and access pattern:</p>
<blockquote>
<div><ul>
<li><p class="first">Common case: repo with many branches of one project, to the same.</p>
<ol class="arabic simple">
<li>Source and Target branch tips read.</li>
<li>Find the tip of each branch in their repo (will require reading some of
the revision graph but is typically near the end of the graph).</li>
<li>Read and parse increasing amounts of the revision graph until one is
found to be a subset of the other, or a complete list of revisions to be
transmitted is created.</li>
</ol>
</li>
<li><p class="first">Uncommon cases:</p>
<ol class="arabic simple">
<li>Repositories with many projects or branches which are very old may
require reading a lot of unrelated graph data.</li>
</ol>
<ol class="arabic simple">
<li>Initial push/pull scenarios should not require reading an entire graph.</li>
</ol>
</li>
</ul>
</div></blockquote>
</div>
<div class="section" id="api-scaling">
<h4>API scaling<a class="headerlink" href="#api-scaling" title="Permalink to this headline">¶</a></h4>
<blockquote>
<div><ol class="arabic simple">
<li>Get branch tips.</li>
<li>Determine the one-sided graph difference. To avoid obtaining a full graph
over the wire this needs to be done without reference to the full graph, and
with some logarithmic-scaling algorithm. There are several already available
for this; a sketch of one style follows.</li>
</ol>
</div></blockquote>
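<p>One family of such algorithms probes the remote side with exponentially
spaced samples of local history and then bisects the bracketed range, so the
number of round trips grows roughly logarithmically in the distance to the
common point. A hedged sketch, assuming a linear left-hand history and a
hypothetical one-revision membership probe
<code class="docutils literal">remote_has</code>:</p>
<pre class="literal-block">
# Illustrative only: `mainline` is a tip-to-root list of revision ids.
def find_common_point(mainline, remote_has):
    step, index, last_unknown = 1, 0, 0
    while index &lt; len(mainline):
        if remote_has(mainline[index]):
            # Common point lies in (last_unknown, index]; bisect it.
            lo, hi = last_unknown, index
            while lo + 1 &lt; hi:
                mid = (lo + hi) // 2
                if remote_has(mainline[mid]):
                    hi = mid
                else:
                    lo = mid
            return mainline[hi]
        last_unknown = index
        index += step
        step *= 2                              # back off exponentially
        if index &gt;= len(mainline) and last_unknown != len(mainline) - 1:
            index = len(mainline) - 1          # always probe the root
    return None                                # nothing in common
</pre>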
<p>With ghost and new-signature detection:</p>
<blockquote>
<div><ul class="simple">
<li>The file IO access pattern will read the entire graph on the ‘target’ side;
if no ghosts are present then stop, otherwise seek the new revisions on the
source side with the regular algorithm and also explicitly search for the
ghost points from the target. In addition, a set-difference search is needed on
signatures.</li>
<li>Semantic level can probably be tuned, but as it’s also complex I suggest
deferring analysis for optimal behaviour of this use case.</li>
</ul>
</div></blockquote>
</div>
</div>
<div class="section" id="data-reading">
<h3>Data reading<a class="headerlink" href="#data-reading" title="Permalink to this headline">¶</a></h3>
<p>When transferring information about a revision, the graph of data for the
revision is walked: revision -&gt; inventory, revision -&gt; matching signature,
inventory -&gt; fileid:revisionid pairs.</p>
<div class="section" id="id1">
<h4>File level scaling<a class="headerlink" href="#id1" title="Permalink to this headline">¶</a></h4>
<p>As we’re reading already-committed data, reading should be race-free as long
as nothing is mutating the data on disk. We will:</p>
<blockquote>
<div><ul class="simple">
<li>read each revision object</li>
<li>read the matching inventory delta</li>
<li>attempt to read a signature object</li>
<li>parse the inventory delta</li>
<li>read the fileid:revisionid compressed chunk for each line in the inventory
delta</li>
</ul>
</div></blockquote>
<p>There’s no point validating the data as it is read, since transmission through to the client writing the data might invalidate it; we need to validate before we write.</p>
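<p>A minimal sketch of the read loop just described; every accessor on
<code class="docutils literal">repo</code> is invented to match the steps
above, and per the note on validation the reader deliberately checks nothing:</p>
<pre class="literal-block">
class NoSuchSignature(Exception):
    """Stand-in for 'no signature stored for this revision'."""

# All repo methods here are hypothetical, shaped to match the list above.
def read_revision_data(repo, revision_ids):
    for rev_id in revision_ids:
        revision = repo.get_revision(rev_id)              # revision object
        delta = repo.get_inventory_delta(rev_id)          # inventory delta
        try:
            signature = repo.get_signature_text(rev_id)   # may be absent
        except NoSuchSignature:
            signature = None
        # Parse the delta to learn which fileid:revisionid chunks to read.
        entries = repo.parse_inventory_delta(delta)
        texts = [repo.get_compressed_chunk(file_id, text_rev)
                 for file_id, text_rev in entries]
        # No validation here: the writer validates before writing.
        yield revision, signature, delta, texts
</pre>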
</div>
<div class="section" id="id2">
<h4>API scaling<a class="headerlink" href="#id2" title="Permalink to this headline">¶</a></h4>
<p>Given that we have established the revisions needed, a single API call should
suffice to obtain all data; the API should present the data in such an order
that it can be validated as it arrives, and thus not require large-scale
buffering on disk. Specifically, each item of data should be validatable (e.g.
for some file data we want the fileid:revisionid:validationhash + content).</p>
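<p>For example, each item in such a stream could carry its own validator
alongside its content, so the receiver can check it the moment it arrives. A
sketch with invented field names:</p>
<pre class="literal-block">
import hashlib
from collections import namedtuple

# Hypothetical stream item: enough to validate it on arrival.
StreamItem = namedtuple('StreamItem', ['kind', 'key', 'sha1', 'bytes'])

def is_valid(item):
    # For file texts, `sha1` is the validator of the full text and
    # `key` would be the (fileid, revisionid) pair.
    return hashlib.sha1(item.bytes).hexdigest() == item.sha1
</pre>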
</div>
</div>
<div class="section" id="data-verification-and-writing">
<h3>Data Verification and writing<a class="headerlink" href="#data-verification-and-writing" title="Permalink to this headline">¶</a></h3>
<p>New data written to a repository should be completed intact when it is made
visible. This suggests that either all the data for a revision must be made
atomically visible (e.g. by renaming a single file) or the leaf nodes of the
reference graph must become visible first.</p>
<p>Data is referred to via the following graph:</p>
<pre class="literal-block">
revision -&gt; revision
revision -&gt; signature
revision -&gt; inventory
inventory -&gt; fileid:revisionid
fileid:revisionid -&gt; fileid:revisionid
</pre>
<p>Data is verifiable via a different ordering:
signature -&gt; revision -&gt; inventory -&gt; fileid:revisionid texts.</p>
<p>We don’t GPG-verify each revision today; this analysis only speaks to hash
verification of contents.</p>
<p>To validate a revision we need to validate the data it refers to. But to
validate the contents of a revision we need the new texts in the inventory for
the revision - to check a fileid:revisionid we need to know the expected sha1
of the full text, and thus we also need to read the delta chain and construct
the text as we accept it to determine whether it’s valid. Providing separate
validators for the chosen representation would address this.</p>
<p>E.g. for an inventory entry FILEID:REVISIONID we store the validator of the
full text, :SHA1:. If we also stored the validator of the chosen disk
representation (:DELTASHA1:) we could validate the transmitted representation
without expanding the delta in the common case. If that failed we could expand
the delta chain, try against the full-text validator, and finally fail. As
different delta generators might generate different deltas, :DELTASHA1: should
not become part of the revision validator, only of the inventory disk encoding.
In a related manner, a transmission format that allowed cheap validation of
content without applying locally stored deltas would be advantageous, because
no local reads would be incurred to validate new content. For instance, always
sending a full text for any file, possibly with a delta chain when transmitting
multiple revisionids of the file, would allow this (git pack-files have this
property).</p>
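<p>A hedged sketch of that two-level check, with
<code class="docutils literal">apply_delta</code> standing in for whichever
delta expander is in use:</p>
<pre class="literal-block">
import hashlib

def validate_transmitted(delta_bytes, delta_sha1, basis_text, text_sha1,
                         apply_delta):
    # Common case: the transmitted delta matches :DELTASHA1:, so the
    # content is known good without expanding anything.
    if hashlib.sha1(delta_bytes).hexdigest() == delta_sha1:
        return True
    # A different delta generator may have produced a different but
    # still correct delta: expand and try the full-text validator.
    full_text = apply_delta(basis_text, delta_bytes)
    if hashlib.sha1(full_text).hexdigest() == text_sha1:
        return True
    return False                    # finally fail: content is corrupt
</pre>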
<div class="section" id="overview-summary">
<h4>Overview summary<a class="headerlink" href="#overview-summary" title="Permalink to this headline">¶</a></h4>
<p>A single-file local format would allow safe atomic addition of data while
allowing optimal transmission order of data. Failing this, the validation of
data should be tuned to not require reading local texts during data addition,
even in the presence of delta chains. We should have transmission validators,
separate from content validators, that allow validation of the delta-transmitted
form of objects.</p>
</div>
<div class="section" id="id3">
<h4>File level scaling<a class="headerlink" href="#id3" title="Permalink to this headline">¶</a></h4>
<ul class="simple">
<li>Every new file text requires transmission and local serialisation.</li>
<li>Every commit requires transmission and storage of a revision, signature and inventory.</li>
</ul>
<p>Thus 4000 commits to a 50000-path tree, changing 10 files on average per
commit, require (with knits) between 26 writes (2*(3+10)) and 80006 writes
(2*(4000*10 + 3)). In all cases there are 4000 * 13 distinct objects to record.</p>
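<p>The arithmetic behind those figures, spelled out (a knit object is two files
on disk, a data file and an index, hence the factor of 2):</p>
<pre class="literal-block">
commits, files_per_commit = 4000, 10
metadata_objects = 3     # revision, signature, inventory per commit

# Best case: one append per knit touched, each knit being two files.
best = 2 * (metadata_objects + files_per_commit)               # 26
# Worst case: a write per commit per file text, plus the metadata knits.
worst = 2 * (commits * files_per_commit + metadata_objects)    # 80006
# Distinct objects to record regardless of grouping.
objects = commits * (metadata_objects + files_per_commit)      # 52000
print(best, worst, objects)
</pre>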
<p>Grouping data by fileid, content and metadata gives the figures above.
Data grouping options:</p>
<ul class="simple">
<li>File per full identifier (fileid:revisionid:meta|content): 104000</li>
<li>Delta-chain per object: object id count * constant overhead per object id
(26 -&gt; 80006)</li>
<li>Collation/pack file: 1</li>
</ul>
<dl class="docutils">
<dt>Performance for these depends heavily on implementation:</dt>
<dd><ul class="first last simple">
<li>Using full ids we could name by validator or by id, giving best performance
that depends on either receiving data in validator order or in id order.</li>
<li>Using delta-chain per object we get the least seek and syscall overhead
if we receive in topological order within each object id, and object ids in
lexical order.</li>
<li>Using a collation/pack file we can stream it into place and validate as we go,
giving near ideal performance.</li>
</ul>
</dd>
</dl>
</div>
<div class="section" id="id4">
<h4>API scaling<a class="headerlink" href="#id4" title="Permalink to this headline">¶</a></h4>
<p>The API for writing new data received over the network will need to be geared
to the transmission and local storage method. What we need is for the
transmission method to reasonably closely match the desired local write
ordering. This suggests that once we decide on the best local storage means, we
should design the API.</p>
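<p>A hedged sketch of what such a write-side API might look like, consuming
items shaped like the stream items sketched earlier; the write-group pattern
mirrors bzrlib’s start/commit/abort_write_group calls, but the rest of the
names are invented:</p>
<pre class="literal-block">
import hashlib

def insert_stream(target, stream):
    target.start_write_group()
    try:
        for item in stream:
            # Validate in arrival order so nothing bad is ever stored.
            if hashlib.sha1(item.bytes).hexdigest() != item.sha1:
                raise ValueError('corrupt item: %r' % (item.key,))
            target.add_bytes(item.kind, item.key, item.bytes)
        target.commit_write_group()   # make everything visible at once
    except Exception:
        target.abort_write_group()    # readers never see partial data
        raise
</pre>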
<p>In summary, the operation must:</p>
<ul class="simple">
<li>take N commits from A to B and, if B is local, merge the changes into the tree;</li>
<li>copy enough data to recreate snapshots;</li>
<li>avoid ending up with corrupt or bad data.</li>
</ul>
</div>
</div>
</div>
<div class="section" id="notes-from-london">
<h2>Notes from London<a class="headerlink" href="#notes-from-london" title="Permalink to this headline">¶</a></h2>
<blockquote>
<div><ol class="arabic simple">
<li>setup</li>
</ol>
<blockquote>
<div><p>look at the graph of revisions for ~N commits to determine eligibility;
if preserve mainline is on, check the left-hand (LH) history only</p>
<blockquote>
<div><dl class="docutils">
<dt>identify objects to send that are not in the client repo</dt>
<dd><ul class="first last simple">
<li>revision - may be proportional to the graph</li>
<li>inventory - proportional to work</li>
<li>texts     - proportional to work</li>
<li>signatures - ???</li>
</ul>
</dd>
</dl>
</div></blockquote>
</div></blockquote>
<ol class="arabic simple" start="2">
<li>data transmission</li>
</ol>
<blockquote>
<div><ul class="simple">
<li>send data proportional to the new information</li>
<li>validate the data (a sketch of the first two checks follows below):</li>
</ul>
<blockquote>
<div><ol class="arabic simple">
<li>validate the sha1 of the full text of each transmitted text.</li>
<li>validate the sha1:name mapping in each newly referenced inventory item.</li>
<li>validate the sha1 of the XML of each inventory against the revision.
<strong>this is proportional to tree size and must be fixed</strong></li>
</ol>
</div></blockquote>
</div></blockquote>
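<p>A hedged sketch of checks 1 and 2 from the list above; the entry structure
is an invented stand-in for bzr’s real inventory objects:</p>
<pre class="literal-block">
import hashlib

def check_full_text(text_bytes, expected_sha1):
    # Check 1: the sha1 of the full text of a transmitted text.
    return hashlib.sha1(text_bytes).hexdigest() == expected_sha1

def check_inventory_entry(entry, received_text_sha1s):
    # Check 2: the sha1:name mapping of a newly referenced inventory
    # item must agree with the sha1 of the text actually received.
    key = (entry.file_id, entry.revision)
    return received_text_sha1s[key] == entry.text_sha1
</pre>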
<ol class="arabic simple" start="3">
<li>write the data to the local repo.
The API should output the file texts needed by the merge as a by-product of the transmission</li>
<li>tree application</li>
</ol>
</div></blockquote>
<p>Combine the output from the transmission step with additional ‘new work data’ for anything already in the local repository that is new in this tree.
This should write new files and stat existing files in proportion to the count of the new work and the size of the full texts.</p>
</div>
</div>


      </div>
    </div>
  </body>
</html>