<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html xmlns:fn="http://www.w3.org/2005/02/xpath-functions">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<link rel="stylesheet" href="../../../../doc/otp_doc.css" type="text/css">
<title>Erlang -- Miscellaneous Mnesia Features</title>
</head>
<body bgcolor="white" text="#000000" link="#0000ff" vlink="#ff00ff" alink="#ff0000"><div id="container">
<script id="js" type="text/javascript" language="JavaScript" src="../../../../doc/js/flipmenu/flipmenu.js"></script><script id="js2" type="text/javascript" src="../../../../doc/js/erlresolvelinks.js"></script><script language="JavaScript" type="text/javascript">
            <!--
              function getWinHeight() {
                var myHeight = 0;
                if( typeof( window.innerHeight ) == 'number' ) {
                  //Non-IE
                  myHeight = window.innerHeight;
                } else if( document.documentElement && ( document.documentElement.clientWidth ||
                                                         document.documentElement.clientHeight ) ) {
                  //IE 6+ in 'standards compliant mode'
                  myHeight = document.documentElement.clientHeight;
                } else if( document.body && ( document.body.clientWidth || document.body.clientHeight ) ) {
                  //IE 4 compatible
                  myHeight = document.body.clientHeight;
                }
                return myHeight;
              }

              function setscrollpos() {
                var objf=document.getElementById('loadscrollpos');
                 document.getElementById("leftnav").scrollTop = objf.offsetTop - getWinHeight()/2;
              }

              function addEvent(obj, evType, fn){
                if (obj.addEventListener){
                obj.addEventListener(evType, fn, true);
                return true;
              } else if (obj.attachEvent){
                var r = obj.attachEvent("on"+evType, fn);
                return r;
              } else {
                return false;
              }
             }

             addEvent(window, 'load', setscrollpos);

             //--></script><div id="leftnav"><div class="innertube">
<img alt="Erlang logo" src="../../../../doc/erlang-logo.png"><br><small><a href="users_guide.html">User's Guide</a><br><a href="index.html">Reference Manual</a><br><a href="release_notes.html">Release Notes</a><br><a href="../pdf/mnesia-4.7.1.pdf">PDF</a><br><a href="../../../../doc/index.html">Top</a></small><p><strong>Mnesia</strong><br><strong>User's Guide</strong><br><small>Version 4.7.1</small></p>
<br><a href="javascript:openAllFlips()">Expand All</a><br><a href="javascript:closeAllFlips()">Contract All</a><p><small><strong>Chapters</strong></small></p>
<ul class="flipMenu" imagepath="../../../../doc/js/flipmenu">
<li id="no" title="Introduction" expanded="false">Introduction<ul>
<li><a href="Mnesia_chap1.html">
              Top of chapter
            </a></li>
<li title="About Mnesia"><a href="Mnesia_chap1.html#id57488">About Mnesia</a></li>
<li title="The Mnesia DataBase Management System (DBMS)"><a href="Mnesia_chap1.html#id57481">The Mnesia DataBase Management System (DBMS)</a></li>
</ul>
</li>
<li id="no" title="Getting Started with Mnesia" expanded="false">Getting Started with Mnesia<ul>
<li><a href="Mnesia_chap2.html">
              Top of chapter
            </a></li>
<li title="Starting Mnesia for the first time"><a href="Mnesia_chap2.html#id61530">Starting Mnesia for the first time</a></li>
<li title="An Introductory Example"><a href="Mnesia_chap2.html#id62101">An Introductory Example</a></li>
</ul>
</li>
<li id="no" title="Building A Mnesia Database" expanded="false">Building A Mnesia Database<ul>
<li><a href="Mnesia_chap3.html">
              Top of chapter
            </a></li>
<li title="Defining a Schema"><a href="Mnesia_chap3.html#id67814">Defining a Schema</a></li>
<li title="The Data Model"><a href="Mnesia_chap3.html#id68070">The Data Model</a></li>
<li title="Starting Mnesia"><a href="Mnesia_chap3.html#id68125">Starting Mnesia</a></li>
<li title="Creating New Tables"><a href="Mnesia_chap3.html#id72288">Creating New Tables</a></li>
</ul>
</li>
<li id="no" title="Transactions and Other Access Contexts" expanded="false">Transactions and Other Access Contexts<ul>
<li><a href="Mnesia_chap4.html">
              Top of chapter
            </a></li>
<li title="Transaction Properties"><a href="Mnesia_chap4.html#id72980">Transaction Properties</a></li>
<li title="Locking"><a href="Mnesia_chap4.html#id73193">Locking</a></li>
<li title="Dirty Operations"><a href="Mnesia_chap4.html#id73653">Dirty Operations</a></li>
<li title="Record Names versus Table Names"><a href="Mnesia_chap4.html#id74026">Record Names versus Table Names</a></li>
<li title="Activity Concept and Various Access Contexts"><a href="Mnesia_chap4.html#id74114">Activity Concept and Various Access Contexts</a></li>
<li title="Nested transactions"><a href="Mnesia_chap4.html#id74404">Nested transactions</a></li>
<li title="Pattern Matching"><a href="Mnesia_chap4.html#id74476">Pattern Matching</a></li>
<li title="Iteration"><a href="Mnesia_chap4.html#id74822">Iteration</a></li>
</ul>
</li>
<li id="loadscrollpos" title="Miscellaneous Mnesia Features" expanded="true">Miscellaneous Mnesia Features<ul>
<li><a href="Mnesia_chap5.html">
              Top of chapter
            </a></li>
<li title="Indexing"><a href="Mnesia_chap5.html#id75164">Indexing</a></li>
<li title="Distribution and Fault Tolerance"><a href="Mnesia_chap5.html#id75284">Distribution and Fault Tolerance</a></li>
<li title="Table Fragmentation"><a href="Mnesia_chap5.html#id75432">Table Fragmentation</a></li>
<li title="Local Content Tables"><a href="Mnesia_chap5.html#id76375">Local Content Tables</a></li>
<li title="Disc-less Nodes"><a href="Mnesia_chap5.html#id76402">Disc-less Nodes</a></li>
<li title="More Schema Management"><a href="Mnesia_chap5.html#id76560">More Schema Management</a></li>
<li title="Mnesia Event Handling"><a href="Mnesia_chap5.html#id76675">Mnesia Event Handling</a></li>
<li title="Debugging Mnesia Applications"><a href="Mnesia_chap5.html#id77250">Debugging Mnesia Applications</a></li>
<li title="Concurrent Processes in Mnesia"><a href="Mnesia_chap5.html#id77395">Concurrent Processes in Mnesia</a></li>
<li title="Prototyping"><a href="Mnesia_chap5.html#id77432">Prototyping</a></li>
<li title="Object Based Programming with Mnesia"><a href="Mnesia_chap5.html#id77546">Object Based Programming with Mnesia</a></li>
</ul>
</li>
<li id="no" title="Mnesia System Information" expanded="false">Mnesia System Information<ul>
<li><a href="Mnesia_chap7.html">
              Top of chapter
            </a></li>
<li title="Database Configuration Data"><a href="Mnesia_chap7.html#id77779">Database Configuration Data</a></li>
<li title="Core Dumps"><a href="Mnesia_chap7.html#id77817">Core Dumps</a></li>
<li title="Dumping Tables"><a href="Mnesia_chap7.html#id77838">Dumping Tables</a></li>
<li title="Checkpoints"><a href="Mnesia_chap7.html#id77873">Checkpoints</a></li>
<li title="Files"><a href="Mnesia_chap7.html#id78113">Files</a></li>
<li title="Loading of Tables at Start-up"><a href="Mnesia_chap7.html#id78474">Loading of Tables at Start-up</a></li>
<li title="Recovery from Communication Failure"><a href="Mnesia_chap7.html#id78632">Recovery from Communication Failure</a></li>
<li title="Recovery of Transactions"><a href="Mnesia_chap7.html#id78755">Recovery of Transactions</a></li>
<li title="Backup, Fallback, and Disaster Recovery"><a href="Mnesia_chap7.html#id78876">Backup, Fallback, and Disaster Recovery</a></li>
</ul>
</li>
<li id="no" title="Combining Mnesia with SNMP" expanded="false">Combining Mnesia with SNMP<ul>
<li><a href="Mnesia_chap8.html">
              Top of chapter
            </a></li>
<li title="Combining Mnesia and SNMP "><a href="Mnesia_chap8.html#id79691">Combining Mnesia and SNMP </a></li>
</ul>
</li>
<li id="no" title="Appendix A: Mnesia Error Messages" expanded="false">Appendix A: Mnesia Error Messages<ul>
<li><a href="Mnesia_App_A.html">
              Top of chapter
            </a></li>
<li title="Errors in Mnesia"><a href="Mnesia_App_A.html#id79834">Errors in Mnesia</a></li>
</ul>
</li>
<li id="no" title="Appendix B: The Backup Call Back Interface" expanded="false">Appendix B: The Backup Call Back Interface<ul>
<li><a href="Mnesia_App_B.html">
              Top of chapter
            </a></li>
<li title="mnesia_backup callback behavior"><a href="Mnesia_App_B.html#id80051">mnesia_backup callback behavior</a></li>
</ul>
</li>
<li id="no" title="Appendix C: The Activity Access Call Back Interface" expanded="false">Appendix C: The Activity Access Call Back Interface<ul>
<li><a href="Mnesia_App_C.html">
              Top of chapter
            </a></li>
<li title="mnesia_access callback behavior"><a href="Mnesia_App_C.html#id80172">mnesia_access callback behavior</a></li>
</ul>
</li>
<li id="no" title="Appendix D: The Fragmented Table Hashing Call Back Interface" expanded="false">Appendix D: The Fragmented Table Hashing Call Back Interface<ul>
<li><a href="Mnesia_App_D.html">
              Top of chapter
            </a></li>
<li title="mnesia_frag_hash callback behavior"><a href="Mnesia_App_D.html#id80315">mnesia_frag_hash callback behavior</a></li>
</ul>
</li>
</ul>
</div></div>
<div id="content">
<div class="innertube">
<h1>5 Miscellaneous Mnesia Features</h1>
  
  <p>The earlier chapters of this User's Guide described how to get
    started with Mnesia and how to build a Mnesia database. This
    chapter describes the more advanced features available
    when building a distributed, fault-tolerant Mnesia database. It
    contains the following sections:
    </p>
  <ul>
    <li>Indexing
    </li>
    <li>Distribution and Fault Tolerance
    </li>
    <li>Table Fragmentation
    </li>
    <li>Local Content Tables
    </li>
    <li>Disc-less Nodes
    </li>
    <li>More Schema Management
    </li>
    <li>Mnesia Event Handling
    </li>
    <li>Debugging Mnesia Applications
    </li>
    <li>Concurrent Processes in Mnesia
    </li>
    <li>Prototyping
    </li>
    <li>Object Based Programming with Mnesia
    </li>
  </ul>

  <h3><a name="id75164">5.1 
        Indexing</a></h3>
    <a name="indexing"></a>
    
    <p>Data retrieval and matching can be performed very efficiently
      if the key of the record is known. Conversely, if the key is not
      known, all records in a table must be searched, and the larger the
      table, the more time consuming the search becomes. To remedy this,
      Mnesia's indexing capabilities can be used to improve data
      retrieval and matching of records.
      </p>
    <p>The following two functions manipulate indexes on existing tables:
      </p>
    <ul>
      <li><span class="code">mnesia:add_table_index(Tab, AttributeName) -&gt; {aborted, R} |{atomic, ok}</span></li>
      <li><span class="code">mnesia:del_table_index(Tab, AttributeName) -&gt; {aborted, R} |{atomic, ok}</span></li>
    </ul>
    <p>These functions create or delete a table index on the field
      defined by <span class="code">AttributeName</span>. To illustrate this, add an
      index to the table definition <span class="code">{employee, {emp_no, name, salary, sex, phone, room_no}}</span>, which is the example table
      from the Company database. The function
      which adds an index on the element <span class="code">salary</span> can be expressed in
      the following way:
      </p>
    <ul>
      <li><span class="code">mnesia:add_table_index(employee, salary)</span></li>
    </ul>
    <p>The indexing capabilities of Mnesia are utilized with the
      following three functions, which retrieve and match records on the
      basis of index entries in the database. 
      </p>
    <ul>
      <li>
<span class="code">mnesia:index_read(Tab, SecondaryKey, AttributeName) -&gt; transaction abort | RecordList</span>.
       Avoids an exhaustive search of the entire table,  by looking up
       the <span class="code">SecondaryKey</span> in the index to find the primary keys.
      </li>
      <li>
<span class="code">mnesia:index_match_object(Pattern, AttributeName) -&gt; transaction abort | RecordList</span> 
       Avoids an exhaustive search of the entire table, by looking up
       the secondary key in the index to find the primary keys.
       The secondary key is found in the <span class="code">AttributeName</span> field of
       the <span class="code">Pattern</span>. The secondary key must be bound.
      </li>
      <li>
<span class="code">mnesia:match_object(Pattern) -&gt; transaction abort | RecordList</span>
       Uses indices to avoid exhaustive search of the entire table.
       Unlike the other functions above, this function may utilize
       any index as long as the secondary key is bound.</li>
    </ul>
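    <p>As an illustration (a minimal sketch, assuming the
      <span class="code">employee</span> table and the <span class="code">salary</span>
      index created above), the following function returns all employees
      with a given salary without scanning the whole table:
      </p>
    <div class="example"><pre>
%% Minimal sketch, assuming the employee record and the salary index
%% created with mnesia:add_table_index(employee, salary) above.
employees_with_salary(Salary) -&gt;
    F = fun() -&gt;
            mnesia:index_read(employee, Salary, salary)
        end,
    mnesia:transaction(F).
    </pre></div>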
    <p>These functions are further described and exemplified in
      Chapter 4: <span class="bold_code"><a href="Mnesia_chap4.html#matching">Pattern matching</a></span>.
      </p>
  

  <h3><a name="id75284">5.2 
        Distribution and Fault Tolerance</a></h3>
    
    <p>Mnesia is a distributed, fault tolerant DBMS. It is possible
      to replicate tables on different Erlang nodes in a variety of
      ways. The Mnesia programmer does not have to state
      where the different tables reside; only the names of the
      different tables are specified in the program code. This is
      known as "location transparency" and it is an important
      concept. In particular:
      </p>
    <ul>
      <li>A program will work regardless of the
       location of the data. It makes no difference whether the data
       resides on the local node or on a remote node. <strong>Note:</strong> The program
       will run slower if the data is located on a remote node.
      </li>
      <li>The database can be reconfigured, and tables can be
       moved between nodes. These operations do not affect the user
       programs.
      </li>
    </ul>
    <p>We have previously seen that each table has a number of
      system attributes, such as <span class="code">index</span> and
      <span class="code">type</span>. 
      </p>
    <p>Table attributes are specified when the table is created. For
      example, the following function will create a new table with two
      RAM replicas: 
      </p>
    <div class="example"><pre>
      mnesia:create_table(foo,
                          [{ram_copies, [N1, N2]},
                           {attributes, record_info(fields, foo)}]).
    </pre></div>
    <p>Tables can also have the following properties,
      where each attribute has a list of Erlang nodes as its value. 
      </p>
    <ul>
      <li>
        <p><span class="code">ram_copies</span>. The value of the node list is a list of
          Erlang nodes, and a RAM replica of the table will reside on
          each node in the list. This is a RAM replica, and it is
          important to realize that no disc operations are performed when
          a program executes write operations to these replicas. However,
          should permanent  RAM replicas be a requirement, then the
          following alternatives are available:</p>
        <ul>
          <li>The <span class="code">mnesia:dump_tables/1</span> function can be used
           to dump RAM table replicas to disc. 
          </li>
          <li>The table replicas can be backed up; either from
           RAM, or from disc if dumped there with the above
           function. 
          </li>
        </ul>
      </li>
      <li>
<span class="code">disc_copies</span>. The value of the attribute is a list
       of Erlang nodes, and a replica of the table will reside both
       in RAM and on disc on each node in the list. Write operations
       addressed to the table will address both the RAM and the disc
       copy of the table. 
      </li>
      <li>
<span class="code">disc_only_copies</span>. The value of the attribute is a
       list of Erlang nodes, and a replica of the table will reside
       only as a disc copy on each node in the list. The major
       disadvantage of this type of table replica is the access
       speed. The major advantage is that the table does not occupy
       space in memory.
      </li>
    </ul>
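    <p>As a sketch only (the node variables are placeholders for real
      node names), a table could combine these storage types, for example
      with disc resident replicas on two nodes and a RAM only replica on a
      third:
      </p>
    <div class="example"><pre>
      %% Sketch only: N1, N2 and N3 are placeholders for real node names.
      mnesia:create_table(foo,
                          [{disc_copies, [N1, N2]},
                           {ram_copies, [N3]},
                           {attributes, record_info(fields, foo)}]).
    </pre></div>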
    <p>It is also possible to set and change table properties on
      existing tables. Refer to Chapter 3: <span class="bold_code"><a href="Mnesia_chap3.html#def_schema">Defining the Schema</a></span> for full
      details. 
      </p>
    <p>There are basically two reasons for using more than one table
      replica: fault tolerance, or speed. It is worthwhile to note
      that table replication provides a solution to both of these
      system requirements. 
      </p>
    <p>If we have two active table replicas, all information is
      still available if one of the replicas fails. This can be a very
      important property in many applications. Furthermore, if a table
      replica exists at two specific nodes, applications which execute
      at either of these nodes can read data from the table without
      accessing the network.  Network operations are considerably
      slower and consume more resources than local operations. 
      </p>
    <p>It can be advantageous to create table replicas for a
      distributed application which reads data often, but writes data
      seldom, in order to achieve fast read operations on the local
      node. The major disadvantage with replication is the increased
      time to write data. If a table has two replicas, every write
      operation must access both table replicas. Since one of these
      write operations must be a network operation, it is considerably
      more expensive to perform a write operation to a replicated
      table than to a non-replicated table. 
      </p>
  

  <h3><a name="id75432">5.3 
        Table Fragmentation</a></h3>
    

    <h4>The Concept</h4>
      
      <p>The concept of table fragmentation has been introduced in
        order to cope with very large tables. The idea is to split a
        table into several more manageable fragments. Each fragment
        is implemented as a first class Mnesia table and may be
        replicated, have indices, etc. like any other table. However, the
        fragmented tables may neither have <span class="code">local_content</span> set nor have the
        <span class="code">snmp</span> connection activated.
        </p>
      <p>In order to be able to access a record in a fragmented
        table, Mnesia must determine to which fragment the
        actual record belongs. This is done by the
        <span class="code">mnesia_frag</span> module, which implements the
        <span class="code">mnesia_access</span> callback behaviour. Please, read the
        documentation about <span class="code">mnesia:activity/4</span> to see how
        <span class="code">mnesia_frag</span> can be used as a <span class="code">mnesia_access</span>
        callback module.
        </p>
      <p>At each record access <span class="code">mnesia_frag</span> first computes
        a hash value from the record key. Secondly, the name of the
        table fragment is determined from the hash value. Finally
        the actual table access is performed by the same
        functions as for non-fragmented tables. When the key is
        not known beforehand, all fragments are searched for
        matching records. Note: In <span class="code">ordered_set</span> tables
        the records will be ordered per fragment, and the
        order is undefined in results returned by select and
        match_object.
        </p>
      <p>The following piece of code illustrates
        how an existing Mnesia table is converted to be a
        fragmented table and how more fragments are added later on.
        </p>
      <div class="example"><pre>
Eshell V4.7.3.3  (abort with ^G)
(a@sam)1&gt; mnesia:start().
ok
(a@sam)2&gt; mnesia:system_info(running_db_nodes).
[b@sam,c@sam,a@sam]
(a@sam)3&gt; Tab = dictionary.
dictionary
(a@sam)4&gt; mnesia:create_table(Tab, [{ram_copies, [a@sam, b@sam]}]).
{atomic,ok}
(a@sam)5&gt; Write = fun(Keys) -&gt; [mnesia:write({Tab,K,-K}) || K &lt;- Keys], ok end.
#Fun&lt;erl_eval&gt;
(a@sam)6&gt; mnesia:activity(sync_dirty, Write, [lists:seq(1, 256)], mnesia_frag).
ok
(a@sam)7&gt; mnesia:change_table_frag(Tab, {activate, []}).
{atomic,ok}
(a@sam)8&gt; mnesia:table_info(Tab, frag_properties).
[{base_table,dictionary},
 {foreign_key,undefined},
 {n_doubles,0},
 {n_fragments,1},
 {next_n_to_split,1},
 {node_pool,[a@sam,b@sam,c@sam]}]
(a@sam)9&gt; Info = fun(Item) -&gt; mnesia:table_info(Tab, Item) end.
#Fun&lt;erl_eval&gt;
(a@sam)10&gt; Dist = mnesia:activity(sync_dirty, Info, [frag_dist], mnesia_frag).
[{c@sam,0},{a@sam,1},{b@sam,1}]
(a@sam)11&gt; mnesia:change_table_frag(Tab, {add_frag, Dist}).
{atomic,ok}
(a@sam)12&gt; Dist2 = mnesia:activity(sync_dirty, Info, [frag_dist], mnesia_frag).
[{b@sam,1},{c@sam,1},{a@sam,2}]
(a@sam)13&gt; mnesia:change_table_frag(Tab, {add_frag, Dist2}).
{atomic,ok}
(a@sam)14&gt; Dist3 = mnesia:activity(sync_dirty, Info, [frag_dist], mnesia_frag).
[{a@sam,2},{b@sam,2},{c@sam,2}]
(a@sam)15&gt; mnesia:change_table_frag(Tab, {add_frag, Dist3}).
{atomic,ok}
(a@sam)16&gt; Read = fun(Key) -&gt; mnesia:read({Tab, Key}) end.
#Fun&lt;erl_eval&gt;
(a@sam)17&gt; mnesia:activity(transaction, Read, [12], mnesia_frag).
[{dictionary,12,-12}]
(a@sam)18&gt; mnesia:activity(sync_dirty, Info, [frag_size], mnesia_frag).
[{dictionary,64},
 {dictionary_frag2,64},
 {dictionary_frag3,64},
 {dictionary_frag4,64}]
(a@sam)19&gt; 
      </pre></div>
    

    <h4>Fragmentation Properties</h4>
      
      <p>There is a table property called
        <span class="code">frag_properties</span>, which may be read with
        <span class="code">mnesia:table_info(Tab, frag_properties)</span>. The
        fragmentation properties are a list of tagged tuples with
        arity 2. By default the list is empty, but when it is
        non-empty it triggers Mnesia to regard the table as
        fragmented. The fragmentation properties are:
        </p>
      <dl>
        <dt><strong><span class="code">{n_fragments, Int}</span></strong></dt>
        <dd>
          <p><span class="code">n_fragments</span> regulates how many fragments
            that the table currently has. This property may explicitly
            be set at table creation and later be changed with
            <span class="code">{add_frag, NodesOrDist}</span> or
            <span class="code">del_frag</span>. <span class="code">n_fragment</span>s defaults to <span class="code">1</span>. 
            </p>
        </dd>
        <dt><strong><span class="code">{node_pool, List}</span></strong></dt>
        <dd>
          <p>The node pool contains a list of nodes and may
            explicitly be set at table creation and later be changed
            with <span class="code">{add_node, Node}</span> or <span class="code">{del_node, Node}</span>. At table creation Mnesia tries to distribute
            the replicas of each fragment evenly over all the nodes in
            the node pool. Hopefully all nodes will end up with the
            same number of replicas. <span class="code">node_pool</span> defaults to the
            return value from <span class="code">mnesia:system_info(db_nodes)</span>.
            </p>
        </dd>
        <dt><strong><span class="code">{n_ram_copies, Int}</span></strong></dt>
        <dd>
          <p>Regulates how many <span class="code">ram_copies</span> replicas
            each fragment should have. This property may
            explicitly be set at table creation. The default is
            <span class="code">0</span>, but if <span class="code">n_disc_copies</span> and
            <span class="code">n_disc_only_copies</span> also are <span class="code">0</span>,
            <span class="code">n_ram_copies</span> defaults to <span class="code">1</span>.
            </p>
        </dd>
        <dt><strong><span class="code">{n_disc_copies, Int}</span></strong></dt>
        <dd>
          <p>Regulates how many <span class="code">disc_copies</span> replicas
            each fragment should have. This property may
            explicitly be set at table creation. The default is <span class="code">0</span>.
            </p>
        </dd>
        <dt><strong><span class="code">{n_disc_only_copies, Int}</span></strong></dt>
        <dd>
          <p>Regulates how many <span class="code">disc_only_copies</span> replicas
            each fragment should have. This property may
            explicitly be set at table creation. The default is <span class="code">0</span>.
            </p>
        </dd>
        <dt><strong><span class="code">{foreign_key, ForeignKey}</span></strong></dt>
        <dd>
          <p><span class="code">ForeignKey</span> may either be the atom
            <span class="code">undefined</span> or the tuple <span class="code">{ForeignTab, Attr}</span>,
            where <span class="code">Attr</span> denotes an attribute which should be
            interpreted as a key in another fragmented table named
            <span class="code">ForeignTab</span>. Mnesia will ensure that the number of
            fragments in this table and in the foreign table are
            always the same. When fragments are added or deleted
            Mnesia will automatically propagate the operation to all
            fragmented tables that has a foreign key referring to this
            table. Instead of using the record key to determine which
            fragment to access, the value of the <span class="code">Attr</span> field is
            used. This feature makes it possible to automatically
            co-locate records in different tables to the same
            node. <span class="code">foreign_key</span> defaults to
            <span class="code">undefined</span>. However if the foreign key is set to
            something else it will cause the default values of the
            other fragmentation properties to be the same values as
            the actual fragmentation properties of the foreign table.
            </p>
        </dd>
        <dt><strong><span class="code">{hash_module, Atom}</span></strong></dt>
        <dd>
          <p>Enables definition of an alternate hashing scheme.
            The module must implement the <span class="code">mnesia_frag_hash</span>
            callback behaviour (see the reference manual). This
            property may explicitly be set at table creation.
            The default is <span class="code">mnesia_frag_hash</span>.</p>
          <p>Older tables that were created before the concept of
            user defined hash modules was introduced use
            the <span class="code">mnesia_frag_old_hash</span> module in order to
            remain backwards compatible. <span class="code">mnesia_frag_old_hash</span>
            still uses the deprecated <span class="code">erlang:hash/1</span>
            function.
            </p>
        </dd>
        <dt><strong><span class="code">{hash_state, Term}</span></strong></dt>
        <dd>
          <p>Enables a table specific parameterization
            of a generic hash module. This property may explicitly
            be set at table creation.
            The default is <span class="code">undefined</span>.</p>
          <div class="example"><pre>
Eshell V4.7.3.3  (abort with ^G)
(a@sam)1&gt; mnesia:start().
ok
(a@sam)2&gt; PrimProps = [{n_fragments, 7}, {node_pool, [node()]}].
[{n_fragments,7},{node_pool,[a@sam]}]
(a@sam)3&gt; mnesia:create_table(prim_dict, 
                              [{frag_properties, PrimProps},
                               {attributes,[prim_key,prim_val]}]).
{atomic,ok}
(a@sam)4&gt; SecProps = [{foreign_key, {prim_dict, sec_val}}].
[{foreign_key,{prim_dict,sec_val}}]
(a@sam)5&gt; mnesia:create_table(sec_dict, 
                              [{frag_properties, SecProps},
(a@sam)5&gt;                      {attributes, [sec_key, sec_val]}]).
{atomic,ok}
(a@sam)6&gt; Write = fun(Rec) -&gt; mnesia:write(Rec) end.
#Fun&lt;erl_eval&gt;
(a@sam)7&gt; PrimKey = 11.
11
(a@sam)8&gt; SecKey = 42.
42
(a@sam)9&gt; mnesia:activity(sync_dirty, Write,
                          [{prim_dict, PrimKey, -11}], mnesia_frag).
ok
(a@sam)10&gt; mnesia:activity(sync_dirty, Write,
                           [{sec_dict, SecKey, PrimKey}], mnesia_frag).
ok
(a@sam)11&gt; mnesia:change_table_frag(prim_dict, {add_frag, [node()]}).
{atomic,ok}
(a@sam)12&gt; SecRead = fun(PrimKey, SecKey) -&gt;
               mnesia:read({sec_dict, PrimKey}, SecKey, read) end.
#Fun&lt;erl_eval&gt;
(a@sam)13&gt; mnesia:activity(transaction, SecRead,
                           [PrimKey, SecKey], mnesia_frag).
[{sec_dict,42,11}]
(a@sam)14&gt; Info = fun(Tab, Item) -&gt; mnesia:table_info(Tab, Item) end.
#Fun&lt;erl_eval&gt;
(a@sam)15&gt; mnesia:activity(sync_dirty, Info,
                           [prim_dict, frag_size], mnesia_frag).
[{prim_dict,0},
 {prim_dict_frag2,0},
 {prim_dict_frag3,0},
 {prim_dict_frag4,1},
 {prim_dict_frag5,0},
 {prim_dict_frag6,0},
 {prim_dict_frag7,0},
 {prim_dict_frag8,0}]
(a@sam)16&gt; mnesia:activity(sync_dirty, Info,
                           [sec_dict, frag_size], mnesia_frag).
[{sec_dict,0},
 {sec_dict_frag2,0},
 {sec_dict_frag3,0},
 {sec_dict_frag4,1},
 {sec_dict_frag5,0},
 {sec_dict_frag6,0},
 {sec_dict_frag7,0},
 {sec_dict_frag8,0}]
(a@sam)17&gt;
          </pre></div>
        </dd>
      </dl>
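      <p>As a sketch of how these properties may be combined at table
        creation (the node names in the pool are placeholders), the
        following creates a fragmented table with four fragments, each
        with two disc resident replicas, spread over a pool of three nodes:
        </p>
      <div class="example"><pre>
%% Sketch only: the nodes in the pool are placeholders.
FragProps = [{n_fragments, 4},
             {n_disc_copies, 2},
             {node_pool, [a@sam, b@sam, c@sam]}],
mnesia:create_table(dictionary,
                    [{frag_properties, FragProps},
                     {attributes, [key, val]}]).
      </pre></div>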
    

    <h4>Management of Fragmented Tables</h4>
      
      <p>The function <span class="code">mnesia:change_table_frag(Tab, Change)</span>
        is intended to be used for reconfiguration of fragmented
        tables. The <span class="code">Change</span> argument should have one of the
        following values: 
        </p>
      <dl>
        <dt><strong><span class="code">{activate, FragProps}</span></strong></dt>
        <dd>
          <p>Activates the fragmentation properties of an
            existing table. <span class="code">FragProps</span> should either contain
            <span class="code">{node_pool, Nodes}</span> or be empty.
            </p>
        </dd>
        <dt><strong><span class="code">deactivate</span></strong></dt>
        <dd>
          <p>Deactivates the fragmentation properties of a
            table.  The number of fragments must be <span class="code">1</span>. No other
            tables may refer to this table in its foreign key. 
            </p>
        </dd>
        <dt><strong><span class="code">{add_frag, NodesOrDist}</span></strong></dt>
        <dd>
          <p>Adds one new fragment to a fragmented table. All
            records in one of the old fragments will be rehashed and
            about half of them will be moved to the new (last)
            fragment. All other fragmented tables which refer to this
            table in their foreign key will automatically get a new
            fragment, and their records will also be dynamically
            rehashed in the same manner as for the main table.
            </p>
          <p>The <span class="code">NodesOrDist</span> argument may either be a list
            of nodes or the result from <span class="code">mnesia:table_info(Tab, frag_dist)</span>. The <span class="code">NodesOrDist</span> argument is
            assumed to be a sorted list with the best nodes to
            host new replicas first in the list. The new fragment
            will get the same number of replicas as the first
            fragment (see <span class="code">n_ram_copies</span>, <span class="code">n_disc_copies</span>
            and <span class="code">n_disc_only_copies</span>). The <span class="code">NodesOrDist</span>
            list must contain at least one element for each
            replica that needs to be allocated. 
            </p>
        </dd>
        <dt><strong><span class="code">del_frag</span></strong></dt>
        <dd>
          <p>Deletes one fragment from a fragmented table. All
            records in the last fragment will be moved to one of the other
            fragments. All other fragmented tables which refer to
            this table in their foreign key will automatically lose
            their last fragment, and their records will also be
            dynamically rehashed in the same manner as for the main
            table.
            </p>
        </dd>
        <dt><strong><span class="code">{add_node, Node}</span></strong></dt>
        <dd>
          <p>Adds a new node to the <span class="code">node_pool</span>. The new
            node pool will affect the list returned from
            <span class="code">mnesia:table_info(Tab, frag_dist)</span>.
            </p>
        </dd>
        <dt><strong><span class="code">{del_node, Node}</span></strong></dt>
        <dd>
          <p>Deletes a node from the <span class="code">node_pool</span>. The
            new node pool will affect the list returned from
            <span class="code">mnesia:table_info(Tab, frag_dist)</span>.</p>
        </dd>
      </dl>
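      <p>As a sketch (assuming the fragmented <span class="code">dictionary</span>
        table from the earlier example and a hypothetical new node
        <span class="code">d@sam</span>), the pool can first be extended and a new
        fragment then added according to the current replica distribution:
        </p>
      <div class="example"><pre>
%% Sketch only: d@sam is a hypothetical node joining the pool.
{atomic, ok} = mnesia:change_table_frag(dictionary, {add_node, d@sam}),
Dist = mnesia:activity(sync_dirty,
                       fun() -&gt; mnesia:table_info(dictionary, frag_dist) end,
                       [], mnesia_frag),
{atomic, ok} = mnesia:change_table_frag(dictionary, {add_frag, Dist}).
      </pre></div>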
    

    <h4>Extensions of Existing Functions</h4>
      
      <p>The function <span class="code">mnesia:create_table/2</span> is used to
        create a brand new fragmented table, by setting the table
        property <span class="code">frag_properties</span> to some proper values.
        </p>
      <p>The function <span class="code">mnesia:delete_table/1</span> is  used to
        delete a fragmented table including all its
        fragments. There must however not exist any other
        fragmented tables which refers to this table in their foreign key.
        </p>
      <p>The function <span class="code">mnesia:table_info/2</span> now understands
        the <span class="code">frag_properties</span> item.
        </p>
      <p>If the function <span class="code">mnesia:table_info/2</span> is invoked in
        the activity context of the <span class="code">mnesia_frag</span> module,
        information about several additional items may be obtained:
        </p>
      <dl>
        <dt><strong><span class="code">base_table</span></strong></dt>
        <dd>
          <p>the name of the fragmented table
            </p>
        </dd>
        <dt><strong><span class="code">n_fragments</span></strong></dt>
        <dd>
          <p>the actual number of fragments
            </p>
        </dd>
        <dt><strong><span class="code">node_pool</span></strong></dt>
        <dd>
          <p>the pool of nodes
            </p>
        </dd>
        <dt><strong><span class="code">n_ram_copies</span></strong></dt>
        <dd></dd>
        <dt><strong><span class="code">n_disc_copies</span></strong></dt>
        <dd></dd>
        <dt><strong><span class="code">n_disc_only_copies</span></strong></dt>
        <dd>
          <p>the number of replicas with storage type
            <span class="code">ram_copies</span>, <span class="code">disc_copies</span> and <span class="code">disc_only_copies</span>
            respectively. The actual values are dynamically derived
            from the first fragment. The first fragment serves as a
            prototype, and when the actual values need to be computed
            (e.g. when adding new fragments) they are simply
            determined by counting the number of replicas of
            each storage type. This means that when the functions
            <span class="code">mnesia:add_table_copy/3</span>,
            <span class="code">mnesia:del_table_copy/2</span> and <span class="code">mnesia:change_table_copy_type/3</span> are applied to the
            first fragment, they will affect the settings of
            <span class="code">n_ram_copies</span>, <span class="code">n_disc_copies</span>, and
            <span class="code">n_disc_only_copies</span>.
            </p>
        </dd>
        <dt><strong><span class="code">foreign_key</span></strong></dt>
        <dd>
          <p>the foreign key.
            </p>
        </dd>
        <dt><strong><span class="code">foreigners</span></strong></dt>
        <dd>
          <p>all other tables that refer to this table in
            their foreign key.
            </p>
        </dd>
        <dt><strong><span class="code">frag_names</span></strong></dt>
        <dd>
          <p>the names of all fragments.
            </p>
        </dd>
        <dt><strong><span class="code">frag_dist</span></strong></dt>
        <dd>
          <p>a sorted list of <span class="code">{Node, Count}</span> tuples
            which is sorted in increasing <span class="code">Count</span> order. The
            <span class="code">Count</span> is the total number of replicas that this
            fragmented table hosts on each <span class="code">Node</span>. The list
            always contains at least all nodes in the
            <span class="code">node_pool</span>. Nodes which do not belong to the
            <span class="code">node_pool</span> will be put last in the list even if
            their <span class="code">Count</span> is lower.
            </p>
        </dd>
        <dt><strong><span class="code">frag_size</span></strong></dt>
        <dd>
          <p>a list of <span class="code">{Name, Size}</span> tuples, where
            <span class="code">Name</span> is the name of a fragment and <span class="code">Size</span> is
            how many records it contains.
            </p>
        </dd>
        <dt><strong><span class="code">frag_memory</span></strong></dt>
        <dd>
          <p>a list of <span class="code">{Name, Memory}</span> tuples, where
            <span class="code">Name</span> is the name of a fragment and <span class="code">Memory</span> is
            how much memory it occupies.
            </p>
        </dd>
        <dt><strong><span class="code">size</span></strong></dt>
        <dd>
          <p>total size of all fragments
            </p>
        </dd>
        <dt><strong><span class="code">memory</span></strong></dt>
        <dd>
          <p>the total memory of all fragments</p>
        </dd>
      </dl>
    

    <h4>Load Balancing</h4>
      
      <p>There are several algorithms for distributing the records
        of a fragmented table evenly over a
        pool of nodes. No single algorithm is best; it depends on the
        needs of the application. The following are some examples of
        situations which may need attention:
        </p>
      <p><span class="code">permanent change of nodes</span> when a new permanent
        <span class="code">db_node</span> is introduced or dropped, it may be time to
        change the pool of nodes and re-distribute the replicas
        evenly over the new pool of nodes. It may also be time to
        add or delete a fragment before the replicas are re-distributed.
        </p>
      <p><span class="code">size/memory threshold</span> when the total size or
        total memory of a fragmented table (or a single
        fragment) exceeds some application specific threshold, it
        may be time to dynamically add a new fragment in order
        obtain a better distribution of records.   
        </p>
      <p><span class="code">temporary node down</span> when a node temporarily goes
        down it may be time to compensate some fragments with new
        replicas in order to keep the desired level of
        redundancy. When the node comes up again it may be time to
        remove the superfluous replica. 
        </p>
      <p><span class="code">overload threshold</span> when the load on some node is
        exceeds some application specific threshold, it may be time to
        either add or move some fragment replicas to nodes with lesser
        load. Extra care should be taken if the table has a foreign
        key relation to some other table. In order to avoid severe
        performance penalties, the same re-distribution must be
        performed for all of the related tables.
        </p>
      <p>Use <span class="code">mnesia:change_table_frag/2</span> to add new fragments
        and apply the usual schema manipulation functions (such as
        <span class="code">mnesia:add_table_copy/3</span>, <span class="code">mnesia:del_table_copy/2</span>
        and <span class="code">mnesia:change_table_copy_type/2</span>) on each fragment
        to perform the actual re-distribution.
        </p>
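      <p>As a sketch of the size threshold case (built only on the
        functions mentioned above; the threshold is application specific),
        a helper could add one fragment whenever any fragment holds more
        records than allowed:
        </p>
      <div class="example"><pre>
%% Sketch only: MaxSize is an application specific threshold.
maybe_add_frag(Tab, MaxSize) -&gt;
    Info = fun(Item) -&gt; mnesia:table_info(Tab, Item) end,
    Sizes = mnesia:activity(sync_dirty, Info, [frag_size], mnesia_frag),
    case [S || {_Frag, S} &lt;- Sizes, S &gt; MaxSize] of
        [] -&gt;
            ok;
        _ -&gt;
            Dist = mnesia:activity(sync_dirty, Info, [frag_dist], mnesia_frag),
            mnesia:change_table_frag(Tab, {add_frag, Dist})
    end.
      </pre></div>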
    
  

  <h3><a name="id76375">5.4 
        Local Content Tables</a></h3>
    
    <p>Replicated tables have the same content on all nodes where
      they are replicated. However, it is sometimes advantageous to
      have tables with the same name but different content on different nodes.
      </p>
    <p>If we specify the attribute <span class="code">{local_content, true}</span> when
      we create the table, the table will reside on the nodes where
      we specify that the table shall exist, but the write operations on the
      table will only be performed on the local copy. 
      </p>
    <p>Furthermore, when the table is initialized at start-up, the
      table will only be initialized locally, and the table
      content will not be copied from another node.
      </p>
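    <p>A minimal sketch of creating such a table (here a hypothetical
      per-node statistics table, with N1 and N2 as placeholders for real
      node names):
      </p>
    <div class="example"><pre>
      %% Sketch only: each node gets its own, independent copy.
      mnesia:create_table(node_stats,
                          [{local_content, true},
                           {ram_copies, [N1, N2]},
                           {attributes, [key, value]}]).
    </pre></div>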
  

  <h3><a name="id76402">5.5 
        Disc-less Nodes</a></h3>
    
    <p>It is possible to run Mnesia on nodes that do not have a
      disc. It is of course not possible to have replicas
      of either <span class="code">disc_copies</span> or <span class="code">disc_only_copies</span>
      on such nodes. This is especially troublesome for the
      <span class="code">schema</span> table, since Mnesia needs the schema in order
      to initialize itself.
      </p>
    <p>The schema table may, as other tables, reside on one or
      more nodes. The storage type of the schema table may either
      be <span class="code">disc_copies</span> or <span class="code">ram_copies</span> 
      (not <span class="code">disc_only_copies</span>). At
      start-up Mnesia uses its schema to determine with which 
      nodes it should try to establish contact. If any
      of the other nodes are already started, the starting node
      merges its table definitions with the table definitions
      brought from the other nodes. This also applies to the
      definition of the schema table itself. The application
      parameter <span class="code">extra_db_nodes</span> contains a list of nodes which
      Mnesia should also establish contact with, besides the ones
      found in the schema. The default value is the empty list
      <span class="code">[]</span>. 
      </p>
    <p>Hence, when a disc-less node needs to find the schema
      definitions from a remote node on the network, we need to supply
      this information through the application parameter <span class="code">-mnesia extra_db_nodes NodeList</span>. Without this
      configuration parameter set, Mnesia will start as a single node
      system. It is also possible to use <span class="code">mnesia:change_config/2</span>
      to assign a value to <span class="code">extra_db_nodes</span> and force a connection
      after Mnesia has been started, i.e.
      <span class="code">mnesia:change_config(extra_db_nodes, NodeList)</span>.
      </p>
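    <p>For example (a sketch only, with a@sam as a placeholder for an
      already running disc-full node), a disc-less node can either be
      given the parameter on the command line, or connect after Mnesia
      has been started:
      </p>
    <div class="example"><pre>
% Sketch: supply the parameter when the disc-less node is started ...
erl -sname b -mnesia extra_db_nodes '[a@sam]' -mnesia schema_location ram

% ... or connect after Mnesia has been started:
mnesia:start(),
mnesia:change_config(extra_db_nodes, [a@sam]).
    </pre></div>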
    <p>The application parameter <span class="code">schema_location</span> controls where
      Mnesia will search for its schema. The parameter may be one of
      the following atoms:
      </p>
    <dl>
      <dt><strong><span class="code">disc</span></strong></dt>
      <dd>
        <p>Mandatory disc. The schema is assumed to be located
          in the Mnesia directory. If the schema cannot be found,
          Mnesia refuses to start.
          </p>
      </dd>
      <dt><strong><span class="code">ram</span></strong></dt>
      <dd>
        <p>Mandatory ram. The schema resides in ram
          only. At start-up a tiny new schema is generated. This
          default schema contains just the definition of the schema
          table and only resides on the local node. Since no other
          nodes are found in the default schema, the configuration
          parameter  <span class="code">extra_db_nodes</span> must be used in order to let the
          node share its table definitions with other nodes. (The
          <span class="code">extra_db_nodes</span> parameter may also be used on disc-full nodes.)
          </p>
      </dd>
      <dt><strong><span class="code">opt_disc</span></strong></dt>
      <dd>
        <p>Optional disc. The schema may reside on either disc
          or ram. If the schema is found on disc, Mnesia starts as a
          disc-full node (the storage type of the schema table is
          disc_copies). If no schema is found  on disc, Mnesia starts
          as a disc-less node (the storage type of the schema table is
          ram_copies). The default value for the application parameter
          is
          <span class="code">opt_disc</span>.  </p>
      </dd>
    </dl>
    <p>When the <span class="code">schema_location</span> is set to opt_disc the
      function <span class="code">mnesia:change_table_copy_type/3</span> may be used to
      change the storage type of the schema. 
      This is illustrated below:
      </p>
    <div class="example"><pre>
        1&gt; mnesia:start().
        ok
        2&gt; mnesia:change_table_copy_type(schema, node(), disc_copies).
        {atomic, ok}
    </pre></div>
    <p>Assuming that the call to <span class="code">mnesia:start</span> did not
      find any schema to read on the disc, Mnesia starts
      as a disc-less node, and the call above then changes it to a node that
      utilizes the disc to locally store the schema.
      </p>
  

  <h3><a name="id76560">5.6 
        More Schema Management</a></h3>
    
    <p>It is possible to add and remove nodes from a Mnesia system.
      This can be done by adding a copy of the schema to those nodes.
      </p>
    <p>The functions <span class="code">mnesia:add_table_copy/3</span> and
      <span class="code">mnesia:del_table_copy/2</span> may be used to add and delete
      replicas of the schema table. Adding a node to the list
      of nodes where the schema is replicated will affect two
      things. First it allows other tables to be replicated to
      this node. Secondly it will cause Mnesia to try to contact
      the node at start-up of disc-full nodes.
      </p>
    <p>The function call <span class="code">mnesia:del_table_copy(schema, mynode@host)</span> deletes the node 'mynode@host' from the
      Mnesia system. The call fails if mnesia is running on
      'mynode@host'. The other mnesia nodes will never try to connect
      to that node again. Note, if there is a disc
      resident schema on the node 'mynode@host', the entire mnesia
      directory should be deleted. This can be done with
      <span class="code">mnesia:delete_schema/1</span>. If
      mnesia is started again on the node 'mynode@host' and the
      directory has not been cleared, mnesia's behaviour is undefined.
      </p>
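    <p>A sketch of the complete removal of a node, with mynode@host as in
      the text above:
      </p>
    <div class="example"><pre>
%% On one of the remaining nodes, while mnesia is not running on
%% mynode@host:
mnesia:del_table_copy(schema, mynode@host).

%% Afterwards, on mynode@host itself (with mnesia stopped there),
%% remove the obsolete disc resident schema:
mnesia:delete_schema([mynode@host]).
    </pre></div>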
    <p>If the storage type of the schema is ram_copies, i.e., we
      have a disc-less node, Mnesia
      will not use the disc on that particular node. The disc
      usage is enabled by changing the storage type of the table
      <span class="code">schema</span> to disc_copies.
      </p>
    <p>New schemas are
      created explicitly with <span class="code">mnesia:create_schema/1</span> or implicitly
      by starting Mnesia without a disc resident schema.  Whenever
      a table (including the schema table) is created it is
      assigned its own unique cookie. The schema table is not created with
      <span class="code">mnesia:create_table/2</span> as normal tables are.
      </p>
    <p>At start-up  Mnesia connects different nodes to each other,
      then they  exchange table definitions with each other and the
      table definitions are merged. During the merge procedure Mnesia
      performs a sanity test to ensure that the table definitions are
      compatible with each other. If a table exists on several nodes
      the cookie must be the same, otherwise Mnesia will shut down one
      of the nodes. This unfortunate situation will occur if a table
      has been created on two nodes independently of each other while
      they were disconnected. To solve the problem, one of the tables
      must be deleted (as the cookies differ we regard it to be two
      different tables even if they happen to have the same name). 
      </p>
    <p>Merging different versions of the schema table, does not
      always require the cookies to be the same. If the storage
      type of the schema table is disc_copies, the cookie is
      immutable, and all other db_nodes must have the same
      cookie. When the schema is stored as type ram_copies,
      its cookie can be replaced with a cookie from another node
      (ram_copies or disc_copies). The cookie replacement (during
      merge of the schema table definition) is performed each time
      a RAM node connects to another node. 
      </p>
    <p><span class="code">mnesia:system_info(schema_location)</span> and
      <span class="code">mnesia:system_info(extra_db_nodes)</span> may be used to determine
      the actual values of schema_location and  extra_db_nodes
      respectively. <span class="code">mnesia:system_info(use_dir)</span> may be used to
      determine whether Mnesia is actually  using the Mnesia
      directory. <span class="code">use_dir</span> may be determined even before
      Mnesia is started. The function <span class="code">mnesia:info/0</span> may now be
      used to print out some system information even before Mnesia
      is started. When Mnesia is started the function prints out
      more information.  
      </p>
    <p>Transactions which update the definition of a table
      require that Mnesia is started on all nodes where the
      storage type of the schema is disc_copies. All replicas of
      the table on these nodes must also be loaded. There are a
      few exceptions to these availability rules. Tables may be
      created and new replicas may be added without starting all
      of the disc-full nodes. New replicas may be added before all
      other replicas of the table have been loaded; it suffices
      that one other replica is active.
      </p>
  

  <h3><a name="id76675">5.7 
        Mnesia Event Handling</a></h3>
    
    <p>System events and table events are the two categories of events
      that Mnesia will generate in various situations.
      </p>
    <p>It is possible for user processes to subscribe to the
      events generated by Mnesia.
      We have the following two functions:</p>
    <dl>
      <dt><strong><span class="code">mnesia:subscribe(Event-Category)</span></strong></dt>
      <dd>
        <p>Ensures that a copy of all events of type
          <span class="code">Event-Category</span> is sent to the calling process.
          </p>
      </dd>
      <dt><strong><span class="code">mnesia:unsubscribe(Event-Category)</span></strong></dt>
      <dd>Removes the subscription to events of type
      <span class="code">Event-Category</span>.
</dd>
    </dl>
    <p><span class="code">Event-Category</span> may either be the atom <span class="code">system</span>, the atom <span class="code">activity</span>, or
      one of the tuples <span class="code">{table, Tab, simple}</span>, <span class="code">{table, Tab, detailed}</span>.  The old event-category <span class="code">{table, Tab}</span> is the same
      event-category as <span class="code">{table, Tab, simple}</span>.
      The subscribe functions activate a subscription
      of events. The events are delivered as messages to the process
      evaluating the <span class="code">mnesia:subscribe/1</span> function. The messages have
      the form <span class="code">{mnesia_system_event, Event}</span> for system events,
      <span class="code">{mnesia_activity_event, Event}</span> for activity events, and
      <span class="code">{mnesia_table_event, Event}</span> for table events. What the various
      event types mean is described below.</p>
    <p>All system events are subscribed to by Mnesia's
      gen_event handler. The default gen_event handler is
      <span class="code">mnesia_event</span>, but it may be changed by using the application
      parameter <span class="code">event_module</span>. The value of this parameter must be
      the name of a module implementing a complete handler
      as specified by the <span class="code">gen_event</span> module in
      STDLIB. <span class="code">mnesia:system_info(subscribers)</span> and
      <span class="code">mnesia:table_info(Tab, subscribers)</span> may be used to determine
      which processes are subscribed to various 
      events.      
      </p>
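    <p>A sketch of a process that subscribes to system events and to
      detailed table events (using a hypothetical <span class="code">employee</span>
      table), and prints every event it receives:
      </p>
    <div class="example"><pre>
%% Sketch only: employee is used here as an example table name.
start_event_logger() -&gt;
    spawn(fun() -&gt;
              mnesia:subscribe(system),
              mnesia:subscribe({table, employee, detailed}),
              event_loop()
          end).

event_loop() -&gt;
    receive
        {mnesia_system_event, Event} -&gt;
            io:format("system event: ~p~n", [Event]),
            event_loop();
        {mnesia_table_event, Event} -&gt;
            io:format("table event: ~p~n", [Event]),
            event_loop()
    end.
    </pre></div>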

    <h4>System Events</h4>
      
      <p>The system events are detailed below:</p>
      <dl>
        <dt><strong><span class="code">{mnesia_up, Node}</span></strong></dt>
        <dd>
          <p>Mnesia has been started on a node. 
            Node is the name of the node. By default this event is ignored.
            </p>
        </dd>
        <dt><strong><span class="code">{mnesia_down, Node}</span></strong></dt>
        <dd>
          <p>Mnesia has been stopped on a node.
            Node is the name of the node. By default this event is
            ignored.
            </p>
        </dd>
        <dt><strong><span class="code">{mnesia_checkpoint_activated, Checkpoint}</span></strong></dt>
        <dd>
          <p>A checkpoint with the name
            <span class="code">Checkpoint</span> has been activated and the current node is
            involved in the checkpoint. Checkpoints may be activated
            explicitly with <span class="code">mnesia:activate_checkpoint/1</span> or implicitly
            at backup, when adding table replicas, at internal transfer of data
            between nodes, etc. By default this event is ignored.
            </p>
        </dd>
        <dt><strong><span class="code">{mnesia_checkpoint_deactivated, Checkpoint}</span></strong></dt>
        <dd>
          <p>A checkpoint with the name
            <span class="code">Checkpoint</span> has been deactivated and the current node was
            involved in the checkpoint. Checkpoints may explicitly be
            deactivated with <span class="code">mnesia:deactivate_checkpoint/1</span> or implicitly when the
            last replica of a table (involved in the checkpoint)
            becomes unavailable, e.g. at node down. By default this
            event is ignored.
            </p>
        </dd>
        <dt><strong><span class="code">{mnesia_overload, Details}</span></strong></dt>
        <dd>
          <p>Mnesia on the current node is
            overloaded and the subscriber should take action. 
            </p>
          <p>A typical overload situation occurs when the
            applications are performing more updates on disc
            resident tables than Mnesia is able to handle. Ignoring
            this kind of overload may lead to a situation where
            the disc space is exhausted (regardless of the size of
            the tables stored on disc).
                        <br>
 Each update is appended to
            the transaction log and occasionally (depending on how it
            is configured) dumped to the table files. The
            table file storage is more compact than the transaction
            log storage, especially if the same record is updated
            over and over again. If the thresholds for dumping the
            transaction log have been reached before the previous
            dump was finished, an overload event is triggered.
            </p>
          <p>Another typical overload situation is when the
            transaction manager cannot commit transactions at the
            same pace as the applications are performing updates of
            disc resident tables. When this happens the message
            queue of the transaction manager will continue to grow
            until the memory is exhausted or the load
            decreases.
            </p>
          <p>The same problem may occur for dirty updates. The overload
            is detected locally on the current node, but its cause may
            be on another node. Application processes may cause heavy
            loads if any tables are residing on other nodes (replicated or not). By default this event
            is reported to the error_logger.
            </p>
        </dd>
        <dt><strong><span class="code">{inconsistent_database, Context, Node}</span></strong></dt>
        <dd>
          <p>Mnesia regards the database as
            potentially inconsistent and gives its applications a chance
            to recover from the inconsistency, e.g. by installing a
            consistent backup as fallback and then restarting the system,
            or by picking a <span class="code">MasterNode</span> from <span class="code">mnesia:system_info(db_nodes)</span>
            and invoking <span class="code">mnesia:set_master_node([MasterNode])</span>. By default an
            error is reported to the error logger.
            </p>
        </dd>
        <dt><strong><span class="code">{mnesia_fatal, Format, Args, BinaryCore}</span></strong></dt>
        <dd>
          <p>Mnesia has encountered a fatal error
            and will (within a short period of time) be terminated. The reason for
            the fatal error is explained in <span class="code">Format</span> and <span class="code">Args</span>, which may
            be given as input to <span class="code">io:format/2</span> or sent to the
            error_logger. By default it will be sent to the
            error_logger. <span class="code">BinaryCore</span> is a binary containing a summary of
            Mnesia's internal state at the time when the fatal error was
            encountered. By default the binary is written to a file with a
            unique name in the current directory. On RAM nodes the
            core is ignored.
            </p>
        </dd>
        <dt><strong><span class="code">{mnesia_info, Format, Args}</span></strong></dt>
        <dd>
          <p>Mnesia has detected something that
            may be of interest when debugging the system. This is explained
            in <span class="code">Format</span> and <span class="code">Args</span>, which may be given
            as input to <span class="code">io:format/2</span> or sent to the error_logger. By
            default this event is printed with <span class="code">io:format/2</span>.
            </p>
        </dd>
        <dt><strong><span class="code">{mnesia_error, Format, Args}</span></strong></dt>
        <dd>
          <p>Mnesia has encountered an error. The
            reason for the error is explained in <span class="code">Format</span> and <span class="code">Args</span>
            which may be given as input to <span class="code">io:format/2</span> or sent to the
            error_logger. By default this event is reported to the error_logger.
            </p>
        </dd>
        <dt><strong><span class="code">{mnesia_user, Event}</span></strong></dt>
        <dd>
          <p>An application has invoked the
            function <span class="code">mnesia:report_event(Event)</span>. <span class="code">Event</span> may be any Erlang
            data structure. When tracing a system of Mnesia applications
            it is useful to be able to interleave Mnesia's own events with
            application related events that give information about the
            application context. Whenever the application starts  with
            a new and demanding Mnesia activity or enters a
            new and interesting phase in its execution it may be a good idea
            to use <span class="code">mnesia:report_event/1</span>. </p>
        </dd>
      </dl>
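      <p>As an illustration of how these events may be consumed, the sketch
        below subscribes to system events with <span class="code">mnesia:subscribe(system)</span>
        and handles the resulting <span class="code">{mnesia_system_event, Event}</span> messages.
        Only the subscription call and the message format are given by Mnesia;
        the reactions to the individual events are merely examples.
        </p>
<div class="example"><pre>

%% A minimal sketch of a system event subscriber. The reactions to
%% each event below are illustrative only.
start_system_watcher() -&gt;
    spawn(fun() -&gt;
                  {ok, _Node} = mnesia:subscribe(system),
                  watcher_loop()
          end).

watcher_loop() -&gt;
    receive
        {mnesia_system_event, {mnesia_overload, Details}} -&gt;
            error_logger:warning_msg("Mnesia overload: ~p~n", [Details]),
            watcher_loop();
        {mnesia_system_event, {inconsistent_database, Context, Node}} -&gt;
            error_logger:error_msg("Inconsistency (~p) towards node ~p~n",
                                   [Context, Node]),
            watcher_loop();
        {mnesia_system_event, Event} -&gt;
            io:format("Mnesia system event: ~p~n", [Event]),
            watcher_loop()
    end.</pre></div>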
    

    <h4>Activity Events</h4>
      
      <p>Currently, there is only one type of activity event:</p>
      <dl>
       <dt><strong><span class="code">{complete, ActivityID}</span></strong></dt>
       <dd>
         <p>This event occurs when a transaction that caused a modification to the database
           has completed.  It is useful for determining when a set of table events
           (see below) caused by a given activity have all been sent.  Once this event
           has been received, it is guaranteed that no further table events with the same
           <span class="code">ActivityID</span> will be received.  Note that this event may still be received even
           if no table events with a corresponding <span class="code">ActivityID</span> were received, depending on
           the tables to which the receiving process is subscribed.</p>
         <p>Dirty operations always contain only one update, and thus no activity event is sent.</p>
       </dd>
     </dl>
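      <p>A process subscribes to activity events with
        <span class="code">mnesia:subscribe(activity)</span> and receives each event wrapped in a
        <span class="code">{mnesia_activity_event, Event}</span> message. The minimal sketch below waits
        for the completion of one particular activity; the function itself is our own,
        not part of the Mnesia API.
        </p>
<div class="example"><pre>

%% Sketch: block until the table events belonging to ActivityID have
%% all been delivered. Assumes that the calling process has already
%% subscribed with mnesia:subscribe(activity).
wait_for_complete(ActivityID) -&gt;
    receive
        {mnesia_activity_event, {complete, ActivityID}} -&gt;
            ok
    end.</pre></div>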
    

    <h4>Table Events</h4>
      
      <p>The final category of events is table events, which are
        events related to table updates.  There are two types of table
        events: simple and detailed.
        </p>
      <p>The simple table events are tuples of the form
        <span class="code">{Oper, Record, ActivityId}</span>, where <span class="code">Oper</span> is the
        operation performed, <span class="code">Record</span> is the record involved in the
        operation, and <span class="code">ActivityId</span> is the identity of the
        transaction performing the operation. Note that the name of the
        record is the table name even when <span class="code">record_name</span> has
        another setting.  The various table related events that may
        occur are:
        </p>
      <dl>
        <dt><strong><span class="code">{write, NewRecord, ActivityId}</span></strong></dt>
        <dd>
          <p>A new record has been written.
            <span class="code">NewRecord</span> contains the new value of the record.
            </p>
        </dd>
        <dt><strong><span class="code">{delete_object, OldRecord, ActivityId}</span></strong></dt>
        <dd>
          <p>A record has possibly been deleted
            with <span class="code">mnesia:delete_object/1</span>. <span class="code">OldRecord</span>
            contains the value of the old record, as given as argument
            by the application. Note that other records with the same
            key may remain in the table if it is of type <span class="code">bag</span>.
            </p>
        </dd>
        <dt><strong><span class="code">{delete, {Tab, Key}, ActivityId}</span></strong></dt>
        <dd>
          <p>One or more records have possibly
            been deleted. All records with the key <span class="code">Key</span> in the table
            <span class="code">Tab</span> have been deleted. </p>
        </dd>
      </dl>
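      <p>As a sketch, the process below subscribes to simple events for a
        hypothetical table <span class="code">fruit</span> with
        <span class="code">mnesia:subscribe({table, fruit, simple})</span>; the events are delivered as
        <span class="code">{mnesia_table_event, Event}</span> messages. What is done with each event
        here is only an example.
        </p>
<div class="example"><pre>

watch_fruit() -&gt;
    {ok, _Node} = mnesia:subscribe({table, fruit, simple}),
    simple_loop().

simple_loop() -&gt;
    receive
        {mnesia_table_event, {write, NewRecord, _ActivityId}} -&gt;
            io:format("written: ~p~n", [NewRecord]),
            simple_loop();
        {mnesia_table_event, {delete_object, OldRecord, _ActivityId}} -&gt;
            io:format("deleted object: ~p~n", [OldRecord]),
            simple_loop();
        {mnesia_table_event, {delete, {Tab, Key}, _ActivityId}} -&gt;
            io:format("deleted all records with key ~p in ~p~n", [Key, Tab]),
            simple_loop()
    end.</pre></div>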
      <p>The detailed table events are tuples of the form
        <span class="code">{Oper, Table, Data, [OldRecs], ActivityId}</span>,
        where <span class="code">Oper</span> is the operation
        performed, <span class="code">Table</span> is the table involved in the operation,
        <span class="code">Data</span> is the record/oid written/deleted,
        <span class="code">OldRecs</span> is the contents before the operation,
        and <span class="code">ActivityId</span> is the identity of the transaction
        performing the operation.
        The various table related events that may occur are:
        </p>
      <dl>
        <dt><strong><span class="code">{write, Table, NewRecord, [OldRecords], ActivityId}</span></strong></dt>
        <dd>
          <p>A new record has been written.
            <span class="code">NewRecord</span> contains the new value of the record and <span class="code">OldRecords</span>
            contains the records before the operation was performed.
            Note that the new content depends on the type of the table.</p>
        </dd>
        <dt><strong><span class="code">{delete, Table, What, [OldRecords], ActivityId}</span></strong></dt>
        <dd>
          <p>Records have possibly been deleted.
            <span class="code">What</span> is either <span class="code">{Table, Key}</span> or a record <span class="code">{RecordName, Key, ...}</span>
            that was deleted.
            Note that the new content depends on the type of the table.</p>
        </dd>
      </dl>
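      <p>Detailed events are requested with
        <span class="code">mnesia:subscribe({table, Tab, detailed})</span> and arrive in the same
        <span class="code">{mnesia_table_event, Event}</span> wrapper. A minimal sketch, again for a
        hypothetical table <span class="code">fruit</span>:
        </p>
<div class="example"><pre>

watch_fruit_detailed() -&gt;
    {ok, _Node} = mnesia:subscribe({table, fruit, detailed}),
    detailed_loop().

detailed_loop() -&gt;
    receive
        {mnesia_table_event, {write, fruit, NewRec, OldRecs, _Tid}} -&gt;
            io:format("~p replaces ~p~n", [NewRec, OldRecs]),
            detailed_loop();
        {mnesia_table_event, {delete, fruit, What, OldRecs, _Tid}} -&gt;
            io:format("~p deleted, was ~p~n", [What, OldRecs]),
            detailed_loop()
    end.</pre></div>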
    
  

  <h3><a name="id77250">5.8 
        Debugging Mnesia Applications</a></h3>
    
    <p>Debugging a Mnesia application can be difficult for a number of reasons, primarily related
      to difficulties in understanding how the transaction
      and table load mechanisms work. Another source of
      confusion may be the semantics of nested transactions.
      </p>
    <p>We may set the debug level of Mnesia by calling:
      </p>
    <ul>
      <li><span class="code">mnesia:set_debug_level(Level)</span></li>
    </ul>
    <p>Where the parameter <span class="code">Level</span> is:
      </p>
    <dl>
      <dt><strong><span class="code">none</span></strong></dt>
      <dd>
        <p>No trace outputs at all. This is the default.
          </p>
      </dd>
      <dt><strong><span class="code">verbose</span></strong></dt>
      <dd>
        <p>Activates tracing of important debug events. These
          debug events will generate <span class="code">{mnesia_info, Format, Args}</span>
          system events. Processes may subscribe to these events with
          <span class="code">mnesia:subscribe/1</span>. The events are always sent to Mnesia's
          event handler.
          </p>
      </dd>
      <dt><strong><span class="code">debug</span></strong></dt>
      <dd>
        <p>Activates all events at the verbose level plus
          traces of all debug events. These debug events will generate
          <span class="code">{mnesia_info, Format, Args}</span> system events. Processes may
          subscribe to these events with <span class="code">mnesia:subscribe/1</span>. The
          events are always sent to Mnesia's event handler. On this
          debug level Mnesia's event handler starts subscribing to
          updates of the schema table.
          </p>
      </dd>
      <dt><strong><span class="code">trace</span></strong></dt>
      <dd>
        <p>Activates all events at the debug level. On this
          debug level Mnesia's event handler starts subscribing to
          updates on all Mnesia tables. This level is only intended
          for debugging small toy systems, since many large
          events may be generated.</p>
      </dd>
      <dt><strong><span class="code">false</span></strong></dt>
      <dd>
        <p>Is an alias for <span class="code">none</span>.</p>
      </dd>
      <dt><strong><span class="code">true</span></strong></dt>
      <dd>
        <p>Is an alias for <span class="code">debug</span>.</p>
      </dd>
    </dl>
    <p>The debug level of Mnesia is also an application
      parameter, which makes it possible to turn on Mnesia
      debugging already in the initial start-up phase by starting
      the Erlang system as follows:
      </p>
    <div class="example"><pre>
      % erl -mnesia debug verbose
    </pre></div>
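    <p>Since <span class="code">mnesia:set_debug_level/1</span> returns the previous debug level,
      it is also easy to raise the level temporarily at runtime. The helper below is
      our own sketch, not part of Mnesia:
      </p>
<div class="example"><pre>

%% Run Fun with extra Mnesia tracing, then restore the old debug level.
with_mnesia_debug(Fun) -&gt;
    OldLevel = mnesia:set_debug_level(debug),
    try
        Fun()
    after
        mnesia:set_debug_level(OldLevel)
    end.</pre></div>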
  

  <h3><a name="id77395">5.9 
        Concurrent Processes in Mnesia</a></h3>
    
    <p>Programming concurrent Erlang systems is the subject of
      a separate book. However, it is worthwhile to draw attention to
      the following features, which permit concurrent processes to
      exist in a Mnesia system. 
      </p>
    <p>A group of functions or processes can be called within a
      transaction. A transaction may include statements that read,
      write or delete data from the DBMS. A large number of such
      transactions can run concurrently, and the programmer does not
      have to explicitly synchronize the processes which manipulate
      the data. All programs accessing the database through the
      transaction system may be written as if they had sole access to
      the data. This is a very desirable property since all
      synchronization is taken care of by the transaction handler. If
      a program reads or writes data, the system ensures that no other
      program tries to manipulate the same data at the same time. 
      </p>
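    <p>To illustrate this property, the sketch below lets several processes
      increment the same record inside transactions without any explicit
      synchronization; conflicting accesses are serialized (and, if necessary,
      restarted) by the transaction handler. The <span class="code">counter</span> table and
      record are assumptions made only for this example.
      </p>
<div class="example"><pre>

-record(counter, {id, value = 0}).

%% Assumes a table created with
%% mnesia:create_table(counter,
%%                     [{attributes, record_info(fields, counter)}]).
bump(Id) -&gt;
    F = fun() -&gt;
                Val = case mnesia:wread({counter, Id}) of
                          [#counter{value = V}] -&gt; V;
                          []                    -&gt; 0
                      end,
                mnesia:write(#counter{id = Id, value = Val + 1})
        end,
    mnesia:transaction(F).

%% Spawn N processes that all update the same record concurrently.
bump_concurrently(Id, N) -&gt;
    [spawn(fun() -&gt; bump(Id) end) || _ &lt;- lists:seq(1, N)],
    ok.</pre></div>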
    <p>It is possible to move tables, delete tables or reconfigure
      the layout of a table in various ways. An important aspect of
      the actual implementation of these functions is that it is
      possible for user programs to continue to use a table while it
      is being reconfigured. For example, it is possible to
      simultaneously move a table and perform write operations to the
      table. This is important for many applications that
      require continuously available services. Refer to Chapter 4:
      <span class="bold_code"><a href="Mnesia_chap4.html#trans_prop">Transactions and other access contexts</a></span> for more information.
      </p>
  

  <h3><a name="id77432">5.10 
        Prototyping</a></h3>
    
    <p>If and when we decide that we would like to start and manipulate
      Mnesia, it is often easier to write the definitions and
      data into an ordinary text file.
      Initially, neither the tables nor the data exist, nor do we
      necessarily know which tables are required. At the initial stages of
      prototyping it is prudent to write all data into one file, process
      that file, and have the data in the file inserted into the database.
      It is possible to initialize Mnesia with data read from a text file.
      We have the following two functions to work with text files.
      </p>
    <ul>
      <li>
        <p><span class="code">mnesia:load_textfile(Filename)</span> loads a
          series of local table definitions and data found in the file
          into Mnesia. This function also starts Mnesia and possibly
          creates a new schema. The function only operates on the
          local node.
          </p>
      </li>
      <li>
        <p><span class="code">mnesia:dump_to_textfile(Filename)</span> dumps
          all local tables of a Mnesia system into a text file, which can
          then be edited (with a normal text editor) and
          later reloaded.</p>
      </li>
    </ul>
    <p>These functions are of course much slower than the ordinary
      store and load functions of Mnesia. However, they are mainly intended for minor experiments
      and initial prototyping. Their major advantage is that they are very easy
      to use.
      </p>
    <p>The format of the text file is:
      </p>
    <div class="example"><pre>
      {tables, [{Typename, [Options]},
      {Typename2 ......}]}.
      
      {Typename, Attribute1, Attribute2 ....}.
      {Typename, Attribute1, Attribute2 ....}.
    </pre></div>
    <p><span class="code">Options</span> is a list of <span class="code">{Key,Value}</span> tuples conforming
      to the options we could give to <span class="code">mnesia:create_table/2</span>. 
      </p>
    <p>For example, if we want to start playing with a small
      database for healthy foods, we enter the following data into
      the file <span class="code">FRUITS</span>.
      </p>
<div class="example"><pre>

{tables,
 [{fruit, [{attributes, [name, color, taste]}]},
  {vegetable, [{attributes, [name, color, taste, price]}]}]}.


{fruit, orange, orange, sweet}.
{fruit, apple, green, sweet}.
{vegetable, carrot, orange, carrotish, 2.55}.
{vegetable, potato, yellow, none, 0.45}.</pre></div>    <p>The following session with the Erlang shell then shows how
      to load the fruits database.
      </p>
    <div class="example"><pre>
      % erl
      Erlang (BEAM) emulator version 4.9
 
      Eshell V4.9  (abort with ^G)
      1&gt; mnesia:load_textfile("FRUITS").
      New table fruit
      New table vegetable
      {atomic,ok}
      2&gt; mnesia:info().
      ---&gt; Processes holding locks &lt;--- 
      ---&gt; Processes waiting for locks &lt;--- 
      ---&gt; Pending (remote) transactions &lt;--- 
      ---&gt; Active (local) transactions &lt;---
      ---&gt; Uncertain transactions &lt;--- 
      ---&gt; Active tables &lt;--- 
      vegetable      : with 2 records occuping 299 words of mem 
      fruit          : with 2 records occuping 291 words of mem 
      schema         : with 3 records occuping 401 words of mem 
      ===&gt; System info in version "1.1", debug level = none &lt;===
      opt_disc. Directory "/var/tmp/Mnesia.nonode@nohost" is used.
      use fallback at restart = false
      running db nodes = [nonode@nohost]
      stopped db nodes = [] 
      remote           = []
      ram_copies       = [fruit,vegetable]
      disc_copies      = [schema]
      disc_only_copies = []
      [{nonode@nohost,disc_copies}] = [schema]
      [{nonode@nohost,ram_copies}] = [fruit,vegetable]
      3 transactions committed, 0 aborted, 0 restarted, 2 logged to disc
      0 held locks, 0 in queue; 0 local transactions, 0 remote
      0 transactions waits for other nodes: []
      ok
      3&gt; 
    </pre></div>
    <p>Here we can see that the DBMS was initialized from a
      regular text file.
      </p>
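    <p>The current contents of the local tables can also be written back to a
      text file for inspection or editing with <span class="code">mnesia:dump_to_textfile/1</span>,
      as in the sketch below (the file name is arbitrary):
      </p>
<div class="example"><pre>

%% Dump the local tables to a text file, which can be edited by hand
%% and later loaded back with mnesia:load_textfile/1.
dump_fruits() -&gt;
    mnesia:dump_to_textfile("FRUITS.dump").</pre></div>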
  

  <h3><a name="id77546">5.11 
        Object Based Programming with Mnesia</a></h3>
    
    <p>The Company database introduced in Chapter 2 has three tables
      which store records (employee, dept, project), and three tables
      which store relationships (manager, at_dep, in_proj). This is a
      normalized data model, which has some advantages over a
      non-normalized data model. 
      </p>
    <p>It is  more efficient to do a
      generalized search in a normalized database. Some operations are
      also easier to perform on a normalized data model. For example,
      we can easily remove one project, as the following example
      illustrates: 
      </p>
<div class="example"><pre>

remove_proj(ProjName) -&gt;
    F = fun() -&gt;
                Ip = qlc:e(qlc:q([X || X &lt;- mnesia:table(in_proj),
				       X#in_proj.proj_name == ProjName]
				)),
                mnesia:delete({project, ProjName}),
                del_in_projs(Ip)
        end,
    mnesia:transaction(F).

del_in_projs([Ip|Tail]) -&gt;
    mnesia:delete_object(Ip),
    del_in_projs(Tail);
del_in_projs([]) -&gt;
    done.</pre></div>    <p>In reality, data models are seldom fully normalized. A
      realistic alternative to a normalized database model would be
      a data model which is not even in first normal form. Mnesia
      is very suitable for applications such as telecommunications,
      because it is easy to organize data in a very flexible manner. A
      Mnesia database is always organized as a set of tables. Each
      table is filled with rows/objects/records. What sets Mnesia
      apart is that individual fields in a record can contain any type
      of compound data structures. An individual field in a record can
      contain lists, tuples, functions, and even record code. 
      </p>
    <p>Many telecommunications applications have unique requirements
      on lookup times for certain types of records. If our Company
      database had been a part of a telecommunications system, then it
      could be that the lookup time of an employee <strong>together</strong>
      with a list of the projects the employee is working on, should
      be minimized. If this was the case, we might choose a
      drastically different data model which has no direct
      relationships. We would only have the records themselves, and
      different records could contain either direct references to
      other records, or they could contain other records which are not
      part of the Mnesia schema. 
      </p>
    <p>We could create the following record definitions:
      </p>
<div class="example"><pre>

-record(employee, {emp_no,
		   name,
		   salary,
		   sex,
		   phone,
		   room_no,
		   dept,
		   projects,
		   manager}).
		   

-record(dept, {id, 
               name}).

-record(project, {name,
                  number,
                  location}).
</pre></div>    <p>A record which describes an employee might look like this:
      </p>
    <div class="example"><pre>
        Me = #employee{emp_no= 104732,
        name = klacke,
        salary = 7,
        sex = male,
        phone = 99586,
        room_no = {221, 015},
        dept = 'B/SFR',
        projects = [erlang, mnesia, otp],
        manager = 114872}.
    </pre></div>
    <p>This model only has three different tables, and the employee
      records contain references to other records. We have the following
      references in the record. 
      </p>
    <ul>
      <li>
<span class="code">'B/SFR'</span> refers to a  <span class="code">dept</span> record.
      </li>
      <li>
<span class="code">[erlang, mnesia, otp]</span>. This is a list of three
       direct references to three different <span class="code">project</span> records. 
      </li>
      <li>
<span class="code">114872</span>. This refers to another employee record.
      </li>
    </ul>
    <p>We could also use the Mnesia record identifiers (<span class="code">{Tab, Key}</span>)
      as references. In this case, the <span class="code">dept</span> attribute would be
      set to the value <span class="code">{dept, 'B/SFR'}</span> instead of
      <span class="code">'B/SFR'</span>. 
      </p>
    <p>With this data model, some operations execute considerably
      faster than they do with the normalized data model in our
      Company database. On the other hand, some other operations
      become much more complicated. In particular, it becomes more
      difficult to ensure that records do not contain dangling
      pointers to other non-existent, or deleted, records. 
      </p>
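    <p>For instance, with the non-normalized <span class="code">employee</span> record above,
      a check for a dangling <span class="code">manager</span> reference might look like the
      following sketch (the function is our own, not part of any API):
      </p>
<div class="example"><pre>

%% Sketch: verify that the manager referenced by an employee record
%% actually exists.
check_manager(EmpNo) -&gt;
    F = fun() -&gt;
                case mnesia:read({employee, EmpNo}) of
                    [#employee{manager = undefined}] -&gt;
                        undefined;
                    [#employee{manager = MgrNo}] -&gt;
                        case mnesia:read({employee, MgrNo}) of
                            [_Mgr] -&gt; ok;
                            []     -&gt; {dangling, MgrNo}
                        end;
                    [] -&gt;
                        {no_such_employee, EmpNo}
                end
        end,
    mnesia:transaction(F).</pre></div>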
    <p>The following code exemplifies a search with a non-normalized
      data model. To find all employees at department
      <span class="code">Dep</span> with a salary higher than <span class="code">Salary</span>, use the following code: 
      </p>
<div class="example"><pre>

get_emps(Salary, Dep) -&gt;
    Q = qlc:q( 
          [E || E &lt;- mnesia:table(employee),
                E#employee.salary &gt; Salary,
                E#employee.dept == Dep]
	 ),
    F = fun() -&gt; qlc:e(Q) end,
    transaction(F).</pre></div>    <p>This code is not only easier to write and to understand, but it
      also executes much faster.
      </p>
    <p>It is easy to show examples of code which executes faster with
      a non-normalized data model than with a normalized
      model. The main reason for this is that fewer tables are
      required, so data which would otherwise have to be combined from
      several tables in join operations is already collected in a
      single record. In the above example, the
      <span class="code">get_emps/2</span> function was transformed from a join operation
      into a simple query which consists of a selection and a projection
      on one single table.
      </p>
  
</div>
<div class="footer">
<hr>
<p>Copyright © 1997-2012 Ericsson AB. All Rights Reserved.</p>
</div>
</div>
</div></body>
</html>