<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
 <head profile="http://internetalchemy.org/2003/02/profile">
  <link rel="foaf" type="application/rdf+xml" title="FOAF" href="http://www.openlinksw.com/dataspace/uda/about.rdf" />
  <link rel="schema.dc" href="http://purl.org/dc/elements/1.1/" />
  <meta name="dc.title" content="6. Administration" />
  <meta name="dc.subject" content="6. Administration" />
  <meta name="dc.creator" content="OpenLink Software Documentation Team ;&#10;" />
  <meta name="dc.copyright" content="OpenLink Software, 1999 - 2009" />
  <link rel="top" href="index.html" title="OpenLink Virtuoso Universal Server: Documentation" />
  <link rel="search" href="/doc/adv_search.vspx" title="Search OpenLink Virtuoso Universal Server: Documentation" />
  <link rel="parent" href="server.html" title="Chapter Contents" />
  <link rel="prev" href="htmlconductorbar.html" title="HTML based Administration Console (Conductor) Guide" />
  <link rel="next" href="contents.html" title="Contents" />
  <link rel="shortcut icon" href="../images/misc/favicon.ico" type="image/x-icon" />
  <link rel="stylesheet" type="text/css" href="doc.css" />
  <link rel="stylesheet" type="text/css" href="/doc/translation.css" />
  <title>6. Administration</title>
  <meta http-equiv="Content-Type" content="text/xhtml; charset=UTF-8" />
  <meta name="author" content="OpenLink Software Documentation Team ;&#10;" />
  <meta name="copyright" content="OpenLink Software, 1999 - 2009" />
  <meta name="keywords" content="" />
  <meta name="GENERATOR" content="OpenLink XSLT Team" />
 </head>
 <body>
  <div id="header">
    <a name="fault" />
    <img src="../images/misc/logo.jpg" alt="" />
    <h1>6. Administration</h1>
  </div>
  <div id="navbartop">
   <div>
      <a class="link" href="server.html">Chapter Contents</a> | <a class="link" href="htmlconductorbar.html" title="HTML based Administration Console (Conductor) Guide">Prev</a> | <a class="link" href="accessinterfaces.html" title="Data Access Interfaces">Next</a>
   </div>
  </div>
  <div id="currenttoc">
   <form method="post" action="/doc/adv_search.vspx">
    <div class="search">Keyword Search: <br />
        <input type="text" name="q" /> <input type="submit" name="go" value="Go" />
    </div>
   </form>
   <div>
      <a href="http://www.openlinksw.com/">www.openlinksw.com</a>
   </div>
   <div>
      <a href="http://docs.openlinksw.com/">docs.openlinksw.com</a>
   </div>
    <br />
   <div>
      <a href="index.html">Book Home</a>
   </div>
    <br />
   <div>
      <a href="contents.html">Contents</a>
   </div>
   <div>
      <a href="preface.html">Preface</a>
   </div>
    <br />
   <div class="selected">
      <a href="server.html">Administration</a>
   </div>
    <br />
   <div>
      <a href="databaseadmsrv.html">Database Server Administration</a>
   </div>
   <div>
      <a href="htmlconductorbar.html">HTML based Administration Console (Conductor) Guide</a>
   </div>
   <div class="selected">
      <a href="fault.html">Virtuoso Cluster Fault Tolerance</a>
    <div>
        <a href="#faultfaulttolerinto" title="Introduction">Introduction</a>
        <a href="#faultfaulttolersampleconfig" title="Sample Configuration">Sample Configuration</a>
        <a href="#faultfaulttolertransactions" title="Transactions">Transactions</a>
        <a href="#faultfaulttolerdivid" title="Dividing Virtuoso Hosts Over Physical Machines">Dividing Virtuoso Hosts Over Physical Machines</a>
        <a href="#faultfaulttolermng" title="Managing Availability">Managing Availability</a>
        <a href="#faultfaulttoleroptm" title="Optimizing Schema for Fault Tolerance">Optimizing Schema for Fault Tolerance</a>
        <a href="#faultfaulttolerinterprt" title="Interpreting Status Messages">Interpreting Status Messages</a>
        <a href="#faultfaulttoleradmapi" title="Administration API">Administration API</a>
        <a href="#faultfaulttolerrdfspecf" title="RDF Specifics">RDF Specifics</a>
        <a href="#faultfaulttolerpragram" title="Fault Tolerance Programming">Fault Tolerance Programming</a>
    </div>
   </div>
    <br />
  </div>
  <div id="text">
  <a name="fault" />
    <h2>6.3. Virtuoso Cluster Fault Tolerance</h2>
  This chapter discusses fault tolerance and how to configure it. The following aspects are covered:

  <ul>
    <li>Setting up a fault tolerant logical cluster inside a physical cluster. Creating tables and indices for fault tolerance.</li>
    <li>Interpreting status and error messages and managing failures and recovery.</li>
    <li>Optimizing a schema for fault tolerance:  For read-intensive workloads, the work can be profitably split among many copies of the same partition.</li>
    <li>RDF specifics relating to fault tolerance.</li>
    <li>Splitting a cluster so that one copy of the partitions does bulk load while another serves online queries.</li>
  </ul>

    
      <a name="faultfaulttolerinto" />
    <h3>6.3.1. Introduction</h3>

      <p>Virtuoso Cluster optionally supports fault tolerance by storing
partitions in more than one copy. This is a cluster-only
feature, providing for transparent fail-over with fully transactional
semantics. This feature is in no way related to the other forms of
replication discussed in the Virtuoso documentation. This feature can
be used for load balancing of read-intensive workloads and for fault
tolerance of arbitrary workloads in a tightly coupled cluster. This
feature is not suited for synchronizing geographically distributed
copies.</p>

      <p>Fault tolerance is enabled at the level of a logical cluster. Making the logical cluster __ALL
fault tolerant has the effect of making all normally non-fault-tolerant database objects
fault tolerant.</p>
    <br />
    
      <a name="faultfaulttolersampleconfig" />
    <h3>6.3.2. Sample Configuration</h3>
      <p>We will take a minimal example of a fault tolerant setup with 4 server processes, grouped in two groups of
two mutually mirroring servers. The word host here refers to a single server process. How these are distributed
over physical hardware is a separate question. Each host (i.e. server process) has exclusive control over its database
files. Two processes may not share files.</p>

<div>
      <pre class="programlisting">
create cluster DUP default group (&quot;Host1&quot;, &quot;Host2&quot;), group (&quot;Host3&quot;, &quot;Host4&quot;);
</pre>
    </div>

      <p>Each group clause in the statement defines a set of mutually
replicating, interchangeable processes. The cluster is operational as
long as at least one process for each group is available. If all the
processes in one group are down, the tables created in the cluster
will not be available in their entirety. Even if some fragment of a
table were unavailable, the remaining fragments are still available
for transactions that concern only them.</p>

      <p>For all tables or indices created in a cluster with fault tolerance,
partitioning determines which of the groups
listed in the create cluster statement gets each individual entry.
After this, all the hosts that make up the group are guaranteed to
hold a copy of said entry at the commit of each transaction.</p>

      <p>Regardless of the definition of logical clusters, there are global
functions at the level of the physical cluster which need to be
replicated for fault tolerance. These functions include resolving
distributed deadlocks and allocating sequence ranges. See the
discussion of sequences in the cluster programming guide for more on
this. These global functions are handled by a single process called
the master. To keep a standby master that is synchronously kept in
sync with the first master, one can define multiple master processes,
as follows:</p>

<div>
      <pre class="programlisting">
Master = Host1
Master2 = Host2
</pre>
    </div>

      <p>These lines in the cluster.ini files of the servers constituting the cluster
mean that if Host1 is available, it will perform the functions of the master
and if it is not available, these functions go to Host2. If both are available,
then Host1 does the work and synchronously updates Host2 before returning the
results to the requesting host.</p>

      <p>To create a table or index in a specific logical cluster, one uses the cluster option in alter
index or create index. For example:
</p>

<div>
      <pre class="programlisting">
create table T1 (row_no int primary key, string1 varchar);
alter index  t1 on t1 partition cluster DUP (row_no int (0hexffff00));
create index string1 on t1 partition cluster DUP (string1 varchar (5));
</pre>
    </div>

      <p>These statements define that t1 will be kept in duplicate copies spread as declared for logical
cluster DUP. Partitioning can be altered only when the table concerned is empty.
To make an existing non-replicated table into a replicated one, use alter table rename, as follows:
</p>

<div>
      <pre class="programlisting">
create table T1 (row_no int primary key, string1 varchar);
alter index  t1 on t1 partition (row_no int (0hexffff00));
create index string1 on t1 partition  (string1 varchar (5));
</pre>
    </div>

      <p>The table is created in the default logical cluster, which by default is not replicated.
Now fill the table with a large amount of data.
Then, to move over to replicated storage with minimal effect on overall server availability,
follow the steps below:
</p>

<div>
      <pre class="programlisting">
drop index string1;
alter table t1 rename t1_old;

create table T1 (row_no int primary key, string1 varchar);
alter index  t1 on t1 partition cluster DUP (row_no int (0hexffff00));
create index string1 on t1 partition cluster DUP (string1 varchar (5));

log_enable (2);
</pre>
    </div>

      <p>This turns on row autocommit and disables logging for the session. This is necessary, as
otherwise the statements below will abort due to running out of rollback space if the table is large.
Disabling logging also saves some extra time.
</p>

<div>
      <pre class="programlisting">
insert into t1 select * from t1_old;
delete from t1_old;
drop table t1_old;
</pre>
    </div>

      <p>First deleting the contents and then dropping the table shortens the global atomic section that corresponds
to dropping the table. Otherwise all servers would also be unavailable for the time it
takes to delete the content, which might be long.
</p>

      <p>Finally:</p>

<div>
      <pre class="programlisting">
cl_exec (&#39;checkpoint&#39;);
</pre>
    </div>

      <p>This makes the operation permanent. All the above work would be lost in the event of
any failure since it was done without logging.</p>

<div>
      <pre class="programlisting">
log_enable (1);
</pre>
    </div>

      <p>This restores default transaction and logging behavior to the session.</p>

      <p>If T1 were very large, e.g. hundreds of gigabytes or more, then one could do a
checkpoint after each step so as not to keep a full copy of all
indices of t1 in the old and new versions simultaneously. Dropping
an index or deleting rows actually frees the space at the next
checkpoint. One could also write a procedure for copying the table in
parts and run many such copies in parallel for different parts of the
table. This would have obvious advantages for moving terabytes of
data.</p>
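      <p>A minimal sketch of such a range-copying procedure, assuming row_no ranges are a
reasonable way to slice t1_old (the procedure name and the ranges are illustrative, not part
of the product API):</p>

<div>
      <pre class="programlisting">
create procedure COPY_T1_RANGE (in lo int, in hi int)
{
  -- row autocommit, no logging, as in the transcript above
  log_enable (2);
  -- copy one slice of the old table into the replicated t1
  insert into t1 select * from t1_old where row_no &gt;= lo and row_no &lt; hi;
}
;
</pre>
    </div>

      <p>Several such calls, each covering a different range, could then be issued from separate
connections and run in parallel.</p>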
    <br />

    
      <a name="faultfaulttolertransactions" />
    <h3>6.3.3. Transactions</h3>

      <p>Replication of partitions is entirely synchronous and transactional,
with two phase commit. Replicated partitions show serializable
semantics insofar as the transactions dealing with replicated partitions
are in serializable isolation.</p>

      <p>These processes are transparent to the developer and administrator.
One can program as one would in the case of a non-clustered database.
The following is mostly relevant to optimization.</p>

      <p>A read with read committed isolation can be done in any copy of a partition. Read
committed reads take no locks, nor do they wait for locks. For
uncommitted updated rows, they show the pre-update state. Thus read
committed can be freely load balanced over all copies of a partition.</p>

      <p>A read with repeatable or serializable isolation will always be done
on the first copy of a partition. This is done so as to have proper
serialization of locking. If a row is locked for repeatable read and
another transaction wishes to lock it for update, the latter will
wait. Thus, transactions with locking resolve locking issues on the
first copy of each partition. The first copy is the copy of the
partition held by the first process mentioned in the partition&#39;s group clause of a
create cluster. If the process is not available, the role of the
first copy falls on the second, third and so on. If none of the
hosts mentioned in the group clause is available, the operation cannot
be performed. We will later define what we mean by available.</p>
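      <p>A minimal sketch of how the isolation level steers this, using the T1 table from the
earlier example (the procedure names are illustrative; see the transaction documentation
for the exact forms of SET ISOLATION):</p>

<div>
      <pre class="programlisting">
create procedure COUNT_T1_ANY_COPY ()
{
  -- read committed: the count may be served by any copy of each partition
  set isolation = &#39;committed&#39;;
  return (select count (*) from t1);
}
;

create procedure COUNT_T1_FIRST_COPY ()
{
  -- serializable: the read is always done on the first copy of each partition
  set isolation = &#39;serializable&#39;;
  return (select count (*) from t1);
}
;
</pre>
    </div>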

      <p>Updates to replicated partitions are performed concurrently on all the
copies of the partition. Updates are replicated as they are made, not
at time of commit, hence there is almost no added latency arising from
keeping replicated partitions: When one is ready to commit, all are.</p>

      <p>Distributed deadlock detection is handled by the master host. In the
event of its failure, the function is taken over by the first node to
be available in the sequence of master succession, as indicated above.</p>
    <br />
    
      <a name="faultfaulttolerdivid" />
    <h3>6.3.4. Dividing Virtuoso Hosts Over Physical Machines</h3>

      <p>This section describes how Virtuoso hosts should be placed on actual machines
for optimal balance and fault tolerance.</p>

      <p>All hosts (Virtuoso processes) constituting a cluster should be of the
same size. This means that most importantly they should have an equal
amount of memory allocated in virtuoso.ini.</p>

      <p>The situation is simplest if all physical machines are of the same
spec. This is not necessary though since a larger machine can host
more Virtuoso hosts (processes). For balanced resource use, the
machines should however have an equal amount of memory per core.</p>

      <p>Naturally, all hosts mentioned in the same group clause in a create
cluster statement must reside on different physical hardware. This is
also true of the master host list in the cluster.ini file. Putting
them on the same machine, or on different virtual machines on the same
machine, defeats the whole point of fault tolerance. If VM&#39;s are
automatically balanced on a data center network, one should keep the
above in mind. It is recommended to use real machines with real
network interfaces for the database.</p>

      <p>In each host group, the first host gets some more load than the other
hosts. This is because serializable reads must always go to the
first host in the group. Thus, the first hosts should be evenly spread
over the hardware, so do not put all the group firsts on the same
machine.</p>

      <p>Supposing 2 machines with 8 cores each, one would have hosts 1-16: hosts 1-8
on the first machine and hosts 9-16 on the second. For analytics, have one core per
host; for OLTP, one can have two or four cores per host.</p>

      <p>In this case, one would write the create cluster as follows:</p>

<div>
      <pre class="programlisting">
create cluster XX group (&quot;Host1&quot;, &quot;Host9&quot;), group (&quot;Host10&quot;, &quot;Host2&quot;), group (&quot;Host3&quot;, &quot;Host11&quot;), group (&quot;Host12&quot;, &quot;Host4&quot;),
  group (&quot;Host5&quot;, &quot;Host13&quot;), group (&quot;Host14&quot;, &quot;Host6&quot;), group (&quot;Host7&quot;, &quot;Host15&quot;), group (&quot;Host16&quot;, &quot;Host8&quot;);
</pre>
    </div>

      <p>Both machines have four firsts and four seconds. One could vary the
memory allocation per process so as to have maybe 20% more RAM per
host for a first of group than for the others in the group. This may
optimize the situation when all are online and will not excessively
penalize the fallback position.</p>

      <p>Varying the amount of memory depending on whether a host is first or
second makes sense only for read intensive workloads. Dividing firsts
and seconds evenly over the hardware makes sense for all workloads.</p>
    <br />
    
      <a name="faultfaulttolermng" />
    <h3>6.3.5. Managing Availability</h3>

      <p>This section concerns prerelease 3117 onwards and is not final.
Later versions have higher level management features but the primitives
discussed here continue to apply.</p>

      <p>In its normal state, a cluster has all the constituent processes up and all state is kept synchronous.</p>

      <p>When a host unexpectedly disconnects, the following takes place:</p>

<ul>
  <li>All transactions which have a write affecting this host become uncommittable. The application will see this immediately, as soon as it does anything within the transaction.</li>
  <li>All work proceeding at the request of the failed host on other hosts is aborted.</li>
  <li>All remaining network connections to the failed host are disconnected.</li>
</ul>

      <p>If a query was proceeding and it had state on the failing host, the failure will
be reported to the client of the query and the query will be aborted. A subsequent
query, if in read committed isolation,  will automatically avoid the failed host and
use surviving ones. Thus, the application sees a failure as a retryable abort
of a transaction or query.</p>

      <p>For update transactions, if all copies of a partition are not online,
the update cannot be made. In order to allow proceeding with updates
even after a failure, the failed host must be declared removed. This
means that if it were to come back on, it would not get any updates or
queries from the other hosts until it was explicitly admitted back into
the cluster.</p>

      <p>In version 6.00.3116, enabling updates when all hosts are not online
must be done manually. In other words, read only work will proceed
uninterrupted but updates will be prohibited if all hosts are not
online. Read balancing and re-enabling updates when all hosts have
rejoined the cluster is done automatically.</p>

      <p>In order to declare that a host has for the time being left the
cluster or has returned to the cluster after having left it, one uses
the function cl_host_enable ().</p>

      <p>For example, suppose a hardware failure that takes multiple processes
(hosts) offline. As long as for each there is at least one surviving
host of the same group (as per create cluster), read operations
proceed normally. But to re-enable writes while the failed
hardware is being replaced, the operator must inform the cluster that the
failed hosts are not expected to return immediately and that no
further reference to them should be made, specifically, the rest
should not attempt to keep them up to date.</p>

      <p>This is done with cl_host_enable, a SQL stored procedure. Log in
as dba on a surviving master host and do:</p>

<div>
      <pre class="programlisting">
SQL&gt; cl_host_enable (1, 0);
</pre>
    </div>

      <p>This will abort all the transactions pending at the time and declare host 1
to be off limits to the rest of the cluster. If Host1 was playing the role
of the master, the master role is automatically transferred to the next one in the succession.</p>

      <p>The succession of master hosts is declared in the cluster.ini with the settings of
Master, Master2, Master3 and so on. All cluster.ini files must agree.</p>
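      <p>For example, with the four hosts of the DUP setup, every host&#39;s cluster.ini could carry
the same succession; the particular order shown here is an illustrative assumption:</p>

<div>
      <pre class="programlisting">
Master  = Host1
Master2 = Host2
Master3 = Host3
</pre>
    </div>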

      <p>After this, even though Host1 is now acknowledged offline, updates can proceed.</p>

      <p>To rejoin a recovered host into a cluster, so as to again have an additional
copy of the formerly incompletely replicated partition, one can do</p>

<div>
      <pre class="programlisting">
SQL&gt; cl_host_enable (1, 1);
</pre>
    </div>

      <p>This states that Host1 is again part of the cluster. This statement
must be executed on an online master node of the cluster, thus not on
Host1 itself.</p>

      <p>Suppose that the database files of Host1 have been lost in the
failure and that Host1 and Host2 were in the same group. The restore
would go by taking the cluster offline, copying the database files of
Host2 to Host1 and starting the database again. Then the dba would
issue cl_host_enable (1, 1) and Host1 would again be available.</p>

      <p>To do this without downtime, one may do the following (a consolidated sketch follows the list):</p>

<ul>
  <li>Disable checkpoints on Host2: checkpoint_interval (0). Operations continue. Copy the database files of Host2 to Host1.</li>
  <li>Start Host1.</li>
  <li>Put Host2 and all hosts which occur in the same group as Host2 into read-only mode: cl_read_only (2, 1).</li>
  <li>Copy the transaction file of Host2 to Host1 and replay it with replay ().</li>
  <li>Rejoin Host1 to the cluster with cl_host_enable (1, 1).</li>
  <li>Re-enable updates with cl_read_only (2, 0).</li>
  <li>Re-enable checkpoints on Host2 with checkpoint_interval (), setting it to its previous value. See virtuoso.ini.</li>
</ul>
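      <p>A consolidated sketch of the sequence above, assuming Host1 (host number 1) lost its
files and Host2 (host number 2) is its surviving group peer; the previous checkpoint
interval of 60 minutes is an illustrative assumption:</p>

<div>
      <pre class="programlisting">
-- On Host2: suspend checkpoints, then copy Host2&#39;s database files to Host1
-- at the operating system level and start Host1.
checkpoint_interval (0);

-- While Host1 catches up from Host2&#39;s copied transaction log (replay ()),
-- keep the group&#39;s partitions read only.
cl_read_only (2, 1);

-- Once Host1 is up to date, rejoin it, then re-enable updates and checkpoints.
cl_host_enable (1, 1);
cl_read_only (2, 0);
checkpoint_interval (60);   -- restore the previous value; 60 is illustrative
</pre>
    </div>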

      <p>Further versions perform these operations automatically. The above
procedure is error prone. Do not try it unless you understand exactly
why each step is made and what its effects are supposed to be.</p>
    <br />

    
      <a name="faultfaulttoleroptm" />
    <h3>6.3.6. Optimizing Schema for Fault Tolerance</h3>

      <p>Having the working set in memory is the single most important factor
of database performance. When storing partitions in duplicate, one in
principle also requires double the memory to keep adequate working set
during write operations.</p>

      <p>However, most web and data warehouse workloads are read-intensive. In
this situation, the reading load can be balanced over the replicated
copies. If this balancing were done at random or round robin, all
copies would eventually maintain the same working set. In other
words, 64G of RAM spread over two machines would behave like 32G. If
the data volume is larger than memory, it makes sense to have the
different replicas cache different parts of the partition they share.</p>

      <p>Consider, using the example of cluster DUP mentioned above:</p>

<div>
      <pre class="programlisting">
create table customer (c_id int primary key, c_name varchar, c_state varchar);
alter index customer on customer partition cluster DUP (c_id int (0hexffff0000));

create table orders (o_id int primary key, o_c_id  int, o_date datetime, o_value numeric);
alter index orders on orders partition cluster DUP (o_id int (0hexffff0000));
create index o_c_id on orders (o_c_id) partition cluster DUP (o_c_id (0hexffff0000));
</pre>
    </div>

      <p>This has the effect of saying that the 16 low bits of c_id or o_id do not
participate in the partition hash. The hash is made from bits 16-31.
Thus c_id 0-64K will be in one partition, 64K-128K in another,
128K-192K in a third and so on. These partitions are then spread by
hash over the host groups listed in the create cluster.</p>
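      <p>A worked illustration of the mask arithmetic, with arbitrarily chosen id values:</p>

<div>
      <pre class="programlisting">
-- with mask 0hexffff0000, only bits 16-31 of the id enter the partition hash,
-- so ids in the same 64K range always land in the same partition:
--   c_id =  70000  -&gt;  70000 / 65536 = 1  -&gt; second 64K range
--   c_id = 130000  -&gt; 130000 / 65536 = 1  -&gt; same partition as 70000
--   c_id = 200000  -&gt; 200000 / 65536 = 3  -&gt; a different partition
-- the low 16 bits (e.g. 70000 mod 65536 = 4464) stay free for balancing
-- read committed reads between the copies of that partition.
</pre>
    </div>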

      <p>Now, doing the join</p>

<div>
      <pre class="programlisting">
select sum (o_value) from customer, orders where c_state = &#39;MA&#39; and c_id = o_c_id;
</pre>
    </div>

      <p>will take o_c_id&#39;s 0-32K from the first copy of the first partition, id&#39;s 32K-64K from
the second copy of the first partition, o_c_id&#39;s 64K-96K from the first copy
of the second partition and so forth.</p>

      <p>The load is split by applying range partitioning on the low bits of id&#39;s,
so that a system with 64G split over two replicas behaves like 64G RAM
for read committed reading but as 32G of RAM for writing. This is
enabled by leaving low bits of id&#39;s outside of the partition hash by
specifying a mask, as shown above.</p>
    <br />

    
      <a name="faultfaulttolerinterprt" />
    <h3>6.3.7. Interpreting Status Messages</h3>

      <p>There are special error codes and status reports dealing with cluster failures.</p>

      <p>The status function with an argument of &#39;cluster_d&#39; shows a host by host report of the cluster:</p>

<div>
      <pre class="programlisting">
SQL&gt; status (&#39;cluster_d&#39;);
</pre>
    </div>

      <p>If all is normal, the message is as described in the cluster
administration section. If some hosts are down, meaning that they do
not accept network connections at the cluster port, these are first
listed as being down. Then follows the summary status line and a
status line for all the hosts that can be connected to.</p>

      <p>A host being contactable over the cluster protocol does not mean that it
is online or in sync with the rest.</p>

      <p>If a physical cluster has no logical clusters that are in duplicate,
there is no redundancy, except for the built-in redundancy of the schema.</p>

      <p>If a host is not in the online state, an extra line in the cluster status
report describes the state in more detail. The state can be one of:</p>

<ul>
  <li>roll forward - The host is recovering from log. The count of transactions replayed to date is shown after this.</li>
  <li>removed - The host is considered down and no attempt is made to contact it until it explicitly rejoins the cluster. This is controlled with the cl_host_enable function.</li>
</ul>

      <p>It is possible that a host considers itself to be in one state while the
host showing the report thinks that it is in some other state.
If this is so, the status report mentions it.</p>

      <p>Applications see cluster failures through the following SQL states (a retry sketch follows the list):</p>

<ul>
  <li>08C01 - A host cannot be contacted, or can be contacted but is not in the online state.</li>
  <li>08C02 - An operation that previously had a connection to a host finds that it no longer has the connection.</li>
  <li>08C03 - A master only operation was tried on a non-master. Indicates possibility of divergent understanding of master succession. This is expected to reset itself.</li>
  <li>08C04 - A write was attempted on a partition that is flagged read only, as per cl_read_only.</li>
  <li>08C05 - A request was refused because the host serving the request thinks the requesting host is not admitted to cluster as per cl_host_enable, i.e. was removed and not reintroduced.</li>
  <li>08C06 - A cluster operation was not performed because the host thinks it is not joined to the cluster, either because it has not finished roll forward or because it is marked removed by cl_host_enable.</li>
</ul>
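      <p>Since such aborts are meant to be retryable, application code typically catches them and
retries. A minimal Virtuoso/PL sketch of the pattern, where DO_WORK () stands for a
hypothetical application transaction:</p>

<div>
      <pre class="programlisting">
create procedure RETRY_WORK ()
{
  declare retries int;
  retries := 0;
 retry:
  {
    -- 40001 is how an aborted transaction (deadlock, cluster failure, global
    -- atomic section) is reported; similar handlers could cover the 08Cxx states
    declare exit handler for sqlstate &#39;40001&#39;
      {
        rollback work;
        retries := retries + 1;
        if (retries &gt; 5)
          resignal;
        goto retry;
      };
    DO_WORK ();    -- hypothetical application transaction
    commit work;
  }
}
;
</pre>
    </div>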
    <br />

    
      <a name="faultfaulttoleradmapi" />
    <h3>6.3.8. Administration API</h3>

      <p>This section documents the SQL procedures used for managing failures and setting hosts on and off line.
These are stored procedures for DBA group users only, all under the DB.DBA qualifier/owner.
The dba will typically not call these directly. These are intended for use by management scripts and internal functions.
</p>

<div>
      <pre class="programlisting">
cl_host_enable (in host_no int, in flag int)
</pre>
    </div>

      <p>The host_no is the number as in the cluster.ini Host&lt;nn&gt; entries. The
flag is 1 for joined and 0 for removed.</p>

      <p>This controls whether a host is excluded from operations. Only a previously
excluded host can be rejoined to the cluster with this function. This is not
for adding new hosts. A host will be excluded from operations if it is long-term
unavailable, e.g. running on failed hardware. If the unavailability
is only for the time of a restart, removing the host is generally not practical.</p>

      <p>If a host is rejoined to its cluster, then the caller of this function
asserts that the joining host is up to date. If it were not up to
date one could get discrepancy between copies of partitions, which is
a loss of integrity and can be hard to detect. Being up to date
means, for all objects of all replicated logical clusters where this
host participates, having the exact same logical content in the host&#39;s
(i.e. server process&#39;) database files as in the databases of the hosts
which are in the same group as the rejoining host.</p>

      <p>This function must be called on a master host that is itself in the
online state. The setting is recorded on all master hosts. All
non-master hosts update their copy of this setting from the first
available master in the master succession. There is always at least
one master node that is in the online state. If they all are offline,
then the cluster in general is unavailable and the last one cannot be
removed from online state with cl_host_enable.</p>

      <p>This function aborts all pending transactions, so that the whole
cluster has no uncommitted state. This puts all the hosts that can be
reached into an atomic state where they only accept operations from
the master who initiated the atomic state. If the atomic state cannot
be obtained within a timeout, the operation fails and can be retried.
This is possible if two hosts attempt to get an atomic state and
deadlock or if rollback of pending state takes longer than the
timeout. Once in atomic state, all masters record the change in
cluster join status and all non-masters get a notification of the
change. Once all these are acknowledged and logged, the atomic
section ends.</p>

      <p>An application whose transaction was aborted in this manner will see
this as a deadlock, with the 40001 SQL state and a message mentioning
global atomic state.</p>

<div>
      <pre class="programlisting">
cl_read_only (in host_no int, in flag int)
</pre>
    </div>

      <p>This sets the partitions of which host host_no has a copy into read only state.
A flag of 1 means read only, 0 means read write. This does not abort transactions
but will prevent any new updates touching the partition. Transactions with
existing uncommitted state in the partition can commit. To abort all transactions
first, call __atomic (1) before this and __atomic (0) afterwards to finish the atomic state.
This is a volatile state and does not survive server restart. This is intended for
use in bringing copies of partitions up to date, which is a process that would have
to be retried anyway if interrupted by failure.</p>
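      <p>A minimal sketch of that sequence, flagging the partitions of which host 2 has a copy
(the host number is illustrative):</p>

<div>
      <pre class="programlisting">
__atomic (1);          -- abort pending transactions, enter global atomic mode
cl_read_only (2, 1);   -- partitions with a copy on host 2 become read only
__atomic (0);          -- leave atomic mode; reads and commits resume, while new
                       -- writes touching these partitions fail with 08C04
</pre>
    </div>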

<div>
      <pre class="programlisting">
__atomic (in flag int)
</pre>
    </div>

      <p>This places the cluster in global atomic mode. A flag of 1 starts
this and a flag of 0 finishes this. Row autocommit is also implicitly
enabled. During such time, no host of the cluster accepts connection
through web or SQL ports and only serves requests made by the
transaction which started the atomic section with __atomic (1). When
an atomic section starts, all transactions are aborted and are
guaranteed to all be rolled back upon successful completion of
__atomic (1). New transactions will also not start until __atomic (0)
is called. Starting an atomic section may fail by timeout if rollback
takes too long or if two competing __atomic(1) requests deadlock with
each other.</p>

<div>
      <pre class="programlisting">
cl_control (in host_no int, in op varchar, in new_value any := null)
</pre>
    </div>

      <p>This returns the value associated with cluster-related settings. If a
new value is specified, the old value is returned and the new value is
set.</p>

      <p>The op specifies the setting. It is one of:</p>

<ul>
  <li>cl_master_list - succession of master hosts as an array of host numbers. Read only.</li>
  <li>cl_host_list - Array of all host numbers. Read only.</li>
  <li>ch_group_list - List of host numbers of hosts which occur in the same group
with host no. These are the hosts which share a partition with host no according
to at least one create cluster statement. Read only.</li>
  <li>cl_host_map - String with one character per host number up to the maximum existing
host number. The character represents the status as known by the host on which this
function is called. The new_value argument can be specified for changing this setting.</li>
  <li>ch_status - Returns/sets the status of host no as known by this host.</li>
  <li>cl_master_host - Returns/sets the host number used by this host for master only requests.</li>
</ul>

      <p>The status of a host is one of the following codes (an example call follows the list):</p>

<ul>
  <li>0 - online</li>
  <li>1 - removed</li>
  <li>2 - temporarily offline</li>
  <li>4 - pending roll forward</li>
  <li>7 - host number is not used</li>
</ul>
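      <p>For example, to read the status that the current host has recorded for host 2 (the host
number is an illustrative choice; the result is one of the codes above):</p>

<div>
      <pre class="programlisting">
select cl_control (2, &#39;ch_status&#39;);
</pre>
    </div>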
    <br />

    
      <a name="faultfaulttolerrdfspecf" />
    <h3>6.3.9. RDF Specifics</h3>

      <p>To set up fault tolerant RDF storage, one can use the template provided in the
clrdfdup.sql file in the distribution.</p>

      <p>The below applies to testing with prerelease 06.00.3116. The fault tolerance function in
6.00.3116 is provided exclusively as a demonstration of concept; it is not intended for
production use and has not been tested in production. The below steps will demonstrate the
basic capability but one should not try things not explicitly mentioned.</p>

      <p>The test starts with an empty database. Edit the create cluster statement in the clrdfdup.sql
file to correspond to the setup at hand. Then load the file:</p>

<div>
      <pre class="programlisting">
SQL&gt; load clrdfdup.sql;
SQL&gt; cl_exec (&#39;checkpoint&#39;);
</pre>
    </div>

      <p>Now load data. The load will be non-transactional but will now keep two copies of each partition.
Use log_enable (2) and ttlp or rdf_load_db..rdfxml as described in the relevant documentation.</p>
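      <p>A minimal sketch of such a non-transactional load; the file name and graph IRI are
illustrative:</p>

<div>
      <pre class="programlisting">
log_enable (2);   -- row autocommit, no logging
ttlp (file_to_string_output (&#39;data.ttl&#39;), &#39;&#39;, &#39;http://example.com/graph&#39;);
</pre>
    </div>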

      <p>After some data is loaded, do another checkpoint.</p>

<div>
      <pre class="programlisting">
SQL&gt; cl_exec (&#39;checkpoint&#39;);
</pre>
    </div>

      <p>You may query the data. Now shut down one of the servers. Querying
should remain possible as long as one host in each group is online.
Start the previously stopped host and stop another from the same
group. Querying remains possible.</p>

      <p>Do not try non-transactional loading when all hosts are not online. This produces incorrect results.
Do not stop hosts during non-transactional loading. This also produces inconsistent results.</p>

      <p>Transactional loading is safe with respect to stopping servers during the load, but
the load will stop with an error. The removed host must be either brought back
online or removed with cl_host_enable for the loading to proceed.</p>

      <p>Do not test transactional RDF loading with 6.00.3116. This version is tested for duplicate
partitions with SQL data but incompletely with RDF.</p>
    <br />

 
   <a name="faultfaulttolerpragram" />
    <h3>6.3.10. Fault Tolerance Programming</h3>
<p>This section describes aspects of fault tolerance in writing cluster
aware SQL applications. Specifically, partitioned functions, which
are a way of explicitly dividing procedural execution among hosts of a
cluster, have issues and features that are specific to fault tolerance
and must be treated separately.</p>

<p>When using a daq or dpipe, one can specify how the partitioned function call is to be executed:</p>

<ol>
      <li>Read committed read - low bits not used for partition can be used for intra partition balancing, as described in the schema optimization section.</li>
      <li>The function is called on all replicas, as an update.</li>
      <li>The function is called on the first replica, like a read for update.</li>
      <li>The function is called on all but the first copy of the partition.</li>
    </ol>

<p>Combinations of 3 and 4 can be used if the function, for example, allocates sequence
numbers which must then be replicated over the remaining copies. Thus the function
that allocates the new sequence number is called in mode 3 and another function
that uses this number is called in mode 4.</p>

<p>For a daq_call call, these options are specified in the 5th argument, flags.</p>

<ul>
  <li>0 - read committed</li>
  <li>1 - write all</li>
  <li>2 - write first</li>
  <li>3 - write all but first</li>
</ul>

<p>For dpipes, this is stated in the dpipe_define call&#39;s flags argument. The values to be or&#39;ed over
the flags are:</p>

<ul>
  <li>0 - read committed</li>
  <li>1 - update all</li>
  <li>4 - update first copy</li>
  <li>8 - update all but first copy.</li>
</ul>

    <br />
  <table border="0" width="90%" id="navbarbottom">
    <tr>
        <td align="left" width="33%">
          <a href="htmlconductorbar.html" title="HTML based Administration Console (Conductor) Guide">Previous</a>
          <br />HTML based Administration Console (Conductor) Guide</td>
     <td align="center" width="34%">
          <a href="server.html">Chapter Contents</a>
     </td>
        <td align="right" width="33%">
          <a href="accessinterfaces.html" title="Data Access Interfaces">Next</a>
          <br />Contents of Data Access Interfaces</td>
    </tr>
    </table>
  </div>
  <div id="footer">
    <div>Copyright© 1999 - 2009 OpenLink Software All rights reserved.</div>
   <div id="validation">
    <a href="http://validator.w3.org/check/referer">
        <img src="http://www.w3.org/Icons/valid-xhtml10" alt="Valid XHTML 1.0!" height="31" width="88" />
    </a>
    <a href="http://jigsaw.w3.org/css-validator/">
        <img src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!" height="31" width="88" />
    </a>
   </div>
  </div>
 </body>
</html>