<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
 <head profile="http://internetalchemy.org/2003/02/profile">
  <link rel="foaf" type="application/rdf+xml" title="FOAF" href="http://www.openlinksw.com/dataspace/uda/about.rdf" />
  <link rel="schema.dc" href="http://purl.org/dc/elements/1.1/" />
  <meta name="dc.title" content="5. Conceptual Overview" />
  <meta name="dc.subject" content="5. Conceptual Overview" />
  <meta name="dc.creator" content="OpenLink Software Documentation Team ;&#10;" />
  <meta name="dc.copyright" content="OpenLink Software, 1999 - 2009" />
  <link rel="top" href="index.html" title="OpenLink Virtuoso Universal Server: Documentation" />
  <link rel="search" href="/doc/adv_search.vspx" title="Search OpenLink Virtuoso Universal Server: Documentation" />
  <link rel="parent" href="concepts.html" title="Chapter Contents" />
  <link rel="prev" href="concepts.html" title="Conceptual Overview" />
  <link rel="next" href="thevdbrel.html" title="Virtual Database (VDB) Engine" />
  <link rel="shortcut icon" href="../images/misc/favicon.ico" type="image/x-icon" />
  <link rel="stylesheet" type="text/css" href="doc.css" />
  <link rel="stylesheet" type="text/css" href="/doc/translation.css" />
  <title>5. Conceptual Overview</title>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
  <meta name="author" content="OpenLink Software Documentation Team ;&#10;" />
  <meta name="copyright" content="OpenLink Software, 1999 - 2009" />
  <meta name="keywords" content="" />
  <meta name="GENERATOR" content="OpenLink XSLT Team" />
 </head>
 <body>
  <div id="header">
    <a name="coredbengine" />
    <img src="../images/misc/logo.jpg" alt="" />
    <h1>5. Conceptual Overview</h1>
  </div>
  <div id="navbartop">
   <div>
      <a class="link" href="concepts.html">Chapter Contents</a> | <a class="link" href="concepts.html" title="Conceptual Overview">Prev</a> | <a class="link" href="thevdbrel.html" title="Virtual Database (VDB) Engine">Next</a>
   </div>
  </div>
  <div id="currenttoc">
   <form method="post" action="/doc/adv_search.vspx">
    <div class="search">Keyword Search: <br />
        <input type="text" name="q" /> <input type="submit" name="go" value="Go" />
    </div>
   </form>
   <div>
      <a href="http://www.openlinksw.com/">www.openlinksw.com</a>
   </div>
   <div>
      <a href="http://docs.openlinksw.com/">docs.openlinksw.com</a>
   </div>
    <br />
   <div>
      <a href="index.html">Book Home</a>
   </div>
    <br />
   <div>
      <a href="contents.html">Contents</a>
   </div>
   <div>
      <a href="preface.html">Preface</a>
   </div>
    <br />
   <div class="selected">
      <a href="concepts.html">Conceptual Overview</a>
   </div>
    <br />
   <div class="selected">
      <a href="coredbengine.html">Core Database Engine</a>
    <div>
        <a href="#LogicalDataModel" title="Logical Data Model">Logical Data Model</a>
        <a href="#DataTypes" title="Data Types">Data Types</a>
        <a href="#colstore" title="Virtuoso Column Store">Virtuoso Column Store</a>
        <a href="#explvectprcode" title="Explicit Vectoring of Procedural Code">Explicit Vectoring of Procedural Code</a>
        <a href="#Locking" title="Locking">Locking</a>
        <a href="#internationalization" title="Internationalization &amp; Unicode">Internationalization &amp; Unicode</a>
        <a href="#dbccollationsdef" title="Creating A Collation">Creating A Collation</a>
    </div>
   </div>
   <div>
      <a href="thevdbrel.html">Virtual Database (VDB) Engine</a>
   </div>
   <div>
      <a href="webinternetproto.html">Web &amp; Internet Protocol Support</a>
   </div>
   <div>
      <a href="websrvcsproto.html">Web Services Protocol Support</a>
   </div>
   <div>
      <a href="conceptarchitecture.html">Architecture</a>
   </div>
    <br />
  </div>
  <div id="text">
    <a name="coredbengine" />
    <h2>5.1. Core Database Engine</h2>
	
	
		<a name="LogicalDataModel" />
    <h3>5.1.1. Logical Data Model</h3>
		<p>Virtuoso provides an extended object-relational model that offers all
the flexibility of relational access together with inheritance, run-time data
typing, late binding, and identity-based access.
</p>
		
			<a name="LDMTable" />
    <h4>5.1.1.1. Table</h4>
			<p>
A table is a uniquely named entity that has the following characteristics:
</p>
			<ul>
				<li>
					<p>Zero or more columns</p>
				</li>
				<li>
					<p>One primary key</p>
				</li>
				<li>
					<p>Zero or more keys (indices)</p>
				</li>
				<li>
					<p>An optional super table from which this inherits properties</p>
				</li>
				<li>
					<p>An optional object ID key, which may or may not be the primary key</p>
				</li>
				<li>
					<p>Various SQL table constraints, e.g. CHECK constraints</p>
				</li>
			</ul>
			<p>A table will then have zero or more rows. The relationship of a table
and its rows can be thought of as a class-instance relationship.
</p>
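			<p>
For illustration, a table with several of these characteristics might be declared
as follows; the table and column names here are hypothetical:
</p>
			<div>
      <pre class="programlisting">
create table person (
    p_id integer primary key,
    p_name varchar (50),
    check (p_id &gt; 0));
</pre>
    </div>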
		<br />
		
			<a name="LDMCols" />
    <h4>5.1.1.2. Column</h4>
			<p>
A column is always defined in one table and has a name that is unique
within that table. A column may appear in more than one table as a
result of inheritance but always has one place of definition, i.e.
one database-wide &#39;identity&#39;.
</p>
			<p>
A column has the following characteristics:
</p>
			<ul>
				<li>
					<p>Table</p>
				</li>
				<li>
					<p>Name inside the table</p>
				</li>
				<li>
					<p>Database-wide ID</p>
				</li>
				<li>
					<p>Data type</p>
				</li>
				<li>
					<p>Various SQL constraints, e.g. DEFAULT, CHECK etc.</p>
				</li>
			</ul>
		<br />
		
			<a name="LDMKey" />
    <h4>5.1.1.3. Key</h4>
			<p>
A key or index is the means by which tables manifest themselves in the
physical database. A key is always defined with respect to one table
but may occur in several as a result of inheritance. Keys have unique
names inside the table. A key has the following characteristics:
</p>
			<ul>
				<li>
					<p>A database wide key ID</p>
				</li>
				<li>
					<p>One or more &#39;significant&#39; key parts, which are columns of the defining table or super tables</p>
				</li>
				<li>
					<p>Zero or more &#39;trailing&#39; key parts, columns of the defining table or supertables.</p>
				</li>
				<li>
					<p>Whether the key is primary</p>
				</li>
				<li>
					<p>Whether the key is unique</p>
				</li>
				<li>
					<p>How the key is clustered</p>
				</li>
				<li>
					<p>Whether the key is an object ID key</p>
				</li>
			</ul>
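			<p>
For example, a unique secondary key (index) can be declared on an existing
table with a statement like the following sketch; the names are hypothetical:
</p>
			<div>
      <pre class="programlisting">
create unique index person_name on person (p_name);
</pre>
    </div>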
		<br />
		
			<a name="LDMSubtable" />
    <h4>5.1.1.4. Subtable</h4>
			<p>
A subtable is a table that inherits all columns, the primary key and
all other keys from another table, called the super table.
</p>
			<p>
A subtable can define its own columns and keys which add themselves to
those of the super table. No primary key can be redefined, though.
</p>
			<p>
The inheritance relationship between tables is manifested by a key-subkey
relationship between the tables&#39; primary and other keys.
</p>
			<p>
   A table has at most one supertable.
</p>
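			<p>
A subtable is declared with the UNDER clause; a hypothetical sketch:
</p>
			<div>
      <pre class="programlisting">
create table person (p_id integer primary key, p_name varchar);

-- employee inherits p_id (still the primary key) and p_name
create table employee (e_salary numeric)
    under person;
</pre>
    </div>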
		<br />
		
			<a name="LDMObjectID" />
    <h4>5.1.1.5. Object ID</h4>
			<p>
A table does not necessarily declare a primary key.  Even so, the table must
have a primary key - in this case a synthetic record ID which is defined
as primary key. The record ID  is an autoincrementing column that is normally
invisible but, if present, can be accessed by explicit reference. One should
not rely on this feature being available, though.
</p>
			<p>
Thus
</p>
			<div>
      <pre class="programlisting">
create table nokey (a integer);
</pre>
    </div>
			<p>
expands to
</p>
			<div>
      <pre class="programlisting">
create table nokey (a integer, _IDN integer identity, primary key (_IDN));
</pre>
    </div>
			<p>
The first unique index to be defined will become the primary key if the table
is empty at the time of definition.
</p>
			<p>
Thus
</p>
			<div>
      <pre class="programlisting">
create unique index a on nokey (a);
</pre>
    </div>
			<p>
will change the nokey table to be as if defined by
</p>
			<div>
      <pre class="programlisting">
create table nokey (a integer, primary key (a));
</pre>
    </div>
			<p>
Any primary key other than the synthetic _IDN is better than the default one.
Declaring an explicit primary key is therefore always advisable.
</p>
		<br />
	<br />
	
	
		<a name="DataTypes" />
    <h3>5.1.2. Data Types</h3>
		<p>Virtuoso supports most SQL 92 data types.</p>
		
			<a name="DTCharVChar" />
    <h4>5.1.2.1. CHARACTER &amp; VARCHAR</h4>
			<ul>
				<li>
					<p>CHARACTER</p>
				</li>
				<li>
					<p>VARCHAR</p>
				</li>
				<li>
					<p>VARCHAR &#39;(&#39; INTNUM &#39;)&#39;</p>
				</li>
				<li>
					<p>NVARCHAR</p>
				</li>
				<li>
					<p>NVARCHAR &#39;(&#39; INTNUM &#39;)&#39;</p>
				</li>
				<li>
					<p>CHARACTER &#39;(&#39; INTNUM &#39;)&#39;</p>
				</li>
			</ul>
			<p>The CHAR, CHARACTER and VARCHAR datatypes are implemented as a
single string type with dynamic length.  The precision that may be specified controls how
the column is described by <span class="computeroutput">SQLColumns()</span>,
<span class="computeroutput">SQLDescribeCol()</span> and so on.  If a precision is not
specified for a VARCHAR then the default precision will be 0, which means do not check.
If a precision is not specified for a CHARACTER then Virtuoso sets the precision to 1.  An
explicit precision of 0 can be specified to turn off length checking for values
stored in the column.  If a value other than string or NULL is assigned to
the column it is cast to a varchar (using CAST internally) and then stored into the
column.  If the value is not castable to a varchar then Virtuoso returns
an error.  Additionally if the column precision is greater than 0 and the value
string length is greater than the column precision Virtuoso will also return an error.
</p>
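			<p>
For example, with a hypothetical column of declared precision 5:
</p>
			<div>
      <pre class="programlisting">
create table t1 (c1 varchar (5));
insert into t1 values ('abc');     -- OK, length 3 is within the precision
insert into t1 values ('abcdef');  -- error, length 6 exceeds the precision
</pre>
    </div>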
<p>The length is stored separately.  The space required is 2 + length bytes.
</p>
<p>A varchar column may contain binary 0 bytes.</p>
<p>A string literal is delimited by single quotes.</p>
		<br />
		
			<a name="DTAny" />
    <h4>5.1.2.2. ANY</h4>
			<ul>
				<li>
					<p>ANY</p>
				</li>
			</ul>
			<p>The ANY datatype is implemented as a single binary string type
with dynamic length.  It is reported as a VARCHAR in
<span class="computeroutput">SQLColumns()</span>, <span class="computeroutput">SQLDescribeCol()</span>
and so on.
The precision returned by these columns is 24 but has no effect.
This type can contain arbitrary binary data, including zeros.
</p>
<p>The length is stored separately.  The space required is 2 + length bytes.
</p>
		<br />
		
			<a name="DTNumeric" />
    <h4>5.1.2.3. NUMERIC &amp; DECIMAL</h4>
			<ul>
				<li>
					<p>NUMERIC</p>
				</li>
				<li>
					<p>NUMERIC &#39;(&#39; INTNUM &#39;)&#39;</p>
				</li>
				<li>
					<p>NUMERIC &#39;(&#39; INTNUM &#39;,&#39; INTNUM &#39;)&#39;</p>
				</li>
				<li>
					<p>DECIMAL</p>
				</li>
				<li>
					<p>DECIMAL &#39;(&#39; INTNUM &#39;)&#39;</p>
				</li>
				<li>
					<p>DECIMAL &#39;(&#39; INTNUM &#39;,&#39; INTNUM &#39;)&#39;</p>
				</li>
			</ul>
			<p>The various forms of NUMERIC and DECIMAL refer to one variable-precision
floating point decimal data type that provides accurate arithmetic and decimal rounding.
The default maximum precision and scale are 40 and 20. The
precision is the number of decimal digits used in the computation. The scale is the maximum number of
decimal digits to the right of the decimal point. Internal calculations
are always precise but
numbers are truncated to the column&#39;s precision and scale when stored.  If a value being stored
has more digits to the left of the decimal point than allowed in the
column, Virtuoso signals an error.
If a number being stored has more digits to the right of the decimal point than allowed in a column the decimal part is rounded to the precision of the column.
</p>
			<p>
The space consumption of a number is <span class="computeroutput">3 + precision / 2</span> bytes.
The precision and scale of a column of this type are returned by functions
such as <span class="computeroutput">SQLColumns()</span> and <span class="computeroutput">SQLDescribeCol()</span>.
</p>
			<p>
A DECIMAL or NUMERIC with precision &lt;= 9 and scale = 0 is transformed to INTEGER.
</p>
<p>Literal numbers outside of the 32 bit signed integer range are of type 
decimal.  Any numeric literals with a decimal point are of type decimal.  
Literals with an exponent are of type double precision.</p>
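			<p>
A sketch of the storage behavior with a hypothetical column:
</p>
			<div>
      <pre class="programlisting">
create table prices (amount numeric (10, 2));
insert into prices values (123.456);      -- decimal part rounded to scale 2
insert into prices values (123456789.0);  -- error, too many digits left of the point
</pre>
    </div>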
		<br />
		
			<a name="DTInt" />
    <h4>5.1.2.4. INTEGER &amp; SMALLINT</h4>
			<ul>
				<li>
					<p>INT</p>
				</li>
				<li>
					<p>INTEGER</p>
				</li>
				<li>
					<p>SMALLINT</p>
				</li>
			</ul>
			<p>These types are represented as a 32-bit
signed binary integer, described as
having a precision of 9 and a scale of 0, although the range is +/- 2**31. Storage space is 2 bytes
for SMALLINT and 4 bytes otherwise.
</p>
			<p>
A column declared SMALLINT is described as SQL_SMALLINT. A column declared INTEGER or INT is
described as SQL_INTEGER.  
</p>
<p>Literals composed of an optional sign and digits are of the integer 
type if they fit in the 32 bit range.</p>
 		<br />
		
			<a name="DTFLOAT" />
    <h4>5.1.2.5. FLOAT &amp; DOUBLE</h4>
			<ul>
				<li>
					<p>FLOAT</p>
				</li>
				<li>
					<p>FLOAT &#39;(&#39; INTNUM &#39;)&#39;</p>
				</li>
				<li>
					<p>DOUBLE PRECISION</p>
				</li>
			</ul>
			<p>These types refer to the 64-bit IEEE floating-point number, the C <span class="computeroutput">double</span> type. This fixed-precision
binary floating-point number is described as having a precision of 15 and
a scale of 0.  This type is preferable to NUMERIC when
decimal rounding is not required, since it is precise enough for
most uses and more efficient than NUMERIC.  The storage requirement is 8 bytes.
</p>
</p>
<p>Any number literal with an exponent has the double type, e.g. 2e9.</p>
  <br />
		
			<a name="DTREAL" />
    <h4>5.1.2.6. REAL</h4>
			<ul>
				<li>
					<p>REAL</p>
				</li>
			</ul>
			<p>This type is the 32-bit IEEE floating point number corresponding to the C <span class="computeroutput">float</span> type.  The storage requirement is 5 bytes.
</p>
		<br />
		
			<a name="DTLONG" />
    <h4>5.1.2.7. LONG VARCHAR &amp; LONG VARBINARY</h4>
			<ul>
				<li>
					<p>LONG VARCHAR</p>
				</li>
				<li>
					<p>LONG VARBINARY</p>
				</li>
			</ul>
			<p>
These types implement a binary large object (BLOB) type. The length can be
up to 2**31 bytes (2GB).
If manipulated with the <span class="computeroutput">SQLGetData()</span> and <span class="computeroutput">SQLPutData()</span> ODBC functions a BLOB need not fit in the DBMS&#39;s or the client&#39;s memory.
The LONG VARCHAR and LONG VARBINARY types are distinct only because certain ODBC applications
gain from being able to distinguish long text from long binary. The types are described as SQL_LONGVARCHAR and SQL_LONGVARBINARY respectively, with a precision of 2GB.
</p>
			<p>
Several long columns may exist on a single row. A long column may not
be a key part in an index or primary key.
</p>
			<p>
Data in long columns is stored as a linked list of database pages.  Thus, a long column
that does not fit in-line on the containing row will require an integer number of 8K
database pages.  If a long column&#39;s value is short enough to fit within the row
containing it, the BLOB will be stored on the row and will not take more space than a VARCHAR of the
same length.  A long column fits on a row if the sum of the lengths of columns, including the long column, is
under 4070 bytes.
</p>
<p>ORDER BY, GROUP BY and DISTINCT may not reference long data types.  
Comparison of long data is not allowed unless first converted to the 
corresponding short type (varchar, nvarchar or varbinary).  This conversion 
is only possible if the value is under 10MB in size.  String functions accept 
long varchars and long nvarchars and convert them to varchar and nvarchar 
automatically.  There is no long literal type per se; the corresponding character 
or binary type is assignable to a long type.</p>
 		<br />
		
			<a name="DTVARBINARY" />
    <h4>5.1.2.8. VARBINARY</h4>
			<ul>
				<li>
					<p>VARBINARY</p>
				</li>
			</ul>
			<p>
This type is internally like VARCHAR but is distinct for compatibility
with ODBC applications. A VARBINARY column is described as SQL_BINARY to ODBC clients.  The
storage requirement is the same as for a corresponding VARCHAR column. VARBINARY and
VARCHAR data are never equal even if the content is the same, but they can be cast to
each other.  VARBINARY data sorts in the unsigned order of the bytes
comprising the data.
</p>
<p>A varbinary literal is introduced by 0x followed by a hexadecimal 
representation of the bytes, two characters per byte, e.g. 0x0123456789abcdef.</p>
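			<p>
For example (the table name is hypothetical):
</p>
			<div>
      <pre class="programlisting">
create table blobs (b varbinary);
insert into blobs values (0x0123456789abcdef);
select b from blobs order by b;  -- sorts in unsigned byte order
</pre>
    </div>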
<br />
		
			<a name="DTTIMESTAMP" />
    <h4>5.1.2.9. TIMESTAMP, DATETIME, DATE &amp; TIME</h4>
			<ul>
				<li>
					<p>TIMESTAMP</p>
				</li>
				<li>
					<p>DATETIME</p>
				</li>
				<li>
					<p>TIME</p>
				</li>
				<li>
					<p>DATE</p>
				</li>
			</ul>
			<p>
All the time- and date-related types are internally represented as a
single &#39;datetime&#39; type consisting of a Julian day, hour, minute, second, 6-digit fraction and timezone.
The range of the year is from 0 to over 9999. This type can accommodate all values of any SQL92 time-related type.
</p>
			<p>
Although the internal representation is the same, a column of a time-related type is described
as being of the appropriate ODBC type, i.e. SQL_TIMESTAMP for TIMESTAMP and DATETIME and SQL_DATE for DATE and SQL_TIME for TIME.
</p>
			<p>
A DATETIME is described as precision 19, a DATE as precision 10 and a TIME as precision 8.
</p>
			<p>
A column declared a TIMESTAMP is automatically set to the timestamp of the transaction that inserts or
updates any column of the table containing it. The timestamp of a transaction is guaranteed to be distinct from that of
any other transaction.  For compatibility reasons a TIMESTAMP column is described to ODBC clients as a binary
of 10 bytes. It is possible to use any date-related functions on TIMESTAMPs and to bind
a TIMESTAMP column to a DATE or DATETIME variable (SQL_C_TIMESTAMP type in ODBC).  Binding to a binary will also work but
the data will then be opaque.
</p>
			<p>
SQL92 provides for types with a timezone. Although the ODBC API does not expose the timezone, it is stored with
these types and can be retrieved with the <span class="computeroutput">timezone()</span> function. The timezone has a precision of minutes from UTC.
</p>
			<p>
The storage requirement for these types is 10 bytes.
</p>
<p>There is no date literal per se, but the ODBC shorthand for datetime 
literals can be used.  The datetime/timestamp literal is of the form 
{dt &#39;YYYY-MM-DD HH:MM:SS&#39;}.  The date literal is of the form 
{d &#39;YYYY-MM-DD&#39;}.  Dates and datetimes may be compared between themselves but 
not with other types without explicit casting.</p>
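			<p>
A hypothetical sketch combining these types and literal forms:
</p>
			<div>
      <pre class="programlisting">
create table events (
    ev_id integer primary key,
    ev_changed timestamp,   -- set automatically by inserts and updates
    ev_date date);

insert into events (ev_id, ev_date) values (1, {d '2009-06-15'});
</pre>
    </div>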
<br />
	
		<a name="twobyteunicode" />
    <h4>5.1.2.10. Unicode Support</h4>
		<p>
Virtuoso allows 30-bit Unicode data to be stored and retrieved from database fields.
The data are stored internally as UTF-8 encoded strings for storage space optimization.
Unicode fields intermix easily with other character data, as all SQL functions
support the wide-string case and convert to the wider character representation on demand.
The native width of the wide character type may differ between platforms.  Windows has a 16 bit
wide character, whereas some Unixes have a 32 bit wide character type. The native width applies 
to the Virtuoso NVARCHAR data type when used as SQL data. </p>
			<p>
There are 3 additional data types to enable storing of Unicode data:
</p>
			<ul>
				<li>
					<p>NCHAR</p>
				</li>
				<li>
					<p>NVARCHAR</p>
				</li>
				<li>
					<p>LONG NVARCHAR</p>
				</li>
			</ul>
			<p>
All the Unicode types are equivalent to their corresponding
&quot;narrow&quot; type - CHAR,
VARCHAR and LONG VARCHAR - except that instead of storing data as one byte they allow
Unicode characters. Their lengths are defined and returned in characters instead of bytes.
They collate according to the active wide character collation, if any.  By 
default this is the order of the Unicode serialization values.  These types 
can be used anywhere the narrow character types can be used, except in LIKE 
conditions.</p>
  <p>Unicode literals are introduced by n&#39; and closed with &#39; 
  (single quote).  See the Internationalization section for the interpretation of 
  wide literals, which may be either UTF-8 or encoded according to some character set.</p>
			<p>
When there is a need to convert a wide string to a narrow one or vice versa, a character set is
used.  A character set defines the wide-character code corresponding to each narrow character. For
example, a definition of the ISO-8859-5 &quot;narrow&quot; character set
describes the mapping of its non-ASCII character codes
to their Unicode equivalents.  Virtuoso relies on the fact that the ASCII character codes are represented
in Unicode by type-casting and in UTF-8 as one-byte tokens with the same value as in ASCII.
</p>
			<p>
When conversion is done on the server-side using cast or some of the SQL
built-in functions, the wide
characters are converted to narrow using a system-independent server-side character set. In the
absence of such a character set, Virtuoso uses the Latin1 character set to
project narrow character codes into the Unicode space
as equally valued wide-character codes.
</p>
			<p>
When conversion is done client-side - for example, when binding a VARCHAR to a
wide buffer - the default client&#39;s system character set is used.
</p>
			<p>
Wide-character literals have ANSI SQL92 syntax: N&#39;xxx&#39; (prefixing normal literals
with the letter N).  These strings process escapes with values large enough to represent all the
Unicode characters.
</p>
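			<p>
A hypothetical sketch using a national character type with a wide literal
(non-ASCII content would be bound as a parameter or written with escapes):
</p>
			<div>
      <pre class="programlisting">
create table labels (l_id integer primary key, l_text nvarchar (50));
insert into labels values (1, N'wide text');
</pre>
    </div>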
		<br />

    <a name="conceptsudt" />
    <h4>5.1.2.11. User Defined Types</h4>
    <p>Virtuoso supports user-defined data types that can be based 
    on classes in any hosted language, such as C#.  New types can be 
    further derived, producing subtypes.  User-defined types can include 
    methods and constructors, allowing arbitrarily complex structures 
    to house data exactly as required.</p>

    <p>User-defined types can be used to define database table columns.</p>

    <div class="tip">
      <div class="tiptitle">See Also:</div>
    <p>The <a href="udt.html">User Defined Types</a> section.</p>
    </div>
    <br />

		
			<a name="widefunc" />
    <h4>5.1.2.12. Built-in SQL Functions and Wide Characters</h4>
			<p>
All the built-in SQL functions that take character attributes and have a
character input calculate their output type such that if any attribute is
a wide string or a wide BLOB, then the result is a wide string; otherwise,
the output character type is narrow.
</p>
			<p>
Functions like <span class="computeroutput">make_string()</span> that have character
result types but that do not have character parameters produce narrow
strings. Virtuoso provides equivalent functions for wide output, such as
<span class="computeroutput">make_wstring()</span>.
</p>
		<br />
		
			<a name="wideodbc" />
    <h4>5.1.2.13. Client-side changes to support wide characters</h4>
			<p>
Virtuoso&#39;s ODBC client implements the SQL...W functions (like <span class="computeroutput">SQLConnectW()</span>) that take Unicode arguments.
This enables faster wide-character processing and allows binding of the SQL_C_WCHAR output type.
Since Virtuoso&#39;s SQL parser does not allow Unicode data in SQL commands, they should be bound as parameters
or should be represented as escapes.
</p>
		<br />
		
			<a name="nvdb" />
    <h4>5.1.2.14. Virtual Database and National Language Support</h4>
			<p>
Attached tables use the default collation of the data source for narrow
strings.
Virtuoso maps wide-string columns in remote tables to the appropriate local wide-character type.
The data are then passed intact in the case of wide-to-wide mapping. When data are converted
client-side in the VDB, the server&#39;s system character set is used (where available).
</p>
		<br />

		
			<a name="lrgdtrelations" />
    <h4>5.1.2.15. Operations Between Large Objects, Varchars and String Outputs</h4>

			<p>
The built-in data types denoting sequences of characters, wide or narrow,
long or short, are:
</p>
			<ul>
      <li>
<strong>Varchar</strong>: a string of 8-bit characters, including 0&#39;s,
up to 16MB long.  These are contiguously stored, so very long contents, e.g.
in the megabytes, are handled inefficiently.
</li>
      <li>
<strong>NVARCHAR</strong>: A string of wide characters, of 2 or 4 bytes each, depending on the
platform.  Because of the 16MB limit, the longest strings will be of 4M or
8M characters, depending on the platform.  Again long strings are not recommended
due to inefficiencies.
</li>
      <li>
<strong>Binary</strong>: A string of 8-bit bytes, up to 16 MB long, like a varchar but not
usable for character functions.  There is a distinct binary type only for
compatibility with the SQL92 standard and ODBC, where the binary type is
treated differently in parameter binding.
</li>
      <li>
<strong>Long varchar, long nvarchar</strong>: These are long data types, stored persistently
as a series of linked pages and accessible to clients in fragments using the
SQLGetData() and SQLPutData() calls. The length limit is 2GB.  The wide variant, LONG NVARCHAR, is internally stored as UTF8.
</li>
      <li>
<strong>String_output</strong>: This is not a database column type but
a run-time object that
can be used in stored procedures for accumulating a long sequence of 8-bit bytes,
including 0&#39;s.  This type is not contiguously stored, hence it stays efficient for
large output and has no built-in size limit; however, it is not automatically
paged to disk, so it will consume virtual memory for all its length.  This type is
useful for buffering output for a next processing step.
</li>
      <li>
<strong>Long varbinary</strong>: This is a binary BLOB, identical to long varchar but distinct for
reasons of compatibility with SQL92 and ODBC, where this can behave differently
from long varchar for parameter binding.
</li>
      <li>
<strong>XML Entity</strong>: This type is a pointer to an element of an XML tree.  The XML
tree itself may be either memory- or disk-based. In both cases there is a
reference-counted set of XML entities for each tree that Virtuoso uses to reference individual
elements of the tree.  These are used for navigating an XML tree in XPath
or XSLT;
hence, one entity gives access to its parents, siblings, and so on.  This is not properly a
string type, but it can be converted to one, producing the XML string value.
</li>
    </ul>
			<p>
All these types have the common trait of representing sequences of characters and
hence some common operations and conversions are possible between them.
</p>
			
				<a name="storageindb" />
    <h5>5.1.2.15.1. Storage in Database</h5>
				<p>
The descriptions below apply to insert and update operations for these types:
</p>

<ul>
<li>
        <p>Long varchar = x, where x is:</p>
<ul>
          <li>varchar - The text is stored as is.</li>
          <li>Long varchar - the text is stored as is.</li>
          <li>string output - the contents in the string output are stored as the value,
unchanged.  The state of the string output is not changed. </li>
          <li>XML entity - The XML tree rooted at the entity is stored
as persistent XML (disk-based) if the entity references a persistent XML tree.
Note that this may either extract a subtree or copy a tree, depending on whether
the entity references the root.  If the entity references a memory-based tree,
the text of the tree with the element as the topmost (document) element is
produced and set as the value of the column.</li>
          <li>Nvarchar - The text is stored as wide, thus the value is internally a
long nvarchar although the declared column type is long varchar.</li>
          <li>Long nvarchar - The value is stored as a long nvarchar, as with an nvarchar.</li>
        </ul>
</li>

<li>
        <p>Long nvarchar = x</p>
<p>
The cases are identical to long varchar.  Thus a wide value stays wide
and a narrow value stays narrow. Specifically, a string output and XML
entity result in a narrow value, although the character combination in the
XML entity may be interpreted as wide.
</p>
</li>

<li>
<p>Long varbinary = x</p>
<p>Identical to long varchar.  The binary type is only distinct in
column metadata for ODBC clients, where its type conversions may be different.
</p>
</li>

<li>
<p>Varchar = x, where x is:</p>
<ul>
          <li>long varchar, string output, XML entity - as with long varchar.</li>
          <li>Nvarchar, Long nvarchar - the text is stored as wide; no information is lost.</li>
        </ul>
</li>

<li>
<p>Nvarchar = x, where x is:</p>
<ul>
          <li>Long varchar, varchar - the string is converted to wide
according to the character set effective in the connection.
</li>
          <li>Long nvarchar, Nvarchar - The text is stored as is.</li>
        </ul>
</li>
</ul>

			<p>
&#39;String output&#39; and &#39;XML entity&#39; are not valid types for a column.  These types are
only created by evaluating SQL expressions and are converted as specified above
if stored as a column value.
</p>
<br />
			
				<a name="retrcolvals" />
    <h5>5.1.2.15.2. Retrieving Column Values</h5>
	<p>
A BLOB column (long varchar, long nvarchar, long varbinary) may return
either a long varchar or a long nvarchar BLOB handle.  If the actual value
is short enough to be inlined, a varchar or nvarchar value can be returned as
the column value instead. These are indistinguishable for assignment and as
arguments to SQL functions or for returning to a client application.  Only
specific SQL functions (<span class="computeroutput">isblob()</span>,
<span class="computeroutput">isstring()</span>, etc.) allow you to determine the
difference at run time. One exception is persistent XML entities, which
come back as persistent XML entities and are not compatible with string
functions but are assignable to various character columns.
</p>
	<p>
An nvarchar column is always nvarchar.
</p>
<p>
A varchar column may return either a varchar or an nvarchar value.  If the value stored was a
memory-based XML tree entity it comes back as a long varchar.  If it was a
persistent XML tree, it comes back as an XML entity.  </p>
<br />
		
			<a name="assignments" />
    <h5>5.1.2.15.3. Assignment</h5>
			<p>
PL variables are typed at run time.
</p>
<p>
A string (varchar, nvarchar, or varbinary) can be freely assigned and
passed as parameter.  This makes a copy, except for reference (inout)
parameters.
</p>
			<p>
A BLOB (long varchar, long nvarchar, long varbinary) is a reference to a
disk-based structure, unless stored inline. Therefore, passing these as
parameters does not take significant time. If stored inline, they are
strings of under 4K bytes; hence assigning them is still efficient,
although it involves copying.
</p>
			<p>
A string output cannot be assigned between two variables, though it can be
passed as a reference (inout) parameter in a PL procedure call.  Copying
streams has problematic semantics and can be very resource-consuming.
</p>
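			<p>
A minimal Virtuoso/PL sketch of accumulating bytes in a string output
(the procedure name is hypothetical):
</p>
			<div>
      <pre class="programlisting">
create procedure acc_demo ()
{
  declare ses any;
  ses := string_output ();
  http ('first line\n', ses);
  http ('second line\n', ses);
  -- extract the accumulated bytes as a varchar
  return string_output_string (ses);
}
</pre>
    </div>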
			<p>
An XML entity can be assigned and passed as parameter without restrictions.
</p>
<br />
		
			<a name="builtinsqlfuncs" />
    <h5>5.1.2.15.4. Built-In SQL Functions</h5>
			<p>
All SQL92 string functions will accept varchar, long varchar, nvarchar or
long nvarchar arguments.  If the argument is long and its actual length is
above the maximum length of a varchar, the conversion fails and Virtuoso
signals an error.  You can interchange long and varchar types as long as the
length remains under the varchar maximum of 16MB.</p>
<div class="note">
      <div class="notetitle">Note:</div>
<p>Varchars or nvarchars stored in columns have a much lower limit due to the 4K row length
limit.  Intermediate results or values converted from long columns are not
affected by this limit.
</p>
    </div>
			<p>
If Virtuoso converts a value from long varchar to varchar or from
long nvarchar to nvarchar when passing the value as an argument to a string
function, the value changes in place. This has the
effect of replacing the handle with the string.  Users normally do not see
this, but may detect it with type test functions such as <span class="computeroutput">isblob()</span>.
</p>
<br />
		
			<a name="longrowlenlim" />
    <h5>5.1.2.15.5. Long Strings and Row Length Limit</h5>
			<p>
You can declare string values that might be long and that do not have to be
key parts in indices as long varchar.  Such values are automatically inlined
if the row, with the data inlined, fits within the 4K limit; otherwise the
long values are stored as separate LOBs.  If the length is under the varchar
limit, the difference between varchar and long varchar is detectable only
with special test functions.
</p>
			<p>
A varchar column is sometimes substantially faster on
update than a long varchar column, even if the value ends up inlined.  If the
value is inlined there is no difference in retrieval speed.
</p>
<br />
		
			<a name="handlinglongdt4inou" />
    <h5>5.1.2.15.6. Handling Long Data for Input and Output</h5>
			<p>
LOBs of up to 2GB can be handled as streams without demand on memory
from ODBC clients using <span class="computeroutput">SQLGetData()</span> and
<span class="computeroutput">SQLPutData()</span>.  All other ways of
processing long data will need to make either a contiguous or non-contiguous
copy in memory.
</p>
			<p>
To transfer long data between PL procedures and files one can use the 
<a href="fn_string_to_file.html">string_to_file()</a>
function, which will accept a handle and will not need to copy the content to
memory in order to write it.
</p>
			<p>
To read a large object from a file to a table, you can use the 
<a href="fn_file_to_string_output.html">file_to_string_output()</a> 
function to get contents that may be longer than the varchar
limit into a string output.  This can then be assigned to a BLOB column.
</p>
			<p>
For long file-resident XML data you can use the <a href="fn_xml_persistent.html">
xml_persistent()</a> function with the
file:// protocol.
</p>
<div class="tip">
      <div class="tiptitle">See Also:</div>
<p>The <a href="webandxml.html">XML Support</a> chapter.</p>
    </div>
<br />
<br />
	<br />
	
	   
   <a name="colstore" />
    <h3>5.1.3. Virtuoso Column Store</h3>
     <p>Note: This feature only applies to Virtuoso 7.0 and later.</p>
     <p>As of version 7, Virtuoso offers a column-wise compressed storage format alongside 
     	its traditional row-wise storage format.</p> 
     <p>In the column-wise storage model, each column of a table or index is stored contiguously, so 
     	that values of a single column on consecutive rows are physically adjacent. In this way, adjacent 
     	values are of the same type, and if the index is sorted on said value, the consecutive values 
     	often form an ascending sequence. This organization allows the use of more powerful compression 
     	techniques than could be used for rows where consecutive values belong to different columns, and 
     	thus are of disparate data types with values in different ranges.</p>
     <p>Furthermore, when queries only access a subset of columns from one table, only those 
     	columns actually being accessed need to be read from disk, thereby making better use of I/O 
     	throughput and memory. Unreferenced columns will not take space in the memory based cache 
     	of the database. Further, the traffic between CPU cache and main memory is reduced when 
     	data is more compact, leading to better CPU utilization.</p>
     <p>The column-wise format is substantially more compact and offers substantially greater 
     	sequential-access performance, as well as greater random-access performance in situations 
     	where many rows are accessed together in a join. For single-row random-access, a row-wise 
     	format offers higher performance as long as the data is in memory. In practice, for large 
     	tables, the higher compression achieved with column-wise storage allows a larger portion 
     	of the data to be kept in memory, leading to less frequent I/O and consequently higher 
     	performance.</p>
     <p>One should not use column-wise storage in cases where columns are frequently updated, 
     	especially if a single row is updated per statement. This will give performance substantially 
     	worse than row-wise storage. However, bulk inserts and deletes are efficient with 
     	column-wise storage.</p>
     
     <a name="colstorecreatetblind" />
    <h4>5.1.3.1. Creating Column Store Tables and Indices</h4>
       <p>Any index or primary key, i.e., any table, can be declared to be stored column-wise. 
       	A single table can have multiple indices, of which some are stored column-wise and some are 
       	not. As with tables stored row-wise, the table row itself is stored following the primary 
       	key index entry on the index tree leaf corresponding to the entry. This arrangement is 
       	sometimes called a <strong>clustered index</strong>.</p>
       <p>The statement below declares the table xx to be stored column-wise:</p>
 <div>
      <pre class="programlisting">
CREATE TABLE xx ( id    INT, 
                  data  VARCHAR, 
                  PRIMARY KEY (id) COLUMN
                );
 </pre>
    </div>
       <p>This statement adds a column-wise stored index to the table:</p>
 <div>
      <pre class="programlisting">
CREATE COLUMN INDEX xxi 
  ON xx (data);
 </pre>
    </div>
       <p>The <strong>COLUMN</strong> keyword can come after the column list of the 
       primary key declaration of a table or anywhere between the <strong>CREATE</strong> 
       and <strong>INDEX</strong> keywords of a create index statement.</p>
       <p>Note that the <strong>BITMAP</strong> keyword cannot be used together 
       with the <strong>COLUMN</strong> keyword. Column-wise indices will automatically 
       use bitmap compression when appropriate without this being specified. A column-wise 
       index is likely to be more space-efficient than a row-wise bitmap index with the same 
       key parts.</p>
       <p>The directives for column compression in <strong>CREATE TABLE (NO COMPRESS, COMPRESS PREFIX)</strong> 
       have no effect on column-wise stored tables. Data is compressed in a manner chosen at run 
       time based on the data itself.</p>
     <br />
       
     <a name="colstoretransup" />
    <h4>5.1.3.2. Column Store Transaction Support</h4>
       <p>All SQL operations work identically for column- or row-wise tables and indices. 
       	The locking behavior is also identical, with row-level locking supported on all isolation 
       	levels. The behavior of the <strong>READ COMMITTED</strong> isolation is non-locking, 
       	showing the pre-image of updated data when reading pages with uncommitted inserts or updates.</p> 
       <p>Recovery is by roll forward, and checkpoints will only store committed states of the 
       	database, even if started when there are uncommitted transactions pending.</p>
     <br />
     
     <a name="colstorespaceutil" />
    <h4>5.1.3.3. Column Space Utilization</h4> 
       <p>The system table <strong>DB.DBA.sys_col_info</strong> holds information about space 
       utilization of column-wise indices.</p>
       <p>This table is updated only after the <strong>DB.DBA.sys_index_space_stats</strong> 
       procedure view has been accessed. Thus, one must first make a selection from 
       <strong>DB.DBA.sys_index_space_stats</strong>.</p>
       <p>The columns of <strong>sys_col_info</strong> have the following meaning:</p>
       <ul>
         <li>
        <strong>COI_TABLE</strong> - The table in question.</li>
         <li>
        <strong>COI_INDEX</strong> - The index in question.</li>
         <li>
        <strong>COI_NTH</strong> - The ordinal position of the column in question 
         in the key.</li>
         <li>
        <strong>COI_TYPE</strong> - This indicates the type of compression entry the rest 
         of the row concerns. For each column in the key, there is a row with <strong>coi_type</strong> 
         set to -1, representing the total of the remaining fields.</li>
         <li>
        <strong>COI_COLUMN</strong> - The name of the column concerned.</li>
         <li>
        <strong>COI_PAGES</strong> - This is the number of database pages allocated for 
         storing data of this column.</li>
         <li>
        <strong>COI_CES</strong> - The count of compression entries for the column. A 
         compression entry is logically an array of consecutive values that share a common compression 
         format. Different parts of the same column may have different compression.</li>
         <li>
        <strong>COI_VALUES</strong> - This is the count of values that are stored with the 
         compression format in question.</li>
         <li>
        <strong>COI_BYTES</strong> - This is the number of bytes actually occupied by the 
         compression entries concerned. Pages are not always full, so this metric can be used to 
         measure the page fill ratio, i.e.:
<div>
          <pre class="programlisting">
100 * coi_bytes / (coi_pages * 8192.0) 
</pre>
        </div>
         </li>                                
       </ul>
       <p>To see which columns take the most space, and how full the pages are, as well as the 
       	overall effectiveness of compression, one can do:</p>
 <div>
      <pre class="programlisting">
SELECT coi_column, 
       coi_pages * 8192                 AS total_bytes, 
       coi_bytes / (coi_pages * 8192.0) AS page_fill, 
       coi_bytes, 
       1.0 * coi_bytes / coi_values     AS ce_bytes_per_value, 
       8192.0 * coi_pages / coi_values  AS bytes_per_value 
  FROM sys_col_info 
 WHERE coi_type = -1 
 ORDER BY coi_pages DESC;
 </pre>
    </div>
       <p>Note that issuing a query like:</p> 
 <div>
      <pre class="programlisting">
 SELECT TOP 20 * 
    FROM sys_index_space_stats 
ORDER BY iss_pages DESC; 
 </pre>
    </div>
       <p>will update the <strong>sys_col_info</strong> table, which is initially empty.</p> 
       <p>The <strong>sys_index_space_stats</strong> view shows the number of pages used for 
       the sparse row-wise index tree top for column-wise indices.</p>
       <p>The number of rows shown there for column-wise indices is the number of entries of 
       	the sparse index, not the row-count of the index. The space utilization here will be under 
       	1% of the total for a column-wise index.</p>
       <p>Below we look at space utilization of the <strong>O</strong> column of the primary 
       key of the <strong>RDF_QUAD</strong> table.</p>
 <div>
      <pre class="programlisting">
SELECT * 
  FROM sys_col_info 
 WHERE  coi_index = &#39;DB.DBA.RDF_QUAD&#39; 
   AND coi_column = &#39;O&#39; ;
 coi_table             coi_index           coi_nth           coi_type          coi_column    coi_pages      coi_ces    coi_values    coi_bytes
 VARCHAR NOT NULL      VARCHAR NOT NULL    INTEGER NOT NULL  INTEGER NOT NULL  VARCHAR       INTEGER        INTEGER    INTEGER       INTEGER
 _______________________________________________________________________________
 
 DB.DBA.RDF_QUAD       DB.DBA.RDF_QUAD     2                 -1                O             654663         0          1252064815    4617808494
 DB.DBA.RDF_QUAD       DB.DBA.RDF_QUAD     2                 1                 O             0              229074     97104862      947215
 DB.DBA.RDF_QUAD       DB.DBA.RDF_QUAD     2                 3                 O             0              3227395    490806316     3905658370
 DB.DBA.RDF_QUAD       DB.DBA.RDF_QUAD     2                 4                 O             0              94038      17227799      8554746
 DB.DBA.RDF_QUAD       DB.DBA.RDF_QUAD     2                 6                 O             0              389126     551074747     579191659
 DB.DBA.RDF_QUAD       DB.DBA.RDF_QUAD     2                 8                 O             0              160814     48480188      12026273
 DB.DBA.RDF_QUAD       DB.DBA.RDF_QUAD     2                 10                O             0              652817     47370903      111430231
 </pre>
    </div>
       <p>The top line is the overall summary across all the compression types.</p>
       <p>The lines below give information per-compression-type. The values of 
       	<strong>coi_type</strong> mean the following:</p>
       <ul>
         <li>1 - <strong>run length</strong>. The value occurs once, followed 
         by the number of repetitions.</li>
         <li>3 - <strong>array</strong>. Values are stored consecutively without compression. 
         The array elements are 4- or 8-byte depending on range. For variable length types, some 
         compression applies because values differing only in their last byte will only have the 
         last byte stored.</li>
         <li>4 - <strong>bitmap</strong>. For closely-spaced unique ascending values, the 
         bitmap has a start value in full, and a bitmap with the nth bit set if start + nth occurs 
         in the column.</li>
         <li>6 - <strong>dictionary</strong>. For non-ordered, low-cardinality columns, 
         there can be a dictionary with either 4 or 8 bytes per entry, depending on the number of 
         distinct values being encoded. The compression entry is prefixed by an array with the values 
         in full, followed by an array of positions in the dictionary.</li>
         <li>8 - <strong>run length with small deltas</strong>. For repeating, closely-spaced 
         ascending values, the run-length-delta format stores a start value in full, followed by an 
         array of bytes of which 4 bits are a delta to the previous value, and 4 bits are a run 
         length.</li>
         <li>10 - <strong>integer delta with large deltas</strong>. This format stores an 
         initial value followed by stretches of non-ordered values within 64K of the base value. 
         There can be multiple such stretches, each prefixed with a 32-bit delta from the base value. 
         This is useful for closely-spaced medium-cardinality values like dates, or for relatively 
         sparse ascending sequences, e.g., ascending sequences with a step of 1000 or more.</li>
       </ul>
     <br />  
   <br />
 
 
 	
   <a name="explvectprcode" />
    <h3>5.1.4. Explicit Vectoring of Procedural Code</h3>
     <p>Note: This feature only applies to Virtuoso 7.0 and later.</p>  
     <p>Vectored execution can be explicitly controlled for Virtuoso PL code, either by 
     	declaring a whole procedure to be vectored or by executing a block inside a procedure on 
     	multiple values at one time. See the more detailed descriptions in:</p>
       <ul>
         <li>
        <a href="">Vectored Procedures</a>
      </li>
         <li>
        <a href="">FOR VECTORED Statement</a>
      </li>
         <li>
        <a href="">Limitations on Vectored Code</a>
      </li>
         <li>
        <a href="">Data Types and Vectoring</a>
      </li>        
       </ul>
     
   <br />

	
	
		<a name="Locking" />
    <h3>5.1.5. Locking</h3>
		<p>
Virtuoso offers a dynamic locking strategy that combines
the high resolution of row-level locking with the performance of
page locking for large transactions.
</p>
		
			<a name="IsoLevels" />
    <h4>5.1.5.1. Isolation Levels</h4>
			<p>
Virtuoso has a full range of isolation options, ranging from <span class="computeroutput">dirty read</span> to
<span class="computeroutput">serializable</span>. The default isolation is <span class="computeroutput">repeatable read</span>, which is adequate
for most practical applications.
</p>
			<p>Isolation is set at the connection,
i.e. transaction, level.  Variously isolated
transactions may coexist, and each will behave consistently with its semantics.
</p>
			<p>
      <span class="computeroutput">Repeatable read</span> and <span class="computeroutput">serializable</span> transactions are susceptible at any time to
termination by deadlock, SQL state 40001.  Other transactions are susceptible
to deadlock if they own locks as a result of insert, update or delete.  Deadlocks are
resolved in favor of the older of the contending transactions. A transaction&#39;s age is
the count of reads performed + 2 * the count of rows inserted, deleted or updated.
</p>
			<p>Any transaction that has modified the database may be rolled back; all transactions
maintain a rollback log.  This is a memory-based data structure that contains the
state of changed rows as they were before the transaction first affected them. This
leads to potential transient memory consumption.
All transactions that have changed the database also have a roll-forward log,
used to recreate the effects of the transaction during roll-forward recovery.
</p>
			
				<a name="ReadUncommit" />
    <h5>5.1.5.1.1. Read Uncommitted</h5>
				<p>This corresponds to SQL_TXN_READ_UNCOMMITTED.  A read
is never prevented by locking, nor do read rows stay locked. The data being read
may or may not be committed, hence there is no guarantee of transaction integrity.
</p>
			<br />
			
				<a name="ReadCommit" />
    <h5>5.1.5.1.2. Read Committed</h5>
				<p>Historical Read Committed</p>
                                <p>Starting with release 5.0, Virtuoso has a non-locking,
versioned <span class="computeroutput">read committed</span> transaction mode. This is similar to Oracle&#39;s default isolation.
</p>
                                <p>If a locked row is read without FOR UPDATE being specified and another
transaction owns the lock, the reading transaction will see the row in the state it had before being modified
by the transaction owning the lock. There will be no wait. If a row has been inserted but the insert not committed,
the row will not be seen by the <span class="computeroutput">read committed</span> transaction. If a row has been updated or deleted,
the row will be seen as it was before the uncommitted modifying transaction.
</p>
                                <p>If a row is read in <span class="computeroutput">read committed</span> mode with FOR UPDATE specified
or as part of a searched update or delete statement, the <span class="computeroutput">read committed</span> transaction will wait for a
locked row and will set an exclusive lock on the row if the row matches the search criteria. This exclusive lock will be
held until the <span class="computeroutput">read committed</span> transaction terminates.
</p>
                                <p>Hence, if FOR UPDATE is specified, a <span class="computeroutput">read committed</span> transaction
will have repeatable read semantics, otherwise it guarantees no repeatable read but does guarantee that uncommitted data are never seen.
</p>
                                <p>To make this the default mode, set DefaultIsolation in the Parameters section of virtuoso.ini to 2.
</p>
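                                <p>A minimal virtuoso.ini fragment making this the default isolation:
</p>
<div>
      <pre class="programlisting">
[Parameters]
DefaultIsolation = 2
</pre>
    </div>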
			<br />
			
                                <a name="RowbyRowAutoCommit" />
    <h5>5.1.5.1.3. Row-by-Row Autocommit</h5>
                                <p>This transaction mode causes all DML statements to commit after
every modified row. This is useful for single-user situations where one does large batch updates on tables.
For example, an update of every row of a multi-gigabyte table would be likely to run out of rollback space
before completing. In practice, one can end up in a thrashing situation where a large transaction is in
progress, is interrupted by a checkpoint which must temporarily roll back the changed pages, then
resumes, and so on, leading to serious server unavailability. Note that normally the ini parameter
TransactionAfterImageLimit places a cap on transaction size, catching situations of this type before they
lead to thrashing.
                                </p>
                                <p>The row-by-row autocommit mode prevents this from happening by
committing each updated, inserted or deleted row as soon as all the indices of the row are updated.
This mode still maintains basic row integrity: if the row&#39;s data is in one index, it will be
in all indices.
                                </p>
                                <p>This mode is good for any batch operations where concurrent
updates are not expected or are not an issue. Examples include bulk loading of data, materialization of
RDF inferred data etc.
                                </p>
                                <p>This mode is enabled with the log_enable function. If the bit
with value 2 is set in the argument, row-by-row autocommit is enabled, and the setting persists until
modified with log_enable, the calling connection is disconnected, or the calling web request
terminates. Thus, an argument of 2 enables row-by-row autocommit and disables logging. An argument
of 3 enables row-by-row autocommit and enables logging; this causes every updated row to be
logged in the transaction log after it is updated, which is not very efficient.
                                </p>
                                <p>Since transaction-by-transaction recovery is generally not an
issue in batch updates, a value of 2 is usually better. If the server is killed during the batch
operation, it may simply be restarted and the operation redone. Losing the first half through no
logging will not be an issue since the operation will anyway have to be redone.
                                </p>
                                <p>There is a slight penalty to row-by-row autocommit in comparison with
making updates in larger batches but this is under 10%.
                                </p>
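                                <p>A bulk operation might therefore be bracketed as follows. The
table is hypothetical; the argument values follow the bit meanings described above:
</p>
<div>
      <pre class="programlisting">
log_enable (2);   -- row-by-row autocommit on, logging off

UPDATE BIG_TABLE SET FLAG = 1;   -- commits after every modified row

log_enable (1);   -- back to the usual mode: logging on, autocommit off
</pre>
    </div>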
			<br />
			
				<a name="RepeatableRead" />
    <h5>5.1.5.1.4. Repeatable Read</h5>
				<p>The transaction will wait for access to exclusively locked rows
and will lock all rows it reads.  The locking of read rows can be shared or exclusive depending
on the FOR UPDATE clause in the SELECT or the SQL_CONCURRENCY statement option.  In the case
of a select over a range of rows where not all rows match selecting criteria,
only matching rows are locked.  This mode guarantees that any row locked by the
reading transaction can be re-read on the basis of its identity (primary key) and will not have
been changed by any other transaction while the locking transaction is in progress.
This mode does not prevent another transaction from inserting new rows
(phantoms) between rows locked by the original transaction.
</p>
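				<p>For instance, to read a row under repeatable read with an exclusive
rather than shared lock (the table and columns are illustrative):
</p>
<div>
      <pre class="programlisting">
SELECT QUANTITY 
  FROM INVENTORY 
 WHERE ITEM_ID = 11 
   FOR UPDATE;   -- matching rows are locked exclusively
</pre>
    </div>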
			<br />
			
				<a name="Serializable" />
    <h5>5.1.5.1.5. Serializable</h5>
				<p>This mode guarantees that concurrent transactions will look as if
the next transaction started only after the previous terminated. This is like <span class="computeroutput">repeatable read</span>
but prevents phantoms.  Space found to be empty
in one read will remain empty in the next read while the transaction is ongoing.
</p>
				<p>Serializable isolation is implemented by locking all ranges of rows matching
criteria pertaining to the ordering index in a select. The range here includes the last row
before the first in the range.  An empty range can be locked by locking the row before the range
with a special follow lock, which prevents insertions to the right of the locked row.  A by-product
of this is that serializable locking guarantees that a select count will give the same result
repeatedly unless the transaction itself affects the rows counted.
</p>
			<br />
			<p>
      <span class="computeroutput">Serializable</span>
isolation is slower than <span class="computeroutput">repeatable read</span> and not
required by most applications.
</p>
			<p>All insert, delete and update operations make an exclusive row lock on the rows
they operate on, regardless of specified isolation.
</p>
		<br />
		
			<a name="LockExtent" />
    <h4>5.1.5.2. Lock Extent</h4>
			<p>
If a transaction is the exclusive owner of locks on a database page and a sufficient percentage
of the rows are locked, it makes sense to replace distinct row locks
with a single page lock.  The LOCK_ESCALATION_PCT parameter controls the threshold for doing
this. See the SET statement for details.
</p>
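			<p>
For example, to have a transaction's row locks escalate to a page lock once
half the rows on a page are locked (the exact option syntax is described
under the SET statement):
</p>
<div>
      <pre class="programlisting">
SET LOCK_ESCALATION_PCT = 50;
</pre>
    </div>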
			<p>
If a cursor reads data serially and has a history of locking a high percentage of rows on
each page it traverses, it will start setting page level locks as its first choice.
It will do this when entering a new page where there are no row-level locks.
</p>
		<br />
		
			<a name="TransactionSize" />
    <h4>5.1.5.3. Transaction Size</h4>
			<p>
There is no limit in Virtuoso to the transaction size, though the
underlying software or hardware may impose limits.  Memory consumed by a transaction
is proportional to the number of locks held and the number of changed rows (insert, update, delete).
BLOBs manipulated by a transaction do not contribute to memory
consumption, because they are always disk-based.
</p>
		<br />
	<br />
	
	




<a name="internationalization" />
    <h3>5.1.6. Internationalization &amp; Unicode</h3>

<p>
National strings are best represented as Unicode (NCHAR/LONG NVARCHAR) columns.
There is no guarantee that values stored inside narrow (VARCHAR/LONG VARCHAR)
columns will get correctly represented.  If the client application is also
Unicode then no internationalization conversions take place.
Unfortunately, most current applications still use narrow characters.
</p>
<p>
The national character set defines how strings are converted from narrow to wide
characters and back throughout Virtuoso.
A character set is an array of 255 Unicode codes (the zero code is omitted)
describing the location of each character of the narrow character set in the Unicode
space.  It has a &quot;primary&quot; or &quot;preferred&quot; name and a list of aliases.
</p>
	<p>
Character sets in Virtuoso are kept in the system table SYS_CHARSETS, whose layout is:
</p>
<div>
      <pre class="programlisting">
CREATE TABLE SYS_CHARSETS (
    CS_NAME varchar,			-- The &quot;preferred&quot; charset name
    CS_TABLE long nvarchar,		-- the mapping table of length 255 Wide chars
    CS_ALIASES long varchar		-- serialized vector of aliases
);
</pre>
    </div>
	<p>
The CS_NAME and CS_ALIASES columns are SELECTable by PUBLIC.
To simplify retrieval of all official and unofficial names of character
sets, Virtuoso provides the following function:
</p>

<p>
      <a href="fn_charsets_list.html">charsets_list()</a>
    </p>

	<p>
There are a number of character set definitions preloaded in the SYS_CHARSETS 
table. Currently these are:</p>

<ul>
      <li>GOST19768-87</li>
      <li>IBM437, IBM850, IBM855, IBM866, IBM874</li>
      <li>ISO-8859-1, ISO-8859-2, ISO-8859-3, ISO-8859-4, ISO-8859-5,
ISO-8859-6, ISO-8859-7, ISO-8859-8, ISO-8859-9, ISO-8859-10, ISO-8859-11,
ISO-8859-13, ISO-8859-14, ISO-8859-15</li>
      <li>KOI-0, KOI-7, KOI8-A, KOI8-B, KOI8-E, KOI8-F, KOI8-R, KOI8-U</li>
      <li>MAC-UKRAINIAN</li>
      <li>MIK</li>
      <li>WINDOWS-1250, WINDOWS-1251, WINDOWS-1252, WINDOWS-1257</li>
    </ul>

	<p>
New character sets can be defined using the following function:
</p>

<p>
      <a href="fn_charset_define.html">charset_define()</a>
    </p>

	<p>
User-defined character sets can be dropped by deleting the row from the SYS_CHARSETS table
and restarting the server.
</p>
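	<p>
For example, assuming a previously defined character set named MY-CHARSET:
</p>
<div>
      <pre class="programlisting">
DELETE FROM SYS_CHARSETS WHERE CS_NAME = 'MY-CHARSET';
-- the change takes effect after a server restart
</pre>
    </div>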
	<p>
Virtuoso performs all translations in accordance with a &quot;current
charset&quot;.  This is a connection attribute. It gets its value as
follows:
</p>
<ul>
      <li>
1. If the client supplies a CHARSET ODBC Connect string attribute either from the
DSN definition or as an argument to a
SQLDriverConnect() call, Virtuoso searches for the
name in SYS_CHARSETS and, if there is a match, that character set becomes
the default.
</li>
      <li>
2. If the database default character set (&#39;Charset&#39; parameter in the
&#39;Parameters&#39; section of virtuoso.ini) is defined, it becomes the default.
</li>
      <li>
3. If neither of these conditions is met, then Virtuoso uses ISO-8859-1 as
the default character set; this maps each narrow character to the Unicode
code point of equal value.
</li>
    </ul>

<p>
At any time the user can explicitly set the character set either with a
call to
</p>
<div>
      <pre class="programlisting">
SQLSetConnectAttr (HDBC, SQL_CHARSET (=5002), CharacterSetString, StringLength)
</pre>
    </div>
<p>
or by executing the interactive SQL command:
</p>
<div>
      <pre class="programlisting">
SET CHARSET=&#39;&lt;name&gt;|&lt;alias&gt;&#39;
</pre>
    </div>
	<p>
The current character set &quot;preferred&quot; name (as a string) is
returned by the following system function:
</p>

<p>
      <a href="fn_current_charset.html">current_charset()</a>
    </p>

	<p>
Virtuoso has a default character set that gets used if the client
does not supply its own and in some special cases, like XML Views and FOR
XML AUTO statements.
</p>

<p>
    The HTTP character set can be changed during an HTTP session using: 
</p>
	   
<div>
      <pre class="programlisting">
SET HTTP_CHARSET=&#39;&lt;name&gt;|&lt;alias&gt;&#39;
</pre>
    </div>

<p>Example:</p>
<div>
      <pre class="programlisting">
     &lt;?vsp 
         set http_charset = &#39;ISO-CELTIC&#39;;
     ?&gt;
     &lt;html&gt;&lt;body&gt;&lt;h1&gt;Cén chaoi &#39;bhfuil tú?&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt;
    </pre>
    </div>



	<p>
Virtuoso supports the following types of translations from Unicode
characters to narrow characters:
</p>

<ul>
<li>
<p>String translation:</p>
  <ul>
          <li>If the Unicode represents a part of the US-ASCII (0-127)
    character set then its value gets used;</li>
          <li>If the Unicode has a mapping to narrow in the character set then use it;</li>
          <li>If neither of the above then the narrow &#39;?&#39; is returned.</li>
        </ul>
</li>
<li>
<p>Command translation:</p>
  <ul>
          <li>If the Unicode represents a part of the US-ASCII (0-127) character set then its value gets used;</li>
          <li>If the Unicode has a mapping to narrow in the character set then use it;</li>
          <li>If neither of the above then the Unicode gets escaped using the form \xNNNN (hexadecimal).</li>
        </ul>
</li>
<li>
<p>HTTP/XML translation:</p>
  <ul>
          <li>If the Unicode represents a part of the US-ASCII (0-127)
character set then its value gets used after replacing the special symbols
(&lt;, &gt;, &amp; etc.) with their entity references;</li>
          <li>If the Unicode has a mapping to narrow in the character set
then use it.  The narrow char is then checked to see if it needs to be escaped;</li>
          <li>If none of the above, then the Unicode gets escaped using the form &amp;#DDDDDD; (decimal)</li>
        </ul>

</li>
</ul>


<a name="charsetclientusage" />
    <h4>5.1.6.1. Character Set Use in ODBC/UDBC/CLI Clients</h4>

	<p>
This section describes where translation is done in the case of an ODBC/UDBC/CLI client.
These are described together because the Virtuoso CLI is the same as
the ODBC/UDBC interface.
</p>

	<p>
For the functions <span class="computeroutput">SQLPrepareW()</span>,
<span class="computeroutput">SQLExecDirectW()</span>,  and
<span class="computeroutput">SQLNativeSQLW()</span> any Unicode arguments will become
narrow strings by using the command translation described above.</p>
	<p>
When doing the bindings
</p>
<div>
      <pre class="programlisting">
SQL_C_WCHAR -&gt; SQL_xxx
</pre>
    </div>
<p>and</p>
<div>
      <pre class="programlisting">
SQL_Nxxx -&gt; SQL_C_xxx (except SQL_C_WCHAR)
</pre>
    </div>
<p>
Virtuoso converts Unicode strings to narrow strings using the string
translation described above.
</p>
<br />


<a name="charsetserverusage" />
    <h4>5.1.6.2. Character Set Use in the ODBC/UDBC/CLI Server</h4>

	<p>
The server uses the character set in the CAST operator when converting
NCHAR/LONG NVARCHAR to any other type.
</p>
<br />


<a name="charsethttpusage" />
    <h4>5.1.6.3. Character Set Use in the HTTP Server</h4>

	<p>
The HTTP server appends a <span class="computeroutput">charset=xxxx</span> attribute to the
<span class="computeroutput">Content-Type:</span> HTTP header field  when
returning the HTTP header to the client.  This can be overridden by calling
functions such as <span class="computeroutput">http_header()</span>.
</p>
<p>
The HTTP server uses the character set mainly to correctly format
values output through the <span class="computeroutput">http_value()</span> function or its VSP
equivalent &lt;?= ...&gt;.
In these cases wide values and XML entities - the results of XML
processing functions like <span class="computeroutput">xpath_contains()</span> - are
represented using the HTTP/XML translation rules described above.
The same rules apply to results returned by the FOR XML directive, by XML
Views, and to WebDAV content.
</p>

<br />

<a name="charsetxmlproc" />
    <h4>5.1.6.4. Character Set Use in the XML Processor</h4>

	<p>
The Virtuoso embedded XML parser correctly processes all encodings defined
in the SYS_CHARSETS table, as well as UTF-8.
</p>
<br />


<a name="gensql" />
    <h4>5.1.6.5. Generation of SQL</h4>

<p>The <span class="computeroutput">xpath()</span> and
<span class="computeroutput">xpath_contains()</span> functions translate their expressions as follows:</p>

<a name="inputproc" />
    <h5>5.1.6.5.1. Input Processing</h5>
<ul>
      <li>
Narrow strings are translated to Unicode as per the character set
and then to UTF-8, which is the internal encoding used by the Virtuoso XML tools.
</li>
      <li>
SQL Views and FOR XML directives take their values from narrow columns by first
converting them to Unicode based on the database character set and then to UTF-8.
</li>
    </ul>
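<p>
The conversion chain described above (narrow characters to Unicode via the
character set, then Unicode to UTF-8) can be sketched with Python&#39;s codecs,
using ISO-8859-1 as a stand-in for the database character set (an assumption
for illustration only):
</p>

```python
# narrow -> Unicode -> UTF-8, with ISO-8859-1 standing in for the
# database character set (illustrative assumption).
narrow = b"caf\xe9"                         # narrow string in the DB charset
unicode_text = narrow.decode("iso-8859-1")  # step 1: narrow -> Unicode
utf8 = unicode_text.encode("utf-8")         # step 2: Unicode -> UTF-8
print(utf8)  # b'caf\xc3\xa9'
```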
<br />


<a name="outputproc" />
    <h5>5.1.6.5.2. Output Processing</h5>
<ul>
      <li>
Almost all the XML processors and generators return their values as type
DV_XML_ENTITY (__tag() 230). If such a value&#39;s character
representation is requested either by CAST or by
http_value() then Virtuoso converts it to narrow
characters using the HTTP/XML translation rules given above.
</li>
      <li>
String values returned by XPath expressions are sent to clients as NCHAR
values, which the clients then convert to
narrow characters if needed.
</li>
    </ul>

<br />
<br />
<br />

	
		<a name="dbccollationsdef" />
    <h3>5.1.7. Creating A Collation</h3>
			<p>
Virtuoso supports collation orders for CHAR and VARCHAR fields that
differ from the binary order, as per ANSI SQL-92.  When comparing strings
using a collation, Virtuoso compares the &quot;weights&quot; of the
characters instead of their codes.  This allows different
characters to compare as equal (for example, case-insensitive comparisons).
</p>
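<p>
The weight-based comparison can be sketched as follows.  The case-insensitive
weight function below is illustrative; it is not a real Virtuoso collation
table.
</p>

```python
# Sketch of weight-based comparison: a collation maps each character
# code to a weight, and comparison runs over weights instead of codes.
# The case-insensitive weights here are illustrative only.

def collation_weight(ch):
    # Give every lowercase letter the weight of its uppercase partner.
    return ord(ch.upper()) if ch.isalpha() else ord(ch)

def collate_compare(a, b):
    """Return -1, 0, or 1, comparing by weights rather than codes."""
    wa = [collation_weight(c) for c in a]
    wb = [collation_weight(c) for c in b]
    return (wa > wb) - (wa < wb)

print(collate_compare("Virtuoso", "VIRTUOSO"))  # 0: equal under this collation
```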
		<p>
A collation is created by supplying a collation definition text file to
the <span class="computeroutput">collation_define()</span> SQL function.  The collation definition file contains a list of
exceptions to the binary collation order.  Each exception is a &lt;character
code&gt; = &lt;collation weight&gt; pair.  For example, a case-insensitive collation can be defined by giving
all the lowercase letters the same collation weights as the corresponding uppercase ones.
</p>
		
			<a name="coldeffile" />
    <h4>5.1.7.1. Collation Definition File</h4>
			<p>
The collation definition file must follow these guidelines:
</p>
			<ul>
				<li>
					<p>Each definition should reside on a separate line.</p>
				</li>
				<li>
					<p>The format of the definition is: &lt;CHAR&gt;=&lt;CODE&gt;, 
          where <span class="computeroutput">CHAR</span> and <span class="computeroutput">CODE</span> can 
          be either the letters themselves, or their decimal codes.  For 
          example: &#39;67=68&#39; is the same as &#39;C=D&#39; using the ASCII character set.  
          For Unicode collations the codes can exceed the byte boundary.</p>
				</li>
			</ul>
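<p>
As an illustration, the start of a case-insensitive collation definition file
following these guidelines might look like this; the fragment is hypothetical,
not a shipped Virtuoso collation:
</p>

```
a=A
b=B
c=C
```

<p>
The same three definitions could equally be written with decimal codes, i.e.
97=65, 98=66, and 99=67.
</p>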
<p>You can define a new collation using the following function:
</p>
<p>
      <a href="fn_collation_define.html">
  collation_define (
    <span class="parameter">COLLATION_NAME</span>, 
    <span class="parameter">FILE_PATH</span>, 
    <span class="parameter">ADD_TYPE</span>)
  </a>
    </p>
		<br />
		
			<a name="dbconssys_collations" />
    <h4>5.1.7.2. Collations System Table</h4>
			<p>
The SYS_COLLATIONS system table holds the data for all defined collations. It has the following structure:
</p>
			<div>
      <pre class="programlisting">
CREATE TABLE SYS_COLLATIONS (
    COLL_NAME VARCHAR,
    COLL_TABLE LONG VARBINARY,
    COLL_IS_WIDE INTEGER);
</pre>
    </div>
			<p>
<span class="computeroutput">COLL_NAME</span> is the fully qualified name of the
collation - its identifier.
</p>
			<p>
<span class="computeroutput">COLL_TABLE</span> holds the collation table itself.  This
is 256 bytes for a narrow collation or 65536 wide characters for a wide one.
</p>
			<p>
<span class="computeroutput">COLL_IS_WIDE</span> holds the collation&#39;s type: 0 for
CHAR and 1 for NCHAR. An 8-bit collation cannot be used by anything that
requires NCHAR data and vice versa.
</p>
			<p>
A collation can be deleted by deleting its row from SYS_COLLATIONS.
</p>
<div class="note">
      <div class="notetitle">Note</div>
<p>The collation will still be available until the server is restarted,
as its definition is cached in memory.
</p>
</div>
		<br />
		
			<a name="collation" />
    <h4>5.1.7.3. Collations and Column Data</h4>
			<p>
A collation is a property of the column holding the data, so all comparisons
involving that column use its collation.  SQL functions strip collation data from the
column; for example, if a column &quot;CompanyName&quot; has an assigned collation &quot;Spanish&quot;,
then the SQL call <span class="computeroutput">LEFT (CompanyName, 10)</span> will use the default collation.
</p>
			<p>
Collations can be defined on a per-column basis, at table creation time,
and on a per-database basis as a configuration parameter.
There is a special form of the CAST operator that allows casting a column
to a collation.
</p>
			<p>
A collation identifier has the same form as any other SQL identifier
(&lt;qualifier&gt;.&lt;owner&gt;.&lt;name&gt;) and it can be
escaped with the same syntax as other identifiers.
</p>
			
				<a name="tablecoll" />
    <h5>5.1.7.3.1. Defining a Collation for a Table Column</h5>
				<p>
You may assign a collation to a column at table creation using the following syntax:
</p>
				<div>
      <pre class="programlisting">
create table TABLE_NAME (
...
COLLATED1   VARCHAR(50) COLLATE Spanish,
COLLATED2   CHAR(20) COLLATE DB.DBA.Spanish,
....
)
</pre>
    </div>
				<p>
Assigning a collation to a non-character column gives an error.
</p>
				<p>
If the COLLATE clause is omitted, the default database collation is used.
</p>
				<p>
On database start-up, the collation for each table column is loaded from the SYS_COLLATIONS table;
if it is not found, the COLLATE attribute is ignored until the next restart.
</p>
			<br />
			
				<a name="dbcoll" />
    <h5>5.1.7.3.2. Defining Database-Wide Collations</h5>
				<p>
The database&#39;s default collation is defined by the configuration 
parameter &quot;Collation&quot; in the &quot;Parameters&quot; section of 
the <a href="databaseadmsrv.html#VIRTINI">virtuoso.ini</a> file.  This database wide 
collation is the default system collation used where none other is specified.  
This setting can only be changed in the virtuoso.ini file and hence requires 
a Virtuoso server restart.  As with all collations, legal values are those 
contained in the DB.DBA.SYS_COLLATIONS table.  The list can be retrieved using 
<a href="fn_charsets_list.html">charsets_list(1)</a>.
    </p>
			<br />
		<br />
	<br />
<table border="0" width="90%" id="navbarbottom">
    <tr>
        <td align="left" width="33%">
          <a href="concepts.html" title="Conceptual Overview">Previous</a>
          <br />Contents of Conceptual Overview</td>
     <td align="center" width="34%">
          <a href="concepts.html">Chapter Contents</a>
     </td>
        <td align="right" width="33%">
          <a href="thevdbrel.html" title="Virtual Database (VDB) Engine">Next</a>
          <br />Virtual Database (VDB) Engine</td>
    </tr>
    </table>
  </div>
  <div id="footer">
    <div>Copyright © 1999 - 2009 OpenLink Software All rights reserved.</div>
   <div id="validation">
    <a href="http://validator.w3.org/check/referer">
        <img src="http://www.w3.org/Icons/valid-xhtml10" alt="Valid XHTML 1.0!" height="31" width="88" />
    </a>
    <a href="http://jigsaw.w3.org/css-validator/">
        <img src="http://jigsaw.w3.org/css-validator/images/vcss" alt="Valid CSS!" height="31" width="88" />
    </a>
   </div>
  </div>
 </body>
</html>