<html><head><meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"><title>Sphinx 0.9.8 reference manual</title><style type="text/css">
pre.programlisting
{
	background-color:	#f0f0f0;
	padding:			0.5em;
	margin-left:		2em;
	margin-right:		2em;
}
</style><meta name="generator" content="DocBook XSL Stylesheets V1.70.1"></head><body bgcolor="white" text="black" link="#0000FF" vlink="#840084" alink="#0000FF"><div class="article" lang="en"><div class="titlepage"><div><div><h1 class="title"><a name="id94318"></a>Sphinx 0.9.8 reference manual</h1></div><div><h3 class="subtitle"><i>Free open-source SQL full-text search engine</i></h3></div><div><p class="copyright">Copyright &copy; 2001-2008 Andrew Aksyonoff, <code class="email">&lt;<a href="mailto:shodan(at)shodan.ru">shodan(at)shodan.ru</a>&gt;</code></p></div></div><hr></div><div class="toc"><p><b>Table of Contents</b></p><dl><dt><span class="sect1"><a href="#intro">1. Introduction</a></span></dt><dd><dl><dt><span class="sect2"><a href="#about">1.1. About</a></span></dt><dt><span class="sect2"><a href="#features">1.2. Sphinx features</a></span></dt><dt><span class="sect2"><a href="#getting">1.3. Where to get Sphinx</a></span></dt><dt><span class="sect2"><a href="#license">1.4. License</a></span></dt><dt><span class="sect2"><a href="#author">1.5. Author and contributors</a></span></dt><dt><span class="sect2"><a href="#history">1.6. History</a></span></dt></dl></dd><dt><span class="sect1"><a href="#installation">2. Installation</a></span></dt><dd><dl><dt><span class="sect2"><a href="#supported-system">2.1. Supported systems</a></span></dt><dt><span class="sect2"><a href="#required-tools">2.2. Required tools</a></span></dt><dt><span class="sect2"><a href="#installing">2.3. Installing Sphinx</a></span></dt><dt><span class="sect2"><a href="#install-problems">2.4. Known installation issues</a></span></dt><dt><span class="sect2"><a href="#quick-tour">2.5. Quick Sphinx usage tour</a></span></dt></dl></dd><dt><span class="sect1"><a href="#indexing">3. Indexing</a></span></dt><dd><dl><dt><span class="sect2"><a href="#sources">3.1. Data sources</a></span></dt><dt><span class="sect2"><a href="#attributes">3.2. Attributes</a></span></dt><dt><span class="sect2"><a href="#mva">3.3. MVA (multi-valued attributes)</a></span></dt><dt><span class="sect2"><a href="#indexes">3.4. Indexes</a></span></dt><dt><span class="sect2"><a href="#data-restrictions">3.5. Restrictions on the source data</a></span></dt><dt><span class="sect2"><a href="#charsets">3.6. Charsets, case folding, and translation tables</a></span></dt><dt><span class="sect2"><a href="#sql">3.7. SQL data sources (MySQL, PostgreSQL)</a></span></dt><dt><span class="sect2"><a href="#xmlpipe">3.8. xmlpipe data source</a></span></dt><dt><span class="sect2"><a href="#xmlpipe2">3.9. xmlpipe2 data source</a></span></dt><dt><span class="sect2"><a href="#live-updates">3.10. Live index updates</a></span></dt><dt><span class="sect2"><a href="#index-merging">3.11. Index merging</a></span></dt></dl></dd><dt><span class="sect1"><a href="#searching">4. Searching</a></span></dt><dd><dl><dt><span class="sect2"><a href="#matching-modes">4.1. Matching modes</a></span></dt><dt><span class="sect2"><a href="#boolean-syntax">4.2. Boolean query syntax</a></span></dt><dt><span class="sect2"><a href="#extended-syntax">4.3. Extended query syntax</a></span></dt><dt><span class="sect2"><a href="#weighting">4.4. Weighting</a></span></dt><dt><span class="sect2"><a href="#sorting-modes">4.5. Sorting modes</a></span></dt><dt><span class="sect2"><a href="#clustering">4.6. Grouping (clustering) search results </a></span></dt><dt><span class="sect2"><a href="#distributed">4.7. Distributed searching</a></span></dt><dt><span class="sect2"><a href="#query-log-format">4.8. 
<code class="filename">searchd</code> query log format</a></span></dt></dl></dd><dt><span class="sect1"><a href="#api-reference">5. API reference</a></span></dt><dd><dl><dt><span class="sect2"><a href="#api-funcgroup-general">5.1. General API functions</a></span></dt><dd><dl><dt><span class="sect3"><a href="#api-func-getlasterror">5.1.1. GetLastError</a></span></dt><dt><span class="sect3"><a href="#api-func-getlastwarning">5.1.2. GetLastWarning</a></span></dt><dt><span class="sect3"><a href="#api-func-setserver">5.1.3. SetServer</a></span></dt><dt><span class="sect3"><a href="#api-func-setretries">5.1.4. SetRetries</a></span></dt><dt><span class="sect3"><a href="#api-func-setarrayresult">5.1.5. SetArrayResult</a></span></dt></dl></dd><dt><span class="sect2"><a href="#api-funcgroup-general-query-settings">5.2. General query settings</a></span></dt><dd><dl><dt><span class="sect3"><a href="#api-func-setlimits">5.2.1. SetLimits</a></span></dt><dt><span class="sect3"><a href="#api-func-setmaxquerytime">5.2.2. SetMaxQueryTime</a></span></dt></dl></dd><dt><span class="sect2"><a href="#api-funcgroup-fulltext-query-settings">5.3. Full-text search query settings</a></span></dt><dd><dl><dt><span class="sect3"><a href="#api-func-setmatchmode">5.3.1. SetMatchMode</a></span></dt><dt><span class="sect3"><a href="#api-func-setrankingmode">5.3.2. SetRankingMode</a></span></dt><dt><span class="sect3"><a href="#api-func-setsortmode">5.3.3. SetSortMode</a></span></dt><dt><span class="sect3"><a href="#api-func-setweights">5.3.4. SetWeights</a></span></dt><dt><span class="sect3"><a href="#api-func-setfieldweights">5.3.5. SetFieldWeights</a></span></dt><dt><span class="sect3"><a href="#api-func-setindexweights">5.3.6. SetIndexWeights</a></span></dt></dl></dd><dt><span class="sect2"><a href="#api-funcgroup-filtering">5.4. Result set filtering settings</a></span></dt><dd><dl><dt><span class="sect3"><a href="#api-func-setidrange">5.4.1. SetIDRange</a></span></dt><dt><span class="sect3"><a href="#api-func-setfilter">5.4.2. SetFilter</a></span></dt><dt><span class="sect3"><a href="#api-func-setfilterrange">5.4.3. SetFilterRange</a></span></dt><dt><span class="sect3"><a href="#api-func-setfilterfloatrange">5.4.4. SetFilterFloatRange</a></span></dt><dt><span class="sect3"><a href="#api-func-setgeoanchor">5.4.5. SetGeoAnchor</a></span></dt></dl></dd><dt><span class="sect2"><a href="#api-funcgroup-groupby">5.5. GROUP BY settings</a></span></dt><dd><dl><dt><span class="sect3"><a href="#api-func-setgroupby">5.5.1. SetGroupBy</a></span></dt><dt><span class="sect3"><a href="#api-func-setgroupdistinct">5.5.2. SetGroupDistinct</a></span></dt></dl></dd><dt><span class="sect2"><a href="#api-funcgroup-querying">5.6. Querying</a></span></dt><dd><dl><dt><span class="sect3"><a href="#api-func-query">5.6.1. Query</a></span></dt><dt><span class="sect3"><a href="#api-func-addquery">5.6.2. AddQuery</a></span></dt><dt><span class="sect3"><a href="#api-func-runqueries">5.6.3. RunQueries</a></span></dt><dt><span class="sect3"><a href="#api-func-resetfilters">5.6.4. ResetFilters</a></span></dt><dt><span class="sect3"><a href="#api-func-resetgroupby">5.6.5. ResetGroupBy</a></span></dt></dl></dd><dt><span class="sect2"><a href="#api-funcgroup-additional-functionality">5.7. Additional functionality</a></span></dt><dd><dl><dt><span class="sect3"><a href="#api-func-buildexcerpts">5.7.1. BuildExcerpts</a></span></dt><dt><span class="sect3"><a href="#api-func-updateatttributes">5.7.2. 
UpdateAttributes</a></span></dt></dl></dd></dl></dd><dt><span class="sect1"><a href="#sphinxse">6. MySQL storage engine (SphinxSE)</a></span></dt><dd><dl><dt><span class="sect2"><a href="#sphinxse-overview">6.1. SphinxSE overview</a></span></dt><dt><span class="sect2"><a href="#sphinxse-installing">6.2. Installing SphinxSE</a></span></dt><dd><dl><dt><span class="sect3"><a href="#sphinxse-mysql50">6.2.1. Compiling MySQL 5.0.x with SphinxSE</a></span></dt><dt><span class="sect3"><a href="#sphinxse-mysql51">6.2.2. Compiling MySQL 5.1.x with SphinxSE</a></span></dt><dt><span class="sect3"><a href="#sphinxse-checking">6.2.3. Checking SphinxSE installation</a></span></dt></dl></dd><dt><span class="sect2"><a href="#sphinxse-using">6.3. Using SphinxSE</a></span></dt></dl></dd><dt><span class="sect1"><a href="#reporting-bugs">7. Reporting bugs</a></span></dt><dt><span class="sect1"><a href="#conf-reference">8. <code class="filename">sphinx.conf</code> options reference</a></span></dt><dd><dl><dt><span class="sect2"><a href="#confgroup-source">8.1. Data source configuration options</a></span></dt><dd><dl><dt><span class="sect3"><a href="#conf-source-type">8.1.1. type</a></span></dt><dt><span class="sect3"><a href="#conf-sql-host">8.1.2. sql_host</a></span></dt><dt><span class="sect3"><a href="#conf-sql-port">8.1.3. sql_port</a></span></dt><dt><span class="sect3"><a href="#conf-sql-user">8.1.4. sql_user</a></span></dt><dt><span class="sect3"><a href="#conf-sql-pass">8.1.5. sql_pass</a></span></dt><dt><span class="sect3"><a href="#conf-sql-db">8.1.6. sql_db</a></span></dt><dt><span class="sect3"><a href="#conf-sql-sock">8.1.7. sql_sock</a></span></dt><dt><span class="sect3"><a href="#conf-mysql-connect-flags">8.1.8. mysql_connect_flags</a></span></dt><dt><span class="sect3"><a href="#conf-sql-query-pre">8.1.9. sql_query_pre</a></span></dt><dt><span class="sect3"><a href="#conf-sql-query">8.1.10. sql_query</a></span></dt><dt><span class="sect3"><a href="#conf-sql-query-range">8.1.11. sql_query_range</a></span></dt><dt><span class="sect3"><a href="#conf-sql-range-step">8.1.12. sql_range_step</a></span></dt><dt><span class="sect3"><a href="#conf-sql-attr-uint">8.1.13. sql_attr_uint</a></span></dt><dt><span class="sect3"><a href="#conf-sql-attr-bool">8.1.14. sql_attr_bool</a></span></dt><dt><span class="sect3"><a href="#conf-sql-attr-timestamp">8.1.15. sql_attr_timestamp</a></span></dt><dt><span class="sect3"><a href="#conf-sql-attr-str2ordinal">8.1.16. sql_attr_str2ordinal</a></span></dt><dt><span class="sect3"><a href="#conf-sql-attr-float">8.1.17. sql_attr_float</a></span></dt><dt><span class="sect3"><a href="#conf-sql-attr-multi">8.1.18. sql_attr_multi</a></span></dt><dt><span class="sect3"><a href="#conf-sql-query-post">8.1.19. sql_query_post</a></span></dt><dt><span class="sect3"><a href="#conf-sql-query-post-index">8.1.20. sql_query_post_index</a></span></dt><dt><span class="sect3"><a href="#conf-sql-ranged-throttle">8.1.21. sql_ranged_throttle</a></span></dt><dt><span class="sect3"><a href="#conf-sql-query-info">8.1.22. sql_query_info</a></span></dt><dt><span class="sect3"><a href="#conf-xmlpipe-command">8.1.23. xmlpipe_command</a></span></dt><dt><span class="sect3"><a href="#conf-xmlpipe-field">8.1.24. xmlpipe_field</a></span></dt><dt><span class="sect3"><a href="#conf-xmlpipe-attr-uint">8.1.25. xmlpipe_attr_uint</a></span></dt><dt><span class="sect3"><a href="#conf-xmlpipe-attr-bool">8.1.26. xmlpipe_attr_bool</a></span></dt><dt><span class="sect3"><a href="#conf-xmlpipe-attr-timestamp">8.1.27. 
xmlpipe_attr_timestamp</a></span></dt><dt><span class="sect3"><a href="#conf-xmlpipe-attr-str2ordinal">8.1.28. xmlpipe_attr_str2ordinal</a></span></dt><dt><span class="sect3"><a href="#conf-xmlpipe-attr-float">8.1.29. xmlpipe_attr_float</a></span></dt><dt><span class="sect3"><a href="#conf-xmlpipe-attr-multi">8.1.30. xmlpipe_attr_multi</a></span></dt></dl></dd><dt><span class="sect2"><a href="#confgroup-index">8.2. Index configuration options</a></span></dt><dd><dl><dt><span class="sect3"><a href="#conf-index-type">8.2.1. type</a></span></dt><dt><span class="sect3"><a href="#conf-source">8.2.2. source</a></span></dt><dt><span class="sect3"><a href="#conf-path">8.2.3. path</a></span></dt><dt><span class="sect3"><a href="#conf-docinfo">8.2.4. docinfo</a></span></dt><dt><span class="sect3"><a href="#conf-mlock">8.2.5. mlock</a></span></dt><dt><span class="sect3"><a href="#conf-morphology">8.2.6. morphology</a></span></dt><dt><span class="sect3"><a href="#conf-stopwords">8.2.7. stopwords</a></span></dt><dt><span class="sect3"><a href="#conf-wordforms">8.2.8. wordforms</a></span></dt><dt><span class="sect3"><a href="#conf-exceptions">8.2.9. exceptions</a></span></dt><dt><span class="sect3"><a href="#conf-min-word-len">8.2.10. min_word_len</a></span></dt><dt><span class="sect3"><a href="#conf-charset-type">8.2.11. charset_type</a></span></dt><dt><span class="sect3"><a href="#conf-charset-table">8.2.12. charset_table</a></span></dt><dt><span class="sect3"><a href="#conf-ignore-chars">8.2.13. ignore_chars</a></span></dt><dt><span class="sect3"><a href="#conf-min-prefix-len">8.2.14. min_prefix_len</a></span></dt><dt><span class="sect3"><a href="#conf-min-infix-len">8.2.15. min_infix_len</a></span></dt><dt><span class="sect3"><a href="#conf-prefix-fields">8.2.16. prefix_fields</a></span></dt><dt><span class="sect3"><a href="#conf-infix-fields">8.2.17. infix_fields</a></span></dt><dt><span class="sect3"><a href="#conf-enable-star">8.2.18. enable_star</a></span></dt><dt><span class="sect3"><a href="#conf-ngram-len">8.2.19. ngram_len</a></span></dt><dt><span class="sect3"><a href="#conf-ngram-chars">8.2.20. ngram_chars</a></span></dt><dt><span class="sect3"><a href="#conf-phrase-boundary">8.2.21. phrase_boundary</a></span></dt><dt><span class="sect3"><a href="#conf-phrase-boundary-step">8.2.22. phrase_boundary_step</a></span></dt><dt><span class="sect3"><a href="#conf-html-strip">8.2.23. html_strip</a></span></dt><dt><span class="sect3"><a href="#conf-html-index-attrs">8.2.24. html_index_attrs</a></span></dt><dt><span class="sect3"><a href="#conf-html-remove-elements">8.2.25. html_remove_elements</a></span></dt><dt><span class="sect3"><a href="#conf-local">8.2.26. local</a></span></dt><dt><span class="sect3"><a href="#conf-agent">8.2.27. agent</a></span></dt><dt><span class="sect3"><a href="#conf-agent-connect-timeout">8.2.28. agent_connect_timeout</a></span></dt><dt><span class="sect3"><a href="#conf-agent-query-timeout">8.2.29. agent_query_timeout</a></span></dt><dt><span class="sect3"><a href="#conf-preopen">8.2.30. preopen</a></span></dt></dl></dd><dt><span class="sect2"><a href="#confgroup-indexer">8.3. <code class="filename">indexer</code> program configuration options</a></span></dt><dd><dl><dt><span class="sect3"><a href="#conf-mem-limit">8.3.1. mem_limit</a></span></dt><dt><span class="sect3"><a href="#conf-max-iops">8.3.2. max_iops</a></span></dt><dt><span class="sect3"><a href="#conf-max-iosize">8.3.3. 
max_iosize</a></span></dt></dl></dd><dt><span class="sect2"><a href="#confgroup-searchd">8.4. <code class="filename">searchd</code> program configuration options</a></span></dt><dd><dl><dt><span class="sect3"><a href="#conf-address">8.4.1. address</a></span></dt><dt><span class="sect3"><a href="#conf-port">8.4.2. port</a></span></dt><dt><span class="sect3"><a href="#conf-log">8.4.3. log</a></span></dt><dt><span class="sect3"><a href="#conf-query-log">8.4.4. query_log</a></span></dt><dt><span class="sect3"><a href="#conf-read-timeout">8.4.5. read_timeout</a></span></dt><dt><span class="sect3"><a href="#conf-max-children">8.4.6. max_children</a></span></dt><dt><span class="sect3"><a href="#conf-pid-file">8.4.7. pid_file</a></span></dt><dt><span class="sect3"><a href="#conf-max-matches">8.4.8. max_matches</a></span></dt><dt><span class="sect3"><a href="#conf-seamless-rotate">8.4.9. seamless_rotate</a></span></dt><dt><span class="sect3"><a href="#conf-preopen-indexes">8.4.10. preopen_indexes</a></span></dt><dt><span class="sect3"><a href="#conf-unlink-old">8.4.11. unlink_old</a></span></dt></dl></dd></dl></dd><dt><span class="appendix"><a href="#changelog">A. Sphinx revision history</a></span></dt></dl></div><div class="sect1" lang="en"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="intro"></a>1.&nbsp;Introduction</h2></div></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="about"></a>1.1.&nbsp;About</h3></div></div></div><p>
Sphinx is a full-text search engine, distributed under GPL version 2.
Commercial licensing (eg. for embedded use) is also available upon request. 
</p><p>
Generally, it's a standalone search engine, meant to provide fast,
size-efficient and relevant full-text search functions to other
applications. Sphinx was specially designed to integrate well with
SQL databases and scripting languages.
</p><p>
Currently, built-in data source drivers support fetching data either via
a direct connection to MySQL or PostgreSQL, or from a pipe in a custom XML
format. Adding new drivers (eg. to natively support some other DBMSes)
is designed to be as easy as possible.
</p><p>
The search API is natively ported to PHP, Python, Perl, Ruby, and Java, and
is also available as a pluggable MySQL storage engine. The API is very
lightweight, so porting it to a new language is known to take a few hours.
</p><p>
As for the name, Sphinx is an acronym which is officially decoded
as SQL Phrase Index. Yes, I know about CMU's Sphinx project. 
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="features"></a>1.2.&nbsp;Sphinx features</h3></div></div></div><p>
</p><div class="itemizedlist"><ul type="disc"><li>high indexing speed (upto 10 MB/sec on modern CPUs);</li><li>high search speed (avg query is under 0.1 sec on 2-4 GB text collections);</li><li>high scalability (upto 100 GB of text, upto 100 M documents on a single CPU);</li><li>provides good relevance ranking through combination of phrase proximity ranking and statistical (BM25) ranking;</li><li>provides distributed searching capabilities;</li><li>provides document exceprts generation;</li><li>provides searching from within MySQL through pluggable storage engine;</li><li>supports boolean, phrase, and word proximity queries;</li><li>supports multiple full-text fields per document (upto 32 by default);</li><li>supports multiple additional attributes per document (ie. groups, timestamps, etc);</li><li>supports stopwords;</li><li>supports both single-byte encodings and UTF-8;</li><li>supports English stemming, Russian stemming, and Soundex for morphology;</li><li>supports MySQL natively (MyISAM and InnoDB tables are both supported);</li><li>supports PostgreSQL natively.</li></ul></div><p>
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="getting"></a>1.3.&nbsp;Where to get Sphinx</h3></div></div></div><p>Sphinx is available through its official Web site at <a href="http://www.sphinxsearch.com/" target="_top">http://www.sphinxsearch.com/</a>.
</p><p>Currently, the Sphinx distribution tarball includes the following software:
</p><div class="itemizedlist"><ul type="disc"><li><code class="filename">indexer</code>: an utility which creates fulltext indexes;</li><li><code class="filename">search</code>: a simple command-line (CLI) test utility which searches through fulltext indexes;</li><li><code class="filename">searchd</code>: a daemon which enables external software (eg. Web applications) to search through fulltext indexes;</li><li><code class="filename">sphinxapi</code>: a set of searchd client API libraries for popular Web scripting languages (PHP, Python, Perl, Ruby).</li></ul></div><p>
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="license"></a>1.4.&nbsp;License</h3></div></div></div><p>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License,
or (at your option) any later version. See COPYING file for details.
</p><p>
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
more details. 
</p><p>
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software Foundation, Inc.,
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 
</p><p>
If you don't want to be bound by GNU GPL terms (for instance,
if you would like to embed Sphinx in your software, but would not
like to disclose its source code), please contact
<a href="#author" title="1.5.&nbsp;Author and contributors">the author</a> to obtain
a commercial license.
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="author"></a>1.5.&nbsp;Author and contributors</h3></div></div></div><h4><a name="id345203"></a>Author</h4><p>
Sphinx's initial author and current primary developer is:
</p><div class="itemizedlist"><ul type="disc"><li>Andrew Aksyonoff, <code class="email">&lt;<a href="mailto:shodan(at)shodan.ru">shodan(at)shodan.ru</a>&gt;</code></li></ul></div><p>
</p><h4><a name="id344960"></a>Contributors</h4><p>People who contributed to Sphinx and their contributions (in no particular order) are:
</p><div class="itemizedlist"><ul type="disc"><li>Robert "coredev" Bengtsson (Sweden), initial version of PostgreSQL data source;</li><li>Len Kranendonk, Perl API</li><li>Dmytro Shteflyuk, Ruby API</li></ul></div><p>
</p><p>
Many other people have contributed ideas, bug reports, fixes, etc.
Thank you!
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="history"></a>1.6.&nbsp;History</h3></div></div></div><p>
Sphinx development was started back in 2001, because I didn't manage
to find an acceptable search solution (for a database-driven Web site)
which would meet my requirements. Actually, each and every important aspect was a problem: 
</p><div class="itemizedlist"><ul type="disc"><li>search quality (ie. good relevance)
<div class="itemizedlist"><ul type="circle"><li>statistical ranking methods performed rather bad, especially on large collections of small documents (forums, blogs, etc)</li></ul></div></li><li>search speed
<div class="itemizedlist"><ul type="circle"><li>especially if searching for phrases which contain stopwords, as in "to be or not to be"</li></ul></div></li><li>moderate disk and CPU requirements when indexing
<div class="itemizedlist"><ul type="circle"><li>important in shared hosting enivronment, not to mention the indexing speed.</li></ul></div></li></ul></div><p>
</p><p>
Despite the amount of time passed and numerous improvements made in the
other solutions, there's still no solution which I personally would
be eager to migrate to. 
</p><p>
Considering that and a lot of positive feedback received from Sphinx users
during last years, the obvious decision is to continue developing Sphinx
(and, eventually, to take over the world).
</p></div></div><div class="sect1" lang="en"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="installation"></a>2.&nbsp;Installation</h2></div></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="supported-system"></a>2.1.&nbsp;Supported systems</h3></div></div></div><p>
Most modern UNIX systems with a C++ compiler should be able
to compile and run Sphinx without any modifications.
</p><p>
Systems on which Sphinx is currently known to run successfully are:
</p><div class="itemizedlist"><ul type="disc"><li>Linux 2.4.x, 2.6.x (various distributions)</li><li>Windows 2000, XP</li><li>FreeBSD 4.x, 5.x, 6.x</li><li>NetBSD 1.6, 3.0</li><li>Solaris 9, 11</li><li>Mac OS X</li></ul></div><p>
</p><p>
CPU architectures known to work include X86, X86-64, SPARC64.
</p><p>
I hope Sphinx will work on other Unix platforms as well. 
If the platform you run Sphinx on is not in this list,
please do report it.
</p><p>
At the moment, the Windows version of Sphinx is not intended to be used
in production, but rather for testing and debugging only. The two most prominent
issues are missing support for concurrent queries (client queries are stacked
at the TCP connection level instead), and missing support for index data rotation.
There are successful production installations which work around these issues.
However, running a high-volume search service under Windows
is still not recommended.
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="required-tools"></a>2.2.&nbsp;Required tools</h3></div></div></div><p>
On UNIX, you will need the following tools to build
and install Sphinx:
</p><div class="itemizedlist"><ul type="disc"><li>a working C++ compiler. GNU gcc is known to work.</li><li>a good make program. GNU make is known to work.</li></ul></div><p>
</p><p>
On Windows, you will need Microsoft Visual C/C++ Studio .NET 2003 or 2005.
Other compilers/environments will probably work as well, but for the
time being, you will have to build the makefile (or other environment-specific
project files) manually.
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="installing"></a>2.3.&nbsp;Installing Sphinx</h3></div></div></div><div class="orderedlist"><ol type="1"><li><p>
	Extract everything from the distribution tarball (haven't you already?)
	and go to the <code class="filename">sphinx</code> subdirectory:
	</p><p><strong class="userinput"><code><div class="literallayout"><p>$&nbsp;tar&nbsp;xzvf&nbsp;sphinx-0.9.7.tar.gz<br>
$&nbsp;cd&nbsp;sphinx<br>
</p></div></code></strong></p></li><li><p>Run the configuration program:</p><p><strong class="userinput"><code><div class="literallayout"><p>$&nbsp;./configure</p></div></code></strong></p><p>
	There are a number of options to configure. The complete listing may
	be obtained by using the <code class="option">--help</code> switch. The most important ones are:
	</p><div class="itemizedlist"><ul type="disc"><li><code class="option">--prefix</code>, which specifies where to install Sphinx;</li><li><code class="option">--with-mysql</code>, which specifies where to look for MySQL
			include and library files, if auto-detection fails;</li><li><code class="option">--with-pgsql</code>, which specifies where to look for PostgreSQL
			include and library files.</li></ul></div><p>
	</p></li><li><p>Build the binaries:</p><p><strong class="userinput"><code><div class="literallayout"><p>$&nbsp;make</p></div></code></strong></p></li><li><p>Install the binaries in the directory of your choice:</p><p><strong class="userinput"><code><div class="literallayout"><p>$&nbsp;make&nbsp;install</p></div></code></strong></p></li></ol></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="install-problems"></a>2.4.&nbsp;Known installation issues</h3></div></div></div><p>
If <code class="filename">configure</code> fails to locate MySQL headers and/or libraries,
try checking for and installing <code class="filename">mysql-devel</code> package. On some systems,
it is not installed by default.
</p><p>
If <code class="filename">make</code> fails with a message which look like
</p><pre class="programlisting">
/bin/sh: g++: command not found
make[1]: *** [libsphinx_a-sphinx.o] Error 127
</pre><p>
try checking for and installing <code class="filename">gcc-c++</code> package.
</p><p>
If you are getting compile-time errors which look like
</p><pre class="programlisting">
sphinx.cpp:67: error: invalid application of `sizeof' to
    incomplete type `Private::SizeError&lt;false&gt;'
</pre><p>
this means that some compile-time type size check failed.
The most probable reason is that the off_t type is less than 64-bit
on your system. As a quick hack, you can edit sphinx.h and replace off_t
with DWORD in the typedef for SphOffset_t, but note that this will prohibit
you from using full-text indexes larger than 2 GB. Even if the hack helps,
please report such issues, providing the exact error message and
compiler/OS details, so that I can properly fix them in future releases.
</p><p>
If you keep getting any other error, or the suggestions above
do not seem to help you, please don't hesitate to contact me.
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="quick-tour"></a>2.5.&nbsp;Quick Sphinx usage tour</h3></div></div></div><p>
All the example commands below assume that you installed Sphinx
in <code class="filename">/usr/local/sphinx</code>.
</p><p>
To use Sphinx, you will need to:
</p><div class="orderedlist"><ol type="1"><li><p>Create a configuration file.</p><p>
	The default configuration file name is <code class="filename">sphinx.conf</code>.
	All Sphinx programs look for this file in the current working directory
	by default.
	</p><p>
	A sample configuration file, <code class="filename">sphinx.conf.dist</code>, which has
	all the options documented, is created by <code class="filename">configure</code>.
	Copy and edit that sample file to make your own configuration:
	</p><p><strong class="userinput"><code><div class="literallayout"><p>$&nbsp;cd&nbsp;/usr/local/sphinx/etc<br>
$&nbsp;cp&nbsp;sphinx.conf.dist&nbsp;sphinx.conf<br>
$&nbsp;vi&nbsp;sphinx.conf</p></div></code></strong></p><p>
	The sample configuration file is set up to index the <code class="filename">documents</code>
	table from the MySQL database <code class="filename">test</code>; the <code class="filename">example.sql</code>
	sample data file is provided to populate that table with a few documents for testing purposes:
	</p><p><strong class="userinput"><code><div class="literallayout"><p>$&nbsp;mysql&nbsp;-u&nbsp;test&nbsp;&lt;&nbsp;/usr/local/sphinx/etc/example.sql</p></div></code></strong></p></li><li><p>Run the indexer to create full-text index from your data:</p><p><strong class="userinput"><code><div class="literallayout"><p>$&nbsp;cd&nbsp;/usr/local/sphinx/etc<br>
$&nbsp;/usr/local/sphinx/bin/indexer</p></div></code></strong></p></li><li><p>Query your newly created index!</p></li></ol></div><p>
To query the index from the command line, use the <code class="filename">search</code> utility:
</p><p><strong class="userinput"><code><div class="literallayout"><p>$&nbsp;cd&nbsp;/usr/local/sphinx/etc<br>
$&nbsp;/usr/local/sphinx/bin/search&nbsp;test</p></div></code></strong></p><p>
To query the index from your PHP scripts, you need to:
</p><div class="orderedlist"><ol type="1"><li><p>Run the search daemon which your script will talk to:</p><p><strong class="userinput"><code><div class="literallayout"><p>$&nbsp;cd&nbsp;/usr/local/sphinx/etc<br>
$&nbsp;/usr/local/sphinx/bin/searchd</p></div></code></strong></p></li><li><p>
		Run the attached PHP API test script (to ensure that the daemon
		was successfully started and is ready to serve queries):
		</p><p><strong class="userinput"><code><div class="literallayout"><p>$&nbsp;cd&nbsp;sphinx/api<br>
$&nbsp;php&nbsp;test.php&nbsp;test</p></div></code></strong></p></li><li><p>
		Include the API (it's located in <code class="filename">api/sphinxapi.php</code>)
		into your own scripts and use it.
		</p></li></ol></div><p>
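For instance, a minimal PHP script might look like this (a sketch only; it assumes <code class="filename">searchd</code> listens on localhost port 3312, the 0.9.8 default, and queries all configured indexes):
</p><pre class="programlisting">
&lt;?php
// a minimal sketch: query for the word "test" in all indexes;
// localhost:3312 is an assumption (the searchd default port)
require ( "sphinxapi.php" );

$cl = new SphinxClient ();
$cl-&gt;SetServer ( "localhost", 3312 );

$res = $cl-&gt;Query ( "test" );
if ( $res===false )
	die ( "query failed: " . $cl-&gt;GetLastError() . "\n" );

foreach ( $res["matches"] as $id =&gt; $match )
	print "found document $id, weight {$match['weight']}\n";
</pre><p>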
Happy searching!
</p></div></div><div class="sect1" lang="en"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="indexing"></a>3.&nbsp;Indexing</h2></div></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="sources"></a>3.1.&nbsp;Data sources</h3></div></div></div><p>
The data to be indexed can generally come from very different
sources: SQL databases, plain text files, HTML files, mailboxes,
and so on. From Sphinx's point of view, the data it indexes is a
set of structured <em class="glossterm">documents</em>, each of which has the
same set of <em class="glossterm">fields</em>. This is biased towards SQL, where
each row corresponds to a document, and each column to a field.
</p><p>
Depending on what source Sphinx should get the data from,
different code is required to fetch the data and prepare it for indexing.
This code is called <em class="glossterm">data source driver</em> (or simply
<em class="glossterm">driver</em> or <em class="glossterm">data source</em> for brevity).
</p><p>
At the time of this writing, there are drivers for MySQL and
PostgreSQL databases, which can connect to the database using
its native C/C++ API, run queries and fetch the data. There's
also a driver called xmlpipe, which runs a specified command
and reads the data from its <code class="filename">stdout</code>.
See <a href="#xmlpipe" title="3.8.&nbsp;xmlpipe data source">Section&nbsp;3.8, &#8220;xmlpipe data source&#8221;</a> section for the format description.
</p><p>
There can be as many sources per index as necessary. They will be
sequentially processed in the very same order which was specified in
index definition. All the documents coming from those sources
will be merged as if they were coming from a single source.
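For instance, a sketch (source and index names are hypothetical; all listed sources must provide the same set of fields and attributes):
</p><pre class="programlisting">
# two sources indexed into a single index, processed in the listed order
index posts
{
    source = posts_archive
    source = posts_recent
    path   = /var/data/posts
}
</pre><p>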
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="attributes"></a>3.2.&nbsp;Attributes</h3></div></div></div><p>
Attributes are additional values associated with each document
that can be used to perform additional filtering and sorting during search.
</p><p>
It is often desired to additionally process full-text search results
based not only on matching document ID and its rank, but on a number
of other per-document values as well. For instance, one might need to
sort news search results by date and then relevance,
or search through products within specified price range,
or limit blog search to posts made by selected users,
or group results by month. To do that efficiently, Sphinx allows you
to attach a number of additional <em class="glossterm">attributes</em>
to each document, and store their values in the full-text index.
It's then possible to use stored values to filter, sort,
or group full-text matches.
</p><p>
A good example would be a forum posts table. Assume that
only the title and content fields need to be full-text searchable -
but that sometimes it is also required to limit search to a certain
author or a sub-forum (ie. search only those rows that have some
specific values of author_id or forum_id columns in the SQL table);
or to sort matches by post_date column; or to group matching posts
by month of the post_date and calculate per-group match counts.
</p><p>
This can be achieved by specifying all the mentioned columns
(excluding title and content, which are full-text fields) as
attributes, indexing them, and then using API calls to
set up filtering, sorting, and grouping. Here is an example.
</p><h4><a name="id351785"></a>Example sphinx.conf part:</h4><p>
</p><pre class="programlisting">
...
sql_query = SELECT id, title, content, \
	author_id, forum_id, post_date FROM my_forum_posts
sql_attr_uint = author_id
sql_attr_uint = forum_id
sql_attr_timestamp = post_date
...
</pre><p>
</p><h4><a name="id351798"></a>Example application code (in PHP):</h4><p>
</p><pre class="programlisting">
// only search posts by author whose ID is 123
$cl-&gt;SetFilter ( "author_id", array ( 123 ) );

// only search posts in sub-forums 1, 3 and 7
$cl-&gt;SetFilter ( "forum_id", array ( 1,3,7 ) );

// sort found posts by posting date in descending order
$cl-&gt;SetSortMode ( SPH_SORT_ATTR_DESC, "post_date" );
</pre><p>
</p><p>
Attributes are named. Attribute names are case insensitive.
Attributes are <span class="emphasis"><em>not</em></span> full-text indexed; they are stored in the index as is.
Currently supported attribute types are:
</p><div class="itemizedlist"><ul type="disc"><li>unsigned integers (1-bit to 32-bit wide);</li><li>UNIX timestamps;</li><li>floating point values (32-bit, IEEE 754 single precision);</li><li>string ordinals (specially computed integers);</li><li><a href="#mva" title="3.3.&nbsp;MVA (multi-valued attributes)">MVA</a>, multi-value attributes (variable-length lists of 32-bit unsigned integers).</li></ul></div><p>
</p><p>
The complete set of per-document attribute values is sometimes
referred to as <em class="glossterm">docinfo</em>. Docinfos can either be
</p><div class="itemizedlist"><ul type="disc"><li>stored separately from the main full-text index data ("extern" storage, in <code class="filename">.spa</code> file), or</li><li>attached to each occurence of document ID in full-text index data ("inline" storage, in <code class="filename">.spd</code> file).</li></ul></div><p>
</p><p>
When using extern storage, a copy of <code class="filename">.spa</code> file
(with all the attribute values for all the documents) is kept in RAM by
<code class="filename">searchd</code> at all times. This is for performance reasons;
random disk I/O would be too slow. On the contrary, inline storage does not
require any additional RAM at all, but that comes at the cost of greatly
inflating the index size: remember that it copies <span class="emphasis"><em>all</em></span>
attribute values <span class="emphasis"><em>every</em></span> time the document ID
is mentioned, and that is exactly as many times as there are
different keywords in the document. Inline may be the only viable
option if you have only a few attributes and need to work with big
datasets in limited RAM. However, in most cases extern storage
makes both indexing and searching <span class="emphasis"><em>much</em></span> more efficient.
</p><p>
Search-time memory requirements for extern storage are
(1+number_of_attrs)*number_of_docs*4 bytes, ie. 10 million docs with
2 groups and 1 timestamp will take (1+2+1)*10M*4 = 160 MB of RAM.
This is <span class="emphasis"><em>PER DAEMON</em></span>, not per query. <code class="filename">searchd</code>
will allocate 160 MB on startup, read the data and keep it shared between queries.
The children will <span class="emphasis"><em>NOT</em></span> allocate any additional
copies of this data.
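The storage type is selected per-index with the <code class="option">docinfo</code> directive (see <a href="#conf-docinfo" title="8.2.4.&nbsp;docinfo">Section&nbsp;8.2.4, &#8220;docinfo&#8221;</a>); a sketch, with a hypothetical index name:
</p><pre class="programlisting">
index posts
{
    # ...
    # "extern" keeps attributes in .spa, cached in RAM by searchd;
    # "inline" duplicates them into .spd next to each document ID
    docinfo = extern
}
</pre><p>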
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="mva"></a>3.3.&nbsp;MVA (multi-valued attributes)</h3></div></div></div><p>
MVAs, or multi-valued attributes, are an important special type of per-document attributes in Sphinx.
MVAs make it possible to attach lists of values to every document.
They are useful for article tags, product categories, etc.
Filtering and group-by (but not sorting) on MVA attributes are supported.
</p><p>
Currently, MVA list entries are limited to unsigned 32-bit integers.
The list length is not limited: you can have an arbitrary number of values
attached to each document as long as RAM permits (<code class="filename">.spm</code> file
that contains the MVA values will be precached in RAM by <code class="filename">searchd</code>).
The source data can be taken either from a separate query, or from a document field;
see source type in <a href="#conf-sql-attr-multi" title="8.1.18.&nbsp;sql_attr_multi">sql_attr_multi</a>.
In the first case the query will have to return pairs of document ID and MVA values,
in the second one the field will be parsed for integer values.
There are absolutely no requirements as to incoming data order; the values will be
automatically grouped by document ID (and internally sorted within the same ID)
during indexing anyway.
</p><p>
When filtering, a document will match the filter on MVA attribute
if <span class="emphasis"><em>any</em></span> of the values satisfy the filtering condition.
(Therefore, documents that pass through exclude filters will not
contain any of the forbidden values.)
When grouping by MVA attribute, a document will contribute to as
many groups as there are different MVA values associated with that document.
For instance, if the collection contains exactly 1 document having a 'tag' MVA
with values 5, 7, and 11, grouping on 'tag' will produce 3 groups with
'@count' equal to 1 and '@groupby' key values of 5, 7, and 11 respectively.
Also note that grouping by MVA might lead to duplicate documents in the result set:
because each document can participate in many groups, it can be chosen as the best
one in more than one group, leading to duplicate IDs. The PHP API historically
uses an ordered hash on the document ID for the resulting rows, so you'll also need to use
<a href="#api-func-setarrayresult" title="5.1.5.&nbsp;SetArrayResult">SetArrayResult()</a> in order
to employ group-by on MVA with PHP API.
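For instance, a group-by on an MVA might look like this in PHP (a sketch; the 'tag' attribute and "posts" index names are assumptions):
</p><pre class="programlisting">
// a sketch: 'tag' MVA attribute and "posts" index are hypothetical
$cl = new SphinxClient ();
$cl-&gt;SetServer ( "localhost", 3312 );

// plain-array rows; required for group-by on MVA, because several
// groups may be represented by the same best document ID
$cl-&gt;SetArrayResult ( true );

// match documents where ANY of the tag values is 5, 7 or 11
$cl-&gt;SetFilter ( "tag", array ( 5, 7, 11 ) );

// one group per distinct tag value
$cl-&gt;SetGroupBy ( "tag", SPH_GROUPBY_ATTR );

$res = $cl-&gt;Query ( "test", "posts" );
</pre><p>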
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="indexes"></a>3.4.&nbsp;Indexes</h3></div></div></div><p>
To be able to answer full-text search queries fast, Sphinx needs
to build a special data structure optimized for such queries from
your text data. This structure is called an <em class="glossterm">index</em>; and
the process of building an index from text is called <em class="glossterm">indexing</em>.
</p><p>
Different index types are well suited for different tasks.
For example, an on-disk tree-based index would be easy to
update (ie. insert new documents into an existing index), but rather
slow to search. Therefore, Sphinx architecture allows for different
<em class="glossterm">index types</em> to be implemented easily.
</p><p>
The only index type which is implemented in Sphinx at the moment is
designed for maximum indexing and searching speed. This comes at a cost
of updates being really slow; theoretically, it might be slower to
update this type of index than to reindex it from scratch.
However, this can very frequently be worked around with
multiple indexes; see <a href="#live-updates" title="3.10.&nbsp;Live index updates">Section&nbsp;3.10, &#8220;Live index updates&#8221;</a> for details.
</p><p>
It is planned to implement more index types, including a
type which would be updateable in real time.
</p><p>
There can be as many indexes per configuration file as necessary.
<code class="filename">indexer</code> utility can reindex either all of them
(if <code class="option">--all</code> option is specified), or a certain explicitly
specified subset. <code class="filename">searchd</code> utility will serve all
the specified indexes, and the clients can specify what indexes to
search in run time.
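For instance (index names are hypothetical):
</p><pre class="programlisting">
$ indexer --all              # reindex every index in sphinx.conf
$ indexer main delta         # reindex only "main" and "delta"
$ indexer delta --rotate     # reindex "delta" and signal a running searchd
</pre><p>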
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="data-restrictions"></a>3.5.&nbsp;Restrictions on the source data</h3></div></div></div><p>
There are a few different restrictions imposed on the source data
which is going to be indexed by Sphinx, of which the single most
important one is:
</p><p><span class="bold"><strong>
ALL DOCUMENT IDS MUST BE UNIQUE UNSIGNED NON-ZERO INTEGER NUMBERS (32-BIT OR 64-BIT, DEPENDING ON BUILD TIME SETTINGS).
</strong></span></p><p>
If this requirement is not met, different bad things can happen.
For instance, Sphinx can crash with an internal assertion while indexing;
or produce strange results when searching due to conflicting IDs.
Also, a 1000-pound gorilla might eventually come out of your
display and start throwing barrels at you. You've been warned.
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="charsets"></a>3.6.&nbsp;Charsets, case folding, and translation tables</h3></div></div></div><p>
When building an index, Sphinx fetches documents from
the specified sources, splits the text into words, and does
case folding so that "Abc", "ABC" and "abc" would be treated
as the same word (or, to be pedantic, <em class="glossterm">term</em>).
</p><p>
To do that properly, Sphinx needs to know
</p><div class="itemizedlist"><ul type="disc"><li>what encoding is the source text in;</li><li>what characters are letters and what are not;</li><li>what letters should be folded to what letters.</li></ul></div><p>
This should be configured on a per-index basis using
<code class="option"><a href="#conf-charset-type" title="8.2.11.&nbsp;charset_type">charset_type</a></code> and 
<code class="option"><a href="#conf-charset-table" title="8.2.12.&nbsp;charset_table">charset_table</a></code> options.
<code class="option"><a href="#conf-charset-type" title="8.2.11.&nbsp;charset_type">charset_type</a></code>
specifies whether the document encoding is single-byte (SBCS) or UTF-8.
<code class="option"><a href="#conf-charset-table" title="8.2.12.&nbsp;charset_table">charset_table</a></code>
specifies the table that maps letter characters to their case
folded versions. The characters that are not in the table are considered
to be non-letters and will be treated as word separators when indexing
or searching through this index.
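For instance, a minimal UTF-8 setup that folds Latin letters to lowercase might look like this (a sketch; the default tables are considerably larger):
</p><pre class="programlisting">
index example
{
    # ...
    charset_type  = utf-8
    charset_table = 0..9, A..Z-&gt;a..z, _, a..z
}
</pre><p>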
</p><p>
Note that while the default tables do not include the space character
(ASCII code 0x20, Unicode U+0020) as a letter, it's in fact
<span class="emphasis"><em>perfectly legal</em></span> to do so. This can be
useful, for instance, for indexing tag clouds, so that space-separated
word sets would index as a <span class="emphasis"><em>single</em></span> search query term.
</p><p>
Default tables currently include English and Russian characters.
Please do submit your tables for other languages!
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="sql"></a>3.7.&nbsp;SQL data sources (MySQL, PostgreSQL)</h3></div></div></div><p>
With all the SQL drivers, indexing generally works as follows.
</p><div class="itemizedlist"><ul type="disc"><li>connection to the database is established;</li><li>pre-query (see <a href="#conf-sql-query-pre" title="8.1.9.&nbsp;sql_query_pre">Section&nbsp;8.1.9, &#8220;sql_query_pre&#8221;</a>) is executed
	to perform any necessary initial setup, such as setting per-connection encoding with MySQL;</li><li>main query (see <a href="#conf-sql-query" title="8.1.10.&nbsp;sql_query">Section&nbsp;8.1.10, &#8220;sql_query&#8221;</a>) is executed and the rows it returns are indexed;</li><li>post-query (see <a href="#conf-sql-query-post" title="8.1.19.&nbsp;sql_query_post">Section&nbsp;8.1.19, &#8220;sql_query_post&#8221;</a>) is executed
	to perform any necessary cleanup;</li><li>connection to the database is closed;</li><li>indexer does the sorting phase (to be pedantic, index-type specific post-processing);</li><li>connection to the database is established again;</li><li>post-index query (see <a href="#conf-sql-query-post-index" title="8.1.20.&nbsp;sql_query_post_index">Section&nbsp;8.1.20, &#8220;sql_query_post_index&#8221;</a>) is executed
	to perform any necessary final cleanup;</li><li>connection to the database is closed again.</li></ul></div><p>
Most options, such as database user/host/password, are straightforward.
However, there are a few subtle things, which are discussed in more detail here.
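For instance, a typical pre-query setup for MySQL might look like this (a sketch):
</p><pre class="programlisting">
# make sure the connection talks UTF-8 to match charset_type
sql_query_pre = SET NAMES utf8
</pre><p>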
</p><h4><a name="ranged-queries"></a>Ranged queries</h4><p>
The main query, which needs to fetch all the documents, can impose
a read lock on the whole table and stall concurrent queries
(eg. INSERTs to a MyISAM table), waste a lot of memory for the result set, etc.
To avoid this, Sphinx supports so-called <em class="glossterm">ranged queries</em>.
With ranged queries, Sphinx first fetches min and max document IDs from
the table, and then substitutes different ID intervals into main query text
and runs the modified query to fetch another chunk of documents.
Here's an example.
</p><div class="example"><a name="ex-ranged-queries"></a><p class="title"><b>Example&nbsp;1.&nbsp;Ranged query usage example</b></p><div class="example-contents"><pre class="programlisting">
# in sphinx.conf

sql_query_range	= SELECT MIN(id),MAX(id) FROM documents
sql_range_step = 1000
sql_query = SELECT * FROM documents WHERE id&gt;=$start AND id&lt;=$end
</pre></div></div><br class="example-break"><p>
If the table contains document IDs from 1 to, say, 2345, then sql_query would
be run three times:
</p><div class="orderedlist"><ol type="1"><li>with <code class="option">$start</code> replaced with 1 and <code class="option">$end</code> replaced with 1000;</li><li>with <code class="option">$start</code> replaced with 1001 and <code class="option">$end</code> replaced with 2000;</li><li>with <code class="option">$start</code> replaced with 2000 and <code class="option">$end</code> replaced with 2345.</li></ol></div><p>
Obviously, that's not much of a difference for a 2000-row table,
but when it comes to indexing a 10-million-row MyISAM table,
ranged queries might be of some help.
</p><h4><a name="id352484"></a><code class="option">sql_post</code> vs. <code class="option">sql_post_index</code></h4><p>
The difference between the post-query and the post-index query is that the post-query
is run immediately after Sphinx has received all the documents, but further indexing
<span class="bold"><strong>may</strong></span> still fail for some other reason. On the contrary,
by the time the post-index query gets executed, it is <span class="bold"><strong>guaranteed</strong></span>
that the indexing was successful. The database connection is dropped and re-established
because the sorting phase can be very lengthy and would just time out otherwise.
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="xmlpipe"></a>3.8.&nbsp;xmlpipe data source</h3></div></div></div><p>
The xmlpipe data source was designed to enable users to plug data into
Sphinx without having to implement new data source drivers themselves.
It is limited to 2 fixed fields and 2 fixed attributes, and is deprecated
in favor of <a href="#xmlpipe2" title="3.9.&nbsp;xmlpipe2 data source">Section&nbsp;3.9, &#8220;xmlpipe2 data source&#8221;</a> now. For new streams, use xmlpipe2.
</p><p>
To use xmlpipe, configure the data source in your configuration file
as follows:
</p><pre class="programlisting">
source example_xmlpipe_source
{
    type = xmlpipe
    xmlpipe_command = perl /www/mysite.com/bin/sphinxpipe.pl
}
</pre><p>
The <code class="filename">indexer</code> will run the command specified
in <code class="option"><a href="#conf-xmlpipe-command" title="8.1.23.&nbsp;xmlpipe_command">xmlpipe_command</a></code>,
and then read, parse and index the data it prints to <code class="filename">stdout</code>.
More formally, it opens a pipe to the given command and then reads
from that pipe.
</p><p>
<code class="filename">indexer</code> will expect one or more documents in a custom XML format.
Here's an example document stream, consisting of two documents:
</p><div class="example"><a name="ex-xmlpipe-document"></a><p class="title"><b>Example&nbsp;2.&nbsp;XMLpipe document stream</b></p><div class="example-contents"><pre class="programlisting">
&lt;document&gt;
&lt;id&gt;123&lt;/id&gt;
&lt;group&gt;45&lt;/group&gt;
&lt;timestamp&gt;1132223498&lt;/timestamp&gt;
&lt;title&gt;test title&lt;/title&gt;
&lt;body&gt;
this is my document body
&lt;/body&gt;
&lt;/document&gt;

&lt;document&gt;
&lt;id&gt;124&lt;/id&gt;
&lt;group&gt;46&lt;/group&gt;
&lt;timestamp&gt;1132223498&lt;/timestamp&gt;
&lt;title&gt;another test&lt;/title&gt;
&lt;body&gt;
this is another document
&lt;/body&gt;
&lt;/document&gt;
</pre></div></div><p><br class="example-break">
</p><p>
The legacy xmlpipe driver uses a built-in parser
which is pretty fast but really strict and does not actually
fully support XML. It requires that all the fields <span class="emphasis"><em>must</em></span>
be present, formatted <span class="emphasis"><em>exactly</em></span> as in this example, and
occur <span class="emphasis"><em>exactly</em></span> in the same order. The only optional
field is <code class="option">timestamp</code>; it defaults to 1.
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="xmlpipe2"></a>3.9.&nbsp;xmlpipe2 data source</h3></div></div></div><p>
xmlpipe2 lets you pass arbitrary full-text and attribute data to Sphinx
in yet another custom XML format. It also allows you to specify the schema
(ie. the set of fields and attributes) either in the XML stream itself,
or in the source settings.
</p><p>
When indexing an xmlpipe2 source, <code class="filename">indexer</code> runs the given command, opens
a pipe to its stdout, and expects a well-formed XML stream. Here's sample
stream data:
</p><div class="example"><a name="ex-xmlpipe2-document"></a><p class="title"><b>Example&nbsp;3.&nbsp;xmlpipe2 document stream</b></p><div class="example-contents"><pre class="programlisting">
&lt;?xml version="1.0" encoding="utf-8"?&gt;
&lt;sphinx:docset&gt;

&lt;sphinx:schema&gt;
&lt;sphinx:field name="subject"/&gt; 
&lt;sphinx:field name="content"/&gt;
&lt;sphinx:attr name="published" type="timestamp"/&gt;
&lt;sphinx:attr name="author_id" type="int" bits="16" default="1"/&gt;
&lt;/sphinx:schema&gt;

&lt;sphinx:document id="1234"&gt;
&lt;content&gt;this is the main content &lt;![CDATA[[and this &lt;cdata&gt; entry must be handled properly by xml parser lib]]&gt;&lt;/content&gt;
&lt;published&gt;1012325463&lt;/published&gt;
&lt;subject&gt;note how field/attr tags can be in &lt;b class="red"&gt;randomized&lt;/b&gt; order&lt;/subject&gt;
&lt;misc&gt;some undeclared element&lt;/misc&gt;
&lt;/sphinx:document&gt;

&lt;!-- ... more documents here ... --&gt;

&lt;/sphinx:docset&gt;
</pre></div></div><p><br class="example-break">
</p><p>
Arbitrary fields and attributes are allowed.
They can also occur in the stream in arbitrary order within each document; the order is ignored.
There is a restriction on maximum field length; fields longer than 2 MB will be truncated to 2 MB (this limit can be changed in the source).
</p><p>
The schema, ie. the complete list of fields and attributes, must be declared
before any document can be parsed. This can be done either in the
configuration file using <code class="option">xmlpipe_field</code> and <code class="option">xmlpipe_attr_XXX</code>
settings, or right in the stream using &lt;sphinx:schema&gt; element.
&lt;sphinx:schema&gt; is optional. It is only allowed to occur as the very
first sub-element in &lt;sphinx:docset&gt;. If there is no in-stream
schema definition, settings from the configuration file will be used.
Otherwise, stream settings take precedence.
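For instance, a configuration-file equivalent of the in-stream schema above might look like this (a sketch; the source name and command are hypothetical):
</p><pre class="programlisting">
source example_xmlpipe2
{
    type                   = xmlpipe2
    xmlpipe_command        = cat /path/to/stream.xml
    xmlpipe_field          = subject
    xmlpipe_field          = content
    xmlpipe_attr_timestamp = published
    xmlpipe_attr_uint      = author_id
}
</pre><p>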
</p><p>
Unknown tags (which were declared neither as fields nor as attributes)
will be ignored with a warning. In the example above, &lt;misc&gt; will be ignored.
All embedded tags and their attributes (such as &lt;b&gt; in &lt;subject&gt;
in the example above) will be silently ignored.
</p><p>
Support for incoming stream encodings depends on whether <code class="filename">iconv</code>
is installed on the system. xmlpipe2 is parsed using <code class="filename">libexpat</code>
parser that understands US-ASCII, ISO-8859-1, UTF-8 and a few UTF-16 variants
natively. Sphinx <code class="filename">configure</code> script will also check
for <code class="filename">libiconv</code> presence, and utilize it to handle
other encodings. <code class="filename">libexpat</code> also enforces the
requirement to use the UTF-8 charset on the Sphinx side, because the
parsed data it returns is always in UTF-8.

</p><p>
XML elements (tags) recognized by xmlpipe2 (and their attributes where applicable) are:
</p><div class="variablelist"><dl><dt><span class="term">sphinx:docset</span></dt><dd>Mandatory top-level element, denotes and contains xmlpipe2 document set.</dd><dt><span class="term">sphinx:schema</span></dt><dd>Optional element, must either occur as the very first child
		of sphinx:docset, or never occur at all. Declares the document schema.
		Contains field and attribute declarations. If present, overrides
		per-source settings from the configuration file.
	</dd><dt><span class="term">sphinx:field</span></dt><dd>Optional element, child of sphinx:schema. Declares a full-text field.
		The only recognized attribute is "name", it specifies the element name
		that should be treated as a full-text field in the subsequent documents.
	</dd><dt><span class="term">sphinx:attr</span></dt><dd>Optional element, child of sphinx:schema. Declares an attribute.
		Known attributes are:
		<div class="itemizedlist"><ul type="disc"><li>"name", specifies the element name that should be treated as an attribute in the subsequent documents.</li><li>"type", specifies the attribute type. Possible values are "int", "timestamp", "str2ordinal", "bool", and "float".</li><li>"bits", specifies the bit size for "int" attribute type. Valid values are 1 to 32.</li><li>"default", specifies the default value for this attribute that should be used if the attribute's element is not present in the document.</li></ul></div></dd><dt><span class="term">sphinx:document</span></dt><dd>Mandatory element, must be a child of sphinx:docset.
		Contains arbitrary other elements with field and attribute values
		to be indexed, as declared either using sphinx:field and sphinx:attr
		elements or in the configuration file. The only known attribute
		is "id" that must contain the unique integer document ID.
	</dd></dl></div><p>
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="live-updates"></a>3.10.&nbsp;Live index updates</h3></div></div></div><p>
There's a frequent situation when the total dataset is too big
to be reindexed from scratch often, but the amount of new records
is rather small. Example: a forum with 1,000,000 archived posts,
but only 1,000 new posts per day.
</p><p>
In this case, "live" (almost real time) index updates could be
implemented using so called "main+delta" scheme.
</p><p>
The idea is to set up two sources and two indexes, with one
"main" index for the data which only changes rarely (if ever),
and one "delta" for the new documents. In the example above,
1,000,000 archived posts would go to the main index, and newly
inserted 1,000 posts/day would go to the delta index. Delta index
could then be reindexed very frequently, and the documents can
be made available to search in a matter of minutes.
</p><p>
Specifying which documents should go to which index, and reindexing
the main index, can also be made fully automatic. One option is
to keep a counter table which tracks the ID that splits the documents,
and to update it whenever the main index is reindexed.
</p><div class="example"><a name="ex-live-updates"></a><p class="title"><b>Example&nbsp;4.&nbsp;Fully automated live updates</b></p><div class="example-contents"><pre class="programlisting">
# in MySQL
CREATE TABLE sph_counter
(
    counter_id INTEGER PRIMARY KEY NOT NULL,
    max_doc_id INTEGER NOT NULL
);

# in sphinx.conf
source main
{
    # ...
    sql_query_pre = REPLACE INTO sph_counter SELECT 1, MAX(id) FROM documents
    sql_query = SELECT id, title, body FROM documents \
        WHERE id&lt;=( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
}

source delta : main
{
    sql_query_pre =
    sql_query = SELECT id, title, body FROM documents \
        WHERE id&gt;( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
}

index main
{
    source = main
    path = /path/to/main
    # ... all the other settings
}

# note how all other settings are copied from main,
# but source and path are overridden (they MUST be)
index delta : main
{
    source = delta
    path = /path/to/delta
}
</pre></div></div><p><br class="example-break">
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="index-merging"></a>3.11.&nbsp;Index merging</h3></div></div></div><p>
Merging two existing indexes can be more efficient than indexing the data
from scratch, and is desirable in some cases (such as merging 'main' and 'delta'
indexes instead of simply reindexing 'main' in 'main+delta' partitioning
scheme). So <code class="filename">indexer</code> has an option to do that.
Merging the indexes is normally faster than reindexing but still
<span class="emphasis"><em>not</em></span> instant on huge indexes. Basically,
it will need to read the contents of both indexes once and write
the result once. Merging a 100 GB and a 1 GB index, for example,
will result in 202 GB of IO (but that's still likely less than
indexing from scratch requires).
</p><p>
The basic command syntax is as follows:
</p><pre class="programlisting">
indexer --merge DSTINDEX SRCINDEX [--rotate]
</pre><p>
Only the DSTINDEX index will be affected: the contents of SRCINDEX will be merged into it.
The <code class="option">--rotate</code> switch will be required if DSTINDEX is already being served by <code class="filename">searchd</code>.
The initially devised usage pattern is to merge a smaller update from SRCINDEX into DSTINDEX.
Thus, when merging the attributes, values from SRCINDEX will win if duplicate document IDs are encountered.
Note, however, that the "old" keywords will <span class="emphasis"><em>not</em></span> be automatically removed in such cases.
For example, if there's a keyword "old" associated with document 123 in DSTINDEX, and a keyword "new" associated
with it in SRCINDEX, document 123 will be found by <span class="emphasis"><em>both</em></span> keywords after the merge.
You can supply an explicit condition to remove documents from DSTINDEX to mitigate that;
the relevant switch is <code class="option">--merge-dst-range</code>:
</p><pre class="programlisting">
indexer --merge main delta --merge-dst-range deleted 0 0
</pre><p>
This switch lets you apply filters to the destination index along with merging.
There can be several filters; all of their conditions must be met in order
to include the document in the resulting merged index. In the example above,
the filter passes only those records where 'deleted' is 0, eliminating all
records that were flagged as deleted (for instance, using the
<a href="#api-func-updateatttributes" title="5.7.2.&nbsp;UpdateAttributes">UpdateAttributes()</a> call).
</p></div></div><div class="sect1" lang="en"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="searching"></a>4.&nbsp;Searching</h2></div></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="matching-modes"></a>4.1.&nbsp;Matching modes</h3></div></div></div><p>
There are the following matching modes available:
</p><div class="itemizedlist"><ul type="disc"><li>SPH_MATCH_ALL, matches all query words (default mode);</li><li>SPH_MATCH_ANY, matches any of the query words;</li><li>SPH_MATCH_PHRASE, matches query as a phrase, requiring perfect match;</li><li>SPH_MATCH_BOOLEAN, matches query as a boolean expression (see <a href="#boolean-syntax" title="4.2.&nbsp;Boolean query syntax">Section&nbsp;4.2, &#8220;Boolean query syntax&#8221;</a>);</li><li>SPH_MATCH_EXTENDED, matches query as an expression in Sphinx internal query language (see <a href="#extended-syntax" title="4.3.&nbsp;Extended query syntax">Section&nbsp;4.3, &#8220;Extended query syntax&#8221;</a>).</li></ul></div><p>
</p><p>
There is also a special "full scan" mode which is automatically activated when the following conditions are met:
</p><div class="orderedlist"><ol type="1"><li>The query string is empty (ie. its length is zero).</li><li><a href="#conf-docinfo" title="8.2.4.&nbsp;docinfo">docinfo</a> storage is set to <code class="code">extern</code>.</li></ol></div><p>
In full scan mode, all the indexed documents will be considered matching.
Such queries will still apply filters, sorting, or group by, but will not perform any real full-text searching.
This can be useful to unify full-text and non-full-text searching code, or to offload the SQL server (there are cases when Sphinx scans will perform better than analogous MySQL queries).
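</p><p>
For instance, here is a sketch of a full scan query issued through the PHP API
(the "forum_posts" index and the "author_id" and "date_added" attribute names
are illustrative assumptions):
</p><pre class="programlisting">
// full scan sketch: an empty query string plus a filter;
// requires docinfo=extern, as explained above
$cl = new SphinxClient ();
$cl-&gt;SetFilter ( "author_id", array ( 123 ) );
$cl-&gt;SetSortMode ( SPH_SORT_ATTR_DESC, "date_added" );
$result = $cl-&gt;Query ( "", "forum_posts" ); // empty query activates full scan
</pre><p>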
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="boolean-syntax"></a>4.2.&nbsp;Boolean query syntax</h3></div></div></div><p>
Boolean queries allow the following special operators to be used:
</p><div class="itemizedlist"><ul type="disc"><li>explicit operator AND: <pre class="programlisting">hello &amp; world</pre></li><li>operator OR: <pre class="programlisting">hello | world</pre></li><li>operator NOT:
<pre class="programlisting">
hello -world
hello !world
</pre></li><li>grouping: <pre class="programlisting">( hello world )</pre></li></ul></div><p>
Here's an example query which uses all these operators:
</p><div class="example"><a name="ex-boolean-query"></a><p class="title"><b>Example&nbsp;5.&nbsp;Boolean query example</b></p><div class="example-contents"><pre class="programlisting">
( cat -dog ) | ( cat -mouse)
</pre></div></div><p><br class="example-break">
</p><p>
There is always an implicit AND operator, so the "hello world" query
actually means "hello &amp; world".
</p><p>
OR operator precedence is higher than AND, so "looking for cat | dog | mouse"
means "looking for ( cat | dog | mouse )" and <span class="emphasis"><em>not</em></span>
"(looking for cat) | dog | mouse".
</p><p>
Queries like "-dog", which implicitly include all documents from the
collection, can not be evaluated, for both technical and performance
reasons. Technically, Sphinx does not always keep a list of all document IDs.
Performance-wise, when the collection is huge (ie. 10-100M documents),
evaluating such queries could take a very long time.
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="extended-syntax"></a>4.3.&nbsp;Extended query syntax</h3></div></div></div><p>
The following special operators can be used in the extended matching mode:
</p><div class="itemizedlist"><ul type="disc"><li>operator OR: <pre class="programlisting">hello | world</pre></li><li>operator NOT:
<pre class="programlisting">
hello -world
hello !world
</pre></li><li>field search operator: <pre class="programlisting">@title hello @body world</pre></li><li>phrase search operator: <pre class="programlisting">"hello world"</pre></li><li>proximity search operator: <pre class="programlisting">"hello world"~10</pre></li><li>quorum matching operator: <pre class="programlisting">"the world is a wonderful place"/3</pre></li></ul></div><p>

Here's an example query which uses most of these operators:
</p><div class="example"><a name="ex-extended-query"></a><p class="title"><b>Example&nbsp;6.&nbsp;Extended query example</b></p><div class="example-contents"><pre class="programlisting">
"hello world" @title "example program"~5 @body python -(php|perl)
</pre></div></div><p><br class="example-break">
</p><p>
There is always an implicit AND operator, so "hello world" means that
both "hello" and "world" must be present in a matching document.
</p><p>
OR operator precedence is higher than AND, so "looking for cat | dog | mouse"
means "looking for ( cat | dog | mouse )" and <span class="emphasis"><em>not</em></span>
"(looking for cat) | dog | mouse".
</p><p>
Proximity distance is specified in words, adjusted for word count, and
applies to all words within quotes. For instance, the "cat dog mouse"~5 query
means that there must be a span of fewer than 8 words which contains all 3 words;
ie. a "CAT aaa bbb ccc DOG eee fff MOUSE" document will <span class="emphasis"><em>not</em></span>
match this query, because that span is exactly 8 words long.
</p><p>
The quorum matching operator introduces a kind of fuzzy matching.
It will only match those documents where at least the given number of the specified words are present.
The example above ("the world is a wonderful place"/3) will match all documents
that have at least 3 of the 6 specified words.
</p><p>
Nested brackets, as in queries like
</p><pre class="programlisting">
aaa | ( bbb ccc | ( ddd eee ) )
</pre><p>
are not allowed yet, but this will be fixed.
</p><p>
Negation (ie. operator NOT) is only allowed at the top level and not within
brackets (ie. groups). This isn't going to change, because supporting nested
negations would make the phrase ranking implementation way too complicated.
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="weighting"></a>4.4.&nbsp;Weighting</h3></div></div></div><p>
The specific weighting function (currently) depends on the search mode.
</p><p>
These are the major parts used in the weighting functions:
</p><div class="orderedlist"><ol type="1"><li>phrase rank,</li><li>statistical rank.</li></ol></div><p>
</p><p>
Phrase rank is based on the length of the longest common subsequence
(LCS) of search words between the document body and the query phrase. So if
there's a perfect phrase match in some document, its phrase rank
will be the highest possible, equal to the query word count.
</p><p>
Statistical rank is based on the classic BM25 function, which only takes
word frequencies into account. If the word is rare in the whole database
(ie. low frequency over the document collection) or mentioned a lot in a specific
document (ie. high frequency within the matching document), it receives more weight.
The final BM25 weight is a floating point number between 0 and 1.
</p><p>
In all modes, per-field weighted phrase ranks are computed as
the LCS multiplied by a user-specified per-field weight.
Per-field weights are integers, default to 1, and can not be set
lower than 1.
</p><p>
In SPH_MATCH_BOOLEAN mode, no weighting is performed at all; every match weight
is set to 1.
</p><p>
In SPH_MATCH_ALL and SPH_MATCH_PHRASE modes, final weight is a sum of weighted phrase ranks.
</p><p>
In SPH_MATCH_ANY mode, the idea is essentially the same, but it also
adds a count of matching words in each field. Before that, weighted
phrase ranks are additionally multiplied by a value big enough to
guarantee that a higher phrase rank in <span class="bold"><strong>any</strong></span> field will make the
match ranked higher, even if its field weight is low.
</p><p>
In SPH_MATCH_EXTENDED mode, the final weight is the sum of weighted phrase
ranks and the BM25 weight, multiplied by 1000 and rounded to an integer.
</p><p>
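As a hedged illustration of the formula above (all numbers are made up):
suppose a 2-keyword query perfectly matches the title field (LCS=2),
the user-specified title weight is 3, and the BM25 value is 0.42.
One possible reading of the final weight is then:
</p><pre class="programlisting">
// illustrative arithmetic only, following the prose description above;
// not traced through the actual source code
$lcs = 2; $field_weight = 3; $bm25 = 0.42;
$weight = (int)( ( $lcs * $field_weight + $bm25 ) * 1000 ); // 6420
</pre><p>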
This is going to be changed, so that MATCH_ALL and MATCH_ANY modes
use BM25 weights as well. This would improve search results in those
cases where phrase ranks are equal; it is especially useful
for 1-word queries.
</p><p>
The key idea (in all modes, besides boolean) is that better subphrase
matches are ranked higher, and perfect matches are pulled to the top. Author's
experience is that this phrase proximity based ranking provides noticeably
better search quality than any statistical scheme alone (such as BM25,
which is commonly used in other search engines).
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="sorting-modes"></a>4.5.&nbsp;Sorting modes</h3></div></div></div><p>
There are the following result sorting modes available:
</p><div class="itemizedlist"><ul type="disc"><li>SPH_SORT_RELEVANCE mode, that sorts by relevance in descending order (best matches first);</li><li>SPH_SORT_ATTR_DESC mode, that sorts by an attribute in descending order (bigger attribute values first);</li><li>SPH_SORT_ATTR_ASC mode, that sorts by an attribute in ascending order (smaller attribute values first);</li><li>SPH_SORT_TIME_SEGMENTS mode, that sorts by time segments (last hour/day/week/month) in descending order, and then by relevance in descending order;</li><li>SPH_SORT_EXTENDED mode, that sorts by SQL-like combination of columns in ASC/DESC order;</li><li>SPH_SORT_EXPR mode, that sorts by an arithmetic expression.</li></ul></div><p>
</p><p>
SPH_SORT_RELEVANCE ignores any additional parameters and always sorts matches
by relevance rank. All other modes require an additional sorting clause, with the
syntax depending on the specific mode. SPH_SORT_ATTR_ASC, SPH_SORT_ATTR_DESC and
SPH_SORT_TIME_SEGMENTS modes simply require an attribute name.

SPH_SORT_RELEVANCE is equivalent to sorting by "@weight DESC, @id ASC" in extended mode,
SPH_SORT_ATTR_ASC is equivalent to "attribute ASC, @weight DESC, @id ASC",
and SPH_SORT_ATTR_DESC to "attribute DESC, @weight DESC, @id ASC" respectively.
</p><h4><a name="id353640"></a>SPH_SORT_TIME_SEGMENTS mode</h4><p>
In SPH_SORT_TIME_SEGMENTS mode, attribute values are split into so-called
time segments, and then sorted by time segment first, and by relevance second.
</p><p>
The segments are calculated according to the <span class="emphasis"><em>current timestamp</em></span>
at the time when the search is performed, so the results would change over time.
The segments are as follows:
</p><div class="itemizedlist"><ul type="disc"><li>last hour,</li><li>last day,</li><li>last week,</li><li>last month,</li><li>last 3 months,</li><li>everything else.</li></ul></div><p>
These segments are hardcoded, but it is trivial to change them if necessary.
</p><p>
This mode was added to support searching through blogs, news headlines, etc.
When using time segments, recent records are ranked higher because of the segment,
but within the same segment, more relevant records are ranked higher -
unlike sorting by just the timestamp attribute, which would not take relevance
into account at all.
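</p><p>
For example, through the PHP API (a sketch; the "published" attribute name is an assumption):
</p><pre class="programlisting">
$cl-&gt;SetSortMode ( SPH_SORT_TIME_SEGMENTS, "published" );
</pre><p>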
</p><h4><a name="sort-extended"></a>SPH_SORT_EXTENDED mode</h4><p>
In SPH_SORT_EXTENDED mode, you can specify an SQL-like sort expression
with up to 5 attributes (including internal attributes), eg:
</p><pre class="programlisting">
@relevance DESC, price ASC, @id DESC
</pre><p>
</p><p>
Both internal attributes (that are computed by the engine on the fly)
and user attributes that were configured for this index are allowed.
Internal attribute names must start with the magic @-symbol; user attribute
names can be used as is. In the example above, <code class="option">@relevance</code>
and <code class="option">@id</code> are internal attributes and <code class="option">price</code> is user-specified.
</p><p>
Known internal attributes are:
</p><div class="itemizedlist"><ul type="disc"><li>@id (match ID)</li><li>@weight (match weight)</li><li>@rank (match weight)</li><li>@relevance (match weight)</li></ul></div><p>
<code class="option">@rank</code> and <code class="option">@relevance</code> are just additional
aliases to <code class="option">@weight</code>.
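</p><p>
Through the PHP API, the sample clause above could be set as follows (a sketch):
</p><pre class="programlisting">
$cl-&gt;SetSortMode ( SPH_SORT_EXTENDED, "@relevance DESC, price ASC, @id DESC" );
</pre><p>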
</p><h4><a name="sort-expr"></a>SPH_SORT_EXPR mode</h4><p>
Expression sorting mode lets you sort the matches by an arbitrary arithmetic
expression, involving attribute values, internal attributes (@id and @weight),
arithmetic operations, and a number of built-in functions. Here's an example:
</p><pre class="programlisting">
$cl-&gt;SetSortMode ( SPH_SORT_EXPR,
	"@weight + ( user_karma + ln(pageviews) )*0.1" );
</pre><p>
</p><p>
The following operators and functions are supported. They are modeled after MySQL.
Each function takes a fixed number of arguments, depending on the specific function.
</p><div class="itemizedlist"><ul type="disc"><li>Operators: +, -, *, /, &lt;, &gt;, &lt;=, &gt;=, =, &lt;&gt;.</li><li>Unary (1-argument) functions: abs(), ceil(), floor(), sin(), cos(), ln(), log2(), log10(), exp(), sqrt().</li><li>Binary (2-argument) functions: min(), max(), pow().</li><li>Ternary (3-argument) functions: if().</li></ul></div><p>
</p><p>
All calculations are performed in single-precision, 32-bit IEEE 754 floating point format.
Comparison operators (eg. = or &lt;=) return 1.0 when the condition is true and 0.0 otherwise.
For instance, <code class="code">(a=b)+3</code> will evaluate to 4 when attribute 'a' is equal to attribute 'b', and to 3 when 'a' is not.
Unlike MySQL, the equality comparisons (ie. = and &lt;&gt; operators) introduce a small equality threshold (1e-6 by default).
If the difference between compared values is within the threshold, they will be considered equal.
</p><p>
All unary and binary functions are straightforward; they behave just like their mathematical counterparts.
But <code class="code">IF()</code> behavior needs to be explained in more detail.
It takes 3 arguments, checks whether the 1st argument is equal to 0.0, and returns the 2nd argument if it is not zero, or the 3rd one when it is.
Note that unlike comparison operators, <code class="code">IF()</code> does <span class="bold"><strong>not</strong></span> use a threshold!
Therefore, it's safe to use comparison results as its 1st argument, but using arithmetic expressions directly might produce unexpected results.
For instance, the following two calls will produce <span class="emphasis"><em>different</em></span> results even though they are logically equivalent:
</p><pre class="programlisting">
IF ( sqrt(3)*sqrt(3)-3&lt;&gt;0, a, b )
IF ( sqrt(3)*sqrt(3)-3, a, b )
</pre><p>
In the first case, the comparison operator &lt;&gt; will return 0.0 (false)
because of a threshold, and <code class="code">IF()</code> will always return 'b' as a result.
In the second one, the same <code class="code">sqrt(3)*sqrt(3)-3</code> expression will be compared
with zero <span class="emphasis"><em>without</em></span> threshold by the <code class="code">IF()</code> function itself.
But its value will be slightly different from zero because of the limited
precision of floating point calculations. Because of that, the comparison with 0.0 done by <code class="code">IF()</code>
will not pass, and the second variant will return 'a' as a result.
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="clustering"></a>4.6.&nbsp;Grouping (clustering) search results </h3></div></div></div><p>
Sometimes it could be useful to group (or in other terms, cluster)
search results and/or count per-group match counts - for instance,
to draw a nice graph of how many matching blog posts there were per
month; or to group Web search results by site; or to group
matching forum posts by author; etc.
</p><p>
In theory, this could be performed by doing only the full-text search
in Sphinx and then using the found IDs to group on the SQL server side. However,
in practice doing this with a big result set (10K-10M matches) would
typically kill performance.
</p><p>
To avoid that, Sphinx offers a so-called grouping mode. It is enabled
with the SetGroupBy() API call. When grouping, all matches are assigned to
different groups based on the group-by value. This value is computed from the
specified attribute using one of the following built-in functions:
</p><div class="itemizedlist"><ul type="disc"><li>SPH_GROUPBY_DAY, extracts year, month and day in YYYYMMDD format from timestamp;</li><li>SPH_GROUPBY_WEEK, extracts year and week number (counting from the year start) in YYYYNNN format from timestamp;</li><li>SPH_GROUPBY_MONTH, extracts month in YYYYMM format from timestamp;</li><li>SPH_GROUPBY_YEAR, extracts year in YYYY format from timestamp;</li><li>SPH_GROUPBY_ATTR, uses attribute value itself for grouping.</li></ul></div><p>
</p><p>
The final search result set then contains one best match per group.
The grouping function value and per-group match count are returned
as "virtual" attributes named
<span class="bold"><strong>@group</strong></span> and
<span class="bold"><strong>@count</strong></span> respectively.
</p><p>
The result set is sorted by the group-by sorting clause, with a syntax similar
to the <a href="#sort-extended"><code class="option">SPH_SORT_EXTENDED</code> sorting clause</a>
syntax. In addition to <code class="option">@id</code> and <code class="option">@weight</code>,
the group-by sorting clause may also include:
</p><div class="itemizedlist"><ul type="disc"><li>@group (groupby function value),</li><li>@count (amount of matches in group).</li></ul></div><p>
</p><p>
The default mode is to sort by the group-by value in descending order,
ie. by <code class="option">"@group desc"</code>.
</p><p>
On completion, the <code class="option">total_found</code> result parameter will
contain the total amount of matching groups over the whole index.
</p><p>
<span class="bold"><strong>WARNING:</strong></span> grouping is done in fixed memory
and thus its results are only approximate; so there might be more groups reported
in <code class="option">total_found</code> than actually present. <code class="option">@count</code> might also
be underestimated. To reduce inaccuracy, one should raise <code class="option">max_matches</code>.
If <code class="option">max_matches</code> allows to store all found groups, results will be 100% correct.
</p><p>
For example, if sorting by relevance and grouping by a <code class="code">"published"</code>
attribute with the <code class="code">SPH_GROUPBY_DAY</code> function, then the result set will
contain
</p><div class="itemizedlist"><ul type="disc"><li>one most relevant match for each day when there were any
matches published,</li><li>with day number and per-day match count attached,</li><li>sorted by day number in descending order (ie. recent days first).</li></ul></div><p>
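Here is a sketch of that example through the PHP API (the attribute and index names are assumptions):
</p><pre class="programlisting">
$cl-&gt;SetSortMode ( SPH_SORT_RELEVANCE );
$cl-&gt;SetGroupBy ( "published", SPH_GROUPBY_DAY, "@group desc" );
$result = $cl-&gt;Query ( "test query", "main" );
</pre><p>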
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="distributed"></a>4.7.&nbsp;Distributed searching</h3></div></div></div><p>
To scale well, Sphinx has distributed searching capabilities.
Distributed searching is useful to improve query latency (ie. search
time) and throughput (ie. max queries/sec) in multi-server, multi-CPU
or multi-core environments. This is essential for applications which
need to search through huge amounts of data (ie. billions of records
and terabytes of text).
</p><p>
The key idea is to horizontally partition (HP) the searched data
across search nodes and then process it in parallel.
</p><p>
Partitioning is done manually. You should
</p><div class="itemizedlist"><ul type="disc"><li>setup several instances
of Sphinx programs (<code class="filename">indexer</code> and <code class="filename">searchd</code>)
on different servers;</li><li>make the instances index (and search) different parts of data;</li><li>configure a special distributed index on some of the <code class="filename">searchd</code>
instances;</li><li>and query this index.</li></ul></div><p>
This index only contains references to other
local and remote indexes - so it cannot be directly reindexed;
instead, you should reindex the indexes that it references.
</p><p>
When <code class="filename">searchd</code> receives a query against a distributed index,
it does the following:
</p><div class="orderedlist"><ol type="1"><li>connects to the configured remote agents;</li><li>issues the query;</li><li>sequentially searches the configured local indexes (while the remote agents are searching);</li><li>retrieves the remote agents' search results;</li><li>merges all the results together, removing the duplicates;</li><li>sends the merged results to the client.</li></ol></div><p>
</p><p>
From the application's point of view, there are no differences
between a usual and a distributed index at all.
</p><p>
Any <code class="filename">searchd</code> instance could serve both as a master
(which aggregates the results) and a slave (which only does local searching)
at the same time. This has a number of uses:
</p><div class="orderedlist"><ol type="1"><li>every machine in a cluster could serve as a master which
searches the whole cluster, and search requests could be balanced between
masters to achieve a kind of HA (high availability) in case any of the nodes fails;
</li><li>
if running within a single multi-CPU or multi-core machine, there
would be only 1 searchd instance querying itself as an agent and thus
utilizing all CPUs/cores.
</li></ol></div><p>
</p><p>
Better HA support is scheduled for implementation; it would allow
specifying which agents mirror each other, doing health checks, keeping track
of alive agents, load-balancing requests, etc.
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="query-log-format"></a>4.8.&nbsp;<code class="filename">searchd</code> query log format</h3></div></div></div><p>
<code class="filename">searchd</code> logs all succesfully executed search queries
into query log file. Here's an example:

</p><pre class="programlisting">
[Fri Jun 29 21:17:58 2007] 0.004 sec [all/0/rel 35254 (0,20)] [lj] test
[Fri Jun 29 21:20:34 2007] 0.024 sec [all/0/rel 19886 (0,20) @channel_id] [lj] test
</pre><p>

This log format is as follows:
</p><pre class="programlisting">
[query-date] query-time [match-mode/filters-count/sort-mode
    total-matches (offset,limit) @groupby-attr] [index-name] query
</pre><p>

Match mode can take one of the following values:
</p><div class="itemizedlist"><ul type="disc"><li>"all" for SPH_MATCH_ALL mode;</li><li>"any" for SPH_MATCH_ANY mode;</li><li>"phr" for SPH_MATCH_PHRASE mode;</li><li>"bool" for SPH_MATCH_BOOLEAN mode;</li><li>"ext" for SPH_MATCH_EXTENDED mode.</li></ul></div><p>

Sort mode can take one of the following values:

</p><div class="itemizedlist"><ul type="disc"><li>"rel" for SPH_SORT_RELEVANCE mode;</li><li>"attr-" for SPH_SORT_ATTR_DESC mode;</li><li>"attr+" for SPH_SORT_ATTR_ASC mode;</li><li>"tsegs" for SPH_SORT_TIME_SEGMENTS mode;</li><li>"ext" for SPH_SORT_EXTENDED mode.</li></ul></div><p>

</p></div></div><div class="sect1" lang="en"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="api-reference"></a>5.&nbsp;API reference</h2></div></div></div><p>
There are a number of native searchd client API implementations
for Sphinx. As of this writing, we officially support our own
PHP, Python, and Java implementations. There are also free, open-source
third-party API implementations for Perl, Ruby, and C++.
</p><p>
The reference API implementation is in PHP, because (we believe)
Sphinx is more widely used with PHP than with any other language.
This reference documentation is in turn based on reference PHP API,
and all code samples in this section will be given in PHP.
</p><p>
However, all other APIs provide the same methods and implement
the very same network protocol. Therefore the documentation does
apply to them as well. There might be minor differences as to the
method naming conventions or specific data structures used.
But the provided functionality must not differ across languages.
</p><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="api-funcgroup-general"></a>5.1.&nbsp;General API functions</h3></div></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-getlasterror"></a>5.1.1.&nbsp;GetLastError</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function GetLastError()</p><p>
Returns the last error message, as a string, in human readable format.
If there were no errors during the previous API call, an empty string is returned.
</p><p>
You should call it when any other function (such as <a href="#api-func-query" title="5.6.1.&nbsp;Query">Query()</a>)
fails (typically, the failing function returns false). The returned string will
contain the error description.
</p><p>
The error message is <span class="emphasis"><em>not</em></span> reset by this call; so you can safely
call it several times if needed.
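</p><p>
Typical usage looks like this (a sketch):
</p><pre class="programlisting">
$result = $cl-&gt;Query ( "test query" );
if ( $result===false )
	print "Query failed: " . $cl-&gt;GetLastError () . "\n";
</pre><p>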
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-getlastwarning"></a>5.1.2.&nbsp;GetLastWarning</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function GetLastWarning ()</p><p>
Returns the last warning message, as a string, in human readable format.
If there were no warnings during the previous API call, an empty string is returned.
</p><p>
You should call it to verify whether your request
(such as <a href="#api-func-query" title="5.6.1.&nbsp;Query">Query()</a>) completed, but with warnings.
For instance, a search query against a distributed index might complete
successfully even if several remote agents timed out. In that case,
a warning message would be produced.
</p><p>
The warning message is <span class="emphasis"><em>not</em></span> reset by this call; so you can safely
call it several times if needed.
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setserver"></a>5.1.3.&nbsp;SetServer</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetServer ( $host, $port )</p><p>
Sets <code class="filename">searchd</code> host name and TCP port.
All subsequent requests will use the new host and port settings.
Default host and port are 'localhost' and 3312, respectively.
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setretries"></a>5.1.4.&nbsp;SetRetries</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetRetries ( $count, $delay=0 )</p><p>
Sets distributed retry count and delay.
</p><p>
On temporary failures <code class="filename">searchd</code> will attempt up to
<code class="code">$count</code> retries per agent. <code class="code">$delay</code> is the delay
between the retries, in milliseconds. Retries are disabled by default.
Note that this call will <span class="bold"><strong>not</strong></span> make the API itself retry on
temporary failure; it only tells <code class="filename">searchd</code> to do so.
Currently, the list of temporary failures includes all kinds of connect()
failures and maxed out (too busy) remote agents.
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setarrayresult"></a>5.1.5.&nbsp;SetArrayResult</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetArrayResult ( $arrayresult )</p><p>
PHP specific. Controls the format of matches in the search result set
(whether matches should be returned as an array or a hash).
</p><p>
<code class="code">$arrayresult</code> argument must be boolean. If <code class="code">$arrayresult</code> is <code class="code">false</code>
(the default mode), matches will returned in PHP hash format with
document IDs as keys, and other information (weight, attributes)
as values. If <code class="code">$arrayresult</code> is true, matches will be returned
as a plain array with complete per-match information including
document ID.
</p><p>
Introduced along with GROUP BY support on MVA attributes.
Group-by-MVA result sets may contain duplicate document IDs.
Thus they need to be returned as plain arrays, because hashes
will only keep one entry per document ID.
</p></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="api-funcgroup-general-query-settings"></a>5.2.&nbsp;General query settings</h3></div></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setlimits"></a>5.2.1.&nbsp;SetLimits</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetLimits ( $offset, $limit, $max_matches=0, $cutoff=0 )</p><p>
Sets offset into server-side result set (<code class="code">$offset</code>) and amount of matches
to return to client starting from that offset (<code class="code">$limit</code>). Can additionally
control maximum server-side result set size for current query (<code class="code">$max_matches</code>)
and the threshold amount of matches to stop searching at (<code class="code">$cutoff</code>).
All parameters must be non-negative integers.
</p><p>
The first two parameters to SetLimits() are identical in behavior to the MySQL
LIMIT clause. They instruct <code class="filename">searchd</code> to return at
most <code class="code">$limit</code> matches starting from match number <code class="code">$offset</code>.
The default offset and limit settings are 0 and 20, that is, to return the
first 20 matches.
</p><p>
<code class="code">max_matches</code> setting controls how much matches <code class="filename">searchd</code>
will keep in RAM while searching. <span class="bold"><strong>All</strong></span> matching documents will be normally
processed, ranked, filtered, and sorted even if <code class="code">max_matches</code> is set to 1.
But only best N documents are stored in memory at any given moment for performance
and RAM usage reasons, and this setting controls that N. Note that there are
<span class="bold"><strong>two</strong></span> places where <code class="code">max_matches</code> limit is enforced. Per-query
limit is controlled by this API call, but there also is per-server limit
controlled by <code class="code">max_matches</code> setting in the config file. To prevent
RAM usage abuse, server will not allow to set per-query limit
higher than the per-server limit.
</p><p>
You can't retrieve more than <code class="code">max_matches</code> matches to the client application.
The default limit is set to 1000. Normally, you should not need to go over
this limit. One thousand records is enough to present to the end user.
And if you're thinking about pulling the results into the application
for further sorting or filtering, that would be <span class="bold"><strong>much</strong></span> more efficient
if performed on the Sphinx side.
</p><p>
<code class="code">$cutoff</code> setting is intended for advanced performance control.
It tells <code class="filename">searchd</code> to forcibly stop search query
once <code class="code">$cutoff</code> matches had been found and processed.
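</p><p>
For instance, to display the second page of 20 results (a sketch):
</p><pre class="programlisting">
$cl-&gt;SetLimits ( 20, 20 ); // skip the first 20 matches, return the next 20
</pre><p>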
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setmaxquerytime"></a>5.2.2.&nbsp;SetMaxQueryTime</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetMaxQueryTime ( $max_query_time )</p><p>
Sets the maximum search query time, in milliseconds. The parameter must be
a non-negative integer. The default value is 0, which means "do not limit".
</p><p>Similar to <code class="code">$cutoff</code> setting from <a href="#api-func-setlimits" title="5.2.1.&nbsp;SetLimits">SetLimits()</a>,
but limits elapsed query time instead of processed matches count. Local search queries
will be stopped once that much time has elapsed. Note that if you're performing
a search which queries several local indexes, this limit applies to each index
separately.
</p></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="api-funcgroup-fulltext-query-settings"></a>5.3.&nbsp;Full-text search query settings</h3></div></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setmatchmode"></a>5.3.1.&nbsp;SetMatchMode</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetMatchMode ( $mode )</p><p>
Sets full-text query matching mode, as described in <a href="#matching-modes" title="4.1.&nbsp;Matching modes">Section&nbsp;4.1, &#8220;Matching modes&#8221;</a>.
Parameter must be a constant specifying one of the known modes.
</p><p>
<span class="bold"><strong>WARNING:</strong></span> (PHP specific) you <span class="bold"><strong>must not</strong></span> take the matching mode
constant name in quotes, that syntax specifies a string and is incorrect:
</p><pre class="programlisting">
$cl-&gt;SetMatchMode ( "SPH_MATCH_ANY" ); // INCORRECT! will not work as expected
$cl-&gt;SetMatchMode ( SPH_MATCH_ANY ); // correct, works OK
</pre><p>
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setrankingmode"></a>5.3.2.&nbsp;SetRankingMode</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetRankingMode ( $ranker )</p><p>
Sets ranking mode. Only available in SPH_MATCH_EXTENDED2 matching
mode at the time of this writing. Parameter must be a constant
specifying one of the known modes.
</p><p>
By default, Sphinx computes two factors which contribute to the final
match weight. The major part is query phrase proximity to document text.
The minor part is so-called BM25 statistical function, which varies
from 0 to 1 depending on the keyword frequency within document
(more occurrences yield higher weight) and within the whole index
(more rare keywords yield higher weight).
</p><p>
However, in some cases you'd want to compute weight differently -
or maybe avoid computing it at all for performance reasons because
you're sorting the result set by something else anyway. This can be
accomplished by setting the appropriate ranking mode.
</p><p>
Currently implemented modes are:
</p><div class="itemizedlist"><ul type="disc"><li>SPH_RANK_PROXIMITY_BM25, default ranking mode which uses and combines
	both phrase proximity and BM25 ranking.</li><li>SPH_RANK_BM25, statistical ranking mode which uses BM25 ranking only (similar to
	most other full-text engines). This mode is faster but may result in worse quality
	on queries which contain more than 1 keyword.</li><li>SPH_RANK_NONE, disabled ranking mode. This mode is the fastest.
	It is essentially equivalent to boolean searching. A weight of 1 is assigned
	to all matches.</li></ul></div><p>
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setsortmode"></a>5.3.3.&nbsp;SetSortMode</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetSortMode ( $mode, $sortby="" )</p><p>
Sets matches sorting mode, as described in <a href="#sorting-modes" title="4.5.&nbsp;Sorting modes">Section&nbsp;4.5, &#8220;Sorting modes&#8221;</a>.
Parameter must be a constant specifying one of the known modes.
</p><p>
<span class="bold"><strong>WARNING:</strong></span> (PHP specific) you <span class="bold"><strong>must not</strong></span> take the matching mode
constant name in quotes, that syntax specifies a string and is incorrect:
</p><pre class="programlisting">
$cl-&gt;SetSortMode ( "SPH_SORT_ATTR_DESC", "price" ); // INCORRECT! will not work as expected
$cl-&gt;SetSortMode ( SPH_SORT_ATTR_DESC, "price" ); // correct, works OK ("price" is an illustrative attribute)
</pre><p>
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setweights"></a>5.3.4.&nbsp;SetWeights</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetWeights ( $weights )</p><p>
Binds per-field weights in the order of appearance in the index.
<span class="bold"><strong>DEPRECATED</strong></span>, use <a href="#api-func-setfieldweights" title="5.3.5.&nbsp;SetFieldWeights">SetFieldWeights()</a> instead.
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setfieldweights"></a>5.3.5.&nbsp;SetFieldWeights</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetFieldWeights ( $weights )</p><p>
Binds per-field weights by name. Parameter must be a hash (associative array)
mapping string field names to integer weights.
</p><p>
Match ranking can be affected by per-field weights. For instance,
see <a href="#weighting" title="4.4.&nbsp;Weighting">Section&nbsp;4.4, &#8220;Weighting&#8221;</a> for an explanation of how phrase proximity
ranking is affected. This call lets you specify what non-default
weights to assign to different full-text fields.
</p><p>
The weights must be positive 32-bit integers. The final weight
will be a 32-bit integer too. Default weight value is 1. Unknown
field names will be silently ignored.
</p><p>
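For example (a sketch; the field names are assumptions):
</p><pre class="programlisting">
// make title matches weigh 10 times more than body matches
$cl-&gt;SetFieldWeights ( array ( "title"=&gt;10, "body"=&gt;1 ) );
</pre><p>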
There is no enforced limit on the maximum weight value at the
moment. However, beware that if you set it too high you can start
hitting 32-bit wraparound issues. For instance, if you set
a weight of 10,000,000 and search in extended mode, then the
maximum possible weight will be equal to 10 million (your weight)
times 1 thousand (internal BM25 scaling factor, see <a href="#weighting" title="4.4.&nbsp;Weighting">Section&nbsp;4.4, &#8220;Weighting&#8221;</a>)
times 1 or more (phrase proximity rank). The result is at least 10 billion,
which does not fit in 32 bits and will wrap around, producing
unexpected results.
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setindexweights"></a>5.3.6.&nbsp;SetIndexWeights</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetIndexWeights ( $weights )</p><p>
Sets per-index weights, and enables weighted summing of match weights
across different indexes. The parameter must be a hash (associative array)
mapping string index names to integer weights. The default is an empty array,
which means to disable weight summing.
</p><p>
When a match with the same document ID is found in several different
local indexes, by default Sphinx simply chooses the match from the index
specified last in the query. This is to support searching through
partially overlapping index partitions.
</p><p>
However, in some cases the indexes are not just partitions, and you
might want to sum the weights across the indexes instead of picking one.
<code class="code">SetIndexWeights()</code> lets you do that. With summing enabled,
the final match weight in the result set will be computed as the sum of the
match weight coming from each given index multiplied by the respective per-index
weight specified in this call. Ie. if document 123 is found in
index A with a weight of 2, and also in index B with a weight of 3,
and you called <code class="code">SetIndexWeights ( array ( "A"=&gt;100, "B"=&gt;10 ) )</code>,
the final weight returned to the client will be 2*100+3*10 = 230.
</p></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="api-funcgroup-filtering"></a>5.4.&nbsp;Result set filtering settings</h3></div></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setidrange"></a>5.4.1.&nbsp;SetIDRange</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetIDRange ( $min, $max )</p><p>
Sets an accepted range of document IDs. Parameters must be integers.
Defaults are 0 and 0; that combination means to not limit by range.
</p><p>
After this call, only those records that have document ID
between <code class="code">$min</code> and <code class="code">$max</code> (including IDs
exactly equal to <code class="code">$min</code> or <code class="code">$max</code>)
will be matched.
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setfilter"></a>5.4.2.&nbsp;SetFilter</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetFilter ( $attribute, $values, $exclude=false )</p><p>
Adds a new integer values set filter.
</p><p>
This call adds a new filter to the existing
list of filters. <code class="code">$attribute</code> must be a string with the
attribute name. <code class="code">$values</code> must be a plain array
containing integer values. <code class="code">$exclude</code> must be a boolean
value; it controls whether to accept the matching documents
(default mode, when <code class="code">$exclude</code> is false) or reject them.
</p><p>
Only those documents where <code class="code">$attribute</code> column value
stored in the index matches any of the values from <code class="code">$values</code>
array will be matched (or rejected, if <code class="code">$exclude</code> is true).
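</p><p>
For example (a sketch; the attribute names are assumptions):
</p><pre class="programlisting">
// only match documents from these categories...
$cl-&gt;SetFilter ( "category_id", array ( 5, 7, 11 ) );
// ...and reject documents flagged as deleted
$cl-&gt;SetFilter ( "deleted", array ( 1 ), true );
</pre><p>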
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setfilterrange"></a>5.4.3.&nbsp;SetFilterRange</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetFilterRange ( $attribute, $min, $max, $exclude=false )</p><p>
Adds a new integer range filter.
</p><p>
This call adds a new filter to the existing
list of filters. <code class="code">$attribute</code> must be a string with the
attribute name. <code class="code">$min</code> and <code class="code">$max</code> must be
integers that define the acceptable attribute values range
(including the boundaries). <code class="code">$exclude</code> must be a boolean
value; it controls whether to accept the matching documents
(default mode, when <code class="code">$exclude</code> is false) or reject them.
</p><p>
Only those documents where <code class="code">$attribute</code> column value
stored in the index is between <code class="code">$min</code> and <code class="code">$max</code>
(including values that are exactly equal to <code class="code">$min</code> or <code class="code">$max</code>)
will be matched (or rejected, if <code class="code">$exclude</code> is true).
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setfilterfloatrange"></a>5.4.4.&nbsp;SetFilterFloatRange</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetFilterFloatRange ( $attribute, $min, $max, $exclude=false )</p><p>
Adds a new float range filter.
</p><p>
This call adds a new filter to the existing
list of filters. <code class="code">$attribute</code> must be a string with the
attribute name. <code class="code">$min</code> and <code class="code">$max</code> must be
floats that define the acceptable attribute values range
(including the boundaries). <code class="code">$exclude</code> must be a boolean
value; it controls whether to accept the matching documents
(default mode, when <code class="code">$exclude</code> is false) or reject them.
</p><p>
Only those documents where <code class="code">$attribute</code> column value
stored in the index is between <code class="code">$min</code> and <code class="code">$max</code>
(including values that are exactly equal to <code class="code">$min</code> or <code class="code">$max</code>)
will be matched (or rejected, if <code class="code">$exclude</code> is true).
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setgeoanchor"></a>5.4.5.&nbsp;SetGeoAnchor</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetGeoAnchor ( $attrlat, $attrlong, $lat, $long )</p><p>
Sets the anchor point for geosphere distance (geodistance) calculations, and enables them.
</p><p>
<code class="code">$attrlat</code> and <code class="code">$attrlong</code> must be strings that contain the names
of latitude and longitude attributes, respectively. <code class="code">$lat</code> and <code class="code">$long</code>
are floats that specify anchor point latitude and longitude, in radians.
</p><p>
Once an anchor point is set, you can use the magic <code class="code">"@geodist"</code> attribute
name in your filters and/or sorting expressions. Sphinx will compute the geosphere distance
between the given anchor point and the point specified by the latitude and longitude
attributes of each full-text match, and attach this value to the resulting match.
The latitude and longitude values, both in <code class="code">SetGeoAnchor</code> and the index
attribute data, are expected to be in radians. The result will be returned in meters,
so a geodistance value of 1000.0 means 1 km. 1 mile is approximately 1609.344 meters.
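</p><p>
For example, to match only documents within 10 km of a given point (a sketch;
the attribute names and coordinates are assumptions):
</p><pre class="programlisting">
// anchor point coordinates must be converted from degrees to radians
$cl-&gt;SetGeoAnchor ( "lat", "lon", deg2rad ( 55.75 ), deg2rad ( 37.62 ) );
// "@geodist" is returned in meters; 10000.0 means 10 km
$cl-&gt;SetFilterFloatRange ( "@geodist", 0.0, 10000.0 );
</pre><p>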
</p></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="api-funcgroup-groupby"></a>5.5.&nbsp;GROUP BY settings</h3></div></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setgroupby"></a>5.5.1.&nbsp;SetGroupBy</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetGroupBy ( $attribute, $func, $groupsort="@group desc" )</p><p>
Sets grouping attribute, function, and groups sorting mode; and enables grouping
(as described in <a href="#clustering" title="4.6.&nbsp;Grouping (clustering) search results ">Section&nbsp;4.6, &#8220;Grouping (clustering) search results &#8221;</a>).
</p><p>
<code class="code">$attribute</code> is a string that contains group-by attribute name.
<code class="code">$func</code> is a constant that chooses a function applied to the attribute value in order to compute group-by key.
<code class="code">$groupsort</code> is a clause that controls how the groups will be sorted. Its syntax is similar
to that described in <a href="#sort-extended">Section&nbsp;4.5, &#8220;SPH_SORT_EXTENDED mode&#8221;</a>.
</p><p>
The grouping feature is very similar in nature to the GROUP BY clause in SQL.
Results produced by this function call will be the same as produced
by the following pseudo code:
</p><pre class="programlisting">
SELECT ... GROUP BY $func($attribute) ORDER BY $groupsort
</pre><p>
Note that it's <code class="code">$groupsort</code> that affects the order of matches
in the final result set. The sorting mode (see <a href="#api-func-setsortmode" title="5.3.3.&nbsp;SetSortMode">Section&nbsp;5.3.3, &#8220;SetSortMode&#8221;</a>)
affects the ordering of matches <span class="emphasis"><em>within</em></span> a group, ie.
which match will be selected as the best one from the group.
So you can, for instance, order the groups by match count
and select the most relevant match within each group at the same time.
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-setgroupdistinct"></a>5.5.2.&nbsp;SetGroupDistinct</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function SetGroupDistinct ( $attribute )</p><p>
Sets attribute name for per-group distinct values count calculations.
Only available for grouping queries.
</p><p>
<code class="code">$attribute</code> is a string that contains the attribute name.
For each group, all values of this attribute will be stored (as RAM limits
permit), then the amount of distinct values will be calculated and returned
to the client. This feature is similar to <code class="code">COUNT(DISTINCT)</code>
clause in standard SQL; so these Sphinx calls:
</p><pre class="programlisting">
$cl-&gt;SetGroupBy ( "category", SPH_GROUPBY_ATTR, "@count desc" );
$cl-&gt;SetGroupDistinct ( "vendor" );
</pre><p>
can be expressed using the following SQL clauses:
</p><pre class="programlisting">
SELECT id, weight, all-attributes,
	COUNT(DISTINCT vendor) AS @distinct,
	COUNT(*) AS @count
FROM products
GROUP BY category
ORDER BY @count DESC
</pre><p>
In the sample pseudo code shown just above, the <code class="code">SetGroupDistinct()</code> call
corresponds to the <code class="code">COUNT(DISTINCT vendor)</code> clause only.
<code class="code">GROUP BY</code>, <code class="code">ORDER BY</code>, and <code class="code">COUNT(*)</code>
clauses are all equivalent to the <code class="code">SetGroupBy()</code> settings. Both queries
will return one matching row for each category. In addition to indexed attributes,
matches will also contain total per-category matches count, and the count
of distinct vendor IDs within each category.
</p></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="api-funcgroup-querying"></a>5.6.&nbsp;Querying</h3></div></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-query"></a>5.6.1.&nbsp;Query</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function Query ( $query, $index="*" )</p><p>
Connects to <code class="filename">searchd</code> server, runs given search query
with current settings, obtains and returns the result set.
</p><p>
<code class="code">$query</code> is a query string. <code class="code">$index</code> is an index name (or names) string.
Returns false and sets <code class="code">GetLastError()</code> message on general error. 
Returns search result set on success.
</p><p>
The default value for <code class="code">$index</code> is <code class="code">"*"</code>, which means
to query all local indexes. Characters allowed in index names include
Latin letters (a-z), numbers (0-9), minus sign (-), and underscore (_);
everything else is considered a separator. Therefore, all of the
following sample calls are valid and will search the same
two indexes:
</p><pre class="programlisting">
$cl-&gt;Query ( "test query", "main delta" );
$cl-&gt;Query ( "test query", "main;delta" );
$cl-&gt;Query ( "test query", "main, delta" );
</pre><p>
Index specification order matters. If documents with identical IDs are found
in two or more indexes, weight and attribute values from the very last matching 
index will be used for sorting and returning to client (unless explicitly
overridden with <a href="#api-func-setindexweights" title="5.3.6.&nbsp;SetIndexWeights">SetIndexWeights()</a>). Therefore,
in the example above, matches from "delta" index will always win over
matches from "main".
</p><p>
On success, <code class="code">Query()</code> returns a result set that contains
some of the found matches (as requested by <a href="#api-func-setlimits" title="5.2.1.&nbsp;SetLimits">SetLimits()</a>)
and additional general per-query statistics. The result set is a hash
(PHP specific; other languages might utilize other structures instead
of hash) with the following keys and values:
</p><div class="variablelist"><dl><dt><span class="term">"matches":</span></dt><dd>Hash which maps found document IDs to another small hash containing document weight and attribute values
		(or an array of the similar small hashes if <a href="#api-func-setarrayresult" title="5.1.5.&nbsp;SetArrayResult">SetArrayResult()</a> was enabled).
	</dd><dt><span class="term">"total":</span></dt><dd>Total amount of matches retrieved <span class="emphasis"><em>on server</em></span> (ie. to the server side result set) by this query.
		You can retrieve up to this amount of matches from server for this query text with current query settings.
	</dd><dt><span class="term">"total_found":</span></dt><dd>Total amount of matching documents in index (that were found and procesed on server).</dd><dt><span class="term">"words":</span></dt><dd>Hash which maps query keywords (case-folded, stemmed, and otherwise processed) to a small hash with per-keyword statitics ("docs", "hits").</dd><dt><span class="term">"error":</span></dt><dd>Query error message reported by <code class="filename">searchd</code> (string, human readable). Empty if there were no errors.</dd><dt><span class="term">"warning":</span></dt><dd>Query warning message reported by <code class="filename">searchd</code> (string, human readable). Empty if there were no warnings.</dd></dl></div><p>
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-addquery"></a>5.6.2.&nbsp;AddQuery</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function AddQuery ( $query, $index="*" )</p><p>
Adds an additional query with the current settings to the multi-query batch.
<code class="code">$query</code> is a query string. <code class="code">$index</code> is an index name (or names) string.
Returns an index into the results array returned from <a href="#api-func-runqueries" title="5.6.3.&nbsp;RunQueries">RunQueries()</a>.
</p><p>
Batch queries (or multi-queries) enable <code class="filename">searchd</code> to perform internal
optimizations if possible. They also reduce network connection overheads and search process
creation overheads in all cases. They do not result in any additional overheads compared
to simple queries. Thus, if you run several different queries from your web page,
you should always consider using multi-queries.
</p><p>
For instance, running the same full-text query but with different
sorting or group-by settings will enable <code class="filename">searchd</code>
to perform the expensive full-text search and ranking operation only once,
but compute multiple group-by results from its output.
</p><p>
This can be a big saver when you need to display not just plain
search results but also some per-category counts, such as the number of
products grouped by vendor. Without multi-query, you would have to run several
queries which perform essentially the same search and retrieve the
same matches, but create result sets differently. With multi-query,
you simply pass all these queries in a single batch and Sphinx
optimizes the redundant full-text search internally.
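As a sketch (the "products" index and "vendor_id" attribute names are
illustrative only), such a batch could look like this:
</p><pre class="programlisting">
// sketch: one plain search plus per-vendor counts, in a single batch
$i1 = $cl-&gt;AddQuery ( "ipod", "products" ); // plain top matches

$cl-&gt;SetGroupBy ( "vendor_id", SPH_GROUPBY_ATTR, "@count desc" );
$i2 = $cl-&gt;AddQuery ( "ipod", "products" ); // per-vendor counts

$results = $cl-&gt;RunQueries ();
// $results[$i1] carries the plain matches,
// $results[$i2] carries one match per vendor group
</pre><p>
Both queries share the expensive full-text search pass.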
</p><p>
<code class="code">AddQuery()</code> internally saves full current settings state
along with the query, and you can safely change them afterwards for subsequent
<code class="code">AddQuery()</code> calls. Already added queries will not be affected;
there's actually no way to change them at all. Here's an example:
</p><pre class="programlisting">
$cl-&gt;SetSortMode ( SPH_SORT_RELEVANCE );
$cl-&gt;AddQuery ( "hello world", "documents" );

$cl-&gt;SetSortMode ( SPH_SORT_ATTR_DESC, "price" );
$cl-&gt;AddQuery ( "ipod", "products" );

$cl-&gt;AddQuery ( "harry potter", "books" );

$results = $cl-&gt;RunQueries ();
</pre><p>
With the code above, the 1st query will search for "hello world" in the "documents" index
and sort results by relevance, the 2nd query will search for "ipod" in the "products"
index and sort results by price, and the 3rd query will search for "harry potter"
in the "books" index while still sorting by price. Note that the 2nd <code class="code">SetSortMode()</code> call
does not affect the first query (because it's already added) but affects both
subsequent queries.
</p><p>
<code class="code">AddQuery()</code> does <span class="bold"><strong>not</strong></span> modify the current state. That is,
all current sorting, filtering, and grouping settings will not be affected by
this call; so subsequent queries can easily reuse current query settings.
</p><p>
<code class="code">AddQuery()</code> returns an index into an array of results
that will be returned from <code class="code">RunQueries()</code> call. It is simply
a sequentially increasing 0-based integer, ie. first call will return 0,
second will return 1, and so on. Just a small helper so you won't have
to track the indexes manualy if you need then.
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-runqueries"></a>5.6.3.&nbsp;RunQueries</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function RunQueries ()</p><p>
Connects to searchd, runs a batch of all queries added using <code class="code">AddQuery()</code>,
obtains and returns the result sets. Returns false and sets the <code class="code">GetLastError()</code>
message on general error (such as network I/O failure). Returns a plain array
of result sets on success.
</p><p>
Each result set in the returned array is exactly the same as
the result set returned from <a href="#api-func-query" title="5.6.1.&nbsp;Query"><code class="code">Query()</code></a>.
</p><p>
Note that the batch query request itself almost always succeeds -
unless there's a network error, blocking index rotation in progress,
or another general failure which prevents the whole request from being
processed.
</p><p>
However, individual queries within the batch might very well fail.
In this case, their respective result sets will contain a non-empty <code class="code">"error"</code> message,
but no matches or query statistics. In the extreme case, all queries within the batch
could fail. There will still be no general error reported, because the API was able to
successfully connect to <code class="filename">searchd</code>, submit the batch, and receive
the results - but every result set will have a specific error message.
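A minimal error-checking loop over the returned array might therefore look
like this (sketch):
</p><pre class="programlisting">
$results = $cl-&gt;RunQueries ();
if ( $results===false )
	die ( "batch failed: " . $cl-&gt;GetLastError() . "\n" );

foreach ( $results as $i=&gt;$res )
	if ( $res["error"] )
		print "query $i failed: " . $res["error"] . "\n";
</pre><p>
Always check both the overall return value and the per-result "error" fields.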
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-resetfilters"></a>5.6.4.&nbsp;ResetFilters</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function ResetFilters ()</p><p>
Clears all currently set filters.
</p><p>
This call is normally only required when using multi-queries. You might want
to set different filters for different queries in the batch. To do that,
you should call <code class="code">ResetFilters()</code> and add new filters using
the respective calls.
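For example (a sketch; the index and attribute names are illustrative):
</p><pre class="programlisting">
$cl-&gt;SetFilter ( "group_id", array(1) );
$cl-&gt;AddQuery ( "test", "documents" ); // filtered by group 1

$cl-&gt;ResetFilters ();
$cl-&gt;SetFilter ( "group_id", array(2) );
$cl-&gt;AddQuery ( "test", "documents" ); // filtered by group 2 only

$results = $cl-&gt;RunQueries ();
</pre><p>
Without the <code class="code">ResetFilters()</code> call, the second query would be filtered by both groups at once.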
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-resetgroupby"></a>5.6.5.&nbsp;ResetGroupBy</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function ResetGroupBy ()</p><p>
Clears all current group-by settings, and disables group-by.
</p><p>
This call is normally only required when using multi-queries.
You can change individual group-by settings using <code class="code">SetGroupBy()</code>
and <code class="code">SetGroupDistinct()</code> calls, but you can not disable
group-by using those calls. <code class="code">ResetGroupBy()</code>
fully resets previous group-by settings and disables group-by mode
in the current state, so that subsequent <code class="code">AddQuery()</code>
calls can perform non-grouping searches.
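For example (a sketch; the index and attribute names are illustrative):
</p><pre class="programlisting">
$cl-&gt;SetGroupBy ( "group_id", SPH_GROUPBY_ATTR );
$cl-&gt;AddQuery ( "test", "documents" ); // grouped results

$cl-&gt;ResetGroupBy ();
$cl-&gt;AddQuery ( "test", "documents" ); // plain, non-grouped results

$results = $cl-&gt;RunQueries ();
</pre><p>
The second query runs as a regular, non-grouping search.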
</p></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="api-funcgroup-additional-functionality"></a>5.7.&nbsp;Additional functionality</h3></div></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-buildexcerpts"></a>5.7.1.&nbsp;BuildExcerpts</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function BuildExcerpts ( $docs, $index, $words, $opts=array() )</p><p>
Excerpts (snippets) builder function. Connects to <code class="filename">searchd</code>,
asks it to generate excerpts (snippets) from given documents, and returns the results.
</p><p>
<code class="code">$docs</code> is a plain array of strings that carry the documents' contents.
<code class="code">$index</code> is an index name string. Different settings (such as charset,
morphology, wordforms) from given index will be used.
<code class="code">$words</code> is a string that contains the keywords to highlight. They will
be processed with respect to index settings. For instance, if English stemming
is enabled in the index, "shoes" will be highlighted even if keyword is "shoe".
<code class="code">$opts</code> is a hash which contains additional optional highlighting parameters:
</p><div class="variablelist"><dl><dt><span class="term">"before_match":</span></dt><dd>A string to insert before a keyword match. Default is "&lt;b&gt;".</dd><dt><span class="term">"after_match":</span></dt><dd>A string to insert after a keyword match. Default is "&lt;b&gt;".</dd><dt><span class="term">"chunk_separator":</span></dt><dd>A string to insert between snippet chunks (passages). Default is "&nbsp;...&nbsp;".</dd><dt><span class="term">"limit":</span></dt><dd>Maximum snippet size, in symbols (codepoints). Integer, default is 256.</dd><dt><span class="term">"around":</span></dt><dd>How much words to pick around each matching keywords block. Integer, default is 5.</dd><dt><span class="term">"exact_phrase":</span></dt><dd>Whether to highlight exact query phrase matches only instead of individual keywords. Boolean, default is false.</dd><dt><span class="term">"single_passage":</span></dt><dd>whether to extract single best passage only. Boolean, default is false.</dd></dl></div><p>
</p><p>
Returns false on failure. Returns a plain array of strings with excerpts (snippets) on success.
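A minimal usage sketch follows; the "documents" index name and the option
values are examples only:
</p><pre class="programlisting">
$docs = array ( "this is my test text to be highlighted" );
$opts = array (
	"before_match"	=&gt; "&lt;b&gt;",
	"after_match"	=&gt; "&lt;/b&gt;",
	"limit"			=&gt; 60
);

$res = $cl-&gt;BuildExcerpts ( $docs, "documents", "test text", $opts );
if ( $res===false )
	die ( "excerpts failed: " . $cl-&gt;GetLastError() . "\n" );
foreach ( $res as $snippet )
	print "$snippet\n";
</pre><p>
Each returned string corresponds to the input document at the same position in <code class="code">$docs</code>.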
</p></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="api-func-updateatttributes"></a>5.7.2.&nbsp;UpdateAttributes</h4></div></div></div><p><span class="bold"><strong>Prototype:</strong></span> function UpdateAttributes ( $index, $attrs, $values )</p><p>
Instantly updates given attribute values in given documents.
Returns number of actually updated documents (0 or more) on success, or -1 on failure.
</p><p>
<code class="code">$index</code> is a name of the index (or indexes) to be updated.
<code class="code">$attrs</code> is a plain array with string attribute names, listing attributes that are updated.
<code class="code">$values</code> is a hash where key is document ID, and value is a plain array of new attribute values.
</p><p>
<code class="code">$index</code> can be either a single index name or a list, like in <code class="code">Query()</code>.
Unlike <code class="code">Query()</code>, wildcard is not allowed and all the indexes
to update must be specified explicitly. The list of indexes can include
distributed index names. Updates on distributed indexes will be pushed
to all agents.
</p><p>
The updates only work with the <code class="code">docinfo=extern</code> storage strategy.
They are very fast because they work fully in RAM, but they can also
be made persistent: updates are saved on disk on clean <code class="filename">searchd</code>
shutdown initiated by a SIGTERM signal.
</p><p>
Usage example:
</p><pre class="programlisting">
$cl-&gt;UpdateAttributes ( "test1", array("group_id"), array(1=&gt;array(456)) );
$cl-&gt;UpdateAttributes ( "products", array ( "price", "amount_in_stock" ),
	array ( 1001=&gt;array(123,5), 1002=&gt;array(37,11), 1003=&gt;array(25,129) ) );
</pre><p>
The first sample statement will update document 1 in index "test1", setting "group_id" to 456.
The second one will update documents 1001, 1002 and 1003 in index "products". For document 1001,
the new price will be set to 123 and the new amount in stock to 5; for document 1002, the new price
will be 37 and the new amount will be 11; etc.
</p></div></div></div><div class="sect1" lang="en"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="sphinxse"></a>6.&nbsp;MySQL storage engine (SphinxSE)</h2></div></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="sphinxse-overview"></a>6.1.&nbsp;SphinxSE overview</h3></div></div></div><p>
SphinxSE is a MySQL storage engine which can be compiled
into MySQL server 5.x using its pluggable architecture.
It is not available for the MySQL 4.x series. It also requires
MySQL 5.0.22 or higher in the 5.0.x series, or MySQL 5.1.12
or higher in the 5.1.x series.
</p><p>
Despite the name, SphinxSE does <span class="emphasis"><em>not</em></span>
actually store any data itself. It is actually a built-in client
which allows the MySQL server to talk to <code class="filename">searchd</code>,
run search queries, and obtain search results. All indexing and
searching happen outside MySQL.
</p><p>
Obvious SphinxSE applications include:
</p><div class="itemizedlist"><ul type="disc"><li>easier porting of MySQL FTS applications to Sphinx;</li><li>allowing Sphinx use with progamming languages for which native APIs are not available yet;</li><li>optimizations when additional Sphinx result set processing on MySQL side is required
	(eg. JOINs with original document tables, additional MySQL-side filtering, etc).</li></ul></div><p>
</p></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="sphinxse-installing"></a>6.2.&nbsp;Installing SphinxSE</h3></div></div></div><p>
You will need to obtain a copy of the MySQL sources, prepare those,
and then recompile the MySQL binary.
MySQL sources (mysql-5.x.yy.tar.gz) can be obtained from the
<a href="http://dev.mysql.com" target="_top">dev.mysql.com</a> Web site.
</p><p>
For some MySQL versions, there are delta tarballs with already
prepared source versions available from the Sphinx Web site. After unpacking
those over the original sources, MySQL will be ready to be configured and
built with Sphinx support.
</p><p>
If such a tarball is not available, or does not work for you for any
reason, you will have to prepare the sources manually. You will need the
GNU Autotools framework (autoconf, automake and libtool) installed
to do that.
</p><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="sphinxse-mysql50"></a>6.2.1.&nbsp;Compiling MySQL 5.0.x with SphinxSE</h4></div></div></div><p>
Skip steps 1-3 if using an already prepared delta tarball.
</p><div class="orderedlist"><ol type="1"><li><p>copy <code class="filename">sphinx.5.0.yy.diff</code> patch file
into MySQL sources directory and run
</p><pre class="programlisting">
patch -p1 &lt; sphinx.5.0.yy.diff
</pre><p>
If there's no .diff file exactly for the specific version you need
to build, try applying the .diff with the closest version numbers. It is important
that the patch applies with no rejects.
</p></li><li>in the MySQL sources directory, run
<pre class="programlisting">
sh BUILD/autorun.sh
</pre></li><li>in the MySQL sources directory, create a <code class="filename">sql/sphinx</code>
directory and copy all files from the <code class="filename">mysqlse</code> directory
in the Sphinx sources there. Example:
<pre class="programlisting">
cp -R /root/builds/sphinx-0.9.7/mysqlse /root/builds/mysql-5.0.24/sql/sphinx
</pre></li><li>
configure MySQL and enable Sphinx engine:
<pre class="programlisting">
./configure --with-sphinx-storage-engine
</pre></li><li>
build and install MySQL:
<pre class="programlisting">
make
make install
</pre></li></ol></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="sphinxse-mysql51"></a>6.2.2.&nbsp;Compiling MySQL 5.1.x with SphinxSE</h4></div></div></div><p>
Skip steps 1-2 if using an already prepared delta tarball.
</p><div class="orderedlist"><ol type="1"><li>in MySQL sources directory, create <code class="filename">storage/sphinx</code>
directory in and copy all files in <code class="filename">mysqlse</code> directory 
from Sphinx sources there. Example:
<pre class="programlisting">
cp -R /root/builds/sphinx-0.9.7/mysqlse /root/builds/mysql-5.1.14/storage/sphinx
</pre></li><li>in the MySQL sources directory, run
<pre class="programlisting">
sh BUILD/autorun.sh
</pre></li><li>
configure MySQL and enable Sphinx engine:
<pre class="programlisting">
./configure --with-plugins=sphinx
</pre></li><li>
build and install MySQL:
<pre class="programlisting">
make
make install
</pre></li></ol></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="sphinxse-checking"></a>6.2.3.&nbsp;Checking SphinxSE installation</h4></div></div></div>
<p>
To check whether SphinxSE has been successfully compiled
into MySQL, launch the newly built server, run the mysql client and
issue a <code class="code">SHOW ENGINES</code> query. You should see a list
of all available engines. Sphinx should be present and the "Support"
column should contain "YES":
</p>
<pre class="programlisting">     
mysql&gt; show engines;
+------------+----------+----------------------------------------------------------------+
| Engine     | Support  | Comment                                                        |
+------------+----------+----------------------------------------------------------------+
| MyISAM     | DEFAULT  | Default engine as of MySQL 3.23 with great performance         |
  ...
| SPHINX     | YES      | Sphinx storage engine                                          |
  ...
+------------+----------+----------------------------------------------------------------+
13 rows in set (0.00 sec)    
</pre></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="sphinxse-using"></a>6.3.&nbsp;Using SphinxSE</h3></div></div></div><p>
To search via SphinxSE, you would need to create a special ENGINE=SPHINX "search table",
and then SELECT from it with the full-text query put into the WHERE clause for the query column.
</p><p>
Let's begin with an example create statement and search query:
</p><pre class="programlisting">
CREATE TABLE t1
(
    id          INTEGER NOT NULL,
    weight      INTEGER NOT NULL,
    query       VARCHAR(3072) NOT NULL,
    group_id    INTEGER,
    INDEX(query)
) ENGINE=SPHINX CONNECTION="sphinx://localhost:3312/test";

SELECT * FROM t1 WHERE query='test it;mode=any';
</pre><p>
</p><p>
The first 3 columns of the search table <span class="emphasis"><em>must</em></span> be <code class="code">INTEGER</code>,
<code class="code">INTEGER</code> and <code class="code">VARCHAR</code>, which will be mapped to document ID,
match weight and search query respectively. The query column must be indexed;
all the others must be kept unindexed. Column names are ignored, so you
can use arbitrary ones.
</p><p>
Additional columns must be either <code class="code">INTEGER</code> or <code class="code">TIMESTAMP</code>.
They will be bound to the attributes provided in the Sphinx result set by name, so their
names must match the attribute names specified in <code class="filename">sphinx.conf</code>.
If there's no such attribute name in the Sphinx search results, the column will have
<code class="code">NULL</code> values.
</p><p>
Special "virtual" attributes names can also be bound to SphinxSE columns.
<code class="code">_sph_</code> needs to be used instead of <code class="code">@</code> for that.
For instance, to obtain <code class="code">@group</code> and <code class="code">@count</code>
virtual attributes, use <code class="code">_sph_group</code> and <code class="code">_sph_count</code>
column names.
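For instance, a sketch of a table that picks up <code class="code">@count</code>
through a group-by query (using the options described below) could be:
</p><pre class="programlisting">
CREATE TABLE t2
(
    id          INTEGER NOT NULL,
    weight      INTEGER NOT NULL,
    query       VARCHAR(3072) NOT NULL,
    _sph_count  INTEGER,
    INDEX(query)
) ENGINE=SPHINX CONNECTION="sphinx://localhost:3312/test";

SELECT * FROM t2 WHERE query='test;groupby=attr:group_id;';
</pre><p>
Here the <code class="code">_sph_count</code> column will carry the per-group match counts.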
</p><p>
<code class="code">CONNECTION</code> string parameter can be used to specify default
searchd host, port and indexes for queries issued using this table.
If no connection string is specified in <code class="code">CREATE TABLE</code>,
index name "*" (ie. search all indexes) and localhost:3312 are assumed.
Connection string syntax is as follows:
</p><pre class="programlisting">
CONNECTION="sphinx://HOST:PORT/INDEXNAME"
</pre><p>
You can change the default connection string later:
</p><pre class="programlisting">
ALTER TABLE t1 CONNECTION="sphinx://NEWHOST:NEWPORT/NEWINDEXNAME";
</pre><p>
You can also override all these parameters per-query.
</p><p>
As seen in the example, both the query text and the search options should be put
into the WHERE clause on the search query column (ie. the 3rd column); the options
are separated by semicolons, and their names from values by an equals sign.
Any number of options can be specified. Available options are:
</p><div class="itemizedlist"><ul type="disc"><li>query - query text;</li><li>mode - matching mode. Must be one of "all", "any", "phrase",
	"boolean", or "extended". Default is "all";</li><li>sort - match sorting mode. Must be one of "relevance", "attr_desc",
"attr_asc", "time_segments", or "extended". In all modes besides "relevance"
attribute name (or sorting clause for "extended") is also required after a colon:
<pre class="programlisting">
... WHERE query='test;sort=attr_asc:group_id';
... WHERE query='test;sort=extended:@weight desc, group_id asc';
</pre></li><li>offset - offset into result set, default is 0;</li><li>limit - amount of matches to retrieve from result set, default is 20;</li><li>index - names of the indexes to search:
<pre class="programlisting">
... WHERE query='test;index=test1;';
... WHERE query='test;index=test1,test2,test3;';
</pre></li><li>minid, maxid - min and max document ID to match;</li><li>weights - comma-separated list of weights to be assigned to Sphinx full-text fields:
<pre class="programlisting">
... WHERE query='test;weights=1,2,3;';
</pre></li><li>filter, !filter - comma-separated attribute name and a set of values to match:
<pre class="programlisting">
# only include groups 1, 5 and 19
... WHERE query='test;filter=group_id,1,5,19;';

# exclude groups 3 and 11
... WHERE query='test;!filter=group_id,3,11;';
</pre></li><li>range, !range - comma-separated attribute name, min and max value to match:
<pre class="programlisting">
# include groups from 3 to 7, inclusive
... WHERE query='test;range=group_id,3,7;';

# exclude groups from 5 to 25
... WHERE query='test;!range=group_id,5,25;';
</pre></li><li>maxmatches - per-query max matches value:
<pre class="programlisting">
... WHERE query='test;maxmatches=2000;';
</pre></li><li>groupby - group-by function and attribute:
<pre class="programlisting">
... WHERE query='test;groupby=day:published_ts;';
... WHERE query='test;groupby=attr:group_id;';
</pre></li><li>groupsort - group-by sorting clause:
<pre class="programlisting">
... WHERE query='test;groupsort=@count desc;';
</pre></li><li>indexweights - comma-separated list of index names and weights
to use when searching through several indexes:
<pre class="programlisting">
... WHERE query='test;indexweights=idx_exact,2,idx_stemmed,1;';
</pre></li></ul></div><p>
</p><p>
One <span class="bold"><strong>very important</strong></span> note that it is
<span class="bold"><strong>much</strong></span> more efficient to allow Sphinx
to perform sorting, filtering and slicing the result set than to raise
max matches count and use WHERE, ORDER BY and LIMIT clauses on MySQL
side. This is for two reasons. First, Sphinx does a number of
optimizations and performs better than MySQL on these tasks.
Second, less data would need to be packed by searchd, transferred
and unpacked by SphinxSE.
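As a sketch, compare the following two approaches (only the second one
pushes the work down to Sphinx; the option values are illustrative):
</p><pre class="programlisting">
# less efficient: MySQL filters, sorts, and slices an oversized result set
SELECT * FROM t1 WHERE query='test;maxmatches=10000;limit=10000'
    AND group_id=1 ORDER BY weight DESC LIMIT 10;

# more efficient: Sphinx filters, sorts, and slices
SELECT * FROM t1 WHERE query='test;filter=group_id,1;limit=10';
</pre><p>
In the second form, only the ten final rows need to be packed, transferred, and unpacked.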
</p><p>
Additional query info besides the result set can be
retrieved with the <code class="code">SHOW ENGINE SPHINX STATUS</code> statement:
</p><pre class="programlisting">
mysql&gt; SHOW ENGINE SPHINX STATUS;
+--------+-------+-------------------------------------------------+
| Type   | Name  | Status                                          |
+--------+-------+-------------------------------------------------+
| SPHINX | stats | total: 25, total found: 25, time: 126, words: 2 | 
| SPHINX | words | sphinx:591:1256 soft:11076:15945                | 
+--------+-------+-------------------------------------------------+
2 rows in set (0.00 sec)
</pre><p>
</p><p>
You can perform JOINs on a SphinxSE search table and tables using
other engines. Here's an example with "documents" from example.sql:
</p><pre class="programlisting">
mysql&gt; SELECT content, date_added FROM test.documents docs
-&gt; JOIN t1 ON (docs.id=t1.id) 
-&gt; WHERE query="one document;mode=any";
+-------------------------------------+---------------------+
| content                             | date_added          |
+-------------------------------------+---------------------+
| this is my test document number two | 2006-06-17 14:04:28 | 
| this is my test document number one | 2006-06-17 14:04:28 | 
+-------------------------------------+---------------------+
2 rows in set (0.00 sec)

mysql&gt; SHOW ENGINE SPHINX STATUS;
+--------+-------+---------------------------------------------+
| Type   | Name  | Status                                      |
+--------+-------+---------------------------------------------+
| SPHINX | stats | total: 2, total found: 2, time: 0, words: 2 | 
| SPHINX | words | one:1:2 document:2:2                        | 
+--------+-------+---------------------------------------------+
2 rows in set (0.00 sec)
</pre><p>
</p></div></div><div class="sect1" lang="en"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="reporting-bugs"></a>7.&nbsp;Reporting bugs</h2></div></div></div><p>
Unfortunately, Sphinx is not yet 100% bug free (even though I'm working hard
towards that), so you might occasionally run into some issues.
</p><p>
Reporting as much as possible about each bug is very important -
because to fix it, I need to be able either to reproduce and debug the bug,
or to deduce what's causing it from the information that you provide.
So here are some instructions on how to do that.
</p><h3><a name="id357704"></a>Build-time issues</h3><p>If Sphinx fails to build for some reason, please do the following:</p><div class="orderedlist"><ol type="1"><li>check that headers and libraries for your DBMS are properly installed
(for instance, check that <code class="filename">mysql-devel</code> package is present);
</li><li>report Sphinx version and config file (be sure to remove the passwords!),
MySQL (or PostgreSQL) configuration info, gcc version, OS version and CPU type
(ie. x86, x86-64, PowerPC, etc):
<pre class="programlisting">
mysql_config
gcc --version
uname -a
</pre></li><li>
report the error message which is produced by <code class="filename">configure</code>
or <code class="filename">gcc</code> (it should include the error message itself only,
not the whole build log).
</li></ol></div><h3><a name="id357758"></a>Run-time issues</h3><p>
If Sphinx builds and runs, but there are any problems running it,
please do the following:
</p><div class="orderedlist"><ol type="1"><li>describe the bug (ie. both the expected behavior and actual behavior)
and all the steps necessary to reproduce it;</li><li>include Sphinx version and config file (be sure to remove the passwords!),
MySQL (or PostgreSQL) version, gcc version, OS version and CPU type (ie. x86, x86-64,
PowerPC, etc):
<pre class="programlisting">
mysql --version
gcc --version
uname -a
</pre></li><li>build, install and run debug versions of all Sphinx programs (this is
to enable a lot of additional internal checks, so-called assertions):
<pre class="programlisting">
make distclean
./configure --with-debug
make install
killall -TERM searchd
</pre></li><li>reindex to check if any assertions are triggered (in this case,
it's likely that the index is corrupted and causing problems);
</li><li>if the bug does not reproduce with debug versions,
revert to non-debug and mention it in your report;
</li><li>if the bug could be easily reproduced with a small (1-100 record)
part of your database, please provide a gzipped dump of that part;
</li><li>if the problem is related to <code class="filename">searchd</code>, include
relevant entries from <code class="filename">searchd.log</code> and
<code class="filename">query.log</code> in your bug report;
</li><li>if the problem is related to <code class="filename">searchd</code>, try
running it in console mode and check if it dies with an assertion:
<pre class="programlisting">
./searchd --console
</pre></li><li>if any program dies with an assertion, provide the assertion message.</li></ol></div><h3><a name="id357872"></a>Debugging assertions, crashes and hangups</h3><p>
If any program dies with an assertion, crashes without an assertion or hangs up,
you would additionally need to generate a core dump and examine it.
</p><div class="orderedlist"><ol type="1"><li>
enable core dumps. On most Linux systems, this is done
using <code class="filename">ulimit</code>:
<pre class="programlisting">
ulimit -c 32768
</pre></li><li>
run the program and try to reproduce the bug;
</li><li>
if the program crashes (either with or without an assertion),
find the core file in the current directory (the program should typically print
out a "Segmentation fault (core dumped)" message);
</li><li>
if the program hangs, use <code class="filename">kill -SEGV</code>
from another console to force it to exit and dump core:
<pre class="programlisting">
kill -SEGV HANGED-PROCESS-ID
</pre></li><li>
use <code class="filename">gdb</code> to examine the core file
and obtain a backtrace:
<pre class="programlisting">
gdb ./CRASHED-PROGRAM-FILE-NAME CORE-DUMP-FILE-NAME
(gdb) bt
(gdb) quit
</pre></li></ol></div><p>
Note that HANGED-PROCESS-ID, CRASHED-PROGRAM-FILE-NAME and
CORE-DUMP-FILE-NAME must all be replaced with specific numbers
and file names. For example, a hung searchd debugging session
would look like this:
</p><pre class="programlisting">
# kill -SEGV 12345
# ls *core*
core.12345
# gdb ./searchd core.12345
(gdb) bt
...
(gdb) quit
</pre><p>
</p><p>
Note that <code class="filename">ulimit</code> is not server-wide
and only affects the current shell session. This means that you will not
have to restore any server-wide limits - but if you relogin,
you will have to set <code class="filename">ulimit</code> again.
</p><p>
Core dumps should be placed in the current working directory
(and Sphinx programs do not change it), so this is where you
would look for them.
</p><p>
Please do not immediately remove the core file because there could
be additional helpful information which could be retrieved from it.
You do not need to send me this file (as the debug info there is
closely tied to your system) but I might need to ask
you a few additional questions about it.
</p></div><div class="sect1" lang="en"><div class="titlepage"><div><div><h2 class="title" style="clear: both"><a name="conf-reference"></a>8.&nbsp;<code class="filename">sphinx.conf</code> options reference</h2></div></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="confgroup-source"></a>8.1.&nbsp;Data source configuration options</h3></div></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-source-type"></a>8.1.1.&nbsp;type</h4></div></div></div><p>
Data source type.
Mandatory, no default value.
Known types are <code class="option">mysql</code>, <code class="option">pgsql</code>, <code class="option">xmlpipe</code> and <code class="option">xmlpipe2</code>.
</p><p>
All other per-source options depend on source type selected by this option.
Names of the options used for SQL sources (ie. MySQL and PostgreSQL) start with "sql_";
names of the ones used for xmlpipe and xmlpipe2 start with "xmlpipe_".
</p><h5><a name="id358053"></a>Example:</h5><pre class="programlisting">
type = mysql
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-host"></a>8.1.2.&nbsp;sql_host</h4></div></div></div><p>
SQL server host to connect to.
Mandatory, no default value.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
In the simplest case, when Sphinx resides on the same host as your MySQL
or PostgreSQL installation, you would simply specify "localhost". Note that
the MySQL client library chooses whether to connect over TCP/IP or over a UNIX
socket based on the host name. Generally speaking, "localhost" will force it
to use a UNIX socket (this is the default and generally recommended mode)
and "127.0.0.1" will force TCP/IP usage. Refer to the
<a href="http://dev.mysql.com/doc/refman/5.0/en/mysql-real-connect.html" target="_top">MySQL manual</a>
for more details.
</p><h5><a name="id358109"></a>Example:</h5><pre class="programlisting">
sql_host = localhost
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-port"></a>8.1.3.&nbsp;sql_port</h4></div></div></div><p>
SQL server IP port to connect to.
Optional, default is 3306 for <code class="option">mysql</code> source type and 5432 for <code class="option">pgsql</code> type.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
Note that it depends on <a href="#conf-sql-host" title="8.1.2.&nbsp;sql_host">sql_host</a> setting whether this value will actually be used.
</p><h5><a name="id358160"></a>Example:</h5><pre class="programlisting">
sql_port = 3306
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-user"></a>8.1.4.&nbsp;sql_user</h4></div></div></div><p>
SQL user to use when connecting to <a href="#conf-sql-host" title="8.1.2.&nbsp;sql_host">sql_host</a>.
Mandatory, no default value.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><h5><a name="id358199"></a>Example:</h5><pre class="programlisting">
sql_user = test
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-pass"></a>8.1.5.&nbsp;sql_pass</h4></div></div></div><p>
SQL user password to use when connecting to <a href="#conf-sql-host" title="8.1.2.&nbsp;sql_host">sql_host</a>.
Mandatory, no default value.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><h5><a name="id358237"></a>Example:</h5><pre class="programlisting">
sql_pass = mysecretpassword
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-db"></a>8.1.6.&nbsp;sql_db</h4></div></div></div><p>
SQL database (in MySQL terms) to use after connecting, and to perform further queries within.
Mandatory, no default value.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><h5><a name="id358271"></a>Example:</h5><pre class="programlisting">
sql_db = test
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-sock"></a>8.1.7.&nbsp;sql_sock</h4></div></div></div><p>
UNIX socket name to connect to for local SQL servers.
Optional, default value is empty (use client library default settings).
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
On Linux, it would typically be <code class="filename">/var/lib/mysql/mysql.sock</code>.
On FreeBSD, it would typically be <code class="filename">/tmp/mysql.sock</code>.
Note that it depends on <a href="#conf-sql-host" title="8.1.2.&nbsp;sql_host">sql_host</a> setting whether this value will actually be used.
</p><h5><a name="id358329"></a>Example:</h5><pre class="programlisting">
sql_sock = /tmp/mysql.sock
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-mysql-connect-flags"></a>8.1.8.&nbsp;mysql_connect_flags</h4></div></div></div><p>
MySQL client connection flags.
Optional, default value is 0 (do not set any flags).
Applies to <code class="option">mysql</code> source type only.
</p><p>
This option must contain an integer value with the sum of the flags.
The value will be passed to <a href="http://dev.mysql.com/doc/refman/5.0/en/mysql-real-connect.html" target="_top">mysql_real_connect()</a> verbatim.
The flags are enumerated in the mysql_com.h include file.
Flags that are especially interesting in regard to indexing, with their respective values, are as follows:
</p><div class="itemizedlist"><ul type="disc"><li>CLIENT_COMPRESS = 32; can use compression protocol</li><li>CLIENT_SSL = 2048; switch to SSL after handshake</li><li>CLIENT_SECURE_CONNECTION = 32768; new 4.1 authentication</li></ul></div><p>
For instance, you can specify 2080 (2048+32) to use both compression and SSL,
or 32768 to use new authentication only. Initially, this option was introduced
to be able to use compression when the <code class="filename">indexer</code>
and <code class="filename">mysqld</code> are on different hosts. Compression on 1 Gbps
links is most likely to hurt indexing time though it reduces network traffic,
both in theory and in practice. However, enabling compression on 100 Mbps links
may improve indexing time significantly (up to a 20-30% total indexing time
improvement was reported). Your mileage may vary.
</p><h5><a name="id358418"></a>Example:</h5><pre class="programlisting">
mysql_connect_flags = 32 # enable compression
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-query-pre"></a>8.1.9.&nbsp;sql_query_pre</h4></div></div></div><p>
Pre-fetch query, or pre-query.
Multi-value, optional, default is empty list of queries.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
Multi-value means that you can specify several pre-queries.
They are executed before <a href="#conf-sql-query" title="8.1.10.&nbsp;sql_query">the main fetch query</a>,
and they will be executed exactly in the order of appearance in the configuration file.
Pre-query results are ignored.
</p><p>
Pre-queries are useful in a lot of ways. They are used to set up the encoding,
mark records that are going to be indexed, update internal counters,
set various per-connection SQL server options and variables, and so on.
</p><p>
Perhaps the most frequent pre-query usage is to specify the encoding
that the server will use for the rows it returns. It <span class="bold"><strong>must</strong></span> match
the encoding that Sphinx expects (as specified by the <a href="#conf-charset-type" title="8.2.11.&nbsp;charset_type">charset_type</a>
and <a href="#conf-charset-table" title="8.2.12.&nbsp;charset_table">charset_table</a> options).
Two MySQL specific examples of setting the encoding are:
</p><pre class="programlisting">
sql_query_pre = SET CHARACTER_SET_RESULTS=cp1251
sql_query_pre = SET NAMES utf8
</pre><p>
Also specific to MySQL sources, it is useful to disable the query cache
(for the indexer connection only) in a pre-query, because indexing queries
are not going to be re-run frequently anyway, and there's no sense
in caching their results. That can be achieved with:
</p><pre class="programlisting">
sql_query_pre = SET SESSION query_cache_type=OFF
</pre><p>
</p><h5><a name="id358526"></a>Example:</h5><pre class="programlisting">
sql_query_pre = SET NAMES=utf8
sql_query_pre = SET SESSION query_cache_type=OFF
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-query"></a>8.1.10.&nbsp;sql_query</h4></div></div></div><p>
Main document fetch query.
Mandatory, no default value.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
There can be only one main query.
This is the query which is used to retrieve documents from the SQL server.
You can specify up to 32 full-text fields (formally, up to SPH_MAX_FIELDS from sphinx.h), and an arbitrary number of attributes.
All of the columns that are neither document ID (the first one) nor attributes will be full-text indexed.
</p><p>
Document ID <span class="bold"><strong>MUST</strong></span> be the very first field,
and it <span class="bold"><strong>MUST BE A UNIQUE UNSIGNED POSITIVE (NON-ZERO, NON-NEGATIVE) INTEGER NUMBER</strong></span>.
It can be either 32-bit or 64-bit, depending on how you built Sphinx;
by default it builds with 32-bit ID support, but the <code class="option">--enable-id64</code> option
to <code class="filename">configure</code> allows building with 64-bit document and word ID support.

</p><h5><a name="id358611"></a>Example:</h5><pre class="programlisting">
sql_query = \
	SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, \
		title, content \
	FROM documents
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-query-range"></a>8.1.11.&nbsp;sql_query_range</h4></div></div></div><p>
Range query setup.
Optional, default is empty.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
Setting this option enables ranged document fetch queries (see <a href="#ranged-queries">Section&nbsp;3.7, &#8220;Ranged queries&#8221;</a>).
Ranged queries are useful to avoid notorious MyISAM table locks when indexing
lots of data. (They also help with other less notorious issues, such as reduced
performance caused by big result sets, or additional resources consumed by InnoDB
to serialize big read transactions.)
</p><p>
The query specified in this option must fetch min and max document IDs that will be
used as range boundaries. It must return exactly two integer fields, min ID first
and max ID second; the field names are ignored.
</p><p>
When ranged queries are enabled, <a href="#conf-sql-query" title="8.1.10.&nbsp;sql_query">sql_query</a>
will be required to contain <code class="option">$start</code> and <code class="option">$end</code> macros
(because it obviously would be a mistake to index the whole table many times over).
Note that the intervals specified by <code class="option">$start</code>..<code class="option">$end</code>
will not overlap, so you should <span class="bold"><strong>not</strong></span> remove document IDs that are
exactly equal to <code class="option">$start</code> or <code class="option">$end</code> from your query.
The example in <a href="#ranged-queries">Section&nbsp;3.7, &#8220;Ranged queries&#8221;</a> illustrates that; note how it
uses greater-or-equal and less-or-equal comparisons.
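For example, a matching pair of queries could look like this (a sketch;
the table and column names are illustrative):
</p><pre class="programlisting">
sql_query_range = SELECT MIN(id),MAX(id) FROM documents
sql_query = \
	SELECT id, group_id, title, content FROM documents \
	WHERE id&gt;=$start AND id&lt;=$end
</pre><p>
Here <code class="filename">indexer</code> substitutes <code class="option">$start</code> and <code class="option">$end</code> anew on every step.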
</p><h5><a name="id358723"></a>Example:</h5><pre class="programlisting">
sql_query_range = SELECT MIN(id),MAX(id) FROM documents
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-range-step"></a>8.1.12.&nbsp;sql_range_step</h4></div></div></div><p>
Range query step.
Optional, default is 1024.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
Only used when <a href="#ranged-queries">ranged queries</a> are enabled.
The full document ID interval fetched by <a href="#conf-sql-query-range" title="8.1.11.&nbsp;sql_query_range">sql_query_range</a>
will be walked in steps of this size. For example, if the min and max IDs fetched
are 12 and 3456 respectively, and the step is 1000, indexer will call
<a href="#conf-sql-query" title="8.1.10.&nbsp;sql_query">sql_query</a> several times with the
following substitutions:
</p><div class="itemizedlist"><ul type="disc"><li>$start=12, $end=1011</li><li>$start=1012, $end=2011</li><li>$start=2012, $end=3011</li><li>$start=3012, $end=3456</li></ul></div><p>
</p><h5><a name="id358809"></a>Example:</h5><pre class="programlisting">
sql_range_step = 1000
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-attr-uint"></a>8.1.13.&nbsp;sql_attr_uint</h4></div></div></div><p>
Unsigned integer <a href="#attributes" title="3.2.&nbsp;Attributes">attribute</a> declaration.
Multi-value (there might be multiple attributes declared), optional.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
The column value should fit into 32-bit unsigned integer range.
Values outside this range will be accepted but wrapped around.
For instance, -1 will be wrapped around to 2^32-1 or 4,294,967,295.
</p><p>
You can specify the bit count for integer attributes by appending
':BITCOUNT' to the attribute name (see the example below). Attributes with
less than the default 32-bit size, or bitfields, perform slower.
But they require less RAM when using <a href="#conf-docinfo" title="8.2.4.&nbsp;docinfo">extern storage</a>:
such bitfields are packed together in 32-bit chunks in the <code class="filename">.spa</code>
attribute data file. Bit size settings are ignored if using
<a href="#conf-docinfo" title="8.2.4.&nbsp;docinfo">inline storage</a>.
</p><h5><a name="id358887"></a>Example:</h5><pre class="programlisting">
sql_attr_uint = group_id
sql_attr_uint = forum_id:9 # 9 bits for forum_id
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-attr-bool"></a>8.1.14.&nbsp;sql_attr_bool</h4></div></div></div><p>
Boolean <a href="#attributes" title="3.2.&nbsp;Attributes">attribute</a> declaration.
Multi-value (there might be multiple attributes declared), optional.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
Equivalent to <a href="#conf-sql-attr-uint" title="8.1.13.&nbsp;sql_attr_uint">sql_attr_uint</a> declaration with a bit count of 1.
</p><h5><a name="id358937"></a>Example:</h5><pre class="programlisting">
sql_attr_bool = is_deleted # will be packed to 1 bit
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-attr-timestamp"></a>8.1.15.&nbsp;sql_attr_timestamp</h4></div></div></div><p>
UNIX timestamp <a href="#attributes" title="3.2.&nbsp;Attributes">attribute</a> declaration.
Multi-value (there might be multiple attributes declared), optional.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
The column value should be a timestamp in UNIX format, ie. a 32-bit unsigned
integer number of seconds elapsed since midnight, January 01, 1970, GMT.
Timestamps are internally stored and handled as integers everywhere.
But in addition to working with timestamps as integers, it's also legal
to use them along with different date-based functions - such as the time segments
sorting mode, or day/week/month/year extraction for GROUP BY. Note that
DATE or DATETIME column types in MySQL can <span class="bold"><strong>not</strong></span> be directly used
as timestamps; you need to explicitly convert such columns using
the UNIX_TIMESTAMP function.
</p><h5><a name="id359003"></a>Example:</h5><pre class="programlisting">
sql_attr_timestamp = UNIX_TIMESTAMP(added_datetime) AS added_ts
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-attr-str2ordinal"></a>8.1.16.&nbsp;sql_attr_str2ordinal</h4></div></div></div><p>
Ordinal string number <a href="#attributes" title="3.2.&nbsp;Attributes">attribute</a> declaration.
Multi-value (there might be multiple attributes declared), optional.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
This attribute type (so-called ordinal, for brevity) is intended
to allow sorting by string values, but without storing the strings
themselves. When indexing ordinals, string values are fetched from
the database, temporarily stored, sorted, and then replaced by their
respective ordinal numbers in the array of sorted strings.
So, the ordinal number is an integer such that sorting by it
produces the same result as sorting lexicographically by the original strings.
</p><p>
Earlier versions could consume a lot of RAM when indexing ordinals.
Starting with revision r1112, ordinal accumulation and sorting
also run in fixed memory (at the cost of using additional temporary
disk space), and honor the
<a href="#conf-mem-limit" title="8.3.1.&nbsp;mem_limit">mem_limit</a> setting.
</p><p>
Ideally the strings should be sorted differently, depending
on the encoding and locale. For instance, if the strings are known
to be Russian text in KOI8R encoding, sorting the bytes 0xE0, 0xE1,
and 0xE2 should produce 0xE1, 0xE2 and 0xE0, because in KOI8R
value 0xE0 encodes a character that is (noticeably) after
characters encoded by 0xE1 and 0xE2. Unfortunately, Sphinx
does not support that at the moment and will simply sort
the strings bytewise.
</p><h5><a name="id359083"></a>Example:</h5><pre class="programlisting">
sql_attr_str2ordinal = author_name
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-attr-float"></a>8.1.17.&nbsp;sql_attr_float</h4></div></div></div><p>
Floating point <a href="#attributes" title="3.2.&nbsp;Attributes">attribute</a> declaration.
Multi-value (there might be multiple attributes declared), optional.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
The values will be stored in single precision, 32-bit IEEE 754 format.
Represented range is approximately from 1e-38 to 1e+38. The amount
of decimal digits that can be stored precisely is approximately 7.
One important usage of the float attributes is storing latitude
and longitude values (in radians), for further usage in query-time
geosphere distance calculations.
</p><h5><a name="id359139"></a>Example:</h5><pre class="programlisting">
sql_attr_float = lat_radians
sql_attr_float = long_radians
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-attr-multi"></a>8.1.18.&nbsp;sql_attr_multi</h4></div></div></div><p>
<a href="#mva" title="3.3.&nbsp;MVA (multi-valued attributes)">Multi-valued attribute</a> (MVA) declaration.
Multi-value (ie. there may be more than one such attribute declared), optional.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
Plain attributes only allow attaching a single value to each document.
However, there are cases (such as tags or categories) when it is
desirable to attach multiple values of the same attribute and be able
to apply filtering or grouping to value lists.
</p><p>
The declaration format is as follows (backslashes are for clarity only;
everything can be declared in a single line as well):
</p><pre class="programlisting">
sql_attr_multi = ATTR-TYPE ATTR-NAME 'from' SOURCE-TYPE \
	[;QUERY] \
	[;RANGE-QUERY]
</pre><p>
where
</p><div class="itemizedlist"><ul type="disc"><li>ATTR-TYPE is 'uint' or 'timestamp'</li><li>SOURCE-TYPE is 'field', 'query', or 'ranged-query'</li><li>QUERY is SQL query used to fetch all ( docid, attrvalue ) pairs</li><li>RANGE-QUERY is SQL query used to fetch min and max ID values, similar to 'sql_query_range'</li></ul></div><p>
</p><h5><a name="id359224"></a>Example:</h5><pre class="programlisting">
sql_attr_multi = uint tag from query; SELECT id, tag FROM tags
sql_attr_multi = uint tag from ranged-query; \
	SELECT id, tag FROM tags WHERE id&gt;=$start AND id&lt;=$end; \
	SELECT MIN(id), MAX(id) FROM tags
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-query-post"></a>8.1.19.&nbsp;sql_query_post</h4></div></div></div><p>
Post-fetch query.
Optional, default value is empty.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
This query is executed immediately after <a href="#conf-sql-query" title="8.1.10.&nbsp;sql_query">sql_query</a>
completes successfully. When the post-fetch query produces errors,
they are reported as warnings, but indexing is <span class="bold"><strong>not</strong></span> terminated.
Its result set is ignored. Note that indexing is <span class="bold"><strong>not</strong></span> yet completed
at the point when this query gets executed, and further indexing may still fail.
Therefore, any permanent updates should not be done from here.
For instance, updates on a helper table that permanently change
the last successfully indexed ID should not be run from the post-fetch
query; they should be run from the <a href="#conf-sql-query-post-index" title="8.1.20.&nbsp;sql_query_post_index">post-index query</a> instead.
</p><h5><a name="id359304"></a>Example:</h5><pre class="programlisting">
sql_query_post = DROP TABLE my_tmp_table
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-query-post-index"></a>8.1.20.&nbsp;sql_query_post_index</h4></div></div></div><p>
Post-index query.
Optional, default value is empty.
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
This query is executed when indexing is fully and successfully completed.
If this query produces errors, they are reported as warnings,
but indexing is <span class="bold"><strong>not</strong></span> terminated. Its result set is ignored.
The <code class="code">$maxid</code> macro can be used in its text; it will be
expanded to the maximum document ID which was actually fetched
from the database during indexing.
</p><h5><a name="id359360"></a>Example:</h5><pre class="programlisting">
sql_query_post_index = REPLACE INTO counters ( id, val ) \
    VALUES ( 'max_indexed_id', $maxid )
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-ranged-throttle"></a>8.1.21.&nbsp;sql_ranged_throttle</h4></div></div></div><p>
Ranged query throttling period, in milliseconds.
Optional, default is 0 (no throttling).
Applies to SQL source types (<code class="option">mysql</code> and <code class="option">pgsql</code>) only.
</p><p>
Throttling can be useful when the indexer imposes too much load on the
database server. It causes the indexer to sleep for the given amount of
milliseconds once per each ranged query step. This sleep is unconditional,
and is performed before the fetch query.
</p><h5><a name="id359405"></a>Example:</h5><pre class="programlisting">
sql_ranged_throttle = 1000 # sleep for 1 sec before each query step
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-sql-query-info"></a>8.1.22.&nbsp;sql_query_info</h4></div></div></div><p>
Document info query.
Optional, default is empty.
Applies to <code class="option">mysql</code> source type only.
</p><p>
Only used by the CLI search utility to fetch and display document information,
only works with MySQL at the moment, and is only intended for debugging purposes.
This query fetches the row that will be displayed by the CLI search utility
for each document ID. It is required to contain the <code class="code">$id</code> macro
that expands to the queried document ID.
</p><h5><a name="id359449"></a>Example:</h5><pre class="programlisting">
sql_query_info = SELECT * FROM documents WHERE id=$id
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-xmlpipe-command"></a>8.1.23.&nbsp;xmlpipe_command</h4></div></div></div><p>
Shell command that invokes xmlpipe stream producer.
Mandatory.
Applies to <code class="option">xmlpipe</code> and <code class="option">xmlpipe2</code> source types only.
</p><p>
Specifies a command that will be executed and whose output
will be parsed for documents. Refer to <a href="#xmlpipe" title="3.8.&nbsp;xmlpipe data source">Section&nbsp;3.8, &#8220;xmlpipe data source&#8221;</a>
or <a href="#xmlpipe2" title="3.9.&nbsp;xmlpipe2 data source">Section&nbsp;3.9, &#8220;xmlpipe2 data source&#8221;</a> for the specific format description.
</p><h5><a name="id359501"></a>Example:</h5><pre class="programlisting">
xmlpipe_command = cat /home/sphinx/test.xml
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-xmlpipe-field"></a>8.1.24.&nbsp;xmlpipe_field</h4></div></div></div><p>
xmlpipe field declaration.
Multi-value, optional.
Applies to <code class="option">xmlpipe2</code> source type only. Refer to <a href="#xmlpipe2" title="3.9.&nbsp;xmlpipe2 data source">Section&nbsp;3.9, &#8220;xmlpipe2 data source&#8221;</a>.
</p><h5><a name="id359535"></a>Example:</h5><pre class="programlisting">
xmlpipe_field = subject
xmlpipe_field = content
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-xmlpipe-attr-uint"></a>8.1.25.&nbsp;xmlpipe_attr_uint</h4></div></div></div><p>
xmlpipe integer attribute declaration.
Multi-value, optional.
Applies to <code class="option">xmlpipe2</code> source type only.
Syntax fully matches that of <a href="#conf-sql-attr-uint" title="8.1.13.&nbsp;sql_attr_uint">sql_attr_uint</a>.
</p><h5><a name="id359573"></a>Example:</h5><pre class="programlisting">
xmlpipe_attr_uint = author
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-xmlpipe-attr-bool"></a>8.1.26.&nbsp;xmlpipe_attr_bool</h4></div></div></div><p>
xmlpipe boolean attribute declaration.
Multi-value, optional.
Applies to <code class="option">xmlpipe2</code> source type only.
Syntax fully matches that of <a href="#conf-sql-attr-bool" title="8.1.14.&nbsp;sql_attr_bool">sql_attr_bool</a>.
</p><h5><a name="id359610"></a>Example:</h5><pre class="programlisting">
xmlpipe_attr_bool = is_deleted # will be packed to 1 bit
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-xmlpipe-attr-timestamp"></a>8.1.27.&nbsp;xmlpipe_attr_timestamp</h4></div></div></div><p>
xmlpipe UNIX timestamp attribute declaration.
Multi-value, optional.
Applies to <code class="option">xmlpipe2</code> source type only.
Syntax fully matches that of <a href="#conf-sql-attr-timestamp" title="8.1.15.&nbsp;sql_attr_timestamp">sql_attr_timestamp</a>.
</p><h5><a name="id359648"></a>Example:</h5><pre class="programlisting">
xmlpipe_attr_timestamp = published
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-xmlpipe-attr-str2ordinal"></a>8.1.28.&nbsp;xmlpipe_attr_str2ordinal</h4></div></div></div><p>
xmlpipe string ordinal attribute declaration.
Multi-value, optional.
Applies to <code class="option">xmlpipe2</code> source type only.
Syntax fully matches that of <a href="#conf-sql-attr-str2ordinal" title="8.1.16.&nbsp;sql_attr_str2ordinal">sql_attr_str2ordinal</a>.
</p><h5><a name="id359686"></a>Example:</h5><pre class="programlisting">
xmlpipe_attr_str2ordinal = author_sort
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-xmlpipe-attr-float"></a>8.1.29.&nbsp;xmlpipe_attr_float</h4></div></div></div><p>
xmlpipe floating point attribute declaration.
Multi-value, optional.
Applies to <code class="option">xmlpipe2</code> source type only.
Syntax fully matches that of <a href="#conf-sql-attr-float" title="8.1.17.&nbsp;sql_attr_float">sql_attr_float</a>.
</p><h5><a name="id359724"></a>Example:</h5><pre class="programlisting">
xmlpipe_attr_float = lat_radians
xmlpipe_attr_float = long_radians
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-xmlpipe-attr-multi"></a>8.1.30.&nbsp;xmlpipe_attr_multi</h4></div></div></div><p>
xmlpipe MVA attribute declaration.
Multi-value, optional.
Applies to <code class="option">xmlpipe2</code> source type only.
</p><p>
This setting declares an MVA attribute tag in the xmlpipe2 stream.
The contents of the specified tag will be parsed, and a list of integers
that will constitute the MVA will be extracted, similar to how
<a href="#conf-sql-attr-multi" title="8.1.18.&nbsp;sql_attr_multi">sql_attr_multi</a> parses
SQL column contents when the 'field' MVA source type is specified.
</p><h5><a name="id359771"></a>Example:</h5><pre class="programlisting">
xmlpipe_attr_multi = taglist
</pre></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="confgroup-index"></a>8.2.&nbsp;Index configuration options</h3></div></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-index-type"></a>8.2.1.&nbsp;type</h4></div></div></div><p>
Index type.
Optional, default is empty (index is plain local index).
Known values are empty string or 'distributed'.
</p><p>
Sphinx supports two different types of indexes: local, which are stored
and processed on the local machine; and distributed, which involve not only
local searching but querying remote <code class="filename">searchd</code> instances
over the network as well. The index type setting lets you choose between them.
By default, indexes are local. Specifying 'distributed' for type enables
distributed searching, see <a href="#distributed" title="4.7.&nbsp;Distributed searching">Section&nbsp;4.7, &#8220;Distributed searching&#8221;</a>.
</p><h5><a name="id359829"></a>Example:</h5><pre class="programlisting">
type = distributed
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-source"></a>8.2.2.&nbsp;source</h4></div></div></div><p>
Adds document source to local index.
Multi-value, mandatory.
</p><p>
Specifies the document source to get documents from when the current
index is indexed. There must be at least one source. There may be multiple
sources, without any restrictions on the source types: ie. you can pull
part of the data from a MySQL server, part from PostgreSQL, and part from
the filesystem using the xmlpipe2 wrapper.
</p><p>
However, there are some restrictions on the source data. First,
document IDs must be globally unique across all sources. If that
condition is not met, you might get unexpected search results.
Second, source schemas must be the same in order to be stored
within the same index.
</p><p>
No source ID is stored automatically. Therefore, in order to be able
to tell what source the matched document came from, you will need to
store some additional information yourself. Two typical approaches
include:
</p><div class="orderedlist"><ol type="1"><li>mangling document ID and encoding source ID in it:
<pre class="programlisting">
source src1
{
	sql_query = SELECT id*10+1, ... FROM table1
	...
}

source src2
{
	sql_query = SELECT id*10+2, ... FROM table2
	...
}
</pre></li><li>
storing source ID simply as an attribute:
<pre class="programlisting">
source src1
{
	sql_query = SELECT id, 1 AS source_id FROM table1
	sql_attr_uint = source_id
	...
}

source src2
{
	sql_query = SELECT id, 2 AS source_id FROM table2
	sql_attr_uint = source_id
	...
}
</pre></li></ol></div><p>
</p><h5><a name="id359922"></a>Example:</h5><pre class="programlisting">
source = srcpart1
source = srcpart2
source = srcpart3
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-path"></a>8.2.3.&nbsp;path</h4></div></div></div><p>
Index files path and file name (without extension).
Mandatory.
</p><p>
Path specifies both directory and file name, but without extension.
<code class="filename">indexer</code> will append different extensions
to this path when generating final names for both permanent and
temporary index files. Permanent data files have several different
extensions starting with '.sp'; temporary files' extensions
start with '.tmp'. It's safe to remove <code class="filename">.tmp*</code>
files if indexer fails to remove them automatically.
</p><p>
For reference, different index files store the following data:
</p><div class="itemizedlist"><ul type="disc"><li><code class="filename">.spa</code> stores document attributes (used in <a href="#conf-docinfo" title="8.2.4.&nbsp;docinfo">extern docinfo</a> storage mode only);</li><li><code class="filename">.spd</code> stores matching document ID lists for each word ID;</li><li><code class="filename">.sph</code> stores index header information;</li><li><code class="filename">.spi</code> stores word lists (word IDs and pointers to <code class="filename">.spd</code> file);</li><li><code class="filename">.spm</code> stores MVA data;</li><li><code class="filename">.spp</code> stores hit (aka posting, aka word occurence) lists for each word ID.</li></ul></div><p>
</p><h5><a name="id360034"></a>Example:</h5><pre class="programlisting">
path = /var/data/test1
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-docinfo"></a>8.2.4.&nbsp;docinfo</h4></div></div></div><p>
Document attribute values (docinfo) storage mode.
Optional, default is 'extern'.
Known values are 'none', 'extern' and 'inline'.
</p><p>
Docinfo storage mode defines how exactly docinfo will be
physically stored on disk and RAM. "none" means that there will be
no docinfo at all (ie. no attributes). Normally you need not set
"none" explicitly because Sphinx will automatically select "none"
when there are no attributes configured. "inline" means that the
docinfo will be stored in the <code class="filename">.spd</code> file,
along with the document ID lists. "extern" means that the docinfo
will be stored separately (externally) from document ID lists,
in a special <code class="filename">.spa</code> file.
</p><p>
Basically, externally stored docinfo must be kept in RAM when querying,
for performance reasons. So in some cases "inline" might be the only option.
However, such cases are infrequent, and docinfo defaults to "extern".
Refer to <a href="#attributes" title="3.2.&nbsp;Attributes">Section&nbsp;3.2, &#8220;Attributes&#8221;</a> for in-depth discussion
and RAM usage estimates.
</p><h5><a name="id360102"></a>Example:</h5><pre class="programlisting">
docinfo = inline
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-mlock"></a>8.2.5.&nbsp;mlock</h4></div></div></div><p>
Memory locking for cached data.
Optional, default is 0 (do not call mlock()).
</p><p>
For search performance, <code class="filename">searchd</code> preloads
a copy of <code class="filename">.spa</code> and <code class="filename">.spi</code>
files in RAM, and keeps that copy in RAM at all times. But if there
are no searches on the index for some time, there are no accesses
to that cached copy, and OS might decide to swap it out to disk.
First queries to such "cooled down" index will cause swap-in
and their latency will suffer.
</p><p>
Setting mlock option to 1 makes Sphinx lock physical RAM used
for that cached data using mlock(2) system call, and that prevents
swapping (see man 2 mlock for details). mlock(2) is a privileged call,
so it will require <code class="filename">searchd</code> to be either run
from root account, or be granted enough privileges otherwise.
If mlock() fails, a warning is emitted, but index continues
working.
</p><h5><a name="id360168"></a>Example:</h5><pre class="programlisting">
mlock = 1
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-morphology"></a>8.2.6.&nbsp;morphology</h4></div></div></div><p>
A list of morphology preprocessors to apply.
Optional, default is empty (do not apply any preprocessor).
</p><p>
Morphology preprocessors can be applied to the words being
indexed to replace different forms of the same word with the base,
normalized form. For instance, English stemmer will normalize
both "dogs" and "dog" to "dog", making search results for
both searches the same.
</p><p>
Built-in preprocessors include English stemmer, Russian stemmer
(that supports UTF-8 and Windows-1251 encodings), Soundex,
and Metaphone. The latter two replace the words with special
phonetic codes that are equal if the words are phonetically close.
Additional stemmers provided by <a href="http://snowball.tartarus.org/" target="_top">Snowball</a>
project <a href="http://snowball.tartarus.org/dist/libstemmer_c.tgz" target="_top">libstemmer</a> library
can be enabled at compile time using <code class="option">--with-libstemmer</code> <code class="filename">configure</code> option.
Built-in English and Russian stemmers should be faster than their
libstemmer counterparts, but can produce slightly different results,
because they are based on an older version. Metaphone implementation
is based on Double Metaphone algorithm and indexes the primary code.
</p><p>
Built-in values that can be used in the <code class="option">morphology</code> option are:
'none', 'stem_en', 'stem_ru', 'stem_enru', 'soundex', and 'metaphone'.
Additional values provided by libstemmer are in 'libstemmer_XXX' format,
where XXX is libstemmer algorithm codename (refer to
<code class="filename">libstemmer_c/libstemmer/modules.txt</code> for a complete list).
</p><p>
Several stemmers can be specified (comma-separated). They will be applied
to incoming words in the order they are listed, and the processing will stop
once one of the stemmers actually modifies the word.
Also when <a href="#conf-wordforms" title="8.2.8.&nbsp;wordforms">wordforms</a> feature is enabled
the word will be looked up in word forms dictionary first, and if there is
a matching entry in the dictionary, stemmers will not be applied at all.
Or in other words, <a href="#conf-wordforms" title="8.2.8.&nbsp;wordforms">wordforms</a> can be
used to implement stemming exceptions.
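For instance, assuming that the English stemmer would otherwise reduce "gps" to "gp" (a hypothetical illustration), a wordforms entry can pin the original form:
</p><pre class="programlisting">
gps &gt; gps
</pre><p>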
</p><h5><a name="id360299"></a>Example:</h5><pre class="programlisting">
morphology = stem_en, libstemmer_sv
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-stopwords"></a>8.2.7.&nbsp;stopwords</h4></div></div></div><p>
Stopword files list (space separated).
Optional, default is empty.
</p><p>
Stopwords are the words that will not be indexed. Typically you'd
put the most frequent words in the stopwords list, because they do not add
much value to search results but consume a lot of resources to process.
</p><p>
You can specify several file names, separated by spaces. All the files
will be loaded. Stopwords file format is simple plain text. The encoding
must match index encoding specified in <a href="#conf-charset-type" title="8.2.11.&nbsp;charset_type">charset_type</a>.
File data will be tokenized with respect to <a href="#conf-charset-table" title="8.2.12.&nbsp;charset_table">charset_table</a>
settings, so you can use the same separators as in the indexed data.
The <a href="#conf-morphology" title="8.2.6.&nbsp;morphology">stemmers</a> will also be
applied when parsing the stopwords file.
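For instance, a minimal English stopwords file might simply list a few words (any whitespace works as a separator, since the file is tokenized):
</p><pre class="programlisting">
the
a
an
and
</pre><p>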
</p><p>
While stopwords are not indexed, they still do affect the keyword positions.
For instance, assume that "the" is a stopword, that document 1 contains the line
"in office", and that document 2 contains "in the office". Searching for "in office"
as for exact phrase will only return the first document, as expected, even though
"the" in the second one is stopped.
</p><h5><a name="id360389"></a>Example:</h5><pre class="programlisting">
stopwords = /usr/local/sphinx/data/stopwords.txt
stopwords = stopwords-ru.txt stopwords-en.txt
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-wordforms"></a>8.2.8.&nbsp;wordforms</h4></div></div></div><p>
Word forms dictionary.
Optional, default is empty.
</p><p>
Word forms are applied after tokenizing the incoming text
by <a href="#conf-charset-table" title="8.2.12.&nbsp;charset_table">charset_table</a> rules.
They essentially let you replace one word with another. Normally,
that would be used to bring different word forms to a single
normal form (eg. to normalize all the variants such as "walks",
"walked", "walking" to the normal form "walk"). It can also be used
to implement stemming exceptions, because stemming is not applied
to words found in the forms list.
</p><p>
Dictionaries are used to normalize incoming words both during indexing
and searching. Therefore, to pick up changes in wordforms file
it's required to reindex and restart <code class="filename">searchd</code>.
</p><p>
Word forms support in Sphinx is designed to support big dictionaries well.
They moderately affect indexing speed: for instance, a dictionary with 1 million
entries slows down indexing about 1.5 times. Searching speed is not affected at all.
Additional RAM impact is roughly equal to the dictionary file size,
and dictionaries are shared across indexes: ie. if the very same 50 MB wordforms
file is specified for 10 different indexes, additional <code class="filename">searchd</code>
RAM usage will be about 50 MB.
</p><p>
Dictionary file should be in a simple plain text format. Each line 
should contain source and destination word forms, in exactly the same
encoding as specified in <a href="#conf-charset-type" title="8.2.11.&nbsp;charset_type">charset_type</a>,
separated by "greater" sign. Rules from the
<a href="#conf-charset-table" title="8.2.12.&nbsp;charset_table">charset_table</a> will be
applied when the file is loaded. So basically it's as case sensitive
as your other full-text indexed data, ie. typically case insensitive.
Here's a sample of the file contents:
</p><pre class="programlisting">
walks &gt; walk
walked &gt; walk
walking &gt; walk
</pre><p>
</p><p>
There is a bundled <code class="filename">spelldump</code> utility that
helps you create a dictionary file in the format Sphinx can read,
from source <code class="filename">.dict</code> and <code class="filename">.aff</code>
dictionary files in <code class="filename">ispell</code> format.
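A hypothetical invocation might look like this (the exact argument syntax is an assumption; check the utility's own usage message):
</p><pre class="programlisting">
spelldump en.dict en.aff wordforms.txt
</pre><p>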
</p><h5><a name="id360528"></a>Example:</h5><pre class="programlisting">
wordforms = /usr/local/sphinx/data/wordforms.txt
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-exceptions"></a>8.2.9.&nbsp;exceptions</h4></div></div></div><p>
Tokenizing exceptions file.
Optional, default is empty.
</p><p>
Exceptions let you map one or more tokens (including tokens with
characters that would normally be excluded) to a single keyword.
They are similar to <a href="#conf-wordforms" title="8.2.8.&nbsp;wordforms">wordforms</a>
in that they also perform mapping, but have a number of important
differences.
</p><p>
Short summary of the differences is as follows:
</p><div class="itemizedlist"><ul type="disc"><li>exceptions are case sensitive, wordforms are not;</li><li>exceptions allow to detect sequences of tokens, wordforms work with single words only;</li><li>exceptions can use special characters that are <span class="bold"><strong>not</strong></span> in charset_table, wordforms fully obey charset_table;</li><li>exceptions can underperform on huge dictionaries, wordforms handle millions of entries well.</li></ul></div><p>
</p><p>
The expected file format is also plain text, with one line per exception,
and the line format is as follows:
</p><pre class="programlisting">
map-from-tokens =&gt; map-to-token
</pre><p>
Example file:
</p><pre class="programlisting">
AT &amp; T =&gt; AT&amp;T
AT&amp;T =&gt; AT&amp;T
Standarten   Fuehrer =&gt; standartenfuhrer
Standarten Fuhrer =&gt; standartenfuhrer
MS Windows =&gt; ms windows
Microsoft Windows =&gt; ms windows
C++ =&gt; cplusplus
c++ =&gt; cplusplus
C plus plus =&gt; cplusplus
</pre><p>
All tokens here are case sensitive: they will <span class="bold"><strong>not</strong></span> be processed by
<a href="#conf-charset-table" title="8.2.12.&nbsp;charset_table">charset_table</a> rules. Thus, with
the example exceptions file above, "At&amp;t" text will be tokenized as two
keywords "at" and "t", because of lowercase letters. On the other hand,
"AT&amp;T" will match exactly and produce single "AT&amp;T" keyword.
</p><p>
Note that this map-to keyword is a) always interpreted
as a <span class="emphasis"><em>single</em></span> word, and b) is both case and space
sensitive! In our sample, "ms windows" query will <span class="emphasis"><em>not</em></span>
match the document with "MS Windows" text. The query will be interpreted
as a query for two keywords, "ms" and "windows". And what "MS Windows"
gets mapped to is a <span class="emphasis"><em>single</em></span> keyword "ms windows",
with a space in the middle. On the other hand, "standartenfuhrer"
will retrieve documents with "Standarten Fuhrer" or "Standarten Fuehrer"
contents (capitalized exactly like this), or any capitalization variant
of the keyword itself, eg. "staNdarTenfUhreR". (It won't catch
"standarten fuhrer", however: this text does not match any of the
listed exceptions because of case sensitivity, and gets indexed
as two separate keywords.)
</p><p>
Whitespace in the map-from tokens list matters, but its amount does not.
Any amount of whitespace in the map-from list will match any other amount
of whitespace in the indexed document or query. For instance, "AT&nbsp;&amp;&nbsp;T"
map-from token will match "AT&nbsp;&nbsp;&nbsp;&nbsp;&amp;&nbsp;&nbsp;T" text,
whatever the amount of space in both map-from part and the indexed text.
Such text will therefore be indexed as a special "AT&amp;T" keyword,
thanks to the very first entry from the sample.
</p><p>
Exceptions also let you capture special characters (that are exceptions
from general <a href="#conf-charset-table" title="8.2.12.&nbsp;charset_table">charset_table</a> rules;
hence the name). Assume that you generally do not want to treat '+'
as a valid character, but still want to be able to search for some exceptions
from this rule such as 'C++'. The sample above will do just that, totally
independent of what characters are in the table and what are not.
</p><p>
Exceptions are applied to raw incoming document and query data
during indexing and searching, respectively. Therefore, to pick up
changes in the file it's required to reindex and restart
<code class="filename">searchd</code>.
</p><h5><a name="id360732"></a>Example:</h5><pre class="programlisting">
exceptions = /usr/local/sphinx/data/exceptions.txt
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-min-word-len"></a>8.2.10.&nbsp;min_word_len</h4></div></div></div><p>
Minimum indexed word length.
Optional, default is 1 (index everything).
</p><p>
Only those words that are not shorter than this minimum will be indexed.
For instance, if min_word_len is 4, then 'the' won't be indexed, but 'they' will be.
</p><h5><a name="id360766"></a>Example:</h5><pre class="programlisting">
min_word_len = 4
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-charset-type"></a>8.2.11.&nbsp;charset_type</h4></div></div></div><p>
Character set encoding type.
Optional, default is 'sbcs'.
Known values are 'sbcs' and 'utf-8'.
</p><p>
Different encodings have different methods for mapping their internal
character codes into specific byte sequences. The two most common methods
in use today are single-byte encoding and UTF-8. Their corresponding
charset_type values are 'sbcs' (stands for Single Byte Character Set)
and 'utf-8'. The selected encoding type will be used everywhere where
the index is used: when indexing the data, when parsing the query
against this index, when generating snippets, etc.
</p><p>
Note that while 'utf-8' implies that the decoded values must be treated
as Unicode codepoint numbers, there's a family of 'sbcs' encodings that
may in turn treat different byte values differently, and that should be
properly reflected in your <a href="#conf-charset-table" title="8.2.12.&nbsp;charset_table">charset_table</a> settings.
For example, the same byte value of 224 (0xE0 hex) maps to different Russian letters
depending on whether koi-8r or windows-1251 encoding is used.
</p><h5><a name="id360824"></a>Example:</h5><pre class="programlisting">
charset_type = utf-8
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-charset-table"></a>8.2.12.&nbsp;charset_table</h4></div></div></div><p>
Accepted characters table, with case folding rules.
Optional, default value depends on <a href="#conf-charset-type" title="8.2.11.&nbsp;charset_type">charset_type</a> value.
</p><p>
charset_table is the main workhorse of the Sphinx tokenizing process,
ie. the process of extracting keywords from document text or query text.
It controls what characters are accepted as valid and what are not,
and how the accepted characters should be transformed (eg. should
the case be removed or not).
</p><p>
You can think of charset_table as a big table that has a mapping
for each and every one of the 100K+ characters in Unicode (or as a small
256-character table if you're using SBCS). By default, every character
maps to 0, which means that it does not occur within keywords and
should be treated as a separator. Once mentioned in the table, the
character is mapped to some other character (most frequently,
either to itself or to a lowercase letter), and is treated
as a valid keyword part.
</p><p>
The expected value format is a comma-separated list of mappings.
The two simplest mappings declare a character as valid, and map
a single character to another single character, respectively.
But specifying the whole table in such form would result 
in bloated and barely manageable specifications. So there are
several syntax shortcuts that let you map ranges of characters
at once. The complete list is as follows:
</p><div class="variablelist"><dl><dt><span class="term">A-&gt;a</span></dt><dd>Single char mapping, declares source char 'A' as allowed
		to occur within keywords and maps it to destination char 'a'
		(but does <span class="emphasis"><em>not</em></span> declare 'a' as allowed).
	</dd><dt><span class="term">A..Z-&gt;a..z</span></dt><dd>Range mapping, declares all chars in source range
		as allowed and maps them to the destination range. Does <span class="emphasis"><em>not</em></span>
		declare destination range as allowed. Also checks ranges' lengths
		(the lengths must be equal).
	</dd><dt><span class="term">a</span></dt><dd>Stray char mapping, declares a character as allowed
		and maps it to itself. Equivalent to a-&gt;a single char mapping.
	</dd><dt><span class="term">a..z</span></dt><dd>Stray range mapping, declares all characters in range
		as allowed and maps them to themselves. Equivalent to
		a..z-&gt;a..z range mapping.
	</dd><dt><span class="term">A..Z/2</span></dt><dd>Checkerboard range map. Maps every pair of chars
		to the second char. More formally, declares odd characters
		in range as allowed and maps them to the even ones; also
		declares even characters as allowed and maps them to themselves.
		For instance, A..Z/2 is equivalent to A-&gt;B, B-&gt;B, C-&gt;D, D-&gt;D,
		..., Y-&gt;Z, Z-&gt;Z. This mapping shortcut is helpful for
		a number of Unicode blocks where uppercase and lowercase
		letters go in such interleaved order instead of contiguous
		chunks.
	</dd></dl></div><p>
</p><p>
Control characters with codes from 0 to 31 are always treated as separators.
Characters with codes 32 to 127, ie. 7-bit ASCII characters, can be used
in the mappings as is. To avoid configuration file encoding issues,
8-bit ASCII characters and Unicode characters must be specified in U+xxx form,
where 'xxx' is hexadecimal codepoint number. This form can also be used
for 7-bit ASCII characters to encode special ones: eg. use U+20 to
encode space, U+2E to encode dot, U+2C to encode comma.
</p><h5><a name="id361012"></a>Example:</h5><pre class="programlisting">
# 'sbcs' defaults for English and Russian
charset_table = 0..9, A..Z-&gt;a..z, _, a..z, \
	U+A8-&gt;U+B8, U+B8, U+C0..U+DF-&gt;U+E0..U+FF, U+E0..U+FF

# 'utf-8' defaults for English and Russian
charset_table = 0..9, A..Z-&gt;a..z, _, a..z, \
	U+410..U+42F-&gt;U+430..U+44F, U+430..U+44F
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-ignore-chars"></a>8.2.13.&nbsp;ignore_chars</h4></div></div></div><p>
Ignored characters list.
Optional, default is empty.
</p><p>
Useful in the cases when some characters, such as soft hyphenation mark (U+00AD),
should be not just treated as separators but rather fully ignored.
For example, if '-' is simply not in the charset_table,
"abc-def" text will be indexed as "abc" and "def" keywords.
On the contrary, if '-' is added to ignore_chars list, the same
text will be indexed as a single "abcdef" keyword.
</p><p>
The syntax is the same as for <a href="#conf-charset-table" title="8.2.12.&nbsp;charset_table">charset_table</a>,
but it's only allowed to declare characters, and not allowed to map them. Also,
the ignored characters must not be present in charset_table.
</p><h5><a name="id361071"></a>Example:</h5><pre class="programlisting">
ignore_chars = U+AD
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-min-prefix-len"></a>8.2.14.&nbsp;min_prefix_len</h4></div></div></div><p>
Minimum word prefix length to index.
Optional, default is 0 (do not index prefixes).
</p><p>
Prefix indexing allows you to implement wildcard searching by 'wordstart*' wildcards
(refer to <a href="#conf-enable-star" title="8.2.18.&nbsp;enable_star">enable_star</a> option for details on wildcard syntax).
When minimum prefix length is set to a positive number, indexer will index
all the possible keyword prefixes (ie. word beginnings) in addition to the keywords
themselves. Too short prefixes (below the minimum allowed length) will not
be indexed.
</p><p>
For instance, indexing a keyword "example" with min_prefix_len=3
will result in indexing "exa", "exam", "examp", "exampl" prefixes along
with the word itself. Searches against such an index for "exam" will match
documents that contain the word "example", even if they do not contain "exam"
by itself. However, indexing prefixes will make the index grow significantly
(because of many more indexed keywords), and will degrade both indexing
and searching times.
</p><p>
There's no automatic way to rank perfect word matches higher
in a prefix index, but there are a number of tricks to achieve that.
First, you can set up two indexes, one with prefix indexing and one
without it, search through both, and use <a href="#api-func-setindexweights" title="5.3.6.&nbsp;SetIndexWeights">SetIndexWeights()</a>
call to combine weights. Second, you can enable star-syntax and rewrite
your extended-mode queries:
</p><pre class="programlisting">
# in sphinx.conf
enable_star = 1

// in query
$cl-&gt;Query ( "( keyword | keyword* ) other keywords" );
</pre><p>
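The first trick could look like this in PHP (a sketch; the two index names are hypothetical):
</p><pre class="programlisting">
// weight the exact-word index above the prefix-enabled one
$cl-&gt;SetIndexWeights ( array ( "items_exact"=&gt;2, "items_prefix"=&gt;1 ) );
$res = $cl-&gt;Query ( "keyword", "items_exact items_prefix" );
</pre><p>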
</p><h5><a name="id361167"></a>Example:</h5><pre class="programlisting">
min_prefix_len = 3
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-min-infix-len"></a>8.2.15.&nbsp;min_infix_len</h4></div></div></div><p>
Minimum infix length to index.
Optional, default is 0 (do not index infixes).
</p><p>
Infix indexing allows you to implement wildcard searching by 'start*', '*end', and '*middle*' wildcards
(refer to <a href="#conf-enable-star" title="8.2.18.&nbsp;enable_star">enable_star</a> option for details on wildcard syntax).
When minimum infix length is set to a positive number, indexer will index all the possible keyword infixes
(ie. substrings) in addition to the keywords themselves. Too short infixes
(below the minimum allowed length) will not be indexed.
</p><p>
For instance, indexing a keyword "test" with min_infix_len=2
will result in indexing "te", "es", "st", "tes", "est" infixes along
with the word itself. Searches against such an index for "es" will match
documents that contain the word "test", even if they do not contain "es"
by itself. However, indexing infixes will make the index grow significantly
(because of many more indexed keywords), and will degrade both indexing
and searching times.
</p><p>
There's no automatic way to rank perfect word matches higher
in an infix index, but the same tricks as with <a href="#conf-min-prefix-len" title="8.2.14.&nbsp;min_prefix_len">prefix indexes</a>
can be applied.
</p><h5><a name="id361237"></a>Example:</h5><pre class="programlisting">
min_infix_len = 3
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-prefix-fields"></a>8.2.16.&nbsp;prefix_fields</h4></div></div></div><p>
The list of full-text fields to limit prefix indexing to.
Optional, default is empty (index all fields in prefix mode).
</p><p>
Because prefix indexing impacts both indexing and searching performance,
it might be desired to limit it to specific full-text fields only:
for instance, to provide prefix searching through URLs, but not through
page contents. prefix_fields specifies what fields will be prefix-indexed;
all other fields will be indexed in normal mode. The value format is a
comma-separated list of field names.
</p><h5><a name="id361280"></a>Example:</h5><pre class="programlisting">
prefix_fields = url, domain
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-infix-fields"></a>8.2.17.&nbsp;infix_fields</h4></div></div></div><p>
The list of full-text fields to limit infix indexing to.
Optional, default is empty (index all fields in infix mode).
</p><p>
Similar to <a href="#conf-prefix-fields" title="8.2.16.&nbsp;prefix_fields">prefix_fields</a>,
but lets you limit infix-indexing to given fields.
</p><h5><a name="id361318"></a>Example:</h5><pre class="programlisting">
infix_fields = url, domain
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-enable-star"></a>8.2.18.&nbsp;enable_star</h4></div></div></div><p>
Enables star-syntax (or wildcard syntax) when searching through prefix/infix indexes.
Optional, default is 0 (do not use wildcard syntax), for compatibility with 0.9.7.
Known values are 0 and 1.
</p><p>
This feature enables "star-syntax", or wildcard syntax, when searching
through indexes which were created with prefix or infix indexing enabled.
It only affects searching; so it can be changed without reindexing
by simply restarting <code class="filename">searchd</code>.
</p><p>
The default value is 0, which means to disable star-syntax
and treat all keywords as prefixes or infixes respectively,
depending on indexing-time <a href="#conf-min-prefix-len" title="8.2.14.&nbsp;min_prefix_len">min_prefix_len</a>
and <a href="#conf-min-infix-len" title="8.2.15.&nbsp;min_infix_len">min_infix_len settings</a>.
The value of 1 means that star ('*') can be used at the start
and/or the end of the keyword. The star will match zero or more characters.
</p><p>
For example, assume that the index was built with infixes and
that enable_star is 1. Searching should work as follows:
</p><div class="orderedlist"><ol type="1"><li>"abcdef" query will match only those documents that contain the exact "abcdef" word in them.</li><li>"abc*" query will match those documents that contain
any words starting with "abc" (including the documents which
contain the exact "abc" word only);</li><li>"*cde*" query will match those documents that contain
any words which have "cde" characters in any part of the word
(including the documents which contain the exact "cde" word only).</li><li>"*def" query will match those documents that contain
any words ending with "def" (including the documents that
contain the exact "def" word only).</li></ol></div><p>
</p><h5><a name="id361427"></a>Example:</h5><pre class="programlisting">
enable_star = 1
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-ngram-len"></a>8.2.19.&nbsp;ngram_len</h4></div></div></div><p>
N-gram lengths for N-gram indexing.
Optional, default is 0 (disable n-gram indexing).
Known values are 0 and 1 (other lengths to be implemented).
</p><p>
N-grams provide basic CJK (Chinese, Japanese, Korean) support for
unsegmented texts. The issue with CJK searching is that there could be no
clear separators between the words. Ideally, the texts would be filtered
through a special program called segmenter that would insert separators
in proper locations. However, segmenters are slow and error prone,
and it's common to index contiguous groups of N characters, or n-grams,
instead.
</p><p>
When this feature is enabled, streams of CJK characters are indexed
as N-grams. For example, if incoming text is "ABCDEF" (where A to F represent
some CJK characters) and length is 1, it will be indexed as if
it was "A B C D E F". (With length equal to 2, it would produce "AB BC CD DE EF";
but only 1 is supported at the moment.) Only those characters that are
listed in <a href="#conf-ngram-chars" title="8.2.20.&nbsp;ngram_chars">ngram_chars</a> table
will be split this way; other ones will not be affected.
</p><p>
Note that if search query is segmented, ie. there are separators between
individual words, then wrapping the words in quotes and using extended mode
will result in proper matches being found even if the text was <span class="bold"><strong>not</strong></span>
segmented. For instance, assume that the original query is BC&nbsp;DEF.
After wrapping in quotes on the application side, it should look
like "BC"&nbsp;"DEF" (<span class="emphasis"><em>with</em></span> quotes). This query
will be passed to Sphinx and internally split into 1-grams too,
resulting in "B&nbsp;C"&nbsp;"D&nbsp;E&nbsp;F" query, still with
quotes that are the phrase matching operator. And it will match
the text even though there were no separators in the text.
</p><p>
Even if the search query is not segmented, Sphinx should still produce
good results, thanks to phrase based ranking: it will pull closer phrase
matches (which in case of N-gram CJK words can mean closer multi-character
word matches) to the top.
</p><h5><a name="id361530"></a>Example:</h5><pre class="programlisting">
ngram_len = 1
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-ngram-chars"></a>8.2.20.&nbsp;ngram_chars</h4></div></div></div><p>
N-gram characters list.
Optional, default is empty.
</p><p>
To be used in conjunction with <a href="#conf-ngram-len" title="8.2.19.&nbsp;ngram_len">ngram_len</a>,
this list defines characters, sequences of which are subject to N-gram extraction.
Words comprised of other characters will not be affected by N-gram indexing
feature. The value format is identical to <a href="#conf-charset-table" title="8.2.12.&nbsp;charset_table">charset_table</a>.
</p><h5><a name="id361578"></a>Example:</h5><pre class="programlisting">
ngram_chars = U+3000..U+2FA1F
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-phrase-boundary"></a>8.2.21.&nbsp;phrase_boundary</h4></div></div></div><p>
Phrase boundary characters list.
Optional, default is empty.
</p><p>
This list controls what characters will be treated as phrase boundaries,
in order to adjust word positions and enable phrase-level search
emulation through proximity search. The syntax is similar
to <a href="#conf-charset-table" title="8.2.12.&nbsp;charset_table">charset_table</a>.
Mappings are not allowed and the boundary characters must not
overlap with anything else.
</p><p>
On phrase boundary, additional word position increment (specified by
<a href="#conf-phrase-boundary-step" title="8.2.22.&nbsp;phrase_boundary_step">phrase_boundary_step</a>)
will be added to current word position. This enables phrase-level
searching through proximity queries: words in different phrases
will be guaranteed to be more than phrase_boundary_step distance
away from each other; so proximity search within that distance
will be equivalent to phrase-level search.
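For example, with phrase_boundary_step = 100, a proximity query within a smaller distance will stay confined to a single phrase (a sketch, assuming the extended matching mode):
</p><pre class="programlisting">
$cl-&gt;SetMatchMode ( SPH_MATCH_EXTENDED );
$res = $cl-&gt;Query ( '"first second"~50' );
</pre><p>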
</p><p>
Phrase boundary condition will be raised if and only if such a character
is followed by a separator; this is to avoid abbreviations such as
S.T.A.L.K.E.R or URLs being treated as several phrases.
</p><h5><a name="id361643"></a>Example:</h5><pre class="programlisting">
phrase_boundary = ., ?, !, U+2026 # horizontal ellipsis
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-phrase-boundary-step"></a>8.2.22.&nbsp;phrase_boundary_step</h4></div></div></div><p>
Phrase boundary word position increment.
Optional, default is 0.
</p><p>
On phrase boundary, current word position will be additionally incremented
by this number. See <a href="#conf-phrase-boundary" title="8.2.21.&nbsp;phrase_boundary">phrase_boundary</a> for details.
</p><h5><a name="id361680"></a>Example:</h5><pre class="programlisting">
phrase_boundary_step = 100
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-html-strip"></a>8.2.23.&nbsp;html_strip</h4></div></div></div><p>
Whether to strip HTML markup from incoming full-text data.
Optional, default is 0.
Known values are 0 (disable stripping) and 1 (enable stripping).
</p><p>
Stripping does not work with <code class="option">xmlpipe</code> source type
(it's suggested to upgrade to xmlpipe2 anyway). It should work with
properly formed HTML and XHTML, but, just as most browsers, may produce
unexpected results on malformed input (such as HTML with stray &lt;'s
or unclosed &gt;'s).
</p><p>
Only the tags themselves, and also HTML comments, are stripped.
To strip the contents of the tags too (eg. to strip embedded scripts),
see <a href="#conf-html-remove-elements" title="8.2.25.&nbsp;html_remove_elements">html_remove_elements</a> option.
There are no restrictions on tag names; ie. everything
that looks like a valid tag start, or end, or a comment
will be stripped.
</p><h5><a name="id361736"></a>Example:</h5><pre class="programlisting">
html_strip = 1
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-html-index-attrs"></a>8.2.24.&nbsp;html_index_attrs</h4></div></div></div><p>
A list of markup attributes to index when stripping HTML.
Optional, default is empty (do not index markup attributes).
</p><p>
Specifies HTML markup attributes whose contents should be retained and indexed
even though other HTML markup is stripped. The format is per-tag enumeration of
indexable attributes, as shown in the example below.
</p><h5><a name="id361765"></a>Example:</h5><pre class="programlisting">
html_index_attrs = img=alt,title; a=title;
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-html-remove-elements"></a>8.2.25.&nbsp;html_remove_elements</h4></div></div></div><p>
A list of HTML elements for which to strip contents along with the elements themselves.
Optional, default is empty string (do not strip contents of any elements).
</p><p>
This feature lets you strip element contents, ie. everything that
is between the opening and the closing tags. It is useful to remove
embedded scripts, CSS, etc. Short tag form for empty elements
(ie. &lt;br /&gt;) is properly supported; ie. the text that
follows such tag will <span class="bold"><strong>not</strong></span> be removed.
</p><p>
The value is a comma-separated list of element (tag) names whose
contents should be removed. Tag names are case insensitive.
</p><h5><a name="id361817"></a>Example:</h5><pre class="programlisting">
html_remove_elements = style, script
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-local"></a>8.2.26.&nbsp;local</h4></div></div></div><p>
Local index declaration in the <a href="#distributed" title="4.7.&nbsp;Distributed searching">distributed index</a>.
Multi-value, optional, default is empty.
</p><p>
This setting is used to declare local indexes that will be searched when
given distributed index is searched. All local indexes will be searched
<span class="bold"><strong>sequentially</strong></span>, utilizing only 1 CPU or core; to parallelize processing,
you can configure <code class="filename">searchd</code> to query itself (refer to
<a href="#conf-agent" title="8.2.27.&nbsp;agent">Section&nbsp;8.2.27, &#8220;agent&#8221;</a> for the details). There might be several local
indexes declared per distributed index. Any local index can be mentioned
several times in other distributed indexes.
</p><h5><a name="id361880"></a>Example:</h5><pre class="programlisting">
local = chunk1
local = chunk2
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-agent"></a>8.2.27.&nbsp;agent</h4></div></div></div><p>
Remote agents and indexes declaration in the <a href="#distributed" title="4.7.&nbsp;Distributed searching">distributed index</a>.
Multi-value, optional, default is empty.
</p><p>
This setting is used to declare remote agents that will be searched
when given distributed index is searched. The agents can be thought of
as network pointers that specify host, port, and index names. In the basic
case agents would correspond to remote physical machines. More formally,
that is not always the case: you can point several agents to the
same remote machine; or you can even point agents to the very same
single instance of <code class="filename">searchd</code> (in order to utilize
many CPUs or cores).
</p><p>
The value format is as follows:
</p><pre class="programlisting">
agent = hostname:port:remote-indexes-list
</pre><p>
where 'hostname' is the remote host name; 'port' is the remote TCP port;
and 'remote-indexes-list' is a comma-separated list of remote index
names.
</p><p>
All agents will be searched in parallel. However, all indexes
specified for a given agent will be searched sequentially
in this agent. This lets you fine-tune the configuration
to the hardware. For instance, if two remote indexes are stored
on the same physical HDD, it's better to configure one agent
with several sequentially searched indexes to avoid HDD stepping.
If they are stored on different HDDs, having two agents will be
advantageous, because the work will be fully parallelized.
The same applies to CPUs; though CPU performance impact caused
by two processes stepping on each other is somewhat smaller
and can frequently be ignored altogether.
</p><p>
On machines with many CPUs and/or HDDs, agents can be pointed
to the same machine to utilize all of the hardware in parallel
and reduce query latency. There is no need to set up several
<code class="filename">searchd</code> instances for that; it's legal
to configure the instance to contact itself. Here's an example
setup, intended for a 4-CPU machine, that will use up to
4 CPUs in parallel to process each query:
</p><pre class="programlisting">
index dist
{
	type = distributed
	local = chunk1
	agent = localhost:3312:chunk2
	agent = localhost:3312:chunk3
	agent = localhost:3312:chunk4
}
</pre><p>
Note how one of the chunks is searched locally and the same instance
of searchd queries itself to launch searches through three other ones
in parallel.
</p><h5><a name="id362005"></a>Example:</h5><pre class="programlisting">
agent = localhost:3312:chunk2 # contact itself
agent = searchbox2:3312:chunk3,chunk4 # search remote indexes
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-agent-connect-timeout"></a>8.2.28.&nbsp;agent_connect_timeout</h4></div></div></div><p>
Remote agent connection timeout, in milliseconds.
Optional, default is 1000 (ie. 1 second).
</p><p>
When connecting to remote agents, <code class="filename">searchd</code>
will wait at most this much time for the connect() call to complete
successfully. If the timeout is reached but connect() does not complete,
and <a href="#api-func-setretries" title="5.1.4.&nbsp;SetRetries">retries</a> are enabled,
retry will be initiated.
</p><h5><a name="id362054"></a>Example:</h5><pre class="programlisting">
agent_connect_timeout = 300
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-agent-query-timeout"></a>8.2.29.&nbsp;agent_query_timeout</h4></div></div></div><p>
Remote agent query timeout, in milliseconds.
Optional, default is 3000 (ie. 3 seconds).
</p><p>
After connection, <code class="filename">searchd</code> will wait at most this
much time for remote queries to complete. This timeout is fully separate
from connection timeout; so the maximum possible delay caused by
a remote agent equals the sum of <code class="code">agent_connect_timeout</code> and
<code class="code">agent_query_timeout</code>. Queries will <span class="bold"><strong>not</strong></span> be retried
if this timeout is reached; a warning will be produced instead.
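For instance, with the defaults of agent_connect_timeout = 1000 and agent_query_timeout = 3000, a single remote agent can delay the query by at most 1000 + 3000 = 4000 ms, not counting retries.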
</p><h5><a name="id362113"></a>Example:</h5><pre class="programlisting">
agent_query_timeout = 10000 # our query can be long, allow up to 10 sec
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-preopen"></a>8.2.30.&nbsp;preopen</h4></div></div></div><p>
Whether to pre-open all index files, or open them per each query.
Optional, default is 0 (do not preopen).
</p><p>
This option tells <code class="filename">searchd</code> that it should pre-open
all index files on startup (or rotation) and keep them open while it runs.
Currently, the default mode is <span class="bold"><strong>not</strong></span> to pre-open the files (this may
change in the future). Preopened indexes take a few (currently 2) file
descriptors per index. However, they save on per-query <code class="code">open()</code> calls;
and also they are invulnerable to subtle race conditions that may happen during
index rotation under high load. On the other hand, when serving many indexes
(100s to 1000s), it still might be desired to open them on a per-query basis
in order to save file descriptors.
</p><h5><a name="id362173"></a>Example:</h5><pre class="programlisting">
preopen = 1
</pre></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="confgroup-indexer"></a>8.3.&nbsp;<code class="filename">indexer</code> program configuration options</h3></div></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-mem-limit"></a>8.3.1.&nbsp;mem_limit</h4></div></div></div><p>
Indexing RAM usage limit.
Optional, default is 32M.
</p><p>
Enforced memory usage limit that the <code class="filename">indexer</code>
will not go above. Can be specified in bytes, or kilobytes
(using K postfix), or megabytes (using M postfix); see the example.
This limit will be automatically raised if set to an extremely low value,
causing I/O buffers to be less than 8 KB; the exact lower bound
for that depends on the indexed data size. If the buffers are
less than 256 KB, a warning will be produced.
</p><p>
Maximum possible limit is 2047M. Too low values can hurt
indexing speed, but 256M to 1024M should be enough for most
if not all datasets. Setting this value too high can cause
SQL server timeouts. During the document collection phase,
there will be periods when the memory buffer is partially
sorted and no communication with the database is performed;
and the database server can timeout. You can resolve that
either by raising timeouts on SQL server side or by lowering
<code class="code">mem_limit</code>.
</p><h5><a name="id362250"></a>Example:</h5><pre class="programlisting">
mem_limit = 256M
# mem_limit = 262144K # same, but in KB
# mem_limit = 268435456 # same, but in bytes
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-max-iops"></a>8.3.2.&nbsp;max_iops</h4></div></div></div><p>
Maximum I/O operations per second, for I/O throttling.
Optional, default is 0 (unlimited).
</p><p>
I/O throttling related option.
It limits the maximum count of I/O operations (reads or writes) in any given second.
A value of 0 means that no limit is imposed.
</p><p>
<code class="filename">indexer</code> can cause bursts of intensive disk I/O during
indexing, and it might be desired to limit its disk activity (and keep something
for other programs running on the same machine, such as <code class="filename">searchd</code>).
I/O throttling helps to do that. It works by enforcing a minimum guaranteed
delay between subsequent disk I/O operations performed by <code class="filename">indexer</code>. 
Modern SATA HDDs are able to perform up to 70-100+ I/O operations per second
(that's mostly limited by disk heads seek time). Limiting indexing I/O
to a fraction of that can help reduce search performance degradation
caused by indexing.
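For instance, max_iops = 40 translates to a minimum enforced delay of 1000 ms / 40 = 25 ms between consecutive I/O operations.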
</p><h5><a name="id362310"></a>Example:</h5><pre class="programlisting">
max_iops = 40
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-max-iosize"></a>8.3.3.&nbsp;max_iosize</h4></div></div></div><p>
Maximum allowed I/O operation size, in bytes, for I/O throttling.
Optional, default is 0 (unlimited).
</p><p>
I/O throttling related option. It limits the maximum file I/O operation
(read or write) size for all operations performed by <code class="filename">indexer</code>.
A value of 0 means that no limit is imposed.
Reads or writes that are bigger than the limit
will be split into several smaller operations, and counted as several operations
by the <a href="#conf-max-iops" title="8.3.2.&nbsp;max_iops">max_iops</a> setting. At the time of this
writing, all I/O calls should be under 256 KB (default internal buffer size)
anyway, so <code class="code">max_iosize</code> values higher than 256 KB must not affect anything.
</p><h5><a name="id362363"></a>Example:</h5><pre class="programlisting">
max_iosize = 1048576
</pre></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="confgroup-searchd"></a>8.4.&nbsp;<code class="filename">searchd</code> program configuration options</h3></div></div></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-address"></a>8.4.1.&nbsp;address</h4></div></div></div><p>
Interface IP address to bind on.
Optional, default is 0.0.0.0 (ie. listen on all interfaces).
</p><p>
<code class="code">address</code> setting lets you specify which network interface
<code class="filename">searchd</code> will bind to, listen on, and accept incoming
network connections on. The default value is 0.0.0.0 which means to listen
on all interfaces. At this time, you can <span class="bold"><strong>not</strong></span> specify multiple interfaces.
</p><h5><a name="id362424"></a>Example:</h5><pre class="programlisting">
address = 192.168.0.1
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-port"></a>8.4.2.&nbsp;port</h4></div></div></div><p>
<code class="filename">searchd</code> TCP port number.
Mandatory, default is 3312.
</p><h5><a name="id362451"></a>Example:</h5><pre class="programlisting">
port = 3312
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-log"></a>8.4.3.&nbsp;log</h4></div></div></div><p>
Log file name.
Optional, default is 'searchd.log'.
All <code class="filename">searchd</code> run time events will be logged in this file.
</p><h5><a name="id362480"></a>Example:</h5><pre class="programlisting">
log = /var/log/searchd.log
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-query-log"></a>8.4.4.&nbsp;query_log</h4></div></div></div><p>
Query log file name.
Optional, default is empty (do not log queries).
All search queries will be logged in this file. The format is described in <a href="#query-log-format" title="4.8.&nbsp;searchd query log format">Section&nbsp;4.8, &#8220;<code class="filename">searchd</code> query log format&#8221;</a>.
</p><h5><a name="id362514"></a>Example:</h5><pre class="programlisting">
query_log = /var/log/query.log
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-read-timeout"></a>8.4.5.&nbsp;read_timeout</h4></div></div></div><p>
Network client request read timeout, in seconds.
Optional, default is 5 seconds.
<code class="filename">searchd</code> will forcibly close the client connections which fail to send a query within this timeout.
</p><h5><a name="id362546"></a>Example:</h5><pre class="programlisting">
read_timeout = 1
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-max-children"></a>8.4.6.&nbsp;max_children</h4></div></div></div><p>
Maximum number of children to fork (or in other words, concurrent searches to run in parallel).
Optional, default is 0 (unlimited).
</p><p>
Useful to control server load. There will be no more than this many concurrent
searches running at any time. When the limit is reached, additional incoming
clients are dismissed with a temporary failure (SEARCHD_RETRY) status code
and a message stating that the server is maxed out.
</p><h5><a name="id362581"></a>Example:</h5><pre class="programlisting">
max_children = 10
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-pid-file"></a>8.4.7.&nbsp;pid_file</h4></div></div></div><p>
<code class="filename">searchd</code> process ID file name.
Mandatory.
</p><p>
PID file will be re-created (and locked) on startup. It will contain
the head daemon process ID while the daemon is running, and it will be unlinked
on daemon shutdown. It's mandatory because Sphinx uses it internally
for a number of things: to check whether there already is a running instance
of <code class="filename">searchd</code>; to stop <code class="filename">searchd</code>;
to notify it that it should rotate the indexes. Can also be used for
different external automation scripts.
</p><h5><a name="id362632"></a>Example:</h5><pre class="programlisting">
pid_file = /var/run/searchd.pid
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-max-matches"></a>8.4.8.&nbsp;max_matches</h4></div></div></div><p>
Maximum number of matches that the daemon keeps in RAM for each index and can return to the client.
Optional, default is 1000.
</p><p>
Introduced in order to control and limit RAM usage, <code class="code">max_matches</code>
setting defines how many matches will be kept in RAM while searching each index.
Every match found will still be <span class="emphasis"><em>processed</em></span>; but only
the best N of them will be kept in memory and returned to the client in the end.
Assume that the index contains 2,000,000 matches for the query. You rarely
(if ever) need to retrieve <span class="emphasis"><em>all</em></span> of them. Rather, you need
to scan all of them, but only choose the "best" at most, say, 500, by some criteria
(ie. sorted by relevance, or price, or anything else), and display those
500 matches to the end user in pages of 20 to 100 matches. And tracking
only the best 500 matches is much more RAM and CPU efficient than keeping
all 2,000,000 matches, sorting them, and then discarding everything but
the first 20 needed to display the search results page. <code class="code">max_matches</code>
controls N in that "best N" amount.
</p><p>
This parameter noticeably affects per-query RAM and CPU usage.
Values of 1,000 to 10,000 are generally fine, but higher limits must be
used with care. Recklessly raising <code class="code">max_matches</code> to 1,000,000
means that <code class="filename">searchd</code> will have to allocate and
initialize 1-million-entry matches buffer for <span class="emphasis"><em>every</em></span>
query. That will obviously increase per-query RAM usage, and in some cases
can also noticeably impact performance.
</p><p>
<span class="bold"><strong>CAVEAT EMPTOR!</strong></span> Note that there also is <span class="bold"><strong>another</strong></span> place where this limit
is enforced. <code class="code">max_matches</code> can be decreased on the fly
through the <a href="#api-func-setlimits" title="5.2.1.&nbsp;SetLimits">corresponding API call</a>,
and the default value in the API is <span class="bold"><strong>also</strong></span> set to 1,000. So in order
to retrieve more than 1,000 matches to your application, you will have
to change the configuration file, restart searchd, and set proper limit
in <a href="#api-func-setlimits" title="5.2.1.&nbsp;SetLimits">SetLimits()</a> call.
Also note that you can not set the value in the API higher than the value
in the .conf file. This is prohibited in order to have some protection
against malicious and/or malformed requests.
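</p><p>
A sketch of both halves of that setup, assuming the bundled Python API and its <code class="code">SetLimits(offset, limit, max_matches)</code> signature (the index name <code class="code">myindex</code> is a placeholder):
</p><pre class="programlisting">
# sphinx.conf must allow at least this much first:
#   searchd { ... max_matches = 5000 ... }

from sphinxapi import SphinxClient

cl = SphinxClient()
cl.SetServer('localhost', 3312)
# Keep up to 5,000 best matches in RAM, return 2,000 starting at offset 0.
# Requesting more than the .conf value would be rejected by searchd.
cl.SetLimits(0, 2000, 5000)
result = cl.Query('test query', 'myindex')
</pre><p>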
</p><h5><a name="id362780"></a>Example:</h5><pre class="programlisting">
max_matches = 10000
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-seamless-rotate"></a>8.4.9.&nbsp;seamless_rotate</h4></div></div></div><p>
Prevents <code class="filename">searchd</code> stalls while rotating indexes with huge amounts of data to precache.
Optional, default is 1 (enable seamless rotation).
</p><p>
Indexes may contain some data that needs to be precached in RAM.
At the moment, <code class="filename">.spa</code>, <code class="filename">.spi</code> and
<code class="filename">.spm</code> files are fully precached (they contain attribute data,
keyword dictionary data, and MVA data, respectively.)
Without seamless rotate, rotating an index tries to use as little RAM
as possible and works as follows:
</p><div class="orderedlist"><ol type="1"><li>new queries are temporarily rejected (with "retry" error code);</li><li><code class="filename">searchd</code> waits for all currently running queries to finish;</li><li>old index is deallocated and its files are renamed;</li><li>new index files are renamed and required RAM is allocated;</li><li>new index attribute and dictionary data is preloaded to RAM;</li><li><code class="filename">searchd</code> resumes serving queries from new index.</li></ol></div><p>
</p><p>
However, if there's a lot of attribute or dictionary data, then the preloading step
could take noticeable time: up to several minutes when preloading 1-5+ GB files.
</p><p>
With seamless rotate enabled, rotation works as follows:
</p><div class="orderedlist"><ol type="1"><li>new index RAM storage is allocated;</li><li>new index attribute and dictionary data is asynchronously preloaded to RAM;</li><li>on success, old index is deallocated and both indexes' files are renamed;</li><li>on failure, new index is deallocated;</li><li>at any given moment, queries are served either from old or new index copy.</li></ol></div><p>
</p><p>
Seamless rotate comes at the cost of higher <span class="bold"><strong>peak</strong></span>
memory usage during the rotation (because both old and new copies of
<code class="filename">.spa/.spi/.spm</code> data need to be in RAM while
preloading the new copy). For example, if these files total 2 GB, peak usage
during rotation will briefly approach 4 GB. Average usage stays the same.
</p><h5><a name="id362931"></a>Example:</h5><pre class="programlisting">
seamless_rotate = 1
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-preopen-indexes"></a>8.4.10.&nbsp;preopen_indexes</h4></div></div></div><p>
Whether to forcibly preopen all indexes on startup.
Optional, default is 0 (do not preopen).
When enabled, this enforces <a href="#conf-preopen" title="8.2.30.&nbsp;preopen">preopen</a> on all served indexes,
so you do not have to specify it manually in every index.
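</p><p>
For illustration, the following two configuration fragments have the same effect (<code class="code">myindex</code> is a placeholder, and both sections are abbreviated):
</p><pre class="programlisting">
# either set it once for all served indexes:
searchd
{
    # ...
    preopen_indexes = 1
}

# or repeat it in every index section:
index myindex
{
    # ... source, path, etc ...
    preopen = 1
}
</pre><p>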
</p><h5><a name="id362965"></a>Example:</h5><pre class="programlisting">
preopen_indexes = 1
</pre></div><div class="sect3" lang="en"><div class="titlepage"><div><div><h4 class="title"><a name="conf-unlink-old"></a>8.4.11.&nbsp;unlink_old</h4></div></div></div><p>
Whether to unlink .old index copies on successful rotation.
Optional, default is 1 (do unlink).
</p><h5><a name="id362989"></a>Example:</h5><pre class="programlisting">
unlink_old = 0
</pre></div></div></div><div class="appendix" lang="en"><h2 class="title" style="clear: both"><a name="changelog"></a>A.&nbsp;Sphinx revision history</h2><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="ver_0_9_7"></a>A.1.&nbsp;Version 0.9.7, 02 apr 2007</h3></div></div></div><div class="itemizedlist"><ul type="disc"><li>added support for <code class="option">sql_str2ordinal_column</code></li><li>added support for up to 5 sort-by attrs (in extended sorting mode)</li><li>added support for separate groups sorting clause (in group-by mode)</li><li>added support for on-the-fly attribute updates (PRE-ALPHA; will change heavily; use for preliminary testing ONLY)</li><li>added support for zero/NULL attributes</li><li>added support for 0.9.7 features to SphinxSE</li><li>added support for n-grams (alpha, 1-grams only for now)</li><li>added support for warnings reported to client</li><li>added support for exclude-filters</li><li>added support for prefix and infix indexing (see <code class="option">max_prefix_len</code>, <code class="option">max_infix_len</code>)</li><li>added <code class="option">@*</code> syntax to reset current field to query language</li><li>added removal of duplicate entries in query index order</li><li>added PHP API workarounds for PHP signed/unsigned braindamage</li><li>added locks to avoid two concurrent indexers working on same index</li><li>added check for existing attributes vs. <code class="option">docinfo=none</code> case</li><li>improved groupby code a lot (better precision, and up to 25x faster in extreme cases)</li><li>improved error handling and reporting</li><li>improved handling of broken indexes (reports error instead of hanging/crashing)</li><li>improved <code class="option">mmap()</code> limits for attributes and wordlists (now able to map over 4 GB on x64 and over 2 GB on x32 where possible)</li><li>improved <code class="option">malloc()</code> pressure in head daemon (search time should not degrade with time any more)</li><li>improved <code class="filename">test.php</code> command line options</li><li>improved error reporting (distributed query, broken index etc issues now reported to client)</li><li>changed default network packet size to be 8M, added extra checks</li><li>fixed division by zero in BM25 on 1-document collections (in extended matching mode)</li><li>fixed <code class="filename">.spl</code> files getting unlinked</li><li>fixed crash in schema compatibility test</li><li>fixed UTF-8 Russian stemmer</li><li>fixed requested matches count when querying distributed agents</li><li>fixed signed vs. unsigned issues everywhere (ranged queries, CLI search output, and obtaining docid)</li><li>fixed potential crashes vs. negative query offsets</li><li>fixed 0-match docs vs. extended mode vs. stats</li><li>fixed group/timestamp filters being ignored if querying from older clients</li><li>fixed docs to mention <code class="option">pgsql</code> source type</li><li>fixed issues with explicit '&amp;' in extended matching mode</li><li>fixed wrong assertion in SBCS encoder</li><li>fixed crashes with no-attribute indexes after rotate</li></ul></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="ver_0_9_7_rc2"></a>A.2.&nbsp;Version 0.9.7-RC2, 15 dec 2006</h3></div></div></div><div class="itemizedlist"><ul type="disc"><li>added support for extended matching mode (query language)</li><li>added support for extended sorting mode (sorting clauses)</li><li>added support for SBCS excerpts</li><li>added <code class="option">mmap()ing</code> for attributes and wordlist (improves search time, speeds up <code class="option">fork()</code> greatly)</li><li>fixed attribute name handling to be case insensitive</li><li>fixed default compiler options to simplify post-mortem debugging (added <code class="option">-g</code>, removed <code class="option">-fomit-frame-pointer</code>)</li><li>fixed rare memory leak</li><li>fixed "hello hello" queries in "match phrase" mode</li><li>fixed issue with excerpts, texts and overlong queries</li><li>fixed logging of multiple index names (no longer tokenized)</li><li>fixed trailing stopword not flushed from tokenizer</li><li>fixed boolean evaluation</li><li>fixed pidfile being wrongly <code class="option">unlink()ed</code> on <code class="option">bind()</code> failure</li><li>fixed <code class="option">--with-mysql-includes/libs</code> (they conflicted with well-known paths)</li><li>fixes for 64-bit platforms</li></ul></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="ver_0_9_7_rc1"></a>A.3.&nbsp;Version 0.9.7-RC1, 26 oct 2006</h3></div></div></div><div class="itemizedlist"><ul type="disc"><li>added alpha index merging code</li><li>added an option to decrease <code class="option">max_matches</code> per-query</li><li>added an option to specify IP address for searchd to listen on</li><li>added support for unlimited amount of configured sources and indexes</li><li>added support for group-by queries</li><li>added support for /2 range modifier in charset_table</li><li>added support for arbitrary amount of document attributes</li><li>added logging filter count and index name</li><li>added <code class="option">--with-debug</code> option to configure to compile in debug mode</li><li>added <code class="option">-DNDEBUG</code> when compiling in default mode</li><li>improved search time (added doclist size hints, in-memory wordlist cache, and used VLB coding everywhere)</li><li>improved (refactored) SQL driver code (adding new drivers should be very easy now)</li><li>improved excerpts generation</li><li>fixed issue with empty sources and ranged queries</li><li>fixed querying purely remote distributed indexes</li><li>fixed suffix length check in English stemmer in some cases</li><li>fixed UTF-8 decoder for codes over U+20000 (for CJK)</li><li>fixed UTF-8 encoder for 3-byte sequences (for CJK)</li><li>fixed overshort (less than <code class="option">min_word_len</code>) words prepended to next field</li><li>fixed source connection order (indexer does not connect to all sources at once now)</li><li>fixed line numbering in config parser</li><li>fixed some issues with index rotation</li></ul></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a
name="ver_0_9_6"></a>A.4.&nbsp;Version 0.9.6, 24 jul 2006</h3></div></div></div><div class="itemizedlist"><ul type="disc"><li>added support for empty indexes</li><li>added support for multiple sql_query_pre/post/post_index</li><li>fixed timestamp ranges filter in "match any" mode</li><li>fixed configure issues with --without-mysql and --with-pgsql options</li><li>fixed building on Solaris 9</li></ul></div></div><div class="sect2" lang="en"><div class="titlepage"><div><div><h3 class="title"><a name="ver_0_9_6_rc1"></a>A.5.&nbsp;Version 0.9.6-RC1, 26 jun 2006</h3></div></div></div><div class="itemizedlist"><ul type="disc"><li>added boolean queries support (experimental, beta version)</li><li>added simple file-based query cache (experimental, beta version)</li><li>added storage engine for MySQL 5.0 and 5.1 (experimental, beta version)</li><li>added GNU style <code class="filename">configure</code> script</li><li>added new searchd protocol (all binary, and should be backwards compatible)</li><li>added distributed searching support to searchd</li><li>added PostgreSQL driver</li><li>added excerpts generation</li><li>added <code class="option">min_word_len</code> option to index</li><li>added <code class="option">max_matches</code> option to searchd, removed hardcoded MAX_MATCHES limit</li><li>added initial documentation, and a working <code class="filename">example.sql</code></li><li>added support for multiple sources per index</li><li>added soundex support</li><li>added group ID ranges support</li><li>added <code class="option">--stdin</code> command-line option to search utility</li><li>added <code class="option">--noprogress</code> option to indexer</li><li>added <code class="option">--index</code> option to search</li><li>fixed UTF-8 decoder (3-byte codepoints did not work)</li><li>fixed PHP API to handle big result sets faster</li><li>fixed config parser to handle empty values properly</li><li>fixed redundant <code class="code">time(NULL)</code> calls in time-segments mode</li></ul></div></div></div></div></body></html>