<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
 <head>
  <meta http-equiv="content-type" content="text/html; charset=UTF-8">
  <title>Predefined Constants</title>

 </head>
 <body><div class="manualnavbar" style="text-align: center;">
 <div class="prev" style="text-align: left; float: left;"><a href="fann.resources.html">Resource Types</a></div>
 <div class="next" style="text-align: right; float: right;"><a href="fann.examples.html">Examples</a></div>
 <div class="up"><a href="book.fann.html">FANN</a></div>
 <div class="home"><a href="index.html">PHP Manual</a></div>
</div><hr /><div id="fann.constants" class="appendix">
 <h1>Predefined Constants</h1>

 <p class="simpara">
The constants below are defined by this extension, and
will only be available when the extension has either
been compiled into PHP or dynamically loaded at runtime.
</p>
 <p class="para">
  <dl>

   <strong class="title">Training algorithms</strong>
   <dt id="constant.fann-train-incremental">
    <span class="term">
     <strong><code>FANN_TRAIN_INCREMENTAL</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Standard backpropagation algorithm, where the weights are updated after each training pattern.
      This means that the weights are updated many times during a single epoch. For this reason, some
      problems will train very fast with this algorithm, while other, more advanced problems will not train very well.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-train-batch">
    <span class="term">
     <strong><code>FANN_TRAIN_BATCH</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Standard backpropagation algorithm, where the weights are updated after the mean square error
      has been calculated for the whole training set. This means that the weights are only updated once
      during an epoch. For this reason, some problems will train more slowly with this algorithm. But since
      the mean square error is calculated more accurately than in incremental training, some problems will
      reach a better solution with this algorithm.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-train-rprop">
    <span class="term">
     <strong><code>FANN_TRAIN_RPROP</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      A more advanced batch training algorithm which achieves good results for many problems. The RPROP
      training algorithm is adaptive and therefore does not use the learning_rate parameter. Some other
      parameters can, however, be set to change the way the RPROP algorithm works, but changing them is
      only recommended for users with insight into how the RPROP training algorithm works. The RPROP
      training algorithm is described by [Riedmiller and Braun, 1993]; the actual learning algorithm used
      here is iRPROP-, a variant of standard RPROP described by [Igel and Husken, 2000].
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-train-quickprop">
    <span class="term">
     <strong><code>FANN_TRAIN_QUICKPROP</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      A more advanced batch training algorithm which achieves good results for many problems.
      The quickprop training algorithm uses the learning_rate parameter along with other, more advanced
      parameters, but changing these advanced parameters is only recommended for users with insight into
      how the quickprop training algorithm works. The quickprop training algorithm is described by
      [Fahlman, 1988].
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-train-sarprop">
    <span class="term">
     <strong><code>FANN_TRAIN_SARPROP</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      An even more advanced training algorithm. Only available in FANN 2.2 and later.
     </span>
    </dd>

   </dt>

  </dl>
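  <p class="simpara">
   As an illustrative sketch (not part of the original manual text), a training algorithm is selected with
   <code>fann_set_training_algorithm()</code> before training. The network topology and the training file
   name below are placeholder assumptions:
  </p>
  <pre class="programlisting">
&lt;?php
// Assumes the FANN extension is loaded.
// Example topology: 2 inputs, one hidden layer of 3 neurons, 1 output.
$ann = fann_create_standard(3, 2, 3, 1);

if ($ann) {
    // iRPROP- (FANN_TRAIN_RPROP) is the default; use batch training instead.
    fann_set_training_algorithm($ann, FANN_TRAIN_BATCH);

    // "xor.data" is a hypothetical training file in FANN's data format.
    fann_train_on_file($ann, "xor.data", 500000, 1000, 0.001);

    fann_destroy($ann);
}
?&gt;
  </pre>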

  <dl>

   <strong class="title">Activation functions</strong>
   <dt id="constant.fann-linear">
    <span class="term">
     <strong><code>FANN_LINEAR</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Linear activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-threshold">
    <span class="term">
     <strong><code>FANN_THRESHOLD</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Threshold activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-threshold-symmetric">
    <span class="term">
     <strong><code>FANN_THRESHOLD_SYMMETRIC</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Symmetric threshold activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-sigmoid">
    <span class="term">
     <strong><code>FANN_SIGMOID</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Sigmoid activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-sigmoid-stepwise">
    <span class="term">
     <strong><code>FANN_SIGMOID_STEPWISE</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Stepwise linear approximation to sigmoid.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-sigmoid-symmetric">
    <span class="term">
     <strong><code>FANN_SIGMOID_SYMMETRIC</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Symmetric sigmoid activation function, also known as tanh.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-sigmoid-symmetric-stepwise">
    <span class="term">
     <strong><code>FANN_SIGMOID_SYMMETRIC_STEPWISE</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Stepwise linear approximation to symmetric sigmoid.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-gaussian">
    <span class="term">
     <strong><code>FANN_GAUSSIAN</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Gaussian activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-gaussian-symmetric">
    <span class="term">
     <strong><code>FANN_GAUSSIAN_SYMMETRIC</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Symmetric gaussian activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-gaussian-stepwise">
    <span class="term">
     <strong><code>FANN_GAUSSIAN_STEPWISE</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Stepwise gaussian activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-elliot">
    <span class="term">
     <strong><code>FANN_ELLIOT</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Fast (sigmoid like) activation function defined by David Elliott.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-elliot-symmetric">
    <span class="term">
     <strong><code>FANN_ELLIOT_SYMMETRIC</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Fast (symmetric sigmoid like) activation function defined by David Elliott.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-linear-piece">
    <span class="term">
     <strong><code>FANN_LINEAR_PIECE</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Bounded linear activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-linear-piece-symmetric">
    <span class="term">
     <strong><code>FANN_LINEAR_PIECE_SYMMETRIC</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Bounded symmetric linear activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-sin-symmetric">
    <span class="term">
     <strong><code>FANN_SIN_SYMMETRIC</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Symmetric periodic sine activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-cos-symmetric">
    <span class="term">
     <strong><code>FANN_COS_SYMMETRIC</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Symmetric periodic cosine activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-sin">
    <span class="term">
     <strong><code>FANN_SIN</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Periodic sine activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-cos">
    <span class="term">
     <strong><code>FANN_COS</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Periodic cosine activation function.
     </span>
    </dd>

   </dt>

  </dl>
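  <p class="simpara">
   As an illustrative sketch (not part of the original manual text), activation functions are set per layer
   with <code>fann_set_activation_function_hidden()</code> and
   <code>fann_set_activation_function_output()</code>; the topology below is a placeholder:
  </p>
  <pre class="programlisting">
&lt;?php
// Assumes the FANN extension is loaded.
$ann = fann_create_standard(3, 2, 3, 1);

if ($ann) {
    // Symmetric sigmoid (tanh-like, outputs in -1..1) in the hidden layer,
    // plain sigmoid (outputs in 0..1) on the output layer.
    fann_set_activation_function_hidden($ann, FANN_SIGMOID_SYMMETRIC);
    fann_set_activation_function_output($ann, FANN_SIGMOID);

    fann_destroy($ann);
}
?&gt;
  </pre>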

  <dl>

   <strong class="title">Error function used during training</strong>
   <dt id="constant.fann-errorfunc-linear">
    <span class="term">
     <strong><code>FANN_ERRORFUNC_LINEAR</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Standard linear error function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-errorfunc-tanh">
    <span class="term">
     <strong><code>FANN_ERRORFUNC_TANH</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Tanh error function; usually better, but it can require a lower learning rate. This error function
      aggressively targets outputs that differ greatly from the desired values, while putting little weight
      on outputs that differ only slightly. This error function is not recommended for cascade training
      and incremental training.
     </span>
    </dd>

   </dt>

  </dl>
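  <p class="simpara">
   As an illustrative sketch (not part of the original manual text), the error function is chosen with
   <code>fann_set_train_error_function()</code>:
  </p>
  <pre class="programlisting">
&lt;?php
// Assumes the FANN extension is loaded.
$ann = fann_create_standard(3, 2, 3, 1);

if ($ann) {
    // Use the tanh error function instead of the default linear one.
    fann_set_train_error_function($ann, FANN_ERRORFUNC_TANH);

    fann_destroy($ann);
}
?&gt;
  </pre>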

  <dl>

   <strong class="title">Stop criteria used during training</strong>
   <dt id="constant.fann-stopfunc-mse">
    <span class="term">
     <strong><code>FANN_STOPFUNC_MSE</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Stop criterion is the mean square error (MSE) value.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-stopfunc-bit">
    <span class="term">
     <strong><code>FANN_STOPFUNC_BIT</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Stop criterion is the number of bits that fail. The number of bits means the number of output neurons
      which differ from the desired output by more than the bit fail limit (see fann_get_bit_fail_limit and
      fann_set_bit_fail_limit). The bits are counted over all of the training data, so this number can be
      higher than the number of training patterns.
     </span>
    </dd>

   </dt>

  </dl>
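  <p class="simpara">
   As an illustrative sketch (not part of the original manual text), the stop criterion is chosen with
   <code>fann_set_train_stop_function()</code>; when stopping on failing bits, the bit fail limit is set
   with <code>fann_set_bit_fail_limit()</code> (the limit value below is a placeholder):
  </p>
  <pre class="programlisting">
&lt;?php
// Assumes the FANN extension is loaded.
$ann = fann_create_standard(3, 2, 3, 1);

if ($ann) {
    // Stop training when no output differs from its desired value
    // by more than 0.1, instead of stopping on the MSE value.
    fann_set_train_stop_function($ann, FANN_STOPFUNC_BIT);
    fann_set_bit_fail_limit($ann, 0.1);

    fann_destroy($ann);
}
?&gt;
  </pre>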

  <dl>

   <strong class="title">Definition of network types used by  <span class="function"><a href="function.fann-get-network-type.html" class="function">fann_get_network_type()</a></span></strong>
   <dt id="constant.fann-nettype-layer">
    <span class="term">
     <strong><code>FANN_NETTYPE_LAYER</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Each layer only has connections to the next layer.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-nettype-shortcut">
    <span class="term">
     <strong><code>FANN_NETTYPE_SHORTCUT</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Each layer has connections to all following layers.
     </span>
    </dd>

   </dt>

   </dl>
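  <p class="simpara">
   As an illustrative sketch (not part of the original manual text), the network type can be inspected with
   <span class="function"><a href="function.fann-get-network-type.html" class="function">fann_get_network_type()</a></span>:
  </p>
  <pre class="programlisting">
&lt;?php
// Assumes the FANN extension is loaded.
// Shortcut networks connect each layer to all following layers.
$ann = fann_create_shortcut(3, 2, 3, 1);

if ($ann) {
    if (fann_get_network_type($ann) === FANN_NETTYPE_SHORTCUT) {
        echo "This is a shortcut network.\n";
    }

    fann_destroy($ann);
}
?&gt;
  </pre>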

  <dl>

   <strong class="title">Errors</strong>
   <dt id="constant.fann-e-no-error">
    <span class="term">
     <strong><code>FANN_E_NO_ERROR</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      No error.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-cant-open-config-r">
    <span class="term">
     <strong><code>FANN_E_CANT_OPEN_CONFIG_R</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Unable to open configuration file for reading.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-cant-open-config-w">
    <span class="term">
     <strong><code>FANN_E_CANT_OPEN_CONFIG_W</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Unable to open configuration file for writing.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-wrong-config-version">
    <span class="term">
     <strong><code>FANN_E_WRONG_CONFIG_VERSION</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Wrong version of configuration file.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-cant-read-config">
    <span class="term">
     <strong><code>FANN_E_CANT_READ_CONFIG</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Error reading info from configuration file.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-cant-read-neuron">
    <span class="term">
     <strong><code>FANN_E_CANT_READ_NEURON</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Error reading neuron info from configuration file.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-cant-read-connections">
    <span class="term">
     <strong><code>FANN_E_CANT_READ_CONNECTIONS</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Error reading connections from configuration file.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-wrong-num-connections">
    <span class="term">
     <strong><code>FANN_E_WRONG_NUM_CONNECTIONS</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Number of connections not equal to the number expected.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-cant-open-td-w">
    <span class="term">
     <strong><code>FANN_E_CANT_OPEN_TD_W</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Unable to open train data file for writing.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-cant-open-td-r">
    <span class="term">
     <strong><code>FANN_E_CANT_OPEN_TD_R</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Unable to open train data file for reading.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-cant-read-td">
    <span class="term">
     <strong><code>FANN_E_CANT_READ_TD</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Error reading training data from file.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-cant-allocate-mem">
    <span class="term">
     <strong><code>FANN_E_CANT_ALLOCATE_MEM</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Unable to allocate memory.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-cant-train-activation">
    <span class="term">
     <strong><code>FANN_E_CANT_TRAIN_ACTIVATION</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Unable to train with the selected activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-cant-use-activation">
    <span class="term">
     <strong><code>FANN_E_CANT_USE_ACTIVATION</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Unable to use the selected activation function.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-train-data-mismatch">
    <span class="term">
     <strong><code>FANN_E_TRAIN_DATA_MISMATCH</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Irreconcilable differences between two struct fann_train_data structures.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-cant-use-train-alg">
    <span class="term">
     <strong><code>FANN_E_CANT_USE_TRAIN_ALG</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Unable to use the selected training algorithm.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-train-data-subset">
    <span class="term">
     <strong><code>FANN_E_TRAIN_DATA_SUBSET</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Attempt to take a subset which is not within the training set.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-index-out-of-bound">
    <span class="term">
     <strong><code>FANN_E_INDEX_OUT_OF_BOUND</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Index is out of bounds.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-scale-not-present">
    <span class="term">
     <strong><code>FANN_E_SCALE_NOT_PRESENT</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      Scaling parameters not present.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-input-no-match">
    <span class="term">
     <strong><code>FANN_E_INPUT_NO_MATCH</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      The number of input neurons in the ann and data do not match.
     </span>
    </dd>

   </dt>

   <dt id="constant.fann-e-output-no-match">
    <span class="term">
     <strong><code>FANN_E_OUTPUT_NO_MATCH</code></strong>
     (<span class="type"><a href="language.types.integer.html" class="type integer">integer</a></span>)
    </span>
    <dd>

     <span class="simpara">
      The number of output neurons in the ann and data do not match.
     </span>
    </dd>

   </dt>

  </dl>

 </p>
</div>
<hr /><div class="manualnavbar" style="text-align: center;">
 <div class="prev" style="text-align: left; float: left;"><a href="fann.resources.html">Resource Types</a></div>
 <div class="next" style="text-align: right; float: right;"><a href="fann.examples.html">Examples</a></div>
 <div class="up"><a href="book.fann.html">FANN</a></div>
 <div class="home"><a href="index.html">PHP Manual</a></div>
</div></body></html>