<chapter>
	<title>Grammar</title>

<sect1><title>Overview</title>
<para>
To aid speech recognition and general text processing, the speech
tools library provides a number of techniques for processing various
types of grammars.
</para>

<para>
These consist of a continually expanding set of classes and related
programs.  They are at various levels of development, but
all have fairly stable and working basic features.
</para>

<sect2>
	<title>N-grams</title>


<para>
Ngram language models are supported by the EST_Ngrammar class, and
associated routines and programs.   Support is provided for
building, tuning, smoothing and running n-gram models.  N-grams
themselves can be internally stored in either a
<emphasis>dense</emphasis> or <emphasis>sparse</emphasis> representation.  In
the dense case all possible states are represented with probability
distributions, while the sparse case only has those states for which
data is available.  The sparse representation will be much smaller
for very large models, but we consider the dense case to be
the more fully developed.
</para><para>
Formally n-grams can be viewed as a special case of weighted finite
state machines.  Our implementation reflects that where possible (backoff
options break this simple view), thus a start state is provided
and traversal operations are given.  This method of using n-grams is
by far the most efficient, as only one new piece of information
is required at each stage, so no vectors of tokens need be collected
(or shifted) and presented to the n-gram class.  However, as this finite
state machine view cannot always reasonably be used, we also support
access through a vector of tokens.
</para>

<sect3>
<title>Building ngram language models</title>

<para>
The program <function>ngram_build</function> estimates ngram language
models from data.  The data can be in a number of formats, and the
resulting model can be saved in either an ascii (easier for humans to read)
or binary (quicker to load) format.
</para>

<sect4><title>Vocabularies</title>

<para>
The vocabulary of an ngram must be predefined.  This is required to
allow an efficient internal representation.  This implementation
supports two vocabularies, one for the n-1 tokens in an ngram
and one for the nth token, as potentially this "predictee" could
be from a different class.  We also support the notion of an
out of vocabulary word, so any token found in the input data
that is not in the vocabulary may be mapped to that word.
</para><para>
In building n-grams there are options on what to do with n-grams
which contain out of vocabulary tokens.  They may be mapped
to the specified out of vocabulary word, the ngram can be ignored, or
the whole sentence containing the out of vocabulary word can
be ignored.
</para>

</sect4>

<sect4><title>ngram data input formats</title>
<para>
The input data can be in a number of formats depending on how
much preprocessing you wish to do before building.   The most
basic form is to submit the n-grams directly: that is, n tokens on each
line.  For example, for a tri-gram model of phones it might
look like
</para>
<screen>
0 # a1 
# a1 f 
a1 f r 
f r i 
r i k 
i k aa1 
k aa1 n 
aa1 n @ 
</screen>
<para>
In this case the data preparation stage must create each n-gram itself,
single-stepping through the data.  This
format we call <command>ngram_per_line</command>.
</para><para>
A second format is <command>sentence_per_line</command> where each
line of a file is a complete "sentence".  Ngrams for each n-tuple
will be automatically created and accumulated.   In this case the input
file might look like
</para><para>
<screen>
a1 f r i k aa1 n @
ei1 b r @ h a m
</screen>
</para>
<para>
In this mode, ngrams for the tokens at the start of a sentence are
created by defining a prev_tag (and if necessary a
prev_prev_tag).   Thus, given the above sentence-per-line file,
a prev_tag of "#" and a prev_prev_tag of "0", the first few
tri-grams accumulated are
</para><para>
<screen>
0 # a1
# a1 f
a1 f r
</screen>
</para><para>
If the ngram size requires looking back further, the prev_prev_tag is
repeated indefinitely.  Likewise an end_tag is appended to the end
of every sentence (i.e. the end of every line).
</para><para>
A third data input format is <command>sentence_per_file</command>,
where line breaks are no longer significant and n-grams are created
for all n-tuples in the file.  The beginning and end of the file receive
the same special treatment as the beginning and end of a line do in
the sentence_per_line case.
</para>
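<para>
As a concrete illustration, a plausible invocation for building a
tri-gram phone model from the sentence-per-line data shown above is
given below.  The option names and file names here are indicative only
and should be checked against the usage output of
<function>ngram_build</function> itself.
</para>
<screen>
# hypothetical file names; check ngram_build's usage for exact options
ngram_build -order 3 -input_format sentence_per_line \
            -prev_tag "#" -prev_prev_tag "0" \
            -w phone.vocab -o phone.ngram phone.data
</screen>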
</sect4>
<sect4><title>Smoothing and Backoff</title>

<para>
We support a number of different techniques to deal with lack of data in
a training set. 
</para>
<para>
Good Turing smoothing <cite>Church and Gale 1991</cite> is supported,
allowing smoothing of n-grams whose frequency is less than M.  We also
support <emphasis>backoff</emphasis>, where the (n-1)-grams are
(recursively) built to provide an estimate of the probability
distributions for unseen n-grams.
</para>
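<para>
For reference, Good Turing smoothing replaces the raw frequency r of an
n-gram with an adjusted frequency computed from the counts-of-counts,
in the standard form
</para>
<screen>
r* = (r + 1) * N(r+1) / N(r)
</screen>
<para>
where N(r) is the number of distinct n-grams seen exactly r times; here
the adjustment is only applied to n-grams whose frequency is below the
given threshold M.
</para>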
</sect4>

</sect3>

<sect3>
	<title>Testing ngram language models</title>


<para>
<function>ngram_test</function> computes language model entropy/perplexity 
on test data.   The test data may be in any of the formats described
above.
</para>
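<para>
The entropy and perplexity reported are the usual language modelling
measures: the entropy is the mean negative log probability the model
assigns to the test tokens, and the perplexity is two raised to that
power, i.e.
</para>
<screen>
H = -(1/N) * sum_i log2 P(w_i | context_i)       perplexity = 2^H
</screen>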

</sect3>

</sect2>

<sect2>
	<title>SCFGs</title>


<para>
Stochastic context-free grammars are a version of context-free grammars
with probabilities on rules.  In this implementation we assume SCFGs
are always in Chomsky Normal Form (CNF), where rules may only be binary
or unary branching.  When binary, both daughters must be non-terminals and
when unary, the daughter must be a terminal.
</para>

<para>
The implementation here is primarily based on <cite>Pereira and
Schabes 92</cite>, thus allowing unsupervised training of SCFGs as well
as seeding with a bracketed corpus, which can vastly reduce
training time and improve results.  Training uses the inside-outside
algorithm.
</para>

<para>
The support is split into four parts: making grammars, training grammars,
testing grammars and parsing.
</para>

<para>
A grammar file consists of a set of rules.  Each rule is a bracketed
list consisting of a probability and a nonterminal, followed by either
two nonterminals or one terminal.  A simple example is
<screen>
(0.5 S A D)
(0.5 S C B)
(0.79 B S C)
(0.21 B a)
(0.79 D S A)
(0.21 D b)
(1 A b)
(1 C a)
</screen>
The mother non-terminal in the first rule is the distinguished symbol.
</para>

<para>
Grammars may be constructed by hand, by the program <filename>scfg_make</filename> or
by some other external process.  The program <filename>scfg_make</filename> constructs
a full grammar given a list of (or a number of) terminals and nonterminals.
The rules can be assigned equal probabilities or random ones.  The
"probabilities" may be actual probabilities or log probabilities.
For example, given a file <filename>wp19</filename> with a list of terminals, a grammar
suitable for training with 15 non-terminals may be created by the command
<screen>
scfg_make -nonterms 15 -terms wp19 -domain prob \
          -values random -o wsj_15_19.gram
</screen>
The non-terminal or terminal names will be automatically generated if a
number is given, or will be as specified if a file name is given.  In
the case of a filename being given, the file should contain no brackets,
just whitespace separated tokens.
</para>
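<para>
Such a terminals file is simply a list of whitespace separated tokens.
For example, a (purely illustrative) file of part of speech tags, like
those used in the corpus below, might contain
</para>
<screen>
n v dt j in cd md cc punc
</screen>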

<para>
A corpus consists of a number of sentences; each sentence must be
contained within a set of parentheses.  The sentences themselves may
additionally contain further bracketing (for training and testing).
Each sentence is read by the Lisp reader, so comments (semi-colon to
end of line) may be included.  For example
<programlisting arch='lisp'>((((n n) punc ((cd n) j) punc)
 (md (v (dt n) (in (dt j n)) (n cd)))
 punc))
(((n n) (v ((n) (in ((n n) punc (dt n v n))))) punc))
((((n n) punc (((cd n) j) cc ((j n) (in (n n n n)))) punc)
 (v (v (((dt j n) (in (dt j j n))))))
 punc))
...
</programlisting>
Training is done by estimating the inside and outside probabilities of
the rules based on their current probabilities and a corpus.  The corpus
may optionally include internal bracketing, which is used by the training
algorithm to preclude some possible parses, typically making the training
faster (and sometimes more accurate).  After each training
pass the grammar rule probabilities are updated and the process starts
again.  Note that, depending on the number of training sentences, training
passes may take a very long time.  After each pass the cross entropy
for the current version of the grammar is printed.  This should normally
decrease until the "best" grammar has been found.
</para>

<para>
The program <filename>scfg_train</filename> takes an initial grammar and a corpus and,
by default, will train for 100 passes.  Because the desired number of passes
can take prohibitively long, an option is available to select
only an N percent chunk of the training set on each pass, cycling
through the other N percent chunks of the corpus on successive passes.
Experiments have shown that this not only produces much faster training,
but that the accuracy of the fully trained grammar is very similar.  Given the
choice between waiting 10 days or 48 hours to train, it is highly
recommended.
</para>

<para>
After every N passes the current state of the grammar may be saved; the
number of passes between saves is specified by the <filename>-checkpoint</filename>
option.  The grammar is saved in the output file name appended with the pass
number.
</para>

<para>
Because the partitioned training will select different partitions
depending on the pass number, you can optionally specify the starting
pass number, making it much easier to continue training after an
interruption.
</para>
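<para>
Putting these options together, a training run might be invoked roughly
as follows.  The <filename>-grammar</filename>, <filename>-corpus</filename> and
<filename>-o</filename> options mirror those of <filename>scfg_parse</filename> shown
later and <filename>-checkpoint</filename> is described above; the names of the
pass count and partition options are assumptions and should be checked
against the program's usage output.
</para>
<screen>
# option names for passes/partitioning are indicative only
scfg_train -grammar wsj_15_19.gram -corpus train.data \
           -passes 100 -spread 10 -checkpoint 5 \
           -o wsj_15_19.out.gram
</screen>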

<para>
Testing is done by the program <filename>scfg_test</filename>; it takes a grammar and a
corpus.  That corpus may be fully bracketed or not.  By default
the mean cross entropy value from analyzing these sentences will
be printed, together with the number of sentences that fail to parse.
</para>

<para>
Alternatively a <emphasis>bracketing accuracy</emphasis> may be calculated; this is the
percentage of phrases in a parsed sentence that are compatible with the
bracketing in the corpus example.
</para>
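<para>
A test run therefore needs only a trained grammar and a test corpus,
for example as below.  The <filename>-grammar</filename> and
<filename>-corpus</filename> option names follow <filename>scfg_parse</filename>
shown later; any option selecting bracketing accuracy rather than
cross entropy should be checked against the program's usage output.
</para>
<screen>
# assumes option names shared with scfg_parse; check scfg_test usage
scfg_test -grammar wsj_15_19.out.gram -corpus test.data
</screen>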

<para>
The fourth program provides a mechanism for parsing one or more
sentences.  The corpus this time should contain no bracketing except
around the beginning and end of the sentence itself.  Two forms of parse
are produced: a full form with start and end points for each phrase,
the related non-terminal and the probability, and a simple form where
only the bracketing is given.  Note that only one parse (or none) is given.
For any phrase only the best example tree is given, though the
probability is given as the sum of all possible derivations of that
non-terminal for that phrase.
</para>

<screen>
scfg_parse -grammar wsj_15_19.gram -corpus news.data -brackets -o news.out
</screen>
<para>
Note the input must be strings of terminals for the given grammar.
For parsing real text it is likely the grammar uses part of
speech tags as terminals while the data consists of actual words, not part of
speech tags.  If you want to parse text you can use the Festival
script <filename>festival/examples/scfg_parse_text</filename>, which takes in arbitrary
text, runs the part of speech tagger on it after standard tokenizing
rules, and parses the output, saving the parse to the specified file.
</para>

</sect2>

<sect2>
	<title>WFSTs</title>


<para>
The speech tools contain a small but growing library of
basic functions for building and manipulating weighted finite
state transducers.  Although not complete, it already provides many of
the basic operations and compilers one needs for using these devices.
</para>

<para>
Given a WFST the following operations are supported: <function>determinise</function>,
<function>minimize</function>, <function>complement</function>.
</para>

<para>
Given two WFSTs the following operations are supported: <function>intersection</function>,
<function>union</function>, <function>difference</function>, <function>concatenation</function> and <function>compose</function>.
</para>

<para>
In addition to these operations, compilers are provided for a number of
basic input formats: regular expressions, regular grammars, context-free
grammars (with depth restriction) and Kay/Kaplan/Koskenniemi two-level
morphology rules.
</para>

<para>
Still missing is a complete treatment of the weights through some
basic operations (e.g. minimization doesn't preserve weights).  Also,
techniques for learning WFSTs from data, or at least weighting existing
FSTs from data, will be added in later versions.
</para>

<para>
In general input symbols are of the form X or X/Y.  When X is
given it is treated as X/X (except when using the wfst as a transducer).
X/Y denotes an input/output symbol pair, thus using single symbols
will mostly cause the wfst mechanisms to act as if they were finite
state machines.
</para>
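<para>
For example, in the morphology rules shown later a bare
<function>s</function> is equivalent to the pair <function>s/s</function>,
while <function>e/+</function> pairs an input <function>e</function> with an
output <function>+</function>:
</para>
<screen>
s      treated as s/s (input s, output s)
e/+    input e, output +
</screen>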

<para>
The two main programs are <filename>wfst_build</filename> and <filename>wfst_run</filename>.
<filename>wfst_run</filename> runs in both recognition and transduction mode.
</para>

<para>
<filename>wfst_build</filename> builds wfst's from description files or through
combination of existing ones.  The output may be optionally
determinized or determinized and minimized.
</para>


<sect3>
	<title>Kay/Kaplan/Koskenniemi morphological rules</title>


<para>
One of the major driving interests in wfsts has been their
use in morphology <cite>kaplan94</cite>.  Hence we provide a method for
compiling Kay/Kaplan/Koskenniemi type (restricted) context sensitive
rewrite rules.  The exact form is given in the example below.
</para>

<para>
This example covers basic letter-to-letter mappings but also epenthesis for
e-insertion in words like <function>churches</function> and <function>boxes</function>.
<programlisting arch='lisp'>(KKrules
 engmorph
 (Alphabets
  ;; Input Alphabet
  (a b c d e f g h i j k l m n o p q r s t u v w x y z #) 
  ;; Output Alphabet
  (a b c d e f g h i j k l m n o p q r s t u v w x y z + #) 
 )
 (Sets
  (LET a b c d e f g h i j k l m n o p q r s t u v w x y z)
 )
 (Rules
 ;; The basic rules
 ( a => nil --- nil) 
 ( b => nil --- nil) 
 ( c => nil --- nil) 
 ( d => nil --- nil) 
 ( e => nil --- nil) 
 ( f => nil --- nil) 
 ( g => nil --- nil) 
 ( h => nil --- nil) 
 ( i => nil --- nil) 
 ( j => nil --- nil) 
 ( k => nil --- nil) 
 ( l => nil --- nil) 
 ( m => nil --- nil) 
 ( n => nil --- nil) 
 ( o => nil --- nil) 
 ( p => nil --- nil) 
 ( q => nil --- nil) 
 ( r => nil --- nil) 
 ( s => nil --- nil) 
 ( t => nil --- nil) 
 ( u => nil --- nil) 
 ( v => nil --- nil) 
 ( w => nil --- nil) 
 ( x => nil --- nil) 
 ( y => nil --- nil) 
 ( z => nil --- nil) 
 ( # => nil --- nil)
 ( _epsilon_/+ => (or LET _epsilon_/e) --- nil)

 ;; Epenthesis
 ;;   churches -> church+s
 ;;   boxes -> box+s
 (e/+ <=> (or (s h) (or s x z) (i/y) (c h))
	    ---
	    (s))
)
</programlisting>
A substantially larger example of morphographemic rules is distributed with
the Festival speech synthesis system in <filename>festival/lib/engmorph.scm</filename>.
This is based on the English description in <cite>ritchie92</cite>.
</para>

<para>
For a definition of the semantics of the basic types of rule
(surface coercion, context restriction and combined rules)
see <cite>ritchie92</cite>.  Note that these rules are run in
parallel (the transducers are intersected), which can make the rules
interact in ways that the author might not intend.  A good rule
debugger is really required in order to write a substantial set
of rules in this formalism.
</para>

<para>
The rule compilation method used differs from Kay and Kaplan, and
also from <cite>mohri96</cite>, and actually follows the method used
in <cite>ritchie92</cite>, though in this case, unlike <cite>ritchie92</cite>,
the technique is followed through to true wfst's.  The actual
compilation method should be described somewhere.
</para>

<para>
The above may be compiled into a wfst by the following command (assuming
the rules are in the file <filename>engmorph.scm</filename>).
<screen>
wfst_build -type kk -o engmorph.wfst -detmin engmorph.scm
</screen>
</para>

<para>
This rule compiler has also been used to find equivalent transducers
for restricted forms of decision trees (following <cite>sproat96</cite>) and
may be viewed as mostly stable.
</para>

</sect3>

<sect3>
	<title>Regular expressions</title>


<para>
A simple method for building wfst's from regular expressions is also
provided.
</para>

<para>
An example is
<programlisting arch='lisp'>((a b c)
 (a b c)
 (and a (+ (or b c)) d))
</programlisting>
This consists of the input alphabet and the output alphabet,
followed by a LISP s-expression containing the regex.  The supported
operators are <function>and</function>, <function>or</function>, <function>+</function>, <function>*</function> and <function>not</function>.
</para>

<para>
Compilation is by the following command
<screen>
wfst_build -type regex -o t1.wfst -detmin t1.regex
</screen>
</para>

</sect3>

<sect3><title>Regular Grammars</title>


<para>
A compilation method also exists for regular grammars.  These grammars
need not be in a normal form, and in fact no check is made that they
are regular: if they contain center-embedding the construction algorithm
will go into a loop and eventually run out of storage.  The
correction for that would be to add a depth limit, which would then
allow wfst approximations of context-free grammars, which would
be useful.
</para>
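<para>
For example, a rule of the following form is center-embedded; it is a
perfectly valid context-free rule, but with the current algorithm its
construction would not terminate:
</para>
<programlisting arch='lisp'>(S -> a S b)
</programlisting>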

<para>
An example regular grammar is
<programlisting arch='lisp'>(RegularGrammar
 engsuffixmorphosyntax
 ;; Sets 
 (
 (V a e i o u y)
 (C b c d f g h j k l m n p q r s t v w x y z)
 )
 ;; Rules

 (
 ;; A word *must* have a suffix to be recognized
 (Word -> # Syls Suffix )
 (Word -> # Syls End )

 ;; This matches any string of characters that contains at least one vowel
 (Syls -> Syl Syls )
 (Syls -> Syl )
 (Syl -> Cs V Cs )
 (Cs -> C Cs )
 (Cs -> )

 (Suffix -> VerbSuffix )
 (Suffix -> NounSuffix )
 (Suffix -> AdjSuffix )
 (VerbSuffix -> VerbFinal End )
 (VerbSuffix -> VerbtoNoun NounSuffix )
 (VerbSuffix -> VerbtoNoun End )
 (VerbSuffix -> VerbtoAdj  AdjSuffix )
 (VerbSuffix -> VerbtoAdj End )
 (NounSuffix -> NounFinal End )
 (NounSuffix -> NountoNoun NounSuffix )
 (NounSuffix -> NountoNoun End )
 (NounSuffix -> NountoAdj AdjSuffix )
 (NounSuffix -> NountoAdj End )
 (NounSuffix -> NountoVerb VerbSuffix )
 (NounSuffix -> NountoVerb End )
 (AdjSuffix -> AdjFinal End )
 (AdjSuffix -> AdjtoAdj AdjSuffix)
 (AdjSuffix -> AdjtoAdj End)
 (AdjSuffix -> AdjtoAdv End)  ;; isn't any Adv to anything

 (End -> # )  ;; word boundary symbol *always* present

 (VerbFinal -> + e d)
 (VerbFinal -> + i n g)
 (VerbFinal -> + s)

 (VerbtoNoun -> + e r)
 (VerbtoNoun -> + e s s)
 (VerbtoNoun -> + a t i o n)
 (VerbtoNoun -> + i n g)
 (VerbtoNoun -> + m e n t)

 (VerbtoAdj -> + a b l e)

 (NounFinal -> + s)

 (NountoNoun -> + i s m)
 (NountoNoun -> + i s t)
 (NountoNoun -> + s h i p)

 (NountoAdj -> + l i k e)
 (NountoAdj -> + l e s s)
 (NountoAdj -> + i s h)
 (NountoAdj -> + o u s)

 (NountoVerb -> + i f y)
 (NountoVerb -> + i s e)
 (NountoVerb -> + i z e)

 (AdjFinal -> + e r)
 (AdjFinal -> + e s t)

 (AdjtoAdj -> + i s h)
 (AdjtoAdv -> + l y)
 (AdjtoNoun -> + n e s s)
 (AdjtoVerb -> + i s e)
 (AdjtoVerb -> + i z e)

)
)
</programlisting>
The above is a simple morpho-syntax for English.
</para>

</sect3>
</sect2>
</sect1>



<sect1><title>Programs</title>

<para>The following are executable programs for grammars:
</para>

<simplesect><title><link linkend="scfg-make-manual">scfg_make</link></title><para> make a set of rules for a SCFG.</para></simplesect>
<simplesect><title><link linkend="scfg-train-manual">scfg_train</link></title><para> 
make a set of rules for a SCFG.</para></simplesect>
<simplesect><title><link linkend="scfg-parse-manual">scfg_parse</link></title><para> 
train a set of rules for a SCFG.</para></simplesect>
<simplesect><title><link linkend="scfg-test-manual">scfg_test</link></title><para> 
Perform parsing using pre-trained grammar.</para></simplesect>
<simplesect><title><link linkend="wfst-build-manual">wfst_build</link></title><para> 
Build a weighted finite state transducer.</para></simplesect>
<simplesect><title><link linkend="wfst-run-manual">wfst_run</link></title><para> 
Run a weighted finite state transducer.</para></simplesect>
<simplesect><title><link linkend="ngram-build-manual">ngram_build</link></title><para> 
Train a n-gram from text.</para></simplesect>
<simplesect><title><link linkend="ngram-test-manual">ngram_test</link></title><para> 
Caldulate the perplexity etc of an n-gram.</para></simplesect>

</sect1>


<sect1><title>Classes</title>

<toc depth=2></toc>
&docpp-EST-SCFG-2;
&docpp-EST-SCFG-Rule-2;
&docpp-EST-SCFG-traintest-2;
&docpp-EST-bracketed_string-2;
&docpp-EST-Ngrammar-2;
&docpp-EST-WFST-2;
</sect1>
</chapter>


<!--
Local Variables:
mode: sgml
sgml-doctype: "speechtools.sgml"
sgml-parent-document: ("speechtools.sgml" "chapter" "book")
End:
-->