<!DOCTYPE book PUBLIC  "-//KDE//DTD DocBook V3.1-Based Variant V1.0//EN" [
  <!ENTITY kappname "&arts;" >
  <!ENTITY % addindex "IGNORE">
  <!ENTITY % English "INCLUDE" >
  <!ENTITY % ents PUBLIC "-//KDE//ENTITIES Documentation V2.0//EN">
  %ents;
]>
<!-- Note: I am temporarily obligated to revert this file back to DocBook 3.1
     We'll switch to DocBook 4.1 probably after KDE 2.1
     When it happens, please remove the "DocBook 4.1:" comments from the document
	(E.B. 2001-02-08)
-->

<book lang="&language;">
<bookinfo>
<title>The &arts; Handbook</title>
<authorgroup>

<author>
<firstname>Stefan</firstname>
<surname>Westerfeld</surname>
<affiliation>
<address><email>stefan@space.twc.de</email></address>
</affiliation>
</author>

<author>
<firstname>Jeff</firstname>
<surname>Tranter</surname>
<affiliation>
<address><email>tranter@kde.org</email></address>
</affiliation>
</author>
</authorgroup>

<copyright>
<year>1999-2001</year>
<holder>Stefan Westerfeld &amp; Jeff Tranter</holder>
</copyright>
<legalnotice>&FDLNotice;</legalnotice>

<date>04/01/2001</date>
<releaseinfo>1.0.8</releaseinfo>

<abstract><para>This handbook describes &arts;, the Analog Real-Time
Synthesizer.</para>

</abstract>

<keywordset>
<keyword>aRts</keyword>
<keyword>artsbuilder</keyword>
<keyword>synthesizer</keyword>
<keyword>multimedia</keyword>
<keyword>structure</keyword>
<keyword>music</keyword>
<keyword>sound</keyword>
<keyword>KDE</keyword>
</keywordset>
</bookinfo>

<chapter id="introduction">
<title>Introduction</title>

<sect1 id="what-is-arts">
<title>What is &arts;?</title>

<para>The Analog Real-Time Synthesizer, or &arts;, is a modular system for
synthesizing sound and music on a digital computer. Using small building blocks
called modules, the user can easily build complex audio processing
tools. Modules typically provide functions such as sound waveform generators,
filters, audio effects, mixing, and playback of digital audio in different file
formats.</para>

<para>The <application>artsd</application> sound server mixes audio from several
sources in real time, allowing multiple sound applications to transparently
share access to sound hardware.</para>

<para>Using <acronym>MCOP</acronym>, the Multimedia Communication Protocol,
multimedia applications can be network transparent, authenticated for security,
and cross-platform using interfaces defined in a language-independent way using
<acronym>IDL</acronym>. Support is also provided for non &arts;-aware legacy
applications. As a core component of the &kde; 2 desktop environment, &arts;
provides the basis for the &kde; multimedia architecture, and will in future
support more media types including video. Like &kde;, &arts; runs on a number of
operating systems, including &Linux; and BSD variants. It can also be used
independently of &kde;.</para>

</sect1>

<sect1 id="using-this-manual">
<title>Using This Manual</title>

<para>This manual is intended to provide comprehensive documentation on &arts; for
users at different skill levels. Depending on whether you are a casual user of
multimedia applications that make use of &arts; or a multimedia application
developer, you may want to take different paths through the manual.</para>

<para>It is suggested that you first read the <link
linkend="installation">Downloading and Building &arts;</link> chapter if you need
to get &arts; initially installed and running. If you already have a working
system, likely bundled with your operating system distribution, you may choose
to skip this section.</para>

<para>You should then read the sections in the <link linkend="arts-tools">&arts;
Tools</link> chapter, especially <application>artsd</application>,
<application>artscontrol</application>, <application>artsshell</application>,
and <application>artsdsp</application>. This will help you make the most
effective use of &arts;.</para>

<para>If you are interested in going further with &arts;, read the chapter on
<link linkend="artsbuilder">artsbuilder</link> and go through the tutorial. This
should give you an appreciation of the powerful capabilities of &arts; and the
provided modules that can be used without the need to be a programmer.</para>

<para>If you want to know more about the internals of &arts;, either to develop
multimedia applications or extend &arts; itself, read some or all of the chapter
<link linkend="arts-in-detail">&arts; in Detail</link>. This should give you an
understanding of all of the concepts that are prerequisites to &arts; software
development.</para>

<para>If you are interested specifically in the <acronym>MIDI</acronym>
capabilities of &arts;, you should read the chapter on <link
linkend="midi"><acronym>MIDI</acronym></link>.</para>

<!-- TODO
<para>To learn more about the &arts; graphical elements, either as an advanced
user of artsbuilder or to create new elements, read the section on <link
linkend="gui-elements"><acronym>GUI</acronym> Elements</link>.</para>
-->

<para>If you want to develop &arts;-aware multimedia applications, the <link
linkend="arts-apis">&arts; Application Programming Interfaces</link> chapter
covers the different <acronym>API</acronym>s in detail.</para>

<para>If you want to extend &arts; by creating new modules, read the <link
linkend="arts-modules">&arts; Modules</link> chapter.</para>

<para>If you are modifying an existing application to run under &arts;, read the
chapter on <link linkend="porting">Porting Applications to &arts;</link>.</para>

<para>You you can find out how to help contribute to the &arts; project in the
<link linkend="contributing">Contributing to &arts;</link> chapter, read about
upcoming &arts; development in the chapter on <link linkend="future-work">Future
Work</link>, and find links to more information in the <link
linkend="references">References</link> section.</para>

<para>We have also rounded out the manual with some additional material,
including <link linkend="faq">answers to frequently asked questions</link>, a
<link linkend="contributors">list of contributors</link>, the details on &arts;
<link linkend="copyright-and-licenses">copyright and licensing</link>, and some
background material on <link linkend="intro-digital-audio">digital audio</link>
and <link linkend="midi-introduction"><acronym>MIDI</acronym></link>. A <link
linkend="glossary">glossary</link> of terms is also included.</para>

<note>
<para>
This manual is still very much a work in progress. You are welcome to contribute
by writing portions of it, but if you wish to do so, contact Jeff Tranter
<email>tranter@kde.org</email> first to avoid duplication of effort.
</para>
</note>

</sect1>

<sect1 id="history">
<title>History</title>

<para>
In late 1997 Stefan Westerfeld started working on a real-time, modular system
for sound synthesis. The code initially ran on a PowerPC system running
&AIX;. This first implementation was quite simple but supported a full-featured
flow system that was able to do such things as play MP3 files and pipe audio
streams through effects modules.
</para>


<para>The next step was to implement a <acronym>GUI</acronym> so that modules
could be manipulated graphically. Stefan had had some good experience using
&kde;, so that was chosen as the <acronym>GUI</acronym> toolkit (knowing that
it might be necessary to do a Gnome/Gtk+ version as well), and this later led to
using &Linux; as the main development platform. Originally named
<application>ksynth</application>, the project was renamed &arts; and the pace
of development accelerated. The project at this stage was quite complete, with a
<acronym>CORBA</acronym>-based protocol, dozens of modules, a graphical module
editing tool, C and C++ <acronym>API</acronym>s, documentation, utilities, and a
mailing list and web site with a small group of developers. The project had come
a long way after only a little more than a year of development.</para>

<para>As the &kde; team started planning for &kde; 2.0, it became clear that
&kde; needed a more powerful infrastructure for sound and other streaming
media. It was decided to adapt &arts;, as it was a good step in this direction
with a proven architecture. Much new development effort went into this new
version of &arts;, most notably the replacement of the <acronym>CORBA</acronym>
code with an entirely new subsystem, <acronym>MCOP</acronym>, optimized for
multimedia. Version 0.4 of &arts; was included in the &kde; 2.0 release.</para>
      
<para>Work continues on &arts;, improving performance and adding new
functionality. It should be noted that even though &arts; is now a core
component of &kde;, it can be used without &kde;, and is also being used for
applications that go beyond traditional multimedia. The project has attracted
some interest from the Gnome team, opening up the possibility that it may
someday become the standard multimedia architecture for &UNIX; desktop
systems.</para>

</sect1>

</chapter>

<chapter id="arts-tools">
<title>&arts; Tools</title>

<para>Included with &arts; is a number of utilities for controlling and
configuring its behavior. You need to have some familiarity with most of these
tools in order to use &arts; effectively. This section describes each of the
utilities and their command options.</para>

<sect1 id="kde-control-center">
<title>&kcontrol;</title>

<para>When running &arts; under &kde;, the &kcontrol; provides a group of
control panel settings under the <guilabel>Sound</guilabel> category. Some of
these settings are used by &arts;. You can also associate sounds with various
window manager and &kde; events using the <menuchoice><guilabel>Look &amp;
Feel</guilabel><guilabel>System Notifications</guilabel></menuchoice> panel. See
the &kcontrol; manual for information on using the panel settings.</para>

</sect1>

<sect1 id="artsd">
<title><application>artsd</application></title>

<para>Access to the sound hardware resources is controlled by
<application>artsd</application>, the &arts; daemon. This allows different
applications to simultaneously send requests to the server, where they can be
mixed together and played. Without a centralized sound server a single
application using a sound device would prevent other applications from using
it.</para>

<para>To use &arts; there should be one and only one copy of
<application>artsd</application> running. It is typically started when &kde; starts
up if it is enabled in the &kcontrol; <guilabel>Sound Server</guilabel>
panel.</para>

<para>The program accepts the following arguments:</para>

<!-- LW: FIX THIS -->

<cmdsynopsis>
<command>artsd</command>
<group choice="opt">
<option>-n</option>
<option>-p</option>
<option>-u</option>
</group>
<group choice="opt">
<option>-a <replaceable>audiomethod</replaceable></option>
<option>-r <replaceable>sampling rate</replaceable></option>
<option>-b <replaceable>bits</replaceable></option>
<option>-d</option>
<option>-D <replaceable>devicename</replaceable></option>
<option>-F <replaceable>fragments</replaceable></option>
<option>-S <replaceable>size</replaceable></option>
</group>
<group choice="opt">
<option>-h</option>
<option>-A</option>
<option>-l <replaceable>level</replaceable></option>
</group>
</cmdsynopsis>

<variablelist>

<varlistentry>
<term><option>-r <replaceable>sampling rate</replaceable></option></term>
<listitem>
<para>Set sampling rate to use.</para>
</listitem>
</varlistentry>
	
<varlistentry>
<term><option>-h</option></term>
<listitem>
<para>Display command usage.</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-n</option></term>
<listitem>
<para>Enable network transparency.</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-p <replaceable>port</replaceable></option>
</term>
<listitem>
<para>Set <acronym>TCP</acronym> port to use (implies
<option>-n</option>).</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-u</option></term>
<listitem>
<para>Public, no authentication (dangerous).</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-d</option></term>
<listitem>
<para>Enable full duplex operation.</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-D <replaceable>device name</replaceable></option></term>
<listitem>
<para>Specify audio device (usually <filename>/dev/dsp</filename>).</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-F <replaceable>fragments</replaceable></option></term>
<listitem>
<para>Set number of fragments.</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-S <replaceable>size</replaceable></option></term>
<listitem>
<para>Set fragment size, in bytes.</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-l <replaceable>level</replaceable></option></term>
<listitem>
<para>Set information level: 3 (quiet), 2 (warnings), 1 (info), 0
(debug).</para>
</listitem>
</varlistentry>

</variablelist>

<para>In most cases simply running <command>artsd</command> will suffice.</para>
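<para>The <option>-F</option> and <option>-S</option> options together
determine the worst-case latency of the server. The sketch below shows the
usual fragment arithmetic; it is an illustration, not code taken from the
<application>artsd</application> source, and the example values are
assumptions:</para>

```python
def artsd_latency_ms(fragments, fragment_size, rate=44100, bytes_per_frame=4):
    # Total bytes the driver may buffer, divided by the byte rate of the
    # stream; bytes_per_frame = 4 assumes 16-bit stereo samples.
    buffered_bytes = fragments * fragment_size
    return 1000.0 * buffered_bytes / (rate * bytes_per_frame)

# e.g. 7 fragments of 1024 bytes at 44.1 kHz, 16-bit stereo:
print(round(artsd_latency_ms(7, 1024), 1))
```

<para>Smaller or fewer fragments lower the latency, at the cost of a higher
risk of dropouts.</para>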


</sect1>

<sect1 id="artswrapper">
<title><application>artswrapper</application></title>

<para>To provide good real-time response, <application>artsd</application> is
usually run as a real-time process (on platforms where real-time priorities are
supported). This requires root permissions, so to minimize the security
implications, <application>artsd</application> can be started using the small
wrapper program <application>artswrapper</application> which simply sets
real-time priority (running as <systemitem>root</systemitem>) <!-- DocBook 4.1: class="username" -->
and then executes <command>artsd</command> as a non-root user.</para>

</sect1>

<sect1 id="artsshell">
<title><application>artsshell</application></title>

<para>The <command>artsshell</command> command is intended as a utility to
perform miscellaneous functions related to the sound server. It is expected that
the utility will be extended with new commands in the future (see the comments
in the source code for some ideas).</para>

<para>The command accepts the following format:</para>

<!-- LW: FIX THIS -->

<cmdsynopsis>
<command>artsshell</command>
<group>
<arg>suspend</arg>
<arg>status</arg>
</group>
<group>
<option>-h</option>
<option>-q</option>
</group>
</cmdsynopsis>

<para>artsshell [options] <replaceable>command</replaceable> [<replaceable>command-options</replaceable>] </para>

<para>
The following options are supported:
</para>

<variablelist>

<varlistentry>
<term><option>-q</option></term>
<listitem>
<para>Suppress all output.</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-h</option></term>
<listitem>
<para>Display command usage.</para>
</listitem>
</varlistentry>

</variablelist>

<para>The following commands are supported:</para>

<variablelist>

<varlistentry>
<term><option>suspend</option></term>
<listitem>
<para>
Suspend the sound server.
</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>status</option></term>
<listitem>
<para>Display sound server status information.</para>
</listitem>
</varlistentry>

</variablelist>

</sect1>

<sect1 id="artsplay">
<title><application>artsplay</application></title>

<para>The <application>artsplay</application> command is a simple utility to
play a sound file. It accepts a single argument corresponding to the name of a
sound file which is sent to the sound server to be played. The sound file can be
any common sound file type such as <literal role="extension">wav</literal> or
<literal role="extension">au</literal>. This utility is good for testing that
the sound server is working. By running two commands in parallel or in rapid
succession you can demonstrate how the sound server mixes more than one sound
source.</para>

</sect1>

<sect1 id="artsdsp">
<title><application>artsdsp</application></title>

<para>The sound server only supports applications that are &arts;-aware.  Many
legacy applications want to access the sound device directly.  The
<command>artsdsp</command> command provides an interim solution that allows most
of these applications to run unchanged.</para>

<para>When an application is run under <application>artsdsp</application> all
accesses to the <filename>/dev/dsp</filename> audio device are intercepted and
mapped into &arts; <acronym>API</acronym> calls. While the device emulation is
not perfect, most applications work this way, albeit with some degradation in
performance and latency.</para>

<para>The <command>artsdsp</command> command follows the format:
</para>

<!-- LW: FIX THIS -->

<para>
<userinput><command>artsdsp</command> [<replaceable>options</replaceable>] <replaceable>application</replaceable> <replaceable>arguments</replaceable></userinput>
</para>

<para>
The following options are recognized:
</para>

<variablelist>

<varlistentry>
<term><option>-h</option>,  <option>--help</option></term>
<listitem>
<para>Show brief help.</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-n</option>, <option>--name</option>=<replaceable>name</replaceable></term>
<listitem>
<para>Use <replaceable>name</replaceable> to identify player to <command>artsd</command>.</para>

</listitem>
</varlistentry>

<varlistentry>
<term><option>-m</option>, <option>--mmap</option></term>
<listitem>
<para>Emulate memory mapping (&eg; for <application>Quake</application>).</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-v</option>, <option>--verbose</option></term>
<listitem>
<para>Show parameters.</para>
</listitem>
</varlistentry>

</variablelist>

<para>
A typical invocation is:
</para>

<para>
<userinput><command>artsdsp</command> <option>-v</option> <option>-m</option> <command>realplay</command> <replaceable>song.mp3</replaceable></userinput>
</para>

<para>Some applications work better with the <option>--mmap</option> option. Not
all features of the sound device are fully emulated, but most applications
should work. If you find one that does not, submit a detailed bug report and the
developers may be able to fix it. Again, remember this is an interim solution
and something of an ugly hack; the best solution is to add native &arts; support
to the applications.  If your favorite sound application does not have &arts;
support, ask the developer to provide it.</para>

</sect1>

<sect1 id="artscat">
<title><application>artscat</application></title>

<para>This is a simple utility to send raw audio data to the sound server.  You
need to specify the data format (sampling rate, sample size, and number of
channels). This is probably not a utility that you will use often, but it can be
handy for testing purposes. The command syntax is:</para>

<!-- LW: FIX THIS -->

<para>
<userinput><command>artscat</command> [<replaceable>options</replaceable>] [<replaceable>filename</replaceable>]</userinput>
</para>

<para>If no file name is specified, it reads standard input. The following
options are supported: </para>

<variablelist>

<varlistentry>
<term><option>-r <replaceable>sampling rate</replaceable></option></term>
<listitem>
<para>Set the sampling rate to use.</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-b <replaceable>bits</replaceable></option></term>
<listitem>
<para>Set sample size to use (8 or 16).</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-c <replaceable>channels</replaceable></option></term>
<listitem>
<para>Set number of channels (1 or 2).</para>
</listitem>
</varlistentry>

<varlistentry>
<term><option>-h</option></term>
<listitem>
<para>Display command usage and exit.</para>
</listitem>
</varlistentry>

</variablelist>
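<para>Because <command>artscat</command> expects raw samples, any program that
can write <acronym>PCM</acronym> data to standard output can feed it. The tone
generator below is an illustrative sketch; the sample format it produces
(mono, 16-bit signed, little-endian) is an assumption you should match with
the command options:</para>

```python
import math
import struct
import sys

def sine_pcm(freq=440.0, rate=44100, seconds=0.5, amplitude=0.5):
    # Mono, 16-bit signed little-endian PCM: one packed short per frame.
    n_frames = int(rate * seconds)
    out = bytearray()
    for i in range(n_frames):
        value = amplitude * math.sin(2 * math.pi * freq * i / rate)
        out += struct.pack("<h", int(value * 32767))
    return bytes(out)

if __name__ == "__main__":
    sys.stdout.buffer.write(sine_pcm())
```

<para>Saved as <filename>tone.py</filename>, it could be piped to the sound
server with
<userinput><command>python</command> tone.py | <command>artscat</command> <option>-r 44100</option> <option>-b 16</option> <option>-c 1</option></userinput>.</para>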

</sect1>

<sect1 id="artscontrol">
<title><application>artscontrol</application></title>

<para>This is a graphical utility for performing a number of tasks related to
the sound server. The default window displays two volume level indicators and a
slider to control overall output volume. From the <guimenu>View</guimenu> menu
you can select other functions:</para>

<variablelist>

<varlistentry>
<term><guimenuitem>FFT Scope</guimenuitem></term>
<listitem>
<para>Opens a window which shows a real-time spectrum analyzer style
display.</para>
</listitem>
</varlistentry>

<varlistentry>
<term><guimenuitem>Audio Manager</guimenuitem></term>
<listitem>
<para>Displays active sound sources and allows you to connect them to any of the
available busses.</para>
</listitem>
</varlistentry>

<varlistentry>
<term><guimenuitem>aRts Status</guimenuitem></term>
<listitem>
<para>Shows whether the sound server is running and whether scheduling is
real-time. Indicates when the server will autosuspend and allows you to suspend
it immediately.</para>
</listitem>
</varlistentry>

<varlistentry>
<term><guimenuitem>Midi Manager</guimenuitem></term>
<listitem>
<para>Shows active Midi inputs and outputs and allows you to make connections
[TODO: Does this work yet? Need more detail]. </para>
</listitem>
</varlistentry>

<varlistentry>
<term><guimenuitem>FreeVerb</guimenuitem></term>
<listitem>
<para>Connects a FreeVerb reverb effect to the stack of &arts; output effects
and allows you to control the effect parameters graphically.</para>
</listitem>
</varlistentry>

<varlistentry>
<term><guimenuitem>Leds-like volume display</guimenuitem></term>
<listitem>
<para>Changes the volume indicators in the main window to use a colored
<acronym>LED</acronym> display format instead of progress bars.</para>
</listitem>
</varlistentry>

</variablelist>

</sect1>

<sect1 id="artsc-config">
<title><application>artsc-config</application></title>

<para>This is a utility to assist developers using the &arts; C
<acronym>API</acronym>. It outputs the appropriate compiler and linker options
needed when compiling and linking code with &arts;. It is intended to be used
within make files to assist in portability. The command accepts three
options:</para>

<variablelist>
<varlistentry>
<term><option>--cflags</option></term>
<listitem>
<para>Displays the compiler flags needed when compiling with the &arts; C
<acronym>API</acronym>.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>--libs</option></term>
<listitem>
<para>Displays the linker flags needed when linking with the &arts; C
<acronym>API</acronym>.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>--version</option></term>
<listitem>
<para>Displays the version of the <command>artsc-config</command>
command.</para>
</listitem>
</varlistentry>
</variablelist>

<para>Typical output from the command is shown below:</para>

<screen width="40"><prompt>%</prompt> <userinput><command>artsc-config</command> <option>--cflags</option></userinput>
<computeroutput>-I/usr/local/kde2/include/artsc</computeroutput>
<prompt>%</prompt> <userinput><command>artsc-config</command> <option>--libs</option></userinput>
<computeroutput>-L/usr/local/kde2/lib -ldl -lartsc -DPIC -fPIC -lpthread</computeroutput>
<prompt>%</prompt> <userinput><command>artsc-config</command> <option>--version</option></userinput>
<computeroutput>0.9.5</computeroutput>
</screen>

<para>You could use this utility in a make file using a rule
such as:</para>

<programlisting>
artsc: artsc.c
        gcc `artsc-config --cflags` -o artsc artsc.c `artsc-config --libs`
</programlisting>

</sect1>

<sect1 id="mcopidl">
<title><command>mcopidl</command></title>

<para>The <command>mcopidl</command> command is the <acronym>IDL</acronym> file
compiler for <acronym>MCOP</acronym>, the Multimedia Communication Protocol used
by &arts;. Interfaces in &arts; are defined in <acronym>IDL</acronym>, a
language independent Interface Definition Language. The
<command>mcopidl</command> utility accepts an <acronym>IDL</acronym> file as
input and generates C++ header and source files for a class implementing the
interface. The command accepts the following syntax:</para>

<!-- LW: FIX THIS -->

<para><userinput><command>mcopidl</command> [<replaceable>options</replaceable>] <replaceable>filename</replaceable></userinput>
</para>

<para>The valid options are:</para>


<variablelist>
<varlistentry>
<term><option>-I <replaceable parameter>directory</replaceable></option></term>
<listitem>
<para>Search in <replaceable>directory</replaceable> for includes.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>-e <replaceable>name</replaceable></option></term>
<listitem>
<para>Exclude a struct, interface, or enum type <replaceable>name</replaceable>
from code generation.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><option>-t</option></term>
<listitem>
<para>Also create .mcoptype/.mcopclass files containing type information for the <acronym>IDL</acronym> file.</para>
</listitem>
</varlistentry>
</variablelist>

<para>More information about <acronym>MCOP</acronym> and <acronym>IDL</acronym>
is covered in the section <link linkend="interfaces">Interfaces and
<acronym>IDL</acronym></link>.</para>

</sect1>

</chapter>

<chapter id="artsbuilder">
<title>&artsbuilder;</title>

<sect1 id="overview">
<title>Overview</title>

<para>First of all, when trying to run &artsbuilder;, you should also be
running the sound server (<application>artsd</application>). Usually, when you use &kde; 2.1, this
should already be the case. If not, you can configure the automatic sound
server startup under
<menuchoice><guimenu>KControl</guimenu><guilabel>Sound</guilabel><guilabel>Sound
Server</guilabel></menuchoice>.</para>

<para>When &arts; is running, it always runs small modules. &artsbuilder;
is a tool to create new structures of small connected modules. You place
modules inside the grid by choosing them from the
<guimenu>Modules</guimenu> menu and then clicking somewhere in the green-grey
plane.</para>

<para>Modules usually have ports (through which audio signals usually flow in
or out). To connect two ports, click on the first, which causes it to turn
orange, and then click on the second. You can only connect an input port (on
the upper side of a module) with an output port (on the lower side of a
module). If you want to assign a fixed value to a port (or disconnect it),
double-click on the port.</para>

</sect1>

<sect1 id="artsbuilder-tutorial">
<title>Tutorial</title>

<sect2 id="step-1">
<title>Step 1</title>

<para>Start &artsbuilder;.</para>

<para>You need a Synth&lowbar;AMAN&lowbar;PLAY-module to hear the output you
are creating. So create a Synth&lowbar;AMAN&lowbar;PLAY-module by selecting
<menuchoice>
<guimenu>Modules</guimenu>
<guisubmenu>Synthesis</guisubmenu>
<guisubmenu>SoundIO</guisubmenu>
<guisubmenu>Synth&lowbar;AMAN&lowbar;PLAY</guisubmenu>
</menuchoice>
and clicking on the empty module space. Put it below
the fifth line or so, because we'll add some stuff above.</para>

<para>The module will have a parameter "title" (the leftmost port) and
"autoRestoreID" (next to the leftmost port) for finding it. To fill these
out, double-click on these ports, select constant value, and type "tutorial" in
the edit box. Click OK to apply.</para>

<para>Choose <menuchoice><guimenu>File</guimenu>
<guilabel>Execute structure</guilabel></menuchoice>. You will hear absolutely
nothing: the play module does not have any input yet. Once you have listened
to the silence for a while, click OK and go to Step 2.</para>
</sect2>

<sect2 id="step-2">
<title>Step 2</title>

<para>Create a Synth&lowbar;WAVE&lowbar;SIN module (from
<menuchoice>
<guimenu>Modules</guimenu>
<guimenuitem>Synthesis</guimenuitem>
<guimenuitem>Waveforms</guimenuitem>
</menuchoice>)
and put it above the Synth&lowbar;AMAN&lowbar;PLAY module. (Leave one line of
space in between.) As you can see, it produces some output, but requires a "pos"
as input. First, let's send the output to the speakers. Click on the "out" port
of the Synth&lowbar;WAVE&lowbar;SIN and then on the "left" port of
Synth&lowbar;AMAN&lowbar;PLAY. Voilà, you have connected two modules.</para>

<para>Oscillators in &arts; do not require a frequency as input, but a position
in the wave. The position should be between 0 and 1, which for a
standard Synth&lowbar;WAVE&lowbar;SIN object maps to the range 0..2*pi. To generate
oscillating values from a frequency, a Synth&lowbar;FREQUENCY module is used.</para>
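<para>The division of labour between the two modules can be sketched in code
as follows. This is an illustration of the concept only, not the actual
&arts; implementation:</para>

```python
import math

def frequency_to_pos(freq, rate, n):
    # What Synth_FREQUENCY does, in spirit: accumulate phase and wrap it
    # into [0, 1) so an oscillator can look up its waveform by position.
    pos, stream = 0.0, []
    for _ in range(n):
        stream.append(pos)
        pos = (pos + freq / rate) % 1.0
    return stream

def wave_sin(positions):
    # What Synth_WAVE_SIN does: map a position in [0, 1) to sin(2*pi*pos).
    return [math.sin(2 * math.pi * p) for p in positions]

# A 440 Hz tone at a 44100 Hz sampling rate:
samples = wave_sin(frequency_to_pos(440.0, 44100.0, 1000))
```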

<para>Create a Synth&lowbar;FREQUENCY module (from
<menuchoice>
<guimenu>Modules</guimenu>
<guisubmenu>Synthesis</guisubmenu>
<guisubmenu>Oscillation &amp; Modulation</guisubmenu>
</menuchoice>)
and connect its "pos" output to the "pos" input of your
Synth&lowbar;WAVE&lowbar;SIN. Specify the frequency port of the FREQUENCY
generator as constant value 440.</para>

<para>Choose <menuchoice><guimenu>File</guimenu>
<guilabel>Execute structure</guilabel></menuchoice>. You will hear a sine wave
at 440 Hz on one of your speakers. Once you have listened to it for a while,
click OK and go to Step 3.</para>

</sect2>

<sect2 id="step-3">
<title>Step 3</title>

<para>It would be nicer if you could hear the sine wave on both speakers.
Connect the "right" port of Synth&lowbar;AMAN&lowbar;PLAY to the outvalue of the
Synth&lowbar;WAVE&lowbar;SIN as well.</para>

<para>Create a Synth&lowbar;SEQUENCE object (from
<menuchoice>
<guimenu>Modules</guimenu>
<guisubmenu>Synthesis</guisubmenu>
<guisubmenu>Midi &amp; Sequencing</guisubmenu>
</menuchoice>).
It should be at the top of the screen. If you need more room you can move
the other modules by selecting them (to select multiple modules use Shift),
and dragging them around.</para>

<para>Now connect the frequency output of Synth&lowbar;SEQUENCE to the
frequency input of the Synth&lowbar;FREQUENCY module. Then specify the
sequence speed as constant value 0.13 (the speed is the leftmost port).</para>

<para>Now go to the rightmost port (sequence) of Synth&lowbar;SEQUENCE and type in
the constant value A-3;C-4;E-4;C-4; this specifies a sequence. More on
this in the Module Reference.</para>

<para>Note: Synth&lowbar;SEQUENCE really <emphasis>needs</emphasis> a sequence
and the speed. Without them you may get core dumps.</para>

<para>Choose <menuchoice><guimenu>File</guimenu>
<guilabel>Execute structure</guilabel></menuchoice>. You will hear a nice
sequence playing. When you have enjoyed the feeling, click OK and go to
Step 4.</para>
</sect2>

<sect2 id="step-4">
<title>Step 4</title>

<para>Create a Synth&lowbar;PSCALE module (from
<menuchoice>
<guimenu>Modules</guimenu>
<guisubmenu>Synthesis</guisubmenu>
<guisubmenu>Envelopes</guisubmenu>
</menuchoice>). Disconnect the outvalue of the SIN wave by double-clicking it
and choosing "not connected". Connect:</para>

<para><itemizedlist>
<listitem>
<para>The SIN outvalue to the PSCALE invalue</para>
</listitem>
<listitem>
<para>The PSCALE outvalue to the AMAN_PLAY left</para>
</listitem>
<listitem>
<para>The PSCALE outvalue to the AMAN_PLAY right</para>
</listitem>
<listitem>
<para>The SEQUENCE pos to the PSCALE pos</para>
</listitem>
</itemizedlist>
</para>

<para>Finally, set the PSCALE top to some value, for instance 0.1.</para>

<para>How this works: the Synth&lowbar;SEQUENCE gives additional
information about the position of the note it is playing right now, where 0
means just started and 1 means finished. The Synth&lowbar;PSCALE module
scales the audio stream that is directed through it from volume 0 (silent)
to 1 (original loudness) and back to 0 (silent), according to the position. The
position where the peak should occur can be given as pos. 0.1 means that
after 10&percnt; of the note has been played, the volume has reached its
maximum and starts decaying afterwards.</para>
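<para>Expressed as a formula, the envelope described above could look like the
following sketch. The linear ramps are an assumption for illustration; the
exact curve shape used by Synth&lowbar;PSCALE is not specified here:</para>

```python
def pscale_volume(pos, top=0.1):
    # Volume factor for a note position in [0, 1]: silent at 0, full
    # volume at pos == top, silent again at 1 (linear ramps assumed).
    if pos <= top:
        return pos / top
    return (1.0 - pos) / (1.0 - top)

for pos in (0.0, 0.1, 0.55, 1.0):
    print(pos, round(pscale_volume(pos), 3))
```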

<para>Choose <menuchoice><guimenu>File</guimenu>
<guilabel>Execute structure</guilabel></menuchoice>. You will hear the sequence
with scaled notes. When you have enjoyed the feeling, click OK and go to
Step 5.</para>
</sect2>

<sect2 id="step-5-starting-to-beam-data-around">
<title>Step 5: Starting to beam data around ;)</title>

<para>Start another &artsbuilder;.</para>

<para>Put a Synth&lowbar;AMAN&lowbar;PLAY into it and configure it with a sensible
name. Put a Synth&lowbar;BUS&lowbar;DOWNLINK into it and:
<itemizedlist>
<listitem>
<para>set Synth&lowbar;BUS&lowbar;DOWNLINK bus to audio (that is just a name, call it
fred if you like)</para>
</listitem>
<listitem>
<para>connect Synth&lowbar;BUS&lowbar;DOWNLINK left to Synth&lowbar;AMAN&lowbar;PLAY left</para>
</listitem>
<listitem>
<para>connect Synth&lowbar;BUS&lowbar;DOWNLINK right to Synth&lowbar;AMAN&lowbar;PLAY right</para>
</listitem>
</itemizedlist>
</para>

<para>Start executing the structure. As expected, you hear nothing... not yet.</para>

<para>Go back to the structure with the Synth&lowbar;WAVE&lowbar;SIN stuff and
replace the Synth&lowbar;AMAN&lowbar;PLAY module by a
Synth&lowbar;BUS&lowbar;UPLINK, and configure the name to audio (or fred if
you like). To delete a module, select it and choose
<menuchoice><guimenu>Edit</guimenu><guilabel>Delete</guilabel></menuchoice>
from the menu (or press the Del key).</para>

<para>Hit <menuchoice><guimenu>File</guimenu>
<guilabel>Execute structure</guilabel></menuchoice>. You will hear the
sequence with scaled notes, transported over the bus.</para>

<para>If you want to find out why something like this can actually be useful,
click okay (in the artsbuilder that is executing the Synth&lowbar;SEQUENCE
stuff, you can leave the other one running) and go to Step 6.</para>
</sect2>

<sect2 id="step-6-beaming-for-advanced-users">
<title>Step 6: Beaming for advanced users</title>

<para>Choose <menuchoice><guimenu>File</guimenu><guilabel>Rename
structure</guilabel></menuchoice> from the menu of the artsbuilder which
contains the Synth&lowbar;SEQUENCE stuff, and call it tutorial. Click OK.</para>

<para>Choose <menuchoice><guimenu>File</guimenu><guilabel>Save</guilabel></menuchoice>.</para>

<para>Start yet another artsbuilder, choose <menuchoice><guimenu>File</guimenu>
<guilabel>Load</guilabel></menuchoice>, and load the tutorial again.</para>

<para>Now you can hit <menuchoice><guimenu>File</guimenu><guilabel>Execute
structure</guilabel></menuchoice> in both artsbuilders that have the
structure. You will hear the same thing twice. Depending on when you start
the second one, it will sound more or less pleasant.</para>

<para>Another good thing to do at this point is to start noatun and play some
mp3 file. Then start artscontrol and go to
<menuchoice><guimenu>View</guimenu><guimenuitem>View audio
manager</guimenuitem></menuchoice>. You will see both noatun and your
"tutorial" playback structure playing something. The nice thing you can do is
this: double-click on noatun. You will get a list of destinations, and you
can assign noatun to send its output over the audio bus your tutorial
playback structure provides.
</para>
</sect2>

<sect2 id="step-7-midi-synthesis">
<title>Step 7: Midi synthesis</title>

<para>Finally, you should now be able to turn your sine wave into a real
instrument. This only makes sense if you have something handy that can
send midi events to &arts;. I'll describe here how you can use an
external keyboard, but a midibus-aware sequencer like Brahms
will work as well.</para>

<para>First of all, clean up your desktop until you only have one
artsbuilder with the sine wave structure open (not executing).
Then go to <menuchoice><guimenu>Ports</guimenu>
<guisubmenu>Create IN audio signal</guisubmenu></menuchoice> three times, and
to <menuchoice><guimenu>Ports</guimenu>
<guisubmenu>Create OUT audio signal</guisubmenu></menuchoice> three times.
Place the ports somewhere.
</para>

<para>
Next, go to <menuchoice><guimenu>Ports</guimenu>
<guilabel>Change positions and names</guilabel></menuchoice> and call
the ports frequency, velocity, pressed, left, right, done.
</para>
<para>
Now you can delete the Synth&lowbar;SEQUENCE module and instead connect the
frequency input port of the structure to the Synth&lowbar;FREQUENCY
frequency port. But what to do about pos?</para>

<para>We don't have this, because no algorithm in the world can predict when
the user will release the note he just pressed on the midi keyboard.
Instead, we have a pressed parameter that indicates whether the user still
holds down the key (pressed = 1: key still held down, pressed = 0: key
released).</para>

<para>That means the Synth&lowbar;PSCALE object also must be replaced now. Plug in a Synth&lowbar;ENVELOPE&lowbar;ADSR instead (from
<menuchoice>
<guimenu>Modules</guimenu>
<guisubmenu>Synthesis</guisubmenu>
<guisubmenu>Envelopes</guisubmenu>
</menuchoice>).
Connect
<itemizedlist>
<listitem>
<para>The pressed structure input to the ADSR active</para>
</listitem>
<listitem>
<para>The SIN outvalue to the ADSR invalue</para>
</listitem>
<listitem>
<para>The ADSR outvalue to the left structure output</para>
</listitem>
<listitem>
<para>The ADSR outvalue to the right structure output</para>
</listitem>
</itemizedlist>

Set the parameters attack to 0.1, decay to 0.2, sustain to 0.7, release to 0.1.</para>

<para>Another thing we need to think of is that the instrument structure
should somehow know when it is done playing and then be cleaned up, because
otherwise it would never stop, even after the note has been released.
Fortunately, the ADSR envelope knows when there will be nothing to hear
anymore, since it scales the signal to zero at some point after the note has
been released.</para>

<para>This is indicated by setting the done output to 1. So connect this to
the done output of the structure. The structure will be removed as soon as
done goes up to 1.</para>

<para>Rename your structure to instrument_tutorial (from
<menuchoice><guimenu>File</guimenu> <guimenuitem>Rename
structure</guimenuitem></menuchoice>). Then save it using save as (the
default name offered should now be instrument_tutorial).</para>

<para>Start artscontrol, and go to <menuchoice><guimenu>View</guimenu><guimenuitem>Midi Manager</guimenuitem></menuchoice>, and choose <menuchoice><guimenu>Add</guimenu><guimenuitem>aRts Synthesis Midi Output</guimenuitem></menuchoice>.
Finally, you should be able to select your instrument (tutorial) here.</para>

<para>Open a terminal and type <literal remap="tt">midisend</literal>. You
will see that midisend and the instrument are now listed in the aRts midi
manager. After selecting both and hitting connect, we're finally done. Take
your keyboard and start playing (it should, of course, be connected to your
computer).
</para>
</sect2>

<sect2 id="suggestions">
<title>Suggestions</title>

<para>You should now be able to work with &arts;. Here are a few tips for
things you could now try to improve your structures:
<itemizedlist>
<listitem>
<para>Try using things other than a SIN wave. When you plug in a TRI wave,
you will most likely think the sound is not too nice. But try appending
a SHELVE&lowbar;CUTOFF filter right after the TRI wave to cut the frequencies
above a certain frequency (try something like 1000 Hz, or even better
two times the input frequency, or the input frequency plus 200 Hz, or
something like that).</para>
</listitem>
<listitem>
<para>Try using more than one oscillator. Synth&lowbar;XFADE can be used to cross
fade (mix) two signals, Synth&lowbar;ADD to add them.</para>
</listitem>
<listitem>
<para>Try setting the frequencies of the oscillators to slightly different
values; that gives nice oscillations.</para>
</listitem>
<listitem>
<para>Experiment with more than one envelope.</para>
</listitem>
<listitem>
<para>Try synthesizing instruments with different output left and right.</para>
</listitem>
<listitem>
<para>Try postprocessing the signal after it comes out of the bus downlink.
You could, for instance, mix a delayed version of the signal with the
original to get an echo effect.</para>
</listitem>
<listitem>
<para>Try using the velocity setting (it's the strength with which the note
was pressed; you could also say volume). The effect is especially nice
when this not only modifies the volume of the resulting signal, but also
the sound of the instrument (for instance the cutoff frequency).</para>
</listitem>
</itemizedlist>
</para>

<para>If you have created something great, please consider contributing it
to the &arts; web page, or for inclusion in the next release.</para>
</sect2>

</sect1>

<sect1 id="artsbuilder-examples">
<title>Examples</title>
<para>
Artsbuilder comes with several examples, which can be opened through
<menuchoice><guimenu>File</guimenu><guimenuitem>Open Example...</guimenuitem>
</menuchoice>. Some of them are in the examples directory; some of them,
which for some reason don't work with the current release, are left in the
todo directory.
</para>
<para>
The examples fall into several categories:
<itemizedlist>
<listitem><para>
Standalone examples illustrating how to use each of the built-in
arts modules (named example_*.arts). These typically send some
output to a sound card.
</para></listitem>
<listitem><para>
Instruments built from lower level arts modules (named
instrument_*.arts). These follow a standard convention for
input and output ports so they can be used by the midi manager
in artscontrol.
</para></listitem>
<listitem><para>
Templates for creating new modules (named template_*.arts).
</para></listitem>
<listitem><para>
Effects which can be used as reusable building blocks (named
effect_*.arts) [ all in todo ]
</para></listitem>
<listitem><para>
Mixer elements used for creating mixers, including graphical
controls (named mixer_element_*.arts). [ all in todo ]
</para></listitem>
<listitem><para>
Miscellaneous modules that don't fit into any of the above
categories.
</para></listitem>
</itemizedlist>
</para>

<para>
A detailed description of each example follows:
<variablelist>
<varlistentry>
<term>example_stereo_beep.arts</term>
<listitem><para>
Generates a 440Hz sine wave tone in the left channel and an 880Hz sine
wave tone in the right channel, and sends it to the sound card
output. This is referenced in the aRts documentation.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_sine.arts</term>
<listitem><para>
Generates a 440 Hz sine wave.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_pulse.arts</term>
<listitem><para>

Generates a 440 Hz pulse wave with a 20% duty cycle.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_softsaw.arts</term>
<listitem><para>

Generates a 440 Hz sawtooth wave.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_square.arts</term>
<listitem><para>

Generates a 440 Hz square wave.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_tri.arts</term>
<listitem><para>

Generates a 440 Hz triangle wave.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_noise.arts</term>
<listitem><para>

Generates white noise.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_dtmf1.arts</term>
<listitem><para>

Generates a dual tone by producing 697 and 1209 Hz sine waves, scaling
them by 0.5, and adding them together. This is the DTMF tone for the
digit "1" on a telephone keypad.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_atan_saturate.arts</term>
<listitem><para>

Runs a triangle wave through the atan saturate filter.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_autopanner.arts</term>
<listitem><para>
Uses an autopanner to pan a 400 Hz sine wave between the left and
right speakers at a 2 Hz rate.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_brickwall.arts</term>
<listitem><para>

Scales a sine wave by a factor of 5 and then runs it through a
brickwall limiter.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_bus.arts</term>
<listitem><para>

Downlinks from a bus called "Bus" and uplinks to the bus
"out_soundcard" with the left and right channels reversed.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_cdelay.arts</term>
<listitem><para>

Downlinks from a bus called "Delay", uplinks the right channel through
a 0.5 second cdelay, and the left channel unchanged. You can use
artscontrol to connect the effect to a sound player and observe the
results.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_delay.arts</term>
<listitem><para>

This is the same as example_cdelay but uses the delay effect.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_capture_wav.arts</term>
<listitem><para>

This uses the Synth_CAPTURE_WAV to save a 400 Hz sine wave as a wav
file. Run the module for a few seconds, and then examine the file
created in /tmp. You can play the file with a player such as kaiman.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_data.arts</term>
<listitem><para>

This uses the Data module to generate a constant stream of the value
"3" and sends it to a Debug module to periodically display it. It
also contains a Nil module, illustrating how it can be used to
do nothing at all.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_adsr.arts</term>
<listitem><para>

Shows how to create a simple instrument sound using the Envelope Adsr
module, repetitively triggered by a square wave.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_fm.arts</term>
<listitem><para>

This uses the FM Source module to generate a 440 Hz sine
wave which is frequency modulated at a 5 Hz rate.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_freeverb.arts</term>
<listitem><para>

This connects the Freeverb effect from a bus downlink to a bus
outlink. You can use artscontrol to connect the effect to a sound
player and observe the results.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_flanger.arts</term>
<listitem><para>

This implements a simple flanger effect (it doesn't appear
to work yet, though).
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_moog.arts</term>
<listitem><para>

This structure combines the two channels from a bus into
one, passes it through the Moog VCF filter, and sends
it out the out_soundcard bus.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_pitch_shift.arts</term>
<listitem><para>

This structure passes the left channel of sound card data through the
Pitch Shift effect. Adjust the speed parameter to vary the effect.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_rc.arts</term>
<listitem><para>

This structure passes a white noise generator through an RC filter and
out to the sound card. By viewing the FFT Scope display in artscontrol
you can see how this varies from an unfiltered noise waveform.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_sequence.arts</term>
<listitem><para>

This demonstrates the Sequence module by playing a sequence of notes.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_shelve_cutoff.arts</term>
<listitem><para>

This structure passes a white noise generator through a Shelve Cutoff
filter and out to the sound card. By viewing the FFT Scope display in
artscontrol you can see how this varies from an unfiltered noise
waveform.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_equalizer.arts</term>
<listitem><para>

This demonstrates the Std_Equalizer module. It boosts the low and high
frequencies by 6 dB.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_tremolo.arts</term>
<listitem><para>

This demonstrates the Tremolo effect. It modulates the left and right
channels using a 10 Hz tremolo.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_xfade.arts</term>
<listitem><para>

This example mixes 440 and 880 Hz sine waves using a cross fader.
Adjust the value of the cross fader's percentage input from -1 to 1 to
control the mixing of the two signals.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_pscale.arts</term>
<listitem><para>

This illustrates the Pscale module (I'm not sure if this is a
meaningful example).
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_play_wav.arts</term>
<listitem><para>

This illustrates the Play Wave module. You will need to
enter the full path to a .wav file as the filename
parameter.
</para></listitem>
</varlistentry>

<varlistentry>
<term>example_multi_add.arts</term>
<listitem><para>
This shows the Multi Add module which accepts any number of inputs. It
sums three Data modules which produce inputs of 1, 2, and 3, and
displays the result 6.
</para></listitem>
</varlistentry>
</variablelist>
</para>
</sect1>


</chapter>

<chapter id="arts-in-detail">
<title>&arts; in Detail</title>

<sect1 id="architecture">
<title>Architecture</title>

<mediaobject>
<imageobject>
<imagedata fileref="arts-structure.png" format="PNG">
</imageobject>
</mediaobject>

</sect1>

<sect1 id="modules-ports">
<title>Modules &amp; Ports</title>

<para>The idea behind &arts; is that synthesis can be done using small
modules, which each do only one thing, and which can then be recombined into
complex structures. The small modules normally have inputs, where they
receive signals or parameters, and outputs, where they produce
signals.</para>

<para>One module (Synth&lowbar;ADD), for instance, just takes the two
signals at its inputs and adds them together. The result is available as an
output signal. The places where modules provide their input/output signals
are called ports.</para>
</sect1>

<sect1 id="structures">
<title>Structures</title>

<para>A structure is a combination of connected modules. Some of them may
have parameters set directly on their input ports, others may be connected
to other modules, and others may not be connected at all.</para>

<para>What you can do with artsbuilder is describe structures: you describe
which modules you want to be connected to which other modules. When you are
done, you can save that structure description to a file, or tell &arts; to
create the structure you described (Execute).</para>

<para>Then you'll probably hear some sound, if you did everything the right
way.</para>
</sect1>

<!-- TODO

<sect1 id="streams">
<title>Streams</title>
<para>
</para>
</sect1>

<sect1 id="latency">
<title>Latency</title>
<para>
</para>
</sect1>

<sect1 id="dynamic-nstantiation">
<title>Dynamic Instantiation</title>
<para>
</para>
</sect1>
-->

<sect1 id="busses">
<title>Busses</title>

<para>Busses are dynamically built connections that transfer audio.
Basically, there are some uplinks and some downlinks. All signals from the
uplinks are added and sent to the downlinks.</para>

<para>Busses as currently implemented operate in stereo, so you can only
transfer stereo data over busses. If you want mono data, transfer it over
one channel only and set the other to zero.
What you need to do is create one or more Synth&lowbar;BUS&lowbar;UPLINK
objects and give them the name of a bus they should talk to (e.g. "audio" or
"drums"). Then simply throw the data in there.</para>

<para>Then, you'll need to create one or more Synth&lowbar;BUS&lowbar;DOWNLINK
objects, and tell them the bus name ("audio" or "drums" ... if it matches, the
data will get through), and the mixed data will come out again.</para>

<para>The uplinks and downlinks can reside in different structures. You can
even have different artsbuilders running, start an uplink in one,
and receive the data in the other with a downlink.</para>

<para>What is nice about busses is that they are fully dynamic. Clients can
plug in and out on the fly. There should be no clicking or noise as
this happens.</para>

<para>(Of course, you should not unplug a client while it is playing a
signal, since its level will probably not be zero when it is unplugged from
the bus, and then it will click.)</para>
</sect1>

<!-- TODO
<sect1 id="network-ransparency">
<title>Network Transparency</title>
<para>
</para>
</sect1>

<sect1 id="security">
<title>Security</title>
<para>
</para>
</sect1>


<sect1 id="effects">
<title>Effects and Effect Stacks</title>
<para>
</para>
</sect1>


<sect1 id="trader">
<title>Trader</title>
<para>
</para>
</sect1>

<sect1 id="midi-synthesis">
<title><acronym>MIDI</acronym> Synthesis</title>
<para>
</para>
</sect1>

<sect1 id="instruments">
<title>Instruments</title>
<para>
</para>
</sect1>

<sect1 id="session-management">
<title>Session Management</title>
<para>
</para>
</sect1>

<sect1 id="full-duplex">
<title>Full duplex Audio</title>
<para>
</para>
</sect1>

-->

<sect1 id="detail-gui-elements">
<title><acronym>GUI</acronym> Elements</title>
<para>
GUI elements are currently at an experimental stage. However, this section
describes what is supposed to happen here, so if you are a developer, you
will be able to understand how &arts; will deal with GUIs in the future.
There is also some code there already.
</para>

<para>GUI elements should be used to allow synthesis structures to interact
with the user. In the simplest case, the user should be able to modify some
parameters of a structure directly (such as a gain factor which is used
before the final play module).</para>

<para>In more complex settings, one could imagine the user modifying
parameters of groups of structures and/or structures which are not yet
running, such as modifying the ADSR envelope of the currently active midi
instrument. Another example would be setting the filename of a sample-based
instrument.</para>

<para>On the other hand, the user might like to monitor what the synthesizer
is doing. There could be oscilloscopes, spectrum analyzers, volume meters and
"experiments" that figure out the frequency transfer curve of some given
filter module.</para>

<para>Finally, the GUI elements should be able to control the whole
structure of what is running inside &arts; and how. The user should be able
to assign instruments to midi channels, start new effect processors, and
configure his main mixer console (which is itself built of &arts;
structures) to have one more channel and use another strategy for its
equalizers.</para>

<para>As you can see, the GUI elements should bring all the possibilities of
the virtual studio that &arts; simulates to the user. Of course, they should
also gracefully interact with midi inputs (for instance, sliders should move
if they receive midi input that changes just that parameter), and probably
even generate events themselves, so that user interaction can be recorded
via a sequencer.</para>

<para>Technically, the idea is to have an IDL base class for all widgets
(Arts::Widget), and to derive a number of commonly used widgets from there
(like Arts::Poti, Arts::Panel, Arts::Window, ...). Then, one can implement
these widgets using a toolkit, for instance Qt or Gtk. Finally, effects
should build their GUIs out of existing widgets. For instance, a freeverb
effect could build its GUI out of five Arts::Poti widgets and an
Arts::Window. So if there is a Qt implementation for these base widgets, the
effect will be able to display itself using Qt. If there is a Gtk
implementation, it will also work for Gtk (and look/work more or less the
same).
</para>

<para>Finally, as we're using IDL here, artsbuilder (or other tools) will be
able to plug GUIs together visually, or autogenerate GUIs given hints for
parameters, based only on the interfaces. It should be relatively
straightforward to write a "create GUI from description" class, which takes
a GUI description (containing the various parameters and widgets) and
creates a living GUI object out of it. Based on IDL and the &arts;/MCOP
component model, it should be as easy to extend the objects which can be
used for the GUI as it is to add a plugin implementing a new filter to
&arts;.
</para>
</sect1>

</chapter>

<chapter id="midi">
<title><acronym>MIDI</acronym></title>

<sect1 id="midi-overview">
<title>Overview</title>
<!-- what-to-say-here: aRts has three roles
  * moving midi events around between applications
  * abstracting the hardware
  * synthesizer -->
<para>
The midi support in aRts can do a number of things. First of all, it allows
<emphasis>communication</emphasis> between different pieces of software that
produce or consume MIDI events. If you for instance have a sequencer and a
sampler that are both aRts aware, aRts can send the MIDI events from the
sequencer to the sampler.
</para>

<para>
On the other hand, aRts can also help applications to <emphasis>interact with
the hardware</emphasis>. If a piece of software (for instance the sampler)
works together with aRts, it will be able to receive the MIDI events from an
external MIDI keyboard as well.
</para>

<para>
Finally, aRts makes a great <emphasis>modular synthesizer</emphasis>. It is
designed to do exactly this. So you can build instruments out of small modules
using artsbuilder, and then use these instruments to compose or play music.
Synthesis does not necessarily mean pure synthesis, there are modules you can
use to play samples. So aRts can be a sampler, synthesizer, and so on, and
being fully modular, it is very easy to extend, very easy to experiment with,
powerful and flexible.
</para>
</sect1>

<sect1 id="midi-manager">
<title>The Midi Manager</title>
<!-- what-to-say-here: 
  * how to use artscontrol - view midimanager
  * what does autorestore do? (not yet implemented - so not yet documented) -->
<para>
The central component in &arts; that keeps track of which applications are
connected and how midi events should be passed between them is the midi
manager. To see or influence what it does, start artscontrol and choose
<menuchoice><guilabel>View</guilabel><guilabel>View Midi Manager</guilabel>
</menuchoice> from the menu.
</para>
<para>
On the left side you will see <guilabel>Midi Inputs</guilabel>. All objects
that produce midi events will be listed there, such as an external midi port
which sends data from a connected midi keyboard, or a sequencer which plays
a song. On the right side you will see <guilabel>Midi
Outputs</guilabel>. All things that consume midi events will be listed
there, such as a simulated sampler (as software), or the external midi port
to which your hardware sampler outside the computer is connected. New
applications, such as sequencers, will register themselves, so the list will
change over time.
</para>
<para>
You can connect inputs and outputs if you mark the input on the left side and
the output on the right side, and choose <guilabel>Connect</guilabel> with
the button below. <guilabel>Disconnect</guilabel> works the same. You will
see what is connected as small lines between the inputs and outputs, in
the middle of the window. Note that you can connect one sender to more than
one receiver (and the other way round).
</para>
<para>
Programs (like the Brahms sequencer) will add themselves when they start
and be removed from the list when they terminate. You can also add
new entries via the <guilabel>Add</guilabel> menu:
<variablelist>

<varlistentry>
<term><guimenuitem>System Midi Port (OSS)</guimenuitem></term>
<listitem>
<para>This will create a new aRts object that talks to an external midi port.
Since external midi ports can both send and receive data, choosing this
option will add a midi input and a midi output. Under Linux, you should have
either an OSS (or OSS/Free, the driver that comes with your Linux kernel) or
an ALSA driver installed for your soundcard to make it work. It will ask for
the name of the device. Usually, this is /dev/midi or /dev/midi00. However,
if you have more than one midi device or a midi loopback driver installed,
there might be more choices. To see information about your midi ports, start
the KDE Control Center and choose
<menuchoice><guilabel>Information</guilabel>
<guilabel>Sound</guilabel></menuchoice>.
</para>
</listitem>
</varlistentry>

<varlistentry>
<term><guimenuitem>aRts Synthesis Midi Output</guimenuitem></term>
<listitem>
<para>This will add a new midi output with an aRts synthesis instrument. If
you choose the menu item, a dialog will pop up, and allow you to choose an
instrument. You can create new instruments using artsbuilder. All
.arts-files with a name that starts with "instrument_" will appear here.
</para>
</listitem>
</varlistentry>
</variablelist>
</para>
</sect1>

<sect1 id="brahms">
<title>Using &arts; &amp; Brahms</title>
<para>
Actually, getting started is quite easy. You need a KDE2.1-aware version of
Brahms, which can be found in the kmusic CVS module. There is also information
on how to get Brahms on the <ulink url="http://www.arts-project.org/">aRts
Homepage</ulink> in the Download section.
</para>
<para>
When you start it, it will show up in the midi manager. If you want to do
synthesis, simply add a synthesis midi instrument via <menuchoice><guilabel>
Add</guilabel><guilabel>aRts Synthesis Midi Output</guilabel></menuchoice>.
Choose an instrument (for instance organ2). Connect them using the
<guilabel>Connect</guilabel> button. Finally, you can start composing in
Brahms, and the output will be synthesized with aRts. It is usually a good
idea to have the artscontrol window open and check that the volume is not
too loud (the quality gets bad when the bars hit the upper limit). Now you
can start working on a new aRts demo song, and when you are done, you can
get it published on aRts-project.org ;-).
</para>
<!-- TODO: how to do more than one instrument in Brahms (hm, not implemented
     yet, not documented yet), how to use samples, mapping and so on. These
  	 things need to be implemented, too. -->
</sect1>

<sect1 id="midisend">
<title>midisend</title>
<para>
midisend is a small application that allows you to send midi events from
the shell. It registers as a client like all other applications. The
simplest way to use it is to do
<screen><prompt>&percnt;</prompt> <userinput><command>midisend</command> <option>-f</option> <option><replaceable>/dev/midi00</replaceable></option></userinput></screen>
which will achieve about the same as adding a system midi port in artscontrol
(not quite, because midisend only sends events). The difference is that it is
easy for instance to start midisend on different computers (and like that,
use network transparency). It is also possible to make midisend send data from
stdin, which you can use to pipe data from non-aRts-aware applications to aRts,
like this
<screen><prompt>&percnt;</prompt> <userinput><command><replaceable>applicationwhichproducesmidieventsonstdout</replaceable></command> | <command>midisend</command> <option>-f</option> <option><replaceable>-</replaceable></option></userinput></screen>
<!-- TODO: document all options -->
</para>
</sect1>

<sect1 id="midi-creating-instruments">
<title>Creating Instruments</title>

<para>The way &arts; does midi synthesis is this: you have a structure which
has some input ports, where it receives the frequency, the velocity (volume)
and a parameter which indicates whether the note is still pressed.
The structure should now synthesize exactly that note with that volume,
and react to the pressed parameter (where pressed = 1 means the user still
holds down the key and pressed = 0 means the user has released it).
</para>
<para>
When midi events arrive, &arts; will create new structures for the notes as
needed, give them the parameters, and clean them up once they are done.
</para>

<para>To create and use such a structure, you should do the following:
<itemizedlist>
<listitem>
<para>To get started, the most convenient way is to open template_Instrument.arts
in artsbuilder. This can be achieved by using <menuchoice>
<guimenu>File</guimenu><guimenuitem>Open Example...</guimenuitem></menuchoice>
and choosing template_Instrument in the file selector. This will give you an
empty structure with the required parameters, which you only need to "fill
out".</para>
</listitem>
<listitem>
<para>To process the pressed parameter, it is convenient to use
Synth&lowbar;ENVELOPE&lowbar;ADSR, or, in case of playing some drum wav, just
play it anyway, and ignore the pressed parameter.</para>
</listitem>
<listitem>
<para>The structure should indicate when it is no longer needed on the "done"
output. If done is 1, &arts; assumes that it can delete the structure.
Conveniently, the ADSR envelope provides a parameter when it is done, so you
just need to connect this to the done output of the structure.
</para>
</listitem>
<listitem>
<para>You should rename your structure to a name starting with instrument_,
like instrument_piano.arts, and save the file under the same name in your
$HOME/arts/structures directory (which is where artsbuilder normally wants
to save files).</para>
</listitem>
<listitem>
<para>Finally, once you saved it, you will be able to use it with artscontrol
in the midi manager <!-- todo link to midimanager -->.</para>
</listitem>
<listitem>
<para>Oh, and of course your structure should play the audio data it
generates to the left and right outputs of the structure, which will then be
played via the audio manager (you can see that in artscontrol), so that you
can finally hear it (or postprocess it with effects).</para>
</listitem>
</itemizedlist>
</para>
<para>
A good way to learn how to build instruments is to open an existing instrument
via <menuchoice><guimenu>File</guimenu><guimenuitem>Open Example...</guimenuitem>
</menuchoice> and see how it works.
</para>
</sect1>

<sect1 id="mapped-instruments">
<title>Mapped Instruments</title>

<para>Mapped instruments are instruments that behave differently depending
on the pitch, the program, the channel or the velocity. You could, for
instance, build a piano of 5 octaves, using one sample for each octave
(pitch-shifting it accordingly). That sounds a whole lot better than using
only one sample.
</para>

<para>You could also build a drum map that plays one specific drum sample per
key.</para>

<para>Finally, it is very useful to put a number of different sounds into
one mapped instrument on different programs. That way, you can use your
sequencer, external keyboard or other midi source to switch between the sounds
without having to tweak &arts; as you work. A good example of this is the
instrument "arts_all", which simply puts all the instruments that come with
&arts; together in one map. That way, you need to set up artscontrol only once
to use this "instrument", and you can then compose a whole song in a sequencer
without ever bothering about &arts;. Need another sound? Simply change the
program in the sequencer, and &arts; will give you another sound.</para>

<para>Creating such maps is pretty straightforward. You just need to create a
text file and write rules which look like this:
<programlisting>
ON <replaceable>[ conditions ...]</replaceable> DO structure=<replaceable>somestructure</replaceable>.arts
</programlisting></para>

<para>The conditions can be one or more of the following:</para>
<variablelist>
<!-- --------------------------------------------------------------------- -->
<varlistentry>
<term>pitch</term>

  <listitem><para>
  The pitch that is being played. You would use this if you want to split
  your instrument depending on the pitch. In our initial example, a piano
  which uses different samples for different octaves would use this as a
  condition. You can specify a single pitch, like
  pitch=<replaceable>62</replaceable>, or a range of pitches, like
  pitch=<replaceable>60</replaceable>-<replaceable>72</replaceable>.
  The possible pitches are between 0 and 127.
  </para></listitem>

</varlistentry>

<varlistentry>
<term>program</term>

  <listitem><para>
  The program that is active on the channel that the note is being sent on.
  Usually, sequencers let you choose the "instrument" via the program setting.
  Single programs or ranges are allowed, that is
  program=<replaceable>3</replaceable> or
  program=<replaceable>3</replaceable>-<replaceable>6</replaceable>.
  The possible programs are between 0 and 127.
  </para></listitem>

</varlistentry>

<varlistentry>
<term>channel</term>

  <listitem><para>
  The channel that the note is being sent on.
  Single channels or ranges are allowed, that is
  channel=<replaceable>0</replaceable> or
  channel=<replaceable>0</replaceable>-<replaceable>8</replaceable>.
  The possible channels are between 0 and 15.
  </para></listitem>

</varlistentry>

<varlistentry>
<term>velocity</term>

  <listitem><para>
  The velocity (volume) that the note has.
  Single velocities (who would use that?) or ranges are allowed, that is
  velocity=<replaceable>127</replaceable> or
  velocity=<replaceable>64</replaceable>-<replaceable>127</replaceable>.
  The possible velocities are between 0 and 127.
  </para></listitem>

</varlistentry>
</variablelist>

<para>
A complete example of a map would be (this is taken from the current
instrument_arts_all.arts-map):
</para>

<para>
<programlisting>
ON program=0 DO structure=instrument_tri.arts
ON program=1 DO structure=instrument_organ2.arts
ON program=2 DO structure=instrument_slide1.arts
ON program=3 DO structure=instrument_square.arts
ON program=4 DO structure=instrument_neworgan.arts
ON program=5 DO structure=instrument_nokind.arts
ON program=6 DO structure=instrument_full_square.arts
ON program=7 DO structure=instrument_simple_sin.arts
ON program=8 DO structure=instrument_simple_square.arts
ON program=9 DO structure=instrument_simple_tri.arts
ON program=10 DO structure=instrument_slide.arts
ON program=11 pitch=60 DO structure=instrument_deepdrum.arts
ON program=11 pitch=61 DO structure=instrument_chirpdrum.arts
</programlisting>
</para>

<para>As you see, the structure is chosen depending on the program. For
program 11, you see a "drum map" (with two entries), which would play
a "deepdrum" on C-5 (pitch=60), and a "chirpdrum" on C#-5 (pitch=61).</para>
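<para>To illustrate how such a map might be evaluated, here is a small sketch
in C++. It is purely illustrative - the names (MapRule, chooseStructure) and
the first-match strategy are assumptions for this example, not the actual
&arts; implementation:</para>

```cpp
#include <string>
#include <vector>

// Illustrative sketch of ON-condition matching for rules like
//   ON program=11 pitch=60 DO structure=instrument_deepdrum.arts
// All names here are hypothetical, not the real aRts code.
struct Range { int lo, hi; };                 // pitch=60-72 -> {60, 72}

struct MapRule {
    // Conditions default to "match anything" within the documented ranges.
    Range program{0, 127}, pitch{0, 127}, channel{0, 15}, velocity{0, 127};
    std::string structure;                    // e.g. "instrument_deepdrum.arts"
};

static bool inRange(const Range &r, int v) { return v >= r.lo && v <= r.hi; }

// Returns the structure of the first rule whose conditions all hold,
// or an empty string if no rule matches.
std::string chooseStructure(const std::vector<MapRule> &rules,
                            int program, int pitch, int channel, int velocity)
{
    for (const MapRule &r : rules)
        if (inRange(r.program, program) && inRange(r.pitch, pitch) &&
            inRange(r.channel, channel) && inRange(r.velocity, velocity))
            return r.structure;
    return "";
}
```

<para>With the two program=11 rules from the example above, a note with
program 11 and pitch 60 would select instrument_deepdrum.arts, and pitch 61
would select instrument_chirpdrum.arts.</para>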

<para>To make map files automatically appear in artscontrol as a choice for
the instrument, they have to be called
"instrument_<replaceable>something</replaceable>.arts-map" and reside either
in your home directory, under $HOME/arts/structures, or in the KDE directory
under $KDEDIR/share/apps/artsbuilder/examples. Structures that
are used by the map can either be given with an absolute path, or relative
to the directory the map file resides in.</para>

<para>Extending the arts_all map, or even making a complete general midi map
for &arts;, is a good idea for making &arts; easier to use out-of-the-box.
Please consider contributing interesting instruments you make, so that they
can be included in future versions of &arts;.
</para>
</sect1>


<!-- TODO: Maybe helpful
 * using an external keyboard
 * loopback midi device

<sect1 id="quick-start">
<title>Quick Start</title>
<para>
</para>
</sect1>

<sect1 id="internal-details">
<title>More Internal Details</title>
<para>
</para>
</sect1>

<sect1 id="other-considerations">
<title>Other Considerations</title>
<para>
</para>
</sect1>
-->

</chapter>

<!--
<chapter id="gui-elements">
<title><acronym>GUI</acronym> Elements</title>

<sect1 id="gui-introduction">
<title>Introduction</title>
<para>
</para>
</sect1>

<sect1 id="parents">
<title>Parents</title>
<para>

</para>
</sect1>

<sect1 id="mixers">
<title>Mixers</title>
<para>
</para>
</sect1>
</chapter>
-->

<chapter id="mcop">
<title>MCOP: Object Model and Streaming</title>

<sect1 id="mcop-overview">

<title>Overview</title>

<para><acronym>MCOP</acronym> is the standard &arts; uses for:</para>

<itemizedlist>
<listitem><para>communication between objects</para></listitem>
<listitem><para>network transparency</para></listitem>
<listitem><para>describing object interfaces</para></listitem>
<listitem><para>language independence</para></listitem>
</itemizedlist>

<para>One major aspect of <acronym>MCOP</acronym> is the <emphasis>interface
description language</emphasis>, <acronym>IDL</acronym>, in which many of the
&arts; interfaces and <acronym>API</acronym>s are defined in a language
independent way. To use <acronym>IDL</acronym> interfaces from C++, they are
compiled by the <acronym>IDL</acronym> compiler into C++ code. When you
implement an interface, you derive from the skeleton class the
<acronym>IDL</acronym> compiler has generated. When you use an interface, you
do so using a wrapper. This way, <acronym>MCOP</acronym> can use a protocol if
the object you are talking to is not local - you get network
transparency.</para>

<para>This chapter describes the basic features of the object model that
results from the use of <acronym>MCOP</acronym>, the protocol, how to use
<acronym>MCOP</acronym> in C++ (language binding), and so on.</para>

</sect1>

<sect1 id="interfaces">

<title>Interfaces and <acronym>IDL</acronym></title>

<para> </para>

</sect1>

<sect1 id="attribute-change-notify">
<title>Attribute change notifications</title>

<!-- TODO: This should be embedded better into the context - I mean: the context
 should be written ;-). -->

<para>Attribute change notifications are a way to know when an attribute has
changed. They are somewhat comparable with &Qt;'s or Gtk's signals and slots.
For instance, if you have a <acronym>GUI</acronym> element, a slider, which
configures a number between 0 and 100, you will usually have an object that
does something with that number (for instance, it might be controlling the
volume of some audio signal). So you would like the object which scales the
volume to be notified whenever the slider is moved: a connection between a
sender and a receiver.</para>

<para><acronym>MCOP</acronym> deals with that by being able to provide
notifications when attributes change. Whatever is declared as
<quote>attribute</quote> in the <acronym>IDL</acronym> can emit such change
notifications, and should do so whenever it is modified. Whatever is declared
as <quote>attribute</quote> can also receive such change notifications. So for
instance, if you had two <acronym>IDL</acronym> interfaces, like these:</para>

<programlisting>
 interface Slider {
         attribute long min,max;
         attribute long position;
 }; 
 interface VolumeControl : Arts::StereoEffect {
     attribute long volume; // 0..100
 };
</programlisting> 

<para>You can connect them using change notifications. It works using the normal
flowsystem connect operation. In this case,  the C++ code to connect two objects
would look like this:</para>

<programlisting> 
#include &lt;connect.h&gt; 
using namespace Arts;
[...]
connect(slider,"position_changed",volumeControl,"volume");
</programlisting>

<para>As you see, each attribute offers two different streams, one for sending
the change notifications, called
<function><replaceable>attributename</replaceable>_changed</function>, 

<!-- TODO - how do I markup code that is an example - you wouldn't write
 attributename in the source, but the name of some attribute 

 LW: I'm guessing
 here, because I know how to markup QT code, but your stuff is different.
 Hopefully this will give you inspiration, and we can work out later the fine
 tuning if I have it wrong.  The line above in the code sample, if it were qt
 stuff, I would mark up this way (linebreaks for clarity of markup only, yes I
 know it's incorrect!):

 <function>connect(<classname>slider</classname>,
 <function><replaceable>position</replaceable>_changed</function>, 
 <classname>volumeControl</classname>,
 <function>volume</function>);</function>

 You can use <function><replaceable>attributename</function> and even
 <function><replaceable>attributename</replaceable>_changed</function>.

 If I have the above totally wrong (which is entirely possible!) Some other
 elements you might find handy:

 <varname>, <type>, <returnvalue>, <constant>, <methodname>
  There's also a markup guide at http://madmax.atconnex.net/kde/ that might
  help, although unfortunately the programming section is still incomplete. -->

 and one for receiving change notifications, called
<function><replaceable>attributename</replaceable></function>.</para> 

<para>It is important to know that change notifications and asynchronous
streams are compatible. They are also network transparent. So you can connect
a change notification of a float attribute of a <acronym>GUI</acronym> widget
to an asynchronous stream of a synthesis module running on another computer.
This of course also implies that change notifications are <emphasis>not
synchronous</emphasis>; this means that after you have sent the change
notification, it may take some time until it is actually received.</para>

<sect2 id="sending-change-notifications">

<title>Sending change notifications</title>

<para>When implementing objects that have attributes, you need to send change
notifications wherever an attribute changes. The code for doing this looks
like this:</para>

<programlisting>
 void KPoti_impl::value(float newValue)
 {
     if(newValue != _value)
     {
         _value = newValue;
         value_changed(newValue); // &lt;- send change notification
     }
 }
</programlisting>
 
<para>It is strongly recommended to use code like this for all objects you
implement, so that change notifications can be used by other people. You
should, however, avoid sending notifications too often; if you are doing
signal processing, it is probably best to keep track of when you sent your
last notification, so that you don't send one with every sample you
process.</para>
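<para>The bookkeeping suggested above can be sketched as a small helper. This
is an illustrative sketch only (RateLimitedNotifier is not part of the &arts;
or <acronym>MCOP</acronym> <acronym>API</acronym>); it allows at most one
notification per given number of processed samples:</para>

```cpp
// Hypothetical helper for rate-limiting change notifications during
// signal processing; not part of the actual aRts/MCOP API.
class RateLimitedNotifier {
public:
    explicit RateLimitedNotifier(long intervalSamples)
        : interval(intervalSamples) {}

    // Call once per processed sample. Returns true when a notification
    // (e.g. value_changed(newValue)) should actually be emitted now.
    bool shouldNotify(float newValue)
    {
        ++samplesSinceLast;
        if (newValue != lastSent && samplesSinceLast >= interval) {
            lastSent = newValue;
            samplesSinceLast = 0;
            return true;
        }
        return false;
    }
private:
    long interval;
    long samplesSinceLast = 0;
    float lastSent = 0.0f;
};
```

<para>For example, with an interval of 100 samples, an attribute that changes
on every sample causes only 441 notifications per second at a sampling rate of
44100 Hz, instead of 44100.</para>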

</sect2>

<sect2 id="change-notifications-apps">
 
<title>Applications for change notifications</title>

<para>It will be especially useful to use change notifications in conjunction
with scopes (things that visualize audio data, for instance),
<acronym>GUI</acronym> elements, control widgets, and monitoring. Code using
this is in kdelibs/arts/tests, and
in the experimental artsgui implementation, which you can find under <filename
class="directory">kdemultimedia/arts/gui</filename>.</para>

<!-- TODO: can I markup links into the source code - if yes, how? -->

<!-- LW: Linking into the source is problematic - we can't assume people are
reading this on a machine with the sources available, or that they aren't
reading it from a website. We're working on it! -->

</sect2>

</sect1>
</chapter>

<chapter id="arts-apis">
<title>&arts; Application Programming Interfaces</title>

<sect1 id="api-overview">
<title>Overview</title>
<para>
aRts is not only a piece of software; it also provides a variety of APIs
for a variety of purposes. In this section, I will try to describe the "big
picture": a brief glance at what those APIs are supposed to do, and how they
interact.
</para>

<para>
There is one important distinction to make: most of the APIs are <emphasis>
language and location independent</emphasis> because they are specified in
<emphasis>mcopidl</emphasis>.
That is, you can basically use the services they offer from any language,
implement them in any language, and you will not have to care whether you
are talking to local or remote objects. Here is a list of these first:
</para>


<variablelist>
<!-- --------------------------------------------------------------------- -->
<varlistentry>
<term>core.idl</term>

  <listitem><para>
  Basic definitions that form the core of the MCOP functionality, such as
  the protocol itself, definitions of the object, the trader, the flow
  system and so on.
  </para></listitem>

</varlistentry>

<varlistentry>
<term>artsflow.idl</term>

  <listitem><para>
  This contains the flow system you will use for connecting audio streams, the
  definition of <emphasis>Arts::SynthModule</emphasis>, which is the base for
  any interface that has streams, and finally a few useful audio objects.

</varlistentry>

<varlistentry>
<term>kmedia2.idl</term>

  <listitem><para>
  Here, an object that can play media, <emphasis>Arts::PlayObject</emphasis>,
  gets defined. Media players such as the KDE media player noatun will be able
  to play any media for which a PlayObject can be found. So it makes sense to
  implement PlayObjects for various formats (such as mp3, mpg video, midi, wav,
  ...) on that base, and there are already a lot.
  </para></listitem>

</varlistentry>

<varlistentry>
<term>soundserver.idl</term>

  <listitem><para>
  Here, an interface for the system-wide sound server artsd is defined. The
  interface is called <emphasis>Arts::SoundServer</emphasis>, and it implements
  functionality like accepting streams from the network, playing samples,
  creating other custom aRts objects, and so on. Network transparency is
  implied due to the use of MCOP (as for everything else here).
  </para></listitem>

</varlistentry>

<varlistentry>
<term>artsbuilder.idl</term>

  <listitem><para>
  This module defines basic flow graph functionality, that is, combining
  simpler objects into more complex ones by defining a graph of them. It
  defines the basic interfaces <emphasis>Arts::StructureDesc</emphasis>,
  <emphasis>Arts::ModuleDesc</emphasis> and <emphasis>Arts::PortDesc</emphasis>,
  which contain descriptions of a structure, a module, and a port. There is
  also a way to get a "living network of objects" out of these connection and
  value descriptions, using a factory.
  </para></listitem>

</varlistentry>

<varlistentry>
<term>artsmidi.idl</term>

  <listitem><para>
  This module defines basic midi functionality: objects that produce midi
  events, what a midi event is, an <emphasis>Arts::MidiManager</emphasis>
  to connect the producers and consumers of midi events, and so on. As always,
  network transparency is implied.
  </para></listitem>

</varlistentry>

<varlistentry>
<term>artsmodules.idl</term>

  <listitem><para>
  Here are various additional filters, oscillators, effects, delays and
  so on - everything required for really useful signal processing, and for
  building complex instruments and effects out of these basic building blocks.
  </para></listitem>

</varlistentry>

<varlistentry>
<term>artsgui.idl</term>

  <listitem><para>
  This deals with visual objects. It defines the basic type <emphasis>
  Arts::Widget</emphasis> from which all GUI modules derive. This will produce
  toolkit independence, visual GUI editing, and serializable GUIs.
  Also, as the GUI elements have normal attributes, their values can be
  connected straightforwardly to signal processing modules (for example, the
  value of a slider to the cutoff of a filter). As always: network transparent.
  </para></listitem>

</varlistentry>

</variablelist>

<para>
Where possible, aRts itself is implemented using IDL. On the other hand, there
are some <emphasis>language specific</emphasis> APIs, using either plain C++ or
plain C. It is usually wise to use IDL interfaces where possible, and the
other APIs where necessary. Here is a list of language specific APIs:
</para>

<variablelist>

<varlistentry>
<term>KNotify, KAudioPlayer (included in libkdecore)</term>

  <listitem><para>
  These are convenience KDE APIs for the simple and common case where you just
  want to play a sample. The APIs are plain C++, Qt/KDE optimized, and as easy
  as it can get.
  </para></listitem>

</varlistentry>

<varlistentry>
<term>libartsc</term>

  <listitem><para>
  Plain C interface for the sound server. Very useful for porting legacy
  applications.
  </para></listitem>

</varlistentry>

<varlistentry>
<term>libmcop</term>

  <listitem><para>
  This is where all the MCOP magic happens. The library contains the basic
  things you need for writing a simple MCOP application - the dispatcher,
  timers, I/O management - but also the internals that make the MCOP protocol
  itself work.
  </para></listitem>

</varlistentry>

<varlistentry>
<term>libartsflow</term>

  <listitem><para>
  Besides the implementation of artsflow.idl, this contains some useful
  utilities like sampling rate conversion.
  </para></listitem>

</varlistentry>

<varlistentry>
<term>libqiomanager</term>

  <listitem><para>
  Integration of MCOP into the Qt event loop, for when you write Qt
  applications using MCOP.
  </para></listitem>

</varlistentry>

</variablelist>


</sect1>
<sect1 id="knotify">
<title>knotify</title>
<para>
</para>
</sect1>

<sect1 id="kaudioplayer">
<title>kaudioplayer</title>
<para>
</para>
</sect1>

<sect1 id="libkmid">
<title>libkmid</title>
<para>
</para>
</sect1>

<sect1 id="kmedia2">
<title>kmedia2</title>
<para>
</para>
</sect1>

<sect1 id="soundserver">
<title>sound server</title>
<para>
</para>
</sect1>

<sect1 id="artsflow">
<title>artsflow</title>
<para>
</para>
</sect1>

<sect1 id="capi">
<title>C <acronym>API</acronym></title>

<sect2 id="capiintro">
<title>Introduction</title>

<para>The &arts; C <acronym>API</acronym> was designed to make it easy to
write and port plain C applications to the &arts; sound server. It provides
streaming functionality (sending sample streams to
<application>artsd</application>), either blocking or non-blocking. For most
applications you simply remove the few system calls that deal with your audio
device and replace them with the appropriate &arts; calls.</para>

<para>I did two ports as a proof of concept: <application>mpg123</application>
and <application>quake</application>. You can get the patches from <ulink
url="http://space.twc.de/~stefan/kde/download/artsc-patches.tar.gz">here</ulink>.
Feel free to submit your own patches to the maintainer of &arts; or of
multimedia software packages so that they can integrate &arts; support into
their code.</para>

</sect2>

<sect2 id="capiwalkthru">
<title>Quick Walkthrough</title>

<para>Sending audio to the sound server with the <acronym>API</acronym> is very
simple:</para>

<procedure>
<step><para>include the header file using <userinput>#include
&lt;artsc.h&gt;</userinput></para></step>
<step><para>initialize the <acronym>API</acronym> with
<function>arts_init()</function></para></step>
<step><para>create a stream with
<function>arts_play_stream()</function></para></step>
<step><para>configure specific parameters with
<function>arts_stream_set()</function></para></step>
<step><para>write sampling data to the stream with
<function>arts_write()</function></para></step>
<step><para>close the stream with
<function>arts_close_stream()</function></para></step>
<step><para>free the <acronym>API</acronym> with
<function>arts_free()</function></para></step>
</procedure>

<para>Here is a small example program that illustrates this:</para>

<programlisting>
#include &lt;stdio.h&gt;
#include &lt;artsc.h&gt;

int main()
{
    arts_stream_t stream;
    char buffer[8192];
    int bytes;
    int errorcode;

    errorcode = arts_init();
    if (errorcode &lt; 0)
    {
        fprintf(stderr, "arts_init error: %s\n", arts_error_text(errorcode));
        return 1;
    }

    stream = arts_play_stream(44100, 16, 2, "artsctest");

    while((bytes = fread(buffer, 1, 8192, stdin)) &gt; 0)
    {
        errorcode = arts_write(stream, buffer, bytes);
        if(errorcode &lt; 0)
        {
            fprintf(stderr, "arts_write error: %s\n", arts_error_text(errorcode));
            return 1;
        }
    }

    arts_close_stream(stream);
    arts_free();

    return 0;
}
</programlisting>

</sect2>

<sect2 id="capiartscconfig">
<title>Compiling and Linking: <application>artsc-config</application></title>

<para>To easily compile and link programs using the &arts; C
<acronym>API</acronym>, the <application>artsc-config</application> utility is
provided which knows which libraries you need to link and where the includes
are. It is called using</para>

<screen>
<userinput><command>artsc-config</command> <option>--libs</option></userinput>
</screen>

<para>to find out the libraries and </para>

<screen>
<userinput><command>artsc-config</command> <option>--cflags</option></userinput>
</screen>

<para>to find out additional C compiler flags. The example above could have been
compiled using the command line:</para>

<screen>
<userinput><command>cc</command> <option>-o artsctest</option> <option>artsctest.c</option> <option>`artsc-config --cflags`</option> <option>`artsc-config --libs`</option></userinput>
</screen>

</sect2>

<sect2 id="c-api-reference">
<title>Library Reference</title>

<para>
[TODO: generate the documentation for artsc.h using kdoc]
</para>

</sect2>

</sect1>

</chapter>

<chapter id="arts-modules">
<title>&arts; modules</title>

<sect1 id="modules-introduction">
<title>Introduction</title>
<para>
</para>
</sect1>

<sect1 id="synth-modules-reference">
<title>Synthesis Modules Reference</title>
<para>
</para>

<sect2 id="mcat-synth-arithmetic-mixing">
<title>Arithmetic + Mixing</title>
<para>
</para>

<sect3 id="mref-synth-add-sect">
<title>Synth&lowbar;ADD</title>
<anchor id="mref-synth-add">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_ADD.png" format="png"></imageobject></mediaobject></para>

<para>This adds two signals.</para>
</sect3>

<sect3 id="mref-synth-mul-sect">
<title>Synth&lowbar;MUL</title>
<anchor id="mref-synth-mul">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_MUL.png" format="png"></imageobject></mediaobject></para>

<para>This multiplies a signal by a factor. You can use this to scale signals
down (0 &lt; factor &lt; 1) or up (factor &gt; 1) or invert signals
(factor &lt; 0). Note that the factor may be a signal and doesn't have to
be constant (e.g. an envelope or a real signal).</para>
</sect3>

<sect3 id="mref-synth-multi-add-sect">
<title>Synth&lowbar;MULTI&lowbar;ADD</title>
<anchor id="mref-synth-multi-add">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_MULTI_ADD.png" format="png"></imageobject></mediaobject></para>

<para>This adds an arbitrary number of signals. If you need to sum up the
waveforms produced by four different oscillators, you can for instance connect
all their outputs to one Synth&lowbar;MULTI&lowbar;ADD module. This is more
efficient than using three Synth&lowbar;ADD modules.</para>
</sect3>

<sect3 id="mref-synth-xfade-sect">
<title>Synth&lowbar;XFADE</title>
<anchor id="mref-synth-xfade">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_XFADE.png" format="png"></imageobject></mediaobject></para>

<para>This crossfades two signals. If the percentage input is -1, only the left
signal is heard; if it is 1, only the right signal is heard. When it is 0,
both signals are heard with the same volume.</para>

<para>This allows you to ensure that your signal stays in a well defined range.
If you had two signals that were between -1 and 1 before crossfading, they
will be in the same range after crossfading.</para>
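<para>The behaviour described above is consistent with the following simple
crossfade law. Note that this formula is an assumption for illustration, not
necessarily the exact code used by Synth&lowbar;XFADE:</para>

```cpp
// Hedged sketch of a crossfade law matching the described behaviour:
// percentage = -1 -> only left, +1 -> only right, 0 -> equal mix.
// If left, right and percentage all lie within [-1, 1], the output
// also stays within [-1, 1].
float xfade(float left, float right, float percentage)
{
    return left  * (1.0f - percentage) / 2.0f
         + right * (1.0f + percentage) / 2.0f;
}
```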
</sect3>

<sect3 id="mref-synth-autopanner-sect">
<title>Synth&lowbar;AUTOPANNER</title>
<anchor id="mref-synth-autopanner">

<para>The opposite of a crossfader. This takes a mono signal and splits it into
a stereo signal: it is used to automatically pan the input signal between
the left and the right output. This makes mixes more lively. A standard
application would be a guitar or lead sound.</para>

<para>Connect an LFO - a sine or sawtooth wave, for example - to inlfo,
and select a frequency between 0.1 and 5 Hz for a traditional effect, or even
higher for special FX.</para>
</sect3>


</sect2>

<sect2 id="mcat-synth-busses">
<title>Busses</title>
<para>
</para>

<sect3 id="mref-synth-bus-uplink-sect">
<title>Synth&lowbar;BUS&lowbar;UPLINK</title>
<anchor id="mref-synth-bus-uplink">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_BUS_UPLINK.png" format="png"></imageobject></mediaobject></para>

<para>An uplink to a bus. Feed signals to left and right, and give the name of
the bus that the data should go to on the "bus" port. The combined signal from
all uplinks with this name will appear on every downlink on that
"bus".</para>
</sect3>

<sect3 id="mref-synth-bus-downlink-sect">
<title>Synth&lowbar;BUS&lowbar;DOWNLINK</title>
<anchor id="mref-synth-bus-downlink">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_BUS_DOWNLINK.png" format="png"></imageobject></mediaobject></para>

<para>Gets (the sum of) all data that is put to a certain bus (with the name
you specify at the "bus" port).</para>
</sect3>

</sect2>

<!-- TODO AFTER KDE2.1: move freeverb into delays, and rename category to
     Delays & reverbs -->

<sect2 id="mcat-synth-delays">
<title>Delays</title>
<para>
</para>

<sect3 id="mref-synth-delay-sect">
<title>Synth&lowbar;DELAY</title>
<anchor id="mref-synth-delay">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_DELAY.png" format="png"></imageobject></mediaobject></para>

<para>This delays the input signal for an amount of time. The time specification
must be between 0 and 1 for a delay between 0 seconds and 1 second.</para>

<para>This kind of delay <emphasis>may not be used</emphasis> in feedback
structures, because it is a variable delay. You can modify its length while it
is running, and even set it down to zero. But since in a feedback structure
its own output is needed to calculate the next samples, a delay whose value
could drop to zero during synthesis could lead to a stall situation.</para>

<para>Use CDELAYs in that setup, perhaps combining a small constant delay (of
0.001 seconds) with a flexible delay.</para>

<para>You can also combine a CDELAY and a DELAY to achieve a variable length delay
with a minimum value in a feedback loop. Just make sure that you have a
CDELAY involved.</para>
</sect3>

<sect3 id="mref-synth-cdelay-sect">
<title>Synth&lowbar;CDELAY</title>
<anchor id="mref-synth-cdelay">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_CDELAY.png" format="png"></imageobject></mediaobject></para>

<para>This delays the input signal for an amount of time. The time specification
must be between 0 and 1 for a delay between 0 seconds and 1 second. The
delay is constant during the calculation, which means it can't be modified.</para>

<para>This saves computing time as no interpolation is done, and is useful for
recursive structures. See description above (Synth&lowbar;DELAY).</para>
</sect3>

</sect2>

<sect2 id="mcat-synth-envelopes">
<title>Envelopes</title>
<para>
</para>

<sect3 id="mref-synth-envelope-adsr-sect">
<title>Synth&lowbar;ENVELOPE&lowbar;ADSR</title>
<anchor id="mref-synth-envelope-adsr">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_ENVELOPE_ADSR.png" format="png"></imageobject></mediaobject></para>

<para>This is a classic ADSR envelope which means you specify:
<variablelist>
<varlistentry>
<term>active</term>
<listitem>
<para>whether the note is being pressed right now by the user</para>
</listitem>
</varlistentry>
<varlistentry>
<term>invalue</term>
<listitem>
<para>the input signal</para>
</listitem>
</varlistentry>
<varlistentry>
<term>attack</term>
<listitem>
<para>the time that should pass between the user pressing the note and the
signal reaching its maximum amplitude (in seconds)</para>
</listitem>
</varlistentry>
<varlistentry>
<term>decay</term>
<listitem>
<para>the time that should pass between the signal reaching its maximum
amplitude and the signal going back to some constant level (in seconds)</para>
</listitem>
</varlistentry>
<varlistentry>
<term>sustain</term>
<listitem>
<para>the constant level the signal is held at afterwards, until the user releases
the note</para>
</listitem>
</varlistentry>
<varlistentry>
<term>release</term>
<listitem>
<para>the time that should pass after the user has released the note until the
signal is scaled down to zero (in seconds)</para>
</listitem>
</varlistentry>
</variablelist>
</para>

<para>You'll get the scaled signal at outvalue. If the ADSR envelope is
finished, it will set done to 1. You can use this to provide the "done" output
of an instrument (which will make the instrument structure be deleted by the
midi router object once the release phase is over).</para>
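<para>The segment logic described above can be sketched as a gain function of
time. This is a hedged illustration with linear segments; the actual
Synth&lowbar;ENVELOPE&lowbar;ADSR implementation may shape the segments
differently:</para>

```cpp
// Illustrative ADSR gain computation (linear segments), not the real
// aRts code. t is seconds since the note was pressed; tRelease is
// seconds since the note was released, or a negative value while the
// note is still held.
float adsrGain(float attack, float decay, float sustain, float release,
               float t, float tRelease)
{
    float level;
    if (t < attack)                       // attack: ramp 0 -> 1
        level = t / attack;
    else if (t < attack + decay)          // decay: ramp 1 -> sustain
        level = 1.0f - (1.0f - sustain) * (t - attack) / decay;
    else                                  // sustain: hold constant level
        level = sustain;

    if (tRelease >= 0.0f)                 // release: ramp down to 0
        level *= (tRelease < release)     // (when 0 is reached, "done"
               ? (1.0f - tRelease / release) // would be set to 1)
               : 0.0f;
    return level;                         // multiply invalue by this gain
}
```

<para>An instrument would multiply invalue by this gain to obtain outvalue,
and report done once the release ramp has reached zero.</para>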
</sect3>

<sect3 id="mref-synth-pscale-sect">
<title>Synth&lowbar;PSCALE</title>
<anchor id="mref-synth-pscale">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_PSCALE.png" format="png"></imageobject></mediaobject></para>

<para>The Synth&lowbar;PSCALE module will scale the audio stream that is
directed through it from a volume of 0 (silent) up to 1 (original loudness)
and back to 0 (silent), according to the position (get the position from
Synth&lowbar;SEQUENCE). The position where the peak should occur can be given
as pos.</para>

<para>Example:
Setting pos to 0.1 means that after 10&percnt; of the note has been played, the
volume has reached its maximum, and starts decaying afterwards.</para>
</sect3>


</sect2>

<sect2 id="mcat-synth-effects">
<title>Effects</title>
<para>
</para>

<sect3 id="mref-synth-freeverb-sect">
<title>Synth&lowbar;FREEVERB</title>
<anchor id="mref-synth-freeverb">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_FREEVERB.png" format="png"></imageobject></mediaobject></para>

<para>This is a reverb effect. In the current implementation, it is designed
to have a stereo signal passed through it, and it will <emphasis>add</emphasis>
its reverb effect to the signal. (Note: this means that it can be used inside a
StereoEffectStack as well). The input signal should be connected to (inleft,
inright); the output signal will be (outleft, outright).
</para>

<para>
The parameters which you can configure are:
<variablelist>
<varlistentry>
<term>roomsize</term>
<listitem>
<para>the size of the room which the reverb simulates (range: 0..1, where 1 is
the largest possible room)</para>
</listitem>
</varlistentry>
<varlistentry>
<term>damp</term>
<listitem>
<para>this specifies a filter which will make the simulated room absorb high
frequencies (range 0..1, where 1 means absorb high frequencies quite
aggressively)</para>
</listitem>
</varlistentry>
<varlistentry>
<term>wet</term>
<listitem>
<para>the amount of reverb signal (that is, the amount of the signal that
should be modified by the filters, resulting in a "wet", that is, "reverb"
sound; range: 0..1)</para>
</listitem>
</varlistentry>
<varlistentry>
<term>dry</term>
<listitem>
<para>the amount of pure signal passed through, resulting in an echo (or
combined delay) rather than reverb effect (range: 0..1)</para>
<!-- TODO: do some measurements to show that this documentation -is- correct,
I am not sure if it is echo, or really pure (non-delayed), or multiple delay
or whatever -->
</listitem>
</varlistentry>
<varlistentry>
<term>width</term>
<listitem>
<para>the amount of stereo-magic the reverb algorithm adds to the reverb effect,
making the reverb sound wider in the stereo panorama (range: 0..1)</para>
</listitem>
</varlistentry>
<varlistentry>
<term>mode</term>
<listitem>
<para>[ TODO: I think if mode is 1, the reverb holds the current image of the
sound, whereas 0 is normal operation ]</para>
</listitem>
</varlistentry>
</variablelist>
</para>
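The roles of the wet and dry parameters amount to a mix like the following sketch, where reverbTail stands in for the output of the actual Freeverb comb/allpass network, which is not reproduced here:

```cpp
#include <cassert>
#include <cmath>

// Illustrative wet/dry stage: the dry path passes the input through,
// the wet path carries the filtered reverb tail, and both are summed.
double mixWetDry(double input, double reverbTail, double wet, double dry)
{
    return dry * input + wet * reverbTail;  // dry path plus "wet" reverb path
}
```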
</sect3>

<sect3 id="mref-synth-tremolo-sect">
<title>Synth&lowbar;TREMOLO</title>
<anchor id="mref-synth-tremolo">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_TREMOLO.png" format="png"></imageobject></mediaobject></para>

<para>
The tremolo module modulates the amplitude according to an LFO wave.
Traditionally you would use a sine wave, but why limit yourself?
What you get is a very intense effect that cuts through most
arrangements because of its high dynamic range. The tremolo effect
is still one of guitarists' favourite effects, although it's not as
popular as it was in the 1960s.
</para>

<para>
[ TODO: currently this is implemented as invalue + abs(inlfo) - maybe it would
make more sense to implement it as invalue * (1+inlfo*depth), where depth
would be a parameter between 0..1 - decide this after KDE2.1 ; if you have
a comment, send a mail to the aRts list ;). ]
</para>
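For comparison, the current formula and the proposed alternative from the note above look like this (both are sketches restating the note, not the actual module source):

```cpp
#include <cassert>
#include <cmath>

// The two formulas discussed above, side by side. The first mirrors the
// described current behaviour; the second is the proposed alternative
// with a depth parameter between 0 and 1.
double tremoloCurrent(double invalue, double inlfo)
{
    return invalue + std::fabs(inlfo);        // invalue + abs(inlfo)
}

double tremoloProposed(double invalue, double inlfo, double depth)
{
    return invalue * (1.0 + inlfo * depth);   // invalue * (1 + inlfo*depth)
}
```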

</sect3>

<sect3 id="mref-synth-fx-cflanger-sect">
<title>Synth&lowbar;FX&lowbar;CFLANGER</title>
<anchor id="mref-synth-fx-cflanger">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_FX_CFLANGER.png" format="png"></imageobject></mediaobject></para>

<para>A flanger is a time-varying delay effect. To make development of complex
flanger effects simpler, this module is provided, which contains the core
of a one-channel flanger.</para>

<para>It has the following ports:
<variablelist>
<varlistentry>
<term>invalue</term>
<listitem>
<para>The signal which you want to process.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>lfo</term>
<listitem>
<para>Preferably a sine wave which modulates the delay time inside the
flanger (-1 .. 1).</para>
</listitem>
</varlistentry>
<varlistentry>
<term>mintime</term>
<listitem>
<para>The minimum value for the delay inside the flanger in milliseconds.
Suggested values: try something like 1 ms. Please use values &lt; 1000 ms.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>maxtime</term>
<listitem>
<para>The maximum value for the delay inside the flanger in milliseconds.
Suggested values: try something like 5 ms. Please use values &lt; 1000 ms.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>outvalue</term>
<listitem>
<para>The output signal. It is important that you mix that with the
original (unflanged) signal to get the desired effect.</para>
</listitem>
</varlistentry>
</variablelist>

Hint: you can use this as a basis for a chorus effect.</para>
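The core idea, a delay whose length is swept by the lfo between mintime and maxtime, can be sketched like this (the linear interpolation and the one-second buffer are implementation choices made for this illustration, not taken from the aRts source):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal one-channel flanger core following the port semantics above:
// the lfo input (-1..1) sweeps the delay between mintime and maxtime
// (in milliseconds).
class Flanger
{
public:
    Flanger(double sampleRate, double mintimeMs, double maxtimeMs)
        : rate(sampleRate), minT(mintimeMs), maxT(maxtimeMs),
          buf(std::size_t(sampleRate), 0.0), writePos(0) {}

    double process(double invalue, double lfo)
    {
        buf[writePos] = invalue;
        // map lfo from -1..1 onto mintime..maxtime
        double delayMs = minT + (lfo + 1.0) * 0.5 * (maxT - minT);
        double delaySamples = delayMs * rate / 1000.0;
        double readPos = double(writePos) - delaySamples;
        while (readPos < 0.0)
            readPos += double(buf.size());
        std::size_t i0 = std::size_t(readPos) % buf.size();
        std::size_t i1 = (i0 + 1) % buf.size();
        double frac = readPos - std::floor(readPos);
        writePos = (writePos + 1) % buf.size();
        return buf[i0] * (1.0 - frac) + buf[i1] * frac;  // interpolated tap
    }

private:
    double rate, minT, maxT;
    std::vector<double> buf;
    std::size_t writePos;
};
```

Remember to mix the output with the unflanged signal, as noted above, to actually hear the effect.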
</sect3>

</sect2>

<sect2 id="mcat-synth-filters">
<title>Filters</title>
<para>
</para>

<sect3 id="mref-synth-pitch-shift-sect">
<title>Synth&lowbar;PITCH&lowbar;SHIFT</title>
<anchor id="mref-synth-pitch-shift">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_PITCH_SHIFT.png" format="png"></imageobject></mediaobject></para>

<para>This pitch shifting effect changes the frequency of the input signal
without affecting its speed. One application for this is changing
the pitch of your voice while you record (and replay) it in real time.</para>

<para>The <emphasis>speed</emphasis> parameter is the relative speed with
which the signal will be replayed. So a speed of two would make it sound
twice as high (i.e. an input frequency of 440 Hz would result in an output
frequency of 880 Hz).</para>

<para>The <emphasis>frequency</emphasis> parameter is used internally to
switch between different grains of the signal. It is tunable, and depending
on your choice, the pitch shifting will sound more or less realistic for your
use case. A good value to start with is something like 5 or 10.</para>
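The speed relationship above amounts to a simple multiplication:

```cpp
#include <cassert>

// The relationship described above: the output frequency is the input
// frequency multiplied by the speed parameter.
double shiftedFrequency(double inputHz, double speed)
{
    return inputHz * speed;  // e.g. speed 2.0 doubles the pitch
}
```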

</sect3>

<sect3 id="mref-synth-shelve-cutoff-sect">
<title>Synth&lowbar;SHELVE&lowbar;CUTOFF</title>
<anchor id="mref-synth-shelve-cutoff">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_SHELVE_CUTOFF.png" format="png"></imageobject></mediaobject></para>

<para>Filters out all frequencies over the cutoff frequency.</para>
</sect3>

<sect3 id="mref-synth-brickwall-limiter-sect">
<title>Synth&lowbar;BRICKWALL&lowbar;LIMITER</title>
<anchor id="mref-synth-brickwall-limiter">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_BRICKWALL_LIMITER.png" format="png"></imageobject></mediaobject></para>

<para>This module clips a signal to make it fit into the range [-1;1]. It
doesn't do anything to prevent the distortion that happens when clipping loud
signals. You can use this as an effect (for instance to create a slightly clipped
sine wave). However, it's probably a good idea to run the signal through a
lowpass filter afterwards if you do so, to make it sound less aggressive.</para>
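The clipping itself is straightforward hard limiting, as described:

```cpp
#include <cassert>

// Hard-clip a sample to the range [-1, 1].
double brickwall(double x)
{
    if (x > 1.0) return 1.0;
    if (x < -1.0) return -1.0;
    return x;
}
```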
</sect3>

<sect3 id="mref-synth-std-equalizer-sect">
<title>Synth&lowbar;STD&lowbar;EQUALIZER</title>
<anchor id="mref-synth-std-equalizer">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_STD_EQUALIZER.png" format="png"></imageobject></mediaobject></para>

<para>This is a nice parametric equalizer building block. Its parameters are:</para>

<para><variablelist>
<varlistentry>
<term>invalue, outvalue</term>
<listitem>
<para>The signal that gets filtered by the equalizer.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>low</term>
<listitem>
<para>How low frequencies should be changed. The value is in dB, where 0 means
don't change the low frequencies, -6 means attenuate them by 6 dB, and +6
means boost them by 6 dB.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>mid</term>
<listitem>
<para>How middle frequencies should be changed by the equalizer in dB (see low).</para>
</listitem>
</varlistentry>
<varlistentry>
<term>high</term>
<listitem>
<para>How high frequencies should be changed by the equalizer in dB (see low).</para>
</listitem>
</varlistentry>
<varlistentry>
<term>frequency</term>
<listitem>
<para>This is the center frequency of the equalizer in Hz; the mid frequencies
are around that spectrum, the low and high frequencies below and above it.
Note that the frequency may not be higher than half the sampling rate
(usually 22050 Hz), and not lower than 1 Hz.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>q</term>
<listitem>
<para>This influences how broad the mid spectrum is. It must be a positive
number &gt; 0. A value of one is reasonable; higher values of q mean a
narrower spectrum of middle frequencies, while values lower than one mean a
broader spectrum.</para>
</listitem>
</varlistentry>
</variablelist>
</para>
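The dB values used by low, mid and high correspond to linear gain factors via the standard amplitude conversion (general audio math, not aRts-specific code):

```cpp
#include <cassert>
#include <cmath>

// Convert an amplitude change in dB to a linear gain factor:
// 0 dB leaves the signal unchanged, -6 dB roughly halves it,
// +6 dB roughly doubles it.
double dbToGain(double db)
{
    return std::pow(10.0, db / 20.0);
}
```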
</sect3>

<sect3 id="mref-synth-rc-sect">
<title>Synth&lowbar;RC</title>
<anchor id="mref-synth-rc">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_RC.png" format="png"></imageobject></mediaobject></para>

<para>A damped resonator filter, filtering all frequencies around some peak value.
There is no useful way of specifying the middle frequency (that won't be cut),
since the inputs are two strange constants, f and b. The code is very old,
from the first days of the synthesizer, and will probably be replaced by a
new filter which has a frequency and a resonance value as parameters.</para>

<para>Try something like b=5, f=5 or b=10, f=10 or b=15, f=15 though.</para>
</sect3>

<sect3 id="mref-synth-moog-vcf-sect">
<title>Synth&lowbar;MOOG&lowbar;VCF</title>
<anchor id="mref-synth-moog-vcf">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_MOOG_VCF.png" format="png"></imageobject></mediaobject></para>

<para>Filters out all frequencies over the cutoff frequency (it's a 24 dB 4-pole
filter, which attenuates -24 dB per octave above the cutoff frequency), and
offers an additional parameter for tuning the filter resonance, where 0
means no resonance and 4 means self-oscillation.</para>
</sect3>

</sect2>

<sect2 id="mcat-synth-midi-sequencing">
<title>Midi + Sequencing</title>
<para>
</para>

<sect3 id="mref-synth-midi-test-sect">
<title>Synth&lowbar;MIDI&lowbar;TEST</title>
<anchor id="mref-synth-midi-test">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_MIDI_TEST.png" format="png"></imageobject></mediaobject></para>

<para>This module loads an instrument structure from a file, and registers
itself as a midi output with the aRts midi manager. Notes sent to this output
will result in instrument voices being created. Note: you can set something
like this up more conveniently in artscontrol than manually in artsbuilder.</para>
</sect3>

<sect3 id="mref-synth-sequence-sect">
<title>Synth&lowbar;SEQUENCE</title>
<anchor id="mref-synth-sequence">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_SEQUENCE.png" format="png"></imageobject></mediaobject></para>

<para>Will play a sequence of notes over and over again. The notes are given in
tracker notation, and are separated by semicolons. An example is
<literal remap="tt">A-3;C-4;E-4;C-4;</literal>. The speed is given as seconds per note, so if you
want to get 120 bpm, you will probably specify 0.5 seconds/note, as
60 seconds / 0.5 seconds per note = 120 bpm.</para>
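The speed arithmetic can be written out like this:

```cpp
#include <cassert>

// At one note per beat, the speed value (seconds per note) for a wanted
// tempo is 60 divided by the bpm.
double secondsPerNote(double bpm)
{
    return 60.0 / bpm;  // 120 bpm -> 0.5 seconds per note
}
```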

<para>You can give each note a length relative to the speed by using a colon
after the note and then the length. <literal remap="tt">A-3:2;C-4:0.5;D-4:0.5;E-4;</literal>
demonstrates this. As you can see, midi composing programs tend to offer
more comfort ;)</para>

<para>The Synth&lowbar;SEQUENCE gives additional information about the position of
the note it is playing right now, where 0 means just started and 1 means
finished. You can use this information with Synth&lowbar;PSCALE (see above).</para>
</sect3>

</sect2>

<sect2 id="mcat-synth-samples">
<title>Samples</title>
<para>
</para>

<sect3 id="mref-synth-play-wav-sect">
<title>Synth&lowbar;PLAY&lowbar;WAV</title>
<anchor id="mref-synth-play-wav">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_PLAY_WAV.png" format="png"></imageobject></mediaobject></para>

<para>This will play a wav file. It will only be present if you have libaudiofile
installed on your computer. The wave file will start playing as soon as the
module gets created, and stop as soon as it's over, at which point finished
will be set to 1. The speed parameter can be used to replay the file faster
or slower, where 1.0 is the normal (recorded) speed.</para>
<!-- TODO: KDE2.2: check that this really works together in instruments with
the done parameter things ;) -->
</sect3>

</sect2>

<sect2 id="mcat-synth-soundio">
<title>Sound IO</title>
<para>
</para>

<sect3 id="mref-synth-play-sect">
<title>Synth&lowbar;PLAY</title>
<anchor id="mref-synth-play">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_PLAY.png" format="png"></imageobject></mediaobject></para>

<para><emphasis>You will normally not need this module, unless you are writing
standalone applications. Inside artsd, there is normally already a Synth&lowbar;PLAY module, and creating another one will not work.</emphasis></para>
<para>The Synth&lowbar;PLAY-module will output your audio signal to the
soundcard. The left and right channels should contain the
<emphasis>normalized</emphasis> input for the channels.
If your input is not between -1 and 1, you get clipping.</para>

<para>As already mentioned, there may only be one Synth&lowbar;PLAY module
used, as this one directly accesses your soundcard. Use busses if you want
to mix more than one audio stream together before playing. Use the
Synth&lowbar;AMAN&lowbar;PLAY module to get something like an output inside
artsd.</para>

<para>Note that Synth&lowbar;PLAY also does the timing of the whole structure. This
means: no Synth&lowbar;PLAY = no source for timing = no sound. So you absolutely
need (exactly) one Synth&lowbar;PLAY object.</para>
</sect3>

<sect3 id="mref-synth-record-sect">
<title>Synth&lowbar;RECORD</title>
<anchor id="mref-synth-record">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_RECORD.png" format="png"></imageobject></mediaobject></para>

<para><emphasis>You will normally not need this module, unless you are writing
standalone applications. Inside artsd, there is normally already a Synth&lowbar;RECORD module, and creating another one will not work.</emphasis></para>
<para>The Synth&lowbar;RECORD-module will record a signal from the soundcard.
The left and right channels will contain the input for the channels (between
-1 and 1).</para>

<para>As already mentioned, there may only be one Synth&lowbar;RECORD module
used, as this one directly accesses your soundcard. Use busses if you want
to use the recorded audio stream in more than one place. Use the
Synth&lowbar;AMAN&lowbar;RECORD module to get something like an input inside
artsd. For this to work, artsd must run <emphasis>with full duplex
enabled</emphasis>.</para>
</sect3>

<sect3 id="mref-synth-aman-play-sect">
<title>Synth&lowbar;AMAN&lowbar;PLAY</title>
<anchor id="mref-synth-aman-play">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_AMAN_PLAY.png" format="png"></imageobject></mediaobject></para>

<para>The Synth&lowbar;AMAN&lowbar;PLAY-module will output your audio signal.
It is nice (but not necessary) if you output a normalized signal (between -1
and 1).</para>

<para>This module will use the audio manager to assign where the signal will
be played. The audio manager can be controlled through artscontrol. To make
it more intuitive to use, it is good to give the signal you play a name. This
can be achieved through setting <emphasis>title</emphasis>. Another feature
of the audio manager is to be able to remember where you played a signal the
last time. To do so it needs to be able to distinguish signals. That is why
you should assign something unique to <emphasis>autoRestoreID</emphasis>,
too.</para>
</sect3>

<sect3 id="mref-synth-aman-record-sect">
<title>Synth&lowbar;AMAN&lowbar;RECORD</title>
<anchor id="mref-synth-aman-record">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_AMAN_RECORD.png" format="png"></imageobject></mediaobject></para>

<para>The Synth&lowbar;AMAN&lowbar;RECORD-module will record an audio signal
from an external source (i.e. line in/microphone) within artsd. The output
will be a normalized signal (between -1 and 1).</para>

<para>This module will use the audio manager to assign where the signal will
be recorded from. The audio manager can be controlled through artscontrol. To make
it more intuitive to use, it is good to give the signal you record a name. This
can be achieved through setting <emphasis>title</emphasis>. Another feature
of the audio manager is to be able to remember where you recorded a signal the
last time. To do so it needs to be able to distinguish signals. That is why
you should assign something unique to <emphasis>autoRestoreID</emphasis>,
too.</para>
</sect3>

<sect3 id="mref-synth-capture-sect">
<title>Synth&lowbar;CAPTURE</title>
<anchor id="mref-synth-capture">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_CAPTURE.png" format="png"></imageobject></mediaobject></para>

<para>The Synth&lowbar;CAPTURE-module will write an audio signal to a wave
file on your hard disk. The file will always be called
/tmp/mcop-<replaceable>username</replaceable>/capture.wav</para>
</sect3>

</sect2>

<sect2 id="mcat-synth-tests">
<title>Tests</title>
<para>
</para>

<sect3 id="mref-synth-nil-sect">
<title>Synth&lowbar;NIL</title>
<anchor id="mref-synth-nil">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_NIL.png" format="png"></imageobject></mediaobject></para>

<para>This just does nothing. It is only useful for test situations.</para>
</sect3>

<sect3 id="mref-synth-debug-sect">
<title>Synth&lowbar;DEBUG</title>
<anchor id="mref-synth-debug">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_DEBUG.png" format="png"></imageobject></mediaobject></para>

<para>You can use this for debugging. It will print out the value of the signal
at invalue at regular intervals (approximately every second), combined with the
comment you have specified. That way you can find out if some signals stay in
certain ranges, or if they are there at all.</para>
</sect3>

<sect3 id="mref-synth-midi-debug-sect">
<title>Synth&lowbar;MIDI&lowbar;DEBUG</title>
<anchor id="mref-synth-midi-debug">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_MIDI_DEBUG.png" format="png"></imageobject></mediaobject></para>

<para>You can use this to debug how your midi events are actually arriving in aRts.</para>

<para>When a MIDI&lowbar;DEBUG is running, artsserver will print out lines like
<screen>201 100753.837585 on 0 42 127</screen>

<screen>202 101323.128355 off 0 42</screen>

The first line tells you that 100753 ms (that is, about 100 seconds)
after the MIDI&lowbar;DEBUG started, a midi note-on event arrived on channel 0.
This midi note-on event had a velocity (volume) of 127, the loudest possible. The
next line shows the midi release event. [ TODO: this does not work currently,
make it work, and do it via midi manager ].</para>
</sect3>

<sect3 id="mref-synth-data-sect">
<title>Synth&lowbar;DATA</title>
<anchor id="mref-synth-data">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_DATA.png" format="png"></imageobject></mediaobject></para>

<para>This creates a signal with a constant value. </para>
<!-- TODO: this doesn't really belong in test, does it? -->
</sect3>
</sect2>

<sect2 id="mcat-synth-osc-mod">
<title>Oscillation &amp; Modulation</title>
<para>
</para>
<sect3 id="mref-synth-frequency-sect">
<title>Synth&lowbar;FREQUENCY</title>
<anchor id="mref-synth-frequency">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_FREQUENCY.png" format="png"></imageobject></mediaobject></para>

<para>Oscillators in aRts don't require a frequency as input, but rather a position
in the wave. The position should be between 0 and 1, which for a
standard Synth&lowbar;WAVE&lowbar;SIN object maps to the range 0..2*pi. To generate
oscillating values from a frequency, a Synth&lowbar;FREQUENCY module is used.</para>
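The conversion can be sketched like this (the sample-rate handling and the exact wrapping behaviour are assumptions for illustration, not the aRts source):

```cpp
#include <cassert>
#include <cmath>

// Sketch of what Synth_FREQUENCY computes: the frequency is integrated
// into a phase position in [0, 1), wrapping around once per cycle.
double advancePosition(double pos, double frequency, double sampleRate)
{
    pos += frequency / sampleRate;
    return pos - std::floor(pos);  // wrap back into [0, 1)
}
```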
</sect3>


<sect3 id="mref-synth-fm-source-sect">
<title>Synth&lowbar;FM&lowbar;SOURCE</title>
<anchor id="mref-synth-fm-source">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_FM_SOURCE.png" format="png"></imageobject></mediaobject></para>

<para>This is used for frequency modulation. Put your frequency on the frequency
input and another signal on the modulator input, then set modlevel to
something, say 0.3. The frequency will then be modulated with the modulator.
Just try it. It works nicely when you put some feedback in there, that is, a
combination of the delayed output signal of the Synth&lowbar;FM&lowbar;SOURCE
(you need to feed it to some oscillator, as it only takes the role of
Synth&lowbar;FREQUENCY) and some other signal, to get good results.</para>

<para>Works nicely in combination with Synth&lowbar;WAVE&lowbar;SIN oscillators.</para>
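A guess at the underlying position step (the real aRts formula may differ; this only illustrates the idea that the modulator shifts the phase advance derived from the base frequency):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical per-sample position step of an FM source: the base
// frequency advance, offset by modlevel * modulator.
double fmPositionStep(double frequency, double modulator, double modlevel,
                      double sampleRate)
{
    return frequency / sampleRate + modlevel * modulator;
}
```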
</sect3>

</sect2>

<sect2 id="mcat-synth-waveforms">
<title>Wave Forms</title>
<para>
</para>

<sect3 id="mref-synth-wave-sin-sect">
<title>Synth&lowbar;WAVE&lowbar;SIN</title>
<anchor id="mref-synth-wave-sin">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_WAVE_SIN.png" format="png"></imageobject></mediaobject></para>

<para>Sine oscillator. Put a pos signal from Synth&lowbar;FREQUENCY or Synth&lowbar;FM&lowbar;SOURCE
at the input, and get a sine wave as output. The pos signal specifies the
position in the wave; the range 0..1 is mapped to 0..2*pi internally.</para>
</sect3>

<sect3 id="mref-synth-wave-tri-sect">
<title>Synth&lowbar;WAVE&lowbar;TRI</title>
<anchor id="mref-synth-wave-tri">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_WAVE_TRI.png" format="png"></imageobject></mediaobject></para>

<para>Triangle oscillator. Put a pos signal from Synth&lowbar;FREQUENCY or Synth&lowbar;FM&lowbar;SOURCE
at the input, and get a triangle wave as output. The pos signal specifies the
position in the wave; the range 0..1 is mapped to 0..2*pi internally. Be
careful: the input signal <emphasis>must</emphasis> be in the range 0..1 for the output signal
to produce good results.</para>
</sect3>

<sect3 id="mref-synth-noise-sect">
<title>Synth&lowbar;NOISE</title>
<anchor id="mref-synth-noise">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_NOISE.png" format="png"></imageobject></mediaobject></para>

<para>Noise generator. This generates a random signal between -1 and 1.</para>
</sect3>


<sect3 id="mref-synth-wave-square-sect">
<title>Synth&lowbar;WAVE&lowbar;SQUARE</title>
<anchor id="mref-synth-wave-square">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_WAVE_SQUARE.png" format="png"></imageobject></mediaobject></para>

<para>Square oscillator. Put a pos signal from Synth&lowbar;FREQUENCY or Synth&lowbar;FM&lowbar;SOURCE
at the input, and get a square wave as output. The pos signal specifies the
position in the wave; the range 0..1 is mapped to 0..2*pi internally. Be
careful: the input signal <emphasis>must</emphasis> be in the range 0..1 for the output signal
to produce good results.</para>
</sect3>

<sect3 id="mref-synth-wave-softsaw-sect">
<title>Synth&lowbar;WAVE&lowbar;SOFTSAW</title>
<anchor id="mref-synth-wave-softsaw">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_WAVE_SOFTSAW.png" format="png"></imageobject></mediaobject></para>

<para>Softened saw wave, similar in shape to the Synth_WAVE_TRI oscillator.
Put a pos signal from Synth&lowbar;FREQUENCY or Synth&lowbar;FM&lowbar;SOURCE
at the input, and you'll get a softened saw wave as output. The pos signal
specifies the position in the wave; the range 0..1 is mapped to 0..2*pi
internally. Be careful: the input signal <emphasis>must</emphasis> be in the range 0..1 for
the output signal to produce good results.</para>
</sect3>

<sect3 id="mref-synth-wave-pulse-sect">
<title>Synth&lowbar;WAVE&lowbar;PULSE</title>
<anchor id="mref-synth-wave-pulse">

<para><mediaobject><imageobject><imagedata fileref="images/Synth_WAVE_PULSE.png" format="png"></imageobject></mediaobject></para>

<para>Pulse oscillator - this module is similar in spirit to the rectangular
oscillator (Synth_WAVE_RECT), but it provides a configurable up/down ratio
through the <emphasis>dutycycle</emphasis> parameter. Put a pos signal from
Synth&lowbar;FREQUENCY or Synth&lowbar;FM&lowbar;SOURCE at the input, and get a
pulse wave as output. The pos signal specifies the position in the wave; the
range 0..1 is mapped to 0..2*pi internally. Be careful: the input signal
<emphasis>must</emphasis> be in the range 0..1 for the output signal to produce good results.
</para>
</sect3>
</sect2>
</sect1>

<sect1 id="visual-modules-reference">
<title>Visual Modules Reference</title>
<para>
TODO when visual modules are more "finished".
</para>
</sect1>

</chapter>

<chapter id="porting">
<title>Porting Applications to &arts;</title>

<sect1 id="using-artsdsp">
<title>Using <application>artsdsp</application></title>
<para>
</para>
</sect1>

<sect1 id="adding-native-arts-support">
<title>Adding Native &arts; support</title>
<para>
</para>
</sect1>

</chapter>

<chapter id="contributing">
<title>Contributing to &arts;</title>

<sect1 id="how-to-help">
<title>How You Can Help</title>

<para>The &arts; project can use help from developers to make existing
multimedia applications &arts;-aware, write new multimedia applications, and
enhance the capabilities of &arts;. However, you don't have to be a developer to
contribute. We can also use help from testers to submit bug reports, translators
to translate the application text and documentation into other languages,
artists to design bitmaps (especially for <application>artsbuilder</application>
modules), musicians to create sample &arts; modules, and writers to write or
proofread documentation.
</para>
</sect1>

<sect1 id="mailing-lists">
<title>Mailing Lists</title>
<para>
</para>
</sect1>

<sect1 id="coding-standards">
<title>Coding Standards</title>

<para>To keep the sources consistent and readable, it is
important to keep the coding style the same all over the aRts source. Even if
you just write a module, please try to write and format your source accordingly,
as it will make it easier for different people to maintain the source tree,
and easier to copy pieces from one source file to another.
</para>

<variablelist>
<varlistentry>
<term>Naming of member functions</term>
<listitem>
<para>Qt/Java style, that means capitalization on word breaks, and the first
letter always without capitalization; no underscores.</para>
<para>This means for instance
<programlisting>   createStructureDesc()
   updateWidget();
   start(); </programlisting>
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Class members</term>
<listitem>
<para>Class members are not capitalized, such as menubar or button.</para>
<para> 
When there are accessor functions, the standard should be the MCOP
way, that is, when you have a long member foo which shouldn't be
visible directly, you create</para>
<para> 
<programlisting>   foo(long new_value);
   long foo(); </programlisting>
</para>
<para>functions to get and set the value. In that case, the real value of
foo should be stored in &lowbar;foo.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Class names</term>
<listitem>
<para>All classes should be wordwise capitalized, that means ModuleView,
SynthModule. All classes that belong to the libraries should use the Arts
namespace, like Arts::Soundserver.</para>
<para>The implementations of MCOP classes should get called Class&lowbar;impl,
such as SoundServer&lowbar;impl.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Parameters</term>
<listitem>
<para>Parameters are always uncapitalized.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Local variables</term>
<listitem>
<para>Local variables are always uncapitalized, and may have names like i, p, x,
etc. where appropriate.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Tab width (Shift width)</term>
<listitem>
<para>One tab is as long as 4 spaces.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Naming of source files</term>
<listitem>
<para>Source files should have no capitalization in the name. They should have
the name of the class when they implement a single class. Their extension
is .cc if they refer to Qt/GUI independent code, and .cpp if they refer to
Qt/GUI dependent code. Implementation files for interfaces should be called
<replaceable>foo</replaceable>_impl, if Foo was the name of the interface.
</para>
<para>
IDL files should be named in a descriptive way for the collection of
interfaces they contain, also all lower case. In particular, it is not good
to name an IDL file after the class itself, as the .mcopclass trader and
type info entries would then collide.
</para>
</listitem>
</varlistentry>
</variablelist>
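A short illustration (not actual aRts code) tying several of these conventions together: a word-wise capitalized class name, Qt/Java-style member functions, and MCOP-style foo()/foo(value) accessors backed by a &lowbar;foo member:

```cpp
// Hypothetical example class following the aRts coding standards.
class ModuleView
{
public:
    long foo() { return _foo; }                   // getter, MCOP style
    void foo(long newValue) { _foo = newValue; }  // setter, MCOP style
    void updateWidget() { /* redraw the view */ }

private:
    long _foo;  // real storage behind the foo() accessors
};
```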
</sect1>

</chapter>

<chapter id="future-work">
<title>Future Work</title>
<para>
</para>

</chapter>

<chapter id="references">
<title>References</title>

<para>
&arts; project web site
</para>

<para>
&kde; Multimedia web site
</para>

<para>Chapter 14 (Multimedia) of the &kde; 2.0 Development book, available for
purchase or viewable on-line at <ulink
url="http://www.andamooka.org/">http://www.andamooka.org</ulink></para>
</chapter>

<chapter id="faq">
<title>Questions and answers</title>

<para>This section answers some frequently asked questions about &arts;.</para>

<sect1 id="faq-general">
<title>General Questions</title>

<qandaset id="faqlist">
<qandaentry>
<question>
<para>Does &kde; support my sound card for audio output?</para>
</question>
<answer>
<para>
&kde; uses &arts; to play sound, and &arts; uses the &Linux; kernel sound
drivers, either <acronym>OSS</acronym> or <acronym>ALSA</acronym> (using
<acronym>OSS</acronym> emulation). If your sound card is supported by either
<acronym>ALSA</acronym> or <acronym>OSS</acronym> and properly configured (&ie;
any other &Linux; application can output sound), it will work.
</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
Is there sound support on platforms other than &Linux;?
</para>
</question>
<answer>
<para>
There is currently only support for <acronym>OSS</acronym> (or compatible) drivers on
other platforms (&eg; FreeBSD). You are welcome to contribute by
adding support for other platforms. The relevant source code is
<filename>kdelibs/arts/flow/audiosubsys.cc</filename>.
</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
I can't play <literal role="extension">wav</literal> files with
<application>artsd</application>!
</para>
</question>
<answer>
<para>
Check that <application>artsd</application> is linked to
<filename>libaudiofile</filename> (<userinput><command>ldd</command>
<option>artsd</option></userinput>). If it isn't, download kdesupport, recompile
everything, and it will work.
</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
I hear sound when logged in as <systemitem <!-- DocBook 4.1: class="username" --> >root</systemitem>
but no other users have sound!</para>
</question>
<answer>
<para>
The permissions of the file <filename <!-- DocBook 4.1: class="devicefile" --> >/dev/dsp</filename>
affect which users will have sound. To allow everyone to use it, do this:</para>

<procedure>
<step><para>log in as <systemitem <!-- DocBook 4.1: class="username" --> >root</systemitem></para></step>

<step><para>open a &konqueror; window</para></step>

<step><para>go into the <filename class="directory">/dev</filename>
directory</para></step>

<step><para>click on the file <filename>dsp</filename> with the
<mousebutton>right</mousebutton> mouse button, and choose properties.</para></step>

<step><para>click on the <guilabel>Permissions</guilabel> tab</para></step>

<step><para>check the <guilabel>Read</guilabel> and <guilabel>Write</guilabel>
check boxes in all sections</para></step>

<step><para>click on <guibutton>ok</guibutton></para></step>

</procedure>

<para>You can achieve the same effect in a terminal window using the command
<userinput><command>chmod</command> <option>666
<replaceable>/dev/dsp</replaceable></option></userinput>.</para>

<para>For restricting access to sound to specific users, you can use group
permissions. On some &Linux; distributions, for instance Debian/Potato,
<filename>/dev/dsp</filename> is already owned by a group called
<systemitem <!-- DocBook 4.1: class="groupname" --> >audio</systemitem>, so all you need to do is add the users to
this group.</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>This helps for <application>artsd</application>, but what about &kmix;,
&kmid;, &kscd;,&etc;?</para>
</question>
<answer>

<para>
There are various other devices which provide functionality accessed by
multimedia applications. You can treat them in the same way, either by making
them accessible to everyone, or by using groups to control access. Here is a
list, which may be incomplete (where several numbered devices exist, such as
midi0, midi1, ..., only the 0 version is listed here):
</para>

<para>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/admmidi0</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/adsp0</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/amidi0</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/amixer0</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/audio</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/audio0</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/cdrom</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/dmfm0</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/dmmidi0</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/dsp</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/dsp0</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/midi0</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/midi00</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/mixer</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/mixer0</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/mpu401data</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/mpu401stat</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/music</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/rmidi0</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/rtc</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/sequencer</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/smpte0</filename>
<filename <!-- DocBook 4.1: class="devicefile" --> >/dev/sndstat</filename>
</para>
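<para>If you decide to make these world-accessible, a small shell loop (a
sketch, run as root; it only touches devices that actually exist on your
system) saves typing:</para>

```
# as root: open up the common sound devices for all users
for dev in /dev/dsp /dev/dsp0 /dev/adsp0 /dev/audio /dev/audio0 \
           /dev/mixer /dev/mixer0 /dev/midi0 /dev/midi00 \
           /dev/sequencer /dev/music /dev/sndstat; do
    [ -e "$dev" ] && chmod 666 "$dev"
done
```

<para>Adjust the device list to your hardware, or use group permissions as
described above if world access is too permissive for your setup.</para>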

</answer>
</qandaentry>

<qandaentry>
<question>
<para>What can I do if <application>artsd</application> doesn't start or crashes while running?</para>
</question>
<answer>
<para>
First of all, try using the default settings in &kcontrol; (or, if you are
starting it manually, don't give additional options besides perhaps
<option>-F10</option> <option>-S4096</option> for latency). In particular,
<emphasis>full duplex is likely to break</emphasis> with various drivers, so
try disabling it.
</para>
<para>
A good way to figure out why <application>artsd</application> doesn't start (or
crashes while running) is to start it manually. Open a
<application>konsole</application> window, and do:
</para>
<screen width="40"><prompt>%</prompt> <userinput><command>artsd</command> <option>-F10</option> <option>-S4096</option></userinput></screen>
<para>You can also add the <option>-l0</option> option, which will print more
information about what is happening, like this:
</para>
<screen width="40"><prompt>%</prompt> <userinput><command>artsd</command> <option>-l0</option> <option>-F10</option> <option>-S4096</option></userinput></screen>

<para>This will probably give you some useful information about why it didn't
start. Or, if it crashes when doing this-and-that, you can do this-and-that,
and see <quote>how</quote> it crashes. If you want to report a bug, producing
a backtrace with <application>gdb</application> and/or an
<application>strace</application> may help in finding the problem.
</para>
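<para>As a sketch, a <application>gdb</application> session for this purpose
(assuming <application>gdb</application> is installed and
<application>artsd</application> is in your <envar>PATH</envar>) could look
like this; <userinput>bt</userinput> prints the backtrace after a crash:</para>

```
% gdb artsd
(gdb) run -l0 -F10 -S4096
...wait for the crash...
(gdb) bt
```

<para>Include the backtrace output in your bug report.</para>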
</answer>
</qandaentry>

</qandaset>

</sect1>

<sect1 id="faq-non-arts">
<title>Non-&arts; Applications</title>

<qandaset>

<qandaentry>
<question>
<para>
As soon as &kde; is running, no other application can access my sound device!
</para>
</question>
<answer>
<para>
Since the &arts; sound server used by &kde; is running, it is using the sound
device. If the server is idle for 60 seconds, it will suspend itself and
release the device automatically.
</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
You said it suspends after 60 seconds, but it doesn't for me!
</para>
</question>
<answer>
<para>
Currently it doesn't suspend when using full duplex. Turn full duplex
off from the &kcontrol; and it will suspend. Disabling full
duplex is generally a good idea anyway if you only use &arts; for
playing audio and not recording.
</para>
</answer>
</qandaentry>


<qandaentry>
<question>
<para>
How can I run old, non-&arts; applications?
</para>
</question>
<answer>
<para>
Run them using <application>artsdsp</application>. For instance, if you would
normally run:
</para>

<screen><prompt>&percnt;</prompt> <userinput><command>mpg123</command> <option>foo.mp3</option></userinput></screen>

<para>instead use:</para>

<screen><prompt>&percnt;</prompt> <userinput><command>artsdsp</command> <option>mpg123 foo.mp3</option></userinput></screen>

<para>
This will redirect the sound output to &arts;. This method doesn't require
changes to the applications. It is something of an ugly hack, however, and
does not yet fully support all features of the sound card device, so some
applications may not work.
</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
I can't run <application>artsdsp</application> with any application; it always crashes!
</para>
</question>
<answer>
<para>
You need a recent version of the glibc library;
<command>artsdsp</command> will not work reliably on some older &Linux;
distributions. For instance, on Debian 2.1 (which is glibc 2.0
based) it doesn't work, while on Debian 2.2 (which is glibc 2.1.3 based),
it does.
</para>
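<para>To check which glibc version your system provides (this assumes a
glibc-based &Linux; system, where <command>ldd</command> is part of glibc and
reports its version), you can run:</para>

```shell
# Print the C library version; artsdsp needs roughly glibc 2.1 or later.
ldd --version | head -n 1
```

<para>The first line of the output names the library and its version
number.</para>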
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
Are there theoretical limitations with some applications that
will prevent them from ever working with <application>artsdsp</application>?
</para>
</question>
<answer>
<para>
No. Using <command>artsdsp</command> can result in slightly more latency and
<acronym>CPU</acronym> usage than using the &arts; <acronym>API</acronym>s
directly. Other than that, any application that doesn't work should be
considered a bug in <application>artsdsp</application>. The technique used by
<application>artsdsp</application> should, if implemented properly, allow
<emphasis>every</emphasis> application to work with it (including large
applications like <application>Quake</application> 3).
</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
What can I do if an application doesn't work with <command>artsdsp</command>?
</para>
</question>
<answer>
<para>
You can wait for <application>artsd</application> to suspend or use the command
<userinput><command>artsshell</command> <option>suspend</option></userinput> to
ask the server to suspend itself. You will only be able to suspend the server if
no &arts; applications are currently using it, and no &arts; applications will
be able to run when the server is suspended.
</para>

<para>If the server is busy, a crude but effective way to get rid of it is:
</para>


<screen><prompt>&percnt;</prompt> <userinput><command>killall</command> <option>artsd</option> ; <command>killall</command> <option>artswrapper</option></userinput>
<lineannotation>Now start your own application.</lineannotation>
<prompt>&percnt;</prompt> <userinput><command>kcminit</command> <option>arts</option></userinput>
</screen>

<para>Any currently running &arts; applications may crash, however, once you
kill the server.</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
What about applications written for KDE1?
</para>
</question>
<answer>
<para>
If you are running KDE1 applications which output sound via the KDE1 audio
server, you will need to run <application>kaudioserver</application> to make
them work. You can start <application>kaudioserver</application> in the same
way as other non-&arts; applications:
<screen>
<prompt>&percnt;</prompt> <userinput><command>artsdsp</command> <option>kaudioserver</option></userinput>
</screen>
You will need to have <application>kaudioserver</application> installed (from
the same source as your KDE1 applications); it belongs to KDE1, not KDE2.
</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
What about applications using the Enlightened Sound Daemon (<acronym>ESD</acronym>)?
</para>
</question>
<answer>
<para>
The issue is similar to that with <application>kaudioserver</application>: such
applications need a running <application>esd</application> server. You can
start <application>esd</application> via <application>artsdsp</application>,
and every <acronym>ESD</acronym>-aware application should then work fine, like
this:
<screen>
<prompt>&percnt;</prompt> <userinput><command>artsdsp</command> <option>esd</option></userinput>
</screen>
</para>
</answer>
</qandaentry>

</qandaset>

</sect1>

<sect1 id="faq-latency">
<title>Latency</title>

<qandaset>

<qandaentry>
<question>
<para>
I sometimes hear short pauses when listening to music, is this a bug?
</para>
</question>
<answer>
<para>
This is most likely not a bug, but is caused by the fact that the &Linux;
kernel is not very good at real-time scheduling. There are situations where
&arts; will not be able to keep up with playback. You can, however, enable
real-time rights (via &kcontrol;), and use a large latency setting (like
<guilabel>250ms</guilabel> or <guilabel>don't care</guilabel>), which should
improve the situation.
</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
What's the effect of the response time setting?
</para>
</question>
<answer>
<para>
The help text for this setting in the &kcontrol; can be misleading. A lower
value means that &arts; will take less time to respond to external events
(&ie; the time that it takes between closing a window and hearing a sound
played by <application>artsd</application>). It will also use more
<acronym>CPU</acronym> resources, and be more likely to cause
dropouts.</para>
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
Is there anything else I can do to prevent pauses?
</para>
</question>
<answer>
<para>
If you have an <acronym>IDE</acronym> drive, you can use the
<command>hdparm</command> command to put it into <acronym>DMA</acronym> mode. A
word of warning: this does not work on all hardware, and can result in having
to do a hard reset or, in rare cases, data loss. Read the documentation for the
<command>hdparm</command> command for more details. I have successfully used
the following command:</para>

<screen>
<prompt>&percnt;</prompt> <userinput><command>hdparm</command> <option>-c1 -d1 -k1 -K1 /dev/hda</option></userinput>
</screen>

<para>You need to run this after every boot, so you might want to place it in
a system startup script (how to do this is distribution specific; on Debian
&Linux; it is usually put in <filename>/etc/rc.boot</filename>).</para>
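<para>A minimal sketch of such a startup snippet (the script location and the
device name <filename>/dev/hda</filename> are assumptions; adapt both to your
distribution and hardware):</para>

```
#!/bin/sh
# e.g. saved as /etc/rc.boot/hdparm on Debian, and made executable
hdparm -c1 -d1 -k1 -K1 /dev/hda
```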
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
Why is <application>artsd</application> taking so much <acronym>CPU</acronym> time?
</para>
</question>
<answer>
<para>
Check your response time settings. However, the current version is not yet
really optimized. This will improve, and until then no real prediction can be
made about how fast <application>artsd</application> can or can't be.
</para>
</answer>
</qandaentry>

</qandaset>

</sect1>

<sect1 id="faq-network">
<title>Network Transparency</title>

<qandaset>

<qandaentry>
<question>
<para>
What do I need for network transparency?
</para>
</question>
<answer>
<para>
Enable it in the &kcontrol; <guilabel>Sound Server</guilabel> settings
(<guilabel>enable X11 server for security information</guilabel> and
<guilabel>network transparency</guilabel>). Then copy your
<filename>.mcoprc</filename> to all machines you plan to use network
transparency from. Log in again. Make sure that the hosts that interact know
each other by name (&ie; they have resolvable names or are in
<filename>/etc/hosts</filename>).
</para>

<para>
This should be all you need to do. However, if it still doesn't work here are
some additional details. The &arts; sound server process,
<application>artsd</application>, should only run on one host, the one with the sound
card where the sound should be played. It can be started automatically on login
by &kde; (if you configure that in &kcontrol;), or manually using something like:
</para>

<screen>
<prompt>&percnt;</prompt> <userinput><command>artsd</command> <option>-n -F 5 -S 8192</option></userinput>
</screen>

<para>The <option>-n</option> parameter is for network transparency, while the
others configure latency.</para>

<para>
Your <filename>.mcoprc</filename> file should have this entry:
</para>

<screen>
<userinput>GlobalComm=Arts::X11GlobalComm</userinput>
</screen>

<para>on all machines involved in order for network transparency to work. This
is what is enabled by the <guilabel>X11 server for security
information</guilabel> control panel setting.
</para>

<para>Finally, in any &kde; version in the 2.0.x series, there is a bug which
applies if you don't have a domain name set. Clients of
<application>artsd</application> try to find where to connect to via the
<replaceable>hostname</replaceable>.<replaceable>domainname</replaceable>
combination. If your domain name is empty, it will try to connect to
<replaceable>hostname</replaceable>. (note the extra dot). Adding an entry like
this to <filename>/etc/hosts</filename> (&ie; <userinput>orion.</userinput> if
your hostname is <systemitem class="systemname">orion</systemitem>) works around
the problem.
</para>
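<para>As a sketch, if your hostname is
<systemitem class="systemname">orion</systemitem> and its address is
192.168.1.10 (the address here is only an example), the
<filename>/etc/hosts</filename> entry would look like:</para>

```
192.168.1.10    orion.    orion
```

<para>The first name is the dotted form that the clients look up; the second is
the normal hostname.</para>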
</answer>
</qandaentry>

<qandaentry>
<question>
<para>
How do I debug network transparency if it doesn't work?
</para>
</question>
<answer>
<para>
Assuming you have the &kde; source code, go to <filename
class="directory">kdelibs/arts/examples</filename>, and run
<userinput><command>make</command> <option>check</option></userinput> to compile
some programs, including <application>referenceinfo</application>. Then run
</para>

<screen>
<prompt>&percnt;</prompt> <userinput><command>./referenceinfo</command> <option>global:Arts&lowbar;SimpleSoundServer</option></userinput>
</screen>

<para>
The output will indicate the host name and port being used by &arts;. For
example, <computeroutput>tcp:orion:1698</computeroutput> would mean that any
client trying to use network transparency should know how to reach host
<systemitem class="systemname">orion</systemitem>.
</para>
</answer>
</qandaentry>

</qandaset>

</sect1>

<sect1 id="faq-other">
<title>Other Issues</title>

<qandaset>

<qandaentry>
<question>
<para>I can't use <application>artsbuilder</application>. It crashes when
executing a module!</para>
</question>
<answer>
<para>The most likely cause is that you are using old structures or modules
which aren't supported by the &kde; 2 version. Unfortunately the documentation
on the web refers to &arts;-0.3.4.1, which is quite outdated. The most often
reported crash is that executing a structure in
<application>artsbuilder</application> results in the error message
<errorname>[artsd] Synth_PLAY: audio subsystem is already used.</errorname>
</para>

<para>You should use a Synth_AMAN_PLAY instead of a Synth_PLAY module
and the problem will go away. Also see the
<application>artsbuilder</application> help file (hit <keycap>F1</keycap> in
<application>artsbuilder</application>).
</para>

<para>
Recent versions of <application>artsbuilder</application> (&kde; 2.1 beta 1 and
later) come with a set of examples which you can use.
</para>
</answer>
</qandaentry>

</qandaset>

</sect1>
</chapter>

<chapter id="copyright-and-licenses">

<title>&arts; Copyright and Licensing</title>

<para>&arts; software copyright 1998-2001 Stefan Westerfeld
<email>stefan@space.twc.de</email></para>
 
<para><anchor id="contributors">
Documentation copyright 1999-2001
Stefan Westerfeld <email>stefan@space.twc.de</email> and
Jeff Tranter <email>tranter@kde.org</email>.
</para>
 
&underFDL;

<para>
All libraries that are in &arts; are licensed under the terms of the
<acronym>GNU</acronym> Lesser General Public license. The vast majority of the
&arts; code is in the libraries, including the whole of <acronym>MCOP</acronym>
and ArtsFlow. This allows the libraries to be used for non-free/non-open source
applications if desired.
</para>

<para>There are a few programs (such as <application>artsd</application>), that
are released under the terms of the <acronym>GNU</acronym> General Public
License. As there have been different opinions on whether or not linking
<acronym>GPL</acronym> programs with &Qt; is legal, I also added an explicit
notice which allows that, in addition to the <acronym>GPL</acronym>: permission
is also granted to link this program with the &Qt; library, treating &Qt; like a
library that normally accompanies the operating system kernel, whether or not
that is in fact the case.</para>

</chapter>

<chapter id="intro-digital-audio">
<title>Introduction to Digital Audio</title>

<para>digital sampling, filters, effects, &etc;</para>

</chapter>

<chapter id="midi-introduction">
<title>Introduction to <acronym>MIDI</acronym></title>
<para>
</para>
</chapter>


<glossary id="glossary">

<glossentry id="gloss-alsa">
<glossterm><acronym>ALSA</acronym></glossterm>
<glossdef>
<para>Advanced &Linux; Sound Architecture; a &Linux; sound card driver; not
currently included with the standard kernel source code.</para>
</glossdef>
</glossentry>

<glossentry id="gloss-arts">
<glossterm>&arts;</glossterm>
<glossdef>
<para>Analog Real-Time Synthesizer; the name of the multimedia
architecture/library/toolkit used by the &kde; project (note capitalization)
</para>
</glossdef>
</glossentry>

<glossentry id="gloss-bsd">
<glossterm><acronym>BSD</acronym></glossterm>
<glossdef>
<para>Berkeley Software Distribution; here refers to any of several free
&UNIX;-compatible operating systems derived from <acronym>BSD</acronym>
&UNIX;.</para>
</glossdef>
</glossentry>

<glossentry id="gloss-corba">
<glossterm><acronym>CORBA</acronym></glossterm>
<glossdef>
<para>Common Object Request Broker Architecture; a standard for implementing
object-oriented remote execution.</para>
</glossdef>
</glossentry>

<glossentry id="gloss-cvs">
<glossterm><acronym>CVS</acronym></glossterm>
<glossdef>
<para>
Concurrent Versions System; a software configuration management system used by
many software projects including KDE and &arts;.
</para>
</glossdef>
</glossentry>

<glossentry id="glos-fft">
<glossterm><acronym>FFT</acronym></glossterm>
<glossdef>
<para>Fast Fourier Transform; an algorithm for converting data from the time to
frequency domain; often used in signal processing. </para>
</glossdef>
</glossentry>

<glossentry id="gloss-full-duplex">
<glossterm>Full Duplex</glossterm>
<glossdef>
<para>
The ability of a sound card to simultaneously record and play audio.
</para>
</glossdef>
</glossentry>

<glossentry id="gloss-gpl">
<glossterm><acronym>GPL</acronym></glossterm>
<glossdef>
<para><acronym>GNU</acronym> General Public License; a software license created
by the Free Software Foundation defining the terms for releasing free software.
</para>
</glossdef>
</glossentry>

<glossentry id="gloss-gui">
<glossterm><acronym>GUI</acronym></glossterm>
<glossdef><para>Graphical User Interface</para></glossdef>
</glossentry>

<glossentry id="gloss-idl">
<glossterm><acronym>IDL</acronym></glossterm>
<glossdef>
<para>Interface Definition Language; a programming language independent format
for specifying interfaces (methods and data). </para>
</glossdef>
</glossentry>

<glossentry id="gloss-kde">
<glossterm>&kde;</glossterm>
<glossdef>
<para>K Desktop Environment; a project to develop a free graphical desktop
environment for &UNIX; compatible systems.
</para>
</glossdef>
</glossentry>

<glossentry id="gloss-lgpl">
<glossterm><acronym>LGPL</acronym></glossterm>
<glossdef>
<para><acronym>GNU</acronym> Lesser General Public License; a software license
created by the Free Software Foundation defining the terms for releasing
free software; less restrictive than the <acronym>GPL</acronym> and often used
for software libraries. </para>
</glossdef>
</glossentry>

<glossentry id="gloss-mcop">
<glossterm><acronym>MCOP</acronym></glossterm>
<glossdef>
<para>Multimedia COmmunication Protocol; the protocol used for communication
between &arts; software modules; similar to <acronym>CORBA</acronym> but simpler
and optimized for multimedia. </para>
</glossdef>
</glossentry>

<glossentry id="gloss-midi">
<glossterm><acronym>MIDI</acronym></glossterm>
<glossdef>
<para>Musical Instrument Digital Interface; a standard protocol for
communication between electronic musical instruments; often also used to refer
to a file format for storing <acronym>MIDI</acronym> commands.
</para>
</glossdef>
</glossentry>

<glossentry id="gloss-oss">
<glossterm><acronym>OSS</acronym></glossterm>
<glossdef>
<para>
Open Sound System; the sound drivers included with the &Linux; kernel (sometimes
called <acronym>OSS</acronym>/Free) or a commercial version sold by 4Front
Technologies.
</para>
</glossdef>
</glossentry>
 
</glossary>

<appendix id="installation">
<title>Installing &arts;</title>

<para>
In order to use &arts; you obviously need to have it installed and running on
your system. There are two approaches for doing this, which are described in the
next sections.
</para>

<sect1 id="binary-install">
<title>Installing a Precompiled Binary Release</title>

<para>
The quickest and easiest way to get &arts; up and running is to install
precompiled binary packages for your system. Most recent &Linux; distributions
include &kde;, and if it is &kde; 2.0 or later it will include &arts;. If &kde;
is not included on your installation media it may be available as a download
from your operating system vendor.  Alternatively it may be available from third
parties. Make sure that you use packages that are compatible with your operating
system version.
</para>

<para>
A basic install of &kde; will include the sound server, allowing most
applications to play sound. If you want the full set of multimedia tools and
applications you will likely need to install additional optional packages.
</para>

<para>
The disadvantage of using precompiled binaries is that they may not be the most
recent version of &arts;. This is particularly likely if they are provided on
&CD-ROM;, as the pace of development of &arts; and &kde; is such that &CD-ROM;
media cannot usually keep pace. You may also find that, if you have one of the
less common architectures or operating system distributions, precompiled binary
packages may not be available and you will need to use the second method.
</para>

</sect1>

<sect1 id="source-install">
<title>Building From Source</title>

<para>
While time consuming, the most flexible way to build &arts; is to compile it
yourself from source code. This ensures you have a version compiled optimally
for your system configuration and allows you to build the most recent version.
</para>

<para>
You have two choices here -- you can either install the most recent stable
version included with &kde; or you can get the most recent (but possibly
unstable) version directly from the &kde; project <acronym>CVS</acronym>
repository. Most users who aren't developing for &arts; should use the stable
version. You can download it from <ulink
url="ftp://ftp.kde.org">ftp://ftp.kde.org</ulink> or one of the many mirror
sites. If you are actively developing for &arts; you probably want to use the
<acronym>CVS</acronym> version.
</para>

<para>
Note that at the time of writing, the most recent version of &arts; was only
provided as part of &kde;. If you want a standalone version of &arts; you will
need to manually build and install only the &arts; portions of &kde;.
</para>

<para>
Note also that if you are building from <acronym>CVS</acronym>, some components
of &arts; (&ie; the basic core components including the sound server) are found
in the <acronym>CVS</acronym> module kdelibs, while additional components (&eg;
<application>artsbuilder</application>) are included in the kdemultimedia
module. This may change in the future. You may also find a version in the
kmusic module; this is the old (pre-&kde; 2.0) version, which is now obsolete.
</para>

<para>
The requirements for building &arts; are essentially the same as for building
&kde;. The configure scripts should detect your system configuration and
indicate if any required components are missing. Make sure that you have a
working sound driver on your system (either the <acronym>OSS</acronym>/Free
driver in the kernel, <acronym>OSS</acronym> driver from 4Front Technologies, or
<acronym>ALSA</acronym> driver with <acronym>OSS</acronym> emulation).
</para>

<para>More information on downloading and installing &kde; (including &arts;)
can be found in the <ulink
url="http://www.kde.org/documentation/faq/index.html">&kde;
&FAQ;</ulink>.</para>

</sect1>

</appendix>

</book>
<!--
Local Variables:
mode: sgml
sgml-omittag:nil
sgml-shorttag:t
sgml-namecase-general:t
sgml-general-insert-case:lower
sgml-minimize-attributes:nil
sgml-always-quote-attributes:t
sgml-indent-step:0
sgml-indent-data:nil
End:
-->