/*! 
\mainpage FluidSynth 1.1 Developer Documentation 
\author Peter Hanappe
\author Conrad Berhörster
\author Antoine Schmitt
\author Pedro López-Cabanillas
\author Josh Green
\author David Henningsson
\author Copyright © 2003-2010 Peter Hanappe, Conrad Berhörster, Antoine Schmitt, Pedro López-Cabanillas, Josh Green, David Henningsson
\version Revision 1.1.2
\date 2010-08-26

All the source code examples in this document are in the public domain; you can use them as you please. This document is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/ . The FluidSynth library is distributed under the GNU Library General Public License. A copy of the GNU Library General Public License is contained in the FluidSynth package; if not, visit http://www.gnu.org/licenses/old-licenses/lgpl-2.1.txt or write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

\section Abstract

<a href="http://www.fluidsynth.org">FluidSynth</a> is a software synthesizer based on the <a href="http://en.wikipedia.org/wiki/SoundFont">SoundFont 2</a> specifications. The synthesizer is available as a shared object that can easily be reused in any application that wants to use wave-table synthesis. This document explains the basic usage of FluidSynth. Some of the more advanced features are not yet discussed but will be added in future versions.

\section Contents Table of Contents

- \ref Disclaimer 
- \ref Introduction 
- \ref NewIn1_1_3
- \ref NewIn1_1_2
- \ref NewIn1_1_1
- \ref NewIn1_1_0
- \ref CreatingSettings 
- \ref CreatingSynth 
- \ref CreatingAudioDriver 
- \ref UsingSynth 
- \ref LoadingSoundfonts 
- \ref SendingMIDI
- \ref RealtimeMIDI
- \ref MIDIPlayer
- \ref MIDIRouter 
- \ref Sequencer
- \ref Shell
- \ref Advanced 

\section Disclaimer

This documentation, in its current version, is incomplete. As always, the source code is the final reference.

SoundFont(R) is a registered trademark of E-mu Systems, Inc.

\section Introduction

What is FluidSynth?

- FluidSynth is a software synthesizer based on the SoundFont 2 specifications. The synthesizer is available as a shared object (a concept also known as a Dynamic Link Library, or DLL) that can easily be reused in any application for wave-table synthesis. This document explains the basic usage of FluidSynth.

- FluidSynth provides a command-line interface program, ready to be used from a console terminal, that offers most of the library's functionality to end users. Among its features are the ability to render and play Standard MIDI Files, to receive real-time MIDI events from external hardware ports and other applications, and to perform advanced routing of such events, while providing both a local shell and a remote command server interface.

- FluidSynth offers an API (Application Programming Interface) that relieves programmers of most of the details of reading SoundFont and MIDI files, handling MIDI events, and sending the digital audio output to a sound card. These tasks can be accomplished using a small set of functions. This document explains most of the API functions and gives short examples of their use.

- FluidSynth uses instrument samples contained in standard SF2 (SoundFont 2) files, whose file structure is based on the RIFF format. The specification can be obtained at http://connect.creativelabs.com/developer/SoundFont/Forms/AllItems.aspx but most users don't need to know any details of the format.

- FluidSynth can easily be embedded in an application. It has a main header file, fluidsynth.h, and one dynamically linkable library. FluidSynth runs on Linux, Mac OS X, and Windows; support for OS/2 and OpenSolaris is experimental. It has audio and MIDI drivers for all mentioned platforms, but you can use it with your own drivers if your application already handles MIDI and audio input/output. This document explains the basic usage of FluidSynth and provides examples that you can reuse. 

- FluidSynth is open source and in active development. For more details, take a look at http://www.fluidsynth.org

\section NewIn1_1_3 What's new in 1.1.3?

Changes in FluidSynth 1.1.3 concerning developers:

- There are no new API additions in 1.1.3; this is a pure bug-fix release.
  For a list of bugs fixed, see 
  https://sourceforge.net/apps/trac/fluidsynth/wiki/ChangeLog1_1_3

\section NewIn1_1_2 What's new in 1.1.2?

Changes in FluidSynth 1.1.2 concerning developers:

- Build system has switched from autotools to CMake. For more information, see
  README.cmake. The autotools build system is still working, but it is 
  deprecated. The "winbuild" and "macbuild" directories have been dropped in 
  favor of CMake's ability to create project files on demand.
- Thread safety has been reworked to overcome the limitations and bugs in 
  version 1.1.0 and 1.1.1. There are two new settings controlling the thread 
  safety, synth.threadsafe-api and synth.parallel-render. More information 
  about these settings is in the \ref CreatingSettings section. Please look
  them through and set them appropriately according to your use case.
- Voice overflow, i.e. which voice to kill when polyphony is exceeded, is now
  configurable.
- Polyphony and sample rate can now be updated in real time (see the sketch
  after this list). Note that updating the polyphony is not hard real-time safe,
  and updating the sample rate will kill all currently sounding voices.
- MIDI Bank Select handling is now configurable. See the synth.midi-bank-select
  setting in the \ref CreatingSettings section for more information.
- Can use RealTimeKit (on Linux) to get real-time priority, if the original 
  attempt fails. Note that you'll need development headers for DBus to enable 
  this functionality.
- Shell commands for pitch bend and pitch bend range.
- PulseAudio driver: two new settings allow you to specify the media role 
  and control whether PulseAudio can adjust the latency.
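
As a minimal sketch of the run-time updates mentioned above (an existing synth instance is assumed; fluid_synth_set_polyphony() and fluid_synth_set_sample_rate() are the calls referred to):

\code
/* Not hard real-time safe */
fluid_synth_set_polyphony(synth, 512);

/* Kills all currently sounding voices */
fluid_synth_set_sample_rate(synth, 48000.0f);
\endcode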

\section NewIn1_1_1 What's new in 1.1.1?

Changes in FluidSynth 1.1.1 concerning developers:

- fluid_synth_get_channel_preset() marked as deprecated.  New function
  fluid_synth_get_channel_info() added which is thread safe and should replace
  most uses of the older function.
- Added fluid_synth_unset_program() to unset the active preset on a channel.
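
A minimal sketch of the new query call, assuming the fluid_synth_channel_info_t structure from the synth API, an existing synth instance and &lt;stdio.h&gt;:

\code
fluid_synth_channel_info_t info;

if (fluid_synth_get_channel_info(synth, 0, &info) == FLUID_OK && info.assigned)
{
    /* Print the preset currently assigned to MIDI channel 0 */
    printf("channel 0: bank %d, program %d, preset '%s'\n",
           info.bank, info.program, info.name);
}
\endcode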

\section NewIn1_1_0 What's new in 1.1.0?

Overview of changes in FluidSynth 1.1.0 concerning developers:

- Extensive work to make FluidSynth thread safe.  Previous versions had many multi-thread
  issues which could lead to crashes or synthesis glitches.  Some of the API additions,
  deprecations and recommended conditions of use for functions are related to this work.
- File renderer object for rendering audio to files.
- Sequencer objects can now use the system timer or the sample clock.  When using the sample
  clock, events are triggered based on the current output audio sample position.  This means
  that MIDI is synchronized with the audio and identical output will be achieved for the same
  MIDI event input.
- libsndfile support for rendering audio to different formats and file types.
- API for using the MIDI router subsystem.
- MIDI Tuning Standard functions were added for specifying whether to activate tuning changes
  in realtime or not.
- SYSEX support (MIDI Tuning Standard only at the moment).
- Changed all yes/no boolean string settings to integer #FLUID_HINT_TOGGLED settings with
  backwards compatibility (assignment and query of boolean values as strings).
- Many other improvements and bug fixes.

API additions:

- A file renderer can be created with new_fluid_file_renderer(), deleted with
  delete_fluid_file_renderer(), and a block of audio processed with
  fluid_file_renderer_process_block() (see the sketch after this list).
- Additional functions were added for using the MIDI router subsystem.
  To clear all rules from a router use fluid_midi_router_clear_rules() and to set a router to default rules
  use fluid_midi_router_set_default_rules().
  To create a router rule use new_fluid_midi_router_rule() and to delete a rule use
  delete_fluid_midi_router_rule() (seldom used).  Set values of a router rule with
  fluid_midi_router_rule_set_chan(), fluid_midi_router_rule_set_param1() and fluid_midi_router_rule_set_param2().
  fluid_midi_router_add_rule() can be used to add a rule to a router.
- New MIDI event functions were added, including fluid_event_channel_pressure(), fluid_event_system_reset(),
  and fluid_event_unregistering().  
- Additional sequencer functions include fluid_sequencer_add_midi_event_to_buffer(),
  fluid_sequencer_get_use_system_timer() and fluid_sequencer_process().  new_fluid_sequencer2() was added to
  allow for the timer type to be specified (system or sample clock). 
- The settings subsystem has some new functions for thread safety: fluid_settings_copystr() and fluid_settings_dupstr().
  There are also new convenience functions: fluid_settings_option_count() to count the number of string options
  for a setting, and fluid_settings_option_concat() to concatenate setting options with a separator.
- MIDI Tuning Standard functions added include: fluid_synth_activate_key_tuning(), fluid_synth_activate_octave_tuning(),
  fluid_synth_activate_tuning() and fluid_synth_deactivate_tuning().  All of these provide a parameter for specifying
  whether tuning changes should occur in realtime (affect existing voices) or not.
- Additional synthesizer API: fluid_synth_get_sfont_by_name() to get a SoundFont by name,
  fluid_synth_program_select_by_sfont_name() to select an instrument by SoundFont name/bank/program,
  fluid_synth_set_gen2() for specifying additional parameters when assigning a generator value,
  fluid_synth_sysex() for sending SYSEX messages to the synth and fluid_synth_get_active_voice_count() to
  get the current number of active synthesis voices.
- Miscellaneous additions: fluid_player_set_loop() to set playlist loop count and fluid_player_get_status() to get current player state.
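
A minimal sketch of the file renderer mentioned in the first item, rendering a MIDI file to disk as fast as possible; the input file names are hypothetical and the settings shown are only those assumed to be relevant here:

\code
#include <fluidsynth.h>

int main(void)
{
    fluid_settings_t* settings = new_fluid_settings();
    fluid_synth_t* synth;
    fluid_player_t* player;
    fluid_file_renderer_t* renderer;

    /* Render to a file, synchronized to the sample clock */
    fluid_settings_setstr(settings, "audio.file.name", "output.wav");
    fluid_settings_setstr(settings, "player.timing-source", "sample");
    fluid_settings_setint(settings, "synth.parallel-render", 1);

    synth = new_fluid_synth(settings);
    fluid_synth_sfload(synth, "example.sf2", 1);   /* hypothetical SoundFont */

    player = new_fluid_player(synth);
    fluid_player_add(player, "song.mid");          /* hypothetical MIDI file */
    fluid_player_play(player);

    renderer = new_fluid_file_renderer(synth);
    while (fluid_player_get_status(player) == FLUID_PLAYER_PLAYING)
    {
        if (fluid_file_renderer_process_block(renderer) != FLUID_OK)
            break;
    }

    delete_fluid_file_renderer(renderer);
    delete_fluid_player(player);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
\endcode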



\section CreatingSettings Creating and changing the settings

Before you can use the synthesizer, you have to create a settings object. The settings object is used by many components of the FluidSynth library. It gives a unified API to set the parameters of the audio drivers, the MIDI drivers, the synthesizer, and so forth. A number of default settings are defined by the current implementation.

All settings have a name that follows the "dotted-name" notation. For example, "synth.polyphony" refers to the number of voices (polyphony) preallocated by the synthesizer. The settings also have a type. There are currently three types: strings, numbers (double floats), and integers. You can change the values of a setting using the fluid_settings_setstr(), fluid_settings_setnum(), and fluid_settings_setint() functions. For example: 

\code
#include <fluidsynth.h>

int main(int argc, char** argv) 
{
    fluid_settings_t* settings = new_fluid_settings();
    fluid_settings_setint(settings, "synth.polyphony", 128);
    /* ... */
    delete_fluid_settings(settings);
    return 0;
}
\endcode

The API contains functions to query the type, the current value, the default value, the range and the "hints" of a setting. The range is the minimum and maximum value of the setting. The hints give additional information about a setting: for example, whether a string represents a filename, or whether a number should be interpreted on a logarithmic scale. Check the settings.h API documentation for a description of all functions. 
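
A minimal sketch of querying settings, assuming the query functions declared in settings.h (fluid_settings_get_type(), fluid_settings_getnum_default(), fluid_settings_getnum_range() and fluid_settings_get_hints()):

\code
#include <stdio.h>
#include <fluidsynth.h>

int main(void)
{
    fluid_settings_t* settings = new_fluid_settings();
    double min, max;

    /* Numeric settings expose a default value and a valid range */
    if (fluid_settings_get_type(settings, "synth.gain") == FLUID_NUM_TYPE)
    {
        fluid_settings_getnum_range(settings, "synth.gain", &min, &max);
        printf("synth.gain: default %f, range %f-%f\n",
               fluid_settings_getnum_default(settings, "synth.gain"), min, max);
    }

    /* Hints carry extra information, e.g. whether a setting is an on/off toggle */
    if (fluid_settings_get_hints(settings, "synth.reverb.active") & FLUID_HINT_TOGGLED)
        printf("synth.reverb.active is an on/off switch\n");

    delete_fluid_settings(settings);
    return 0;
}
\endcode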

\section CreatingSynth Creating the synthesizer

To create the synthesizer, you pass it the settings object, as in the following example: 

\code
#include <fluidsynth.h>

int main(int argc, char** argv) 
{
    fluid_settings_t* settings;
    fluid_synth_t* synth;
    settings = new_fluid_settings();
    synth = new_fluid_synth(settings);

    /* Do useful things here */

    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
\endcode

The following table provides details on all the settings used by the synthesizer. 

<table border="1" cellspacing="0">
  <caption>Table 1. Synthesizer settings</caption>
  <tr>
    <td>synth.audio-channels</td>
    <td>Type</td>
    <td>integer</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>1</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>1-128</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>By default, the synthesizer outputs a single stereo signal.
    Using this option, the synthesizer can output multichannel
    audio. Sets the number of stereo channel pairs. So 1 is actually
    2 channels (a stereo pair).</td>
  </tr>

  <tr>
    <td>synth.audio-groups</td>
    <td>Type</td>
    <td>integer</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>1</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>1-128</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Normally the same value as synth.audio-channels. The LADSPA
    effects subsystem can use this value, though, in which case it may
    differ.</td>
  </tr>

  <tr>
    <td>synth.chorus.active</td>
    <td>Type</td>
    <td>boolean</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>1 (TRUE)</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>When set to 1 (TRUE) the chorus effects module is activated.
    Otherwise, no chorus will be added to the output signal. Note
    that the amount of signal sent to the chorus module depends on the
    "chorus send" generator defined in the SoundFont.</td>
  </tr>

  <tr>
    <td>synth.cpu-cores</td>
    <td>Type</td>
    <td>integer</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>1</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>1-256</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>(Experimental) Sets the number of synthesis CPU cores. If set to a value
    greater than 1, then additional synthesis threads will be created to take
    advantage of a multi CPU or CPU core system. This has the effect of
    utilizing more of the total CPU for voices or decreasing render times
    when synthesizing audio to a file.</td>
  </tr>

  <tr>
    <td>synth.device-id</td>
    <td>Type</td>
    <td>integer</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>0</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>0-126</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Device identifier used for SYSEX commands, such as MIDI
    Tuning Standard commands. Only those SYSEX commands destined
    for this ID or to all devices will be acted upon.</td>
  </tr>

  <tr>
    <td>synth.dump</td>
    <td>Type</td>
    <td>boolean</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>0 (FALSE)</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Does nothing currently.</td>
  </tr>

  <tr>
    <td>synth.effects-channels</td>
    <td>Type</td>
    <td>integer</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>2</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>2-2</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td></td>
  </tr>

  <tr>
    <td>synth.gain</td>
    <td>Type</td>
    <td>number</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>0.2</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>0.0-10.0</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>The gain is applied to the final or master output of the
    synthesizer. It is set to a low value by default to avoid the
    saturation of the output when many notes are played.</td>
  </tr>

  <tr>
    <td>synth.ladspa.active</td>
    <td>Type</td>
    <td>boolean</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>0 (FALSE)</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>When set to "yes" the LADSPA subsystem will be enabled. This
    subsystem allows to load and interconnect LADSPA plug-ins. The
    output of the synthesizer is processed by the LADSPA subsystem.
    Note that the synthesizer has to be compiled with LADSPA
    support. More information about the LADSPA subsystem
    later.</td>
  </tr>

  <tr>
    <td>synth.midi-channels</td>
    <td>Type</td>
    <td>integer</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>16</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>16-256</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>This setting defines the number of MIDI channels of the
    synthesizer. The MIDI standard defines 16 channels, so MIDI
    hardware is limited to this number. Internally FluidSynth can use
    more channels which can be mapped to different MIDI sources.</td>
  </tr>

  <tr>
    <td>synth.midi-bank-select</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>gs</td>
  </tr>
  <tr>
    <td></td>
    <td>Options</td>
    <td>gm, gs, xg, mma</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>This setting defines how the synthesizer interprets Bank Select messages.
       <ul>
         <li>gm: ignores CC0 and CC32 messages.</li>
         <li>gs: (default) CC0 becomes the bank number, CC32 is ignored.</li>
         <li>xg: CC32 becomes the bank number, CC0 is ignored.</li>
         <li>mma: bank is calculated as CC0*128+CC32.</li>
       </ul>  
    </td>
  </tr>

  <tr>
    <td>synth.min-note-length</td>
    <td>Type</td>
    <td>integer</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>10</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>0-65535</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Sets the minimum note duration in milliseconds. This
    ensures that really short duration note events, such as
    percussion notes, have a better chance of sounding as
    intended. Set to 0 to disable this feature.</td>
  </tr>

  <tr>
    <td>synth.parallel-render</td>
    <td>Type</td>
    <td>boolean</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>1 (TRUE)</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>synth.parallel-render is the low-latency setting. If on, you're allowed
to call fluid_synth_write_s16, fluid_synth_write_float,
fluid_synth_nwrite_float or fluid_synth_process in parallel with the
rest of the calls, and they won't be blocked by time-intensive calls to
the synth. Turn it off if throughput is more important than latency, e.g.
in rendering-to-file scenarios where underruns are not an issue.</td>
  </tr>


  <tr>
    <td>synth.polyphony</td>
    <td>Type</td>
    <td>integer</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>256</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>1-65535</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>The polyphony defines how many voices can be played in
    parallel. A note event produces one or more voices.
    It is good to set this to a value which the system can handle,
    thus limiting FluidSynth's CPU usage. When FluidSynth
    runs out of voices it will begin terminating lower priority
    voices for new note events.</td>
  </tr>

  <tr>
    <td>synth.reverb.active</td>
    <td>Type</td>
    <td>boolean</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>1 (TRUE)</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>When set to 1 (TRUE) the reverb effects module is activated.
    Otherwise, no reverb will be added to the output signal. Note
    that the amount of signal sent to the reverb module depends on the
    "reverb send" generator defined in the SoundFont.</td>
  </tr>

  <tr>
    <td>synth.sample-rate</td>
    <td>Type</td>
    <td>number</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>44100</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>22050-96000</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>The sample rate of the audio generated by the
    synthesizer.</td>
  </tr>

  <tr>
    <td>synth.threadsafe-api</td>
    <td>Type</td>
    <td>boolean</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>1 (TRUE)</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>synth.threadsafe-api controls whether the synth's public API is
protected by a mutex. The default is on; turn it off for slightly
better performance if you know the synth is accessed from one thread
only, which could be the case in many embedded use cases, for example.
Note that libfluidsynth can use many threads by itself (the shell
is one, the MIDI driver is one, the MIDI player is one, etc.) so you should
usually leave it on. Also see synth.parallel-render.</td>
  </tr>


  <tr>
    <td>synth.verbose</td>
    <td>Type</td>
    <td>boolean</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>0 (FALSE)</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>When set to 1 (TRUE) the synthesizer will print out
    information about the received MIDI events to stdout. This
    can be helpful for debugging. This setting cannot be changed
    after the synthesizer has started.</td>
  </tr>
</table>

\section CreatingAudioDriver Creating the Audio Driver

The synthesizer itself does not write any audio to the audio output. This allows application developers to manage the audio output themselves if they wish. The next section describes the use of the synthesizer without an audio driver in more detail.

Creating the audio driver is straightforward: set the appropriate settings and create the driver object. Because FluidSynth has support for several audio systems, you may want to change which one is used. The list below shows the audio systems that are currently supported. It displays the name, as used by the fluidsynth library, and a description. 

- jack: JACK Audio Connection Kit (Linux, Mac OS X, Windows)
- alsa: Advanced Linux Sound Architecture (Linux)
- oss: Open Sound System (Linux, Unix)
- pulseaudio: PulseAudio (Linux, Mac OS X, Windows)
- coreaudio: Apple CoreAudio (Mac OS X)
- dsound: Microsoft DirectSound (Windows)
- portaudio: PortAudio Library (Mac OS 9 & X, Windows, Linux)
- sndman: Apple SoundManager (Mac OS Classic)
- dart: DART sound driver (OS/2)
- file: Driver to output audio to a file

The default audio driver depends on the settings with which FluidSynth was compiled. You can get the default driver with fluid_settings_getstr_default(). To get the list of available drivers use the fluid_settings_foreach_option() function. Finally, you can set the driver with fluid_settings_setstr(). In most cases, the default driver should work out of the box. 
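
A minimal sketch of listing the available drivers and selecting one, assuming the functions named above; the callback name print_option is just an illustration:

\code
#include <stdio.h>
#include <fluidsynth.h>

/* Callback invoked once per available option of a setting */
void print_option(void* data, char* name, char* option)
{
    printf("  %s\n", option);
}

int main(void)
{
    fluid_settings_t* settings = new_fluid_settings();

    printf("Default audio driver: %s\n",
           fluid_settings_getstr_default(settings, "audio.driver"));

    printf("Available audio drivers:\n");
    fluid_settings_foreach_option(settings, "audio.driver", NULL, print_option);

    /* Select a driver explicitly instead of relying on the default */
    fluid_settings_setstr(settings, "audio.driver", "alsa");

    delete_fluid_settings(settings);
    return 0;
}
\endcode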

Additional options that define the audio quality and latency are "audio.sample-format", "audio.period-size", and "audio.periods". The details are described later. 

You create the audio driver with the new_fluid_audio_driver() function. This function takes the settings and synthesizer object as arguments. For example: 

\code
void init() 
{
    fluid_settings_t* settings;
    fluid_synth_t* synth;
    fluid_audio_driver_t* adriver;
    settings = new_fluid_settings();

    /* Set the synthesizer settings, if necessary */
    synth = new_fluid_synth(settings);

    fluid_settings_setstr(settings, "audio.driver", "jack");
    adriver = new_fluid_audio_driver(settings, synth);
}
\endcode

As soon as the audio driver is created, it will start playing. The audio driver creates a separate thread that uses the synthesizer object to generate the audio.
 
There are a number of general audio driver settings. The audio.driver setting defines the audio subsystem that will be used. The audio.periods and audio.period-size settings define the latency and robustness against scheduling delays. There are additional settings for the specific audio subsystems, which are documented in a separate table below.

<table border="1" cellspacing="0">
  <caption>Table 2. General audio driver settings</caption>
  <tr>
    <td>audio.driver</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>jack (Linux), dsound (Windows), sndman (MacOS9), coreaudio
    (Mac OS X), dart (OS/2)</td>
  </tr>
  <tr>
    <td></td>
    <td>Options</td>
    <td>jack, alsa, oss, pulseaudio, coreaudio, dsound, portaudio, sndman, dart, file</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>The audio system to be used.</td>
  </tr>

  <tr>
    <td>audio.periods</td>
    <td>Type</td>
    <td>int</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>16 (Linux, Mac OS X), 8 (Windows)</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>2-64</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>The number of audio buffers used by the driver. This
    number of buffers, multiplied by the buffer size (see setting
    audio.period-size), determines the maximum latency of the audio
    driver.</td>
  </tr>

  <tr>
    <td>audio.period-size</td>
    <td>Type</td>
    <td>integer</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>64 (Linux, Mac OS X), 512 (Windows)</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>64-8192</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>The size of the audio buffers (in frames).</td>
  </tr>

  <tr>
    <td>audio.realtime-prio</td>
    <td>Type</td>
    <td>integer</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>60</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>0-99</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Sets the realtime scheduling priority of the audio synthesis thread
    (0 disables high priority scheduling).  Linux is the only platform which
    currently makes use of different priority levels.  Drivers which use this
    option: alsa, oss and pulseaudio</td>
  </tr>

  <tr>
    <td>audio.sample-format</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>"16bits"</td>
  </tr>
  <tr>
    <td></td>
    <td>Options</td>
    <td>"16bits", "float"</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>The format of the audio samples. This is currently only an
    indication; the audio driver may ignore this setting if it
    can't handle the specified format.</td>
  </tr>
</table>


The following table describes audio driver specific settings.

<table border="1" cellspacing="0">
  <caption>Table 3. Audio driver specific settings</caption>
  <tr>
    <td>audio.alsa.device</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>"default"</td>
  </tr>
  <tr>
    <td></td>
    <td>Options</td>
    <td>ALSA device string, such as: "hw:0", "plughw:1", etc.</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Selects the ALSA audio device to use.</td>
  </tr>

  <tr>
    <td>audio.coreaudio.device</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>"default"</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Selects the CoreAudio device to use.</td>
  </tr>

  <tr>
    <td>audio.dart.device</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>"default"</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Selects the Dart (OS/2 driver) device to use.</td>
  </tr>

  <tr>
    <td>audio.dsound.device</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>"default"</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Selects the DirectSound (Windows) device to use.</td>
  </tr>

  <tr>
    <td>audio.file.endian</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>'auto' if libsndfile support is built in, 'cpu' otherwise.</td>
  </tr>
  <tr>
    <td></td>
    <td>Options</td>
    <td>auto, big, cpu, little ('cpu' is all that is supported if libsndfile
    support is not built in)</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Defines the byte order when using the 'file' driver or file renderer to
    store audio to a file. 'auto' uses the default for the given file type,
    'cpu' uses the CPU byte order, 'big' uses big endian byte order and 'little' uses
    little endian byte order.</td>
  </tr>

  <tr>
    <td>audio.file.format</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>s16</td>
  </tr>
  <tr>
    <td></td>
    <td>Options</td>
    <td>double, float, s16, s24, s32, s8, u8 ('s16' is all that is supported if
    libsndfile support not built in)</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Defines the audio format when rendering audio to a file. 'double' is
    64 bit floating point, 'float' is 32 bit floating point, 's16' is 16 bit signed
    PCM, 's24' is 24 bit signed PCM, 's32' is 32 bit signed PCM, 's8' is 8 bit
    signed PCM and 'u8' is 8 bit unsigned PCM.</td>
  </tr>

  <tr>
    <td>audio.file.name</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>'fluidsynth.wav' if libsndfile support is built in, 'fluidsynth.raw'
    otherwise.</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Specifies the file name to store the audio to, when rendering audio to
    a file.</td>
  </tr>

  <tr>
    <td>audio.file.type</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>'auto' if libsndfile support is built in, 'raw' otherwise.</td>
  </tr>
  <tr>
    <td></td>
    <td>Options</td>
    <td>aiff, au, auto, avr, caf, flac, htk, iff, mat, oga, paf, pvf, raw, sd2, sds,
    sf, voc, w64, wav, xi (actual list of types may vary and depends on the
    libsndfile library used, 'raw' is the only type available if no libsndfile
    support is built in).</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Sets the file type of the file which the audio will be stored to.
    'auto' attempts to determine the file type from the audio.file.name file
    extension and falls back to 'wav' if the extension doesn't match any types.</td>
  </tr>

  <tr>
    <td>audio.jack.autoconnect</td>
    <td>Type</td>
    <td>boolean</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>0 (FALSE)</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>If 1 (TRUE), then FluidSynth output is automatically connected to jack
    system audio output.</td>
  </tr>

  <tr>
    <td>audio.jack.id</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>fluidsynth</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>ID used when creating Jack client connection.</td>
  </tr>

  <tr>
    <td>audio.jack.multi</td>
    <td>Type</td>
    <td>boolean</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>0 (FALSE)</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>If 1 (TRUE), then multi-channel Jack output will be enabled if
    synth.audio-channels is greater than 1.</td>
  </tr>

  <tr>
    <td>audio.jack.server</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td></td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Jack server to connect to. Defaults to an empty string, which uses the
    default Jack server.</td>
  </tr>

  <tr>
    <td>audio.oss.device</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>/dev/dsp</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Device to use for OSS audio output.</td>
  </tr>

  <tr>
    <td>audio.portaudio.device</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>PortAudio Default</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Device to use for PortAudio driver output. Note that 'PortAudio Default'
    is a special value which outputs to the default PortAudio device.</td>
  </tr>

  <tr>
    <td>audio.pulseaudio.device</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>"default"</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Device to use for PulseAudio driver output</td>
  </tr>

  <tr>
    <td>audio.pulseaudio.server</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>"default"</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Server to use for PulseAudio driver output</td>
  </tr>
</table>

\section UsingSynth Using the synthesizer without an audio driver

It is possible to use the synthesizer object without creating an audio driver. This is desirable if the application using FluidSynth manages the audio output itself. The synthesizer has several API functions that can be used to obtain the audio output: 

fluid_synth_write_s16() fills two buffers (left and right channel) with samples coded as signed 16-bit values (the endianness is machine dependent). fluid_synth_write_float() fills a left and a right audio buffer with 32-bit floating point samples. For multi-channel audio output, the function fluid_synth_nwrite_float() has to be used.

The function fluid_synth_process() is still experimental and its use is therefore not recommended but it will probably become the generic interface in future versions. 
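
A minimal sketch of pulling audio from the synthesizer with fluid_synth_write_float(); the application is assumed to hand the rendered buffers to its own audio output, and the SoundFont path is hypothetical:

\code
#include <fluidsynth.h>

int main(void)
{
    fluid_settings_t* settings = new_fluid_settings();
    fluid_synth_t* synth = new_fluid_synth(settings);
    float left[512], right[512];
    int block;

    fluid_synth_sfload(synth, "example.sf2", 1);   /* hypothetical SoundFont */
    fluid_synth_noteon(synth, 0, 60, 100);

    /* Render 100 blocks of 512 frames each into separate left/right buffers */
    for (block = 0; block < 100; block++)
    {
        fluid_synth_write_float(synth, 512, left, 0, 1, right, 0, 1);
        /* ... pass 'left' and 'right' to the application's audio output ... */
    }

    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
\endcode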

\section LoadingSoundfonts Loading and managing SoundFonts

Before any sound can be produced, the synthesizer needs a SoundFont.

SoundFonts are loaded with the fluid_synth_sfload() function. The function takes the path to a SoundFont file and a boolean to indicate whether the presets of the MIDI channels should be updated after the SoundFont is loaded. When the boolean value is TRUE, all MIDI channel bank and program numbers will be refreshed, which may cause new instruments to be selected from the newly loaded SoundFont.

The synthesizer can load any number of SoundFonts. The loaded SoundFonts are treated as a stack, where each newly loaded SoundFont is placed at the top of the stack. When selecting presets by bank and program numbers, SoundFonts are searched beginning at the top of the stack. In the case where there are presets in different SoundFonts with identical bank and program numbers, the preset from the most recently loaded SoundFont is used. fluid_synth_program_select() can be used to select a preset unambiguously, or bank offsets can be applied to each SoundFont with fluid_synth_set_bank_offset() to try to ensure that each preset has unique bank and program numbers.

The fluid_synth_sfload() function returns the unique identifier of the loaded SoundFont, or -1 in case of an error. This identifier is used in subsequent management functions: fluid_synth_sfunload() removes the SoundFont, fluid_synth_sfreload() reloads the SoundFont. When a SoundFont is reloaded, it retains its ID and position on the SoundFont stack.

Additional API functions are provided to get the number of loaded SoundFonts and to get a pointer to the SoundFont. 
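
A minimal sketch of loading, selecting, reloading and unloading a SoundFont using the identifier returned by fluid_synth_sfload(); the file name is hypothetical:

\code
#include <fluidsynth.h>

int main(void)
{
    fluid_settings_t* settings = new_fluid_settings();
    fluid_synth_t* synth = new_fluid_synth(settings);
    int sfont_id;

    /* Load the SoundFont and refresh the presets on all MIDI channels */
    sfont_id = fluid_synth_sfload(synth, "example.sf2", 1);
    if (sfont_id == -1)
    {
        /* handle the error */
    }

    /* Unambiguously select bank 0, preset 0 of this SoundFont on channel 0 */
    fluid_synth_program_select(synth, 0, sfont_id, 0, 0);

    /* ... make some noise ... */

    /* Reload the file (it keeps its ID and stack position), then remove it */
    fluid_synth_sfreload(synth, sfont_id);
    fluid_synth_sfunload(synth, sfont_id, 1);

    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
\endcode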

\section SendingMIDI Sending MIDI Events

Once the synthesizer is up and running and a SoundFont is loaded, most people will want to do something useful with it. Make noise, for example. MIDI messages can be sent using the fluid_synth_noteon(), fluid_synth_noteoff(), fluid_synth_cc(), fluid_synth_pitch_bend(), fluid_synth_pitch_wheel_sens(), and fluid_synth_program_change() functions. For convenience, there's also a fluid_synth_bank_select() function (the bank select message is normally sent using a control change message). 
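
As a minimal sketch (assuming a synth that is already created, with a SoundFont loaded and an audio driver running), selecting an instrument and playing a single note looks like this:

\code
fluid_synth_program_change(synth, 0, 19);   /* e.g. program 19, Church Organ in GM */
fluid_synth_cc(synth, 0, 7, 100);           /* CC 7: channel volume */
fluid_synth_noteon(synth, 0, 60, 100);      /* key 60 (middle C), velocity 100 */
/* ... let the note sound for a while ... */
fluid_synth_noteoff(synth, 0, 60);
\endcode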

The following example shows a generic graphical button that plays a note when clicked: 

\code
class SoundButton : public SomeButton
{
public:	

    SoundButton() : SomeButton() {
        if (!_synth) {
            initSynth();
        }
    }

    static void initSynth() {
        _settings = new_fluid_settings();
        _synth = new_fluid_synth(_settings);
        _adriver = new_fluid_audio_driver(_settings, _synth);
    }

    /* ... */

    virtual int handleMouseDown(int x, int y) {
        /* Play a note on key 60 with velocity 100 on MIDI channel 0 */
        fluid_synth_noteon(_synth, 0, 60, 100);
        return 0;
    }

    virtual int handleMouseUp(int x, int y) {
        /* Release the note on key 60 */
        fluid_synth_noteoff(_synth, 0, 60);
        return 0;
    }

protected:

    static fluid_settings_t* _settings;
    static fluid_synth_t* _synth;
    static fluid_audio_driver_t* _adriver;
};
\endcode

\section RealtimeMIDI Creating a Real-time MIDI Driver

FluidSynth can process real-time MIDI events received from hardware MIDI ports or other applications. To do so, the client must create a MIDI input driver. It is a very similar process to the creation of the audio driver: you initialize some properties in a settings instance and call the new_fluid_midi_driver() function providing a callback function that will be invoked when a MIDI event is received. The following MIDI drivers are currently supported:

- jack: JACK Audio Connection Kit MIDI driver (Linux, Mac OS X)
- oss: Open Sound System raw MIDI (Linux, Unix)
- alsa_raw: ALSA raw MIDI interface (Linux)
- alsa_seq: ALSA sequencer MIDI interface (Linux)
- winmidi: Microsoft Windows MM System (Windows)
- midishare: MIDI Share (Linux, Mac OS X)
- coremidi: Apple CoreMIDI (Mac OS X)

\code
#include <stdio.h>
#include <fluidsynth.h>

int handle_midi_event(void* data, fluid_midi_event_t* event)
{
    printf("event type: %d\n", fluid_midi_event_get_type(event));
    return 0;
}

int main(int argc, char** argv)
{
    fluid_settings_t* settings;
    fluid_midi_driver_t* mdriver;
    settings = new_fluid_settings();
    mdriver = new_fluid_midi_driver(settings, handle_midi_event, NULL);
    /* ... */    
    delete_fluid_midi_driver(mdriver);
    return 0;
}
\endcode
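
If the application simply wants incoming MIDI to drive a synthesizer rather than handling every event itself, the synthesizer's own handler can be passed as the callback (settings and synth are assumed to already exist):

\code
mdriver = new_fluid_midi_driver(settings, fluid_synth_handle_midi_event, synth);
\endcode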

There are a number of general MIDI driver settings. The midi.driver setting
defines the MIDI subsystem that will be used. There are additional settings for
the MIDI subsystems used, which are described in a following table. 

<table border="1" cellspacing="0">
  <caption>Table 4. General MIDI driver settings</caption>
  <tr>
    <td>midi.driver</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>alsa_seq (Linux), winmidi (Windows), jack (Mac OS X)</td>
  </tr>
  <tr>
    <td></td>
    <td>Options</td>
    <td>alsa_raw, alsa_seq, coremidi, jack, midishare, oss, winmidi</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>The MIDI system to be used.</td>
  </tr>

  <tr>
    <td>midi.realtime-prio</td>
    <td>Type</td>
    <td>integer</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>50</td>
  </tr>
  <tr>
    <td></td>
    <td>Min-Max</td>
    <td>0-99</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Sets the realtime scheduling priority of the MIDI thread
    (0 disables high priority scheduling).  Linux is the only platform which
    currently makes use of different priority levels.  Drivers which use this
    option: alsa_raw, alsa_seq, oss</td>
  </tr>
</table>

The following table defines MIDI driver specific settings.

<table border="1" cellspacing="0">
  <caption>Table 5. MIDI driver specific settings</caption>
  <tr>
    <td>midi.alsa.device</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>"default"</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>ALSA MIDI device to use for RAW ALSA MIDI driver.</td>
  </tr>

  <tr>
    <td>midi.alsa_seq.device</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>"default"</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>ALSA sequencer device to use for ALSA sequencer driver.</td>
  </tr>

  <tr>
    <td>midi.alsa_seq.id</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>pid</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>ID to use when registering ports with the ALSA sequencer driver. If
    set to "pid" then the ID will be "FLUID Synth (PID)", where PID is the
    FluidSynth process ID; otherwise the provided string will be used in
    place of the PID.</td>
  </tr>

  <tr>
    <td>midi.jack.id</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>fluidsynth-midi</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Client ID to use with the Jack MIDI driver.</td>
  </tr>

  <tr>
    <td>midi.jack.server</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td></td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Jack server to connect to for Jack MIDI driver. If an empty string then
    the default server will be used.</td>
  </tr>

  <tr>
    <td>midi.oss.device</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td>/dev/midi</td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Device to use for OSS MIDI driver.</td>
  </tr>

  <tr>
    <td>midi.portname</td>
    <td>Type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>Default</td>
    <td></td>
  </tr>
  <tr>
    <td></td>
    <td>Description</td>
    <td>Used by coremidi and alsa_seq drivers for the portnames registered with
    the MIDI subsystem.</td>
  </tr>
</table>

\section MIDIPlayer Loading and Playing a MIDI file

FluidSynth can be used to play MIDI files, using the MIDI File Player interface. It is a high-level interface, though its implementation is currently incomplete. After initializing the synthesizer, create the player, passing the synth instance to new_fluid_player(). Then you can add some SMF file names to the player using fluid_player_add(), and finally call fluid_player_play() to start the playback. You can check whether the player has finished by calling fluid_player_get_status(), or wait for the player to terminate using fluid_player_join().

\code
#include <fluidsynth.h>

int main(int argc, char** argv) 
{
    int i;
    fluid_settings_t* settings;
    fluid_synth_t* synth;
    fluid_player_t* player;
    fluid_audio_driver_t* adriver;
    settings = new_fluid_settings();
    synth = new_fluid_synth(settings);
    player = new_fluid_player(synth);
    adriver = new_fluid_audio_driver(settings, synth);
    /* process command line arguments */
    for (i = 1; i < argc; i++) {
        if (fluid_is_soundfont(argv[i])) {
           fluid_synth_sfload(synth, argv[i], 1);
        }
        if (fluid_is_midifile(argv[i])) {
            fluid_player_add(player, argv[i]);
        }
    }
    /* play the midi files, if any */
    fluid_player_play(player);
    /* wait for playback termination */
    fluid_player_join(player);
    /* cleanup */
    delete_fluid_audio_driver(adriver);
    delete_fluid_player(player);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
\endcode

Settings which the MIDI player uses are documented below.

<table border="1" cellspacing="0">
  <caption>Table 6. MIDI player settings</caption>
  <tr>
    <td>player.reset-synth</td>
    <td>type</td>
    <td>boolean</td>
  </tr>
  <tr>
    <td></td>
    <td>default</td>
    <td>1 (TRUE)</td>
  </tr>
  <tr>
    <td></td>
    <td>description</td>
    <td>If true, reset the synth before starting a new MIDI song, so the state of a previous song can't affect the new song. Turn it off for seamless looping of a song.</td>
  </tr>

  <tr>
    <td>player.timing-source</td>
    <td>type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>default</td>
    <td>'sample'</td>
  </tr>
  <tr>
    <td></td>
    <td>options</td>
    <td>'sample', 'system'</td>
  </tr>
  <tr>
    <td></td>
    <td>description</td>
    <td>Determines the timing source of the player sequencer. 'sample' uses
    the sample clock (how much audio has been output) to sequence events,
    in which case audio is synchronized with MIDI events. 'system' uses the
    system clock; audio and MIDI are not synchronized exactly.</td>
  </tr>
</table>
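
As a minimal sketch of the seamless-looping case mentioned in Table 6 (settings and synth are assumed to already exist, the file name is hypothetical, and the loop count semantics are those documented for fluid_player_set_loop()):

\code
/* Configure before creating the player: don't reset the synth between songs */
fluid_settings_setint(settings, "player.reset-synth", 0);

player = new_fluid_player(synth);
fluid_player_add(player, "song.mid");   /* hypothetical MIDI file */
fluid_player_set_loop(player, 10);      /* repeat the playlist */
fluid_player_play(player);
\endcode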

\section MIDIRouter Real-time MIDI router

The MIDI router is one more processing layer directly behind the MIDI input. It processes incoming MIDI events and generates control events for the synth. It can be used to filter or modify events prior to sending them to the synthesizer. When created, the MIDI router is transparent and simply passes all MIDI events. Router "rules" must be added to actually make use of its capabilities.

Some examples of MIDI router usage:

- Filter messages (Example: Pass sustain pedal CCs only to selected channels)
- Split the keyboard (Example: noteon with notenr < x: to ch 1, >x to ch 2)
- Layer sounds (Example: for each noteon received on ch 1, create a noteon on ch1, ch2, ch3,...)
- Velocity scaling (Example: for each noteon event, scale the velocity by 1.27)
- Velocity switching (Example: v <= 100: "Angel Choir"; v > 100: "Hell's Bells")
- Get rid of aftertouch

The MIDI driver API has a clean separation between the MIDI thread and the synthesizer. That opens the door to adding a MIDI router module.

MIDI events coming from the MIDI player do not pass through the MIDI router.

\code
#include <fluidsynth.h>

int main(int argc, char** argv) 
{
    fluid_settings_t* settings;
    fluid_synth_t* synth;
    fluid_midi_router_t* router;
    fluid_midi_router_rule_t* rule;

    settings = new_fluid_settings();
    synth = new_fluid_synth(settings);

    /* Create the MIDI router and pass events to the synthesizer */
    router = new_fluid_midi_router (settings, fluid_synth_handle_midi_event, synth);

    /* Clear default rules */
    fluid_midi_router_clear_rules (router);

    /* Add rule to map all notes < MIDI note #60 on any channel to channel 4 */
    rule = new_fluid_midi_router_rule ();
    fluid_midi_router_rule_set_chan (rule, 0, 15, 0.0, 4);	/* Map all to channel 4 */
    fluid_midi_router_rule_set_param1 (rule, 0, 59, 1.0, 0);	/* Match notes < 60 */
    fluid_midi_router_add_rule (router, rule, FLUID_MIDI_ROUTER_RULE_NOTE);

    /* Add rule to map all notes >= MIDI note #60 on any channel to channel 5 */
    rule = new_fluid_midi_router_rule ();
    fluid_midi_router_rule_set_chan (rule, 0, 15, 0.0, 5);	/* Map all to channel 5 */
    fluid_midi_router_rule_set_param1 (rule, 60, 127, 1.0, 0);	/* Match notes >= 60 */
    fluid_midi_router_add_rule (router, rule, FLUID_MIDI_ROUTER_RULE_NOTE);

    /* Add rule to reverse direction of pitch bender on channel 7 */
    rule = new_fluid_midi_router_rule ();
    fluid_midi_router_rule_set_chan (rule, 7, 7, 1.0, 0);	      /* Match channel 7 only */
    fluid_midi_router_rule_set_param1 (rule, 0, 16383, -1.0, 16383);  /* Reverse pitch bender */
    fluid_midi_router_add_rule (router, rule, FLUID_MIDI_ROUTER_RULE_PITCH_BEND);

    /* ... Create audio driver, process events, etc ... */

    /* cleanup */
    delete_fluid_midi_router(router);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
\endcode



\section Sequencer

FluidSynth's sequencer can be used to play MIDI events in a more flexible way than using the MIDI file player, which expects the events to be stored as Standard MIDI Files. Using the sequencer, you can provide the events one by one, with an optional timestamp for scheduling. 

The client program should first initialize the sequencer instance using the function new_fluid_sequencer2(). There is a complementary function delete_fluid_sequencer() to delete it. After creating the sequencer instance, the destinations can be registered using fluid_sequencer_register_fluidsynth() for the synthesizer destination, and optionally using fluid_sequencer_register_client() for the client destination, providing a suitable callback function. It can be unregistered using fluid_sequencer_unregister_client(). After the initialization, events can be sent with fluid_sequencer_send_now() and scheduled for the future with fluid_sequencer_send_at(). The registration functions return identifiers that can be used as destinations of an event using fluid_event_set_dest().

The function fluid_sequencer_get_tick() returns the current playing position. A program may choose a new timescale in milliseconds using fluid_sequencer_set_time_scale().

The following example uses the fluidsynth sequencer to implement a sort of music box. FluidSynth's internal clock is used to schedule repetitive sequences of notes. The next sequence is scheduled in advance, before the end of the current one, using a timer event that triggers a callback function. The scheduling times are always absolute values, to avoid slippage.

\code
#include "fluidsynth.h"

fluid_synth_t* synth;
fluid_audio_driver_t* adriver;
fluid_sequencer_t* sequencer;
short synthSeqID, mySeqID;
unsigned int now;
unsigned int seqduration;

// prototype
void seq_callback(unsigned int time, fluid_event_t* event, fluid_sequencer_t* seq, void* data);

void createsynth() 
{
    fluid_settings_t* settings;
    settings = new_fluid_settings();
    fluid_settings_setstr(settings, "synth.reverb.active", "yes");
    fluid_settings_setstr(settings, "synth.chorus.active", "no");
    synth = new_fluid_synth(settings);
    adriver = new_fluid_audio_driver(settings, synth);
    sequencer = new_fluid_sequencer2(0);

    // register synth as first destination
    synthSeqID = fluid_sequencer_register_fluidsynth(sequencer, synth);

    // register myself as second destination
    mySeqID = fluid_sequencer_register_client(sequencer, "me", seq_callback, NULL);

    // the sequence duration, in ms
    seqduration = 1000;
}

void deletesynth() 
{
    delete_fluid_sequencer(sequencer);
    delete_fluid_audio_driver(adriver);
    delete_fluid_synth(synth);
}

void loadsoundfont() 
{
    int fluid_res;
    // put your own path here
    fluid_res = fluid_synth_sfload(synth, "Inside:VintageDreamsWaves-v2.sf2", 1);
}

void sendnoteon(int chan, short key, unsigned int date) 
{
    int fluid_res;
    fluid_event_t *evt = new_fluid_event();
    fluid_event_set_source(evt, -1);
    fluid_event_set_dest(evt, synthSeqID);
    fluid_event_noteon(evt, chan, key, 127);
    fluid_res = fluid_sequencer_send_at(sequencer, evt, date, 1);
    delete_fluid_event(evt);
}

void schedule_next_callback() 
{
    int fluid_res;
    // I want to be called back before the end of the next sequence
    unsigned int callbackdate = now + seqduration/2;
    fluid_event_t *evt = new_fluid_event();
    fluid_event_set_source(evt, -1);
    fluid_event_set_dest(evt, mySeqID);
    fluid_event_timer(evt, NULL);
    fluid_res = fluid_sequencer_send_at(sequencer, evt, callbackdate, 1);
    delete_fluid_event(evt);
}

void schedule_next_sequence() {
    // Called more or less before each sequence start
    // the next sequence start date
    now = now + seqduration;

    // the sequence to play

    // the beat : 2 beats per sequence
    sendnoteon(0, 60, now + seqduration/2);
    sendnoteon(0, 60, now + seqduration);

    // melody
    sendnoteon(1, 45, now + seqduration/10);
    sendnoteon(1, 50, now + 4*seqduration/10);
    sendnoteon(1, 55, now + 8*seqduration/10);

    // so that we are called back early enough to schedule the next sequence
    schedule_next_callback();
}

/* sequencer callback */
void seq_callback(unsigned int time, fluid_event_t* event, fluid_sequencer_t* seq, void* data) {
    schedule_next_sequence();
}

int main(void) {
    createsynth();
    loadsoundfont();

    // initialize our absolute date
    now = fluid_sequencer_get_tick(sequencer);
    schedule_next_sequence();

    sleep(100000);
    deletesynth();
    return 0;
}
\endcode

\section Shell Shell interface

The shell interface allows you to send simple textual commands to the synthesizer, to parse a command file, or to read commands from stdin or other input streams. To find the list of currently supported commands, please check the fluid_cmd.c file or type "help" in the fluidsynth command line shell.
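
A minimal sketch of driving the shell programmatically, assuming the command handler API declared in the shell header (new_fluid_cmd_handler(), fluid_command() and fluid_get_stdout()); the SoundFont path is hypothetical:

\code
#include <fluidsynth.h>

int main(void)
{
    fluid_settings_t* settings = new_fluid_settings();
    fluid_synth_t* synth = new_fluid_synth(settings);
    fluid_cmd_handler_t* handler = new_fluid_cmd_handler(synth);

    /* Execute textual commands as if they were typed into the shell */
    fluid_command(handler, "load example.sf2", fluid_get_stdout());
    fluid_command(handler, "noteon 0 60 100", fluid_get_stdout());
    fluid_command(handler, "help", fluid_get_stdout());

    delete_fluid_cmd_handler(handler);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}
\endcode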

<table border="1" cellspacing="0">
  <caption>Table 7. Shell interface settings</caption>
  <tr>
    <td>shell.prompt</td>
    <td>type</td>
    <td>string</td>
  </tr>
  <tr>
    <td></td>
    <td>default</td>
    <td>""</td>
  </tr>
  <tr>
    <td></td>
    <td>description</td>
    <td>In dump mode the prompt is set to "", because the UI cannot easily
    handle lines which don't end with a carriage return. Changing the prompt
    cannot be done through a command, because the current shell
    does not handle empty arguments.</td>
  </tr>
  <tr>
    <td>shell.port</td>
    <td>type</td>
    <td>number</td>
  </tr>
  <tr>
    <td></td>
    <td>default</td>
    <td>9800</td>
  </tr>
  <tr>
    <td></td>
    <td>min-max</td>
    <td>1-65535</td>
  </tr>
  <tr>
    <td></td>
    <td>description</td>
    <td>The shell can be used in a client/server mode. This setting
    controls what TCP/IP port the server uses.</td>
  </tr>
</table>

\section Advanced Advanced features, not yet documented. The API reference may contain more information.

- Accessing low-level voice parameters
- Reverb settings
- Chorus settings
- Interpolation settings (set_gen, get_gen, NRPN)
- Voice overflow settings
- LADSPA effects unit
- Multi-channel audio
- MIDI tunings
- Fast file renderer for rendering audio to file in non-realtime
*/

/*!
\example example.c
Example producing short random music with FluidSynth
*/

/*!
\example fluidsynth_simple.c
A basic example of using fluidsynth to play a single note
*/

/*!
\example fluidsynth_fx.c
Example of using effects with fluidsynth
*/

/*!
\example fluidsynth_metronome.c
Example of a simple metronome using the MIDI sequencer API 
*/

/*!
\example fluidsynth_arpeggio.c
Example of an arpeggio generated using the MIDI sequencer API
*/