<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <title>FFmpeg documentation</title>
    <link rel="stylesheet" href="bootstrap.min.css" />
    <link rel="stylesheet" href="style.min.css" />

<meta name="description" content="FFmpeg Formats Documentation: ">
<meta name="keywords" content="FFmpeg documentation : FFmpeg Formats ">
<meta name="Generator" content="texi2html 5.0">
<!-- Created on January 13, 2016 by texi2html 5.0 -->
<!--
texi2html was written by: 
            Lionel Cons <Lionel.Cons@cern.ch> (original author)
            Karl Berry  <karl@freefriends.org>
            Olaf Bachmann <obachman@mathematik.uni-kl.de>
            and many others.
Maintained by: Many creative people.
Send bugs and suggestions to <texi2html-bug@nongnu.org>

-->
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body>
    <div style="width: 95%; margin: auto">

<h1 class="titlefont">FFmpeg Formats Documentation</h1>
<hr>
<a name="SEC_Top"></a>

<a name="SEC_Contents"></a>
<h1>Table of Contents</h1>

<div class="contents">

<ul class="no-bullet">
  <li><a name="toc-Description" href="#Description">1 Description</a></li>
  <li><a name="toc-Format-Options" href="#Format-Options">2 Format Options</a>
  <ul class="no-bullet">
    <li><a name="toc-Format-stream-specifiers-1" href="#Format-stream-specifiers-1">2.1 Format stream specifiers</a></li>
  </ul></li>
  <li><a name="toc-Demuxers" href="#Demuxers">3 Demuxers</a>
  <ul class="no-bullet">
    <li><a name="toc-applehttp" href="#applehttp">3.1 applehttp</a></li>
    <li><a name="toc-asf" href="#asf">3.2 asf</a></li>
    <li><a name="toc-concat-1" href="#concat-1">3.3 concat</a>
    <ul class="no-bullet">
      <li><a name="toc-Syntax" href="#Syntax">3.3.1 Syntax</a></li>
      <li><a name="toc-Options-7" href="#Options-7">3.3.2 Options</a></li>
    </ul></li>
    <li><a name="toc-flv" href="#flv">3.4 flv</a></li>
    <li><a name="toc-libgme" href="#libgme">3.5 libgme</a></li>
    <li><a name="toc-libquvi" href="#libquvi">3.6 libquvi</a></li>
    <li><a name="toc-gif-1" href="#gif-1">3.7 gif</a></li>
    <li><a name="toc-image2-1" href="#image2-1">3.8 image2</a>
    <ul class="no-bullet">
      <li><a name="toc-Examples-4" href="#Examples-4">3.8.1 Examples</a></li>
    </ul></li>
    <li><a name="toc-mpegts" href="#mpegts">3.9 mpegts</a></li>
    <li><a name="toc-rawvideo" href="#rawvideo">3.10 rawvideo</a></li>
    <li><a name="toc-sbg" href="#sbg">3.11 sbg</a></li>
    <li><a name="toc-tedcaptions" href="#tedcaptions">3.12 tedcaptions</a></li>
  </ul></li>
  <li><a name="toc-Muxers" href="#Muxers">4 Muxers</a>
  <ul class="no-bullet">
    <li><a name="toc-aiff-1" href="#aiff-1">4.1 aiff</a>
    <ul class="no-bullet">
      <li><a name="toc-Options-2" href="#Options-2">4.1.1 Options</a></li>
    </ul></li>
    <li><a name="toc-crc-1" href="#crc-1">4.2 crc</a>
    <ul class="no-bullet">
      <li><a name="toc-Examples-3" href="#Examples-3">4.2.1 Examples</a></li>
    </ul></li>
    <li><a name="toc-framecrc-1" href="#framecrc-1">4.3 framecrc</a>
    <ul class="no-bullet">
      <li><a name="toc-Examples-6" href="#Examples-6">4.3.1 Examples</a></li>
    </ul></li>
    <li><a name="toc-framemd5-1" href="#framemd5-1">4.4 framemd5</a>
    <ul class="no-bullet">
      <li><a name="toc-Examples-1" href="#Examples-1">4.4.1 Examples</a></li>
    </ul></li>
    <li><a name="toc-gif-2" href="#gif-2">4.5 gif</a></li>
    <li><a name="toc-hls-1" href="#hls-1">4.6 hls</a>
    <ul class="no-bullet">
      <li><a name="toc-Options-3" href="#Options-3">4.6.1 Options</a></li>
    </ul></li>
    <li><a name="toc-ico-1" href="#ico-1">4.7 ico</a></li>
    <li><a name="toc-image2-2" href="#image2-2">4.8 image2</a>
    <ul class="no-bullet">
      <li><a name="toc-Examples-5" href="#Examples-5">4.8.1 Examples</a></li>
      <li><a name="toc-Options-8" href="#Options-8">4.8.2 Options</a></li>
    </ul></li>
    <li><a name="toc-matroska" href="#matroska">4.9 matroska</a>
    <ul class="no-bullet">
      <li><a name="toc-Metadata" href="#Metadata">4.9.1 Metadata</a></li>
      <li><a name="toc-Options-6" href="#Options-6">4.9.2 Options</a></li>
    </ul></li>
    <li><a name="toc-md5-1" href="#md5-1">4.10 md5</a></li>
    <li><a name="toc-mov_002c-mp4_002c-ismv" href="#mov_002c-mp4_002c-ismv">4.11 mov, mp4, ismv</a>
    <ul class="no-bullet">
      <li><a name="toc-Options-5" href="#Options-5">4.11.1 Options</a></li>
      <li><a name="toc-Example" href="#Example">4.11.2 Example</a></li>
    </ul></li>
    <li><a name="toc-mp3" href="#mp3">4.12 mp3</a></li>
    <li><a name="toc-mpegts-1" href="#mpegts-1">4.13 mpegts</a>
    <ul class="no-bullet">
      <li><a name="toc-Options" href="#Options">4.13.1 Options</a></li>
      <li><a name="toc-Example-1" href="#Example-1">4.13.2 Example</a></li>
    </ul></li>
    <li><a name="toc-null" href="#null">4.14 null</a></li>
    <li><a name="toc-nut" href="#nut">4.15 nut</a></li>
    <li><a name="toc-ogg" href="#ogg">4.16 ogg</a></li>
    <li><a name="toc-segment_002c-stream_005fsegment_002c-ssegment" href="#segment_002c-stream_005fsegment_002c-ssegment">4.17 segment, stream_segment, ssegment</a>
    <ul class="no-bullet">
      <li><a name="toc-Options-1" href="#Options-1">4.17.1 Options</a></li>
      <li><a name="toc-Examples-2" href="#Examples-2">4.17.2 Examples</a></li>
    </ul></li>
    <li><a name="toc-smoothstreaming" href="#smoothstreaming">4.18 smoothstreaming</a></li>
    <li><a name="toc-tee" href="#tee">4.19 tee</a>
    <ul class="no-bullet">
      <li><a name="toc-Examples" href="#Examples">4.19.1 Examples</a></li>
    </ul></li>
    <li><a name="toc-webm_005fdash_005fmanifest" href="#webm_005fdash_005fmanifest">4.20 webm_dash_manifest</a>
    <ul class="no-bullet">
      <li><a name="toc-Options-4" href="#Options-4">4.20.1 Options</a></li>
      <li><a name="toc-Example-2" href="#Example-2">4.20.2 Example</a></li>
    </ul>
</li>
  </ul></li>
  <li><a name="toc-Metadata-1" href="#Metadata-1">5 Metadata</a></li>
  <li><a name="toc-See-Also" href="#See-Also">6 See Also</a></li>
  <li><a name="toc-Authors" href="#Authors">7 Authors</a></li>
</ul>
</div>


<hr size="6">
<a name="Description"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-Description">1 Description</a></h1>

<p>This document describes the supported formats (muxers and demuxers)
provided by the libavformat library.
</p>

<a name="Format-Options"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-Format-Options">2 Format Options</a></h1>

<p>The libavformat library provides some generic global options, which
can be set on all the muxers and demuxers. In addition each muxer or
demuxer may support so-called private options, which are specific for
that component.
</p>
<p>Options may be set by specifying -<var>option</var> <var>value</var> in the
FFmpeg tools, or by setting the value explicitly in the
<code>AVFormatContext</code> options or using the &lsquo;<tt>libavutil/opt.h</tt>&rsquo; API
for programmatic use.
</p>
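<p>For example (a minimal command-line sketch; the input and output file names are
placeholders), the generic &lsquo;<samp>probesize</samp>&rsquo; and
&lsquo;<samp>analyzeduration</samp>&rsquo; options documented below can be set on the
<code>ffmpeg</code> command line before the input they apply to:
</p><div class="example">
<pre class="example">ffmpeg -probesize 10000000 -analyzeduration 10000000 -i INPUT out.mkv
</pre></div>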
<p>The list of supported options follows:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>avioflags <var>flags</var> (<em>input/output</em>)</samp>&rsquo;</dt>
<dd><p>Possible values:
</p><dl compact="compact">
<dt>&lsquo;<samp>direct</samp>&rsquo;</dt>
<dd><p>Reduce buffering.
</p></dd>
</dl>

</dd>
<dt>&lsquo;<samp>probesize <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set probing size in bytes, i.e. the size of the data to analyze to get
stream information. A higher value will enable detecting more
information in case it is dispersed into the stream, but will increase
latency. Must be an integer not less than 32. It is 5000000 by default.
</p>
</dd>
<dt>&lsquo;<samp>packetsize <var>integer</var> (<em>output</em>)</samp>&rsquo;</dt>
<dd><p>Set packet size.
</p>
</dd>
<dt>&lsquo;<samp>fflags <var>flags</var> (<em>input/output</em>)</samp>&rsquo;</dt>
<dd><p>Set format flags.
</p>
<p>Possible values:
</p><dl compact="compact">
<dt>&lsquo;<samp>ignidx</samp>&rsquo;</dt>
<dd><p>Ignore index.
</p></dd>
<dt>&lsquo;<samp>genpts</samp>&rsquo;</dt>
<dd><p>Generate PTS.
</p></dd>
<dt>&lsquo;<samp>nofillin</samp>&rsquo;</dt>
<dd><p>Do not fill in missing values that can be exactly calculated.
</p></dd>
<dt>&lsquo;<samp>noparse</samp>&rsquo;</dt>
<dd><p>Disable AVParsers, this needs <code>+nofillin</code> too.
</p></dd>
<dt>&lsquo;<samp>igndts</samp>&rsquo;</dt>
<dd><p>Ignore DTS.
</p></dd>
<dt>&lsquo;<samp>discardcorrupt</samp>&rsquo;</dt>
<dd><p>Discard corrupted frames.
</p></dd>
<dt>&lsquo;<samp>sortdts</samp>&rsquo;</dt>
<dd><p>Try to interleave output packets by DTS.
</p></dd>
<dt>&lsquo;<samp>keepside</samp>&rsquo;</dt>
<dd><p>Do not merge side data.
</p></dd>
<dt>&lsquo;<samp>latm</samp>&rsquo;</dt>
<dd><p>Enable RTP MP4A-LATM payload.
</p></dd>
<dt>&lsquo;<samp>nobuffer</samp>&rsquo;</dt>
<dd><p>Reduce the latency introduced by optional buffering.
</p></dd>
</dl>

</dd>
<dt>&lsquo;<samp>seek2any <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>If set to 1, allow seeking to non-keyframes at the demuxer level, when supported.
Default is 0.
</p>
</dd>
<dt>&lsquo;<samp>analyzeduration <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Specify how many microseconds are analyzed to probe the input. A
higher value will enable detecting more accurate information, but will
increase latency. It defaults to 5,000,000 microseconds = 5 seconds.
</p>
</dd>
<dt>&lsquo;<samp>cryptokey <var>hexadecimal string</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set decryption key.
</p>
</dd>
<dt>&lsquo;<samp>indexmem <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set max memory used for timestamp index (per stream).
</p>
</dd>
<dt>&lsquo;<samp>rtbufsize <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set max memory used for buffering real-time frames.
</p>
</dd>
<dt>&lsquo;<samp>fdebug <var>flags</var> (<em>input/output</em>)</samp>&rsquo;</dt>
<dd><p>Print specific debug info.
</p>
<p>Possible values:
</p><dl compact="compact">
<dt>&lsquo;<samp>ts</samp>&rsquo;</dt>
</dl>

</dd>
<dt>&lsquo;<samp>max_delay <var>integer</var> (<em>input/output</em>)</samp>&rsquo;</dt>
<dd><p>Set maximum muxing or demuxing delay in microseconds.
</p>
</dd>
<dt>&lsquo;<samp>fpsprobesize <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set number of frames used to probe fps.
</p>
</dd>
<dt>&lsquo;<samp>audio_preload <var>integer</var> (<em>output</em>)</samp>&rsquo;</dt>
<dd><p>Set microseconds by which audio packets should be interleaved earlier.
</p>
</dd>
<dt>&lsquo;<samp>chunk_duration <var>integer</var> (<em>output</em>)</samp>&rsquo;</dt>
<dd><p>Set microseconds for each chunk.
</p>
</dd>
<dt>&lsquo;<samp>chunk_size <var>integer</var> (<em>output</em>)</samp>&rsquo;</dt>
<dd><p>Set size in bytes for each chunk.
</p>
</dd>
<dt>&lsquo;<samp>err_detect, f_err_detect <var>flags</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set error detection flags. <code>f_err_detect</code> is deprecated and
should be used only via the <code>ffmpeg</code> tool.
</p>
<p>Possible values:
</p><dl compact="compact">
<dt>&lsquo;<samp>crccheck</samp>&rsquo;</dt>
<dd><p>Verify embedded CRCs.
</p></dd>
<dt>&lsquo;<samp>bitstream</samp>&rsquo;</dt>
<dd><p>Detect bitstream specification deviations.
</p></dd>
<dt>&lsquo;<samp>buffer</samp>&rsquo;</dt>
<dd><p>Detect improper bitstream length.
</p></dd>
<dt>&lsquo;<samp>explode</samp>&rsquo;</dt>
<dd><p>Abort decoding on minor error detection.
</p></dd>
<dt>&lsquo;<samp>careful</samp>&rsquo;</dt>
<dd><p>Consider things that violate the spec and have not been seen in the
wild as errors.
</p></dd>
<dt>&lsquo;<samp>compliant</samp>&rsquo;</dt>
<dd><p>Consider all spec non-compliances as errors.
</p></dd>
<dt>&lsquo;<samp>aggressive</samp>&rsquo;</dt>
<dd><p>Consider things that a sane encoder should not do as an error.
</p></dd>
</dl>

</dd>
<dt>&lsquo;<samp>use_wallclock_as_timestamps <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Use wallclock as timestamps.
</p>
</dd>
<dt>&lsquo;<samp>avoid_negative_ts <var>integer</var> (<em>output</em>)</samp>&rsquo;</dt>
<dd>
<p>Possible values:
</p><dl compact="compact">
<dt>&lsquo;<samp>make_non_negative</samp>&rsquo;</dt>
<dd><p>Shift timestamps to make them non-negative.
Also note that this affects only leading negative timestamps, and not
non-monotonic negative timestamps.
</p></dd>
<dt>&lsquo;<samp>make_zero</samp>&rsquo;</dt>
<dd><p>Shift timestamps so that the first timestamp is 0.
</p></dd>
<dt>&lsquo;<samp>auto (default)</samp>&rsquo;</dt>
<dd><p>Enables shifting when required by the target format.
</p></dd>
<dt>&lsquo;<samp>disabled</samp>&rsquo;</dt>
<dd><p>Disables shifting of timestamp.
</p></dd>
</dl>

<p>When shifting is enabled, all output timestamps are shifted by the
same amount. Any de-synchronization between audio, video, and subtitles, and any
relative timestamp differences, are preserved compared to how they would have
been without shifting.
</p>
</dd>
<dt>&lsquo;<samp>skip_initial_bytes <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set the number of bytes to skip before reading the header and frames.
Default is 0.
</p>
</dd>
<dt>&lsquo;<samp>correct_ts_overflow <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Correct single timestamp overflows if set to 1. Default is 1.
</p>
</dd>
<dt>&lsquo;<samp>flush_packets <var>integer</var> (<em>output</em>)</samp>&rsquo;</dt>
<dd><p>Flush the underlying I/O stream after each packet. Default 1 enables it, and
has the effect of reducing the latency; 0 disables it and may slightly
increase performance in some cases.
</p>
</dd>
<dt>&lsquo;<samp>output_ts_offset <var>offset</var> (<em>output</em>)</samp>&rsquo;</dt>
<dd><p>Set the output time offset.
</p>
<p><var>offset</var> must be a time duration specification,
see <a href="ffmpeg-utils.html#time-duration-syntax">(ffmpeg-utils)the Time duration section in the ffmpeg-utils(1) manual</a>.
</p>
<p>The offset is added by the muxer to the output timestamps.
</p>
<p>Specifying a positive offset means that the corresponding streams are
delayed by the time duration specified in <var>offset</var>. Default value
is <code>0</code> (meaning that no offset is applied).
</p></dd>
</dl>


<p><a name="Format-stream-specifiers"></a>
</p><a name="Format-stream-specifiers-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-Format-stream-specifiers-1">2.1 Format stream specifiers</a></h2>

<p>Format stream specifiers allow selection of one or more streams that
match specific properties.
</p>
<p>Possible forms of stream specifiers are:
</p><dl compact="compact">
<dt>&lsquo;<samp><var>stream_index</var></samp>&rsquo;</dt>
<dd><p>Matches the stream with this index.
</p>
</dd>
<dt>&lsquo;<samp><var>stream_type</var>[:<var>stream_index</var>]</samp>&rsquo;</dt>
<dd><p><var>stream_type</var> is one of the following: &rsquo;v&rsquo; for video, &rsquo;a&rsquo; for audio,
&rsquo;s&rsquo; for subtitle, &rsquo;d&rsquo; for data, and &rsquo;t&rsquo; for attachments. If
<var>stream_index</var> is given, then it matches the stream number
<var>stream_index</var> of this type. Otherwise, it matches all streams of
this type.
</p>
</dd>
<dt>&lsquo;<samp>p:<var>program_id</var>[:<var>stream_index</var>]</samp>&rsquo;</dt>
<dd><p>If <var>stream_index</var> is given, then it matches the stream with number
<var>stream_index</var> in the program with the id
<var>program_id</var>. Otherwise, it matches all streams in the program.
</p>
</dd>
<dt>&lsquo;<samp>#<var>stream_id</var></samp>&rsquo;</dt>
<dd><p>Matches the stream by a format-specific ID.
</p></dd>
</dl>

<p>The exact semantics of stream specifiers is defined by the
<code>avformat_match_stream_specifier()</code> function declared in the
&lsquo;<tt>libavformat/avformat.h</tt>&rsquo; header.
</p>
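<p>As a hedged illustration (the <code>-map</code> option of the <code>ffmpeg</code> tool
accepts specifiers of the same form, and the file names are placeholders), the
specifier <code>a:0</code> matches the first audio stream:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -map 0:a:0 out.wav
</pre></div>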
<a name="Demuxers"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-Demuxers">3 Demuxers</a></h1>

<p>Demuxers are configured elements in FFmpeg that can read the
multimedia streams from a particular type of file.
</p>
<p>When you configure your FFmpeg build, all the supported demuxers
are enabled by default. You can list all available ones using the
configure option <code>--list-demuxers</code>.
</p>
<p>You can disable all the demuxers using the configure option
<code>--disable-demuxers</code>, and selectively enable a single demuxer with
the option <code>--enable-demuxer=<var>DEMUXER</var></code>, or disable it
with the option <code>--disable-demuxer=<var>DEMUXER</var></code>.
</p>
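<p>For example (a minimal sketch; the chosen demuxer names are arbitrary), a build
limited to the mov and mpegts demuxers could be configured with:
</p><div class="example">
<pre class="example">./configure --disable-demuxers --enable-demuxer=mov --enable-demuxer=mpegts
</pre></div>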
<p>The option <code>-formats</code> of the ff* tools will display the list of
enabled demuxers.
</p>
<p>The description of some of the currently available demuxers follows.
</p>
<a name="applehttp"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-applehttp">3.1 applehttp</a></h2>

<p>Apple HTTP Live Streaming demuxer.
</p>
<p>This demuxer presents all AVStreams from all variant streams.
The id field is set to the bitrate variant index number. By setting
the discard flags on AVStreams (by pressing &rsquo;a&rsquo; or &rsquo;v&rsquo; in ffplay),
the caller can decide which variant streams to actually receive.
The total bitrate of the variant that the stream belongs to is
available in a metadata key named &quot;variant_bitrate&quot;.
</p>
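<p>For example (a hedged sketch; the URL is a placeholder), an HLS playlist can be
read and its selected streams copied to a local file with:
</p><div class="example">
<pre class="example">ffmpeg -i http://example.com/path/playlist.m3u8 -c copy out.ts
</pre></div>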
<a name="asf"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-asf">3.2 asf</a></h2>

<p>Advanced Systems Format demuxer.
</p>
<p>This demuxer is used to demux ASF files and MMS network streams.
</p>
<dl compact="compact">
<dt>&lsquo;<samp>-no_resync_search <var>bool</var></samp>&rsquo;</dt>
<dd><p>Do not try to resynchronize by looking for a certain optional start code.
</p></dd>
</dl>

<p><a name="concat"></a>
</p><a name="concat-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-concat-1">3.3 concat</a></h2>

<p>Virtual concatenation script demuxer.
</p>
<p>This demuxer reads a list of files and other directives from a text file and
demuxes them one after the other, as if all their packets had been muxed
together.
</p>
<p>The timestamps in the files are adjusted so that the first file starts at 0
and each next file starts where the previous one finishes. Note that it is
done globally and may cause gaps if all streams do not have exactly the same
length.
</p>
<p>All files must have the same streams (same codecs, same time base, etc.).
</p>
<p>The duration of each file is used to adjust the timestamps of the next file:
if the duration is incorrect (because it was computed using the bit-rate or
because the file is truncated, for example), it can cause artifacts. The
<code>duration</code> directive can be used to override the duration stored in
each file.
</p>
<a name="Syntax"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Syntax">3.3.1 Syntax</a></h3>

<p>The script is a text file in extended-ASCII, with one directive per line.
Empty lines, leading spaces and lines starting with &rsquo;#&rsquo; are ignored. The
following directive is recognized:
</p>
<dl compact="compact">
<dt>&lsquo;<samp><code>file <var>path</var></code></samp>&rsquo;</dt>
<dd><p>Path to a file to read; special characters and spaces must be escaped with
backslash or single quotes.
</p>
<p>All subsequent file-related directives apply to that file.
</p>
</dd>
<dt>&lsquo;<samp><code>ffconcat version 1.0</code></samp>&rsquo;</dt>
<dd><p>Identify the script type and version. It also sets the &lsquo;<samp>safe</samp>&rsquo; option
to 1 if it was set to its default -1.
</p>
<p>To make FFmpeg recognize the format automatically, this directive must
appear exactly as is (no extra space or byte-order-mark) on the very first
line of the script.
</p>
</dd>
<dt>&lsquo;<samp><code>duration <var>dur</var></code></samp>&rsquo;</dt>
<dd><p>Duration of the file. This information can be specified from the file;
specifying it here may be more efficient or help if the information from the
file is not available or accurate.
</p>
<p>If the duration is set for all files, then it is possible to seek in the
whole concatenated video.
</p>
</dd>
<dt>&lsquo;<samp><code>stream</code></samp>&rsquo;</dt>
<dd><p>Introduce a stream in the virtual file.
All subsequent stream-related directives apply to the last introduced
stream.
Some stream properties must be set in order to allow identifying the
matching streams in the subfiles.
If no streams are defined in the script, the streams from the first file are
copied.
</p>
</dd>
<dt>&lsquo;<samp><code>exact_stream_id <var>id</var></code></samp>&rsquo;</dt>
<dd><p>Set the id of the stream.
If this directive is given, the stream with the corresponding id in the
subfiles will be used.
This is especially useful for MPEG-PS (VOB) files, where the order of the
streams is not reliable.
</p>
</dd>
</dl>
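<p>For example (a minimal sketch; the file names are placeholders), a script
&lsquo;<tt>mylist.ffconcat</tt>&rsquo; concatenating two files could look like:
</p><div class="example">
<pre class="example">ffconcat version 1.0
file file1.mkv
file file2.mkv
</pre></div>

<p>and could be read back as a single stream with:
</p><div class="example">
<pre class="example">ffmpeg -f concat -i mylist.ffconcat -c copy out.mkv
</pre></div>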

<a name="Options-7"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Options-7">3.3.2 Options</a></h3>

<p>This demuxer accepts the following option:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>safe</samp>&rsquo;</dt>
<dd><p>If set to 1, reject unsafe file paths. A file path is considered safe if it
does not contain a protocol specification and is relative and all components
only contain characters from the portable character set (letters, digits,
period, underscore and hyphen) and have no period at the beginning of a
component.
</p>
<p>If set to 0, any file name is accepted.
</p>
<p>The default is -1, which is equivalent to 1 if the format was automatically
probed and 0 otherwise.
</p>
</dd>
<dt>&lsquo;<samp>auto_convert</samp>&rsquo;</dt>
<dd><p>If set to 1, try to perform automatic conversions on packet data to make the
streams concatenable.
</p>
<p>Currently, the only conversion is adding the h264_mp4toannexb bitstream
filter to H.264 streams in MP4 format. This is necessary in particular if
there are resolution changes.
</p>
</dd>
</dl>
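<p>For example (a hedged sketch; the absolute path is a placeholder), a script that
references files by absolute paths is considered unsafe and has to be read with
&lsquo;<samp>safe</samp>&rsquo; set to 0:
</p><div class="example">
<pre class="example">ffmpeg -f concat -safe 0 -i /tmp/mylist.ffconcat -c copy out.mkv
</pre></div>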

<a name="flv"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-flv">3.4 flv</a></h2>

<p>Adobe Flash Video Format demuxer.
</p>
<p>This demuxer is used to demux FLV files and RTMP network streams.
</p>
<dl compact="compact">
<dt>&lsquo;<samp>-flv_metadata <var>bool</var></samp>&rsquo;</dt>
<dd><p>Allocate the streams according to the onMetaData array content.
</p></dd>
</dl>

<a name="libgme"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-libgme">3.5 libgme</a></h2>

<p>The Game Music Emu library is a collection of video game music file emulators.
</p>
<p>See <a href="http://code.google.com/p/game-music-emu/">http://code.google.com/p/game-music-emu/</a> for more information.
</p>
<p>Some files have multiple tracks. The demuxer will pick the first track by
default. The &lsquo;<samp>track_index</samp>&rsquo; option can be used to select a different
track. Track indexes start at 0. The demuxer exports the number of tracks as
the <var>tracks</var> metadata entry.
</p>
<p>For very large files, the &lsquo;<samp>max_size</samp>&rsquo; option may have to be adjusted.
</p>
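<p>For example (a hedged sketch; the input file name is a placeholder), the second
track of a game music file could be selected with the &lsquo;<samp>track_index</samp>&rsquo;
option:
</p><div class="example">
<pre class="example">ffmpeg -f libgme -track_index 1 -i game.nsf out.wav
</pre></div>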
<a name="libquvi"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-libquvi">3.6 libquvi</a></h2>

<p>Play media from Internet services using the quvi project.
</p>
<p>The demuxer accepts a &lsquo;<samp>format</samp>&rsquo; option to request a specific quality. It
is by default set to <var>best</var>.
</p>
<p>See <a href="http://quvi.sourceforge.net/">http://quvi.sourceforge.net/</a> for more information.
</p>
<p>FFmpeg needs to be built with <code>--enable-libquvi</code> for this demuxer to be
enabled.
</p>
<a name="gif-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-gif-1">3.7 gif</a></h2>

<p>Animated GIF demuxer.
</p>
<p>It accepts the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>min_delay</samp>&rsquo;</dt>
<dd><p>Set the minimum valid delay between frames in hundredths of seconds.
Range is 0 to 6000. Default value is 2.
</p>
</dd>
<dt>&lsquo;<samp>default_delay</samp>&rsquo;</dt>
<dd><p>Set the default delay between frames in hundredths of seconds.
Range is 0 to 6000. Default value is 10.
</p>
</dd>
<dt>&lsquo;<samp>ignore_loop</samp>&rsquo;</dt>
<dd><p>GIF files can contain information to loop a certain number of times (or
infinitely). If &lsquo;<samp>ignore_loop</samp>&rsquo; is set to 1, then the loop setting
from the input will be ignored and looping will not occur. If set to 0,
then looping will occur and will cycle the number of times according to
the GIF. Default value is 1.
</p></dd>
</dl>

<p>For example, with the overlay filter, place an infinitely looping GIF
over another video:
</p><div class="example">
<pre class="example">ffmpeg -i input.mp4 -ignore_loop 0 -i input.gif -filter_complex overlay=shortest=1 out.mkv
</pre></div>

<p>Note that in the above example the shortest option for the overlay filter is
used to end the output video at the length of the shortest input file,
which in this case is &lsquo;<tt>input.mp4</tt>&rsquo; as the GIF in this example loops
infinitely.
</p>
<a name="image2-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-image2-1">3.8 image2</a></h2>

<p>Image file demuxer.
</p>
<p>This demuxer reads from a list of image files specified by a pattern.
The syntax and meaning of the pattern is specified by the
option <var>pattern_type</var>.
</p>
<p>The pattern may contain a suffix which is used to automatically
determine the format of the images contained in the files.
</p>
<p>The size, the pixel format, and the format of each image must be the
same for all the files in the sequence.
</p>
<p>This demuxer accepts the following options:
</p><dl compact="compact">
<dt>&lsquo;<samp>framerate</samp>&rsquo;</dt>
<dd><p>Set the frame rate for the video stream. It defaults to 25.
</p></dd>
<dt>&lsquo;<samp>loop</samp>&rsquo;</dt>
<dd><p>If set to 1, loop over the input. Default value is 0.
</p></dd>
<dt>&lsquo;<samp>pattern_type</samp>&rsquo;</dt>
<dd><p>Select the pattern type used to interpret the provided filename.
</p>
<p><var>pattern_type</var> accepts one of the following values.
</p><dl compact="compact">
<dt>&lsquo;<samp>sequence</samp>&rsquo;</dt>
<dd><p>Select a sequence pattern type, used to specify a sequence of files
indexed by sequential numbers.
</p>
<p>A sequence pattern may contain the string &quot;%d&quot; or &quot;%0<var>N</var>d&quot;, which
specifies the position of the characters representing a sequential
number in each filename matched by the pattern. If the form
&quot;%0<var>N</var>d&quot; is used, the string representing the number in each
filename is 0-padded and <var>N</var> is the total number of 0-padded
digits representing the number. The literal character &rsquo;%&rsquo; can be
specified in the pattern with the string &quot;%%&quot;.
</p>
<p>If the sequence pattern contains &quot;%d&quot; or &quot;%0<var>N</var>d&quot;, the first filename of
the file list specified by the pattern must contain a number
inclusively contained between <var>start_number</var> and
<var>start_number</var>+<var>start_number_range</var>-1, and all the following
numbers must be sequential.
</p>
<p>For example the pattern &quot;img-%03d.bmp&quot; will match a sequence of
filenames of the form &lsquo;<tt>img-001.bmp</tt>&rsquo;, &lsquo;<tt>img-002.bmp</tt>&rsquo;, ...,
&lsquo;<tt>img-010.bmp</tt>&rsquo;, etc.; the pattern &quot;i%%m%%g-%d.jpg&quot; will match a
sequence of filenames of the form &lsquo;<tt>i%m%g-1.jpg</tt>&rsquo;,
&lsquo;<tt>i%m%g-2.jpg</tt>&rsquo;, ..., &lsquo;<tt>i%m%g-10.jpg</tt>&rsquo;, etc.
</p>
<p>Note that the pattern does not necessarily have to contain &quot;%d&quot; or
&quot;%0<var>N</var>d&quot;; for example, to convert a single image file
&lsquo;<tt>img.jpeg</tt>&rsquo; you can employ the command:
</p><div class="example">
<pre class="example">ffmpeg -i img.jpeg img.png
</pre></div>

</dd>
<dt>&lsquo;<samp>glob</samp>&rsquo;</dt>
<dd><p>Select a glob wildcard pattern type.
</p>
<p>The pattern is interpreted like a <code>glob()</code> pattern. This is only
selectable if libavformat was compiled with globbing support.
</p>
</dd>
<dt>&lsquo;<samp>glob_sequence <em>(deprecated, will be removed)</em></samp>&rsquo;</dt>
<dd><p>Select a mixed glob wildcard/sequence pattern.
</p>
<p>If your version of libavformat was compiled with globbing support, and
the provided pattern contains at least one glob meta character among
<code>%*?[]{}</code> that is preceded by an unescaped &quot;%&quot;, the pattern is
interpreted like a <code>glob()</code> pattern, otherwise it is interpreted
like a sequence pattern.
</p>
<p>All glob special characters <code>%*?[]{}</code> must be prefixed
with &quot;%&quot;. To escape a literal &quot;%&quot; you shall use &quot;%%&quot;.
</p>
<p>For example the pattern <code>foo-%*.jpeg</code> will match all the
filenames prefixed by &quot;foo-&quot; and terminating with &quot;.jpeg&quot;, and
<code>foo-%?%?%?.jpeg</code> will match all the filenames prefixed with
&quot;foo-&quot;, followed by a sequence of three characters, and terminating
with &quot;.jpeg&quot;.
</p>
<p>This pattern type is deprecated in favor of <var>glob</var> and
<var>sequence</var>.
</p></dd>
</dl>

<p>Default value is <var>glob_sequence</var>.
</p></dd>
<dt>&lsquo;<samp>pixel_format</samp>&rsquo;</dt>
<dd><p>Set the pixel format of the images to read. If not specified the pixel
format is guessed from the first image file in the sequence.
</p></dd>
<dt>&lsquo;<samp>start_number</samp>&rsquo;</dt>
<dd><p>Set the index of the file matched by the image file pattern to start
to read from. Default value is 0.
</p></dd>
<dt>&lsquo;<samp>start_number_range</samp>&rsquo;</dt>
<dd><p>Set the index interval range to check when looking for the first image
file in the sequence, starting from <var>start_number</var>. Default value
is 5.
</p></dd>
<dt>&lsquo;<samp>ts_from_file</samp>&rsquo;</dt>
<dd><p>If set to 1, will set the frame timestamp to the modification time of the image file.
Note that monotonicity of timestamps is not guaranteed: images go in the same order as
without this option. Default value is 0.
If set to 2, will set the frame timestamp to the modification time of the image file with
nanosecond precision.
</p></dd>
<dt>&lsquo;<samp>video_size</samp>&rsquo;</dt>
<dd><p>Set the video size of the images to read. If not specified the video
size is guessed from the first image file in the sequence.
</p></dd>
</dl>

<a name="Examples-4"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Examples-4">3.8.1 Examples</a></h3>

<ul>
<li>
Use <code>ffmpeg</code> for creating a video from the images in the file
sequence &lsquo;<tt>img-001.jpeg</tt>&rsquo;, &lsquo;<tt>img-002.jpeg</tt>&rsquo;, ..., assuming an
input frame rate of 10 frames per second:
<div class="example">
<pre class="example">ffmpeg -framerate 10 -i 'img-%03d.jpeg' out.mkv
</pre></div>

</li><li>
As above, but start by reading from a file with index 100 in the sequence:
<div class="example">
<pre class="example">ffmpeg -framerate 10 -start_number 100 -i 'img-%03d.jpeg' out.mkv
</pre></div>

</li><li>
Read images matching the &quot;*.png&quot; glob pattern, that is, all the files
terminating with the &quot;.png&quot; suffix:
<div class="example">
<pre class="example">ffmpeg -framerate 10 -pattern_type glob -i &quot;*.png&quot; out.mkv
</pre></div>
</li></ul>

<a name="mpegts"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-mpegts">3.9 mpegts</a></h2>

<p>MPEG-2 transport stream demuxer.
</p>
<dl compact="compact">
<dt>&lsquo;<samp>fix_teletext_pts</samp>&rsquo;</dt>
<dd><p>Overrides teletext packet PTS and DTS values with the timestamps calculated
from the PCR of the first program which the teletext stream is part of and is
not discarded. Default value is 1, set this option to 0 if you want your
teletext packet PTS and DTS values untouched.
</p></dd>
</dl>
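<p>For example (a hedged sketch; the file names are placeholders), the original
teletext timestamps can be kept while copying all streams of a transport stream:
</p><div class="example">
<pre class="example">ffmpeg -fix_teletext_pts 0 -i INPUT.ts -map 0 -c copy out.ts
</pre></div>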

<a name="rawvideo"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-rawvideo">3.10 rawvideo</a></h2>

<p>Raw video demuxer.
</p>
<p>This demuxer allows one to read raw video data. Since there is no header
specifying the assumed video parameters, the user must specify them
in order to be able to decode the data correctly.
</p>
<p>This demuxer accepts the following options:
</p><dl compact="compact">
<dt>&lsquo;<samp>framerate</samp>&rsquo;</dt>
<dd><p>Set input video frame rate. Default value is 25.
</p>
</dd>
<dt>&lsquo;<samp>pixel_format</samp>&rsquo;</dt>
<dd><p>Set the input video pixel format. Default value is <code>yuv420p</code>.
</p>
</dd>
<dt>&lsquo;<samp>video_size</samp>&rsquo;</dt>
<dd><p>Set the input video size. This value must be specified explicitly.
</p></dd>
</dl>

<p>For example to read a rawvideo file &lsquo;<tt>input.raw</tt>&rsquo; with
<code>ffplay</code>, assuming a pixel format of <code>rgb24</code>, a video
size of <code>320x240</code>, and a frame rate of 10 images per second, use
the command:
</p><div class="example">
<pre class="example">ffplay -f rawvideo -pixel_format rgb24 -video_size 320x240 -framerate 10 input.raw
</pre></div>

<a name="sbg"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-sbg">3.11 sbg</a></h2>

<p>SBaGen script demuxer.
</p>
<p>This demuxer reads the script language used by SBaGen
<a href="http://uazu.net/sbagen/">http://uazu.net/sbagen/</a> to generate binaural beats sessions. A SBG
script looks like that:
</p><div class="example">
<pre class="example">-SE
a: 300-2.5/3 440+4.5/0
b: 300-2.5/0 440+4.5/3
off: -
NOW      == a
+0:07:00 == b
+0:14:00 == a
+0:21:00 == b
+0:30:00    off
</pre></div>

<p>An SBG script can mix absolute and relative timestamps. If the script uses
either only absolute timestamps (including the script start time) or only
relative ones, then its layout is fixed, and the conversion is
straightforward. On the other hand, if the script mixes both kinds of
timestamps, then the <var>NOW</var> reference for relative timestamps will be
taken from the current time of day at the time the script is read, and the
script layout will be frozen according to that reference. That means that if
the script is directly played, the actual times will match the absolute
timestamps up to the sound controller&rsquo;s clock accuracy, but if the user
somehow pauses the playback or seeks, all times will be shifted accordingly.
</p>
<a name="tedcaptions"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-tedcaptions">3.12 tedcaptions</a></h2>

<p>JSON captions used for <a href="http://www.ted.com/">TED Talks</a>.
</p>
<p>TED does not provide links to the captions, but they can be guessed from the
page. The file &lsquo;<tt>tools/bookmarklets.html</tt>&rsquo; from the FFmpeg source tree
contains a bookmarklet to expose them.
</p>
<p>This demuxer accepts the following option:
</p><dl compact="compact">
<dt>&lsquo;<samp>start_time</samp>&rsquo;</dt>
<dd><p>Set the start time of the TED talk, in milliseconds. The default is 15000
(15s). It is used to sync the captions with the downloadable videos, because
they include a 15s intro.
</p></dd>
</dl>

<p>Example: convert the captions to a format most players understand:
</p><div class="example">
<pre class="example">ffmpeg -i http://www.ted.com/talks/subtitles/id/1/lang/en talk1-en.srt
</pre></div>

<a name="Muxers"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-Muxers">4 Muxers</a></h1>

<p>Muxers are configured elements in FFmpeg which allow writing
multimedia streams to a particular type of file.
</p>
<p>When you configure your FFmpeg build, all the supported muxers
are enabled by default. You can list all available muxers using the
configure option <code>--list-muxers</code>.
</p>
<p>You can disable all the muxers with the configure option
<code>--disable-muxers</code> and selectively enable / disable single muxers
with the options <code>--enable-muxer=<var>MUXER</var></code> /
<code>--disable-muxer=<var>MUXER</var></code>.
</p>
<p>The option <code>-formats</code> of the ff* tools will display the list of
enabled muxers.
</p>
<p>A description of some of the currently available muxers follows.
</p>
<p><a name="aiff"></a>
</p><a name="aiff-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-aiff-1">4.1 aiff</a></h2>

<p>Audio Interchange File Format muxer.
</p>
<a name="Options-2"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Options-2">4.1.1 Options</a></h3>

<p>It accepts the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>write_id3v2</samp>&rsquo;</dt>
<dd><p>Enable ID3v2 tags writing when set to 1. Default is 0 (disabled).
</p>
</dd>
<dt>&lsquo;<samp>id3v2_version</samp>&rsquo;</dt>
<dd><p>Select ID3v2 version to write. Currently only versions 3 and 4 (i.e.
ID3v2.3 and ID3v2.4) are supported. The default is version 4.
</p>
</dd>
</dl>
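<p>For example (a minimal sketch; the file names are placeholders), ID3v2.3 tags can
be written to an AIFF file with:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -write_id3v2 1 -id3v2_version 3 out.aiff
</pre></div>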

<p><a name="crc"></a>
</p><a name="crc-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-crc-1">4.2 crc</a></h2>

<p>CRC (Cyclic Redundancy Check) testing format.
</p>
<p>This muxer computes and prints the Adler-32 CRC of all the input audio
and video frames. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.
</p>
<p>The output of the muxer consists of a single line of the form:
CRC=0x<var>CRC</var>, where <var>CRC</var> is a hexadecimal number 0-padded to
8 digits containing the CRC for all the decoded input frames.
</p>
<p>See also the <a href="#framecrc">framecrc</a> muxer.
</p>
<a name="Examples-3"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Examples-3">4.2.1 Examples</a></h3>

<p>For example to compute the CRC of the input, and store it in the file
&lsquo;<tt>out.crc</tt>&rsquo;:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f crc out.crc
</pre></div>

<p>You can print the CRC to stdout with the command:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f crc -
</pre></div>

<p>You can select the output format of each frame with <code>ffmpeg</code> by
specifying the audio and video codec and format. For example to
compute the CRC of the input audio converted to PCM unsigned 8-bit
and the input video converted to MPEG-2 video, use the command:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
</pre></div>

<p><a name="framecrc"></a>
</p><a name="framecrc-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-framecrc-1">4.3 framecrc</a></h2>

<p>Per-packet CRC (Cyclic Redundancy Check) testing format.
</p>
<p>This muxer computes and prints the Adler-32 CRC for each audio
and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.
</p>
<p>The output of the muxer consists of a line for each audio and video
packet of the form:
</p><div class="example">
<pre class="example"><var>stream_index</var>, <var>packet_dts</var>, <var>packet_pts</var>, <var>packet_duration</var>, <var>packet_size</var>, 0x<var>CRC</var>
</pre></div>

<p><var>CRC</var> is a hexadecimal number 0-padded to 8 digits containing the
CRC of the packet.
</p>
<a name="Examples-6"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Examples-6">4.3.1 Examples</a></h3>

<p>For example to compute the CRC of the audio and video frames in
&lsquo;<tt>INPUT</tt>&rsquo;, converted to raw audio and video packets, and store it
in the file &lsquo;<tt>out.crc</tt>&rsquo;:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f framecrc out.crc
</pre></div>

<p>To print the information to stdout, use the command:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f framecrc -
</pre></div>

<p>With <code>ffmpeg</code>, you can select the output format to which the
audio and video frames are encoded before computing the CRC for each
packet by specifying the audio and video codec. For example, to
compute the CRC of each decoded input audio frame converted to PCM
unsigned 8-bit and of each decoded input video frame converted to
MPEG-2 video, use the command:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
</pre></div>

<p>See also the <a href="#crc">crc</a> muxer.
</p>
<p><a name="framemd5"></a>
</p><a name="framemd5-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-framemd5-1">4.4 framemd5</a></h2>

<p>Per-packet MD5 testing format.
</p>
<p>This muxer computes and prints the MD5 hash for each audio
and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
hash.
</p>
<p>The output of the muxer consists of a line for each audio and video
packet of the form:
</p><div class="example">
<pre class="example"><var>stream_index</var>, <var>packet_dts</var>, <var>packet_pts</var>, <var>packet_duration</var>, <var>packet_size</var>, <var>MD5</var>
</pre></div>

<p><var>MD5</var> is a hexadecimal number representing the computed MD5 hash
for the packet.
</p>
<a name="Examples-1"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Examples-1">4.4.1 Examples</a></h3>

<p>For example to compute the MD5 of the audio and video frames in
&lsquo;<tt>INPUT</tt>&rsquo;, converted to raw audio and video packets, and store it
in the file &lsquo;<tt>out.md5</tt>&rsquo;:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f framemd5 out.md5
</pre></div>

<p>To print the information to stdout, use the command:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f framemd5 -
</pre></div>

<p>See also the <a href="#md5">md5</a> muxer.
</p>
<p><a name="gif"></a>
</p><a name="gif-2"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-gif-2">4.5 gif</a></h2>

<p>Animated GIF muxer.
</p>
<p>It accepts the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>loop</samp>&rsquo;</dt>
<dd><p>Set the number of times to loop the output. Use <code>-1</code> for no loop, <code>0</code>
for looping indefinitely (default).
</p>
</dd>
<dt>&lsquo;<samp>final_delay</samp>&rsquo;</dt>
<dd><p>Force the delay (expressed in centiseconds) after the last frame. Each frame
ends with a delay until the next frame. The default is <code>-1</code>, which is a
special value to tell the muxer to re-use the previous delay. In case of a
loop, you might want to customize this value to mark a pause for instance.
</p></dd>
</dl>

<p>For example, to encode a gif looping 10 times, with a 5 seconds delay between
the loops:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -loop 10 -final_delay 500 out.gif
</pre></div>

<p>Note 1: if you wish to extract the frames in separate GIF files, you need to
force the <a href="#image2">image2</a> muxer:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -c:v gif -f image2 &quot;out%d.gif&quot;
</pre></div>

<p>Note 2: the GIF format has a very small time base: the delay between two frames
cannot be smaller than one centisecond.
</p>
<p><a name="hls"></a>
</p><a name="hls-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-hls-1">4.6 hls</a></h2>

<p>Apple HTTP Live Streaming muxer that segments MPEG-TS according to
the HTTP Live Streaming (HLS) specification.
</p>
<p>It creates a playlist file and numbered segment files. The output
filename specifies the playlist filename; the segment filenames
receive the same basename as the playlist, a sequential number and
a .ts extension.
</p>
<p>For example, to convert an input file with <code>ffmpeg</code>:
</p><div class="example">
<pre class="example">ffmpeg -i in.nut out.m3u8
</pre></div>

<p>See also the <a href="#segment">segment</a> muxer, which provides a more generic and
flexible implementation of a segmenter, and can be used to perform HLS
segmentation.
</p>
<a name="Options-3"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Options-3">4.6.1 Options</a></h3>

<p>This muxer supports the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>hls_time <var>seconds</var></samp>&rsquo;</dt>
<dd><p>Set the segment length in seconds. Default value is 2.
</p>
</dd>
<dt>&lsquo;<samp>hls_list_size <var>size</var></samp>&rsquo;</dt>
<dd><p>Set the maximum number of playlist entries. If set to 0 the list file
will contain all the segments. Default value is 5.
</p>
</dd>
<dt>&lsquo;<samp>hls_wrap <var>wrap</var></samp>&rsquo;</dt>
<dd><p>Set the number after which the segment filename number (the number
specified in each segment file) wraps. If set to 0 the number will never
be wrapped. Default value is 0.
</p>
<p>This option is useful to avoid filling the disk with many segment
files, and limits the maximum number of segment files written to disk
to <var>wrap</var>.
</p>
</dd>
<dt>&lsquo;<samp>start_number <var>number</var></samp>&rsquo;</dt>
<dd><p>Start the playlist sequence number from <var>number</var>. Default value is
0.
</p>
</dd>
<dt>&lsquo;<samp>hls_base_url <var>baseurl</var></samp>&rsquo;</dt>
<dd><p>Append <var>baseurl</var> to every entry in the playlist.
Useful to generate playlists with absolute paths.
</p>
<p>Note that the playlist sequence number must be unique for each segment
and it is not to be confused with the segment filename sequence number
which can be cyclic, for example if the &lsquo;<samp>hls_wrap</samp>&rsquo; option is
specified.
</p></dd>
</dl>
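<p>For example (a hedged sketch building on the basic command above), 4-second
segments, a playlist limited to 10 entries, and segment numbers wrapping at 20 can
be requested with:
</p><div class="example">
<pre class="example">ffmpeg -i in.nut -hls_time 4 -hls_list_size 10 -hls_wrap 20 out.m3u8
</pre></div>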

<p><a name="ico"></a>
</p><a name="ico-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-ico-1">4.7 ico</a></h2>

<p>ICO file muxer.
</p>
<p>Microsoft&rsquo;s icon file format (ICO) has some strict limitations that should be noted:
</p>
<ul>
<li>
Size cannot exceed 256 pixels in any dimension

</li><li>
Only BMP and PNG images can be stored

</li><li>
If a BMP image is used, it must be one of the following pixel formats:
<div class="example">
<pre class="example">BMP Bit Depth      FFmpeg Pixel Format
1bit               pal8
4bit               pal8
8bit               pal8
16bit              rgb555le
24bit              bgr24
32bit              bgra
</pre></div>

</li><li>
If a BMP image is used, it must use the BITMAPINFOHEADER DIB header

</li><li>
If a PNG image is used, it must use the rgba pixel format
</li></ul>
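<p>For example (a hedged sketch; the input is a placeholder and is assumed to already
fit within the 256-pixel limit), a single frame can be stored as a 32-bit BMP icon
with:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -frames:v 1 -c:v bmp -pix_fmt bgra out.ico
</pre></div>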

<p><a name="image2"></a>
</p><a name="image2-2"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-image2-2">4.8 image2</a></h2>

<p>Image file muxer.
</p>
<p>The image file muxer writes video frames to image files.
</p>
<p>The output filenames are specified by a pattern, which can be used to
produce sequentially numbered series of files.
The pattern may contain the string &quot;%d&quot; or &quot;%0<var>N</var>d&quot;, which
specifies the position of the characters representing a numbering in
the filenames. If the form &quot;%0<var>N</var>d&quot; is used, the string
representing the number in each filename is 0-padded to <var>N</var>
digits. The literal character &rsquo;%&rsquo; can be specified in the pattern with
the string &quot;%%&quot;.
</p>
<p>If the pattern contains &quot;%d&quot; or &quot;%0<var>N</var>d&quot;, the first filename of
the file list specified will contain the number 1; all the following
numbers will be sequential.
</p>
<p>The pattern may contain a suffix which is used to automatically
determine the format of the image files to write.
</p>
<p>For example the pattern &quot;img-%03d.bmp&quot; will specify a sequence of
filenames of the form &lsquo;<tt>img-001.bmp</tt>&rsquo;, &lsquo;<tt>img-002.bmp</tt>&rsquo;, ...,
&lsquo;<tt>img-010.bmp</tt>&rsquo;, etc.
The pattern &quot;img%%-%d.jpg&quot; will specify a sequence of filenames of the
form &lsquo;<tt>img%-1.jpg</tt>&rsquo;, &lsquo;<tt>img%-2.jpg</tt>&rsquo;, ..., &lsquo;<tt>img%-10.jpg</tt>&rsquo;,
etc.
</p>
<a name="Examples-5"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Examples-5">4.8.1 Examples</a></h3>

<p>The following example shows how to use <code>ffmpeg</code> for creating a
sequence of files &lsquo;<tt>img-001.jpeg</tt>&rsquo;, &lsquo;<tt>img-002.jpeg</tt>&rsquo;, ...,
taking one image every second from the input video:
</p><div class="example">
<pre class="example">ffmpeg -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
</pre></div>

<p>Note that with <code>ffmpeg</code>, if the format is not specified with the
<code>-f</code> option and the output filename specifies an image file
format, the image2 muxer is automatically selected, so the previous
command can be written as:
</p><div class="example">
<pre class="example">ffmpeg -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
</pre></div>

<p>Note also that the pattern does not necessarily have to contain &quot;%d&quot; or
&quot;%0<var>N</var>d&quot;; for example, to create a single image file
&lsquo;<tt>img.jpeg</tt>&rsquo; from the input video you can employ the command:
</p><div class="example">
<pre class="example">ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
</pre></div>

<p>The &lsquo;<samp>strftime</samp>&rsquo; option allows you to expand the filename with
date and time information. Check the documentation of
the <code>strftime()</code> function for the syntax.
</p>
<p>For example to generate image files from the <code>strftime()</code>
&quot;%Y-%m-%d_%H-%M-%S&quot; pattern, the following <code>ffmpeg</code> command
can be used:
</p><div class="example">
<pre class="example">ffmpeg -f v4l2 -r 1 -i /dev/video0 -f image2 -strftime 1 &quot;%Y-%m-%d_%H-%M-%S.jpg&quot;
</pre></div>

<a name="Options-8"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Options-8">4.8.2 Options</a></h3>

<dl compact="compact">
<dt>&lsquo;<samp>start_number</samp>&rsquo;</dt>
<dd><p>Start the sequence from the specified number. Default value is 1. Must
be a non-negative number.
</p>
</dd>
<dt>&lsquo;<samp>update</samp>&rsquo;</dt>
<dd><p>If set to 1, the filename will always be interpreted as just a
filename, not a pattern, and the corresponding file will be continuously
overwritten with new images. Default value is 0.
</p>
</dd>
<dt>&lsquo;<samp>strftime</samp>&rsquo;</dt>
<dd><p>If set to 1, expand the filename with date and time information from
<code>strftime()</code>. Default value is 0.
</p></dd>
</dl>

<p>The image muxer supports the .Y.U.V image file format. This format is
special in that each image frame consists of three files, one for
each of the YUV420P components. To read or write this image file format,
specify the name of the &rsquo;.Y&rsquo; file. The muxer will automatically open the
&rsquo;.U&rsquo; and &rsquo;.V&rsquo; files as required.
</p>
<a name="matroska"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-matroska">4.9 matroska</a></h2>

<p>Matroska container muxer.
</p>
<p>This muxer implements the Matroska and WebM container specs.
</p>
<a name="Metadata"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Metadata">4.9.1 Metadata</a></h3>

<p>The recognized metadata settings in this muxer are:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>title</samp>&rsquo;</dt>
<dd><p>Set title name provided to a single track.
</p>
</dd>
<dt>&lsquo;<samp>language</samp>&rsquo;</dt>
<dd><p>Specify the language of the track in the Matroska languages form.
</p>
<p>The language can be either the 3-letter bibliographic ISO-639-2 (ISO
639-2/B) form (like &quot;fre&quot; for French), or a language code mixed with a
country code for specialities in languages (like &quot;fre-ca&quot; for Canadian
French).
</p>
</dd>
<dt>&lsquo;<samp>stereo_mode</samp>&rsquo;</dt>
<dd><p>Set stereo 3D video layout of two views in a single video track.
</p>
<p>The following values are recognized:
</p><dl compact="compact">
<dt>&lsquo;<samp>mono</samp>&rsquo;</dt>
<dd><p>Video is not stereo
</p></dd>
<dt>&lsquo;<samp>left_right</samp>&rsquo;</dt>
<dd><p>Both views are arranged side by side, Left-eye view is on the left
</p></dd>
<dt>&lsquo;<samp>bottom_top</samp>&rsquo;</dt>
<dd><p>Both views are arranged in top-bottom orientation, Left-eye view is at bottom
</p></dd>
<dt>&lsquo;<samp>top_bottom</samp>&rsquo;</dt>
<dd><p>Both views are arranged in top-bottom orientation, Left-eye view is on top
</p></dd>
<dt>&lsquo;<samp>checkerboard_rl</samp>&rsquo;</dt>
<dd><p>Each view is arranged in a checkerboard interleaved pattern, Left-eye view being first
</p></dd>
<dt>&lsquo;<samp>checkerboard_lr</samp>&rsquo;</dt>
<dd><p>Each view is arranged in a checkerboard interleaved pattern, Right-eye view being first
</p></dd>
<dt>&lsquo;<samp>row_interleaved_rl</samp>&rsquo;</dt>
<dd><p>Each view is constituted by a row based interleaving, Right-eye view is first row
</p></dd>
<dt>&lsquo;<samp>row_interleaved_lr</samp>&rsquo;</dt>
<dd><p>Each view is constituted by a row based interleaving, Left-eye view is first row
</p></dd>
<dt>&lsquo;<samp>col_interleaved_rl</samp>&rsquo;</dt>
<dd><p>Both views are arranged in a column based interleaving manner, Right-eye view is first column
</p></dd>
<dt>&lsquo;<samp>col_interleaved_lr</samp>&rsquo;</dt>
<dd><p>Both views are arranged in a column based interleaving manner, Left-eye view is first column
</p></dd>
<dt>&lsquo;<samp>anaglyph_cyan_red</samp>&rsquo;</dt>
<dd><p>All frames are in anaglyph format viewable through red-cyan filters
</p></dd>
<dt>&lsquo;<samp>right_left</samp>&rsquo;</dt>
<dd><p>Both views are arranged side by side, Right-eye view is on the left
</p></dd>
<dt>&lsquo;<samp>anaglyph_green_magenta</samp>&rsquo;</dt>
<dd><p>All frames are in anaglyph format viewable through green-magenta filters
</p></dd>
<dt>&lsquo;<samp>block_lr</samp>&rsquo;</dt>
<dd><p>Both eyes laced in one Block, Left-eye view is first
</p></dd>
<dt>&lsquo;<samp>block_rl</samp>&rsquo;</dt>
<dd><p>Both eyes laced in one Block, Right-eye view is first
</p></dd>
</dl>
</dd>
</dl>

<p>For example a 3D WebM clip can be created using the following command line:
</p><div class="example">
<pre class="example">ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
</pre></div>

<a name="Options-6"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Options-6">4.9.2 Options</a></h3>

<p>This muxer supports the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>reserve_index_space</samp>&rsquo;</dt>
<dd><p>By default, this muxer writes the index for seeking (called cues in Matroska
terms) at the end of the file, because it cannot know in advance how much space
to leave for the index at the beginning of the file. However for some use cases
&ndash; e.g.  streaming where seeking is possible but slow &ndash; it is useful to put the
index at the beginning of the file.
</p>
<p>If this option is set to a non-zero value, the muxer will reserve a given amount
of space in the file header and then try to write the cues there when the muxing
finishes. If the available space does not suffice, muxing will fail. A safe size
for most use cases should be about 50kB per hour of video.
</p>
<p>Note that cues are only written if the output is seekable and this option will
have no effect if it is not.
</p></dd>
</dl>
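<p>For example (a hedged sketch; the file names are placeholders), roughly enough cue
space for a two-hour file, following the 50kB-per-hour guideline above, can be
reserved with:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -c copy -reserve_index_space 100000 out.mkv
</pre></div>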

<p><a name="md5"></a>
</p><a name="md5-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-md5-1">4.10 md5</a></h2>

<p>MD5 testing format.
</p>
<p>This muxer computes and prints the MD5 hash of all the input audio
and video frames. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
hash.
</p>
<p>The output of the muxer consists of a single line of the form:
MD5=<var>MD5</var>, where <var>MD5</var> is a hexadecimal number representing
the computed MD5 hash.
</p>
<p>For example to compute the MD5 hash of the input converted to raw
audio and video, and store it in the file &lsquo;<tt>out.md5</tt>&rsquo;:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f md5 out.md5
</pre></div>

<p>You can print the MD5 to stdout with the command:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f md5 -
</pre></div>

<p>See also the <a href="#framemd5">framemd5</a> muxer.
</p>
<a name="mov_002c-mp4_002c-ismv"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-mov_002c-mp4_002c-ismv">4.11 mov, mp4, ismv</a></h2>

<p>MOV/MP4/ISMV (Smooth Streaming) muxer.
</p>
<p>The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
file has all the metadata about all packets stored in one location
(written at the end of the file; it can be moved to the start for
better playback by adding <var>faststart</var> to the <var>movflags</var>, or by
using the <code>qt-faststart</code> tool). A fragmented
file consists of a number of fragments, where packets and metadata
about these packets are stored together. Writing a fragmented
file has the advantage that the file is decodable even if the
writing is interrupted (while a normal MOV/MP4 is undecodable if
it is not properly finished), and it requires less memory when writing
very long files (since writing normal MOV/MP4 files stores info about
every single packet in memory until the file is closed). The downside
is that it is less compatible with other applications.
</p>
<a name="Options-5"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Options-5">4.11.1 Options</a></h3>

<p>Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>-moov_size <var>bytes</var></samp>&rsquo;</dt>
<dd><p>Reserves space for the moov atom at the beginning of the file instead of placing the
moov atom at the end. If the space reserved is insufficient, muxing will fail.
</p></dd>
<dt>&lsquo;<samp>-movflags frag_keyframe</samp>&rsquo;</dt>
<dd><p>Start a new fragment at each video keyframe.
</p></dd>
<dt>&lsquo;<samp>-frag_duration <var>duration</var></samp>&rsquo;</dt>
<dd><p>Create fragments that are <var>duration</var> microseconds long.
</p></dd>
<dt>&lsquo;<samp>-frag_size <var>size</var></samp>&rsquo;</dt>
<dd><p>Create fragments that contain up to <var>size</var> bytes of payload data.
</p></dd>
<dt>&lsquo;<samp>-movflags frag_custom</samp>&rsquo;</dt>
<dd><p>Allow the caller to manually choose when to cut fragments, by
calling <code>av_write_frame(ctx, NULL)</code> to write a fragment with
the packets written so far. (This is only useful with other
applications integrating libavformat, not from <code>ffmpeg</code>.)
</p></dd>
<dt>&lsquo;<samp>-min_frag_duration <var>duration</var></samp>&rsquo;</dt>
<dd><p>Don&rsquo;t create fragments that are shorter than <var>duration</var> microseconds.
</p></dd>
</dl>

<p>If more than one condition is specified, fragments are cut when
one of the specified conditions is fulfilled. The exception to this is
<code>-min_frag_duration</code>, which has to be fulfilled for any of the other
conditions to apply.
</p>
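<p>As a sketch (file names are placeholders), a file could be remuxed into fragmented MP4,
cutting a new fragment at every video keyframe and also whenever a fragment reaches
10 seconds, since the conditions are combined as described above:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -c copy -movflags frag_keyframe -frag_duration 10000000 frag.mp4
</pre></div>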
<p>Additionally, the way the output file is written can be adjusted
through a few other options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>-movflags empty_moov</samp>&rsquo;</dt>
<dd><p>Write an initial moov atom directly at the start of the file, without
describing any samples in it. Generally, an mdat/moov pair is written
at the start of the file, as a normal MOV/MP4 file, containing only
a short portion of the file. With this option set, there is no initial
mdat atom, and the moov atom only describes the tracks but has
a zero duration.
</p>
<p>Files written with this option set do not work in QuickTime.
This option is implicitly set when writing ismv (Smooth Streaming) files.
</p></dd>
<dt>&lsquo;<samp>-movflags separate_moof</samp>&rsquo;</dt>
<dd><p>Write a separate moof (movie fragment) atom for each track. Normally,
packets for all tracks are written in a moof atom (which is slightly
more efficient), but with this option set, the muxer writes one moof/mdat
pair for each track, making it easier to separate tracks.
</p>
<p>This option is implicitly set when writing ismv (Smooth Streaming) files.
</p></dd>
<dt>&lsquo;<samp>-movflags faststart</samp>&rsquo;</dt>
<dd><p>Run a second pass moving the index (moov atom) to the beginning of the file.
This operation can take a while, and will not work in various situations such
as fragmented output, so it is not enabled by default.
</p></dd>
<dt>&lsquo;<samp>-movflags rtphint</samp>&rsquo;</dt>
<dd><p>Add RTP hinting tracks to the output file.
</p></dd>
<dt>&lsquo;<samp>-movflags disable_chpl</samp>&rsquo;</dt>
<dd><p>Disable Nero chapter markers (chpl atom).  Normally, both Nero chapters
and a QuickTime chapter track are written to the file. With this option
set, only the QuickTime chapter track will be written. Nero chapters can
cause failures when the file is reprocessed with certain tagging programs, such as
mp3Tag 2.61a and iTunes 11.3; other versions are most likely affected as well.
</p></dd>
</dl>
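<p>A minimal sketch (file names are placeholders) of a non-fragmented remux with the index
moved to the beginning for progressive download playback:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -c copy -movflags faststart output.mp4
</pre></div>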

<a name="Example"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Example">4.11.2 Example</a></h3>

<p>Smooth Streaming content can be pushed in real time to a publishing
point on IIS with this muxer. Example:
</p><div class="example">
<pre class="example">ffmpeg -re <var>&lt;normal input/transcoding options&gt;</var> -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
</pre></div>

<a name="mp3"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-mp3">4.12 mp3</a></h2>

<p>The MP3 muxer writes a raw MP3 stream with an ID3v2 header at the beginning and
optionally an ID3v1 tag at the end. ID3v2.3 and ID3v2.4 are supported, the
<code>id3v2_version</code> option controls which one is used. Setting
<code>id3v2_version</code> to 0 will disable the ID3v2 header completely. The legacy
ID3v1 tag is not written by default, but may be enabled with the
<code>write_id3v1</code> option.
</p>
<p>The muxer may also write a Xing frame at the beginning, which contains the
number of frames in the file. It is useful for computing the duration of VBR files.
The Xing frame is written if the output stream is seekable and if the
<code>write_xing</code> option is set to 1 (the default).
</p>
<p>The muxer supports writing ID3v2 attached pictures (APIC frames). The pictures
are supplied to the muxer in the form of a video stream with a single packet. There
can be any number of those streams, each of which will correspond to a single APIC frame.
The stream metadata tags <var>title</var> and <var>comment</var> map to APIC
<var>description</var> and <var>picture type</var> respectively. See
<a href="http://id3.org/id3v2.4.0-frames">http://id3.org/id3v2.4.0-frames</a> for allowed picture types.
</p>
<p>Note that the APIC frames must be written at the beginning, so the muxer will
buffer the audio frames until it gets all the pictures. It is therefore advised
to provide the pictures as soon as possible to avoid excessive buffering.
</p>
<p>Examples:
</p>
<p>Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
</pre></div>

<p>To attach a picture to an mp3 file select both the audio and the picture stream
with <code>map</code>:
</p><div class="example">
<pre class="example">ffmpeg -i input.mp3 -i cover.png -c copy -map 0 -map 1
-metadata:s:v title=&quot;Album cover&quot; -metadata:s:v comment=&quot;Cover (Front)&quot; out.mp3
</pre></div>

<p>Write a &quot;clean&quot; MP3 without any extra features:
</p><div class="example">
<pre class="example">ffmpeg -i input.wav -write_xing 0 -id3v2_version 0 out.mp3
</pre></div>

<a name="mpegts-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-mpegts-1">4.13 mpegts</a></h2>

<p>MPEG transport stream muxer.
</p>
<p>This muxer implements ISO 13818-1 and part of ETSI EN 300 468.
</p>
<p>The recognized metadata settings in the mpegts muxer are <code>service_provider</code>
and <code>service_name</code>. If they are not set, the default for
<code>service_provider</code> is &quot;FFmpeg&quot; and the default for
<code>service_name</code> is &quot;Service01&quot;.
</p>
<a name="Options"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Options">4.13.1 Options</a></h3>

<p>The muxer options are:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>-mpegts_original_network_id <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the original_network_id (default 0x0001). This is the unique identifier
of a network in DVB. Its main use is in the unique identification of a
service through the path Original_Network_ID, Transport_Stream_ID.
</p></dd>
<dt>&lsquo;<samp>-mpegts_transport_stream_id <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the transport_stream_id (default 0x0001). This identifies a
transponder in DVB.
</p></dd>
<dt>&lsquo;<samp>-mpegts_service_id <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the service_id (default 0x0001), also known as the program in DVB.
</p></dd>
<dt>&lsquo;<samp>-mpegts_pmt_start_pid <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the first PID for PMT (default 0x1000, max 0x1f00).
</p></dd>
<dt>&lsquo;<samp>-mpegts_start_pid <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the first PID for data packets (default 0x0100, max 0x0f00).
</p></dd>
<dt>&lsquo;<samp>-mpegts_m2ts_mode <var>number</var></samp>&rsquo;</dt>
<dd><p>Enable m2ts mode if set to 1. Default value is -1 which disables m2ts mode.
</p></dd>
<dt>&lsquo;<samp>-muxrate <var>number</var></samp>&rsquo;</dt>
<dd><p>Set a constant muxrate (default VBR).
</p></dd>
<dt>&lsquo;<samp>-pcr_period <var>number</var></samp>&rsquo;</dt>
<dd><p>Override the default PCR retransmission time (default 20ms), ignored
if variable muxrate is selected.
</p></dd>
<dt>&lsquo;<samp>-pes_payload_size <var>number</var></samp>&rsquo;</dt>
<dd><p>Set minimum PES packet payload in bytes.
</p></dd>
<dt>&lsquo;<samp>-mpegts_flags <var>flags</var></samp>&rsquo;</dt>
<dd><p>Set flags (see below).
</p></dd>
<dt>&lsquo;<samp>-mpegts_copyts <var>number</var></samp>&rsquo;</dt>
<dd><p>Preserve original timestamps if the value is set to 1. Default value is -1, which
results in shifting timestamps so that they start from 0.
</p></dd>
<dt>&lsquo;<samp>-tables_version <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the PAT, PMT and SDT version (default 0, valid values are from 0 to 31, inclusive).
This option allows updating the stream structure so that a standard consumer may
detect the change. To do so, reopen the output AVFormatContext (in case of API
usage) or restart the ffmpeg instance, cyclically changing the tables_version value:
</p><div class="example">
<pre class="example">ffmpeg -i source1.ts -codec copy -f mpegts -tables_version 0 udp://1.1.1.1:1111
ffmpeg -i source2.ts -codec copy -f mpegts -tables_version 1 udp://1.1.1.1:1111
...
ffmpeg -i source3.ts -codec copy -f mpegts -tables_version 31 udp://1.1.1.1:1111
ffmpeg -i source1.ts -codec copy -f mpegts -tables_version 0 udp://1.1.1.1:1111
ffmpeg -i source2.ts -codec copy -f mpegts -tables_version 1 udp://1.1.1.1:1111
...
</pre></div>
</dd>
</dl>
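<p>As a sketch (file names are placeholders and the rates are arbitrary), a stream could be
remuxed at a constant mux rate with an explicit PCR retransmission time of 30 milliseconds:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -c copy -muxrate 4000000 -pcr_period 30 -f mpegts out.ts
</pre></div>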

<p>The option <code>mpegts_flags</code> may take a set of the following flags:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>resend_headers</samp>&rsquo;</dt>
<dd><p>Reemit PAT/PMT before writing the next packet.
</p></dd>
<dt>&lsquo;<samp>latm</samp>&rsquo;</dt>
<dd><p>Use LATM packetization for AAC.
</p></dd>
</dl>
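<p>For instance, a sketch (placeholder file names; it assumes an FFmpeg build where the
native AAC encoder still requires <code>-strict experimental</code>) that copies the video,
encodes the audio to AAC and requests LATM packetization in the transport stream:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -c:v copy -c:a aac -strict experimental -f mpegts -mpegts_flags latm out.ts
</pre></div>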

<a name="Example-1"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Example-1">4.13.2 Example</a></h3>

<div class="example">
<pre class="example">ffmpeg -i file.mpg -c copy \
     -mpegts_original_network_id 0x1122 \
     -mpegts_transport_stream_id 0x3344 \
     -mpegts_service_id 0x5566 \
     -mpegts_pmt_start_pid 0x1500 \
     -mpegts_start_pid 0x150 \
     -metadata service_provider=&quot;Some provider&quot; \
     -metadata service_name=&quot;Some Channel&quot; \
     -y out.ts
</pre></div>

<a name="null"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-null">4.14 null</a></h2>

<p>Null muxer.
</p>
<p>This muxer does not generate any output file; it is mainly useful for
testing or benchmarking purposes.
</p>
<p>For example to benchmark decoding with <code>ffmpeg</code> you can use the
command:
</p><div class="example">
<pre class="example">ffmpeg -benchmark -i INPUT -f null out.null
</pre></div>

<p>Note that the above command does not read or write the &lsquo;<tt>out.null</tt>&rsquo;
file, but specifying the output file is required by the <code>ffmpeg</code>
syntax.
</p>
<p>Alternatively you can write the command as:
</p><div class="example">
<pre class="example">ffmpeg -benchmark -i INPUT -f null -
</pre></div>

<a name="nut"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-nut">4.15 nut</a></h2>

<dl compact="compact">
<dt>&lsquo;<samp>-syncpoints <var>flags</var></samp>&rsquo;</dt>
<dd><p>Change the syncpoint usage in nut:
</p><dl compact="compact">
<dt>&lsquo;<samp><var>default</var></samp>&rsquo;</dt>
<dd><p>Use the normal low-overhead seeking aids.
</p></dd>
<dt>&lsquo;<samp><var>none</var></samp>&rsquo;</dt>
<dd><p>Do not use the syncpoints at all, reducing the overhead but making the stream non-seekable.
Use of this option is not recommended, as the resulting files are very sensitive
to damage and seeking is not possible. Also, in general the overhead from
syncpoints is negligible. Note that <code>-write_index 0</code> can be used to disable
all growing data tables, allowing endless streams to be muxed with limited memory
and without these disadvantages.
</p></dd>
<dt>&lsquo;<samp><var>timestamped</var></samp>&rsquo;</dt>
<dd><p>Extend the syncpoint with a wallclock field.
</p></dd>
</dl>
<p>The <var>none</var> and <var>timestamped</var> flags are experimental.
</p></dd>
<dt>&lsquo;<samp>-write_index <var>bool</var></samp>&rsquo;</dt>
<dd><p>Write the index at the end of the file; the default is to write an index.
</p></dd>
</dl>

<div class="example">
<pre class="example">ffmpeg -i INPUT -f_strict experimental -syncpoints none - | processor
</pre></div>
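<p>As mentioned above, the growing index can also be disabled entirely; a sketch (the output
name is a placeholder) for muxing a potentially endless stream with bounded memory:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -c copy -write_index 0 -f nut endless.nut
</pre></div>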

<a name="ogg"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-ogg">4.16 ogg</a></h2>

<p>Ogg container muxer.
</p>
<dl compact="compact">
<dt>&lsquo;<samp>-page_duration <var>duration</var></samp>&rsquo;</dt>
<dd><p>Preferred page duration, in microseconds. The muxer will attempt to create
pages that are approximately <var>duration</var> microseconds long. This allows the
user to compromise between seek granularity and container overhead. The default
is 1 second. A value of 0 will fill all segments, making pages as large as
possible. A value of 1 will effectively use 1 packet-per-page in most
situations, giving a small seek granularity at the cost of additional container
overhead.
</p></dd>
</dl>
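<p>For example, a sketch (placeholder file names, and it assumes FFmpeg was built with
libvorbis) that encodes to Vorbis while requesting pages of roughly 100 milliseconds,
trading some container overhead for finer seek granularity:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -c:a libvorbis -page_duration 100000 out.ogg
</pre></div>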

<p><a name="segment"></a>
</p><a name="segment_002c-stream_005fsegment_002c-ssegment"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-segment_002c-stream_005fsegment_002c-ssegment">4.17 segment, stream_segment, ssegment</a></h2>

<p>Basic stream segmenter.
</p>
<p>This muxer outputs streams to a number of separate files of nearly
fixed duration. The output filename pattern can be set in a fashion similar to
<a href="#image2">image2</a>.
</p>
<p><code>stream_segment</code> is a variant of the muxer used to write to
streaming output formats, i.e. which do not require global headers,
and is recommended for outputting e.g. to MPEG transport stream segments.
<code>ssegment</code> is a shorter alias for <code>stream_segment</code>.
</p>
<p>Every segment starts with a keyframe of the selected reference stream,
which is set through the &lsquo;<samp>reference_stream</samp>&rsquo; option.
</p>
<p>Note that if you want accurate splitting for a video file, you need to
make the input key frames correspond to the exact splitting times
expected by the segmenter, or the segment muxer will start the new
segment with the next key frame found after the specified start
time.
</p>
<p>The segment muxer works best with a single constant frame rate video.
</p>
<p>Optionally it can generate a list of the created segments, by setting
the option <var>segment_list</var>. The list type is specified by the
<var>segment_list_type</var> option. The entry filenames in the segment
list are set by default to the basename of the corresponding segment
files.
</p>
<p>See also the <a href="#hls">hls</a> muxer, which provides a more specific
implementation for HLS segmentation.
</p>
<a name="Options-1"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Options-1">4.17.1 Options</a></h3>

<p>The segment muxer supports the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>reference_stream <var>specifier</var></samp>&rsquo;</dt>
<dd><p>Set the reference stream, as specified by the string <var>specifier</var>.
If <var>specifier</var> is set to <code>auto</code>, the reference is chosen
automatically. Otherwise it must be a stream specifier (see the &ldquo;Stream
specifiers&rdquo; chapter in the ffmpeg manual) which specifies the
reference stream. The default value is <code>auto</code>.
</p>
</dd>
<dt>&lsquo;<samp>segment_format <var>format</var></samp>&rsquo;</dt>
<dd><p>Override the inner container format; by default it is guessed from the filename
extension.
</p>
</dd>
<dt>&lsquo;<samp>segment_format_options <var>options_list</var></samp>&rsquo;</dt>
<dd><p>Set output format options using a :-separated list of key=value
parameters. Values containing the <code>:</code> special character must be
escaped.
</p>
</dd>
<dt>&lsquo;<samp>segment_list <var>name</var></samp>&rsquo;</dt>
<dd><p>Also generate a listfile named <var>name</var>. If not specified, no
listfile is generated.
</p>
</dd>
<dt>&lsquo;<samp>segment_list_flags <var>flags</var></samp>&rsquo;</dt>
<dd><p>Set flags affecting the segment list generation.
</p>
<p>It currently supports the following flags:
</p><dl compact="compact">
<dt>&lsquo;<samp>cache</samp>&rsquo;</dt>
<dd><p>Allow caching (only affects M3U8 list files).
</p>
</dd>
<dt>&lsquo;<samp>live</samp>&rsquo;</dt>
<dd><p>Allow live-friendly file generation.
</p></dd>
</dl>

</dd>
<dt>&lsquo;<samp>segment_list_type <var>type</var></samp>&rsquo;</dt>
<dd><p>Select the listing format.
</p><dl compact="compact">
<dt>&lsquo;<samp><var>flat</var></samp>&rsquo;</dt>
<dd><p>Use a simple flat list of entries.
</p></dd>
<dt>&lsquo;<samp><var>hls</var></samp>&rsquo;</dt>
<dd><p>Use an m3u8-like structure.
</p></dd>
</dl>

</dd>
<dt>&lsquo;<samp>segment_list_size <var>size</var></samp>&rsquo;</dt>
<dd><p>Update the list file so that it contains at most <var>size</var>
segments. If 0 the list file will contain all the segments. Default
value is 0.
</p>
</dd>
<dt>&lsquo;<samp>segment_list_entry_prefix <var>prefix</var></samp>&rsquo;</dt>
<dd><p>Prepend <var>prefix</var> to each entry. Useful to generate absolute paths.
By default no prefix is applied.
</p>
<p>The following values are recognized:
</p><dl compact="compact">
<dt>&lsquo;<samp>flat</samp>&rsquo;</dt>
<dd><p>Generate a flat list for the created segments, one segment per line.
</p>
</dd>
<dt>&lsquo;<samp>csv, ext</samp>&rsquo;</dt>
<dd><p>Generate a list for the created segments, one segment per line,
each line matching the format (comma-separated values):
</p><div class="example">
<pre class="example"><var>segment_filename</var>,<var>segment_start_time</var>,<var>segment_end_time</var>
</pre></div>

<p><var>segment_filename</var> is the name of the output file generated by the
muxer according to the provided pattern. CSV escaping (according to
RFC4180) is applied if required.
</p>
<p><var>segment_start_time</var> and <var>segment_end_time</var> specify
the segment start and end time expressed in seconds.
</p>
<p>A list file with the suffix <code>&quot;.csv&quot;</code> or <code>&quot;.ext&quot;</code> will
auto-select this format.
</p>
<p>&lsquo;<samp>ext</samp>&rsquo; is deprecated in favor of &lsquo;<samp>csv</samp>&rsquo;.
</p>
</dd>
<dt>&lsquo;<samp>ffconcat</samp>&rsquo;</dt>
<dd><p>Generate an ffconcat file for the created segments. The resulting file
can be read using the FFmpeg <a href="#concat">concat</a> demuxer.
</p>
<p>A list file with the suffix <code>&quot;.ffcat&quot;</code> or <code>&quot;.ffconcat&quot;</code> will
auto-select this format.
</p>
</dd>
<dt>&lsquo;<samp>m3u8</samp>&rsquo;</dt>
<dd><p>Generate an extended M3U8 file, version 3, compliant with
<a href="http://tools.ietf.org/id/draft-pantos-http-live-streaming">http://tools.ietf.org/id/draft-pantos-http-live-streaming</a>.
</p>
<p>A list file with the suffix <code>&quot;.m3u8&quot;</code> will auto-select this format.
</p></dd>
</dl>

<p>If not specified the type is guessed from the list file name suffix.
</p>
</dd>
<dt>&lsquo;<samp>segment_time <var>time</var></samp>&rsquo;</dt>
<dd><p>Set segment duration to <var>time</var>, the value must be a duration
specification. Default value is &quot;2&quot;. See also the
&lsquo;<samp>segment_times</samp>&rsquo; option.
</p>
<p>Note that splitting may not be accurate, unless you force the
reference stream key-frames at the given time. See the introductory
notice and the examples below.
</p>
</dd>
<dt>&lsquo;<samp>segment_atclocktime <var>1|0</var></samp>&rsquo;</dt>
<dd><p>If set to &quot;1&quot;, split at regular clock time intervals starting from 00:00
o&rsquo;clock. The <var>time</var> value specified in &lsquo;<samp>segment_time</samp>&rsquo; is
used for setting the length of the splitting interval.
</p>
<p>For example with &lsquo;<samp>segment_time</samp>&rsquo; set to &quot;900&quot; this makes it possible
to create files at 12:00 o&rsquo;clock, 12:15, 12:30, etc.
</p>
<p>Default value is &quot;0&quot;.
</p>
</dd>
<dt>&lsquo;<samp>segment_time_delta <var>delta</var></samp>&rsquo;</dt>
<dd><p>Specify the accuracy used when selecting the start time for a
segment, expressed as a duration specification. Default value is &quot;0&quot;.
</p>
<p>When delta is specified a key-frame will start a new segment if its
PTS satisfies the relation:
</p><div class="example">
<pre class="example">PTS &gt;= start_time - time_delta
</pre></div>

<p>This option is useful when splitting video content, which is always
split at GOP boundaries, in case a key frame is found just before the
specified split time.
</p>
<p>In particular it may be used in combination with the &lsquo;<tt>ffmpeg</tt>&rsquo; option
<var>force_key_frames</var>. The key frame times specified by
<var>force_key_frames</var> may not be set accurately because of rounding
issues, with the consequence that a key frame may end up just
before the specified time. For constant frame rate videos a value of
1/(2*<var>frame_rate</var>) should address the worst case mismatch between
the specified time and the time set by <var>force_key_frames</var>.
</p>
</dd>
<dt>&lsquo;<samp>segment_times <var>times</var></samp>&rsquo;</dt>
<dd><p>Specify a list of split points. <var>times</var> contains a list of comma
separated duration specifications, in increasing order. See also
the &lsquo;<samp>segment_time</samp>&rsquo; option.
</p>
</dd>
<dt>&lsquo;<samp>segment_frames <var>frames</var></samp>&rsquo;</dt>
<dd><p>Specify a list of split video frame numbers. <var>frames</var> contains a
list of comma separated integer numbers, in increasing order.
</p>
<p>This option specifies to start a new segment whenever a reference
stream key frame is found and the sequential number (starting from 0)
of the frame is greater than or equal to the next value in the list.
</p>
</dd>
<dt>&lsquo;<samp>segment_wrap <var>limit</var></samp>&rsquo;</dt>
<dd><p>Wrap around segment index once it reaches <var>limit</var>.
</p>
</dd>
<dt>&lsquo;<samp>segment_start_number <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the sequence number of the first segment. Defaults to <code>0</code>.
</p>
</dd>
<dt>&lsquo;<samp>reset_timestamps <var>1|0</var></samp>&rsquo;</dt>
<dd><p>Reset timestamps at the beginning of each segment, so that each segment
will start with near-zero timestamps. It is meant to ease the playback
of the generated segments. May not work with some combinations of
muxers/codecs. It is set to <code>0</code> by default.
</p>
</dd>
<dt>&lsquo;<samp>initial_offset <var>offset</var></samp>&rsquo;</dt>
<dd><p>Specify timestamp offset to apply to the output packet timestamps. The
argument must be a time duration specification, and defaults to 0.
</p></dd>
</dl>

<a name="Examples-2"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Examples-2">4.17.2 Examples</a></h3>

<ul>
<li>
Remux the content of file &lsquo;<tt>in.mkv</tt>&rsquo; to a list of segments
&lsquo;<tt>out-000.nut</tt>&rsquo;, &lsquo;<tt>out-001.nut</tt>&rsquo;, etc., and write the list of
generated segments to &lsquo;<tt>out.list</tt>&rsquo;:
<div class="example">
<pre class="example">ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.list out%03d.nut
</pre></div>

</li><li>
Segment input and set output format options for the output segments:
<div class="example">
<pre class="example">ffmpeg -i in.mkv -f segment -segment_time 10 -segment_format_options movflags=+faststart out%03d.mp4
</pre></div>

</li><li>
Segment the input file according to the split points specified by the
<var>segment_times</var> option:
<div class="example">
<pre class="example">ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
</pre></div>

</li><li>
Use the <code>ffmpeg</code> &lsquo;<samp>force_key_frames</samp>&rsquo;
option to force key frames in the input at the specified location, together
with the segment option &lsquo;<samp>segment_time_delta</samp>&rsquo; to account for
possible rounding performed when setting key frame times.
<div class="example">
<pre class="example">ffmpeg -i in.mkv -force_key_frames 1,2,3,5,8,13,21 -codec:v mpeg4 -codec:a pcm_s16le -map 0 \
-f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 -segment_time_delta 0.05 out%03d.nut
</pre></div>
<p>In order to force key frames on the input file, transcoding is
required.
</p>
</li><li>
Segment the input file by splitting it according to the sequence of
frame numbers specified with the &lsquo;<samp>segment_frames</samp>&rsquo; option:
<div class="example">
<pre class="example">ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_frames 100,200,300,500,800 out%03d.nut
</pre></div>

</li><li>
Convert the &lsquo;<tt>in.mkv</tt>&rsquo; to TS segments using the <code>libx264</code>
and <code>libfaac</code> encoders:
<div class="example">
<pre class="example">ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a libfaac -f ssegment -segment_list out.list out%03d.ts
</pre></div>

</li><li>
Segment the input file, and create an M3U8 live playlist (can be used
as live HLS source):
<div class="example">
<pre class="example">ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
-segment_list_flags +live -segment_time 10 out%03d.mkv
</pre></div>
</li></ul>
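<p>The &lsquo;<samp>segment_atclocktime</samp>&rsquo; option is not used in the examples above; a
sketch (placeholder file names) that cuts segments on 15-minute wall-clock boundaries
could look like:
</p><div class="example">
<pre class="example">ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_atclocktime 1 -segment_time 900 out%03d.mkv
</pre></div>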

<a name="smoothstreaming"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-smoothstreaming">4.18 smoothstreaming</a></h2>

<p>The Smooth Streaming muxer generates a set of files (Manifest, chunks) suitable for serving with a conventional web server.
</p>
<dl compact="compact">
<dt>&lsquo;<samp>window_size</samp>&rsquo;</dt>
<dd><p>Specify the number of fragments kept in the manifest. Default 0 (keep all).
</p>
</dd>
<dt>&lsquo;<samp>extra_window_size</samp>&rsquo;</dt>
<dd><p>Specify the number of fragments kept outside of the manifest before removing from disk. Default 5.
</p>
</dd>
<dt>&lsquo;<samp>lookahead_count</samp>&rsquo;</dt>
<dd><p>Specify the number of lookahead fragments. Default 2.
</p>
</dd>
<dt>&lsquo;<samp>min_frag_duration</samp>&rsquo;</dt>
<dd><p>Specify the minimum fragment duration (in microseconds). Default 5000000.
</p>
</dd>
<dt>&lsquo;<samp>remove_at_exit</samp>&rsquo;</dt>
<dd><p>Specify whether to remove all fragments when finished. Default 0 (do not remove).
</p>
</dd>
</dl>
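<p>A minimal sketch (the encoder settings and the output directory are placeholders, and it
assumes an FFmpeg build where the native AAC encoder still requires
<code>-strict experimental</code>) of publishing a live Smooth Streaming presentation into a
directory served by a web server:
</p><div class="example">
<pre class="example">ffmpeg -re -i INPUT -c:v libx264 -c:a aac -strict experimental -f smoothstreaming -window_size 10 /var/www/stream
</pre></div>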

<a name="tee"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-tee">4.19 tee</a></h2>

<p>The tee muxer can be used to write the same data to several files or any
other kind of muxer. It can be used, for example, to both stream a video to
the network and save it to disk at the same time.
</p>
<p>It is different from specifying several outputs to the <code>ffmpeg</code>
command-line tool because the audio and video data will be encoded only once
with the tee muxer; encoding can be a very expensive process. It is not
useful when using the libavformat API directly because it is then possible
to feed the same packets to several muxers directly.
</p>
<p>The slave outputs are specified in the file name given to the muxer,
separated by &rsquo;|&rsquo;. If any of the slave names contains the &rsquo;|&rsquo; separator,
leading or trailing spaces or any special character, it must be
escaped (see <a href="ffmpeg-utils.html#quoting_005fand_005fescaping">(ffmpeg-utils)the &quot;Quoting and escaping&quot; section in the ffmpeg-utils(1) manual</a>).
</p>
<p>Muxer options can be specified for each slave by prepending them as a list of
<var>key</var>=<var>value</var> pairs separated by &rsquo;:&rsquo;, between square brackets. If
the option values contain a special character or the &rsquo;:&rsquo; separator, they
must be escaped; note that this is a second level escaping.
</p>
<p>The following special options are also recognized:
</p><dl compact="compact">
<dt>&lsquo;<samp>f</samp>&rsquo;</dt>
<dd><p>Specify the format name. Useful if it cannot be guessed from the
output name suffix.
</p>
</dd>
<dt>&lsquo;<samp>bsfs[/<var>spec</var>]</samp>&rsquo;</dt>
<dd><p>Specify a list of bitstream filters to apply to the specified
output.
</p>
<p>It is possible to specify to which streams a given bitstream filter
applies, by appending a stream specifier to the option separated by
<code>/</code>. <var>spec</var> must be a stream specifier (see <a href="#Format-stream-specifiers">Format stream specifiers</a>).  If the stream specifier is not specified, the
bitstream filters will be applied to all streams in the output.
</p>
<p>Several bitstream filters can be specified, separated by &quot;,&quot;.
</p>
</dd>
<dt>&lsquo;<samp>select</samp>&rsquo;</dt>
<dd><p>Select the streams that should be mapped to the slave output,
specified by a stream specifier. If not specified, this defaults to
all the input streams.
</p></dd>
</dl>

<a name="Examples"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Examples">4.19.1 Examples</a></h3>

<ul>
<li>
Encode something and both archive it in a WebM file and stream it
as MPEG-TS over UDP (the streams need to be explicitly mapped):
<div class="example">
<pre class="example">ffmpeg -i ... -c:v libx264 -c:a mp2 -f tee -map 0:v -map 0:a
  &quot;archive-20121107.mkv|[f=mpegts]udp://10.0.1.255:1234/&quot;
</pre></div>

</li><li>
Use <code>ffmpeg</code> to encode the input, and send the output
to three different destinations. The <code>dump_extra</code> bitstream
filter is used to add extradata information to all the output video
keyframe packets, as requested by the MPEG-TS format. The select
option is applied to &lsquo;<tt>out.aac</tt>&rsquo; in order to make it contain only
audio packets.
<div class="example">
<pre class="example">ffmpeg -i ... -map 0 -flags +global_header -c:v libx264 -c:a aac -strict experimental
       -f tee &quot;[bsfs/v=dump_extra]out.ts|[movflags=+faststart]out.mp4|[select=a]out.aac&quot;
</pre></div>

</li><li>
As above, but select only stream <code>a:1</code> for the audio output. Note
that a second level escaping must be performed, as &quot;:&quot; is a special
character used to separate options.
<div class="example">
<pre class="example">ffmpeg -i ... -map 0 -flags +global_header -c:v libx264 -c:a aac -strict experimental
       -f tee &quot;[bsfs/v=dump_extra]out.ts|[movflags=+faststart]out.mp4|[select=\'a:1\']out.aac&quot;
</pre></div>
</li></ul>

<p>Note: some codecs may need different options depending on the output format;
the auto-detection of this cannot work with the tee muxer. The main example
is the &lsquo;<samp>global_header</samp>&rsquo; flag.
</p>
<a name="webm_005fdash_005fmanifest"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-webm_005fdash_005fmanifest">4.20 webm_dash_manifest</a></h2>

<p>WebM DASH Manifest muxer.
</p>
<p>This muxer implements the WebM DASH Manifest specification to generate the DASH manifest XML.
</p>
<a name="Options-4"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Options-4">4.20.1 Options</a></h3>

<p>This muxer supports the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>adaptation_sets</samp>&rsquo;</dt>
<dd><p>This option has the following syntax: &quot;id=x,streams=a,b,c id=y,streams=d,e&quot; where x and y are the
unique identifiers of the adaptation sets and a,b,c,d and e are the indices of the corresponding
audio and video streams. Any number of adaptation sets can be added using this option.
</p></dd>
</dl>

<a name="Example-2"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Example-2">4.20.2 Example</a></h3>
<div class="example">
<pre class="example">ffmpeg -f webm_dash_manifest -i video1.webm \
       -f webm_dash_manifest -i video2.webm \
       -f webm_dash_manifest -i audio1.webm \
       -f webm_dash_manifest -i audio2.webm \
       -map 0 -map 1 -map 2 -map 3 \
       -c copy \
       -f webm_dash_manifest \
       -adaptation_sets &quot;id=0,streams=0,1 id=1,streams=2,3&quot; \
       manifest.xml
</pre></div>

<a name="Metadata-1"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-Metadata-1">5 Metadata</a></h1>

<p>FFmpeg is able to dump metadata from media files into a simple UTF-8-encoded
INI-like text file and then load it back using the metadata muxer/demuxer.
</p>
<p>The file format is as follows:
</p><ol>
<li>
A file consists of a header and a number of metadata tags divided into sections,
each on its own line.

</li><li>
The header is a &rsquo;;FFMETADATA&rsquo; string, followed by a version number (currently 1).

</li><li>
Metadata tags are of the form &rsquo;key=value&rsquo;.

</li><li>
Immediately after the header follows the global metadata.

</li><li>
After global metadata there may be sections with per-stream/per-chapter
metadata.

</li><li>
A section starts with the section name in uppercase (e.g. STREAM or CHAPTER) in
brackets (&rsquo;[&rsquo;, &rsquo;]&rsquo;) and ends with next section or end of file.

</li><li>
At the beginning of a chapter section there may be an optional timebase to be
used for start/end values. It must be in the form &rsquo;TIMEBASE=num/den&rsquo;, where num and
den are integers. If the timebase is missing then start/end times are assumed to
be in milliseconds.
Next, a chapter section must contain chapter start and end times in the form
&rsquo;START=num&rsquo;, &rsquo;END=num&rsquo;, where num is a positive integer.

</li><li>
Empty lines and lines starting with &rsquo;;&rsquo; or &rsquo;#&rsquo; are ignored.

</li><li>
Metadata keys or values containing special characters (&rsquo;=&rsquo;, &rsquo;;&rsquo;, &rsquo;#&rsquo;, &rsquo;\&rsquo; and a
newline) must be escaped with a backslash &rsquo;\&rsquo;.

</li><li>
Note that whitespace in metadata (e.g. foo = bar) is considered to be a part of
the tag (in the example above key is &rsquo;foo &rsquo;, value is &rsquo; bar&rsquo;).
</li></ol>

<p>An ffmetadata file might look like this:
</p><div class="example">
<pre class="example">;FFMETADATA1
title=bike\\shed
;this is a comment
artist=FFmpeg troll team

[CHAPTER]
TIMEBASE=1/1000
START=0
#chapter ends at 0:01:00
END=60000
title=chapter \#1
[STREAM]
title=multi\
line
</pre></div>

<p>By using the ffmetadata muxer and demuxer it is possible to extract
metadata from an input file to an ffmetadata file, and then transcode
the file into an output file with the edited ffmetadata file.
</p>
<p>Extracting an ffmetadata file with &lsquo;<tt>ffmpeg</tt>&rsquo; goes as follows:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f ffmetadata FFMETADATAFILE
</pre></div>

<p>Reinserting edited metadata information from the FFMETADATAFILE file can
be done as:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -i FFMETADATAFILE -map_metadata 1 -codec copy OUTPUT
</pre></div>


<a name="See-Also"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-See-Also">6 See Also</a></h1>

<p><a href="ffmpeg.html">ffmpeg</a>, <a href="ffplay.html">ffplay</a>, <a href="ffprobe.html">ffprobe</a>, <a href="ffserver.html">ffserver</a>,
<a href="libavformat.html">libavformat</a>
</p>

<a name="Authors"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-Authors">7 Authors</a></h1>

<p>The FFmpeg developers.
</p>
<p>For details about the authorship, see the Git history of the project
(git://source.ffmpeg.org/ffmpeg), e.g. by typing the command
<code>git log</code> in the FFmpeg source directory, or browsing the
online repository at <a href="http://source.ffmpeg.org">http://source.ffmpeg.org</a>.
</p>
<p>Maintainers for the specific components are listed in the file
&lsquo;<tt>MAINTAINERS</tt>&rsquo; in the source code tree.
</p>

    </div>
  </body>
</html>