<!DOCTYPE html>
<html>
<!-- Created on February 9, 2014 by texi2html 5.0 -->
<!--
texi2html was written by: 
            Lionel Cons <Lionel.Cons@cern.ch> (original author)
            Karl Berry  <karl@freefriends.org>
            Olaf Bachmann <obachman@mathematik.uni-kl.de>
            and many others.
Maintained by: Many creative people.
Send bugs and suggestions to <texi2html-bug@nongnu.org>

-->
<head>
<title>FFmpeg Formats Documentation</title>

<meta name="description" content="FFmpeg Formats Documentation">
<meta name="keywords" content="FFmpeg Formats Documentation">
<meta name="resource-type" content="document">
<meta name="distribution" content="global">
<meta name="Generator" content="texi2html 5.0">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<link rel="stylesheet" type="text/css" href="default.css" />

<link rel="icon" href="favicon.png" type="image/png" />
</head>
<body>
<div id="container">

<h1 class="titlefont">FFmpeg Formats Documentation</h1>
<hr>
<a name="SEC_Top"></a>

<a name="SEC_Contents"></a>
<h1>Table of Contents</h1>

<div class="contents">

<ul class="no-bullet">
  <li><a name="toc-Description" href="#Description">1 Description</a></li>
  <li><a name="toc-Format-Options" href="#Format-Options">2 Format Options</a></li>
  <li><a name="toc-Demuxers" href="#Demuxers">3 Demuxers</a>
  <ul class="no-bullet">
    <li><a name="toc-image2-2" href="#image2-2">3.1 image2</a>
    <ul class="no-bullet">
      <li><a name="toc-Examples-1" href="#Examples-1">3.1.1 Examples</a></li>
    </ul></li>
    <li><a name="toc-applehttp" href="#applehttp">3.2 applehttp</a></li>
    <li><a name="toc-sbg" href="#sbg">3.3 sbg</a></li>
    <li><a name="toc-concat" href="#concat">3.4 concat</a>
    <ul class="no-bullet">
      <li><a name="toc-Syntax" href="#Syntax">3.4.1 Syntax</a></li>
    </ul></li>
    <li><a name="toc-tedcaptions" href="#tedcaptions">3.5 tedcaptions</a></li>
  </ul></li>
  <li><a name="toc-Muxers" href="#Muxers">4 Muxers</a>
  <ul class="no-bullet">
    <li><a name="toc-aiff-1" href="#aiff-1">4.1 aiff</a></li>
    <li><a name="toc-crc-1" href="#crc-1">4.2 crc</a></li>
    <li><a name="toc-framecrc-1" href="#framecrc-1">4.3 framecrc</a></li>
    <li><a name="toc-framemd5-1" href="#framemd5-1">4.4 framemd5</a></li>
    <li><a name="toc-hls-1" href="#hls-1">4.5 hls</a></li>
    <li><a name="toc-ico-1" href="#ico-1">4.6 ico</a></li>
    <li><a name="toc-image2-1" href="#image2-1">4.7 image2</a></li>
    <li><a name="toc-md5-1" href="#md5-1">4.8 md5</a></li>
    <li><a name="toc-MOV_002fMP4_002fISMV" href="#MOV_002fMP4_002fISMV">4.9 MOV/MP4/ISMV</a></li>
    <li><a name="toc-mpegts" href="#mpegts">4.10 mpegts</a></li>
    <li><a name="toc-null" href="#null">4.11 null</a></li>
    <li><a name="toc-matroska" href="#matroska">4.12 matroska</a></li>
    <li><a name="toc-segment_002c-stream_005fsegment_002c-ssegment" href="#segment_002c-stream_005fsegment_002c-ssegment">4.13 segment, stream_segment, ssegment</a></li>
    <li><a name="toc-Examples" href="#Examples">4.14 Examples</a></li>
    <li><a name="toc-mp3" href="#mp3">4.15 mp3</a></li>
  </ul></li>
  <li><a name="toc-Metadata" href="#Metadata">5 Metadata</a></li>
  <li><a name="toc-See-Also" href="#See-Also">6 See Also</a></li>
  <li><a name="toc-Authors" href="#Authors">7 Authors</a></li>
</ul>
</div>


<hr size="6">
<a name="Description"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-Description">1 Description</a></h1>

<p>This document describes the supported formats (muxers and demuxers)
provided by the libavformat library.
</p>

<a name="Format-Options"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-Format-Options">2 Format Options</a></h1>

<p>The libavformat library provides some generic global options, which
can be set on all the muxers and demuxers. In addition each muxer or
demuxer may support so-called private options, which are specific for
that component.
</p>
<p>Options may be set by specifying -<var>option</var> <var>value</var> in the
FFmpeg tools, or, for programmatic use, by setting the value explicitly in
the <code>AVFormatContext</code> options or via the &lsquo;<tt>libavutil/opt.h</tt>&rsquo;
API.
</p>
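<p>For example, generic format options can be passed to <code>ffmpeg</code> on
the command line (&lsquo;<tt>INPUT</tt>&rsquo; and &lsquo;<tt>OUTPUT</tt>&rsquo; are only placeholder
names):
</p><div class="example">
<pre class="example">ffmpeg -fflags +genpts -probesize 10000000 -i INPUT -c copy OUTPUT
</pre></div>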
<p>The list of supported options follows:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>avioflags <var>flags</var> (<em>input/output</em>)</samp>&rsquo;</dt>
<dd><p>Possible values:
</p><dl compact="compact">
<dt>&lsquo;<samp>direct</samp>&rsquo;</dt>
<dd><p>Reduce buffering.
</p></dd>
</dl>

</dd>
<dt>&lsquo;<samp>probesize <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set probing size in bytes, i.e. the size of the data to analyze to get
stream information. A higher value allows more information to be detected in
case it is dispersed into the stream, but will increase latency. Must be an
integer not less than 32. Default is 5000000.
</p>
</dd>
<dt>&lsquo;<samp>packetsize <var>integer</var> (<em>output</em>)</samp>&rsquo;</dt>
<dd><p>Set packet size.
</p>
</dd>
<dt>&lsquo;<samp>fflags <var>flags</var> (<em>input/output</em>)</samp>&rsquo;</dt>
<dd><p>Set format flags.
</p>
<p>Possible values:
</p><dl compact="compact">
<dt>&lsquo;<samp>ignidx</samp>&rsquo;</dt>
<dd><p>Ignore index.
</p></dd>
<dt>&lsquo;<samp>genpts</samp>&rsquo;</dt>
<dd><p>Generate PTS.
</p></dd>
<dt>&lsquo;<samp>nofillin</samp>&rsquo;</dt>
<dd><p>Do not fill in missing values that can be exactly calculated.
</p></dd>
<dt>&lsquo;<samp>noparse</samp>&rsquo;</dt>
<dd><p>Disable AVParsers; this also requires <code>+nofillin</code>.
</p></dd>
<dt>&lsquo;<samp>igndts</samp>&rsquo;</dt>
<dd><p>Ignore DTS.
</p></dd>
<dt>&lsquo;<samp>discardcorrupt</samp>&rsquo;</dt>
<dd><p>Discard corrupted frames.
</p></dd>
<dt>&lsquo;<samp>sortdts</samp>&rsquo;</dt>
<dd><p>Try to interleave output packets by DTS.
</p></dd>
<dt>&lsquo;<samp>keepside</samp>&rsquo;</dt>
<dd><p>Do not merge side data.
</p></dd>
<dt>&lsquo;<samp>latm</samp>&rsquo;</dt>
<dd><p>Enable RTP MP4A-LATM payload.
</p></dd>
<dt>&lsquo;<samp>nobuffer</samp>&rsquo;</dt>
<dd><p>Reduce the latency introduced by optional buffering.
</p></dd>
</dl>

</dd>
<dt>&lsquo;<samp>analyzeduration <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Specify how many microseconds are analyzed to estimate duration.
</p>
</dd>
<dt>&lsquo;<samp>cryptokey <var>hexadecimal string</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set decryption key.
</p>
</dd>
<dt>&lsquo;<samp>indexmem <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set max memory used for timestamp index (per stream).
</p>
</dd>
<dt>&lsquo;<samp>rtbufsize <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set max memory used for buffering real-time frames.
</p>
</dd>
<dt>&lsquo;<samp>fdebug <var>flags</var> (<em>input/output</em>)</samp>&rsquo;</dt>
<dd><p>Print specific debug info.
</p>
<p>Possible values:
</p><dl compact="compact">
<dt>&lsquo;<samp>ts</samp>&rsquo;</dt>
</dl>

</dd>
<dt>&lsquo;<samp>max_delay <var>integer</var> (<em>input/output</em>)</samp>&rsquo;</dt>
<dd><p>Set maximum muxing or demuxing delay in microseconds.
</p>
</dd>
<dt>&lsquo;<samp>fpsprobesize <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set number of frames used to probe fps.
</p>
</dd>
<dt>&lsquo;<samp>audio_preload <var>integer</var> (<em>output</em>)</samp>&rsquo;</dt>
<dd><p>Set microseconds by which audio packets should be interleaved earlier.
</p>
</dd>
<dt>&lsquo;<samp>chunk_duration <var>integer</var> (<em>output</em>)</samp>&rsquo;</dt>
<dd><p>Set microseconds for each chunk.
</p>
</dd>
<dt>&lsquo;<samp>chunk_size <var>integer</var> (<em>output</em>)</samp>&rsquo;</dt>
<dd><p>Set size in bytes for each chunk.
</p>
</dd>
<dt>&lsquo;<samp>err_detect, f_err_detect <var>flags</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set error detection flags. <code>f_err_detect</code> is deprecated and
should be used only via the <code>ffmpeg</code> tool.
</p>
<p>Possible values:
</p><dl compact="compact">
<dt>&lsquo;<samp>crccheck</samp>&rsquo;</dt>
<dd><p>Verify embedded CRCs.
</p></dd>
<dt>&lsquo;<samp>bitstream</samp>&rsquo;</dt>
<dd><p>Detect bitstream specification deviations.
</p></dd>
<dt>&lsquo;<samp>buffer</samp>&rsquo;</dt>
<dd><p>Detect improper bitstream length.
</p></dd>
<dt>&lsquo;<samp>explode</samp>&rsquo;</dt>
<dd><p>Abort decoding on minor error detection.
</p></dd>
<dt>&lsquo;<samp>careful</samp>&rsquo;</dt>
<dd><p>Consider things that violate the spec and have not been seen in the
wild as errors.
</p></dd>
<dt>&lsquo;<samp>compliant</samp>&rsquo;</dt>
<dd><p>Consider all spec non-compliances as errors.
</p></dd>
<dt>&lsquo;<samp>aggressive</samp>&rsquo;</dt>
<dd><p>Consider things that a sane encoder should not do as errors.
</p></dd>
</dl>

</dd>
<dt>&lsquo;<samp>use_wallclock_as_timestamps <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Use wallclock as timestamps.
</p>
</dd>
<dt>&lsquo;<samp>avoid_negative_ts <var>integer</var> (<em>output</em>)</samp>&rsquo;</dt>
<dd><p>Shift timestamps to make them positive. 1 enables, 0 disables, default
of -1 enables when required by target format.
</p>
</dd>
<dt>&lsquo;<samp>skip_initial_bytes <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set the number of initial bytes to skip. Default is 0.
</p>
</dd>
<dt>&lsquo;<samp>correct_ts_overflow <var>integer</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Correct single timestamp overflows if set to 1. Default is 1.
</p></dd>
</dl>


<a name="Demuxers"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-Demuxers">3 Demuxers</a></h1>

<p>Demuxers are configured elements in FFmpeg that can read the
multimedia streams from a particular type of file.
</p>
<p>When you configure your FFmpeg build, all the supported demuxers
are enabled by default. You can list all available ones using the
configure option <code>--list-demuxers</code>.
</p>
<p>You can disable all the demuxers using the configure option
<code>--disable-demuxers</code>, and selectively enable a single demuxer with
the option <code>--enable-demuxer=<var>DEMUXER</var></code>, or disable it
with the option <code>--disable-demuxer=<var>DEMUXER</var></code>.
</p>
<p>The option <code>-formats</code> of the ff* tools will display the list of
enabled demuxers.
</p>
<p>The description of some of the currently available demuxers follows.
</p>
<a name="image2-2"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-image2-2">3.1 image2</a></h2>

<p>Image file demuxer.
</p>
<p>This demuxer reads from a list of image files specified by a pattern.
The syntax and meaning of the pattern are specified by the
option <var>pattern_type</var>.
</p>
<p>The pattern may contain a suffix which is used to automatically
determine the format of the images contained in the files.
</p>
<p>The size, the pixel format, and the format of each image must be the
same for all the files in the sequence.
</p>
<p>This demuxer accepts the following options:
</p><dl compact="compact">
<dt>&lsquo;<samp>framerate</samp>&rsquo;</dt>
<dd><p>Set the framerate for the video stream. It defaults to 25.
</p></dd>
<dt>&lsquo;<samp>loop</samp>&rsquo;</dt>
<dd><p>If set to 1, loop over the input. Default value is 0.
</p></dd>
<dt>&lsquo;<samp>pattern_type</samp>&rsquo;</dt>
<dd><p>Select the pattern type used to interpret the provided filename.
</p>
<p><var>pattern_type</var> accepts one of the following values.
</p><dl compact="compact">
<dt>&lsquo;<samp>sequence</samp>&rsquo;</dt>
<dd><p>Select a sequence pattern type, used to specify a sequence of files
indexed by sequential numbers.
</p>
<p>A sequence pattern may contain the string &quot;%d&quot; or &quot;%0<var>N</var>d&quot;, which
specifies the position of the characters representing a sequential
number in each filename matched by the pattern. If the form
&quot;%0<var>N</var>d&quot; is used, the string representing the number in each
filename is 0-padded and <var>N</var> is the total number of 0-padded
digits representing the number. The literal character &rsquo;%&rsquo; can be
specified in the pattern with the string &quot;%%&quot;.
</p>
<p>If the sequence pattern contains &quot;%d&quot; or &quot;%0<var>N</var>d&quot;, the first filename of
the file list specified by the pattern must contain a number
inclusively contained between <var>start_number</var> and
<var>start_number</var>+<var>start_number_range</var>-1, and all the following
numbers must be sequential.
</p>
<p>For example the pattern &quot;img-%03d.bmp&quot; will match a sequence of
filenames of the form &lsquo;<tt>img-001.bmp</tt>&rsquo;, &lsquo;<tt>img-002.bmp</tt>&rsquo;, ...,
&lsquo;<tt>img-010.bmp</tt>&rsquo;, etc.; the pattern &quot;i%%m%%g-%d.jpg&quot; will match a
sequence of filenames of the form &lsquo;<tt>i%m%g-1.jpg</tt>&rsquo;,
&lsquo;<tt>i%m%g-2.jpg</tt>&rsquo;, ..., &lsquo;<tt>i%m%g-10.jpg</tt>&rsquo;, etc.
</p>
<p>Note that the pattern does not necessarily have to contain &quot;%d&quot; or
&quot;%0<var>N</var>d&quot;; for example, to convert a single image file
&lsquo;<tt>img.jpeg</tt>&rsquo; you can use the command:
</p><div class="example">
<pre class="example">ffmpeg -i img.jpeg img.png
</pre></div>

</dd>
<dt>&lsquo;<samp>glob</samp>&rsquo;</dt>
<dd><p>Select a glob wildcard pattern type.
</p>
<p>The pattern is interpreted like a <code>glob()</code> pattern. This is only
selectable if libavformat was compiled with globbing support.
</p>
</dd>
<dt>&lsquo;<samp>glob_sequence <em>(deprecated, will be removed)</em></samp>&rsquo;</dt>
<dd><p>Select a mixed glob wildcard/sequence pattern.
</p>
<p>If your version of libavformat was compiled with globbing support, and
the provided pattern contains at least one glob meta character among
<code>%*?[]{}</code> that is preceded by an unescaped &quot;%&quot;, the pattern is
interpreted like a <code>glob()</code> pattern, otherwise it is interpreted
like a sequence pattern.
</p>
<p>All glob special characters <code>%*?[]{}</code> must be prefixed
with &quot;%&quot;. To escape a literal &quot;%&quot; you shall use &quot;%%&quot;.
</p>
<p>For example the pattern <code>foo-%*.jpeg</code> will match all the
filenames prefixed by &quot;foo-&quot; and terminating with &quot;.jpeg&quot;, and
<code>foo-%?%?%?.jpeg</code> will match all the filenames prefixed with
&quot;foo-&quot;, followed by a sequence of three characters, and terminating
with &quot;.jpeg&quot;.
</p>
<p>This pattern type is deprecated in favor of <var>glob</var> and
<var>sequence</var>.
</p></dd>
</dl>

<p>Default value is <var>glob_sequence</var>.
</p></dd>
<dt>&lsquo;<samp>pixel_format</samp>&rsquo;</dt>
<dd><p>Set the pixel format of the images to read. If not specified the pixel
format is guessed from the first image file in the sequence.
</p></dd>
<dt>&lsquo;<samp>start_number</samp>&rsquo;</dt>
<dd><p>Set the index of the file matched by the image file pattern to start
to read from. Default value is 0.
</p></dd>
<dt>&lsquo;<samp>start_number_range</samp>&rsquo;</dt>
<dd><p>Set the index interval range to check when looking for the first image
file in the sequence, starting from <var>start_number</var>. Default value
is 5.
</p></dd>
<dt>&lsquo;<samp>video_size</samp>&rsquo;</dt>
<dd><p>Set the video size of the images to read. If not specified the video
size is guessed from the first image file in the sequence.
</p></dd>
</dl>

<a name="Examples-1"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Examples-1">3.1.1 Examples</a></h3>

<ul>
<li>
Use <code>ffmpeg</code> for creating a video from the images in the file
sequence &lsquo;<tt>img-001.jpeg</tt>&rsquo;, &lsquo;<tt>img-002.jpeg</tt>&rsquo;, ..., assuming an
input frame rate of 10 frames per second:
<div class="example">
<pre class="example">ffmpeg -i 'img-%03d.jpeg' -r 10 out.mkv
</pre></div>

</li><li>
As above, but start by reading from a file with index 100 in the sequence:
<div class="example">
<pre class="example">ffmpeg -start_number 100 -i 'img-%03d.jpeg' -r 10 out.mkv
</pre></div>

</li><li>
Read images matching the &quot;*.png&quot; glob pattern, that is all the files
ending with the &quot;.png&quot; suffix:
<div class="example">
<pre class="example">ffmpeg -pattern_type glob -i &quot;*.png&quot; -r 10 out.mkv
</pre></div>
</li></ul>

<a name="applehttp"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-applehttp">3.2 applehttp</a></h2>

<p>Apple HTTP Live Streaming demuxer.
</p>
<p>This demuxer presents all AVStreams from all variant streams.
The id field is set to the bitrate variant index number. By setting
the discard flags on AVStreams (by pressing &rsquo;a&rsquo; or &rsquo;v&rsquo; in ffplay),
the caller can decide which variant streams to actually receive.
The total bitrate of the variant that the stream belongs to is
available in a metadata key named &quot;variant_bitrate&quot;.
</p>
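<p>For example, to read an HTTP Live Streaming playlist and copy the selected
streams to a local file (the URL below is only a placeholder):
</p><div class="example">
<pre class="example">ffmpeg -f applehttp -i http://example.com/stream/index.m3u8 -c copy out.ts
</pre></div>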
<a name="sbg"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-sbg">3.3 sbg</a></h2>

<p>SBaGen script demuxer.
</p>
<p>This demuxer reads the script language used by SBaGen
<a href="http://uazu.net/sbagen/">http://uazu.net/sbagen/</a> to generate binaural beats sessions. A SBG
script looks like that:
</p><div class="example">
<pre class="example">-SE
a: 300-2.5/3 440+4.5/0
b: 300-2.5/0 440+4.5/3
off: -
NOW      == a
+0:07:00 == b
+0:14:00 == a
+0:21:00 == b
+0:30:00    off
</pre></div>

<p>An SBG script can mix absolute and relative timestamps. If the script uses
either only absolute timestamps (including the script start time) or only
relative ones, then its layout is fixed, and the conversion is
straightforward. On the other hand, if the script mixes both kinds of
timestamps, then the <var>NOW</var> reference for relative timestamps will be
taken from the current time of day at the time the script is read, and the
script layout will be frozen according to that reference. That means that if
the script is directly played, the actual times will match the absolute
timestamps up to the sound controller&rsquo;s clock accuracy, but if the user
somehow pauses the playback or seeks, all times will be shifted accordingly.
</p>
<a name="concat"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-concat">3.4 concat</a></h2>

<p>Virtual concatenation script demuxer.
</p>
<p>This demuxer reads a list of files and other directives from a text file and
demuxes them one after the other, as if all their packets had been muxed
together.
</p>
<p>The timestamps in the files are adjusted so that the first file starts at 0
and each next file starts where the previous one finishes. Note that it is
done globally and may cause gaps if all streams do not have exactly the same
length.
</p>
<p>All files must have the same streams (same codecs, same time base, etc.).
</p>
<p>This script format cannot currently be probed; it must be specified explicitly.
</p>
<a name="Syntax"></a>
<h3 class="subsection"><a href="ffmpeg-formats.html#toc-Syntax">3.4.1 Syntax</a></h3>

<p>The script is a text file in extended-ASCII, with one directive per line.
Empty lines, leading spaces and lines starting with &rsquo;#&rsquo; are ignored. The
following directive is recognized:
</p>
<dl compact="compact">
<dt>&lsquo;<samp><code>file <var>path</var></code></samp>&rsquo;</dt>
<dd><p>Path to a file to read; special characters and spaces must be escaped with
backslash or single quotes.
</p>
</dd>
</dl>
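<p>For example, a list file concatenating two files could look like this
(&lsquo;<tt>mylist.txt</tt>&rsquo; and the file names are only placeholders):
</p><div class="example">
<pre class="example"># mylist.txt
file 'part1.mkv'
file 'part2.mkv'
</pre></div>

<p>Since this format cannot be probed, the demuxer must be selected explicitly:
</p><div class="example">
<pre class="example">ffmpeg -f concat -i mylist.txt -c copy out.mkv
</pre></div>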

<a name="tedcaptions"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-tedcaptions">3.5 tedcaptions</a></h2>

<p>JSON captions used for <a href="http://www.ted.com/">TED Talks</a>.
</p>
<p>TED does not provide links to the captions, but they can be guessed from the
page. The file &lsquo;<tt>tools/bookmarklets.html</tt>&rsquo; from the FFmpeg source tree
contains a bookmarklet to expose them.
</p>
<p>This demuxer accepts the following option:
</p><dl compact="compact">
<dt>&lsquo;<samp>start_time</samp>&rsquo;</dt>
<dd><p>Set the start time of the TED talk, in milliseconds. The default is 15000
(15s). It is used to sync the captions with the downloadable videos, because
they include a 15s intro.
</p></dd>
</dl>

<p>Example: convert the captions to a format most players understand:
</p><div class="example">
<pre class="example">ffmpeg -i http://www.ted.com/talks/subtitles/id/1/lang/en talk1-en.srt
</pre></div>

<a name="Muxers"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-Muxers">4 Muxers</a></h1>

<p>Muxers are configured elements in FFmpeg which allow writing
multimedia streams to a particular type of file.
</p>
<p>When you configure your FFmpeg build, all the supported muxers
are enabled by default. You can list all available muxers using the
configure option <code>--list-muxers</code>.
</p>
<p>You can disable all the muxers with the configure option
<code>--disable-muxers</code> and selectively enable / disable single muxers
with the options <code>--enable-muxer=<var>MUXER</var></code> /
<code>--disable-muxer=<var>MUXER</var></code>.
</p>
<p>The option <code>-formats</code> of the ff* tools will display the list of
enabled muxers.
</p>
<p>A description of some of the currently available muxers follows.
</p>
<p><a name="aiff"></a>
</p><a name="aiff-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-aiff-1">4.1 aiff</a></h2>

<p>Audio Interchange File Format muxer.
</p>
<p>It accepts the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>write_id3v2</samp>&rsquo;</dt>
<dd><p>Enable ID3v2 tags writing when set to 1. Default is 0 (disabled).
</p>
</dd>
<dt>&lsquo;<samp>id3v2_version</samp>&rsquo;</dt>
<dd><p>Select ID3v2 version to write. Currently only versions 3 and 4 (aka
ID3v2.3 and ID3v2.4) are supported. The default is version 4.
</p>
</dd>
</dl>
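<p>For example, to write an AIFF file with ID3v2.3 tags
(&lsquo;<tt>in.wav</tt>&rsquo; is only a placeholder name):
</p><div class="example">
<pre class="example">ffmpeg -i in.wav -write_id3v2 1 -id3v2_version 3 out.aiff
</pre></div>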

<p><a name="crc"></a>
</p><a name="crc-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-crc-1">4.2 crc</a></h2>

<p>CRC (Cyclic Redundancy Check) testing format.
</p>
<p>This muxer computes and prints the Adler-32 CRC of all the input audio
and video frames. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.
</p>
<p>The output of the muxer consists of a single line of the form:
CRC=0x<var>CRC</var>, where <var>CRC</var> is a hexadecimal number 0-padded to
8 digits containing the CRC for all the decoded input frames.
</p>
<p>For example to compute the CRC of the input, and store it in the file
&lsquo;<tt>out.crc</tt>&rsquo;:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f crc out.crc
</pre></div>

<p>You can print the CRC to stdout with the command:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f crc -
</pre></div>

<p>You can select the output format of each frame with <code>ffmpeg</code> by
specifying the audio and video codec and format. For example to
compute the CRC of the input audio converted to PCM unsigned 8-bit
and the input video converted to MPEG-2 video, use the command:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f crc -
</pre></div>

<p>See also the <a href="#framecrc">framecrc</a> muxer.
</p>
<p><a name="framecrc"></a>
</p><a name="framecrc-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-framecrc-1">4.3 framecrc</a></h2>

<p>Per-packet CRC (Cyclic Redundancy Check) testing format.
</p>
<p>This muxer computes and prints the Adler-32 CRC for each audio
and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
CRC.
</p>
<p>The output of the muxer consists of a line for each audio and video
packet of the form:
</p><div class="example">
<pre class="example"><var>stream_index</var>, <var>packet_dts</var>, <var>packet_pts</var>, <var>packet_duration</var>, <var>packet_size</var>, 0x<var>CRC</var>
</pre></div>

<p><var>CRC</var> is a hexadecimal number 0-padded to 8 digits containing the
CRC of the packet.
</p>
<p>For example to compute the CRC of the audio and video frames in
&lsquo;<tt>INPUT</tt>&rsquo;, converted to raw audio and video packets, and store it
in the file &lsquo;<tt>out.crc</tt>&rsquo;:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f framecrc out.crc
</pre></div>

<p>To print the information to stdout, use the command:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f framecrc -
</pre></div>

<p>With <code>ffmpeg</code>, you can select the output format to which the
audio and video frames are encoded before computing the CRC for each
packet by specifying the audio and video codec. For example, to
compute the CRC of each decoded input audio frame converted to PCM
unsigned 8-bit and of each decoded input video frame converted to
MPEG-2 video, use the command:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -c:a pcm_u8 -c:v mpeg2video -f framecrc -
</pre></div>

<p>See also the <a href="#crc">crc</a> muxer.
</p>
<p><a name="framemd5"></a>
</p><a name="framemd5-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-framemd5-1">4.4 framemd5</a></h2>

<p>Per-packet MD5 testing format.
</p>
<p>This muxer computes and prints the MD5 hash for each audio
and video packet. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
hash.
</p>
<p>The output of the muxer consists of a line for each audio and video
packet of the form:
</p><div class="example">
<pre class="example"><var>stream_index</var>, <var>packet_dts</var>, <var>packet_pts</var>, <var>packet_duration</var>, <var>packet_size</var>, <var>MD5</var>
</pre></div>

<p><var>MD5</var> is a hexadecimal number representing the computed MD5 hash
for the packet.
</p>
<p>For example to compute the MD5 of the audio and video frames in
&lsquo;<tt>INPUT</tt>&rsquo;, converted to raw audio and video packets, and store it
in the file &lsquo;<tt>out.md5</tt>&rsquo;:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f framemd5 out.md5
</pre></div>

<p>To print the information to stdout, use the command:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f framemd5 -
</pre></div>

<p>See also the <a href="#md5">md5</a> muxer.
</p>
<p><a name="hls"></a>
</p><a name="hls-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-hls-1">4.5 hls</a></h2>

<p>Apple HTTP Live Streaming muxer that segments MPEG-TS according to
the HTTP Live Streaming specification.
</p>
<p>It creates a playlist file and numbered segment files. The output
filename specifies the playlist filename; the segment filenames
receive the same basename as the playlist, a sequential number and
a .ts extension.
</p>
<div class="example">
<pre class="example">ffmpeg -i in.nut out.m3u8
</pre></div>

<dl compact="compact">
<dt>&lsquo;<samp>-hls_time <var>seconds</var></samp>&rsquo;</dt>
<dd><p>Set the segment length in seconds.
</p></dd>
<dt>&lsquo;<samp>-hls_list_size <var>size</var></samp>&rsquo;</dt>
<dd><p>Set the maximum number of playlist entries.
</p></dd>
<dt>&lsquo;<samp>-hls_wrap <var>wrap</var></samp>&rsquo;</dt>
<dd><p>Set the number after which index wraps.
</p></dd>
<dt>&lsquo;<samp>-start_number <var>number</var></samp>&rsquo;</dt>
<dd><p>Start the sequence from <var>number</var>.
</p></dd>
</dl>
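<p>For example, to produce 10-second segments and keep at most 6 entries in the
playlist, assuming the input streams are already encoded in codecs suitable
for MPEG-TS:
</p><div class="example">
<pre class="example">ffmpeg -i in.nut -c copy -hls_time 10 -hls_list_size 6 out.m3u8
</pre></div>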

<p><a name="ico"></a>
</p><a name="ico-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-ico-1">4.6 ico</a></h2>

<p>ICO file muxer.
</p>
<p>Microsoft&rsquo;s icon file format (ICO) has some strict limitations that should be noted:
</p>
<ul>
<li>
Size cannot exceed 256 pixels in any dimension

</li><li>
Only BMP and PNG images can be stored

</li><li>
If a BMP image is used, it must be one of the following pixel formats:
<div class="example">
<pre class="example">BMP Bit Depth      FFmpeg Pixel Format
1bit               pal8
4bit               pal8
8bit               pal8
16bit              rgb555le
24bit              bgr24
32bit              bgra
</pre></div>

</li><li>
If a BMP image is used, it must use the BITMAPINFOHEADER DIB header

</li><li>
If a PNG image is used, it must use the rgba pixel format
</li></ul>
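<p>For example, assuming &lsquo;<tt>in.png</tt>&rsquo; (a placeholder name) already
satisfies the size constraint, an icon can be written with:
</p><div class="example">
<pre class="example">ffmpeg -i in.png out.ico
</pre></div>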

<p><a name="image2"></a>
</p><a name="image2-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-image2-1">4.7 image2</a></h2>

<p>Image file muxer.
</p>
<p>The image file muxer writes video frames to image files.
</p>
<p>The output filenames are specified by a pattern, which can be used to
produce sequentially numbered series of files.
The pattern may contain the string &quot;%d&quot; or &quot;%0<var>N</var>d&quot;; this string
specifies the position of the characters representing a sequential number in
the filenames. If the form &quot;%0<var>N</var>d&quot; is used, the string
representing the number in each filename is 0-padded to <var>N</var>
digits. The literal character &rsquo;%&rsquo; can be specified in the pattern with
the string &quot;%%&quot;.
</p>
<p>If the pattern contains &quot;%d&quot; or &quot;%0<var>N</var>d&quot;, the first filename of
the file list specified will contain the number 1, and all the following
numbers will be sequential.
</p>
<p>The pattern may contain a suffix which is used to automatically
determine the format of the image files to write.
</p>
<p>For example the pattern &quot;img-%03d.bmp&quot; will specify a sequence of
filenames of the form &lsquo;<tt>img-001.bmp</tt>&rsquo;, &lsquo;<tt>img-002.bmp</tt>&rsquo;, ...,
&lsquo;<tt>img-010.bmp</tt>&rsquo;, etc.
The pattern &quot;img%%-%d.jpg&quot; will specify a sequence of filenames of the
form &lsquo;<tt>img%-1.jpg</tt>&rsquo;, &lsquo;<tt>img%-2.jpg</tt>&rsquo;, ..., &lsquo;<tt>img%-10.jpg</tt>&rsquo;,
etc.
</p>
<p>The following example shows how to use <code>ffmpeg</code> for creating a
sequence of files &lsquo;<tt>img-001.jpeg</tt>&rsquo;, &lsquo;<tt>img-002.jpeg</tt>&rsquo;, ...,
taking one image every second from the input video:
</p><div class="example">
<pre class="example">ffmpeg -i in.avi -vsync 1 -r 1 -f image2 'img-%03d.jpeg'
</pre></div>

<p>Note that with <code>ffmpeg</code>, if the format is not specified with the
<code>-f</code> option and the output filename specifies an image file
format, the image2 muxer is automatically selected, so the previous
command can be written as:
</p><div class="example">
<pre class="example">ffmpeg -i in.avi -vsync 1 -r 1 'img-%03d.jpeg'
</pre></div>

<p>Note also that the pattern does not necessarily have to contain &quot;%d&quot; or
&quot;%0<var>N</var>d&quot;; for example, to create a single image file
&lsquo;<tt>img.jpeg</tt>&rsquo; from the input video you can use the command:
</p><div class="example">
<pre class="example">ffmpeg -i in.avi -f image2 -frames:v 1 img.jpeg
</pre></div>

<dl compact="compact">
<dt>&lsquo;<samp>-start_number <var>number</var></samp>&rsquo;</dt>
<dd><p>Start the sequence from <var>number</var>.
</p></dd>
</dl>

<p>The image muxer supports the .Y.U.V image file format. This format is
special in that each image frame consists of three files, one for
each of the YUV420P components. To read or write this image file format,
specify the name of the &rsquo;.Y&rsquo; file. The muxer will automatically open the
&rsquo;.U&rsquo; and &rsquo;.V&rsquo; files as required.
</p>
<p><a name="md5"></a>
</p><a name="md5-1"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-md5-1">4.8 md5</a></h2>

<p>MD5 testing format.
</p>
<p>This muxer computes and prints the MD5 hash of all the input audio
and video frames. By default audio frames are converted to signed
16-bit raw audio and video frames to raw video before computing the
hash.
</p>
<p>The output of the muxer consists of a single line of the form:
MD5=<var>MD5</var>, where <var>MD5</var> is a hexadecimal number representing
the computed MD5 hash.
</p>
<p>For example to compute the MD5 hash of the input converted to raw
audio and video, and store it in the file &lsquo;<tt>out.md5</tt>&rsquo;:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f md5 out.md5
</pre></div>

<p>You can print the MD5 to stdout with the command:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -f md5 -
</pre></div>

<p>See also the <a href="#framemd5">framemd5</a> muxer.
</p>
<a name="MOV_002fMP4_002fISMV"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-MOV_002fMP4_002fISMV">4.9 MOV/MP4/ISMV</a></h2>

<p>The mov/mp4/ismv muxer supports fragmentation. Normally, a MOV/MP4
file has all the metadata about all packets stored in one location
(written at the end of the file, it can be moved to the start for
better playback by adding <var>faststart</var> to the <var>movflags</var>, or
using the <code>qt-faststart</code> tool). A fragmented
file consists of a number of fragments, where packets and metadata
about these packets are stored together. Writing a fragmented
file has the advantage that the file is decodable even if the
writing is interrupted (while a normal MOV/MP4 is undecodable if
it is not properly finished), and it requires less memory when writing
very long files (since writing normal MOV/MP4 files stores info about
every single packet in memory until the file is closed). The downside
is that it is less compatible with other applications.
</p>
<p>Fragmentation is enabled by setting one of the AVOptions that define
how to cut the file into fragments:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>-moov_size <var>bytes</var></samp>&rsquo;</dt>
<dd><p>Reserves space for the moov atom at the beginning of the file instead of placing the
moov atom at the end. If the space reserved is insufficient, muxing will fail.
</p></dd>
<dt>&lsquo;<samp>-movflags frag_keyframe</samp>&rsquo;</dt>
<dd><p>Start a new fragment at each video keyframe.
</p></dd>
<dt>&lsquo;<samp>-frag_duration <var>duration</var></samp>&rsquo;</dt>
<dd><p>Create fragments that are <var>duration</var> microseconds long.
</p></dd>
<dt>&lsquo;<samp>-frag_size <var>size</var></samp>&rsquo;</dt>
<dd><p>Create fragments that contain up to <var>size</var> bytes of payload data.
</p></dd>
<dt>&lsquo;<samp>-movflags frag_custom</samp>&rsquo;</dt>
<dd><p>Allow the caller to manually choose when to cut fragments, by
calling <code>av_write_frame(ctx, NULL)</code> to write a fragment with
the packets written so far. (This is only useful with other
applications integrating libavformat, not from <code>ffmpeg</code>.)
</p></dd>
<dt>&lsquo;<samp>-min_frag_duration <var>duration</var></samp>&rsquo;</dt>
<dd><p>Don&rsquo;t create fragments that are shorter than <var>duration</var> microseconds long.
</p></dd>
</dl>

<p>If more than one condition is specified, fragments are cut when
one of the specified conditions is fulfilled. The exception to this is
<code>-min_frag_duration</code>, which has to be fulfilled for any of the other
conditions to apply.
</p>
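<p>For example, to write a fragmented MP4 file, cutting a new fragment at every
video keyframe (&lsquo;<tt>in.mp4</tt>&rsquo; is only a placeholder name):
</p><div class="example">
<pre class="example">ffmpeg -i in.mp4 -c copy -movflags frag_keyframe out.mp4
</pre></div>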
<p>Additionally, the way the output file is written can be adjusted
through a few other options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>-movflags empty_moov</samp>&rsquo;</dt>
<dd><p>Write an initial moov atom directly at the start of the file, without
describing any samples in it. Generally, an mdat/moov pair is written
at the start of the file, as a normal MOV/MP4 file, containing only
a short portion of the file. With this option set, there is no initial
mdat atom, and the moov atom only describes the tracks but has
a zero duration.
</p>
<p>Files written with this option set do not work in QuickTime.
This option is implicitly set when writing ismv (Smooth Streaming) files.
</p></dd>
<dt>&lsquo;<samp>-movflags separate_moof</samp>&rsquo;</dt>
<dd><p>Write a separate moof (movie fragment) atom for each track. Normally,
packets for all tracks are written in a moof atom (which is slightly
more efficient), but with this option set, the muxer writes one moof/mdat
pair for each track, making it easier to separate tracks.
</p>
<p>This option is implicitly set when writing ismv (Smooth Streaming) files.
</p></dd>
<dt>&lsquo;<samp>-movflags faststart</samp>&rsquo;</dt>
<dd><p>Run a second pass moving the moov atom to the beginning of the file. This
operation can take a while, and will not work in various situations such
as fragmented output, thus it is not enabled by default.
</p></dd>
</dl>
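<p>For example, to remux an existing file so that the moov atom is moved to the
beginning for progressive playback (file names are only placeholders):
</p><div class="example">
<pre class="example">ffmpeg -i in.mp4 -c copy -movflags faststart out.mp4
</pre></div>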

<p>Smooth Streaming content can be pushed in real time to a publishing
point on IIS with this muxer. Example:
</p><div class="example">
<pre class="example">ffmpeg -re <var>&lt;normal input/transcoding options&gt;</var> -movflags isml+frag_keyframe -f ismv http://server/publishingpoint.isml/Streams(Encoder1)
</pre></div>

<a name="mpegts"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-mpegts">4.10 mpegts</a></h2>

<p>MPEG transport stream muxer.
</p>
<p>This muxer implements ISO 13818-1 and part of ETSI EN 300 468.
</p>
<p>The muxer options are:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>-mpegts_original_network_id <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the original_network_id (default 0x0001). This is the unique identifier
of a network in DVB. Its main use is in the unique identification of a
service through the path Original_Network_ID, Transport_Stream_ID.
</p></dd>
<dt>&lsquo;<samp>-mpegts_transport_stream_id <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the transport_stream_id (default 0x0001). This identifies a
transponder in DVB.
</p></dd>
<dt>&lsquo;<samp>-mpegts_service_id <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the service_id (default 0x0001), also known as the program in DVB.
</p></dd>
<dt>&lsquo;<samp>-mpegts_pmt_start_pid <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the first PID for PMT (default 0x1000, max 0x1f00).
</p></dd>
<dt>&lsquo;<samp>-mpegts_start_pid <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the first PID for data packets (default 0x0100, max 0x0f00).
</p></dd>
</dl>

<p>The recognized metadata settings in mpegts muxer are <code>service_provider</code>
and <code>service_name</code>. If they are not set the default for
<code>service_provider</code> is &quot;FFmpeg&quot; and the default for
<code>service_name</code> is &quot;Service01&quot;.
</p>
<div class="example">
<pre class="example">ffmpeg -i file.mpg -c copy \
     -mpegts_original_network_id 0x1122 \
     -mpegts_transport_stream_id 0x3344 \
     -mpegts_service_id 0x5566 \
     -mpegts_pmt_start_pid 0x1500 \
     -mpegts_start_pid 0x150 \
     -metadata service_provider=&quot;Some provider&quot; \
     -metadata service_name=&quot;Some Channel&quot; \
     -y out.ts
</pre></div>

<a name="null"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-null">4.11 null</a></h2>

<p>Null muxer.
</p>
<p>This muxer does not generate any output file; it is mainly useful for
testing or benchmarking purposes.
</p>
<p>For example to benchmark decoding with <code>ffmpeg</code> you can use the
command:
</p><div class="example">
<pre class="example">ffmpeg -benchmark -i INPUT -f null out.null
</pre></div>

<p>Note that the above command does not read or write the &lsquo;<tt>out.null</tt>&rsquo;
file, but specifying the output file is required by the <code>ffmpeg</code>
syntax.
</p>
<p>Alternatively you can write the command as:
</p><div class="example">
<pre class="example">ffmpeg -benchmark -i INPUT -f null -
</pre></div>

<a name="matroska"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-matroska">4.12 matroska</a></h2>

<p>Matroska container muxer.
</p>
<p>This muxer implements the matroska and webm container specs.
</p>
<p>The recognized metadata settings in this muxer are:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>title=<var>title name</var></samp>&rsquo;</dt>
<dd><p>Name provided to a single track
</p></dd>
</dl>

<dl compact="compact">
<dt>&lsquo;<samp>language=<var>language name</var></samp>&rsquo;</dt>
<dd><p>Specifies the language of the track in the Matroska languages form
</p></dd>
</dl>

<dl compact="compact">
<dt>&lsquo;<samp>stereo_mode=<var>mode</var></samp>&rsquo;</dt>
<dd><p>Stereo 3D video layout of two views in a single video track
</p><dl compact="compact">
<dt>&lsquo;<samp>mono</samp>&rsquo;</dt>
<dd><p>Video is not stereo
</p></dd>
<dt>&lsquo;<samp>left_right</samp>&rsquo;</dt>
<dd><p>Both views are arranged side by side, Left-eye view is on the left
</p></dd>
<dt>&lsquo;<samp>bottom_top</samp>&rsquo;</dt>
<dd><p>Both views are arranged in top-bottom orientation, Left-eye view is at bottom
</p></dd>
<dt>&lsquo;<samp>top_bottom</samp>&rsquo;</dt>
<dd><p>Both views are arranged in top-bottom orientation, Left-eye view is on top
</p></dd>
<dt>&lsquo;<samp>checkerboard_rl</samp>&rsquo;</dt>
<dd><p>Each view is arranged in a checkerboard interleaved pattern, Left-eye view being first
</p></dd>
<dt>&lsquo;<samp>checkerboard_lr</samp>&rsquo;</dt>
<dd><p>Each view is arranged in a checkerboard interleaved pattern, Right-eye view being first
</p></dd>
<dt>&lsquo;<samp>row_interleaved_rl</samp>&rsquo;</dt>
<dd><p>Each view is constituted by a row based interleaving, Right-eye view is first row
</p></dd>
<dt>&lsquo;<samp>row_interleaved_lr</samp>&rsquo;</dt>
<dd><p>Each view is constituted by a row based interleaving, Left-eye view is first row
</p></dd>
<dt>&lsquo;<samp>col_interleaved_rl</samp>&rsquo;</dt>
<dd><p>Both views are arranged in a column based interleaving manner, Right-eye view is first column
</p></dd>
<dt>&lsquo;<samp>col_interleaved_lr</samp>&rsquo;</dt>
<dd><p>Both views are arranged in a column based interleaving manner, Left-eye view is first column
</p></dd>
<dt>&lsquo;<samp>anaglyph_cyan_red</samp>&rsquo;</dt>
<dd><p>All frames are in anaglyph format viewable through red-cyan filters
</p></dd>
<dt>&lsquo;<samp>right_left</samp>&rsquo;</dt>
<dd><p>Both views are arranged side by side, Right-eye view is on the left
</p></dd>
<dt>&lsquo;<samp>anaglyph_green_magenta</samp>&rsquo;</dt>
<dd><p>All frames are in anaglyph format viewable through green-magenta filters
</p></dd>
<dt>&lsquo;<samp>block_lr</samp>&rsquo;</dt>
<dd><p>Both eyes laced in one Block, Left-eye view is first
</p></dd>
<dt>&lsquo;<samp>block_rl</samp>&rsquo;</dt>
<dd><p>Both eyes laced in one Block, Right-eye view is first
</p></dd>
</dl>
</dd>
</dl>

<p>For example a 3D WebM clip can be created using the following command line:
</p><div class="example">
<pre class="example">ffmpeg -i sample_left_right_clip.mpg -an -c:v libvpx -metadata stereo_mode=left_right -y stereo_clip.webm
</pre></div>

<a name="segment_002c-stream_005fsegment_002c-ssegment"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-segment_002c-stream_005fsegment_002c-ssegment">4.13 segment, stream_segment, ssegment</a></h2>

<p>Basic stream segmenter.
</p>
<p>The segmenter muxer outputs streams to a number of separate files of nearly
fixed duration. Output filename pattern can be set in a fashion similar to
<a href="#image2">image2</a>.
</p>
<p><code>stream_segment</code> is a variant of the muxer used to write to
streaming output formats, i.e. which do not require global headers,
and is recommended for outputting e.g. to MPEG transport stream segments.
<code>ssegment</code> is a shorter alias for <code>stream_segment</code>.
</p>
<p>Every segment starts with a keyframe of the selected reference stream,
which is set through the &lsquo;<samp>reference_stream</samp>&rsquo; option.
</p>
<p>Note that if you want accurate splitting for a video file, you need to
make the input key frames correspond to the exact splitting times
expected by the segmenter, or the segment muxer will start the new
segment with the key frame found next after the specified start
time.
</p>
<p>The segment muxer works best with a single constant frame rate video.
</p>
<p>Optionally it can generate a list of the created segments, by setting
the option <var>segment_list</var>. The list type is specified by the
<var>segment_list_type</var> option.
</p>
<p>The segment muxer supports the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>reference_stream <var>specifier</var></samp>&rsquo;</dt>
<dd><p>Set the reference stream, as specified by the string <var>specifier</var>.
If <var>specifier</var> is set to <code>auto</code>, the reference is chosen
automatically. Otherwise it must be a stream specifier (see the &ldquo;Stream
specifiers&rdquo; chapter in the ffmpeg manual) which specifies the
reference stream. The default value is &ldquo;auto&rdquo;.
</p>
</dd>
<dt>&lsquo;<samp>segment_format <var>format</var></samp>&rsquo;</dt>
<dd><p>Override the inner container format; by default it is guessed from the filename
extension.
</p></dd>
<dt>&lsquo;<samp>segment_list <var>name</var></samp>&rsquo;</dt>
<dd><p>Also generate a listfile named <var>name</var>. If not specified, no
listfile is generated.
</p></dd>
<dt>&lsquo;<samp>segment_list_flags <var>flags</var></samp>&rsquo;</dt>
<dd><p>Set flags affecting the segment list generation.
</p>
<p>It currently supports the following flags:
</p><dl compact="compact">
<dt><var>cache</var></dt>
<dd><p>Allow caching (only affects M3U8 list files).
</p>
</dd>
<dt><var>live</var></dt>
<dd><p>Allow live-friendly file generation.
</p>
<p>This currently only affects M3U8 lists. In particular, write a fake
EXT-X-TARGETDURATION duration field at the top of the file, based on
the specified <var>segment_time</var>.
</p></dd>
</dl>

<p>Default value is <code>cache</code>.
</p>
</dd>
<dt>&lsquo;<samp>segment_list_size <var>size</var></samp>&rsquo;</dt>
<dd><p>Overwrite the listfile once it reaches <var>size</var> entries. If 0
the listfile is never overwritten. Default value is 0.
</p></dd>
<dt>&lsquo;<samp>segment_list_type <var>type</var></samp>&rsquo;</dt>
<dd><p>Specify the format for the segment list file.
</p>
<p>The following values are recognized:
</p><dl compact="compact">
<dt>&lsquo;<samp>flat</samp>&rsquo;</dt>
<dd><p>Generate a flat list for the created segments, one segment per line.
</p>
</dd>
<dt>&lsquo;<samp>csv, ext</samp>&rsquo;</dt>
<dd><p>Generate a list for the created segments, one segment per line,
each line matching the format (comma-separated values):
</p><div class="example">
<pre class="example"><var>segment_filename</var>,<var>segment_start_time</var>,<var>segment_end_time</var>
</pre></div>

<p><var>segment_filename</var> is the name of the output file generated by the
muxer according to the provided pattern. CSV escaping (according to
RFC4180) is applied if required.
</p>
<p><var>segment_start_time</var> and <var>segment_end_time</var> specify
the segment start and end time expressed in seconds.
</p>
<p>A list file with the suffix <code>&quot;.csv&quot;</code> or <code>&quot;.ext&quot;</code> will
auto-select this format.
</p>
<p><code>ext</code> is deprecated in favor of <code>csv</code>.
</p>
</dd>
<dt>&lsquo;<samp>m3u8</samp>&rsquo;</dt>
<dd><p>Generate an extended M3U8 file, version 4, compliant with
<a href="http://tools.ietf.org/id/draft-pantos-http-live-streaming-08.txt">http://tools.ietf.org/id/draft-pantos-http-live-streaming-08.txt</a>.
</p>
<p>A list file with the suffix <code>&quot;.m3u8&quot;</code> will auto-select this format.
</p></dd>
</dl>

<p>If not specified the type is guessed from the list file name suffix.
</p></dd>
<dt>&lsquo;<samp>segment_time <var>time</var></samp>&rsquo;</dt>
<dd><p>Set segment duration to <var>time</var>. Default value is &quot;2&quot;.
</p></dd>
<dt>&lsquo;<samp>segment_time_delta <var>delta</var></samp>&rsquo;</dt>
<dd><p>Specify the accuracy time when selecting the start time for a
segment. Default value is &quot;0&quot;.
</p>
<p>When delta is specified a key-frame will start a new segment if its
PTS satisfies the relation:
</p><div class="example">
<pre class="example">PTS &gt;= start_time - time_delta
</pre></div>

<p>This option is useful when splitting video content, which is always
split at GOP boundaries, in case a key frame is found just before the
specified split time.
</p>
<p>In particular, it may be used in combination with the &lsquo;<tt>ffmpeg</tt>&rsquo; option
<var>force_key_frames</var>. The key frame times specified by
<var>force_key_frames</var> may not be set accurately because of rounding
issues, with the consequence that a key frame time may end up just
before the specified time. For constant frame rate videos a value of
1/(2*<var>frame_rate</var>) should address the worst case mismatch between
the specified time and the time set by <var>force_key_frames</var>.
</p>
</dd>
<dt>&lsquo;<samp>segment_times <var>times</var></samp>&rsquo;</dt>
<dd><p>Specify a list of split points. <var>times</var> contains a list of comma
separated duration specifications, in increasing order.
</p>
</dd>
<dt>&lsquo;<samp>segment_frames <var>frames</var></samp>&rsquo;</dt>
<dd><p>Specify a list of split video frame numbers. <var>frames</var> contains a
list of comma separated integer numbers, in increasing order.
</p>
<p>This option specifies to start a new segment whenever a reference
stream key frame is found and the sequential number (starting from 0)
of the frame is greater or equal to the next value in the list.
</p>
</dd>
<dt>&lsquo;<samp>segment_wrap <var>limit</var></samp>&rsquo;</dt>
<dd><p>Wrap around segment index once it reaches <var>limit</var>.
</p>
</dd>
<dt>&lsquo;<samp>segment_start_number <var>number</var></samp>&rsquo;</dt>
<dd><p>Set the sequence number of the first segment. Defaults to <code>0</code>.
</p>
</dd>
<dt>&lsquo;<samp>reset_timestamps <var>1|0</var></samp>&rsquo;</dt>
<dd><p>Reset timestamps at the beginning of each segment, so that each segment
will start with near-zero timestamps. It is meant to ease the playback
of the generated segments. May not work with some combinations of
muxers/codecs. It is set to <code>0</code> by default.
</p></dd>
</dl>

<a name="Examples"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-Examples">4.14 Examples</a></h2>

<ul>
<li>
To remux the content of file &lsquo;<tt>in.mkv</tt>&rsquo; to a list of segments
&lsquo;<tt>out-000.nut</tt>&rsquo;, &lsquo;<tt>out-001.nut</tt>&rsquo;, etc., and write the list of
generated segments to &lsquo;<tt>out.list</tt>&rsquo;:
<div class="example">
<pre class="example">ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.list out%03d.nut
</pre></div>

</li><li>
As in the example above, but segment the input file according to the split
points specified by the <var>segment_times</var> option:
<div class="example">
<pre class="example">ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 out%03d.nut
</pre></div>

</li><li>
As in the example above, but use the <code>ffmpeg</code> <var>force_key_frames</var>
option to force key frames in the input at the specified location, together
with the segment option <var>segment_time_delta</var> to account for
possible rounding when setting key frame times.
<div class="example">
<pre class="example">ffmpeg -i in.mkv -force_key_frames 1,2,3,5,8,13,21 -codec:v mpeg4 -codec:a pcm_s16le -map 0 \
-f segment -segment_list out.csv -segment_times 1,2,3,5,8,13,21 -segment_time_delta 0.05 out%03d.nut
</pre></div>
<p>In order to force key frames on the input file, transcoding is
required.
</p>
</li><li>
Segment the input file by splitting the input file according to the
frame numbers sequence specified with the <var>segment_frames</var> option:
<div class="example">
<pre class="example">ffmpeg -i in.mkv -codec copy -map 0 -f segment -segment_list out.csv -segment_frames 100,200,300,500,800 out%03d.nut
</pre></div>

</li><li>
To convert &lsquo;<tt>in.mkv</tt>&rsquo; to TS segments using the <code>libx264</code>
and <code>libfaac</code> encoders:
<div class="example">
<pre class="example">ffmpeg -i in.mkv -map 0 -codec:v libx264 -codec:a libfaac -f ssegment -segment_list out.list out%03d.ts
</pre></div>

</li><li>
Segment the input file, and create an M3U8 live playlist (can be used
as live HLS source):
<div class="example">
<pre class="example">ffmpeg -re -i in.mkv -codec copy -map 0 -f segment -segment_list playlist.m3u8 \
-segment_list_flags +live -segment_time 10 out%03d.mkv
</pre></div>
</li></ul>

<a name="mp3"></a>
<h2 class="section"><a href="ffmpeg-formats.html#toc-mp3">4.15 mp3</a></h2>

<p>The MP3 muxer writes a raw MP3 stream with an ID3v2 header at the beginning and
optionally an ID3v1 tag at the end. ID3v2.3 and ID3v2.4 are supported, the
<code>id3v2_version</code> option controls which one is used. The legacy ID3v1 tag is
not written by default, but may be enabled with the <code>write_id3v1</code> option.
</p>
<p>For seekable output the muxer also writes a Xing frame at the beginning, which
contains the number of frames in the file. It is useful for computing duration
of VBR files.
</p>
<p>The muxer supports writing ID3v2 attached pictures (APIC frames). The pictures
are supplied to the muxer in form of a video stream with a single packet. There
can be any number of those streams, each will correspond to a single APIC frame.
The stream metadata tags <var>title</var> and <var>comment</var> map to APIC
<var>description</var> and <var>picture type</var> respectively. See
<a href="http://id3.org/id3v2.4.0-frames">http://id3.org/id3v2.4.0-frames</a> for allowed picture types.
</p>
<p>Note that the APIC frames must be written at the beginning, so the muxer will
buffer the audio frames until it gets all the pictures. It is therefore advised
to provide the pictures as soon as possible to avoid excessive buffering.
</p>
<p>Examples:
</p>
<p>Write an mp3 with an ID3v2.3 header and an ID3v1 footer:
</p><div class="example">
<pre class="example">ffmpeg -i INPUT -id3v2_version 3 -write_id3v1 1 out.mp3
</pre></div>

<p>To attach a picture to an mp3 file select both the audio and the picture stream
with <code>map</code>:
</p><div class="example">
<pre class="example">ffmpeg -i input.mp3 -i cover.png -c copy -map 0 -map 1
-metadata:s:v title=&quot;Album cover&quot; -metadata:s:v comment=&quot;Cover (Front)&quot; out.mp3
</pre></div>

<a name="Metadata"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-Metadata">5 Metadata</a></h1>

<p>FFmpeg is able to dump metadata from media files into a simple UTF-8-encoded
INI-like text file and then load it back using the metadata muxer/demuxer.
</p>
<p>The file format is as follows:
</p><ol>
<li>
A file consists of a header and a number of metadata tags divided into sections,
each on its own line.

</li><li>
The header is a &rsquo;;FFMETADATA&rsquo; string, followed by a version number (currently 1).

</li><li>
Metadata tags are of the form &rsquo;key=value&rsquo;

</li><li>
The header is immediately followed by global metadata.

</li><li>
After global metadata there may be sections with per-stream/per-chapter
metadata.

</li><li>
A section starts with the section name in uppercase (i.e. STREAM or CHAPTER) in
brackets (&rsquo;[&rsquo;, &rsquo;]&rsquo;) and ends with next section or end of file.

</li><li>
At the beginning of a chapter section there may be an optional timebase to be
used for start/end values. It must be in the form &rsquo;TIMEBASE=num/den&rsquo;, where num and
den are integers. If the timebase is missing, then start/end times are assumed to
be in milliseconds.
Next, a chapter section must contain chapter start and end times in the form
&rsquo;START=num&rsquo;, &rsquo;END=num&rsquo;, where num is a positive integer.

</li><li>
Empty lines and lines starting with &rsquo;;&rsquo; or &rsquo;#&rsquo; are ignored.

</li><li>
Metadata keys or values containing special characters (&rsquo;=&rsquo;, &rsquo;;&rsquo;, &rsquo;#&rsquo;, &rsquo;\&rsquo; and a
newline) must be escaped with a backslash &rsquo;\&rsquo;.

</li><li>
Note that whitespace in metadata (e.g. foo = bar) is considered to be a part of
the tag (in the example above the key is &rsquo;foo &rsquo; and the value is &rsquo; bar&rsquo;).
</li></ol>

<p>An ffmetadata file might look like this:
</p><div class="example">
<pre class="example">;FFMETADATA1
title=bike\\shed
;this is a comment
artist=FFmpeg troll team

[CHAPTER]
TIMEBASE=1/1000
START=0
#chapter ends at 0:01:00
END=60000
title=chapter \#1
[STREAM]
title=multi\
line
</pre></div>
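<p>Such a file can be extracted from an input and applied to an output with
commands along these lines (file names are only placeholders):
</p><div class="example">
<pre class="example">ffmpeg -i in.mkv -f ffmetadata metadata.txt
ffmpeg -i in.mkv -i metadata.txt -map_metadata 1 -c copy out.mkv
</pre></div>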

<a name="See-Also"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-See-Also">6 See Also</a></h1>

<p><a href="ffmpeg.html">ffmpeg</a>, <a href="ffplay.html">ffplay</a>, <a href="ffprobe.html">ffprobe</a>, <a href="ffserver.html">ffserver</a>,
<a href="libavformat.html">libavformat</a>
</p>

<a name="Authors"></a>
<h1 class="chapter"><a href="ffmpeg-formats.html#toc-Authors">7 Authors</a></h1>

<p>The FFmpeg developers.
</p>
<p>For details about the authorship, see the Git history of the project
(git://source.ffmpeg.org/ffmpeg), e.g. by typing the command
<code>git log</code> in the FFmpeg source directory, or browsing the
online repository at <a href="http://source.ffmpeg.org">http://source.ffmpeg.org</a>.
</p>
<p>Maintainers for the specific components are listed in the file
&lsquo;<tt>MAINTAINERS</tt>&rsquo; in the source code tree.
</p>

<footer class="footer pagination-right">
<span class="label label-info">This document was generated on <i>February 9, 2014</i> using <a href="http://www.nongnu.org/texi2html/"><i>texi2html 5.0</i></a>.</span></footer></div>