<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <title>FFmpeg documentation</title>
    <link rel="stylesheet" href="bootstrap.min.css" />
    <link rel="stylesheet" href="style.min.css" />

<meta name="description" content="FFmpeg Protocols Documentation: ">
<meta name="keywords" content="FFmpeg documentation : FFmpeg Protocols ">
<meta name="Generator" content="texi2html 5.0">
<!-- Created on January 15, 2020 by texi2html 5.0 -->
<!--
texi2html was written by: 
            Lionel Cons <Lionel.Cons@cern.ch> (original author)
            Karl Berry  <karl@freefriends.org>
            Olaf Bachmann <obachman@mathematik.uni-kl.de>
            and many others.
Maintained by: Many creative people.
Send bugs and suggestions to <texi2html-bug@nongnu.org>

-->
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  </head>
  <body>
    <div class="container">

<h1 class="titlefont">FFmpeg Protocols Documentation</h1>
<hr>
<a name="SEC_Top"></a>

<a name="SEC_Contents"></a>
<h1>Table of Contents</h1>

<div class="contents">

<ul class="no-bullet">
  <li><a name="toc-Description" href="#Description">1 Description</a></li>
  <li><a name="toc-Protocol-Options" href="#Protocol-Options">2 Protocol Options</a></li>
  <li><a name="toc-Protocols" href="#Protocols">3 Protocols</a>
  <ul class="no-bullet">
    <li><a name="toc-async" href="#async">3.1 async</a></li>
    <li><a name="toc-bluray" href="#bluray">3.2 bluray</a></li>
    <li><a name="toc-cache" href="#cache">3.3 cache</a></li>
    <li><a name="toc-concat" href="#concat">3.4 concat</a></li>
    <li><a name="toc-crypto" href="#crypto">3.5 crypto</a></li>
    <li><a name="toc-data" href="#data">3.6 data</a></li>
    <li><a name="toc-file" href="#file">3.7 file</a></li>
    <li><a name="toc-ftp" href="#ftp">3.8 ftp</a></li>
    <li><a name="toc-gopher" href="#gopher">3.9 gopher</a></li>
    <li><a name="toc-hls" href="#hls">3.10 hls</a></li>
    <li><a name="toc-http" href="#http">3.11 http</a>
    <ul class="no-bullet">
      <li><a name="toc-HTTP-Cookies" href="#HTTP-Cookies">3.11.1 HTTP Cookies</a></li>
    </ul></li>
    <li><a name="toc-Icecast" href="#Icecast">3.12 Icecast</a></li>
    <li><a name="toc-mmst" href="#mmst">3.13 mmst</a></li>
    <li><a name="toc-mmsh" href="#mmsh">3.14 mmsh</a></li>
    <li><a name="toc-md5" href="#md5">3.15 md5</a></li>
    <li><a name="toc-pipe" href="#pipe">3.16 pipe</a></li>
    <li><a name="toc-prompeg" href="#prompeg">3.17 prompeg</a></li>
    <li><a name="toc-rtmp" href="#rtmp">3.18 rtmp</a></li>
    <li><a name="toc-rtmpe" href="#rtmpe">3.19 rtmpe</a></li>
    <li><a name="toc-rtmps" href="#rtmps">3.20 rtmps</a></li>
    <li><a name="toc-rtmpt" href="#rtmpt">3.21 rtmpt</a></li>
    <li><a name="toc-rtmpte" href="#rtmpte">3.22 rtmpte</a></li>
    <li><a name="toc-rtmpts" href="#rtmpts">3.23 rtmpts</a></li>
    <li><a name="toc-libsmbclient" href="#libsmbclient">3.24 libsmbclient</a></li>
    <li><a name="toc-libssh" href="#libssh">3.25 libssh</a></li>
    <li><a name="toc-librtmp-rtmp_002c-rtmpe_002c-rtmps_002c-rtmpt_002c-rtmpte" href="#librtmp-rtmp_002c-rtmpe_002c-rtmps_002c-rtmpt_002c-rtmpte">3.26 librtmp rtmp, rtmpe, rtmps, rtmpt, rtmpte</a></li>
    <li><a name="toc-rtp" href="#rtp">3.27 rtp</a></li>
    <li><a name="toc-rtsp" href="#rtsp">3.28 rtsp</a>
    <ul class="no-bullet">
      <li><a name="toc-Examples-1" href="#Examples-1">3.28.1 Examples</a></li>
    </ul></li>
    <li><a name="toc-sap" href="#sap">3.29 sap</a>
    <ul class="no-bullet">
      <li><a name="toc-Muxer" href="#Muxer">3.29.1 Muxer</a></li>
      <li><a name="toc-Demuxer" href="#Demuxer">3.29.2 Demuxer</a></li>
    </ul></li>
    <li><a name="toc-sctp" href="#sctp">3.30 sctp</a></li>
    <li><a name="toc-srt" href="#srt">3.31 srt</a></li>
    <li><a name="toc-srtp" href="#srtp">3.32 srtp</a></li>
    <li><a name="toc-subfile" href="#subfile">3.33 subfile</a></li>
    <li><a name="toc-tee" href="#tee">3.34 tee</a></li>
    <li><a name="toc-tcp" href="#tcp">3.35 tcp</a></li>
    <li><a name="toc-tls" href="#tls">3.36 tls</a></li>
    <li><a name="toc-udp" href="#udp">3.37 udp</a>
    <ul class="no-bullet">
      <li><a name="toc-Examples" href="#Examples">3.37.1 Examples</a></li>
    </ul></li>
    <li><a name="toc-unix" href="#unix">3.38 unix</a></li>
  </ul></li>
  <li><a name="toc-See-Also" href="#See-Also">4 See Also</a></li>
  <li><a name="toc-Authors" href="#Authors">5 Authors</a></li>
</ul>
</div>


<hr size="6">
<a name="Description"></a>
<h1 class="chapter"><a href="ffmpeg-protocols.html#toc-Description">1 Description</a></h1>

<p>This document describes the input and output protocols provided by the
libavformat library.
</p>

<a name="Protocol-Options"></a>
<h1 class="chapter"><a href="ffmpeg-protocols.html#toc-Protocol-Options">2 Protocol Options</a></h1>

<p>The libavformat library provides some generic global options, which
can be set on all the protocols. In addition each protocol may support
so-called private options, which are specific for that component.
</p>
<p>Options may be set by specifying -<var>option</var> <var>value</var> in the
FFmpeg tools, or by setting the value explicitly in the
<code>AVFormatContext</code> options or using the &lsquo;<tt>libavutil/opt.h</tt>&rsquo; API
for programmatic use.
</p>
<p>The list of supported options follows:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>protocol_whitelist <var>list</var> (<em>input</em>)</samp>&rsquo;</dt>
<dd><p>Set a &quot;,&quot;-separated list of allowed protocols. &quot;ALL&quot; matches all protocols. Protocols
prefixed by &quot;-&quot; are disabled.
All protocols are allowed by default, but protocols used by another
protocol (nested protocols) are restricted to a per-protocol subset.
</p></dd>
</dl>
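
<p>For example, a possible invocation that restricts <code>ffplay</code> to the protocols
typically needed for HTTP-based HLS playback (the playlist URL is only illustrative):
</p><div class="example">
<pre class="example">ffplay -protocol_whitelist file,http,https,tcp,tls,crypto http://example.com/master.m3u8
</pre></div>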


<a name="Protocols"></a>
<h1 class="chapter"><a href="ffmpeg-protocols.html#toc-Protocols">3 Protocols</a></h1>

<p>Protocols are configured elements in FFmpeg that enable access to
resources that require specific protocols.
</p>
<p>When you configure your FFmpeg build, all the supported protocols are
enabled by default. You can list all available ones using the
configure option &quot;&ndash;list-protocols&quot;.
</p>
<p>You can disable all the protocols using the configure option
&quot;&ndash;disable-protocols&quot;, and selectively enable a protocol using the
option &quot;&ndash;enable-protocol=<var>PROTOCOL</var>&quot;, or you can disable a
particular protocol using the option
&quot;&ndash;disable-protocol=<var>PROTOCOL</var>&quot;.
</p>
<p>The option &quot;-protocols&quot; of the ff* tools will display the list of
supported protocols.
</p>
<p>All protocols accept the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>rw_timeout</samp>&rsquo;</dt>
<dd><p>Maximum time to wait for (network) read/write operations to complete,
in microseconds.
</p></dd>
</dl>
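
<p>For example, to give up on a network input that stalls for more than 5 seconds
(the URL and file names below are placeholders):
</p><div class="example">
<pre class="example">ffmpeg -rw_timeout 5000000 -i http://example.com/live/stream.ts -c copy output.ts
</pre></div>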

<p>A description of the currently available protocols follows.
</p>
<a name="async"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-async">3.1 async</a></h2>

<p>Asynchronous data filling wrapper for input stream.
</p>
<p>Fill data in a background thread, to decouple I/O operation from demux thread.
</p>
<div class="example">
<pre class="example">async:<var>URL</var>
async:http://host/resource
async:cache:http://host/resource
</pre></div>

<a name="bluray"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-bluray">3.2 bluray</a></h2>

<p>Read BluRay playlist.
</p>
<p>The accepted options are:
</p><dl compact="compact">
<dt>&lsquo;<samp>angle</samp>&rsquo;</dt>
<dd><p>BluRay angle
</p>
</dd>
<dt>&lsquo;<samp>chapter</samp>&rsquo;</dt>
<dd><p>Start chapter (1...N)
</p>
</dd>
<dt>&lsquo;<samp>playlist</samp>&rsquo;</dt>
<dd><p>Playlist to read (BDMV/PLAYLIST/?????.mpls)
</p>
</dd>
</dl>

<p>Examples:
</p>
<p>Read longest playlist from BluRay mounted to /mnt/bluray:
</p><div class="example">
<pre class="example">bluray:/mnt/bluray
</pre></div>

<p>Read angle 2 of playlist 4 from BluRay mounted to /mnt/bluray, start from chapter 2:
</p><div class="example">
<pre class="example">-playlist 4 -angle 2 -chapter 2 bluray:/mnt/bluray
</pre></div>

<a name="cache"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-cache">3.3 cache</a></h2>

<p>Caching wrapper for input stream.
</p>
<p>Cache the input stream to a temporary file. It brings seeking capability to live streams.
</p>
<div class="example">
<pre class="example">cache:<var>URL</var>
</pre></div>
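
<p>For example, a live HTTP stream could be wrapped to gain seeking in <code>ffplay</code>
(the URL is a placeholder):
</p><div class="example">
<pre class="example">ffplay cache:http://example.com/live/stream.ts
</pre></div>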

<a name="concat"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-concat">3.4 concat</a></h2>

<p>Physical concatenation protocol.
</p>
<p>Read and seek from many resources in sequence as if they were
a unique resource.
</p>
<p>A URL accepted by this protocol has the syntax:
</p><div class="example">
<pre class="example">concat:<var>URL1</var>|<var>URL2</var>|...|<var>URLN</var>
</pre></div>

<p>where <var>URL1</var>, <var>URL2</var>, ..., <var>URLN</var> are the URLs of the
resources to be concatenated, each one possibly specifying a distinct
protocol.
</p>
<p>For example to read a sequence of files &lsquo;<tt>split1.mpeg</tt>&rsquo;,
&lsquo;<tt>split2.mpeg</tt>&rsquo;, &lsquo;<tt>split3.mpeg</tt>&rsquo; with <code>ffplay</code> use the
command:
</p><div class="example">
<pre class="example">ffplay concat:split1.mpeg\|split2.mpeg\|split3.mpeg
</pre></div>

<p>Note that you may need to escape the character &quot;|&quot; which is special for
many shells.
</p>
<a name="crypto"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-crypto">3.5 crypto</a></h2>

<p>AES-encrypted stream reading protocol.
</p>
<p>The accepted options are:
</p><dl compact="compact">
<dt>&lsquo;<samp>key</samp>&rsquo;</dt>
<dd><p>Set the AES decryption key binary block from given hexadecimal representation.
</p>
</dd>
<dt>&lsquo;<samp>iv</samp>&rsquo;</dt>
<dd><p>Set the AES decryption initialization vector binary block from given hexadecimal representation.
</p></dd>
</dl>

<p>Accepted URL formats:
</p><div class="example">
<pre class="example">crypto:<var>URL</var>
crypto+<var>URL</var>
</pre></div>
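
<p>A hedged sketch of reading an AES-encrypted resource, assuming placeholder hexadecimal
key and IV values and an illustrative URL:
</p><div class="example">
<pre class="example">ffplay -key 00112233445566778899aabbccddeeff -iv 000102030405060708090a0b0c0d0e0f crypto+http://example.com/encrypted.ts
</pre></div>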

<a name="data"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-data">3.6 data</a></h2>

<p>Data in-line in the URI. See <a href="http://en.wikipedia.org/wiki/Data_URI_scheme">http://en.wikipedia.org/wiki/Data_URI_scheme</a>.
</p>
<p>For example, to convert a GIF file given inline with <code>ffmpeg</code>:
</p><div class="example">
<pre class="example">ffmpeg -i &quot;data:image/gif;base64,R0lGODdhCAAIAMIEAAAAAAAA//8AAP//AP///////////////ywAAAAACAAIAAADF0gEDLojDgdGiJdJqUX02iB4E8Q9jUMkADs=&quot; smiley.png
</pre></div>

<a name="file"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-file">3.7 file</a></h2>

<p>File access protocol.
</p>
<p>Read from or write to a file.
</p>
<p>A file URL can have the form:
</p><div class="example">
<pre class="example">file:<var>filename</var>
</pre></div>

<p>where <var>filename</var> is the path of the file to read.
</p>
<p>A URL that does not have a protocol prefix will be assumed to be a
file URL. Depending on the build, a URL that looks like a Windows
path with the drive letter at the beginning will also be assumed to be
a file URL (usually not the case in builds for unix-like systems).
</p>
<p>For example to read from a file &lsquo;<tt>input.mpeg</tt>&rsquo; with <code>ffmpeg</code>
use the command:
</p><div class="example">
<pre class="example">ffmpeg -i file:input.mpeg output.mpeg
</pre></div>

<p>This protocol accepts the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>truncate</samp>&rsquo;</dt>
<dd><p>Truncate existing files on write, if set to 1. A value of 0 prevents
truncating. Default value is 1.
</p>
</dd>
<dt>&lsquo;<samp>blocksize</samp>&rsquo;</dt>
<dd><p>Set I/O operation maximum block size, in bytes. Default value is
<code>INT_MAX</code>, which results in not limiting the requested block size.
Setting this value reasonably low improves user termination request reaction
time, which is valuable for files on slow medium.
</p></dd>
</dl>
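
<p>As an illustrative sketch (file names are placeholders), the &lsquo;<samp>blocksize</samp>&rsquo;
option can be lowered when reading from a slow medium:
</p><div class="example">
<pre class="example">ffmpeg -blocksize 65536 -i file:input.mpeg output.mpeg
</pre></div>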

<a name="ftp"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-ftp">3.8 ftp</a></h2>

<p>FTP (File Transfer Protocol).
</p>
<p>Read from or write to remote resources using FTP protocol.
</p>
<p>The following syntax is required.
</p><div class="example">
<pre class="example">ftp://[user[:password]@]server[:port]/path/to/remote/resource.mpeg
</pre></div>

<p>This protocol accepts the following options.
</p>
<dl compact="compact">
<dt>&lsquo;<samp>timeout</samp>&rsquo;</dt>
<dd><p>Set the timeout in microseconds for socket I/O operations used by the underlying
low-level operation. By default it is set to -1, which means that the timeout is
not specified.
</p>
</dd>
<dt>&lsquo;<samp>ftp-anonymous-password</samp>&rsquo;</dt>
<dd><p>Password used when logging in as an anonymous user. Typically an e-mail address
should be used.
</p>
</dd>
<dt>&lsquo;<samp>ftp-write-seekable</samp>&rsquo;</dt>
<dd><p>Control seekability of connection during encoding. If set to 1 the
resource is supposed to be seekable, if set to 0 it is assumed not
to be seekable. Default value is 0.
</p></dd>
</dl>

<p>NOTE: The protocol can be used as output, but it is recommended not to do
so unless special care is taken (tests, customized server configuration,
etc.). Different FTP servers behave in different ways during seek
operations. The ff* tools may produce incomplete content due to server limitations.
</p>
<p>This protocol accepts the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>follow</samp>&rsquo;</dt>
<dd><p>If set to 1, the protocol will retry reading at the end of the file, allowing
reading files that still are being written. In order for this to terminate,
you either need to use the rw_timeout option, or use the interrupt callback
(for API users).
</p>
</dd>
</dl>
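
<p>For example, to read a remote file over FTP with <code>ffplay</code> (credentials,
host and path are placeholders):
</p><div class="example">
<pre class="example">ffplay ftp://user:password@ftp.example.com/path/to/remote/resource.mpeg
</pre></div>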

<a name="gopher"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-gopher">3.9 gopher</a></h2>

<p>Gopher protocol.
</p>
<a name="hls"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-hls">3.10 hls</a></h2>

<p>Read Apple HTTP Live Streaming compliant segmented stream as
a uniform one. The M3U8 playlists describing the segments can be
remote HTTP resources or local files, accessed using the standard
file protocol.
The nested protocol is declared by specifying
&quot;+<var>proto</var>&quot; after the hls URI scheme name, where <var>proto</var>
is either &quot;file&quot; or &quot;http&quot;.
</p>
<div class="example">
<pre class="example">hls+http://host/path/to/remote/resource.m3u8
hls+file://path/to/local/resource.m3u8
</pre></div>

<p>Using this protocol is discouraged - the hls demuxer should work
just as well (if not, please report the issues) and is more complete.
To use the hls demuxer instead, simply use the direct URLs to the
m3u8 files.
</p>
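
<p>For example, the same resource as above could be opened through the hls demuxer
simply by passing the playlist URL directly (the URL is illustrative):
</p><div class="example">
<pre class="example">ffplay http://host/path/to/remote/resource.m3u8
</pre></div>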
<a name="http"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-http">3.11 http</a></h2>

<p>HTTP (Hyper Text Transfer Protocol).
</p>
<p>This protocol accepts the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>seekable</samp>&rsquo;</dt>
<dd><p>Control seekability of connection. If set to 1 the resource is
supposed to be seekable, if set to 0 it is assumed not to be seekable,
if set to -1 it will try to autodetect if it is seekable. Default
value is -1.
</p>
</dd>
<dt>&lsquo;<samp>chunked_post</samp>&rsquo;</dt>
<dd><p>If set to 1, use chunked Transfer-Encoding for posts. Default is 1.
</p>
</dd>
<dt>&lsquo;<samp>content_type</samp>&rsquo;</dt>
<dd><p>Set a specific content type for the POST messages or for listen mode.
</p>
</dd>
<dt>&lsquo;<samp>http_proxy</samp>&rsquo;</dt>
<dd><p>Set the HTTP proxy to tunnel through, e.g. http://example.com:1234.
</p>
</dd>
<dt>&lsquo;<samp>headers</samp>&rsquo;</dt>
<dd><p>Set custom HTTP headers; these can override the built-in default headers. The
value must be a string encoding the headers.
</p>
</dd>
<dt>&lsquo;<samp>multiple_requests</samp>&rsquo;</dt>
<dd><p>Use persistent connections if set to 1. Default is 0.
</p>
</dd>
<dt>&lsquo;<samp>post_data</samp>&rsquo;</dt>
<dd><p>Set custom HTTP post data.
</p>
</dd>
<dt>&lsquo;<samp>referer</samp>&rsquo;</dt>
<dd><p>Set the Referer header. Include &rsquo;Referer: URL&rsquo; header in HTTP request.
</p>
</dd>
<dt>&lsquo;<samp>user_agent</samp>&rsquo;</dt>
<dd><p>Override the User-Agent header. If not specified the protocol will use a
string describing the libavformat build. (&quot;Lavf/&lt;version&gt;&quot;)
</p>
</dd>
<dt>&lsquo;<samp>user-agent</samp>&rsquo;</dt>
<dd><p>This is a deprecated option; use user_agent instead.
</p>
</dd>
<dt>&lsquo;<samp>timeout</samp>&rsquo;</dt>
<dd><p>Set the timeout in microseconds for socket I/O operations used by the underlying
low-level operation. By default it is set to -1, which means that the timeout is
not specified.
</p>
</dd>
<dt>&lsquo;<samp>reconnect_at_eof</samp>&rsquo;</dt>
<dd><p>If set, EOF is treated like an error and causes reconnection; this is useful
for live / endless streams.
</p>
</dd>
<dt>&lsquo;<samp>reconnect_streamed</samp>&rsquo;</dt>
<dd><p>If set, even streamed/non-seekable streams will be reconnected on errors.
</p>
</dd>
<dt>&lsquo;<samp>reconnect_delay_max</samp>&rsquo;</dt>
<dd><p>Set the maximum delay in seconds after which to give up reconnecting.
</p>
</dd>
<dt>&lsquo;<samp>mime_type</samp>&rsquo;</dt>
<dd><p>Export the MIME type.
</p>
</dd>
<dt>&lsquo;<samp>http_version</samp>&rsquo;</dt>
<dd><p>Exports the HTTP response version number. Usually &quot;1.0&quot; or &quot;1.1&quot;.
</p>
</dd>
<dt>&lsquo;<samp>icy</samp>&rsquo;</dt>
<dd><p>If set to 1 request ICY (SHOUTcast) metadata from the server. If the server
supports this, the metadata has to be retrieved by the application by reading
the &lsquo;<samp>icy_metadata_headers</samp>&rsquo; and &lsquo;<samp>icy_metadata_packet</samp>&rsquo; options.
The default is 1.
</p>
</dd>
<dt>&lsquo;<samp>icy_metadata_headers</samp>&rsquo;</dt>
<dd><p>If the server supports ICY metadata, this contains the ICY-specific HTTP reply
headers, separated by newline characters.
</p>
</dd>
<dt>&lsquo;<samp>icy_metadata_packet</samp>&rsquo;</dt>
<dd><p>If the server supports ICY metadata, and &lsquo;<samp>icy</samp>&rsquo; was set to 1, this
contains the last non-empty metadata packet sent by the server. It should be
polled at regular intervals by applications interested in mid-stream metadata
updates.
</p>
</dd>
<dt>&lsquo;<samp>cookies</samp>&rsquo;</dt>
<dd><p>Set the cookies to be sent in future requests. The format of each cookie is the
same as the value of a Set-Cookie HTTP response field. Multiple cookies can be
delimited by a newline character.
</p>
</dd>
<dt>&lsquo;<samp>offset</samp>&rsquo;</dt>
<dd><p>Set initial byte offset.
</p>
</dd>
<dt>&lsquo;<samp>end_offset</samp>&rsquo;</dt>
<dd><p>Try to limit the request to bytes preceding this offset.
</p>
</dd>
<dt>&lsquo;<samp>method</samp>&rsquo;</dt>
<dd><p>When used as a client option it sets the HTTP method for the request.
</p>
<p>When used as a server option it sets the HTTP method that is going to be
expected from the client(s).
If the expected and the received HTTP method do not match the client will
be given a Bad Request response.
When unset the HTTP method is not checked for now. This will be replaced by
autodetection in the future.
</p>
</dd>
<dt>&lsquo;<samp>listen</samp>&rsquo;</dt>
<dd><p>If set to 1 enables experimental HTTP server. This can be used to send data when
used as an output option, or read data from a client with HTTP POST when used as
an input option.
If set to 2 enables experimental multi-client HTTP server. This is not yet implemented
in ffmpeg.c and thus must not be used as a command line option.
</p><div class="example">
<pre class="example"># Server side (sending):
ffmpeg -i somefile.ogg -c copy -listen 1 -f ogg http://<var>server</var>:<var>port</var>

# Client side (receiving):
ffmpeg -i http://<var>server</var>:<var>port</var> -c copy somefile.ogg

# Client can also be done with wget:
wget http://<var>server</var>:<var>port</var> -O somefile.ogg

# Server side (receiving):
ffmpeg -listen 1 -i http://<var>server</var>:<var>port</var> -c copy somefile.ogg

# Client side (sending):
ffmpeg -i somefile.ogg -chunked_post 0 -c copy -f ogg http://<var>server</var>:<var>port</var>

# Client can also be done with wget:
wget --post-file=somefile.ogg http://<var>server</var>:<var>port</var>
</pre></div>

</dd>
</dl>
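
<p>As an illustrative sketch (the URL and the User-Agent string are placeholders), a
custom User-Agent can be combined with a disabled seek probe when reading a live stream:
</p><div class="example">
<pre class="example">ffplay -user_agent &quot;FooPlayer/1.0&quot; -seekable 0 http://example.com/live/stream.ts
</pre></div>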

<a name="HTTP-Cookies"></a>
<h3 class="subsection"><a href="ffmpeg-protocols.html#toc-HTTP-Cookies">3.11.1 HTTP Cookies</a></h3>

<p>Some HTTP requests will be denied unless cookie values are passed in with the
request. The &lsquo;<samp>cookies</samp>&rsquo; option allows these cookies to be specified. At
the very least, each cookie must specify a value along with a path and domain.
HTTP requests that match both the domain and path will automatically include the
cookie value in the HTTP Cookie header field. Multiple cookies can be delimited
by a newline.
</p>
<p>The required syntax to play a stream specifying a cookie is:
</p><div class="example">
<pre class="example">ffplay -cookies &quot;nlqptid=nltid=tsn; path=/; domain=somedomain.com;&quot; http://somedomain.com/somestream.m3u8
</pre></div>

<a name="Icecast"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-Icecast">3.12 Icecast</a></h2>

<p>Icecast protocol (stream to Icecast servers)
</p>
<p>This protocol accepts the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>ice_genre</samp>&rsquo;</dt>
<dd><p>Set the stream genre.
</p>
</dd>
<dt>&lsquo;<samp>ice_name</samp>&rsquo;</dt>
<dd><p>Set the stream name.
</p>
</dd>
<dt>&lsquo;<samp>ice_description</samp>&rsquo;</dt>
<dd><p>Set the stream description.
</p>
</dd>
<dt>&lsquo;<samp>ice_url</samp>&rsquo;</dt>
<dd><p>Set the stream website URL.
</p>
</dd>
<dt>&lsquo;<samp>ice_public</samp>&rsquo;</dt>
<dd><p>Set if the stream should be public.
The default is 0 (not public).
</p>
</dd>
<dt>&lsquo;<samp>user_agent</samp>&rsquo;</dt>
<dd><p>Override the User-Agent header. If not specified a string of the form
&quot;Lavf/&lt;version&gt;&quot; will be used.
</p>
</dd>
<dt>&lsquo;<samp>password</samp>&rsquo;</dt>
<dd><p>Set the Icecast mountpoint password.
</p>
</dd>
<dt>&lsquo;<samp>content_type</samp>&rsquo;</dt>
<dd><p>Set the stream content type. This must be set if it is different from
audio/mpeg.
</p>
</dd>
<dt>&lsquo;<samp>legacy_icecast</samp>&rsquo;</dt>
<dd><p>This enables support for Icecast versions &lt; 2.4.0, which do not support the
HTTP PUT method but only the SOURCE method.
</p>
</dd>
</dl>

<div class="example">
<pre class="example">icecast://[<var>username</var>[:<var>password</var>]@]<var>server</var>:<var>port</var>/<var>mountpoint</var>
</pre></div>
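
<p>For example, a possible way to send an MP3 stream to a mountpoint, assuming
placeholder credentials, server and stream name:
</p><div class="example">
<pre class="example">ffmpeg -re -i input.mp3 -c:a copy -ice_name &quot;My stream&quot; -f mp3 icecast://source:password@icecast.example.com:8000/live
</pre></div>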

<a name="mmst"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-mmst">3.13 mmst</a></h2>

<p>MMS (Microsoft Media Server) protocol over TCP.
</p>
<a name="mmsh"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-mmsh">3.14 mmsh</a></h2>

<p>MMS (Microsoft Media Server) protocol over HTTP.
</p>
<p>The required syntax is:
</p><div class="example">
<pre class="example">mmsh://<var>server</var>[:<var>port</var>][/<var>app</var>][/<var>playpath</var>]
</pre></div>

<a name="md5"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-md5">3.15 md5</a></h2>

<p>MD5 output protocol.
</p>
<p>Computes the MD5 hash of the data to be written, and on close writes
this to the designated output or stdout if none is specified. It can
be used to test muxers without writing an actual file.
</p>
<p>Some examples follow.
</p><div class="example">
<pre class="example"># Write the MD5 hash of the encoded AVI file to the file output.avi.md5.
ffmpeg -i input.flv -f avi -y md5:output.avi.md5

# Write the MD5 hash of the encoded AVI file to stdout.
ffmpeg -i input.flv -f avi -y md5:
</pre></div>

<p>Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the MD5 output protocol.
</p>
<a name="pipe"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-pipe">3.16 pipe</a></h2>

<p>UNIX pipe access protocol.
</p>
<p>Read and write from UNIX pipes.
</p>
<p>The accepted syntax is:
</p><div class="example">
<pre class="example">pipe:[<var>number</var>]
</pre></div>

<p><var>number</var> is the number corresponding to the file descriptor of the
pipe (e.g. 0 for stdin, 1 for stdout, 2 for stderr).  If <var>number</var>
is not specified, by default the stdout file descriptor will be used
for writing, stdin for reading.
</p>
<p>For example to read from stdin with <code>ffmpeg</code>:
</p><div class="example">
<pre class="example">cat test.wav | ffmpeg -i pipe:0
# ...this is the same as...
cat test.wav | ffmpeg -i pipe:
</pre></div>

<p>For writing to stdout with <code>ffmpeg</code>:
</p><div class="example">
<pre class="example">ffmpeg -i test.wav -f avi pipe:1 | cat &gt; test.avi
# ...this is the same as...
ffmpeg -i test.wav -f avi pipe: | cat &gt; test.avi
</pre></div>

<p>This protocol accepts the following options:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>blocksize</samp>&rsquo;</dt>
<dd><p>Set I/O operation maximum block size, in bytes. Default value is
<code>INT_MAX</code>, which results in not limiting the requested block size.
Setting this value reasonably low improves user termination request reaction
time, which is valuable if data transmission is slow.
</p></dd>
</dl>

<p>Note that some formats (typically MOV) require the output protocol to
be seekable, so they will fail with the pipe output protocol.
</p>
<a name="prompeg"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-prompeg">3.17 prompeg</a></h2>

<p>Pro-MPEG Code of Practice #3 Release 2 FEC protocol.
</p>
<p>The Pro-MPEG CoP#3 FEC is a 2D parity-check forward error correction mechanism
for MPEG-2 Transport Streams sent over RTP.
</p>
<p>This protocol must be used in conjunction with the <code>rtp_mpegts</code> muxer and
the <code>rtp</code> protocol.
</p>
<p>The required syntax is:
</p><div class="example">
<pre class="example">-f rtp_mpegts -fec prompeg=<var>option</var>=<var>val</var>... rtp://<var>hostname</var>:<var>port</var>
</pre></div>

<p>The destination UDP ports are <code>port + 2</code> for the column FEC stream
and <code>port + 4</code> for the row FEC stream.
</p>
<p>This protocol accepts the following options:
</p><dl compact="compact">
<dt>&lsquo;<samp>l=<var>n</var></samp>&rsquo;</dt>
<dd><p>The number of columns (4-20, LxD &lt;= 100)
</p>
</dd>
<dt>&lsquo;<samp>d=<var>n</var></samp>&rsquo;</dt>
<dd><p>The number of rows (4-20, LxD &lt;= 100)
</p>
</dd>
</dl>

<p>Example usage:
</p>
<div class="example">
<pre class="example">-f rtp_mpegts -fec prompeg=l=8:d=4 rtp://<var>hostname</var>:<var>port</var>
</pre></div>

<a name="rtmp"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-rtmp">3.18 rtmp</a></h2>

<p>Real-Time Messaging Protocol.
</p>
<p>The Real-Time Messaging Protocol (RTMP) is used for streaming multimedia
content across a TCP/IP network.
</p>
<p>The required syntax is:
</p><div class="example">
<pre class="example">rtmp://[<var>username</var>:<var>password</var>@]<var>server</var>[:<var>port</var>][/<var>app</var>][/<var>instance</var>][/<var>playpath</var>]
</pre></div>

<p>The accepted parameters are:
</p><dl compact="compact">
<dt>&lsquo;<samp>username</samp>&rsquo;</dt>
<dd><p>An optional username (mostly for publishing).
</p>
</dd>
<dt>&lsquo;<samp>password</samp>&rsquo;</dt>
<dd><p>An optional password (mostly for publishing).
</p>
</dd>
<dt>&lsquo;<samp>server</samp>&rsquo;</dt>
<dd><p>The address of the RTMP server.
</p>
</dd>
<dt>&lsquo;<samp>port</samp>&rsquo;</dt>
<dd><p>The number of the TCP port to use (by default 1935).
</p>
</dd>
<dt>&lsquo;<samp>app</samp>&rsquo;</dt>
<dd><p>It is the name of the application to access. It usually corresponds to
the path where the application is installed on the RTMP server
(e.g. &lsquo;<tt>/ondemand/</tt>&rsquo;, &lsquo;<tt>/flash/live/</tt>&rsquo;, etc.). You can override
the value parsed from the URI through the <code>rtmp_app</code> option, too.
</p>
</dd>
<dt>&lsquo;<samp>playpath</samp>&rsquo;</dt>
<dd><p>It is the path or name of the resource to play with reference to the
application specified in <var>app</var>; it may be prefixed by &quot;mp4:&quot;. You
can override the value parsed from the URI through the <code>rtmp_playpath</code>
option, too.
</p>
</dd>
<dt>&lsquo;<samp>listen</samp>&rsquo;</dt>
<dd><p>Act as a server, listening for an incoming connection.
</p>
</dd>
<dt>&lsquo;<samp>timeout</samp>&rsquo;</dt>
<dd><p>Maximum time to wait for the incoming connection. Implies listen.
</p></dd>
</dl>

<p>Additionally, the following parameters can be set via command line options
(or in code via <code>AVOption</code>s):
</p><dl compact="compact">
<dt>&lsquo;<samp>rtmp_app</samp>&rsquo;</dt>
<dd><p>Name of application to connect on the RTMP server. This option
overrides the parameter specified in the URI.
</p>
</dd>
<dt>&lsquo;<samp>rtmp_buffer</samp>&rsquo;</dt>
<dd><p>Set the client buffer time in milliseconds. The default is 3000.
</p>
</dd>
<dt>&lsquo;<samp>rtmp_conn</samp>&rsquo;</dt>
<dd><p>Extra arbitrary AMF connection parameters, parsed from a string,
e.g. like <code>B:1 S:authMe O:1 NN:code:1.23 NS:flag:ok O:0</code>.
Each value is prefixed by a single character denoting the type,
B for Boolean, N for number, S for string, O for object, or Z for null,
followed by a colon. For Booleans the data must be either 0 or 1 for
FALSE or TRUE, respectively.  Likewise for Objects the data must be 0 or
1 to end or begin an object, respectively. Data items in subobjects may
be named, by prefixing the type with &rsquo;N&rsquo; and specifying the name before
the value (i.e. <code>NB:myFlag:1</code>). This option may be used multiple
times to construct arbitrary AMF sequences.
</p>
</dd>
<dt>&lsquo;<samp>rtmp_flashver</samp>&rsquo;</dt>
<dd><p>Version of the Flash plugin used to run the SWF player. The default
is LNX 9,0,124,2. (When publishing, the default is FMLE/3.0 (compatible;
&lt;libavformat version&gt;).)
</p>
</dd>
<dt>&lsquo;<samp>rtmp_flush_interval</samp>&rsquo;</dt>
<dd><p>Number of packets flushed in the same request (RTMPT only). The default
is 10.
</p>
</dd>
<dt>&lsquo;<samp>rtmp_live</samp>&rsquo;</dt>
<dd><p>Specify that the media is a live stream. No resuming or seeking in
live streams is possible. The default value is <code>any</code>, which means the
subscriber first tries to play the live stream specified in the
playpath. If a live stream of that name is not found, it plays the
recorded stream. The other possible values are <code>live</code> and
<code>recorded</code>.
</p>
</dd>
<dt>&lsquo;<samp>rtmp_pageurl</samp>&rsquo;</dt>
<dd><p>URL of the web page in which the media was embedded. By default no
value will be sent.
</p>
</dd>
<dt>&lsquo;<samp>rtmp_playpath</samp>&rsquo;</dt>
<dd><p>Stream identifier to play or to publish. This option overrides the
parameter specified in the URI.
</p>
</dd>
<dt>&lsquo;<samp>rtmp_subscribe</samp>&rsquo;</dt>
<dd><p>Name of live stream to subscribe to. By default no value will be sent.
It is only sent if the option is specified or if rtmp_live
is set to live.
</p>
</dd>
<dt>&lsquo;<samp>rtmp_swfhash</samp>&rsquo;</dt>
<dd><p>SHA256 hash of the decompressed SWF file (32 bytes).
</p>
</dd>
<dt>&lsquo;<samp>rtmp_swfsize</samp>&rsquo;</dt>
<dd><p>Size of the decompressed SWF file, required for SWFVerification.
</p>
</dd>
<dt>&lsquo;<samp>rtmp_swfurl</samp>&rsquo;</dt>
<dd><p>URL of the SWF player for the media. By default no value will be sent.
</p>
</dd>
<dt>&lsquo;<samp>rtmp_swfverify</samp>&rsquo;</dt>
<dd><p>URL to player swf file, compute hash/size automatically.
</p>
</dd>
<dt>&lsquo;<samp>rtmp_tcurl</samp>&rsquo;</dt>
<dd><p>URL of the target stream. Defaults to proto://host[:port]/app.
</p>
</dd>
</dl>

<p>For example to read with <code>ffplay</code> a multimedia resource named
&quot;sample&quot; from the application &quot;vod&quot; from an RTMP server &quot;myserver&quot;:
</p><div class="example">
<pre class="example">ffplay rtmp://myserver/vod/sample
</pre></div>

<p>To publish to a password protected server, passing the playpath and
app names separately:
</p><div class="example">
<pre class="example">ffmpeg -re -i &lt;input&gt; -f flv -rtmp_playpath some/long/path -rtmp_app long/app/name rtmp://username:password@myserver/
</pre></div>

<a name="rtmpe"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-rtmpe">3.19 rtmpe</a></h2>

<p>Encrypted Real-Time Messaging Protocol.
</p>
<p>The Encrypted Real-Time Messaging Protocol (RTMPE) is used for
streaming multimedia content within standard cryptographic primitives,
consisting of Diffie-Hellman key exchange and HMACSHA256, generating
a pair of RC4 keys.
</p>
<a name="rtmps"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-rtmps">3.20 rtmps</a></h2>

<p>Real-Time Messaging Protocol over a secure SSL connection.
</p>
<p>The Real-Time Messaging Protocol (RTMPS) is used for streaming
multimedia content across an encrypted connection.
</p>
<a name="rtmpt"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-rtmpt">3.21 rtmpt</a></h2>

<p>Real-Time Messaging Protocol tunneled through HTTP.
</p>
<p>The Real-Time Messaging Protocol tunneled through HTTP (RTMPT) is used
for streaming multimedia content within HTTP requests to traverse
firewalls.
</p>
<a name="rtmpte"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-rtmpte">3.22 rtmpte</a></h2>

<p>Encrypted Real-Time Messaging Protocol tunneled through HTTP.
</p>
<p>The Encrypted Real-Time Messaging Protocol tunneled through HTTP (RTMPTE)
is used for streaming multimedia content within HTTP requests to traverse
firewalls.
</p>
<a name="rtmpts"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-rtmpts">3.23 rtmpts</a></h2>

<p>Real-Time Messaging Protocol tunneled through HTTPS.
</p>
<p>The Real-Time Messaging Protocol tunneled through HTTPS (RTMPTS) is used
for streaming multimedia content within HTTPS requests to traverse
firewalls.
</p>
<a name="libsmbclient"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-libsmbclient">3.24 libsmbclient</a></h2>

<p>libsmbclient permits one to manipulate CIFS/SMB network resources.
</p>
<p>The following syntax is required.
</p>
<div class="example">
<pre class="example">smb://[[domain:]user[:password@]]server[/share[/path[/file]]]
</pre></div>

<p>This protocol accepts the following options.
</p>
<dl compact="compact">
<dt>&lsquo;<samp>timeout</samp>&rsquo;</dt>
<dd><p>Set the timeout in milliseconds for socket I/O operations used by the underlying
low-level operation. By default it is set to -1, which means that the timeout
is not specified.
</p>
</dd>
<dt>&lsquo;<samp>truncate</samp>&rsquo;</dt>
<dd><p>Truncate existing files on write, if set to 1. A value of 0 prevents
truncating. Default value is 1.
</p>
</dd>
<dt>&lsquo;<samp>workgroup</samp>&rsquo;</dt>
<dd><p>Set the workgroup used for making connections. By default workgroup is not specified.
</p>
</dd>
</dl>
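
<p>For example, assuming a build configured with libsmbclient and placeholder
credentials, a file on a share could be played with <code>ffplay</code>:
</p><div class="example">
<pre class="example">ffplay &quot;smb://user:password@server/share/path/to/video.mkv&quot;
</pre></div>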

<p>For more information see: <a href="http://www.samba.org/">http://www.samba.org/</a>.
</p>
<a name="libssh"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-libssh">3.25 libssh</a></h2>

<p>Secure File Transfer Protocol via libssh.
</p>
<p>Read from or write to remote resources using SFTP protocol.
</p>
<p>The following syntax is required.
</p>
<div class="example">
<pre class="example">sftp://[user[:password]@]server[:port]/path/to/remote/resource.mpeg
</pre></div>

<p>This protocol accepts the following options.
</p>
<dl compact="compact">
<dt>&lsquo;<samp>timeout</samp>&rsquo;</dt>
<dd><p>Set the timeout of socket I/O operations used by the underlying low-level
operation. By default it is set to -1, which means that the timeout
is not specified.
</p>
</dd>
<dt>&lsquo;<samp>truncate</samp>&rsquo;</dt>
<dd><p>Truncate existing files on write, if set to 1. A value of 0 prevents
truncating. Default value is 1.
</p>
</dd>
<dt>&lsquo;<samp>private_key</samp>&rsquo;</dt>
<dd><p>Specify the path of the file containing the private key to use during authorization.
By default libssh searches for keys in the &lsquo;<tt>~/.ssh/</tt>&rsquo; directory.
</p>
</dd>
</dl>

<p>Example: Play a file stored on remote server.
</p>
<div class="example">
<pre class="example">ffplay sftp://user:password@server_address:22/home/user/resource.mpeg
</pre></div>

<a name="librtmp-rtmp_002c-rtmpe_002c-rtmps_002c-rtmpt_002c-rtmpte"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-librtmp-rtmp_002c-rtmpe_002c-rtmps_002c-rtmpt_002c-rtmpte">3.26 librtmp rtmp, rtmpe, rtmps, rtmpt, rtmpte</a></h2>

<p>Real-Time Messaging Protocol and its variants supported through
librtmp.
</p>
<p>Requires the presence of the librtmp headers and library during
configuration. You need to explicitly configure the build with
&quot;&ndash;enable-librtmp&quot;. If enabled this will replace the native RTMP
protocol.
</p>
<p>This protocol provides most client functions and a few server
functions needed to support RTMP, RTMP tunneled in HTTP (RTMPT),
encrypted RTMP (RTMPE), RTMP over SSL/TLS (RTMPS) and tunneled
variants of these encrypted types (RTMPTE, RTMPTS).
</p>
<p>The required syntax is:
</p><div class="example">
<pre class="example"><var>rtmp_proto</var>://<var>server</var>[:<var>port</var>][/<var>app</var>][/<var>playpath</var>] <var>options</var>
</pre></div>

<p>where <var>rtmp_proto</var> is one of the strings &quot;rtmp&quot;, &quot;rtmpt&quot;, &quot;rtmpe&quot;,
&quot;rtmps&quot;, &quot;rtmpte&quot;, &quot;rtmpts&quot; corresponding to each RTMP variant, and
<var>server</var>, <var>port</var>, <var>app</var> and <var>playpath</var> have the same
meaning as specified for the RTMP native protocol.
<var>options</var> contains a list of space-separated options of the form
<var>key</var>=<var>val</var>.
</p>
<p>See the librtmp manual page (man 3 librtmp) for more information.
</p>
<p>For example, to stream a file in real-time to an RTMP server using
<code>ffmpeg</code>:
</p><div class="example">
<pre class="example">ffmpeg -re -i myfile -f flv rtmp://myserver/live/mystream
</pre></div>

<p>To play the same stream using <code>ffplay</code>:
</p><div class="example">
<pre class="example">ffplay &quot;rtmp://myserver/live/mystream live=1&quot;
</pre></div>

<a name="rtp"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-rtp">3.27 rtp</a></h2>

<p>Real-time Transport Protocol.
</p>
<p>The required syntax for an RTP URL is:
rtp://<var>hostname</var>[:<var>port</var>][?<var>option</var>=<var>val</var>...]
</p>
<p><var>port</var> specifies the RTP port to use.
</p>
<p>The following URL options are supported:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>ttl=<var>n</var></samp>&rsquo;</dt>
<dd><p>Set the TTL (Time-To-Live) value (for multicast only).
</p>
</dd>
<dt>&lsquo;<samp>rtcpport=<var>n</var></samp>&rsquo;</dt>
<dd><p>Set the remote RTCP port to <var>n</var>.
</p>
</dd>
<dt>&lsquo;<samp>localrtpport=<var>n</var></samp>&rsquo;</dt>
<dd><p>Set the local RTP port to <var>n</var>.
</p>
</dd>
<dt>&lsquo;<samp>localrtcpport=<var>n</var></samp>&rsquo;</dt>
<dd><p>Set the local RTCP port to <var>n</var>.
</p>
</dd>
<dt>&lsquo;<samp>pkt_size=<var>n</var></samp>&rsquo;</dt>
<dd><p>Set max packet size (in bytes) to <var>n</var>.
</p>
</dd>
<dt>&lsquo;<samp>connect=0|1</samp>&rsquo;</dt>
<dd><p>Do a <code>connect()</code> on the UDP socket (if set to 1) or not (if set
to 0).
</p>
</dd>
<dt>&lsquo;<samp>sources=<var>ip</var>[,<var>ip</var>]</samp>&rsquo;</dt>
<dd><p>List allowed source IP addresses.
</p>
</dd>
<dt>&lsquo;<samp>block=<var>ip</var>[,<var>ip</var>]</samp>&rsquo;</dt>
<dd><p>List disallowed (blocked) source IP addresses.
</p>
</dd>
<dt>&lsquo;<samp>write_to_source=0|1</samp>&rsquo;</dt>
<dd><p>Send packets to the source address of the latest received packet (if
set to 1) or to a default remote address (if set to 0).
</p>
</dd>
<dt>&lsquo;<samp>localport=<var>n</var></samp>&rsquo;</dt>
<dd><p>Set the local RTP port to <var>n</var>.
</p>
<p>This is a deprecated option. Instead, &lsquo;<samp>localrtpport</samp>&rsquo; should be
used.
</p>
</dd>
</dl>

<p>Important notes:
</p>
<ol>
<li>
If &lsquo;<samp>rtcpport</samp>&rsquo; is not set the RTCP port will be set to the RTP
port value plus 1.

</li><li>
If &lsquo;<samp>localrtpport</samp>&rsquo; (the local RTP port) is not set any available
port will be used for the local RTP and RTCP ports.

</li><li>
If &lsquo;<samp>localrtcpport</samp>&rsquo; (the local RTCP port) is not set it will be
set to the local RTP port value plus 1.
</li></ol>
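
<p>For example, an MPEG-TS stream could be sent to a multicast group with an explicit
TTL and packet size (the address, port and file name are placeholders):
</p><div class="example">
<pre class="example">ffmpeg -re -i input.ts -c copy -f rtp_mpegts &quot;rtp://239.255.0.1:5004?ttl=16&amp;pkt_size=1316&quot;
</pre></div>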

<a name="rtsp"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-rtsp">3.28 rtsp</a></h2>

<p>Real-Time Streaming Protocol.
</p>
<p>RTSP is not technically a protocol handler in libavformat, it is a demuxer
and muxer. The demuxer supports both normal RTSP (with data transferred
over RTP; this is used by e.g. Apple and Microsoft) and Real-RTSP (with
data transferred over RDT).
</p>
<p>The muxer can be used to send a stream using RTSP ANNOUNCE to a server
supporting it (currently Darwin Streaming Server and Mischa Spiegelmock&rsquo;s
<a href="https://github.com/revmischa/rtsp-server">RTSP server</a>).
</p>
<p>The required syntax for an RTSP URL is:
</p><div class="example">
<pre class="example">rtsp://<var>hostname</var>[:<var>port</var>]/<var>path</var>
</pre></div>

<p>Options can be set on the <code>ffmpeg</code>/<code>ffplay</code> command
line, or set in code via <code>AVOption</code>s or in
<code>avformat_open_input</code>.
</p>
<p>The following options are supported.
</p>
<dl compact="compact">
<dt>&lsquo;<samp>initial_pause</samp>&rsquo;</dt>
<dd><p>Do not start playing the stream immediately if set to 1. Default value
is 0.
</p>
</dd>
<dt>&lsquo;<samp>rtsp_transport</samp>&rsquo;</dt>
<dd><p>Set RTSP transport protocols.
</p>
<p>It accepts the following values:
</p><dl compact="compact">
<dt>&lsquo;<samp>udp</samp>&rsquo;</dt>
<dd><p>Use UDP as lower transport protocol.
</p>
</dd>
<dt>&lsquo;<samp>tcp</samp>&rsquo;</dt>
<dd><p>Use TCP (interleaving within the RTSP control channel) as lower
transport protocol.
</p>
</dd>
<dt>&lsquo;<samp>udp_multicast</samp>&rsquo;</dt>
<dd><p>Use UDP multicast as lower transport protocol.
</p>
</dd>
<dt>&lsquo;<samp>http</samp>&rsquo;</dt>
<dd><p>Use HTTP tunneling as lower transport protocol, which is useful for
passing through proxies.
</p></dd>
</dl>

<p>Multiple lower transport protocols may be specified, in that case they are
tried one at a time (if the setup of one fails, the next one is tried).
For the muxer, only the &lsquo;<samp>tcp</samp>&rsquo; and &lsquo;<samp>udp</samp>&rsquo; options are supported.
</p>
</dd>
<dt>&lsquo;<samp>rtsp_flags</samp>&rsquo;</dt>
<dd><p>Set RTSP flags.
</p>
<p>The following values are accepted:
</p><dl compact="compact">
<dt>&lsquo;<samp>filter_src</samp>&rsquo;</dt>
<dd><p>Accept packets only from negotiated peer address and port.
</p></dd>
<dt>&lsquo;<samp>listen</samp>&rsquo;</dt>
<dd><p>Act as a server, listening for an incoming connection.
</p></dd>
<dt>&lsquo;<samp>prefer_tcp</samp>&rsquo;</dt>
<dd><p>Try TCP for RTP transport first, if TCP is available as RTSP RTP transport.
</p></dd>
</dl>

<p>Default value is &lsquo;<samp>none</samp>&rsquo;.
</p>
</dd>
<dt>&lsquo;<samp>allowed_media_types</samp>&rsquo;</dt>
<dd><p>Set media types to accept from the server.
</p>
<p>The following flags are accepted:
</p><dl compact="compact">
<dt>&lsquo;<samp>video</samp>&rsquo;</dt>
<dt>&lsquo;<samp>audio</samp>&rsquo;</dt>
<dt>&lsquo;<samp>data</samp>&rsquo;</dt>
</dl>

<p>By default it accepts all media types.
</p>
</dd>
<dt>&lsquo;<samp>min_port</samp>&rsquo;</dt>
<dd><p>Set minimum local UDP port. Default value is 5000.
</p>
</dd>
<dt>&lsquo;<samp>max_port</samp>&rsquo;</dt>
<dd><p>Set maximum local UDP port. Default value is 65000.
</p>
</dd>
<dt>&lsquo;<samp>timeout</samp>&rsquo;</dt>
<dd><p>Set maximum timeout (in seconds) to wait for incoming connections.
</p>
<p>A value of -1 means infinite (default). This option implies the
&lsquo;<samp>rtsp_flags</samp>&rsquo; set to &lsquo;<samp>listen</samp>&rsquo;.
</p>
</dd>
<dt>&lsquo;<samp>reorder_queue_size</samp>&rsquo;</dt>
<dd><p>Set number of packets to buffer for handling of reordered packets.
</p>
</dd>
<dt>&lsquo;<samp>stimeout</samp>&rsquo;</dt>
<dd><p>Set socket TCP I/O timeout in microseconds.
</p>
</dd>
<dt>&lsquo;<samp>user-agent</samp>&rsquo;</dt>
<dd><p>Override User-Agent header. If not specified, it defaults to the
libavformat identifier string.
</p></dd>
</dl>

<p>When receiving data over UDP, the demuxer tries to reorder received packets
(since they may arrive out of order, or packets may get lost totally). This
can be disabled by setting the maximum demuxing delay to zero (via
the <code>max_delay</code> field of AVFormatContext).
</p>
<p>When watching multi-bitrate Real-RTSP streams with <code>ffplay</code>, the
streams to display can be chosen with <code>-vst</code> <var>n</var> and
<code>-ast</code> <var>n</var> for video and audio respectively, and can be switched
on the fly by pressing <code>v</code> and <code>a</code>.
</p>
<a name="Examples-1"></a>
<h3 class="subsection"><a href="ffmpeg-protocols.html#toc-Examples-1">3.28.1 Examples</a></h3>

<p>The following examples all make use of the <code>ffplay</code> and
<code>ffmpeg</code> tools.
</p>
<ul>
<li>
Watch a stream over UDP, with a max reordering delay of 0.5 seconds:
<div class="example">
<pre class="example">ffplay -max_delay 500000 -rtsp_transport udp rtsp://server/video.mp4
</pre></div>

</li><li>
Watch a stream tunneled over HTTP:
<div class="example">
<pre class="example">ffplay -rtsp_transport http rtsp://server/video.mp4
</pre></div>

</li><li>
Send a stream in realtime to an RTSP server, for others to watch:
<div class="example">
<pre class="example">ffmpeg -re -i <var>input</var> -f rtsp -muxdelay 0.1 rtsp://server/live.sdp
</pre></div>

</li><li>
Receive a stream in realtime:
<div class="example">
<pre class="example">ffmpeg -rtsp_flags listen -i rtsp://ownaddress/live.sdp <var>output</var>
</pre></div>
</li></ul>

<a name="sap"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-sap">3.29 sap</a></h2>

<p>Session Announcement Protocol (RFC 2974). This is not technically a
protocol handler in libavformat, it is a muxer and demuxer.
It is used for signalling of RTP streams, by announcing the SDP for the
streams regularly on a separate port.
</p>
<a name="Muxer"></a>
<h3 class="subsection"><a href="ffmpeg-protocols.html#toc-Muxer">3.29.1 Muxer</a></h3>

<p>The syntax for a SAP URL given to the muxer is:
</p><div class="example">
<pre class="example">sap://<var>destination</var>[:<var>port</var>][?<var>options</var>]
</pre></div>

<p>The RTP packets are sent to <var>destination</var> on port <var>port</var>,
or to port 5004 if no port is specified.
<var>options</var> is a <code>&amp;</code>-separated list. The following options
are supported:
</p>
<dl compact="compact">
<dt>&lsquo;<samp>announce_addr=<var>address</var></samp>&rsquo;</dt>
<dd><p>Specify the destination IP address for sending the announcements to.
If omitted, the announcements are sent to the commonly used SAP
announcement multicast address 224.2.127.254 (sap.mcast.net), or
ff0e::2:7ffe if <var>destination</var> is an IPv6 address.
</p>
</dd>
<dt>&lsquo;<samp>announce_port=<var>port</var></samp>&rsquo;</dt>
<dd><p>Specify the port to send the announcements on, defaults to
9875 if not specified.
</p>
</dd>
<dt>&lsquo;<samp>ttl=<var>ttl</var></samp>&rsquo;</dt>
<dd><p>Specify the time to live value for the announcements and RTP packets,
defaults to 255.
</p>
</dd>
<dt>&lsquo;<samp>same_port=<var>0|1</var></samp>&rsquo;</dt>
<dd><p>If set to 1, send all RTP streams on the same port pair. If zero (the
default), all streams are sent on unique ports, with each stream on a
port 2 numbers higher than the previous.
VLC/Live555 requires this to be set to 1, to be able to receive the stream.
The RTP stack in libavformat for receiving requires all streams to be sent
on unique ports.
</p></dd>
</dl>

<p>Example command lines follow.
</p>
<p>To broadcast a stream on the local subnet, for watching in VLC:
</p>
<div class="example">
<pre class="example">ffmpeg -re -i <var>input</var> -f sap sap://224.0.0.255?same_port=1
</pre></div>

<p>Similarly, for watching in <code>ffplay</code>:
</p>
<div class="example">
<pre class="example">ffmpeg -re -i <var>input</var> -f sap sap://224.0.0.255
</pre></div>

<p>And for watching in <code>ffplay</code>, over IPv6:
</p>
<div class="example">
<pre class="example">ffmpeg -re -i <var>input</var> -f sap sap://[ff0e::1:2:3:4]
</pre></div>

<a name="Demuxer"></a>
<h3 class="subsection"><a href="ffmpeg-protocols.html#toc-Demuxer">3.29.2 Demuxer</a></h3>

<p>The syntax for a SAP URL given to the demuxer is:
</p><div class="example">
<pre class="example">sap://[<var>address</var>][:<var>port</var>]
</pre></div>

<p><var>address</var> is the multicast address to listen for announcements on,
if omitted, the default 224.2.127.254 (sap.mcast.net) is used. <var>port</var>
is the port that is listened on, 9875 if omitted.
</p>
<p>The demuxer listens for announcements on the given address and port.
Once an announcement is received, it tries to receive that particular stream.
</p>
<p>Example command lines follow.
</p>
<p>To play back the first stream announced on the normal SAP multicast address:
</p>
<div class="example">
<pre class="example">ffplay sap://
</pre></div>

<p>To play back the first stream announced on the default IPv6 SAP multicast address:
</p>
<div class="example">
<pre class="example">ffplay sap://[ff0e::2:7ffe]
</pre></div>

<a name="sctp"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-sctp">3.30 sctp</a></h2>

<p>Stream Control Transmission Protocol.
</p>
<p>The accepted URL syntax is:
</p><div class="example">
<pre class="example">sctp://<var>host</var>:<var>port</var>[?<var>options</var>]
</pre></div>

<p>The protocol accepts the following options:
</p><dl compact="compact">
<dt>&lsquo;<samp>listen</samp>&rsquo;</dt>
<dd><p>If set to any value, listen for an incoming connection. Outgoing connection is done by default.
</p>
</dd>
<dt>&lsquo;<samp>max_streams</samp>&rsquo;</dt>
<dd><p>Set the maximum number of streams. By default no limit is set.
</p></dd>
</dl>
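
<p>As a hedged sketch, assuming a build with SCTP support and placeholder addresses,
one side can listen while the other connects:
</p><div class="example">
<pre class="example"># Listener side, writing an MPEG-TS stream to the incoming connection:
ffmpeg -re -i input.ts -c copy -f mpegts &quot;sctp://192.168.1.2:7000?listen=1&quot;

# Caller side, reading the stream:
ffplay sctp://192.168.1.2:7000
</pre></div>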

<a name="srt"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-srt">3.31 srt</a></h2>

<p>Haivision Secure Reliable Transport Protocol via libsrt.
</p>
<p>The supported syntax for an SRT URL is:
</p><div class="example">
<pre class="example">srt://<var>hostname</var>:<var>port</var>[?<var>options</var>]
</pre></div>

<p><var>options</var> contains a list of &amp;-separated options of the form
<var>key</var>=<var>val</var>.
</p>
<p>or
</p>
<div class="example">
<pre class="example"><var>options</var> srt://<var>hostname</var>:<var>port</var>
</pre></div>

<p><var>options</var> contains a list of &rsquo;-<var>key</var> <var>val</var>&rsquo;
options.
</p>
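
<p>For example, a hedged sketch of both forms, assuming a libsrt-enabled build and
placeholder addresses and file names:
</p><div class="example">
<pre class="example"># Receiver, with options given in the URL (listener mode):
ffplay &quot;srt://0.0.0.0:9000?mode=listener&quot;

# Sender, with options given as command line options:
ffmpeg -re -i input.ts -c copy -f mpegts -mode caller srt://192.168.1.10:9000
</pre></div>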
<p>This protocol accepts the following options.
</p>
<dl compact="compact">
<dt>&lsquo;<samp>connect_timeout</samp>&rsquo;</dt>
<dd><p>Connection timeout; SRT cannot connect for RTT &gt; 1500 msec
(2 handshake exchanges) with the default connect timeout of
3 seconds. This option applies to the caller and rendezvous
connection modes. The connect timeout is 10 times the value
set for the rendezvous mode (which can be used as a
workaround for this connection problem with earlier versions).
</p>
</dd>
<dt>&lsquo;<samp>ffs=<var>bytes</var></samp>&rsquo;</dt>
<dd><p>Flight Flag Size (Window Size), in bytes. FFS is actually an
internal parameter and you should set it to not less than
&lsquo;<samp>recv_buffer_size</samp>&rsquo; and &lsquo;<samp>mss</samp>&rsquo;. The default value
is relatively large, therefore unless you set a very large receiver buffer,
you do not need to change this option. Default value is 25600.
</p>
</dd>
<dt>&lsquo;<samp>inputbw=<var>bytes/seconds</var></samp>&rsquo;</dt>
<dd><p>Sender nominal input rate, in bytes per second. Used along with
&lsquo;<samp>oheadbw</samp>&rsquo;, when &lsquo;<samp>maxbw</samp>&rsquo; is set to relative (0), to
calculate the maximum sending rate when recovery packets are sent
along with the main media stream:
&lsquo;<samp>inputbw</samp>&rsquo; * (100 + &lsquo;<samp>oheadbw</samp>&rsquo;) / 100.
If &lsquo;<samp>inputbw</samp>&rsquo; is not set while &lsquo;<samp>maxbw</samp>&rsquo; is set to
relative (0), the actual input rate is evaluated inside
the library. Default value is 0.
</p>
</dd>
<dt>&lsquo;<samp>iptos=<var>tos</var></samp>&rsquo;</dt>
<dd><p>IP Type of Service. Applies to sender only. Default value is 0xB8.
</p>
</dd>
<dt>&lsquo;<samp>ipttl=<var>ttl</var></samp>&rsquo;</dt>
<dd><p>IP Time To Live. Applies to sender only. Default value is 64.
</p>
</dd>
<dt>&lsquo;<samp>latency</samp>&rsquo;</dt>
<dd><p>Timestamp-based Packet Delivery Delay.
Used to absorb bursts of missed packet retransmissions.
This flag sets both &lsquo;<samp>rcvlatency</samp>&rsquo; and &lsquo;<samp>peerlatency</samp>&rsquo;
to the same value. Note that prior to version 1.3.0
this is the only flag to set the latency, however
this is effectively equivalent to setting &lsquo;<samp>peerlatency</samp>&rsquo;,
when side is sender and &lsquo;<samp>rcvlatency</samp>&rsquo;
when side is receiver, and the bidirectional stream
sending is not supported.
</p>
</dd>
<dt>&lsquo;<samp>listen_timeout</samp>&rsquo;</dt>
<dd><p>Set socket listen timeout.
</p>
</dd>
<dt>&lsquo;<samp>maxbw=<var>bytes/seconds</var></samp>&rsquo;</dt>
<dd><p>Maximum sending bandwidth, in bytes per second:
-1 means infinite (the CSRTCC limit is 30 Mbps),
0 means relative to the input rate (see &lsquo;<samp>inputbw</samp>&rsquo;),
&gt;0 means an absolute limit value.
Default value is 0 (relative).
</p>
</dd>
<dt>&lsquo;<samp>mode=<var>caller|listener|rendezvous</var></samp>&rsquo;</dt>
<dd><p>Connection mode.
&lsquo;<samp>caller</samp>&rsquo; opens a client connection.
&lsquo;<samp>listener</samp>&rsquo; starts a server to listen for incoming connections.
&lsquo;<samp>rendezvous</samp>&rsquo; uses Rendez-Vous connection mode.
Default value is caller.
</p>
</dd>
<dt>&lsquo;<samp>mss=<var>bytes</var></samp>&rsquo;</dt>
<dd><p>Maximum Segment Size, in bytes. Used for buffer allocation
and rate calculation using a packet counter assuming fully
filled packets. The smallest MSS between the peers is
used. This is 1500 by default in the overall internet.
This is the maximum size of the UDP packet and can be
only decreased, unless you have some unusual dedicated
network settings. Default value is 1500.
</p>
</dd>
<dt>&lsquo;<samp>nakreport=<var>1|0</var></samp>&rsquo;</dt>
<dd><p>If set to 1, the receiver will send &lsquo;<samp>UMSG_LOSSREPORT</samp>&rsquo; messages
periodically until a lost packet is retransmitted or
intentionally dropped. Default value is 1.
</p>
</dd>
<dt>&lsquo;<samp>oheadbw=<var>percents</var></samp>&rsquo;</dt>
<dd><p>Recovery bandwidth overhead above input rate, in percent.
See &lsquo;<samp>inputbw</samp>&rsquo;. Default value is 25%.
</p>
</dd>
<dt>&lsquo;<samp>passphrase=<var>string</var></samp>&rsquo;</dt>
<dd><p>HaiCrypt Encryption/Decryption Passphrase string, length
from 10 to 79 characters. The passphrase is the shared
secret between the sender and the receiver. It is used
to generate the Key Encrypting Key using PBKDF2
(Password-Based Key Derivation Function). It is used
only if &lsquo;<samp>pbkeylen</samp>&rsquo; is non-zero. It is used on
the receiver only if the received data is encrypted.
The configured passphrase cannot be recovered (write-only).
</p>
</dd>
<dt>&lsquo;<samp>payload_size=<var>bytes</var></samp>&rsquo;</dt>
<dd><p>Sets the maximum declared size of a packet transferred
during the single call to the sending function in Live
mode. Use 0 if this value isn&rsquo;t used (which is default in
file mode).
Default is -1 (automatic), which typically means MPEG-TS;
if you are going to use SRT
to send any different kind of payload, such as, for example,
wrapping a live stream in very small frames, then you can
use a bigger maximum frame size, though not greater than
1456 bytes.
</p>
</dd>
<dt>&lsquo;<samp>pkt_size=<var>bytes</var></samp>&rsquo;</dt>
<dd><p>Alias for &lsquo;<samp>payload_size</samp>&rsquo;.
</p>
</dd>
<dt>&lsquo;<samp>peerlatency</samp>&rsquo;</dt>
<dd><p>The latency value (as described in &lsquo;<samp>rcvlatency</samp>&rsquo;) that is
set by the sender side as a minimum value for the receiver.
</p>
</dd>
<dt>&lsquo;<samp>pbkeylen=<var>bytes</var></samp>&rsquo;</dt>
<dd><p>Sender encryption key length, in bytes.
Can only be set to 0, 16, 24 or 32.
A non-zero value enables sender encryption.
Not required on the receiver (set to 0);
the key size is obtained from the sender in the HaiCrypt handshake.
Default value is 0.
</p>
</dd>
<dt>&lsquo;<samp>rcvlatency</samp>&rsquo;</dt>
<dd><p>The time that should elapse between the moment a
packet is sent and the moment it is delivered to
the receiver application by the receiving function.
This should be a buffer time large enough to cover
the time spent on sending, an unexpectedly extended RTT,
and the time needed to retransmit a lost UDP
packet. The effective latency value will be the maximum
of this option&rsquo;s value and the value of &lsquo;<samp>peerlatency</samp>&rsquo;
set by the peer side. Before version 1.3.0 this option
was only available as &lsquo;<samp>latency</samp>&rsquo;.
</p>
</dd>
<dt>&lsquo;<samp>recv_buffer_size=<var>bytes</var></samp>&rsquo;</dt>
<dd><p>Set UDP receive buffer size, expressed in bytes.
</p>
</dd>
<dt>&lsquo;<samp>send_buffer_size=<var>bytes</var></samp>&rsquo;</dt>
<dd><p>Set UDP send buffer size, expressed in bytes.
</p>
</dd>
<dt>&lsquo;<samp>rw_timeout</samp>&rsquo;</dt>
<dd><p>Set the raise error timeout for read/write operations.
</p>
<p>This option is only relevant in read mode:
if no data arrives within this time
interval, an error is raised.
</p>
</dd>
<dt>&lsquo;<samp>tlpktdrop=<var>1|0</var></samp>&rsquo;</dt>
<dd><p>Too-late Packet Drop. When enabled on the receiver, it skips
missing packets that have not been delivered in time and
delivers the following packets to the application when
their time-to-play has come. It also sends a fake ACK to
the sender. When enabled on the sender and on the
receiving peer, the sender drops the older packets that
have no chance of being delivered in time. It is
automatically enabled on the sender if the receiver
supports it.
</p>
</dd>
<dt>&lsquo;<samp>sndbuf=<var>bytes</var></samp>&rsquo;</dt>
<dd><p>Set send buffer size, expressed in bytes.
</p>
</dd>
<dt>&lsquo;<samp>rcvbuf=<var>bytes</var></samp>&rsquo;</dt>
<dd><p>Set receive buffer size, expressed in bytes.
</p>
<p>The receive buffer must not be greater than &lsquo;<samp>ffs</samp>&rsquo;.
</p>
</dd>
<dt>&lsquo;<samp>lossmaxttl=<var>packets</var></samp>&rsquo;</dt>
<dd><p>The value up to which the Reorder Tolerance may grow. When
the Reorder Tolerance is &gt; 0, the packet loss report is delayed
until that number of packets come in. The Reorder Tolerance
increases every time a &quot;belated&quot; packet arrives that was
not due to retransmission (that is, when UDP packets tend
to arrive out of order), by the difference between the latest
sequence number and this packet&rsquo;s sequence number, but never by
more than the value of this option. By default it is 0, which means this
mechanism is turned off, and the loss report is always sent
immediately upon experiencing a &quot;gap&quot; in sequence numbers.
</p>
</dd>
<dt>&lsquo;<samp>minversion</samp>&rsquo;</dt>
<dd><p>The minimum SRT version that is required from the peer. A connection
to a peer that does not satisfy the minimum version requirement
will be rejected.
</p>
<p>The version format in hex is 0xXXYYZZ for x.y.z in human readable
form.
</p>
</dd>
<dt>&lsquo;<samp>streamid=<var>string</var></samp>&rsquo;</dt>
<dd><p>A string limited to 512 characters that can be set on the socket prior
to connecting. This stream ID can be retrieved by the
listener side from the socket that is returned from srt_accept and
was connected by a socket with that stream ID set. SRT does not enforce
any special interpretation of the contents of this string.
This option does not make sense in a Rendezvous connection; the result
might simply be that one side overrides the value from the other
side, and it is a matter of luck which one wins.
</p>
</dd>
<dt>&lsquo;<samp>smoother=<var>live|file</var></samp>&rsquo;</dt>
<dd><p>The type of Smoother used for the transmission on that socket; the
Smoother is responsible for the transmission and congestion control. The
Smoother type must be exactly the same on both connecting parties, otherwise
the connection is rejected.
</p>
</dd>
<dt>&lsquo;<samp>messageapi=<var>1|0</var></samp>&rsquo;</dt>
<dd><p>When set, this socket uses the Message API, otherwise it uses the Buffer
API. Note that in Live mode (see &lsquo;<samp>transtype</samp>&rsquo;) only the
Message API is available. In File mode you can choose one of two modes:
</p>
<p>Stream API (default, when this option is false). In this mode you may
send as much data as you wish with one sending instruction, or even use
dedicated functions that read directly from a file. The internal facility
takes care of any speed and congestion control. When receiving, you
can also receive as much data as desired; any data not extracted will be
waiting for the next call. There is no boundary between data portions in
Stream mode.
</p>
<p>Message API. In this mode a single sending instruction passes exactly
one piece of data that has boundaries (a message). Contrary to Live mode,
this message may span multiple UDP packets, and the only size
limitation is that it must fit as a whole in the sending buffer. The
receiver must use a buffer large enough to receive the message,
otherwise the message will not be given up. When the message is not
complete (not all packets received or there was a packet loss) it will
not be given up.
</p>
</dd>
<dt>&lsquo;<samp>transtype=<var>live|file</var></samp>&rsquo;</dt>
<dd><p>Sets the transmission type for the socket; in particular, setting this
option sets multiple other parameters to their default values as required
for a particular transmission type.
</p>
<p>live: Set options as for live transmission. In this mode, a single sending
instruction should carry only as much data as fits in one UDP packet,
limited to the value defined first in &lsquo;<samp>payload_size</samp>&rsquo; (1316 is the
default in this mode). There is no speed control in this mode, only
bandwidth control, if configured, in order not to exceed the bandwidth with
the overhead transmission (retransmitted and control packets).
</p>
<p>file: Set options as for non-live transmission. See &lsquo;<samp>messageapi</samp>&rsquo;
for further explanations.
</p>
</dd>
</dl>
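<p>The following is only an illustrative sketch (the hostname, port and the
passphrase, which must be 10 to 79 characters long, are placeholders; the
protocol requires FFmpeg built with libsrt): start an SRT listener with
<code>ffmpeg</code> and connect to it with <code>ffplay</code> in caller mode,
with encryption enabled on both sides:
</p><div class="example">
<pre class="example">ffmpeg -re -i <var>input</var> -f mpegts "srt://<var>hostname</var>:<var>port</var>?mode=listener&amp;pbkeylen=16&amp;passphrase=<var>secret</var>"
ffplay "srt://<var>hostname</var>:<var>port</var>?mode=caller&amp;pbkeylen=16&amp;passphrase=<var>secret</var>"
</pre></div>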

<p>For more information see: <a href="https://github.com/Haivision/srt">https://github.com/Haivision/srt</a>.
</p>
<a name="srtp"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-srtp">3.32 srtp</a></h2>

<p>Secure Real-time Transport Protocol.
</p>
<p>The accepted options are:
</p><dl compact="compact">
<dt>&lsquo;<samp>srtp_in_suite</samp>&rsquo;</dt>
<dt>&lsquo;<samp>srtp_out_suite</samp>&rsquo;</dt>
<dd><p>Select input and output encoding suites.
</p>
<p>Supported values:
</p><dl compact="compact">
<dt>&lsquo;<samp>AES_CM_128_HMAC_SHA1_80</samp>&rsquo;</dt>
<dt>&lsquo;<samp>SRTP_AES128_CM_HMAC_SHA1_80</samp>&rsquo;</dt>
<dt>&lsquo;<samp>AES_CM_128_HMAC_SHA1_32</samp>&rsquo;</dt>
<dt>&lsquo;<samp>SRTP_AES128_CM_HMAC_SHA1_32</samp>&rsquo;</dt>
</dl>

</dd>
<dt>&lsquo;<samp>srtp_in_params</samp>&rsquo;</dt>
<dt>&lsquo;<samp>srtp_out_params</samp>&rsquo;</dt>
<dd><p>Set input and output encoding parameters, which are expressed by a
base64-encoded representation of a binary block. The first 16 bytes of
this binary block are used as master key, the following 14 bytes are
used as master salt.
</p></dd>
</dl>
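<p>The following is a hedged sketch only (the hostname, port and key material
are placeholders, and it is assumed that the protocol options can be passed on
the <code>ffmpeg</code> command line as for other protocols). The parameter
string is the base64 encoding of the 16-byte master key followed by the
14-byte master salt:
</p><div class="example">
<pre class="example">ffmpeg -re -i <var>input</var> -f rtp_mpegts -srtp_out_suite AES_CM_128_HMAC_SHA1_80 -srtp_out_params <var>base64params</var> srtp://<var>hostname</var>:<var>port</var>
</pre></div>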

<a name="subfile"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-subfile">3.33 subfile</a></h2>

<p>Virtually extract a segment of a file or another stream.
The underlying stream must be seekable.
</p>
<p>Accepted options:
</p><dl compact="compact">
<dt>&lsquo;<samp>start</samp>&rsquo;</dt>
<dd><p>Start offset of the extracted segment, in bytes.
</p></dd>
<dt>&lsquo;<samp>end</samp>&rsquo;</dt>
<dd><p>End offset of the extracted segment, in bytes.
If set to 0, extract till end of file.
</p></dd>
</dl>

<p>Examples:
</p>
<p>Extract a chapter from a DVD VOB file (start and end sectors obtained
externally and multiplied by 2048):
</p><div class="example">
<pre class="example">subfile,,start,153391104,end,268142592,,:/media/dvd/VIDEO_TS/VTS_08_1.VOB
</pre></div>

<p>Play an AVI file directly from a TAR archive:
</p><div class="example">
<pre class="example">subfile,,start,183241728,end,366490624,,:archive.tar
</pre></div>

<p>Play an MPEG-TS file from a start offset to the end of the file:
</p><div class="example">
<pre class="example">subfile,,start,32815239,end,0,,:video.ts
</pre></div>
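<p>As an additional usage sketch (reusing the offsets from the DVD example
above), the subfile URL can be passed directly as an input, quoted here as a
precaution against the shell:
</p><div class="example">
<pre class="example">ffplay "subfile,,start,153391104,end,268142592,,:/media/dvd/VIDEO_TS/VTS_08_1.VOB"
</pre></div>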

<a name="tee"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-tee">3.34 tee</a></h2>

<p>Writes the output to multiple protocols. The individual outputs are separated
by the &quot;|&quot; character.
</p>
<div class="example">
<pre class="example">tee:file://path/to/local/this.avi|file://path/to/local/that.avi
</pre></div>
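<p>For example (a sketch only, with placeholder paths), the tee protocol can be
used as the output URL of an <code>ffmpeg</code> command; both outputs receive
the same muxed byte stream:
</p><div class="example">
<pre class="example">ffmpeg -i <var>input</var> -f <var>format</var> "tee:file://<var>path/to/first.out</var>|file://<var>path/to/second.out</var>"
</pre></div>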

<a name="tcp"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-tcp">3.35 tcp</a></h2>

<p>Transmission Control Protocol.
</p>
<p>The required syntax for a TCP url is:
</p><div class="example">
<pre class="example">tcp://<var>hostname</var>:<var>port</var>[?<var>options</var>]
</pre></div>

<p><var>options</var> contains a list of &amp;-separated options of the form
<var>key</var>=<var>val</var>.
</p>
<p>The list of supported options follows.
</p>
<dl compact="compact">
<dt>&lsquo;<samp>listen=<var>1|0</var></samp>&rsquo;</dt>
<dd><p>Listen for an incoming connection. Default value is 0.
</p>
</dd>
<dt>&lsquo;<samp>timeout=<var>microseconds</var></samp>&rsquo;</dt>
<dd><p>Set raise error timeout, expressed in microseconds.
</p>
<p>This option is only relevant in read mode: if no data arrived in more
than this time interval, raise error.
</p>
</dd>
<dt>&lsquo;<samp>listen_timeout=<var>milliseconds</var></samp>&rsquo;</dt>
<dd><p>Set listen timeout, expressed in milliseconds.
</p>
</dd>
<dt>&lsquo;<samp>recv_buffer_size=<var>bytes</var></samp>&rsquo;</dt>
<dd><p>Set the receive buffer size, expressed in bytes.
</p>
</dd>
<dt>&lsquo;<samp>send_buffer_size=<var>bytes</var></samp>&rsquo;</dt>
<dd><p>Set the send buffer size, expressed in bytes.
</p>
</dd>
<dt>&lsquo;<samp>tcp_nodelay=<var>1|0</var></samp>&rsquo;</dt>
<dd><p>Set TCP_NODELAY to disable Nagle&rsquo;s algorithm. Default value is 0.
</p>
</dd>
<dt>&lsquo;<samp>tcp_mss=<var>bytes</var></samp>&rsquo;</dt>
<dd><p>Set maximum segment size for outgoing TCP packets, expressed in bytes.
</p></dd>
</dl>

<p>The following example shows how to set up a listening TCP connection
with <code>ffmpeg</code>, which is then accessed with <code>ffplay</code>:
</p><div class="example">
<pre class="example">ffmpeg -i <var>input</var> -f <var>format</var> tcp://<var>hostname</var>:<var>port</var>?listen
ffplay tcp://<var>hostname</var>:<var>port</var>
</pre></div>
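<p>As a further sketch (all values are placeholders), several of the options
above can be combined in the URL, for example to bound the wait for an
incoming client to 5 seconds and to disable Nagle&rsquo;s algorithm:
</p><div class="example">
<pre class="example">ffmpeg -i <var>input</var> -f <var>format</var> "tcp://<var>hostname</var>:<var>port</var>?listen=1&amp;listen_timeout=5000&amp;tcp_nodelay=1"
</pre></div>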

<a name="tls"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-tls">3.36 tls</a></h2>

<p>Transport Layer Security (TLS) / Secure Sockets Layer (SSL)
</p>
<p>The required syntax for a TLS/SSL url is:
</p><div class="example">
<pre class="example">tls://<var>hostname</var>:<var>port</var>[?<var>options</var>]
</pre></div>

<p>The following parameters can be set via command line options
(or in code via <code>AVOption</code>s):
</p>
<dl compact="compact">
<dt>&lsquo;<samp>ca_file, cafile=<var>filename</var></samp>&rsquo;</dt>
<dd><p>A file containing certificate authority (CA) root certificates to treat
as trusted. If the linked TLS library contains a default this might not
need to be specified for verification to work, but not all libraries and
setups have defaults built in.
The file must be in OpenSSL PEM format.
</p>
</dd>
<dt>&lsquo;<samp>tls_verify=<var>1|0</var></samp>&rsquo;</dt>
<dd><p>If enabled, try to verify the peer that we are communicating with.
Note, if using OpenSSL, this currently only makes sure that the
peer certificate is signed by one of the root certificates in the CA
database, but it does not validate that the certificate actually
matches the host name we are trying to connect to. (With other backends,
the host name is validated as well.)
</p>
<p>This is disabled by default since it requires a CA database to be
provided by the caller in many cases.
</p>
</dd>
<dt>&lsquo;<samp>cert_file, cert=<var>filename</var></samp>&rsquo;</dt>
<dd><p>A file containing a certificate to use in the handshake with the peer.
(When operating as server, in listen mode, this is more often required
by the peer, while client certificates only are mandated in certain
setups.)
</p>
</dd>
<dt>&lsquo;<samp>key_file, key=<var>filename</var></samp>&rsquo;</dt>
<dd><p>A file containing the private key for the certificate.
</p>
</dd>
<dt>&lsquo;<samp>listen=<var>1|0</var></samp>&rsquo;</dt>
<dd><p>If enabled, listen for connections on the provided port, and assume
the server role in the handshake instead of the client role.
</p>
</dd>
</dl>

<p>Example command lines:
</p>
<p>To create a TLS/SSL server that serves an input stream.
</p>
<div class="example">
<pre class="example">ffmpeg -i <var>input</var> -f <var>format</var> tls://<var>hostname</var>:<var>port</var>?listen&amp;cert=<var>server.crt</var>&amp;key=<var>server.key</var>
</pre></div>

<p>To play back a stream from the TLS/SSL server using <code>ffplay</code>:
</p>
<div class="example">
<pre class="example">ffplay tls://<var>hostname</var>:<var>port</var>
</pre></div>
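<p>As an additional sketch (the CA file name is a placeholder), a client can
request verification of the server certificate by supplying a CA file and
enabling &lsquo;<samp>tls_verify</samp>&rsquo; as command line options:
</p>
<div class="example">
<pre class="example">ffplay -ca_file <var>ca.crt</var> -tls_verify 1 tls://<var>hostname</var>:<var>port</var>
</pre></div>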

<a name="udp"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-udp">3.37 udp</a></h2>

<p>User Datagram Protocol.
</p>
<p>The required syntax for a UDP URL is:
</p><div class="example">
<pre class="example">udp://<var>hostname</var>:<var>port</var>[?<var>options</var>]
</pre></div>

<p><var>options</var> contains a list of &amp;-separated options of the form <var>key</var>=<var>val</var>.
</p>
<p>In case threading is enabled on the system, a circular buffer is used
to store the incoming data, which allows one to reduce loss of data due to
UDP socket buffer overruns. The <var>fifo_size</var> and
<var>overrun_nonfatal</var> options are related to this buffer.
</p>
<p>The list of supported options follows.
</p>
<dl compact="compact">
<dt>&lsquo;<samp>buffer_size=<var>size</var></samp>&rsquo;</dt>
<dd><p>Set the UDP maximum socket buffer size in bytes. This is used to set either
the receive or send buffer size, depending on what the socket is used for.
Default is 64KB.  See also <var>fifo_size</var>.
</p>
</dd>
<dt>&lsquo;<samp>bitrate=<var>bitrate</var></samp>&rsquo;</dt>
<dd><p>If set to nonzero, the output will have the specified constant bitrate if the
input has enough packets to sustain it.
</p>
</dd>
<dt>&lsquo;<samp>burst_bits=<var>bits</var></samp>&rsquo;</dt>
<dd><p>When using <var>bitrate</var> this specifies the maximum number of bits in
packet bursts.
</p>
</dd>
<dt>&lsquo;<samp>localport=<var>port</var></samp>&rsquo;</dt>
<dd><p>Override the local UDP port to bind with.
</p>
</dd>
<dt>&lsquo;<samp>localaddr=<var>addr</var></samp>&rsquo;</dt>
<dd><p>Local IP address of a network interface used for sending packets or joining
multicast groups.
</p>
</dd>
<dt>&lsquo;<samp>pkt_size=<var>size</var></samp>&rsquo;</dt>
<dd><p>Set the size in bytes of UDP packets.
</p>
</dd>
<dt>&lsquo;<samp>reuse=<var>1|0</var></samp>&rsquo;</dt>
<dd><p>Explicitly allow or disallow reusing UDP sockets.
</p>
</dd>
<dt>&lsquo;<samp>ttl=<var>ttl</var></samp>&rsquo;</dt>
<dd><p>Set the time to live value (for multicast only).
</p>
</dd>
<dt>&lsquo;<samp>connect=<var>1|0</var></samp>&rsquo;</dt>
<dd><p>Initialize the UDP socket with <code>connect()</code>. In this case, the
destination address can&rsquo;t be changed with ff_udp_set_remote_url later.
If the destination address isn&rsquo;t known at the start, this option can
be specified in ff_udp_set_remote_url, too.
This allows finding out the source address for the packets with getsockname,
and makes writes return with AVERROR(ECONNREFUSED) if &quot;destination
unreachable&quot; is received.
For receiving, this gives the benefit of only receiving packets from
the specified peer address/port.
</p>
</dd>
<dt>&lsquo;<samp>sources=<var>address</var>[,<var>address</var>]</samp>&rsquo;</dt>
<dd><p>Only receive packets sent from the specified addresses. In case of multicast,
also subscribe to multicast traffic coming from these addresses only.
</p>
</dd>
<dt>&lsquo;<samp>block=<var>address</var>[,<var>address</var>]</samp>&rsquo;</dt>
<dd><p>Ignore packets sent from the specified addresses. In case of multicast, also
exclude the source addresses in the multicast subscription.
</p>
</dd>
<dt>&lsquo;<samp>fifo_size=<var>units</var></samp>&rsquo;</dt>
<dd><p>Set the UDP receiving circular buffer size, expressed as a number of
packets of 188 bytes each. If not specified, it defaults to 7*4096.
</p>
</dd>
<dt>&lsquo;<samp>overrun_nonfatal=<var>1|0</var></samp>&rsquo;</dt>
<dd><p>Survive in case of UDP receiving circular buffer overrun. Default
value is 0.
</p>
</dd>
<dt>&lsquo;<samp>timeout=<var>microseconds</var></samp>&rsquo;</dt>
<dd><p>Set raise error timeout, expressed in microseconds.
</p>
<p>This option is only relevant in read mode: if no data arrived in more
than this time interval, raise error.
</p>
</dd>
<dt>&lsquo;<samp>broadcast=<var>1|0</var></samp>&rsquo;</dt>
<dd><p>Explicitly allow or disallow UDP broadcasting.
</p>
<p>Note that broadcasting may not work properly on networks having
a broadcast storm protection.
</p></dd>
</dl>

<a name="Examples"></a>
<h3 class="subsection"><a href="ffmpeg-protocols.html#toc-Examples">3.37.1 Examples</a></h3>

<ul>
<li>
Use <code>ffmpeg</code> to stream over UDP to a remote endpoint:
<div class="example">
<pre class="example">ffmpeg -i <var>input</var> -f <var>format</var> udp://<var>hostname</var>:<var>port</var>
</pre></div>

</li><li>
Use <code>ffmpeg</code> to stream in mpegts format over UDP using 188-byte
UDP packets and a large input buffer:
<div class="example">
<pre class="example">ffmpeg -i <var>input</var> -f mpegts udp://<var>hostname</var>:<var>port</var>?pkt_size=188&amp;buffer_size=65535
</pre></div>

</li><li>
Use <code>ffmpeg</code> to receive over UDP from a remote endpoint:
<div class="example">
<pre class="example">ffmpeg -i udp://[<var>multicast-address</var>]:<var>port</var> ...
</pre></div>
</li></ul>
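<p>As one more hedged sketch (the addresses, port and buffer size are
illustrative placeholders), a receiver can restrict a multicast subscription to
a single sender and enlarge the receiving circular buffer while tolerating
overruns:
</p><div class="example">
<pre class="example">ffmpeg -i "udp://<var>multicast-address</var>:<var>port</var>?sources=<var>sender-address</var>&amp;fifo_size=100000&amp;overrun_nonfatal=1" ...
</pre></div>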

<a name="unix"></a>
<h2 class="section"><a href="ffmpeg-protocols.html#toc-unix">3.38 unix</a></h2>

<p>Unix local socket.
</p>
<p>The required syntax for a Unix socket URL is:
</p>
<div class="example">
<pre class="example">unix://<var>filepath</var>
</pre></div>

<p>The following parameters can be set via command line options
(or in code via <code>AVOption</code>s):
</p>
<dl compact="compact">
<dt>&lsquo;<samp>timeout</samp>&rsquo;</dt>
<dd><p>Timeout in ms.
</p></dd>
<dt>&lsquo;<samp>listen</samp>&rsquo;</dt>
<dd><p>Create the Unix socket in listening mode.
</p></dd>
</dl>
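<p>For illustration only (the socket path is a placeholder, and it is assumed
that the &lsquo;<samp>listen</samp>&rsquo; option can be set from the command line as
stated above), a listening Unix socket can be created with <code>ffmpeg</code>
and then read from with <code>ffplay</code>:
</p>
<div class="example">
<pre class="example">ffmpeg -i <var>input</var> -f <var>format</var> -listen 1 unix:///tmp/ffmpeg.sock
ffplay unix:///tmp/ffmpeg.sock
</pre></div>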


<a name="See-Also"></a>
<h1 class="chapter"><a href="ffmpeg-protocols.html#toc-See-Also">4 See Also</a></h1>

<p><a href="ffmpeg.html">ffmpeg</a>, <a href="ffplay.html">ffplay</a>, <a href="ffprobe.html">ffprobe</a>,
<a href="libavformat.html">libavformat</a>
</p>

<a name="Authors"></a>
<h1 class="chapter"><a href="ffmpeg-protocols.html#toc-Authors">5 Authors</a></h1>

<p>The FFmpeg developers.
</p>
<p>For details about the authorship, see the Git history of the project
(git://source.ffmpeg.org/ffmpeg), e.g. by typing the command
<code>git log</code> in the FFmpeg source directory, or browsing the
online repository at <a href="http://source.ffmpeg.org">http://source.ffmpeg.org</a>.
</p>
<p>Maintainers for the specific components are listed in the file
&lsquo;<tt>MAINTAINERS</tt>&rsquo; in the source code tree.
</p>

    </div>
  </body>
</html>