<div class="header">
  <div class="headertitle">
<h1>Video Decode and Presentation API for Unix </h1>  </div>
</div>
<div class="contents">
<div class="textblock"><h2><a class="anchor" id="intro"></a>
Introduction</h2>
<p>The Video Decode and Presentation API for Unix (VDPAU) provides a complete solution for decoding, post-processing, compositing, and displaying compressed or uncompressed video streams. These video streams may be combined (composited) with bitmap content, to implement OSDs and other application user interfaces.</p>
<h2><a class="anchor" id="api_partitioning"></a>
API Partitioning</h2>
<p>VDPAU is split into two distinct modules:</p>
<ul>
<li><a class="el" href="group__api__core.html">Core API</a></li>
<li><a class="el" href="group__api__winsys.html">Window System Integration Layer</a></li>
</ul>
<p>The intent is that most VDPAU functionality exists and operates identically across all possible Windowing Systems. This functionality is the <a class="el" href="group__api__core.html">Core API</a>.</p>
<p>However, a small amount of functionality must be included that is tightly coupled to the underlying Windowing System. This functionality is the <a class="el" href="group__api__winsys.html">Window System Integration Layer</a>. Possible examples include:</p>
<ul>
<li>Creation of the initial VDPAU <a class="el" href="group___vdp_device.html">VdpDevice</a> handle, since this act requires intimate knowledge of the underlying Window System, such as a specific display handle or driver identification.</li>
<li>Conversion of VDPAU surfaces to/from underlying Window System surface types, e.g. to allow manipulation of VDPAU-generated surfaces via native Window System APIs.</li>
</ul>
<h2><a class="anchor" id="objects"></a>
Object Types</h2>
<p>VDPAU is roughly object oriented; most functionality is exposed by creating an object (handle) of a certain class (type), then executing various functions against that handle. The set of object classes supported, and their purpose, is discussed below.</p>
<h3><a class="anchor" id="device_type"></a>
Device Type</h3>
<p>A <a class="el" href="group___vdp_device.html">VdpDevice</a> is the root object in VDPAU's object system. The <a class="el" href="group__api__winsys.html">Window System Integration Layer</a> allows creation of a <a class="el" href="group___vdp_device.html">VdpDevice</a> object handle, from which all other API entry points can be retrieved and invoked.</p>
<h3><a class="anchor" id="surface_types"></a>
Surface Types</h3>
<p>A surface stores pixel information. Various types of surface exist for different purposes; a creation sketch follows the list:</p>
<ul>
<li><a class="el" href="group___vdp_video_surface.html">VdpVideoSurface</a>s store decompressed YCbCr video frames in an implementation-defined internal format.</li>
<li><a class="el" href="group___vdp_output_surface.html">VdpOutputSurface</a>s store RGB 4:4:4 data. They are legal render targets for video post-processing and compositing operations.</li>
<li><a class="el" href="group___vdp_bitmap_surface.html">VdpBitmapSurface</a>s store RGB 4:4:4 data. These surfaces are designed to contain read-only bitmap data, to be used for OSD or application UI compositing.</li>
</ul>
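<p>For illustration, the following sketch creates one surface of each type on an existing <a class="el" href="group___vdp_device.html">VdpDevice</a>. It assumes the three creation entry points have already been retrieved via VdpGetProcAddress (see Entry Point Retrieval below); the sizes and formats are arbitrary, the function-pointer variable names are illustrative, and error handling is omitted:</p>
<div class="fragment"><pre class="fragment"> // Sketch: one surface of each type. video_surface_create etc. are
 // function pointers previously obtained through VdpGetProcAddress.
 VdpVideoSurface  video;
 VdpOutputSurface output;
 VdpBitmapSurface bitmap;

 // Decoded YCbCr frames; the internal layout is implementation-defined.
 video_surface_create(device, VDP_CHROMA_TYPE_420, 1920, 1088, &amp;video);

 // RGB 4:4:4 render target for post-processing/compositing output.
 output_surface_create(device, VDP_RGBA_FORMAT_B8G8R8A8, 1920, 1080, &amp;output);

 // Read-only RGB 4:4:4 bitmap data, e.g. pre-rendered UI glyphs;
 // VDP_TRUE hints that the surface is frequently accessed.
 bitmap_surface_create(device, VDP_RGBA_FORMAT_B8G8R8A8, 64, 64,
                       VDP_TRUE, &amp;bitmap);
</pre></div>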
<h3><a class="anchor" id="transfer_types"></a>
Transfer Types</h3>
<p>A data transfer object reads data from a surface (or surfaces), processes it, and writes the result to another surface. Various types of processing are possible:</p>
<ul>
<li><a class="el" href="group___vdp_decoder.html">VdpDecoder</a> objects process compressed video data, and generate decompressed images.</li>
<li><a class="el" href="group___vdp_output_surface.html">VdpOutputSurface</a>s have their own <a class="el" href="group___vdp_output_surface_render.html">rendering functionality</a>.</li>
<li><a class="el" href="group___vdp_video_mixer.html">VdpVideoMixer</a> objects perform video post-processing, de-interlacing, and compositing.</li>
<li><a class="el" href="group___vdp_presentation_queue.html">VdpPresentationQueue</a> is responsible for timestamp-based display of surfaces.</li>
</ul>
<h2><a class="anchor" id="data_flow"></a>
Data Flow</h2>
<p>Compressed video data originates in the application's memory space. This memory is typically obtained using <code>malloc</code>, and filled via regular file or network read system calls. Alternatively, the application may <code>mmap</code> a file.</p>
<p>The compressed data is then processed using a <a class="el" href="group___vdp_decoder.html">VdpDecoder</a>, which will decompress the field or frame, and write the result into a <a class="el" href="group___vdp_video_surface.html">VdpVideoSurface</a>. This action may require reading pixel data from some number of other <a class="el" href="group___vdp_video_surface.html">VdpVideoSurface</a> objects, depending on the type of compressed data and field/frame in question.</p>
<p>If the application wishes to display any form of OSD or user-interface, this must be created in a <a class="el" href="group___vdp_output_surface.html">VdpOutputSurface</a>.</p>
<p>This process begins with the creation of <a class="el" href="group___vdp_bitmap_surface.html">VdpBitmapSurface</a> objects to contain the OSD/UI's static data, such as individual glyphs.</p>
<p><a class="el" href="group___vdp_output_surface.html">VdpOutputSurface</a> <a class="el" href="group___vdp_output_surface_render.html">rendering functionality</a> may be used to composite together various <a class="el" href="group___vdp_bitmap_surface.html">VdpBitmapSurface</a>s and <a class="el" href="group___vdp_output_surface.html">VdpOutputSurface</a>s, into another VdpOutputSurface "VdpOutputSurface".</p>
<p>Once video has been decoded, it must be post-processed. This involves various steps such as color space conversion, de-interlacing, and other video adjustments. This step is performed using a <a class="el" href="group___vdp_video_mixer.html">VdpVideoMixer</a> object. This object can not only perform the aforementioned video post-processing, but also composite the video with a number of <a class="el" href="group___vdp_output_surface.html">VdpOutputSurface</a>s, thus allowing complex user interfaces to be built. The final result is written into another <a class="el" href="group___vdp_output_surface.html">VdpOutputSurface</a>.</p>
<p>Note that at this point, the resultant <a class="el" href="group___vdp_output_surface.html">VdpOutputSurface</a> may be fed back through the above path, either using <a class="el" href="group___vdp_output_surface.html">VdpOutputSurface</a> <a class="el" href="group___vdp_output_surface_render.html">rendering functionality</a>, or as input to the <a class="el" href="group___vdp_video_mixer.html">VdpVideoMixer</a> object.</p>
<p>Finally, the resultant <a class="el" href="group___vdp_output_surface.html">VdpOutputSurface</a> must be displayed on screen. This is the job of the <a class="el" href="group___vdp_presentation_queue.html">VdpPresentationQueue</a> object.</p>
<div align="center">
<img src="vdpau_data_flow.png" alt="vdpau_data_flow.png"/>
</div>
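<p>The complete path may be summarized in pseudo-code (in the style of the multi-threading example later in this document); object creation and most parameters are elided, and the variable names are illustrative:</p>
<div class="fragment"><pre class="fragment"> // Pseudo-code sketch of the data flow described above.
 for (;;) {
     // 1. Decode application-supplied compressed data into a
     //    VdpVideoSurface (may reference other VdpVideoSurfaces).
     VdpDecoderRender(video_surface, ...);

     // 2. Post-process and composite with OSD/UI content into a
     //    VdpOutputSurface.
     VdpVideoMixerRender(video_surface, output_surface, ...);

     // 3. Schedule the result for timestamp-based display.
     VdpPresentationQueueDisplay(output_surface, ...);
 }
</pre></div>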
<h2><a class="anchor" id="entry_point_retrieval"></a>
Entry Point Retrieval</h2>
<p>VDPAU is designed so that multiple implementations can be used without application changes. For example, VDPAU could be hosted on X11, or via direct GPU access.</p>
<p>The key technology behind this is the use of function pointers and a "get proc address" style API for all entry points. Put another way, functions are not called directly via global symbols set up by the linker, but rather through pointers.</p>
<p>In practical terms, the <a class="el" href="group__api__winsys.html">Window System Integration Layer</a> provides factory functions which not only create and return <a class="el" href="group___vdp_device.html">VdpDevice</a> objects, but also a function pointer to a <a class="el" href="group__get__proc__address.html#gae722d7342b6788429c07d125366e37da">VdpGetProcAddress</a> function, through which all entry point function pointers will be retrieved.</p>
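<p>For example, on an X11-based system the factory function is <code>vdp_device_create_x11</code>, declared in <code>vdpau/vdpau_x11.h</code>. The following minimal sketch (error handling omitted) creates a device and retrieves one Core API entry point:</p>
<div class="fragment"><pre class="fragment"> #include &lt;vdpau/vdpau.h&gt;
 #include &lt;vdpau/vdpau_x11.h&gt;

 Display           *dpy = XOpenDisplay(NULL);
 VdpDevice          device;
 VdpGetProcAddress *get_proc_address;

 // The winsys factory returns both the device handle and the
 // VdpGetProcAddress function pointer.
 vdp_device_create_x11(dpy, DefaultScreen(dpy), &amp;device, &amp;get_proc_address);

 // All further entry points are retrieved by ID, never via linker symbols.
 VdpVideoSurfaceCreate *video_surface_create;
 get_proc_address(device, VDP_FUNC_ID_VIDEO_SURFACE_CREATE,
                  (void **)&amp;video_surface_create);
</pre></div>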
<h3><a class="anchor" id="entry_point_philosophy"></a>
Philosophy</h3>
<p>It is entirely possible to envisage a simpler scheme whereby such function pointers are hidden. That is, the application would link against a wrapper library that exposed "real" functions. The application would then call such functions directly, by symbol, like any other function. The wrapper library would handle loading the appropriate back-end, and implementing a similar "get proc address" scheme internally.</p>
<p>However, the above scheme does not work well in the context of separated <a class="el" href="group__api__core.html">Core API</a> and <a class="el" href="group__api__winsys.html">Window System Integration Layer</a>. In this scenario, one would require a separate wrapper library per Window System, since each Window System would have a different function name and prototype for the main factory function. If an application then wanted to be Window System agnostic (making final determination at run-time via some form of plugin), it may then need to link against two wrapper libraries, which would cause conflicts for all symbols other than the main factory function.</p>
<p>Another disadvantage of the wrapper library approach is the extra level of function call required; the wrapper library would internally implement the existing "get proc address" and "function pointer" style dispatch anyway. Exposing this directly to the application is slightly more efficient.</p>
<h2><a class="anchor" id="threading"></a>
Multi-threading</h2>
<p>All VDPAU functionality is fully thread-safe; any number of threads may call into any VDPAU functions at any time. VDPAU may not be called from signal-handlers.</p>
<p>Note, however, that this simply guarantees that internal VDPAU state will not be corrupted by thread usage, and that crashes and deadlocks will not occur. Completely arbitrary thread usage may not generate the results that an application desires. In particular, care must be taken when multiple threads are performing operations on the same VDPAU objects.</p>
<p>VDPAU implementations guarantee correct flow of surface content through the rendering pipeline, but only when function calls that read from or write to a surface return to the caller prior to any thread calling any other function(s) that read from or write to the surface. Invoking multiple reads from a surface in parallel is OK.</p>
<p>Note that this restriction is placed upon VDPAU function invocations, and specifically not upon any back-end hardware's physical rendering operations. VDPAU implementations are expected to internally synchronize such hardware operations.</p>
<p>In a single-threaded application, the above restriction comes naturally; each function call completes before it is possible to begin a new function call.</p>
<p>In a multi-threaded application, threads may need to be synchronized. For example, consider the situation where:</p>
<ul>
<li>Thread 1 is parsing compressed video data, passing it through a <a class="el" href="group___vdp_decoder.html">VdpDecoder</a> object, and filling a ring-buffer of <a class="el" href="group___vdp_video_surface.html">VdpVideoSurface</a>s.</li>
<li>Thread 2 is consuming those <a class="el" href="group___vdp_video_surface.html">VdpVideoSurface</a>s, and using a <a class="el" href="group___vdp_video_mixer.html">VdpVideoMixer</a> to process them and composite them with UI.</li>
</ul>
<p>In this case, the threads must synchronize to ensure that thread 1's call to <a class="el" href="group___vdp_decoder.html#gae1d7dacb05aa8badbc9c38018e2e36c9">VdpDecoderRender</a> has returned prior to thread 2's call(s) to <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> that use that specific surface. This could be achieved using the following pseudo-code:</p>
<div class="fragment"><pre class="fragment"> Queue&lt;VdpVideoSurface&gt; q_full_surfaces;
 Queue&lt;VdpVideoSurface&gt; q_empty_surfaces;

 thread_1() {
     <span class="keywordflow">for</span> (;;) {
         <a class="code" href="group___vdp_video_surface.html#gab51ee52662d4a785677a49bd1b308825" title="An opaque handle representing a VdpVideoSurface object.">VdpVideoSurface</a> s = q_empty_surfaces.get();
         <span class="comment">// Parse compressed stream here</span>
         <a class="code" href="group___vdp_decoder.html#gae1d7dacb05aa8badbc9c38018e2e36c9" title="Decode a compressed field/frame and render the result into a VdpVideoSurface.">VdpDecoderRender</a>(s, ...);
         q_full_surfaces.put(s);
     }
 }

 <span class="comment">// This would need to be more complex if</span>
 <span class="comment">// VdpVideoMixerRender were to be provided with more</span>
 <span class="comment">// than one field/frame at a time.</span>
 thread_2() {
     <span class="keywordflow">for</span> (;;) {
         <span class="comment">// Possibly, other rendering operations to mixer</span>
         <span class="comment">// layer surfaces here.</span>
         <a class="code" href="group___vdp_output_surface.html#ga39f4859fb6b35dd3172c541f7613bf15" title="An opaque handle representing a VdpOutputSurface object.">VdpOutputSurface</a> t = ...;
         <a class="code" href="group___vdp_presentation_queue.html#ga9177f8fe368ff95863ac6304bd2f106a" title="Wait for a surface to finish being displayed.">VdpPresentationQueueBlockUntilSurfaceIdle</a>(t);
         <a class="code" href="group___vdp_video_surface.html#gab51ee52662d4a785677a49bd1b308825" title="An opaque handle representing a VdpVideoSurface object.">VdpVideoSurface</a> s = q_full_surfaces.get();
         <a class="code" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db" title="Perform a video post-processing and compositing operation.">VdpVideoMixerRender</a>(s, t, ...);
         q_empty_surfaces.put(s);
         <span class="comment">// Possibly, other rendering operations to &quot;t&quot; here</span>
         <a class="code" href="group___vdp_presentation_queue.html#ga5bd61ca8ef5d1bc54ca6921aa57f835a" title="Enter a surface into the presentation queue.">VdpPresentationQueueDisplay</a>(t, ...);
     }
 }
</pre></div><p>Finally, note that VDPAU makes no guarantees regarding any level of parallelism in any given implementation. Put another way, use of multi-threading is not guaranteed to yield any performance gain, and in theory could even slightly reduce performance due to threading/synchronization overhead.</p>
<p>However, the intent of the threading requirements is to allow for e.g. video decoding and video mixer operations to proceed in parallel in hardware. Given a (presumably multi-threaded) application that kept each portion of the hardware busy, this would yield a performance increase.</p>
<h2><a class="anchor" id="endianness"></a>
Surface Endianness</h2>
<p>When dealing with surface content, i.e. the input/output of Put/GetBits functions, applications must take care to access memory in the correct fashion, so as to avoid endianness issues.</p>
<p>By established convention in the 3D graphics world, RGBA data is defined as an array of 32-bit pixels containing packed RGBA components, not as an array of bytes or interleaved RGBA components. VDPAU follows this convention. As such, applications are expected to access such surfaces as arrays of 32-bit components (i.e. using a 32-bit pointer), and not as interleaved arrays of 8-bit components (i.e. using an 8-bit pointer). Deviation from this convention will lead to endianness issues, unless appropriate care is taken.</p>
<p>The same convention is followed for some packed YCbCr formats such as <a class="el" href="group__misc__types.html#ga0c7b86dab9d96b1aba96bca4cf048128">VDP_YCBCR_FORMAT_Y8U8V8A8</a>; i.e. they are considered arrays of 32-bit pixels, and hence should be accessed as such.</p>
<p>For YCbCr formats with chroma decimation and/or planar formats, however, this convention is awkward. Therefore, formats such as <a class="el" href="group__misc__types.html#gab7550cf65e6d46f4fd7a1e322372e207">VDP_YCBCR_FORMAT_NV12</a> are defined as arrays of (potentially interleaved) byte-sized components. Hence, applications should manipulate such data 8-bits at a time, using 8-bit pointers.</p>
<p>Note that one common usage for the input/output of Put/GetBits APIs is file I/O. Typical file I/O APIs treat all memory as a simple array of 8-bit values. This violates the rule requiring surface data to be accessed in its true native format. As such, applications may be required to solve endianness issues. Possible solutions include:</p>
<ul>
<li>Authoring static UI data files according to the endianness of the target execution platform.</li>
<li>Conditionally byte-swapping Put/GetBits data buffers at run-time based on execution platform.</li>
</ul>
<p>Note: Complete details regarding each surface format's precise pixel layout are included with the documentation of each surface type. For example, see <a class="el" href="group__misc__types.html#ga2659adf5d019acade5516ea35e4eb5ad">VDP_RGBA_FORMAT_B8G8R8A8</a>.</p>
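<p>As a sketch, an application filling a buffer destined for a <code>VDP_RGBA_FORMAT_B8G8R8A8</code> surface might proceed as follows; the component bit positions assumed here (B in the least significant byte) should be confirmed against that format's documentation:</p>
<div class="fragment"><pre class="fragment"> #include &lt;stdint.h&gt;

 // Access packed RGBA data as 32-bit words, never as individual bytes;
 // the word-wise view is what makes the code endian-safe.
 uint32_t pixels[64 * 64];
 for (int i = 0; i &lt; 64 * 64; ++i) {
     pixels[i] = ((uint32_t)0xFF &lt;&lt; 24) |  // A
                 ((uint32_t)0x80 &lt;&lt; 16) |  // R
                 ((uint32_t)0x40 &lt;&lt;  8) |  // G
                 ((uint32_t)0x20 &lt;&lt;  0);   // B
 }

 // By contrast, VDP_YCBCR_FORMAT_NV12 data is defined byte-wise and
 // should be traversed through a uint8_t pointer.
</pre></div>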
<h2><a class="anchor" id="video_decoder_usage"></a>
Video Decoder Usage</h2>
<p>VDPAU is a slice-level API. Put another way, VDPAU implementations accept "slice" data from the bitstream, and perform all required processing of those slices (e.g. VLD decoding, IDCT, motion compensation, in-loop deblocking, etc.).</p>
<p>The client application is responsible for:</p>
<ul>
<li>Extracting the slices from the bitstream (e.g. parsing/demultiplexing container formats, scanning the data to determine slice start positions and slice sizes).</li>
<li>Parsing various bitstream headers/structures (e.g. sequence header, sequence parameter set, picture parameter set, entry point structures, etc.). Various fields from the parsed header structures need to be provided to VDPAU alongside the slice bitstream in a "picture info" structure.</li>
<li>Surface management (e.g. H.264 DPB processing, display re-ordering).</li>
</ul>
<p>It is recommended that applications pass solely the slice data to VDPAU; specifically, any header data structures should be excluded from the portion of the bitstream passed to VDPAU. VDPAU implementations must operate correctly if non-slice data is included, at least for formats employing start codes to delimit slice data. However, any extra data may need to be uploaded to the hardware for parsing, thus lowering performance; in the worst case, it may even overflow internal buffers that are sized solely for slice data.</p>
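<p>A minimal sketch of slice submission follows; <code>decoder_render</code> is a function pointer previously retrieved for <code>VDP_FUNC_ID_DECODER_RENDER</code>, and <code>info</code> stands for the picture info structure appropriate to the decoder profile (e.g. <code>VdpPictureInfoH264</code>), filled in from the parsed headers. Error handling is omitted:</p>
<div class="fragment"><pre class="fragment"> // One picture's worth of slice data in a single bitstream buffer;
 // slice_data/slice_data_size are illustrative application variables.
 VdpBitstreamBuffer buffer;
 buffer.struct_version  = VDP_BITSTREAM_BUFFER_VERSION;
 buffer.bitstream       = slice_data;       // slice bytes only
 buffer.bitstream_bytes = slice_data_size;  // header structures excluded

 decoder_render(decoder, target_video_surface,
                (VdpPictureInfo const *)&amp;info, 1, &amp;buffer);
</pre></div>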
<p>The exact data that should be passed to VDPAU is detailed below for each supported format:</p>
<h3><a class="anchor" id="bitstream_mpeg1_mpeg2"></a>
MPEG-1 and MPEG-2</h3>
<p>Include all slices beginning with start codes 0x00000101 through 0x000001AF. The slice start code must be included for all slices.</p>
<h3><a class="anchor" id="bitstream_h264"></a>
H.264</h3>
<p>Include all NALs with nal_unit_type of 1 or 5 (coded slice of non-IDR/IDR picture respectively). The complete slice start code (including 0x000001 prefix) must be included for all slices, even when the prefix is not included in the bitstream.</p>
<p>Note that if desired:</p>
<ul>
<li>The slice start code prefix may be included in a separate bitstream buffer array entry from the actual slice data extracted from the bitstream, as sketched below.</li>
<li>Multiple bitstream buffer array entries (e.g. one per slice) may point at the same physical data storage for the slice start code prefix.</li>
</ul>
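<p>A sketch of this approach, in which two buffer array entries share one static copy of the prefix (<code>nal_data</code>, <code>nal_size</code>, <code>h264_info</code> and <code>decoder_render</code> are illustrative):</p>
<div class="fragment"><pre class="fragment"> static uint8_t const start_code[3] = { 0x00, 0x00, 0x01 };

 VdpBitstreamBuffer buffers[2];
 buffers[0].struct_version  = VDP_BITSTREAM_BUFFER_VERSION;
 buffers[0].bitstream       = start_code;   // shared prefix storage
 buffers[0].bitstream_bytes = sizeof(start_code);
 buffers[1].struct_version  = VDP_BITSTREAM_BUFFER_VERSION;
 buffers[1].bitstream       = nal_data;     // slice NAL from the stream
 buffers[1].bitstream_bytes = nal_size;

 decoder_render(decoder, target_video_surface,
                (VdpPictureInfo const *)&amp;h264_info, 2, buffers);
</pre></div>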
<h3><a class="anchor" id="bitstream_vc1_sp_mp"></a>
VC-1 Simple and Main Profile</h3>
<p>VC-1 simple/main profile bitstreams always consist of a single slice per picture, and do not use start codes to delimit pictures. Instead, the container format must indicate where each picture begins/ends.</p>
<p>As such, no slice start codes should be included in the data passed to VDPAU; simply pass in the exact data from the bitstream.</p>
<p>Header information contained in the bitstream should be parsed by the application and passed to VDPAU using the "picture info" data structure; this header information explicitly must not be included in the bitstream data passed to VDPAU for this encoding format.</p>
<h3><a class="anchor" id="bitstream_vc1_ap"></a>
VC-1 Advanced Profile</h3>
<p>Include all slices beginning with start codes 0x0000010D (frame), 0x0000010C (field) or 0x0000010B (slice). The slice start code should be included in all cases.</p>
<p>Some VC-1 advanced profile streams do not contain slice start codes; again, the container format must indicate where picture data begins and ends. In this case, pictures are assumed to be progressive and to contain a single slice. It is highly recommended that applications detect this condition, and add the missing start codes to the bitstream passed to VDPAU. However, VDPAU implementations must allow bitstreams with missing start codes, and act as if a 0x0000010D (frame) start code had been present.</p>
<p>Note that pictures containing multiple slices, or interlace streams, must contain a complete set of slice start codes in the original bitstream; without them, it is not possible to correctly parse and decode the stream.</p>
<p>The bitstream passed to VDPAU should contain all original emulation prevention bytes present in the original bitstream; do not remove these from the bitstream.</p>
<h3><a class="anchor" id="bitstream_mpeg4part2"></a>
MPEG-4 Part 2 and DivX</h3>
<p>Include all slices beginning with start code 0x000001B6. The slice start code must be included for all slices.</p>
<h2><a class="anchor" id="video_mixer_usage"></a>
Video Mixer Usage</h2>
<h3><a class="anchor" id="video_surface_content"></a>
VdpVideoSurface Content</h3>
<p>Each <a class="el" href="group___vdp_video_surface.html">VdpVideoSurface</a> is expected to contain an entire frame's-worth of data, irrespective of whether an interlaced of progressive sequence is being decoded.</p>
<p>Depending on the exact encoding structure of the compressed video stream, the application may need to call <a class="el" href="group___vdp_decoder.html#gae1d7dacb05aa8badbc9c38018e2e36c9">VdpDecoderRender</a> twice to fill a single <a class="el" href="group___vdp_video_surface.html">VdpVideoSurface</a>. When the stream contains an encoded progressive frame, or a "frame coded" interlaced field-pair, a single <a class="el" href="group___vdp_decoder.html#gae1d7dacb05aa8badbc9c38018e2e36c9">VdpDecoderRender</a> call will fill the entire surface. When the stream contains separately encoded interlaced fields, two <a class="el" href="group___vdp_decoder.html#gae1d7dacb05aa8badbc9c38018e2e36c9">VdpDecoderRender</a> calls will be required; one for the top field, and one for the bottom field.</p>
<p>Implementation note: When <a class="el" href="group___vdp_decoder.html#gae1d7dacb05aa8badbc9c38018e2e36c9">VdpDecoderRender</a> renders an interlaced field, this operation must not disturb the content of the other field in the surface.</p>
<h3><a class="anchor" id="vm_surf_list"></a>
VdpVideoMixer Surface List</h3>
<p>A video stream is logically composed of a sequence of fields. An example is shown below, in display order, assuming top field first:</p>
<pre>t0 b0 t1 b1 t2 b2 t3 b3 t4 b4 t5 b5 t6 b6 t7 b7 t8 b8 t9 b9</pre><p>The canonical usage is to call <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> once per decoded field, in display order, to yield one post-processed frame for display.</p>
<p>For each call to <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a>, the field to be processed should be provided as the <b>video_surface_current</b> parameter.</p>
<p>To enable operation of advanced de-interlacing algorithms and/or post-processing algorithms, some past and/or future surfaces should be provided as context. These are provided in the <b>video_surface_past</b> and <b>video_surface_future</b> lists. In general, these lists may contain any number of surfaces. Specific implementations may have specific requirements determining the minimum required number of surfaces for optimal operation, and the maximum number of useful surfaces, beyond which surfaces are not used. It is recommended that in all cases other than plain bob/weave, at least 2 past and 1 future field be provided.</p>
<p>Note that it is entirely possible, in general, for any of the <a class="el" href="group___vdp_video_mixer.html">VdpVideoMixer</a> post-processing steps other than de-interlacing to require access to multiple input fields/frames; for example, a motion-sensitive noise-reduction algorithm.</p>
<p>For example, when processing field t4, the <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> parameters may contain the following values, if the application chose to provide 3 fields of context for both the past and future:</p>
<pre>
 current_picture_structure: VDP_VIDEO_MIXER_PICTURE_STRUCTURE_TOP_FIELD
 past:    [b3, t3, b2]
 current: t4
 future:  [b4, t5, b5]
 </pre><p>Note that for both the past/future lists, array index 0 represents the field temporally closest to current, in display order.</p>
<p>The <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> parameter <b>current_picture_structure</b> applies to <b>video_surface_current</b>. The picture structure for the other surfaces will be automatically derived from that for the current picture. The derivation algorithm is extremely simple; the concatenated list past/current/future is simply assumed to have an alternating top/bottom pattern throughout.</p>
<p>Continuing the example above, subsequent calls to <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> would provide the following sets of parameters:</p>
<pre>
 current_picture_structure: VDP_VIDEO_MIXER_PICTURE_STRUCTURE_BOTTOM_FIELD
 past:    [t4, b3, t3]
 current: b4
 future:  [t5, b5, t6]
 </pre><p>then:</p>
<pre>
 current_picture_structure: VDP_VIDEO_MIXER_PICTURE_STRUCTURE_TOP_FIELD
 past:    [b4, t4, b3]
 current: t5
 future:  [b5, t6, b6]
 </pre><p>In other words, the concatenated list of past/current/future frames simply forms a window that slides through the sequence of decoded fields.</p>
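<p>As a concrete sketch, the first of the three calls above (processing t4) might be issued as follows; the surface variables, the <code>mixer_render</code> function pointer, and the use of NULL rectangles (meaning the entire surface) are illustrative, and error handling is omitted:</p>
<div class="fragment"><pre class="fragment"> VdpVideoSurface past[3]   = { b3, t3, b2 };  // index 0 closest to current
 VdpVideoSurface future[3] = { b4, t5, b5 };

 mixer_render(mixer,
              VDP_INVALID_HANDLE, NULL,  // no background surface
              VDP_VIDEO_MIXER_PICTURE_STRUCTURE_TOP_FIELD,
              3, past,                   // video_surface_past
              t4,                        // video_surface_current
              3, future,                 // video_surface_future
              NULL,                      // video_source_rect: full surface
              out_surface, NULL, NULL,   // destination surface and rects
              0, NULL);                  // no additional layers
</pre></div>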
<p>It is syntactically legal for an application to choose not to provide a particular entry in the past or future lists. In this case, the "slot" in the surface list must be filled with the special value <a class="el" href="group__misc__types.html#gad58c5db62f871890503d07505253dd18">VDP_INVALID_HANDLE</a>, to explicitly indicate that the picture is missing; do not simply shuffle other surfaces together to fill in the gap. Note that entries should only be omitted under special circumstances, such as failed decode due to bitstream error during picture header parsing, since missing entries will typically cause advanced de-interlacing algorithms to experience significantly degraded operation.</p>
<p>Specific examples for different de-interlacing types are presented below.</p>
<h3><a class="anchor" id="deint_weave"></a>
Weave De-interlacing</h3>
<p>Weave de-interlacing is the act of interleaving the lines of two temporally adjacent fields to form a frame for display.</p>
<p>To disable de-interlacing for progressive streams, simply specify <b>current_picture_structure</b> as <a class="el" href="group___vdp_video_mixer.html#ggac43c37528fbd62a6604171680a531d61aa1947f59b5a69c4960912c8c0105e6ed">VDP_VIDEO_MIXER_PICTURE_STRUCTURE_FRAME</a>; no de-interlacing will be applied.</p>
<p>Weave de-interlacing for interlaced streams is identical to disabling de-interlacing, as described immediately above, because each <a class="el" href="group___vdp_video_surface.html">VdpVideoSurface</a> already contains an entire frame's worth (i.e. two fields) of picture data.</p>
<p>Inverse telecine is disabled when using weave de-interlacing.</p>
<p>Weave de-interlacing produces one output frame for each input frame. The application should make one <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> call per pair of decoded fields, or per decoded frame.</p>
<p>Weave de-interlacing requires no entries in the past/future lists.</p>
<p>All implementations must support weave de-interlacing.</p>
<h3><a class="anchor" id="deint_bob"></a>
Bob De-interlacing</h3>
<p>Bob de-interlacing is the act of vertically scaling a single field to the size of a single frame.</p>
<p>To achieve bob de-interlacing, simply provide a single field as <b>video_surface_current</b>, and set <b>current_picture_structure</b> appropriately, to indicate whether a top or bottom field was provided.</p>
<p>Inverse telecine is disabled when using bob de-interlacing.</p>
<p>Bob de-interlacing produces one output frame for each input field. The application should make one <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> call per decoded field.</p>
<p>Bob de-interlacing requires no entries in the past/future lists.</p>
<p>Bob de-interlacing is the default when no advanced method is requested and enabled. Advanced de-interlacing algorithms may fall back to bob e.g. when required past/future fields are missing.</p>
<p>All implementations must support bob de-interlacing.</p>
<h3><a class="anchor" id="deint_adv"></a>
Advanced De-interlacing</h3>
<p>Operation of both temporal and temporal-spatial de-interlacing is identical; the only difference is the internal processing the algorithm performs in generating the output frame.</p>
<p>These algorithms use various advanced processing on the pixels of both the current and various past/future fields in order to determine how best to de-interlace individual portions of the image.</p>
<p>Inverse telecine may be enabled when using advanced de-interlacing.</p>
<p>Advanced de-interlacing produces one output frame for each input field. The application should make one <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> call per decoded field.</p>
<p>Advanced de-interlacing requires entries in the past/future lists.</p>
<p>Availability of advanced de-interlacing algorithms is implementation dependent.</p>
<h3><a class="anchor" id="deint_rate"></a>
De-interlacing Rate</h3>
<p>For all de-interlacing algorithms except weave, a choice may be made to call <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> for either each decoded field, or every second decoded field.</p>
<p>If <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> is called for every decoded field, the generated post-processed frame rate is equal to the decoded field rate. Put another way, the generated post-processed nominal field rate is equal to 2x the decoded field rate. This is standard practice.</p>
<p>If <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> is called for every second decoded field (say every top field), the generated post-processed frame rate is half to the decoded field rate. This mode of operation is thus referred to as "half-rate".</p>
<p>Implementations may choose whether to support half-rate de-interlacing or not. Regular full-rate de-interlacing should be supported for any advanced de-interlacing algorithm that is offered.</p>
<p>The descriptions of de-interlacing algorithms above assume that regular (not half-rate) operation is being performed, when detailing the number of VdpVideoMixerRender calls.</p>
<p>Recall that the concatenation of past/current/future surface lists simply forms a window into the stream of decoded fields. To achieve standard de-interlacing, the window is slid through the list of decoded fields one field at a time, and a call is made to <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> for each movement of the window. To achieve half-rate de-interlacing, the window is slid through the list of decoded fields two fields at a time, and a call is made to <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> for each movement of the window.</p>
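<p>In pseudo-code, the only difference between the two rates is the window step (a sketch; surface bookkeeping is elided):</p>
<div class="fragment"><pre class="fragment"> // step == 1: full-rate; step == 2: half-rate.
 for (i = 0; i &lt; num_fields; i += step) {
     // past:    fields[i-1], fields[i-2], ...
     // current: fields[i]
     // future:  fields[i+1], fields[i+2], ...
     VdpVideoMixerRender(..., fields[i], ...);
 }
</pre></div>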
<h3><a class="anchor" id="invtc"></a>
Inverse Telecine</h3>
<p>Assuming the implementation supports it, inverse telecine may be enabled alongside any advanced de-interlacing algorithm. Inverse telecine is never active for bob or weave.</p>
<p>Operation of <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> with inverse telecine active is identical to the basic operation mechanisms described above in every way; all inverse telecine processing is performed internally to the <a class="el" href="group___vdp_video_mixer.html">VdpVideoMixer</a>.</p>
<p>In particular, there is no way for <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> to indicate when identical input fields have been observed, and consequently identical output frames may have been produced.</p>
<p>De-interlacing (and inverse telecine) may be applied to streams that are marked as being progressive. This will allow detection of, and correct de-interlacing of, mixed interlace/progressive streams, bad edits, etc. To implement de-interlacing/inverse-telecine on progressive material, simply treat the stream of decoded frames as a stream of decoded fields, apply any telecine flags (see the next section), and then apply de-interlacing to those fields as described above.</p>
<p>Implementations are free to determine whether inverse telecine operates in conjunction with half-rate de-interlacing or not. It should always operate with regular de-interlacing, when advertised.</p>
<h3><a class="anchor" id="tcflags"></a>
Telecine (Pull-Down) Flags</h3>
<p>Some media delivery formats, e.g. DVD-Video, include flags that are intended to modify the decoded field sequence before display. This allows e.g. 24p content to be encoded as 48i, which saves space relative to a 60i encoded stream, yet still be displayed at 60i to match target consumer display equipment.</p>
<p>If the inverse telecine option is not activated in the <a class="el" href="group___vdp_video_mixer.html">VdpVideoMixer</a>, these flags should be ignored, and the decoded fields passed directly to <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a> as detailed above.</p>
<p>However, to make full use of the inverse telecine feature, these flags should be applied to the field stream, yielding another field stream with some repeated fields, before passing the field stream to <a class="el" href="group___vdp_video_mixer.html#ga62bf3bf8c5f01322a03b07065c5ea3db">VdpVideoMixerRender</a>. In this scenario, the sliding window mentioned in the descriptions above applies to the field stream after application of flags.</p>
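<p>In pseudo-code, applying MPEG-2-style <code>top_field_first</code>/<code>repeat_first_field</code> flags to produce the expanded field stream might look like this (a sketch; the <code>emit_field</code> helper is illustrative):</p>
<div class="fragment"><pre class="fragment"> for (each decoded frame f) {
     if (f.top_field_first) {
         emit_field(f.top);
         emit_field(f.bottom);
         if (f.repeat_first_field)
             emit_field(f.top);     // repeated field
     } else {
         emit_field(f.bottom);
         emit_field(f.top);
         if (f.repeat_first_field)
             emit_field(f.bottom);  // repeated field
     }
 }
 // The emitted fields then feed the past/current/future window.
</pre></div>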
<h2><a class="anchor" id="extending"></a>
Extending the API</h2>
<h3><a class="anchor" id="extend_enums"></a>
Enumerations and Other Constants</h3>
<p>VDPAU defines a number of enumeration types.</p>
<p>When modifying VDPAU, existing enumeration constants must continue to exist (although they may be deprecated), and do so in the existing order.</p>
<p>The above discussion applies equally to enumerations defined "manually" via pre-processor macros.</p>
<h3><a class="anchor" id="extend_structs"></a>
Structures</h3>
<p>In most cases, VDPAU includes no provision for modifying existing structure definitions, although they may be deprecated.</p>
<p>New structures may be created, together with new API entry points or feature/attribute/parameter values, to expose new functionality.</p>
<p>A few structures are considered plausible candidates for future extension. Such structures include a version number as the first field, indicating the exact layout of the client-provided data. Such structures may only be modified by adding new fields to the end of the structure, so that the original structure definition is completely compatible with a leading subset of fields of the extended structure.</p>
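<p>For instance, <code>VdpBitstreamBuffer</code> carries such a <code>struct_version</code> field. The convention can be illustrated with a hypothetical structure (the names below are invented for this example):</p>
<div class="fragment"><pre class="fragment"> // Hypothetical illustration of the versioned-struct convention.
 typedef struct {
     uint32_t struct_version;  // always first; identifies the layout
     uint32_t original_field;
 } VdpExampleInfo;             // struct_version == 1

 typedef struct {
     uint32_t struct_version;  // == 2 for the extended layout
     uint32_t original_field;  // leading subset is unchanged
     uint32_t new_field;       // new fields appended at the end only
 } VdpExampleInfo2;
</pre></div>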
<h3><a class="anchor" id="extend_functions"></a>
Functions</h3>
<p>Existing functions may not be modified, although they may be deprecated.</p>
<p>New functions may be added at will. Note the enumeration requirements when modifying the enumeration that defines the list of entry points.</p>
<h2><a class="anchor" id="preemption_note"></a>
Display Preemption</h2>
<p>Please note that the display may be preempted away from VDPAU at any time. See <a class="el" href="group__display__preemption.html">Display Preemption</a> for more details.</p>
<h3><a class="anchor" id="trademarks"></a>
Trademarks</h3>
<p>VDPAU is a trademark of NVIDIA Corporation. You may freely use the VDPAU trademark, as long as trademark ownership is attributed to NVIDIA Corporation. </p>
</div></div>
<hr class="footer"/><address class="footer"><small>Generated on Tue Feb 8 2011 for VDPAU by&#160;
<a href="http://www.doxygen.org/index.html">
<img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.7.3 </small></address>
</body>
</html>