<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta name="generator" content=
"HTML Tidy for Linux (vers 1 September 2005), see www.w3.org">
<meta http-equiv="Content-Type" content=
"text/html; charset=us-ascii">
<title>Appendix&nbsp;B.&nbsp;X Config Options</title>
<meta name="generator" content="DocBook XSL Stylesheets V1.68.1">
<link rel="start" href="index.html" title=
"NVIDIA Accelerated Linux Graphics Driver README and Installation Guide">
<link rel="up" href="part-02.html" title=
"Part&nbsp;II.&nbsp;Appendices">
<link rel="prev" href="appendix-a.html" title=
"Appendix&nbsp;A.&nbsp;Supported NVIDIA GPU Products">
<link rel="next" href="appendix-c.html" title=
"Appendix&nbsp;C.&nbsp;Display Device Names">
</head>
<body>
<div class="navheader">
<table width="100%" summary="Navigation header">
<tr>
<th colspan="3" align="center">Appendix&nbsp;B.&nbsp;X Config
Options</th>
</tr>
<tr>
<td width="20%" align="left"><a accesskey="p" href=
"appendix-a.html">Prev</a>&nbsp;</td>
<th width="60%" align="center">Part&nbsp;II.&nbsp;Appendices</th>
<td width="20%" align="right">&nbsp;<a accesskey="n" href=
"appendix-c.html">Next</a></td>
</tr>
</table>
<hr></div>
<div class="appendix" lang="en">
<div class="titlepage">
<div>
<div>
<h2 class="title"><a name="xconfigoptions" id=
"xconfigoptions"></a>Appendix&nbsp;B.&nbsp;X Config Options</h2>
</div>
</div>
</div>
<p>The following driver options are supported by the NVIDIA X
driver. They may be specified either in the Screen or Device
sections of the X config file.</p>
<div class="variablelist">
<p class="title"><b>X Config Options</b></p>
<dl>
<dt><a name="NvAGP" id="NvAGP"></a><span class="term"><code class=
"computeroutput">Option "NvAGP" "integer"</code></span></dt>
<dd>
<p>Configure AGP support. Integer argument can be one of:</p>
<div class="informaltable">
<table summary="(no summary available)" border="0">
<colgroup>
<col>
<col></colgroup>
<thead>
<tr>
<th>Value</th>
<th>Behavior</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>disable AGP</td>
</tr>
<tr>
<td>1</td>
<td>use NVIDIA internal AGP support, if possible</td>
</tr>
<tr>
<td>2</td>
<td>use AGPGART, if possible</td>
</tr>
<tr>
<td>3</td>
<td>use any AGP support (try AGPGART, then NVIDIA AGP)</td>
</tr>
</tbody>
</table>
</div>
<p>Note that NVIDIA internal AGP support cannot work if AGPGART is
either statically compiled into your kernel or is built as a module
and loaded into your kernel. See <a href="chapter-12.html" title=
"Chapter&nbsp;12.&nbsp;Configuring AGP">Chapter&nbsp;12,
<i>Configuring AGP</i></a> for details. Default: 3.</p>
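<p>For example, a line like the following in the Device section of
the X config file would request AGPGART (value 2 in the table
above):</p>
<pre class="screen">
    Option "NvAGP" "2"
</pre>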
</dd>
<dt><a name="NoLogo" id="NoLogo"></a><span class=
"term"><code class="computeroutput">Option "NoLogo"
"boolean"</code></span></dt>
<dd>
<p>Disable drawing of the NVIDIA logo splash screen at X startup.
Default: the logo is drawn for screens with depth 24.</p>
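<p>Boolean options accept the usual X config boolean spellings; for
example, to suppress the splash screen:</p>
<pre class="screen">
    Option "NoLogo" "True"
</pre>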
</dd>
<dt><a name="LogoPath" id="LogoPath"></a><span class=
"term"><code class="computeroutput">Option "LogoPath"
"string"</code></span></dt>
<dd>
<p>Sets the path to the PNG file to be used as the logo splash
screen at X startup. If the PNG file specified has a bKGD
(background color) chunk, then the screen is cleared to the color
it specifies. Otherwise, the screen is cleared to black. The logo
file must be owned by root and must not be writable by a non-root
group. Note that a logo is only displayed for screens with depth
24. Default: The built-in NVIDIA logo is used.</p>
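<p>For example (the path shown here is purely illustrative):</p>
<pre class="screen">
    Option "LogoPath" "/usr/share/pixmaps/custom-logo.png"
</pre>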
</dd>
<dt><a name="RenderAccel" id="RenderAccel"></a><span class=
"term"><code class="computeroutput">Option "RenderAccel"
"boolean"</code></span></dt>
<dd>
<p>Enable or disable hardware acceleration of the RENDER extension.
Default: hardware acceleration of the RENDER extension is
enabled.</p>
</dd>
<dt><a name="NoRenderExtension" id=
"NoRenderExtension"></a><span class="term"><code class=
"computeroutput">Option "NoRenderExtension"
"boolean"</code></span></dt>
<dd>
<p>Disable the RENDER extension. Short of recompiling it, the X
server provides no other way to disable RENDER; because the driver
can control this, the option is exported here. This is useful in
depth 8, where RENDER would normally steal most of the default
colormap. Default: RENDER is offered when possible.</p>
</dd>
<dt><a name="UBB" id="UBB"></a><span class="term"><code class=
"computeroutput">Option "UBB" "boolean"</code></span></dt>
<dd>
<p>Enable or disable the Unified Back Buffer on Quadro-based GPUs
(Quadro4 NVS excluded); see <a href="chapter-20.html" title=
"Chapter&nbsp;20.&nbsp;Configuring Flipping and UBB">Chapter&nbsp;20,
<i>Configuring Flipping and UBB</i></a> for a description of UBB.
This option has no effect on non-Quadro GPU products. Default: UBB
is on for Quadro GPUs.</p>
</dd>
<dt><a name="NoFlip" id="NoFlip"></a><span class=
"term"><code class="computeroutput">Option "NoFlip"
"boolean"</code></span></dt>
<dd>
<p>Disable OpenGL flipping; see <a href="chapter-20.html" title=
"Chapter&nbsp;20.&nbsp;Configuring Flipping and UBB">Chapter&nbsp;20,
<i>Configuring Flipping and UBB</i></a> for a description. Default:
OpenGL will swap by flipping when possible.</p>
</dd>
<dt><a name="Dac8Bit" id="Dac8Bit"></a><span class=
"term"><code class="computeroutput">Option "Dac8Bit"
"boolean"</code></span></dt>
<dd>
<p>Most Quadro products by default use a 10-bit color look-up table
(LUT); setting this option to TRUE forces these GPUs to use an
8-bit LUT. Default: a 10-bit LUT is used, when available.</p>
</dd>
<dt><a name="Overlay" id="Overlay"></a><span class=
"term"><code class="computeroutput">Option "Overlay"
"boolean"</code></span></dt>
<dd>
<p>Enables RGB workstation overlay visuals. This is only supported
on Quadro GPUs (Quadro NVS GPUs excluded) in depth 24. This option
causes the server to advertise the SERVER_OVERLAY_VISUALS root
window property and GLX will report single- and double-buffered,
Z-buffered 16-bit overlay visuals. The transparency key is pixel
0x0000 (hex). There is no gamma correction support in the overlay
plane. This feature requires XFree86 version 4.1.0 or newer, or the
X.Org X server. When TwinView is enabled, or the X screen is either
wider than 2046 pixels or taller than 2047 pixels, the overlay may be
emulated with a substantial performance penalty. RGB workstation
overlays are not supported when the Composite extension is enabled.
Dynamic TwinView is disabled when Overlays are enabled. Default:
off.</p>
<p>UBB must be enabled when overlays are enabled (this is the
default behavior).</p>
</dd>
<dt><a name="CIOverlay" id="CIOverlay"></a><span class=
"term"><code class="computeroutput">Option "CIOverlay"
"boolean"</code></span></dt>
<dd>
<p>Enables Color Index workstation overlay visuals with identical
restrictions to Option "Overlay" above. The server will offer
visuals both with and without a transparency key. These are depth 8
PseudoColor visuals. Enabling Color Index overlays on X servers
older than XFree86 4.3 will force the RENDER extension to be
disabled due to bugs in the RENDER extension in older X servers.
Color Index workstation overlays are not supported when the
Composite extension is enabled. Default: off.</p>
<p>UBB must be enabled when overlays are enabled (this is the
default behavior).</p>
</dd>
<dt><a name="TransparentIndex" id=
"TransparentIndex"></a><span class="term"><code class=
"computeroutput">Option "TransparentIndex"
"integer"</code></span></dt>
<dd>
<p>When color index overlays are enabled, use this option to choose
which pixel is used for the transparent pixel in visuals featuring
transparent pixels. This value is clamped between 0 and 255 (Note:
some applications such as Alias's Maya require this to be zero in
order to work correctly). Default: 0.</p>
</dd>
<dt><a name="OverlayDefaultVisual" id=
"OverlayDefaultVisual"></a><span class="term"><code class=
"computeroutput">Option "OverlayDefaultVisual"
"boolean"</code></span></dt>
<dd>
<p>When overlays are used, this option sets the default visual to
an overlay visual thereby putting the root window in the overlay.
This option is not recommended for RGB overlays. Default: off.</p>
</dd>
<dt><a name="EmulatedOverlaysTimerMs" id=
"EmulatedOverlaysTimerMs"></a><span class="term"><code class=
"computeroutput">Option "EmulatedOverlaysTimerMs"
"integer"</code></span></dt>
<dd>
<p>Enables the use of a timer within the X server to perform the
updates to the emulated overlay or CI overlay. This option can be
used to improve the performance of the emulated or CI overlays by
reducing the frequency of the updates. The value specified
indicates the desired number of milliseconds between overlay
updates. To disable the use of the timer either leave the option
unset or set it to 0. Default: off.</p>
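<p>For example, an illustrative setting requesting roughly 60
overlay updates per second (one every 16 milliseconds):</p>
<pre class="screen">
    Option "EmulatedOverlaysTimerMs" "16"
</pre>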
</dd>
<dt><a name="EmulatedOverlaysThreshold" id=
"EmulatedOverlaysThreshold"></a><span class="term"><code class=
"computeroutput">Option "EmulatedOverlaysThreshold"
"boolean"</code></span></dt>
<dd>
<p>Enables the use of a threshold within the X server to perform
the updates to the emulated overlay or CI overlay. The emulated or
CI overlay updates can be deferred, but this threshold limits the
number of deferred OpenGL updates allowed before the overlay is
updated. This option can be used to trade off performance and
animation quality. Default: on.</p>
</dd>
<dt><a name="EmulatedOverlaysThresholdValue" id=
"EmulatedOverlaysThresholdValue"></a><span class=
"term"><code class="computeroutput">Option
"EmulatedOverlaysThresholdValue" "integer"</code></span></dt>
<dd>
<p>Controls the threshold used in updating the emulated or CI
overlays. This is used in conjunction with the
EmulatedOverlaysThreshold option to trade off performance and
animation quality. Higher values for this option favor performance
over quality. Setting low values of this option will not cause the
overlay to be updated more often than the frequency specified by
the EmulatedOverlaysTimerMs option. Default: 5.</p>
</dd>
<dt><a name="RandRRotation" id="RandRRotation"></a><span class=
"term"><code class="computeroutput">Option "RandRRotation"
"boolean"</code></span></dt>
<dd>
<p>Enable rotation support for the XRandR extension. This allows
use of the XRandR X server extension for configuring the screen
orientation through rotation. This feature is supported using depth
24. This requires an X.Org 6.8.1 or newer X server. This feature
does not work with hardware overlays; emulated overlays will be
used instead at a substantial performance penalty. See <a href=
"chapter-17.html" title=
"Chapter&nbsp;17.&nbsp;Using the XRandR Extension">Chapter&nbsp;17,
<i>Using the XRandR Extension</i></a> for details. Default:
off.</p>
</dd>
<dt><a name="Rotate" id="Rotate"></a><span class=
"term"><code class="computeroutput">Option "Rotate"
"string"</code></span></dt>
<dd>
<p>Enable static rotation support. Unlike the RandRRotation option
above, this option takes effect as soon as the X server is started
and will work with older versions of X. This feature is supported
using depth 24. This feature does not work with hardware overlays;
emulated overlays will be used instead at a substantial performance
penalty. This option is not compatible with the RandR extension.
Valid rotations are "normal", "left", "inverted", and "right".
Default: off.</p>
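<p>For example, to start the X server with the screen statically
rotated:</p>
<pre class="screen">
    Option "Rotate" "left"
</pre>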
</dd>
<dt><a name="AllowDDCCI" id="AllowDDCCI"></a><span class=
"term"><code class="computeroutput">Option "AllowDDCCI"
"boolean"</code></span></dt>
<dd>
<p>Enables DDC/CI support in the NV-CONTROL X extension. DDC/CI is
a mechanism for communication between your computer and your
display device. This can be used to set the values normally
controlled through your display device's On Screen Display. See the
DDC/CI NV-CONTROL attributes in <code class=
"filename">NVCtrl.h</code> and functions in <code class=
"filename">NVCtrlLib.h</code> in the <span><strong class=
"command">nvidia-settings</strong></span> source code. Default: off
(DDC/CI is disabled).</p>
<p>Note that support for DDC/CI within the NVIDIA X driver's
NV-CONTROL extension is deprecated, and will be removed in a future
release. Other mechanisms for DDC/CI, such as the kernel i2c
subsystem on Linux, are preferred over NV-CONTROL's DDC/CI
support.</p>
<p>If you would prefer that DDC/CI support not be removed from the
NVIDIA X driver's NV-CONTROL X extension, please make your concerns
known by emailing <code class="email">&lt;<a href=
"mailto:linux-bugs@nvidia.com">linux-bugs@nvidia.com</a>&gt;</code>.</p>
</dd>
<dt><a name="SWCursor" id="SWCursor"></a><span class=
"term"><code class="computeroutput">Option "SWCursor"
"boolean"</code></span></dt>
<dd>
<p>Enable or disable software rendering of the X cursor. Default:
off.</p>
</dd>
<dt><a name="HWCursor" id="HWCursor"></a><span class=
"term"><code class="computeroutput">Option "HWCursor"
"boolean"</code></span></dt>
<dd>
<p>Enable or disable hardware rendering of the X cursor. Default:
on.</p>
</dd>
<dt><a name="CursorShadow" id="CursorShadow"></a><span class=
"term"><code class="computeroutput">Option "CursorShadow"
"boolean"</code></span></dt>
<dd>
<p>Enable or disable use of a shadow with the hardware accelerated
cursor; this is a black translucent replica of your cursor shape at
a given offset from the real cursor. Default: off (no cursor
shadow).</p>
</dd>
<dt><a name="CursorShadowAlpha" id=
"CursorShadowAlpha"></a><span class="term"><code class=
"computeroutput">Option "CursorShadowAlpha"
"integer"</code></span></dt>
<dd>
<p>The alpha value to use for the cursor shadow; only applicable if
CursorShadow is enabled. This value must be in the range [0, 255]
-- 0 is completely transparent; 255 is completely opaque. Default:
64.</p>
</dd>
<dt><a name="CursorShadowXOffset" id=
"CursorShadowXOffset"></a><span class="term"><code class=
"computeroutput">Option "CursorShadowXOffset"
"integer"</code></span></dt>
<dd>
<p>The offset, in pixels, that the shadow image will be shifted to
the right from the real cursor image; only applicable if
CursorShadow is enabled. This value must be in the range [0, 32].
Default: 4.</p>
</dd>
<dt><a name="CursorShadowYOffset" id=
"CursorShadowYOffset"></a><span class="term"><code class=
"computeroutput">Option "CursorShadowYOffset"
"integer"</code></span></dt>
<dd>
<p>The offset, in pixels, that the shadow image will be shifted
down from the real cursor image; only applicable if CursorShadow is
enabled. This value must be in the range [0, 32]. Default: 2.</p>
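<p>As an illustrative combination of the CursorShadow option with
the three tuning options above (all values chosen within the
documented ranges):</p>
<pre class="screen">
    Option "CursorShadow"        "True"
    Option "CursorShadowAlpha"   "32"
    Option "CursorShadowXOffset" "6"
    Option "CursorShadowYOffset" "4"
</pre>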
</dd>
<dt><a name="ConnectedMonitor" id=
"ConnectedMonitor"></a><span class="term"><code class=
"computeroutput">Option "ConnectedMonitor"
"string"</code></span></dt>
<dd>
<p>Allows you to override what the NVIDIA kernel module detects is
connected to your graphics card. This may be useful, for example,
if you use a KVM (keyboard, video, mouse) switch and you are
switched away when X is started. In such a situation, the NVIDIA
kernel module cannot detect which display devices are connected,
and the NVIDIA X driver assumes you have a single CRT.</p>
<p>Valid values for this option are "CRT" (cathode ray tube), "DFP"
(digital flat panel), or "TV" (television); if using TwinView, this
option may be a comma-separated list of display devices; e.g.:
"CRT, CRT" or "CRT, DFP".</p>
<p>It is generally recommended to not use this option, but instead
use the "UseDisplayDevice" option.</p>
<p>NOTE: anything attached to a 15 pin VGA connector is regarded by
the driver as a CRT. "DFP" should only be used to refer to digital
flat panels connected via a DVI port.</p>
<p>Default: string is NULL (the NVIDIA driver will detect the
connected display devices).</p>
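<p>For example, to tell the driver that a digital flat panel is
connected even when it cannot detect one:</p>
<pre class="screen">
    Option "ConnectedMonitor" "DFP"
</pre>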
</dd>
<dt><a name="UseDisplayDevice" id=
"UseDisplayDevice"></a><span class="term"><code class=
"computeroutput">Option "UseDisplayDevice"
"string"</code></span></dt>
<dd>
<p>The "UseDisplayDevice" X configuration option is a list of one
or more display devices, which limits the display devices the
NVIDIA X driver will consider for an X screen. The display device
names used in the option may be either specific (with a numeric
suffix; e.g., "DFP-1") or general (without a numeric suffix; e.g.,
"DFP").</p>
<p>When assigning display devices to X screens, the NVIDIA X driver
walks through the list of all (not already assigned) display
devices detected as connected. When the "UseDisplayDevice" X
configuration option is specified, the X driver will only consider
connected display devices which are also included in the
"UseDisplayDevice" list. This can be thought of as a "mask" against
the connected (and not already assigned) display devices.</p>
<p>Note the subtle difference between this option and the
"ConnectedMonitor" option: the "ConnectedMonitor" option overrides
which display devices are actually detected, while the
"UseDisplayDevice" option controls which of the detected display
devices will be used on this X screen.</p>
<p>Of the list of display devices considered for this X screen
(either all connected display devices, or a subset limited by the
"UseDisplayDevice" option), the NVIDIA X driver first looks at
CRTs, then at DFPs, and finally at TVs. For example, if both a CRT
and a DFP are connected, by default the X driver would assign the
CRT to this X screen. However, by specifying:</p>
<pre class="screen">
    Option "UseDisplayDevice" "DFP"
</pre>
<p>the X screen would use the DFP instead. Or, if CRT-0, DFP-0, and
DFP-1 are connected and TwinView is enabled, the X driver would
assign CRT-0 and DFP-0 to the X screen. However, by specifying:</p>
<pre class="screen">
    Option "UseDisplayDevice" "CRT-0, DFP-1"
</pre>
<p>the X screen would use CRT-0 and DFP-1 instead.</p>
<p>Additionally, the special value "none" can be specified for the
"UseDisplayDevice" option. When this value is given, any
programming of the display hardware is disabled. The NVIDIA driver
will not perform any mode validation or modesetting for this X
screen. This is intended for use in conjunction with CUDA or in
remote graphics solutions such as VNC or Hewlett Packard's Remote
Graphics Software (RGS). This functionality is only available on
Quadro and Tesla GPUs.</p>
<p>Note the following restrictions for setting the
"UseDisplayDevice" to "none":</p>
<div class="itemizedlist">
<ul type="disc">
<li>
<p>OpenGL SyncToVBlank will have no effect.</p>
</li>
<li>
<p>You must also explicitly specify the Virtual screen size for
your X screen (see the xorg.conf(5x) or XF86Config(5x) manpages for
the 'Virtual' option, or the nvidia-xconfig(1) manpage for the
'--virtual' commandline option); the Virtual screen size must be at
least 304x200, and the width must be a multiple of 8.</p>
</li>
<li>
<p>None of Stereo, Overlay, CIOverlay, or SLI are allowed when
"UseDisplayDevice" is set to "none".</p>
</li>
</ul>
</div>
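<p>For example, an X screen intended only for headless or remote
rendering could be configured with:</p>
<pre class="screen">
    Option "UseDisplayDevice" "none"
</pre>
<p>remembering to also set the 'Virtual' screen size, as required
above.</p>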
<p></p>
</dd>
<dt><a name="UseEdidFreqs" id="UseEdidFreqs"></a><span class=
"term"><code class="computeroutput">Option "UseEdidFreqs"
"boolean"</code></span></dt>
<dd>
<p>This option controls whether the NVIDIA X driver will use the
HorizSync and VertRefresh ranges given in a display device's EDID,
if any. When UseEdidFreqs is set to True, EDID-provided range
information will override the HorizSync and VertRefresh ranges
specified in the Monitor section. If a display device does not
provide an EDID, or the EDID does not specify an hsync or vrefresh
range, then the X server will default to the HorizSync and
VertRefresh ranges specified in the Monitor section of your X
config file. These frequency ranges are used when validating modes
for your display device.</p>
<p>Default: True (EDID frequencies will be used)</p>
</dd>
<dt><a name="UseEDID" id="UseEDID"></a><span class=
"term"><code class="computeroutput">Option "UseEDID"
"boolean"</code></span></dt>
<dd>
<p>By default, the NVIDIA X driver makes use of a display device's
EDID, when available, during construction of its mode pool. The
EDID is used as a source for possible modes, for valid frequency
ranges, and for collecting data on the physical dimensions of the
display device for computing the DPI (see <a href="appendix-e.html"
title="Appendix&nbsp;E.&nbsp;Dots Per Inch">Appendix&nbsp;E,
<i>Dots Per Inch</i></a>). However, if you wish to disable the
driver's use of the EDID, you can set this option to False:</p>
<pre class="screen">
    Option "UseEDID" "FALSE"
</pre>
<p>Note that, rather than globally disable all uses of the EDID,
you can individually disable each particular use of the EDID;
e.g.,</p>
<pre class="screen">
    Option "UseEDIDFreqs" "FALSE"
    Option "UseEDIDDpi" "FALSE"
    Option "ModeValidation" "NoEdidModes"
</pre>
<p>Default: True (use EDID).</p>
</dd>
<dt><a name="IgnoreEDID" id="IgnoreEDID"></a><span class=
"term"><code class="computeroutput">Option "IgnoreEDID"
"boolean"</code></span></dt>
<dd>
<p>This option is deprecated, and no longer affects behavior of the
X driver. See the "UseEDID" option for details.</p>
</dd>
<dt><a name="NoDDC" id="NoDDC"></a><span class="term"><code class=
"computeroutput">Option "NoDDC" "boolean"</code></span></dt>
<dd>
<p>Synonym for "IgnoreEDID". This option is deprecated, and no
longer affects behavior of the X driver. See the "UseEDID" option
for details.</p>
</dd>
<dt><a name="UseInt10Module" id="UseInt10Module"></a><span class=
"term"><code class="computeroutput">Option "UseInt10Module"
"boolean"</code></span></dt>
<dd>
<p>Enable use of the X Int10 module to soft-boot all secondary
cards, rather than POSTing the cards through the NVIDIA kernel
module. Default: off (POSTing is done through the NVIDIA kernel
module).</p>
</dd>
<dt><a name="TwinView" id="TwinView"></a><span class=
"term"><code class="computeroutput">Option "TwinView"
"boolean"</code></span></dt>
<dd>
<p>Enable or disable TwinView. See <a href="chapter-13.html" title=
"Chapter&nbsp;13.&nbsp;Configuring TwinView">Chapter&nbsp;13,
<i>Configuring TwinView</i></a> for details. Default: off (TwinView
is disabled).</p>
</dd>
<dt><a name="TwinViewOrientation" id=
"TwinViewOrientation"></a><span class="term"><code class=
"computeroutput">Option "TwinViewOrientation"
"string"</code></span></dt>
<dd>
<p>Controls the relationship between the two display devices when
using TwinView. Takes one of the following values: "RightOf",
"LeftOf", "Above", "Below", or "Clone". See <a href="chapter-13.html"
title="Chapter&nbsp;13.&nbsp;Configuring TwinView">Chapter&nbsp;13,
<i>Configuring TwinView</i></a> for details. Default: string is
NULL.</p>
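<p>For example, to display the same image on both display
devices:</p>
<pre class="screen">
    Option "TwinViewOrientation" "Clone"
</pre>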
</dd>
<dt><a name="SecondMonitorHorizSync" id=
"SecondMonitorHorizSync"></a><span class="term"><code class=
"computeroutput">Option "SecondMonitorHorizSync"
"range(s)"</code></span></dt>
<dd>
<p>This option is like the HorizSync entry in the Monitor section,
but is for the second monitor when using TwinView. See <a href=
"chapter-13.html" title=
"Chapter&nbsp;13.&nbsp;Configuring TwinView">Chapter&nbsp;13,
<i>Configuring TwinView</i></a> for details. Default: none.</p>
</dd>
<dt><a name="SecondMonitorVertRefresh" id=
"SecondMonitorVertRefresh"></a><span class="term"><code class=
"computeroutput">Option "SecondMonitorVertRefresh"
"range(s)"</code></span></dt>
<dd>
<p>This option is like the VertRefresh entry in the Monitor
section, but is for the second monitor when using TwinView. See
<a href="chapter-13.html" title=
"Chapter&nbsp;13.&nbsp;Configuring TwinView">Chapter&nbsp;13,
<i>Configuring TwinView</i></a> for details. Default: none.</p>
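<p>For example, the following pair sets ranges for this option and
the preceding one (the values shown are only illustrative; use the
ranges appropriate for your second monitor):</p>
<pre class="screen">
    Option "SecondMonitorHorizSync"   "30-80"
    Option "SecondMonitorVertRefresh" "60"
</pre>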
</dd>
<dt><a name="MetaModes" id="MetaModes"></a><span class=
"term"><code class="computeroutput">Option "MetaModes"
"string"</code></span></dt>
<dd>
<p>This option describes the combination of modes to use on each
monitor when using TwinView. See <a href="chapter-13.html" title=
"Chapter&nbsp;13.&nbsp;Configuring TwinView">Chapter&nbsp;13,
<i>Configuring TwinView</i></a> for details. Default: string is
NULL.</p>
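<p>For example, a MetaModes string such as the following
(illustrative resolutions) defines two MetaModes: one driving the
first display at 1600x1200 and the second at 1024x768, and one
driving both displays at 1024x768:</p>
<pre class="screen">
    Option "MetaModes" "1600x1200, 1024x768; 1024x768, 1024x768"
</pre>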
</dd>
<dt><a name="NoTwinViewXineramaInfo" id=
"NoTwinViewXineramaInfo"></a><span class="term"><code class=
"computeroutput">Option "NoTwinViewXineramaInfo"
"boolean"</code></span></dt>
<dd>
<p>When in TwinView, the NVIDIA X driver normally provides a
Xinerama extension that X clients (such as window managers) can use
to discover the current TwinView configuration, such as where each
display device is positioned within the X screen. Some window
managers get confused by this information, so this option is
provided to disable this behavior. Default: false (TwinView
Xinerama information is provided).</p>
</dd>
<dt><a name="TwinViewXineramaInfoOrder" id=
"TwinViewXineramaInfoOrder"></a><span class="term"><code class=
"computeroutput">Option "TwinViewXineramaInfoOrder"
"string"</code></span></dt>
<dd>
<p>When the NVIDIA X driver provides TwinViewXineramaInfo (see the
NoTwinViewXineramaInfo X config option), it by default reports the
currently enabled display devices in the order "CRT, DFP, TV". The
TwinViewXineramaInfoOrder X config option can be used to override
this order.</p>
<p>The option string is a comma-separated list of display device
names. The display device names can either be general (e.g., "CRT",
which identifies all CRTs), or specific (e.g., "CRT-1", which
identifies a particular CRT). Not all display devices need to be
identified in the option string; display devices that are not
listed will be implicitly appended to the end of the list, in their
default order.</p>
<p>Note that TwinViewXineramaInfoOrder tracks all display devices
that could possibly be connected to the GPU, not just the ones that
are currently enabled. When reporting the Xinerama information, the
NVIDIA X driver walks through the display devices in the order
specified, only reporting enabled display devices.</p>
<p>Examples:</p>
<pre class="screen">
        "DFP"
        "TV, DFP"
        "DFP-1, DFP-0, TV, CRT"
</pre>
<p>In the first example, any enabled DFPs would be reported first
(any enabled CRTs or TVs would be reported afterwards). In the
second example, any enabled TVs would be reported first, then any
enabled DFPs (any enabled CRTs would be reported last). In the last
example, if DFP-1 were enabled, it would be reported first, then
DFP-0, then any enabled TVs, and then any enabled CRTs; finally,
any other enabled DFPs would be reported.</p>
<p>Default: "CRT, DFP, TV"</p>
</dd>
<dt><span class="term"><code class="computeroutput">Option
"TwinViewXineramaInfoOverride" "string"</code></span></dt>
<dd>
<p>This option overrides the values reported by NVIDIA's TwinView
Xinerama implementation. This disregards the actual display devices
used by the X screen and any order specified in
TwinViewXineramaInfoOrder.</p>
<p>The option string is interpreted as a comma-separated list of
regions, specified as '[width]x[height]+[xoffset]+[yoffset]'. The
regions' sizes and offsets are not validated against the X screen
size, but are directly reported to any Xinerama client.</p>
<p>Examples:</p>
<pre class="screen">
        "1600x1200+0+0, 1600x1200+1600+0"
        "1024x768+0+0, 1024x768+1024+0, 1024x768+0+768, 1024x768+1024+768"
</pre>
<p></p>
</dd>
<dt><a name="TVStandard" id="TVStandard"></a><span class=
"term"><code class="computeroutput">Option "TVStandard"
"string"</code></span></dt>
<dd>
<p>See <a href="chapter-16.html" title=
"Chapter&nbsp;16.&nbsp;Configuring TV-Out">Chapter&nbsp;16,
<i>Configuring TV-Out</i></a> for details on configuring
TV-out.</p>
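<p>For example (the value shown assumes a North American NTSC
television; Chapter 16 lists the valid values):</p>
<pre class="screen">
    Option "TVStandard" "NTSC-M"
</pre>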
</dd>
<dt><a name="TVOutFormat" id="TVOutFormat"></a><span class=
"term"><code class="computeroutput">Option "TVOutFormat"
"string"</code></span></dt>
<dd>
<p>See <a href="chapter-16.html" title=
"Chapter&nbsp;16.&nbsp;Configuring TV-Out">Chapter&nbsp;16,
<i>Configuring TV-Out</i></a> for details on configuring
TV-out.</p>
</dd>
<dt><a name="TVOverScan" id="TVOverScan"></a><span class=
"term"><code class="computeroutput">Option "TVOverScan" "Decimal
value in the range 0.0 to 1.0"</code></span></dt>
<dd>
<p>Valid values are in the range 0.0 through 1.0; See <a href=
"chapter-16.html" title=
"Chapter&nbsp;16.&nbsp;Configuring TV-Out">Chapter&nbsp;16,
<i>Configuring TV-Out</i></a> for details on configuring
TV-out.</p>
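<p>For example, an illustrative mid-range overscan setting:</p>
<pre class="screen">
    Option "TVOverScan" "0.5"
</pre>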
</dd>
<dt><a name="Stereo" id="Stereo"></a><span class=
"term"><code class="computeroutput">Option "Stereo"
"integer"</code></span></dt>
<dd>
<p>Enable offering of quad-buffered stereo visuals on Quadro.
Integer indicates the type of stereo equipment being used:</p>
<div class="informaltable">
<table summary="(no summary available)" border="0">
<colgroup>
<col>
<col></colgroup>
<thead>
<tr>
<th>Value</th>
<th>Equipment</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>DDC glasses. The sync signal is sent to the glasses via the DDC
signal to the monitor. These usually involve a passthrough cable
between the monitor and the graphics card. This mode is not
available on G8xGL and higher GPUs.</td>
</tr>
<tr>
<td>2</td>
<td>"Blueline" glasses. These usually involve a passthrough cable
between the monitor and graphics card. The glasses know which eye
to display based on the length of a blue line visible at the bottom
of the screen. When in this mode, the root window dimensions are
one pixel shorter in the Y dimension than requested. This mode does
not work with virtual root window sizes larger than the visible
root window size (desktop panning). This mode is not available on
G8xGL and higher GPUs.</td>
</tr>
<tr>
<td>3</td>
<td>Onboard stereo support. This is usually only found on
professional cards. The glasses connect via a DIN connector on the
back of the graphics card.</td>
</tr>
<tr>
<td>4</td>
<td>TwinView clone mode stereo (aka "passive" stereo). On graphics
cards that support TwinView, the left eye is displayed on the first
display, and the right eye is displayed on the second display. This
is normally used in conjunction with special projectors to produce
2 polarized images which are then viewed with polarized glasses. To
use this stereo mode, you must also configure TwinView in clone
mode with the same resolution, panning offset, and panning domains
on each display.</td>
</tr>
<tr>
<td>5</td>
<td>Vertical interlaced stereo mode, for use with SeeReal Stereo
Digital Flat Panels.</td>
</tr>
<tr>
<td>6</td>
<td>Color interleaved stereo mode, for use with Sharp3D Stereo
Digital Flat Panels.</td>
</tr>
</tbody>
</table>
</div>
<p>Stereo is only available on Quadro cards. Stereo options 1, 2,
and 3 (aka "active" stereo) may be used with TwinView if all modes
within each MetaMode have identical timing values. See <a href=
"chapter-19.html" title=
"Chapter&nbsp;19.&nbsp;Programming Modes">Chapter&nbsp;19,
<i>Programming Modes</i></a> for suggestions on making sure the
modes within your MetaModes are identical. The identical ModeLine
requirement is not necessary for Stereo option 4 ("passive"
stereo). Currently, stereo operation may be "quirky" on the
original Quadro (NV10) GPU and left-right flipping may be erratic.
We are trying to resolve this issue for a future release. Default:
0 (Stereo is not enabled).</p>
<p>UBB must be enabled when stereo is enabled (this is the default
behavior).</p>
<p>Stereo options 1, 2, and 3 (aka "active" stereo) are not
supported on digital flat panels.</p>
<p>Multi-GPU cards (such as the Quadro FX 4500 X2) provide a single
connector for onboard stereo support (option 3), which is tied to
the bottommost GPU. In order to synchronize onboard stereo with the
other GPU, you must use a G-Sync device (see <a href=
"chapter-26.html" title=
"Chapter&nbsp;26.&nbsp;Configuring Frame Lock and Genlock">Chapter&nbsp;26,
<i>Configuring Frame Lock and Genlock</i></a> for details).</p>
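<p>For example, to request quad-buffered stereo with onboard stereo
equipment (value 3 in the table above):</p>
<pre class="screen">
    Option "Stereo" "3"
</pre>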
</dd>
<dt><a name="AllowDFPStereo" id="AllowDFPStereo"></a><span class=
"term"><code class="computeroutput">Option "AllowDFPStereo"
"boolean"</code></span></dt>
<dd>
<p>By default, the NVIDIA X driver performs a check which disables
active stereo (stereo options 1, 2, and 3) if the X screen is
driving a DFP. The "AllowDFPStereo" option bypasses this check.</p>
</dd>
<dt><a name="ForceStereoFlipping" id=
"ForceStereoFlipping"></a><span class="term"><code class=
"computeroutput">Option "ForceStereoFlipping"
"boolean"</code></span></dt>
<dd>
<p>Stereo flipping is the process by which left and right eyes are
displayed on alternating vertical refreshes. Normally, stereo
flipping is only performed when a stereo drawable is visible. This
option forces stereo flipping even when no stereo drawables are
visible.</p>
<p>This is to be used in conjunction with the "Stereo" option. If
"Stereo" is 0, the "ForceStereoFlipping" option has no effect. If
otherwise, the "ForceStereoFlipping" option will force the behavior
indicated by the "Stereo" option, even if no stereo drawables are
visible. This option is useful in a multiple-screen environment in
which a stereo application is run on a different screen than the
stereo master.</p>
<p>Possible values:</p>
<div class="informaltable">
<table summary="(no summary available)" border="0">
<colgroup>
<col>
<col></colgroup>
<thead>
<tr>
<th>Value</th>
<th>Behavior</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>Stereo flipping is not forced. The default behavior as
indicated by the "Stereo" option is used.</td>
</tr>
<tr>
<td>1</td>
<td>Stereo flipping is forced. Stereo is running even if no stereo
drawables are visible. The stereo mode depends on the value of the
"Stereo" option.</td>
</tr>
</tbody>
</table>
</div>
<p>Default: 0 (Stereo flipping is not forced). Note that active
stereo is not supported on digital flat panels.</p>
</dd>
<dt><a name="XineramaStereoFlipping" id=
"XineramaStereoFlipping"></a><span class="term"><code class=
"computeroutput">Option "XineramaStereoFlipping"
"boolean"</code></span></dt>
<dd>
<p>By default, when using Stereo with Xinerama, all physical X
screens having a visible stereo drawable will stereo flip. Use this
option to allow only one physical X screen to stereo flip at a
time.</p>
<p>This is to be used in conjunction with the "Stereo" and
"Xinerama" options. If "Stereo" is 0 or "Xinerama" is 0, the
"XineramaStereoFlipping" option has no effect.</p>
<p>If you wish to have all X screens stereo flip all the time, see
the "ForceStereoFlipping" option.</p>
<p>Possible values:</p>
<div class="informaltable">
<table summary="(no summary available)" border="0">
<colgroup>
<col>
<col></colgroup>
<thead>
<tr>
<th>Value</th>
<th>Behavior</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>Stereo flipping is enabled on one X screen at a time. Stereo is
enabled on the first X screen having the stereo drawable.</td>
</tr>
<tr>
<td>1</td>
<td>Stereo flipping is enabled on all X screens.</td>
</tr>
</tbody>
</table>
</div>
<p>Default: 1 (Stereo flipping is enabled on all X screens).</p>
</dd>
<dt><a name="NoBandWidthTest" id="NoBandWidthTest"></a><span class=
"term"><code class="computeroutput">Option "NoBandWidthTest"
"boolean"</code></span></dt>
<dd>
<p>As part of mode validation, the X driver tests if a given mode
fits within the hardware's memory bandwidth constraints. This
option disables this test. Default: false (the memory bandwidth
test is performed).</p>
</dd>
<dt><a name="IgnoreDisplayDevices" id=
"IgnoreDisplayDevices"></a><span class="term"><code class=
"computeroutput">Option "IgnoreDisplayDevices"
"string"</code></span></dt>
<dd>
<p>This option tells the NVIDIA kernel module to completely ignore
the indicated classes of display devices when checking which
display devices are connected. You may specify a comma-separated
list containing any of "CRT", "DFP", and "TV". For example:</p>
<pre class="screen">
Option "IgnoreDisplayDevices" "DFP, TV"
</pre>
<p>will cause the NVIDIA driver to not attempt to detect if any
digital flat panels or TVs are connected. This option is not
normally necessary; however, some video BIOSes contain incorrect
information about which display devices may be connected, or which
i2c port should be used for detection. These errors can cause long
delays in starting X. If you are experiencing such delays, you may
be able to avoid this by telling the NVIDIA driver to ignore
display devices which you know are not connected. NOTE: anything
attached to a 15 pin VGA connector is regarded by the driver as a
CRT. "DFP" should only be used to refer to digital flat panels
connected via a DVI port.</p>
</dd>
<dt><a name="MultisampleCompatibility" id=
"MultisampleCompatibility"></a><span class="term"><code class=
"computeroutput">Option "MultisampleCompatibility"
"boolean"</code></span></dt>
<dd>
<p>Enable or disable the use of separate front and back multisample
buffers. Enabling this will consume more memory but is necessary
for correct output when rendering to both the front and back
buffers of a multisample or FSAA drawable. This option is necessary
for correct operation of SoftImage XSI. Default: false (a single
multisample buffer is shared between the front and back
buffers).</p>
</dd>
<dt><a name="NoPowerConnectorCheck" id=
"NoPowerConnectorCheck"></a><span class="term"><code class=
"computeroutput">Option "NoPowerConnectorCheck"
"boolean"</code></span></dt>
<dd>
<p>The NVIDIA X driver will abort X server initialization if it
detects that a GPU that requires an external power connector does
not have an external power connector plugged in. This option can be
used to bypass this test. Default: false (the power connector test
is performed).</p>
</dd>
<dt><a name="XvmcUsesTextures" id=
"XvmcUsesTextures"></a><span class="term"><code class=
"computeroutput">Option "XvmcUsesTextures"
"boolean"</code></span></dt>
<dd>
<p>Forces XvMC to use the 3D engine for XvMCPutSurface requests
rather than the video overlay. Default: false (video overlay is
used when available).</p>
</dd>
<dt><a name="AllowGLXWithComposite" id=
"AllowGLXWithComposite"></a><span class="term"><code class=
"computeroutput">Option "AllowGLXWithComposite"
"boolean"</code></span></dt>
<dd>
<p>Enables GLX even when the Composite X extension is loaded.
ENABLE AT YOUR OWN RISK. OpenGL applications will not display
correctly in many circumstances with this setting enabled.</p>
<p>This option is intended for use on X.Org X servers older than
X11R6.9.0. On X11R6.9.0 or newer X servers, the NVIDIA OpenGL
implementation interacts properly by default with the Composite X
extension and this option should not be needed. However, on
X11R6.9.0 or newer X servers, support for GLX with Composite can be
disabled by setting this option to False.</p>
<p>Default: false (GLX is disabled when Composite is enabled on X
servers older than X11R6.9.0).</p>
</dd>
<dt><a name="UseCompositeWrapper" id=
"UseCompositeWrapper"></a><span class="term"><code class=
"computeroutput">Option "UseCompositeWrapper"
"boolean"</code></span></dt>
<dd>
<p>Enables the X server's "composite wrapper", which performs
coordinate translations necessary for the Composite extension.</p>
<p>Default: false (the NVIDIA X driver performs its own coordinate
translation).</p>
</dd>
<dt><a name="AddARGBGLXVisuals" id=
"AddARGBGLXVisuals"></a><span class="term"><code class=
"computeroutput">Option "AddARGBGLXVisuals"
"boolean"</code></span></dt>
<dd>
<p>Adds a 32-bit ARGB visual for each supported OpenGL
configuration. This allows applications to use OpenGL to render
with alpha transparency into 32-bit windows and pixmaps. This
option requires the Composite extension. Default: ARGB GLX visuals
are enabled on X servers new enough to support them when the
Composite extension is also enabled.</p>
</dd>
<dt><a name="DisableGLXRootClipping" id=
"DisableGLXRootClipping"></a><span class="term"><code class=
"computeroutput">Option "DisableGLXRootClipping"
"boolean"</code></span></dt>
<dd>
<p>If enabled, no clipping will be performed on rendering done by
OpenGL in the root window. This option is deprecated. It is needed
by older versions of OpenGL-based composite managers that draw the
contents of redirected windows directly into the root window using
OpenGL. Most OpenGL-based composite managers have been updated to
support the Composite Overlay Window, a feature introduced in Xorg
release 7.1. Using the Composite Overlay Window is the preferred
method for performing OpenGL-based compositing.</p>
</dd>
<dt><a name="DamageEvents" id="DamageEvents"></a><span class=
"term"><code class="computeroutput">Option "DamageEvents"
"boolean"</code></span></dt>
<dd>
<p>Use OS-level events to efficiently notify X when a client has
performed direct rendering to a window that needs to be composited.
This will significantly improve performance and interactivity when
using GLX applications with a composite manager running. It will
also affect applications using GLX when rotation is enabled. This
option is currently incompatible with SLI and Multi-GPU modes and
will be disabled if either are used. Enabled by default.</p>
</dd>
<dt><a name="ExactModeTimingsDVI" id=
"ExactModeTimingsDVI"></a><span class="term"><code class=
"computeroutput">Option "ExactModeTimingsDVI"
"boolean"</code></span></dt>
<dd>
<p>Forces the initialization of the X server with the exact timings
specified in the ModeLine. Default: false (for DVI devices, the X
server initializes with the closest mode in the EDID list).</p>
</dd>
<dt><a name="Coolbits" id="Coolbits"></a><span class=
"term"><code class="computeroutput">Option "Coolbits"
"integer"</code></span></dt>
<dd>
<p>Enables various unsupported features, such as support for GPU
clock manipulation in the NV-CONTROL X extension. This option
accepts a bit mask of features to enable.</p>
<p>When "1" (Bit 0) is set in the "Coolbits" option value, the
nvidia-settings utility will contain a page labeled "Clock
Frequencies" through which clock settings can be manipulated.
"Coolbits" is only available on GeForce FX, Quadro FX and newer
desktop GPUs. On GeForce FX and newer mobile GPUs, limited clock
manipulation support is available when "1" is set in the "Coolbits"
option value: clocks can be lowered relative to the default
settings; overclocking is not supported due to the thermal
constraints of notebook designs.</p>
<p>WARNING: this may cause system damage and void warranties. This
utility can run your computer system out of the manufacturer's
design specifications, including, but not limited to: higher system
voltages, above normal temperatures, excessive frequencies, and
changes to BIOS that may corrupt the BIOS. Your computer's
operating system may hang and result in data loss or corrupted
images. Depending on the manufacturer of your computer system, the
computer system, hardware and software warranties may be voided,
and you may not receive any further manufacturer support. NVIDIA
does not provide customer service support for the Coolbits option.
It is for these reasons that absolutely no warranty or guarantee is
either express or implied. Before enabling and using, you should
determine the suitability of the utility for your intended use, and
you shall assume all responsibility in connection therewith.</p>
<p>When "2" (Bit 1) is set in the "Coolbits" option value, the
NVIDIA driver will attempt to initialize SLI when using GPUs with
different amounts of video memory.</p>
<p>The default for this option is 0 (unsupported features are
disabled).</p>
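<p>For example, to expose the "Clock Frequencies" page in
nvidia-settings (Bit 0):</p>
<pre class="screen">
    Option "Coolbits" "1"
</pre>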
</dd>
<dt><a name="MultiGPU" id="MultiGPU"></a><span class=
"term"><code class="computeroutput">Option "MultiGPU"
"string"</code></span></dt>
<dd>
<p>This option controls the configuration of Multi-GPU rendering in
supported configurations.</p>
<div class="informaltable">
<table summary="(no summary available)" border="0">
<colgroup>
<col>
<col></colgroup>
<thead>
<tr>
<th>Value</th>
<th>Behavior</th>
</tr>
</thead>
<tbody>
<tr>
<td>0, no, off, false, Single</td>
<td>Use only a single GPU when rendering</td>
</tr>
<tr>
<td>1, yes, on, true, Auto</td>
<td>Enable Multi-GPU and allow the driver to automatically select
the appropriate rendering mode.</td>
</tr>
<tr>
<td>AFR</td>
<td>Enable Multi-GPU and use the Alternate Frame Rendering
mode.</td>
</tr>
<tr>
<td>SFR</td>
<td>Enable Multi-GPU and use the Split Frame Rendering mode.</td>
</tr>
<tr>
<td>AA</td>
<td>Enable Multi-GPU and use antialiasing. Use this in conjunction
with full scene antialiasing to improve visual quality.</td>
</tr>
</tbody>
</table>
</div>
<p></p>
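<p>For example, to enable Multi-GPU rendering and let the driver
select the rendering mode:</p>
<pre class="screen">
    Option "MultiGPU" "Auto"
</pre>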
</dd>
<dt><a name="SLI" id="SLI"></a><span class="term"><code class=
"computeroutput">Option "SLI" "string"</code></span></dt>
<dd>
<p>This option controls the configuration of SLI rendering in
supported configurations.</p>
<div class="informaltable">
<table summary="(no summary available)" border="0">
<colgroup>
<col>
<col></colgroup>
<thead>
<tr>
<th>Value</th>
<th>Behavior</th>
</tr>
</thead>
<tbody>
<tr>
<td>0, no, off, false, Single</td>
<td>Use only a single GPU when rendering</td>
</tr>
<tr>
<td>1, yes, on, true, Auto</td>
<td>Enable SLI and allow the driver to automatically select the
appropriate rendering mode.</td>
</tr>
<tr>
<td>AFR</td>
<td>Enable SLI and use the Alternate Frame Rendering mode.</td>
</tr>
<tr>
<td>SFR</td>
<td>Enable SLI and use the Split Frame Rendering mode.</td>
</tr>
<tr>
<td>AA</td>
<td>Enable SLI and use SLI Antialiasing. Use this in conjunction
with full scene antialiasing to improve visual quality.</td>
</tr>
<tr>
<td>AFRofAA</td>
<td>Enable SLI and use SLI Alternate Frame Rendering of
Antialiasing mode. Use this in conjunction with full scene
antialiasing to improve visual quality. This option is only valid
for SLI configurations with 4 GPUs.</td>
</tr>
</tbody>
</table>
</div>
<p></p>
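<p>For example, to enable SLI with the Alternate Frame Rendering
mode:</p>
<pre class="screen">
    Option "SLI" "AFR"
</pre>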
</dd>
<dt><a name="TripleBuffer" id="TripleBuffer"></a><span class=
"term"><code class="computeroutput">Option "TripleBuffer"
"boolean"</code></span></dt>
<dd>
<p>Enable or disable the use of triple buffering. If this option is
enabled, OpenGL windows that sync to vblank and are double-buffered
will be given a third buffer. This decreases the time an
application stalls while waiting for vblank events, but increases
latency slightly (delay between user input and displayed
result).</p>
</dd>
<dt><a name="DPI" id="DPI"></a><span class="term"><code class=
"computeroutput">Option "DPI" "string"</code></span></dt>
<dd>
<p>This option specifies the Dots Per Inch for the X screen; for
example:</p>
<pre class="screen">
    Option "DPI" "75 x 85"
</pre>
<p>will set the horizontal DPI to 75 and the vertical DPI to 85. By
default, the X driver will compute the DPI of the X screen from the
EDID of any connected display devices. See <a href=
"appendix-e.html" title=
"Appendix&nbsp;E.&nbsp;Dots Per Inch">Appendix&nbsp;E, <i>Dots Per
Inch</i></a> for details. Default: string is NULL (disabled).</p>
</dd>
<dt><a name="UseEdidDpi" id="UseEdidDpi"></a><span class=
"term"><code class="computeroutput">Option "UseEdidDpi"
"string"</code></span></dt>
<dd>
<p>By default, the NVIDIA X driver computes the DPI of an X screen
based on the physical size of the display device, as reported in
the EDID, and the size in pixels of the first mode to be used on
the display device. If multiple display devices are used by the X
screen, then the NVIDIA X driver will choose which display device
to use. This option can be used to specify which display device to
use. The string argument can be a display device name, such as:</p>
<pre class="screen">
    Option "UseEdidDpi" "DFP-0"
</pre>
<p>or the argument can be "FALSE" to disable use of EDID-based DPI
calculations:</p>
<pre class="screen">
    Option "UseEdidDpi" "FALSE"
</pre>
<p>See <a href="appendix-e.html" title=
"Appendix&nbsp;E.&nbsp;Dots Per Inch">Appendix&nbsp;E, <i>Dots Per
Inch</i></a> for details. Default: string is NULL (the driver
computes the DPI from the EDID of a display device and selects the
display device).</p>
</dd>
<dt><a name="ConstantDPI" id="ConstantDPI"></a><span class=
"term"><code class="computeroutput">Option "ConstantDPI"
"boolean"</code></span></dt>
<dd>
<p>By default on X.Org 6.9 or newer X servers, the NVIDIA X driver
recomputes the size in millimeters of the X screen whenever the
size in pixels of the X screen is changed using XRandR, such that
the DPI remains constant.</p>
<p>This behavior can be disabled (which means that the size in
millimeters will not change when the size in pixels of the X screen
changes) by setting the "ConstantDPI" option to "FALSE"; e.g.,</p>
<pre class="screen">
    Option "ConstantDPI" "FALSE"
</pre>
<p>ConstantDPI defaults to True.</p>
<p>On X servers older than X.Org 6.9, the NVIDIA X driver cannot
change the size in millimeters of the X screen. Therefore the DPI
of the X screen will change when XRandR changes the size in pixels
of the X screen. The driver will behave as if ConstantDPI was
forced to FALSE.</p>
</dd>
<dt><a name="CustomEDID" id="CustomEDID"></a><span class=
"term"><code class="computeroutput">Option "CustomEDID"
"string"</code></span></dt>
<dd>
<p>This option forces the X driver to use the EDID specified in a
file rather than the display's EDID. You may specify a semicolon
separated list of display names and filename pairs. The display
name is any of "CRT-0", "CRT-1", "DFP-0", "DFP-1", "TV-0", "TV-1".
The file contains a raw EDID (e.g., a file generated by
nvidia-settings).</p>
<p>For example:</p>
<pre class="screen">
    Option "CustomEDID" "CRT-0:/tmp/edid1.bin; DFP-0:/tmp/edid2.bin"
</pre>
<p>will assign the EDID from the file /tmp/edid1.bin to the display
device CRT-0, and the EDID from the file /tmp/edid2.bin to the
display device DFP-0. Note that a display device name must always
be specified even if only one EDID is specified.</p>
</dd>
<dt><a name="ModeValidation" id="ModeValidation"></a><span class=
"term"><code class="computeroutput">Option "ModeValidation"
"string"</code></span></dt>
<dd>
<p>This option provides fine-grained control over each stage of the
mode validation pipeline, disabling individual mode validation
checks. This option should only very rarely be used.</p>
<p>The option string is a semicolon-separated list of
comma-separated lists of mode validation arguments. Each list of
mode validation arguments can optionally be prepended with a
display device name.</p>
<pre class="screen">
    "&lt;dpy-0&gt;: &lt;tok&gt;, &lt;tok&gt;; &lt;dpy-1&gt;: &lt;tok&gt;, &lt;tok&gt;, &lt;tok&gt;; ..."
</pre>
<p></p>
<p>Possible arguments:</p>
<div class="itemizedlist">
<ul type="disc">
<li>
<p>"AllowNon60HzDFPModes": some lower quality TMDS encoders are
only rated to drive DFPs at 60Hz; the driver will determine when
only 60Hz DFP modes are allowed. This argument disables this stage
of the mode validation pipeline.</p>
</li>
<li>
<p>"NoMaxPClkCheck": each mode has a pixel clock; this pixel clock
is validated against the maximum pixel clock of the hardware (for a
DFP, this is the maximum pixel clock of the TMDS encoder, for a
CRT, this is the maximum pixel clock of the DAC). This argument
disables the maximum pixel clock checking stage of the mode
validation pipeline.</p>
</li>
<li>
<p>"NoEdidMaxPClkCheck": a display device's EDID can specify the
maximum pixel clock that the display device supports; a mode's
pixel clock is validated against this pixel clock maximum. This
argument disables this stage of the mode validation pipeline.</p>
</li>
<li>
<p>"AllowInterlacedModes": interlaced modes are not supported on
all NVIDIA GPUs; the driver will discard interlaced modes on GPUs
where interlaced modes are not supported; this argument disables
this stage of the mode validation pipeline.</p>
</li>
<li>
<p>"NoMaxSizeCheck": each NVIDIA GPU has a maximum resolution that
it can drive; this argument disables this stage of the mode
validation pipeline.</p>
</li>
<li>
<p>"NoHorizSyncCheck": a mode's horizontal sync is validated
against the range of valid horizontal sync values; this argument
disables this stage of the mode validation pipeline.</p>
</li>
<li>
<p>"NoVertRefreshCheck": a mode's vertical refresh rate is
validated against the range of valid vertical refresh rate values;
this argument disables this stage of the mode validation
pipeline.</p>
</li>
<li>
<p>"NoWidthAlignmentCheck": the alignment of a mode's visible width
is validated against the capabilities of the GPU; normally, a
mode's visible width must be a multiple of 8. This argument
disables this stage of the mode validation pipeline.</p>
</li>
<li>
<p>"NoDFPNativeResolutionCheck": when validating for a DFP, a
mode's size is validated against the native resolution of the DFP;
this argument disables this stage of the mode validation
pipeline.</p>
</li>
<li>
<p>"NoVirtualSizeCheck": if the X configuration file requests a
specific virtual screen size, a mode cannot be larger than that
virtual size; this argument disables this stage of the mode
validation pipeline.</p>
</li>
<li>
<p>"NoVesaModes": when constructing the mode pool for a display
device, the X driver uses a built-in list of VESA modes as one of
the mode sources; this argument disables use of these built-in VESA
modes.</p>
</li>
<li>
<p>"NoEdidModes": when constructing the mode pool for a display
device, the X driver uses any modes listed in the display device's
EDID as one of the mode sources; this argument disables use of
EDID-specified modes.</p>
</li>
<li>
<p>"NoXServerModes": when constructing the mode pool for a display
device, the X driver uses the built-in modes provided by the core
XFree86/Xorg X server as one of the mode sources; this argument
disables use of these modes. Note that this argument does not
disable custom ModeLines specified in the X config file; see the
"NoCustomModes" argument for that.</p>
</li>
<li>
<p>"NoCustomModes": when constructing the mode pool for a display
device, the X driver uses custom ModeLines specified in the X
config file (through the "Mode" or "ModeLine" entries in the
Monitor Section) as one of the mode sources; this argument disables
use of these modes.</p>
</li>
<li>
<p>"NoPredefinedModes": when constructing the mode pool for a
display device, the X driver uses additional modes predefined by
the NVIDIA X driver; this argument disables use of these modes.</p>
</li>
<li>
<p>"NoUserModes": additional modes can be added to the mode pool
dynamically, using the NV-CONTROL X extension; this argument
prohibits user-specified modes via the NV-CONTROL X extension.</p>
</li>
<li>
<p>"NoExtendedGpuCapabilitiesCheck": allow mode timings that may
exceed the GPU's extended capability checks.</p>
</li>
<li>
<p>"ObeyEdidContradictions": an EDID may contradict itself by
listing a mode as supported, but the mode may exceed an
EDID-specified valid frequency range (HorizSync, VertRefresh, or
maximum pixel clock). Normally, the NVIDIA X driver prints a
warning in this scenario, but does not invalidate an EDID-specified
mode just because it exceeds an EDID-specified valid frequency
range. However, the "ObeyEdidContradictions" argument instructs the
NVIDIA X driver to invalidate these modes.</p>
</li>
<li>
<p>"NoTotalSizeCheck": allow modes in which the invididual visible
or sync pulse timings exceed the total raster size.</p>
</li>
<li>
<p>"DoubleScanPriority": on GPUs older than G80, doublescan modes
are sorted before non-doublescan modes of the same resolution for
purposes of modepool sorting; but on G80 and later GPUs, doublescan
modes are sorted after non-doublescan modes of the same resolution.
This token inverts that priority (i.e., doublescan modes will be
sorted after on pre-G80 GPUs, and sorted before on G80 and later
GPUs).</p>
</li>
<li>
<p>"NoDualLinkDVICheck": for mode timings used on dual link DVI
DFPs, the driver must perform additional checks to ensure that the
correct pixels are sent on the correct link. For some of these
checks, the driver will invalidate the mode timings; for other
checks, the driver will implicitly modify the mode timings to meet
the GPU's dual link DVI requirements. This token disables this dual
link DVI checking.</p>
</li>
</ul>
</div>
<p></p>
<p>Examples:</p>
<pre class="screen">
    Option "ModeValidation" "NoMaxPClkCheck"
</pre>
<p>disable the maximum pixel clock check when validating modes on
all display devices.</p>
<pre class="screen">
    Option "ModeValidation" "CRT-0: NoEdidModes, NoMaxPClkCheck; DFP-0: NoVesaModes"
</pre>
<p>do not use EDID modes and do not perform the maximum pixel clock
check on CRT-0, and do not use VESA modes on DFP-0.</p>
</dd>
<dt><a name="UseEvents" id="UseEvents"></a><span class=
"term"><code class="computeroutput">Option "UseEvents"
"boolean"</code></span></dt>
<dd>
<p>Enables the use of system events in some cases when the X driver
is waiting for the hardware. The X driver can briefly spin through
a tight loop when waiting for the hardware. With this option the X
driver instead sets an event handler and waits for the hardware
through the <span><strong class="command">poll()</strong></span>
system call. Default: the use of events is disabled.</p>
</dd>
<dt><a name="FlatPanelProperties" id=
"FlatPanelProperties"></a><span class="term"><code class=
"computeroutput">Option "FlatPanelProperties"
"string"</code></span></dt>
<dd>
<p>This option requests particular properties for all or a subset
of the connected flat panels.</p>
<p>The option string is a semicolon-separated list of
comma-separated property=value pairs. Each list of property=value
pairs can optionally be prepended with a flat panel name.</p>
<pre class="screen">
    "&lt;DFP-0&gt;: &lt;property=value&gt;, &lt;property=value&gt;; &lt;DFP-1&gt;: &lt;property=value&gt;; ..."
</pre>
<p></p>
<p>Recognized properties:</p>
<div class="itemizedlist">
<ul type="disc">
<li>
<p>"Scaling": controls the flat panel scaling mode; possible values
are: 'Default' (the driver will use whichever scaling state is
current), 'Native' (the driver will use the flat panel's scaler, if
possible), 'Scaled' (the driver will use the NVIDIA GPU's scaler,
if possible), 'Centered' (the driver will center the image, if
possible), and 'aspect-scaled' (the X driver will scale with the
NVIDIA GPU's scaler, but keep the aspect ratio correct).</p>
</li>
<li>
<p>"Dithering": controls the flat panel dithering mode; possible
values are: 'Default' (the driver will decide when to dither),
'Enabled' (the driver will always dither, if possible), and
'Disabled' (the driver will never dither).</p>
</li>
</ul>
</div>
<p></p>
<p>Examples:</p>
<pre class="screen">
    Option "FlatPanelProperties" "Scaling = Centered"
</pre>
<p>set the flat panel scaling mode to centered on all flat
panels.</p>
<pre class="screen">
    Option "FlatPanelProperties" "DFP-0: Scaling = Centered; DFP-1: Scaling = Scaled, Dithering = Enabled"
</pre>
<p>set DFP-0's scaling mode to centered, set DFP-1's scaling mode
to scaled and its dithering mode to enabled.</p>
</dd>
<dt><a name="ProbeAllGpus" id="ProbeAllGpus"></a><span class=
"term"><code class="computeroutput">Option "ProbeAllGpus"
"boolean"</code></span></dt>
<dd>
<p>When the NVIDIA X driver initializes, it probes all GPUs in the
system, even if no X screens are configured on them. This is done
so that the X driver can report information about all the system's
GPUs through the NV-CONTROL X extension. This option can be set to
FALSE to disable this behavior, such that only GPUs with X screens
configured on them will be probed. Default: all GPUs in the system
are probed.</p>
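<p>For example, a line such as the following would restrict probing
to only those GPUs with X screens configured on them:</p>
<pre class="screen">
    Option "ProbeAllGpus" "FALSE"
</pre>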
</dd>
<dt><a name="DynamicTwinView" id="DynamicTwinView"></a><span class=
"term"><code class="computeroutput">Option "DynamicTwinView"
"boolean"</code></span></dt>
<dd>
<p>Enable or disable support for dynamically configuring TwinView
on this X screen. When DynamicTwinView is enabled (the default),
the refresh rate of a mode (reported through XF86VidMode or XRandR)
does not correctly report the refresh rate, but instead is a unique
number such that each MetaMode has a different value. This is to
guarantee that MetaModes can be uniquely identified by XRandR.</p>
<p>When DynamicTwinView is disabled, the refresh rate reported
through XRandR will be accurate, but NV-CONTROL clients such as
nvidia-settings will not be able to dynamically manipulate the X
screen's MetaModes. TwinView can still be configured from the X
config file when DynamicTwinView is disabled.</p>
<p>Default: DynamicTwinView is enabled.</p>
</dd>
<dt><a name="IncludeImplicitMetaModes" id=
"IncludeImplicitMetaModes"></a><span class="term"><code class=
"computeroutput">Option "IncludeImplicitMetaModes"
"boolean"</code></span></dt>
<dd>
<p>When the X server starts, a mode pool is created per display
device, containing all the mode timings that the NVIDIA X driver
determined to be valid for the display device. However, the only
MetaModes that are made available to the X server are the ones
explicitly requested in the X configuration file.</p>
<p>It is convenient for fullscreen applications to be able to
change between the modes in the mode pool, even if a given target
mode was not explicitly requested in the X configuration file.</p>
<p>To facilitate this, the NVIDIA X driver will, if only one
display device is in use when the X server starts, implicitly add
MetaModes for all modes in the display device's mode pool. This
makes all the modes in the mode pool available to fullscreen
applications that use the XF86VidMode or XRandR X extensions.</p>
<p>To prevent this behavior, and only add MetaModes that are
explicitly requested in the X configuration file, set this option
to FALSE.</p>
<p>Default: IncludeImplicitMetaModes is enabled.</p>
</dd>
<dt><a name="AllowIndirectPixmaps" id=
"AllowIndirectPixmaps"></a><span class="term"><code class=
"computeroutput">Option "AllowIndirectPixmaps"
"boolean"</code></span></dt>
<dd>
<p>Some graphics cards have more video memory than can be mapped at
once by the CPU (generally only 256 MB of video memory can be
CPU-mapped). On graphics cards based on G80 and higher with such a
memory configuration, this option allows the driver to place more
pixmaps in video memory, which will improve hardware rendering
performance but will slow down software rendering. On some systems,
up to 768 megabytes of virtual address space will be reserved in
the X server for indirect pixmap access. This virtual memory does
not consume any physical resources.</p>
<p>Default: on (indirect pixmaps will be used, when available).</p>
</dd>
<dt><a name="OnDemandVBlankInterrupts" id=
"OnDemandVBlankInterrupts"></a><span class="term"><code class=
"computeroutput">Option "OnDemandVBlankInterrupts"
"boolean"</code></span></dt>
<dd>
<p>Normally, VBlank interrupts are generated on every vertical
refresh of every display device connected to the GPU(s) installed
in a given system. This experimental option enables on-demand
VBlank control, allowing the driver to enable VBlank interrupt
generation only when it is required. This can help conserve
power.</p>
<p>Default: off (on-demand VBlank control is disabled).</p>
</dd>
<dt><a name="PixmapCacheSize" id="PixmapCacheSize"></a><span class=
"term"><code class="computeroutput">Option "PixmapCacheSize"
"size"</code></span></dt>
<dd>
<p>This option controls how much video memory is reserved for
pixmap allocations. When the option is specified, <code class=
"computeroutput">size</code> specifies the number of pixels to be
used for each of the 8, 16, and 32 bit per pixel pixmap caches.
Reserving this memory improves performance when pixmaps are created
and destroyed rapidly, but prevents this memory from being used by
OpenGL. When this cache is disabled or space in the cache is
exhausted, the driver will still allocate pixmaps in video memory
but pixmap creation and deletion performance will not be
improved.</p>
<p>This option may be removed in a future driver release after
improvements to the pixmap cache make it obsolete.</p>
<p>Example: <code class="computeroutput">Option "PixmapCacheSize"
"200000"</code> will allocate approximately 200,000 pixels for each
of the pixmap caches.</p>
<p>Default: off (no memory is reserved specifically for
pixmaps).</p>
</dd>
<dt><a name="LoadKernelModule" id=
"LoadKernelModule"></a><span class="term"><code class=
"computeroutput">Option "LoadKernelModule"
"boolean"</code></span></dt>
<dd>
<p>Normally, the NVIDIA Linux X driver module will attempt to load
the NVIDIA Linux kernel module. Set this option to "off" to disable
automatic loading of the NVIDIA kernel module by the NVIDIA X
driver. Default: on (the driver loads the kernel module).</p>
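<p>For example, a line such as the following would leave loading of
the NVIDIA kernel module to other mechanisms, such as the
distribution's init scripts:</p>
<pre class="screen">
    Option "LoadKernelModule" "off"
</pre>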
</dd>
<dt><a name="ConnectToAcpid" id="ConnectToAcpid"></a><span class=
"term"><code class="computeroutput">Option "ConnectToAcpid"
"boolean"</code></span></dt>
<dd>
<p>The ACPI daemon (acpid) receives information about ACPI events
like AC/Battery power, docking, etc. acpid will deliver these
events to the NVIDIA X driver via a UNIX domain socket connection.
By default, the NVIDIA X driver will attempt to connect to acpid to
receive these events. Set this option to "off" to prevent the
NVIDIA X driver from connecting to acpid. Default: on (the NVIDIA X
driver will attempt to connect to acpid).</p>
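<p>For example, a line such as the following would prevent the
NVIDIA X driver from connecting to acpid:</p>
<pre class="screen">
    Option "ConnectToAcpid" "off"
</pre>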
</dd>
<dt><a name="AcpidSocketPath" id="AcpidSocketPath"></a><span class=
"term"><code class="computeroutput">Option "AcpidSocketPath"
"string"</code></span></dt>
<dd>
<p>The NVIDIA X driver attempts to connect to the ACPI daemon
(acpid) via a UNIX domain socket. The default path to this socket
is "/var/run/acpid.socket". Set this option to specify an alternate
path to acpid's socket. Default: "/var/run/acpid.socket".</p>
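<p>For example, a line such as the following would point the driver
at a non-default socket location (the path shown is purely
illustrative; substitute the path used by your acpid):</p>
<pre class="screen">
    # illustrative path; use the socket path of your acpid
    Option "AcpidSocketPath" "/var/run/acpid/custom.socket"
</pre>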
</dd>
<dt><span class="term"><code class="computeroutput">Option
"EnableACPIHotkeys" "boolean"</code></span></dt>
<dd>
<p>The NVIDIA Linux X driver can detect mobile display change
hotkey events either through ACPI or by periodically checking the
GPU hardware state.</p>
<p>While checking the GPU hardware state is generally sufficient to
detect display change hotkey events, ACPI hotkey event delivery is
preferable. However, X servers prior to X.Org xserver-1.2.0 have a
bug that causes the X server to crash when it receives an ACPI
hotkey event (freedesktop.org bug 8776). The NVIDIA Linux X driver
keys off the X server ABI version to determine whether the X server
in use has this bug (X servers with ABI 1.1 or later do not).</p>
<p>Since some X servers may have an earlier ABI but have a patch to
fix the bug, the "EnableACPIHotkeys" option can be specified to
override the NVIDIA X driver's default decision to enable or
disable ACPI display change hotkey events.</p>
<p>When running on a mobile system, search for "ACPI display change
hotkey events" in your X log to see the NVIDIA X driver's
decision.</p>
<p>Default: the NVIDIA X driver will decide whether to enable ACPI
display change hotkey events based on the X server ABI.</p>
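<p>For example, on an X server with a pre-1.1 ABI that has been
patched to fix this bug, a line such as the following would force
ACPI display change hotkey events on:</p>
<pre class="screen">
    Option "EnableACPIHotkeys" "on"
</pre>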
</dd>
</dl>
</div>
<p></p>
</div>
<div class="navfooter">
<hr>
<table width="100%" summary="Navigation footer">
<tr>
<td width="40%" align="left"><a accesskey="p" href=
"appendix-a.html">Prev</a>&nbsp;</td>
<td width="20%" align="center"><a accesskey="u" href=
"part-02.html">Up</a></td>
<td width="40%" align="right">&nbsp;<a accesskey="n" href=
"appendix-c.html">Next</a></td>
</tr>
<tr>
<td width="40%" align="left" valign="top">
Appendix&nbsp;A.&nbsp;Supported NVIDIA GPU Products&nbsp;</td>
<td width="20%" align="center"><a accesskey="h" href=
"index.html">Home</a></td>
<td width="40%" align="right" valign="top">
&nbsp;Appendix&nbsp;C.&nbsp;Display Device Names</td>
</tr>
</table>
</div>
</body>
</html>