<!DOCTYPE html
  PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" lang="en-us" xml:lang="en-us">
   <head>
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8"></meta>
      <meta http-equiv="X-UA-Compatible" content="IE=edge"></meta>
      <meta name="copyright" content="(C) Copyright 2005"></meta>
      <meta name="DC.rights.owner" content="(C) Copyright 2005"></meta>
      <meta name="DC.Type" content="cppModule"></meta>
      <meta name="DC.Title" content="Unified Addressing"></meta>
      <meta name="abstract" content=""></meta>
      <meta name="description" content=""></meta>
      <meta name="DC.Format" content="XHTML"></meta>
      <meta name="DC.Identifier" content="group__CUDA__UNIFIED"></meta>
      <link rel="stylesheet" type="text/css" href="../common/formatting/commonltr.css"></link>
      <link rel="stylesheet" type="text/css" href="../common/formatting/site.css"></link>
      <title>CUDA Driver API :: CUDA Toolkit Documentation</title>
      <!--[if lt IE 9]>
      <script src="../common/formatting/html5shiv-printshiv.min.js"></script>
      <![endif]-->
      <script type="text/javascript" charset="utf-8" src="../common/formatting/jquery.min.js"></script>
      <script type="text/javascript" charset="utf-8" src="../common/formatting/jquery.ba-hashchange.min.js"></script>
      <link rel="canonical" href="http://docs.nvidia.com/cuda/cuda-driver-api/index.html"></link>
      <link rel="stylesheet" type="text/css" href="../common/formatting/qwcode.highlight.css"></link>
   </head>
   <body>
      
      <article id="contents">
         <div id="breadcrumbs"><a href="group__CUDA__MEM.html" shape="rect">&lt; Previous</a> | <a href="group__CUDA__STREAM.html" shape="rect">Next &gt;</a></div>
         <div id="release-info">CUDA Driver API
            (<a href="../../pdf/CUDA_Driver_API.pdf">PDF</a>)
            -
            CUDA Toolkit v5.5
            (<a href="https://developer.nvidia.com/cuda-toolkit-archive">older</a>)
            -
            Last updated 
            July 19, 2013
            -
            <a href="mailto:cudatools@nvidia.com?subject=CUDA Tools Documentation Feedback: cuda-driver-api">Send Feedback</a></div>
         <div class="topic reference apiRef apiPackage cppModule" id="group__CUDA__UNIFIED"><a name="group__CUDA__UNIFIED" shape="rect">
               <!-- --></a><h2 class="topictitle2 cppModule">2.10.&nbsp;Unified Addressing</h2>
            <div class="section">
               <p>This section describes the unified addressing functions of the low-level CUDA driver application programming interface.</p>
               <p class="p apiDesc_subtitle"><strong class="ph b">Overview</strong></p>
               <p class="p">CUDA devices can share a unified address space with the host. For these devices there is no distinction between a device pointer
                  and a host pointer -- the same pointer value may be used to access memory from the host program and from a kernel running
                  on the device (with exceptions enumerated below).
               </p>
               <p class="p apiDesc_subtitle"><strong class="ph b">Supported Platforms</strong></p>
               <p class="p">Whether or not a device supports unified addressing may be queried by calling <a class="xref" href="group__CUDA__DEVICE.html#group__CUDA__DEVICE_1g9c3e1414f0ad901d3278a4d6645fc266" title="Returns information about the device." shape="rect">cuDeviceGetAttribute()</a> with the device attribute <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gge12b8a782bebe21b1ac0091bf9f4e2a3dc11dd6d9f149a7bae32499f2b802c0d" shape="rect">CU_DEVICE_ATTRIBUTE_UNIFIED_ADDRESSING</a>.
               </p>
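                <p class="p">For example, a minimal query might look like the following sketch (device ordinal 0 and the omission of error checking are assumptions made for brevity):</p>
                <pre class="pre screen" xml:space="preserve">#include &lt;cuda.h&gt;
#include &lt;stdio.h&gt;

int main(void)
{
    CUdevice dev;
    int unified = 0;

    cuInit(0);                     /* initialize the driver API */
    cuDeviceGet(&amp;dev, 0);          /* assume device 0 */
    cuDeviceGetAttribute(&amp;unified,
                         CU_DEVICE_ATTRIBUTE_UNIFIED_ADDRESSING,
                         dev);
    printf("Unified addressing supported: %s\n", unified ? "yes" : "no");
    return 0;
}</pre>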
               <p class="p">Unified addressing is automatically enabled in 64-bit processes on devices with compute capability greater than or equal to
                  2.0.
               </p>
               <p class="p apiDesc_subtitle"><strong class="ph b">Looking Up Information from Pointer Values</strong></p>
               <p class="p">It is possible to look up information about the memory which backs a pointer value. For instance, one may want to know if
                  a pointer points to host or device memory. As another example, in the case of device memory, one may want to know on which
                   CUDA device the memory resides. These properties may be queried using the function <a class="xref" href="group__CUDA__UNIFIED.html#group__CUDA__UNIFIED_1g0c28ed0aff848042bc0533110e45820c" title="Returns information about a pointer." shape="rect">cuPointerGetAttribute()</a>.</p>
               <p class="p">Because pointers are unique, it is not necessary to specify information about the pointers specified to the various copy functions
                  in the CUDA API. The function <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g8d0ff510f26d4b87bd3a51e731e7f698" title="Copies memory." shape="rect">cuMemcpy()</a> may be used to perform a copy between two pointers, ignoring whether they point to host or device memory (making <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g4d32266788c440b0220b1a9ba5795169" title="Copies memory from Host to Device." shape="rect">cuMemcpyHtoD()</a>, <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g1725774abf8b51b91945f3336b778c8b" title="Copies memory from Device to Device." shape="rect">cuMemcpyDtoD()</a>, and <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g3480368ee0208a98f75019c9a8450893" title="Copies memory from Device to Host." shape="rect">cuMemcpyDtoH()</a> unnecessary for devices supporting unified addressing). For multidimensional copies, the memory type <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gg8a114cc994ad2e865c44ef3838eaec727a47ca2de6db5cf82084ad80ce66aa71" shape="rect">CU_MEMORYTYPE_UNIFIED</a> may be used to specify that the CUDA driver should infer the location of the pointer from its value.
               </p>
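                <p class="p">The following sketch shows <tt class="ph tt code">cuMemcpy()</tt> used in both directions under unified addressing (the buffer size, device ordinal, and omission of error checking are assumptions made for brevity):</p>
                <pre class="pre screen" xml:space="preserve">#include &lt;cuda.h&gt;
#include &lt;stdint.h&gt;

int main(void)
{
    CUdevice dev;
    CUcontext ctx;
    void *host_buf;
    CUdeviceptr dev_buf;
    size_t bytes = 1048576;                       /* 1 MiB, chosen arbitrarily */

    cuInit(0);
    cuDeviceGet(&amp;dev, 0);
    cuCtxCreate(&amp;ctx, 0, dev);

    cuMemAllocHost(&amp;host_buf, bytes);             /* page-locked host memory */
    cuMemAlloc(&amp;dev_buf, bytes);                  /* device memory */

    /* Host to device and device to host with the same function; the host
       pointer is simply passed as a CUdeviceptr. */
    cuMemcpy(dev_buf, (CUdeviceptr)(uintptr_t)host_buf, bytes);
    cuMemcpy((CUdeviceptr)(uintptr_t)host_buf, dev_buf, bytes);

    cuMemFree(dev_buf);
    cuMemFreeHost(host_buf);
    cuCtxDestroy(ctx);
    return 0;
}</pre>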
               <p class="p apiDesc_subtitle"><strong class="ph b">Automatic Mapping of Host Allocated Host Memory</strong></p>
               <p class="p">All host memory allocated in all contexts using <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1gdd8311286d2c2691605362c689bc64e0" title="Allocates page-locked host memory." shape="rect">cuMemAllocHost()</a> and <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g572ca4011bfcb25034888a14d4e035b9" title="Allocates page-locked host memory." shape="rect">cuMemHostAlloc()</a> is always directly accessible from all contexts on all devices that support unified addressing. This is the case regardless
                  of whether or not the flags <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1g50f4528d46bda58b592551654a7ee0ff" shape="rect">CU_MEMHOSTALLOC_PORTABLE</a> and <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1g054589ee2a0f188e664d93965d81113d" shape="rect">CU_MEMHOSTALLOC_DEVICEMAP</a> are specified.
               </p>
               <p class="p">The pointer value through which allocated host memory may be accessed in kernels on all devices that support unified addressing
                  is the same as the pointer value through which that memory is accessed on the host, so it is not necessary to call <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g57a39e5cba26af4d06be67fc77cc62f0" title="Passes back device pointer of mapped pinned memory." shape="rect">cuMemHostGetDevicePointer()</a> to get the device pointer for these allocations.
               </p>
               <p class="p">Note that this is not the case for memory allocated using the flag <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1g7361580951deecace15352c97a210038" shape="rect">CU_MEMHOSTALLOC_WRITECOMBINED</a>, as discussed below.
               </p>
               <p class="p apiDesc_subtitle"><strong class="ph b">Automatic Registration of Peer Memory</strong></p>
               <p class="p">Upon enabling direct access from a context that supports unified addressing to another peer context that supports unified
                  addressing using <a class="xref" href="group__CUDA__PEER__ACCESS.html#group__CUDA__PEER__ACCESS_1g0889ec6728e61c05ed359551d67b3f5a" title="Enables direct access to memory allocations in a peer context." shape="rect">cuCtxEnablePeerAccess()</a> all memory allocated in the peer context using <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1gb82d2a09844a58dd9e744dc31e8aa467" title="Allocates device memory." shape="rect">cuMemAlloc()</a> and <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1gcbe9b033f6c4de80f63cc6e58ed9a45a" title="Allocates pitched device memory." shape="rect">cuMemAllocPitch()</a> will immediately be accessible by the current context. The device pointer value through which any peer memory may be accessed
                  in the current context is the same pointer value through which that memory may be accessed in the peer context.
               </p>
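                <p class="p">A minimal sketch of this behavior, assuming two unified-addressing devices for which <tt class="ph tt code">cuDeviceCanAccessPeer()</tt> reports peer support (error checking and buffer sizes are simplifying assumptions):</p>
                <pre class="pre screen" xml:space="preserve">#include &lt;cuda.h&gt;

int main(void)
{
    CUdevice dev0, dev1;
    CUcontext ctx0, ctx1;
    CUdeviceptr peer_mem, local_mem;
    int can_access = 0;

    cuInit(0);
    cuDeviceGet(&amp;dev0, 0);
    cuDeviceGet(&amp;dev1, 1);
    cuDeviceCanAccessPeer(&amp;can_access, dev0, dev1);
    if (!can_access)
        return 0;                       /* peer access not available */

    cuCtxCreate(&amp;ctx1, 0, dev1);        /* peer context (becomes current) */
    cuMemAlloc(&amp;peer_mem, 1048576);     /* allocation owned by ctx1 */

    cuCtxCreate(&amp;ctx0, 0, dev0);        /* current context */
    cuCtxEnablePeerAccess(ctx1, 0);     /* ctx1's allocations become accessible */

    cuMemAlloc(&amp;local_mem, 1048576);
    /* peer_mem is usable here with the same pointer value it has in ctx1;
       cuMemcpy() infers where each pointer resides. */
    cuMemcpy(peer_mem, local_mem, 1048576);

    cuCtxDestroy(ctx0);
    cuCtxDestroy(ctx1);
    return 0;
}</pre>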
               <p class="p apiDesc_subtitle"><strong class="ph b">Exceptions, Disjoint Addressing</strong></p>
               <p class="p">Not all memory may be accessed on devices through the same pointer value through which they are accessed on the host. These
                  exceptions are host memory registered using <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1gf0a9fe11544326dabd743b7aa6b54223" title="Registers an existing host memory range for use by CUDA." shape="rect">cuMemHostRegister()</a> and host memory allocated using the flag <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1g7361580951deecace15352c97a210038" shape="rect">CU_MEMHOSTALLOC_WRITECOMBINED</a>. For these exceptions, there exists a distinct host and device address for the memory. The device address is guaranteed to
                  not overlap any valid host pointer range and is guaranteed to have the same value across all contexts that support unified
                  addressing.
               </p>
               <p class="p">This device address may be queried using <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g57a39e5cba26af4d06be67fc77cc62f0" title="Passes back device pointer of mapped pinned memory." shape="rect">cuMemHostGetDevicePointer()</a> when a context using unified addressing is current. Either the host or the unified device pointer value may be used to refer
                  to this memory through <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g8d0ff510f26d4b87bd3a51e731e7f698" title="Copies memory." shape="rect">cuMemcpy()</a> and similar functions using the <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gg8a114cc994ad2e865c44ef3838eaec727a47ca2de6db5cf82084ad80ce66aa71" shape="rect">CU_MEMORYTYPE_UNIFIED</a> memory type. 
               </p>
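                <p class="p">A short sketch of this case (the flags, buffer size, and omission of error checking are assumptions made for brevity):</p>
                <pre class="pre screen" xml:space="preserve">#include &lt;cuda.h&gt;

int main(void)
{
    CUdevice dev;
    CUcontext ctx;
    void *wc_host;
    CUdeviceptr wc_dev;

    cuInit(0);
    cuDeviceGet(&amp;dev, 0);
    cuCtxCreate(&amp;ctx, 0, dev);

    /* Write-combined, device-mapped host memory: one of the
       disjoint-addressing exceptions described above. */
    cuMemHostAlloc(&amp;wc_host, 1048576,
                   CU_MEMHOSTALLOC_DEVICEMAP | CU_MEMHOSTALLOC_WRITECOMBINED);

    /* The device address differs from wc_host and is queried explicitly. */
    cuMemHostGetDevicePointer(&amp;wc_dev, wc_host, 0);

    /* Either wc_host (cast to CUdeviceptr) or wc_dev may now be passed to
       cuMemcpy() and similar functions. */

    cuMemFreeHost(wc_host);
    cuCtxDestroy(ctx);
    return 0;
}</pre>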
            </div>
            <h3 class="fake_sectiontitle member_header">Functions</h3>
            <dl class="members">
               <dt><span class="member_type"><a href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gc6c391505e117393cc2558fff6bfc2e9" title="" shape="rect">CUresult</a>&nbsp;</span><span class="member_name"><a href="#group__CUDA__UNIFIED_1g0c28ed0aff848042bc0533110e45820c" shape="rect">cuPointerGetAttribute</a> (  void*<span>&nbsp;</span><span class="keyword keyword apiItemName">data</span>, <a href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gc2cce590e35080745e72633dfc6e0b60" title="" shape="rect">CUpointer_attribute</a><span>&nbsp;</span><span class="keyword keyword apiItemName">attribute</span>, <a href="group__CUDA__TYPES.html#group__CUDA__TYPES_1g5e264ce2ad6a38761e7e04921ef771de" title="" shape="rect">CUdeviceptr</a><span>&nbsp;</span><span class="keyword keyword apiItemName">ptr</span> ) </span></dt>
               <dd class="shortdesc"><span></span><span class="desc">Returns information about a pointer. </span></dd>
            </dl>
            <div class="description">
               <h3 class="sectiontitle">Functions</h3>
               <dl class="description">
                  <dt class="description"><a name="group__CUDA__UNIFIED_1g0c28ed0aff848042bc0533110e45820c" id="group__CUDA__UNIFIED_1g0c28ed0aff848042bc0533110e45820c" shape="rect">
                        <!-- --></a><span><a href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gc6c391505e117393cc2558fff6bfc2e9" title="" shape="rect">CUresult</a> cuPointerGetAttribute (  void*<span>&nbsp;</span><span class="keyword keyword apiItemName">data</span>, <a href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gc2cce590e35080745e72633dfc6e0b60" title="" shape="rect">CUpointer_attribute</a><span>&nbsp;</span><span class="keyword keyword apiItemName">attribute</span>, <a href="group__CUDA__TYPES.html#group__CUDA__TYPES_1g5e264ce2ad6a38761e7e04921ef771de" title="" shape="rect">CUdeviceptr</a><span>&nbsp;</span><span class="keyword keyword apiItemName">ptr</span> ) </span></dt>
                  <dd class="description">
                     <div class="section">Returns information about a pointer. </div>
                     <div class="section">
                        <h6 class="parameter_header">
                           Parameters
                           
                        </h6>
                        <dl class="table-display-params">
                           <dt><tt class="code"><span class="keyword keyword apiItemName">data</span></tt></dt>
                           <dd>- Returned pointer attribute value </dd>
                           <dt><tt class="code"><span class="keyword keyword apiItemName">attribute</span></tt></dt>
                           <dd>- Pointer attribute to query </dd>
                           <dt><tt class="code"><span class="keyword keyword apiItemName">ptr</span></tt></dt>
                           <dd>- Pointer</dd>
                        </dl>
                     </div>
                     <div class="section">
                        <h6 class="return_header">Returns</h6>
                        <p class="return"><a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc6c391505e117393cc2558fff6bfc2e9a0eed720f8a87cd1c5fd1c453bc7a03d" shape="rect">CUDA_SUCCESS</a>, <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc6c391505e117393cc2558fff6bfc2e9acf52f132faf29b473cdda6061f0f44a" shape="rect">CUDA_ERROR_DEINITIALIZED</a>, <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc6c391505e117393cc2558fff6bfc2e98feb999f0af99b4a25ab26b3866f4df8" shape="rect">CUDA_ERROR_NOT_INITIALIZED</a>, <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc6c391505e117393cc2558fff6bfc2e9a484e9af32c1e9893ff21f0e0191a12d" shape="rect">CUDA_ERROR_INVALID_CONTEXT</a>, <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc6c391505e117393cc2558fff6bfc2e990696c86fcee1f536a1ec7d25867feeb" shape="rect">CUDA_ERROR_INVALID_VALUE</a>, <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc6c391505e117393cc2558fff6bfc2e96f047e7215788ca96c81af92a04bfb6c" shape="rect">CUDA_ERROR_INVALID_DEVICE</a></p>
                     </div>
                     <div class="section">
                        <h6 class="description_header">Description</h6>
                        <p>The supported attributes are:</p>
                        <p class="p">
                           <ul class="ul">
                              <li class="li">
                                 <p class="p"><a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc2cce590e35080745e72633dfc6e0b60f0470fdbd1a5ff72c341f762f49506ab" shape="rect">CU_POINTER_ATTRIBUTE_CONTEXT</a>:
                                 </p>
                              </li>
                           </ul>
                        </p>
                        <p class="p">Returns in <tt class="ph tt code">*data</tt> the <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gf9f5bd81658f866613785b3a0bb7d7d9" shape="rect">CUcontext</a> in which <tt class="ph tt code">ptr</tt> was allocated or registered. The type of <tt class="ph tt code">data</tt> must be <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gf9f5bd81658f866613785b3a0bb7d7d9" shape="rect">CUcontext</a> *.
                        </p>
                        <p class="p">If <tt class="ph tt code">ptr</tt> was not allocated by, mapped by, or registered with a <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gf9f5bd81658f866613785b3a0bb7d7d9" shape="rect">CUcontext</a> which uses unified virtual addressing then <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc6c391505e117393cc2558fff6bfc2e990696c86fcee1f536a1ec7d25867feeb" shape="rect">CUDA_ERROR_INVALID_VALUE</a> is returned.
                        </p>
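                         <p class="p">For instance, a hypothetical helper that retrieves the owning context of a pointer might look like this sketch (the helper name is illustrative only; failure is reduced to a NULL result):</p>
                         <pre class="pre screen" xml:space="preserve">#include &lt;cuda.h&gt;
#include &lt;stddef.h&gt;

/* Hypothetical helper: returns the CUcontext in which ptr was allocated,
   mapped, or registered, or NULL if the pointer is unknown to any context
   using unified virtual addressing. */
static CUcontext owning_context(CUdeviceptr ptr)
{
    CUcontext ctx = NULL;   /* data must be of type CUcontext * */
    if (cuPointerGetAttribute(&amp;ctx, CU_POINTER_ATTRIBUTE_CONTEXT, ptr)
            != CUDA_SUCCESS)
        return NULL;
    return ctx;
}</pre>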
                        <p class="p">
                           <ul class="ul">
                              <li class="li">
                                 <p class="p"><a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc2cce590e35080745e72633dfc6e0b600409e16293b60b383f30a9b417b2917c" shape="rect">CU_POINTER_ATTRIBUTE_MEMORY_TYPE</a>:
                                 </p>
                              </li>
                           </ul>
                        </p>
                        <p class="p">Returns in <tt class="ph tt code">*data</tt> the physical memory type of the memory that <tt class="ph tt code">ptr</tt> addresses as a <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1g8a114cc994ad2e865c44ef3838eaec72" shape="rect">CUmemorytype</a> enumerated value. The type of <tt class="ph tt code">data</tt> must be unsigned int.
                        </p>
                        <p class="p">If <tt class="ph tt code">ptr</tt> addresses device memory then <tt class="ph tt code">*data</tt> is set to <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gg8a114cc994ad2e865c44ef3838eaec72ec7e15ba4b111a26adb3487023707299" shape="rect">CU_MEMORYTYPE_DEVICE</a>. The particular <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gcd81b70eb9968392bb5cdf582af8eab4" shape="rect">CUdevice</a> on which the memory resides is the <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gcd81b70eb9968392bb5cdf582af8eab4" shape="rect">CUdevice</a> of the <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gf9f5bd81658f866613785b3a0bb7d7d9" shape="rect">CUcontext</a> returned by the <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc2cce590e35080745e72633dfc6e0b60f0470fdbd1a5ff72c341f762f49506ab" shape="rect">CU_POINTER_ATTRIBUTE_CONTEXT</a> attribute of <tt class="ph tt code">ptr</tt>.
                        </p>
                        <p class="p">If <tt class="ph tt code">ptr</tt> addresses host memory then <tt class="ph tt code">*data</tt> is set to <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gg8a114cc994ad2e865c44ef3838eaec727f98a88f26eec8490bfc180c5a73e101" shape="rect">CU_MEMORYTYPE_HOST</a>.
                        </p>
                        <p class="p">If <tt class="ph tt code">ptr</tt> was not allocated by, mapped by, or registered with a <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gf9f5bd81658f866613785b3a0bb7d7d9" shape="rect">CUcontext</a> which uses unified virtual addressing then <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc6c391505e117393cc2558fff6bfc2e990696c86fcee1f536a1ec7d25867feeb" shape="rect">CUDA_ERROR_INVALID_VALUE</a> is returned.
                        </p>
                        <p class="p">If the current <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gf9f5bd81658f866613785b3a0bb7d7d9" shape="rect">CUcontext</a> does not support unified virtual addressing then <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc6c391505e117393cc2558fff6bfc2e9a484e9af32c1e9893ff21f0e0191a12d" shape="rect">CUDA_ERROR_INVALID_CONTEXT</a> is returned.
                        </p>
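                         <p class="p">A sketch of querying this attribute (the helper name and return convention are illustrative assumptions):</p>
                         <pre class="pre screen" xml:space="preserve">#include &lt;cuda.h&gt;

/* Hypothetical helper: returns 1 if ptr addresses device memory, 0 if it
   addresses host memory, and -1 if the query fails (for example, when the
   pointer is unknown to unified addressing). */
static int points_to_device(CUdeviceptr ptr)
{
    unsigned int mem_type = 0;   /* data must be of type unsigned int */
    if (cuPointerGetAttribute(&amp;mem_type, CU_POINTER_ATTRIBUTE_MEMORY_TYPE, ptr)
            != CUDA_SUCCESS)
        return -1;
    return mem_type == CU_MEMORYTYPE_DEVICE ? 1 : 0;
}</pre>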
                        <p class="p">
                           <ul class="ul">
                              <li class="li">
                                 <p class="p"><a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc2cce590e35080745e72633dfc6e0b60b5446064bbfa484ea8d13025f1573d5d" shape="rect">CU_POINTER_ATTRIBUTE_DEVICE_POINTER</a>:
                                 </p>
                              </li>
                           </ul>
                        </p>
                        <p class="p">Returns in <tt class="ph tt code">*data</tt> the device pointer value through which <tt class="ph tt code">ptr</tt> may be accessed by kernels running in the current <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gf9f5bd81658f866613785b3a0bb7d7d9" shape="rect">CUcontext</a>. The type of <tt class="ph tt code">data</tt> must be CUdeviceptr *.
                        </p>
                        <p class="p">If there exists no device pointer value through which kernels running in the current <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gf9f5bd81658f866613785b3a0bb7d7d9" shape="rect">CUcontext</a> may access <tt class="ph tt code">ptr</tt> then <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc6c391505e117393cc2558fff6bfc2e990696c86fcee1f536a1ec7d25867feeb" shape="rect">CUDA_ERROR_INVALID_VALUE</a> is returned.
                        </p>
                        <p class="p">If there is no current <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1gf9f5bd81658f866613785b3a0bb7d7d9" shape="rect">CUcontext</a> then <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc6c391505e117393cc2558fff6bfc2e9a484e9af32c1e9893ff21f0e0191a12d" shape="rect">CUDA_ERROR_INVALID_CONTEXT</a> is returned.
                        </p>
                        <p class="p">Except in the exceptional disjoint addressing cases discussed below, the value returned in <tt class="ph tt code">*data</tt> will equal the input value <tt class="ph tt code">ptr</tt>.
                        </p>
                        <p class="p">
                           <ul class="ul">
                              <li class="li">
                                 <p class="p"><a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc2cce590e35080745e72633dfc6e0b60ab17d9902b1b631982ae6a3a9a436fdc" shape="rect">CU_POINTER_ATTRIBUTE_HOST_POINTER</a>:
                                 </p>
                              </li>
                           </ul>
                        </p>
                        <p class="p">Returns in <tt class="ph tt code">*data</tt> the host pointer value through which <tt class="ph tt code">ptr</tt> may be accessed by by the host program. The type of <tt class="ph tt code">data</tt> must be void **. If there exists no host pointer value through which the host program may directly access <tt class="ph tt code">ptr</tt> then <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc6c391505e117393cc2558fff6bfc2e990696c86fcee1f536a1ec7d25867feeb" shape="rect">CUDA_ERROR_INVALID_VALUE</a> is returned.
                        </p>
                        <p class="p">Except in the exceptional disjoint addressing cases discussed below, the value returned in <tt class="ph tt code">*data</tt> will equal the input value <tt class="ph tt code">ptr</tt>.
                        </p>
                        <p class="p">
                           <ul class="ul">
                              <li class="li">
                                 <p class="p"><a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc2cce590e35080745e72633dfc6e0b60995003218508cc15bfcf197aa9b30a1b" shape="rect">CU_POINTER_ATTRIBUTE_P2P_TOKENS</a>:
                                 </p>
                              </li>
                           </ul>
                        </p>
                        <p class="p">Returns in <tt class="ph tt code">*data</tt> two tokens for use with the nv-p2p.h Linux kernel interface. <tt class="ph tt code">data</tt> must be a struct of type <a class="xref" href="structCUDA__POINTER__ATTRIBUTE__P2P__TOKENS.html#structCUDA__POINTER__ATTRIBUTE__P2P__TOKENS" shape="rect">CUDA_POINTER_ATTRIBUTE_P2P_TOKENS</a>.
                        </p>
                        <p class="p"><tt class="ph tt code">ptr</tt> must be a pointer to memory obtained from :<a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1gb82d2a09844a58dd9e744dc31e8aa467" title="Allocates device memory." shape="rect">cuMemAlloc()</a>. Note that p2pToken and vaSpaceToken are only valid for the lifetime of the source allocation. A subsequent allocation at
                           the same address may return completely different tokens.
                        </p>
                        <p class="p"></p>
                        <p class="p">
                           Note that for most allocations in the unified virtual address space the host and device pointer for accessing the allocation
                            will be the same. The exceptions to this are:
                           <ul class="ul">
                              <li class="li">
                                 <p class="p">user memory registered using <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1gf0a9fe11544326dabd743b7aa6b54223" title="Registers an existing host memory range for use by CUDA." shape="rect">cuMemHostRegister</a></p>
                              </li>
                              <li class="li">
                                 <p class="p">host memory allocated using <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g572ca4011bfcb25034888a14d4e035b9" title="Allocates page-locked host memory." shape="rect">cuMemHostAlloc</a> with the <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1g7361580951deecace15352c97a210038" shape="rect">CU_MEMHOSTALLOC_WRITECOMBINED</a> flag For these types of allocation there will exist separate, disjoint host and device addresses for accessing the allocation.
                                    In particular
                                 </p>
                              </li>
                              <li class="li">
                                 <p class="p">The host address will correspond to an invalid unmapped device address (which will result in an exception if accessed from
                                    the device)
                                 </p>
                              </li>
                              <li class="li">
                                 <p class="p">The device address will correspond to an invalid unmapped host address (which will result in an exception if accessed from
                                    the host). For these types of allocations, querying <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc2cce590e35080745e72633dfc6e0b60ab17d9902b1b631982ae6a3a9a436fdc" shape="rect">CU_POINTER_ATTRIBUTE_HOST_POINTER</a> and <a class="xref" href="group__CUDA__TYPES.html#group__CUDA__TYPES_1ggc2cce590e35080745e72633dfc6e0b60b5446064bbfa484ea8d13025f1573d5d" shape="rect">CU_POINTER_ATTRIBUTE_DEVICE_POINTER</a> may be used to retrieve the host and device addresses from either address.
                                 </p>
                              </li>
                           </ul>
                        </p>
                        <p class="p"></p>
                        <p class="p"></p>
                        <p class="p">
                           <div class="note note"><span class="notetitle">Note:</span><p class="p">Note that this function may also return error codes from previous, asynchronous launches.</p>
                           </div>
                        </p>
                        <p class="p"></p>
                        <p class="p apiDesc_subtitle"><strong class="ph b">See also:</strong></p>
                        <p class="p see_subsection"><a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1gb82d2a09844a58dd9e744dc31e8aa467" title="Allocates device memory." shape="rect">cuMemAlloc</a>, <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g89b3f154e17cc89b6eea277dbdf5c93a" title="Frees device memory." shape="rect">cuMemFree</a>, <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1gdd8311286d2c2691605362c689bc64e0" title="Allocates page-locked host memory." shape="rect">cuMemAllocHost</a>, <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g62e0fdbe181dab6b1c90fa1a51c7b92c" title="Frees page-locked host memory." shape="rect">cuMemFreeHost</a>, <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g572ca4011bfcb25034888a14d4e035b9" title="Allocates page-locked host memory." shape="rect">cuMemHostAlloc</a>, <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1gf0a9fe11544326dabd743b7aa6b54223" title="Registers an existing host memory range for use by CUDA." shape="rect">cuMemHostRegister</a>, <a class="xref" href="group__CUDA__MEM.html#group__CUDA__MEM_1g63f450c8125359be87b7623b1c0b2a14" title="Unregisters a memory range that was registered with cuMemHostRegister." shape="rect">cuMemHostUnregister</a></p>
                        <p class="p"></p>
                     </div>
                  </dd>
               </dl>
            </div>
         </div>
         
         <hr id="contents-end"></hr>
         <div id="breadcrumbs"><a href="group__CUDA__MEM.html" shape="rect">&lt; Previous</a> | <a href="group__CUDA__STREAM.html" shape="rect">Next &gt;</a></div>
         <div id="release-info">CUDA Driver API
            (<a href="../../pdf/CUDA_Driver_API.pdf">PDF</a>)
            -
            CUDA Toolkit v5.5
            (<a href="https://developer.nvidia.com/cuda-toolkit-archive">older</a>)
            -
            Last updated 
            July 19, 2013
            -
            <a href="mailto:cudatools@nvidia.com?subject=CUDA Tools Documentation Feedback: cuda-driver-api">Send Feedback</a></div>
         
      </article>
      
      <header id="header"><span id="company">NVIDIA</span><span id="site-title">CUDA Toolkit Documentation</span><form id="search" method="get" action="search">
            <input type="text" name="search-text"></input><fieldset id="search-location">
               <legend>Search In:</legend>
               <label><input type="radio" name="search-type" value="site"></input>Entire Site</label>
               <label><input type="radio" name="search-type" value="document"></input>Just This Document</label></fieldset>
            <button type="reset">clear search</button>
            <button id="submit" type="submit">search</button></form>
      </header>
      <nav id="site-nav">
         <div class="category closed"><span class="twiddle">▷</span><a href="../index.html" title="The root of the site.">CUDA Toolkit</a></div>
         <ul class="closed">
            <li><a href="../cuda-toolkit-release-notes/index.html" title="The Release Notes for the CUDA Toolkit from v4.0 to today.">Release Notes</a></li>
            <li><a href="../eula/index.html" title="The End User License Agreements for the NVIDIA CUDA Toolkit, the NVIDIA CUDA Samples, the NVIDIA Display Driver, and NVIDIA NSight (Visual Studio Edition).">EULA</a></li>
            <li><a href="../cuda-getting-started-guide-for-linux/index.html" title="This guide discusses how to install and check for correct operation of the CUDA Development Tools on GNU/Linux systems.">Getting Started Linux</a></li>
            <li><a href="../cuda-getting-started-guide-for-mac-os-x/index.html" title="This guide discusses how to install and check for correct operation of the CUDA Development Tools on Mac OS X systems.">Getting Started Mac OS X</a></li>
            <li><a href="../cuda-getting-started-guide-for-microsoft-windows/index.html" title="This guide discusses how to install and check for correct operation of the CUDA Development Tools on Microsoft Windows systems.">Getting Started Windows</a></li>
            <li><a href="../cuda-c-programming-guide/index.html" title="This guide provides a detailed discussion of the CUDA programming model and programming interface. It then describes the hardware implementation, and provides guidance on how to achieve maximum performance. The Appendixes include a list of all CUDA-enabled devices, detailed description of all extensions to the C language, listings of supported mathematical functions, C++ features supported in host and device code, details on texture fetching, technical specifications of various devices, and concludes by introducing the low-level driver API.">Programming Guide</a></li>
            <li><a href="../cuda-c-best-practices-guide/index.html" title="This guide presents established parallelization and optimization techniques and explains coding metaphors and idioms that can greatly simplify programming for CUDA-capable GPU architectures. The intent is to provide guidelines for obtaining the best performance from NVIDIA GPUs using the CUDA Toolkit.">Best Practices Guide</a></li>
            <li><a href="../kepler-compatibility-guide/index.html" title="This application note is intended to help developers ensure that their NVIDIA CUDA applications will run effectively on GPUs based on the NVIDIA Kepler Architecture. This document provides guidance to ensure that your software applications are compatible with Kepler.">Kepler Compatibility Guide</a></li>
            <li><a href="../kepler-tuning-guide/index.html" title="Kepler is NVIDIA's next-generation architecture for CUDA compute applications. Applications that follow the best practices for the Fermi architecture should typically see speedups on the Kepler architecture without any code changes. This guide summarizes the ways that an application can be fine-tuned to gain additional speedups by leveraging Kepler architectural features.">Kepler Tuning Guide</a></li>
            <li><a href="../parallel-thread-execution/index.html" title="This guide provides detailed instructions on the use of PTX, a low-level parallel thread execution virtual machine and instruction set architecture (ISA). PTX exposes the GPU as a data-parallel computing device.">PTX ISA</a></li>
            <li><a href="../optimus-developer-guide/index.html" title="This document explains how CUDA APIs can be used to query for GPU capabilities in NVIDIA Optimus systems.">Developer Guide for Optimus</a></li>
            <li><a href="../video-decoder/index.html" title="This document provides the video decoder API specification and the format conversion and display using DirectX or OpenGL following decode.">Video Decoder</a></li>
            <li><a href="../video-encoder/index.html" title="This document provides the CUDA video encoder specifications, including the C-library API functions and encoder query parameters.">Video Encoder</a></li>
            <li><a href="../inline-ptx-assembly/index.html" title="This document shows how to inline PTX (parallel thread execution) assembly language statements into CUDA code. It describes available assembler statement parameters and constraints, and the document also provides a list of some pitfalls that you may encounter.">Inline PTX Assembly</a></li>
            <li><a href="../cuda-runtime-api/index.html" title="The CUDA runtime API.">CUDA Runtime API</a></li>
            <li><a href="../cuda-driver-api/index.html" title="The CUDA driver API.">CUDA Driver API</a></li>
            <li><a href="../cuda-math-api/index.html" title="The CUDA math API.">CUDA Math API</a></li>
            <li><a href="../cublas/index.html" title="The CUBLAS library is an implementation of BLAS (Basic Linear Algebra Subprograms) on top of the NVIDIA CUDA runtime. It allows the user to access the computational resources of NVIDIA Graphical Processing Unit (GPU), but does not auto-parallelize across multiple GPUs.">CUBLAS</a></li>
            <li><a href="../cufft/index.html" title="The CUFFT library user guide.">CUFFT</a></li>
            <li><a href="../curand/index.html" title="The CURAND library user guide.">CURAND</a></li>
            <li><a href="../cusparse/index.html" title="The CUSPARSE library user guide.">CUSPARSE</a></li>
            <li><a href="../npp/index.html" title="NVIDIA NPP is a library of functions for performing CUDA accelerated processing. The initial set of functionality in the library focuses on imaging and video processing and is widely applicable for developers in these areas. NPP will evolve over time to encompass more of the compute heavy tasks in a variety of problem domains. The NPP library is written to maximize flexibility, while maintaining high performance.">NPP</a></li>
            <li><a href="../thrust/index.html" title="The Thrust getting started guide.">Thrust</a></li>
            <li><a href="../cuda-samples/index.html" title="This document contains a complete listing of the code samples that are included with the NVIDIA CUDA Toolkit. It describes each code sample, lists the minimum GPU specification, and provides links to the source code and white papers if available.">CUDA Samples</a></li>
            <li><a href="../cuda-compiler-driver-nvcc/index.html" title="This document is a reference guide on the use of the CUDA compiler driver nvcc. Instead of being a specific CUDA compilation driver, nvcc mimics the behavior of the GNU compiler gcc, accepting a range of conventional compiler options, such as for defining macros and include/library paths, and for steering the compilation process.">NVCC</a></li>
            <li><a href="../cuda-gdb/index.html" title="The NVIDIA tool for debugging CUDA applications running on Linux and Mac, providing developers with a mechanism for debugging CUDA applications running on actual hardware. CUDA-GDB is an extension to the x86-64 port of GDB, the GNU Project debugger.">CUDA-GDB</a></li>
            <li><a href="../cuda-memcheck/index.html" title="CUDA-MEMCHECK is a suite of run time tools capable of precisely detecting out of bounds and misaligned memory access errors, checking device allocation leaks, reporting hardware errors and identifying shared memory data access hazards.">CUDA-MEMCHECK</a></li>
            <li><a href="../nsight-eclipse-edition-getting-started-guide/index.html" title="Nsight Eclipse Edition getting started guide">Nsight Eclipse Edition</a></li>
            <li><a href="../profiler-users-guide/index.html" title="This is the guide to the Profiler.">Profiler</a></li>
            <li><a href="../cuda-binary-utilities/index.html" title="The application notes for cuobjdump and nvdisasm.">CUDA Binary Utilities</a></li>
            <li><a href="../floating-point/index.html" title="A number of issues related to floating point accuracy and compliance are a frequent source of confusion on both CPUs and GPUs. The purpose of this white paper is to discuss the most common issues related to NVIDIA GPUs and to supplement the documentation in the CUDA C Programming Guide.">Floating Point and IEEE 754</a></li>
            <li><a href="../incomplete-lu-cholesky/index.html" title="In this white paper we show how to use the CUSPARSE and CUBLAS libraries to achieve a 2x speedup over CPU in the incomplete-LU and Cholesky preconditioned iterative methods. We focus on the Bi-Conjugate Gradient Stabilized and Conjugate Gradient iterative methods, that can be used to solve large sparse nonsymmetric and symmetric positive definite linear systems, respectively. Also, we comment on the parallel sparse triangular solve, which is an essential building block in these algorithms.">Incomplete-LU and Cholesky Preconditioned Iterative Methods</a></li>
            <li><a href="../libnvvm-api/index.html" title="The libNVVM API.">libNVVM API</a></li>
            <li><a href="../libdevice-users-guide/index.html" title="The libdevice library is an LLVM bitcode library that implements common functions for GPU kernels.">libdevice User's Guide</a></li>
            <li><a href="../nvvm-ir-spec/index.html" title="NVVM IR is a compiler IR (internal representation) based on the LLVM IR. The NVVM IR is designed to represent GPU compute kernels (for example, CUDA kernels). High-level language front-ends, like the CUDA C compiler front-end, can generate NVVM IR.">NVVM IR</a></li>
            <li><a href="../cupti/index.html" title="The CUPTI API.">CUPTI</a></li>
            <li><a href="../debugger-api/index.html" title="The CUDA debugger API.">Debugger API</a></li>
            <li><a href="../gpudirect-rdma/index.html" title="A tool for Kepler-class GPUs and CUDA 5.0 enabling a direct path for communication between the GPU and a peer device on the PCI Express bus when the devices share the same upstream root complex using standard features of PCI Express. This document introduces the technology and describes the steps necessary to enable a RDMA for GPUDirect connection to NVIDIA GPUs within the Linux device driver model.">RDMA for GPUDirect</a></li>
         </ul>
         <div class="category"><span class="twiddle">▼</span><a href="index.html" title="CUDA Driver API">CUDA Driver API</a></div>
         <ul>
            <li><a href="api-sync-behavior.html#api-sync-behavior">1.&nbsp;API synchronization behavior </a></li>
            <li><a href="modules.html#modules">2.&nbsp;Modules</a><ul>
                  <li><a href="group__CUDA__TYPES.html#group__CUDA__TYPES">2.1.&nbsp;Data types used by CUDA driver</a></li>
                  <li><a href="group__CUDA__INITIALIZE.html#group__CUDA__INITIALIZE">2.2.&nbsp;Initialization</a></li>
                  <li><a href="group__CUDA__VERSION.html#group__CUDA__VERSION">2.3.&nbsp;Version Management</a></li>
                  <li><a href="group__CUDA__DEVICE.html#group__CUDA__DEVICE">2.4.&nbsp;Device Management</a></li>
                  <li><a href="group__CUDA__DEVICE__DEPRECATED.html#group__CUDA__DEVICE__DEPRECATED">2.5.&nbsp;Device Management [DEPRECATED]</a></li>
                  <li><a href="group__CUDA__CTX.html#group__CUDA__CTX">2.6.&nbsp;Context Management</a></li>
                  <li><a href="group__CUDA__CTX__DEPRECATED.html#group__CUDA__CTX__DEPRECATED">2.7.&nbsp;Context Management [DEPRECATED]</a></li>
                  <li><a href="group__CUDA__MODULE.html#group__CUDA__MODULE">2.8.&nbsp;Module Management</a></li>
                  <li><a href="group__CUDA__MEM.html#group__CUDA__MEM">2.9.&nbsp;Memory Management</a></li>
                  <li><a href="group__CUDA__UNIFIED.html#group__CUDA__UNIFIED">2.10.&nbsp;Unified Addressing</a></li>
                  <li><a href="group__CUDA__STREAM.html#group__CUDA__STREAM">2.11.&nbsp;Stream Management</a></li>
                  <li><a href="group__CUDA__EVENT.html#group__CUDA__EVENT">2.12.&nbsp;Event Management</a></li>
                  <li><a href="group__CUDA__EXEC.html#group__CUDA__EXEC">2.13.&nbsp;Execution Control</a></li>
                  <li><a href="group__CUDA__EXEC__DEPRECATED.html#group__CUDA__EXEC__DEPRECATED">2.14.&nbsp;Execution Control [DEPRECATED]</a></li>
                  <li><a href="group__CUDA__TEXREF.html#group__CUDA__TEXREF">2.15.&nbsp;Texture Reference Management</a></li>
                  <li><a href="group__CUDA__TEXREF__DEPRECATED.html#group__CUDA__TEXREF__DEPRECATED">2.16.&nbsp;Texture Reference Management [DEPRECATED]</a></li>
                  <li><a href="group__CUDA__SURFREF.html#group__CUDA__SURFREF">2.17.&nbsp;Surface Reference Management</a></li>
                  <li><a href="group__CUDA__TEXOBJECT.html#group__CUDA__TEXOBJECT">2.18.&nbsp;Texture Object Management</a></li>
                  <li><a href="group__CUDA__SURFOBJECT.html#group__CUDA__SURFOBJECT">2.19.&nbsp;Surface Object Management</a></li>
                  <li><a href="group__CUDA__PEER__ACCESS.html#group__CUDA__PEER__ACCESS">2.20.&nbsp;Peer Context Memory Access</a></li>
                  <li><a href="group__CUDA__GRAPHICS.html#group__CUDA__GRAPHICS">2.21.&nbsp;Graphics Interoperability</a></li>
                  <li><a href="group__CUDA__PROFILER.html#group__CUDA__PROFILER">2.22.&nbsp;Profiler Control</a></li>
                  <li><a href="group__CUDA__GL.html#group__CUDA__GL">2.23.&nbsp;OpenGL Interoperability</a><ul>
                        <li><a href="group__CUDA__GL__DEPRECATED.html#group__CUDA__GL__DEPRECATED">2.23.1.&nbsp;OpenGL Interoperability [DEPRECATED]</a></li>
                     </ul>
                  </li>
                  <li><a href="group__CUDA__D3D9.html#group__CUDA__D3D9">2.24.&nbsp;Direct3D 9 Interoperability</a><ul>
                        <li><a href="group__CUDA__D3D9__DEPRECATED.html#group__CUDA__D3D9__DEPRECATED">2.24.1.&nbsp;Direct3D 9 Interoperability [DEPRECATED]</a></li>
                     </ul>
                  </li>
                  <li><a href="group__CUDA__D3D10.html#group__CUDA__D3D10">2.25.&nbsp;Direct3D 10 Interoperability</a><ul>
                        <li><a href="group__CUDA__D3D10__DEPRECATED.html#group__CUDA__D3D10__DEPRECATED">2.25.1.&nbsp;Direct3D 10 Interoperability [DEPRECATED]</a></li>
                     </ul>
                  </li>
                  <li><a href="group__CUDA__D3D11.html#group__CUDA__D3D11">2.26.&nbsp;Direct3D 11 Interoperability</a><ul>
                        <li><a href="group__CUDA__D3D11__DEPRECATED.html#group__CUDA__D3D11__DEPRECATED">2.26.1.&nbsp;Direct3D 11 Interoperability [DEPRECATED]</a></li>
                     </ul>
                  </li>
                  <li><a href="group__CUDA__VDPAU.html#group__CUDA__VDPAU">2.27.&nbsp;VDPAU Interoperability</a></li>
               </ul>
            </li>
            <li><a href="annotated.html#annotated">3.&nbsp;Data Structures</a><ul>
                  <li><a href="structCUDA__ARRAY3D__DESCRIPTOR.html#structCUDA__ARRAY3D__DESCRIPTOR">3.1.&nbsp;CUDA_ARRAY3D_DESCRIPTOR</a></li>
                  <li><a href="structCUDA__ARRAY__DESCRIPTOR.html#structCUDA__ARRAY__DESCRIPTOR">3.2.&nbsp;CUDA_ARRAY_DESCRIPTOR</a></li>
                  <li><a href="structCUDA__MEMCPY2D.html#structCUDA__MEMCPY2D">3.3.&nbsp;CUDA_MEMCPY2D</a></li>
                  <li><a href="structCUDA__MEMCPY3D.html#structCUDA__MEMCPY3D">3.4.&nbsp;CUDA_MEMCPY3D</a></li>
                  <li><a href="structCUDA__MEMCPY3D__PEER.html#structCUDA__MEMCPY3D__PEER">3.5.&nbsp;CUDA_MEMCPY3D_PEER</a></li>
                  <li><a href="structCUDA__POINTER__ATTRIBUTE__P2P__TOKENS.html#structCUDA__POINTER__ATTRIBUTE__P2P__TOKENS">3.6.&nbsp;CUDA_POINTER_ATTRIBUTE_P2P_TOKENS</a></li>
                  <li><a href="structCUDA__RESOURCE__DESC.html#structCUDA__RESOURCE__DESC">3.7.&nbsp;CUDA_RESOURCE_DESC</a></li>
                  <li><a href="structCUDA__RESOURCE__VIEW__DESC.html#structCUDA__RESOURCE__VIEW__DESC">3.8.&nbsp;CUDA_RESOURCE_VIEW_DESC</a></li>
                  <li><a href="structCUDA__TEXTURE__DESC.html#structCUDA__TEXTURE__DESC">3.9.&nbsp;CUDA_TEXTURE_DESC</a></li>
                  <li><a href="structCUdevprop.html#structCUdevprop">3.10.&nbsp;CUdevprop</a></li>
                  <li><a href="structCUipcEventHandle.html#structCUipcEventHandle">3.11.&nbsp;CUipcEventHandle</a></li>
                  <li><a href="structCUipcMemHandle.html#structCUipcMemHandle">3.12.&nbsp;CUipcMemHandle</a></li>
               </ul>
            </li>
            <li><a href="functions.html#functions">4.&nbsp;Data Fields</a></li>
            <li><a href="deprecated.html#deprecated">5.&nbsp;Deprecated List</a></li>
            <li><a href="notices-header.html#notices-header">Notices</a><ul></ul>
            </li>
         </ul>
      </nav>
      <nav id="search-results">
         <h2>Search Results</h2>
         <ol></ol>
      </nav>
      <script language="JavaScript" type="text/javascript" charset="utf-8" src="../common/formatting/common.min.js"></script>
      <script language="JavaScript" type="text/javascript" charset="utf-8" src="../common/scripts/omniture/s_code_us_dev_aut1-nolinktrackin.js"></script>
      <script language="JavaScript" type="text/javascript" charset="utf-8" src="../common/scripts/omniture/omniture.js"></script>
      <noscript><a href="http://www.omniture.com" title="Web Analytics"><img src="http://omniture.nvidia.com/b/ss/nvidiacudadocs/1/H.17--NS/0" height="1" width="1" border="0" alt=""></img></a></noscript>
      <script language="JavaScript" type="text/javascript" charset="utf-8" src="../common/scripts/google-analytics/google-analytics-write.js"></script>
      <script language="JavaScript" type="text/javascript" charset="utf-8" src="../common/scripts/google-analytics/google-analytics-tracker.js"></script>
      </body>
</html>