
<h1>PVH Specification</h1>
<h2>Rationale</h2>
<p>PVH is a new kind of guest that was introduced in Xen 4.4 as a DomU, and
in Xen 4.5 as a Dom0. The aim of PVH is to make use of the hardware
virtualization extensions present in modern x86 CPUs in order to
improve performance.</p>
<p>PVH is considered a mix between PV and HVM, and can be seen as a PV guest
that runs inside an HVM container, or as a PVHVM guest without any emulated
devices. The design goal of PVH is to provide the best possible performance
while reducing the number of modifications needed for a guest OS to run in
this mode (compared to pure PV).</p>
<p>This document describes the interfaces used by PVH guests, focusing
on how an OS should make use of them in order to support PVH.</p>
<h2>Early boot</h2>
<p>PVH guests use the PV boot mechanism, which means that the kernel is loaded
and directly launched by Xen (by jumping into the entry point). In order to do
this, Xen ELF notes containing the information Xen needs have to be added to
the guest kernel. Here is an example of the ELF notes added to the
FreeBSD amd64 kernel in order to boot as PVH:</p>
<pre><code>ELFNOTE(Xen, XEN_ELFNOTE_GUEST_OS,       .asciz, "FreeBSD")
ELFNOTE(Xen, XEN_ELFNOTE_GUEST_VERSION,  .asciz, __XSTRING(__FreeBSD_version))
ELFNOTE(Xen, XEN_ELFNOTE_XEN_VERSION,    .asciz, "xen-3.0")
ELFNOTE(Xen, XEN_ELFNOTE_VIRT_BASE,      .quad,  KERNBASE)
ELFNOTE(Xen, XEN_ELFNOTE_PADDR_OFFSET,   .quad,  KERNBASE)
ELFNOTE(Xen, XEN_ELFNOTE_ENTRY,          .quad,  xen_start)
ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, .quad,  hypercall_page)
ELFNOTE(Xen, XEN_ELFNOTE_HV_START_LOW,   .quad,  HYPERVISOR_VIRT_START)
ELFNOTE(Xen, XEN_ELFNOTE_FEATURES,       .asciz, "writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel|hvm_callback_vector")
ELFNOTE(Xen, XEN_ELFNOTE_PAE_MODE,       .asciz, "yes")
ELFNOTE(Xen, XEN_ELFNOTE_L1_MFN_VALID,   .long,  PG_V, PG_V)
ELFNOTE(Xen, XEN_ELFNOTE_LOADER,         .asciz, "generic")
ELFNOTE(Xen, XEN_ELFNOTE_SUSPEND_CANCEL, .long,  0)
ELFNOTE(Xen, XEN_ELFNOTE_BSD_SYMTAB,     .asciz, "yes")
</code></pre>
<p>On the Linux side, the above can be found in <code>arch/x86/xen/xen-head.S</code>.</p>
<p>It is important to highlight the following notes:</p>
<ul>
<li><code>XEN_ELFNOTE_ENTRY</code>: contains the virtual memory address of the kernel entry
    point.</li>
<li><code>XEN_ELFNOTE_HYPERCALL_PAGE</code>: contains the virtual memory address of the
    hypercall page inside of the guest kernel (this memory region will be filled
    by Xen prior to booting).</li>
<li><code>XEN_ELFNOTE_FEATURES</code>: contains the list of features supported by the kernel.
    In the example above the kernel is only able to boot as a PVH guest, but
    these options can be mixed with the ones used by pure PV guests in order to
    have a kernel that supports both PV and PVH (like Linux). The list of
    available options can be found in the <code>features.h</code> public header. Note that
    in the example above <code>hvm_callback_vector</code> is in <code>XEN_ELFNOTE_FEATURES</code>.
    Older hypervisors will balk at it being listed there, so it can also be put
    in <code>XEN_ELFNOTE_SUPPORTED_FEATURES</code>, which older hypervisors will ignore.</li>
</ul>
<p>Xen will jump into the kernel entry point defined in <code>XEN_ELFNOTE_ENTRY</code> with
paging enabled (either long mode or protected mode with paging turned on,
depending on the kernel bitness) and some basic page tables set up. An important
distinction for a 64-bit PVH guest is that it is launched at privilege level 0,
as opposed to a 64-bit PV guest, which is launched at privilege level 3.</p>
<p>Also, the <code>rsi</code> (<code>esi</code> on 32-bit) register will contain the virtual
memory address where Xen has placed the <code>start_info</code> structure. The <code>rsp</code> (<code>esp</code>
on 32-bit) register will point to the top of an initial single-page stack that
can be used by the guest kernel. The <code>start_info</code> structure contains all the
information the guest needs in order to initialize. More information about its
contents can be found in the <code>xen.h</code> public header.</p>
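<p>As an illustration, here is a minimal sketch (in C) of the first code run
after the assembly entry stub, assuming the stub has stashed the
<code>start_info</code> pointer passed in <code>rsi</code>; the <code>xen_start_info</code>,
<code>xen_start_c</code> and <code>early_init</code> names are illustrative, not part of any
interface:</p>
<pre><code>#include &lt;string.h&gt;     /* memcmp */
#include &lt;xen/xen.h&gt;    /* start_info_t, from the Xen public headers */

/* Stashed by the assembly entry stub from %rsi (illustrative name). */
start_info_t *xen_start_info;

extern void early_init(unsigned long nr_pages, const char *cmdline);

void xen_start_c(void)
{
    /* The magic string identifies the ABI, e.g. "xen-3.0-x86_64". */
    if (memcmp(xen_start_info-&gt;magic, "xen-3", 5) != 0)
        for (;;)
            ;   /* not started through the Xen PV boot path */

    /* Number of physical memory pages initially populated. */
    unsigned long nr_pages = xen_start_info-&gt;nr_pages;

    /* Guest command line provided by the toolstack. */
    const char *cmdline = (const char *)xen_start_info-&gt;cmd_line;

    early_init(nr_pages, cmdline);  /* hypothetical continuation */
}
</code></pre>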
<h3>Initial amd64 control register values</h3>
<p>Initial values for the control registers are set up by Xen before booting the
guest kernel. The guest kernel can expect to find the following features
enabled by Xen.</p>
<p><code>CR0</code> has the following bits set by Xen:</p>
<ul>
<li>PE (bit 0): protected mode enable.</li>
<li>ET (bit 4): 387 or newer processor.</li>
<li>PG (bit 31): paging enabled.</li>
</ul>
<p><code>CR4</code> has the following bits set by Xen:</p>
<ul>
<li>PAE (bit 5): PAE enabled.</li>
</ul>
<p>And finally in <code>EFER</code> the following features are enabled:</p>
<ul>
<li>LME (bit 8): Long mode enable.</li>
<li>LMA (bit 10): Long mode active.</li>
</ul>
<p>At least the following flags in <code>EFER</code> are guaranteed to be disabled:</p>
<ul>
<li>SCE (bit 0): System call extensions disabled.</li>
<li>NXE (bit 11): No-Execute disabled.</li>
</ul>
<p>There's no guarantee about the state of the other bits in the <code>EFER</code> register.</p>
<p>All the segment selectors are set up with a flat base at zero.</p>
<p>The <code>cs</code> segment selector attributes are set to 0x0a09b, which describes an
executable and readable code segment only accessible by the most privileged
level. The segment is also set as a 64-bit code segment (<code>L</code> flag set, <code>D</code> flag
unset).</p>
<p>The remaining segment selectors (<code>ds</code>, <code>ss</code>, <code>es</code>, <code>fs</code> and <code>gs</code>) are all set
to the same values. The attributes are set to 0x0c093, which implies a read and
write data segment only accessible by the most privileged level.</p>
<p>The <code>FS.base</code>, <code>GS.base</code> and <code>KERNEL_GS.base</code> MSRs are zeroed out.</p>
<p>The <code>IDT</code> and <code>GDT</code> are also zeroed, so the guest must be especially careful
not to trigger a fault until after they have been properly set. The IDT and the
GDT are set using the native instructions, as would be done on bare
metal.</p>
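<p>As a sketch, loading the descriptor tables uses the plain <code>lgdt</code>/<code>lidt</code>
instructions; the table contents are omitted here and the
<code>gdt_descr</code>/<code>idt_descr</code> symbols are illustrative:</p>
<pre><code>#include &lt;stdint.h&gt;

/* Pseudo-descriptor format shared by the lgdt and lidt instructions. */
struct desc_ptr {
    uint16_t limit;     /* size of the table in bytes, minus one */
    uint64_t base;      /* linear address of the table */
} __attribute__((packed));

/* Tables filled in elsewhere by the guest (illustrative names). */
extern struct desc_ptr gdt_descr;
extern struct desc_ptr idt_descr;

static inline void load_descriptor_tables(void)
{
    /* Same instructions as on bare metal; no hypercall is involved. */
    __asm__ __volatile__ ("lgdt %0" : : "m" (gdt_descr));
    __asm__ __volatile__ ("lidt %0" : : "m" (idt_descr));
}
</code></pre>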
<p>The <code>RFLAGS</code> register is guaranteed to be clear when jumping into the kernel
entry point, with the exception of the reserved bit 1, which is set.</p>
<h2>Memory</h2>
<p>Since PVH guests rely on the virtualization extensions provided by the CPU,
they have access to a hardware-virtualized MMU, which means that page-table
related operations should use the same instructions as on native hardware.</p>
<p>There are, however, some differences from native. The use of native MTRR
operations is forbidden, and the <code>XENPF_*_memtype</code> hypercalls should be used
instead. This can be avoided altogether by simply not using MTRR and setting
all the memory attributes using PAT, which doesn't require any hypercalls.</p>
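<p>As an illustration, programming the PAT boils down to a native <code>wrmsr</code> of
the <code>IA32_PAT</code> MSR (0x277); the layout below (entry 4 set to
write-combining) is just an example, not a required configuration:</p>
<pre><code>#include &lt;stdint.h&gt;

#define MSR_IA32_PAT    0x277

/* PAT memory type encodings from the Intel SDM. */
#define PAT_UC   0x00ULL    /* strictly uncacheable */
#define PAT_WC   0x01ULL    /* write-combining */
#define PAT_WT   0x04ULL    /* write-through */
#define PAT_WB   0x06ULL    /* write-back */
#define PAT_UCM  0x07ULL    /* uncacheable, overridable (UC-) */

static inline void wrmsr(uint32_t msr, uint64_t val)
{
    __asm__ __volatile__ ("wrmsr" : : "c" (msr), "a" ((uint32_t)val),
                          "d" ((uint32_t)(val &gt;&gt; 32)));
}

static void setup_pat(void)
{
    /* Eight 8-bit entries; entry 4 selects write-combining here. */
    uint64_t pat = PAT_WB | (PAT_WT &lt;&lt; 8) | (PAT_UCM &lt;&lt; 16) |
                   (PAT_UC &lt;&lt; 24) | (PAT_WC &lt;&lt; 32) | (PAT_WT &lt;&lt; 40) |
                   (PAT_UCM &lt;&lt; 48) | (PAT_UC &lt;&lt; 56);
    wrmsr(MSR_IA32_PAT, pat);
}
</code></pre>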
<p>Since PVH doesn't use a BIOS in order to boot, the physical memory map has
to be retrieved using the <code>XENMEM_memory_map</code> hypercall, which will return
an e820 map. This memory map might contain holes that describe MMIO regions,
which will already have been set up by Xen.</p>
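<p>A minimal sketch of retrieving the map, assuming the guest provides the
usual <code>HYPERVISOR_memory_op</code> hypercall wrapper and an <code>e820entry</code>
definition matching the native layout:</p>
<pre><code>#include &lt;xen/xen.h&gt;
#include &lt;xen/memory.h&gt;     /* struct xen_memory_map, XENMEM_memory_map */

#define E820_MAX_ENTRIES 128            /* illustrative buffer size */
static struct e820entry e820_map[E820_MAX_ENTRIES];

static int fetch_e820_map(void)
{
    struct xen_memory_map memmap;

    memmap.nr_entries = E820_MAX_ENTRIES;   /* in: buffer capacity */
    set_xen_guest_handle(memmap.buffer, e820_map);

    /* On success, nr_entries holds the number of entries filled in. */
    return HYPERVISOR_memory_op(XENMEM_memory_map, &amp;memmap);
}
</code></pre>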
<p><em>TODO</em>: we need to figure out what to do with MMIO regions, right now Xen
sets all the holes in the native e820 to MMIO regions for Dom0 up to 4GB. We
need to decide what to do with MMIO regions above 4GB on Dom0, and what to do
for PVH DomUs with pci-passthrough.</p>
<p>In the case of a guest started with memory != maxmem, the e820 memory map
returned by Xen will contain the memory up to maxmem. The guest has to be very
careful to only use memory pages up to the value contained in
<code>start_info-&gt;nr_pages</code>, because any memory page above that value will not be
populated.</p>
<h2>Physical devices</h2>
<p>When running as Dom0 the guest OS has the ability to interact with the
physical devices present in the system. Note that PVH guests require a working
IOMMU in order to interact with physical devices.</p>
<p>The first step in manipulating the devices is to make Xen aware of
them. Since all the hardware description on x86 comes from ACPI, Dom0 is
responsible for parsing the ACPI tables and notifying Xen about the devices it
finds. This is done with the <code>PHYSDEVOP_pci_device_add</code>
hypercall.</p>
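<p>A minimal sketch of registering one device, assuming the usual
<code>HYPERVISOR_physdev_op</code> wrapper provided by the guest OS:</p>
<pre><code>#include &lt;stdint.h&gt;
#include &lt;xen/xen.h&gt;
#include &lt;xen/physdev.h&gt;    /* struct physdev_pci_device_add */

/* Register a PCI device found while walking the ACPI/PCI buses. */
static int xen_register_pci_device(uint16_t seg, uint8_t bus, uint8_t devfn)
{
    struct physdev_pci_device_add add = {
        .seg   = seg,       /* PCI segment (domain) */
        .bus   = bus,
        .devfn = devfn,     /* device and function, encoded as on native */
    };

    return HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &amp;add);
}
</code></pre>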
<p><em>TODO</em>: explain the way to register the different kinds of PCI devices, like
devices with virtual functions.</p>
<h2>Interrupts</h2>
<p>All interrupts on PVH guests are routed over event channels; see
<a href="http://wiki.xen.org/wiki/Event_Channel_Internals">Event Channel Internals</a> for more detailed information about
event channels. In order to inject interrupts into the guest, an IDT vector is
used. This is the same mechanism used on PVHVM guests, and it allows having
per-CPU interrupts that can be used to deliver timers or IPIs.</p>
<p>In order to register the callback IDT vector the <code>HVMOP_set_param</code> hypercall
is used with the following values:</p>
<pre><code>domid = DOMID_SELF
index = HVM_PARAM_CALLBACK_IRQ
value = (0x2 &lt;&lt; 56) | vector_value
</code></pre>
<p>The OS has to program the IDT for the <code>vector_value</code> using the bare-metal
mechanism.</p>
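<p>A minimal sketch of registering the callback vector, assuming the usual
<code>HYPERVISOR_hvm_op</code> wrapper:</p>
<pre><code>#include &lt;stdint.h&gt;
#include &lt;xen/xen.h&gt;
#include &lt;xen/hvm/hvm_op.h&gt;     /* HVMOP_set_param, struct xen_hvm_param */
#include &lt;xen/hvm/params.h&gt;     /* HVM_PARAM_CALLBACK_IRQ */

/* Deliver event channel upcalls through the given IDT vector. */
static int xen_set_callback_vector(uint8_t vector)
{
    struct xen_hvm_param param = {
        .domid = DOMID_SELF,
        .index = HVM_PARAM_CALLBACK_IRQ,
        /* 0x2 in the top byte selects vector delivery (see above). */
        .value = (2ULL &lt;&lt; 56) | vector,
    };

    return HYPERVISOR_hvm_op(HVMOP_set_param, &amp;param);
}
</code></pre>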
<p>In order to know which event channel has fired, we need to look into the
information provided in the <code>shared_info</code> structure. The <code>evtchn_pending</code>
array is used as a bitmap in order to find out which event channel has
fired. An event channel can also be masked by setting the bit for its port in
the <code>shared_info-&gt;evtchn_mask</code> bitmap.</p>
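<p>A simplified sketch of such a scan follows; a real handler would also check
<code>vcpu_info-&gt;evtchn_pending_sel</code> to avoid walking every word, and would
atomically clear each pending bit before handling it:</p>
<pre><code>#include &lt;xen/xen.h&gt;    /* struct shared_info */

extern struct shared_info *HYPERVISOR_shared_info;  /* mapped by the guest */

#define BITS_PER_WORD (sizeof(xen_ulong_t) * 8)

static void evtchn_scan(void (*handle_port)(unsigned int port))
{
    struct shared_info *s = HYPERVISOR_shared_info;
    unsigned int w, b;

    for (w = 0; w &lt; sizeof(s-&gt;evtchn_pending) / sizeof(xen_ulong_t); w++) {
        /* Pending and not masked. */
        xen_ulong_t active = s-&gt;evtchn_pending[w] &amp; ~s-&gt;evtchn_mask[w];

        for (b = 0; active != 0; b++, active &gt;&gt;= 1)
            if (active &amp; 1)
                handle_port(w * BITS_PER_WORD + b);
    }
}
</code></pre>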
<h3>Interrupts from physical devices</h3>
<p>When running as Dom0 (or when using pci-passthrough) interrupts from physical
devices are routed over event channels. There are three different kinds of
physical interrupts that can be routed over event channels by Xen: IO APIC,
MSI and MSI-X interrupts.</p>
<p>Since physical interrupts usually need an EOI (End Of Interrupt), Xen allows
the registration of a memory region that records whether a physical interrupt
needs an EOI from the guest or not. This is done with the
<code>PHYSDEVOP_pirq_eoi_gmfn_v2</code> hypercall, which takes a parameter containing the
physical address of the memory page that will act as a bitmap. Then, in order
to find out if an IRQ needs an EOI or not, the OS can perform a simple bit test
on the memory page using the PIRQ value.</p>
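<p>A minimal sketch of registering the bitmap page and testing it;
<code>virt_to_gfn</code> stands in for whatever virtual-address-to-frame-number
translation helper the guest provides:</p>
<pre><code>#include &lt;xen/xen.h&gt;
#include &lt;xen/physdev.h&gt;    /* struct physdev_pirq_eoi_gmfn */

#define BITS_PER_LONG (sizeof(unsigned long) * 8)

/* One page-aligned page, used by Xen as the "needs EOI" bitmap. */
static unsigned long pirq_eoi_map[PAGE_SIZE / sizeof(unsigned long)]
    __attribute__((aligned(PAGE_SIZE)));

static int xen_register_eoi_map(void)
{
    struct physdev_pirq_eoi_gmfn args = {
        .gmfn = virt_to_gfn(pirq_eoi_map),  /* frame number of the page */
    };

    return HYPERVISOR_physdev_op(PHYSDEVOP_pirq_eoi_gmfn_v2, &amp;args);
}

/* Later, from the interrupt handler: */
static int pirq_needs_eoi(unsigned int pirq)
{
    return (pirq_eoi_map[pirq / BITS_PER_LONG] &gt;&gt;
            (pirq % BITS_PER_LONG)) &amp; 1;
}
</code></pre>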
<h3>IO APIC interrupt routing</h3>
<p>IO APIC interrupts can be routed over event channels using <code>PHYSDEVOP</code>
hypercalls. First the IRQ is registered using the <code>PHYSDEVOP_map_pirq</code>
hypercall; as an example, IRQ#9 is used here:</p>
<pre><code>domid = DOMID_SELF
type = MAP_PIRQ_TYPE_GSI
index = 9
pirq = 9
</code></pre>
<p>IRQ#9 is now registered as PIRQ#9. The triggering mode and polarity can also
be configured using the <code>PHYSDEVOP_setup_gsi</code> hypercall:</p>
<pre><code>gsi = 9 # This is the IRQ value.
triggering = 0
polarity = 0
</code></pre>
<p>In this example the IRQ would be configured to use edge triggering and high
polarity.</p>
<p>Finally, the PIRQ can be bound to an event channel using the
<code>EVTCHNOP_bind_pirq</code> hypercall, which will return the event channel port the
PIRQ has been assigned to. After this, the event channel will be ready for
delivery.</p>
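<p>Putting the three steps together, a minimal sketch (with the usual
guest-provided hypercall wrappers) could look like this:</p>
<pre><code>#include &lt;stdint.h&gt;
#include &lt;xen/xen.h&gt;
#include &lt;xen/physdev.h&gt;        /* PHYSDEVOP_map_pirq, PHYSDEVOP_setup_gsi */
#include &lt;xen/event_channel.h&gt;  /* EVTCHNOP_bind_pirq */

/* Route a legacy IO APIC interrupt over an event channel.
 * Returns the bound event channel port, or a negative error code. */
static int xen_route_gsi(int gsi, uint8_t triggering, uint8_t polarity)
{
    struct physdev_map_pirq map = {
        .domid = DOMID_SELF,
        .type  = MAP_PIRQ_TYPE_GSI,
        .index = gsi,
        .pirq  = gsi,
    };
    struct physdev_setup_gsi setup = {
        .gsi        = gsi,
        .triggering = triggering,   /* 0 = edge, 1 = level */
        .polarity   = polarity,     /* 0 = high, 1 = low */
    };
    struct evtchn_bind_pirq bind = {
        .pirq  = gsi,
        .flags = 0,                 /* or BIND_PIRQ__WILL_SHARE */
    };
    int ret;

    ret = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &amp;map);
    if (ret == 0)
        ret = HYPERVISOR_physdev_op(PHYSDEVOP_setup_gsi, &amp;setup);
    if (ret == 0)
        ret = HYPERVISOR_event_channel_op(EVTCHNOP_bind_pirq, &amp;bind);

    return ret ? ret : (int)bind.port;  /* port filled in by Xen */
}
</code></pre>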
<p><em>NOTE</em>: when running as Dom0, the guest has to parse the interrupt overrides
found in the ACPI tables and notify Xen about them.</p>
<h3>MSI</h3>
<p>In order to configure MSI interrupts for a device, Xen must first be made
aware of its presence by using <code>PHYSDEVOP_pci_device_add</code> as described above.
Then the <code>PHYSDEVOP_map_pirq</code> hypercall is used:</p>
<pre><code>domid = DOMID_SELF
type = MAP_PIRQ_TYPE_MSI_SEG or MAP_PIRQ_TYPE_MULTI_MSI
index = -1
pirq = -1
bus = pci_device_bus
devfn = pci_device_function
entry_nr = number of MSI interrupts
</code></pre>
<p>The type has to be set to <code>MAP_PIRQ_TYPE_MSI_SEG</code> if only one MSI interrupt
source is being configured. On devices that support MSI interrupt groups
<code>MAP_PIRQ_TYPE_MULTI_MSI</code> can be used to configure them by also placing the
number of MSI interrupts in the <code>entry_nr</code> field.</p>
<p>The values in the <code>bus</code> and <code>devfn</code> fields should be the same as the ones
used when registering the device with <code>PHYSDEVOP_pci_device_add</code>.</p>
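<p>A minimal sketch of mapping a single MSI interrupt for a previously
registered device:</p>
<pre><code>#include &lt;xen/xen.h&gt;
#include &lt;xen/physdev.h&gt;    /* struct physdev_map_pirq */

/* Map one MSI interrupt; Xen picks the PIRQ and returns it. */
static int xen_map_msi(int bus, int devfn)
{
    struct physdev_map_pirq map = {
        .domid    = DOMID_SELF,
        .type     = MAP_PIRQ_TYPE_MSI_SEG,
        .index    = -1,             /* let Xen choose */
        .pirq     = -1,             /* let Xen choose */
        .bus      = bus,
        .devfn    = devfn,
        .entry_nr = 1,      /* &gt; 1 only with MAP_PIRQ_TYPE_MULTI_MSI */
    };
    int ret = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &amp;map);

    return ret ? ret : map.pirq;    /* PIRQ allocated by Xen */
}
</code></pre>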
<h3>MSI-X</h3>
<p><em>TODO</em>: how to register/use them.</p>
<h2>Event timers and timecounters</h2>
<p>Since some hardware is not available on PVH (like the local APIC), Xen
provides the OS with suitable replacements in order to get the same
functionality. One of them is the timer interface. Using a set of hypercalls,
a guest OS can set event timers that will deliver an event channel interrupt
to the guest.</p>
<p>In order to use the timer provided by Xen, the guest OS first needs to
register a VIRQ event channel to be used by the timer to deliver its
interrupts. The event channel is registered using the <code>EVTCHNOP_bind_virq</code>
hypercall, which takes only two parameters:</p>
<pre><code>virq = VIRQ_TIMER
vcpu = vcpu_id
</code></pre>
<p>The port that Xen is going to use in order to deliver the interrupt is
returned in the <code>port</code> field. Once the event channel is set up, the timer can
be programmed using the <code>VCPUOP_set_singleshot_timer</code> hypercall:</p>
<pre><code>flags = VCPU_SSHOTTMR_future
timeout_abs_ns = absolute value when the timer should fire
</code></pre>
<p>It is important to note that the <code>VCPUOP_set_singleshot_timer</code> hypercall must
be executed from the same vCPU where the timer should fire, or else Xen will
refuse to set it. This is a single-shot timer, so the OS must re-arm it every
time it fires if a periodic timer is desired.</p>
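<p>A minimal sketch of binding the timer VIRQ and arming the one-shot timer,
again assuming the usual guest hypercall wrappers:</p>
<pre><code>#include &lt;stdint.h&gt;
#include &lt;xen/xen.h&gt;
#include &lt;xen/event_channel.h&gt;  /* EVTCHNOP_bind_virq */
#include &lt;xen/vcpu.h&gt;           /* VCPUOP_set_singleshot_timer */

/* Bind the timer VIRQ for a vCPU; returns the event channel port. */
static int xen_bind_timer_virq(unsigned int vcpu)
{
    struct evtchn_bind_virq bind = {
        .virq = VIRQ_TIMER,
        .vcpu = vcpu,
    };
    int ret = HYPERVISOR_event_channel_op(EVTCHNOP_bind_virq, &amp;bind);

    return ret ? ret : (int)bind.port;
}

/* Arm the one-shot timer; must run on the vCPU that will receive it. */
static int xen_set_timer(unsigned int vcpu, uint64_t timeout_abs_ns)
{
    struct vcpu_set_singleshot_timer single_shot = {
        .timeout_abs_ns = timeout_abs_ns,
        .flags          = VCPU_SSHOTTMR_future,
    };

    return HYPERVISOR_vcpu_op(VCPUOP_set_singleshot_timer, vcpu,
                              &amp;single_shot);
}
</code></pre>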
<p>Xen also shares a memory region with the guest OS that contains time-related
values that are updated periodically. These values can be used to implement a
timecounter or to obtain the current time. This information is placed inside of
<code>shared_info-&gt;vcpu_info[vcpu_id].time</code>. The uptime (time since the guest has
been launched) can be calculated using the following expression and the values
stored in the <code>vcpu_time_info</code> struct:</p>
<pre><code>system_time + ((((tsc - tsc_timestamp) &lt;&lt; tsc_shift) * tsc_to_system_mul) &gt;&gt; 32)
</code></pre>
<p>The timeout that is passed to <code>VCPUOP_set_singleshot_timer</code> has to be
calculated using the above value, plus the timeout the system wants to set.</p>
<p>If the OS also wants to obtain the current wallclock time, the value calculated
above has to be added to the values found in <code>shared_info-&gt;wc_sec</code> and
<code>shared_info-&gt;wc_nsec</code>.</p>
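<p>A sketch of the uptime calculation described above follows; note that Xen
sets the <code>version</code> field to an odd value while it updates the record, so the
reader must retry until it gets a consistent snapshot (a production
implementation would also use a wider multiply to avoid overflow on large
TSC deltas):</p>
<pre><code>#include &lt;stdint.h&gt;
#include &lt;xen/xen.h&gt;    /* struct shared_info, struct vcpu_time_info */

extern struct shared_info *HYPERVISOR_shared_info;

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi &lt;&lt; 32) | lo;
}

/* Nanoseconds since the guest was launched, per the expression above. */
static uint64_t xen_uptime_ns(unsigned int vcpu)
{
    volatile struct vcpu_time_info *t =
        &amp;HYPERVISOR_shared_info-&gt;vcpu_info[vcpu].time;
    uint32_t version, mul;
    uint64_t system_time, tsc_timestamp, delta;
    int8_t shift;

    do {    /* retry while Xen is updating the record */
        version       = t-&gt;version;
        system_time   = t-&gt;system_time;
        tsc_timestamp = t-&gt;tsc_timestamp;
        mul           = t-&gt;tsc_to_system_mul;
        shift         = t-&gt;tsc_shift;
    } while ((version &amp; 1) || version != t-&gt;version);

    delta = rdtsc() - tsc_timestamp;
    delta = shift &gt;= 0 ? delta &lt;&lt; shift : delta &gt;&gt; -shift;
    return system_time + ((delta * mul) &gt;&gt; 32);
}
</code></pre>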
<h2>SMP discovery and bring-up</h2>
<p>The process of bringing up secondary CPUs is obviously different from native,
since PVH doesn't have a local APIC. The first thing to do is to figure out
how many vCPUs the guest has. This is done using the <code>VCPUOP_is_up</code> hypercall,
for example with this simple loop:</p>
<pre><code>int i, ret, ncpus = 0;      /* ncpus is an illustrative counter */

for (i = 0; i &lt; MAXCPU; i++) {
    ret = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
    if (ret &gt;= 0)
        ncpus++;                /* vCPU#i is present */
}
</code></pre>
<p>Note that when running as Dom0, the ACPI tables might report a different
number of available CPUs. This is because the value in the ACPI tables is the
number of physical CPUs the host has, which might bear no resemblance to the
number of vCPUs Dom0 actually has, so it should be ignored.</p>
<p>In order to bring up the secondary vCPUs, they must first be configured. This
is achieved using the <code>VCPUOP_initialise</code> hypercall. A valid context has to be
passed to the vCPU in order for it to boot. The relevant fields for PVH guests
are the following:</p>
<ul>
<li><code>flags</code>: contains <code>VGCF_*</code> flags (see <code>arch-x86/xen.h</code> public header).</li>
<li><code>user_regs</code>: struct that contains the register values that will be set on
    the vCPU before booting. All GPRs are available to be set; however, the
    most relevant ones are <code>rip</code> and <code>rsp</code>, which set the start address
    and the stack. Please note that all selectors must be null.</li>
<li><code>ctrlreg[3]</code>: contains the address of the page tables that will be used by
    the vCPU. Other control registers should be set to zero, or else the
    hypercall will fail with -EINVAL.</li>
</ul>
<p>After the vCPU has been initialized with the proper values, it can be started
by using the <code>VCPUOP_up</code> hypercall. The values of the other control registers
of the vCPU will be the same as the ones described in the <em>Initial amd64
control register values</em> section.</p>
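<p>A minimal sketch of the whole sequence; the entry point, stack and
page-table values are supplied by the guest, and the <code>VGCF_in_kernel</code> flag
shown here is just one example of the <code>VGCF_*</code> flags:</p>
<pre><code>#include &lt;string.h&gt;     /* memset */
#include &lt;xen/xen.h&gt;
#include &lt;xen/vcpu.h&gt;   /* VCPUOP_initialise, VCPUOP_up */

static int xen_start_vcpu(unsigned int cpu, unsigned long entry,
                          unsigned long stack_top, unsigned long cr3)
{
    struct vcpu_guest_context ctxt;
    int ret;

    memset(&amp;ctxt, 0, sizeof(ctxt));     /* all selectors must be null */

    ctxt.flags = VGCF_in_kernel;
    ctxt.user_regs.rip = entry;         /* start address */
    ctxt.user_regs.rsp = stack_top;     /* initial stack */
    ctxt.ctrlreg[3] = cr3;              /* page tables for this vCPU */

    ret = HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, &amp;ctxt);
    if (ret == 0)
        ret = HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL);
    return ret;
}
</code></pre>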
<p>Examples of how to bring up secondary CPUs can be found in the FreeBSD code
base in <code>sys/x86/xen/pv.c</code> and in the Linux code base in
<code>arch/x86/xen/smp.c</code>.</p>
<h2>Control operations (reboot/shutdown)</h2>
<p>Reboot and shutdown operations on PVH guests are performed using hypercalls.
In order to issue a reboot, a guest must use the <code>SCHEDOP_shutdown</code> hypercall
with the <code>SHUTDOWN_reboot</code> reason. In order to perform a power off from a
guest DomU, the same hypercall should be used with the <code>SHUTDOWN_poweroff</code>
reason.</p>
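<p>A minimal sketch of a DomU power off using the <code>SCHEDOP_shutdown</code>
hypercall, assuming the usual <code>HYPERVISOR_sched_op</code> wrapper:</p>
<pre><code>#include &lt;xen/xen.h&gt;
#include &lt;xen/sched.h&gt;  /* SCHEDOP_shutdown, struct sched_shutdown */

/* Ask Xen to power off (or, with SHUTDOWN_reboot, reboot) this domain. */
static void xen_poweroff(void)
{
    struct sched_shutdown shutdown = {
        .reason = SHUTDOWN_poweroff,
    };

    HYPERVISOR_sched_op(SCHEDOP_shutdown, &amp;shutdown);
}
</code></pre>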
<p>The way to perform a full system power off from Dom0 is different from what
is done in a DomU guest. In order to perform a power off from Dom0, the native
ACPI path should be followed, but the guest should not write the <code>SLP_EN</code>
bit to the Pm1Control register. Instead, the <code>XENPF_enter_acpi_sleep</code> hypercall
should be used, filling the following data in the <code>xen_platform_op</code> struct:</p>
<pre><code>cmd = XENPF_enter_acpi_sleep
interface_version = XENPF_INTERFACE_VERSION
u.enter_acpi_sleep.pm1a_cnt_val = Pm1aControlValue
u.enter_acpi_sleep.pm1b_cnt_val = Pm1bControlValue
</code></pre>
<p>This will allow Xen to do its cleanup and power off the system. If the
host is using hardware-reduced ACPI, the following field should also be set:</p>
<pre><code>u.enter_acpi_sleep.flags = XENPF_ACPI_SLEEP_EXTENDED (0x1)
</code></pre>
<h2>CPUID</h2>
<p>The cpuid instruction that should be used is the normal <code>cpuid</code>, not the
emulated <code>cpuid</code> that PV guests usually require.</p>
<p><em>TODO</em>: describe which cpuid flags a guest should ignore and also which flags
describe features that can be used. It would also be good to describe the set
of cpuid flags that will always be present when running as PVH.</p>
<h2>Final notes</h2>
<p>All other hardware functionality not described in this document should be
assumed to work in the same way as on native hardware.</p>