VirtualBox

Changeset 71283 in vbox for trunk/src/VBox/VMM/VMMR3


Timestamp: Mar 9, 2018 11:43:59 AM
Author: vboxsync
svn:sync-xref-src-repo-rev: 121214
Message: NEM: Working on the @page docs for windows. bugref:9044
Location: trunk/src/VBox/VMM/VMMR3
Files: 2 edited

  • trunk/src/VBox/VMM/VMMR3/NEMR3.cpp

    r71279 r71283  
    18 18 /** @page pg_nem NEM - Native Execution Manager.
    19 19  *
    20  * Later.
    21  *
    22  *
    23  * @section sec_nem_win     Windows
    24  *
    25  * On Windows the Hyper-V root partition (dom0 in Xen terminology) does not have
    26  * nested VT-x or AMD-V capabilities.  For a while raw-mode worked inside it,
    27  * but for a while now we've been getting \#GP when trying to modify CR4 in the
    28  * world switcher.  So, when Hyper-V is active on Windows we have little choice
    29  * but to use Hyper-V to run our VMs.
    30  *
    31  *
    32  * @subsection subsec_nem_win_whv   The WinHvPlatform API
    33  *
    34  * Since Windows 10 build 17083 there is a documented API for managing Hyper-V
    35  * VMs, header file WinHvPlatform.h and implementation in WinHvPlatform.dll.
    36  * This interface is a wrapper around the undocumented Virtualization
    37  * Infrastructure Driver (VID) API - VID.DLL and VID.SYS.  The wrapper is
    38  * written in C++, namespaced, early versions (at least) were using standard C++
    39  * container templates in several places.
    40  *
    41  * When creating a VM using WHvCreatePartition, it will only create the
    42  * WinHvPlatform structures for it, to which you get an abstract pointer.  The
    43  * VID API that actually creates the partition is first engaged when you call
    44  * WHvSetupPartition after first setting a lot of properties using
    45  * WHvSetPartitionProperty.  Since the VID API is just a very thin wrapper
    46  * around CreateFile and NtDeviceIoControl, it returns an actual HANDLE for the
    47  * partition to WinHvPlatform.  We fish this HANDLE out of the WinHvPlatform
    48  * partition structures because we need to talk directly to VID for reasons
    49  * we'll get to in a bit.  (Btw. we could also intercept the CreateFileW or
    50  * NtDeviceIoControl calls from VID.DLL to get the HANDLE should fishing in the
    51  * partition structures become difficult.)
    52  *
    53  * The WinHvPlatform API requires us to both set the number of guest CPUs before
    54  * setting up the partition and call WHvCreateVirtualProcessor for each of them.
    55  * The CPU creation function boils down to a VidMessageSlotMap call that sets up
    56  * and maps a message buffer into ring-3 for async communication with hyper-V
    57  * and/or the VID.SYS thread actually running the CPU.  When for instance a
    58  * VMEXIT is encountered, hyper-V sends a message that the
    59  * WHvRunVirtualProcessor API retrieves (and later acknowledges) via
    60  * VidMessageSlotHandleAndGetNext.  It should be noted that
    61  * WHvDeleteVirtualProcessor doesn't do much as there seems to be no partner
    62  * function to VidMessageSlotMap that reverses what it did.
    63  *
    64  * Memory is managed thru calls to WHvMapGpaRange and WHvUnmapGpaRange (GPA does
    65  * not mean grade point average here, but rather guest physical address space),
    66  * which correspond to VidCreateVaGpaRangeSpecifyUserVa and VidDestroyGpaRange
    67  * respectively.  As 'UserVa' indicates, the functions work on user process
    68  * memory.  The mappings are also subject to quota restrictions, so the number
    69  * of ranges is limited and probably their total size as well.  Obviously
    70  * VID.SYS keeps track of the ranges, but so does WinHvPlatform, which means
    71  * there is a bit of overhead involved and quota restrictions make sense.  For
    72  * some reason though, regions are lazily mapped on memory VMEXITs by
    73  * WHvRunVirtualProcessor.
    74  *
    75  * Running guest code is done thru the WHvRunVirtualProcessor function.  It
    76  * asynchronously starts or resumes hyper-V CPU execution and then waits for a
    77  * VMEXIT message.  Hyper-V / VID.SYS will return information about the message
    78  * in the message buffer mapping, and WHvRunVirtualProcessor will convert that
    79  * into its own WHV_RUN_VP_EXIT_CONTEXT format.
    80  *
    81  * Other threads can interrupt the execution by using WHvCancelVirtualProcessor,
    82  * in which case the thread in WHvRunVirtualProcessor is woken up via a dummy
    83  * QueueUserAPC and will call VidStopVirtualProcessor to asynchronously end
    84  * execution.  The stop CPU call may not immediately succeed if the CPU encountered
    85  * a VMEXIT before the stop was processed, in which case the VMEXIT needs to be
    86  * processed first, and the pending stop will be processed in a subsequent call
    87  * to WHvRunVirtualProcessor.
    88  *
    89  * Registers are retrieved and set via WHvGetVirtualProcessorRegisters and
    90  * WHvSetVirtualProcessorRegisters.  In addition, several VMEXITs include
    91  * essential register state in the exit context information, potentially making
    92  * it possible to emulate the instruction causing the exit without involving
    93  * WHvGetVirtualProcessorRegisters.
    94  *
    95  *
    96  * @subsubsection subsubsec_nem_win_whv_cons    Issues / Disadvantages
    97  *
    98  * Here are some observations:
    99  *
    100  * - The WHvCancelVirtualProcessor API schedules a dummy usermode APC callback
    101  *   in order to cancel any current or future alertable wait in VID.SYS during
    102  *   the VidMessageSlotHandleAndGetNext call.
    103  *
    104  *   IIRC this will make the kernel schedule the specified callback thru
    105  *   NTDLL!KiUserApcDispatcher by modifying the thread context and quite
    106  *   possibly the userland thread stack.  When the APC callback returns to
    107  *   KiUserApcDispatcher, it will call NtContinue to restore the old thread
    108  *   context and resume execution from there.  This naturally adds up to some
    109  *   CPU cycles, ring transitions aren't for free, especially after Spectre &
    110  *   Meltdown mitigations.
    111  *
    112  *   Using an NtAlertThread call could do the same without the thread context
    113  *   modifications and the extra kernel call.
    114  *
    115  *
    116  * - Not sure if this is a thing, but WHvCancelVirtualProcessor seems to cause
    117  *   a lot more spurious WHvRunVirtualProcessor returns than what we get
    118  *   with the replacement code.  By spurious returns we mean that the
    119  *   subsequent call to WHvRunVirtualProcessor would return immediately.
    120  *
    121  *
    122  * - When WHvRunVirtualProcessor returns without a message, or on a terse
    123  *   VID message like HLT, it will make a kernel call to get some registers.
    124  *   This is potentially inefficient if the caller decides he needs more
    125  *   register state.
    126  *
    127  *   It would be better to just return what's available and let the caller fetch
    128  *   what is missing from his point of view in a single kernel call.
    129  *
    130  *
    131  * - The WHvRunVirtualProcessor implementation does lazy GPA range mappings when
    132  *   an unmapped GPA message is received from hyper-V.
    133  *
    134  *   Since MMIO is currently realized as unmapped GPA, this will slow down all
    135  *   MMIO accesses a tiny little bit as WHvRunVirtualProcessor looks up the
    136  *   guest physical address to check if it is a pending lazy mapping.
    137  *
    138  *   The lazy mapping feature makes no sense to us.  We as API user have all the
    139  *   information and can do lazy mapping ourselves if we want/have to (see next
    140  *   point).
    141  *
    142  *
    143  * - There is no API for modifying protection of a page within a GPA range.
    144  *
    145  *   From what we can tell, the only way to modify the protection (like readonly
    146  *   -> writable, or vice versa) is to first unmap the range and then remap it
    147  *   with the new protection.
    148  *
    149  *   We are for instance doing this quite a bit in order to track dirty VRAM
    150  *   pages.  VRAM pages start out as readonly; when the guest writes to a page
    151  *   we take an exit, note down which page it is, make it writable and restart
    152  *   the instruction.  After refreshing the display, we reset all the writable
    153  *   pages to readonly again, bulk fashion.
    154  *
    155  *   Now to work around this issue, we do page sized GPA ranges.  In addition to
    156  *   adding a lot of tracking overhead to WinHvPlatform and VID.SYS, this also
    157  *   causes us to exceed our quota before we've even mapped a default sized
    158  *   (128MB) VRAM page-by-page.  So, to work around this quota issue we have to
    159  *   lazily map pages and actively restrict the number of mappings.
    160  *
    161  *   Our best workaround thus far is bypassing WinHvPlatform and VID entirely
    162  *   when it comes to guest memory management and instead use the underlying
    163  *   hypercalls (HvCallMapGpaPages, HvCallUnmapGpaPages) to do it ourselves.
    164  *   (This also maps a whole lot better into our own guest page management
    165  *   infrastructure.)
    166  *
    167  *
    168  * - Observed problems doing WHvUnmapGpaRange immediately followed by
    169  *   WHvMapGpaRange.
    170  *
    171  *   As mentioned above, we've been forced to use this sequence when modifying
    172  *   page protection.   However, when transitioning from readonly to writable,
    173  *   we've ended up looping forever with the same write to readonly memory
    174  *   VMEXIT.  We're wondering if this issue might be related to the lazy mapping
    175  *   logic in WinHvPlatform.
    176  *
    177  *   Workaround: Insert a WHvRunVirtualProcessor call and make sure to get a GPA
    178  *   unmapped exit between the two calls.  Not entirely great performance wise
    179  *   (or for the sanity of our code).
    180  *
    181  *
    182  * - WHvRunVirtualProcessor wastes time converting VID/Hyper-V messages to its
    183  *   own format (WHV_RUN_VP_EXIT_CONTEXT).
    184  *
    185  *   We understand this might be because Microsoft wishes to remain free to
    186  *   modify the VID/Hyper-V messages, but it's still rather silly and does slow
    187  *   things down a little.  We'd much rather just process the messages directly.
    188  *
    189  *
    190  * - WHvRunVirtualProcessor would've benefited from using a callback interface:
    191  *
    192  *      - The potential size changes of the exit context structure wouldn't be
    193  *        an issue, since the function could manage that itself.
    194  *
    195  *      - State handling could probably be simplified (like cancelation).
    196  *
    197  *
    198  * - WHvGetVirtualProcessorRegisters and WHvSetVirtualProcessorRegisters
    199  *   internally convert register names, probably using temporary heap buffers.
    200  *
    201  *   From the looks of things, they are converting from WHV_REGISTER_NAME to
    202  *   HV_REGISTER_NAME as found in the "Virtual Processor Register Names" section in
    203  *   the "Hypervisor Top-Level Functional Specification" document.  This feels
    204  *   like an awful waste of time.
    205  *
    206  *   We simply cannot understand why HV_REGISTER_NAME isn't used directly here,
    207  *   or at least the same values, making any conversion redundant.  Restricting
    208  *   access to certain registers could easily be implemented by scanning the
    209  *   inputs.
    210  *
    211  *   To avoid the heap + conversion overhead, we're currently using the
    212  *   HvCallGetVpRegisters and HvCallSetVpRegisters calls directly.
    213  *
    214  *
    215  * - The YMM and XCR0 registers are not yet named (17083).  This probably
    216  *   wouldn't be a problem if HV_REGISTER_NAME was used, see previous point.
    217  *
    218  *
    219  * - Why does WINHVR.SYS (or VID.SYS) only query/set 32 registers at a time
    220  *   thru the HvCallGetVpRegisters and HvCallSetVpRegisters hypercalls?
    221  *
    222  *   We've no trouble getting/setting all the registers defined by
    223  *   WHV_REGISTER_NAME in one hypercall (around 80)...
    224  *
    225  *
    226  * - The I/O port exit context information seems to be missing the address size
    227  *   information needed for correct string I/O emulation.
    228  *
    229  *   VT-x provides this information in bits 7:9 in the instruction information
    230  *   field on newer CPUs.  AMD-V in bits 7:9 in the EXITINFO1 field in the VMCB.
    231  *
    232  *   We can probably work around this by scanning the instruction bytes for
    233  *   address size prefixes.  Haven't investigated it any further yet.
    234  *
    235  *
    236  * - The WHvGetCapability function has a weird design:
    237  *      - The CapabilityCode parameter is pointlessly duplicated in the output
    238  *        structure (WHV_CAPABILITY).
    239  *
    240  *      - The API takes a void pointer, but everyone will probably be using
    241  *        WHV_CAPABILITY due to WHV_CAPABILITY::CapabilityCode making it
    242  *        impractical to use anything else.
    243  *
    244  *      - No output size.
    245  *
    246  *      - See GetFileAttributesEx, GetFileInformationByHandleEx,
    247  *        FindFirstFileEx, and others for typical pattern for generic
    248  *        information getters.
    249  *
    250  *
    251  * - The WHvGetPartitionProperty function uses the same weird design as
    252  *   WHvGetCapability, see above.
    253  *
    254  *
    255  * - The WHvSetPartitionProperty function has a totally weird design too:
    256  *      - In contrast to its partner WHvGetPartitionProperty, the property code
    257  *        is not a separate input parameter here but part of the input
    258  *        structure.
    259  *
    260  *      - The input structure is a void pointer rather than a pointer to
    261  *        WHV_PARTITION_PROPERTY which everyone probably will be using because
    262  *        of the WHV_PARTITION_PROPERTY::PropertyCode field.
    263  *
    264  *      - Really, why use PVOID for the input when the function isn't accepting
    265  *        minimal sizes?  E.g. WHvPartitionPropertyCodeProcessorClFlushSize only
    266  *        requires a 9 byte input, but the function insists on 16 bytes (17083).
    267  *
    268  *      - See GetFileAttributesEx, SetFileInformationByHandle, FindFirstFileEx,
    269  *        and others for typical pattern for generic information setters and
    270  *        getters.
    271  *
    272  *
    273  * @subsection subsec_nem_win_impl   Our implementation.
    274  *
    275  * Tomorrow...
    276  *
    277  *
     20 * This is an alternative execution manager to HM and raw-mode.   On one host
     21 * (Windows) we're forced to use this, on the others we just do it because we
     22 * can.   Since this is host-specific in nature, information about an
     23 * implementation is contained in the NEMR3Native-xxxx.cpp files.
     24 *
     25 * @ref pg_nem_win
    278 26 */
    279 27
  • trunk/src/VBox/VMM/VMMR3/NEMR3Native-win.cpp

    r71224 r71283  
    2200 2200 }
    2201 2201
     2202
     2203/** @page pg_nem_win NEM/win - Native Execution Manager, Windows.
     2204 *
     2205 * On Windows the Hyper-V root partition (dom0 in Xen terminology) does not have
     2206 * nested VT-x or AMD-V capabilities.  For a while raw-mode worked inside it,
     2207 * but for a while now we've been getting \#GP when trying to modify CR4 in the
     2208 * world switcher.  So, when Hyper-V is active on Windows we have little choice
     2209 * but to use Hyper-V to run our VMs.
     2210 *
     2211 *
     2212 * @section sub_nem_win_whv   The WinHvPlatform API
     2213 *
     2214 * Since Windows 10 build 17083 there is a documented API for managing Hyper-V
     2215 * VMs, header file WinHvPlatform.h and implementation in WinHvPlatform.dll.
     2216 * This interface is a wrapper around the undocumented Virtualization
     2217 * Infrastructure Driver (VID) API - VID.DLL and VID.SYS.  The wrapper is
     2218 * written in C++, namespaced, early versions (at least) were using standard C++
     2219 * container templates in several places.
     2220 *
     2221 * When creating a VM using WHvCreatePartition, it will only create the
     2222 * WinHvPlatform structures for it, to which you get an abstract pointer.  The
     2223 * VID API that actually creates the partition is first engaged when you call
     2224 * WHvSetupPartition after first setting a lot of properties using
     2225 * WHvSetPartitionProperty.  Since the VID API is just a very thin wrapper
     2226 * around CreateFile and NtDeviceIoControl, it returns an actual HANDLE for the
     2227 * partition to WinHvPlatform.  We fish this HANDLE out of the WinHvPlatform
     2228 * partition structures because we need to talk directly to VID for reasons
     2229 * we'll get to in a bit.  (Btw. we could also intercept the CreateFileW or
     2230 * NtDeviceIoControl calls from VID.DLL to get the HANDLE should fishing in the
     2231 * partition structures become difficult.)
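
A minimal sketch of the creation sequence just described, against the WinHvPlatform.h API of the builds discussed here (error handling trimmed, property value illustrative):

@code
#include <WinHvPlatform.h>

WHV_PARTITION_HANDLE hPartition = NULL;
HRESULT hrc = WHvCreatePartition(&hPartition);   /* Only creates the WinHvPlatform structures. */
if (SUCCEEDED(hrc))
{
    /* Properties must be set before WHvSetupPartition engages the VID API. */
    WHV_PARTITION_PROPERTY Property = {0};
    Property.ProcessorCount = 1;
    hrc = WHvSetPartitionProperty(hPartition, WHvPartitionPropertyCodeProcessorCount,
                                  &Property, sizeof(Property));
    if (SUCCEEDED(hrc))
        hrc = WHvSetupPartition(hPartition);     /* This is where VID.SYS creates the partition. */
}
@endcode
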
     2232 *
     2233 * The WinHvPlatform API requires us to both set the number of guest CPUs before
     2234 * setting up the partition and call WHvCreateVirtualProcessor for each of them.
     2235 * The CPU creation function boils down to a VidMessageSlotMap call that sets up
     2236 * and maps a message buffer into ring-3 for async communication with hyper-V
     2237 * and/or the VID.SYS thread actually running the CPU.  When for instance a
     2238 * VMEXIT is encountered, hyper-V sends a message that the
     2239 * WHvRunVirtualProcessor API retrieves (and later acknowledges) via
     2240 * VidMessageSlotHandleAndGetNext.  It should be noted that
     2241 * WHvDeleteVirtualProcessor doesn't do much as there seems to be no partner
     2242 * function to VidMessageSlotMap that reverses what it did.
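
A minimal sketch of the vCPU creation step just described, assuming hPartition from a completed WHvSetupPartition call; cCpus is illustrative:

@code
for (UINT32 idCpu = 0; idCpu < cCpus; idCpu++)
{
    /* Boils down to VidMessageSlotMap, establishing the ring-3 message buffer. */
    HRESULT hrc = WHvCreateVirtualProcessor(hPartition, idCpu /*VpIndex*/, 0 /*Flags*/);
    if (FAILED(hrc))
        break;
}
@endcode
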
     2243 *
     2244 * Memory is managed thru calls to WHvMapGpaRange and WHvUnmapGpaRange (GPA does
     2245 * not mean grade point average here, but rather guest physical address space),
     2246 * which correspond to VidCreateVaGpaRangeSpecifyUserVa and VidDestroyGpaRange
     2247 * respectively.  As 'UserVa' indicates, the functions work on user process
     2248 * memory.  The mappings are also subject to quota restrictions, so the number
     2249 * of ranges is limited and probably their total size as well.  Obviously
     2250 * VID.SYS keeps track of the ranges, but so does WinHvPlatform, which means
     2251 * there is a bit of overhead involved and quota restrictions make sense.  For
     2252 * some reason though, regions are lazily mapped on memory VMEXITs by
     2253 * WHvRunVirtualProcessor.
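
A sketch of the mapping calls as described; pvRam and cbRam are illustrative (pvRam being page-aligned memory in our process):

@code
HRESULT hrc = WHvMapGpaRange(hPartition, pvRam, 0 /*GuestAddress*/, cbRam,
                             WHvMapGpaRangeFlagRead | WHvMapGpaRangeFlagWrite
                             | WHvMapGpaRangeFlagExecute);
/* ... and the counterpart, subject to the quota just mentioned: */
hrc = WHvUnmapGpaRange(hPartition, 0 /*GuestAddress*/, cbRam);
@endcode
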
     2254 *
     2255 * Running guest code is done thru the WHvRunVirtualProcessor function.  It
     2256 * asynchronously starts or resumes hyper-V CPU execution and then waits for a
     2257 * VMEXIT message.  Hyper-V / VID.SYS will return information about the message
     2258 * in the message buffer mapping, and WHvRunVirtualProcessor will convert that
     2259 * into its own WHV_RUN_VP_EXIT_CONTEXT format.
     2260 *
     2261 * Other threads can interrupt the execution by using WHvCancelVirtualProcessor,
     2262 * in which case the thread in WHvRunVirtualProcessor is woken up via a dummy
     2263 * QueueUserAPC and will call VidStopVirtualProcessor to asynchronously end
     2264 * execution.  The stop CPU call may not immediately succeed if the CPU encountered
     2265 * a VMEXIT before the stop was processed, in which case the VMEXIT needs to be
     2266 * processed first, and the pending stop will be processed in a subsequent call
     2267 * to WHvRunVirtualProcessor.
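
A bare-bones run loop as we understand the API (exit handling elided; the exit reason names are from WinHvPlatform.h, everything else is illustrative):

@code
WHV_RUN_VP_EXIT_CONTEXT ExitCtx;
for (;;)
{
    HRESULT hrc = WHvRunVirtualProcessor(hPartition, 0 /*VpIndex*/, &ExitCtx, sizeof(ExitCtx));
    if (FAILED(hrc))
        break;
    switch (ExitCtx.ExitReason)
    {
        case WHvRunVpExitReasonMemoryAccess:     /* e.g. MMIO, realized as unmapped GPA */
            /* emulate the access... */
            break;
        case WHvRunVpExitReasonX64IoPortAccess:
            /* emulate IN/OUT... */
            break;
        case WHvRunVpExitReasonCanceled:         /* another thread cancelled execution */
            break;
        default:
            break;
    }
}
@endcode
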
     2268 *
     2269 * Registers are retrieved and set via WHvGetVirtualProcessorRegisters and
     2270 * WHvSetVirtualProcessorRegisters.  In addition, several VMEXITs include
     2271 * essential register state in the exit context information, potentially making
     2272 * it possible to emulate the instruction causing the exit without involving
     2273 * WHvGetVirtualProcessorRegisters.
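
For instance, fetching RIP and RFLAGS and advancing RIP past an emulated instruction might look like this sketch (VpIndex and the instruction length are illustrative):

@code
static const WHV_REGISTER_NAME s_aenmNames[] = { WHvX64RegisterRip, WHvX64RegisterRflags };
WHV_REGISTER_VALUE aValues[2];
HRESULT hrc = WHvGetVirtualProcessorRegisters(hPartition, 0 /*VpIndex*/, s_aenmNames, 2, aValues);
if (SUCCEEDED(hrc))
{
    aValues[0].Reg64 += 2;  /* skip a 2 byte instruction (illustrative) */
    hrc = WHvSetVirtualProcessorRegisters(hPartition, 0 /*VpIndex*/, s_aenmNames, 1, aValues);
}
@endcode
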
     2274 *
     2275 *
     2276 * @subsection subsec_nem_win_whv_cons  Issues / Disadvantages
     2277 *
     2278 * Here are some observations (mostly against build 17101):
     2279 *
     2280 * - The WHvCancelVirtualProcessor API schedules a dummy usermode APC callback
     2281 *   in order to cancel any current or future alertable wait in VID.SYS during
     2282 *   the VidMessageSlotHandleAndGetNext call.
     2283 *
     2284 *   IIRC this will make the kernel schedule the specified callback thru
     2285 *   NTDLL!KiUserApcDispatcher by modifying the thread context and quite
     2286 *   possibly the userland thread stack.  When the APC callback returns to
     2287 *   KiUserApcDispatcher, it will call NtContinue to restore the old thread
     2288 *   context and resume execution from there.  This naturally adds up to some
     2289 *   CPU cycles, ring transitions aren't for free, especially after Spectre &
     2290 *   Meltdown mitigations.
     2291 *
     2292 *   Using an NtAlertThread call could do the same without the thread context
     2293 *   modifications and the extra kernel call.
     2294 *
     2295 *
     2296 * - Not sure if this is a thing, but WHvCancelVirtualProcessor seems to cause
     2297 *   a lot more spurious WHvRunVirtualProcessor returns than what we get
     2298 *   with the replacement code.  By spurious returns we mean that the
     2299 *   subsequent call to WHvRunVirtualProcessor would return immediately.
     2300 *
     2301 *
     2302 * - When WHvRunVirtualProcessor returns without a message, or on a terse
     2303 *   VID message like HLT, it will make a kernel call to get some registers.
     2304 *   This is potentially inefficient if the caller decides he needs more
     2305 *   register state.
     2306 *
     2307 *   It would be better to just return what's available and let the caller fetch
     2308 *   what is missing from his point of view in a single kernel call.
     2309 *
     2310 *
     2311 * - The WHvRunVirtualProcessor implementation does lazy GPA range mappings when
     2312 *   an unmapped GPA message is received from hyper-V.
     2313 *
     2314 *   Since MMIO is currently realized as unmapped GPA, this will slow down all
     2315 *   MMIO accesses a tiny little bit as WHvRunVirtualProcessor looks up the
     2316 *   guest physical address to check if it is a pending lazy mapping.
     2317 *
     2318 *   The lazy mapping feature makes no sense to us.  We as API user have all the
     2319 *   information and can do lazy mapping ourselves if we want/have to (see next
     2320 *   point).
     2321 *
     2322 *
     2323 * - There is no API for modifying protection of a page within a GPA range.
     2324 *
     2325 *   From what we can tell, the only way to modify the protection (like readonly
     2326 *   -> writable, or vice versa) is to first unmap the range and then remap it
     2327 *   with the new protection.
     2328 *
     2329 *   We are for instance doing this quite a bit in order to track dirty VRAM
     2330 *   pages.  VRAM pages start out as readonly; when the guest writes to a page
     2331 *   we take an exit, note down which page it is, make it writable and restart
     2332 *   the instruction.  After refreshing the display, we reset all the writable
     2333 *   pages to readonly again, bulk fashion.
     2334 *
     2335 *   Now to work around this issue, we do page sized GPA ranges.  In addition to
     2336 *   adding a lot of tracking overhead to WinHvPlatform and VID.SYS, this also
     2337 *   causes us to exceed our quota before we've even mapped a default sized
     2338 *   (128MB) VRAM page-by-page.  So, to work around this quota issue we have to
     2339 *   lazily map pages and actively restrict the number of mappings.
     2340 *
     2341 *   Our best workaround thus far is bypassing WinHvPlatform and VID entirely
     2342 *   when it comes to guest memory management and instead use the underlying
     2343 *   hypercalls (HvCallMapGpaPages, HvCallUnmapGpaPages) to do it ourselves.
     2344 *   (This also maps a whole lot better into our own guest page management
     2345 *   infrastructure.)
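
For reference, the unmap+remap protection flip described above amounts to something like this sketch; GCPhys and pvPage are illustrative, 0x1000 being the page size:

@code
HRESULT hrc = WHvUnmapGpaRange(hPartition, GCPhys, 0x1000);
if (SUCCEEDED(hrc))  /* Remap writable so the guest can dirty the VRAM page. */
    hrc = WHvMapGpaRange(hPartition, pvPage, GCPhys, 0x1000,
                         WHvMapGpaRangeFlagRead | WHvMapGpaRangeFlagWrite);
@endcode
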
     2346 *
     2347 *
     2348 * - Observed problems doing WHvUnmapGpaRange immediately followed by
     2349 *   WHvMapGpaRange.
     2350 *
     2351 *   As mentioned above, we've been forced to use this sequence when modifying
     2352 *   page protection.   However, when transitioning from readonly to writable,
     2353 *   we've ended up looping forever with the same write to readonly memory
     2354 *   VMEXIT.  We're wondering if this issue might be related to the lazy mapping
     2355 *   logic in WinHvPlatform.
     2356 *
     2357 *   Workaround: Insert a WHvRunVirtualProcessor call and make sure to get a GPA
     2358 *   unmapped exit between the two calls.  Not entirely great performance wise
     2359 *   (or for the sanity of our code).
     2360 *
     2361 *
     2362 * - WHvRunVirtualProcessor wastes time converting VID/Hyper-V messages to its
     2363 *   own format (WHV_RUN_VP_EXIT_CONTEXT).
     2364 *
     2365 *   We understand this might be because Microsoft wishes to remain free to
     2366 *   modify the VID/Hyper-V messages, but it's still rather silly and does slow
     2367 *   things down a little.  We'd much rather just process the messages directly.
     2368 *
     2369 *
     2370 * - WHvRunVirtualProcessor would've benefited from using a callback interface:
     2371 *
     2372 *      - The potential size changes of the exit context structure wouldn't be
     2373 *        an issue, since the function could manage that itself.
     2374 *
     2375 *      - State handling could probably be simplified (like cancelation).
     2376 *
     2377 *
     2378 * - WHvGetVirtualProcessorRegisters and WHvSetVirtualProcessorRegisters
     2379 *   internally convert register names, probably using temporary heap buffers.
     2380 *
     2381 *   From the looks of things, they are converting from WHV_REGISTER_NAME to
     2382 *   HV_REGISTER_NAME as found in the "Virtual Processor Register Names" section in
     2383 *   the "Hypervisor Top-Level Functional Specification" document.  This feels
     2384 *   like an awful waste of time.
     2385 *
     2386 *   We simply cannot understand why HV_REGISTER_NAME isn't used directly here,
     2387 *   or at least the same values, making any conversion redundant.  Restricting
     2388 *   access to certain registers could easily be implemented by scanning the
     2389 *   inputs.
     2390 *
     2391 *   To avoid the heap + conversion overhead, we're currently using the
     2392 *   HvCallGetVpRegisters and HvCallSetVpRegisters calls directly.
     2393 *
     2394 *
     2395 * - The YMM and XCR0 registers are not yet named (17083).  This probably
     2396 *   wouldn't be a problem if HV_REGISTER_NAME was used, see previous point.
     2397 *
     2398 *
     2399 * - Why does WINHVR.SYS (or VID.SYS) only query/set 32 registers at a time
     2400 *   thru the HvCallGetVpRegisters and HvCallSetVpRegisters hypercalls?
     2401 *
     2402 *   We've no trouble getting/setting all the registers defined by
     2403 *   WHV_REGISTER_NAME in one hypercall (around 80)...
     2404 *
     2405 *
     2406 * - The I/O port exit context information seems to be missing the address size
     2407 *   information needed for correct string I/O emulation.
     2408 *
     2409 *   VT-x provides this information in bits 7:9 in the instruction information
     2410 *   field on newer CPUs.  AMD-V in bits 7:9 in the EXITINFO1 field in the VMCB.
     2411 *
     2412 *   We can probably work around this by scanning the instruction bytes for
     2413 *   address size prefixes.  Haven't investigated it any further yet.
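
An untested sketch of such a prefix scan (pbInstr points at the instruction bytes; only the 0x67 address-size override matters here):

@code
bool fAddrSizePrefix = false;
for (unsigned off = 0; off < 15; off++)    /* x86 instructions are at most 15 bytes */
{
    uint8_t const bOpc = pbInstr[off];
    if (bOpc == 0x67)                      /* address-size override */
        fAddrSizePrefix = true;
    else if (   bOpc == 0x66 || bOpc == 0xf0 || bOpc == 0xf2 || bOpc == 0xf3 /* opsize, lock, repne, rep */
             || bOpc == 0x26 || bOpc == 0x2e || bOpc == 0x36 || bOpc == 0x3e /* segment overrides */
             || bOpc == 0x64 || bOpc == 0x65
             || (bOpc >= 0x40 && bOpc <= 0x4f) /* REX, 64-bit mode only */)
    { /* keep scanning the prefix bytes */ }
    else
        break;                             /* first non-prefix byte is the opcode */
}
@endcode
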
     2414 *
     2415 *
     2416 * - The WHvGetCapability function has a weird design:
     2417 *      - The CapabilityCode parameter is pointlessly duplicated in the output
     2418 *        structure (WHV_CAPABILITY).
     2419 *
     2420 *      - The API takes a void pointer, but everyone will probably be using
     2421 *        WHV_CAPABILITY due to WHV_CAPABILITY::CapabilityCode making it
     2422 *        impractical to use anything else.
     2423 *
     2424 *      - No output size.
     2425 *
     2426 *      - See GetFileAttributesEx, GetFileInformationByHandleEx,
     2427 *        FindFirstFileEx, and others for typical pattern for generic
     2428 *        information getters.
     2429 *
     2430 *
     2431 * - The WHvGetPartitionProperty function uses the same weird design as
     2432 *   WHvGetCapability, see above.
     2433 *
     2434 *
     2435 * - The WHvSetPartitionProperty function has a totally weird design too:
     2436 *      - In contrast to its partner WHvGetPartitionProperty, the property code
     2437 *        is not a separate input parameter here but part of the input
     2438 *        structure.
     2439 *
     2440 *      - The input structure is a void pointer rather than a pointer to
     2441 *        WHV_PARTITION_PROPERTY which everyone probably will be using because
     2442 *        of the WHV_PARTITION_PROPERTY::PropertyCode field.
     2443 *
     2444 *      - Really, why use PVOID for the input when the function isn't accepting
     2445 *        minimal sizes?  E.g. WHvPartitionPropertyCodeProcessorClFlushSize only
     2446 *        requires a 9 byte input, but the function insists on 16 bytes (17083).
     2447 *
     2448 *      - See GetFileAttributesEx, SetFileInformationByHandle, FindFirstFileEx,
     2449 *        and others for typical pattern for generic information setters and
     2450 *        getters.
     2451 *
     2452 *
     2453 * @section sec_nem_win_impl    Our implementation.
     2454 *
     2455 * Tomorrow...
     2456 *
     2457 *
     2458 */
     2459