VirtualBox

Changeset 71284 in vbox


Timestamp: Mar 9, 2018 12:38:30 PM (7 years ago)
Author: vboxsync
svn:sync-xref-src-repo-rev: 121215
Message: NEM: Working on the @page docs for windows. bugref:9044
Location: trunk/src/VBox/VMM/VMMR3
Files: 2 edited

  • trunk/src/VBox/VMM/VMMR3/NEMR3.cpp

    r71283 r71284  
    23   23   * implementation is contained in the NEMR3Native-xxxx.cpp files.
    24   24   *
    25        * @rel pg_nem_win
         25   * @ref pg_nem_win
    26   26   */
    27   27
  • trunk/src/VBox/VMM/VMMR3/NEMR3Native-win.cpp

    r71283 r71284  
    2224   2224   * WHvSetupPartition after first setting a lot of properties using
    2225   2225   * WHvSetPartitionProperty.  Since the VID API is just a very thin wrapper
    2226          * around CreateFile and NtDeviceIoControl, it returns an actual HANDLE for the
    2227          * partition WinHvPlatform.  We fish this HANDLE out of the WinHvPlatform
           2226   * around CreateFile and NtDeviceIoControlFile, it returns an actual HANDLE for
           2227   * the partition WinHvPlatform.  We fish this HANDLE out of the WinHvPlatform
    2228   2228   * partition structures because we need to talk directly to VID for reasons
    2229   2229   * we'll get to in a bit.  (Btw. we could also intercept the CreateFileW or
    2230          * NtDeviceIoControl calls from VID.DLL to get the HANDLE should fishing in the
    2231          * partition structures become difficult.)
           2230   * NtDeviceIoControlFile calls from VID.DLL to get the HANDLE should fishing in
           2231   * the partition structures become difficult.)
    2232   2232   *
    2233   2233   * The WinHvPlatform API requires us to both set the number of guest CPUs before
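The interception mentioned in the parenthesis above (and the I/O control number snooping described further down in the new sec_nem_win_impl text) could look roughly like the following. This is a minimal sketch, assuming VID.DLL is already loaded and imports ntdll!NtDeviceIoControlFile through its import table; the detour, the globals and hookVidDll() are invented for this example and are not the actual VirtualBox code, and error handling and thread safety are omitted.

    #include <windows.h>
    #include <winternl.h>
    #include <string.h>

    /* Native signature of ntdll!NtDeviceIoControlFile. */
    typedef NTSTATUS (NTAPI *PFNNTDEVICEIOCONTROLFILE)(HANDLE, HANDLE, PIO_APC_ROUTINE, PVOID,
                                                        PIO_STATUS_BLOCK, ULONG, PVOID, ULONG, PVOID, ULONG);

    static PFNNTDEVICEIOCONTROLFILE g_pfnOrgNtDeviceIoControlFile = NULL; /* original import entry */
    static HANDLE g_hSnoopedVidHandle = NULL;  /* handle observed going down to VID.SYS */
    static ULONG  g_uSnoopedIoCtlCode = 0;     /* last I/O control code observed */

    /* Detour: record the handle and I/O control code, then forward the call unchanged. */
    static NTSTATUS NTAPI snoopNtDeviceIoControlFile(HANDLE hFile, HANDLE hEvt, PIO_APC_ROUTINE pfnApc,
                                                     PVOID pvApcCtx, PIO_STATUS_BLOCK pIos, ULONG uCode,
                                                     PVOID pvIn, ULONG cbIn, PVOID pvOut, ULONG cbOut)
    {
        g_hSnoopedVidHandle = hFile;
        g_uSnoopedIoCtlCode = uCode;
        return g_pfnOrgNtDeviceIoControlFile(hFile, hEvt, pfnApc, pvApcCtx, pIos, uCode,
                                             pvIn, cbIn, pvOut, cbOut);
    }

    /* Patch the NtDeviceIoControlFile slot in VID.DLL's import address table. */
    static bool hookVidDll(void)
    {
        HMODULE hVid = GetModuleHandleW(L"VID.DLL");   /* must already be loaded by WinHvPlatform */
        if (!hVid)
            return false;
        BYTE *pbBase = (BYTE *)hVid;
        IMAGE_NT_HEADERS const *pNtHdrs = (IMAGE_NT_HEADERS const *)(pbBase + ((IMAGE_DOS_HEADER const *)pbBase)->e_lfanew);
        IMAGE_DATA_DIRECTORY const *pDir = &pNtHdrs->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
        for (IMAGE_IMPORT_DESCRIPTOR const *pImp = (IMAGE_IMPORT_DESCRIPTOR const *)(pbBase + pDir->VirtualAddress);
             pImp->Name != 0; pImp++)
        {
            if (_stricmp((const char *)(pbBase + pImp->Name), "ntdll.dll") != 0)
                continue;
            IMAGE_THUNK_DATA const *pName = (IMAGE_THUNK_DATA const *)(pbBase + pImp->OriginalFirstThunk);
            IMAGE_THUNK_DATA       *pIat  = (IMAGE_THUNK_DATA       *)(pbBase + pImp->FirstThunk);
            for (; pName->u1.AddressOfData != 0; pName++, pIat++)
            {
                if (IMAGE_SNAP_BY_ORDINAL(pName->u1.Ordinal))
                    continue;
                IMAGE_IMPORT_BY_NAME const *pSym = (IMAGE_IMPORT_BY_NAME const *)(pbBase + pName->u1.AddressOfData);
                if (strcmp((const char *)pSym->Name, "NtDeviceIoControlFile") != 0)
                    continue;
                DWORD fOld;
                VirtualProtect(&pIat->u1.Function, sizeof(pIat->u1.Function), PAGE_READWRITE, &fOld);
                g_pfnOrgNtDeviceIoControlFile = (PFNNTDEVICEIOCONTROLFILE)(ULONG_PTR)pIat->u1.Function;
                pIat->u1.Function = (ULONG_PTR)snoopNtDeviceIoControlFile;
                VirtualProtect(&pIat->u1.Function, sizeof(pIat->u1.Function), fOld, &fOld);
                return true;
            }
        }
        return false;
    }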
     
    2278   2278   * Here are some observations (mostly against build 17101):
    2279   2279   *
           2280   * - The VMEXIT performance is dismal (build 17101).
           2281   *
           2282   *   Our proof of concept implementation with a kernel runloop (i.e. not using
           2283   *   WHvRunVirtualProcessor and friends, but calling the VID.SYS fast I/O control
           2284   *   entry point directly) delivers 9-10% of the port I/O performance and only
           2285   *   6-7% of the MMIO performance that we have with our own hypervisor.
           2286   *
           2287   *   When using the official WinHvPlatform API, the numbers are 3% for port I/O
           2288   *   and 5% for MMIO.
           2289   *
           2290   *
    2280   2291   * - The WHvCancelVirtualProcessor API schedules a dummy usermode APC callback
    2281   2292   *   in order to cancel any current or future alertable wait in VID.SYS during
     
    2453   2464   * @section sec_nem_win_impl    Our implementation.
    2454   2465   *
    2455          * Tomorrow...
    2456          *
    2457          *
    2458          */
    2459
           2466   * We set out with the goal of running as much as possible in ring-0,
           2467   * reasoning that this would give us the best performance.
           2468   *
           2469   * This goal was approached gradually, starting out with a pure WinHvPlatform
           2470   * implementation, gradually replacing parts: register access, guest memory
           2471   * handling, running virtual processors.  Then finally moving it all into
           2472   * ring-0, while keeping most of it configurable so that we could make
           2473   * comparisons (see NEMInternal.h and nemR3NativeRunGC()).
           2474   *
           2475   *
           2476   * @subsection subsect_nem_win_impl_ioctl       VID.SYS I/O control calls
           2477   *
           2478   * To run things in ring-0 we need to talk directly to VID.SYS thru its I/O
           2479   * control interface.  Looking at the changes between builds 17083 and 17101 (if
           2480   * memory serves), a set of the VID I/O control numbers shifted a little, which
           2481   * means we need to determine them dynamically.  We currently do this by hooking
           2482   * the NtDeviceIoControlFile API call from VID.DLL and snooping up the
           2483   * parameters when making dummy calls to relevant APIs.  (We could also
           2484   * disassemble the relevant APIs and try to fish out the information from that,
           2485   * but this is way simpler.)
           2486   *
           2487   * Issuing I/O control calls from ring-0 faces a small challenge with respect
           2488   * to direct buffering.  When using direct buffering the device will typically
           2489   * check that the buffer is actually in the user address space range and reject
           2490   * kernel addresses.  Fortunately, we've got the cross context VM structure that
           2491   * is mapped into both kernel and user space; it's also locked and safe to
           2492   * access from kernel space.  So, we place the I/O control buffers in the
           2493   * per-CPU part of it (NEMCPU::uIoCtlBuf) and give the driver the user address
           2494   * if direct access buffering is used, or the kernel address if not.
           2495   *
           2496   * The I/O control calls are 'abstracted' in the support driver, see
           2497   * SUPR0IoCtlSetupForHandle(), SUPR0IoCtlPerform() and SUPR0IoCtlCleanup().
           2498   *
           2499   *
           2500   * @subsection subsect_nem_win_impl_cpumctx     CPUMCTX
           2501   *
           2502   * Since the CPU state needs to live in Hyper-V when executing, we probably
           2503   * should not transfer more than necessary when handling VMEXITs.  To help us
           2504   * manage this, CPUMCTX got a new field, CPUMCTX::fExtrn, to indicate which
           2505   * parts of the state are currently externalized (== in Hyper-V).
           2506   *
           2507   *
           2508   */
           2509
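As a purely illustrative sketch of the direct-buffering trick described in the ioctl subsection above (none of the types or helpers below exist in VirtualBox; SHAREDVMMAP, R3ADDR and prepareIoCtlBuffer() are made up for this example), the essential point is that the per-CPU I/O control buffer lives in memory that has both a kernel and a user mapping, so the driver can be handed the ring-3 alias of bytes that were filled in from ring-0:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    typedef uint64_t R3ADDR;      /* a ring-3 (user mode) address in the VM process */

    /* Hypothetical stand-in for the cross context VM structure: the same pages are
       mapped at the address we hold in ring-0 and at uRing3 in the VM process. */
    struct SHAREDVMMAP
    {
        R3ADDR  uRing3;           /* where this same structure is mapped in the VM process */
        uint8_t abIoCtlBuf[512];  /* per-CPU I/O control buffer (cf. NEMCPU::uIoCtlBuf) */
    };

    /* Copy the request into the shared buffer via the kernel mapping and return the
       address to hand to the driver: the ring-3 alias when the device uses direct
       buffering (and therefore insists on a user address), else the ring-0 address. */
    static uint64_t prepareIoCtlBuffer(SHAREDVMMAP *pShared /* ring-0 mapping */,
                                       const void *pvReq, size_t cbReq, bool fDirectBuffering)
    {
        memcpy(pShared->abIoCtlBuf, pvReq, cbReq);                     /* fill via the kernel mapping */
        return fDirectBuffering
             ? pShared->uRing3 + offsetof(SHAREDVMMAP, abIoCtlBuf)     /* same bytes, user-space address */
             : (uint64_t)(uintptr_t)&pShared->abIoCtlBuf[0];           /* kernel address is fine otherwise */
    }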
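The CPUMCTX::fExtrn bookkeeping can likewise be illustrated with a trimmed-down sketch. The flag names, the GUESTCTX structure and the importStateFromHv() stub below are invented for this example and are not the real CPUMCTX definitions; the pattern of testing the mask before a register is read and clearing the bit once it has been imported is the part that matters:

    #include <stdint.h>

    #define GSTCTX_EXTRN_RIP     UINT64_C(0x0000000000000001)   /* RIP still lives in Hyper-V */
    #define GSTCTX_EXTRN_RFLAGS  UINT64_C(0x0000000000000002)   /* RFLAGS still lives in Hyper-V */

    struct GUESTCTX
    {
        uint64_t fExtrn;     /* mask of state parts currently externalized (== in Hyper-V) */
        uint64_t rip;
        uint64_t rflags;
    };

    /* Stand-in for the code that would fetch the requested registers from Hyper-V,
       e.g. via WHvGetVirtualProcessorRegisters or a VID.SYS I/O control call. */
    static void importStateFromHv(GUESTCTX *pCtx, uint64_t fWhat)
    {
        (void)pCtx; (void)fWhat;
    }

    /* Only pull RIP across from Hyper-V if the VMEXIT handler actually needs it. */
    static uint64_t getGuestRip(GUESTCTX *pCtx)
    {
        if (pCtx->fExtrn & GSTCTX_EXTRN_RIP)
        {
            importStateFromHv(pCtx, GSTCTX_EXTRN_RIP);   /* fetch just RIP */
            pCtx->fExtrn &= ~GSTCTX_EXTRN_RIP;           /* now valid in the context */
        }
        return pCtx->rip;
    }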