Changeset 71284 in vbox
- Timestamp: Mar 9, 2018 12:38:30 PM
- svn:sync-xref-src-repo-rev: 121215
- Location: trunk/src/VBox/VMM/VMMR3
- Files: 2 edited
trunk/src/VBox/VMM/VMMR3/NEMR3.cpp
r71283 → r71284:

   * implementation is contained in the NEMR3Native-xxxx.cpp files.
   *
  -* @re lpg_nem_win
  +* @ref pg_nem_win
   */
trunk/src/VBox/VMM/VMMR3/NEMR3Native-win.cpp
r71283 → r71284:

   * WHvSetupPartition after first setting a lot of properties using
   * WHvSetPartitionProperty. Since the VID API is just a very thin wrapper
  -* around CreateFile and NtDeviceIoControl , it returns an actual HANDLE for the
  -* partition WinHvPlatform. We fish this HANDLE out of the WinHvPlatform
  +* around CreateFile and NtDeviceIoControlFile, it returns an actual HANDLE for
  +* the partition WinHvPlatform. We fish this HANDLE out of the WinHvPlatform
   * partition structures because we need to talk directly to VID for reasons
   * we'll get to in a bit. (Btw. we could also intercept the CreateFileW or
  -* NtDeviceIoControl calls from VID.DLL to get the HANDLE should fishing in the
  -* partition structures become difficult.)
  +* NtDeviceIoControlFile calls from VID.DLL to get the HANDLE should fishing in
  +* the partition structures become difficult.)
   *
   * The WinHvPlatform API requires us to both set the number of guest CPUs before
  …
   * Here are some observations (mostly against build 17101):
   *
  +* - The VMEXIT performance is dismal (build 17101).
  +*
  +*   Our proof of concept implementation with a kernel runloop (i.e. not using
  +*   WHvRunVirtualProcessor and friends, but calling the VID.SYS fast I/O control
  +*   entry point directly) delivers 9-10% of the port I/O performance and only
  +*   6-7% of the MMIO performance that we have with our own hypervisor.
  +*
  +*   When using the official WinHvPlatform API, the numbers are 3% for port I/O
  +*   and 5% for MMIO.
  +*
   * - The WHvCancelVirtualProcessor API schedules a dummy usermode APC callback
   *   in order to cancel any current or future alertable wait in VID.SYS during
  …
   * @section sec_nem_win_impl Our implementation.
   *
  -* Tomorrow...
  -*
  -*
  -*/
  +* We set out with the goal of wanting to run as much as possible in ring-0,
  +* reasoning that this would give us the best performance.
  +*
  +* This goal was approached gradually, starting out with a pure WinHvPlatform
  +* implementation, gradually replacing parts: register access, guest memory
  +* handling, running virtual processors. Then finally moving it all into
  +* ring-0, while keeping most of it configurable so that we could make
  +* comparisons (see NEMInternal.h and nemR3NativeRunGC()).
  +*
  +*
  +* @subsection subsect_nem_win_impl_ioctl VID.SYS I/O control calls
  +*
  +* To run things in ring-0 we need to talk directly to VID.SYS through its I/O
  +* control interface. Looking at changes between, say, build 17083 and 17101
  +* (if memory serves), a set of the VID I/O control numbers shifted a little,
  +* which means we need to determine them dynamically. We currently do this by
  +* hooking the NtDeviceIoControlFile API call from VID.DLL and snooping up the
  +* parameters when making dummy calls to relevant APIs. (We could also
  +* disassemble the relevant APIs and try to fish out the information from
  +* that, but this is way simpler.)
  +*
  +* Issuing I/O control calls from ring-0 faces a small challenge with respect
  +* to direct buffering. When using direct buffering the device will typically
  +* check that the buffer is actually in the user address space range and
  +* reject kernel addresses. Fortunately, we've got the cross context VM
  +* structure that is mapped into both kernel and user space; it's also locked
  +* and safe to access from kernel space. So, we place the I/O control buffers
  +* in the per-CPU part of it (NEMCPU::uIoCtlBuf) and give the driver the user
  +* address if direct buffering is used, or the kernel address if not.
  +*
  +* The I/O control calls are 'abstracted' in the support driver, see
  +* SUPR0IoCtlSetupForHandle(), SUPR0IoCtlPerform() and SUPR0IoCtlCleanup().
  +*
  +*
  +* @subsection subsect_nem_win_impl_cpumctx CPUMCTX
  +*
  +* Since the CPU state needs to live in Hyper-V when executing, we probably
  +* should not transfer more than necessary when handling VMEXITs. To help us
  +* manage this, CPUMCTX got a new field, CPUMCTX::fExtrn, to indicate which
  +* parts of the state are currently externalized (== in Hyper-V).
  +*/