Changeset 46883 in vbox for trunk/src/VBox/VMM
- Timestamp: Jul 1, 2013 12:24:29 PM
- Files: 1 edited
Legend:
- Unmodified lines carry no prefix
- Added lines are prefixed with +
- Removed lines are prefixed with -
trunk/src/VBox/VMM/VMMR0/HMSVMR0.cpp
--- trunk/src/VBox/VMM/VMMR0/HMSVMR0.cpp (r46871)
+++ trunk/src/VBox/VMM/VMMR0/HMSVMR0.cpp (r46883)
@@ -1555,10 +1555,14 @@
 
 /**
- * Worker for loading the guest-state into the VMCB.
+ * Loads the guest state into the VMCB. The CPU state will be loaded from these
+ * fields on every successful VM-entry.
+ *
+ * Sets up the appropriate VMRUN function to execute guest code based
+ * on the guest CPU mode.
  *
  * @returns VBox status code.
  * @param   pVM         Pointer to the VM.
  * @param   pVCpu       Pointer to the VMCPU.
- * @param   pCtx        Pointer to the guest-CPU context.
+ * @param   pMixedCtx   Pointer to the guest-CPU context.
  *
  * @remarks No-long-jump zone!!!
@@ -1616,5 +1620,5 @@
 
 /**
- * Loads the guest state.
+ * Loads the guest state on the way from ring-3.
  *
  * @returns VBox status code.
@@ -1627,8 +1631,11 @@
 VMMR0DECL(int) SVMR0LoadGuestState(PVM pVM, PVMCPU pVCpu, PCPUMCTX pCtx)
 {
-    /* Nothing to do here. Loading is done below before VM-entry. */
+    /*
+     * Avoid reloading the guest state on longjmp re-entries and do it lazily just before executing the guest.
+     * This only helps when we get rescheduled more than once to a different host CPU on a longjmp trip before
+     * finally executing guest code.
+     */
     return VINF_SUCCESS;
 }
-
 
 
@@ -3488,11 +3495,13 @@
     HMSVM_VALIDATE_EXIT_HANDLER_PARAMS();
     STAM_COUNTER_INC(&pVCpu->hm.s.StatExitExtInt);
-    /* 32-bit Windows hosts (4 cores) has trouble with this on Intel; causes higher interrupt latency. Assuming the
-       same for AMD-V.*/
-#if HC_ARCH_BITS == 64 && defined(VBOX_WITH_VMMR0_DISABLE_PREEMPTION)
-    Assert(ASMIntAreEnabled());
-    return VINF_SUCCESS;
-#else
+
+    /*
+     * AMD-V has no preemption timer, and the generic periodic preemption timer has no way to signal -before- the
+     * timer fires whether the current interrupt is our own timer or some other host interrupt. We also cannot
+     * examine what interrupt it is until the host actually takes the interrupt.
+     *
+     * Going back to executing guest code here unconditionally causes random scheduling problems (observed on an
+     * AMD Phenom 9850 Quad-Core on a Windows 64-bit host).
+     */
     return VINF_EM_RAW_INTERRUPT;
-#endif
 }
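The SVMR0LoadGuestState hunk is the heart of the change: the ring-3-to-ring-0 entry point becomes a no-op, and the actual VMCB load is deferred until just before VMRUN, so several longjmp round trips (e.g. being rescheduled across host CPUs) cost only one real load. Below is a minimal, compilable sketch of that deferred-load pattern; every name in it (VCPU, fStateDirty, LoadGuestState, LoadGuestStateBeforeRun) is a hypothetical placeholder, not the real VirtualBox API.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-in for the per-VCPU state. */
typedef struct VCPU
{
    uint64_t fStateDirty;   /* Which guest-state fields still need syncing into the VMCB. */
    int      cLoads;        /* Counts real VMCB loads, to show the savings. */
} VCPU;

#define STATE_ALL UINT64_MAX

/* Called on every ring-3 -> ring-0 transition: deliberately does nothing. */
static int LoadGuestState(VCPU *pVCpu)
{
    (void)pVCpu;            /* The dirty flags already record the pending work. */
    return 0;               /* VINF_SUCCESS */
}

/* Called exactly once, immediately before entering the guest. */
static void LoadGuestStateBeforeRun(VCPU *pVCpu)
{
    if (pVCpu->fStateDirty)
    {
        pVCpu->cLoads++;    /* Sync the dirty fields into the VMCB here. */
        pVCpu->fStateDirty = 0;
    }
}

int main(void)
{
    VCPU VCpu = { STATE_ALL, 0 };

    for (int i = 0; i < 3; i++)      /* Three longjmp round trips back into ring-0... */
        LoadGuestState(&VCpu);

    LoadGuestStateBeforeRun(&VCpu);  /* ...but only one real VMCB load before VMRUN. */
    printf("VMCB loads: %d\n", VCpu.cLoads);   /* Prints 1. */
    return 0;
}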
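The last hunk drops the conditional fast path that resumed the guest directly after a physical-interrupt #VMEXIT. Since AMD-V lacks a preemption timer, the handler cannot tell whether the pending host interrupt is the VMM's own timer, so it now always returns VINF_EM_RAW_INTERRUPT and lets the run loop drop back to the host. A rough, self-contained sketch of that control flow follows; HandleExitIntr, RunLoop, and the status constants are invented stand-ins for illustration, not the real dispatch code.

#include <stdio.h>

/* Invented status codes standing in for the real VBox ones. */
#define VINF_SUCCESS           0
#define VINF_EM_RAW_INTERRUPT  1

/* Exit handler for a physical-interrupt #VMEXIT: never resume the guest directly. */
static int HandleExitIntr(void)
{
    /* We cannot peek at the pending vector before taking the interrupt,
       so unconditionally ask the caller to leave guest context. */
    return VINF_EM_RAW_INTERRUPT;
}

/* Simplified inner run loop. */
static int RunLoop(void)
{
    for (;;)
    {
        /* (host interrupts disabled, VMRUN executes, #VMEXIT recorded...) */
        int rc = HandleExitIntr();
        /* (host interrupts re-enabled; the pending interrupt fires here) */
        if (rc != VINF_SUCCESS)
            return rc;   /* Drop back so the host scheduler can decide what runs next. */
    }
}

int main(void)
{
    printf("run loop left with rc=%d\n", RunLoop());
    return 0;
}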