Changeset 53190 in vbox for trunk/src/VBox/VMM/VMMR0/HMVMXR0.cpp
- Timestamp: Nov 4, 2014 10:40:22 AM (10 years ago)
- svn:sync-xref-src-repo-rev: 96736
- File: 1 edited
trunk/src/VBox/VMM/VMMR0/HMVMXR0.cpp (r53178 → r53190)

@@ -225,4 +225,4 @@
     /** The VM-exit interruption error code. */
     uint32_t uExitIntErrorCode;
-    /** The VM-exit exit qualification. */
+    /** The VM-exit exit code qualification. */
     uint64_t uExitQualification;

@@ -503,6 +503,6 @@
     /* 5 */ "VMRESUME with non-launched VMCS.",
     /* 6 */ "VMRESUME after VMXOFF",
-    /* 7 */ "VM entry with invalid control fields.",
-    /* 8 */ "VM entry with invalid host state fields.",
+    /* 7 */ "VM-entry with invalid control fields.",
+    /* 8 */ "VM-entry with invalid host state fields.",
     /* 9 */ "VMPTRLD with invalid physical address.",
     /* 10 */ "VMPTRLD with VMXON pointer.",

@@ -512,7 +512,7 @@
     /* 14 */ "(Not Used)",
     /* 15 */ "VMXON executed in VMX root operation.",
-    /* 16 */ "VM entry with invalid executive-VMCS pointer.",
-    /* 17 */ "VM entry with non-launched executing VMCS.",
-    /* 18 */ "VM entry with executive-VMCS pointer not VMXON pointer.",
+    /* 16 */ "VM-entry with invalid executive-VMCS pointer.",
+    /* 17 */ "VM-entry with non-launched executing VMCS.",
+    /* 18 */ "VM-entry with executive-VMCS pointer not VMXON pointer.",
     /* 19 */ "VMCALL with non-clear VMCS.",
     /* 20 */ "VMCALL with invalid VM-exit control fields.",

@@ -521,6 +521,6 @@
     /* 23 */ "VMXOFF under dual monitor treatment of SMIs and SMM.",
     /* 24 */ "VMCALL with invalid SMM-monitor features.",
-    /* 25 */ "VM entry with invalid VM-execution control fields in executive VMCS.",
-    /* 26 */ "VM entry with events blocked by MOV SS.",
+    /* 25 */ "VM-entry with invalid VM-execution control fields in executive VMCS.",
+    /* 26 */ "VM-entry with events blocked by MOV SS.",
     /* 27 */ "(Not Used)",
     /* 28 */ "Invalid operand to INVEPT/INVVPID."

@@ -683,5 +683,6 @@
 
 /**
- * Reads the exit qualification from the VMCS into the VMX transient structure.
+ * Reads the exit code qualification from the VMCS into the VMX transient
+ * structure.
  *
  * @returns VBox status code.

@@ -4686,5 +4687,5 @@
  * Loads certain guest MSRs into the VM-entry MSR-load and VM-exit MSR-store
  * areas. These MSRs will automatically be loaded to the host CPU on every
- * successful VM entry and stored from the host CPU on every successful VM exit.
+ * successful VM-entry and stored from the host CPU on every successful VM-exit.
  *
  * This also creates/updates MSR slots for the host MSRs. The actual host

@@ -7504,7 +7505,7 @@
  *                      out-of-sync. Make sure to update the required fields
  *                      before using them.
- * @param   fStepping   Running in hmR0VmxRunGuestCodeStep and we should
- *                      return VINF_EM_DBG_STEPPED an event was dispatched
- *                      directly.
+ * @param   fStepping   Running in hmR0VmxRunGuestCodeStep() and we should
+ *                      return VINF_EM_DBG_STEPPED if the event was
+ *                      dispatched directly.
  */
 static int hmR0VmxInjectPendingEvent(PVMCPU pVCpu, PCPUMCTX pMixedCtx, bool fStepping)

@@ -7602,4 +7603,4 @@
 
     /*
-     * There's no need to clear the VM entry-interruption information field here if we're not injecting anything.
+     * There's no need to clear the VM-entry interruption-information field here if we're not injecting anything.
      * VT-x clears the valid bit on every VM-exit. See Intel spec. 24.8.3 "VM-Entry Controls for Event Injection".

@@ -7638,8 +7639,8 @@
  *                      out-of-sync. Make sure to update the required fields
  *                      before using them.
- * @param   fStepping   Whether we're running in hmR0VmxRunGuestCodeStep and
- *                      should return VINF_EM_DBG_STEPPED if the event is
- *                      injected directly (registerd modified by us, not by
- *                      hardware on VM entry).
+ * @param   fStepping   Whether we're running in hmR0VmxRunGuestCodeStep()
+ *                      and should return VINF_EM_DBG_STEPPED if the event
+ *                      is injected directly (register modified by us, not
+ *                      by hardware on VM-entry).
  * @param   puIntrState Pointer to the current guest interruptibility-state.
  *                      This interruptibility-state will be updated if

@@ -7703,8 +7704,9 @@
  *                      mode, i.e. in real-mode it's not valid).
  * @param   u32ErrorCode The error code associated with the #GP.
- * @param   fStepping   Whether we're running in hmR0VmxRunGuestCodeStep
- *                      and should return VINF_EM_DBG_STEPPED if the
- *                      event is injected directly (registerd modified
- *                      by us, not by hardware on VM entry).
+ * @param   fStepping   Whether we're running in
+ *                      hmR0VmxRunGuestCodeStep() and should return
+ *                      VINF_EM_DBG_STEPPED if the event is injected
+ *                      directly (registerd modified by us, not by
+ *                      hardware on VM-entry).
  * @param   puIntrState Pointer to the current guest interruptibility-state.
  *                      This interruptibility-state will be updated if

@@ -7814,8 +7816,9 @@
  *                      This interruptibility-state will be updated if
  *                      necessary. This cannot not be NULL.
- * @param   fStepping   Whether we're running in hmR0VmxRunGuestCodeStep
- *                      and should return VINF_EM_DBG_STEPPED if the
- *                      event is injected directly (registerd modified
- *                      by us, not by hardware on VM entry).
+ * @param   fStepping   Whether we're running in
+ *                      hmR0VmxRunGuestCodeStep() and should return
+ *                      VINF_EM_DBG_STEPPED if the event is injected
+ *                      directly (register modified by us, not by
+ *                      hardware on VM-entry).
  *
  * @remarks Requires CR0!

@@ -7954,4 +7957,4 @@
             *puIntrState &= ~VMX_VMCS_GUEST_INTERRUPTIBILITY_STATE_BLOCK_STI;
         }
-        Log4(("Injecting real-mode: u32IntInfo=%#x u32ErrCode=%#x instrlen=%#x efl=%#x cs:eip=%04x:%04x\n",
+        Log4(("Injecting real-mode: u32IntInfo=%#x u32ErrCode=%#x cbInstr=%#x Eflags=%#x CS:EIP=%04x:%04x\n",
              u32IntInfo, u32ErrCode, cbInstr, pMixedCtx->eflags.u, pMixedCtx->cs.Sel, pMixedCtx->eip));

@@ -8416,10 +8419,11 @@
  *
  * This may cause longjmps to ring-3 and may even result in rescheduling to the
- * recompiler. We must be cautious what we do here regarding committing
+ * recompiler/IEM. We must be cautious what we do here regarding committing
  * guest-state information into the VMCS assuming we assuredly execute the
- * guest in VT-x mode. If we fall back to the recompiler after updating the VMCS
- * and clearing the common-state (TRPM/forceflags), we must undo those changes
- * so that the recompiler can (and should) use them when it resumes guest
- * execution. Otherwise such operations must be done when we can no longer
- * exit to ring-3.
+ * guest in VT-x mode.
+ *
+ * If we fall back to the recompiler/IEM after updating the VMCS and clearing
+ * the common-state (TRPM/forceflags), we must undo those changes so that the
+ * recompiler/IEM can (and should) use them when it resumes guest execution.
+ * Otherwise such operations must be done when we can no longer exit to ring-3.
  *

@@ -8439,5 +8443,5 @@
  *                      before using them.
  * @param   pVmxTransient Pointer to the VMX transient structure.
- * @param   fStepping   Set if called from hmR0VmxRunGuestCodeStep. Makes
+ * @param   fStepping   Set if called from hmR0VmxRunGuestCodeStep(). Makes
  *                      us ignore some of the reasons for returning to
  *                      ring-3, and return VINF_EM_DBG_STEPPED if event

@@ -8832,5 +8836,5 @@
            to ring-3. This bugger disables interrupts on VINF_SUCCESS! */
         STAM_PROFILE_ADV_START(&pVCpu->hm.s.StatEntry, x);
-        rc = hmR0VmxPreRunGuest(pVM, pVCpu, pCtx, &VmxTransient, false /* fStepping*/);
+        rc = hmR0VmxPreRunGuest(pVM, pVCpu, pCtx, &VmxTransient, false /* fStepping */);
         if (rc != VINF_SUCCESS)
             break;

@@ -8906,5 +8910,5 @@
            to ring-3. This bugger disables interrupts on VINF_SUCCESS! */
         STAM_PROFILE_ADV_START(&pVCpu->hm.s.StatEntry, x);
-        rcStrict = hmR0VmxPreRunGuest(pVM, pVCpu, pCtx, &VmxTransient, true /* fStepping*/);
+        rcStrict = hmR0VmxPreRunGuest(pVM, pVCpu, pCtx, &VmxTransient, true /* fStepping */);
         if (rcStrict != VINF_SUCCESS)
             break;

@@ -8926,5 +8930,5 @@
         }
 
-        /* Handle the VM-exit - we quit earlier on certain exits, see hmR0VmxHandleExitStep. */
+        /* Handle the VM-exit - we quit earlier on certain VM-exits, see hmR0VmxHandleExitStep(). */
         AssertMsg(VmxTransient.uExitReason <= VMX_EXIT_MAX, ("%#x\n", VmxTransient.uExitReason));
         STAM_COUNTER_INC(&pVCpu->hm.s.StatExitAll);

@@ -9095,7 +9099,7 @@
 
 /**
- * Single stepping exit filtering.
+ * Single-stepping VM-exit filtering.
  *
  * This is preprocessing the exits and deciding whether we've gotten far enough
- * to return VINF_EM_DBG_STEPPED already. If not, normal exit handling is
+ * to return VINF_EM_DBG_STEPPED already. If not, normal VM-exit handling is
  * performed.

@@ -9107,6 +9111,6 @@
  *                      fields before using them.
  * @param   pVmxTransient Pointer to the VMX-transient structure.
- * @param   uExitReason The exit reason.
+ * @param   uExitReason The VM-exit reason.
  */
 DECLINLINE(VBOXSTRICTRC) hmR0VmxHandleExitStep(PVMCPU pVCpu, PCPUMCTX pMixedCtx, PVMXTRANSIENT pVmxTransient,
                                                uint32_t uExitReason)

@@ -9116,5 +9120,5 @@
         case VMX_EXIT_XCPT_OR_NMI:
         {
-            /* Check for NMI. */
+            /* Check for host NMI. */
             int rc2 = hmR0VmxReadExitIntInfoVmcs(pVmxTransient);
             AssertRCReturn(rc2, rc2);

@@ -10375,5 +10379,5 @@
     /*
      * This can only happen if we support dual-monitor treatment of SMI, which can be activated by executing VMCALL in VMX
-     * root operation. Only an STM (SMM transfer monitor) would get this exit when we (the executive monitor) execute a VMCALL
+     * root operation. Only an STM (SMM transfer monitor) would get this VM-exit when we (the executive monitor) execute a VMCALL
      * in VMX root mode or receive an SMI. If we get here, something funny is going on.
      * See Intel spec. "33.15.6 Activating the Dual-Monitor Treatment" and Intel spec. 25.3 "Other Causes of VM-Exits"

@@ -10974,5 +10978,5 @@
         default:
         {
-            AssertMsgFailed(("Invalid access-type in Mov CRx exit qualification %#x\n", uAccessType));
+            AssertMsgFailed(("Invalid access-type in Mov CRx VM-exit qualification %#x\n", uAccessType));
             rc = VERR_VMX_UNEXPECTED_EXCEPTION;
         }

@@ -11173,5 +11177,5 @@
     HM_DISABLE_PREEMPT_IF_NEEDED();
 
-    bool fIsGuestDbgActive = CPUMR0DebugStateMaybeSaveGuest(pVCpu, true /* fDr6*/);
+    bool fIsGuestDbgActive = CPUMR0DebugStateMaybeSaveGuest(pVCpu, true /* fDr6 */);
 
     VBOXSTRICTRC rcStrict2 = DBGFBpCheckIo(pVM, pVCpu, pMixedCtx, uIOPort, cbValue);

@@ -11556,4 +11560,4 @@
     TRPMAssertXcptPF(pVCpu, GCPhys, uErrorCode);
 
-    Log4(("EPT violation %#x at %#RX64 ErrorCode %#x CS:EIP=%04x:%08RX64\n", pVmxTransient->uExitQualification, GCPhys,
+    Log4(("EPT violation %#x at %#RX64 ErrorCode %#x CS:RIP=%04x:%08RX64\n", pVmxTransient->uExitQualification, GCPhys,
          uErrorCode, pMixedCtx->cs.Sel, pMixedCtx->rip));

@@ -11662,5 +11666,5 @@
 
     /*
-     * Get the DR6-like values from the exit qualification and pass it to DBGF
+     * Get the DR6-like values from the VM-exit qualification and pass it to DBGF
      * for processing.
      */

@@ -11882,5 +11886,5 @@
             case OP_POPF:
             {
-                Log4(("POPF CS:RIP %04x:%04RX64\n", pMixedCtx->cs.Sel, pMixedCtx->rip));
+                Log4(("POPF CS:EIP %04x:%04RX64\n", pMixedCtx->cs.Sel, pMixedCtx->rip));
                 uint32_t cbParm;
                 uint32_t uMask;

@@ -12168,4 +12172,5 @@
         return rc;
     }
+
    if (rc == VINF_EM_RAW_GUEST_TRAP)
    {