Changeset 80815 in vbox
- Timestamp: Sep 16, 2019 9:22:23 AM
- Location: trunk/src/VBox/VMM
- Files: 3 edited
trunk/src/VBox/VMM/VMMAll/HMVMXAll.cpp
r80813 → r80815

@@ lines 1232-1239 @@
 /**
- * Notification callback for when the guest hypervisor's current VMCS is loaded or
+ * Notification callback for when the nested hypervisor's current VMCS is loaded or
  * changed outside VMX R0 code (e.g. in IEM).
  *
- * This need -not- be called for modifications to the guest hypervisor's current
+ * This need -not- be called for modifications to the nested hypervisor's current
  * VMCS when the guest is in VMX non-root mode as VMCS shadowing is not applicable
  * there.

@@ lines 1249-1253 @@
     /*
-     * Make sure we need to copy the guest hypervisor's current VMCS into the shadow VMCS
+     * Make sure we need to copy the nested hypervisor's current VMCS into the shadow VMCS
      * on the next guest VM-entry.
      */
trunk/src/VBox/VMM/VMMR0/HMVMXR0.cpp
r80810 → r80815

@@ lines 937-941 @@
  *
  * @remarks When executing a nested-guest, this will not remove any of the specified
- *          controls if the guest hypervisor has set any one of them.
+ *          controls if the nested hypervisor has set any one of them.
  */
 static void hmR0VmxRemoveProcCtlsVmcs(PVMCPUCC pVCpu, PVMXTRANSIENT pVmxTransient, uint32_t uProcCtls)

@@ lines 4661-4665 @@
  * For nested-guests, the "IA-32e mode guest" control we initialize with what is
  * required to get the nested-guest working with hardware-assisted VMX execution.
- * It depends on the nested-guest's IA32_EFER.LMA bit. Remember, a guest hypervisor
+ * It depends on the nested-guest's IA32_EFER.LMA bit. Remember, a nested hypervisor
  * can skip intercepting changes to the EFER MSR. This is why it it needs to be done
  * here rather than while merging the guest VMCS controls.

@@ lines 5257-5262 @@
 {
     /*
-     * If the guest hypervisor has loaded a current VMCS and is in VMX root mode,
-     * copy the guest hypervisor's current VMCS into the shadow VMCS and enable
+     * If the nested hypervisor has loaded a current VMCS and is in VMX root mode,
+     * copy the nested hypervisor's current VMCS into the shadow VMCS and enable
      * VMCS shadowing to skip intercepting some or all VMREAD/VMWRITE VM-exits.
      *

@@ lines 5274-5278 @@
     /*
-     * For performance reasons, also check if the guest hypervisor's current VMCS
+     * For performance reasons, also check if the nested hypervisor's current VMCS
      * was newly loaded or modified before copying it to the shadow VMCS.
      */

@@ lines 5442-5446 @@
  * With nested-guests, we may have extended the guest/host mask here since we
  * merged in the outer guest's mask. Thus, the merged mask can include more bits
- * (to read from the nested-guest CR0 read-shadow) than the guest hypervisor
+ * (to read from the nested-guest CR0 read-shadow) than the nested hypervisor
  * originally supplied. We must copy those bits from the nested-guest CR0 into
  * the nested-guest CR0 read-shadow.

@@ lines 5610-5614 @@
  * merged in the outer guest's mask, see hmR0VmxMergeVmcsNested). This means, the
  * mask can include more bits (to read from the nested-guest CR4 read-shadow) than
- * the guest hypervisor originally supplied. Thus, we should, in essence, copy
+ * the nested hypervisor originally supplied. Thus, we should, in essence, copy
  * those bits from the nested-guest CR4 into the nested-guest CR4 read-shadow.
  */

@@ lines 9852-9856 @@
  *
  *   - We would need to perform VMREADs with interrupts disabled and is orders of
- *     magnitude worse when we run as a guest hypervisor without VMCS shadowing
+ *     magnitude worse when we run as a nested hypervisor without VMCS shadowing
  *     supported by the host hypervisor.

@@ lines 9912-9916 @@
  *    MSR bitmap in this case.
  *
- *    The guest hypervisor may also switch whether it uses MSR bitmaps for
+ *    The nested hypervisor may also switch whether it uses MSR bitmaps for
  *    each VM-entry, hence initializing it once per-VM while setting up the
  *    nested-guest VMCS is not sufficient.

@@ lines 9940-9944 @@
  *   this function is never called.
  *
- *   For nested-guests since the guest hypervisor provides these controls on every
+ *   For nested-guests since the nested hypervisor provides these controls on every
  *   nested-guest VM-entry and could potentially change them everytime we need to
  *   merge them before every nested-guest VM-entry.

@@ lines 9990-9994 @@
  *   These controls contains state that depends on the nested-guest state (primarily
  *   EFER MSR) and is thus not constant between VMLAUNCH/VMRESUME and the nested-guest
- *   VM-exit. Although the guest hypervisor cannot change it, we need to in order to
+ *   VM-exit. Although the nested hypervisor cannot change it, we need to in order to
  *   properly continue executing the nested-guest if the EFER MSR changes but does not
  *   cause a nested-guest VM-exits.

@@ lines 9996-10000 @@
  * VM-exit controls:
  *   These controls specify the host state on return. We cannot use the controls from
- *   the guest hypervisor state as is as it would contain the guest state rather than
+ *   the nested hypervisor state as is as it would contain the guest state rather than
  *   the host state. Since the host state is subject to change (e.g. preemption, trips
  *   to ring-3, longjmp and rescheduling to a different host CPU) they are not constant

@@ lines 10011-10014 @@
  * VM-exit MSR-load areas:
  *   This must contain the real host MSRs with hardware-assisted VMX execution. Hence, we
- *   can entirely ignore what the guest hypervisor wants to load here.
+ *   can entirely ignore what the nested hypervisor wants to load here.
  */

@@ lines 10071-10075 @@
  * I/O Bitmap.
  *
- * We do not use the I/O bitmap that may be provided by the guest hypervisor as we always
+ * We do not use the I/O bitmap that may be provided by the nested hypervisor as we always
  * intercept all I/O port accesses.
  */

@@ lines 10150-10154 @@
     /*
      * We must make sure CR8 reads/write must cause VM-exits when TPR shadowing is not
-     * used by the guest hypervisor. Preventing MMIO accesses to the physical APIC will
+     * used by the nested hypervisor. Preventing MMIO accesses to the physical APIC will
      * be taken care of by EPT/shadow paging.
      */

@@ lines 12633-12650 @@
         /*
          * Instructions that cause VM-exits unconditionally or the condition is
-         * always is taken solely from the guest hypervisor (meaning if the VM-exit
+         * always is taken solely from the nested hypervisor (meaning if the VM-exit
          * happens, it's guaranteed to be a nested-guest VM-exit).
          *
 …
         case VMX_EXIT_VMRESUME:
         case VMX_EXIT_VMXOFF:
-        case VMX_EXIT_ENCLS:        /* Condition specified solely by guest hypervisor. */
+        case VMX_EXIT_ENCLS:        /* Condition specified solely by nested hypervisor. */
         case VMX_EXIT_VMFUNC:
             return hmR0VmxExitInstrNested(pVCpu, pVmxTransient);

@@ lines 12652-12675 @@
         /*
          * Instructions that cause VM-exits unconditionally or the condition is
-         * always is taken solely from the guest hypervisor (meaning if the VM-exit
+         * always is taken solely from the nested hypervisor (meaning if the VM-exit
          * happens, it's guaranteed to be a nested-guest VM-exit).
          *
 …
         case VMX_EXIT_VMPTRST:
         case VMX_EXIT_VMXON:
-        case VMX_EXIT_GDTR_IDTR_ACCESS: /* Condition specified solely by guest hypervisor. */
+        case VMX_EXIT_GDTR_IDTR_ACCESS: /* Condition specified solely by nested hypervisor. */
         case VMX_EXIT_LDTR_TR_ACCESS:
         case VMX_EXIT_RDRAND:
trunk/src/VBox/VMM/VMMR3/EM.cpp
r80812 → r80815

@@ lines 1652-1656 @@
     {
 #ifdef VBOX_WITH_NESTED_HWVIRT_SVM
-        /* Handle the physical interrupt intercept (can be masked by the guest hypervisor). */
+        /* Handle the physical interrupt intercept (can be masked by the nested hypervisor). */
         if (CPUMIsGuestSvmCtrlInterceptSet(pVCpu, &pVCpu->cpum.GstCtx, SVM_CTRL_INTERCEPT_INTR))
         {