VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/CPUM.cpp@ 93444

Last change on this file since 93444 was 93337, checked in by vboxsync, 3 years ago

VMM: Nested VMX: bugref:10092 Comment clarifying CR0-fixed-0 and CR0-fixed-1 bits reporting.

/* $Id: CPUM.cpp 93337 2022-01-19 05:46:03Z vboxsync $ */
/** @file
 * CPUM - CPU Monitor / Manager.
 */

/*
 * Copyright (C) 2006-2022 Oracle Corporation
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 */

/** @page pg_cpum CPUM - CPU Monitor / Manager
 *
 * The CPU Monitor / Manager keeps track of all the CPU registers. It is
 * also responsible for lazy FPU handling and some of the context loading
 * in raw mode.
 *
 * There are three CPU contexts, the most important one is the guest one (GC).
 * When running in raw-mode (RC) there is a special hyper context for the VMM
 * part that floats around inside the guest address space. When running in
 * raw-mode, CPUM also maintains a host context for saving and restoring
 * registers across world switches. This latter is done in cooperation with the
 * world switcher (@see pg_vmm).
 *
 * @see grp_cpum
 *
 * @section sec_cpum_fpu FPU / SSE / AVX / ++ state.
 *
 * TODO: proper write up, currently just some notes.
 *
 * The ring-0 FPU handling per OS:
 *
 *    - 64-bit Windows uses XMM registers in the kernel as part of the calling
 *      convention (Visual C++ doesn't seem to have a way to disable
 *      generating such code either), so CR0.TS/EM are always zero from what I
 *      can tell. We are also forced to always load/save the guest XMM0-XMM15
 *      registers when entering/leaving guest context. Interrupt handlers
 *      using FPU/SSE will officially have to call save and restore functions
 *      exported by the kernel, if they really, really have to use the state.
 *
 *    - 32-bit Windows does lazy FPU handling, I think, probably including
 *      lazy saving. The Windows Internals book states that it's a bad
 *      idea to use the FPU in kernel space. However, it looks like it will
 *      restore the FPU state of the current thread in case of a kernel \#NM.
 *      Interrupt handlers should be the same as for 64-bit.
 *
 *    - Darwin allows taking \#NM in kernel space, restoring the current
 *      thread's state if I read the code correctly. It saves the FPU state of
 *      the outgoing thread, and uses CR0.TS to lazily load the state of the
 *      incoming one. No idea yet how the FPU is treated by interrupt
 *      handlers, i.e. whether they are allowed to disable the state or
 *      something.
 *
 *    - Linux also allows \#NM in kernel space (don't know since when), and
 *      uses CR0.TS for lazy loading. It saves the outgoing thread's state and
 *      lazily loads the incoming one, unless configured to aggressively load
 *      it. Interrupt handlers can ask whether they're allowed to use the FPU,
 *      and may freely trash the state if Linux thinks it has saved the
 *      thread's state already. This is a problem.
 *
 *    - Solaris will, from what I can tell, panic if it gets an \#NM in kernel
 *      context. When switching threads, the kernel will save the state of
 *      the outgoing thread and lazily load the incoming one using CR0.TS.
 *      There are a few routines in sseblk.s which use the SSE unit in ring-0
 *      to do stuff; the HAT is among the users. The routines there will
 *      manually clear CR0.TS and save the XMM registers they use, but only if
 *      CR0.TS was zero upon entry. They will skip it when not, because, as
 *      mentioned above, the FPU state is saved when switching away from a
 *      thread and CR0.TS set to 1, so when CR0.TS is 1 there is nothing to
 *      preserve. This is a problem if we restore CR0.TS to 1 after loading
 *      the guest state.
 *
 *    - FreeBSD - no idea yet.
 *
 *    - OS/2 does not allow \#NMs in kernel space, IIRC. It does lazy loading,
 *      possibly also lazy saving. Interrupts must preserve the CR0.TS+EM &
 *      FPU states.
 *
 * Up to r107425 (2016-05-24) we would only temporarily modify CR0.TS/EM while
 * saving and restoring the host and guest states. The motivation for this
 * change is that we want to be able to emulate SSE instructions in ring-0
 * (IEM).
 *
 * Starting with that change, we will leave CR0.TS=EM=0 after saving the host
 * state and only restore it once we've restored the host FPU state. This has
 * the accidental side effect of triggering Solaris to preserve XMM registers
 * in sseblk.s. Since CR0 is changed when saving the FPU state, CPUM must now
 * inform the VT-x (HMVMX) code about it, as it caches the CR0 value in the
 * VMCS. (A minimal sketch of this flow follows this comment.)
 *
 *
 * @section sec_cpum_logging Logging Level Assignments.
 *
 * The following log level assignments are used:
 *      - Log6 is used for FPU state management.
 *      - Log7 is used for FPU state actualization.
 *
 */

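/*
 * Illustrative sketch (not part of the build): the post-r107425 CR0.TS/EM flow
 * described in @ref sec_cpum_fpu, using the Log6 level assigned above. The
 * helper name cpumR0SketchEnterGuestFpu is hypothetical and exists only for
 * this example; the actual work happens in the ring-0 FPU code.
 */
#if 0
# include <iprt/asm-amd64-x86.h>
# include <iprt/x86.h>

static void cpumR0SketchEnterGuestFpu(void)
{
    /* Remember the original CR0 so the TS/EM bits can be restored together
       with the host FPU state later. */
    RTCCUINTREG const uCr0 = ASMGetCR0();

    /* Clear TS and EM so FPU/SSE instructions (e.g. FXSAVE) don't raise #NM. */
    if (uCr0 & (X86_CR0_TS | X86_CR0_EM))
        ASMSetCR0(uCr0 & ~(X86_CR0_TS | X86_CR0_EM));

    /* ... save the host FPU state, load the guest FPU state ... */

    /* Post-r107425 behaviour: leave CR0.TS=EM=0 here. Any CR0 change must be
       reported to HM so the CR0 value cached in the VMCS stays in sync. */
    Log6(("CPUM: sketch - CR0 %#RX64, TS/EM cleared\n", (uint64_t)uCr0));
}
#endif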


/*********************************************************************************************************************************
*   Header Files                                                                                                                 *
*********************************************************************************************************************************/
#define LOG_GROUP LOG_GROUP_CPUM
#include <VBox/vmm/cpum.h>
#include <VBox/vmm/cpumdis.h>
#include <VBox/vmm/cpumctx-v1_6.h>
#include <VBox/vmm/pgm.h>
#include <VBox/vmm/apic.h>
#include <VBox/vmm/mm.h>
#include <VBox/vmm/em.h>
#include <VBox/vmm/iem.h>
#include <VBox/vmm/selm.h>
#include <VBox/vmm/dbgf.h>
#include <VBox/vmm/hm.h>
#include <VBox/vmm/hmvmxinline.h>
#include <VBox/vmm/ssm.h>
#include "CPUMInternal.h"
#include <VBox/vmm/vm.h>

#include <VBox/param.h>
#include <VBox/dis.h>
#include <VBox/err.h>
#include <VBox/log.h>
#include <iprt/asm-amd64-x86.h>
#include <iprt/assert.h>
#include <iprt/cpuset.h>
#include <iprt/mem.h>
#include <iprt/mp.h>
#include <iprt/string.h>


/*********************************************************************************************************************************
*   Defined Constants And Macros                                                                                                 *
*********************************************************************************************************************************/
/**
 * This was used in the saved state up to the early life of version 14.
 *
 * It indicates that we may have some out-of-sync hidden segment registers.
 * It is only relevant for raw-mode.
 */
#define CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID    RT_BIT(12)


/*********************************************************************************************************************************
*   Structures and Typedefs                                                                                                      *
*********************************************************************************************************************************/

/**
 * What kind of cpu info dump to perform.
 */
typedef enum CPUMDUMPTYPE
{
    CPUMDUMPTYPE_TERSE,
    CPUMDUMPTYPE_DEFAULT,
    CPUMDUMPTYPE_VERBOSE
} CPUMDUMPTYPE;
/** Pointer to a cpu info dump type. */
typedef CPUMDUMPTYPE *PCPUMDUMPTYPE;


/*********************************************************************************************************************************
*   Internal Functions                                                                                                           *
*********************************************************************************************************************************/
static DECLCALLBACK(int)  cpumR3LiveExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uPass);
static DECLCALLBACK(int)  cpumR3SaveExec(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(int)  cpumR3LoadPrep(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(int)  cpumR3LoadExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass);
static DECLCALLBACK(int)  cpumR3LoadDone(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(void) cpumR3InfoAll(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) cpumR3InfoGuest(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) cpumR3InfoGuestHwvirt(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) cpumR3InfoGuestInstr(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) cpumR3InfoHyper(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) cpumR3InfoHost(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);


/*********************************************************************************************************************************
*   Global Variables                                                                                                             *
*********************************************************************************************************************************/
/** Saved state field descriptors for CPUMCTX. */
static const SSMFIELD g_aCpumCtxFields[] =
{
    SSMFIELD_ENTRY( CPUMCTX, rdi),
    SSMFIELD_ENTRY( CPUMCTX, rsi),
    SSMFIELD_ENTRY( CPUMCTX, rbp),
    SSMFIELD_ENTRY( CPUMCTX, rax),
    SSMFIELD_ENTRY( CPUMCTX, rbx),
    SSMFIELD_ENTRY( CPUMCTX, rdx),
    SSMFIELD_ENTRY( CPUMCTX, rcx),
    SSMFIELD_ENTRY( CPUMCTX, rsp),
    SSMFIELD_ENTRY( CPUMCTX, rflags),
    SSMFIELD_ENTRY( CPUMCTX, rip),
    SSMFIELD_ENTRY( CPUMCTX, r8),
    SSMFIELD_ENTRY( CPUMCTX, r9),
    SSMFIELD_ENTRY( CPUMCTX, r10),
    SSMFIELD_ENTRY( CPUMCTX, r11),
    SSMFIELD_ENTRY( CPUMCTX, r12),
    SSMFIELD_ENTRY( CPUMCTX, r13),
    SSMFIELD_ENTRY( CPUMCTX, r14),
    SSMFIELD_ENTRY( CPUMCTX, r15),
    SSMFIELD_ENTRY( CPUMCTX, es.Sel),
    SSMFIELD_ENTRY( CPUMCTX, es.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, es.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, es.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, es.Attr),
    SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
    SSMFIELD_ENTRY( CPUMCTX, cs.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, cs.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, cs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
    SSMFIELD_ENTRY( CPUMCTX, ss.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, ss.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, ss.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
    SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
    SSMFIELD_ENTRY( CPUMCTX, ds.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, ds.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, ds.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
    SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
    SSMFIELD_ENTRY( CPUMCTX, fs.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, fs.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, fs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
    SSMFIELD_ENTRY( CPUMCTX, gs.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, gs.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, gs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, cr0),
    SSMFIELD_ENTRY( CPUMCTX, cr2),
    SSMFIELD_ENTRY( CPUMCTX, cr3),
    SSMFIELD_ENTRY( CPUMCTX, cr4),
    SSMFIELD_ENTRY( CPUMCTX, dr[0]),
    SSMFIELD_ENTRY( CPUMCTX, dr[1]),
    SSMFIELD_ENTRY( CPUMCTX, dr[2]),
    SSMFIELD_ENTRY( CPUMCTX, dr[3]),
    SSMFIELD_ENTRY( CPUMCTX, dr[6]),
    SSMFIELD_ENTRY( CPUMCTX, dr[7]),
    SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
    SSMFIELD_ENTRY( CPUMCTX, gdtr.pGdt),
    SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
    SSMFIELD_ENTRY( CPUMCTX, idtr.pIdt),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
    SSMFIELD_ENTRY( CPUMCTX, msrEFER),
    SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrPAT),
    SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
    SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
    SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
    SSMFIELD_ENTRY( CPUMCTX, tr.ValidSel),
    SSMFIELD_ENTRY( CPUMCTX, tr.fFlags),
    SSMFIELD_ENTRY( CPUMCTX, tr.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
    SSMFIELD_ENTRY_VER( CPUMCTX, aXcr[0], CPUM_SAVED_STATE_VERSION_XSAVE),
    SSMFIELD_ENTRY_VER( CPUMCTX, aXcr[1], CPUM_SAVED_STATE_VERSION_XSAVE),
    SSMFIELD_ENTRY_VER( CPUMCTX, fXStateMask, CPUM_SAVED_STATE_VERSION_XSAVE),
    SSMFIELD_ENTRY_TERM()
};

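/*
 * Illustrative sketch (not part of the build): how an SSMFIELD descriptor
 * table such as g_aCpumCtxFields above is typically consumed when loading a
 * saved state. The helper name is hypothetical; the real work is done in
 * cpumR3LoadExec().
 */
#if 0
static int cpumR3SketchLoadCtx(PVM pVM, PSSMHANDLE pSSM, PVMCPU pVCpu)
{
    RT_NOREF(pVM);
    /* SSMR3GetStructEx walks the descriptor table and reads each listed
       CPUMCTX member from the stream; *_IGNORE/*_OLD entries are skipped
       or consumed without being stored. */
    int rc = SSMR3GetStructEx(pSSM, &pVCpu->cpum.s.Guest, sizeof(pVCpu->cpum.s.Guest),
                              0 /*fFlags*/, &g_aCpumCtxFields[0], NULL /*pvUser*/);
    AssertRCReturn(rc, rc);
    return VINF_SUCCESS;
}
#endif
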
/** Saved state field descriptors for SVM nested hardware-virtualization
 *  Host State. */
static const SSMFIELD g_aSvmHwvirtHostState[] =
{
    SSMFIELD_ENTRY( SVMHOSTSTATE, uEferMsr),
    SSMFIELD_ENTRY( SVMHOSTSTATE, uCr0),
    SSMFIELD_ENTRY( SVMHOSTSTATE, uCr4),
    SSMFIELD_ENTRY( SVMHOSTSTATE, uCr3),
    SSMFIELD_ENTRY( SVMHOSTSTATE, uRip),
    SSMFIELD_ENTRY( SVMHOSTSTATE, uRsp),
    SSMFIELD_ENTRY( SVMHOSTSTATE, uRax),
    SSMFIELD_ENTRY( SVMHOSTSTATE, rflags),
    SSMFIELD_ENTRY( SVMHOSTSTATE, es.Sel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, es.ValidSel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, es.fFlags),
    SSMFIELD_ENTRY( SVMHOSTSTATE, es.u64Base),
    SSMFIELD_ENTRY( SVMHOSTSTATE, es.u32Limit),
    SSMFIELD_ENTRY( SVMHOSTSTATE, es.Attr),
    SSMFIELD_ENTRY( SVMHOSTSTATE, cs.Sel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, cs.ValidSel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, cs.fFlags),
    SSMFIELD_ENTRY( SVMHOSTSTATE, cs.u64Base),
    SSMFIELD_ENTRY( SVMHOSTSTATE, cs.u32Limit),
    SSMFIELD_ENTRY( SVMHOSTSTATE, cs.Attr),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ss.Sel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ss.ValidSel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ss.fFlags),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ss.u64Base),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ss.u32Limit),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ss.Attr),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ds.Sel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ds.ValidSel),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ds.fFlags),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ds.u64Base),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ds.u32Limit),
    SSMFIELD_ENTRY( SVMHOSTSTATE, ds.Attr),
    SSMFIELD_ENTRY( SVMHOSTSTATE, gdtr.cbGdt),
    SSMFIELD_ENTRY( SVMHOSTSTATE, gdtr.pGdt),
    SSMFIELD_ENTRY( SVMHOSTSTATE, idtr.cbIdt),
    SSMFIELD_ENTRY( SVMHOSTSTATE, idtr.pIdt),
    SSMFIELD_ENTRY_IGNORE(SVMHOSTSTATE, abPadding),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for VMX nested hardware-virtualization
 *  VMCS. */
static const SSMFIELD g_aVmxHwvirtVmcs[] =
{
    SSMFIELD_ENTRY( VMXVVMCS, u32VmcsRevId),
    SSMFIELD_ENTRY( VMXVVMCS, enmVmxAbort),
    SSMFIELD_ENTRY( VMXVVMCS, fVmcsState),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au8Padding0),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32Reserved0),

    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, u16Reserved0),

    SSMFIELD_ENTRY( VMXVVMCS, u32RoVmInstrError),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoExitReason),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoExitIntInfo),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoExitIntErrCode),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoIdtVectoringInfo),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoIdtVectoringErrCode),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoExitInstrLen),
    SSMFIELD_ENTRY( VMXVVMCS, u32RoExitInstrInfo),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32RoReserved2),

    SSMFIELD_ENTRY( VMXVVMCS, u64RoGuestPhysAddr),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved1),

    SSMFIELD_ENTRY( VMXVVMCS, u64RoExitQual),
    SSMFIELD_ENTRY( VMXVVMCS, u64RoIoRcx),
    SSMFIELD_ENTRY( VMXVVMCS, u64RoIoRsi),
    SSMFIELD_ENTRY( VMXVVMCS, u64RoIoRdi),
    SSMFIELD_ENTRY( VMXVVMCS, u64RoIoRip),
    SSMFIELD_ENTRY( VMXVVMCS, u64RoGuestLinearAddr),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved5),

    SSMFIELD_ENTRY( VMXVVMCS, u16Vpid),
    SSMFIELD_ENTRY( VMXVVMCS, u16PostIntNotifyVector),
    SSMFIELD_ENTRY( VMXVVMCS, u16EptpIndex),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au16Reserved0),

    SSMFIELD_ENTRY( VMXVVMCS, u32PinCtls),
    SSMFIELD_ENTRY( VMXVVMCS, u32ProcCtls),
    SSMFIELD_ENTRY( VMXVVMCS, u32XcptBitmap),
    SSMFIELD_ENTRY( VMXVVMCS, u32XcptPFMask),
    SSMFIELD_ENTRY( VMXVVMCS, u32XcptPFMatch),
    SSMFIELD_ENTRY( VMXVVMCS, u32Cr3TargetCount),
    SSMFIELD_ENTRY( VMXVVMCS, u32ExitCtls),
    SSMFIELD_ENTRY( VMXVVMCS, u32ExitMsrStoreCount),
    SSMFIELD_ENTRY( VMXVVMCS, u32ExitMsrLoadCount),
    SSMFIELD_ENTRY( VMXVVMCS, u32EntryCtls),
    SSMFIELD_ENTRY( VMXVVMCS, u32EntryMsrLoadCount),
    SSMFIELD_ENTRY( VMXVVMCS, u32EntryIntInfo),
    SSMFIELD_ENTRY( VMXVVMCS, u32EntryXcptErrCode),
    SSMFIELD_ENTRY( VMXVVMCS, u32EntryInstrLen),
    SSMFIELD_ENTRY( VMXVVMCS, u32TprThreshold),
    SSMFIELD_ENTRY( VMXVVMCS, u32ProcCtls2),
    SSMFIELD_ENTRY( VMXVVMCS, u32PleGap),
    SSMFIELD_ENTRY( VMXVVMCS, u32PleWindow),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32Reserved1),

    SSMFIELD_ENTRY( VMXVVMCS, u64AddrIoBitmapA),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrIoBitmapB),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrMsrBitmap),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrExitMsrStore),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrExitMsrLoad),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrEntryMsrLoad),
    SSMFIELD_ENTRY( VMXVVMCS, u64ExecVmcsPtr),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrPml),
    SSMFIELD_ENTRY( VMXVVMCS, u64TscOffset),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrVirtApic),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrApicAccess),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrPostedIntDesc),
    SSMFIELD_ENTRY( VMXVVMCS, u64VmFuncCtls),
    SSMFIELD_ENTRY( VMXVVMCS, u64EptPtr),
    SSMFIELD_ENTRY( VMXVVMCS, u64EoiExitBitmap0),
    SSMFIELD_ENTRY( VMXVVMCS, u64EoiExitBitmap1),
    SSMFIELD_ENTRY( VMXVVMCS, u64EoiExitBitmap2),
    SSMFIELD_ENTRY( VMXVVMCS, u64EoiExitBitmap3),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrEptpList),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrVmreadBitmap),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrVmwriteBitmap),
    SSMFIELD_ENTRY( VMXVVMCS, u64AddrXcptVeInfo),
    SSMFIELD_ENTRY( VMXVVMCS, u64XssExitBitmap),
    SSMFIELD_ENTRY( VMXVVMCS, u64EnclsExitBitmap),
    SSMFIELD_ENTRY( VMXVVMCS, u64SppTablePtr),
    SSMFIELD_ENTRY( VMXVVMCS, u64TscMultiplier),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64ProcCtls3, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64EnclvExitBitmap, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved0),

    SSMFIELD_ENTRY( VMXVVMCS, u64Cr0Mask),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr4Mask),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr0ReadShadow),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr4ReadShadow),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr3Target0),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr3Target1),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr3Target2),
    SSMFIELD_ENTRY( VMXVVMCS, u64Cr3Target3),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved4),

    SSMFIELD_ENTRY( VMXVVMCS, HostEs),
    SSMFIELD_ENTRY( VMXVVMCS, HostCs),
    SSMFIELD_ENTRY( VMXVVMCS, HostSs),
    SSMFIELD_ENTRY( VMXVVMCS, HostDs),
    SSMFIELD_ENTRY( VMXVVMCS, HostFs),
    SSMFIELD_ENTRY( VMXVVMCS, HostGs),
    SSMFIELD_ENTRY( VMXVVMCS, HostTr),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au16Reserved2),

    SSMFIELD_ENTRY( VMXVVMCS, u32HostSysenterCs),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32Reserved4),

    SSMFIELD_ENTRY( VMXVVMCS, u64HostPatMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostEferMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostPerfGlobalCtlMsr),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64HostPkrsMsr, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved3),

    SSMFIELD_ENTRY( VMXVVMCS, u64HostCr0),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostCr3),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostCr4),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostFsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostGsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostTrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostGdtrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostIdtrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostSysenterEsp),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostSysenterEip),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostRsp),
    SSMFIELD_ENTRY( VMXVVMCS, u64HostRip),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64HostSCetMsr, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64HostSsp, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64HostIntrSspTableAddrMsr, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved7),

    SSMFIELD_ENTRY( VMXVVMCS, GuestEs),
    SSMFIELD_ENTRY( VMXVVMCS, GuestCs),
    SSMFIELD_ENTRY( VMXVVMCS, GuestSs),
    SSMFIELD_ENTRY( VMXVVMCS, GuestDs),
    SSMFIELD_ENTRY( VMXVVMCS, GuestFs),
    SSMFIELD_ENTRY( VMXVVMCS, GuestGs),
    SSMFIELD_ENTRY( VMXVVMCS, GuestLdtr),
    SSMFIELD_ENTRY( VMXVVMCS, GuestTr),
    SSMFIELD_ENTRY( VMXVVMCS, u16GuestIntStatus),
    SSMFIELD_ENTRY( VMXVVMCS, u16PmlIndex),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au16Reserved1),

    SSMFIELD_ENTRY( VMXVVMCS, u32GuestEsLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestCsLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestSsLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestDsLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestFsLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestGsLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestLdtrLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestTrLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestGdtrLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestIdtrLimit),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestEsAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestCsAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestSsAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestDsAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestFsAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestGsAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestLdtrAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestTrAttr),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestIntrState),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestActivityState),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestSmBase),
    SSMFIELD_ENTRY( VMXVVMCS, u32GuestSysenterCS),
    SSMFIELD_ENTRY( VMXVVMCS, u32PreemptTimer),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au32Reserved3),

    SSMFIELD_ENTRY( VMXVVMCS, u64VmcsLinkPtr),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestDebugCtlMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPatMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestEferMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPerfGlobalCtlMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPdpte0),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPdpte1),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPdpte2),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPdpte3),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestBndcfgsMsr),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestRtitCtlMsr),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64GuestPkrsMsr, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved2),

    SSMFIELD_ENTRY( VMXVVMCS, u64GuestCr0),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestCr3),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestCr4),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestEsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestCsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestSsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestDsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestFsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestGsBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestLdtrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestTrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestGdtrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestIdtrBase),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestDr7),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestRsp),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestRip),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestRFlags),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestPendingDbgXcpts),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestSysenterEsp),
    SSMFIELD_ENTRY( VMXVVMCS, u64GuestSysenterEip),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64GuestSCetMsr, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64GuestSsp, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_VER( VMXVVMCS, u64GuestIntrSspTableAddrMsr, CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2),
    SSMFIELD_ENTRY_IGNORE(VMXVVMCS, au64Reserved6),

    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for CPUMCTX. */
static const SSMFIELD g_aCpumX87Fields[] =
{
    SSMFIELD_ENTRY( X86FXSTATE, FCW),
    SSMFIELD_ENTRY( X86FXSTATE, FSW),
    SSMFIELD_ENTRY( X86FXSTATE, FTW),
    SSMFIELD_ENTRY( X86FXSTATE, FOP),
    SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
    SSMFIELD_ENTRY( X86FXSTATE, CS),
    SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
    SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
    SSMFIELD_ENTRY( X86FXSTATE, DS),
    SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
    SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
    SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
    SSMFIELD_ENTRY_VER( X86FXSTATE, au32RsrvdForSoftware[0], CPUM_SAVED_STATE_VERSION_XSAVE), /* 32-bit/64-bit hack */
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for X86XSAVEHDR. */
static const SSMFIELD g_aCpumXSaveHdrFields[] =
{
    SSMFIELD_ENTRY( X86XSAVEHDR, bmXState),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for X86XSAVEYMMHI. */
static const SSMFIELD g_aCpumYmmHiFields[] =
{
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[0]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[1]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[2]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[3]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[4]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[5]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[6]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[7]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[8]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[9]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[10]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[11]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[12]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[13]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[14]),
    SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[15]),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for X86XSAVEBNDREGS. */
static const SSMFIELD g_aCpumBndRegsFields[] =
{
    SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[0]),
    SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[1]),
    SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[2]),
    SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[3]),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for X86XSAVEBNDCFG. */
static const SSMFIELD g_aCpumBndCfgFields[] =
{
    SSMFIELD_ENTRY( X86XSAVEBNDCFG, fConfig),
    SSMFIELD_ENTRY( X86XSAVEBNDCFG, fStatus),
    SSMFIELD_ENTRY_TERM()
};

#if 0 /** @todo */
/** Saved state field descriptors for X86XSAVEOPMASK. */
static const SSMFIELD g_aCpumOpmaskFields[] =
{
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[0]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[1]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[2]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[3]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[4]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[5]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[6]),
    SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[7]),
    SSMFIELD_ENTRY_TERM()
};
#endif

/** Saved state field descriptors for X86XSAVEZMMHI256. */
static const SSMFIELD g_aCpumZmmHi256Fields[] =
{
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[0]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[1]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[2]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[3]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[4]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[5]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[6]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[7]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[8]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[9]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[10]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[11]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[12]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[13]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[14]),
    SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[15]),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for X86XSAVEZMM16HI. */
static const SSMFIELD g_aCpumZmm16HiFields[] =
{
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[0]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[1]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[2]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[3]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[4]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[5]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[6]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[7]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[8]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[9]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[10]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[11]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[12]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[13]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[14]),
    SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[15]),
    SSMFIELD_ENTRY_TERM()
};


/** Saved state field descriptors for CPUMCTX in V4.1 before the hidden selector
 *  registers changed. */
static const SSMFIELD g_aCpumX87FieldsMem[] =
{
    SSMFIELD_ENTRY( X86FXSTATE, FCW),
    SSMFIELD_ENTRY( X86FXSTATE, FSW),
    SSMFIELD_ENTRY( X86FXSTATE, FTW),
    SSMFIELD_ENTRY( X86FXSTATE, FOP),
    SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
    SSMFIELD_ENTRY( X86FXSTATE, CS),
    SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
    SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
    SSMFIELD_ENTRY( X86FXSTATE, DS),
    SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
    SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
    SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
    SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdRest),
    SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdForSoftware),
};

/** Saved state field descriptors for CPUMCTX in V4.1 before the hidden selector
 *  registers changed. */
static const SSMFIELD g_aCpumCtxFieldsMem[] =
{
    SSMFIELD_ENTRY( CPUMCTX, rdi),
    SSMFIELD_ENTRY( CPUMCTX, rsi),
    SSMFIELD_ENTRY( CPUMCTX, rbp),
    SSMFIELD_ENTRY( CPUMCTX, rax),
    SSMFIELD_ENTRY( CPUMCTX, rbx),
    SSMFIELD_ENTRY( CPUMCTX, rdx),
    SSMFIELD_ENTRY( CPUMCTX, rcx),
    SSMFIELD_ENTRY( CPUMCTX, rsp),
    SSMFIELD_ENTRY_OLD( lss_esp, sizeof(uint32_t)),
    SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
    SSMFIELD_ENTRY_OLD( ssPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
    SSMFIELD_ENTRY_OLD( gsPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
    SSMFIELD_ENTRY_OLD( fsPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, es.Sel),
    SSMFIELD_ENTRY_OLD( esPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
    SSMFIELD_ENTRY_OLD( dsPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
    SSMFIELD_ENTRY_OLD( csPadding, sizeof(uint16_t)*3),
    SSMFIELD_ENTRY( CPUMCTX, rflags),
    SSMFIELD_ENTRY( CPUMCTX, rip),
    SSMFIELD_ENTRY( CPUMCTX, r8),
    SSMFIELD_ENTRY( CPUMCTX, r9),
    SSMFIELD_ENTRY( CPUMCTX, r10),
    SSMFIELD_ENTRY( CPUMCTX, r11),
    SSMFIELD_ENTRY( CPUMCTX, r12),
    SSMFIELD_ENTRY( CPUMCTX, r13),
    SSMFIELD_ENTRY( CPUMCTX, r14),
    SSMFIELD_ENTRY( CPUMCTX, r15),
    SSMFIELD_ENTRY( CPUMCTX, es.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, es.Attr),
    SSMFIELD_ENTRY( CPUMCTX, cs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, ss.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
    SSMFIELD_ENTRY( CPUMCTX, ds.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
    SSMFIELD_ENTRY( CPUMCTX, fs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, gs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, cr0),
    SSMFIELD_ENTRY( CPUMCTX, cr2),
    SSMFIELD_ENTRY( CPUMCTX, cr3),
    SSMFIELD_ENTRY( CPUMCTX, cr4),
    SSMFIELD_ENTRY( CPUMCTX, dr[0]),
    SSMFIELD_ENTRY( CPUMCTX, dr[1]),
    SSMFIELD_ENTRY( CPUMCTX, dr[2]),
    SSMFIELD_ENTRY( CPUMCTX, dr[3]),
    SSMFIELD_ENTRY_OLD( dr[4], sizeof(uint64_t)),
    SSMFIELD_ENTRY_OLD( dr[5], sizeof(uint64_t)),
    SSMFIELD_ENTRY( CPUMCTX, dr[6]),
    SSMFIELD_ENTRY( CPUMCTX, dr[7]),
    SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
    SSMFIELD_ENTRY( CPUMCTX, gdtr.pGdt),
    SSMFIELD_ENTRY_OLD( gdtrPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
    SSMFIELD_ENTRY( CPUMCTX, idtr.pIdt),
    SSMFIELD_ENTRY_OLD( idtrPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
    SSMFIELD_ENTRY_OLD( ldtrPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
    SSMFIELD_ENTRY_OLD( trPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
    SSMFIELD_ENTRY( CPUMCTX, msrEFER),
    SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrPAT),
    SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
    SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
    SSMFIELD_ENTRY( CPUMCTX, tr.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for CPUMCTX_VER1_6. */
static const SSMFIELD g_aCpumX87FieldsV16[] =
{
    SSMFIELD_ENTRY( X86FXSTATE, FCW),
    SSMFIELD_ENTRY( X86FXSTATE, FSW),
    SSMFIELD_ENTRY( X86FXSTATE, FTW),
    SSMFIELD_ENTRY( X86FXSTATE, FOP),
    SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
    SSMFIELD_ENTRY( X86FXSTATE, CS),
    SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
    SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
    SSMFIELD_ENTRY( X86FXSTATE, DS),
    SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
    SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
    SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
    SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
    SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
    SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdRest),
    SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdForSoftware),
    SSMFIELD_ENTRY_TERM()
};

/** Saved state field descriptors for CPUMCTX_VER1_6. */
static const SSMFIELD g_aCpumCtxFieldsV16[] =
{
    SSMFIELD_ENTRY( CPUMCTX, rdi),
    SSMFIELD_ENTRY( CPUMCTX, rsi),
    SSMFIELD_ENTRY( CPUMCTX, rbp),
    SSMFIELD_ENTRY( CPUMCTX, rax),
    SSMFIELD_ENTRY( CPUMCTX, rbx),
    SSMFIELD_ENTRY( CPUMCTX, rdx),
    SSMFIELD_ENTRY( CPUMCTX, rcx),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, rsp),
    SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
    SSMFIELD_ENTRY_OLD( ssPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY_OLD( CPUMCTX, sizeof(uint64_t) /*rsp_notused*/),
    SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
    SSMFIELD_ENTRY_OLD( gsPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
    SSMFIELD_ENTRY_OLD( fsPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, es.Sel),
    SSMFIELD_ENTRY_OLD( esPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
    SSMFIELD_ENTRY_OLD( dsPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
    SSMFIELD_ENTRY_OLD( csPadding, sizeof(uint16_t)*3),
    SSMFIELD_ENTRY( CPUMCTX, rflags),
    SSMFIELD_ENTRY( CPUMCTX, rip),
    SSMFIELD_ENTRY( CPUMCTX, r8),
    SSMFIELD_ENTRY( CPUMCTX, r9),
    SSMFIELD_ENTRY( CPUMCTX, r10),
    SSMFIELD_ENTRY( CPUMCTX, r11),
    SSMFIELD_ENTRY( CPUMCTX, r12),
    SSMFIELD_ENTRY( CPUMCTX, r13),
    SSMFIELD_ENTRY( CPUMCTX, r14),
    SSMFIELD_ENTRY( CPUMCTX, r15),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, es.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, es.Attr),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, cs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ss.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ds.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, fs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, gs.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
    SSMFIELD_ENTRY( CPUMCTX, cr0),
    SSMFIELD_ENTRY( CPUMCTX, cr2),
    SSMFIELD_ENTRY( CPUMCTX, cr3),
    SSMFIELD_ENTRY( CPUMCTX, cr4),
    SSMFIELD_ENTRY_OLD( cr8, sizeof(uint64_t)),
    SSMFIELD_ENTRY( CPUMCTX, dr[0]),
    SSMFIELD_ENTRY( CPUMCTX, dr[1]),
    SSMFIELD_ENTRY( CPUMCTX, dr[2]),
    SSMFIELD_ENTRY( CPUMCTX, dr[3]),
    SSMFIELD_ENTRY_OLD( dr[4], sizeof(uint64_t)),
    SSMFIELD_ENTRY_OLD( dr[5], sizeof(uint64_t)),
    SSMFIELD_ENTRY( CPUMCTX, dr[6]),
    SSMFIELD_ENTRY( CPUMCTX, dr[7]),
    SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, gdtr.pGdt),
    SSMFIELD_ENTRY_OLD( gdtrPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY_OLD( gdtrPadding64, sizeof(uint64_t)),
    SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, idtr.pIdt),
    SSMFIELD_ENTRY_OLD( idtrPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY_OLD( idtrPadding64, sizeof(uint64_t)),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
    SSMFIELD_ENTRY_OLD( ldtrPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
    SSMFIELD_ENTRY_OLD( trPadding, sizeof(uint16_t)),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
    SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
    SSMFIELD_ENTRY( CPUMCTX, msrEFER),
    SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrPAT),
    SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
    SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
    SSMFIELD_ENTRY_OLD( msrFSBASE, sizeof(uint64_t)),
    SSMFIELD_ENTRY_OLD( msrGSBASE, sizeof(uint64_t)),
    SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ldtr.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
    SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, tr.u64Base),
    SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
    SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
    SSMFIELD_ENTRY_OLD( padding, sizeof(uint32_t)*2),
    SSMFIELD_ENTRY_TERM()
};


/**
 * Checks for partial/leaky FXSAVE/FXRSTOR handling on AMD CPUs.
 *
 * AMD K7, K8 and newer AMD CPUs do not save/restore the x87 error pointers
 * (last instruction pointer, last data pointer, last opcode) except when the ES
 * bit (Exception Summary) in the x87 FSW (FPU Status Word) is set. Thus, if we
 * don't clear these registers, there is a potential for local FPU state leakage
 * from one process using the FPU to another.
 *
 * See the AMD Instruction Reference for FXSAVE, FXRSTOR.
 *
 * @param   pVM     The cross context VM structure.
 */
static void cpumR3CheckLeakyFpu(PVM pVM)
{
    uint32_t       u32CpuVersion = ASMCpuId_EAX(1);
    uint32_t const u32Family     = u32CpuVersion >> 8;
    if (   u32Family >= 6 /* K7 and higher */
        && (ASMIsAmdCpu() || ASMIsHygonCpu()))
    {
        uint32_t cExt = ASMCpuId_EAX(0x80000000);
        if (ASMIsValidExtRange(cExt))
        {
            uint32_t fExtFeaturesEDX = ASMCpuId_EDX(0x80000001);
            if (fExtFeaturesEDX & X86_CPUID_AMD_FEATURE_EDX_FFXSR)
            {
                for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
                {
                    PVMCPU pVCpu = pVM->apCpusR3[idCpu];
                    pVCpu->cpum.s.fUseFlags |= CPUM_USE_FFXSR_LEAKY;
                }
                Log(("CPUM: Host CPU has leaky fxsave/fxrstor behaviour\n"));
            }
        }
    }
}

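/*
 * Illustrative sketch (not part of the build): the full CPUID family/model
 * decoding for leaf 1 EAX, shown for comparison with the looser shift used in
 * cpumR3CheckLeakyFpu above. Standard CPUID layout: stepping [3:0],
 * model [7:4], family [11:8], extended model [19:16], extended family [27:20].
 * The helper name is hypothetical.
 */
#if 0
static void cpumR3SketchDecodeCpuVersion(uint32_t uEax)
{
    uint32_t uFamily = (uEax >> 8) & 0xf;
    uint32_t uModel  = (uEax >> 4) & 0xf;
    if (uFamily == 0xf)
        uFamily += (uEax >> 20) & 0xff;        /* extended family is additive */
    if (uFamily == 0x6 || uFamily >= 0xf)
        uModel |= ((uEax >> 16) & 0xf) << 4;   /* extended model extends upwards */
    Log(("CPUM: sketch - family %#x model %#x\n", uFamily, uModel));
}
#endif
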
/**
 * Initializes the SVM hardware virtualization state.
 *
 * @param   pVM     The cross context VM structure.
 */
static void cpumR3InitSvmHwVirtState(PVM pVM)
{
    Assert(pVM->cpum.s.GuestFeatures.fSvm);

    LogRel(("CPUM: AMD-V nested-guest init\n"));
    for (VMCPUID i = 0; i < pVM->cCpus; i++)
    {
        PVMCPU pVCpu = pVM->apCpusR3[i];
        pVCpu->cpum.s.Guest.hwvirt.enmHwvirt = CPUMHWVIRT_SVM;

        AssertCompile(SVM_VMCB_PAGES  * X86_PAGE_SIZE == sizeof(pVCpu->cpum.s.Guest.hwvirt.svm.Vmcb));
        AssertCompile(SVM_MSRPM_PAGES * X86_PAGE_SIZE == sizeof(pVCpu->cpum.s.Guest.hwvirt.svm.abMsrBitmap));
        AssertCompile(SVM_IOPM_PAGES  * X86_PAGE_SIZE == sizeof(pVCpu->cpum.s.Guest.hwvirt.svm.abIoBitmap));
    }
}


/**
 * Resets per-VCPU SVM hardware virtualization state.
 *
 * @param   pVCpu   The cross context virtual CPU structure.
 */
DECLINLINE(void) cpumR3ResetSvmHwVirtState(PVMCPU pVCpu)
{
    PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
    Assert(pCtx->hwvirt.enmHwvirt == CPUMHWVIRT_SVM);

    RT_ZERO(pCtx->hwvirt.svm.Vmcb);
    pCtx->hwvirt.svm.uMsrHSavePa    = 0;
    pCtx->hwvirt.svm.uPrevPauseTick = 0;
}


/**
 * Initializes the VMX hardware virtualization state.
 *
 * @param   pVM     The cross context VM structure.
 */
static void cpumR3InitVmxHwVirtState(PVM pVM)
{
    LogRel(("CPUM: VT-x nested-guest init\n"));
    for (VMCPUID i = 0; i < pVM->cCpus; i++)
    {
        PVMCPU   pVCpu = pVM->apCpusR3[i];
        PCPUMCTX pCtx  = &pVCpu->cpum.s.Guest;

        pCtx->hwvirt.enmHwvirt = CPUMHWVIRT_VMX;

        AssertCompile(sizeof(pCtx->hwvirt.vmx.Vmcs) == VMX_V_VMCS_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.Vmcs) == VMX_V_VMCS_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.ShadowVmcs) == VMX_V_SHADOW_VMCS_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.ShadowVmcs) == VMX_V_SHADOW_VMCS_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abVmreadBitmap) == VMX_V_VMREAD_VMWRITE_BITMAP_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abVmreadBitmap) == VMX_V_VMREAD_VMWRITE_BITMAP_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abVmwriteBitmap) == VMX_V_VMREAD_VMWRITE_BITMAP_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abVmwriteBitmap) == VMX_V_VMREAD_VMWRITE_BITMAP_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.aEntryMsrLoadArea) == VMX_V_AUTOMSR_AREA_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.aEntryMsrLoadArea) == VMX_V_AUTOMSR_AREA_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.aExitMsrStoreArea) == VMX_V_AUTOMSR_AREA_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.aExitMsrStoreArea) == VMX_V_AUTOMSR_AREA_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.aExitMsrLoadArea) == VMX_V_AUTOMSR_AREA_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.aExitMsrLoadArea) == VMX_V_AUTOMSR_AREA_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abMsrBitmap) == VMX_V_MSR_BITMAP_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abMsrBitmap) == VMX_V_MSR_BITMAP_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abIoBitmap) == (VMX_V_IO_BITMAP_A_PAGES + VMX_V_IO_BITMAP_B_PAGES) * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abIoBitmap) == VMX_V_IO_BITMAP_A_SIZE + VMX_V_IO_BITMAP_B_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abVirtApicPage) == VMX_V_VIRT_APIC_PAGES * X86_PAGE_SIZE);
        AssertCompile(sizeof(pCtx->hwvirt.vmx.abVirtApicPage) == VMX_V_VIRT_APIC_SIZE);

        /*
         * Zero out all allocated pages (should compress well for saved-state).
         */
        /** @todo r=bird: this is and always was unnecessary - they are already zeroed. */
        RT_ZERO(pCtx->hwvirt.vmx.Vmcs);
        RT_ZERO(pCtx->hwvirt.vmx.ShadowVmcs);
        RT_ZERO(pCtx->hwvirt.vmx.abVmreadBitmap);
        RT_ZERO(pCtx->hwvirt.vmx.abVmwriteBitmap);
        RT_ZERO(pCtx->hwvirt.vmx.aEntryMsrLoadArea);
        RT_ZERO(pCtx->hwvirt.vmx.aExitMsrStoreArea);
        RT_ZERO(pCtx->hwvirt.vmx.aExitMsrLoadArea);
        RT_ZERO(pCtx->hwvirt.vmx.abMsrBitmap);
        RT_ZERO(pCtx->hwvirt.vmx.abIoBitmap);
        RT_ZERO(pCtx->hwvirt.vmx.abVirtApicPage);
    }
}

/**
 * Resets per-VCPU VMX hardware virtualization state.
 *
 * @param   pVCpu   The cross context virtual CPU structure.
 */
DECLINLINE(void) cpumR3ResetVmxHwVirtState(PVMCPU pVCpu)
{
    PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
    Assert(pCtx->hwvirt.enmHwvirt == CPUMHWVIRT_VMX);

    RT_ZERO(pCtx->hwvirt.vmx.Vmcs);
    RT_ZERO(pCtx->hwvirt.vmx.ShadowVmcs);
    pCtx->hwvirt.vmx.GCPhysVmxon       = NIL_RTGCPHYS;
    pCtx->hwvirt.vmx.GCPhysShadowVmcs  = NIL_RTGCPHYS;
    pCtx->hwvirt.vmx.GCPhysVmcs        = NIL_RTGCPHYS;
    pCtx->hwvirt.vmx.fInVmxRootMode    = false;
    pCtx->hwvirt.vmx.fInVmxNonRootMode = false;
    /* Don't reset diagnostics here. */

    /* Stop any VMX-preemption timer. */
    CPUMStopGuestVmxPremptTimer(pVCpu);

    /* Clear all nested-guest FFs. */
    VMCPU_FF_CLEAR_MASK(pVCpu, VMCPU_FF_VMX_ALL_MASK);
}


/**
 * Displays the host and guest VMX features.
 *
 * @param   pVM     The cross context VM structure.
 * @param   pHlp    The info helper functions.
 * @param   pszArgs "terse", "default" or "verbose".
 */
DECLCALLBACK(void) cpumR3InfoVmxFeatures(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
{
    RT_NOREF(pszArgs);
    PCCPUMFEATURES pHostFeatures  = &pVM->cpum.s.HostFeatures;
    PCCPUMFEATURES pGuestFeatures = &pVM->cpum.s.GuestFeatures;
    if (   pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_INTEL
        || pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_VIA
        || pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_SHANGHAI)
    {
#define VMXFEATDUMP(a_szDesc, a_Var) \
        pHlp->pfnPrintf(pHlp, "  %s = %u (%u)\n", a_szDesc, pGuestFeatures->a_Var, pHostFeatures->a_Var)

        pHlp->pfnPrintf(pHlp, "Nested hardware virtualization - VMX features\n");
        pHlp->pfnPrintf(pHlp, "  Mnemonic - Description = guest (host)\n");
        VMXFEATDUMP("VMX - Virtual-Machine Extensions ", fVmx);
        /* Basic. */
        VMXFEATDUMP("InsOutInfo - INS/OUTS instruction info. ", fVmxInsOutInfo);

        /* Pin-based controls. */
        VMXFEATDUMP("ExtIntExit - External interrupt exiting ", fVmxExtIntExit);
        VMXFEATDUMP("NmiExit - NMI exiting ", fVmxNmiExit);
        VMXFEATDUMP("VirtNmi - Virtual NMIs ", fVmxVirtNmi);
        VMXFEATDUMP("PreemptTimer - VMX preemption timer ", fVmxPreemptTimer);
        VMXFEATDUMP("PostedInt - Posted interrupts ", fVmxPostedInt);

        /* Processor-based controls. */
        VMXFEATDUMP("IntWindowExit - Interrupt-window exiting ", fVmxIntWindowExit);
        VMXFEATDUMP("TscOffsetting - TSC offsetting ", fVmxTscOffsetting);
        VMXFEATDUMP("HltExit - HLT exiting ", fVmxHltExit);
        VMXFEATDUMP("InvlpgExit - INVLPG exiting ", fVmxInvlpgExit);
        VMXFEATDUMP("MwaitExit - MWAIT exiting ", fVmxMwaitExit);
        VMXFEATDUMP("RdpmcExit - RDPMC exiting ", fVmxRdpmcExit);
        VMXFEATDUMP("RdtscExit - RDTSC exiting ", fVmxRdtscExit);
        VMXFEATDUMP("Cr3LoadExit - CR3-load exiting ", fVmxCr3LoadExit);
        VMXFEATDUMP("Cr3StoreExit - CR3-store exiting ", fVmxCr3StoreExit);
        VMXFEATDUMP("TertiaryExecCtls - Activate tertiary controls ", fVmxTertiaryExecCtls);
        VMXFEATDUMP("Cr8LoadExit - CR8-load exiting ", fVmxCr8LoadExit);
        VMXFEATDUMP("Cr8StoreExit - CR8-store exiting ", fVmxCr8StoreExit);
        VMXFEATDUMP("UseTprShadow - Use TPR shadow ", fVmxUseTprShadow);
        VMXFEATDUMP("NmiWindowExit - NMI-window exiting ", fVmxNmiWindowExit);
        VMXFEATDUMP("MovDRxExit - Mov-DR exiting ", fVmxMovDRxExit);
        VMXFEATDUMP("UncondIoExit - Unconditional I/O exiting ", fVmxUncondIoExit);
        VMXFEATDUMP("UseIoBitmaps - Use I/O bitmaps ", fVmxUseIoBitmaps);
        VMXFEATDUMP("MonitorTrapFlag - Monitor Trap Flag ", fVmxMonitorTrapFlag);
        VMXFEATDUMP("UseMsrBitmaps - MSR bitmaps ", fVmxUseMsrBitmaps);
        VMXFEATDUMP("MonitorExit - MONITOR exiting ", fVmxMonitorExit);
        VMXFEATDUMP("PauseExit - PAUSE exiting ", fVmxPauseExit);
        VMXFEATDUMP("SecondaryExecCtl - Activate secondary controls ", fVmxSecondaryExecCtls);

        /* Secondary processor-based controls. */
        VMXFEATDUMP("VirtApic - Virtualize-APIC accesses ", fVmxVirtApicAccess);
        VMXFEATDUMP("Ept - Extended Page Tables ", fVmxEpt);
        VMXFEATDUMP("DescTableExit - Descriptor-table exiting ", fVmxDescTableExit);
        VMXFEATDUMP("Rdtscp - Enable RDTSCP ", fVmxRdtscp);
        VMXFEATDUMP("VirtX2ApicMode - Virtualize-x2APIC mode ", fVmxVirtX2ApicMode);
        VMXFEATDUMP("Vpid - Enable VPID ", fVmxVpid);
        VMXFEATDUMP("WbinvdExit - WBINVD exiting ", fVmxWbinvdExit);
        VMXFEATDUMP("UnrestrictedGuest - Unrestricted guest ", fVmxUnrestrictedGuest);
        VMXFEATDUMP("ApicRegVirt - APIC-register virtualization ", fVmxApicRegVirt);
        VMXFEATDUMP("VirtIntDelivery - Virtual-interrupt delivery ", fVmxVirtIntDelivery);
        VMXFEATDUMP("PauseLoopExit - PAUSE-loop exiting ", fVmxPauseLoopExit);
        VMXFEATDUMP("RdrandExit - RDRAND exiting ", fVmxRdrandExit);
        VMXFEATDUMP("Invpcid - Enable INVPCID ", fVmxInvpcid);
        VMXFEATDUMP("VmFuncs - Enable VM Functions ", fVmxVmFunc);
        VMXFEATDUMP("VmcsShadowing - VMCS shadowing ", fVmxVmcsShadowing);
        VMXFEATDUMP("RdseedExiting - RDSEED exiting ", fVmxRdseedExit);
        VMXFEATDUMP("PML - Page-Modification Log (PML) ", fVmxPml);
        VMXFEATDUMP("EptVe - EPT violations can cause #VE ", fVmxEptXcptVe);
        VMXFEATDUMP("ConcealVmxFromPt - Conceal VMX from Processor Trace ", fVmxConcealVmxFromPt);
        VMXFEATDUMP("XsavesXRstors - Enable XSAVES/XRSTORS ", fVmxXsavesXrstors);
        VMXFEATDUMP("ModeBasedExecuteEpt - Mode-based execute permissions ", fVmxModeBasedExecuteEpt);
        VMXFEATDUMP("SppEpt - Sub-page page write permissions for EPT ", fVmxSppEpt);
        VMXFEATDUMP("PtEpt - Processor Trace addresses translatable by EPT ", fVmxPtEpt);
        VMXFEATDUMP("UseTscScaling - Use TSC scaling ", fVmxUseTscScaling);
        VMXFEATDUMP("UserWaitPause - Enable TPAUSE, UMONITOR and UMWAIT ", fVmxUserWaitPause);
        VMXFEATDUMP("EnclvExit - ENCLV exiting ", fVmxEnclvExit);

        /* Tertiary processor-based controls. */
        VMXFEATDUMP("LoadIwKeyExit - LOADIWKEY exiting ", fVmxLoadIwKeyExit);

        /* VM-entry controls. */
        VMXFEATDUMP("EntryLoadDebugCtls - Load debug controls on VM-entry ", fVmxEntryLoadDebugCtls);
        VMXFEATDUMP("Ia32eModeGuest - IA-32e mode guest ", fVmxIa32eModeGuest);
        VMXFEATDUMP("EntryLoadEferMsr - Load IA32_EFER MSR on VM-entry ", fVmxEntryLoadEferMsr);
        VMXFEATDUMP("EntryLoadPatMsr - Load IA32_PAT MSR on VM-entry ", fVmxEntryLoadPatMsr);

        /* VM-exit controls. */
        VMXFEATDUMP("ExitSaveDebugCtls - Save debug controls on VM-exit ", fVmxExitSaveDebugCtls);
        VMXFEATDUMP("HostAddrSpaceSize - Host address-space size ", fVmxHostAddrSpaceSize);
        VMXFEATDUMP("ExitAckExtInt - Acknowledge interrupt on VM-exit ", fVmxExitAckExtInt);
        VMXFEATDUMP("ExitSavePatMsr - Save IA32_PAT MSR on VM-exit ", fVmxExitSavePatMsr);
        VMXFEATDUMP("ExitLoadPatMsr - Load IA32_PAT MSR on VM-exit ", fVmxExitLoadPatMsr);
        VMXFEATDUMP("ExitSaveEferMsr - Save IA32_EFER MSR on VM-exit ", fVmxExitSaveEferMsr);
        VMXFEATDUMP("ExitLoadEferMsr - Load IA32_EFER MSR on VM-exit ", fVmxExitLoadEferMsr);
        VMXFEATDUMP("SavePreemptTimer - Save VMX-preemption timer ", fVmxSavePreemptTimer);

        /* Miscellaneous data. */
        VMXFEATDUMP("ExitSaveEferLma - Save IA32_EFER.LMA on VM-exit ", fVmxExitSaveEferLma);
        VMXFEATDUMP("IntelPt - Intel PT (Processor Trace) in VMX operation ", fVmxPt);
        VMXFEATDUMP("VmwriteAll - VMWRITE to any supported VMCS field ", fVmxVmwriteAll);
        VMXFEATDUMP("EntryInjectSoftInt - Inject softint. with 0-len instr. ", fVmxEntryInjectSoftInt);
#undef VMXFEATDUMP
    }
    else
        pHlp->pfnPrintf(pHlp, "No VMX features present - requires an Intel or compatible CPU.\n");
}

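/*
 * Illustrative sketch (not part of the build): info handlers like the one
 * above are invoked through the DBGF info mechanism. The registration name
 * "cpumvmxfeat" and the helper name are assumptions made for this example.
 */
#if 0
static void cpumR3SketchDumpVmxFeatures(PVM pVM)
{
    /* With no output helper given, the default (log) helper is used. */
    DBGFR3Info(pVM->pUVM, "cpumvmxfeat", NULL /*pszArgs*/, NULL /*pHlp*/);
}
#endif
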
1245
1246/**
1247 * Checks whether nested-guest execution using hardware-assisted VMX (e.g, using HM
1248 * or NEM) is allowed.
1249 *
1250 * @returns @c true if hardware-assisted nested-guest execution is allowed, @c false
1251 * otherwise.
1252 * @param pVM The cross context VM structure.
1253 */
1254static bool cpumR3IsHwAssistNstGstExecAllowed(PVM pVM)
1255{
1256 AssertMsg(pVM->bMainExecutionEngine != VM_EXEC_ENGINE_NOT_SET, ("Calling this function too early!\n"));
1257#ifndef VBOX_WITH_NESTED_HWVIRT_ONLY_IN_IEM
1258 if ( pVM->bMainExecutionEngine == VM_EXEC_ENGINE_HW_VIRT
1259 || pVM->bMainExecutionEngine == VM_EXEC_ENGINE_NATIVE_API)
1260 return true;
1261#else
1262 NOREF(pVM);
1263#endif
1264 return false;
1265}
1266
1267
1268/**
1269 * Initializes the VMX guest MSRs from guest CPU features based on the host MSRs.
1270 *
1271 * @param pVM The cross context VM structure.
1272 * @param pHostVmxMsrs The host VMX MSRs. Pass NULL when fully emulating VMX
1273 * and no hardware-assisted nested-guest execution is
1274 * possible for this VM.
1275 * @param pGuestFeatures The guest features to use (only VMX features are
1276 * accessed).
1277 * @param pGuestVmxMsrs Where to store the initialized guest VMX MSRs.
1278 *
1279 * @remarks This function ASSUMES the VMX guest-features are already exploded!
1280 */
1281static void cpumR3InitVmxGuestMsrs(PVM pVM, PCVMXMSRS pHostVmxMsrs, PCCPUMFEATURES pGuestFeatures, PVMXMSRS pGuestVmxMsrs)
1282{
1283 bool const fIsNstGstHwExecAllowed = cpumR3IsHwAssistNstGstExecAllowed(pVM);
1284
1285 Assert(!fIsNstGstHwExecAllowed || pHostVmxMsrs);
1286 Assert(pGuestFeatures->fVmx);
1287
1288 /*
1289     * Note! The 'true' VM-execution control MSRs (true pin-based, true
1290     *       processor-based, true VM-entry and true VM-exit controls) are not
1291     *       supported independently yet; with fTrueVmxMsrs set below they are
1292     *       simply derived from their non-true counterparts with a few fixed
1293     *       bits relaxed.
1294 */
1295
1296 /* Basic information. */
1297 uint8_t const fTrueVmxMsrs = 1;
1298 {
1299 uint64_t const u64Basic = RT_BF_MAKE(VMX_BF_BASIC_VMCS_ID, VMX_V_VMCS_REVISION_ID )
1300 | RT_BF_MAKE(VMX_BF_BASIC_VMCS_SIZE, VMX_V_VMCS_SIZE )
1301 | RT_BF_MAKE(VMX_BF_BASIC_PHYSADDR_WIDTH, !pGuestFeatures->fLongMode )
1302 | RT_BF_MAKE(VMX_BF_BASIC_DUAL_MON, 0 )
1303 | RT_BF_MAKE(VMX_BF_BASIC_VMCS_MEM_TYPE, VMX_BASIC_MEM_TYPE_WB )
1304 | RT_BF_MAKE(VMX_BF_BASIC_VMCS_INS_OUTS, pGuestFeatures->fVmxInsOutInfo)
1305 | RT_BF_MAKE(VMX_BF_BASIC_TRUE_CTLS, fTrueVmxMsrs );
1306 pGuestVmxMsrs->u64Basic = u64Basic;
1307 }
1308
1309 /* Pin-based VM-execution controls. */
1310 {
1311 uint32_t const fFeatures = (pGuestFeatures->fVmxExtIntExit << VMX_BF_PIN_CTLS_EXT_INT_EXIT_SHIFT )
1312 | (pGuestFeatures->fVmxNmiExit << VMX_BF_PIN_CTLS_NMI_EXIT_SHIFT )
1313 | (pGuestFeatures->fVmxVirtNmi << VMX_BF_PIN_CTLS_VIRT_NMI_SHIFT )
1314 | (pGuestFeatures->fVmxPreemptTimer << VMX_BF_PIN_CTLS_PREEMPT_TIMER_SHIFT)
1315 | (pGuestFeatures->fVmxPostedInt << VMX_BF_PIN_CTLS_POSTED_INT_SHIFT );
1316 uint32_t const fAllowed0 = VMX_PIN_CTLS_DEFAULT1;
1317 uint32_t const fAllowed1 = fFeatures | VMX_PIN_CTLS_DEFAULT1;
1318 AssertMsg((fAllowed0 & fAllowed1) == fAllowed0, ("fAllowed0=%#RX32 fAllowed1=%#RX32 fFeatures=%#RX32\n",
1319 fAllowed0, fAllowed1, fFeatures));
1320 pGuestVmxMsrs->PinCtls.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1321
1322 /* True pin-based VM-execution controls. */
1323 if (fTrueVmxMsrs)
1324 {
1325            /* VMX_PIN_CTLS_DEFAULT1 contains MB1 (must-be-one) reserved bits; these must remain MB1 in the true pin-based controls as well. */
1326 pGuestVmxMsrs->TruePinCtls.u = pGuestVmxMsrs->PinCtls.u;
1327 }
1328 }
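    /* Illustration (not part of the build): how an allowed-0/allowed-1 pair such as
       PinCtls is meant to be consumed.  A control value is valid iff every allowed-0
       (MB1) bit is set and no bit outside allowed-1 is set.  A hypothetical checker,
       assuming only the VMXCTLSMSR layout used above:

           static bool cpumIsVmxCtlValueValid(VMXCTLSMSR Ctls, uint32_t uVal)
           {
               uint32_t const fAllowed0 = RT_LO_U32(Ctls.u);   // bits that must be 1
               uint32_t const fAllowed1 = RT_HI_U32(Ctls.u);   // bits that may be 1
               return (uVal & fAllowed0) == fAllowed0
                   && !(uVal & ~fAllowed1);
           }
     */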
1329
1330 /* Processor-based VM-execution controls. */
1331 {
1332 uint32_t const fFeatures = (pGuestFeatures->fVmxIntWindowExit << VMX_BF_PROC_CTLS_INT_WINDOW_EXIT_SHIFT )
1333 | (pGuestFeatures->fVmxTscOffsetting << VMX_BF_PROC_CTLS_USE_TSC_OFFSETTING_SHIFT)
1334 | (pGuestFeatures->fVmxHltExit << VMX_BF_PROC_CTLS_HLT_EXIT_SHIFT )
1335 | (pGuestFeatures->fVmxInvlpgExit << VMX_BF_PROC_CTLS_INVLPG_EXIT_SHIFT )
1336 | (pGuestFeatures->fVmxMwaitExit << VMX_BF_PROC_CTLS_MWAIT_EXIT_SHIFT )
1337 | (pGuestFeatures->fVmxRdpmcExit << VMX_BF_PROC_CTLS_RDPMC_EXIT_SHIFT )
1338 | (pGuestFeatures->fVmxRdtscExit << VMX_BF_PROC_CTLS_RDTSC_EXIT_SHIFT )
1339 | (pGuestFeatures->fVmxCr3LoadExit << VMX_BF_PROC_CTLS_CR3_LOAD_EXIT_SHIFT )
1340 | (pGuestFeatures->fVmxCr3StoreExit << VMX_BF_PROC_CTLS_CR3_STORE_EXIT_SHIFT )
1341 | (pGuestFeatures->fVmxTertiaryExecCtls << VMX_BF_PROC_CTLS_USE_TERTIARY_CTLS_SHIFT )
1342 | (pGuestFeatures->fVmxCr8LoadExit << VMX_BF_PROC_CTLS_CR8_LOAD_EXIT_SHIFT )
1343 | (pGuestFeatures->fVmxCr8StoreExit << VMX_BF_PROC_CTLS_CR8_STORE_EXIT_SHIFT )
1344 | (pGuestFeatures->fVmxUseTprShadow << VMX_BF_PROC_CTLS_USE_TPR_SHADOW_SHIFT )
1345 | (pGuestFeatures->fVmxNmiWindowExit << VMX_BF_PROC_CTLS_NMI_WINDOW_EXIT_SHIFT )
1346 | (pGuestFeatures->fVmxMovDRxExit << VMX_BF_PROC_CTLS_MOV_DR_EXIT_SHIFT )
1347 | (pGuestFeatures->fVmxUncondIoExit << VMX_BF_PROC_CTLS_UNCOND_IO_EXIT_SHIFT )
1348 | (pGuestFeatures->fVmxUseIoBitmaps << VMX_BF_PROC_CTLS_USE_IO_BITMAPS_SHIFT )
1349 | (pGuestFeatures->fVmxMonitorTrapFlag << VMX_BF_PROC_CTLS_MONITOR_TRAP_FLAG_SHIFT )
1350 | (pGuestFeatures->fVmxUseMsrBitmaps << VMX_BF_PROC_CTLS_USE_MSR_BITMAPS_SHIFT )
1351 | (pGuestFeatures->fVmxMonitorExit << VMX_BF_PROC_CTLS_MONITOR_EXIT_SHIFT )
1352 | (pGuestFeatures->fVmxPauseExit << VMX_BF_PROC_CTLS_PAUSE_EXIT_SHIFT )
1353 | (pGuestFeatures->fVmxSecondaryExecCtls << VMX_BF_PROC_CTLS_USE_SECONDARY_CTLS_SHIFT);
1354 uint32_t const fAllowed0 = VMX_PROC_CTLS_DEFAULT1;
1355 uint32_t const fAllowed1 = fFeatures | VMX_PROC_CTLS_DEFAULT1;
1356 AssertMsg((fAllowed0 & fAllowed1) == fAllowed0, ("fAllowed0=%#RX32 fAllowed1=%#RX32 fFeatures=%#RX32\n", fAllowed0,
1357 fAllowed1, fFeatures));
1358 pGuestVmxMsrs->ProcCtls.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1359
1360 /* True processor-based VM-execution controls. */
1361 if (fTrueVmxMsrs)
1362 {
1363 /* VMX_PROC_CTLS_DEFAULT1 contains MB1 reserved bits but the following are not really reserved. */
1364 uint32_t const fTrueAllowed0 = VMX_PROC_CTLS_DEFAULT1 & ~( VMX_BF_PROC_CTLS_CR3_LOAD_EXIT_MASK
1365 | VMX_BF_PROC_CTLS_CR3_STORE_EXIT_MASK);
1366 uint32_t const fTrueAllowed1 = fFeatures | fTrueAllowed0;
1367 pGuestVmxMsrs->TrueProcCtls.u = RT_MAKE_U64(fTrueAllowed0, fTrueAllowed1);
1368 }
1369 }
1370
1371 /* Secondary processor-based VM-execution controls. */
1372 if (pGuestFeatures->fVmxSecondaryExecCtls)
1373 {
1374 uint32_t const fFeatures = (pGuestFeatures->fVmxVirtApicAccess << VMX_BF_PROC_CTLS2_VIRT_APIC_ACCESS_SHIFT )
1375 | (pGuestFeatures->fVmxEpt << VMX_BF_PROC_CTLS2_EPT_SHIFT )
1376 | (pGuestFeatures->fVmxDescTableExit << VMX_BF_PROC_CTLS2_DESC_TABLE_EXIT_SHIFT )
1377 | (pGuestFeatures->fVmxRdtscp << VMX_BF_PROC_CTLS2_RDTSCP_SHIFT )
1378 | (pGuestFeatures->fVmxVirtX2ApicMode << VMX_BF_PROC_CTLS2_VIRT_X2APIC_MODE_SHIFT )
1379 | (pGuestFeatures->fVmxVpid << VMX_BF_PROC_CTLS2_VPID_SHIFT )
1380 | (pGuestFeatures->fVmxWbinvdExit << VMX_BF_PROC_CTLS2_WBINVD_EXIT_SHIFT )
1381 | (pGuestFeatures->fVmxUnrestrictedGuest << VMX_BF_PROC_CTLS2_UNRESTRICTED_GUEST_SHIFT )
1382 | (pGuestFeatures->fVmxApicRegVirt << VMX_BF_PROC_CTLS2_APIC_REG_VIRT_SHIFT )
1383 | (pGuestFeatures->fVmxVirtIntDelivery << VMX_BF_PROC_CTLS2_VIRT_INT_DELIVERY_SHIFT )
1384 | (pGuestFeatures->fVmxPauseLoopExit << VMX_BF_PROC_CTLS2_PAUSE_LOOP_EXIT_SHIFT )
1385 | (pGuestFeatures->fVmxRdrandExit << VMX_BF_PROC_CTLS2_RDRAND_EXIT_SHIFT )
1386 | (pGuestFeatures->fVmxInvpcid << VMX_BF_PROC_CTLS2_INVPCID_SHIFT )
1387 | (pGuestFeatures->fVmxVmFunc << VMX_BF_PROC_CTLS2_VMFUNC_SHIFT )
1388 | (pGuestFeatures->fVmxVmcsShadowing << VMX_BF_PROC_CTLS2_VMCS_SHADOWING_SHIFT )
1389 | (pGuestFeatures->fVmxRdseedExit << VMX_BF_PROC_CTLS2_RDSEED_EXIT_SHIFT )
1390 | (pGuestFeatures->fVmxPml << VMX_BF_PROC_CTLS2_PML_SHIFT )
1391 | (pGuestFeatures->fVmxEptXcptVe << VMX_BF_PROC_CTLS2_EPT_VE_SHIFT )
1392 | (pGuestFeatures->fVmxConcealVmxFromPt << VMX_BF_PROC_CTLS2_CONCEAL_VMX_FROM_PT_SHIFT)
1393 | (pGuestFeatures->fVmxXsavesXrstors << VMX_BF_PROC_CTLS2_XSAVES_XRSTORS_SHIFT )
1394 | (pGuestFeatures->fVmxModeBasedExecuteEpt << VMX_BF_PROC_CTLS2_MODE_BASED_EPT_PERM_SHIFT)
1395 | (pGuestFeatures->fVmxSppEpt << VMX_BF_PROC_CTLS2_SPP_EPT_SHIFT )
1396 | (pGuestFeatures->fVmxPtEpt << VMX_BF_PROC_CTLS2_PT_EPT_SHIFT )
1397 | (pGuestFeatures->fVmxUseTscScaling << VMX_BF_PROC_CTLS2_TSC_SCALING_SHIFT )
1398 | (pGuestFeatures->fVmxUserWaitPause << VMX_BF_PROC_CTLS2_USER_WAIT_PAUSE_SHIFT )
1399 | (pGuestFeatures->fVmxEnclvExit << VMX_BF_PROC_CTLS2_ENCLV_EXIT_SHIFT );
1400 uint32_t const fAllowed0 = 0;
1401 uint32_t const fAllowed1 = fFeatures;
1402 pGuestVmxMsrs->ProcCtls2.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1403 }
1404
1405 /* Tertiary processor-based VM-execution controls. */
1406 if (pGuestFeatures->fVmxTertiaryExecCtls)
1407 {
1408 pGuestVmxMsrs->u64ProcCtls3 = (pGuestFeatures->fVmxLoadIwKeyExit << VMX_BF_PROC_CTLS3_LOADIWKEY_EXIT_SHIFT);
1409 }
1410
1411 /* VM-exit controls. */
1412 {
1413 uint32_t const fFeatures = (pGuestFeatures->fVmxExitSaveDebugCtls << VMX_BF_EXIT_CTLS_SAVE_DEBUG_SHIFT )
1414 | (pGuestFeatures->fVmxHostAddrSpaceSize << VMX_BF_EXIT_CTLS_HOST_ADDR_SPACE_SIZE_SHIFT)
1415 | (pGuestFeatures->fVmxExitAckExtInt << VMX_BF_EXIT_CTLS_ACK_EXT_INT_SHIFT )
1416 | (pGuestFeatures->fVmxExitSavePatMsr << VMX_BF_EXIT_CTLS_SAVE_PAT_MSR_SHIFT )
1417 | (pGuestFeatures->fVmxExitLoadPatMsr << VMX_BF_EXIT_CTLS_LOAD_PAT_MSR_SHIFT )
1418 | (pGuestFeatures->fVmxExitSaveEferMsr << VMX_BF_EXIT_CTLS_SAVE_EFER_MSR_SHIFT )
1419 | (pGuestFeatures->fVmxExitLoadEferMsr << VMX_BF_EXIT_CTLS_LOAD_EFER_MSR_SHIFT )
1420 | (pGuestFeatures->fVmxSavePreemptTimer << VMX_BF_EXIT_CTLS_SAVE_PREEMPT_TIMER_SHIFT );
1421 /* Set the default1 class bits. See Intel spec. A.4 "VM-exit Controls". */
1422 uint32_t const fAllowed0 = VMX_EXIT_CTLS_DEFAULT1;
1423 uint32_t const fAllowed1 = fFeatures | VMX_EXIT_CTLS_DEFAULT1;
1424 AssertMsg((fAllowed0 & fAllowed1) == fAllowed0, ("fAllowed0=%#RX32 fAllowed1=%#RX32 fFeatures=%#RX32\n", fAllowed0,
1425 fAllowed1, fFeatures));
1426 pGuestVmxMsrs->ExitCtls.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1427
1428 /* True VM-exit controls. */
1429 if (fTrueVmxMsrs)
1430 {
1431 /* VMX_EXIT_CTLS_DEFAULT1 contains MB1 reserved bits but the following are not really reserved */
1432 uint32_t const fTrueAllowed0 = VMX_EXIT_CTLS_DEFAULT1 & ~VMX_BF_EXIT_CTLS_SAVE_DEBUG_MASK;
1433 uint32_t const fTrueAllowed1 = fFeatures | fTrueAllowed0;
1434 pGuestVmxMsrs->TrueExitCtls.u = RT_MAKE_U64(fTrueAllowed0, fTrueAllowed1);
1435 }
1436 }
1437
1438 /* VM-entry controls. */
1439 {
1440 uint32_t const fFeatures = (pGuestFeatures->fVmxEntryLoadDebugCtls << VMX_BF_ENTRY_CTLS_LOAD_DEBUG_SHIFT )
1441 | (pGuestFeatures->fVmxIa32eModeGuest << VMX_BF_ENTRY_CTLS_IA32E_MODE_GUEST_SHIFT)
1442 | (pGuestFeatures->fVmxEntryLoadEferMsr << VMX_BF_ENTRY_CTLS_LOAD_EFER_MSR_SHIFT )
1443 | (pGuestFeatures->fVmxEntryLoadPatMsr << VMX_BF_ENTRY_CTLS_LOAD_PAT_MSR_SHIFT );
1444 uint32_t const fAllowed0 = VMX_ENTRY_CTLS_DEFAULT1;
1445 uint32_t const fAllowed1 = fFeatures | VMX_ENTRY_CTLS_DEFAULT1;
1446        AssertMsg((fAllowed0 & fAllowed1) == fAllowed0, ("fAllowed0=%#RX32 fAllowed1=%#RX32 fFeatures=%#RX32\n", fAllowed0,
1447 fAllowed1, fFeatures));
1448 pGuestVmxMsrs->EntryCtls.u = RT_MAKE_U64(fAllowed0, fAllowed1);
1449
1450 /* True VM-entry controls. */
1451 if (fTrueVmxMsrs)
1452 {
1453 /* VMX_ENTRY_CTLS_DEFAULT1 contains MB1 reserved bits but the following are not really reserved */
1454 uint32_t const fTrueAllowed0 = VMX_ENTRY_CTLS_DEFAULT1 & ~( VMX_BF_ENTRY_CTLS_LOAD_DEBUG_MASK
1455 | VMX_BF_ENTRY_CTLS_IA32E_MODE_GUEST_MASK
1456 | VMX_BF_ENTRY_CTLS_ENTRY_SMM_MASK
1457 | VMX_BF_ENTRY_CTLS_DEACTIVATE_DUAL_MON_MASK);
1458 uint32_t const fTrueAllowed1 = fFeatures | fTrueAllowed0;
1459 pGuestVmxMsrs->TrueEntryCtls.u = RT_MAKE_U64(fTrueAllowed0, fTrueAllowed1);
1460 }
1461 }
1462
1463 /* Miscellaneous data. */
1464 {
1465 uint64_t const uHostMsr = fIsNstGstHwExecAllowed ? pHostVmxMsrs->u64Misc : 0;
1466
1467 uint8_t const cMaxMsrs = RT_MIN(RT_BF_GET(uHostMsr, VMX_BF_MISC_MAX_MSRS), VMX_V_AUTOMSR_COUNT_MAX);
1468 uint8_t const fActivityState = RT_BF_GET(uHostMsr, VMX_BF_MISC_ACTIVITY_STATES) & VMX_V_GUEST_ACTIVITY_STATE_MASK;
1469 pGuestVmxMsrs->u64Misc = RT_BF_MAKE(VMX_BF_MISC_PREEMPT_TIMER_TSC, VMX_V_PREEMPT_TIMER_SHIFT )
1470 | RT_BF_MAKE(VMX_BF_MISC_EXIT_SAVE_EFER_LMA, pGuestFeatures->fVmxExitSaveEferLma )
1471 | RT_BF_MAKE(VMX_BF_MISC_ACTIVITY_STATES, fActivityState )
1472 | RT_BF_MAKE(VMX_BF_MISC_INTEL_PT, pGuestFeatures->fVmxPt )
1473 | RT_BF_MAKE(VMX_BF_MISC_SMM_READ_SMBASE_MSR, 0 )
1474 | RT_BF_MAKE(VMX_BF_MISC_CR3_TARGET, VMX_V_CR3_TARGET_COUNT )
1475 | RT_BF_MAKE(VMX_BF_MISC_MAX_MSRS, cMaxMsrs )
1476 | RT_BF_MAKE(VMX_BF_MISC_VMXOFF_BLOCK_SMI, 0 )
1477 | RT_BF_MAKE(VMX_BF_MISC_VMWRITE_ALL, pGuestFeatures->fVmxVmwriteAll )
1478 | RT_BF_MAKE(VMX_BF_MISC_ENTRY_INJECT_SOFT_INT, pGuestFeatures->fVmxEntryInjectSoftInt)
1479 | RT_BF_MAKE(VMX_BF_MISC_MSEG_ID, VMX_V_MSEG_REV_ID );
1480 }
1481
1482    /* CR0 Fixed-0 (we report this fixed value regardless of whether unrestricted guest (UX) is supported, as real hardware does). */
1483 pGuestVmxMsrs->u64Cr0Fixed0 = VMX_V_CR0_FIXED0;
1484
1485 /* CR0 Fixed-1. */
1486 {
1487 /*
1488 * All CPUs I've looked at so far report CR0 fixed-1 bits as 0xffffffff.
1489 * This is different from CR4 fixed-1 bits which are reported as per the
1490 * CPU features and/or micro-architecture/generation. Why? Ask Intel.
1491 */
1492 uint64_t const uHostMsr = fIsNstGstHwExecAllowed ? pHostVmxMsrs->u64Cr0Fixed1 : VMX_V_CR0_FIXED1;
1493 pGuestVmxMsrs->u64Cr0Fixed1 = uHostMsr | pGuestVmxMsrs->u64Cr0Fixed0; /* Make sure the CR0 MB1 bits are not clear. */
1494 }
1495
1496 /* CR4 Fixed-0. */
1497 pGuestVmxMsrs->u64Cr4Fixed0 = VMX_V_CR4_FIXED0;
1498
1499 /* CR4 Fixed-1. */
1500 {
1501 uint64_t const uHostMsr = fIsNstGstHwExecAllowed ? pHostVmxMsrs->u64Cr4Fixed1 : CPUMGetGuestCR4ValidMask(pVM);
1502 pGuestVmxMsrs->u64Cr4Fixed1 = uHostMsr | pGuestVmxMsrs->u64Cr4Fixed0; /* Make sure the CR4 MB1 bits are not clear. */
1503 }
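    /* Illustration (not part of the build): the fixed MSRs constrain what a guest may
       load into CR0/CR4 while in VMX operation.  A sketch of the validity test, using
       the values initialized above (uCr0 being a hypothetical candidate value):

           bool const fValidCr0 = (uCr0 & pGuestVmxMsrs->u64Cr0Fixed0) == pGuestVmxMsrs->u64Cr0Fixed0
                               && !(uCr0 & ~pGuestVmxMsrs->u64Cr0Fixed1);

       With unrestricted guest enabled, CR0.PE and CR0.PG are exempt from the fixed-0
       check.  See Intel spec. A.7 "VMX-Fixed Bits in CR0" and A.8 "VMX-Fixed Bits in CR4". */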
1504
1505 /* VMCS Enumeration. */
1506 pGuestVmxMsrs->u64VmcsEnum = VMX_V_VMCS_MAX_INDEX << VMX_BF_VMCS_ENUM_HIGHEST_IDX_SHIFT;
1507
1508 /* VPID and EPT Capabilities. */
1509 if (pGuestFeatures->fVmxEpt)
1510 {
1511 /*
1512         * The INVVPID instruction always causes a VM-exit, so we are free to fake and
1513         * emulate any INVVPID flush type.  However, it only makes sense to expose the
1514         * flush types when the INVVPID instruction itself is supported, to stay compatible
1515         * with guest hypervisors that may draw conclusions from this MSR alone even though
1516         * they are technically supposed to check VMX_PROC_CTLS2_VPID first.
1517 *
1518 * See Intel spec. 25.1.2 "Instructions That Cause VM Exits Unconditionally".
1519 * See Intel spec. 30.3 "VMX Instructions".
1520 */
1521 uint64_t const uHostMsr = fIsNstGstHwExecAllowed ? pHostVmxMsrs->u64EptVpidCaps : UINT64_MAX;
1522 uint8_t const fVpid = pGuestFeatures->fVmxVpid;
1523
1524 uint8_t const fExecOnly = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_EXEC_ONLY);
1525 uint8_t const fPml4 = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_PAGE_WALK_LENGTH_4);
1526 uint8_t const fMemTypeUc = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_MEMTYPE_UC);
1527 uint8_t const fMemTypeWb = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_MEMTYPE_WB);
1528 uint8_t const f2MPage = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_PDE_2M);
1529 uint8_t const f1GPage = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_PDPTE_1G);
1530 uint8_t const fInvept = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVEPT);
1531 /** @todo Nested VMX: Support accessed/dirty bits, see @bugref{10092#c25}. */
1532 /* uint8_t const fAccessDirty = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_ACCESS_DIRTY); */
1533 uint8_t const fEptSingle = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVEPT_SINGLE_CTX);
1534 uint8_t const fEptAll = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVEPT_ALL_CTX);
1535 uint8_t const fVpidIndiv = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVVPID_INDIV_ADDR);
1536 uint8_t const fVpidSingle = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVVPID_SINGLE_CTX);
1537 uint8_t const fVpidAll = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVVPID_ALL_CTX);
1538 uint8_t const fVpidSingleGlobal = RT_BF_GET(uHostMsr, VMX_BF_EPT_VPID_CAP_INVVPID_SINGLE_CTX_RETAIN_GLOBALS);
1539 pGuestVmxMsrs->u64EptVpidCaps = RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_EXEC_ONLY, fExecOnly)
1540 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_PAGE_WALK_LENGTH_4, fPml4)
1541 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_MEMTYPE_UC, fMemTypeUc)
1542 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_MEMTYPE_WB, fMemTypeWb)
1543 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_PDE_2M, f2MPage)
1544 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_PDPTE_1G, f1GPage)
1545 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVEPT, fInvept)
1546 //| RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_ACCESS_DIRTY, 0)
1547 //| RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_ADVEXITINFO_EPT_VIOLATION, 0)
1548 //| RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_SUPER_SHW_STACK, 0)
1549 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVEPT_SINGLE_CTX, fEptSingle)
1550 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVEPT_ALL_CTX, fEptAll)
1551 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID, fVpid)
1552 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID_INDIV_ADDR, fVpid & fVpidIndiv)
1553 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID_SINGLE_CTX, fVpid & fVpidSingle)
1554 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID_ALL_CTX, fVpid & fVpidAll)
1555 | RT_BF_MAKE(VMX_BF_EPT_VPID_CAP_INVVPID_SINGLE_CTX_RETAIN_GLOBALS, fVpid & fVpidSingleGlobal);
1556 }
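    /* Illustration (not part of the build): the probing order a guest hypervisor is
       expected to follow, rather than reading the capability MSR alone (pseudocode):

           if (RT_HI_U32(ProcCtls2.u) & VMX_PROC_CTLS2_VPID)   // allowed-1 half
               uEptVpidCaps = RDMSR(MSR_IA32_VMX_EPT_VPID_CAP);
     */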
1557
1558 /* VM Functions. */
1559 if (pGuestFeatures->fVmxVmFunc)
1560 pGuestVmxMsrs->u64VmFunc = RT_BF_MAKE(VMX_BF_VMFUNC_EPTP_SWITCHING, 1);
1561}
1562
1563
1564/**
1565 * Checks whether the given guest CPU VMX features are compatible with the provided
1566 * base features.
1567 *
1568 * @returns @c true if compatible, @c false otherwise.
1569 * @param pVM The cross context VM structure.
1570 * @param pBase The base VMX CPU features.
1571 * @param pGst The guest VMX CPU features.
1572 *
1573 * @remarks Only VMX feature bits are examined.
1574 */
1575static bool cpumR3AreVmxCpuFeaturesCompatible(PVM pVM, PCCPUMFEATURES pBase, PCCPUMFEATURES pGst)
1576{
1577 if (!cpumR3IsHwAssistNstGstExecAllowed(pVM))
1578 return false;
1579
1580#define CPUM_VMX_FEAT_SHIFT(a_pFeat, a_FeatName, a_cShift) ((uint64_t)(a_pFeat->a_FeatName) << (a_cShift))
1581#define CPUM_VMX_MAKE_FEATURES_1(a_pFeat) ( CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxInsOutInfo , 0) \
1582 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExtIntExit , 1) \
1583 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxNmiExit , 2) \
1584 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVirtNmi , 3) \
1585 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPreemptTimer , 4) \
1586 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPostedInt , 5) \
1587 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxIntWindowExit , 6) \
1588 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxTscOffsetting , 7) \
1589 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxHltExit , 8) \
1590 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxInvlpgExit , 9) \
1591 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxMwaitExit , 10) \
1592 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxRdpmcExit , 12) \
1593 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxRdtscExit , 13) \
1594 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxCr3LoadExit , 14) \
1595 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxCr3StoreExit , 15) \
1596 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxTertiaryExecCtls , 16) \
1597 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxCr8LoadExit , 17) \
1598 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxCr8StoreExit , 18) \
1599 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUseTprShadow , 19) \
1600 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxNmiWindowExit , 20) \
1601 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxMovDRxExit , 21) \
1602 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUncondIoExit , 22) \
1603 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUseIoBitmaps , 23) \
1604 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxMonitorTrapFlag , 24) \
1605 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUseMsrBitmaps , 25) \
1606 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxMonitorExit , 26) \
1607 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPauseExit , 27) \
1608 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxSecondaryExecCtls , 28) \
1609 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVirtApicAccess , 29) \
1610 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEpt , 30) \
1611 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxDescTableExit , 31) \
1612 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxRdtscp , 32) \
1613 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVirtX2ApicMode , 33) \
1614 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVpid , 34) \
1615 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxWbinvdExit , 35) \
1616 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUnrestrictedGuest , 36) \
1617 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxApicRegVirt , 37) \
1618 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVirtIntDelivery , 38) \
1619 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPauseLoopExit , 39) \
1620 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxRdrandExit , 40) \
1621 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxInvpcid , 41) \
1622 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVmFunc , 42) \
1623 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVmcsShadowing , 43) \
1624 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxRdseedExit , 44) \
1625 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPml , 45) \
1626 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEptXcptVe , 46) \
1627 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxConcealVmxFromPt , 47) \
1628 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxXsavesXrstors , 48) \
1629 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxModeBasedExecuteEpt, 49) \
1630 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxSppEpt , 50) \
1631 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPtEpt , 51) \
1632 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUseTscScaling , 52) \
1633 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxUserWaitPause , 53) \
1634 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEnclvExit , 54) \
1635 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxLoadIwKeyExit , 55) \
1636 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEntryLoadDebugCtls , 56) \
1637 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxIa32eModeGuest , 57) \
1638 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEntryLoadEferMsr , 58) \
1639 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEntryLoadPatMsr , 59) \
1640 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitSaveDebugCtls , 60) \
1641 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxHostAddrSpaceSize , 61) \
1642 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitAckExtInt , 62) \
1643 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitSavePatMsr , 63))
1644
1645#define CPUM_VMX_MAKE_FEATURES_2(a_pFeat) ( CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitLoadPatMsr , 0) \
1646 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitSaveEferMsr , 1) \
1647 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitLoadEferMsr , 2) \
1648 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxSavePreemptTimer , 3) \
1649 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxExitSaveEferLma , 4) \
1650 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxPt , 5) \
1651 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxVmwriteAll , 6) \
1652 | CPUM_VMX_FEAT_SHIFT(a_pFeat, fVmxEntryInjectSoftInt , 7))
1653
1654 /* Check first set of feature bits. */
1655 {
1656 uint64_t const fBase = CPUM_VMX_MAKE_FEATURES_1(pBase);
1657 uint64_t const fGst = CPUM_VMX_MAKE_FEATURES_1(pGst);
1658 if ((fBase | fGst) != fBase)
1659 {
1660 uint64_t const fDiff = fBase ^ fGst;
1661 LogRel(("CPUM: VMX features (1) now exposed to the guest are incompatible with those from the saved state. fBase=%#RX64 fGst=%#RX64 fDiff=%#RX64\n",
1662 fBase, fGst, fDiff));
1663 return false;
1664 }
1665 }
1666
1667 /* Check second set of feature bits. */
1668 {
1669 uint64_t const fBase = CPUM_VMX_MAKE_FEATURES_2(pBase);
1670 uint64_t const fGst = CPUM_VMX_MAKE_FEATURES_2(pGst);
1671 if ((fBase | fGst) != fBase)
1672 {
1673 uint64_t const fDiff = fBase ^ fGst;
1674 LogRel(("CPUM: VMX features (2) now exposed to the guest are incompatible with those from the saved state. fBase=%#RX64 fGst=%#RX64 fDiff=%#RX64\n",
1675 fBase, fGst, fDiff));
1676 return false;
1677 }
1678 }
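    /* Worked example of the subset test above: with fBase = 0b0111 and fGst = 0b0101,
       (fBase | fGst) == fBase, so every guest feature is present in the base set and
       the state is compatible.  With fGst = 0b1001 the OR yields 0b1111 != fBase and
       fDiff = fBase ^ fGst = 0b1110 flags all differing bits, including the offending
       one. */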
1679#undef CPUM_VMX_FEAT_SHIFT
1680#undef CPUM_VMX_MAKE_FEATURES_1
1681#undef CPUM_VMX_MAKE_FEATURES_2
1682
1683 return true;
1684}
1685
1686
1687/**
1688 * Initializes VMX guest features and MSRs.
1689 *
1690 * @param pVM The cross context VM structure.
1691 * @param pHostVmxMsrs The host VMX MSRs. Pass NULL when fully emulating VMX
1692 * and no hardware-assisted nested-guest execution is
1693 * possible for this VM.
1694 * @param pGuestVmxMsrs Where to store the initialized guest VMX MSRs.
1695 */
1696void cpumR3InitVmxGuestFeaturesAndMsrs(PVM pVM, PCVMXMSRS pHostVmxMsrs, PVMXMSRS pGuestVmxMsrs)
1697{
1698 Assert(pVM);
1699 Assert(pGuestVmxMsrs);
1700
1701 /*
1702     * It would be nice to check this earlier, while initializing fNestedVmxEpt, but
1703     * host features have not been enumerated at that point, so do it now at the latest.
1704 */
1705 if ( !pVM->cpum.s.HostFeatures.fNoExecute
1706 && pVM->cpum.s.fNestedVmxEpt)
1707 {
1708 LogRel(("CPUM: Warning! EPT not exposed to the guest since NX isn't available on the host.\n"));
1709 pVM->cpum.s.fNestedVmxEpt = false;
1710 pVM->cpum.s.fNestedVmxUnrestrictedGuest = false;
1711 }
1712
1713 /*
1714 * Initialize the set of VMX features we emulate.
1715 *
1716     * Note! Some bits might always be reported as 1 if they fall under the
1717 * default1 class bits (e.g. fVmxEntryLoadDebugCtls), see @bugref{9180#c5}.
1718 */
1719 CPUMFEATURES EmuFeat;
1720 RT_ZERO(EmuFeat);
1721 EmuFeat.fVmx = 1;
1722 EmuFeat.fVmxInsOutInfo = 1;
1723 EmuFeat.fVmxExtIntExit = 1;
1724 EmuFeat.fVmxNmiExit = 1;
1725 EmuFeat.fVmxVirtNmi = 1;
1726 EmuFeat.fVmxPreemptTimer = pVM->cpum.s.fNestedVmxPreemptTimer;
1727 EmuFeat.fVmxPostedInt = 0;
1728 EmuFeat.fVmxIntWindowExit = 1;
1729 EmuFeat.fVmxTscOffsetting = 1;
1730 EmuFeat.fVmxHltExit = 1;
1731 EmuFeat.fVmxInvlpgExit = 1;
1732 EmuFeat.fVmxMwaitExit = 1;
1733 EmuFeat.fVmxRdpmcExit = 1;
1734 EmuFeat.fVmxRdtscExit = 1;
1735 EmuFeat.fVmxCr3LoadExit = 1;
1736 EmuFeat.fVmxCr3StoreExit = 1;
1737 EmuFeat.fVmxTertiaryExecCtls = 0;
1738 EmuFeat.fVmxCr8LoadExit = 1;
1739 EmuFeat.fVmxCr8StoreExit = 1;
1740 EmuFeat.fVmxUseTprShadow = 1;
1741 EmuFeat.fVmxNmiWindowExit = 0;
1742 EmuFeat.fVmxMovDRxExit = 1;
1743 EmuFeat.fVmxUncondIoExit = 1;
1744 EmuFeat.fVmxUseIoBitmaps = 1;
1745 EmuFeat.fVmxMonitorTrapFlag = 0;
1746 EmuFeat.fVmxUseMsrBitmaps = 1;
1747 EmuFeat.fVmxMonitorExit = 1;
1748 EmuFeat.fVmxPauseExit = 1;
1749 EmuFeat.fVmxSecondaryExecCtls = 1;
1750 EmuFeat.fVmxVirtApicAccess = 1;
1751 EmuFeat.fVmxEpt = pVM->cpum.s.fNestedVmxEpt;
1752 EmuFeat.fVmxDescTableExit = 1;
1753 EmuFeat.fVmxRdtscp = 1;
1754 EmuFeat.fVmxVirtX2ApicMode = 0;
1755 EmuFeat.fVmxVpid = 0; /** @todo Consider enabling this when EPT works. */
1756 EmuFeat.fVmxWbinvdExit = 1;
1757 EmuFeat.fVmxUnrestrictedGuest = pVM->cpum.s.fNestedVmxUnrestrictedGuest;
1758 EmuFeat.fVmxApicRegVirt = 0;
1759 EmuFeat.fVmxVirtIntDelivery = 0;
1760 EmuFeat.fVmxPauseLoopExit = 0;
1761 EmuFeat.fVmxRdrandExit = 0;
1762 EmuFeat.fVmxInvpcid = 1;
1763 EmuFeat.fVmxVmFunc = 0;
1764 EmuFeat.fVmxVmcsShadowing = 0;
1765 EmuFeat.fVmxRdseedExit = 0;
1766 EmuFeat.fVmxPml = 0;
1767 EmuFeat.fVmxEptXcptVe = 0;
1768 EmuFeat.fVmxConcealVmxFromPt = 0;
1769 EmuFeat.fVmxXsavesXrstors = 0;
1770 EmuFeat.fVmxModeBasedExecuteEpt = 0;
1771 EmuFeat.fVmxSppEpt = 0;
1772 EmuFeat.fVmxPtEpt = 0;
1773 EmuFeat.fVmxUseTscScaling = 0;
1774 EmuFeat.fVmxUserWaitPause = 0;
1775 EmuFeat.fVmxEnclvExit = 0;
1776 EmuFeat.fVmxLoadIwKeyExit = 0;
1777 EmuFeat.fVmxEntryLoadDebugCtls = 1;
1778 EmuFeat.fVmxIa32eModeGuest = 1;
1779 EmuFeat.fVmxEntryLoadEferMsr = 1;
1780 EmuFeat.fVmxEntryLoadPatMsr = 0;
1781 EmuFeat.fVmxExitSaveDebugCtls = 1;
1782 EmuFeat.fVmxHostAddrSpaceSize = 1;
1783 EmuFeat.fVmxExitAckExtInt = 1;
1784 EmuFeat.fVmxExitSavePatMsr = 0;
1785 EmuFeat.fVmxExitLoadPatMsr = 0;
1786 EmuFeat.fVmxExitSaveEferMsr = 1;
1787 EmuFeat.fVmxExitLoadEferMsr = 1;
1788 EmuFeat.fVmxSavePreemptTimer = 0; /* Cannot be enabled if VMX-preemption timer is disabled. */
1789 EmuFeat.fVmxExitSaveEferLma = 1; /* Cannot be disabled if unrestricted guest is enabled. */
1790 EmuFeat.fVmxPt = 0;
1791 EmuFeat.fVmxVmwriteAll = 0; /** @todo NSTVMX: enable this when nested VMCS shadowing is enabled. */
1792 EmuFeat.fVmxEntryInjectSoftInt = 1;
1793
1794 /*
1795 * Merge guest features.
1796 *
1797 * When hardware-assisted VMX may be used, any feature we emulate must also be supported
1798 * by the hardware, hence we merge our emulated features with the host features below.
1799 */
1800 PCCPUMFEATURES pBaseFeat = cpumR3IsHwAssistNstGstExecAllowed(pVM) ? &pVM->cpum.s.HostFeatures : &EmuFeat;
1801 PCPUMFEATURES pGuestFeat = &pVM->cpum.s.GuestFeatures;
1802 Assert(pBaseFeat->fVmx);
1803 pGuestFeat->fVmxInsOutInfo = (pBaseFeat->fVmxInsOutInfo & EmuFeat.fVmxInsOutInfo );
1804 pGuestFeat->fVmxExtIntExit = (pBaseFeat->fVmxExtIntExit & EmuFeat.fVmxExtIntExit );
1805 pGuestFeat->fVmxNmiExit = (pBaseFeat->fVmxNmiExit & EmuFeat.fVmxNmiExit );
1806 pGuestFeat->fVmxVirtNmi = (pBaseFeat->fVmxVirtNmi & EmuFeat.fVmxVirtNmi );
1807 pGuestFeat->fVmxPreemptTimer = (pBaseFeat->fVmxPreemptTimer & EmuFeat.fVmxPreemptTimer );
1808 pGuestFeat->fVmxPostedInt = (pBaseFeat->fVmxPostedInt & EmuFeat.fVmxPostedInt );
1809 pGuestFeat->fVmxIntWindowExit = (pBaseFeat->fVmxIntWindowExit & EmuFeat.fVmxIntWindowExit );
1810 pGuestFeat->fVmxTscOffsetting = (pBaseFeat->fVmxTscOffsetting & EmuFeat.fVmxTscOffsetting );
1811 pGuestFeat->fVmxHltExit = (pBaseFeat->fVmxHltExit & EmuFeat.fVmxHltExit );
1812 pGuestFeat->fVmxInvlpgExit = (pBaseFeat->fVmxInvlpgExit & EmuFeat.fVmxInvlpgExit );
1813 pGuestFeat->fVmxMwaitExit = (pBaseFeat->fVmxMwaitExit & EmuFeat.fVmxMwaitExit );
1814 pGuestFeat->fVmxRdpmcExit = (pBaseFeat->fVmxRdpmcExit & EmuFeat.fVmxRdpmcExit );
1815 pGuestFeat->fVmxRdtscExit = (pBaseFeat->fVmxRdtscExit & EmuFeat.fVmxRdtscExit );
1816 pGuestFeat->fVmxCr3LoadExit = (pBaseFeat->fVmxCr3LoadExit & EmuFeat.fVmxCr3LoadExit );
1817 pGuestFeat->fVmxCr3StoreExit = (pBaseFeat->fVmxCr3StoreExit & EmuFeat.fVmxCr3StoreExit );
1818 pGuestFeat->fVmxTertiaryExecCtls = (pBaseFeat->fVmxTertiaryExecCtls & EmuFeat.fVmxTertiaryExecCtls );
1819 pGuestFeat->fVmxCr8LoadExit = (pBaseFeat->fVmxCr8LoadExit & EmuFeat.fVmxCr8LoadExit );
1820 pGuestFeat->fVmxCr8StoreExit = (pBaseFeat->fVmxCr8StoreExit & EmuFeat.fVmxCr8StoreExit );
1821 pGuestFeat->fVmxUseTprShadow = (pBaseFeat->fVmxUseTprShadow & EmuFeat.fVmxUseTprShadow );
1822 pGuestFeat->fVmxNmiWindowExit = (pBaseFeat->fVmxNmiWindowExit & EmuFeat.fVmxNmiWindowExit );
1823 pGuestFeat->fVmxMovDRxExit = (pBaseFeat->fVmxMovDRxExit & EmuFeat.fVmxMovDRxExit );
1824 pGuestFeat->fVmxUncondIoExit = (pBaseFeat->fVmxUncondIoExit & EmuFeat.fVmxUncondIoExit );
1825 pGuestFeat->fVmxUseIoBitmaps = (pBaseFeat->fVmxUseIoBitmaps & EmuFeat.fVmxUseIoBitmaps );
1826 pGuestFeat->fVmxMonitorTrapFlag = (pBaseFeat->fVmxMonitorTrapFlag & EmuFeat.fVmxMonitorTrapFlag );
1827 pGuestFeat->fVmxUseMsrBitmaps = (pBaseFeat->fVmxUseMsrBitmaps & EmuFeat.fVmxUseMsrBitmaps );
1828 pGuestFeat->fVmxMonitorExit = (pBaseFeat->fVmxMonitorExit & EmuFeat.fVmxMonitorExit );
1829 pGuestFeat->fVmxPauseExit = (pBaseFeat->fVmxPauseExit & EmuFeat.fVmxPauseExit );
1830 pGuestFeat->fVmxSecondaryExecCtls = (pBaseFeat->fVmxSecondaryExecCtls & EmuFeat.fVmxSecondaryExecCtls );
1831 pGuestFeat->fVmxVirtApicAccess = (pBaseFeat->fVmxVirtApicAccess & EmuFeat.fVmxVirtApicAccess );
1832 pGuestFeat->fVmxEpt = (pBaseFeat->fVmxEpt & EmuFeat.fVmxEpt );
1833 pGuestFeat->fVmxDescTableExit = (pBaseFeat->fVmxDescTableExit & EmuFeat.fVmxDescTableExit );
1834 pGuestFeat->fVmxRdtscp = (pBaseFeat->fVmxRdtscp & EmuFeat.fVmxRdtscp );
1835 pGuestFeat->fVmxVirtX2ApicMode = (pBaseFeat->fVmxVirtX2ApicMode & EmuFeat.fVmxVirtX2ApicMode );
1836 pGuestFeat->fVmxVpid = (pBaseFeat->fVmxVpid & EmuFeat.fVmxVpid );
1837 pGuestFeat->fVmxWbinvdExit = (pBaseFeat->fVmxWbinvdExit & EmuFeat.fVmxWbinvdExit );
1838 pGuestFeat->fVmxUnrestrictedGuest = (pBaseFeat->fVmxUnrestrictedGuest & EmuFeat.fVmxUnrestrictedGuest );
1839 pGuestFeat->fVmxApicRegVirt = (pBaseFeat->fVmxApicRegVirt & EmuFeat.fVmxApicRegVirt );
1840 pGuestFeat->fVmxVirtIntDelivery = (pBaseFeat->fVmxVirtIntDelivery & EmuFeat.fVmxVirtIntDelivery );
1841 pGuestFeat->fVmxPauseLoopExit = (pBaseFeat->fVmxPauseLoopExit & EmuFeat.fVmxPauseLoopExit );
1842 pGuestFeat->fVmxRdrandExit = (pBaseFeat->fVmxRdrandExit & EmuFeat.fVmxRdrandExit );
1843 pGuestFeat->fVmxInvpcid = (pBaseFeat->fVmxInvpcid & EmuFeat.fVmxInvpcid );
1844 pGuestFeat->fVmxVmFunc = (pBaseFeat->fVmxVmFunc & EmuFeat.fVmxVmFunc );
1845 pGuestFeat->fVmxVmcsShadowing = (pBaseFeat->fVmxVmcsShadowing & EmuFeat.fVmxVmcsShadowing );
1846 pGuestFeat->fVmxRdseedExit = (pBaseFeat->fVmxRdseedExit & EmuFeat.fVmxRdseedExit );
1847 pGuestFeat->fVmxPml = (pBaseFeat->fVmxPml & EmuFeat.fVmxPml );
1848 pGuestFeat->fVmxEptXcptVe = (pBaseFeat->fVmxEptXcptVe & EmuFeat.fVmxEptXcptVe );
1849 pGuestFeat->fVmxConcealVmxFromPt = (pBaseFeat->fVmxConcealVmxFromPt & EmuFeat.fVmxConcealVmxFromPt );
1850 pGuestFeat->fVmxXsavesXrstors = (pBaseFeat->fVmxXsavesXrstors & EmuFeat.fVmxXsavesXrstors );
1851 pGuestFeat->fVmxModeBasedExecuteEpt = (pBaseFeat->fVmxModeBasedExecuteEpt & EmuFeat.fVmxModeBasedExecuteEpt );
1852 pGuestFeat->fVmxSppEpt = (pBaseFeat->fVmxSppEpt & EmuFeat.fVmxSppEpt );
1853 pGuestFeat->fVmxPtEpt = (pBaseFeat->fVmxPtEpt & EmuFeat.fVmxPtEpt );
1854 pGuestFeat->fVmxUseTscScaling = (pBaseFeat->fVmxUseTscScaling & EmuFeat.fVmxUseTscScaling );
1855 pGuestFeat->fVmxUserWaitPause = (pBaseFeat->fVmxUserWaitPause & EmuFeat.fVmxUserWaitPause );
1856 pGuestFeat->fVmxEnclvExit = (pBaseFeat->fVmxEnclvExit & EmuFeat.fVmxEnclvExit );
1857 pGuestFeat->fVmxLoadIwKeyExit = (pBaseFeat->fVmxLoadIwKeyExit & EmuFeat.fVmxLoadIwKeyExit );
1858 pGuestFeat->fVmxEntryLoadDebugCtls = (pBaseFeat->fVmxEntryLoadDebugCtls & EmuFeat.fVmxEntryLoadDebugCtls );
1859 pGuestFeat->fVmxIa32eModeGuest = (pBaseFeat->fVmxIa32eModeGuest & EmuFeat.fVmxIa32eModeGuest );
1860 pGuestFeat->fVmxEntryLoadEferMsr = (pBaseFeat->fVmxEntryLoadEferMsr & EmuFeat.fVmxEntryLoadEferMsr );
1861 pGuestFeat->fVmxEntryLoadPatMsr = (pBaseFeat->fVmxEntryLoadPatMsr & EmuFeat.fVmxEntryLoadPatMsr );
1862 pGuestFeat->fVmxExitSaveDebugCtls = (pBaseFeat->fVmxExitSaveDebugCtls & EmuFeat.fVmxExitSaveDebugCtls );
1863 pGuestFeat->fVmxHostAddrSpaceSize = (pBaseFeat->fVmxHostAddrSpaceSize & EmuFeat.fVmxHostAddrSpaceSize );
1864 pGuestFeat->fVmxExitAckExtInt = (pBaseFeat->fVmxExitAckExtInt & EmuFeat.fVmxExitAckExtInt );
1865 pGuestFeat->fVmxExitSavePatMsr = (pBaseFeat->fVmxExitSavePatMsr & EmuFeat.fVmxExitSavePatMsr );
1866 pGuestFeat->fVmxExitLoadPatMsr = (pBaseFeat->fVmxExitLoadPatMsr & EmuFeat.fVmxExitLoadPatMsr );
1867 pGuestFeat->fVmxExitSaveEferMsr = (pBaseFeat->fVmxExitSaveEferMsr & EmuFeat.fVmxExitSaveEferMsr );
1868 pGuestFeat->fVmxExitLoadEferMsr = (pBaseFeat->fVmxExitLoadEferMsr & EmuFeat.fVmxExitLoadEferMsr );
1869 pGuestFeat->fVmxSavePreemptTimer = (pBaseFeat->fVmxSavePreemptTimer & EmuFeat.fVmxSavePreemptTimer );
1870 pGuestFeat->fVmxExitSaveEferLma = (pBaseFeat->fVmxExitSaveEferLma & EmuFeat.fVmxExitSaveEferLma );
1871 pGuestFeat->fVmxPt = (pBaseFeat->fVmxPt & EmuFeat.fVmxPt );
1872 pGuestFeat->fVmxVmwriteAll = (pBaseFeat->fVmxVmwriteAll & EmuFeat.fVmxVmwriteAll );
1873 pGuestFeat->fVmxEntryInjectSoftInt = (pBaseFeat->fVmxEntryInjectSoftInt & EmuFeat.fVmxEntryInjectSoftInt );
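    /* Each guest feature is thus the conjunction of what CPUM emulates and what the
       base feature set provides; e.g. fVmxPostedInt remains 0 even on hosts supporting
       posted interrupts because EmuFeat.fVmxPostedInt is 0 above. */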
1874
1875 /* Don't expose VMX preemption timer if host is subject to VMX-preemption timer erratum. */
1876 if ( pGuestFeat->fVmxPreemptTimer
1877 && HMIsSubjectToVmxPreemptTimerErratum())
1878 {
1879 LogRel(("CPUM: Warning! VMX-preemption timer not exposed to guest due to host CPU erratum.\n"));
1880 pGuestFeat->fVmxPreemptTimer = 0;
1881 pGuestFeat->fVmxSavePreemptTimer = 0;
1882 }
1883
1884 /* Sanity checking. */
1885 if (!pGuestFeat->fVmxSecondaryExecCtls)
1886 {
1887 Assert(!pGuestFeat->fVmxVirtApicAccess);
1888 Assert(!pGuestFeat->fVmxEpt);
1889 Assert(!pGuestFeat->fVmxDescTableExit);
1890 Assert(!pGuestFeat->fVmxRdtscp);
1891 Assert(!pGuestFeat->fVmxVirtX2ApicMode);
1892 Assert(!pGuestFeat->fVmxVpid);
1893 Assert(!pGuestFeat->fVmxWbinvdExit);
1894 Assert(!pGuestFeat->fVmxUnrestrictedGuest);
1895 Assert(!pGuestFeat->fVmxApicRegVirt);
1896 Assert(!pGuestFeat->fVmxVirtIntDelivery);
1897 Assert(!pGuestFeat->fVmxPauseLoopExit);
1898 Assert(!pGuestFeat->fVmxRdrandExit);
1899 Assert(!pGuestFeat->fVmxInvpcid);
1900 Assert(!pGuestFeat->fVmxVmFunc);
1901 Assert(!pGuestFeat->fVmxVmcsShadowing);
1902 Assert(!pGuestFeat->fVmxRdseedExit);
1903 Assert(!pGuestFeat->fVmxPml);
1904 Assert(!pGuestFeat->fVmxEptXcptVe);
1905 Assert(!pGuestFeat->fVmxConcealVmxFromPt);
1906 Assert(!pGuestFeat->fVmxXsavesXrstors);
1907 Assert(!pGuestFeat->fVmxModeBasedExecuteEpt);
1908 Assert(!pGuestFeat->fVmxSppEpt);
1909 Assert(!pGuestFeat->fVmxPtEpt);
1910 Assert(!pGuestFeat->fVmxUseTscScaling);
1911 Assert(!pGuestFeat->fVmxUserWaitPause);
1912 Assert(!pGuestFeat->fVmxEnclvExit);
1913 }
1914 else if (pGuestFeat->fVmxUnrestrictedGuest)
1915 {
1916 /* See footnote in Intel spec. 27.2 "Recording VM-Exit Information And Updating VM-entry Control Fields". */
1917 Assert(pGuestFeat->fVmxExitSaveEferLma);
1918 /* Unrestricted guest execution requires EPT. See Intel spec. 25.2.1.1 "VM-Execution Control Fields". */
1919 Assert(pGuestFeat->fVmxEpt);
1920 }
1921
1922 if (!pGuestFeat->fVmxTertiaryExecCtls)
1923 Assert(!pGuestFeat->fVmxLoadIwKeyExit);
1924
1925 /*
1926 * Finally initialize the VMX guest MSRs.
1927 */
1928 cpumR3InitVmxGuestMsrs(pVM, pHostVmxMsrs, pGuestFeat, pGuestVmxMsrs);
1929}
1930
1931
1932/**
1933 * Gets the host hardware-virtualization MSRs.
1934 *
1935 * @returns VBox status code.
1936 * @param pMsrs Where to store the MSRs.
1937 */
1938static int cpumR3GetHostHwvirtMsrs(PCPUMMSRS pMsrs)
1939{
1940 Assert(pMsrs);
1941
1942 uint32_t fCaps = 0;
1943 int rc = SUPR3QueryVTCaps(&fCaps);
1944 if (RT_SUCCESS(rc))
1945 {
1946 if (fCaps & (SUPVTCAPS_VT_X | SUPVTCAPS_AMD_V))
1947 {
1948 SUPHWVIRTMSRS HwvirtMsrs;
1949 rc = SUPR3GetHwvirtMsrs(&HwvirtMsrs, false /* fForceRequery */);
1950 if (RT_SUCCESS(rc))
1951 {
1952 if (fCaps & SUPVTCAPS_VT_X)
1953 HMGetVmxMsrsFromHwvirtMsrs(&HwvirtMsrs, &pMsrs->hwvirt.vmx);
1954 else
1955 HMGetSvmMsrsFromHwvirtMsrs(&HwvirtMsrs, &pMsrs->hwvirt.svm);
1956 return VINF_SUCCESS;
1957 }
1958
1959 LogRel(("CPUM: Querying hardware-virtualization MSRs failed. rc=%Rrc\n", rc));
1960 return rc;
1961 }
1962
1963 LogRel(("CPUM: Querying hardware-virtualization capability succeeded but did not find VT-x or AMD-V\n"));
1964 return VERR_INTERNAL_ERROR_5;
1965 }
1966 LogRel(("CPUM: No hardware-virtualization capability detected\n"));
1967 return VINF_SUCCESS;
1968}
1969
1970
1971/**
1972 * @callback_method_impl{FNTMTIMERINT,
1973 * Callback that fires when the nested VMX-preemption timer expires.}
1974 */
1975static DECLCALLBACK(void) cpumR3VmxPreemptTimerCallback(PVM pVM, TMTIMERHANDLE hTimer, void *pvUser)
1976{
1977 RT_NOREF(pVM, hTimer);
1978 PVMCPU pVCpu = (PVMCPUR3)pvUser;
1979 AssertPtr(pVCpu);
1980 VMCPU_FF_SET(pVCpu, VMCPU_FF_VMX_PREEMPT_TIMER);
1981}
1982
1983
1984/**
1985 * Initializes the CPUM.
1986 *
1987 * @returns VBox status code.
1988 * @param pVM The cross context VM structure.
1989 */
1990VMMR3DECL(int) CPUMR3Init(PVM pVM)
1991{
1992 LogFlow(("CPUMR3Init\n"));
1993
1994 /*
1995 * Assert alignment, sizes and tables.
1996 */
1997 AssertCompileMemberAlignment(VM, cpum.s, 32);
1998 AssertCompile(sizeof(pVM->cpum.s) <= sizeof(pVM->cpum.padding));
1999 AssertCompileSizeAlignment(CPUMCTX, 64);
2000 AssertCompileSizeAlignment(CPUMCTXMSRS, 64);
2001 AssertCompileSizeAlignment(CPUMHOSTCTX, 64);
2002 AssertCompileMemberAlignment(VM, cpum, 64);
2003 AssertCompileMemberAlignment(VMCPU, cpum.s, 64);
2004#ifdef VBOX_STRICT
2005 int rc2 = cpumR3MsrStrictInitChecks();
2006 AssertRCReturn(rc2, rc2);
2007#endif
2008
2009 /*
2010 * Gather info about the host CPU.
2011 */
2012 if (!ASMHasCpuId())
2013 {
2014 LogRel(("The CPU doesn't support CPUID!\n"));
2015 return VERR_UNSUPPORTED_CPU;
2016 }
2017
2018 pVM->cpum.s.fHostMxCsrMask = CPUMR3DeterminHostMxCsrMask();
2019
2020 CPUMMSRS HostMsrs;
2021 RT_ZERO(HostMsrs);
2022 int rc = cpumR3GetHostHwvirtMsrs(&HostMsrs);
2023 AssertLogRelRCReturn(rc, rc);
2024
2025 PCPUMCPUIDLEAF paLeaves;
2026 uint32_t cLeaves;
2027 rc = CPUMR3CpuIdCollectLeaves(&paLeaves, &cLeaves);
2028 AssertLogRelRCReturn(rc, rc);
2029
2030 rc = cpumR3CpuIdExplodeFeatures(paLeaves, cLeaves, &HostMsrs, &pVM->cpum.s.HostFeatures);
2031 RTMemFree(paLeaves);
2032 AssertLogRelRCReturn(rc, rc);
2033 pVM->cpum.s.GuestFeatures.enmCpuVendor = pVM->cpum.s.HostFeatures.enmCpuVendor;
2034
2035 /*
2036 * Check that the CPU supports the minimum features we require.
2037 */
2038 if (!pVM->cpum.s.HostFeatures.fFxSaveRstor)
2039 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support the FXSAVE/FXRSTOR instruction.");
2040 if (!pVM->cpum.s.HostFeatures.fMmx)
2041 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support MMX.");
2042 if (!pVM->cpum.s.HostFeatures.fTsc)
2043 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support RDTSC.");
2044
2045 /*
2046     * Set up the CR4 AND and OR masks used in the raw-mode switcher.
2047 */
2048 pVM->cpum.s.CR4.AndMask = X86_CR4_OSXMMEEXCPT | X86_CR4_PVI | X86_CR4_VME;
2049 pVM->cpum.s.CR4.OrMask = X86_CR4_OSFXSR;
2050
2051 /*
2052 * Figure out which XSAVE/XRSTOR features are available on the host.
2053 */
2054 uint64_t fXcr0Host = 0;
2055 uint64_t fXStateHostMask = 0;
2056 if ( pVM->cpum.s.HostFeatures.fXSaveRstor
2057 && pVM->cpum.s.HostFeatures.fOpSysXSaveRstor)
2058 {
2059 fXStateHostMask = fXcr0Host = ASMGetXcr0();
2060 fXStateHostMask &= XSAVE_C_X87 | XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI;
2061 AssertLogRelMsgStmt((fXStateHostMask & (XSAVE_C_X87 | XSAVE_C_SSE)) == (XSAVE_C_X87 | XSAVE_C_SSE),
2062 ("%#llx\n", fXStateHostMask), fXStateHostMask = 0);
2063 }
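    /* E.g. an AVX-capable host where XCR0 = XSAVE_C_X87 | XSAVE_C_SSE | XSAVE_C_YMM
       yields fXStateHostMask = 0x7; components outside the list above are masked out. */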
2064 pVM->cpum.s.fXStateHostMask = fXStateHostMask;
2065 LogRel(("CPUM: fXStateHostMask=%#llx; initial: %#llx; host XCR0=%#llx\n",
2066 pVM->cpum.s.fXStateHostMask, fXStateHostMask, fXcr0Host));
2067
2068 /*
2069     * Validate the extended state size and initialize the per-VCPU host XSAVE/XRSTOR mask.
2070 */
2071 uint32_t cbMaxXState = pVM->cpum.s.HostFeatures.cbMaxExtendedState;
2072 cbMaxXState = RT_ALIGN(cbMaxXState, 128);
2073 AssertLogRelReturn( pVM->cpum.s.HostFeatures.cbMaxExtendedState >= sizeof(X86FXSTATE)
2074 && pVM->cpum.s.HostFeatures.cbMaxExtendedState <= sizeof(pVM->apCpusR3[0]->cpum.s.Host.XState)
2075 && pVM->cpum.s.HostFeatures.cbMaxExtendedState <= sizeof(pVM->apCpusR3[0]->cpum.s.Guest.XState)
2076 , VERR_CPUM_IPE_2);
2077
2078 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2079 {
2080 PVMCPU pVCpu = pVM->apCpusR3[i];
2081
2082 pVCpu->cpum.s.Host.fXStateMask = fXStateHostMask;
2083 pVCpu->cpum.s.hNestedVmxPreemptTimer = NIL_TMTIMERHANDLE;
2084 }
2085
2086 /*
2087 * Register saved state data item.
2088 */
2089 rc = SSMR3RegisterInternal(pVM, "cpum", 1, CPUM_SAVED_STATE_VERSION, sizeof(CPUM),
2090 NULL, cpumR3LiveExec, NULL,
2091 NULL, cpumR3SaveExec, NULL,
2092 cpumR3LoadPrep, cpumR3LoadExec, cpumR3LoadDone);
2093 if (RT_FAILURE(rc))
2094 return rc;
2095
2096 /*
2097 * Register info handlers and registers with the debugger facility.
2098 */
2099    DBGFR3InfoRegisterInternalEx(pVM, "cpum", "Displays all the cpu states.",
2100 &cpumR3InfoAll, DBGFINFO_FLAGS_ALL_EMTS);
2101 DBGFR3InfoRegisterInternalEx(pVM, "cpumguest", "Displays the guest cpu state.",
2102 &cpumR3InfoGuest, DBGFINFO_FLAGS_ALL_EMTS);
2103 DBGFR3InfoRegisterInternalEx(pVM, "cpumguesthwvirt", "Displays the guest hwvirt. cpu state.",
2104 &cpumR3InfoGuestHwvirt, DBGFINFO_FLAGS_ALL_EMTS);
2105 DBGFR3InfoRegisterInternalEx(pVM, "cpumhyper", "Displays the hypervisor cpu state.",
2106 &cpumR3InfoHyper, DBGFINFO_FLAGS_ALL_EMTS);
2107 DBGFR3InfoRegisterInternalEx(pVM, "cpumhost", "Displays the host cpu state.",
2108 &cpumR3InfoHost, DBGFINFO_FLAGS_ALL_EMTS);
2109 DBGFR3InfoRegisterInternalEx(pVM, "cpumguestinstr", "Displays the current guest instruction.",
2110 &cpumR3InfoGuestInstr, DBGFINFO_FLAGS_ALL_EMTS);
2111 DBGFR3InfoRegisterInternal( pVM, "cpuid", "Displays the guest cpuid leaves.", &cpumR3CpuIdInfo);
2112 DBGFR3InfoRegisterInternal( pVM, "cpumvmxfeat", "Displays the host and guest VMX hwvirt. features.",
2113 &cpumR3InfoVmxFeatures);
2114
2115 rc = cpumR3DbgInit(pVM);
2116 if (RT_FAILURE(rc))
2117 return rc;
2118
2119 /*
2120     * Check if we need to work around partial/leaky FPU handling.
2121 */
2122 cpumR3CheckLeakyFpu(pVM);
2123
2124 /*
2125 * Initialize the Guest CPUID and MSR states.
2126 */
2127 rc = cpumR3InitCpuIdAndMsrs(pVM, &HostMsrs);
2128 if (RT_FAILURE(rc))
2129 return rc;
2130
2131 /*
2132 * Init the VMX/SVM state.
2133 *
2134     * This must be done after initializing CPUID/MSR features as we access
2135     * the VMX/SVM guest features below.
2136 *
2137 * In the case of nested VT-x, we also need to create the per-VCPU
2138 * VMX preemption timers.
2139 */
2140 if (pVM->cpum.s.GuestFeatures.fVmx)
2141 cpumR3InitVmxHwVirtState(pVM);
2142 else if (pVM->cpum.s.GuestFeatures.fSvm)
2143 cpumR3InitSvmHwVirtState(pVM);
2144 else
2145 Assert(pVM->apCpusR3[0]->cpum.s.Guest.hwvirt.enmHwvirt == CPUMHWVIRT_NONE);
2146
2147 CPUMR3Reset(pVM);
2148 return VINF_SUCCESS;
2149}
2150
2151
2152/**
2153 * Applies relocations to data and code managed by this
2154 * component. This function will be called at init and
2155 * whenever the VMM needs to relocate itself inside the GC.
2156 *
2157 * The CPUM will update the addresses used by the switcher.
2158 *
2159 * @param pVM The cross context VM structure.
2160 */
2161VMMR3DECL(void) CPUMR3Relocate(PVM pVM)
2162{
2163 RT_NOREF(pVM);
2164}
2165
2166
2167/**
2168 * Terminates the CPUM.
2169 *
2170 * Termination means cleaning up and freeing all resources;
2171 * the VM itself is at this point powered off or suspended.
2172 *
2173 * @returns VBox status code.
2174 * @param pVM The cross context VM structure.
2175 */
2176VMMR3DECL(int) CPUMR3Term(PVM pVM)
2177{
2178#ifdef VBOX_WITH_CRASHDUMP_MAGIC
2179 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2180 {
2181 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2182 memset(pVCpu->cpum.s.aMagic, 0, sizeof(pVCpu->cpum.s.aMagic));
2183 pVCpu->cpum.s.uMagic = 0;
2184        pVCpu->cpum.s.Guest.dr[5] = 0;
2185 }
2186#endif
2187
2188 if (pVM->cpum.s.GuestFeatures.fVmx)
2189 {
2190 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2191 {
2192 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2193 if (pVCpu->cpum.s.hNestedVmxPreemptTimer != NIL_TMTIMERHANDLE)
2194 {
2195 int rc = TMR3TimerDestroy(pVM, pVCpu->cpum.s.hNestedVmxPreemptTimer); AssertRC(rc);
2196 pVCpu->cpum.s.hNestedVmxPreemptTimer = NIL_TMTIMERHANDLE;
2197 }
2198 }
2199 }
2200 return VINF_SUCCESS;
2201}
2202
2203
2204/**
2205 * Resets a virtual CPU.
2206 *
2207 * Used by CPUMR3Reset and CPU hot plugging.
2208 *
2209 * @param pVM The cross context VM structure.
2210 * @param pVCpu The cross context virtual CPU structure of the CPU that is
2211 * being reset. This may differ from the current EMT.
2212 */
2213VMMR3DECL(void) CPUMR3ResetCpu(PVM pVM, PVMCPU pVCpu)
2214{
2215 /** @todo anything different for VCPU > 0? */
2216 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
2217
2218 /*
2219 * Initialize everything to ZERO first.
2220 */
2221 uint32_t fUseFlags = pVCpu->cpum.s.fUseFlags & ~CPUM_USED_FPU_SINCE_REM;
2222
2223 RT_BZERO(pCtx, RT_UOFFSETOF(CPUMCTX, aoffXState));
2224
2225 pVCpu->cpum.s.fUseFlags = fUseFlags;
2226
2227 pCtx->cr0 = X86_CR0_CD | X86_CR0_NW | X86_CR0_ET; //0x60000010
2228 pCtx->eip = 0x0000fff0;
2229 pCtx->edx = 0x00000600; /* P6 processor */
2230 pCtx->eflags.Bits.u1Reserved0 = 1;
2231
2232 pCtx->cs.Sel = 0xf000;
2233 pCtx->cs.ValidSel = 0xf000;
2234 pCtx->cs.fFlags = CPUMSELREG_FLAGS_VALID;
2235 pCtx->cs.u64Base = UINT64_C(0xffff0000);
2236 pCtx->cs.u32Limit = 0x0000ffff;
2237 pCtx->cs.Attr.n.u1DescType = 1; /* code/data segment */
2238 pCtx->cs.Attr.n.u1Present = 1;
2239 pCtx->cs.Attr.n.u4Type = X86_SEL_TYPE_ER_ACC;
2240
2241 pCtx->ds.fFlags = CPUMSELREG_FLAGS_VALID;
2242 pCtx->ds.u32Limit = 0x0000ffff;
2243 pCtx->ds.Attr.n.u1DescType = 1; /* code/data segment */
2244 pCtx->ds.Attr.n.u1Present = 1;
2245 pCtx->ds.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2246
2247 pCtx->es.fFlags = CPUMSELREG_FLAGS_VALID;
2248 pCtx->es.u32Limit = 0x0000ffff;
2249 pCtx->es.Attr.n.u1DescType = 1; /* code/data segment */
2250 pCtx->es.Attr.n.u1Present = 1;
2251 pCtx->es.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2252
2253 pCtx->fs.fFlags = CPUMSELREG_FLAGS_VALID;
2254 pCtx->fs.u32Limit = 0x0000ffff;
2255 pCtx->fs.Attr.n.u1DescType = 1; /* code/data segment */
2256 pCtx->fs.Attr.n.u1Present = 1;
2257 pCtx->fs.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2258
2259 pCtx->gs.fFlags = CPUMSELREG_FLAGS_VALID;
2260 pCtx->gs.u32Limit = 0x0000ffff;
2261 pCtx->gs.Attr.n.u1DescType = 1; /* code/data segment */
2262 pCtx->gs.Attr.n.u1Present = 1;
2263 pCtx->gs.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2264
2265 pCtx->ss.fFlags = CPUMSELREG_FLAGS_VALID;
2266 pCtx->ss.u32Limit = 0x0000ffff;
2267 pCtx->ss.Attr.n.u1Present = 1;
2268 pCtx->ss.Attr.n.u1DescType = 1; /* code/data segment */
2269 pCtx->ss.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
2270
2271 pCtx->idtr.cbIdt = 0xffff;
2272 pCtx->gdtr.cbGdt = 0xffff;
2273
2274 pCtx->ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
2275 pCtx->ldtr.u32Limit = 0xffff;
2276 pCtx->ldtr.Attr.n.u1Present = 1;
2277 pCtx->ldtr.Attr.n.u4Type = X86_SEL_TYPE_SYS_LDT;
2278
2279 pCtx->tr.fFlags = CPUMSELREG_FLAGS_VALID;
2280 pCtx->tr.u32Limit = 0xffff;
2281 pCtx->tr.Attr.n.u1Present = 1;
2282 pCtx->tr.Attr.n.u4Type = X86_SEL_TYPE_SYS_386_TSS_BUSY; /* Deduction, not properly documented by Intel. */
2283
2284 pCtx->dr[6] = X86_DR6_INIT_VAL;
2285 pCtx->dr[7] = X86_DR7_INIT_VAL;
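    /* These are the architectural reset values: 0xFFFF0FF0 for DR6 and 0x400 for DR7. */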
2286
2287 PX86FXSTATE pFpuCtx = &pCtx->XState.x87;
2288    pFpuCtx->FTW = 0x00; /* All empty (abridged tag reg edition). */
2289 pFpuCtx->FCW = 0x37f;
2290
2291 /* Intel 64 and IA-32 Architectures Software Developer's Manual Volume 3A, Table 8-1.
2292 IA-32 Processor States Following Power-up, Reset, or INIT */
2293 pFpuCtx->MXCSR = 0x1F80;
2294 pFpuCtx->MXCSR_MASK = pVM->cpum.s.GuestInfo.fMxCsrMask; /** @todo check if REM messes this up... */
2295
2296 pCtx->aXcr[0] = XSAVE_C_X87;
2297 if (pVM->cpum.s.HostFeatures.cbMaxExtendedState >= RT_UOFFSETOF(X86XSAVEAREA, Hdr))
2298 {
2299 /* The entire FXSAVE state needs loading when we switch to XSAVE/XRSTOR
2300       as we don't know what happened before.  (Bother optimizing later?) */
2301 pCtx->XState.Hdr.bmXState = XSAVE_C_X87 | XSAVE_C_SSE;
2302 }
2303
2304 /*
2305 * MSRs.
2306 */
2307 /* Init PAT MSR */
2308 pCtx->msrPAT = MSR_IA32_CR_PAT_INIT_VAL;
2309
2310 /* EFER MBZ; see AMD64 Architecture Programmer's Manual Volume 2: Table 14-1. Initial Processor State.
2311 * The Intel docs don't mention it. */
2312 Assert(!pCtx->msrEFER);
2313
2314 /* IA32_MISC_ENABLE - not entirely sure what the init/reset state really
2315       is supposed to be here, just trying to provide useful/sensible values. */
2316 PCPUMMSRRANGE pRange = cpumLookupMsrRange(pVM, MSR_IA32_MISC_ENABLE);
2317 if (pRange)
2318 {
2319 pVCpu->cpum.s.GuestMsrs.msr.MiscEnable = MSR_IA32_MISC_ENABLE_BTS_UNAVAIL
2320 | MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL
2321 | (pVM->cpum.s.GuestFeatures.fMonitorMWait ? MSR_IA32_MISC_ENABLE_MONITOR : 0)
2322 | MSR_IA32_MISC_ENABLE_FAST_STRINGS;
2323 pRange->fWrIgnMask |= MSR_IA32_MISC_ENABLE_BTS_UNAVAIL
2324 | MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL;
2325 pRange->fWrGpMask &= ~pVCpu->cpum.s.GuestMsrs.msr.MiscEnable;
2326 }
2327
2328 /** @todo Wire IA32_MISC_ENABLE bit 22 to our NT 4 CPUID trick. */
2329
2330 /** @todo r=ramshankar: Currently broken for SMP as TMCpuTickSet() expects to be
2331 * called from each EMT while we're getting called by CPUMR3Reset()
2332 * iteratively on the same thread. Fix later. */
2333#if 0 /** @todo r=bird: This we will do in TM, not here. */
2334 /* TSC must be 0. Intel spec. Table 9-1. "IA-32 Processor States Following Power-up, Reset, or INIT." */
2335 CPUMSetGuestMsr(pVCpu, MSR_IA32_TSC, 0);
2336#endif
2337
2338
2339 /* C-state control. Guesses. */
2340 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 1 /*C1*/ | RT_BIT_32(25) | RT_BIT_32(26) | RT_BIT_32(27) | RT_BIT_32(28);
2341 /* For Nehalem+ and Atoms, the 0xE2 MSR (MSR_PKG_CST_CONFIG_CONTROL) is documented. For Core 2,
2342 * it's undocumented but exists as MSR_PMG_CST_CONFIG_CONTROL and has similar but not identical
2343 * functionality. The default value must be different due to incompatible write mask.
2344 */
2345 if (CPUMMICROARCH_IS_INTEL_CORE2(pVM->cpum.s.GuestFeatures.enmMicroarch))
2346 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 0x202a01; /* From Mac Pro Harpertown, unlocked. */
2347 else if (pVM->cpum.s.GuestFeatures.enmMicroarch == kCpumMicroarch_Intel_Core_Yonah)
2348 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 0x26740c; /* From MacBookPro1,1. */
2349
2350 /*
2351 * Hardware virtualization state.
2352 */
2353 CPUMSetGuestGif(pCtx, true);
2354 Assert(!pVM->cpum.s.GuestFeatures.fVmx || !pVM->cpum.s.GuestFeatures.fSvm); /* Paranoia. */
2355 if (pVM->cpum.s.GuestFeatures.fVmx)
2356 cpumR3ResetVmxHwVirtState(pVCpu);
2357 else if (pVM->cpum.s.GuestFeatures.fSvm)
2358 cpumR3ResetSvmHwVirtState(pVCpu);
2359}
2360
2361
2362/**
2363 * Resets the CPU.
2364 *
2365 * @returns VINF_SUCCESS.
2366 * @param pVM The cross context VM structure.
2367 */
2368VMMR3DECL(void) CPUMR3Reset(PVM pVM)
2369{
2370 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2371 {
2372 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2373 CPUMR3ResetCpu(pVM, pVCpu);
2374
2375#ifdef VBOX_WITH_CRASHDUMP_MAGIC
2376
2377 /* Magic marker for searching in crash dumps. */
2378        strcpy((char *)pVCpu->cpum.s.aMagic, "CPUMCPU Magic");
2379        pVCpu->cpum.s.uMagic = UINT64_C(0xDEADBEEFDEADBEEF);
2380        pVCpu->cpum.s.Guest.dr[5] = UINT64_C(0xDEADBEEFDEADBEEF);
2381#endif
2382 }
2383}
2384
2385
2386
2387
2388/**
2389 * Pass 0 live exec callback.
2390 *
2391 * @returns VINF_SSM_DONT_CALL_AGAIN.
2392 * @param pVM The cross context VM structure.
2393 * @param pSSM The saved state handle.
2394 * @param uPass The pass (0).
2395 */
2396static DECLCALLBACK(int) cpumR3LiveExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uPass)
2397{
2398 AssertReturn(uPass == 0, VERR_SSM_UNEXPECTED_PASS);
2399 cpumR3SaveCpuId(pVM, pSSM);
2400 return VINF_SSM_DONT_CALL_AGAIN;
2401}
2402
2403
2404/**
2405 * Execute state save operation.
2406 *
2407 * @returns VBox status code.
2408 * @param pVM The cross context VM structure.
2409 * @param pSSM SSM operation handle.
2410 */
2411static DECLCALLBACK(int) cpumR3SaveExec(PVM pVM, PSSMHANDLE pSSM)
2412{
2413 /*
2414 * Save.
2415 */
2416 SSMR3PutU32(pSSM, pVM->cCpus);
2417 SSMR3PutU32(pSSM, sizeof(pVM->apCpusR3[0]->cpum.s.GuestMsrs.msr));
2418 CPUMCTX DummyHyperCtx;
2419 RT_ZERO(DummyHyperCtx);
2420 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2421 {
2422 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2423
2424 SSMR3PutStructEx(pSSM, &DummyHyperCtx, sizeof(DummyHyperCtx), 0, g_aCpumCtxFields, NULL);
2425
2426 PCPUMCTX pGstCtx = &pVCpu->cpum.s.Guest;
2427 SSMR3PutStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
2428 SSMR3PutStructEx(pSSM, &pGstCtx->XState.x87, sizeof(pGstCtx->XState.x87), 0, g_aCpumX87Fields, NULL);
2429 if (pGstCtx->fXStateMask != 0)
2430 SSMR3PutStructEx(pSSM, &pGstCtx->XState.Hdr, sizeof(pGstCtx->XState.Hdr), 0, g_aCpumXSaveHdrFields, NULL);
2431 if (pGstCtx->fXStateMask & XSAVE_C_YMM)
2432 {
2433 PCX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_YMM_BIT, PCX86XSAVEYMMHI);
2434 SSMR3PutStructEx(pSSM, pYmmHiCtx, sizeof(*pYmmHiCtx), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumYmmHiFields, NULL);
2435 }
2436 if (pGstCtx->fXStateMask & XSAVE_C_BNDREGS)
2437 {
2438 PCX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDREGS_BIT, PCX86XSAVEBNDREGS);
2439 SSMR3PutStructEx(pSSM, pBndRegs, sizeof(*pBndRegs), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndRegsFields, NULL);
2440 }
2441 if (pGstCtx->fXStateMask & XSAVE_C_BNDCSR)
2442 {
2443 PCX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDCSR_BIT, PCX86XSAVEBNDCFG);
2444 SSMR3PutStructEx(pSSM, pBndCfg, sizeof(*pBndCfg), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndCfgFields, NULL);
2445 }
2446 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_HI256)
2447 {
2448 PCX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_HI256_BIT, PCX86XSAVEZMMHI256);
2449 SSMR3PutStructEx(pSSM, pZmmHi256, sizeof(*pZmmHi256), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmmHi256Fields, NULL);
2450 }
2451 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_16HI)
2452 {
2453 PCX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_16HI_BIT, PCX86XSAVEZMM16HI);
2454 SSMR3PutStructEx(pSSM, pZmm16Hi, sizeof(*pZmm16Hi), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmm16HiFields, NULL);
2455 }
2456 SSMR3PutU64(pSSM, pGstCtx->aPaePdpes[0].u);
2457 SSMR3PutU64(pSSM, pGstCtx->aPaePdpes[1].u);
2458 SSMR3PutU64(pSSM, pGstCtx->aPaePdpes[2].u);
2459 SSMR3PutU64(pSSM, pGstCtx->aPaePdpes[3].u);
2460 if (pVM->cpum.s.GuestFeatures.fSvm)
2461 {
2462 SSMR3PutU64(pSSM, pGstCtx->hwvirt.svm.uMsrHSavePa);
2463 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.svm.GCPhysVmcb);
2464 SSMR3PutU64(pSSM, pGstCtx->hwvirt.svm.uPrevPauseTick);
2465 SSMR3PutU16(pSSM, pGstCtx->hwvirt.svm.cPauseFilter);
2466 SSMR3PutU16(pSSM, pGstCtx->hwvirt.svm.cPauseFilterThreshold);
2467 SSMR3PutBool(pSSM, pGstCtx->hwvirt.svm.fInterceptEvents);
2468 SSMR3PutStructEx(pSSM, &pGstCtx->hwvirt.svm.HostState, sizeof(pGstCtx->hwvirt.svm.HostState), 0 /* fFlags */,
2469 g_aSvmHwvirtHostState, NULL /* pvUser */);
2470 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.svm.Vmcb, sizeof(pGstCtx->hwvirt.svm.Vmcb));
2471 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.svm.abMsrBitmap[0], sizeof(pGstCtx->hwvirt.svm.abMsrBitmap));
2472 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.svm.abIoBitmap[0], sizeof(pGstCtx->hwvirt.svm.abIoBitmap));
2473 SSMR3PutU32(pSSM, pGstCtx->hwvirt.fLocalForcedActions);
2474 SSMR3PutBool(pSSM, pGstCtx->hwvirt.fGif);
2475 }
2476 if (pVM->cpum.s.GuestFeatures.fVmx)
2477 {
2478 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.vmx.GCPhysVmxon);
2479 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.vmx.GCPhysVmcs);
2480 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.vmx.GCPhysShadowVmcs);
2481 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fInVmxRootMode);
2482 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fInVmxNonRootMode);
2483 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fInterceptEvents);
2484 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fNmiUnblockingIret);
2485 SSMR3PutStructEx(pSSM, &pGstCtx->hwvirt.vmx.Vmcs, sizeof(pGstCtx->hwvirt.vmx.Vmcs), 0, g_aVmxHwvirtVmcs, NULL);
2486 SSMR3PutStructEx(pSSM, &pGstCtx->hwvirt.vmx.ShadowVmcs, sizeof(pGstCtx->hwvirt.vmx.ShadowVmcs),
2487 0, g_aVmxHwvirtVmcs, NULL);
2488 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.abVmreadBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abVmreadBitmap));
2489 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.abVmwriteBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abVmwriteBitmap));
2490 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.aEntryMsrLoadArea[0], sizeof(pGstCtx->hwvirt.vmx.aEntryMsrLoadArea));
2491 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.aExitMsrStoreArea[0], sizeof(pGstCtx->hwvirt.vmx.aExitMsrStoreArea));
2492 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.aExitMsrLoadArea[0], sizeof(pGstCtx->hwvirt.vmx.aExitMsrLoadArea));
2493 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.abMsrBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abMsrBitmap));
2494 SSMR3PutMem(pSSM, &pGstCtx->hwvirt.vmx.abIoBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abIoBitmap));
2495 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.uFirstPauseLoopTick);
2496 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.uPrevPauseTick);
2497 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.uEntryTick);
2498 SSMR3PutU16(pSSM, pGstCtx->hwvirt.vmx.offVirtApicWrite);
2499 SSMR3PutBool(pSSM, pGstCtx->hwvirt.vmx.fVirtNmiBlocking);
2500 SSMR3PutU64(pSSM, MSR_IA32_FEATURE_CONTROL_LOCK | MSR_IA32_FEATURE_CONTROL_VMXON); /* Deprecated since 2021/09/22. Value kept backwards compatible with 6.1.26. */
2501 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Basic);
2502 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.PinCtls.u);
2503 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.ProcCtls.u);
2504 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.ProcCtls2.u);
2505 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.ExitCtls.u);
2506 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.EntryCtls.u);
2507 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.TruePinCtls.u);
2508 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.TrueProcCtls.u);
2509 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.TrueEntryCtls.u);
2510 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.TrueExitCtls.u);
2511 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Misc);
2512 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Cr0Fixed0);
2513 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Cr0Fixed1);
2514 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Cr4Fixed0);
2515 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64Cr4Fixed1);
2516 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64VmcsEnum);
2517 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64VmFunc);
2518 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64EptVpidCaps);
2519 SSMR3PutU64(pSSM, pGstCtx->hwvirt.vmx.Msrs.u64ProcCtls3);
2520 }
2521 SSMR3PutU32(pSSM, pVCpu->cpum.s.fUseFlags);
2522 SSMR3PutU32(pSSM, pVCpu->cpum.s.fChanged);
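/* The guest MSR block is saved as one raw memory blob; the compile-time assert below keeps its size uint64_t aligned, matching the load-side size check. */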
2523 AssertCompileSizeAlignment(pVCpu->cpum.s.GuestMsrs.msr, sizeof(uint64_t));
2524 SSMR3PutMem(pSSM, &pVCpu->cpum.s.GuestMsrs, sizeof(pVCpu->cpum.s.GuestMsrs.msr));
2525 }
2526
2527 cpumR3SaveCpuId(pVM, pSSM);
2528 return VINF_SUCCESS;
2529}
2530
2531
2532/**
2533 * @callback_method_impl{FNSSMINTLOADPREP}
2534 */
2535static DECLCALLBACK(int) cpumR3LoadPrep(PVM pVM, PSSMHANDLE pSSM)
2536{
2537 NOREF(pSSM);
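/* Mark the restore as pending; cpumR3LoadExec clears this again, and cpumR3LoadDone and CPUMR3IsStateRestorePending check it. */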
2538 pVM->cpum.s.fPendingRestore = true;
2539 return VINF_SUCCESS;
2540}
2541
2542
2543/**
2544 * @callback_method_impl{FNSSMINTLOADEXEC}
2545 */
2546static DECLCALLBACK(int) cpumR3LoadExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass)
2547{
2548 int rc; /* Only for AssertRCReturn use. */
2549
2550 /*
2551 * Validate version.
2552 */
2553 if ( uVersion != CPUM_SAVED_STATE_VERSION_PAE_PDPES
2554 && uVersion != CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2
2555 && uVersion != CPUM_SAVED_STATE_VERSION_HWVIRT_VMX
2556 && uVersion != CPUM_SAVED_STATE_VERSION_HWVIRT_SVM
2557 && uVersion != CPUM_SAVED_STATE_VERSION_XSAVE
2558 && uVersion != CPUM_SAVED_STATE_VERSION_GOOD_CPUID_COUNT
2559 && uVersion != CPUM_SAVED_STATE_VERSION_BAD_CPUID_COUNT
2560 && uVersion != CPUM_SAVED_STATE_VERSION_PUT_STRUCT
2561 && uVersion != CPUM_SAVED_STATE_VERSION_MEM
2562 && uVersion != CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE
2563 && uVersion != CPUM_SAVED_STATE_VERSION_VER3_2
2564 && uVersion != CPUM_SAVED_STATE_VERSION_VER3_0
2565 && uVersion != CPUM_SAVED_STATE_VERSION_VER2_1_NOMSR
2566 && uVersion != CPUM_SAVED_STATE_VERSION_VER2_0
2567 && uVersion != CPUM_SAVED_STATE_VERSION_VER1_6)
2568 {
2569 AssertMsgFailed(("cpumR3LoadExec: Invalid version uVersion=%d!\n", uVersion));
2570 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
2571 }
2572
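/* The register state proper is only present in the final pass of the saved state. */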
2573 if (uPass == SSM_PASS_FINAL)
2574 {
2575 /*
2576 * Set the size of RTGCPTR for SSMR3GetGCPtr. (Only necessary for
2577 * really old SSM file versions.)
2578 */
2579 if (uVersion == CPUM_SAVED_STATE_VERSION_VER1_6)
2580 SSMR3HandleSetGCPtrSize(pSSM, sizeof(RTGCPTR32));
2581 else if (uVersion <= CPUM_SAVED_STATE_VERSION_VER3_0)
2582 SSMR3HandleSetGCPtrSize(pSSM, sizeof(RTGCPTR));
2583
2584 /*
2585 * Figure out which x87 and CPU context field definitions to use for older states. (States up to the 'MEM' layout were written as raw memory dumps, which the MEM band-aid flags compensate for.)
2586 */
2587 uint32_t const fLoad = uVersion > CPUM_SAVED_STATE_VERSION_MEM ? 0 : SSMSTRUCT_FLAGS_MEM_BAND_AID_RELAXED;
2588 PCSSMFIELD paCpumCtx1Fields = g_aCpumX87Fields;
2589 PCSSMFIELD paCpumCtx2Fields = g_aCpumCtxFields;
2590 if (uVersion == CPUM_SAVED_STATE_VERSION_VER1_6)
2591 {
2592 paCpumCtx1Fields = g_aCpumX87FieldsV16;
2593 paCpumCtx2Fields = g_aCpumCtxFieldsV16;
2594 }
2595 else if (uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2596 {
2597 paCpumCtx1Fields = g_aCpumX87FieldsMem;
2598 paCpumCtx2Fields = g_aCpumCtxFieldsMem;
2599 }
2600
2601 /*
2602 * The hyper state used to precede the CPU count. Starting with the
2603 * XSAVE layout it was moved down until after we've got the count; either way it is just read into a dummy and discarded.
2604 */
2605 CPUMCTX HyperCtxIgnored;
2606 if (uVersion < CPUM_SAVED_STATE_VERSION_XSAVE)
2607 {
2608 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2609 {
2610 X86FXSTATE Ign;
2611 SSMR3GetStructEx(pSSM, &Ign, sizeof(Ign), fLoad | SSMSTRUCT_FLAGS_NO_TAIL_MARKER, paCpumCtx1Fields, NULL);
2612 SSMR3GetStructEx(pSSM, &HyperCtxIgnored, sizeof(HyperCtxIgnored),
2613 fLoad | SSMSTRUCT_FLAGS_NO_LEAD_MARKER, paCpumCtx2Fields, NULL);
2614 }
2615 }
2616
2617 if (uVersion >= CPUM_SAVED_STATE_VERSION_VER2_1_NOMSR)
2618 {
2619 uint32_t cCpus;
2620 rc = SSMR3GetU32(pSSM, &cCpus); AssertRCReturn(rc, rc);
2621 AssertLogRelMsgReturn(cCpus == pVM->cCpus, ("Mismatching CPU counts: saved: %u; configured: %u\n", cCpus, pVM->cCpus),
2622 VERR_SSM_UNEXPECTED_DATA);
2623 }
2624 AssertLogRelMsgReturn( uVersion > CPUM_SAVED_STATE_VERSION_VER2_0
2625 || pVM->cCpus == 1,
2626 ("cCpus=%u\n", pVM->cCpus),
2627 VERR_SSM_UNEXPECTED_DATA);
2628
2629 uint32_t cbMsrs = 0;
2630 if (uVersion > CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE)
2631 {
2632 rc = SSMR3GetU32(pSSM, &cbMsrs); AssertRCReturn(rc, rc);
2633 AssertLogRelMsgReturn(RT_ALIGN(cbMsrs, sizeof(uint64_t)) == cbMsrs, ("Size of MSRs is misaligned: %#x\n", cbMsrs),
2634 VERR_SSM_UNEXPECTED_DATA);
2635 AssertLogRelMsgReturn(cbMsrs <= sizeof(CPUMCTXMSRS) && cbMsrs > 0, ("Size of MSRs is out of range: %#x\n", cbMsrs),
2636 VERR_SSM_UNEXPECTED_DATA);
2637 }
2638
2639 /*
2640 * Do the per-CPU restoring.
2641 */
2642 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2643 {
2644 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2645 PCPUMCTX pGstCtx = &pVCpu->cpum.s.Guest;
2646
2647 if (uVersion >= CPUM_SAVED_STATE_VERSION_XSAVE)
2648 {
2649 /*
2650 * The XSAVE saved state layout moved the hyper state down here.
2651 */
2652 rc = SSMR3GetStructEx(pSSM, &HyperCtxIgnored, sizeof(HyperCtxIgnored), 0, g_aCpumCtxFields, NULL);
2653 AssertRCReturn(rc, rc);
2654
2655 /*
2656 * Start by restoring the CPUMCTX structure and the X86FXSAVE bits of the extended state.
2657 */
2658 rc = SSMR3GetStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
2659 rc = SSMR3GetStructEx(pSSM, &pGstCtx->XState.x87, sizeof(pGstCtx->XState.x87), 0, g_aCpumX87Fields, NULL);
2660 AssertRCReturn(rc, rc);
2661
2662 /* Check that the xsave/xrstor mask is valid (invalid results in #GP). */
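/* Invariants enforced below: no components beyond what the VM supports; x87 (bit 0) must always be set; YMM requires SSE; and the AVX-512 trio (OPMASK, ZMM_HI256, ZMM_16HI) is all-or-nothing together with SSE and YMM. */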
2663 if (pGstCtx->fXStateMask != 0)
2664 {
2665 AssertLogRelMsgReturn(!(pGstCtx->fXStateMask & ~pVM->cpum.s.fXStateGuestMask),
2666 ("fXStateMask=%#RX64 fXStateGuestMask=%#RX64\n",
2667 pGstCtx->fXStateMask, pVM->cpum.s.fXStateGuestMask),
2668 VERR_CPUM_INCOMPATIBLE_XSAVE_COMP_MASK);
2669 AssertLogRelMsgReturn(pGstCtx->fXStateMask & XSAVE_C_X87,
2670 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2671 AssertLogRelMsgReturn((pGstCtx->fXStateMask & (XSAVE_C_SSE | XSAVE_C_YMM)) != XSAVE_C_YMM,
2672 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2673 AssertLogRelMsgReturn( (pGstCtx->fXStateMask & (XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI)) == 0
2674 || (pGstCtx->fXStateMask & (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI))
2675 == (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI),
2676 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2677 }
2678
2679 /* Check that the XCR0 mask is valid (invalid results in #GP). */
2680 AssertLogRelMsgReturn(pGstCtx->aXcr[0] & XSAVE_C_X87, ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XCR0);
2681 if (pGstCtx->aXcr[0] != XSAVE_C_X87)
2682 {
2683 AssertLogRelMsgReturn(!(pGstCtx->aXcr[0] & ~(pGstCtx->fXStateMask | XSAVE_C_X87)),
2684 ("xcr0=%#RX64 fXStateMask=%#RX64\n", pGstCtx->aXcr[0], pGstCtx->fXStateMask),
2685 VERR_CPUM_INVALID_XCR0);
2686 AssertLogRelMsgReturn(pGstCtx->aXcr[0] & XSAVE_C_X87,
2687 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2688 AssertLogRelMsgReturn((pGstCtx->aXcr[0] & (XSAVE_C_SSE | XSAVE_C_YMM)) != XSAVE_C_YMM,
2689 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2690 AssertLogRelMsgReturn( (pGstCtx->aXcr[0] & (XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI)) == 0
2691 || (pGstCtx->aXcr[0] & (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI))
2692 == (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI),
2693 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2694 }
2695
2696 /* Check that the XCR1 is zero, as we don't implement it yet. */
2697 AssertLogRelMsgReturn(!pGstCtx->aXcr[1], ("xcr1=%#RX64\n", pGstCtx->aXcr[1]), VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
2698
2699 /*
2700 * Restore the individual extended state components we support.
2701 */
2702 if (pGstCtx->fXStateMask != 0)
2703 {
2704 rc = SSMR3GetStructEx(pSSM, &pGstCtx->XState.Hdr, sizeof(pGstCtx->XState.Hdr),
2705 0, g_aCpumXSaveHdrFields, NULL);
2706 AssertRCReturn(rc, rc);
2707 AssertLogRelMsgReturn(!(pGstCtx->XState.Hdr.bmXState & ~pGstCtx->fXStateMask),
2708 ("bmXState=%#RX64 fXStateMask=%#RX64\n",
2709 pGstCtx->XState.Hdr.bmXState, pGstCtx->fXStateMask),
2710 VERR_CPUM_INVALID_XSAVE_HDR);
2711 }
2712 if (pGstCtx->fXStateMask & XSAVE_C_YMM)
2713 {
2714 PX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_YMM_BIT, PX86XSAVEYMMHI);
2715 SSMR3GetStructEx(pSSM, pYmmHiCtx, sizeof(*pYmmHiCtx), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumYmmHiFields, NULL);
2716 }
2717 if (pGstCtx->fXStateMask & XSAVE_C_BNDREGS)
2718 {
2719 PX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDREGS_BIT, PX86XSAVEBNDREGS);
2720 SSMR3GetStructEx(pSSM, pBndRegs, sizeof(*pBndRegs), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndRegsFields, NULL);
2721 }
2722 if (pGstCtx->fXStateMask & XSAVE_C_BNDCSR)
2723 {
2724 PX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDCSR_BIT, PX86XSAVEBNDCFG);
2725 SSMR3GetStructEx(pSSM, pBndCfg, sizeof(*pBndCfg), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndCfgFields, NULL);
2726 }
2727 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_HI256)
2728 {
2729 PX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_HI256_BIT, PX86XSAVEZMMHI256);
2730 SSMR3GetStructEx(pSSM, pZmmHi256, sizeof(*pZmmHi256), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmmHi256Fields, NULL);
2731 }
2732 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_16HI)
2733 {
2734 PX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_16HI_BIT, PX86XSAVEZMM16HI);
2735 SSMR3GetStructEx(pSSM, pZmm16Hi, sizeof(*pZmm16Hi), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmm16HiFields, NULL);
2736 }
2737 if (uVersion >= CPUM_SAVED_STATE_VERSION_PAE_PDPES)
2738 {
2739 SSMR3GetU64(pSSM, &pGstCtx->aPaePdpes[0].u);
2740 SSMR3GetU64(pSSM, &pGstCtx->aPaePdpes[1].u);
2741 SSMR3GetU64(pSSM, &pGstCtx->aPaePdpes[2].u);
2742 SSMR3GetU64(pSSM, &pGstCtx->aPaePdpes[3].u);
2743 }
2744 if (uVersion >= CPUM_SAVED_STATE_VERSION_HWVIRT_SVM)
2745 {
2746 if (pVM->cpum.s.GuestFeatures.fSvm)
2747 {
2748 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.svm.uMsrHSavePa);
2749 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.svm.GCPhysVmcb);
2750 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.svm.uPrevPauseTick);
2751 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.svm.cPauseFilter);
2752 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.svm.cPauseFilterThreshold);
2753 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.svm.fInterceptEvents);
2754 SSMR3GetStructEx(pSSM, &pGstCtx->hwvirt.svm.HostState, sizeof(pGstCtx->hwvirt.svm.HostState),
2755 0 /* fFlags */, g_aSvmHwvirtHostState, NULL /* pvUser */);
2756 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.svm.Vmcb, sizeof(pGstCtx->hwvirt.svm.Vmcb));
2757 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.svm.abMsrBitmap[0], sizeof(pGstCtx->hwvirt.svm.abMsrBitmap));
2758 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.svm.abIoBitmap[0], sizeof(pGstCtx->hwvirt.svm.abIoBitmap));
2759 SSMR3GetU32(pSSM, &pGstCtx->hwvirt.fLocalForcedActions);
2760 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.fGif);
2761 }
2762 }
2763 if (uVersion >= CPUM_SAVED_STATE_VERSION_HWVIRT_VMX)
2764 {
2765 if (pVM->cpum.s.GuestFeatures.fVmx)
2766 {
2767 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.vmx.GCPhysVmxon);
2768 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.vmx.GCPhysVmcs);
2769 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.vmx.GCPhysShadowVmcs);
2770 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fInVmxRootMode);
2771 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fInVmxNonRootMode);
2772 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fInterceptEvents);
2773 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fNmiUnblockingIret);
2774 SSMR3GetStructEx(pSSM, &pGstCtx->hwvirt.vmx.Vmcs, sizeof(pGstCtx->hwvirt.vmx.Vmcs),
2775 0, g_aVmxHwvirtVmcs, NULL);
2776 SSMR3GetStructEx(pSSM, &pGstCtx->hwvirt.vmx.ShadowVmcs, sizeof(pGstCtx->hwvirt.vmx.ShadowVmcs),
2777 0, g_aVmxHwvirtVmcs, NULL);
2778 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.abVmreadBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abVmreadBitmap));
2779 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.abVmwriteBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abVmwriteBitmap));
2780 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.aEntryMsrLoadArea[0], sizeof(pGstCtx->hwvirt.vmx.aEntryMsrLoadArea));
2781 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.aExitMsrStoreArea[0], sizeof(pGstCtx->hwvirt.vmx.aExitMsrStoreArea));
2782 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.aExitMsrLoadArea[0], sizeof(pGstCtx->hwvirt.vmx.aExitMsrLoadArea));
2783 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.abMsrBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abMsrBitmap));
2784 SSMR3GetMem(pSSM, &pGstCtx->hwvirt.vmx.abIoBitmap[0], sizeof(pGstCtx->hwvirt.vmx.abIoBitmap));
2785 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.uFirstPauseLoopTick);
2786 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.uPrevPauseTick);
2787 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.uEntryTick);
2788 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.vmx.offVirtApicWrite);
2789 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.vmx.fVirtNmiBlocking);
2790 SSMR3Skip(pSSM, sizeof(uint64_t)); /* Unused - used to be IA32_FEATURE_CONTROL, see @bugref{10106}. */
2791 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Basic);
2792 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.PinCtls.u);
2793 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.ProcCtls.u);
2794 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.ProcCtls2.u);
2795 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.ExitCtls.u);
2796 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.EntryCtls.u);
2797 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.TruePinCtls.u);
2798 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.TrueProcCtls.u);
2799 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.TrueEntryCtls.u);
2800 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.TrueExitCtls.u);
2801 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Misc);
2802 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Cr0Fixed0);
2803 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Cr0Fixed1);
2804 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Cr4Fixed0);
2805 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64Cr4Fixed1);
2806 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64VmcsEnum);
2807 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64VmFunc);
2808 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64EptVpidCaps);
2809 if (uVersion >= CPUM_SAVED_STATE_VERSION_HWVIRT_VMX_2)
2810 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.vmx.Msrs.u64ProcCtls3);
2811 }
2812 }
2813 }
2814 else
2815 {
2816 /*
2817 * Pre XSAVE saved state.
2818 */
2819 SSMR3GetStructEx(pSSM, &pGstCtx->XState.x87, sizeof(pGstCtx->XState.x87),
2820 fLoad | SSMSTRUCT_FLAGS_NO_TAIL_MARKER, paCpumCtx1Fields, NULL);
2821 SSMR3GetStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), fLoad | SSMSTRUCT_FLAGS_NO_LEAD_MARKER, paCpumCtx2Fields, NULL);
2822 }
2823
2824 /*
2825 * Restore a couple of flags and the MSRs.
2826 */
2827 uint32_t fIgnoredUsedFlags = 0;
2828 rc = SSMR3GetU32(pSSM, &fIgnoredUsedFlags); /* we recalculate the two relevant flags after loading the state. */
2829 AssertRCReturn(rc, rc);
2830 SSMR3GetU32(pSSM, &pVCpu->cpum.s.fChanged);
2831
2832 rc = VINF_SUCCESS;
2833 if (uVersion > CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE)
2834 rc = SSMR3GetMem(pSSM, &pVCpu->cpum.s.GuestMsrs.au64[0], cbMsrs);
2835 else if (uVersion >= CPUM_SAVED_STATE_VERSION_VER3_0)
2836 {
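/* The 3.0 layout apparently stored a fixed 64-slot MSR array, of which only the first two slots were used: restore those and skip the remaining 62. */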
2837 SSMR3GetMem(pSSM, &pVCpu->cpum.s.GuestMsrs.au64[0], 2 * sizeof(uint64_t)); /* Restore two MSRs. */
2838 rc = SSMR3Skip(pSSM, 62 * sizeof(uint64_t));
2839 }
2840 AssertRCReturn(rc, rc);
2841
2842 /* REM and others may have cleared must-be-one fields in DR6 and
2843 DR7; fix these up (RAZ = read-as-zero, RA1 = read-as-one, MBZ = must-be-zero). */
2844 pGstCtx->dr[6] &= ~(X86_DR6_RAZ_MASK | X86_DR6_MBZ_MASK);
2845 pGstCtx->dr[6] |= X86_DR6_RA1_MASK;
2846 pGstCtx->dr[7] &= ~(X86_DR7_RAZ_MASK | X86_DR7_MBZ_MASK);
2847 pGstCtx->dr[7] |= X86_DR7_RA1_MASK;
2848 }
2849
2850 /* Older states do not have the internal selector register flags
2851 and valid selector values. Supply those. */
2852 if (uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2853 {
2854 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2855 {
2856 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2857 bool const fValid = true /*!VM_IS_RAW_MODE_ENABLED(pVM)*/
2858 || ( uVersion > CPUM_SAVED_STATE_VERSION_VER3_2
2859 && !(pVCpu->cpum.s.fChanged & CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID));
2860 PCPUMSELREG paSelReg = CPUMCTX_FIRST_SREG(&pVCpu->cpum.s.Guest);
2861 if (fValid)
2862 {
2863 for (uint32_t iSelReg = 0; iSelReg < X86_SREG_COUNT; iSelReg++)
2864 {
2865 paSelReg[iSelReg].fFlags = CPUMSELREG_FLAGS_VALID;
2866 paSelReg[iSelReg].ValidSel = paSelReg[iSelReg].Sel;
2867 }
2868
2869 pVCpu->cpum.s.Guest.ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
2870 pVCpu->cpum.s.Guest.ldtr.ValidSel = pVCpu->cpum.s.Guest.ldtr.Sel;
2871 }
2872 else
2873 {
2874 for (uint32_t iSelReg = 0; iSelReg < X86_SREG_COUNT; iSelReg++)
2875 {
2876 paSelReg[iSelReg].fFlags = 0;
2877 paSelReg[iSelReg].ValidSel = 0;
2878 }
2879
2880 /* This might not be 104% correct, but I think it's close
2881 enough for all practical purposes... (REM always loaded
2882 LDTR registers.) */
2883 pVCpu->cpum.s.Guest.ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
2884 pVCpu->cpum.s.Guest.ldtr.ValidSel = pVCpu->cpum.s.Guest.ldtr.Sel;
2885 }
2886 pVCpu->cpum.s.Guest.tr.fFlags = CPUMSELREG_FLAGS_VALID;
2887 pVCpu->cpum.s.Guest.tr.ValidSel = pVCpu->cpum.s.Guest.tr.Sel;
2888 }
2889 }
2890
2891 /* Clear CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID. */
2892 if ( uVersion > CPUM_SAVED_STATE_VERSION_VER3_2
2893 && uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2894 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2895 {
2896 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2897 pVCpu->cpum.s.fChanged &= ~CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID;
2898 }
2899
2900 /*
2901 * A quick sanity check.
2902 */
2903 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2904 {
2905 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2906 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.es.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2907 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.cs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2908 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.ss.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2909 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.ds.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2910 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.fs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2911 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.gs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2912 }
2913 }
2914
2915 pVM->cpum.s.fPendingRestore = false;
2916
2917 /*
2918 * Guest CPUIDs (and VMX MSR features).
2919 */
2920 if (uVersion >= CPUM_SAVED_STATE_VERSION_VER3_2)
2921 {
2922 CPUMMSRS GuestMsrs;
2923 RT_ZERO(GuestMsrs);
2924
2925 CPUMFEATURES BaseFeatures;
2926 bool const fVmxGstFeat = pVM->cpum.s.GuestFeatures.fVmx;
2927 if (fVmxGstFeat)
2928 {
2929 /*
2930 * At this point the MSRs in the guest CPU-context are loaded with the guest VMX MSRs from the saved state.
2931 * However, the VMX sub-features have not been exploded yet. So cache the base (host-derived) VMX features
2932 * here so we can compare them for compatibility after exploding the guest features.
2933 */
2934 BaseFeatures = pVM->cpum.s.GuestFeatures;
2935
2936 /* Use the VMX MSR features from the saved state while exploding guest features. */
2937 GuestMsrs.hwvirt.vmx = pVM->apCpusR3[0]->cpum.s.Guest.hwvirt.vmx.Msrs;
2938 }
2939
2940 /* Load CPUID and explode guest features. */
2941 rc = cpumR3LoadCpuId(pVM, pSSM, uVersion, &GuestMsrs);
2942 if (fVmxGstFeat)
2943 {
2944 /*
2945 * Check if the exploded VMX features from the saved state are compatible with the host-derived features
2946 * we cached earlier (above). This is required when we use hardware-assisted nested-guest execution with
2947 * VMX features presented to the guest.
2948 */
2949 bool const fIsCompat = cpumR3AreVmxCpuFeaturesCompatible(pVM, &BaseFeatures, &pVM->cpum.s.GuestFeatures);
2950 if (!fIsCompat)
2951 return VERR_CPUM_INVALID_HWVIRT_FEAT_COMBO;
2952 }
2953 return rc;
2954 }
2955 return cpumR3LoadCpuIdPre32(pVM, pSSM, uVersion);
2956}
2957
2958
2959/**
2960 * @callback_method_impl{FNSSMINTLOADDONE}
2961 */
2962static DECLCALLBACK(int) cpumR3LoadDone(PVM pVM, PSSMHANDLE pSSM)
2963{
2964 if (RT_FAILURE(SSMR3HandleGetStatus(pSSM)))
2965 return VINF_SUCCESS;
2966
2967 /* Just check this since we can. */ /** @todo Add an SSM unit flag for indicating that it's mandatory during a restore. */
2968 if (pVM->cpum.s.fPendingRestore)
2969 {
2970 LogRel(("CPUM: Missing state!\n"));
2971 return VERR_INTERNAL_ERROR_2;
2972 }
2973
2974 bool const fSupportsLongMode = VMR3IsLongModeAllowed(pVM);
2975 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2976 {
2977 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
2978
2979 /* Notify PGM of the NXE states in case they've changed. */
2980 PGMNotifyNxeChanged(pVCpu, RT_BOOL(pVCpu->cpum.s.Guest.msrEFER & MSR_K6_EFER_NXE));
2981
2982 /* During init. this is done in CPUMR3InitCompleted(). */
2983 if (fSupportsLongMode)
2984 pVCpu->cpum.s.fUseFlags |= CPUM_USE_SUPPORTS_LONGMODE;
2985
2986 /* Recalc the CPUM_USE_DEBUG_REGS_HYPER value. */
2987 CPUMRecalcHyperDRx(pVCpu, UINT8_MAX);
2988 }
2989 return VINF_SUCCESS;
2990}
2991
2992
2993/**
2994 * Checks if the CPUM state restore is still pending.
2995 *
2996 * @returns true / false.
2997 * @param pVM The cross context VM structure.
2998 */
2999VMMDECL(bool) CPUMR3IsStateRestorePending(PVM pVM)
3000{
3001 return pVM->cpum.s.fPendingRestore;
3002}
3003
3004
3005/**
3006 * Formats the EFLAGS value into mnemonics.
3007 *
3008 * @param pszEFlags Where to write the mnemonics. (Assumes sufficient buffer space.)
3009 * @param efl The EFLAGS value.
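 * E.g. efl=0x246 (IF, ZF and PF set) formats as "nv up ei pl zr na pe nc".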
3010 */
3011static void cpumR3InfoFormatFlags(char *pszEFlags, uint32_t efl)
3012{
3013 /*
3014 * Format the flags.
3015 */
3016 static const struct
3017 {
3018 const char *pszSet; const char *pszClear; uint32_t fFlag;
3019 } s_aFlags[] =
3020 {
3021 { "vip",NULL, X86_EFL_VIP },
3022 { "vif",NULL, X86_EFL_VIF },
3023 { "ac", NULL, X86_EFL_AC },
3024 { "vm", NULL, X86_EFL_VM },
3025 { "rf", NULL, X86_EFL_RF },
3026 { "nt", NULL, X86_EFL_NT },
3027 { "ov", "nv", X86_EFL_OF },
3028 { "dn", "up", X86_EFL_DF },
3029 { "ei", "di", X86_EFL_IF },
3030 { "tf", NULL, X86_EFL_TF },
3031 { "nt", "pl", X86_EFL_SF },
3032 { "nz", "zr", X86_EFL_ZF },
3033 { "ac", "na", X86_EFL_AF },
3034 { "po", "pe", X86_EFL_PF },
3035 { "cy", "nc", X86_EFL_CF },
3036 };
3037 char *psz = pszEFlags;
3038 for (unsigned i = 0; i < RT_ELEMENTS(s_aFlags); i++)
3039 {
3040 const char *pszAdd = s_aFlags[i].fFlag & efl ? s_aFlags[i].pszSet : s_aFlags[i].pszClear;
3041 if (pszAdd)
3042 {
3043 strcpy(psz, pszAdd);
3044 psz += strlen(pszAdd);
3045 *psz++ = ' ';
3046 }
3047 }
3048 psz[-1] = '\0';
3049}
3050
3051
3052/**
3053 * Formats a full register dump.
3054 *
3055 * @param pVM The cross context VM structure.
3056 * @param pCtx The context to format.
3057 * @param pCtxCore The context core to format.
3058 * @param pHlp Output functions.
3059 * @param enmType The dump type.
3060 * @param pszPrefix Register name prefix.
3061 */
3062static void cpumR3InfoOne(PVM pVM, PCPUMCTX pCtx, PCCPUMCTXCORE pCtxCore, PCDBGFINFOHLP pHlp, CPUMDUMPTYPE enmType,
3063 const char *pszPrefix)
3064{
3065 NOREF(pVM);
3066
3067 /*
3068 * Format the EFLAGS.
3069 */
3070 uint32_t efl = pCtxCore->eflags.u32;
3071 char szEFlags[80];
3072 cpumR3InfoFormatFlags(&szEFlags[0], efl);
3073
3074 /*
3075 * Format the registers.
3076 */
3077 switch (enmType)
3078 {
3079 case CPUMDUMPTYPE_TERSE:
3080 if (CPUMIsGuestIn64BitCodeEx(pCtx))
3081 pHlp->pfnPrintf(pHlp,
3082 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
3083 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
3084 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
3085 "%sr14=%016RX64 %sr15=%016RX64\n"
3086 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
3087 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %seflags=%08x\n",
3088 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
3089 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
3090 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
3091 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3092 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
3093 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, efl);
3094 else
3095 pHlp->pfnPrintf(pHlp,
3096 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
3097 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
3098 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %seflags=%08x\n",
3099 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
3100 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3101 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
3102 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, efl);
3103 break;
3104
3105 case CPUMDUMPTYPE_DEFAULT:
3106 if (CPUMIsGuestIn64BitCodeEx(pCtx))
3107 pHlp->pfnPrintf(pHlp,
3108 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
3109 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
3110 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
3111 "%sr14=%016RX64 %sr15=%016RX64\n"
3112 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
3113 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %str=%04x %seflags=%08x\n"
3114 "%scr0=%08RX64 %scr2=%08RX64 %scr3=%08RX64 %scr4=%08RX64 %sgdtr=%016RX64:%04x %sldtr=%04x\n"
3115 ,
3116 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
3117 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
3118 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
3119 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3120 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
3121 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, pCtx->tr.Sel, pszPrefix, efl,
3122 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
3123 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->ldtr.Sel);
3124 else
3125 pHlp->pfnPrintf(pHlp,
3126 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
3127 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
3128 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %str=%04x %seflags=%08x\n"
3129 "%scr0=%08RX64 %scr2=%08RX64 %scr3=%08RX64 %scr4=%08RX64 %sgdtr=%08RX64:%04x %sldtr=%04x\n"
3130 ,
3131 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
3132 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3133 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
3134 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, pCtx->tr.Sel, pszPrefix, efl,
3135 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
3136 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->ldtr.Sel);
3137 break;
3138
3139 case CPUMDUMPTYPE_VERBOSE:
3140 if (CPUMIsGuestIn64BitCodeEx(pCtx))
3141 pHlp->pfnPrintf(pHlp,
3142 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
3143 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
3144 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
3145 "%sr14=%016RX64 %sr15=%016RX64\n"
3146 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
3147 "%scs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3148 "%sds={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3149 "%ses={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3150 "%sfs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3151 "%sgs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3152 "%sss={%04x base=%016RX64 limit=%08x flags=%08x}\n"
3153 "%scr0=%016RX64 %scr2=%016RX64 %scr3=%016RX64 %scr4=%016RX64\n"
3154 "%sdr0=%016RX64 %sdr1=%016RX64 %sdr2=%016RX64 %sdr3=%016RX64\n"
3155 "%sdr4=%016RX64 %sdr5=%016RX64 %sdr6=%016RX64 %sdr7=%016RX64\n"
3156 "%sgdtr=%016RX64:%04x %sidtr=%016RX64:%04x %seflags=%08x\n"
3157 "%sldtr={%04x base=%08RX64 limit=%08x flags=%08x}\n"
3158 "%str ={%04x base=%08RX64 limit=%08x flags=%08x}\n"
3159 "%sSysEnter={cs=%04llx eip=%016RX64 esp=%016RX64}\n"
3160 ,
3161 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
3162 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
3163 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
3164 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3165 pszPrefix, pCtxCore->cs.Sel, pCtx->cs.u64Base, pCtx->cs.u32Limit, pCtx->cs.Attr.u,
3166 pszPrefix, pCtxCore->ds.Sel, pCtx->ds.u64Base, pCtx->ds.u32Limit, pCtx->ds.Attr.u,
3167 pszPrefix, pCtxCore->es.Sel, pCtx->es.u64Base, pCtx->es.u32Limit, pCtx->es.Attr.u,
3168 pszPrefix, pCtxCore->fs.Sel, pCtx->fs.u64Base, pCtx->fs.u32Limit, pCtx->fs.Attr.u,
3169 pszPrefix, pCtxCore->gs.Sel, pCtx->gs.u64Base, pCtx->gs.u32Limit, pCtx->gs.Attr.u,
3170 pszPrefix, pCtxCore->ss.Sel, pCtx->ss.u64Base, pCtx->ss.u32Limit, pCtx->ss.Attr.u,
3171 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
3172 pszPrefix, pCtx->dr[0], pszPrefix, pCtx->dr[1], pszPrefix, pCtx->dr[2], pszPrefix, pCtx->dr[3],
3173 pszPrefix, pCtx->dr[4], pszPrefix, pCtx->dr[5], pszPrefix, pCtx->dr[6], pszPrefix, pCtx->dr[7],
3174 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->idtr.pIdt, pCtx->idtr.cbIdt, pszPrefix, efl,
3175 pszPrefix, pCtx->ldtr.Sel, pCtx->ldtr.u64Base, pCtx->ldtr.u32Limit, pCtx->ldtr.Attr.u,
3176 pszPrefix, pCtx->tr.Sel, pCtx->tr.u64Base, pCtx->tr.u32Limit, pCtx->tr.Attr.u,
3177 pszPrefix, pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
3178 else
3179 pHlp->pfnPrintf(pHlp,
3180 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
3181 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
3182 "%scs={%04x base=%016RX64 limit=%08x flags=%08x} %sdr0=%08RX64 %sdr1=%08RX64\n"
3183 "%sds={%04x base=%016RX64 limit=%08x flags=%08x} %sdr2=%08RX64 %sdr3=%08RX64\n"
3184 "%ses={%04x base=%016RX64 limit=%08x flags=%08x} %sdr4=%08RX64 %sdr5=%08RX64\n"
3185 "%sfs={%04x base=%016RX64 limit=%08x flags=%08x} %sdr6=%08RX64 %sdr7=%08RX64\n"
3186 "%sgs={%04x base=%016RX64 limit=%08x flags=%08x} %scr0=%08RX64 %scr2=%08RX64\n"
3187 "%sss={%04x base=%016RX64 limit=%08x flags=%08x} %scr3=%08RX64 %scr4=%08RX64\n"
3188 "%sgdtr=%016RX64:%04x %sidtr=%016RX64:%04x %seflags=%08x\n"
3189 "%sldtr={%04x base=%08RX64 limit=%08x flags=%08x}\n"
3190 "%str ={%04x base=%08RX64 limit=%08x flags=%08x}\n"
3191 "%sSysEnter={cs=%04llx eip=%08llx esp=%08llx}\n"
3192 ,
3193 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
3194 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
3195 pszPrefix, pCtxCore->cs.Sel, pCtx->cs.u64Base, pCtx->cs.u32Limit, pCtx->cs.Attr.u, pszPrefix, pCtx->dr[0], pszPrefix, pCtx->dr[1],
3196 pszPrefix, pCtxCore->ds.Sel, pCtx->ds.u64Base, pCtx->ds.u32Limit, pCtx->ds.Attr.u, pszPrefix, pCtx->dr[2], pszPrefix, pCtx->dr[3],
3197 pszPrefix, pCtxCore->es.Sel, pCtx->es.u64Base, pCtx->es.u32Limit, pCtx->es.Attr.u, pszPrefix, pCtx->dr[4], pszPrefix, pCtx->dr[5],
3198 pszPrefix, pCtxCore->fs.Sel, pCtx->fs.u64Base, pCtx->fs.u32Limit, pCtx->fs.Attr.u, pszPrefix, pCtx->dr[6], pszPrefix, pCtx->dr[7],
3199 pszPrefix, pCtxCore->gs.Sel, pCtx->gs.u64Base, pCtx->gs.u32Limit, pCtx->gs.Attr.u, pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2,
3200 pszPrefix, pCtxCore->ss.Sel, pCtx->ss.u64Base, pCtx->ss.u32Limit, pCtx->ss.Attr.u, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
3201 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->idtr.pIdt, pCtx->idtr.cbIdt, pszPrefix, efl,
3202 pszPrefix, pCtx->ldtr.Sel, pCtx->ldtr.u64Base, pCtx->ldtr.u32Limit, pCtx->ldtr.Attr.u,
3203 pszPrefix, pCtx->tr.Sel, pCtx->tr.u64Base, pCtx->tr.u32Limit, pCtx->tr.Attr.u,
3204 pszPrefix, pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
3205
3206 pHlp->pfnPrintf(pHlp, "%sxcr=%016RX64 %sxcr1=%016RX64 %sxss=%016RX64 (fXStateMask=%016RX64)\n",
3207 pszPrefix, pCtx->aXcr[0], pszPrefix, pCtx->aXcr[1],
3208 pszPrefix, UINT64_C(0) /** @todo XSS */, pCtx->fXStateMask);
3209 {
3210 PX86FXSTATE pFpuCtx = &pCtx->XState.x87;
3211 pHlp->pfnPrintf(pHlp,
3212 "%sFCW=%04x %sFSW=%04x %sFTW=%04x %sFOP=%04x %sMXCSR=%08x %sMXCSR_MASK=%08x\n"
3213 "%sFPUIP=%08x %sCS=%04x %sRsrvd1=%04x %sFPUDP=%08x %sDS=%04x %sRsvrd2=%04x\n"
3214 ,
3215 pszPrefix, pFpuCtx->FCW, pszPrefix, pFpuCtx->FSW, pszPrefix, pFpuCtx->FTW, pszPrefix, pFpuCtx->FOP,
3216 pszPrefix, pFpuCtx->MXCSR, pszPrefix, pFpuCtx->MXCSR_MASK,
3217 pszPrefix, pFpuCtx->FPUIP, pszPrefix, pFpuCtx->CS, pszPrefix, pFpuCtx->Rsrvd1,
3218 pszPrefix, pFpuCtx->FPUDP, pszPrefix, pFpuCtx->DS, pszPrefix, pFpuCtx->Rsrvd2
3219 );
3220 /*
3221 * The FSAVE style memory image contains ST(0)-ST(7) at increasing addresses,
3222 * not (FP)R0-7 as Intel SDM suggests.
3223 */
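/* FSW bits 11:13 hold TOP (the top-of-stack index); ST(i) thus lives in physical register (TOP + i) & 7. */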
3224 unsigned iShift = (pFpuCtx->FSW >> 11) & 7;
3225 for (unsigned iST = 0; iST < RT_ELEMENTS(pFpuCtx->aRegs); iST++)
3226 {
3227 unsigned iFPR = (iST + iShift) % RT_ELEMENTS(pFpuCtx->aRegs);
3228 unsigned uTag = (pFpuCtx->FTW >> (2 * iFPR)) & 3;
3229 char chSign = pFpuCtx->aRegs[iST].au16[4] & 0x8000 ? '-' : '+';
3230 unsigned iInteger = (unsigned)(pFpuCtx->aRegs[iST].au64[0] >> 63);
3231 uint64_t u64Fraction = pFpuCtx->aRegs[iST].au64[0] & UINT64_C(0x7fffffffffffffff);
3232 int iExponent = pFpuCtx->aRegs[iST].au16[4] & 0x7fff;
3233 iExponent -= 16383; /* subtract bias */
3234 /** @todo This isn't entirely correct and needs more work! */
3235 pHlp->pfnPrintf(pHlp,
3236 "%sST(%u)=%sFPR%u={%04RX16'%08RX32'%08RX32} t%d %c%u.%022llu * 2 ^ %d (*)",
3237 pszPrefix, iST, pszPrefix, iFPR,
3238 pFpuCtx->aRegs[iST].au16[4], pFpuCtx->aRegs[iST].au32[1], pFpuCtx->aRegs[iST].au32[0],
3239 uTag, chSign, iInteger, u64Fraction, iExponent);
3240 if (pFpuCtx->aRegs[iST].au16[5] || pFpuCtx->aRegs[iST].au16[6] || pFpuCtx->aRegs[iST].au16[7])
3241 pHlp->pfnPrintf(pHlp, " res={%04RX16,%04RX16,%04RX16}\n",
3242 pFpuCtx->aRegs[iST].au16[5], pFpuCtx->aRegs[iST].au16[6], pFpuCtx->aRegs[iST].au16[7]);
3243 else
3244 pHlp->pfnPrintf(pHlp, "\n");
3245 }
3246
3247 /* XMM/YMM/ZMM registers. */
3248 if (pCtx->fXStateMask & XSAVE_C_YMM)
3249 {
3250 PCX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_YMM_BIT, PCX86XSAVEYMMHI);
3251 if (!(pCtx->fXStateMask & XSAVE_C_ZMM_HI256))
3252 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
3253 pHlp->pfnPrintf(pHlp, "%sYMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
3254 pszPrefix, i, i < 10 ? " " : "",
3255 pYmmHiCtx->aYmmHi[i].au32[3],
3256 pYmmHiCtx->aYmmHi[i].au32[2],
3257 pYmmHiCtx->aYmmHi[i].au32[1],
3258 pYmmHiCtx->aYmmHi[i].au32[0],
3259 pFpuCtx->aXMM[i].au32[3],
3260 pFpuCtx->aXMM[i].au32[2],
3261 pFpuCtx->aXMM[i].au32[1],
3262 pFpuCtx->aXMM[i].au32[0]);
3263 else
3264 {
3265 PCX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_ZMM_HI256_BIT, PCX86XSAVEZMMHI256);
3266 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
3267 pHlp->pfnPrintf(pHlp,
3268 "%sZMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32''%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
3269 pszPrefix, i, i < 10 ? " " : "",
3270 pZmmHi256->aHi256Regs[i].au32[7],
3271 pZmmHi256->aHi256Regs[i].au32[6],
3272 pZmmHi256->aHi256Regs[i].au32[5],
3273 pZmmHi256->aHi256Regs[i].au32[4],
3274 pZmmHi256->aHi256Regs[i].au32[3],
3275 pZmmHi256->aHi256Regs[i].au32[2],
3276 pZmmHi256->aHi256Regs[i].au32[1],
3277 pZmmHi256->aHi256Regs[i].au32[0],
3278 pYmmHiCtx->aYmmHi[i].au32[3],
3279 pYmmHiCtx->aYmmHi[i].au32[2],
3280 pYmmHiCtx->aYmmHi[i].au32[1],
3281 pYmmHiCtx->aYmmHi[i].au32[0],
3282 pFpuCtx->aXMM[i].au32[3],
3283 pFpuCtx->aXMM[i].au32[2],
3284 pFpuCtx->aXMM[i].au32[1],
3285 pFpuCtx->aXMM[i].au32[0]);
3286
3287 PCX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_ZMM_16HI_BIT, PCX86XSAVEZMM16HI);
3288 for (unsigned i = 0; i < RT_ELEMENTS(pZmm16Hi->aRegs); i++)
3289 pHlp->pfnPrintf(pHlp,
3290 "%sZMM%u=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32''%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
3291 pszPrefix, i + 16,
3292 pZmm16Hi->aRegs[i].au32[15],
3293 pZmm16Hi->aRegs[i].au32[14],
3294 pZmm16Hi->aRegs[i].au32[13],
3295 pZmm16Hi->aRegs[i].au32[12],
3296 pZmm16Hi->aRegs[i].au32[11],
3297 pZmm16Hi->aRegs[i].au32[10],
3298 pZmm16Hi->aRegs[i].au32[9],
3299 pZmm16Hi->aRegs[i].au32[8],
3300 pZmm16Hi->aRegs[i].au32[7],
3301 pZmm16Hi->aRegs[i].au32[6],
3302 pZmm16Hi->aRegs[i].au32[5],
3303 pZmm16Hi->aRegs[i].au32[4],
3304 pZmm16Hi->aRegs[i].au32[3],
3305 pZmm16Hi->aRegs[i].au32[2],
3306 pZmm16Hi->aRegs[i].au32[1],
3307 pZmm16Hi->aRegs[i].au32[0]);
3308 }
3309 }
3310 else
3311 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
3312 pHlp->pfnPrintf(pHlp,
3313 i & 1
3314 ? "%sXMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32\n"
3315 : "%sXMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32 ",
3316 pszPrefix, i, i < 10 ? " " : "",
3317 pFpuCtx->aXMM[i].au32[3],
3318 pFpuCtx->aXMM[i].au32[2],
3319 pFpuCtx->aXMM[i].au32[1],
3320 pFpuCtx->aXMM[i].au32[0]);
3321
3322 if (pCtx->fXStateMask & XSAVE_C_OPMASK)
3323 {
3324 PCX86XSAVEOPMASK pOpMask = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_OPMASK_BIT, PCX86XSAVEOPMASK);
3325 for (unsigned i = 0; i < RT_ELEMENTS(pOpMask->aKRegs); i += 4)
3326 pHlp->pfnPrintf(pHlp, "%sK%u=%016RX64 %sK%u=%016RX64 %sK%u=%016RX64 %sK%u=%016RX64\n",
3327 pszPrefix, i + 0, pOpMask->aKRegs[i + 0],
3328 pszPrefix, i + 1, pOpMask->aKRegs[i + 1],
3329 pszPrefix, i + 2, pOpMask->aKRegs[i + 2],
3330 pszPrefix, i + 3, pOpMask->aKRegs[i + 3]);
3331 }
3332
3333 if (pCtx->fXStateMask & XSAVE_C_BNDREGS)
3334 {
3335 PCX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_BNDREGS_BIT, PCX86XSAVEBNDREGS);
3336 for (unsigned i = 0; i < RT_ELEMENTS(pBndRegs->aRegs); i += 2)
3337 pHlp->pfnPrintf(pHlp, "%sBNDREG%u=%016RX64/%016RX64 %sBNDREG%u=%016RX64/%016RX64\n",
3338 pszPrefix, i, pBndRegs->aRegs[i].uLowerBound, pBndRegs->aRegs[i].uUpperBound,
3339 pszPrefix, i + 1, pBndRegs->aRegs[i + 1].uLowerBound, pBndRegs->aRegs[i + 1].uUpperBound);
3340 }
3341
3342 if (pCtx->fXStateMask & XSAVE_C_BNDCSR)
3343 {
3344 PCX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_BNDCSR_BIT, PCX86XSAVEBNDCFG);
3345 pHlp->pfnPrintf(pHlp, "%sBNDCFG.CONFIG=%016RX64 %sBNDCFG.STATUS=%016RX64\n",
3346 pszPrefix, pBndCfg->fConfig, pszPrefix, pBndCfg->fStatus);
3347 }
3348
3349 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->au32RsrvdRest); i++)
3350 if (pFpuCtx->au32RsrvdRest[i])
3351 pHlp->pfnPrintf(pHlp, "%sRsrvdRest[%u]=%RX32 (offset=%#x)\n",
3352 pszPrefix, i, pFpuCtx->au32RsrvdRest[i], RT_UOFFSETOF_DYN(X86FXSTATE, au32RsrvdRest[i]) );
3353 }
3354
3355 pHlp->pfnPrintf(pHlp,
3356 "%sEFER =%016RX64\n"
3357 "%sPAT =%016RX64\n"
3358 "%sSTAR =%016RX64\n"
3359 "%sCSTAR =%016RX64\n"
3360 "%sLSTAR =%016RX64\n"
3361 "%sSFMASK =%016RX64\n"
3362 "%sKERNELGSBASE =%016RX64\n",
3363 pszPrefix, pCtx->msrEFER,
3364 pszPrefix, pCtx->msrPAT,
3365 pszPrefix, pCtx->msrSTAR,
3366 pszPrefix, pCtx->msrCSTAR,
3367 pszPrefix, pCtx->msrLSTAR,
3368 pszPrefix, pCtx->msrSFMASK,
3369 pszPrefix, pCtx->msrKERNELGSBASE);
3370
3371 if (CPUMIsGuestInPAEModeEx(pCtx))
3372 for (unsigned i = 0; i < RT_ELEMENTS(pCtx->aPaePdpes); i++)
3373 pHlp->pfnPrintf(pHlp, "%sPAE PDPTE %u =%016RX64\n", pszPrefix, i, pCtx->aPaePdpes[i]);
3374 break;
3375 }
3376}
3377
3378
3379/**
3380 * Displays all CPU states and any other CPUM info.
3381 *
3382 * @param pVM The cross context VM structure.
3383 * @param pHlp The info helper functions.
3384 * @param pszArgs Arguments, ignored.
3385 */
3386static DECLCALLBACK(void) cpumR3InfoAll(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3387{
3388 cpumR3InfoGuest(pVM, pHlp, pszArgs);
3389 cpumR3InfoGuestInstr(pVM, pHlp, pszArgs);
3390 cpumR3InfoGuestHwvirt(pVM, pHlp, pszArgs);
3391 cpumR3InfoHyper(pVM, pHlp, pszArgs);
3392 cpumR3InfoHost(pVM, pHlp, pszArgs);
3393}
3394
3395
3396/**
3397 * Parses the info argument.
3398 *
3399 * The argument starts with 'verbose', 'terse' or 'default' and then
3400 * continues with the comment string.
3401 *
3402 * @param pszArgs The pointer to the argument string.
3403 * @param penmType Where to store the dump type request.
3404 * @param ppszComment Where to store the pointer to the comment string.
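 * E.g. "verbose vcpu comments" yields CPUMDUMPTYPE_VERBOSE and the comment "vcpu comments".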
3405 */
3406static void cpumR3InfoParseArg(const char *pszArgs, CPUMDUMPTYPE *penmType, const char **ppszComment)
3407{
3408 if (!pszArgs)
3409 {
3410 *penmType = CPUMDUMPTYPE_DEFAULT;
3411 *ppszComment = "";
3412 }
3413 else
3414 {
3415 if (!strncmp(pszArgs, RT_STR_TUPLE("verbose")))
3416 {
3417 pszArgs += 7;
3418 *penmType = CPUMDUMPTYPE_VERBOSE;
3419 }
3420 else if (!strncmp(pszArgs, RT_STR_TUPLE("terse")))
3421 {
3422 pszArgs += 5;
3423 *penmType = CPUMDUMPTYPE_TERSE;
3424 }
3425 else if (!strncmp(pszArgs, RT_STR_TUPLE("default")))
3426 {
3427 pszArgs += 7;
3428 *penmType = CPUMDUMPTYPE_DEFAULT;
3429 }
3430 else
3431 *penmType = CPUMDUMPTYPE_DEFAULT;
3432 *ppszComment = RTStrStripL(pszArgs);
3433 }
3434}
3435
3436
3437/**
3438 * Displays the guest CPU state.
3439 *
3440 * @param pVM The cross context VM structure.
3441 * @param pHlp The info helper functions.
3442 * @param pszArgs Arguments.
3443 */
3444static DECLCALLBACK(void) cpumR3InfoGuest(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3445{
3446 CPUMDUMPTYPE enmType;
3447 const char *pszComment;
3448 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
3449
3450 PVMCPU pVCpu = VMMGetCpu(pVM);
3451 if (!pVCpu)
3452 pVCpu = pVM->apCpusR3[0];
3453
3454 pHlp->pfnPrintf(pHlp, "Guest CPUM (VCPU %d) state: %s\n", pVCpu->idCpu, pszComment);
3455
3456 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
3457 cpumR3InfoOne(pVM, pCtx, CPUMCTX2CORE(pCtx), pHlp, enmType, "");
3458}
3459
3460
3461/**
3462 * Displays an SVM VMCB control area.
3463 *
3464 * @param pHlp The info helper functions.
3465 * @param pVmcbCtrl Pointer to a SVM VMCB controls area.
3466 * @param pszPrefix Caller specified string prefix.
3467 */
3468static void cpumR3InfoSvmVmcbCtrl(PCDBGFINFOHLP pHlp, PCSVMVMCBCTRL pVmcbCtrl, const char *pszPrefix)
3469{
3470 AssertReturnVoid(pHlp);
3471 AssertReturnVoid(pVmcbCtrl);
3472
3473 pHlp->pfnPrintf(pHlp, "%sCRX-read intercepts = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptRdCRx);
3474 pHlp->pfnPrintf(pHlp, "%sCRX-write intercepts = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptWrCRx);
3475 pHlp->pfnPrintf(pHlp, "%sDRX-read intercepts = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptRdDRx);
3476 pHlp->pfnPrintf(pHlp, "%sDRX-write intercepts = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptWrDRx);
3477 pHlp->pfnPrintf(pHlp, "%sException intercepts = %#RX32\n", pszPrefix, pVmcbCtrl->u32InterceptXcpt);
3478 pHlp->pfnPrintf(pHlp, "%sControl intercepts = %#RX64\n", pszPrefix, pVmcbCtrl->u64InterceptCtrl);
3479 pHlp->pfnPrintf(pHlp, "%sPause-filter threshold = %#RX16\n", pszPrefix, pVmcbCtrl->u16PauseFilterThreshold);
3480 pHlp->pfnPrintf(pHlp, "%sPause-filter count = %#RX16\n", pszPrefix, pVmcbCtrl->u16PauseFilterCount);
3481 pHlp->pfnPrintf(pHlp, "%sIOPM bitmap physaddr = %#RX64\n", pszPrefix, pVmcbCtrl->u64IOPMPhysAddr);
3482 pHlp->pfnPrintf(pHlp, "%sMSRPM bitmap physaddr = %#RX64\n", pszPrefix, pVmcbCtrl->u64MSRPMPhysAddr);
3483 pHlp->pfnPrintf(pHlp, "%sTSC offset = %#RX64\n", pszPrefix, pVmcbCtrl->u64TSCOffset);
3484 pHlp->pfnPrintf(pHlp, "%sTLB Control\n", pszPrefix);
3485 pHlp->pfnPrintf(pHlp, " %sASID = %#RX32\n", pszPrefix, pVmcbCtrl->TLBCtrl.n.u32ASID);
3486 pHlp->pfnPrintf(pHlp, " %sTLB-flush type = %u\n", pszPrefix, pVmcbCtrl->TLBCtrl.n.u8TLBFlush);
3487 pHlp->pfnPrintf(pHlp, "%sInterrupt Control\n", pszPrefix);
3488 pHlp->pfnPrintf(pHlp, " %sVTPR = %#RX8 (%u)\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u8VTPR, pVmcbCtrl->IntCtrl.n.u8VTPR);
3489 pHlp->pfnPrintf(pHlp, " %sVIRQ (Pending) = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VIrqPending);
3490 pHlp->pfnPrintf(pHlp, " %sVINTR vector = %#RX8\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u8VIntrVector);
3491 pHlp->pfnPrintf(pHlp, " %sVGIF = %u\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VGif);
3492 pHlp->pfnPrintf(pHlp, " %sVINTR priority = %#RX8\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u4VIntrPrio);
3493 pHlp->pfnPrintf(pHlp, " %sIgnore TPR = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1IgnoreTPR);
3494 pHlp->pfnPrintf(pHlp, " %sVINTR masking = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VIntrMasking);
3495 pHlp->pfnPrintf(pHlp, " %sVGIF enable = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VGifEnable);
3496 pHlp->pfnPrintf(pHlp, " %sAVIC enable = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1AvicEnable);
3497 pHlp->pfnPrintf(pHlp, "%sInterrupt Shadow\n", pszPrefix);
3498 pHlp->pfnPrintf(pHlp, " %sInterrupt shadow = %RTbool\n", pszPrefix, pVmcbCtrl->IntShadow.n.u1IntShadow);
3499 pHlp->pfnPrintf(pHlp, " %sGuest-interrupt Mask = %RTbool\n", pszPrefix, pVmcbCtrl->IntShadow.n.u1GuestIntMask);
3500 pHlp->pfnPrintf(pHlp, "%sExit Code = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitCode);
3501 pHlp->pfnPrintf(pHlp, "%sEXITINFO1 = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitInfo1);
3502 pHlp->pfnPrintf(pHlp, "%sEXITINFO2 = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitInfo2);
3503 pHlp->pfnPrintf(pHlp, "%sExit Interrupt Info\n", pszPrefix);
3504 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u1Valid);
3505 pHlp->pfnPrintf(pHlp, " %sVector = %#RX8 (%u)\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u8Vector, pVmcbCtrl->ExitIntInfo.n.u8Vector);
3506 pHlp->pfnPrintf(pHlp, " %sType = %u\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u3Type);
3507 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u1ErrorCodeValid);
3508 pHlp->pfnPrintf(pHlp, " %sError-code = %#RX32\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u32ErrorCode);
3509 pHlp->pfnPrintf(pHlp, "%sNested paging and SEV\n", pszPrefix);
3510 pHlp->pfnPrintf(pHlp, " %sNested paging = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1NestedPaging);
3511 pHlp->pfnPrintf(pHlp, " %sSEV (Secure Encrypted VM) = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1Sev);
3512 pHlp->pfnPrintf(pHlp, " %sSEV-ES (Encrypted State) = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1SevEs);
3513 pHlp->pfnPrintf(pHlp, "%sEvent Inject\n", pszPrefix);
3514 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, pVmcbCtrl->EventInject.n.u1Valid);
3515 pHlp->pfnPrintf(pHlp, " %sVector = %#RX32 (%u)\n", pszPrefix, pVmcbCtrl->EventInject.n.u8Vector, pVmcbCtrl->EventInject.n.u8Vector);
3516 pHlp->pfnPrintf(pHlp, " %sType = %u\n", pszPrefix, pVmcbCtrl->EventInject.n.u3Type);
3517 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, pVmcbCtrl->EventInject.n.u1ErrorCodeValid);
3518 pHlp->pfnPrintf(pHlp, " %sError-code = %#RX32\n", pszPrefix, pVmcbCtrl->EventInject.n.u32ErrorCode);
3519 pHlp->pfnPrintf(pHlp, "%sNested-paging CR3 = %#RX64\n", pszPrefix, pVmcbCtrl->u64NestedPagingCR3);
3520 pHlp->pfnPrintf(pHlp, "%sLBR Virtualization\n", pszPrefix);
3521 pHlp->pfnPrintf(pHlp, " %sLBR virt = %RTbool\n", pszPrefix, pVmcbCtrl->LbrVirt.n.u1LbrVirt);
3522 pHlp->pfnPrintf(pHlp, " %sVirt. VMSAVE/VMLOAD = %RTbool\n", pszPrefix, pVmcbCtrl->LbrVirt.n.u1VirtVmsaveVmload);
3523 pHlp->pfnPrintf(pHlp, "%sVMCB Clean Bits = %#RX32\n", pszPrefix, pVmcbCtrl->u32VmcbCleanBits);
3524 pHlp->pfnPrintf(pHlp, "%sNext-RIP = %#RX64\n", pszPrefix, pVmcbCtrl->u64NextRIP);
3525 pHlp->pfnPrintf(pHlp, "%sInstruction bytes fetched = %u\n", pszPrefix, pVmcbCtrl->cbInstrFetched);
3526 pHlp->pfnPrintf(pHlp, "%sInstruction bytes = %.*Rhxs\n", pszPrefix, sizeof(pVmcbCtrl->abInstr), pVmcbCtrl->abInstr);
3527 pHlp->pfnPrintf(pHlp, "%sAVIC\n", pszPrefix);
3528 pHlp->pfnPrintf(pHlp, " %sBar addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicBar.n.u40Addr);
3529 pHlp->pfnPrintf(pHlp, " %sBacking page addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicBackingPagePtr.n.u40Addr);
3530 pHlp->pfnPrintf(pHlp, " %sLogical table addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicLogicalTablePtr.n.u40Addr);
3531 pHlp->pfnPrintf(pHlp, " %sPhysical table addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicPhysicalTablePtr.n.u40Addr);
3532 pHlp->pfnPrintf(pHlp, " %sLast guest core Id = %u\n", pszPrefix, pVmcbCtrl->AvicPhysicalTablePtr.n.u8LastGuestCoreId);
3533}
3534
3535
3536/**
3537 * Helper for dumping the SVM VMCB selector registers.
3538 *
3539 * @param pHlp The info helper functions.
3540 * @param pSel Pointer to the SVM selector register.
3541 * @param pszName Name of the selector.
3542 * @param pszPrefix Caller specified string prefix.
3543 */
3544DECLINLINE(void) cpumR3InfoSvmVmcbSelReg(PCDBGFINFOHLP pHlp, PCSVMSELREG pSel, const char *pszName, const char *pszPrefix)
3545{
3546 /* The string width of 4 used below is to handle 'LDTR'. Change later if longer register names are used. */
3547 pHlp->pfnPrintf(pHlp, "%s%-4s = {%04x base=%016RX64 limit=%08x flags=%04x}\n", pszPrefix,
3548 pszName, pSel->u16Sel, pSel->u64Base, pSel->u32Limit, pSel->u16Attr);
3549}
3550
3551
3552/**
3553 * Helper for dumping the SVM VMCB GDTR/IDTR registers.
3554 *
3555 * @param pHlp The info helper functions.
3556 * @param pXdtr Pointer to the descriptor table register.
3557 * @param pszName Name of the descriptor table register.
3558 * @param pszPrefix Caller specified string prefix.
3559 */
3560DECLINLINE(void) cpumR3InfoSvmVmcbXdtr(PCDBGFINFOHLP pHlp, PCSVMXDTR pXdtr, const char *pszName, const char *pszPrefix)
3561{
3562 /* The string width of 4 used below is to cover 'GDTR', 'IDTR'. Change later if longer register names are used. */
3563 pHlp->pfnPrintf(pHlp, "%s%-4s = %016RX64:%04x\n", pszPrefix, pszName, pXdtr->u64Base, pXdtr->u32Limit);
3564}
3565
3566
3567/**
3568 * Displays an SVM VMCB state-save area.
3569 *
3570 * @param pHlp The info helper functions.
3571 * @param pVmcbStateSave Pointer to a SVM VMCB controls area.
3572 * @param pszPrefix Caller specified string prefix.
3573 */
3574static void cpumR3InfoSvmVmcbStateSave(PCDBGFINFOHLP pHlp, PCSVMVMCBSTATESAVE pVmcbStateSave, const char *pszPrefix)
3575{
3576 AssertReturnVoid(pHlp);
3577 AssertReturnVoid(pVmcbStateSave);
3578
3579 char szEFlags[80];
3580 cpumR3InfoFormatFlags(&szEFlags[0], pVmcbStateSave->u64RFlags);
3581
3582 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->CS, "CS", pszPrefix);
3583 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->SS, "SS", pszPrefix);
3584 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->ES, "ES", pszPrefix);
3585 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->DS, "DS", pszPrefix);
3586 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->FS, "FS", pszPrefix);
3587 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->GS, "GS", pszPrefix);
3588 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->LDTR, "LDTR", pszPrefix);
3589 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->TR, "TR", pszPrefix);
3590 cpumR3InfoSvmVmcbXdtr(pHlp, &pVmcbStateSave->GDTR, "GDTR", pszPrefix);
3591 cpumR3InfoSvmVmcbXdtr(pHlp, &pVmcbStateSave->IDTR, "IDTR", pszPrefix);
3592 pHlp->pfnPrintf(pHlp, "%sCPL = %u\n", pszPrefix, pVmcbStateSave->u8CPL);
3593 pHlp->pfnPrintf(pHlp, "%sEFER = %#RX64\n", pszPrefix, pVmcbStateSave->u64EFER);
3594 pHlp->pfnPrintf(pHlp, "%sCR4 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR4);
3595 pHlp->pfnPrintf(pHlp, "%sCR3 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR3);
3596 pHlp->pfnPrintf(pHlp, "%sCR0 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR0);
3597 pHlp->pfnPrintf(pHlp, "%sDR7 = %#RX64\n", pszPrefix, pVmcbStateSave->u64DR7);
3598 pHlp->pfnPrintf(pHlp, "%sDR6 = %#RX64\n", pszPrefix, pVmcbStateSave->u64DR6);
3599 pHlp->pfnPrintf(pHlp, "%sRFLAGS = %#RX64 %31s\n", pszPrefix, pVmcbStateSave->u64RFlags, szEFlags);
3600 pHlp->pfnPrintf(pHlp, "%sRIP = %#RX64\n", pszPrefix, pVmcbStateSave->u64RIP);
3601 pHlp->pfnPrintf(pHlp, "%sRSP = %#RX64\n", pszPrefix, pVmcbStateSave->u64RSP);
3602 pHlp->pfnPrintf(pHlp, "%sRAX = %#RX64\n", pszPrefix, pVmcbStateSave->u64RAX);
3603 pHlp->pfnPrintf(pHlp, "%sSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64STAR);
3604 pHlp->pfnPrintf(pHlp, "%sLSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64LSTAR);
3605 pHlp->pfnPrintf(pHlp, "%sCSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64CSTAR);
3606 pHlp->pfnPrintf(pHlp, "%sSFMASK = %#RX64\n", pszPrefix, pVmcbStateSave->u64SFMASK);
3607 pHlp->pfnPrintf(pHlp, "%sKERNELGSBASE = %#RX64\n", pszPrefix, pVmcbStateSave->u64KernelGSBase);
3608 pHlp->pfnPrintf(pHlp, "%sSysEnter CS = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterCS);
3609 pHlp->pfnPrintf(pHlp, "%sSysEnter EIP = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterEIP);
3610 pHlp->pfnPrintf(pHlp, "%sSysEnter ESP = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterESP);
3611 pHlp->pfnPrintf(pHlp, "%sCR2 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR2);
3612 pHlp->pfnPrintf(pHlp, "%sPAT = %#RX64\n", pszPrefix, pVmcbStateSave->u64PAT);
3613 pHlp->pfnPrintf(pHlp, "%sDBGCTL = %#RX64\n", pszPrefix, pVmcbStateSave->u64DBGCTL);
3614 pHlp->pfnPrintf(pHlp, "%sBR_FROM = %#RX64\n", pszPrefix, pVmcbStateSave->u64BR_FROM);
3615 pHlp->pfnPrintf(pHlp, "%sBR_TO = %#RX64\n", pszPrefix, pVmcbStateSave->u64BR_TO);
3616 pHlp->pfnPrintf(pHlp, "%sLASTXCPT_FROM = %#RX64\n", pszPrefix, pVmcbStateSave->u64LASTEXCPFROM);
3617 pHlp->pfnPrintf(pHlp, "%sLASTXCPT_TO = %#RX64\n", pszPrefix, pVmcbStateSave->u64LASTEXCPTO);
3618}
3619
3620
3621/**
3622 * Displays a virtual-VMCS.
3623 *
3624 * @param pVCpu The cross context virtual CPU structure.
3625 * @param pHlp The info helper functions.
3626 * @param pVmcs Pointer to a virtual VMCS.
3627 * @param pszPrefix Caller specified string prefix.
3628 */
3629static void cpumR3InfoVmxVmcs(PVMCPU pVCpu, PCDBGFINFOHLP pHlp, PCVMXVVMCS pVmcs, const char *pszPrefix)
3630{
3631 AssertReturnVoid(pHlp);
3632 AssertReturnVoid(pVmcs);
3633
3634 /* The string width of -4 is used in the macros below to cover 'LDTR', 'GDTR' and 'IDTR'. */
3635#define CPUMVMX_DUMP_HOST_XDTR(a_pHlp, a_pVmcs, a_Seg, a_SegName, a_pszPrefix) \
3636 do { \
3637 (a_pHlp)->pfnPrintf((a_pHlp), " %s%-4s = {base=%016RX64}\n", \
3638 (a_pszPrefix), (a_SegName), (a_pVmcs)->u64Host##a_Seg##Base.u); \
3639 } while (0)
3640
3641#define CPUMVMX_DUMP_HOST_FS_GS_TR(a_pHlp, a_pVmcs, a_Seg, a_SegName, a_pszPrefix) \
3642 do { \
3643 (a_pHlp)->pfnPrintf((a_pHlp), " %s%-4s = {%04x base=%016RX64}\n", \
3644 (a_pszPrefix), (a_SegName), (a_pVmcs)->Host##a_Seg, (a_pVmcs)->u64Host##a_Seg##Base.u); \
3645 } while (0)
3646
3647#define CPUMVMX_DUMP_GUEST_SEGREG(a_pHlp, a_pVmcs, a_Seg, a_SegName, a_pszPrefix) \
3648 do { \
3649 (a_pHlp)->pfnPrintf((a_pHlp), " %s%-4s = {%04x base=%016RX64 limit=%08x flags=%04x}\n", \
3650 (a_pszPrefix), (a_SegName), (a_pVmcs)->Guest##a_Seg, (a_pVmcs)->u64Guest##a_Seg##Base.u, \
3651 (a_pVmcs)->u32Guest##a_Seg##Limit, (a_pVmcs)->u32Guest##a_Seg##Attr); \
3652 } while (0)
3653
3654#define CPUMVMX_DUMP_GUEST_XDTR(a_pHlp, a_pVmcs, a_Seg, a_SegName, a_pszPrefix) \
3655 do { \
3656 (a_pHlp)->pfnPrintf((a_pHlp), " %s%-4s = {base=%016RX64 limit=%08x}\n", \
3657 (a_pszPrefix), (a_SegName), (a_pVmcs)->u64Guest##a_Seg##Base.u, (a_pVmcs)->u32Guest##a_Seg##Limit); \
3658 } while (0)
3659
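    /* For reference, the CS invocation of CPUMVMX_DUMP_GUEST_SEGREG further down
     * expands (after token pasting) to the equivalent of:
     *
     *      pHlp->pfnPrintf(pHlp, "  %s%-4s = {%04x base=%016RX64 limit=%08x flags=%04x}\n",
     *                      pszPrefix, "CS", pVmcs->GuestCs, pVmcs->u64GuestCsBase.u,
     *                      pVmcs->u32GuestCsLimit, pVmcs->u32GuestCsAttr);
     */
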
3660 /* Header. */
3661 {
3662 pHlp->pfnPrintf(pHlp, "%sHeader:\n", pszPrefix);
3663 pHlp->pfnPrintf(pHlp, " %sVMCS revision id = %#RX32\n", pszPrefix, pVmcs->u32VmcsRevId);
3664 pHlp->pfnPrintf(pHlp, " %sVMX-abort id = %#RX32 (%s)\n", pszPrefix, pVmcs->enmVmxAbort, VMXGetAbortDesc(pVmcs->enmVmxAbort));
3665 pHlp->pfnPrintf(pHlp, " %sVMCS state = %#x (%s)\n", pszPrefix, pVmcs->fVmcsState, VMXGetVmcsStateDesc(pVmcs->fVmcsState));
3666 }
3667
3668 /* Control fields. */
3669 {
3670 /* 16-bit. */
3671 pHlp->pfnPrintf(pHlp, "%sControl:\n", pszPrefix);
3672 pHlp->pfnPrintf(pHlp, " %sVPID = %#RX16\n", pszPrefix, pVmcs->u16Vpid);
3673 pHlp->pfnPrintf(pHlp, " %sPosted intr notify vector = %#RX16\n", pszPrefix, pVmcs->u16PostIntNotifyVector);
3674 pHlp->pfnPrintf(pHlp, " %sEPTP index = %#RX16\n", pszPrefix, pVmcs->u16EptpIndex);
3675
3676 /* 32-bit. */
3677 pHlp->pfnPrintf(pHlp, " %sPin ctls = %#RX32\n", pszPrefix, pVmcs->u32PinCtls);
3678 pHlp->pfnPrintf(pHlp, " %sProcessor ctls = %#RX32\n", pszPrefix, pVmcs->u32ProcCtls);
3679 pHlp->pfnPrintf(pHlp, " %sSecondary processor ctls = %#RX32\n", pszPrefix, pVmcs->u32ProcCtls2);
3680 pHlp->pfnPrintf(pHlp, " %sVM-exit ctls = %#RX32\n", pszPrefix, pVmcs->u32ExitCtls);
3681 pHlp->pfnPrintf(pHlp, " %sVM-entry ctls = %#RX32\n", pszPrefix, pVmcs->u32EntryCtls);
3682 pHlp->pfnPrintf(pHlp, " %sException bitmap = %#RX32\n", pszPrefix, pVmcs->u32XcptBitmap);
3683 pHlp->pfnPrintf(pHlp, " %sPage-fault mask = %#RX32\n", pszPrefix, pVmcs->u32XcptPFMask);
3684 pHlp->pfnPrintf(pHlp, " %sPage-fault match = %#RX32\n", pszPrefix, pVmcs->u32XcptPFMatch);
3685 pHlp->pfnPrintf(pHlp, " %sCR3-target count = %RU32\n", pszPrefix, pVmcs->u32Cr3TargetCount);
3686 pHlp->pfnPrintf(pHlp, " %sVM-exit MSR store count = %RU32\n", pszPrefix, pVmcs->u32ExitMsrStoreCount);
3687 pHlp->pfnPrintf(pHlp, " %sVM-exit MSR load count = %RU32\n", pszPrefix, pVmcs->u32ExitMsrLoadCount);
3688 pHlp->pfnPrintf(pHlp, " %sVM-entry MSR load count = %RU32\n", pszPrefix, pVmcs->u32EntryMsrLoadCount);
3689 pHlp->pfnPrintf(pHlp, " %sVM-entry interruption info = %#RX32\n", pszPrefix, pVmcs->u32EntryIntInfo);
3690 {
3691 uint32_t const fInfo = pVmcs->u32EntryIntInfo;
3692 uint8_t const uType = VMX_ENTRY_INT_INFO_TYPE(fInfo);
3693 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, VMX_ENTRY_INT_INFO_IS_VALID(fInfo));
3694 pHlp->pfnPrintf(pHlp, " %sType = %#x (%s)\n", pszPrefix, uType, VMXGetEntryIntInfoTypeDesc(uType));
3695 pHlp->pfnPrintf(pHlp, " %sVector = %#x\n", pszPrefix, VMX_ENTRY_INT_INFO_VECTOR(fInfo));
3696 pHlp->pfnPrintf(pHlp, " %sNMI-unblocking-IRET = %RTbool\n", pszPrefix, VMX_ENTRY_INT_INFO_IS_NMI_UNBLOCK_IRET(fInfo));
3697 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, VMX_ENTRY_INT_INFO_IS_ERROR_CODE_VALID(fInfo));
3698 }
3699 pHlp->pfnPrintf(pHlp, " %sVM-entry xcpt error-code = %#RX32\n", pszPrefix, pVmcs->u32EntryXcptErrCode);
3700 pHlp->pfnPrintf(pHlp, " %sVM-entry instr length = %u byte(s)\n", pszPrefix, pVmcs->u32EntryInstrLen);
3701 pHlp->pfnPrintf(pHlp, " %sTPR threshold = %#RX32\n", pszPrefix, pVmcs->u32TprThreshold);
3702 pHlp->pfnPrintf(pHlp, " %sPLE gap = %#RX32\n", pszPrefix, pVmcs->u32PleGap);
3703 pHlp->pfnPrintf(pHlp, " %sPLE window = %#RX32\n", pszPrefix, pVmcs->u32PleWindow);
3704
3705 /* 64-bit. */
3706 pHlp->pfnPrintf(pHlp, " %sIO-bitmap A addr = %#RX64\n", pszPrefix, pVmcs->u64AddrIoBitmapA.u);
3707 pHlp->pfnPrintf(pHlp, " %sIO-bitmap B addr = %#RX64\n", pszPrefix, pVmcs->u64AddrIoBitmapB.u);
3708 pHlp->pfnPrintf(pHlp, " %sMSR-bitmap addr = %#RX64\n", pszPrefix, pVmcs->u64AddrMsrBitmap.u);
3709 pHlp->pfnPrintf(pHlp, " %sVM-exit MSR store addr = %#RX64\n", pszPrefix, pVmcs->u64AddrExitMsrStore.u);
3710 pHlp->pfnPrintf(pHlp, " %sVM-exit MSR load addr = %#RX64\n", pszPrefix, pVmcs->u64AddrExitMsrLoad.u);
3711 pHlp->pfnPrintf(pHlp, " %sVM-entry MSR load addr = %#RX64\n", pszPrefix, pVmcs->u64AddrEntryMsrLoad.u);
3712 pHlp->pfnPrintf(pHlp, " %sExecutive VMCS ptr = %#RX64\n", pszPrefix, pVmcs->u64ExecVmcsPtr.u);
3713 pHlp->pfnPrintf(pHlp, " %sPML addr = %#RX64\n", pszPrefix, pVmcs->u64AddrPml.u);
3714 pHlp->pfnPrintf(pHlp, " %sTSC offset = %#RX64\n", pszPrefix, pVmcs->u64TscOffset.u);
3715 pHlp->pfnPrintf(pHlp, " %sVirtual-APIC addr = %#RX64\n", pszPrefix, pVmcs->u64AddrVirtApic.u);
3716 pHlp->pfnPrintf(pHlp, " %sAPIC-access addr = %#RX64\n", pszPrefix, pVmcs->u64AddrApicAccess.u);
3717 pHlp->pfnPrintf(pHlp, " %sPosted-intr desc addr = %#RX64\n", pszPrefix, pVmcs->u64AddrPostedIntDesc.u);
3718 pHlp->pfnPrintf(pHlp, " %sVM-functions control = %#RX64\n", pszPrefix, pVmcs->u64VmFuncCtls.u);
3719 pHlp->pfnPrintf(pHlp, " %sEPTP ptr = %#RX64\n", pszPrefix, pVmcs->u64EptPtr.u);
3720 pHlp->pfnPrintf(pHlp, " %sEOI-exit bitmap 0 = %#RX64\n", pszPrefix, pVmcs->u64EoiExitBitmap0.u);
3721 pHlp->pfnPrintf(pHlp, " %sEOI-exit bitmap 1 = %#RX64\n", pszPrefix, pVmcs->u64EoiExitBitmap1.u);
3722 pHlp->pfnPrintf(pHlp, " %sEOI-exit bitmap 2 = %#RX64\n", pszPrefix, pVmcs->u64EoiExitBitmap2.u);
3723 pHlp->pfnPrintf(pHlp, " %sEOI-exit bitmap 3 = %#RX64\n", pszPrefix, pVmcs->u64EoiExitBitmap3.u);
3724 pHlp->pfnPrintf(pHlp, " %sEPTP-list addr = %#RX64\n", pszPrefix, pVmcs->u64AddrEptpList.u);
3725 pHlp->pfnPrintf(pHlp, " %sVMREAD-bitmap addr = %#RX64\n", pszPrefix, pVmcs->u64AddrVmreadBitmap.u);
3726 pHlp->pfnPrintf(pHlp, " %sVMWRITE-bitmap addr = %#RX64\n", pszPrefix, pVmcs->u64AddrVmwriteBitmap.u);
3727 pHlp->pfnPrintf(pHlp, " %sVirt-Xcpt info addr = %#RX64\n", pszPrefix, pVmcs->u64AddrXcptVeInfo.u);
3728 pHlp->pfnPrintf(pHlp, " %sXSS-exiting bitmap = %#RX64\n", pszPrefix, pVmcs->u64XssExitBitmap.u);
3729 pHlp->pfnPrintf(pHlp, " %sENCLS-exiting bitmap = %#RX64\n", pszPrefix, pVmcs->u64EnclsExitBitmap.u);
3730 pHlp->pfnPrintf(pHlp, " %sSPP-table ptr = %#RX64\n", pszPrefix, pVmcs->u64SppTablePtr.u);
3731 pHlp->pfnPrintf(pHlp, " %sTSC multiplier = %#RX64\n", pszPrefix, pVmcs->u64TscMultiplier.u);
3732 pHlp->pfnPrintf(pHlp, " %sTertiary processor ctls = %#RX64\n", pszPrefix, pVmcs->u64ProcCtls3.u);
3733 pHlp->pfnPrintf(pHlp, " %sENCLV-exiting bitmap = %#RX64\n", pszPrefix, pVmcs->u64EnclvExitBitmap.u);
3734
3735 /* Natural width. */
3736 pHlp->pfnPrintf(pHlp, " %sCR0 guest/host mask = %#RX64\n", pszPrefix, pVmcs->u64Cr0Mask.u);
3737 pHlp->pfnPrintf(pHlp, " %sCR4 guest/host mask = %#RX64\n", pszPrefix, pVmcs->u64Cr4Mask.u);
3738 pHlp->pfnPrintf(pHlp, " %sCR0 read shadow = %#RX64\n", pszPrefix, pVmcs->u64Cr0ReadShadow.u);
3739 pHlp->pfnPrintf(pHlp, " %sCR4 read shadow = %#RX64\n", pszPrefix, pVmcs->u64Cr4ReadShadow.u);
3740 pHlp->pfnPrintf(pHlp, " %sCR3-target 0 = %#RX64\n", pszPrefix, pVmcs->u64Cr3Target0.u);
3741 pHlp->pfnPrintf(pHlp, " %sCR3-target 1 = %#RX64\n", pszPrefix, pVmcs->u64Cr3Target1.u);
3742 pHlp->pfnPrintf(pHlp, " %sCR3-target 2 = %#RX64\n", pszPrefix, pVmcs->u64Cr3Target2.u);
3743 pHlp->pfnPrintf(pHlp, " %sCR3-target 3 = %#RX64\n", pszPrefix, pVmcs->u64Cr3Target3.u);
3744 }
3745
3746 /* Guest state. */
3747 {
3748 char szEFlags[80];
3749 cpumR3InfoFormatFlags(&szEFlags[0], pVmcs->u64GuestRFlags.u);
3750 pHlp->pfnPrintf(pHlp, "%sGuest state:\n", pszPrefix);
3751
3752 /* 16-bit. */
3753 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Cs, "CS", pszPrefix);
3754 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Ss, "SS", pszPrefix);
3755 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Es, "ES", pszPrefix);
3756 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Ds, "DS", pszPrefix);
3757 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Fs, "FS", pszPrefix);
3758 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Gs, "GS", pszPrefix);
3759 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Ldtr, "LDTR", pszPrefix);
3760 CPUMVMX_DUMP_GUEST_SEGREG(pHlp, pVmcs, Tr, "TR", pszPrefix);
3761 CPUMVMX_DUMP_GUEST_XDTR(pHlp, pVmcs, Gdtr, "GDTR", pszPrefix);
3762 CPUMVMX_DUMP_GUEST_XDTR(pHlp, pVmcs, Idtr, "IDTR", pszPrefix);
3763 pHlp->pfnPrintf(pHlp, " %sInterrupt status = %#RX16\n", pszPrefix, pVmcs->u16GuestIntStatus);
3764 pHlp->pfnPrintf(pHlp, " %sPML index = %#RX16\n", pszPrefix, pVmcs->u16PmlIndex);
3765
3766 /* 32-bit. */
3767 pHlp->pfnPrintf(pHlp, " %sInterruptibility state = %#RX32\n", pszPrefix, pVmcs->u32GuestIntrState);
3768 pHlp->pfnPrintf(pHlp, " %sActivity state = %#RX32\n", pszPrefix, pVmcs->u32GuestActivityState);
3769 pHlp->pfnPrintf(pHlp, " %sSMBASE = %#RX32\n", pszPrefix, pVmcs->u32GuestSmBase);
3770 pHlp->pfnPrintf(pHlp, " %sSysEnter CS = %#RX32\n", pszPrefix, pVmcs->u32GuestSysenterCS);
3771 pHlp->pfnPrintf(pHlp, " %sVMX-preemption timer value = %#RX32\n", pszPrefix, pVmcs->u32PreemptTimer);
3772
3773 /* 64-bit. */
3774 pHlp->pfnPrintf(pHlp, " %sVMCS link ptr = %#RX64\n", pszPrefix, pVmcs->u64VmcsLinkPtr.u);
3775 pHlp->pfnPrintf(pHlp, " %sDBGCTL = %#RX64\n", pszPrefix, pVmcs->u64GuestDebugCtlMsr.u);
3776 pHlp->pfnPrintf(pHlp, " %sPAT = %#RX64\n", pszPrefix, pVmcs->u64GuestPatMsr.u);
3777 pHlp->pfnPrintf(pHlp, " %sEFER = %#RX64\n", pszPrefix, pVmcs->u64GuestEferMsr.u);
3778 pHlp->pfnPrintf(pHlp, " %sPERFGLOBALCTRL = %#RX64\n", pszPrefix, pVmcs->u64GuestPerfGlobalCtlMsr.u);
3779 pHlp->pfnPrintf(pHlp, " %sPDPTE 0 = %#RX64\n", pszPrefix, pVmcs->u64GuestPdpte0.u);
3780 pHlp->pfnPrintf(pHlp, " %sPDPTE 1 = %#RX64\n", pszPrefix, pVmcs->u64GuestPdpte1.u);
3781 pHlp->pfnPrintf(pHlp, " %sPDPTE 2 = %#RX64\n", pszPrefix, pVmcs->u64GuestPdpte2.u);
3782 pHlp->pfnPrintf(pHlp, " %sPDPTE 3 = %#RX64\n", pszPrefix, pVmcs->u64GuestPdpte3.u);
3783 pHlp->pfnPrintf(pHlp, " %sBNDCFGS = %#RX64\n", pszPrefix, pVmcs->u64GuestBndcfgsMsr.u);
3784 pHlp->pfnPrintf(pHlp, " %sRTIT_CTL = %#RX64\n", pszPrefix, pVmcs->u64GuestRtitCtlMsr.u);
3785 pHlp->pfnPrintf(pHlp, " %sPKRS = %#RX64\n", pszPrefix, pVmcs->u64GuestPkrsMsr.u);
3786
3787 /* Natural width. */
3788 pHlp->pfnPrintf(pHlp, " %sCR0 = %#RX64\n", pszPrefix, pVmcs->u64GuestCr0.u);
3789 pHlp->pfnPrintf(pHlp, " %sCR3 = %#RX64\n", pszPrefix, pVmcs->u64GuestCr3.u);
3790 pHlp->pfnPrintf(pHlp, " %sCR4 = %#RX64\n", pszPrefix, pVmcs->u64GuestCr4.u);
3791 pHlp->pfnPrintf(pHlp, " %sDR7 = %#RX64\n", pszPrefix, pVmcs->u64GuestDr7.u);
3792 pHlp->pfnPrintf(pHlp, " %sRSP = %#RX64\n", pszPrefix, pVmcs->u64GuestRsp.u);
3793 pHlp->pfnPrintf(pHlp, " %sRIP = %#RX64\n", pszPrefix, pVmcs->u64GuestRip.u);
3794 pHlp->pfnPrintf(pHlp, " %sRFLAGS = %#RX64 %31s\n",pszPrefix, pVmcs->u64GuestRFlags.u, szEFlags);
3795 pHlp->pfnPrintf(pHlp, " %sPending debug xcpts = %#RX64\n", pszPrefix, pVmcs->u64GuestPendingDbgXcpts.u);
3796 pHlp->pfnPrintf(pHlp, " %sSysEnter ESP = %#RX64\n", pszPrefix, pVmcs->u64GuestSysenterEsp.u);
3797 pHlp->pfnPrintf(pHlp, " %sSysEnter EIP = %#RX64\n", pszPrefix, pVmcs->u64GuestSysenterEip.u);
3798 pHlp->pfnPrintf(pHlp, " %sS_CET = %#RX64\n", pszPrefix, pVmcs->u64GuestSCetMsr.u);
3799 pHlp->pfnPrintf(pHlp, " %sSSP = %#RX64\n", pszPrefix, pVmcs->u64GuestSsp.u);
3800 pHlp->pfnPrintf(pHlp, " %sINTERRUPT_SSP_TABLE_ADDR = %#RX64\n", pszPrefix, pVmcs->u64GuestIntrSspTableAddrMsr.u);
3801 }
3802
3803 /* Host state. */
3804 {
3805 pHlp->pfnPrintf(pHlp, "%sHost state:\n", pszPrefix);
3806
3807 /* 16-bit. */
3808 pHlp->pfnPrintf(pHlp, " %sCS = %#RX16\n", pszPrefix, pVmcs->HostCs);
3809 pHlp->pfnPrintf(pHlp, " %sSS = %#RX16\n", pszPrefix, pVmcs->HostSs);
3810 pHlp->pfnPrintf(pHlp, " %sDS = %#RX16\n", pszPrefix, pVmcs->HostDs);
3811 pHlp->pfnPrintf(pHlp, " %sES = %#RX16\n", pszPrefix, pVmcs->HostEs);
3812 CPUMVMX_DUMP_HOST_FS_GS_TR(pHlp, pVmcs, Fs, "FS", pszPrefix);
3813 CPUMVMX_DUMP_HOST_FS_GS_TR(pHlp, pVmcs, Gs, "GS", pszPrefix);
3814 CPUMVMX_DUMP_HOST_FS_GS_TR(pHlp, pVmcs, Tr, "TR", pszPrefix);
3815 CPUMVMX_DUMP_HOST_XDTR(pHlp, pVmcs, Gdtr, "GDTR", pszPrefix);
3816 CPUMVMX_DUMP_HOST_XDTR(pHlp, pVmcs, Idtr, "IDTR", pszPrefix);
3817
3818 /* 32-bit. */
3819 pHlp->pfnPrintf(pHlp, " %sSysEnter CS = %#RX32\n", pszPrefix, pVmcs->u32HostSysenterCs);
3820
3821 /* 64-bit. */
3822 pHlp->pfnPrintf(pHlp, " %sEFER = %#RX64\n", pszPrefix, pVmcs->u64HostEferMsr.u);
3823 pHlp->pfnPrintf(pHlp, " %sPAT = %#RX64\n", pszPrefix, pVmcs->u64HostPatMsr.u);
3824 pHlp->pfnPrintf(pHlp, " %sPERFGLOBALCTRL = %#RX64\n", pszPrefix, pVmcs->u64HostPerfGlobalCtlMsr.u);
3825 pHlp->pfnPrintf(pHlp, " %sPKRS = %#RX64\n", pszPrefix, pVmcs->u64HostPkrsMsr.u);
3826
3827 /* Natural width. */
3828 pHlp->pfnPrintf(pHlp, " %sCR0 = %#RX64\n", pszPrefix, pVmcs->u64HostCr0.u);
3829 pHlp->pfnPrintf(pHlp, " %sCR3 = %#RX64\n", pszPrefix, pVmcs->u64HostCr3.u);
3830 pHlp->pfnPrintf(pHlp, " %sCR4 = %#RX64\n", pszPrefix, pVmcs->u64HostCr4.u);
3831 pHlp->pfnPrintf(pHlp, " %sSysEnter ESP = %#RX64\n", pszPrefix, pVmcs->u64HostSysenterEsp.u);
3832 pHlp->pfnPrintf(pHlp, " %sSysEnter EIP = %#RX64\n", pszPrefix, pVmcs->u64HostSysenterEip.u);
3833 pHlp->pfnPrintf(pHlp, " %sRSP = %#RX64\n", pszPrefix, pVmcs->u64HostRsp.u);
3834 pHlp->pfnPrintf(pHlp, " %sRIP = %#RX64\n", pszPrefix, pVmcs->u64HostRip.u);
3835 pHlp->pfnPrintf(pHlp, " %sS_CET = %#RX64\n", pszPrefix, pVmcs->u64HostSCetMsr.u);
3836 pHlp->pfnPrintf(pHlp, " %sSSP = %#RX64\n", pszPrefix, pVmcs->u64HostSsp.u);
3837 pHlp->pfnPrintf(pHlp, " %sINTERRUPT_SSP_TABLE_ADDR = %#RX64\n", pszPrefix, pVmcs->u64HostIntrSspTableAddrMsr.u);
3838
3839 }
3840
3841 /* Read-only fields. */
3842 {
3843 pHlp->pfnPrintf(pHlp, "%sRead-only data fields:\n", pszPrefix);
3844
3845 /* 16-bit (none currently). */
3846
3847 /* 32-bit. */
3848 pHlp->pfnPrintf(pHlp, " %sExit reason = %u (%s)\n", pszPrefix, pVmcs->u32RoExitReason, HMGetVmxExitName(pVmcs->u32RoExitReason));
3849 pHlp->pfnPrintf(pHlp, " %sExit qualification = %#RX64\n", pszPrefix, pVmcs->u64RoExitQual.u);
3850 pHlp->pfnPrintf(pHlp, " %sVM-instruction error = %#RX32\n", pszPrefix, pVmcs->u32RoVmInstrError);
3851 pHlp->pfnPrintf(pHlp, " %sVM-exit intr info = %#RX32\n", pszPrefix, pVmcs->u32RoExitIntInfo);
3852 {
3853 uint32_t const fInfo = pVmcs->u32RoExitIntInfo;
3854 uint8_t const uType = VMX_EXIT_INT_INFO_TYPE(fInfo);
3855 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, VMX_EXIT_INT_INFO_IS_VALID(fInfo));
3856 pHlp->pfnPrintf(pHlp, " %sType = %#x (%s)\n", pszPrefix, uType, VMXGetExitIntInfoTypeDesc(uType));
3857 pHlp->pfnPrintf(pHlp, " %sVector = %#x\n", pszPrefix, VMX_EXIT_INT_INFO_VECTOR(fInfo));
3858 pHlp->pfnPrintf(pHlp, " %sNMI-unblocking-IRET = %RTbool\n", pszPrefix, VMX_EXIT_INT_INFO_IS_NMI_UNBLOCK_IRET(fInfo));
3859 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, VMX_EXIT_INT_INFO_IS_ERROR_CODE_VALID(fInfo));
3860 }
3861 pHlp->pfnPrintf(pHlp, " %sVM-exit intr error-code = %#RX32\n", pszPrefix, pVmcs->u32RoExitIntErrCode);
3862 pHlp->pfnPrintf(pHlp, " %sIDT-vectoring info = %#RX32\n", pszPrefix, pVmcs->u32RoIdtVectoringInfo);
3863 {
3864 uint32_t const fInfo = pVmcs->u32RoIdtVectoringInfo;
3865 uint8_t const uType = VMX_IDT_VECTORING_INFO_TYPE(fInfo);
3866 pHlp->pfnPrintf(pHlp, " %sValid = %RTbool\n", pszPrefix, VMX_IDT_VECTORING_INFO_IS_VALID(fInfo));
3867 pHlp->pfnPrintf(pHlp, " %sType = %#x (%s)\n", pszPrefix, uType, VMXGetIdtVectoringInfoTypeDesc(uType));
3868 pHlp->pfnPrintf(pHlp, " %sVector = %#x\n", pszPrefix, VMX_IDT_VECTORING_INFO_VECTOR(fInfo));
3869 pHlp->pfnPrintf(pHlp, " %sError-code valid = %RTbool\n", pszPrefix, VMX_IDT_VECTORING_INFO_IS_ERROR_CODE_VALID(fInfo));
3870 }
3871 pHlp->pfnPrintf(pHlp, " %sIDT-vectoring error-code = %#RX32\n", pszPrefix, pVmcs->u32RoIdtVectoringErrCode);
3872 pHlp->pfnPrintf(pHlp, " %sVM-exit instruction length = %u byte(s)\n", pszPrefix, pVmcs->u32RoExitInstrLen);
3873 pHlp->pfnPrintf(pHlp, " %sVM-exit instruction info = %#RX32\n", pszPrefix, pVmcs->u32RoExitInstrInfo);
3874
3875 /* 64-bit. */
3876 pHlp->pfnPrintf(pHlp, " %sGuest-physical addr = %#RX64\n", pszPrefix, pVmcs->u64RoGuestPhysAddr.u);
3877
3878 /* Natural width. */
3879 pHlp->pfnPrintf(pHlp, " %sI/O RCX = %#RX64\n", pszPrefix, pVmcs->u64RoIoRcx.u);
3880 pHlp->pfnPrintf(pHlp, " %sI/O RSI = %#RX64\n", pszPrefix, pVmcs->u64RoIoRsi.u);
3881 pHlp->pfnPrintf(pHlp, " %sI/O RDI = %#RX64\n", pszPrefix, pVmcs->u64RoIoRdi.u);
3882 pHlp->pfnPrintf(pHlp, " %sI/O RIP = %#RX64\n", pszPrefix, pVmcs->u64RoIoRip.u);
3883 pHlp->pfnPrintf(pHlp, " %sGuest-linear addr = %#RX64\n", pszPrefix, pVmcs->u64RoGuestLinearAddr.u);
3884 }
3885
3886#ifdef DEBUG_ramshankar
3887 if (pVmcs->u32ProcCtls & VMX_PROC_CTLS_USE_TPR_SHADOW)
3888 {
3889 void *pvPage = RTMemTmpAllocZ(VMX_V_VIRT_APIC_SIZE);
3890 Assert(pvPage);
3891 RTGCPHYS const GCPhysVirtApic = pVmcs->u64AddrVirtApic.u;
3892 int rc = PGMPhysSimpleReadGCPhys(pVCpu->CTX_SUFF(pVM), pvPage, GCPhysVirtApic, VMX_V_VIRT_APIC_SIZE);
3893 if (RT_SUCCESS(rc))
3894 {
3895 pHlp->pfnPrintf(pHlp, " %sVirtual-APIC page\n", pszPrefix);
3896 pHlp->pfnPrintf(pHlp, "%.*Rhxs\n", VMX_V_VIRT_APIC_SIZE, pvPage);
3897 pHlp->pfnPrintf(pHlp, "\n");
3898 }
3899 RTMemTmpFree(pvPage);
3900 }
3901#else
3902 NOREF(pVCpu);
3903#endif
3904
3905#undef CPUMVMX_DUMP_HOST_XDTR
3906#undef CPUMVMX_DUMP_HOST_FS_GS_TR
3907#undef CPUMVMX_DUMP_GUEST_SEGREG
3908#undef CPUMVMX_DUMP_GUEST_XDTR
3909}
3910
3911
3912/**
3913 * Display the guest's hardware-virtualization cpu state.
3914 *
3915 * @param pVM The cross context VM structure.
3916 * @param pHlp The info helper functions.
3917 * @param pszArgs Arguments, ignored.
3918 */
3919static DECLCALLBACK(void) cpumR3InfoGuestHwvirt(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3920{
3921 RT_NOREF(pszArgs);
3922
3923 PVMCPU pVCpu = VMMGetCpu(pVM);
3924 if (!pVCpu)
3925 pVCpu = pVM->apCpusR3[0];
3926
3927 PCCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
3928 bool const fSvm = pVM->cpum.s.GuestFeatures.fSvm;
3929 bool const fVmx = pVM->cpum.s.GuestFeatures.fVmx;
3930
3931 pHlp->pfnPrintf(pHlp, "VCPU[%u] hardware virtualization state:\n", pVCpu->idCpu);
3932 pHlp->pfnPrintf(pHlp, "fLocalForcedActions = %#RX32\n", pCtx->hwvirt.fLocalForcedActions);
3933 pHlp->pfnPrintf(pHlp, "In nested-guest hwvirt mode = %RTbool\n", CPUMIsGuestInNestedHwvirtMode(pCtx));
3934
3935 if (fSvm)
3936 {
3937 pHlp->pfnPrintf(pHlp, "SVM hwvirt state:\n");
3938 pHlp->pfnPrintf(pHlp, " fGif = %RTbool\n", pCtx->hwvirt.fGif);
3939
3940 char szEFlags[80];
3941 cpumR3InfoFormatFlags(&szEFlags[0], pCtx->hwvirt.svm.HostState.rflags.u);
3942 pHlp->pfnPrintf(pHlp, " uMsrHSavePa = %#RX64\n", pCtx->hwvirt.svm.uMsrHSavePa);
3943 pHlp->pfnPrintf(pHlp, " GCPhysVmcb = %#RGp\n", pCtx->hwvirt.svm.GCPhysVmcb);
3944 pHlp->pfnPrintf(pHlp, " VmcbCtrl:\n");
3945 cpumR3InfoSvmVmcbCtrl(pHlp, &pCtx->hwvirt.svm.Vmcb.ctrl, " " /* pszPrefix */);
3946 pHlp->pfnPrintf(pHlp, " VmcbStateSave:\n");
3947 cpumR3InfoSvmVmcbStateSave(pHlp, &pCtx->hwvirt.svm.Vmcb.guest, " " /* pszPrefix */);
3948 pHlp->pfnPrintf(pHlp, " HostState:\n");
3949 pHlp->pfnPrintf(pHlp, " uEferMsr = %#RX64\n", pCtx->hwvirt.svm.HostState.uEferMsr);
3950 pHlp->pfnPrintf(pHlp, " uCr0 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr0);
3951 pHlp->pfnPrintf(pHlp, " uCr4 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr4);
3952 pHlp->pfnPrintf(pHlp, " uCr3 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr3);
3953 pHlp->pfnPrintf(pHlp, " uRip = %#RX64\n", pCtx->hwvirt.svm.HostState.uRip);
3954 pHlp->pfnPrintf(pHlp, " uRsp = %#RX64\n", pCtx->hwvirt.svm.HostState.uRsp);
3955 pHlp->pfnPrintf(pHlp, " uRax = %#RX64\n", pCtx->hwvirt.svm.HostState.uRax);
3956 pHlp->pfnPrintf(pHlp, " rflags = %#RX64 %31s\n", pCtx->hwvirt.svm.HostState.rflags.u64, szEFlags);
3957 PCCPUMSELREG pSelEs = &pCtx->hwvirt.svm.HostState.es;
3958 pHlp->pfnPrintf(pHlp, " es = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3959 pSelEs->Sel, pSelEs->u64Base, pSelEs->u32Limit, pSelEs->Attr.u);
3960 PCCPUMSELREG pSelCs = &pCtx->hwvirt.svm.HostState.cs;
3961 pHlp->pfnPrintf(pHlp, " cs = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3962 pSelCs->Sel, pSelCs->u64Base, pSelCs->u32Limit, pSelCs->Attr.u);
3963 PCCPUMSELREG pSelSs = &pCtx->hwvirt.svm.HostState.ss;
3964 pHlp->pfnPrintf(pHlp, " ss = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3965 pSelSs->Sel, pSelSs->u64Base, pSelSs->u32Limit, pSelSs->Attr.u);
3966 PCCPUMSELREG pSelDs = &pCtx->hwvirt.svm.HostState.ds;
3967 pHlp->pfnPrintf(pHlp, " ds = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3968 pSelDs->Sel, pSelDs->u64Base, pSelDs->u32Limit, pSelDs->Attr.u);
3969 pHlp->pfnPrintf(pHlp, " gdtr = %016RX64:%04x\n", pCtx->hwvirt.svm.HostState.gdtr.pGdt,
3970 pCtx->hwvirt.svm.HostState.gdtr.cbGdt);
3971 pHlp->pfnPrintf(pHlp, " idtr = %016RX64:%04x\n", pCtx->hwvirt.svm.HostState.idtr.pIdt,
3972 pCtx->hwvirt.svm.HostState.idtr.cbIdt);
3973 pHlp->pfnPrintf(pHlp, " cPauseFilter = %RU16\n", pCtx->hwvirt.svm.cPauseFilter);
3974 pHlp->pfnPrintf(pHlp, " cPauseFilterThreshold = %RU32\n", pCtx->hwvirt.svm.cPauseFilterThreshold);
3975 pHlp->pfnPrintf(pHlp, " fInterceptEvents = %u\n", pCtx->hwvirt.svm.fInterceptEvents);
3976 }
3977 else if (fVmx)
3978 {
3979 pHlp->pfnPrintf(pHlp, "VMX hwvirt state:\n");
3980 pHlp->pfnPrintf(pHlp, " GCPhysVmxon = %#RGp\n", pCtx->hwvirt.vmx.GCPhysVmxon);
3981 pHlp->pfnPrintf(pHlp, " GCPhysVmcs = %#RGp\n", pCtx->hwvirt.vmx.GCPhysVmcs);
3982 pHlp->pfnPrintf(pHlp, " GCPhysShadowVmcs = %#RGp\n", pCtx->hwvirt.vmx.GCPhysShadowVmcs);
3983 pHlp->pfnPrintf(pHlp, " enmDiag = %u (%s)\n", pCtx->hwvirt.vmx.enmDiag, HMGetVmxDiagDesc(pCtx->hwvirt.vmx.enmDiag));
3984 pHlp->pfnPrintf(pHlp, " uDiagAux = %#RX64\n", pCtx->hwvirt.vmx.uDiagAux);
3985 pHlp->pfnPrintf(pHlp, " enmAbort = %u (%s)\n", pCtx->hwvirt.vmx.enmAbort, VMXGetAbortDesc(pCtx->hwvirt.vmx.enmAbort));
3986 pHlp->pfnPrintf(pHlp, " uAbortAux = %u (%#x)\n", pCtx->hwvirt.vmx.uAbortAux, pCtx->hwvirt.vmx.uAbortAux);
3987 pHlp->pfnPrintf(pHlp, " fInVmxRootMode = %RTbool\n", pCtx->hwvirt.vmx.fInVmxRootMode);
3988 pHlp->pfnPrintf(pHlp, " fInVmxNonRootMode = %RTbool\n", pCtx->hwvirt.vmx.fInVmxNonRootMode);
3989 pHlp->pfnPrintf(pHlp, " fInterceptEvents = %RTbool\n", pCtx->hwvirt.vmx.fInterceptEvents);
3990 pHlp->pfnPrintf(pHlp, " fNmiUnblockingIret = %RTbool\n", pCtx->hwvirt.vmx.fNmiUnblockingIret);
3991 pHlp->pfnPrintf(pHlp, " uFirstPauseLoopTick = %RX64\n", pCtx->hwvirt.vmx.uFirstPauseLoopTick);
3992 pHlp->pfnPrintf(pHlp, " uPrevPauseTick = %RX64\n", pCtx->hwvirt.vmx.uPrevPauseTick);
3993 pHlp->pfnPrintf(pHlp, " uEntryTick = %RX64\n", pCtx->hwvirt.vmx.uEntryTick);
3994 pHlp->pfnPrintf(pHlp, " offVirtApicWrite = %#RX16\n", pCtx->hwvirt.vmx.offVirtApicWrite);
3995 pHlp->pfnPrintf(pHlp, " fVirtNmiBlocking = %RTbool\n", pCtx->hwvirt.vmx.fVirtNmiBlocking);
3996 pHlp->pfnPrintf(pHlp, " VMCS cache:\n");
3997 cpumR3InfoVmxVmcs(pVCpu, pHlp, &pCtx->hwvirt.vmx.Vmcs, " " /* pszPrefix */);
3998 }
3999 else
4000 pHlp->pfnPrintf(pHlp, "Hwvirt state disabled.\n");
4001
4002#undef CPUMHWVIRTDUMP_NONE
4003#undef CPUMHWVIRTDUMP_COMMON
4004#undef CPUMHWVIRTDUMP_SVM
4005#undef CPUMHWVIRTDUMP_VMX
4006#undef CPUMHWVIRTDUMP_LAST
4007#undef CPUMHWVIRTDUMP_ALL
4008}
4009
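/* Sketch (an assumption, not shown in this part of the file): a handler like
 * the one above is registered with DBGF during CPUM initialization roughly as
 * follows, making it available as the "cpumhwvirt" info item:
 *
 *      rc = DBGFR3InfoRegisterInternalEx(pVM, "cpumhwvirt",
 *                                        "Displays the guest hwvirt. state.",
 *                                        &cpumR3InfoGuestHwvirt, DBGFINFO_FLAGS_ALL_EMTS);
 */
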
4010/**
4011 * Display the current guest instruction.
4012 *
4013 * @param pVM The cross context VM structure.
4014 * @param pHlp The info helper functions.
4015 * @param pszArgs Arguments, ignored.
4016 */
4017static DECLCALLBACK(void) cpumR3InfoGuestInstr(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
4018{
4019 NOREF(pszArgs);
4020
4021 PVMCPU pVCpu = VMMGetCpu(pVM);
4022 if (!pVCpu)
4023 pVCpu = pVM->apCpusR3[0];
4024
4025 char szInstruction[256];
4026 szInstruction[0] = '\0';
4027 DBGFR3DisasInstrCurrent(pVCpu, szInstruction, sizeof(szInstruction));
4028 pHlp->pfnPrintf(pHlp, "\nCPUM%u: %s\n\n", pVCpu->idCpu, szInstruction);
4029}
4030
4031
4032/**
4033 * Display the hypervisor cpu state.
4034 *
4035 * @param pVM The cross context VM structure.
4036 * @param pHlp The info helper functions.
4037 * @param pszArgs Arguments, ignored.
4038 */
4039static DECLCALLBACK(void) cpumR3InfoHyper(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
4040{
4041 PVMCPU pVCpu = VMMGetCpu(pVM);
4042 if (!pVCpu)
4043 pVCpu = pVM->apCpusR3[0];
4044
4045 CPUMDUMPTYPE enmType;
4046 const char *pszComment;
4047 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
4048 pHlp->pfnPrintf(pHlp, "Hypervisor CPUM state: %s\n", pszComment);
4049
4050 pHlp->pfnPrintf(pHlp,
4051 ".dr0=%016RX64 .dr1=%016RX64 .dr2=%016RX64 .dr3=%016RX64\n"
4052 ".dr4=%016RX64 .dr5=%016RX64 .dr6=%016RX64 .dr7=%016RX64\n",
4053 pVCpu->cpum.s.Hyper.dr[0], pVCpu->cpum.s.Hyper.dr[1], pVCpu->cpum.s.Hyper.dr[2], pVCpu->cpum.s.Hyper.dr[3],
4054 pVCpu->cpum.s.Hyper.dr[4], pVCpu->cpum.s.Hyper.dr[5], pVCpu->cpum.s.Hyper.dr[6], pVCpu->cpum.s.Hyper.dr[7]);
4055 pHlp->pfnPrintf(pHlp, "CR4OrMask=%#x CR4AndMask=%#x\n", pVM->cpum.s.CR4.OrMask, pVM->cpum.s.CR4.AndMask);
4056}
4057
4058
4059/**
4060 * Display the host cpu state.
4061 *
4062 * @param pVM The cross context VM structure.
4063 * @param pHlp The info helper functions.
4064 * @param pszArgs Arguments, ignored.
4065 */
4066static DECLCALLBACK(void) cpumR3InfoHost(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
4067{
4068 CPUMDUMPTYPE enmType;
4069 const char *pszComment;
4070 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
4071 pHlp->pfnPrintf(pHlp, "Host CPUM state: %s\n", pszComment);
4072
4073 PVMCPU pVCpu = VMMGetCpu(pVM);
4074 if (!pVCpu)
4075 pVCpu = pVM->apCpusR3[0];
4076 PCPUMHOSTCTX pCtx = &pVCpu->cpum.s.Host;
4077
4078 /*
4079 * Format the EFLAGS.
4080 */
4081 uint64_t efl = pCtx->rflags;
4082 char szEFlags[80];
4083 cpumR3InfoFormatFlags(&szEFlags[0], efl);
4084
4085 /*
4086 * Format the registers.
4087 */
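    /* Note: registers printed as 'xxxx' below (rax, rcx, rdx, r8, r9, rip, cr2)
     * are not part of the saved host context, hence unknown at this point. */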
4088 pHlp->pfnPrintf(pHlp,
4089 "rax=xxxxxxxxxxxxxxxx rbx=%016RX64 rcx=xxxxxxxxxxxxxxxx\n"
4090 "rdx=xxxxxxxxxxxxxxxx rsi=%016RX64 rdi=%016RX64\n"
4091 "rip=xxxxxxxxxxxxxxxx rsp=%016RX64 rbp=%016RX64\n"
4092 " r8=xxxxxxxxxxxxxxxx r9=xxxxxxxxxxxxxxxx r10=%016RX64\n"
4093 "r11=%016RX64 r12=%016RX64 r13=%016RX64\n"
4094 "r14=%016RX64 r15=%016RX64\n"
4095 "iopl=%d %31s\n"
4096 "cs=%04x ds=%04x es=%04x fs=%04x gs=%04x eflags=%08RX64\n"
4097 "cr0=%016RX64 cr2=xxxxxxxxxxxxxxxx cr3=%016RX64\n"
4098 "cr4=%016RX64 ldtr=%04x tr=%04x\n"
4099 "dr[0]=%016RX64 dr[1]=%016RX64 dr[2]=%016RX64\n"
4100 "dr[3]=%016RX64 dr[6]=%016RX64 dr[7]=%016RX64\n"
4101 "gdtr=%016RX64:%04x idtr=%016RX64:%04x\n"
4102 "SysEnter={cs=%04x eip=%08x esp=%08x}\n"
4103 "FSbase=%016RX64 GSbase=%016RX64 efer=%08RX64\n"
4104 ,
4105 /*pCtx->rax,*/ pCtx->rbx, /*pCtx->rcx,
4106 pCtx->rdx,*/ pCtx->rsi, pCtx->rdi,
4107 /*pCtx->rip,*/ pCtx->rsp, pCtx->rbp,
4108 /*pCtx->r8, pCtx->r9,*/ pCtx->r10,
4109 pCtx->r11, pCtx->r12, pCtx->r13,
4110 pCtx->r14, pCtx->r15,
4111 X86_EFL_GET_IOPL(efl), szEFlags,
4112 pCtx->cs, pCtx->ds, pCtx->es, pCtx->fs, pCtx->gs, efl,
4113 pCtx->cr0, /*pCtx->cr2,*/ pCtx->cr3,
4114 pCtx->cr4, pCtx->ldtr, pCtx->tr,
4115 pCtx->dr0, pCtx->dr1, pCtx->dr2,
4116 pCtx->dr3, pCtx->dr6, pCtx->dr7,
4117 pCtx->gdtr.uAddr, pCtx->gdtr.cb, pCtx->idtr.uAddr, pCtx->idtr.cb,
4118 pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp,
4119 pCtx->FSbase, pCtx->GSbase, pCtx->efer);
4120}
4121
4122/**
4123 * Structure used when disassembling instructions in DBGF.
4124 * This is used so the reader function can get at the state it needs.
4125 */
4126typedef struct CPUMDISASSTATE
4127{
4128 /** Pointer to the CPU structure. */
4129 PDISCPUSTATE pCpu;
4130 /** Pointer to the VM. */
4131 PVM pVM;
4132 /** Pointer to the VMCPU. */
4133 PVMCPU pVCpu;
4134 /** Pointer to the first byte in the segment. */
4135 RTGCUINTPTR GCPtrSegBase;
4136 /** Pointer to the byte after the end of the segment. (might have wrapped!) */
4137 RTGCUINTPTR GCPtrSegEnd;
4138 /** The size of the segment minus 1. */
4139 RTGCUINTPTR cbSegLimit;
4140 /** Pointer to the current page - R3 Ptr. */
4141 void const *pvPageR3;
4142 /** Pointer to the current page - GC Ptr. */
4143 RTGCPTR pvPageGC;
4144 /** The lock information that PGMPhysReleasePageMappingLock needs. */
4145 PGMPAGEMAPLOCK PageMapLock;
4146 /** Whether the PageMapLock is valid or not. */
4147 bool fLocked;
4148 /** 64 bits mode or not. */
4149 bool f64Bits;
4150} CPUMDISASSTATE, *PCPUMDISASSTATE;
4151
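/* An instance of this state lives on the stack of CPUMR3DisasmInstrCPU below;
 * cpumR3DisasInstrRead updates pvPageR3/pvPageGC/PageMapLock as it crosses
 * page boundaries, and the caller releases the last mapping lock when done. */
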
4152
4153/**
4154 * @callback_method_impl{FNDISREADBYTES}
4155 */
4156static DECLCALLBACK(int) cpumR3DisasInstrRead(PDISCPUSTATE pDis, uint8_t offInstr, uint8_t cbMinRead, uint8_t cbMaxRead)
4157{
4158 PCPUMDISASSTATE pState = (PCPUMDISASSTATE)pDis->pvUser;
4159 for (;;)
4160 {
4161 RTGCUINTPTR GCPtr = pDis->uInstrAddr + offInstr + pState->GCPtrSegBase;
4162
4163 /*
4164 * Need to update the page translation?
4165 */
4166 if ( !pState->pvPageR3
4167 || (GCPtr >> PAGE_SHIFT) != (pState->pvPageGC >> PAGE_SHIFT))
4168 {
4169 /* translate the address */
4170 pState->pvPageGC = GCPtr & PAGE_BASE_GC_MASK;
4171
4172 /* Release mapping lock previously acquired. */
4173 if (pState->fLocked)
4174 PGMPhysReleasePageMappingLock(pState->pVM, &pState->PageMapLock);
4175 int rc = PGMPhysGCPtr2CCPtrReadOnly(pState->pVCpu, pState->pvPageGC, &pState->pvPageR3, &pState->PageMapLock);
4176 if (RT_SUCCESS(rc))
4177 pState->fLocked = true;
4178 else
4179 {
4180 pState->fLocked = false;
4181 pState->pvPageR3 = NULL;
4182 return rc;
4183 }
4184 }
4185
4186 /*
4187 * Check the segment limit.
4188 */
4189 if (!pState->f64Bits && pDis->uInstrAddr + offInstr > pState->cbSegLimit)
4190 return VERR_OUT_OF_SELECTOR_BOUNDS;
4191
4192 /*
4193 * Calc how much we can read.
4194 */
4195 uint32_t cb = PAGE_SIZE - (GCPtr & PAGE_OFFSET_MASK);
4196 if (!pState->f64Bits)
4197 {
4198 RTGCUINTPTR cbSeg = pState->GCPtrSegEnd - GCPtr;
4199 if (cb > cbSeg && cbSeg)
4200 cb = cbSeg;
4201 }
4202 if (cb > cbMaxRead)
4203 cb = cbMaxRead;
4204
4205 /*
4206 * Read and advance or exit.
4207 */
4208 memcpy(&pDis->abInstr[offInstr], (uint8_t *)pState->pvPageR3 + (GCPtr & PAGE_OFFSET_MASK), cb);
4209 offInstr += (uint8_t)cb;
4210 if (cb >= cbMinRead)
4211 {
4212 pDis->cbCachedInstr = offInstr;
4213 return VINF_SUCCESS;
4214 }
4215 cbMinRead -= (uint8_t)cb;
4216 cbMaxRead -= (uint8_t)cb;
4217 }
4218}
4219
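/* Worked example (illustrative): with 4 KiB pages, a 5-byte minimum read at
 * GCPtr 0x80000ffe (segment base 0) first maps the page at 0x80000000 and
 * copies cb = PAGE_SIZE - 0xffe = 2 bytes; as cb < cbMinRead the loop
 * iterates, remaps at 0x80001000 and copies the remaining bytes before
 * returning VINF_SUCCESS with pDis->cbCachedInstr updated. */
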
4220
4221/**
4222 * Disassemble an instruction and return the information in the provided structure.
4223 *
4224 * @returns VBox status code.
4225 * @param pVM The cross context VM structure.
4226 * @param pVCpu The cross context virtual CPU structure.
4227 * @param pCtx Pointer to the guest CPU context.
4228 * @param GCPtrPC Program counter (relative to CS) to disassemble from.
4229 * @param pCpu Disassembly state.
4230 * @param pszPrefix String prefix for logging (debug only).
4231 *
4232 */
4233VMMR3DECL(int) CPUMR3DisasmInstrCPU(PVM pVM, PVMCPU pVCpu, PCPUMCTX pCtx, RTGCPTR GCPtrPC, PDISCPUSTATE pCpu,
4234 const char *pszPrefix)
4235{
4236 CPUMDISASSTATE State;
4237 int rc;
4238
4239 const PGMMODE enmMode = PGMGetGuestMode(pVCpu);
4240 State.pCpu = pCpu;
4241 State.pvPageGC = 0;
4242 State.pvPageR3 = NULL;
4243 State.pVM = pVM;
4244 State.pVCpu = pVCpu;
4245 State.fLocked = false;
4246 State.f64Bits = false;
4247
4248 /*
4249 * Get selector information.
4250 */
4251 DISCPUMODE enmDisCpuMode;
4252 if ( (pCtx->cr0 & X86_CR0_PE)
4253 && pCtx->eflags.Bits.u1VM == 0)
4254 {
4255 if (!CPUMSELREG_ARE_HIDDEN_PARTS_VALID(pVCpu, &pCtx->cs))
4256 return VERR_CPUM_HIDDEN_CS_LOAD_ERROR;
4257 State.f64Bits = enmMode >= PGMMODE_AMD64 && pCtx->cs.Attr.n.u1Long;
4258 State.GCPtrSegBase = pCtx->cs.u64Base;
4259 State.GCPtrSegEnd = pCtx->cs.u32Limit + 1 + (RTGCUINTPTR)pCtx->cs.u64Base;
4260 State.cbSegLimit = pCtx->cs.u32Limit;
4261 enmDisCpuMode = (State.f64Bits)
4262 ? DISCPUMODE_64BIT
4263 : pCtx->cs.Attr.n.u1DefBig
4264 ? DISCPUMODE_32BIT
4265 : DISCPUMODE_16BIT;
4266 }
4267 else
4268 {
4269 /* real or V86 mode */
4270 enmDisCpuMode = DISCPUMODE_16BIT;
4271 State.GCPtrSegBase = pCtx->cs.Sel * 16;
4272 State.GCPtrSegEnd = 0xFFFFFFFF;
4273 State.cbSegLimit = 0xFFFFFFFF;
4274 }
4275
4276 /*
4277 * Disassemble the instruction.
4278 */
4279 uint32_t cbInstr;
4280#ifndef LOG_ENABLED
4281 RT_NOREF_PV(pszPrefix);
4282 rc = DISInstrWithReader(GCPtrPC, enmDisCpuMode, cpumR3DisasInstrRead, &State, pCpu, &cbInstr);
4283 if (RT_SUCCESS(rc))
4284 {
4285#else
4286 char szOutput[160];
4287 rc = DISInstrToStrWithReader(GCPtrPC, enmDisCpuMode, cpumR3DisasInstrRead, &State,
4288 pCpu, &cbInstr, szOutput, sizeof(szOutput));
4289 if (RT_SUCCESS(rc))
4290 {
4291 /* log it */
4292 if (pszPrefix)
4293 Log(("%s-CPU%d: %s", pszPrefix, pVCpu->idCpu, szOutput));
4294 else
4295 Log(("%s", szOutput));
4296#endif
4297 rc = VINF_SUCCESS;
4298 }
4299 else
4300 Log(("CPUMR3DisasmInstrCPU: DISInstr failed for %04X:%RGv rc=%Rrc\n", pCtx->cs.Sel, GCPtrPC, rc));
4301
4302 /* Release mapping lock acquired in cpumR3DisasInstrRead. */
4303 if (State.fLocked)
4304 PGMPhysReleasePageMappingLock(pVM, &State.PageMapLock);
4305
4306 return rc;
4307}
4308
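/* Usage sketch (illustrative, not from this file): disassemble the current
 * guest instruction and log it with an "EMU" prefix:
 *
 *      DISCPUSTATE Cpu;
 *      int rc = CPUMR3DisasmInstrCPU(pVM, pVCpu, pCtx, pCtx->rip, &Cpu, "EMU");
 *      if (RT_SUCCESS(rc))
 *          Log(("Disassembled %u byte(s) at %RGv\n", Cpu.cbInstr, (RTGCPTR)pCtx->rip));
 */
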
4309
4310
4311/**
4312 * API for controlling a few of the CPU features found in CR4.
4313 *
4314 * Currently only X86_CR4_TSD is accepted as input.
4315 *
4316 * @returns VBox status code.
4317 *
4318 * @param pVM The cross context VM structure.
4319 * @param fOr The CR4 OR mask.
4320 * @param fAnd The CR4 AND mask.
4321 */
4322VMMR3DECL(int) CPUMR3SetCR4Feature(PVM pVM, RTHCUINTREG fOr, RTHCUINTREG fAnd)
4323{
4324 AssertMsgReturn(!(fOr & ~(X86_CR4_TSD)), ("%#x\n", fOr), VERR_INVALID_PARAMETER);
4325 AssertMsgReturn((fAnd & ~(X86_CR4_TSD)) == ~(X86_CR4_TSD), ("%#x\n", fAnd), VERR_INVALID_PARAMETER);
4326
4327 pVM->cpum.s.CR4.OrMask &= fAnd;
4328 pVM->cpum.s.CR4.OrMask |= fOr;
4329
4330 return VINF_SUCCESS;
4331}
4332
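/* Usage sketch (illustrative): force RDTSC to trap by setting CR4.TSD, then
 * clear it again; both calls satisfy the asserts above since only the TSD bit
 * may deviate from the all-ones AND mask:
 *
 *      rc = CPUMR3SetCR4Feature(pVM, X86_CR4_TSD, ~(RTHCUINTREG)0);
 *      rc = CPUMR3SetCR4Feature(pVM, 0, ~(RTHCUINTREG)X86_CR4_TSD);
 */
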
4333
4334/**
4335 * Called when the ring-3 init phase completes.
4336 *
4337 * @returns VBox status code.
4338 * @param pVM The cross context VM structure.
4339 * @param enmWhat Which init phase.
4340 */
4341VMMR3DECL(int) CPUMR3InitCompleted(PVM pVM, VMINITCOMPLETED enmWhat)
4342{
4343 switch (enmWhat)
4344 {
4345 case VMINITCOMPLETED_RING3:
4346 {
4347 /*
4348 * Figure out if the guest uses 32-bit or 64-bit FPU state at runtime for 64-bit capable VMs.
4349 * Only applicable/used on 64-bit hosts; see CPUMR0A.asm and @bugref{7138}.
4350 */
4351 bool const fSupportsLongMode = VMR3IsLongModeAllowed(pVM);
4352 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
4353 {
4354 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
4355
4356                /* While loading a saved-state we fix it up in cpumR3LoadDone(). */
4357 if (fSupportsLongMode)
4358 pVCpu->cpum.s.fUseFlags |= CPUM_USE_SUPPORTS_LONGMODE;
4359 }
4360
4361 /* Register statistic counters for MSRs. */
4362 cpumR3MsrRegStats(pVM);
4363
4364 /* Create VMX-preemption timer for nested guests if required. Must be
4365 done here as CPUM is initialized before TM. */
4366 if (pVM->cpum.s.GuestFeatures.fVmx)
4367 {
4368 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
4369 {
4370 PVMCPU pVCpu = pVM->apCpusR3[idCpu];
4371 char szName[32];
4372 RTStrPrintf(szName, sizeof(szName), "Nested VMX-preemption %u", idCpu);
4373 int rc = TMR3TimerCreate(pVM, TMCLOCK_VIRTUAL_SYNC, cpumR3VmxPreemptTimerCallback, pVCpu,
4374 TMTIMER_FLAGS_RING0, szName, &pVCpu->cpum.s.hNestedVmxPreemptTimer);
4375 AssertLogRelRCReturn(rc, rc);
4376 }
4377 }
4378 break;
4379 }
4380
4381 default:
4382 break;
4383 }
4384 return VINF_SUCCESS;
4385}
4386
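/* Sketch (an assumption, not shown in this part of the file): the timer
 * callback created above is expected to match the FNTMTIMERINT shape, along
 * the lines of:
 *
 *      static DECLCALLBACK(void) cpumR3VmxPreemptTimerCallback(PVM pVM, TMTIMERHANDLE hTimer, void *pvUser)
 *      {
 *          PVMCPU pVCpu = (PVMCPU)pvUser;
 *          ... signal VMX-preemption timer expiry for the nested guest on pVCpu ...
 *      }
 */
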
4387
4388/**
4389 * Called when the ring-0 init phases have completed.
4390 *
4391 * @param pVM The cross context VM structure.
4392 */
4393VMMR3DECL(void) CPUMR3LogCpuIdAndMsrFeatures(PVM pVM)
4394{
4395 /*
4396 * Enable log buffering as we're going to log a lot of lines.
4397 */
4398 bool const fOldBuffered = RTLogRelSetBuffering(true /*fBuffered*/);
4399
4400 /*
4401 * Log the cpuid.
4402 */
4403 RTCPUSET OnlineSet;
4404 LogRel(("CPUM: Logical host processors: %u present, %u max, %u online, online mask: %016RX64\n",
4405 (unsigned)RTMpGetPresentCount(), (unsigned)RTMpGetCount(), (unsigned)RTMpGetOnlineCount(),
4406 RTCpuSetToU64(RTMpGetOnlineSet(&OnlineSet)) ));
4407 RTCPUID cCores = RTMpGetCoreCount();
4408 if (cCores)
4409 LogRel(("CPUM: Physical host cores: %u\n", (unsigned)cCores));
4410 LogRel(("************************* CPUID dump ************************\n"));
4411 DBGFR3Info(pVM->pUVM, "cpuid", "verbose", DBGFR3InfoLogRelHlp());
4412 LogRel(("\n"));
4413 DBGFR3_INFO_LOG_SAFE(pVM, "cpuid", "verbose"); /* macro */
4414 LogRel(("******************** End of CPUID dump **********************\n"));
4415
4416 /*
4417 * Log VT-x extended features.
4418 *
4419 * SVM features are currently all covered under CPUID so there is nothing
4420 * to do here for SVM.
4421 */
4422 if (pVM->cpum.s.HostFeatures.fVmx)
4423 {
4424 LogRel(("*********************** VT-x features ***********************\n"));
4425 DBGFR3Info(pVM->pUVM, "cpumvmxfeat", "default", DBGFR3InfoLogRelHlp());
4426 LogRel(("\n"));
4427 LogRel(("******************* End of VT-x features ********************\n"));
4428 }
4429
4430 /*
4431 * Restore the log buffering state to what it was previously.
4432 */
4433 RTLogRelSetBuffering(fOldBuffered);
4434}
4435