VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/CPUM.cpp@ 74155

Last change on this file since 74155 was 74155, checked in by vboxsync, 6 years ago

VMM: Nested VMX: bugref:9180 VMXVDIAG naming.

  • Property svn:eol-style set to native
  • Property svn:keywords set to Id Revision
File size: 174.7 KB
Line 
1/* $Id: CPUM.cpp 74155 2018-09-09 12:37:26Z vboxsync $ */
2/** @file
3 * CPUM - CPU Monitor / Manager.
4 */
5
6/*
7 * Copyright (C) 2006-2017 Oracle Corporation
8 *
9 * This file is part of VirtualBox Open Source Edition (OSE), as
10 * available from http://www.virtualbox.org. This file is free software;
11 * you can redistribute it and/or modify it under the terms of the GNU
12 * General Public License (GPL) as published by the Free Software
13 * Foundation, in version 2 as it comes in the "COPYING" file of the
14 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
15 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
16 */
17
18/** @page pg_cpum CPUM - CPU Monitor / Manager
19 *
20 * The CPU Monitor / Manager keeps track of all the CPU registers. It is
21 * also responsible for lazy FPU handling and some of the context loading
22 * in raw mode.
23 *
24 * There are three CPU contexts, the most important one is the guest one (GC).
25 * When running in raw-mode (RC) there is a special hyper context for the VMM
26 * part that floats around inside the guest address space. When running in
27 * raw-mode, CPUM also maintains a host context for saving and restoring
28 * registers across world switches. This latter is done in cooperation with the
29 * world switcher (@see pg_vmm).
30 *
31 * @see grp_cpum
32 *
33 * @section sec_cpum_fpu FPU / SSE / AVX / ++ state.
34 *
35 * TODO: proper write up, currently just some notes.
36 *
37 * The ring-0 FPU handling per OS:
38 *
39 * - 64-bit Windows uses XMM registers in the kernel as part of the calling
40 * convention (Visual C++ doesn't seem to have a way to disable
41 * generating such code either), so CR0.TS/EM are always zero from what I
42 * can tell. We are also forced to always load/save the guest XMM0-XMM15
43 * registers when entering/leaving guest context. Interrupt handlers
44 * using FPU/SSE will offically have call save and restore functions
45 * exported by the kernel, if the really really have to use the state.
46 *
47 * - 32-bit windows does lazy FPU handling, I think, probably including
48 * lazying saving. The Windows Internals book states that it's a bad
49 * idea to use the FPU in kernel space. However, it looks like it will
50 * restore the FPU state of the current thread in case of a kernel \#NM.
51 * Interrupt handlers should be same as for 64-bit.
52 *
53 * - Darwin allows taking \#NM in kernel space, restoring current thread's
54 * state if I read the code correctly. It saves the FPU state of the
55 * outgoing thread, and uses CR0.TS to lazily load the state of the
56 * incoming one. No idea yet how the FPU is treated by interrupt
57 * handlers, i.e. whether they are allowed to disable the state or
58 * something.
59 *
60 * - Linux also allows \#NM in kernel space (don't know since when), and
61 * uses CR0.TS for lazy loading. Saves outgoing thread's state, lazy
62 * loads the incoming unless configured to agressivly load it. Interrupt
63 * handlers can ask whether they're allowed to use the FPU, and may
64 * freely trash the state if Linux thinks it has saved the thread's state
65 * already. This is a problem.
66 *
67 * - Solaris will, from what I can tell, panic if it gets an \#NM in kernel
68 * context. When switching threads, the kernel will save the state of
69 * the outgoing thread and lazy load the incoming one using CR0.TS.
70 * There are a few routines in seeblk.s which uses the SSE unit in ring-0
71 * to do stuff, HAT are among the users. The routines there will
72 * manually clear CR0.TS and save the XMM registers they use only if
73 * CR0.TS was zero upon entry. They will skip it when not, because as
74 * mentioned above, the FPU state is saved when switching away from a
75 * thread and CR0.TS set to 1, so when CR0.TS is 1 there is nothing to
76 * preserve. This is a problem if we restore CR0.TS to 1 after loading
77 * the guest state.
78 *
79 * - FreeBSD - no idea yet.
80 *
81 * - OS/2 does not allow \#NMs in kernel space IIRC. Does lazy loading,
82 * possibly also lazy saving. Interrupts must preserve the CR0.TS+EM &
83 * FPU states.
84 *
85 * Up to r107425 (2016-05-24) we would only temporarily modify CR0.TS/EM while
86 * saving and restoring the host and guest states. The motivation for this
87 * change is that we want to be able to emulate SSE instruction in ring-0 (IEM).
88 *
89 * Starting with that change, we will leave CR0.TS=EM=0 after saving the host
90 * state and only restore it once we've restore the host FPU state. This has the
91 * accidental side effect of triggering Solaris to preserve XMM registers in
92 * sseblk.s. When CR0 was changed by saving the FPU state, CPUM must now inform
93 * the VT-x (HMVMX) code about it as it caches the CR0 value in the VMCS.
94 *
95 *
96 * @section sec_cpum_logging Logging Level Assignments.
97 *
98 * Following log level assignments:
99 * - Log6 is used for FPU state management.
100 * - Log7 is used for FPU state actualization.
101 *
102 */
103
104
105/*********************************************************************************************************************************
106* Header Files *
107*********************************************************************************************************************************/
108#define LOG_GROUP LOG_GROUP_CPUM
109#include <VBox/vmm/cpum.h>
110#include <VBox/vmm/cpumdis.h>
111#include <VBox/vmm/cpumctx-v1_6.h>
112#include <VBox/vmm/pgm.h>
113#include <VBox/vmm/apic.h>
114#include <VBox/vmm/mm.h>
115#include <VBox/vmm/em.h>
116#include <VBox/vmm/iem.h>
117#include <VBox/vmm/selm.h>
118#include <VBox/vmm/dbgf.h>
119#include <VBox/vmm/patm.h>
120#include <VBox/vmm/hm.h>
121#include <VBox/vmm/ssm.h>
122#include "CPUMInternal.h"
123#include <VBox/vmm/vm.h>
124
125#include <VBox/param.h>
126#include <VBox/dis.h>
127#include <VBox/err.h>
128#include <VBox/log.h>
129#include <iprt/asm-amd64-x86.h>
130#include <iprt/assert.h>
131#include <iprt/cpuset.h>
132#include <iprt/mem.h>
133#include <iprt/mp.h>
134#include <iprt/string.h>
135
136
137/*********************************************************************************************************************************
138* Defined Constants And Macros *
139*********************************************************************************************************************************/
140/**
141 * This was used in the saved state up to the early life of version 14.
142 *
143 * It indicates that we may have some out-of-sync hidden segement registers.
144 * It is only relevant for raw-mode.
145 */
146#define CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID RT_BIT(12)
147
148
149/*********************************************************************************************************************************
150* Structures and Typedefs *
151*********************************************************************************************************************************/
152
153/**
154 * What kind of cpu info dump to perform.
155 */
156typedef enum CPUMDUMPTYPE
157{
158 CPUMDUMPTYPE_TERSE,
159 CPUMDUMPTYPE_DEFAULT,
160 CPUMDUMPTYPE_VERBOSE
161} CPUMDUMPTYPE;
162/** Pointer to a cpu info dump type. */
163typedef CPUMDUMPTYPE *PCPUMDUMPTYPE;
164
165
166/*********************************************************************************************************************************
167* Internal Functions *
168*********************************************************************************************************************************/
169static DECLCALLBACK(int) cpumR3LiveExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uPass);
170static DECLCALLBACK(int) cpumR3SaveExec(PVM pVM, PSSMHANDLE pSSM);
171static DECLCALLBACK(int) cpumR3LoadPrep(PVM pVM, PSSMHANDLE pSSM);
172static DECLCALLBACK(int) cpumR3LoadExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass);
173static DECLCALLBACK(int) cpumR3LoadDone(PVM pVM, PSSMHANDLE pSSM);
174static DECLCALLBACK(void) cpumR3InfoAll(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
175static DECLCALLBACK(void) cpumR3InfoGuest(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
176static DECLCALLBACK(void) cpumR3InfoGuestHwvirt(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
177static DECLCALLBACK(void) cpumR3InfoGuestInstr(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
178static DECLCALLBACK(void) cpumR3InfoHyper(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
179static DECLCALLBACK(void) cpumR3InfoHost(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
180
181
182/*********************************************************************************************************************************
183* Global Variables *
184*********************************************************************************************************************************/
185/** Saved state field descriptors for CPUMCTX. */
186static const SSMFIELD g_aCpumCtxFields[] =
187{
188 SSMFIELD_ENTRY( CPUMCTX, rdi),
189 SSMFIELD_ENTRY( CPUMCTX, rsi),
190 SSMFIELD_ENTRY( CPUMCTX, rbp),
191 SSMFIELD_ENTRY( CPUMCTX, rax),
192 SSMFIELD_ENTRY( CPUMCTX, rbx),
193 SSMFIELD_ENTRY( CPUMCTX, rdx),
194 SSMFIELD_ENTRY( CPUMCTX, rcx),
195 SSMFIELD_ENTRY( CPUMCTX, rsp),
196 SSMFIELD_ENTRY( CPUMCTX, rflags),
197 SSMFIELD_ENTRY( CPUMCTX, rip),
198 SSMFIELD_ENTRY( CPUMCTX, r8),
199 SSMFIELD_ENTRY( CPUMCTX, r9),
200 SSMFIELD_ENTRY( CPUMCTX, r10),
201 SSMFIELD_ENTRY( CPUMCTX, r11),
202 SSMFIELD_ENTRY( CPUMCTX, r12),
203 SSMFIELD_ENTRY( CPUMCTX, r13),
204 SSMFIELD_ENTRY( CPUMCTX, r14),
205 SSMFIELD_ENTRY( CPUMCTX, r15),
206 SSMFIELD_ENTRY( CPUMCTX, es.Sel),
207 SSMFIELD_ENTRY( CPUMCTX, es.ValidSel),
208 SSMFIELD_ENTRY( CPUMCTX, es.fFlags),
209 SSMFIELD_ENTRY( CPUMCTX, es.u64Base),
210 SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
211 SSMFIELD_ENTRY( CPUMCTX, es.Attr),
212 SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
213 SSMFIELD_ENTRY( CPUMCTX, cs.ValidSel),
214 SSMFIELD_ENTRY( CPUMCTX, cs.fFlags),
215 SSMFIELD_ENTRY( CPUMCTX, cs.u64Base),
216 SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
217 SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
218 SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
219 SSMFIELD_ENTRY( CPUMCTX, ss.ValidSel),
220 SSMFIELD_ENTRY( CPUMCTX, ss.fFlags),
221 SSMFIELD_ENTRY( CPUMCTX, ss.u64Base),
222 SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
223 SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
224 SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
225 SSMFIELD_ENTRY( CPUMCTX, ds.ValidSel),
226 SSMFIELD_ENTRY( CPUMCTX, ds.fFlags),
227 SSMFIELD_ENTRY( CPUMCTX, ds.u64Base),
228 SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
229 SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
230 SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
231 SSMFIELD_ENTRY( CPUMCTX, fs.ValidSel),
232 SSMFIELD_ENTRY( CPUMCTX, fs.fFlags),
233 SSMFIELD_ENTRY( CPUMCTX, fs.u64Base),
234 SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
235 SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
236 SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
237 SSMFIELD_ENTRY( CPUMCTX, gs.ValidSel),
238 SSMFIELD_ENTRY( CPUMCTX, gs.fFlags),
239 SSMFIELD_ENTRY( CPUMCTX, gs.u64Base),
240 SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
241 SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
242 SSMFIELD_ENTRY( CPUMCTX, cr0),
243 SSMFIELD_ENTRY( CPUMCTX, cr2),
244 SSMFIELD_ENTRY( CPUMCTX, cr3),
245 SSMFIELD_ENTRY( CPUMCTX, cr4),
246 SSMFIELD_ENTRY( CPUMCTX, dr[0]),
247 SSMFIELD_ENTRY( CPUMCTX, dr[1]),
248 SSMFIELD_ENTRY( CPUMCTX, dr[2]),
249 SSMFIELD_ENTRY( CPUMCTX, dr[3]),
250 SSMFIELD_ENTRY( CPUMCTX, dr[6]),
251 SSMFIELD_ENTRY( CPUMCTX, dr[7]),
252 SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
253 SSMFIELD_ENTRY( CPUMCTX, gdtr.pGdt),
254 SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
255 SSMFIELD_ENTRY( CPUMCTX, idtr.pIdt),
256 SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
257 SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
258 SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
259 SSMFIELD_ENTRY( CPUMCTX, msrEFER),
260 SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
261 SSMFIELD_ENTRY( CPUMCTX, msrPAT),
262 SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
263 SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
264 SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
265 SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
266 SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
267 SSMFIELD_ENTRY( CPUMCTX, ldtr.ValidSel),
268 SSMFIELD_ENTRY( CPUMCTX, ldtr.fFlags),
269 SSMFIELD_ENTRY( CPUMCTX, ldtr.u64Base),
270 SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
271 SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
272 SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
273 SSMFIELD_ENTRY( CPUMCTX, tr.ValidSel),
274 SSMFIELD_ENTRY( CPUMCTX, tr.fFlags),
275 SSMFIELD_ENTRY( CPUMCTX, tr.u64Base),
276 SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
277 SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
278 SSMFIELD_ENTRY_VER( CPUMCTX, aXcr[0], CPUM_SAVED_STATE_VERSION_XSAVE),
279 SSMFIELD_ENTRY_VER( CPUMCTX, aXcr[1], CPUM_SAVED_STATE_VERSION_XSAVE),
280 SSMFIELD_ENTRY_VER( CPUMCTX, fXStateMask, CPUM_SAVED_STATE_VERSION_XSAVE),
281 SSMFIELD_ENTRY_TERM()
282};
283
284/** Saved state field descriptors for SVM nested hardware-virtualization
285 * Host State. */
286static const SSMFIELD g_aSvmHwvirtHostState[] =
287{
288 SSMFIELD_ENTRY( SVMHOSTSTATE, uEferMsr),
289 SSMFIELD_ENTRY( SVMHOSTSTATE, uCr0),
290 SSMFIELD_ENTRY( SVMHOSTSTATE, uCr4),
291 SSMFIELD_ENTRY( SVMHOSTSTATE, uCr3),
292 SSMFIELD_ENTRY( SVMHOSTSTATE, uRip),
293 SSMFIELD_ENTRY( SVMHOSTSTATE, uRsp),
294 SSMFIELD_ENTRY( SVMHOSTSTATE, uRax),
295 SSMFIELD_ENTRY( SVMHOSTSTATE, rflags),
296 SSMFIELD_ENTRY( SVMHOSTSTATE, es.Sel),
297 SSMFIELD_ENTRY( SVMHOSTSTATE, es.ValidSel),
298 SSMFIELD_ENTRY( SVMHOSTSTATE, es.fFlags),
299 SSMFIELD_ENTRY( SVMHOSTSTATE, es.u64Base),
300 SSMFIELD_ENTRY( SVMHOSTSTATE, es.u32Limit),
301 SSMFIELD_ENTRY( SVMHOSTSTATE, es.Attr),
302 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.Sel),
303 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.ValidSel),
304 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.fFlags),
305 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.u64Base),
306 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.u32Limit),
307 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.Attr),
308 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.Sel),
309 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.ValidSel),
310 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.fFlags),
311 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.u64Base),
312 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.u32Limit),
313 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.Attr),
314 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.Sel),
315 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.ValidSel),
316 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.fFlags),
317 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.u64Base),
318 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.u32Limit),
319 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.Attr),
320 SSMFIELD_ENTRY( SVMHOSTSTATE, gdtr.cbGdt),
321 SSMFIELD_ENTRY( SVMHOSTSTATE, gdtr.pGdt),
322 SSMFIELD_ENTRY( SVMHOSTSTATE, idtr.cbIdt),
323 SSMFIELD_ENTRY( SVMHOSTSTATE, idtr.pIdt),
324 SSMFIELD_ENTRY_IGNORE(SVMHOSTSTATE, abPadding),
325 SSMFIELD_ENTRY_TERM()
326};
327
328/** Saved state field descriptors for CPUMCTX. */
329static const SSMFIELD g_aCpumX87Fields[] =
330{
331 SSMFIELD_ENTRY( X86FXSTATE, FCW),
332 SSMFIELD_ENTRY( X86FXSTATE, FSW),
333 SSMFIELD_ENTRY( X86FXSTATE, FTW),
334 SSMFIELD_ENTRY( X86FXSTATE, FOP),
335 SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
336 SSMFIELD_ENTRY( X86FXSTATE, CS),
337 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
338 SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
339 SSMFIELD_ENTRY( X86FXSTATE, DS),
340 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
341 SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
342 SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
343 SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
344 SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
345 SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
346 SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
347 SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
348 SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
349 SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
350 SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
351 SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
352 SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
353 SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
354 SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
355 SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
356 SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
357 SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
358 SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
359 SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
360 SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
361 SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
362 SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
363 SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
364 SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
365 SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
366 SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
367 SSMFIELD_ENTRY_VER( X86FXSTATE, au32RsrvdForSoftware[0], CPUM_SAVED_STATE_VERSION_XSAVE), /* 32-bit/64-bit hack */
368 SSMFIELD_ENTRY_TERM()
369};
370
371/** Saved state field descriptors for X86XSAVEHDR. */
372static const SSMFIELD g_aCpumXSaveHdrFields[] =
373{
374 SSMFIELD_ENTRY( X86XSAVEHDR, bmXState),
375 SSMFIELD_ENTRY_TERM()
376};
377
378/** Saved state field descriptors for X86XSAVEYMMHI. */
379static const SSMFIELD g_aCpumYmmHiFields[] =
380{
381 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[0]),
382 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[1]),
383 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[2]),
384 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[3]),
385 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[4]),
386 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[5]),
387 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[6]),
388 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[7]),
389 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[8]),
390 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[9]),
391 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[10]),
392 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[11]),
393 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[12]),
394 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[13]),
395 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[14]),
396 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[15]),
397 SSMFIELD_ENTRY_TERM()
398};
399
400/** Saved state field descriptors for X86XSAVEBNDREGS. */
401static const SSMFIELD g_aCpumBndRegsFields[] =
402{
403 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[0]),
404 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[1]),
405 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[2]),
406 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[3]),
407 SSMFIELD_ENTRY_TERM()
408};
409
410/** Saved state field descriptors for X86XSAVEBNDCFG. */
411static const SSMFIELD g_aCpumBndCfgFields[] =
412{
413 SSMFIELD_ENTRY( X86XSAVEBNDCFG, fConfig),
414 SSMFIELD_ENTRY( X86XSAVEBNDCFG, fStatus),
415 SSMFIELD_ENTRY_TERM()
416};
417
418#if 0 /** @todo */
419/** Saved state field descriptors for X86XSAVEOPMASK. */
420static const SSMFIELD g_aCpumOpmaskFields[] =
421{
422 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[0]),
423 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[1]),
424 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[2]),
425 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[3]),
426 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[4]),
427 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[5]),
428 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[6]),
429 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[7]),
430 SSMFIELD_ENTRY_TERM()
431};
432#endif
433
434/** Saved state field descriptors for X86XSAVEZMMHI256. */
435static const SSMFIELD g_aCpumZmmHi256Fields[] =
436{
437 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[0]),
438 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[1]),
439 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[2]),
440 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[3]),
441 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[4]),
442 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[5]),
443 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[6]),
444 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[7]),
445 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[8]),
446 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[9]),
447 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[10]),
448 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[11]),
449 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[12]),
450 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[13]),
451 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[14]),
452 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[15]),
453 SSMFIELD_ENTRY_TERM()
454};
455
456/** Saved state field descriptors for X86XSAVEZMM16HI. */
457static const SSMFIELD g_aCpumZmm16HiFields[] =
458{
459 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[0]),
460 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[1]),
461 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[2]),
462 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[3]),
463 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[4]),
464 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[5]),
465 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[6]),
466 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[7]),
467 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[8]),
468 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[9]),
469 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[10]),
470 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[11]),
471 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[12]),
472 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[13]),
473 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[14]),
474 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[15]),
475 SSMFIELD_ENTRY_TERM()
476};
477
478
479
480/** Saved state field descriptors for CPUMCTX in V4.1 before the hidden selector
481 * registeres changed. */
482static const SSMFIELD g_aCpumX87FieldsMem[] =
483{
484 SSMFIELD_ENTRY( X86FXSTATE, FCW),
485 SSMFIELD_ENTRY( X86FXSTATE, FSW),
486 SSMFIELD_ENTRY( X86FXSTATE, FTW),
487 SSMFIELD_ENTRY( X86FXSTATE, FOP),
488 SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
489 SSMFIELD_ENTRY( X86FXSTATE, CS),
490 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
491 SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
492 SSMFIELD_ENTRY( X86FXSTATE, DS),
493 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
494 SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
495 SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
496 SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
497 SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
498 SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
499 SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
500 SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
501 SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
502 SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
503 SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
504 SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
505 SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
506 SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
507 SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
508 SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
509 SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
510 SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
511 SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
512 SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
513 SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
514 SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
515 SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
516 SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
517 SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
518 SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
519 SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
520 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdRest),
521 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdForSoftware),
522};
523
524/** Saved state field descriptors for CPUMCTX in V4.1 before the hidden selector
525 * registeres changed. */
526static const SSMFIELD g_aCpumCtxFieldsMem[] =
527{
528 SSMFIELD_ENTRY( CPUMCTX, rdi),
529 SSMFIELD_ENTRY( CPUMCTX, rsi),
530 SSMFIELD_ENTRY( CPUMCTX, rbp),
531 SSMFIELD_ENTRY( CPUMCTX, rax),
532 SSMFIELD_ENTRY( CPUMCTX, rbx),
533 SSMFIELD_ENTRY( CPUMCTX, rdx),
534 SSMFIELD_ENTRY( CPUMCTX, rcx),
535 SSMFIELD_ENTRY( CPUMCTX, rsp),
536 SSMFIELD_ENTRY_OLD( lss_esp, sizeof(uint32_t)),
537 SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
538 SSMFIELD_ENTRY_OLD( ssPadding, sizeof(uint16_t)),
539 SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
540 SSMFIELD_ENTRY_OLD( gsPadding, sizeof(uint16_t)),
541 SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
542 SSMFIELD_ENTRY_OLD( fsPadding, sizeof(uint16_t)),
543 SSMFIELD_ENTRY( CPUMCTX, es.Sel),
544 SSMFIELD_ENTRY_OLD( esPadding, sizeof(uint16_t)),
545 SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
546 SSMFIELD_ENTRY_OLD( dsPadding, sizeof(uint16_t)),
547 SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
548 SSMFIELD_ENTRY_OLD( csPadding, sizeof(uint16_t)*3),
549 SSMFIELD_ENTRY( CPUMCTX, rflags),
550 SSMFIELD_ENTRY( CPUMCTX, rip),
551 SSMFIELD_ENTRY( CPUMCTX, r8),
552 SSMFIELD_ENTRY( CPUMCTX, r9),
553 SSMFIELD_ENTRY( CPUMCTX, r10),
554 SSMFIELD_ENTRY( CPUMCTX, r11),
555 SSMFIELD_ENTRY( CPUMCTX, r12),
556 SSMFIELD_ENTRY( CPUMCTX, r13),
557 SSMFIELD_ENTRY( CPUMCTX, r14),
558 SSMFIELD_ENTRY( CPUMCTX, r15),
559 SSMFIELD_ENTRY( CPUMCTX, es.u64Base),
560 SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
561 SSMFIELD_ENTRY( CPUMCTX, es.Attr),
562 SSMFIELD_ENTRY( CPUMCTX, cs.u64Base),
563 SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
564 SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
565 SSMFIELD_ENTRY( CPUMCTX, ss.u64Base),
566 SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
567 SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
568 SSMFIELD_ENTRY( CPUMCTX, ds.u64Base),
569 SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
570 SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
571 SSMFIELD_ENTRY( CPUMCTX, fs.u64Base),
572 SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
573 SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
574 SSMFIELD_ENTRY( CPUMCTX, gs.u64Base),
575 SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
576 SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
577 SSMFIELD_ENTRY( CPUMCTX, cr0),
578 SSMFIELD_ENTRY( CPUMCTX, cr2),
579 SSMFIELD_ENTRY( CPUMCTX, cr3),
580 SSMFIELD_ENTRY( CPUMCTX, cr4),
581 SSMFIELD_ENTRY( CPUMCTX, dr[0]),
582 SSMFIELD_ENTRY( CPUMCTX, dr[1]),
583 SSMFIELD_ENTRY( CPUMCTX, dr[2]),
584 SSMFIELD_ENTRY( CPUMCTX, dr[3]),
585 SSMFIELD_ENTRY_OLD( dr[4], sizeof(uint64_t)),
586 SSMFIELD_ENTRY_OLD( dr[5], sizeof(uint64_t)),
587 SSMFIELD_ENTRY( CPUMCTX, dr[6]),
588 SSMFIELD_ENTRY( CPUMCTX, dr[7]),
589 SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
590 SSMFIELD_ENTRY( CPUMCTX, gdtr.pGdt),
591 SSMFIELD_ENTRY_OLD( gdtrPadding, sizeof(uint16_t)),
592 SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
593 SSMFIELD_ENTRY( CPUMCTX, idtr.pIdt),
594 SSMFIELD_ENTRY_OLD( idtrPadding, sizeof(uint16_t)),
595 SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
596 SSMFIELD_ENTRY_OLD( ldtrPadding, sizeof(uint16_t)),
597 SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
598 SSMFIELD_ENTRY_OLD( trPadding, sizeof(uint16_t)),
599 SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
600 SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
601 SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
602 SSMFIELD_ENTRY( CPUMCTX, msrEFER),
603 SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
604 SSMFIELD_ENTRY( CPUMCTX, msrPAT),
605 SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
606 SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
607 SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
608 SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
609 SSMFIELD_ENTRY( CPUMCTX, ldtr.u64Base),
610 SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
611 SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
612 SSMFIELD_ENTRY( CPUMCTX, tr.u64Base),
613 SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
614 SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
615 SSMFIELD_ENTRY_TERM()
616};
617
618/** Saved state field descriptors for CPUMCTX_VER1_6. */
619static const SSMFIELD g_aCpumX87FieldsV16[] =
620{
621 SSMFIELD_ENTRY( X86FXSTATE, FCW),
622 SSMFIELD_ENTRY( X86FXSTATE, FSW),
623 SSMFIELD_ENTRY( X86FXSTATE, FTW),
624 SSMFIELD_ENTRY( X86FXSTATE, FOP),
625 SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
626 SSMFIELD_ENTRY( X86FXSTATE, CS),
627 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
628 SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
629 SSMFIELD_ENTRY( X86FXSTATE, DS),
630 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
631 SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
632 SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
633 SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
634 SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
635 SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
636 SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
637 SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
638 SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
639 SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
640 SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
641 SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
642 SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
643 SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
644 SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
645 SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
646 SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
647 SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
648 SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
649 SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
650 SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
651 SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
652 SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
653 SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
654 SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
655 SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
656 SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
657 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdRest),
658 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdForSoftware),
659 SSMFIELD_ENTRY_TERM()
660};
661
662/** Saved state field descriptors for CPUMCTX_VER1_6. */
663static const SSMFIELD g_aCpumCtxFieldsV16[] =
664{
665 SSMFIELD_ENTRY( CPUMCTX, rdi),
666 SSMFIELD_ENTRY( CPUMCTX, rsi),
667 SSMFIELD_ENTRY( CPUMCTX, rbp),
668 SSMFIELD_ENTRY( CPUMCTX, rax),
669 SSMFIELD_ENTRY( CPUMCTX, rbx),
670 SSMFIELD_ENTRY( CPUMCTX, rdx),
671 SSMFIELD_ENTRY( CPUMCTX, rcx),
672 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, rsp),
673 SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
674 SSMFIELD_ENTRY_OLD( ssPadding, sizeof(uint16_t)),
675 SSMFIELD_ENTRY_OLD( CPUMCTX, sizeof(uint64_t) /*rsp_notused*/),
676 SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
677 SSMFIELD_ENTRY_OLD( gsPadding, sizeof(uint16_t)),
678 SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
679 SSMFIELD_ENTRY_OLD( fsPadding, sizeof(uint16_t)),
680 SSMFIELD_ENTRY( CPUMCTX, es.Sel),
681 SSMFIELD_ENTRY_OLD( esPadding, sizeof(uint16_t)),
682 SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
683 SSMFIELD_ENTRY_OLD( dsPadding, sizeof(uint16_t)),
684 SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
685 SSMFIELD_ENTRY_OLD( csPadding, sizeof(uint16_t)*3),
686 SSMFIELD_ENTRY( CPUMCTX, rflags),
687 SSMFIELD_ENTRY( CPUMCTX, rip),
688 SSMFIELD_ENTRY( CPUMCTX, r8),
689 SSMFIELD_ENTRY( CPUMCTX, r9),
690 SSMFIELD_ENTRY( CPUMCTX, r10),
691 SSMFIELD_ENTRY( CPUMCTX, r11),
692 SSMFIELD_ENTRY( CPUMCTX, r12),
693 SSMFIELD_ENTRY( CPUMCTX, r13),
694 SSMFIELD_ENTRY( CPUMCTX, r14),
695 SSMFIELD_ENTRY( CPUMCTX, r15),
696 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, es.u64Base),
697 SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
698 SSMFIELD_ENTRY( CPUMCTX, es.Attr),
699 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, cs.u64Base),
700 SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
701 SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
702 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ss.u64Base),
703 SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
704 SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
705 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ds.u64Base),
706 SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
707 SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
708 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, fs.u64Base),
709 SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
710 SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
711 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, gs.u64Base),
712 SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
713 SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
714 SSMFIELD_ENTRY( CPUMCTX, cr0),
715 SSMFIELD_ENTRY( CPUMCTX, cr2),
716 SSMFIELD_ENTRY( CPUMCTX, cr3),
717 SSMFIELD_ENTRY( CPUMCTX, cr4),
718 SSMFIELD_ENTRY_OLD( cr8, sizeof(uint64_t)),
719 SSMFIELD_ENTRY( CPUMCTX, dr[0]),
720 SSMFIELD_ENTRY( CPUMCTX, dr[1]),
721 SSMFIELD_ENTRY( CPUMCTX, dr[2]),
722 SSMFIELD_ENTRY( CPUMCTX, dr[3]),
723 SSMFIELD_ENTRY_OLD( dr[4], sizeof(uint64_t)),
724 SSMFIELD_ENTRY_OLD( dr[5], sizeof(uint64_t)),
725 SSMFIELD_ENTRY( CPUMCTX, dr[6]),
726 SSMFIELD_ENTRY( CPUMCTX, dr[7]),
727 SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
728 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, gdtr.pGdt),
729 SSMFIELD_ENTRY_OLD( gdtrPadding, sizeof(uint16_t)),
730 SSMFIELD_ENTRY_OLD( gdtrPadding64, sizeof(uint64_t)),
731 SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
732 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, idtr.pIdt),
733 SSMFIELD_ENTRY_OLD( idtrPadding, sizeof(uint16_t)),
734 SSMFIELD_ENTRY_OLD( idtrPadding64, sizeof(uint64_t)),
735 SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
736 SSMFIELD_ENTRY_OLD( ldtrPadding, sizeof(uint16_t)),
737 SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
738 SSMFIELD_ENTRY_OLD( trPadding, sizeof(uint16_t)),
739 SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
740 SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
741 SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
742 SSMFIELD_ENTRY( CPUMCTX, msrEFER),
743 SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
744 SSMFIELD_ENTRY( CPUMCTX, msrPAT),
745 SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
746 SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
747 SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
748 SSMFIELD_ENTRY_OLD( msrFSBASE, sizeof(uint64_t)),
749 SSMFIELD_ENTRY_OLD( msrGSBASE, sizeof(uint64_t)),
750 SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
751 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ldtr.u64Base),
752 SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
753 SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
754 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, tr.u64Base),
755 SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
756 SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
757 SSMFIELD_ENTRY_OLD( padding, sizeof(uint32_t)*2),
758 SSMFIELD_ENTRY_TERM()
759};
760
761
762/**
763 * Checks for partial/leaky FXSAVE/FXRSTOR handling on AMD CPUs.
764 *
765 * AMD K7, K8 and newer AMD CPUs do not save/restore the x87 error pointers
766 * (last instruction pointer, last data pointer, last opcode) except when the ES
767 * bit (Exception Summary) in x87 FSW (FPU Status Word) is set. Thus if we don't
768 * clear these registers there is potential, local FPU leakage from a process
769 * using the FPU to another.
770 *
771 * See AMD Instruction Reference for FXSAVE, FXRSTOR.
772 *
773 * @param pVM The cross context VM structure.
774 */
775static void cpumR3CheckLeakyFpu(PVM pVM)
776{
777 uint32_t u32CpuVersion = ASMCpuId_EAX(1);
778 uint32_t const u32Family = u32CpuVersion >> 8;
779 if ( u32Family >= 6 /* K7 and higher */
780 && ASMIsAmdCpu())
781 {
782 uint32_t cExt = ASMCpuId_EAX(0x80000000);
783 if (ASMIsValidExtRange(cExt))
784 {
785 uint32_t fExtFeaturesEDX = ASMCpuId_EDX(0x80000001);
786 if (fExtFeaturesEDX & X86_CPUID_AMD_FEATURE_EDX_FFXSR)
787 {
788 for (VMCPUID i = 0; i < pVM->cCpus; i++)
789 pVM->aCpus[i].cpum.s.fUseFlags |= CPUM_USE_FFXSR_LEAKY;
790 Log(("CPUMR3Init: host CPU has leaky fxsave/fxrstor behaviour\n"));
791 }
792 }
793 }
794}
795
796
797/**
798 * Frees memory allocated for the SVM hardware virtualization state.
799 *
800 * @param pVM The cross context VM structure.
801 */
802static void cpumR3FreeSvmHwVirtState(PVM pVM)
803{
804 Assert(pVM->cpum.ro.GuestFeatures.fSvm);
805 for (VMCPUID i = 0; i < pVM->cCpus; i++)
806 {
807 PVMCPU pVCpu = &pVM->aCpus[i];
808 if (pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3)
809 {
810 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3, SVM_VMCB_PAGES);
811 pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3 = NULL;
812 }
813 pVCpu->cpum.s.Guest.hwvirt.svm.HCPhysVmcb = NIL_RTHCPHYS;
814
815 if (pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3)
816 {
817 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3, SVM_MSRPM_PAGES);
818 pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3 = NULL;
819 }
820
821 if (pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3)
822 {
823 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3, SVM_IOPM_PAGES);
824 pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3 = NULL;
825 }
826 }
827}
828
829
830/**
831 * Allocates memory for the SVM hardware virtualization state.
832 *
833 * @returns VBox status code.
834 * @param pVM The cross context VM structure.
835 */
836static int cpumR3AllocSvmHwVirtState(PVM pVM)
837{
838 Assert(pVM->cpum.ro.GuestFeatures.fSvm);
839
840 int rc = VINF_SUCCESS;
841 LogRel(("CPUM: Allocating %u pages for the nested-guest SVM MSR and IO permission bitmaps\n",
842 pVM->cCpus * (SVM_MSRPM_PAGES + SVM_IOPM_PAGES)));
843 for (VMCPUID i = 0; i < pVM->cCpus; i++)
844 {
845 PVMCPU pVCpu = &pVM->aCpus[i];
846
847 /*
848 * Allocate the nested-guest VMCB.
849 */
850 SUPPAGE SupNstGstVmcbPage;
851 RT_ZERO(SupNstGstVmcbPage);
852 SupNstGstVmcbPage.Phys = NIL_RTHCPHYS;
853 Assert(SVM_VMCB_PAGES == 1);
854 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3);
855 rc = SUPR3PageAllocEx(SVM_VMCB_PAGES, 0 /* fFlags */, (void **)&pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3,
856 &pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR0, &SupNstGstVmcbPage);
857 if (RT_FAILURE(rc))
858 {
859 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3);
860 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VMCB\n", pVCpu->idCpu, SVM_VMCB_PAGES));
861 break;
862 }
863 pVCpu->cpum.s.Guest.hwvirt.svm.HCPhysVmcb = SupNstGstVmcbPage.Phys;
864
865 /*
866 * Allocate the MSRPM (MSR Permission bitmap).
867 */
868 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3);
869 rc = SUPR3PageAllocEx(SVM_MSRPM_PAGES, 0 /* fFlags */, &pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3,
870 &pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR0, NULL /* paPages */);
871 if (RT_FAILURE(rc))
872 {
873 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3);
874 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's MSR permission bitmap\n", pVCpu->idCpu,
875 SVM_MSRPM_PAGES));
876 break;
877 }
878
879 /*
880 * Allocate the IOPM (IO Permission bitmap).
881 */
882 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3);
883 rc = SUPR3PageAllocEx(SVM_IOPM_PAGES, 0 /* fFlags */, &pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3,
884 &pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR0, NULL /* paPages */);
885 if (RT_FAILURE(rc))
886 {
887 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3);
888 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's IO permission bitmap\n", pVCpu->idCpu,
889 SVM_IOPM_PAGES));
890 break;
891 }
892 }
893
894 /* On any failure, cleanup. */
895 if (RT_FAILURE(rc))
896 cpumR3FreeSvmHwVirtState(pVM);
897
898 return rc;
899}
900
901
902/**
903 * Frees memory allocated for the VMX hardware virtualization state.
904 *
905 * @param pVM The cross context VM structure.
906 */
907static void cpumR3FreeVmxHwVirtState(PVM pVM)
908{
909 Assert(pVM->cpum.ro.GuestFeatures.fVmx);
910 for (VMCPUID i = 0; i < pVM->cCpus; i++)
911 {
912 PVMCPU pVCpu = &pVM->aCpus[i];
913 if (pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR3)
914 {
915 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR3, VMX_V_VMCS_PAGES);
916 pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR3 = NULL;
917 }
918 if (pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR3)
919 {
920 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR3, VMX_V_VIRT_APIC_PAGES);
921 pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR3 = NULL;
922 }
923 if (pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR3)
924 {
925 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR3, VMX_V_VMREAD_VMWRITE_BITMAP_PAGES);
926 pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR3 = NULL;
927 }
928 if (pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR3)
929 {
930 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR3, VMX_V_VMREAD_VMWRITE_BITMAP_PAGES);
931 pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR3 = NULL;
932 }
933 }
934}
935
936
937/**
938 * Allocates memory for the VMX hardware virtualization state.
939 *
940 * @returns VBox status code.
941 * @param pVM The cross context VM structure.
942 */
943static int cpumR3AllocVmxHwVirtState(PVM pVM)
944{
945 int rc = VINF_SUCCESS;
946 LogRel(("CPUM: Allocating %u pages for the nested-guest VMCS\n", pVM->cCpus * VMX_V_VMCS_SIZE));
947 for (VMCPUID i = 0; i < pVM->cCpus; i++)
948 {
949 PVMCPU pVCpu = &pVM->aCpus[i];
950
951 /*
952 * Allocate the nested-guest current VMCS.
953 */
954 SUPPAGE SupNstGstVmcsPage;
955 RT_ZERO(SupNstGstVmcsPage);
956 SupNstGstVmcsPage.Phys = NIL_RTHCPHYS;
957 Assert(VMX_V_VMCS_PAGES == 1);
958 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR3);
959 rc = SUPR3PageAllocEx(VMX_V_VMCS_PAGES, 0 /* fFlags */, (void **)&pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR3,
960 &pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR0, &SupNstGstVmcsPage);
961 if (RT_FAILURE(rc))
962 {
963 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR3);
964 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VMCS\n", pVCpu->idCpu, VMX_V_VMCS_PAGES));
965 break;
966 }
967
968 /*
969 * Allocate the Virtual-APIC page.
970 */
971 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR3);
972 rc = SUPR3PageAllocEx(VMX_V_VIRT_APIC_PAGES, 0 /* fFlags */, &pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR3,
973 &pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR0, NULL /* paPages */);
974 if (RT_FAILURE(rc))
975 {
976 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR3);
977 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's Virtual-APIC page\n", pVCpu->idCpu,
978 VMX_V_VIRT_APIC_PAGES));
979 break;
980 }
981
982 /*
983 * Allocate the VMREAD-bitmap.
984 */
985 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR3);
986 rc = SUPR3PageAllocEx(VMX_V_VMREAD_VMWRITE_BITMAP_PAGES, 0 /* fFlags */, &pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR3,
987 &pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR0, NULL /* paPages */);
988 if (RT_FAILURE(rc))
989 {
990 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR3);
991 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VMREAD-bitmap\n", pVCpu->idCpu,
992 VMX_V_VMREAD_VMWRITE_BITMAP_PAGES));
993 break;
994 }
995
996 /*
997 * Allocatge the VMWRITE-bitmap.
998 */
999 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR3);
1000 rc = SUPR3PageAllocEx(VMX_V_VMREAD_VMWRITE_BITMAP_PAGES, 0 /* fFlags */,
1001 &pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR3,
1002 &pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR0, NULL /* paPages */);
1003 if (RT_FAILURE(rc))
1004 {
1005 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR3);
1006 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VMWRITE-bitmap\n", pVCpu->idCpu,
1007 VMX_V_VMREAD_VMWRITE_BITMAP_PAGES));
1008 break;
1009 }
1010 }
1011
1012 /* On any failure, cleanup. */
1013 if (RT_FAILURE(rc))
1014 cpumR3FreeVmxHwVirtState(pVM);
1015
1016 return rc;
1017}
1018
1019
1020/**
1021 * Displays the host and guest VMX features.
1022 *
1023 * @param pVM The cross context VM structure.
1024 * @param pHlp The info helper functions.
1025 * @param pszArgs "terse", "default" or "verbose".
1026 */
1027DECLCALLBACK(void) cpumR3InfoVmxFeatures(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
1028{
1029 RT_NOREF(pszArgs);
1030 PCCPUMFEATURES pHostFeatures = &pVM->cpum.s.HostFeatures;
1031 PCCPUMFEATURES pGuestFeatures = &pVM->cpum.s.GuestFeatures;
1032 if ( pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_INTEL
1033 || pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_VIA)
1034 {
1035#define VMXFEATDUMP(a_szDesc, a_Var) \
1036 pHlp->pfnPrintf(pHlp, " %s = %u (%u)\n", a_szDesc, pGuestFeatures->a_Var, pHostFeatures->a_Var)
1037
1038 pHlp->pfnPrintf(pHlp, "Nested hardware virtualization - VMX features\n");
1039 pHlp->pfnPrintf(pHlp, " Mnemonic - Description = guest (host)\n");
1040 VMXFEATDUMP("VMX - Virtual-Machine Extensions ", fVmx);
1041 if (!pGuestFeatures->fVmx)
1042 return;
1043 /* Basic. */
1044 VMXFEATDUMP("InsOutInfo - INS/OUTS instruction info. ", fVmxInsOutInfo);
1045 /* Pin-based controls. */
1046 VMXFEATDUMP("ExtIntExit - External interrupt VM-exit ", fVmxExtIntExit);
1047 VMXFEATDUMP("NmiExit - NMI VM-exit ", fVmxNmiExit);
1048 VMXFEATDUMP("VirtNmi - Virtual NMIs ", fVmxVirtNmi);
1049 VMXFEATDUMP("PreemptTimer - VMX preemption timer ", fVmxPreemptTimer);
1050 VMXFEATDUMP("PostedInt - Posted interrupts ", fVmxPostedInt);
1051 /* Processor-based controls. */
1052 VMXFEATDUMP("IntWindowExit - Interrupt-window exiting ", fVmxIntWindowExit);
1053 VMXFEATDUMP("TscOffsetting - TSC offsetting ", fVmxTscOffsetting);
1054 VMXFEATDUMP("HltExit - HLT exiting ", fVmxHltExit);
1055 VMXFEATDUMP("InvlpgExit - INVLPG exiting ", fVmxInvlpgExit);
1056 VMXFEATDUMP("MwaitExit - MWAIT exiting ", fVmxMwaitExit);
1057 VMXFEATDUMP("RdpmcExit - RDPMC exiting ", fVmxRdpmcExit);
1058 VMXFEATDUMP("RdtscExit - RDTSC exiting ", fVmxRdtscExit);
1059 VMXFEATDUMP("Cr3LoadExit - CR3-load exiting ", fVmxCr3LoadExit);
1060 VMXFEATDUMP("Cr3StoreExit - CR3-store exiting ", fVmxCr3StoreExit);
1061 VMXFEATDUMP("Cr8LoadExit - CR8-load exiting ", fVmxCr8LoadExit);
1062 VMXFEATDUMP("Cr8StoreExit - CR8-store exiting ", fVmxCr8StoreExit);
1063 VMXFEATDUMP("UseTprShadow - Use TPR shadow ", fVmxUseTprShadow);
1064 VMXFEATDUMP("NmiWindowExit - NMI-window exiting ", fVmxNmiWindowExit);
1065 VMXFEATDUMP("MovDRxExit - Mov-DR exiting ", fVmxMovDRxExit);
1066 VMXFEATDUMP("UncondIoExit - Unconditional I/O exiting ", fVmxUncondIoExit);
1067 VMXFEATDUMP("UseIoBitmaps - Use I/O bitmaps ", fVmxUseIoBitmaps);
1068 VMXFEATDUMP("MonitorTrapFlag - Monitor trap flag ", fVmxMonitorTrapFlag);
1069 VMXFEATDUMP("UseMsrBitmaps - MSR bitmaps ", fVmxUseMsrBitmaps);
1070 VMXFEATDUMP("MonitorExit - MONITOR exiting ", fVmxMonitorExit);
1071 VMXFEATDUMP("PauseExit - PAUSE exiting ", fVmxPauseExit);
1072 VMXFEATDUMP("SecondaryExecCtl - Activate secondary controls ", fVmxSecondaryExecCtls);
1073 /* Secondary processor-based controls. */
1074 VMXFEATDUMP("VirtApic - Virtualize-APIC accesses ", fVmxVirtApicAccess);
1075 VMXFEATDUMP("Ept - Extended Page Tables ", fVmxEpt);
1076 VMXFEATDUMP("DescTableExit - Descriptor-table exiting ", fVmxDescTableExit);
1077 VMXFEATDUMP("Rdtscp - Enable RDTSCP ", fVmxRdtscp);
1078 VMXFEATDUMP("VirtX2ApicMode - Virtualize-x2APIC mode ", fVmxVirtX2ApicMode);
1079 VMXFEATDUMP("Vpid - Enable VPID ", fVmxVpid);
1080 VMXFEATDUMP("WbinvdExit - WBINVD exiting ", fVmxWbinvdExit);
1081 VMXFEATDUMP("UnrestrictedGuest - Unrestricted guest ", fVmxUnrestrictedGuest);
1082 VMXFEATDUMP("ApicRegVirt - APIC-register virtualization ", fVmxApicRegVirt);
1083 VMXFEATDUMP("VirtIntDelivery - Virtual-interrupt delivery ", fVmxVirtIntDelivery);
1084 VMXFEATDUMP("PauseLoopExit - PAUSE-loop exiting ", fVmxPauseLoopExit);
1085 VMXFEATDUMP("RdrandExit - RDRAND exiting ", fVmxRdrandExit);
1086 VMXFEATDUMP("Invpcid - Enable INVPCID ", fVmxInvpcid);
1087 VMXFEATDUMP("VmFuncs - Enable VM Functions ", fVmxVmFunc);
1088 VMXFEATDUMP("VmcsShadowing - VMCS shadowing ", fVmxVmcsShadowing);
1089 VMXFEATDUMP("RdseedExiting - RDSEED exiting ", fVmxRdseedExit);
1090 VMXFEATDUMP("PML - Supports Page-Modification Log (PML) ", fVmxPml);
1091 VMXFEATDUMP("EptVe - EPT violations can cause #VE ", fVmxEptXcptVe);
1092 VMXFEATDUMP("XsavesXRstors - Enable XSAVES/XRSTORS ", fVmxXsavesXrstors);
1093 /* VM-entry controls. */
1094 VMXFEATDUMP("EntryLoadDebugCtls - Load debug controls on VM-entry ", fVmxEntryLoadDebugCtls);
1095 VMXFEATDUMP("Ia32eModeGuest - IA-32e mode guest ", fVmxIa32eModeGuest);
1096 VMXFEATDUMP("EntryLoadEferMsr - Load IA32_EFER on VM-entry ", fVmxEntryLoadEferMsr);
1097 VMXFEATDUMP("EntryLoadPatMsr - Load IA32_PAT on VM-entry ", fVmxEntryLoadPatMsr);
1098 /* VM-exit controls. */
1099 VMXFEATDUMP("ExitSaveDebugCtls - Save debug controls on VM-exit ", fVmxExitSaveDebugCtls);
1100 VMXFEATDUMP("HostAddrSpaceSize - Host address-space size ", fVmxHostAddrSpaceSize);
1101 VMXFEATDUMP("ExitAckExtInt - Acknowledge interrupt on VM-exit ", fVmxExitAckExtInt);
1102 VMXFEATDUMP("ExitSavePatMsr - Save IA32_PAT on VM-exit ", fVmxExitSavePatMsr);
1103 VMXFEATDUMP("ExitLoadPatMsr - Load IA32_PAT on VM-exit ", fVmxExitLoadPatMsr);
1104 VMXFEATDUMP("ExitSaveEferMsr - Save IA32_EFER on VM-exit ", fVmxExitSaveEferMsr);
1105 VMXFEATDUMP("ExitLoadEferMsr - Load IA32_EFER on VM-exit ", fVmxExitLoadEferMsr);
1106 VMXFEATDUMP("SavePreemptTimer - Save VMX-preemption timer ", fVmxSavePreemptTimer);
1107 VMXFEATDUMP("ExitStoreEferLma - Store EFER.LMA on VM-exit ", fVmxExitStoreEferLma);
1108 VMXFEATDUMP("VmwriteAll - VMWRITE to any VMCS field ", fVmxVmwriteAll);
1109 VMXFEATDUMP("EntryInjectSoftInt - Inject softint. with 0-len instr. ", fVmxEntryInjectSoftInt);
1110 /* Miscellaneous data. */
1111 VMXFEATDUMP("ExitStoreEferLma - Inject softint. with 0-len instr. ", fVmxExitStoreEferLma);
1112 VMXFEATDUMP("VmwriteAll - Inject softint. with 0-len instr. ", fVmxVmwriteAll);
1113 VMXFEATDUMP("EntryInjectSoftInt - Inject softint. with 0-len instr. ", fVmxEntryInjectSoftInt);
1114#undef VMXFEATDUMP
1115 }
1116 else
1117 pHlp->pfnPrintf(pHlp, "No VMX features present - requires an Intel or compatible CPU.\n");
1118}
1119
1120
1121/**
1122 * Initializes VMX host and guest features.
1123 *
1124 * @param pVM The cross context VM structure.
1125 *
1126 * @remarks This must be called only after HM has fully initialized since it calls
1127 * into HM to retrieve VMX and related MSRs.
1128 */
1129static void cpumR3InitVmxCpuFeatures(PVM pVM)
1130{
1131 /*
1132 * Init. host features.
1133 */
1134 PCPUMFEATURES pHostFeat = &pVM->cpum.s.HostFeatures;
1135 VMXMSRS VmxMsrs;
1136 int rc = HMVmxGetHostMsrs(pVM, &VmxMsrs);
1137 if (RT_SUCCESS(rc))
1138 {
1139 /* Basic information. */
1140 pHostFeat->fVmxInsOutInfo = RT_BF_GET(VmxMsrs.u64Basic, VMX_BF_BASIC_VMCS_INS_OUTS);
1141
1142 /* Pin-based VM-execution controls. */
1143 uint32_t const fPinCtls = VmxMsrs.PinCtls.n.allowed1;
1144 pHostFeat->fVmxExtIntExit = RT_BOOL(fPinCtls & VMX_PIN_CTLS_EXT_INT_EXIT);
1145 pHostFeat->fVmxNmiExit = RT_BOOL(fPinCtls & VMX_PIN_CTLS_NMI_EXIT);
1146 pHostFeat->fVmxVirtNmi = RT_BOOL(fPinCtls & VMX_PIN_CTLS_VIRT_NMI);
1147 pHostFeat->fVmxPreemptTimer = RT_BOOL(fPinCtls & VMX_PIN_CTLS_PREEMPT_TIMER);
1148 pHostFeat->fVmxPostedInt = RT_BOOL(fPinCtls & VMX_PIN_CTLS_POSTED_INT);
1149
1150 /* Processor-based VM-execution controls. */
1151 uint32_t const fProcCtls = VmxMsrs.ProcCtls.n.allowed1;
1152 pHostFeat->fVmxIntWindowExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_INT_WINDOW_EXIT);
1153 pHostFeat->fVmxTscOffsetting = RT_BOOL(fProcCtls & VMX_PROC_CTLS_USE_TSC_OFFSETTING);
1154 pHostFeat->fVmxHltExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_HLT_EXIT);
1155 pHostFeat->fVmxInvlpgExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_INVLPG_EXIT);
1156 pHostFeat->fVmxMwaitExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_MWAIT_EXIT);
1157 pHostFeat->fVmxRdpmcExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_RDPMC_EXIT);
1158 pHostFeat->fVmxRdtscExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_RDTSC_EXIT);
1159 pHostFeat->fVmxCr3LoadExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_CR3_LOAD_EXIT);
1160 pHostFeat->fVmxCr3StoreExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_CR3_STORE_EXIT);
1161 pHostFeat->fVmxCr8LoadExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_CR8_LOAD_EXIT);
1162 pHostFeat->fVmxCr8StoreExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_CR8_STORE_EXIT);
1163 pHostFeat->fVmxUseTprShadow = RT_BOOL(fProcCtls & VMX_PROC_CTLS_USE_TPR_SHADOW);
1164 pHostFeat->fVmxNmiWindowExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_NMI_WINDOW_EXIT);
1165 pHostFeat->fVmxMovDRxExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_MOV_DR_EXIT);
1166 pHostFeat->fVmxUncondIoExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_UNCOND_IO_EXIT);
1167 pHostFeat->fVmxUseIoBitmaps = RT_BOOL(fProcCtls & VMX_PROC_CTLS_USE_IO_BITMAPS);
1168 pHostFeat->fVmxMonitorTrapFlag = RT_BOOL(fProcCtls & VMX_PROC_CTLS_MONITOR_TRAP_FLAG);
1169 pHostFeat->fVmxUseMsrBitmaps = RT_BOOL(fProcCtls & VMX_PROC_CTLS_USE_MSR_BITMAPS);
1170 pHostFeat->fVmxMonitorExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_MONITOR_EXIT);
1171 pHostFeat->fVmxPauseExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_PAUSE_EXIT);
1172 pHostFeat->fVmxSecondaryExecCtls = RT_BOOL(fProcCtls & VMX_PROC_CTLS_USE_SECONDARY_CTLS);
1173
1174 /* Secondary processor-based VM-execution controls. */
1175 if (pHostFeat->fVmxSecondaryExecCtls)
1176 {
1177 uint32_t const fProcCtls2 = VmxMsrs.ProcCtls2.n.allowed1;
1178 pHostFeat->fVmxVirtApicAccess = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_VIRT_APIC_ACCESS);
1179 pHostFeat->fVmxEpt = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_EPT);
1180 pHostFeat->fVmxDescTableExit = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_DESC_TABLE_EXIT);
1181 pHostFeat->fVmxRdtscp = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_RDTSCP);
1182 pHostFeat->fVmxVirtX2ApicMode = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_VIRT_X2APIC_MODE);
1183 pHostFeat->fVmxVpid = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_VPID);
1184 pHostFeat->fVmxWbinvdExit = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_WBINVD_EXIT);
1185 pHostFeat->fVmxUnrestrictedGuest = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_UNRESTRICTED_GUEST);
1186 pHostFeat->fVmxApicRegVirt = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_APIC_REG_VIRT);
1187 pHostFeat->fVmxVirtIntDelivery = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_VIRT_INT_DELIVERY);
1188 pHostFeat->fVmxPauseLoopExit = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_PAUSE_LOOP_EXIT);
1189 pHostFeat->fVmxRdrandExit = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_RDRAND_EXIT);
1190 pHostFeat->fVmxInvpcid = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_INVPCID);
1191 pHostFeat->fVmxVmFunc = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_VMFUNC);
1192 pHostFeat->fVmxVmcsShadowing = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_VMCS_SHADOWING);
1193 pHostFeat->fVmxRdseedExit = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_RDSEED_EXIT);
1194 pHostFeat->fVmxPml = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_PML);
1195 pHostFeat->fVmxEptXcptVe = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_EPT_VE);
1196 pHostFeat->fVmxXsavesXrstors = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_XSAVES_XRSTORS);
1197 pHostFeat->fVmxUseTscScaling = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_TSC_SCALING);
1198 }
1199
1200 /* VM-entry controls. */
1201 uint32_t const fEntryCtls = VmxMsrs.EntryCtls.n.allowed1;
1202 pHostFeat->fVmxEntryLoadDebugCtls = RT_BOOL(fEntryCtls & VMX_ENTRY_CTLS_LOAD_DEBUG);
1203 pHostFeat->fVmxIa32eModeGuest = RT_BOOL(fEntryCtls & VMX_ENTRY_CTLS_IA32E_MODE_GUEST);
1204 pHostFeat->fVmxEntryLoadEferMsr = RT_BOOL(fEntryCtls & VMX_ENTRY_CTLS_LOAD_EFER_MSR);
1205 pHostFeat->fVmxEntryLoadPatMsr = RT_BOOL(fEntryCtls & VMX_ENTRY_CTLS_LOAD_PAT_MSR);
1206
1207 /* VM-exit controls. */
1208 uint32_t const fExitCtls = VmxMsrs.ExitCtls.n.allowed1;
1209 pHostFeat->fVmxExitSaveDebugCtls = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_SAVE_DEBUG);
1210 pHostFeat->fVmxHostAddrSpaceSize = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_HOST_ADDR_SPACE_SIZE);
1211 pHostFeat->fVmxExitAckExtInt = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_ACK_EXT_INT);
1212 pHostFeat->fVmxExitSavePatMsr = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_SAVE_PAT_MSR);
1213 pHostFeat->fVmxExitLoadPatMsr = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_LOAD_PAT_MSR);
1214 pHostFeat->fVmxExitSaveEferMsr = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_SAVE_EFER_MSR);
1215 pHostFeat->fVmxExitLoadEferMsr = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_LOAD_EFER_MSR);
1216 pHostFeat->fVmxSavePreemptTimer = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_SAVE_PREEMPT_TIMER);
1217
1218 /* Miscellaneous data. */
1219 uint32_t const fMiscData = VmxMsrs.u64Misc;
1220 pHostFeat->fVmxExitStoreEferLma = RT_BOOL(fMiscData & VMX_MISC_EXIT_STORE_EFER_LMA);
1221 pHostFeat->fVmxVmwriteAll = RT_BOOL(fMiscData & VMX_MISC_VMWRITE_ALL);
1222 pHostFeat->fVmxEntryInjectSoftInt = RT_BOOL(fMiscData & VMX_MISC_ENTRY_INJECT_SOFT_INT);
1223 }
1224
1225 /*
1226 * Initialize the set of VMX features we emulate.
1227 * Note! Some bits might be reported as 1 always if they fall under the default1 class bits
1228 * (e.g. fVmxEntryLoadDebugCtls), see @bugref{9180#c5}.
1229 */
1230 CPUMFEATURES EmuFeat;
1231 RT_ZERO(EmuFeat);
1232 EmuFeat.fVmx = 1;
1233 EmuFeat.fVmxInsOutInfo = 0;
1234 EmuFeat.fVmxExtIntExit = 1;
1235 EmuFeat.fVmxNmiExit = 1;
1236 EmuFeat.fVmxVirtNmi = 0;
1237 EmuFeat.fVmxPreemptTimer = 0; /** @todo NSTVMX: enable this. */
1238 EmuFeat.fVmxPostedInt = 0;
1239 EmuFeat.fVmxIntWindowExit = 1;
1240 EmuFeat.fVmxTscOffsetting = 1;
1241 EmuFeat.fVmxHltExit = 1;
1242 EmuFeat.fVmxInvlpgExit = 1;
1243 EmuFeat.fVmxMwaitExit = 1;
1244 EmuFeat.fVmxRdpmcExit = 1;
1245 EmuFeat.fVmxRdtscExit = 1;
1246 EmuFeat.fVmxCr3LoadExit = 1;
1247 EmuFeat.fVmxCr3StoreExit = 1;
1248 EmuFeat.fVmxCr8LoadExit = 1;
1249 EmuFeat.fVmxCr8StoreExit = 1;
1250 EmuFeat.fVmxUseTprShadow = 0;
1251 EmuFeat.fVmxNmiWindowExit = 0;
1252 EmuFeat.fVmxMovDRxExit = 1;
1253 EmuFeat.fVmxUncondIoExit = 1;
1254 EmuFeat.fVmxUseIoBitmaps = 1;
1255 EmuFeat.fVmxMonitorTrapFlag = 0;
1256 EmuFeat.fVmxUseMsrBitmaps = 0;
1257 EmuFeat.fVmxMonitorExit = 1;
1258 EmuFeat.fVmxPauseExit = 1;
1259 EmuFeat.fVmxSecondaryExecCtls = 1;
1260 EmuFeat.fVmxVirtApicAccess = 0;
1261 EmuFeat.fVmxEpt = 0;
1262 EmuFeat.fVmxDescTableExit = 1;
1263 EmuFeat.fVmxRdtscp = 1;
1264 EmuFeat.fVmxVirtX2ApicMode = 0;
1265 EmuFeat.fVmxVpid = 0;
1266 EmuFeat.fVmxWbinvdExit = 1;
1267 EmuFeat.fVmxUnrestrictedGuest = 0;
1268 EmuFeat.fVmxApicRegVirt = 0;
1269 EmuFeat.fVmxVirtIntDelivery = 0;
1270 EmuFeat.fVmxPauseLoopExit = 0;
1271 EmuFeat.fVmxRdrandExit = 0;
1272 EmuFeat.fVmxInvpcid = 1;
1273 EmuFeat.fVmxVmFunc = 0;
1274 EmuFeat.fVmxVmcsShadowing = 0;
1275 EmuFeat.fVmxRdseedExit = 0;
1276 EmuFeat.fVmxPml = 0;
1277 EmuFeat.fVmxEptXcptVe = 0;
1278 EmuFeat.fVmxXsavesXrstors = 0;
1279 EmuFeat.fVmxUseTscScaling = 0;
1280 EmuFeat.fVmxEntryLoadDebugCtls = 1;
1281 EmuFeat.fVmxIa32eModeGuest = 1;
1282 EmuFeat.fVmxEntryLoadEferMsr = 1;
1283 EmuFeat.fVmxEntryLoadPatMsr = 0;
1284 EmuFeat.fVmxExitSaveDebugCtls = 1;
1285 EmuFeat.fVmxHostAddrSpaceSize = 1;
1286 EmuFeat.fVmxExitAckExtInt = 0;
1287 EmuFeat.fVmxExitSavePatMsr = 0;
1288 EmuFeat.fVmxExitLoadPatMsr = 0;
1289 EmuFeat.fVmxExitSaveEferMsr = 1;
1290 EmuFeat.fVmxExitLoadEferMsr = 1;
1291 EmuFeat.fVmxSavePreemptTimer = 0;
1292 EmuFeat.fVmxExitStoreEferLma = 1;
1293 EmuFeat.fVmxVmwriteAll = 0;
1294 EmuFeat.fVmxEntryInjectSoftInt = 0;
1295
1296 /*
1297 * Explode guest features.
1298 *
1299 * When hardware-assisted VMX may be used, any feature we emulate must also be supported
1300 * by the hardware, hence we merge our emulated features with the host features below.
1301 */
1302 bool const fHostSupportsVmx = pHostFeat->fVmx;
1303 AssertLogRelReturnVoid(!fHostSupportsVmx || HMIsVmxSupported(pVM));
1304 PCCPUMFEATURES pBaseFeat = fHostSupportsVmx ? pHostFeat : &EmuFeat;
1305 PCPUMFEATURES pGuestFeat = &pVM->cpum.s.GuestFeatures;
1306 pGuestFeat->fVmx = (pBaseFeat->fVmx & EmuFeat.fVmx );
1307 pGuestFeat->fVmxInsOutInfo = (pBaseFeat->fVmxInsOutInfo & EmuFeat.fVmxInsOutInfo );
1308 pGuestFeat->fVmxExtIntExit = (pBaseFeat->fVmxExtIntExit & EmuFeat.fVmxExtIntExit );
1309 pGuestFeat->fVmxNmiExit = (pBaseFeat->fVmxNmiExit & EmuFeat.fVmxNmiExit );
1310 pGuestFeat->fVmxVirtNmi = (pBaseFeat->fVmxVirtNmi & EmuFeat.fVmxVirtNmi );
1311 pGuestFeat->fVmxPreemptTimer = (pBaseFeat->fVmxPreemptTimer & EmuFeat.fVmxPreemptTimer );
1312 pGuestFeat->fVmxPostedInt = (pBaseFeat->fVmxPostedInt & EmuFeat.fVmxPostedInt );
1313 pGuestFeat->fVmxIntWindowExit = (pBaseFeat->fVmxIntWindowExit & EmuFeat.fVmxIntWindowExit );
1314 pGuestFeat->fVmxTscOffsetting = (pBaseFeat->fVmxTscOffsetting & EmuFeat.fVmxTscOffsetting );
1315 pGuestFeat->fVmxHltExit = (pBaseFeat->fVmxHltExit & EmuFeat.fVmxHltExit );
1316 pGuestFeat->fVmxInvlpgExit = (pBaseFeat->fVmxInvlpgExit & EmuFeat.fVmxInvlpgExit );
1317 pGuestFeat->fVmxMwaitExit = (pBaseFeat->fVmxMwaitExit & EmuFeat.fVmxMwaitExit );
1318 pGuestFeat->fVmxRdpmcExit = (pBaseFeat->fVmxRdpmcExit & EmuFeat.fVmxRdpmcExit );
1319 pGuestFeat->fVmxRdtscExit = (pBaseFeat->fVmxRdtscExit & EmuFeat.fVmxRdtscExit );
1320 pGuestFeat->fVmxCr3LoadExit = (pBaseFeat->fVmxCr3LoadExit & EmuFeat.fVmxCr3LoadExit );
1321 pGuestFeat->fVmxCr3StoreExit = (pBaseFeat->fVmxCr3StoreExit & EmuFeat.fVmxCr3StoreExit );
1322 pGuestFeat->fVmxCr8LoadExit = (pBaseFeat->fVmxCr8LoadExit & EmuFeat.fVmxCr8LoadExit );
1323 pGuestFeat->fVmxCr8StoreExit = (pBaseFeat->fVmxCr8StoreExit & EmuFeat.fVmxCr8StoreExit );
1324 pGuestFeat->fVmxUseTprShadow = (pBaseFeat->fVmxUseTprShadow & EmuFeat.fVmxUseTprShadow );
1325 pGuestFeat->fVmxNmiWindowExit = (pBaseFeat->fVmxNmiWindowExit & EmuFeat.fVmxNmiWindowExit );
1326 pGuestFeat->fVmxMovDRxExit = (pBaseFeat->fVmxMovDRxExit & EmuFeat.fVmxMovDRxExit );
1327 pGuestFeat->fVmxUncondIoExit = (pBaseFeat->fVmxUncondIoExit & EmuFeat.fVmxUncondIoExit );
1328 pGuestFeat->fVmxUseIoBitmaps = (pBaseFeat->fVmxUseIoBitmaps & EmuFeat.fVmxUseIoBitmaps );
1329 pGuestFeat->fVmxMonitorTrapFlag = (pBaseFeat->fVmxMonitorTrapFlag & EmuFeat.fVmxMonitorTrapFlag );
1330 pGuestFeat->fVmxUseMsrBitmaps = (pBaseFeat->fVmxUseMsrBitmaps & EmuFeat.fVmxUseMsrBitmaps );
1331 pGuestFeat->fVmxMonitorExit = (pBaseFeat->fVmxMonitorExit & EmuFeat.fVmxMonitorExit );
1332 pGuestFeat->fVmxPauseExit = (pBaseFeat->fVmxPauseExit & EmuFeat.fVmxPauseExit );
1333 pGuestFeat->fVmxSecondaryExecCtls = (pBaseFeat->fVmxSecondaryExecCtls & EmuFeat.fVmxSecondaryExecCtls );
1334 pGuestFeat->fVmxVirtApicAccess = (pBaseFeat->fVmxVirtApicAccess & EmuFeat.fVmxVirtApicAccess );
1335 pGuestFeat->fVmxEpt = (pBaseFeat->fVmxEpt & EmuFeat.fVmxEpt );
1336 pGuestFeat->fVmxDescTableExit = (pBaseFeat->fVmxDescTableExit & EmuFeat.fVmxDescTableExit );
1337 pGuestFeat->fVmxRdtscp = (pBaseFeat->fVmxRdtscp & EmuFeat.fVmxRdtscp );
1338 pGuestFeat->fVmxVirtX2ApicMode = (pBaseFeat->fVmxVirtX2ApicMode & EmuFeat.fVmxVirtX2ApicMode );
1339 pGuestFeat->fVmxVpid = (pBaseFeat->fVmxVpid & EmuFeat.fVmxVpid );
1340 pGuestFeat->fVmxWbinvdExit = (pBaseFeat->fVmxWbinvdExit & EmuFeat.fVmxWbinvdExit );
1341 pGuestFeat->fVmxUnrestrictedGuest = (pBaseFeat->fVmxUnrestrictedGuest & EmuFeat.fVmxUnrestrictedGuest );
1342 pGuestFeat->fVmxApicRegVirt = (pBaseFeat->fVmxApicRegVirt & EmuFeat.fVmxApicRegVirt );
1343 pGuestFeat->fVmxVirtIntDelivery = (pBaseFeat->fVmxVirtIntDelivery & EmuFeat.fVmxVirtIntDelivery );
1344 pGuestFeat->fVmxPauseLoopExit = (pBaseFeat->fVmxPauseLoopExit & EmuFeat.fVmxPauseLoopExit );
1345 pGuestFeat->fVmxRdrandExit = (pBaseFeat->fVmxRdrandExit & EmuFeat.fVmxRdrandExit );
1346 pGuestFeat->fVmxInvpcid = (pBaseFeat->fVmxInvpcid & EmuFeat.fVmxInvpcid );
1347 pGuestFeat->fVmxVmFunc = (pBaseFeat->fVmxVmFunc & EmuFeat.fVmxVmFunc );
1348 pGuestFeat->fVmxVmcsShadowing = (pBaseFeat->fVmxVmcsShadowing & EmuFeat.fVmxVmcsShadowing );
1349 pGuestFeat->fVmxRdseedExit = (pBaseFeat->fVmxRdseedExit & EmuFeat.fVmxRdseedExit );
1350 pGuestFeat->fVmxPml = (pBaseFeat->fVmxPml & EmuFeat.fVmxPml );
1351 pGuestFeat->fVmxEptXcptVe = (pBaseFeat->fVmxEptXcptVe & EmuFeat.fVmxEptXcptVe );
1352 pGuestFeat->fVmxXsavesXrstors = (pBaseFeat->fVmxXsavesXrstors & EmuFeat.fVmxXsavesXrstors );
1353 pGuestFeat->fVmxUseTscScaling = (pBaseFeat->fVmxUseTscScaling & EmuFeat.fVmxUseTscScaling );
1354 pGuestFeat->fVmxEntryLoadDebugCtls = (pBaseFeat->fVmxEntryLoadDebugCtls & EmuFeat.fVmxEntryLoadDebugCtls );
1355 pGuestFeat->fVmxIa32eModeGuest = (pBaseFeat->fVmxIa32eModeGuest & EmuFeat.fVmxIa32eModeGuest );
1356 pGuestFeat->fVmxEntryLoadEferMsr = (pBaseFeat->fVmxEntryLoadEferMsr & EmuFeat.fVmxEntryLoadEferMsr );
1357 pGuestFeat->fVmxEntryLoadPatMsr = (pBaseFeat->fVmxEntryLoadPatMsr & EmuFeat.fVmxEntryLoadPatMsr );
1358 pGuestFeat->fVmxExitSaveDebugCtls = (pBaseFeat->fVmxExitSaveDebugCtls & EmuFeat.fVmxExitSaveDebugCtls );
1359 pGuestFeat->fVmxHostAddrSpaceSize = (pBaseFeat->fVmxHostAddrSpaceSize & EmuFeat.fVmxHostAddrSpaceSize );
1360 pGuestFeat->fVmxExitAckExtInt = (pBaseFeat->fVmxExitAckExtInt & EmuFeat.fVmxExitAckExtInt );
1361 pGuestFeat->fVmxExitSavePatMsr = (pBaseFeat->fVmxExitSavePatMsr & EmuFeat.fVmxExitSavePatMsr );
1362 pGuestFeat->fVmxExitLoadPatMsr = (pBaseFeat->fVmxExitLoadPatMsr & EmuFeat.fVmxExitLoadPatMsr );
1363 pGuestFeat->fVmxExitSaveEferMsr = (pBaseFeat->fVmxExitSaveEferMsr & EmuFeat.fVmxExitSaveEferMsr );
1364 pGuestFeat->fVmxExitLoadEferMsr = (pBaseFeat->fVmxExitLoadEferMsr & EmuFeat.fVmxExitLoadEferMsr );
1365 pGuestFeat->fVmxSavePreemptTimer = (pBaseFeat->fVmxSavePreemptTimer & EmuFeat.fVmxSavePreemptTimer );
1366 pGuestFeat->fVmxExitStoreEferLma = (pBaseFeat->fVmxExitStoreEferLma & EmuFeat.fVmxExitStoreEferLma );
1367 pGuestFeat->fVmxVmwriteAll = (pBaseFeat->fVmxVmwriteAll & EmuFeat.fVmxVmwriteAll );
1368 pGuestFeat->fVmxEntryInjectSoftInt = (pBaseFeat->fVmxEntryInjectSoftInt & EmuFeat.fVmxEntryInjectSoftInt );
1369}
1370
1371
1372/**
1373 * Initializes the CPUM.
1374 *
1375 * @returns VBox status code.
1376 * @param pVM The cross context VM structure.
1377 */
1378VMMR3DECL(int) CPUMR3Init(PVM pVM)
1379{
1380 LogFlow(("CPUMR3Init\n"));
1381
1382 /*
1383 * Assert alignment, sizes and tables.
1384 */
1385 AssertCompileMemberAlignment(VM, cpum.s, 32);
1386 AssertCompile(sizeof(pVM->cpum.s) <= sizeof(pVM->cpum.padding));
1387 AssertCompileSizeAlignment(CPUMCTX, 64);
1388 AssertCompileSizeAlignment(CPUMCTXMSRS, 64);
1389 AssertCompileSizeAlignment(CPUMHOSTCTX, 64);
1390 AssertCompileMemberAlignment(VM, cpum, 64);
1391 AssertCompileMemberAlignment(VM, aCpus, 64);
1392 AssertCompileMemberAlignment(VMCPU, cpum.s, 64);
1393 AssertCompileMemberSizeAlignment(VM, aCpus[0].cpum.s, 64);
1394#ifdef VBOX_STRICT
1395 int rc2 = cpumR3MsrStrictInitChecks();
1396 AssertRCReturn(rc2, rc2);
1397#endif
1398
1399 /*
1400 * Initialize offsets.
1401 */
1402
1403 /* Calculate the offset from CPUM to CPUMCPU for the first CPU. */
1404 pVM->cpum.s.offCPUMCPU0 = RT_UOFFSETOF(VM, aCpus[0].cpum) - RT_UOFFSETOF(VM, cpum);
1405 Assert((uintptr_t)&pVM->cpum + pVM->cpum.s.offCPUMCPU0 == (uintptr_t)&pVM->aCpus[0].cpum);
1406
1407
1408 /* Calculate the offset from CPUMCPU to CPUM. */
1409 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1410 {
1411 PVMCPU pVCpu = &pVM->aCpus[i];
1412
1413 pVCpu->cpum.s.offCPUM = RT_UOFFSETOF_DYN(VM, aCpus[i].cpum) - RT_UOFFSETOF(VM, cpum);
1414 Assert((uintptr_t)&pVCpu->cpum - pVCpu->cpum.s.offCPUM == (uintptr_t)&pVM->cpum);
1415 }
1416
1417 /*
1418 * Gather info about the host CPU.
1419 */
1420 if (!ASMHasCpuId())
1421 {
1422 Log(("The CPU doesn't support CPUID!\n"));
1423 return VERR_UNSUPPORTED_CPU;
1424 }
1425
1426 pVM->cpum.s.fHostMxCsrMask = CPUMR3DeterminHostMxCsrMask();
1427
1428 PCPUMCPUIDLEAF paLeaves;
1429 uint32_t cLeaves;
1430 int rc = CPUMR3CpuIdCollectLeaves(&paLeaves, &cLeaves);
1431 AssertLogRelRCReturn(rc, rc);
1432
1433 rc = cpumR3CpuIdExplodeFeatures(paLeaves, cLeaves, &pVM->cpum.s.HostFeatures);
1434 RTMemFree(paLeaves);
1435 AssertLogRelRCReturn(rc, rc);
1436 pVM->cpum.s.GuestFeatures.enmCpuVendor = pVM->cpum.s.HostFeatures.enmCpuVendor;
1437
1438 /*
1439 * Check that the CPU supports the minimum features we require.
1440 */
1441 if (!pVM->cpum.s.HostFeatures.fFxSaveRstor)
1442 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support the FXSAVE/FXRSTOR instruction.");
1443 if (!pVM->cpum.s.HostFeatures.fMmx)
1444 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support MMX.");
1445 if (!pVM->cpum.s.HostFeatures.fTsc)
1446 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support RDTSC.");
1447
1448 /*
1449 * Setup the CR4 AND and OR masks used in the raw-mode switcher.
1450 */
1451 pVM->cpum.s.CR4.AndMask = X86_CR4_OSXMMEEXCPT | X86_CR4_PVI | X86_CR4_VME;
1452 pVM->cpum.s.CR4.OrMask = X86_CR4_OSFXSR;
1453
1454 /*
1455 * Figure out which XSAVE/XRSTOR features are available on the host.
1456 */
1457 uint64_t fXcr0Host = 0;
1458 uint64_t fXStateHostMask = 0;
1459 if ( pVM->cpum.s.HostFeatures.fXSaveRstor
1460 && pVM->cpum.s.HostFeatures.fOpSysXSaveRstor)
1461 {
1462 fXStateHostMask = fXcr0Host = ASMGetXcr0();
1463 fXStateHostMask &= XSAVE_C_X87 | XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI;
1464 AssertLogRelMsgStmt((fXStateHostMask & (XSAVE_C_X87 | XSAVE_C_SSE)) == (XSAVE_C_X87 | XSAVE_C_SSE),
1465 ("%#llx\n", fXStateHostMask), fXStateHostMask = 0);
1466 }
1467 pVM->cpum.s.fXStateHostMask = fXStateHostMask;
1468 if (VM_IS_RAW_MODE_ENABLED(pVM)) /* For raw-mode, we only use XSAVE/XRSTOR when the guest starts using it (CPUID/CR4 visibility). */
1469 fXStateHostMask = 0;
1470 LogRel(("CPUM: fXStateHostMask=%#llx; initial: %#llx; host XCR0=%#llx\n",
1471 pVM->cpum.s.fXStateHostMask, fXStateHostMask, fXcr0Host));
1472
1473 /*
1474 * Allocate memory for the extended CPU state and initialize the host XSAVE/XRSTOR mask.
1475 */
1476 uint32_t cbMaxXState = pVM->cpum.s.HostFeatures.cbMaxExtendedState;
1477 cbMaxXState = RT_ALIGN(cbMaxXState, 128);
1478 AssertLogRelReturn(cbMaxXState >= sizeof(X86FXSTATE) && cbMaxXState <= _8K, VERR_CPUM_IPE_2);
1479
1480 uint8_t *pbXStates;
1481 rc = MMR3HyperAllocOnceNoRelEx(pVM, cbMaxXState * 3 * pVM->cCpus, PAGE_SIZE, MM_TAG_CPUM_CTX,
1482 MMHYPER_AONR_FLAGS_KERNEL_MAPPING, (void **)&pbXStates);
1483 AssertLogRelRCReturn(rc, rc);
1484
1485 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1486 {
1487 PVMCPU pVCpu = &pVM->aCpus[i];
1488
1489 pVCpu->cpum.s.Guest.pXStateR3 = (PX86XSAVEAREA)pbXStates;
1490 pVCpu->cpum.s.Guest.pXStateR0 = MMHyperR3ToR0(pVM, pbXStates);
1491 pVCpu->cpum.s.Guest.pXStateRC = MMHyperR3ToR0(pVM, pbXStates);
1492 pbXStates += cbMaxXState;
1493
1494 pVCpu->cpum.s.Host.pXStateR3 = (PX86XSAVEAREA)pbXStates;
1495 pVCpu->cpum.s.Host.pXStateR0 = MMHyperR3ToR0(pVM, pbXStates);
1496 pVCpu->cpum.s.Host.pXStateRC = MMHyperR3ToR0(pVM, pbXStates);
1497 pbXStates += cbMaxXState;
1498
1499 pVCpu->cpum.s.Hyper.pXStateR3 = (PX86XSAVEAREA)pbXStates;
1500 pVCpu->cpum.s.Hyper.pXStateR0 = MMHyperR3ToR0(pVM, pbXStates);
1501 pVCpu->cpum.s.Hyper.pXStateRC = MMHyperR3ToR0(pVM, pbXStates);
1502 pbXStates += cbMaxXState;
1503
1504 pVCpu->cpum.s.Host.fXStateMask = fXStateHostMask;
1505 }
1506
1507 /*
1508 * Register saved state data item.
1509 */
1510 rc = SSMR3RegisterInternal(pVM, "cpum", 1, CPUM_SAVED_STATE_VERSION, sizeof(CPUM),
1511 NULL, cpumR3LiveExec, NULL,
1512 NULL, cpumR3SaveExec, NULL,
1513 cpumR3LoadPrep, cpumR3LoadExec, cpumR3LoadDone);
1514 if (RT_FAILURE(rc))
1515 return rc;
1516
1517 /*
1518 * Register info handlers and registers with the debugger facility.
1519 */
1520 DBGFR3InfoRegisterInternalEx(pVM, "cpum", "Displays the all the cpu states.",
1521 &cpumR3InfoAll, DBGFINFO_FLAGS_ALL_EMTS);
1522 DBGFR3InfoRegisterInternalEx(pVM, "cpumguest", "Displays the guest cpu state.",
1523 &cpumR3InfoGuest, DBGFINFO_FLAGS_ALL_EMTS);
1524 DBGFR3InfoRegisterInternalEx(pVM, "cpumguesthwvirt", "Displays the guest hwvirt. cpu state.",
1525 &cpumR3InfoGuestHwvirt, DBGFINFO_FLAGS_ALL_EMTS);
1526 DBGFR3InfoRegisterInternalEx(pVM, "cpumhyper", "Displays the hypervisor cpu state.",
1527 &cpumR3InfoHyper, DBGFINFO_FLAGS_ALL_EMTS);
1528 DBGFR3InfoRegisterInternalEx(pVM, "cpumhost", "Displays the host cpu state.",
1529 &cpumR3InfoHost, DBGFINFO_FLAGS_ALL_EMTS);
1530 DBGFR3InfoRegisterInternalEx(pVM, "cpumguestinstr", "Displays the current guest instruction.",
1531 &cpumR3InfoGuestInstr, DBGFINFO_FLAGS_ALL_EMTS);
1532 DBGFR3InfoRegisterInternal( pVM, "cpuid", "Displays the guest cpuid leaves.", &cpumR3CpuIdInfo);
1533 DBGFR3InfoRegisterInternal( pVM, "cpumvmxfeat", "Displays the host and guest VMX hwvirt. features.",
1534 &cpumR3InfoVmxFeatures);
1535
1536 rc = cpumR3DbgInit(pVM);
1537 if (RT_FAILURE(rc))
1538 return rc;
1539
1540 /*
1541 * Check if we need to workaround partial/leaky FPU handling.
1542 */
1543 cpumR3CheckLeakyFpu(pVM);
1544
1545 /*
1546 * Initialize the Guest CPUID and MSR states.
1547 */
1548 rc = cpumR3InitCpuIdAndMsrs(pVM);
1549 if (RT_FAILURE(rc))
1550 return rc;
1551
1552 /*
1553 * Allocate memory required by the guest hardware virtualization state.
1554 */
1555 if (pVM->cpum.ro.GuestFeatures.fVmx)
1556 rc = cpumR3AllocVmxHwVirtState(pVM);
1557 else if (pVM->cpum.ro.GuestFeatures.fSvm)
1558 rc = cpumR3AllocSvmHwVirtState(pVM);
1559 if (RT_FAILURE(rc))
1560 return rc;
1561
1562 /*
1563 * Workaround for missing cpuid(0) patches when leaf 4 returns GuestInfo.DefCpuId:
1564 * If we miss to patch a cpuid(0).eax then Linux tries to determine the number
1565 * of processors from (cpuid(4).eax >> 26) + 1.
1566 *
1567 * Note: this code is obsolete, but let's keep it here for reference.
1568 * Purpose is valid when we artificially cap the max std id to less than 4.
1569 *
1570 * Note: This used to be a separate function CPUMR3SetHwVirt that was called
1571 * after VMINITCOMPLETED_HM.
1572 */
1573 if (VM_IS_RAW_MODE_ENABLED(pVM))
1574 {
1575 Assert( (pVM->cpum.s.aGuestCpuIdPatmStd[4].uEax & UINT32_C(0xffffc000)) == 0
1576 || pVM->cpum.s.aGuestCpuIdPatmStd[0].uEax < 0x4);
1577 pVM->cpum.s.aGuestCpuIdPatmStd[4].uEax &= UINT32_C(0x00003fff);
1578 }
1579
1580 CPUMR3Reset(pVM);
1581 return VINF_SUCCESS;
1582}
1583
1584
1585/**
1586 * Applies relocations to data and code managed by this
1587 * component. This function will be called at init and
1588 * whenever the VMM need to relocate it self inside the GC.
1589 *
1590 * The CPUM will update the addresses used by the switcher.
1591 *
1592 * @param pVM The cross context VM structure.
1593 */
1594VMMR3DECL(void) CPUMR3Relocate(PVM pVM)
1595{
1596 LogFlow(("CPUMR3Relocate\n"));
1597
1598 pVM->cpum.s.GuestInfo.paMsrRangesRC = MMHyperR3ToRC(pVM, pVM->cpum.s.GuestInfo.paMsrRangesR3);
1599 pVM->cpum.s.GuestInfo.paCpuIdLeavesRC = MMHyperR3ToRC(pVM, pVM->cpum.s.GuestInfo.paCpuIdLeavesR3);
1600
1601 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
1602 {
1603 PVMCPU pVCpu = &pVM->aCpus[iCpu];
1604 pVCpu->cpum.s.Guest.pXStateRC = MMHyperR3ToRC(pVM, pVCpu->cpum.s.Guest.pXStateR3);
1605 pVCpu->cpum.s.Host.pXStateRC = MMHyperR3ToRC(pVM, pVCpu->cpum.s.Host.pXStateR3);
1606 pVCpu->cpum.s.Hyper.pXStateRC = MMHyperR3ToRC(pVM, pVCpu->cpum.s.Hyper.pXStateR3); /** @todo remove me */
1607
1608 /* Recheck the guest DRx values in raw-mode. */
1609 CPUMRecalcHyperDRx(pVCpu, UINT8_MAX, false);
1610 }
1611}
1612
1613
1614/**
1615 * Terminates the CPUM.
1616 *
1617 * Termination means cleaning up and freeing all resources,
1618 * the VM it self is at this point powered off or suspended.
1619 *
1620 * @returns VBox status code.
1621 * @param pVM The cross context VM structure.
1622 */
1623VMMR3DECL(int) CPUMR3Term(PVM pVM)
1624{
1625#ifdef VBOX_WITH_CRASHDUMP_MAGIC
1626 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1627 {
1628 PVMCPU pVCpu = &pVM->aCpus[i];
1629 PCPUMCTX pCtx = CPUMQueryGuestCtxPtr(pVCpu);
1630
1631 memset(pVCpu->cpum.s.aMagic, 0, sizeof(pVCpu->cpum.s.aMagic));
1632 pVCpu->cpum.s.uMagic = 0;
1633 pCtx->dr[5] = 0;
1634 }
1635#endif
1636
1637 if (pVM->cpum.ro.GuestFeatures.fSvm)
1638 cpumR3FreeVmxHwVirtState(pVM);
1639 else if (pVM->cpum.ro.GuestFeatures.fSvm)
1640 cpumR3FreeSvmHwVirtState(pVM);
1641 return VINF_SUCCESS;
1642}
1643
1644
1645/**
1646 * Resets a virtual CPU.
1647 *
1648 * Used by CPUMR3Reset and CPU hot plugging.
1649 *
1650 * @param pVM The cross context VM structure.
1651 * @param pVCpu The cross context virtual CPU structure of the CPU that is
1652 * being reset. This may differ from the current EMT.
1653 */
1654VMMR3DECL(void) CPUMR3ResetCpu(PVM pVM, PVMCPU pVCpu)
1655{
1656 /** @todo anything different for VCPU > 0? */
1657 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
1658
1659 /*
1660 * Initialize everything to ZERO first.
1661 */
1662 uint32_t fUseFlags = pVCpu->cpum.s.fUseFlags & ~CPUM_USED_FPU_SINCE_REM;
1663
1664 AssertCompile(RTASSERT_OFFSET_OF(CPUMCTX, pXStateR0) < RTASSERT_OFFSET_OF(CPUMCTX, pXStateR3));
1665 AssertCompile(RTASSERT_OFFSET_OF(CPUMCTX, pXStateR0) < RTASSERT_OFFSET_OF(CPUMCTX, pXStateRC));
1666 memset(pCtx, 0, RT_UOFFSETOF(CPUMCTX, pXStateR0));
1667
1668 pVCpu->cpum.s.fUseFlags = fUseFlags;
1669
1670 pCtx->cr0 = X86_CR0_CD | X86_CR0_NW | X86_CR0_ET; //0x60000010
1671 pCtx->eip = 0x0000fff0;
1672 pCtx->edx = 0x00000600; /* P6 processor */
1673 pCtx->eflags.Bits.u1Reserved0 = 1;
1674
1675 pCtx->cs.Sel = 0xf000;
1676 pCtx->cs.ValidSel = 0xf000;
1677 pCtx->cs.fFlags = CPUMSELREG_FLAGS_VALID;
1678 pCtx->cs.u64Base = UINT64_C(0xffff0000);
1679 pCtx->cs.u32Limit = 0x0000ffff;
1680 pCtx->cs.Attr.n.u1DescType = 1; /* code/data segment */
1681 pCtx->cs.Attr.n.u1Present = 1;
1682 pCtx->cs.Attr.n.u4Type = X86_SEL_TYPE_ER_ACC;
1683
1684 pCtx->ds.fFlags = CPUMSELREG_FLAGS_VALID;
1685 pCtx->ds.u32Limit = 0x0000ffff;
1686 pCtx->ds.Attr.n.u1DescType = 1; /* code/data segment */
1687 pCtx->ds.Attr.n.u1Present = 1;
1688 pCtx->ds.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1689
1690 pCtx->es.fFlags = CPUMSELREG_FLAGS_VALID;
1691 pCtx->es.u32Limit = 0x0000ffff;
1692 pCtx->es.Attr.n.u1DescType = 1; /* code/data segment */
1693 pCtx->es.Attr.n.u1Present = 1;
1694 pCtx->es.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1695
1696 pCtx->fs.fFlags = CPUMSELREG_FLAGS_VALID;
1697 pCtx->fs.u32Limit = 0x0000ffff;
1698 pCtx->fs.Attr.n.u1DescType = 1; /* code/data segment */
1699 pCtx->fs.Attr.n.u1Present = 1;
1700 pCtx->fs.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1701
1702 pCtx->gs.fFlags = CPUMSELREG_FLAGS_VALID;
1703 pCtx->gs.u32Limit = 0x0000ffff;
1704 pCtx->gs.Attr.n.u1DescType = 1; /* code/data segment */
1705 pCtx->gs.Attr.n.u1Present = 1;
1706 pCtx->gs.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1707
1708 pCtx->ss.fFlags = CPUMSELREG_FLAGS_VALID;
1709 pCtx->ss.u32Limit = 0x0000ffff;
1710 pCtx->ss.Attr.n.u1Present = 1;
1711 pCtx->ss.Attr.n.u1DescType = 1; /* code/data segment */
1712 pCtx->ss.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1713
1714 pCtx->idtr.cbIdt = 0xffff;
1715 pCtx->gdtr.cbGdt = 0xffff;
1716
1717 pCtx->ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
1718 pCtx->ldtr.u32Limit = 0xffff;
1719 pCtx->ldtr.Attr.n.u1Present = 1;
1720 pCtx->ldtr.Attr.n.u4Type = X86_SEL_TYPE_SYS_LDT;
1721
1722 pCtx->tr.fFlags = CPUMSELREG_FLAGS_VALID;
1723 pCtx->tr.u32Limit = 0xffff;
1724 pCtx->tr.Attr.n.u1Present = 1;
1725 pCtx->tr.Attr.n.u4Type = X86_SEL_TYPE_SYS_386_TSS_BUSY; /* Deduction, not properly documented by Intel. */
1726
1727 pCtx->dr[6] = X86_DR6_INIT_VAL;
1728 pCtx->dr[7] = X86_DR7_INIT_VAL;
1729
1730 PX86FXSTATE pFpuCtx = &pCtx->pXStateR3->x87; AssertReleaseMsg(RT_VALID_PTR(pFpuCtx), ("%p\n", pFpuCtx));
1731 pFpuCtx->FTW = 0x00; /* All empty (abbridged tag reg edition). */
1732 pFpuCtx->FCW = 0x37f;
1733
1734 /* Intel 64 and IA-32 Architectures Software Developer's Manual Volume 3A, Table 8-1.
1735 IA-32 Processor States Following Power-up, Reset, or INIT */
1736 pFpuCtx->MXCSR = 0x1F80;
1737 pFpuCtx->MXCSR_MASK = pVM->cpum.s.GuestInfo.fMxCsrMask; /** @todo check if REM messes this up... */
1738
1739 pCtx->aXcr[0] = XSAVE_C_X87;
1740 if (pVM->cpum.s.HostFeatures.cbMaxExtendedState >= RT_UOFFSETOF(X86XSAVEAREA, Hdr))
1741 {
1742 /* The entire FXSAVE state needs loading when we switch to XSAVE/XRSTOR
1743 as we don't know what happened before. (Bother optimize later?) */
1744 pCtx->pXStateR3->Hdr.bmXState = XSAVE_C_X87 | XSAVE_C_SSE;
1745 }
1746
1747 /*
1748 * MSRs.
1749 */
1750 /* Init PAT MSR */
1751 pCtx->msrPAT = MSR_IA32_CR_PAT_INIT_VAL;
1752
1753 /* EFER MBZ; see AMD64 Architecture Programmer's Manual Volume 2: Table 14-1. Initial Processor State.
1754 * The Intel docs don't mention it. */
1755 Assert(!pCtx->msrEFER);
1756
1757 /* IA32_MISC_ENABLE - not entirely sure what the init/reset state really
1758 is supposed to be here, just trying provide useful/sensible values. */
1759 PCPUMMSRRANGE pRange = cpumLookupMsrRange(pVM, MSR_IA32_MISC_ENABLE);
1760 if (pRange)
1761 {
1762 pVCpu->cpum.s.GuestMsrs.msr.MiscEnable = MSR_IA32_MISC_ENABLE_BTS_UNAVAIL
1763 | MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL
1764 | (pVM->cpum.s.GuestFeatures.fMonitorMWait ? MSR_IA32_MISC_ENABLE_MONITOR : 0)
1765 | MSR_IA32_MISC_ENABLE_FAST_STRINGS;
1766 pRange->fWrIgnMask |= MSR_IA32_MISC_ENABLE_BTS_UNAVAIL
1767 | MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL;
1768 pRange->fWrGpMask &= ~pVCpu->cpum.s.GuestMsrs.msr.MiscEnable;
1769 }
1770
1771 /** @todo Wire IA32_MISC_ENABLE bit 22 to our NT 4 CPUID trick. */
1772
1773 /** @todo r=ramshankar: Currently broken for SMP as TMCpuTickSet() expects to be
1774 * called from each EMT while we're getting called by CPUMR3Reset()
1775 * iteratively on the same thread. Fix later. */
1776#if 0 /** @todo r=bird: This we will do in TM, not here. */
1777 /* TSC must be 0. Intel spec. Table 9-1. "IA-32 Processor States Following Power-up, Reset, or INIT." */
1778 CPUMSetGuestMsr(pVCpu, MSR_IA32_TSC, 0);
1779#endif
1780
1781
1782 /* C-state control. Guesses. */
1783 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 1 /*C1*/ | RT_BIT_32(25) | RT_BIT_32(26) | RT_BIT_32(27) | RT_BIT_32(28);
1784 /* For Nehalem+ and Atoms, the 0xE2 MSR (MSR_PKG_CST_CONFIG_CONTROL) is documented. For Core 2,
1785 * it's undocumented but exists as MSR_PMG_CST_CONFIG_CONTROL and has similar but not identical
1786 * functionality. The default value must be different due to incompatible write mask.
1787 */
1788 if (CPUMMICROARCH_IS_INTEL_CORE2(pVM->cpum.s.GuestFeatures.enmMicroarch))
1789 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 0x202a01; /* From Mac Pro Harpertown, unlocked. */
1790 else if (pVM->cpum.s.GuestFeatures.enmMicroarch == kCpumMicroarch_Intel_Core_Yonah)
1791 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 0x26740c; /* From MacBookPro1,1. */
1792
1793 /*
1794 * Hardware virtualization state.
1795 */
1796 pCtx->hwvirt.fGif = true;
1797
1798 /* SVM. */
1799 if (pCtx->hwvirt.svm.CTX_SUFF(pVmcb))
1800 {
1801 memset(pCtx->hwvirt.svm.CTX_SUFF(pVmcb), 0, SVM_VMCB_PAGES << PAGE_SHIFT);
1802 pCtx->hwvirt.svm.uMsrHSavePa = 0;
1803 pCtx->hwvirt.svm.uPrevPauseTick = 0;
1804 }
1805}
1806
1807
1808/**
1809 * Resets the CPU.
1810 *
1811 * @returns VINF_SUCCESS.
1812 * @param pVM The cross context VM structure.
1813 */
1814VMMR3DECL(void) CPUMR3Reset(PVM pVM)
1815{
1816 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1817 {
1818 CPUMR3ResetCpu(pVM, &pVM->aCpus[i]);
1819
1820#ifdef VBOX_WITH_CRASHDUMP_MAGIC
1821 PCPUMCTX pCtx = &pVM->aCpus[i].cpum.s.Guest;
1822
1823 /* Magic marker for searching in crash dumps. */
1824 strcpy((char *)pVM->aCpus[i].cpum.s.aMagic, "CPUMCPU Magic");
1825 pVM->aCpus[i].cpum.s.uMagic = UINT64_C(0xDEADBEEFDEADBEEF);
1826 pCtx->dr[5] = UINT64_C(0xDEADBEEFDEADBEEF);
1827#endif
1828 }
1829}
1830
1831
1832
1833
1834/**
1835 * Pass 0 live exec callback.
1836 *
1837 * @returns VINF_SSM_DONT_CALL_AGAIN.
1838 * @param pVM The cross context VM structure.
1839 * @param pSSM The saved state handle.
1840 * @param uPass The pass (0).
1841 */
1842static DECLCALLBACK(int) cpumR3LiveExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uPass)
1843{
1844 AssertReturn(uPass == 0, VERR_SSM_UNEXPECTED_PASS);
1845 cpumR3SaveCpuId(pVM, pSSM);
1846 return VINF_SSM_DONT_CALL_AGAIN;
1847}
1848
1849
1850/**
1851 * Execute state save operation.
1852 *
1853 * @returns VBox status code.
1854 * @param pVM The cross context VM structure.
1855 * @param pSSM SSM operation handle.
1856 */
1857static DECLCALLBACK(int) cpumR3SaveExec(PVM pVM, PSSMHANDLE pSSM)
1858{
1859 /*
1860 * Save.
1861 */
1862 SSMR3PutU32(pSSM, pVM->cCpus);
1863 SSMR3PutU32(pSSM, sizeof(pVM->aCpus[0].cpum.s.GuestMsrs.msr));
1864 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
1865 {
1866 PVMCPU pVCpu = &pVM->aCpus[iCpu];
1867
1868 SSMR3PutStructEx(pSSM, &pVCpu->cpum.s.Hyper, sizeof(pVCpu->cpum.s.Hyper), 0, g_aCpumCtxFields, NULL);
1869
1870 PCPUMCTX pGstCtx = &pVCpu->cpum.s.Guest;
1871 SSMR3PutStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
1872 SSMR3PutStructEx(pSSM, &pGstCtx->pXStateR3->x87, sizeof(pGstCtx->pXStateR3->x87), 0, g_aCpumX87Fields, NULL);
1873 if (pGstCtx->fXStateMask != 0)
1874 SSMR3PutStructEx(pSSM, &pGstCtx->pXStateR3->Hdr, sizeof(pGstCtx->pXStateR3->Hdr), 0, g_aCpumXSaveHdrFields, NULL);
1875 if (pGstCtx->fXStateMask & XSAVE_C_YMM)
1876 {
1877 PCX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_YMM_BIT, PCX86XSAVEYMMHI);
1878 SSMR3PutStructEx(pSSM, pYmmHiCtx, sizeof(*pYmmHiCtx), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumYmmHiFields, NULL);
1879 }
1880 if (pGstCtx->fXStateMask & XSAVE_C_BNDREGS)
1881 {
1882 PCX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDREGS_BIT, PCX86XSAVEBNDREGS);
1883 SSMR3PutStructEx(pSSM, pBndRegs, sizeof(*pBndRegs), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndRegsFields, NULL);
1884 }
1885 if (pGstCtx->fXStateMask & XSAVE_C_BNDCSR)
1886 {
1887 PCX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDCSR_BIT, PCX86XSAVEBNDCFG);
1888 SSMR3PutStructEx(pSSM, pBndCfg, sizeof(*pBndCfg), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndCfgFields, NULL);
1889 }
1890 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_HI256)
1891 {
1892 PCX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_HI256_BIT, PCX86XSAVEZMMHI256);
1893 SSMR3PutStructEx(pSSM, pZmmHi256, sizeof(*pZmmHi256), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmmHi256Fields, NULL);
1894 }
1895 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_16HI)
1896 {
1897 PCX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_16HI_BIT, PCX86XSAVEZMM16HI);
1898 SSMR3PutStructEx(pSSM, pZmm16Hi, sizeof(*pZmm16Hi), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmm16HiFields, NULL);
1899 }
1900 if (pVM->cpum.ro.GuestFeatures.fSvm)
1901 {
1902 Assert(pGstCtx->hwvirt.svm.CTX_SUFF(pVmcb));
1903 SSMR3PutU64(pSSM, pGstCtx->hwvirt.svm.uMsrHSavePa);
1904 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.svm.GCPhysVmcb);
1905 SSMR3PutU64(pSSM, pGstCtx->hwvirt.svm.uPrevPauseTick);
1906 SSMR3PutU16(pSSM, pGstCtx->hwvirt.svm.cPauseFilter);
1907 SSMR3PutU16(pSSM, pGstCtx->hwvirt.svm.cPauseFilterThreshold);
1908 SSMR3PutBool(pSSM, pGstCtx->hwvirt.svm.fInterceptEvents);
1909 SSMR3PutStructEx(pSSM, &pGstCtx->hwvirt.svm.HostState, sizeof(pGstCtx->hwvirt.svm.HostState), 0 /* fFlags */,
1910 g_aSvmHwvirtHostState, NULL /* pvUser */);
1911 SSMR3PutMem(pSSM, pGstCtx->hwvirt.svm.pVmcbR3, SVM_VMCB_PAGES << X86_PAGE_4K_SHIFT);
1912 SSMR3PutMem(pSSM, pGstCtx->hwvirt.svm.pvMsrBitmapR3, SVM_MSRPM_PAGES << X86_PAGE_4K_SHIFT);
1913 SSMR3PutMem(pSSM, pGstCtx->hwvirt.svm.pvIoBitmapR3, SVM_IOPM_PAGES << X86_PAGE_4K_SHIFT);
1914 SSMR3PutU32(pSSM, pGstCtx->hwvirt.fLocalForcedActions);
1915 SSMR3PutBool(pSSM, pGstCtx->hwvirt.fGif);
1916 }
1917 SSMR3PutU32(pSSM, pVCpu->cpum.s.fUseFlags);
1918 SSMR3PutU32(pSSM, pVCpu->cpum.s.fChanged);
1919 AssertCompileSizeAlignment(pVCpu->cpum.s.GuestMsrs.msr, sizeof(uint64_t));
1920 SSMR3PutMem(pSSM, &pVCpu->cpum.s.GuestMsrs, sizeof(pVCpu->cpum.s.GuestMsrs.msr));
1921 }
1922
1923 cpumR3SaveCpuId(pVM, pSSM);
1924 return VINF_SUCCESS;
1925}
1926
1927
1928/**
1929 * @callback_method_impl{FNSSMINTLOADPREP}
1930 */
1931static DECLCALLBACK(int) cpumR3LoadPrep(PVM pVM, PSSMHANDLE pSSM)
1932{
1933 NOREF(pSSM);
1934 pVM->cpum.s.fPendingRestore = true;
1935 return VINF_SUCCESS;
1936}
1937
1938
1939/**
1940 * @callback_method_impl{FNSSMINTLOADEXEC}
1941 */
1942static DECLCALLBACK(int) cpumR3LoadExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass)
1943{
1944 int rc; /* Only for AssertRCReturn use. */
1945
1946 /*
1947 * Validate version.
1948 */
1949 if ( uVersion != CPUM_SAVED_STATE_VERSION_HWVIRT_SVM
1950 && uVersion != CPUM_SAVED_STATE_VERSION_XSAVE
1951 && uVersion != CPUM_SAVED_STATE_VERSION_GOOD_CPUID_COUNT
1952 && uVersion != CPUM_SAVED_STATE_VERSION_BAD_CPUID_COUNT
1953 && uVersion != CPUM_SAVED_STATE_VERSION_PUT_STRUCT
1954 && uVersion != CPUM_SAVED_STATE_VERSION_MEM
1955 && uVersion != CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE
1956 && uVersion != CPUM_SAVED_STATE_VERSION_VER3_2
1957 && uVersion != CPUM_SAVED_STATE_VERSION_VER3_0
1958 && uVersion != CPUM_SAVED_STATE_VERSION_VER2_1_NOMSR
1959 && uVersion != CPUM_SAVED_STATE_VERSION_VER2_0
1960 && uVersion != CPUM_SAVED_STATE_VERSION_VER1_6)
1961 {
1962 AssertMsgFailed(("cpumR3LoadExec: Invalid version uVersion=%d!\n", uVersion));
1963 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
1964 }
1965
1966 if (uPass == SSM_PASS_FINAL)
1967 {
1968 /*
1969 * Set the size of RTGCPTR for SSMR3GetGCPtr. (Only necessary for
1970 * really old SSM file versions.)
1971 */
1972 if (uVersion == CPUM_SAVED_STATE_VERSION_VER1_6)
1973 SSMR3HandleSetGCPtrSize(pSSM, sizeof(RTGCPTR32));
1974 else if (uVersion <= CPUM_SAVED_STATE_VERSION_VER3_0)
1975 SSMR3HandleSetGCPtrSize(pSSM, HC_ARCH_BITS == 32 ? sizeof(RTGCPTR32) : sizeof(RTGCPTR));
1976
1977 /*
1978 * Figure x86 and ctx field definitions to use for older states.
1979 */
1980 uint32_t const fLoad = uVersion > CPUM_SAVED_STATE_VERSION_MEM ? 0 : SSMSTRUCT_FLAGS_MEM_BAND_AID_RELAXED;
1981 PCSSMFIELD paCpumCtx1Fields = g_aCpumX87Fields;
1982 PCSSMFIELD paCpumCtx2Fields = g_aCpumCtxFields;
1983 if (uVersion == CPUM_SAVED_STATE_VERSION_VER1_6)
1984 {
1985 paCpumCtx1Fields = g_aCpumX87FieldsV16;
1986 paCpumCtx2Fields = g_aCpumCtxFieldsV16;
1987 }
1988 else if (uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
1989 {
1990 paCpumCtx1Fields = g_aCpumX87FieldsMem;
1991 paCpumCtx2Fields = g_aCpumCtxFieldsMem;
1992 }
1993
1994 /*
1995 * The hyper state used to preceed the CPU count. Starting with
1996 * XSAVE it was moved down till after we've got the count.
1997 */
1998 if (uVersion < CPUM_SAVED_STATE_VERSION_XSAVE)
1999 {
2000 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2001 {
2002 PVMCPU pVCpu = &pVM->aCpus[iCpu];
2003 X86FXSTATE Ign;
2004 SSMR3GetStructEx(pSSM, &Ign, sizeof(Ign), fLoad | SSMSTRUCT_FLAGS_NO_TAIL_MARKER, paCpumCtx1Fields, NULL);
2005 uint64_t uCR3 = pVCpu->cpum.s.Hyper.cr3;
2006 uint64_t uRSP = pVCpu->cpum.s.Hyper.rsp; /* see VMMR3Relocate(). */
2007 SSMR3GetStructEx(pSSM, &pVCpu->cpum.s.Hyper, sizeof(pVCpu->cpum.s.Hyper),
2008 fLoad | SSMSTRUCT_FLAGS_NO_LEAD_MARKER, paCpumCtx2Fields, NULL);
2009 pVCpu->cpum.s.Hyper.cr3 = uCR3;
2010 pVCpu->cpum.s.Hyper.rsp = uRSP;
2011 }
2012 }
2013
2014 if (uVersion >= CPUM_SAVED_STATE_VERSION_VER2_1_NOMSR)
2015 {
2016 uint32_t cCpus;
2017 rc = SSMR3GetU32(pSSM, &cCpus); AssertRCReturn(rc, rc);
2018 AssertLogRelMsgReturn(cCpus == pVM->cCpus, ("Mismatching CPU counts: saved: %u; configured: %u \n", cCpus, pVM->cCpus),
2019 VERR_SSM_UNEXPECTED_DATA);
2020 }
2021 AssertLogRelMsgReturn( uVersion > CPUM_SAVED_STATE_VERSION_VER2_0
2022 || pVM->cCpus == 1,
2023 ("cCpus=%u\n", pVM->cCpus),
2024 VERR_SSM_UNEXPECTED_DATA);
2025
2026 uint32_t cbMsrs = 0;
2027 if (uVersion > CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE)
2028 {
2029 rc = SSMR3GetU32(pSSM, &cbMsrs); AssertRCReturn(rc, rc);
2030 AssertLogRelMsgReturn(RT_ALIGN(cbMsrs, sizeof(uint64_t)) == cbMsrs, ("Size of MSRs is misaligned: %#x\n", cbMsrs),
2031 VERR_SSM_UNEXPECTED_DATA);
2032 AssertLogRelMsgReturn(cbMsrs <= sizeof(CPUMCTXMSRS) && cbMsrs > 0, ("Size of MSRs is out of range: %#x\n", cbMsrs),
2033 VERR_SSM_UNEXPECTED_DATA);
2034 }
2035
2036 /*
2037 * Do the per-CPU restoring.
2038 */
2039 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2040 {
2041 PVMCPU pVCpu = &pVM->aCpus[iCpu];
2042 PCPUMCTX pGstCtx = &pVCpu->cpum.s.Guest;
2043
2044 if (uVersion >= CPUM_SAVED_STATE_VERSION_XSAVE)
2045 {
2046 /*
2047 * The XSAVE saved state layout moved the hyper state down here.
2048 */
2049 uint64_t uCR3 = pVCpu->cpum.s.Hyper.cr3;
2050 uint64_t uRSP = pVCpu->cpum.s.Hyper.rsp; /* see VMMR3Relocate(). */
2051 rc = SSMR3GetStructEx(pSSM, &pVCpu->cpum.s.Hyper, sizeof(pVCpu->cpum.s.Hyper), 0, g_aCpumCtxFields, NULL);
2052 pVCpu->cpum.s.Hyper.cr3 = uCR3;
2053 pVCpu->cpum.s.Hyper.rsp = uRSP;
2054 AssertRCReturn(rc, rc);
2055
2056 /*
2057 * Start by restoring the CPUMCTX structure and the X86FXSAVE bits of the extended state.
2058 */
2059 rc = SSMR3GetStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
2060 rc = SSMR3GetStructEx(pSSM, &pGstCtx->pXStateR3->x87, sizeof(pGstCtx->pXStateR3->x87), 0, g_aCpumX87Fields, NULL);
2061 AssertRCReturn(rc, rc);
2062
2063 /* Check that the xsave/xrstor mask is valid (invalid results in #GP). */
2064 if (pGstCtx->fXStateMask != 0)
2065 {
2066 AssertLogRelMsgReturn(!(pGstCtx->fXStateMask & ~pVM->cpum.s.fXStateGuestMask),
2067 ("fXStateMask=%#RX64 fXStateGuestMask=%#RX64\n",
2068 pGstCtx->fXStateMask, pVM->cpum.s.fXStateGuestMask),
2069 VERR_CPUM_INCOMPATIBLE_XSAVE_COMP_MASK);
2070 AssertLogRelMsgReturn(pGstCtx->fXStateMask & XSAVE_C_X87,
2071 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2072 AssertLogRelMsgReturn((pGstCtx->fXStateMask & (XSAVE_C_SSE | XSAVE_C_YMM)) != XSAVE_C_YMM,
2073 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2074 AssertLogRelMsgReturn( (pGstCtx->fXStateMask & (XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI)) == 0
2075 || (pGstCtx->fXStateMask & (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI))
2076 == (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI),
2077 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2078 }
2079
2080 /* Check that the XCR0 mask is valid (invalid results in #GP). */
2081 AssertLogRelMsgReturn(pGstCtx->aXcr[0] & XSAVE_C_X87, ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XCR0);
2082 if (pGstCtx->aXcr[0] != XSAVE_C_X87)
2083 {
2084 AssertLogRelMsgReturn(!(pGstCtx->aXcr[0] & ~(pGstCtx->fXStateMask | XSAVE_C_X87)),
2085 ("xcr0=%#RX64 fXStateMask=%#RX64\n", pGstCtx->aXcr[0], pGstCtx->fXStateMask),
2086 VERR_CPUM_INVALID_XCR0);
2087 AssertLogRelMsgReturn(pGstCtx->aXcr[0] & XSAVE_C_X87,
2088 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2089 AssertLogRelMsgReturn((pGstCtx->aXcr[0] & (XSAVE_C_SSE | XSAVE_C_YMM)) != XSAVE_C_YMM,
2090 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2091 AssertLogRelMsgReturn( (pGstCtx->aXcr[0] & (XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI)) == 0
2092 || (pGstCtx->aXcr[0] & (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI))
2093 == (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI),
2094 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2095 }
2096
2097 /* Check that the XCR1 is zero, as we don't implement it yet. */
2098 AssertLogRelMsgReturn(!pGstCtx->aXcr[1], ("xcr1=%#RX64\n", pGstCtx->aXcr[1]), VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
2099
2100 /*
2101 * Restore the individual extended state components we support.
2102 */
2103 if (pGstCtx->fXStateMask != 0)
2104 {
2105 rc = SSMR3GetStructEx(pSSM, &pGstCtx->pXStateR3->Hdr, sizeof(pGstCtx->pXStateR3->Hdr),
2106 0, g_aCpumXSaveHdrFields, NULL);
2107 AssertRCReturn(rc, rc);
2108 AssertLogRelMsgReturn(!(pGstCtx->pXStateR3->Hdr.bmXState & ~pGstCtx->fXStateMask),
2109 ("bmXState=%#RX64 fXStateMask=%#RX64\n",
2110 pGstCtx->pXStateR3->Hdr.bmXState, pGstCtx->fXStateMask),
2111 VERR_CPUM_INVALID_XSAVE_HDR);
2112 }
2113 if (pGstCtx->fXStateMask & XSAVE_C_YMM)
2114 {
2115 PX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_YMM_BIT, PX86XSAVEYMMHI);
2116 SSMR3GetStructEx(pSSM, pYmmHiCtx, sizeof(*pYmmHiCtx), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumYmmHiFields, NULL);
2117 }
2118 if (pGstCtx->fXStateMask & XSAVE_C_BNDREGS)
2119 {
2120 PX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDREGS_BIT, PX86XSAVEBNDREGS);
2121 SSMR3GetStructEx(pSSM, pBndRegs, sizeof(*pBndRegs), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndRegsFields, NULL);
2122 }
2123 if (pGstCtx->fXStateMask & XSAVE_C_BNDCSR)
2124 {
2125 PX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDCSR_BIT, PX86XSAVEBNDCFG);
2126 SSMR3GetStructEx(pSSM, pBndCfg, sizeof(*pBndCfg), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndCfgFields, NULL);
2127 }
2128 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_HI256)
2129 {
2130 PX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_HI256_BIT, PX86XSAVEZMMHI256);
2131 SSMR3GetStructEx(pSSM, pZmmHi256, sizeof(*pZmmHi256), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmmHi256Fields, NULL);
2132 }
2133 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_16HI)
2134 {
2135 PX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_16HI_BIT, PX86XSAVEZMM16HI);
2136 SSMR3GetStructEx(pSSM, pZmm16Hi, sizeof(*pZmm16Hi), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmm16HiFields, NULL);
2137 }
2138 if (uVersion >= CPUM_SAVED_STATE_VERSION_HWVIRT_SVM)
2139 {
2140 if (pVM->cpum.ro.GuestFeatures.fSvm)
2141 {
2142 Assert(pGstCtx->hwvirt.svm.CTX_SUFF(pVmcb));
2143 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.svm.uMsrHSavePa);
2144 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.svm.GCPhysVmcb);
2145 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.svm.uPrevPauseTick);
2146 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.svm.cPauseFilter);
2147 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.svm.cPauseFilterThreshold);
2148 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.svm.fInterceptEvents);
2149 SSMR3GetStructEx(pSSM, &pGstCtx->hwvirt.svm.HostState, sizeof(pGstCtx->hwvirt.svm.HostState),
2150 0 /* fFlags */, g_aSvmHwvirtHostState, NULL /* pvUser */);
2151 SSMR3GetMem(pSSM, pGstCtx->hwvirt.svm.pVmcbR3, SVM_VMCB_PAGES << X86_PAGE_4K_SHIFT);
2152 SSMR3GetMem(pSSM, pGstCtx->hwvirt.svm.pvMsrBitmapR3, SVM_MSRPM_PAGES << X86_PAGE_4K_SHIFT);
2153 SSMR3GetMem(pSSM, pGstCtx->hwvirt.svm.pvIoBitmapR3, SVM_IOPM_PAGES << X86_PAGE_4K_SHIFT);
2154 SSMR3GetU32(pSSM, &pGstCtx->hwvirt.fLocalForcedActions);
2155 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.fGif);
2156 }
2157 }
2158 }
2159 else
2160 {
2161 /*
2162 * Pre XSAVE saved state.
2163 */
2164 SSMR3GetStructEx(pSSM, &pGstCtx->pXStateR3->x87, sizeof(pGstCtx->pXStateR3->x87),
2165 fLoad | SSMSTRUCT_FLAGS_NO_TAIL_MARKER, paCpumCtx1Fields, NULL);
2166 SSMR3GetStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), fLoad | SSMSTRUCT_FLAGS_NO_LEAD_MARKER, paCpumCtx2Fields, NULL);
2167 }
2168
2169 /*
2170 * Restore a couple of flags and the MSRs.
2171 */
2172 SSMR3GetU32(pSSM, &pVCpu->cpum.s.fUseFlags);
2173 SSMR3GetU32(pSSM, &pVCpu->cpum.s.fChanged);
2174
2175 rc = VINF_SUCCESS;
2176 if (uVersion > CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE)
2177 rc = SSMR3GetMem(pSSM, &pVCpu->cpum.s.GuestMsrs.au64[0], cbMsrs);
2178 else if (uVersion >= CPUM_SAVED_STATE_VERSION_VER3_0)
2179 {
2180 SSMR3GetMem(pSSM, &pVCpu->cpum.s.GuestMsrs.au64[0], 2 * sizeof(uint64_t)); /* Restore two MSRs. */
2181 rc = SSMR3Skip(pSSM, 62 * sizeof(uint64_t));
2182 }
2183 AssertRCReturn(rc, rc);
2184
2185 /* REM and other may have cleared must-be-one fields in DR6 and
2186 DR7, fix these. */
2187 pGstCtx->dr[6] &= ~(X86_DR6_RAZ_MASK | X86_DR6_MBZ_MASK);
2188 pGstCtx->dr[6] |= X86_DR6_RA1_MASK;
2189 pGstCtx->dr[7] &= ~(X86_DR7_RAZ_MASK | X86_DR7_MBZ_MASK);
2190 pGstCtx->dr[7] |= X86_DR7_RA1_MASK;
2191 }
2192
2193 /* Older states does not have the internal selector register flags
2194 and valid selector value. Supply those. */
2195 if (uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2196 {
2197 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2198 {
2199 PVMCPU pVCpu = &pVM->aCpus[iCpu];
2200 bool const fValid = !VM_IS_RAW_MODE_ENABLED(pVM)
2201 || ( uVersion > CPUM_SAVED_STATE_VERSION_VER3_2
2202 && !(pVCpu->cpum.s.fChanged & CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID));
2203 PCPUMSELREG paSelReg = CPUMCTX_FIRST_SREG(&pVCpu->cpum.s.Guest);
2204 if (fValid)
2205 {
2206 for (uint32_t iSelReg = 0; iSelReg < X86_SREG_COUNT; iSelReg++)
2207 {
2208 paSelReg[iSelReg].fFlags = CPUMSELREG_FLAGS_VALID;
2209 paSelReg[iSelReg].ValidSel = paSelReg[iSelReg].Sel;
2210 }
2211
2212 pVCpu->cpum.s.Guest.ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
2213 pVCpu->cpum.s.Guest.ldtr.ValidSel = pVCpu->cpum.s.Guest.ldtr.Sel;
2214 }
2215 else
2216 {
2217 for (uint32_t iSelReg = 0; iSelReg < X86_SREG_COUNT; iSelReg++)
2218 {
2219 paSelReg[iSelReg].fFlags = 0;
2220 paSelReg[iSelReg].ValidSel = 0;
2221 }
2222
2223 /* This might not be 104% correct, but I think it's close
2224 enough for all practical purposes... (REM always loaded
2225 LDTR registers.) */
2226 pVCpu->cpum.s.Guest.ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
2227 pVCpu->cpum.s.Guest.ldtr.ValidSel = pVCpu->cpum.s.Guest.ldtr.Sel;
2228 }
2229 pVCpu->cpum.s.Guest.tr.fFlags = CPUMSELREG_FLAGS_VALID;
2230 pVCpu->cpum.s.Guest.tr.ValidSel = pVCpu->cpum.s.Guest.tr.Sel;
2231 }
2232 }
2233
2234 /* Clear CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID. */
2235 if ( uVersion > CPUM_SAVED_STATE_VERSION_VER3_2
2236 && uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2237 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2238 pVM->aCpus[iCpu].cpum.s.fChanged &= CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID;
2239
2240 /*
2241 * A quick sanity check.
2242 */
2243 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2244 {
2245 PVMCPU pVCpu = &pVM->aCpus[iCpu];
2246 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.es.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2247 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.cs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2248 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.ss.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2249 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.ds.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2250 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.fs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2251 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.gs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2252 }
2253 }
2254
2255 pVM->cpum.s.fPendingRestore = false;
2256
2257 /*
2258 * Guest CPUIDs.
2259 */
2260 if (uVersion >= CPUM_SAVED_STATE_VERSION_VER3_2)
2261 return cpumR3LoadCpuId(pVM, pSSM, uVersion);
2262 return cpumR3LoadCpuIdPre32(pVM, pSSM, uVersion);
2263}
2264
2265
2266/**
2267 * @callback_method_impl{FNSSMINTLOADDONE}
2268 */
2269static DECLCALLBACK(int) cpumR3LoadDone(PVM pVM, PSSMHANDLE pSSM)
2270{
2271 if (RT_FAILURE(SSMR3HandleGetStatus(pSSM)))
2272 return VINF_SUCCESS;
2273
2274 /* just check this since we can. */ /** @todo Add a SSM unit flag for indicating that it's mandatory during a restore. */
2275 if (pVM->cpum.s.fPendingRestore)
2276 {
2277 LogRel(("CPUM: Missing state!\n"));
2278 return VERR_INTERNAL_ERROR_2;
2279 }
2280
2281 bool const fSupportsLongMode = VMR3IsLongModeAllowed(pVM);
2282 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2283 {
2284 PVMCPU pVCpu = &pVM->aCpus[idCpu];
2285
2286 /* Notify PGM of the NXE states in case they've changed. */
2287 PGMNotifyNxeChanged(pVCpu, RT_BOOL(pVCpu->cpum.s.Guest.msrEFER & MSR_K6_EFER_NXE));
2288
2289 /* During init. this is done in CPUMR3InitCompleted(). */
2290 if (fSupportsLongMode)
2291 pVCpu->cpum.s.fUseFlags |= CPUM_USE_SUPPORTS_LONGMODE;
2292 }
2293 return VINF_SUCCESS;
2294}
2295
2296
2297/**
2298 * Checks if the CPUM state restore is still pending.
2299 *
2300 * @returns true / false.
2301 * @param pVM The cross context VM structure.
2302 */
2303VMMDECL(bool) CPUMR3IsStateRestorePending(PVM pVM)
2304{
2305 return pVM->cpum.s.fPendingRestore;
2306}
2307
2308
2309/**
2310 * Formats the EFLAGS value into mnemonics.
2311 *
2312 * @param pszEFlags Where to write the mnemonics. (Assumes sufficient buffer space.)
2313 * @param efl The EFLAGS value.
2314 */
2315static void cpumR3InfoFormatFlags(char *pszEFlags, uint32_t efl)
2316{
2317 /*
2318 * Format the flags.
2319 */
2320 static const struct
2321 {
2322 const char *pszSet; const char *pszClear; uint32_t fFlag;
2323 } s_aFlags[] =
2324 {
2325 { "vip",NULL, X86_EFL_VIP },
2326 { "vif",NULL, X86_EFL_VIF },
2327 { "ac", NULL, X86_EFL_AC },
2328 { "vm", NULL, X86_EFL_VM },
2329 { "rf", NULL, X86_EFL_RF },
2330 { "nt", NULL, X86_EFL_NT },
2331 { "ov", "nv", X86_EFL_OF },
2332 { "dn", "up", X86_EFL_DF },
2333 { "ei", "di", X86_EFL_IF },
2334 { "tf", NULL, X86_EFL_TF },
2335 { "nt", "pl", X86_EFL_SF },
2336 { "nz", "zr", X86_EFL_ZF },
2337 { "ac", "na", X86_EFL_AF },
2338 { "po", "pe", X86_EFL_PF },
2339 { "cy", "nc", X86_EFL_CF },
2340 };
2341 char *psz = pszEFlags;
2342 for (unsigned i = 0; i < RT_ELEMENTS(s_aFlags); i++)
2343 {
2344 const char *pszAdd = s_aFlags[i].fFlag & efl ? s_aFlags[i].pszSet : s_aFlags[i].pszClear;
2345 if (pszAdd)
2346 {
2347 strcpy(psz, pszAdd);
2348 psz += strlen(pszAdd);
2349 *psz++ = ' ';
2350 }
2351 }
2352 psz[-1] = '\0';
2353}
2354
2355
2356/**
2357 * Formats a full register dump.
2358 *
2359 * @param pVM The cross context VM structure.
2360 * @param pCtx The context to format.
2361 * @param pCtxCore The context core to format.
2362 * @param pHlp Output functions.
2363 * @param enmType The dump type.
2364 * @param pszPrefix Register name prefix.
2365 */
2366static void cpumR3InfoOne(PVM pVM, PCPUMCTX pCtx, PCCPUMCTXCORE pCtxCore, PCDBGFINFOHLP pHlp, CPUMDUMPTYPE enmType,
2367 const char *pszPrefix)
2368{
2369 NOREF(pVM);
2370
2371 /*
2372 * Format the EFLAGS.
2373 */
2374 uint32_t efl = pCtxCore->eflags.u32;
2375 char szEFlags[80];
2376 cpumR3InfoFormatFlags(&szEFlags[0], efl);
2377
2378 /*
2379 * Format the registers.
2380 */
2381 switch (enmType)
2382 {
2383 case CPUMDUMPTYPE_TERSE:
2384 if (CPUMIsGuestIn64BitCodeEx(pCtx))
2385 pHlp->pfnPrintf(pHlp,
2386 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
2387 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
2388 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
2389 "%sr14=%016RX64 %sr15=%016RX64\n"
2390 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
2391 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %seflags=%08x\n",
2392 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
2393 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
2394 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
2395 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
2396 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
2397 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, efl);
2398 else
2399 pHlp->pfnPrintf(pHlp,
2400 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
2401 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
2402 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %seflags=%08x\n",
2403 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
2404 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
2405 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
2406 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, efl);
2407 break;
2408
2409 case CPUMDUMPTYPE_DEFAULT:
2410 if (CPUMIsGuestIn64BitCodeEx(pCtx))
2411 pHlp->pfnPrintf(pHlp,
2412 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
2413 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
2414 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
2415 "%sr14=%016RX64 %sr15=%016RX64\n"
2416 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
2417 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %str=%04x %seflags=%08x\n"
2418 "%scr0=%08RX64 %scr2=%08RX64 %scr3=%08RX64 %scr4=%08RX64 %sgdtr=%016RX64:%04x %sldtr=%04x\n"
2419 ,
2420 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
2421 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
2422 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
2423 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
2424 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
2425 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, pCtx->tr.Sel, pszPrefix, efl,
2426 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
2427 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->ldtr.Sel);
2428 else
2429 pHlp->pfnPrintf(pHlp,
2430 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
2431 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
2432 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %str=%04x %seflags=%08x\n"
2433 "%scr0=%08RX64 %scr2=%08RX64 %scr3=%08RX64 %scr4=%08RX64 %sgdtr=%08RX64:%04x %sldtr=%04x\n"
2434 ,
2435 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
2436 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
2437 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
2438 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, pCtx->tr.Sel, pszPrefix, efl,
2439 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
2440 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->ldtr.Sel);
2441 break;
2442
2443 case CPUMDUMPTYPE_VERBOSE:
2444 if (CPUMIsGuestIn64BitCodeEx(pCtx))
2445 pHlp->pfnPrintf(pHlp,
2446 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
2447 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
2448 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
2449 "%sr14=%016RX64 %sr15=%016RX64\n"
2450 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
2451 "%scs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
2452 "%sds={%04x base=%016RX64 limit=%08x flags=%08x}\n"
2453 "%ses={%04x base=%016RX64 limit=%08x flags=%08x}\n"
2454 "%sfs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
2455 "%sgs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
2456 "%sss={%04x base=%016RX64 limit=%08x flags=%08x}\n"
2457 "%scr0=%016RX64 %scr2=%016RX64 %scr3=%016RX64 %scr4=%016RX64\n"
2458 "%sdr0=%016RX64 %sdr1=%016RX64 %sdr2=%016RX64 %sdr3=%016RX64\n"
2459 "%sdr4=%016RX64 %sdr5=%016RX64 %sdr6=%016RX64 %sdr7=%016RX64\n"
2460 "%sgdtr=%016RX64:%04x %sidtr=%016RX64:%04x %seflags=%08x\n"
2461 "%sldtr={%04x base=%08RX64 limit=%08x flags=%08x}\n"
2462 "%str ={%04x base=%08RX64 limit=%08x flags=%08x}\n"
2463 "%sSysEnter={cs=%04llx eip=%016RX64 esp=%016RX64}\n"
2464 ,
2465 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
2466 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
2467 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
2468 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
2469 pszPrefix, pCtxCore->cs.Sel, pCtx->cs.u64Base, pCtx->cs.u32Limit, pCtx->cs.Attr.u,
2470 pszPrefix, pCtxCore->ds.Sel, pCtx->ds.u64Base, pCtx->ds.u32Limit, pCtx->ds.Attr.u,
2471 pszPrefix, pCtxCore->es.Sel, pCtx->es.u64Base, pCtx->es.u32Limit, pCtx->es.Attr.u,
2472 pszPrefix, pCtxCore->fs.Sel, pCtx->fs.u64Base, pCtx->fs.u32Limit, pCtx->fs.Attr.u,
2473 pszPrefix, pCtxCore->gs.Sel, pCtx->gs.u64Base, pCtx->gs.u32Limit, pCtx->gs.Attr.u,
2474 pszPrefix, pCtxCore->ss.Sel, pCtx->ss.u64Base, pCtx->ss.u32Limit, pCtx->ss.Attr.u,
2475 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
2476 pszPrefix, pCtx->dr[0], pszPrefix, pCtx->dr[1], pszPrefix, pCtx->dr[2], pszPrefix, pCtx->dr[3],
2477 pszPrefix, pCtx->dr[4], pszPrefix, pCtx->dr[5], pszPrefix, pCtx->dr[6], pszPrefix, pCtx->dr[7],
2478 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->idtr.pIdt, pCtx->idtr.cbIdt, pszPrefix, efl,
2479 pszPrefix, pCtx->ldtr.Sel, pCtx->ldtr.u64Base, pCtx->ldtr.u32Limit, pCtx->ldtr.Attr.u,
2480 pszPrefix, pCtx->tr.Sel, pCtx->tr.u64Base, pCtx->tr.u32Limit, pCtx->tr.Attr.u,
2481 pszPrefix, pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
2482 else
2483 pHlp->pfnPrintf(pHlp,
2484 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
2485 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
2486 "%scs={%04x base=%016RX64 limit=%08x flags=%08x} %sdr0=%08RX64 %sdr1=%08RX64\n"
2487 "%sds={%04x base=%016RX64 limit=%08x flags=%08x} %sdr2=%08RX64 %sdr3=%08RX64\n"
2488 "%ses={%04x base=%016RX64 limit=%08x flags=%08x} %sdr4=%08RX64 %sdr5=%08RX64\n"
2489 "%sfs={%04x base=%016RX64 limit=%08x flags=%08x} %sdr6=%08RX64 %sdr7=%08RX64\n"
2490 "%sgs={%04x base=%016RX64 limit=%08x flags=%08x} %scr0=%08RX64 %scr2=%08RX64\n"
2491 "%sss={%04x base=%016RX64 limit=%08x flags=%08x} %scr3=%08RX64 %scr4=%08RX64\n"
2492 "%sgdtr=%016RX64:%04x %sidtr=%016RX64:%04x %seflags=%08x\n"
2493 "%sldtr={%04x base=%08RX64 limit=%08x flags=%08x}\n"
2494 "%str ={%04x base=%08RX64 limit=%08x flags=%08x}\n"
2495 "%sSysEnter={cs=%04llx eip=%08llx esp=%08llx}\n"
2496 ,
2497 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
2498 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
2499 pszPrefix, pCtxCore->cs.Sel, pCtx->cs.u64Base, pCtx->cs.u32Limit, pCtx->cs.Attr.u, pszPrefix, pCtx->dr[0], pszPrefix, pCtx->dr[1],
2500 pszPrefix, pCtxCore->ds.Sel, pCtx->ds.u64Base, pCtx->ds.u32Limit, pCtx->ds.Attr.u, pszPrefix, pCtx->dr[2], pszPrefix, pCtx->dr[3],
2501 pszPrefix, pCtxCore->es.Sel, pCtx->es.u64Base, pCtx->es.u32Limit, pCtx->es.Attr.u, pszPrefix, pCtx->dr[4], pszPrefix, pCtx->dr[5],
2502 pszPrefix, pCtxCore->fs.Sel, pCtx->fs.u64Base, pCtx->fs.u32Limit, pCtx->fs.Attr.u, pszPrefix, pCtx->dr[6], pszPrefix, pCtx->dr[7],
2503 pszPrefix, pCtxCore->gs.Sel, pCtx->gs.u64Base, pCtx->gs.u32Limit, pCtx->gs.Attr.u, pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2,
2504 pszPrefix, pCtxCore->ss.Sel, pCtx->ss.u64Base, pCtx->ss.u32Limit, pCtx->ss.Attr.u, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
2505 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->idtr.pIdt, pCtx->idtr.cbIdt, pszPrefix, efl,
2506 pszPrefix, pCtx->ldtr.Sel, pCtx->ldtr.u64Base, pCtx->ldtr.u32Limit, pCtx->ldtr.Attr.u,
2507 pszPrefix, pCtx->tr.Sel, pCtx->tr.u64Base, pCtx->tr.u32Limit, pCtx->tr.Attr.u,
2508 pszPrefix, pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
2509
2510 pHlp->pfnPrintf(pHlp, "%sxcr=%016RX64 %sxcr1=%016RX64 %sxss=%016RX64 (fXStateMask=%016RX64)\n",
2511 pszPrefix, pCtx->aXcr[0], pszPrefix, pCtx->aXcr[1],
2512 pszPrefix, UINT64_C(0) /** @todo XSS */, pCtx->fXStateMask);
2513 if (pCtx->CTX_SUFF(pXState))
2514 {
2515 PX86FXSTATE pFpuCtx = &pCtx->CTX_SUFF(pXState)->x87;
2516 pHlp->pfnPrintf(pHlp,
2517 "%sFCW=%04x %sFSW=%04x %sFTW=%04x %sFOP=%04x %sMXCSR=%08x %sMXCSR_MASK=%08x\n"
2518 "%sFPUIP=%08x %sCS=%04x %sRsrvd1=%04x %sFPUDP=%08x %sDS=%04x %sRsvrd2=%04x\n"
2519 ,
2520 pszPrefix, pFpuCtx->FCW, pszPrefix, pFpuCtx->FSW, pszPrefix, pFpuCtx->FTW, pszPrefix, pFpuCtx->FOP,
2521 pszPrefix, pFpuCtx->MXCSR, pszPrefix, pFpuCtx->MXCSR_MASK,
2522 pszPrefix, pFpuCtx->FPUIP, pszPrefix, pFpuCtx->CS, pszPrefix, pFpuCtx->Rsrvd1,
2523 pszPrefix, pFpuCtx->FPUDP, pszPrefix, pFpuCtx->DS, pszPrefix, pFpuCtx->Rsrvd2
2524 );
2525 /*
2526 * The FSAVE style memory image contains ST(0)-ST(7) at increasing addresses,
2527 * not (FP)R0-7 as Intel SDM suggests.
2528 */
2529 unsigned iShift = (pFpuCtx->FSW >> 11) & 7;
2530 for (unsigned iST = 0; iST < RT_ELEMENTS(pFpuCtx->aRegs); iST++)
2531 {
2532 unsigned iFPR = (iST + iShift) % RT_ELEMENTS(pFpuCtx->aRegs);
2533 unsigned uTag = (pFpuCtx->FTW >> (2 * iFPR)) & 3;
2534 char chSign = pFpuCtx->aRegs[iST].au16[4] & 0x8000 ? '-' : '+';
2535 unsigned iInteger = (unsigned)(pFpuCtx->aRegs[iST].au64[0] >> 63);
2536 uint64_t u64Fraction = pFpuCtx->aRegs[iST].au64[0] & UINT64_C(0x7fffffffffffffff);
2537 int iExponent = pFpuCtx->aRegs[iST].au16[4] & 0x7fff;
2538 iExponent -= 16383; /* subtract bias */
2539 /** @todo This isn't entirenly correct and needs more work! */
2540 pHlp->pfnPrintf(pHlp,
2541 "%sST(%u)=%sFPR%u={%04RX16'%08RX32'%08RX32} t%d %c%u.%022llu * 2 ^ %d (*)",
2542 pszPrefix, iST, pszPrefix, iFPR,
2543 pFpuCtx->aRegs[iST].au16[4], pFpuCtx->aRegs[iST].au32[1], pFpuCtx->aRegs[iST].au32[0],
2544 uTag, chSign, iInteger, u64Fraction, iExponent);
2545 if (pFpuCtx->aRegs[iST].au16[5] || pFpuCtx->aRegs[iST].au16[6] || pFpuCtx->aRegs[iST].au16[7])
2546 pHlp->pfnPrintf(pHlp, " res={%04RX16,%04RX16,%04RX16}\n",
2547 pFpuCtx->aRegs[iST].au16[5], pFpuCtx->aRegs[iST].au16[6], pFpuCtx->aRegs[iST].au16[7]);
2548 else
2549 pHlp->pfnPrintf(pHlp, "\n");
2550 }
2551
2552 /* XMM/YMM/ZMM registers. */
2553 if (pCtx->fXStateMask & XSAVE_C_YMM)
2554 {
2555 PCX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_YMM_BIT, PCX86XSAVEYMMHI);
2556 if (!(pCtx->fXStateMask & XSAVE_C_ZMM_HI256))
2557 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
2558 pHlp->pfnPrintf(pHlp, "%sYMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
2559 pszPrefix, i, i < 10 ? " " : "",
2560 pYmmHiCtx->aYmmHi[i].au32[3],
2561 pYmmHiCtx->aYmmHi[i].au32[2],
2562 pYmmHiCtx->aYmmHi[i].au32[1],
2563 pYmmHiCtx->aYmmHi[i].au32[0],
2564 pFpuCtx->aXMM[i].au32[3],
2565 pFpuCtx->aXMM[i].au32[2],
2566 pFpuCtx->aXMM[i].au32[1],
2567 pFpuCtx->aXMM[i].au32[0]);
2568 else
2569 {
2570 PCX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_ZMM_HI256_BIT, PCX86XSAVEZMMHI256);
2571 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
2572 pHlp->pfnPrintf(pHlp,
2573 "%sZMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32''%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
2574 pszPrefix, i, i < 10 ? " " : "",
2575 pZmmHi256->aHi256Regs[i].au32[7],
2576 pZmmHi256->aHi256Regs[i].au32[6],
2577 pZmmHi256->aHi256Regs[i].au32[5],
2578 pZmmHi256->aHi256Regs[i].au32[4],
2579 pZmmHi256->aHi256Regs[i].au32[3],
2580 pZmmHi256->aHi256Regs[i].au32[2],
2581 pZmmHi256->aHi256Regs[i].au32[1],
2582 pZmmHi256->aHi256Regs[i].au32[0],
2583 pYmmHiCtx->aYmmHi[i].au32[3],
2584 pYmmHiCtx->aYmmHi[i].au32[2],
2585 pYmmHiCtx->aYmmHi[i].au32[1],
2586 pYmmHiCtx->aYmmHi[i].au32[0],
2587 pFpuCtx->aXMM[i].au32[3],
2588 pFpuCtx->aXMM[i].au32[2],
2589 pFpuCtx->aXMM[i].au32[1],
2590 pFpuCtx->aXMM[i].au32[0]);
2591
2592 PCX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_ZMM_16HI_BIT, PCX86XSAVEZMM16HI);
2593 for (unsigned i = 0; i < RT_ELEMENTS(pZmm16Hi->aRegs); i++)
2594 pHlp->pfnPrintf(pHlp,
2595 "%sZMM%u=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32''%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
2596 pszPrefix, i + 16,
2597 pZmm16Hi->aRegs[i].au32[15],
2598 pZmm16Hi->aRegs[i].au32[14],
2599 pZmm16Hi->aRegs[i].au32[13],
2600 pZmm16Hi->aRegs[i].au32[12],
2601 pZmm16Hi->aRegs[i].au32[11],
2602 pZmm16Hi->aRegs[i].au32[10],
2603 pZmm16Hi->aRegs[i].au32[9],
2604 pZmm16Hi->aRegs[i].au32[8],
2605 pZmm16Hi->aRegs[i].au32[7],
2606 pZmm16Hi->aRegs[i].au32[6],
2607 pZmm16Hi->aRegs[i].au32[5],
2608 pZmm16Hi->aRegs[i].au32[4],
2609 pZmm16Hi->aRegs[i].au32[3],
2610 pZmm16Hi->aRegs[i].au32[2],
2611 pZmm16Hi->aRegs[i].au32[1],
2612 pZmm16Hi->aRegs[i].au32[0]);
2613 }
2614 }
2615 else
2616 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
2617 pHlp->pfnPrintf(pHlp,
2618 i & 1
2619 ? "%sXMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32\n"
2620 : "%sXMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32 ",
2621 pszPrefix, i, i < 10 ? " " : "",
2622 pFpuCtx->aXMM[i].au32[3],
2623 pFpuCtx->aXMM[i].au32[2],
2624 pFpuCtx->aXMM[i].au32[1],
2625 pFpuCtx->aXMM[i].au32[0]);
2626
2627 if (pCtx->fXStateMask & XSAVE_C_OPMASK)
2628 {
2629 PCX86XSAVEOPMASK pOpMask = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_OPMASK_BIT, PCX86XSAVEOPMASK);
2630 for (unsigned i = 0; i < RT_ELEMENTS(pOpMask->aKRegs); i += 4)
2631 pHlp->pfnPrintf(pHlp, "%sK%u=%016RX64 %sK%u=%016RX64 %sK%u=%016RX64 %sK%u=%016RX64\n",
2632 pszPrefix, i + 0, pOpMask->aKRegs[i + 0],
2633 pszPrefix, i + 1, pOpMask->aKRegs[i + 1],
2634 pszPrefix, i + 2, pOpMask->aKRegs[i + 2],
2635 pszPrefix, i + 3, pOpMask->aKRegs[i + 3]);
2636 }
2637
2638 if (pCtx->fXStateMask & XSAVE_C_BNDREGS)
2639 {
2640 PCX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_BNDREGS_BIT, PCX86XSAVEBNDREGS);
2641 for (unsigned i = 0; i < RT_ELEMENTS(pBndRegs->aRegs); i += 2)
2642 pHlp->pfnPrintf(pHlp, "%sBNDREG%u=%016RX64/%016RX64 %sBNDREG%u=%016RX64/%016RX64\n",
2643 pszPrefix, i, pBndRegs->aRegs[i].uLowerBound, pBndRegs->aRegs[i].uUpperBound,
2644 pszPrefix, i + 1, pBndRegs->aRegs[i + 1].uLowerBound, pBndRegs->aRegs[i + 1].uUpperBound);
2645 }
2646
2647 if (pCtx->fXStateMask & XSAVE_C_BNDCSR)
2648 {
2649 PCX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_BNDCSR_BIT, PCX86XSAVEBNDCFG);
2650 pHlp->pfnPrintf(pHlp, "%sBNDCFG.CONFIG=%016RX64 %sBNDCFG.STATUS=%016RX64\n",
2651 pszPrefix, pBndCfg->fConfig, pszPrefix, pBndCfg->fStatus);
2652 }
2653
2654 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->au32RsrvdRest); i++)
2655 if (pFpuCtx->au32RsrvdRest[i])
2656 pHlp->pfnPrintf(pHlp, "%sRsrvdRest[%u]=%RX32 (offset=%#x)\n",
2657 pszPrefix, i, pFpuCtx->au32RsrvdRest[i], RT_UOFFSETOF_DYN(X86FXSTATE, au32RsrvdRest[i]) );
2658 }
2659
2660 pHlp->pfnPrintf(pHlp,
2661 "%sEFER =%016RX64\n"
2662 "%sPAT =%016RX64\n"
2663 "%sSTAR =%016RX64\n"
2664 "%sCSTAR =%016RX64\n"
2665 "%sLSTAR =%016RX64\n"
2666 "%sSFMASK =%016RX64\n"
2667 "%sKERNELGSBASE =%016RX64\n",
2668 pszPrefix, pCtx->msrEFER,
2669 pszPrefix, pCtx->msrPAT,
2670 pszPrefix, pCtx->msrSTAR,
2671 pszPrefix, pCtx->msrCSTAR,
2672 pszPrefix, pCtx->msrLSTAR,
2673 pszPrefix, pCtx->msrSFMASK,
2674 pszPrefix, pCtx->msrKERNELGSBASE);
2675 break;
2676 }
2677}
2678
2679
2680/**
2681 * Display all cpu states and any other cpum info.
2682 *
2683 * @param pVM The cross context VM structure.
2684 * @param pHlp The info helper functions.
2685 * @param pszArgs Arguments, ignored.
2686 */
2687static DECLCALLBACK(void) cpumR3InfoAll(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2688{
2689 cpumR3InfoGuest(pVM, pHlp, pszArgs);
2690 cpumR3InfoGuestInstr(pVM, pHlp, pszArgs);
2691 cpumR3InfoGuestHwvirt(pVM, pHlp, pszArgs);
2692 cpumR3InfoHyper(pVM, pHlp, pszArgs);
2693 cpumR3InfoHost(pVM, pHlp, pszArgs);
2694}
2695
2696
2697/**
2698 * Parses the info argument.
2699 *
2700 * The argument starts with 'verbose', 'terse' or 'default' and then
2701 * continues with the comment string.
2702 *
2703 * @param pszArgs The pointer to the argument string.
2704 * @param penmType Where to store the dump type request.
2705 * @param ppszComment Where to store the pointer to the comment string.
2706 */
2707static void cpumR3InfoParseArg(const char *pszArgs, CPUMDUMPTYPE *penmType, const char **ppszComment)
2708{
2709 if (!pszArgs)
2710 {
2711 *penmType = CPUMDUMPTYPE_DEFAULT;
2712 *ppszComment = "";
2713 }
2714 else
2715 {
2716 if (!strncmp(pszArgs, RT_STR_TUPLE("verbose")))
2717 {
2718 pszArgs += 7;
2719 *penmType = CPUMDUMPTYPE_VERBOSE;
2720 }
2721 else if (!strncmp(pszArgs, RT_STR_TUPLE("terse")))
2722 {
2723 pszArgs += 5;
2724 *penmType = CPUMDUMPTYPE_TERSE;
2725 }
2726 else if (!strncmp(pszArgs, RT_STR_TUPLE("default")))
2727 {
2728 pszArgs += 7;
2729 *penmType = CPUMDUMPTYPE_DEFAULT;
2730 }
2731 else
2732 *penmType = CPUMDUMPTYPE_DEFAULT;
2733 *ppszComment = RTStrStripL(pszArgs);
2734 }
2735}
2736
2737
2738/**
2739 * Display the guest cpu state.
2740 *
2741 * @param pVM The cross context VM structure.
2742 * @param pHlp The info helper functions.
2743 * @param pszArgs Arguments.
2744 */
2745static DECLCALLBACK(void) cpumR3InfoGuest(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2746{
2747 CPUMDUMPTYPE enmType;
2748 const char *pszComment;
2749 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
2750
2751 PVMCPU pVCpu = VMMGetCpu(pVM);
2752 if (!pVCpu)
2753 pVCpu = &pVM->aCpus[0];
2754
2755 pHlp->pfnPrintf(pHlp, "Guest CPUM (VCPU %d) state: %s\n", pVCpu->idCpu, pszComment);
2756
2757 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
2758 cpumR3InfoOne(pVM, pCtx, CPUMCTX2CORE(pCtx), pHlp, enmType, "");
2759}
2760
2761
2762/**
2763 * Displays an SVM VMCB control area.
2764 *
2765 * @param pHlp The info helper functions.
2766 * @param pVmcbCtrl Pointer to a SVM VMCB controls area.
2767 * @param pszPrefix Caller specified string prefix.
2768 */
2769static void cpumR3InfoSvmVmcbCtrl(PCDBGFINFOHLP pHlp, PCSVMVMCBCTRL pVmcbCtrl, const char *pszPrefix)
2770{
2771 AssertReturnVoid(pHlp);
2772 AssertReturnVoid(pVmcbCtrl);
2773
2774 pHlp->pfnPrintf(pHlp, "%su16InterceptRdCRx = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptRdCRx);
2775 pHlp->pfnPrintf(pHlp, "%su16InterceptWrCRx = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptWrCRx);
2776 pHlp->pfnPrintf(pHlp, "%su16InterceptRdDRx = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptRdDRx);
2777 pHlp->pfnPrintf(pHlp, "%su16InterceptWrDRx = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptWrDRx);
2778 pHlp->pfnPrintf(pHlp, "%su32InterceptXcpt = %#RX32\n", pszPrefix, pVmcbCtrl->u32InterceptXcpt);
2779 pHlp->pfnPrintf(pHlp, "%su64InterceptCtrl = %#RX64\n", pszPrefix, pVmcbCtrl->u64InterceptCtrl);
2780 pHlp->pfnPrintf(pHlp, "%su16PauseFilterThreshold = %#RX16\n", pszPrefix, pVmcbCtrl->u16PauseFilterThreshold);
2781 pHlp->pfnPrintf(pHlp, "%su16PauseFilterCount = %#RX16\n", pszPrefix, pVmcbCtrl->u16PauseFilterCount);
2782 pHlp->pfnPrintf(pHlp, "%su64IOPMPhysAddr = %#RX64\n", pszPrefix, pVmcbCtrl->u64IOPMPhysAddr);
2783 pHlp->pfnPrintf(pHlp, "%su64MSRPMPhysAddr = %#RX64\n", pszPrefix, pVmcbCtrl->u64MSRPMPhysAddr);
2784 pHlp->pfnPrintf(pHlp, "%su64TSCOffset = %#RX64\n", pszPrefix, pVmcbCtrl->u64TSCOffset);
2785 pHlp->pfnPrintf(pHlp, "%sTLBCtrl\n", pszPrefix);
2786 pHlp->pfnPrintf(pHlp, "%s u32ASID = %#RX32\n", pszPrefix, pVmcbCtrl->TLBCtrl.n.u32ASID);
2787 pHlp->pfnPrintf(pHlp, "%s u8TLBFlush = %u\n", pszPrefix, pVmcbCtrl->TLBCtrl.n.u8TLBFlush);
2788 pHlp->pfnPrintf(pHlp, "%sIntCtrl\n", pszPrefix);
2789 pHlp->pfnPrintf(pHlp, "%s u8VTPR = %#RX8 (%u)\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u8VTPR, pVmcbCtrl->IntCtrl.n.u8VTPR);
2790 pHlp->pfnPrintf(pHlp, "%s u1VIrqPending = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VIrqPending);
2791 pHlp->pfnPrintf(pHlp, "%s u1VGif = %u\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VGif);
2792 pHlp->pfnPrintf(pHlp, "%s u4VIntrPrio = %#RX8\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u4VIntrPrio);
2793 pHlp->pfnPrintf(pHlp, "%s u1IgnoreTPR = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1IgnoreTPR);
2794 pHlp->pfnPrintf(pHlp, "%s u1VIntrMasking = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VIntrMasking);
2795 pHlp->pfnPrintf(pHlp, "%s u1VGifEnable = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VGifEnable);
2796 pHlp->pfnPrintf(pHlp, "%s u1AvicEnable = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1AvicEnable);
2797 pHlp->pfnPrintf(pHlp, "%s u8VIntrVector = %#RX8\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u8VIntrVector);
2798 pHlp->pfnPrintf(pHlp, "%sIntShadow\n", pszPrefix);
2799 pHlp->pfnPrintf(pHlp, "%s u1IntShadow = %RTbool\n", pszPrefix, pVmcbCtrl->IntShadow.n.u1IntShadow);
2800 pHlp->pfnPrintf(pHlp, "%s u1GuestIntMask = %RTbool\n", pszPrefix, pVmcbCtrl->IntShadow.n.u1GuestIntMask);
2801 pHlp->pfnPrintf(pHlp, "%su64ExitCode = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitCode);
2802 pHlp->pfnPrintf(pHlp, "%su64ExitInfo1 = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitInfo1);
2803 pHlp->pfnPrintf(pHlp, "%su64ExitInfo2 = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitInfo2);
2804 pHlp->pfnPrintf(pHlp, "%sExitIntInfo\n", pszPrefix);
2805 pHlp->pfnPrintf(pHlp, "%s u8Vector = %#RX8 (%u)\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u8Vector, pVmcbCtrl->ExitIntInfo.n.u8Vector);
2806 pHlp->pfnPrintf(pHlp, "%s u3Type = %u\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u3Type);
2807 pHlp->pfnPrintf(pHlp, "%s u1ErrorCodeValid = %RTbool\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u1ErrorCodeValid);
2808 pHlp->pfnPrintf(pHlp, "%s u1Valid = %RTbool\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u1Valid);
2809 pHlp->pfnPrintf(pHlp, "%s u32ErrorCode = %#RX32\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u32ErrorCode);
2810 pHlp->pfnPrintf(pHlp, "%sNestedPaging and SEV\n", pszPrefix);
2811 pHlp->pfnPrintf(pHlp, "%s u1NestedPaging = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1NestedPaging);
2812 pHlp->pfnPrintf(pHlp, "%s u1Sev = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1Sev);
2813 pHlp->pfnPrintf(pHlp, "%s u1SevEs = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1SevEs);
2814 pHlp->pfnPrintf(pHlp, "%sAvicBar\n", pszPrefix);
2815 pHlp->pfnPrintf(pHlp, "%s u40Addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicBar.n.u40Addr);
2816 pHlp->pfnPrintf(pHlp, "%sEventInject\n", pszPrefix);
2817 pHlp->pfnPrintf(pHlp, "%s EventInject\n", pszPrefix);
2818 pHlp->pfnPrintf(pHlp, "%s u8Vector = %#RX32 (%u)\n", pszPrefix, pVmcbCtrl->EventInject.n.u8Vector, pVmcbCtrl->EventInject.n.u8Vector);
2819 pHlp->pfnPrintf(pHlp, "%s u3Type = %u\n", pszPrefix, pVmcbCtrl->EventInject.n.u3Type);
2820 pHlp->pfnPrintf(pHlp, "%s u1ErrorCodeValid = %RTbool\n", pszPrefix, pVmcbCtrl->EventInject.n.u1ErrorCodeValid);
2821 pHlp->pfnPrintf(pHlp, "%s u1Valid = %RTbool\n", pszPrefix, pVmcbCtrl->EventInject.n.u1Valid);
2822 pHlp->pfnPrintf(pHlp, "%s u32ErrorCode = %#RX32\n", pszPrefix, pVmcbCtrl->EventInject.n.u32ErrorCode);
2823 pHlp->pfnPrintf(pHlp, "%su64NestedPagingCR3 = %#RX64\n", pszPrefix, pVmcbCtrl->u64NestedPagingCR3);
2824 pHlp->pfnPrintf(pHlp, "%sLBR virtualization\n", pszPrefix);
2825 pHlp->pfnPrintf(pHlp, "%s u1LbrVirt = %RTbool\n", pszPrefix, pVmcbCtrl->LbrVirt.n.u1LbrVirt);
2826 pHlp->pfnPrintf(pHlp, "%s u1VirtVmsaveVmload = %RTbool\n", pszPrefix, pVmcbCtrl->LbrVirt.n.u1VirtVmsaveVmload);
2827 pHlp->pfnPrintf(pHlp, "%su32VmcbCleanBits = %#RX32\n", pszPrefix, pVmcbCtrl->u32VmcbCleanBits);
2828 pHlp->pfnPrintf(pHlp, "%su64NextRIP = %#RX64\n", pszPrefix, pVmcbCtrl->u64NextRIP);
2829 pHlp->pfnPrintf(pHlp, "%scbInstrFetched = %u\n", pszPrefix, pVmcbCtrl->cbInstrFetched);
2830 pHlp->pfnPrintf(pHlp, "%sabInstr = %.*Rhxs\n", pszPrefix, sizeof(pVmcbCtrl->abInstr), pVmcbCtrl->abInstr);
2831 pHlp->pfnPrintf(pHlp, "%sAvicBackingPagePtr\n", pszPrefix);
2832 pHlp->pfnPrintf(pHlp, "%s u40Addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicBackingPagePtr.n.u40Addr);
2833 pHlp->pfnPrintf(pHlp, "%sAvicLogicalTablePtr\n", pszPrefix);
2834 pHlp->pfnPrintf(pHlp, "%s u40Addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicLogicalTablePtr.n.u40Addr);
2835 pHlp->pfnPrintf(pHlp, "%sAvicPhysicalTablePtr\n", pszPrefix);
2836 pHlp->pfnPrintf(pHlp, "%s u8LastGuestCoreId = %u\n", pszPrefix, pVmcbCtrl->AvicPhysicalTablePtr.n.u8LastGuestCoreId);
2837 pHlp->pfnPrintf(pHlp, "%s u40Addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicPhysicalTablePtr.n.u40Addr);
2838}
2839
2840
2841/**
2842 * Helper for dumping the SVM VMCB selector registers.
2843 *
2844 * @param pHlp The info helper functions.
2845 * @param pSel Pointer to the SVM selector register.
2846 * @param pszName Name of the selector.
2847 * @param pszPrefix Caller specified string prefix.
2848 */
2849DECLINLINE(void) cpumR3InfoSvmVmcbSelReg(PCDBGFINFOHLP pHlp, PCSVMSELREG pSel, const char *pszName, const char *pszPrefix)
2850{
2851 /* The string width of 4 used below is to handle 'LDTR'. Change later if longer register names are used. */
2852 pHlp->pfnPrintf(pHlp, "%s%-4s = {%04x base=%016RX64 limit=%08x flags=%04x}\n", pszPrefix,
2853 pszName, pSel->u16Sel, pSel->u64Base, pSel->u32Limit, pSel->u16Attr);
2854}
2855
2856
2857/**
2858 * Helper for dumping the SVM VMCB GDTR/IDTR registers.
2859 *
2860 * @param pHlp The info helper functions.
2861 * @param pXdtr Pointer to the descriptor table register.
2862 * @param pszName Name of the descriptor table register.
2863 * @param pszPrefix Caller specified string prefix.
2864 */
2865DECLINLINE(void) cpumR3InfoSvmVmcbXdtr(PCDBGFINFOHLP pHlp, PCSVMXDTR pXdtr, const char *pszName, const char *pszPrefix)
2866{
2867 /* The string width of 4 used below is to cover 'GDTR', 'IDTR'. Change later if longer register names are used. */
2868 pHlp->pfnPrintf(pHlp, "%s%-4s = %016RX64:%04x\n", pszPrefix, pszName, pXdtr->u64Base, pXdtr->u32Limit);
2869}
2870
2871
2872/**
2873 * Displays an SVM VMCB state-save area.
2874 *
2875 * @param pHlp The info helper functions.
2876 * @param pVmcbStateSave Pointer to a SVM VMCB controls area.
2877 * @param pszPrefix Caller specified string prefix.
2878 */
2879static void cpumR3InfoSvmVmcbStateSave(PCDBGFINFOHLP pHlp, PCSVMVMCBSTATESAVE pVmcbStateSave, const char *pszPrefix)
2880{
2881 AssertReturnVoid(pHlp);
2882 AssertReturnVoid(pVmcbStateSave);
2883
2884 char szEFlags[80];
2885 cpumR3InfoFormatFlags(&szEFlags[0], pVmcbStateSave->u64RFlags);
2886
2887 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->CS, "CS", pszPrefix);
2888 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->SS, "SS", pszPrefix);
2889 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->ES, "ES", pszPrefix);
2890 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->DS, "DS", pszPrefix);
2891 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->FS, "FS", pszPrefix);
2892 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->GS, "GS", pszPrefix);
2893 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->LDTR, "LDTR", pszPrefix);
2894 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->TR, "TR", pszPrefix);
2895 cpumR3InfoSvmVmcbXdtr(pHlp, &pVmcbStateSave->GDTR, "GDTR", pszPrefix);
2896 cpumR3InfoSvmVmcbXdtr(pHlp, &pVmcbStateSave->IDTR, "IDTR", pszPrefix);
2897 pHlp->pfnPrintf(pHlp, "%su8CPL = %u\n", pszPrefix, pVmcbStateSave->u8CPL);
2898 pHlp->pfnPrintf(pHlp, "%su64EFER = %#RX64\n", pszPrefix, pVmcbStateSave->u64EFER);
2899 pHlp->pfnPrintf(pHlp, "%su64CR4 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR4);
2900 pHlp->pfnPrintf(pHlp, "%su64CR3 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR3);
2901 pHlp->pfnPrintf(pHlp, "%su64CR0 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR0);
2902 pHlp->pfnPrintf(pHlp, "%su64DR7 = %#RX64\n", pszPrefix, pVmcbStateSave->u64DR7);
2903 pHlp->pfnPrintf(pHlp, "%su64DR6 = %#RX64\n", pszPrefix, pVmcbStateSave->u64DR6);
2904 pHlp->pfnPrintf(pHlp, "%su64RFlags = %#RX64 %31s\n", pszPrefix, pVmcbStateSave->u64RFlags, szEFlags);
2905 pHlp->pfnPrintf(pHlp, "%su64RIP = %#RX64\n", pszPrefix, pVmcbStateSave->u64RIP);
2906 pHlp->pfnPrintf(pHlp, "%su64RSP = %#RX64\n", pszPrefix, pVmcbStateSave->u64RSP);
2907 pHlp->pfnPrintf(pHlp, "%su64RAX = %#RX64\n", pszPrefix, pVmcbStateSave->u64RAX);
2908 pHlp->pfnPrintf(pHlp, "%su64STAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64STAR);
2909 pHlp->pfnPrintf(pHlp, "%su64LSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64LSTAR);
2910 pHlp->pfnPrintf(pHlp, "%su64CSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64CSTAR);
2911 pHlp->pfnPrintf(pHlp, "%su64SFMASK = %#RX64\n", pszPrefix, pVmcbStateSave->u64SFMASK);
2912 pHlp->pfnPrintf(pHlp, "%su64KernelGSBase = %#RX64\n", pszPrefix, pVmcbStateSave->u64KernelGSBase);
2913 pHlp->pfnPrintf(pHlp, "%su64SysEnterCS = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterCS);
2914 pHlp->pfnPrintf(pHlp, "%su64SysEnterEIP = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterEIP);
2915 pHlp->pfnPrintf(pHlp, "%su64SysEnterESP = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterESP);
2916 pHlp->pfnPrintf(pHlp, "%su64CR2 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR2);
2917 pHlp->pfnPrintf(pHlp, "%su64PAT = %#RX64\n", pszPrefix, pVmcbStateSave->u64PAT);
2918 pHlp->pfnPrintf(pHlp, "%su64DBGCTL = %#RX64\n", pszPrefix, pVmcbStateSave->u64DBGCTL);
2919 pHlp->pfnPrintf(pHlp, "%su64BR_FROM = %#RX64\n", pszPrefix, pVmcbStateSave->u64BR_FROM);
2920 pHlp->pfnPrintf(pHlp, "%su64BR_TO = %#RX64\n", pszPrefix, pVmcbStateSave->u64BR_TO);
2921 pHlp->pfnPrintf(pHlp, "%su64LASTEXCPFROM = %#RX64\n", pszPrefix, pVmcbStateSave->u64LASTEXCPFROM);
2922 pHlp->pfnPrintf(pHlp, "%su64LASTEXCPTO = %#RX64\n", pszPrefix, pVmcbStateSave->u64LASTEXCPTO);
2923}
2924
2925
2926/**
2927 * Display the guest's hardware-virtualization cpu state.
2928 *
2929 * @param pVM The cross context VM structure.
2930 * @param pHlp The info helper functions.
2931 * @param pszArgs Arguments, ignored.
2932 */
2933static DECLCALLBACK(void) cpumR3InfoGuestHwvirt(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2934{
2935 RT_NOREF(pszArgs);
2936
2937 PVMCPU pVCpu = VMMGetCpu(pVM);
2938 if (!pVCpu)
2939 pVCpu = &pVM->aCpus[0];
2940
2941 /*
2942 * Figure out what to dump.
2943 *
2944 * In the future we may need to dump everything whether or not we're actively in nested-guest mode
2945 * or not, hence the reason why we use a mask to determine what needs dumping. Currently, we only
2946 * dump hwvirt. state when the guest CPU is executing a nested-guest.
2947 */
2948 /** @todo perhaps make this configurable through pszArgs, depending on how much
2949 * noise we wish to accept when nested hwvirt. isn't used. */
2950#define CPUMHWVIRTDUMP_NONE (0)
2951#define CPUMHWVIRTDUMP_SVM RT_BIT(0)
2952#define CPUMHWVIRTDUMP_VMX RT_BIT(1)
2953#define CPUMHWVIRTDUMP_COMMON RT_BIT(2)
2954#define CPUMHWVIRTDUMP_LAST CPUMHWVIRTDUMP_VMX
2955
2956 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
2957 static const char *const s_aHwvirtModes[] = { "No/inactive", "SVM", "VMX", "Common" };
2958 bool const fSvm = pVM->cpum.ro.GuestFeatures.fSvm;
2959 bool const fVmx = pVM->cpum.ro.GuestFeatures.fVmx;
2960 uint8_t const idxHwvirtState = fSvm ? CPUMHWVIRTDUMP_SVM : (fVmx ? CPUMHWVIRTDUMP_VMX : CPUMHWVIRTDUMP_NONE);
2961 AssertCompile(CPUMHWVIRTDUMP_LAST <= RT_ELEMENTS(s_aHwvirtModes));
2962 Assert(idxHwvirtState < RT_ELEMENTS(s_aHwvirtModes));
2963 const char *pcszHwvirtMode = s_aHwvirtModes[idxHwvirtState];
2964 uint32_t fDumpState = idxHwvirtState | CPUMHWVIRTDUMP_COMMON;
2965
2966 /*
2967 * Dump it.
2968 */
2969 pHlp->pfnPrintf(pHlp, "VCPU[%u] hardware virtualization state:\n", pVCpu->idCpu);
2970
2971 if (fDumpState & CPUMHWVIRTDUMP_COMMON)
2972 pHlp->pfnPrintf(pHlp, "fLocalForcedActions = %#RX32\n", pCtx->hwvirt.fLocalForcedActions);
2973
2974 pHlp->pfnPrintf(pHlp, "%s hwvirt state%s\n", pcszHwvirtMode, (fDumpState & (CPUMHWVIRTDUMP_SVM | CPUMHWVIRTDUMP_VMX)) ?
2975 ":" : "");
2976 if (fDumpState & CPUMHWVIRTDUMP_SVM)
2977 {
2978 pHlp->pfnPrintf(pHlp, " fGif = %RTbool\n", pCtx->hwvirt.fGif);
2979
2980 char szEFlags[80];
2981 cpumR3InfoFormatFlags(&szEFlags[0], pCtx->hwvirt.svm.HostState.rflags.u);
2982 pHlp->pfnPrintf(pHlp, " uMsrHSavePa = %#RX64\n", pCtx->hwvirt.svm.uMsrHSavePa);
2983 pHlp->pfnPrintf(pHlp, " GCPhysVmcb = %#RGp\n", pCtx->hwvirt.svm.GCPhysVmcb);
2984 pHlp->pfnPrintf(pHlp, " VmcbCtrl:\n");
2985 cpumR3InfoSvmVmcbCtrl(pHlp, &pCtx->hwvirt.svm.pVmcbR3->ctrl, " " /* pszPrefix */);
2986 pHlp->pfnPrintf(pHlp, " VmcbStateSave:\n");
2987 cpumR3InfoSvmVmcbStateSave(pHlp, &pCtx->hwvirt.svm.pVmcbR3->guest, " " /* pszPrefix */);
2988 pHlp->pfnPrintf(pHlp, " HostState:\n");
2989 pHlp->pfnPrintf(pHlp, " uEferMsr = %#RX64\n", pCtx->hwvirt.svm.HostState.uEferMsr);
2990 pHlp->pfnPrintf(pHlp, " uCr0 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr0);
2991 pHlp->pfnPrintf(pHlp, " uCr4 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr4);
2992 pHlp->pfnPrintf(pHlp, " uCr3 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr3);
2993 pHlp->pfnPrintf(pHlp, " uRip = %#RX64\n", pCtx->hwvirt.svm.HostState.uRip);
2994 pHlp->pfnPrintf(pHlp, " uRsp = %#RX64\n", pCtx->hwvirt.svm.HostState.uRsp);
2995 pHlp->pfnPrintf(pHlp, " uRax = %#RX64\n", pCtx->hwvirt.svm.HostState.uRax);
2996 pHlp->pfnPrintf(pHlp, " rflags = %#RX64 %31s\n", pCtx->hwvirt.svm.HostState.rflags.u64, szEFlags);
2997 PCPUMSELREG pSel = &pCtx->hwvirt.svm.HostState.es;
2998 pHlp->pfnPrintf(pHlp, " es = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
2999 pSel->Sel, pSel->u64Base, pSel->u32Limit, pSel->Attr.u);
3000 pSel = &pCtx->hwvirt.svm.HostState.cs;
3001 pHlp->pfnPrintf(pHlp, " cs = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3002 pSel->Sel, pSel->u64Base, pSel->u32Limit, pSel->Attr.u);
3003 pSel = &pCtx->hwvirt.svm.HostState.ss;
3004 pHlp->pfnPrintf(pHlp, " ss = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3005 pSel->Sel, pSel->u64Base, pSel->u32Limit, pSel->Attr.u);
3006 pSel = &pCtx->hwvirt.svm.HostState.ds;
3007 pHlp->pfnPrintf(pHlp, " ds = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3008 pSel->Sel, pSel->u64Base, pSel->u32Limit, pSel->Attr.u);
3009 pHlp->pfnPrintf(pHlp, " gdtr = %016RX64:%04x\n", pCtx->hwvirt.svm.HostState.gdtr.pGdt,
3010 pCtx->hwvirt.svm.HostState.gdtr.cbGdt);
3011 pHlp->pfnPrintf(pHlp, " idtr = %016RX64:%04x\n", pCtx->hwvirt.svm.HostState.idtr.pIdt,
3012 pCtx->hwvirt.svm.HostState.idtr.cbIdt);
3013 pHlp->pfnPrintf(pHlp, " cPauseFilter = %RU16\n", pCtx->hwvirt.svm.cPauseFilter);
3014 pHlp->pfnPrintf(pHlp, " cPauseFilterThreshold = %RU32\n", pCtx->hwvirt.svm.cPauseFilterThreshold);
3015 pHlp->pfnPrintf(pHlp, " fInterceptEvents = %u\n", pCtx->hwvirt.svm.fInterceptEvents);
3016 pHlp->pfnPrintf(pHlp, " pvMsrBitmapR3 = %p\n", pCtx->hwvirt.svm.pvMsrBitmapR3);
3017 pHlp->pfnPrintf(pHlp, " pvMsrBitmapR0 = %RKv\n", pCtx->hwvirt.svm.pvMsrBitmapR0);
3018 pHlp->pfnPrintf(pHlp, " pvIoBitmapR3 = %p\n", pCtx->hwvirt.svm.pvIoBitmapR3);
3019 pHlp->pfnPrintf(pHlp, " pvIoBitmapR0 = %RKv\n", pCtx->hwvirt.svm.pvIoBitmapR0);
3020 }
3021
3022 if (fDumpState & CPUMHWVIRTDUMP_VMX)
3023 {
3024 pHlp->pfnPrintf(pHlp, " fInVmxRootMode = %RTbool\n", pCtx->hwvirt.vmx.fInVmxRootMode);
3025 pHlp->pfnPrintf(pHlp, " fInVmxNonRootMode = %RTbool\n", pCtx->hwvirt.vmx.fInVmxNonRootMode);
3026 pHlp->pfnPrintf(pHlp, " GCPhysVmxon = %#RGp\n", pCtx->hwvirt.vmx.GCPhysVmxon);
3027 pHlp->pfnPrintf(pHlp, " GCPhysVmcs = %#RGp\n", pCtx->hwvirt.vmx.GCPhysVmcs);
3028 pHlp->pfnPrintf(pHlp, " enmDiag = %u (%s)\n", pCtx->hwvirt.vmx.enmDiag,
3029 HMVmxGetDiagDesc(pCtx->hwvirt.vmx.enmDiag));
3030 /** @todo NSTVMX: Dump remaining/new fields. */
3031 }
3032
3033#undef CPUMHWVIRTDUMP_NONE
3034#undef CPUMHWVIRTDUMP_COMMON
3035#undef CPUMHWVIRTDUMP_SVM
3036#undef CPUMHWVIRTDUMP_VMX
3037#undef CPUMHWVIRTDUMP_LAST
3038#undef CPUMHWVIRTDUMP_ALL
3039}
3040
3041/**
3042 * Display the current guest instruction
3043 *
3044 * @param pVM The cross context VM structure.
3045 * @param pHlp The info helper functions.
3046 * @param pszArgs Arguments, ignored.
3047 */
3048static DECLCALLBACK(void) cpumR3InfoGuestInstr(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3049{
3050 NOREF(pszArgs);
3051
3052 PVMCPU pVCpu = VMMGetCpu(pVM);
3053 if (!pVCpu)
3054 pVCpu = &pVM->aCpus[0];
3055
3056 char szInstruction[256];
3057 szInstruction[0] = '\0';
3058 DBGFR3DisasInstrCurrent(pVCpu, szInstruction, sizeof(szInstruction));
3059 pHlp->pfnPrintf(pHlp, "\nCPUM%u: %s\n\n", pVCpu->idCpu, szInstruction);
3060}
3061
3062
3063/**
3064 * Display the hypervisor cpu state.
3065 *
3066 * @param pVM The cross context VM structure.
3067 * @param pHlp The info helper functions.
3068 * @param pszArgs Arguments, ignored.
3069 */
3070static DECLCALLBACK(void) cpumR3InfoHyper(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3071{
3072 PVMCPU pVCpu = VMMGetCpu(pVM);
3073 if (!pVCpu)
3074 pVCpu = &pVM->aCpus[0];
3075
3076 CPUMDUMPTYPE enmType;
3077 const char *pszComment;
3078 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
3079 pHlp->pfnPrintf(pHlp, "Hypervisor CPUM state: %s\n", pszComment);
3080 cpumR3InfoOne(pVM, &pVCpu->cpum.s.Hyper, CPUMCTX2CORE(&pVCpu->cpum.s.Hyper), pHlp, enmType, ".");
3081 pHlp->pfnPrintf(pHlp, "CR4OrMask=%#x CR4AndMask=%#x\n", pVM->cpum.s.CR4.OrMask, pVM->cpum.s.CR4.AndMask);
3082}
3083
3084
3085/**
3086 * Display the host cpu state.
3087 *
3088 * @param pVM The cross context VM structure.
3089 * @param pHlp The info helper functions.
3090 * @param pszArgs Arguments, ignored.
3091 */
3092static DECLCALLBACK(void) cpumR3InfoHost(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3093{
3094 CPUMDUMPTYPE enmType;
3095 const char *pszComment;
3096 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
3097 pHlp->pfnPrintf(pHlp, "Host CPUM state: %s\n", pszComment);
3098
3099 PVMCPU pVCpu = VMMGetCpu(pVM);
3100 if (!pVCpu)
3101 pVCpu = &pVM->aCpus[0];
3102 PCPUMHOSTCTX pCtx = &pVCpu->cpum.s.Host;
3103
3104 /*
3105 * Format the EFLAGS.
3106 */
3107#if HC_ARCH_BITS == 32
3108 uint32_t efl = pCtx->eflags.u32;
3109#else
3110 uint64_t efl = pCtx->rflags;
3111#endif
3112 char szEFlags[80];
3113 cpumR3InfoFormatFlags(&szEFlags[0], efl);
3114
3115 /*
3116 * Format the registers.
3117 */
3118#if HC_ARCH_BITS == 32
3119 pHlp->pfnPrintf(pHlp,
3120 "eax=xxxxxxxx ebx=%08x ecx=xxxxxxxx edx=xxxxxxxx esi=%08x edi=%08x\n"
3121 "eip=xxxxxxxx esp=%08x ebp=%08x iopl=%d %31s\n"
3122 "cs=%04x ds=%04x es=%04x fs=%04x gs=%04x eflags=%08x\n"
3123 "cr0=%08RX64 cr2=xxxxxxxx cr3=%08RX64 cr4=%08RX64 gdtr=%08x:%04x ldtr=%04x\n"
3124 "dr[0]=%08RX64 dr[1]=%08RX64x dr[2]=%08RX64 dr[3]=%08RX64x dr[6]=%08RX64 dr[7]=%08RX64\n"
3125 "SysEnter={cs=%04x eip=%08x esp=%08x}\n"
3126 ,
3127 /*pCtx->eax,*/ pCtx->ebx, /*pCtx->ecx, pCtx->edx,*/ pCtx->esi, pCtx->edi,
3128 /*pCtx->eip,*/ pCtx->esp, pCtx->ebp, X86_EFL_GET_IOPL(efl), szEFlags,
3129 pCtx->cs, pCtx->ds, pCtx->es, pCtx->fs, pCtx->gs, efl,
3130 pCtx->cr0, /*pCtx->cr2,*/ pCtx->cr3, pCtx->cr4,
3131 pCtx->dr0, pCtx->dr1, pCtx->dr2, pCtx->dr3, pCtx->dr6, pCtx->dr7,
3132 (uint32_t)pCtx->gdtr.uAddr, pCtx->gdtr.cb, pCtx->ldtr,
3133 pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
3134#else
3135 pHlp->pfnPrintf(pHlp,
3136 "rax=xxxxxxxxxxxxxxxx rbx=%016RX64 rcx=xxxxxxxxxxxxxxxx\n"
3137 "rdx=xxxxxxxxxxxxxxxx rsi=%016RX64 rdi=%016RX64\n"
3138 "rip=xxxxxxxxxxxxxxxx rsp=%016RX64 rbp=%016RX64\n"
3139 " r8=xxxxxxxxxxxxxxxx r9=xxxxxxxxxxxxxxxx r10=%016RX64\n"
3140 "r11=%016RX64 r12=%016RX64 r13=%016RX64\n"
3141 "r14=%016RX64 r15=%016RX64\n"
3142 "iopl=%d %31s\n"
3143 "cs=%04x ds=%04x es=%04x fs=%04x gs=%04x eflags=%08RX64\n"
3144 "cr0=%016RX64 cr2=xxxxxxxxxxxxxxxx cr3=%016RX64\n"
3145 "cr4=%016RX64 ldtr=%04x tr=%04x\n"
3146 "dr[0]=%016RX64 dr[1]=%016RX64 dr[2]=%016RX64\n"
3147 "dr[3]=%016RX64 dr[6]=%016RX64 dr[7]=%016RX64\n"
3148 "gdtr=%016RX64:%04x idtr=%016RX64:%04x\n"
3149 "SysEnter={cs=%04x eip=%08x esp=%08x}\n"
3150 "FSbase=%016RX64 GSbase=%016RX64 efer=%08RX64\n"
3151 ,
3152 /*pCtx->rax,*/ pCtx->rbx, /*pCtx->rcx,
3153 pCtx->rdx,*/ pCtx->rsi, pCtx->rdi,
3154 /*pCtx->rip,*/ pCtx->rsp, pCtx->rbp,
3155 /*pCtx->r8, pCtx->r9,*/ pCtx->r10,
3156 pCtx->r11, pCtx->r12, pCtx->r13,
3157 pCtx->r14, pCtx->r15,
3158 X86_EFL_GET_IOPL(efl), szEFlags,
3159 pCtx->cs, pCtx->ds, pCtx->es, pCtx->fs, pCtx->gs, efl,
3160 pCtx->cr0, /*pCtx->cr2,*/ pCtx->cr3,
3161 pCtx->cr4, pCtx->ldtr, pCtx->tr,
3162 pCtx->dr0, pCtx->dr1, pCtx->dr2,
3163 pCtx->dr3, pCtx->dr6, pCtx->dr7,
3164 pCtx->gdtr.uAddr, pCtx->gdtr.cb, pCtx->idtr.uAddr, pCtx->idtr.cb,
3165 pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp,
3166 pCtx->FSbase, pCtx->GSbase, pCtx->efer);
3167#endif
3168}
3169
3170/**
3171 * Structure used when disassembling and instructions in DBGF.
3172 * This is used so the reader function can get the stuff it needs.
3173 */
3174typedef struct CPUMDISASSTATE
3175{
3176 /** Pointer to the CPU structure. */
3177 PDISCPUSTATE pCpu;
3178 /** Pointer to the VM. */
3179 PVM pVM;
3180 /** Pointer to the VMCPU. */
3181 PVMCPU pVCpu;
3182 /** Pointer to the first byte in the segment. */
3183 RTGCUINTPTR GCPtrSegBase;
3184 /** Pointer to the byte after the end of the segment. (might have wrapped!) */
3185 RTGCUINTPTR GCPtrSegEnd;
3186 /** The size of the segment minus 1. */
3187 RTGCUINTPTR cbSegLimit;
3188 /** Pointer to the current page - R3 Ptr. */
3189 void const *pvPageR3;
3190 /** Pointer to the current page - GC Ptr. */
3191 RTGCPTR pvPageGC;
3192 /** The lock information that PGMPhysReleasePageMappingLock needs. */
3193 PGMPAGEMAPLOCK PageMapLock;
3194 /** Whether the PageMapLock is valid or not. */
3195 bool fLocked;
3196 /** 64 bits mode or not. */
3197 bool f64Bits;
3198} CPUMDISASSTATE, *PCPUMDISASSTATE;
3199
3200
3201/**
3202 * @callback_method_impl{FNDISREADBYTES}
3203 */
3204static DECLCALLBACK(int) cpumR3DisasInstrRead(PDISCPUSTATE pDis, uint8_t offInstr, uint8_t cbMinRead, uint8_t cbMaxRead)
3205{
3206 PCPUMDISASSTATE pState = (PCPUMDISASSTATE)pDis->pvUser;
3207 for (;;)
3208 {
3209 RTGCUINTPTR GCPtr = pDis->uInstrAddr + offInstr + pState->GCPtrSegBase;
3210
3211 /*
3212 * Need to update the page translation?
3213 */
3214 if ( !pState->pvPageR3
3215 || (GCPtr >> PAGE_SHIFT) != (pState->pvPageGC >> PAGE_SHIFT))
3216 {
3217 int rc = VINF_SUCCESS;
3218
3219 /* translate the address */
3220 pState->pvPageGC = GCPtr & PAGE_BASE_GC_MASK;
3221 if ( VM_IS_RAW_MODE_ENABLED(pState->pVM)
3222 && MMHyperIsInsideArea(pState->pVM, pState->pvPageGC))
3223 {
3224 pState->pvPageR3 = MMHyperRCToR3(pState->pVM, (RTRCPTR)pState->pvPageGC);
3225 if (!pState->pvPageR3)
3226 rc = VERR_INVALID_POINTER;
3227 }
3228 else
3229 {
3230 /* Release mapping lock previously acquired. */
3231 if (pState->fLocked)
3232 PGMPhysReleasePageMappingLock(pState->pVM, &pState->PageMapLock);
3233 rc = PGMPhysGCPtr2CCPtrReadOnly(pState->pVCpu, pState->pvPageGC, &pState->pvPageR3, &pState->PageMapLock);
3234 pState->fLocked = RT_SUCCESS_NP(rc);
3235 }
3236 if (RT_FAILURE(rc))
3237 {
3238 pState->pvPageR3 = NULL;
3239 return rc;
3240 }
3241 }
3242
3243 /*
3244 * Check the segment limit.
3245 */
3246 if (!pState->f64Bits && pDis->uInstrAddr + offInstr > pState->cbSegLimit)
3247 return VERR_OUT_OF_SELECTOR_BOUNDS;
3248
3249 /*
3250 * Calc how much we can read.
3251 */
3252 uint32_t cb = PAGE_SIZE - (GCPtr & PAGE_OFFSET_MASK);
3253 if (!pState->f64Bits)
3254 {
3255 RTGCUINTPTR cbSeg = pState->GCPtrSegEnd - GCPtr;
3256 if (cb > cbSeg && cbSeg)
3257 cb = cbSeg;
3258 }
3259 if (cb > cbMaxRead)
3260 cb = cbMaxRead;
3261
3262 /*
3263 * Read and advance or exit.
3264 */
3265 memcpy(&pDis->abInstr[offInstr], (uint8_t *)pState->pvPageR3 + (GCPtr & PAGE_OFFSET_MASK), cb);
3266 offInstr += (uint8_t)cb;
3267 if (cb >= cbMinRead)
3268 {
3269 pDis->cbCachedInstr = offInstr;
3270 return VINF_SUCCESS;
3271 }
3272 cbMinRead -= (uint8_t)cb;
3273 cbMaxRead -= (uint8_t)cb;
3274 }
3275}
3276
3277
3278/**
3279 * Disassemble an instruction and return the information in the provided structure.
3280 *
3281 * @returns VBox status code.
3282 * @param pVM The cross context VM structure.
3283 * @param pVCpu The cross context virtual CPU structure.
3284 * @param pCtx Pointer to the guest CPU context.
3285 * @param GCPtrPC Program counter (relative to CS) to disassemble from.
3286 * @param pCpu Disassembly state.
3287 * @param pszPrefix String prefix for logging (debug only).
3288 *
3289 */
3290VMMR3DECL(int) CPUMR3DisasmInstrCPU(PVM pVM, PVMCPU pVCpu, PCPUMCTX pCtx, RTGCPTR GCPtrPC, PDISCPUSTATE pCpu,
3291 const char *pszPrefix)
3292{
3293 CPUMDISASSTATE State;
3294 int rc;
3295
3296 const PGMMODE enmMode = PGMGetGuestMode(pVCpu);
3297 State.pCpu = pCpu;
3298 State.pvPageGC = 0;
3299 State.pvPageR3 = NULL;
3300 State.pVM = pVM;
3301 State.pVCpu = pVCpu;
3302 State.fLocked = false;
3303 State.f64Bits = false;
3304
3305 /*
3306 * Get selector information.
3307 */
3308 DISCPUMODE enmDisCpuMode;
3309 if ( (pCtx->cr0 & X86_CR0_PE)
3310 && pCtx->eflags.Bits.u1VM == 0)
3311 {
3312 if (!CPUMSELREG_ARE_HIDDEN_PARTS_VALID(pVCpu, &pCtx->cs))
3313 {
3314# ifdef VBOX_WITH_RAW_MODE_NOT_R0
3315 CPUMGuestLazyLoadHiddenSelectorReg(pVCpu, &pCtx->cs);
3316# endif
3317 if (!CPUMSELREG_ARE_HIDDEN_PARTS_VALID(pVCpu, &pCtx->cs))
3318 return VERR_CPUM_HIDDEN_CS_LOAD_ERROR;
3319 }
3320 State.f64Bits = enmMode >= PGMMODE_AMD64 && pCtx->cs.Attr.n.u1Long;
3321 State.GCPtrSegBase = pCtx->cs.u64Base;
3322 State.GCPtrSegEnd = pCtx->cs.u32Limit + 1 + (RTGCUINTPTR)pCtx->cs.u64Base;
3323 State.cbSegLimit = pCtx->cs.u32Limit;
3324 enmDisCpuMode = (State.f64Bits)
3325 ? DISCPUMODE_64BIT
3326 : pCtx->cs.Attr.n.u1DefBig
3327 ? DISCPUMODE_32BIT
3328 : DISCPUMODE_16BIT;
3329 }
3330 else
3331 {
3332 /* real or V86 mode */
3333 enmDisCpuMode = DISCPUMODE_16BIT;
3334 State.GCPtrSegBase = pCtx->cs.Sel * 16;
3335 State.GCPtrSegEnd = 0xFFFFFFFF;
3336 State.cbSegLimit = 0xFFFFFFFF;
3337 }
3338
3339 /*
3340 * Disassemble the instruction.
3341 */
3342 uint32_t cbInstr;
3343#ifndef LOG_ENABLED
3344 RT_NOREF_PV(pszPrefix);
3345 rc = DISInstrWithReader(GCPtrPC, enmDisCpuMode, cpumR3DisasInstrRead, &State, pCpu, &cbInstr);
3346 if (RT_SUCCESS(rc))
3347 {
3348#else
3349 char szOutput[160];
3350 rc = DISInstrToStrWithReader(GCPtrPC, enmDisCpuMode, cpumR3DisasInstrRead, &State,
3351 pCpu, &cbInstr, szOutput, sizeof(szOutput));
3352 if (RT_SUCCESS(rc))
3353 {
3354 /* log it */
3355 if (pszPrefix)
3356 Log(("%s-CPU%d: %s", pszPrefix, pVCpu->idCpu, szOutput));
3357 else
3358 Log(("%s", szOutput));
3359#endif
3360 rc = VINF_SUCCESS;
3361 }
3362 else
3363 Log(("CPUMR3DisasmInstrCPU: DISInstr failed for %04X:%RGv rc=%Rrc\n", pCtx->cs.Sel, GCPtrPC, rc));
3364
3365 /* Release mapping lock acquired in cpumR3DisasInstrRead. */
3366 if (State.fLocked)
3367 PGMPhysReleasePageMappingLock(pVM, &State.PageMapLock);
3368
3369 return rc;
3370}
3371
3372
3373
3374/**
3375 * API for controlling a few of the CPU features found in CR4.
3376 *
3377 * Currently only X86_CR4_TSD is accepted as input.
3378 *
3379 * @returns VBox status code.
3380 *
3381 * @param pVM The cross context VM structure.
3382 * @param fOr The CR4 OR mask.
3383 * @param fAnd The CR4 AND mask.
3384 */
3385VMMR3DECL(int) CPUMR3SetCR4Feature(PVM pVM, RTHCUINTREG fOr, RTHCUINTREG fAnd)
3386{
3387 AssertMsgReturn(!(fOr & ~(X86_CR4_TSD)), ("%#x\n", fOr), VERR_INVALID_PARAMETER);
3388 AssertMsgReturn((fAnd & ~(X86_CR4_TSD)) == ~(X86_CR4_TSD), ("%#x\n", fAnd), VERR_INVALID_PARAMETER);
3389
3390 pVM->cpum.s.CR4.OrMask &= fAnd;
3391 pVM->cpum.s.CR4.OrMask |= fOr;
3392
3393 return VINF_SUCCESS;
3394}
3395
3396
3397/**
3398 * Enters REM, gets and resets the changed flags (CPUM_CHANGED_*).
3399 *
3400 * Only REM should ever call this function!
3401 *
3402 * @returns The changed flags.
3403 * @param pVCpu The cross context virtual CPU structure.
3404 * @param puCpl Where to return the current privilege level (CPL).
3405 */
3406VMMR3DECL(uint32_t) CPUMR3RemEnter(PVMCPU pVCpu, uint32_t *puCpl)
3407{
3408 Assert(!pVCpu->cpum.s.fRawEntered);
3409 Assert(!pVCpu->cpum.s.fRemEntered);
3410
3411 /*
3412 * Get the CPL first.
3413 */
3414 *puCpl = CPUMGetGuestCPL(pVCpu);
3415
3416 /*
3417 * Get and reset the flags.
3418 */
3419 uint32_t fFlags = pVCpu->cpum.s.fChanged;
3420 pVCpu->cpum.s.fChanged = 0;
3421
3422 /** @todo change the switcher to use the fChanged flags. */
3423 if (pVCpu->cpum.s.fUseFlags & CPUM_USED_FPU_SINCE_REM)
3424 {
3425 fFlags |= CPUM_CHANGED_FPU_REM;
3426 pVCpu->cpum.s.fUseFlags &= ~CPUM_USED_FPU_SINCE_REM;
3427 }
3428
3429 pVCpu->cpum.s.fRemEntered = true;
3430 return fFlags;
3431}
3432
3433
3434/**
3435 * Leaves REM.
3436 *
3437 * @param pVCpu The cross context virtual CPU structure.
3438 * @param fNoOutOfSyncSels This is @c false if there are out of sync
3439 * registers.
3440 */
3441VMMR3DECL(void) CPUMR3RemLeave(PVMCPU pVCpu, bool fNoOutOfSyncSels)
3442{
3443 Assert(!pVCpu->cpum.s.fRawEntered);
3444 Assert(pVCpu->cpum.s.fRemEntered);
3445
3446 RT_NOREF_PV(fNoOutOfSyncSels);
3447
3448 pVCpu->cpum.s.fRemEntered = false;
3449}
3450
3451
3452/**
3453 * Called when the ring-3 init phase completes.
3454 *
3455 * @returns VBox status code.
3456 * @param pVM The cross context VM structure.
3457 * @param enmWhat Which init phase.
3458 */
3459VMMR3DECL(int) CPUMR3InitCompleted(PVM pVM, VMINITCOMPLETED enmWhat)
3460{
3461 switch (enmWhat)
3462 {
3463 case VMINITCOMPLETED_RING3:
3464 {
3465 /*
3466 * Figure out if the guest uses 32-bit or 64-bit FPU state at runtime for 64-bit capable VMs.
3467 * Only applicable/used on 64-bit hosts, refer CPUMR0A.asm. See @bugref{7138}.
3468 */
3469 bool const fSupportsLongMode = VMR3IsLongModeAllowed(pVM);
3470 for (VMCPUID i = 0; i < pVM->cCpus; i++)
3471 {
3472 PVMCPU pVCpu = &pVM->aCpus[i];
3473 /* While loading a saved-state we fix it up in, cpumR3LoadDone(). */
3474 if (fSupportsLongMode)
3475 pVCpu->cpum.s.fUseFlags |= CPUM_USE_SUPPORTS_LONGMODE;
3476 }
3477
3478 cpumR3MsrRegStats(pVM);
3479 break;
3480 }
3481
3482 case VMINITCOMPLETED_HM:
3483 {
3484 /*
3485 * Currently, nested VMX/SVM both derives their guest VMX/SVM CPUID bit from the host
3486 * CPUID bit. This could be later changed if we need to support nested-VMX on CPUs
3487 * that are not capable of VMX.
3488 */
3489 if (pVM->cpum.s.GuestFeatures.fVmx)
3490 {
3491 Assert( pVM->cpum.s.GuestFeatures.enmCpuVendor == CPUMCPUVENDOR_INTEL
3492 || pVM->cpum.s.GuestFeatures.enmCpuVendor == CPUMCPUVENDOR_VIA);
3493 cpumR3InitVmxCpuFeatures(pVM);
3494 DBGFR3Info(pVM->pUVM, "cpumvmxfeat", "default", DBGFR3InfoLogRelHlp());
3495 }
3496
3497 if (pVM->cpum.s.GuestFeatures.fVmx)
3498 LogRel(("CPUM: Enabled guest VMX support\n"));
3499 else if (pVM->cpum.s.GuestFeatures.fSvm)
3500 LogRel(("CPUM: Enabled guest SVM support\n"));
3501 break;
3502 }
3503
3504 default:
3505 break;
3506 }
3507 return VINF_SUCCESS;
3508}
3509
3510
3511/**
3512 * Called when the ring-0 init phases completed.
3513 *
3514 * @param pVM The cross context VM structure.
3515 */
3516VMMR3DECL(void) CPUMR3LogCpuIds(PVM pVM)
3517{
3518 /*
3519 * Log the cpuid.
3520 */
3521 bool fOldBuffered = RTLogRelSetBuffering(true /*fBuffered*/);
3522 RTCPUSET OnlineSet;
3523 LogRel(("CPUM: Logical host processors: %u present, %u max, %u online, online mask: %016RX64\n",
3524 (unsigned)RTMpGetPresentCount(), (unsigned)RTMpGetCount(), (unsigned)RTMpGetOnlineCount(),
3525 RTCpuSetToU64(RTMpGetOnlineSet(&OnlineSet)) ));
3526 RTCPUID cCores = RTMpGetCoreCount();
3527 if (cCores)
3528 LogRel(("CPUM: Physical host cores: %u\n", (unsigned)cCores));
3529 LogRel(("************************* CPUID dump ************************\n"));
3530 DBGFR3Info(pVM->pUVM, "cpuid", "verbose", DBGFR3InfoLogRelHlp());
3531 LogRel(("\n"));
3532 DBGFR3_INFO_LOG_SAFE(pVM, "cpuid", "verbose"); /* macro */
3533 RTLogRelSetBuffering(fOldBuffered);
3534 LogRel(("******************** End of CPUID dump **********************\n"));
3535}
3536
Note: See TracBrowser for help on using the repository browser.

© 2024 Oracle Support Privacy / Do Not Sell My Info Terms of Use Trademark Policy Automated Access Etiquette