VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/CPUM.cpp@75767

Last change on this file since 75767 was 75611, checked in by vboxsync, 6 years ago

VMM: Nested VMX: bugref:9180 Move the VMX APIC-access guest-physical page registration into IEM and get rid of the CPUM all-context code; it does not quite fit there, and since we still have to declare the prototypes in the HM headers anyway, just keep it in HM all-context code for now.

1/* $Id: CPUM.cpp 75611 2018-11-20 11:20:25Z vboxsync $ */
2/** @file
3 * CPUM - CPU Monitor / Manager.
4 */
5
6/*
7 * Copyright (C) 2006-2017 Oracle Corporation
8 *
9 * This file is part of VirtualBox Open Source Edition (OSE), as
10 * available from http://www.virtualbox.org. This file is free software;
11 * you can redistribute it and/or modify it under the terms of the GNU
12 * General Public License (GPL) as published by the Free Software
13 * Foundation, in version 2 as it comes in the "COPYING" file of the
14 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
15 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
16 */
17
18/** @page pg_cpum CPUM - CPU Monitor / Manager
19 *
20 * The CPU Monitor / Manager keeps track of all the CPU registers. It is
21 * also responsible for lazy FPU handling and some of the context loading
22 * in raw mode.
23 *
24 * There are three CPU contexts; the most important one is the guest context (GC).
25 * When running in raw-mode (RC) there is a special hyper context for the VMM
26 * part that floats around inside the guest address space. When running in
27 * raw-mode, CPUM also maintains a host context for saving and restoring
28 * registers across world switches. The latter is done in cooperation with the
29 * world switcher (@see pg_vmm).
30 *
31 * @see grp_cpum
32 *
33 * @section sec_cpum_fpu FPU / SSE / AVX / ++ state.
34 *
35 * TODO: proper write-up; currently just some notes.
36 *
37 * The ring-0 FPU handling per OS:
38 *
39 * - 64-bit Windows uses XMM registers in the kernel as part of the calling
40 * convention (Visual C++ doesn't seem to have a way to disable
41 * generating such code either), so CR0.TS/EM are always zero from what I
42 * can tell. We are also forced to always load/save the guest XMM0-XMM15
43 * registers when entering/leaving guest context. Interrupt handlers
44 * using FPU/SSE are officially required to call save and restore functions
45 * exported by the kernel, if they really have to use the state.
46 *
47 * - 32-bit Windows does lazy FPU handling, I think, probably including
48 * lazy saving. The Windows Internals book states that it's a bad
49 * idea to use the FPU in kernel space. However, it looks like it will
50 * restore the FPU state of the current thread in case of a kernel \#NM.
51 * Interrupt handlers should be same as for 64-bit.
52 *
53 * - Darwin allows taking \#NM in kernel space, restoring current thread's
54 * state if I read the code correctly. It saves the FPU state of the
55 * outgoing thread, and uses CR0.TS to lazily load the state of the
56 * incoming one. No idea yet how the FPU is treated by interrupt
57 * handlers, i.e. whether they are allowed to disable the state or
58 * something.
59 *
60 * - Linux also allows \#NM in kernel space (don't know since when), and
61 * uses CR0.TS for lazy loading. It saves the outgoing thread's state and
62 * lazily loads the incoming one unless configured to aggressively load it. Interrupt
63 * handlers can ask whether they're allowed to use the FPU, and may
64 * freely trash the state if Linux thinks it has saved the thread's state
65 * already. This is a problem.
66 *
67 * - Solaris will, from what I can tell, panic if it gets an \#NM in kernel
68 * context. When switching threads, the kernel will save the state of
69 * the outgoing thread and lazy load the incoming one using CR0.TS.
70 * There are a few routines in sseblk.s which use the SSE unit in ring-0
71 * to do stuff; HAT is among the users. The routines there will
72 * manually clear CR0.TS and save the XMM registers they use only if
73 * CR0.TS was zero upon entry. They will skip it when not, because as
74 * mentioned above, the FPU state is saved when switching away from a
75 * thread and CR0.TS set to 1, so when CR0.TS is 1 there is nothing to
76 * preserve. This is a problem if we restore CR0.TS to 1 after loading
77 * the guest state.
78 *
79 * - FreeBSD - no idea yet.
80 *
81 * - OS/2 does not allow \#NMs in kernel space IIRC. Does lazy loading,
82 * possibly also lazy saving. Interrupts must preserve the CR0.TS+EM &
83 * FPU states.
84 *
85 * Up to r107425 (2016-05-24) we would only temporarily modify CR0.TS/EM while
86 * saving and restoring the host and guest states. The motivation for this
87 * change is that we want to be able to emulate SSE instructions in ring-0 (IEM).
88 *
89 * Starting with that change, we will leave CR0.TS=EM=0 after saving the host
90 * state and only restore it once we've restored the host FPU state. This has the
91 * accidental side effect of triggering Solaris to preserve XMM registers in
92 * sseblk.s. Since CR0 is now changed while saving the FPU state, CPUM must
93 * inform the VT-x (HMVMX) code about it, as it caches the CR0 value in the VMCS.
94 *
95 *
96 * @section sec_cpum_logging Logging Level Assignments.
97 *
98 * The following log level assignments are used:
99 * - Log6 is used for FPU state management.
100 * - Log7 is used for FPU state actualization.
101 *
102 */
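/*
 * A minimal sketch (not VirtualBox code) of the classic CR0.TS-based lazy FPU
 * switching described in the notes above. fpu_save(), fpu_restore(), clts()
 * and stts() are hypothetical stand-ins for the OS-specific primitives
 * (FXSAVE/FXRSTOR wrappers and CR0.TS manipulation).
 */
typedef struct MYTHREAD
{
    unsigned char abFpuState[512];  /* FXSAVE image; 16-byte aligned in practice. */
} MYTHREAD;

extern void fpu_save(void *pvFxSaveImage);     /* hypothetical: FXSAVE */
extern void fpu_restore(const void *pvImage);  /* hypothetical: FXRSTOR */
extern void clts(void);                        /* hypothetical: clear CR0.TS */
extern void stts(void);                        /* hypothetical: set CR0.TS */

static MYTHREAD *g_pFpuOwner; /* The thread whose state currently lives in the FPU. */

/* Context switch: save the owner's state and arm the lazy-load trap. */
static void mySwitchFpu(MYTHREAD *pOutgoing)
{
    if (g_pFpuOwner == pOutgoing)
        fpu_save(pOutgoing->abFpuState);
    stts(); /* The next FPU/SSE instruction raises #NM (trap 7). */
}

/* #NM handler: load the faulting thread's state on its first FPU use. */
static void myTrap07DeviceNotAvailable(MYTHREAD *pCurrent)
{
    clts(); /* Allow FPU instructions again. */
    fpu_restore(pCurrent->abFpuState);
    g_pFpuOwner = pCurrent;
}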
103
104
105/*********************************************************************************************************************************
106* Header Files *
107*********************************************************************************************************************************/
108#define LOG_GROUP LOG_GROUP_CPUM
109#include <VBox/vmm/cpum.h>
110#include <VBox/vmm/cpumdis.h>
111#include <VBox/vmm/cpumctx-v1_6.h>
112#include <VBox/vmm/pgm.h>
113#include <VBox/vmm/apic.h>
114#include <VBox/vmm/mm.h>
115#include <VBox/vmm/em.h>
116#include <VBox/vmm/iem.h>
117#include <VBox/vmm/selm.h>
118#include <VBox/vmm/dbgf.h>
119#include <VBox/vmm/patm.h>
120#include <VBox/vmm/hm.h>
121#include <VBox/vmm/ssm.h>
122#include "CPUMInternal.h"
123#include <VBox/vmm/vm.h>
124
125#include <VBox/param.h>
126#include <VBox/dis.h>
127#include <VBox/err.h>
128#include <VBox/log.h>
129#include <iprt/asm-amd64-x86.h>
130#include <iprt/assert.h>
131#include <iprt/cpuset.h>
132#include <iprt/mem.h>
133#include <iprt/mp.h>
134#include <iprt/string.h>
135
136
137/*********************************************************************************************************************************
138* Defined Constants And Macros *
139*********************************************************************************************************************************/
140/**
141 * This was used in the saved state up to the early life of version 14.
142 *
143 * It indicates that we may have some out-of-sync hidden segment registers.
144 * It is only relevant for raw-mode.
145 */
146#define CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID RT_BIT(12)
147
148
149/*********************************************************************************************************************************
150* Structures and Typedefs *
151*********************************************************************************************************************************/
152
153/**
154 * What kind of cpu info dump to perform.
155 */
156typedef enum CPUMDUMPTYPE
157{
158 CPUMDUMPTYPE_TERSE,
159 CPUMDUMPTYPE_DEFAULT,
160 CPUMDUMPTYPE_VERBOSE
161} CPUMDUMPTYPE;
162/** Pointer to a cpu info dump type. */
163typedef CPUMDUMPTYPE *PCPUMDUMPTYPE;
164
165
166/*********************************************************************************************************************************
167* Internal Functions *
168*********************************************************************************************************************************/
169static DECLCALLBACK(int) cpumR3LiveExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uPass);
170static DECLCALLBACK(int) cpumR3SaveExec(PVM pVM, PSSMHANDLE pSSM);
171static DECLCALLBACK(int) cpumR3LoadPrep(PVM pVM, PSSMHANDLE pSSM);
172static DECLCALLBACK(int) cpumR3LoadExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass);
173static DECLCALLBACK(int) cpumR3LoadDone(PVM pVM, PSSMHANDLE pSSM);
174static DECLCALLBACK(void) cpumR3InfoAll(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
175static DECLCALLBACK(void) cpumR3InfoGuest(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
176static DECLCALLBACK(void) cpumR3InfoGuestHwvirt(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
177static DECLCALLBACK(void) cpumR3InfoGuestInstr(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
178static DECLCALLBACK(void) cpumR3InfoHyper(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
179static DECLCALLBACK(void) cpumR3InfoHost(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
180
181
182/*********************************************************************************************************************************
183* Global Variables *
184*********************************************************************************************************************************/
185/** Saved state field descriptors for CPUMCTX. */
186static const SSMFIELD g_aCpumCtxFields[] =
187{
188 SSMFIELD_ENTRY( CPUMCTX, rdi),
189 SSMFIELD_ENTRY( CPUMCTX, rsi),
190 SSMFIELD_ENTRY( CPUMCTX, rbp),
191 SSMFIELD_ENTRY( CPUMCTX, rax),
192 SSMFIELD_ENTRY( CPUMCTX, rbx),
193 SSMFIELD_ENTRY( CPUMCTX, rdx),
194 SSMFIELD_ENTRY( CPUMCTX, rcx),
195 SSMFIELD_ENTRY( CPUMCTX, rsp),
196 SSMFIELD_ENTRY( CPUMCTX, rflags),
197 SSMFIELD_ENTRY( CPUMCTX, rip),
198 SSMFIELD_ENTRY( CPUMCTX, r8),
199 SSMFIELD_ENTRY( CPUMCTX, r9),
200 SSMFIELD_ENTRY( CPUMCTX, r10),
201 SSMFIELD_ENTRY( CPUMCTX, r11),
202 SSMFIELD_ENTRY( CPUMCTX, r12),
203 SSMFIELD_ENTRY( CPUMCTX, r13),
204 SSMFIELD_ENTRY( CPUMCTX, r14),
205 SSMFIELD_ENTRY( CPUMCTX, r15),
206 SSMFIELD_ENTRY( CPUMCTX, es.Sel),
207 SSMFIELD_ENTRY( CPUMCTX, es.ValidSel),
208 SSMFIELD_ENTRY( CPUMCTX, es.fFlags),
209 SSMFIELD_ENTRY( CPUMCTX, es.u64Base),
210 SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
211 SSMFIELD_ENTRY( CPUMCTX, es.Attr),
212 SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
213 SSMFIELD_ENTRY( CPUMCTX, cs.ValidSel),
214 SSMFIELD_ENTRY( CPUMCTX, cs.fFlags),
215 SSMFIELD_ENTRY( CPUMCTX, cs.u64Base),
216 SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
217 SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
218 SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
219 SSMFIELD_ENTRY( CPUMCTX, ss.ValidSel),
220 SSMFIELD_ENTRY( CPUMCTX, ss.fFlags),
221 SSMFIELD_ENTRY( CPUMCTX, ss.u64Base),
222 SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
223 SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
224 SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
225 SSMFIELD_ENTRY( CPUMCTX, ds.ValidSel),
226 SSMFIELD_ENTRY( CPUMCTX, ds.fFlags),
227 SSMFIELD_ENTRY( CPUMCTX, ds.u64Base),
228 SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
229 SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
230 SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
231 SSMFIELD_ENTRY( CPUMCTX, fs.ValidSel),
232 SSMFIELD_ENTRY( CPUMCTX, fs.fFlags),
233 SSMFIELD_ENTRY( CPUMCTX, fs.u64Base),
234 SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
235 SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
236 SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
237 SSMFIELD_ENTRY( CPUMCTX, gs.ValidSel),
238 SSMFIELD_ENTRY( CPUMCTX, gs.fFlags),
239 SSMFIELD_ENTRY( CPUMCTX, gs.u64Base),
240 SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
241 SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
242 SSMFIELD_ENTRY( CPUMCTX, cr0),
243 SSMFIELD_ENTRY( CPUMCTX, cr2),
244 SSMFIELD_ENTRY( CPUMCTX, cr3),
245 SSMFIELD_ENTRY( CPUMCTX, cr4),
246 SSMFIELD_ENTRY( CPUMCTX, dr[0]),
247 SSMFIELD_ENTRY( CPUMCTX, dr[1]),
248 SSMFIELD_ENTRY( CPUMCTX, dr[2]),
249 SSMFIELD_ENTRY( CPUMCTX, dr[3]),
250 SSMFIELD_ENTRY( CPUMCTX, dr[6]),
251 SSMFIELD_ENTRY( CPUMCTX, dr[7]),
252 SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
253 SSMFIELD_ENTRY( CPUMCTX, gdtr.pGdt),
254 SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
255 SSMFIELD_ENTRY( CPUMCTX, idtr.pIdt),
256 SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
257 SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
258 SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
259 SSMFIELD_ENTRY( CPUMCTX, msrEFER),
260 SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
261 SSMFIELD_ENTRY( CPUMCTX, msrPAT),
262 SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
263 SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
264 SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
265 SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
266 SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
267 SSMFIELD_ENTRY( CPUMCTX, ldtr.ValidSel),
268 SSMFIELD_ENTRY( CPUMCTX, ldtr.fFlags),
269 SSMFIELD_ENTRY( CPUMCTX, ldtr.u64Base),
270 SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
271 SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
272 SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
273 SSMFIELD_ENTRY( CPUMCTX, tr.ValidSel),
274 SSMFIELD_ENTRY( CPUMCTX, tr.fFlags),
275 SSMFIELD_ENTRY( CPUMCTX, tr.u64Base),
276 SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
277 SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
278 SSMFIELD_ENTRY_VER( CPUMCTX, aXcr[0], CPUM_SAVED_STATE_VERSION_XSAVE),
279 SSMFIELD_ENTRY_VER( CPUMCTX, aXcr[1], CPUM_SAVED_STATE_VERSION_XSAVE),
280 SSMFIELD_ENTRY_VER( CPUMCTX, fXStateMask, CPUM_SAVED_STATE_VERSION_XSAVE),
281 SSMFIELD_ENTRY_TERM()
282};
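/*
 * The SSMFIELD tables in this file drive generic, versioned structure
 * serialization: each entry records a member's offset and size so the
 * saved-state module can walk a structure without hand-written save/load
 * code per field. A simplified illustration of the idea (MYFIELD,
 * MYFIELD_ENTRY and mySaveStruct are illustrative, not the real SSM API):
 */
#include <stddef.h>

typedef struct MYFIELD
{
    size_t      off;      /* Offset of the member within the structure. */
    size_t      cb;       /* Size of the member in bytes. */
    const char *pszName;  /* Member name, useful for logging/validation. */
} MYFIELD;

#define MYFIELD_ENTRY(a_Type, a_Member) \
    { offsetof(a_Type, a_Member), sizeof(((a_Type *)0)->a_Member), #a_Member }

static void mySaveStruct(void (*pfnPutMem)(const void *pv, size_t cb),
                         const void *pvStruct, const MYFIELD *paFields, size_t cFields)
{
    for (size_t i = 0; i < cFields; i++)
        pfnPutMem((const char *)pvStruct + paFields[i].off, paFields[i].cb);
}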
283
284/** Saved state field descriptors for SVM nested hardware-virtualization
285 * Host State. */
286static const SSMFIELD g_aSvmHwvirtHostState[] =
287{
288 SSMFIELD_ENTRY( SVMHOSTSTATE, uEferMsr),
289 SSMFIELD_ENTRY( SVMHOSTSTATE, uCr0),
290 SSMFIELD_ENTRY( SVMHOSTSTATE, uCr4),
291 SSMFIELD_ENTRY( SVMHOSTSTATE, uCr3),
292 SSMFIELD_ENTRY( SVMHOSTSTATE, uRip),
293 SSMFIELD_ENTRY( SVMHOSTSTATE, uRsp),
294 SSMFIELD_ENTRY( SVMHOSTSTATE, uRax),
295 SSMFIELD_ENTRY( SVMHOSTSTATE, rflags),
296 SSMFIELD_ENTRY( SVMHOSTSTATE, es.Sel),
297 SSMFIELD_ENTRY( SVMHOSTSTATE, es.ValidSel),
298 SSMFIELD_ENTRY( SVMHOSTSTATE, es.fFlags),
299 SSMFIELD_ENTRY( SVMHOSTSTATE, es.u64Base),
300 SSMFIELD_ENTRY( SVMHOSTSTATE, es.u32Limit),
301 SSMFIELD_ENTRY( SVMHOSTSTATE, es.Attr),
302 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.Sel),
303 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.ValidSel),
304 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.fFlags),
305 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.u64Base),
306 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.u32Limit),
307 SSMFIELD_ENTRY( SVMHOSTSTATE, cs.Attr),
308 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.Sel),
309 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.ValidSel),
310 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.fFlags),
311 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.u64Base),
312 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.u32Limit),
313 SSMFIELD_ENTRY( SVMHOSTSTATE, ss.Attr),
314 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.Sel),
315 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.ValidSel),
316 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.fFlags),
317 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.u64Base),
318 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.u32Limit),
319 SSMFIELD_ENTRY( SVMHOSTSTATE, ds.Attr),
320 SSMFIELD_ENTRY( SVMHOSTSTATE, gdtr.cbGdt),
321 SSMFIELD_ENTRY( SVMHOSTSTATE, gdtr.pGdt),
322 SSMFIELD_ENTRY( SVMHOSTSTATE, idtr.cbIdt),
323 SSMFIELD_ENTRY( SVMHOSTSTATE, idtr.pIdt),
324 SSMFIELD_ENTRY_IGNORE(SVMHOSTSTATE, abPadding),
325 SSMFIELD_ENTRY_TERM()
326};
327
328/** Saved state field descriptors for X86FXSTATE. */
329static const SSMFIELD g_aCpumX87Fields[] =
330{
331 SSMFIELD_ENTRY( X86FXSTATE, FCW),
332 SSMFIELD_ENTRY( X86FXSTATE, FSW),
333 SSMFIELD_ENTRY( X86FXSTATE, FTW),
334 SSMFIELD_ENTRY( X86FXSTATE, FOP),
335 SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
336 SSMFIELD_ENTRY( X86FXSTATE, CS),
337 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
338 SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
339 SSMFIELD_ENTRY( X86FXSTATE, DS),
340 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
341 SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
342 SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
343 SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
344 SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
345 SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
346 SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
347 SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
348 SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
349 SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
350 SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
351 SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
352 SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
353 SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
354 SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
355 SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
356 SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
357 SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
358 SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
359 SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
360 SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
361 SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
362 SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
363 SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
364 SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
365 SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
366 SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
367 SSMFIELD_ENTRY_VER( X86FXSTATE, au32RsrvdForSoftware[0], CPUM_SAVED_STATE_VERSION_XSAVE), /* 32-bit/64-bit hack */
368 SSMFIELD_ENTRY_TERM()
369};
370
371/** Saved state field descriptors for X86XSAVEHDR. */
372static const SSMFIELD g_aCpumXSaveHdrFields[] =
373{
374 SSMFIELD_ENTRY( X86XSAVEHDR, bmXState),
375 SSMFIELD_ENTRY_TERM()
376};
377
378/** Saved state field descriptors for X86XSAVEYMMHI. */
379static const SSMFIELD g_aCpumYmmHiFields[] =
380{
381 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[0]),
382 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[1]),
383 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[2]),
384 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[3]),
385 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[4]),
386 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[5]),
387 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[6]),
388 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[7]),
389 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[8]),
390 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[9]),
391 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[10]),
392 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[11]),
393 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[12]),
394 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[13]),
395 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[14]),
396 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[15]),
397 SSMFIELD_ENTRY_TERM()
398};
399
400/** Saved state field descriptors for X86XSAVEBNDREGS. */
401static const SSMFIELD g_aCpumBndRegsFields[] =
402{
403 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[0]),
404 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[1]),
405 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[2]),
406 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[3]),
407 SSMFIELD_ENTRY_TERM()
408};
409
410/** Saved state field descriptors for X86XSAVEBNDCFG. */
411static const SSMFIELD g_aCpumBndCfgFields[] =
412{
413 SSMFIELD_ENTRY( X86XSAVEBNDCFG, fConfig),
414 SSMFIELD_ENTRY( X86XSAVEBNDCFG, fStatus),
415 SSMFIELD_ENTRY_TERM()
416};
417
418#if 0 /** @todo */
419/** Saved state field descriptors for X86XSAVEOPMASK. */
420static const SSMFIELD g_aCpumOpmaskFields[] =
421{
422 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[0]),
423 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[1]),
424 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[2]),
425 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[3]),
426 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[4]),
427 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[5]),
428 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[6]),
429 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[7]),
430 SSMFIELD_ENTRY_TERM()
431};
432#endif
433
434/** Saved state field descriptors for X86XSAVEZMMHI256. */
435static const SSMFIELD g_aCpumZmmHi256Fields[] =
436{
437 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[0]),
438 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[1]),
439 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[2]),
440 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[3]),
441 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[4]),
442 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[5]),
443 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[6]),
444 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[7]),
445 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[8]),
446 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[9]),
447 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[10]),
448 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[11]),
449 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[12]),
450 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[13]),
451 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[14]),
452 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[15]),
453 SSMFIELD_ENTRY_TERM()
454};
455
456/** Saved state field descriptors for X86XSAVEZMM16HI. */
457static const SSMFIELD g_aCpumZmm16HiFields[] =
458{
459 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[0]),
460 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[1]),
461 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[2]),
462 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[3]),
463 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[4]),
464 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[5]),
465 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[6]),
466 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[7]),
467 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[8]),
468 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[9]),
469 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[10]),
470 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[11]),
471 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[12]),
472 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[13]),
473 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[14]),
474 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[15]),
475 SSMFIELD_ENTRY_TERM()
476};
477
478
479
480/** Saved state field descriptors for CPUMCTX in V4.1 before the hidden selector
481 * registers changed. */
482static const SSMFIELD g_aCpumX87FieldsMem[] =
483{
484 SSMFIELD_ENTRY( X86FXSTATE, FCW),
485 SSMFIELD_ENTRY( X86FXSTATE, FSW),
486 SSMFIELD_ENTRY( X86FXSTATE, FTW),
487 SSMFIELD_ENTRY( X86FXSTATE, FOP),
488 SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
489 SSMFIELD_ENTRY( X86FXSTATE, CS),
490 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
491 SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
492 SSMFIELD_ENTRY( X86FXSTATE, DS),
493 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
494 SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
495 SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
496 SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
497 SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
498 SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
499 SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
500 SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
501 SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
502 SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
503 SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
504 SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
505 SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
506 SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
507 SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
508 SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
509 SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
510 SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
511 SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
512 SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
513 SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
514 SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
515 SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
516 SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
517 SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
518 SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
519 SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
520 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdRest),
521 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdForSoftware),
    SSMFIELD_ENTRY_TERM()
522};
523
524/** Saved state field descriptors for CPUMCTX in V4.1 before the hidden selector
525 * registers changed. */
526static const SSMFIELD g_aCpumCtxFieldsMem[] =
527{
528 SSMFIELD_ENTRY( CPUMCTX, rdi),
529 SSMFIELD_ENTRY( CPUMCTX, rsi),
530 SSMFIELD_ENTRY( CPUMCTX, rbp),
531 SSMFIELD_ENTRY( CPUMCTX, rax),
532 SSMFIELD_ENTRY( CPUMCTX, rbx),
533 SSMFIELD_ENTRY( CPUMCTX, rdx),
534 SSMFIELD_ENTRY( CPUMCTX, rcx),
535 SSMFIELD_ENTRY( CPUMCTX, rsp),
536 SSMFIELD_ENTRY_OLD( lss_esp, sizeof(uint32_t)),
537 SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
538 SSMFIELD_ENTRY_OLD( ssPadding, sizeof(uint16_t)),
539 SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
540 SSMFIELD_ENTRY_OLD( gsPadding, sizeof(uint16_t)),
541 SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
542 SSMFIELD_ENTRY_OLD( fsPadding, sizeof(uint16_t)),
543 SSMFIELD_ENTRY( CPUMCTX, es.Sel),
544 SSMFIELD_ENTRY_OLD( esPadding, sizeof(uint16_t)),
545 SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
546 SSMFIELD_ENTRY_OLD( dsPadding, sizeof(uint16_t)),
547 SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
548 SSMFIELD_ENTRY_OLD( csPadding, sizeof(uint16_t)*3),
549 SSMFIELD_ENTRY( CPUMCTX, rflags),
550 SSMFIELD_ENTRY( CPUMCTX, rip),
551 SSMFIELD_ENTRY( CPUMCTX, r8),
552 SSMFIELD_ENTRY( CPUMCTX, r9),
553 SSMFIELD_ENTRY( CPUMCTX, r10),
554 SSMFIELD_ENTRY( CPUMCTX, r11),
555 SSMFIELD_ENTRY( CPUMCTX, r12),
556 SSMFIELD_ENTRY( CPUMCTX, r13),
557 SSMFIELD_ENTRY( CPUMCTX, r14),
558 SSMFIELD_ENTRY( CPUMCTX, r15),
559 SSMFIELD_ENTRY( CPUMCTX, es.u64Base),
560 SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
561 SSMFIELD_ENTRY( CPUMCTX, es.Attr),
562 SSMFIELD_ENTRY( CPUMCTX, cs.u64Base),
563 SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
564 SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
565 SSMFIELD_ENTRY( CPUMCTX, ss.u64Base),
566 SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
567 SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
568 SSMFIELD_ENTRY( CPUMCTX, ds.u64Base),
569 SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
570 SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
571 SSMFIELD_ENTRY( CPUMCTX, fs.u64Base),
572 SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
573 SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
574 SSMFIELD_ENTRY( CPUMCTX, gs.u64Base),
575 SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
576 SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
577 SSMFIELD_ENTRY( CPUMCTX, cr0),
578 SSMFIELD_ENTRY( CPUMCTX, cr2),
579 SSMFIELD_ENTRY( CPUMCTX, cr3),
580 SSMFIELD_ENTRY( CPUMCTX, cr4),
581 SSMFIELD_ENTRY( CPUMCTX, dr[0]),
582 SSMFIELD_ENTRY( CPUMCTX, dr[1]),
583 SSMFIELD_ENTRY( CPUMCTX, dr[2]),
584 SSMFIELD_ENTRY( CPUMCTX, dr[3]),
585 SSMFIELD_ENTRY_OLD( dr[4], sizeof(uint64_t)),
586 SSMFIELD_ENTRY_OLD( dr[5], sizeof(uint64_t)),
587 SSMFIELD_ENTRY( CPUMCTX, dr[6]),
588 SSMFIELD_ENTRY( CPUMCTX, dr[7]),
589 SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
590 SSMFIELD_ENTRY( CPUMCTX, gdtr.pGdt),
591 SSMFIELD_ENTRY_OLD( gdtrPadding, sizeof(uint16_t)),
592 SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
593 SSMFIELD_ENTRY( CPUMCTX, idtr.pIdt),
594 SSMFIELD_ENTRY_OLD( idtrPadding, sizeof(uint16_t)),
595 SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
596 SSMFIELD_ENTRY_OLD( ldtrPadding, sizeof(uint16_t)),
597 SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
598 SSMFIELD_ENTRY_OLD( trPadding, sizeof(uint16_t)),
599 SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
600 SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
601 SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
602 SSMFIELD_ENTRY( CPUMCTX, msrEFER),
603 SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
604 SSMFIELD_ENTRY( CPUMCTX, msrPAT),
605 SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
606 SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
607 SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
608 SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
609 SSMFIELD_ENTRY( CPUMCTX, ldtr.u64Base),
610 SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
611 SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
612 SSMFIELD_ENTRY( CPUMCTX, tr.u64Base),
613 SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
614 SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
615 SSMFIELD_ENTRY_TERM()
616};
617
618/** Saved state field descriptors for CPUMCTX_VER1_6. */
619static const SSMFIELD g_aCpumX87FieldsV16[] =
620{
621 SSMFIELD_ENTRY( X86FXSTATE, FCW),
622 SSMFIELD_ENTRY( X86FXSTATE, FSW),
623 SSMFIELD_ENTRY( X86FXSTATE, FTW),
624 SSMFIELD_ENTRY( X86FXSTATE, FOP),
625 SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
626 SSMFIELD_ENTRY( X86FXSTATE, CS),
627 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
628 SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
629 SSMFIELD_ENTRY( X86FXSTATE, DS),
630 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
631 SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
632 SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
633 SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
634 SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
635 SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
636 SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
637 SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
638 SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
639 SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
640 SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
641 SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
642 SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
643 SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
644 SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
645 SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
646 SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
647 SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
648 SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
649 SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
650 SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
651 SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
652 SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
653 SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
654 SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
655 SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
656 SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
657 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdRest),
658 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdForSoftware),
659 SSMFIELD_ENTRY_TERM()
660};
661
662/** Saved state field descriptors for CPUMCTX_VER1_6. */
663static const SSMFIELD g_aCpumCtxFieldsV16[] =
664{
665 SSMFIELD_ENTRY( CPUMCTX, rdi),
666 SSMFIELD_ENTRY( CPUMCTX, rsi),
667 SSMFIELD_ENTRY( CPUMCTX, rbp),
668 SSMFIELD_ENTRY( CPUMCTX, rax),
669 SSMFIELD_ENTRY( CPUMCTX, rbx),
670 SSMFIELD_ENTRY( CPUMCTX, rdx),
671 SSMFIELD_ENTRY( CPUMCTX, rcx),
672 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, rsp),
673 SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
674 SSMFIELD_ENTRY_OLD( ssPadding, sizeof(uint16_t)),
675 SSMFIELD_ENTRY_OLD( CPUMCTX, sizeof(uint64_t) /*rsp_notused*/),
676 SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
677 SSMFIELD_ENTRY_OLD( gsPadding, sizeof(uint16_t)),
678 SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
679 SSMFIELD_ENTRY_OLD( fsPadding, sizeof(uint16_t)),
680 SSMFIELD_ENTRY( CPUMCTX, es.Sel),
681 SSMFIELD_ENTRY_OLD( esPadding, sizeof(uint16_t)),
682 SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
683 SSMFIELD_ENTRY_OLD( dsPadding, sizeof(uint16_t)),
684 SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
685 SSMFIELD_ENTRY_OLD( csPadding, sizeof(uint16_t)*3),
686 SSMFIELD_ENTRY( CPUMCTX, rflags),
687 SSMFIELD_ENTRY( CPUMCTX, rip),
688 SSMFIELD_ENTRY( CPUMCTX, r8),
689 SSMFIELD_ENTRY( CPUMCTX, r9),
690 SSMFIELD_ENTRY( CPUMCTX, r10),
691 SSMFIELD_ENTRY( CPUMCTX, r11),
692 SSMFIELD_ENTRY( CPUMCTX, r12),
693 SSMFIELD_ENTRY( CPUMCTX, r13),
694 SSMFIELD_ENTRY( CPUMCTX, r14),
695 SSMFIELD_ENTRY( CPUMCTX, r15),
696 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, es.u64Base),
697 SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
698 SSMFIELD_ENTRY( CPUMCTX, es.Attr),
699 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, cs.u64Base),
700 SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
701 SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
702 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ss.u64Base),
703 SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
704 SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
705 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ds.u64Base),
706 SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
707 SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
708 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, fs.u64Base),
709 SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
710 SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
711 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, gs.u64Base),
712 SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
713 SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
714 SSMFIELD_ENTRY( CPUMCTX, cr0),
715 SSMFIELD_ENTRY( CPUMCTX, cr2),
716 SSMFIELD_ENTRY( CPUMCTX, cr3),
717 SSMFIELD_ENTRY( CPUMCTX, cr4),
718 SSMFIELD_ENTRY_OLD( cr8, sizeof(uint64_t)),
719 SSMFIELD_ENTRY( CPUMCTX, dr[0]),
720 SSMFIELD_ENTRY( CPUMCTX, dr[1]),
721 SSMFIELD_ENTRY( CPUMCTX, dr[2]),
722 SSMFIELD_ENTRY( CPUMCTX, dr[3]),
723 SSMFIELD_ENTRY_OLD( dr[4], sizeof(uint64_t)),
724 SSMFIELD_ENTRY_OLD( dr[5], sizeof(uint64_t)),
725 SSMFIELD_ENTRY( CPUMCTX, dr[6]),
726 SSMFIELD_ENTRY( CPUMCTX, dr[7]),
727 SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
728 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, gdtr.pGdt),
729 SSMFIELD_ENTRY_OLD( gdtrPadding, sizeof(uint16_t)),
730 SSMFIELD_ENTRY_OLD( gdtrPadding64, sizeof(uint64_t)),
731 SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
732 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, idtr.pIdt),
733 SSMFIELD_ENTRY_OLD( idtrPadding, sizeof(uint16_t)),
734 SSMFIELD_ENTRY_OLD( idtrPadding64, sizeof(uint64_t)),
735 SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
736 SSMFIELD_ENTRY_OLD( ldtrPadding, sizeof(uint16_t)),
737 SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
738 SSMFIELD_ENTRY_OLD( trPadding, sizeof(uint16_t)),
739 SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
740 SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
741 SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
742 SSMFIELD_ENTRY( CPUMCTX, msrEFER),
743 SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
744 SSMFIELD_ENTRY( CPUMCTX, msrPAT),
745 SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
746 SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
747 SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
748 SSMFIELD_ENTRY_OLD( msrFSBASE, sizeof(uint64_t)),
749 SSMFIELD_ENTRY_OLD( msrGSBASE, sizeof(uint64_t)),
750 SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
751 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ldtr.u64Base),
752 SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
753 SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
754 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, tr.u64Base),
755 SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
756 SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
757 SSMFIELD_ENTRY_OLD( padding, sizeof(uint32_t)*2),
758 SSMFIELD_ENTRY_TERM()
759};
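/*
 * The SSMFIELD_ENTRY_U32_ZX_U64 entries above handle fields that were saved
 * as 32 bits in the version 1.6 layout but are 64 bits wide today: on load,
 * the 32-bit value is zero-extended into the 64-bit member. A sketch of that
 * load-side conversion (myLoadU32ZxU64 is illustrative, not the SSM API):
 */
#include <stdint.h>
#include <string.h>

static void myLoadU32ZxU64(const uint8_t *pbSavedState, uint64_t *pu64Dst)
{
    uint32_t u32;
    memcpy(&u32, pbSavedState, sizeof(u32)); /* 32 bits in the old saved state... */
    *pu64Dst = u32;                          /* ...zero-extended to the 64-bit field. */
}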
760
761
762/**
763 * Checks for partial/leaky FXSAVE/FXRSTOR handling on AMD CPUs.
764 *
765 * AMD K7, K8 and newer AMD CPUs do not save/restore the x87 error pointers
766 * (last instruction pointer, last data pointer, last opcode) except when the ES
767 * bit (Exception Summary) in x87 FSW (FPU Status Word) is set. Thus if we don't
768 * clear these registers, there is potential local FPU leakage from one process
769 * using the FPU to another.
770 *
771 * See AMD Instruction Reference for FXSAVE, FXRSTOR.
772 *
773 * @param pVM The cross context VM structure.
774 */
775static void cpumR3CheckLeakyFpu(PVM pVM)
776{
777 uint32_t u32CpuVersion = ASMCpuId_EAX(1);
778 uint32_t const u32Family = u32CpuVersion >> 8;
779 if ( u32Family >= 6 /* K7 and higher */
780 && ASMIsAmdCpu())
781 {
782 uint32_t cExt = ASMCpuId_EAX(0x80000000);
783 if (ASMIsValidExtRange(cExt))
784 {
785 uint32_t fExtFeaturesEDX = ASMCpuId_EDX(0x80000001);
786 if (fExtFeaturesEDX & X86_CPUID_AMD_FEATURE_EDX_FFXSR)
787 {
788 for (VMCPUID i = 0; i < pVM->cCpus; i++)
789 pVM->aCpus[i].cpum.s.fUseFlags |= CPUM_USE_FFXSR_LEAKY;
790 Log(("CPUMR3Init: host CPU has leaky fxsave/fxrstor behaviour\n"));
791 }
792 }
793 }
794}
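/*
 * A consumer of CPUM_USE_FFXSR_LEAKY would scrub the x87 error pointer fields
 * in the FXSAVE image after saving it, so stale FPUIP/FPUDP/CS/DS/FOP values
 * from another context cannot leak through a later FXRSTOR. A sketch of the
 * idea (this is illustrative, not the actual CPUM ring-0 code):
 */
static void myScrubX87ErrorPointers(X86FXSTATE *pFxState)
{
    if (!(pFxState->FSW & X86_FSW_ES)) /* Only when no exception summary is pending. */
    {
        pFxState->FOP    = 0;
        pFxState->FPUIP  = 0;
        pFxState->CS     = 0;
        pFxState->Rsrvd1 = 0; /* Upper half of the 64-bit FPU IP in long mode. */
        pFxState->FPUDP  = 0;
        pFxState->DS     = 0;
        pFxState->Rsrvd2 = 0; /* Upper half of the 64-bit FPU DP in long mode. */
    }
}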
795
796
797/**
798 * Frees memory allocated for the SVM hardware virtualization state.
799 *
800 * @param pVM The cross context VM structure.
801 */
802static void cpumR3FreeSvmHwVirtState(PVM pVM)
803{
804 Assert(pVM->cpum.ro.GuestFeatures.fSvm);
805 for (VMCPUID i = 0; i < pVM->cCpus; i++)
806 {
807 PVMCPU pVCpu = &pVM->aCpus[i];
808 if (pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3)
809 {
810 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3, SVM_VMCB_PAGES);
811 pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3 = NULL;
812 }
813 pVCpu->cpum.s.Guest.hwvirt.svm.HCPhysVmcb = NIL_RTHCPHYS;
814
815 if (pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3)
816 {
817 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3, SVM_MSRPM_PAGES);
818 pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3 = NULL;
819 }
820
821 if (pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3)
822 {
823 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3, SVM_IOPM_PAGES);
824 pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3 = NULL;
825 }
826 }
827}
828
829
830/**
831 * Allocates memory for the SVM hardware virtualization state.
832 *
833 * @returns VBox status code.
834 * @param pVM The cross context VM structure.
835 */
836static int cpumR3AllocSvmHwVirtState(PVM pVM)
837{
838 Assert(pVM->cpum.ro.GuestFeatures.fSvm);
839
840 int rc = VINF_SUCCESS;
841 LogRel(("CPUM: Allocating %u pages for the nested-guest SVM MSR and IO permission bitmaps\n",
842 pVM->cCpus * (SVM_MSRPM_PAGES + SVM_IOPM_PAGES)));
843 for (VMCPUID i = 0; i < pVM->cCpus; i++)
844 {
845 PVMCPU pVCpu = &pVM->aCpus[i];
846 pVCpu->cpum.s.Guest.hwvirt.enmHwvirt = CPUMHWVIRT_SVM;
847
848 /*
849 * Allocate the nested-guest VMCB.
850 */
851 SUPPAGE SupNstGstVmcbPage;
852 RT_ZERO(SupNstGstVmcbPage);
853 SupNstGstVmcbPage.Phys = NIL_RTHCPHYS;
854 Assert(SVM_VMCB_PAGES == 1);
855 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3);
856 rc = SUPR3PageAllocEx(SVM_VMCB_PAGES, 0 /* fFlags */, (void **)&pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3,
857 &pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR0, &SupNstGstVmcbPage);
858 if (RT_FAILURE(rc))
859 {
860 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pVmcbR3);
861 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VMCB\n", pVCpu->idCpu, SVM_VMCB_PAGES));
862 break;
863 }
864 pVCpu->cpum.s.Guest.hwvirt.svm.HCPhysVmcb = SupNstGstVmcbPage.Phys;
865
866 /*
867 * Allocate the MSRPM (MSR Permission bitmap).
868 */
869 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3);
870 rc = SUPR3PageAllocEx(SVM_MSRPM_PAGES, 0 /* fFlags */, &pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3,
871 &pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR0, NULL /* paPages */);
872 if (RT_FAILURE(rc))
873 {
874 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pvMsrBitmapR3);
875 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's MSR permission bitmap\n", pVCpu->idCpu,
876 SVM_MSRPM_PAGES));
877 break;
878 }
879
880 /*
881 * Allocate the IOPM (IO Permission bitmap).
882 */
883 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3);
884 rc = SUPR3PageAllocEx(SVM_IOPM_PAGES, 0 /* fFlags */, &pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3,
885 &pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR0, NULL /* paPages */);
886 if (RT_FAILURE(rc))
887 {
888 Assert(!pVCpu->cpum.s.Guest.hwvirt.svm.pvIoBitmapR3);
889 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's IO permission bitmap\n", pVCpu->idCpu,
890 SVM_IOPM_PAGES));
891 break;
892 }
893 }
894
895 /* On any failure, cleanup. */
896 if (RT_FAILURE(rc))
897 cpumR3FreeSvmHwVirtState(pVM);
898
899 return rc;
900}
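/*
 * Note the error-handling pattern above: each allocation failure breaks out
 * of the per-VCPU loop with rc set, and the single cpumR3FreeSvmHwVirtState()
 * call unwinds whatever was allocated so far. This works because the free
 * routine checks each pointer before freeing and nulls it afterwards, so
 * partially allocated states are released safely.
 */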
901
902
903/**
904 * Initializes (or re-initializes) per-VCPU SVM hardware virtualization state.
905 *
906 * @param pVCpu The cross context virtual CPU structure.
907 */
908DECLINLINE(void) cpumR3InitSvmHwVirtState(PVMCPU pVCpu)
909{
910 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
911 Assert(pCtx->hwvirt.enmHwvirt == CPUMHWVIRT_SVM);
912 Assert(pCtx->hwvirt.svm.CTX_SUFF(pVmcb));
913
914 memset(pCtx->hwvirt.svm.CTX_SUFF(pVmcb), 0, SVM_VMCB_PAGES << PAGE_SHIFT);
915 pCtx->hwvirt.svm.uMsrHSavePa = 0;
916 pCtx->hwvirt.svm.uPrevPauseTick = 0;
917}
918
919
920/**
921 * Frees memory allocated for the VMX hardware virtualization state.
922 *
923 * @param pVM The cross context VM structure.
924 */
925static void cpumR3FreeVmxHwVirtState(PVM pVM)
926{
927 Assert(pVM->cpum.ro.GuestFeatures.fVmx);
928 for (VMCPUID i = 0; i < pVM->cCpus; i++)
929 {
930 PVMCPU pVCpu = &pVM->aCpus[i];
931 if (pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR3)
932 {
933 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR3, VMX_V_VMCS_PAGES);
934 pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR3 = NULL;
935 }
936 if (pVCpu->cpum.s.Guest.hwvirt.vmx.pShadowVmcsR3)
937 {
938 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.vmx.pShadowVmcsR3, VMX_V_VMCS_PAGES);
939 pVCpu->cpum.s.Guest.hwvirt.vmx.pShadowVmcsR3 = NULL;
940 }
941 if (pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR3)
942 {
943 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR3, VMX_V_VIRT_APIC_PAGES);
944 pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR3 = NULL;
945 }
946 if (pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR3)
947 {
948 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR3, VMX_V_VMREAD_VMWRITE_BITMAP_PAGES);
949 pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR3 = NULL;
950 }
951 if (pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR3)
952 {
953 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR3, VMX_V_VMREAD_VMWRITE_BITMAP_PAGES);
954 pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR3 = NULL;
955 }
956 if (pVCpu->cpum.s.Guest.hwvirt.vmx.pAutoMsrAreaR3)
957 {
958 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.vmx.pAutoMsrAreaR3, VMX_V_AUTOMSR_AREA_PAGES);
959 pVCpu->cpum.s.Guest.hwvirt.vmx.pAutoMsrAreaR3 = NULL;
960 }
961 if (pVCpu->cpum.s.Guest.hwvirt.vmx.pvMsrBitmapR3)
962 {
963 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.vmx.pvMsrBitmapR3, VMX_V_MSR_BITMAP_PAGES);
964 pVCpu->cpum.s.Guest.hwvirt.vmx.pvMsrBitmapR3 = NULL;
965 }
966 if (pVCpu->cpum.s.Guest.hwvirt.vmx.pvIoBitmapR3)
967 {
968 SUPR3PageFreeEx(pVCpu->cpum.s.Guest.hwvirt.vmx.pvIoBitmapR3, VMX_V_IO_BITMAP_A_PAGES + VMX_V_IO_BITMAP_B_PAGES);
969 pVCpu->cpum.s.Guest.hwvirt.vmx.pvIoBitmapR3 = NULL;
970 }
971 }
972}
973
974
975/**
976 * Allocates memory for the VMX hardware virtualization state.
977 *
978 * @returns VBox status code.
979 * @param pVM The cross context VM structure.
980 */
981static int cpumR3AllocVmxHwVirtState(PVM pVM)
982{
983 int rc = VINF_SUCCESS;
984 LogRel(("CPUM: Allocating %u pages for the nested-guest VMCS and related structures\n",
985 pVM->cCpus * ( VMX_V_VMCS_PAGES + VMX_V_VIRT_APIC_PAGES + VMX_V_VMREAD_VMWRITE_BITMAP_PAGES * 2
986 + VMX_V_AUTOMSR_AREA_PAGES)));
987 for (VMCPUID i = 0; i < pVM->cCpus; i++)
988 {
989 PVMCPU pVCpu = &pVM->aCpus[i];
990 pVCpu->cpum.s.Guest.hwvirt.enmHwvirt = CPUMHWVIRT_VMX;
991
992 /*
993 * Allocate the nested-guest current VMCS.
994 */
995 Assert(VMX_V_VMCS_PAGES == 1);
996 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR3);
997 rc = SUPR3PageAllocEx(VMX_V_VMCS_PAGES, 0 /* fFlags */, (void **)&pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR3,
998 &pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR0, NULL /* paPages */);
999 if (RT_FAILURE(rc))
1000 {
1001 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pVmcsR3);
1002 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VMCS\n", pVCpu->idCpu, VMX_V_VMCS_PAGES));
1003 break;
1004 }
1005
1006 /*
1007 * Allocate the nested-guest shadow VMCS.
1008 */
1009 Assert(VMX_V_VMCS_PAGES == 1);
1010 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pShadowVmcsR3);
1011 rc = SUPR3PageAllocEx(VMX_V_VMCS_PAGES, 0 /* fFlags */, (void **)&pVCpu->cpum.s.Guest.hwvirt.vmx.pShadowVmcsR3,
1012 &pVCpu->cpum.s.Guest.hwvirt.vmx.pShadowVmcsR0, NULL /* paPages */);
1013 if (RT_FAILURE(rc))
1014 {
1015 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pShadowVmcsR3);
1016 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's shadow VMCS\n", pVCpu->idCpu, VMX_V_VMCS_PAGES));
1017 break;
1018 }
1019
1020 /*
1021 * Allocate the Virtual-APIC page.
1022 */
1023 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR3);
1024 rc = SUPR3PageAllocEx(VMX_V_VIRT_APIC_PAGES, 0 /* fFlags */, &pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR3,
1025 &pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR0, NULL /* paPages */);
1026 if (RT_FAILURE(rc))
1027 {
1028 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvVirtApicPageR3);
1029 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's Virtual-APIC page\n", pVCpu->idCpu,
1030 VMX_V_VIRT_APIC_PAGES));
1031 break;
1032 }
1033
1034 /*
1035 * Allocate the VMREAD-bitmap.
1036 */
1037 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR3);
1038 rc = SUPR3PageAllocEx(VMX_V_VMREAD_VMWRITE_BITMAP_PAGES, 0 /* fFlags */, &pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR3,
1039 &pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR0, NULL /* paPages */);
1040 if (RT_FAILURE(rc))
1041 {
1042 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmreadBitmapR3);
1043 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VMREAD-bitmap\n", pVCpu->idCpu,
1044 VMX_V_VMREAD_VMWRITE_BITMAP_PAGES));
1045 break;
1046 }
1047
1048 /*
1049 * Allocate the VMWRITE-bitmap.
1050 */
1051 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR3);
1052 rc = SUPR3PageAllocEx(VMX_V_VMREAD_VMWRITE_BITMAP_PAGES, 0 /* fFlags */,
1053 &pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR3,
1054 &pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR0, NULL /* paPages */);
1055 if (RT_FAILURE(rc))
1056 {
1057 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvVmwriteBitmapR3);
1058 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's VMWRITE-bitmap\n", pVCpu->idCpu,
1059 VMX_V_VMREAD_VMWRITE_BITMAP_PAGES));
1060 break;
1061 }
1062
1063 /*
1064 * Allocate the MSR auto-load/store area.
1065 */
1066 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pAutoMsrAreaR3);
1067 rc = SUPR3PageAllocEx(VMX_V_AUTOMSR_AREA_PAGES, 0 /* fFlags */, (void **)&pVCpu->cpum.s.Guest.hwvirt.vmx.pAutoMsrAreaR3,
1068 &pVCpu->cpum.s.Guest.hwvirt.vmx.pAutoMsrAreaR0, NULL /* paPages */);
1069 if (RT_FAILURE(rc))
1070 {
1071 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pAutoMsrAreaR3);
1072 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's auto-load/store MSR area\n", pVCpu->idCpu,
1073 VMX_V_AUTOMSR_AREA_PAGES));
1074 break;
1075 }
1076
1077 /*
1078 * Allocate the MSR bitmap.
1079 */
1080 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvMsrBitmapR3);
1081 rc = SUPR3PageAllocEx(VMX_V_MSR_BITMAP_PAGES, 0 /* fFlags */, (void **)&pVCpu->cpum.s.Guest.hwvirt.vmx.pvMsrBitmapR3,
1082 &pVCpu->cpum.s.Guest.hwvirt.vmx.pvMsrBitmapR0, NULL /* paPages */);
1083 if (RT_FAILURE(rc))
1084 {
1085 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvMsrBitmapR3);
1086 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's MSR bitmap\n", pVCpu->idCpu,
1087 VMX_V_MSR_BITMAP_PAGES));
1088 break;
1089 }
1090
1091 /*
1092 * Allocate the I/O bitmaps (A and B).
1093 */
1094 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvIoBitmapR3);
1095 rc = SUPR3PageAllocEx(VMX_V_IO_BITMAP_A_PAGES + VMX_V_IO_BITMAP_B_PAGES, 0 /* fFlags */,
1096 (void **)&pVCpu->cpum.s.Guest.hwvirt.vmx.pvIoBitmapR3,
1097 &pVCpu->cpum.s.Guest.hwvirt.vmx.pvIoBitmapR0, NULL /* paPages */);
1098 if (RT_FAILURE(rc))
1099 {
1100 Assert(!pVCpu->cpum.s.Guest.hwvirt.vmx.pvIoBitmapR3);
1101 LogRel(("CPUM%u: Failed to alloc %u pages for the nested-guest's I/O bitmaps\n", pVCpu->idCpu,
1102 VMX_V_IO_BITMAP_A_PAGES + VMX_V_IO_BITMAP_B_PAGES));
1103 break;
1104 }
1105 }
1106
1107 /* On any failure, cleanup. */
1108 if (RT_FAILURE(rc))
1109 cpumR3FreeVmxHwVirtState(pVM);
1110
1111 return rc;
1112}
1113
1114
1115/**
1116 * Initializes (or re-initializes) per-VCPU VMX hardware virtualization state.
1117 *
1118 * @param pVCpu The cross context virtual CPU structure.
1119 */
1120DECLINLINE(void) cpumR3InitVmxHwVirtState(PVMCPU pVCpu)
1121{
1122 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
1123 Assert(pCtx->hwvirt.enmHwvirt == CPUMHWVIRT_VMX);
1124 Assert(pCtx->hwvirt.vmx.CTX_SUFF(pVmcs));
1125 Assert(pCtx->hwvirt.vmx.CTX_SUFF(pShadowVmcs));
1126
1127 memset(pCtx->hwvirt.vmx.CTX_SUFF(pVmcs), 0, VMX_V_VMCS_SIZE);
1128 memset(pCtx->hwvirt.vmx.CTX_SUFF(pShadowVmcs), 0, VMX_V_VMCS_SIZE);
1129 pCtx->hwvirt.vmx.GCPhysVmxon = NIL_RTGCPHYS;
1130 pCtx->hwvirt.vmx.GCPhysShadowVmcs = NIL_RTGCPHYS;
1131 pCtx->hwvirt.vmx.GCPhysVmcs = NIL_RTGCPHYS;
1132 pCtx->hwvirt.vmx.fInVmxRootMode = false;
1133 pCtx->hwvirt.vmx.fInVmxNonRootMode = false;
1134 /* Don't reset diagnostics here. */
1135}
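/*
 * Note: CTX_SUFF, used above, appends the current context's suffix to a
 * member name, so pCtx->hwvirt.vmx.CTX_SUFF(pVmcs) resolves to pVmcsR3 when
 * this ring-3 code is compiled. A simplified rendering of the idea (the real
 * macro lives in VBox/cdefs.h and also covers the R0 and RC contexts):
 *
 *     #define CTX_SUFF(a_Name) a_Name##R3    // in IN_RING3 builds
 */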
1136
1137
1138/**
1139 * Displays the host and guest VMX features.
1140 *
1141 * @param pVM The cross context VM structure.
1142 * @param pHlp The info helper functions.
1143 * @param pszArgs "terse", "default" or "verbose".
1144 */
1145DECLCALLBACK(void) cpumR3InfoVmxFeatures(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
1146{
1147 RT_NOREF(pszArgs);
1148 PCCPUMFEATURES pHostFeatures = &pVM->cpum.s.HostFeatures;
1149 PCCPUMFEATURES pGuestFeatures = &pVM->cpum.s.GuestFeatures;
1150 if ( pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_INTEL
1151 || pHostFeatures->enmCpuVendor == CPUMCPUVENDOR_VIA)
1152 {
1153#define VMXFEATDUMP(a_szDesc, a_Var) \
1154 pHlp->pfnPrintf(pHlp, " %s = %u (%u)\n", a_szDesc, pGuestFeatures->a_Var, pHostFeatures->a_Var)
1155
1156 pHlp->pfnPrintf(pHlp, "Nested hardware virtualization - VMX features\n");
1157 pHlp->pfnPrintf(pHlp, " Mnemonic - Description = guest (host)\n");
1158 VMXFEATDUMP("VMX - Virtual-Machine Extensions ", fVmx);
1159 if (!pGuestFeatures->fVmx)
1160 return;
1161 /* Basic. */
1162 VMXFEATDUMP("InsOutInfo - INS/OUTS instruction info. ", fVmxInsOutInfo);
1163 /* Pin-based controls. */
1164 VMXFEATDUMP("ExtIntExit - External interrupt VM-exit ", fVmxExtIntExit);
1165 VMXFEATDUMP("NmiExit - NMI VM-exit ", fVmxNmiExit);
1166 VMXFEATDUMP("VirtNmi - Virtual NMIs ", fVmxVirtNmi);
1167 VMXFEATDUMP("PreemptTimer - VMX preemption timer ", fVmxPreemptTimer);
1168 VMXFEATDUMP("PostedInt - Posted interrupts ", fVmxPostedInt);
1169 /* Processor-based controls. */
1170 VMXFEATDUMP("IntWindowExit - Interrupt-window exiting ", fVmxIntWindowExit);
1171 VMXFEATDUMP("TscOffsetting - TSC offsetting ", fVmxTscOffsetting);
1172 VMXFEATDUMP("HltExit - HLT exiting ", fVmxHltExit);
1173 VMXFEATDUMP("InvlpgExit - INVLPG exiting ", fVmxInvlpgExit);
1174 VMXFEATDUMP("MwaitExit - MWAIT exiting ", fVmxMwaitExit);
1175 VMXFEATDUMP("RdpmcExit - RDPMC exiting ", fVmxRdpmcExit);
1176 VMXFEATDUMP("RdtscExit - RDTSC exiting ", fVmxRdtscExit);
1177 VMXFEATDUMP("Cr3LoadExit - CR3-load exiting ", fVmxCr3LoadExit);
1178 VMXFEATDUMP("Cr3StoreExit - CR3-store exiting ", fVmxCr3StoreExit);
1179 VMXFEATDUMP("Cr8LoadExit - CR8-load exiting ", fVmxCr8LoadExit);
1180 VMXFEATDUMP("Cr8StoreExit - CR8-store exiting ", fVmxCr8StoreExit);
1181 VMXFEATDUMP("UseTprShadow - Use TPR shadow ", fVmxUseTprShadow);
1182 VMXFEATDUMP("NmiWindowExit - NMI-window exiting ", fVmxNmiWindowExit);
1183 VMXFEATDUMP("MovDRxExit - Mov-DR exiting ", fVmxMovDRxExit);
1184 VMXFEATDUMP("UncondIoExit - Unconditional I/O exiting ", fVmxUncondIoExit);
1185 VMXFEATDUMP("UseIoBitmaps - Use I/O bitmaps ", fVmxUseIoBitmaps);
1186 VMXFEATDUMP("MonitorTrapFlag - Monitor trap flag ", fVmxMonitorTrapFlag);
1187 VMXFEATDUMP("UseMsrBitmaps - MSR bitmaps ", fVmxUseMsrBitmaps);
1188 VMXFEATDUMP("MonitorExit - MONITOR exiting ", fVmxMonitorExit);
1189 VMXFEATDUMP("PauseExit - PAUSE exiting ", fVmxPauseExit);
1190 VMXFEATDUMP("SecondaryExecCtl - Activate secondary controls ", fVmxSecondaryExecCtls);
1191 /* Secondary processor-based controls. */
1192 VMXFEATDUMP("VirtApic - Virtualize-APIC accesses ", fVmxVirtApicAccess);
1193 VMXFEATDUMP("Ept - Extended Page Tables ", fVmxEpt);
1194 VMXFEATDUMP("DescTableExit - Descriptor-table exiting ", fVmxDescTableExit);
1195 VMXFEATDUMP("Rdtscp - Enable RDTSCP ", fVmxRdtscp);
1196 VMXFEATDUMP("VirtX2ApicMode - Virtualize-x2APIC mode ", fVmxVirtX2ApicMode);
1197 VMXFEATDUMP("Vpid - Enable VPID ", fVmxVpid);
1198 VMXFEATDUMP("WbinvdExit - WBINVD exiting ", fVmxWbinvdExit);
1199 VMXFEATDUMP("UnrestrictedGuest - Unrestricted guest ", fVmxUnrestrictedGuest);
1200 VMXFEATDUMP("ApicRegVirt - APIC-register virtualization ", fVmxApicRegVirt);
1201 VMXFEATDUMP("VirtIntDelivery - Virtual-interrupt delivery ", fVmxVirtIntDelivery);
1202 VMXFEATDUMP("PauseLoopExit - PAUSE-loop exiting ", fVmxPauseLoopExit);
1203 VMXFEATDUMP("RdrandExit - RDRAND exiting ", fVmxRdrandExit);
1204 VMXFEATDUMP("Invpcid - Enable INVPCID ", fVmxInvpcid);
1205 VMXFEATDUMP("VmFuncs - Enable VM Functions ", fVmxVmFunc);
1206 VMXFEATDUMP("VmcsShadowing - VMCS shadowing ", fVmxVmcsShadowing);
1207 VMXFEATDUMP("RdseedExiting - RDSEED exiting ", fVmxRdseedExit);
1208 VMXFEATDUMP("PML - Supports Page-Modification Log (PML) ", fVmxPml);
1209 VMXFEATDUMP("EptVe - EPT violations can cause #VE ", fVmxEptXcptVe);
1210 VMXFEATDUMP("XsavesXRstors - Enable XSAVES/XRSTORS ", fVmxXsavesXrstors);
1211 /* VM-entry controls. */
1212 VMXFEATDUMP("EntryLoadDebugCtls - Load debug controls on VM-entry ", fVmxEntryLoadDebugCtls);
1213 VMXFEATDUMP("Ia32eModeGuest - IA-32e mode guest ", fVmxIa32eModeGuest);
1214 VMXFEATDUMP("EntryLoadEferMsr - Load IA32_EFER on VM-entry ", fVmxEntryLoadEferMsr);
1215 VMXFEATDUMP("EntryLoadPatMsr - Load IA32_PAT on VM-entry ", fVmxEntryLoadPatMsr);
1216 /* VM-exit controls. */
1217 VMXFEATDUMP("ExitSaveDebugCtls - Save debug controls on VM-exit ", fVmxExitSaveDebugCtls);
1218 VMXFEATDUMP("HostAddrSpaceSize - Host address-space size ", fVmxHostAddrSpaceSize);
1219 VMXFEATDUMP("ExitAckExtInt - Acknowledge interrupt on VM-exit ", fVmxExitAckExtInt);
1220 VMXFEATDUMP("ExitSavePatMsr - Save IA32_PAT on VM-exit ", fVmxExitSavePatMsr);
1221 VMXFEATDUMP("ExitLoadPatMsr - Load IA32_PAT on VM-exit ", fVmxExitLoadPatMsr);
1222 VMXFEATDUMP("ExitSaveEferMsr - Save IA32_EFER on VM-exit ", fVmxExitSaveEferMsr);
1223 VMXFEATDUMP("ExitLoadEferMsr - Load IA32_EFER on VM-exit ", fVmxExitLoadEferMsr);
1224 VMXFEATDUMP("SavePreemptTimer - Save VMX-preemption timer ", fVmxSavePreemptTimer);
1225 /* Miscellaneous data. */
1226 VMXFEATDUMP("ExitSaveEferLma - Save EFER.LMA on VM-exit ", fVmxExitSaveEferLma);
1227 VMXFEATDUMP("IntelPt - Intel PT (Processor Trace) in VMX operation ", fVmxIntelPt);
1228 VMXFEATDUMP("VmwriteAll - Inject softint. with 0-len instr. ", fVmxVmwriteAll);
1229 VMXFEATDUMP("EntryInjectSoftInt - Inject softint. with 0-len instr. ", fVmxEntryInjectSoftInt);
1230#undef VMXFEATDUMP
1231 }
1232 else
1233 pHlp->pfnPrintf(pHlp, "No VMX features present - requires an Intel or compatible CPU.\n");
1234}
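/*
 * Info handlers with this signature are registered with the debugger
 * facility (DBGF) during VM initialization, along these lines (a sketch;
 * the exact registration call, info name and flags CPUM uses may differ):
 *
 *     rc = DBGFR3InfoRegisterInternal(pVM, "cpumvmxfeat",
 *                                     "Displays the host and guest VMX features.",
 *                                     cpumR3InfoVmxFeatures);
 */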
1235
1236
1237/**
1238 * Initializes VMX host and guest features.
1239 *
1240 * @param pVM The cross context VM structure.
1241 *
1242 * @remarks This must be called only after HM has fully initialized since it calls
1243 * into HM to retrieve VMX and related MSRs.
1244 */
1245static void cpumR3InitVmxCpuFeatures(PVM pVM)
1246{
1247 /*
1248 * Init. host features.
1249 */
1250 PCPUMFEATURES pHostFeat = &pVM->cpum.s.HostFeatures;
1251 VMXMSRS VmxMsrs;
1252 int rc = HMVmxGetHostMsrs(pVM, &VmxMsrs);
1253 if (RT_SUCCESS(rc))
1254 {
1255 /* Basic information. */
1256 pHostFeat->fVmxInsOutInfo = RT_BF_GET(VmxMsrs.u64Basic, VMX_BF_BASIC_VMCS_INS_OUTS);
1257
1258 /* Pin-based VM-execution controls. */
1259 uint32_t const fPinCtls = VmxMsrs.PinCtls.n.allowed1;
1260 pHostFeat->fVmxExtIntExit = RT_BOOL(fPinCtls & VMX_PIN_CTLS_EXT_INT_EXIT);
1261 pHostFeat->fVmxNmiExit = RT_BOOL(fPinCtls & VMX_PIN_CTLS_NMI_EXIT);
1262 pHostFeat->fVmxVirtNmi = RT_BOOL(fPinCtls & VMX_PIN_CTLS_VIRT_NMI);
1263 pHostFeat->fVmxPreemptTimer = RT_BOOL(fPinCtls & VMX_PIN_CTLS_PREEMPT_TIMER);
1264 pHostFeat->fVmxPostedInt = RT_BOOL(fPinCtls & VMX_PIN_CTLS_POSTED_INT);
1265
1266 /* Processor-based VM-execution controls. */
1267 uint32_t const fProcCtls = VmxMsrs.ProcCtls.n.allowed1;
1268 pHostFeat->fVmxIntWindowExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_INT_WINDOW_EXIT);
1269 pHostFeat->fVmxTscOffsetting = RT_BOOL(fProcCtls & VMX_PROC_CTLS_USE_TSC_OFFSETTING);
1270 pHostFeat->fVmxHltExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_HLT_EXIT);
1271 pHostFeat->fVmxInvlpgExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_INVLPG_EXIT);
1272 pHostFeat->fVmxMwaitExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_MWAIT_EXIT);
1273 pHostFeat->fVmxRdpmcExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_RDPMC_EXIT);
1274 pHostFeat->fVmxRdtscExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_RDTSC_EXIT);
1275 pHostFeat->fVmxCr3LoadExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_CR3_LOAD_EXIT);
1276 pHostFeat->fVmxCr3StoreExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_CR3_STORE_EXIT);
1277 pHostFeat->fVmxCr8LoadExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_CR8_LOAD_EXIT);
1278 pHostFeat->fVmxCr8StoreExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_CR8_STORE_EXIT);
1279 pHostFeat->fVmxUseTprShadow = RT_BOOL(fProcCtls & VMX_PROC_CTLS_USE_TPR_SHADOW);
1280 pHostFeat->fVmxNmiWindowExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_NMI_WINDOW_EXIT);
1281 pHostFeat->fVmxMovDRxExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_MOV_DR_EXIT);
1282 pHostFeat->fVmxUncondIoExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_UNCOND_IO_EXIT);
1283 pHostFeat->fVmxUseIoBitmaps = RT_BOOL(fProcCtls & VMX_PROC_CTLS_USE_IO_BITMAPS);
1284 pHostFeat->fVmxMonitorTrapFlag = RT_BOOL(fProcCtls & VMX_PROC_CTLS_MONITOR_TRAP_FLAG);
1285 pHostFeat->fVmxUseMsrBitmaps = RT_BOOL(fProcCtls & VMX_PROC_CTLS_USE_MSR_BITMAPS);
1286 pHostFeat->fVmxMonitorExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_MONITOR_EXIT);
1287 pHostFeat->fVmxPauseExit = RT_BOOL(fProcCtls & VMX_PROC_CTLS_PAUSE_EXIT);
1288 pHostFeat->fVmxSecondaryExecCtls = RT_BOOL(fProcCtls & VMX_PROC_CTLS_USE_SECONDARY_CTLS);
1289
1290 /* Secondary processor-based VM-execution controls. */
1291 if (pHostFeat->fVmxSecondaryExecCtls)
1292 {
1293 uint32_t const fProcCtls2 = VmxMsrs.ProcCtls2.n.allowed1;
1294 pHostFeat->fVmxVirtApicAccess = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_VIRT_APIC_ACCESS);
1295 pHostFeat->fVmxEpt = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_EPT);
1296 pHostFeat->fVmxDescTableExit = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_DESC_TABLE_EXIT);
1297 pHostFeat->fVmxRdtscp = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_RDTSCP);
1298 pHostFeat->fVmxVirtX2ApicMode = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_VIRT_X2APIC_MODE);
1299 pHostFeat->fVmxVpid = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_VPID);
1300 pHostFeat->fVmxWbinvdExit = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_WBINVD_EXIT);
1301 pHostFeat->fVmxUnrestrictedGuest = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_UNRESTRICTED_GUEST);
1302 pHostFeat->fVmxApicRegVirt = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_APIC_REG_VIRT);
1303 pHostFeat->fVmxVirtIntDelivery = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_VIRT_INT_DELIVERY);
1304 pHostFeat->fVmxPauseLoopExit = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_PAUSE_LOOP_EXIT);
1305 pHostFeat->fVmxRdrandExit = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_RDRAND_EXIT);
1306 pHostFeat->fVmxInvpcid = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_INVPCID);
1307 pHostFeat->fVmxVmFunc = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_VMFUNC);
1308 pHostFeat->fVmxVmcsShadowing = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_VMCS_SHADOWING);
1309 pHostFeat->fVmxRdseedExit = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_RDSEED_EXIT);
1310 pHostFeat->fVmxPml = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_PML);
1311 pHostFeat->fVmxEptXcptVe = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_EPT_VE);
1312 pHostFeat->fVmxXsavesXrstors = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_XSAVES_XRSTORS);
1313 pHostFeat->fVmxUseTscScaling = RT_BOOL(fProcCtls2 & VMX_PROC_CTLS2_TSC_SCALING);
1314 }
1315
1316 /* VM-entry controls. */
1317 uint32_t const fEntryCtls = VmxMsrs.EntryCtls.n.allowed1;
1318 pHostFeat->fVmxEntryLoadDebugCtls = RT_BOOL(fEntryCtls & VMX_ENTRY_CTLS_LOAD_DEBUG);
1319 pHostFeat->fVmxIa32eModeGuest = RT_BOOL(fEntryCtls & VMX_ENTRY_CTLS_IA32E_MODE_GUEST);
1320 pHostFeat->fVmxEntryLoadEferMsr = RT_BOOL(fEntryCtls & VMX_ENTRY_CTLS_LOAD_EFER_MSR);
1321 pHostFeat->fVmxEntryLoadPatMsr = RT_BOOL(fEntryCtls & VMX_ENTRY_CTLS_LOAD_PAT_MSR);
1322
1323 /* VM-exit controls. */
1324 uint32_t const fExitCtls = VmxMsrs.ExitCtls.n.allowed1;
1325 pHostFeat->fVmxExitSaveDebugCtls = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_SAVE_DEBUG);
1326 pHostFeat->fVmxHostAddrSpaceSize = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_HOST_ADDR_SPACE_SIZE);
1327 pHostFeat->fVmxExitAckExtInt = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_ACK_EXT_INT);
1328 pHostFeat->fVmxExitSavePatMsr = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_SAVE_PAT_MSR);
1329 pHostFeat->fVmxExitLoadPatMsr = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_LOAD_PAT_MSR);
1330 pHostFeat->fVmxExitSaveEferMsr = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_SAVE_EFER_MSR);
1331 pHostFeat->fVmxExitLoadEferMsr = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_LOAD_EFER_MSR);
1332 pHostFeat->fVmxSavePreemptTimer = RT_BOOL(fExitCtls & VMX_EXIT_CTLS_SAVE_PREEMPT_TIMER);
1333
1334 /* Miscellaneous data. */
1335 uint32_t const fMiscData = VmxMsrs.u64Misc;
1336 pHostFeat->fVmxExitSaveEferLma = RT_BOOL(fMiscData & VMX_MISC_EXIT_SAVE_EFER_LMA);
1337 pHostFeat->fVmxIntelPt = RT_BOOL(fMiscData & VMX_MISC_INTEL_PT);
1338 pHostFeat->fVmxVmwriteAll = RT_BOOL(fMiscData & VMX_MISC_VMWRITE_ALL);
1339 pHostFeat->fVmxEntryInjectSoftInt = RT_BOOL(fMiscData & VMX_MISC_ENTRY_INJECT_SOFT_INT);
1340 }
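   /* Note: each VMX capability MSR read above reports allowed-0 and
      allowed-1 settings for its controls; a control can only be turned on
      when its allowed-1 bit is set, which is why only the allowed1 halves
      are consulted when deriving the host feature flags. */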
1341
1342 /*
1343 * Initialize the set of VMX features we emulate.
1344 * Note! Some bits might always be reported as 1 if they fall in the default1 class of bits
1345 * (e.g. fVmxEntryLoadDebugCtls), see @bugref{9180#c5}.
1346 */
1347 CPUMFEATURES EmuFeat;
1348 RT_ZERO(EmuFeat);
1349 EmuFeat.fVmx = 1;
1350 EmuFeat.fVmxInsOutInfo = 0;
1351 EmuFeat.fVmxExtIntExit = 1;
1352 EmuFeat.fVmxNmiExit = 1;
1353 EmuFeat.fVmxVirtNmi = 0;
1354 EmuFeat.fVmxPreemptTimer = 0; /** @todo NSTVMX: enable this. */
1355 EmuFeat.fVmxPostedInt = 0;
1356 EmuFeat.fVmxIntWindowExit = 1;
1357 EmuFeat.fVmxTscOffsetting = 1;
1358 EmuFeat.fVmxHltExit = 1;
1359 EmuFeat.fVmxInvlpgExit = 1;
1360 EmuFeat.fVmxMwaitExit = 1;
1361 EmuFeat.fVmxRdpmcExit = 1;
1362 EmuFeat.fVmxRdtscExit = 1;
1363 EmuFeat.fVmxCr3LoadExit = 1;
1364 EmuFeat.fVmxCr3StoreExit = 1;
1365 EmuFeat.fVmxCr8LoadExit = 1;
1366 EmuFeat.fVmxCr8StoreExit = 1;
1367 EmuFeat.fVmxUseTprShadow = 0;
1368 EmuFeat.fVmxNmiWindowExit = 0;
1369 EmuFeat.fVmxMovDRxExit = 1;
1370 EmuFeat.fVmxUncondIoExit = 1;
1371 EmuFeat.fVmxUseIoBitmaps = 1;
1372 EmuFeat.fVmxMonitorTrapFlag = 0;
1373 EmuFeat.fVmxUseMsrBitmaps = 0;
1374 EmuFeat.fVmxMonitorExit = 1;
1375 EmuFeat.fVmxPauseExit = 1;
1376 EmuFeat.fVmxSecondaryExecCtls = 1;
1377 EmuFeat.fVmxVirtApicAccess = 0;
1378 EmuFeat.fVmxEpt = 0;
1379 EmuFeat.fVmxDescTableExit = 1;
1380 EmuFeat.fVmxRdtscp = 1;
1381 EmuFeat.fVmxVirtX2ApicMode = 0;
1382 EmuFeat.fVmxVpid = 0;
1383 EmuFeat.fVmxWbinvdExit = 1;
1384 EmuFeat.fVmxUnrestrictedGuest = 0;
1385 EmuFeat.fVmxApicRegVirt = 0;
1386 EmuFeat.fVmxVirtIntDelivery = 0;
1387 EmuFeat.fVmxPauseLoopExit = 0;
1388 EmuFeat.fVmxRdrandExit = 0;
1389 EmuFeat.fVmxInvpcid = 1;
1390 EmuFeat.fVmxVmFunc = 0;
1391 EmuFeat.fVmxVmcsShadowing = 0;
1392 EmuFeat.fVmxRdseedExit = 0;
1393 EmuFeat.fVmxPml = 0;
1394 EmuFeat.fVmxEptXcptVe = 0;
1395 EmuFeat.fVmxXsavesXrstors = 0;
1396 EmuFeat.fVmxUseTscScaling = 0;
1397 EmuFeat.fVmxEntryLoadDebugCtls = 1;
1398 EmuFeat.fVmxIa32eModeGuest = 1;
1399 EmuFeat.fVmxEntryLoadEferMsr = 1;
1400 EmuFeat.fVmxEntryLoadPatMsr = 0;
1401 EmuFeat.fVmxExitSaveDebugCtls = 1;
1402 EmuFeat.fVmxHostAddrSpaceSize = 1;
1403 EmuFeat.fVmxExitAckExtInt = 0;
1404 EmuFeat.fVmxExitSavePatMsr = 0;
1405 EmuFeat.fVmxExitLoadPatMsr = 0;
1406 EmuFeat.fVmxExitSaveEferMsr = 1;
1407 EmuFeat.fVmxExitLoadEferMsr = 1;
1408 EmuFeat.fVmxSavePreemptTimer = 0;
1409 EmuFeat.fVmxExitSaveEferLma = 1;
1410 EmuFeat.fVmxIntelPt = 0;
1411 EmuFeat.fVmxVmwriteAll = 0;
1412 EmuFeat.fVmxEntryInjectSoftInt = 0;
1413
1414 /*
1415 * Explode guest features.
1416 *
1417 * When hardware-assisted VMX may be used, any feature we emulate must also be supported
1418 * by the hardware, hence we merge our emulated features with the host features below.
1419 */
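    /* A minimal sketch of the merge below, using a hypothetical helper
       macro (not part of CPUM):
           #define CPUM_MERGE_VMX_FEAT(a_Fld) \
               do { pGuestFeat->a_Fld = pBaseFeat->a_Fld & EmuFeat.a_Fld; } while (0)
       I.e. a feature is exposed to the guest only when both the base set
       (the host features when hardware VMX is used, the emulated set
       otherwise) and the emulated set report it. */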
1420 bool const fHostSupportsVmx = pHostFeat->fVmx;
1421 AssertLogRelReturnVoid(!fHostSupportsVmx || HMIsVmxSupported(pVM));
1422 PCCPUMFEATURES pBaseFeat = fHostSupportsVmx ? pHostFeat : &EmuFeat;
1423 PCPUMFEATURES pGuestFeat = &pVM->cpum.s.GuestFeatures;
1424 pGuestFeat->fVmx = (pBaseFeat->fVmx & EmuFeat.fVmx );
1425 pGuestFeat->fVmxInsOutInfo = (pBaseFeat->fVmxInsOutInfo & EmuFeat.fVmxInsOutInfo );
1426 pGuestFeat->fVmxExtIntExit = (pBaseFeat->fVmxExtIntExit & EmuFeat.fVmxExtIntExit );
1427 pGuestFeat->fVmxNmiExit = (pBaseFeat->fVmxNmiExit & EmuFeat.fVmxNmiExit );
1428 pGuestFeat->fVmxVirtNmi = (pBaseFeat->fVmxVirtNmi & EmuFeat.fVmxVirtNmi );
1429 pGuestFeat->fVmxPreemptTimer = (pBaseFeat->fVmxPreemptTimer & EmuFeat.fVmxPreemptTimer );
1430 pGuestFeat->fVmxPostedInt = (pBaseFeat->fVmxPostedInt & EmuFeat.fVmxPostedInt );
1431 pGuestFeat->fVmxIntWindowExit = (pBaseFeat->fVmxIntWindowExit & EmuFeat.fVmxIntWindowExit );
1432 pGuestFeat->fVmxTscOffsetting = (pBaseFeat->fVmxTscOffsetting & EmuFeat.fVmxTscOffsetting );
1433 pGuestFeat->fVmxHltExit = (pBaseFeat->fVmxHltExit & EmuFeat.fVmxHltExit );
1434 pGuestFeat->fVmxInvlpgExit = (pBaseFeat->fVmxInvlpgExit & EmuFeat.fVmxInvlpgExit );
1435 pGuestFeat->fVmxMwaitExit = (pBaseFeat->fVmxMwaitExit & EmuFeat.fVmxMwaitExit );
1436 pGuestFeat->fVmxRdpmcExit = (pBaseFeat->fVmxRdpmcExit & EmuFeat.fVmxRdpmcExit );
1437 pGuestFeat->fVmxRdtscExit = (pBaseFeat->fVmxRdtscExit & EmuFeat.fVmxRdtscExit );
1438 pGuestFeat->fVmxCr3LoadExit = (pBaseFeat->fVmxCr3LoadExit & EmuFeat.fVmxCr3LoadExit );
1439 pGuestFeat->fVmxCr3StoreExit = (pBaseFeat->fVmxCr3StoreExit & EmuFeat.fVmxCr3StoreExit );
1440 pGuestFeat->fVmxCr8LoadExit = (pBaseFeat->fVmxCr8LoadExit & EmuFeat.fVmxCr8LoadExit );
1441 pGuestFeat->fVmxCr8StoreExit = (pBaseFeat->fVmxCr8StoreExit & EmuFeat.fVmxCr8StoreExit );
1442 pGuestFeat->fVmxUseTprShadow = (pBaseFeat->fVmxUseTprShadow & EmuFeat.fVmxUseTprShadow );
1443 pGuestFeat->fVmxNmiWindowExit = (pBaseFeat->fVmxNmiWindowExit & EmuFeat.fVmxNmiWindowExit );
1444 pGuestFeat->fVmxMovDRxExit = (pBaseFeat->fVmxMovDRxExit & EmuFeat.fVmxMovDRxExit );
1445 pGuestFeat->fVmxUncondIoExit = (pBaseFeat->fVmxUncondIoExit & EmuFeat.fVmxUncondIoExit );
1446 pGuestFeat->fVmxUseIoBitmaps = (pBaseFeat->fVmxUseIoBitmaps & EmuFeat.fVmxUseIoBitmaps );
1447 pGuestFeat->fVmxMonitorTrapFlag = (pBaseFeat->fVmxMonitorTrapFlag & EmuFeat.fVmxMonitorTrapFlag );
1448 pGuestFeat->fVmxUseMsrBitmaps = (pBaseFeat->fVmxUseMsrBitmaps & EmuFeat.fVmxUseMsrBitmaps );
1449 pGuestFeat->fVmxMonitorExit = (pBaseFeat->fVmxMonitorExit & EmuFeat.fVmxMonitorExit );
1450 pGuestFeat->fVmxPauseExit = (pBaseFeat->fVmxPauseExit & EmuFeat.fVmxPauseExit );
1451 pGuestFeat->fVmxSecondaryExecCtls = (pBaseFeat->fVmxSecondaryExecCtls & EmuFeat.fVmxSecondaryExecCtls );
1452 pGuestFeat->fVmxVirtApicAccess = (pBaseFeat->fVmxVirtApicAccess & EmuFeat.fVmxVirtApicAccess );
1453 pGuestFeat->fVmxEpt = (pBaseFeat->fVmxEpt & EmuFeat.fVmxEpt );
1454 pGuestFeat->fVmxDescTableExit = (pBaseFeat->fVmxDescTableExit & EmuFeat.fVmxDescTableExit );
1455 pGuestFeat->fVmxRdtscp = (pBaseFeat->fVmxRdtscp & EmuFeat.fVmxRdtscp );
1456 pGuestFeat->fVmxVirtX2ApicMode = (pBaseFeat->fVmxVirtX2ApicMode & EmuFeat.fVmxVirtX2ApicMode );
1457 pGuestFeat->fVmxVpid = (pBaseFeat->fVmxVpid & EmuFeat.fVmxVpid );
1458 pGuestFeat->fVmxWbinvdExit = (pBaseFeat->fVmxWbinvdExit & EmuFeat.fVmxWbinvdExit );
1459 pGuestFeat->fVmxUnrestrictedGuest = (pBaseFeat->fVmxUnrestrictedGuest & EmuFeat.fVmxUnrestrictedGuest );
1460 pGuestFeat->fVmxApicRegVirt = (pBaseFeat->fVmxApicRegVirt & EmuFeat.fVmxApicRegVirt );
1461 pGuestFeat->fVmxVirtIntDelivery = (pBaseFeat->fVmxVirtIntDelivery & EmuFeat.fVmxVirtIntDelivery );
1462 pGuestFeat->fVmxPauseLoopExit = (pBaseFeat->fVmxPauseLoopExit & EmuFeat.fVmxPauseLoopExit );
1463 pGuestFeat->fVmxRdrandExit = (pBaseFeat->fVmxRdrandExit & EmuFeat.fVmxRdrandExit );
1464 pGuestFeat->fVmxInvpcid = (pBaseFeat->fVmxInvpcid & EmuFeat.fVmxInvpcid );
1465 pGuestFeat->fVmxVmFunc = (pBaseFeat->fVmxVmFunc & EmuFeat.fVmxVmFunc );
1466 pGuestFeat->fVmxVmcsShadowing = (pBaseFeat->fVmxVmcsShadowing & EmuFeat.fVmxVmcsShadowing );
1467 pGuestFeat->fVmxRdseedExit = (pBaseFeat->fVmxRdseedExit & EmuFeat.fVmxRdseedExit );
1468 pGuestFeat->fVmxPml = (pBaseFeat->fVmxPml & EmuFeat.fVmxPml );
1469 pGuestFeat->fVmxEptXcptVe = (pBaseFeat->fVmxEptXcptVe & EmuFeat.fVmxEptXcptVe );
1470 pGuestFeat->fVmxXsavesXrstors = (pBaseFeat->fVmxXsavesXrstors & EmuFeat.fVmxXsavesXrstors );
1471 pGuestFeat->fVmxUseTscScaling = (pBaseFeat->fVmxUseTscScaling & EmuFeat.fVmxUseTscScaling );
1472 pGuestFeat->fVmxEntryLoadDebugCtls = (pBaseFeat->fVmxEntryLoadDebugCtls & EmuFeat.fVmxEntryLoadDebugCtls );
1473 pGuestFeat->fVmxIa32eModeGuest = (pBaseFeat->fVmxIa32eModeGuest & EmuFeat.fVmxIa32eModeGuest );
1474 pGuestFeat->fVmxEntryLoadEferMsr = (pBaseFeat->fVmxEntryLoadEferMsr & EmuFeat.fVmxEntryLoadEferMsr );
1475 pGuestFeat->fVmxEntryLoadPatMsr = (pBaseFeat->fVmxEntryLoadPatMsr & EmuFeat.fVmxEntryLoadPatMsr );
1476 pGuestFeat->fVmxExitSaveDebugCtls = (pBaseFeat->fVmxExitSaveDebugCtls & EmuFeat.fVmxExitSaveDebugCtls );
1477 pGuestFeat->fVmxHostAddrSpaceSize = (pBaseFeat->fVmxHostAddrSpaceSize & EmuFeat.fVmxHostAddrSpaceSize );
1478 pGuestFeat->fVmxExitAckExtInt = (pBaseFeat->fVmxExitAckExtInt & EmuFeat.fVmxExitAckExtInt );
1479 pGuestFeat->fVmxExitSavePatMsr = (pBaseFeat->fVmxExitSavePatMsr & EmuFeat.fVmxExitSavePatMsr );
1480 pGuestFeat->fVmxExitLoadPatMsr = (pBaseFeat->fVmxExitLoadPatMsr & EmuFeat.fVmxExitLoadPatMsr );
1481 pGuestFeat->fVmxExitSaveEferMsr = (pBaseFeat->fVmxExitSaveEferMsr & EmuFeat.fVmxExitSaveEferMsr );
1482 pGuestFeat->fVmxExitLoadEferMsr = (pBaseFeat->fVmxExitLoadEferMsr & EmuFeat.fVmxExitLoadEferMsr );
1483 pGuestFeat->fVmxSavePreemptTimer = (pBaseFeat->fVmxSavePreemptTimer & EmuFeat.fVmxSavePreemptTimer );
1484 pGuestFeat->fVmxExitSaveEferLma = (pBaseFeat->fVmxExitSaveEferLma & EmuFeat.fVmxExitSaveEferLma );
1485 pGuestFeat->fVmxIntelPt = (pBaseFeat->fVmxIntelPt & EmuFeat.fVmxIntelPt );
1486 pGuestFeat->fVmxVmwriteAll = (pBaseFeat->fVmxVmwriteAll & EmuFeat.fVmxVmwriteAll );
1487 pGuestFeat->fVmxEntryInjectSoftInt = (pBaseFeat->fVmxEntryInjectSoftInt & EmuFeat.fVmxEntryInjectSoftInt );
1488
1489 /* Paranoia. */
1490 if (!pGuestFeat->fVmxSecondaryExecCtls)
1491 {
1492 Assert(!pGuestFeat->fVmxVirtApicAccess);
1493 Assert(!pGuestFeat->fVmxEpt);
1494 Assert(!pGuestFeat->fVmxDescTableExit);
1495 Assert(!pGuestFeat->fVmxRdtscp);
1496 Assert(!pGuestFeat->fVmxVirtX2ApicMode);
1497 Assert(!pGuestFeat->fVmxVpid);
1498 Assert(!pGuestFeat->fVmxWbinvdExit);
1499 Assert(!pGuestFeat->fVmxUnrestrictedGuest);
1500 Assert(!pGuestFeat->fVmxApicRegVirt);
1501 Assert(!pGuestFeat->fVmxVirtIntDelivery);
1502 Assert(!pGuestFeat->fVmxPauseLoopExit);
1503 Assert(!pGuestFeat->fVmxRdrandExit);
1504 Assert(!pGuestFeat->fVmxInvpcid);
1505 Assert(!pGuestFeat->fVmxVmFunc);
1506 Assert(!pGuestFeat->fVmxVmcsShadowing);
1507 Assert(!pGuestFeat->fVmxRdseedExit);
1508 Assert(!pGuestFeat->fVmxPml);
1509 Assert(!pGuestFeat->fVmxEptXcptVe);
1510 Assert(!pGuestFeat->fVmxXsavesXrstors);
1511 Assert(!pGuestFeat->fVmxUseTscScaling);
1512 }
1513}
1514
1515
1516/**
1517 * Initializes the CPUM.
1518 *
1519 * @returns VBox status code.
1520 * @param pVM The cross context VM structure.
1521 */
1522VMMR3DECL(int) CPUMR3Init(PVM pVM)
1523{
1524 LogFlow(("CPUMR3Init\n"));
1525
1526 /*
1527 * Assert alignment, sizes and tables.
1528 */
1529 AssertCompileMemberAlignment(VM, cpum.s, 32);
1530 AssertCompile(sizeof(pVM->cpum.s) <= sizeof(pVM->cpum.padding));
1531 AssertCompileSizeAlignment(CPUMCTX, 64);
1532 AssertCompileSizeAlignment(CPUMCTXMSRS, 64);
1533 AssertCompileSizeAlignment(CPUMHOSTCTX, 64);
1534 AssertCompileMemberAlignment(VM, cpum, 64);
1535 AssertCompileMemberAlignment(VM, aCpus, 64);
1536 AssertCompileMemberAlignment(VMCPU, cpum.s, 64);
1537 AssertCompileMemberSizeAlignment(VM, aCpus[0].cpum.s, 64);
1538#ifdef VBOX_STRICT
1539 int rc2 = cpumR3MsrStrictInitChecks();
1540 AssertRCReturn(rc2, rc2);
1541#endif
1542
1543 /*
1544 * Initialize offsets.
1545 */
1546
1547 /* Calculate the offset from CPUM to CPUMCPU for the first CPU. */
1548 pVM->cpum.s.offCPUMCPU0 = RT_UOFFSETOF(VM, aCpus[0].cpum) - RT_UOFFSETOF(VM, cpum);
1549 Assert((uintptr_t)&pVM->cpum + pVM->cpum.s.offCPUMCPU0 == (uintptr_t)&pVM->aCpus[0].cpum);
1550
1551
1552 /* Calculate the offset from CPUMCPU to CPUM. */
1553 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1554 {
1555 PVMCPU pVCpu = &pVM->aCpus[i];
1556
1557 pVCpu->cpum.s.offCPUM = RT_UOFFSETOF_DYN(VM, aCpus[i].cpum) - RT_UOFFSETOF(VM, cpum);
1558 Assert((uintptr_t)&pVCpu->cpum - pVCpu->cpum.s.offCPUM == (uintptr_t)&pVM->cpum);
1559 }
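    /* These offsets allow jumping between the shared CPUM data and the
       per-VCPU CPUMCPU data using plain pointer arithmetic, e.g. (assuming
       the usual PCPUMCPU typedef; the asserts above verify the math):
           PCPUMCPU pCpumCpu0 = (PCPUMCPU)((uintptr_t)&pVM->cpum + pVM->cpum.s.offCPUMCPU0);
     */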
1560
1561 /*
1562 * Gather info about the host CPU.
1563 */
1564 if (!ASMHasCpuId())
1565 {
1566 Log(("The CPU doesn't support CPUID!\n"));
1567 return VERR_UNSUPPORTED_CPU;
1568 }
1569
1570 pVM->cpum.s.fHostMxCsrMask = CPUMR3DeterminHostMxCsrMask();
1571
1572 PCPUMCPUIDLEAF paLeaves;
1573 uint32_t cLeaves;
1574 int rc = CPUMR3CpuIdCollectLeaves(&paLeaves, &cLeaves);
1575 AssertLogRelRCReturn(rc, rc);
1576
1577 rc = cpumR3CpuIdExplodeFeatures(paLeaves, cLeaves, &pVM->cpum.s.HostFeatures);
1578 RTMemFree(paLeaves);
1579 AssertLogRelRCReturn(rc, rc);
1580 pVM->cpum.s.GuestFeatures.enmCpuVendor = pVM->cpum.s.HostFeatures.enmCpuVendor;
1581
1582 /*
1583 * Check that the CPU supports the minimum features we require.
1584 */
1585 if (!pVM->cpum.s.HostFeatures.fFxSaveRstor)
1586 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support the FXSAVE/FXRSTOR instruction.");
1587 if (!pVM->cpum.s.HostFeatures.fMmx)
1588 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support MMX.");
1589 if (!pVM->cpum.s.HostFeatures.fTsc)
1590 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support RDTSC.");
1591
1592 /*
1593 * Setup the CR4 AND and OR masks used in the raw-mode switcher.
1594 */
1595 pVM->cpum.s.CR4.AndMask = X86_CR4_OSXMMEEXCPT | X86_CR4_PVI | X86_CR4_VME;
1596 pVM->cpum.s.CR4.OrMask = X86_CR4_OSFXSR;
1597
1598 /*
1599 * Figure out which XSAVE/XRSTOR features are available on the host.
1600 */
1601 uint64_t fXcr0Host = 0;
1602 uint64_t fXStateHostMask = 0;
1603 if ( pVM->cpum.s.HostFeatures.fXSaveRstor
1604 && pVM->cpum.s.HostFeatures.fOpSysXSaveRstor)
1605 {
1606 fXStateHostMask = fXcr0Host = ASMGetXcr0();
1607 fXStateHostMask &= XSAVE_C_X87 | XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI;
1608 AssertLogRelMsgStmt((fXStateHostMask & (XSAVE_C_X87 | XSAVE_C_SSE)) == (XSAVE_C_X87 | XSAVE_C_SSE),
1609 ("%#llx\n", fXStateHostMask), fXStateHostMask = 0);
1610 }
1611 pVM->cpum.s.fXStateHostMask = fXStateHostMask;
1612 if (VM_IS_RAW_MODE_ENABLED(pVM)) /* For raw-mode, we only use XSAVE/XRSTOR when the guest starts using it (CPUID/CR4 visibility). */
1613 fXStateHostMask = 0;
1614 LogRel(("CPUM: fXStateHostMask=%#llx; initial: %#llx; host XCR0=%#llx\n",
1615 pVM->cpum.s.fXStateHostMask, fXStateHostMask, fXcr0Host));
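    /* For reference, the architectural XSAVE component bits retained in the
       host mask above: X87=bit 0, SSE=bit 1, YMM=bit 2, OPMASK=bit 5,
       ZMM_HI256=bit 6, ZMM_16HI=bit 7. The MPX components BNDREGS/BNDCSR
       (bits 3 and 4) are not included in the mask here. */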
1616
1617 /*
1618 * Allocate memory for the extended CPU state and initialize the host XSAVE/XRSTOR mask.
1619 */
1620 uint32_t cbMaxXState = pVM->cpum.s.HostFeatures.cbMaxExtendedState;
1621 cbMaxXState = RT_ALIGN(cbMaxXState, 128);
1622 AssertLogRelReturn(cbMaxXState >= sizeof(X86FXSTATE) && cbMaxXState <= _8K, VERR_CPUM_IPE_2);
1623
1624 uint8_t *pbXStates;
1625 rc = MMR3HyperAllocOnceNoRelEx(pVM, cbMaxXState * 3 * pVM->cCpus, PAGE_SIZE, MM_TAG_CPUM_CTX,
1626 MMHYPER_AONR_FLAGS_KERNEL_MAPPING, (void **)&pbXStates);
1627 AssertLogRelRCReturn(rc, rc);
1628
1629 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1630 {
1631 PVMCPU pVCpu = &pVM->aCpus[i];
1632
1633 pVCpu->cpum.s.Guest.pXStateR3 = (PX86XSAVEAREA)pbXStates;
1634 pVCpu->cpum.s.Guest.pXStateR0 = MMHyperR3ToR0(pVM, pbXStates);
1635 pVCpu->cpum.s.Guest.pXStateRC = MMHyperR3ToRC(pVM, pbXStates);
1636 pbXStates += cbMaxXState;
1637
1638 pVCpu->cpum.s.Host.pXStateR3 = (PX86XSAVEAREA)pbXStates;
1639 pVCpu->cpum.s.Host.pXStateR0 = MMHyperR3ToR0(pVM, pbXStates);
1640 pVCpu->cpum.s.Host.pXStateRC = MMHyperR3ToRC(pVM, pbXStates);
1641 pbXStates += cbMaxXState;
1642
1643 pVCpu->cpum.s.Hyper.pXStateR3 = (PX86XSAVEAREA)pbXStates;
1644 pVCpu->cpum.s.Hyper.pXStateR0 = MMHyperR3ToR0(pVM, pbXStates);
1645 pVCpu->cpum.s.Hyper.pXStateRC = MMHyperR3ToRC(pVM, pbXStates);
1646 pbXStates += cbMaxXState;
1647
1648 pVCpu->cpum.s.Host.fXStateMask = fXStateHostMask;
1649 }
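    /* The single allocation above is carved into three cbMaxXState-sized
       areas per VCPU, in this order (cbMaxXState * 3 * pVM->cCpus bytes in
       total):
           [ guest | host | hyper ][ guest | host | hyper ] ...
             '------ VCPU 0 -----'  '------ VCPU 1 -----'       */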
1650
1651 /*
1652 * Register saved state data item.
1653 */
1654 rc = SSMR3RegisterInternal(pVM, "cpum", 1, CPUM_SAVED_STATE_VERSION, sizeof(CPUM),
1655 NULL, cpumR3LiveExec, NULL,
1656 NULL, cpumR3SaveExec, NULL,
1657 cpumR3LoadPrep, cpumR3LoadExec, cpumR3LoadDone);
1658 if (RT_FAILURE(rc))
1659 return rc;
1660
1661 /*
1662 * Register info handlers and registers with the debugger facility.
1663 */
1664 DBGFR3InfoRegisterInternalEx(pVM, "cpum", "Displays the all the cpu states.",
1665 &cpumR3InfoAll, DBGFINFO_FLAGS_ALL_EMTS);
1666 DBGFR3InfoRegisterInternalEx(pVM, "cpumguest", "Displays the guest cpu state.",
1667 &cpumR3InfoGuest, DBGFINFO_FLAGS_ALL_EMTS);
1668 DBGFR3InfoRegisterInternalEx(pVM, "cpumguesthwvirt", "Displays the guest hwvirt. cpu state.",
1669 &cpumR3InfoGuestHwvirt, DBGFINFO_FLAGS_ALL_EMTS);
1670 DBGFR3InfoRegisterInternalEx(pVM, "cpumhyper", "Displays the hypervisor cpu state.",
1671 &cpumR3InfoHyper, DBGFINFO_FLAGS_ALL_EMTS);
1672 DBGFR3InfoRegisterInternalEx(pVM, "cpumhost", "Displays the host cpu state.",
1673 &cpumR3InfoHost, DBGFINFO_FLAGS_ALL_EMTS);
1674 DBGFR3InfoRegisterInternalEx(pVM, "cpumguestinstr", "Displays the current guest instruction.",
1675 &cpumR3InfoGuestInstr, DBGFINFO_FLAGS_ALL_EMTS);
1676 DBGFR3InfoRegisterInternal( pVM, "cpuid", "Displays the guest cpuid leaves.", &cpumR3CpuIdInfo);
1677 DBGFR3InfoRegisterInternal( pVM, "cpumvmxfeat", "Displays the host and guest VMX hwvirt. features.",
1678 &cpumR3InfoVmxFeatures);
1679
1680 rc = cpumR3DbgInit(pVM);
1681 if (RT_FAILURE(rc))
1682 return rc;
1683
1684 /*
1685 * Check if we need to workaround partial/leaky FPU handling.
1686 */
1687 cpumR3CheckLeakyFpu(pVM);
1688
1689 /*
1690 * Initialize the Guest CPUID and MSR states.
1691 */
1692 rc = cpumR3InitCpuIdAndMsrs(pVM);
1693 if (RT_FAILURE(rc))
1694 return rc;
1695
1696 /*
1697 * Allocate memory required by the guest hardware virtualization state.
1698 */
1699 if (pVM->cpum.ro.GuestFeatures.fVmx)
1700 rc = cpumR3AllocVmxHwVirtState(pVM);
1701 else if (pVM->cpum.ro.GuestFeatures.fSvm)
1702 rc = cpumR3AllocSvmHwVirtState(pVM);
1703 else
1704 Assert(pVM->aCpus[0].cpum.s.Guest.hwvirt.enmHwvirt == CPUMHWVIRT_NONE);
1705 if (RT_FAILURE(rc))
1706 return rc;
1707
1708 /*
1709 * Initialize guest hardware virtualization state.
1710 */
1711 CPUMHWVIRT const enmHwvirt = pVM->aCpus[0].cpum.s.Guest.hwvirt.enmHwvirt;
1712 if (enmHwvirt == CPUMHWVIRT_VMX)
1713 {
1714 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1715 cpumR3InitVmxHwVirtState(&pVM->aCpus[i]);
1716 }
1717 else if (enmHwvirt == CPUMHWVIRT_SVM)
1718 {
1719 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1720 cpumR3InitSvmHwVirtState(&pVM->aCpus[i]);
1721 }
1722
1723 /*
1724 * Workaround for missing cpuid(0) patches when leaf 4 returns GuestInfo.DefCpuId:
1725 * If we fail to patch cpuid(0).eax then Linux tries to determine the number
1726 * of processors from (cpuid(4).eax >> 26) + 1.
1727 *
1728 * Note: this code is obsolete, but let's keep it here for reference.
1729 * It is still relevant when we artificially cap the max std leaf to less than 4.
1730 *
1731 * Note: This used to be a separate function CPUMR3SetHwVirt that was called
1732 * after VMINITCOMPLETED_HM.
1733 */
1734 if (VM_IS_RAW_MODE_ENABLED(pVM))
1735 {
1736 Assert( (pVM->cpum.s.aGuestCpuIdPatmStd[4].uEax & UINT32_C(0xffffc000)) == 0
1737 || pVM->cpum.s.aGuestCpuIdPatmStd[0].uEax < 0x4);
1738 pVM->cpum.s.aGuestCpuIdPatmStd[4].uEax &= UINT32_C(0x00003fff);
1739 }
1740
1741 CPUMR3Reset(pVM);
1742 return VINF_SUCCESS;
1743}
1744
1745
1746/**
1747 * Applies relocations to data and code managed by this
1748 * component. This function will be called at init and
1749 * whenever the VMM needs to relocate itself inside the GC.
1750 *
1751 * The CPUM will update the addresses used by the switcher.
1752 *
1753 * @param pVM The cross context VM structure.
1754 */
1755VMMR3DECL(void) CPUMR3Relocate(PVM pVM)
1756{
1757 LogFlow(("CPUMR3Relocate\n"));
1758
1759 pVM->cpum.s.GuestInfo.paMsrRangesRC = MMHyperR3ToRC(pVM, pVM->cpum.s.GuestInfo.paMsrRangesR3);
1760 pVM->cpum.s.GuestInfo.paCpuIdLeavesRC = MMHyperR3ToRC(pVM, pVM->cpum.s.GuestInfo.paCpuIdLeavesR3);
1761
1762 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
1763 {
1764 PVMCPU pVCpu = &pVM->aCpus[iCpu];
1765 pVCpu->cpum.s.Guest.pXStateRC = MMHyperR3ToRC(pVM, pVCpu->cpum.s.Guest.pXStateR3);
1766 pVCpu->cpum.s.Host.pXStateRC = MMHyperR3ToRC(pVM, pVCpu->cpum.s.Host.pXStateR3);
1767 pVCpu->cpum.s.Hyper.pXStateRC = MMHyperR3ToRC(pVM, pVCpu->cpum.s.Hyper.pXStateR3); /** @todo remove me */
1768
1769 /* Recheck the guest DRx values in raw-mode. */
1770 CPUMRecalcHyperDRx(pVCpu, UINT8_MAX, false);
1771 }
1772}
1773
1774
1775/**
1776 * Terminates the CPUM.
1777 *
1778 * Termination means cleaning up and freeing all resources;
1779 * the VM itself is at this point powered off or suspended.
1780 *
1781 * @returns VBox status code.
1782 * @param pVM The cross context VM structure.
1783 */
1784VMMR3DECL(int) CPUMR3Term(PVM pVM)
1785{
1786#ifdef VBOX_WITH_CRASHDUMP_MAGIC
1787 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1788 {
1789 PVMCPU pVCpu = &pVM->aCpus[i];
1790 PCPUMCTX pCtx = CPUMQueryGuestCtxPtr(pVCpu);
1791
1792 memset(pVCpu->cpum.s.aMagic, 0, sizeof(pVCpu->cpum.s.aMagic));
1793 pVCpu->cpum.s.uMagic = 0;
1794 pCtx->dr[5] = 0;
1795 }
1796#endif
1797
1798 if (pVM->cpum.ro.GuestFeatures.fVmx)
1799 cpumR3FreeVmxHwVirtState(pVM);
1800 else if (pVM->cpum.ro.GuestFeatures.fSvm)
1801 cpumR3FreeSvmHwVirtState(pVM);
1802 return VINF_SUCCESS;
1803}
1804
1805
1806/**
1807 * Resets a virtual CPU.
1808 *
1809 * Used by CPUMR3Reset and CPU hot plugging.
1810 *
1811 * @param pVM The cross context VM structure.
1812 * @param pVCpu The cross context virtual CPU structure of the CPU that is
1813 * being reset. This may differ from the current EMT.
1814 */
1815VMMR3DECL(void) CPUMR3ResetCpu(PVM pVM, PVMCPU pVCpu)
1816{
1817 /** @todo anything different for VCPU > 0? */
1818 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
1819
1820 /*
1821 * Initialize everything to ZERO first.
1822 */
1823 uint32_t fUseFlags = pVCpu->cpum.s.fUseFlags & ~CPUM_USED_FPU_SINCE_REM;
1824
1825 AssertCompile(RTASSERT_OFFSET_OF(CPUMCTX, pXStateR0) < RTASSERT_OFFSET_OF(CPUMCTX, pXStateR3));
1826 AssertCompile(RTASSERT_OFFSET_OF(CPUMCTX, pXStateR0) < RTASSERT_OFFSET_OF(CPUMCTX, pXStateRC));
1827 memset(pCtx, 0, RT_UOFFSETOF(CPUMCTX, pXStateR0));
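    /* Zeroing stops at the first XState pointer member, so the R3/R0/RC
       XState pointers established by CPUMR3Init (and the members laid out
       after them) survive the reset; the AssertCompiles above guard this
       assumption. */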
1828
1829 pVCpu->cpum.s.fUseFlags = fUseFlags;
1830
1831 pCtx->cr0 = X86_CR0_CD | X86_CR0_NW | X86_CR0_ET; //0x60000010
1832 pCtx->eip = 0x0000fff0;
1833 pCtx->edx = 0x00000600; /* P6 processor */
1834 pCtx->eflags.Bits.u1Reserved0 = 1;
1835
1836 pCtx->cs.Sel = 0xf000;
1837 pCtx->cs.ValidSel = 0xf000;
1838 pCtx->cs.fFlags = CPUMSELREG_FLAGS_VALID;
1839 pCtx->cs.u64Base = UINT64_C(0xffff0000);
1840 pCtx->cs.u32Limit = 0x0000ffff;
1841 pCtx->cs.Attr.n.u1DescType = 1; /* code/data segment */
1842 pCtx->cs.Attr.n.u1Present = 1;
1843 pCtx->cs.Attr.n.u4Type = X86_SEL_TYPE_ER_ACC;
1844
1845 pCtx->ds.fFlags = CPUMSELREG_FLAGS_VALID;
1846 pCtx->ds.u32Limit = 0x0000ffff;
1847 pCtx->ds.Attr.n.u1DescType = 1; /* code/data segment */
1848 pCtx->ds.Attr.n.u1Present = 1;
1849 pCtx->ds.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1850
1851 pCtx->es.fFlags = CPUMSELREG_FLAGS_VALID;
1852 pCtx->es.u32Limit = 0x0000ffff;
1853 pCtx->es.Attr.n.u1DescType = 1; /* code/data segment */
1854 pCtx->es.Attr.n.u1Present = 1;
1855 pCtx->es.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1856
1857 pCtx->fs.fFlags = CPUMSELREG_FLAGS_VALID;
1858 pCtx->fs.u32Limit = 0x0000ffff;
1859 pCtx->fs.Attr.n.u1DescType = 1; /* code/data segment */
1860 pCtx->fs.Attr.n.u1Present = 1;
1861 pCtx->fs.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1862
1863 pCtx->gs.fFlags = CPUMSELREG_FLAGS_VALID;
1864 pCtx->gs.u32Limit = 0x0000ffff;
1865 pCtx->gs.Attr.n.u1DescType = 1; /* code/data segment */
1866 pCtx->gs.Attr.n.u1Present = 1;
1867 pCtx->gs.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1868
1869 pCtx->ss.fFlags = CPUMSELREG_FLAGS_VALID;
1870 pCtx->ss.u32Limit = 0x0000ffff;
1871 pCtx->ss.Attr.n.u1Present = 1;
1872 pCtx->ss.Attr.n.u1DescType = 1; /* code/data segment */
1873 pCtx->ss.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1874
1875 pCtx->idtr.cbIdt = 0xffff;
1876 pCtx->gdtr.cbGdt = 0xffff;
1877
1878 pCtx->ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
1879 pCtx->ldtr.u32Limit = 0xffff;
1880 pCtx->ldtr.Attr.n.u1Present = 1;
1881 pCtx->ldtr.Attr.n.u4Type = X86_SEL_TYPE_SYS_LDT;
1882
1883 pCtx->tr.fFlags = CPUMSELREG_FLAGS_VALID;
1884 pCtx->tr.u32Limit = 0xffff;
1885 pCtx->tr.Attr.n.u1Present = 1;
1886 pCtx->tr.Attr.n.u4Type = X86_SEL_TYPE_SYS_386_TSS_BUSY; /* Deduction, not properly documented by Intel. */
1887
1888 pCtx->dr[6] = X86_DR6_INIT_VAL;
1889 pCtx->dr[7] = X86_DR7_INIT_VAL;
1890
1891 PX86FXSTATE pFpuCtx = &pCtx->pXStateR3->x87; AssertReleaseMsg(RT_VALID_PTR(pFpuCtx), ("%p\n", pFpuCtx));
1892 pFpuCtx->FTW = 0x00; /* All empty (abridged tag reg edition). */
1893 pFpuCtx->FCW = 0x37f;
1894
1895 /* Intel 64 and IA-32 Architectures Software Developer's Manual Volume 3A, Table 8-1.
1896 IA-32 Processor States Following Power-up, Reset, or INIT */
1897 pFpuCtx->MXCSR = 0x1F80;
1898 pFpuCtx->MXCSR_MASK = pVM->cpum.s.GuestInfo.fMxCsrMask; /** @todo check if REM messes this up... */
1899
1900 pCtx->aXcr[0] = XSAVE_C_X87;
1901 if (pVM->cpum.s.HostFeatures.cbMaxExtendedState >= RT_UOFFSETOF(X86XSAVEAREA, Hdr))
1902 {
1903 /* The entire FXSAVE state needs loading when we switch to XSAVE/XRSTOR
1904 as we don't know what happened before. (Bother to optimize this later?) */
1905 pCtx->pXStateR3->Hdr.bmXState = XSAVE_C_X87 | XSAVE_C_SSE;
1906 }
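    /* On XRSTOR, components whose bits are clear in the XSAVE header's
       XSTATE_BV field (bmXState) are initialized rather than loaded from
       memory, so setting the X87 and SSE bits here forces the whole legacy
       FXSAVE region to be loaded on the first XRSTOR. */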
1907
1908 /*
1909 * MSRs.
1910 */
1911 /* Init PAT MSR */
1912 pCtx->msrPAT = MSR_IA32_CR_PAT_INIT_VAL;
1913
1914 /* EFER MBZ; see AMD64 Architecture Programmer's Manual Volume 2: Table 14-1. Initial Processor State.
1915 * The Intel docs don't mention it. */
1916 Assert(!pCtx->msrEFER);
1917
1918 /* IA32_MISC_ENABLE - not entirely sure what the init/reset state really
1919 is supposed to be here, just trying to provide useful/sensible values. */
1920 PCPUMMSRRANGE pRange = cpumLookupMsrRange(pVM, MSR_IA32_MISC_ENABLE);
1921 if (pRange)
1922 {
1923 pVCpu->cpum.s.GuestMsrs.msr.MiscEnable = MSR_IA32_MISC_ENABLE_BTS_UNAVAIL
1924 | MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL
1925 | (pVM->cpum.s.GuestFeatures.fMonitorMWait ? MSR_IA32_MISC_ENABLE_MONITOR : 0)
1926 | MSR_IA32_MISC_ENABLE_FAST_STRINGS;
1927 pRange->fWrIgnMask |= MSR_IA32_MISC_ENABLE_BTS_UNAVAIL
1928 | MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL;
1929 pRange->fWrGpMask &= ~pVCpu->cpum.s.GuestMsrs.msr.MiscEnable;
1930 }
1931
1932 /** @todo Wire IA32_MISC_ENABLE bit 22 to our NT 4 CPUID trick. */
1933
1934 /** @todo r=ramshankar: Currently broken for SMP as TMCpuTickSet() expects to be
1935 * called from each EMT while we're getting called by CPUMR3Reset()
1936 * iteratively on the same thread. Fix later. */
1937#if 0 /** @todo r=bird: This we will do in TM, not here. */
1938 /* TSC must be 0. Intel spec. Table 9-1. "IA-32 Processor States Following Power-up, Reset, or INIT." */
1939 CPUMSetGuestMsr(pVCpu, MSR_IA32_TSC, 0);
1940#endif
1941
1942
1943 /* C-state control. Guesses. */
1944 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 1 /*C1*/ | RT_BIT_32(25) | RT_BIT_32(26) | RT_BIT_32(27) | RT_BIT_32(28);
1945 /* For Nehalem+ and Atoms, the 0xE2 MSR (MSR_PKG_CST_CONFIG_CONTROL) is documented. For Core 2,
1946 * it's undocumented but exists as MSR_PMG_CST_CONFIG_CONTROL and has similar but not identical
1947 * functionality. The default value must be different due to the incompatible write mask.
1948 */
1949 if (CPUMMICROARCH_IS_INTEL_CORE2(pVM->cpum.s.GuestFeatures.enmMicroarch))
1950 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 0x202a01; /* From Mac Pro Harpertown, unlocked. */
1951 else if (pVM->cpum.s.GuestFeatures.enmMicroarch == kCpumMicroarch_Intel_Core_Yonah)
1952 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 0x26740c; /* From MacBookPro1,1. */
1953
1954 /*
1955 * Hardware virtualization state.
1956 */
1957 pCtx->hwvirt.fGif = true;
1958 Assert(!pVM->cpum.ro.GuestFeatures.fVmx || !pVM->cpum.ro.GuestFeatures.fSvm); /* Paranoia. */
1959 if (pVM->cpum.ro.GuestFeatures.fVmx)
1960 cpumR3InitVmxHwVirtState(pVCpu);
1961 else if (pVM->cpum.ro.GuestFeatures.fSvm)
1962 cpumR3InitSvmHwVirtState(pVCpu);
1963}
1964
1965
1966/**
1967 * Resets the CPU.
1968 *
1969 * @returns VINF_SUCCESS.
1970 * @param pVM The cross context VM structure.
1971 */
1972VMMR3DECL(void) CPUMR3Reset(PVM pVM)
1973{
1974 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1975 {
1976 CPUMR3ResetCpu(pVM, &pVM->aCpus[i]);
1977
1978#ifdef VBOX_WITH_CRASHDUMP_MAGIC
1979 PCPUMCTX pCtx = &pVM->aCpus[i].cpum.s.Guest;
1980
1981 /* Magic marker for searching in crash dumps. */
1982 strcpy((char *)pVM->aCpus[i].cpum.s.aMagic, "CPUMCPU Magic");
1983 pVM->aCpus[i].cpum.s.uMagic = UINT64_C(0xDEADBEEFDEADBEEF);
1984 pCtx->dr[5] = UINT64_C(0xDEADBEEFDEADBEEF);
1985#endif
1986 }
1987}
1988
1989
1990
1991
1992/**
1993 * Pass 0 live exec callback.
1994 *
1995 * @returns VINF_SSM_DONT_CALL_AGAIN.
1996 * @param pVM The cross context VM structure.
1997 * @param pSSM The saved state handle.
1998 * @param uPass The pass (0).
1999 */
2000static DECLCALLBACK(int) cpumR3LiveExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uPass)
2001{
2002 AssertReturn(uPass == 0, VERR_SSM_UNEXPECTED_PASS);
2003 cpumR3SaveCpuId(pVM, pSSM);
2004 return VINF_SSM_DONT_CALL_AGAIN;
2005}
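/* Pass 0 of a live save/teleport only writes the CPUID configuration; this
   lets the target machine reject an incompatible CPU setup early, and the
   VINF_SSM_DONT_CALL_AGAIN return ensures SSM skips this callback for the
   remaining live passes. */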
2006
2007
2008/**
2009 * Execute state save operation.
2010 *
2011 * @returns VBox status code.
2012 * @param pVM The cross context VM structure.
2013 * @param pSSM SSM operation handle.
2014 */
2015static DECLCALLBACK(int) cpumR3SaveExec(PVM pVM, PSSMHANDLE pSSM)
2016{
2017 /*
2018 * Save.
2019 */
2020 SSMR3PutU32(pSSM, pVM->cCpus);
2021 SSMR3PutU32(pSSM, sizeof(pVM->aCpus[0].cpum.s.GuestMsrs.msr));
2022 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2023 {
2024 PVMCPU pVCpu = &pVM->aCpus[iCpu];
2025
2026 SSMR3PutStructEx(pSSM, &pVCpu->cpum.s.Hyper, sizeof(pVCpu->cpum.s.Hyper), 0, g_aCpumCtxFields, NULL);
2027
2028 PCPUMCTX pGstCtx = &pVCpu->cpum.s.Guest;
2029 SSMR3PutStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
2030 SSMR3PutStructEx(pSSM, &pGstCtx->pXStateR3->x87, sizeof(pGstCtx->pXStateR3->x87), 0, g_aCpumX87Fields, NULL);
2031 if (pGstCtx->fXStateMask != 0)
2032 SSMR3PutStructEx(pSSM, &pGstCtx->pXStateR3->Hdr, sizeof(pGstCtx->pXStateR3->Hdr), 0, g_aCpumXSaveHdrFields, NULL);
2033 if (pGstCtx->fXStateMask & XSAVE_C_YMM)
2034 {
2035 PCX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_YMM_BIT, PCX86XSAVEYMMHI);
2036 SSMR3PutStructEx(pSSM, pYmmHiCtx, sizeof(*pYmmHiCtx), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumYmmHiFields, NULL);
2037 }
2038 if (pGstCtx->fXStateMask & XSAVE_C_BNDREGS)
2039 {
2040 PCX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDREGS_BIT, PCX86XSAVEBNDREGS);
2041 SSMR3PutStructEx(pSSM, pBndRegs, sizeof(*pBndRegs), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndRegsFields, NULL);
2042 }
2043 if (pGstCtx->fXStateMask & XSAVE_C_BNDCSR)
2044 {
2045 PCX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDCSR_BIT, PCX86XSAVEBNDCFG);
2046 SSMR3PutStructEx(pSSM, pBndCfg, sizeof(*pBndCfg), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndCfgFields, NULL);
2047 }
2048 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_HI256)
2049 {
2050 PCX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_HI256_BIT, PCX86XSAVEZMMHI256);
2051 SSMR3PutStructEx(pSSM, pZmmHi256, sizeof(*pZmmHi256), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmmHi256Fields, NULL);
2052 }
2053 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_16HI)
2054 {
2055 PCX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_16HI_BIT, PCX86XSAVEZMM16HI);
2056 SSMR3PutStructEx(pSSM, pZmm16Hi, sizeof(*pZmm16Hi), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmm16HiFields, NULL);
2057 }
2058 if (pVM->cpum.ro.GuestFeatures.fSvm)
2059 {
2060 Assert(pGstCtx->hwvirt.svm.CTX_SUFF(pVmcb));
2061 SSMR3PutU64(pSSM, pGstCtx->hwvirt.svm.uMsrHSavePa);
2062 SSMR3PutGCPhys(pSSM, pGstCtx->hwvirt.svm.GCPhysVmcb);
2063 SSMR3PutU64(pSSM, pGstCtx->hwvirt.svm.uPrevPauseTick);
2064 SSMR3PutU16(pSSM, pGstCtx->hwvirt.svm.cPauseFilter);
2065 SSMR3PutU16(pSSM, pGstCtx->hwvirt.svm.cPauseFilterThreshold);
2066 SSMR3PutBool(pSSM, pGstCtx->hwvirt.svm.fInterceptEvents);
2067 SSMR3PutStructEx(pSSM, &pGstCtx->hwvirt.svm.HostState, sizeof(pGstCtx->hwvirt.svm.HostState), 0 /* fFlags */,
2068 g_aSvmHwvirtHostState, NULL /* pvUser */);
2069 SSMR3PutMem(pSSM, pGstCtx->hwvirt.svm.pVmcbR3, SVM_VMCB_PAGES << X86_PAGE_4K_SHIFT);
2070 SSMR3PutMem(pSSM, pGstCtx->hwvirt.svm.pvMsrBitmapR3, SVM_MSRPM_PAGES << X86_PAGE_4K_SHIFT);
2071 SSMR3PutMem(pSSM, pGstCtx->hwvirt.svm.pvIoBitmapR3, SVM_IOPM_PAGES << X86_PAGE_4K_SHIFT);
2072 SSMR3PutU32(pSSM, pGstCtx->hwvirt.fLocalForcedActions);
2073 SSMR3PutBool(pSSM, pGstCtx->hwvirt.fGif);
2074 }
2075 SSMR3PutU32(pSSM, pVCpu->cpum.s.fUseFlags);
2076 SSMR3PutU32(pSSM, pVCpu->cpum.s.fChanged);
2077 AssertCompileSizeAlignment(pVCpu->cpum.s.GuestMsrs.msr, sizeof(uint64_t));
2078 SSMR3PutMem(pSSM, &pVCpu->cpum.s.GuestMsrs, sizeof(pVCpu->cpum.s.GuestMsrs.msr));
2079 }
2080
2081 cpumR3SaveCpuId(pVM, pSSM);
2082 return VINF_SUCCESS;
2083}
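/* Rough layout of the saved-state unit written above (and consumed by
   cpumR3LoadExec below) for the current version:
       u32 cCpus, u32 cbMsrs,
       per VCPU: hyper ctx, guest ctx, x87/FXSAVE state,
                 [xsave header + enabled components], [SVM hwvirt state],
                 u32 fUseFlags, u32 fChanged, MSR block,
   followed by the CPUID leaves. */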
2084
2085
2086/**
2087 * @callback_method_impl{FNSSMINTLOADPREP}
2088 */
2089static DECLCALLBACK(int) cpumR3LoadPrep(PVM pVM, PSSMHANDLE pSSM)
2090{
2091 NOREF(pSSM);
2092 pVM->cpum.s.fPendingRestore = true;
2093 return VINF_SUCCESS;
2094}
2095
2096
2097/**
2098 * @callback_method_impl{FNSSMINTLOADEXEC}
2099 */
2100static DECLCALLBACK(int) cpumR3LoadExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass)
2101{
2102 int rc; /* Only for AssertRCReturn use. */
2103
2104 /*
2105 * Validate version.
2106 */
2107 if ( uVersion != CPUM_SAVED_STATE_VERSION_HWVIRT_SVM
2108 && uVersion != CPUM_SAVED_STATE_VERSION_XSAVE
2109 && uVersion != CPUM_SAVED_STATE_VERSION_GOOD_CPUID_COUNT
2110 && uVersion != CPUM_SAVED_STATE_VERSION_BAD_CPUID_COUNT
2111 && uVersion != CPUM_SAVED_STATE_VERSION_PUT_STRUCT
2112 && uVersion != CPUM_SAVED_STATE_VERSION_MEM
2113 && uVersion != CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE
2114 && uVersion != CPUM_SAVED_STATE_VERSION_VER3_2
2115 && uVersion != CPUM_SAVED_STATE_VERSION_VER3_0
2116 && uVersion != CPUM_SAVED_STATE_VERSION_VER2_1_NOMSR
2117 && uVersion != CPUM_SAVED_STATE_VERSION_VER2_0
2118 && uVersion != CPUM_SAVED_STATE_VERSION_VER1_6)
2119 {
2120 AssertMsgFailed(("cpumR3LoadExec: Invalid version uVersion=%d!\n", uVersion));
2121 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
2122 }
2123
2124 if (uPass == SSM_PASS_FINAL)
2125 {
2126 /*
2127 * Set the size of RTGCPTR for SSMR3GetGCPtr. (Only necessary for
2128 * really old SSM file versions.)
2129 */
2130 if (uVersion == CPUM_SAVED_STATE_VERSION_VER1_6)
2131 SSMR3HandleSetGCPtrSize(pSSM, sizeof(RTGCPTR32));
2132 else if (uVersion <= CPUM_SAVED_STATE_VERSION_VER3_0)
2133 SSMR3HandleSetGCPtrSize(pSSM, HC_ARCH_BITS == 32 ? sizeof(RTGCPTR32) : sizeof(RTGCPTR));
2134
2135 /*
2136 * Figure out which x87 and context field definitions to use for older states.
2137 */
2138 uint32_t const fLoad = uVersion > CPUM_SAVED_STATE_VERSION_MEM ? 0 : SSMSTRUCT_FLAGS_MEM_BAND_AID_RELAXED;
2139 PCSSMFIELD paCpumCtx1Fields = g_aCpumX87Fields;
2140 PCSSMFIELD paCpumCtx2Fields = g_aCpumCtxFields;
2141 if (uVersion == CPUM_SAVED_STATE_VERSION_VER1_6)
2142 {
2143 paCpumCtx1Fields = g_aCpumX87FieldsV16;
2144 paCpumCtx2Fields = g_aCpumCtxFieldsV16;
2145 }
2146 else if (uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2147 {
2148 paCpumCtx1Fields = g_aCpumX87FieldsMem;
2149 paCpumCtx2Fields = g_aCpumCtxFieldsMem;
2150 }
2151
2152 /*
2153 * The hyper state used to precede the CPU count. Starting with
2154 * XSAVE it was moved down to after we've got the count.
2155 */
2156 if (uVersion < CPUM_SAVED_STATE_VERSION_XSAVE)
2157 {
2158 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2159 {
2160 PVMCPU pVCpu = &pVM->aCpus[iCpu];
2161 X86FXSTATE Ign;
2162 SSMR3GetStructEx(pSSM, &Ign, sizeof(Ign), fLoad | SSMSTRUCT_FLAGS_NO_TAIL_MARKER, paCpumCtx1Fields, NULL);
2163 uint64_t uCR3 = pVCpu->cpum.s.Hyper.cr3;
2164 uint64_t uRSP = pVCpu->cpum.s.Hyper.rsp; /* see VMMR3Relocate(). */
2165 SSMR3GetStructEx(pSSM, &pVCpu->cpum.s.Hyper, sizeof(pVCpu->cpum.s.Hyper),
2166 fLoad | SSMSTRUCT_FLAGS_NO_LEAD_MARKER, paCpumCtx2Fields, NULL);
2167 pVCpu->cpum.s.Hyper.cr3 = uCR3;
2168 pVCpu->cpum.s.Hyper.rsp = uRSP;
2169 }
2170 }
2171
2172 if (uVersion >= CPUM_SAVED_STATE_VERSION_VER2_1_NOMSR)
2173 {
2174 uint32_t cCpus;
2175 rc = SSMR3GetU32(pSSM, &cCpus); AssertRCReturn(rc, rc);
2176 AssertLogRelMsgReturn(cCpus == pVM->cCpus, ("Mismatching CPU counts: saved: %u; configured: %u \n", cCpus, pVM->cCpus),
2177 VERR_SSM_UNEXPECTED_DATA);
2178 }
2179 AssertLogRelMsgReturn( uVersion > CPUM_SAVED_STATE_VERSION_VER2_0
2180 || pVM->cCpus == 1,
2181 ("cCpus=%u\n", pVM->cCpus),
2182 VERR_SSM_UNEXPECTED_DATA);
2183
2184 uint32_t cbMsrs = 0;
2185 if (uVersion > CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE)
2186 {
2187 rc = SSMR3GetU32(pSSM, &cbMsrs); AssertRCReturn(rc, rc);
2188 AssertLogRelMsgReturn(RT_ALIGN(cbMsrs, sizeof(uint64_t)) == cbMsrs, ("Size of MSRs is misaligned: %#x\n", cbMsrs),
2189 VERR_SSM_UNEXPECTED_DATA);
2190 AssertLogRelMsgReturn(cbMsrs <= sizeof(CPUMCTXMSRS) && cbMsrs > 0, ("Size of MSRs is out of range: %#x\n", cbMsrs),
2191 VERR_SSM_UNEXPECTED_DATA);
2192 }
2193
2194 /*
2195 * Do the per-CPU restoring.
2196 */
2197 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2198 {
2199 PVMCPU pVCpu = &pVM->aCpus[iCpu];
2200 PCPUMCTX pGstCtx = &pVCpu->cpum.s.Guest;
2201
2202 if (uVersion >= CPUM_SAVED_STATE_VERSION_XSAVE)
2203 {
2204 /*
2205 * The XSAVE saved state layout moved the hyper state down here.
2206 */
2207 uint64_t uCR3 = pVCpu->cpum.s.Hyper.cr3;
2208 uint64_t uRSP = pVCpu->cpum.s.Hyper.rsp; /* see VMMR3Relocate(). */
2209 rc = SSMR3GetStructEx(pSSM, &pVCpu->cpum.s.Hyper, sizeof(pVCpu->cpum.s.Hyper), 0, g_aCpumCtxFields, NULL);
2210 pVCpu->cpum.s.Hyper.cr3 = uCR3;
2211 pVCpu->cpum.s.Hyper.rsp = uRSP;
2212 AssertRCReturn(rc, rc);
2213
2214 /*
2215 * Start by restoring the CPUMCTX structure and the X86FXSAVE bits of the extended state.
2216 */
2217 rc = SSMR3GetStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
2218 rc = SSMR3GetStructEx(pSSM, &pGstCtx->pXStateR3->x87, sizeof(pGstCtx->pXStateR3->x87), 0, g_aCpumX87Fields, NULL);
2219 AssertRCReturn(rc, rc);
2220
2221 /* Check that the xsave/xrstor mask is valid (invalid results in #GP). */
2222 if (pGstCtx->fXStateMask != 0)
2223 {
2224 AssertLogRelMsgReturn(!(pGstCtx->fXStateMask & ~pVM->cpum.s.fXStateGuestMask),
2225 ("fXStateMask=%#RX64 fXStateGuestMask=%#RX64\n",
2226 pGstCtx->fXStateMask, pVM->cpum.s.fXStateGuestMask),
2227 VERR_CPUM_INCOMPATIBLE_XSAVE_COMP_MASK);
2228 AssertLogRelMsgReturn(pGstCtx->fXStateMask & XSAVE_C_X87,
2229 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2230 AssertLogRelMsgReturn((pGstCtx->fXStateMask & (XSAVE_C_SSE | XSAVE_C_YMM)) != XSAVE_C_YMM,
2231 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2232 AssertLogRelMsgReturn( (pGstCtx->fXStateMask & (XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI)) == 0
2233 || (pGstCtx->fXStateMask & (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI))
2234 == (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI),
2235 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2236 }
2237
2238 /* Check that the XCR0 mask is valid (invalid results in #GP). */
2239 AssertLogRelMsgReturn(pGstCtx->aXcr[0] & XSAVE_C_X87, ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XCR0);
2240 if (pGstCtx->aXcr[0] != XSAVE_C_X87)
2241 {
2242 AssertLogRelMsgReturn(!(pGstCtx->aXcr[0] & ~(pGstCtx->fXStateMask | XSAVE_C_X87)),
2243 ("xcr0=%#RX64 fXStateMask=%#RX64\n", pGstCtx->aXcr[0], pGstCtx->fXStateMask),
2244 VERR_CPUM_INVALID_XCR0);
2245 AssertLogRelMsgReturn(pGstCtx->aXcr[0] & XSAVE_C_X87,
2246 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2247 AssertLogRelMsgReturn((pGstCtx->aXcr[0] & (XSAVE_C_SSE | XSAVE_C_YMM)) != XSAVE_C_YMM,
2248 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2249 AssertLogRelMsgReturn( (pGstCtx->aXcr[0] & (XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI)) == 0
2250 || (pGstCtx->aXcr[0] & (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI))
2251 == (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI),
2252 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
2253 }
2254
2255 /* Check that the XCR1 is zero, as we don't implement it yet. */
2256 AssertLogRelMsgReturn(!pGstCtx->aXcr[1], ("xcr1=%#RX64\n", pGstCtx->aXcr[1]), VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
2257
2258 /*
2259 * Restore the individual extended state components we support.
2260 */
2261 if (pGstCtx->fXStateMask != 0)
2262 {
2263 rc = SSMR3GetStructEx(pSSM, &pGstCtx->pXStateR3->Hdr, sizeof(pGstCtx->pXStateR3->Hdr),
2264 0, g_aCpumXSaveHdrFields, NULL);
2265 AssertRCReturn(rc, rc);
2266 AssertLogRelMsgReturn(!(pGstCtx->pXStateR3->Hdr.bmXState & ~pGstCtx->fXStateMask),
2267 ("bmXState=%#RX64 fXStateMask=%#RX64\n",
2268 pGstCtx->pXStateR3->Hdr.bmXState, pGstCtx->fXStateMask),
2269 VERR_CPUM_INVALID_XSAVE_HDR);
2270 }
2271 if (pGstCtx->fXStateMask & XSAVE_C_YMM)
2272 {
2273 PX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_YMM_BIT, PX86XSAVEYMMHI);
2274 SSMR3GetStructEx(pSSM, pYmmHiCtx, sizeof(*pYmmHiCtx), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumYmmHiFields, NULL);
2275 }
2276 if (pGstCtx->fXStateMask & XSAVE_C_BNDREGS)
2277 {
2278 PX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDREGS_BIT, PX86XSAVEBNDREGS);
2279 SSMR3GetStructEx(pSSM, pBndRegs, sizeof(*pBndRegs), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndRegsFields, NULL);
2280 }
2281 if (pGstCtx->fXStateMask & XSAVE_C_BNDCSR)
2282 {
2283 PX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDCSR_BIT, PX86XSAVEBNDCFG);
2284 SSMR3GetStructEx(pSSM, pBndCfg, sizeof(*pBndCfg), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndCfgFields, NULL);
2285 }
2286 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_HI256)
2287 {
2288 PX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_HI256_BIT, PX86XSAVEZMMHI256);
2289 SSMR3GetStructEx(pSSM, pZmmHi256, sizeof(*pZmmHi256), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmmHi256Fields, NULL);
2290 }
2291 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_16HI)
2292 {
2293 PX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_16HI_BIT, PX86XSAVEZMM16HI);
2294 SSMR3GetStructEx(pSSM, pZmm16Hi, sizeof(*pZmm16Hi), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmm16HiFields, NULL);
2295 }
2296 if (uVersion >= CPUM_SAVED_STATE_VERSION_HWVIRT_SVM)
2297 {
2298 if (pVM->cpum.ro.GuestFeatures.fSvm)
2299 {
2300 Assert(pGstCtx->hwvirt.svm.CTX_SUFF(pVmcb));
2301 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.svm.uMsrHSavePa);
2302 SSMR3GetGCPhys(pSSM, &pGstCtx->hwvirt.svm.GCPhysVmcb);
2303 SSMR3GetU64(pSSM, &pGstCtx->hwvirt.svm.uPrevPauseTick);
2304 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.svm.cPauseFilter);
2305 SSMR3GetU16(pSSM, &pGstCtx->hwvirt.svm.cPauseFilterThreshold);
2306 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.svm.fInterceptEvents);
2307 SSMR3GetStructEx(pSSM, &pGstCtx->hwvirt.svm.HostState, sizeof(pGstCtx->hwvirt.svm.HostState),
2308 0 /* fFlags */, g_aSvmHwvirtHostState, NULL /* pvUser */);
2309 SSMR3GetMem(pSSM, pGstCtx->hwvirt.svm.pVmcbR3, SVM_VMCB_PAGES << X86_PAGE_4K_SHIFT);
2310 SSMR3GetMem(pSSM, pGstCtx->hwvirt.svm.pvMsrBitmapR3, SVM_MSRPM_PAGES << X86_PAGE_4K_SHIFT);
2311 SSMR3GetMem(pSSM, pGstCtx->hwvirt.svm.pvIoBitmapR3, SVM_IOPM_PAGES << X86_PAGE_4K_SHIFT);
2312 SSMR3GetU32(pSSM, &pGstCtx->hwvirt.fLocalForcedActions);
2313 SSMR3GetBool(pSSM, &pGstCtx->hwvirt.fGif);
2314 }
2315 }
2316 }
2317 else
2318 {
2319 /*
2320 * Pre XSAVE saved state.
2321 */
2322 SSMR3GetStructEx(pSSM, &pGstCtx->pXStateR3->x87, sizeof(pGstCtx->pXStateR3->x87),
2323 fLoad | SSMSTRUCT_FLAGS_NO_TAIL_MARKER, paCpumCtx1Fields, NULL);
2324 SSMR3GetStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), fLoad | SSMSTRUCT_FLAGS_NO_LEAD_MARKER, paCpumCtx2Fields, NULL);
2325 }
2326
2327 /*
2328 * Restore a couple of flags and the MSRs.
2329 */
2330 SSMR3GetU32(pSSM, &pVCpu->cpum.s.fUseFlags);
2331 SSMR3GetU32(pSSM, &pVCpu->cpum.s.fChanged);
2332
2333 rc = VINF_SUCCESS;
2334 if (uVersion > CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE)
2335 rc = SSMR3GetMem(pSSM, &pVCpu->cpum.s.GuestMsrs.au64[0], cbMsrs);
2336 else if (uVersion >= CPUM_SAVED_STATE_VERSION_VER3_0)
2337 {
2338 SSMR3GetMem(pSSM, &pVCpu->cpum.s.GuestMsrs.au64[0], 2 * sizeof(uint64_t)); /* Restore two MSRs. */
2339 rc = SSMR3Skip(pSSM, 62 * sizeof(uint64_t));
2340 }
2341 AssertRCReturn(rc, rc);
2342
2343 /* REM and others may have cleared must-be-one fields in DR6 and
2344 DR7; fix these. */
2345 pGstCtx->dr[6] &= ~(X86_DR6_RAZ_MASK | X86_DR6_MBZ_MASK);
2346 pGstCtx->dr[6] |= X86_DR6_RA1_MASK;
2347 pGstCtx->dr[7] &= ~(X86_DR7_RAZ_MASK | X86_DR7_MBZ_MASK);
2348 pGstCtx->dr[7] |= X86_DR7_RA1_MASK;
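            /* E.g. with the usual mask definitions this leaves DR6 with its
               reserved-one bits set (the 0xFFFF0FF0 reset pattern) and DR7
               with bit 10 set (the 0x400 reset pattern), matching what real
               hardware enforces on MOV DRx writes. */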
2349 }
2350
2351 /* Older states do not have the internal selector register flags
2352 and valid selector values. Supply those. */
2353 if (uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2354 {
2355 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2356 {
2357 PVMCPU pVCpu = &pVM->aCpus[iCpu];
2358 bool const fValid = !VM_IS_RAW_MODE_ENABLED(pVM)
2359 || ( uVersion > CPUM_SAVED_STATE_VERSION_VER3_2
2360 && !(pVCpu->cpum.s.fChanged & CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID));
2361 PCPUMSELREG paSelReg = CPUMCTX_FIRST_SREG(&pVCpu->cpum.s.Guest);
2362 if (fValid)
2363 {
2364 for (uint32_t iSelReg = 0; iSelReg < X86_SREG_COUNT; iSelReg++)
2365 {
2366 paSelReg[iSelReg].fFlags = CPUMSELREG_FLAGS_VALID;
2367 paSelReg[iSelReg].ValidSel = paSelReg[iSelReg].Sel;
2368 }
2369
2370 pVCpu->cpum.s.Guest.ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
2371 pVCpu->cpum.s.Guest.ldtr.ValidSel = pVCpu->cpum.s.Guest.ldtr.Sel;
2372 }
2373 else
2374 {
2375 for (uint32_t iSelReg = 0; iSelReg < X86_SREG_COUNT; iSelReg++)
2376 {
2377 paSelReg[iSelReg].fFlags = 0;
2378 paSelReg[iSelReg].ValidSel = 0;
2379 }
2380
2381 /* This might not be 104% correct, but I think it's close
2382 enough for all practical purposes... (REM always loaded
2383 LDTR registers.) */
2384 pVCpu->cpum.s.Guest.ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
2385 pVCpu->cpum.s.Guest.ldtr.ValidSel = pVCpu->cpum.s.Guest.ldtr.Sel;
2386 }
2387 pVCpu->cpum.s.Guest.tr.fFlags = CPUMSELREG_FLAGS_VALID;
2388 pVCpu->cpum.s.Guest.tr.ValidSel = pVCpu->cpum.s.Guest.tr.Sel;
2389 }
2390 }
2391
2392 /* Clear CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID. */
2393 if ( uVersion > CPUM_SAVED_STATE_VERSION_VER3_2
2394 && uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
2395 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2396 pVM->aCpus[iCpu].cpum.s.fChanged &= ~CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID;
2397
2398 /*
2399 * A quick sanity check.
2400 */
2401 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2402 {
2403 PVMCPU pVCpu = &pVM->aCpus[iCpu];
2404 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.es.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2405 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.cs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2406 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.ss.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2407 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.ds.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2408 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.fs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2409 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.gs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
2410 }
2411 }
2412
2413 pVM->cpum.s.fPendingRestore = false;
2414
2415 /*
2416 * Guest CPUIDs.
2417 */
2418 if (uVersion >= CPUM_SAVED_STATE_VERSION_VER3_2)
2419 return cpumR3LoadCpuId(pVM, pSSM, uVersion);
2420 return cpumR3LoadCpuIdPre32(pVM, pSSM, uVersion);
2421}
2422
2423
2424/**
2425 * @callback_method_impl{FNSSMINTLOADDONE}
2426 */
2427static DECLCALLBACK(int) cpumR3LoadDone(PVM pVM, PSSMHANDLE pSSM)
2428{
2429 if (RT_FAILURE(SSMR3HandleGetStatus(pSSM)))
2430 return VINF_SUCCESS;
2431
2432 /* Just check this since we can. */ /** @todo Add an SSM unit flag indicating that it's mandatory during a restore. */
2433 if (pVM->cpum.s.fPendingRestore)
2434 {
2435 LogRel(("CPUM: Missing state!\n"));
2436 return VERR_INTERNAL_ERROR_2;
2437 }
2438
2439 bool const fSupportsLongMode = VMR3IsLongModeAllowed(pVM);
2440 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
2441 {
2442 PVMCPU pVCpu = &pVM->aCpus[idCpu];
2443
2444 /* Notify PGM of the NXE states in case they've changed. */
2445 PGMNotifyNxeChanged(pVCpu, RT_BOOL(pVCpu->cpum.s.Guest.msrEFER & MSR_K6_EFER_NXE));
2446
2447 /* During init. this is done in CPUMR3InitCompleted(). */
2448 if (fSupportsLongMode)
2449 pVCpu->cpum.s.fUseFlags |= CPUM_USE_SUPPORTS_LONGMODE;
2450 }
2451 return VINF_SUCCESS;
2452}
2453
2454
2455/**
2456 * Checks if the CPUM state restore is still pending.
2457 *
2458 * @returns true / false.
2459 * @param pVM The cross context VM structure.
2460 */
2461VMMDECL(bool) CPUMR3IsStateRestorePending(PVM pVM)
2462{
2463 return pVM->cpum.s.fPendingRestore;
2464}
2465
2466
2467/**
2468 * Formats the EFLAGS value into mnemonics.
2469 *
2470 * @param pszEFlags Where to write the mnemonics. (Assumes sufficient buffer space.)
2471 * @param efl The EFLAGS value.
2472 */
2473static void cpumR3InfoFormatFlags(char *pszEFlags, uint32_t efl)
2474{
2475 /*
2476 * Format the flags.
2477 */
2478 static const struct
2479 {
2480 const char *pszSet; const char *pszClear; uint32_t fFlag;
2481 } s_aFlags[] =
2482 {
2483 { "vip",NULL, X86_EFL_VIP },
2484 { "vif",NULL, X86_EFL_VIF },
2485 { "ac", NULL, X86_EFL_AC },
2486 { "vm", NULL, X86_EFL_VM },
2487 { "rf", NULL, X86_EFL_RF },
2488 { "nt", NULL, X86_EFL_NT },
2489 { "ov", "nv", X86_EFL_OF },
2490 { "dn", "up", X86_EFL_DF },
2491 { "ei", "di", X86_EFL_IF },
2492 { "tf", NULL, X86_EFL_TF },
2493 { "nt", "pl", X86_EFL_SF },
2494 { "nz", "zr", X86_EFL_ZF },
2495 { "ac", "na", X86_EFL_AF },
2496 { "po", "pe", X86_EFL_PF },
2497 { "cy", "nc", X86_EFL_CF },
2498 };
2499 char *psz = pszEFlags;
2500 for (unsigned i = 0; i < RT_ELEMENTS(s_aFlags); i++)
2501 {
2502 const char *pszAdd = s_aFlags[i].fFlag & efl ? s_aFlags[i].pszSet : s_aFlags[i].pszClear;
2503 if (pszAdd)
2504 {
2505 strcpy(psz, pszAdd);
2506 psz += strlen(pszAdd);
2507 *psz++ = ' ';
2508 }
2509 }
2510 psz[-1] = '\0';
2511}
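/* Example (per the table above): efl=0x246, i.e. IF, ZF and PF set, yields
   "nv up ei pl zr na po nc"; flags with a NULL clear-mnemonic are simply
   omitted when clear. */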
2512
2513
2514/**
2515 * Formats a full register dump.
2516 *
2517 * @param pVM The cross context VM structure.
2518 * @param pCtx The context to format.
2519 * @param pCtxCore The context core to format.
2520 * @param pHlp Output functions.
2521 * @param enmType The dump type.
2522 * @param pszPrefix Register name prefix.
2523 */
2524static void cpumR3InfoOne(PVM pVM, PCPUMCTX pCtx, PCCPUMCTXCORE pCtxCore, PCDBGFINFOHLP pHlp, CPUMDUMPTYPE enmType,
2525 const char *pszPrefix)
2526{
2527 NOREF(pVM);
2528
2529 /*
2530 * Format the EFLAGS.
2531 */
2532 uint32_t efl = pCtxCore->eflags.u32;
2533 char szEFlags[80];
2534 cpumR3InfoFormatFlags(&szEFlags[0], efl);
2535
2536 /*
2537 * Format the registers.
2538 */
2539 switch (enmType)
2540 {
2541 case CPUMDUMPTYPE_TERSE:
2542 if (CPUMIsGuestIn64BitCodeEx(pCtx))
2543 pHlp->pfnPrintf(pHlp,
2544 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
2545 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
2546 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
2547 "%sr14=%016RX64 %sr15=%016RX64\n"
2548 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
2549 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %seflags=%08x\n",
2550 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
2551 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
2552 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
2553 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
2554 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
2555 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, efl);
2556 else
2557 pHlp->pfnPrintf(pHlp,
2558 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
2559 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
2560 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %seflags=%08x\n",
2561 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
2562 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
2563 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
2564 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, efl);
2565 break;
2566
2567 case CPUMDUMPTYPE_DEFAULT:
2568 if (CPUMIsGuestIn64BitCodeEx(pCtx))
2569 pHlp->pfnPrintf(pHlp,
2570 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
2571 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
2572 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
2573 "%sr14=%016RX64 %sr15=%016RX64\n"
2574 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
2575 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %str=%04x %seflags=%08x\n"
2576 "%scr0=%08RX64 %scr2=%08RX64 %scr3=%08RX64 %scr4=%08RX64 %sgdtr=%016RX64:%04x %sldtr=%04x\n"
2577 ,
2578 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
2579 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
2580 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
2581 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
2582 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
2583 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, pCtx->tr.Sel, pszPrefix, efl,
2584 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
2585 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->ldtr.Sel);
2586 else
2587 pHlp->pfnPrintf(pHlp,
2588 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
2589 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
2590 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %str=%04x %seflags=%08x\n"
2591 "%scr0=%08RX64 %scr2=%08RX64 %scr3=%08RX64 %scr4=%08RX64 %sgdtr=%08RX64:%04x %sldtr=%04x\n"
2592 ,
2593 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
2594 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
2595 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
2596 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, pCtx->tr.Sel, pszPrefix, efl,
2597 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
2598 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->ldtr.Sel);
2599 break;
2600
2601 case CPUMDUMPTYPE_VERBOSE:
2602 if (CPUMIsGuestIn64BitCodeEx(pCtx))
2603 pHlp->pfnPrintf(pHlp,
2604 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
2605 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
2606 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
2607 "%sr14=%016RX64 %sr15=%016RX64\n"
2608 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
2609 "%scs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
2610 "%sds={%04x base=%016RX64 limit=%08x flags=%08x}\n"
2611 "%ses={%04x base=%016RX64 limit=%08x flags=%08x}\n"
2612 "%sfs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
2613 "%sgs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
2614 "%sss={%04x base=%016RX64 limit=%08x flags=%08x}\n"
2615 "%scr0=%016RX64 %scr2=%016RX64 %scr3=%016RX64 %scr4=%016RX64\n"
2616 "%sdr0=%016RX64 %sdr1=%016RX64 %sdr2=%016RX64 %sdr3=%016RX64\n"
2617 "%sdr4=%016RX64 %sdr5=%016RX64 %sdr6=%016RX64 %sdr7=%016RX64\n"
2618 "%sgdtr=%016RX64:%04x %sidtr=%016RX64:%04x %seflags=%08x\n"
2619 "%sldtr={%04x base=%08RX64 limit=%08x flags=%08x}\n"
2620 "%str ={%04x base=%08RX64 limit=%08x flags=%08x}\n"
2621 "%sSysEnter={cs=%04llx eip=%016RX64 esp=%016RX64}\n"
2622 ,
2623 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
2624 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
2625 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
2626 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
2627 pszPrefix, pCtxCore->cs.Sel, pCtx->cs.u64Base, pCtx->cs.u32Limit, pCtx->cs.Attr.u,
2628 pszPrefix, pCtxCore->ds.Sel, pCtx->ds.u64Base, pCtx->ds.u32Limit, pCtx->ds.Attr.u,
2629 pszPrefix, pCtxCore->es.Sel, pCtx->es.u64Base, pCtx->es.u32Limit, pCtx->es.Attr.u,
2630 pszPrefix, pCtxCore->fs.Sel, pCtx->fs.u64Base, pCtx->fs.u32Limit, pCtx->fs.Attr.u,
2631 pszPrefix, pCtxCore->gs.Sel, pCtx->gs.u64Base, pCtx->gs.u32Limit, pCtx->gs.Attr.u,
2632 pszPrefix, pCtxCore->ss.Sel, pCtx->ss.u64Base, pCtx->ss.u32Limit, pCtx->ss.Attr.u,
2633 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
2634 pszPrefix, pCtx->dr[0], pszPrefix, pCtx->dr[1], pszPrefix, pCtx->dr[2], pszPrefix, pCtx->dr[3],
2635 pszPrefix, pCtx->dr[4], pszPrefix, pCtx->dr[5], pszPrefix, pCtx->dr[6], pszPrefix, pCtx->dr[7],
2636 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->idtr.pIdt, pCtx->idtr.cbIdt, pszPrefix, efl,
2637 pszPrefix, pCtx->ldtr.Sel, pCtx->ldtr.u64Base, pCtx->ldtr.u32Limit, pCtx->ldtr.Attr.u,
2638 pszPrefix, pCtx->tr.Sel, pCtx->tr.u64Base, pCtx->tr.u32Limit, pCtx->tr.Attr.u,
2639 pszPrefix, pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
2640 else
2641 pHlp->pfnPrintf(pHlp,
2642 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
2643 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
2644 "%scs={%04x base=%016RX64 limit=%08x flags=%08x} %sdr0=%08RX64 %sdr1=%08RX64\n"
2645 "%sds={%04x base=%016RX64 limit=%08x flags=%08x} %sdr2=%08RX64 %sdr3=%08RX64\n"
2646 "%ses={%04x base=%016RX64 limit=%08x flags=%08x} %sdr4=%08RX64 %sdr5=%08RX64\n"
2647 "%sfs={%04x base=%016RX64 limit=%08x flags=%08x} %sdr6=%08RX64 %sdr7=%08RX64\n"
2648 "%sgs={%04x base=%016RX64 limit=%08x flags=%08x} %scr0=%08RX64 %scr2=%08RX64\n"
2649 "%sss={%04x base=%016RX64 limit=%08x flags=%08x} %scr3=%08RX64 %scr4=%08RX64\n"
2650 "%sgdtr=%016RX64:%04x %sidtr=%016RX64:%04x %seflags=%08x\n"
2651 "%sldtr={%04x base=%08RX64 limit=%08x flags=%08x}\n"
2652 "%str ={%04x base=%08RX64 limit=%08x flags=%08x}\n"
2653 "%sSysEnter={cs=%04llx eip=%08llx esp=%08llx}\n"
2654 ,
2655 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
2656 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
2657 pszPrefix, pCtxCore->cs.Sel, pCtx->cs.u64Base, pCtx->cs.u32Limit, pCtx->cs.Attr.u, pszPrefix, pCtx->dr[0], pszPrefix, pCtx->dr[1],
2658 pszPrefix, pCtxCore->ds.Sel, pCtx->ds.u64Base, pCtx->ds.u32Limit, pCtx->ds.Attr.u, pszPrefix, pCtx->dr[2], pszPrefix, pCtx->dr[3],
2659 pszPrefix, pCtxCore->es.Sel, pCtx->es.u64Base, pCtx->es.u32Limit, pCtx->es.Attr.u, pszPrefix, pCtx->dr[4], pszPrefix, pCtx->dr[5],
2660 pszPrefix, pCtxCore->fs.Sel, pCtx->fs.u64Base, pCtx->fs.u32Limit, pCtx->fs.Attr.u, pszPrefix, pCtx->dr[6], pszPrefix, pCtx->dr[7],
2661 pszPrefix, pCtxCore->gs.Sel, pCtx->gs.u64Base, pCtx->gs.u32Limit, pCtx->gs.Attr.u, pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2,
2662 pszPrefix, pCtxCore->ss.Sel, pCtx->ss.u64Base, pCtx->ss.u32Limit, pCtx->ss.Attr.u, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
2663 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->idtr.pIdt, pCtx->idtr.cbIdt, pszPrefix, efl,
2664 pszPrefix, pCtx->ldtr.Sel, pCtx->ldtr.u64Base, pCtx->ldtr.u32Limit, pCtx->ldtr.Attr.u,
2665 pszPrefix, pCtx->tr.Sel, pCtx->tr.u64Base, pCtx->tr.u32Limit, pCtx->tr.Attr.u,
2666 pszPrefix, pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
2667
2668 pHlp->pfnPrintf(pHlp, "%sxcr0=%016RX64 %sxcr1=%016RX64 %sxss=%016RX64 (fXStateMask=%016RX64)\n",
2669 pszPrefix, pCtx->aXcr[0], pszPrefix, pCtx->aXcr[1],
2670 pszPrefix, UINT64_C(0) /** @todo XSS */, pCtx->fXStateMask);
2671 if (pCtx->CTX_SUFF(pXState))
2672 {
2673 PX86FXSTATE pFpuCtx = &pCtx->CTX_SUFF(pXState)->x87;
2674 pHlp->pfnPrintf(pHlp,
2675 "%sFCW=%04x %sFSW=%04x %sFTW=%04x %sFOP=%04x %sMXCSR=%08x %sMXCSR_MASK=%08x\n"
2676 "%sFPUIP=%08x %sCS=%04x %sRsrvd1=%04x %sFPUDP=%08x %sDS=%04x %sRsvrd2=%04x\n"
2677 ,
2678 pszPrefix, pFpuCtx->FCW, pszPrefix, pFpuCtx->FSW, pszPrefix, pFpuCtx->FTW, pszPrefix, pFpuCtx->FOP,
2679 pszPrefix, pFpuCtx->MXCSR, pszPrefix, pFpuCtx->MXCSR_MASK,
2680 pszPrefix, pFpuCtx->FPUIP, pszPrefix, pFpuCtx->CS, pszPrefix, pFpuCtx->Rsrvd1,
2681 pszPrefix, pFpuCtx->FPUDP, pszPrefix, pFpuCtx->DS, pszPrefix, pFpuCtx->Rsrvd2
2682 );
2683 /*
2684 * The FSAVE style memory image contains ST(0)-ST(7) at increasing addresses,
2685 * not (FP)R0-7 as the Intel SDM suggests.
2686 */
2687 unsigned iShift = (pFpuCtx->FSW >> 11) & 7;
2688 for (unsigned iST = 0; iST < RT_ELEMENTS(pFpuCtx->aRegs); iST++)
2689 {
2690 unsigned iFPR = (iST + iShift) % RT_ELEMENTS(pFpuCtx->aRegs);
2691 unsigned uTag = (pFpuCtx->FTW >> (2 * iFPR)) & 3;
2692 char chSign = pFpuCtx->aRegs[iST].au16[4] & 0x8000 ? '-' : '+';
2693 unsigned iInteger = (unsigned)(pFpuCtx->aRegs[iST].au64[0] >> 63);
2694 uint64_t u64Fraction = pFpuCtx->aRegs[iST].au64[0] & UINT64_C(0x7fffffffffffffff);
2695 int iExponent = pFpuCtx->aRegs[iST].au16[4] & 0x7fff;
2696 iExponent -= 16383; /* subtract bias */
2697 /** @todo This isn't entirely correct and needs more work! */
2698 pHlp->pfnPrintf(pHlp,
2699 "%sST(%u)=%sFPR%u={%04RX16'%08RX32'%08RX32} t%d %c%u.%022llu * 2 ^ %d (*)",
2700 pszPrefix, iST, pszPrefix, iFPR,
2701 pFpuCtx->aRegs[iST].au16[4], pFpuCtx->aRegs[iST].au32[1], pFpuCtx->aRegs[iST].au32[0],
2702 uTag, chSign, iInteger, u64Fraction, iExponent);
2703 if (pFpuCtx->aRegs[iST].au16[5] || pFpuCtx->aRegs[iST].au16[6] || pFpuCtx->aRegs[iST].au16[7])
2704 pHlp->pfnPrintf(pHlp, " res={%04RX16,%04RX16,%04RX16}\n",
2705 pFpuCtx->aRegs[iST].au16[5], pFpuCtx->aRegs[iST].au16[6], pFpuCtx->aRegs[iST].au16[7]);
2706 else
2707 pHlp->pfnPrintf(pHlp, "\n");
2708 }
2709
2710 /* XMM/YMM/ZMM registers. */
2711 if (pCtx->fXStateMask & XSAVE_C_YMM)
2712 {
2713 PCX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_YMM_BIT, PCX86XSAVEYMMHI);
2714 if (!(pCtx->fXStateMask & XSAVE_C_ZMM_HI256))
2715 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
2716 pHlp->pfnPrintf(pHlp, "%sYMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
2717 pszPrefix, i, i < 10 ? " " : "",
2718 pYmmHiCtx->aYmmHi[i].au32[3],
2719 pYmmHiCtx->aYmmHi[i].au32[2],
2720 pYmmHiCtx->aYmmHi[i].au32[1],
2721 pYmmHiCtx->aYmmHi[i].au32[0],
2722 pFpuCtx->aXMM[i].au32[3],
2723 pFpuCtx->aXMM[i].au32[2],
2724 pFpuCtx->aXMM[i].au32[1],
2725 pFpuCtx->aXMM[i].au32[0]);
2726 else
2727 {
2728 PCX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_ZMM_HI256_BIT, PCX86XSAVEZMMHI256);
2729 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
2730 pHlp->pfnPrintf(pHlp,
2731 "%sZMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32''%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
2732 pszPrefix, i, i < 10 ? " " : "",
2733 pZmmHi256->aHi256Regs[i].au32[7],
2734 pZmmHi256->aHi256Regs[i].au32[6],
2735 pZmmHi256->aHi256Regs[i].au32[5],
2736 pZmmHi256->aHi256Regs[i].au32[4],
2737 pZmmHi256->aHi256Regs[i].au32[3],
2738 pZmmHi256->aHi256Regs[i].au32[2],
2739 pZmmHi256->aHi256Regs[i].au32[1],
2740 pZmmHi256->aHi256Regs[i].au32[0],
2741 pYmmHiCtx->aYmmHi[i].au32[3],
2742 pYmmHiCtx->aYmmHi[i].au32[2],
2743 pYmmHiCtx->aYmmHi[i].au32[1],
2744 pYmmHiCtx->aYmmHi[i].au32[0],
2745 pFpuCtx->aXMM[i].au32[3],
2746 pFpuCtx->aXMM[i].au32[2],
2747 pFpuCtx->aXMM[i].au32[1],
2748 pFpuCtx->aXMM[i].au32[0]);
2749
2750 PCX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_ZMM_16HI_BIT, PCX86XSAVEZMM16HI);
2751 for (unsigned i = 0; i < RT_ELEMENTS(pZmm16Hi->aRegs); i++)
2752 pHlp->pfnPrintf(pHlp,
2753 "%sZMM%u=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32''%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
2754 pszPrefix, i + 16,
2755 pZmm16Hi->aRegs[i].au32[15],
2756 pZmm16Hi->aRegs[i].au32[14],
2757 pZmm16Hi->aRegs[i].au32[13],
2758 pZmm16Hi->aRegs[i].au32[12],
2759 pZmm16Hi->aRegs[i].au32[11],
2760 pZmm16Hi->aRegs[i].au32[10],
2761 pZmm16Hi->aRegs[i].au32[9],
2762 pZmm16Hi->aRegs[i].au32[8],
2763 pZmm16Hi->aRegs[i].au32[7],
2764 pZmm16Hi->aRegs[i].au32[6],
2765 pZmm16Hi->aRegs[i].au32[5],
2766 pZmm16Hi->aRegs[i].au32[4],
2767 pZmm16Hi->aRegs[i].au32[3],
2768 pZmm16Hi->aRegs[i].au32[2],
2769 pZmm16Hi->aRegs[i].au32[1],
2770 pZmm16Hi->aRegs[i].au32[0]);
2771 }
2772 }
2773 else
2774 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
2775 pHlp->pfnPrintf(pHlp,
2776 i & 1
2777 ? "%sXMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32\n"
2778 : "%sXMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32 ",
2779 pszPrefix, i, i < 10 ? " " : "",
2780 pFpuCtx->aXMM[i].au32[3],
2781 pFpuCtx->aXMM[i].au32[2],
2782 pFpuCtx->aXMM[i].au32[1],
2783 pFpuCtx->aXMM[i].au32[0]);
2784
2785 if (pCtx->fXStateMask & XSAVE_C_OPMASK)
2786 {
2787 PCX86XSAVEOPMASK pOpMask = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_OPMASK_BIT, PCX86XSAVEOPMASK);
2788 for (unsigned i = 0; i < RT_ELEMENTS(pOpMask->aKRegs); i += 4)
2789 pHlp->pfnPrintf(pHlp, "%sK%u=%016RX64 %sK%u=%016RX64 %sK%u=%016RX64 %sK%u=%016RX64\n",
2790 pszPrefix, i + 0, pOpMask->aKRegs[i + 0],
2791 pszPrefix, i + 1, pOpMask->aKRegs[i + 1],
2792 pszPrefix, i + 2, pOpMask->aKRegs[i + 2],
2793 pszPrefix, i + 3, pOpMask->aKRegs[i + 3]);
2794 }
2795
2796 if (pCtx->fXStateMask & XSAVE_C_BNDREGS)
2797 {
2798 PCX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_BNDREGS_BIT, PCX86XSAVEBNDREGS);
2799 for (unsigned i = 0; i < RT_ELEMENTS(pBndRegs->aRegs); i += 2)
2800 pHlp->pfnPrintf(pHlp, "%sBNDREG%u=%016RX64/%016RX64 %sBNDREG%u=%016RX64/%016RX64\n",
2801 pszPrefix, i, pBndRegs->aRegs[i].uLowerBound, pBndRegs->aRegs[i].uUpperBound,
2802 pszPrefix, i + 1, pBndRegs->aRegs[i + 1].uLowerBound, pBndRegs->aRegs[i + 1].uUpperBound);
2803 }
2804
2805 if (pCtx->fXStateMask & XSAVE_C_BNDCSR)
2806 {
2807 PCX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_BNDCSR_BIT, PCX86XSAVEBNDCFG);
2808 pHlp->pfnPrintf(pHlp, "%sBNDCFG.CONFIG=%016RX64 %sBNDCFG.STATUS=%016RX64\n",
2809 pszPrefix, pBndCfg->fConfig, pszPrefix, pBndCfg->fStatus);
2810 }
2811
2812 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->au32RsrvdRest); i++)
2813 if (pFpuCtx->au32RsrvdRest[i])
2814 pHlp->pfnPrintf(pHlp, "%sRsrvdRest[%u]=%RX32 (offset=%#x)\n",
2815 pszPrefix, i, pFpuCtx->au32RsrvdRest[i], RT_UOFFSETOF_DYN(X86FXSTATE, au32RsrvdRest[i]) );
2816 }
2817
2818 pHlp->pfnPrintf(pHlp,
2819 "%sEFER =%016RX64\n"
2820 "%sPAT =%016RX64\n"
2821 "%sSTAR =%016RX64\n"
2822 "%sCSTAR =%016RX64\n"
2823 "%sLSTAR =%016RX64\n"
2824 "%sSFMASK =%016RX64\n"
2825 "%sKERNELGSBASE =%016RX64\n",
2826 pszPrefix, pCtx->msrEFER,
2827 pszPrefix, pCtx->msrPAT,
2828 pszPrefix, pCtx->msrSTAR,
2829 pszPrefix, pCtx->msrCSTAR,
2830 pszPrefix, pCtx->msrLSTAR,
2831 pszPrefix, pCtx->msrSFMASK,
2832 pszPrefix, pCtx->msrKERNELGSBASE);
2833 break;
2834 }
2835}
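
The ST(i)-to-FPRn remapping performed by the x87 dump loop above is driven by the TOP field (bits 11-13) of the FPU status word. A minimal standalone sketch of that mapping, interpreting FTW the same way the loop does (illustrative helpers, not part of CPUM.cpp):

    #include <stdint.h>

    /* Map the logical x87 stack register ST(iSt) to its physical register
       index. TOP (FSW bits 11-13) names the physical register currently at
       the top of the stack; ST(i) then walks around modulo 8 from there. */
    static unsigned x87StToPhysReg(uint16_t fsw, unsigned iSt)
    {
        unsigned iTop = (fsw >> 11) & 7;
        return (iSt + iTop) % 8;
    }

    /* The 2-bit tag for that physical register as the dump reads it:
       0=valid, 1=zero, 2=special, 3=empty. */
    static unsigned x87TagForPhysReg(uint16_t ftw, unsigned iReg)
    {
        return (ftw >> (2 * iReg)) & 3;
    }
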
2836
2837
2838/**
2839 * Display all cpu states and any other cpum info.
2840 *
2841 * @param pVM The cross context VM structure.
2842 * @param pHlp The info helper functions.
2843 * @param pszArgs Arguments, ignored.
2844 */
2845static DECLCALLBACK(void) cpumR3InfoAll(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2846{
2847 cpumR3InfoGuest(pVM, pHlp, pszArgs);
2848 cpumR3InfoGuestInstr(pVM, pHlp, pszArgs);
2849 cpumR3InfoGuestHwvirt(pVM, pHlp, pszArgs);
2850 cpumR3InfoHyper(pVM, pHlp, pszArgs);
2851 cpumR3InfoHost(pVM, pHlp, pszArgs);
2852}
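
The AVX-512 portion of the dump in cpumR3InfoOne stitches each full ZMM register together from three XSAVE components: the legacy FXSAVE XMM area (bits 127:0), the YMM_HI area (bits 255:128) and the ZMM_HI256 area (bits 511:256); ZMM16-ZMM31 come whole from the ZMM_16HI area. A small sketch of that composition (hypothetical helper, not part of this file):

    #include <stdint.h>
    #include <string.h>

    typedef struct { uint32_t au32[16]; } EXAMPLEZMM; /* 512 bits */

    /* Concatenate the three XSAVE pieces of one of ZMM0..ZMM15. */
    static void exampleComposeZmm(EXAMPLEZMM *pZmm, const uint32_t pau32Xmm[4],
                                  const uint32_t pau32YmmHi[4], const uint32_t pau32ZmmHi256[8])
    {
        memcpy(&pZmm->au32[0], pau32Xmm,      4 * sizeof(uint32_t)); /* bits 127:0   */
        memcpy(&pZmm->au32[4], pau32YmmHi,    4 * sizeof(uint32_t)); /* bits 255:128 */
        memcpy(&pZmm->au32[8], pau32ZmmHi256, 8 * sizeof(uint32_t)); /* bits 511:256 */
    }
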
2853
2854
2855/**
2856 * Parses the info argument.
2857 *
2858 * The argument starts with 'verbose', 'terse' or 'default' and then
2859 * continues with the comment string.
2860 *
2861 * @param pszArgs The pointer to the argument string.
2862 * @param penmType Where to store the dump type request.
2863 * @param ppszComment Where to store the pointer to the comment string.
2864 */
2865static void cpumR3InfoParseArg(const char *pszArgs, CPUMDUMPTYPE *penmType, const char **ppszComment)
2866{
2867 if (!pszArgs)
2868 {
2869 *penmType = CPUMDUMPTYPE_DEFAULT;
2870 *ppszComment = "";
2871 }
2872 else
2873 {
2874 if (!strncmp(pszArgs, RT_STR_TUPLE("verbose")))
2875 {
2876 pszArgs += 7;
2877 *penmType = CPUMDUMPTYPE_VERBOSE;
2878 }
2879 else if (!strncmp(pszArgs, RT_STR_TUPLE("terse")))
2880 {
2881 pszArgs += 5;
2882 *penmType = CPUMDUMPTYPE_TERSE;
2883 }
2884 else if (!strncmp(pszArgs, RT_STR_TUPLE("default")))
2885 {
2886 pszArgs += 7;
2887 *penmType = CPUMDUMPTYPE_DEFAULT;
2888 }
2889 else
2890 *penmType = CPUMDUMPTYPE_DEFAULT;
2891 *ppszComment = RTStrStripL(pszArgs);
2892 }
2893}
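
For reference, the parser yields results like these (hypothetical calls, not in the source):

    CPUMDUMPTYPE enmType;
    const char  *pszComment;

    cpumR3InfoParseArg("verbose after single-step", &enmType, &pszComment);
    /* enmType == CPUMDUMPTYPE_VERBOSE, pszComment == "after single-step" */

    cpumR3InfoParseArg(NULL, &enmType, &pszComment);
    /* enmType == CPUMDUMPTYPE_DEFAULT, pszComment == "" */
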
2894
2895
2896/**
2897 * Display the guest cpu state.
2898 *
2899 * @param pVM The cross context VM structure.
2900 * @param pHlp The info helper functions.
2901 * @param pszArgs Arguments.
2902 */
2903static DECLCALLBACK(void) cpumR3InfoGuest(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2904{
2905 CPUMDUMPTYPE enmType;
2906 const char *pszComment;
2907 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
2908
2909 PVMCPU pVCpu = VMMGetCpu(pVM);
2910 if (!pVCpu)
2911 pVCpu = &pVM->aCpus[0];
2912
2913 pHlp->pfnPrintf(pHlp, "Guest CPUM (VCPU %d) state: %s\n", pVCpu->idCpu, pszComment);
2914
2915 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
2916 cpumR3InfoOne(pVM, pCtx, CPUMCTX2CORE(pCtx), pHlp, enmType, "");
2917}
2918
2919
2920/**
2921 * Displays an SVM VMCB control area.
2922 *
2923 * @param pHlp The info helper functions.
2924 * @param pVmcbCtrl Pointer to an SVM VMCB control area.
2925 * @param pszPrefix Caller specified string prefix.
2926 */
2927static void cpumR3InfoSvmVmcbCtrl(PCDBGFINFOHLP pHlp, PCSVMVMCBCTRL pVmcbCtrl, const char *pszPrefix)
2928{
2929 AssertReturnVoid(pHlp);
2930 AssertReturnVoid(pVmcbCtrl);
2931
2932 pHlp->pfnPrintf(pHlp, "%su16InterceptRdCRx = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptRdCRx);
2933 pHlp->pfnPrintf(pHlp, "%su16InterceptWrCRx = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptWrCRx);
2934 pHlp->pfnPrintf(pHlp, "%su16InterceptRdDRx = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptRdDRx);
2935 pHlp->pfnPrintf(pHlp, "%su16InterceptWrDRx = %#RX16\n", pszPrefix, pVmcbCtrl->u16InterceptWrDRx);
2936 pHlp->pfnPrintf(pHlp, "%su32InterceptXcpt = %#RX32\n", pszPrefix, pVmcbCtrl->u32InterceptXcpt);
2937 pHlp->pfnPrintf(pHlp, "%su64InterceptCtrl = %#RX64\n", pszPrefix, pVmcbCtrl->u64InterceptCtrl);
2938 pHlp->pfnPrintf(pHlp, "%su16PauseFilterThreshold = %#RX16\n", pszPrefix, pVmcbCtrl->u16PauseFilterThreshold);
2939 pHlp->pfnPrintf(pHlp, "%su16PauseFilterCount = %#RX16\n", pszPrefix, pVmcbCtrl->u16PauseFilterCount);
2940 pHlp->pfnPrintf(pHlp, "%su64IOPMPhysAddr = %#RX64\n", pszPrefix, pVmcbCtrl->u64IOPMPhysAddr);
2941 pHlp->pfnPrintf(pHlp, "%su64MSRPMPhysAddr = %#RX64\n", pszPrefix, pVmcbCtrl->u64MSRPMPhysAddr);
2942 pHlp->pfnPrintf(pHlp, "%su64TSCOffset = %#RX64\n", pszPrefix, pVmcbCtrl->u64TSCOffset);
2943 pHlp->pfnPrintf(pHlp, "%sTLBCtrl\n", pszPrefix);
2944 pHlp->pfnPrintf(pHlp, "%s u32ASID = %#RX32\n", pszPrefix, pVmcbCtrl->TLBCtrl.n.u32ASID);
2945 pHlp->pfnPrintf(pHlp, "%s u8TLBFlush = %u\n", pszPrefix, pVmcbCtrl->TLBCtrl.n.u8TLBFlush);
2946 pHlp->pfnPrintf(pHlp, "%sIntCtrl\n", pszPrefix);
2947 pHlp->pfnPrintf(pHlp, "%s u8VTPR = %#RX8 (%u)\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u8VTPR, pVmcbCtrl->IntCtrl.n.u8VTPR);
2948 pHlp->pfnPrintf(pHlp, "%s u1VIrqPending = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VIrqPending);
2949 pHlp->pfnPrintf(pHlp, "%s u1VGif = %u\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VGif);
2950 pHlp->pfnPrintf(pHlp, "%s u4VIntrPrio = %#RX8\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u4VIntrPrio);
2951 pHlp->pfnPrintf(pHlp, "%s u1IgnoreTPR = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1IgnoreTPR);
2952 pHlp->pfnPrintf(pHlp, "%s u1VIntrMasking = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VIntrMasking);
2953 pHlp->pfnPrintf(pHlp, "%s u1VGifEnable = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1VGifEnable);
2954 pHlp->pfnPrintf(pHlp, "%s u1AvicEnable = %RTbool\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u1AvicEnable);
2955 pHlp->pfnPrintf(pHlp, "%s u8VIntrVector = %#RX8\n", pszPrefix, pVmcbCtrl->IntCtrl.n.u8VIntrVector);
2956 pHlp->pfnPrintf(pHlp, "%sIntShadow\n", pszPrefix);
2957 pHlp->pfnPrintf(pHlp, "%s u1IntShadow = %RTbool\n", pszPrefix, pVmcbCtrl->IntShadow.n.u1IntShadow);
2958 pHlp->pfnPrintf(pHlp, "%s u1GuestIntMask = %RTbool\n", pszPrefix, pVmcbCtrl->IntShadow.n.u1GuestIntMask);
2959 pHlp->pfnPrintf(pHlp, "%su64ExitCode = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitCode);
2960 pHlp->pfnPrintf(pHlp, "%su64ExitInfo1 = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitInfo1);
2961 pHlp->pfnPrintf(pHlp, "%su64ExitInfo2 = %#RX64\n", pszPrefix, pVmcbCtrl->u64ExitInfo2);
2962 pHlp->pfnPrintf(pHlp, "%sExitIntInfo\n", pszPrefix);
2963 pHlp->pfnPrintf(pHlp, "%s u8Vector = %#RX8 (%u)\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u8Vector, pVmcbCtrl->ExitIntInfo.n.u8Vector);
2964 pHlp->pfnPrintf(pHlp, "%s u3Type = %u\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u3Type);
2965 pHlp->pfnPrintf(pHlp, "%s u1ErrorCodeValid = %RTbool\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u1ErrorCodeValid);
2966 pHlp->pfnPrintf(pHlp, "%s u1Valid = %RTbool\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u1Valid);
2967 pHlp->pfnPrintf(pHlp, "%s u32ErrorCode = %#RX32\n", pszPrefix, pVmcbCtrl->ExitIntInfo.n.u32ErrorCode);
2968 pHlp->pfnPrintf(pHlp, "%sNestedPaging and SEV\n", pszPrefix);
2969 pHlp->pfnPrintf(pHlp, "%s u1NestedPaging = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1NestedPaging);
2970 pHlp->pfnPrintf(pHlp, "%s u1Sev = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1Sev);
2971 pHlp->pfnPrintf(pHlp, "%s u1SevEs = %RTbool\n", pszPrefix, pVmcbCtrl->NestedPagingCtrl.n.u1SevEs);
2972 pHlp->pfnPrintf(pHlp, "%sAvicBar\n", pszPrefix);
2973 pHlp->pfnPrintf(pHlp, "%s u40Addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicBar.n.u40Addr);
2974 pHlp->pfnPrintf(pHlp, "%sEventInject\n", pszPrefix);
2976 pHlp->pfnPrintf(pHlp, "%s u8Vector = %#RX32 (%u)\n", pszPrefix, pVmcbCtrl->EventInject.n.u8Vector, pVmcbCtrl->EventInject.n.u8Vector);
2977 pHlp->pfnPrintf(pHlp, "%s u3Type = %u\n", pszPrefix, pVmcbCtrl->EventInject.n.u3Type);
2978 pHlp->pfnPrintf(pHlp, "%s u1ErrorCodeValid = %RTbool\n", pszPrefix, pVmcbCtrl->EventInject.n.u1ErrorCodeValid);
2979 pHlp->pfnPrintf(pHlp, "%s u1Valid = %RTbool\n", pszPrefix, pVmcbCtrl->EventInject.n.u1Valid);
2980 pHlp->pfnPrintf(pHlp, "%s u32ErrorCode = %#RX32\n", pszPrefix, pVmcbCtrl->EventInject.n.u32ErrorCode);
2981 pHlp->pfnPrintf(pHlp, "%su64NestedPagingCR3 = %#RX64\n", pszPrefix, pVmcbCtrl->u64NestedPagingCR3);
2982 pHlp->pfnPrintf(pHlp, "%sLBR virtualization\n", pszPrefix);
2983 pHlp->pfnPrintf(pHlp, "%s u1LbrVirt = %RTbool\n", pszPrefix, pVmcbCtrl->LbrVirt.n.u1LbrVirt);
2984 pHlp->pfnPrintf(pHlp, "%s u1VirtVmsaveVmload = %RTbool\n", pszPrefix, pVmcbCtrl->LbrVirt.n.u1VirtVmsaveVmload);
2985 pHlp->pfnPrintf(pHlp, "%su32VmcbCleanBits = %#RX32\n", pszPrefix, pVmcbCtrl->u32VmcbCleanBits);
2986 pHlp->pfnPrintf(pHlp, "%su64NextRIP = %#RX64\n", pszPrefix, pVmcbCtrl->u64NextRIP);
2987 pHlp->pfnPrintf(pHlp, "%scbInstrFetched = %u\n", pszPrefix, pVmcbCtrl->cbInstrFetched);
2988 pHlp->pfnPrintf(pHlp, "%sabInstr = %.*Rhxs\n", pszPrefix, sizeof(pVmcbCtrl->abInstr), pVmcbCtrl->abInstr);
2989 pHlp->pfnPrintf(pHlp, "%sAvicBackingPagePtr\n", pszPrefix);
2990 pHlp->pfnPrintf(pHlp, "%s u40Addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicBackingPagePtr.n.u40Addr);
2991 pHlp->pfnPrintf(pHlp, "%sAvicLogicalTablePtr\n", pszPrefix);
2992 pHlp->pfnPrintf(pHlp, "%s u40Addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicLogicalTablePtr.n.u40Addr);
2993 pHlp->pfnPrintf(pHlp, "%sAvicPhysicalTablePtr\n", pszPrefix);
2994 pHlp->pfnPrintf(pHlp, "%s u8LastGuestCoreId = %u\n", pszPrefix, pVmcbCtrl->AvicPhysicalTablePtr.n.u8LastGuestCoreId);
2995 pHlp->pfnPrintf(pHlp, "%s u40Addr = %#RX64\n", pszPrefix, pVmcbCtrl->AvicPhysicalTablePtr.n.u40Addr);
2996}
2997
2998
2999/**
3000 * Helper for dumping the SVM VMCB selector registers.
3001 *
3002 * @param pHlp The info helper functions.
3003 * @param pSel Pointer to the SVM selector register.
3004 * @param pszName Name of the selector.
3005 * @param pszPrefix Caller specified string prefix.
3006 */
3007DECLINLINE(void) cpumR3InfoSvmVmcbSelReg(PCDBGFINFOHLP pHlp, PCSVMSELREG pSel, const char *pszName, const char *pszPrefix)
3008{
3009 /* The string width of 4 used below is to handle 'LDTR'. Change later if longer register names are used. */
3010 pHlp->pfnPrintf(pHlp, "%s%-4s = {%04x base=%016RX64 limit=%08x flags=%04x}\n", pszPrefix,
3011 pszName, pSel->u16Sel, pSel->u64Base, pSel->u32Limit, pSel->u16Attr);
3012}
3013
3014
3015/**
3016 * Helper for dumping the SVM VMCB GDTR/IDTR registers.
3017 *
3018 * @param pHlp The info helper functions.
3019 * @param pXdtr Pointer to the descriptor table register.
3020 * @param pszName Name of the descriptor table register.
3021 * @param pszPrefix Caller specified string prefix.
3022 */
3023DECLINLINE(void) cpumR3InfoSvmVmcbXdtr(PCDBGFINFOHLP pHlp, PCSVMXDTR pXdtr, const char *pszName, const char *pszPrefix)
3024{
3025 /* The string width of 4 used below is to cover 'GDTR', 'IDTR'. Change later if longer register names are used. */
3026 pHlp->pfnPrintf(pHlp, "%s%-4s = %016RX64:%04x\n", pszPrefix, pszName, pXdtr->u64Base, pXdtr->u32Limit);
3027}
3028
3029
3030/**
3031 * Displays an SVM VMCB state-save area.
3032 *
3033 * @param pHlp The info helper functions.
3034 * @param pVmcbStateSave Pointer to an SVM VMCB state-save area.
3035 * @param pszPrefix Caller specified string prefix.
3036 */
3037static void cpumR3InfoSvmVmcbStateSave(PCDBGFINFOHLP pHlp, PCSVMVMCBSTATESAVE pVmcbStateSave, const char *pszPrefix)
3038{
3039 AssertReturnVoid(pHlp);
3040 AssertReturnVoid(pVmcbStateSave);
3041
3042 char szEFlags[80];
3043 cpumR3InfoFormatFlags(&szEFlags[0], pVmcbStateSave->u64RFlags);
3044
3045 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->CS, "CS", pszPrefix);
3046 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->SS, "SS", pszPrefix);
3047 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->ES, "ES", pszPrefix);
3048 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->DS, "DS", pszPrefix);
3049 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->FS, "FS", pszPrefix);
3050 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->GS, "GS", pszPrefix);
3051 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->LDTR, "LDTR", pszPrefix);
3052 cpumR3InfoSvmVmcbSelReg(pHlp, &pVmcbStateSave->TR, "TR", pszPrefix);
3053 cpumR3InfoSvmVmcbXdtr(pHlp, &pVmcbStateSave->GDTR, "GDTR", pszPrefix);
3054 cpumR3InfoSvmVmcbXdtr(pHlp, &pVmcbStateSave->IDTR, "IDTR", pszPrefix);
3055 pHlp->pfnPrintf(pHlp, "%su8CPL = %u\n", pszPrefix, pVmcbStateSave->u8CPL);
3056 pHlp->pfnPrintf(pHlp, "%su64EFER = %#RX64\n", pszPrefix, pVmcbStateSave->u64EFER);
3057 pHlp->pfnPrintf(pHlp, "%su64CR4 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR4);
3058 pHlp->pfnPrintf(pHlp, "%su64CR3 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR3);
3059 pHlp->pfnPrintf(pHlp, "%su64CR0 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR0);
3060 pHlp->pfnPrintf(pHlp, "%su64DR7 = %#RX64\n", pszPrefix, pVmcbStateSave->u64DR7);
3061 pHlp->pfnPrintf(pHlp, "%su64DR6 = %#RX64\n", pszPrefix, pVmcbStateSave->u64DR6);
3062 pHlp->pfnPrintf(pHlp, "%su64RFlags = %#RX64 %31s\n", pszPrefix, pVmcbStateSave->u64RFlags, szEFlags);
3063 pHlp->pfnPrintf(pHlp, "%su64RIP = %#RX64\n", pszPrefix, pVmcbStateSave->u64RIP);
3064 pHlp->pfnPrintf(pHlp, "%su64RSP = %#RX64\n", pszPrefix, pVmcbStateSave->u64RSP);
3065 pHlp->pfnPrintf(pHlp, "%su64RAX = %#RX64\n", pszPrefix, pVmcbStateSave->u64RAX);
3066 pHlp->pfnPrintf(pHlp, "%su64STAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64STAR);
3067 pHlp->pfnPrintf(pHlp, "%su64LSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64LSTAR);
3068 pHlp->pfnPrintf(pHlp, "%su64CSTAR = %#RX64\n", pszPrefix, pVmcbStateSave->u64CSTAR);
3069 pHlp->pfnPrintf(pHlp, "%su64SFMASK = %#RX64\n", pszPrefix, pVmcbStateSave->u64SFMASK);
3070 pHlp->pfnPrintf(pHlp, "%su64KernelGSBase = %#RX64\n", pszPrefix, pVmcbStateSave->u64KernelGSBase);
3071 pHlp->pfnPrintf(pHlp, "%su64SysEnterCS = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterCS);
3072 pHlp->pfnPrintf(pHlp, "%su64SysEnterEIP = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterEIP);
3073 pHlp->pfnPrintf(pHlp, "%su64SysEnterESP = %#RX64\n", pszPrefix, pVmcbStateSave->u64SysEnterESP);
3074 pHlp->pfnPrintf(pHlp, "%su64CR2 = %#RX64\n", pszPrefix, pVmcbStateSave->u64CR2);
3075 pHlp->pfnPrintf(pHlp, "%su64PAT = %#RX64\n", pszPrefix, pVmcbStateSave->u64PAT);
3076 pHlp->pfnPrintf(pHlp, "%su64DBGCTL = %#RX64\n", pszPrefix, pVmcbStateSave->u64DBGCTL);
3077 pHlp->pfnPrintf(pHlp, "%su64BR_FROM = %#RX64\n", pszPrefix, pVmcbStateSave->u64BR_FROM);
3078 pHlp->pfnPrintf(pHlp, "%su64BR_TO = %#RX64\n", pszPrefix, pVmcbStateSave->u64BR_TO);
3079 pHlp->pfnPrintf(pHlp, "%su64LASTEXCPFROM = %#RX64\n", pszPrefix, pVmcbStateSave->u64LASTEXCPFROM);
3080 pHlp->pfnPrintf(pHlp, "%su64LASTEXCPTO = %#RX64\n", pszPrefix, pVmcbStateSave->u64LASTEXCPTO);
3081}
3082
3083
3084/**
3085 * Display the guest's hardware-virtualization cpu state.
3086 *
3087 * @param pVM The cross context VM structure.
3088 * @param pHlp The info helper functions.
3089 * @param pszArgs Arguments, ignored.
3090 */
3091static DECLCALLBACK(void) cpumR3InfoGuestHwvirt(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3092{
3093 RT_NOREF(pszArgs);
3094
3095 PVMCPU pVCpu = VMMGetCpu(pVM);
3096 if (!pVCpu)
3097 pVCpu = &pVM->aCpus[0];
3098
3099 /*
3100 * Figure out what to dump.
3101 *
3102 * In the future we may need to dump everything, whether or not we're actively in nested-guest
3103 * mode; hence we use a mask to determine what needs dumping. Currently, we only
3104 * dump hwvirt. state when the guest CPU is executing a nested-guest.
3105 */
3106 /** @todo perhaps make this configurable through pszArgs, depending on how much
3107 * noise we wish to accept when nested hwvirt. isn't used. */
3108#define CPUMHWVIRTDUMP_NONE (0)
3109#define CPUMHWVIRTDUMP_SVM RT_BIT(0)
3110#define CPUMHWVIRTDUMP_VMX RT_BIT(1)
3111#define CPUMHWVIRTDUMP_COMMON RT_BIT(2)
3112#define CPUMHWVIRTDUMP_LAST CPUMHWVIRTDUMP_VMX
3113
3114 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
3115 static const char *const s_aHwvirtModes[] = { "No/inactive", "SVM", "VMX", "Common" };
3116 bool const fSvm = pVM->cpum.ro.GuestFeatures.fSvm;
3117 bool const fVmx = pVM->cpum.ro.GuestFeatures.fVmx;
3118 uint8_t const idxHwvirtState = fSvm ? CPUMHWVIRTDUMP_SVM : (fVmx ? CPUMHWVIRTDUMP_VMX : CPUMHWVIRTDUMP_NONE);
3119 AssertCompile(CPUMHWVIRTDUMP_LAST <= RT_ELEMENTS(s_aHwvirtModes));
3120 Assert(idxHwvirtState < RT_ELEMENTS(s_aHwvirtModes));
3121 const char *pcszHwvirtMode = s_aHwvirtModes[idxHwvirtState];
3122 uint32_t fDumpState = idxHwvirtState | CPUMHWVIRTDUMP_COMMON;
3123
3124 /*
3125 * Dump it.
3126 */
3127 pHlp->pfnPrintf(pHlp, "VCPU[%u] hardware virtualization state:\n", pVCpu->idCpu);
3128
3129 if (fDumpState & CPUMHWVIRTDUMP_COMMON)
3130 pHlp->pfnPrintf(pHlp, "fLocalForcedActions = %#RX32\n", pCtx->hwvirt.fLocalForcedActions);
3131
3132 pHlp->pfnPrintf(pHlp, "%s hwvirt state%s\n", pcszHwvirtMode, (fDumpState & (CPUMHWVIRTDUMP_SVM | CPUMHWVIRTDUMP_VMX)) ?
3133 ":" : "");
3134 if (fDumpState & CPUMHWVIRTDUMP_SVM)
3135 {
3136 pHlp->pfnPrintf(pHlp, " fGif = %RTbool\n", pCtx->hwvirt.fGif);
3137
3138 char szEFlags[80];
3139 cpumR3InfoFormatFlags(&szEFlags[0], pCtx->hwvirt.svm.HostState.rflags.u);
3140 pHlp->pfnPrintf(pHlp, " uMsrHSavePa = %#RX64\n", pCtx->hwvirt.svm.uMsrHSavePa);
3141 pHlp->pfnPrintf(pHlp, " GCPhysVmcb = %#RGp\n", pCtx->hwvirt.svm.GCPhysVmcb);
3142 pHlp->pfnPrintf(pHlp, " VmcbCtrl:\n");
3143 cpumR3InfoSvmVmcbCtrl(pHlp, &pCtx->hwvirt.svm.pVmcbR3->ctrl, " " /* pszPrefix */);
3144 pHlp->pfnPrintf(pHlp, " VmcbStateSave:\n");
3145 cpumR3InfoSvmVmcbStateSave(pHlp, &pCtx->hwvirt.svm.pVmcbR3->guest, " " /* pszPrefix */);
3146 pHlp->pfnPrintf(pHlp, " HostState:\n");
3147 pHlp->pfnPrintf(pHlp, " uEferMsr = %#RX64\n", pCtx->hwvirt.svm.HostState.uEferMsr);
3148 pHlp->pfnPrintf(pHlp, " uCr0 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr0);
3149 pHlp->pfnPrintf(pHlp, " uCr4 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr4);
3150 pHlp->pfnPrintf(pHlp, " uCr3 = %#RX64\n", pCtx->hwvirt.svm.HostState.uCr3);
3151 pHlp->pfnPrintf(pHlp, " uRip = %#RX64\n", pCtx->hwvirt.svm.HostState.uRip);
3152 pHlp->pfnPrintf(pHlp, " uRsp = %#RX64\n", pCtx->hwvirt.svm.HostState.uRsp);
3153 pHlp->pfnPrintf(pHlp, " uRax = %#RX64\n", pCtx->hwvirt.svm.HostState.uRax);
3154 pHlp->pfnPrintf(pHlp, " rflags = %#RX64 %31s\n", pCtx->hwvirt.svm.HostState.rflags.u64, szEFlags);
3155 PCPUMSELREG pSel = &pCtx->hwvirt.svm.HostState.es;
3156 pHlp->pfnPrintf(pHlp, " es = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3157 pSel->Sel, pSel->u64Base, pSel->u32Limit, pSel->Attr.u);
3158 pSel = &pCtx->hwvirt.svm.HostState.cs;
3159 pHlp->pfnPrintf(pHlp, " cs = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3160 pSel->Sel, pSel->u64Base, pSel->u32Limit, pSel->Attr.u);
3161 pSel = &pCtx->hwvirt.svm.HostState.ss;
3162 pHlp->pfnPrintf(pHlp, " ss = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3163 pSel->Sel, pSel->u64Base, pSel->u32Limit, pSel->Attr.u);
3164 pSel = &pCtx->hwvirt.svm.HostState.ds;
3165 pHlp->pfnPrintf(pHlp, " ds = {%04x base=%016RX64 limit=%08x flags=%08x}\n",
3166 pSel->Sel, pSel->u64Base, pSel->u32Limit, pSel->Attr.u);
3167 pHlp->pfnPrintf(pHlp, " gdtr = %016RX64:%04x\n", pCtx->hwvirt.svm.HostState.gdtr.pGdt,
3168 pCtx->hwvirt.svm.HostState.gdtr.cbGdt);
3169 pHlp->pfnPrintf(pHlp, " idtr = %016RX64:%04x\n", pCtx->hwvirt.svm.HostState.idtr.pIdt,
3170 pCtx->hwvirt.svm.HostState.idtr.cbIdt);
3171 pHlp->pfnPrintf(pHlp, " cPauseFilter = %RU16\n", pCtx->hwvirt.svm.cPauseFilter);
3172 pHlp->pfnPrintf(pHlp, " cPauseFilterThreshold = %RU32\n", pCtx->hwvirt.svm.cPauseFilterThreshold);
3173 pHlp->pfnPrintf(pHlp, " fInterceptEvents = %u\n", pCtx->hwvirt.svm.fInterceptEvents);
3174 pHlp->pfnPrintf(pHlp, " pvMsrBitmapR3 = %p\n", pCtx->hwvirt.svm.pvMsrBitmapR3);
3175 pHlp->pfnPrintf(pHlp, " pvMsrBitmapR0 = %RKv\n", pCtx->hwvirt.svm.pvMsrBitmapR0);
3176 pHlp->pfnPrintf(pHlp, " pvIoBitmapR3 = %p\n", pCtx->hwvirt.svm.pvIoBitmapR3);
3177 pHlp->pfnPrintf(pHlp, " pvIoBitmapR0 = %RKv\n", pCtx->hwvirt.svm.pvIoBitmapR0);
3178 }
3179
3180 if (fDumpState & CPUMHWVIRTDUMP_VMX)
3181 {
3182 pHlp->pfnPrintf(pHlp, " GCPhysVmxon = %#RGp\n", pCtx->hwvirt.vmx.GCPhysVmxon);
3183 pHlp->pfnPrintf(pHlp, " GCPhysVmcs = %#RGp\n", pCtx->hwvirt.vmx.GCPhysVmcs);
3184 pHlp->pfnPrintf(pHlp, " GCPhysShadowVmcs = %#RGp\n", pCtx->hwvirt.vmx.GCPhysShadowVmcs);
3185 pHlp->pfnPrintf(pHlp, " enmDiag = %u (%s)\n", pCtx->hwvirt.vmx.enmDiag, HMVmxGetDiagDesc(pCtx->hwvirt.vmx.enmDiag));
3186 pHlp->pfnPrintf(pHlp, " enmAbort = %u (%s)\n", pCtx->hwvirt.vmx.enmAbort, HMVmxGetAbortDesc(pCtx->hwvirt.vmx.enmAbort));
3187 pHlp->pfnPrintf(pHlp, " uAbortAux = %u (%#x)\n", pCtx->hwvirt.vmx.uAbortAux, pCtx->hwvirt.vmx.uAbortAux);
3188 pHlp->pfnPrintf(pHlp, " fInVmxRootMode = %RTbool\n", pCtx->hwvirt.vmx.fInVmxRootMode);
3189 pHlp->pfnPrintf(pHlp, " fInVmxNonRootMode = %RTbool\n", pCtx->hwvirt.vmx.fInVmxNonRootMode);
3190 pHlp->pfnPrintf(pHlp, " fInterceptEvents = %RTbool\n", pCtx->hwvirt.vmx.fInterceptEvents);
3191 pHlp->pfnPrintf(pHlp, " uFirstPauseLoopTick = %RX64\n", pCtx->hwvirt.vmx.uFirstPauseLoopTick);
3192 pHlp->pfnPrintf(pHlp, " uPrevPauseTick = %RX64\n", pCtx->hwvirt.vmx.uPrevPauseTick);
3193
3194 /** @todo NSTVMX: Dump remaining/new fields. */
3195 }
3196
3197#undef CPUMHWVIRTDUMP_NONE
3198#undef CPUMHWVIRTDUMP_COMMON
3199#undef CPUMHWVIRTDUMP_SVM
3200#undef CPUMHWVIRTDUMP_VMX
3201#undef CPUMHWVIRTDUMP_LAST
3203}
3204
3205/**
3206 * Display the current guest instruction
3207 *
3208 * @param pVM The cross context VM structure.
3209 * @param pHlp The info helper functions.
3210 * @param pszArgs Arguments, ignored.
3211 */
3212static DECLCALLBACK(void) cpumR3InfoGuestInstr(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3213{
3214 NOREF(pszArgs);
3215
3216 PVMCPU pVCpu = VMMGetCpu(pVM);
3217 if (!pVCpu)
3218 pVCpu = &pVM->aCpus[0];
3219
3220 char szInstruction[256];
3221 szInstruction[0] = '\0';
3222 DBGFR3DisasInstrCurrent(pVCpu, szInstruction, sizeof(szInstruction));
3223 pHlp->pfnPrintf(pHlp, "\nCPUM%u: %s\n\n", pVCpu->idCpu, szInstruction);
3224}
3225
3226
3227/**
3228 * Display the hypervisor cpu state.
3229 *
3230 * @param pVM The cross context VM structure.
3231 * @param pHlp The info helper functions.
3232 * @param pszArgs Arguments, ignored.
3233 */
3234static DECLCALLBACK(void) cpumR3InfoHyper(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3235{
3236 PVMCPU pVCpu = VMMGetCpu(pVM);
3237 if (!pVCpu)
3238 pVCpu = &pVM->aCpus[0];
3239
3240 CPUMDUMPTYPE enmType;
3241 const char *pszComment;
3242 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
3243 pHlp->pfnPrintf(pHlp, "Hypervisor CPUM state: %s\n", pszComment);
3244 cpumR3InfoOne(pVM, &pVCpu->cpum.s.Hyper, CPUMCTX2CORE(&pVCpu->cpum.s.Hyper), pHlp, enmType, ".");
3245 pHlp->pfnPrintf(pHlp, "CR4OrMask=%#x CR4AndMask=%#x\n", pVM->cpum.s.CR4.OrMask, pVM->cpum.s.CR4.AndMask);
3246}
3247
3248
3249/**
3250 * Display the host cpu state.
3251 *
3252 * @param pVM The cross context VM structure.
3253 * @param pHlp The info helper functions.
3254 * @param pszArgs Arguments, ignored.
3255 */
3256static DECLCALLBACK(void) cpumR3InfoHost(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3257{
3258 CPUMDUMPTYPE enmType;
3259 const char *pszComment;
3260 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
3261 pHlp->pfnPrintf(pHlp, "Host CPUM state: %s\n", pszComment);
3262
3263 PVMCPU pVCpu = VMMGetCpu(pVM);
3264 if (!pVCpu)
3265 pVCpu = &pVM->aCpus[0];
3266 PCPUMHOSTCTX pCtx = &pVCpu->cpum.s.Host;
3267
3268 /*
3269 * Format the EFLAGS.
3270 */
3271#if HC_ARCH_BITS == 32
3272 uint32_t efl = pCtx->eflags.u32;
3273#else
3274 uint64_t efl = pCtx->rflags;
3275#endif
3276 char szEFlags[80];
3277 cpumR3InfoFormatFlags(&szEFlags[0], efl);
3278
3279 /*
3280 * Format the registers.
3281 */
3282#if HC_ARCH_BITS == 32
3283 pHlp->pfnPrintf(pHlp,
3284 "eax=xxxxxxxx ebx=%08x ecx=xxxxxxxx edx=xxxxxxxx esi=%08x edi=%08x\n"
3285 "eip=xxxxxxxx esp=%08x ebp=%08x iopl=%d %31s\n"
3286 "cs=%04x ds=%04x es=%04x fs=%04x gs=%04x eflags=%08x\n"
3287 "cr0=%08RX64 cr2=xxxxxxxx cr3=%08RX64 cr4=%08RX64 gdtr=%08x:%04x ldtr=%04x\n"
3288 "dr[0]=%08RX64 dr[1]=%08RX64x dr[2]=%08RX64 dr[3]=%08RX64x dr[6]=%08RX64 dr[7]=%08RX64\n"
3289 "SysEnter={cs=%04x eip=%08x esp=%08x}\n"
3290 ,
3291 /*pCtx->eax,*/ pCtx->ebx, /*pCtx->ecx, pCtx->edx,*/ pCtx->esi, pCtx->edi,
3292 /*pCtx->eip,*/ pCtx->esp, pCtx->ebp, X86_EFL_GET_IOPL(efl), szEFlags,
3293 pCtx->cs, pCtx->ds, pCtx->es, pCtx->fs, pCtx->gs, efl,
3294 pCtx->cr0, /*pCtx->cr2,*/ pCtx->cr3, pCtx->cr4,
3295 pCtx->dr0, pCtx->dr1, pCtx->dr2, pCtx->dr3, pCtx->dr6, pCtx->dr7,
3296 (uint32_t)pCtx->gdtr.uAddr, pCtx->gdtr.cb, pCtx->ldtr,
3297 pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
3298#else
3299 pHlp->pfnPrintf(pHlp,
3300 "rax=xxxxxxxxxxxxxxxx rbx=%016RX64 rcx=xxxxxxxxxxxxxxxx\n"
3301 "rdx=xxxxxxxxxxxxxxxx rsi=%016RX64 rdi=%016RX64\n"
3302 "rip=xxxxxxxxxxxxxxxx rsp=%016RX64 rbp=%016RX64\n"
3303 " r8=xxxxxxxxxxxxxxxx r9=xxxxxxxxxxxxxxxx r10=%016RX64\n"
3304 "r11=%016RX64 r12=%016RX64 r13=%016RX64\n"
3305 "r14=%016RX64 r15=%016RX64\n"
3306 "iopl=%d %31s\n"
3307 "cs=%04x ds=%04x es=%04x fs=%04x gs=%04x eflags=%08RX64\n"
3308 "cr0=%016RX64 cr2=xxxxxxxxxxxxxxxx cr3=%016RX64\n"
3309 "cr4=%016RX64 ldtr=%04x tr=%04x\n"
3310 "dr[0]=%016RX64 dr[1]=%016RX64 dr[2]=%016RX64\n"
3311 "dr[3]=%016RX64 dr[6]=%016RX64 dr[7]=%016RX64\n"
3312 "gdtr=%016RX64:%04x idtr=%016RX64:%04x\n"
3313 "SysEnter={cs=%04x eip=%08x esp=%08x}\n"
3314 "FSbase=%016RX64 GSbase=%016RX64 efer=%08RX64\n"
3315 ,
3316 /*pCtx->rax,*/ pCtx->rbx, /*pCtx->rcx,
3317 pCtx->rdx,*/ pCtx->rsi, pCtx->rdi,
3318 /*pCtx->rip,*/ pCtx->rsp, pCtx->rbp,
3319 /*pCtx->r8, pCtx->r9,*/ pCtx->r10,
3320 pCtx->r11, pCtx->r12, pCtx->r13,
3321 pCtx->r14, pCtx->r15,
3322 X86_EFL_GET_IOPL(efl), szEFlags,
3323 pCtx->cs, pCtx->ds, pCtx->es, pCtx->fs, pCtx->gs, efl,
3324 pCtx->cr0, /*pCtx->cr2,*/ pCtx->cr3,
3325 pCtx->cr4, pCtx->ldtr, pCtx->tr,
3326 pCtx->dr0, pCtx->dr1, pCtx->dr2,
3327 pCtx->dr3, pCtx->dr6, pCtx->dr7,
3328 pCtx->gdtr.uAddr, pCtx->gdtr.cb, pCtx->idtr.uAddr, pCtx->idtr.cb,
3329 pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp,
3330 pCtx->FSbase, pCtx->GSbase, pCtx->efer);
3331#endif
3332}
3333
3334/**
3335 * Structure used when disassembling instructions in DBGF.
3336 * This is used so the reader function can get the stuff it needs.
3337 */
3338typedef struct CPUMDISASSTATE
3339{
3340 /** Pointer to the CPU structure. */
3341 PDISCPUSTATE pCpu;
3342 /** Pointer to the VM. */
3343 PVM pVM;
3344 /** Pointer to the VMCPU. */
3345 PVMCPU pVCpu;
3346 /** Pointer to the first byte in the segment. */
3347 RTGCUINTPTR GCPtrSegBase;
3348 /** Pointer to the byte after the end of the segment. (might have wrapped!) */
3349 RTGCUINTPTR GCPtrSegEnd;
3350 /** The size of the segment minus 1. */
3351 RTGCUINTPTR cbSegLimit;
3352 /** Pointer to the current page - R3 Ptr. */
3353 void const *pvPageR3;
3354 /** Pointer to the current page - GC Ptr. */
3355 RTGCPTR pvPageGC;
3356 /** The lock information that PGMPhysReleasePageMappingLock needs. */
3357 PGMPAGEMAPLOCK PageMapLock;
3358 /** Whether the PageMapLock is valid or not. */
3359 bool fLocked;
3360 /** 64-bit mode or not. */
3361 bool f64Bits;
3362} CPUMDISASSTATE, *PCPUMDISASSTATE;
3363
3364
3365/**
3366 * @callback_method_impl{FNDISREADBYTES}
3367 */
3368static DECLCALLBACK(int) cpumR3DisasInstrRead(PDISCPUSTATE pDis, uint8_t offInstr, uint8_t cbMinRead, uint8_t cbMaxRead)
3369{
3370 PCPUMDISASSTATE pState = (PCPUMDISASSTATE)pDis->pvUser;
3371 for (;;)
3372 {
3373 RTGCUINTPTR GCPtr = pDis->uInstrAddr + offInstr + pState->GCPtrSegBase;
3374
3375 /*
3376 * Need to update the page translation?
3377 */
3378 if ( !pState->pvPageR3
3379 || (GCPtr >> PAGE_SHIFT) != (pState->pvPageGC >> PAGE_SHIFT))
3380 {
3381 int rc = VINF_SUCCESS;
3382
3383 /* translate the address */
3384 pState->pvPageGC = GCPtr & PAGE_BASE_GC_MASK;
3385 if ( VM_IS_RAW_MODE_ENABLED(pState->pVM)
3386 && MMHyperIsInsideArea(pState->pVM, pState->pvPageGC))
3387 {
3388 pState->pvPageR3 = MMHyperRCToR3(pState->pVM, (RTRCPTR)pState->pvPageGC);
3389 if (!pState->pvPageR3)
3390 rc = VERR_INVALID_POINTER;
3391 }
3392 else
3393 {
3394 /* Release mapping lock previously acquired. */
3395 if (pState->fLocked)
3396 PGMPhysReleasePageMappingLock(pState->pVM, &pState->PageMapLock);
3397 rc = PGMPhysGCPtr2CCPtrReadOnly(pState->pVCpu, pState->pvPageGC, &pState->pvPageR3, &pState->PageMapLock);
3398 pState->fLocked = RT_SUCCESS_NP(rc);
3399 }
3400 if (RT_FAILURE(rc))
3401 {
3402 pState->pvPageR3 = NULL;
3403 return rc;
3404 }
3405 }
3406
3407 /*
3408 * Check the segment limit.
3409 */
3410 if (!pState->f64Bits && pDis->uInstrAddr + offInstr > pState->cbSegLimit)
3411 return VERR_OUT_OF_SELECTOR_BOUNDS;
3412
3413 /*
3414 * Calc how much we can read.
3415 */
3416 uint32_t cb = PAGE_SIZE - (GCPtr & PAGE_OFFSET_MASK);
3417 if (!pState->f64Bits)
3418 {
3419 RTGCUINTPTR cbSeg = pState->GCPtrSegEnd - GCPtr;
3420 if (cb > cbSeg && cbSeg)
3421 cb = cbSeg;
3422 }
3423 if (cb > cbMaxRead)
3424 cb = cbMaxRead;
3425
3426 /*
3427 * Read and advance or exit.
3428 */
3429 memcpy(&pDis->abInstr[offInstr], (uint8_t *)pState->pvPageR3 + (GCPtr & PAGE_OFFSET_MASK), cb);
3430 offInstr += (uint8_t)cb;
3431 if (cb >= cbMinRead)
3432 {
3433 pDis->cbCachedInstr = offInstr;
3434 return VINF_SUCCESS;
3435 }
3436 cbMinRead -= (uint8_t)cb;
3437 cbMaxRead -= (uint8_t)cb;
3438 }
3439}
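
The callback above implements the FNDISREADBYTES contract: copy at least cbMinRead and at most cbMaxRead bytes into pDis->abInstr starting at offInstr, record how much is cached in pDis->cbCachedInstr, and return a status code. A minimal flat-buffer reader showing the same contract (a sketch assuming the instruction bytes sit in a plain host buffer passed via pvUser):

    static DECLCALLBACK(int) exampleFlatReader(PDISCPUSTATE pDis, uint8_t offInstr,
                                               uint8_t cbMinRead, uint8_t cbMaxRead)
    {
        uint8_t const *pbBuf = (uint8_t const *)pDis->pvUser; /* buffer "mapped" at address 0 */
        /* Satisfy the maximum request in one go; a real reader may return less
           (but never less than cbMinRead) at page or segment boundaries. */
        memcpy(&pDis->abInstr[offInstr], &pbBuf[pDis->uInstrAddr + offInstr], cbMaxRead);
        pDis->cbCachedInstr = offInstr + cbMaxRead;
        RT_NOREF(cbMinRead);
        return VINF_SUCCESS;
    }
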
3440
3441
3442/**
3443 * Disassemble an instruction and return the information in the provided structure.
3444 *
3445 * @returns VBox status code.
3446 * @param pVM The cross context VM structure.
3447 * @param pVCpu The cross context virtual CPU structure.
3448 * @param pCtx Pointer to the guest CPU context.
3449 * @param GCPtrPC Program counter (relative to CS) to disassemble from.
3450 * @param pCpu Disassembly state.
3451 * @param pszPrefix String prefix for logging (debug only).
3452 *
3453 */
3454VMMR3DECL(int) CPUMR3DisasmInstrCPU(PVM pVM, PVMCPU pVCpu, PCPUMCTX pCtx, RTGCPTR GCPtrPC, PDISCPUSTATE pCpu,
3455 const char *pszPrefix)
3456{
3457 CPUMDISASSTATE State;
3458 int rc;
3459
3460 const PGMMODE enmMode = PGMGetGuestMode(pVCpu);
3461 State.pCpu = pCpu;
3462 State.pvPageGC = 0;
3463 State.pvPageR3 = NULL;
3464 State.pVM = pVM;
3465 State.pVCpu = pVCpu;
3466 State.fLocked = false;
3467 State.f64Bits = false;
3468
3469 /*
3470 * Get selector information.
3471 */
3472 DISCPUMODE enmDisCpuMode;
3473 if ( (pCtx->cr0 & X86_CR0_PE)
3474 && pCtx->eflags.Bits.u1VM == 0)
3475 {
3476 if (!CPUMSELREG_ARE_HIDDEN_PARTS_VALID(pVCpu, &pCtx->cs))
3477 {
3478# ifdef VBOX_WITH_RAW_MODE_NOT_R0
3479 CPUMGuestLazyLoadHiddenSelectorReg(pVCpu, &pCtx->cs);
3480# endif
3481 if (!CPUMSELREG_ARE_HIDDEN_PARTS_VALID(pVCpu, &pCtx->cs))
3482 return VERR_CPUM_HIDDEN_CS_LOAD_ERROR;
3483 }
3484 State.f64Bits = enmMode >= PGMMODE_AMD64 && pCtx->cs.Attr.n.u1Long;
3485 State.GCPtrSegBase = pCtx->cs.u64Base;
3486 State.GCPtrSegEnd = pCtx->cs.u32Limit + 1 + (RTGCUINTPTR)pCtx->cs.u64Base;
3487 State.cbSegLimit = pCtx->cs.u32Limit;
3488 enmDisCpuMode = (State.f64Bits)
3489 ? DISCPUMODE_64BIT
3490 : pCtx->cs.Attr.n.u1DefBig
3491 ? DISCPUMODE_32BIT
3492 : DISCPUMODE_16BIT;
3493 }
3494 else
3495 {
3496 /* real or V86 mode */
3497 enmDisCpuMode = DISCPUMODE_16BIT;
3498 State.GCPtrSegBase = pCtx->cs.Sel * 16;
3499 State.GCPtrSegEnd = 0xFFFFFFFF;
3500 State.cbSegLimit = 0xFFFFFFFF;
3501 }
3502
3503 /*
3504 * Disassemble the instruction.
3505 */
3506 uint32_t cbInstr;
3507#ifndef LOG_ENABLED
3508 RT_NOREF_PV(pszPrefix);
3509 rc = DISInstrWithReader(GCPtrPC, enmDisCpuMode, cpumR3DisasInstrRead, &State, pCpu, &cbInstr);
3510 if (RT_SUCCESS(rc))
3511 {
3512#else
3513 char szOutput[160];
3514 rc = DISInstrToStrWithReader(GCPtrPC, enmDisCpuMode, cpumR3DisasInstrRead, &State,
3515 pCpu, &cbInstr, szOutput, sizeof(szOutput));
3516 if (RT_SUCCESS(rc))
3517 {
3518 /* log it */
3519 if (pszPrefix)
3520 Log(("%s-CPU%d: %s", pszPrefix, pVCpu->idCpu, szOutput));
3521 else
3522 Log(("%s", szOutput));
3523#endif
3524 rc = VINF_SUCCESS;
3525 }
3526 else
3527 Log(("CPUMR3DisasmInstrCPU: DISInstr failed for %04X:%RGv rc=%Rrc\n", pCtx->cs.Sel, GCPtrPC, rc));
3528
3529 /* Release mapping lock acquired in cpumR3DisasInstrRead. */
3530 if (State.fLocked)
3531 PGMPhysReleasePageMappingLock(pVM, &State.PageMapLock);
3532
3533 return rc;
3534}
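
A typical caller then looks roughly like this (hypothetical, for illustration; GCPtrPC is the CS-relative program counter per the doc comment above):

    DISCPUSTATE Cpu;
    int rc = CPUMR3DisasmInstrCPU(pVM, pVCpu, pCtx, pCtx->rip, &Cpu, "EXAMPLE");
    if (RT_SUCCESS(rc))
        Log(("Decoded %u byte(s) at %RGv\n", Cpu.cbInstr, (RTGCPTR)pCtx->rip));
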
3535
3536
3537
3538/**
3539 * API for controlling a few of the CPU features found in CR4.
3540 *
3541 * Currently only X86_CR4_TSD is accepted as input.
3542 *
3543 * @returns VBox status code.
3544 *
3545 * @param pVM The cross context VM structure.
3546 * @param fOr The CR4 OR mask.
3547 * @param fAnd The CR4 AND mask.
3548 */
3549VMMR3DECL(int) CPUMR3SetCR4Feature(PVM pVM, RTHCUINTREG fOr, RTHCUINTREG fAnd)
3550{
3551 AssertMsgReturn(!(fOr & ~(X86_CR4_TSD)), ("%#x\n", fOr), VERR_INVALID_PARAMETER);
3552 AssertMsgReturn((fAnd & ~(X86_CR4_TSD)) == ~(X86_CR4_TSD), ("%#x\n", fAnd), VERR_INVALID_PARAMETER);
3553
3554 pVM->cpum.s.CR4.OrMask &= fAnd;
3555 pVM->cpum.s.CR4.OrMask |= fOr;
3556
3557 return VINF_SUCCESS;
3558}
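
Since the masks are applied as OrMask = (OrMask & fAnd) | fOr, setting and clearing CR4.TSD look like this (illustrative calls):

    /* Force CR4.TSD on for the guest: */
    CPUMR3SetCR4Feature(pVM, X86_CR4_TSD, ~(RTHCUINTREG)0);

    /* And clear it again: */
    CPUMR3SetCR4Feature(pVM, 0, ~(RTHCUINTREG)X86_CR4_TSD);
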
3559
3560
3561/**
3562 * Enters REM, gets and resets the changed flags (CPUM_CHANGED_*).
3563 *
3564 * Only REM should ever call this function!
3565 *
3566 * @returns The changed flags.
3567 * @param pVCpu The cross context virtual CPU structure.
3568 * @param puCpl Where to return the current privilege level (CPL).
3569 */
3570VMMR3DECL(uint32_t) CPUMR3RemEnter(PVMCPU pVCpu, uint32_t *puCpl)
3571{
3572 Assert(!pVCpu->cpum.s.fRawEntered);
3573 Assert(!pVCpu->cpum.s.fRemEntered);
3574
3575 /*
3576 * Get the CPL first.
3577 */
3578 *puCpl = CPUMGetGuestCPL(pVCpu);
3579
3580 /*
3581 * Get and reset the flags.
3582 */
3583 uint32_t fFlags = pVCpu->cpum.s.fChanged;
3584 pVCpu->cpum.s.fChanged = 0;
3585
3586 /** @todo change the switcher to use the fChanged flags. */
3587 if (pVCpu->cpum.s.fUseFlags & CPUM_USED_FPU_SINCE_REM)
3588 {
3589 fFlags |= CPUM_CHANGED_FPU_REM;
3590 pVCpu->cpum.s.fUseFlags &= ~CPUM_USED_FPU_SINCE_REM;
3591 }
3592
3593 pVCpu->cpum.s.fRemEntered = true;
3594 return fFlags;
3595}
3596
3597
3598/**
3599 * Leaves REM.
3600 *
3601 * @param pVCpu The cross context virtual CPU structure.
3602 * @param fNoOutOfSyncSels This is @c false if there are out of sync
3603 * registers.
3604 */
3605VMMR3DECL(void) CPUMR3RemLeave(PVMCPU pVCpu, bool fNoOutOfSyncSels)
3606{
3607 Assert(!pVCpu->cpum.s.fRawEntered);
3608 Assert(pVCpu->cpum.s.fRemEntered);
3609
3610 RT_NOREF_PV(fNoOutOfSyncSels);
3611
3612 pVCpu->cpum.s.fRemEntered = false;
3613}
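
The two functions bracket a recompiled-execution episode; the expected pairing is roughly as follows (a sketch, not taken from the REM code itself):

    uint32_t uCpl;
    uint32_t fChanged = CPUMR3RemEnter(pVCpu, &uCpl);
    /* ... hand fChanged and uCpl to the recompiler and run it ... */
    CPUMR3RemLeave(pVCpu, true /* fNoOutOfSyncSels */);
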
3614
3615
3616/**
3617 * Called when the ring-3 init phase completes.
3618 *
3619 * @returns VBox status code.
3620 * @param pVM The cross context VM structure.
3621 * @param enmWhat Which init phase.
3622 */
3623VMMR3DECL(int) CPUMR3InitCompleted(PVM pVM, VMINITCOMPLETED enmWhat)
3624{
3625 switch (enmWhat)
3626 {
3627 case VMINITCOMPLETED_RING3:
3628 {
3629 /*
3630 * Figure out if the guest uses 32-bit or 64-bit FPU state at runtime for 64-bit capable VMs.
3631 * Only applicable/used on 64-bit hosts, refer CPUMR0A.asm. See @bugref{7138}.
3632 */
3633 bool const fSupportsLongMode = VMR3IsLongModeAllowed(pVM);
3634 for (VMCPUID i = 0; i < pVM->cCpus; i++)
3635 {
3636 PVMCPU pVCpu = &pVM->aCpus[i];
3637 /* While loading a saved-state we fix it up in cpumR3LoadDone(). */
3638 if (fSupportsLongMode)
3639 pVCpu->cpum.s.fUseFlags |= CPUM_USE_SUPPORTS_LONGMODE;
3640 }
3641
3642 /* Register statistic counters for MSRs. */
3643 cpumR3MsrRegStats(pVM);
3644 break;
3645 }
3646
3647 case VMINITCOMPLETED_HM:
3648 {
3649 /*
3650 * Currently, nested VMX/SVM both derive their guest VMX/SVM CPUID bit from the host
3651 * CPUID bit. This could later be changed if we need to support nested VMX on CPUs
3652 * that are not capable of VMX.
3653 */
3654 if (pVM->cpum.s.GuestFeatures.fVmx)
3655 {
3656 Assert( pVM->cpum.s.GuestFeatures.enmCpuVendor == CPUMCPUVENDOR_INTEL
3657 || pVM->cpum.s.GuestFeatures.enmCpuVendor == CPUMCPUVENDOR_VIA);
3658 cpumR3InitVmxCpuFeatures(pVM);
3659 DBGFR3Info(pVM->pUVM, "cpumvmxfeat", "default", DBGFR3InfoLogRelHlp());
3660 }
3661
3662 if (pVM->cpum.s.GuestFeatures.fVmx)
3663 LogRel(("CPUM: Enabled guest VMX support\n"));
3664 else if (pVM->cpum.s.GuestFeatures.fSvm)
3665 LogRel(("CPUM: Enabled guest SVM support\n"));
3666 break;
3667 }
3668
3669 default:
3670 break;
3671 }
3672 return VINF_SUCCESS;
3673}
3674
3675
3676/**
3677 * Called when the ring-0 init phases completed.
3678 *
3679 * @param pVM The cross context VM structure.
3680 */
3681VMMR3DECL(void) CPUMR3LogCpuIds(PVM pVM)
3682{
3683 /*
3684 * Log the cpuid.
3685 */
3686 bool fOldBuffered = RTLogRelSetBuffering(true /*fBuffered*/);
3687 RTCPUSET OnlineSet;
3688 LogRel(("CPUM: Logical host processors: %u present, %u max, %u online, online mask: %016RX64\n",
3689 (unsigned)RTMpGetPresentCount(), (unsigned)RTMpGetCount(), (unsigned)RTMpGetOnlineCount(),
3690 RTCpuSetToU64(RTMpGetOnlineSet(&OnlineSet)) ));
3691 RTCPUID cCores = RTMpGetCoreCount();
3692 if (cCores)
3693 LogRel(("CPUM: Physical host cores: %u\n", (unsigned)cCores));
3694 LogRel(("************************* CPUID dump ************************\n"));
3695 DBGFR3Info(pVM->pUVM, "cpuid", "verbose", DBGFR3InfoLogRelHlp());
3696 LogRel(("\n"));
3697 DBGFR3_INFO_LOG_SAFE(pVM, "cpuid", "verbose"); /* macro */
3698 RTLogRelSetBuffering(fOldBuffered);
3699 LogRel(("******************** End of CPUID dump **********************\n"));
3700}
3701