VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/CPUM.cpp@ 61574

Last change on this file since 61574 was 61570, checked in by vboxsync, 9 years ago

DBGFR3Info*: Added DBGFINFO_FLAGS_ALL_EMTS flag for cpum, apic and others dealing with per-vcpu state. Also got rid of some redundant VMR3ReqCallWaitU calls when we're already on the right EMT, replacing them with priority calls where we actually need them. Didn't make sense to only do priority calls from DBGFR3InfoEx.

  • Property svn:eol-style set to native
  • Property svn:keywords set to Id Revision
File size: 115.4 KB
1/* $Id: CPUM.cpp 61570 2016-06-08 10:55:10Z vboxsync $ */
2/** @file
3 * CPUM - CPU Monitor / Manager.
4 */
5
6/*
7 * Copyright (C) 2006-2015 Oracle Corporation
8 *
9 * This file is part of VirtualBox Open Source Edition (OSE), as
10 * available from http://www.virtualbox.org. This file is free software;
11 * you can redistribute it and/or modify it under the terms of the GNU
12 * General Public License (GPL) as published by the Free Software
13 * Foundation, in version 2 as it comes in the "COPYING" file of the
14 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
15 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
16 */
17
18/** @page pg_cpum CPUM - CPU Monitor / Manager
19 *
20 * The CPU Monitor / Manager keeps track of all the CPU registers. It is
21 * also responsible for lazy FPU handling and some of the context loading
22 * in raw mode.
23 *
24 * There are three CPU contexts; the most important one is the guest one (GC).
25 * When running in raw-mode (RC) there is a special hyper context for the VMM
26 * part that floats around inside the guest address space. When running in
27 * raw-mode, CPUM also maintains a host context for saving and restoring
28 * registers across world switches. The latter is done in cooperation with the
29 * world switcher (@see pg_vmm).
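 *
 * As a rough sketch (assuming the per-VCPU layout used in this file), the
 * three contexts live side by side in the CPUMCPU data of each virtual CPU:
 * @code
 *      PVMCPU       pVCpu  = &pVM->aCpus[0];
 *      PCPUMCTX     pGuest = &pVCpu->cpum.s.Guest;  // guest register state (GC)
 *      PCPUMCTX     pHyper = &pVCpu->cpum.s.Hyper;  // raw-mode hypervisor context
 *      PCPUMHOSTCTX pHost  = &pVCpu->cpum.s.Host;   // host state across world switches
 * @endcode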
30 *
31 * @see grp_cpum
32 *
33 * @section sec_cpum_fpu FPU / SSE / AVX / ++ state.
34 *
35 * TODO: proper write up, currently just some notes.
36 *
37 * The ring-0 FPU handling per OS:
38 *
39 * - 64-bit Windows uses XMM registers in the kernel as part of the calling
40 * convention (Visual C++ doesn't seem to have a way to disable
41 * generating such code either), so CR0.TS/EM are always zero from what I
42 * can tell. We are also forced to always load/save the guest XMM0-XMM15
43 * registers when entering/leaving guest context. Interrupt handlers
44 * using FPU/SSE will officially have to call save and restore functions
45 * exported by the kernel, if they really, really have to use the state.
46 *
47 * - 32-bit Windows does lazy FPU handling, I think, probably including
48 * lazy saving. The Windows Internals book states that it's a bad
49 * idea to use the FPU in kernel space. However, it looks like it will
50 * restore the FPU state of the current thread in case of a kernel \#NM.
51 * Interrupt handlers should be same as for 64-bit.
52 *
53 * - Darwin allows taking \#NM in kernel space, restoring current thread's
54 * state if I read the code correctly. It saves the FPU state of the
55 * outgoing thread, and uses CR0.TS to lazily load the state of the
56 * incoming one. No idea yet how the FPU is treated by interrupt
57 * handlers, i.e. whether they are allowed to disable the state or
58 * something.
59 *
60 * - Linux also allows \#NM in kernel space (don't know since when), and
61 * uses CR0.TS for lazy loading. It saves the outgoing thread's state and
62 * lazily loads the incoming one unless configured to aggressively load it. Interrupt
63 * handlers can ask whether they're allowed to use the FPU, and may
64 * freely trash the state if Linux thinks it has saved the thread's state
65 * already. This is a problem.
66 *
67 * - Solaris will, from what I can tell, panic if it gets an \#NM in kernel
68 * context. When switching threads, the kernel will save the state of
69 * the outgoing thread and lazy load the incoming one using CR0.TS.
70 * There are a few routines in sseblk.s which use the SSE unit in ring-0
71 * to do stuff; HAT is among the users. The routines there will
72 * manually clear CR0.TS and save the XMM registers they use only if
73 * CR0.TS was zero upon entry. They will skip it when not, because as
74 * mentioned above, the FPU state is saved when switching away from a
75 * thread and CR0.TS set to 1, so when CR0.TS is 1 there is nothing to
76 * preserve. This is a problem if we restore CR0.TS to 1 after loading
77 * the guest state.
78 *
79 * - FreeBSD - no idea yet.
80 *
81 * - OS/2 does not allow \#NMs in kernel space IIRC. Does lazy loading,
82 * possibly also lazy saving. Interrupts must preserve the CR0.TS+EM &
83 * FPU states.
84 *
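 * For reference, the classic lazy-FPU protocol the notes above keep referring
 * to looks roughly like this (an illustrative sketch only, not VirtualBox
 * code; fxsave/fxrstor/SetCR0/GetCR0 stand in for the usual primitives):
 * @code
 *      // Context switch: save the outgoing owner eagerly, arm the trap.
 *      fxsave(&pOldThread->FpuState);      // save outgoing thread's FPU/SSE state
 *      SetCR0(GetCR0() | X86_CR0_TS);      // first FPU insn in new thread -> #NM
 *
 *      // #NM handler: first FPU use by the incoming thread.
 *      SetCR0(GetCR0() & ~X86_CR0_TS);     // clear TS so the fxrstor won't trap too
 *      fxrstor(&pNewThread->FpuState);     // lazily load the new owner's state
 * @endcode
 *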
85 * Up to r107425 (2016-05-24) we would only temporarily modify CR0.TS/EM while
86 * saving and restoring the host and guest states. The motivation for this
87 * change is that we want to be able to emulate SSE instructions in ring-0 (IEM).
88 *
89 * Starting with that change, we will leave CR0.TS=EM=0 after saving the host
90 * state and only restore them once we've restored the host FPU state. This has the
91 * accidental side effect of triggering Solaris to preserve XMM registers in
92 * sseblk.s. Since CR0 is now changed when saving the FPU state, CPUM must inform
93 * the VT-x (HMVMX) code about it, as that code caches the CR0 value in the VMCS.
94 *
95 *
96 * @section sec_cpum_logging Logging Level Assignments.
97 *
98 * The following log levels are assigned:
99 * - Log6 is used for FPU state management.
100 * - Log7 is used for FPU state actualization.
101 *
102 */
103
104
105/*********************************************************************************************************************************
106* Header Files *
107*********************************************************************************************************************************/
108#define LOG_GROUP LOG_GROUP_CPUM
109#include <VBox/vmm/cpum.h>
110#include <VBox/vmm/cpumdis.h>
111#include <VBox/vmm/cpumctx-v1_6.h>
112#include <VBox/vmm/pgm.h>
113#include <VBox/vmm/pdmapi.h>
114#include <VBox/vmm/mm.h>
115#include <VBox/vmm/em.h>
116#include <VBox/vmm/selm.h>
117#include <VBox/vmm/dbgf.h>
118#include <VBox/vmm/patm.h>
119#include <VBox/vmm/hm.h>
120#include <VBox/vmm/ssm.h>
121#include "CPUMInternal.h"
122#include <VBox/vmm/vm.h>
123
124#include <VBox/param.h>
125#include <VBox/dis.h>
126#include <VBox/err.h>
127#include <VBox/log.h>
128#include <iprt/asm-amd64-x86.h>
129#include <iprt/assert.h>
130#include <iprt/cpuset.h>
131#include <iprt/mem.h>
132#include <iprt/mp.h>
133#include <iprt/string.h>
134#include "internal/pgm.h"
135
136
137/*********************************************************************************************************************************
138* Defined Constants And Macros *
139*********************************************************************************************************************************/
140/**
141 * This was used in the saved state up to the early life of version 14.
142 *
143 * It indicates that we may have some out-of-sync hidden segment registers.
144 * It is only relevant for raw-mode.
145 */
146#define CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID RT_BIT(12)
147
148
149/*********************************************************************************************************************************
150* Structures and Typedefs *
151*********************************************************************************************************************************/
152
153/**
154 * What kind of cpu info dump to perform.
155 */
156typedef enum CPUMDUMPTYPE
157{
158 CPUMDUMPTYPE_TERSE,
159 CPUMDUMPTYPE_DEFAULT,
160 CPUMDUMPTYPE_VERBOSE
161} CPUMDUMPTYPE;
162/** Pointer to a cpu info dump type. */
163typedef CPUMDUMPTYPE *PCPUMDUMPTYPE;
164
165
166/*********************************************************************************************************************************
167* Internal Functions *
168*********************************************************************************************************************************/
169static DECLCALLBACK(int) cpumR3LiveExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uPass);
170static DECLCALLBACK(int) cpumR3SaveExec(PVM pVM, PSSMHANDLE pSSM);
171static DECLCALLBACK(int) cpumR3LoadPrep(PVM pVM, PSSMHANDLE pSSM);
172static DECLCALLBACK(int) cpumR3LoadExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass);
173static DECLCALLBACK(int) cpumR3LoadDone(PVM pVM, PSSMHANDLE pSSM);
174static DECLCALLBACK(void) cpumR3InfoAll(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
175static DECLCALLBACK(void) cpumR3InfoGuest(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
176static DECLCALLBACK(void) cpumR3InfoGuestInstr(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
177static DECLCALLBACK(void) cpumR3InfoHyper(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
178static DECLCALLBACK(void) cpumR3InfoHost(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
179
180
181/*********************************************************************************************************************************
182* Global Variables *
183*********************************************************************************************************************************/
184/** Saved state field descriptors for CPUMCTX. */
185static const SSMFIELD g_aCpumCtxFields[] =
186{
187 SSMFIELD_ENTRY( CPUMCTX, rdi),
188 SSMFIELD_ENTRY( CPUMCTX, rsi),
189 SSMFIELD_ENTRY( CPUMCTX, rbp),
190 SSMFIELD_ENTRY( CPUMCTX, rax),
191 SSMFIELD_ENTRY( CPUMCTX, rbx),
192 SSMFIELD_ENTRY( CPUMCTX, rdx),
193 SSMFIELD_ENTRY( CPUMCTX, rcx),
194 SSMFIELD_ENTRY( CPUMCTX, rsp),
195 SSMFIELD_ENTRY( CPUMCTX, rflags),
196 SSMFIELD_ENTRY( CPUMCTX, rip),
197 SSMFIELD_ENTRY( CPUMCTX, r8),
198 SSMFIELD_ENTRY( CPUMCTX, r9),
199 SSMFIELD_ENTRY( CPUMCTX, r10),
200 SSMFIELD_ENTRY( CPUMCTX, r11),
201 SSMFIELD_ENTRY( CPUMCTX, r12),
202 SSMFIELD_ENTRY( CPUMCTX, r13),
203 SSMFIELD_ENTRY( CPUMCTX, r14),
204 SSMFIELD_ENTRY( CPUMCTX, r15),
205 SSMFIELD_ENTRY( CPUMCTX, es.Sel),
206 SSMFIELD_ENTRY( CPUMCTX, es.ValidSel),
207 SSMFIELD_ENTRY( CPUMCTX, es.fFlags),
208 SSMFIELD_ENTRY( CPUMCTX, es.u64Base),
209 SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
210 SSMFIELD_ENTRY( CPUMCTX, es.Attr),
211 SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
212 SSMFIELD_ENTRY( CPUMCTX, cs.ValidSel),
213 SSMFIELD_ENTRY( CPUMCTX, cs.fFlags),
214 SSMFIELD_ENTRY( CPUMCTX, cs.u64Base),
215 SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
216 SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
217 SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
218 SSMFIELD_ENTRY( CPUMCTX, ss.ValidSel),
219 SSMFIELD_ENTRY( CPUMCTX, ss.fFlags),
220 SSMFIELD_ENTRY( CPUMCTX, ss.u64Base),
221 SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
222 SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
223 SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
224 SSMFIELD_ENTRY( CPUMCTX, ds.ValidSel),
225 SSMFIELD_ENTRY( CPUMCTX, ds.fFlags),
226 SSMFIELD_ENTRY( CPUMCTX, ds.u64Base),
227 SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
228 SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
229 SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
230 SSMFIELD_ENTRY( CPUMCTX, fs.ValidSel),
231 SSMFIELD_ENTRY( CPUMCTX, fs.fFlags),
232 SSMFIELD_ENTRY( CPUMCTX, fs.u64Base),
233 SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
234 SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
235 SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
236 SSMFIELD_ENTRY( CPUMCTX, gs.ValidSel),
237 SSMFIELD_ENTRY( CPUMCTX, gs.fFlags),
238 SSMFIELD_ENTRY( CPUMCTX, gs.u64Base),
239 SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
240 SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
241 SSMFIELD_ENTRY( CPUMCTX, cr0),
242 SSMFIELD_ENTRY( CPUMCTX, cr2),
243 SSMFIELD_ENTRY( CPUMCTX, cr3),
244 SSMFIELD_ENTRY( CPUMCTX, cr4),
245 SSMFIELD_ENTRY( CPUMCTX, dr[0]),
246 SSMFIELD_ENTRY( CPUMCTX, dr[1]),
247 SSMFIELD_ENTRY( CPUMCTX, dr[2]),
248 SSMFIELD_ENTRY( CPUMCTX, dr[3]),
249 SSMFIELD_ENTRY( CPUMCTX, dr[6]),
250 SSMFIELD_ENTRY( CPUMCTX, dr[7]),
251 SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
252 SSMFIELD_ENTRY( CPUMCTX, gdtr.pGdt),
253 SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
254 SSMFIELD_ENTRY( CPUMCTX, idtr.pIdt),
255 SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
256 SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
257 SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
258 SSMFIELD_ENTRY( CPUMCTX, msrEFER),
259 SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
260 SSMFIELD_ENTRY( CPUMCTX, msrPAT),
261 SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
262 SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
263 SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
264 SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
265 /* msrApicBase is not included here, it resides in the APIC device state. */
266 SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
267 SSMFIELD_ENTRY( CPUMCTX, ldtr.ValidSel),
268 SSMFIELD_ENTRY( CPUMCTX, ldtr.fFlags),
269 SSMFIELD_ENTRY( CPUMCTX, ldtr.u64Base),
270 SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
271 SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
272 SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
273 SSMFIELD_ENTRY( CPUMCTX, tr.ValidSel),
274 SSMFIELD_ENTRY( CPUMCTX, tr.fFlags),
275 SSMFIELD_ENTRY( CPUMCTX, tr.u64Base),
276 SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
277 SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
278 SSMFIELD_ENTRY_VER( CPUMCTX, aXcr[0], CPUM_SAVED_STATE_VERSION_XSAVE),
279 SSMFIELD_ENTRY_VER( CPUMCTX, aXcr[1], CPUM_SAVED_STATE_VERSION_XSAVE),
280 SSMFIELD_ENTRY_VER( CPUMCTX, fXStateMask, CPUM_SAVED_STATE_VERSION_XSAVE),
281 SSMFIELD_ENTRY_TERM()
282};
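/* Usage note: these descriptor tables drive SSM's generic struct marshalling.
   cpumR3SaveExec below writes a guest context with
        SSMR3PutStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
   and cpumR3LoadExec reads it back with the matching SSMR3GetStructEx call. */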
283
284/** Saved state field descriptors for the X86FXSTATE part of CPUMCTX. */
285static const SSMFIELD g_aCpumX87Fields[] =
286{
287 SSMFIELD_ENTRY( X86FXSTATE, FCW),
288 SSMFIELD_ENTRY( X86FXSTATE, FSW),
289 SSMFIELD_ENTRY( X86FXSTATE, FTW),
290 SSMFIELD_ENTRY( X86FXSTATE, FOP),
291 SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
292 SSMFIELD_ENTRY( X86FXSTATE, CS),
293 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
294 SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
295 SSMFIELD_ENTRY( X86FXSTATE, DS),
296 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
297 SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
298 SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
299 SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
300 SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
301 SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
302 SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
303 SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
304 SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
305 SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
306 SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
307 SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
308 SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
309 SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
310 SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
311 SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
312 SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
313 SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
314 SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
315 SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
316 SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
317 SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
318 SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
319 SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
320 SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
321 SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
322 SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
323 SSMFIELD_ENTRY_VER( X86FXSTATE, au32RsrvdForSoftware[0], CPUM_SAVED_STATE_VERSION_XSAVE), /* 32-bit/64-bit hack */
324 SSMFIELD_ENTRY_TERM()
325};
326
327/** Saved state field descriptors for X86XSAVEHDR. */
328static const SSMFIELD g_aCpumXSaveHdrFields[] =
329{
330 SSMFIELD_ENTRY( X86XSAVEHDR, bmXState),
331 SSMFIELD_ENTRY_TERM()
332};
333
334/** Saved state field descriptors for X86XSAVEYMMHI. */
335static const SSMFIELD g_aCpumYmmHiFields[] =
336{
337 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[0]),
338 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[1]),
339 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[2]),
340 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[3]),
341 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[4]),
342 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[5]),
343 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[6]),
344 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[7]),
345 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[8]),
346 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[9]),
347 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[10]),
348 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[11]),
349 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[12]),
350 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[13]),
351 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[14]),
352 SSMFIELD_ENTRY( X86XSAVEYMMHI, aYmmHi[15]),
353 SSMFIELD_ENTRY_TERM()
354};
355
356/** Saved state field descriptors for X86XSAVEBNDREGS. */
357static const SSMFIELD g_aCpumBndRegsFields[] =
358{
359 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[0]),
360 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[1]),
361 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[2]),
362 SSMFIELD_ENTRY( X86XSAVEBNDREGS, aRegs[3]),
363 SSMFIELD_ENTRY_TERM()
364};
365
366/** Saved state field descriptors for X86XSAVEBNDCFG. */
367static const SSMFIELD g_aCpumBndCfgFields[] =
368{
369 SSMFIELD_ENTRY( X86XSAVEBNDCFG, fConfig),
370 SSMFIELD_ENTRY( X86XSAVEBNDCFG, fStatus),
371 SSMFIELD_ENTRY_TERM()
372};
373
374/** Saved state field descriptors for X86XSAVEOPMASK. */
375static const SSMFIELD g_aCpumOpmaskFields[] =
376{
377 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[0]),
378 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[1]),
379 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[2]),
380 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[3]),
381 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[4]),
382 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[5]),
383 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[6]),
384 SSMFIELD_ENTRY( X86XSAVEOPMASK, aKRegs[7]),
385 SSMFIELD_ENTRY_TERM()
386};
387
388/** Saved state field descriptors for X86XSAVEZMMHI256. */
389static const SSMFIELD g_aCpumZmmHi256Fields[] =
390{
391 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[0]),
392 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[1]),
393 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[2]),
394 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[3]),
395 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[4]),
396 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[5]),
397 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[6]),
398 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[7]),
399 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[8]),
400 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[9]),
401 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[10]),
402 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[11]),
403 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[12]),
404 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[13]),
405 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[14]),
406 SSMFIELD_ENTRY( X86XSAVEZMMHI256, aHi256Regs[15]),
407 SSMFIELD_ENTRY_TERM()
408};
409
410/** Saved state field descriptors for X86XSAVEZMM16HI. */
411static const SSMFIELD g_aCpumZmm16HiFields[] =
412{
413 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[0]),
414 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[1]),
415 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[2]),
416 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[3]),
417 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[4]),
418 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[5]),
419 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[6]),
420 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[7]),
421 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[8]),
422 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[9]),
423 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[10]),
424 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[11]),
425 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[12]),
426 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[13]),
427 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[14]),
428 SSMFIELD_ENTRY( X86XSAVEZMM16HI, aRegs[15]),
429 SSMFIELD_ENTRY_TERM()
430};
431
432
433
434/** Saved state field descriptors for X86FXSTATE in V4.1 before the hidden selector
435 * registers changed. */
436static const SSMFIELD g_aCpumX87FieldsMem[] =
437{
438 SSMFIELD_ENTRY( X86FXSTATE, FCW),
439 SSMFIELD_ENTRY( X86FXSTATE, FSW),
440 SSMFIELD_ENTRY( X86FXSTATE, FTW),
441 SSMFIELD_ENTRY( X86FXSTATE, FOP),
442 SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
443 SSMFIELD_ENTRY( X86FXSTATE, CS),
444 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
445 SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
446 SSMFIELD_ENTRY( X86FXSTATE, DS),
447 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
448 SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
449 SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
450 SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
451 SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
452 SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
453 SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
454 SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
455 SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
456 SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
457 SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
458 SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
459 SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
460 SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
461 SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
462 SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
463 SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
464 SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
465 SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
466 SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
467 SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
468 SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
469 SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
470 SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
471 SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
472 SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
473 SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
474 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdRest),
475 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdForSoftware),
476 SSMFIELD_ENTRY_TERM() /* terminator required by the SSM field iterator */
};
477
478/** Saved state field descriptors for CPUMCTX in V4.1 before the hidden selector
479 * registers changed. */
480static const SSMFIELD g_aCpumCtxFieldsMem[] =
481{
482 SSMFIELD_ENTRY( CPUMCTX, rdi),
483 SSMFIELD_ENTRY( CPUMCTX, rsi),
484 SSMFIELD_ENTRY( CPUMCTX, rbp),
485 SSMFIELD_ENTRY( CPUMCTX, rax),
486 SSMFIELD_ENTRY( CPUMCTX, rbx),
487 SSMFIELD_ENTRY( CPUMCTX, rdx),
488 SSMFIELD_ENTRY( CPUMCTX, rcx),
489 SSMFIELD_ENTRY( CPUMCTX, rsp),
490 SSMFIELD_ENTRY_OLD( lss_esp, sizeof(uint32_t)),
491 SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
492 SSMFIELD_ENTRY_OLD( ssPadding, sizeof(uint16_t)),
493 SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
494 SSMFIELD_ENTRY_OLD( gsPadding, sizeof(uint16_t)),
495 SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
496 SSMFIELD_ENTRY_OLD( fsPadding, sizeof(uint16_t)),
497 SSMFIELD_ENTRY( CPUMCTX, es.Sel),
498 SSMFIELD_ENTRY_OLD( esPadding, sizeof(uint16_t)),
499 SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
500 SSMFIELD_ENTRY_OLD( dsPadding, sizeof(uint16_t)),
501 SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
502 SSMFIELD_ENTRY_OLD( csPadding, sizeof(uint16_t)*3),
503 SSMFIELD_ENTRY( CPUMCTX, rflags),
504 SSMFIELD_ENTRY( CPUMCTX, rip),
505 SSMFIELD_ENTRY( CPUMCTX, r8),
506 SSMFIELD_ENTRY( CPUMCTX, r9),
507 SSMFIELD_ENTRY( CPUMCTX, r10),
508 SSMFIELD_ENTRY( CPUMCTX, r11),
509 SSMFIELD_ENTRY( CPUMCTX, r12),
510 SSMFIELD_ENTRY( CPUMCTX, r13),
511 SSMFIELD_ENTRY( CPUMCTX, r14),
512 SSMFIELD_ENTRY( CPUMCTX, r15),
513 SSMFIELD_ENTRY( CPUMCTX, es.u64Base),
514 SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
515 SSMFIELD_ENTRY( CPUMCTX, es.Attr),
516 SSMFIELD_ENTRY( CPUMCTX, cs.u64Base),
517 SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
518 SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
519 SSMFIELD_ENTRY( CPUMCTX, ss.u64Base),
520 SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
521 SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
522 SSMFIELD_ENTRY( CPUMCTX, ds.u64Base),
523 SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
524 SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
525 SSMFIELD_ENTRY( CPUMCTX, fs.u64Base),
526 SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
527 SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
528 SSMFIELD_ENTRY( CPUMCTX, gs.u64Base),
529 SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
530 SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
531 SSMFIELD_ENTRY( CPUMCTX, cr0),
532 SSMFIELD_ENTRY( CPUMCTX, cr2),
533 SSMFIELD_ENTRY( CPUMCTX, cr3),
534 SSMFIELD_ENTRY( CPUMCTX, cr4),
535 SSMFIELD_ENTRY( CPUMCTX, dr[0]),
536 SSMFIELD_ENTRY( CPUMCTX, dr[1]),
537 SSMFIELD_ENTRY( CPUMCTX, dr[2]),
538 SSMFIELD_ENTRY( CPUMCTX, dr[3]),
539 SSMFIELD_ENTRY_OLD( dr[4], sizeof(uint64_t)),
540 SSMFIELD_ENTRY_OLD( dr[5], sizeof(uint64_t)),
541 SSMFIELD_ENTRY( CPUMCTX, dr[6]),
542 SSMFIELD_ENTRY( CPUMCTX, dr[7]),
543 SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
544 SSMFIELD_ENTRY( CPUMCTX, gdtr.pGdt),
545 SSMFIELD_ENTRY_OLD( gdtrPadding, sizeof(uint16_t)),
546 SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
547 SSMFIELD_ENTRY( CPUMCTX, idtr.pIdt),
548 SSMFIELD_ENTRY_OLD( idtrPadding, sizeof(uint16_t)),
549 SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
550 SSMFIELD_ENTRY_OLD( ldtrPadding, sizeof(uint16_t)),
551 SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
552 SSMFIELD_ENTRY_OLD( trPadding, sizeof(uint16_t)),
553 SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
554 SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
555 SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
556 SSMFIELD_ENTRY( CPUMCTX, msrEFER),
557 SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
558 SSMFIELD_ENTRY( CPUMCTX, msrPAT),
559 SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
560 SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
561 SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
562 SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
563 SSMFIELD_ENTRY( CPUMCTX, ldtr.u64Base),
564 SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
565 SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
566 SSMFIELD_ENTRY( CPUMCTX, tr.u64Base),
567 SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
568 SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
569 SSMFIELD_ENTRY_TERM()
570};
571
572/** Saved state field descriptors for the X86FXSTATE part of CPUMCTX_VER1_6. */
573static const SSMFIELD g_aCpumX87FieldsV16[] =
574{
575 SSMFIELD_ENTRY( X86FXSTATE, FCW),
576 SSMFIELD_ENTRY( X86FXSTATE, FSW),
577 SSMFIELD_ENTRY( X86FXSTATE, FTW),
578 SSMFIELD_ENTRY( X86FXSTATE, FOP),
579 SSMFIELD_ENTRY( X86FXSTATE, FPUIP),
580 SSMFIELD_ENTRY( X86FXSTATE, CS),
581 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd1),
582 SSMFIELD_ENTRY( X86FXSTATE, FPUDP),
583 SSMFIELD_ENTRY( X86FXSTATE, DS),
584 SSMFIELD_ENTRY( X86FXSTATE, Rsrvd2),
585 SSMFIELD_ENTRY( X86FXSTATE, MXCSR),
586 SSMFIELD_ENTRY( X86FXSTATE, MXCSR_MASK),
587 SSMFIELD_ENTRY( X86FXSTATE, aRegs[0]),
588 SSMFIELD_ENTRY( X86FXSTATE, aRegs[1]),
589 SSMFIELD_ENTRY( X86FXSTATE, aRegs[2]),
590 SSMFIELD_ENTRY( X86FXSTATE, aRegs[3]),
591 SSMFIELD_ENTRY( X86FXSTATE, aRegs[4]),
592 SSMFIELD_ENTRY( X86FXSTATE, aRegs[5]),
593 SSMFIELD_ENTRY( X86FXSTATE, aRegs[6]),
594 SSMFIELD_ENTRY( X86FXSTATE, aRegs[7]),
595 SSMFIELD_ENTRY( X86FXSTATE, aXMM[0]),
596 SSMFIELD_ENTRY( X86FXSTATE, aXMM[1]),
597 SSMFIELD_ENTRY( X86FXSTATE, aXMM[2]),
598 SSMFIELD_ENTRY( X86FXSTATE, aXMM[3]),
599 SSMFIELD_ENTRY( X86FXSTATE, aXMM[4]),
600 SSMFIELD_ENTRY( X86FXSTATE, aXMM[5]),
601 SSMFIELD_ENTRY( X86FXSTATE, aXMM[6]),
602 SSMFIELD_ENTRY( X86FXSTATE, aXMM[7]),
603 SSMFIELD_ENTRY( X86FXSTATE, aXMM[8]),
604 SSMFIELD_ENTRY( X86FXSTATE, aXMM[9]),
605 SSMFIELD_ENTRY( X86FXSTATE, aXMM[10]),
606 SSMFIELD_ENTRY( X86FXSTATE, aXMM[11]),
607 SSMFIELD_ENTRY( X86FXSTATE, aXMM[12]),
608 SSMFIELD_ENTRY( X86FXSTATE, aXMM[13]),
609 SSMFIELD_ENTRY( X86FXSTATE, aXMM[14]),
610 SSMFIELD_ENTRY( X86FXSTATE, aXMM[15]),
611 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdRest),
612 SSMFIELD_ENTRY_IGNORE( X86FXSTATE, au32RsrvdForSoftware),
613 SSMFIELD_ENTRY_TERM()
614};
615
616/** Saved state field descriptors for CPUMCTX_VER1_6. */
617static const SSMFIELD g_aCpumCtxFieldsV16[] =
618{
619 SSMFIELD_ENTRY( CPUMCTX, rdi),
620 SSMFIELD_ENTRY( CPUMCTX, rsi),
621 SSMFIELD_ENTRY( CPUMCTX, rbp),
622 SSMFIELD_ENTRY( CPUMCTX, rax),
623 SSMFIELD_ENTRY( CPUMCTX, rbx),
624 SSMFIELD_ENTRY( CPUMCTX, rdx),
625 SSMFIELD_ENTRY( CPUMCTX, rcx),
626 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, rsp),
627 SSMFIELD_ENTRY( CPUMCTX, ss.Sel),
628 SSMFIELD_ENTRY_OLD( ssPadding, sizeof(uint16_t)),
629 SSMFIELD_ENTRY_OLD( CPUMCTX, sizeof(uint64_t) /*rsp_notused*/),
630 SSMFIELD_ENTRY( CPUMCTX, gs.Sel),
631 SSMFIELD_ENTRY_OLD( gsPadding, sizeof(uint16_t)),
632 SSMFIELD_ENTRY( CPUMCTX, fs.Sel),
633 SSMFIELD_ENTRY_OLD( fsPadding, sizeof(uint16_t)),
634 SSMFIELD_ENTRY( CPUMCTX, es.Sel),
635 SSMFIELD_ENTRY_OLD( esPadding, sizeof(uint16_t)),
636 SSMFIELD_ENTRY( CPUMCTX, ds.Sel),
637 SSMFIELD_ENTRY_OLD( dsPadding, sizeof(uint16_t)),
638 SSMFIELD_ENTRY( CPUMCTX, cs.Sel),
639 SSMFIELD_ENTRY_OLD( csPadding, sizeof(uint16_t)*3),
640 SSMFIELD_ENTRY( CPUMCTX, rflags),
641 SSMFIELD_ENTRY( CPUMCTX, rip),
642 SSMFIELD_ENTRY( CPUMCTX, r8),
643 SSMFIELD_ENTRY( CPUMCTX, r9),
644 SSMFIELD_ENTRY( CPUMCTX, r10),
645 SSMFIELD_ENTRY( CPUMCTX, r11),
646 SSMFIELD_ENTRY( CPUMCTX, r12),
647 SSMFIELD_ENTRY( CPUMCTX, r13),
648 SSMFIELD_ENTRY( CPUMCTX, r14),
649 SSMFIELD_ENTRY( CPUMCTX, r15),
650 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, es.u64Base),
651 SSMFIELD_ENTRY( CPUMCTX, es.u32Limit),
652 SSMFIELD_ENTRY( CPUMCTX, es.Attr),
653 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, cs.u64Base),
654 SSMFIELD_ENTRY( CPUMCTX, cs.u32Limit),
655 SSMFIELD_ENTRY( CPUMCTX, cs.Attr),
656 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ss.u64Base),
657 SSMFIELD_ENTRY( CPUMCTX, ss.u32Limit),
658 SSMFIELD_ENTRY( CPUMCTX, ss.Attr),
659 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ds.u64Base),
660 SSMFIELD_ENTRY( CPUMCTX, ds.u32Limit),
661 SSMFIELD_ENTRY( CPUMCTX, ds.Attr),
662 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, fs.u64Base),
663 SSMFIELD_ENTRY( CPUMCTX, fs.u32Limit),
664 SSMFIELD_ENTRY( CPUMCTX, fs.Attr),
665 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, gs.u64Base),
666 SSMFIELD_ENTRY( CPUMCTX, gs.u32Limit),
667 SSMFIELD_ENTRY( CPUMCTX, gs.Attr),
668 SSMFIELD_ENTRY( CPUMCTX, cr0),
669 SSMFIELD_ENTRY( CPUMCTX, cr2),
670 SSMFIELD_ENTRY( CPUMCTX, cr3),
671 SSMFIELD_ENTRY( CPUMCTX, cr4),
672 SSMFIELD_ENTRY_OLD( cr8, sizeof(uint64_t)),
673 SSMFIELD_ENTRY( CPUMCTX, dr[0]),
674 SSMFIELD_ENTRY( CPUMCTX, dr[1]),
675 SSMFIELD_ENTRY( CPUMCTX, dr[2]),
676 SSMFIELD_ENTRY( CPUMCTX, dr[3]),
677 SSMFIELD_ENTRY_OLD( dr[4], sizeof(uint64_t)),
678 SSMFIELD_ENTRY_OLD( dr[5], sizeof(uint64_t)),
679 SSMFIELD_ENTRY( CPUMCTX, dr[6]),
680 SSMFIELD_ENTRY( CPUMCTX, dr[7]),
681 SSMFIELD_ENTRY( CPUMCTX, gdtr.cbGdt),
682 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, gdtr.pGdt),
683 SSMFIELD_ENTRY_OLD( gdtrPadding, sizeof(uint16_t)),
684 SSMFIELD_ENTRY_OLD( gdtrPadding64, sizeof(uint64_t)),
685 SSMFIELD_ENTRY( CPUMCTX, idtr.cbIdt),
686 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, idtr.pIdt),
687 SSMFIELD_ENTRY_OLD( idtrPadding, sizeof(uint16_t)),
688 SSMFIELD_ENTRY_OLD( idtrPadding64, sizeof(uint64_t)),
689 SSMFIELD_ENTRY( CPUMCTX, ldtr.Sel),
690 SSMFIELD_ENTRY_OLD( ldtrPadding, sizeof(uint16_t)),
691 SSMFIELD_ENTRY( CPUMCTX, tr.Sel),
692 SSMFIELD_ENTRY_OLD( trPadding, sizeof(uint16_t)),
693 SSMFIELD_ENTRY( CPUMCTX, SysEnter.cs),
694 SSMFIELD_ENTRY( CPUMCTX, SysEnter.eip),
695 SSMFIELD_ENTRY( CPUMCTX, SysEnter.esp),
696 SSMFIELD_ENTRY( CPUMCTX, msrEFER),
697 SSMFIELD_ENTRY( CPUMCTX, msrSTAR),
698 SSMFIELD_ENTRY( CPUMCTX, msrPAT),
699 SSMFIELD_ENTRY( CPUMCTX, msrLSTAR),
700 SSMFIELD_ENTRY( CPUMCTX, msrCSTAR),
701 SSMFIELD_ENTRY( CPUMCTX, msrSFMASK),
702 SSMFIELD_ENTRY_OLD( msrFSBASE, sizeof(uint64_t)),
703 SSMFIELD_ENTRY_OLD( msrGSBASE, sizeof(uint64_t)),
704 SSMFIELD_ENTRY( CPUMCTX, msrKERNELGSBASE),
705 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, ldtr.u64Base),
706 SSMFIELD_ENTRY( CPUMCTX, ldtr.u32Limit),
707 SSMFIELD_ENTRY( CPUMCTX, ldtr.Attr),
708 SSMFIELD_ENTRY_U32_ZX_U64( CPUMCTX, tr.u64Base),
709 SSMFIELD_ENTRY( CPUMCTX, tr.u32Limit),
710 SSMFIELD_ENTRY( CPUMCTX, tr.Attr),
711 SSMFIELD_ENTRY_OLD( padding, sizeof(uint32_t)*2),
712 SSMFIELD_ENTRY_TERM()
713};
714
715
716/**
717 * Checks for partial/leaky FXSAVE/FXRSTOR handling on AMD CPUs.
718 *
719 * AMD K7, K8 and newer AMD CPUs do not save/restore the x87 error pointers
720 * (last instruction pointer, last data pointer, last opcode) except when the ES
721 * bit (Exception Summary) in x87 FSW (FPU Status Word) is set. Thus if we don't
722 * clear these registers there is a potential local FPU state leak from one
723 * process using the FPU to another.
724 *
725 * See AMD Instruction Reference for FXSAVE, FXRSTOR.
726 *
727 * @param pVM The cross context VM structure.
728 */
729static void cpumR3CheckLeakyFpu(PVM pVM)
730{
731 uint32_t u32CpuVersion = ASMCpuId_EAX(1);
732 uint32_t const u32Family = u32CpuVersion >> 8;
733 if ( u32Family >= 6 /* K7 and higher */
734 && ASMIsAmdCpu())
735 {
736 uint32_t cExt = ASMCpuId_EAX(0x80000000);
737 if (ASMIsValidExtRange(cExt))
738 {
739 uint32_t fExtFeaturesEDX = ASMCpuId_EDX(0x80000001);
740 if (fExtFeaturesEDX & X86_CPUID_AMD_FEATURE_EDX_FFXSR)
741 {
742 for (VMCPUID i = 0; i < pVM->cCpus; i++)
743 pVM->aCpus[i].cpum.s.fUseFlags |= CPUM_USE_FFXSR_LEAKY;
744 Log(("CPUMR3Init: host CPU has leaky fxsave/fxrstor behaviour\n"));
745 }
746 }
747 }
748}
749
750
751/**
752 * Initializes the CPUM.
753 *
754 * @returns VBox status code.
755 * @param pVM The cross context VM structure.
756 */
757VMMR3DECL(int) CPUMR3Init(PVM pVM)
758{
759 LogFlow(("CPUMR3Init\n"));
760
761 /*
762 * Assert alignment, sizes and tables.
763 */
764 AssertCompileMemberAlignment(VM, cpum.s, 32);
765 AssertCompile(sizeof(pVM->cpum.s) <= sizeof(pVM->cpum.padding));
766 AssertCompileSizeAlignment(CPUMCTX, 64);
767 AssertCompileSizeAlignment(CPUMCTXMSRS, 64);
768 AssertCompileSizeAlignment(CPUMHOSTCTX, 64);
769 AssertCompileMemberAlignment(VM, cpum, 64);
770 AssertCompileMemberAlignment(VM, aCpus, 64);
771 AssertCompileMemberAlignment(VMCPU, cpum.s, 64);
772 AssertCompileMemberSizeAlignment(VM, aCpus[0].cpum.s, 64);
773#ifdef VBOX_STRICT
774 int rc2 = cpumR3MsrStrictInitChecks();
775 AssertRCReturn(rc2, rc2);
776#endif
777
778 /*
779 * Initialize offsets.
780 */
781
782 /* Calculate the offset from CPUM to CPUMCPU for the first CPU. */
783 pVM->cpum.s.offCPUMCPU0 = RT_OFFSETOF(VM, aCpus[0].cpum) - RT_OFFSETOF(VM, cpum);
784 Assert((uintptr_t)&pVM->cpum + pVM->cpum.s.offCPUMCPU0 == (uintptr_t)&pVM->aCpus[0].cpum);
785
786
787 /* Calculate the offset from CPUMCPU to CPUM. */
788 for (VMCPUID i = 0; i < pVM->cCpus; i++)
789 {
790 PVMCPU pVCpu = &pVM->aCpus[i];
791
792 pVCpu->cpum.s.offCPUM = RT_OFFSETOF(VM, aCpus[i].cpum) - RT_OFFSETOF(VM, cpum);
793 Assert((uintptr_t)&pVCpu->cpum - pVCpu->cpum.s.offCPUM == (uintptr_t)&pVM->cpum);
794 }
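 /* A sketch of how this offset is meant to be consumed (illustrative only;
    the real consumers live where just the CPUMCPU pointer is at hand):
        CPUM *pCpum = (CPUM *)((uintptr_t)&pVCpu->cpum - pVCpu->cpum.s.offCPUM);
    i.e. exactly the relationship the assertion above verifies. */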
795
796 /*
797 * Gather info about the host CPU.
798 */
799 if (!ASMHasCpuId())
800 {
801 Log(("The CPU doesn't support CPUID!\n"));
802 return VERR_UNSUPPORTED_CPU;
803 }
804
805 PCPUMCPUIDLEAF paLeaves;
806 uint32_t cLeaves;
807 int rc = CPUMR3CpuIdCollectLeaves(&paLeaves, &cLeaves);
808 AssertLogRelRCReturn(rc, rc);
809
810 rc = cpumR3CpuIdExplodeFeatures(paLeaves, cLeaves, &pVM->cpum.s.HostFeatures);
811 RTMemFree(paLeaves);
812 AssertLogRelRCReturn(rc, rc);
813 pVM->cpum.s.GuestFeatures.enmCpuVendor = pVM->cpum.s.HostFeatures.enmCpuVendor;
814
815 /*
816 * Check that the CPU supports the minimum features we require.
817 */
818 if (!pVM->cpum.s.HostFeatures.fFxSaveRstor)
819 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support the FXSAVE/FXRSTOR instruction.");
820 if (!pVM->cpum.s.HostFeatures.fMmx)
821 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support MMX.");
822 if (!pVM->cpum.s.HostFeatures.fTsc)
823 return VMSetError(pVM, VERR_UNSUPPORTED_CPU, RT_SRC_POS, "Host CPU does not support RDTSC.");
824
825 /*
826 * Setup the CR4 AND and OR masks used in the raw-mode switcher.
827 */
828 pVM->cpum.s.CR4.AndMask = X86_CR4_OSXMMEEXCPT | X86_CR4_PVI | X86_CR4_VME;
829 pVM->cpum.s.CR4.OrMask = X86_CR4_OSFXSR;
830
831 /*
832 * Figure out which XSAVE/XRSTOR features are available on the host.
833 */
834 uint64_t fXcr0Host = 0;
835 uint64_t fXStateHostMask = 0;
836 if ( pVM->cpum.s.HostFeatures.fXSaveRstor
837 && pVM->cpum.s.HostFeatures.fOpSysXSaveRstor)
838 {
839 fXStateHostMask = fXcr0Host = ASMGetXcr0();
840 fXStateHostMask &= XSAVE_C_X87 | XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI;
841 AssertLogRelMsgStmt((fXStateHostMask & (XSAVE_C_X87 | XSAVE_C_SSE)) == (XSAVE_C_X87 | XSAVE_C_SSE),
842 ("%#llx\n", fXStateHostMask), fXStateHostMask = 0);
843 }
844 pVM->cpum.s.fXStateHostMask = fXStateHostMask;
845 if (!HMIsEnabled(pVM)) /* For raw-mode, we only use XSAVE/XRSTOR when the guest starts using it (CPUID/CR4 visibility). */
846 fXStateHostMask = 0;
847 LogRel(("CPUM: fXStateHostMask=%#llx; initial: %#llx; host XCR0=%#llx\n",
848 pVM->cpum.s.fXStateHostMask, fXStateHostMask, fXcr0Host));
849
850 /*
851 * Allocate memory for the extended CPU state and initialize the host XSAVE/XRSTOR mask.
852 */
853 uint32_t cbMaxXState = pVM->cpum.s.HostFeatures.cbMaxExtendedState;
854 cbMaxXState = RT_ALIGN(cbMaxXState, 128);
855 AssertLogRelReturn(cbMaxXState >= sizeof(X86FXSTATE) && cbMaxXState <= _8K, VERR_CPUM_IPE_2);
856
857 uint8_t *pbXStates;
858 rc = MMR3HyperAllocOnceNoRelEx(pVM, cbMaxXState * 3 * pVM->cCpus, PAGE_SIZE, MM_TAG_CPUM_CTX,
859 MMHYPER_AONR_FLAGS_KERNEL_MAPPING, (void **)&pbXStates);
860 AssertLogRelRCReturn(rc, rc);
861
862 for (VMCPUID i = 0; i < pVM->cCpus; i++)
863 {
864 PVMCPU pVCpu = &pVM->aCpus[i];
865
866 pVCpu->cpum.s.Guest.pXStateR3 = (PX86XSAVEAREA)pbXStates;
867 pVCpu->cpum.s.Guest.pXStateR0 = MMHyperR3ToR0(pVM, pbXStates);
868 pVCpu->cpum.s.Guest.pXStateRC = MMHyperR3ToR0(pVM, pbXStates);
869 pbXStates += cbMaxXState;
870
871 pVCpu->cpum.s.Host.pXStateR3 = (PX86XSAVEAREA)pbXStates;
872 pVCpu->cpum.s.Host.pXStateR0 = MMHyperR3ToR0(pVM, pbXStates);
873 pVCpu->cpum.s.Host.pXStateRC = MMHyperR3ToR0(pVM, pbXStates);
874 pbXStates += cbMaxXState;
875
876 pVCpu->cpum.s.Hyper.pXStateR3 = (PX86XSAVEAREA)pbXStates;
877 pVCpu->cpum.s.Hyper.pXStateR0 = MMHyperR3ToR0(pVM, pbXStates);
878 pVCpu->cpum.s.Hyper.pXStateRC = MMHyperR3ToR0(pVM, pbXStates);
879 pbXStates += cbMaxXState;
880
881 pVCpu->cpum.s.Host.fXStateMask = fXStateHostMask;
882 }
883
884 /*
885 * Setup hypervisor startup values.
886 */
887
888 /*
889 * Register saved state data item.
890 */
891 rc = SSMR3RegisterInternal(pVM, "cpum", 1, CPUM_SAVED_STATE_VERSION, sizeof(CPUM),
892 NULL, cpumR3LiveExec, NULL,
893 NULL, cpumR3SaveExec, NULL,
894 cpumR3LoadPrep, cpumR3LoadExec, cpumR3LoadDone);
895 if (RT_FAILURE(rc))
896 return rc;
897
898 /*
899 * Register info handlers and registers with the debugger facility.
900 */
901 DBGFR3InfoRegisterInternalEx(pVM, "cpum", "Displays all the cpu states.",
902 &cpumR3InfoAll, DBGFINFO_FLAGS_ALL_EMTS);
903 DBGFR3InfoRegisterInternalEx(pVM, "cpumguest", "Displays the guest cpu state.",
904 &cpumR3InfoGuest, DBGFINFO_FLAGS_ALL_EMTS);
905 DBGFR3InfoRegisterInternalEx(pVM, "cpumhyper", "Displays the hypervisor cpu state.",
906 &cpumR3InfoHyper, DBGFINFO_FLAGS_ALL_EMTS);
907 DBGFR3InfoRegisterInternalEx(pVM, "cpumhost", "Displays the host cpu state.",
908 &cpumR3InfoHost, DBGFINFO_FLAGS_ALL_EMTS);
909 DBGFR3InfoRegisterInternalEx(pVM, "cpumguestinstr", "Displays the current guest instruction.",
910 &cpumR3InfoGuestInstr, DBGFINFO_FLAGS_ALL_EMTS);
911 DBGFR3InfoRegisterInternal( pVM, "cpuid", "Displays the guest cpuid leaves.", &cpumR3CpuIdInfo);
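 /* Usage sketch (assuming the standard DBGF query API): once registered,
    these handlers can be invoked from other ring-3 code, e.g.
        DBGFR3Info(pVM->pUVM, "cpumguest", NULL /*pszArgs*/, NULL /*pHlp = default*/);
    or interactively via the debugger's 'info cpumguest' command. */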
912
913 rc = cpumR3DbgInit(pVM);
914 if (RT_FAILURE(rc))
915 return rc;
916
917 /*
918 * Check if we need to workaround partial/leaky FPU handling.
919 */
920 cpumR3CheckLeakyFpu(pVM);
921
922 /*
923 * Initialize the Guest CPUID and MSR states.
924 */
925 rc = cpumR3InitCpuIdAndMsrs(pVM);
926 if (RT_FAILURE(rc))
927 return rc;
928 CPUMR3Reset(pVM);
929 return VINF_SUCCESS;
930}
931
932
933/**
934 * Applies relocations to data and code managed by this
935 * component. This function will be called at init and
936 * whenever the VMM needs to relocate itself inside the GC.
937 *
938 * The CPUM will update the addresses used by the switcher.
939 *
940 * @param pVM The cross context VM structure.
941 */
942VMMR3DECL(void) CPUMR3Relocate(PVM pVM)
943{
944 LogFlow(("CPUMR3Relocate\n"));
945
946 pVM->cpum.s.GuestInfo.paMsrRangesRC = MMHyperR3ToRC(pVM, pVM->cpum.s.GuestInfo.paMsrRangesR3);
947 pVM->cpum.s.GuestInfo.paCpuIdLeavesRC = MMHyperR3ToRC(pVM, pVM->cpum.s.GuestInfo.paCpuIdLeavesR3);
948
949 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
950 {
951 PVMCPU pVCpu = &pVM->aCpus[iCpu];
952 pVCpu->cpum.s.Guest.pXStateRC = MMHyperR3ToRC(pVM, pVCpu->cpum.s.Guest.pXStateR3);
953 pVCpu->cpum.s.Host.pXStateRC = MMHyperR3ToRC(pVM, pVCpu->cpum.s.Host.pXStateR3);
954 pVCpu->cpum.s.Hyper.pXStateRC = MMHyperR3ToRC(pVM, pVCpu->cpum.s.Hyper.pXStateR3); /** @todo remove me */
955
956 /* Recheck the guest DRx values in raw-mode. */
957 CPUMRecalcHyperDRx(pVCpu, UINT8_MAX, false);
958 }
959}
960
961
962/**
963 * Apply late CPUM property changes based on the fHWVirtEx setting
964 *
965 * @param pVM The cross context VM structure.
966 * @param fHWVirtExEnabled HWVirtEx enabled/disabled
967 */
968VMMR3DECL(void) CPUMR3SetHWVirtEx(PVM pVM, bool fHWVirtExEnabled)
969{
970 /*
971 * Workaround for missing cpuid(0) patches when leaf 4 returns GuestInfo.DefCpuId:
972 * If we fail to patch cpuid(0).eax, Linux tries to determine the number
973 * of processors from (cpuid(4).eax >> 26) + 1.
974 *
975 * Note: this code is obsolete, but let's keep it here for reference.
976 * Purpose is valid when we artificially cap the max std id to less than 4.
977 */
978 if (!fHWVirtExEnabled)
979 {
980 Assert( (pVM->cpum.s.aGuestCpuIdPatmStd[4].uEax & UINT32_C(0xffffc000)) == 0
981 || pVM->cpum.s.aGuestCpuIdPatmStd[0].uEax < 0x4);
982 pVM->cpum.s.aGuestCpuIdPatmStd[4].uEax &= UINT32_C(0x00003fff);
983 }
984}
985
986/**
987 * Terminates the CPUM.
988 *
989 * Termination means cleaning up and freeing all resources,
990 * the VM itself is at this point powered off or suspended.
991 *
992 * @returns VBox status code.
993 * @param pVM The cross context VM structure.
994 */
995VMMR3DECL(int) CPUMR3Term(PVM pVM)
996{
997#ifdef VBOX_WITH_CRASHDUMP_MAGIC
998 for (VMCPUID i = 0; i < pVM->cCpus; i++)
999 {
1000 PVMCPU pVCpu = &pVM->aCpus[i];
1001 PCPUMCTX pCtx = CPUMQueryGuestCtxPtr(pVCpu);
1002
1003 memset(pVCpu->cpum.s.aMagic, 0, sizeof(pVCpu->cpum.s.aMagic));
1004 pVCpu->cpum.s.uMagic = 0;
1005 pCtx->dr[5] = 0;
1006 }
1007#else
1008 NOREF(pVM);
1009#endif
1010 return VINF_SUCCESS;
1011}
1012
1013
1014/**
1015 * Resets a virtual CPU.
1016 *
1017 * Used by CPUMR3Reset and CPU hot plugging.
1018 *
1019 * @param pVM The cross context VM structure.
1020 * @param pVCpu The cross context virtual CPU structure of the CPU that is
1021 * being reset. This may differ from the current EMT.
1022 */
1023VMMR3DECL(void) CPUMR3ResetCpu(PVM pVM, PVMCPU pVCpu)
1024{
1025 /** @todo anything different for VCPU > 0? */
1026 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
1027
1028 /*
1029 * Initialize everything to ZERO first.
1030 */
1031 uint32_t fUseFlags = pVCpu->cpum.s.fUseFlags & ~CPUM_USED_FPU_SINCE_REM;
1032
1033 AssertCompile(RTASSERT_OFFSET_OF(CPUMCTX, pXStateR0) < RTASSERT_OFFSET_OF(CPUMCTX, pXStateR3));
1034 AssertCompile(RTASSERT_OFFSET_OF(CPUMCTX, pXStateR0) < RTASSERT_OFFSET_OF(CPUMCTX, pXStateRC));
1035 memset(pCtx, 0, RT_OFFSETOF(CPUMCTX, pXStateR0));
1036
1037 pVCpu->cpum.s.fUseFlags = fUseFlags;
1038
1039 pCtx->cr0 = X86_CR0_CD | X86_CR0_NW | X86_CR0_ET; //0x60000010
1040 pCtx->eip = 0x0000fff0;
1041 pCtx->edx = 0x00000600; /* P6 processor */
1042 pCtx->eflags.Bits.u1Reserved0 = 1;
1043
1044 pCtx->cs.Sel = 0xf000;
1045 pCtx->cs.ValidSel = 0xf000;
1046 pCtx->cs.fFlags = CPUMSELREG_FLAGS_VALID;
1047 pCtx->cs.u64Base = UINT64_C(0xffff0000);
1048 pCtx->cs.u32Limit = 0x0000ffff;
1049 pCtx->cs.Attr.n.u1DescType = 1; /* code/data segment */
1050 pCtx->cs.Attr.n.u1Present = 1;
1051 pCtx->cs.Attr.n.u4Type = X86_SEL_TYPE_ER_ACC;
1052
1053 pCtx->ds.fFlags = CPUMSELREG_FLAGS_VALID;
1054 pCtx->ds.u32Limit = 0x0000ffff;
1055 pCtx->ds.Attr.n.u1DescType = 1; /* code/data segment */
1056 pCtx->ds.Attr.n.u1Present = 1;
1057 pCtx->ds.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1058
1059 pCtx->es.fFlags = CPUMSELREG_FLAGS_VALID;
1060 pCtx->es.u32Limit = 0x0000ffff;
1061 pCtx->es.Attr.n.u1DescType = 1; /* code/data segment */
1062 pCtx->es.Attr.n.u1Present = 1;
1063 pCtx->es.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1064
1065 pCtx->fs.fFlags = CPUMSELREG_FLAGS_VALID;
1066 pCtx->fs.u32Limit = 0x0000ffff;
1067 pCtx->fs.Attr.n.u1DescType = 1; /* code/data segment */
1068 pCtx->fs.Attr.n.u1Present = 1;
1069 pCtx->fs.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1070
1071 pCtx->gs.fFlags = CPUMSELREG_FLAGS_VALID;
1072 pCtx->gs.u32Limit = 0x0000ffff;
1073 pCtx->gs.Attr.n.u1DescType = 1; /* code/data segment */
1074 pCtx->gs.Attr.n.u1Present = 1;
1075 pCtx->gs.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1076
1077 pCtx->ss.fFlags = CPUMSELREG_FLAGS_VALID;
1078 pCtx->ss.u32Limit = 0x0000ffff;
1079 pCtx->ss.Attr.n.u1Present = 1;
1080 pCtx->ss.Attr.n.u1DescType = 1; /* code/data segment */
1081 pCtx->ss.Attr.n.u4Type = X86_SEL_TYPE_RW_ACC;
1082
1083 pCtx->idtr.cbIdt = 0xffff;
1084 pCtx->gdtr.cbGdt = 0xffff;
1085
1086 pCtx->ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
1087 pCtx->ldtr.u32Limit = 0xffff;
1088 pCtx->ldtr.Attr.n.u1Present = 1;
1089 pCtx->ldtr.Attr.n.u4Type = X86_SEL_TYPE_SYS_LDT;
1090
1091 pCtx->tr.fFlags = CPUMSELREG_FLAGS_VALID;
1092 pCtx->tr.u32Limit = 0xffff;
1093 pCtx->tr.Attr.n.u1Present = 1;
1094 pCtx->tr.Attr.n.u4Type = X86_SEL_TYPE_SYS_386_TSS_BUSY; /* Deduction, not properly documented by Intel. */
1095
1096 pCtx->dr[6] = X86_DR6_INIT_VAL;
1097 pCtx->dr[7] = X86_DR7_INIT_VAL;
1098
1099 PX86FXSTATE pFpuCtx = &pCtx->pXStateR3->x87; AssertReleaseMsg(RT_VALID_PTR(pFpuCtx), ("%p\n", pFpuCtx));
1100 pFpuCtx->FTW = 0x00; /* All empty (abridged tag reg edition). */
1101 pFpuCtx->FCW = 0x37f;
1102
1103 /* Intel 64 and IA-32 Architectures Software Developer's Manual Volume 3A, Table 8-1.
1104 IA-32 Processor States Following Power-up, Reset, or INIT */
1105 pFpuCtx->MXCSR = 0x1F80;
1106 pFpuCtx->MXCSR_MASK = 0xffff; /** @todo REM always changed this for us. Should probably check if the HW really
1107 supports all bits, since a zero value here should be read as 0xffbf. */
1108 pCtx->aXcr[0] = XSAVE_C_X87;
1109 if (pVM->cpum.s.HostFeatures.cbMaxExtendedState >= RT_OFFSETOF(X86XSAVEAREA, Hdr))
1110 {
1111 /* The entire FXSAVE state needs loading when we switch to XSAVE/XRSTOR
1112 as we don't know what happened before. (Bother to optimize later?)
1113 pCtx->pXStateR3->Hdr.bmXState = XSAVE_C_X87 | XSAVE_C_SSE;
1114 }
1115
1116 /*
1117 * MSRs.
1118 */
1119 /* Init PAT MSR */
1120 pCtx->msrPAT = UINT64_C(0x0007040600070406); /** @todo correct? */
1121
1122 /* EFER MBZ; see AMD64 Architecture Programmer's Manual Volume 2: Table 14-1. Initial Processor State.
1123 * The Intel docs don't mention it. */
1124 Assert(!pCtx->msrEFER);
1125
1126 /* IA32_MISC_ENABLE - not entirely sure what the init/reset state really
1127 is supposed to be here, just trying to provide useful/sensible values. */
1128 PCPUMMSRRANGE pRange = cpumLookupMsrRange(pVM, MSR_IA32_MISC_ENABLE);
1129 if (pRange)
1130 {
1131 pVCpu->cpum.s.GuestMsrs.msr.MiscEnable = MSR_IA32_MISC_ENABLE_BTS_UNAVAIL
1132 | MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL
1133 | (pVM->cpum.s.GuestFeatures.fMonitorMWait ? MSR_IA32_MISC_ENABLE_MONITOR : 0)
1134 | MSR_IA32_MISC_ENABLE_FAST_STRINGS;
1135 pRange->fWrIgnMask |= MSR_IA32_MISC_ENABLE_BTS_UNAVAIL
1136 | MSR_IA32_MISC_ENABLE_PEBS_UNAVAIL;
1137 pRange->fWrGpMask &= ~pVCpu->cpum.s.GuestMsrs.msr.MiscEnable;
1138 }
1139
1140 /** @todo Wire IA32_MISC_ENABLE bit 22 to our NT 4 CPUID trick. */
1141
1142 /** @todo r=ramshankar: Currently broken for SMP as TMCpuTickSet() expects to be
1143 * called from each EMT while we're getting called by CPUMR3Reset()
1144 * iteratively on the same thread. Fix later. */
1145#if 0 /** @todo r=bird: This we will do in TM, not here. */
1146 /* TSC must be 0. Intel spec. Table 9-1. "IA-32 Processor States Following Power-up, Reset, or INIT." */
1147 CPUMSetGuestMsr(pVCpu, MSR_IA32_TSC, 0);
1148#endif
1149
1150
1151 /* C-state control. Guesses. */
1152 pVCpu->cpum.s.GuestMsrs.msr.PkgCStateCfgCtrl = 1 /*C1*/ | RT_BIT_32(25) | RT_BIT_32(26) | RT_BIT_32(27) | RT_BIT_32(28);
1153
1154
1155 /*
1156 * Get the APIC base MSR from the APIC device. For historical reasons (saved state), the APIC base
1157 * continues to reside in the APIC device and we cache it here in the VCPU for all further accesses.
1158 */
1159 PDMApicGetBaseMsr(pVCpu, &pCtx->msrApicBase, true /* fIgnoreErrors */);
1160#ifdef VBOX_WITH_NEW_APIC
1161 LogRel(("CPUM: VCPU%3d: Cached APIC base MSR = %#RX64\n", pVCpu->idCpu, pVCpu->cpum.s.Guest.msrApicBase));
1162#endif
1163}
1164
1165
1166/**
1167 * Resets the CPU.
1168 *
1169 * @returns VINF_SUCCESS.
1170 * @param pVM The cross context VM structure.
1171 */
1172VMMR3DECL(void) CPUMR3Reset(PVM pVM)
1173{
1174 for (VMCPUID i = 0; i < pVM->cCpus; i++)
1175 {
1176 CPUMR3ResetCpu(pVM, &pVM->aCpus[i]);
1177
1178#ifdef VBOX_WITH_CRASHDUMP_MAGIC
1179 PCPUMCTX pCtx = &pVM->aCpus[i].cpum.s.Guest;
1180
1181 /* Magic marker for searching in crash dumps. */
1182 strcpy((char *)pVM->aCpus[i].cpum.s.aMagic, "CPUMCPU Magic");
1183 pVM->aCpus[i].cpum.s.uMagic = UINT64_C(0xDEADBEEFDEADBEEF);
1184 pCtx->dr[5] = UINT64_C(0xDEADBEEFDEADBEEF);
1185#endif
1186 }
1187}
1188
1189
1190
1191
1192/**
1193 * Pass 0 live exec callback.
1194 *
1195 * @returns VINF_SSM_DONT_CALL_AGAIN.
1196 * @param pVM The cross context VM structure.
1197 * @param pSSM The saved state handle.
1198 * @param uPass The pass (0).
1199 */
1200static DECLCALLBACK(int) cpumR3LiveExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uPass)
1201{
1202 AssertReturn(uPass == 0, VERR_SSM_UNEXPECTED_PASS);
1203 cpumR3SaveCpuId(pVM, pSSM);
1204 return VINF_SSM_DONT_CALL_AGAIN;
1205}
1206
1207
1208/**
1209 * Execute state save operation.
1210 *
1211 * @returns VBox status code.
1212 * @param pVM The cross context VM structure.
1213 * @param pSSM SSM operation handle.
1214 */
1215static DECLCALLBACK(int) cpumR3SaveExec(PVM pVM, PSSMHANDLE pSSM)
1216{
1217 /*
1218 * Save.
1219 */
1220 SSMR3PutU32(pSSM, pVM->cCpus);
1221 SSMR3PutU32(pSSM, sizeof(pVM->aCpus[0].cpum.s.GuestMsrs.msr));
1222 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
1223 {
1224 PVMCPU pVCpu = &pVM->aCpus[iCpu];
1225
1226 SSMR3PutStructEx(pSSM, &pVCpu->cpum.s.Hyper, sizeof(pVCpu->cpum.s.Hyper), 0, g_aCpumCtxFields, NULL);
1227
1228 PCPUMCTX pGstCtx = &pVCpu->cpum.s.Guest;
1229 SSMR3PutStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
1230 SSMR3PutStructEx(pSSM, &pGstCtx->pXStateR3->x87, sizeof(pGstCtx->pXStateR3->x87), 0, g_aCpumX87Fields, NULL);
1231 if (pGstCtx->fXStateMask != 0)
1232 SSMR3PutStructEx(pSSM, &pGstCtx->pXStateR3->Hdr, sizeof(pGstCtx->pXStateR3->Hdr), 0, g_aCpumXSaveHdrFields, NULL);
1233 if (pGstCtx->fXStateMask & XSAVE_C_YMM)
1234 {
1235 PCX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_YMM_BIT, PCX86XSAVEYMMHI);
1236 SSMR3PutStructEx(pSSM, pYmmHiCtx, sizeof(*pYmmHiCtx), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumYmmHiFields, NULL);
1237 }
1238 if (pGstCtx->fXStateMask & XSAVE_C_BNDREGS)
1239 {
1240 PCX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDREGS_BIT, PCX86XSAVEBNDREGS);
1241 SSMR3PutStructEx(pSSM, pBndRegs, sizeof(*pBndRegs), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndRegsFields, NULL);
1242 }
1243 if (pGstCtx->fXStateMask & XSAVE_C_BNDCSR)
1244 {
1245 PCX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDCSR_BIT, PCX86XSAVEBNDCFG);
1246 SSMR3PutStructEx(pSSM, pBndCfg, sizeof(*pBndCfg), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndCfgFields, NULL);
1247 }
1248 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_HI256)
1249 {
1250 PCX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_HI256_BIT, PCX86XSAVEZMMHI256);
1251 SSMR3PutStructEx(pSSM, pZmmHi256, sizeof(*pZmmHi256), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmmHi256Fields, NULL);
1252 }
1253 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_16HI)
1254 {
1255 PCX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_16HI_BIT, PCX86XSAVEZMM16HI);
1256 SSMR3PutStructEx(pSSM, pZmm16Hi, sizeof(*pZmm16Hi), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmm16HiFields, NULL);
1257 }
1258
1259 SSMR3PutU32(pSSM, pVCpu->cpum.s.fUseFlags);
1260 SSMR3PutU32(pSSM, pVCpu->cpum.s.fChanged);
1261 AssertCompileSizeAlignment(pVCpu->cpum.s.GuestMsrs.msr, sizeof(uint64_t));
1262 SSMR3PutMem(pSSM, &pVCpu->cpum.s.GuestMsrs, sizeof(pVCpu->cpum.s.GuestMsrs.msr));
1263 }
1264
1265 cpumR3SaveCpuId(pVM, pSSM);
1266 return VINF_SUCCESS;
1267}
1268
1269
1270/**
1271 * @callback_method_impl{FNSSMINTLOADPREP}
1272 */
1273static DECLCALLBACK(int) cpumR3LoadPrep(PVM pVM, PSSMHANDLE pSSM)
1274{
1275 NOREF(pSSM);
1276 pVM->cpum.s.fPendingRestore = true;
1277 return VINF_SUCCESS;
1278}
1279
1280
1281/**
1282 * @callback_method_impl{FNSSMINTLOADEXEC}
1283 */
1284static DECLCALLBACK(int) cpumR3LoadExec(PVM pVM, PSSMHANDLE pSSM, uint32_t uVersion, uint32_t uPass)
1285{
1286 int rc; /* Only for AssertRCReturn use. */
1287
1288 /*
1289 * Validate version.
1290 */
1291 if ( uVersion != CPUM_SAVED_STATE_VERSION_XSAVE
1292 && uVersion != CPUM_SAVED_STATE_VERSION_GOOD_CPUID_COUNT
1293 && uVersion != CPUM_SAVED_STATE_VERSION_BAD_CPUID_COUNT
1294 && uVersion != CPUM_SAVED_STATE_VERSION_PUT_STRUCT
1295 && uVersion != CPUM_SAVED_STATE_VERSION_MEM
1296 && uVersion != CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE
1297 && uVersion != CPUM_SAVED_STATE_VERSION_VER3_2
1298 && uVersion != CPUM_SAVED_STATE_VERSION_VER3_0
1299 && uVersion != CPUM_SAVED_STATE_VERSION_VER2_1_NOMSR
1300 && uVersion != CPUM_SAVED_STATE_VERSION_VER2_0
1301 && uVersion != CPUM_SAVED_STATE_VERSION_VER1_6)
1302 {
1303 AssertMsgFailed(("cpumR3LoadExec: Invalid version uVersion=%d!\n", uVersion));
1304 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
1305 }
1306
1307 if (uPass == SSM_PASS_FINAL)
1308 {
1309 /*
1310 * Set the size of RTGCPTR for SSMR3GetGCPtr. (Only necessary for
1311 * really old SSM file versions.)
1312 */
1313 if (uVersion == CPUM_SAVED_STATE_VERSION_VER1_6)
1314 SSMR3HandleSetGCPtrSize(pSSM, sizeof(RTGCPTR32));
1315 else if (uVersion <= CPUM_SAVED_STATE_VERSION_VER3_0)
1316 SSMR3HandleSetGCPtrSize(pSSM, HC_ARCH_BITS == 32 ? sizeof(RTGCPTR32) : sizeof(RTGCPTR));
1317
1318 /*
1319 * Figure x86 and ctx field definitions to use for older states.
1320 */
1321 uint32_t const fLoad = uVersion > CPUM_SAVED_STATE_VERSION_MEM ? 0 : SSMSTRUCT_FLAGS_MEM_BAND_AID_RELAXED;
1322 PCSSMFIELD paCpumCtx1Fields = g_aCpumX87Fields;
1323 PCSSMFIELD paCpumCtx2Fields = g_aCpumCtxFields;
1324 if (uVersion == CPUM_SAVED_STATE_VERSION_VER1_6)
1325 {
1326 paCpumCtx1Fields = g_aCpumX87FieldsV16;
1327 paCpumCtx2Fields = g_aCpumCtxFieldsV16;
1328 }
1329 else if (uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
1330 {
1331 paCpumCtx1Fields = g_aCpumX87FieldsMem;
1332 paCpumCtx2Fields = g_aCpumCtxFieldsMem;
1333 }
1334
1335 /*
1336 * The hyper state used to precede the CPU count. Starting with
1337 * XSAVE it was moved down until after we've got the count.
1338 */
1339 if (uVersion < CPUM_SAVED_STATE_VERSION_XSAVE)
1340 {
1341 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
1342 {
1343 PVMCPU pVCpu = &pVM->aCpus[iCpu];
1344 X86FXSTATE Ign;
1345 SSMR3GetStructEx(pSSM, &Ign, sizeof(Ign), fLoad | SSMSTRUCT_FLAGS_NO_TAIL_MARKER, paCpumCtx1Fields, NULL);
1346 uint64_t uCR3 = pVCpu->cpum.s.Hyper.cr3;
1347 uint64_t uRSP = pVCpu->cpum.s.Hyper.rsp; /* see VMMR3Relocate(). */
1348 SSMR3GetStructEx(pSSM, &pVCpu->cpum.s.Hyper, sizeof(pVCpu->cpum.s.Hyper),
1349 fLoad | SSMSTRUCT_FLAGS_NO_LEAD_MARKER, paCpumCtx2Fields, NULL);
1350 pVCpu->cpum.s.Hyper.cr3 = uCR3;
1351 pVCpu->cpum.s.Hyper.rsp = uRSP;
1352 }
1353 }
1354
1355 if (uVersion >= CPUM_SAVED_STATE_VERSION_VER2_1_NOMSR)
1356 {
1357 uint32_t cCpus;
1358 rc = SSMR3GetU32(pSSM, &cCpus); AssertRCReturn(rc, rc);
1359 AssertLogRelMsgReturn(cCpus == pVM->cCpus, ("Mismatching CPU counts: saved: %u; configured: %u \n", cCpus, pVM->cCpus),
1360 VERR_SSM_UNEXPECTED_DATA);
1361 }
1362 AssertLogRelMsgReturn( uVersion > CPUM_SAVED_STATE_VERSION_VER2_0
1363 || pVM->cCpus == 1,
1364 ("cCpus=%u\n", pVM->cCpus),
1365 VERR_SSM_UNEXPECTED_DATA);
1366
1367 uint32_t cbMsrs = 0;
1368 if (uVersion > CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE)
1369 {
1370 rc = SSMR3GetU32(pSSM, &cbMsrs); AssertRCReturn(rc, rc);
1371 AssertLogRelMsgReturn(RT_ALIGN(cbMsrs, sizeof(uint64_t)) == cbMsrs, ("Size of MSRs is misaligned: %#x\n", cbMsrs),
1372 VERR_SSM_UNEXPECTED_DATA);
1373 AssertLogRelMsgReturn(cbMsrs <= sizeof(CPUMCTXMSRS) && cbMsrs > 0, ("Size of MSRs is out of range: %#x\n", cbMsrs),
1374 VERR_SSM_UNEXPECTED_DATA);
1375 }
1376
1377 /*
1378 * Do the per-CPU restoring.
1379 */
1380 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
1381 {
1382 PVMCPU pVCpu = &pVM->aCpus[iCpu];
1383 PCPUMCTX pGstCtx = &pVCpu->cpum.s.Guest;
1384
1385 if (uVersion >= CPUM_SAVED_STATE_VERSION_XSAVE)
1386 {
1387 /*
1388 * The XSAVE saved state layout moved the hyper state down here.
1389 */
1390 uint64_t uCR3 = pVCpu->cpum.s.Hyper.cr3;
1391 uint64_t uRSP = pVCpu->cpum.s.Hyper.rsp; /* see VMMR3Relocate(). */
1392 rc = SSMR3GetStructEx(pSSM, &pVCpu->cpum.s.Hyper, sizeof(pVCpu->cpum.s.Hyper), 0, g_aCpumCtxFields, NULL);
1393 pVCpu->cpum.s.Hyper.cr3 = uCR3;
1394 pVCpu->cpum.s.Hyper.rsp = uRSP;
1395 AssertRCReturn(rc, rc);
1396
1397 /*
1398 * Start by restoring the CPUMCTX structure and the X86FXSAVE bits of the extended state.
1399 */
1400 rc = SSMR3GetStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), 0, g_aCpumCtxFields, NULL);
1401 rc = SSMR3GetStructEx(pSSM, &pGstCtx->pXStateR3->x87, sizeof(pGstCtx->pXStateR3->x87), 0, g_aCpumX87Fields, NULL);
1402 AssertRCReturn(rc, rc);
1403
1404 /* Check that the xsave/xrstor mask is valid (invalid results in #GP). */
1405 if (pGstCtx->fXStateMask != 0)
1406 {
1407 AssertLogRelMsgReturn(!(pGstCtx->fXStateMask & ~pVM->cpum.s.fXStateGuestMask),
1408 ("fXStateMask=%#RX64 fXStateGuestMask=%#RX64\n",
1409 pGstCtx->fXStateMask, pVM->cpum.s.fXStateGuestMask),
1410 VERR_CPUM_INCOMPATIBLE_XSAVE_COMP_MASK);
1411 AssertLogRelMsgReturn(pGstCtx->fXStateMask & XSAVE_C_X87,
1412 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
1413 AssertLogRelMsgReturn((pGstCtx->fXStateMask & (XSAVE_C_SSE | XSAVE_C_YMM)) != XSAVE_C_YMM,
1414 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
1415 AssertLogRelMsgReturn( (pGstCtx->fXStateMask & (XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI)) == 0
1416 || (pGstCtx->fXStateMask & (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI))
1417 == (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI),
1418 ("fXStateMask=%#RX64\n", pGstCtx->fXStateMask), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
1419 }
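            /* (A summary of what the checks above enforce -- they mirror the
               architectural XSETBV/XRSTOR constraints: the x87 bit must always
               be set, YMM state requires SSE state, and the three AVX-512
               components are only valid all together with both SSE and YMM.) */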
1420
1421 /* Check that the XCR0 mask is valid (invalid results in #GP). */
1422 AssertLogRelMsgReturn(pGstCtx->aXcr[0] & XSAVE_C_X87, ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XCR0);
1423 if (pGstCtx->aXcr[0] != XSAVE_C_X87)
1424 {
1425 AssertLogRelMsgReturn(!(pGstCtx->aXcr[0] & ~(pGstCtx->fXStateMask | XSAVE_C_X87)),
1426 ("xcr0=%#RX64 fXStateMask=%#RX64\n", pGstCtx->aXcr[0], pGstCtx->fXStateMask),
1427 VERR_CPUM_INVALID_XCR0);
1428 AssertLogRelMsgReturn(pGstCtx->aXcr[0] & XSAVE_C_X87,
1429 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
1430 AssertLogRelMsgReturn((pGstCtx->aXcr[0] & (XSAVE_C_SSE | XSAVE_C_YMM)) != XSAVE_C_YMM,
1431 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
1432 AssertLogRelMsgReturn( (pGstCtx->aXcr[0] & (XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI)) == 0
1433 || (pGstCtx->aXcr[0] & (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI))
1434 == (XSAVE_C_SSE | XSAVE_C_YMM | XSAVE_C_OPMASK | XSAVE_C_ZMM_HI256 | XSAVE_C_ZMM_16HI),
1435 ("xcr0=%#RX64\n", pGstCtx->aXcr[0]), VERR_CPUM_INVALID_XSAVE_COMP_MASK);
1436 }
1437
1438 /* Check that XCR1 is zero, as we don't implement it yet. */
1439 AssertLogRelMsgReturn(!pGstCtx->aXcr[1], ("xcr1=%#RX64\n", pGstCtx->aXcr[1]), VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
1440
1441 /*
1442 * Restore the individual extended state components we support.
1443 */
1444 if (pGstCtx->fXStateMask != 0)
1445 {
1446 rc = SSMR3GetStructEx(pSSM, &pGstCtx->pXStateR3->Hdr, sizeof(pGstCtx->pXStateR3->Hdr),
1447 0, g_aCpumXSaveHdrFields, NULL);
1448 AssertRCReturn(rc, rc);
1449 AssertLogRelMsgReturn(!(pGstCtx->pXStateR3->Hdr.bmXState & ~pGstCtx->fXStateMask),
1450 ("bmXState=%#RX64 fXStateMask=%#RX64\n",
1451 pGstCtx->pXStateR3->Hdr.bmXState, pGstCtx->fXStateMask),
1452 VERR_CPUM_INVALID_XSAVE_HDR);
1453 }
1454 if (pGstCtx->fXStateMask & XSAVE_C_YMM)
1455 {
1456 PX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_YMM_BIT, PX86XSAVEYMMHI);
1457 SSMR3GetStructEx(pSSM, pYmmHiCtx, sizeof(*pYmmHiCtx), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumYmmHiFields, NULL);
1458 }
1459 if (pGstCtx->fXStateMask & XSAVE_C_BNDREGS)
1460 {
1461 PX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDREGS_BIT, PX86XSAVEBNDREGS);
1462 SSMR3GetStructEx(pSSM, pBndRegs, sizeof(*pBndRegs), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndRegsFields, NULL);
1463 }
1464 if (pGstCtx->fXStateMask & XSAVE_C_BNDCSR)
1465 {
1466 PX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_BNDCSR_BIT, PX86XSAVEBNDCFG);
1467 SSMR3GetStructEx(pSSM, pBndCfg, sizeof(*pBndCfg), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumBndCfgFields, NULL);
1468 }
1469 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_HI256)
1470 {
1471 PX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_HI256_BIT, PX86XSAVEZMMHI256);
1472 SSMR3GetStructEx(pSSM, pZmmHi256, sizeof(*pZmmHi256), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmmHi256Fields, NULL);
1473 }
1474 if (pGstCtx->fXStateMask & XSAVE_C_ZMM_16HI)
1475 {
1476 PX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pGstCtx, XSAVE_C_ZMM_16HI_BIT, PX86XSAVEZMM16HI);
1477 SSMR3GetStructEx(pSSM, pZmm16Hi, sizeof(*pZmm16Hi), SSMSTRUCT_FLAGS_FULL_STRUCT, g_aCpumZmm16HiFields, NULL);
1478 }
1479 }
1480 else
1481 {
1482 /*
1483 * Pre XSAVE saved state.
1484 */
1485 SSMR3GetStructEx(pSSM, &pGstCtx->pXStateR3->x87, sizeof(pGstCtx->pXStateR3->x87),
1486 fLoad | SSMSTRUCT_FLAGS_NO_TAIL_MARKER, paCpumCtx1Fields, NULL);
1487 SSMR3GetStructEx(pSSM, pGstCtx, sizeof(*pGstCtx), fLoad | SSMSTRUCT_FLAGS_NO_LEAD_MARKER, paCpumCtx2Fields, NULL);
1488 }
1489
1490 /*
1491 * Restore a couple of flags and the MSRs.
1492 */
1493 SSMR3GetU32(pSSM, &pVCpu->cpum.s.fUseFlags);
1494 SSMR3GetU32(pSSM, &pVCpu->cpum.s.fChanged);
1495
1496 rc = VINF_SUCCESS;
1497 if (uVersion > CPUM_SAVED_STATE_VERSION_NO_MSR_SIZE)
1498 rc = SSMR3GetMem(pSSM, &pVCpu->cpum.s.GuestMsrs.au64[0], cbMsrs);
1499 else if (uVersion >= CPUM_SAVED_STATE_VERSION_VER3_0)
1500 {
1501 SSMR3GetMem(pSSM, &pVCpu->cpum.s.GuestMsrs.au64[0], 2 * sizeof(uint64_t)); /* Restore two MSRs. */
1502 rc = SSMR3Skip(pSSM, 62 * sizeof(uint64_t));
1503 }
1504 AssertRCReturn(rc, rc);
1505
1506 /* REM and others may have cleared must-be-one fields in DR6 and
1507 DR7; fix these. */
1508 pGstCtx->dr[6] &= ~(X86_DR6_RAZ_MASK | X86_DR6_MBZ_MASK);
1509 pGstCtx->dr[6] |= X86_DR6_RA1_MASK;
1510 pGstCtx->dr[7] &= ~(X86_DR7_RAZ_MASK | X86_DR7_MBZ_MASK);
1511 pGstCtx->dr[7] |= X86_DR7_RA1_MASK;
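            /* (E.g. a DR6 of zero from an old state becomes X86_DR6_RA1_MASK
               here: masking out the read-as-zero/must-be-zero bits and OR-ing
               in the must-be-one bits yields values the guest could actually
               observe on real hardware.) */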
1512 }
1513
1514 /* Older saved states do not have the internal selector register flags
1515 and valid selector values. Supply those. */
1516 if (uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
1517 {
1518 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
1519 {
1520 PVMCPU pVCpu = &pVM->aCpus[iCpu];
1521 bool const fValid = HMIsEnabled(pVM)
1522 || ( uVersion > CPUM_SAVED_STATE_VERSION_VER3_2
1523 && !(pVCpu->cpum.s.fChanged & CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID));
1524 PCPUMSELREG paSelReg = CPUMCTX_FIRST_SREG(&pVCpu->cpum.s.Guest);
1525 if (fValid)
1526 {
1527 for (uint32_t iSelReg = 0; iSelReg < X86_SREG_COUNT; iSelReg++)
1528 {
1529 paSelReg[iSelReg].fFlags = CPUMSELREG_FLAGS_VALID;
1530 paSelReg[iSelReg].ValidSel = paSelReg[iSelReg].Sel;
1531 }
1532
1533 pVCpu->cpum.s.Guest.ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
1534 pVCpu->cpum.s.Guest.ldtr.ValidSel = pVCpu->cpum.s.Guest.ldtr.Sel;
1535 }
1536 else
1537 {
1538 for (uint32_t iSelReg = 0; iSelReg < X86_SREG_COUNT; iSelReg++)
1539 {
1540 paSelReg[iSelReg].fFlags = 0;
1541 paSelReg[iSelReg].ValidSel = 0;
1542 }
1543
1544 /* This might not be 104% correct, but I think it's close
1545 enough for all practical purposes... (REM always loaded
1546 the LDTR register.) */
1547 pVCpu->cpum.s.Guest.ldtr.fFlags = CPUMSELREG_FLAGS_VALID;
1548 pVCpu->cpum.s.Guest.ldtr.ValidSel = pVCpu->cpum.s.Guest.ldtr.Sel;
1549 }
1550 pVCpu->cpum.s.Guest.tr.fFlags = CPUMSELREG_FLAGS_VALID;
1551 pVCpu->cpum.s.Guest.tr.ValidSel = pVCpu->cpum.s.Guest.tr.Sel;
1552 }
1553 }
1554
1555 /* Clear CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID. */
1556 if ( uVersion > CPUM_SAVED_STATE_VERSION_VER3_2
1557 && uVersion <= CPUM_SAVED_STATE_VERSION_MEM)
1558 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
1559 pVM->aCpus[iCpu].cpum.s.fChanged &= ~CPUM_CHANGED_HIDDEN_SEL_REGS_INVALID;
1560
1561 /*
1562 * A quick sanity check.
1563 */
1564 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
1565 {
1566 PVMCPU pVCpu = &pVM->aCpus[iCpu];
1567 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.es.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
1568 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.cs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
1569 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.ss.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
1570 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.ds.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
1571 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.fs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
1572 AssertLogRelReturn(!(pVCpu->cpum.s.Guest.gs.fFlags & ~CPUMSELREG_FLAGS_VALID_MASK), VERR_SSM_UNEXPECTED_DATA);
1573 }
1574 }
1575
1576 pVM->cpum.s.fPendingRestore = false;
1577
1578 /*
1579 * Guest CPUIDs.
1580 */
1581 if (uVersion >= CPUM_SAVED_STATE_VERSION_VER3_2)
1582 return cpumR3LoadCpuId(pVM, pSSM, uVersion);
1583 return cpumR3LoadCpuIdPre32(pVM, pSSM, uVersion);
1584}
1585
1586
1587/**
1588 * @callback_method_impl{FNSSMINTLOADDONE}
1589 */
1590static DECLCALLBACK(int) cpumR3LoadDone(PVM pVM, PSSMHANDLE pSSM)
1591{
1592 if (RT_FAILURE(SSMR3HandleGetStatus(pSSM)))
1593 return VINF_SUCCESS;
1594
1595 /* Just check this since we can. */ /** @todo Add an SSM unit flag for indicating that it's mandatory during a restore. */
1596 if (pVM->cpum.s.fPendingRestore)
1597 {
1598 LogRel(("CPUM: Missing state!\n"));
1599 return VERR_INTERNAL_ERROR_2;
1600 }
1601
1602 bool const fSupportsLongMode = VMR3IsLongModeAllowed(pVM);
1603 for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
1604 {
1605 PVMCPU pVCpu = &pVM->aCpus[idCpu];
1606
1607 /* Notify PGM of the NXE states in case they've changed. */
1608 PGMNotifyNxeChanged(pVCpu, RT_BOOL(pVCpu->cpum.s.Guest.msrEFER & MSR_K6_EFER_NXE));
1609
1610 /* Cache the local APIC base from the APIC device. During init. this is done in CPUMR3ResetCpu(). */
1611 PDMApicGetBaseMsr(pVCpu, &pVCpu->cpum.s.Guest.msrApicBase, true /* fIgnoreErrors */);
1612#ifdef VBOX_WITH_NEW_APIC
1613 LogRel(("CPUM: VCPU%3d: Cached APIC base MSR = %#RX64\n", idCpu, pVCpu->cpum.s.Guest.msrApicBase));
1614#endif
1615
1616 /* During init. this is done in CPUMR3InitCompleted(). */
1617 if (fSupportsLongMode)
1618 pVCpu->cpum.s.fUseFlags |= CPUM_USE_SUPPORTS_LONGMODE;
1619 }
1620 return VINF_SUCCESS;
1621}
1622
1623
1624/**
1625 * Checks if the CPUM state restore is still pending.
1626 *
1627 * @returns true / false.
1628 * @param pVM The cross context VM structure.
1629 */
1630VMMDECL(bool) CPUMR3IsStateRestorePending(PVM pVM)
1631{
1632 return pVM->cpum.s.fPendingRestore;
1633}
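
/* A minimal caller sketch (a hypothetical call site, not part of this file):
 * code that needs the guest context early can bail out while the restore is
 * still outstanding:
 *
 *     if (CPUMR3IsStateRestorePending(pVM))
 *         return VERR_VM_INVALID_VM_STATE;
 */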
1634
1635
1636/**
1637 * Formats the EFLAGS value into mnemonics.
1638 *
1639 * @param pszEFlags Where to write the mnemonics. (Assumes sufficient buffer space.)
1640 * @param efl The EFLAGS value.
1641 */
1642static void cpumR3InfoFormatFlags(char *pszEFlags, uint32_t efl)
1643{
1644 /*
1645 * Format the flags.
1646 */
1647 static const struct
1648 {
1649 const char *pszSet; const char *pszClear; uint32_t fFlag;
1650 } s_aFlags[] =
1651 {
1652 { "vip",NULL, X86_EFL_VIP },
1653 { "vif",NULL, X86_EFL_VIF },
1654 { "ac", NULL, X86_EFL_AC },
1655 { "vm", NULL, X86_EFL_VM },
1656 { "rf", NULL, X86_EFL_RF },
1657 { "nt", NULL, X86_EFL_NT },
1658 { "ov", "nv", X86_EFL_OF },
1659 { "dn", "up", X86_EFL_DF },
1660 { "ei", "di", X86_EFL_IF },
1661 { "tf", NULL, X86_EFL_TF },
1662 { "nt", "pl", X86_EFL_SF },
1663 { "nz", "zr", X86_EFL_ZF },
1664 { "ac", "na", X86_EFL_AF },
1665 { "po", "pe", X86_EFL_PF },
1666 { "cy", "nc", X86_EFL_CF },
1667 };
1668 char *psz = pszEFlags;
1669 for (unsigned i = 0; i < RT_ELEMENTS(s_aFlags); i++)
1670 {
1671 const char *pszAdd = s_aFlags[i].fFlag & efl ? s_aFlags[i].pszSet : s_aFlags[i].pszClear;
1672 if (pszAdd)
1673 {
1674 strcpy(psz, pszAdd);
1675 psz += strlen(pszAdd);
1676 *psz++ = ' ';
1677 }
1678 }
1679 psz[-1] = '\0';
1680}
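
/* A quick usage sketch (hypothetical caller; the 80 byte buffer matches the
 * call sites below):
 *
 *     char szEFlags[80];
 *     cpumR3InfoFormatFlags(szEFlags, 0x246);
 *     // szEFlags now reads "nv up ei pl zr na pe nc", i.e. the classic
 *     // debug.exe style mnemonics for IF|ZF|PF set and all else clear.
 */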
1681
1682
1683/**
1684 * Formats a full register dump.
1685 *
1686 * @param pVM The cross context VM structure.
1687 * @param pCtx The context to format.
1688 * @param pCtxCore The context core to format.
1689 * @param pHlp Output functions.
1690 * @param enmType The dump type.
1691 * @param pszPrefix Register name prefix.
1692 */
1693static void cpumR3InfoOne(PVM pVM, PCPUMCTX pCtx, PCCPUMCTXCORE pCtxCore, PCDBGFINFOHLP pHlp, CPUMDUMPTYPE enmType,
1694 const char *pszPrefix)
1695{
1696 NOREF(pVM);
1697
1698 /*
1699 * Format the EFLAGS.
1700 */
1701 uint32_t efl = pCtxCore->eflags.u32;
1702 char szEFlags[80];
1703 cpumR3InfoFormatFlags(&szEFlags[0], efl);
1704
1705 /*
1706 * Format the registers.
1707 */
1708 switch (enmType)
1709 {
1710 case CPUMDUMPTYPE_TERSE:
1711 if (CPUMIsGuestIn64BitCodeEx(pCtx))
1712 pHlp->pfnPrintf(pHlp,
1713 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
1714 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
1715 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
1716 "%sr14=%016RX64 %sr15=%016RX64\n"
1717 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
1718 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %seflags=%08x\n",
1719 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
1720 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
1721 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
1722 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
1723 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
1724 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, efl);
1725 else
1726 pHlp->pfnPrintf(pHlp,
1727 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
1728 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
1729 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %seflags=%08x\n",
1730 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
1731 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
1732 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
1733 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, efl);
1734 break;
1735
1736 case CPUMDUMPTYPE_DEFAULT:
1737 if (CPUMIsGuestIn64BitCodeEx(pCtx))
1738 pHlp->pfnPrintf(pHlp,
1739 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
1740 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
1741 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
1742 "%sr14=%016RX64 %sr15=%016RX64\n"
1743 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
1744 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %str=%04x %seflags=%08x\n"
1745 "%scr0=%08RX64 %scr2=%08RX64 %scr3=%08RX64 %scr4=%08RX64 %sgdtr=%016RX64:%04x %sldtr=%04x\n"
1746 ,
1747 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
1748 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
1749 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
1750 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
1751 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
1752 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, pCtx->tr.Sel, pszPrefix, efl,
1753 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
1754 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->ldtr.Sel);
1755 else
1756 pHlp->pfnPrintf(pHlp,
1757 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
1758 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
1759 "%scs=%04x %sss=%04x %sds=%04x %ses=%04x %sfs=%04x %sgs=%04x %str=%04x %seflags=%08x\n"
1760 "%scr0=%08RX64 %scr2=%08RX64 %scr3=%08RX64 %scr4=%08RX64 %sgdtr=%08RX64:%04x %sldtr=%04x\n"
1761 ,
1762 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
1763 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
1764 pszPrefix, pCtxCore->cs.Sel, pszPrefix, pCtxCore->ss.Sel, pszPrefix, pCtxCore->ds.Sel, pszPrefix, pCtxCore->es.Sel,
1765 pszPrefix, pCtxCore->fs.Sel, pszPrefix, pCtxCore->gs.Sel, pszPrefix, pCtx->tr.Sel, pszPrefix, efl,
1766 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
1767 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->ldtr.Sel);
1768 break;
1769
1770 case CPUMDUMPTYPE_VERBOSE:
1771 if (CPUMIsGuestIn64BitCodeEx(pCtx))
1772 pHlp->pfnPrintf(pHlp,
1773 "%srax=%016RX64 %srbx=%016RX64 %srcx=%016RX64 %srdx=%016RX64\n"
1774 "%srsi=%016RX64 %srdi=%016RX64 %sr8 =%016RX64 %sr9 =%016RX64\n"
1775 "%sr10=%016RX64 %sr11=%016RX64 %sr12=%016RX64 %sr13=%016RX64\n"
1776 "%sr14=%016RX64 %sr15=%016RX64\n"
1777 "%srip=%016RX64 %srsp=%016RX64 %srbp=%016RX64 %siopl=%d %*s\n"
1778 "%scs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
1779 "%sds={%04x base=%016RX64 limit=%08x flags=%08x}\n"
1780 "%ses={%04x base=%016RX64 limit=%08x flags=%08x}\n"
1781 "%sfs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
1782 "%sgs={%04x base=%016RX64 limit=%08x flags=%08x}\n"
1783 "%sss={%04x base=%016RX64 limit=%08x flags=%08x}\n"
1784 "%scr0=%016RX64 %scr2=%016RX64 %scr3=%016RX64 %scr4=%016RX64\n"
1785 "%sdr0=%016RX64 %sdr1=%016RX64 %sdr2=%016RX64 %sdr3=%016RX64\n"
1786 "%sdr4=%016RX64 %sdr5=%016RX64 %sdr6=%016RX64 %sdr7=%016RX64\n"
1787 "%sgdtr=%016RX64:%04x %sidtr=%016RX64:%04x %seflags=%08x\n"
1788 "%sldtr={%04x base=%08RX64 limit=%08x flags=%08x}\n"
1789 "%str ={%04x base=%08RX64 limit=%08x flags=%08x}\n"
1790 "%sSysEnter={cs=%04llx eip=%016RX64 esp=%016RX64}\n"
1791 ,
1792 pszPrefix, pCtxCore->rax, pszPrefix, pCtxCore->rbx, pszPrefix, pCtxCore->rcx, pszPrefix, pCtxCore->rdx, pszPrefix, pCtxCore->rsi, pszPrefix, pCtxCore->rdi,
1793 pszPrefix, pCtxCore->r8, pszPrefix, pCtxCore->r9, pszPrefix, pCtxCore->r10, pszPrefix, pCtxCore->r11, pszPrefix, pCtxCore->r12, pszPrefix, pCtxCore->r13,
1794 pszPrefix, pCtxCore->r14, pszPrefix, pCtxCore->r15,
1795 pszPrefix, pCtxCore->rip, pszPrefix, pCtxCore->rsp, pszPrefix, pCtxCore->rbp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
1796 pszPrefix, pCtxCore->cs.Sel, pCtx->cs.u64Base, pCtx->cs.u32Limit, pCtx->cs.Attr.u,
1797 pszPrefix, pCtxCore->ds.Sel, pCtx->ds.u64Base, pCtx->ds.u32Limit, pCtx->ds.Attr.u,
1798 pszPrefix, pCtxCore->es.Sel, pCtx->es.u64Base, pCtx->es.u32Limit, pCtx->es.Attr.u,
1799 pszPrefix, pCtxCore->fs.Sel, pCtx->fs.u64Base, pCtx->fs.u32Limit, pCtx->fs.Attr.u,
1800 pszPrefix, pCtxCore->gs.Sel, pCtx->gs.u64Base, pCtx->gs.u32Limit, pCtx->gs.Attr.u,
1801 pszPrefix, pCtxCore->ss.Sel, pCtx->ss.u64Base, pCtx->ss.u32Limit, pCtx->ss.Attr.u,
1802 pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
1803 pszPrefix, pCtx->dr[0], pszPrefix, pCtx->dr[1], pszPrefix, pCtx->dr[2], pszPrefix, pCtx->dr[3],
1804 pszPrefix, pCtx->dr[4], pszPrefix, pCtx->dr[5], pszPrefix, pCtx->dr[6], pszPrefix, pCtx->dr[7],
1805 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->idtr.pIdt, pCtx->idtr.cbIdt, pszPrefix, efl,
1806 pszPrefix, pCtx->ldtr.Sel, pCtx->ldtr.u64Base, pCtx->ldtr.u32Limit, pCtx->ldtr.Attr.u,
1807 pszPrefix, pCtx->tr.Sel, pCtx->tr.u64Base, pCtx->tr.u32Limit, pCtx->tr.Attr.u,
1808 pszPrefix, pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
1809 else
1810 pHlp->pfnPrintf(pHlp,
1811 "%seax=%08x %sebx=%08x %secx=%08x %sedx=%08x %sesi=%08x %sedi=%08x\n"
1812 "%seip=%08x %sesp=%08x %sebp=%08x %siopl=%d %*s\n"
1813 "%scs={%04x base=%016RX64 limit=%08x flags=%08x} %sdr0=%08RX64 %sdr1=%08RX64\n"
1814 "%sds={%04x base=%016RX64 limit=%08x flags=%08x} %sdr2=%08RX64 %sdr3=%08RX64\n"
1815 "%ses={%04x base=%016RX64 limit=%08x flags=%08x} %sdr4=%08RX64 %sdr5=%08RX64\n"
1816 "%sfs={%04x base=%016RX64 limit=%08x flags=%08x} %sdr6=%08RX64 %sdr7=%08RX64\n"
1817 "%sgs={%04x base=%016RX64 limit=%08x flags=%08x} %scr0=%08RX64 %scr2=%08RX64\n"
1818 "%sss={%04x base=%016RX64 limit=%08x flags=%08x} %scr3=%08RX64 %scr4=%08RX64\n"
1819 "%sgdtr=%016RX64:%04x %sidtr=%016RX64:%04x %seflags=%08x\n"
1820 "%sldtr={%04x base=%08RX64 limit=%08x flags=%08x}\n"
1821 "%str ={%04x base=%08RX64 limit=%08x flags=%08x}\n"
1822 "%sSysEnter={cs=%04llx eip=%08llx esp=%08llx}\n"
1823 ,
1824 pszPrefix, pCtxCore->eax, pszPrefix, pCtxCore->ebx, pszPrefix, pCtxCore->ecx, pszPrefix, pCtxCore->edx, pszPrefix, pCtxCore->esi, pszPrefix, pCtxCore->edi,
1825 pszPrefix, pCtxCore->eip, pszPrefix, pCtxCore->esp, pszPrefix, pCtxCore->ebp, pszPrefix, X86_EFL_GET_IOPL(efl), *pszPrefix ? 33 : 31, szEFlags,
1826 pszPrefix, pCtxCore->cs.Sel, pCtx->cs.u64Base, pCtx->cs.u32Limit, pCtx->cs.Attr.u, pszPrefix, pCtx->dr[0], pszPrefix, pCtx->dr[1],
1827 pszPrefix, pCtxCore->ds.Sel, pCtx->ds.u64Base, pCtx->ds.u32Limit, pCtx->ds.Attr.u, pszPrefix, pCtx->dr[2], pszPrefix, pCtx->dr[3],
1828 pszPrefix, pCtxCore->es.Sel, pCtx->es.u64Base, pCtx->es.u32Limit, pCtx->es.Attr.u, pszPrefix, pCtx->dr[4], pszPrefix, pCtx->dr[5],
1829 pszPrefix, pCtxCore->fs.Sel, pCtx->fs.u64Base, pCtx->fs.u32Limit, pCtx->fs.Attr.u, pszPrefix, pCtx->dr[6], pszPrefix, pCtx->dr[7],
1830 pszPrefix, pCtxCore->gs.Sel, pCtx->gs.u64Base, pCtx->gs.u32Limit, pCtx->gs.Attr.u, pszPrefix, pCtx->cr0, pszPrefix, pCtx->cr2,
1831 pszPrefix, pCtxCore->ss.Sel, pCtx->ss.u64Base, pCtx->ss.u32Limit, pCtx->ss.Attr.u, pszPrefix, pCtx->cr3, pszPrefix, pCtx->cr4,
1832 pszPrefix, pCtx->gdtr.pGdt, pCtx->gdtr.cbGdt, pszPrefix, pCtx->idtr.pIdt, pCtx->idtr.cbIdt, pszPrefix, efl,
1833 pszPrefix, pCtx->ldtr.Sel, pCtx->ldtr.u64Base, pCtx->ldtr.u32Limit, pCtx->ldtr.Attr.u,
1834 pszPrefix, pCtx->tr.Sel, pCtx->tr.u64Base, pCtx->tr.u32Limit, pCtx->tr.Attr.u,
1835 pszPrefix, pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
1836
1837 pHlp->pfnPrintf(pHlp, "%sxcr=%016RX64 %sxcr1=%016RX64 %sxss=%016RX64 (fXStateMask=%016RX64)\n",
1838 pszPrefix, pCtx->aXcr[0], pszPrefix, pCtx->aXcr[1],
1839 pszPrefix, UINT64_C(0) /** @todo XSS */, pCtx->fXStateMask);
1840 if (pCtx->CTX_SUFF(pXState))
1841 {
1842 PX86FXSTATE pFpuCtx = &pCtx->CTX_SUFF(pXState)->x87;
1843 pHlp->pfnPrintf(pHlp,
1844 "%sFCW=%04x %sFSW=%04x %sFTW=%04x %sFOP=%04x %sMXCSR=%08x %sMXCSR_MASK=%08x\n"
1845 "%sFPUIP=%08x %sCS=%04x %sRsrvd1=%04x %sFPUDP=%08x %sDS=%04x %sRsvrd2=%04x\n"
1846 ,
1847 pszPrefix, pFpuCtx->FCW, pszPrefix, pFpuCtx->FSW, pszPrefix, pFpuCtx->FTW, pszPrefix, pFpuCtx->FOP,
1848 pszPrefix, pFpuCtx->MXCSR, pszPrefix, pFpuCtx->MXCSR_MASK,
1849 pszPrefix, pFpuCtx->FPUIP, pszPrefix, pFpuCtx->CS, pszPrefix, pFpuCtx->Rsrvd1,
1850 pszPrefix, pFpuCtx->FPUDP, pszPrefix, pFpuCtx->DS, pszPrefix, pFpuCtx->Rsrvd2
1851 );
1852 /*
1853 * The FSAVE style memory image contains ST(0)-ST(7) at increasing addresses,
1854 * not (FP)R0-7 as the Intel SDM suggests.
1855 */
1856 unsigned iShift = (pFpuCtx->FSW >> 11) & 7;
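            /* (iShift is FSW.TOP; e.g. with TOP=6, ST(0) prints as FPR6 and
               ST(2) wraps around to FPR0.) */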
1857 for (unsigned iST = 0; iST < RT_ELEMENTS(pFpuCtx->aRegs); iST++)
1858 {
1859 unsigned iFPR = (iST + iShift) % RT_ELEMENTS(pFpuCtx->aRegs);
1860 unsigned uTag = (pFpuCtx->FTW >> (2 * iFPR)) & 3;
1861 char chSign = pFpuCtx->aRegs[iST].au16[4] & 0x8000 ? '-' : '+';
1862 unsigned iInteger = (unsigned)(pFpuCtx->aRegs[iST].au64[0] >> 63);
1863 uint64_t u64Fraction = pFpuCtx->aRegs[iST].au64[0] & UINT64_C(0x7fffffffffffffff);
1864 int iExponent = pFpuCtx->aRegs[iST].au16[4] & 0x7fff;
1865 iExponent -= 16383; /* subtract bias */
1866 /** @todo This isn't entirely correct and needs more work! */
1867 pHlp->pfnPrintf(pHlp,
1868 "%sST(%u)=%sFPR%u={%04RX16'%08RX32'%08RX32} t%d %c%u.%022llu * 2 ^ %d (*)",
1869 pszPrefix, iST, pszPrefix, iFPR,
1870 pFpuCtx->aRegs[iST].au16[4], pFpuCtx->aRegs[iST].au32[1], pFpuCtx->aRegs[iST].au32[0],
1871 uTag, chSign, iInteger, u64Fraction, iExponent);
1872 if (pFpuCtx->aRegs[iST].au16[5] || pFpuCtx->aRegs[iST].au16[6] || pFpuCtx->aRegs[iST].au16[7])
1873 pHlp->pfnPrintf(pHlp, " res={%04RX16,%04RX16,%04RX16}\n",
1874 pFpuCtx->aRegs[iST].au16[5], pFpuCtx->aRegs[iST].au16[6], pFpuCtx->aRegs[iST].au16[7]);
1875 else
1876 pHlp->pfnPrintf(pHlp, "\n");
1877 }
1878
1879 /* XMM/YMM/ZMM registers. */
1880 if (pCtx->fXStateMask & XSAVE_C_YMM)
1881 {
1882 PCX86XSAVEYMMHI pYmmHiCtx = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_YMM_BIT, PCX86XSAVEYMMHI);
1883 if (!(pCtx->fXStateMask & XSAVE_C_ZMM_HI256))
1884 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
1885 pHlp->pfnPrintf(pHlp, "%sYMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
1886 pszPrefix, i, i < 10 ? " " : "",
1887 pYmmHiCtx->aYmmHi[i].au32[3],
1888 pYmmHiCtx->aYmmHi[i].au32[2],
1889 pYmmHiCtx->aYmmHi[i].au32[1],
1890 pYmmHiCtx->aYmmHi[i].au32[0],
1891 pFpuCtx->aXMM[i].au32[3],
1892 pFpuCtx->aXMM[i].au32[2],
1893 pFpuCtx->aXMM[i].au32[1],
1894 pFpuCtx->aXMM[i].au32[0]);
1895 else
1896 {
1897 PCX86XSAVEZMMHI256 pZmmHi256 = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_ZMM_HI256_BIT, PCX86XSAVEZMMHI256);
1898 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
1899 pHlp->pfnPrintf(pHlp,
1900 "%sZMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32''%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
1901 pszPrefix, i, i < 10 ? " " : "",
1902 pZmmHi256->aHi256Regs[i].au32[7],
1903 pZmmHi256->aHi256Regs[i].au32[6],
1904 pZmmHi256->aHi256Regs[i].au32[5],
1905 pZmmHi256->aHi256Regs[i].au32[4],
1906 pZmmHi256->aHi256Regs[i].au32[3],
1907 pZmmHi256->aHi256Regs[i].au32[2],
1908 pZmmHi256->aHi256Regs[i].au32[1],
1909 pZmmHi256->aHi256Regs[i].au32[0],
1910 pYmmHiCtx->aYmmHi[i].au32[3],
1911 pYmmHiCtx->aYmmHi[i].au32[2],
1912 pYmmHiCtx->aYmmHi[i].au32[1],
1913 pYmmHiCtx->aYmmHi[i].au32[0],
1914 pFpuCtx->aXMM[i].au32[3],
1915 pFpuCtx->aXMM[i].au32[2],
1916 pFpuCtx->aXMM[i].au32[1],
1917 pFpuCtx->aXMM[i].au32[0]);
1918
1919 PCX86XSAVEZMM16HI pZmm16Hi = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_ZMM_16HI_BIT, PCX86XSAVEZMM16HI);
1920 for (unsigned i = 0; i < RT_ELEMENTS(pZmm16Hi->aRegs); i++)
1921 pHlp->pfnPrintf(pHlp,
1922 "%sZMM%u=%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32''%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32'%08RX32\n",
1923 pszPrefix, i + 16,
1924 pZmm16Hi->aRegs[i].au32[15],
1925 pZmm16Hi->aRegs[i].au32[14],
1926 pZmm16Hi->aRegs[i].au32[13],
1927 pZmm16Hi->aRegs[i].au32[12],
1928 pZmm16Hi->aRegs[i].au32[11],
1929 pZmm16Hi->aRegs[i].au32[10],
1930 pZmm16Hi->aRegs[i].au32[9],
1931 pZmm16Hi->aRegs[i].au32[8],
1932 pZmm16Hi->aRegs[i].au32[7],
1933 pZmm16Hi->aRegs[i].au32[6],
1934 pZmm16Hi->aRegs[i].au32[5],
1935 pZmm16Hi->aRegs[i].au32[4],
1936 pZmm16Hi->aRegs[i].au32[3],
1937 pZmm16Hi->aRegs[i].au32[2],
1938 pZmm16Hi->aRegs[i].au32[1],
1939 pZmm16Hi->aRegs[i].au32[0]);
1940 }
1941 }
1942 else
1943 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->aXMM); i++)
1944 pHlp->pfnPrintf(pHlp,
1945 i & 1
1946 ? "%sXMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32\n"
1947 : "%sXMM%u%s=%08RX32'%08RX32'%08RX32'%08RX32 ",
1948 pszPrefix, i, i < 10 ? " " : "",
1949 pFpuCtx->aXMM[i].au32[3],
1950 pFpuCtx->aXMM[i].au32[2],
1951 pFpuCtx->aXMM[i].au32[1],
1952 pFpuCtx->aXMM[i].au32[0]);
1953
1954 if (pCtx->fXStateMask & XSAVE_C_OPMASK)
1955 {
1956 PCX86XSAVEOPMASK pOpMask = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_OPMASK_BIT, PCX86XSAVEOPMASK);
1957 for (unsigned i = 0; i < RT_ELEMENTS(pOpMask->aKRegs); i += 4)
1958 pHlp->pfnPrintf(pHlp, "%sK%u=%016RX64 %sK%u=%016RX64 %sK%u=%016RX64 %sK%u=%016RX64\n",
1959 pszPrefix, i + 0, pOpMask->aKRegs[i + 0],
1960 pszPrefix, i + 1, pOpMask->aKRegs[i + 1],
1961 pszPrefix, i + 2, pOpMask->aKRegs[i + 2],
1962 pszPrefix, i + 3, pOpMask->aKRegs[i + 3]);
1963 }
1964
1965 if (pCtx->fXStateMask & XSAVE_C_BNDREGS)
1966 {
1967 PCX86XSAVEBNDREGS pBndRegs = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_BNDREGS_BIT, PCX86XSAVEBNDREGS);
1968 for (unsigned i = 0; i < RT_ELEMENTS(pBndRegs->aRegs); i += 2)
1969 pHlp->pfnPrintf(pHlp, "%sBNDREG%u=%016RX64/%016RX64 %sBNDREG%u=%016RX64/%016RX64\n",
1970 pszPrefix, i, pBndRegs->aRegs[i].uLowerBound, pBndRegs->aRegs[i].uUpperBound,
1971 pszPrefix, i + 1, pBndRegs->aRegs[i + 1].uLowerBound, pBndRegs->aRegs[i + 1].uUpperBound);
1972 }
1973
1974 if (pCtx->fXStateMask & XSAVE_C_BNDCSR)
1975 {
1976 PCX86XSAVEBNDCFG pBndCfg = CPUMCTX_XSAVE_C_PTR(pCtx, XSAVE_C_BNDCSR_BIT, PCX86XSAVEBNDCFG);
1977 pHlp->pfnPrintf(pHlp, "%sBNDCFG.CONFIG=%016RX64 %sBNDCFG.STATUS=%016RX64\n",
1978 pszPrefix, pBndCfg->fConfig, pszPrefix, pBndCfg->fStatus);
1979 }
1980
1981 for (unsigned i = 0; i < RT_ELEMENTS(pFpuCtx->au32RsrvdRest); i++)
1982 if (pFpuCtx->au32RsrvdRest[i])
1983 pHlp->pfnPrintf(pHlp, "%sRsrvdRest[%u]=%RX32 (offset=%#x)\n",
1984 pszPrefix, i, pFpuCtx->au32RsrvdRest[i], RT_OFFSETOF(X86FXSTATE, au32RsrvdRest[i]) );
1985 }
1986
1987 pHlp->pfnPrintf(pHlp,
1988 "%sEFER =%016RX64\n"
1989 "%sPAT =%016RX64\n"
1990 "%sSTAR =%016RX64\n"
1991 "%sCSTAR =%016RX64\n"
1992 "%sLSTAR =%016RX64\n"
1993 "%sSFMASK =%016RX64\n"
1994 "%sKERNELGSBASE =%016RX64\n",
1995 pszPrefix, pCtx->msrEFER,
1996 pszPrefix, pCtx->msrPAT,
1997 pszPrefix, pCtx->msrSTAR,
1998 pszPrefix, pCtx->msrCSTAR,
1999 pszPrefix, pCtx->msrLSTAR,
2000 pszPrefix, pCtx->msrSFMASK,
2001 pszPrefix, pCtx->msrKERNELGSBASE);
2002 break;
2003 }
2004}
2005
2006
2007/**
2008 * Display all cpu states and any other cpum info.
2009 *
2010 * @param pVM The cross context VM structure.
2011 * @param pHlp The info helper functions.
2012 * @param pszArgs Arguments, passed on to the individual info handlers.
2013 */
2014static DECLCALLBACK(void) cpumR3InfoAll(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2015{
2016 cpumR3InfoGuest(pVM, pHlp, pszArgs);
2017 cpumR3InfoGuestInstr(pVM, pHlp, pszArgs);
2018 cpumR3InfoHyper(pVM, pHlp, pszArgs);
2019 cpumR3InfoHost(pVM, pHlp, pszArgs);
2020}
2021
2022
2023/**
2024 * Parses the info argument.
2025 *
2026 * The argument starts with 'verbose', 'terse' or 'default' and then
2027 * continues with the comment string.
2028 *
2029 * @param pszArgs The pointer to the argument string.
2030 * @param penmType Where to store the dump type request.
2031 * @param ppszComment Where to store the pointer to the comment string.
2032 */
2033static void cpumR3InfoParseArg(const char *pszArgs, CPUMDUMPTYPE *penmType, const char **ppszComment)
2034{
2035 if (!pszArgs)
2036 {
2037 *penmType = CPUMDUMPTYPE_DEFAULT;
2038 *ppszComment = "";
2039 }
2040 else
2041 {
2042 if (!strncmp(pszArgs, RT_STR_TUPLE("verbose")))
2043 {
2044 pszArgs += 7;
2045 *penmType = CPUMDUMPTYPE_VERBOSE;
2046 }
2047 else if (!strncmp(pszArgs, RT_STR_TUPLE("terse")))
2048 {
2049 pszArgs += 5;
2050 *penmType = CPUMDUMPTYPE_TERSE;
2051 }
2052 else if (!strncmp(pszArgs, RT_STR_TUPLE("default")))
2053 {
2054 pszArgs += 7;
2055 *penmType = CPUMDUMPTYPE_DEFAULT;
2056 }
2057 else
2058 *penmType = CPUMDUMPTYPE_DEFAULT;
2059 *ppszComment = RTStrStripL(pszArgs);
2060 }
2061}
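
/* Parsing examples (illustrative only):
 *     NULL            -> CPUMDUMPTYPE_DEFAULT, comment ""
 *     "verbose"       -> CPUMDUMPTYPE_VERBOSE, comment ""
 *     "terse my note" -> CPUMDUMPTYPE_TERSE,   comment "my note"
 *     "bogus"         -> CPUMDUMPTYPE_DEFAULT, comment "bogus"
 */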
2062
2063
2064/**
2065 * Display the guest cpu state.
2066 *
2067 * @param pVM The cross context VM structure.
2068 * @param pHlp The info helper functions.
2069 * @param pszArgs Arguments, see cpumR3InfoParseArg().
2070 */
2071static DECLCALLBACK(void) cpumR3InfoGuest(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2072{
2073 CPUMDUMPTYPE enmType;
2074 const char *pszComment;
2075 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
2076
2077 PVMCPU pVCpu = VMMGetCpu(pVM);
2078 if (!pVCpu)
2079 pVCpu = &pVM->aCpus[0];
2080
2081 pHlp->pfnPrintf(pHlp, "Guest CPUM (VCPU %d) state: %s\n", pVCpu->idCpu, pszComment);
2082
2083 PCPUMCTX pCtx = &pVCpu->cpum.s.Guest;
2084 cpumR3InfoOne(pVM, pCtx, CPUMCTX2CORE(pCtx), pHlp, enmType, "");
2085}
2086
2087
2088/**
2089 * Display the current guest instruction
2090 *
2091 * @param pVM The cross context VM structure.
2092 * @param pHlp The info helper functions.
2093 * @param pszArgs Arguments, ignored.
2094 */
2095static DECLCALLBACK(void) cpumR3InfoGuestInstr(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2096{
2097 NOREF(pszArgs);
2098
2099 PVMCPU pVCpu = VMMGetCpu(pVM);
2100 if (!pVCpu)
2101 pVCpu = &pVM->aCpus[0];
2102
2103 char szInstruction[256];
2104 szInstruction[0] = '\0';
2105 DBGFR3DisasInstrCurrent(pVCpu, szInstruction, sizeof(szInstruction));
2106 pHlp->pfnPrintf(pHlp, "\nCPUM: %s\n\n", szInstruction);
2107}
2108
2109
2110/**
2111 * Display the hypervisor cpu state.
2112 *
2113 * @param pVM The cross context VM structure.
2114 * @param pHlp The info helper functions.
2115 * @param pszArgs Arguments, see cpumR3InfoParseArg().
2116 */
2117static DECLCALLBACK(void) cpumR3InfoHyper(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2118{
2119 PVMCPU pVCpu = VMMGetCpu(pVM);
2120 if (!pVCpu)
2121 pVCpu = &pVM->aCpus[0];
2122
2123 CPUMDUMPTYPE enmType;
2124 const char *pszComment;
2125 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
2126 pHlp->pfnPrintf(pHlp, "Hypervisor CPUM state: %s\n", pszComment);
2127 cpumR3InfoOne(pVM, &pVCpu->cpum.s.Hyper, CPUMCTX2CORE(&pVCpu->cpum.s.Hyper), pHlp, enmType, ".");
2128 pHlp->pfnPrintf(pHlp, "CR4OrMask=%#x CR4AndMask=%#x\n", pVM->cpum.s.CR4.OrMask, pVM->cpum.s.CR4.AndMask);
2129}
2130
2131
2132/**
2133 * Display the host cpu state.
2134 *
2135 * @param pVM The cross context VM structure.
2136 * @param pHlp The info helper functions.
2137 * @param pszArgs Arguments, see cpumR3InfoParseArg().
2138 */
2139static DECLCALLBACK(void) cpumR3InfoHost(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2140{
2141 CPUMDUMPTYPE enmType;
2142 const char *pszComment;
2143 cpumR3InfoParseArg(pszArgs, &enmType, &pszComment);
2144 pHlp->pfnPrintf(pHlp, "Host CPUM state: %s\n", pszComment);
2145
2146 PVMCPU pVCpu = VMMGetCpu(pVM);
2147 if (!pVCpu)
2148 pVCpu = &pVM->aCpus[0];
2149 PCPUMHOSTCTX pCtx = &pVCpu->cpum.s.Host;
2150
2151 /*
2152 * Format the EFLAGS.
2153 */
2154#if HC_ARCH_BITS == 32
2155 uint32_t efl = pCtx->eflags.u32;
2156#else
2157 uint64_t efl = pCtx->rflags;
2158#endif
2159 char szEFlags[80];
2160 cpumR3InfoFormatFlags(&szEFlags[0], efl);
2161
2162 /*
2163 * Format the registers.
2164 */
2165#if HC_ARCH_BITS == 32
2166 pHlp->pfnPrintf(pHlp,
2167 "eax=xxxxxxxx ebx=%08x ecx=xxxxxxxx edx=xxxxxxxx esi=%08x edi=%08x\n"
2168 "eip=xxxxxxxx esp=%08x ebp=%08x iopl=%d %31s\n"
2169 "cs=%04x ds=%04x es=%04x fs=%04x gs=%04x eflags=%08x\n"
2170 "cr0=%08RX64 cr2=xxxxxxxx cr3=%08RX64 cr4=%08RX64 gdtr=%08x:%04x ldtr=%04x\n"
2171 "dr[0]=%08RX64 dr[1]=%08RX64x dr[2]=%08RX64 dr[3]=%08RX64x dr[6]=%08RX64 dr[7]=%08RX64\n"
2172 "SysEnter={cs=%04x eip=%08x esp=%08x}\n"
2173 ,
2174 /*pCtx->eax,*/ pCtx->ebx, /*pCtx->ecx, pCtx->edx,*/ pCtx->esi, pCtx->edi,
2175 /*pCtx->eip,*/ pCtx->esp, pCtx->ebp, X86_EFL_GET_IOPL(efl), szEFlags,
2176 pCtx->cs, pCtx->ds, pCtx->es, pCtx->fs, pCtx->gs, efl,
2177 pCtx->cr0, /*pCtx->cr2,*/ pCtx->cr3, pCtx->cr4,
2178 pCtx->dr0, pCtx->dr1, pCtx->dr2, pCtx->dr3, pCtx->dr6, pCtx->dr7,
2179 (uint32_t)pCtx->gdtr.uAddr, pCtx->gdtr.cb, pCtx->ldtr,
2180 pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp);
2181#else
2182 pHlp->pfnPrintf(pHlp,
2183 "rax=xxxxxxxxxxxxxxxx rbx=%016RX64 rcx=xxxxxxxxxxxxxxxx\n"
2184 "rdx=xxxxxxxxxxxxxxxx rsi=%016RX64 rdi=%016RX64\n"
2185 "rip=xxxxxxxxxxxxxxxx rsp=%016RX64 rbp=%016RX64\n"
2186 " r8=xxxxxxxxxxxxxxxx r9=xxxxxxxxxxxxxxxx r10=%016RX64\n"
2187 "r11=%016RX64 r12=%016RX64 r13=%016RX64\n"
2188 "r14=%016RX64 r15=%016RX64\n"
2189 "iopl=%d %31s\n"
2190 "cs=%04x ds=%04x es=%04x fs=%04x gs=%04x eflags=%08RX64\n"
2191 "cr0=%016RX64 cr2=xxxxxxxxxxxxxxxx cr3=%016RX64\n"
2192 "cr4=%016RX64 ldtr=%04x tr=%04x\n"
2193 "dr[0]=%016RX64 dr[1]=%016RX64 dr[2]=%016RX64\n"
2194 "dr[3]=%016RX64 dr[6]=%016RX64 dr[7]=%016RX64\n"
2195 "gdtr=%016RX64:%04x idtr=%016RX64:%04x\n"
2196 "SysEnter={cs=%04x eip=%08x esp=%08x}\n"
2197 "FSbase=%016RX64 GSbase=%016RX64 efer=%08RX64\n"
2198 ,
2199 /*pCtx->rax,*/ pCtx->rbx, /*pCtx->rcx,
2200 pCtx->rdx,*/ pCtx->rsi, pCtx->rdi,
2201 /*pCtx->rip,*/ pCtx->rsp, pCtx->rbp,
2202 /*pCtx->r8, pCtx->r9,*/ pCtx->r10,
2203 pCtx->r11, pCtx->r12, pCtx->r13,
2204 pCtx->r14, pCtx->r15,
2205 X86_EFL_GET_IOPL(efl), szEFlags,
2206 pCtx->cs, pCtx->ds, pCtx->es, pCtx->fs, pCtx->gs, efl,
2207 pCtx->cr0, /*pCtx->cr2,*/ pCtx->cr3,
2208 pCtx->cr4, pCtx->ldtr, pCtx->tr,
2209 pCtx->dr0, pCtx->dr1, pCtx->dr2,
2210 pCtx->dr3, pCtx->dr6, pCtx->dr7,
2211 pCtx->gdtr.uAddr, pCtx->gdtr.cb, pCtx->idtr.uAddr, pCtx->idtr.cb,
2212 pCtx->SysEnter.cs, pCtx->SysEnter.eip, pCtx->SysEnter.esp,
2213 pCtx->FSbase, pCtx->GSbase, pCtx->efer);
2214#endif
2215}
2216
2217/**
2218 * Structure used when disassembling instructions in DBGF.
2219 * This is used so the reader function can get the data it needs.
2220 */
2221typedef struct CPUMDISASSTATE
2222{
2223 /** Pointer to the CPU structure. */
2224 PDISCPUSTATE pCpu;
2225 /** Pointer to the VM. */
2226 PVM pVM;
2227 /** Pointer to the VMCPU. */
2228 PVMCPU pVCpu;
2229 /** Pointer to the first byte in the segment. */
2230 RTGCUINTPTR GCPtrSegBase;
2231 /** Pointer to the byte after the end of the segment. (might have wrapped!) */
2232 RTGCUINTPTR GCPtrSegEnd;
2233 /** The size of the segment minus 1. */
2234 RTGCUINTPTR cbSegLimit;
2235 /** Pointer to the current page - R3 Ptr. */
2236 void const *pvPageR3;
2237 /** Pointer to the current page - GC Ptr. */
2238 RTGCPTR pvPageGC;
2239 /** The lock information that PGMPhysReleasePageMappingLock needs. */
2240 PGMPAGEMAPLOCK PageMapLock;
2241 /** Whether the PageMapLock is valid or not. */
2242 bool fLocked;
2243 /** 64-bit mode or not. */
2244 bool f64Bits;
2245} CPUMDISASSTATE, *PCPUMDISASSTATE;
2246
2247
2248/**
2249 * @callback_method_impl{FNDISREADBYTES}
2250 */
2251static DECLCALLBACK(int) cpumR3DisasInstrRead(PDISCPUSTATE pDis, uint8_t offInstr, uint8_t cbMinRead, uint8_t cbMaxRead)
2252{
2253 PCPUMDISASSTATE pState = (PCPUMDISASSTATE)pDis->pvUser;
2254 for (;;)
2255 {
2256 RTGCUINTPTR GCPtr = pDis->uInstrAddr + offInstr + pState->GCPtrSegBase;
2257
2258 /*
2259 * Need to update the page translation?
2260 */
2261 if ( !pState->pvPageR3
2262 || (GCPtr >> PAGE_SHIFT) != (pState->pvPageGC >> PAGE_SHIFT))
2263 {
2264 int rc = VINF_SUCCESS;
2265
2266 /* translate the address */
2267 pState->pvPageGC = GCPtr & PAGE_BASE_GC_MASK;
2268 if ( !HMIsEnabled(pState->pVM)
2269 && MMHyperIsInsideArea(pState->pVM, pState->pvPageGC))
2270 {
2271 pState->pvPageR3 = MMHyperRCToR3(pState->pVM, (RTRCPTR)pState->pvPageGC);
2272 if (!pState->pvPageR3)
2273 rc = VERR_INVALID_POINTER;
2274 }
2275 else
2276 {
2277 /* Release mapping lock previously acquired. */
2278 if (pState->fLocked)
2279 PGMPhysReleasePageMappingLock(pState->pVM, &pState->PageMapLock);
2280 rc = PGMPhysGCPtr2CCPtrReadOnly(pState->pVCpu, pState->pvPageGC, &pState->pvPageR3, &pState->PageMapLock);
2281 pState->fLocked = RT_SUCCESS_NP(rc);
2282 }
2283 if (RT_FAILURE(rc))
2284 {
2285 pState->pvPageR3 = NULL;
2286 return rc;
2287 }
2288 }
2289
2290 /*
2291 * Check the segment limit.
2292 */
2293 if (!pState->f64Bits && pDis->uInstrAddr + offInstr > pState->cbSegLimit)
2294 return VERR_OUT_OF_SELECTOR_BOUNDS;
2295
2296 /*
2297 * Calc how much we can read.
2298 */
2299 uint32_t cb = PAGE_SIZE - (GCPtr & PAGE_OFFSET_MASK);
2300 if (!pState->f64Bits)
2301 {
2302 RTGCUINTPTR cbSeg = pState->GCPtrSegEnd - GCPtr;
2303 if (cb > cbSeg && cbSeg)
2304 cb = cbSeg;
2305 }
2306 if (cb > cbMaxRead)
2307 cb = cbMaxRead;
2308
2309 /*
2310 * Read and advance or exit.
2311 */
2312 memcpy(&pDis->abInstr[offInstr], (uint8_t *)pState->pvPageR3 + (GCPtr & PAGE_OFFSET_MASK), cb);
2313 offInstr += (uint8_t)cb;
2314 if (cb >= cbMinRead)
2315 {
2316 pDis->cbCachedInstr = offInstr;
2317 return VINF_SUCCESS;
2318 }
2319 cbMinRead -= (uint8_t)cb;
2320 cbMaxRead -= (uint8_t)cb;
2321 }
2322}
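
/* (Contract recap for the reader above: the disassembler asks for at least
 * cbMinRead and at most cbMaxRead bytes at offInstr. This implementation
 * loops page by page, re-translating at every page boundary and honouring
 * the segment limit in 16/32-bit modes, until the minimum is satisfied or
 * an error occurs.) */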
2323
2324
2325/**
2326 * Disassemble an instruction and return the information in the provided structure.
2327 *
2328 * @returns VBox status code.
2329 * @param pVM The cross context VM structure.
2330 * @param pVCpu The cross context virtual CPU structure.
2331 * @param pCtx Pointer to the guest CPU context.
2332 * @param GCPtrPC Program counter (relative to CS) to disassemble from.
2333 * @param pCpu Disassembly state.
2334 * @param pszPrefix String prefix for logging (debug only).
2335 *
2336 */
2337VMMR3DECL(int) CPUMR3DisasmInstrCPU(PVM pVM, PVMCPU pVCpu, PCPUMCTX pCtx, RTGCPTR GCPtrPC, PDISCPUSTATE pCpu, const char *pszPrefix)
2338{
2339 CPUMDISASSTATE State;
2340 int rc;
2341
2342 const PGMMODE enmMode = PGMGetGuestMode(pVCpu);
2343 State.pCpu = pCpu;
2344 State.pvPageGC = 0;
2345 State.pvPageR3 = NULL;
2346 State.pVM = pVM;
2347 State.pVCpu = pVCpu;
2348 State.fLocked = false;
2349 State.f64Bits = false;
2350
2351 /*
2352 * Get selector information.
2353 */
2354 DISCPUMODE enmDisCpuMode;
2355 if ( (pCtx->cr0 & X86_CR0_PE)
2356 && pCtx->eflags.Bits.u1VM == 0)
2357 {
2358 if (!CPUMSELREG_ARE_HIDDEN_PARTS_VALID(pVCpu, &pCtx->cs))
2359 {
2360# ifdef VBOX_WITH_RAW_MODE_NOT_R0
2361 CPUMGuestLazyLoadHiddenSelectorReg(pVCpu, &pCtx->cs);
2362# endif
2363 if (!CPUMSELREG_ARE_HIDDEN_PARTS_VALID(pVCpu, &pCtx->cs))
2364 return VERR_CPUM_HIDDEN_CS_LOAD_ERROR;
2365 }
2366 State.f64Bits = enmMode >= PGMMODE_AMD64 && pCtx->cs.Attr.n.u1Long;
2367 State.GCPtrSegBase = pCtx->cs.u64Base;
2368 State.GCPtrSegEnd = pCtx->cs.u32Limit + 1 + (RTGCUINTPTR)pCtx->cs.u64Base;
2369 State.cbSegLimit = pCtx->cs.u32Limit;
2370 enmDisCpuMode = (State.f64Bits)
2371 ? DISCPUMODE_64BIT
2372 : pCtx->cs.Attr.n.u1DefBig
2373 ? DISCPUMODE_32BIT
2374 : DISCPUMODE_16BIT;
2375 }
2376 else
2377 {
2378 /* real or V86 mode */
2379 enmDisCpuMode = DISCPUMODE_16BIT;
2380 State.GCPtrSegBase = pCtx->cs.Sel * 16;
2381 State.GCPtrSegEnd = 0xFFFFFFFF;
2382 State.cbSegLimit = 0xFFFFFFFF;
2383 }
2384
2385 /*
2386 * Disassemble the instruction.
2387 */
2388 uint32_t cbInstr;
2389#ifndef LOG_ENABLED
2390 rc = DISInstrWithReader(GCPtrPC, enmDisCpuMode, cpumR3DisasInstrRead, &State, pCpu, &cbInstr);
2391 if (RT_SUCCESS(rc))
2392 {
2393#else
2394 char szOutput[160];
2395 rc = DISInstrToStrWithReader(GCPtrPC, enmDisCpuMode, cpumR3DisasInstrRead, &State,
2396 pCpu, &cbInstr, szOutput, sizeof(szOutput));
2397 if (RT_SUCCESS(rc))
2398 {
2399 /* log it */
2400 if (pszPrefix)
2401 Log(("%s-CPU%d: %s", pszPrefix, pVCpu->idCpu, szOutput));
2402 else
2403 Log(("%s", szOutput));
2404#endif
2405 rc = VINF_SUCCESS;
2406 }
2407 else
2408 Log(("CPUMR3DisasmInstrCPU: DISInstr failed for %04X:%RGv rc=%Rrc\n", pCtx->cs.Sel, GCPtrPC, rc));
2409
2410 /* Release mapping lock acquired in cpumR3DisasInstrRead. */
2411 if (State.fLocked)
2412 PGMPhysReleasePageMappingLock(pVM, &State.PageMapLock);
2413
2414 return rc;
2415}
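
/* A hypothetical caller sketch (EMT only; the call site and prefix string are
 * assumptions for illustration, not prescriptions):
 *
 *     DISCPUSTATE Cpu;
 *     PCPUMCTX    pCtx = CPUMQueryGuestCtxPtr(pVCpu);
 *     int rc = CPUMR3DisasmInstrCPU(pVM, pVCpu, pCtx, pCtx->rip, &Cpu, "DBG");
 *     if (RT_SUCCESS(rc))
 *         ; // Cpu now describes the instruction at CS:RIP.
 */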
2416
2417
2418
2419/**
2420 * API for controlling a few of the CPU features found in CR4.
2421 *
2422 * Currently only X86_CR4_TSD is accepted as input.
2423 *
2424 * @returns VBox status code.
2425 *
2426 * @param pVM The cross context VM structure.
2427 * @param fOr The CR4 OR mask.
2428 * @param fAnd The CR4 AND mask.
2429 */
2430VMMR3DECL(int) CPUMR3SetCR4Feature(PVM pVM, RTHCUINTREG fOr, RTHCUINTREG fAnd)
2431{
2432 AssertMsgReturn(!(fOr & ~(X86_CR4_TSD)), ("%#x\n", fOr), VERR_INVALID_PARAMETER);
2433 AssertMsgReturn((fAnd & ~(X86_CR4_TSD)) == ~(X86_CR4_TSD), ("%#x\n", fAnd), VERR_INVALID_PARAMETER);
2434
2435 pVM->cpum.s.CR4.OrMask &= fAnd;
2436 pVM->cpum.s.CR4.OrMask |= fOr;
2437
2438 return VINF_SUCCESS;
2439}
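
/* Usage sketch -- the masks combine as OrMask = (OrMask & fAnd) | fOr:
 *
 *     CPUMR3SetCR4Feature(pVM, X86_CR4_TSD, ~(RTHCUINTREG)0);         // force TSD
 *     CPUMR3SetCR4Feature(pVM, 0, ~(RTHCUINTREG)X86_CR4_TSD);         // clear TSD
 *
 * Anything besides X86_CR4_TSD set in fOr, or cleared in fAnd, trips the
 * parameter assertions above.
 */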
2440
2441
2442/**
2443 * Enters REM, gets and resets the changed flags (CPUM_CHANGED_*).
2444 *
2445 * Only REM should ever call this function!
2446 *
2447 * @returns The changed flags.
2448 * @param pVCpu The cross context virtual CPU structure.
2449 * @param puCpl Where to return the current privilege level (CPL).
2450 */
2451VMMR3DECL(uint32_t) CPUMR3RemEnter(PVMCPU pVCpu, uint32_t *puCpl)
2452{
2453 Assert(!pVCpu->cpum.s.fRawEntered);
2454 Assert(!pVCpu->cpum.s.fRemEntered);
2455
2456 /*
2457 * Get the CPL first.
2458 */
2459 *puCpl = CPUMGetGuestCPL(pVCpu);
2460
2461 /*
2462 * Get and reset the flags.
2463 */
2464 uint32_t fFlags = pVCpu->cpum.s.fChanged;
2465 pVCpu->cpum.s.fChanged = 0;
2466
2467 /** @todo change the switcher to use the fChanged flags. */
2468 if (pVCpu->cpum.s.fUseFlags & CPUM_USED_FPU_SINCE_REM)
2469 {
2470 fFlags |= CPUM_CHANGED_FPU_REM;
2471 pVCpu->cpum.s.fUseFlags &= ~CPUM_USED_FPU_SINCE_REM;
2472 }
2473
2474 pVCpu->cpum.s.fRemEntered = true;
2475 return fFlags;
2476}
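
/* The enter/leave pair in context (a simplified sketch of the REM side):
 *
 *     uint32_t uCpl;
 *     uint32_t fChanged = CPUMR3RemEnter(pVCpu, &uCpl);
 *     if (fChanged & CPUM_CHANGED_FPU_REM)
 *         ; // resync the FPU/SSE state into REM
 *     // ... run recompiled code ...
 *     CPUMR3RemLeave(pVCpu, true);  // fNoOutOfSyncSels
 */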
2477
2478
2479/**
2480 * Leaves REM.
2481 *
2482 * @param pVCpu The cross context virtual CPU structure.
2483 * @param fNoOutOfSyncSels This is @c false if there are out-of-sync
2484 * selector registers.
2485 */
2486VMMR3DECL(void) CPUMR3RemLeave(PVMCPU pVCpu, bool fNoOutOfSyncSels)
2487{
2488 Assert(!pVCpu->cpum.s.fRawEntered);
2489 Assert(pVCpu->cpum.s.fRemEntered);
2490
2491 pVCpu->cpum.s.fRemEntered = false;
2492}
2493
2494
2495/**
2496 * Called when the ring-3 init phase completes.
2497 *
2498 * @returns VBox status code.
2499 * @param pVM The cross context VM structure.
2500 * @param enmWhat Which init phase.
2501 */
2502VMMR3DECL(int) CPUMR3InitCompleted(PVM pVM, VMINITCOMPLETED enmWhat)
2503{
2504 switch (enmWhat)
2505 {
2506 case VMINITCOMPLETED_RING3:
2507 {
2508 /*
2509 * Figure out if the guest uses 32-bit or 64-bit FPU state at runtime for 64-bit capable VMs.
2510 * Only applicable/used on 64-bit hosts, refer to CPUMR0A.asm. See @bugref{7138}.
2511 */
2512 bool const fSupportsLongMode = VMR3IsLongModeAllowed(pVM);
2513 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2514 {
2515 PVMCPU pVCpu = &pVM->aCpus[i];
2516 /* While loading a saved state we fix it up in cpumR3LoadDone(). */
2517 if (fSupportsLongMode)
2518 pVCpu->cpum.s.fUseFlags |= CPUM_USE_SUPPORTS_LONGMODE;
2519 }
2520
2521 cpumR3MsrRegStats(pVM);
2522 break;
2523 }
2524
2525 case VMINITCOMPLETED_RING0:
2526 {
2527 /* Cache the APIC base (from the APIC device) once it has been initialized. */
2528 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2529 {
2530 PVMCPU pVCpu = &pVM->aCpus[i];
2531 PDMApicGetBaseMsr(pVCpu, &pVCpu->cpum.s.Guest.msrApicBase, true /* fIgnoreErrors */);
2532#ifdef VBOX_WITH_NEW_APIC
2533 LogRel(("CPUM: VCPU%3d: Cached APIC base MSR = %#RX64\n", i, pVCpu->cpum.s.Guest.msrApicBase));
2534#endif
2535 }
2536 break;
2537 }
2538
2539 default:
2540 break;
2541 }
2542 return VINF_SUCCESS;
2543}
2544
2545
2546/**
2547 * Called when the ring-0 init phase has completed.
2548 *
2549 * @param pVM The cross context VM structure.
2550 */
2551VMMR3DECL(void) CPUMR3LogCpuIds(PVM pVM)
2552{
2553 /*
2554 * Log the cpuid.
2555 */
2556 bool fOldBuffered = RTLogRelSetBuffering(true /*fBuffered*/);
2557 RTCPUSET OnlineSet;
2558 LogRel(("CPUM: Logical host processors: %u present, %u max, %u online, online mask: %016RX64\n",
2559 (unsigned)RTMpGetPresentCount(), (unsigned)RTMpGetCount(), (unsigned)RTMpGetOnlineCount(),
2560 RTCpuSetToU64(RTMpGetOnlineSet(&OnlineSet)) ));
2561 RTCPUID cCores = RTMpGetCoreCount();
2562 if (cCores)
2563 LogRel(("CPUM: Physical host cores: %u\n", (unsigned)cCores));
2564 LogRel(("************************* CPUID dump ************************\n"));
2565 DBGFR3Info(pVM->pUVM, "cpuid", "verbose", DBGFR3InfoLogRelHlp());
2566 LogRel(("\n"));
2567 DBGFR3_INFO_LOG(pVM, "cpuid", "verbose"); /* macro */
2568 RTLogRelSetBuffering(fOldBuffered);
2569 LogRel(("******************** End of CPUID dump **********************\n"));
2570}
2571