VirtualBox

source: vbox/trunk/src/VBox/VMM/PGM.cpp@ 19472

Last change on this file since 19472 was 19141, checked in by vboxsync, 16 years ago

Action flags breakup.
Fixed PGM saved state loading of 2.2.2 images.
Reduced hacks in PATM state loading (fixups).

  • Property svn:eol-style set to native
  • Property svn:keywords set to Id
File size: 230.2 KB
Line 
1/* $Id: PGM.cpp 19141 2009-04-23 13:52:18Z vboxsync $ */
2/** @file
3 * PGM - Page Manager and Monitor. (Mixing stuff here, not good?)
4 */
5
6/*
7 * Copyright (C) 2006-2007 Sun Microsystems, Inc.
8 *
9 * This file is part of VirtualBox Open Source Edition (OSE), as
10 * available from http://www.virtualbox.org. This file is free software;
11 * you can redistribute it and/or modify it under the terms of the GNU
12 * General Public License (GPL) as published by the Free Software
13 * Foundation, in version 2 as it comes in the "COPYING" file of the
14 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
15 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
16 *
17 * Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa
18 * Clara, CA 95054 USA or visit http://www.sun.com if you need
19 * additional information or have any questions.
20 */
21
22
23/** @page pg_pgm PGM - The Page Manager and Monitor
24 *
25 * @see grp_pgm,
26 * @ref pg_pgm_pool,
27 * @ref pg_pgm_phys.
28 *
29 *
30 * @section sec_pgm_modes Paging Modes
31 *
32 * There are three memory contexts: Host Context (HC), Guest Context (GC)
33 * and intermediate context. When talking about paging HC can also be refered to
34 * as "host paging", and GC refered to as "shadow paging".
35 *
36 * We define three basic paging modes: 32-bit, PAE and AMD64. The host paging mode
37 * is defined by the host operating system. The mode used in the shadow paging mode
38 * depends on the host paging mode and what the mode the guest is currently in. The
39 * following relation between the two is defined:
40 *
41 * @verbatim
42 Host > 32-bit | PAE | AMD64 |
43 Guest | | | |
44 ==v================================
45 32-bit 32-bit PAE PAE
46 -------|--------|--------|--------|
47 PAE PAE PAE PAE
48 -------|--------|--------|--------|
49 AMD64 AMD64 AMD64 AMD64
50 -------|--------|--------|--------| @endverbatim
51 *
52 * All configuration except those in the diagonal (upper left) are expected to
53 * require special effort from the switcher (i.e. a bit slower).
54 *
55 *
56 *
57 *
58 * @section sec_pgm_shw The Shadow Memory Context
59 *
60 *
61 * [..]
62 *
63 * Because of guest context mappings requires PDPT and PML4 entries to allow
64 * writing on AMD64, the two upper levels will have fixed flags whatever the
65 * guest is thinking of using there. So, when shadowing the PD level we will
66 * calculate the effective flags of PD and all the higher levels. In legacy
67 * PAE mode this only applies to the PWT and PCD bits (the rest are
68 * ignored/reserved/MBZ). We will ignore those bits for the present.
69 *
70 *
71 *
72 * @section sec_pgm_int The Intermediate Memory Context
73 *
74 * The world switch goes thru an intermediate memory context which purpose it is
75 * to provide different mappings of the switcher code. All guest mappings are also
76 * present in this context.
77 *
78 * The switcher code is mapped at the same location as on the host, at an
79 * identity mapped location (physical equals virtual address), and at the
80 * hypervisor location. The identity mapped location is for when the world
81 * switches that involves disabling paging.
82 *
83 * PGM maintain page tables for 32-bit, PAE and AMD64 paging modes. This
84 * simplifies switching guest CPU mode and consistency at the cost of more
85 * code to do the work. All memory use for those page tables is located below
86 * 4GB (this includes page tables for guest context mappings).
87 *
88 *
89 * @subsection subsec_pgm_int_gc Guest Context Mappings
90 *
91 * During assignment and relocation of a guest context mapping the intermediate
92 * memory context is used to verify the new location.
93 *
94 * Guest context mappings are currently restricted to below 4GB, for reasons
95 * of simplicity. This may change when we implement AMD64 support.
96 *
97 *
98 *
99 *
100 * @section sec_pgm_misc Misc
101 *
102 * @subsection subsec_pgm_misc_diff Differences Between Legacy PAE and Long Mode PAE
103 *
104 * The differences between legacy PAE and long mode PAE are:
105 * -# PDPE bits 1, 2, 5 and 6 are defined differently. In leagcy mode they are
106 * all marked down as must-be-zero, while in long mode 1, 2 and 5 have the
107 * usual meanings while 6 is ignored (AMD). This means that upon switching to
108 * legacy PAE mode we'll have to clear these bits and when going to long mode
109 * they must be set. This applies to both intermediate and shadow contexts,
110 * however we don't need to do it for the intermediate one since we're
111 * executing with CR0.WP at that time.
112 * -# CR3 allows a 32-byte aligned address in legacy mode, while in long mode
113 * a page aligned one is required.
114 *
115 *
116 * @section sec_pgm_handlers Access Handlers
117 *
118 * Placeholder.
119 *
120 *
121 * @subsection sec_pgm_handlers_virt Virtual Access Handlers
122 *
123 * Placeholder.
124 *
125 *
126 * @subsection sec_pgm_handlers_virt Virtual Access Handlers
127 *
128 * We currently implement three types of virtual access handlers: ALL, WRITE
129 * and HYPERVISOR (WRITE). See PGMVIRTHANDLERTYPE for some more details.
130 *
131 * The HYPERVISOR access handlers is kept in a separate tree since it doesn't apply
132 * to physical pages (PGMTREES::HyperVirtHandlers) and only needs to be consulted in
133 * a special \#PF case. The ALL and WRITE are in the PGMTREES::VirtHandlers tree, the
134 * rest of this section is going to be about these handlers.
135 *
136 * We'll go thru the life cycle of a handler and try make sense of it all, don't know
137 * how successfull this is gonna be...
138 *
139 * 1. A handler is registered thru the PGMR3HandlerVirtualRegister and
140 * PGMHandlerVirtualRegisterEx APIs. We check for conflicting virtual handlers
141 * and create a new node that is inserted into the AVL tree (range key). Then
142 * a full PGM resync is flagged (clear pool, sync cr3, update virtual bit of PGMPAGE).
143 *
144 * 2. The following PGMSyncCR3/SyncCR3 operation will first make invoke HandlerVirtualUpdate.
145 *
146 * 2a. HandlerVirtualUpdate will will lookup all the pages covered by virtual handlers
147 * via the current guest CR3 and update the physical page -> virtual handler
148 * translation. Needless to say, this doesn't exactly scale very well. If any changes
149 * are detected, it will flag a virtual bit update just like we did on registration.
150 * PGMPHYS pages with changes will have their virtual handler state reset to NONE.
151 *
152 * 2b. The virtual bit update process will iterate all the pages covered by all the
153 * virtual handlers and update the PGMPAGE virtual handler state to the max of all
154 * virtual handlers on that page.
155 *
156 * 2c. Back in SyncCR3 we will now flush the entire shadow page cache to make sure
157 * we don't miss any alias mappings of the monitored pages.
158 *
159 * 2d. SyncCR3 will then proceed with syncing the CR3 table.
160 *
161 * 3. \#PF(np,read) on a page in the range. This will cause it to be synced
162 * read-only and resumed if it's a WRITE handler. If it's an ALL handler we
163 * will call the handlers like in the next step. If the physical mapping has
164 * changed we will - some time in the future - perform a handler callback
165 * (optional) and update the physical -> virtual handler cache.
166 *
167 * 4. \#PF(,write) on a page in the range. This will cause the handler to
168 * be invoked.
169 *
170 * 5. The guest invalidates the page and changes the physical backing or
171 * unmaps it. This should cause the invalidation callback to be invoked
172 * (it might not yet be 100% perfect). Exactly what happens next... is
173 * this where we mess up and end up out of sync for a while?
174 *
175 * 6. The handler is deregistered by the client via PGMHandlerVirtualDeregister.
176 * We will then set all PGMPAGEs in the physical -> virtual handler cache for
177 * this handler to NONE and trigger a full PGM resync (basically the same
178 * as int step 1). Which means 2 is executed again.
179 *
180 *
181 * @subsubsection sub_sec_pgm_handler_virt_todo TODOs
182 *
183 * There is a bunch of things that needs to be done to make the virtual handlers
184 * work 100% correctly and work more efficiently.
185 *
186 * The first bit hasn't been implemented yet because it's going to slow the
187 * whole mess down even more, and besides it seems to be working reliably for
188 * our current uses. OTOH, some of the optimizations might end up more or less
189 * implementing the missing bits, so we'll see.
190 *
191 * On the optimization side, the first thing to do is to try avoid unnecessary
192 * cache flushing. Then try team up with the shadowing code to track changes
193 * in mappings by means of access to them (shadow in), updates to shadows pages,
194 * invlpg, and shadow PT discarding (perhaps).
195 *
196 * Some idea that have popped up for optimization for current and new features:
197 * - bitmap indicating where there are virtual handlers installed.
198 * (4KB => 2**20 pages, page 2**12 => covers 32-bit address space 1:1!)
199 * - Further optimize this by min/max (needs min/max avl getters).
200 * - Shadow page table entry bit (if any left)?
201 *
202 */
203
204
205/** @page pg_pgm_phys PGM Physical Guest Memory Management
206 *
207 *
208 * Objectives:
209 * - Guest RAM over-commitment using memory ballooning,
210 * zero pages and general page sharing.
211 * - Moving or mirroring a VM onto a different physical machine.
212 *
213 *
214 * @subsection subsec_pgmPhys_Definitions Definitions
215 *
216 * Allocation chunk - A RTR0MemObjAllocPhysNC object and the tracking
217 * machinery assoicated with it.
218 *
219 *
220 *
221 *
222 * @subsection subsec_pgmPhys_AllocPage Allocating a page.
223 *
224 * Initially we map *all* guest memory to the (per VM) zero page, which
225 * means that none of the read functions will cause pages to be allocated.
226 *
227 * Exception, access bit in page tables that have been shared. This must
228 * be handled, but we must also make sure PGMGst*Modify doesn't make
229 * unnecessary modifications.
230 *
231 * Allocation points:
232 * - PGMPhysSimpleWriteGCPhys and PGMPhysWrite.
233 * - Replacing a zero page mapping at \#PF.
234 * - Replacing a shared page mapping at \#PF.
235 * - ROM registration (currently MMR3RomRegister).
236 * - VM restore (pgmR3Load).
237 *
238 * For the first three it would make sense to keep a few pages handy
239 * until we've reached the max memory commitment for the VM.
240 *
241 * For the ROM registration, we know exactly how many pages we need
242 * and will request these from ring-0. For restore, we will save
243 * the number of non-zero pages in the saved state and allocate
244 * them up front. This would allow the ring-0 component to refuse
245 * the request if the isn't sufficient memory available for VM use.
246 *
247 * Btw. for both ROM and restore allocations we won't be requiring
248 * zeroed pages as they are going to be filled instantly.
249 *
250 *
251 * @subsection subsec_pgmPhys_FreePage Freeing a page
252 *
253 * There are a few points where a page can be freed:
254 * - After being replaced by the zero page.
255 * - After being replaced by a shared page.
256 * - After being ballooned by the guest additions.
257 * - At reset.
258 * - At restore.
259 *
260 * When freeing one or more pages they will be returned to the ring-0
261 * component and replaced by the zero page.
262 *
263 * The reasoning for clearing out all the pages on reset is that it will
264 * return us to the exact same state as on power on, and may thereby help
265 * us reduce the memory load on the system. Further it might have a
266 * (temporary) positive influence on memory fragmentation (@see subsec_pgmPhys_Fragmentation).
267 *
268 * On restore, as mention under the allocation topic, pages should be
269 * freed / allocated depending on how many is actually required by the
270 * new VM state. The simplest approach is to do like on reset, and free
271 * all non-ROM pages and then allocate what we need.
272 *
273 * A measure to prevent some fragmentation, would be to let each allocation
274 * chunk have some affinity towards the VM having allocated the most pages
275 * from it. Also, try make sure to allocate from allocation chunks that
276 * are almost full. Admittedly, both these measures might work counter to
277 * our intentions and its probably not worth putting a lot of effort,
278 * cpu time or memory into this.
279 *
280 *
281 * @subsection subsec_pgmPhys_SharePage Sharing a page
282 *
283 * The basic idea is that there there will be a idle priority kernel
284 * thread walking the non-shared VM pages hashing them and looking for
285 * pages with the same checksum. If such pages are found, it will compare
286 * them byte-by-byte to see if they actually are identical. If found to be
287 * identical it will allocate a shared page, copy the content, check that
288 * the page didn't change while doing this, and finally request both the
289 * VMs to use the shared page instead. If the page is all zeros (special
290 * checksum and byte-by-byte check) it will request the VM that owns it
291 * to replace it with the zero page.
292 *
293 * To make this efficient, we will have to make sure not to try share a page
294 * that will change its contents soon. This part requires the most work.
295 * A simple idea would be to request the VM to write monitor the page for
296 * a while to make sure it isn't modified any time soon. Also, it may
297 * make sense to skip pages that are being write monitored since this
298 * information is readily available to the thread if it works on the
299 * per-VM guest memory structures (presently called PGMRAMRANGE).
300 *
301 *
302 * @subsection subsec_pgmPhys_Fragmentation Fragmentation Concerns and Counter Measures
303 *
304 * The pages are organized in allocation chunks in ring-0, this is a necessity
305 * if we wish to have an OS agnostic approach to this whole thing. (On Linux we
306 * could easily work on a page-by-page basis if we liked. Whether this is possible
307 * or efficient on NT I don't quite know.) Fragmentation within these chunks may
308 * become a problem as part of the idea here is that we wish to return memory to
309 * the host system.
310 *
311 * For instance, starting two VMs at the same time, they will both allocate the
312 * guest memory on-demand and if permitted their page allocations will be
313 * intermixed. Shut down one of the two VMs and it will be difficult to return
314 * any memory to the host system because the page allocation for the two VMs are
315 * mixed up in the same allocation chunks.
316 *
317 * To further complicate matters, when pages are freed because they have been
318 * ballooned or become shared/zero the whole idea is that the page is supposed
319 * to be reused by another VM or returned to the host system. This will cause
320 * allocation chunks to contain pages belonging to different VMs and prevent
321 * returning memory to the host when one of those VM shuts down.
322 *
323 * The only way to really deal with this problem is to move pages. This can
324 * either be done at VM shutdown and or by the idle priority worker thread
325 * that will be responsible for finding sharable/zero pages. The mechanisms
326 * involved for coercing a VM to move a page (or to do it for it) will be
327 * the same as when telling it to share/zero a page.
328 *
329 *
330 * @subsection subsec_pgmPhys_Tracking Tracking Structures And Their Cost
331 *
332 * There's a difficult balance between keeping the per-page tracking structures
333 * (global and guest page) easy to use and keeping them from eating too much
334 * memory. We have limited virtual memory resources available when operating in
335 * 32-bit kernel space (on 64-bit there'll it's quite a different story). The
336 * tracking structures will be attemted designed such that we can deal with up
337 * to 32GB of memory on a 32-bit system and essentially unlimited on 64-bit ones.
338 *
339 *
340 * @subsubsection subsubsec_pgmPhys_Tracking_Kernel Kernel Space
341 *
342 * @see pg_GMM
343 *
344 * @subsubsection subsubsec_pgmPhys_Tracking_PerVM Per-VM
345 *
346 * Fixed info is the physical address of the page (HCPhys) and the page id
347 * (described above). Theoretically we'll need 48(-12) bits for the HCPhys part.
348 * Today we've restricting ourselves to 40(-12) bits because this is the current
349 * restrictions of all AMD64 implementations (I think Barcelona will up this
350 * to 48(-12) bits, not that it really matters) and I needed the bits for
351 * tracking mappings of a page. 48-12 = 36. That leaves 28 bits, which means a
352 * decent range for the page id: 2^(28+12) = 1024TB.
353 *
354 * In additions to these, we'll have to keep maintaining the page flags as we
355 * currently do. Although it wouldn't harm to optimize these quite a bit, like
356 * for instance the ROM shouldn't depend on having a write handler installed
357 * in order for it to become read-only. A RO/RW bit should be considered so
358 * that the page syncing code doesn't have to mess about checking multiple
359 * flag combinations (ROM || RW handler || write monitored) in order to
360 * figure out how to setup a shadow PTE. But this of course, is second
361 * priority at present. Current this requires 12 bits, but could probably
362 * be optimized to ~8.
363 *
364 * Then there's the 24 bits used to track which shadow page tables are
365 * currently mapping a page for the purpose of speeding up physical
366 * access handlers, and thereby the page pool cache. More bit for this
367 * purpose wouldn't hurt IIRC.
368 *
369 * Then there is a new bit in which we need to record what kind of page
370 * this is, shared, zero, normal or write-monitored-normal. This'll
371 * require 2 bits. One bit might be needed for indicating whether a
372 * write monitored page has been written to. And yet another one or
373 * two for tracking migration status. 3-4 bits total then.
374 *
375 * Whatever is left will can be used to record the sharabilitiy of a
376 * page. The page checksum will not be stored in the per-VM table as
377 * the idle thread will not be permitted to do modifications to it.
378 * It will instead have to keep its own working set of potentially
379 * shareable pages and their check sums and stuff.
380 *
381 * For the present we'll keep the current packing of the
382 * PGMRAMRANGE::aHCPhys to keep the changes simple, only of course,
383 * we'll have to change it to a struct with a total of 128-bits at
384 * our disposal.
385 *
386 * The initial layout will be like this:
387 * @verbatim
388 RTHCPHYS HCPhys; The current stuff.
389 63:40 Current shadow PT tracking stuff.
390 39:12 The physical page frame number.
391 11:0 The current flags.
392 uint32_t u28PageId : 28; The page id.
393 uint32_t u2State : 2; The page state { zero, shared, normal, write monitored }.
394 uint32_t fWrittenTo : 1; Whether a write monitored page was written to.
395 uint32_t u1Reserved : 1; Reserved for later.
396 uint32_t u32Reserved; Reserved for later, mostly sharing stats.
397 @endverbatim
398 *
399 * The final layout will be something like this:
400 * @verbatim
401 RTHCPHYS HCPhys; The current stuff.
402 63:48 High page id (12+).
403 47:12 The physical page frame number.
404 11:0 Low page id.
405 uint32_t fReadOnly : 1; Whether it's readonly page (rom or monitored in some way).
406 uint32_t u3Type : 3; The page type {RESERVED, MMIO, MMIO2, ROM, shadowed ROM, RAM}.
407 uint32_t u2PhysMon : 2; Physical access handler type {none, read, write, all}.
408 uint32_t u2VirtMon : 2; Virtual access handler type {none, read, write, all}..
409 uint32_t u2State : 2; The page state { zero, shared, normal, write monitored }.
410 uint32_t fWrittenTo : 1; Whether a write monitored page was written to.
411 uint32_t u20Reserved : 20; Reserved for later, mostly sharing stats.
412 uint32_t u32Tracking; The shadow PT tracking stuff, roughly.
413 @endverbatim
414 *
415 * Cost wise, this means we'll double the cost for guest memory. There isn't anyway
416 * around that I'm afraid. It means that the cost of dealing out 32GB of memory
417 * to one or more VMs is: (32GB >> PAGE_SHIFT) * 16 bytes, or 128MBs. Or another
418 * example, the VM heap cost when assigning 1GB to a VM will be: 4MB.
419 *
420 * A couple of cost examples for the total cost per-VM + kernel.
421 * 32-bit Windows and 32-bit linux:
422 * 1GB guest ram, 256K pages: 4MB + 2MB(+) = 6MB
423 * 4GB guest ram, 1M pages: 16MB + 8MB(+) = 24MB
424 * 32GB guest ram, 8M pages: 128MB + 64MB(+) = 192MB
425 * 64-bit Windows and 64-bit linux:
426 * 1GB guest ram, 256K pages: 4MB + 3MB(+) = 7MB
427 * 4GB guest ram, 1M pages: 16MB + 12MB(+) = 28MB
428 * 32GB guest ram, 8M pages: 128MB + 96MB(+) = 224MB
429 *
430 * UPDATE - 2007-09-27:
431 * Will need a ballooned flag/state too because we cannot
432 * trust the guest 100% and reporting the same page as ballooned more
433 * than once will put the GMM off balance.
434 *
435 *
436 * @subsection subsec_pgmPhys_Serializing Serializing Access
437 *
438 * Initially, we'll try a simple scheme:
439 *
440 * - The per-VM RAM tracking structures (PGMRAMRANGE) is only modified
441 * by the EMT thread of that VM while in the pgm critsect.
442 * - Other threads in the VM process that needs to make reliable use of
443 * the per-VM RAM tracking structures will enter the critsect.
444 * - No process external thread or kernel thread will ever try enter
445 * the pgm critical section, as that just won't work.
446 * - The idle thread (and similar threads) doesn't not need 100% reliable
447 * data when performing it tasks as the EMT thread will be the one to
448 * do the actual changes later anyway. So, as long as it only accesses
449 * the main ram range, it can do so by somehow preventing the VM from
450 * being destroyed while it works on it...
451 *
452 * - The over-commitment management, including the allocating/freeing
453 * chunks, is serialized by a ring-0 mutex lock (a fast one since the
454 * more mundane mutex implementation is broken on Linux).
455 * - A separeate mutex is protecting the set of allocation chunks so
456 * that pages can be shared or/and freed up while some other VM is
457 * allocating more chunks. This mutex can be take from under the other
458 * one, but not the otherway around.
459 *
460 *
461 * @subsection subsec_pgmPhys_Request VM Request interface
462 *
463 * When in ring-0 it will become necessary to send requests to a VM so it can
464 * for instance move a page while defragmenting during VM destroy. The idle
465 * thread will make use of this interface to request VMs to setup shared
466 * pages and to perform write monitoring of pages.
467 *
468 * I would propose an interface similar to the current VMReq interface, similar
469 * in that it doesn't require locking and that the one sending the request may
470 * wait for completion if it wishes to. This shouldn't be very difficult to
471 * realize.
472 *
473 * The requests themselves are also pretty simple. They are basically:
474 * -# Check that some precondition is still true.
475 * -# Do the update.
476 * -# Update all shadow page tables involved with the page.
477 *
478 * The 3rd step is identical to what we're already doing when updating a
479 * physical handler, see pgmHandlerPhysicalSetRamFlagsAndFlushShadowPTs.
480 *
481 *
482 *
483 * @section sec_pgmPhys_MappingCaches Mapping Caches
484 *
485 * In order to be able to map in and out memory and to be able to support
486 * guest with more RAM than we've got virtual address space, we'll employing
487 * a mapping cache. There is already a tiny one for GC (see PGMGCDynMapGCPageEx)
488 * and we'll create a similar one for ring-0 unless we decide to setup a dedicate
489 * memory context for the HWACCM execution.
490 *
491 *
492 * @subsection subsec_pgmPhys_MappingCaches_R3 Ring-3
493 *
494 * We've considered implementing the ring-3 mapping cache page based but found
495 * that this was bother some when one had to take into account TLBs+SMP and
496 * portability (missing the necessary APIs on several platforms). There were
497 * also some performance concerns with this approach which hadn't quite been
498 * worked out.
499 *
500 * Instead, we'll be mapping allocation chunks into the VM process. This simplifies
501 * matters greatly quite a bit since we don't need to invent any new ring-0 stuff,
502 * only some minor RTR0MEMOBJ mapping stuff. The main concern here is that mapping
503 * compared to the previous idea is that mapping or unmapping a 1MB chunk is more
504 * costly than a single page, although how much more costly is uncertain. We'll
505 * try address this by using a very big cache, preferably bigger than the actual
506 * VM RAM size if possible. The current VM RAM sizes should give some idea for
507 * 32-bit boxes, while on 64-bit we can probably get away with employing an
508 * unlimited cache.
509 *
510 * The cache have to parts, as already indicated, the ring-3 side and the
511 * ring-0 side.
512 *
513 * The ring-0 will be tied to the page allocator since it will operate on the
514 * memory objects it contains. It will therefore require the first ring-0 mutex
515 * discussed in @ref subsec_pgmPhys_Serializing. We
516 * some double house keeping wrt to who has mapped what I think, since both
517 * VMMR0.r0 and RTR0MemObj will keep track of mapping relataions
518 *
519 * The ring-3 part will be protected by the pgm critsect. For simplicity, we'll
520 * require anyone that desires to do changes to the mapping cache to do that
521 * from within this critsect. Alternatively, we could employ a separate critsect
522 * for serializing changes to the mapping cache as this would reduce potential
523 * contention with other threads accessing mappings unrelated to the changes
524 * that are in process. We can see about this later, contention will show
525 * up in the statistics anyway, so it'll be simple to tell.
526 *
527 * The organization of the ring-3 part will be very much like how the allocation
528 * chunks are organized in ring-0, that is in an AVL tree by chunk id. To avoid
529 * having to walk the tree all the time, we'll have a couple of lookaside entries
530 * like in we do for I/O ports and MMIO in IOM.
531 *
532 * The simplified flow of a PGMPhysRead/Write function:
533 * -# Enter the PGM critsect.
534 * -# Lookup GCPhys in the ram ranges and get the Page ID.
535 * -# Calc the Allocation Chunk ID from the Page ID.
536 * -# Check the lookaside entries and then the AVL tree for the Chunk ID.
537 * If not found in cache:
538 * -# Call ring-0 and request it to be mapped and supply
539 * a chunk to be unmapped if the cache is maxed out already.
540 * -# Insert the new mapping into the AVL tree (id + R3 address).
541 * -# Update the relevant lookaside entry and return the mapping address.
542 * -# Do the read/write according to monitoring flags and everything.
543 * -# Leave the critsect.
544 *
545 *
546 * @section sec_pgmPhys_Fallback Fallback
547 *
548 * Current all the "second tier" hosts will not support the RTR0MemObjAllocPhysNC
549 * API and thus require a fallback.
550 *
551 * So, when RTR0MemObjAllocPhysNC returns VERR_NOT_SUPPORTED the page allocator
552 * will return to the ring-3 caller (and later ring-0) and asking it to seed
553 * the page allocator with some fresh pages (VERR_GMM_SEED_ME). Ring-3 will
554 * then perform an SUPPageAlloc(cbChunk >> PAGE_SHIFT) call and make a
555 * "SeededAllocPages" call to ring-0.
556 *
557 * The first time ring-0 sees the VERR_NOT_SUPPORTED failure it will disable
558 * all page sharing (zero page detection will continue). It will also force
559 * all allocations to come from the VM which seeded the page. Both these
560 * measures are taken to make sure that there will never be any need for
561 * mapping anything into ring-3 - everything will be mapped already.
562 *
563 * Whether we'll continue to use the current MM locked memory management
564 * for this I don't quite know (I'd prefer not to and just ditch that all
565 * togther), we'll see what's simplest to do.
566 *
567 *
568 *
569 * @section sec_pgmPhys_Changes Changes
570 *
571 * Breakdown of the changes involved?
572 */
573
574/*******************************************************************************
575* Header Files *
576*******************************************************************************/
577#define LOG_GROUP LOG_GROUP_PGM
578#include <VBox/dbgf.h>
579#include <VBox/pgm.h>
580#include <VBox/cpum.h>
581#include <VBox/iom.h>
582#include <VBox/sup.h>
583#include <VBox/mm.h>
584#include <VBox/em.h>
585#include <VBox/stam.h>
586#include <VBox/rem.h>
587#include <VBox/dbgf.h>
588#include <VBox/rem.h>
589#include <VBox/selm.h>
590#include <VBox/ssm.h>
591#include "PGMInternal.h"
592#include <VBox/vm.h>
593#include <VBox/dbg.h>
594#include <VBox/hwaccm.h>
595
596#include <iprt/assert.h>
597#include <iprt/alloc.h>
598#include <iprt/asm.h>
599#include <iprt/thread.h>
600#include <iprt/string.h>
601#ifdef DEBUG_bird
602# include <iprt/env.h>
603#endif
604#include <VBox/param.h>
605#include <VBox/err.h>
606
607
608/*******************************************************************************
609* Defined Constants And Macros *
610*******************************************************************************/
611/** Saved state data unit version for 2.5.x and later. */
612#define PGM_SAVED_STATE_VERSION 9
613/** Saved state data unit version for 2.2.2 and later. */
614#define PGM_SAVED_STATE_VERSION_2_2_2 8
615/** Saved state data unit version for 2.2.0. */
616#define PGM_SAVED_STATE_VERSION_RR_DESC 7
617/** Saved state data unit version. */
618#define PGM_SAVED_STATE_VERSION_OLD_PHYS_CODE 6
619
620
621/*******************************************************************************
622* Internal Functions *
623*******************************************************************************/
624static int pgmR3InitPaging(PVM pVM);
625static void pgmR3InitStats(PVM pVM);
626static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
627static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
628static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
629static DECLCALLBACK(int) pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser);
630static DECLCALLBACK(int) pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
631static DECLCALLBACK(int) pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
632#ifdef VBOX_STRICT
633static DECLCALLBACK(void) pgmR3ResetNoMorePhysWritesFlag(PVM pVM, VMSTATE enmState, VMSTATE enmOldState, void *pvUser);
634#endif
635static DECLCALLBACK(int) pgmR3Save(PVM pVM, PSSMHANDLE pSSM);
636static DECLCALLBACK(int) pgmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version);
637static int pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0);
638static void pgmR3ModeDataSwitch(PVM pVM, PVMCPU pVCpu, PGMMODE enmShw, PGMMODE enmGst);
639static PGMMODE pgmR3CalcShadowMode(PVM pVM, PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher);
640
641#ifdef VBOX_WITH_DEBUGGER
642/** @todo Convert the first two commands to 'info' items. */
643static DECLCALLBACK(int) pgmR3CmdRam(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
644static DECLCALLBACK(int) pgmR3CmdMap(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
645static DECLCALLBACK(int) pgmR3CmdError(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
646static DECLCALLBACK(int) pgmR3CmdSync(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
647static DECLCALLBACK(int) pgmR3CmdSyncAlways(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
648# ifdef VBOX_STRICT
649static DECLCALLBACK(int) pgmR3CmdAssertCR3(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
650# endif
651#endif
652
653
654/*******************************************************************************
655* Global Variables *
656*******************************************************************************/
657#ifdef VBOX_WITH_DEBUGGER
658/** Argument descriptors for '.pgmerror' and '.pgmerroroff'. */
659static const DBGCVARDESC g_aPgmErrorArgs[] =
660{
661 /* cTimesMin, cTimesMax, enmCategory, fFlags, pszName, pszDescription */
662 { 0, 1, DBGCVAR_CAT_STRING, 0, "where", "Error injection location." },
663};
664
665/** Command descriptors. */
666static const DBGCCMD g_aCmds[] =
667{
668 /* pszCmd, cArgsMin, cArgsMax, paArgDesc, cArgDescs, pResultDesc, fFlags, pfnHandler pszSyntax, ....pszDescription */
669 { "pgmram", 0, 0, NULL, 0, NULL, 0, pgmR3CmdRam, "", "Display the ram ranges." },
670 { "pgmmap", 0, 0, NULL, 0, NULL, 0, pgmR3CmdMap, "", "Display the mapping ranges." },
671 { "pgmsync", 0, 0, NULL, 0, NULL, 0, pgmR3CmdSync, "", "Sync the CR3 page." },
672 { "pgmerror", 0, 1, &g_aPgmErrorArgs[0],1, NULL, 0, pgmR3CmdError, "", "Enables inject runtime of errors into parts of PGM." },
673 { "pgmerroroff", 0, 1, &g_aPgmErrorArgs[0],1, NULL, 0, pgmR3CmdError, "", "Disables inject runtime errors into parts of PGM." },
674#ifdef VBOX_STRICT
675 { "pgmassertcr3", 0, 0, NULL, 0, NULL, 0, pgmR3CmdAssertCR3, "", "Check the shadow CR3 mapping." },
676#endif
677 { "pgmsyncalways", 0, 0, NULL, 0, NULL, 0, pgmR3CmdSyncAlways, "", "Toggle permanent CR3 syncing." },
678};
679#endif
680
681
682
683
684/*
685 * Shadow - 32-bit mode
686 */
687#define PGM_SHW_TYPE PGM_TYPE_32BIT
688#define PGM_SHW_NAME(name) PGM_SHW_NAME_32BIT(name)
689#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_32BIT_STR(name)
690#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_32BIT_STR(name)
691#include "PGMShw.h"
692
693/* Guest - real mode */
694#define PGM_GST_TYPE PGM_TYPE_REAL
695#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
696#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_REAL_STR(name)
697#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
698#define PGM_BTH_NAME(name) PGM_BTH_NAME_32BIT_REAL(name)
699#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_32BIT_REAL_STR(name)
700#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_32BIT_REAL_STR(name)
701#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_32BIT_PT_FOR_PHYS
702#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_32BIT_PD_PHYS
703#include "PGMBth.h"
704#include "PGMGstDefs.h"
705#include "PGMGst.h"
706#undef BTH_PGMPOOLKIND_PT_FOR_PT
707#undef BTH_PGMPOOLKIND_ROOT
708#undef PGM_BTH_NAME
709#undef PGM_BTH_NAME_RC_STR
710#undef PGM_BTH_NAME_R0_STR
711#undef PGM_GST_TYPE
712#undef PGM_GST_NAME
713#undef PGM_GST_NAME_RC_STR
714#undef PGM_GST_NAME_R0_STR
715
716/* Guest - protected mode */
717#define PGM_GST_TYPE PGM_TYPE_PROT
718#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
719#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PROT_STR(name)
720#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
721#define PGM_BTH_NAME(name) PGM_BTH_NAME_32BIT_PROT(name)
722#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_32BIT_PROT_STR(name)
723#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_32BIT_PROT_STR(name)
724#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_32BIT_PT_FOR_PHYS
725#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_32BIT_PD_PHYS
726#include "PGMBth.h"
727#include "PGMGstDefs.h"
728#include "PGMGst.h"
729#undef BTH_PGMPOOLKIND_PT_FOR_PT
730#undef BTH_PGMPOOLKIND_ROOT
731#undef PGM_BTH_NAME
732#undef PGM_BTH_NAME_RC_STR
733#undef PGM_BTH_NAME_R0_STR
734#undef PGM_GST_TYPE
735#undef PGM_GST_NAME
736#undef PGM_GST_NAME_RC_STR
737#undef PGM_GST_NAME_R0_STR
738
739/* Guest - 32-bit mode */
740#define PGM_GST_TYPE PGM_TYPE_32BIT
741#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
742#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_32BIT_STR(name)
743#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
744#define PGM_BTH_NAME(name) PGM_BTH_NAME_32BIT_32BIT(name)
745#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_32BIT_32BIT_STR(name)
746#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_32BIT_32BIT_STR(name)
747#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_32BIT_PT_FOR_32BIT_PT
748#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_32BIT_PT_FOR_32BIT_4MB
749#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_32BIT_PD
750#include "PGMBth.h"
751#include "PGMGstDefs.h"
752#include "PGMGst.h"
753#undef BTH_PGMPOOLKIND_PT_FOR_BIG
754#undef BTH_PGMPOOLKIND_PT_FOR_PT
755#undef BTH_PGMPOOLKIND_ROOT
756#undef PGM_BTH_NAME
757#undef PGM_BTH_NAME_RC_STR
758#undef PGM_BTH_NAME_R0_STR
759#undef PGM_GST_TYPE
760#undef PGM_GST_NAME
761#undef PGM_GST_NAME_RC_STR
762#undef PGM_GST_NAME_R0_STR
763
764#undef PGM_SHW_TYPE
765#undef PGM_SHW_NAME
766#undef PGM_SHW_NAME_RC_STR
767#undef PGM_SHW_NAME_R0_STR
768
769
770/*
771 * Shadow - PAE mode
772 */
773#define PGM_SHW_TYPE PGM_TYPE_PAE
774#define PGM_SHW_NAME(name) PGM_SHW_NAME_PAE(name)
775#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_PAE_STR(name)
776#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_PAE_STR(name)
777#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_REAL(name)
778#include "PGMShw.h"
779
780/* Guest - real mode */
781#define PGM_GST_TYPE PGM_TYPE_REAL
782#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
783#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_REAL_STR(name)
784#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
785#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_REAL(name)
786#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_PAE_REAL_STR(name)
787#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_REAL_STR(name)
788#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
789#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_PAE_PDPT_PHYS
790#include "PGMGstDefs.h"
791#include "PGMBth.h"
792#undef BTH_PGMPOOLKIND_PT_FOR_PT
793#undef BTH_PGMPOOLKIND_ROOT
794#undef PGM_BTH_NAME
795#undef PGM_BTH_NAME_RC_STR
796#undef PGM_BTH_NAME_R0_STR
797#undef PGM_GST_TYPE
798#undef PGM_GST_NAME
799#undef PGM_GST_NAME_RC_STR
800#undef PGM_GST_NAME_R0_STR
801
802/* Guest - protected mode */
803#define PGM_GST_TYPE PGM_TYPE_PROT
804#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
805#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PROT_STR(name)
806#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
807#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_PROT(name)
808#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_PAE_PROT_STR(name)
809#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_PROT_STR(name)
810#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
811#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_PAE_PDPT_PHYS
812#include "PGMGstDefs.h"
813#include "PGMBth.h"
814#undef BTH_PGMPOOLKIND_PT_FOR_PT
815#undef BTH_PGMPOOLKIND_ROOT
816#undef PGM_BTH_NAME
817#undef PGM_BTH_NAME_RC_STR
818#undef PGM_BTH_NAME_R0_STR
819#undef PGM_GST_TYPE
820#undef PGM_GST_NAME
821#undef PGM_GST_NAME_RC_STR
822#undef PGM_GST_NAME_R0_STR
823
824/* Guest - 32-bit mode */
825#define PGM_GST_TYPE PGM_TYPE_32BIT
826#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
827#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_32BIT_STR(name)
828#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
829#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_32BIT(name)
830#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_PAE_32BIT_STR(name)
831#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_32BIT_STR(name)
832#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
833#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
834#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_PAE_PDPT_FOR_32BIT
835#include "PGMGstDefs.h"
836#include "PGMBth.h"
837#undef BTH_PGMPOOLKIND_PT_FOR_BIG
838#undef BTH_PGMPOOLKIND_PT_FOR_PT
839#undef BTH_PGMPOOLKIND_ROOT
840#undef PGM_BTH_NAME
841#undef PGM_BTH_NAME_RC_STR
842#undef PGM_BTH_NAME_R0_STR
843#undef PGM_GST_TYPE
844#undef PGM_GST_NAME
845#undef PGM_GST_NAME_RC_STR
846#undef PGM_GST_NAME_R0_STR
847
848/* Guest - PAE mode */
849#define PGM_GST_TYPE PGM_TYPE_PAE
850#define PGM_GST_NAME(name) PGM_GST_NAME_PAE(name)
851#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PAE_STR(name)
852#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PAE_STR(name)
853#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_PAE(name)
854#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_PAE_PAE_STR(name)
855#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_PAE_STR(name)
856#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
857#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
858#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_PAE_PDPT
859#include "PGMBth.h"
860#include "PGMGstDefs.h"
861#include "PGMGst.h"
862#undef BTH_PGMPOOLKIND_PT_FOR_BIG
863#undef BTH_PGMPOOLKIND_PT_FOR_PT
864#undef BTH_PGMPOOLKIND_ROOT
865#undef PGM_BTH_NAME
866#undef PGM_BTH_NAME_RC_STR
867#undef PGM_BTH_NAME_R0_STR
868#undef PGM_GST_TYPE
869#undef PGM_GST_NAME
870#undef PGM_GST_NAME_RC_STR
871#undef PGM_GST_NAME_R0_STR
872
873#undef PGM_SHW_TYPE
874#undef PGM_SHW_NAME
875#undef PGM_SHW_NAME_RC_STR
876#undef PGM_SHW_NAME_R0_STR
877
878
879/*
880 * Shadow - AMD64 mode
881 */
882#define PGM_SHW_TYPE PGM_TYPE_AMD64
883#define PGM_SHW_NAME(name) PGM_SHW_NAME_AMD64(name)
884#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_AMD64_STR(name)
885#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_AMD64_STR(name)
886#include "PGMShw.h"
887
888#ifdef VBOX_WITH_64_BITS_GUESTS
889/* Guest - AMD64 mode */
890# define PGM_GST_TYPE PGM_TYPE_AMD64
891# define PGM_GST_NAME(name) PGM_GST_NAME_AMD64(name)
892# define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_AMD64_STR(name)
893# define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_AMD64_STR(name)
894# define PGM_BTH_NAME(name) PGM_BTH_NAME_AMD64_AMD64(name)
895# define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_AMD64_AMD64_STR(name)
896# define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_AMD64_AMD64_STR(name)
897# define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
898# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
899# define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_64BIT_PML4
900# include "PGMBth.h"
901# include "PGMGstDefs.h"
902# include "PGMGst.h"
903# undef BTH_PGMPOOLKIND_PT_FOR_BIG
904# undef BTH_PGMPOOLKIND_PT_FOR_PT
905# undef BTH_PGMPOOLKIND_ROOT
906# undef PGM_BTH_NAME
907# undef PGM_BTH_NAME_RC_STR
908# undef PGM_BTH_NAME_R0_STR
909# undef PGM_GST_TYPE
910# undef PGM_GST_NAME
911# undef PGM_GST_NAME_RC_STR
912# undef PGM_GST_NAME_R0_STR
913#endif /* VBOX_WITH_64_BITS_GUESTS */
914
915#undef PGM_SHW_TYPE
916#undef PGM_SHW_NAME
917#undef PGM_SHW_NAME_RC_STR
918#undef PGM_SHW_NAME_R0_STR
919
920
921/*
922 * Shadow - Nested paging mode
923 */
924#define PGM_SHW_TYPE PGM_TYPE_NESTED
925#define PGM_SHW_NAME(name) PGM_SHW_NAME_NESTED(name)
926#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_NESTED_STR(name)
927#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_NESTED_STR(name)
928#include "PGMShw.h"
929
930/* Guest - real mode */
931#define PGM_GST_TYPE PGM_TYPE_REAL
932#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
933#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_REAL_STR(name)
934#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
935#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_REAL(name)
936#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_REAL_STR(name)
937#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_REAL_STR(name)
938#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
939#include "PGMGstDefs.h"
940#include "PGMBth.h"
941#undef BTH_PGMPOOLKIND_PT_FOR_PT
942#undef PGM_BTH_NAME
943#undef PGM_BTH_NAME_RC_STR
944#undef PGM_BTH_NAME_R0_STR
945#undef PGM_GST_TYPE
946#undef PGM_GST_NAME
947#undef PGM_GST_NAME_RC_STR
948#undef PGM_GST_NAME_R0_STR
949
950/* Guest - protected mode */
951#define PGM_GST_TYPE PGM_TYPE_PROT
952#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
953#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PROT_STR(name)
954#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
955#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_PROT(name)
956#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_PROT_STR(name)
957#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_PROT_STR(name)
958#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
959#include "PGMGstDefs.h"
960#include "PGMBth.h"
961#undef BTH_PGMPOOLKIND_PT_FOR_PT
962#undef PGM_BTH_NAME
963#undef PGM_BTH_NAME_RC_STR
964#undef PGM_BTH_NAME_R0_STR
965#undef PGM_GST_TYPE
966#undef PGM_GST_NAME
967#undef PGM_GST_NAME_RC_STR
968#undef PGM_GST_NAME_R0_STR
969
970/* Guest - 32-bit mode */
971#define PGM_GST_TYPE PGM_TYPE_32BIT
972#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
973#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_32BIT_STR(name)
974#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
975#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_32BIT(name)
976#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_32BIT_STR(name)
977#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_32BIT_STR(name)
978#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
979#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
980#include "PGMGstDefs.h"
981#include "PGMBth.h"
982#undef BTH_PGMPOOLKIND_PT_FOR_BIG
983#undef BTH_PGMPOOLKIND_PT_FOR_PT
984#undef PGM_BTH_NAME
985#undef PGM_BTH_NAME_RC_STR
986#undef PGM_BTH_NAME_R0_STR
987#undef PGM_GST_TYPE
988#undef PGM_GST_NAME
989#undef PGM_GST_NAME_RC_STR
990#undef PGM_GST_NAME_R0_STR
991
992/* Guest - PAE mode */
993#define PGM_GST_TYPE PGM_TYPE_PAE
994#define PGM_GST_NAME(name) PGM_GST_NAME_PAE(name)
995#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PAE_STR(name)
996#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PAE_STR(name)
997#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_PAE(name)
998#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_PAE_STR(name)
999#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_PAE_STR(name)
1000#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
1001#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
1002#include "PGMGstDefs.h"
1003#include "PGMBth.h"
1004#undef BTH_PGMPOOLKIND_PT_FOR_BIG
1005#undef BTH_PGMPOOLKIND_PT_FOR_PT
1006#undef PGM_BTH_NAME
1007#undef PGM_BTH_NAME_RC_STR
1008#undef PGM_BTH_NAME_R0_STR
1009#undef PGM_GST_TYPE
1010#undef PGM_GST_NAME
1011#undef PGM_GST_NAME_RC_STR
1012#undef PGM_GST_NAME_R0_STR
1013
1014#ifdef VBOX_WITH_64_BITS_GUESTS
1015/* Guest - AMD64 mode */
1016# define PGM_GST_TYPE PGM_TYPE_AMD64
1017# define PGM_GST_NAME(name) PGM_GST_NAME_AMD64(name)
1018# define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_AMD64_STR(name)
1019# define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_AMD64_STR(name)
1020# define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_AMD64(name)
1021# define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_AMD64_STR(name)
1022# define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_AMD64_STR(name)
1023# define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
1024# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
1025# include "PGMGstDefs.h"
1026# include "PGMBth.h"
1027# undef BTH_PGMPOOLKIND_PT_FOR_BIG
1028# undef BTH_PGMPOOLKIND_PT_FOR_PT
1029# undef PGM_BTH_NAME
1030# undef PGM_BTH_NAME_RC_STR
1031# undef PGM_BTH_NAME_R0_STR
1032# undef PGM_GST_TYPE
1033# undef PGM_GST_NAME
1034# undef PGM_GST_NAME_RC_STR
1035# undef PGM_GST_NAME_R0_STR
1036#endif /* VBOX_WITH_64_BITS_GUESTS */
1037
1038#undef PGM_SHW_TYPE
1039#undef PGM_SHW_NAME
1040#undef PGM_SHW_NAME_RC_STR
1041#undef PGM_SHW_NAME_R0_STR
1042
1043
1044/*
1045 * Shadow - EPT
1046 */
1047#define PGM_SHW_TYPE PGM_TYPE_EPT
1048#define PGM_SHW_NAME(name) PGM_SHW_NAME_EPT(name)
1049#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_EPT_STR(name)
1050#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_EPT_STR(name)
1051#include "PGMShw.h"
1052
1053/* Guest - real mode */
1054#define PGM_GST_TYPE PGM_TYPE_REAL
1055#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
1056#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_REAL_STR(name)
1057#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
1058#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_REAL(name)
1059#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_REAL_STR(name)
1060#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_REAL_STR(name)
1061#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
1062#include "PGMGstDefs.h"
1063#include "PGMBth.h"
1064#undef BTH_PGMPOOLKIND_PT_FOR_PT
1065#undef PGM_BTH_NAME
1066#undef PGM_BTH_NAME_RC_STR
1067#undef PGM_BTH_NAME_R0_STR
1068#undef PGM_GST_TYPE
1069#undef PGM_GST_NAME
1070#undef PGM_GST_NAME_RC_STR
1071#undef PGM_GST_NAME_R0_STR
1072
1073/* Guest - protected mode */
1074#define PGM_GST_TYPE PGM_TYPE_PROT
1075#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
1076#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PROT_STR(name)
1077#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
1078#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_PROT(name)
1079#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_PROT_STR(name)
1080#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_PROT_STR(name)
1081#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
1082#include "PGMGstDefs.h"
1083#include "PGMBth.h"
1084#undef BTH_PGMPOOLKIND_PT_FOR_PT
1085#undef PGM_BTH_NAME
1086#undef PGM_BTH_NAME_RC_STR
1087#undef PGM_BTH_NAME_R0_STR
1088#undef PGM_GST_TYPE
1089#undef PGM_GST_NAME
1090#undef PGM_GST_NAME_RC_STR
1091#undef PGM_GST_NAME_R0_STR
1092
1093/* Guest - 32-bit mode */
1094#define PGM_GST_TYPE PGM_TYPE_32BIT
1095#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
1096#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_32BIT_STR(name)
1097#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
1098#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_32BIT(name)
1099#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_32BIT_STR(name)
1100#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_32BIT_STR(name)
1101#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
1102#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
1103#include "PGMGstDefs.h"
1104#include "PGMBth.h"
1105#undef BTH_PGMPOOLKIND_PT_FOR_BIG
1106#undef BTH_PGMPOOLKIND_PT_FOR_PT
1107#undef PGM_BTH_NAME
1108#undef PGM_BTH_NAME_RC_STR
1109#undef PGM_BTH_NAME_R0_STR
1110#undef PGM_GST_TYPE
1111#undef PGM_GST_NAME
1112#undef PGM_GST_NAME_RC_STR
1113#undef PGM_GST_NAME_R0_STR
1114
1115/* Guest - PAE mode */
1116#define PGM_GST_TYPE PGM_TYPE_PAE
1117#define PGM_GST_NAME(name) PGM_GST_NAME_PAE(name)
1118#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PAE_STR(name)
1119#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PAE_STR(name)
1120#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_PAE(name)
1121#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_PAE_STR(name)
1122#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_PAE_STR(name)
1123#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
1124#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
1125#include "PGMGstDefs.h"
1126#include "PGMBth.h"
1127#undef BTH_PGMPOOLKIND_PT_FOR_BIG
1128#undef BTH_PGMPOOLKIND_PT_FOR_PT
1129#undef PGM_BTH_NAME
1130#undef PGM_BTH_NAME_RC_STR
1131#undef PGM_BTH_NAME_R0_STR
1132#undef PGM_GST_TYPE
1133#undef PGM_GST_NAME
1134#undef PGM_GST_NAME_RC_STR
1135#undef PGM_GST_NAME_R0_STR
1136
1137#ifdef VBOX_WITH_64_BITS_GUESTS
1138/* Guest - AMD64 mode */
1139# define PGM_GST_TYPE PGM_TYPE_AMD64
1140# define PGM_GST_NAME(name) PGM_GST_NAME_AMD64(name)
1141# define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_AMD64_STR(name)
1142# define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_AMD64_STR(name)
1143# define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_AMD64(name)
1144# define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_AMD64_STR(name)
1145# define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_AMD64_STR(name)
1146# define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
1147# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
1148# include "PGMGstDefs.h"
1149# include "PGMBth.h"
1150# undef BTH_PGMPOOLKIND_PT_FOR_BIG
1151# undef BTH_PGMPOOLKIND_PT_FOR_PT
1152# undef PGM_BTH_NAME
1153# undef PGM_BTH_NAME_RC_STR
1154# undef PGM_BTH_NAME_R0_STR
1155# undef PGM_GST_TYPE
1156# undef PGM_GST_NAME
1157# undef PGM_GST_NAME_RC_STR
1158# undef PGM_GST_NAME_R0_STR
1159#endif /* VBOX_WITH_64_BITS_GUESTS */
1160
1161#undef PGM_SHW_TYPE
1162#undef PGM_SHW_NAME
1163#undef PGM_SHW_NAME_RC_STR
1164#undef PGM_SHW_NAME_R0_STR
1165
1166
1167
1168/**
1169 * Initiates the paging of VM.
1170 *
1171 * @returns VBox status code.
1172 * @param pVM Pointer to VM structure.
1173 */
1174VMMR3DECL(int) PGMR3Init(PVM pVM)
1175{
1176 LogFlow(("PGMR3Init:\n"));
1177 PCFGMNODE pCfgPGM = CFGMR3GetChild(CFGMR3GetRoot(pVM), "/PGM");
1178 int rc;
1179
1180 /*
1181 * Assert alignment and sizes.
1182 */
1183 AssertCompile(sizeof(pVM->pgm.s) <= sizeof(pVM->pgm.padding));
1184
1185 /*
1186 * Init the structure.
1187 */
1188 pVM->pgm.s.offVM = RT_OFFSETOF(VM, pgm.s);
1189 pVM->pgm.s.offVCpuPGM = RT_OFFSETOF(VMCPU, pgm.s);
1190
1191 /* Init the per-CPU part. */
1192 for (unsigned i=0;i<pVM->cCPUs;i++)
1193 {
1194 PVMCPU pVCpu = &pVM->aCpus[i];
1195 PPGMCPU pPGM = &pVCpu->pgm.s;
1196
1197 pPGM->offVM = (uintptr_t)&pVCpu->pgm.s - (uintptr_t)pVM;
1198 pPGM->offVCpu = RT_OFFSETOF(VMCPU, pgm.s);
1199 pPGM->offPGM = (uintptr_t)&pVCpu->pgm.s - (uintptr_t)&pVM->pgm.s;
1200
1201 pPGM->enmShadowMode = PGMMODE_INVALID;
1202 pPGM->enmGuestMode = PGMMODE_INVALID;
1203
1204 pPGM->GCPhysCR3 = NIL_RTGCPHYS;
1205
1206 pPGM->pGstPaePdptR3 = NULL;
1207#ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1208 pPGM->pGstPaePdptR0 = NIL_RTR0PTR;
1209#endif
1210 pPGM->pGstPaePdptRC = NIL_RTRCPTR;
1211 for (unsigned i = 0; i < RT_ELEMENTS(pVCpu->pgm.s.apGstPaePDsR3); i++)
1212 {
1213 pPGM->apGstPaePDsR3[i] = NULL;
1214#ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1215 pPGM->apGstPaePDsR0[i] = NIL_RTR0PTR;
1216#endif
1217 pPGM->apGstPaePDsRC[i] = NIL_RTRCPTR;
1218 pPGM->aGCPhysGstPaePDs[i] = NIL_RTGCPHYS;
1219 pPGM->aGCPhysGstPaePDsMonitored[i] = NIL_RTGCPHYS;
1220 }
1221
1222 pPGM->fA20Enabled = true;
1223 }
1224
1225 pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;
1226 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(32) - 1; /* default; checked later */
1227 pVM->pgm.s.GCPtrPrevRamRangeMapping = MM_HYPER_AREA_ADDRESS;
1228
1229 rc = CFGMR3QueryBoolDef(CFGMR3GetRoot(pVM), "RamPreAlloc", &pVM->pgm.s.fRamPreAlloc,
1230#ifdef VBOX_WITH_PREALLOC_RAM_BY_DEFAULT
1231 true
1232#else
1233 false
1234#endif
1235 );
1236 AssertLogRelRCReturn(rc, rc);
1237
1238#if HC_ARCH_BITS == 64 || 1 /** @todo 4GB/32-bit: remove || 1 later and adjust the limit. */
1239 rc = CFGMR3QueryU32Def(pCfgPGM, "MaxRing3Chunks", &pVM->pgm.s.ChunkR3Map.cMax, UINT32_MAX);
1240#else
1241 rc = CFGMR3QueryU32Def(pCfgPGM, "MaxRing3Chunks", &pVM->pgm.s.ChunkR3Map.cMax, _1G / GMM_CHUNK_SIZE);
1242#endif
1243 AssertLogRelRCReturn(rc, rc);
1244 for (uint32_t i = 0; i < RT_ELEMENTS(pVM->pgm.s.ChunkR3Map.Tlb.aEntries); i++)
1245 pVM->pgm.s.ChunkR3Map.Tlb.aEntries[i].idChunk = NIL_GMM_CHUNKID;
1246
1247 /*
1248 * Get the configured RAM size - to estimate saved state size.
1249 */
1250 uint64_t cbRam;
1251 rc = CFGMR3QueryU64(CFGMR3GetRoot(pVM), "RamSize", &cbRam);
1252 if (rc == VERR_CFGM_VALUE_NOT_FOUND)
1253 cbRam = 0;
1254 else if (RT_SUCCESS(rc))
1255 {
1256 if (cbRam < PAGE_SIZE)
1257 cbRam = 0;
1258 cbRam = RT_ALIGN_64(cbRam, PAGE_SIZE);
1259 }
1260 else
1261 {
1262 AssertMsgFailed(("Configuration error: Failed to query integer \"RamSize\", rc=%Rrc.\n", rc));
1263 return rc;
1264 }
1265
1266 /*
1267 * Register callbacks, string formatters and the saved state data unit.
1268 */
1269#ifdef VBOX_STRICT
1270 VMR3AtStateRegister(pVM, pgmR3ResetNoMorePhysWritesFlag, NULL);
1271#endif
1272 PGMRegisterStringFormatTypes();
1273
1274 rc = SSMR3RegisterInternal(pVM, "pgm", 1, PGM_SAVED_STATE_VERSION, (size_t)cbRam + sizeof(PGM),
1275 NULL, pgmR3Save, NULL,
1276 NULL, pgmR3Load, NULL);
1277 if (RT_FAILURE(rc))
1278 return rc;
1279
1280 /*
1281 * Initialize the PGM critical section and flush the phys TLBs
1282 */
1283 rc = PDMR3CritSectInit(pVM, &pVM->pgm.s.CritSect, "PGM");
1284 AssertRCReturn(rc, rc);
1285
1286 PGMR3PhysChunkInvalidateTLB(pVM);
1287 PGMPhysInvalidatePageR3MapTLB(pVM);
1288 PGMPhysInvalidatePageR0MapTLB(pVM);
1289 PGMPhysInvalidatePageGCMapTLB(pVM);
1290
1291 /*
1292 * For the time being we sport a full set of handy pages in addition to the base
1293 * memory to simplify things.
1294 */
1295 rc = MMR3ReserveHandyPages(pVM, RT_ELEMENTS(pVM->pgm.s.aHandyPages)); /** @todo this should be changed to PGM_HANDY_PAGES_MIN but this needs proper testing... */
1296 AssertRCReturn(rc, rc);
1297
1298 /*
1299 * Trees
1300 */
1301 rc = MMHyperAlloc(pVM, sizeof(PGMTREES), 0, MM_TAG_PGM, (void **)&pVM->pgm.s.pTreesR3);
1302 if (RT_SUCCESS(rc))
1303 {
1304 pVM->pgm.s.pTreesR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pTreesR3);
1305 pVM->pgm.s.pTreesRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pTreesR3);
1306
1307 /*
1308 * Alocate the zero page.
1309 */
1310 rc = MMHyperAlloc(pVM, PAGE_SIZE, PAGE_SIZE, MM_TAG_PGM, &pVM->pgm.s.pvZeroPgR3);
1311 }
1312 if (RT_SUCCESS(rc))
1313 {
1314 pVM->pgm.s.pvZeroPgRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pvZeroPgR3);
1315 pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
1316 pVM->pgm.s.HCPhysZeroPg = MMR3HyperHCVirt2HCPhys(pVM, pVM->pgm.s.pvZeroPgR3);
1317 AssertRelease(pVM->pgm.s.HCPhysZeroPg != NIL_RTHCPHYS);
1318
1319 /*
1320 * Init the paging.
1321 */
1322 rc = pgmR3InitPaging(pVM);
1323 }
1324 if (RT_SUCCESS(rc))
1325 {
1326 /*
1327 * Init the page pool.
1328 */
1329 rc = pgmR3PoolInit(pVM);
1330 }
1331 if (RT_SUCCESS(rc))
1332 {
1333 for (unsigned i=0;i<pVM->cCPUs;i++)
1334 {
1335 PVMCPU pVCpu = &pVM->aCpus[i];
1336
1337 rc = PGMR3ChangeMode(pVM, pVCpu, PGMMODE_REAL);
1338 if (RT_FAILURE(rc))
1339 break;
1340 }
1341 }
1342
1343 if (RT_SUCCESS(rc))
1344 {
1345 /*
1346 * Info & statistics
1347 */
1348 DBGFR3InfoRegisterInternal(pVM, "mode",
1349 "Shows the current paging mode. "
1350 "Recognizes 'all', 'guest', 'shadow' and 'host' as arguments, defaulting to 'all' if nothing's given.",
1351 pgmR3InfoMode);
1352 DBGFR3InfoRegisterInternal(pVM, "pgmcr3",
1353 "Dumps all the entries in the top level paging table. No arguments.",
1354 pgmR3InfoCr3);
1355 DBGFR3InfoRegisterInternal(pVM, "phys",
1356 "Dumps all the physical address ranges. No arguments.",
1357 pgmR3PhysInfo);
1358 DBGFR3InfoRegisterInternal(pVM, "handlers",
1359 "Dumps physical, virtual and hyper virtual handlers. "
1360 "Pass 'phys', 'virt', 'hyper' as argument if only one kind is wanted."
1361 "Add 'nost' if the statistics are unwanted, use together with 'all' or explicit selection.",
1362 pgmR3InfoHandlers);
1363 DBGFR3InfoRegisterInternal(pVM, "mappings",
1364 "Dumps guest mappings.",
1365 pgmR3MapInfo);
1366
1367 pgmR3InitStats(pVM);
1368
1369#ifdef VBOX_WITH_DEBUGGER
1370 /*
1371 * Debugger commands.
1372 */
1373 static bool s_fRegisteredCmds = false;
1374 if (!s_fRegisteredCmds)
1375 {
1376 int rc = DBGCRegisterCommands(&g_aCmds[0], RT_ELEMENTS(g_aCmds));
1377 if (RT_SUCCESS(rc))
1378 s_fRegisteredCmds = true;
1379 }
1380#endif
1381 return VINF_SUCCESS;
1382 }
1383
1384 /* Almost no cleanup necessary, MM frees all memory. */
1385 PDMR3CritSectDelete(&pVM->pgm.s.CritSect);
1386
1387 return rc;
1388}
1389
1390
1391/**
1392 * Initializes the per-VCPU PGM.
1393 *
1394 * @returns VBox status code.
1395 * @param pVM The VM to operate on.
1396 */
1397VMMR3DECL(int) PGMR3InitCPU(PVM pVM)
1398{
1399 LogFlow(("PGMR3InitCPU\n"));
1400 return VINF_SUCCESS;
1401}
1402
1403
1404/**
1405 * Init paging.
1406 *
1407 * Since we need to check what mode the host is operating in before we can choose
1408 * the right paging functions for the host we have to delay this until R0 has
1409 * been initialized.
1410 *
1411 * @returns VBox status code.
1412 * @param pVM VM handle.
1413 */
1414static int pgmR3InitPaging(PVM pVM)
1415{
1416 /*
1417 * Force a recalculation of modes and switcher so everyone gets notified.
1418 */
1419 for (unsigned i=0;i<pVM->cCPUs;i++)
1420 {
1421 PVMCPU pVCpu = &pVM->aCpus[i];
1422
1423 pVCpu->pgm.s.enmShadowMode = PGMMODE_INVALID;
1424 pVCpu->pgm.s.enmGuestMode = PGMMODE_INVALID;
1425 }
1426
1427 pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;
1428
1429 /*
1430 * Allocate static mapping space for whatever the cr3 register
1431 * points to and in the case of PAE mode to the 4 PDs.
1432 */
1433 int rc = MMR3HyperReserve(pVM, PAGE_SIZE * 5, "CR3 mapping", &pVM->pgm.s.GCPtrCR3Mapping);
1434 if (RT_FAILURE(rc))
1435 {
1436 AssertMsgFailed(("Failed to reserve two pages for cr mapping in HMA, rc=%Rrc\n", rc));
1437 return rc;
1438 }
1439 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
1440
1441 /*
1442 * Allocate pages for the three possible intermediate contexts
1443 * (AMD64, PAE and plain 32-Bit). We maintain all three contexts
1444 * for the sake of simplicity. The AMD64 uses the PAE for the
1445 * lower levels, making the total number of pages 11 (3 + 7 + 1).
1446 *
1447 * We assume that two page tables will be enought for the core code
1448 * mappings (HC virtual and identity).
1449 */
1450 pVM->pgm.s.pInterPD = (PX86PD)MMR3PageAllocLow(pVM);
1451 pVM->pgm.s.apInterPTs[0] = (PX86PT)MMR3PageAllocLow(pVM);
1452 pVM->pgm.s.apInterPTs[1] = (PX86PT)MMR3PageAllocLow(pVM);
1453 pVM->pgm.s.apInterPaePTs[0] = (PX86PTPAE)MMR3PageAlloc(pVM);
1454 pVM->pgm.s.apInterPaePTs[1] = (PX86PTPAE)MMR3PageAlloc(pVM);
1455 pVM->pgm.s.apInterPaePDs[0] = (PX86PDPAE)MMR3PageAlloc(pVM);
1456 pVM->pgm.s.apInterPaePDs[1] = (PX86PDPAE)MMR3PageAlloc(pVM);
1457 pVM->pgm.s.apInterPaePDs[2] = (PX86PDPAE)MMR3PageAlloc(pVM);
1458 pVM->pgm.s.apInterPaePDs[3] = (PX86PDPAE)MMR3PageAlloc(pVM);
1459 pVM->pgm.s.pInterPaePDPT = (PX86PDPT)MMR3PageAllocLow(pVM);
1460 pVM->pgm.s.pInterPaePDPT64 = (PX86PDPT)MMR3PageAllocLow(pVM);
1461 pVM->pgm.s.pInterPaePML4 = (PX86PML4)MMR3PageAllocLow(pVM);
1462 if ( !pVM->pgm.s.pInterPD
1463 || !pVM->pgm.s.apInterPTs[0]
1464 || !pVM->pgm.s.apInterPTs[1]
1465 || !pVM->pgm.s.apInterPaePTs[0]
1466 || !pVM->pgm.s.apInterPaePTs[1]
1467 || !pVM->pgm.s.apInterPaePDs[0]
1468 || !pVM->pgm.s.apInterPaePDs[1]
1469 || !pVM->pgm.s.apInterPaePDs[2]
1470 || !pVM->pgm.s.apInterPaePDs[3]
1471 || !pVM->pgm.s.pInterPaePDPT
1472 || !pVM->pgm.s.pInterPaePDPT64
1473 || !pVM->pgm.s.pInterPaePML4)
1474 {
1475 AssertMsgFailed(("Failed to allocate pages for the intermediate context!\n"));
1476 return VERR_NO_PAGE_MEMORY;
1477 }
1478
1479 pVM->pgm.s.HCPhysInterPD = MMPage2Phys(pVM, pVM->pgm.s.pInterPD);
1480 AssertRelease(pVM->pgm.s.HCPhysInterPD != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPD & PAGE_OFFSET_MASK));
1481 pVM->pgm.s.HCPhysInterPaePDPT = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT);
1482 AssertRelease(pVM->pgm.s.HCPhysInterPaePDPT != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePDPT & PAGE_OFFSET_MASK));
1483 pVM->pgm.s.HCPhysInterPaePML4 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePML4);
1484 AssertRelease(pVM->pgm.s.HCPhysInterPaePML4 != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePML4 & PAGE_OFFSET_MASK) && pVM->pgm.s.HCPhysInterPaePML4 < 0xffffffff);
1485
1486 /*
1487 * Initialize the pages, setting up the PML4 and PDPT for repetitive 4GB action.
1488 */
1489 ASMMemZeroPage(pVM->pgm.s.pInterPD);
1490 ASMMemZeroPage(pVM->pgm.s.apInterPTs[0]);
1491 ASMMemZeroPage(pVM->pgm.s.apInterPTs[1]);
1492
1493 ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[0]);
1494 ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[1]);
1495
1496 ASMMemZeroPage(pVM->pgm.s.pInterPaePDPT);
1497 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apInterPaePDs); i++)
1498 {
1499 ASMMemZeroPage(pVM->pgm.s.apInterPaePDs[i]);
1500 pVM->pgm.s.pInterPaePDPT->a[i].u = X86_PDPE_P | PGM_PLXFLAGS_PERMANENT
1501 | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[i]);
1502 }
1503
1504 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.pInterPaePDPT64->a); i++)
1505 {
1506 const unsigned iPD = i % RT_ELEMENTS(pVM->pgm.s.apInterPaePDs);
1507 pVM->pgm.s.pInterPaePDPT64->a[i].u = X86_PDPE_P | X86_PDPE_RW | X86_PDPE_US | X86_PDPE_A | PGM_PLXFLAGS_PERMANENT
1508 | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[iPD]);
1509 }
1510
1511 RTHCPHYS HCPhysInterPaePDPT64 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64);
1512 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.pInterPaePML4->a); i++)
1513 pVM->pgm.s.pInterPaePML4->a[i].u = X86_PML4E_P | X86_PML4E_RW | X86_PML4E_US | X86_PML4E_A | PGM_PLXFLAGS_PERMANENT
1514 | HCPhysInterPaePDPT64;
1515
1516 /*
1517 * Initialize paging workers and mode from current host mode
1518 * and the guest running in real mode.
1519 */
1520 pVM->pgm.s.enmHostMode = SUPGetPagingMode();
1521 switch (pVM->pgm.s.enmHostMode)
1522 {
1523 case SUPPAGINGMODE_32_BIT:
1524 case SUPPAGINGMODE_32_BIT_GLOBAL:
1525 case SUPPAGINGMODE_PAE:
1526 case SUPPAGINGMODE_PAE_GLOBAL:
1527 case SUPPAGINGMODE_PAE_NX:
1528 case SUPPAGINGMODE_PAE_GLOBAL_NX:
1529 break;
1530
1531 case SUPPAGINGMODE_AMD64:
1532 case SUPPAGINGMODE_AMD64_GLOBAL:
1533 case SUPPAGINGMODE_AMD64_NX:
1534 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
1535#ifndef VBOX_WITH_HYBRID_32BIT_KERNEL
1536 if (ARCH_BITS != 64)
1537 {
1538 AssertMsgFailed(("Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
1539 LogRel(("Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
1540 return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
1541 }
1542#endif
1543 break;
1544 default:
1545 AssertMsgFailed(("Host mode %d is not supported\n", pVM->pgm.s.enmHostMode));
1546 return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
1547 }
1548 rc = pgmR3ModeDataInit(pVM, false /* don't resolve GC and R0 syms yet */);
1549 if (RT_SUCCESS(rc))
1550 {
1551 LogFlow(("pgmR3InitPaging: returns successfully\n"));
1552#if HC_ARCH_BITS == 64
1553 LogRel(("Debug: HCPhysInterPD=%RHp HCPhysInterPaePDPT=%RHp HCPhysInterPaePML4=%RHp\n",
1554 pVM->pgm.s.HCPhysInterPD, pVM->pgm.s.HCPhysInterPaePDPT, pVM->pgm.s.HCPhysInterPaePML4));
1555 LogRel(("Debug: apInterPTs={%RHp,%RHp} apInterPaePTs={%RHp,%RHp} apInterPaePDs={%RHp,%RHp,%RHp,%RHp} pInterPaePDPT64=%RHp\n",
1556 MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[1]),
1557 MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[1]),
1558 MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[1]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[2]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[3]),
1559 MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64)));
1560#endif
1561
1562 return VINF_SUCCESS;
1563 }
1564
1565 LogFlow(("pgmR3InitPaging: returns %Rrc\n", rc));
1566 return rc;
1567}
1568
1569
1570/**
1571 * Init statistics
1572 */
1573static void pgmR3InitStats(PVM pVM)
1574{
1575 PPGM pPGM = &pVM->pgm.s;
1576 int rc;
1577
1578 /* Common - misc variables */
1579 STAM_REL_REG(pVM, &pPGM->cAllPages, STAMTYPE_U32, "/PGM/Page/cAllPages", STAMUNIT_OCCURENCES, "The total number of pages.");
1580 STAM_REL_REG(pVM, &pPGM->cPrivatePages, STAMTYPE_U32, "/PGM/Page/cPrivatePages", STAMUNIT_OCCURENCES, "The number of private pages.");
1581 STAM_REL_REG(pVM, &pPGM->cSharedPages, STAMTYPE_U32, "/PGM/Page/cSharedPages", STAMUNIT_OCCURENCES, "The number of shared pages.");
1582 STAM_REL_REG(pVM, &pPGM->cZeroPages, STAMTYPE_U32, "/PGM/Page/cZeroPages", STAMUNIT_OCCURENCES, "The number of zero backed pages.");
1583 STAM_REL_REG(pVM, &pPGM->cHandyPages, STAMTYPE_U32, "/PGM/Page/cHandyPages", STAMUNIT_OCCURENCES, "The number of handy pages (not included in cAllPages).");
1584 STAM_REL_REG(pVM, &pPGM->cRelocations, STAMTYPE_COUNTER, "/PGM/cRelocations", STAMUNIT_OCCURENCES, "Number of hypervisor relocations.");
1585 STAM_REL_REG(pVM, &pPGM->ChunkR3Map.c, STAMTYPE_U32, "/PGM/ChunkR3Map/c", STAMUNIT_OCCURENCES, "Number of mapped chunks.");
1586 STAM_REL_REG(pVM, &pPGM->ChunkR3Map.cMax, STAMTYPE_U32, "/PGM/ChunkR3Map/cMax", STAMUNIT_OCCURENCES, "Maximum number of mapped chunks.");
1587
1588#ifdef VBOX_WITH_STATISTICS
1589
1590# define PGM_REG_COUNTER(a, b, c) \
1591 rc = STAMR3RegisterF(pVM, a, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, c, b); \
1592 AssertRC(rc);
1593
1594# define PGM_REG_PROFILE(a, b, c) \
1595 rc = STAMR3RegisterF(pVM, a, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_TICKS_PER_CALL, c, b); \
1596 AssertRC(rc);
1597
1598 PGM_REG_COUNTER(&pPGM->StatR3DetectedConflicts, "/PGM/R3/DetectedConflicts", "The number of times PGMR3CheckMappingConflicts() detected a conflict.");
1599 PGM_REG_PROFILE(&pPGM->StatR3ResolveConflict, "/PGM/R3/ResolveConflict", "pgmR3SyncPTResolveConflict() profiling (includes the entire relocation).");
1600
1601 PGM_REG_COUNTER(&pPGM->StatRZChunkR3MapTlbHits, "/PGM/ChunkR3Map/TlbHitsRZ", "TLB hits.");
1602 PGM_REG_COUNTER(&pPGM->StatRZChunkR3MapTlbMisses, "/PGM/ChunkR3Map/TlbMissesRZ", "TLB misses.");
1603 PGM_REG_COUNTER(&pPGM->StatRZPageMapTlbHits, "/PGM/RZ/Page/MapTlbHits", "TLB hits.");
1604 PGM_REG_COUNTER(&pPGM->StatRZPageMapTlbMisses, "/PGM/RZ/Page/MapTlbMisses", "TLB misses.");
1605 PGM_REG_COUNTER(&pPGM->StatR3ChunkR3MapTlbHits, "/PGM/ChunkR3Map/TlbHitsR3", "TLB hits.");
1606 PGM_REG_COUNTER(&pPGM->StatR3ChunkR3MapTlbMisses, "/PGM/ChunkR3Map/TlbMissesR3", "TLB misses.");
1607 PGM_REG_COUNTER(&pPGM->StatR3PageMapTlbHits, "/PGM/R3/Page/MapTlbHits", "TLB hits.");
1608 PGM_REG_COUNTER(&pPGM->StatR3PageMapTlbMisses, "/PGM/R3/Page/MapTlbMisses", "TLB misses.");
1609
1610 PGM_REG_PROFILE(&pPGM->StatRZSyncCR3HandlerVirtualUpdate, "/PGM/RZ/SyncCR3/Handlers/VirtualUpdate", "Profiling of the virtual handler updates.");
1611 PGM_REG_PROFILE(&pPGM->StatRZSyncCR3HandlerVirtualReset, "/PGM/RZ/SyncCR3/Handlers/VirtualReset", "Profiling of the virtual handler resets.");
1612 PGM_REG_PROFILE(&pPGM->StatR3SyncCR3HandlerVirtualUpdate, "/PGM/R3/SyncCR3/Handlers/VirtualUpdate", "Profiling of the virtual handler updates.");
1613 PGM_REG_PROFILE(&pPGM->StatR3SyncCR3HandlerVirtualReset, "/PGM/R3/SyncCR3/Handlers/VirtualReset", "Profiling of the virtual handler resets.");
1614
1615 PGM_REG_COUNTER(&pPGM->StatRZPhysHandlerReset, "/PGM/RZ/PhysHandlerReset", "The number of times PGMHandlerPhysicalReset is called.");
1616 PGM_REG_COUNTER(&pPGM->StatR3PhysHandlerReset, "/PGM/R3/PhysHandlerReset", "The number of times PGMHandlerPhysicalReset is called.");
1617 PGM_REG_PROFILE(&pPGM->StatRZVirtHandlerSearchByPhys, "/PGM/RZ/VirtHandlerSearchByPhys", "Profiling of pgmHandlerVirtualFindByPhysAddr.");
1618 PGM_REG_PROFILE(&pPGM->StatR3VirtHandlerSearchByPhys, "/PGM/R3/VirtHandlerSearchByPhys", "Profiling of pgmHandlerVirtualFindByPhysAddr.");
1619
1620 PGM_REG_COUNTER(&pPGM->StatRZPageReplaceShared, "/PGM/RZ/Page/ReplacedShared", "Times a shared page was replaced.");
1621 PGM_REG_COUNTER(&pPGM->StatRZPageReplaceZero, "/PGM/RZ/Page/ReplacedZero", "Times the zero page was replaced.");
1622/// @todo PGM_REG_COUNTER(&pPGM->StatRZPageHandyAllocs, "/PGM/RZ/Page/HandyAllocs", "Number of times we've allocated more handy pages.");
1623 PGM_REG_COUNTER(&pPGM->StatR3PageReplaceShared, "/PGM/R3/Page/ReplacedShared", "Times a shared page was replaced.");
1624 PGM_REG_COUNTER(&pPGM->StatR3PageReplaceZero, "/PGM/R3/Page/ReplacedZero", "Times the zero page was replaced.");
1625/// @todo PGM_REG_COUNTER(&pPGM->StatR3PageHandyAllocs, "/PGM/R3/Page/HandyAllocs", "Number of times we've allocated more handy pages.");
1626
1627 /* GC only: */
1628 PGM_REG_COUNTER(&pPGM->StatRCDynMapCacheHits, "/PGM/RC/DynMapCache/Hits" , "Number of dynamic page mapping cache hits.");
1629 PGM_REG_COUNTER(&pPGM->StatRCDynMapCacheMisses, "/PGM/RC/DynMapCache/Misses" , "Number of dynamic page mapping cache misses.");
1630 PGM_REG_COUNTER(&pPGM->StatRCInvlPgConflict, "/PGM/RC/InvlPgConflict", "Number of times PGMInvalidatePage() detected a mapping conflict.");
1631 PGM_REG_COUNTER(&pPGM->StatRCInvlPgSyncMonCR3, "/PGM/RC/InvlPgSyncMonitorCR3", "Number of times PGMInvalidatePage() ran into PGM_SYNC_MONITOR_CR3.");
1632
1633# ifdef PGMPOOL_WITH_GCPHYS_TRACKING
1634 PGM_REG_COUNTER(&pPGM->StatTrackVirgin, "/PGM/Track/Virgin", "The number of first time shadowings");
1635 PGM_REG_COUNTER(&pPGM->StatTrackAliased, "/PGM/Track/Aliased", "The number of times switching to cRef2, i.e. the page is being shadowed by two PTs.");
1636 PGM_REG_COUNTER(&pPGM->StatTrackAliasedMany, "/PGM/Track/AliasedMany", "The number of times we're tracking using cRef2.");
1637 PGM_REG_COUNTER(&pPGM->StatTrackAliasedLots, "/PGM/Track/AliasedLots", "The number of times we're hitting pages which has overflowed cRef2");
1638 PGM_REG_COUNTER(&pPGM->StatTrackOverflows, "/PGM/Track/Overflows", "The number of times the extent list grows too long.");
1639 PGM_REG_PROFILE(&pPGM->StatTrackDeref, "/PGM/Track/Deref", "Profiling of SyncPageWorkerTrackDeref (expensive).");
1640# endif
1641
1642# undef PGM_REG_COUNTER
1643# undef PGM_REG_PROFILE
1644#endif
1645
1646 /*
1647 * Note! The layout below matches the member layout exactly!
1648 */
1649
1650 /*
1651 * Common - stats
1652 */
1653 for (unsigned i=0;i<pVM->cCPUs;i++)
1654 {
1655 PVMCPU pVCpu = &pVM->aCpus[i];
1656 PPGMCPU pPGM = &pVCpu->pgm.s;
1657
1658#define PGM_REG_COUNTER(a, b, c) \
1659 rc = STAMR3RegisterF(pVM, a, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, c, b, i); \
1660 AssertRC(rc);
1661#define PGM_REG_PROFILE(a, b, c) \
1662 rc = STAMR3RegisterF(pVM, a, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_TICKS_PER_CALL, c, b, i); \
1663 AssertRC(rc);
1664
1665 PGM_REG_COUNTER(&pPGM->cGuestModeChanges, "/PGM/CPU%d/cGuestModeChanges", "Number of guest mode changes.");
1666
1667#ifdef VBOX_WITH_STATISTICS
1668 for (unsigned j = 0; j < RT_ELEMENTS(pPGM->StatSyncPtPD); j++)
1669 STAMR3RegisterF(pVM, &pPGM->StatSyncPtPD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
1670 "The number of SyncPT per PD n.", "/PGM/CPU%d/PDSyncPT/%04X", i, j);
1671 for (unsigned j = 0; j < RT_ELEMENTS(pPGM->StatSyncPagePD); j++)
1672 STAMR3RegisterF(pVM, &pPGM->StatSyncPagePD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
1673 "The number of SyncPage per PD n.", "/PGM/CPU%d/PDSyncPage/%04X", i, j);
1674
1675 /* R0 only: */
1676 PGM_REG_COUNTER(&pPGM->StatR0DynMapMigrateInvlPg, "/PGM/CPU%d/R0/DynMapMigrateInvlPg", "invlpg count in PGMDynMapMigrateAutoSet.");
1677 PGM_REG_PROFILE(&pPGM->StatR0DynMapGCPageInl, "/PGM/CPU%d/R0/DynMapPageGCPageInl", "Calls to pgmR0DynMapGCPageInlined.");
1678 PGM_REG_COUNTER(&pPGM->StatR0DynMapGCPageInlHits, "/PGM/CPU%d/R0/DynMapPageGCPageInl/Hits", "Hash table lookup hits.");
1679 PGM_REG_COUNTER(&pPGM->StatR0DynMapGCPageInlMisses, "/PGM/CPU%d/R0/DynMapPageGCPageInl/Misses", "Misses that falls back to code common with PGMDynMapHCPage.");
1680 PGM_REG_COUNTER(&pPGM->StatR0DynMapGCPageInlRamHits, "/PGM/CPU%d/R0/DynMapPageGCPageInl/RamHits", "1st ram range hits.");
1681 PGM_REG_COUNTER(&pPGM->StatR0DynMapGCPageInlRamMisses, "/PGM/CPU%d/R0/DynMapPageGCPageInl/RamMisses", "1st ram range misses, takes slow path.");
1682 PGM_REG_PROFILE(&pPGM->StatR0DynMapHCPageInl, "/PGM/CPU%d/R0/DynMapPageHCPageInl", "Calls to pgmR0DynMapHCPageInlined.");
1683 PGM_REG_COUNTER(&pPGM->StatR0DynMapHCPageInlHits, "/PGM/CPU%d/R0/DynMapPageHCPageInl/Hits", "Hash table lookup hits.");
1684 PGM_REG_COUNTER(&pPGM->StatR0DynMapHCPageInlMisses, "/PGM/CPU%d/R0/DynMapPageHCPageInl/Misses", "Misses that falls back to code common with PGMDynMapHCPage.");
1685 PGM_REG_COUNTER(&pPGM->StatR0DynMapPage, "/PGM/CPU%d/R0/DynMapPage", "Calls to pgmR0DynMapPage");
1686 PGM_REG_COUNTER(&pPGM->StatR0DynMapSetOptimize, "/PGM/CPU%d/R0/DynMapPage/SetOptimize", "Calls to pgmDynMapOptimizeAutoSet.");
1687 PGM_REG_COUNTER(&pPGM->StatR0DynMapSetSearchFlushes, "/PGM/CPU%d/R0/DynMapPage/SetSearchFlushes","Set search restorting to subset flushes.");
1688 PGM_REG_COUNTER(&pPGM->StatR0DynMapSetSearchHits, "/PGM/CPU%d/R0/DynMapPage/SetSearchHits", "Set search hits.");
1689 PGM_REG_COUNTER(&pPGM->StatR0DynMapSetSearchMisses, "/PGM/CPU%d/R0/DynMapPage/SetSearchMisses", "Set search misses.");
1690 PGM_REG_PROFILE(&pPGM->StatR0DynMapHCPage, "/PGM/CPU%d/R0/DynMapPage/HCPage", "Calls to PGMDynMapHCPage (ring-0).");
1691 PGM_REG_COUNTER(&pPGM->StatR0DynMapPageHits0, "/PGM/CPU%d/R0/DynMapPage/Hits0", "Hits at iPage+0");
1692 PGM_REG_COUNTER(&pPGM->StatR0DynMapPageHits1, "/PGM/CPU%d/R0/DynMapPage/Hits1", "Hits at iPage+1");
1693 PGM_REG_COUNTER(&pPGM->StatR0DynMapPageHits2, "/PGM/CPU%d/R0/DynMapPage/Hits2", "Hits at iPage+2");
1694 PGM_REG_COUNTER(&pPGM->StatR0DynMapPageInvlPg, "/PGM/CPU%d/R0/DynMapPage/InvlPg", "invlpg count in pgmR0DynMapPageSlow.");
1695 PGM_REG_COUNTER(&pPGM->StatR0DynMapPageSlow, "/PGM/CPU%d/R0/DynMapPage/Slow", "Calls to pgmR0DynMapPageSlow - subtract this from pgmR0DynMapPage to get 1st level hits.");
1696 PGM_REG_COUNTER(&pPGM->StatR0DynMapPageSlowLoopHits, "/PGM/CPU%d/R0/DynMapPage/SlowLoopHits" , "Hits in the loop path.");
1697 PGM_REG_COUNTER(&pPGM->StatR0DynMapPageSlowLoopMisses, "/PGM/CPU%d/R0/DynMapPage/SlowLoopMisses", "Misses in the loop path. NonLoopMisses = Slow - SlowLoopHit - SlowLoopMisses");
1698 //PGM_REG_COUNTER(&pPGM->StatR0DynMapPageSlowLostHits, "/PGM/CPU%d/R0/DynMapPage/SlowLostHits", "Lost hits.");
1699 PGM_REG_COUNTER(&pPGM->StatR0DynMapSubsets, "/PGM/CPU%d/R0/Subsets", "Times PGMDynMapPushAutoSubset was called.");
1700 PGM_REG_COUNTER(&pPGM->StatR0DynMapPopFlushes, "/PGM/CPU%d/R0/SubsetPopFlushes", "Times PGMDynMapPopAutoSubset flushes the subset.");
1701 PGM_REG_COUNTER(&pPGM->aStatR0DynMapSetSize[0], "/PGM/CPU%d/R0/SetSize000..09", "00-09% filled");
1702 PGM_REG_COUNTER(&pPGM->aStatR0DynMapSetSize[1], "/PGM/CPU%d/R0/SetSize010..19", "10-19% filled");
1703 PGM_REG_COUNTER(&pPGM->aStatR0DynMapSetSize[2], "/PGM/CPU%d/R0/SetSize020..29", "20-29% filled");
1704 PGM_REG_COUNTER(&pPGM->aStatR0DynMapSetSize[3], "/PGM/CPU%d/R0/SetSize030..39", "30-39% filled");
1705 PGM_REG_COUNTER(&pPGM->aStatR0DynMapSetSize[4], "/PGM/CPU%d/R0/SetSize040..49", "40-49% filled");
1706 PGM_REG_COUNTER(&pPGM->aStatR0DynMapSetSize[5], "/PGM/CPU%d/R0/SetSize050..59", "50-59% filled");
1707 PGM_REG_COUNTER(&pPGM->aStatR0DynMapSetSize[6], "/PGM/CPU%d/R0/SetSize060..69", "60-69% filled");
1708 PGM_REG_COUNTER(&pPGM->aStatR0DynMapSetSize[7], "/PGM/CPU%d/R0/SetSize070..79", "70-79% filled");
1709 PGM_REG_COUNTER(&pPGM->aStatR0DynMapSetSize[8], "/PGM/CPU%d/R0/SetSize080..89", "80-89% filled");
1710 PGM_REG_COUNTER(&pPGM->aStatR0DynMapSetSize[9], "/PGM/CPU%d/R0/SetSize090..99", "90-99% filled");
1711 PGM_REG_COUNTER(&pPGM->aStatR0DynMapSetSize[10], "/PGM/CPU%d/R0/SetSize100", "100% filled");
1712
1713 /* RZ only: */
1714 PGM_REG_PROFILE(&pPGM->StatRZTrap0e, "/PGM/CPU%d/RZ/Trap0e", "Profiling of the PGMTrap0eHandler() body.");
1715 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTimeCheckPageFault, "/PGM/CPU%d/RZ/Trap0e/Time/CheckPageFault", "Profiling of checking for dirty/access emulation faults.");
1716 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTimeSyncPT, "/PGM/CPU%d/RZ/Trap0e/Time/SyncPT", "Profiling of lazy page table syncing.");
1717 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTimeMapping, "/PGM/CPU%d/RZ/Trap0e/Time/Mapping", "Profiling of checking virtual mappings.");
1718 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTimeOutOfSync, "/PGM/CPU%d/RZ/Trap0e/Time/OutOfSync", "Profiling of out of sync page handling.");
1719 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTimeHandlers, "/PGM/CPU%d/RZ/Trap0e/Time/Handlers", "Profiling of checking handlers.");
1720 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTime2CSAM, "/PGM/CPU%d/RZ/Trap0e/Time2/CSAM", "Profiling of the Trap0eHandler body when the cause is CSAM.");
1721 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTime2DirtyAndAccessed, "/PGM/CPU%d/RZ/Trap0e/Time2/DirtyAndAccessedBits", "Profiling of the Trap0eHandler body when the cause is dirty and/or accessed bit emulation.");
1722 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTime2GuestTrap, "/PGM/CPU%d/RZ/Trap0e/Time2/GuestTrap", "Profiling of the Trap0eHandler body when the cause is a guest trap.");
1723 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTime2HndPhys, "/PGM/CPU%d/RZ/Trap0e/Time2/HandlerPhysical", "Profiling of the Trap0eHandler body when the cause is a physical handler.");
1724 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTime2HndVirt, "/PGM/CPU%d/RZ/Trap0e/Time2/HandlerVirtual", "Profiling of the Trap0eHandler body when the cause is a virtual handler.");
1725 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTime2HndUnhandled, "/PGM/CPU%d/RZ/Trap0e/Time2/HandlerUnhandled", "Profiling of the Trap0eHandler body when the cause is access outside the monitored areas of a monitored page.");
1726 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTime2Misc, "/PGM/CPU%d/RZ/Trap0e/Time2/Misc", "Profiling of the Trap0eHandler body when the cause is not known.");
1727 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTime2OutOfSync, "/PGM/CPU%d/RZ/Trap0e/Time2/OutOfSync", "Profiling of the Trap0eHandler body when the cause is an out-of-sync page.");
1728 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTime2OutOfSyncHndPhys, "/PGM/CPU%d/RZ/Trap0e/Time2/OutOfSyncHndPhys", "Profiling of the Trap0eHandler body when the cause is an out-of-sync physical handler page.");
1729 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTime2OutOfSyncHndVirt, "/PGM/CPU%d/RZ/Trap0e/Time2/OutOfSyncHndVirt", "Profiling of the Trap0eHandler body when the cause is an out-of-sync virtual handler page.");
1730 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTime2OutOfSyncHndObs, "/PGM/CPU%d/RZ/Trap0e/Time2/OutOfSyncObsHnd", "Profiling of the Trap0eHandler body when the cause is an obsolete handler page.");
1731 PGM_REG_PROFILE(&pPGM->StatRZTrap0eTime2SyncPT, "/PGM/CPU%d/RZ/Trap0e/Time2/SyncPT", "Profiling of the Trap0eHandler body when the cause is lazy syncing of a PT.");
1732 PGM_REG_COUNTER(&pPGM->StatRZTrap0eConflicts, "/PGM/CPU%d/RZ/Trap0e/Conflicts", "The number of times #PF was caused by an undetected conflict.");
1733 PGM_REG_COUNTER(&pPGM->StatRZTrap0eHandlersMapping, "/PGM/CPU%d/RZ/Trap0e/Handlers/Mapping", "Number of traps due to access handlers in mappings.");
1734 PGM_REG_COUNTER(&pPGM->StatRZTrap0eHandlersOutOfSync, "/PGM/CPU%d/RZ/Trap0e/Handlers/OutOfSync", "Number of traps due to out-of-sync handled pages.");
1735 PGM_REG_COUNTER(&pPGM->StatRZTrap0eHandlersPhysical, "/PGM/CPU%d/RZ/Trap0e/Handlers/Physical", "Number of traps due to physical access handlers.");
1736 PGM_REG_COUNTER(&pPGM->StatRZTrap0eHandlersVirtual, "/PGM/CPU%d/RZ/Trap0e/Handlers/Virtual", "Number of traps due to virtual access handlers.");
1737 PGM_REG_COUNTER(&pPGM->StatRZTrap0eHandlersVirtualByPhys, "/PGM/CPU%d/RZ/Trap0e/Handlers/VirtualByPhys", "Number of traps due to virtual access handlers by physical address.");
1738 PGM_REG_COUNTER(&pPGM->StatRZTrap0eHandlersVirtualUnmarked,"/PGM/CPU%d/RZ/Trap0e/Handlers/VirtualUnmarked","Number of traps due to virtual access handlers by virtual address (without proper physical flags).");
1739 PGM_REG_COUNTER(&pPGM->StatRZTrap0eHandlersUnhandled, "/PGM/CPU%d/RZ/Trap0e/Handlers/Unhandled", "Number of traps due to access outside range of monitored page(s).");
1740 PGM_REG_COUNTER(&pPGM->StatRZTrap0eHandlersInvalid, "/PGM/CPU%d/RZ/Trap0e/Handlers/Invalid", "Number of traps due to access to invalid physical memory.");
1741 PGM_REG_COUNTER(&pPGM->StatRZTrap0eUSNotPresentRead, "/PGM/CPU%d/RZ/Trap0e/Err/User/NPRead", "Number of user mode not present read page faults.");
1742 PGM_REG_COUNTER(&pPGM->StatRZTrap0eUSNotPresentWrite, "/PGM/CPU%d/RZ/Trap0e/Err/User/NPWrite", "Number of user mode not present write page faults.");
1743 PGM_REG_COUNTER(&pPGM->StatRZTrap0eUSWrite, "/PGM/CPU%d/RZ/Trap0e/Err/User/Write", "Number of user mode write page faults.");
1744 PGM_REG_COUNTER(&pPGM->StatRZTrap0eUSReserved, "/PGM/CPU%d/RZ/Trap0e/Err/User/Reserved", "Number of user mode reserved bit page faults.");
1745 PGM_REG_COUNTER(&pPGM->StatRZTrap0eUSNXE, "/PGM/CPU%d/RZ/Trap0e/Err/User/NXE", "Number of user mode NXE page faults.");
1746 PGM_REG_COUNTER(&pPGM->StatRZTrap0eUSRead, "/PGM/CPU%d/RZ/Trap0e/Err/User/Read", "Number of user mode read page faults.");
1747 PGM_REG_COUNTER(&pPGM->StatRZTrap0eSVNotPresentRead, "/PGM/CPU%d/RZ/Trap0e/Err/Supervisor/NPRead", "Number of supervisor mode not present read page faults.");
1748 PGM_REG_COUNTER(&pPGM->StatRZTrap0eSVNotPresentWrite, "/PGM/CPU%d/RZ/Trap0e/Err/Supervisor/NPWrite", "Number of supervisor mode not present write page faults.");
1749 PGM_REG_COUNTER(&pPGM->StatRZTrap0eSVWrite, "/PGM/CPU%d/RZ/Trap0e/Err/Supervisor/Write", "Number of supervisor mode write page faults.");
1750 PGM_REG_COUNTER(&pPGM->StatRZTrap0eSVReserved, "/PGM/CPU%d/RZ/Trap0e/Err/Supervisor/Reserved", "Number of supervisor mode reserved bit page faults.");
1751 PGM_REG_COUNTER(&pPGM->StatRZTrap0eSNXE, "/PGM/CPU%d/RZ/Trap0e/Err/Supervisor/NXE", "Number of supervisor mode NXE page faults.");
1752 PGM_REG_COUNTER(&pPGM->StatRZTrap0eGuestPF, "/PGM/CPU%d/RZ/Trap0e/GuestPF", "Number of real guest page faults.");
1753 PGM_REG_COUNTER(&pPGM->StatRZTrap0eGuestPFUnh, "/PGM/CPU%d/RZ/Trap0e/GuestPF/Unhandled", "Number of real guest page faults from the 'unhandled' case.");
1754 PGM_REG_COUNTER(&pPGM->StatRZTrap0eGuestPFMapping, "/PGM/CPU%d/RZ/Trap0e/GuestPF/InMapping", "Number of real guest page faults in a mapping.");
1755 PGM_REG_COUNTER(&pPGM->StatRZTrap0eWPEmulInRZ, "/PGM/CPU%d/RZ/Trap0e/WP/InRZ", "Number of guest page faults due to X86_CR0_WP emulation.");
1756 PGM_REG_COUNTER(&pPGM->StatRZTrap0eWPEmulToR3, "/PGM/CPU%d/RZ/Trap0e/WP/ToR3", "Number of guest page faults due to X86_CR0_WP emulation (forward to R3 for emulation).");
1757 for (unsigned j = 0; j < RT_ELEMENTS(pPGM->StatRZTrap0ePD); j++)
1758 STAMR3RegisterF(pVM, &pPGM->StatRZTrap0ePD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
1759 "The number of traps in page directory n.", "/PGM/CPU%d/RZ/Trap0e/PD/%04X", i, j);
1760
1761 PGM_REG_COUNTER(&pPGM->StatRZGuestCR3WriteHandled, "/PGM/CPU%d/RZ/CR3WriteHandled", "The number of times the Guest CR3 change was successfully handled.");
1762 PGM_REG_COUNTER(&pPGM->StatRZGuestCR3WriteUnhandled, "/PGM/CPU%d/RZ/CR3WriteUnhandled", "The number of times the Guest CR3 change was passed back to the recompiler.");
1763 PGM_REG_COUNTER(&pPGM->StatRZGuestCR3WriteConflict, "/PGM/CPU%d/RZ/CR3WriteConflict", "The number of times the Guest CR3 monitoring detected a conflict.");
1764 PGM_REG_COUNTER(&pPGM->StatRZGuestROMWriteHandled, "/PGM/CPU%d/RZ/ROMWriteHandled", "The number of times the Guest ROM change was successfully handled.");
1765 PGM_REG_COUNTER(&pPGM->StatRZGuestROMWriteUnhandled, "/PGM/CPU%d/RZ/ROMWriteUnhandled", "The number of times the Guest ROM change was passed back to the recompiler.");
1766
1767 /* HC only: */
1768
1769 /* RZ & R3: */
1770 PGM_REG_PROFILE(&pPGM->StatRZSyncCR3, "/PGM/CPU%d/RZ/SyncCR3", "Profiling of the PGMSyncCR3() body.");
1771 PGM_REG_PROFILE(&pPGM->StatRZSyncCR3Handlers, "/PGM/CPU%d/RZ/SyncCR3/Handlers", "Profiling of the PGMSyncCR3() update handler section.");
1772 PGM_REG_COUNTER(&pPGM->StatRZSyncCR3Global, "/PGM/CPU%d/RZ/SyncCR3/Global", "The number of global CR3 syncs.");
1773 PGM_REG_COUNTER(&pPGM->StatRZSyncCR3NotGlobal, "/PGM/CPU%d/RZ/SyncCR3/NotGlobal", "The number of non-global CR3 syncs.");
1774 PGM_REG_COUNTER(&pPGM->StatRZSyncCR3DstCacheHit, "/PGM/CPU%d/RZ/SyncCR3/DstChacheHit", "The number of times we got some kind of a cache hit.");
1775 PGM_REG_COUNTER(&pPGM->StatRZSyncCR3DstFreed, "/PGM/CPU%d/RZ/SyncCR3/DstFreed", "The number of times we've had to free a shadow entry.");
1776 PGM_REG_COUNTER(&pPGM->StatRZSyncCR3DstFreedSrcNP, "/PGM/CPU%d/RZ/SyncCR3/DstFreedSrcNP", "The number of times we've had to free a shadow entry for which the source entry was not present.");
1777 PGM_REG_COUNTER(&pPGM->StatRZSyncCR3DstNotPresent, "/PGM/CPU%d/RZ/SyncCR3/DstNotPresent", "The number of times we've encountered a not present shadow entry for a present guest entry.");
1778 PGM_REG_COUNTER(&pPGM->StatRZSyncCR3DstSkippedGlobalPD, "/PGM/CPU%d/RZ/SyncCR3/DstSkippedGlobalPD", "The number of times a global page directory wasn't flushed.");
1779 PGM_REG_COUNTER(&pPGM->StatRZSyncCR3DstSkippedGlobalPT, "/PGM/CPU%d/RZ/SyncCR3/DstSkippedGlobalPT", "The number of times a page table with only global entries wasn't flushed.");
1780 PGM_REG_PROFILE(&pPGM->StatRZSyncPT, "/PGM/CPU%d/RZ/SyncPT", "Profiling of the pfnSyncPT() body.");
1781 PGM_REG_COUNTER(&pPGM->StatRZSyncPTFailed, "/PGM/CPU%d/RZ/SyncPT/Failed", "The number of times pfnSyncPT() failed.");
1782 PGM_REG_COUNTER(&pPGM->StatRZSyncPT4K, "/PGM/CPU%d/RZ/SyncPT/4K", "Nr of 4K PT syncs");
1783 PGM_REG_COUNTER(&pPGM->StatRZSyncPT4M, "/PGM/CPU%d/RZ/SyncPT/4M", "Nr of 4M PT syncs");
1784 PGM_REG_COUNTER(&pPGM->StatRZSyncPagePDNAs, "/PGM/CPU%d/RZ/SyncPagePDNAs", "The number of time we've marked a PD not present from SyncPage to virtualize the accessed bit.");
1785 PGM_REG_COUNTER(&pPGM->StatRZSyncPagePDOutOfSync, "/PGM/CPU%d/RZ/SyncPagePDOutOfSync", "The number of time we've encountered an out-of-sync PD in SyncPage.");
1786 PGM_REG_COUNTER(&pPGM->StatRZAccessedPage, "/PGM/CPU%d/RZ/AccessedPage", "The number of pages marked not present for accessed bit emulation.");
1787 PGM_REG_PROFILE(&pPGM->StatRZDirtyBitTracking, "/PGM/CPU%d/RZ/DirtyPage", "Profiling the dirty bit tracking in CheckPageFault().");
1788 PGM_REG_COUNTER(&pPGM->StatRZDirtyPage, "/PGM/CPU%d/RZ/DirtyPage/Mark", "The number of pages marked read-only for dirty bit tracking.");
1789 PGM_REG_COUNTER(&pPGM->StatRZDirtyPageBig, "/PGM/CPU%d/RZ/DirtyPage/MarkBig", "The number of 4MB pages marked read-only for dirty bit tracking.");
1790 PGM_REG_COUNTER(&pPGM->StatRZDirtyPageSkipped, "/PGM/CPU%d/RZ/DirtyPage/Skipped", "The number of pages already dirty or readonly.");
1791 PGM_REG_COUNTER(&pPGM->StatRZDirtyPageTrap, "/PGM/CPU%d/RZ/DirtyPage/Trap", "The number of traps generated for dirty bit tracking.");
1792 PGM_REG_COUNTER(&pPGM->StatRZDirtiedPage, "/PGM/CPU%d/RZ/DirtyPage/SetDirty", "The number of pages marked dirty because of write accesses.");
1793 PGM_REG_COUNTER(&pPGM->StatRZDirtyTrackRealPF, "/PGM/CPU%d/RZ/DirtyPage/RealPF", "The number of real pages faults during dirty bit tracking.");
1794 PGM_REG_COUNTER(&pPGM->StatRZPageAlreadyDirty, "/PGM/CPU%d/RZ/DirtyPage/AlreadySet", "The number of pages already marked dirty because of write accesses.");
1795 PGM_REG_PROFILE(&pPGM->StatRZInvalidatePage, "/PGM/CPU%d/RZ/InvalidatePage", "PGMInvalidatePage() profiling.");
1796 PGM_REG_COUNTER(&pPGM->StatRZInvalidatePage4KBPages, "/PGM/CPU%d/RZ/InvalidatePage/4KBPages", "The number of times PGMInvalidatePage() was called for a 4KB page.");
1797 PGM_REG_COUNTER(&pPGM->StatRZInvalidatePage4MBPages, "/PGM/CPU%d/RZ/InvalidatePage/4MBPages", "The number of times PGMInvalidatePage() was called for a 4MB page.");
1798 PGM_REG_COUNTER(&pPGM->StatRZInvalidatePage4MBPagesSkip, "/PGM/CPU%d/RZ/InvalidatePage/4MBPagesSkip","The number of times PGMInvalidatePage() skipped a 4MB page.");
1799 PGM_REG_COUNTER(&pPGM->StatRZInvalidatePagePDMappings, "/PGM/CPU%d/RZ/InvalidatePage/PDMappings", "The number of times PGMInvalidatePage() was called for a page directory containing mappings (no conflict).");
1800 PGM_REG_COUNTER(&pPGM->StatRZInvalidatePagePDNAs, "/PGM/CPU%d/RZ/InvalidatePage/PDNAs", "The number of times PGMInvalidatePage() was called for a not accessed page directory.");
1801 PGM_REG_COUNTER(&pPGM->StatRZInvalidatePagePDNPs, "/PGM/CPU%d/RZ/InvalidatePage/PDNPs", "The number of times PGMInvalidatePage() was called for a not present page directory.");
1802 PGM_REG_COUNTER(&pPGM->StatRZInvalidatePagePDOutOfSync, "/PGM/CPU%d/RZ/InvalidatePage/PDOutOfSync", "The number of times PGMInvalidatePage() was called for an out of sync page directory.");
1803 PGM_REG_COUNTER(&pPGM->StatRZInvalidatePageSkipped, "/PGM/CPU%d/RZ/InvalidatePage/Skipped", "The number of times PGMInvalidatePage() was skipped due to not present shw or pending pending SyncCR3.");
1804 PGM_REG_COUNTER(&pPGM->StatRZPageOutOfSyncSupervisor, "/PGM/CPU%d/RZ/OutOfSync/SuperVisor", "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
1805 PGM_REG_COUNTER(&pPGM->StatRZPageOutOfSyncUser, "/PGM/CPU%d/RZ/OutOfSync/User", "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
1806 PGM_REG_PROFILE(&pPGM->StatRZPrefetch, "/PGM/CPU%d/RZ/Prefetch", "PGMPrefetchPage profiling.");
1807 PGM_REG_PROFILE(&pPGM->StatRZFlushTLB, "/PGM/CPU%d/RZ/FlushTLB", "Profiling of the PGMFlushTLB() body.");
1808 PGM_REG_COUNTER(&pPGM->StatRZFlushTLBNewCR3, "/PGM/CPU%d/RZ/FlushTLB/NewCR3", "The number of times PGMFlushTLB was called with a new CR3, non-global. (switch)");
1809 PGM_REG_COUNTER(&pPGM->StatRZFlushTLBNewCR3Global, "/PGM/CPU%d/RZ/FlushTLB/NewCR3Global", "The number of times PGMFlushTLB was called with a new CR3, global. (switch)");
1810 PGM_REG_COUNTER(&pPGM->StatRZFlushTLBSameCR3, "/PGM/CPU%d/RZ/FlushTLB/SameCR3", "The number of times PGMFlushTLB was called with the same CR3, non-global. (flush)");
1811 PGM_REG_COUNTER(&pPGM->StatRZFlushTLBSameCR3Global, "/PGM/CPU%d/RZ/FlushTLB/SameCR3Global", "The number of times PGMFlushTLB was called with the same CR3, global. (flush)");
1812 PGM_REG_PROFILE(&pPGM->StatRZGstModifyPage, "/PGM/CPU%d/RZ/GstModifyPage", "Profiling of the PGMGstModifyPage() body.");
1813
1814 PGM_REG_PROFILE(&pPGM->StatR3SyncCR3, "/PGM/CPU%d/R3/SyncCR3", "Profiling of the PGMSyncCR3() body.");
1815 PGM_REG_PROFILE(&pPGM->StatR3SyncCR3Handlers, "/PGM/CPU%d/R3/SyncCR3/Handlers", "Profiling of the PGMSyncCR3() update handler section.");
1816 PGM_REG_COUNTER(&pPGM->StatR3SyncCR3Global, "/PGM/CPU%d/R3/SyncCR3/Global", "The number of global CR3 syncs.");
1817 PGM_REG_COUNTER(&pPGM->StatR3SyncCR3NotGlobal, "/PGM/CPU%d/R3/SyncCR3/NotGlobal", "The number of non-global CR3 syncs.");
1818 PGM_REG_COUNTER(&pPGM->StatR3SyncCR3DstCacheHit, "/PGM/CPU%d/R3/SyncCR3/DstChacheHit", "The number of times we got some kind of a cache hit.");
1819 PGM_REG_COUNTER(&pPGM->StatR3SyncCR3DstFreed, "/PGM/CPU%d/R3/SyncCR3/DstFreed", "The number of times we've had to free a shadow entry.");
1820 PGM_REG_COUNTER(&pPGM->StatR3SyncCR3DstFreedSrcNP, "/PGM/CPU%d/R3/SyncCR3/DstFreedSrcNP", "The number of times we've had to free a shadow entry for which the source entry was not present.");
1821 PGM_REG_COUNTER(&pPGM->StatR3SyncCR3DstNotPresent, "/PGM/CPU%d/R3/SyncCR3/DstNotPresent", "The number of times we've encountered a not present shadow entry for a present guest entry.");
1822 PGM_REG_COUNTER(&pPGM->StatR3SyncCR3DstSkippedGlobalPD, "/PGM/CPU%d/R3/SyncCR3/DstSkippedGlobalPD", "The number of times a global page directory wasn't flushed.");
1823 PGM_REG_COUNTER(&pPGM->StatR3SyncCR3DstSkippedGlobalPT, "/PGM/CPU%d/R3/SyncCR3/DstSkippedGlobalPT", "The number of times a page table with only global entries wasn't flushed.");
1824 PGM_REG_PROFILE(&pPGM->StatR3SyncPT, "/PGM/CPU%d/R3/SyncPT", "Profiling of the pfnSyncPT() body.");
1825 PGM_REG_COUNTER(&pPGM->StatR3SyncPTFailed, "/PGM/CPU%d/R3/SyncPT/Failed", "The number of times pfnSyncPT() failed.");
1826 PGM_REG_COUNTER(&pPGM->StatR3SyncPT4K, "/PGM/CPU%d/R3/SyncPT/4K", "Nr of 4K PT syncs");
1827 PGM_REG_COUNTER(&pPGM->StatR3SyncPT4M, "/PGM/CPU%d/R3/SyncPT/4M", "Nr of 4M PT syncs");
1828 PGM_REG_COUNTER(&pPGM->StatR3SyncPagePDNAs, "/PGM/CPU%d/R3/SyncPagePDNAs", "The number of time we've marked a PD not present from SyncPage to virtualize the accessed bit.");
1829 PGM_REG_COUNTER(&pPGM->StatR3SyncPagePDOutOfSync, "/PGM/CPU%d/R3/SyncPagePDOutOfSync", "The number of time we've encountered an out-of-sync PD in SyncPage.");
1830 PGM_REG_COUNTER(&pPGM->StatR3AccessedPage, "/PGM/CPU%d/R3/AccessedPage", "The number of pages marked not present for accessed bit emulation.");
1831 PGM_REG_PROFILE(&pPGM->StatR3DirtyBitTracking, "/PGM/CPU%d/R3/DirtyPage", "Profiling the dirty bit tracking in CheckPageFault().");
1832 PGM_REG_COUNTER(&pPGM->StatR3DirtyPage, "/PGM/CPU%d/R3/DirtyPage/Mark", "The number of pages marked read-only for dirty bit tracking.");
1833 PGM_REG_COUNTER(&pPGM->StatR3DirtyPageBig, "/PGM/CPU%d/R3/DirtyPage/MarkBig", "The number of 4MB pages marked read-only for dirty bit tracking.");
1834 PGM_REG_COUNTER(&pPGM->StatR3DirtyPageSkipped, "/PGM/CPU%d/R3/DirtyPage/Skipped", "The number of pages already dirty or readonly.");
1835 PGM_REG_COUNTER(&pPGM->StatR3DirtyPageTrap, "/PGM/CPU%d/R3/DirtyPage/Trap", "The number of traps generated for dirty bit tracking.");
1836 PGM_REG_COUNTER(&pPGM->StatR3DirtiedPage, "/PGM/CPU%d/R3/DirtyPage/SetDirty", "The number of pages marked dirty because of write accesses.");
1837 PGM_REG_COUNTER(&pPGM->StatR3DirtyTrackRealPF, "/PGM/CPU%d/R3/DirtyPage/RealPF", "The number of real pages faults during dirty bit tracking.");
1838 PGM_REG_COUNTER(&pPGM->StatR3PageAlreadyDirty, "/PGM/CPU%d/R3/DirtyPage/AlreadySet", "The number of pages already marked dirty because of write accesses.");
1839 PGM_REG_PROFILE(&pPGM->StatR3InvalidatePage, "/PGM/CPU%d/R3/InvalidatePage", "PGMInvalidatePage() profiling.");
1840 PGM_REG_COUNTER(&pPGM->StatR3InvalidatePage4KBPages, "/PGM/CPU%d/R3/InvalidatePage/4KBPages", "The number of times PGMInvalidatePage() was called for a 4KB page.");
1841 PGM_REG_COUNTER(&pPGM->StatR3InvalidatePage4MBPages, "/PGM/CPU%d/R3/InvalidatePage/4MBPages", "The number of times PGMInvalidatePage() was called for a 4MB page.");
1842 PGM_REG_COUNTER(&pPGM->StatR3InvalidatePage4MBPagesSkip, "/PGM/CPU%d/R3/InvalidatePage/4MBPagesSkip","The number of times PGMInvalidatePage() skipped a 4MB page.");
1843 PGM_REG_COUNTER(&pPGM->StatR3InvalidatePagePDMappings, "/PGM/CPU%d/R3/InvalidatePage/PDMappings", "The number of times PGMInvalidatePage() was called for a page directory containing mappings (no conflict).");
1844 PGM_REG_COUNTER(&pPGM->StatR3InvalidatePagePDNAs, "/PGM/CPU%d/R3/InvalidatePage/PDNAs", "The number of times PGMInvalidatePage() was called for a not accessed page directory.");
1845 PGM_REG_COUNTER(&pPGM->StatR3InvalidatePagePDNPs, "/PGM/CPU%d/R3/InvalidatePage/PDNPs", "The number of times PGMInvalidatePage() was called for a not present page directory.");
1846 PGM_REG_COUNTER(&pPGM->StatR3InvalidatePagePDOutOfSync, "/PGM/CPU%d/R3/InvalidatePage/PDOutOfSync", "The number of times PGMInvalidatePage() was called for an out of sync page directory.");
1847 PGM_REG_COUNTER(&pPGM->StatR3InvalidatePageSkipped, "/PGM/CPU%d/R3/InvalidatePage/Skipped", "The number of times PGMInvalidatePage() was skipped due to not present shw or pending pending SyncCR3.");
1848 PGM_REG_COUNTER(&pPGM->StatR3PageOutOfSyncSupervisor, "/PGM/CPU%d/R3/OutOfSync/SuperVisor", "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
1849 PGM_REG_COUNTER(&pPGM->StatR3PageOutOfSyncUser, "/PGM/CPU%d/R3/OutOfSync/User", "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
1850 PGM_REG_PROFILE(&pPGM->StatR3Prefetch, "/PGM/CPU%d/R3/Prefetch", "PGMPrefetchPage profiling.");
1851 PGM_REG_PROFILE(&pPGM->StatR3FlushTLB, "/PGM/CPU%d/R3/FlushTLB", "Profiling of the PGMFlushTLB() body.");
1852 PGM_REG_COUNTER(&pPGM->StatR3FlushTLBNewCR3, "/PGM/CPU%d/R3/FlushTLB/NewCR3", "The number of times PGMFlushTLB was called with a new CR3, non-global. (switch)");
1853 PGM_REG_COUNTER(&pPGM->StatR3FlushTLBNewCR3Global, "/PGM/CPU%d/R3/FlushTLB/NewCR3Global", "The number of times PGMFlushTLB was called with a new CR3, global. (switch)");
1854 PGM_REG_COUNTER(&pPGM->StatR3FlushTLBSameCR3, "/PGM/CPU%d/R3/FlushTLB/SameCR3", "The number of times PGMFlushTLB was called with the same CR3, non-global. (flush)");
1855 PGM_REG_COUNTER(&pPGM->StatR3FlushTLBSameCR3Global, "/PGM/CPU%d/R3/FlushTLB/SameCR3Global", "The number of times PGMFlushTLB was called with the same CR3, global. (flush)");
1856 PGM_REG_PROFILE(&pPGM->StatR3GstModifyPage, "/PGM/CPU%d/R3/GstModifyPage", "Profiling of the PGMGstModifyPage() body.");
1857#endif /* VBOX_WITH_STATISTICS */
1858
1859#undef PGM_REG_PROFILE
1860#undef PGM_REG_COUNTER
1861
1862 }
1863}
1864
1865
1866/**
1867 * Init the PGM bits that rely on VMMR0 and MM to be fully initialized.
1868 *
1869 * The dynamic mapping area will also be allocated and initialized at this
1870 * time. We could allocate it during PGMR3Init of course, but the mapping
1871 * wouldn't be allocated at that time preventing us from setting up the
1872 * page table entries with the dummy page.
1873 *
1874 * @returns VBox status code.
1875 * @param pVM VM handle.
1876 */
1877VMMR3DECL(int) PGMR3InitDynMap(PVM pVM)
1878{
1879 RTGCPTR GCPtr;
1880 int rc;
1881
1882 /*
1883 * Reserve space for the dynamic mappings.
1884 */
1885 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping", &GCPtr);
1886 if (RT_SUCCESS(rc))
1887 pVM->pgm.s.pbDynPageMapBaseGC = GCPtr;
1888
1889 if ( RT_SUCCESS(rc)
1890 && (pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_PAE_SHIFT) != ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_PAE_SHIFT))
1891 {
1892 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping not crossing", &GCPtr);
1893 if (RT_SUCCESS(rc))
1894 pVM->pgm.s.pbDynPageMapBaseGC = GCPtr;
1895 }
1896 if (RT_SUCCESS(rc))
1897 {
1898 AssertRelease((pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_PAE_SHIFT) == ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_PAE_SHIFT));
1899 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
1900 }
1901 return rc;
1902}
1903
1904
1905/**
1906 * Ring-3 init finalizing.
1907 *
1908 * @returns VBox status code.
1909 * @param pVM The VM handle.
1910 */
1911VMMR3DECL(int) PGMR3InitFinalize(PVM pVM)
1912{
1913 int rc;
1914
1915 /*
1916 * Reserve space for the dynamic mappings.
1917 * Initialize the dynamic mapping pages with dummy pages to simply the cache.
1918 */
1919 /* get the pointer to the page table entries. */
1920 PPGMMAPPING pMapping = pgmGetMapping(pVM, pVM->pgm.s.pbDynPageMapBaseGC);
1921 AssertRelease(pMapping);
1922 const uintptr_t off = pVM->pgm.s.pbDynPageMapBaseGC - pMapping->GCPtr;
1923 const unsigned iPT = off >> X86_PD_SHIFT;
1924 const unsigned iPG = (off >> X86_PT_SHIFT) & X86_PT_MASK;
1925 pVM->pgm.s.paDynPageMap32BitPTEsGC = pMapping->aPTs[iPT].pPTRC + iPG * sizeof(pMapping->aPTs[0].pPTR3->a[0]);
1926 pVM->pgm.s.paDynPageMapPaePTEsGC = pMapping->aPTs[iPT].paPaePTsRC + iPG * sizeof(pMapping->aPTs[0].paPaePTsR3->a[0]);
1927
1928 /* init cache */
1929 RTHCPHYS HCPhysDummy = MMR3PageDummyHCPhys(pVM);
1930 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.aHCPhysDynPageMapCache); i++)
1931 pVM->pgm.s.aHCPhysDynPageMapCache[i] = HCPhysDummy;
1932
1933 for (unsigned i = 0; i < MM_HYPER_DYNAMIC_SIZE; i += PAGE_SIZE)
1934 {
1935 rc = PGMMap(pVM, pVM->pgm.s.pbDynPageMapBaseGC + i, HCPhysDummy, PAGE_SIZE, 0);
1936 AssertRCReturn(rc, rc);
1937 }
1938
1939 /*
1940 * Note that AMD uses all the 8 reserved bits for the address (so 40 bits in total);
1941 * Intel only goes up to 36 bits, so we stick to 36 as well.
1942 */
1943 /** @todo How to test for the 40 bits support? Long mode seems to be the test criterium. */
1944 uint32_t u32Dummy, u32Features;
1945 CPUMGetGuestCpuId(VMMGetCpu(pVM), 1, &u32Dummy, &u32Dummy, &u32Dummy, &u32Features);
1946
1947 if (u32Features & X86_CPUID_FEATURE_EDX_PSE36)
1948 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(36) - 1;
1949 else
1950 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(32) - 1;
1951
1952 /*
1953 * Allocate memory if we're supposed to do that.
1954 */
1955 if (pVM->pgm.s.fRamPreAlloc)
1956 rc = pgmR3PhysRamPreAllocate(pVM);
1957
1958 LogRel(("PGMR3InitFinalize: 4 MB PSE mask %RGp\n", pVM->pgm.s.GCPhys4MBPSEMask));
1959 return rc;
1960}
1961
1962
1963/**
1964 * Applies relocations to data and code managed by this component.
1965 *
1966 * This function will be called at init and whenever the VMM need to relocate it
1967 * self inside the GC.
1968 *
1969 * @param pVM The VM.
1970 * @param offDelta Relocation delta relative to old location.
1971 */
1972VMMR3DECL(void) PGMR3Relocate(PVM pVM, RTGCINTPTR offDelta)
1973{
1974 LogFlow(("PGMR3Relocate %RGv to %RGv\n", pVM->pgm.s.GCPtrCR3Mapping, pVM->pgm.s.GCPtrCR3Mapping + offDelta));
1975
1976 /*
1977 * Paging stuff.
1978 */
1979 pVM->pgm.s.GCPtrCR3Mapping += offDelta;
1980
1981 pgmR3ModeDataInit(pVM, true /* resolve GC/R0 symbols */);
1982
1983 /* Shadow, guest and both mode switch & relocation for each VCPU. */
1984 for (unsigned i=0;i<pVM->cCPUs;i++)
1985 {
1986 PVMCPU pVCpu = &pVM->aCpus[i];
1987
1988 pgmR3ModeDataSwitch(pVM, pVCpu, pVCpu->pgm.s.enmShadowMode, pVCpu->pgm.s.enmGuestMode);
1989
1990 PGM_SHW_PFN(Relocate, pVCpu)(pVCpu, offDelta);
1991 PGM_GST_PFN(Relocate, pVCpu)(pVCpu, offDelta);
1992 PGM_BTH_PFN(Relocate, pVCpu)(pVCpu, offDelta);
1993 }
1994
1995 /*
1996 * Trees.
1997 */
1998 pVM->pgm.s.pTreesRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pTreesR3);
1999
2000 /*
2001 * Ram ranges.
2002 */
2003 if (pVM->pgm.s.pRamRangesR3)
2004 {
2005 /* Update the pSelfRC pointers and relink them. */
2006 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesR3; pCur; pCur = pCur->pNextR3)
2007 if (!(pCur->fFlags & PGM_RAM_RANGE_FLAGS_FLOATING))
2008 pCur->pSelfRC = MMHyperCCToRC(pVM, pCur);
2009 pgmR3PhysRelinkRamRanges(pVM);
2010 }
2011
2012 /*
2013 * Update the two page directories with all page table mappings.
2014 * (One or more of them have changed, that's why we're here.)
2015 */
2016 pVM->pgm.s.pMappingsRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pMappingsR3);
2017 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur->pNextR3; pCur = pCur->pNextR3)
2018 pCur->pNextRC = MMHyperR3ToRC(pVM, pCur->pNextR3);
2019
2020 /* Relocate GC addresses of Page Tables. */
2021 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur; pCur = pCur->pNextR3)
2022 {
2023 for (RTHCUINT i = 0; i < pCur->cPTs; i++)
2024 {
2025 pCur->aPTs[i].pPTRC = MMHyperR3ToRC(pVM, pCur->aPTs[i].pPTR3);
2026 pCur->aPTs[i].paPaePTsRC = MMHyperR3ToRC(pVM, pCur->aPTs[i].paPaePTsR3);
2027 }
2028 }
2029
2030 /*
2031 * Dynamic page mapping area.
2032 */
2033 pVM->pgm.s.paDynPageMap32BitPTEsGC += offDelta;
2034 pVM->pgm.s.paDynPageMapPaePTEsGC += offDelta;
2035 pVM->pgm.s.pbDynPageMapBaseGC += offDelta;
2036
2037 /*
2038 * The Zero page.
2039 */
2040 pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
2041#ifdef VBOX_WITH_2X_4GB_ADDR_SPACE
2042 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTR0PTR || !VMMIsHwVirtExtForced(pVM));
2043#else
2044 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTR0PTR);
2045#endif
2046
2047 /*
2048 * Physical and virtual handlers.
2049 */
2050 RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, true, pgmR3RelocatePhysHandler, &offDelta);
2051 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesR3->VirtHandlers, true, pgmR3RelocateVirtHandler, &offDelta);
2052 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesR3->HyperVirtHandlers, true, pgmR3RelocateHyperVirtHandler, &offDelta);
2053
2054 /*
2055 * The page pool.
2056 */
2057 pgmR3PoolRelocate(pVM);
2058}
2059
2060
2061/**
2062 * Callback function for relocating a physical access handler.
2063 *
2064 * @returns 0 (continue enum)
2065 * @param pNode Pointer to a PGMPHYSHANDLER node.
2066 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
2067 * not certain the delta will fit in a void pointer for all possible configs.
2068 */
2069static DECLCALLBACK(int) pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser)
2070{
2071 PPGMPHYSHANDLER pHandler = (PPGMPHYSHANDLER)pNode;
2072 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
2073 if (pHandler->pfnHandlerRC)
2074 pHandler->pfnHandlerRC += offDelta;
2075 if (pHandler->pvUserRC >= 0x10000)
2076 pHandler->pvUserRC += offDelta;
2077 return 0;
2078}
2079
2080
2081/**
2082 * Callback function for relocating a virtual access handler.
2083 *
2084 * @returns 0 (continue enum)
2085 * @param pNode Pointer to a PGMVIRTHANDLER node.
2086 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
2087 * not certain the delta will fit in a void pointer for all possible configs.
2088 */
2089static DECLCALLBACK(int) pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
2090{
2091 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
2092 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
2093 Assert( pHandler->enmType == PGMVIRTHANDLERTYPE_ALL
2094 || pHandler->enmType == PGMVIRTHANDLERTYPE_WRITE);
2095 Assert(pHandler->pfnHandlerRC);
2096 pHandler->pfnHandlerRC += offDelta;
2097 return 0;
2098}
2099
2100
2101/**
2102 * Callback function for relocating a virtual access handler for the hypervisor mapping.
2103 *
2104 * @returns 0 (continue enum)
2105 * @param pNode Pointer to a PGMVIRTHANDLER node.
2106 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
2107 * not certain the delta will fit in a void pointer for all possible configs.
2108 */
2109static DECLCALLBACK(int) pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
2110{
2111 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
2112 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
2113 Assert(pHandler->enmType == PGMVIRTHANDLERTYPE_HYPERVISOR);
2114 Assert(pHandler->pfnHandlerRC);
2115 pHandler->pfnHandlerRC += offDelta;
2116 return 0;
2117}
2118
2119
2120/**
2121 * The VM is being reset.
2122 *
2123 * For the PGM component this means that any PD write monitors
2124 * needs to be removed.
2125 *
2126 * @param pVM VM handle.
2127 */
2128VMMR3DECL(void) PGMR3Reset(PVM pVM)
2129{
2130 int rc;
2131
2132 LogFlow(("PGMR3Reset:\n"));
2133 VM_ASSERT_EMT(pVM);
2134
2135 pgmLock(pVM);
2136
2137 /*
2138 * Unfix any fixed mappings and disable CR3 monitoring.
2139 */
2140 pVM->pgm.s.fMappingsFixed = false;
2141 pVM->pgm.s.GCPtrMappingFixed = 0;
2142 pVM->pgm.s.cbMappingFixed = 0;
2143
2144 /* Exit the guest paging mode before the pgm pool gets reset.
2145 * Important to clean up the amd64 case.
2146 */
2147 for (unsigned i=0;i<pVM->cCPUs;i++)
2148 {
2149 PVMCPU pVCpu = &pVM->aCpus[i];
2150
2151 rc = PGM_GST_PFN(Exit, pVCpu)(pVCpu);
2152 AssertRC(rc);
2153 }
2154
2155#ifdef DEBUG
2156 DBGFR3InfoLog(pVM, "mappings", NULL);
2157 DBGFR3InfoLog(pVM, "handlers", "all nostat");
2158#endif
2159
2160 /*
2161 * Reset the shadow page pool.
2162 */
2163 pgmR3PoolReset(pVM);
2164
2165 for (unsigned i=0;i<pVM->cCPUs;i++)
2166 {
2167 PVMCPU pVCpu = &pVM->aCpus[i];
2168
2169 /*
2170 * Re-init other members.
2171 */
2172 pVCpu->pgm.s.fA20Enabled = true;
2173
2174 /*
2175 * Clear the FFs PGM owns.
2176 */
2177 VMCPU_FF_CLEAR(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
2178 VMCPU_FF_CLEAR(pVCpu, VMCPU_FF_PGM_SYNC_CR3_NON_GLOBAL);
2179 }
2180
2181 /*
2182 * Reset (zero) RAM pages.
2183 */
2184 rc = pgmR3PhysRamReset(pVM);
2185 if (RT_SUCCESS(rc))
2186 {
2187 /*
2188 * Reset (zero) shadow ROM pages.
2189 */
2190 rc = pgmR3PhysRomReset(pVM);
2191 if (RT_SUCCESS(rc))
2192 {
2193 /*
2194 * Switch mode back to real mode.
2195 */
2196 for (unsigned i=0;i<pVM->cCPUs;i++)
2197 {
2198 PVMCPU pVCpu = &pVM->aCpus[i];
2199
2200 rc = PGMR3ChangeMode(pVM, pVCpu, PGMMODE_REAL);
2201 AssertRC(rc);
2202
2203 STAM_REL_COUNTER_RESET(&pVCpu->pgm.s.cGuestModeChanges);
2204 }
2205 }
2206 }
2207
2208 pgmUnlock(pVM);
2209 //return rc;
2210 AssertReleaseRC(rc);
2211}
2212
2213
2214#ifdef VBOX_STRICT
2215/**
2216 * VM state change callback for clearing fNoMorePhysWrites after
2217 * a snapshot has been created.
2218 */
2219static DECLCALLBACK(void) pgmR3ResetNoMorePhysWritesFlag(PVM pVM, VMSTATE enmState, VMSTATE enmOldState, void *pvUser)
2220{
2221 if (enmState == VMSTATE_RUNNING)
2222 pVM->pgm.s.fNoMorePhysWrites = false;
2223}
2224#endif
2225
2226
2227/**
2228 * Terminates the PGM.
2229 *
2230 * @returns VBox status code.
2231 * @param pVM Pointer to VM structure.
2232 */
2233VMMR3DECL(int) PGMR3Term(PVM pVM)
2234{
2235 PGMDeregisterStringFormatTypes();
2236 return PDMR3CritSectDelete(&pVM->pgm.s.CritSect);
2237}
2238
2239
2240/**
2241 * Terminates the per-VCPU PGM.
2242 *
2243 * Termination means cleaning up and freeing all resources,
2244 * the VM it self is at this point powered off or suspended.
2245 *
2246 * @returns VBox status code.
2247 * @param pVM The VM to operate on.
2248 */
2249VMMR3DECL(int) PGMR3TermCPU(PVM pVM)
2250{
2251 return 0;
2252}
2253
2254
2255/**
2256 * Find the ROM tracking structure for the given page.
2257 *
2258 * @returns Pointer to the ROM page structure. NULL if the caller didn't check
2259 * that it's a ROM page.
2260 * @param pVM The VM handle.
2261 * @param GCPhys The address of the ROM page.
2262 */
2263static PPGMROMPAGE pgmR3GetRomPage(PVM pVM, RTGCPHYS GCPhys)
2264{
2265 for (PPGMROMRANGE pRomRange = pVM->pgm.s.CTX_SUFF(pRomRanges);
2266 pRomRange;
2267 pRomRange = pRomRange->CTX_SUFF(pNext))
2268 {
2269 RTGCPHYS off = GCPhys - pRomRange->GCPhys;
2270 if (GCPhys - pRomRange->GCPhys < pRomRange->cb)
2271 return &pRomRange->aPages[off >> PAGE_SHIFT];
2272 }
2273 return NULL;
2274}
2275
2276
2277/**
2278 * Save zero indicator + bits for the specified page.
2279 *
2280 * @returns VBox status code, errors are logged/asserted before returning.
2281 * @param pVM The VM handle.
2282 * @param pSSH The saved state handle.
2283 * @param pPage The page to save.
2284 * @param GCPhys The address of the page.
2285 * @param pRam The ram range (for error logging).
2286 */
2287static int pgmR3SavePage(PVM pVM, PSSMHANDLE pSSM, PPGMPAGE pPage, RTGCPHYS GCPhys, PPGMRAMRANGE pRam)
2288{
2289 int rc;
2290 if (PGM_PAGE_IS_ZERO(pPage))
2291 rc = SSMR3PutU8(pSSM, 0);
2292 else
2293 {
2294 void const *pvPage;
2295 rc = pgmPhysGCPhys2CCPtrInternalReadOnly(pVM, pPage, GCPhys, &pvPage);
2296 AssertLogRelMsgRCReturn(rc, ("pPage=%R[pgmpage] GCPhys=%#x %s\n", pPage, GCPhys, pRam->pszDesc), rc);
2297
2298 SSMR3PutU8(pSSM, 1);
2299 rc = SSMR3PutMem(pSSM, pvPage, PAGE_SIZE);
2300 }
2301 return rc;
2302}
2303
2304
2305/**
2306 * Save a shadowed ROM page.
2307 *
2308 * Format: Type, protection, and two pages with zero indicators.
2309 *
2310 * @returns VBox status code, errors are logged/asserted before returning.
2311 * @param pVM The VM handle.
2312 * @param pSSH The saved state handle.
2313 * @param pPage The page to save.
2314 * @param GCPhys The address of the page.
2315 * @param pRam The ram range (for error logging).
2316 */
2317static int pgmR3SaveShadowedRomPage(PVM pVM, PSSMHANDLE pSSM, PPGMPAGE pPage, RTGCPHYS GCPhys, PPGMRAMRANGE pRam)
2318{
2319 /* Need to save both pages and the current state. */
2320 PPGMROMPAGE pRomPage = pgmR3GetRomPage(pVM, GCPhys);
2321 AssertLogRelMsgReturn(pRomPage, ("GCPhys=%RGp %s\n", GCPhys, pRam->pszDesc), VERR_INTERNAL_ERROR);
2322
2323 SSMR3PutU8(pSSM, PGMPAGETYPE_ROM_SHADOW);
2324 SSMR3PutU8(pSSM, pRomPage->enmProt);
2325
2326 int rc = pgmR3SavePage(pVM, pSSM, pPage, GCPhys, pRam);
2327 if (RT_SUCCESS(rc))
2328 {
2329 PPGMPAGE pPagePassive = PGMROMPROT_IS_ROM(pRomPage->enmProt) ? &pRomPage->Shadow : &pRomPage->Virgin;
2330 rc = pgmR3SavePage(pVM, pSSM, pPagePassive, GCPhys, pRam);
2331 }
2332 return rc;
2333}
2334
2335/** PGM fields to save/load. */
2336static const SSMFIELD s_aPGMFields[] =
2337{
2338 SSMFIELD_ENTRY( PGM, fMappingsFixed),
2339 SSMFIELD_ENTRY_GCPTR( PGM, GCPtrMappingFixed),
2340 SSMFIELD_ENTRY( PGM, cbMappingFixed),
2341 SSMFIELD_ENTRY_TERM()
2342};
2343
2344static const SSMFIELD s_aPGMCpuFields[] =
2345{
2346 SSMFIELD_ENTRY( PGMCPU, fA20Enabled),
2347 SSMFIELD_ENTRY_GCPHYS( PGMCPU, GCPhysA20Mask),
2348 SSMFIELD_ENTRY( PGMCPU, enmGuestMode),
2349 SSMFIELD_ENTRY_TERM()
2350};
2351
2352/* For loading old saved states. (pre-smp) */
2353typedef struct
2354{
2355 /** If set no conflict checks are required. (boolean) */
2356 bool fMappingsFixed;
2357 /** Size of fixed mapping */
2358 uint32_t cbMappingFixed;
2359 /** Base address (GC) of fixed mapping */
2360 RTGCPTR GCPtrMappingFixed;
2361 /** A20 gate mask.
2362 * Our current approach to A20 emulation is to let REM do it and don't bother
2363 * anywhere else. The interesting Guests will be operating with it enabled anyway.
2364 * But whould need arrise, we'll subject physical addresses to this mask. */
2365 RTGCPHYS GCPhysA20Mask;
2366 /** A20 gate state - boolean! */
2367 bool fA20Enabled;
2368 /** The guest paging mode. */
2369 PGMMODE enmGuestMode;
2370} PGMOLD;
2371
2372static const SSMFIELD s_aPGMFields_Old[] =
2373{
2374 SSMFIELD_ENTRY( PGMOLD, fMappingsFixed),
2375 SSMFIELD_ENTRY_GCPTR( PGMOLD, GCPtrMappingFixed),
2376 SSMFIELD_ENTRY( PGMOLD, cbMappingFixed),
2377 SSMFIELD_ENTRY( PGMOLD, fA20Enabled),
2378 SSMFIELD_ENTRY_GCPHYS( PGMOLD, GCPhysA20Mask),
2379 SSMFIELD_ENTRY( PGMOLD, enmGuestMode),
2380 SSMFIELD_ENTRY_TERM()
2381};
2382
2383
2384/**
2385 * Execute state save operation.
2386 *
2387 * @returns VBox status code.
2388 * @param pVM VM Handle.
2389 * @param pSSM SSM operation handle.
2390 */
2391static DECLCALLBACK(int) pgmR3Save(PVM pVM, PSSMHANDLE pSSM)
2392{
2393 int rc;
2394 unsigned i;
2395 PPGM pPGM = &pVM->pgm.s;
2396
2397 /*
2398 * Lock PGM and set the no-more-writes indicator.
2399 */
2400 pgmLock(pVM);
2401 pVM->pgm.s.fNoMorePhysWrites = true;
2402
2403 /*
2404 * Save basic data (required / unaffected by relocation).
2405 */
2406 SSMR3PutStruct(pSSM, pPGM, &s_aPGMFields[0]);
2407
2408 for (i=0;i<pVM->cCPUs;i++)
2409 {
2410 PVMCPU pVCpu = &pVM->aCpus[i];
2411
2412 SSMR3PutStruct(pSSM, &pVCpu->pgm.s, &s_aPGMCpuFields[0]);
2413 }
2414
2415 /*
2416 * The guest mappings.
2417 */
2418 i = 0;
2419 for (PPGMMAPPING pMapping = pPGM->pMappingsR3; pMapping; pMapping = pMapping->pNextR3, i++)
2420 {
2421 SSMR3PutU32( pSSM, i);
2422 SSMR3PutStrZ( pSSM, pMapping->pszDesc); /* This is the best unique id we have... */
2423 SSMR3PutGCPtr( pSSM, pMapping->GCPtr);
2424 SSMR3PutGCUIntPtr(pSSM, pMapping->cPTs);
2425 }
2426 rc = SSMR3PutU32(pSSM, ~0); /* terminator. */
2427
2428 /*
2429 * Ram ranges and the memory they describe.
2430 */
2431 i = 0;
2432 for (PPGMRAMRANGE pRam = pPGM->pRamRangesR3; pRam; pRam = pRam->pNextR3, i++)
2433 {
2434 /*
2435 * Save the ram range details.
2436 */
2437 SSMR3PutU32(pSSM, i);
2438 SSMR3PutGCPhys(pSSM, pRam->GCPhys);
2439 SSMR3PutGCPhys(pSSM, pRam->GCPhysLast);
2440 SSMR3PutGCPhys(pSSM, pRam->cb);
2441 SSMR3PutU8(pSSM, !!pRam->pvR3); /* Boolean indicating memory or not. */
2442 SSMR3PutStrZ(pSSM, pRam->pszDesc); /* This is the best unique id we have... */
2443
2444 /*
2445 * Iterate the pages, only two special case.
2446 */
2447 uint32_t const cPages = pRam->cb >> PAGE_SHIFT;
2448 for (uint32_t iPage = 0; iPage < cPages; iPage++)
2449 {
2450 RTGCPHYS GCPhysPage = pRam->GCPhys + ((RTGCPHYS)iPage << PAGE_SHIFT);
2451 PPGMPAGE pPage = &pRam->aPages[iPage];
2452 uint8_t uType = PGM_PAGE_GET_TYPE(pPage);
2453
2454 if (uType == PGMPAGETYPE_ROM_SHADOW)
2455 rc = pgmR3SaveShadowedRomPage(pVM, pSSM, pPage, GCPhysPage, pRam);
2456 else if (uType == PGMPAGETYPE_MMIO2_ALIAS_MMIO)
2457 {
2458 /* MMIO2 alias -> MMIO; the device will just have to deal with this. */
2459 SSMR3PutU8(pSSM, PGMPAGETYPE_MMIO);
2460 rc = SSMR3PutU8(pSSM, 0 /* ZERO */);
2461 }
2462 else
2463 {
2464 SSMR3PutU8(pSSM, uType);
2465 rc = pgmR3SavePage(pVM, pSSM, pPage, GCPhysPage, pRam);
2466 }
2467 if (RT_FAILURE(rc))
2468 break;
2469 }
2470 if (RT_FAILURE(rc))
2471 break;
2472 }
2473
2474 pgmUnlock(pVM);
2475 return SSMR3PutU32(pSSM, ~0); /* terminator. */
2476}
2477
2478
2479/**
2480 * Load an ignored page.
2481 *
2482 * @returns VBox status code.
2483 * @param pSSM The saved state handle.
2484 */
2485static int pgmR3LoadPageToDevNull(PSSMHANDLE pSSM)
2486{
2487 uint8_t abPage[PAGE_SIZE];
2488 return SSMR3GetMem(pSSM, &abPage[0], sizeof(abPage));
2489}
2490
2491
2492/**
2493 * Loads a page without any bits in the saved state, i.e. making sure it's
2494 * really zero.
2495 *
2496 * @returns VBox status code.
2497 * @param pVM The VM handle.
2498 * @param uType The page type or PGMPAGETYPE_INVALID (old saved
2499 * state).
2500 * @param pPage The guest page tracking structure.
2501 * @param GCPhys The page address.
2502 * @param pRam The ram range (logging).
2503 */
2504static int pgmR3LoadPageZero(PVM pVM, uint8_t uType, PPGMPAGE pPage, RTGCPHYS GCPhys, PPGMRAMRANGE pRam)
2505{
2506 if ( PGM_PAGE_GET_TYPE(pPage) != uType
2507 && uType != PGMPAGETYPE_INVALID)
2508 return VERR_SSM_UNEXPECTED_DATA;
2509
2510 /* I think this should be sufficient. */
2511 if (!PGM_PAGE_IS_ZERO(pPage))
2512 return VERR_SSM_UNEXPECTED_DATA;
2513
2514 NOREF(pVM);
2515 NOREF(GCPhys);
2516 NOREF(pRam);
2517 return VINF_SUCCESS;
2518}
2519
2520
2521/**
2522 * Loads a page from the saved state.
2523 *
2524 * @returns VBox status code.
2525 * @param pVM The VM handle.
2526 * @param pSSM The SSM handle.
2527 * @param uType The page type or PGMPAGETYEP_INVALID (old saved
2528 * state).
2529 * @param pPage The guest page tracking structure.
2530 * @param GCPhys The page address.
2531 * @param pRam The ram range (logging).
2532 */
2533static int pgmR3LoadPageBits(PVM pVM, PSSMHANDLE pSSM, uint8_t uType, PPGMPAGE pPage, RTGCPHYS GCPhys, PPGMRAMRANGE pRam)
2534{
2535 int rc;
2536
2537 /*
2538 * Match up the type, dealing with MMIO2 aliases (dropped).
2539 */
2540 AssertLogRelMsgReturn( PGM_PAGE_GET_TYPE(pPage) == uType
2541 || uType == PGMPAGETYPE_INVALID,
2542 ("pPage=%R[pgmpage] GCPhys=%#x %s\n", pPage, GCPhys, pRam->pszDesc),
2543 VERR_SSM_UNEXPECTED_DATA);
2544
2545 /*
2546 * Load the page.
2547 */
2548 void *pvPage;
2549 rc = pgmPhysGCPhys2CCPtrInternal(pVM, pPage, GCPhys, &pvPage);
2550 if (RT_SUCCESS(rc))
2551 rc = SSMR3GetMem(pSSM, pvPage, PAGE_SIZE);
2552
2553 return rc;
2554}
2555
2556
2557/**
2558 * Loads a page (counter part to pgmR3SavePage).
2559 *
2560 * @returns VBox status code, fully bitched errors.
2561 * @param pVM The VM handle.
2562 * @param pSSM The SSM handle.
2563 * @param uType The page type.
2564 * @param pPage The page.
2565 * @param GCPhys The page address.
2566 * @param pRam The RAM range (for error messages).
2567 */
2568static int pgmR3LoadPage(PVM pVM, PSSMHANDLE pSSM, uint8_t uType, PPGMPAGE pPage, RTGCPHYS GCPhys, PPGMRAMRANGE pRam)
2569{
2570 uint8_t uState;
2571 int rc = SSMR3GetU8(pSSM, &uState);
2572 AssertLogRelMsgRCReturn(rc, ("pPage=%R[pgmpage] GCPhys=%#x %s rc=%Rrc\n", pPage, GCPhys, pRam->pszDesc, rc), rc);
2573 if (uState == 0 /* zero */)
2574 rc = pgmR3LoadPageZero(pVM, uType, pPage, GCPhys, pRam);
2575 else if (uState == 1)
2576 rc = pgmR3LoadPageBits(pVM, pSSM, uType, pPage, GCPhys, pRam);
2577 else
2578 rc = VERR_INTERNAL_ERROR;
2579 AssertLogRelMsgRCReturn(rc, ("pPage=%R[pgmpage] uState=%d uType=%d GCPhys=%RGp %s rc=%Rrc\n",
2580 pPage, uState, uType, GCPhys, pRam->pszDesc, rc),
2581 rc);
2582 return VINF_SUCCESS;
2583}
2584
2585
2586/**
2587 * Loads a shadowed ROM page.
2588 *
2589 * @returns VBox status code, errors are fully bitched.
2590 * @param pVM The VM handle.
2591 * @param pSSM The saved state handle.
2592 * @param pPage The page.
2593 * @param GCPhys The page address.
2594 * @param pRam The RAM range (for error messages).
2595 */
2596static int pgmR3LoadShadowedRomPage(PVM pVM, PSSMHANDLE pSSM, PPGMPAGE pPage, RTGCPHYS GCPhys, PPGMRAMRANGE pRam)
2597{
2598 /*
2599 * Load and set the protection first, then load the two pages, the first
2600 * one is the active the other is the passive.
2601 */
2602 PPGMROMPAGE pRomPage = pgmR3GetRomPage(pVM, GCPhys);
2603 AssertLogRelMsgReturn(pRomPage, ("GCPhys=%RGp %s\n", GCPhys, pRam->pszDesc), VERR_INTERNAL_ERROR);
2604
2605 uint8_t uProt;
2606 int rc = SSMR3GetU8(pSSM, &uProt);
2607 AssertLogRelMsgRCReturn(rc, ("pPage=%R[pgmpage] GCPhys=%#x %s\n", pPage, GCPhys, pRam->pszDesc), rc);
2608 PGMROMPROT enmProt = (PGMROMPROT)uProt;
2609 AssertLogRelMsgReturn( enmProt >= PGMROMPROT_INVALID
2610 && enmProt < PGMROMPROT_END,
2611 ("enmProt=%d pPage=%R[pgmpage] GCPhys=%#x %s\n", enmProt, pPage, GCPhys, pRam->pszDesc),
2612 VERR_SSM_UNEXPECTED_DATA);
2613
2614 if (pRomPage->enmProt != enmProt)
2615 {
2616 rc = PGMR3PhysRomProtect(pVM, GCPhys, PAGE_SIZE, enmProt);
2617 AssertLogRelRCReturn(rc, rc);
2618 AssertLogRelReturn(pRomPage->enmProt == enmProt, VERR_INTERNAL_ERROR);
2619 }
2620
2621 PPGMPAGE pPageActive = PGMROMPROT_IS_ROM(enmProt) ? &pRomPage->Virgin : &pRomPage->Shadow;
2622 PPGMPAGE pPagePassive = PGMROMPROT_IS_ROM(enmProt) ? &pRomPage->Shadow : &pRomPage->Virgin;
2623 uint8_t u8ActiveType = PGMROMPROT_IS_ROM(enmProt) ? PGMPAGETYPE_ROM : PGMPAGETYPE_ROM_SHADOW;
2624 uint8_t u8PassiveType= PGMROMPROT_IS_ROM(enmProt) ? PGMPAGETYPE_ROM_SHADOW : PGMPAGETYPE_ROM;
2625
2626 rc = pgmR3LoadPage(pVM, pSSM, u8ActiveType, pPage, GCPhys, pRam);
2627 if (RT_SUCCESS(rc))
2628 {
2629 *pPageActive = *pPage;
2630 rc = pgmR3LoadPage(pVM, pSSM, u8PassiveType, pPagePassive, GCPhys, pRam);
2631 }
2632 return rc;
2633}
2634
2635
2636/**
2637 * Worker for pgmR3Load.
2638 *
2639 * @returns VBox status code.
2640 *
2641 * @param pVM The VM handle.
2642 * @param pSSM The SSM handle.
2643 * @param u32Version The saved state version.
2644 */
2645static int pgmR3LoadLocked(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version)
2646{
2647 int rc;
2648 PPGM pPGM = &pVM->pgm.s;
2649 uint32_t u32Sep;
2650
2651 /*
2652 * Load basic data (required / unaffected by relocation).
2653 */
2654 if (u32Version >= PGM_SAVED_STATE_VERSION)
2655 {
2656 rc = SSMR3GetStruct(pSSM, pPGM, &s_aPGMFields[0]);
2657 AssertLogRelRCReturn(rc, rc);
2658
2659 for (unsigned i=0;i<pVM->cCPUs;i++)
2660 {
2661 PVMCPU pVCpu = &pVM->aCpus[i];
2662
2663 rc = SSMR3GetStruct(pSSM, &pVCpu->pgm.s, &s_aPGMCpuFields[0]);
2664 AssertLogRelRCReturn(rc, rc);
2665 }
2666 }
2667 else
2668 if (u32Version >= PGM_SAVED_STATE_VERSION_RR_DESC)
2669 {
2670 PGMOLD pgmOld;
2671
2672 AssertRelease(pVM->cCPUs == 1);
2673
2674 rc = SSMR3GetStruct(pSSM, &pgmOld, &s_aPGMFields_Old[0]);
2675 AssertLogRelRCReturn(rc, rc);
2676
2677 pPGM->fMappingsFixed = pgmOld.fMappingsFixed;
2678 pPGM->GCPtrMappingFixed = pgmOld.GCPtrMappingFixed;
2679 pPGM->cbMappingFixed = pgmOld.cbMappingFixed;
2680
2681 pVM->aCpus[0].pgm.s.fA20Enabled = pgmOld.fA20Enabled;
2682 pVM->aCpus[0].pgm.s.GCPhysA20Mask = pgmOld.GCPhysA20Mask;
2683 pVM->aCpus[0].pgm.s.enmGuestMode = pgmOld.enmGuestMode;
2684 }
2685 else
2686 {
2687 AssertRelease(pVM->cCPUs == 1);
2688
2689 SSMR3GetBool(pSSM, &pPGM->fMappingsFixed);
2690 SSMR3GetGCPtr(pSSM, &pPGM->GCPtrMappingFixed);
2691 SSMR3GetU32(pSSM, &pPGM->cbMappingFixed);
2692
2693 uint32_t cbRamSizeIgnored;
2694 rc = SSMR3GetU32(pSSM, &cbRamSizeIgnored);
2695 if (RT_FAILURE(rc))
2696 return rc;
2697 SSMR3GetGCPhys(pSSM, &pVM->aCpus[0].pgm.s.GCPhysA20Mask);
2698
2699 uint32_t u32 = 0;
2700 SSMR3GetUInt(pSSM, &u32);
2701 pVM->aCpus[0].pgm.s.fA20Enabled = !!u32;
2702 SSMR3GetUInt(pSSM, &pVM->aCpus[0].pgm.s.fSyncFlags);
2703 RTUINT uGuestMode;
2704 SSMR3GetUInt(pSSM, &uGuestMode);
2705 pVM->aCpus[0].pgm.s.enmGuestMode = (PGMMODE)uGuestMode;
2706
2707 /* check separator. */
2708 SSMR3GetU32(pSSM, &u32Sep);
2709 if (RT_FAILURE(rc))
2710 return rc;
2711 if (u32Sep != (uint32_t)~0)
2712 {
2713 AssertMsgFailed(("u32Sep=%#x (first)\n", u32Sep));
2714 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2715 }
2716 }
2717
2718 /*
2719 * The guest mappings.
2720 */
2721 uint32_t i = 0;
2722 for (;; i++)
2723 {
2724 /* Check the seqence number / separator. */
2725 rc = SSMR3GetU32(pSSM, &u32Sep);
2726 if (RT_FAILURE(rc))
2727 return rc;
2728 if (u32Sep == ~0U)
2729 break;
2730 if (u32Sep != i)
2731 {
2732 AssertMsgFailed(("u32Sep=%#x (last)\n", u32Sep));
2733 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2734 }
2735
2736 /* get the mapping details. */
2737 char szDesc[256];
2738 szDesc[0] = '\0';
2739 rc = SSMR3GetStrZ(pSSM, szDesc, sizeof(szDesc));
2740 if (RT_FAILURE(rc))
2741 return rc;
2742 RTGCPTR GCPtr;
2743 SSMR3GetGCPtr(pSSM, &GCPtr);
2744 RTGCPTR cPTs;
2745 rc = SSMR3GetGCUIntPtr(pSSM, &cPTs);
2746 if (RT_FAILURE(rc))
2747 return rc;
2748
2749 /* find matching range. */
2750 PPGMMAPPING pMapping;
2751 for (pMapping = pPGM->pMappingsR3; pMapping; pMapping = pMapping->pNextR3)
2752 if ( pMapping->cPTs == cPTs
2753 && !strcmp(pMapping->pszDesc, szDesc))
2754 break;
2755 AssertLogRelMsgReturn(pMapping, ("Couldn't find mapping: cPTs=%#x szDesc=%s (GCPtr=%RGv)\n",
2756 cPTs, szDesc, GCPtr),
2757 VERR_SSM_LOAD_CONFIG_MISMATCH);
2758
2759 /* relocate it. */
2760 if (pMapping->GCPtr != GCPtr)
2761 {
2762 AssertMsg((GCPtr >> X86_PD_SHIFT << X86_PD_SHIFT) == GCPtr, ("GCPtr=%RGv\n", GCPtr));
2763 pgmR3MapRelocate(pVM, pMapping, pMapping->GCPtr, GCPtr);
2764 }
2765 else
2766 Log(("pgmR3Load: '%s' needed no relocation (%RGv)\n", szDesc, GCPtr));
2767 }
2768
2769 /*
2770 * Ram range flags and bits.
2771 */
2772 i = 0;
2773 for (PPGMRAMRANGE pRam = pPGM->pRamRangesR3; pRam; pRam = pRam->pNextR3, i++)
2774 {
2775 /** @todo MMIO ranges may move (PCI reconfig), we currently assume they don't. */
2776
2777 /* Check the seqence number / separator. */
2778 rc = SSMR3GetU32(pSSM, &u32Sep);
2779 if (RT_FAILURE(rc))
2780 return rc;
2781 if (u32Sep == ~0U)
2782 break;
2783 if (u32Sep != i)
2784 {
2785 AssertMsgFailed(("u32Sep=%#x (last)\n", u32Sep));
2786 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2787 }
2788
2789 /* Get the range details. */
2790 RTGCPHYS GCPhys;
2791 SSMR3GetGCPhys(pSSM, &GCPhys);
2792 RTGCPHYS GCPhysLast;
2793 SSMR3GetGCPhys(pSSM, &GCPhysLast);
2794 RTGCPHYS cb;
2795 SSMR3GetGCPhys(pSSM, &cb);
2796 uint8_t fHaveBits;
2797 rc = SSMR3GetU8(pSSM, &fHaveBits);
2798 if (RT_FAILURE(rc))
2799 return rc;
2800 if (fHaveBits & ~1)
2801 {
2802 AssertMsgFailed(("u32Sep=%#x (last)\n", u32Sep));
2803 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2804 }
2805 size_t cchDesc = 0;
2806 char szDesc[256];
2807 szDesc[0] = '\0';
2808 if (u32Version >= PGM_SAVED_STATE_VERSION_RR_DESC)
2809 {
2810 rc = SSMR3GetStrZ(pSSM, szDesc, sizeof(szDesc));
2811 if (RT_FAILURE(rc))
2812 return rc;
2813 /* Since we've modified the description strings in r45878, only compare
2814 them if the saved state is more recent. */
2815 if (u32Version != PGM_SAVED_STATE_VERSION_RR_DESC)
2816 cchDesc = strlen(szDesc);
2817 }
2818
2819 /*
2820 * Match it up with the current range.
2821 *
2822 * Note there is a hack for dealing with the high BIOS mapping
2823 * in the old saved state format, this means we might not have
2824 * a 1:1 match on success.
2825 */
2826 if ( ( GCPhys != pRam->GCPhys
2827 || GCPhysLast != pRam->GCPhysLast
2828 || cb != pRam->cb
2829 || ( cchDesc
2830 && strcmp(szDesc, pRam->pszDesc)) )
2831 /* Hack for PDMDevHlpPhysReserve(pDevIns, 0xfff80000, 0x80000, "High ROM Region"); */
2832 && ( u32Version != PGM_SAVED_STATE_VERSION_OLD_PHYS_CODE
2833 || GCPhys != UINT32_C(0xfff80000)
2834 || GCPhysLast != UINT32_C(0xffffffff)
2835 || pRam->GCPhysLast != GCPhysLast
2836 || pRam->GCPhys < GCPhys
2837 || !fHaveBits)
2838 )
2839 {
2840 LogRel(("Ram range: %RGp-%RGp %RGp bytes %s %s\n"
2841 "State : %RGp-%RGp %RGp bytes %s %s\n",
2842 pRam->GCPhys, pRam->GCPhysLast, pRam->cb, pRam->pvR3 ? "bits" : "nobits", pRam->pszDesc,
2843 GCPhys, GCPhysLast, cb, fHaveBits ? "bits" : "nobits", szDesc));
2844 /*
2845 * If we're loading a state for debugging purpose, don't make a fuss if
2846 * the MMIO and ROM stuff isn't 100% right, just skip the mismatches.
2847 */
2848 if ( SSMR3HandleGetAfter(pSSM) != SSMAFTER_DEBUG_IT
2849 || GCPhys < 8 * _1M)
2850 AssertFailedReturn(VERR_SSM_LOAD_CONFIG_MISMATCH);
2851
2852 AssertMsgFailed(("debug skipping not implemented, sorry\n"));
2853 continue;
2854 }
2855
2856 uint32_t cPages = (GCPhysLast - GCPhys + 1) >> PAGE_SHIFT;
2857 if (u32Version >= PGM_SAVED_STATE_VERSION_RR_DESC)
2858 {
2859 /*
2860 * Load the pages one by one.
2861 */
2862 for (uint32_t iPage = 0; iPage < cPages; iPage++)
2863 {
2864 RTGCPHYS const GCPhysPage = ((RTGCPHYS)iPage << PAGE_SHIFT) + pRam->GCPhys;
2865 PPGMPAGE pPage = &pRam->aPages[iPage];
2866 uint8_t uType;
2867 rc = SSMR3GetU8(pSSM, &uType);
2868 AssertLogRelMsgRCReturn(rc, ("pPage=%R[pgmpage] iPage=%#x GCPhysPage=%#x %s\n", pPage, iPage, GCPhysPage, pRam->pszDesc), rc);
2869 if (uType == PGMPAGETYPE_ROM_SHADOW)
2870 rc = pgmR3LoadShadowedRomPage(pVM, pSSM, pPage, GCPhysPage, pRam);
2871 else
2872 rc = pgmR3LoadPage(pVM, pSSM, uType, pPage, GCPhysPage, pRam);
2873 AssertLogRelMsgRCReturn(rc, ("rc=%Rrc iPage=%#x GCPhysPage=%#x %s\n", rc, iPage, GCPhysPage, pRam->pszDesc), rc);
2874 }
2875 }
2876 else
2877 {
2878 /*
2879 * Old format.
2880 */
2881 AssertLogRelReturn(!pVM->pgm.s.fRamPreAlloc, VERR_NOT_SUPPORTED); /* can't be detected. */
2882
2883 /* Of the page flags, pick up MMIO2 and ROM/RESERVED for the !fHaveBits case.
2884 The rest is generally irrelevant and wrong since the stuff have to match registrations. */
2885 uint32_t fFlags = 0;
2886 for (uint32_t iPage = 0; iPage < cPages; iPage++)
2887 {
2888 uint16_t u16Flags;
2889 rc = SSMR3GetU16(pSSM, &u16Flags);
2890 AssertLogRelMsgRCReturn(rc, ("rc=%Rrc iPage=%#x GCPhys=%#x %s\n", rc, iPage, pRam->GCPhys, pRam->pszDesc), rc);
2891 fFlags |= u16Flags;
2892 }
2893
2894 /* Load the bits */
2895 if ( !fHaveBits
2896 && GCPhysLast < UINT32_C(0xe0000000))
2897 {
2898 /*
2899 * Dynamic chunks.
2900 */
2901 const uint32_t cPagesInChunk = (1*1024*1024) >> PAGE_SHIFT;
2902 AssertLogRelMsgReturn(cPages % cPagesInChunk == 0,
2903 ("cPages=%#x cPagesInChunk=%#x\n", cPages, cPagesInChunk, pRam->GCPhys, pRam->pszDesc),
2904 VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
2905
2906 for (uint32_t iPage = 0; iPage < cPages; /* incremented by inner loop */ )
2907 {
2908 uint8_t fPresent;
2909 rc = SSMR3GetU8(pSSM, &fPresent);
2910 AssertLogRelMsgRCReturn(rc, ("rc=%Rrc iPage=%#x GCPhys=%#x %s\n", rc, iPage, pRam->GCPhys, pRam->pszDesc), rc);
2911 AssertLogRelMsgReturn(fPresent == (uint8_t)true || fPresent == (uint8_t)false,
2912 ("fPresent=%#x iPage=%#x GCPhys=%#x %s\n", fPresent, iPage, pRam->GCPhys, pRam->pszDesc),
2913 VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
2914
2915 for (uint32_t iChunkPage = 0; iChunkPage < cPagesInChunk; iChunkPage++, iPage++)
2916 {
2917 RTGCPHYS const GCPhysPage = ((RTGCPHYS)iPage << PAGE_SHIFT) + pRam->GCPhys;
2918 PPGMPAGE pPage = &pRam->aPages[iPage];
2919 if (fPresent)
2920 {
2921 if (PGM_PAGE_GET_TYPE(pPage) == PGMPAGETYPE_MMIO)
2922 rc = pgmR3LoadPageToDevNull(pSSM);
2923 else
2924 rc = pgmR3LoadPageBits(pVM, pSSM, PGMPAGETYPE_INVALID, pPage, GCPhysPage, pRam);
2925 }
2926 else
2927 rc = pgmR3LoadPageZero(pVM, PGMPAGETYPE_INVALID, pPage, GCPhysPage, pRam);
2928 AssertLogRelMsgRCReturn(rc, ("rc=%Rrc iPage=%#x GCPhysPage=%#x %s\n", rc, iPage, GCPhysPage, pRam->pszDesc), rc);
2929 }
2930 }
2931 }
2932 else if (pRam->pvR3)
2933 {
2934 /*
2935 * MMIO2.
2936 */
2937 AssertLogRelMsgReturn((fFlags & 0x0f) == RT_BIT(3) /*MM_RAM_FLAGS_MMIO2*/,
2938 ("fFlags=%#x GCPhys=%#x %s\n", fFlags, pRam->GCPhys, pRam->pszDesc),
2939 VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
2940 AssertLogRelMsgReturn(pRam->pvR3,
2941 ("GCPhys=%#x %s\n", pRam->GCPhys, pRam->pszDesc),
2942 VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
2943
2944 rc = SSMR3GetMem(pSSM, pRam->pvR3, pRam->cb);
2945 AssertLogRelMsgRCReturn(rc, ("GCPhys=%#x %s\n", pRam->GCPhys, pRam->pszDesc), rc);
2946 }
2947 else if (GCPhysLast < UINT32_C(0xfff80000))
2948 {
2949 /*
2950 * PCI MMIO, no pages saved.
2951 */
2952 }
2953 else
2954 {
2955 /*
2956 * Load the 0xfff80000..0xffffffff BIOS range.
2957 * It starts with X reserved pages that we have to skip over since
2958 * the RAMRANGE create by the new code won't include those.
2959 */
2960 AssertLogRelMsgReturn( !(fFlags & RT_BIT(3) /*MM_RAM_FLAGS_MMIO2*/)
2961 && (fFlags & RT_BIT(0) /*MM_RAM_FLAGS_RESERVED*/),
2962 ("fFlags=%#x GCPhys=%#x %s\n", fFlags, pRam->GCPhys, pRam->pszDesc),
2963 VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
2964 AssertLogRelMsgReturn(GCPhys == UINT32_C(0xfff80000),
2965 ("GCPhys=%RGp pRamRange{GCPhys=%#x %s}\n", GCPhys, pRam->GCPhys, pRam->pszDesc),
2966 VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
2967
2968 /* Skip wasted reserved pages before the ROM. */
2969 while (GCPhys < pRam->GCPhys)
2970 {
2971 rc = pgmR3LoadPageToDevNull(pSSM);
2972 GCPhys += PAGE_SIZE;
2973 }
2974
2975 /* Load the bios pages. */
2976 cPages = pRam->cb >> PAGE_SHIFT;
2977 for (uint32_t iPage = 0; iPage < cPages; iPage++)
2978 {
2979 RTGCPHYS const GCPhysPage = ((RTGCPHYS)iPage << PAGE_SHIFT) + pRam->GCPhys;
2980 PPGMPAGE pPage = &pRam->aPages[iPage];
2981
2982 AssertLogRelMsgReturn(PGM_PAGE_GET_TYPE(pPage) == PGMPAGETYPE_ROM,
2983 ("GCPhys=%RGp pPage=%R[pgmpage]\n", GCPhys, GCPhys),
2984 VERR_SSM_DATA_UNIT_FORMAT_CHANGED);
2985 rc = pgmR3LoadPageBits(pVM, pSSM, PGMPAGETYPE_ROM, pPage, GCPhysPage, pRam);
2986 AssertLogRelMsgRCReturn(rc, ("rc=%Rrc iPage=%#x GCPhys=%#x %s\n", rc, iPage, pRam->GCPhys, pRam->pszDesc), rc);
2987 }
2988 }
2989 }
2990 }
2991
2992 return rc;
2993}
2994
2995
2996/**
2997 * Execute state load operation.
2998 *
2999 * @returns VBox status code.
3000 * @param pVM VM Handle.
3001 * @param pSSM SSM operation handle.
3002 * @param u32Version Data layout version.
3003 */
3004static DECLCALLBACK(int) pgmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version)
3005{
3006 int rc;
3007 PPGM pPGM = &pVM->pgm.s;
3008
3009 /*
3010 * Validate version.
3011 */
3012 if ( u32Version != PGM_SAVED_STATE_VERSION
3013 && u32Version != PGM_SAVED_STATE_VERSION_2_2_2
3014 && u32Version != PGM_SAVED_STATE_VERSION_RR_DESC
3015 && u32Version != PGM_SAVED_STATE_VERSION_OLD_PHYS_CODE)
3016 {
3017 AssertMsgFailed(("pgmR3Load: Invalid version u32Version=%d (current %d)!\n", u32Version, PGM_SAVED_STATE_VERSION));
3018 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
3019 }
3020
3021 /*
3022 * Call the reset function to make sure all the memory is cleared.
3023 */
3024 PGMR3Reset(pVM);
3025
3026 /*
3027 * Do the loading while owning the lock because a bunch of the functions
3028 * we're using requires this.
3029 */
3030 pgmLock(pVM);
3031 rc = pgmR3LoadLocked(pVM, pSSM, u32Version);
3032 pgmUnlock(pVM);
3033 if (RT_SUCCESS(rc))
3034 {
3035 /*
3036 * We require a full resync now.
3037 */
3038 for (unsigned i=0;i<pVM->cCPUs;i++)
3039 {
3040 PVMCPU pVCpu = &pVM->aCpus[i];
3041 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3_NON_GLOBAL);
3042 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
3043
3044 pVCpu->pgm.s.fSyncFlags |= PGM_SYNC_UPDATE_PAGE_BIT_VIRTUAL;
3045 }
3046
3047 pPGM->fPhysCacheFlushPending = true;
3048 pgmR3HandlerPhysicalUpdateAll(pVM);
3049
3050 for (unsigned i=0;i<pVM->cCPUs;i++)
3051 {
3052 PVMCPU pVCpu = &pVM->aCpus[i];
3053
3054 /*
3055 * Change the paging mode.
3056 */
3057 rc = PGMR3ChangeMode(pVM, pVCpu, pVCpu->pgm.s.enmGuestMode);
3058
3059 /* Restore pVM->pgm.s.GCPhysCR3. */
3060 Assert(pVCpu->pgm.s.GCPhysCR3 == NIL_RTGCPHYS);
3061 RTGCPHYS GCPhysCR3 = CPUMGetGuestCR3(pVCpu);
3062 if ( pVCpu->pgm.s.enmGuestMode == PGMMODE_PAE
3063 || pVCpu->pgm.s.enmGuestMode == PGMMODE_PAE_NX
3064 || pVCpu->pgm.s.enmGuestMode == PGMMODE_AMD64
3065 || pVCpu->pgm.s.enmGuestMode == PGMMODE_AMD64_NX)
3066 GCPhysCR3 = (GCPhysCR3 & X86_CR3_PAE_PAGE_MASK);
3067 else
3068 GCPhysCR3 = (GCPhysCR3 & X86_CR3_PAGE_MASK);
3069 pVCpu->pgm.s.GCPhysCR3 = GCPhysCR3;
3070 }
3071 }
3072
3073 return rc;
3074}
3075
3076
3077/**
3078 * Show paging mode.
3079 *
3080 * @param pVM VM Handle.
3081 * @param pHlp The info helpers.
3082 * @param pszArgs "all" (default), "guest", "shadow" or "host".
3083 */
3084static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3085{
3086 /* digest argument. */
3087 bool fGuest, fShadow, fHost;
3088 if (pszArgs)
3089 pszArgs = RTStrStripL(pszArgs);
3090 if (!pszArgs || !*pszArgs || strstr(pszArgs, "all"))
3091 fShadow = fHost = fGuest = true;
3092 else
3093 {
3094 fShadow = fHost = fGuest = false;
3095 if (strstr(pszArgs, "guest"))
3096 fGuest = true;
3097 if (strstr(pszArgs, "shadow"))
3098 fShadow = true;
3099 if (strstr(pszArgs, "host"))
3100 fHost = true;
3101 }
3102
3103 /** @todo SMP support! */
3104 /* print info. */
3105 if (fGuest)
3106 pHlp->pfnPrintf(pHlp, "Guest paging mode: %s, changed %RU64 times, A20 %s\n",
3107 PGMGetModeName(pVM->aCpus[0].pgm.s.enmGuestMode), pVM->aCpus[0].pgm.s.cGuestModeChanges.c,
3108 pVM->aCpus[0].pgm.s.fA20Enabled ? "enabled" : "disabled");
3109 if (fShadow)
3110 pHlp->pfnPrintf(pHlp, "Shadow paging mode: %s\n", PGMGetModeName(pVM->aCpus[0].pgm.s.enmShadowMode));
3111 if (fHost)
3112 {
3113 const char *psz;
3114 switch (pVM->pgm.s.enmHostMode)
3115 {
3116 case SUPPAGINGMODE_INVALID: psz = "invalid"; break;
3117 case SUPPAGINGMODE_32_BIT: psz = "32-bit"; break;
3118 case SUPPAGINGMODE_32_BIT_GLOBAL: psz = "32-bit+G"; break;
3119 case SUPPAGINGMODE_PAE: psz = "PAE"; break;
3120 case SUPPAGINGMODE_PAE_GLOBAL: psz = "PAE+G"; break;
3121 case SUPPAGINGMODE_PAE_NX: psz = "PAE+NX"; break;
3122 case SUPPAGINGMODE_PAE_GLOBAL_NX: psz = "PAE+G+NX"; break;
3123 case SUPPAGINGMODE_AMD64: psz = "AMD64"; break;
3124 case SUPPAGINGMODE_AMD64_GLOBAL: psz = "AMD64+G"; break;
3125 case SUPPAGINGMODE_AMD64_NX: psz = "AMD64+NX"; break;
3126 case SUPPAGINGMODE_AMD64_GLOBAL_NX: psz = "AMD64+G+NX"; break;
3127 default: psz = "unknown"; break;
3128 }
3129 pHlp->pfnPrintf(pHlp, "Host paging mode: %s\n", psz);
3130 }
3131}
3132
3133
3134/**
3135 * Dump registered MMIO ranges to the log.
3136 *
3137 * @param pVM VM Handle.
3138 * @param pHlp The info helpers.
3139 * @param pszArgs Arguments, ignored.
3140 */
3141static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3142{
3143 NOREF(pszArgs);
3144 pHlp->pfnPrintf(pHlp,
3145 "RAM ranges (pVM=%p)\n"
3146 "%.*s %.*s\n",
3147 pVM,
3148 sizeof(RTGCPHYS) * 4 + 1, "GC Phys Range ",
3149 sizeof(RTHCPTR) * 2, "pvHC ");
3150
3151 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesR3; pCur; pCur = pCur->pNextR3)
3152 pHlp->pfnPrintf(pHlp,
3153 "%RGp-%RGp %RHv %s\n",
3154 pCur->GCPhys,
3155 pCur->GCPhysLast,
3156 pCur->pvR3,
3157 pCur->pszDesc);
3158}
3159
3160/**
3161 * Dump the page directory to the log.
3162 *
3163 * @param pVM VM Handle.
3164 * @param pHlp The info helpers.
3165 * @param pszArgs Arguments, ignored.
3166 */
3167static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
3168{
3169 /** @todo SMP support!! */
3170 PVMCPU pVCpu = &pVM->aCpus[0];
3171
3172/** @todo fix this! Convert the PGMR3DumpHierarchyHC functions to do guest stuff. */
3173 /* Big pages supported? */
3174 const bool fPSE = !!(CPUMGetGuestCR4(pVCpu) & X86_CR4_PSE);
3175
3176 /* Global pages supported? */
3177 const bool fPGE = !!(CPUMGetGuestCR4(pVCpu) & X86_CR4_PGE);
3178
3179 NOREF(pszArgs);
3180
3181 /*
3182 * Get page directory addresses.
3183 */
3184 PX86PD pPDSrc = pgmGstGet32bitPDPtr(&pVCpu->pgm.s);
3185 Assert(pPDSrc);
3186 Assert(PGMPhysGCPhys2R3PtrAssert(pVM, (RTGCPHYS)(CPUMGetGuestCR3(pVCpu) & X86_CR3_PAGE_MASK), sizeof(*pPDSrc)) == pPDSrc);
3187
3188 /*
3189 * Iterate the page directory.
3190 */
3191 for (unsigned iPD = 0; iPD < RT_ELEMENTS(pPDSrc->a); iPD++)
3192 {
3193 X86PDE PdeSrc = pPDSrc->a[iPD];
3194 if (PdeSrc.n.u1Present)
3195 {
3196 if (PdeSrc.b.u1Size && fPSE)
3197 pHlp->pfnPrintf(pHlp,
3198 "%04X - %RGp P=%d U=%d RW=%d G=%d - BIG\n",
3199 iPD,
3200 pgmGstGet4MBPhysPage(&pVM->pgm.s, PdeSrc),
3201 PdeSrc.b.u1Present, PdeSrc.b.u1User, PdeSrc.b.u1Write, PdeSrc.b.u1Global && fPGE);
3202 else
3203 pHlp->pfnPrintf(pHlp,
3204 "%04X - %RGp P=%d U=%d RW=%d [G=%d]\n",
3205 iPD,
3206 (RTGCPHYS)(PdeSrc.u & X86_PDE_PG_MASK),
3207 PdeSrc.n.u1Present, PdeSrc.n.u1User, PdeSrc.n.u1Write, PdeSrc.b.u1Global && fPGE);
3208 }
3209 }
3210}
3211
3212
3213/**
3214 * Serivce a VMMCALLHOST_PGM_LOCK call.
3215 *
3216 * @returns VBox status code.
3217 * @param pVM The VM handle.
3218 */
3219VMMR3DECL(int) PGMR3LockCall(PVM pVM)
3220{
3221 int rc = PDMR3CritSectEnterEx(&pVM->pgm.s.CritSect, true /* fHostCall */);
3222 AssertRC(rc);
3223 return rc;
3224}
3225
3226
3227/**
3228 * Converts a PGMMODE value to a PGM_TYPE_* \#define.
3229 *
3230 * @returns PGM_TYPE_*.
3231 * @param pgmMode The mode value to convert.
3232 */
3233DECLINLINE(unsigned) pgmModeToType(PGMMODE pgmMode)
3234{
3235 switch (pgmMode)
3236 {
3237 case PGMMODE_REAL: return PGM_TYPE_REAL;
3238 case PGMMODE_PROTECTED: return PGM_TYPE_PROT;
3239 case PGMMODE_32_BIT: return PGM_TYPE_32BIT;
3240 case PGMMODE_PAE:
3241 case PGMMODE_PAE_NX: return PGM_TYPE_PAE;
3242 case PGMMODE_AMD64:
3243 case PGMMODE_AMD64_NX: return PGM_TYPE_AMD64;
3244 case PGMMODE_NESTED: return PGM_TYPE_NESTED;
3245 case PGMMODE_EPT: return PGM_TYPE_EPT;
3246 default:
3247 AssertFatalMsgFailed(("pgmMode=%d\n", pgmMode));
3248 }
3249}
3250
3251
3252/**
3253 * Gets the index into the paging mode data array of a SHW+GST mode.
3254 *
3255 * @returns PGM::paPagingData index.
3256 * @param uShwType The shadow paging mode type.
3257 * @param uGstType The guest paging mode type.
3258 */
3259DECLINLINE(unsigned) pgmModeDataIndex(unsigned uShwType, unsigned uGstType)
3260{
3261 Assert(uShwType >= PGM_TYPE_32BIT && uShwType <= PGM_TYPE_MAX);
3262 Assert(uGstType >= PGM_TYPE_REAL && uGstType <= PGM_TYPE_AMD64);
3263 return (uShwType - PGM_TYPE_32BIT) * (PGM_TYPE_AMD64 - PGM_TYPE_REAL + 1)
3264 + (uGstType - PGM_TYPE_REAL);
3265}
3266
3267
3268/**
3269 * Gets the index into the paging mode data array of a SHW+GST mode.
3270 *
3271 * @returns PGM::paPagingData index.
3272 * @param enmShw The shadow paging mode.
3273 * @param enmGst The guest paging mode.
3274 */
3275DECLINLINE(unsigned) pgmModeDataIndexByMode(PGMMODE enmShw, PGMMODE enmGst)
3276{
3277 Assert(enmShw >= PGMMODE_32_BIT && enmShw <= PGMMODE_MAX);
3278 Assert(enmGst > PGMMODE_INVALID && enmGst < PGMMODE_MAX);
3279 return pgmModeDataIndex(pgmModeToType(enmShw), pgmModeToType(enmGst));
3280}
3281
3282
3283/**
3284 * Calculates the max data index.
3285 * @returns The number of entries in the paging data array.
3286 */
3287DECLINLINE(unsigned) pgmModeDataMaxIndex(void)
3288{
3289 return pgmModeDataIndex(PGM_TYPE_MAX, PGM_TYPE_AMD64) + 1;
3290}
3291
3292
3293/**
3294 * Initializes the paging mode data kept in PGM::paModeData.
3295 *
3296 * @param pVM The VM handle.
3297 * @param fResolveGCAndR0 Indicate whether or not GC and Ring-0 symbols can be resolved now.
3298 * This is used early in the init process to avoid trouble with PDM
3299 * not being initialized yet.
3300 */
3301static int pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0)
3302{
3303 PPGMMODEDATA pModeData;
3304 int rc;
3305
3306 /*
3307 * Allocate the array on the first call.
3308 */
3309 if (!pVM->pgm.s.paModeData)
3310 {
3311 pVM->pgm.s.paModeData = (PPGMMODEDATA)MMR3HeapAllocZ(pVM, MM_TAG_PGM, sizeof(PGMMODEDATA) * pgmModeDataMaxIndex());
3312 AssertReturn(pVM->pgm.s.paModeData, VERR_NO_MEMORY);
3313 }
3314
3315 /*
3316 * Initialize the array entries.
3317 */
3318 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_REAL)];
3319 pModeData->uShwType = PGM_TYPE_32BIT;
3320 pModeData->uGstType = PGM_TYPE_REAL;
3321 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3322 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3323 rc = PGM_BTH_NAME_32BIT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3324
3325 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGMMODE_PROTECTED)];
3326 pModeData->uShwType = PGM_TYPE_32BIT;
3327 pModeData->uGstType = PGM_TYPE_PROT;
3328 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3329 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3330 rc = PGM_BTH_NAME_32BIT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3331
3332 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_32BIT)];
3333 pModeData->uShwType = PGM_TYPE_32BIT;
3334 pModeData->uGstType = PGM_TYPE_32BIT;
3335 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3336 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3337 rc = PGM_BTH_NAME_32BIT_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3338
3339 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_REAL)];
3340 pModeData->uShwType = PGM_TYPE_PAE;
3341 pModeData->uGstType = PGM_TYPE_REAL;
3342 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3343 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3344 rc = PGM_BTH_NAME_PAE_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3345
3346 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PROT)];
3347 pModeData->uShwType = PGM_TYPE_PAE;
3348 pModeData->uGstType = PGM_TYPE_PROT;
3349 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3350 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3351 rc = PGM_BTH_NAME_PAE_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3352
3353 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_32BIT)];
3354 pModeData->uShwType = PGM_TYPE_PAE;
3355 pModeData->uGstType = PGM_TYPE_32BIT;
3356 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3357 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3358 rc = PGM_BTH_NAME_PAE_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3359
3360 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PAE)];
3361 pModeData->uShwType = PGM_TYPE_PAE;
3362 pModeData->uGstType = PGM_TYPE_PAE;
3363 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3364 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3365 rc = PGM_BTH_NAME_PAE_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3366
3367#ifdef VBOX_WITH_64_BITS_GUESTS
3368 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_AMD64, PGM_TYPE_AMD64)];
3369 pModeData->uShwType = PGM_TYPE_AMD64;
3370 pModeData->uGstType = PGM_TYPE_AMD64;
3371 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3372 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3373 rc = PGM_BTH_NAME_AMD64_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3374#endif
3375
3376 /* The nested paging mode. */
3377 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_REAL)];
3378 pModeData->uShwType = PGM_TYPE_NESTED;
3379 pModeData->uGstType = PGM_TYPE_REAL;
3380 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3381 rc = PGM_BTH_NAME_NESTED_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3382
3383 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGMMODE_PROTECTED)];
3384 pModeData->uShwType = PGM_TYPE_NESTED;
3385 pModeData->uGstType = PGM_TYPE_PROT;
3386 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3387 rc = PGM_BTH_NAME_NESTED_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3388
3389 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_32BIT)];
3390 pModeData->uShwType = PGM_TYPE_NESTED;
3391 pModeData->uGstType = PGM_TYPE_32BIT;
3392 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3393 rc = PGM_BTH_NAME_NESTED_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3394
3395 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_PAE)];
3396 pModeData->uShwType = PGM_TYPE_NESTED;
3397 pModeData->uGstType = PGM_TYPE_PAE;
3398 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3399 rc = PGM_BTH_NAME_NESTED_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3400
3401#ifdef VBOX_WITH_64_BITS_GUESTS
3402 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
3403 pModeData->uShwType = PGM_TYPE_NESTED;
3404 pModeData->uGstType = PGM_TYPE_AMD64;
3405 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3406 rc = PGM_BTH_NAME_NESTED_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3407#endif
3408
3409 /* The shadow part of the nested callback mode depends on the host paging mode (AMD-V only). */
3410 switch (pVM->pgm.s.enmHostMode)
3411 {
3412#if HC_ARCH_BITS == 32
3413 case SUPPAGINGMODE_32_BIT:
3414 case SUPPAGINGMODE_32_BIT_GLOBAL:
3415 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
3416 {
3417 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
3418 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3419 }
3420# ifdef VBOX_WITH_64_BITS_GUESTS
3421 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
3422 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3423# endif
3424 break;
3425
3426 case SUPPAGINGMODE_PAE:
3427 case SUPPAGINGMODE_PAE_NX:
3428 case SUPPAGINGMODE_PAE_GLOBAL:
3429 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3430 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
3431 {
3432 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
3433 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3434 }
3435# ifdef VBOX_WITH_64_BITS_GUESTS
3436 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
3437 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3438# endif
3439 break;
3440#endif /* HC_ARCH_BITS == 32 */
3441
3442#if HC_ARCH_BITS == 64 || defined(RT_OS_DARWIN)
3443 case SUPPAGINGMODE_AMD64:
3444 case SUPPAGINGMODE_AMD64_GLOBAL:
3445 case SUPPAGINGMODE_AMD64_NX:
3446 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3447# ifdef VBOX_WITH_64_BITS_GUESTS
3448 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_AMD64; i++)
3449# else
3450 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
3451# endif
3452 {
3453 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
3454 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3455 }
3456 break;
3457#endif /* HC_ARCH_BITS == 64 || RT_OS_DARWIN */
3458
3459 default:
3460 AssertFailed();
3461 break;
3462 }
3463
3464 /* Extended paging (EPT) / Intel VT-x */
3465 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_REAL)];
3466 pModeData->uShwType = PGM_TYPE_EPT;
3467 pModeData->uGstType = PGM_TYPE_REAL;
3468 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3469 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3470 rc = PGM_BTH_NAME_EPT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3471
3472 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_PROT)];
3473 pModeData->uShwType = PGM_TYPE_EPT;
3474 pModeData->uGstType = PGM_TYPE_PROT;
3475 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3476 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3477 rc = PGM_BTH_NAME_EPT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3478
3479 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_32BIT)];
3480 pModeData->uShwType = PGM_TYPE_EPT;
3481 pModeData->uGstType = PGM_TYPE_32BIT;
3482 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3483 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3484 rc = PGM_BTH_NAME_EPT_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3485
3486 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_PAE)];
3487 pModeData->uShwType = PGM_TYPE_EPT;
3488 pModeData->uGstType = PGM_TYPE_PAE;
3489 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3490 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3491 rc = PGM_BTH_NAME_EPT_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3492
3493#ifdef VBOX_WITH_64_BITS_GUESTS
3494 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_AMD64)];
3495 pModeData->uShwType = PGM_TYPE_EPT;
3496 pModeData->uGstType = PGM_TYPE_AMD64;
3497 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3498 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3499 rc = PGM_BTH_NAME_EPT_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3500#endif
3501 return VINF_SUCCESS;
3502}
3503
3504
3505/**
3506 * Switch to different (or relocated in the relocate case) mode data.
3507 *
3508 * @param pVM The VM handle.
3509 * @param pVCpu The VMCPU to operate on.
3510 * @param enmShw The the shadow paging mode.
3511 * @param enmGst The the guest paging mode.
3512 */
3513static void pgmR3ModeDataSwitch(PVM pVM, PVMCPU pVCpu, PGMMODE enmShw, PGMMODE enmGst)
3514{
3515 PPGMMODEDATA pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndexByMode(enmShw, enmGst)];
3516
3517 Assert(pModeData->uGstType == pgmModeToType(enmGst));
3518 Assert(pModeData->uShwType == pgmModeToType(enmShw));
3519
3520 /* shadow */
3521 pVCpu->pgm.s.pfnR3ShwRelocate = pModeData->pfnR3ShwRelocate;
3522 pVCpu->pgm.s.pfnR3ShwExit = pModeData->pfnR3ShwExit;
3523 pVCpu->pgm.s.pfnR3ShwGetPage = pModeData->pfnR3ShwGetPage;
3524 Assert(pVCpu->pgm.s.pfnR3ShwGetPage);
3525 pVCpu->pgm.s.pfnR3ShwModifyPage = pModeData->pfnR3ShwModifyPage;
3526
3527 pVCpu->pgm.s.pfnRCShwGetPage = pModeData->pfnRCShwGetPage;
3528 pVCpu->pgm.s.pfnRCShwModifyPage = pModeData->pfnRCShwModifyPage;
3529
3530 pVCpu->pgm.s.pfnR0ShwGetPage = pModeData->pfnR0ShwGetPage;
3531 pVCpu->pgm.s.pfnR0ShwModifyPage = pModeData->pfnR0ShwModifyPage;
3532
3533
3534 /* guest */
3535 pVCpu->pgm.s.pfnR3GstRelocate = pModeData->pfnR3GstRelocate;
3536 pVCpu->pgm.s.pfnR3GstExit = pModeData->pfnR3GstExit;
3537 pVCpu->pgm.s.pfnR3GstGetPage = pModeData->pfnR3GstGetPage;
3538 Assert(pVCpu->pgm.s.pfnR3GstGetPage);
3539 pVCpu->pgm.s.pfnR3GstModifyPage = pModeData->pfnR3GstModifyPage;
3540 pVCpu->pgm.s.pfnR3GstGetPDE = pModeData->pfnR3GstGetPDE;
3541 pVCpu->pgm.s.pfnRCGstGetPage = pModeData->pfnRCGstGetPage;
3542 pVCpu->pgm.s.pfnRCGstModifyPage = pModeData->pfnRCGstModifyPage;
3543 pVCpu->pgm.s.pfnRCGstGetPDE = pModeData->pfnRCGstGetPDE;
3544 pVCpu->pgm.s.pfnR0GstGetPage = pModeData->pfnR0GstGetPage;
3545 pVCpu->pgm.s.pfnR0GstModifyPage = pModeData->pfnR0GstModifyPage;
3546 pVCpu->pgm.s.pfnR0GstGetPDE = pModeData->pfnR0GstGetPDE;
3547
3548 /* both */
3549 pVCpu->pgm.s.pfnR3BthRelocate = pModeData->pfnR3BthRelocate;
3550 pVCpu->pgm.s.pfnR3BthInvalidatePage = pModeData->pfnR3BthInvalidatePage;
3551 pVCpu->pgm.s.pfnR3BthSyncCR3 = pModeData->pfnR3BthSyncCR3;
3552 Assert(pVCpu->pgm.s.pfnR3BthSyncCR3);
3553 pVCpu->pgm.s.pfnR3BthSyncPage = pModeData->pfnR3BthSyncPage;
3554 pVCpu->pgm.s.pfnR3BthPrefetchPage = pModeData->pfnR3BthPrefetchPage;
3555 pVCpu->pgm.s.pfnR3BthVerifyAccessSyncPage = pModeData->pfnR3BthVerifyAccessSyncPage;
3556#ifdef VBOX_STRICT
3557 pVCpu->pgm.s.pfnR3BthAssertCR3 = pModeData->pfnR3BthAssertCR3;
3558#endif
3559 pVCpu->pgm.s.pfnR3BthMapCR3 = pModeData->pfnR3BthMapCR3;
3560 pVCpu->pgm.s.pfnR3BthUnmapCR3 = pModeData->pfnR3BthUnmapCR3;
3561
3562 pVCpu->pgm.s.pfnRCBthTrap0eHandler = pModeData->pfnRCBthTrap0eHandler;
3563 pVCpu->pgm.s.pfnRCBthInvalidatePage = pModeData->pfnRCBthInvalidatePage;
3564 pVCpu->pgm.s.pfnRCBthSyncCR3 = pModeData->pfnRCBthSyncCR3;
3565 pVCpu->pgm.s.pfnRCBthSyncPage = pModeData->pfnRCBthSyncPage;
3566 pVCpu->pgm.s.pfnRCBthPrefetchPage = pModeData->pfnRCBthPrefetchPage;
3567 pVCpu->pgm.s.pfnRCBthVerifyAccessSyncPage = pModeData->pfnRCBthVerifyAccessSyncPage;
3568#ifdef VBOX_STRICT
3569 pVCpu->pgm.s.pfnRCBthAssertCR3 = pModeData->pfnRCBthAssertCR3;
3570#endif
3571 pVCpu->pgm.s.pfnRCBthMapCR3 = pModeData->pfnRCBthMapCR3;
3572 pVCpu->pgm.s.pfnRCBthUnmapCR3 = pModeData->pfnRCBthUnmapCR3;
3573
3574 pVCpu->pgm.s.pfnR0BthTrap0eHandler = pModeData->pfnR0BthTrap0eHandler;
3575 pVCpu->pgm.s.pfnR0BthInvalidatePage = pModeData->pfnR0BthInvalidatePage;
3576 pVCpu->pgm.s.pfnR0BthSyncCR3 = pModeData->pfnR0BthSyncCR3;
3577 pVCpu->pgm.s.pfnR0BthSyncPage = pModeData->pfnR0BthSyncPage;
3578 pVCpu->pgm.s.pfnR0BthPrefetchPage = pModeData->pfnR0BthPrefetchPage;
3579 pVCpu->pgm.s.pfnR0BthVerifyAccessSyncPage = pModeData->pfnR0BthVerifyAccessSyncPage;
3580#ifdef VBOX_STRICT
3581 pVCpu->pgm.s.pfnR0BthAssertCR3 = pModeData->pfnR0BthAssertCR3;
3582#endif
3583 pVCpu->pgm.s.pfnR0BthMapCR3 = pModeData->pfnR0BthMapCR3;
3584 pVCpu->pgm.s.pfnR0BthUnmapCR3 = pModeData->pfnR0BthUnmapCR3;
3585}
3586
3587
3588/**
3589 * Calculates the shadow paging mode.
3590 *
3591 * @returns The shadow paging mode.
3592 * @param pVM VM handle.
3593 * @param enmGuestMode The guest mode.
3594 * @param enmHostMode The host mode.
3595 * @param enmShadowMode The current shadow mode.
3596 * @param penmSwitcher Where to store the switcher to use.
3597 * VMMSWITCHER_INVALID means no change.
3598 */
3599static PGMMODE pgmR3CalcShadowMode(PVM pVM, PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher)
3600{
3601 VMMSWITCHER enmSwitcher = VMMSWITCHER_INVALID;
3602 switch (enmGuestMode)
3603 {
3604 /*
3605 * When switching to real or protected mode we don't change
3606 * anything since it's likely that we'll switch back pretty soon.
3607 *
3608 * During pgmR3InitPaging we'll end up here with PGMMODE_INVALID
3609 * and is supposed to determine which shadow paging and switcher to
3610 * use during init.
3611 */
3612 case PGMMODE_REAL:
3613 case PGMMODE_PROTECTED:
3614 if ( enmShadowMode != PGMMODE_INVALID
3615 && !HWACCMIsEnabled(pVM) /* always switch in hwaccm mode! */)
3616 break; /* (no change) */
3617
3618 switch (enmHostMode)
3619 {
3620 case SUPPAGINGMODE_32_BIT:
3621 case SUPPAGINGMODE_32_BIT_GLOBAL:
3622 enmShadowMode = PGMMODE_32_BIT;
3623 enmSwitcher = VMMSWITCHER_32_TO_32;
3624 break;
3625
3626 case SUPPAGINGMODE_PAE:
3627 case SUPPAGINGMODE_PAE_NX:
3628 case SUPPAGINGMODE_PAE_GLOBAL:
3629 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3630 enmShadowMode = PGMMODE_PAE;
3631 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3632#ifdef DEBUG_bird
3633 if (RTEnvExist("VBOX_32BIT"))
3634 {
3635 enmShadowMode = PGMMODE_32_BIT;
3636 enmSwitcher = VMMSWITCHER_PAE_TO_32;
3637 }
3638#endif
3639 break;
3640
3641 case SUPPAGINGMODE_AMD64:
3642 case SUPPAGINGMODE_AMD64_GLOBAL:
3643 case SUPPAGINGMODE_AMD64_NX:
3644 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3645 enmShadowMode = PGMMODE_PAE;
3646 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3647#ifdef DEBUG_bird
3648 if (RTEnvExist("VBOX_32BIT"))
3649 {
3650 enmShadowMode = PGMMODE_32_BIT;
3651 enmSwitcher = VMMSWITCHER_AMD64_TO_32;
3652 }
3653#endif
3654 break;
3655
3656 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3657 }
3658 break;
3659
3660 case PGMMODE_32_BIT:
3661 switch (enmHostMode)
3662 {
3663 case SUPPAGINGMODE_32_BIT:
3664 case SUPPAGINGMODE_32_BIT_GLOBAL:
3665 enmShadowMode = PGMMODE_32_BIT;
3666 enmSwitcher = VMMSWITCHER_32_TO_32;
3667 break;
3668
3669 case SUPPAGINGMODE_PAE:
3670 case SUPPAGINGMODE_PAE_NX:
3671 case SUPPAGINGMODE_PAE_GLOBAL:
3672 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3673 enmShadowMode = PGMMODE_PAE;
3674 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3675#ifdef DEBUG_bird
3676 if (RTEnvExist("VBOX_32BIT"))
3677 {
3678 enmShadowMode = PGMMODE_32_BIT;
3679 enmSwitcher = VMMSWITCHER_PAE_TO_32;
3680 }
3681#endif
3682 break;
3683
3684 case SUPPAGINGMODE_AMD64:
3685 case SUPPAGINGMODE_AMD64_GLOBAL:
3686 case SUPPAGINGMODE_AMD64_NX:
3687 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3688 enmShadowMode = PGMMODE_PAE;
3689 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3690#ifdef DEBUG_bird
3691 if (RTEnvExist("VBOX_32BIT"))
3692 {
3693 enmShadowMode = PGMMODE_32_BIT;
3694 enmSwitcher = VMMSWITCHER_AMD64_TO_32;
3695 }
3696#endif
3697 break;
3698
3699 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3700 }
3701 break;
3702
3703 case PGMMODE_PAE:
3704 case PGMMODE_PAE_NX: /** @todo This might require more switchers and guest+both modes. */
3705 switch (enmHostMode)
3706 {
3707 case SUPPAGINGMODE_32_BIT:
3708 case SUPPAGINGMODE_32_BIT_GLOBAL:
3709 enmShadowMode = PGMMODE_PAE;
3710 enmSwitcher = VMMSWITCHER_32_TO_PAE;
3711 break;
3712
3713 case SUPPAGINGMODE_PAE:
3714 case SUPPAGINGMODE_PAE_NX:
3715 case SUPPAGINGMODE_PAE_GLOBAL:
3716 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3717 enmShadowMode = PGMMODE_PAE;
3718 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3719 break;
3720
3721 case SUPPAGINGMODE_AMD64:
3722 case SUPPAGINGMODE_AMD64_GLOBAL:
3723 case SUPPAGINGMODE_AMD64_NX:
3724 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3725 enmShadowMode = PGMMODE_PAE;
3726 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3727 break;
3728
3729 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3730 }
3731 break;
3732
3733 case PGMMODE_AMD64:
3734 case PGMMODE_AMD64_NX:
3735 switch (enmHostMode)
3736 {
3737 case SUPPAGINGMODE_32_BIT:
3738 case SUPPAGINGMODE_32_BIT_GLOBAL:
3739 enmShadowMode = PGMMODE_AMD64;
3740 enmSwitcher = VMMSWITCHER_32_TO_AMD64;
3741 break;
3742
3743 case SUPPAGINGMODE_PAE:
3744 case SUPPAGINGMODE_PAE_NX:
3745 case SUPPAGINGMODE_PAE_GLOBAL:
3746 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3747 enmShadowMode = PGMMODE_AMD64;
3748 enmSwitcher = VMMSWITCHER_PAE_TO_AMD64;
3749 break;
3750
3751 case SUPPAGINGMODE_AMD64:
3752 case SUPPAGINGMODE_AMD64_GLOBAL:
3753 case SUPPAGINGMODE_AMD64_NX:
3754 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3755 enmShadowMode = PGMMODE_AMD64;
3756 enmSwitcher = VMMSWITCHER_AMD64_TO_AMD64;
3757 break;
3758
3759 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3760 }
3761 break;
3762
3763
3764 default:
3765 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
3766 return PGMMODE_INVALID;
3767 }
3768 /* Override the shadow mode is nested paging is active. */
3769 if (HWACCMIsNestedPagingActive(pVM))
3770 enmShadowMode = HWACCMGetShwPagingMode(pVM);
3771
3772 *penmSwitcher = enmSwitcher;
3773 return enmShadowMode;
3774}
3775
3776
3777/**
3778 * Performs the actual mode change.
3779 * This is called by PGMChangeMode and pgmR3InitPaging().
3780 *
3781 * @returns VBox status code. May suspend or power off the VM on error, but this
3782 * will trigger using FFs and not status codes.
3783 *
3784 * @param pVM VM handle.
3785 * @param pVCpu The VMCPU to operate on.
3786 * @param enmGuestMode The new guest mode. This is assumed to be different from
3787 * the current mode.
3788 */
3789VMMR3DECL(int) PGMR3ChangeMode(PVM pVM, PVMCPU pVCpu, PGMMODE enmGuestMode)
3790{
3791 Log(("PGMR3ChangeMode: Guest mode: %s -> %s\n", PGMGetModeName(pVCpu->pgm.s.enmGuestMode), PGMGetModeName(enmGuestMode)));
3792 STAM_REL_COUNTER_INC(&pVCpu->pgm.s.cGuestModeChanges);
3793
3794 /*
3795 * Calc the shadow mode and switcher.
3796 */
3797 VMMSWITCHER enmSwitcher;
3798 PGMMODE enmShadowMode = pgmR3CalcShadowMode(pVM, enmGuestMode, pVM->pgm.s.enmHostMode, pVCpu->pgm.s.enmShadowMode, &enmSwitcher);
3799 if (enmSwitcher != VMMSWITCHER_INVALID)
3800 {
3801 /*
3802 * Select new switcher.
3803 */
3804 int rc = VMMR3SelectSwitcher(pVM, enmSwitcher);
3805 if (RT_FAILURE(rc))
3806 {
3807 AssertReleaseMsgFailed(("VMMR3SelectSwitcher(%d) -> %Rrc\n", enmSwitcher, rc));
3808 return rc;
3809 }
3810 }
3811
3812 /*
3813 * Exit old mode(s).
3814 */
3815 /* shadow */
3816 if (enmShadowMode != pVCpu->pgm.s.enmShadowMode)
3817 {
3818 LogFlow(("PGMR3ChangeMode: Shadow mode: %s -> %s\n", PGMGetModeName(pVCpu->pgm.s.enmShadowMode), PGMGetModeName(enmShadowMode)));
3819 if (PGM_SHW_PFN(Exit, pVCpu))
3820 {
3821 int rc = PGM_SHW_PFN(Exit, pVCpu)(pVCpu);
3822 if (RT_FAILURE(rc))
3823 {
3824 AssertMsgFailed(("Exit failed for shadow mode %d: %Rrc\n", pVCpu->pgm.s.enmShadowMode, rc));
3825 return rc;
3826 }
3827 }
3828
3829 }
3830 else
3831 LogFlow(("PGMR3ChangeMode: Shadow mode remains: %s\n", PGMGetModeName(pVCpu->pgm.s.enmShadowMode)));
3832
3833 /* guest */
3834 if (PGM_GST_PFN(Exit, pVCpu))
3835 {
3836 int rc = PGM_GST_PFN(Exit, pVCpu)(pVCpu);
3837 if (RT_FAILURE(rc))
3838 {
3839 AssertMsgFailed(("Exit failed for guest mode %d: %Rrc\n", pVCpu->pgm.s.enmGuestMode, rc));
3840 return rc;
3841 }
3842 }
3843
3844 /*
3845 * Load new paging mode data.
3846 */
3847 pgmR3ModeDataSwitch(pVM, pVCpu, enmShadowMode, enmGuestMode);
3848
3849 /*
3850 * Enter new shadow mode (if changed).
3851 */
3852 if (enmShadowMode != pVCpu->pgm.s.enmShadowMode)
3853 {
3854 int rc;
3855 pVCpu->pgm.s.enmShadowMode = enmShadowMode;
3856 switch (enmShadowMode)
3857 {
3858 case PGMMODE_32_BIT:
3859 rc = PGM_SHW_NAME_32BIT(Enter)(pVCpu);
3860 break;
3861 case PGMMODE_PAE:
3862 case PGMMODE_PAE_NX:
3863 rc = PGM_SHW_NAME_PAE(Enter)(pVCpu);
3864 break;
3865 case PGMMODE_AMD64:
3866 case PGMMODE_AMD64_NX:
3867 rc = PGM_SHW_NAME_AMD64(Enter)(pVCpu);
3868 break;
3869 case PGMMODE_NESTED:
3870 rc = PGM_SHW_NAME_NESTED(Enter)(pVCpu);
3871 break;
3872 case PGMMODE_EPT:
3873 rc = PGM_SHW_NAME_EPT(Enter)(pVCpu);
3874 break;
3875 case PGMMODE_REAL:
3876 case PGMMODE_PROTECTED:
3877 default:
3878 AssertReleaseMsgFailed(("enmShadowMode=%d\n", enmShadowMode));
3879 return VERR_INTERNAL_ERROR;
3880 }
3881 if (RT_FAILURE(rc))
3882 {
3883 AssertReleaseMsgFailed(("Entering enmShadowMode=%d failed: %Rrc\n", enmShadowMode, rc));
3884 pVCpu->pgm.s.enmShadowMode = PGMMODE_INVALID;
3885 return rc;
3886 }
3887 }
3888
3889 /*
3890 * Always flag the necessary updates
3891 */
3892 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
3893
3894 /*
3895 * Enter the new guest and shadow+guest modes.
3896 */
3897 int rc = -1;
3898 int rc2 = -1;
3899 RTGCPHYS GCPhysCR3 = NIL_RTGCPHYS;
3900 pVCpu->pgm.s.enmGuestMode = enmGuestMode;
3901 switch (enmGuestMode)
3902 {
3903 case PGMMODE_REAL:
3904 rc = PGM_GST_NAME_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3905 switch (pVCpu->pgm.s.enmShadowMode)
3906 {
3907 case PGMMODE_32_BIT:
3908 rc2 = PGM_BTH_NAME_32BIT_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3909 break;
3910 case PGMMODE_PAE:
3911 case PGMMODE_PAE_NX:
3912 rc2 = PGM_BTH_NAME_PAE_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3913 break;
3914 case PGMMODE_NESTED:
3915 rc2 = PGM_BTH_NAME_NESTED_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3916 break;
3917 case PGMMODE_EPT:
3918 rc2 = PGM_BTH_NAME_EPT_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3919 break;
3920 case PGMMODE_AMD64:
3921 case PGMMODE_AMD64_NX:
3922 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3923 default: AssertFailed(); break;
3924 }
3925 break;
3926
3927 case PGMMODE_PROTECTED:
3928 rc = PGM_GST_NAME_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3929 switch (pVCpu->pgm.s.enmShadowMode)
3930 {
3931 case PGMMODE_32_BIT:
3932 rc2 = PGM_BTH_NAME_32BIT_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3933 break;
3934 case PGMMODE_PAE:
3935 case PGMMODE_PAE_NX:
3936 rc2 = PGM_BTH_NAME_PAE_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3937 break;
3938 case PGMMODE_NESTED:
3939 rc2 = PGM_BTH_NAME_NESTED_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3940 break;
3941 case PGMMODE_EPT:
3942 rc2 = PGM_BTH_NAME_EPT_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3943 break;
3944 case PGMMODE_AMD64:
3945 case PGMMODE_AMD64_NX:
3946 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3947 default: AssertFailed(); break;
3948 }
3949 break;
3950
3951 case PGMMODE_32_BIT:
3952 GCPhysCR3 = CPUMGetGuestCR3(pVCpu) & X86_CR3_PAGE_MASK;
3953 rc = PGM_GST_NAME_32BIT(Enter)(pVCpu, GCPhysCR3);
3954 switch (pVCpu->pgm.s.enmShadowMode)
3955 {
3956 case PGMMODE_32_BIT:
3957 rc2 = PGM_BTH_NAME_32BIT_32BIT(Enter)(pVCpu, GCPhysCR3);
3958 break;
3959 case PGMMODE_PAE:
3960 case PGMMODE_PAE_NX:
3961 rc2 = PGM_BTH_NAME_PAE_32BIT(Enter)(pVCpu, GCPhysCR3);
3962 break;
3963 case PGMMODE_NESTED:
3964 rc2 = PGM_BTH_NAME_NESTED_32BIT(Enter)(pVCpu, GCPhysCR3);
3965 break;
3966 case PGMMODE_EPT:
3967 rc2 = PGM_BTH_NAME_EPT_32BIT(Enter)(pVCpu, GCPhysCR3);
3968 break;
3969 case PGMMODE_AMD64:
3970 case PGMMODE_AMD64_NX:
3971 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3972 default: AssertFailed(); break;
3973 }
3974 break;
3975
3976 case PGMMODE_PAE_NX:
3977 case PGMMODE_PAE:
3978 {
3979 uint32_t u32Dummy, u32Features;
3980
3981 CPUMGetGuestCpuId(pVCpu, 1, &u32Dummy, &u32Dummy, &u32Dummy, &u32Features);
3982 if (!(u32Features & X86_CPUID_FEATURE_EDX_PAE))
3983 return VMSetRuntimeError(pVM, VMSETRTERR_FLAGS_FATAL, "PAEmode",
3984 N_("The guest is trying to switch to the PAE mode which is currently disabled by default in VirtualBox. PAE support can be enabled using the VM settings (General/Advanced)"));
3985
3986 GCPhysCR3 = CPUMGetGuestCR3(pVCpu) & X86_CR3_PAE_PAGE_MASK;
3987 rc = PGM_GST_NAME_PAE(Enter)(pVCpu, GCPhysCR3);
3988 switch (pVCpu->pgm.s.enmShadowMode)
3989 {
3990 case PGMMODE_PAE:
3991 case PGMMODE_PAE_NX:
3992 rc2 = PGM_BTH_NAME_PAE_PAE(Enter)(pVCpu, GCPhysCR3);
3993 break;
3994 case PGMMODE_NESTED:
3995 rc2 = PGM_BTH_NAME_NESTED_PAE(Enter)(pVCpu, GCPhysCR3);
3996 break;
3997 case PGMMODE_EPT:
3998 rc2 = PGM_BTH_NAME_EPT_PAE(Enter)(pVCpu, GCPhysCR3);
3999 break;
4000 case PGMMODE_32_BIT:
4001 case PGMMODE_AMD64:
4002 case PGMMODE_AMD64_NX:
4003 AssertMsgFailed(("Should use PAE shadow mode!\n"));
4004 default: AssertFailed(); break;
4005 }
4006 break;
4007 }
4008
4009#ifdef VBOX_WITH_64_BITS_GUESTS
4010 case PGMMODE_AMD64_NX:
4011 case PGMMODE_AMD64:
4012 GCPhysCR3 = CPUMGetGuestCR3(pVCpu) & UINT64_C(0xfffffffffffff000); /** @todo define this mask! */
4013 rc = PGM_GST_NAME_AMD64(Enter)(pVCpu, GCPhysCR3);
4014 switch (pVCpu->pgm.s.enmShadowMode)
4015 {
4016 case PGMMODE_AMD64:
4017 case PGMMODE_AMD64_NX:
4018 rc2 = PGM_BTH_NAME_AMD64_AMD64(Enter)(pVCpu, GCPhysCR3);
4019 break;
4020 case PGMMODE_NESTED:
4021 rc2 = PGM_BTH_NAME_NESTED_AMD64(Enter)(pVCpu, GCPhysCR3);
4022 break;
4023 case PGMMODE_EPT:
4024 rc2 = PGM_BTH_NAME_EPT_AMD64(Enter)(pVCpu, GCPhysCR3);
4025 break;
4026 case PGMMODE_32_BIT:
4027 case PGMMODE_PAE:
4028 case PGMMODE_PAE_NX:
4029 AssertMsgFailed(("Should use AMD64 shadow mode!\n"));
4030 default: AssertFailed(); break;
4031 }
4032 break;
4033#endif
4034
4035 default:
4036 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
4037 rc = VERR_NOT_IMPLEMENTED;
4038 break;
4039 }
4040
4041 /* status codes. */
4042 AssertRC(rc);
4043 AssertRC(rc2);
4044 if (RT_SUCCESS(rc))
4045 {
4046 rc = rc2;
4047 if (RT_SUCCESS(rc)) /* no informational status codes. */
4048 rc = VINF_SUCCESS;
4049 }
4050
4051 /* Notify HWACCM as well. */
4052 HWACCMR3PagingModeChanged(pVM, pVCpu, pVCpu->pgm.s.enmShadowMode, pVCpu->pgm.s.enmGuestMode);
4053 return rc;
4054}
4055
4056
4057/**
4058 * Called by pgmPoolFlushAllInt prior to flushing the pool.
4059 *
4060 * @returns VBox status code, fully asserted.
4061 * @param pVM The VM handle.
4062 * @param pVCpu The VMCPU to operate on.
4063 */
4064int pgmR3ExitShadowModeBeforePoolFlush(PVM pVM, PVMCPU pVCpu)
4065{
4066 /** @todo Need to synchronize this across all VCPUs! */
4067
4068 /* Unmap the old CR3 value before flushing everything. */
4069 int rc = PGM_BTH_PFN(UnmapCR3, pVCpu)(pVCpu);
4070 AssertRC(rc);
4071
4072 /* Exit the current shadow paging mode as well; nested paging and EPT use a root CR3 which will get flushed here. */
4073 rc = PGM_SHW_PFN(Exit, pVCpu)(pVCpu);
4074 AssertRC(rc);
4075 Assert(pVCpu->pgm.s.pShwPageCR3R3 == NULL);
4076 return rc;
4077}
4078
4079
4080/**
4081 * Called by pgmPoolFlushAllInt after flushing the pool.
4082 *
4083 * @returns VBox status code, fully asserted.
4084 * @param pVM The VM handle.
4085 * @param pVCpu The VMCPU to operate on.
4086 */
4087int pgmR3ReEnterShadowModeAfterPoolFlush(PVM pVM, PVMCPU pVCpu)
4088{
4089 pVCpu->pgm.s.enmShadowMode = PGMMODE_INVALID;
4090 int rc = PGMR3ChangeMode(pVM, pVCpu, PGMGetGuestMode(pVCpu));
4091 Assert(VMCPU_FF_ISSET(pVCpu, VMCPU_FF_PGM_SYNC_CR3));
4092 AssertRCReturn(rc, rc);
4093 AssertRCSuccessReturn(rc, VERR_IPE_UNEXPECTED_INFO_STATUS);
4094
4095 Assert(pVCpu->pgm.s.pShwPageCR3R3 != NULL);
4096 AssertMsg( pVCpu->pgm.s.enmShadowMode >= PGMMODE_NESTED
4097 || CPUMGetHyperCR3(pVCpu) == PGMGetHyperCR3(pVCpu),
4098 ("%RHp != %RHp %s\n", (RTHCPHYS)CPUMGetHyperCR3(pVCpu), PGMGetHyperCR3(pVCpu), PGMGetModeName(pVCpu->pgm.s.enmShadowMode)));
4099 return rc;
4100}
4101
4102
4103/**
4104 * Dumps a PAE shadow page table.
4105 *
4106 * @returns VBox status code (VINF_SUCCESS).
4107 * @param pVM The VM handle.
4108 * @param pPT Pointer to the page table.
4109 * @param u64Address The virtual address of the page table starts.
4110 * @param fLongMode Set if this a long mode table; clear if it's a legacy mode table.
4111 * @param cMaxDepth The maxium depth.
4112 * @param pHlp Pointer to the output functions.
4113 */
4114static int pgmR3DumpHierarchyHCPaePT(PVM pVM, PX86PTPAE pPT, uint64_t u64Address, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
4115{
4116 for (unsigned i = 0; i < RT_ELEMENTS(pPT->a); i++)
4117 {
4118 X86PTEPAE Pte = pPT->a[i];
4119 if (Pte.n.u1Present)
4120 {
4121 pHlp->pfnPrintf(pHlp,
4122 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
4123 ? "%016llx 3 | P %c %c %c %c %c %s %s %s %s 4K %c%c%c %016llx\n"
4124 : "%08llx 2 | P %c %c %c %c %c %s %s %s %s 4K %c%c%c %016llx\n",
4125 u64Address + ((uint64_t)i << X86_PT_PAE_SHIFT),
4126 Pte.n.u1Write ? 'W' : 'R',
4127 Pte.n.u1User ? 'U' : 'S',
4128 Pte.n.u1Accessed ? 'A' : '-',
4129 Pte.n.u1Dirty ? 'D' : '-',
4130 Pte.n.u1Global ? 'G' : '-',
4131 Pte.n.u1WriteThru ? "WT" : "--",
4132 Pte.n.u1CacheDisable? "CD" : "--",
4133 Pte.n.u1PAT ? "AT" : "--",
4134 Pte.n.u1NoExecute ? "NX" : "--",
4135 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
4136 Pte.u & RT_BIT(10) ? '1' : '0',
4137 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED? 'v' : '-',
4138 Pte.u & X86_PTE_PAE_PG_MASK);
4139 }
4140 }
4141 return VINF_SUCCESS;
4142}
4143
4144
4145/**
4146 * Dumps a PAE shadow page directory table.
4147 *
4148 * @returns VBox status code (VINF_SUCCESS).
4149 * @param pVM The VM handle.
4150 * @param HCPhys The physical address of the page directory table.
4151 * @param u64Address The virtual address of the page table starts.
4152 * @param cr4 The CR4, PSE is currently used.
4153 * @param fLongMode Set if this a long mode table; clear if it's a legacy mode table.
4154 * @param cMaxDepth The maxium depth.
4155 * @param pHlp Pointer to the output functions.
4156 */
4157static int pgmR3DumpHierarchyHCPaePD(PVM pVM, RTHCPHYS HCPhys, uint64_t u64Address, uint32_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
4158{
4159 PX86PDPAE pPD = (PX86PDPAE)MMPagePhys2Page(pVM, HCPhys);
4160 if (!pPD)
4161 {
4162 pHlp->pfnPrintf(pHlp, "%0*llx error! Page directory at HCPhys=%RHp was not found in the page pool!\n",
4163 fLongMode ? 16 : 8, u64Address, HCPhys);
4164 return VERR_INVALID_PARAMETER;
4165 }
4166 const bool fBigPagesSupported = fLongMode || !!(cr4 & X86_CR4_PSE);
4167
4168 int rc = VINF_SUCCESS;
4169 for (unsigned i = 0; i < RT_ELEMENTS(pPD->a); i++)
4170 {
4171 X86PDEPAE Pde = pPD->a[i];
4172 if (Pde.n.u1Present)
4173 {
4174 if (fBigPagesSupported && Pde.b.u1Size)
4175 pHlp->pfnPrintf(pHlp,
4176 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
4177 ? "%016llx 2 | P %c %c %c %c %c %s %s %s %s 4M %c%c%c %016llx\n"
4178 : "%08llx 1 | P %c %c %c %c %c %s %s %s %s 4M %c%c%c %016llx\n",
4179 u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT),
4180 Pde.b.u1Write ? 'W' : 'R',
4181 Pde.b.u1User ? 'U' : 'S',
4182 Pde.b.u1Accessed ? 'A' : '-',
4183 Pde.b.u1Dirty ? 'D' : '-',
4184 Pde.b.u1Global ? 'G' : '-',
4185 Pde.b.u1WriteThru ? "WT" : "--",
4186 Pde.b.u1CacheDisable? "CD" : "--",
4187 Pde.b.u1PAT ? "AT" : "--",
4188 Pde.b.u1NoExecute ? "NX" : "--",
4189 Pde.u & RT_BIT_64(9) ? '1' : '0',
4190 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
4191 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
4192 Pde.u & X86_PDE_PAE_PG_MASK);
4193 else
4194 {
4195 pHlp->pfnPrintf(pHlp,
4196 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
4197 ? "%016llx 2 | P %c %c %c %c %c %s %s .. %s 4K %c%c%c %016llx\n"
4198 : "%08llx 1 | P %c %c %c %c %c %s %s .. %s 4K %c%c%c %016llx\n",
4199 u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT),
4200 Pde.n.u1Write ? 'W' : 'R',
4201 Pde.n.u1User ? 'U' : 'S',
4202 Pde.n.u1Accessed ? 'A' : '-',
4203 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
4204 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
4205 Pde.n.u1WriteThru ? "WT" : "--",
4206 Pde.n.u1CacheDisable? "CD" : "--",
4207 Pde.n.u1NoExecute ? "NX" : "--",
4208 Pde.u & RT_BIT_64(9) ? '1' : '0',
4209 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
4210 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
4211 Pde.u & X86_PDE_PAE_PG_MASK);
4212 if (cMaxDepth >= 1)
4213 {
4214 /** @todo what about using the page pool for mapping PTs? */
4215 uint64_t u64AddressPT = u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT);
4216 RTHCPHYS HCPhysPT = Pde.u & X86_PDE_PAE_PG_MASK;
4217 PX86PTPAE pPT = NULL;
4218 if (!(Pde.u & PGM_PDFLAGS_MAPPING))
4219 pPT = (PX86PTPAE)MMPagePhys2Page(pVM, HCPhysPT);
4220 else
4221 {
4222 for (PPGMMAPPING pMap = pVM->pgm.s.pMappingsR3; pMap; pMap = pMap->pNextR3)
4223 {
4224 uint64_t off = u64AddressPT - pMap->GCPtr;
4225 if (off < pMap->cb)
4226 {
4227 const int iPDE = (uint32_t)(off >> X86_PD_SHIFT);
4228 const int iSub = (int)((off >> X86_PD_PAE_SHIFT) & 1); /* MSC is a pain sometimes */
4229 if ((iSub ? pMap->aPTs[iPDE].HCPhysPaePT1 : pMap->aPTs[iPDE].HCPhysPaePT0) != HCPhysPT)
4230 pHlp->pfnPrintf(pHlp, "%0*llx error! Mapping error! PT %d has HCPhysPT=%RHp not %RHp is in the PD.\n",
4231 fLongMode ? 16 : 8, u64AddressPT, iPDE,
4232 iSub ? pMap->aPTs[iPDE].HCPhysPaePT1 : pMap->aPTs[iPDE].HCPhysPaePT0, HCPhysPT);
4233 pPT = &pMap->aPTs[iPDE].paPaePTsR3[iSub];
4234 }
4235 }
4236 }
4237 int rc2 = VERR_INVALID_PARAMETER;
4238 if (pPT)
4239 rc2 = pgmR3DumpHierarchyHCPaePT(pVM, pPT, u64AddressPT, fLongMode, cMaxDepth - 1, pHlp);
4240 else
4241 pHlp->pfnPrintf(pHlp, "%0*llx error! Page table at HCPhys=%RHp was not found in the page pool!\n",
4242 fLongMode ? 16 : 8, u64AddressPT, HCPhysPT);
4243 if (rc2 < rc && RT_SUCCESS(rc))
4244 rc = rc2;
4245 }
4246 }
4247 }
4248 }
4249 return rc;
4250}
4251
4252
4253/**
4254 * Dumps a PAE shadow page directory pointer table.
4255 *
4256 * @returns VBox status code (VINF_SUCCESS).
4257 * @param pVM The VM handle.
4258 * @param HCPhys The physical address of the page directory pointer table.
4259 * @param u64Address The virtual address of the page table starts.
4260 * @param cr4 The CR4, PSE is currently used.
4261 * @param fLongMode Set if this a long mode table; clear if it's a legacy mode table.
4262 * @param cMaxDepth The maxium depth.
4263 * @param pHlp Pointer to the output functions.
4264 */
4265static int pgmR3DumpHierarchyHCPaePDPT(PVM pVM, RTHCPHYS HCPhys, uint64_t u64Address, uint32_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
4266{
4267 PX86PDPT pPDPT = (PX86PDPT)MMPagePhys2Page(pVM, HCPhys);
4268 if (!pPDPT)
4269 {
4270 pHlp->pfnPrintf(pHlp, "%0*llx error! Page directory pointer table at HCPhys=%RHp was not found in the page pool!\n",
4271 fLongMode ? 16 : 8, u64Address, HCPhys);
4272 return VERR_INVALID_PARAMETER;
4273 }
4274
4275 int rc = VINF_SUCCESS;
4276 const unsigned c = fLongMode ? RT_ELEMENTS(pPDPT->a) : X86_PG_PAE_PDPE_ENTRIES;
4277 for (unsigned i = 0; i < c; i++)
4278 {
4279 X86PDPE Pdpe = pPDPT->a[i];
4280 if (Pdpe.n.u1Present)
4281 {
4282 if (fLongMode)
4283 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a p ? */
4284 "%016llx 1 | P %c %c %c %c %c %s %s %s %s .. %c%c%c %016llx\n",
4285 u64Address + ((uint64_t)i << X86_PDPT_SHIFT),
4286 Pdpe.lm.u1Write ? 'W' : 'R',
4287 Pdpe.lm.u1User ? 'U' : 'S',
4288 Pdpe.lm.u1Accessed ? 'A' : '-',
4289 Pdpe.lm.u3Reserved & 1? '?' : '.', /* ignored */
4290 Pdpe.lm.u3Reserved & 4? '!' : '.', /* mbz */
4291 Pdpe.lm.u1WriteThru ? "WT" : "--",
4292 Pdpe.lm.u1CacheDisable? "CD" : "--",
4293 Pdpe.lm.u3Reserved & 2? "!" : "..",/* mbz */
4294 Pdpe.lm.u1NoExecute ? "NX" : "--",
4295 Pdpe.u & RT_BIT(9) ? '1' : '0',
4296 Pdpe.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
4297 Pdpe.u & RT_BIT(11) ? '1' : '0',
4298 Pdpe.u & X86_PDPE_PG_MASK);
4299 else
4300 pHlp->pfnPrintf(pHlp, /*P G WT CD AT NX 4M a p ? */
4301 "%08x 0 | P %c %s %s %s %s .. %c%c%c %016llx\n",
4302 i << X86_PDPT_SHIFT,
4303 Pdpe.n.u4Reserved & 1? '!' : '.', /* mbz */
4304 Pdpe.n.u4Reserved & 4? '!' : '.', /* mbz */
4305 Pdpe.n.u1WriteThru ? "WT" : "--",
4306 Pdpe.n.u1CacheDisable? "CD" : "--",
4307 Pdpe.n.u4Reserved & 2? "!" : "..",/* mbz */
4308 Pdpe.u & RT_BIT(9) ? '1' : '0',
4309 Pdpe.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
4310 Pdpe.u & RT_BIT(11) ? '1' : '0',
4311 Pdpe.u & X86_PDPE_PG_MASK);
4312 if (cMaxDepth >= 1)
4313 {
4314 int rc2 = pgmR3DumpHierarchyHCPaePD(pVM, Pdpe.u & X86_PDPE_PG_MASK, u64Address + ((uint64_t)i << X86_PDPT_SHIFT),
4315 cr4, fLongMode, cMaxDepth - 1, pHlp);
4316 if (rc2 < rc && RT_SUCCESS(rc))
4317 rc = rc2;
4318 }
4319 }
4320 }
4321 return rc;
4322}
4323
4324
4325/**
4326 * Dumps a 32-bit shadow page table.
4327 *
4328 * @returns VBox status code (VINF_SUCCESS).
4329 * @param pVM The VM handle.
4330 * @param HCPhys The physical address of the table.
4331 * @param cr4 The CR4, PSE is currently used.
4332 * @param cMaxDepth The maxium depth.
4333 * @param pHlp Pointer to the output functions.
4334 */
4335static int pgmR3DumpHierarchyHcPaePML4(PVM pVM, RTHCPHYS HCPhys, uint32_t cr4, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
4336{
4337 PX86PML4 pPML4 = (PX86PML4)MMPagePhys2Page(pVM, HCPhys);
4338 if (!pPML4)
4339 {
4340 pHlp->pfnPrintf(pHlp, "Page map level 4 at HCPhys=%RHp was not found in the page pool!\n", HCPhys);
4341 return VERR_INVALID_PARAMETER;
4342 }
4343
4344 int rc = VINF_SUCCESS;
4345 for (unsigned i = 0; i < RT_ELEMENTS(pPML4->a); i++)
4346 {
4347 X86PML4E Pml4e = pPML4->a[i];
4348 if (Pml4e.n.u1Present)
4349 {
4350 uint64_t u64Address = ((uint64_t)i << X86_PML4_SHIFT) | (((uint64_t)i >> (X86_PML4_SHIFT - X86_PDPT_SHIFT - 1)) * 0xffff000000000000ULL);
4351 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a p ? */
4352 "%016llx 0 | P %c %c %c %c %c %s %s %s %s .. %c%c%c %016llx\n",
4353 u64Address,
4354 Pml4e.n.u1Write ? 'W' : 'R',
4355 Pml4e.n.u1User ? 'U' : 'S',
4356 Pml4e.n.u1Accessed ? 'A' : '-',
4357 Pml4e.n.u3Reserved & 1? '?' : '.', /* ignored */
4358 Pml4e.n.u3Reserved & 4? '!' : '.', /* mbz */
4359 Pml4e.n.u1WriteThru ? "WT" : "--",
4360 Pml4e.n.u1CacheDisable? "CD" : "--",
4361 Pml4e.n.u3Reserved & 2? "!" : "..",/* mbz */
4362 Pml4e.n.u1NoExecute ? "NX" : "--",
4363 Pml4e.u & RT_BIT(9) ? '1' : '0',
4364 Pml4e.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
4365 Pml4e.u & RT_BIT(11) ? '1' : '0',
4366 Pml4e.u & X86_PML4E_PG_MASK);
4367
4368 if (cMaxDepth >= 1)
4369 {
4370 int rc2 = pgmR3DumpHierarchyHCPaePDPT(pVM, Pml4e.u & X86_PML4E_PG_MASK, u64Address, cr4, true, cMaxDepth - 1, pHlp);
4371 if (rc2 < rc && RT_SUCCESS(rc))
4372 rc = rc2;
4373 }
4374 }
4375 }
4376 return rc;
4377}
4378
4379
4380/**
4381 * Dumps a 32-bit shadow page table.
4382 *
4383 * @returns VBox status code (VINF_SUCCESS).
4384 * @param pVM The VM handle.
4385 * @param pPT Pointer to the page table.
4386 * @param u32Address The virtual address this table starts at.
4387 * @param pHlp Pointer to the output functions.
4388 */
4389int pgmR3DumpHierarchyHC32BitPT(PVM pVM, PX86PT pPT, uint32_t u32Address, PCDBGFINFOHLP pHlp)
4390{
4391 for (unsigned i = 0; i < RT_ELEMENTS(pPT->a); i++)
4392 {
4393 X86PTE Pte = pPT->a[i];
4394 if (Pte.n.u1Present)
4395 {
4396 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
4397 "%08x 1 | P %c %c %c %c %c %s %s %s .. 4K %c%c%c %08x\n",
4398 u32Address + (i << X86_PT_SHIFT),
4399 Pte.n.u1Write ? 'W' : 'R',
4400 Pte.n.u1User ? 'U' : 'S',
4401 Pte.n.u1Accessed ? 'A' : '-',
4402 Pte.n.u1Dirty ? 'D' : '-',
4403 Pte.n.u1Global ? 'G' : '-',
4404 Pte.n.u1WriteThru ? "WT" : "--",
4405 Pte.n.u1CacheDisable? "CD" : "--",
4406 Pte.n.u1PAT ? "AT" : "--",
4407 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
4408 Pte.u & RT_BIT(10) ? '1' : '0',
4409 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED ? 'v' : '-',
4410 Pte.u & X86_PDE_PG_MASK);
4411 }
4412 }
4413 return VINF_SUCCESS;
4414}
4415
4416
4417/**
4418 * Dumps a 32-bit shadow page directory and page tables.
4419 *
4420 * @returns VBox status code (VINF_SUCCESS).
4421 * @param pVM The VM handle.
4422 * @param cr3 The root of the hierarchy.
4423 * @param cr4 The CR4, PSE is currently used.
4424 * @param cMaxDepth How deep into the hierarchy the dumper should go.
4425 * @param pHlp Pointer to the output functions.
4426 */
4427int pgmR3DumpHierarchyHC32BitPD(PVM pVM, uint32_t cr3, uint32_t cr4, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
4428{
4429 PX86PD pPD = (PX86PD)MMPagePhys2Page(pVM, cr3 & X86_CR3_PAGE_MASK);
4430 if (!pPD)
4431 {
4432 pHlp->pfnPrintf(pHlp, "Page directory at %#x was not found in the page pool!\n", cr3 & X86_CR3_PAGE_MASK);
4433 return VERR_INVALID_PARAMETER;
4434 }
4435
4436 int rc = VINF_SUCCESS;
4437 for (unsigned i = 0; i < RT_ELEMENTS(pPD->a); i++)
4438 {
4439 X86PDE Pde = pPD->a[i];
4440 if (Pde.n.u1Present)
4441 {
4442 const uint32_t u32Address = i << X86_PD_SHIFT;
4443 if ((cr4 & X86_CR4_PSE) && Pde.b.u1Size)
4444 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
4445 "%08x 0 | P %c %c %c %c %c %s %s %s .. 4M %c%c%c %08x\n",
4446 u32Address,
4447 Pde.b.u1Write ? 'W' : 'R',
4448 Pde.b.u1User ? 'U' : 'S',
4449 Pde.b.u1Accessed ? 'A' : '-',
4450 Pde.b.u1Dirty ? 'D' : '-',
4451 Pde.b.u1Global ? 'G' : '-',
4452 Pde.b.u1WriteThru ? "WT" : "--",
4453 Pde.b.u1CacheDisable? "CD" : "--",
4454 Pde.b.u1PAT ? "AT" : "--",
4455 Pde.u & RT_BIT_64(9) ? '1' : '0',
4456 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
4457 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
4458 Pde.u & X86_PDE4M_PG_MASK);
4459 else
4460 {
4461 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
4462 "%08x 0 | P %c %c %c %c %c %s %s .. .. 4K %c%c%c %08x\n",
4463 u32Address,
4464 Pde.n.u1Write ? 'W' : 'R',
4465 Pde.n.u1User ? 'U' : 'S',
4466 Pde.n.u1Accessed ? 'A' : '-',
4467 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
4468 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
4469 Pde.n.u1WriteThru ? "WT" : "--",
4470 Pde.n.u1CacheDisable? "CD" : "--",
4471 Pde.u & RT_BIT_64(9) ? '1' : '0',
4472 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
4473 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
4474 Pde.u & X86_PDE_PG_MASK);
4475 if (cMaxDepth >= 1)
4476 {
4477 /** @todo what about using the page pool for mapping PTs? */
4478 RTHCPHYS HCPhys = Pde.u & X86_PDE_PG_MASK;
4479 PX86PT pPT = NULL;
4480 if (!(Pde.u & PGM_PDFLAGS_MAPPING))
4481 pPT = (PX86PT)MMPagePhys2Page(pVM, HCPhys);
4482 else
4483 {
4484 for (PPGMMAPPING pMap = pVM->pgm.s.pMappingsR3; pMap; pMap = pMap->pNextR3)
4485 if (u32Address - pMap->GCPtr < pMap->cb)
4486 {
4487 int iPDE = (u32Address - pMap->GCPtr) >> X86_PD_SHIFT;
4488 if (pMap->aPTs[iPDE].HCPhysPT != HCPhys)
4489 pHlp->pfnPrintf(pHlp, "%08x error! Mapping error! PT %d has HCPhysPT=%RHp not %RHp is in the PD.\n",
4490 u32Address, iPDE, pMap->aPTs[iPDE].HCPhysPT, HCPhys);
4491 pPT = pMap->aPTs[iPDE].pPTR3;
4492 }
4493 }
4494 int rc2 = VERR_INVALID_PARAMETER;
4495 if (pPT)
4496 rc2 = pgmR3DumpHierarchyHC32BitPT(pVM, pPT, u32Address, pHlp);
4497 else
4498 pHlp->pfnPrintf(pHlp, "%08x error! Page table at %#x was not found in the page pool!\n", u32Address, HCPhys);
4499 if (rc2 < rc && RT_SUCCESS(rc))
4500 rc = rc2;
4501 }
4502 }
4503 }
4504 }
4505
4506 return rc;
4507}
4508
4509
4510/**
4511 * Dumps a 32-bit shadow page table.
4512 *
4513 * @returns VBox status code (VINF_SUCCESS).
4514 * @param pVM The VM handle.
4515 * @param pPT Pointer to the page table.
4516 * @param u32Address The virtual address this table starts at.
4517 * @param PhysSearch Address to search for.
4518 */
4519int pgmR3DumpHierarchyGC32BitPT(PVM pVM, PX86PT pPT, uint32_t u32Address, RTGCPHYS PhysSearch)
4520{
4521 for (unsigned i = 0; i < RT_ELEMENTS(pPT->a); i++)
4522 {
4523 X86PTE Pte = pPT->a[i];
4524 if (Pte.n.u1Present)
4525 {
4526 Log(( /*P R S A D G WT CD AT NX 4M a m d */
4527 "%08x 1 | P %c %c %c %c %c %s %s %s .. 4K %c%c%c %08x\n",
4528 u32Address + (i << X86_PT_SHIFT),
4529 Pte.n.u1Write ? 'W' : 'R',
4530 Pte.n.u1User ? 'U' : 'S',
4531 Pte.n.u1Accessed ? 'A' : '-',
4532 Pte.n.u1Dirty ? 'D' : '-',
4533 Pte.n.u1Global ? 'G' : '-',
4534 Pte.n.u1WriteThru ? "WT" : "--",
4535 Pte.n.u1CacheDisable? "CD" : "--",
4536 Pte.n.u1PAT ? "AT" : "--",
4537 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
4538 Pte.u & RT_BIT(10) ? '1' : '0',
4539 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED ? 'v' : '-',
4540 Pte.u & X86_PDE_PG_MASK));
4541
4542 if ((Pte.u & X86_PDE_PG_MASK) == PhysSearch)
4543 {
4544 uint64_t fPageShw = 0;
4545 RTHCPHYS pPhysHC = 0;
4546
4547 /** @todo SMP support!! */
4548 PGMShwGetPage(&pVM->aCpus[0], (RTGCPTR)(u32Address + (i << X86_PT_SHIFT)), &fPageShw, &pPhysHC);
4549 Log(("Found %RGp at %RGv -> flags=%llx\n", PhysSearch, (RTGCPTR)(u32Address + (i << X86_PT_SHIFT)), fPageShw));
4550 }
4551 }
4552 }
4553 return VINF_SUCCESS;
4554}
4555
4556
4557/**
4558 * Dumps a 32-bit guest page directory and page tables.
4559 *
4560 * @returns VBox status code (VINF_SUCCESS).
4561 * @param pVM The VM handle.
4562 * @param cr3 The root of the hierarchy.
4563 * @param cr4 The CR4, PSE is currently used.
4564 * @param PhysSearch Address to search for.
4565 */
4566VMMR3DECL(int) PGMR3DumpHierarchyGC(PVM pVM, uint64_t cr3, uint64_t cr4, RTGCPHYS PhysSearch)
4567{
4568 bool fLongMode = false;
4569 const unsigned cch = fLongMode ? 16 : 8; NOREF(cch);
4570 PX86PD pPD = 0;
4571
4572 int rc = PGM_GCPHYS_2_PTR(pVM, cr3 & X86_CR3_PAGE_MASK, &pPD);
4573 if (RT_FAILURE(rc) || !pPD)
4574 {
4575 Log(("Page directory at %#x was not found in the page pool!\n", cr3 & X86_CR3_PAGE_MASK));
4576 return VERR_INVALID_PARAMETER;
4577 }
4578
4579 Log(("cr3=%08x cr4=%08x%s\n"
4580 "%-*s P - Present\n"
4581 "%-*s | R/W - Read (0) / Write (1)\n"
4582 "%-*s | | U/S - User (1) / Supervisor (0)\n"
4583 "%-*s | | | A - Accessed\n"
4584 "%-*s | | | | D - Dirty\n"
4585 "%-*s | | | | | G - Global\n"
4586 "%-*s | | | | | | WT - Write thru\n"
4587 "%-*s | | | | | | | CD - Cache disable\n"
4588 "%-*s | | | | | | | | AT - Attribute table (PAT)\n"
4589 "%-*s | | | | | | | | | NX - No execute (K8)\n"
4590 "%-*s | | | | | | | | | | 4K/4M/2M - Page size.\n"
4591 "%-*s | | | | | | | | | | | AVL - a=allocated; m=mapping; d=track dirty;\n"
4592 "%-*s | | | | | | | | | | | | p=permanent; v=validated;\n"
4593 "%-*s Level | | | | | | | | | | | | Page\n"
4594 /* xxxx n **** P R S A D G WT CD AT NX 4M AVL xxxxxxxxxxxxx
4595 - W U - - - -- -- -- -- -- 010 */
4596 , cr3, cr4, fLongMode ? " Long Mode" : "",
4597 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "",
4598 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "Address"));
4599
4600 for (unsigned i = 0; i < RT_ELEMENTS(pPD->a); i++)
4601 {
4602 X86PDE Pde = pPD->a[i];
4603 if (Pde.n.u1Present)
4604 {
4605 const uint32_t u32Address = i << X86_PD_SHIFT;
4606
4607 if ((cr4 & X86_CR4_PSE) && Pde.b.u1Size)
4608 Log(( /*P R S A D G WT CD AT NX 4M a m d */
4609 "%08x 0 | P %c %c %c %c %c %s %s %s .. 4M %c%c%c %08x\n",
4610 u32Address,
4611 Pde.b.u1Write ? 'W' : 'R',
4612 Pde.b.u1User ? 'U' : 'S',
4613 Pde.b.u1Accessed ? 'A' : '-',
4614 Pde.b.u1Dirty ? 'D' : '-',
4615 Pde.b.u1Global ? 'G' : '-',
4616 Pde.b.u1WriteThru ? "WT" : "--",
4617 Pde.b.u1CacheDisable? "CD" : "--",
4618 Pde.b.u1PAT ? "AT" : "--",
4619 Pde.u & RT_BIT(9) ? '1' : '0',
4620 Pde.u & RT_BIT(10) ? '1' : '0',
4621 Pde.u & RT_BIT(11) ? '1' : '0',
4622 pgmGstGet4MBPhysPage(&pVM->pgm.s, Pde)));
4623 /** @todo PhysSearch */
4624 else
4625 {
4626 Log(( /*P R S A D G WT CD AT NX 4M a m d */
4627 "%08x 0 | P %c %c %c %c %c %s %s .. .. 4K %c%c%c %08x\n",
4628 u32Address,
4629 Pde.n.u1Write ? 'W' : 'R',
4630 Pde.n.u1User ? 'U' : 'S',
4631 Pde.n.u1Accessed ? 'A' : '-',
4632 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
4633 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
4634 Pde.n.u1WriteThru ? "WT" : "--",
4635 Pde.n.u1CacheDisable? "CD" : "--",
4636 Pde.u & RT_BIT(9) ? '1' : '0',
4637 Pde.u & RT_BIT(10) ? '1' : '0',
4638 Pde.u & RT_BIT(11) ? '1' : '0',
4639 Pde.u & X86_PDE_PG_MASK));
4640 ////if (cMaxDepth >= 1)
4641 {
4642 /** @todo what about using the page pool for mapping PTs? */
4643 RTGCPHYS GCPhys = Pde.u & X86_PDE_PG_MASK;
4644 PX86PT pPT = NULL;
4645
4646 rc = PGM_GCPHYS_2_PTR(pVM, GCPhys, &pPT);
4647
4648 int rc2 = VERR_INVALID_PARAMETER;
4649 if (pPT)
4650 rc2 = pgmR3DumpHierarchyGC32BitPT(pVM, pPT, u32Address, PhysSearch);
4651 else
4652 Log(("%08x error! Page table at %#x was not found in the page pool!\n", u32Address, GCPhys));
4653 if (rc2 < rc && RT_SUCCESS(rc))
4654 rc = rc2;
4655 }
4656 }
4657 }
4658 }
4659
4660 return rc;
4661}
4662
4663
4664/**
4665 * Dumps a page table hierarchy use only physical addresses and cr4/lm flags.
4666 *
4667 * @returns VBox status code (VINF_SUCCESS).
4668 * @param pVM The VM handle.
4669 * @param cr3 The root of the hierarchy.
4670 * @param cr4 The cr4, only PAE and PSE is currently used.
4671 * @param fLongMode Set if long mode, false if not long mode.
4672 * @param cMaxDepth Number of levels to dump.
4673 * @param pHlp Pointer to the output functions.
4674 */
4675VMMR3DECL(int) PGMR3DumpHierarchyHC(PVM pVM, uint64_t cr3, uint64_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
4676{
4677 if (!pHlp)
4678 pHlp = DBGFR3InfoLogHlp();
4679 if (!cMaxDepth)
4680 return VINF_SUCCESS;
4681 const unsigned cch = fLongMode ? 16 : 8;
4682 pHlp->pfnPrintf(pHlp,
4683 "cr3=%08x cr4=%08x%s\n"
4684 "%-*s P - Present\n"
4685 "%-*s | R/W - Read (0) / Write (1)\n"
4686 "%-*s | | U/S - User (1) / Supervisor (0)\n"
4687 "%-*s | | | A - Accessed\n"
4688 "%-*s | | | | D - Dirty\n"
4689 "%-*s | | | | | G - Global\n"
4690 "%-*s | | | | | | WT - Write thru\n"
4691 "%-*s | | | | | | | CD - Cache disable\n"
4692 "%-*s | | | | | | | | AT - Attribute table (PAT)\n"
4693 "%-*s | | | | | | | | | NX - No execute (K8)\n"
4694 "%-*s | | | | | | | | | | 4K/4M/2M - Page size.\n"
4695 "%-*s | | | | | | | | | | | AVL - a=allocated; m=mapping; d=track dirty;\n"
4696 "%-*s | | | | | | | | | | | | p=permanent; v=validated;\n"
4697 "%-*s Level | | | | | | | | | | | | Page\n"
4698 /* xxxx n **** P R S A D G WT CD AT NX 4M AVL xxxxxxxxxxxxx
4699 - W U - - - -- -- -- -- -- 010 */
4700 , cr3, cr4, fLongMode ? " Long Mode" : "",
4701 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "",
4702 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "Address");
4703 if (cr4 & X86_CR4_PAE)
4704 {
4705 if (fLongMode)
4706 return pgmR3DumpHierarchyHcPaePML4(pVM, cr3 & X86_CR3_PAGE_MASK, cr4, cMaxDepth, pHlp);
4707 return pgmR3DumpHierarchyHCPaePDPT(pVM, cr3 & X86_CR3_PAE_PAGE_MASK, 0, cr4, false, cMaxDepth, pHlp);
4708 }
4709 return pgmR3DumpHierarchyHC32BitPD(pVM, cr3 & X86_CR3_PAGE_MASK, cr4, cMaxDepth, pHlp);
4710}
4711
4712#ifdef VBOX_WITH_DEBUGGER
4713
4714/**
4715 * The '.pgmram' command.
4716 *
4717 * @returns VBox status.
4718 * @param pCmd Pointer to the command descriptor (as registered).
4719 * @param pCmdHlp Pointer to command helper functions.
4720 * @param pVM Pointer to the current VM (if any).
4721 * @param paArgs Pointer to (readonly) array of arguments.
4722 * @param cArgs Number of arguments in the array.
4723 */
4724static DECLCALLBACK(int) pgmR3CmdRam(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4725{
4726 /*
4727 * Validate input.
4728 */
4729 if (!pVM)
4730 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4731 if (!pVM->pgm.s.pRamRangesRC)
4732 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Sorry, no Ram is registered.\n");
4733
4734 /*
4735 * Dump the ranges.
4736 */
4737 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "From - To (incl) pvHC\n");
4738 PPGMRAMRANGE pRam;
4739 for (pRam = pVM->pgm.s.pRamRangesR3; pRam; pRam = pRam->pNextR3)
4740 {
4741 rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL,
4742 "%RGp - %RGp %p\n",
4743 pRam->GCPhys, pRam->GCPhysLast, pRam->pvR3);
4744 if (RT_FAILURE(rc))
4745 return rc;
4746 }
4747
4748 return VINF_SUCCESS;
4749}
4750
4751
4752/**
4753 * The '.pgmmap' command.
4754 *
4755 * @returns VBox status.
4756 * @param pCmd Pointer to the command descriptor (as registered).
4757 * @param pCmdHlp Pointer to command helper functions.
4758 * @param pVM Pointer to the current VM (if any).
4759 * @param paArgs Pointer to (readonly) array of arguments.
4760 * @param cArgs Number of arguments in the array.
4761 */
4762static DECLCALLBACK(int) pgmR3CmdMap(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4763{
4764 /*
4765 * Validate input.
4766 */
4767 if (!pVM)
4768 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4769 if (!pVM->pgm.s.pMappingsR3)
4770 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Sorry, no mappings are registered.\n");
4771
4772 /*
4773 * Print message about the fixedness of the mappings.
4774 */
4775 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, pVM->pgm.s.fMappingsFixed ? "The mappings are FIXED.\n" : "The mappings are FLOATING.\n");
4776 if (RT_FAILURE(rc))
4777 return rc;
4778
4779 /*
4780 * Dump the ranges.
4781 */
4782 PPGMMAPPING pCur;
4783 for (pCur = pVM->pgm.s.pMappingsR3; pCur; pCur = pCur->pNextR3)
4784 {
4785 rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL,
4786 "%08x - %08x %s\n",
4787 pCur->GCPtr, pCur->GCPtrLast, pCur->pszDesc);
4788 if (RT_FAILURE(rc))
4789 return rc;
4790 }
4791
4792 return VINF_SUCCESS;
4793}
4794
4795
4796/**
4797 * The '.pgmerror' and '.pgmerroroff' commands.
4798 *
4799 * @returns VBox status.
4800 * @param pCmd Pointer to the command descriptor (as registered).
4801 * @param pCmdHlp Pointer to command helper functions.
4802 * @param pVM Pointer to the current VM (if any).
4803 * @param paArgs Pointer to (readonly) array of arguments.
4804 * @param cArgs Number of arguments in the array.
4805 */
4806static DECLCALLBACK(int) pgmR3CmdError(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4807{
4808 /*
4809 * Validate input.
4810 */
4811 if (!pVM)
4812 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4813 AssertReturn(cArgs == 0 || (cArgs == 1 && paArgs[0].enmType == DBGCVAR_TYPE_STRING),
4814 pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: Hit bug in the parser.\n"));
4815
4816 if (!cArgs)
4817 {
4818 /*
4819 * Print the list of error injection locations with status.
4820 */
4821 pCmdHlp->pfnPrintf(pCmdHlp, NULL, "PGM error inject locations:\n");
4822 pCmdHlp->pfnPrintf(pCmdHlp, NULL, " handy - %RTbool\n", pVM->pgm.s.fErrInjHandyPages);
4823 }
4824 else
4825 {
4826
4827 /*
4828 * String switch on where to inject the error.
4829 */
4830 bool const fNewState = !strcmp(pCmd->pszCmd, "pgmerror");
4831 const char *pszWhere = paArgs[0].u.pszString;
4832 if (!strcmp(pszWhere, "handy"))
4833 ASMAtomicWriteBool(&pVM->pgm.s.fErrInjHandyPages, fNewState);
4834 else
4835 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: Invalid 'where' value: %s.\n", pszWhere);
4836 pCmdHlp->pfnPrintf(pCmdHlp, NULL, "done\n");
4837 }
4838 return VINF_SUCCESS;
4839}
4840
4841
4842/**
4843 * The '.pgmsync' command.
4844 *
4845 * @returns VBox status.
4846 * @param pCmd Pointer to the command descriptor (as registered).
4847 * @param pCmdHlp Pointer to command helper functions.
4848 * @param pVM Pointer to the current VM (if any).
4849 * @param paArgs Pointer to (readonly) array of arguments.
4850 * @param cArgs Number of arguments in the array.
4851 */
4852static DECLCALLBACK(int) pgmR3CmdSync(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4853{
4854 /** @todo SMP support */
4855 PVMCPU pVCpu = &pVM->aCpus[0];
4856
4857 /*
4858 * Validate input.
4859 */
4860 if (!pVM)
4861 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4862
4863 /*
4864 * Force page directory sync.
4865 */
4866 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
4867
4868 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Forcing page directory sync.\n");
4869 if (RT_FAILURE(rc))
4870 return rc;
4871
4872 return VINF_SUCCESS;
4873}
4874
4875
4876#ifdef VBOX_STRICT
4877/**
4878 * The '.pgmassertcr3' command.
4879 *
4880 * @returns VBox status.
4881 * @param pCmd Pointer to the command descriptor (as registered).
4882 * @param pCmdHlp Pointer to command helper functions.
4883 * @param pVM Pointer to the current VM (if any).
4884 * @param paArgs Pointer to (readonly) array of arguments.
4885 * @param cArgs Number of arguments in the array.
4886 */
4887static DECLCALLBACK(int) pgmR3CmdAssertCR3(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4888{
4889 /** @todo SMP support!! */
4890 PVMCPU pVCpu = &pVM->aCpus[0];
4891
4892 /*
4893 * Validate input.
4894 */
4895 if (!pVM)
4896 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4897
4898 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Checking shadow CR3 page tables for consistency.\n");
4899 if (RT_FAILURE(rc))
4900 return rc;
4901
4902 PGMAssertCR3(pVM, pVCpu, CPUMGetGuestCR3(pVCpu), CPUMGetGuestCR4(pVCpu));
4903
4904 return VINF_SUCCESS;
4905}
4906#endif /* VBOX_STRICT */
4907
4908
4909/**
4910 * The '.pgmsyncalways' command.
4911 *
4912 * @returns VBox status.
4913 * @param pCmd Pointer to the command descriptor (as registered).
4914 * @param pCmdHlp Pointer to command helper functions.
4915 * @param pVM Pointer to the current VM (if any).
4916 * @param paArgs Pointer to (readonly) array of arguments.
4917 * @param cArgs Number of arguments in the array.
4918 */
4919static DECLCALLBACK(int) pgmR3CmdSyncAlways(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4920{
4921 /** @todo SMP support!! */
4922 PVMCPU pVCpu = &pVM->aCpus[0];
4923
4924 /*
4925 * Validate input.
4926 */
4927 if (!pVM)
4928 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4929
4930 /*
4931 * Force page directory sync.
4932 */
4933 if (pVCpu->pgm.s.fSyncFlags & PGM_SYNC_ALWAYS)
4934 {
4935 ASMAtomicAndU32(&pVCpu->pgm.s.fSyncFlags, ~PGM_SYNC_ALWAYS);
4936 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Disabled permanent forced page directory syncing.\n");
4937 }
4938 else
4939 {
4940 ASMAtomicOrU32(&pVCpu->pgm.s.fSyncFlags, PGM_SYNC_ALWAYS);
4941 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
4942 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Enabled permanent forced page directory syncing.\n");
4943 }
4944}
4945
4946#endif /* VBOX_WITH_DEBUGGER */
4947
4948/**
4949 * pvUser argument of the pgmR3CheckIntegrity*Node callbacks.
4950 */
4951typedef struct PGMCHECKINTARGS
4952{
4953 bool fLeftToRight; /**< true: left-to-right; false: right-to-left. */
4954 PPGMPHYSHANDLER pPrevPhys;
4955 PPGMVIRTHANDLER pPrevVirt;
4956 PPGMPHYS2VIRTHANDLER pPrevPhys2Virt;
4957 PVM pVM;
4958} PGMCHECKINTARGS, *PPGMCHECKINTARGS;
4959
4960/**
4961 * Validate a node in the physical handler tree.
4962 *
4963 * @returns 0 on if ok, other wise 1.
4964 * @param pNode The handler node.
4965 * @param pvUser pVM.
4966 */
4967static DECLCALLBACK(int) pgmR3CheckIntegrityPhysHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
4968{
4969 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4970 PPGMPHYSHANDLER pCur = (PPGMPHYSHANDLER)pNode;
4971 AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
4972 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGp-%RGp %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4973 AssertReleaseMsg( !pArgs->pPrevPhys
4974 || (pArgs->fLeftToRight ? pArgs->pPrevPhys->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys->Core.KeyLast > pCur->Core.Key),
4975 ("pPrevPhys=%p %RGp-%RGp %s\n"
4976 " pCur=%p %RGp-%RGp %s\n",
4977 pArgs->pPrevPhys, pArgs->pPrevPhys->Core.Key, pArgs->pPrevPhys->Core.KeyLast, pArgs->pPrevPhys->pszDesc,
4978 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4979 pArgs->pPrevPhys = pCur;
4980 return 0;
4981}
4982
4983
4984/**
4985 * Validate a node in the virtual handler tree.
4986 *
4987 * @returns 0 on if ok, other wise 1.
4988 * @param pNode The handler node.
4989 * @param pvUser pVM.
4990 */
4991static DECLCALLBACK(int) pgmR3CheckIntegrityVirtHandlerNode(PAVLROGCPTRNODECORE pNode, void *pvUser)
4992{
4993 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4994 PPGMVIRTHANDLER pCur = (PPGMVIRTHANDLER)pNode;
4995 AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
4996 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGv-%RGv %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4997 AssertReleaseMsg( !pArgs->pPrevVirt
4998 || (pArgs->fLeftToRight ? pArgs->pPrevVirt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevVirt->Core.KeyLast > pCur->Core.Key),
4999 ("pPrevVirt=%p %RGv-%RGv %s\n"
5000 " pCur=%p %RGv-%RGv %s\n",
5001 pArgs->pPrevVirt, pArgs->pPrevVirt->Core.Key, pArgs->pPrevVirt->Core.KeyLast, pArgs->pPrevVirt->pszDesc,
5002 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
5003 for (unsigned iPage = 0; iPage < pCur->cPages; iPage++)
5004 {
5005 AssertReleaseMsg(pCur->aPhysToVirt[iPage].offVirtHandler == -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage]),
5006 ("pCur=%p %RGv-%RGv %s\n"
5007 "iPage=%d offVirtHandle=%#x expected %#x\n",
5008 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc,
5009 iPage, pCur->aPhysToVirt[iPage].offVirtHandler, -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage])));
5010 }
5011 pArgs->pPrevVirt = pCur;
5012 return 0;
5013}
5014
5015
5016/**
5017 * Validate a node in the virtual handler tree.
5018 *
5019 * @returns 0 on if ok, other wise 1.
5020 * @param pNode The handler node.
5021 * @param pvUser pVM.
5022 */
5023static DECLCALLBACK(int) pgmR3CheckIntegrityPhysToVirtHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
5024{
5025 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
5026 PPGMPHYS2VIRTHANDLER pCur = (PPGMPHYS2VIRTHANDLER)pNode;
5027 AssertReleaseMsgReturn(!((uintptr_t)pCur & 3), ("\n"), 1);
5028 AssertReleaseMsgReturn(!(pCur->offVirtHandler & 3), ("\n"), 1);
5029 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGp-%RGp\n", pCur, pCur->Core.Key, pCur->Core.KeyLast));
5030 AssertReleaseMsg( !pArgs->pPrevPhys2Virt
5031 || (pArgs->fLeftToRight ? pArgs->pPrevPhys2Virt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys2Virt->Core.KeyLast > pCur->Core.Key),
5032 ("pPrevPhys2Virt=%p %RGp-%RGp\n"
5033 " pCur=%p %RGp-%RGp\n",
5034 pArgs->pPrevPhys2Virt, pArgs->pPrevPhys2Virt->Core.Key, pArgs->pPrevPhys2Virt->Core.KeyLast,
5035 pCur, pCur->Core.Key, pCur->Core.KeyLast));
5036 AssertReleaseMsg( !pArgs->pPrevPhys2Virt
5037 || (pArgs->fLeftToRight ? pArgs->pPrevPhys2Virt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys2Virt->Core.KeyLast > pCur->Core.Key),
5038 ("pPrevPhys2Virt=%p %RGp-%RGp\n"
5039 " pCur=%p %RGp-%RGp\n",
5040 pArgs->pPrevPhys2Virt, pArgs->pPrevPhys2Virt->Core.Key, pArgs->pPrevPhys2Virt->Core.KeyLast,
5041 pCur, pCur->Core.Key, pCur->Core.KeyLast));
5042 AssertReleaseMsg((pCur->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD),
5043 ("pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
5044 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
5045 if (pCur->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK)
5046 {
5047 PPGMPHYS2VIRTHANDLER pCur2 = pCur;
5048 for (;;)
5049 {
5050 pCur2 = (PPGMPHYS2VIRTHANDLER)((intptr_t)pCur + (pCur->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK));
5051 AssertReleaseMsg(pCur2 != pCur,
5052 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
5053 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
5054 AssertReleaseMsg((pCur2->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == PGMPHYS2VIRTHANDLER_IN_TREE,
5055 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
5056 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
5057 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
5058 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
5059 AssertReleaseMsg((pCur2->Core.Key ^ pCur->Core.Key) < PAGE_SIZE,
5060 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
5061 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
5062 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
5063 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
5064 AssertReleaseMsg((pCur2->Core.KeyLast ^ pCur->Core.KeyLast) < PAGE_SIZE,
5065 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
5066 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
5067 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
5068 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
5069 if (!(pCur2->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK))
5070 break;
5071 }
5072 }
5073
5074 pArgs->pPrevPhys2Virt = pCur;
5075 return 0;
5076}
5077
5078
5079/**
5080 * Perform an integrity check on the PGM component.
5081 *
5082 * @returns VINF_SUCCESS if everything is fine.
5083 * @returns VBox error status after asserting on integrity breach.
5084 * @param pVM The VM handle.
5085 */
5086VMMR3DECL(int) PGMR3CheckIntegrity(PVM pVM)
5087{
5088 AssertReleaseReturn(pVM->pgm.s.offVM, VERR_INTERNAL_ERROR);
5089
5090 /*
5091 * Check the trees.
5092 */
5093 int cErrors = 0;
5094 const static PGMCHECKINTARGS s_LeftToRight = { true, NULL, NULL, NULL, pVM };
5095 const static PGMCHECKINTARGS s_RightToLeft = { false, NULL, NULL, NULL, pVM };
5096 PGMCHECKINTARGS Args = s_LeftToRight;
5097 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, true, pgmR3CheckIntegrityPhysHandlerNode, &Args);
5098 Args = s_RightToLeft;
5099 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, false, pgmR3CheckIntegrityPhysHandlerNode, &Args);
5100 Args = s_LeftToRight;
5101 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->VirtHandlers, true, pgmR3CheckIntegrityVirtHandlerNode, &Args);
5102 Args = s_RightToLeft;
5103 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->VirtHandlers, false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
5104 Args = s_LeftToRight;
5105 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->HyperVirtHandlers, true, pgmR3CheckIntegrityVirtHandlerNode, &Args);
5106 Args = s_RightToLeft;
5107 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->HyperVirtHandlers, false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
5108 Args = s_LeftToRight;
5109 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysToVirtHandlers, true, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
5110 Args = s_RightToLeft;
5111 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysToVirtHandlers, false, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
5112
5113 return !cErrors ? VINF_SUCCESS : VERR_INTERNAL_ERROR;
5114}
5115
5116
Note: See TracBrowser for help on using the repository browser.

© 2024 Oracle Support Privacy / Do Not Sell My Info Terms of Use Trademark Policy Automated Access Etiquette