VirtualBox

source: vbox/trunk/src/VBox/VMM/PGM.cpp @ 17562

Last change on this file since 17562 was 17556, checked in by vboxsync, 16 years ago

Allow pgm pool flushing only in ring 3. Deal with shadow mode reinit there as well.

/* $Id: PGM.cpp 17556 2009-03-09 09:46:40Z vboxsync $ */
/** @file
 * PGM - Page Manager and Monitor. (Mixing stuff here, not good?)
 */

/*
 * Copyright (C) 2006-2007 Sun Microsystems, Inc.
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 *
 * Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa
 * Clara, CA 95054 USA or visit http://www.sun.com if you need
 * additional information or have any questions.
 */


/** @page pg_pgm PGM - The Page Manager and Monitor
 *
 * @see grp_pgm,
 * @ref pg_pgm_pool,
 * @ref pg_pgm_phys.
 *
 *
 * @section sec_pgm_modes Paging Modes
 *
 * There are three memory contexts: Host Context (HC), Guest Context (GC)
 * and intermediate context. When talking about paging, HC can also be referred
 * to as "host paging" and GC as "shadow paging".
 *
 * We define three basic paging modes: 32-bit, PAE and AMD64. The host paging mode
 * is defined by the host operating system. The mode used for shadow paging
 * depends on the host paging mode and the mode the guest is currently in. The
 * following relation between the two is defined:
 *
 * @verbatim
   Host >   | 32-bit | PAE    | AMD64  |
   Guest    |        |        |        |
   ==v==================================
   32-bit     32-bit   PAE      PAE
   ---------|--------|--------|--------|
   PAE        PAE      PAE      PAE
   ---------|--------|--------|--------|
   AMD64      AMD64    AMD64    AMD64
   ---------|--------|--------|--------|  @endverbatim
 *
 * All configurations except those on the diagonal (upper left) are expected to
 * require special effort from the switcher (i.e. be a bit slower).
 *
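 * As a hedged sketch only (the enum and function here are invented for this
 * illustration and are not actual PGM identifiers), the table above boils
 * down to:
 * @verbatim
    typedef enum { MODE_32BIT, MODE_PAE, MODE_AMD64 } SKETCHMODE;

    // Pick the shadow paging mode from the host and guest paging modes.
    static SKETCHMODE CalcShadowModeSketch(SKETCHMODE enmHost, SKETCHMODE enmGuest)
    {
        switch (enmGuest)
        {
            case MODE_32BIT:
                // Only a 32-bit host can use a 32-bit shadow; otherwise PAE.
                return enmHost == MODE_32BIT ? MODE_32BIT : MODE_PAE;
            case MODE_PAE:
                return MODE_PAE;    // PAE guests always get PAE shadows.
            default:
                return MODE_AMD64;  // AMD64 guests always get AMD64 shadows.
        }
    }
   @endverbatim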
 *
 *
 *
 * @section sec_pgm_shw The Shadow Memory Context
 *
 *
 * [..]
 *
 * Because guest context mappings require PDPT and PML4 entries to allow
 * writing on AMD64, the two upper levels will have fixed flags whatever the
 * guest is thinking of using there. So, when shadowing the PD level we will
 * calculate the effective flags of the PD and all the higher levels. In legacy
 * PAE mode this only applies to the PWT and PCD bits (the rest are
 * ignored/reserved/MBZ). We will ignore those bits for the present.
 *
 *
 *
 * @section sec_pgm_int The Intermediate Memory Context
 *
 * The world switch goes through an intermediate memory context whose purpose
 * is to provide different mappings of the switcher code. All guest mappings
 * are also present in this context.
 *
 * The switcher code is mapped at the same location as on the host, at an
 * identity mapped location (physical equals virtual address), and at the
 * hypervisor location. The identity mapped location is for world
 * switches that involve disabling paging.
 *
 * PGM maintains page tables for 32-bit, PAE and AMD64 paging modes. This
 * simplifies switching guest CPU modes and consistency at the cost of more
 * code to do the work. All memory used for those page tables is located below
 * 4GB (this includes page tables for guest context mappings).
 *
 *
 * @subsection subsec_pgm_int_gc Guest Context Mappings
 *
 * During assignment and relocation of a guest context mapping the intermediate
 * memory context is used to verify the new location.
 *
 * Guest context mappings are currently restricted to below 4GB, for reasons
 * of simplicity. This may change when we implement AMD64 support.
 *
 *
 *
 *
 * @section sec_pgm_misc Misc
 *
 * @subsection subsec_pgm_misc_diff Differences Between Legacy PAE and Long Mode PAE
 *
 * The differences between legacy PAE and long mode PAE are:
 *   -# PDPE bits 1, 2, 5 and 6 are defined differently. In legacy mode they are
 *      all marked down as must-be-zero, while in long mode 1, 2 and 5 have the
 *      usual meanings and 6 is ignored (AMD). This means that upon switching to
 *      legacy PAE mode we'll have to clear these bits and when going to long mode
 *      they must be set. This applies to both intermediate and shadow contexts,
 *      however we don't need to do it for the intermediate one since we're
 *      executing with CR0.WP at that time.
 *   -# CR3 allows a 32-byte aligned address in legacy mode, while in long mode
 *      a page aligned one is required.
 *
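 * A hedged sketch of the bit fiddling implied by the first point (the mask is
 * derived from the bit numbers quoted above; the names are invented for this
 * illustration):
 * @verbatim
    // PDPE bits 1, 2, 5 and 6, which differ between legacy and long mode.
    #define SKETCH_PDPE_DIFF_MASK  ((1ULL << 1) | (1ULL << 2) | (1ULL << 5) | (1ULL << 6))

    // Entering legacy PAE mode: the bits are must-be-zero, so clear them.
    uint64_t PdpeForLegacyPae(uint64_t uPdpe)
    {
        return uPdpe & ~SKETCH_PDPE_DIFF_MASK;
    }
   @endverbatim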
 *
 *
 * @section sec_pgm_handlers Access Handlers
 *
 * Placeholder.
 *
 *
 * @subsection sec_pgm_handlers_phys Physical Access Handlers
 *
 * Placeholder.
 *
 *
 * @subsection sec_pgm_handlers_virt Virtual Access Handlers
 *
 * We currently implement three types of virtual access handlers: ALL, WRITE
 * and HYPERVISOR (WRITE). See PGMVIRTHANDLERTYPE for some more details.
 *
 * The HYPERVISOR access handlers are kept in a separate tree since they don't apply
 * to physical pages (PGMTREES::HyperVirtHandlers) and only need to be consulted in
 * a special \#PF case. The ALL and WRITE handlers are in the PGMTREES::VirtHandlers
 * tree; the rest of this section is going to be about these handlers.
 *
 * We'll go through the life cycle of a handler and try to make sense of it all;
 * don't know how successful this is going to be...
 *
 * 1. A handler is registered through the PGMR3HandlerVirtualRegister and
 *    PGMHandlerVirtualRegisterEx APIs. We check for conflicting virtual handlers
 *    and create a new node that is inserted into the AVL tree (range key). Then
 *    a full PGM resync is flagged (clear pool, sync cr3, update virtual bit of PGMPAGE).
 *
 * 2. The following PGMSyncCR3/SyncCR3 operation will first invoke HandlerVirtualUpdate.
 *
 * 2a. HandlerVirtualUpdate will look up all the pages covered by virtual handlers
 *     via the current guest CR3 and update the physical page -> virtual handler
 *     translation. Needless to say, this doesn't exactly scale very well. If any changes
 *     are detected, it will flag a virtual bit update just like we did on registration.
 *     PGMPHYS pages with changes will have their virtual handler state reset to NONE.
 *
 * 2b. The virtual bit update process will iterate all the pages covered by all the
 *     virtual handlers and update the PGMPAGE virtual handler state to the max of all
 *     virtual handlers on that page.
 *
 * 2c. Back in SyncCR3 we will now flush the entire shadow page cache to make sure
 *     we don't miss any alias mappings of the monitored pages.
 *
 * 2d. SyncCR3 will then proceed with syncing the CR3 table.
 *
 * 3. \#PF(np,read) on a page in the range. This will cause it to be synced
 *    read-only and resumed if it's a WRITE handler. If it's an ALL handler we
 *    will call the handlers like in the next step. If the physical mapping has
 *    changed we will - some time in the future - perform a handler callback
 *    (optional) and update the physical -> virtual handler cache.
 *
 * 4. \#PF(,write) on a page in the range. This will cause the handler to
 *    be invoked.
 *
 * 5. The guest invalidates the page and changes the physical backing or
 *    unmaps it. This should cause the invalidation callback to be invoked
 *    (it might not yet be 100% perfect). Exactly what happens next... is
 *    this where we mess up and end up out of sync for a while?
 *
 * 6. The handler is deregistered by the client via PGMHandlerVirtualDeregister.
 *    We will then set all PGMPAGEs in the physical -> virtual handler cache for
 *    this handler to NONE and trigger a full PGM resync (basically the same
 *    as in step 1), which means 2 is executed again.
 *
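 * A hedged sketch of the state maximization in step 2b (the types and names
 * here are invented for illustration; the real code works on PGMPAGE and the
 * handler AVL trees):
 * @verbatim
    // Per-page virtual handler state, ordered so that larger == stricter.
    typedef enum { VSTATE_NONE = 0, VSTATE_WRITE = 1, VSTATE_ALL = 2 } VSTATESKETCH;

    // For each page a handler covers, keep the strictest state seen so far.
    void UpdateVirtHandlerState(VSTATESKETCH *paPageStates, size_t iFirstPage,
                                size_t cPages, VSTATESKETCH enmHandlerState)
    {
        for (size_t i = 0; i < cPages; i++)
            if (paPageStates[iFirstPage + i] < enmHandlerState)
                paPageStates[iFirstPage + i] = enmHandlerState;
    }
   @endverbatim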
 *
 * @subsubsection sub_sec_pgm_handler_virt_todo TODOs
 *
 * There are a bunch of things that need to be done to make the virtual handlers
 * work 100% correctly and more efficiently.
 *
 * The first bit hasn't been implemented yet because it's going to slow the
 * whole mess down even more, and besides it seems to be working reliably for
 * our current uses. OTOH, some of the optimizations might end up more or less
 * implementing the missing bits, so we'll see.
 *
 * On the optimization side, the first thing to do is to try to avoid unnecessary
 * cache flushing. Then try teaming up with the shadowing code to track changes
 * in mappings by means of access to them (shadow in), updates to shadow pages,
 * invlpg, and shadow PT discarding (perhaps).
 *
 * Some ideas that have popped up for optimization of current and new features:
 *    - bitmap indicating where there are virtual handlers installed.
 *      (4KB => 2**20 pages, page 2**12 => covers 32-bit address space 1:1!)
 *    - Further optimize this by min/max (needs min/max avl getters).
 *    - Shadow page table entry bit (if any left)?
 *
 */


/** @page pg_pgm_phys PGM Physical Guest Memory Management
 *
 *
 * Objectives:
 *      - Guest RAM over-commitment using memory ballooning,
 *        zero pages and general page sharing.
 *      - Moving or mirroring a VM onto a different physical machine.
 *
 *
 * @subsection subsec_pgmPhys_Definitions Definitions
 *
 * Allocation chunk - A RTR0MemObjAllocPhysNC object and the tracking
 * machinery associated with it.
 *
 *
 *
 *
 * @subsection subsec_pgmPhys_AllocPage Allocating a page.
 *
 * Initially we map *all* guest memory to the (per VM) zero page, which
 * means that none of the read functions will cause pages to be allocated.
 *
 * Exception: the accessed bit in page tables that have been shared. This must
 * be handled, but we must also make sure PGMGst*Modify doesn't make
 * unnecessary modifications.
 *
 * Allocation points:
 *      - PGMPhysSimpleWriteGCPhys and PGMPhysWrite.
 *      - Replacing a zero page mapping at \#PF.
 *      - Replacing a shared page mapping at \#PF.
 *      - ROM registration (currently MMR3RomRegister).
 *      - VM restore (pgmR3Load).
 *
 * For the first three it would make sense to keep a few pages handy
 * until we've reached the max memory commitment for the VM.
 *
 * For the ROM registration, we know exactly how many pages we need
 * and will request these from ring-0. For restore, we will save
 * the number of non-zero pages in the saved state and allocate
 * them up front. This would allow the ring-0 component to refuse
 * the request if there isn't sufficient memory available for VM use.
 *
 * Btw. for both ROM and restore allocations we won't be requiring
 * zeroed pages as they are going to be filled instantly.
 *
 *
 * @subsection subsec_pgmPhys_FreePage Freeing a page
 *
 * There are a few points where a page can be freed:
 *      - After being replaced by the zero page.
 *      - After being replaced by a shared page.
 *      - After being ballooned by the guest additions.
 *      - At reset.
 *      - At restore.
 *
 * When freeing one or more pages they will be returned to the ring-0
 * component and replaced by the zero page.
 *
 * The reasoning for clearing out all the pages on reset is that it will
 * return us to the exact same state as on power on, and may thereby help
 * us reduce the memory load on the system. Further it might have a
 * (temporary) positive influence on memory fragmentation (@see subsec_pgmPhys_Fragmentation).
 *
 * On restore, as mentioned under the allocation topic, pages should be
 * freed / allocated depending on how many are actually required by the
 * new VM state. The simplest approach is to do as on reset: free
 * all non-ROM pages and then allocate what we need.
 *
 * A measure to prevent some fragmentation would be to let each allocation
 * chunk have some affinity towards the VM having allocated the most pages
 * from it. Also, try to make sure to allocate from allocation chunks that
 * are almost full. Admittedly, both these measures might work counter to
 * our intentions and it's probably not worth putting a lot of effort,
 * cpu time or memory into this.
 *
 *
 * @subsection subsec_pgmPhys_SharePage Sharing a page
 *
 * The basic idea is that there will be an idle priority kernel
 * thread walking the non-shared VM pages, hashing them and looking for
 * pages with the same checksum. If such pages are found, it will compare
 * them byte-by-byte to see if they actually are identical. If found to be
 * identical it will allocate a shared page, copy the content, check that
 * the page didn't change while doing this, and finally request both the
 * VMs to use the shared page instead. If the page is all zeros (special
 * checksum and byte-by-byte check) it will request the VM that owns it
 * to replace it with the zero page.
 *
 * To make this efficient, we will have to make sure not to try to share a page
 * that will change its contents soon. This part requires the most work.
 * A simple idea would be to request the VM to write monitor the page for
 * a while to make sure it isn't modified any time soon. Also, it may
 * make sense to skip pages that are being write monitored since this
 * information is readily available to the thread if it works on the
 * per-VM guest memory structures (presently called PGMRAMRANGE).
 *
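 * A hedged sketch of the scan loop described above (the hash table, page type
 * and helper names are stand-ins invented for this illustration; the real
 * thread would work on the GMM/PGM structures):
 * @verbatim
    // One pass of the idle-priority sharing thread over candidate pages.
    void ScanForSharablePages(SKETCHPAGE *paPages, size_t cPages, SKETCHTAB *pTab)
    {
        for (size_t i = 0; i < cPages; i++)
        {
            uint32_t uChecksum = CalcPageChecksum(paPages[i].pv);    // cheap hash first
            SKETCHPAGE *pMatch = HashTabLookup(pTab, uChecksum);
            if (pMatch && !memcmp(pMatch->pv, paPages[i].pv, 4096))  // then verify byte-by-byte
                RequestSharePage(pMatch, &paPages[i]);               // ask the VMs to share it
            else
                HashTabInsert(pTab, uChecksum, &paPages[i]);
        }
    }
   @endverbatim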
 *
 * @subsection subsec_pgmPhys_Fragmentation Fragmentation Concerns and Counter Measures
 *
 * The pages are organized in allocation chunks in ring-0; this is a necessity
 * if we wish to have an OS agnostic approach to this whole thing. (On Linux we
 * could easily work on a page-by-page basis if we liked. Whether this is possible
 * or efficient on NT I don't quite know.) Fragmentation within these chunks may
 * become a problem as part of the idea here is that we wish to return memory to
 * the host system.
 *
 * For instance, starting two VMs at the same time, they will both allocate the
 * guest memory on-demand and if permitted their page allocations will be
 * intermixed. Shut down one of the two VMs and it will be difficult to return
 * any memory to the host system because the page allocations for the two VMs are
 * mixed up in the same allocation chunks.
 *
 * To further complicate matters, when pages are freed because they have been
 * ballooned or become shared/zero the whole idea is that the page is supposed
 * to be reused by another VM or returned to the host system. This will cause
 * allocation chunks to contain pages belonging to different VMs and prevent
 * returning memory to the host when one of those VMs shuts down.
 *
 * The only way to really deal with this problem is to move pages. This can
 * either be done at VM shutdown and/or by the idle priority worker thread
 * that will be responsible for finding sharable/zero pages. The mechanisms
 * involved for coercing a VM to move a page (or to do it for it) will be
 * the same as when telling it to share/zero a page.
 *
 *
 * @subsection subsec_pgmPhys_Tracking Tracking Structures And Their Cost
 *
 * There's a difficult balance between keeping the per-page tracking structures
 * (global and guest page) easy to use and keeping them from eating too much
 * memory. We have limited virtual memory resources available when operating in
 * 32-bit kernel space (on 64-bit it's quite a different story). The
 * tracking structures will be designed such that we can deal with up
 * to 32GB of memory on a 32-bit system and essentially unlimited on 64-bit ones.
 *
 *
 * @subsubsection subsubsec_pgmPhys_Tracking_Kernel Kernel Space
 *
 * @see pg_GMM
 *
 * @subsubsection subsubsec_pgmPhys_Tracking_PerVM Per-VM
 *
 * Fixed info is the physical address of the page (HCPhys) and the page id
 * (described above). Theoretically we'll need 48(-12) bits for the HCPhys part.
 * Today we're restricting ourselves to 40(-12) bits because this is the current
 * restriction of all AMD64 implementations (I think Barcelona will up this
 * to 48(-12) bits, not that it really matters) and I needed the bits for
 * tracking mappings of a page. 48-12 = 36. That leaves 28 bits, which means a
 * decent range for the page id: 2^(28+12) = 1024TB.
 *
 * In addition to these, we'll have to keep maintaining the page flags as we
 * currently do. Although it wouldn't harm to optimize these quite a bit, like
 * for instance the ROM shouldn't depend on having a write handler installed
 * in order for it to become read-only. A RO/RW bit should be considered so
 * that the page syncing code doesn't have to mess about checking multiple
 * flag combinations (ROM || RW handler || write monitored) in order to
 * figure out how to set up a shadow PTE. But this, of course, is second
 * priority at present. Currently this requires 12 bits, but could probably
 * be optimized to ~8.
 *
 * Then there's the 24 bits used to track which shadow page tables are
 * currently mapping a page for the purpose of speeding up physical
 * access handlers, and thereby the page pool cache. More bits for this
 * purpose wouldn't hurt IIRC.
 *
 * Then there is a new bit field in which we need to record what kind of page
 * this is: shared, zero, normal or write-monitored-normal. This'll
 * require 2 bits. One bit might be needed for indicating whether a
 * write monitored page has been written to. And yet another one or
 * two for tracking migration status. 3-4 bits total then.
 *
 * Whatever is left can be used to record the shareability of a
 * page. The page checksum will not be stored in the per-VM table as
 * the idle thread will not be permitted to do modifications to it.
 * It will instead have to keep its own working set of potentially
 * shareable pages and their checksums and stuff.
 *
 * For the present we'll keep the current packing of the
 * PGMRAMRANGE::aHCPhys to keep the changes simple, except, of course,
 * we'll have to change it to a struct with a total of 128 bits at
 * our disposal.
 *
 * The initial layout will be like this:
 * @verbatim
    RTHCPHYS HCPhys;            The current stuff.
        63:40                   Current shadow PT tracking stuff.
        39:12                   The physical page frame number.
        11:0                    The current flags.
    uint32_t u28PageId : 28;    The page id.
    uint32_t u2State : 2;       The page state { zero, shared, normal, write monitored }.
    uint32_t fWrittenTo : 1;    Whether a write monitored page was written to.
    uint32_t u1Reserved : 1;    Reserved for later.
    uint32_t u32Reserved;       Reserved for later, mostly sharing stats.
   @endverbatim
 *
 * The final layout will be something like this:
 * @verbatim
    RTHCPHYS HCPhys;            The current stuff.
        63:48                   High page id (12+).
        47:12                   The physical page frame number.
        11:0                    Low page id.
    uint32_t fReadOnly : 1;     Whether it's a read-only page (rom or monitored in some way).
    uint32_t u3Type : 3;        The page type {RESERVED, MMIO, MMIO2, ROM, shadowed ROM, RAM}.
    uint32_t u2PhysMon : 2;     Physical access handler type {none, read, write, all}.
    uint32_t u2VirtMon : 2;     Virtual access handler type {none, read, write, all}.
    uint32_t u2State : 2;       The page state { zero, shared, normal, write monitored }.
    uint32_t fWrittenTo : 1;    Whether a write monitored page was written to.
    uint32_t u20Reserved : 20;  Reserved for later, mostly sharing stats.
    uint32_t u32Tracking;       The shadow PT tracking stuff, roughly.
   @endverbatim
 *
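 * As a hedged illustration, the final layout corresponds roughly to a C
 * structure like this (purely a sketch; the fields mirror the listing above
 * and the struct name is invented):
 * @verbatim
    typedef struct SKETCHPGMPAGE
    {
        RTHCPHYS HCPhys;                // 63:48 high page id, 47:12 frame number, 11:0 low page id.
        uint32_t fReadOnly   : 1;       // ROM or otherwise read-only/monitored page.
        uint32_t u3Type      : 3;       // RESERVED, MMIO, MMIO2, ROM, shadowed ROM, RAM.
        uint32_t u2PhysMon   : 2;       // Physical access handler: none/read/write/all.
        uint32_t u2VirtMon   : 2;       // Virtual access handler: none/read/write/all.
        uint32_t u2State     : 2;       // Zero, shared, normal, write monitored.
        uint32_t fWrittenTo  : 1;       // Write monitored page was written to.
        uint32_t u20Reserved : 20;      // Later: mostly sharing stats.
        uint32_t u32Tracking;           // Shadow PT tracking.
    } SKETCHPGMPAGE;                    // 8 + 4 + 4 = 16 bytes/page, matching the costing below.
   @endverbatim
 *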
 * Cost wise, this means we'll double the cost for guest memory. There isn't any way
 * around that, I'm afraid. It means that the cost of dealing out 32GB of memory
 * to one or more VMs is: (32GB >> PAGE_SHIFT) * 16 bytes, or 128MB. Or, another
 * example, the VM heap cost when assigning 1GB to a VM will be: 4MB.
 *
 * A couple of cost examples for the total cost per-VM + kernel.
 *      32-bit Windows and 32-bit linux:
 *          1GB guest ram, 256K pages:    4MB +   2MB(+) =   6MB
 *          4GB guest ram, 1M pages:     16MB +   8MB(+) =  24MB
 *         32GB guest ram, 8M pages:    128MB +  64MB(+) = 192MB
 *      64-bit Windows and 64-bit linux:
 *          1GB guest ram, 256K pages:    4MB +   3MB(+) =   7MB
 *          4GB guest ram, 1M pages:     16MB +  12MB(+) =  28MB
 *         32GB guest ram, 8M pages:    128MB +  96MB(+) = 224MB
 *
 * UPDATE - 2007-09-27:
 *      Will need a ballooned flag/state too because we cannot
 *      trust the guest 100% and reporting the same page as ballooned more
 *      than once will put the GMM off balance.
 *
 *
 * @subsection subsec_pgmPhys_Serializing Serializing Access
 *
 * Initially, we'll try a simple scheme:
 *
 *      - The per-VM RAM tracking structures (PGMRAMRANGE) are only modified
 *        by the EMT thread of that VM while in the pgm critsect.
 *      - Other threads in the VM process that need to make reliable use of
 *        the per-VM RAM tracking structures will enter the critsect.
 *      - No process external thread or kernel thread will ever try to enter
 *        the pgm critical section, as that just won't work.
 *      - The idle thread (and similar threads) doesn't need 100% reliable
 *        data when performing its tasks as the EMT thread will be the one to
 *        do the actual changes later anyway. So, as long as it only accesses
 *        the main ram range, it can do so by somehow preventing the VM from
 *        being destroyed while it works on it...
 *
 *      - The over-commitment management, including the allocating/freeing of
 *        chunks, is serialized by a ring-0 mutex lock (a fast one since the
 *        more mundane mutex implementation is broken on Linux).
 *      - A separate mutex is protecting the set of allocation chunks so
 *        that pages can be shared and/or freed up while some other VM is
 *        allocating more chunks. This mutex can be taken from under the other
 *        one, but not the other way around.
 *
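 * A hedged sketch of the lock ordering rule in the last point (the mutex
 * names and helpers are invented for this illustration; the real locks are
 * the ring-0 mutexes described above):
 * @verbatim
    // Correct: the chunk-set mutex may be taken under the over-commitment mutex.
    AcquireMutex(&g_OverCommitMutex);
    AcquireMutex(&g_ChunkSetMutex);
    // ... work on the chunk set ...
    ReleaseMutex(&g_ChunkSetMutex);
    ReleaseMutex(&g_OverCommitMutex);
    // Wrong: taking the over-commitment mutex while already holding the
    // chunk-set mutex inverts the order and risks deadlock.
   @endverbatim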
 *
 * @subsection subsec_pgmPhys_Request VM Request interface
 *
 * When in ring-0 it will become necessary to send requests to a VM so it can,
 * for instance, move a page while defragmenting during VM destroy. The idle
 * thread will make use of this interface to request VMs to set up shared
 * pages and to perform write monitoring of pages.
 *
 * I would propose an interface similar to the current VMReq interface, similar
 * in that it doesn't require locking and that the one sending the request may
 * wait for completion if it wishes to. This shouldn't be very difficult to
 * realize.
 *
 * The requests themselves are also pretty simple. They are basically:
 *      -# Check that some precondition is still true.
 *      -# Do the update.
 *      -# Update all shadow page tables involved with the page.
 *
 * The 3rd step is identical to what we're already doing when updating a
 * physical handler, see pgmHandlerPhysicalSetRamFlagsAndFlushShadowPTs.
 *
 *
 *
 * @section sec_pgmPhys_MappingCaches Mapping Caches
 *
 * In order to be able to map memory in and out and to be able to support
 * guests with more RAM than we've got virtual address space, we'll be employing
 * a mapping cache. There is already a tiny one for GC (see PGMGCDynMapGCPageEx)
 * and we'll create a similar one for ring-0 unless we decide to set up a dedicated
 * memory context for the HWACCM execution.
 *
 *
 * @subsection subsec_pgmPhys_MappingCaches_R3 Ring-3
 *
 * We've considered implementing the ring-3 mapping cache page based but found
 * that this was bothersome when one had to take into account TLBs+SMP and
 * portability (missing the necessary APIs on several platforms). There were
 * also some performance concerns with this approach which hadn't quite been
 * worked out.
 *
 * Instead, we'll be mapping allocation chunks into the VM process. This simplifies
 * matters greatly since we don't need to invent any new ring-0 stuff,
 * only some minor RTR0MEMOBJ mapping stuff. The main concern compared to the
 * previous idea is that mapping or unmapping a 1MB chunk is more
 * costly than a single page, although how much more costly is uncertain. We'll
 * try to address this by using a very big cache, preferably bigger than the actual
 * VM RAM size if possible. The current VM RAM sizes should give some idea for
 * 32-bit boxes, while on 64-bit we can probably get away with employing an
 * unlimited cache.
 *
 * The cache has two parts, as already indicated: the ring-3 side and the
 * ring-0 side.
 *
 * The ring-0 part will be tied to the page allocator since it will operate on the
 * memory objects it contains. It will therefore require the first ring-0 mutex
 * discussed in @ref subsec_pgmPhys_Serializing. We'll need
 * some double housekeeping wrt who has mapped what, I think, since both
 * VMMR0.r0 and RTR0MemObj will keep track of mapping relations.
 *
 * The ring-3 part will be protected by the pgm critsect. For simplicity, we'll
 * require anyone that desires to do changes to the mapping cache to do that
 * from within this critsect. Alternatively, we could employ a separate critsect
 * for serializing changes to the mapping cache as this would reduce potential
 * contention with other threads accessing mappings unrelated to the changes
 * that are in process. We can see about this later; contention will show
 * up in the statistics anyway, so it'll be simple to tell.
 *
 * The organization of the ring-3 part will be very much like how the allocation
 * chunks are organized in ring-0, that is in an AVL tree by chunk id. To avoid
 * having to walk the tree all the time, we'll have a couple of lookaside entries
 * like we do for I/O ports and MMIO in IOM.
 *
 * The simplified flow of a PGMPhysRead/Write function (see the sketch that
 * follows the list):
 *      -# Enter the PGM critsect.
 *      -# Lookup GCPhys in the ram ranges and get the Page ID.
 *      -# Calc the Allocation Chunk ID from the Page ID.
 *      -# Check the lookaside entries and then the AVL tree for the Chunk ID.
 *         If not found in cache:
 *              -# Call ring-0 and request it to be mapped and supply
 *                 a chunk to be unmapped if the cache is maxed out already.
 *              -# Insert the new mapping into the AVL tree (id + R3 address).
 *      -# Update the relevant lookaside entry and return the mapping address.
 *      -# Do the read/write according to monitoring flags and everything.
 *      -# Leave the critsect.
 *
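 * A hedged sketch of that flow (all helper names and the chunk-id shift are
 * invented for this illustration; the real path goes through the ram range
 * lookup and the chunk AVL tree):
 * @verbatim
    int SketchPhysRead(SKETCHVM *pVM, uint64_t GCPhys, void *pvDst, size_t cb)
    {
        EnterCritSect(&pVM->PgmCritSect);                       // 1. serialize
        uint32_t idPage  = RamRangeLookupPageId(pVM, GCPhys);   // 2. GCPhys -> page id
        uint32_t idChunk = idPage >> CHUNK_SHIFT;               // 3. page id -> chunk id
        void    *pvChunk = ChunkCacheLookup(pVM, idChunk);      // 4. lookaside, then AVL tree
        if (!pvChunk)
            pvChunk = MapChunkFromRing0(pVM, idChunk);          //    may unmap an old chunk too
        memcpy(pvDst, (uint8_t *)pvChunk + (GCPhys & CHUNK_OFFSET_MASK), cb); // 5. the access
        LeaveCritSect(&pVM->PgmCritSect);                       // 6. done
        return 0;
    }
   @endverbatim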
 *
 * @section sec_pgmPhys_Fallback Fallback
 *
 * Currently, all the "second tier" hosts will not support the RTR0MemObjAllocPhysNC
 * API and thus require a fallback.
 *
 * So, when RTR0MemObjAllocPhysNC returns VERR_NOT_SUPPORTED the page allocator
 * will return to the ring-3 caller (and later ring-0) and ask it to seed
 * the page allocator with some fresh pages (VERR_GMM_SEED_ME). Ring-3 will
 * then perform an SUPPageAlloc(cbChunk >> PAGE_SHIFT) call and make a
 * "SeededAllocPages" call to ring-0.
 *
 * The first time ring-0 sees the VERR_NOT_SUPPORTED failure it will disable
 * all page sharing (zero page detection will continue). It will also force
 * all allocations to come from the VM which seeded the page. Both these
 * measures are taken to make sure that there will never be any need for
 * mapping anything into ring-3 - everything will be mapped already.
 *
 * Whether we'll continue to use the current MM locked memory management
 * for this I don't quite know (I'd prefer not to and just ditch that
 * altogether); we'll see what's simplest to do.
 *
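 * A hedged sketch of the seeding round trip (the status codes and the
 * SUPPageAlloc call are quoted from the text above, though its two-argument
 * form is an assumption; the other helper names are invented):
 * @verbatim
    // Ring-3 side: keep feeding ring-0 fresh pages until the allocation sticks.
    int rc = SketchRing0AllocPage(pVM, &idPage);
    while (rc == VERR_GMM_SEED_ME)
    {
        void *pvChunk = NULL;
        rc = SUPPageAlloc(cbChunk >> PAGE_SHIFT, &pvChunk);     // fresh ring-3 pages
        if (RT_SUCCESS(rc))
            rc = SketchRing0SeededAllocPages(pVM, pvChunk, cbChunk >> PAGE_SHIFT);
        if (RT_SUCCESS(rc))
            rc = SketchRing0AllocPage(pVM, &idPage);            // retry the allocation
    }
   @endverbatim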
 *
 *
 * @section sec_pgmPhys_Changes Changes
 *
 * Breakdown of the changes involved?
 */


/** Saved state data unit version. */
#define PGM_SAVED_STATE_VERSION     6

/*******************************************************************************
*   Header Files                                                               *
*******************************************************************************/
#define LOG_GROUP LOG_GROUP_PGM
#include <VBox/dbgf.h>
#include <VBox/pgm.h>
#include <VBox/cpum.h>
#include <VBox/iom.h>
#include <VBox/sup.h>
#include <VBox/mm.h>
#include <VBox/em.h>
#include <VBox/stam.h>
#include <VBox/rem.h>
#include <VBox/selm.h>
#include <VBox/ssm.h>
#include "PGMInternal.h"
#include <VBox/vm.h>
#include <VBox/dbg.h>
#include <VBox/hwaccm.h>

#include <iprt/assert.h>
#include <iprt/alloc.h>
#include <iprt/asm.h>
#include <iprt/thread.h>
#include <iprt/string.h>
#ifdef DEBUG_bird
# include <iprt/env.h>
#endif
#include <VBox/param.h>
#include <VBox/err.h>



/*******************************************************************************
*   Internal Functions                                                         *
*******************************************************************************/
static int                pgmR3InitPaging(PVM pVM);
static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(int)  pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser);
static DECLCALLBACK(int)  pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
static DECLCALLBACK(int)  pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
#ifdef VBOX_STRICT
static DECLCALLBACK(void) pgmR3ResetNoMorePhysWritesFlag(PVM pVM, VMSTATE enmState, VMSTATE enmOldState, void *pvUser);
#endif
static DECLCALLBACK(int)  pgmR3Save(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(int)  pgmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version);
static int                pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0);
static void               pgmR3ModeDataSwitch(PVM pVM, PGMMODE enmShw, PGMMODE enmGst);
static PGMMODE            pgmR3CalcShadowMode(PVM pVM, PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher);

#ifdef VBOX_WITH_STATISTICS
static void pgmR3InitStats(PVM pVM);
#endif

#ifdef VBOX_WITH_DEBUGGER
/** @todo all but the two last commands must be converted to 'info'. */
static DECLCALLBACK(int) pgmR3CmdRam(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
static DECLCALLBACK(int) pgmR3CmdMap(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
static DECLCALLBACK(int) pgmR3CmdSync(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
static DECLCALLBACK(int) pgmR3CmdSyncAlways(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
# ifdef VBOX_STRICT
static DECLCALLBACK(int) pgmR3CmdAssertCR3(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
# endif
#endif


/*******************************************************************************
*   Global Variables                                                           *
*******************************************************************************/
#ifdef VBOX_WITH_DEBUGGER
/** Command descriptors. */
static const DBGCCMD    g_aCmds[] =
{
    /* pszCmd,          cArgsMin, cArgsMax, paArgDesc, cArgDescs, pResultDesc, fFlags, pfnHandler          pszSyntax, ....pszDescription */
    { "pgmram",         0, 0,     NULL,     0,         NULL,      0,      pgmR3CmdRam,        "",        "Display the ram ranges." },
    { "pgmmap",         0, 0,     NULL,     0,         NULL,      0,      pgmR3CmdMap,        "",        "Display the mapping ranges." },
    { "pgmsync",        0, 0,     NULL,     0,         NULL,      0,      pgmR3CmdSync,       "",        "Sync the CR3 page." },
#ifdef VBOX_STRICT
    { "pgmassertcr3",   0, 0,     NULL,     0,         NULL,      0,      pgmR3CmdAssertCR3,  "",        "Check the shadow CR3 mapping." },
#endif
    { "pgmsyncalways",  0, 0,     NULL,     0,         NULL,      0,      pgmR3CmdSyncAlways, "",        "Toggle permanent CR3 syncing." },
};
#endif




/*
 * Shadow - 32-bit mode
 */
#define PGM_SHW_TYPE                PGM_TYPE_32BIT
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_32BIT(name)
#define PGM_SHW_NAME_RC_STR(name)   PGM_SHW_NAME_RC_32BIT_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_32BIT_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE                PGM_TYPE_REAL
#define PGM_GST_NAME(name)          PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_32BIT_REAL(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_32BIT_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_32BIT_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_32BIT_PT_FOR_PHYS
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_32BIT_PD_PHYS
#include "PGMBth.h"
#include "PGMGstDefs.h"
#include "PGMGst.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE                PGM_TYPE_PROT
#define PGM_GST_NAME(name)          PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_32BIT_PROT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_32BIT_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_32BIT_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_32BIT_PT_FOR_PHYS
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_32BIT_PD_PHYS
#include "PGMBth.h"
#include "PGMGstDefs.h"
#include "PGMGst.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE                PGM_TYPE_32BIT
#define PGM_GST_NAME(name)          PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_32BIT_32BIT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_32BIT_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_32BIT_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_32BIT_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_32BIT_PT_FOR_32BIT_4MB
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_32BIT_PD
#include "PGMBth.h"
#include "PGMGstDefs.h"
#include "PGMGst.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - PAE mode
 */
#define PGM_SHW_TYPE                PGM_TYPE_PAE
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_PAE(name)
#define PGM_SHW_NAME_RC_STR(name)   PGM_SHW_NAME_RC_PAE_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_REAL(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE                PGM_TYPE_REAL
#define PGM_GST_NAME(name)          PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_REAL(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_PAE_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_PAE_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_PAE_PDPT_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE                PGM_TYPE_PROT
#define PGM_GST_NAME(name)          PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_PROT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_PAE_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_PAE_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_PAE_PDPT_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE                PGM_TYPE_32BIT
#define PGM_GST_NAME(name)          PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_32BIT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_PAE_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_PAE_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_PAE_PDPT_FOR_32BIT
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE                PGM_TYPE_PAE
#define PGM_GST_NAME(name)          PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_PAE(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_PAE_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_PAE_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#define BTH_PGMPOOLKIND_ROOT        PGMPOOLKIND_PAE_PDPT
#include "PGMBth.h"
#include "PGMGstDefs.h"
#include "PGMGst.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - AMD64 mode
 */
#define PGM_SHW_TYPE                PGM_TYPE_AMD64
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_AMD64(name)
#define PGM_SHW_NAME_RC_STR(name)   PGM_SHW_NAME_RC_AMD64_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_AMD64_STR(name)
#include "PGMShw.h"

#ifdef VBOX_WITH_64_BITS_GUESTS
/* Guest - AMD64 mode */
# define PGM_GST_TYPE               PGM_TYPE_AMD64
# define PGM_GST_NAME(name)         PGM_GST_NAME_AMD64(name)
# define PGM_GST_NAME_RC_STR(name)  PGM_GST_NAME_RC_AMD64_STR(name)
# define PGM_GST_NAME_R0_STR(name)  PGM_GST_NAME_R0_AMD64_STR(name)
# define PGM_BTH_NAME(name)         PGM_BTH_NAME_AMD64_AMD64(name)
# define PGM_BTH_NAME_RC_STR(name)  PGM_BTH_NAME_RC_AMD64_AMD64_STR(name)
# define PGM_BTH_NAME_R0_STR(name)  PGM_BTH_NAME_R0_AMD64_AMD64_STR(name)
# define BTH_PGMPOOLKIND_PT_FOR_PT  PGMPOOLKIND_PAE_PT_FOR_PAE_PT
# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
# define BTH_PGMPOOLKIND_ROOT       PGMPOOLKIND_64BIT_PML4
# include "PGMBth.h"
# include "PGMGstDefs.h"
# include "PGMGst.h"
# undef BTH_PGMPOOLKIND_PT_FOR_BIG
# undef BTH_PGMPOOLKIND_PT_FOR_PT
# undef BTH_PGMPOOLKIND_ROOT
# undef PGM_BTH_NAME
# undef PGM_BTH_NAME_RC_STR
# undef PGM_BTH_NAME_R0_STR
# undef PGM_GST_TYPE
# undef PGM_GST_NAME
# undef PGM_GST_NAME_RC_STR
# undef PGM_GST_NAME_R0_STR
#endif /* VBOX_WITH_64_BITS_GUESTS */

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - Nested paging mode
 */
#define PGM_SHW_TYPE                PGM_TYPE_NESTED
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_NESTED(name)
#define PGM_SHW_NAME_RC_STR(name)   PGM_SHW_NAME_RC_NESTED_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_NESTED_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE                PGM_TYPE_REAL
#define PGM_GST_NAME(name)          PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_NESTED_REAL(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_NESTED_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_NESTED_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE                PGM_TYPE_PROT
#define PGM_GST_NAME(name)          PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_NESTED_PROT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_NESTED_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_NESTED_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE                PGM_TYPE_32BIT
#define PGM_GST_NAME(name)          PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_NESTED_32BIT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_NESTED_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_NESTED_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE                PGM_TYPE_PAE
#define PGM_GST_NAME(name)          PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_NESTED_PAE(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_NESTED_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_NESTED_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

#ifdef VBOX_WITH_64_BITS_GUESTS
/* Guest - AMD64 mode */
# define PGM_GST_TYPE               PGM_TYPE_AMD64
# define PGM_GST_NAME(name)         PGM_GST_NAME_AMD64(name)
# define PGM_GST_NAME_RC_STR(name)  PGM_GST_NAME_RC_AMD64_STR(name)
# define PGM_GST_NAME_R0_STR(name)  PGM_GST_NAME_R0_AMD64_STR(name)
# define PGM_BTH_NAME(name)         PGM_BTH_NAME_NESTED_AMD64(name)
# define PGM_BTH_NAME_RC_STR(name)  PGM_BTH_NAME_RC_NESTED_AMD64_STR(name)
# define PGM_BTH_NAME_R0_STR(name)  PGM_BTH_NAME_R0_NESTED_AMD64_STR(name)
# define BTH_PGMPOOLKIND_PT_FOR_PT  PGMPOOLKIND_PAE_PT_FOR_PAE_PT
# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
# include "PGMGstDefs.h"
# include "PGMBth.h"
# undef BTH_PGMPOOLKIND_PT_FOR_BIG
# undef BTH_PGMPOOLKIND_PT_FOR_PT
# undef PGM_BTH_NAME
# undef PGM_BTH_NAME_RC_STR
# undef PGM_BTH_NAME_R0_STR
# undef PGM_GST_TYPE
# undef PGM_GST_NAME
# undef PGM_GST_NAME_RC_STR
# undef PGM_GST_NAME_R0_STR
#endif /* VBOX_WITH_64_BITS_GUESTS */

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - EPT
 */
#define PGM_SHW_TYPE                PGM_TYPE_EPT
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_EPT(name)
#define PGM_SHW_NAME_RC_STR(name)   PGM_SHW_NAME_RC_EPT_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_EPT_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE                PGM_TYPE_REAL
#define PGM_GST_NAME(name)          PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_EPT_REAL(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_EPT_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_EPT_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE                PGM_TYPE_PROT
#define PGM_GST_NAME(name)          PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_EPT_PROT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_EPT_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_EPT_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE                PGM_TYPE_32BIT
#define PGM_GST_NAME(name)          PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_EPT_32BIT(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_EPT_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_EPT_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE                PGM_TYPE_PAE
#define PGM_GST_NAME(name)          PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_RC_STR(name)   PGM_GST_NAME_RC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_EPT_PAE(name)
#define PGM_BTH_NAME_RC_STR(name)   PGM_BTH_NAME_RC_EPT_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_EPT_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

#ifdef VBOX_WITH_64_BITS_GUESTS
/* Guest - AMD64 mode */
# define PGM_GST_TYPE               PGM_TYPE_AMD64
# define PGM_GST_NAME(name)         PGM_GST_NAME_AMD64(name)
# define PGM_GST_NAME_RC_STR(name)  PGM_GST_NAME_RC_AMD64_STR(name)
# define PGM_GST_NAME_R0_STR(name)  PGM_GST_NAME_R0_AMD64_STR(name)
# define PGM_BTH_NAME(name)         PGM_BTH_NAME_EPT_AMD64(name)
# define PGM_BTH_NAME_RC_STR(name)  PGM_BTH_NAME_RC_EPT_AMD64_STR(name)
# define PGM_BTH_NAME_R0_STR(name)  PGM_BTH_NAME_R0_EPT_AMD64_STR(name)
# define BTH_PGMPOOLKIND_PT_FOR_PT  PGMPOOLKIND_PAE_PT_FOR_PAE_PT
# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
# include "PGMGstDefs.h"
# include "PGMBth.h"
# undef BTH_PGMPOOLKIND_PT_FOR_BIG
# undef BTH_PGMPOOLKIND_PT_FOR_PT
# undef PGM_BTH_NAME
# undef PGM_BTH_NAME_RC_STR
# undef PGM_BTH_NAME_R0_STR
# undef PGM_GST_TYPE
# undef PGM_GST_NAME
# undef PGM_GST_NAME_RC_STR
# undef PGM_GST_NAME_R0_STR
#endif /* VBOX_WITH_64_BITS_GUESTS */

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR



1153/**
1154 * Initiates the paging of VM.
1155 *
1156 * @returns VBox status code.
1157 * @param pVM Pointer to VM structure.
1158 */
1159VMMR3DECL(int) PGMR3Init(PVM pVM)
1160{
1161 LogFlow(("PGMR3Init:\n"));
1162 PCFGMNODE pCfgPGM = CFGMR3GetChild(CFGMR3GetRoot(pVM), "/PGM");
1163 int rc;
1164
1165 /*
1166 * Assert alignment and sizes.
1167 */
1168 AssertRelease(sizeof(pVM->pgm.s) <= sizeof(pVM->pgm.padding));
1169
1170 /*
1171 * Init the structure.
1172 */
1173 pVM->pgm.s.offVM = RT_OFFSETOF(VM, pgm.s);
1174 pVM->pgm.s.offVCpu = RT_OFFSETOF(VMCPU, pgm.s);
1175 pVM->pgm.s.enmShadowMode = PGMMODE_INVALID;
1176 pVM->pgm.s.enmGuestMode = PGMMODE_INVALID;
1177 pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;
1178 pVM->pgm.s.GCPhysCR3 = NIL_RTGCPHYS;
1179#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1180 pVM->pgm.s.GCPhysGstCR3Monitored = NIL_RTGCPHYS;
1181#endif
1182 pVM->pgm.s.fA20Enabled = true;
1183 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(32) - 1; /* default; checked later */
1184 pVM->pgm.s.pGstPaePdptR3 = NULL;
1185#ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1186 pVM->pgm.s.pGstPaePdptR0 = NIL_RTR0PTR;
1187#endif
1188 pVM->pgm.s.pGstPaePdptRC = NIL_RTRCPTR;
1189 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apGstPaePDsR3); i++)
1190 {
1191 pVM->pgm.s.apGstPaePDsR3[i] = NULL;
1192#ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1193 pVM->pgm.s.apGstPaePDsR0[i] = NIL_RTR0PTR;
1194#endif
1195 pVM->pgm.s.apGstPaePDsRC[i] = NIL_RTRCPTR;
1196 pVM->pgm.s.aGCPhysGstPaePDs[i] = NIL_RTGCPHYS;
1197 pVM->pgm.s.aGCPhysGstPaePDsMonitored[i] = NIL_RTGCPHYS;
1198 }
1199
1200 rc = CFGMR3QueryBoolDef(pCfgPGM, "RamPreAlloc", &pVM->pgm.s.fRamPreAlloc, false);
1201 AssertLogRelRCReturn(rc, rc);
1202
1203#if HC_ARCH_BITS == 64
1204 rc = CFGMR3QueryU32Def(pCfgPGM, "MaxRing3Chunks", &pVM->pgm.s.ChunkR3Map.cMax, UINT32_MAX);
1205#else
1206 rc = CFGMR3QueryU32Def(pCfgPGM, "MaxRing3Chunks", &pVM->pgm.s.ChunkR3Map.cMax, _1G / GMM_CHUNK_SIZE);
1207#endif
1208 AssertLogRelRCReturn(rc, rc);
1209 for (uint32_t i = 0; i < RT_ELEMENTS(pVM->pgm.s.ChunkR3Map.Tlb.aEntries); i++)
1210 pVM->pgm.s.ChunkR3Map.Tlb.aEntries[i].idChunk = NIL_GMM_CHUNKID;
1211
1212 /*
1213 * Get the configured RAM size - to estimate saved state size.
1214 */
1215 uint64_t cbRam;
1216 rc = CFGMR3QueryU64(CFGMR3GetRoot(pVM), "RamSize", &cbRam);
1217 if (rc == VERR_CFGM_VALUE_NOT_FOUND)
1218 cbRam = pVM->pgm.s.cbRamSize = 0;
1219 else if (RT_SUCCESS(rc))
1220 {
1221 if (cbRam < PAGE_SIZE)
1222 cbRam = 0;
1223 cbRam = RT_ALIGN_64(cbRam, PAGE_SIZE);
1224 pVM->pgm.s.cbRamSize = (RTUINT)cbRam;
1225 }
1226 else
1227 {
1228 AssertMsgFailed(("Configuration error: Failed to query integer \"RamSize\", rc=%Rrc.\n", rc));
1229 return rc;
1230 }
1231
1232 /*
1233 * Register callbacks, string formatters and the saved state data unit.
1234 */
1235#ifdef VBOX_STRICT
1236 VMR3AtStateRegister(pVM, pgmR3ResetNoMorePhysWritesFlag, NULL);
1237#endif
1238 PGMRegisterStringFormatTypes();
1239
1240 rc = SSMR3RegisterInternal(pVM, "pgm", 1, PGM_SAVED_STATE_VERSION, (size_t)cbRam + sizeof(PGM),
1241 NULL, pgmR3Save, NULL,
1242 NULL, pgmR3Load, NULL);
1243 if (RT_FAILURE(rc))
1244 return rc;
1245
1246 /*
1247 * Initialize the PGM critical section and flush the phys TLBs
1248 */
1249 rc = PDMR3CritSectInit(pVM, &pVM->pgm.s.CritSect, "PGM");
1250 AssertRCReturn(rc, rc);
1251
1252 PGMR3PhysChunkInvalidateTLB(pVM);
1253 PGMPhysInvalidatePageR3MapTLB(pVM);
1254 PGMPhysInvalidatePageR0MapTLB(pVM);
1255 PGMPhysInvalidatePageGCMapTLB(pVM);
1256
1257#ifdef VBOX_WITH_NEW_PHYS_CODE
1258 /*
1259 * For the time being we sport a full set of handy pages in addition to the base
1260 * memory to simplify things.
1261 */
1262 rc = MMR3ReserveHandyPages(pVM, RT_ELEMENTS(pVM->pgm.s.aHandyPages));
1263 AssertRCReturn(rc, rc);
1264#endif
1265
1266 /*
1267 * Trees
1268 */
1269 rc = MMHyperAlloc(pVM, sizeof(PGMTREES), 0, MM_TAG_PGM, (void **)&pVM->pgm.s.pTreesR3);
1270 if (RT_SUCCESS(rc))
1271 {
1272 pVM->pgm.s.pTreesR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pTreesR3);
1273 pVM->pgm.s.pTreesRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pTreesR3);
1274
1275 /*
1276 * Alocate the zero page.
1277 */
1278 rc = MMHyperAlloc(pVM, PAGE_SIZE, PAGE_SIZE, MM_TAG_PGM, &pVM->pgm.s.pvZeroPgR3);
1279 }
1280 if (RT_SUCCESS(rc))
1281 {
1282 pVM->pgm.s.pvZeroPgGC = MMHyperR3ToRC(pVM, pVM->pgm.s.pvZeroPgR3);
1283 pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
1284 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTHCPHYS);
1285 pVM->pgm.s.HCPhysZeroPg = MMR3HyperHCVirt2HCPhys(pVM, pVM->pgm.s.pvZeroPgR3);
1286 AssertRelease(pVM->pgm.s.HCPhysZeroPg != NIL_RTHCPHYS);
1287
1288 /*
1289 * Init the paging.
1290 */
1291 rc = pgmR3InitPaging(pVM);
1292 }
1293 if (RT_SUCCESS(rc))
1294 {
1295 /*
1296 * Init the page pool.
1297 */
1298 rc = pgmR3PoolInit(pVM);
1299 }
1300#ifdef VBOX_WITH_PGMPOOL_PAGING_ONLY
1301 if (RT_SUCCESS(rc))
1302 rc = PGMR3ChangeMode(pVM, PGMMODE_REAL);
1303#endif
1304 if (RT_SUCCESS(rc))
1305 {
1306 /*
1307 * Info & statistics
1308 */
1309 DBGFR3InfoRegisterInternal(pVM, "mode",
1310 "Shows the current paging mode. "
1311 "Recognizes 'all', 'guest', 'shadow' and 'host' as arguments, defaulting to 'all' if nothing's given.",
1312 pgmR3InfoMode);
1313 DBGFR3InfoRegisterInternal(pVM, "pgmcr3",
1314 "Dumps all the entries in the top level paging table. No arguments.",
1315 pgmR3InfoCr3);
1316 DBGFR3InfoRegisterInternal(pVM, "phys",
1317 "Dumps all the physical address ranges. No arguments.",
1318 pgmR3PhysInfo);
1319 DBGFR3InfoRegisterInternal(pVM, "handlers",
1320 "Dumps physical, virtual and hyper virtual handlers. "
1321                                "Pass 'phys', 'virt' or 'hyper' as argument if only one kind is wanted. "
1322                                "Add 'nost' if the statistics are unwanted; use together with 'all' or an explicit selection.",
1323 pgmR3InfoHandlers);
1324 DBGFR3InfoRegisterInternal(pVM, "mappings",
1325 "Dumps guest mappings.",
1326 pgmR3MapInfo);
1327
1328 STAM_REL_REG(pVM, &pVM->pgm.s.cGuestModeChanges, STAMTYPE_COUNTER, "/PGM/cGuestModeChanges", STAMUNIT_OCCURENCES, "Number of guest mode changes.");
1329 STAM_REL_REG(pVM, &pVM->pgm.s.cRelocations, STAMTYPE_COUNTER, "/PGM/cRelocations", STAMUNIT_OCCURENCES, "Number of hypervisor relocations.");
1330#ifdef VBOX_WITH_STATISTICS
1331 pgmR3InitStats(pVM);
1332#endif
1333#ifdef VBOX_WITH_DEBUGGER
1334 /*
1335 * Debugger commands.
1336 */
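        /* The DBGC command descriptors are global to the process, so make
           sure we only register them once even if several VMs are created. */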
1337 static bool fRegisteredCmds = false;
1338 if (!fRegisteredCmds)
1339 {
1340 int rc = DBGCRegisterCommands(&g_aCmds[0], RT_ELEMENTS(g_aCmds));
1341 if (RT_SUCCESS(rc))
1342 fRegisteredCmds = true;
1343 }
1344#endif
1345 return VINF_SUCCESS;
1346 }
1347
1348    /* Almost no cleanup necessary; MM frees all memory. */
1349 PDMR3CritSectDelete(&pVM->pgm.s.CritSect);
1350
1351 return rc;
1352}
1353
1354
1355/**
1356 * Initializes the per-VCPU PGM.
1357 *
1358 * @returns VBox status code.
1359 * @param pVM The VM to operate on.
1360 */
1361VMMR3DECL(int) PGMR3InitCPU(PVM pVM)
1362{
1363 LogFlow(("PGMR3InitCPU\n"));
1364 return VINF_SUCCESS;
1365}
1366
1367
1368/**
1369 * Init paging.
1370 *
1371 * Since we need to know what mode the host is operating in before we can choose
1372 * the right paging functions for it, we have to delay this until R0 has been
1373 * initialized.
1374 *
1375 * @returns VBox status code.
1376 * @param pVM VM handle.
1377 */
1378static int pgmR3InitPaging(PVM pVM)
1379{
1380 /*
1381 * Force a recalculation of modes and switcher so everyone gets notified.
1382 */
1383 pVM->pgm.s.enmShadowMode = PGMMODE_INVALID;
1384 pVM->pgm.s.enmGuestMode = PGMMODE_INVALID;
1385 pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;
1386
1387 /*
1388 * Allocate static mapping space for whatever the cr3 register
1389 * points to and in the case of PAE mode to the 4 PDs.
1390 */
1391 int rc = MMR3HyperReserve(pVM, PAGE_SIZE * 5, "CR3 mapping", &pVM->pgm.s.GCPtrCR3Mapping);
1392 if (RT_FAILURE(rc))
1393 {
1394        AssertMsgFailed(("Failed to reserve five pages for the CR3 mapping in HMA, rc=%Rrc\n", rc));
1395 return rc;
1396 }
1397 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
1398
1399 /*
1400 * Allocate pages for the three possible intermediate contexts
1401 * (AMD64, PAE and plain 32-Bit). We maintain all three contexts
1402     * for the sake of simplicity. The AMD64 context uses the PAE pages for the
1403     * lower levels, making the total number of pages 12 (3 + 7 + 2).
1404     *
1405     * We assume that two page tables will be enough for the core code
1406     * mappings (HC virtual and identity).
1407 */
1408 pVM->pgm.s.pInterPD = (PX86PD)MMR3PageAllocLow(pVM);
1409 pVM->pgm.s.apInterPTs[0] = (PX86PT)MMR3PageAllocLow(pVM);
1410 pVM->pgm.s.apInterPTs[1] = (PX86PT)MMR3PageAllocLow(pVM);
1411 pVM->pgm.s.apInterPaePTs[0] = (PX86PTPAE)MMR3PageAlloc(pVM);
1412 pVM->pgm.s.apInterPaePTs[1] = (PX86PTPAE)MMR3PageAlloc(pVM);
1413 pVM->pgm.s.apInterPaePDs[0] = (PX86PDPAE)MMR3PageAlloc(pVM);
1414 pVM->pgm.s.apInterPaePDs[1] = (PX86PDPAE)MMR3PageAlloc(pVM);
1415 pVM->pgm.s.apInterPaePDs[2] = (PX86PDPAE)MMR3PageAlloc(pVM);
1416 pVM->pgm.s.apInterPaePDs[3] = (PX86PDPAE)MMR3PageAlloc(pVM);
1417 pVM->pgm.s.pInterPaePDPT = (PX86PDPT)MMR3PageAllocLow(pVM);
1418 pVM->pgm.s.pInterPaePDPT64 = (PX86PDPT)MMR3PageAllocLow(pVM);
1419 pVM->pgm.s.pInterPaePML4 = (PX86PML4)MMR3PageAllocLow(pVM);
1420 if ( !pVM->pgm.s.pInterPD
1421 || !pVM->pgm.s.apInterPTs[0]
1422 || !pVM->pgm.s.apInterPTs[1]
1423 || !pVM->pgm.s.apInterPaePTs[0]
1424 || !pVM->pgm.s.apInterPaePTs[1]
1425 || !pVM->pgm.s.apInterPaePDs[0]
1426 || !pVM->pgm.s.apInterPaePDs[1]
1427 || !pVM->pgm.s.apInterPaePDs[2]
1428 || !pVM->pgm.s.apInterPaePDs[3]
1429 || !pVM->pgm.s.pInterPaePDPT
1430 || !pVM->pgm.s.pInterPaePDPT64
1431 || !pVM->pgm.s.pInterPaePML4)
1432 {
1433 AssertMsgFailed(("Failed to allocate pages for the intermediate context!\n"));
1434 return VERR_NO_PAGE_MEMORY;
1435 }
1436
1437 pVM->pgm.s.HCPhysInterPD = MMPage2Phys(pVM, pVM->pgm.s.pInterPD);
1438 AssertRelease(pVM->pgm.s.HCPhysInterPD != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPD & PAGE_OFFSET_MASK));
1439 pVM->pgm.s.HCPhysInterPaePDPT = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT);
1440 AssertRelease(pVM->pgm.s.HCPhysInterPaePDPT != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePDPT & PAGE_OFFSET_MASK));
1441 pVM->pgm.s.HCPhysInterPaePML4 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePML4);
1442 AssertRelease(pVM->pgm.s.HCPhysInterPaePML4 != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePML4 & PAGE_OFFSET_MASK) && pVM->pgm.s.HCPhysInterPaePML4 < 0xffffffff);
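    /* Note: the PAE PML4 is allocated low and asserted to be below 4GB,
       presumably because it must be loadable into CR3 while the switcher is
       still executing 32-bit code. */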
1443
1444 /*
1445 * Initialize the pages, setting up the PML4 and PDPT for repetitive 4GB action.
1446 */
1447 ASMMemZeroPage(pVM->pgm.s.pInterPD);
1448 ASMMemZeroPage(pVM->pgm.s.apInterPTs[0]);
1449 ASMMemZeroPage(pVM->pgm.s.apInterPTs[1]);
1450
1451 ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[0]);
1452 ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[1]);
1453
1454 ASMMemZeroPage(pVM->pgm.s.pInterPaePDPT);
1455 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apInterPaePDs); i++)
1456 {
1457 ASMMemZeroPage(pVM->pgm.s.apInterPaePDs[i]);
1458 pVM->pgm.s.pInterPaePDPT->a[i].u = X86_PDPE_P | PGM_PLXFLAGS_PERMANENT
1459 | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[i]);
1460 }
1461
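    /* Point the 64-bit PDPT entries at the four PAE PDs round-robin, so each
       4GB slice of the intermediate AMD64 address space repeats the same low
       4GB mapping; the PML4 entries set up below all reference this PDPT. */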
1462 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.pInterPaePDPT64->a); i++)
1463 {
1464 const unsigned iPD = i % RT_ELEMENTS(pVM->pgm.s.apInterPaePDs);
1465 pVM->pgm.s.pInterPaePDPT64->a[i].u = X86_PDPE_P | X86_PDPE_RW | X86_PDPE_US | X86_PDPE_A | PGM_PLXFLAGS_PERMANENT
1466 | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[iPD]);
1467 }
1468
1469 RTHCPHYS HCPhysInterPaePDPT64 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64);
1470 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.pInterPaePML4->a); i++)
1471 pVM->pgm.s.pInterPaePML4->a[i].u = X86_PML4E_P | X86_PML4E_RW | X86_PML4E_US | X86_PML4E_A | PGM_PLXFLAGS_PERMANENT
1472 | HCPhysInterPaePDPT64;
1473
1474 /*
1475 * Allocate pages for the three possible guest contexts (AMD64, PAE and plain 32-Bit).
1476     * We allocate pages for all three possibilities in order to simplify mappings and
1477     * avoid resource failures during mode switches. So, we need to cover all levels
1478     * of the first 4GB down to PD level.
1479 * As with the intermediate context, AMD64 uses the PAE PDPT and PDs.
1480 */
1481#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1482 pVM->pgm.s.pShw32BitPdR3 = (PX86PD)MMR3PageAllocLow(pVM);
1483# ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1484 pVM->pgm.s.pShw32BitPdR0 = (uintptr_t)pVM->pgm.s.pShw32BitPdR3;
1485# endif
1486 pVM->pgm.s.apShwPaePDsR3[0] = (PX86PDPAE)MMR3PageAlloc(pVM);
1487 pVM->pgm.s.apShwPaePDsR3[1] = (PX86PDPAE)MMR3PageAlloc(pVM);
1488 AssertRelease((uintptr_t)pVM->pgm.s.apShwPaePDsR3[0] + PAGE_SIZE == (uintptr_t)pVM->pgm.s.apShwPaePDsR3[1]);
1489 pVM->pgm.s.apShwPaePDsR3[2] = (PX86PDPAE)MMR3PageAlloc(pVM);
1490 AssertRelease((uintptr_t)pVM->pgm.s.apShwPaePDsR3[1] + PAGE_SIZE == (uintptr_t)pVM->pgm.s.apShwPaePDsR3[2]);
1491 pVM->pgm.s.apShwPaePDsR3[3] = (PX86PDPAE)MMR3PageAlloc(pVM);
1492 AssertRelease((uintptr_t)pVM->pgm.s.apShwPaePDsR3[2] + PAGE_SIZE == (uintptr_t)pVM->pgm.s.apShwPaePDsR3[3]);
1493# ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1494 pVM->pgm.s.apShwPaePDsR0[0] = (uintptr_t)pVM->pgm.s.apShwPaePDsR3[0];
1495 pVM->pgm.s.apShwPaePDsR0[1] = (uintptr_t)pVM->pgm.s.apShwPaePDsR3[1];
1496 pVM->pgm.s.apShwPaePDsR0[2] = (uintptr_t)pVM->pgm.s.apShwPaePDsR3[2];
1497 pVM->pgm.s.apShwPaePDsR0[3] = (uintptr_t)pVM->pgm.s.apShwPaePDsR3[3];
1498# endif
1499 pVM->pgm.s.pShwPaePdptR3 = (PX86PDPT)MMR3PageAllocLow(pVM);
1500# ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1501 pVM->pgm.s.pShwPaePdptR0 = (uintptr_t)pVM->pgm.s.pShwPaePdptR3;
1502# endif
1503 pVM->pgm.s.pShwNestedRootR3 = MMR3PageAllocLow(pVM);
1504# ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1505 pVM->pgm.s.pShwNestedRootR0 = (uintptr_t)pVM->pgm.s.pShwNestedRootR3;
1506# endif
1507#endif /* VBOX_WITH_PGMPOOL_PAGING_ONLY */
1508
1509#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1510 if ( !pVM->pgm.s.pShw32BitPdR3
1511 || !pVM->pgm.s.apShwPaePDsR3[0]
1512 || !pVM->pgm.s.apShwPaePDsR3[1]
1513 || !pVM->pgm.s.apShwPaePDsR3[2]
1514 || !pVM->pgm.s.apShwPaePDsR3[3]
1515 || !pVM->pgm.s.pShwPaePdptR3
1516 || !pVM->pgm.s.pShwNestedRootR3)
1517 {
1518        AssertMsgFailed(("Failed to allocate pages for the shadow context!\n"));
1519 return VERR_NO_PAGE_MEMORY;
1520 }
1521#endif
1522
1523 /* get physical addresses. */
1524#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1525 pVM->pgm.s.HCPhysShw32BitPD = MMPage2Phys(pVM, pVM->pgm.s.pShw32BitPdR3);
1526 Assert(MMPagePhys2Page(pVM, pVM->pgm.s.HCPhysShw32BitPD) == pVM->pgm.s.pShw32BitPdR3);
1527 pVM->pgm.s.aHCPhysPaePDs[0] = MMPage2Phys(pVM, pVM->pgm.s.apShwPaePDsR3[0]);
1528 pVM->pgm.s.aHCPhysPaePDs[1] = MMPage2Phys(pVM, pVM->pgm.s.apShwPaePDsR3[1]);
1529 pVM->pgm.s.aHCPhysPaePDs[2] = MMPage2Phys(pVM, pVM->pgm.s.apShwPaePDsR3[2]);
1530 pVM->pgm.s.aHCPhysPaePDs[3] = MMPage2Phys(pVM, pVM->pgm.s.apShwPaePDsR3[3]);
1531 pVM->pgm.s.HCPhysShwPaePdpt = MMPage2Phys(pVM, pVM->pgm.s.pShwPaePdptR3);
1532 pVM->pgm.s.HCPhysShwNestedRoot = MMPage2Phys(pVM, pVM->pgm.s.pShwNestedRootR3);
1533#endif
1534
1535 /*
1536 * Initialize the pages, setting up the PML4 and PDPT for action below 4GB.
1537 */
1538#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1539 ASMMemZero32(pVM->pgm.s.pShw32BitPdR3, PAGE_SIZE);
1540 ASMMemZero32(pVM->pgm.s.pShwPaePdptR3, PAGE_SIZE);
1541 ASMMemZero32(pVM->pgm.s.pShwNestedRootR3, PAGE_SIZE);
1542
1543 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apShwPaePDsR3); i++)
1544 {
1545 ASMMemZero32(pVM->pgm.s.apShwPaePDsR3[i], PAGE_SIZE);
1546 pVM->pgm.s.pShwPaePdptR3->a[i].u = X86_PDPE_P | PGM_PLXFLAGS_PERMANENT | pVM->pgm.s.aHCPhysPaePDs[i];
1547 /* The flags will be corrected when entering and leaving long mode. */
1548 }
1549#endif
1550
1551 /*
1552 * Initialize paging workers and mode from current host mode
1553 * and the guest running in real mode.
1554 */
1555 pVM->pgm.s.enmHostMode = SUPGetPagingMode();
1556 switch (pVM->pgm.s.enmHostMode)
1557 {
1558 case SUPPAGINGMODE_32_BIT:
1559 case SUPPAGINGMODE_32_BIT_GLOBAL:
1560 case SUPPAGINGMODE_PAE:
1561 case SUPPAGINGMODE_PAE_GLOBAL:
1562 case SUPPAGINGMODE_PAE_NX:
1563 case SUPPAGINGMODE_PAE_GLOBAL_NX:
1564 break;
1565
1566 case SUPPAGINGMODE_AMD64:
1567 case SUPPAGINGMODE_AMD64_GLOBAL:
1568 case SUPPAGINGMODE_AMD64_NX:
1569 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
1570#ifndef VBOX_WITH_HYBRID_32BIT_KERNEL
1571 if (ARCH_BITS != 64)
1572 {
1573 AssertMsgFailed(("Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
1574 LogRel(("Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
1575 return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
1576 }
1577#endif
1578 break;
1579 default:
1580 AssertMsgFailed(("Host mode %d is not supported\n", pVM->pgm.s.enmHostMode));
1581 return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
1582 }
1583 rc = pgmR3ModeDataInit(pVM, false /* don't resolve GC and R0 syms yet */);
1584#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1585 if (RT_SUCCESS(rc))
1586 rc = PGMR3ChangeMode(pVM, PGMMODE_REAL);
1587#endif
1588 if (RT_SUCCESS(rc))
1589 {
1590 LogFlow(("pgmR3InitPaging: returns successfully\n"));
1591#if HC_ARCH_BITS == 64
1592# ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1593 LogRel(("Debug: HCPhysShw32BitPD=%RHp aHCPhysPaePDs={%RHp,%RHp,%RHp,%RHp} HCPhysShwPaePdpt=%RHp\n",
1594 pVM->pgm.s.HCPhysShw32BitPD,
1595 pVM->pgm.s.aHCPhysPaePDs[0], pVM->pgm.s.aHCPhysPaePDs[1], pVM->pgm.s.aHCPhysPaePDs[2], pVM->pgm.s.aHCPhysPaePDs[3],
1596 pVM->pgm.s.HCPhysShwPaePdpt));
1597# endif
1598 LogRel(("Debug: HCPhysInterPD=%RHp HCPhysInterPaePDPT=%RHp HCPhysInterPaePML4=%RHp\n",
1599 pVM->pgm.s.HCPhysInterPD, pVM->pgm.s.HCPhysInterPaePDPT, pVM->pgm.s.HCPhysInterPaePML4));
1600 LogRel(("Debug: apInterPTs={%RHp,%RHp} apInterPaePTs={%RHp,%RHp} apInterPaePDs={%RHp,%RHp,%RHp,%RHp} pInterPaePDPT64=%RHp\n",
1601 MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[1]),
1602 MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[1]),
1603 MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[1]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[2]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[3]),
1604 MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64)));
1605#endif
1606
1607 return VINF_SUCCESS;
1608 }
1609
1610 LogFlow(("pgmR3InitPaging: returns %Rrc\n", rc));
1611 return rc;
1612}
1613
1614
1615#ifdef VBOX_WITH_STATISTICS
1616/**
1617 * Init statistics
1618 */
1619static void pgmR3InitStats(PVM pVM)
1620{
1621 PPGM pPGM = &pVM->pgm.s;
1622 unsigned i;
1623
1624 /*
1625 * Note! The layout of this function matches the member layout exactly!
1626 */
1627
1628 /* Common - misc variables */
1629 STAM_REG(pVM, &pPGM->cAllPages, STAMTYPE_U32, "/PGM/Page/cAllPages", STAMUNIT_OCCURENCES, "The total number of pages.");
1630 STAM_REG(pVM, &pPGM->cPrivatePages, STAMTYPE_U32, "/PGM/Page/cPrivatePages", STAMUNIT_OCCURENCES, "The number of private pages.");
1631 STAM_REG(pVM, &pPGM->cSharedPages, STAMTYPE_U32, "/PGM/Page/cSharedPages", STAMUNIT_OCCURENCES, "The number of shared pages.");
1632 STAM_REG(pVM, &pPGM->cZeroPages, STAMTYPE_U32, "/PGM/Page/cZeroPages", STAMUNIT_OCCURENCES, "The number of zero backed pages.");
1633 STAM_REG(pVM, &pPGM->cHandyPages, STAMTYPE_U32, "/PGM/Page/cHandyPages", STAMUNIT_OCCURENCES, "The number of handy pages (not included in cAllPages).");
1634 STAM_REG(pVM, &pPGM->ChunkR3Map.c, STAMTYPE_U32, "/PGM/ChunkR3Map/c", STAMUNIT_OCCURENCES, "Number of mapped chunks.");
1635 STAM_REG(pVM, &pPGM->ChunkR3Map.cMax, STAMTYPE_U32, "/PGM/ChunkR3Map/cMax", STAMUNIT_OCCURENCES, "Maximum number of mapped chunks.");
1636
1637 /* Common - stats */
1638#ifdef PGMPOOL_WITH_GCPHYS_TRACKING
1639 STAM_REG(pVM, &pPGM->StatTrackVirgin, STAMTYPE_COUNTER, "/PGM/Track/Virgin", STAMUNIT_OCCURENCES, "The number of first time shadowings");
1640 STAM_REG(pVM, &pPGM->StatTrackAliased, STAMTYPE_COUNTER, "/PGM/Track/Aliased", STAMUNIT_OCCURENCES, "The number of times switching to cRef2, i.e. the page is being shadowed by two PTs.");
1641 STAM_REG(pVM, &pPGM->StatTrackAliasedMany, STAMTYPE_COUNTER, "/PGM/Track/AliasedMany", STAMUNIT_OCCURENCES, "The number of times we're tracking using cRef2.");
1642    STAM_REG(pVM, &pPGM->StatTrackAliasedLots,       STAMTYPE_COUNTER, "/PGM/Track/AliasedLots",          STAMUNIT_OCCURENCES, "The number of times we're hitting pages which have overflowed cRef2.");
1643    STAM_REG(pVM, &pPGM->StatTrackOverflows,         STAMTYPE_COUNTER, "/PGM/Track/Overflows",            STAMUNIT_OCCURENCES, "The number of times the extent list grows too long.");
1644    STAM_REG(pVM, &pPGM->StatTrackDeref,             STAMTYPE_PROFILE, "/PGM/Track/Deref",                STAMUNIT_TICKS_PER_CALL, "Profiling of SyncPageWorkerTrackDeref (expensive).");
1645#endif
1646 for (i = 0; i < RT_ELEMENTS(pPGM->StatSyncPtPD); i++)
1647 STAMR3RegisterF(pVM, &pPGM->StatSyncPtPD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
1648 "The number of SyncPT per PD n.", "/PGM/PDSyncPT/%04X", i);
1649 for (i = 0; i < RT_ELEMENTS(pPGM->StatSyncPagePD); i++)
1650 STAMR3RegisterF(pVM, &pPGM->StatSyncPagePD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
1651 "The number of SyncPage per PD n.", "/PGM/PDSyncPage/%04X", i);
1652
1653 /* R3 only: */
1654 STAM_REG(pVM, &pPGM->StatR3DetectedConflicts, STAMTYPE_COUNTER, "/PGM/R3/DetectedConflicts", STAMUNIT_OCCURENCES, "The number of times PGMR3CheckMappingConflicts() detected a conflict.");
1655 STAM_REG(pVM, &pPGM->StatR3ResolveConflict, STAMTYPE_PROFILE, "/PGM/R3/ResolveConflict", STAMUNIT_TICKS_PER_CALL, "pgmR3SyncPTResolveConflict() profiling (includes the entire relocation).");
1656 STAM_REG(pVM, &pPGM->StatR3GuestPDWrite, STAMTYPE_COUNTER, "/PGM/R3/PDWrite", STAMUNIT_OCCURENCES, "The total number of times pgmHCGuestPDWriteHandler() was called.");
1657 STAM_REG(pVM, &pPGM->StatR3GuestPDWriteConflict, STAMTYPE_COUNTER, "/PGM/R3/PDWriteConflict", STAMUNIT_OCCURENCES, "The number of times pgmHCGuestPDWriteHandler() detected a conflict.");
1658#ifndef VBOX_WITH_NEW_PHYS_CODE
1659 STAM_REG(pVM, &pPGM->StatR3DynRamTotal, STAMTYPE_COUNTER, "/PGM/DynAlloc/TotalAlloc", STAMUNIT_MEGABYTES, "Allocated MBs of guest ram.");
1660 STAM_REG(pVM, &pPGM->StatR3DynRamGrow, STAMTYPE_COUNTER, "/PGM/DynAlloc/Grow", STAMUNIT_OCCURENCES, "Nr of pgmr3PhysGrowRange calls.");
1661#endif
1662
1663 /* R0 only: */
1664 STAM_REG(pVM, &pPGM->StatR0DynMapMigrateInvlPg, STAMTYPE_COUNTER, "/PGM/R0/DynMapMigrateInvlPg", STAMUNIT_OCCURENCES, "invlpg count in PGMDynMapMigrateAutoSet.");
1665 STAM_REG(pVM, &pPGM->StatR0DynMapGCPageInl, STAMTYPE_PROFILE, "/PGM/R0/DynMapPageGCPageInl", STAMUNIT_TICKS_PER_CALL, "Calls to pgmR0DynMapGCPageInlined.");
1666 STAM_REG(pVM, &pPGM->StatR0DynMapGCPageInlHits, STAMTYPE_COUNTER, "/PGM/R0/DynMapPageGCPageInl/Hits", STAMUNIT_OCCURENCES, "Hash table lookup hits.");
1667    STAM_REG(pVM, &pPGM->StatR0DynMapGCPageInlMisses,     STAMTYPE_COUNTER, "/PGM/R0/DynMapPageGCPageInl/Misses",  STAMUNIT_OCCURENCES, "Misses that fall back to code common with PGMDynMapHCPage.");
1668 STAM_REG(pVM, &pPGM->StatR0DynMapGCPageInlRamHits, STAMTYPE_COUNTER, "/PGM/R0/DynMapPageGCPageInl/RamHits", STAMUNIT_OCCURENCES, "1st ram range hits.");
1669 STAM_REG(pVM, &pPGM->StatR0DynMapGCPageInlRamMisses, STAMTYPE_COUNTER, "/PGM/R0/DynMapPageGCPageInl/RamMisses", STAMUNIT_OCCURENCES, "1st ram range misses, takes slow path.");
1670 STAM_REG(pVM, &pPGM->StatR0DynMapHCPageInl, STAMTYPE_PROFILE, "/PGM/R0/DynMapPageHCPageInl", STAMUNIT_TICKS_PER_CALL, "Calls to pgmR0DynMapHCPageInlined.");
1671 STAM_REG(pVM, &pPGM->StatR0DynMapHCPageInlHits, STAMTYPE_COUNTER, "/PGM/R0/DynMapPageHCPageInl/Hits", STAMUNIT_OCCURENCES, "Hash table lookup hits.");
1672    STAM_REG(pVM, &pPGM->StatR0DynMapHCPageInlMisses,     STAMTYPE_COUNTER, "/PGM/R0/DynMapPageHCPageInl/Misses",  STAMUNIT_OCCURENCES, "Misses that fall back to code common with PGMDynMapHCPage.");
1673 STAM_REG(pVM, &pPGM->StatR0DynMapPage, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage", STAMUNIT_OCCURENCES, "Calls to pgmR0DynMapPage");
1674 STAM_REG(pVM, &pPGM->StatR0DynMapSetOptimize, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SetOptimize", STAMUNIT_OCCURENCES, "Calls to pgmDynMapOptimizeAutoSet.");
1675    STAM_REG(pVM, &pPGM->StatR0DynMapSetSearchFlushes,    STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SetSearchFlushes", STAMUNIT_OCCURENCES, "Set searches resorting to subset flushes.");
1676 STAM_REG(pVM, &pPGM->StatR0DynMapSetSearchHits, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SetSearchHits", STAMUNIT_OCCURENCES, "Set search hits.");
1677 STAM_REG(pVM, &pPGM->StatR0DynMapSetSearchMisses, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SetSearchMisses", STAMUNIT_OCCURENCES, "Set search misses.");
1678 STAM_REG(pVM, &pPGM->StatR0DynMapHCPage, STAMTYPE_PROFILE, "/PGM/R0/DynMapPage/HCPage", STAMUNIT_TICKS_PER_CALL, "Calls to PGMDynMapHCPage (ring-0).");
1679 STAM_REG(pVM, &pPGM->StatR0DynMapPageHits0, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/Hits0", STAMUNIT_OCCURENCES, "Hits at iPage+0");
1680 STAM_REG(pVM, &pPGM->StatR0DynMapPageHits1, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/Hits1", STAMUNIT_OCCURENCES, "Hits at iPage+1");
1681 STAM_REG(pVM, &pPGM->StatR0DynMapPageHits2, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/Hits2", STAMUNIT_OCCURENCES, "Hits at iPage+2");
1682 STAM_REG(pVM, &pPGM->StatR0DynMapPageInvlPg, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/InvlPg", STAMUNIT_OCCURENCES, "invlpg count in pgmR0DynMapPageSlow.");
1683 STAM_REG(pVM, &pPGM->StatR0DynMapPageSlow, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/Slow", STAMUNIT_OCCURENCES, "Calls to pgmR0DynMapPageSlow - subtract this from pgmR0DynMapPage to get 1st level hits.");
1684 STAM_REG(pVM, &pPGM->StatR0DynMapPageSlowLoopHits, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SlowLoopHits" , STAMUNIT_OCCURENCES, "Hits in the loop path.");
1685 STAM_REG(pVM, &pPGM->StatR0DynMapPageSlowLoopMisses, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SlowLoopMisses", STAMUNIT_OCCURENCES, "Misses in the loop path. NonLoopMisses = Slow - SlowLoopHit - SlowLoopMisses");
1686 //STAM_REG(pVM, &pPGM->StatR0DynMapPageSlowLostHits, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SlowLostHits", STAMUNIT_OCCURENCES, "Lost hits.");
1687 STAM_REG(pVM, &pPGM->StatR0DynMapSubsets, STAMTYPE_COUNTER, "/PGM/R0/Subsets", STAMUNIT_OCCURENCES, "Times PGMDynMapPushAutoSubset was called.");
1688 STAM_REG(pVM, &pPGM->StatR0DynMapPopFlushes, STAMTYPE_COUNTER, "/PGM/R0/SubsetPopFlushes", STAMUNIT_OCCURENCES, "Times PGMDynMapPopAutoSubset flushes the subset.");
1689 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[0], STAMTYPE_COUNTER, "/PGM/R0/SetSize000..09", STAMUNIT_OCCURENCES, "00-09% filled");
1690 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[1], STAMTYPE_COUNTER, "/PGM/R0/SetSize010..19", STAMUNIT_OCCURENCES, "10-19% filled");
1691 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[2], STAMTYPE_COUNTER, "/PGM/R0/SetSize020..29", STAMUNIT_OCCURENCES, "20-29% filled");
1692 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[3], STAMTYPE_COUNTER, "/PGM/R0/SetSize030..39", STAMUNIT_OCCURENCES, "30-39% filled");
1693 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[4], STAMTYPE_COUNTER, "/PGM/R0/SetSize040..49", STAMUNIT_OCCURENCES, "40-49% filled");
1694 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[5], STAMTYPE_COUNTER, "/PGM/R0/SetSize050..59", STAMUNIT_OCCURENCES, "50-59% filled");
1695 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[6], STAMTYPE_COUNTER, "/PGM/R0/SetSize060..69", STAMUNIT_OCCURENCES, "60-69% filled");
1696 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[7], STAMTYPE_COUNTER, "/PGM/R0/SetSize070..79", STAMUNIT_OCCURENCES, "70-79% filled");
1697 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[8], STAMTYPE_COUNTER, "/PGM/R0/SetSize080..89", STAMUNIT_OCCURENCES, "80-89% filled");
1698 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[9], STAMTYPE_COUNTER, "/PGM/R0/SetSize090..99", STAMUNIT_OCCURENCES, "90-99% filled");
1699 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[10], STAMTYPE_COUNTER, "/PGM/R0/SetSize100", STAMUNIT_OCCURENCES, "100% filled");
1700
1701 /* GC only: */
1702 STAM_REG(pVM, &pPGM->StatRCDynMapCacheHits, STAMTYPE_COUNTER, "/PGM/RC/DynMapCache/Hits" , STAMUNIT_OCCURENCES, "Number of dynamic page mapping cache hits.");
1703 STAM_REG(pVM, &pPGM->StatRCDynMapCacheMisses, STAMTYPE_COUNTER, "/PGM/RC/DynMapCache/Misses" , STAMUNIT_OCCURENCES, "Number of dynamic page mapping cache misses.");
1704 STAM_REG(pVM, &pPGM->StatRCInvlPgConflict, STAMTYPE_COUNTER, "/PGM/RC/InvlPgConflict", STAMUNIT_OCCURENCES, "Number of times PGMInvalidatePage() detected a mapping conflict.");
1705 STAM_REG(pVM, &pPGM->StatRCInvlPgSyncMonCR3, STAMTYPE_COUNTER, "/PGM/RC/InvlPgSyncMonitorCR3", STAMUNIT_OCCURENCES, "Number of times PGMInvalidatePage() ran into PGM_SYNC_MONITOR_CR3.");
1706
1707 /* RZ only: */
1708 STAM_REG(pVM, &pPGM->StatRZTrap0e, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMTrap0eHandler() body.");
1709 STAM_REG(pVM, &pPGM->StatRZTrap0eTimeCheckPageFault, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time/CheckPageFault", STAMUNIT_TICKS_PER_CALL, "Profiling of checking for dirty/access emulation faults.");
1710 STAM_REG(pVM, &pPGM->StatRZTrap0eTimeSyncPT, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of lazy page table syncing.");
1711 STAM_REG(pVM, &pPGM->StatRZTrap0eTimeMapping, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time/Mapping", STAMUNIT_TICKS_PER_CALL, "Profiling of checking virtual mappings.");
1712 STAM_REG(pVM, &pPGM->StatRZTrap0eTimeOutOfSync, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time/OutOfSync", STAMUNIT_TICKS_PER_CALL, "Profiling of out of sync page handling.");
1713 STAM_REG(pVM, &pPGM->StatRZTrap0eTimeHandlers, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time/Handlers", STAMUNIT_TICKS_PER_CALL, "Profiling of checking handlers.");
1714 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2CSAM, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/CSAM", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is CSAM.");
1715 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2DirtyAndAccessed, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/DirtyAndAccessedBits", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is dirty and/or accessed bit emulation.");
1716 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2GuestTrap, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/GuestTrap", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is a guest trap.");
1717 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2HndPhys, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/HandlerPhysical", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is a physical handler.");
1718 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2HndVirt, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/HandlerVirtual", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is a virtual handler.");
1719 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2HndUnhandled, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/HandlerUnhandled", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is access outside the monitored areas of a monitored page.");
1720 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2Misc, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/Misc", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is not known.");
1721 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2OutOfSync, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/OutOfSync", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an out-of-sync page.");
1722 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2OutOfSyncHndPhys, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/OutOfSyncHndPhys", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an out-of-sync physical handler page.");
1723 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2OutOfSyncHndVirt, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/OutOfSyncHndVirt", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an out-of-sync virtual handler page.");
1724 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2OutOfSyncHndObs, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/OutOfSyncObsHnd", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an obsolete handler page.");
1725 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2SyncPT, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is lazy syncing of a PT.");
1726 STAM_REG(pVM, &pPGM->StatRZTrap0eConflicts, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Conflicts", STAMUNIT_OCCURENCES, "The number of times #PF was caused by an undetected conflict.");
1727 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersMapping, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/Mapping", STAMUNIT_OCCURENCES, "Number of traps due to access handlers in mappings.");
1728 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersOutOfSync, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/OutOfSync", STAMUNIT_OCCURENCES, "Number of traps due to out-of-sync handled pages.");
1729 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersPhysical, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/Physical", STAMUNIT_OCCURENCES, "Number of traps due to physical access handlers.");
1730 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersVirtual, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/Virtual", STAMUNIT_OCCURENCES, "Number of traps due to virtual access handlers.");
1731 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersVirtualByPhys, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/VirtualByPhys", STAMUNIT_OCCURENCES, "Number of traps due to virtual access handlers by physical address.");
1732 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersVirtualUnmarked,STAMTYPE_COUNTER,"/PGM/RZ/Trap0e/Handlers/VirtualUnmarked",STAMUNIT_OCCURENCES, "Number of traps due to virtual access handlers by virtual address (without proper physical flags).");
1733 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersUnhandled, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/Unhandled", STAMUNIT_OCCURENCES, "Number of traps due to access outside range of monitored page(s).");
1734 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersInvalid, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/Invalid", STAMUNIT_OCCURENCES, "Number of traps due to access to invalid physical memory.");
1735 STAM_REG(pVM, &pPGM->StatRZTrap0eUSNotPresentRead, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/User/NPRead", STAMUNIT_OCCURENCES, "Number of user mode not present read page faults.");
1736 STAM_REG(pVM, &pPGM->StatRZTrap0eUSNotPresentWrite, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/User/NPWrite", STAMUNIT_OCCURENCES, "Number of user mode not present write page faults.");
1737 STAM_REG(pVM, &pPGM->StatRZTrap0eUSWrite, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/User/Write", STAMUNIT_OCCURENCES, "Number of user mode write page faults.");
1738 STAM_REG(pVM, &pPGM->StatRZTrap0eUSReserved, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/User/Reserved", STAMUNIT_OCCURENCES, "Number of user mode reserved bit page faults.");
1739 STAM_REG(pVM, &pPGM->StatRZTrap0eUSNXE, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/User/NXE", STAMUNIT_OCCURENCES, "Number of user mode NXE page faults.");
1740 STAM_REG(pVM, &pPGM->StatRZTrap0eUSRead, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/User/Read", STAMUNIT_OCCURENCES, "Number of user mode read page faults.");
1741 STAM_REG(pVM, &pPGM->StatRZTrap0eSVNotPresentRead, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/Supervisor/NPRead", STAMUNIT_OCCURENCES, "Number of supervisor mode not present read page faults.");
1742 STAM_REG(pVM, &pPGM->StatRZTrap0eSVNotPresentWrite, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/Supervisor/NPWrite", STAMUNIT_OCCURENCES, "Number of supervisor mode not present write page faults.");
1743 STAM_REG(pVM, &pPGM->StatRZTrap0eSVWrite, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/Supervisor/Write", STAMUNIT_OCCURENCES, "Number of supervisor mode write page faults.");
1744 STAM_REG(pVM, &pPGM->StatRZTrap0eSVReserved, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/Supervisor/Reserved", STAMUNIT_OCCURENCES, "Number of supervisor mode reserved bit page faults.");
1745 STAM_REG(pVM, &pPGM->StatRZTrap0eSNXE, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/Supervisor/NXE", STAMUNIT_OCCURENCES, "Number of supervisor mode NXE page faults.");
1746 STAM_REG(pVM, &pPGM->StatRZTrap0eGuestPF, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/GuestPF", STAMUNIT_OCCURENCES, "Number of real guest page faults.");
1747 STAM_REG(pVM, &pPGM->StatRZTrap0eGuestPFUnh, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/GuestPF/Unhandled", STAMUNIT_OCCURENCES, "Number of real guest page faults from the 'unhandled' case.");
1748 STAM_REG(pVM, &pPGM->StatRZTrap0eGuestPFMapping, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/GuestPF/InMapping", STAMUNIT_OCCURENCES, "Number of real guest page faults in a mapping.");
1749 STAM_REG(pVM, &pPGM->StatRZTrap0eWPEmulInRZ, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/WP/InRZ", STAMUNIT_OCCURENCES, "Number of guest page faults due to X86_CR0_WP emulation.");
1750 STAM_REG(pVM, &pPGM->StatRZTrap0eWPEmulToR3, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/WP/ToR3", STAMUNIT_OCCURENCES, "Number of guest page faults due to X86_CR0_WP emulation (forward to R3 for emulation).");
1751 for (i = 0; i < RT_ELEMENTS(pPGM->StatRZTrap0ePD); i++)
1752 STAMR3RegisterF(pVM, &pPGM->StatRZTrap0ePD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
1753 "The number of traps in page directory n.", "/PGM/RZ/Trap0e/PD/%04X", i);
1754 STAM_REG(pVM, &pPGM->StatRZGuestCR3WriteHandled, STAMTYPE_COUNTER, "/PGM/RZ/CR3WriteHandled", STAMUNIT_OCCURENCES, "The number of times the Guest CR3 change was successfully handled.");
1755 STAM_REG(pVM, &pPGM->StatRZGuestCR3WriteUnhandled, STAMTYPE_COUNTER, "/PGM/RZ/CR3WriteUnhandled", STAMUNIT_OCCURENCES, "The number of times the Guest CR3 change was passed back to the recompiler.");
1756 STAM_REG(pVM, &pPGM->StatRZGuestCR3WriteConflict, STAMTYPE_COUNTER, "/PGM/RZ/CR3WriteConflict", STAMUNIT_OCCURENCES, "The number of times the Guest CR3 monitoring detected a conflict.");
1757 STAM_REG(pVM, &pPGM->StatRZGuestROMWriteHandled, STAMTYPE_COUNTER, "/PGM/RZ/ROMWriteHandled", STAMUNIT_OCCURENCES, "The number of times the Guest ROM change was successfully handled.");
1758 STAM_REG(pVM, &pPGM->StatRZGuestROMWriteUnhandled, STAMTYPE_COUNTER, "/PGM/RZ/ROMWriteUnhandled", STAMUNIT_OCCURENCES, "The number of times the Guest ROM change was passed back to the recompiler.");
1759
1760 /* HC only: */
1761
1762 /* RZ & R3: */
1763 STAM_REG(pVM, &pPGM->StatRZSyncCR3, STAMTYPE_PROFILE, "/PGM/RZ/SyncCR3", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() body.");
1764 STAM_REG(pVM, &pPGM->StatRZSyncCR3Handlers, STAMTYPE_PROFILE, "/PGM/RZ/SyncCR3/Handlers", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() update handler section.");
1765 STAM_REG(pVM, &pPGM->StatRZSyncCR3HandlerVirtualUpdate, STAMTYPE_PROFILE, "/PGM/RZ/SyncCR3/Handlers/VirtualUpdate", STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler updates.");
1766 STAM_REG(pVM, &pPGM->StatRZSyncCR3HandlerVirtualReset, STAMTYPE_PROFILE, "/PGM/RZ/SyncCR3/Handlers/VirtualReset", STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler resets.");
1767 STAM_REG(pVM, &pPGM->StatRZSyncCR3Global, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/Global", STAMUNIT_OCCURENCES, "The number of global CR3 syncs.");
1768 STAM_REG(pVM, &pPGM->StatRZSyncCR3NotGlobal, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/NotGlobal", STAMUNIT_OCCURENCES, "The number of non-global CR3 syncs.");
1769    STAM_REG(pVM, &pPGM->StatRZSyncCR3DstCacheHit,         STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/DstCacheHit",          STAMUNIT_OCCURENCES, "The number of times we got some kind of a cache hit.");
1770 STAM_REG(pVM, &pPGM->StatRZSyncCR3DstFreed, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/DstFreed", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry.");
1771 STAM_REG(pVM, &pPGM->StatRZSyncCR3DstFreedSrcNP, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/DstFreedSrcNP", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry for which the source entry was not present.");
1772 STAM_REG(pVM, &pPGM->StatRZSyncCR3DstNotPresent, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/DstNotPresent", STAMUNIT_OCCURENCES, "The number of times we've encountered a not present shadow entry for a present guest entry.");
1773 STAM_REG(pVM, &pPGM->StatRZSyncCR3DstSkippedGlobalPD, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/DstSkippedGlobalPD", STAMUNIT_OCCURENCES, "The number of times a global page directory wasn't flushed.");
1774 STAM_REG(pVM, &pPGM->StatRZSyncCR3DstSkippedGlobalPT, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/DstSkippedGlobalPT", STAMUNIT_OCCURENCES, "The number of times a page table with only global entries wasn't flushed.");
1775 STAM_REG(pVM, &pPGM->StatRZSyncPT, STAMTYPE_PROFILE, "/PGM/RZ/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of the pfnSyncPT() body.");
1776 STAM_REG(pVM, &pPGM->StatRZSyncPTFailed, STAMTYPE_COUNTER, "/PGM/RZ/SyncPT/Failed", STAMUNIT_OCCURENCES, "The number of times pfnSyncPT() failed.");
1777 STAM_REG(pVM, &pPGM->StatRZSyncPT4K, STAMTYPE_COUNTER, "/PGM/RZ/SyncPT/4K", STAMUNIT_OCCURENCES, "Nr of 4K PT syncs");
1778 STAM_REG(pVM, &pPGM->StatRZSyncPT4M, STAMTYPE_COUNTER, "/PGM/RZ/SyncPT/4M", STAMUNIT_OCCURENCES, "Nr of 4M PT syncs");
1779    STAM_REG(pVM, &pPGM->StatRZSyncPagePDNAs,              STAMTYPE_COUNTER, "/PGM/RZ/SyncPagePDNAs",                STAMUNIT_OCCURENCES, "The number of times we've marked a PD not present from SyncPage to virtualize the accessed bit.");
1780    STAM_REG(pVM, &pPGM->StatRZSyncPagePDOutOfSync,        STAMTYPE_COUNTER, "/PGM/RZ/SyncPagePDOutOfSync",          STAMUNIT_OCCURENCES, "The number of times we've encountered an out-of-sync PD in SyncPage.");
1781 STAM_REG(pVM, &pPGM->StatRZAccessedPage, STAMTYPE_COUNTER, "/PGM/RZ/AccessedPage", STAMUNIT_OCCURENCES, "The number of pages marked not present for accessed bit emulation.");
1782 STAM_REG(pVM, &pPGM->StatRZDirtyBitTracking, STAMTYPE_PROFILE, "/PGM/RZ/DirtyPage", STAMUNIT_TICKS_PER_CALL, "Profiling the dirty bit tracking in CheckPageFault().");
1783 STAM_REG(pVM, &pPGM->StatRZDirtyPage, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/Mark", STAMUNIT_OCCURENCES, "The number of pages marked read-only for dirty bit tracking.");
1784 STAM_REG(pVM, &pPGM->StatRZDirtyPageBig, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/MarkBig", STAMUNIT_OCCURENCES, "The number of 4MB pages marked read-only for dirty bit tracking.");
1785 STAM_REG(pVM, &pPGM->StatRZDirtyPageSkipped, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/Skipped", STAMUNIT_OCCURENCES, "The number of pages already dirty or readonly.");
1786 STAM_REG(pVM, &pPGM->StatRZDirtyPageTrap, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/Trap", STAMUNIT_OCCURENCES, "The number of traps generated for dirty bit tracking.");
1787 STAM_REG(pVM, &pPGM->StatRZDirtiedPage, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/SetDirty", STAMUNIT_OCCURENCES, "The number of pages marked dirty because of write accesses.");
1788    STAM_REG(pVM, &pPGM->StatRZDirtyTrackRealPF,           STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/RealPF",             STAMUNIT_OCCURENCES, "The number of real page faults during dirty bit tracking.");
1789 STAM_REG(pVM, &pPGM->StatRZPageAlreadyDirty, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/AlreadySet", STAMUNIT_OCCURENCES, "The number of pages already marked dirty because of write accesses.");
1790 STAM_REG(pVM, &pPGM->StatRZInvalidatePage, STAMTYPE_PROFILE, "/PGM/RZ/InvalidatePage", STAMUNIT_TICKS_PER_CALL, "PGMInvalidatePage() profiling.");
1791 STAM_REG(pVM, &pPGM->StatRZInvalidatePage4KBPages, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/4KBPages", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a 4KB page.");
1792 STAM_REG(pVM, &pPGM->StatRZInvalidatePage4MBPages, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/4MBPages", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a 4MB page.");
1793 STAM_REG(pVM, &pPGM->StatRZInvalidatePage4MBPagesSkip, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/4MBPagesSkip",STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() skipped a 4MB page.");
1794 STAM_REG(pVM, &pPGM->StatRZInvalidatePagePDMappings, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/PDMappings", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a page directory containing mappings (no conflict).");
1795 STAM_REG(pVM, &pPGM->StatRZInvalidatePagePDNAs, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/PDNAs", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a not accessed page directory.");
1796 STAM_REG(pVM, &pPGM->StatRZInvalidatePagePDNPs, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/PDNPs", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a not present page directory.");
1797 STAM_REG(pVM, &pPGM->StatRZInvalidatePagePDOutOfSync, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/PDOutOfSync", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for an out of sync page directory.");
1798    STAM_REG(pVM, &pPGM->StatRZInvalidatePageSkipped,      STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/Skipped",       STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was skipped because the shadow entry was not present or a SyncCR3 was pending.");
1799 STAM_REG(pVM, &pPGM->StatRZVirtHandlerSearchByPhys, STAMTYPE_PROFILE, "/PGM/RZ/VirtHandlerSearchByPhys", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmHandlerVirtualFindByPhysAddr.");
1800 STAM_REG(pVM, &pPGM->StatRZPhysHandlerReset, STAMTYPE_COUNTER, "/PGM/RZ/PhysHandlerReset", STAMUNIT_OCCURENCES, "The number of times PGMHandlerPhysicalReset is called.");
1801    STAM_REG(pVM, &pPGM->StatRZPageOutOfSyncSupervisor,    STAMTYPE_COUNTER, "/PGM/RZ/OutOfSync/Supervisor",         STAMUNIT_OCCURENCES, "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
1802 STAM_REG(pVM, &pPGM->StatRZPageOutOfSyncUser, STAMTYPE_COUNTER, "/PGM/RZ/OutOfSync/User", STAMUNIT_OCCURENCES, "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
1803 STAM_REG(pVM, &pPGM->StatRZPrefetch, STAMTYPE_PROFILE, "/PGM/RZ/Prefetch", STAMUNIT_TICKS_PER_CALL, "PGMPrefetchPage profiling.");
1804 STAM_REG(pVM, &pPGM->StatRZChunkR3MapTlbHits, STAMTYPE_COUNTER, "/PGM/ChunkR3Map/TlbHitsRZ", STAMUNIT_OCCURENCES, "TLB hits.");
1805 STAM_REG(pVM, &pPGM->StatRZChunkR3MapTlbMisses, STAMTYPE_COUNTER, "/PGM/ChunkR3Map/TlbMissesRZ", STAMUNIT_OCCURENCES, "TLB misses.");
1806 STAM_REG(pVM, &pPGM->StatRZPageMapTlbHits, STAMTYPE_COUNTER, "/PGM/RZ/Page/MapTlbHits", STAMUNIT_OCCURENCES, "TLB hits.");
1807 STAM_REG(pVM, &pPGM->StatRZPageMapTlbMisses, STAMTYPE_COUNTER, "/PGM/RZ/Page/MapTlbMisses", STAMUNIT_OCCURENCES, "TLB misses.");
1808 STAM_REG(pVM, &pPGM->StatRZPageReplaceShared, STAMTYPE_COUNTER, "/PGM/RZ/Page/ReplacedShared", STAMUNIT_OCCURENCES, "Times a shared page was replaced.");
1809 STAM_REG(pVM, &pPGM->StatRZPageReplaceZero, STAMTYPE_COUNTER, "/PGM/RZ/Page/ReplacedZero", STAMUNIT_OCCURENCES, "Times the zero page was replaced.");
1810/// @todo STAM_REG(pVM, &pPGM->StatRZPageHandyAllocs, STAMTYPE_COUNTER, "/PGM/RZ/Page/HandyAllocs", STAMUNIT_OCCURENCES, "Number of times we've allocated more handy pages.");
1811    STAM_REG(pVM, &pPGM->StatRZFlushTLB,                   STAMTYPE_PROFILE, "/PGM/RZ/FlushTLB",                     STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMFlushTLB() body.");
1812 STAM_REG(pVM, &pPGM->StatRZFlushTLBNewCR3, STAMTYPE_COUNTER, "/PGM/RZ/FlushTLB/NewCR3", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with a new CR3, non-global. (switch)");
1813 STAM_REG(pVM, &pPGM->StatRZFlushTLBNewCR3Global, STAMTYPE_COUNTER, "/PGM/RZ/FlushTLB/NewCR3Global", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with a new CR3, global. (switch)");
1814 STAM_REG(pVM, &pPGM->StatRZFlushTLBSameCR3, STAMTYPE_COUNTER, "/PGM/RZ/FlushTLB/SameCR3", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with the same CR3, non-global. (flush)");
1815 STAM_REG(pVM, &pPGM->StatRZFlushTLBSameCR3Global, STAMTYPE_COUNTER, "/PGM/RZ/FlushTLB/SameCR3Global", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with the same CR3, global. (flush)");
1816 STAM_REG(pVM, &pPGM->StatRZGstModifyPage, STAMTYPE_PROFILE, "/PGM/RZ/GstModifyPage", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMGstModifyPage() body.");
1817
1818 STAM_REG(pVM, &pPGM->StatR3SyncCR3, STAMTYPE_PROFILE, "/PGM/R3/SyncCR3", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() body.");
1819 STAM_REG(pVM, &pPGM->StatR3SyncCR3Handlers, STAMTYPE_PROFILE, "/PGM/R3/SyncCR3/Handlers", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() update handler section.");
1820 STAM_REG(pVM, &pPGM->StatR3SyncCR3HandlerVirtualUpdate, STAMTYPE_PROFILE, "/PGM/R3/SyncCR3/Handlers/VirtualUpdate", STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler updates.");
1821 STAM_REG(pVM, &pPGM->StatR3SyncCR3HandlerVirtualReset, STAMTYPE_PROFILE, "/PGM/R3/SyncCR3/Handlers/VirtualReset", STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler resets.");
1822 STAM_REG(pVM, &pPGM->StatR3SyncCR3Global, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/Global", STAMUNIT_OCCURENCES, "The number of global CR3 syncs.");
1823 STAM_REG(pVM, &pPGM->StatR3SyncCR3NotGlobal, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/NotGlobal", STAMUNIT_OCCURENCES, "The number of non-global CR3 syncs.");
1824    STAM_REG(pVM, &pPGM->StatR3SyncCR3DstCacheHit,          STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/DstCacheHit",          STAMUNIT_OCCURENCES, "The number of times we got some kind of a cache hit.");
1825 STAM_REG(pVM, &pPGM->StatR3SyncCR3DstFreed, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/DstFreed", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry.");
1826 STAM_REG(pVM, &pPGM->StatR3SyncCR3DstFreedSrcNP, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/DstFreedSrcNP", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry for which the source entry was not present.");
1827 STAM_REG(pVM, &pPGM->StatR3SyncCR3DstNotPresent, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/DstNotPresent", STAMUNIT_OCCURENCES, "The number of times we've encountered a not present shadow entry for a present guest entry.");
1828 STAM_REG(pVM, &pPGM->StatR3SyncCR3DstSkippedGlobalPD, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/DstSkippedGlobalPD", STAMUNIT_OCCURENCES, "The number of times a global page directory wasn't flushed.");
1829 STAM_REG(pVM, &pPGM->StatR3SyncCR3DstSkippedGlobalPT, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/DstSkippedGlobalPT", STAMUNIT_OCCURENCES, "The number of times a page table with only global entries wasn't flushed.");
1830 STAM_REG(pVM, &pPGM->StatR3SyncPT, STAMTYPE_PROFILE, "/PGM/R3/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of the pfnSyncPT() body.");
1831 STAM_REG(pVM, &pPGM->StatR3SyncPTFailed, STAMTYPE_COUNTER, "/PGM/R3/SyncPT/Failed", STAMUNIT_OCCURENCES, "The number of times pfnSyncPT() failed.");
1832 STAM_REG(pVM, &pPGM->StatR3SyncPT4K, STAMTYPE_COUNTER, "/PGM/R3/SyncPT/4K", STAMUNIT_OCCURENCES, "Nr of 4K PT syncs");
1833 STAM_REG(pVM, &pPGM->StatR3SyncPT4M, STAMTYPE_COUNTER, "/PGM/R3/SyncPT/4M", STAMUNIT_OCCURENCES, "Nr of 4M PT syncs");
1834    STAM_REG(pVM, &pPGM->StatR3SyncPagePDNAs,               STAMTYPE_COUNTER, "/PGM/R3/SyncPagePDNAs",                STAMUNIT_OCCURENCES, "The number of times we've marked a PD not present from SyncPage to virtualize the accessed bit.");
1835    STAM_REG(pVM, &pPGM->StatR3SyncPagePDOutOfSync,         STAMTYPE_COUNTER, "/PGM/R3/SyncPagePDOutOfSync",          STAMUNIT_OCCURENCES, "The number of times we've encountered an out-of-sync PD in SyncPage.");
1836 STAM_REG(pVM, &pPGM->StatR3AccessedPage, STAMTYPE_COUNTER, "/PGM/R3/AccessedPage", STAMUNIT_OCCURENCES, "The number of pages marked not present for accessed bit emulation.");
1837 STAM_REG(pVM, &pPGM->StatR3DirtyBitTracking, STAMTYPE_PROFILE, "/PGM/R3/DirtyPage", STAMUNIT_TICKS_PER_CALL, "Profiling the dirty bit tracking in CheckPageFault().");
1838 STAM_REG(pVM, &pPGM->StatR3DirtyPage, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/Mark", STAMUNIT_OCCURENCES, "The number of pages marked read-only for dirty bit tracking.");
1839 STAM_REG(pVM, &pPGM->StatR3DirtyPageBig, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/MarkBig", STAMUNIT_OCCURENCES, "The number of 4MB pages marked read-only for dirty bit tracking.");
1840 STAM_REG(pVM, &pPGM->StatR3DirtyPageSkipped, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/Skipped", STAMUNIT_OCCURENCES, "The number of pages already dirty or readonly.");
1841 STAM_REG(pVM, &pPGM->StatR3DirtyPageTrap, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/Trap", STAMUNIT_OCCURENCES, "The number of traps generated for dirty bit tracking.");
1842 STAM_REG(pVM, &pPGM->StatR3DirtiedPage, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/SetDirty", STAMUNIT_OCCURENCES, "The number of pages marked dirty because of write accesses.");
1843    STAM_REG(pVM, &pPGM->StatR3DirtyTrackRealPF,            STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/RealPF",             STAMUNIT_OCCURENCES, "The number of real page faults during dirty bit tracking.");
1844 STAM_REG(pVM, &pPGM->StatR3PageAlreadyDirty, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/AlreadySet", STAMUNIT_OCCURENCES, "The number of pages already marked dirty because of write accesses.");
1845 STAM_REG(pVM, &pPGM->StatR3InvalidatePage, STAMTYPE_PROFILE, "/PGM/R3/InvalidatePage", STAMUNIT_TICKS_PER_CALL, "PGMInvalidatePage() profiling.");
1846 STAM_REG(pVM, &pPGM->StatR3InvalidatePage4KBPages, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/4KBPages", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a 4KB page.");
1847 STAM_REG(pVM, &pPGM->StatR3InvalidatePage4MBPages, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/4MBPages", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a 4MB page.");
1848 STAM_REG(pVM, &pPGM->StatR3InvalidatePage4MBPagesSkip, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/4MBPagesSkip",STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() skipped a 4MB page.");
1849 STAM_REG(pVM, &pPGM->StatR3InvalidatePagePDMappings, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/PDMappings", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a page directory containing mappings (no conflict).");
1850 STAM_REG(pVM, &pPGM->StatR3InvalidatePagePDNAs, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/PDNAs", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a not accessed page directory.");
1851 STAM_REG(pVM, &pPGM->StatR3InvalidatePagePDNPs, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/PDNPs", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a not present page directory.");
1852 STAM_REG(pVM, &pPGM->StatR3InvalidatePagePDOutOfSync, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/PDOutOfSync", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for an out of sync page directory.");
1853    STAM_REG(pVM, &pPGM->StatR3InvalidatePageSkipped,       STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/Skipped",       STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was skipped because the shadow entry was not present or a SyncCR3 was pending.");
1854 STAM_REG(pVM, &pPGM->StatR3VirtHandlerSearchByPhys, STAMTYPE_PROFILE, "/PGM/R3/VirtHandlerSearchByPhys", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmHandlerVirtualFindByPhysAddr.");
1855 STAM_REG(pVM, &pPGM->StatR3PhysHandlerReset, STAMTYPE_COUNTER, "/PGM/R3/PhysHandlerReset", STAMUNIT_OCCURENCES, "The number of times PGMHandlerPhysicalReset is called.");
1856    STAM_REG(pVM, &pPGM->StatR3PageOutOfSyncSupervisor,     STAMTYPE_COUNTER, "/PGM/R3/OutOfSync/Supervisor",         STAMUNIT_OCCURENCES, "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
1857 STAM_REG(pVM, &pPGM->StatR3PageOutOfSyncUser, STAMTYPE_COUNTER, "/PGM/R3/OutOfSync/User", STAMUNIT_OCCURENCES, "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
1858 STAM_REG(pVM, &pPGM->StatR3Prefetch, STAMTYPE_PROFILE, "/PGM/R3/Prefetch", STAMUNIT_TICKS_PER_CALL, "PGMPrefetchPage profiling.");
1859 STAM_REG(pVM, &pPGM->StatR3ChunkR3MapTlbHits, STAMTYPE_COUNTER, "/PGM/ChunkR3Map/TlbHitsR3", STAMUNIT_OCCURENCES, "TLB hits.");
1860 STAM_REG(pVM, &pPGM->StatR3ChunkR3MapTlbMisses, STAMTYPE_COUNTER, "/PGM/ChunkR3Map/TlbMissesR3", STAMUNIT_OCCURENCES, "TLB misses.");
1861 STAM_REG(pVM, &pPGM->StatR3PageMapTlbHits, STAMTYPE_COUNTER, "/PGM/R3/Page/MapTlbHits", STAMUNIT_OCCURENCES, "TLB hits.");
1862 STAM_REG(pVM, &pPGM->StatR3PageMapTlbMisses, STAMTYPE_COUNTER, "/PGM/R3/Page/MapTlbMisses", STAMUNIT_OCCURENCES, "TLB misses.");
1863 STAM_REG(pVM, &pPGM->StatR3PageReplaceShared, STAMTYPE_COUNTER, "/PGM/R3/Page/ReplacedShared", STAMUNIT_OCCURENCES, "Times a shared page was replaced.");
1864 STAM_REG(pVM, &pPGM->StatR3PageReplaceZero, STAMTYPE_COUNTER, "/PGM/R3/Page/ReplacedZero", STAMUNIT_OCCURENCES, "Times the zero page was replaced.");
1865/// @todo STAM_REG(pVM, &pPGM->StatR3PageHandyAllocs, STAMTYPE_COUNTER, "/PGM/R3/Page/HandyAllocs", STAMUNIT_OCCURENCES, "Number of times we've allocated more handy pages.");
1866    STAM_REG(pVM, &pPGM->StatR3FlushTLB,                    STAMTYPE_PROFILE, "/PGM/R3/FlushTLB",                     STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMFlushTLB() body.");
1867 STAM_REG(pVM, &pPGM->StatR3FlushTLBNewCR3, STAMTYPE_COUNTER, "/PGM/R3/FlushTLB/NewCR3", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with a new CR3, non-global. (switch)");
1868 STAM_REG(pVM, &pPGM->StatR3FlushTLBNewCR3Global, STAMTYPE_COUNTER, "/PGM/R3/FlushTLB/NewCR3Global", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with a new CR3, global. (switch)");
1869 STAM_REG(pVM, &pPGM->StatR3FlushTLBSameCR3, STAMTYPE_COUNTER, "/PGM/R3/FlushTLB/SameCR3", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with the same CR3, non-global. (flush)");
1870 STAM_REG(pVM, &pPGM->StatR3FlushTLBSameCR3Global, STAMTYPE_COUNTER, "/PGM/R3/FlushTLB/SameCR3Global", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with the same CR3, global. (flush)");
1871 STAM_REG(pVM, &pPGM->StatR3GstModifyPage, STAMTYPE_PROFILE, "/PGM/R3/GstModifyPage", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMGstModifyPage() body.");
1872
1873}
1874#endif /* VBOX_WITH_STATISTICS */
1875
1876
1877/**
1878 * Init the PGM bits that rely on VMMR0 and MM to be fully initialized.
1879 *
1880 * The dynamic mapping area will also be allocated and initialized at this
1881 * time. We could allocate it during PGMR3Init of course, but the mapping
1882 * wouldn't be allocated at that time, preventing us from setting up the
1883 * page table entries with the dummy page.
1884 *
1885 * @returns VBox status code.
1886 * @param pVM VM handle.
1887 */
1888VMMR3DECL(int) PGMR3InitDynMap(PVM pVM)
1889{
1890 RTGCPTR GCPtr;
1891 int rc;
1892
1893#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1894 /*
1895 * Reserve space for mapping the paging pages into guest context.
1896 */
1897 rc = MMR3HyperReserve(pVM, PAGE_SIZE * (2 + RT_ELEMENTS(pVM->pgm.s.apShwPaePDsR3) + 1 + 2 + 2), "Paging", &GCPtr);
1898 AssertRCReturn(rc, rc);
1899 pVM->pgm.s.pShw32BitPdRC = GCPtr;
1900 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
1901#endif
1902
1903 /*
1904 * Reserve space for the dynamic mappings.
1905 */
1906 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping", &GCPtr);
1907 if (RT_SUCCESS(rc))
1908 pVM->pgm.s.pbDynPageMapBaseGC = GCPtr;
1909
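    /* If the area straddles a PAE page directory entry boundary (2MB, see
       X86_PD_PAE_SHIFT), reserve a second, non-crossing block instead.
       Keeping the whole area within one 2MB region means a single page table
       covers it in both legacy and PAE paging. Example: a 64KB area based at
       0x001ff000 fails the check, as 0x001ff000 >> 21 != 0x0020efff >> 21. */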
1910 if ( RT_SUCCESS(rc)
1911 && (pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_PAE_SHIFT) != ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_PAE_SHIFT))
1912 {
1913 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping not crossing", &GCPtr);
1914 if (RT_SUCCESS(rc))
1915 pVM->pgm.s.pbDynPageMapBaseGC = GCPtr;
1916 }
1917 if (RT_SUCCESS(rc))
1918 {
1919 AssertRelease((pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_PAE_SHIFT) == ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_PAE_SHIFT));
1920 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
1921 }
1922 return rc;
1923}
1924
1925
1926/**
1927 * Ring-3 init finalizing.
1928 *
1929 * @returns VBox status code.
1930 * @param pVM The VM handle.
1931 */
1932VMMR3DECL(int) PGMR3InitFinalize(PVM pVM)
1933{
1934 int rc;
1935
1936#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1937 /*
1938 * Map the paging pages into the guest context.
1939 */
1940 RTGCPTR GCPtr = pVM->pgm.s.pShw32BitPdRC;
1941 AssertReleaseReturn(GCPtr, VERR_INTERNAL_ERROR);
1942
1943 rc = PGMMap(pVM, GCPtr, pVM->pgm.s.HCPhysShw32BitPD, PAGE_SIZE, 0);
1944 AssertRCReturn(rc, rc);
1945 pVM->pgm.s.pShw32BitPdRC = GCPtr;
1946 GCPtr += PAGE_SIZE;
1947 GCPtr += PAGE_SIZE; /* reserved page */
1948
1949 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apShwPaePDsR3); i++)
1950 {
1951 rc = PGMMap(pVM, GCPtr, pVM->pgm.s.aHCPhysPaePDs[i], PAGE_SIZE, 0);
1952 AssertRCReturn(rc, rc);
1953 pVM->pgm.s.apShwPaePDsRC[i] = GCPtr;
1954 GCPtr += PAGE_SIZE;
1955 }
1956 /* A bit of paranoia is justified. */
1957 AssertRelease(pVM->pgm.s.apShwPaePDsRC[0] + PAGE_SIZE == pVM->pgm.s.apShwPaePDsRC[1]);
1958 AssertRelease(pVM->pgm.s.apShwPaePDsRC[1] + PAGE_SIZE == pVM->pgm.s.apShwPaePDsRC[2]);
1959 AssertRelease(pVM->pgm.s.apShwPaePDsRC[2] + PAGE_SIZE == pVM->pgm.s.apShwPaePDsRC[3]);
1960 GCPtr += PAGE_SIZE; /* reserved page */
1961
1962 rc = PGMMap(pVM, GCPtr, pVM->pgm.s.HCPhysShwPaePdpt, PAGE_SIZE, 0);
1963 AssertRCReturn(rc, rc);
1964 pVM->pgm.s.pShwPaePdptRC = GCPtr;
1965 GCPtr += PAGE_SIZE;
1966 GCPtr += PAGE_SIZE; /* reserved page */
1967#endif
1968
1969 /*
1970     * Set up the dynamic mappings (the space was reserved in PGMR3InitDynMap).
1971     * Initialize the dynamic mapping pages with dummy pages to simplify the cache.
1972 */
1973 /* get the pointer to the page table entries. */
1974 PPGMMAPPING pMapping = pgmGetMapping(pVM, pVM->pgm.s.pbDynPageMapBaseGC);
1975 AssertRelease(pMapping);
1976 const uintptr_t off = pVM->pgm.s.pbDynPageMapBaseGC - pMapping->GCPtr;
1977 const unsigned iPT = off >> X86_PD_SHIFT;
1978 const unsigned iPG = (off >> X86_PT_SHIFT) & X86_PT_MASK;
1979 pVM->pgm.s.paDynPageMap32BitPTEsGC = pMapping->aPTs[iPT].pPTRC + iPG * sizeof(pMapping->aPTs[0].pPTR3->a[0]);
1980 pVM->pgm.s.paDynPageMapPaePTEsGC = pMapping->aPTs[iPT].paPaePTsRC + iPG * sizeof(pMapping->aPTs[0].paPaePTsR3->a[0]);
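    /* Index math above: 'off' is the byte offset of the dynamic area within
       its mapping; off >> X86_PD_SHIFT selects the mapping page table that
       covers it, and (off >> X86_PT_SHIFT) & X86_PT_MASK the first PTE
       within that table; the same indices apply to the PAE shadow tables. */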
1981
1982 /* init cache */
1983 RTHCPHYS HCPhysDummy = MMR3PageDummyHCPhys(pVM);
1984 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.aHCPhysDynPageMapCache); i++)
1985 pVM->pgm.s.aHCPhysDynPageMapCache[i] = HCPhysDummy;
1986
1987 for (unsigned i = 0; i < MM_HYPER_DYNAMIC_SIZE; i += PAGE_SIZE)
1988 {
1989 rc = PGMMap(pVM, pVM->pgm.s.pbDynPageMapBaseGC + i, HCPhysDummy, PAGE_SIZE, 0);
1990 AssertRCReturn(rc, rc);
1991 }
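    /* At this point every dynamic mapping PTE points at the dummy page, which
       matches the aHCPhysDynPageMapCache entries initialized above, so the
       cache and the page tables start out consistent. */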
1992
1993 /*
1994 * Note that AMD uses all 8 reserved bits for the address (so 40 bits in total);
1995 * Intel only goes up to 36 bits, so we stick to 36 as well.
1996 */
1997 /** @todo How to test for 40-bit support? Long mode seems to be the test criterion. */
1998 uint32_t u32Dummy, u32Features;
1999 CPUMGetGuestCpuId(pVM, 1, &u32Dummy, &u32Dummy, &u32Dummy, &u32Features);
2000
2001 if (u32Features & X86_CPUID_FEATURE_EDX_PSE36)
2002 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(36) - 1;
2003 else
2004 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(32) - 1;
2005
2006 LogRel(("PGMR3InitFinalize: 4 MB PSE mask %RGp\n", pVM->pgm.s.GCPhys4MBPSEMask));
2007 return rc;
2008}
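/*
 * Illustration of how GCPhys4MBPSEMask is consumed (a hedged sketch, not part
 * of the build): a guest 4 MB PDE conceptually translates as below, with the
 * mask clipping the physical address to the width chosen above. The raw
 * 0x001fe000 constant stands for PDE bits 20:13 (the "8 reserved bits").
 *
 * @code
 *     X86PDE PdeSrc;                                     // guest PDE with the PS bit set
 *     RTGCPHYS GCPhys = PdeSrc.u & X86_PDE4M_PG_MASK;    // physical bits 31:22
 *     GCPhys |= (RTGCPHYS)(PdeSrc.u & 0x001fe000) << 19; // PDE bits 20:13 -> physical 39:32
 *     GCPhys &= pVM->pgm.s.GCPhys4MBPSEMask;             // clip to 2^36-1 or 2^32-1
 * @endcode
 */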
2009
2010
2011/**
2012 * Applies relocations to data and code managed by this component.
2013 *
2014 * This function will be called at init and whenever the VMM needs to relocate
2015 * itself inside the GC.
2016 *
2017 * @param pVM The VM.
2018 * @param offDelta Relocation delta relative to old location.
2019 */
2020VMMR3DECL(void) PGMR3Relocate(PVM pVM, RTGCINTPTR offDelta)
2021{
2022 LogFlow(("PGMR3Relocate\n"));
2023
2024 /*
2025 * Paging stuff.
2026 */
2027 pVM->pgm.s.GCPtrCR3Mapping += offDelta;
2028 /** @todo move this into shadow and guest specific relocation functions. */
2029#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
2030 AssertMsg(pVM->pgm.s.pShw32BitPdR3, ("Init order, no relocation before paging is initialized!\n"));
2031 pVM->pgm.s.pShw32BitPdRC += offDelta;
2032#endif
2033 pVM->pgm.s.pGst32BitPdRC += offDelta;
2034 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apGstPaePDsRC); i++)
2035 {
2036#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
2037 AssertCompile(RT_ELEMENTS(pVM->pgm.s.apShwPaePDsRC) == RT_ELEMENTS(pVM->pgm.s.apGstPaePDsRC));
2038 pVM->pgm.s.apShwPaePDsRC[i] += offDelta;
2039#endif
2040 pVM->pgm.s.apGstPaePDsRC[i] += offDelta;
2041 }
2042 pVM->pgm.s.pGstPaePdptRC += offDelta;
2043#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
2044 pVM->pgm.s.pShwPaePdptRC += offDelta;
2045#endif
2046
2047#ifdef VBOX_WITH_PGMPOOL_PAGING_ONLY
2048 pVM->pgm.s.pShwPageCR3RC += offDelta;
2049#endif
2050
2051 pgmR3ModeDataInit(pVM, true /* resolve GC/R0 symbols */);
2052 pgmR3ModeDataSwitch(pVM, pVM->pgm.s.enmShadowMode, pVM->pgm.s.enmGuestMode);
2053
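    /* The PGM_SHW_PFN/PGM_GST_PFN/PGM_BTH_PFN macros below expand to the
       per-mode function pointers that pgmR3ModeDataSwitch just (re)loaded,
       e.g. in ring-3 PGM_SHW_PFN(Relocate, pVM) resolves to
       pVM->pgm.s.pfnR3ShwRelocate. */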
2054 PGM_SHW_PFN(Relocate, pVM)(pVM, offDelta);
2055 PGM_GST_PFN(Relocate, pVM)(pVM, offDelta);
2056 PGM_BTH_PFN(Relocate, pVM)(pVM, offDelta);
2057
2058 /*
2059 * Trees.
2060 */
2061 pVM->pgm.s.pTreesRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pTreesR3);
2062
2063 /*
2064 * Ram ranges.
2065 */
2066 if (pVM->pgm.s.pRamRangesR3)
2067 {
2068 pVM->pgm.s.pRamRangesRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pRamRangesR3);
2069 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesR3; pCur->pNextR3; pCur = pCur->pNextR3)
2070 pCur->pNextRC = MMHyperR3ToRC(pVM, pCur->pNextR3);
2071 }
2072
2073 /*
2074 * Update the two page directories with all page table mappings.
2075 * (One or more of them have changed, that's why we're here.)
2076 */
2077 pVM->pgm.s.pMappingsRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pMappingsR3);
2078 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur->pNextR3; pCur = pCur->pNextR3)
2079 pCur->pNextRC = MMHyperR3ToRC(pVM, pCur->pNextR3);
2080
2081 /* Relocate GC addresses of Page Tables. */
2082 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur; pCur = pCur->pNextR3)
2083 {
2084 for (RTHCUINT i = 0; i < pCur->cPTs; i++)
2085 {
2086 pCur->aPTs[i].pPTRC = MMHyperR3ToRC(pVM, pCur->aPTs[i].pPTR3);
2087 pCur->aPTs[i].paPaePTsRC = MMHyperR3ToRC(pVM, pCur->aPTs[i].paPaePTsR3);
2088 }
2089 }
2090
2091 /*
2092 * Dynamic page mapping area.
2093 */
2094 pVM->pgm.s.paDynPageMap32BitPTEsGC += offDelta;
2095 pVM->pgm.s.paDynPageMapPaePTEsGC += offDelta;
2096 pVM->pgm.s.pbDynPageMapBaseGC += offDelta;
2097
2098 /*
2099 * The Zero page.
2100 */
2101 pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
2102#ifdef VBOX_WITH_2X_4GB_ADDR_SPACE
2103 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTR0PTR || !VMMIsHwVirtExtForced(pVM));
2104#else
2105 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTR0PTR);
2106#endif
2107
2108 /*
2109 * Physical and virtual handlers.
2110 */
2111 RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, true, pgmR3RelocatePhysHandler, &offDelta);
2112 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesR3->VirtHandlers, true, pgmR3RelocateVirtHandler, &offDelta);
2113 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesR3->HyperVirtHandlers, true, pgmR3RelocateHyperVirtHandler, &offDelta);
2114
2115 /*
2116 * The page pool.
2117 */
2118 pgmR3PoolRelocate(pVM);
2119}
2120
2121
2122/**
2123 * Callback function for relocating a physical access handler.
2124 *
2125 * @returns 0 (continue enum)
2126 * @param pNode Pointer to a PGMPHYSHANDLER node.
2127 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
2128 * not certain the delta will fit in a void pointer for all possible configs.
2129 */
2130static DECLCALLBACK(int) pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser)
2131{
2132 PPGMPHYSHANDLER pHandler = (PPGMPHYSHANDLER)pNode;
2133 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
2134 if (pHandler->pfnHandlerRC)
2135 pHandler->pfnHandlerRC += offDelta;
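    /* Only relocate pvUserRC when it looks like a raw-mode context pointer;
       values below 64 KB are taken to be plain tokens/indices and are left
       untouched. */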
2136 if (pHandler->pvUserRC >= 0x10000)
2137 pHandler->pvUserRC += offDelta;
2138 return 0;
2139}
2140
2141
2142/**
2143 * Callback function for relocating a virtual access handler.
2144 *
2145 * @returns 0 (continue enum)
2146 * @param pNode Pointer to a PGMVIRTHANDLER node.
2147 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
2148 * not certain the delta will fit in a void pointer for all possible configs.
2149 */
2150static DECLCALLBACK(int) pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
2151{
2152 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
2153 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
2154 Assert( pHandler->enmType == PGMVIRTHANDLERTYPE_ALL
2155 || pHandler->enmType == PGMVIRTHANDLERTYPE_WRITE);
2156 Assert(pHandler->pfnHandlerRC);
2157 pHandler->pfnHandlerRC += offDelta;
2158 return 0;
2159}
2160
2161
2162/**
2163 * Callback function for relocating a virtual access handler for the hypervisor mapping.
2164 *
2165 * @returns 0 (continue enum)
2166 * @param pNode Pointer to a PGMVIRTHANDLER node.
2167 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
2168 * not certain the delta will fit in a void pointer for all possible configs.
2169 */
2170static DECLCALLBACK(int) pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
2171{
2172 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
2173 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
2174 Assert(pHandler->enmType == PGMVIRTHANDLERTYPE_HYPERVISOR);
2175 Assert(pHandler->pfnHandlerRC);
2176 pHandler->pfnHandlerRC += offDelta;
2177 return 0;
2178}
2179
2180
2181/**
2182 * The VM is being reset.
2183 *
2184 * For the PGM component this means that any PD write monitors
2185 * need to be removed.
2186 *
2187 * @param pVM VM handle.
2188 */
2189VMMR3DECL(void) PGMR3Reset(PVM pVM)
2190{
2191 LogFlow(("PGMR3Reset:\n"));
2192 VM_ASSERT_EMT(pVM);
2193
2194 pgmLock(pVM);
2195
2196 /*
2197 * Unfix any fixed mappings and disable CR3 monitoring.
2198 */
2199 pVM->pgm.s.fMappingsFixed = false;
2200 pVM->pgm.s.GCPtrMappingFixed = 0;
2201 pVM->pgm.s.cbMappingFixed = 0;
2202
2203 /* Exit the guest paging mode before the pgm pool gets reset.
2204 * This is important for cleaning up the AMD64 case.
2205 */
2206 int rc = PGM_GST_PFN(Exit, pVM)(pVM);
2207 AssertRC(rc);
2208#ifdef DEBUG
2209 DBGFR3InfoLog(pVM, "mappings", NULL);
2210 DBGFR3InfoLog(pVM, "handlers", "all nostat");
2211#endif
2212
2213 /*
2214 * Reset the shadow page pool.
2215 */
2216 pgmR3PoolReset(pVM);
2217
2218 /*
2219 * Re-init other members.
2220 */
2221 pVM->pgm.s.fA20Enabled = true;
2222
2223 /*
2224 * Clear the FFs PGM owns.
2225 */
2226 VM_FF_CLEAR(pVM, VM_FF_PGM_SYNC_CR3);
2227 VM_FF_CLEAR(pVM, VM_FF_PGM_SYNC_CR3_NON_GLOBAL);
2228
2229 /*
2230 * Reset (zero) RAM pages.
2231 */
2232 rc = pgmR3PhysRamReset(pVM);
2233 if (RT_SUCCESS(rc))
2234 {
2235#ifdef VBOX_WITH_NEW_PHYS_CODE
2236 /*
2237 * Reset (zero) shadow ROM pages.
2238 */
2239 rc = pgmR3PhysRomReset(pVM);
2240#endif
2241 if (RT_SUCCESS(rc))
2242 {
2243 /*
2244 * Switch mode back to real mode.
2245 */
2246 rc = PGMR3ChangeMode(pVM, PGMMODE_REAL);
2247 STAM_REL_COUNTER_RESET(&pVM->pgm.s.cGuestModeChanges);
2248 }
2249 }
2250
2251 pgmUnlock(pVM);
2252 //return rc;
2253 AssertReleaseRC(rc);
2254}
2255
2256
2257#ifdef VBOX_STRICT
2258/**
2259 * VM state change callback for clearing fNoMorePhysWrites after
2260 * a snapshot has been created.
2261 */
2262static DECLCALLBACK(void) pgmR3ResetNoMorePhysWritesFlag(PVM pVM, VMSTATE enmState, VMSTATE enmOldState, void *pvUser)
2263{
2264 if (enmState == VMSTATE_RUNNING)
2265 pVM->pgm.s.fNoMorePhysWrites = false;
2266}
2267#endif
2268
2269
2270/**
2271 * Terminates the PGM.
2272 *
2273 * @returns VBox status code.
2274 * @param pVM Pointer to VM structure.
2275 */
2276VMMR3DECL(int) PGMR3Term(PVM pVM)
2277{
2278 PGMDeregisterStringFormatTypes();
2279 return PDMR3CritSectDelete(&pVM->pgm.s.CritSect);
2280}
2281
2282
2283/**
2284 * Terminates the per-VCPU PGM.
2285 *
2286 * Termination means cleaning up and freeing all resources;
2287 * the VM itself is at this point powered off or suspended.
2288 *
2289 * @returns VBox status code.
2290 * @param pVM The VM to operate on.
2291 */
2292VMMR3DECL(int) PGMR3TermCPU(PVM pVM)
2293{
2294 return VINF_SUCCESS;
2295}
2296
2297
2298/**
2299 * Execute state save operation.
2300 *
2301 * @returns VBox status code.
2302 * @param pVM VM Handle.
2303 * @param pSSM SSM operation handle.
2304 */
2305static DECLCALLBACK(int) pgmR3Save(PVM pVM, PSSMHANDLE pSSM)
2306{
2307#ifdef VBOX_WITH_NEW_PHYS_CODE
2308 AssertReleaseFailedReturn(VERR_NOT_IMPLEMENTED); /** @todo */
2309#else
2310 PPGM pPGM = &pVM->pgm.s;
2311
2312 /* No more writes to physical memory after this point! */
2313 pVM->pgm.s.fNoMorePhysWrites = true;
2314
2315 /*
2316 * Save basic data (required / unaffected by relocation).
2317 */
2318#if 1
2319 SSMR3PutBool(pSSM, pPGM->fMappingsFixed);
2320#else
2321 SSMR3PutUInt(pSSM, pPGM->fMappingsFixed);
2322#endif
2323 SSMR3PutGCPtr(pSSM, pPGM->GCPtrMappingFixed);
2324 SSMR3PutU32(pSSM, pPGM->cbMappingFixed);
2325 SSMR3PutUInt(pSSM, pPGM->cbRamSize);
2326 SSMR3PutGCPhys(pSSM, pPGM->GCPhysA20Mask);
2327 SSMR3PutUInt(pSSM, pPGM->fA20Enabled);
2328 SSMR3PutUInt(pSSM, pPGM->fSyncFlags);
2329 SSMR3PutUInt(pSSM, pPGM->enmGuestMode);
2330 SSMR3PutU32(pSSM, ~0); /* Separator. */
2331
2332 /*
2333 * The guest mappings.
2334 */
2335 uint32_t i = 0;
2336 for (PPGMMAPPING pMapping = pPGM->pMappingsR3; pMapping; pMapping = pMapping->pNextR3, i++)
2337 {
2338 SSMR3PutU32(pSSM, i);
2339 SSMR3PutStrZ(pSSM, pMapping->pszDesc); /* This is the best unique id we have... */
2340 SSMR3PutGCPtr(pSSM, pMapping->GCPtr);
2341 SSMR3PutGCUIntPtr(pSSM, pMapping->cPTs);
2342 /* flags are done by the mapping owners! */
2343 }
2344 SSMR3PutU32(pSSM, ~0); /* terminator. */
2345
2346 /*
2347 * Ram range flags and bits.
2348 */
2349 i = 0;
2350 for (PPGMRAMRANGE pRam = pPGM->pRamRangesR3; pRam; pRam = pRam->pNextR3, i++)
2351 {
2352 /** @todo MMIO ranges may move (PCI reconfig), we currently assume they don't. */
2353
2354 SSMR3PutU32(pSSM, i);
2355 SSMR3PutGCPhys(pSSM, pRam->GCPhys);
2356 SSMR3PutGCPhys(pSSM, pRam->GCPhysLast);
2357 SSMR3PutGCPhys(pSSM, pRam->cb);
2358 SSMR3PutU8(pSSM, !!pRam->pvR3); /* boolean indicating memory or not. */
2359
2360 /* Flags. */
2361 const unsigned cPages = pRam->cb >> PAGE_SHIFT;
2362 for (unsigned iPage = 0; iPage < cPages; iPage++)
2363 SSMR3PutU16(pSSM, (uint16_t)(pRam->aPages[iPage].HCPhys & ~X86_PTE_PAE_PG_MASK)); /** @todo PAGE FLAGS */
2364
2365 /* any memory associated with the range. */
2366 if (pRam->fFlags & MM_RAM_FLAGS_DYNAMIC_ALLOC)
2367 {
2368 for (unsigned iChunk = 0; iChunk < (pRam->cb >> PGM_DYNAMIC_CHUNK_SHIFT); iChunk++)
2369 {
2370 if (pRam->paChunkR3Ptrs[iChunk])
2371 {
2372 SSMR3PutU8(pSSM, 1); /* chunk present */
2373 SSMR3PutMem(pSSM, (void *)pRam->paChunkR3Ptrs[iChunk], PGM_DYNAMIC_CHUNK_SIZE);
2374 }
2375 else
2376 SSMR3PutU8(pSSM, 0); /* no chunk present */
2377 }
2378 }
2379 else if (pRam->pvR3)
2380 {
2381 int rc = SSMR3PutMem(pSSM, pRam->pvR3, pRam->cb);
2382 if (RT_FAILURE(rc))
2383 {
2384 Log(("pgmR3Save: SSMR3PutMem(, %p, %#x) -> %Rrc\n", pRam->pvR3, pRam->cb, rc));
2385 return rc;
2386 }
2387 }
2388 }
2389#endif /* !VBOX_WITH_NEW_PHYS_CODE */
2390 return SSMR3PutU32(pSSM, ~0); /* terminator. */
2391}
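/*
 * For reference, the (pre VBOX_WITH_NEW_PHYS_CODE) unit written above is laid
 * out as follows, and pgmR3Load below consumes exactly this layout:
 *   bool fMappingsFixed, GCPtr GCPtrMappingFixed, u32 cbMappingFixed,
 *   uint cbRamSize, GCPhys GCPhysA20Mask, uint fA20Enabled, uint fSyncFlags,
 *   uint enmGuestMode, u32 ~0 (separator);
 *   per mapping: u32 index, strz pszDesc, GCPtr GCPtr, GCUIntPtr cPTs;
 *   u32 ~0 (terminator);
 *   per RAM range: u32 index, GCPhys GCPhys/GCPhysLast/cb, u8 fHaveBits,
 *   u16 flags per page, then the raw chunk/range memory;
 *   u32 ~0 (terminator).
 */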
2392
2393
2394/**
2395 * Execute state load operation.
2396 *
2397 * @returns VBox status code.
2398 * @param pVM VM Handle.
2399 * @param pSSM SSM operation handle.
2400 * @param u32Version Data layout version.
2401 */
2402static DECLCALLBACK(int) pgmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version)
2403{
2404#ifdef VBOX_WITH_NEW_PHYS_CODE
2405 AssertReleaseFailedReturn(VERR_NOT_IMPLEMENTED); /** @todo */
2406#else
2407 /*
2408 * Validate version.
2409 */
2410 if (u32Version != PGM_SAVED_STATE_VERSION)
2411 {
2412 AssertMsgFailed(("pgmR3Load: Invalid version u32Version=%d (current %d)!\n", u32Version, PGM_SAVED_STATE_VERSION));
2413 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
2414 }
2415
2416 /*
2417 * Call the reset function to make sure all the memory is cleared.
2418 */
2419 PGMR3Reset(pVM);
2420
2421 /*
2422 * Load basic data (required / unaffected by relocation).
2423 */
2424 PPGM pPGM = &pVM->pgm.s;
2425#if 1
2426 SSMR3GetBool(pSSM, &pPGM->fMappingsFixed);
2427#else
2428 uint32_t u;
2429 SSMR3GetU32(pSSM, &u);
2430 pPGM->fMappingsFixed = u;
2431#endif
2432 SSMR3GetGCPtr(pSSM, &pPGM->GCPtrMappingFixed);
2433 SSMR3GetU32(pSSM, &pPGM->cbMappingFixed);
2434
2435 RTUINT cbRamSize;
2436 int rc = SSMR3GetU32(pSSM, &cbRamSize);
2437 if (RT_FAILURE(rc))
2438 return rc;
2439 if (cbRamSize != pPGM->cbRamSize)
2440 return VERR_SSM_LOAD_MEMORY_SIZE_MISMATCH;
2441 SSMR3GetGCPhys(pSSM, &pPGM->GCPhysA20Mask);
2442 SSMR3GetUInt(pSSM, &pPGM->fA20Enabled);
2443 SSMR3GetUInt(pSSM, &pPGM->fSyncFlags);
2444 RTUINT uGuestMode;
2445 SSMR3GetUInt(pSSM, &uGuestMode);
2446 pPGM->enmGuestMode = (PGMMODE)uGuestMode;
2447
2448 /* check separator. */
2449 uint32_t u32Sep;
2450 rc = SSMR3GetU32(pSSM, &u32Sep);
2451 if (RT_FAILURE(rc))
2452 return rc;
2453 if (u32Sep != (uint32_t)~0)
2454 {
2455 AssertMsgFailed(("u32Sep=%#x (first)\n", u32Sep));
2456 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2457 }
2458
2459 /*
2460 * The guest mappings.
2461 */
2462 uint32_t i = 0;
2463 for (;; i++)
2464 {
2465 /* Check the sequence number / separator. */
2466 rc = SSMR3GetU32(pSSM, &u32Sep);
2467 if (RT_FAILURE(rc))
2468 return rc;
2469 if (u32Sep == ~0U)
2470 break;
2471 if (u32Sep != i)
2472 {
2473 AssertMsgFailed(("u32Sep=%#x (mappings)\n", u32Sep));
2474 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2475 }
2476
2477 /* get the mapping details. */
2478 char szDesc[256];
2479 szDesc[0] = '\0';
2480 rc = SSMR3GetStrZ(pSSM, szDesc, sizeof(szDesc));
2481 if (RT_FAILURE(rc))
2482 return rc;
2483 RTGCPTR GCPtr;
2484 SSMR3GetGCPtr(pSSM, &GCPtr);
2485 RTGCPTR cPTs;
2486 rc = SSMR3GetGCUIntPtr(pSSM, &cPTs);
2487 if (RT_FAILURE(rc))
2488 return rc;
2489
2490 /* find matching range. */
2491 PPGMMAPPING pMapping;
2492 for (pMapping = pPGM->pMappingsR3; pMapping; pMapping = pMapping->pNextR3)
2493 if ( pMapping->cPTs == cPTs
2494 && !strcmp(pMapping->pszDesc, szDesc))
2495 break;
2496 if (!pMapping)
2497 {
2498 LogRel(("Couldn't find mapping: cPTs=%#x szDesc=%s (GCPtr=%RGv)\n",
2499 cPTs, szDesc, GCPtr));
2500 AssertFailed();
2501 return VERR_SSM_LOAD_CONFIG_MISMATCH;
2502 }
2503
2504 /* relocate it. */
2505 if (pMapping->GCPtr != GCPtr)
2506 {
2507 AssertMsg((GCPtr >> X86_PD_SHIFT << X86_PD_SHIFT) == GCPtr, ("GCPtr=%RGv\n", GCPtr));
2508 pgmR3MapRelocate(pVM, pMapping, pMapping->GCPtr, GCPtr);
2509 }
2510 else
2511 Log(("pgmR3Load: '%s' needed no relocation (%RGv)\n", szDesc, GCPtr));
2512 }
2513
2514 /*
2515 * Ram range flags and bits.
2516 */
2517 i = 0;
2518 for (PPGMRAMRANGE pRam = pPGM->pRamRangesR3; pRam; pRam = pRam->pNextR3, i++)
2519 {
2520 /** @todo MMIO ranges may move (PCI reconfig), we currently assume they don't. */
2521 /* Check the sequence number / separator. */
2522 rc = SSMR3GetU32(pSSM, &u32Sep);
2523 if (RT_FAILURE(rc))
2524 return rc;
2525 if (u32Sep == ~0U)
2526 break;
2527 if (u32Sep != i)
2528 {
2529 AssertMsgFailed(("u32Sep=%#x (ram ranges)\n", u32Sep));
2530 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2531 }
2532
2533 /* Get the range details. */
2534 RTGCPHYS GCPhys;
2535 SSMR3GetGCPhys(pSSM, &GCPhys);
2536 RTGCPHYS GCPhysLast;
2537 SSMR3GetGCPhys(pSSM, &GCPhysLast);
2538 RTGCPHYS cb;
2539 SSMR3GetGCPhys(pSSM, &cb);
2540 uint8_t fHaveBits;
2541 rc = SSMR3GetU8(pSSM, &fHaveBits);
2542 if (RT_FAILURE(rc))
2543 return rc;
2544 if (fHaveBits & ~1)
2545 {
2546 AssertMsgFailed(("fHaveBits=%#x\n", fHaveBits));
2547 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2548 }
2549
2550 /* Match it up with the current range. */
2551 if ( GCPhys != pRam->GCPhys
2552 || GCPhysLast != pRam->GCPhysLast
2553 || cb != pRam->cb
2554 || fHaveBits != !!pRam->pvR3)
2555 {
2556 LogRel(("Ram range: %RGp-%RGp %RGp bytes %s\n"
2557 "State : %RGp-%RGp %RGp bytes %s\n",
2558 pRam->GCPhys, pRam->GCPhysLast, pRam->cb, pRam->pvR3 ? "bits" : "nobits",
2559 GCPhys, GCPhysLast, cb, fHaveBits ? "bits" : "nobits"));
2560 /*
2561 * If we're loading a state for debugging purpose, don't make a fuss if
2562 * the MMIO[2] and ROM stuff isn't 100% right, just skip the mismatches.
2563 */
2564 if ( SSMR3HandleGetAfter(pSSM) != SSMAFTER_DEBUG_IT
2565 || GCPhys < 8 * _1M)
2566 AssertFailedReturn(VERR_SSM_LOAD_CONFIG_MISMATCH);
2567
2568 RTGCPHYS cPages = ((GCPhysLast - GCPhys) + 1) >> PAGE_SHIFT;
2569 while (cPages-- > 0)
2570 {
2571 uint16_t u16Ignore;
2572 SSMR3GetU16(pSSM, &u16Ignore);
2573 }
2574 continue;
2575 }
2576
2577 /* Flags. */
2578 const unsigned cPages = pRam->cb >> PAGE_SHIFT;
2579 for (unsigned iPage = 0; iPage < cPages; iPage++)
2580 {
2581 uint16_t u16 = 0;
2582 SSMR3GetU16(pSSM, &u16);
2583 u16 &= PAGE_OFFSET_MASK & ~( RT_BIT(4) | RT_BIT(5) | RT_BIT(6)
2584 | RT_BIT(7) | RT_BIT(8) | RT_BIT(9) | RT_BIT(10) );
2585 // &= MM_RAM_FLAGS_DYNAMIC_ALLOC | MM_RAM_FLAGS_RESERVED | MM_RAM_FLAGS_ROM | MM_RAM_FLAGS_MMIO | MM_RAM_FLAGS_MMIO2
2586 pRam->aPages[iPage].HCPhys = PGM_PAGE_GET_HCPHYS(&pRam->aPages[iPage]) | (RTHCPHYS)u16; /** @todo PAGE FLAGS */
2587 }
2588
2589 /* any memory associated with the range. */
2590 if (pRam->fFlags & MM_RAM_FLAGS_DYNAMIC_ALLOC)
2591 {
2592 for (unsigned iChunk = 0; iChunk < (pRam->cb >> PGM_DYNAMIC_CHUNK_SHIFT); iChunk++)
2593 {
2594 uint8_t fValidChunk;
2595
2596 rc = SSMR3GetU8(pSSM, &fValidChunk);
2597 if (RT_FAILURE(rc))
2598 return rc;
2599 if (fValidChunk > 1)
2600 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2601
2602 if (fValidChunk)
2603 {
2604 if (!pRam->paChunkR3Ptrs[iChunk])
2605 {
2606 rc = pgmr3PhysGrowRange(pVM, pRam->GCPhys + iChunk * PGM_DYNAMIC_CHUNK_SIZE);
2607 if (RT_FAILURE(rc))
2608 return rc;
2609 }
2610 Assert(pRam->paChunkR3Ptrs[iChunk]);
2611
2612 SSMR3GetMem(pSSM, (void *)pRam->paChunkR3Ptrs[iChunk], PGM_DYNAMIC_CHUNK_SIZE);
2613 }
2614 /* else nothing to do */
2615 }
2616 }
2617 else if (pRam->pvR3)
2618 {
2619 int rc = SSMR3GetMem(pSSM, pRam->pvR3, pRam->cb);
2620 if (RT_FAILURE(rc))
2621 {
2622 Log(("pgmR3Load: SSMR3GetMem(, %p, %#x) -> %Rrc\n", pRam->pvR3, pRam->cb, rc));
2623 return rc;
2624 }
2625 }
2626 }
2627
2628 /*
2629 * We require a full resync now.
2630 */
2631 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3_NON_GLOBAL);
2632 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);
2633 pPGM->fSyncFlags |= PGM_SYNC_UPDATE_PAGE_BIT_VIRTUAL;
2634 pPGM->fPhysCacheFlushPending = true;
2635 pgmR3HandlerPhysicalUpdateAll(pVM);
2636
2637 /*
2638 * Change the paging mode.
2639 */
2640 rc = PGMR3ChangeMode(pVM, pPGM->enmGuestMode);
2641
2642 /* Restore pVM->pgm.s.GCPhysCR3. */
2643 Assert(pVM->pgm.s.GCPhysCR3 == NIL_RTGCPHYS);
2644 RTGCPHYS GCPhysCR3 = CPUMGetGuestCR3(pVM);
2645 if ( pVM->pgm.s.enmGuestMode == PGMMODE_PAE
2646 || pVM->pgm.s.enmGuestMode == PGMMODE_PAE_NX
2647 || pVM->pgm.s.enmGuestMode == PGMMODE_AMD64
2648 || pVM->pgm.s.enmGuestMode == PGMMODE_AMD64_NX)
2649 GCPhysCR3 = (GCPhysCR3 & X86_CR3_PAE_PAGE_MASK);
2650 else
2651 GCPhysCR3 = (GCPhysCR3 & X86_CR3_PAGE_MASK);
2652 pVM->pgm.s.GCPhysCR3 = GCPhysCR3;
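    /* Note: a PAE CR3 only needs 32-byte alignment since it points at the
       4-entry PDPT, hence the wider X86_CR3_PAE_PAGE_MASK; the other paged
       modes use 4 KB aligned CR3 values. */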
2653
2654 return rc;
2655#endif /* !VBOX_WITH_NEW_PHYS_CODE */
2656}
2657
2658
2659/**
2660 * Show paging mode.
2661 *
2662 * @param pVM VM Handle.
2663 * @param pHlp The info helpers.
2664 * @param pszArgs "all" (default), "guest", "shadow" or "host".
2665 */
2666static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2667{
2668 /* digest argument. */
2669 bool fGuest, fShadow, fHost;
2670 if (pszArgs)
2671 pszArgs = RTStrStripL(pszArgs);
2672 if (!pszArgs || !*pszArgs || strstr(pszArgs, "all"))
2673 fShadow = fHost = fGuest = true;
2674 else
2675 {
2676 fShadow = fHost = fGuest = false;
2677 if (strstr(pszArgs, "guest"))
2678 fGuest = true;
2679 if (strstr(pszArgs, "shadow"))
2680 fShadow = true;
2681 if (strstr(pszArgs, "host"))
2682 fHost = true;
2683 }
2684
2685 /* print info. */
2686 if (fGuest)
2687 pHlp->pfnPrintf(pHlp, "Guest paging mode: %s, changed %RU64 times, A20 %s\n",
2688 PGMGetModeName(pVM->pgm.s.enmGuestMode), pVM->pgm.s.cGuestModeChanges.c,
2689 pVM->pgm.s.fA20Enabled ? "enabled" : "disabled");
2690 if (fShadow)
2691 pHlp->pfnPrintf(pHlp, "Shadow paging mode: %s\n", PGMGetModeName(pVM->pgm.s.enmShadowMode));
2692 if (fHost)
2693 {
2694 const char *psz;
2695 switch (pVM->pgm.s.enmHostMode)
2696 {
2697 case SUPPAGINGMODE_INVALID: psz = "invalid"; break;
2698 case SUPPAGINGMODE_32_BIT: psz = "32-bit"; break;
2699 case SUPPAGINGMODE_32_BIT_GLOBAL: psz = "32-bit+G"; break;
2700 case SUPPAGINGMODE_PAE: psz = "PAE"; break;
2701 case SUPPAGINGMODE_PAE_GLOBAL: psz = "PAE+G"; break;
2702 case SUPPAGINGMODE_PAE_NX: psz = "PAE+NX"; break;
2703 case SUPPAGINGMODE_PAE_GLOBAL_NX: psz = "PAE+G+NX"; break;
2704 case SUPPAGINGMODE_AMD64: psz = "AMD64"; break;
2705 case SUPPAGINGMODE_AMD64_GLOBAL: psz = "AMD64+G"; break;
2706 case SUPPAGINGMODE_AMD64_NX: psz = "AMD64+NX"; break;
2707 case SUPPAGINGMODE_AMD64_GLOBAL_NX: psz = "AMD64+G+NX"; break;
2708 default: psz = "unknown"; break;
2709 }
2710 pHlp->pfnPrintf(pHlp, "Host paging mode: %s\n", psz);
2711 }
2712}
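/*
 * Usage sketch (assuming this handler is registered under the "mode" info
 * item during PGM init; DBGFR3InfoLog is used the same way in PGMR3Reset
 * above):
 *
 * @code
 *     DBGFR3InfoLog(pVM, "mode", "all"); // logs guest, shadow and host modes
 * @endcode
 */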
2713
2714
2715/**
2716 * Dump the registered RAM ranges to the log.
2717 *
2718 * @param pVM VM Handle.
2719 * @param pHlp The info helpers.
2720 * @param pszArgs Arguments, ignored.
2721 */
2722static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2723{
2724 NOREF(pszArgs);
2725 pHlp->pfnPrintf(pHlp,
2726 "RAM ranges (pVM=%p)\n"
2727 "%.*s %.*s\n",
2728 pVM,
2729 sizeof(RTGCPHYS) * 4 + 1, "GC Phys Range ",
2730 sizeof(RTHCPTR) * 2, "pvR3 ");
2731
2732 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesR3; pCur; pCur = pCur->pNextR3)
2733 pHlp->pfnPrintf(pHlp,
2734 "%RGp-%RGp %RHv %s\n",
2735 pCur->GCPhys,
2736 pCur->GCPhysLast,
2737 pCur->pvR3,
2738 pCur->pszDesc);
2739}
2740
2741/**
2742 * Dump the page directory to the log.
2743 *
2744 * @param pVM VM Handle.
2745 * @param pHlp The info helpers.
2746 * @param pszArgs Arguments, ignored.
2747 */
2748static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2749{
2750/** @todo fix this! Convert the PGMR3DumpHierarchyHC functions to do guest stuff. */
2751 /* Big pages supported? */
2752 const bool fPSE = !!(CPUMGetGuestCR4(pVM) & X86_CR4_PSE);
2753
2754 /* Global pages supported? */
2755 const bool fPGE = !!(CPUMGetGuestCR4(pVM) & X86_CR4_PGE);
2756
2757 NOREF(pszArgs);
2758
2759 /*
2760 * Get page directory addresses.
2761 */
2762 PX86PD pPDSrc = pVM->pgm.s.pGst32BitPdR3;
2763 Assert(pPDSrc);
2764 Assert(PGMPhysGCPhys2R3PtrAssert(pVM, (RTGCPHYS)(CPUMGetGuestCR3(pVM) & X86_CR3_PAGE_MASK), sizeof(*pPDSrc)) == pPDSrc);
2765
2766 /*
2767 * Iterate the page directory.
2768 */
2769 for (unsigned iPD = 0; iPD < RT_ELEMENTS(pPDSrc->a); iPD++)
2770 {
2771 X86PDE PdeSrc = pPDSrc->a[iPD];
2772 if (PdeSrc.n.u1Present)
2773 {
2774 if (PdeSrc.b.u1Size && fPSE)
2775 pHlp->pfnPrintf(pHlp,
2776 "%04X - %RGp P=%d U=%d RW=%d G=%d - BIG\n",
2777 iPD,
2778 pgmGstGet4MBPhysPage(&pVM->pgm.s, PdeSrc),
2779 PdeSrc.b.u1Present, PdeSrc.b.u1User, PdeSrc.b.u1Write, PdeSrc.b.u1Global && fPGE);
2780 else
2781 pHlp->pfnPrintf(pHlp,
2782 "%04X - %RGp P=%d U=%d RW=%d [G=%d]\n",
2783 iPD,
2784 (RTGCPHYS)(PdeSrc.u & X86_PDE_PG_MASK),
2785 PdeSrc.n.u1Present, PdeSrc.n.u1User, PdeSrc.n.u1Write, PdeSrc.b.u1Global && fPGE);
2786 }
2787 }
2788}
2789
2790
2791/**
2792 * Service a VMMCALLHOST_PGM_LOCK call.
2793 *
2794 * @returns VBox status code.
2795 * @param pVM The VM handle.
2796 */
2797VMMR3DECL(int) PGMR3LockCall(PVM pVM)
2798{
2799 int rc = PDMR3CritSectEnterEx(&pVM->pgm.s.CritSect, true /* fHostCall */);
2800 AssertRC(rc);
2801 return rc;
2802}
2803
2804
2805/**
2806 * Converts a PGMMODE value to a PGM_TYPE_* \#define.
2807 *
2808 * @returns PGM_TYPE_*.
2809 * @param pgmMode The mode value to convert.
2810 */
2811DECLINLINE(unsigned) pgmModeToType(PGMMODE pgmMode)
2812{
2813 switch (pgmMode)
2814 {
2815 case PGMMODE_REAL: return PGM_TYPE_REAL;
2816 case PGMMODE_PROTECTED: return PGM_TYPE_PROT;
2817 case PGMMODE_32_BIT: return PGM_TYPE_32BIT;
2818 case PGMMODE_PAE:
2819 case PGMMODE_PAE_NX: return PGM_TYPE_PAE;
2820 case PGMMODE_AMD64:
2821 case PGMMODE_AMD64_NX: return PGM_TYPE_AMD64;
2822 case PGMMODE_NESTED: return PGM_TYPE_NESTED;
2823 case PGMMODE_EPT: return PGM_TYPE_EPT;
2824 default:
2825 AssertFatalMsgFailed(("pgmMode=%d\n", pgmMode));
2826 }
2827}
2828
2829
2830/**
2831 * Gets the index into the paging mode data array of a SHW+GST mode.
2832 *
2833 * @returns PGM::paPagingData index.
2834 * @param uShwType The shadow paging mode type.
2835 * @param uGstType The guest paging mode type.
2836 */
2837DECLINLINE(unsigned) pgmModeDataIndex(unsigned uShwType, unsigned uGstType)
2838{
2839 Assert(uShwType >= PGM_TYPE_32BIT && uShwType <= PGM_TYPE_MAX);
2840 Assert(uGstType >= PGM_TYPE_REAL && uGstType <= PGM_TYPE_AMD64);
2841 return (uShwType - PGM_TYPE_32BIT) * (PGM_TYPE_AMD64 - PGM_TYPE_REAL + 1)
2842 + (uGstType - PGM_TYPE_REAL);
2843}
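/*
 * Worked example (illustrative only, assuming the consecutive PGM_TYPE_*
 * values used in this file): each shadow type owns a row of
 * (PGM_TYPE_AMD64 - PGM_TYPE_REAL + 1) == 5 guest entries, so:
 *
 * @code
 *     pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_REAL);  // row 0, column 0 -> 0
 *     pgmModeDataIndex(PGM_TYPE_PAE,   PGM_TYPE_32BIT); // row 1, column 2 -> 7
 * @endcode
 */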
2844
2845
2846/**
2847 * Gets the index into the paging mode data array of a SHW+GST mode.
2848 *
2849 * @returns PGM::paPagingData index.
2850 * @param enmShw The shadow paging mode.
2851 * @param enmGst The guest paging mode.
2852 */
2853DECLINLINE(unsigned) pgmModeDataIndexByMode(PGMMODE enmShw, PGMMODE enmGst)
2854{
2855 Assert(enmShw >= PGMMODE_32_BIT && enmShw <= PGMMODE_MAX);
2856 Assert(enmGst > PGMMODE_INVALID && enmGst < PGMMODE_MAX);
2857 return pgmModeDataIndex(pgmModeToType(enmShw), pgmModeToType(enmGst));
2858}
2859
2860
2861/**
2862 * Calculates the max data index.
2863 * @returns The number of entries in the paging data array.
2864 */
2865DECLINLINE(unsigned) pgmModeDataMaxIndex(void)
2866{
2867 return pgmModeDataIndex(PGM_TYPE_MAX, PGM_TYPE_AMD64) + 1;
2868}
2869
2870
2871/**
2872 * Initializes the paging mode data kept in PGM::paModeData.
2873 *
2874 * @param pVM The VM handle.
2875 * @param fResolveGCAndR0 Indicates whether or not GC and Ring-0 symbols can be resolved now.
2876 * This is used early in the init process to avoid trouble with PDM
2877 * not being initialized yet.
2878 */
2879static int pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0)
2880{
2881 PPGMMODEDATA pModeData;
2882 int rc;
2883
2884 /*
2885 * Allocate the array on the first call.
2886 */
2887 if (!pVM->pgm.s.paModeData)
2888 {
2889 pVM->pgm.s.paModeData = (PPGMMODEDATA)MMR3HeapAllocZ(pVM, MM_TAG_PGM, sizeof(PGMMODEDATA) * pgmModeDataMaxIndex());
2890 AssertReturn(pVM->pgm.s.paModeData, VERR_NO_MEMORY);
2891 }
2892
2893 /*
2894 * Initialize the array entries.
2895 */
2896 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_REAL)];
2897 pModeData->uShwType = PGM_TYPE_32BIT;
2898 pModeData->uGstType = PGM_TYPE_REAL;
2899 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2900 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2901 rc = PGM_BTH_NAME_32BIT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2902
2903 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_PROT)];
2904 pModeData->uShwType = PGM_TYPE_32BIT;
2905 pModeData->uGstType = PGM_TYPE_PROT;
2906 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2907 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2908 rc = PGM_BTH_NAME_32BIT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2909
2910 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_32BIT)];
2911 pModeData->uShwType = PGM_TYPE_32BIT;
2912 pModeData->uGstType = PGM_TYPE_32BIT;
2913 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2914 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2915 rc = PGM_BTH_NAME_32BIT_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2916
2917 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_REAL)];
2918 pModeData->uShwType = PGM_TYPE_PAE;
2919 pModeData->uGstType = PGM_TYPE_REAL;
2920 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2921 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2922 rc = PGM_BTH_NAME_PAE_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2923
2924 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PROT)];
2925 pModeData->uShwType = PGM_TYPE_PAE;
2926 pModeData->uGstType = PGM_TYPE_PROT;
2927 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2928 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2929 rc = PGM_BTH_NAME_PAE_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2930
2931 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_32BIT)];
2932 pModeData->uShwType = PGM_TYPE_PAE;
2933 pModeData->uGstType = PGM_TYPE_32BIT;
2934 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2935 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2936 rc = PGM_BTH_NAME_PAE_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2937
2938 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PAE)];
2939 pModeData->uShwType = PGM_TYPE_PAE;
2940 pModeData->uGstType = PGM_TYPE_PAE;
2941 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2942 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2943 rc = PGM_BTH_NAME_PAE_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2944
2945#ifdef VBOX_WITH_64_BITS_GUESTS
2946 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_AMD64, PGM_TYPE_AMD64)];
2947 pModeData->uShwType = PGM_TYPE_AMD64;
2948 pModeData->uGstType = PGM_TYPE_AMD64;
2949 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2950 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2951 rc = PGM_BTH_NAME_AMD64_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2952#endif
2953
2954 /* The nested paging mode. */
2955 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_REAL)];
2956 pModeData->uShwType = PGM_TYPE_NESTED;
2957 pModeData->uGstType = PGM_TYPE_REAL;
2958 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2959 rc = PGM_BTH_NAME_NESTED_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2960
2961 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_PROT)];
2962 pModeData->uShwType = PGM_TYPE_NESTED;
2963 pModeData->uGstType = PGM_TYPE_PROT;
2964 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2965 rc = PGM_BTH_NAME_NESTED_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2966
2967 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_32BIT)];
2968 pModeData->uShwType = PGM_TYPE_NESTED;
2969 pModeData->uGstType = PGM_TYPE_32BIT;
2970 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2971 rc = PGM_BTH_NAME_NESTED_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2972
2973 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_PAE)];
2974 pModeData->uShwType = PGM_TYPE_NESTED;
2975 pModeData->uGstType = PGM_TYPE_PAE;
2976 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2977 rc = PGM_BTH_NAME_NESTED_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2978
2979#ifdef VBOX_WITH_64_BITS_GUESTS
2980 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
2981 pModeData->uShwType = PGM_TYPE_NESTED;
2982 pModeData->uGstType = PGM_TYPE_AMD64;
2983 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2984 rc = PGM_BTH_NAME_NESTED_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2985#endif
2986
2987 /* The shadow part of the nested callback mode depends on the host paging mode (AMD-V only). */
2988 switch (pVM->pgm.s.enmHostMode)
2989 {
2990#if HC_ARCH_BITS == 32
2991 case SUPPAGINGMODE_32_BIT:
2992 case SUPPAGINGMODE_32_BIT_GLOBAL:
2993 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
2994 {
2995 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
2996 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2997 }
2998# ifdef VBOX_WITH_64_BITS_GUESTS
2999 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
3000 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3001# endif
3002 break;
3003
3004 case SUPPAGINGMODE_PAE:
3005 case SUPPAGINGMODE_PAE_NX:
3006 case SUPPAGINGMODE_PAE_GLOBAL:
3007 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3008 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
3009 {
3010 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
3011 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3012 }
3013# ifdef VBOX_WITH_64_BITS_GUESTS
3014 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
3015 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3016# endif
3017 break;
3018#endif /* HC_ARCH_BITS == 32 */
3019
3020#if HC_ARCH_BITS == 64 || defined(RT_OS_DARWIN)
3021 case SUPPAGINGMODE_AMD64:
3022 case SUPPAGINGMODE_AMD64_GLOBAL:
3023 case SUPPAGINGMODE_AMD64_NX:
3024 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3025# ifdef VBOX_WITH_64_BITS_GUESTS
3026 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_AMD64; i++)
3027# else
3028 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
3029# endif
3030 {
3031 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
3032 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3033 }
3034 break;
3035#endif /* HC_ARCH_BITS == 64 || RT_OS_DARWIN */
3036
3037 default:
3038 AssertFailed();
3039 break;
3040 }
3041
3042 /* Extended paging (EPT) / Intel VT-x */
3043 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_REAL)];
3044 pModeData->uShwType = PGM_TYPE_EPT;
3045 pModeData->uGstType = PGM_TYPE_REAL;
3046 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3047 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3048 rc = PGM_BTH_NAME_EPT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3049
3050 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_PROT)];
3051 pModeData->uShwType = PGM_TYPE_EPT;
3052 pModeData->uGstType = PGM_TYPE_PROT;
3053 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3054 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3055 rc = PGM_BTH_NAME_EPT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3056
3057 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_32BIT)];
3058 pModeData->uShwType = PGM_TYPE_EPT;
3059 pModeData->uGstType = PGM_TYPE_32BIT;
3060 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3061 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3062 rc = PGM_BTH_NAME_EPT_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3063
3064 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_PAE)];
3065 pModeData->uShwType = PGM_TYPE_EPT;
3066 pModeData->uGstType = PGM_TYPE_PAE;
3067 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3068 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3069 rc = PGM_BTH_NAME_EPT_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3070
3071#ifdef VBOX_WITH_64_BITS_GUESTS
3072 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_AMD64)];
3073 pModeData->uShwType = PGM_TYPE_EPT;
3074 pModeData->uGstType = PGM_TYPE_AMD64;
3075 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3076 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3077 rc = PGM_BTH_NAME_EPT_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3078#endif
3079 return VINF_SUCCESS;
3080}
3081
3082
3083/**
3084 * Switches to different mode data (or to relocated mode data in the relocate case).
3085 *
3086 * @param pVM The VM handle.
3087 * @param enmShw The shadow paging mode.
3088 * @param enmGst The guest paging mode.
3089 */
3090static void pgmR3ModeDataSwitch(PVM pVM, PGMMODE enmShw, PGMMODE enmGst)
3091{
3092 PPGMMODEDATA pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndexByMode(enmShw, enmGst)];
3093
3094 Assert(pModeData->uGstType == pgmModeToType(enmGst));
3095 Assert(pModeData->uShwType == pgmModeToType(enmShw));
3096
3097 /* shadow */
3098 pVM->pgm.s.pfnR3ShwRelocate = pModeData->pfnR3ShwRelocate;
3099 pVM->pgm.s.pfnR3ShwExit = pModeData->pfnR3ShwExit;
3100 pVM->pgm.s.pfnR3ShwGetPage = pModeData->pfnR3ShwGetPage;
3101 Assert(pVM->pgm.s.pfnR3ShwGetPage);
3102 pVM->pgm.s.pfnR3ShwModifyPage = pModeData->pfnR3ShwModifyPage;
3103
3104 pVM->pgm.s.pfnRCShwGetPage = pModeData->pfnRCShwGetPage;
3105 pVM->pgm.s.pfnRCShwModifyPage = pModeData->pfnRCShwModifyPage;
3106
3107 pVM->pgm.s.pfnR0ShwGetPage = pModeData->pfnR0ShwGetPage;
3108 pVM->pgm.s.pfnR0ShwModifyPage = pModeData->pfnR0ShwModifyPage;
3109
3110
3111 /* guest */
3112 pVM->pgm.s.pfnR3GstRelocate = pModeData->pfnR3GstRelocate;
3113 pVM->pgm.s.pfnR3GstExit = pModeData->pfnR3GstExit;
3114 pVM->pgm.s.pfnR3GstGetPage = pModeData->pfnR3GstGetPage;
3115 Assert(pVM->pgm.s.pfnR3GstGetPage);
3116 pVM->pgm.s.pfnR3GstModifyPage = pModeData->pfnR3GstModifyPage;
3117 pVM->pgm.s.pfnR3GstGetPDE = pModeData->pfnR3GstGetPDE;
3118#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3119 pVM->pgm.s.pfnR3GstMonitorCR3 = pModeData->pfnR3GstMonitorCR3;
3120 pVM->pgm.s.pfnR3GstUnmonitorCR3 = pModeData->pfnR3GstUnmonitorCR3;
3121#endif
3122#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3123 pVM->pgm.s.pfnR3GstWriteHandlerCR3 = pModeData->pfnR3GstWriteHandlerCR3;
3124 pVM->pgm.s.pszR3GstWriteHandlerCR3 = pModeData->pszR3GstWriteHandlerCR3;
3125 pVM->pgm.s.pfnR3GstPAEWriteHandlerCR3 = pModeData->pfnR3GstPAEWriteHandlerCR3;
3126 pVM->pgm.s.pszR3GstPAEWriteHandlerCR3 = pModeData->pszR3GstPAEWriteHandlerCR3;
3127#endif
3128 pVM->pgm.s.pfnRCGstGetPage = pModeData->pfnRCGstGetPage;
3129 pVM->pgm.s.pfnRCGstModifyPage = pModeData->pfnRCGstModifyPage;
3130 pVM->pgm.s.pfnRCGstGetPDE = pModeData->pfnRCGstGetPDE;
3131#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3132 pVM->pgm.s.pfnRCGstMonitorCR3 = pModeData->pfnRCGstMonitorCR3;
3133 pVM->pgm.s.pfnRCGstUnmonitorCR3 = pModeData->pfnRCGstUnmonitorCR3;
3134#endif
3135#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3136 pVM->pgm.s.pfnRCGstWriteHandlerCR3 = pModeData->pfnRCGstWriteHandlerCR3;
3137 pVM->pgm.s.pfnRCGstPAEWriteHandlerCR3 = pModeData->pfnRCGstPAEWriteHandlerCR3;
3138#endif
3139 pVM->pgm.s.pfnR0GstGetPage = pModeData->pfnR0GstGetPage;
3140 pVM->pgm.s.pfnR0GstModifyPage = pModeData->pfnR0GstModifyPage;
3141 pVM->pgm.s.pfnR0GstGetPDE = pModeData->pfnR0GstGetPDE;
3142#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3143 pVM->pgm.s.pfnR0GstMonitorCR3 = pModeData->pfnR0GstMonitorCR3;
3144 pVM->pgm.s.pfnR0GstUnmonitorCR3 = pModeData->pfnR0GstUnmonitorCR3;
3145#endif
3146#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3147 pVM->pgm.s.pfnR0GstWriteHandlerCR3 = pModeData->pfnR0GstWriteHandlerCR3;
3148 pVM->pgm.s.pfnR0GstPAEWriteHandlerCR3 = pModeData->pfnR0GstPAEWriteHandlerCR3;
3149#endif
3150
3151 /* both */
3152 pVM->pgm.s.pfnR3BthRelocate = pModeData->pfnR3BthRelocate;
3153 pVM->pgm.s.pfnR3BthInvalidatePage = pModeData->pfnR3BthInvalidatePage;
3154 pVM->pgm.s.pfnR3BthSyncCR3 = pModeData->pfnR3BthSyncCR3;
3155 Assert(pVM->pgm.s.pfnR3BthSyncCR3);
3156 pVM->pgm.s.pfnR3BthSyncPage = pModeData->pfnR3BthSyncPage;
3157 pVM->pgm.s.pfnR3BthPrefetchPage = pModeData->pfnR3BthPrefetchPage;
3158 pVM->pgm.s.pfnR3BthVerifyAccessSyncPage = pModeData->pfnR3BthVerifyAccessSyncPage;
3159#ifdef VBOX_STRICT
3160 pVM->pgm.s.pfnR3BthAssertCR3 = pModeData->pfnR3BthAssertCR3;
3161#endif
3162 pVM->pgm.s.pfnR3BthMapCR3 = pModeData->pfnR3BthMapCR3;
3163 pVM->pgm.s.pfnR3BthUnmapCR3 = pModeData->pfnR3BthUnmapCR3;
3164
3165 pVM->pgm.s.pfnRCBthTrap0eHandler = pModeData->pfnRCBthTrap0eHandler;
3166 pVM->pgm.s.pfnRCBthInvalidatePage = pModeData->pfnRCBthInvalidatePage;
3167 pVM->pgm.s.pfnRCBthSyncCR3 = pModeData->pfnRCBthSyncCR3;
3168 pVM->pgm.s.pfnRCBthSyncPage = pModeData->pfnRCBthSyncPage;
3169 pVM->pgm.s.pfnRCBthPrefetchPage = pModeData->pfnRCBthPrefetchPage;
3170 pVM->pgm.s.pfnRCBthVerifyAccessSyncPage = pModeData->pfnRCBthVerifyAccessSyncPage;
3171#ifdef VBOX_STRICT
3172 pVM->pgm.s.pfnRCBthAssertCR3 = pModeData->pfnRCBthAssertCR3;
3173#endif
3174 pVM->pgm.s.pfnRCBthMapCR3 = pModeData->pfnRCBthMapCR3;
3175 pVM->pgm.s.pfnRCBthUnmapCR3 = pModeData->pfnRCBthUnmapCR3;
3176
3177 pVM->pgm.s.pfnR0BthTrap0eHandler = pModeData->pfnR0BthTrap0eHandler;
3178 pVM->pgm.s.pfnR0BthInvalidatePage = pModeData->pfnR0BthInvalidatePage;
3179 pVM->pgm.s.pfnR0BthSyncCR3 = pModeData->pfnR0BthSyncCR3;
3180 pVM->pgm.s.pfnR0BthSyncPage = pModeData->pfnR0BthSyncPage;
3181 pVM->pgm.s.pfnR0BthPrefetchPage = pModeData->pfnR0BthPrefetchPage;
3182 pVM->pgm.s.pfnR0BthVerifyAccessSyncPage = pModeData->pfnR0BthVerifyAccessSyncPage;
3183#ifdef VBOX_STRICT
3184 pVM->pgm.s.pfnR0BthAssertCR3 = pModeData->pfnR0BthAssertCR3;
3185#endif
3186 pVM->pgm.s.pfnR0BthMapCR3 = pModeData->pfnR0BthMapCR3;
3187 pVM->pgm.s.pfnR0BthUnmapCR3 = pModeData->pfnR0BthUnmapCR3;
3188}
3189
3190
3191/**
3192 * Calculates the shadow paging mode.
3193 *
3194 * @returns The shadow paging mode.
3195 * @param pVM VM handle.
3196 * @param enmGuestMode The guest mode.
3197 * @param enmHostMode The host mode.
3198 * @param enmShadowMode The current shadow mode.
3199 * @param penmSwitcher Where to store the switcher to use.
3200 * VMMSWITCHER_INVALID means no change.
3201 */
3202static PGMMODE pgmR3CalcShadowMode(PVM pVM, PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher)
3203{
3204 VMMSWITCHER enmSwitcher = VMMSWITCHER_INVALID;
3205 switch (enmGuestMode)
3206 {
3207 /*
3208 * When switching to real or protected mode we don't change
3209 * anything since it's likely that we'll switch back pretty soon.
3210 *
3211 * During pgmR3InitPaging we'll end up here with PGMMODE_INVALID,
3212 * and this code is supposed to determine which shadow paging mode
3213 * and switcher to use during init.
3214 */
3215 case PGMMODE_REAL:
3216 case PGMMODE_PROTECTED:
3217 if ( enmShadowMode != PGMMODE_INVALID
3218 && !HWACCMIsEnabled(pVM) /* always switch in hwaccm mode! */)
3219 break; /* (no change) */
3220
3221 switch (enmHostMode)
3222 {
3223 case SUPPAGINGMODE_32_BIT:
3224 case SUPPAGINGMODE_32_BIT_GLOBAL:
3225 enmShadowMode = PGMMODE_32_BIT;
3226 enmSwitcher = VMMSWITCHER_32_TO_32;
3227 break;
3228
3229 case SUPPAGINGMODE_PAE:
3230 case SUPPAGINGMODE_PAE_NX:
3231 case SUPPAGINGMODE_PAE_GLOBAL:
3232 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3233 enmShadowMode = PGMMODE_PAE;
3234 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3235#ifdef DEBUG_bird
3236 if (RTEnvExist("VBOX_32BIT"))
3237 {
3238 enmShadowMode = PGMMODE_32_BIT;
3239 enmSwitcher = VMMSWITCHER_PAE_TO_32;
3240 }
3241#endif
3242 break;
3243
3244 case SUPPAGINGMODE_AMD64:
3245 case SUPPAGINGMODE_AMD64_GLOBAL:
3246 case SUPPAGINGMODE_AMD64_NX:
3247 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3248 enmShadowMode = PGMMODE_PAE;
3249 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3250#ifdef DEBUG_bird
3251 if (RTEnvExist("VBOX_32BIT"))
3252 {
3253 enmShadowMode = PGMMODE_32_BIT;
3254 enmSwitcher = VMMSWITCHER_AMD64_TO_32;
3255 }
3256#endif
3257 break;
3258
3259 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3260 }
3261 break;
3262
3263 case PGMMODE_32_BIT:
3264 switch (enmHostMode)
3265 {
3266 case SUPPAGINGMODE_32_BIT:
3267 case SUPPAGINGMODE_32_BIT_GLOBAL:
3268 enmShadowMode = PGMMODE_32_BIT;
3269 enmSwitcher = VMMSWITCHER_32_TO_32;
3270 break;
3271
3272 case SUPPAGINGMODE_PAE:
3273 case SUPPAGINGMODE_PAE_NX:
3274 case SUPPAGINGMODE_PAE_GLOBAL:
3275 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3276 enmShadowMode = PGMMODE_PAE;
3277 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3278#ifdef DEBUG_bird
3279 if (RTEnvExist("VBOX_32BIT"))
3280 {
3281 enmShadowMode = PGMMODE_32_BIT;
3282 enmSwitcher = VMMSWITCHER_PAE_TO_32;
3283 }
3284#endif
3285 break;
3286
3287 case SUPPAGINGMODE_AMD64:
3288 case SUPPAGINGMODE_AMD64_GLOBAL:
3289 case SUPPAGINGMODE_AMD64_NX:
3290 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3291 enmShadowMode = PGMMODE_PAE;
3292 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3293#ifdef DEBUG_bird
3294 if (RTEnvExist("VBOX_32BIT"))
3295 {
3296 enmShadowMode = PGMMODE_32_BIT;
3297 enmSwitcher = VMMSWITCHER_AMD64_TO_32;
3298 }
3299#endif
3300 break;
3301
3302 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3303 }
3304 break;
3305
3306 case PGMMODE_PAE:
3307 case PGMMODE_PAE_NX: /** @todo This might require more switchers and guest+both modes. */
3308 switch (enmHostMode)
3309 {
3310 case SUPPAGINGMODE_32_BIT:
3311 case SUPPAGINGMODE_32_BIT_GLOBAL:
3312 enmShadowMode = PGMMODE_PAE;
3313 enmSwitcher = VMMSWITCHER_32_TO_PAE;
3314 break;
3315
3316 case SUPPAGINGMODE_PAE:
3317 case SUPPAGINGMODE_PAE_NX:
3318 case SUPPAGINGMODE_PAE_GLOBAL:
3319 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3320 enmShadowMode = PGMMODE_PAE;
3321 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3322 break;
3323
3324 case SUPPAGINGMODE_AMD64:
3325 case SUPPAGINGMODE_AMD64_GLOBAL:
3326 case SUPPAGINGMODE_AMD64_NX:
3327 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3328 enmShadowMode = PGMMODE_PAE;
3329 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3330 break;
3331
3332 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3333 }
3334 break;
3335
3336 case PGMMODE_AMD64:
3337 case PGMMODE_AMD64_NX:
3338 switch (enmHostMode)
3339 {
3340 case SUPPAGINGMODE_32_BIT:
3341 case SUPPAGINGMODE_32_BIT_GLOBAL:
3342 enmShadowMode = PGMMODE_AMD64;
3343 enmSwitcher = VMMSWITCHER_32_TO_AMD64;
3344 break;
3345
3346 case SUPPAGINGMODE_PAE:
3347 case SUPPAGINGMODE_PAE_NX:
3348 case SUPPAGINGMODE_PAE_GLOBAL:
3349 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3350 enmShadowMode = PGMMODE_AMD64;
3351 enmSwitcher = VMMSWITCHER_PAE_TO_AMD64;
3352 break;
3353
3354 case SUPPAGINGMODE_AMD64:
3355 case SUPPAGINGMODE_AMD64_GLOBAL:
3356 case SUPPAGINGMODE_AMD64_NX:
3357 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3358 enmShadowMode = PGMMODE_AMD64;
3359 enmSwitcher = VMMSWITCHER_AMD64_TO_AMD64;
3360 break;
3361
3362 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3363 }
3364 break;
3365
3366
3367 default:
3368 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
3369 return PGMMODE_INVALID;
3370 }
3371 /* Override the shadow mode if nested paging is active. */
3372 if (HWACCMIsNestedPagingActive(pVM))
3373 enmShadowMode = HWACCMGetShwPagingMode(pVM);
3374
3375 *penmSwitcher = enmSwitcher;
3376 return enmShadowMode;
3377}
3378
3379
3380/**
3381 * Performs the actual mode change.
3382 * This is called by PGMChangeMode and pgmR3InitPaging().
3383 *
3384 * @returns VBox status code.
3385 * @param pVM VM handle.
3386 * @param enmGuestMode The new guest mode. This is assumed to be different from
3387 * the current mode.
3388 */
3389VMMR3DECL(int) PGMR3ChangeMode(PVM pVM, PGMMODE enmGuestMode)
3390{
3391 Log(("PGMR3ChangeMode: Guest mode: %s -> %s\n", PGMGetModeName(pVM->pgm.s.enmGuestMode), PGMGetModeName(enmGuestMode)));
3392 STAM_REL_COUNTER_INC(&pVM->pgm.s.cGuestModeChanges);
3393
3394 /*
3395 * Calc the shadow mode and switcher.
3396 */
3397 VMMSWITCHER enmSwitcher;
3398 PGMMODE enmShadowMode = pgmR3CalcShadowMode(pVM, enmGuestMode, pVM->pgm.s.enmHostMode, pVM->pgm.s.enmShadowMode, &enmSwitcher);
3399 if (enmSwitcher != VMMSWITCHER_INVALID)
3400 {
3401 /*
3402 * Select new switcher.
3403 */
3404 int rc = VMMR3SelectSwitcher(pVM, enmSwitcher);
3405 if (RT_FAILURE(rc))
3406 {
3407 AssertReleaseMsgFailed(("VMMR3SelectSwitcher(%d) -> %Rrc\n", enmSwitcher, rc));
3408 return rc;
3409 }
3410 }
3411
3412 /*
3413 * Exit old mode(s).
3414 */
3415 /* shadow */
3416 if (enmShadowMode != pVM->pgm.s.enmShadowMode)
3417 {
3418 LogFlow(("PGMR3ChangeMode: Shadow mode: %s -> %s\n", PGMGetModeName(pVM->pgm.s.enmShadowMode), PGMGetModeName(enmShadowMode)));
3419 if (PGM_SHW_PFN(Exit, pVM))
3420 {
3421 int rc = PGM_SHW_PFN(Exit, pVM)(pVM);
3422 if (RT_FAILURE(rc))
3423 {
3424 AssertMsgFailed(("Exit failed for shadow mode %d: %Rrc\n", pVM->pgm.s.enmShadowMode, rc));
3425 return rc;
3426 }
3427 }
3428
3429 }
3430 else
3431 LogFlow(("PGMR3ChangeMode: Shadow mode remains: %s\n", PGMGetModeName(pVM->pgm.s.enmShadowMode)));
3432
3433 /* guest */
3434 if (PGM_GST_PFN(Exit, pVM))
3435 {
3436 int rc = PGM_GST_PFN(Exit, pVM)(pVM);
3437 if (RT_FAILURE(rc))
3438 {
3439 AssertMsgFailed(("Exit failed for guest mode %d: %Rrc\n", pVM->pgm.s.enmGuestMode, rc));
3440 return rc;
3441 }
3442 }
3443
3444 /*
3445 * Load new paging mode data.
3446 */
3447 pgmR3ModeDataSwitch(pVM, enmShadowMode, enmGuestMode);
3448
3449 /*
3450 * Enter new shadow mode (if changed).
3451 */
3452 if (enmShadowMode != pVM->pgm.s.enmShadowMode)
3453 {
3454 int rc;
3455 pVM->pgm.s.enmShadowMode = enmShadowMode;
3456 switch (enmShadowMode)
3457 {
3458 case PGMMODE_32_BIT:
3459 rc = PGM_SHW_NAME_32BIT(Enter)(pVM);
3460 break;
3461 case PGMMODE_PAE:
3462 case PGMMODE_PAE_NX:
3463 rc = PGM_SHW_NAME_PAE(Enter)(pVM);
3464 break;
3465 case PGMMODE_AMD64:
3466 case PGMMODE_AMD64_NX:
3467 rc = PGM_SHW_NAME_AMD64(Enter)(pVM);
3468 break;
3469 case PGMMODE_NESTED:
3470 rc = PGM_SHW_NAME_NESTED(Enter)(pVM);
3471 break;
3472 case PGMMODE_EPT:
3473 rc = PGM_SHW_NAME_EPT(Enter)(pVM);
3474 break;
3475 case PGMMODE_REAL:
3476 case PGMMODE_PROTECTED:
3477 default:
3478 AssertReleaseMsgFailed(("enmShadowMode=%d\n", enmShadowMode));
3479 return VERR_INTERNAL_ERROR;
3480 }
3481 if (RT_FAILURE(rc))
3482 {
3483 AssertReleaseMsgFailed(("Entering enmShadowMode=%d failed: %Rrc\n", enmShadowMode, rc));
3484 pVM->pgm.s.enmShadowMode = PGMMODE_INVALID;
3485 return rc;
3486 }
3487 }
3488
3489#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3490 /** @todo This is a bug!
3491 *
3492 * We must flush the PGM pool cache if the guest mode changes; we don't always
3493 * switch shadow paging mode (e.g. protected->32-bit) and shouldn't reuse
3494 * the shadow page tables.
3495 *
3496 * That only applies when switching between paging and non-paging modes.
3497 */
3498 /** @todo A20 setting */
3499 if ( pVM->pgm.s.CTX_SUFF(pPool)
3500 && !HWACCMIsNestedPagingActive(pVM)
3501 && PGMMODE_WITH_PAGING(pVM->pgm.s.enmGuestMode) != PGMMODE_WITH_PAGING(enmGuestMode))
3502 {
3503 Log(("PGMR3ChangeMode: changing guest paging mode -> flush pgm pool cache!\n"));
3504 pgmPoolFlushAll(pVM);
3505 }
3506#endif
3507
3508 /*
3509 * Enter the new guest and shadow+guest modes.
3510 */
3511 int rc = -1;
3512 int rc2 = -1;
3513 RTGCPHYS GCPhysCR3 = NIL_RTGCPHYS;
3514 pVM->pgm.s.enmGuestMode = enmGuestMode;
3515 switch (enmGuestMode)
3516 {
3517 case PGMMODE_REAL:
3518 rc = PGM_GST_NAME_REAL(Enter)(pVM, NIL_RTGCPHYS);
3519 switch (pVM->pgm.s.enmShadowMode)
3520 {
3521 case PGMMODE_32_BIT:
3522 rc2 = PGM_BTH_NAME_32BIT_REAL(Enter)(pVM, NIL_RTGCPHYS);
3523 break;
3524 case PGMMODE_PAE:
3525 case PGMMODE_PAE_NX:
3526 rc2 = PGM_BTH_NAME_PAE_REAL(Enter)(pVM, NIL_RTGCPHYS);
3527 break;
3528 case PGMMODE_NESTED:
3529 rc2 = PGM_BTH_NAME_NESTED_REAL(Enter)(pVM, NIL_RTGCPHYS);
3530 break;
3531 case PGMMODE_EPT:
3532 rc2 = PGM_BTH_NAME_EPT_REAL(Enter)(pVM, NIL_RTGCPHYS);
3533 break;
3534 case PGMMODE_AMD64:
3535 case PGMMODE_AMD64_NX:
3536 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3537 default: AssertFailed(); break;
3538 }
3539 break;
3540
3541 case PGMMODE_PROTECTED:
3542 rc = PGM_GST_NAME_PROT(Enter)(pVM, NIL_RTGCPHYS);
3543 switch (pVM->pgm.s.enmShadowMode)
3544 {
3545 case PGMMODE_32_BIT:
3546 rc2 = PGM_BTH_NAME_32BIT_PROT(Enter)(pVM, NIL_RTGCPHYS);
3547 break;
3548 case PGMMODE_PAE:
3549 case PGMMODE_PAE_NX:
3550 rc2 = PGM_BTH_NAME_PAE_PROT(Enter)(pVM, NIL_RTGCPHYS);
3551 break;
3552 case PGMMODE_NESTED:
3553 rc2 = PGM_BTH_NAME_NESTED_PROT(Enter)(pVM, NIL_RTGCPHYS);
3554 break;
3555 case PGMMODE_EPT:
3556 rc2 = PGM_BTH_NAME_EPT_PROT(Enter)(pVM, NIL_RTGCPHYS);
3557 break;
3558 case PGMMODE_AMD64:
3559 case PGMMODE_AMD64_NX:
3560 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3561 default: AssertFailed(); break;
3562 }
3563 break;
3564
3565 case PGMMODE_32_BIT:
3566 GCPhysCR3 = CPUMGetGuestCR3(pVM) & X86_CR3_PAGE_MASK;
3567 rc = PGM_GST_NAME_32BIT(Enter)(pVM, GCPhysCR3);
3568 switch (pVM->pgm.s.enmShadowMode)
3569 {
3570 case PGMMODE_32_BIT:
3571 rc2 = PGM_BTH_NAME_32BIT_32BIT(Enter)(pVM, GCPhysCR3);
3572 break;
3573 case PGMMODE_PAE:
3574 case PGMMODE_PAE_NX:
3575 rc2 = PGM_BTH_NAME_PAE_32BIT(Enter)(pVM, GCPhysCR3);
3576 break;
3577 case PGMMODE_NESTED:
3578 rc2 = PGM_BTH_NAME_NESTED_32BIT(Enter)(pVM, GCPhysCR3);
3579 break;
3580 case PGMMODE_EPT:
3581 rc2 = PGM_BTH_NAME_EPT_32BIT(Enter)(pVM, GCPhysCR3);
3582 break;
3583 case PGMMODE_AMD64:
3584 case PGMMODE_AMD64_NX:
3585 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3586 default: AssertFailed(); break;
3587 }
3588 break;
3589
3590 case PGMMODE_PAE_NX:
3591 case PGMMODE_PAE:
3592 {
3593 uint32_t u32Dummy, u32Features;
3594
3595 CPUMGetGuestCpuId(pVM, 1, &u32Dummy, &u32Dummy, &u32Dummy, &u32Features);
3596 if (!(u32Features & X86_CPUID_FEATURE_EDX_PAE))
3597 {
3598 /* Pause first, then inform Main. */
3599 rc = VMR3SuspendNoSave(pVM);
3600 AssertRC(rc);
3601
3602 VMSetRuntimeError(pVM, true, "PAEmode",
3603 N_("The guest is trying to switch to the PAE mode which is currently disabled by default in VirtualBox. PAE support can be enabled using the VM settings (General/Advanced)"));
3604 /* we must return VINF_SUCCESS here otherwise the recompiler will assert */
3605 return VINF_SUCCESS;
3606 }
3607 GCPhysCR3 = CPUMGetGuestCR3(pVM) & X86_CR3_PAE_PAGE_MASK;
3608 rc = PGM_GST_NAME_PAE(Enter)(pVM, GCPhysCR3);
3609 switch (pVM->pgm.s.enmShadowMode)
3610 {
3611 case PGMMODE_PAE:
3612 case PGMMODE_PAE_NX:
3613 rc2 = PGM_BTH_NAME_PAE_PAE(Enter)(pVM, GCPhysCR3);
3614 break;
3615 case PGMMODE_NESTED:
3616 rc2 = PGM_BTH_NAME_NESTED_PAE(Enter)(pVM, GCPhysCR3);
3617 break;
3618 case PGMMODE_EPT:
3619 rc2 = PGM_BTH_NAME_EPT_PAE(Enter)(pVM, GCPhysCR3);
3620 break;
3621 case PGMMODE_32_BIT:
3622 case PGMMODE_AMD64:
3623 case PGMMODE_AMD64_NX:
3624 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3625 default: AssertFailed(); break;
3626 }
3627 break;
3628 }
3629
3630#ifdef VBOX_WITH_64_BITS_GUESTS
3631 case PGMMODE_AMD64_NX:
3632 case PGMMODE_AMD64:
3633 GCPhysCR3 = CPUMGetGuestCR3(pVM) & UINT64_C(0xfffffffffffff000); /** @todo define this mask! */
3634 rc = PGM_GST_NAME_AMD64(Enter)(pVM, GCPhysCR3);
3635 switch (pVM->pgm.s.enmShadowMode)
3636 {
3637 case PGMMODE_AMD64:
3638 case PGMMODE_AMD64_NX:
3639 rc2 = PGM_BTH_NAME_AMD64_AMD64(Enter)(pVM, GCPhysCR3);
3640 break;
3641 case PGMMODE_NESTED:
3642 rc2 = PGM_BTH_NAME_NESTED_AMD64(Enter)(pVM, GCPhysCR3);
3643 break;
3644 case PGMMODE_EPT:
3645 rc2 = PGM_BTH_NAME_EPT_AMD64(Enter)(pVM, GCPhysCR3);
3646 break;
3647 case PGMMODE_32_BIT:
3648 case PGMMODE_PAE:
3649 case PGMMODE_PAE_NX:
3650 AssertMsgFailed(("Should use AMD64 shadow mode!\n"));
3651 default: AssertFailed(); break;
3652 }
3653 break;
3654#endif
3655
3656 default:
3657 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
3658 rc = VERR_NOT_IMPLEMENTED;
3659 break;
3660 }
3661
3662    /* Combine the status codes. */
3663 AssertRC(rc);
3664 AssertRC(rc2);
3665 if (RT_SUCCESS(rc))
3666 {
3667 rc = rc2;
3668 if (RT_SUCCESS(rc)) /* no informational status codes. */
3669 rc = VINF_SUCCESS;
3670 }
3671
3672 /*
3673 * Notify SELM so it can update the TSSes with correct CR3s.
3674 */
3675 SELMR3PagingModeChanged(pVM);
3676
3677 /* Notify HWACCM as well. */
3678 HWACCMR3PagingModeChanged(pVM, pVM->pgm.s.enmShadowMode, pVM->pgm.s.enmGuestMode);
3679 return rc;
3680}
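
#if 0 /* Illustration only: a hypothetical helper, not part of the original sources. */
/* A minimal sketch of the status merging done at the end of the function
 * above: the guest-mode status (rc) takes precedence, the shadow status
 * (rc2) is only consulted when rc succeeded, and informational (strictly
 * positive) statuses are folded into VINF_SUCCESS. The AssertRC calls are
 * omitted and the function name is made up. */
static int pgmSketchMergeEnterStatus(int rc, int rc2)
{
    if (RT_FAILURE(rc))
        return rc;              /* a guest-mode failure wins */
    if (RT_FAILURE(rc2))
        return rc2;             /* then a shadow-mode failure */
    return VINF_SUCCESS;        /* drop informational status codes */
}
#endif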
3681
3682
3683/**
3684 * Dumps a PAE shadow page table.
3685 *
3686 * @returns VBox status code (VINF_SUCCESS).
3687 * @param pVM The VM handle.
3688 * @param pPT Pointer to the page table.
3689 * @param pPT Pointer to the page table.
3690 * @param u64Address The virtual address at which the page table starts.
3691 * @param fLongMode Set if this is a long mode table; clear if it's a legacy mode table.
3692 * @param cMaxDepth The maximum depth.
3692 * @param pHlp Pointer to the output functions.
3693 */
3694static int pgmR3DumpHierarchyHCPaePT(PVM pVM, PX86PTPAE pPT, uint64_t u64Address, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3695{
3696 for (unsigned i = 0; i < RT_ELEMENTS(pPT->a); i++)
3697 {
3698 X86PTEPAE Pte = pPT->a[i];
3699 if (Pte.n.u1Present)
3700 {
3701 pHlp->pfnPrintf(pHlp,
3702 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
3703 ? "%016llx 3 | P %c %c %c %c %c %s %s %s %s 4K %c%c%c %016llx\n"
3704 : "%08llx 2 | P %c %c %c %c %c %s %s %s %s 4K %c%c%c %016llx\n",
3705 u64Address + ((uint64_t)i << X86_PT_PAE_SHIFT),
3706 Pte.n.u1Write ? 'W' : 'R',
3707 Pte.n.u1User ? 'U' : 'S',
3708 Pte.n.u1Accessed ? 'A' : '-',
3709 Pte.n.u1Dirty ? 'D' : '-',
3710 Pte.n.u1Global ? 'G' : '-',
3711 Pte.n.u1WriteThru ? "WT" : "--",
3712 Pte.n.u1CacheDisable? "CD" : "--",
3713 Pte.n.u1PAT ? "AT" : "--",
3714 Pte.n.u1NoExecute ? "NX" : "--",
3715 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
3716 Pte.u & RT_BIT(10) ? '1' : '0',
3717 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED? 'v' : '-',
3718 Pte.u & X86_PTE_PAE_PG_MASK);
3719 }
3720 }
3721 return VINF_SUCCESS;
3722}
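
#if 0 /* Illustration only: a hypothetical helper, not part of the original sources. */
/* Worked example of the address computation in the loop above, assuming the
 * standard PAE constants (X86_PT_PAE_SHIFT == 12, 512 entries per table):
 * entry i maps the 4K page at u64Address + (i << 12), so one PAE page
 * table spans 2 MB. */
static uint64_t pgmSketchPaePteAddress(uint64_t u64TableAddress, unsigned iPte)
{
    /* e.g. iPte == 511 -> u64TableAddress + 0x1ff000 (the last 4K page of the 2 MB span). */
    return u64TableAddress + ((uint64_t)iPte << X86_PT_PAE_SHIFT);
}
#endif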
3723
3724
3725/**
3726 * Dumps a PAE shadow page directory table.
3727 *
3728 * @returns VBox status code (VINF_SUCCESS).
3729 * @param pVM The VM handle.
3730 * @param HCPhys The physical address of the page directory table.
3731 * @param u64Address The virtual address at which the page directory starts.
3732 * @param cr4 The CR4 value; only the PSE bit is currently used.
3733 * @param fLongMode Set if this is a long mode table; clear if it's a legacy mode table.
3734 * @param cMaxDepth The maximum depth.
3735 * @param pHlp Pointer to the output functions.
3736 */
3737static int pgmR3DumpHierarchyHCPaePD(PVM pVM, RTHCPHYS HCPhys, uint64_t u64Address, uint32_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3738{
3739 PX86PDPAE pPD = (PX86PDPAE)MMPagePhys2Page(pVM, HCPhys);
3740 if (!pPD)
3741 {
3742 pHlp->pfnPrintf(pHlp, "%0*llx error! Page directory at HCPhys=%RHp was not found in the page pool!\n",
3743 fLongMode ? 16 : 8, u64Address, HCPhys);
3744 return VERR_INVALID_PARAMETER;
3745 }
3746 const bool fBigPagesSupported = fLongMode || !!(cr4 & X86_CR4_PSE);
3747
3748 int rc = VINF_SUCCESS;
3749 for (unsigned i = 0; i < RT_ELEMENTS(pPD->a); i++)
3750 {
3751 X86PDEPAE Pde = pPD->a[i];
3752 if (Pde.n.u1Present)
3753 {
3754 if (fBigPagesSupported && Pde.b.u1Size)
3755 pHlp->pfnPrintf(pHlp,
3756 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
3757 ? "%016llx 2 | P %c %c %c %c %c %s %s %s %s 4M %c%c%c %016llx\n"
3758 : "%08llx 1 | P %c %c %c %c %c %s %s %s %s 4M %c%c%c %016llx\n",
3759 u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT),
3760 Pde.b.u1Write ? 'W' : 'R',
3761 Pde.b.u1User ? 'U' : 'S',
3762 Pde.b.u1Accessed ? 'A' : '-',
3763 Pde.b.u1Dirty ? 'D' : '-',
3764 Pde.b.u1Global ? 'G' : '-',
3765 Pde.b.u1WriteThru ? "WT" : "--",
3766 Pde.b.u1CacheDisable? "CD" : "--",
3767 Pde.b.u1PAT ? "AT" : "--",
3768 Pde.b.u1NoExecute ? "NX" : "--",
3769 Pde.u & RT_BIT_64(9) ? '1' : '0',
3770 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
3771 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
3772 Pde.u & X86_PDE_PAE_PG_MASK);
3773 else
3774 {
3775 pHlp->pfnPrintf(pHlp,
3776 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
3777 ? "%016llx 2 | P %c %c %c %c %c %s %s .. %s 4K %c%c%c %016llx\n"
3778 : "%08llx 1 | P %c %c %c %c %c %s %s .. %s 4K %c%c%c %016llx\n",
3779 u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT),
3780 Pde.n.u1Write ? 'W' : 'R',
3781 Pde.n.u1User ? 'U' : 'S',
3782 Pde.n.u1Accessed ? 'A' : '-',
3783 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
3784 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
3785 Pde.n.u1WriteThru ? "WT" : "--",
3786 Pde.n.u1CacheDisable? "CD" : "--",
3787 Pde.n.u1NoExecute ? "NX" : "--",
3788 Pde.u & RT_BIT_64(9) ? '1' : '0',
3789 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
3790 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
3791 Pde.u & X86_PDE_PAE_PG_MASK);
3792 if (cMaxDepth >= 1)
3793 {
3794 /** @todo what about using the page pool for mapping PTs? */
3795 uint64_t u64AddressPT = u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT);
3796 RTHCPHYS HCPhysPT = Pde.u & X86_PDE_PAE_PG_MASK;
3797 PX86PTPAE pPT = NULL;
3798 if (!(Pde.u & PGM_PDFLAGS_MAPPING))
3799 pPT = (PX86PTPAE)MMPagePhys2Page(pVM, HCPhysPT);
3800 else
3801 {
3802 for (PPGMMAPPING pMap = pVM->pgm.s.pMappingsR3; pMap; pMap = pMap->pNextR3)
3803 {
3804 uint64_t off = u64AddressPT - pMap->GCPtr;
3805 if (off < pMap->cb)
3806 {
3807 const int iPDE = (uint32_t)(off >> X86_PD_SHIFT);
3808 const int iSub = (int)((off >> X86_PD_PAE_SHIFT) & 1); /* MSC is a pain sometimes */
3809 if ((iSub ? pMap->aPTs[iPDE].HCPhysPaePT1 : pMap->aPTs[iPDE].HCPhysPaePT0) != HCPhysPT)
3810 pHlp->pfnPrintf(pHlp, "%0*llx error! Mapping error! PT %d has HCPhysPT=%RHp, not the %RHp recorded in the PD.\n",
3811 fLongMode ? 16 : 8, u64AddressPT, iPDE,
3812 iSub ? pMap->aPTs[iPDE].HCPhysPaePT1 : pMap->aPTs[iPDE].HCPhysPaePT0, HCPhysPT);
3813 pPT = &pMap->aPTs[iPDE].paPaePTsR3[iSub];
3814 }
3815 }
3816 }
3817 int rc2 = VERR_INVALID_PARAMETER;
3818 if (pPT)
3819 rc2 = pgmR3DumpHierarchyHCPaePT(pVM, pPT, u64AddressPT, fLongMode, cMaxDepth - 1, pHlp);
3820 else
3821 pHlp->pfnPrintf(pHlp, "%0*llx error! Page table at HCPhys=%RHp was not found in the page pool!\n",
3822 fLongMode ? 16 : 8, u64AddressPT, HCPhysPT);
3823 if (rc2 < rc && RT_SUCCESS(rc))
3824 rc = rc2;
3825 }
3826 }
3827 }
3828 }
3829 return rc;
3830}
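
#if 0 /* Illustration only: a hypothetical helper, not part of the original sources. */
/* Sketch of the mapping lookup in the function above, assuming the usual
 * constants (X86_PD_SHIFT == 22, X86_PD_PAE_SHIFT == 21): each aPTs[iPDE]
 * record of a PGMMAPPING covers 4 MB (one legacy PDE) and carries two PAE
 * page tables of 2 MB each, so iSub selects the low (HCPhysPaePT0) or the
 * high (HCPhysPaePT1) half. */
static void pgmSketchMappingPtIndexes(uint64_t off, int *piPDE, int *piSub)
{
    *piPDE = (int)(off >> X86_PD_SHIFT);            /* which 4 MB slot of the mapping */
    *piSub = (int)((off >> X86_PD_PAE_SHIFT) & 1);  /* which 2 MB half -> PT0 or PT1 */
}
#endif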
3831
3832
3833/**
3834 * Dumps a PAE shadow page directory pointer table.
3835 *
3836 * @returns VBox status code (VINF_SUCCESS).
3837 * @param pVM The VM handle.
3838 * @param HCPhys The physical address of the page directory pointer table.
3839 * @param u64Address The virtual address at which the page directory pointer table starts.
3840 * @param cr4 The CR4 value; only the PSE bit is currently used.
3841 * @param fLongMode Set if this is a long mode table; clear if it's a legacy mode table.
3842 * @param cMaxDepth The maximum depth.
3843 * @param pHlp Pointer to the output functions.
3844 */
3845static int pgmR3DumpHierarchyHCPaePDPT(PVM pVM, RTHCPHYS HCPhys, uint64_t u64Address, uint32_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3846{
3847 PX86PDPT pPDPT = (PX86PDPT)MMPagePhys2Page(pVM, HCPhys);
3848 if (!pPDPT)
3849 {
3850 pHlp->pfnPrintf(pHlp, "%0*llx error! Page directory pointer table at HCPhys=%RHp was not found in the page pool!\n",
3851 fLongMode ? 16 : 8, u64Address, HCPhys);
3852 return VERR_INVALID_PARAMETER;
3853 }
3854
3855 int rc = VINF_SUCCESS;
3856 const unsigned c = fLongMode ? RT_ELEMENTS(pPDPT->a) : X86_PG_PAE_PDPE_ENTRIES;
3857 for (unsigned i = 0; i < c; i++)
3858 {
3859 X86PDPE Pdpe = pPDPT->a[i];
3860 if (Pdpe.n.u1Present)
3861 {
3862 if (fLongMode)
3863 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a p ? */
3864 "%016llx 1 | P %c %c %c %c %c %s %s %s %s .. %c%c%c %016llx\n",
3865 u64Address + ((uint64_t)i << X86_PDPT_SHIFT),
3866 Pdpe.lm.u1Write ? 'W' : 'R',
3867 Pdpe.lm.u1User ? 'U' : 'S',
3868 Pdpe.lm.u1Accessed ? 'A' : '-',
3869 Pdpe.lm.u3Reserved & 1? '?' : '.', /* ignored */
3870 Pdpe.lm.u3Reserved & 4? '!' : '.', /* mbz */
3871 Pdpe.lm.u1WriteThru ? "WT" : "--",
3872 Pdpe.lm.u1CacheDisable? "CD" : "--",
3873 Pdpe.lm.u3Reserved & 2? "!" : "..",/* mbz */
3874 Pdpe.lm.u1NoExecute ? "NX" : "--",
3875 Pdpe.u & RT_BIT(9) ? '1' : '0',
3876 Pdpe.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
3877 Pdpe.u & RT_BIT(11) ? '1' : '0',
3878 Pdpe.u & X86_PDPE_PG_MASK);
3879 else
3880 pHlp->pfnPrintf(pHlp, /*P G WT CD AT NX 4M a p ? */
3881 "%08x 0 | P %c %s %s %s %s .. %c%c%c %016llx\n",
3882 i << X86_PDPT_SHIFT,
3883 Pdpe.n.u4Reserved & 1? '!' : '.', /* mbz */
3884 Pdpe.n.u4Reserved & 4? '!' : '.', /* mbz */
3885 Pdpe.n.u1WriteThru ? "WT" : "--",
3886 Pdpe.n.u1CacheDisable? "CD" : "--",
3887 Pdpe.n.u4Reserved & 2? "!" : "..",/* mbz */
3888 Pdpe.u & RT_BIT(9) ? '1' : '0',
3889 Pdpe.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
3890 Pdpe.u & RT_BIT(11) ? '1' : '0',
3891 Pdpe.u & X86_PDPE_PG_MASK);
3892 if (cMaxDepth >= 1)
3893 {
3894 int rc2 = pgmR3DumpHierarchyHCPaePD(pVM, Pdpe.u & X86_PDPE_PG_MASK, u64Address + ((uint64_t)i << X86_PDPT_SHIFT),
3895 cr4, fLongMode, cMaxDepth - 1, pHlp);
3896 if (rc2 < rc && RT_SUCCESS(rc))
3897 rc = rc2;
3898 }
3899 }
3900 }
3901 return rc;
3902}
3903
3904
3905/**
3906 * Dumps a long mode shadow page map level 4 table (PML4).
3907 *
3908 * @returns VBox status code (VINF_SUCCESS).
3909 * @param pVM The VM handle.
3910 * @param HCPhys The physical address of the table.
3911 * @param cr4 The CR4 value; only the PSE bit is currently used.
3912 * @param cMaxDepth The maximum depth.
3913 * @param pHlp Pointer to the output functions.
3914 */
3915static int pgmR3DumpHierarchyHcPaePML4(PVM pVM, RTHCPHYS HCPhys, uint32_t cr4, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3916{
3917 PX86PML4 pPML4 = (PX86PML4)MMPagePhys2Page(pVM, HCPhys);
3918 if (!pPML4)
3919 {
3920 pHlp->pfnPrintf(pHlp, "Page map level 4 at HCPhys=%RHp was not found in the page pool!\n", HCPhys);
3921 return VERR_INVALID_PARAMETER;
3922 }
3923
3924 int rc = VINF_SUCCESS;
3925 for (unsigned i = 0; i < RT_ELEMENTS(pPML4->a); i++)
3926 {
3927 X86PML4E Pml4e = pPML4->a[i];
3928 if (Pml4e.n.u1Present)
3929 {
3930 uint64_t u64Address = ((uint64_t)i << X86_PML4_SHIFT) | (((uint64_t)i >> (X86_PML4_SHIFT - X86_PDPT_SHIFT - 1)) * 0xffff000000000000ULL);
3931 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a p ? */
3932 "%016llx 0 | P %c %c %c %c %c %s %s %s %s .. %c%c%c %016llx\n",
3933 u64Address,
3934 Pml4e.n.u1Write ? 'W' : 'R',
3935 Pml4e.n.u1User ? 'U' : 'S',
3936 Pml4e.n.u1Accessed ? 'A' : '-',
3937 Pml4e.n.u3Reserved & 1? '?' : '.', /* ignored */
3938 Pml4e.n.u3Reserved & 4? '!' : '.', /* mbz */
3939 Pml4e.n.u1WriteThru ? "WT" : "--",
3940 Pml4e.n.u1CacheDisable? "CD" : "--",
3941 Pml4e.n.u3Reserved & 2? "!" : "..",/* mbz */
3942 Pml4e.n.u1NoExecute ? "NX" : "--",
3943 Pml4e.u & RT_BIT(9) ? '1' : '0',
3944 Pml4e.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
3945 Pml4e.u & RT_BIT(11) ? '1' : '0',
3946 Pml4e.u & X86_PML4E_PG_MASK);
3947
3948 if (cMaxDepth >= 1)
3949 {
3950 int rc2 = pgmR3DumpHierarchyHCPaePDPT(pVM, Pml4e.u & X86_PML4E_PG_MASK, u64Address, cr4, true, cMaxDepth - 1, pHlp);
3951 if (rc2 < rc && RT_SUCCESS(rc))
3952 rc = rc2;
3953 }
3954 }
3955 }
3956 return rc;
3957}
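
#if 0 /* Illustration only: a hypothetical helper, not part of the original sources. */
/* Worked example of the canonical-address computation in the loop above:
 * with X86_PML4_SHIFT == 39 and X86_PDPT_SHIFT == 30, the factor
 * (i >> (39 - 30 - 1)) == (i >> 8) is 0 for entries 0..255 and 1 for
 * entries 256..511, so the upper half of the PML4 receives the
 * 0xffff000000000000 sign extension. */
static uint64_t pgmSketchPml4EntryAddress(unsigned i)
{
    uint64_t u64Address = (uint64_t)i << X86_PML4_SHIFT;
    if (i >= 256)                               /* same effect as the (i >> 8) multiply */
        u64Address |= UINT64_C(0xffff000000000000);
    return u64Address;                          /* e.g. i == 256 -> 0xffff800000000000 */
}
#endif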
3958
3959
3960/**
3961 * Dumps a 32-bit shadow page table.
3962 *
3963 * @returns VBox status code (VINF_SUCCESS).
3964 * @param pVM The VM handle.
3965 * @param pPT Pointer to the page table.
3966 * @param u32Address The virtual address this table starts at.
3967 * @param pHlp Pointer to the output functions.
3968 */
3969int pgmR3DumpHierarchyHC32BitPT(PVM pVM, PX86PT pPT, uint32_t u32Address, PCDBGFINFOHLP pHlp)
3970{
3971 for (unsigned i = 0; i < RT_ELEMENTS(pPT->a); i++)
3972 {
3973 X86PTE Pte = pPT->a[i];
3974 if (Pte.n.u1Present)
3975 {
3976 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
3977 "%08x 1 | P %c %c %c %c %c %s %s %s .. 4K %c%c%c %08x\n",
3978 u32Address + (i << X86_PT_SHIFT),
3979 Pte.n.u1Write ? 'W' : 'R',
3980 Pte.n.u1User ? 'U' : 'S',
3981 Pte.n.u1Accessed ? 'A' : '-',
3982 Pte.n.u1Dirty ? 'D' : '-',
3983 Pte.n.u1Global ? 'G' : '-',
3984 Pte.n.u1WriteThru ? "WT" : "--",
3985 Pte.n.u1CacheDisable? "CD" : "--",
3986 Pte.n.u1PAT ? "AT" : "--",
3987 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
3988 Pte.u & RT_BIT(10) ? '1' : '0',
3989 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED ? 'v' : '-',
3990 Pte.u & X86_PDE_PG_MASK);
3991 }
3992 }
3993 return VINF_SUCCESS;
3994}
3995
3996
3997/**
3998 * Dumps a 32-bit shadow page directory and page tables.
3999 *
4000 * @returns VBox status code (VINF_SUCCESS).
4001 * @param pVM The VM handle.
4002 * @param cr3 The root of the hierarchy.
4003 * @param cr4 The CR4 value; only the PSE bit is currently used.
4004 * @param cMaxDepth How deep into the hierarchy the dumper should go.
4005 * @param pHlp Pointer to the output functions.
4006 */
4007int pgmR3DumpHierarchyHC32BitPD(PVM pVM, uint32_t cr3, uint32_t cr4, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
4008{
4009 PX86PD pPD = (PX86PD)MMPagePhys2Page(pVM, cr3 & X86_CR3_PAGE_MASK);
4010 if (!pPD)
4011 {
4012 pHlp->pfnPrintf(pHlp, "Page directory at %#x was not found in the page pool!\n", cr3 & X86_CR3_PAGE_MASK);
4013 return VERR_INVALID_PARAMETER;
4014 }
4015
4016 int rc = VINF_SUCCESS;
4017 for (unsigned i = 0; i < RT_ELEMENTS(pPD->a); i++)
4018 {
4019 X86PDE Pde = pPD->a[i];
4020 if (Pde.n.u1Present)
4021 {
4022 const uint32_t u32Address = i << X86_PD_SHIFT;
4023 if ((cr4 & X86_CR4_PSE) && Pde.b.u1Size)
4024 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
4025 "%08x 0 | P %c %c %c %c %c %s %s %s .. 4M %c%c%c %08x\n",
4026 u32Address,
4027 Pde.b.u1Write ? 'W' : 'R',
4028 Pde.b.u1User ? 'U' : 'S',
4029 Pde.b.u1Accessed ? 'A' : '-',
4030 Pde.b.u1Dirty ? 'D' : '-',
4031 Pde.b.u1Global ? 'G' : '-',
4032 Pde.b.u1WriteThru ? "WT" : "--",
4033 Pde.b.u1CacheDisable? "CD" : "--",
4034 Pde.b.u1PAT ? "AT" : "--",
4035 Pde.u & RT_BIT_64(9) ? '1' : '0',
4036 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
4037 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
4038 Pde.u & X86_PDE4M_PG_MASK);
4039 else
4040 {
4041 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
4042 "%08x 0 | P %c %c %c %c %c %s %s .. .. 4K %c%c%c %08x\n",
4043 u32Address,
4044 Pde.n.u1Write ? 'W' : 'R',
4045 Pde.n.u1User ? 'U' : 'S',
4046 Pde.n.u1Accessed ? 'A' : '-',
4047 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
4048 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
4049 Pde.n.u1WriteThru ? "WT" : "--",
4050 Pde.n.u1CacheDisable? "CD" : "--",
4051 Pde.u & RT_BIT_64(9) ? '1' : '0',
4052 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
4053 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
4054 Pde.u & X86_PDE_PG_MASK);
4055 if (cMaxDepth >= 1)
4056 {
4057 /** @todo what about using the page pool for mapping PTs? */
4058 RTHCPHYS HCPhys = Pde.u & X86_PDE_PG_MASK;
4059 PX86PT pPT = NULL;
4060 if (!(Pde.u & PGM_PDFLAGS_MAPPING))
4061 pPT = (PX86PT)MMPagePhys2Page(pVM, HCPhys);
4062 else
4063 {
4064 for (PPGMMAPPING pMap = pVM->pgm.s.pMappingsR3; pMap; pMap = pMap->pNextR3)
4065 if (u32Address - pMap->GCPtr < pMap->cb)
4066 {
4067 int iPDE = (u32Address - pMap->GCPtr) >> X86_PD_SHIFT;
4068 if (pMap->aPTs[iPDE].HCPhysPT != HCPhys)
4069 pHlp->pfnPrintf(pHlp, "%08x error! Mapping error! PT %d has HCPhysPT=%RHp, not the %RHp recorded in the PD.\n",
4070 u32Address, iPDE, pMap->aPTs[iPDE].HCPhysPT, HCPhys);
4071 pPT = pMap->aPTs[iPDE].pPTR3;
4072 }
4073 }
4074 int rc2 = VERR_INVALID_PARAMETER;
4075 if (pPT)
4076 rc2 = pgmR3DumpHierarchyHC32BitPT(pVM, pPT, u32Address, pHlp);
4077 else
4078 pHlp->pfnPrintf(pHlp, "%08x error! Page table at %#x was not found in the page pool!\n", u32Address, HCPhys);
4079 if (rc2 < rc && RT_SUCCESS(rc))
4080 rc = rc2;
4081 }
4082 }
4083 }
4084 }
4085
4086 return rc;
4087}
4088
4089
4090/**
4091 * Dumps a 32-bit guest page table.
4092 *
4093 * @returns VBox status code (VINF_SUCCESS).
4094 * @param pVM The VM handle.
4095 * @param pPT Pointer to the page table.
4096 * @param u32Address The virtual address this table starts at.
4097 * @param PhysSearch Address to search for.
4098 */
4099int pgmR3DumpHierarchyGC32BitPT(PVM pVM, PX86PT pPT, uint32_t u32Address, RTGCPHYS PhysSearch)
4100{
4101 for (unsigned i = 0; i < RT_ELEMENTS(pPT->a); i++)
4102 {
4103 X86PTE Pte = pPT->a[i];
4104 if (Pte.n.u1Present)
4105 {
4106 Log(( /*P R S A D G WT CD AT NX 4M a m d */
4107 "%08x 1 | P %c %c %c %c %c %s %s %s .. 4K %c%c%c %08x\n",
4108 u32Address + (i << X86_PT_SHIFT),
4109 Pte.n.u1Write ? 'W' : 'R',
4110 Pte.n.u1User ? 'U' : 'S',
4111 Pte.n.u1Accessed ? 'A' : '-',
4112 Pte.n.u1Dirty ? 'D' : '-',
4113 Pte.n.u1Global ? 'G' : '-',
4114 Pte.n.u1WriteThru ? "WT" : "--",
4115 Pte.n.u1CacheDisable? "CD" : "--",
4116 Pte.n.u1PAT ? "AT" : "--",
4117 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
4118 Pte.u & RT_BIT(10) ? '1' : '0',
4119 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED ? 'v' : '-',
4120 Pte.u & X86_PDE_PG_MASK));
4121
4122 if ((Pte.u & X86_PDE_PG_MASK) == PhysSearch)
4123 {
4124 uint64_t fPageShw = 0;
4125 RTHCPHYS pPhysHC = 0;
4126
4127 PGMShwGetPage(pVM, (RTGCPTR)(u32Address + (i << X86_PT_SHIFT)), &fPageShw, &pPhysHC);
4128 Log(("Found %RGp at %RGv -> flags=%llx\n", PhysSearch, (RTGCPTR)(u32Address + (i << X86_PT_SHIFT)), fPageShw));
4129 }
4130 }
4131 }
4132 return VINF_SUCCESS;
4133}
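
#if 0 /* Illustration only: a hypothetical helper, not part of the original sources. */
/* Hedged sketch of the PGMShwGetPage query used above: given a guest
 * virtual address it returns the shadow page flags and the backing host
 * physical address (the signature is taken from the call in this file). */
static void pgmSketchQueryShadowPage(PVM pVM, RTGCPTR GCPtr)
{
    uint64_t fPageShw = 0;
    RTHCPHYS HCPhys   = 0;
    int rc = PGMShwGetPage(pVM, GCPtr, &fPageShw, &HCPhys);
    if (RT_SUCCESS(rc))
        Log(("GCPtr=%RGv -> HCPhys=%RHp fPageShw=%#llx\n", GCPtr, HCPhys, fPageShw));
}
#endif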
4134
4135
4136/**
4137 * Dumps a 32-bit guest page directory and page tables.
4138 *
4139 * @returns VBox status code (VINF_SUCCESS).
4140 * @param pVM The VM handle.
4141 * @param cr3 The root of the hierarchy.
4142 * @param cr4 The CR4 value; only the PSE bit is currently used.
4143 * @param PhysSearch Address to search for.
4144 */
4145VMMR3DECL(int) PGMR3DumpHierarchyGC(PVM pVM, uint64_t cr3, uint64_t cr4, RTGCPHYS PhysSearch)
4146{
4147 bool fLongMode = false;
4148 const unsigned cch = fLongMode ? 16 : 8; NOREF(cch);
4149 PX86PD pPD = 0;
4150
4151 int rc = PGM_GCPHYS_2_PTR(pVM, cr3 & X86_CR3_PAGE_MASK, &pPD);
4152 if (RT_FAILURE(rc) || !pPD)
4153 {
4154 Log(("Page directory at %#x was not found in the page pool!\n", cr3 & X86_CR3_PAGE_MASK));
4155 return VERR_INVALID_PARAMETER;
4156 }
4157
4158 Log(("cr3=%08x cr4=%08x%s\n"
4159 "%-*s P - Present\n"
4160 "%-*s | R/W - Read (0) / Write (1)\n"
4161 "%-*s | | U/S - User (1) / Supervisor (0)\n"
4162 "%-*s | | | A - Accessed\n"
4163 "%-*s | | | | D - Dirty\n"
4164 "%-*s | | | | | G - Global\n"
4165 "%-*s | | | | | | WT - Write thru\n"
4166 "%-*s | | | | | | | CD - Cache disable\n"
4167 "%-*s | | | | | | | | AT - Attribute table (PAT)\n"
4168 "%-*s | | | | | | | | | NX - No execute (K8)\n"
4169 "%-*s | | | | | | | | | | 4K/4M/2M - Page size.\n"
4170 "%-*s | | | | | | | | | | | AVL - a=allocated; m=mapping; d=track dirty;\n"
4171 "%-*s | | | | | | | | | | | | p=permanent; v=validated;\n"
4172 "%-*s Level | | | | | | | | | | | | Page\n"
4173 /* xxxx n **** P R S A D G WT CD AT NX 4M AVL xxxxxxxxxxxxx
4174 - W U - - - -- -- -- -- -- 010 */
4175 , cr3, cr4, fLongMode ? " Long Mode" : "",
4176 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "",
4177 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "Address"));
4178
4179 for (unsigned i = 0; i < RT_ELEMENTS(pPD->a); i++)
4180 {
4181 X86PDE Pde = pPD->a[i];
4182 if (Pde.n.u1Present)
4183 {
4184 const uint32_t u32Address = i << X86_PD_SHIFT;
4185
4186 if ((cr4 & X86_CR4_PSE) && Pde.b.u1Size)
4187 Log(( /*P R S A D G WT CD AT NX 4M a m d */
4188 "%08x 0 | P %c %c %c %c %c %s %s %s .. 4M %c%c%c %08x\n",
4189 u32Address,
4190 Pde.b.u1Write ? 'W' : 'R',
4191 Pde.b.u1User ? 'U' : 'S',
4192 Pde.b.u1Accessed ? 'A' : '-',
4193 Pde.b.u1Dirty ? 'D' : '-',
4194 Pde.b.u1Global ? 'G' : '-',
4195 Pde.b.u1WriteThru ? "WT" : "--",
4196 Pde.b.u1CacheDisable? "CD" : "--",
4197 Pde.b.u1PAT ? "AT" : "--",
4198 Pde.u & RT_BIT(9) ? '1' : '0',
4199 Pde.u & RT_BIT(10) ? '1' : '0',
4200 Pde.u & RT_BIT(11) ? '1' : '0',
4201 pgmGstGet4MBPhysPage(&pVM->pgm.s, Pde)));
4202 /** @todo PhysSearch */
4203 else
4204 {
4205 Log(( /*P R S A D G WT CD AT NX 4M a m d */
4206 "%08x 0 | P %c %c %c %c %c %s %s .. .. 4K %c%c%c %08x\n",
4207 u32Address,
4208 Pde.n.u1Write ? 'W' : 'R',
4209 Pde.n.u1User ? 'U' : 'S',
4210 Pde.n.u1Accessed ? 'A' : '-',
4211 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
4212 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
4213 Pde.n.u1WriteThru ? "WT" : "--",
4214 Pde.n.u1CacheDisable? "CD" : "--",
4215 Pde.u & RT_BIT(9) ? '1' : '0',
4216 Pde.u & RT_BIT(10) ? '1' : '0',
4217 Pde.u & RT_BIT(11) ? '1' : '0',
4218 Pde.u & X86_PDE_PG_MASK));
4219 ////if (cMaxDepth >= 1)
4220 {
4221 /** @todo what about using the page pool for mapping PTs? */
4222 RTGCPHYS GCPhys = Pde.u & X86_PDE_PG_MASK;
4223 PX86PT pPT = NULL;
4224
4225 rc = PGM_GCPHYS_2_PTR(pVM, GCPhys, &pPT);
4226
4227 int rc2 = VERR_INVALID_PARAMETER;
4228 if (pPT)
4229 rc2 = pgmR3DumpHierarchyGC32BitPT(pVM, pPT, u32Address, PhysSearch);
4230 else
4231 Log(("%08x error! Page table at %#x was not found in the page pool!\n", u32Address, GCPhys));
4232 if (rc2 < rc && RT_SUCCESS(rc))
4233 rc = rc2;
4234 }
4235 }
4236 }
4237 }
4238
4239 return rc;
4240}
4241
4242
4243/**
4244 * Dumps a page table hierarchy, using only physical addresses and the cr4/lm flags.
4245 *
4246 * @returns VBox status code (VINF_SUCCESS).
4247 * @param pVM The VM handle.
4248 * @param cr3 The root of the hierarchy.
4249 * @param cr4 The CR4 value; only the PAE and PSE bits are currently used.
4250 * @param fLongMode Set if long mode; clear if not.
4251 * @param cMaxDepth Number of levels to dump.
4252 * @param pHlp Pointer to the output functions.
4253 */
4254VMMR3DECL(int) PGMR3DumpHierarchyHC(PVM pVM, uint64_t cr3, uint64_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
4255{
4256 if (!pHlp)
4257 pHlp = DBGFR3InfoLogHlp();
4258 if (!cMaxDepth)
4259 return VINF_SUCCESS;
4260 const unsigned cch = fLongMode ? 16 : 8;
4261 pHlp->pfnPrintf(pHlp,
4262 "cr3=%08x cr4=%08x%s\n"
4263 "%-*s P - Present\n"
4264 "%-*s | R/W - Read (0) / Write (1)\n"
4265 "%-*s | | U/S - User (1) / Supervisor (0)\n"
4266 "%-*s | | | A - Accessed\n"
4267 "%-*s | | | | D - Dirty\n"
4268 "%-*s | | | | | G - Global\n"
4269 "%-*s | | | | | | WT - Write thru\n"
4270 "%-*s | | | | | | | CD - Cache disable\n"
4271 "%-*s | | | | | | | | AT - Attribute table (PAT)\n"
4272 "%-*s | | | | | | | | | NX - No execute (K8)\n"
4273 "%-*s | | | | | | | | | | 4K/4M/2M - Page size.\n"
4274 "%-*s | | | | | | | | | | | AVL - a=allocated; m=mapping; d=track dirty;\n"
4275 "%-*s | | | | | | | | | | | | p=permanent; v=validated;\n"
4276 "%-*s Level | | | | | | | | | | | | Page\n"
4277 /* xxxx n **** P R S A D G WT CD AT NX 4M AVL xxxxxxxxxxxxx
4278 - W U - - - -- -- -- -- -- 010 */
4279 , cr3, cr4, fLongMode ? " Long Mode" : "",
4280 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "",
4281 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "Address");
4282 if (cr4 & X86_CR4_PAE)
4283 {
4284 if (fLongMode)
4285 return pgmR3DumpHierarchyHcPaePML4(pVM, cr3 & X86_CR3_PAGE_MASK, cr4, cMaxDepth, pHlp);
4286 return pgmR3DumpHierarchyHCPaePDPT(pVM, cr3 & X86_CR3_PAE_PAGE_MASK, 0, cr4, false, cMaxDepth, pHlp);
4287 }
4288 return pgmR3DumpHierarchyHC32BitPD(pVM, cr3 & X86_CR3_PAGE_MASK, cr4, cMaxDepth, pHlp);
4289}
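
#if 0 /* Illustration only: a hypothetical call site, not part of the original sources. */
/* Hedged usage sketch for the two dumpers above. HCPhysShadowCR3 stands in
 * for the shadow root, which a real caller would obtain from the PGM state;
 * everything else uses APIs visible in this file. */
static void pgmSketchDumpHierarchies(PVM pVM, RTHCPHYS HCPhysShadowCR3, RTGCPHYS GCPhysSearch)
{
    /* Guest tables: log every present 32-bit entry and report where
       GCPhysSearch is mapped. */
    PGMR3DumpHierarchyGC(pVM, CPUMGetGuestCR3(pVM), CPUMGetGuestCR4(pVM), GCPhysSearch);

    /* Shadow tables: dump up to 3 levels to the default log helper. */
    PGMR3DumpHierarchyHC(pVM, HCPhysShadowCR3, CPUMGetGuestCR4(pVM),
                         false /*fLongMode*/, 3 /*cMaxDepth*/, NULL /*pHlp*/);
}
#endif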
4290
4291#ifdef VBOX_WITH_DEBUGGER
4292
4293/**
4294 * The '.pgmram' command.
4295 *
4296 * @returns VBox status.
4297 * @param pCmd Pointer to the command descriptor (as registered).
4298 * @param pCmdHlp Pointer to command helper functions.
4299 * @param pVM Pointer to the current VM (if any).
4300 * @param paArgs Pointer to (readonly) array of arguments.
4301 * @param cArgs Number of arguments in the array.
4302 */
4303static DECLCALLBACK(int) pgmR3CmdRam(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4304{
4305 /*
4306 * Validate input.
4307 */
4308 if (!pVM)
4309 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4310 if (!pVM->pgm.s.pRamRangesRC)
4311 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Sorry, no RAM is registered.\n");
4312
4313 /*
4314 * Dump the ranges.
4315 */
4316 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "From - To (incl) pvHC\n");
4317 PPGMRAMRANGE pRam;
4318 for (pRam = pVM->pgm.s.pRamRangesR3; pRam; pRam = pRam->pNextR3)
4319 {
4320 rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL,
4321 "%RGp - %RGp %p\n",
4322 pRam->GCPhys, pRam->GCPhysLast, pRam->pvR3);
4323 if (RT_FAILURE(rc))
4324 return rc;
4325 }
4326
4327 return VINF_SUCCESS;
4328}
4329
4330
4331/**
4332 * The '.pgmmap' command.
4333 *
4334 * @returns VBox status.
4335 * @param pCmd Pointer to the command descriptor (as registered).
4336 * @param pCmdHlp Pointer to command helper functions.
4337 * @param pVM Pointer to the current VM (if any).
4338 * @param paArgs Pointer to (readonly) array of arguments.
4339 * @param cArgs Number of arguments in the array.
4340 */
4341static DECLCALLBACK(int) pgmR3CmdMap(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4342{
4343 /*
4344 * Validate input.
4345 */
4346 if (!pVM)
4347 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4348 if (!pVM->pgm.s.pMappingsR3)
4349 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Sorry, no mappings are registered.\n");
4350
4351 /*
4352 * Print message about the fixedness of the mappings.
4353 */
4354 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, pVM->pgm.s.fMappingsFixed ? "The mappings are FIXED.\n" : "The mappings are FLOATING.\n");
4355 if (RT_FAILURE(rc))
4356 return rc;
4357
4358 /*
4359 * Dump the ranges.
4360 */
4361 PPGMMAPPING pCur;
4362 for (pCur = pVM->pgm.s.pMappingsR3; pCur; pCur = pCur->pNextR3)
4363 {
4364 rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL,
4365 "%08x - %08x %s\n",
4366 pCur->GCPtr, pCur->GCPtrLast, pCur->pszDesc);
4367 if (RT_FAILURE(rc))
4368 return rc;
4369 }
4370
4371 return VINF_SUCCESS;
4372}
4373
4374
4375/**
4376 * The '.pgmsync' command.
4377 *
4378 * @returns VBox status.
4379 * @param pCmd Pointer to the command descriptor (as registered).
4380 * @param pCmdHlp Pointer to command helper functions.
4381 * @param pVM Pointer to the current VM (if any).
4382 * @param paArgs Pointer to (readonly) array of arguments.
4383 * @param cArgs Number of arguments in the array.
4384 */
4385static DECLCALLBACK(int) pgmR3CmdSync(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4386{
4387 /*
4388 * Validate input.
4389 */
4390 if (!pVM)
4391 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4392
4393 /*
4394 * Force page directory sync.
4395 */
4396 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);
4397
4398 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Forcing page directory sync.\n");
4399 if (RT_FAILURE(rc))
4400 return rc;
4401
4402 return VINF_SUCCESS;
4403}
4404
4405
4406#ifdef VBOX_STRICT
4407/**
4408 * The '.pgmassertcr3' command.
4409 *
4410 * @returns VBox status.
4411 * @param pCmd Pointer to the command descriptor (as registered).
4412 * @param pCmdHlp Pointer to command helper functions.
4413 * @param pVM Pointer to the current VM (if any).
4414 * @param paArgs Pointer to (readonly) array of arguments.
4415 * @param cArgs Number of arguments in the array.
4416 */
4417static DECLCALLBACK(int) pgmR3CmdAssertCR3(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4418{
4419 /*
4420 * Validate input.
4421 */
4422 if (!pVM)
4423 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4424
4425 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Checking shadow CR3 page tables for consistency.\n");
4426 if (RT_FAILURE(rc))
4427 return rc;
4428
4429 PGMAssertCR3(pVM, CPUMGetGuestCR3(pVM), CPUMGetGuestCR4(pVM));
4430
4431 return VINF_SUCCESS;
4432}
4433#endif /* VBOX_STRICT */
4434
4435
4436/**
4437 * The '.pgmsyncalways' command.
4438 *
4439 * @returns VBox status.
4440 * @param pCmd Pointer to the command descriptor (as registered).
4441 * @param pCmdHlp Pointer to command helper functions.
4442 * @param pVM Pointer to the current VM (if any).
4443 * @param paArgs Pointer to (readonly) array of arguments.
4444 * @param cArgs Number of arguments in the array.
4445 */
4446static DECLCALLBACK(int) pgmR3CmdSyncAlways(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4447{
4448 /*
4449 * Validate input.
4450 */
4451 if (!pVM)
4452 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4453
4454 /*
4455 * Force page directory sync.
4456 */
4457 if (pVM->pgm.s.fSyncFlags & PGM_SYNC_ALWAYS)
4458 {
4459 ASMAtomicAndU32(&pVM->pgm.s.fSyncFlags, ~PGM_SYNC_ALWAYS);
4460 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Disabled permanent forced page directory syncing.\n");
4461 }
4462 else
4463 {
4464 ASMAtomicOrU32(&pVM->pgm.s.fSyncFlags, PGM_SYNC_ALWAYS);
4465 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);
4466 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Enabled permanent forced page directory syncing.\n");
4467 }
4468}
4469
4470#endif /* VBOX_WITH_DEBUGGER */
4471
4472/**
4473 * pvUser argument of the pgmR3CheckIntegrity*Node callbacks.
4474 */
4475typedef struct PGMCHECKINTARGS
4476{
4477 bool fLeftToRight; /**< true: left-to-right; false: right-to-left. */
4478 PPGMPHYSHANDLER pPrevPhys;
4479 PPGMVIRTHANDLER pPrevVirt;
4480 PPGMPHYS2VIRTHANDLER pPrevPhys2Virt;
4481 PVM pVM;
4482} PGMCHECKINTARGS, *PPGMCHECKINTARGS;
4483
4484/**
4485 * Validate a node in the physical handler tree.
4486 *
4487 * @returns 0 if ok, otherwise 1.
4488 * @param pNode The handler node.
4489 * @param pvUser Pointer to a PGMCHECKINTARGS structure.
4490 */
4491static DECLCALLBACK(int) pgmR3CheckIntegrityPhysHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
4492{
4493 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4494 PPGMPHYSHANDLER pCur = (PPGMPHYSHANDLER)pNode;
4495 AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
4496 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGp-%RGp %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4497 AssertReleaseMsg( !pArgs->pPrevPhys
4498 || (pArgs->fLeftToRight ? pArgs->pPrevPhys->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys->Core.KeyLast > pCur->Core.Key),
4499 ("pPrevPhys=%p %RGp-%RGp %s\n"
4500 " pCur=%p %RGp-%RGp %s\n",
4501 pArgs->pPrevPhys, pArgs->pPrevPhys->Core.Key, pArgs->pPrevPhys->Core.KeyLast, pArgs->pPrevPhys->pszDesc,
4502 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4503 pArgs->pPrevPhys = pCur;
4504 return 0;
4505}
4506
4507
4508/**
4509 * Validate a node in the virtual handler tree.
4510 *
4511 * @returns 0 if ok, otherwise 1.
4512 * @param pNode The handler node.
4513 * @param pvUser Pointer to a PGMCHECKINTARGS structure.
4514 */
4515static DECLCALLBACK(int) pgmR3CheckIntegrityVirtHandlerNode(PAVLROGCPTRNODECORE pNode, void *pvUser)
4516{
4517 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4518 PPGMVIRTHANDLER pCur = (PPGMVIRTHANDLER)pNode;
4519 AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
4520 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGv-%RGv %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4521 AssertReleaseMsg( !pArgs->pPrevVirt
4522 || (pArgs->fLeftToRight ? pArgs->pPrevVirt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevVirt->Core.KeyLast > pCur->Core.Key),
4523 ("pPrevVirt=%p %RGv-%RGv %s\n"
4524 " pCur=%p %RGv-%RGv %s\n",
4525 pArgs->pPrevVirt, pArgs->pPrevVirt->Core.Key, pArgs->pPrevVirt->Core.KeyLast, pArgs->pPrevVirt->pszDesc,
4526 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4527 for (unsigned iPage = 0; iPage < pCur->cPages; iPage++)
4528 {
4529 AssertReleaseMsg(pCur->aPhysToVirt[iPage].offVirtHandler == -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage]),
4530 ("pCur=%p %RGv-%RGv %s\n"
4531                          "iPage=%d offVirtHandler=%#x expected %#x\n",
4532 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc,
4533 iPage, pCur->aPhysToVirt[iPage].offVirtHandler, -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage])));
4534 }
4535 pArgs->pPrevVirt = pCur;
4536 return 0;
4537}
4538
4539
4540/**
4541 * Validate a node in the physical-to-virtual handler tree.
4542 *
4543 * @returns 0 if ok, otherwise 1.
4544 * @param pNode The handler node.
4545 * @param pvUser Pointer to a PGMCHECKINTARGS structure.
4546 */
4547static DECLCALLBACK(int) pgmR3CheckIntegrityPhysToVirtHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
4548{
4549 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4550 PPGMPHYS2VIRTHANDLER pCur = (PPGMPHYS2VIRTHANDLER)pNode;
4551 AssertReleaseMsgReturn(!((uintptr_t)pCur & 3), ("\n"), 1);
4552 AssertReleaseMsgReturn(!(pCur->offVirtHandler & 3), ("\n"), 1);
4553 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGp-%RGp\n", pCur, pCur->Core.Key, pCur->Core.KeyLast));
4554 AssertReleaseMsg( !pArgs->pPrevPhys2Virt
4555 || (pArgs->fLeftToRight ? pArgs->pPrevPhys2Virt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys2Virt->Core.KeyLast > pCur->Core.Key),
4556 ("pPrevPhys2Virt=%p %RGp-%RGp\n"
4557 " pCur=%p %RGp-%RGp\n",
4558 pArgs->pPrevPhys2Virt, pArgs->pPrevPhys2Virt->Core.Key, pArgs->pPrevPhys2Virt->Core.KeyLast,
4559 pCur, pCur->Core.Key, pCur->Core.KeyLast));
4566 AssertReleaseMsg((pCur->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD),
4567 ("pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4568 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
4569 if (pCur->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK)
4570 {
4571 PPGMPHYS2VIRTHANDLER pCur2 = pCur;
4572 for (;;)
4573 {
4574             pCur2 = (PPGMPHYS2VIRTHANDLER)((intptr_t)pCur2 + (pCur2->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK)); /* advance from pCur2, not pCur, or chains longer than two would loop forever */
4575 AssertReleaseMsg(pCur2 != pCur,
4576 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4577 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
4578 AssertReleaseMsg((pCur2->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == PGMPHYS2VIRTHANDLER_IN_TREE,
4579 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
4580 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4581 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
4582 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
4583 AssertReleaseMsg((pCur2->Core.Key ^ pCur->Core.Key) < PAGE_SIZE,
4584 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
4585 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4586 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
4587 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
4588 AssertReleaseMsg((pCur2->Core.KeyLast ^ pCur->Core.KeyLast) < PAGE_SIZE,
4589 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
4590 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4591 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
4592 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
4593 if (!(pCur2->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK))
4594 break;
4595 }
4596 }
4597
4598 pArgs->pPrevPhys2Virt = pCur;
4599 return 0;
4600}
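
#if 0 /* Illustration only: a hypothetical helper, not part of the original sources. */
/* Sketch of the alias-chain walk validated above: offNextAlias packs the
 * IN_TREE/IS_HEAD flag bits together with a node-relative byte offset to
 * the next alias, and a zero offset terminates the chain. */
static unsigned pgmSketchCountAliases(PPGMPHYS2VIRTHANDLER pHead)
{
    unsigned             cAliases = 0;
    PPGMPHYS2VIRTHANDLER pCur     = pHead;
    while (pCur->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK)
    {
        pCur = (PPGMPHYS2VIRTHANDLER)((intptr_t)pCur + (pCur->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK));
        cAliases++;
    }
    return cAliases;
}
#endif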
4601
4602
4603/**
4604 * Perform an integrity check on the PGM component.
4605 *
4606 * @returns VINF_SUCCESS if everything is fine.
4607 * @returns VBox error status after asserting on integrity breach.
4608 * @param pVM The VM handle.
4609 */
4610VMMR3DECL(int) PGMR3CheckIntegrity(PVM pVM)
4611{
4612 AssertReleaseReturn(pVM->pgm.s.offVM, VERR_INTERNAL_ERROR);
4613
4614 /*
4615 * Check the trees.
4616 */
4617 int cErrors = 0;
4618    const PGMCHECKINTARGS s_LeftToRight = { true,  NULL, NULL, NULL, pVM }; /* not static: pVM can differ between calls */
4619    const PGMCHECKINTARGS s_RightToLeft = { false, NULL, NULL, NULL, pVM }; /* not static: pVM can differ between calls */
4620 PGMCHECKINTARGS Args = s_LeftToRight;
4621 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, true, pgmR3CheckIntegrityPhysHandlerNode, &Args);
4622 Args = s_RightToLeft;
4623 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, false, pgmR3CheckIntegrityPhysHandlerNode, &Args);
4624 Args = s_LeftToRight;
4625 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->VirtHandlers, true, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4626 Args = s_RightToLeft;
4627 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->VirtHandlers, false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4628 Args = s_LeftToRight;
4629 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->HyperVirtHandlers, true, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4630 Args = s_RightToLeft;
4631 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->HyperVirtHandlers, false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4632 Args = s_LeftToRight;
4633 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysToVirtHandlers, true, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
4634 Args = s_RightToLeft;
4635 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysToVirtHandlers, false, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
4636
4637 return !cErrors ? VINF_SUCCESS : VERR_INTERNAL_ERROR;
4638}
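
#if 0 /* Illustration only: a hypothetical call site, not part of the original sources. */
/* Hedged usage sketch: a strict build could run the integrity check after
 * manipulating the handler trees; PGMR3CheckIntegrity asserts on the first
 * breach and returns VERR_INTERNAL_ERROR if any node failed validation. */
static void pgmSketchVerifyHandlerTrees(PVM pVM)
{
    int rc = PGMR3CheckIntegrity(pVM);
    AssertRC(rc);
}
#endif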
4639
4640