VirtualBox

source: vbox/trunk/src/VBox/VMM/PGM.cpp@ 17296

Last change on this file since 17296 was 17215, checked in by vboxsync, 16 years ago

Split up the definitions and the guest code. Otherwise we'll end up using e.g. wrong masks in Bth code.

  • Property svn:eol-style set to native
  • Property svn:keywords set to Id
File size: 219.5 KB
Line 
1/* $Id: PGM.cpp 17215 2009-02-27 16:33:19Z vboxsync $ */
2/** @file
3 * PGM - Page Manager and Monitor. (Mixing stuff here, not good?)
4 */
5
6/*
7 * Copyright (C) 2006-2007 Sun Microsystems, Inc.
8 *
9 * This file is part of VirtualBox Open Source Edition (OSE), as
10 * available from http://www.virtualbox.org. This file is free software;
11 * you can redistribute it and/or modify it under the terms of the GNU
12 * General Public License (GPL) as published by the Free Software
13 * Foundation, in version 2 as it comes in the "COPYING" file of the
14 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
15 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
16 *
17 * Please contact Sun Microsystems, Inc., 4150 Network Circle, Santa
18 * Clara, CA 95054 USA or visit http://www.sun.com if you need
19 * additional information or have any questions.
20 */
21
22
23/** @page pg_pgm PGM - The Page Manager and Monitor
24 *
25 * @see grp_pgm,
26 * @ref pg_pgm_pool,
27 * @ref pg_pgm_phys.
28 *
29 *
30 * @section sec_pgm_modes Paging Modes
31 *
32 * There are three memory contexts: Host Context (HC), Guest Context (GC)
33 * and intermediate context. When talking about paging HC can also be refered to
34 * as "host paging", and GC refered to as "shadow paging".
35 *
36 * We define three basic paging modes: 32-bit, PAE and AMD64. The host paging mode
37 * is defined by the host operating system. The mode used in the shadow paging mode
38 * depends on the host paging mode and what the mode the guest is currently in. The
39 * following relation between the two is defined:
40 *
41 * @verbatim
42 Host > 32-bit | PAE | AMD64 |
43 Guest | | | |
44 ==v================================
45 32-bit 32-bit PAE PAE
46 -------|--------|--------|--------|
47 PAE PAE PAE PAE
48 -------|--------|--------|--------|
49 AMD64 AMD64 AMD64 AMD64
50 -------|--------|--------|--------| @endverbatim
51 *
52 * All configuration except those in the diagonal (upper left) are expected to
53 * require special effort from the switcher (i.e. a bit slower).
54 *
55 *
56 *
57 *
58 * @section sec_pgm_shw The Shadow Memory Context
59 *
60 *
61 * [..]
62 *
63 * Because of guest context mappings requires PDPT and PML4 entries to allow
64 * writing on AMD64, the two upper levels will have fixed flags whatever the
65 * guest is thinking of using there. So, when shadowing the PD level we will
66 * calculate the effective flags of PD and all the higher levels. In legacy
67 * PAE mode this only applies to the PWT and PCD bits (the rest are
68 * ignored/reserved/MBZ). We will ignore those bits for the present.
69 *
70 *
71 *
72 * @section sec_pgm_int The Intermediate Memory Context
73 *
74 * The world switch goes thru an intermediate memory context which purpose it is
75 * to provide different mappings of the switcher code. All guest mappings are also
76 * present in this context.
77 *
78 * The switcher code is mapped at the same location as on the host, at an
79 * identity mapped location (physical equals virtual address), and at the
80 * hypervisor location. The identity mapped location is for when the world
81 * switches that involves disabling paging.
82 *
83 * PGM maintain page tables for 32-bit, PAE and AMD64 paging modes. This
84 * simplifies switching guest CPU mode and consistency at the cost of more
85 * code to do the work. All memory use for those page tables is located below
86 * 4GB (this includes page tables for guest context mappings).
87 *
88 *
89 * @subsection subsec_pgm_int_gc Guest Context Mappings
90 *
91 * During assignment and relocation of a guest context mapping the intermediate
92 * memory context is used to verify the new location.
93 *
94 * Guest context mappings are currently restricted to below 4GB, for reasons
95 * of simplicity. This may change when we implement AMD64 support.
96 *
97 *
98 *
99 *
100 * @section sec_pgm_misc Misc
101 *
102 * @subsection subsec_pgm_misc_diff Differences Between Legacy PAE and Long Mode PAE
103 *
104 * The differences between legacy PAE and long mode PAE are:
105 * -# PDPE bits 1, 2, 5 and 6 are defined differently. In leagcy mode they are
106 * all marked down as must-be-zero, while in long mode 1, 2 and 5 have the
107 * usual meanings while 6 is ignored (AMD). This means that upon switching to
108 * legacy PAE mode we'll have to clear these bits and when going to long mode
109 * they must be set. This applies to both intermediate and shadow contexts,
110 * however we don't need to do it for the intermediate one since we're
111 * executing with CR0.WP at that time.
112 * -# CR3 allows a 32-byte aligned address in legacy mode, while in long mode
113 * a page aligned one is required.
114 *
115 *
116 * @section sec_pgm_handlers Access Handlers
117 *
118 * Placeholder.
119 *
120 *
121 * @subsection sec_pgm_handlers_virt Virtual Access Handlers
122 *
123 * Placeholder.
124 *
125 *
126 * @subsection sec_pgm_handlers_virt Virtual Access Handlers
127 *
128 * We currently implement three types of virtual access handlers: ALL, WRITE
129 * and HYPERVISOR (WRITE). See PGMVIRTHANDLERTYPE for some more details.
130 *
131 * The HYPERVISOR access handlers is kept in a separate tree since it doesn't apply
132 * to physical pages (PGMTREES::HyperVirtHandlers) and only needs to be consulted in
133 * a special \#PF case. The ALL and WRITE are in the PGMTREES::VirtHandlers tree, the
134 * rest of this section is going to be about these handlers.
135 *
136 * We'll go thru the life cycle of a handler and try make sense of it all, don't know
137 * how successfull this is gonna be...
138 *
139 * 1. A handler is registered thru the PGMR3HandlerVirtualRegister and
140 * PGMHandlerVirtualRegisterEx APIs. We check for conflicting virtual handlers
141 * and create a new node that is inserted into the AVL tree (range key). Then
142 * a full PGM resync is flagged (clear pool, sync cr3, update virtual bit of PGMPAGE).
143 *
144 * 2. The following PGMSyncCR3/SyncCR3 operation will first make invoke HandlerVirtualUpdate.
145 *
146 * 2a. HandlerVirtualUpdate will will lookup all the pages covered by virtual handlers
147 * via the current guest CR3 and update the physical page -> virtual handler
148 * translation. Needless to say, this doesn't exactly scale very well. If any changes
149 * are detected, it will flag a virtual bit update just like we did on registration.
150 * PGMPHYS pages with changes will have their virtual handler state reset to NONE.
151 *
152 * 2b. The virtual bit update process will iterate all the pages covered by all the
153 * virtual handlers and update the PGMPAGE virtual handler state to the max of all
154 * virtual handlers on that page.
155 *
156 * 2c. Back in SyncCR3 we will now flush the entire shadow page cache to make sure
157 * we don't miss any alias mappings of the monitored pages.
158 *
159 * 2d. SyncCR3 will then proceed with syncing the CR3 table.
160 *
161 * 3. \#PF(np,read) on a page in the range. This will cause it to be synced
162 * read-only and resumed if it's a WRITE handler. If it's an ALL handler we
163 * will call the handlers like in the next step. If the physical mapping has
164 * changed we will - some time in the future - perform a handler callback
165 * (optional) and update the physical -> virtual handler cache.
166 *
167 * 4. \#PF(,write) on a page in the range. This will cause the handler to
168 * be invoked.
169 *
170 * 5. The guest invalidates the page and changes the physical backing or
171 * unmaps it. This should cause the invalidation callback to be invoked
172 * (it might not yet be 100% perfect). Exactly what happens next... is
173 * this where we mess up and end up out of sync for a while?
174 *
175 * 6. The handler is deregistered by the client via PGMHandlerVirtualDeregister.
176 * We will then set all PGMPAGEs in the physical -> virtual handler cache for
177 * this handler to NONE and trigger a full PGM resync (basically the same
178 * as int step 1). Which means 2 is executed again.
179 *
180 *
181 * @subsubsection sub_sec_pgm_handler_virt_todo TODOs
182 *
183 * There is a bunch of things that needs to be done to make the virtual handlers
184 * work 100% correctly and work more efficiently.
185 *
186 * The first bit hasn't been implemented yet because it's going to slow the
187 * whole mess down even more, and besides it seems to be working reliably for
188 * our current uses. OTOH, some of the optimizations might end up more or less
189 * implementing the missing bits, so we'll see.
190 *
191 * On the optimization side, the first thing to do is to try avoid unnecessary
192 * cache flushing. Then try team up with the shadowing code to track changes
193 * in mappings by means of access to them (shadow in), updates to shadows pages,
194 * invlpg, and shadow PT discarding (perhaps).
195 *
196 * Some idea that have popped up for optimization for current and new features:
197 * - bitmap indicating where there are virtual handlers installed.
198 * (4KB => 2**20 pages, page 2**12 => covers 32-bit address space 1:1!)
199 * - Further optimize this by min/max (needs min/max avl getters).
200 * - Shadow page table entry bit (if any left)?
201 *
202 */
203
204
205/** @page pg_pgm_phys PGM Physical Guest Memory Management
206 *
207 *
208 * Objectives:
209 * - Guest RAM over-commitment using memory ballooning,
210 * zero pages and general page sharing.
211 * - Moving or mirroring a VM onto a different physical machine.
212 *
213 *
214 * @subsection subsec_pgmPhys_Definitions Definitions
215 *
216 * Allocation chunk - A RTR0MemObjAllocPhysNC object and the tracking
217 * machinery assoicated with it.
218 *
219 *
220 *
221 *
222 * @subsection subsec_pgmPhys_AllocPage Allocating a page.
223 *
224 * Initially we map *all* guest memory to the (per VM) zero page, which
225 * means that none of the read functions will cause pages to be allocated.
226 *
227 * Exception, access bit in page tables that have been shared. This must
228 * be handled, but we must also make sure PGMGst*Modify doesn't make
229 * unnecessary modifications.
230 *
231 * Allocation points:
232 * - PGMPhysSimpleWriteGCPhys and PGMPhysWrite.
233 * - Replacing a zero page mapping at \#PF.
234 * - Replacing a shared page mapping at \#PF.
235 * - ROM registration (currently MMR3RomRegister).
236 * - VM restore (pgmR3Load).
237 *
238 * For the first three it would make sense to keep a few pages handy
239 * until we've reached the max memory commitment for the VM.
240 *
241 * For the ROM registration, we know exactly how many pages we need
242 * and will request these from ring-0. For restore, we will save
243 * the number of non-zero pages in the saved state and allocate
244 * them up front. This would allow the ring-0 component to refuse
245 * the request if the isn't sufficient memory available for VM use.
246 *
247 * Btw. for both ROM and restore allocations we won't be requiring
248 * zeroed pages as they are going to be filled instantly.
249 *
250 *
251 * @subsection subsec_pgmPhys_FreePage Freeing a page
252 *
253 * There are a few points where a page can be freed:
254 * - After being replaced by the zero page.
255 * - After being replaced by a shared page.
256 * - After being ballooned by the guest additions.
257 * - At reset.
258 * - At restore.
259 *
260 * When freeing one or more pages they will be returned to the ring-0
261 * component and replaced by the zero page.
262 *
263 * The reasoning for clearing out all the pages on reset is that it will
264 * return us to the exact same state as on power on, and may thereby help
265 * us reduce the memory load on the system. Further it might have a
266 * (temporary) positive influence on memory fragmentation (@see subsec_pgmPhys_Fragmentation).
267 *
268 * On restore, as mention under the allocation topic, pages should be
269 * freed / allocated depending on how many is actually required by the
270 * new VM state. The simplest approach is to do like on reset, and free
271 * all non-ROM pages and then allocate what we need.
272 *
273 * A measure to prevent some fragmentation, would be to let each allocation
274 * chunk have some affinity towards the VM having allocated the most pages
275 * from it. Also, try make sure to allocate from allocation chunks that
276 * are almost full. Admittedly, both these measures might work counter to
277 * our intentions and its probably not worth putting a lot of effort,
278 * cpu time or memory into this.
279 *
280 *
281 * @subsection subsec_pgmPhys_SharePage Sharing a page
282 *
283 * The basic idea is that there there will be a idle priority kernel
284 * thread walking the non-shared VM pages hashing them and looking for
285 * pages with the same checksum. If such pages are found, it will compare
286 * them byte-by-byte to see if they actually are identical. If found to be
287 * identical it will allocate a shared page, copy the content, check that
288 * the page didn't change while doing this, and finally request both the
289 * VMs to use the shared page instead. If the page is all zeros (special
290 * checksum and byte-by-byte check) it will request the VM that owns it
291 * to replace it with the zero page.
292 *
293 * To make this efficient, we will have to make sure not to try share a page
294 * that will change its contents soon. This part requires the most work.
295 * A simple idea would be to request the VM to write monitor the page for
296 * a while to make sure it isn't modified any time soon. Also, it may
297 * make sense to skip pages that are being write monitored since this
298 * information is readily available to the thread if it works on the
299 * per-VM guest memory structures (presently called PGMRAMRANGE).
300 *
301 *
302 * @subsection subsec_pgmPhys_Fragmentation Fragmentation Concerns and Counter Measures
303 *
304 * The pages are organized in allocation chunks in ring-0, this is a necessity
305 * if we wish to have an OS agnostic approach to this whole thing. (On Linux we
306 * could easily work on a page-by-page basis if we liked. Whether this is possible
307 * or efficient on NT I don't quite know.) Fragmentation within these chunks may
308 * become a problem as part of the idea here is that we wish to return memory to
309 * the host system.
310 *
311 * For instance, starting two VMs at the same time, they will both allocate the
312 * guest memory on-demand and if permitted their page allocations will be
313 * intermixed. Shut down one of the two VMs and it will be difficult to return
314 * any memory to the host system because the page allocation for the two VMs are
315 * mixed up in the same allocation chunks.
316 *
317 * To further complicate matters, when pages are freed because they have been
318 * ballooned or become shared/zero the whole idea is that the page is supposed
319 * to be reused by another VM or returned to the host system. This will cause
320 * allocation chunks to contain pages belonging to different VMs and prevent
321 * returning memory to the host when one of those VM shuts down.
322 *
323 * The only way to really deal with this problem is to move pages. This can
324 * either be done at VM shutdown and or by the idle priority worker thread
325 * that will be responsible for finding sharable/zero pages. The mechanisms
326 * involved for coercing a VM to move a page (or to do it for it) will be
327 * the same as when telling it to share/zero a page.
328 *
329 *
330 * @subsection subsec_pgmPhys_Tracking Tracking Structures And Their Cost
331 *
332 * There's a difficult balance between keeping the per-page tracking structures
333 * (global and guest page) easy to use and keeping them from eating too much
334 * memory. We have limited virtual memory resources available when operating in
335 * 32-bit kernel space (on 64-bit there'll it's quite a different story). The
336 * tracking structures will be attemted designed such that we can deal with up
337 * to 32GB of memory on a 32-bit system and essentially unlimited on 64-bit ones.
338 *
339 *
340 * @subsubsection subsubsec_pgmPhys_Tracking_Kernel Kernel Space
341 *
342 * @see pg_GMM
343 *
344 * @subsubsection subsubsec_pgmPhys_Tracking_PerVM Per-VM
345 *
346 * Fixed info is the physical address of the page (HCPhys) and the page id
347 * (described above). Theoretically we'll need 48(-12) bits for the HCPhys part.
348 * Today we've restricting ourselves to 40(-12) bits because this is the current
349 * restrictions of all AMD64 implementations (I think Barcelona will up this
350 * to 48(-12) bits, not that it really matters) and I needed the bits for
351 * tracking mappings of a page. 48-12 = 36. That leaves 28 bits, which means a
352 * decent range for the page id: 2^(28+12) = 1024TB.
353 *
354 * In additions to these, we'll have to keep maintaining the page flags as we
355 * currently do. Although it wouldn't harm to optimize these quite a bit, like
356 * for instance the ROM shouldn't depend on having a write handler installed
357 * in order for it to become read-only. A RO/RW bit should be considered so
358 * that the page syncing code doesn't have to mess about checking multiple
359 * flag combinations (ROM || RW handler || write monitored) in order to
360 * figure out how to setup a shadow PTE. But this of course, is second
361 * priority at present. Current this requires 12 bits, but could probably
362 * be optimized to ~8.
363 *
364 * Then there's the 24 bits used to track which shadow page tables are
365 * currently mapping a page for the purpose of speeding up physical
366 * access handlers, and thereby the page pool cache. More bit for this
367 * purpose wouldn't hurt IIRC.
368 *
369 * Then there is a new bit in which we need to record what kind of page
370 * this is, shared, zero, normal or write-monitored-normal. This'll
371 * require 2 bits. One bit might be needed for indicating whether a
372 * write monitored page has been written to. And yet another one or
373 * two for tracking migration status. 3-4 bits total then.
374 *
375 * Whatever is left will can be used to record the sharabilitiy of a
376 * page. The page checksum will not be stored in the per-VM table as
377 * the idle thread will not be permitted to do modifications to it.
378 * It will instead have to keep its own working set of potentially
379 * shareable pages and their check sums and stuff.
380 *
381 * For the present we'll keep the current packing of the
382 * PGMRAMRANGE::aHCPhys to keep the changes simple, only of course,
383 * we'll have to change it to a struct with a total of 128-bits at
384 * our disposal.
385 *
386 * The initial layout will be like this:
387 * @verbatim
388 RTHCPHYS HCPhys; The current stuff.
389 63:40 Current shadow PT tracking stuff.
390 39:12 The physical page frame number.
391 11:0 The current flags.
392 uint32_t u28PageId : 28; The page id.
393 uint32_t u2State : 2; The page state { zero, shared, normal, write monitored }.
394 uint32_t fWrittenTo : 1; Whether a write monitored page was written to.
395 uint32_t u1Reserved : 1; Reserved for later.
396 uint32_t u32Reserved; Reserved for later, mostly sharing stats.
397 @endverbatim
398 *
399 * The final layout will be something like this:
400 * @verbatim
401 RTHCPHYS HCPhys; The current stuff.
402 63:48 High page id (12+).
403 47:12 The physical page frame number.
404 11:0 Low page id.
405 uint32_t fReadOnly : 1; Whether it's readonly page (rom or monitored in some way).
406 uint32_t u3Type : 3; The page type {RESERVED, MMIO, MMIO2, ROM, shadowed ROM, RAM}.
407 uint32_t u2PhysMon : 2; Physical access handler type {none, read, write, all}.
408 uint32_t u2VirtMon : 2; Virtual access handler type {none, read, write, all}..
409 uint32_t u2State : 2; The page state { zero, shared, normal, write monitored }.
410 uint32_t fWrittenTo : 1; Whether a write monitored page was written to.
411 uint32_t u20Reserved : 20; Reserved for later, mostly sharing stats.
412 uint32_t u32Tracking; The shadow PT tracking stuff, roughly.
413 @endverbatim
414 *
415 * Cost wise, this means we'll double the cost for guest memory. There isn't anyway
416 * around that I'm afraid. It means that the cost of dealing out 32GB of memory
417 * to one or more VMs is: (32GB >> PAGE_SHIFT) * 16 bytes, or 128MBs. Or another
418 * example, the VM heap cost when assigning 1GB to a VM will be: 4MB.
419 *
420 * A couple of cost examples for the total cost per-VM + kernel.
421 * 32-bit Windows and 32-bit linux:
422 * 1GB guest ram, 256K pages: 4MB + 2MB(+) = 6MB
423 * 4GB guest ram, 1M pages: 16MB + 8MB(+) = 24MB
424 * 32GB guest ram, 8M pages: 128MB + 64MB(+) = 192MB
425 * 64-bit Windows and 64-bit linux:
426 * 1GB guest ram, 256K pages: 4MB + 3MB(+) = 7MB
427 * 4GB guest ram, 1M pages: 16MB + 12MB(+) = 28MB
428 * 32GB guest ram, 8M pages: 128MB + 96MB(+) = 224MB
429 *
430 * UPDATE - 2007-09-27:
431 * Will need a ballooned flag/state too because we cannot
432 * trust the guest 100% and reporting the same page as ballooned more
433 * than once will put the GMM off balance.
434 *
435 *
436 * @subsection subsec_pgmPhys_Serializing Serializing Access
437 *
438 * Initially, we'll try a simple scheme:
439 *
440 * - The per-VM RAM tracking structures (PGMRAMRANGE) is only modified
441 * by the EMT thread of that VM while in the pgm critsect.
442 * - Other threads in the VM process that needs to make reliable use of
443 * the per-VM RAM tracking structures will enter the critsect.
444 * - No process external thread or kernel thread will ever try enter
445 * the pgm critical section, as that just won't work.
446 * - The idle thread (and similar threads) doesn't not need 100% reliable
447 * data when performing it tasks as the EMT thread will be the one to
448 * do the actual changes later anyway. So, as long as it only accesses
449 * the main ram range, it can do so by somehow preventing the VM from
450 * being destroyed while it works on it...
451 *
452 * - The over-commitment management, including the allocating/freeing
453 * chunks, is serialized by a ring-0 mutex lock (a fast one since the
454 * more mundane mutex implementation is broken on Linux).
455 * - A separeate mutex is protecting the set of allocation chunks so
456 * that pages can be shared or/and freed up while some other VM is
457 * allocating more chunks. This mutex can be take from under the other
458 * one, but not the otherway around.
459 *
460 *
461 * @subsection subsec_pgmPhys_Request VM Request interface
462 *
463 * When in ring-0 it will become necessary to send requests to a VM so it can
464 * for instance move a page while defragmenting during VM destroy. The idle
465 * thread will make use of this interface to request VMs to setup shared
466 * pages and to perform write monitoring of pages.
467 *
468 * I would propose an interface similar to the current VMReq interface, similar
469 * in that it doesn't require locking and that the one sending the request may
470 * wait for completion if it wishes to. This shouldn't be very difficult to
471 * realize.
472 *
473 * The requests themselves are also pretty simple. They are basically:
474 * -# Check that some precondition is still true.
475 * -# Do the update.
476 * -# Update all shadow page tables involved with the page.
477 *
478 * The 3rd step is identical to what we're already doing when updating a
479 * physical handler, see pgmHandlerPhysicalSetRamFlagsAndFlushShadowPTs.
480 *
481 *
482 *
483 * @section sec_pgmPhys_MappingCaches Mapping Caches
484 *
485 * In order to be able to map in and out memory and to be able to support
486 * guest with more RAM than we've got virtual address space, we'll employing
487 * a mapping cache. There is already a tiny one for GC (see PGMGCDynMapGCPageEx)
488 * and we'll create a similar one for ring-0 unless we decide to setup a dedicate
489 * memory context for the HWACCM execution.
490 *
491 *
492 * @subsection subsec_pgmPhys_MappingCaches_R3 Ring-3
493 *
494 * We've considered implementing the ring-3 mapping cache page based but found
495 * that this was bother some when one had to take into account TLBs+SMP and
496 * portability (missing the necessary APIs on several platforms). There were
497 * also some performance concerns with this approach which hadn't quite been
498 * worked out.
499 *
500 * Instead, we'll be mapping allocation chunks into the VM process. This simplifies
501 * matters greatly quite a bit since we don't need to invent any new ring-0 stuff,
502 * only some minor RTR0MEMOBJ mapping stuff. The main concern here is that mapping
503 * compared to the previous idea is that mapping or unmapping a 1MB chunk is more
504 * costly than a single page, although how much more costly is uncertain. We'll
505 * try address this by using a very big cache, preferably bigger than the actual
506 * VM RAM size if possible. The current VM RAM sizes should give some idea for
507 * 32-bit boxes, while on 64-bit we can probably get away with employing an
508 * unlimited cache.
509 *
510 * The cache have to parts, as already indicated, the ring-3 side and the
511 * ring-0 side.
512 *
513 * The ring-0 will be tied to the page allocator since it will operate on the
514 * memory objects it contains. It will therefore require the first ring-0 mutex
515 * discussed in @ref subsec_pgmPhys_Serializing. We
516 * some double house keeping wrt to who has mapped what I think, since both
517 * VMMR0.r0 and RTR0MemObj will keep track of mapping relataions
518 *
519 * The ring-3 part will be protected by the pgm critsect. For simplicity, we'll
520 * require anyone that desires to do changes to the mapping cache to do that
521 * from within this critsect. Alternatively, we could employ a separate critsect
522 * for serializing changes to the mapping cache as this would reduce potential
523 * contention with other threads accessing mappings unrelated to the changes
524 * that are in process. We can see about this later, contention will show
525 * up in the statistics anyway, so it'll be simple to tell.
526 *
527 * The organization of the ring-3 part will be very much like how the allocation
528 * chunks are organized in ring-0, that is in an AVL tree by chunk id. To avoid
529 * having to walk the tree all the time, we'll have a couple of lookaside entries
530 * like in we do for I/O ports and MMIO in IOM.
531 *
532 * The simplified flow of a PGMPhysRead/Write function:
533 * -# Enter the PGM critsect.
534 * -# Lookup GCPhys in the ram ranges and get the Page ID.
535 * -# Calc the Allocation Chunk ID from the Page ID.
536 * -# Check the lookaside entries and then the AVL tree for the Chunk ID.
537 * If not found in cache:
538 * -# Call ring-0 and request it to be mapped and supply
539 * a chunk to be unmapped if the cache is maxed out already.
540 * -# Insert the new mapping into the AVL tree (id + R3 address).
541 * -# Update the relevant lookaside entry and return the mapping address.
542 * -# Do the read/write according to monitoring flags and everything.
543 * -# Leave the critsect.
544 *
545 *
546 * @section sec_pgmPhys_Fallback Fallback
547 *
548 * Current all the "second tier" hosts will not support the RTR0MemObjAllocPhysNC
549 * API and thus require a fallback.
550 *
551 * So, when RTR0MemObjAllocPhysNC returns VERR_NOT_SUPPORTED the page allocator
552 * will return to the ring-3 caller (and later ring-0) and asking it to seed
553 * the page allocator with some fresh pages (VERR_GMM_SEED_ME). Ring-3 will
554 * then perform an SUPPageAlloc(cbChunk >> PAGE_SHIFT) call and make a
555 * "SeededAllocPages" call to ring-0.
556 *
557 * The first time ring-0 sees the VERR_NOT_SUPPORTED failure it will disable
558 * all page sharing (zero page detection will continue). It will also force
559 * all allocations to come from the VM which seeded the page. Both these
560 * measures are taken to make sure that there will never be any need for
561 * mapping anything into ring-3 - everything will be mapped already.
562 *
563 * Whether we'll continue to use the current MM locked memory management
564 * for this I don't quite know (I'd prefer not to and just ditch that all
565 * togther), we'll see what's simplest to do.
566 *
567 *
568 *
569 * @section sec_pgmPhys_Changes Changes
570 *
571 * Breakdown of the changes involved?
572 */
573
574
575/** Saved state data unit version. */
576#define PGM_SAVED_STATE_VERSION 6
577
578/*******************************************************************************
579* Header Files *
580*******************************************************************************/
581#define LOG_GROUP LOG_GROUP_PGM
582#include <VBox/dbgf.h>
583#include <VBox/pgm.h>
584#include <VBox/cpum.h>
585#include <VBox/iom.h>
586#include <VBox/sup.h>
587#include <VBox/mm.h>
588#include <VBox/em.h>
589#include <VBox/stam.h>
590#include <VBox/rem.h>
591#include <VBox/dbgf.h>
592#include <VBox/rem.h>
593#include <VBox/selm.h>
594#include <VBox/ssm.h>
595#include "PGMInternal.h"
596#include <VBox/vm.h>
597#include <VBox/dbg.h>
598#include <VBox/hwaccm.h>
599
600#include <iprt/assert.h>
601#include <iprt/alloc.h>
602#include <iprt/asm.h>
603#include <iprt/thread.h>
604#include <iprt/string.h>
605#ifdef DEBUG_bird
606# include <iprt/env.h>
607#endif
608#include <VBox/param.h>
609#include <VBox/err.h>
610
611
612
613/*******************************************************************************
614* Internal Functions *
615*******************************************************************************/
616static int pgmR3InitPaging(PVM pVM);
617static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
618static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
619static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
620static DECLCALLBACK(int) pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser);
621static DECLCALLBACK(int) pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
622static DECLCALLBACK(int) pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
623#ifdef VBOX_STRICT
624static DECLCALLBACK(void) pgmR3ResetNoMorePhysWritesFlag(PVM pVM, VMSTATE enmState, VMSTATE enmOldState, void *pvUser);
625#endif
626static DECLCALLBACK(int) pgmR3Save(PVM pVM, PSSMHANDLE pSSM);
627static DECLCALLBACK(int) pgmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version);
628static int pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0);
629static void pgmR3ModeDataSwitch(PVM pVM, PGMMODE enmShw, PGMMODE enmGst);
630static PGMMODE pgmR3CalcShadowMode(PVM pVM, PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher);
631
632#ifdef VBOX_WITH_STATISTICS
633static void pgmR3InitStats(PVM pVM);
634#endif
635
636#ifdef VBOX_WITH_DEBUGGER
637/** @todo all but the two last commands must be converted to 'info'. */
638static DECLCALLBACK(int) pgmR3CmdRam(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
639static DECLCALLBACK(int) pgmR3CmdMap(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
640static DECLCALLBACK(int) pgmR3CmdSync(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
641static DECLCALLBACK(int) pgmR3CmdSyncAlways(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
642# ifdef VBOX_STRICT
643static DECLCALLBACK(int) pgmR3CmdAssertCR3(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
644# endif
645#endif
646
647
648/*******************************************************************************
649* Global Variables *
650*******************************************************************************/
651#ifdef VBOX_WITH_DEBUGGER
652/** Command descriptors. */
653static const DBGCCMD g_aCmds[] =
654{
655 /* pszCmd, cArgsMin, cArgsMax, paArgDesc, cArgDescs, pResultDesc, fFlags, pfnHandler pszSyntax, ....pszDescription */
656 { "pgmram", 0, 0, NULL, 0, NULL, 0, pgmR3CmdRam, "", "Display the ram ranges." },
657 { "pgmmap", 0, 0, NULL, 0, NULL, 0, pgmR3CmdMap, "", "Display the mapping ranges." },
658 { "pgmsync", 0, 0, NULL, 0, NULL, 0, pgmR3CmdSync, "", "Sync the CR3 page." },
659#ifdef VBOX_STRICT
660 { "pgmassertcr3", 0, 0, NULL, 0, NULL, 0, pgmR3CmdAssertCR3, "", "Check the shadow CR3 mapping." },
661#endif
662 { "pgmsyncalways", 0, 0, NULL, 0, NULL, 0, pgmR3CmdSyncAlways, "", "Toggle permanent CR3 syncing." },
663};
664#endif
665
666
667
668
669/*
670 * Shadow - 32-bit mode
671 */
672#define PGM_SHW_TYPE PGM_TYPE_32BIT
673#define PGM_SHW_NAME(name) PGM_SHW_NAME_32BIT(name)
674#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_32BIT_STR(name)
675#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_32BIT_STR(name)
676#include "PGMShw.h"
677
678/* Guest - real mode */
679#define PGM_GST_TYPE PGM_TYPE_REAL
680#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
681#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_REAL_STR(name)
682#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
683#define PGM_BTH_NAME(name) PGM_BTH_NAME_32BIT_REAL(name)
684#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_32BIT_REAL_STR(name)
685#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_32BIT_REAL_STR(name)
686#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_32BIT_PT_FOR_PHYS
687#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_32BIT_PD_PHYS
688#include "PGMBth.h"
689#include "PGMGstDefs.h"
690#include "PGMGst.h"
691#undef BTH_PGMPOOLKIND_PT_FOR_PT
692#undef BTH_PGMPOOLKIND_ROOT
693#undef PGM_BTH_NAME
694#undef PGM_BTH_NAME_RC_STR
695#undef PGM_BTH_NAME_R0_STR
696#undef PGM_GST_TYPE
697#undef PGM_GST_NAME
698#undef PGM_GST_NAME_RC_STR
699#undef PGM_GST_NAME_R0_STR
700
701/* Guest - protected mode */
702#define PGM_GST_TYPE PGM_TYPE_PROT
703#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
704#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PROT_STR(name)
705#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
706#define PGM_BTH_NAME(name) PGM_BTH_NAME_32BIT_PROT(name)
707#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_32BIT_PROT_STR(name)
708#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_32BIT_PROT_STR(name)
709#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_32BIT_PT_FOR_PHYS
710#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_32BIT_PD_PHYS
711#include "PGMBth.h"
712#include "PGMGstDefs.h"
713#include "PGMGst.h"
714#undef BTH_PGMPOOLKIND_PT_FOR_PT
715#undef BTH_PGMPOOLKIND_ROOT
716#undef PGM_BTH_NAME
717#undef PGM_BTH_NAME_RC_STR
718#undef PGM_BTH_NAME_R0_STR
719#undef PGM_GST_TYPE
720#undef PGM_GST_NAME
721#undef PGM_GST_NAME_RC_STR
722#undef PGM_GST_NAME_R0_STR
723
724/* Guest - 32-bit mode */
725#define PGM_GST_TYPE PGM_TYPE_32BIT
726#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
727#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_32BIT_STR(name)
728#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
729#define PGM_BTH_NAME(name) PGM_BTH_NAME_32BIT_32BIT(name)
730#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_32BIT_32BIT_STR(name)
731#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_32BIT_32BIT_STR(name)
732#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_32BIT_PT_FOR_32BIT_PT
733#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_32BIT_PT_FOR_32BIT_4MB
734#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_32BIT_PD
735#include "PGMBth.h"
736#include "PGMGstDefs.h"
737#include "PGMGst.h"
738#undef BTH_PGMPOOLKIND_PT_FOR_BIG
739#undef BTH_PGMPOOLKIND_PT_FOR_PT
740#undef BTH_PGMPOOLKIND_ROOT
741#undef PGM_BTH_NAME
742#undef PGM_BTH_NAME_RC_STR
743#undef PGM_BTH_NAME_R0_STR
744#undef PGM_GST_TYPE
745#undef PGM_GST_NAME
746#undef PGM_GST_NAME_RC_STR
747#undef PGM_GST_NAME_R0_STR
748
749#undef PGM_SHW_TYPE
750#undef PGM_SHW_NAME
751#undef PGM_SHW_NAME_RC_STR
752#undef PGM_SHW_NAME_R0_STR
753
754
755/*
756 * Shadow - PAE mode
757 */
758#define PGM_SHW_TYPE PGM_TYPE_PAE
759#define PGM_SHW_NAME(name) PGM_SHW_NAME_PAE(name)
760#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_PAE_STR(name)
761#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_PAE_STR(name)
762#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_REAL(name)
763#include "PGMShw.h"
764
765/* Guest - real mode */
766#define PGM_GST_TYPE PGM_TYPE_REAL
767#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
768#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_REAL_STR(name)
769#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
770#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_REAL(name)
771#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_PAE_REAL_STR(name)
772#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_REAL_STR(name)
773#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
774#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_PAE_PDPT_PHYS
775#include "PGMGstDefs.h"
776#include "PGMBth.h"
777#undef BTH_PGMPOOLKIND_PT_FOR_PT
778#undef BTH_PGMPOOLKIND_ROOT
779#undef PGM_BTH_NAME
780#undef PGM_BTH_NAME_RC_STR
781#undef PGM_BTH_NAME_R0_STR
782#undef PGM_GST_TYPE
783#undef PGM_GST_NAME
784#undef PGM_GST_NAME_RC_STR
785#undef PGM_GST_NAME_R0_STR
786
787/* Guest - protected mode */
788#define PGM_GST_TYPE PGM_TYPE_PROT
789#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
790#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PROT_STR(name)
791#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
792#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_PROT(name)
793#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_PAE_PROT_STR(name)
794#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_PROT_STR(name)
795#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
796#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_PAE_PDPT_PHYS
797#include "PGMGstDefs.h"
798#include "PGMBth.h"
799#undef BTH_PGMPOOLKIND_PT_FOR_PT
800#undef BTH_PGMPOOLKIND_ROOT
801#undef PGM_BTH_NAME
802#undef PGM_BTH_NAME_RC_STR
803#undef PGM_BTH_NAME_R0_STR
804#undef PGM_GST_TYPE
805#undef PGM_GST_NAME
806#undef PGM_GST_NAME_RC_STR
807#undef PGM_GST_NAME_R0_STR
808
809/* Guest - 32-bit mode */
810#define PGM_GST_TYPE PGM_TYPE_32BIT
811#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
812#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_32BIT_STR(name)
813#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
814#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_32BIT(name)
815#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_PAE_32BIT_STR(name)
816#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_32BIT_STR(name)
817#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
818#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
819#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_PAE_PDPT_FOR_32BIT
820#include "PGMGstDefs.h"
821#include "PGMBth.h"
822#undef BTH_PGMPOOLKIND_PT_FOR_BIG
823#undef BTH_PGMPOOLKIND_PT_FOR_PT
824#undef BTH_PGMPOOLKIND_ROOT
825#undef PGM_BTH_NAME
826#undef PGM_BTH_NAME_RC_STR
827#undef PGM_BTH_NAME_R0_STR
828#undef PGM_GST_TYPE
829#undef PGM_GST_NAME
830#undef PGM_GST_NAME_RC_STR
831#undef PGM_GST_NAME_R0_STR
832
833/* Guest - PAE mode */
834#define PGM_GST_TYPE PGM_TYPE_PAE
835#define PGM_GST_NAME(name) PGM_GST_NAME_PAE(name)
836#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PAE_STR(name)
837#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PAE_STR(name)
838#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_PAE(name)
839#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_PAE_PAE_STR(name)
840#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_PAE_STR(name)
841#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
842#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
843#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_PAE_PDPT
844#include "PGMBth.h"
845#include "PGMGstDefs.h"
846#include "PGMGst.h"
847#undef BTH_PGMPOOLKIND_PT_FOR_BIG
848#undef BTH_PGMPOOLKIND_PT_FOR_PT
849#undef BTH_PGMPOOLKIND_ROOT
850#undef PGM_BTH_NAME
851#undef PGM_BTH_NAME_RC_STR
852#undef PGM_BTH_NAME_R0_STR
853#undef PGM_GST_TYPE
854#undef PGM_GST_NAME
855#undef PGM_GST_NAME_RC_STR
856#undef PGM_GST_NAME_R0_STR
857
858#undef PGM_SHW_TYPE
859#undef PGM_SHW_NAME
860#undef PGM_SHW_NAME_RC_STR
861#undef PGM_SHW_NAME_R0_STR
862
863
864/*
865 * Shadow - AMD64 mode
866 */
867#define PGM_SHW_TYPE PGM_TYPE_AMD64
868#define PGM_SHW_NAME(name) PGM_SHW_NAME_AMD64(name)
869#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_AMD64_STR(name)
870#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_AMD64_STR(name)
871#include "PGMShw.h"
872
873#ifdef VBOX_WITH_64_BITS_GUESTS
874/* Guest - AMD64 mode */
875# define PGM_GST_TYPE PGM_TYPE_AMD64
876# define PGM_GST_NAME(name) PGM_GST_NAME_AMD64(name)
877# define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_AMD64_STR(name)
878# define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_AMD64_STR(name)
879# define PGM_BTH_NAME(name) PGM_BTH_NAME_AMD64_AMD64(name)
880# define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_AMD64_AMD64_STR(name)
881# define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_AMD64_AMD64_STR(name)
882# define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
883# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
884# define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_64BIT_PML4
885# include "PGMBth.h"
886# include "PGMGstDefs.h"
887# include "PGMGst.h"
888# undef BTH_PGMPOOLKIND_PT_FOR_BIG
889# undef BTH_PGMPOOLKIND_PT_FOR_PT
890# undef BTH_PGMPOOLKIND_ROOT
891# undef PGM_BTH_NAME
892# undef PGM_BTH_NAME_RC_STR
893# undef PGM_BTH_NAME_R0_STR
894# undef PGM_GST_TYPE
895# undef PGM_GST_NAME
896# undef PGM_GST_NAME_RC_STR
897# undef PGM_GST_NAME_R0_STR
898#endif /* VBOX_WITH_64_BITS_GUESTS */
899
900#undef PGM_SHW_TYPE
901#undef PGM_SHW_NAME
902#undef PGM_SHW_NAME_RC_STR
903#undef PGM_SHW_NAME_R0_STR
904
905
906/*
907 * Shadow - Nested paging mode
908 */
909#define PGM_SHW_TYPE PGM_TYPE_NESTED
910#define PGM_SHW_NAME(name) PGM_SHW_NAME_NESTED(name)
911#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_NESTED_STR(name)
912#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_NESTED_STR(name)
913#include "PGMShw.h"
914
915/* Guest - real mode */
916#define PGM_GST_TYPE PGM_TYPE_REAL
917#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
918#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_REAL_STR(name)
919#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
920#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_REAL(name)
921#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_REAL_STR(name)
922#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_REAL_STR(name)
923#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
924#include "PGMGstDefs.h"
925#include "PGMBth.h"
926#undef BTH_PGMPOOLKIND_PT_FOR_PT
927#undef PGM_BTH_NAME
928#undef PGM_BTH_NAME_RC_STR
929#undef PGM_BTH_NAME_R0_STR
930#undef PGM_GST_TYPE
931#undef PGM_GST_NAME
932#undef PGM_GST_NAME_RC_STR
933#undef PGM_GST_NAME_R0_STR
934
935/* Guest - protected mode */
936#define PGM_GST_TYPE PGM_TYPE_PROT
937#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
938#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PROT_STR(name)
939#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
940#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_PROT(name)
941#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_PROT_STR(name)
942#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_PROT_STR(name)
943#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
944#include "PGMGstDefs.h"
945#include "PGMBth.h"
946#undef BTH_PGMPOOLKIND_PT_FOR_PT
947#undef PGM_BTH_NAME
948#undef PGM_BTH_NAME_RC_STR
949#undef PGM_BTH_NAME_R0_STR
950#undef PGM_GST_TYPE
951#undef PGM_GST_NAME
952#undef PGM_GST_NAME_RC_STR
953#undef PGM_GST_NAME_R0_STR
954
955/* Guest - 32-bit mode */
956#define PGM_GST_TYPE PGM_TYPE_32BIT
957#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
958#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_32BIT_STR(name)
959#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
960#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_32BIT(name)
961#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_32BIT_STR(name)
962#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_32BIT_STR(name)
963#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
964#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
965#include "PGMGstDefs.h"
966#include "PGMBth.h"
967#undef BTH_PGMPOOLKIND_PT_FOR_BIG
968#undef BTH_PGMPOOLKIND_PT_FOR_PT
969#undef PGM_BTH_NAME
970#undef PGM_BTH_NAME_RC_STR
971#undef PGM_BTH_NAME_R0_STR
972#undef PGM_GST_TYPE
973#undef PGM_GST_NAME
974#undef PGM_GST_NAME_RC_STR
975#undef PGM_GST_NAME_R0_STR
976
977/* Guest - PAE mode */
978#define PGM_GST_TYPE PGM_TYPE_PAE
979#define PGM_GST_NAME(name) PGM_GST_NAME_PAE(name)
980#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PAE_STR(name)
981#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PAE_STR(name)
982#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_PAE(name)
983#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_PAE_STR(name)
984#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_PAE_STR(name)
985#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
986#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
987#include "PGMGstDefs.h"
988#include "PGMBth.h"
989#undef BTH_PGMPOOLKIND_PT_FOR_BIG
990#undef BTH_PGMPOOLKIND_PT_FOR_PT
991#undef PGM_BTH_NAME
992#undef PGM_BTH_NAME_RC_STR
993#undef PGM_BTH_NAME_R0_STR
994#undef PGM_GST_TYPE
995#undef PGM_GST_NAME
996#undef PGM_GST_NAME_RC_STR
997#undef PGM_GST_NAME_R0_STR
998
999#ifdef VBOX_WITH_64_BITS_GUESTS
1000/* Guest - AMD64 mode */
1001# define PGM_GST_TYPE PGM_TYPE_AMD64
1002# define PGM_GST_NAME(name) PGM_GST_NAME_AMD64(name)
1003# define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_AMD64_STR(name)
1004# define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_AMD64_STR(name)
1005# define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_AMD64(name)
1006# define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_AMD64_STR(name)
1007# define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_AMD64_STR(name)
1008# define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
1009# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
1010# include "PGMGstDefs.h"
1011# include "PGMBth.h"
1012# undef BTH_PGMPOOLKIND_PT_FOR_BIG
1013# undef BTH_PGMPOOLKIND_PT_FOR_PT
1014# undef PGM_BTH_NAME
1015# undef PGM_BTH_NAME_RC_STR
1016# undef PGM_BTH_NAME_R0_STR
1017# undef PGM_GST_TYPE
1018# undef PGM_GST_NAME
1019# undef PGM_GST_NAME_RC_STR
1020# undef PGM_GST_NAME_R0_STR
1021#endif /* VBOX_WITH_64_BITS_GUESTS */
1022
1023#undef PGM_SHW_TYPE
1024#undef PGM_SHW_NAME
1025#undef PGM_SHW_NAME_RC_STR
1026#undef PGM_SHW_NAME_R0_STR
1027
1028
1029/*
1030 * Shadow - EPT
1031 */
1032#define PGM_SHW_TYPE PGM_TYPE_EPT
1033#define PGM_SHW_NAME(name) PGM_SHW_NAME_EPT(name)
1034#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_EPT_STR(name)
1035#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_EPT_STR(name)
1036#include "PGMShw.h"
1037
1038/* Guest - real mode */
1039#define PGM_GST_TYPE PGM_TYPE_REAL
1040#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
1041#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_REAL_STR(name)
1042#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
1043#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_REAL(name)
1044#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_REAL_STR(name)
1045#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_REAL_STR(name)
1046#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
1047#include "PGMGstDefs.h"
1048#include "PGMBth.h"
1049#undef BTH_PGMPOOLKIND_PT_FOR_PT
1050#undef PGM_BTH_NAME
1051#undef PGM_BTH_NAME_RC_STR
1052#undef PGM_BTH_NAME_R0_STR
1053#undef PGM_GST_TYPE
1054#undef PGM_GST_NAME
1055#undef PGM_GST_NAME_RC_STR
1056#undef PGM_GST_NAME_R0_STR
1057
1058/* Guest - protected mode */
1059#define PGM_GST_TYPE PGM_TYPE_PROT
1060#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
1061#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PROT_STR(name)
1062#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
1063#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_PROT(name)
1064#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_PROT_STR(name)
1065#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_PROT_STR(name)
1066#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
1067#include "PGMGstDefs.h"
1068#include "PGMBth.h"
1069#undef BTH_PGMPOOLKIND_PT_FOR_PT
1070#undef PGM_BTH_NAME
1071#undef PGM_BTH_NAME_RC_STR
1072#undef PGM_BTH_NAME_R0_STR
1073#undef PGM_GST_TYPE
1074#undef PGM_GST_NAME
1075#undef PGM_GST_NAME_RC_STR
1076#undef PGM_GST_NAME_R0_STR
1077
1078/* Guest - 32-bit mode */
1079#define PGM_GST_TYPE PGM_TYPE_32BIT
1080#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
1081#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_32BIT_STR(name)
1082#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
1083#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_32BIT(name)
1084#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_32BIT_STR(name)
1085#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_32BIT_STR(name)
1086#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
1087#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
1088#include "PGMGstDefs.h"
1089#include "PGMBth.h"
1090#undef BTH_PGMPOOLKIND_PT_FOR_BIG
1091#undef BTH_PGMPOOLKIND_PT_FOR_PT
1092#undef PGM_BTH_NAME
1093#undef PGM_BTH_NAME_RC_STR
1094#undef PGM_BTH_NAME_R0_STR
1095#undef PGM_GST_TYPE
1096#undef PGM_GST_NAME
1097#undef PGM_GST_NAME_RC_STR
1098#undef PGM_GST_NAME_R0_STR
1099
1100/* Guest - PAE mode */
1101#define PGM_GST_TYPE PGM_TYPE_PAE
1102#define PGM_GST_NAME(name) PGM_GST_NAME_PAE(name)
1103#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PAE_STR(name)
1104#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PAE_STR(name)
1105#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_PAE(name)
1106#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_PAE_STR(name)
1107#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_PAE_STR(name)
1108#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
1109#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
1110#include "PGMGstDefs.h"
1111#include "PGMBth.h"
1112#undef BTH_PGMPOOLKIND_PT_FOR_BIG
1113#undef BTH_PGMPOOLKIND_PT_FOR_PT
1114#undef PGM_BTH_NAME
1115#undef PGM_BTH_NAME_RC_STR
1116#undef PGM_BTH_NAME_R0_STR
1117#undef PGM_GST_TYPE
1118#undef PGM_GST_NAME
1119#undef PGM_GST_NAME_RC_STR
1120#undef PGM_GST_NAME_R0_STR
1121
1122#ifdef VBOX_WITH_64_BITS_GUESTS
1123/* Guest - AMD64 mode */
1124# define PGM_GST_TYPE PGM_TYPE_AMD64
1125# define PGM_GST_NAME(name) PGM_GST_NAME_AMD64(name)
1126# define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_AMD64_STR(name)
1127# define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_AMD64_STR(name)
1128# define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_AMD64(name)
1129# define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_AMD64_STR(name)
1130# define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_AMD64_STR(name)
1131# define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
1132# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
1133# include "PGMGstDefs.h"
1134# include "PGMBth.h"
1135# undef BTH_PGMPOOLKIND_PT_FOR_BIG
1136# undef BTH_PGMPOOLKIND_PT_FOR_PT
1137# undef PGM_BTH_NAME
1138# undef PGM_BTH_NAME_RC_STR
1139# undef PGM_BTH_NAME_R0_STR
1140# undef PGM_GST_TYPE
1141# undef PGM_GST_NAME
1142# undef PGM_GST_NAME_RC_STR
1143# undef PGM_GST_NAME_R0_STR
1144#endif /* VBOX_WITH_64_BITS_GUESTS */
1145
1146#undef PGM_SHW_TYPE
1147#undef PGM_SHW_NAME
1148#undef PGM_SHW_NAME_RC_STR
1149#undef PGM_SHW_NAME_R0_STR
1150
1151
1152
1153/**
1154 * Initiates the paging of VM.
1155 *
1156 * @returns VBox status code.
1157 * @param pVM Pointer to VM structure.
1158 */
1159VMMR3DECL(int) PGMR3Init(PVM pVM)
1160{
1161 LogFlow(("PGMR3Init:\n"));
1162
1163 /*
1164 * Assert alignment and sizes.
1165 */
1166 AssertRelease(sizeof(pVM->pgm.s) <= sizeof(pVM->pgm.padding));
1167
1168 /*
1169 * Init the structure.
1170 */
1171 pVM->pgm.s.offVM = RT_OFFSETOF(VM, pgm.s);
1172 pVM->pgm.s.offVCpu = RT_OFFSETOF(VMCPU, pgm.s);
1173 pVM->pgm.s.enmShadowMode = PGMMODE_INVALID;
1174 pVM->pgm.s.enmGuestMode = PGMMODE_INVALID;
1175 pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;
1176 pVM->pgm.s.GCPhysCR3 = NIL_RTGCPHYS;
1177#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1178 pVM->pgm.s.GCPhysGstCR3Monitored = NIL_RTGCPHYS;
1179#endif
1180 pVM->pgm.s.fA20Enabled = true;
1181 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(32) - 1; /* default; checked later */
1182 pVM->pgm.s.pGstPaePdptR3 = NULL;
1183#ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1184 pVM->pgm.s.pGstPaePdptR0 = NIL_RTR0PTR;
1185#endif
1186 pVM->pgm.s.pGstPaePdptRC = NIL_RTRCPTR;
1187 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apGstPaePDsR3); i++)
1188 {
1189 pVM->pgm.s.apGstPaePDsR3[i] = NULL;
1190#ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1191 pVM->pgm.s.apGstPaePDsR0[i] = NIL_RTR0PTR;
1192#endif
1193 pVM->pgm.s.apGstPaePDsRC[i] = NIL_RTRCPTR;
1194 pVM->pgm.s.aGCPhysGstPaePDs[i] = NIL_RTGCPHYS;
1195 pVM->pgm.s.aGCPhysGstPaePDsMonitored[i] = NIL_RTGCPHYS;
1196 }
1197
1198#ifdef VBOX_STRICT
1199 VMR3AtStateRegister(pVM, pgmR3ResetNoMorePhysWritesFlag, NULL);
1200#endif
1201
1202 /*
1203 * Get the configured RAM size - to estimate saved state size.
1204 */
1205 uint64_t cbRam;
1206 int rc = CFGMR3QueryU64(CFGMR3GetRoot(pVM), "RamSize", &cbRam);
1207 if (rc == VERR_CFGM_VALUE_NOT_FOUND)
1208 cbRam = pVM->pgm.s.cbRamSize = 0;
1209 else if (RT_SUCCESS(rc))
1210 {
1211 if (cbRam < PAGE_SIZE)
1212 cbRam = 0;
1213 cbRam = RT_ALIGN_64(cbRam, PAGE_SIZE);
1214 pVM->pgm.s.cbRamSize = (RTUINT)cbRam;
1215 }
1216 else
1217 {
1218 AssertMsgFailed(("Configuration error: Failed to query integer \"RamSize\", rc=%Rrc.\n", rc));
1219 return rc;
1220 }
1221
1222 /*
1223 * Register saved state data unit.
1224 */
1225 rc = SSMR3RegisterInternal(pVM, "pgm", 1, PGM_SAVED_STATE_VERSION, (size_t)cbRam + sizeof(PGM),
1226 NULL, pgmR3Save, NULL,
1227 NULL, pgmR3Load, NULL);
1228 if (RT_FAILURE(rc))
1229 return rc;
1230
1231 /*
1232 * Initialize the PGM critical section and flush the phys TLBs
1233 */
1234 rc = PDMR3CritSectInit(pVM, &pVM->pgm.s.CritSect, "PGM");
1235 AssertRCReturn(rc, rc);
1236
1237 PGMR3PhysChunkInvalidateTLB(pVM);
1238 PGMPhysInvalidatePageR3MapTLB(pVM);
1239 PGMPhysInvalidatePageR0MapTLB(pVM);
1240 PGMPhysInvalidatePageGCMapTLB(pVM);
1241
1242 /*
1243 * Trees
1244 */
1245 rc = MMHyperAlloc(pVM, sizeof(PGMTREES), 0, MM_TAG_PGM, (void **)&pVM->pgm.s.pTreesR3);
1246 if (RT_SUCCESS(rc))
1247 {
1248 pVM->pgm.s.pTreesR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pTreesR3);
1249 pVM->pgm.s.pTreesRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pTreesR3);
1250
1251 /*
1252 * Alocate the zero page.
1253 */
1254 rc = MMHyperAlloc(pVM, PAGE_SIZE, PAGE_SIZE, MM_TAG_PGM, &pVM->pgm.s.pvZeroPgR3);
1255 }
1256 if (RT_SUCCESS(rc))
1257 {
1258 pVM->pgm.s.pvZeroPgGC = MMHyperR3ToRC(pVM, pVM->pgm.s.pvZeroPgR3);
1259 pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
1260 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTHCPHYS);
1261 pVM->pgm.s.HCPhysZeroPg = MMR3HyperHCVirt2HCPhys(pVM, pVM->pgm.s.pvZeroPgR3);
1262 AssertRelease(pVM->pgm.s.HCPhysZeroPg != NIL_RTHCPHYS);
1263
1264 /*
1265 * Init the paging.
1266 */
1267 rc = pgmR3InitPaging(pVM);
1268 }
1269 if (RT_SUCCESS(rc))
1270 {
1271 /*
1272 * Init the page pool.
1273 */
1274 rc = pgmR3PoolInit(pVM);
1275 }
1276#ifdef VBOX_WITH_PGMPOOL_PAGING_ONLY
1277 if (RT_SUCCESS(rc))
1278 rc = PGMR3ChangeMode(pVM, PGMMODE_REAL);
1279#endif
1280 if (RT_SUCCESS(rc))
1281 {
1282 /*
1283 * Info & statistics
1284 */
1285 DBGFR3InfoRegisterInternal(pVM, "mode",
1286 "Shows the current paging mode. "
1287 "Recognizes 'all', 'guest', 'shadow' and 'host' as arguments, defaulting to 'all' if nothing's given.",
1288 pgmR3InfoMode);
1289 DBGFR3InfoRegisterInternal(pVM, "pgmcr3",
1290 "Dumps all the entries in the top level paging table. No arguments.",
1291 pgmR3InfoCr3);
1292 DBGFR3InfoRegisterInternal(pVM, "phys",
1293 "Dumps all the physical address ranges. No arguments.",
1294 pgmR3PhysInfo);
1295 DBGFR3InfoRegisterInternal(pVM, "handlers",
1296 "Dumps physical, virtual and hyper virtual handlers. "
1297 "Pass 'phys', 'virt', 'hyper' as argument if only one kind is wanted."
1298 "Add 'nost' if the statistics are unwanted, use together with 'all' or explicit selection.",
1299 pgmR3InfoHandlers);
1300 DBGFR3InfoRegisterInternal(pVM, "mappings",
1301 "Dumps guest mappings.",
1302 pgmR3MapInfo);
1303
1304 STAM_REL_REG(pVM, &pVM->pgm.s.cGuestModeChanges, STAMTYPE_COUNTER, "/PGM/cGuestModeChanges", STAMUNIT_OCCURENCES, "Number of guest mode changes.");
1305 STAM_REL_REG(pVM, &pVM->pgm.s.cRelocations, STAMTYPE_COUNTER, "/PGM/cRelocations", STAMUNIT_OCCURENCES, "Number of hypervisor relocations.");
1306#ifdef VBOX_WITH_STATISTICS
1307 pgmR3InitStats(pVM);
1308#endif
1309#ifdef VBOX_WITH_DEBUGGER
1310 /*
1311 * Debugger commands.
1312 */
1313 static bool fRegisteredCmds = false;
1314 if (!fRegisteredCmds)
1315 {
1316 int rc = DBGCRegisterCommands(&g_aCmds[0], RT_ELEMENTS(g_aCmds));
1317 if (RT_SUCCESS(rc))
1318 fRegisteredCmds = true;
1319 }
1320#endif
1321 return VINF_SUCCESS;
1322 }
1323
1324 /* Almost no cleanup necessary, MM frees all memory. */
1325 PDMR3CritSectDelete(&pVM->pgm.s.CritSect);
1326
1327 return rc;
1328}
1329
1330
1331/**
1332 * Initializes the per-VCPU PGM.
1333 *
1334 * @returns VBox status code.
1335 * @param pVM The VM to operate on.
1336 */
1337VMMR3DECL(int) PGMR3InitCPU(PVM pVM)
1338{
1339 LogFlow(("PGMR3InitCPU\n"));
1340 return VINF_SUCCESS;
1341}
1342
1343
1344/**
1345 * Init paging.
1346 *
1347 * Since we need to check what mode the host is operating in before we can choose
1348 * the right paging functions for the host we have to delay this until R0 has
1349 * been initialized.
1350 *
1351 * @returns VBox status code.
1352 * @param pVM VM handle.
1353 */
1354static int pgmR3InitPaging(PVM pVM)
1355{
1356 /*
1357 * Force a recalculation of modes and switcher so everyone gets notified.
1358 */
1359 pVM->pgm.s.enmShadowMode = PGMMODE_INVALID;
1360 pVM->pgm.s.enmGuestMode = PGMMODE_INVALID;
1361 pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;
1362
1363 /*
1364 * Allocate static mapping space for whatever the cr3 register
1365 * points to and in the case of PAE mode to the 4 PDs.
1366 */
1367 int rc = MMR3HyperReserve(pVM, PAGE_SIZE * 5, "CR3 mapping", &pVM->pgm.s.GCPtrCR3Mapping);
1368 if (RT_FAILURE(rc))
1369 {
1370 AssertMsgFailed(("Failed to reserve two pages for cr mapping in HMA, rc=%Rrc\n", rc));
1371 return rc;
1372 }
1373 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
1374
1375 /*
1376 * Allocate pages for the three possible intermediate contexts
1377 * (AMD64, PAE and plain 32-Bit). We maintain all three contexts
1378 * for the sake of simplicity. The AMD64 uses the PAE for the
1379 * lower levels, making the total number of pages 11 (3 + 7 + 1).
1380 *
1381 * We assume that two page tables will be enought for the core code
1382 * mappings (HC virtual and identity).
1383 */
1384 pVM->pgm.s.pInterPD = (PX86PD)MMR3PageAllocLow(pVM);
1385 pVM->pgm.s.apInterPTs[0] = (PX86PT)MMR3PageAllocLow(pVM);
1386 pVM->pgm.s.apInterPTs[1] = (PX86PT)MMR3PageAllocLow(pVM);
1387 pVM->pgm.s.apInterPaePTs[0] = (PX86PTPAE)MMR3PageAlloc(pVM);
1388 pVM->pgm.s.apInterPaePTs[1] = (PX86PTPAE)MMR3PageAlloc(pVM);
1389 pVM->pgm.s.apInterPaePDs[0] = (PX86PDPAE)MMR3PageAlloc(pVM);
1390 pVM->pgm.s.apInterPaePDs[1] = (PX86PDPAE)MMR3PageAlloc(pVM);
1391 pVM->pgm.s.apInterPaePDs[2] = (PX86PDPAE)MMR3PageAlloc(pVM);
1392 pVM->pgm.s.apInterPaePDs[3] = (PX86PDPAE)MMR3PageAlloc(pVM);
1393 pVM->pgm.s.pInterPaePDPT = (PX86PDPT)MMR3PageAllocLow(pVM);
1394 pVM->pgm.s.pInterPaePDPT64 = (PX86PDPT)MMR3PageAllocLow(pVM);
1395 pVM->pgm.s.pInterPaePML4 = (PX86PML4)MMR3PageAllocLow(pVM);
1396 if ( !pVM->pgm.s.pInterPD
1397 || !pVM->pgm.s.apInterPTs[0]
1398 || !pVM->pgm.s.apInterPTs[1]
1399 || !pVM->pgm.s.apInterPaePTs[0]
1400 || !pVM->pgm.s.apInterPaePTs[1]
1401 || !pVM->pgm.s.apInterPaePDs[0]
1402 || !pVM->pgm.s.apInterPaePDs[1]
1403 || !pVM->pgm.s.apInterPaePDs[2]
1404 || !pVM->pgm.s.apInterPaePDs[3]
1405 || !pVM->pgm.s.pInterPaePDPT
1406 || !pVM->pgm.s.pInterPaePDPT64
1407 || !pVM->pgm.s.pInterPaePML4)
1408 {
1409 AssertMsgFailed(("Failed to allocate pages for the intermediate context!\n"));
1410 return VERR_NO_PAGE_MEMORY;
1411 }
1412
1413 pVM->pgm.s.HCPhysInterPD = MMPage2Phys(pVM, pVM->pgm.s.pInterPD);
1414 AssertRelease(pVM->pgm.s.HCPhysInterPD != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPD & PAGE_OFFSET_MASK));
1415 pVM->pgm.s.HCPhysInterPaePDPT = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT);
1416 AssertRelease(pVM->pgm.s.HCPhysInterPaePDPT != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePDPT & PAGE_OFFSET_MASK));
1417 pVM->pgm.s.HCPhysInterPaePML4 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePML4);
1418 AssertRelease(pVM->pgm.s.HCPhysInterPaePML4 != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePML4 & PAGE_OFFSET_MASK) && pVM->pgm.s.HCPhysInterPaePML4 < 0xffffffff);
1419
1420 /*
1421 * Initialize the pages, setting up the PML4 and PDPT for repetitive 4GB action.
1422 */
1423 ASMMemZeroPage(pVM->pgm.s.pInterPD);
1424 ASMMemZeroPage(pVM->pgm.s.apInterPTs[0]);
1425 ASMMemZeroPage(pVM->pgm.s.apInterPTs[1]);
1426
1427 ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[0]);
1428 ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[1]);
1429
1430 ASMMemZeroPage(pVM->pgm.s.pInterPaePDPT);
1431 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apInterPaePDs); i++)
1432 {
1433 ASMMemZeroPage(pVM->pgm.s.apInterPaePDs[i]);
1434 pVM->pgm.s.pInterPaePDPT->a[i].u = X86_PDPE_P | PGM_PLXFLAGS_PERMANENT
1435 | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[i]);
1436 }
1437
1438 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.pInterPaePDPT64->a); i++)
1439 {
1440 const unsigned iPD = i % RT_ELEMENTS(pVM->pgm.s.apInterPaePDs);
1441 pVM->pgm.s.pInterPaePDPT64->a[i].u = X86_PDPE_P | X86_PDPE_RW | X86_PDPE_US | X86_PDPE_A | PGM_PLXFLAGS_PERMANENT
1442 | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[iPD]);
1443 }
1444
1445 RTHCPHYS HCPhysInterPaePDPT64 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64);
1446 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.pInterPaePML4->a); i++)
1447 pVM->pgm.s.pInterPaePML4->a[i].u = X86_PML4E_P | X86_PML4E_RW | X86_PML4E_US | X86_PML4E_A | PGM_PLXFLAGS_PERMANENT
1448 | HCPhysInterPaePDPT64;
1449
1450 /*
1451 * Allocate pages for the three possible guest contexts (AMD64, PAE and plain 32-Bit).
1452 * We allocate pages for all three posibilities in order to simplify mappings and
1453 * avoid resource failure during mode switches. So, we need to cover all levels of the
1454 * of the first 4GB down to PD level.
1455 * As with the intermediate context, AMD64 uses the PAE PDPT and PDs.
1456 */
1457#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1458 pVM->pgm.s.pShw32BitPdR3 = (PX86PD)MMR3PageAllocLow(pVM);
1459# ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1460 pVM->pgm.s.pShw32BitPdR0 = (uintptr_t)pVM->pgm.s.pShw32BitPdR3;
1461# endif
1462 pVM->pgm.s.apShwPaePDsR3[0] = (PX86PDPAE)MMR3PageAlloc(pVM);
1463 pVM->pgm.s.apShwPaePDsR3[1] = (PX86PDPAE)MMR3PageAlloc(pVM);
1464 AssertRelease((uintptr_t)pVM->pgm.s.apShwPaePDsR3[0] + PAGE_SIZE == (uintptr_t)pVM->pgm.s.apShwPaePDsR3[1]);
1465 pVM->pgm.s.apShwPaePDsR3[2] = (PX86PDPAE)MMR3PageAlloc(pVM);
1466 AssertRelease((uintptr_t)pVM->pgm.s.apShwPaePDsR3[1] + PAGE_SIZE == (uintptr_t)pVM->pgm.s.apShwPaePDsR3[2]);
1467 pVM->pgm.s.apShwPaePDsR3[3] = (PX86PDPAE)MMR3PageAlloc(pVM);
1468 AssertRelease((uintptr_t)pVM->pgm.s.apShwPaePDsR3[2] + PAGE_SIZE == (uintptr_t)pVM->pgm.s.apShwPaePDsR3[3]);
1469# ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1470 pVM->pgm.s.apShwPaePDsR0[0] = (uintptr_t)pVM->pgm.s.apShwPaePDsR3[0];
1471 pVM->pgm.s.apShwPaePDsR0[1] = (uintptr_t)pVM->pgm.s.apShwPaePDsR3[1];
1472 pVM->pgm.s.apShwPaePDsR0[2] = (uintptr_t)pVM->pgm.s.apShwPaePDsR3[2];
1473 pVM->pgm.s.apShwPaePDsR0[3] = (uintptr_t)pVM->pgm.s.apShwPaePDsR3[3];
1474# endif
1475 pVM->pgm.s.pShwPaePdptR3 = (PX86PDPT)MMR3PageAllocLow(pVM);
1476# ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1477 pVM->pgm.s.pShwPaePdptR0 = (uintptr_t)pVM->pgm.s.pShwPaePdptR3;
1478# endif
1479#endif /* VBOX_WITH_PGMPOOL_PAGING_ONLY */
1480 pVM->pgm.s.pShwNestedRootR3 = MMR3PageAllocLow(pVM);
1481#ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
1482 pVM->pgm.s.pShwNestedRootR0 = (uintptr_t)pVM->pgm.s.pShwNestedRootR3;
1483#endif
1484
1485#ifdef VBOX_WITH_PGMPOOL_PAGING_ONLY
1486 if (!pVM->pgm.s.pShwNestedRootR3)
1487#else
1488 if ( !pVM->pgm.s.pShw32BitPdR3
1489 || !pVM->pgm.s.apShwPaePDsR3[0]
1490 || !pVM->pgm.s.apShwPaePDsR3[1]
1491 || !pVM->pgm.s.apShwPaePDsR3[2]
1492 || !pVM->pgm.s.apShwPaePDsR3[3]
1493 || !pVM->pgm.s.pShwPaePdptR3
1494 || !pVM->pgm.s.pShwNestedRootR3)
1495#endif
1496 {
1497 AssertMsgFailed(("Failed to allocate pages for the intermediate context!\n"));
1498 return VERR_NO_PAGE_MEMORY;
1499 }
1500
1501 /* get physical addresses. */
1502#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1503 pVM->pgm.s.HCPhysShw32BitPD = MMPage2Phys(pVM, pVM->pgm.s.pShw32BitPdR3);
1504 Assert(MMPagePhys2Page(pVM, pVM->pgm.s.HCPhysShw32BitPD) == pVM->pgm.s.pShw32BitPdR3);
1505 pVM->pgm.s.aHCPhysPaePDs[0] = MMPage2Phys(pVM, pVM->pgm.s.apShwPaePDsR3[0]);
1506 pVM->pgm.s.aHCPhysPaePDs[1] = MMPage2Phys(pVM, pVM->pgm.s.apShwPaePDsR3[1]);
1507 pVM->pgm.s.aHCPhysPaePDs[2] = MMPage2Phys(pVM, pVM->pgm.s.apShwPaePDsR3[2]);
1508 pVM->pgm.s.aHCPhysPaePDs[3] = MMPage2Phys(pVM, pVM->pgm.s.apShwPaePDsR3[3]);
1509 pVM->pgm.s.HCPhysShwPaePdpt = MMPage2Phys(pVM, pVM->pgm.s.pShwPaePdptR3);
1510#endif
1511 pVM->pgm.s.HCPhysShwNestedRoot = MMPage2Phys(pVM, pVM->pgm.s.pShwNestedRootR3);
1512
1513 /*
1514 * Initialize the pages, setting up the PML4 and PDPT for action below 4GB.
1515 */
1516#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1517 ASMMemZero32(pVM->pgm.s.pShw32BitPdR3, PAGE_SIZE);
1518 ASMMemZero32(pVM->pgm.s.pShwPaePdptR3, PAGE_SIZE);
1519#endif
1520 ASMMemZero32(pVM->pgm.s.pShwNestedRootR3, PAGE_SIZE);
1521#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1522 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apShwPaePDsR3); i++)
1523 {
1524 ASMMemZero32(pVM->pgm.s.apShwPaePDsR3[i], PAGE_SIZE);
1525 pVM->pgm.s.pShwPaePdptR3->a[i].u = X86_PDPE_P | PGM_PLXFLAGS_PERMANENT | pVM->pgm.s.aHCPhysPaePDs[i];
1526 /* The flags will be corrected when entering and leaving long mode. */
1527 }
1528#endif
1529
1530 /*
1531 * Initialize paging workers and mode from current host mode
1532 * and the guest running in real mode.
1533 */
1534 pVM->pgm.s.enmHostMode = SUPGetPagingMode();
1535 switch (pVM->pgm.s.enmHostMode)
1536 {
1537 case SUPPAGINGMODE_32_BIT:
1538 case SUPPAGINGMODE_32_BIT_GLOBAL:
1539 case SUPPAGINGMODE_PAE:
1540 case SUPPAGINGMODE_PAE_GLOBAL:
1541 case SUPPAGINGMODE_PAE_NX:
1542 case SUPPAGINGMODE_PAE_GLOBAL_NX:
1543 break;
1544
1545 case SUPPAGINGMODE_AMD64:
1546 case SUPPAGINGMODE_AMD64_GLOBAL:
1547 case SUPPAGINGMODE_AMD64_NX:
1548 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
1549#ifndef VBOX_WITH_HYBRID_32BIT_KERNEL
1550 if (ARCH_BITS != 64)
1551 {
1552 AssertMsgFailed(("Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
1553 LogRel(("Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
1554 return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
1555 }
1556#endif
1557 break;
1558 default:
1559 AssertMsgFailed(("Host mode %d is not supported\n", pVM->pgm.s.enmHostMode));
1560 return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
1561 }
1562 rc = pgmR3ModeDataInit(pVM, false /* don't resolve GC and R0 syms yet */);
1563#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1564 if (RT_SUCCESS(rc))
1565 rc = PGMR3ChangeMode(pVM, PGMMODE_REAL);
1566#endif
1567 if (RT_SUCCESS(rc))
1568 {
1569 LogFlow(("pgmR3InitPaging: returns successfully\n"));
1570#if HC_ARCH_BITS == 64
1571# ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1572 LogRel(("Debug: HCPhysShw32BitPD=%RHp aHCPhysPaePDs={%RHp,%RHp,%RHp,%RHp} HCPhysShwPaePdpt=%RHp\n",
1573 pVM->pgm.s.HCPhysShw32BitPD,
1574 pVM->pgm.s.aHCPhysPaePDs[0], pVM->pgm.s.aHCPhysPaePDs[1], pVM->pgm.s.aHCPhysPaePDs[2], pVM->pgm.s.aHCPhysPaePDs[3],
1575 pVM->pgm.s.HCPhysShwPaePdpt));
1576# endif
1577 LogRel(("Debug: HCPhysInterPD=%RHp HCPhysInterPaePDPT=%RHp HCPhysInterPaePML4=%RHp\n",
1578 pVM->pgm.s.HCPhysInterPD, pVM->pgm.s.HCPhysInterPaePDPT, pVM->pgm.s.HCPhysInterPaePML4));
1579 LogRel(("Debug: apInterPTs={%RHp,%RHp} apInterPaePTs={%RHp,%RHp} apInterPaePDs={%RHp,%RHp,%RHp,%RHp} pInterPaePDPT64=%RHp\n",
1580 MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[1]),
1581 MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[1]),
1582 MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[1]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[2]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[3]),
1583 MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64)));
1584#endif
1585
1586 return VINF_SUCCESS;
1587 }
1588
1589 LogFlow(("pgmR3InitPaging: returns %Rrc\n", rc));
1590 return rc;
1591}
1592
1593
1594#ifdef VBOX_WITH_STATISTICS
1595/**
1596 * Init statistics
1597 */
1598static void pgmR3InitStats(PVM pVM)
1599{
1600 PPGM pPGM = &pVM->pgm.s;
1601 unsigned i;
1602
1603 /*
1604 * Note! The layout of this function matches the member layout exactly!
1605 */
1606
1607 /* Common - misc variables */
1608 STAM_REG(pVM, &pPGM->cAllPages, STAMTYPE_U32, "/PGM/Page/cAllPages", STAMUNIT_OCCURENCES, "The total number of pages.");
1609 STAM_REG(pVM, &pPGM->cPrivatePages, STAMTYPE_U32, "/PGM/Page/cPrivatePages", STAMUNIT_OCCURENCES, "The number of private pages.");
1610 STAM_REG(pVM, &pPGM->cSharedPages, STAMTYPE_U32, "/PGM/Page/cSharedPages", STAMUNIT_OCCURENCES, "The number of shared pages.");
1611 STAM_REG(pVM, &pPGM->cZeroPages, STAMTYPE_U32, "/PGM/Page/cZeroPages", STAMUNIT_OCCURENCES, "The number of zero backed pages.");
1612 STAM_REG(pVM, &pPGM->ChunkR3Map.c, STAMTYPE_U32, "/PGM/ChunkR3Map/c", STAMUNIT_OCCURENCES, "Number of mapped chunks.");
1613 STAM_REG(pVM, &pPGM->ChunkR3Map.cMax, STAMTYPE_U32, "/PGM/ChunkR3Map/cMax", STAMUNIT_OCCURENCES, "Maximum number of mapped chunks.");
1614
1615 /* Common - stats */
1616#ifdef PGMPOOL_WITH_GCPHYS_TRACKING
1617 STAM_REG(pVM, &pPGM->StatTrackVirgin, STAMTYPE_COUNTER, "/PGM/Track/Virgin", STAMUNIT_OCCURENCES, "The number of first time shadowings");
1618 STAM_REG(pVM, &pPGM->StatTrackAliased, STAMTYPE_COUNTER, "/PGM/Track/Aliased", STAMUNIT_OCCURENCES, "The number of times switching to cRef2, i.e. the page is being shadowed by two PTs.");
1619 STAM_REG(pVM, &pPGM->StatTrackAliasedMany, STAMTYPE_COUNTER, "/PGM/Track/AliasedMany", STAMUNIT_OCCURENCES, "The number of times we're tracking using cRef2.");
1620 STAM_REG(pVM, &pPGM->StatTrackAliasedLots, STAMTYPE_COUNTER, "/PGM/Track/AliasedLots", STAMUNIT_OCCURENCES, "The number of times we're hitting pages which has overflowed cRef2");
1621 STAM_REG(pVM, &pPGM->StatTrackOverflows, STAMTYPE_COUNTER, "/PGM/Track/Overflows", STAMUNIT_OCCURENCES, "The number of times the extent list grows to long.");
1622 STAM_REG(pVM, &pPGM->StatTrackDeref, STAMTYPE_PROFILE, "/PGM/Track/Deref", STAMUNIT_OCCURENCES, "Profiling of SyncPageWorkerTrackDeref (expensive).");
1623#endif
1624 for (i = 0; i < RT_ELEMENTS(pPGM->StatSyncPtPD); i++)
1625 STAMR3RegisterF(pVM, &pPGM->StatSyncPtPD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
1626 "The number of SyncPT per PD n.", "/PGM/PDSyncPT/%04X", i);
1627 for (i = 0; i < RT_ELEMENTS(pPGM->StatSyncPagePD); i++)
1628 STAMR3RegisterF(pVM, &pPGM->StatSyncPagePD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
1629 "The number of SyncPage per PD n.", "/PGM/PDSyncPage/%04X", i);
1630
1631 /* R3 only: */
1632 STAM_REG(pVM, &pPGM->StatR3DetectedConflicts, STAMTYPE_COUNTER, "/PGM/R3/DetectedConflicts", STAMUNIT_OCCURENCES, "The number of times PGMR3CheckMappingConflicts() detected a conflict.");
1633 STAM_REG(pVM, &pPGM->StatR3ResolveConflict, STAMTYPE_PROFILE, "/PGM/R3/ResolveConflict", STAMUNIT_TICKS_PER_CALL, "pgmR3SyncPTResolveConflict() profiling (includes the entire relocation).");
1634 STAM_REG(pVM, &pPGM->StatR3GuestPDWrite, STAMTYPE_COUNTER, "/PGM/R3/PDWrite", STAMUNIT_OCCURENCES, "The total number of times pgmHCGuestPDWriteHandler() was called.");
1635 STAM_REG(pVM, &pPGM->StatR3GuestPDWriteConflict, STAMTYPE_COUNTER, "/PGM/R3/PDWriteConflict", STAMUNIT_OCCURENCES, "The number of times pgmHCGuestPDWriteHandler() detected a conflict.");
1636 STAM_REG(pVM, &pPGM->StatR3DynRamTotal, STAMTYPE_COUNTER, "/PGM/DynAlloc/TotalAlloc", STAMUNIT_MEGABYTES, "Allocated MBs of guest ram.");
1637 STAM_REG(pVM, &pPGM->StatR3DynRamGrow, STAMTYPE_COUNTER, "/PGM/DynAlloc/Grow", STAMUNIT_OCCURENCES, "Nr of pgmr3PhysGrowRange calls.");
1638
1639 /* R0 only: */
1640 STAM_REG(pVM, &pPGM->StatR0DynMapMigrateInvlPg, STAMTYPE_COUNTER, "/PGM/R0/DynMapMigrateInvlPg", STAMUNIT_OCCURENCES, "invlpg count in PGMDynMapMigrateAutoSet.");
1641 STAM_REG(pVM, &pPGM->StatR0DynMapGCPageInl, STAMTYPE_PROFILE, "/PGM/R0/DynMapPageGCPageInl", STAMUNIT_TICKS_PER_CALL, "Calls to pgmR0DynMapGCPageInlined.");
1642 STAM_REG(pVM, &pPGM->StatR0DynMapGCPageInlHits, STAMTYPE_COUNTER, "/PGM/R0/DynMapPageGCPageInl/Hits", STAMUNIT_OCCURENCES, "Hash table lookup hits.");
1643 STAM_REG(pVM, &pPGM->StatR0DynMapGCPageInlMisses, STAMTYPE_COUNTER, "/PGM/R0/DynMapPageGCPageInl/Misses", STAMUNIT_OCCURENCES, "Misses that falls back to code common with PGMDynMapHCPage.");
1644 STAM_REG(pVM, &pPGM->StatR0DynMapGCPageInlRamHits, STAMTYPE_COUNTER, "/PGM/R0/DynMapPageGCPageInl/RamHits", STAMUNIT_OCCURENCES, "1st ram range hits.");
1645 STAM_REG(pVM, &pPGM->StatR0DynMapGCPageInlRamMisses, STAMTYPE_COUNTER, "/PGM/R0/DynMapPageGCPageInl/RamMisses", STAMUNIT_OCCURENCES, "1st ram range misses, takes slow path.");
1646 STAM_REG(pVM, &pPGM->StatR0DynMapHCPageInl, STAMTYPE_PROFILE, "/PGM/R0/DynMapPageHCPageInl", STAMUNIT_TICKS_PER_CALL, "Calls to pgmR0DynMapHCPageInlined.");
1647 STAM_REG(pVM, &pPGM->StatR0DynMapHCPageInlHits, STAMTYPE_COUNTER, "/PGM/R0/DynMapPageHCPageInl/Hits", STAMUNIT_OCCURENCES, "Hash table lookup hits.");
1648 STAM_REG(pVM, &pPGM->StatR0DynMapHCPageInlMisses, STAMTYPE_COUNTER, "/PGM/R0/DynMapPageHCPageInl/Misses", STAMUNIT_OCCURENCES, "Misses that falls back to code common with PGMDynMapHCPage.");
1649 STAM_REG(pVM, &pPGM->StatR0DynMapPage, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage", STAMUNIT_OCCURENCES, "Calls to pgmR0DynMapPage");
1650 STAM_REG(pVM, &pPGM->StatR0DynMapSetOptimize, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SetOptimize", STAMUNIT_OCCURENCES, "Calls to pgmDynMapOptimizeAutoSet.");
1651 STAM_REG(pVM, &pPGM->StatR0DynMapSetSearchFlushes, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SetSearchFlushes",STAMUNIT_OCCURENCES, "Set search restorting to subset flushes.");
1652 STAM_REG(pVM, &pPGM->StatR0DynMapSetSearchHits, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SetSearchHits", STAMUNIT_OCCURENCES, "Set search hits.");
1653 STAM_REG(pVM, &pPGM->StatR0DynMapSetSearchMisses, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SetSearchMisses", STAMUNIT_OCCURENCES, "Set search misses.");
1654 STAM_REG(pVM, &pPGM->StatR0DynMapHCPage, STAMTYPE_PROFILE, "/PGM/R0/DynMapPage/HCPage", STAMUNIT_TICKS_PER_CALL, "Calls to PGMDynMapHCPage (ring-0).");
1655 STAM_REG(pVM, &pPGM->StatR0DynMapPageHits0, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/Hits0", STAMUNIT_OCCURENCES, "Hits at iPage+0");
1656 STAM_REG(pVM, &pPGM->StatR0DynMapPageHits1, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/Hits1", STAMUNIT_OCCURENCES, "Hits at iPage+1");
1657 STAM_REG(pVM, &pPGM->StatR0DynMapPageHits2, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/Hits2", STAMUNIT_OCCURENCES, "Hits at iPage+2");
1658 STAM_REG(pVM, &pPGM->StatR0DynMapPageInvlPg, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/InvlPg", STAMUNIT_OCCURENCES, "invlpg count in pgmR0DynMapPageSlow.");
1659 STAM_REG(pVM, &pPGM->StatR0DynMapPageSlow, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/Slow", STAMUNIT_OCCURENCES, "Calls to pgmR0DynMapPageSlow - subtract this from pgmR0DynMapPage to get 1st level hits.");
1660 STAM_REG(pVM, &pPGM->StatR0DynMapPageSlowLoopHits, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SlowLoopHits" , STAMUNIT_OCCURENCES, "Hits in the loop path.");
1661 STAM_REG(pVM, &pPGM->StatR0DynMapPageSlowLoopMisses, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SlowLoopMisses", STAMUNIT_OCCURENCES, "Misses in the loop path. NonLoopMisses = Slow - SlowLoopHit - SlowLoopMisses");
1662 //STAM_REG(pVM, &pPGM->StatR0DynMapPageSlowLostHits, STAMTYPE_COUNTER, "/PGM/R0/DynMapPage/SlowLostHits", STAMUNIT_OCCURENCES, "Lost hits.");
1663 STAM_REG(pVM, &pPGM->StatR0DynMapSubsets, STAMTYPE_COUNTER, "/PGM/R0/Subsets", STAMUNIT_OCCURENCES, "Times PGMDynMapPushAutoSubset was called.");
1664 STAM_REG(pVM, &pPGM->StatR0DynMapPopFlushes, STAMTYPE_COUNTER, "/PGM/R0/SubsetPopFlushes", STAMUNIT_OCCURENCES, "Times PGMDynMapPopAutoSubset flushes the subset.");
1665 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[0], STAMTYPE_COUNTER, "/PGM/R0/SetSize000..09", STAMUNIT_OCCURENCES, "00-09% filled");
1666 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[1], STAMTYPE_COUNTER, "/PGM/R0/SetSize010..19", STAMUNIT_OCCURENCES, "10-19% filled");
1667 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[2], STAMTYPE_COUNTER, "/PGM/R0/SetSize020..29", STAMUNIT_OCCURENCES, "20-29% filled");
1668 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[3], STAMTYPE_COUNTER, "/PGM/R0/SetSize030..39", STAMUNIT_OCCURENCES, "30-39% filled");
1669 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[4], STAMTYPE_COUNTER, "/PGM/R0/SetSize040..49", STAMUNIT_OCCURENCES, "40-49% filled");
1670 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[5], STAMTYPE_COUNTER, "/PGM/R0/SetSize050..59", STAMUNIT_OCCURENCES, "50-59% filled");
1671 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[6], STAMTYPE_COUNTER, "/PGM/R0/SetSize060..69", STAMUNIT_OCCURENCES, "60-69% filled");
1672 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[7], STAMTYPE_COUNTER, "/PGM/R0/SetSize070..79", STAMUNIT_OCCURENCES, "70-79% filled");
1673 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[8], STAMTYPE_COUNTER, "/PGM/R0/SetSize080..89", STAMUNIT_OCCURENCES, "80-89% filled");
1674 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[9], STAMTYPE_COUNTER, "/PGM/R0/SetSize090..99", STAMUNIT_OCCURENCES, "90-99% filled");
1675 STAM_REG(pVM, &pPGM->aStatR0DynMapSetSize[10], STAMTYPE_COUNTER, "/PGM/R0/SetSize100", STAMUNIT_OCCURENCES, "100% filled");
1676
1677 /* GC only: */
1678 STAM_REG(pVM, &pPGM->StatRCDynMapCacheHits, STAMTYPE_COUNTER, "/PGM/RC/DynMapCache/Hits" , STAMUNIT_OCCURENCES, "Number of dynamic page mapping cache hits.");
1679 STAM_REG(pVM, &pPGM->StatRCDynMapCacheMisses, STAMTYPE_COUNTER, "/PGM/RC/DynMapCache/Misses" , STAMUNIT_OCCURENCES, "Number of dynamic page mapping cache misses.");
1680 STAM_REG(pVM, &pPGM->StatRCInvlPgConflict, STAMTYPE_COUNTER, "/PGM/RC/InvlPgConflict", STAMUNIT_OCCURENCES, "Number of times PGMInvalidatePage() detected a mapping conflict.");
1681 STAM_REG(pVM, &pPGM->StatRCInvlPgSyncMonCR3, STAMTYPE_COUNTER, "/PGM/RC/InvlPgSyncMonitorCR3", STAMUNIT_OCCURENCES, "Number of times PGMInvalidatePage() ran into PGM_SYNC_MONITOR_CR3.");
1682
1683 /* RZ only: */
1684 STAM_REG(pVM, &pPGM->StatRZTrap0e, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMTrap0eHandler() body.");
1685 STAM_REG(pVM, &pPGM->StatRZTrap0eTimeCheckPageFault, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time/CheckPageFault", STAMUNIT_TICKS_PER_CALL, "Profiling of checking for dirty/access emulation faults.");
1686 STAM_REG(pVM, &pPGM->StatRZTrap0eTimeSyncPT, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of lazy page table syncing.");
1687 STAM_REG(pVM, &pPGM->StatRZTrap0eTimeMapping, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time/Mapping", STAMUNIT_TICKS_PER_CALL, "Profiling of checking virtual mappings.");
1688 STAM_REG(pVM, &pPGM->StatRZTrap0eTimeOutOfSync, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time/OutOfSync", STAMUNIT_TICKS_PER_CALL, "Profiling of out of sync page handling.");
1689 STAM_REG(pVM, &pPGM->StatRZTrap0eTimeHandlers, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time/Handlers", STAMUNIT_TICKS_PER_CALL, "Profiling of checking handlers.");
1690 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2CSAM, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/CSAM", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is CSAM.");
1691 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2DirtyAndAccessed, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/DirtyAndAccessedBits", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is dirty and/or accessed bit emulation.");
1692 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2GuestTrap, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/GuestTrap", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is a guest trap.");
1693 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2HndPhys, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/HandlerPhysical", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is a physical handler.");
1694 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2HndVirt, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/HandlerVirtual", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is a virtual handler.");
1695 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2HndUnhandled, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/HandlerUnhandled", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is access outside the monitored areas of a monitored page.");
1696 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2Misc, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/Misc", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is not known.");
1697 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2OutOfSync, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/OutOfSync", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an out-of-sync page.");
1698 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2OutOfSyncHndPhys, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/OutOfSyncHndPhys", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an out-of-sync physical handler page.");
1699 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2OutOfSyncHndVirt, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/OutOfSyncHndVirt", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an out-of-sync virtual handler page.");
1700 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2OutOfSyncHndObs, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/OutOfSyncObsHnd", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an obsolete handler page.");
1701 STAM_REG(pVM, &pPGM->StatRZTrap0eTime2SyncPT, STAMTYPE_PROFILE, "/PGM/RZ/Trap0e/Time2/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is lazy syncing of a PT.");
1702 STAM_REG(pVM, &pPGM->StatRZTrap0eConflicts, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Conflicts", STAMUNIT_OCCURENCES, "The number of times #PF was caused by an undetected conflict.");
1703 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersMapping, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/Mapping", STAMUNIT_OCCURENCES, "Number of traps due to access handlers in mappings.");
1704 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersOutOfSync, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/OutOfSync", STAMUNIT_OCCURENCES, "Number of traps due to out-of-sync handled pages.");
1705 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersPhysical, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/Physical", STAMUNIT_OCCURENCES, "Number of traps due to physical access handlers.");
1706 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersVirtual, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/Virtual", STAMUNIT_OCCURENCES, "Number of traps due to virtual access handlers.");
1707 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersVirtualByPhys, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/VirtualByPhys", STAMUNIT_OCCURENCES, "Number of traps due to virtual access handlers by physical address.");
1708 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersVirtualUnmarked,STAMTYPE_COUNTER,"/PGM/RZ/Trap0e/Handlers/VirtualUnmarked",STAMUNIT_OCCURENCES, "Number of traps due to virtual access handlers by virtual address (without proper physical flags).");
1709 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersUnhandled, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/Unhandled", STAMUNIT_OCCURENCES, "Number of traps due to access outside range of monitored page(s).");
1710 STAM_REG(pVM, &pPGM->StatRZTrap0eHandlersInvalid, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Handlers/Invalid", STAMUNIT_OCCURENCES, "Number of traps due to access to invalid physical memory.");
1711 STAM_REG(pVM, &pPGM->StatRZTrap0eUSNotPresentRead, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/User/NPRead", STAMUNIT_OCCURENCES, "Number of user mode not present read page faults.");
1712 STAM_REG(pVM, &pPGM->StatRZTrap0eUSNotPresentWrite, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/User/NPWrite", STAMUNIT_OCCURENCES, "Number of user mode not present write page faults.");
1713 STAM_REG(pVM, &pPGM->StatRZTrap0eUSWrite, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/User/Write", STAMUNIT_OCCURENCES, "Number of user mode write page faults.");
1714 STAM_REG(pVM, &pPGM->StatRZTrap0eUSReserved, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/User/Reserved", STAMUNIT_OCCURENCES, "Number of user mode reserved bit page faults.");
1715 STAM_REG(pVM, &pPGM->StatRZTrap0eUSNXE, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/User/NXE", STAMUNIT_OCCURENCES, "Number of user mode NXE page faults.");
1716 STAM_REG(pVM, &pPGM->StatRZTrap0eUSRead, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/User/Read", STAMUNIT_OCCURENCES, "Number of user mode read page faults.");
1717 STAM_REG(pVM, &pPGM->StatRZTrap0eSVNotPresentRead, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/Supervisor/NPRead", STAMUNIT_OCCURENCES, "Number of supervisor mode not present read page faults.");
1718 STAM_REG(pVM, &pPGM->StatRZTrap0eSVNotPresentWrite, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/Supervisor/NPWrite", STAMUNIT_OCCURENCES, "Number of supervisor mode not present write page faults.");
1719 STAM_REG(pVM, &pPGM->StatRZTrap0eSVWrite, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/Supervisor/Write", STAMUNIT_OCCURENCES, "Number of supervisor mode write page faults.");
1720 STAM_REG(pVM, &pPGM->StatRZTrap0eSVReserved, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/Supervisor/Reserved", STAMUNIT_OCCURENCES, "Number of supervisor mode reserved bit page faults.");
1721 STAM_REG(pVM, &pPGM->StatRZTrap0eSNXE, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/Err/Supervisor/NXE", STAMUNIT_OCCURENCES, "Number of supervisor mode NXE page faults.");
1722 STAM_REG(pVM, &pPGM->StatRZTrap0eGuestPF, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/GuestPF", STAMUNIT_OCCURENCES, "Number of real guest page faults.");
1723 STAM_REG(pVM, &pPGM->StatRZTrap0eGuestPFUnh, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/GuestPF/Unhandled", STAMUNIT_OCCURENCES, "Number of real guest page faults from the 'unhandled' case.");
1724 STAM_REG(pVM, &pPGM->StatRZTrap0eGuestPFMapping, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/GuestPF/InMapping", STAMUNIT_OCCURENCES, "Number of real guest page faults in a mapping.");
1725 STAM_REG(pVM, &pPGM->StatRZTrap0eWPEmulInRZ, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/WP/InRZ", STAMUNIT_OCCURENCES, "Number of guest page faults due to X86_CR0_WP emulation.");
1726 STAM_REG(pVM, &pPGM->StatRZTrap0eWPEmulToR3, STAMTYPE_COUNTER, "/PGM/RZ/Trap0e/WP/ToR3", STAMUNIT_OCCURENCES, "Number of guest page faults due to X86_CR0_WP emulation (forward to R3 for emulation).");
1727 for (i = 0; i < RT_ELEMENTS(pPGM->StatRZTrap0ePD); i++)
1728 STAMR3RegisterF(pVM, &pPGM->StatRZTrap0ePD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
1729 "The number of traps in page directory n.", "/PGM/RZ/Trap0e/PD/%04X", i);
1730 STAM_REG(pVM, &pPGM->StatRZGuestCR3WriteHandled, STAMTYPE_COUNTER, "/PGM/RZ/CR3WriteHandled", STAMUNIT_OCCURENCES, "The number of times the Guest CR3 change was successfully handled.");
1731 STAM_REG(pVM, &pPGM->StatRZGuestCR3WriteUnhandled, STAMTYPE_COUNTER, "/PGM/RZ/CR3WriteUnhandled", STAMUNIT_OCCURENCES, "The number of times the Guest CR3 change was passed back to the recompiler.");
1732 STAM_REG(pVM, &pPGM->StatRZGuestCR3WriteConflict, STAMTYPE_COUNTER, "/PGM/RZ/CR3WriteConflict", STAMUNIT_OCCURENCES, "The number of times the Guest CR3 monitoring detected a conflict.");
1733 STAM_REG(pVM, &pPGM->StatRZGuestROMWriteHandled, STAMTYPE_COUNTER, "/PGM/RZ/ROMWriteHandled", STAMUNIT_OCCURENCES, "The number of times the Guest ROM change was successfully handled.");
1734 STAM_REG(pVM, &pPGM->StatRZGuestROMWriteUnhandled, STAMTYPE_COUNTER, "/PGM/RZ/ROMWriteUnhandled", STAMUNIT_OCCURENCES, "The number of times the Guest ROM change was passed back to the recompiler.");
1735
1736 /* HC only: */
1737
1738 /* RZ & R3: */
1739 STAM_REG(pVM, &pPGM->StatRZSyncCR3, STAMTYPE_PROFILE, "/PGM/RZ/SyncCR3", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() body.");
1740 STAM_REG(pVM, &pPGM->StatRZSyncCR3Handlers, STAMTYPE_PROFILE, "/PGM/RZ/SyncCR3/Handlers", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() update handler section.");
1741 STAM_REG(pVM, &pPGM->StatRZSyncCR3HandlerVirtualUpdate, STAMTYPE_PROFILE, "/PGM/RZ/SyncCR3/Handlers/VirtualUpdate", STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler updates.");
1742 STAM_REG(pVM, &pPGM->StatRZSyncCR3HandlerVirtualReset, STAMTYPE_PROFILE, "/PGM/RZ/SyncCR3/Handlers/VirtualReset", STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler resets.");
1743 STAM_REG(pVM, &pPGM->StatRZSyncCR3Global, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/Global", STAMUNIT_OCCURENCES, "The number of global CR3 syncs.");
1744 STAM_REG(pVM, &pPGM->StatRZSyncCR3NotGlobal, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/NotGlobal", STAMUNIT_OCCURENCES, "The number of non-global CR3 syncs.");
1745 STAM_REG(pVM, &pPGM->StatRZSyncCR3DstCacheHit, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/DstChacheHit", STAMUNIT_OCCURENCES, "The number of times we got some kind of a cache hit.");
1746 STAM_REG(pVM, &pPGM->StatRZSyncCR3DstFreed, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/DstFreed", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry.");
1747 STAM_REG(pVM, &pPGM->StatRZSyncCR3DstFreedSrcNP, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/DstFreedSrcNP", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry for which the source entry was not present.");
1748 STAM_REG(pVM, &pPGM->StatRZSyncCR3DstNotPresent, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/DstNotPresent", STAMUNIT_OCCURENCES, "The number of times we've encountered a not present shadow entry for a present guest entry.");
1749 STAM_REG(pVM, &pPGM->StatRZSyncCR3DstSkippedGlobalPD, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/DstSkippedGlobalPD", STAMUNIT_OCCURENCES, "The number of times a global page directory wasn't flushed.");
1750 STAM_REG(pVM, &pPGM->StatRZSyncCR3DstSkippedGlobalPT, STAMTYPE_COUNTER, "/PGM/RZ/SyncCR3/DstSkippedGlobalPT", STAMUNIT_OCCURENCES, "The number of times a page table with only global entries wasn't flushed.");
1751 STAM_REG(pVM, &pPGM->StatRZSyncPT, STAMTYPE_PROFILE, "/PGM/RZ/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of the pfnSyncPT() body.");
1752 STAM_REG(pVM, &pPGM->StatRZSyncPTFailed, STAMTYPE_COUNTER, "/PGM/RZ/SyncPT/Failed", STAMUNIT_OCCURENCES, "The number of times pfnSyncPT() failed.");
1753 STAM_REG(pVM, &pPGM->StatRZSyncPT4K, STAMTYPE_COUNTER, "/PGM/RZ/SyncPT/4K", STAMUNIT_OCCURENCES, "Nr of 4K PT syncs");
1754 STAM_REG(pVM, &pPGM->StatRZSyncPT4M, STAMTYPE_COUNTER, "/PGM/RZ/SyncPT/4M", STAMUNIT_OCCURENCES, "Nr of 4M PT syncs");
1755 STAM_REG(pVM, &pPGM->StatRZSyncPagePDNAs, STAMTYPE_COUNTER, "/PGM/RZ/SyncPagePDNAs", STAMUNIT_OCCURENCES, "The number of time we've marked a PD not present from SyncPage to virtualize the accessed bit.");
1756 STAM_REG(pVM, &pPGM->StatRZSyncPagePDOutOfSync, STAMTYPE_COUNTER, "/PGM/RZ/SyncPagePDOutOfSync", STAMUNIT_OCCURENCES, "The number of time we've encountered an out-of-sync PD in SyncPage.");
1757 STAM_REG(pVM, &pPGM->StatRZAccessedPage, STAMTYPE_COUNTER, "/PGM/RZ/AccessedPage", STAMUNIT_OCCURENCES, "The number of pages marked not present for accessed bit emulation.");
1758 STAM_REG(pVM, &pPGM->StatRZDirtyBitTracking, STAMTYPE_PROFILE, "/PGM/RZ/DirtyPage", STAMUNIT_TICKS_PER_CALL, "Profiling the dirty bit tracking in CheckPageFault().");
1759 STAM_REG(pVM, &pPGM->StatRZDirtyPage, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/Mark", STAMUNIT_OCCURENCES, "The number of pages marked read-only for dirty bit tracking.");
1760 STAM_REG(pVM, &pPGM->StatRZDirtyPageBig, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/MarkBig", STAMUNIT_OCCURENCES, "The number of 4MB pages marked read-only for dirty bit tracking.");
1761 STAM_REG(pVM, &pPGM->StatRZDirtyPageSkipped, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/Skipped", STAMUNIT_OCCURENCES, "The number of pages already dirty or readonly.");
1762 STAM_REG(pVM, &pPGM->StatRZDirtyPageTrap, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/Trap", STAMUNIT_OCCURENCES, "The number of traps generated for dirty bit tracking.");
1763 STAM_REG(pVM, &pPGM->StatRZDirtiedPage, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/SetDirty", STAMUNIT_OCCURENCES, "The number of pages marked dirty because of write accesses.");
1764 STAM_REG(pVM, &pPGM->StatRZDirtyTrackRealPF, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/RealPF", STAMUNIT_OCCURENCES, "The number of real pages faults during dirty bit tracking.");
1765 STAM_REG(pVM, &pPGM->StatRZPageAlreadyDirty, STAMTYPE_COUNTER, "/PGM/RZ/DirtyPage/AlreadySet", STAMUNIT_OCCURENCES, "The number of pages already marked dirty because of write accesses.");
1766 STAM_REG(pVM, &pPGM->StatRZInvalidatePage, STAMTYPE_PROFILE, "/PGM/RZ/InvalidatePage", STAMUNIT_TICKS_PER_CALL, "PGMInvalidatePage() profiling.");
1767 STAM_REG(pVM, &pPGM->StatRZInvalidatePage4KBPages, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/4KBPages", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a 4KB page.");
1768 STAM_REG(pVM, &pPGM->StatRZInvalidatePage4MBPages, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/4MBPages", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a 4MB page.");
1769 STAM_REG(pVM, &pPGM->StatRZInvalidatePage4MBPagesSkip, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/4MBPagesSkip",STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() skipped a 4MB page.");
1770 STAM_REG(pVM, &pPGM->StatRZInvalidatePagePDMappings, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/PDMappings", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a page directory containing mappings (no conflict).");
1771 STAM_REG(pVM, &pPGM->StatRZInvalidatePagePDNAs, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/PDNAs", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a not accessed page directory.");
1772 STAM_REG(pVM, &pPGM->StatRZInvalidatePagePDNPs, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/PDNPs", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a not present page directory.");
1773 STAM_REG(pVM, &pPGM->StatRZInvalidatePagePDOutOfSync, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/PDOutOfSync", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for an out of sync page directory.");
1774 STAM_REG(pVM, &pPGM->StatRZInvalidatePageSkipped, STAMTYPE_COUNTER, "/PGM/RZ/InvalidatePage/Skipped", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was skipped due to not present shw or pending pending SyncCR3.");
1775 STAM_REG(pVM, &pPGM->StatRZVirtHandlerSearchByPhys, STAMTYPE_PROFILE, "/PGM/RZ/VirtHandlerSearchByPhys", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmHandlerVirtualFindByPhysAddr.");
1776 STAM_REG(pVM, &pPGM->StatRZPhysHandlerReset, STAMTYPE_COUNTER, "/PGM/RZ/PhysHandlerReset", STAMUNIT_OCCURENCES, "The number of times PGMHandlerPhysicalReset is called.");
1777 STAM_REG(pVM, &pPGM->StatRZPageOutOfSyncSupervisor, STAMTYPE_COUNTER, "/PGM/RZ/OutOfSync/SuperVisor", STAMUNIT_OCCURENCES, "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
1778 STAM_REG(pVM, &pPGM->StatRZPageOutOfSyncUser, STAMTYPE_COUNTER, "/PGM/RZ/OutOfSync/User", STAMUNIT_OCCURENCES, "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
1779 STAM_REG(pVM, &pPGM->StatRZPrefetch, STAMTYPE_PROFILE, "/PGM/RZ/Prefetch", STAMUNIT_TICKS_PER_CALL, "PGMPrefetchPage profiling.");
1780 STAM_REG(pVM, &pPGM->StatRZChunkR3MapTlbHits, STAMTYPE_COUNTER, "/PGM/ChunkR3Map/TlbHitsRZ", STAMUNIT_OCCURENCES, "TLB hits.");
1781 STAM_REG(pVM, &pPGM->StatRZChunkR3MapTlbMisses, STAMTYPE_COUNTER, "/PGM/ChunkR3Map/TlbMissesRZ", STAMUNIT_OCCURENCES, "TLB misses.");
1782 STAM_REG(pVM, &pPGM->StatRZPageMapTlbHits, STAMTYPE_COUNTER, "/PGM/RZ/Page/MapTlbHits", STAMUNIT_OCCURENCES, "TLB hits.");
1783 STAM_REG(pVM, &pPGM->StatRZPageMapTlbMisses, STAMTYPE_COUNTER, "/PGM/RZ/Page/MapTlbMisses", STAMUNIT_OCCURENCES, "TLB misses.");
1784 STAM_REG(pVM, &pPGM->StatRZPageReplaceShared, STAMTYPE_COUNTER, "/PGM/RZ/Page/ReplacedShared", STAMUNIT_OCCURENCES, "Times a shared page was replaced.");
1785 STAM_REG(pVM, &pPGM->StatRZPageReplaceZero, STAMTYPE_COUNTER, "/PGM/RZ/Page/ReplacedZero", STAMUNIT_OCCURENCES, "Times the zero page was replaced.");
1786/// @todo STAM_REG(pVM, &pPGM->StatRZPageHandyAllocs, STAMTYPE_COUNTER, "/PGM/RZ/Page/HandyAllocs", STAMUNIT_OCCURENCES, "Number of times we've allocated more handy pages.");
1787 STAM_REG(pVM, &pPGM->StatRZFlushTLB, STAMTYPE_PROFILE, "/PGM/RZ/FlushTLB", STAMUNIT_OCCURENCES, "Profiling of the PGMFlushTLB() body.");
1788 STAM_REG(pVM, &pPGM->StatRZFlushTLBNewCR3, STAMTYPE_COUNTER, "/PGM/RZ/FlushTLB/NewCR3", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with a new CR3, non-global. (switch)");
1789 STAM_REG(pVM, &pPGM->StatRZFlushTLBNewCR3Global, STAMTYPE_COUNTER, "/PGM/RZ/FlushTLB/NewCR3Global", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with a new CR3, global. (switch)");
1790 STAM_REG(pVM, &pPGM->StatRZFlushTLBSameCR3, STAMTYPE_COUNTER, "/PGM/RZ/FlushTLB/SameCR3", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with the same CR3, non-global. (flush)");
1791 STAM_REG(pVM, &pPGM->StatRZFlushTLBSameCR3Global, STAMTYPE_COUNTER, "/PGM/RZ/FlushTLB/SameCR3Global", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with the same CR3, global. (flush)");
1792 STAM_REG(pVM, &pPGM->StatRZGstModifyPage, STAMTYPE_PROFILE, "/PGM/RZ/GstModifyPage", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMGstModifyPage() body.");
1793
1794 STAM_REG(pVM, &pPGM->StatR3SyncCR3, STAMTYPE_PROFILE, "/PGM/R3/SyncCR3", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() body.");
1795 STAM_REG(pVM, &pPGM->StatR3SyncCR3Handlers, STAMTYPE_PROFILE, "/PGM/R3/SyncCR3/Handlers", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() update handler section.");
1796 STAM_REG(pVM, &pPGM->StatR3SyncCR3HandlerVirtualUpdate, STAMTYPE_PROFILE, "/PGM/R3/SyncCR3/Handlers/VirtualUpdate", STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler updates.");
1797 STAM_REG(pVM, &pPGM->StatR3SyncCR3HandlerVirtualReset, STAMTYPE_PROFILE, "/PGM/R3/SyncCR3/Handlers/VirtualReset", STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler resets.");
1798 STAM_REG(pVM, &pPGM->StatR3SyncCR3Global, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/Global", STAMUNIT_OCCURENCES, "The number of global CR3 syncs.");
1799 STAM_REG(pVM, &pPGM->StatR3SyncCR3NotGlobal, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/NotGlobal", STAMUNIT_OCCURENCES, "The number of non-global CR3 syncs.");
1800 STAM_REG(pVM, &pPGM->StatR3SyncCR3DstCacheHit, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/DstChacheHit", STAMUNIT_OCCURENCES, "The number of times we got some kind of a cache hit.");
1801 STAM_REG(pVM, &pPGM->StatR3SyncCR3DstFreed, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/DstFreed", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry.");
1802 STAM_REG(pVM, &pPGM->StatR3SyncCR3DstFreedSrcNP, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/DstFreedSrcNP", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry for which the source entry was not present.");
1803 STAM_REG(pVM, &pPGM->StatR3SyncCR3DstNotPresent, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/DstNotPresent", STAMUNIT_OCCURENCES, "The number of times we've encountered a not present shadow entry for a present guest entry.");
1804 STAM_REG(pVM, &pPGM->StatR3SyncCR3DstSkippedGlobalPD, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/DstSkippedGlobalPD", STAMUNIT_OCCURENCES, "The number of times a global page directory wasn't flushed.");
1805 STAM_REG(pVM, &pPGM->StatR3SyncCR3DstSkippedGlobalPT, STAMTYPE_COUNTER, "/PGM/R3/SyncCR3/DstSkippedGlobalPT", STAMUNIT_OCCURENCES, "The number of times a page table with only global entries wasn't flushed.");
1806 STAM_REG(pVM, &pPGM->StatR3SyncPT, STAMTYPE_PROFILE, "/PGM/R3/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of the pfnSyncPT() body.");
1807 STAM_REG(pVM, &pPGM->StatR3SyncPTFailed, STAMTYPE_COUNTER, "/PGM/R3/SyncPT/Failed", STAMUNIT_OCCURENCES, "The number of times pfnSyncPT() failed.");
1808 STAM_REG(pVM, &pPGM->StatR3SyncPT4K, STAMTYPE_COUNTER, "/PGM/R3/SyncPT/4K", STAMUNIT_OCCURENCES, "Nr of 4K PT syncs");
1809 STAM_REG(pVM, &pPGM->StatR3SyncPT4M, STAMTYPE_COUNTER, "/PGM/R3/SyncPT/4M", STAMUNIT_OCCURENCES, "Nr of 4M PT syncs");
1810 STAM_REG(pVM, &pPGM->StatR3SyncPagePDNAs, STAMTYPE_COUNTER, "/PGM/R3/SyncPagePDNAs", STAMUNIT_OCCURENCES, "The number of time we've marked a PD not present from SyncPage to virtualize the accessed bit.");
1811 STAM_REG(pVM, &pPGM->StatR3SyncPagePDOutOfSync, STAMTYPE_COUNTER, "/PGM/R3/SyncPagePDOutOfSync", STAMUNIT_OCCURENCES, "The number of time we've encountered an out-of-sync PD in SyncPage.");
1812 STAM_REG(pVM, &pPGM->StatR3AccessedPage, STAMTYPE_COUNTER, "/PGM/R3/AccessedPage", STAMUNIT_OCCURENCES, "The number of pages marked not present for accessed bit emulation.");
1813 STAM_REG(pVM, &pPGM->StatR3DirtyBitTracking, STAMTYPE_PROFILE, "/PGM/R3/DirtyPage", STAMUNIT_TICKS_PER_CALL, "Profiling the dirty bit tracking in CheckPageFault().");
1814 STAM_REG(pVM, &pPGM->StatR3DirtyPage, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/Mark", STAMUNIT_OCCURENCES, "The number of pages marked read-only for dirty bit tracking.");
1815 STAM_REG(pVM, &pPGM->StatR3DirtyPageBig, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/MarkBig", STAMUNIT_OCCURENCES, "The number of 4MB pages marked read-only for dirty bit tracking.");
1816 STAM_REG(pVM, &pPGM->StatR3DirtyPageSkipped, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/Skipped", STAMUNIT_OCCURENCES, "The number of pages already dirty or readonly.");
1817 STAM_REG(pVM, &pPGM->StatR3DirtyPageTrap, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/Trap", STAMUNIT_OCCURENCES, "The number of traps generated for dirty bit tracking.");
1818 STAM_REG(pVM, &pPGM->StatR3DirtiedPage, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/SetDirty", STAMUNIT_OCCURENCES, "The number of pages marked dirty because of write accesses.");
1819 STAM_REG(pVM, &pPGM->StatR3DirtyTrackRealPF, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/RealPF", STAMUNIT_OCCURENCES, "The number of real pages faults during dirty bit tracking.");
1820 STAM_REG(pVM, &pPGM->StatR3PageAlreadyDirty, STAMTYPE_COUNTER, "/PGM/R3/DirtyPage/AlreadySet", STAMUNIT_OCCURENCES, "The number of pages already marked dirty because of write accesses.");
1821 STAM_REG(pVM, &pPGM->StatR3InvalidatePage, STAMTYPE_PROFILE, "/PGM/R3/InvalidatePage", STAMUNIT_TICKS_PER_CALL, "PGMInvalidatePage() profiling.");
1822 STAM_REG(pVM, &pPGM->StatR3InvalidatePage4KBPages, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/4KBPages", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a 4KB page.");
1823 STAM_REG(pVM, &pPGM->StatR3InvalidatePage4MBPages, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/4MBPages", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a 4MB page.");
1824 STAM_REG(pVM, &pPGM->StatR3InvalidatePage4MBPagesSkip, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/4MBPagesSkip",STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() skipped a 4MB page.");
1825 STAM_REG(pVM, &pPGM->StatR3InvalidatePagePDMappings, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/PDMappings", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a page directory containing mappings (no conflict).");
1826 STAM_REG(pVM, &pPGM->StatR3InvalidatePagePDNAs, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/PDNAs", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a not accessed page directory.");
1827 STAM_REG(pVM, &pPGM->StatR3InvalidatePagePDNPs, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/PDNPs", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for a not present page directory.");
1828 STAM_REG(pVM, &pPGM->StatR3InvalidatePagePDOutOfSync, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/PDOutOfSync", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was called for an out of sync page directory.");
1829 STAM_REG(pVM, &pPGM->StatR3InvalidatePageSkipped, STAMTYPE_COUNTER, "/PGM/R3/InvalidatePage/Skipped", STAMUNIT_OCCURENCES, "The number of times PGMInvalidatePage() was skipped due to not present shw or pending pending SyncCR3.");
1830 STAM_REG(pVM, &pPGM->StatR3VirtHandlerSearchByPhys, STAMTYPE_PROFILE, "/PGM/R3/VirtHandlerSearchByPhys", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmHandlerVirtualFindByPhysAddr.");
1831 STAM_REG(pVM, &pPGM->StatR3PhysHandlerReset, STAMTYPE_COUNTER, "/PGM/R3/PhysHandlerReset", STAMUNIT_OCCURENCES, "The number of times PGMHandlerPhysicalReset is called.");
1832 STAM_REG(pVM, &pPGM->StatR3PageOutOfSyncSupervisor, STAMTYPE_COUNTER, "/PGM/R3/OutOfSync/SuperVisor", STAMUNIT_OCCURENCES, "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
1833 STAM_REG(pVM, &pPGM->StatR3PageOutOfSyncUser, STAMTYPE_COUNTER, "/PGM/R3/OutOfSync/User", STAMUNIT_OCCURENCES, "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
1834 STAM_REG(pVM, &pPGM->StatR3Prefetch, STAMTYPE_PROFILE, "/PGM/R3/Prefetch", STAMUNIT_TICKS_PER_CALL, "PGMPrefetchPage profiling.");
1835 STAM_REG(pVM, &pPGM->StatR3ChunkR3MapTlbHits, STAMTYPE_COUNTER, "/PGM/ChunkR3Map/TlbHitsR3", STAMUNIT_OCCURENCES, "TLB hits.");
1836 STAM_REG(pVM, &pPGM->StatR3ChunkR3MapTlbMisses, STAMTYPE_COUNTER, "/PGM/ChunkR3Map/TlbMissesR3", STAMUNIT_OCCURENCES, "TLB misses.");
1837 STAM_REG(pVM, &pPGM->StatR3PageMapTlbHits, STAMTYPE_COUNTER, "/PGM/R3/Page/MapTlbHits", STAMUNIT_OCCURENCES, "TLB hits.");
1838 STAM_REG(pVM, &pPGM->StatR3PageMapTlbMisses, STAMTYPE_COUNTER, "/PGM/R3/Page/MapTlbMisses", STAMUNIT_OCCURENCES, "TLB misses.");
1839 STAM_REG(pVM, &pPGM->StatR3PageReplaceShared, STAMTYPE_COUNTER, "/PGM/R3/Page/ReplacedShared", STAMUNIT_OCCURENCES, "Times a shared page was replaced.");
1840 STAM_REG(pVM, &pPGM->StatR3PageReplaceZero, STAMTYPE_COUNTER, "/PGM/R3/Page/ReplacedZero", STAMUNIT_OCCURENCES, "Times the zero page was replaced.");
1841/// @todo STAM_REG(pVM, &pPGM->StatR3PageHandyAllocs, STAMTYPE_COUNTER, "/PGM/R3/Page/HandyAllocs", STAMUNIT_OCCURENCES, "Number of times we've allocated more handy pages.");
1842 STAM_REG(pVM, &pPGM->StatR3FlushTLB, STAMTYPE_PROFILE, "/PGM/R3/FlushTLB", STAMUNIT_OCCURENCES, "Profiling of the PGMFlushTLB() body.");
1843 STAM_REG(pVM, &pPGM->StatR3FlushTLBNewCR3, STAMTYPE_COUNTER, "/PGM/R3/FlushTLB/NewCR3", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with a new CR3, non-global. (switch)");
1844 STAM_REG(pVM, &pPGM->StatR3FlushTLBNewCR3Global, STAMTYPE_COUNTER, "/PGM/R3/FlushTLB/NewCR3Global", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with a new CR3, global. (switch)");
1845 STAM_REG(pVM, &pPGM->StatR3FlushTLBSameCR3, STAMTYPE_COUNTER, "/PGM/R3/FlushTLB/SameCR3", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with the same CR3, non-global. (flush)");
1846 STAM_REG(pVM, &pPGM->StatR3FlushTLBSameCR3Global, STAMTYPE_COUNTER, "/PGM/R3/FlushTLB/SameCR3Global", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with the same CR3, global. (flush)");
1847 STAM_REG(pVM, &pPGM->StatR3GstModifyPage, STAMTYPE_PROFILE, "/PGM/R3/GstModifyPage", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMGstModifyPage() body.");
1848
1849}
1850#endif /* VBOX_WITH_STATISTICS */
1851
1852
1853/**
1854 * Init the PGM bits that rely on VMMR0 and MM to be fully initialized.
1855 *
1856 * The dynamic mapping area will also be allocated and initialized at this
1857 * time. We could allocate it during PGMR3Init of course, but the mapping
1858 * wouldn't be allocated at that time preventing us from setting up the
1859 * page table entries with the dummy page.
1860 *
1861 * @returns VBox status code.
1862 * @param pVM VM handle.
1863 */
1864VMMR3DECL(int) PGMR3InitDynMap(PVM pVM)
1865{
1866 RTGCPTR GCPtr;
1867 int rc;
1868
1869#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1870 /*
1871 * Reserve space for mapping the paging pages into guest context.
1872 */
1873 rc = MMR3HyperReserve(pVM, PAGE_SIZE * (2 + RT_ELEMENTS(pVM->pgm.s.apShwPaePDsR3) + 1 + 2 + 2), "Paging", &GCPtr);
1874 AssertRCReturn(rc, rc);
1875 pVM->pgm.s.pShw32BitPdRC = GCPtr;
1876 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
1877#endif
1878
1879 /*
1880 * Reserve space for the dynamic mappings.
1881 */
1882 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping", &GCPtr);
1883 if (RT_SUCCESS(rc))
1884 pVM->pgm.s.pbDynPageMapBaseGC = GCPtr;
1885
1886 if ( RT_SUCCESS(rc)
1887 && (pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_PAE_SHIFT) != ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_PAE_SHIFT))
1888 {
1889 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping not crossing", &GCPtr);
1890 if (RT_SUCCESS(rc))
1891 pVM->pgm.s.pbDynPageMapBaseGC = GCPtr;
1892 }
1893 if (RT_SUCCESS(rc))
1894 {
1895 AssertRelease((pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_PAE_SHIFT) == ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_PAE_SHIFT));
1896 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
1897 }
1898 return rc;
1899}
1900
1901
1902/**
1903 * Ring-3 init finalizing.
1904 *
1905 * @returns VBox status code.
1906 * @param pVM The VM handle.
1907 */
1908VMMR3DECL(int) PGMR3InitFinalize(PVM pVM)
1909{
1910 int rc;
1911
1912#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
1913 /*
1914 * Map the paging pages into the guest context.
1915 */
1916 RTGCPTR GCPtr = pVM->pgm.s.pShw32BitPdRC;
1917 AssertReleaseReturn(GCPtr, VERR_INTERNAL_ERROR);
1918
1919 rc = PGMMap(pVM, GCPtr, pVM->pgm.s.HCPhysShw32BitPD, PAGE_SIZE, 0);
1920 AssertRCReturn(rc, rc);
1921 pVM->pgm.s.pShw32BitPdRC = GCPtr;
1922 GCPtr += PAGE_SIZE;
1923 GCPtr += PAGE_SIZE; /* reserved page */
1924
1925 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apShwPaePDsR3); i++)
1926 {
1927 rc = PGMMap(pVM, GCPtr, pVM->pgm.s.aHCPhysPaePDs[i], PAGE_SIZE, 0);
1928 AssertRCReturn(rc, rc);
1929 pVM->pgm.s.apShwPaePDsRC[i] = GCPtr;
1930 GCPtr += PAGE_SIZE;
1931 }
1932 /* A bit of paranoia is justified. */
1933 AssertRelease(pVM->pgm.s.apShwPaePDsRC[0] + PAGE_SIZE == pVM->pgm.s.apShwPaePDsRC[1]);
1934 AssertRelease(pVM->pgm.s.apShwPaePDsRC[1] + PAGE_SIZE == pVM->pgm.s.apShwPaePDsRC[2]);
1935 AssertRelease(pVM->pgm.s.apShwPaePDsRC[2] + PAGE_SIZE == pVM->pgm.s.apShwPaePDsRC[3]);
1936 GCPtr += PAGE_SIZE; /* reserved page */
1937
1938 rc = PGMMap(pVM, GCPtr, pVM->pgm.s.HCPhysShwPaePdpt, PAGE_SIZE, 0);
1939 AssertRCReturn(rc, rc);
1940 pVM->pgm.s.pShwPaePdptRC = GCPtr;
1941 GCPtr += PAGE_SIZE;
1942 GCPtr += PAGE_SIZE; /* reserved page */
1943#endif
1944
1945 /*
1946 * Reserve space for the dynamic mappings.
1947 * Initialize the dynamic mapping pages with dummy pages to simply the cache.
1948 */
1949 /* get the pointer to the page table entries. */
1950 PPGMMAPPING pMapping = pgmGetMapping(pVM, pVM->pgm.s.pbDynPageMapBaseGC);
1951 AssertRelease(pMapping);
1952 const uintptr_t off = pVM->pgm.s.pbDynPageMapBaseGC - pMapping->GCPtr;
1953 const unsigned iPT = off >> X86_PD_SHIFT;
1954 const unsigned iPG = (off >> X86_PT_SHIFT) & X86_PT_MASK;
1955 pVM->pgm.s.paDynPageMap32BitPTEsGC = pMapping->aPTs[iPT].pPTRC + iPG * sizeof(pMapping->aPTs[0].pPTR3->a[0]);
1956 pVM->pgm.s.paDynPageMapPaePTEsGC = pMapping->aPTs[iPT].paPaePTsRC + iPG * sizeof(pMapping->aPTs[0].paPaePTsR3->a[0]);
1957
1958 /* init cache */
1959 RTHCPHYS HCPhysDummy = MMR3PageDummyHCPhys(pVM);
1960 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.aHCPhysDynPageMapCache); i++)
1961 pVM->pgm.s.aHCPhysDynPageMapCache[i] = HCPhysDummy;
1962
1963 for (unsigned i = 0; i < MM_HYPER_DYNAMIC_SIZE; i += PAGE_SIZE)
1964 {
1965 rc = PGMMap(pVM, pVM->pgm.s.pbDynPageMapBaseGC + i, HCPhysDummy, PAGE_SIZE, 0);
1966 AssertRCReturn(rc, rc);
1967 }
1968
1969 /*
1970 * Note that AMD uses all the 8 reserved bits for the address (so 40 bits in total);
1971 * Intel only goes up to 36 bits, so we stick to 36 as well.
1972 */
1973 /** @todo How to test for the 40 bits support? Long mode seems to be the test criterium. */
1974 uint32_t u32Dummy, u32Features;
1975 CPUMGetGuestCpuId(pVM, 1, &u32Dummy, &u32Dummy, &u32Dummy, &u32Features);
1976
1977 if (u32Features & X86_CPUID_FEATURE_EDX_PSE36)
1978 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(36) - 1;
1979 else
1980 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(32) - 1;
1981
1982 LogRel(("PGMR3InitFinalize: 4 MB PSE mask %RGp\n", pVM->pgm.s.GCPhys4MBPSEMask));
1983 return rc;
1984}
1985
1986
1987/**
1988 * Applies relocations to data and code managed by this component.
1989 *
1990 * This function will be called at init and whenever the VMM need to relocate it
1991 * self inside the GC.
1992 *
1993 * @param pVM The VM.
1994 * @param offDelta Relocation delta relative to old location.
1995 */
1996VMMR3DECL(void) PGMR3Relocate(PVM pVM, RTGCINTPTR offDelta)
1997{
1998 LogFlow(("PGMR3Relocate\n"));
1999
2000 /*
2001 * Paging stuff.
2002 */
2003 pVM->pgm.s.GCPtrCR3Mapping += offDelta;
2004 /** @todo move this into shadow and guest specific relocation functions. */
2005#ifdef VBOX_WITH_PGMPOOL_PAGING_ONLY
2006 AssertMsg(pVM->pgm.s.pShwNestedRootR3, ("Init order, no relocation before paging is initialized!\n"));
2007#else
2008 AssertMsg(pVM->pgm.s.pShw32BitPdR3, ("Init order, no relocation before paging is initialized!\n"));
2009 pVM->pgm.s.pShw32BitPdRC += offDelta;
2010#endif
2011 pVM->pgm.s.pGst32BitPdRC += offDelta;
2012 for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apGstPaePDsRC); i++)
2013 {
2014#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
2015 AssertCompile(RT_ELEMENTS(pVM->pgm.s.apShwPaePDsRC) == RT_ELEMENTS(pVM->pgm.s.apGstPaePDsRC));
2016 pVM->pgm.s.apShwPaePDsRC[i] += offDelta;
2017#endif
2018 pVM->pgm.s.apGstPaePDsRC[i] += offDelta;
2019 }
2020 pVM->pgm.s.pGstPaePdptRC += offDelta;
2021#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
2022 pVM->pgm.s.pShwPaePdptRC += offDelta;
2023#endif
2024
2025#ifdef VBOX_WITH_PGMPOOL_PAGING_ONLY
2026 pVM->pgm.s.pShwPageCR3RC += offDelta;
2027#endif
2028
2029 pgmR3ModeDataInit(pVM, true /* resolve GC/R0 symbols */);
2030 pgmR3ModeDataSwitch(pVM, pVM->pgm.s.enmShadowMode, pVM->pgm.s.enmGuestMode);
2031
2032 PGM_SHW_PFN(Relocate, pVM)(pVM, offDelta);
2033 PGM_GST_PFN(Relocate, pVM)(pVM, offDelta);
2034 PGM_BTH_PFN(Relocate, pVM)(pVM, offDelta);
2035
2036 /*
2037 * Trees.
2038 */
2039 pVM->pgm.s.pTreesRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pTreesR3);
2040
2041 /*
2042 * Ram ranges.
2043 */
2044 if (pVM->pgm.s.pRamRangesR3)
2045 {
2046 pVM->pgm.s.pRamRangesRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pRamRangesR3);
2047 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesR3; pCur->pNextR3; pCur = pCur->pNextR3)
2048 pCur->pNextRC = MMHyperR3ToRC(pVM, pCur->pNextR3);
2049 }
2050
2051 /*
2052 * Update the two page directories with all page table mappings.
2053 * (One or more of them have changed, that's why we're here.)
2054 */
2055 pVM->pgm.s.pMappingsRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pMappingsR3);
2056 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur->pNextR3; pCur = pCur->pNextR3)
2057 pCur->pNextRC = MMHyperR3ToRC(pVM, pCur->pNextR3);
2058
2059 /* Relocate GC addresses of Page Tables. */
2060 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur; pCur = pCur->pNextR3)
2061 {
2062 for (RTHCUINT i = 0; i < pCur->cPTs; i++)
2063 {
2064 pCur->aPTs[i].pPTRC = MMHyperR3ToRC(pVM, pCur->aPTs[i].pPTR3);
2065 pCur->aPTs[i].paPaePTsRC = MMHyperR3ToRC(pVM, pCur->aPTs[i].paPaePTsR3);
2066 }
2067 }
2068
2069 /*
2070 * Dynamic page mapping area.
2071 */
2072 pVM->pgm.s.paDynPageMap32BitPTEsGC += offDelta;
2073 pVM->pgm.s.paDynPageMapPaePTEsGC += offDelta;
2074 pVM->pgm.s.pbDynPageMapBaseGC += offDelta;
2075
2076 /*
2077 * The Zero page.
2078 */
2079 pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
2080#ifdef VBOX_WITH_2X_4GB_ADDR_SPACE
2081 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTR0PTR || !VMMIsHwVirtExtForced(pVM));
2082#else
2083 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTR0PTR);
2084#endif
2085
2086 /*
2087 * Physical and virtual handlers.
2088 */
2089 RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, true, pgmR3RelocatePhysHandler, &offDelta);
2090 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesR3->VirtHandlers, true, pgmR3RelocateVirtHandler, &offDelta);
2091 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesR3->HyperVirtHandlers, true, pgmR3RelocateHyperVirtHandler, &offDelta);
2092
2093 /*
2094 * The page pool.
2095 */
2096 pgmR3PoolRelocate(pVM);
2097}
2098
2099
2100/**
2101 * Callback function for relocating a physical access handler.
2102 *
2103 * @returns 0 (continue enum)
2104 * @param pNode Pointer to a PGMPHYSHANDLER node.
2105 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
2106 * not certain the delta will fit in a void pointer for all possible configs.
2107 */
2108static DECLCALLBACK(int) pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser)
2109{
2110 PPGMPHYSHANDLER pHandler = (PPGMPHYSHANDLER)pNode;
2111 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
2112 if (pHandler->pfnHandlerRC)
2113 pHandler->pfnHandlerRC += offDelta;
2114 if (pHandler->pvUserRC >= 0x10000)
2115 pHandler->pvUserRC += offDelta;
2116 return 0;
2117}
2118
2119
2120/**
2121 * Callback function for relocating a virtual access handler.
2122 *
2123 * @returns 0 (continue enum)
2124 * @param pNode Pointer to a PGMVIRTHANDLER node.
2125 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
2126 * not certain the delta will fit in a void pointer for all possible configs.
2127 */
2128static DECLCALLBACK(int) pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
2129{
2130 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
2131 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
2132 Assert( pHandler->enmType == PGMVIRTHANDLERTYPE_ALL
2133 || pHandler->enmType == PGMVIRTHANDLERTYPE_WRITE);
2134 Assert(pHandler->pfnHandlerRC);
2135 pHandler->pfnHandlerRC += offDelta;
2136 return 0;
2137}
2138
2139
2140/**
2141 * Callback function for relocating a virtual access handler for the hypervisor mapping.
2142 *
2143 * @returns 0 (continue enum)
2144 * @param pNode Pointer to a PGMVIRTHANDLER node.
2145 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
2146 * not certain the delta will fit in a void pointer for all possible configs.
2147 */
2148static DECLCALLBACK(int) pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
2149{
2150 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
2151 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
2152 Assert(pHandler->enmType == PGMVIRTHANDLERTYPE_HYPERVISOR);
2153 Assert(pHandler->pfnHandlerRC);
2154 pHandler->pfnHandlerRC += offDelta;
2155 return 0;
2156}
2157
2158
2159/**
2160 * The VM is being reset.
2161 *
2162 * For the PGM component this means that any PD write monitors
2163 * needs to be removed.
2164 *
2165 * @param pVM VM handle.
2166 */
2167VMMR3DECL(void) PGMR3Reset(PVM pVM)
2168{
2169 LogFlow(("PGMR3Reset:\n"));
2170 VM_ASSERT_EMT(pVM);
2171
2172 pgmLock(pVM);
2173
2174 /*
2175 * Unfix any fixed mappings and disable CR3 monitoring.
2176 */
2177 pVM->pgm.s.fMappingsFixed = false;
2178 pVM->pgm.s.GCPtrMappingFixed = 0;
2179 pVM->pgm.s.cbMappingFixed = 0;
2180
2181 /* Exit the guest paging mode before the pgm pool gets reset.
2182 * Important to clean up the amd64 case.
2183 */
2184 int rc = PGM_GST_PFN(Exit, pVM)(pVM);
2185 AssertRC(rc);
2186#ifdef DEBUG
2187 DBGFR3InfoLog(pVM, "mappings", NULL);
2188 DBGFR3InfoLog(pVM, "handlers", "all nostat");
2189#endif
2190
2191 /*
2192 * Reset the shadow page pool.
2193 */
2194 pgmR3PoolReset(pVM);
2195
2196 /*
2197 * Re-init other members.
2198 */
2199 pVM->pgm.s.fA20Enabled = true;
2200
2201 /*
2202 * Clear the FFs PGM owns.
2203 */
2204 VM_FF_CLEAR(pVM, VM_FF_PGM_SYNC_CR3);
2205 VM_FF_CLEAR(pVM, VM_FF_PGM_SYNC_CR3_NON_GLOBAL);
2206
2207 /*
2208 * Reset (zero) RAM pages.
2209 */
2210 rc = pgmR3PhysRamReset(pVM);
2211 if (RT_SUCCESS(rc))
2212 {
2213#ifdef VBOX_WITH_NEW_PHYS_CODE
2214 /*
2215 * Reset (zero) shadow ROM pages.
2216 */
2217 rc = pgmR3PhysRomReset(pVM);
2218#endif
2219 if (RT_SUCCESS(rc))
2220 {
2221 /*
2222 * Switch mode back to real mode.
2223 */
2224 rc = PGMR3ChangeMode(pVM, PGMMODE_REAL);
2225 STAM_REL_COUNTER_RESET(&pVM->pgm.s.cGuestModeChanges);
2226 }
2227 }
2228
2229 pgmUnlock(pVM);
2230 //return rc;
2231 AssertReleaseRC(rc);
2232}
2233
2234
2235#ifdef VBOX_STRICT
2236/**
2237 * VM state change callback for clearing fNoMorePhysWrites after
2238 * a snapshot has been created.
2239 */
2240static DECLCALLBACK(void) pgmR3ResetNoMorePhysWritesFlag(PVM pVM, VMSTATE enmState, VMSTATE enmOldState, void *pvUser)
2241{
2242 if (enmState == VMSTATE_RUNNING)
2243 pVM->pgm.s.fNoMorePhysWrites = false;
2244}
2245#endif
2246
2247
2248/**
2249 * Terminates the PGM.
2250 *
2251 * @returns VBox status code.
2252 * @param pVM Pointer to VM structure.
2253 */
2254VMMR3DECL(int) PGMR3Term(PVM pVM)
2255{
2256 return PDMR3CritSectDelete(&pVM->pgm.s.CritSect);
2257}
2258
2259
2260/**
2261 * Terminates the per-VCPU PGM.
2262 *
2263 * Termination means cleaning up and freeing all resources,
2264 * the VM it self is at this point powered off or suspended.
2265 *
2266 * @returns VBox status code.
2267 * @param pVM The VM to operate on.
2268 */
2269VMMR3DECL(int) PGMR3TermCPU(PVM pVM)
2270{
2271 return 0;
2272}
2273
2274
2275/**
2276 * Execute state save operation.
2277 *
2278 * @returns VBox status code.
2279 * @param pVM VM Handle.
2280 * @param pSSM SSM operation handle.
2281 */
2282static DECLCALLBACK(int) pgmR3Save(PVM pVM, PSSMHANDLE pSSM)
2283{
2284 PPGM pPGM = &pVM->pgm.s;
2285
2286 /* No more writes to physical memory after this point! */
2287 pVM->pgm.s.fNoMorePhysWrites = true;
2288
2289 /*
2290 * Save basic data (required / unaffected by relocation).
2291 */
2292#if 1
2293 SSMR3PutBool(pSSM, pPGM->fMappingsFixed);
2294#else
2295 SSMR3PutUInt(pSSM, pPGM->fMappingsFixed);
2296#endif
2297 SSMR3PutGCPtr(pSSM, pPGM->GCPtrMappingFixed);
2298 SSMR3PutU32(pSSM, pPGM->cbMappingFixed);
2299 SSMR3PutUInt(pSSM, pPGM->cbRamSize);
2300 SSMR3PutGCPhys(pSSM, pPGM->GCPhysA20Mask);
2301 SSMR3PutUInt(pSSM, pPGM->fA20Enabled);
2302 SSMR3PutUInt(pSSM, pPGM->fSyncFlags);
2303 SSMR3PutUInt(pSSM, pPGM->enmGuestMode);
2304 SSMR3PutU32(pSSM, ~0); /* Separator. */
2305
2306 /*
2307 * The guest mappings.
2308 */
2309 uint32_t i = 0;
2310 for (PPGMMAPPING pMapping = pPGM->pMappingsR3; pMapping; pMapping = pMapping->pNextR3, i++)
2311 {
2312 SSMR3PutU32(pSSM, i);
2313 SSMR3PutStrZ(pSSM, pMapping->pszDesc); /* This is the best unique id we have... */
2314 SSMR3PutGCPtr(pSSM, pMapping->GCPtr);
2315 SSMR3PutGCUIntPtr(pSSM, pMapping->cPTs);
2316 /* flags are done by the mapping owners! */
2317 }
2318 SSMR3PutU32(pSSM, ~0); /* terminator. */
2319
2320 /*
2321 * Ram range flags and bits.
2322 */
2323 i = 0;
2324 for (PPGMRAMRANGE pRam = pPGM->pRamRangesR3; pRam; pRam = pRam->pNextR3, i++)
2325 {
2326 /** @todo MMIO ranges may move (PCI reconfig), we currently assume they don't. */
2327
2328 SSMR3PutU32(pSSM, i);
2329 SSMR3PutGCPhys(pSSM, pRam->GCPhys);
2330 SSMR3PutGCPhys(pSSM, pRam->GCPhysLast);
2331 SSMR3PutGCPhys(pSSM, pRam->cb);
2332 SSMR3PutU8(pSSM, !!pRam->pvR3); /* boolean indicating memory or not. */
2333
2334 /* Flags. */
2335 const unsigned cPages = pRam->cb >> PAGE_SHIFT;
2336 for (unsigned iPage = 0; iPage < cPages; iPage++)
2337 SSMR3PutU16(pSSM, (uint16_t)(pRam->aPages[iPage].HCPhys & ~X86_PTE_PAE_PG_MASK)); /** @todo PAGE FLAGS */
2338
2339 /* any memory associated with the range. */
2340 if (pRam->fFlags & MM_RAM_FLAGS_DYNAMIC_ALLOC)
2341 {
2342 for (unsigned iChunk = 0; iChunk < (pRam->cb >> PGM_DYNAMIC_CHUNK_SHIFT); iChunk++)
2343 {
2344 if (pRam->paChunkR3Ptrs[iChunk])
2345 {
2346 SSMR3PutU8(pSSM, 1); /* chunk present */
2347 SSMR3PutMem(pSSM, (void *)pRam->paChunkR3Ptrs[iChunk], PGM_DYNAMIC_CHUNK_SIZE);
2348 }
2349 else
2350 SSMR3PutU8(pSSM, 0); /* no chunk present */
2351 }
2352 }
2353 else if (pRam->pvR3)
2354 {
2355 int rc = SSMR3PutMem(pSSM, pRam->pvR3, pRam->cb);
2356 if (RT_FAILURE(rc))
2357 {
2358 Log(("pgmR3Save: SSMR3PutMem(, %p, %#x) -> %Rrc\n", pRam->pvR3, pRam->cb, rc));
2359 return rc;
2360 }
2361 }
2362 }
2363 return SSMR3PutU32(pSSM, ~0); /* terminator. */
2364}
2365
2366
2367/**
2368 * Execute state load operation.
2369 *
2370 * @returns VBox status code.
2371 * @param pVM VM Handle.
2372 * @param pSSM SSM operation handle.
2373 * @param u32Version Data layout version.
2374 */
2375static DECLCALLBACK(int) pgmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version)
2376{
2377 /*
2378 * Validate version.
2379 */
2380 if (u32Version != PGM_SAVED_STATE_VERSION)
2381 {
2382 AssertMsgFailed(("pgmR3Load: Invalid version u32Version=%d (current %d)!\n", u32Version, PGM_SAVED_STATE_VERSION));
2383 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
2384 }
2385
2386 /*
2387 * Call the reset function to make sure all the memory is cleared.
2388 */
2389 PGMR3Reset(pVM);
2390
2391 /*
2392 * Load basic data (required / unaffected by relocation).
2393 */
2394 PPGM pPGM = &pVM->pgm.s;
2395#if 1
2396 SSMR3GetBool(pSSM, &pPGM->fMappingsFixed);
2397#else
2398 uint32_t u;
2399 SSMR3GetU32(pSSM, &u);
2400 pPGM->fMappingsFixed = u;
2401#endif
2402 SSMR3GetGCPtr(pSSM, &pPGM->GCPtrMappingFixed);
2403 SSMR3GetU32(pSSM, &pPGM->cbMappingFixed);
2404
2405 RTUINT cbRamSize;
2406 int rc = SSMR3GetU32(pSSM, &cbRamSize);
2407 if (RT_FAILURE(rc))
2408 return rc;
2409 if (cbRamSize != pPGM->cbRamSize)
2410 return VERR_SSM_LOAD_MEMORY_SIZE_MISMATCH;
2411 SSMR3GetGCPhys(pSSM, &pPGM->GCPhysA20Mask);
2412 SSMR3GetUInt(pSSM, &pPGM->fA20Enabled);
2413 SSMR3GetUInt(pSSM, &pPGM->fSyncFlags);
2414 RTUINT uGuestMode;
2415 SSMR3GetUInt(pSSM, &uGuestMode);
2416 pPGM->enmGuestMode = (PGMMODE)uGuestMode;
2417
2418 /* check separator. */
2419 uint32_t u32Sep;
2420 SSMR3GetU32(pSSM, &u32Sep);
2421 if (RT_FAILURE(rc))
2422 return rc;
2423 if (u32Sep != (uint32_t)~0)
2424 {
2425 AssertMsgFailed(("u32Sep=%#x (first)\n", u32Sep));
2426 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2427 }
2428
2429 /*
2430 * The guest mappings.
2431 */
2432 uint32_t i = 0;
2433 for (;; i++)
2434 {
2435 /* Check the seqence number / separator. */
2436 rc = SSMR3GetU32(pSSM, &u32Sep);
2437 if (RT_FAILURE(rc))
2438 return rc;
2439 if (u32Sep == ~0U)
2440 break;
2441 if (u32Sep != i)
2442 {
2443 AssertMsgFailed(("u32Sep=%#x (last)\n", u32Sep));
2444 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2445 }
2446
2447 /* get the mapping details. */
2448 char szDesc[256];
2449 szDesc[0] = '\0';
2450 rc = SSMR3GetStrZ(pSSM, szDesc, sizeof(szDesc));
2451 if (RT_FAILURE(rc))
2452 return rc;
2453 RTGCPTR GCPtr;
2454 SSMR3GetGCPtr(pSSM, &GCPtr);
2455 RTGCPTR cPTs;
2456 rc = SSMR3GetGCUIntPtr(pSSM, &cPTs);
2457 if (RT_FAILURE(rc))
2458 return rc;
2459
2460 /* find matching range. */
2461 PPGMMAPPING pMapping;
2462 for (pMapping = pPGM->pMappingsR3; pMapping; pMapping = pMapping->pNextR3)
2463 if ( pMapping->cPTs == cPTs
2464 && !strcmp(pMapping->pszDesc, szDesc))
2465 break;
2466 if (!pMapping)
2467 {
2468 LogRel(("Couldn't find mapping: cPTs=%#x szDesc=%s (GCPtr=%RGv)\n",
2469 cPTs, szDesc, GCPtr));
2470 AssertFailed();
2471 return VERR_SSM_LOAD_CONFIG_MISMATCH;
2472 }
2473
2474 /* relocate it. */
2475 if (pMapping->GCPtr != GCPtr)
2476 {
2477 AssertMsg((GCPtr >> X86_PD_SHIFT << X86_PD_SHIFT) == GCPtr, ("GCPtr=%RGv\n", GCPtr));
2478 pgmR3MapRelocate(pVM, pMapping, pMapping->GCPtr, GCPtr);
2479 }
2480 else
2481 Log(("pgmR3Load: '%s' needed no relocation (%RGv)\n", szDesc, GCPtr));
2482 }
2483
2484 /*
2485 * Ram range flags and bits.
2486 */
2487 i = 0;
2488 for (PPGMRAMRANGE pRam = pPGM->pRamRangesR3; pRam; pRam = pRam->pNextR3, i++)
2489 {
2490 /** @todo MMIO ranges may move (PCI reconfig), we currently assume they don't. */
2491 /* Check the seqence number / separator. */
2492 rc = SSMR3GetU32(pSSM, &u32Sep);
2493 if (RT_FAILURE(rc))
2494 return rc;
2495 if (u32Sep == ~0U)
2496 break;
2497 if (u32Sep != i)
2498 {
2499 AssertMsgFailed(("u32Sep=%#x (last)\n", u32Sep));
2500 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2501 }
2502
2503 /* Get the range details. */
2504 RTGCPHYS GCPhys;
2505 SSMR3GetGCPhys(pSSM, &GCPhys);
2506 RTGCPHYS GCPhysLast;
2507 SSMR3GetGCPhys(pSSM, &GCPhysLast);
2508 RTGCPHYS cb;
2509 SSMR3GetGCPhys(pSSM, &cb);
2510 uint8_t fHaveBits;
2511 rc = SSMR3GetU8(pSSM, &fHaveBits);
2512 if (RT_FAILURE(rc))
2513 return rc;
2514 if (fHaveBits & ~1)
2515 {
2516 AssertMsgFailed(("u32Sep=%#x (last)\n", u32Sep));
2517 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2518 }
2519
2520 /* Match it up with the current range. */
2521 if ( GCPhys != pRam->GCPhys
2522 || GCPhysLast != pRam->GCPhysLast
2523 || cb != pRam->cb
2524 || fHaveBits != !!pRam->pvR3)
2525 {
2526 LogRel(("Ram range: %RGp-%RGp %RGp bytes %s\n"
2527 "State : %RGp-%RGp %RGp bytes %s\n",
2528 pRam->GCPhys, pRam->GCPhysLast, pRam->cb, pRam->pvR3 ? "bits" : "nobits",
2529 GCPhys, GCPhysLast, cb, fHaveBits ? "bits" : "nobits"));
2530 /*
2531 * If we're loading a state for debugging purpose, don't make a fuss if
2532 * the MMIO[2] and ROM stuff isn't 100% right, just skip the mismatches.
2533 */
2534 if ( SSMR3HandleGetAfter(pSSM) != SSMAFTER_DEBUG_IT
2535 || GCPhys < 8 * _1M)
2536 AssertFailedReturn(VERR_SSM_LOAD_CONFIG_MISMATCH);
2537
2538 RTGCPHYS cPages = ((GCPhysLast - GCPhys) + 1) >> PAGE_SHIFT;
2539 while (cPages-- > 0)
2540 {
2541 uint16_t u16Ignore;
2542 SSMR3GetU16(pSSM, &u16Ignore);
2543 }
2544 continue;
2545 }
2546
2547 /* Flags. */
2548 const unsigned cPages = pRam->cb >> PAGE_SHIFT;
2549 for (unsigned iPage = 0; iPage < cPages; iPage++)
2550 {
2551 uint16_t u16 = 0;
2552 SSMR3GetU16(pSSM, &u16);
2553 u16 &= PAGE_OFFSET_MASK & ~( RT_BIT(4) | RT_BIT(5) | RT_BIT(6)
2554 | RT_BIT(7) | RT_BIT(8) | RT_BIT(9) | RT_BIT(10) );
2555 // &= MM_RAM_FLAGS_DYNAMIC_ALLOC | MM_RAM_FLAGS_RESERVED | MM_RAM_FLAGS_ROM | MM_RAM_FLAGS_MMIO | MM_RAM_FLAGS_MMIO2
2556 pRam->aPages[iPage].HCPhys = PGM_PAGE_GET_HCPHYS(&pRam->aPages[iPage]) | (RTHCPHYS)u16; /** @todo PAGE FLAGS */
2557 }
2558
2559 /* any memory associated with the range. */
2560 if (pRam->fFlags & MM_RAM_FLAGS_DYNAMIC_ALLOC)
2561 {
2562 for (unsigned iChunk = 0; iChunk < (pRam->cb >> PGM_DYNAMIC_CHUNK_SHIFT); iChunk++)
2563 {
2564 uint8_t fValidChunk;
2565
2566 rc = SSMR3GetU8(pSSM, &fValidChunk);
2567 if (RT_FAILURE(rc))
2568 return rc;
2569 if (fValidChunk > 1)
2570 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2571
2572 if (fValidChunk)
2573 {
2574 if (!pRam->paChunkR3Ptrs[iChunk])
2575 {
2576 rc = pgmr3PhysGrowRange(pVM, pRam->GCPhys + iChunk * PGM_DYNAMIC_CHUNK_SIZE);
2577 if (RT_FAILURE(rc))
2578 return rc;
2579 }
2580 Assert(pRam->paChunkR3Ptrs[iChunk]);
2581
2582 SSMR3GetMem(pSSM, (void *)pRam->paChunkR3Ptrs[iChunk], PGM_DYNAMIC_CHUNK_SIZE);
2583 }
2584 /* else nothing to do */
2585 }
2586 }
2587 else if (pRam->pvR3)
2588 {
2589 int rc = SSMR3GetMem(pSSM, pRam->pvR3, pRam->cb);
2590 if (RT_FAILURE(rc))
2591 {
2592 Log(("pgmR3Save: SSMR3GetMem(, %p, %#x) -> %Rrc\n", pRam->pvR3, pRam->cb, rc));
2593 return rc;
2594 }
2595 }
2596 }
2597
2598 /*
2599 * We require a full resync now.
2600 */
2601 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3_NON_GLOBAL);
2602 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);
2603 pPGM->fSyncFlags |= PGM_SYNC_UPDATE_PAGE_BIT_VIRTUAL;
2604 pPGM->fPhysCacheFlushPending = true;
2605 pgmR3HandlerPhysicalUpdateAll(pVM);
2606
2607 /*
2608 * Change the paging mode.
2609 */
2610 rc = PGMR3ChangeMode(pVM, pPGM->enmGuestMode);
2611
2612 /* Restore pVM->pgm.s.GCPhysCR3. */
2613 Assert(pVM->pgm.s.GCPhysCR3 == NIL_RTGCPHYS);
2614 RTGCPHYS GCPhysCR3 = CPUMGetGuestCR3(pVM);
2615 if ( pVM->pgm.s.enmGuestMode == PGMMODE_PAE
2616 || pVM->pgm.s.enmGuestMode == PGMMODE_PAE_NX
2617 || pVM->pgm.s.enmGuestMode == PGMMODE_AMD64
2618 || pVM->pgm.s.enmGuestMode == PGMMODE_AMD64_NX)
2619 GCPhysCR3 = (GCPhysCR3 & X86_CR3_PAE_PAGE_MASK);
2620 else
2621 GCPhysCR3 = (GCPhysCR3 & X86_CR3_PAGE_MASK);
2622 pVM->pgm.s.GCPhysCR3 = GCPhysCR3;
2623
2624 return rc;
2625}
2626
2627
2628/**
2629 * Show paging mode.
2630 *
2631 * @param pVM VM Handle.
2632 * @param pHlp The info helpers.
2633 * @param pszArgs "all" (default), "guest", "shadow" or "host".
2634 */
2635static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2636{
2637 /* digest argument. */
2638 bool fGuest, fShadow, fHost;
2639 if (pszArgs)
2640 pszArgs = RTStrStripL(pszArgs);
2641 if (!pszArgs || !*pszArgs || strstr(pszArgs, "all"))
2642 fShadow = fHost = fGuest = true;
2643 else
2644 {
2645 fShadow = fHost = fGuest = false;
2646 if (strstr(pszArgs, "guest"))
2647 fGuest = true;
2648 if (strstr(pszArgs, "shadow"))
2649 fShadow = true;
2650 if (strstr(pszArgs, "host"))
2651 fHost = true;
2652 }
2653
2654 /* print info. */
2655 if (fGuest)
2656 pHlp->pfnPrintf(pHlp, "Guest paging mode: %s, changed %RU64 times, A20 %s\n",
2657 PGMGetModeName(pVM->pgm.s.enmGuestMode), pVM->pgm.s.cGuestModeChanges.c,
2658 pVM->pgm.s.fA20Enabled ? "enabled" : "disabled");
2659 if (fShadow)
2660 pHlp->pfnPrintf(pHlp, "Shadow paging mode: %s\n", PGMGetModeName(pVM->pgm.s.enmShadowMode));
2661 if (fHost)
2662 {
2663 const char *psz;
2664 switch (pVM->pgm.s.enmHostMode)
2665 {
2666 case SUPPAGINGMODE_INVALID: psz = "invalid"; break;
2667 case SUPPAGINGMODE_32_BIT: psz = "32-bit"; break;
2668 case SUPPAGINGMODE_32_BIT_GLOBAL: psz = "32-bit+G"; break;
2669 case SUPPAGINGMODE_PAE: psz = "PAE"; break;
2670 case SUPPAGINGMODE_PAE_GLOBAL: psz = "PAE+G"; break;
2671 case SUPPAGINGMODE_PAE_NX: psz = "PAE+NX"; break;
2672 case SUPPAGINGMODE_PAE_GLOBAL_NX: psz = "PAE+G+NX"; break;
2673 case SUPPAGINGMODE_AMD64: psz = "AMD64"; break;
2674 case SUPPAGINGMODE_AMD64_GLOBAL: psz = "AMD64+G"; break;
2675 case SUPPAGINGMODE_AMD64_NX: psz = "AMD64+NX"; break;
2676 case SUPPAGINGMODE_AMD64_GLOBAL_NX: psz = "AMD64+G+NX"; break;
2677 default: psz = "unknown"; break;
2678 }
2679 pHlp->pfnPrintf(pHlp, "Host paging mode: %s\n", psz);
2680 }
2681}
2682
2683
2684/**
2685 * Dump registered MMIO ranges to the log.
2686 *
2687 * @param pVM VM Handle.
2688 * @param pHlp The info helpers.
2689 * @param pszArgs Arguments, ignored.
2690 */
2691static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2692{
2693 NOREF(pszArgs);
2694 pHlp->pfnPrintf(pHlp,
2695 "RAM ranges (pVM=%p)\n"
2696 "%.*s %.*s\n",
2697 pVM,
2698 sizeof(RTGCPHYS) * 4 + 1, "GC Phys Range ",
2699 sizeof(RTHCPTR) * 2, "pvHC ");
2700
2701 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesR3; pCur; pCur = pCur->pNextR3)
2702 pHlp->pfnPrintf(pHlp,
2703 "%RGp-%RGp %RHv %s\n",
2704 pCur->GCPhys,
2705 pCur->GCPhysLast,
2706 pCur->pvR3,
2707 pCur->pszDesc);
2708}
2709
2710/**
2711 * Dump the page directory to the log.
2712 *
2713 * @param pVM VM Handle.
2714 * @param pHlp The info helpers.
2715 * @param pszArgs Arguments, ignored.
2716 */
2717static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2718{
2719/** @todo fix this! Convert the PGMR3DumpHierarchyHC functions to do guest stuff. */
2720 /* Big pages supported? */
2721 const bool fPSE = !!(CPUMGetGuestCR4(pVM) & X86_CR4_PSE);
2722
2723 /* Global pages supported? */
2724 const bool fPGE = !!(CPUMGetGuestCR4(pVM) & X86_CR4_PGE);
2725
2726 NOREF(pszArgs);
2727
2728 /*
2729 * Get page directory addresses.
2730 */
2731 PX86PD pPDSrc = pVM->pgm.s.pGst32BitPdR3;
2732 Assert(pPDSrc);
2733 Assert(PGMPhysGCPhys2R3PtrAssert(pVM, (RTGCPHYS)(CPUMGetGuestCR3(pVM) & X86_CR3_PAGE_MASK), sizeof(*pPDSrc)) == pPDSrc);
2734
2735 /*
2736 * Iterate the page directory.
2737 */
2738 for (unsigned iPD = 0; iPD < RT_ELEMENTS(pPDSrc->a); iPD++)
2739 {
2740 X86PDE PdeSrc = pPDSrc->a[iPD];
2741 if (PdeSrc.n.u1Present)
2742 {
2743 if (PdeSrc.b.u1Size && fPSE)
2744 pHlp->pfnPrintf(pHlp,
2745 "%04X - %RGp P=%d U=%d RW=%d G=%d - BIG\n",
2746 iPD,
2747 pgmGstGet4MBPhysPage(&pVM->pgm.s, PdeSrc),
2748 PdeSrc.b.u1Present, PdeSrc.b.u1User, PdeSrc.b.u1Write, PdeSrc.b.u1Global && fPGE);
2749 else
2750 pHlp->pfnPrintf(pHlp,
2751 "%04X - %RGp P=%d U=%d RW=%d [G=%d]\n",
2752 iPD,
2753 (RTGCPHYS)(PdeSrc.u & X86_PDE_PG_MASK),
2754 PdeSrc.n.u1Present, PdeSrc.n.u1User, PdeSrc.n.u1Write, PdeSrc.b.u1Global && fPGE);
2755 }
2756 }
2757}
2758
2759
2760/**
2761 * Serivce a VMMCALLHOST_PGM_LOCK call.
2762 *
2763 * @returns VBox status code.
2764 * @param pVM The VM handle.
2765 */
2766VMMR3DECL(int) PGMR3LockCall(PVM pVM)
2767{
2768 int rc = PDMR3CritSectEnterEx(&pVM->pgm.s.CritSect, true /* fHostCall */);
2769 AssertRC(rc);
2770 return rc;
2771}
2772
2773
2774/**
2775 * Converts a PGMMODE value to a PGM_TYPE_* \#define.
2776 *
2777 * @returns PGM_TYPE_*.
2778 * @param pgmMode The mode value to convert.
2779 */
2780DECLINLINE(unsigned) pgmModeToType(PGMMODE pgmMode)
2781{
2782 switch (pgmMode)
2783 {
2784 case PGMMODE_REAL: return PGM_TYPE_REAL;
2785 case PGMMODE_PROTECTED: return PGM_TYPE_PROT;
2786 case PGMMODE_32_BIT: return PGM_TYPE_32BIT;
2787 case PGMMODE_PAE:
2788 case PGMMODE_PAE_NX: return PGM_TYPE_PAE;
2789 case PGMMODE_AMD64:
2790 case PGMMODE_AMD64_NX: return PGM_TYPE_AMD64;
2791 case PGMMODE_NESTED: return PGM_TYPE_NESTED;
2792 case PGMMODE_EPT: return PGM_TYPE_EPT;
2793 default:
2794 AssertFatalMsgFailed(("pgmMode=%d\n", pgmMode));
2795 }
2796}
2797
2798
2799/**
2800 * Gets the index into the paging mode data array of a SHW+GST mode.
2801 *
2802 * @returns PGM::paPagingData index.
2803 * @param uShwType The shadow paging mode type.
2804 * @param uGstType The guest paging mode type.
2805 */
2806DECLINLINE(unsigned) pgmModeDataIndex(unsigned uShwType, unsigned uGstType)
2807{
2808 Assert(uShwType >= PGM_TYPE_32BIT && uShwType <= PGM_TYPE_MAX);
2809 Assert(uGstType >= PGM_TYPE_REAL && uGstType <= PGM_TYPE_AMD64);
2810 return (uShwType - PGM_TYPE_32BIT) * (PGM_TYPE_AMD64 - PGM_TYPE_REAL + 1)
2811 + (uGstType - PGM_TYPE_REAL);
2812}
2813
2814
2815/**
2816 * Gets the index into the paging mode data array of a SHW+GST mode.
2817 *
2818 * @returns PGM::paPagingData index.
2819 * @param enmShw The shadow paging mode.
2820 * @param enmGst The guest paging mode.
2821 */
2822DECLINLINE(unsigned) pgmModeDataIndexByMode(PGMMODE enmShw, PGMMODE enmGst)
2823{
2824 Assert(enmShw >= PGMMODE_32_BIT && enmShw <= PGMMODE_MAX);
2825 Assert(enmGst > PGMMODE_INVALID && enmGst < PGMMODE_MAX);
2826 return pgmModeDataIndex(pgmModeToType(enmShw), pgmModeToType(enmGst));
2827}
2828
2829
2830/**
2831 * Calculates the max data index.
2832 * @returns The number of entries in the paging data array.
2833 */
2834DECLINLINE(unsigned) pgmModeDataMaxIndex(void)
2835{
2836 return pgmModeDataIndex(PGM_TYPE_MAX, PGM_TYPE_AMD64) + 1;
2837}
2838
2839
2840/**
2841 * Initializes the paging mode data kept in PGM::paModeData.
2842 *
2843 * @param pVM The VM handle.
2844 * @param fResolveGCAndR0 Indicate whether or not GC and Ring-0 symbols can be resolved now.
2845 * This is used early in the init process to avoid trouble with PDM
2846 * not being initialized yet.
2847 */
2848static int pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0)
2849{
2850 PPGMMODEDATA pModeData;
2851 int rc;
2852
2853 /*
2854 * Allocate the array on the first call.
2855 */
2856 if (!pVM->pgm.s.paModeData)
2857 {
2858 pVM->pgm.s.paModeData = (PPGMMODEDATA)MMR3HeapAllocZ(pVM, MM_TAG_PGM, sizeof(PGMMODEDATA) * pgmModeDataMaxIndex());
2859 AssertReturn(pVM->pgm.s.paModeData, VERR_NO_MEMORY);
2860 }
2861
2862 /*
2863 * Initialize the array entries.
2864 */
2865 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_REAL)];
2866 pModeData->uShwType = PGM_TYPE_32BIT;
2867 pModeData->uGstType = PGM_TYPE_REAL;
2868 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2869 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2870 rc = PGM_BTH_NAME_32BIT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2871
2872 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGMMODE_PROTECTED)];
2873 pModeData->uShwType = PGM_TYPE_32BIT;
2874 pModeData->uGstType = PGM_TYPE_PROT;
2875 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2876 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2877 rc = PGM_BTH_NAME_32BIT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2878
2879 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_32BIT)];
2880 pModeData->uShwType = PGM_TYPE_32BIT;
2881 pModeData->uGstType = PGM_TYPE_32BIT;
2882 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2883 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2884 rc = PGM_BTH_NAME_32BIT_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2885
2886 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_REAL)];
2887 pModeData->uShwType = PGM_TYPE_PAE;
2888 pModeData->uGstType = PGM_TYPE_REAL;
2889 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2890 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2891 rc = PGM_BTH_NAME_PAE_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2892
2893 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PROT)];
2894 pModeData->uShwType = PGM_TYPE_PAE;
2895 pModeData->uGstType = PGM_TYPE_PROT;
2896 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2897 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2898 rc = PGM_BTH_NAME_PAE_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2899
2900 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_32BIT)];
2901 pModeData->uShwType = PGM_TYPE_PAE;
2902 pModeData->uGstType = PGM_TYPE_32BIT;
2903 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2904 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2905 rc = PGM_BTH_NAME_PAE_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2906
2907 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PAE)];
2908 pModeData->uShwType = PGM_TYPE_PAE;
2909 pModeData->uGstType = PGM_TYPE_PAE;
2910 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2911 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2912 rc = PGM_BTH_NAME_PAE_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2913
2914#ifdef VBOX_WITH_64_BITS_GUESTS
2915 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_AMD64, PGM_TYPE_AMD64)];
2916 pModeData->uShwType = PGM_TYPE_AMD64;
2917 pModeData->uGstType = PGM_TYPE_AMD64;
2918 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2919 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2920 rc = PGM_BTH_NAME_AMD64_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2921#endif
2922
2923 /* The nested paging mode. */
2924 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_REAL)];
2925 pModeData->uShwType = PGM_TYPE_NESTED;
2926 pModeData->uGstType = PGM_TYPE_REAL;
2927 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2928 rc = PGM_BTH_NAME_NESTED_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2929
2930 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGMMODE_PROTECTED)];
2931 pModeData->uShwType = PGM_TYPE_NESTED;
2932 pModeData->uGstType = PGM_TYPE_PROT;
2933 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2934 rc = PGM_BTH_NAME_NESTED_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2935
2936 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_32BIT)];
2937 pModeData->uShwType = PGM_TYPE_NESTED;
2938 pModeData->uGstType = PGM_TYPE_32BIT;
2939 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2940 rc = PGM_BTH_NAME_NESTED_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2941
2942 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_PAE)];
2943 pModeData->uShwType = PGM_TYPE_NESTED;
2944 pModeData->uGstType = PGM_TYPE_PAE;
2945 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2946 rc = PGM_BTH_NAME_NESTED_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2947
2948#ifdef VBOX_WITH_64_BITS_GUESTS
2949 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
2950 pModeData->uShwType = PGM_TYPE_NESTED;
2951 pModeData->uGstType = PGM_TYPE_AMD64;
2952 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2953 rc = PGM_BTH_NAME_NESTED_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2954#endif
2955
2956 /* The shadow part of the nested callback mode depends on the host paging mode (AMD-V only). */
2957 switch (pVM->pgm.s.enmHostMode)
2958 {
2959#if HC_ARCH_BITS == 32
2960 case SUPPAGINGMODE_32_BIT:
2961 case SUPPAGINGMODE_32_BIT_GLOBAL:
2962 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
2963 {
2964 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
2965 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2966 }
2967# ifdef VBOX_WITH_64_BITS_GUESTS
2968 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
2969 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2970# endif
2971 break;
2972
2973 case SUPPAGINGMODE_PAE:
2974 case SUPPAGINGMODE_PAE_NX:
2975 case SUPPAGINGMODE_PAE_GLOBAL:
2976 case SUPPAGINGMODE_PAE_GLOBAL_NX:
2977 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
2978 {
2979 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
2980 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2981 }
2982# ifdef VBOX_WITH_64_BITS_GUESTS
2983 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
2984 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2985# endif
2986 break;
2987#endif /* HC_ARCH_BITS == 32 */
2988
2989#if HC_ARCH_BITS == 64 || defined(RT_OS_DARWIN)
2990 case SUPPAGINGMODE_AMD64:
2991 case SUPPAGINGMODE_AMD64_GLOBAL:
2992 case SUPPAGINGMODE_AMD64_NX:
2993 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
2994# ifdef VBOX_WITH_64_BITS_GUESTS
2995 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_AMD64; i++)
2996# else
2997 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
2998# endif
2999 {
3000 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
3001 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3002 }
3003 break;
3004#endif /* HC_ARCH_BITS == 64 || RT_OS_DARWIN */
3005
3006 default:
3007 AssertFailed();
3008 break;
3009 }
3010
3011 /* Extended paging (EPT) / Intel VT-x */
3012 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_REAL)];
3013 pModeData->uShwType = PGM_TYPE_EPT;
3014 pModeData->uGstType = PGM_TYPE_REAL;
3015 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3016 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3017 rc = PGM_BTH_NAME_EPT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3018
3019 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_PROT)];
3020 pModeData->uShwType = PGM_TYPE_EPT;
3021 pModeData->uGstType = PGM_TYPE_PROT;
3022 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3023 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3024 rc = PGM_BTH_NAME_EPT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3025
3026 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_32BIT)];
3027 pModeData->uShwType = PGM_TYPE_EPT;
3028 pModeData->uGstType = PGM_TYPE_32BIT;
3029 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3030 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3031 rc = PGM_BTH_NAME_EPT_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3032
3033 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_PAE)];
3034 pModeData->uShwType = PGM_TYPE_EPT;
3035 pModeData->uGstType = PGM_TYPE_PAE;
3036 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3037 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3038 rc = PGM_BTH_NAME_EPT_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3039
3040#ifdef VBOX_WITH_64_BITS_GUESTS
3041 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_AMD64)];
3042 pModeData->uShwType = PGM_TYPE_EPT;
3043 pModeData->uGstType = PGM_TYPE_AMD64;
3044 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3045 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3046 rc = PGM_BTH_NAME_EPT_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3047#endif
3048 return VINF_SUCCESS;
3049}
3050
3051
3052/**
3053 * Switch to different (or relocated in the relocate case) mode data.
3054 *
3055 * @param pVM The VM handle.
3056 * @param enmShw The the shadow paging mode.
3057 * @param enmGst The the guest paging mode.
3058 */
3059static void pgmR3ModeDataSwitch(PVM pVM, PGMMODE enmShw, PGMMODE enmGst)
3060{
3061 PPGMMODEDATA pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndexByMode(enmShw, enmGst)];
3062
3063 Assert(pModeData->uGstType == pgmModeToType(enmGst));
3064 Assert(pModeData->uShwType == pgmModeToType(enmShw));
3065
3066 /* shadow */
3067 pVM->pgm.s.pfnR3ShwRelocate = pModeData->pfnR3ShwRelocate;
3068 pVM->pgm.s.pfnR3ShwExit = pModeData->pfnR3ShwExit;
3069 pVM->pgm.s.pfnR3ShwGetPage = pModeData->pfnR3ShwGetPage;
3070 Assert(pVM->pgm.s.pfnR3ShwGetPage);
3071 pVM->pgm.s.pfnR3ShwModifyPage = pModeData->pfnR3ShwModifyPage;
3072
3073 pVM->pgm.s.pfnRCShwGetPage = pModeData->pfnRCShwGetPage;
3074 pVM->pgm.s.pfnRCShwModifyPage = pModeData->pfnRCShwModifyPage;
3075
3076 pVM->pgm.s.pfnR0ShwGetPage = pModeData->pfnR0ShwGetPage;
3077 pVM->pgm.s.pfnR0ShwModifyPage = pModeData->pfnR0ShwModifyPage;
3078
3079
3080 /* guest */
3081 pVM->pgm.s.pfnR3GstRelocate = pModeData->pfnR3GstRelocate;
3082 pVM->pgm.s.pfnR3GstExit = pModeData->pfnR3GstExit;
3083 pVM->pgm.s.pfnR3GstGetPage = pModeData->pfnR3GstGetPage;
3084 Assert(pVM->pgm.s.pfnR3GstGetPage);
3085 pVM->pgm.s.pfnR3GstModifyPage = pModeData->pfnR3GstModifyPage;
3086 pVM->pgm.s.pfnR3GstGetPDE = pModeData->pfnR3GstGetPDE;
3087#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3088 pVM->pgm.s.pfnR3GstMonitorCR3 = pModeData->pfnR3GstMonitorCR3;
3089 pVM->pgm.s.pfnR3GstUnmonitorCR3 = pModeData->pfnR3GstUnmonitorCR3;
3090#endif
3091#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3092 pVM->pgm.s.pfnR3GstWriteHandlerCR3 = pModeData->pfnR3GstWriteHandlerCR3;
3093 pVM->pgm.s.pszR3GstWriteHandlerCR3 = pModeData->pszR3GstWriteHandlerCR3;
3094 pVM->pgm.s.pfnR3GstPAEWriteHandlerCR3 = pModeData->pfnR3GstPAEWriteHandlerCR3;
3095 pVM->pgm.s.pszR3GstPAEWriteHandlerCR3 = pModeData->pszR3GstPAEWriteHandlerCR3;
3096#endif
3097 pVM->pgm.s.pfnRCGstGetPage = pModeData->pfnRCGstGetPage;
3098 pVM->pgm.s.pfnRCGstModifyPage = pModeData->pfnRCGstModifyPage;
3099 pVM->pgm.s.pfnRCGstGetPDE = pModeData->pfnRCGstGetPDE;
3100#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3101 pVM->pgm.s.pfnRCGstMonitorCR3 = pModeData->pfnRCGstMonitorCR3;
3102 pVM->pgm.s.pfnRCGstUnmonitorCR3 = pModeData->pfnRCGstUnmonitorCR3;
3103#endif
3104#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3105 pVM->pgm.s.pfnRCGstWriteHandlerCR3 = pModeData->pfnRCGstWriteHandlerCR3;
3106 pVM->pgm.s.pfnRCGstPAEWriteHandlerCR3 = pModeData->pfnRCGstPAEWriteHandlerCR3;
3107#endif
3108 pVM->pgm.s.pfnR0GstGetPage = pModeData->pfnR0GstGetPage;
3109 pVM->pgm.s.pfnR0GstModifyPage = pModeData->pfnR0GstModifyPage;
3110 pVM->pgm.s.pfnR0GstGetPDE = pModeData->pfnR0GstGetPDE;
3111#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3112 pVM->pgm.s.pfnR0GstMonitorCR3 = pModeData->pfnR0GstMonitorCR3;
3113 pVM->pgm.s.pfnR0GstUnmonitorCR3 = pModeData->pfnR0GstUnmonitorCR3;
3114#endif
3115#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3116 pVM->pgm.s.pfnR0GstWriteHandlerCR3 = pModeData->pfnR0GstWriteHandlerCR3;
3117 pVM->pgm.s.pfnR0GstPAEWriteHandlerCR3 = pModeData->pfnR0GstPAEWriteHandlerCR3;
3118#endif
3119
3120 /* both */
3121 pVM->pgm.s.pfnR3BthRelocate = pModeData->pfnR3BthRelocate;
3122 pVM->pgm.s.pfnR3BthInvalidatePage = pModeData->pfnR3BthInvalidatePage;
3123 pVM->pgm.s.pfnR3BthSyncCR3 = pModeData->pfnR3BthSyncCR3;
3124 Assert(pVM->pgm.s.pfnR3BthSyncCR3);
3125 pVM->pgm.s.pfnR3BthSyncPage = pModeData->pfnR3BthSyncPage;
3126 pVM->pgm.s.pfnR3BthPrefetchPage = pModeData->pfnR3BthPrefetchPage;
3127 pVM->pgm.s.pfnR3BthVerifyAccessSyncPage = pModeData->pfnR3BthVerifyAccessSyncPage;
3128#ifdef VBOX_STRICT
3129 pVM->pgm.s.pfnR3BthAssertCR3 = pModeData->pfnR3BthAssertCR3;
3130#endif
3131 pVM->pgm.s.pfnR3BthMapCR3 = pModeData->pfnR3BthMapCR3;
3132 pVM->pgm.s.pfnR3BthUnmapCR3 = pModeData->pfnR3BthUnmapCR3;
3133
3134 pVM->pgm.s.pfnRCBthTrap0eHandler = pModeData->pfnRCBthTrap0eHandler;
3135 pVM->pgm.s.pfnRCBthInvalidatePage = pModeData->pfnRCBthInvalidatePage;
3136 pVM->pgm.s.pfnRCBthSyncCR3 = pModeData->pfnRCBthSyncCR3;
3137 pVM->pgm.s.pfnRCBthSyncPage = pModeData->pfnRCBthSyncPage;
3138 pVM->pgm.s.pfnRCBthPrefetchPage = pModeData->pfnRCBthPrefetchPage;
3139 pVM->pgm.s.pfnRCBthVerifyAccessSyncPage = pModeData->pfnRCBthVerifyAccessSyncPage;
3140#ifdef VBOX_STRICT
3141 pVM->pgm.s.pfnRCBthAssertCR3 = pModeData->pfnRCBthAssertCR3;
3142#endif
3143 pVM->pgm.s.pfnRCBthMapCR3 = pModeData->pfnRCBthMapCR3;
3144 pVM->pgm.s.pfnRCBthUnmapCR3 = pModeData->pfnRCBthUnmapCR3;
3145
3146 pVM->pgm.s.pfnR0BthTrap0eHandler = pModeData->pfnR0BthTrap0eHandler;
3147 pVM->pgm.s.pfnR0BthInvalidatePage = pModeData->pfnR0BthInvalidatePage;
3148 pVM->pgm.s.pfnR0BthSyncCR3 = pModeData->pfnR0BthSyncCR3;
3149 pVM->pgm.s.pfnR0BthSyncPage = pModeData->pfnR0BthSyncPage;
3150 pVM->pgm.s.pfnR0BthPrefetchPage = pModeData->pfnR0BthPrefetchPage;
3151 pVM->pgm.s.pfnR0BthVerifyAccessSyncPage = pModeData->pfnR0BthVerifyAccessSyncPage;
3152#ifdef VBOX_STRICT
3153 pVM->pgm.s.pfnR0BthAssertCR3 = pModeData->pfnR0BthAssertCR3;
3154#endif
3155 pVM->pgm.s.pfnR0BthMapCR3 = pModeData->pfnR0BthMapCR3;
3156 pVM->pgm.s.pfnR0BthUnmapCR3 = pModeData->pfnR0BthUnmapCR3;
3157}
3158
3159
3160/**
3161 * Calculates the shadow paging mode.
3162 *
3163 * @returns The shadow paging mode.
3164 * @param pVM VM handle.
3165 * @param enmGuestMode The guest mode.
3166 * @param enmHostMode The host mode.
3167 * @param enmShadowMode The current shadow mode.
3168 * @param penmSwitcher Where to store the switcher to use.
3169 * VMMSWITCHER_INVALID means no change.
3170 */
3171static PGMMODE pgmR3CalcShadowMode(PVM pVM, PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher)
3172{
3173 VMMSWITCHER enmSwitcher = VMMSWITCHER_INVALID;
3174 switch (enmGuestMode)
3175 {
3176 /*
3177 * When switching to real or protected mode we don't change
3178 * anything since it's likely that we'll switch back pretty soon.
3179 *
3180 * During pgmR3InitPaging we'll end up here with PGMMODE_INVALID
3181 * and is supposed to determine which shadow paging and switcher to
3182 * use during init.
3183 */
3184 case PGMMODE_REAL:
3185 case PGMMODE_PROTECTED:
3186 if ( enmShadowMode != PGMMODE_INVALID
3187 && !HWACCMIsEnabled(pVM) /* always switch in hwaccm mode! */)
3188 break; /* (no change) */
3189
3190 switch (enmHostMode)
3191 {
3192 case SUPPAGINGMODE_32_BIT:
3193 case SUPPAGINGMODE_32_BIT_GLOBAL:
3194 enmShadowMode = PGMMODE_32_BIT;
3195 enmSwitcher = VMMSWITCHER_32_TO_32;
3196 break;
3197
3198 case SUPPAGINGMODE_PAE:
3199 case SUPPAGINGMODE_PAE_NX:
3200 case SUPPAGINGMODE_PAE_GLOBAL:
3201 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3202 enmShadowMode = PGMMODE_PAE;
3203 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3204#ifdef DEBUG_bird
3205 if (RTEnvExist("VBOX_32BIT"))
3206 {
3207 enmShadowMode = PGMMODE_32_BIT;
3208 enmSwitcher = VMMSWITCHER_PAE_TO_32;
3209 }
3210#endif
3211 break;
3212
3213 case SUPPAGINGMODE_AMD64:
3214 case SUPPAGINGMODE_AMD64_GLOBAL:
3215 case SUPPAGINGMODE_AMD64_NX:
3216 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3217 enmShadowMode = PGMMODE_PAE;
3218 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3219#ifdef DEBUG_bird
3220 if (RTEnvExist("VBOX_32BIT"))
3221 {
3222 enmShadowMode = PGMMODE_32_BIT;
3223 enmSwitcher = VMMSWITCHER_AMD64_TO_32;
3224 }
3225#endif
3226 break;
3227
3228 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3229 }
3230 break;
3231
3232 case PGMMODE_32_BIT:
3233 switch (enmHostMode)
3234 {
3235 case SUPPAGINGMODE_32_BIT:
3236 case SUPPAGINGMODE_32_BIT_GLOBAL:
3237 enmShadowMode = PGMMODE_32_BIT;
3238 enmSwitcher = VMMSWITCHER_32_TO_32;
3239 break;
3240
3241 case SUPPAGINGMODE_PAE:
3242 case SUPPAGINGMODE_PAE_NX:
3243 case SUPPAGINGMODE_PAE_GLOBAL:
3244 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3245 enmShadowMode = PGMMODE_PAE;
3246 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3247#ifdef DEBUG_bird
3248 if (RTEnvExist("VBOX_32BIT"))
3249 {
3250 enmShadowMode = PGMMODE_32_BIT;
3251 enmSwitcher = VMMSWITCHER_PAE_TO_32;
3252 }
3253#endif
3254 break;
3255
3256 case SUPPAGINGMODE_AMD64:
3257 case SUPPAGINGMODE_AMD64_GLOBAL:
3258 case SUPPAGINGMODE_AMD64_NX:
3259 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3260 enmShadowMode = PGMMODE_PAE;
3261 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3262#ifdef DEBUG_bird
3263 if (RTEnvExist("VBOX_32BIT"))
3264 {
3265 enmShadowMode = PGMMODE_32_BIT;
3266 enmSwitcher = VMMSWITCHER_AMD64_TO_32;
3267 }
3268#endif
3269 break;
3270
3271 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3272 }
3273 break;
3274
3275 case PGMMODE_PAE:
3276 case PGMMODE_PAE_NX: /** @todo This might require more switchers and guest+both modes. */
3277 switch (enmHostMode)
3278 {
3279 case SUPPAGINGMODE_32_BIT:
3280 case SUPPAGINGMODE_32_BIT_GLOBAL:
3281 enmShadowMode = PGMMODE_PAE;
3282 enmSwitcher = VMMSWITCHER_32_TO_PAE;
3283 break;
3284
3285 case SUPPAGINGMODE_PAE:
3286 case SUPPAGINGMODE_PAE_NX:
3287 case SUPPAGINGMODE_PAE_GLOBAL:
3288 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3289 enmShadowMode = PGMMODE_PAE;
3290 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3291 break;
3292
3293 case SUPPAGINGMODE_AMD64:
3294 case SUPPAGINGMODE_AMD64_GLOBAL:
3295 case SUPPAGINGMODE_AMD64_NX:
3296 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3297 enmShadowMode = PGMMODE_PAE;
3298 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3299 break;
3300
3301 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3302 }
3303 break;
3304
3305 case PGMMODE_AMD64:
3306 case PGMMODE_AMD64_NX:
3307 switch (enmHostMode)
3308 {
3309 case SUPPAGINGMODE_32_BIT:
3310 case SUPPAGINGMODE_32_BIT_GLOBAL:
3311 enmShadowMode = PGMMODE_AMD64;
3312 enmSwitcher = VMMSWITCHER_32_TO_AMD64;
3313 break;
3314
3315 case SUPPAGINGMODE_PAE:
3316 case SUPPAGINGMODE_PAE_NX:
3317 case SUPPAGINGMODE_PAE_GLOBAL:
3318 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3319 enmShadowMode = PGMMODE_AMD64;
3320 enmSwitcher = VMMSWITCHER_PAE_TO_AMD64;
3321 break;
3322
3323 case SUPPAGINGMODE_AMD64:
3324 case SUPPAGINGMODE_AMD64_GLOBAL:
3325 case SUPPAGINGMODE_AMD64_NX:
3326 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3327 enmShadowMode = PGMMODE_AMD64;
3328 enmSwitcher = VMMSWITCHER_AMD64_TO_AMD64;
3329 break;
3330
3331 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3332 }
3333 break;
3334
3335
3336 default:
3337 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
3338 return PGMMODE_INVALID;
3339 }
3340 /* Override the shadow mode is nested paging is active. */
3341 if (HWACCMIsNestedPagingActive(pVM))
3342 enmShadowMode = HWACCMGetShwPagingMode(pVM);
3343
3344 *penmSwitcher = enmSwitcher;
3345 return enmShadowMode;
3346}
3347
3348
3349/**
3350 * Performs the actual mode change.
3351 * This is called by PGMChangeMode and pgmR3InitPaging().
3352 *
3353 * @returns VBox status code.
3354 * @param pVM VM handle.
3355 * @param enmGuestMode The new guest mode. This is assumed to be different from
3356 * the current mode.
3357 */
3358VMMR3DECL(int) PGMR3ChangeMode(PVM pVM, PGMMODE enmGuestMode)
3359{
3360 Log(("PGMR3ChangeMode: Guest mode: %s -> %s\n", PGMGetModeName(pVM->pgm.s.enmGuestMode), PGMGetModeName(enmGuestMode)));
3361 STAM_REL_COUNTER_INC(&pVM->pgm.s.cGuestModeChanges);
3362
3363 /*
3364 * Calc the shadow mode and switcher.
3365 */
3366 VMMSWITCHER enmSwitcher;
3367 PGMMODE enmShadowMode = pgmR3CalcShadowMode(pVM, enmGuestMode, pVM->pgm.s.enmHostMode, pVM->pgm.s.enmShadowMode, &enmSwitcher);
3368 if (enmSwitcher != VMMSWITCHER_INVALID)
3369 {
3370 /*
3371 * Select new switcher.
3372 */
3373 int rc = VMMR3SelectSwitcher(pVM, enmSwitcher);
3374 if (RT_FAILURE(rc))
3375 {
3376 AssertReleaseMsgFailed(("VMMR3SelectSwitcher(%d) -> %Rrc\n", enmSwitcher, rc));
3377 return rc;
3378 }
3379 }
3380
3381 /*
3382 * Exit old mode(s).
3383 */
3384 /* shadow */
3385 if (enmShadowMode != pVM->pgm.s.enmShadowMode)
3386 {
3387 LogFlow(("PGMR3ChangeMode: Shadow mode: %s -> %s\n", PGMGetModeName(pVM->pgm.s.enmShadowMode), PGMGetModeName(enmShadowMode)));
3388 if (PGM_SHW_PFN(Exit, pVM))
3389 {
3390 int rc = PGM_SHW_PFN(Exit, pVM)(pVM);
3391 if (RT_FAILURE(rc))
3392 {
3393 AssertMsgFailed(("Exit failed for shadow mode %d: %Rrc\n", pVM->pgm.s.enmShadowMode, rc));
3394 return rc;
3395 }
3396 }
3397
3398 }
3399 else
3400 LogFlow(("PGMR3ChangeMode: Shadow mode remains: %s\n", PGMGetModeName(pVM->pgm.s.enmShadowMode)));
3401
3402 /* guest */
3403 if (PGM_GST_PFN(Exit, pVM))
3404 {
3405 int rc = PGM_GST_PFN(Exit, pVM)(pVM);
3406 if (RT_FAILURE(rc))
3407 {
3408 AssertMsgFailed(("Exit failed for guest mode %d: %Rrc\n", pVM->pgm.s.enmGuestMode, rc));
3409 return rc;
3410 }
3411 }
3412
3413 /*
3414 * Load new paging mode data.
3415 */
3416 pgmR3ModeDataSwitch(pVM, enmShadowMode, enmGuestMode);
3417
3418 /*
3419 * Enter new shadow mode (if changed).
3420 */
3421 if (enmShadowMode != pVM->pgm.s.enmShadowMode)
3422 {
3423 int rc;
3424 pVM->pgm.s.enmShadowMode = enmShadowMode;
3425 switch (enmShadowMode)
3426 {
3427 case PGMMODE_32_BIT:
3428 rc = PGM_SHW_NAME_32BIT(Enter)(pVM);
3429 break;
3430 case PGMMODE_PAE:
3431 case PGMMODE_PAE_NX:
3432 rc = PGM_SHW_NAME_PAE(Enter)(pVM);
3433 break;
3434 case PGMMODE_AMD64:
3435 case PGMMODE_AMD64_NX:
3436 rc = PGM_SHW_NAME_AMD64(Enter)(pVM);
3437 break;
3438 case PGMMODE_NESTED:
3439 rc = PGM_SHW_NAME_NESTED(Enter)(pVM);
3440 break;
3441 case PGMMODE_EPT:
3442 rc = PGM_SHW_NAME_EPT(Enter)(pVM);
3443 break;
3444 case PGMMODE_REAL:
3445 case PGMMODE_PROTECTED:
3446 default:
3447 AssertReleaseMsgFailed(("enmShadowMode=%d\n", enmShadowMode));
3448 return VERR_INTERNAL_ERROR;
3449 }
3450 if (RT_FAILURE(rc))
3451 {
3452 AssertReleaseMsgFailed(("Entering enmShadowMode=%d failed: %Rrc\n", enmShadowMode, rc));
3453 pVM->pgm.s.enmShadowMode = PGMMODE_INVALID;
3454 return rc;
3455 }
3456 }
3457
3458#ifndef VBOX_WITH_PGMPOOL_PAGING_ONLY
3459 /** @todo This is a bug!
3460 *
3461 * We must flush the PGM pool cache if the guest mode changes; we don't always
3462 * switch shadow paging mode (e.g. protected->32-bit) and shouldn't reuse
3463 * the shadow page tables.
3464 *
3465 * That only applies when switching between paging and non-paging modes.
3466 */
3467 /** @todo A20 setting */
3468 if ( pVM->pgm.s.CTX_SUFF(pPool)
3469 && !HWACCMIsNestedPagingActive(pVM)
3470 && PGMMODE_WITH_PAGING(pVM->pgm.s.enmGuestMode) != PGMMODE_WITH_PAGING(enmGuestMode))
3471 {
3472 Log(("PGMR3ChangeMode: changing guest paging mode -> flush pgm pool cache!\n"));
3473 pgmPoolFlushAll(pVM);
3474 }
3475#endif
3476
3477 /*
3478 * Enter the new guest and shadow+guest modes.
3479 */
3480 int rc = -1;
3481 int rc2 = -1;
3482 RTGCPHYS GCPhysCR3 = NIL_RTGCPHYS;
3483 pVM->pgm.s.enmGuestMode = enmGuestMode;
3484 switch (enmGuestMode)
3485 {
3486 case PGMMODE_REAL:
3487 rc = PGM_GST_NAME_REAL(Enter)(pVM, NIL_RTGCPHYS);
3488 switch (pVM->pgm.s.enmShadowMode)
3489 {
3490 case PGMMODE_32_BIT:
3491 rc2 = PGM_BTH_NAME_32BIT_REAL(Enter)(pVM, NIL_RTGCPHYS);
3492 break;
3493 case PGMMODE_PAE:
3494 case PGMMODE_PAE_NX:
3495 rc2 = PGM_BTH_NAME_PAE_REAL(Enter)(pVM, NIL_RTGCPHYS);
3496 break;
3497 case PGMMODE_NESTED:
3498 rc2 = PGM_BTH_NAME_NESTED_REAL(Enter)(pVM, NIL_RTGCPHYS);
3499 break;
3500 case PGMMODE_EPT:
3501 rc2 = PGM_BTH_NAME_EPT_REAL(Enter)(pVM, NIL_RTGCPHYS);
3502 break;
3503 case PGMMODE_AMD64:
3504 case PGMMODE_AMD64_NX:
3505 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3506 default: AssertFailed(); break;
3507 }
3508 break;
3509
3510 case PGMMODE_PROTECTED:
3511 rc = PGM_GST_NAME_PROT(Enter)(pVM, NIL_RTGCPHYS);
3512 switch (pVM->pgm.s.enmShadowMode)
3513 {
3514 case PGMMODE_32_BIT:
3515 rc2 = PGM_BTH_NAME_32BIT_PROT(Enter)(pVM, NIL_RTGCPHYS);
3516 break;
3517 case PGMMODE_PAE:
3518 case PGMMODE_PAE_NX:
3519 rc2 = PGM_BTH_NAME_PAE_PROT(Enter)(pVM, NIL_RTGCPHYS);
3520 break;
3521 case PGMMODE_NESTED:
3522 rc2 = PGM_BTH_NAME_NESTED_PROT(Enter)(pVM, NIL_RTGCPHYS);
3523 break;
3524 case PGMMODE_EPT:
3525 rc2 = PGM_BTH_NAME_EPT_PROT(Enter)(pVM, NIL_RTGCPHYS);
3526 break;
3527 case PGMMODE_AMD64:
3528 case PGMMODE_AMD64_NX:
3529 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3530 default: AssertFailed(); break;
3531 }
3532 break;
3533
3534 case PGMMODE_32_BIT:
3535 GCPhysCR3 = CPUMGetGuestCR3(pVM) & X86_CR3_PAGE_MASK;
3536 rc = PGM_GST_NAME_32BIT(Enter)(pVM, GCPhysCR3);
3537 switch (pVM->pgm.s.enmShadowMode)
3538 {
3539 case PGMMODE_32_BIT:
3540 rc2 = PGM_BTH_NAME_32BIT_32BIT(Enter)(pVM, GCPhysCR3);
3541 break;
3542 case PGMMODE_PAE:
3543 case PGMMODE_PAE_NX:
3544 rc2 = PGM_BTH_NAME_PAE_32BIT(Enter)(pVM, GCPhysCR3);
3545 break;
3546 case PGMMODE_NESTED:
3547 rc2 = PGM_BTH_NAME_NESTED_32BIT(Enter)(pVM, GCPhysCR3);
3548 break;
3549 case PGMMODE_EPT:
3550 rc2 = PGM_BTH_NAME_EPT_32BIT(Enter)(pVM, GCPhysCR3);
3551 break;
3552 case PGMMODE_AMD64:
3553 case PGMMODE_AMD64_NX:
3554 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3555 default: AssertFailed(); break;
3556 }
3557 break;
3558
3559 case PGMMODE_PAE_NX:
3560 case PGMMODE_PAE:
3561 {
3562 uint32_t u32Dummy, u32Features;
3563
3564 CPUMGetGuestCpuId(pVM, 1, &u32Dummy, &u32Dummy, &u32Dummy, &u32Features);
3565 if (!(u32Features & X86_CPUID_FEATURE_EDX_PAE))
3566 {
3567 /* Pause first, then inform Main. */
3568 rc = VMR3SuspendNoSave(pVM);
3569 AssertRC(rc);
3570
3571 VMSetRuntimeError(pVM, true, "PAEmode",
3572 N_("The guest is trying to switch to the PAE mode which is currently disabled by default in VirtualBox. PAE support can be enabled using the VM settings (General/Advanced)"));
3573 /* we must return VINF_SUCCESS here otherwise the recompiler will assert */
3574 return VINF_SUCCESS;
3575 }
3576 GCPhysCR3 = CPUMGetGuestCR3(pVM) & X86_CR3_PAE_PAGE_MASK;
3577 rc = PGM_GST_NAME_PAE(Enter)(pVM, GCPhysCR3);
3578 switch (pVM->pgm.s.enmShadowMode)
3579 {
3580 case PGMMODE_PAE:
3581 case PGMMODE_PAE_NX:
3582 rc2 = PGM_BTH_NAME_PAE_PAE(Enter)(pVM, GCPhysCR3);
3583 break;
3584 case PGMMODE_NESTED:
3585 rc2 = PGM_BTH_NAME_NESTED_PAE(Enter)(pVM, GCPhysCR3);
3586 break;
3587 case PGMMODE_EPT:
3588 rc2 = PGM_BTH_NAME_EPT_PAE(Enter)(pVM, GCPhysCR3);
3589 break;
3590 case PGMMODE_32_BIT:
3591 case PGMMODE_AMD64:
3592 case PGMMODE_AMD64_NX:
3593 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3594 default: AssertFailed(); break;
3595 }
3596 break;
3597 }
3598
3599#ifdef VBOX_WITH_64_BITS_GUESTS
3600 case PGMMODE_AMD64_NX:
3601 case PGMMODE_AMD64:
3602 GCPhysCR3 = CPUMGetGuestCR3(pVM) & UINT64_C(0xfffffffffffff000); /** @todo define this mask! */
3603 rc = PGM_GST_NAME_AMD64(Enter)(pVM, GCPhysCR3);
3604 switch (pVM->pgm.s.enmShadowMode)
3605 {
3606 case PGMMODE_AMD64:
3607 case PGMMODE_AMD64_NX:
3608 rc2 = PGM_BTH_NAME_AMD64_AMD64(Enter)(pVM, GCPhysCR3);
3609 break;
3610 case PGMMODE_NESTED:
3611 rc2 = PGM_BTH_NAME_NESTED_AMD64(Enter)(pVM, GCPhysCR3);
3612 break;
3613 case PGMMODE_EPT:
3614 rc2 = PGM_BTH_NAME_EPT_AMD64(Enter)(pVM, GCPhysCR3);
3615 break;
3616 case PGMMODE_32_BIT:
3617 case PGMMODE_PAE:
3618 case PGMMODE_PAE_NX:
3619 AssertMsgFailed(("Should use AMD64 shadow mode!\n"));
3620 default: AssertFailed(); break;
3621 }
3622 break;
3623#endif
3624
3625 default:
3626 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
3627 rc = VERR_NOT_IMPLEMENTED;
3628 break;
3629 }
3630
3631 /* status codes. */
3632 AssertRC(rc);
3633 AssertRC(rc2);
3634 if (RT_SUCCESS(rc))
3635 {
3636 rc = rc2;
3637 if (RT_SUCCESS(rc)) /* no informational status codes. */
3638 rc = VINF_SUCCESS;
3639 }
3640
3641 /*
3642 * Notify SELM so it can update the TSSes with correct CR3s.
3643 */
3644 SELMR3PagingModeChanged(pVM);
3645
3646 /* Notify HWACCM as well. */
3647 HWACCMR3PagingModeChanged(pVM, pVM->pgm.s.enmShadowMode, pVM->pgm.s.enmGuestMode);
3648 return rc;
3649}
3650
3651
3652/**
3653 * Dumps a PAE shadow page table.
3654 *
3655 * @returns VBox status code (VINF_SUCCESS).
3656 * @param pVM The VM handle.
3657 * @param pPT Pointer to the page table.
3658 * @param u64Address The virtual address of the page table starts.
3659 * @param fLongMode Set if this a long mode table; clear if it's a legacy mode table.
3660 * @param cMaxDepth The maxium depth.
3661 * @param pHlp Pointer to the output functions.
3662 */
3663static int pgmR3DumpHierarchyHCPaePT(PVM pVM, PX86PTPAE pPT, uint64_t u64Address, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3664{
3665 for (unsigned i = 0; i < RT_ELEMENTS(pPT->a); i++)
3666 {
3667 X86PTEPAE Pte = pPT->a[i];
3668 if (Pte.n.u1Present)
3669 {
3670 pHlp->pfnPrintf(pHlp,
3671 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
3672 ? "%016llx 3 | P %c %c %c %c %c %s %s %s %s 4K %c%c%c %016llx\n"
3673 : "%08llx 2 | P %c %c %c %c %c %s %s %s %s 4K %c%c%c %016llx\n",
3674 u64Address + ((uint64_t)i << X86_PT_PAE_SHIFT),
3675 Pte.n.u1Write ? 'W' : 'R',
3676 Pte.n.u1User ? 'U' : 'S',
3677 Pte.n.u1Accessed ? 'A' : '-',
3678 Pte.n.u1Dirty ? 'D' : '-',
3679 Pte.n.u1Global ? 'G' : '-',
3680 Pte.n.u1WriteThru ? "WT" : "--",
3681 Pte.n.u1CacheDisable? "CD" : "--",
3682 Pte.n.u1PAT ? "AT" : "--",
3683 Pte.n.u1NoExecute ? "NX" : "--",
3684 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
3685 Pte.u & RT_BIT(10) ? '1' : '0',
3686 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED? 'v' : '-',
3687 Pte.u & X86_PTE_PAE_PG_MASK);
3688 }
3689 }
3690 return VINF_SUCCESS;
3691}
3692
3693
3694/**
3695 * Dumps a PAE shadow page directory table.
3696 *
3697 * @returns VBox status code (VINF_SUCCESS).
3698 * @param pVM The VM handle.
3699 * @param HCPhys The physical address of the page directory table.
3700 * @param u64Address The virtual address of the page table starts.
3701 * @param cr4 The CR4, PSE is currently used.
3702 * @param fLongMode Set if this a long mode table; clear if it's a legacy mode table.
3703 * @param cMaxDepth The maxium depth.
3704 * @param pHlp Pointer to the output functions.
3705 */
3706static int pgmR3DumpHierarchyHCPaePD(PVM pVM, RTHCPHYS HCPhys, uint64_t u64Address, uint32_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3707{
3708 PX86PDPAE pPD = (PX86PDPAE)MMPagePhys2Page(pVM, HCPhys);
3709 if (!pPD)
3710 {
3711 pHlp->pfnPrintf(pHlp, "%0*llx error! Page directory at HCPhys=%RHp was not found in the page pool!\n",
3712 fLongMode ? 16 : 8, u64Address, HCPhys);
3713 return VERR_INVALID_PARAMETER;
3714 }
3715 const bool fBigPagesSupported = fLongMode || !!(cr4 & X86_CR4_PSE);
3716
3717 int rc = VINF_SUCCESS;
3718 for (unsigned i = 0; i < RT_ELEMENTS(pPD->a); i++)
3719 {
3720 X86PDEPAE Pde = pPD->a[i];
3721 if (Pde.n.u1Present)
3722 {
3723 if (fBigPagesSupported && Pde.b.u1Size)
3724 pHlp->pfnPrintf(pHlp,
3725 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
3726 ? "%016llx 2 | P %c %c %c %c %c %s %s %s %s 4M %c%c%c %016llx\n"
3727 : "%08llx 1 | P %c %c %c %c %c %s %s %s %s 4M %c%c%c %016llx\n",
3728 u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT),
3729 Pde.b.u1Write ? 'W' : 'R',
3730 Pde.b.u1User ? 'U' : 'S',
3731 Pde.b.u1Accessed ? 'A' : '-',
3732 Pde.b.u1Dirty ? 'D' : '-',
3733 Pde.b.u1Global ? 'G' : '-',
3734 Pde.b.u1WriteThru ? "WT" : "--",
3735 Pde.b.u1CacheDisable? "CD" : "--",
3736 Pde.b.u1PAT ? "AT" : "--",
3737 Pde.b.u1NoExecute ? "NX" : "--",
3738 Pde.u & RT_BIT_64(9) ? '1' : '0',
3739 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
3740 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
3741 Pde.u & X86_PDE_PAE_PG_MASK);
3742 else
3743 {
3744 pHlp->pfnPrintf(pHlp,
3745 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
3746 ? "%016llx 2 | P %c %c %c %c %c %s %s .. %s 4K %c%c%c %016llx\n"
3747 : "%08llx 1 | P %c %c %c %c %c %s %s .. %s 4K %c%c%c %016llx\n",
3748 u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT),
3749 Pde.n.u1Write ? 'W' : 'R',
3750 Pde.n.u1User ? 'U' : 'S',
3751 Pde.n.u1Accessed ? 'A' : '-',
3752 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
3753 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
3754 Pde.n.u1WriteThru ? "WT" : "--",
3755 Pde.n.u1CacheDisable? "CD" : "--",
3756 Pde.n.u1NoExecute ? "NX" : "--",
3757 Pde.u & RT_BIT_64(9) ? '1' : '0',
3758 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
3759 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
3760 Pde.u & X86_PDE_PAE_PG_MASK);
3761 if (cMaxDepth >= 1)
3762 {
3763 /** @todo what about using the page pool for mapping PTs? */
3764 uint64_t u64AddressPT = u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT);
3765 RTHCPHYS HCPhysPT = Pde.u & X86_PDE_PAE_PG_MASK;
3766 PX86PTPAE pPT = NULL;
3767 if (!(Pde.u & PGM_PDFLAGS_MAPPING))
3768 pPT = (PX86PTPAE)MMPagePhys2Page(pVM, HCPhysPT);
3769 else
3770 {
3771 for (PPGMMAPPING pMap = pVM->pgm.s.pMappingsR3; pMap; pMap = pMap->pNextR3)
3772 {
3773 uint64_t off = u64AddressPT - pMap->GCPtr;
3774 if (off < pMap->cb)
3775 {
3776 const int iPDE = (uint32_t)(off >> X86_PD_SHIFT);
3777 const int iSub = (int)((off >> X86_PD_PAE_SHIFT) & 1); /* MSC is a pain sometimes */
3778 if ((iSub ? pMap->aPTs[iPDE].HCPhysPaePT1 : pMap->aPTs[iPDE].HCPhysPaePT0) != HCPhysPT)
3779 pHlp->pfnPrintf(pHlp, "%0*llx error! Mapping error! PT %d has HCPhysPT=%RHp not %RHp is in the PD.\n",
3780 fLongMode ? 16 : 8, u64AddressPT, iPDE,
3781 iSub ? pMap->aPTs[iPDE].HCPhysPaePT1 : pMap->aPTs[iPDE].HCPhysPaePT0, HCPhysPT);
3782 pPT = &pMap->aPTs[iPDE].paPaePTsR3[iSub];
3783 }
3784 }
3785 }
3786 int rc2 = VERR_INVALID_PARAMETER;
3787 if (pPT)
3788 rc2 = pgmR3DumpHierarchyHCPaePT(pVM, pPT, u64AddressPT, fLongMode, cMaxDepth - 1, pHlp);
3789 else
3790 pHlp->pfnPrintf(pHlp, "%0*llx error! Page table at HCPhys=%RHp was not found in the page pool!\n",
3791 fLongMode ? 16 : 8, u64AddressPT, HCPhysPT);
3792 if (rc2 < rc && RT_SUCCESS(rc))
3793 rc = rc2;
3794 }
3795 }
3796 }
3797 }
3798 return rc;
3799}
3800
3801
3802/**
3803 * Dumps a PAE shadow page directory pointer table.
3804 *
3805 * @returns VBox status code (VINF_SUCCESS).
3806 * @param pVM The VM handle.
3807 * @param HCPhys The physical address of the page directory pointer table.
3808 * @param u64Address The virtual address of the page table starts.
3809 * @param cr4 The CR4, PSE is currently used.
3810 * @param fLongMode Set if this a long mode table; clear if it's a legacy mode table.
3811 * @param cMaxDepth The maxium depth.
3812 * @param pHlp Pointer to the output functions.
3813 */
3814static int pgmR3DumpHierarchyHCPaePDPT(PVM pVM, RTHCPHYS HCPhys, uint64_t u64Address, uint32_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3815{
3816 PX86PDPT pPDPT = (PX86PDPT)MMPagePhys2Page(pVM, HCPhys);
3817 if (!pPDPT)
3818 {
3819 pHlp->pfnPrintf(pHlp, "%0*llx error! Page directory pointer table at HCPhys=%RHp was not found in the page pool!\n",
3820 fLongMode ? 16 : 8, u64Address, HCPhys);
3821 return VERR_INVALID_PARAMETER;
3822 }
3823
3824 int rc = VINF_SUCCESS;
3825 const unsigned c = fLongMode ? RT_ELEMENTS(pPDPT->a) : X86_PG_PAE_PDPE_ENTRIES;
3826 for (unsigned i = 0; i < c; i++)
3827 {
3828 X86PDPE Pdpe = pPDPT->a[i];
3829 if (Pdpe.n.u1Present)
3830 {
3831 if (fLongMode)
3832 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a p ? */
3833 "%016llx 1 | P %c %c %c %c %c %s %s %s %s .. %c%c%c %016llx\n",
3834 u64Address + ((uint64_t)i << X86_PDPT_SHIFT),
3835 Pdpe.lm.u1Write ? 'W' : 'R',
3836 Pdpe.lm.u1User ? 'U' : 'S',
3837 Pdpe.lm.u1Accessed ? 'A' : '-',
3838 Pdpe.lm.u3Reserved & 1? '?' : '.', /* ignored */
3839 Pdpe.lm.u3Reserved & 4? '!' : '.', /* mbz */
3840 Pdpe.lm.u1WriteThru ? "WT" : "--",
3841 Pdpe.lm.u1CacheDisable? "CD" : "--",
3842 Pdpe.lm.u3Reserved & 2? "!" : "..",/* mbz */
3843 Pdpe.lm.u1NoExecute ? "NX" : "--",
3844 Pdpe.u & RT_BIT(9) ? '1' : '0',
3845 Pdpe.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
3846 Pdpe.u & RT_BIT(11) ? '1' : '0',
3847 Pdpe.u & X86_PDPE_PG_MASK);
3848 else
3849 pHlp->pfnPrintf(pHlp, /*P G WT CD AT NX 4M a p ? */
3850 "%08x 0 | P %c %s %s %s %s .. %c%c%c %016llx\n",
3851 i << X86_PDPT_SHIFT,
3852 Pdpe.n.u4Reserved & 1? '!' : '.', /* mbz */
3853 Pdpe.n.u4Reserved & 4? '!' : '.', /* mbz */
3854 Pdpe.n.u1WriteThru ? "WT" : "--",
3855 Pdpe.n.u1CacheDisable? "CD" : "--",
3856 Pdpe.n.u4Reserved & 2? "!" : "..",/* mbz */
3857 Pdpe.u & RT_BIT(9) ? '1' : '0',
3858 Pdpe.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
3859 Pdpe.u & RT_BIT(11) ? '1' : '0',
3860 Pdpe.u & X86_PDPE_PG_MASK);
3861 if (cMaxDepth >= 1)
3862 {
3863 int rc2 = pgmR3DumpHierarchyHCPaePD(pVM, Pdpe.u & X86_PDPE_PG_MASK, u64Address + ((uint64_t)i << X86_PDPT_SHIFT),
3864 cr4, fLongMode, cMaxDepth - 1, pHlp);
3865 if (rc2 < rc && RT_SUCCESS(rc))
3866 rc = rc2;
3867 }
3868 }
3869 }
3870 return rc;
3871}
3872
3873
3874/**
3875 * Dumps a 32-bit shadow page table.
3876 *
3877 * @returns VBox status code (VINF_SUCCESS).
3878 * @param pVM The VM handle.
3879 * @param HCPhys The physical address of the table.
3880 * @param cr4 The CR4, PSE is currently used.
3881 * @param cMaxDepth The maxium depth.
3882 * @param pHlp Pointer to the output functions.
3883 */
3884static int pgmR3DumpHierarchyHcPaePML4(PVM pVM, RTHCPHYS HCPhys, uint32_t cr4, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3885{
3886 PX86PML4 pPML4 = (PX86PML4)MMPagePhys2Page(pVM, HCPhys);
3887 if (!pPML4)
3888 {
3889 pHlp->pfnPrintf(pHlp, "Page map level 4 at HCPhys=%RHp was not found in the page pool!\n", HCPhys);
3890 return VERR_INVALID_PARAMETER;
3891 }
3892
3893 int rc = VINF_SUCCESS;
3894 for (unsigned i = 0; i < RT_ELEMENTS(pPML4->a); i++)
3895 {
3896 X86PML4E Pml4e = pPML4->a[i];
3897 if (Pml4e.n.u1Present)
3898 {
3899 uint64_t u64Address = ((uint64_t)i << X86_PML4_SHIFT) | (((uint64_t)i >> (X86_PML4_SHIFT - X86_PDPT_SHIFT - 1)) * 0xffff000000000000ULL);
3900 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a p ? */
3901 "%016llx 0 | P %c %c %c %c %c %s %s %s %s .. %c%c%c %016llx\n",
3902 u64Address,
3903 Pml4e.n.u1Write ? 'W' : 'R',
3904 Pml4e.n.u1User ? 'U' : 'S',
3905 Pml4e.n.u1Accessed ? 'A' : '-',
3906 Pml4e.n.u3Reserved & 1? '?' : '.', /* ignored */
3907 Pml4e.n.u3Reserved & 4? '!' : '.', /* mbz */
3908 Pml4e.n.u1WriteThru ? "WT" : "--",
3909 Pml4e.n.u1CacheDisable? "CD" : "--",
3910 Pml4e.n.u3Reserved & 2? "!" : "..",/* mbz */
3911 Pml4e.n.u1NoExecute ? "NX" : "--",
3912 Pml4e.u & RT_BIT(9) ? '1' : '0',
3913 Pml4e.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
3914 Pml4e.u & RT_BIT(11) ? '1' : '0',
3915 Pml4e.u & X86_PML4E_PG_MASK);
3916
3917 if (cMaxDepth >= 1)
3918 {
3919 int rc2 = pgmR3DumpHierarchyHCPaePDPT(pVM, Pml4e.u & X86_PML4E_PG_MASK, u64Address, cr4, true, cMaxDepth - 1, pHlp);
3920 if (rc2 < rc && RT_SUCCESS(rc))
3921 rc = rc2;
3922 }
3923 }
3924 }
3925 return rc;
3926}
3927
3928
3929/**
3930 * Dumps a 32-bit shadow page table.
3931 *
3932 * @returns VBox status code (VINF_SUCCESS).
3933 * @param pVM The VM handle.
3934 * @param pPT Pointer to the page table.
3935 * @param u32Address The virtual address this table starts at.
3936 * @param pHlp Pointer to the output functions.
3937 */
3938int pgmR3DumpHierarchyHC32BitPT(PVM pVM, PX86PT pPT, uint32_t u32Address, PCDBGFINFOHLP pHlp)
3939{
3940 for (unsigned i = 0; i < RT_ELEMENTS(pPT->a); i++)
3941 {
3942 X86PTE Pte = pPT->a[i];
3943 if (Pte.n.u1Present)
3944 {
3945 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
3946 "%08x 1 | P %c %c %c %c %c %s %s %s .. 4K %c%c%c %08x\n",
3947 u32Address + (i << X86_PT_SHIFT),
3948 Pte.n.u1Write ? 'W' : 'R',
3949 Pte.n.u1User ? 'U' : 'S',
3950 Pte.n.u1Accessed ? 'A' : '-',
3951 Pte.n.u1Dirty ? 'D' : '-',
3952 Pte.n.u1Global ? 'G' : '-',
3953 Pte.n.u1WriteThru ? "WT" : "--",
3954 Pte.n.u1CacheDisable? "CD" : "--",
3955 Pte.n.u1PAT ? "AT" : "--",
3956 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
3957 Pte.u & RT_BIT(10) ? '1' : '0',
3958 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED ? 'v' : '-',
3959 Pte.u & X86_PDE_PG_MASK);
3960 }
3961 }
3962 return VINF_SUCCESS;
3963}
3964
3965
3966/**
3967 * Dumps a 32-bit shadow page directory and page tables.
3968 *
3969 * @returns VBox status code (VINF_SUCCESS).
3970 * @param pVM The VM handle.
3971 * @param cr3 The root of the hierarchy.
3972 * @param cr4 The CR4, PSE is currently used.
3973 * @param cMaxDepth How deep into the hierarchy the dumper should go.
3974 * @param pHlp Pointer to the output functions.
3975 */
3976int pgmR3DumpHierarchyHC32BitPD(PVM pVM, uint32_t cr3, uint32_t cr4, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3977{
3978 PX86PD pPD = (PX86PD)MMPagePhys2Page(pVM, cr3 & X86_CR3_PAGE_MASK);
3979 if (!pPD)
3980 {
3981 pHlp->pfnPrintf(pHlp, "Page directory at %#x was not found in the page pool!\n", cr3 & X86_CR3_PAGE_MASK);
3982 return VERR_INVALID_PARAMETER;
3983 }
3984
3985 int rc = VINF_SUCCESS;
3986 for (unsigned i = 0; i < RT_ELEMENTS(pPD->a); i++)
3987 {
3988 X86PDE Pde = pPD->a[i];
3989 if (Pde.n.u1Present)
3990 {
3991 const uint32_t u32Address = i << X86_PD_SHIFT;
3992 if ((cr4 & X86_CR4_PSE) && Pde.b.u1Size)
3993 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
3994 "%08x 0 | P %c %c %c %c %c %s %s %s .. 4M %c%c%c %08x\n",
3995 u32Address,
3996 Pde.b.u1Write ? 'W' : 'R',
3997 Pde.b.u1User ? 'U' : 'S',
3998 Pde.b.u1Accessed ? 'A' : '-',
3999 Pde.b.u1Dirty ? 'D' : '-',
4000 Pde.b.u1Global ? 'G' : '-',
4001 Pde.b.u1WriteThru ? "WT" : "--",
4002 Pde.b.u1CacheDisable? "CD" : "--",
4003 Pde.b.u1PAT ? "AT" : "--",
4004 Pde.u & RT_BIT_64(9) ? '1' : '0',
4005 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
4006 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
4007 Pde.u & X86_PDE4M_PG_MASK);
4008 else
4009 {
4010 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
4011 "%08x 0 | P %c %c %c %c %c %s %s .. .. 4K %c%c%c %08x\n",
4012 u32Address,
4013 Pde.n.u1Write ? 'W' : 'R',
4014 Pde.n.u1User ? 'U' : 'S',
4015 Pde.n.u1Accessed ? 'A' : '-',
4016 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
4017 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
4018 Pde.n.u1WriteThru ? "WT" : "--",
4019 Pde.n.u1CacheDisable? "CD" : "--",
4020 Pde.u & RT_BIT_64(9) ? '1' : '0',
4021 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
4022 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
4023 Pde.u & X86_PDE_PG_MASK);
4024 if (cMaxDepth >= 1)
4025 {
4026 /** @todo what about using the page pool for mapping PTs? */
4027 RTHCPHYS HCPhys = Pde.u & X86_PDE_PG_MASK;
4028 PX86PT pPT = NULL;
4029 if (!(Pde.u & PGM_PDFLAGS_MAPPING))
4030 pPT = (PX86PT)MMPagePhys2Page(pVM, HCPhys);
4031 else
4032 {
4033 for (PPGMMAPPING pMap = pVM->pgm.s.pMappingsR3; pMap; pMap = pMap->pNextR3)
4034 if (u32Address - pMap->GCPtr < pMap->cb)
4035 {
4036 int iPDE = (u32Address - pMap->GCPtr) >> X86_PD_SHIFT;
4037 if (pMap->aPTs[iPDE].HCPhysPT != HCPhys)
4038 pHlp->pfnPrintf(pHlp, "%08x error! Mapping error! PT %d has HCPhysPT=%RHp not %RHp is in the PD.\n",
4039 u32Address, iPDE, pMap->aPTs[iPDE].HCPhysPT, HCPhys);
4040 pPT = pMap->aPTs[iPDE].pPTR3;
4041 }
4042 }
4043 int rc2 = VERR_INVALID_PARAMETER;
4044 if (pPT)
4045 rc2 = pgmR3DumpHierarchyHC32BitPT(pVM, pPT, u32Address, pHlp);
4046 else
4047 pHlp->pfnPrintf(pHlp, "%08x error! Page table at %#x was not found in the page pool!\n", u32Address, HCPhys);
4048 if (rc2 < rc && RT_SUCCESS(rc))
4049 rc = rc2;
4050 }
4051 }
4052 }
4053 }
4054
4055 return rc;
4056}
4057
4058
4059/**
4060 * Dumps a 32-bit shadow page table.
4061 *
4062 * @returns VBox status code (VINF_SUCCESS).
4063 * @param pVM The VM handle.
4064 * @param pPT Pointer to the page table.
4065 * @param u32Address The virtual address this table starts at.
4066 * @param PhysSearch Address to search for.
4067 */
4068int pgmR3DumpHierarchyGC32BitPT(PVM pVM, PX86PT pPT, uint32_t u32Address, RTGCPHYS PhysSearch)
4069{
4070 for (unsigned i = 0; i < RT_ELEMENTS(pPT->a); i++)
4071 {
4072 X86PTE Pte = pPT->a[i];
4073 if (Pte.n.u1Present)
4074 {
4075 Log(( /*P R S A D G WT CD AT NX 4M a m d */
4076 "%08x 1 | P %c %c %c %c %c %s %s %s .. 4K %c%c%c %08x\n",
4077 u32Address + (i << X86_PT_SHIFT),
4078 Pte.n.u1Write ? 'W' : 'R',
4079 Pte.n.u1User ? 'U' : 'S',
4080 Pte.n.u1Accessed ? 'A' : '-',
4081 Pte.n.u1Dirty ? 'D' : '-',
4082 Pte.n.u1Global ? 'G' : '-',
4083 Pte.n.u1WriteThru ? "WT" : "--",
4084 Pte.n.u1CacheDisable? "CD" : "--",
4085 Pte.n.u1PAT ? "AT" : "--",
4086 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
4087 Pte.u & RT_BIT(10) ? '1' : '0',
4088 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED ? 'v' : '-',
4089 Pte.u & X86_PDE_PG_MASK));
4090
4091 if ((Pte.u & X86_PDE_PG_MASK) == PhysSearch)
4092 {
4093 uint64_t fPageShw = 0;
4094 RTHCPHYS pPhysHC = 0;
4095
4096 PGMShwGetPage(pVM, (RTGCPTR)(u32Address + (i << X86_PT_SHIFT)), &fPageShw, &pPhysHC);
4097 Log(("Found %RGp at %RGv -> flags=%llx\n", PhysSearch, (RTGCPTR)(u32Address + (i << X86_PT_SHIFT)), fPageShw));
4098 }
4099 }
4100 }
4101 return VINF_SUCCESS;
4102}
4103
4104
4105/**
4106 * Dumps a 32-bit guest page directory and page tables.
4107 *
4108 * @returns VBox status code (VINF_SUCCESS).
4109 * @param pVM The VM handle.
4110 * @param cr3 The root of the hierarchy.
4111 * @param cr4 The CR4, PSE is currently used.
4112 * @param PhysSearch Address to search for.
4113 */
4114VMMR3DECL(int) PGMR3DumpHierarchyGC(PVM pVM, uint64_t cr3, uint64_t cr4, RTGCPHYS PhysSearch)
4115{
4116 bool fLongMode = false;
4117 const unsigned cch = fLongMode ? 16 : 8; NOREF(cch);
4118 PX86PD pPD = 0;
4119
4120 int rc = PGM_GCPHYS_2_PTR(pVM, cr3 & X86_CR3_PAGE_MASK, &pPD);
4121 if (RT_FAILURE(rc) || !pPD)
4122 {
4123 Log(("Page directory at %#x was not found in the page pool!\n", cr3 & X86_CR3_PAGE_MASK));
4124 return VERR_INVALID_PARAMETER;
4125 }
4126
4127 Log(("cr3=%08x cr4=%08x%s\n"
4128 "%-*s P - Present\n"
4129 "%-*s | R/W - Read (0) / Write (1)\n"
4130 "%-*s | | U/S - User (1) / Supervisor (0)\n"
4131 "%-*s | | | A - Accessed\n"
4132 "%-*s | | | | D - Dirty\n"
4133 "%-*s | | | | | G - Global\n"
4134 "%-*s | | | | | | WT - Write thru\n"
4135 "%-*s | | | | | | | CD - Cache disable\n"
4136 "%-*s | | | | | | | | AT - Attribute table (PAT)\n"
4137 "%-*s | | | | | | | | | NX - No execute (K8)\n"
4138 "%-*s | | | | | | | | | | 4K/4M/2M - Page size.\n"
4139 "%-*s | | | | | | | | | | | AVL - a=allocated; m=mapping; d=track dirty;\n"
4140 "%-*s | | | | | | | | | | | | p=permanent; v=validated;\n"
4141 "%-*s Level | | | | | | | | | | | | Page\n"
4142 /* xxxx n **** P R S A D G WT CD AT NX 4M AVL xxxxxxxxxxxxx
4143 - W U - - - -- -- -- -- -- 010 */
4144 , cr3, cr4, fLongMode ? " Long Mode" : "",
4145 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "",
4146 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "Address"));
4147
4148 for (unsigned i = 0; i < RT_ELEMENTS(pPD->a); i++)
4149 {
4150 X86PDE Pde = pPD->a[i];
4151 if (Pde.n.u1Present)
4152 {
4153 const uint32_t u32Address = i << X86_PD_SHIFT;
4154
4155 if ((cr4 & X86_CR4_PSE) && Pde.b.u1Size)
4156 Log(( /*P R S A D G WT CD AT NX 4M a m d */
4157 "%08x 0 | P %c %c %c %c %c %s %s %s .. 4M %c%c%c %08x\n",
4158 u32Address,
4159 Pde.b.u1Write ? 'W' : 'R',
4160 Pde.b.u1User ? 'U' : 'S',
4161 Pde.b.u1Accessed ? 'A' : '-',
4162 Pde.b.u1Dirty ? 'D' : '-',
4163 Pde.b.u1Global ? 'G' : '-',
4164 Pde.b.u1WriteThru ? "WT" : "--",
4165 Pde.b.u1CacheDisable? "CD" : "--",
4166 Pde.b.u1PAT ? "AT" : "--",
4167 Pde.u & RT_BIT(9) ? '1' : '0',
4168 Pde.u & RT_BIT(10) ? '1' : '0',
4169 Pde.u & RT_BIT(11) ? '1' : '0',
4170 pgmGstGet4MBPhysPage(&pVM->pgm.s, Pde)));
4171 /** @todo PhysSearch */
4172 else
4173 {
4174 Log(( /*P R S A D G WT CD AT NX 4M a m d */
4175 "%08x 0 | P %c %c %c %c %c %s %s .. .. 4K %c%c%c %08x\n",
4176 u32Address,
4177 Pde.n.u1Write ? 'W' : 'R',
4178 Pde.n.u1User ? 'U' : 'S',
4179 Pde.n.u1Accessed ? 'A' : '-',
4180 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
4181 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
4182 Pde.n.u1WriteThru ? "WT" : "--",
4183 Pde.n.u1CacheDisable? "CD" : "--",
4184 Pde.u & RT_BIT(9) ? '1' : '0',
4185 Pde.u & RT_BIT(10) ? '1' : '0',
4186 Pde.u & RT_BIT(11) ? '1' : '0',
4187 Pde.u & X86_PDE_PG_MASK));
4188 ////if (cMaxDepth >= 1)
4189 {
4190 /** @todo what about using the page pool for mapping PTs? */
4191 RTGCPHYS GCPhys = Pde.u & X86_PDE_PG_MASK;
4192 PX86PT pPT = NULL;
4193
4194 rc = PGM_GCPHYS_2_PTR(pVM, GCPhys, &pPT);
4195
4196 int rc2 = VERR_INVALID_PARAMETER;
4197 if (pPT)
4198 rc2 = pgmR3DumpHierarchyGC32BitPT(pVM, pPT, u32Address, PhysSearch);
4199 else
4200 Log(("%08x error! Page table at %#x was not found in the page pool!\n", u32Address, GCPhys));
4201 if (rc2 < rc && RT_SUCCESS(rc))
4202 rc = rc2;
4203 }
4204 }
4205 }
4206 }
4207
4208 return rc;
4209}
4210
4211
4212/**
4213 * Dumps a page table hierarchy use only physical addresses and cr4/lm flags.
4214 *
4215 * @returns VBox status code (VINF_SUCCESS).
4216 * @param pVM The VM handle.
4217 * @param cr3 The root of the hierarchy.
4218 * @param cr4 The cr4, only PAE and PSE is currently used.
4219 * @param fLongMode Set if long mode, false if not long mode.
4220 * @param cMaxDepth Number of levels to dump.
4221 * @param pHlp Pointer to the output functions.
4222 */
4223VMMR3DECL(int) PGMR3DumpHierarchyHC(PVM pVM, uint64_t cr3, uint64_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
4224{
4225 if (!pHlp)
4226 pHlp = DBGFR3InfoLogHlp();
4227 if (!cMaxDepth)
4228 return VINF_SUCCESS;
4229 const unsigned cch = fLongMode ? 16 : 8;
4230 pHlp->pfnPrintf(pHlp,
4231 "cr3=%08x cr4=%08x%s\n"
4232 "%-*s P - Present\n"
4233 "%-*s | R/W - Read (0) / Write (1)\n"
4234 "%-*s | | U/S - User (1) / Supervisor (0)\n"
4235 "%-*s | | | A - Accessed\n"
4236 "%-*s | | | | D - Dirty\n"
4237 "%-*s | | | | | G - Global\n"
4238 "%-*s | | | | | | WT - Write thru\n"
4239 "%-*s | | | | | | | CD - Cache disable\n"
4240 "%-*s | | | | | | | | AT - Attribute table (PAT)\n"
4241 "%-*s | | | | | | | | | NX - No execute (K8)\n"
4242 "%-*s | | | | | | | | | | 4K/4M/2M - Page size.\n"
4243 "%-*s | | | | | | | | | | | AVL - a=allocated; m=mapping; d=track dirty;\n"
4244 "%-*s | | | | | | | | | | | | p=permanent; v=validated;\n"
4245 "%-*s Level | | | | | | | | | | | | Page\n"
4246 /* xxxx n **** P R S A D G WT CD AT NX 4M AVL xxxxxxxxxxxxx
4247 - W U - - - -- -- -- -- -- 010 */
4248 , cr3, cr4, fLongMode ? " Long Mode" : "",
4249 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "",
4250 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "Address");
4251 if (cr4 & X86_CR4_PAE)
4252 {
4253 if (fLongMode)
4254 return pgmR3DumpHierarchyHcPaePML4(pVM, cr3 & X86_CR3_PAGE_MASK, cr4, cMaxDepth, pHlp);
4255 return pgmR3DumpHierarchyHCPaePDPT(pVM, cr3 & X86_CR3_PAE_PAGE_MASK, 0, cr4, false, cMaxDepth, pHlp);
4256 }
4257 return pgmR3DumpHierarchyHC32BitPD(pVM, cr3 & X86_CR3_PAGE_MASK, cr4, cMaxDepth, pHlp);
4258}
4259
4260#ifdef VBOX_WITH_DEBUGGER
4261
4262/**
4263 * The '.pgmram' command.
4264 *
4265 * @returns VBox status.
4266 * @param pCmd Pointer to the command descriptor (as registered).
4267 * @param pCmdHlp Pointer to command helper functions.
4268 * @param pVM Pointer to the current VM (if any).
4269 * @param paArgs Pointer to (readonly) array of arguments.
4270 * @param cArgs Number of arguments in the array.
4271 */
4272static DECLCALLBACK(int) pgmR3CmdRam(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4273{
4274 /*
4275 * Validate input.
4276 */
4277 if (!pVM)
4278 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4279 if (!pVM->pgm.s.pRamRangesRC)
4280 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Sorry, no Ram is registered.\n");
4281
4282 /*
4283 * Dump the ranges.
4284 */
4285 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "From - To (incl) pvHC\n");
4286 PPGMRAMRANGE pRam;
4287 for (pRam = pVM->pgm.s.pRamRangesR3; pRam; pRam = pRam->pNextR3)
4288 {
4289 rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL,
4290 "%RGp - %RGp %p\n",
4291 pRam->GCPhys, pRam->GCPhysLast, pRam->pvR3);
4292 if (RT_FAILURE(rc))
4293 return rc;
4294 }
4295
4296 return VINF_SUCCESS;
4297}
4298
4299
4300/**
4301 * The '.pgmmap' command.
4302 *
4303 * @returns VBox status.
4304 * @param pCmd Pointer to the command descriptor (as registered).
4305 * @param pCmdHlp Pointer to command helper functions.
4306 * @param pVM Pointer to the current VM (if any).
4307 * @param paArgs Pointer to (readonly) array of arguments.
4308 * @param cArgs Number of arguments in the array.
4309 */
4310static DECLCALLBACK(int) pgmR3CmdMap(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4311{
4312 /*
4313 * Validate input.
4314 */
4315 if (!pVM)
4316 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4317 if (!pVM->pgm.s.pMappingsR3)
4318 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Sorry, no mappings are registered.\n");
4319
4320 /*
4321 * Print message about the fixedness of the mappings.
4322 */
4323 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, pVM->pgm.s.fMappingsFixed ? "The mappings are FIXED.\n" : "The mappings are FLOATING.\n");
4324 if (RT_FAILURE(rc))
4325 return rc;
4326
4327 /*
4328 * Dump the ranges.
4329 */
4330 PPGMMAPPING pCur;
4331 for (pCur = pVM->pgm.s.pMappingsR3; pCur; pCur = pCur->pNextR3)
4332 {
4333 rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL,
4334 "%08x - %08x %s\n",
4335 pCur->GCPtr, pCur->GCPtrLast, pCur->pszDesc);
4336 if (RT_FAILURE(rc))
4337 return rc;
4338 }
4339
4340 return VINF_SUCCESS;
4341}
4342
4343
4344/**
4345 * The '.pgmsync' command.
4346 *
4347 * @returns VBox status.
4348 * @param pCmd Pointer to the command descriptor (as registered).
4349 * @param pCmdHlp Pointer to command helper functions.
4350 * @param pVM Pointer to the current VM (if any).
4351 * @param paArgs Pointer to (readonly) array of arguments.
4352 * @param cArgs Number of arguments in the array.
4353 */
4354static DECLCALLBACK(int) pgmR3CmdSync(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4355{
4356 /*
4357 * Validate input.
4358 */
4359 if (!pVM)
4360 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4361
4362 /*
4363 * Force page directory sync.
4364 */
4365 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);
4366
4367 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Forcing page directory sync.\n");
4368 if (RT_FAILURE(rc))
4369 return rc;
4370
4371 return VINF_SUCCESS;
4372}
4373
4374
4375#ifdef VBOX_STRICT
4376/**
4377 * The '.pgmassertcr3' command.
4378 *
4379 * @returns VBox status.
4380 * @param pCmd Pointer to the command descriptor (as registered).
4381 * @param pCmdHlp Pointer to command helper functions.
4382 * @param pVM Pointer to the current VM (if any).
4383 * @param paArgs Pointer to (readonly) array of arguments.
4384 * @param cArgs Number of arguments in the array.
4385 */
4386static DECLCALLBACK(int) pgmR3CmdAssertCR3(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4387{
4388 /*
4389 * Validate input.
4390 */
4391 if (!pVM)
4392 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4393
4394 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Checking shadow CR3 page tables for consistency.\n");
4395 if (RT_FAILURE(rc))
4396 return rc;
4397
4398 PGMAssertCR3(pVM, CPUMGetGuestCR3(pVM), CPUMGetGuestCR4(pVM));
4399
4400 return VINF_SUCCESS;
4401}
4402#endif /* VBOX_STRICT */
4403
4404
4405/**
4406 * The '.pgmsyncalways' command.
4407 *
4408 * @returns VBox status.
4409 * @param pCmd Pointer to the command descriptor (as registered).
4410 * @param pCmdHlp Pointer to command helper functions.
4411 * @param pVM Pointer to the current VM (if any).
4412 * @param paArgs Pointer to (readonly) array of arguments.
4413 * @param cArgs Number of arguments in the array.
4414 */
4415static DECLCALLBACK(int) pgmR3CmdSyncAlways(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
4416{
4417 /*
4418 * Validate input.
4419 */
4420 if (!pVM)
4421 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires a VM to be selected.\n");
4422
4423 /*
4424 * Force page directory sync.
4425 */
4426 if (pVM->pgm.s.fSyncFlags & PGM_SYNC_ALWAYS)
4427 {
4428 ASMAtomicAndU32(&pVM->pgm.s.fSyncFlags, ~PGM_SYNC_ALWAYS);
4429 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Disabled permanent forced page directory syncing.\n");
4430 }
4431 else
4432 {
4433 ASMAtomicOrU32(&pVM->pgm.s.fSyncFlags, PGM_SYNC_ALWAYS);
4434 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);
4435 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Enabled permanent forced page directory syncing.\n");
4436 }
4437}
4438
4439#endif /* VBOX_WITH_DEBUGGER */
4440
4441/**
4442 * pvUser argument of the pgmR3CheckIntegrity*Node callbacks.
4443 */
4444typedef struct PGMCHECKINTARGS
4445{
4446 bool fLeftToRight; /**< true: left-to-right; false: right-to-left. */
4447 PPGMPHYSHANDLER pPrevPhys;
4448 PPGMVIRTHANDLER pPrevVirt;
4449 PPGMPHYS2VIRTHANDLER pPrevPhys2Virt;
4450 PVM pVM;
4451} PGMCHECKINTARGS, *PPGMCHECKINTARGS;
4452
4453/**
4454 * Validate a node in the physical handler tree.
4455 *
4456 * @returns 0 on if ok, other wise 1.
4457 * @param pNode The handler node.
4458 * @param pvUser pVM.
4459 */
4460static DECLCALLBACK(int) pgmR3CheckIntegrityPhysHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
4461{
4462 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4463 PPGMPHYSHANDLER pCur = (PPGMPHYSHANDLER)pNode;
4464 AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
4465 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGp-%RGp %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4466 AssertReleaseMsg( !pArgs->pPrevPhys
4467 || (pArgs->fLeftToRight ? pArgs->pPrevPhys->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys->Core.KeyLast > pCur->Core.Key),
4468 ("pPrevPhys=%p %RGp-%RGp %s\n"
4469 " pCur=%p %RGp-%RGp %s\n",
4470 pArgs->pPrevPhys, pArgs->pPrevPhys->Core.Key, pArgs->pPrevPhys->Core.KeyLast, pArgs->pPrevPhys->pszDesc,
4471 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4472 pArgs->pPrevPhys = pCur;
4473 return 0;
4474}
4475
4476
4477/**
4478 * Validate a node in the virtual handler tree.
4479 *
4480 * @returns 0 on if ok, other wise 1.
4481 * @param pNode The handler node.
4482 * @param pvUser pVM.
4483 */
4484static DECLCALLBACK(int) pgmR3CheckIntegrityVirtHandlerNode(PAVLROGCPTRNODECORE pNode, void *pvUser)
4485{
4486 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4487 PPGMVIRTHANDLER pCur = (PPGMVIRTHANDLER)pNode;
4488 AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
4489 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGv-%RGv %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4490 AssertReleaseMsg( !pArgs->pPrevVirt
4491 || (pArgs->fLeftToRight ? pArgs->pPrevVirt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevVirt->Core.KeyLast > pCur->Core.Key),
4492 ("pPrevVirt=%p %RGv-%RGv %s\n"
4493 " pCur=%p %RGv-%RGv %s\n",
4494 pArgs->pPrevVirt, pArgs->pPrevVirt->Core.Key, pArgs->pPrevVirt->Core.KeyLast, pArgs->pPrevVirt->pszDesc,
4495 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4496 for (unsigned iPage = 0; iPage < pCur->cPages; iPage++)
4497 {
4498 AssertReleaseMsg(pCur->aPhysToVirt[iPage].offVirtHandler == -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage]),
4499 ("pCur=%p %RGv-%RGv %s\n"
4500 "iPage=%d offVirtHandle=%#x expected %#x\n",
4501 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc,
4502 iPage, pCur->aPhysToVirt[iPage].offVirtHandler, -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage])));
4503 }
4504 pArgs->pPrevVirt = pCur;
4505 return 0;
4506}
4507
4508
4509/**
4510 * Validate a node in the virtual handler tree.
4511 *
4512 * @returns 0 on if ok, other wise 1.
4513 * @param pNode The handler node.
4514 * @param pvUser pVM.
4515 */
4516static DECLCALLBACK(int) pgmR3CheckIntegrityPhysToVirtHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
4517{
4518 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4519 PPGMPHYS2VIRTHANDLER pCur = (PPGMPHYS2VIRTHANDLER)pNode;
4520 AssertReleaseMsgReturn(!((uintptr_t)pCur & 3), ("\n"), 1);
4521 AssertReleaseMsgReturn(!(pCur->offVirtHandler & 3), ("\n"), 1);
4522 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGp-%RGp\n", pCur, pCur->Core.Key, pCur->Core.KeyLast));
4523 AssertReleaseMsg( !pArgs->pPrevPhys2Virt
4524 || (pArgs->fLeftToRight ? pArgs->pPrevPhys2Virt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys2Virt->Core.KeyLast > pCur->Core.Key),
4525 ("pPrevPhys2Virt=%p %RGp-%RGp\n"
4526 " pCur=%p %RGp-%RGp\n",
4527 pArgs->pPrevPhys2Virt, pArgs->pPrevPhys2Virt->Core.Key, pArgs->pPrevPhys2Virt->Core.KeyLast,
4528 pCur, pCur->Core.Key, pCur->Core.KeyLast));
4529 AssertReleaseMsg( !pArgs->pPrevPhys2Virt
4530 || (pArgs->fLeftToRight ? pArgs->pPrevPhys2Virt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys2Virt->Core.KeyLast > pCur->Core.Key),
4531 ("pPrevPhys2Virt=%p %RGp-%RGp\n"
4532 " pCur=%p %RGp-%RGp\n",
4533 pArgs->pPrevPhys2Virt, pArgs->pPrevPhys2Virt->Core.Key, pArgs->pPrevPhys2Virt->Core.KeyLast,
4534 pCur, pCur->Core.Key, pCur->Core.KeyLast));
4535 AssertReleaseMsg((pCur->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD),
4536 ("pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4537 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
4538 if (pCur->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK)
4539 {
4540 PPGMPHYS2VIRTHANDLER pCur2 = pCur;
4541 for (;;)
4542 {
4543 pCur2 = (PPGMPHYS2VIRTHANDLER)((intptr_t)pCur + (pCur->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK));
4544 AssertReleaseMsg(pCur2 != pCur,
4545 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4546 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
4547 AssertReleaseMsg((pCur2->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == PGMPHYS2VIRTHANDLER_IN_TREE,
4548 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
4549 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4550 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
4551 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
4552 AssertReleaseMsg((pCur2->Core.Key ^ pCur->Core.Key) < PAGE_SIZE,
4553 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
4554 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4555 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
4556 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
4557 AssertReleaseMsg((pCur2->Core.KeyLast ^ pCur->Core.KeyLast) < PAGE_SIZE,
4558 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
4559 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4560 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
4561 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
4562 if (!(pCur2->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK))
4563 break;
4564 }
4565 }
4566
4567 pArgs->pPrevPhys2Virt = pCur;
4568 return 0;
4569}
4570
4571
4572/**
4573 * Perform an integrity check on the PGM component.
4574 *
4575 * @returns VINF_SUCCESS if everything is fine.
4576 * @returns VBox error status after asserting on integrity breach.
4577 * @param pVM The VM handle.
4578 */
4579VMMR3DECL(int) PGMR3CheckIntegrity(PVM pVM)
4580{
4581 AssertReleaseReturn(pVM->pgm.s.offVM, VERR_INTERNAL_ERROR);
4582
4583 /*
4584 * Check the trees.
4585 */
4586 int cErrors = 0;
4587 const static PGMCHECKINTARGS s_LeftToRight = { true, NULL, NULL, NULL, pVM };
4588 const static PGMCHECKINTARGS s_RightToLeft = { false, NULL, NULL, NULL, pVM };
4589 PGMCHECKINTARGS Args = s_LeftToRight;
4590 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, true, pgmR3CheckIntegrityPhysHandlerNode, &Args);
4591 Args = s_RightToLeft;
4592 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, false, pgmR3CheckIntegrityPhysHandlerNode, &Args);
4593 Args = s_LeftToRight;
4594 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->VirtHandlers, true, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4595 Args = s_RightToLeft;
4596 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->VirtHandlers, false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4597 Args = s_LeftToRight;
4598 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->HyperVirtHandlers, true, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4599 Args = s_RightToLeft;
4600 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->HyperVirtHandlers, false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4601 Args = s_LeftToRight;
4602 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysToVirtHandlers, true, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
4603 Args = s_RightToLeft;
4604 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysToVirtHandlers, false, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
4605
4606 return !cErrors ? VINF_SUCCESS : VERR_INTERNAL_ERROR;
4607}
4608
4609
Note: See TracBrowser for help on using the repository browser.

© 2024 Oracle Support Privacy / Do Not Sell My Info Terms of Use Trademark Policy Automated Access Etiquette