VirtualBox

source: vbox/trunk/src/VBox/VMM/PGM.cpp@ 7802

Last change on this file since 7802 was 7753, checked in by vboxsync, 17 years ago

The PGM bits of the MMIO cleanup.
Moved the parts of PGMR3Reset that deals with RAM (zeroing it) and sketched out the new code there.
Fixed a bug in PGM_PAGE_INIT_ZERO* where the type and state was switched.

  • Property svn:eol-style set to native
  • Property svn:keywords set to Id
File size: 183.6 KB
/* $Id: PGM.cpp 7753 2008-04-04 20:35:44Z vboxsync $ */
/** @file
 * PGM - Page Manager and Monitor. (Mixing stuff here, not good?)
 */

/*
 * Copyright (C) 2006-2007 innotek GmbH
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 */


/** @page pg_pgm PGM - The Page Manager and Monitor
 *
 *
 *
 * @section sec_pgm_modes Paging Modes
 *
 * There are three memory contexts: Host Context (HC), Guest Context (GC)
 * and the intermediate context. When talking about paging, HC can also be
 * referred to as "host paging", and GC referred to as "shadow paging".
 *
 * We define three basic paging modes: 32-bit, PAE and AMD64. The host paging
 * mode is defined by the host operating system. The mode used for shadow
 * paging depends on the host paging mode and on what mode the guest is
 * currently in. The following relation between the two is defined:
 *
 * @verbatim
    Host ->  32-bit |  PAE   | AMD64  |
 Guest      |        |        |        |
 ==v=================================== 
 32-bit      32-bit    PAE      PAE
 -----------|--------|--------|--------|
 PAE          PAE      PAE      PAE
 -----------|--------|--------|--------|
 AMD64       AMD64     AMD64    AMD64
 -----------|--------|--------|--------| @endverbatim
 *
 * All configurations except those on the diagonal (upper left) are expected to
 * require special effort from the switcher (i.e. be a bit slower).
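 *
 * As a rough illustration of the table (the name and the fHost32Bit parameter
 * are invented here; the real logic lives in pgmR3CalcShadowMode() and also
 * has to pick a switcher):
 * @verbatim
   static PGMMODE pgmSketchShadowMode(PGMMODE enmGuestMode, bool fHost32Bit)
   {
       switch (enmGuestMode)
       {
           case PGMMODE_32_BIT: return fHost32Bit ? PGMMODE_32_BIT : PGMMODE_PAE;
           case PGMMODE_PAE:    return PGMMODE_PAE;   /* PAE guest -> PAE shadow on any host. */
           case PGMMODE_AMD64:  return PGMMODE_AMD64; /* AMD64 guest -> AMD64 shadow on any host. */
           default:             return PGMMODE_INVALID;
       }
   }
   @endverbatim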
 *
 *
 *
 *
 * @section sec_pgm_shw The Shadow Memory Context
 *
 *
 * [..]
 *
 * Because guest context mappings require PDPT and PML4 entries to allow
 * writing on AMD64, the two upper levels will have fixed flags whatever the
 * guest is thinking of using there. So, when shadowing the PD level we will
 * calculate the effective flags of the PD and all the higher levels. In
 * legacy PAE mode this only applies to the PWT and PCD bits (the rest are
 * ignored/reserved/MBZ). We will ignore those bits for the present.
 *
 *
 *
 * @section sec_pgm_int The Intermediate Memory Context
 *
 * The world switch goes thru an intermediate memory context whose purpose is
 * to provide different mappings of the switcher code. All guest mappings are
 * also present in this context.
 *
 * The switcher code is mapped at the same location as on the host, at an
 * identity mapped location (physical equals virtual address), and at the
 * hypervisor location.
 *
 * PGM maintains page tables for 32-bit, PAE and AMD64 paging modes. This
 * simplifies switching guest CPU mode and maintaining consistency at the cost
 * of more code to do the work. All memory used for those page tables is
 * located below 4GB (this includes page tables for guest context mappings).
 *
 * @subsection subsec_pgm_int_gc Guest Context Mappings
 *
 * During assignment and relocation of a guest context mapping the intermediate
 * memory context is used to verify the new location.
 *
 * Guest context mappings are currently restricted to below 4GB, for reasons
 * of simplicity. This may change when we implement AMD64 support.
 *
 *
 *
 *
 * @section sec_pgm_misc Misc
 *
 * @subsection subsec_pgm_misc_diff Differences Between Legacy PAE and Long Mode PAE
 *
 * The differences between legacy PAE and long mode PAE are:
 *   -# PDPE bits 1, 2, 5 and 6 are defined differently. In legacy mode they
 *      are all marked down as must-be-zero, while in long mode 1, 2 and 5
 *      have the usual meanings and 6 is ignored (AMD). This means that upon
 *      switching to legacy PAE mode we'll have to clear these bits and when
 *      going to long mode they must be set. This applies to both the
 *      intermediate and shadow contexts, however we don't need to do it for
 *      the intermediate one since we're executing with CR0.WP at that time.
 *      (A sketch of this fixup follows the list.)
 *   -# CR3 allows a 32-byte aligned address in legacy mode, while in long
 *      mode a page aligned one is required.
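 *
 * A minimal sketch of the fixup implied by the first point (pPdpe and
 * fLongMode are hypothetical; the X86_PDPE_* flags are the ones used for the
 * intermediate context further down in this file):
 * @verbatim
   uint64_t u = pPdpe->u;
   if (fLongMode)  /* bits 1, 2 and 5 (RW, US, A) must be set for our use ... */
       u |= X86_PDPE_RW | X86_PDPE_US | X86_PDPE_A;
   else            /* ... while in legacy PAE mode the same bits are must-be-zero. */
       u &= ~(uint64_t)(X86_PDPE_RW | X86_PDPE_US | X86_PDPE_A);
   pPdpe->u = u;
   @endverbatim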
 *
 *
 * @section sec_pgm_handlers Access Handlers
 *
 * Placeholder.
 *
 *
 * @subsection sec_pgm_handlers_phys Physical Access Handlers
 *
 * Placeholder.
 *
 *
 * @subsection sec_pgm_handlers_virt Virtual Access Handlers
 *
 * We currently implement three types of virtual access handlers: ALL, WRITE
 * and HYPERVISOR (WRITE). See PGMVIRTHANDLERTYPE for some more details.
 *
 * The HYPERVISOR access handlers are kept in a separate tree since they don't
 * apply to physical pages (PGMTREES::HyperVirtHandlers) and only need to be
 * consulted in a special \#PF case. The ALL and WRITE ones are in the
 * PGMTREES::VirtHandlers tree, and the rest of this section is about those
 * handlers.
 *
 * We'll go thru the life cycle of a handler and try to make sense of it all;
 * don't know how successful this is gonna be...
 *
 * 1. A handler is registered thru the PGMR3HandlerVirtualRegister and
 *    PGMHandlerVirtualRegisterEx APIs. We check for conflicting virtual
 *    handlers and create a new node that is inserted into the AVL tree
 *    (range key). Then a full PGM resync is flagged (clear pool, sync cr3,
 *    update the virtual bit of PGMPAGE).
 *
 * 2. The following PGMSyncCR3/SyncCR3 operation will first invoke
 *    HandlerVirtualUpdate.
 *
 * 2a. HandlerVirtualUpdate will look up all the pages covered by virtual
 *     handlers via the current guest CR3 and update the physical page ->
 *     virtual handler translation. Needless to say, this doesn't exactly
 *     scale very well. If any changes are detected, it will flag a virtual
 *     bit update just like we did on registration. PGMPHYS pages with
 *     changes will have their virtual handler state reset to NONE.
 *
 * 2b. The virtual bit update process will iterate all the pages covered by
 *     all the virtual handlers and update the PGMPAGE virtual handler state
 *     to the max of all virtual handlers on that page (a rough sketch
 *     follows this list).
 *
 * 2c. Back in SyncCR3 we will now flush the entire shadow page cache to make
 *     sure we don't miss any alias mappings of the monitored pages.
 *
 * 2d. SyncCR3 will then proceed with syncing the CR3 table.
 *
 * 3. \#PF(np,read) on a page in the range. This will cause it to be synced
 *    read-only and resumed if it's a WRITE handler. If it's an ALL handler we
 *    will call the handlers like in the next step. If the physical mapping has
 *    changed we will - some time in the future - perform a handler callback
 *    (optional) and update the physical -> virtual handler cache.
 *
 * 4. \#PF(,write) on a page in the range. This will cause the handler to
 *    be invoked.
 *
 * 5. The guest invalidates the page and changes the physical backing or
 *    unmaps it. This should cause the invalidation callback to be invoked
 *    (it might not yet be 100% perfect). Exactly what happens next... is
 *    this where we mess up and end up out of sync for a while?
 *
 * 6. The handler is deregistered by the client via PGMHandlerVirtualDeregister.
 *    We will then set all PGMPAGEs in the physical -> virtual handler cache for
 *    this handler to NONE and trigger a full PGM resync (basically the same
 *    as in step 1), which means 2 is executed again.
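 *
 * A rough sketch of step 2b (every name here is hypothetical; the real code
 * works on PGMPAGE and the physical -> virtual handler cache):
 * @verbatim
   static void sketchUpdatePageStates(unsigned *pauPageStates, size_t iPageFirst,
                                      size_t cPages, unsigned uHandlerState)
   {
       /* Fold one handler's state into every page it covers, keeping the
          per-page maximum over all handlers. */
       for (size_t i = 0; i < cPages; i++)
           if (pauPageStates[iPageFirst + i] < uHandlerState)
               pauPageStates[iPageFirst + i] = uHandlerState;
   }
   @endverbatim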
 *
 *
 * @subsubsection sub_sec_pgm_handler_virt_todo TODOs
 *
 * There are a bunch of things that need to be done to make the virtual
 * handlers work 100% correctly and more efficiently.
 *
 * The first bit hasn't been implemented yet because it's going to slow the
 * whole mess down even more, and besides it seems to be working reliably for
 * our current uses. OTOH, some of the optimizations might end up more or less
 * implementing the missing bits, so we'll see.
 *
 * On the optimization side, the first thing to do is to try to avoid
 * unnecessary cache flushing. Then try to team up with the shadowing code to
 * track changes in mappings by means of access to them (shadow in), updates
 * to shadow pages, invlpg, and shadow PT discarding (perhaps).
 *
 * Some ideas that have popped up for optimization of current and new features:
 *    - A bitmap indicating where there are virtual handlers installed.
 *      (4KB => 2**20 pages, page 2**12 => covers 32-bit address space 1:1!)
 *    - Further optimize this by min/max (needs min/max avl getters).
 *    - Shadow page table entry bit (if any left)?
 *
 */


/** @page pg_pgmPhys PGMPhys - Physical Guest Memory Management.
 *
 *
 * Objectives:
 *      - Guest RAM over-commitment using memory ballooning,
 *        zero pages and general page sharing.
 *      - Moving or mirroring a VM onto a different physical machine.
 *
 *
 * @subsection subsec_pgmPhys_Definitions Definitions
 *
 * Allocation chunk - A RTR0MemObjAllocPhysNC object and the tracking
 * machinery associated with it.
 *
 *
 *
 *
 * @subsection subsec_pgmPhys_AllocPage Allocating a page.
 *
 * Initially we map *all* guest memory to the (per VM) zero page, which
 * means that none of the read functions will cause pages to be allocated.
 *
 * Exception: the accessed bit in page tables that have been shared. This
 * must be handled, but we must also make sure PGMGst*Modify doesn't make
 * unnecessary modifications.
 *
 * Allocation points:
 *      - PGMPhysWriteGCPhys and PGMPhysWrite.
 *      - Replacing a zero page mapping at \#PF.
 *      - Replacing a shared page mapping at \#PF.
 *      - ROM registration (currently MMR3RomRegister).
 *      - VM restore (pgmR3Load).
 *
 * For the first three it would make sense to keep a few pages handy
 * until we've reached the max memory commitment for the VM.
 *
 * For the ROM registration, we know exactly how many pages we need
 * and will request these from ring-0. For restore, we will save
 * the number of non-zero pages in the saved state and allocate
 * them up front. This would allow the ring-0 component to refuse
 * the request if there isn't sufficient memory available for VM use.
 *
 * Btw. for both ROM and restore allocations we won't be requiring
 * zeroed pages as they are going to be filled instantly.
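 *
 * A hedged sketch of the second allocation point, replacing a zero page
 * mapping at \#PF (every sketch* name is hypothetical):
 * @verbatim
   static int sketchReplaceZeroPage(PVM pVM, RTGCPHYS GCPhys)
   {
       RTHCPHYS HCPhysNew;
       int rc = sketchAllocZeroedPage(pVM, &HCPhysNew);   /* ring-0 allocator */
       if (VBOX_FAILURE(rc))
           return rc;                   /* commitment limit reached, etc. */
       /* Nothing to copy - the old backing was the zero page - so simply
          switch the mapping and let the guest retry the write. */
       sketchRemapGuestPage(pVM, GCPhys, HCPhysNew);
       return VINF_SUCCESS;
   }
   @endverbatim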
 *
 *
 * @subsection subsec_pgmPhys_FreePage Freeing a page
 *
 * There are a few points where a page can be freed:
 *      - After being replaced by the zero page.
 *      - After being replaced by a shared page.
 *      - After being ballooned by the guest additions.
 *      - At reset.
 *      - At restore.
 *
 * When freeing one or more pages they will be returned to the ring-0
 * component and replaced by the zero page.
 *
 * The reasoning for clearing out all the pages on reset is that it will
 * return us to the exact same state as on power on, and may thereby help
 * us reduce the memory load on the system. Further, it might have a
 * (temporary) positive influence on memory fragmentation (@see subsec_pgmPhys_Fragmentation).
 *
 * On restore, as mentioned under the allocation topic, pages should be
 * freed / allocated depending on how many are actually required by the
 * new VM state. The simplest approach is to do as on reset: free
 * all non-ROM pages and then allocate what we need.
 *
 * A measure to prevent some fragmentation would be to let each allocation
 * chunk have some affinity towards the VM that has allocated the most pages
 * from it. Also, try to make sure to allocate from allocation chunks that
 * are almost full. Admittedly, both these measures might work counter to
 * our intentions and it's probably not worth putting a lot of effort,
 * CPU time or memory into this.
 *
 *
 * @subsection subsec_pgmPhys_SharePage Sharing a page
 *
 * The basic idea is that there will be an idle priority kernel
 * thread walking the non-shared VM pages, hashing them and looking for
 * pages with the same checksum. If such pages are found, it will compare
 * them byte-by-byte to see if they actually are identical. If found to be
 * identical it will allocate a shared page, copy the content, check that
 * the page didn't change while doing this, and finally request both the
 * VMs to use the shared page instead. If the page is all zeros (special
 * checksum and byte-by-byte check) it will request the VM that owns it
 * to replace it with the zero page.
 *
 * To make this efficient, we will have to make sure not to try to share a
 * page that will change its contents soon. This part requires the most work.
 * A simple idea would be to request the VM to write monitor the page for
 * a while to make sure it isn't modified any time soon. Also, it may
 * make sense to skip pages that are being write monitored since this
 * information is readily available to the thread if it works on the
 * per-VM guest memory structures (presently called PGMRAMRANGE).
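 *
 * The comparison step as a sketch, with hypothetical sketch* helpers standing
 * in for the real checksumming and mapping machinery:
 * @verbatim
   static bool sketchCanShare(PVM pVM1, RTGCPHYS GCPhys1, PVM pVM2, RTGCPHYS GCPhys2)
   {
       /* Cheap first pass: the checksums must match. */
       if (sketchPageChecksum(pVM1, GCPhys1) != sketchPageChecksum(pVM2, GCPhys2))
           return false;
       /* Expensive second pass: the pages must be byte-by-byte identical. */
       return sketchPagesIdentical(pVM1, GCPhys1, pVM2, GCPhys2);
   }
   @endverbatim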
 *
 *
 * @subsection subsec_pgmPhys_Fragmentation Fragmentation Concerns and Counter Measures
 *
 * The pages are organized in allocation chunks in ring-0; this is a necessity
 * if we wish to have an OS agnostic approach to this whole thing. (On Linux we
 * could easily work on a page-by-page basis if we liked. Whether this is possible
 * or efficient on NT I don't quite know.) Fragmentation within these chunks may
 * become a problem as part of the idea here is that we wish to return memory to
 * the host system.
 *
 * For instance, when starting two VMs at the same time, they will both allocate
 * the guest memory on-demand and if permitted their page allocations will be
 * intermixed. Shut down one of the two VMs and it will be difficult to return
 * any memory to the host system because the page allocations for the two VMs
 * are mixed up in the same allocation chunks.
 *
 * To further complicate matters, when pages are freed because they have been
 * ballooned or become shared/zero the whole idea is that the page is supposed
 * to be reused by another VM or returned to the host system. This will cause
 * allocation chunks to contain pages belonging to different VMs and prevent
 * returning memory to the host when one of those VMs shuts down.
 *
 * The only way to really deal with this problem is to move pages. This can
 * either be done at VM shutdown and/or by the idle priority worker thread
 * that will be responsible for finding sharable/zero pages. The mechanisms
 * involved for coercing a VM to move a page (or to do it for it) will be
 * the same as when telling it to share/zero a page.
 *
 *
 * @subsection subsec_pgmPhys_Tracking Tracking Structures And Their Cost
 *
 * There's a difficult balance between keeping the per-page tracking structures
 * (global and guest page) easy to use and keeping them from eating too much
 * memory. We have limited virtual memory resources available when operating in
 * 32-bit kernel space (on 64-bit it's quite a different story). The tracking
 * structures will be designed such that we can deal with up to 32GB of memory
 * on a 32-bit system and essentially unlimited amounts on 64-bit ones.
 *
 *
 * @subsubsection subsubsec_pgmPhys_Tracking_Kernel Kernel Space
 *
 * @see pg_GMM
 *
 * @subsubsection subsubsec_pgmPhys_Tracking_PerVM Per-VM
 *
 * Fixed info is the physical address of the page (HCPhys) and the page id
 * (described above). Theoretically we'll need 48(-12) bits for the HCPhys part.
 * Today we're restricting ourselves to 40(-12) bits because this is the current
 * restriction of all AMD64 implementations (I think Barcelona will up this
 * to 48(-12) bits, not that it really matters) and I needed the bits for
 * tracking mappings of a page. 48-12 = 36. That leaves 28 bits, which means a
 * decent range for the page id: 2^(28+12) = 1TB.
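 *
 * Spelling out that arithmetic:
 * @verbatim
   48-bit physical address - 12-bit page offset  = 36 bits for the frame number.
   64-bit field - 36 bits                        = 28 bits left for the page id.
   2^28 page ids * 2^12 bytes/page = 2^40 bytes  = 1TB of addressable guest pages.
   @endverbatim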
 *
 * In addition to these, we'll have to keep maintaining the page flags as we
 * currently do. Although it wouldn't harm to optimize these quite a bit, like
 * for instance the ROM shouldn't depend on having a write handler installed
 * in order for it to become read-only. An RO/RW bit should be considered so
 * that the page syncing code doesn't have to mess about checking multiple
 * flag combinations (ROM || RW handler || write monitored) in order to
 * figure out how to set up a shadow PTE. But this, of course, is second
 * priority at present. Currently this requires 12 bits, but could probably
 * be optimized to ~8.
 *
 * Then there are the 24 bits used to track which shadow page tables are
 * currently mapping a page for the purpose of speeding up physical
 * access handlers, and thereby the page pool cache. More bits for this
 * purpose wouldn't hurt IIRC.
 *
 * Then there is a new field in which we need to record what kind of page
 * this is: shared, zero, normal or write-monitored-normal. This'll
 * require 2 bits. One bit might be needed for indicating whether a
 * write monitored page has been written to. And yet another one or
 * two for tracking migration status. 3-4 bits total then.
 *
 * Whatever is left can be used to record the shareability of a
 * page. The page checksum will not be stored in the per-VM table as
 * the idle thread will not be permitted to do modifications to it.
 * It will instead have to keep its own working set of potentially
 * shareable pages and their checksums and stuff.
 *
 * For the present we'll keep the current packing of the
 * PGMRAMRANGE::aHCPhys to keep the changes simple; of course,
 * we'll have to change it to a struct with a total of 128 bits at
 * our disposal.
 *
 * The initial layout will be like this:
 * @verbatim
   RTHCPHYS HCPhys;                 The current stuff.
       63:40                        Current shadow PT tracking stuff.
       39:12                        The physical page frame number.
       11:0                         The current flags.
   uint32_t u28PageId : 28;         The page id.
   uint32_t u2State : 2;            The page state { zero, shared, normal, write monitored }.
   uint32_t fWrittenTo : 1;         Whether a write monitored page was written to.
   uint32_t u1Reserved : 1;         Reserved for later.
   uint32_t u32Reserved;            Reserved for later, mostly sharing stats.
   @endverbatim
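 *
 * Read as a C struct, the initial layout would be along these lines (the
 * PGMPAGESKETCH name is of course made up):
 * @verbatim
   typedef struct PGMPAGESKETCH
   {
       RTHCPHYS    HCPhys;          /* 63:40 shadow PT tracking, 39:12 frame number, 11:0 flags. */
       uint32_t    u28PageId  : 28; /* The page id. */
       uint32_t    u2State    :  2; /* zero, shared, normal or write monitored. */
       uint32_t    fWrittenTo :  1; /* Whether a write monitored page was written to. */
       uint32_t    u1Reserved :  1; /* Reserved for later. */
       uint32_t    u32Reserved;     /* Reserved for later, mostly sharing stats. */
   } PGMPAGESKETCH;                 /* 64 + 32 + 32 = 128 bits total. */
   @endverbatim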
 *
 * The final layout will be something like this:
 * @verbatim
   RTHCPHYS HCPhys;                 The current stuff.
       63:48                        High page id (12+).
       47:12                        The physical page frame number.
       11:0                         Low page id.
   uint32_t fReadOnly : 1;          Whether it's a read-only page (ROM or monitored in some way).
   uint32_t u3Type : 3;             The page type {RESERVED, MMIO, MMIO2, ROM, shadowed ROM, RAM}.
   uint32_t u2PhysMon : 2;          Physical access handler type {none, read, write, all}.
   uint32_t u2VirtMon : 2;          Virtual access handler type {none, read, write, all}.
   uint32_t u2State : 2;            The page state { zero, shared, normal, write monitored }.
   uint32_t fWrittenTo : 1;         Whether a write monitored page was written to.
   uint32_t u20Reserved : 20;       Reserved for later, mostly sharing stats.
   uint32_t u32Tracking;            The shadow PT tracking stuff, roughly.
   @endverbatim
 *
 * Cost wise, this means we'll double the cost for guest memory. There isn't
 * any way around that, I'm afraid. It means that the cost of dealing out 32GB
 * of memory to one or more VMs is: (32GB >> PAGE_SHIFT) * 16 bytes, or 128MB.
 * For another example, the VM heap cost when assigning 1GB to a VM will be: 4MB.
 *
 * A couple of cost examples for the total cost per-VM + kernel:
 *    32-bit Windows and 32-bit linux:
 *        1GB guest ram, 256K pages:    4MB +  2MB(+) =   6MB
 *        4GB guest ram, 1M pages:     16MB +  8MB(+) =  24MB
 *       32GB guest ram, 8M pages:    128MB + 64MB(+) = 192MB
 *    64-bit Windows and 64-bit linux:
 *        1GB guest ram, 256K pages:    4MB +  3MB(+) =   7MB
 *        4GB guest ram, 1M pages:     16MB + 12MB(+) =  28MB
 *       32GB guest ram, 8M pages:    128MB + 96MB(+) = 224MB
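 *
 * Checking the first line: 1GB / 4KB = 262144 pages, and 262144 pages * 16 bytes
 * of per-VM tracking = 4MB; the "(+)" kernel-space part then comes to roughly
 * 8 bytes per page on a 32-bit host and 12 bytes per page on a 64-bit one.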
 *
 * UPDATE - 2007-09-27:
 *    Will need a ballooned flag/state too, because we cannot
 *    trust the guest 100% and reporting the same page as ballooned more
 *    than once will put the GMM off balance.
 *
 *
 * @subsection subsec_pgmPhys_Serializing Serializing Access
 *
 * Initially, we'll try a simple scheme (a critsect sketch follows the list):
 *
 *      - The per-VM RAM tracking structures (PGMRAMRANGE) are only modified
 *        by the EMT thread of that VM while in the pgm critsect.
 *      - Other threads in the VM process that need to make reliable use of
 *        the per-VM RAM tracking structures will enter the critsect.
 *      - No process external thread or kernel thread will ever try to enter
 *        the pgm critical section, as that just won't work.
 *      - The idle thread (and similar threads) doesn't need 100% reliable
 *        data when performing its tasks as the EMT thread will be the one to
 *        do the actual changes later anyway. So, as long as it only accesses
 *        the main ram range, it can do so by somehow preventing the VM from
 *        being destroyed while it works on it...
 *
 *      - The over-commitment management, including the allocating/freeing of
 *        chunks, is serialized by a ring-0 mutex lock (a fast one since the
 *        more mundane mutex implementation is broken on Linux).
 *      - A separate mutex protects the set of allocation chunks so
 *        that pages can be shared and/or freed up while some other VM is
 *        allocating more chunks. This mutex can be taken from under the
 *        other one, but not the other way around.
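 *
 * For the EMT side this is the usual critsect bracket; a minimal sketch using
 * the PGM critsect that PGMR3Init() creates below:
 * @verbatim
   int rc = PDMCritSectEnter(&pVM->pgm.s.CritSect, VERR_SEM_BUSY);
   AssertRCReturn(rc, rc);
   /* ... modify the PGMRAMRANGE structures ... */
   PDMCritSectLeave(&pVM->pgm.s.CritSect);
   @endverbatim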
 *
 *
 * @subsection subsec_pgmPhys_Request VM Request interface
 *
 * When in ring-0 it will become necessary to send requests to a VM so it can,
 * for instance, move a page while defragmenting during VM destroy. The idle
 * thread will make use of this interface to request VMs to set up shared
 * pages and to perform write monitoring of pages.
 *
 * I would propose an interface similar to the current VMReq interface, in
 * that it doesn't require locking and that the one sending the request may
 * wait for completion if it wishes to. This shouldn't be very difficult to
 * realize.
 *
 * The requests themselves are also pretty simple. They are basically:
 *      -# Check that some precondition is still true.
 *      -# Do the update.
 *      -# Update all shadow page tables involved with the page.
 *
 * The 3rd step is identical to what we're already doing when updating a
 * physical handler, see pgmHandlerPhysicalSetRamFlagsAndFlushShadowPTs.
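 *
 * A share-page request might then look something like this sketch (the
 * sketch* helpers are hypothetical):
 * @verbatim
   static int sketchSharePageRequest(PVM pVM, RTGCPHYS GCPhys, uint32_t u32Checksum)
   {
       /* Check that some precondition is still true. */
       if (sketchPageChecksum(pVM, GCPhys) != u32Checksum)
           return VERR_GENERAL_FAILURE;   /* the page changed, abandon it */
       /* Do the update. */
       sketchRemapToSharedPage(pVM, GCPhys);
       /* Update all shadow page tables involved with the page, like
          pgmHandlerPhysicalSetRamFlagsAndFlushShadowPTs does. */
       sketchFlushShadowPTEs(pVM, GCPhys);
       return VINF_SUCCESS;
   }
   @endverbatim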
 *
 *
 *
 * @section sec_pgmPhys_MappingCaches Mapping Caches
 *
 * In order to be able to map memory in and out and to be able to support
 * guests with more RAM than we've got virtual address space, we'll be
 * employing a mapping cache. There is already a tiny one for GC (see
 * PGMGCDynMapGCPageEx) and we'll create a similar one for ring-0, unless we
 * decide to set up a dedicated memory context for the HWACCM execution.
 *
 *
 * @subsection subsec_pgmPhys_MappingCaches_R3 Ring-3
 *
 * We've considered implementing the ring-3 mapping cache page based but found
 * that this was bothersome when one had to take into account TLBs+SMP and
 * portability (missing the necessary APIs on several platforms). There were
 * also some performance concerns with this approach which hadn't quite been
 * worked out.
 *
 * Instead, we'll be mapping allocation chunks into the VM process. This
 * simplifies matters quite a bit since we don't need to invent any new ring-0
 * stuff, only some minor RTR0MEMOBJ mapping stuff. The main concern compared
 * to the previous idea is that mapping or unmapping a 1MB chunk is more
 * costly than a single page, although how much more costly is uncertain. We'll
 * try to address this by using a very big cache, preferably bigger than the
 * actual VM RAM size if possible. The current VM RAM sizes should give some
 * idea for 32-bit boxes, while on 64-bit we can probably get away with
 * employing an unlimited cache.
 *
 * The cache has two parts, as already indicated: the ring-3 side and the
 * ring-0 side.
 *
 * The ring-0 side will be tied to the page allocator since it will operate on
 * the memory objects it contains. It will therefore require the first ring-0
 * mutex discussed in @ref subsec_pgmPhys_Serializing. We'll need some double
 * housekeeping wrt who has mapped what, I think, since both VMMR0.r0 and
 * RTR0MemObj will keep track of mapping relations.
 *
 * The ring-3 part will be protected by the pgm critsect. For simplicity, we'll
 * require anyone that desires to do changes to the mapping cache to do that
 * from within this critsect. Alternatively, we could employ a separate critsect
 * for serializing changes to the mapping cache as this would reduce potential
 * contention with other threads accessing mappings unrelated to the changes
 * that are in process. We can see about this later; contention will show
 * up in the statistics anyway, so it'll be simple to tell.
 *
 * The organization of the ring-3 part will be very much like how the allocation
 * chunks are organized in ring-0, that is in an AVL tree by chunk id. To avoid
 * having to walk the tree all the time, we'll have a couple of lookaside entries
 * like we do for I/O ports and MMIO in IOM.
 *
 * The simplified flow of a PGMPhysRead/Write function (a C sketch follows
 * the list):
 *      -# Enter the PGM critsect.
 *      -# Lookup GCPhys in the ram ranges and get the Page ID.
 *      -# Calc the Allocation Chunk ID from the Page ID.
 *      -# Check the lookaside entries and then the AVL tree for the Chunk ID.
 *         If not found in cache:
 *              -# Call ring-0 and request it to be mapped and supply
 *                 a chunk to be unmapped if the cache is maxed out already.
 *              -# Insert the new mapping into the AVL tree (id + R3 address).
 *      -# Update the relevant lookaside entry and return the mapping address.
 *      -# Do the read/write according to monitoring flags and everything.
 *      -# Leave the critsect.
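 *
 * The same flow as a hedged C sketch; the sketch* helpers are invented
 * stand-ins, while the critsect and the chunk id / page id split are as
 * described above:
 * @verbatim
   static int sketchPhysRead(PVM pVM, RTGCPHYS GCPhys, void *pvBuf, size_t cb)
   {
       int rc = PDMCritSectEnter(&pVM->pgm.s.CritSect, VERR_SEM_BUSY);
       AssertRCReturn(rc, rc);

       uint32_t idPage  = sketchLookupPageId(pVM, GCPhys);      /* ram range lookup */
       uint32_t idChunk = sketchChunkIdFromPageId(idPage);
       void    *pvChunk = sketchLookasideOrTreeLookup(pVM, idChunk);
       if (!pvChunk)
           pvChunk = sketchMapChunkFromRing0(pVM, idChunk);     /* may unmap another chunk */

       memcpy(pvBuf, (uint8_t *)pvChunk + sketchChunkOffset(idPage, GCPhys), cb);

       PDMCritSectLeave(&pVM->pgm.s.CritSect);
       return VINF_SUCCESS;
   }
   @endverbatim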
 *
 *
 * @section sec_pgmPhys_Fallback Fallback
 *
 * Currently all the "second tier" hosts will not support the RTR0MemObjAllocPhysNC
 * API and thus require a fallback.
 *
 * So, when RTR0MemObjAllocPhysNC returns VERR_NOT_SUPPORTED the page allocator
 * will return to the ring-3 caller (and later ring-0) asking it to seed
 * the page allocator with some fresh pages (VERR_GMM_SEED_ME). Ring-3 will
 * then perform an SUPPageAlloc(cbChunk >> PAGE_SHIFT) call and make a
 * "SeededAllocPages" call to ring-0.
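 *
 * The ring-3 side of that round trip, sketched; the gmmSketch* calls are
 * hypothetical stand-ins for the real allocation requests:
 * @verbatim
   int rc = gmmSketchAllocatePages(pVM, cPages);
   if (rc == VERR_GMM_SEED_ME)
   {
       void *pvChunk;
       rc = SUPPageAlloc(cbChunk >> PAGE_SHIFT, &pvChunk);
       if (VBOX_SUCCESS(rc))
           rc = gmmSketchSeededAllocPages(pVM, pvChunk, cPages);
   }
   @endverbatim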
 *
 * The first time ring-0 sees the VERR_NOT_SUPPORTED failure it will disable
 * all page sharing (zero page detection will continue). It will also force
 * all allocations to come from the VM which seeded the pages. Both these
 * measures are taken to make sure that there will never be any need for
 * mapping anything into ring-3 - everything will be mapped already.
 *
 * Whether we'll continue to use the current MM locked memory management
 * for this I don't quite know (I'd prefer not to and just ditch that
 * altogether); we'll see what's simplest to do.
 *
 *
 *
 * @section sec_pgmPhys_Changes Changes
 *
 * Breakdown of the changes involved?
 */


/** Saved state data unit version. */
#define PGM_SAVED_STATE_VERSION 6

/*******************************************************************************
*   Header Files                                                               *
*******************************************************************************/
#define LOG_GROUP LOG_GROUP_PGM
#include <VBox/dbgf.h>
#include <VBox/pgm.h>
#include <VBox/cpum.h>
#include <VBox/iom.h>
#include <VBox/sup.h>
#include <VBox/mm.h>
#include <VBox/em.h>
#include <VBox/stam.h>
#include <VBox/rem.h>
#include <VBox/selm.h>
#include <VBox/ssm.h>
#include "PGMInternal.h"
#include <VBox/vm.h>
#include <VBox/dbg.h>
#include <VBox/hwaccm.h>

#include <iprt/assert.h>
#include <iprt/alloc.h>
#include <iprt/asm.h>
#include <iprt/thread.h>
#include <iprt/string.h>
#include <VBox/param.h>
#include <VBox/err.h>



/*******************************************************************************
*   Internal Functions                                                         *
*******************************************************************************/
static int                pgmR3InitPaging(PVM pVM);
static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(int)  pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser);
static DECLCALLBACK(int)  pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
static DECLCALLBACK(int)  pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
#ifdef VBOX_STRICT
static DECLCALLBACK(void) pgmR3ResetNoMorePhysWritesFlag(PVM pVM, VMSTATE enmState, VMSTATE enmOldState, void *pvUser);
#endif
static DECLCALLBACK(int)  pgmR3Save(PVM pVM, PSSMHANDLE pSSM);
static DECLCALLBACK(int)  pgmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version);
static int                pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0);
static void               pgmR3ModeDataSwitch(PVM pVM, PGMMODE enmShw, PGMMODE enmGst);
static PGMMODE            pgmR3CalcShadowMode(PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher);

#ifdef VBOX_WITH_STATISTICS
static void pgmR3InitStats(PVM pVM);
#endif

#ifdef VBOX_WITH_DEBUGGER
/** @todo all but the two last commands must be converted to 'info'. */
static DECLCALLBACK(int) pgmR3CmdRam(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
static DECLCALLBACK(int) pgmR3CmdMap(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
static DECLCALLBACK(int) pgmR3CmdSync(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
static DECLCALLBACK(int) pgmR3CmdSyncAlways(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult);
#endif


/*******************************************************************************
*   Global Variables                                                           *
*******************************************************************************/
#ifdef VBOX_WITH_DEBUGGER
/** Command descriptors. */
static const DBGCCMD g_aCmds[] =
{
    /* pszCmd,         cArgsMin, cArgsMax, paArgDesc, cArgDescs, pResultDesc, fFlags, pfnHandler,         pszSyntax, ...pszDescription */
    { "pgmram",        0,        0,        NULL,      0,         NULL,        0,      pgmR3CmdRam,        "",        "Display the ram ranges." },
    { "pgmmap",        0,        0,        NULL,      0,         NULL,        0,      pgmR3CmdMap,        "",        "Display the mapping ranges." },
    { "pgmsync",       0,        0,        NULL,      0,         NULL,        0,      pgmR3CmdSync,       "",        "Sync the CR3 page." },
    { "pgmsyncalways", 0,        0,        NULL,      0,         NULL,        0,      pgmR3CmdSyncAlways, "",        "Toggle permanent CR3 syncing." },
};
#endif




#if 1 /// @todo ndef RT_ARCH_AMD64
/*
 * Shadow - 32-bit mode
 */
#define PGM_SHW_TYPE                PGM_TYPE_32BIT
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_32BIT(name)
#define PGM_SHW_NAME_GC_STR(name)   PGM_SHW_NAME_GC_32BIT_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_32BIT_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE                PGM_TYPE_REAL
#define PGM_GST_NAME(name)          PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_GC_STR(name)   PGM_GST_NAME_GC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_32BIT_REAL(name)
#define PGM_BTH_NAME_GC_STR(name)   PGM_BTH_NAME_GC_32BIT_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_32BIT_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_32BIT_PT_FOR_PHYS
#include "PGMGst.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE                PGM_TYPE_PROT
#define PGM_GST_NAME(name)          PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_GC_STR(name)   PGM_GST_NAME_GC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_32BIT_PROT(name)
#define PGM_BTH_NAME_GC_STR(name)   PGM_BTH_NAME_GC_32BIT_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_32BIT_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_32BIT_PT_FOR_PHYS
#include "PGMGst.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE                PGM_TYPE_32BIT
#define PGM_GST_NAME(name)          PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_GC_STR(name)   PGM_GST_NAME_GC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_32BIT_32BIT(name)
#define PGM_BTH_NAME_GC_STR(name)   PGM_BTH_NAME_GC_32BIT_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_32BIT_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_32BIT_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_32BIT_PT_FOR_32BIT_4MB
#include "PGMGst.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_GC_STR
#undef PGM_SHW_NAME_R0_STR
#endif /* !RT_ARCH_AMD64 */


/*
 * Shadow - PAE mode
 */
#define PGM_SHW_TYPE                PGM_TYPE_PAE
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_PAE(name)
#define PGM_SHW_NAME_GC_STR(name)   PGM_SHW_NAME_GC_PAE_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_REAL(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE                PGM_TYPE_REAL
#define PGM_GST_NAME(name)          PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_GC_STR(name)   PGM_GST_NAME_GC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_REAL(name)
#define PGM_BTH_NAME_GC_STR(name)   PGM_BTH_NAME_GC_PAE_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_PAE_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE                PGM_TYPE_PROT
#define PGM_GST_NAME(name)          PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_GC_STR(name)   PGM_GST_NAME_GC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_PROT(name)
#define PGM_BTH_NAME_GC_STR(name)   PGM_BTH_NAME_GC_PAE_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_PAE_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE                PGM_TYPE_32BIT
#define PGM_GST_NAME(name)          PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_GC_STR(name)   PGM_GST_NAME_GC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_32BIT(name)
#define PGM_BTH_NAME_GC_STR(name)   PGM_BTH_NAME_GC_PAE_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_PAE_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE                PGM_TYPE_PAE
#define PGM_GST_NAME(name)          PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_GC_STR(name)   PGM_GST_NAME_GC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_PAE_PAE(name)
#define PGM_BTH_NAME_GC_STR(name)   PGM_BTH_NAME_GC_PAE_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_PAE_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMGst.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_GC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - AMD64 mode
 */
#define PGM_SHW_TYPE                PGM_TYPE_AMD64
#define PGM_SHW_NAME(name)          PGM_SHW_NAME_AMD64(name)
#define PGM_SHW_NAME_GC_STR(name)   PGM_SHW_NAME_GC_AMD64_STR(name)
#define PGM_SHW_NAME_R0_STR(name)   PGM_SHW_NAME_R0_AMD64_STR(name)
#include "PGMShw.h"

/* Guest - AMD64 mode */
#define PGM_GST_TYPE                PGM_TYPE_AMD64
#define PGM_GST_NAME(name)          PGM_GST_NAME_AMD64(name)
#define PGM_GST_NAME_GC_STR(name)   PGM_GST_NAME_GC_AMD64_STR(name)
#define PGM_GST_NAME_R0_STR(name)   PGM_GST_NAME_R0_AMD64_STR(name)
#define PGM_BTH_NAME(name)          PGM_BTH_NAME_AMD64_AMD64(name)
#define PGM_BTH_NAME_GC_STR(name)   PGM_BTH_NAME_GC_AMD64_AMD64_STR(name)
#define PGM_BTH_NAME_R0_STR(name)   PGM_BTH_NAME_R0_AMD64_AMD64_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT   PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG  PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMGst.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_GC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_GC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_GC_STR
#undef PGM_SHW_NAME_R0_STR


/**
 * Initiates the paging of the VM.
 *
 * @returns VBox status code.
 * @param   pVM     Pointer to the VM structure.
 */
PGMR3DECL(int) PGMR3Init(PVM pVM)
{
    LogFlow(("PGMR3Init:\n"));

    /*
     * Assert alignment and sizes.
     */
    AssertRelease(sizeof(pVM->pgm.s) <= sizeof(pVM->pgm.padding));

    /*
     * Init the structure.
     */
    pVM->pgm.s.offVM = RT_OFFSETOF(VM, pgm.s);
    pVM->pgm.s.enmShadowMode = PGMMODE_INVALID;
    pVM->pgm.s.enmGuestMode = PGMMODE_INVALID;
    pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;
    pVM->pgm.s.GCPhysCR3 = NIL_RTGCPHYS;
    pVM->pgm.s.GCPhysGstCR3Monitored = NIL_RTGCPHYS;
    pVM->pgm.s.fA20Enabled = true;
    pVM->pgm.s.pGstPaePDPTHC = NULL;
    pVM->pgm.s.pGstPaePDPTGC = 0;
    for (unsigned i = 0; i < ELEMENTS(pVM->pgm.s.apGstPaePDsHC); i++)
    {
        pVM->pgm.s.apGstPaePDsHC[i] = NULL;
        pVM->pgm.s.apGstPaePDsGC[i] = 0;
        pVM->pgm.s.aGCPhysGstPaePDs[i] = NIL_RTGCPHYS;
    }

#ifdef VBOX_STRICT
    VMR3AtStateRegister(pVM, pgmR3ResetNoMorePhysWritesFlag, NULL);
#endif

    /*
     * Get the configured RAM size - to estimate saved state size.
     */
    uint64_t cbRam;
    int rc = CFGMR3QueryU64(CFGMR3GetRoot(pVM), "RamSize", &cbRam);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        cbRam = pVM->pgm.s.cbRamSize = 0;
    else if (VBOX_SUCCESS(rc))
    {
        if (cbRam < PAGE_SIZE)
            cbRam = 0;
        cbRam = RT_ALIGN_64(cbRam, PAGE_SIZE);
        pVM->pgm.s.cbRamSize = (RTUINT)cbRam;
    }
    else
    {
        AssertMsgFailed(("Configuration error: Failed to query integer \"RamSize\", rc=%Vrc.\n", rc));
        return rc;
    }

    /*
     * Register saved state data unit.
     */
    rc = SSMR3RegisterInternal(pVM, "pgm", 1, PGM_SAVED_STATE_VERSION, (size_t)cbRam + sizeof(PGM),
                               NULL, pgmR3Save, NULL,
                               NULL, pgmR3Load, NULL);
    if (VBOX_FAILURE(rc))
        return rc;

    /*
     * Initialize the PGM critical section and flush the phys TLBs.
     */
    rc = PDMR3CritSectInit(pVM, &pVM->pgm.s.CritSect, "PGM");
    AssertRCReturn(rc, rc);

    PGMR3PhysChunkInvalidateTLB(pVM);
    PGMPhysInvalidatePageR3MapTLB(pVM);
    PGMPhysInvalidatePageR0MapTLB(pVM);
    PGMPhysInvalidatePageGCMapTLB(pVM);

    /*
     * Trees
     */
    rc = MMHyperAlloc(pVM, sizeof(PGMTREES), 0, MM_TAG_PGM, (void **)&pVM->pgm.s.pTreesHC);
    if (VBOX_SUCCESS(rc))
    {
        pVM->pgm.s.pTreesGC = MMHyperHC2GC(pVM, pVM->pgm.s.pTreesHC);

        /*
         * Allocate the zero page.
         */
        rc = MMHyperAlloc(pVM, PAGE_SIZE, PAGE_SIZE, MM_TAG_PGM, &pVM->pgm.s.pvZeroPgR3);
    }
    if (VBOX_SUCCESS(rc))
    {
        pVM->pgm.s.pvZeroPgGC = MMHyperR3ToGC(pVM, pVM->pgm.s.pvZeroPgR3);
        pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
        AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTHCPHYS);
        pVM->pgm.s.HCPhysZeroPg = MMR3HyperHCVirt2HCPhys(pVM, pVM->pgm.s.pvZeroPgR3);
        AssertRelease(pVM->pgm.s.HCPhysZeroPg != NIL_RTHCPHYS);

        /*
         * Init the paging.
         */
        rc = pgmR3InitPaging(pVM);
    }
    if (VBOX_SUCCESS(rc))
    {
        /*
         * Init the page pool.
         */
        rc = pgmR3PoolInit(pVM);
    }
    if (VBOX_SUCCESS(rc))
    {
        /*
         * Info & statistics
         */
        DBGFR3InfoRegisterInternal(pVM, "mode",
                                   "Shows the current paging mode. "
                                   "Recognizes 'all', 'guest', 'shadow' and 'host' as arguments, defaulting to 'all' if nothing's given.",
                                   pgmR3InfoMode);
        DBGFR3InfoRegisterInternal(pVM, "pgmcr3",
                                   "Dumps all the entries in the top level paging table. No arguments.",
                                   pgmR3InfoCr3);
        DBGFR3InfoRegisterInternal(pVM, "phys",
                                   "Dumps all the physical address ranges. No arguments.",
                                   pgmR3PhysInfo);
        DBGFR3InfoRegisterInternal(pVM, "handlers",
                                   "Dumps physical, virtual and hyper virtual handlers. "
                                   "Pass 'phys', 'virt' or 'hyper' as argument if only one kind is wanted. "
                                   "Add 'nost' if the statistics are unwanted; use together with 'all' or explicit selection.",
                                   pgmR3InfoHandlers);
        DBGFR3InfoRegisterInternal(pVM, "mappings",
                                   "Dumps guest mappings.",
                                   pgmR3MapInfo);

        STAM_REL_REG(pVM, &pVM->pgm.s.cGuestModeChanges, STAMTYPE_COUNTER, "/PGM/cGuestModeChanges", STAMUNIT_OCCURENCES, "Number of guest mode changes.");
#ifdef VBOX_WITH_STATISTICS
        pgmR3InitStats(pVM);
#endif
#ifdef VBOX_WITH_DEBUGGER
        /*
         * Debugger commands.
         */
        static bool fRegisteredCmds = false;
        if (!fRegisteredCmds)
        {
            int rc = DBGCRegisterCommands(&g_aCmds[0], ELEMENTS(g_aCmds));
            if (VBOX_SUCCESS(rc))
                fRegisteredCmds = true;
        }
#endif
        return VINF_SUCCESS;
    }

    /* Almost no cleanup necessary, MM frees all memory. */
    PDMR3CritSectDelete(&pVM->pgm.s.CritSect);

    return rc;
}


/**
 * Init paging.
 *
 * Since we need to check what mode the host is operating in before we can choose
 * the right paging functions for the host we have to delay this until R0 has
 * been initialized.
 *
 * @returns VBox status code.
 * @param   pVM     VM handle.
 */
static int pgmR3InitPaging(PVM pVM)
{
    /*
     * Force a recalculation of modes and switcher so everyone gets notified.
     */
    pVM->pgm.s.enmShadowMode = PGMMODE_INVALID;
    pVM->pgm.s.enmGuestMode = PGMMODE_INVALID;
    pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;

    /*
     * Allocate static mapping space for whatever the cr3 register
     * points to and in the case of PAE mode to the 4 PDs.
     */
    int rc = MMR3HyperReserve(pVM, PAGE_SIZE * 5, "CR3 mapping", &pVM->pgm.s.GCPtrCR3Mapping);
    if (VBOX_FAILURE(rc))
    {
        AssertMsgFailed(("Failed to reserve pages for the CR3 mapping in HMA, rc=%Vrc\n", rc));
        return rc;
    }
    MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);

    /*
     * Allocate pages for the three possible intermediate contexts
     * (AMD64, PAE and plain 32-Bit). We maintain all three contexts
     * for the sake of simplicity. The AMD64 uses the PAE for the
     * lower levels, making the total number of pages 11 (3 + 7 + 1).
     *
     * We assume that two page tables will be enough for the core code
     * mappings (HC virtual and identity).
     */
    pVM->pgm.s.pInterPD         = (PX86PD)MMR3PageAllocLow(pVM);
    pVM->pgm.s.apInterPTs[0]    = (PX86PT)MMR3PageAllocLow(pVM);
    pVM->pgm.s.apInterPTs[1]    = (PX86PT)MMR3PageAllocLow(pVM);
    pVM->pgm.s.apInterPaePTs[0] = (PX86PTPAE)MMR3PageAlloc(pVM);
    pVM->pgm.s.apInterPaePTs[1] = (PX86PTPAE)MMR3PageAlloc(pVM);
    pVM->pgm.s.apInterPaePDs[0] = (PX86PDPAE)MMR3PageAlloc(pVM);
    pVM->pgm.s.apInterPaePDs[1] = (PX86PDPAE)MMR3PageAlloc(pVM);
    pVM->pgm.s.apInterPaePDs[2] = (PX86PDPAE)MMR3PageAlloc(pVM);
    pVM->pgm.s.apInterPaePDs[3] = (PX86PDPAE)MMR3PageAlloc(pVM);
    pVM->pgm.s.pInterPaePDPT    = (PX86PDPT)MMR3PageAllocLow(pVM);
    pVM->pgm.s.pInterPaePDPT64  = (PX86PDPT)MMR3PageAllocLow(pVM);
    pVM->pgm.s.pInterPaePML4    = (PX86PML4)MMR3PageAllocLow(pVM);
    if (    !pVM->pgm.s.pInterPD
        ||  !pVM->pgm.s.apInterPTs[0]
        ||  !pVM->pgm.s.apInterPTs[1]
        ||  !pVM->pgm.s.apInterPaePTs[0]
        ||  !pVM->pgm.s.apInterPaePTs[1]
        ||  !pVM->pgm.s.apInterPaePDs[0]
        ||  !pVM->pgm.s.apInterPaePDs[1]
        ||  !pVM->pgm.s.apInterPaePDs[2]
        ||  !pVM->pgm.s.apInterPaePDs[3]
        ||  !pVM->pgm.s.pInterPaePDPT
        ||  !pVM->pgm.s.pInterPaePDPT64
        ||  !pVM->pgm.s.pInterPaePML4)
    {
        AssertMsgFailed(("Failed to allocate pages for the intermediate context!\n"));
        return VERR_NO_PAGE_MEMORY;
    }

    pVM->pgm.s.HCPhysInterPD = MMPage2Phys(pVM, pVM->pgm.s.pInterPD);
    AssertRelease(pVM->pgm.s.HCPhysInterPD != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPD & PAGE_OFFSET_MASK));
    pVM->pgm.s.HCPhysInterPaePDPT = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT);
    AssertRelease(pVM->pgm.s.HCPhysInterPaePDPT != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePDPT & PAGE_OFFSET_MASK));
    pVM->pgm.s.HCPhysInterPaePML4 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePML4);
    AssertRelease(pVM->pgm.s.HCPhysInterPaePML4 != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePML4 & PAGE_OFFSET_MASK));

    /*
     * Initialize the pages, setting up the PML4 and PDPT for repetitive 4GB action.
     */
    ASMMemZeroPage(pVM->pgm.s.pInterPD);
    ASMMemZeroPage(pVM->pgm.s.apInterPTs[0]);
    ASMMemZeroPage(pVM->pgm.s.apInterPTs[1]);

    ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[0]);
    ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[1]);

    ASMMemZeroPage(pVM->pgm.s.pInterPaePDPT);
    for (unsigned i = 0; i < ELEMENTS(pVM->pgm.s.apInterPaePDs); i++)
    {
        ASMMemZeroPage(pVM->pgm.s.apInterPaePDs[i]);
        pVM->pgm.s.pInterPaePDPT->a[i].u = X86_PDPE_P | PGM_PLXFLAGS_PERMANENT
                                         | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[i]);
    }

    for (unsigned i = 0; i < ELEMENTS(pVM->pgm.s.pInterPaePDPT64->a); i++)
    {
        const unsigned iPD = i % ELEMENTS(pVM->pgm.s.apInterPaePDs);
        pVM->pgm.s.pInterPaePDPT64->a[i].u = X86_PDPE_P | X86_PDPE_RW | X86_PDPE_US | X86_PDPE_A | PGM_PLXFLAGS_PERMANENT
                                           | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[iPD]);
    }

    RTHCPHYS HCPhysInterPaePDPT64 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64);
    for (unsigned i = 0; i < ELEMENTS(pVM->pgm.s.pInterPaePML4->a); i++)
        pVM->pgm.s.pInterPaePML4->a[i].u = X86_PML4E_P | X86_PML4E_RW | X86_PML4E_US | X86_PML4E_A | PGM_PLXFLAGS_PERMANENT
                                         | HCPhysInterPaePDPT64;

    /*
     * Allocate pages for the three possible guest contexts (AMD64, PAE and plain 32-Bit).
     * We allocate pages for all three possibilities in order to simplify mappings and
     * avoid resource failure during mode switches. So, we need to cover all levels of
     * the first 4GB down to PD level.
     * As with the intermediate context, AMD64 uses the PAE PDPT and PDs.
     */
    pVM->pgm.s.pHC32BitPD    = (PX86PD)MMR3PageAllocLow(pVM);
    pVM->pgm.s.apHCPaePDs[0] = (PX86PDPAE)MMR3PageAlloc(pVM);
    pVM->pgm.s.apHCPaePDs[1] = (PX86PDPAE)MMR3PageAlloc(pVM);
    AssertRelease((uintptr_t)pVM->pgm.s.apHCPaePDs[0] + PAGE_SIZE == (uintptr_t)pVM->pgm.s.apHCPaePDs[1]);
    pVM->pgm.s.apHCPaePDs[2] = (PX86PDPAE)MMR3PageAlloc(pVM);
    AssertRelease((uintptr_t)pVM->pgm.s.apHCPaePDs[1] + PAGE_SIZE == (uintptr_t)pVM->pgm.s.apHCPaePDs[2]);
    pVM->pgm.s.apHCPaePDs[3] = (PX86PDPAE)MMR3PageAlloc(pVM);
    AssertRelease((uintptr_t)pVM->pgm.s.apHCPaePDs[2] + PAGE_SIZE == (uintptr_t)pVM->pgm.s.apHCPaePDs[3]);
    pVM->pgm.s.pHCPaePDPT    = (PX86PDPT)MMR3PageAllocLow(pVM);
    pVM->pgm.s.pHCPaePML4    = (PX86PML4)MMR3PageAllocLow(pVM);
    if (    !pVM->pgm.s.pHC32BitPD
        ||  !pVM->pgm.s.apHCPaePDs[0]
        ||  !pVM->pgm.s.apHCPaePDs[1]
        ||  !pVM->pgm.s.apHCPaePDs[2]
        ||  !pVM->pgm.s.apHCPaePDs[3]
        ||  !pVM->pgm.s.pHCPaePDPT
        ||  !pVM->pgm.s.pHCPaePML4)
    {
        AssertMsgFailed(("Failed to allocate pages for the guest context!\n"));
        return VERR_NO_PAGE_MEMORY;
    }

    /* Get the physical addresses. */
    pVM->pgm.s.HCPhys32BitPD = MMPage2Phys(pVM, pVM->pgm.s.pHC32BitPD);
    Assert(MMPagePhys2Page(pVM, pVM->pgm.s.HCPhys32BitPD) == pVM->pgm.s.pHC32BitPD);
    pVM->pgm.s.aHCPhysPaePDs[0] = MMPage2Phys(pVM, pVM->pgm.s.apHCPaePDs[0]);
    pVM->pgm.s.aHCPhysPaePDs[1] = MMPage2Phys(pVM, pVM->pgm.s.apHCPaePDs[1]);
    pVM->pgm.s.aHCPhysPaePDs[2] = MMPage2Phys(pVM, pVM->pgm.s.apHCPaePDs[2]);
    pVM->pgm.s.aHCPhysPaePDs[3] = MMPage2Phys(pVM, pVM->pgm.s.apHCPaePDs[3]);
    pVM->pgm.s.HCPhysPaePDPT = MMPage2Phys(pVM, pVM->pgm.s.pHCPaePDPT);
    pVM->pgm.s.HCPhysPaePML4 = MMPage2Phys(pVM, pVM->pgm.s.pHCPaePML4);

    /*
     * Initialize the pages, setting up the PML4 and PDPT for action below 4GB.
     */
    ASMMemZero32(pVM->pgm.s.pHC32BitPD, PAGE_SIZE);

    ASMMemZero32(pVM->pgm.s.pHCPaePDPT, PAGE_SIZE);
    for (unsigned i = 0; i < ELEMENTS(pVM->pgm.s.apHCPaePDs); i++)
    {
        ASMMemZero32(pVM->pgm.s.apHCPaePDs[i], PAGE_SIZE);
        pVM->pgm.s.pHCPaePDPT->a[i].u = X86_PDPE_P | PGM_PLXFLAGS_PERMANENT | pVM->pgm.s.aHCPhysPaePDs[i];
        /* The flags will be corrected when entering and leaving long mode. */
    }

    ASMMemZero32(pVM->pgm.s.pHCPaePML4, PAGE_SIZE);
    pVM->pgm.s.pHCPaePML4->a[0].u = X86_PML4E_P | X86_PML4E_RW | X86_PML4E_A
                                  | PGM_PLXFLAGS_PERMANENT | pVM->pgm.s.HCPhysPaePDPT;

    CPUMSetHyperCR3(pVM, (uint32_t)pVM->pgm.s.HCPhys32BitPD);

    /*
     * Initialize paging workers and mode from current host mode
     * and the guest running in real mode.
     */
    pVM->pgm.s.enmHostMode = SUPGetPagingMode();
    switch (pVM->pgm.s.enmHostMode)
    {
        case SUPPAGINGMODE_32_BIT:
        case SUPPAGINGMODE_32_BIT_GLOBAL:
        case SUPPAGINGMODE_PAE:
        case SUPPAGINGMODE_PAE_GLOBAL:
        case SUPPAGINGMODE_PAE_NX:
        case SUPPAGINGMODE_PAE_GLOBAL_NX:
            break;

        case SUPPAGINGMODE_AMD64:
        case SUPPAGINGMODE_AMD64_GLOBAL:
        case SUPPAGINGMODE_AMD64_NX:
        case SUPPAGINGMODE_AMD64_GLOBAL_NX:
#ifndef VBOX_WITH_HYBIRD_32BIT_KERNEL
            if (ARCH_BITS != 64)
            {
                AssertMsgFailed(("Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
                LogRel(("Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
                return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
            }
#endif
            break;
        default:
            AssertMsgFailed(("Host mode %d is not supported\n", pVM->pgm.s.enmHostMode));
            return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
    }
    rc = pgmR3ModeDataInit(pVM, false /* don't resolve GC and R0 syms yet */);
    if (VBOX_SUCCESS(rc))
        rc = pgmR3ChangeMode(pVM, PGMMODE_REAL);
    if (VBOX_SUCCESS(rc))
    {
        LogFlow(("pgmR3InitPaging: returns successfully\n"));
#if HC_ARCH_BITS == 64
        LogRel(("Debug: HCPhys32BitPD=%VHp aHCPhysPaePDs={%VHp,%VHp,%VHp,%VHp} HCPhysPaePDPT=%VHp HCPhysPaePML4=%VHp\n",
                pVM->pgm.s.HCPhys32BitPD, pVM->pgm.s.aHCPhysPaePDs[0], pVM->pgm.s.aHCPhysPaePDs[1], pVM->pgm.s.aHCPhysPaePDs[2], pVM->pgm.s.aHCPhysPaePDs[3],
                pVM->pgm.s.HCPhysPaePDPT, pVM->pgm.s.HCPhysPaePML4));
        LogRel(("Debug: HCPhysInterPD=%VHp HCPhysInterPaePDPT=%VHp HCPhysInterPaePML4=%VHp\n",
                pVM->pgm.s.HCPhysInterPD, pVM->pgm.s.HCPhysInterPaePDPT, pVM->pgm.s.HCPhysInterPaePML4));
        LogRel(("Debug: apInterPTs={%VHp,%VHp} apInterPaePTs={%VHp,%VHp} apInterPaePDs={%VHp,%VHp,%VHp,%VHp} pInterPaePDPT64=%VHp\n",
                MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[1]),
                MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[1]),
                MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[1]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[2]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[3]),
                MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64)));
#endif

        return VINF_SUCCESS;
    }

    LogFlow(("pgmR3InitPaging: returns %Vrc\n", rc));
    return rc;
}
1247
1248
1249#ifdef VBOX_WITH_STATISTICS
1250/**
1251 * Init statistics
1252 */
1253static void pgmR3InitStats(PVM pVM)
1254{
1255 PPGM pPGM = &pVM->pgm.s;
1256 STAM_REG(pVM, &pPGM->StatGCInvalidatePage, STAMTYPE_PROFILE, "/PGM/GC/InvalidatePage", STAMUNIT_TICKS_PER_CALL, "PGMGCInvalidatePage() profiling.");
1257 STAM_REG(pVM, &pPGM->StatGCInvalidatePage4KBPages, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/4KBPages", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was called for a 4KB page.");
1258 STAM_REG(pVM, &pPGM->StatGCInvalidatePage4MBPages, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/4MBPages", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was called for a 4MB page.");
1259 STAM_REG(pVM, &pPGM->StatGCInvalidatePage4MBPagesSkip, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/4MBPagesSkip",STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() skipped a 4MB page.");
1260 STAM_REG(pVM, &pPGM->StatGCInvalidatePagePDMappings, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/PDMappings", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was called for a page directory containing mappings (no conflict).");
1261 STAM_REG(pVM, &pPGM->StatGCInvalidatePagePDNAs, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/PDNAs", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was called for a not accessed page directory.");
1262 STAM_REG(pVM, &pPGM->StatGCInvalidatePagePDNPs, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/PDNPs", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was called for a not present page directory.");
1263 STAM_REG(pVM, &pPGM->StatGCInvalidatePagePDOutOfSync, STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/PDOutOfSync", STAMUNIT_OCCURENCES, "The number of times PGMGCInvalidatePage() was called for an out of sync page directory.");
1264    STAM_REG(pVM, &pPGM->StatGCInvalidatePageSkipped,      STAMTYPE_COUNTER, "/PGM/GC/InvalidatePage/Skipped",      STAMUNIT_OCCURENCES,     "The number of times PGMGCInvalidatePage() was skipped due to not present shw or pending SyncCR3.");
1265 STAM_REG(pVM, &pPGM->StatGCSyncPT, STAMTYPE_PROFILE, "/PGM/GC/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMGCSyncPT() body.");
1266 STAM_REG(pVM, &pPGM->StatGCAccessedPage, STAMTYPE_COUNTER, "/PGM/GC/AccessedPage", STAMUNIT_OCCURENCES, "The number of pages marked not present for accessed bit emulation.");
1267 STAM_REG(pVM, &pPGM->StatGCDirtyPage, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/Mark", STAMUNIT_OCCURENCES, "The number of pages marked read-only for dirty bit tracking.");
1268 STAM_REG(pVM, &pPGM->StatGCDirtyPageBig, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/MarkBig", STAMUNIT_OCCURENCES, "The number of 4MB pages marked read-only for dirty bit tracking.");
1269 STAM_REG(pVM, &pPGM->StatGCDirtyPageTrap, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/Trap", STAMUNIT_OCCURENCES, "The number of traps generated for dirty bit tracking.");
1270 STAM_REG(pVM, &pPGM->StatGCDirtyPageSkipped, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/Skipped", STAMUNIT_OCCURENCES, "The number of pages already dirty or readonly.");
1271 STAM_REG(pVM, &pPGM->StatGCDirtiedPage, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/SetDirty", STAMUNIT_OCCURENCES, "The number of pages marked dirty because of write accesses.");
1272 STAM_REG(pVM, &pPGM->StatGCDirtyTrackRealPF, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/RealPF", STAMUNIT_OCCURENCES, "The number of real pages faults during dirty bit tracking.");
1273 STAM_REG(pVM, &pPGM->StatGCPageAlreadyDirty, STAMTYPE_COUNTER, "/PGM/GC/DirtyPage/AlreadySet", STAMUNIT_OCCURENCES, "The number of pages already marked dirty because of write accesses.");
1274 STAM_REG(pVM, &pPGM->StatGCDirtyBitTracking, STAMTYPE_PROFILE, "/PGM/GC/DirtyPage", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMTrackDirtyBit() body.");
1275 STAM_REG(pVM, &pPGM->StatGCSyncPTAlloc, STAMTYPE_COUNTER, "/PGM/GC/SyncPT/Alloc", STAMUNIT_OCCURENCES, "The number of times PGMGCSyncPT() needed to allocate page tables.");
1276 STAM_REG(pVM, &pPGM->StatGCSyncPTConflict, STAMTYPE_COUNTER, "/PGM/GC/SyncPT/Conflicts", STAMUNIT_OCCURENCES, "The number of times PGMGCSyncPT() detected conflicts.");
1277 STAM_REG(pVM, &pPGM->StatGCSyncPTFailed, STAMTYPE_COUNTER, "/PGM/GC/SyncPT/Failed", STAMUNIT_OCCURENCES, "The number of times PGMGCSyncPT() failed.");
1278
1279 STAM_REG(pVM, &pPGM->StatGCTrap0e, STAMTYPE_PROFILE, "/PGM/GC/Trap0e", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMGCTrap0eHandler() body.");
1280 STAM_REG(pVM, &pPGM->StatCheckPageFault, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time/CheckPageFault", STAMUNIT_TICKS_PER_CALL, "Profiling of checking for dirty/access emulation faults.");
1281 STAM_REG(pVM, &pPGM->StatLazySyncPT, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of lazy page table syncing.");
1282 STAM_REG(pVM, &pPGM->StatMapping, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time/Mapping", STAMUNIT_TICKS_PER_CALL, "Profiling of checking virtual mappings.");
1283 STAM_REG(pVM, &pPGM->StatOutOfSync, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time/OutOfSync", STAMUNIT_TICKS_PER_CALL, "Profiling of out of sync page handling.");
1284 STAM_REG(pVM, &pPGM->StatHandlers, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time/Handlers", STAMUNIT_TICKS_PER_CALL, "Profiling of checking handlers.");
1285 STAM_REG(pVM, &pPGM->StatEIPHandlers, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time/EIPHandlers", STAMUNIT_TICKS_PER_CALL, "Profiling of checking eip handlers.");
1286 STAM_REG(pVM, &pPGM->StatTrap0eCSAM, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/CSAM", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is CSAM.");
1287 STAM_REG(pVM, &pPGM->StatTrap0eDirtyAndAccessedBits, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/DirtyAndAccessedBits", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is dirty and/or accessed bit emulation.");
1288 STAM_REG(pVM, &pPGM->StatTrap0eGuestTrap, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/GuestTrap", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is a guest trap.");
1289 STAM_REG(pVM, &pPGM->StatTrap0eHndPhys, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/HandlerPhysical", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is a physical handler.");
1290 STAM_REG(pVM, &pPGM->StatTrap0eHndVirt, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/HandlerVirtual",STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is a virtual handler.");
1291 STAM_REG(pVM, &pPGM->StatTrap0eHndUnhandled, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/HandlerUnhandled", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is access outside the monitored areas of a monitored page.");
1292 STAM_REG(pVM, &pPGM->StatTrap0eMisc, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/Misc", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is not known.");
1293 STAM_REG(pVM, &pPGM->StatTrap0eOutOfSync, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/OutOfSync", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an out-of-sync page.");
1294 STAM_REG(pVM, &pPGM->StatTrap0eOutOfSyncHndPhys, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/OutOfSyncHndPhys", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an out-of-sync physical handler page.");
1295 STAM_REG(pVM, &pPGM->StatTrap0eOutOfSyncHndVirt, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/OutOfSyncHndVirt", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an out-of-sync virtual handler page.");
1296 STAM_REG(pVM, &pPGM->StatTrap0eOutOfSyncObsHnd, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/OutOfSyncObsHnd", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is an obsolete handler page.");
1297 STAM_REG(pVM, &pPGM->StatTrap0eSyncPT, STAMTYPE_PROFILE, "/PGM/GC/Trap0e/Time2/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of the Trap0eHandler body when the cause is lazy syncing of a PT.");
1298
1299 STAM_REG(pVM, &pPGM->StatTrap0eMapHandler, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/Mapping", STAMUNIT_OCCURENCES, "Number of traps due to access handlers in mappings.");
1300 STAM_REG(pVM, &pPGM->StatHandlersOutOfSync, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/OutOfSync", STAMUNIT_OCCURENCES, "Number of traps due to out-of-sync handled pages.");
1301 STAM_REG(pVM, &pPGM->StatHandlersPhysical, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/Physical", STAMUNIT_OCCURENCES, "Number of traps due to physical access handlers.");
1302 STAM_REG(pVM, &pPGM->StatHandlersVirtual, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/Virtual", STAMUNIT_OCCURENCES, "Number of traps due to virtual access handlers.");
1303 STAM_REG(pVM, &pPGM->StatHandlersVirtualByPhys, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/VirtualByPhys", STAMUNIT_OCCURENCES, "Number of traps due to virtual access handlers by physical address.");
1304 STAM_REG(pVM, &pPGM->StatHandlersVirtualUnmarked, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/VirtualUnmarked", STAMUNIT_OCCURENCES,"Number of traps due to virtual access handlers by virtual address (without proper physical flags).");
1305 STAM_REG(pVM, &pPGM->StatHandlersUnhandled, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Handlers/Unhandled", STAMUNIT_OCCURENCES, "Number of traps due to access outside range of monitored page(s).");
1306
1307 STAM_REG(pVM, &pPGM->StatGCTrap0eConflicts, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Conflicts", STAMUNIT_OCCURENCES, "The number of times #PF was caused by an undetected conflict.");
1308 STAM_REG(pVM, &pPGM->StatGCTrap0eUSNotPresentRead, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/User/NPRead", STAMUNIT_OCCURENCES, "Number of user mode not present read page faults.");
1309 STAM_REG(pVM, &pPGM->StatGCTrap0eUSNotPresentWrite, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/User/NPWrite", STAMUNIT_OCCURENCES, "Number of user mode not present write page faults.");
1310 STAM_REG(pVM, &pPGM->StatGCTrap0eUSWrite, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/User/Write", STAMUNIT_OCCURENCES, "Number of user mode write page faults.");
1311 STAM_REG(pVM, &pPGM->StatGCTrap0eUSReserved, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/User/Reserved", STAMUNIT_OCCURENCES, "Number of user mode reserved bit page faults.");
1312 STAM_REG(pVM, &pPGM->StatGCTrap0eUSNXE, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/User/NXE", STAMUNIT_OCCURENCES, "Number of user mode NXE page faults.");
1313 STAM_REG(pVM, &pPGM->StatGCTrap0eUSRead, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/User/Read", STAMUNIT_OCCURENCES, "Number of user mode read page faults.");
1314
1315 STAM_REG(pVM, &pPGM->StatGCTrap0eSVNotPresentRead, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Supervisor/NPRead", STAMUNIT_OCCURENCES, "Number of supervisor mode not present read page faults.");
1316 STAM_REG(pVM, &pPGM->StatGCTrap0eSVNotPresentWrite, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Supervisor/NPWrite", STAMUNIT_OCCURENCES, "Number of supervisor mode not present write page faults.");
1317 STAM_REG(pVM, &pPGM->StatGCTrap0eSVWrite, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Supervisor/Write", STAMUNIT_OCCURENCES, "Number of supervisor mode write page faults.");
1318 STAM_REG(pVM, &pPGM->StatGCTrap0eSVReserved, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Supervisor/Reserved", STAMUNIT_OCCURENCES, "Number of supervisor mode reserved bit page faults.");
1319 STAM_REG(pVM, &pPGM->StatGCTrap0eSNXE, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/Supervisor/NXE", STAMUNIT_OCCURENCES, "Number of supervisor mode NXE page faults.");
1320 STAM_REG(pVM, &pPGM->StatGCTrap0eUnhandled, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/GuestPF/Unhandled", STAMUNIT_OCCURENCES, "Number of guest real page faults.");
1321 STAM_REG(pVM, &pPGM->StatGCTrap0eMap, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/GuestPF/Map", STAMUNIT_OCCURENCES, "Number of guest page faults due to map accesses.");
1322
1323 STAM_REG(pVM, &pPGM->StatTrap0eWPEmulGC, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/WP/InGC", STAMUNIT_OCCURENCES, "Number of guest page faults due to X86_CR0_WP emulation.");
1324 STAM_REG(pVM, &pPGM->StatTrap0eWPEmulR3, STAMTYPE_COUNTER, "/PGM/GC/Trap0e/WP/ToR3", STAMUNIT_OCCURENCES, "Number of guest page faults due to X86_CR0_WP emulation (forward to R3 for emulation).");
1325
1326 STAM_REG(pVM, &pPGM->StatGCGuestCR3WriteHandled, STAMTYPE_COUNTER, "/PGM/GC/CR3WriteInt", STAMUNIT_OCCURENCES, "The number of times the Guest CR3 change was successfully handled.");
1327 STAM_REG(pVM, &pPGM->StatGCGuestCR3WriteUnhandled, STAMTYPE_COUNTER, "/PGM/GC/CR3WriteEmu", STAMUNIT_OCCURENCES, "The number of times the Guest CR3 change was passed back to the recompiler.");
1328 STAM_REG(pVM, &pPGM->StatGCGuestCR3WriteConflict, STAMTYPE_COUNTER, "/PGM/GC/CR3WriteConflict", STAMUNIT_OCCURENCES, "The number of times the Guest CR3 monitoring detected a conflict.");
1329
1330 STAM_REG(pVM, &pPGM->StatGCPageOutOfSyncSupervisor, STAMTYPE_COUNTER, "/PGM/GC/OutOfSync/SuperVisor", STAMUNIT_OCCURENCES, "Number of traps due to pages out of sync.");
1331 STAM_REG(pVM, &pPGM->StatGCPageOutOfSyncUser, STAMTYPE_COUNTER, "/PGM/GC/OutOfSync/User", STAMUNIT_OCCURENCES, "Number of traps due to pages out of sync.");
1332
1333 STAM_REG(pVM, &pPGM->StatGCGuestROMWriteHandled, STAMTYPE_COUNTER, "/PGM/GC/ROMWriteInt", STAMUNIT_OCCURENCES, "The number of times the Guest ROM change was successfully handled.");
1334 STAM_REG(pVM, &pPGM->StatGCGuestROMWriteUnhandled, STAMTYPE_COUNTER, "/PGM/GC/ROMWriteEmu", STAMUNIT_OCCURENCES, "The number of times the Guest ROM change was passed back to the recompiler.");
1335
1336 STAM_REG(pVM, &pPGM->StatDynMapCacheHits, STAMTYPE_COUNTER, "/PGM/GC/DynMapCache/Hits" , STAMUNIT_OCCURENCES, "Number of dynamic page mapping cache hits.");
1337 STAM_REG(pVM, &pPGM->StatDynMapCacheMisses, STAMTYPE_COUNTER, "/PGM/GC/DynMapCache/Misses" , STAMUNIT_OCCURENCES, "Number of dynamic page mapping cache misses.");
1338
1339 STAM_REG(pVM, &pPGM->StatHCDetectedConflicts, STAMTYPE_COUNTER, "/PGM/HC/DetectedConflicts", STAMUNIT_OCCURENCES, "The number of times PGMR3CheckMappingConflicts() detected a conflict.");
1340 STAM_REG(pVM, &pPGM->StatHCGuestPDWrite, STAMTYPE_COUNTER, "/PGM/HC/PDWrite", STAMUNIT_OCCURENCES, "The total number of times pgmHCGuestPDWriteHandler() was called.");
1341 STAM_REG(pVM, &pPGM->StatHCGuestPDWriteConflict, STAMTYPE_COUNTER, "/PGM/HC/PDWriteConflict", STAMUNIT_OCCURENCES, "The number of times pgmHCGuestPDWriteHandler() detected a conflict.");
1342
1343 STAM_REG(pVM, &pPGM->StatHCInvalidatePage, STAMTYPE_PROFILE, "/PGM/HC/InvalidatePage", STAMUNIT_TICKS_PER_CALL, "PGMHCInvalidatePage() profiling.");
1344 STAM_REG(pVM, &pPGM->StatHCInvalidatePage4KBPages, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/4KBPages", STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() was called for a 4KB page.");
1345 STAM_REG(pVM, &pPGM->StatHCInvalidatePage4MBPages, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/4MBPages", STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() was called for a 4MB page.");
1346 STAM_REG(pVM, &pPGM->StatHCInvalidatePage4MBPagesSkip, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/4MBPagesSkip",STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() skipped a 4MB page.");
1347 STAM_REG(pVM, &pPGM->StatHCInvalidatePagePDMappings, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/PDMappings", STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() was called for a page directory containing mappings (no conflict).");
1348 STAM_REG(pVM, &pPGM->StatHCInvalidatePagePDNAs, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/PDNAs", STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() was called for a not accessed page directory.");
1349 STAM_REG(pVM, &pPGM->StatHCInvalidatePagePDNPs, STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/PDNPs", STAMUNIT_OCCURENCES, "The number of times PGMHCInvalidatePage() was called for a not present page directory.");
1350    STAM_REG(pVM, &pPGM->StatHCInvalidatePagePDOutOfSync,  STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/PDOutOfSync",  STAMUNIT_OCCURENCES,     "The number of times PGMHCInvalidatePage() was called for an out of sync page directory.");
1351    STAM_REG(pVM, &pPGM->StatHCInvalidatePageSkipped,      STAMTYPE_COUNTER, "/PGM/HC/InvalidatePage/Skipped",      STAMUNIT_OCCURENCES,     "The number of times PGMHCInvalidatePage() was skipped due to not present shw or pending SyncCR3.");
1352 STAM_REG(pVM, &pPGM->StatHCResolveConflict, STAMTYPE_PROFILE, "/PGM/HC/ResolveConflict", STAMUNIT_TICKS_PER_CALL, "pgmR3SyncPTResolveConflict() profiling (includes the entire relocation).");
1353 STAM_REG(pVM, &pPGM->StatHCPrefetch, STAMTYPE_PROFILE, "/PGM/HC/Prefetch", STAMUNIT_TICKS_PER_CALL, "PGMR3PrefetchPage profiling.");
1354
1355 STAM_REG(pVM, &pPGM->StatHCSyncPT, STAMTYPE_PROFILE, "/PGM/HC/SyncPT", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMR3SyncPT() body.");
1356 STAM_REG(pVM, &pPGM->StatHCAccessedPage, STAMTYPE_COUNTER, "/PGM/HC/AccessedPage", STAMUNIT_OCCURENCES, "The number of pages marked not present for accessed bit emulation.");
1357 STAM_REG(pVM, &pPGM->StatHCDirtyPage, STAMTYPE_COUNTER, "/PGM/HC/DirtyPage/Mark", STAMUNIT_OCCURENCES, "The number of pages marked read-only for dirty bit tracking.");
1358 STAM_REG(pVM, &pPGM->StatHCDirtyPageBig, STAMTYPE_COUNTER, "/PGM/HC/DirtyPage/MarkBig", STAMUNIT_OCCURENCES, "The number of 4MB pages marked read-only for dirty bit tracking.");
1359 STAM_REG(pVM, &pPGM->StatHCDirtyPageTrap, STAMTYPE_COUNTER, "/PGM/HC/DirtyPage/Trap", STAMUNIT_OCCURENCES, "The number of traps generated for dirty bit tracking.");
1360 STAM_REG(pVM, &pPGM->StatHCDirtyPageSkipped, STAMTYPE_COUNTER, "/PGM/HC/DirtyPage/Skipped", STAMUNIT_OCCURENCES, "The number of pages already dirty or readonly.");
1361 STAM_REG(pVM, &pPGM->StatHCDirtyBitTracking, STAMTYPE_PROFILE, "/PGM/HC/DirtyPage", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMTrackDirtyBit() body.");
1362
1363    STAM_REG(pVM, &pPGM->StatGCSyncPagePDNAs,              STAMTYPE_COUNTER, "/PGM/GC/SyncPagePDNAs",               STAMUNIT_OCCURENCES,     "The number of times we've marked a PD not present from SyncPage to virtualize the accessed bit.");
1364    STAM_REG(pVM, &pPGM->StatGCSyncPagePDOutOfSync,        STAMTYPE_COUNTER, "/PGM/GC/SyncPagePDOutOfSync",         STAMUNIT_OCCURENCES,     "The number of times we've encountered an out-of-sync PD in SyncPage.");
1365    STAM_REG(pVM, &pPGM->StatHCSyncPagePDNAs,              STAMTYPE_COUNTER, "/PGM/HC/SyncPagePDNAs",               STAMUNIT_OCCURENCES,     "The number of times we've marked a PD not present from SyncPage to virtualize the accessed bit.");
1366    STAM_REG(pVM, &pPGM->StatHCSyncPagePDOutOfSync,        STAMTYPE_COUNTER, "/PGM/HC/SyncPagePDOutOfSync",         STAMUNIT_OCCURENCES,     "The number of times we've encountered an out-of-sync PD in SyncPage.");
1367
1368 STAM_REG(pVM, &pPGM->StatFlushTLB, STAMTYPE_PROFILE, "/PGM/FlushTLB", STAMUNIT_OCCURENCES, "Profiling of the PGMFlushTLB() body.");
1369 STAM_REG(pVM, &pPGM->StatFlushTLBNewCR3, STAMTYPE_COUNTER, "/PGM/FlushTLB/NewCR3", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with a new CR3, non-global. (switch)");
1370 STAM_REG(pVM, &pPGM->StatFlushTLBNewCR3Global, STAMTYPE_COUNTER, "/PGM/FlushTLB/NewCR3Global", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with a new CR3, global. (switch)");
1371 STAM_REG(pVM, &pPGM->StatFlushTLBSameCR3, STAMTYPE_COUNTER, "/PGM/FlushTLB/SameCR3", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with the same CR3, non-global. (flush)");
1372 STAM_REG(pVM, &pPGM->StatFlushTLBSameCR3Global, STAMTYPE_COUNTER, "/PGM/FlushTLB/SameCR3Global", STAMUNIT_OCCURENCES, "The number of times PGMFlushTLB was called with the same CR3, global. (flush)");
1373
1374 STAM_REG(pVM, &pPGM->StatGCSyncCR3, STAMTYPE_PROFILE, "/PGM/GC/SyncCR3", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() body.");
1375 STAM_REG(pVM, &pPGM->StatGCSyncCR3Handlers, STAMTYPE_PROFILE, "/PGM/GC/SyncCR3/Handlers", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() update handler section.");
1376 STAM_REG(pVM, &pPGM->StatGCSyncCR3HandlerVirtualUpdate, STAMTYPE_PROFILE, "/PGM/GC/SyncCR3/Handlers/VirtualUpdate",STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler updates.");
1377 STAM_REG(pVM, &pPGM->StatGCSyncCR3HandlerVirtualReset, STAMTYPE_PROFILE, "/PGM/GC/SyncCR3/Handlers/VirtualReset", STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler resets.");
1378 STAM_REG(pVM, &pPGM->StatGCSyncCR3Global, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/Global", STAMUNIT_OCCURENCES, "The number of global CR3 syncs.");
1379 STAM_REG(pVM, &pPGM->StatGCSyncCR3NotGlobal, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/NotGlobal", STAMUNIT_OCCURENCES, "The number of non-global CR3 syncs.");
1380    STAM_REG(pVM, &pPGM->StatGCSyncCR3DstCacheHit,         STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/DstCacheHit",         STAMUNIT_OCCURENCES,     "The number of times we got some kind of a cache hit.");
1381 STAM_REG(pVM, &pPGM->StatGCSyncCR3DstFreed, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/DstFreed", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry.");
1382 STAM_REG(pVM, &pPGM->StatGCSyncCR3DstFreedSrcNP, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/DstFreedSrcNP", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry for which the source entry was not present.");
1383 STAM_REG(pVM, &pPGM->StatGCSyncCR3DstNotPresent, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/DstNotPresent", STAMUNIT_OCCURENCES, "The number of times we've encountered a not present shadow entry for a present guest entry.");
1384 STAM_REG(pVM, &pPGM->StatGCSyncCR3DstSkippedGlobalPD, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/DstSkippedGlobalPD", STAMUNIT_OCCURENCES, "The number of times a global page directory wasn't flushed.");
1385 STAM_REG(pVM, &pPGM->StatGCSyncCR3DstSkippedGlobalPT, STAMTYPE_COUNTER, "/PGM/GC/SyncCR3/DstSkippedGlobalPT", STAMUNIT_OCCURENCES, "The number of times a page table with only global entries wasn't flushed.");
1386
1387 STAM_REG(pVM, &pPGM->StatHCSyncCR3, STAMTYPE_PROFILE, "/PGM/HC/SyncCR3", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() body.");
1388 STAM_REG(pVM, &pPGM->StatHCSyncCR3Handlers, STAMTYPE_PROFILE, "/PGM/HC/SyncCR3/Handlers", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMSyncCR3() update handler section.");
1389 STAM_REG(pVM, &pPGM->StatHCSyncCR3HandlerVirtualUpdate, STAMTYPE_PROFILE, "/PGM/HC/SyncCR3/Handlers/VirtualUpdate",STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler updates.");
1390 STAM_REG(pVM, &pPGM->StatHCSyncCR3HandlerVirtualReset, STAMTYPE_PROFILE, "/PGM/HC/SyncCR3/Handlers/VirtualReset", STAMUNIT_TICKS_PER_CALL, "Profiling of the virtual handler resets.");
1391 STAM_REG(pVM, &pPGM->StatHCSyncCR3Global, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/Global", STAMUNIT_OCCURENCES, "The number of global CR3 syncs.");
1392 STAM_REG(pVM, &pPGM->StatHCSyncCR3NotGlobal, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/NotGlobal", STAMUNIT_OCCURENCES, "The number of non-global CR3 syncs.");
1393    STAM_REG(pVM, &pPGM->StatHCSyncCR3DstCacheHit,          STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/DstCacheHit",        STAMUNIT_OCCURENCES,     "The number of times we got some kind of a cache hit.");
1394 STAM_REG(pVM, &pPGM->StatHCSyncCR3DstFreed, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/DstFreed", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry.");
1395 STAM_REG(pVM, &pPGM->StatHCSyncCR3DstFreedSrcNP, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/DstFreedSrcNP", STAMUNIT_OCCURENCES, "The number of times we've had to free a shadow entry for which the source entry was not present.");
1396 STAM_REG(pVM, &pPGM->StatHCSyncCR3DstNotPresent, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/DstNotPresent", STAMUNIT_OCCURENCES, "The number of times we've encountered a not present shadow entry for a present guest entry.");
1397 STAM_REG(pVM, &pPGM->StatHCSyncCR3DstSkippedGlobalPD, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/DstSkippedGlobalPD", STAMUNIT_OCCURENCES, "The number of times a global page directory wasn't flushed.");
1398 STAM_REG(pVM, &pPGM->StatHCSyncCR3DstSkippedGlobalPT, STAMTYPE_COUNTER, "/PGM/HC/SyncCR3/DstSkippedGlobalPT", STAMUNIT_OCCURENCES, "The number of times a page table with only global entries wasn't flushed.");
1399
1400 STAM_REG(pVM, &pPGM->StatVirtHandleSearchByPhysGC, STAMTYPE_PROFILE, "/PGM/VirtHandler/SearchByPhys/GC", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmHandlerVirtualFindByPhysAddr in GC.");
1401 STAM_REG(pVM, &pPGM->StatVirtHandleSearchByPhysHC, STAMTYPE_PROFILE, "/PGM/VirtHandler/SearchByPhys/HC", STAMUNIT_TICKS_PER_CALL, "Profiling of pgmHandlerVirtualFindByPhysAddr in HC.");
1402 STAM_REG(pVM, &pPGM->StatHandlePhysicalReset, STAMTYPE_COUNTER, "/PGM/HC/HandlerPhysicalReset", STAMUNIT_OCCURENCES, "The number of times PGMR3HandlerPhysicalReset is called.");
1403
1404 STAM_REG(pVM, &pPGM->StatHCGstModifyPage, STAMTYPE_PROFILE, "/PGM/HC/GstModifyPage", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMGstModifyPage() body.");
1405 STAM_REG(pVM, &pPGM->StatGCGstModifyPage, STAMTYPE_PROFILE, "/PGM/GC/GstModifyPage", STAMUNIT_TICKS_PER_CALL, "Profiling of the PGMGstModifyPage() body.");
1406
1407 STAM_REG(pVM, &pPGM->StatSynPT4kGC, STAMTYPE_COUNTER, "/PGM/GC/SyncPT/4k", STAMUNIT_OCCURENCES, "Nr of 4k PT syncs");
1408 STAM_REG(pVM, &pPGM->StatSynPT4kHC, STAMTYPE_COUNTER, "/PGM/HC/SyncPT/4k", STAMUNIT_OCCURENCES, "Nr of 4k PT syncs");
1409 STAM_REG(pVM, &pPGM->StatSynPT4MGC, STAMTYPE_COUNTER, "/PGM/GC/SyncPT/4M", STAMUNIT_OCCURENCES, "Nr of 4M PT syncs");
1410 STAM_REG(pVM, &pPGM->StatSynPT4MHC, STAMTYPE_COUNTER, "/PGM/HC/SyncPT/4M", STAMUNIT_OCCURENCES, "Nr of 4M PT syncs");
1411
1412    STAM_REG(pVM, &pPGM->StatDynRamTotal,                   STAMTYPE_COUNTER, "/PGM/RAM/TotalAlloc",                STAMUNIT_MEGABYTES,      "Allocated megabytes of guest RAM.");
1413 STAM_REG(pVM, &pPGM->StatDynRamGrow, STAMTYPE_COUNTER, "/PGM/RAM/Grow", STAMUNIT_OCCURENCES, "Nr of pgmr3PhysGrowRange calls.");
1414
1415 STAM_REG(pVM, &pPGM->StatPageHCMapTlbHits, STAMTYPE_COUNTER, "/PGM/PageHCMap/TlbHits", STAMUNIT_OCCURENCES, "TLB hits.");
1416 STAM_REG(pVM, &pPGM->StatPageHCMapTlbMisses, STAMTYPE_COUNTER, "/PGM/PageHCMap/TlbMisses", STAMUNIT_OCCURENCES, "TLB misses.");
1417 STAM_REG(pVM, &pPGM->ChunkR3Map.c, STAMTYPE_U32, "/PGM/ChunkR3Map/c", STAMUNIT_OCCURENCES, "Number of mapped chunks.");
1418 STAM_REG(pVM, &pPGM->ChunkR3Map.cMax, STAMTYPE_U32, "/PGM/ChunkR3Map/cMax", STAMUNIT_OCCURENCES, "Maximum number of mapped chunks.");
1419 STAM_REG(pVM, &pPGM->StatChunkR3MapTlbHits, STAMTYPE_COUNTER, "/PGM/ChunkR3Map/TlbHits", STAMUNIT_OCCURENCES, "TLB hits.");
1420 STAM_REG(pVM, &pPGM->StatChunkR3MapTlbMisses, STAMTYPE_COUNTER, "/PGM/ChunkR3Map/TlbMisses", STAMUNIT_OCCURENCES, "TLB misses.");
1421 STAM_REG(pVM, &pPGM->StatPageReplaceShared, STAMTYPE_COUNTER, "/PGM/Page/ReplacedShared", STAMUNIT_OCCURENCES, "Times a shared page was replaced.");
1422 STAM_REG(pVM, &pPGM->StatPageReplaceZero, STAMTYPE_COUNTER, "/PGM/Page/ReplacedZero", STAMUNIT_OCCURENCES, "Times the zero page was replaced.");
1423 STAM_REG(pVM, &pPGM->StatPageHandyAllocs, STAMTYPE_COUNTER, "/PGM/Page/HandyAllocs", STAMUNIT_OCCURENCES, "Number of times we've allocated more handy pages.");
1424 STAM_REG(pVM, &pPGM->cAllPages, STAMTYPE_U32, "/PGM/Page/cAllPages", STAMUNIT_OCCURENCES, "The total number of pages.");
1425 STAM_REG(pVM, &pPGM->cPrivatePages, STAMTYPE_U32, "/PGM/Page/cPrivatePages", STAMUNIT_OCCURENCES, "The number of private pages.");
1426 STAM_REG(pVM, &pPGM->cSharedPages, STAMTYPE_U32, "/PGM/Page/cSharedPages", STAMUNIT_OCCURENCES, "The number of shared pages.");
1427 STAM_REG(pVM, &pPGM->cZeroPages, STAMTYPE_U32, "/PGM/Page/cZeroPages", STAMUNIT_OCCURENCES, "The number of zero backed pages.");
1428
1429#ifdef PGMPOOL_WITH_GCPHYS_TRACKING
1430    STAM_REG(pVM, &pPGM->StatTrackVirgin,                   STAMTYPE_COUNTER, "/PGM/Track/Virgin",                  STAMUNIT_OCCURENCES,     "The number of first-time shadowings.");
1431 STAM_REG(pVM, &pPGM->StatTrackAliased, STAMTYPE_COUNTER, "/PGM/Track/Aliased", STAMUNIT_OCCURENCES, "The number of times switching to cRef2, i.e. the page is being shadowed by two PTs.");
1432 STAM_REG(pVM, &pPGM->StatTrackAliasedMany, STAMTYPE_COUNTER, "/PGM/Track/AliasedMany", STAMUNIT_OCCURENCES, "The number of times we're tracking using cRef2.");
1433    STAM_REG(pVM, &pPGM->StatTrackAliasedLots,              STAMTYPE_COUNTER, "/PGM/Track/AliasedLots",             STAMUNIT_OCCURENCES,     "The number of times we're hitting pages which have overflowed cRef2.");
1434    STAM_REG(pVM, &pPGM->StatTrackOverflows,                STAMTYPE_COUNTER, "/PGM/Track/Overflows",               STAMUNIT_OCCURENCES,     "The number of times the extent list grows too long.");
1435 STAM_REG(pVM, &pPGM->StatTrackDeref, STAMTYPE_PROFILE, "/PGM/Track/Deref", STAMUNIT_OCCURENCES, "Profiling of SyncPageWorkerTrackDeref (expensive).");
1436#endif
1437
1438 for (unsigned i = 0; i < X86_PG_ENTRIES; i++)
1439 {
1440 /** @todo r=bird: We need a STAMR3RegisterF()! */
1441 char szName[32];
1442
1443 RTStrPrintf(szName, sizeof(szName), "/PGM/GC/PD/Trap0e/%04X", i);
1444 int rc = STAMR3Register(pVM, &pPGM->StatGCTrap0ePD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, szName, STAMUNIT_OCCURENCES, "The number of traps in page directory n.");
1445 AssertRC(rc);
1446
1447 RTStrPrintf(szName, sizeof(szName), "/PGM/GC/PD/SyncPt/%04X", i);
1448 rc = STAMR3Register(pVM, &pPGM->StatGCSyncPtPD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, szName, STAMUNIT_OCCURENCES, "The number of syncs per PD n.");
1449 AssertRC(rc);
1450
1451 RTStrPrintf(szName, sizeof(szName), "/PGM/GC/PD/SyncPage/%04X", i);
1452 rc = STAMR3Register(pVM, &pPGM->StatGCSyncPagePD[i], STAMTYPE_COUNTER, STAMVISIBILITY_USED, szName, STAMUNIT_OCCURENCES, "The number of out of sync pages per page directory n.");
1453 AssertRC(rc);
1454 }
1455}
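/* A sketch of the STAMR3RegisterF() helper the @todo above asks for; the name
 * and signature are hypothetical here (no such STAM API exists at this point).
 * It would format the sample name and forward to STAMR3Register():
 * @verbatim
   static int STAMR3RegisterF(PVM pVM, void *pvSample, STAMTYPE enmType, STAMVISIBILITY enmVisibility,
                              STAMUNIT enmUnit, const char *pszDesc, const char *pszNameFmt, ...)
   {
       char    szName[64];
       va_list va;
       va_start(va, pszNameFmt);
       RTStrPrintfV(szName, sizeof(szName), pszNameFmt, va);
       va_end(va);
       return STAMR3Register(pVM, pvSample, enmType, enmVisibility, szName, enmUnit, pszDesc);
   }
   @endverbatim
 * Each RTStrPrintf + STAMR3Register pair in the loop above would then collapse
 * into a single call. */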
1456#endif /* VBOX_WITH_STATISTICS */
1457
1458/**
1459 * Init the PGM bits that rely on VMMR0 and MM to be fully initialized.
1460 *
1461 * The dynamic mapping area will also be allocated and initialized at this
1462 * time. We could allocate it during PGMR3Init of course, but the mapping
1463 * wouldn't be allocated at that time, which would prevent us from setting
1464 * up the page table entries with the dummy page.
1465 *
1466 * @returns VBox status code.
1467 * @param pVM VM handle.
1468 */
1469PGMR3DECL(int) PGMR3InitDynMap(PVM pVM)
1470{
1471 /*
1472 * Reserve space for mapping the paging pages into guest context.
1473 */
1474 int rc = MMR3HyperReserve(pVM, PAGE_SIZE * (2 + ELEMENTS(pVM->pgm.s.apHCPaePDs) + 1 + 2 + 2), "Paging", &pVM->pgm.s.pGC32BitPD);
1475 AssertRCReturn(rc, rc);
1476 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
1477
1478 /*
1479 * Reserve space for the dynamic mappings.
1480 */
1481    /** @todo r=bird: Need to verify that the checks for crossing PTs are correct here. They seem to be assuming 4MB PTs. */
1482 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping", &pVM->pgm.s.pbDynPageMapBaseGC);
1483 if ( VBOX_SUCCESS(rc)
1484 && (pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_SHIFT) != ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_SHIFT))
1485 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping not crossing", &pVM->pgm.s.pbDynPageMapBaseGC);
1486 if (VBOX_SUCCESS(rc))
1487 {
1488 AssertRelease((pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_SHIFT) == ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_SHIFT));
1489 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
1490 }
1491 return rc;
1492}
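/* The reservation above decodes, going by the mapping order in
 * PGMR3InitFinalize(): the 32-bit PD plus a reserved page (2), the PAE PDs
 * (ELEMENTS(apHCPaePDs) = 4), a reserved page (1), the PAE PDPT plus a
 * reserved page (2), and the PAE PML4 plus a reserved page (2), i.e. 11 pages
 * in total.
 *
 * The crossing check rejects a dynamic mapping area that straddles a page
 * directory boundary. For instance, with 32-bit paging (X86_PD_SHIFT = 22) a
 * base of 0x003ff000 and a size of 0x2000 would end at 0x00400fff, giving PD
 * index 0 for the first byte but 1 for the last; the addresses here are made
 * up for illustration. */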
1493
1494
1495/**
1496 * Ring-3 init finalizing.
1497 *
1498 * @returns VBox status code.
1499 * @param pVM The VM handle.
1500 */
1501PGMR3DECL(int) PGMR3InitFinalize(PVM pVM)
1502{
1503 /*
1504 * Map the paging pages into the guest context.
1505 */
1506 RTGCPTR GCPtr = pVM->pgm.s.pGC32BitPD;
1507 AssertReleaseReturn(GCPtr, VERR_INTERNAL_ERROR);
1508
1509 int rc = PGMMap(pVM, GCPtr, pVM->pgm.s.HCPhys32BitPD, PAGE_SIZE, 0);
1510 AssertRCReturn(rc, rc);
1511 pVM->pgm.s.pGC32BitPD = GCPtr;
1512 GCPtr += PAGE_SIZE;
1513 GCPtr += PAGE_SIZE; /* reserved page */
1514
1515 for (unsigned i = 0; i < ELEMENTS(pVM->pgm.s.apHCPaePDs); i++)
1516 {
1517 rc = PGMMap(pVM, GCPtr, pVM->pgm.s.aHCPhysPaePDs[i], PAGE_SIZE, 0);
1518 AssertRCReturn(rc, rc);
1519 pVM->pgm.s.apGCPaePDs[i] = GCPtr;
1520 GCPtr += PAGE_SIZE;
1521 }
1522 /* A bit of paranoia is justified. */
1523 AssertRelease((RTGCUINTPTR)pVM->pgm.s.apGCPaePDs[0] + PAGE_SIZE == (RTGCUINTPTR)pVM->pgm.s.apGCPaePDs[1]);
1524 AssertRelease((RTGCUINTPTR)pVM->pgm.s.apGCPaePDs[1] + PAGE_SIZE == (RTGCUINTPTR)pVM->pgm.s.apGCPaePDs[2]);
1525 AssertRelease((RTGCUINTPTR)pVM->pgm.s.apGCPaePDs[2] + PAGE_SIZE == (RTGCUINTPTR)pVM->pgm.s.apGCPaePDs[3]);
1526 GCPtr += PAGE_SIZE; /* reserved page */
1527
1528 rc = PGMMap(pVM, GCPtr, pVM->pgm.s.HCPhysPaePDPT, PAGE_SIZE, 0);
1529 AssertRCReturn(rc, rc);
1530 pVM->pgm.s.pGCPaePDPT = GCPtr;
1531 GCPtr += PAGE_SIZE;
1532 GCPtr += PAGE_SIZE; /* reserved page */
1533
1534 rc = PGMMap(pVM, GCPtr, pVM->pgm.s.HCPhysPaePML4, PAGE_SIZE, 0);
1535 AssertRCReturn(rc, rc);
1536 pVM->pgm.s.pGCPaePML4 = GCPtr;
1537 GCPtr += PAGE_SIZE;
1538 GCPtr += PAGE_SIZE; /* reserved page */
1539
1540
1541 /*
1542     * Set up the dynamic mapping area reserved in PGMR3InitDynMap.
1543     * Initialize the dynamic mapping pages with dummy pages to simplify the cache.
1544 */
1545 /* get the pointer to the page table entries. */
1546 PPGMMAPPING pMapping = pgmGetMapping(pVM, pVM->pgm.s.pbDynPageMapBaseGC);
1547 AssertRelease(pMapping);
1548 const uintptr_t off = pVM->pgm.s.pbDynPageMapBaseGC - pMapping->GCPtr;
1549 const unsigned iPT = off >> X86_PD_SHIFT;
1550 const unsigned iPG = (off >> X86_PT_SHIFT) & X86_PT_MASK;
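    /* Example, assuming the 32-bit paging constants (X86_PD_SHIFT = 22,
       X86_PT_SHIFT = 12, X86_PT_MASK = 0x3ff): an offset of 0x00403000 into
       the mapping yields iPT = 1 and iPG = 3, i.e. the fourth PTE of the
       second page table. */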
1551 pVM->pgm.s.paDynPageMap32BitPTEsGC = pMapping->aPTs[iPT].pPTGC + iPG * sizeof(pMapping->aPTs[0].pPTR3->a[0]);
1552 pVM->pgm.s.paDynPageMapPaePTEsGC = pMapping->aPTs[iPT].paPaePTsGC + iPG * sizeof(pMapping->aPTs[0].paPaePTsR3->a[0]);
1553
1554 /* init cache */
1555 RTHCPHYS HCPhysDummy = MMR3PageDummyHCPhys(pVM);
1556 for (unsigned i = 0; i < ELEMENTS(pVM->pgm.s.aHCPhysDynPageMapCache); i++)
1557 pVM->pgm.s.aHCPhysDynPageMapCache[i] = HCPhysDummy;
1558
1559 for (unsigned i = 0; i < MM_HYPER_DYNAMIC_SIZE; i += PAGE_SIZE)
1560 {
1561 rc = PGMMap(pVM, pVM->pgm.s.pbDynPageMapBaseGC + i, HCPhysDummy, PAGE_SIZE, 0);
1562 AssertRCReturn(rc, rc);
1563 }
1564
1565 return rc;
1566}
1567
1568
1569/**
1570 * Applies relocations to data and code managed by this
1571 * component. This function will be called at init and
1572 * whenever the VMM needs to relocate itself inside the GC.
1573 *
1574 * @param pVM The VM.
1575 * @param offDelta Relocation delta relative to old location.
1576 */
1577PGMR3DECL(void) PGMR3Relocate(PVM pVM, RTGCINTPTR offDelta)
1578{
1579 LogFlow(("PGMR3Relocate\n"));
1580
1581 /*
1582 * Paging stuff.
1583 */
1584 pVM->pgm.s.GCPtrCR3Mapping += offDelta;
1585 /** @todo move this into shadow and guest specific relocation functions. */
1586 AssertMsg(pVM->pgm.s.pGC32BitPD, ("Init order, no relocation before paging is initialized!\n"));
1587 pVM->pgm.s.pGC32BitPD += offDelta;
1588 pVM->pgm.s.pGuestPDGC += offDelta;
1589 for (unsigned i = 0; i < ELEMENTS(pVM->pgm.s.apGCPaePDs); i++)
1590 pVM->pgm.s.apGCPaePDs[i] += offDelta;
1591 pVM->pgm.s.pGCPaePDPT += offDelta;
1592 pVM->pgm.s.pGCPaePML4 += offDelta;
1593
1594 pgmR3ModeDataInit(pVM, true /* resolve GC/R0 symbols */);
1595 pgmR3ModeDataSwitch(pVM, pVM->pgm.s.enmShadowMode, pVM->pgm.s.enmGuestMode);
1596
1597 PGM_SHW_PFN(Relocate, pVM)(pVM, offDelta);
1598 PGM_GST_PFN(Relocate, pVM)(pVM, offDelta);
1599 PGM_BTH_PFN(Relocate, pVM)(pVM, offDelta);
1600
1601 /*
1602 * Trees.
1603 */
1604 pVM->pgm.s.pTreesGC = MMHyperHC2GC(pVM, pVM->pgm.s.pTreesHC);
1605
1606 /*
1607 * Ram ranges.
1608 */
1609 if (pVM->pgm.s.pRamRangesR3)
1610 {
1611 pVM->pgm.s.pRamRangesGC = MMHyperHC2GC(pVM, pVM->pgm.s.pRamRangesR3);
1612 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesR3; pCur->pNextR3; pCur = pCur->pNextR3)
1613#ifdef VBOX_WITH_NEW_PHYS_CODE
1614 pCur->pNextGC = MMHyperR3ToGC(pVM, pCur->pNextR3);
1615#else
1616 {
1617 pCur->pNextGC = MMHyperR3ToGC(pVM, pCur->pNextR3);
1618 if (pCur->pavHCChunkGC)
1619 pCur->pavHCChunkGC = MMHyperHC2GC(pVM, pCur->pavHCChunkHC);
1620 }
1621#endif
1622 }
1623
1624 /*
1625 * Update the two page directories with all page table mappings.
1626 * (One or more of them have changed, that's why we're here.)
1627 */
1628 pVM->pgm.s.pMappingsGC = MMHyperHC2GC(pVM, pVM->pgm.s.pMappingsR3);
1629 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur->pNextR3; pCur = pCur->pNextR3)
1630 pCur->pNextGC = MMHyperHC2GC(pVM, pCur->pNextR3);
1631
1632 /* Relocate GC addresses of Page Tables. */
1633 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur; pCur = pCur->pNextR3)
1634 {
1635 for (RTHCUINT i = 0; i < pCur->cPTs; i++)
1636 {
1637 pCur->aPTs[i].pPTGC = MMHyperR3ToGC(pVM, pCur->aPTs[i].pPTR3);
1638 pCur->aPTs[i].paPaePTsGC = MMHyperR3ToGC(pVM, pCur->aPTs[i].paPaePTsR3);
1639 }
1640 }
1641
1642 /*
1643 * Dynamic page mapping area.
1644 */
1645 pVM->pgm.s.paDynPageMap32BitPTEsGC += offDelta;
1646 pVM->pgm.s.paDynPageMapPaePTEsGC += offDelta;
1647 pVM->pgm.s.pbDynPageMapBaseGC += offDelta;
1648
1649 /*
1650 * The Zero page.
1651 */
1652 pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
1653 AssertRelease(pVM->pgm.s.pvZeroPgR0);
1654
1655 /*
1656 * Physical and virtual handlers.
1657 */
1658 RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesHC->PhysHandlers, true, pgmR3RelocatePhysHandler, &offDelta);
1659 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesHC->VirtHandlers, true, pgmR3RelocateVirtHandler, &offDelta);
1660 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesHC->HyperVirtHandlers, true, pgmR3RelocateHyperVirtHandler, &offDelta);
1661
1662 /*
1663 * The page pool.
1664 */
1665 pgmR3PoolRelocate(pVM);
1666}
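/* A concrete relocation example: if the hypervisor area moves from GC address
 * 0xa0000000 to 0xa0400000 (addresses made up for illustration), offDelta is
 * 0x00400000 and every GC pointer PGM keeps (page directory addresses, the
 * dynamic mapping area, handler addresses) is advanced by that amount, while
 * all HC pointers stay put. */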
1667
1668
1669/**
1670 * Callback function for relocating a physical access handler.
1671 *
1672 * @returns 0 (continue enum)
1673 * @param pNode Pointer to a PGMPHYSHANDLER node.
1674 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
1675 * not certain the delta will fit in a void pointer for all possible configs.
1676 */
1677static DECLCALLBACK(int) pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser)
1678{
1679 PPGMPHYSHANDLER pHandler = (PPGMPHYSHANDLER)pNode;
1680 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
1681 if (pHandler->pfnHandlerGC)
1682 pHandler->pfnHandlerGC += offDelta;
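    /* pvUserGC values below 64KB are presumably magic cookies rather than real
       GC pointers, hence the 0x10000 threshold before relocating. */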
1683 if ((RTGCUINTPTR)pHandler->pvUserGC >= 0x10000)
1684 pHandler->pvUserGC += offDelta;
1685 return 0;
1686}
1687
1688
1689/**
1690 * Callback function for relocating a virtual access handler.
1691 *
1692 * @returns 0 (continue enum)
1693 * @param pNode Pointer to a PGMVIRTHANDLER node.
1694 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
1695 * not certain the delta will fit in a void pointer for all possible configs.
1696 */
1697static DECLCALLBACK(int) pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
1698{
1699 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
1700 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
1701 Assert( pHandler->enmType == PGMVIRTHANDLERTYPE_ALL
1702 || pHandler->enmType == PGMVIRTHANDLERTYPE_WRITE);
1703 Assert(pHandler->pfnHandlerGC);
1704 pHandler->pfnHandlerGC += offDelta;
1705 return 0;
1706}
1707
1708
1709/**
1710 * Callback function for relocating a virtual access handler for the hypervisor mapping.
1711 *
1712 * @returns 0 (continue enum)
1713 * @param pNode Pointer to a PGMVIRTHANDLER node.
1714 * @param pvUser Pointer to the offDelta. This is a pointer to the delta since we're
1715 * not certain the delta will fit in a void pointer for all possible configs.
1716 */
1717static DECLCALLBACK(int) pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
1718{
1719 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
1720 RTGCINTPTR offDelta = *(PRTGCINTPTR)pvUser;
1721 Assert(pHandler->enmType == PGMVIRTHANDLERTYPE_HYPERVISOR);
1722 Assert(pHandler->pfnHandlerGC);
1723 pHandler->pfnHandlerGC += offDelta;
1724 return 0;
1725}
1726
1727
1728/**
1729 * The VM is being reset.
1730 *
1731 * For the PGM component this means that any PD write monitors
1732 * need to be removed.
1733 *
1734 * @param pVM VM handle.
1735 */
1736PGMR3DECL(void) PGMR3Reset(PVM pVM)
1737{
1738 LogFlow(("PGMR3Reset:\n"));
1739 VM_ASSERT_EMT(pVM);
1740
1741 pgmLock(pVM);
1742
1743 /*
1744 * Unfix any fixed mappings and disable CR3 monitoring.
1745 */
1746 pVM->pgm.s.fMappingsFixed = false;
1747 pVM->pgm.s.GCPtrMappingFixed = 0;
1748 pVM->pgm.s.cbMappingFixed = 0;
1749
1750 int rc = PGM_GST_PFN(UnmonitorCR3, pVM)(pVM);
1751 AssertRC(rc);
1752#ifdef DEBUG
1753 DBGFR3InfoLog(pVM, "mappings", NULL);
1754 DBGFR3InfoLog(pVM, "handlers", "all nostat");
1755#endif
1756
1757 /*
1758 * Reset the shadow page pool.
1759 */
1760 pgmR3PoolReset(pVM);
1761
1762 /*
1763 * Re-init other members.
1764 */
1765 pVM->pgm.s.fA20Enabled = true;
1766
1767 /*
1768 * Clear the FFs PGM owns.
1769 */
1770 VM_FF_CLEAR(pVM, VM_FF_PGM_SYNC_CR3);
1771 VM_FF_CLEAR(pVM, VM_FF_PGM_SYNC_CR3_NON_GLOBAL);
1772
1773 /*
1774 * Reset (zero) RAM pages.
1775 */
1776 rc = pgmR3PhysRamReset(pVM);
1777 if (RT_SUCCESS(rc))
1778 {
1779#ifdef VBOX_WITH_NEW_PHYS_CODE
1780 /*
1781 * Reset (zero) shadow ROM pages.
1782 */
1783 rc = pgmR3PhysRomReset(pVM);
1784#endif
1785 if (RT_SUCCESS(rc))
1786 {
1787 /*
1788 * Switch mode back to real mode.
1789 */
1790 rc = pgmR3ChangeMode(pVM, PGMMODE_REAL);
1791 STAM_REL_COUNTER_RESET(&pVM->pgm.s.cGuestModeChanges);
1792 }
1793 }
1794
1795 pgmUnlock(pVM);
1796 //return rc;
1797 AssertReleaseRC(rc);
1798}
1799
1800
1801#ifdef VBOX_STRICT
1802/**
1803 * VM state change callback for clearing fNoMorePhysWrites after
1804 * a snapshot has been created.
1805 */
1806static DECLCALLBACK(void) pgmR3ResetNoMorePhysWritesFlag(PVM pVM, VMSTATE enmState, VMSTATE enmOldState, void *pvUser)
1807{
1808 if (enmState == VMSTATE_RUNNING)
1809 pVM->pgm.s.fNoMorePhysWrites = false;
1810}
1811#endif
1812
1813
1814/**
1815 * Terminates the PGM.
1816 *
1817 * @returns VBox status code.
1818 * @param pVM Pointer to VM structure.
1819 */
1820PGMR3DECL(int) PGMR3Term(PVM pVM)
1821{
1822 return PDMR3CritSectDelete(&pVM->pgm.s.CritSect);
1823}
1824
1825
1826/**
1827 * Execute state save operation.
1828 *
1829 * @returns VBox status code.
1830 * @param pVM VM Handle.
1831 * @param pSSM SSM operation handle.
1832 */
1833static DECLCALLBACK(int) pgmR3Save(PVM pVM, PSSMHANDLE pSSM)
1834{
1835 PPGM pPGM = &pVM->pgm.s;
1836
1837 /* No more writes to physical memory after this point! */
1838 pVM->pgm.s.fNoMorePhysWrites = true;
1839
1840 /*
1841 * Save basic data (required / unaffected by relocation).
1842 */
1843#if 1
1844 SSMR3PutBool(pSSM, pPGM->fMappingsFixed);
1845#else
1846 SSMR3PutUInt(pSSM, pPGM->fMappingsFixed);
1847#endif
1848 SSMR3PutGCPtr(pSSM, pPGM->GCPtrMappingFixed);
1849 SSMR3PutU32(pSSM, pPGM->cbMappingFixed);
1850 SSMR3PutUInt(pSSM, pPGM->cbRamSize);
1851 SSMR3PutGCPhys(pSSM, pPGM->GCPhysA20Mask);
1852 SSMR3PutUInt(pSSM, pPGM->fA20Enabled);
1853 SSMR3PutUInt(pSSM, pPGM->fSyncFlags);
1854 SSMR3PutUInt(pSSM, pPGM->enmGuestMode);
1855 SSMR3PutU32(pSSM, ~0); /* Separator. */
1856
1857 /*
1858 * The guest mappings.
1859 */
1860 uint32_t i = 0;
1861 for (PPGMMAPPING pMapping = pPGM->pMappingsR3; pMapping; pMapping = pMapping->pNextR3, i++)
1862 {
1863 SSMR3PutU32(pSSM, i);
1864 SSMR3PutStrZ(pSSM, pMapping->pszDesc); /* This is the best unique id we have... */
1865 SSMR3PutGCPtr(pSSM, pMapping->GCPtr);
1866 SSMR3PutGCUIntPtr(pSSM, pMapping->cPTs);
1867 /* flags are done by the mapping owners! */
1868 }
1869 SSMR3PutU32(pSSM, ~0); /* terminator. */
1870
1871 /*
1872 * Ram range flags and bits.
1873 */
1874 i = 0;
1875 for (PPGMRAMRANGE pRam = pPGM->pRamRangesR3; pRam; pRam = pRam->pNextR3, i++)
1876 {
1877 /** @todo MMIO ranges may move (PCI reconfig), we currently assume they don't. */
1878
1879 SSMR3PutU32(pSSM, i);
1880 SSMR3PutGCPhys(pSSM, pRam->GCPhys);
1881 SSMR3PutGCPhys(pSSM, pRam->GCPhysLast);
1882 SSMR3PutGCPhys(pSSM, pRam->cb);
1883 SSMR3PutU8(pSSM, !!pRam->pvHC); /* boolean indicating memory or not. */
1884
1885 /* Flags. */
1886 const unsigned cPages = pRam->cb >> PAGE_SHIFT;
1887 for (unsigned iPage = 0; iPage < cPages; iPage++)
1888 SSMR3PutU16(pSSM, (uint16_t)(pRam->aPages[iPage].HCPhys & ~X86_PTE_PAE_PG_MASK)); /** @todo PAGE FLAGS */
1889
1890 /* any memory associated with the range. */
1891 if (pRam->fFlags & MM_RAM_FLAGS_DYNAMIC_ALLOC)
1892 {
1893 for (unsigned iChunk = 0; iChunk < (pRam->cb >> PGM_DYNAMIC_CHUNK_SHIFT); iChunk++)
1894 {
1895 if (pRam->pavHCChunkHC[iChunk])
1896 {
1897 SSMR3PutU8(pSSM, 1); /* chunk present */
1898 SSMR3PutMem(pSSM, pRam->pavHCChunkHC[iChunk], PGM_DYNAMIC_CHUNK_SIZE);
1899 }
1900 else
1901 SSMR3PutU8(pSSM, 0); /* no chunk present */
1902 }
1903 }
1904 else if (pRam->pvHC)
1905 {
1906 int rc = SSMR3PutMem(pSSM, pRam->pvHC, pRam->cb);
1907 if (VBOX_FAILURE(rc))
1908 {
1909 Log(("pgmR3Save: SSMR3PutMem(, %p, %#x) -> %Vrc\n", pRam->pvHC, pRam->cb, rc));
1910 return rc;
1911 }
1912 }
1913 }
1914 return SSMR3PutU32(pSSM, ~0); /* terminator. */
1915}
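/* For reference, the unit layout produced above (and consumed by pgmR3Load):
 * @verbatim
   bool        fMappingsFixed
   RTGCPTR     GCPtrMappingFixed
   uint32_t    cbMappingFixed
   RTUINT      cbRamSize
   RTGCPHYS    GCPhysA20Mask
   RTUINT      fA20Enabled
   RTUINT      fSyncFlags
   RTUINT      enmGuestMode
   uint32_t    ~0 separator
   per mapping:   u32 index, string pszDesc, RTGCPTR GCPtr, RTGCUINTPTR cPTs
   uint32_t    ~0 terminator
   per RAM range: u32 index, GCPhys, GCPhysLast, cb, u8 fHaveBits,
                  u16 flags per page, then per-chunk (u8 presence + chunk
                  bits) or the flat pvHC bits
   uint32_t    ~0 terminator
   @endverbatim */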
1916
1917
1918/**
1919 * Execute state load operation.
1920 *
1921 * @returns VBox status code.
1922 * @param pVM VM Handle.
1923 * @param pSSM SSM operation handle.
1924 * @param u32Version Data layout version.
1925 */
1926static DECLCALLBACK(int) pgmR3Load(PVM pVM, PSSMHANDLE pSSM, uint32_t u32Version)
1927{
1928 /*
1929 * Validate version.
1930 */
1931 if (u32Version != PGM_SAVED_STATE_VERSION)
1932 {
1933 Log(("pgmR3Load: Invalid version u32Version=%d (current %d)!\n", u32Version, PGM_SAVED_STATE_VERSION));
1934 return VERR_SSM_UNSUPPORTED_DATA_UNIT_VERSION;
1935 }
1936
1937 /*
1938 * Call the reset function to make sure all the memory is cleared.
1939 */
1940 PGMR3Reset(pVM);
1941
1942 /*
1943 * Load basic data (required / unaffected by relocation).
1944 */
1945 PPGM pPGM = &pVM->pgm.s;
1946#if 1
1947 SSMR3GetBool(pSSM, &pPGM->fMappingsFixed);
1948#else
1949 uint32_t u;
1950 SSMR3GetU32(pSSM, &u);
1951 pPGM->fMappingsFixed = u;
1952#endif
1953 SSMR3GetGCPtr(pSSM, &pPGM->GCPtrMappingFixed);
1954 SSMR3GetU32(pSSM, &pPGM->cbMappingFixed);
1955
1956 RTUINT cbRamSize;
1957 int rc = SSMR3GetU32(pSSM, &cbRamSize);
1958 if (VBOX_FAILURE(rc))
1959 return rc;
1960 if (cbRamSize != pPGM->cbRamSize)
1961 return VERR_SSM_LOAD_MEMORY_SIZE_MISMATCH;
1962 SSMR3GetGCPhys(pSSM, &pPGM->GCPhysA20Mask);
1963 SSMR3GetUInt(pSSM, &pPGM->fA20Enabled);
1964 SSMR3GetUInt(pSSM, &pPGM->fSyncFlags);
1965 RTUINT uGuestMode;
1966 SSMR3GetUInt(pSSM, &uGuestMode);
1967 pPGM->enmGuestMode = (PGMMODE)uGuestMode;
1968
1969 /* check separator. */
1970 uint32_t u32Sep;
1971    rc = SSMR3GetU32(pSSM, &u32Sep);
1972 if (VBOX_FAILURE(rc))
1973 return rc;
1974 if (u32Sep != (uint32_t)~0)
1975 {
1976 AssertMsgFailed(("u32Sep=%#x (first)\n", u32Sep));
1977 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
1978 }
1979
1980 /*
1981 * The guest mappings.
1982 */
1983 uint32_t i = 0;
1984 for (;; i++)
1985 {
1986        /* Check the sequence number / separator. */
1987 rc = SSMR3GetU32(pSSM, &u32Sep);
1988 if (VBOX_FAILURE(rc))
1989 return rc;
1990 if (u32Sep == ~0U)
1991 break;
1992 if (u32Sep != i)
1993 {
1994 AssertMsgFailed(("u32Sep=%#x (last)\n", u32Sep));
1995 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
1996 }
1997
1998 /* get the mapping details. */
1999 char szDesc[256];
2000 szDesc[0] = '\0';
2001 rc = SSMR3GetStrZ(pSSM, szDesc, sizeof(szDesc));
2002 if (VBOX_FAILURE(rc))
2003 return rc;
2004 RTGCPTR GCPtr;
2005 SSMR3GetGCPtr(pSSM, &GCPtr);
2006 RTGCUINTPTR cPTs;
2007        rc = SSMR3GetGCUIntPtr(pSSM, &cPTs);
2008 if (VBOX_FAILURE(rc))
2009 return rc;
2010
2011 /* find matching range. */
2012 PPGMMAPPING pMapping;
2013 for (pMapping = pPGM->pMappingsR3; pMapping; pMapping = pMapping->pNextR3)
2014 if ( pMapping->cPTs == cPTs
2015 && !strcmp(pMapping->pszDesc, szDesc))
2016 break;
2017 if (!pMapping)
2018 {
2019 LogRel(("Couldn't find mapping: cPTs=%#x szDesc=%s (GCPtr=%VGv)\n",
2020 cPTs, szDesc, GCPtr));
2021 AssertFailed();
2022 return VERR_SSM_LOAD_CONFIG_MISMATCH;
2023 }
2024
2025 /* relocate it. */
2026 if (pMapping->GCPtr != GCPtr)
2027 {
2028 AssertMsg((GCPtr >> X86_PD_SHIFT << X86_PD_SHIFT) == GCPtr, ("GCPtr=%VGv\n", GCPtr));
2029#if HC_ARCH_BITS == 64
2030LogRel(("Mapping: %VGv -> %VGv %s\n", pMapping->GCPtr, GCPtr, pMapping->pszDesc));
2031#endif
2032 pgmR3MapRelocate(pVM, pMapping, pMapping->GCPtr >> X86_PD_SHIFT, GCPtr >> X86_PD_SHIFT);
2033 }
2034 else
2035 Log(("pgmR3Load: '%s' needed no relocation (%VGv)\n", szDesc, GCPtr));
2036 }
2037
2038 /*
2039 * Ram range flags and bits.
2040 */
2041 i = 0;
2042 for (PPGMRAMRANGE pRam = pPGM->pRamRangesR3; pRam; pRam = pRam->pNextR3, i++)
2043 {
2044 /** @todo MMIO ranges may move (PCI reconfig), we currently assume they don't. */
2045        /* Check the sequence number / separator. */
2046 rc = SSMR3GetU32(pSSM, &u32Sep);
2047 if (VBOX_FAILURE(rc))
2048 return rc;
2049 if (u32Sep == ~0U)
2050 break;
2051 if (u32Sep != i)
2052 {
2053 AssertMsgFailed(("u32Sep=%#x (last)\n", u32Sep));
2054 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2055 }
2056
2057 /* Get the range details. */
2058 RTGCPHYS GCPhys;
2059 SSMR3GetGCPhys(pSSM, &GCPhys);
2060 RTGCPHYS GCPhysLast;
2061 SSMR3GetGCPhys(pSSM, &GCPhysLast);
2062 RTGCPHYS cb;
2063 SSMR3GetGCPhys(pSSM, &cb);
2064 uint8_t fHaveBits;
2065 rc = SSMR3GetU8(pSSM, &fHaveBits);
2066 if (VBOX_FAILURE(rc))
2067 return rc;
2068 if (fHaveBits & ~1)
2069 {
2070 AssertMsgFailed(("u32Sep=%#x (last)\n", u32Sep));
2071 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2072 }
2073
2074 /* Match it up with the current range. */
2075 if ( GCPhys != pRam->GCPhys
2076 || GCPhysLast != pRam->GCPhysLast
2077 || cb != pRam->cb
2078 || fHaveBits != !!pRam->pvHC)
2079 {
2080 LogRel(("Ram range: %VGp-%VGp %VGp bytes %s\n"
2081 "State : %VGp-%VGp %VGp bytes %s\n",
2082 pRam->GCPhys, pRam->GCPhysLast, pRam->cb, pRam->pvHC ? "bits" : "nobits",
2083 GCPhys, GCPhysLast, cb, fHaveBits ? "bits" : "nobits"));
2084 /*
2085             * If we're loading a state for debugging purposes, don't make a fuss if
2086 * the MMIO[2] and ROM stuff isn't 100% right, just skip the mismatches.
2087 */
2088 if ( SSMR3HandleGetAfter(pSSM) != SSMAFTER_DEBUG_IT
2089 || GCPhys < 8 * _1M)
2090 AssertFailedReturn(VERR_SSM_LOAD_CONFIG_MISMATCH);
2091
2092 RTGCPHYS cPages = ((GCPhysLast - GCPhys) + 1) >> PAGE_SHIFT;
2093 while (cPages-- > 0)
2094 {
2095 uint16_t u16Ignore;
2096 SSMR3GetU16(pSSM, &u16Ignore);
2097 }
2098 continue;
2099 }
2100
2101 /* Flags. */
2102 const unsigned cPages = pRam->cb >> PAGE_SHIFT;
2103 for (unsigned iPage = 0; iPage < cPages; iPage++)
2104 {
2105 uint16_t u16 = 0;
2106 SSMR3GetU16(pSSM, &u16);
2107 u16 &= PAGE_OFFSET_MASK & ~( RT_BIT(4) | RT_BIT(5) | RT_BIT(6)
2108 | RT_BIT(7) | RT_BIT(8) | RT_BIT(9) | RT_BIT(10) );
2109 // &= MM_RAM_FLAGS_DYNAMIC_ALLOC | MM_RAM_FLAGS_RESERVED | MM_RAM_FLAGS_ROM | MM_RAM_FLAGS_MMIO | MM_RAM_FLAGS_MMIO2
2110 pRam->aPages[iPage].HCPhys = PGM_PAGE_GET_HCPHYS(&pRam->aPages[iPage]) | (RTHCPHYS)u16; /** @todo PAGE FLAGS */
2111 }
2112
2113 /* any memory associated with the range. */
2114 if (pRam->fFlags & MM_RAM_FLAGS_DYNAMIC_ALLOC)
2115 {
2116 for (unsigned iChunk = 0; iChunk < (pRam->cb >> PGM_DYNAMIC_CHUNK_SHIFT); iChunk++)
2117 {
2118 uint8_t fValidChunk;
2119
2120 rc = SSMR3GetU8(pSSM, &fValidChunk);
2121 if (VBOX_FAILURE(rc))
2122 return rc;
2123 if (fValidChunk > 1)
2124 return VERR_SSM_DATA_UNIT_FORMAT_CHANGED;
2125
2126 if (fValidChunk)
2127 {
2128 if (!pRam->pavHCChunkHC[iChunk])
2129 {
2130 rc = pgmr3PhysGrowRange(pVM, pRam->GCPhys + iChunk * PGM_DYNAMIC_CHUNK_SIZE);
2131 if (VBOX_FAILURE(rc))
2132 return rc;
2133 }
2134 Assert(pRam->pavHCChunkHC[iChunk]);
2135
2136 SSMR3GetMem(pSSM, pRam->pavHCChunkHC[iChunk], PGM_DYNAMIC_CHUNK_SIZE);
2137 }
2138 /* else nothing to do */
2139 }
2140 }
2141 else if (pRam->pvHC)
2142 {
2143 int rc = SSMR3GetMem(pSSM, pRam->pvHC, pRam->cb);
2144 if (VBOX_FAILURE(rc))
2145 {
2146 Log(("pgmR3Save: SSMR3GetMem(, %p, %#x) -> %Vrc\n", pRam->pvHC, pRam->cb, rc));
2147 return rc;
2148 }
2149 }
2150 }
2151
2152 /*
2153 * We require a full resync now.
2154 */
2155 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3_NON_GLOBAL);
2156 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);
2157 pPGM->fSyncFlags |= PGM_SYNC_UPDATE_PAGE_BIT_VIRTUAL;
2158 pPGM->fPhysCacheFlushPending = true;
2159 pgmR3HandlerPhysicalUpdateAll(pVM);
2160
2161 /*
2162 * Change the paging mode.
2163 */
2164 return pgmR3ChangeMode(pVM, pPGM->enmGuestMode);
2165}
2166
2167
2168/**
2169 * Show paging mode.
2170 *
2171 * @param pVM VM Handle.
2172 * @param pHlp The info helpers.
2173 * @param pszArgs "all" (default), "guest", "shadow" or "host".
2174 */
2175static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2176{
2177 /* digest argument. */
2178 bool fGuest, fShadow, fHost;
2179 if (pszArgs)
2180 pszArgs = RTStrStripL(pszArgs);
2181 if (!pszArgs || !*pszArgs || strstr(pszArgs, "all"))
2182 fShadow = fHost = fGuest = true;
2183 else
2184 {
2185 fShadow = fHost = fGuest = false;
2186 if (strstr(pszArgs, "guest"))
2187 fGuest = true;
2188 if (strstr(pszArgs, "shadow"))
2189 fShadow = true;
2190 if (strstr(pszArgs, "host"))
2191 fHost = true;
2192 }
2193
2194 /* print info. */
2195 if (fGuest)
2196 pHlp->pfnPrintf(pHlp, "Guest paging mode: %s, changed %RU64 times, A20 %s\n",
2197 PGMGetModeName(pVM->pgm.s.enmGuestMode), pVM->pgm.s.cGuestModeChanges.c,
2198 pVM->pgm.s.fA20Enabled ? "enabled" : "disabled");
2199 if (fShadow)
2200 pHlp->pfnPrintf(pHlp, "Shadow paging mode: %s\n", PGMGetModeName(pVM->pgm.s.enmShadowMode));
2201 if (fHost)
2202 {
2203 const char *psz;
2204 switch (pVM->pgm.s.enmHostMode)
2205 {
2206 case SUPPAGINGMODE_INVALID: psz = "invalid"; break;
2207 case SUPPAGINGMODE_32_BIT: psz = "32-bit"; break;
2208 case SUPPAGINGMODE_32_BIT_GLOBAL: psz = "32-bit+G"; break;
2209 case SUPPAGINGMODE_PAE: psz = "PAE"; break;
2210 case SUPPAGINGMODE_PAE_GLOBAL: psz = "PAE+G"; break;
2211 case SUPPAGINGMODE_PAE_NX: psz = "PAE+NX"; break;
2212 case SUPPAGINGMODE_PAE_GLOBAL_NX: psz = "PAE+G+NX"; break;
2213 case SUPPAGINGMODE_AMD64: psz = "AMD64"; break;
2214 case SUPPAGINGMODE_AMD64_GLOBAL: psz = "AMD64+G"; break;
2215 case SUPPAGINGMODE_AMD64_NX: psz = "AMD64+NX"; break;
2216 case SUPPAGINGMODE_AMD64_GLOBAL_NX: psz = "AMD64+G+NX"; break;
2217 default: psz = "unknown"; break;
2218 }
2219 pHlp->pfnPrintf(pHlp, "Host paging mode: %s\n", psz);
2220 }
2221}
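/* Usage sketch: assuming this handler is registered under the name "mode"
 * during init (the registration is not shown in this part of the file), the
 * host part can be dumped to the log with:
 * @verbatim
   DBGFR3InfoLog(pVM, "mode", "host");
   @endverbatim */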
2222
2223
2224/**
2225 * Dump registered MMIO ranges to the log.
2226 *
2227 * @param pVM VM Handle.
2228 * @param pHlp The info helpers.
2229 * @param pszArgs Arguments, ignored.
2230 */
2231static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2232{
2233 NOREF(pszArgs);
2234 pHlp->pfnPrintf(pHlp,
2235 "RAM ranges (pVM=%p)\n"
2236 "%.*s %.*s\n",
2237 pVM,
2238 sizeof(RTGCPHYS) * 4 + 1, "GC Phys Range ",
2239 sizeof(RTHCPTR) * 2, "pvHC ");
2240
2241 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesR3; pCur; pCur = pCur->pNextR3)
2242 pHlp->pfnPrintf(pHlp,
2243 "%RGp-%RGp %RHv %s\n",
2244 pCur->GCPhys,
2245 pCur->GCPhysLast,
2246 pCur->pvHC,
2247 pCur->pszDesc);
2248}
2249
2250/**
2251 * Dump the page directory to the log.
2252 *
2253 * @param pVM VM Handle.
2254 * @param pHlp The info helpers.
2255 * @param pszArgs Arguments, ignored.
2256 */
2257static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2258{
2259/** @todo fix this! Convert the PGMR3DumpHierarchyHC functions to do guest stuff. */
2260 /* Big pages supported? */
2261 const bool fPSE = !!(CPUMGetGuestCR4(pVM) & X86_CR4_PSE);
2262 /* Global pages supported? */
2263 const bool fPGE = !!(CPUMGetGuestCR4(pVM) & X86_CR4_PGE);
2264
2265 NOREF(pszArgs);
2266
2267 /*
2268 * Get page directory addresses.
2269 */
2270 PX86PD pPDSrc = pVM->pgm.s.pGuestPDHC;
2271 Assert(pPDSrc);
2272 Assert(MMPhysGCPhys2HCVirt(pVM, (RTGCPHYS)(CPUMGetGuestCR3(pVM) & X86_CR3_PAGE_MASK), sizeof(*pPDSrc)) == pPDSrc);
2273
2274 /*
2275 * Iterate the page directory.
2276 */
2277 for (unsigned iPD = 0; iPD < ELEMENTS(pPDSrc->a); iPD++)
2278 {
2279 X86PDE PdeSrc = pPDSrc->a[iPD];
2280 if (PdeSrc.n.u1Present)
2281 {
2282 if (PdeSrc.b.u1Size && fPSE)
2283 {
2284 pHlp->pfnPrintf(pHlp,
2285 "%04X - %VGp P=%d U=%d RW=%d G=%d - BIG\n",
2286 iPD,
2287 PdeSrc.u & X86_PDE_PG_MASK,
2288 PdeSrc.b.u1Present, PdeSrc.b.u1User, PdeSrc.b.u1Write, PdeSrc.b.u1Global && fPGE);
2289 }
2290 else
2291 {
2292 pHlp->pfnPrintf(pHlp,
2293 "%04X - %VGp P=%d U=%d RW=%d [G=%d]\n",
2294 iPD,
2295 PdeSrc.u & X86_PDE4M_PG_MASK,
2296 PdeSrc.n.u1Present, PdeSrc.n.u1User, PdeSrc.n.u1Write, PdeSrc.b.u1Global && fPGE);
2297 }
2298 }
2299 }
2300}
2301
2302
2303/**
2304 * Service a VMMCALLHOST_PGM_LOCK call.
2305 *
2306 * @returns VBox status code.
2307 * @param pVM The VM handle.
2308 */
2309PDMR3DECL(int) PGMR3LockCall(PVM pVM)
2310{
2311 return pgmLock(pVM);
2312}
2313
2314
2315/**
2316 * Converts a PGMMODE value to a PGM_TYPE_* \#define.
2317 *
2318 * @returns PGM_TYPE_*.
2319 * @param pgmMode The mode value to convert.
2320 */
2321DECLINLINE(unsigned) pgmModeToType(PGMMODE pgmMode)
2322{
2323 switch (pgmMode)
2324 {
2325 case PGMMODE_REAL: return PGM_TYPE_REAL;
2326 case PGMMODE_PROTECTED: return PGM_TYPE_PROT;
2327 case PGMMODE_32_BIT: return PGM_TYPE_32BIT;
2328 case PGMMODE_PAE:
2329 case PGMMODE_PAE_NX: return PGM_TYPE_PAE;
2330 case PGMMODE_AMD64:
2331 case PGMMODE_AMD64_NX: return PGM_TYPE_AMD64;
2332 default:
2333 AssertFatalMsgFailed(("pgmMode=%d\n", pgmMode));
2334 }
2335}
2336
2337
2338/**
2339 * Gets the index into the paging mode data array of a SHW+GST mode.
2340 *
2341 * @returns PGM::paPagingData index.
2342 * @param uShwType The shadow paging mode type.
2343 * @param uGstType The guest paging mode type.
2344 */
2345DECLINLINE(unsigned) pgmModeDataIndex(unsigned uShwType, unsigned uGstType)
2346{
2347 Assert(uShwType >= PGM_TYPE_32BIT && uShwType <= PGM_TYPE_AMD64);
2348 Assert(uGstType >= PGM_TYPE_REAL && uGstType <= PGM_TYPE_AMD64);
2349 return (uShwType - PGM_TYPE_32BIT) * (PGM_TYPE_AMD64 - PGM_TYPE_32BIT + 1)
2350 + (uGstType - PGM_TYPE_REAL);
2351}
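/* Worked example, assuming the PGM_TYPE_* values are consecutive: a PAE shadow
 * with a 32-bit guest gives (PGM_TYPE_PAE - PGM_TYPE_32BIT) * 3
 * + (PGM_TYPE_32BIT - PGM_TYPE_REAL) = 1 * 3 + 2 = 5. Note that the row stride
 * is (PGM_TYPE_AMD64 - PGM_TYPE_32BIT + 1) = 3 rather than the full five guest
 * types; the tighter packing works, presumably, because the combinations that
 * would collide are never used. */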
2352
2353
2354/**
2355 * Gets the index into the paging mode data array of a SHW+GST mode.
2356 *
2357 * @returns PGM::paPagingData index.
2358 * @param enmShw The shadow paging mode.
2359 * @param enmGst The guest paging mode.
2360 */
2361DECLINLINE(unsigned) pgmModeDataIndexByMode(PGMMODE enmShw, PGMMODE enmGst)
2362{
2363 Assert(enmShw >= PGMMODE_32_BIT && enmShw <= PGMMODE_MAX);
2364 Assert(enmGst > PGMMODE_INVALID && enmGst < PGMMODE_MAX);
2365 return pgmModeDataIndex(pgmModeToType(enmShw), pgmModeToType(enmGst));
2366}
2367
2368
2369/**
2370 * Calculates the max data index.
2371 * @returns The number of entries in the paging data array.
2372 */
2373DECLINLINE(unsigned) pgmModeDataMaxIndex(void)
2374{
2375 return pgmModeDataIndex(PGM_TYPE_AMD64, PGM_TYPE_AMD64) + 1;
2376}
2377
2378
2379/**
2380 * Initializes the paging mode data kept in PGM::paModeData.
2381 *
2382 * @param pVM The VM handle.
2383 * @param fResolveGCAndR0 Indicate whether or not GC and Ring-0 symbols can be resolved now.
2384 * This is used early in the init process to avoid trouble with PDM
2385 * not being initialized yet.
2386 */
2387static int pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0)
2388{
2389 PPGMMODEDATA pModeData;
2390 int rc;
2391
2392 /*
2393 * Allocate the array on the first call.
2394 */
2395 if (!pVM->pgm.s.paModeData)
2396 {
2397 pVM->pgm.s.paModeData = (PPGMMODEDATA)MMR3HeapAllocZ(pVM, MM_TAG_PGM, sizeof(PGMMODEDATA) * pgmModeDataMaxIndex());
2398 AssertReturn(pVM->pgm.s.paModeData, VERR_NO_MEMORY);
2399 }
2400
2401 /*
2402 * Initialize the array entries.
2403 */
2404 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_REAL)];
2405 pModeData->uShwType = PGM_TYPE_32BIT;
2406 pModeData->uGstType = PGM_TYPE_REAL;
2407 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2408 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2409 rc = PGM_BTH_NAME_32BIT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2410
2411 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_PROT)];
2412 pModeData->uShwType = PGM_TYPE_32BIT;
2413 pModeData->uGstType = PGM_TYPE_PROT;
2414 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2415 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2416 rc = PGM_BTH_NAME_32BIT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2417
2418 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_32BIT)];
2419 pModeData->uShwType = PGM_TYPE_32BIT;
2420 pModeData->uGstType = PGM_TYPE_32BIT;
2421 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2422 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2423 rc = PGM_BTH_NAME_32BIT_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2424
2425 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_REAL)];
2426 pModeData->uShwType = PGM_TYPE_PAE;
2427 pModeData->uGstType = PGM_TYPE_REAL;
2428 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2429 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2430 rc = PGM_BTH_NAME_PAE_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2431
2432 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PROT)];
2433 pModeData->uShwType = PGM_TYPE_PAE;
2434 pModeData->uGstType = PGM_TYPE_PROT;
2435 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2436 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2437 rc = PGM_BTH_NAME_PAE_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2438
2439 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_32BIT)];
2440 pModeData->uShwType = PGM_TYPE_PAE;
2441 pModeData->uGstType = PGM_TYPE_32BIT;
2442 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2443 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2444 rc = PGM_BTH_NAME_PAE_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2445
2446 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PAE)];
2447 pModeData->uShwType = PGM_TYPE_PAE;
2448 pModeData->uGstType = PGM_TYPE_PAE;
2449 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2450 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2451 rc = PGM_BTH_NAME_PAE_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2452
2453 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_AMD64, PGM_TYPE_AMD64)];
2454 pModeData->uShwType = PGM_TYPE_AMD64;
2455 pModeData->uGstType = PGM_TYPE_AMD64;
2456 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2457 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
 rc = PGM_BTH_NAME_AMD64_AMD64(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2458
2459 return VINF_SUCCESS;
2460}
2461
2462
2463/**
2464 * Switch to different (or, in the relocate case, relocated) mode data.
2465 *
2466 * @param pVM The VM handle.
2467 * @param enmShw The shadow paging mode.
2468 * @param enmGst The guest paging mode.
2469 */
2470static void pgmR3ModeDataSwitch(PVM pVM, PGMMODE enmShw, PGMMODE enmGst)
2471{
2472 PPGMMODEDATA pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(enmShw, enmGst)];
2473
2474 Assert(pModeData->uGstType == pgmModeToType(enmGst));
2475 Assert(pModeData->uShwType == pgmModeToType(enmShw));
2476
2477 /* shadow */
2478 pVM->pgm.s.pfnR3ShwRelocate = pModeData->pfnR3ShwRelocate;
2479 pVM->pgm.s.pfnR3ShwExit = pModeData->pfnR3ShwExit;
2480 pVM->pgm.s.pfnR3ShwGetPage = pModeData->pfnR3ShwGetPage;
2481 Assert(pVM->pgm.s.pfnR3ShwGetPage);
2482 pVM->pgm.s.pfnR3ShwModifyPage = pModeData->pfnR3ShwModifyPage;
2483 pVM->pgm.s.pfnR3ShwGetPDEByIndex = pModeData->pfnR3ShwGetPDEByIndex;
2484 pVM->pgm.s.pfnR3ShwSetPDEByIndex = pModeData->pfnR3ShwSetPDEByIndex;
2485 pVM->pgm.s.pfnR3ShwModifyPDEByIndex = pModeData->pfnR3ShwModifyPDEByIndex;
2486
2487 pVM->pgm.s.pfnGCShwGetPage = pModeData->pfnGCShwGetPage;
2488 pVM->pgm.s.pfnGCShwModifyPage = pModeData->pfnGCShwModifyPage;
2489 pVM->pgm.s.pfnGCShwGetPDEByIndex = pModeData->pfnGCShwGetPDEByIndex;
2490 pVM->pgm.s.pfnGCShwSetPDEByIndex = pModeData->pfnGCShwSetPDEByIndex;
2491 pVM->pgm.s.pfnGCShwModifyPDEByIndex = pModeData->pfnGCShwModifyPDEByIndex;
2492
2493 pVM->pgm.s.pfnR0ShwGetPage = pModeData->pfnR0ShwGetPage;
2494 pVM->pgm.s.pfnR0ShwModifyPage = pModeData->pfnR0ShwModifyPage;
2495 pVM->pgm.s.pfnR0ShwGetPDEByIndex = pModeData->pfnR0ShwGetPDEByIndex;
2496 pVM->pgm.s.pfnR0ShwSetPDEByIndex = pModeData->pfnR0ShwSetPDEByIndex;
2497 pVM->pgm.s.pfnR0ShwModifyPDEByIndex = pModeData->pfnR0ShwModifyPDEByIndex;
2498
2499
2500 /* guest */
2501 pVM->pgm.s.pfnR3GstRelocate = pModeData->pfnR3GstRelocate;
2502 pVM->pgm.s.pfnR3GstExit = pModeData->pfnR3GstExit;
2503 pVM->pgm.s.pfnR3GstGetPage = pModeData->pfnR3GstGetPage;
2504 Assert(pVM->pgm.s.pfnR3GstGetPage);
2505 pVM->pgm.s.pfnR3GstModifyPage = pModeData->pfnR3GstModifyPage;
2506 pVM->pgm.s.pfnR3GstGetPDE = pModeData->pfnR3GstGetPDE;
2507 pVM->pgm.s.pfnR3GstMonitorCR3 = pModeData->pfnR3GstMonitorCR3;
2508 pVM->pgm.s.pfnR3GstUnmonitorCR3 = pModeData->pfnR3GstUnmonitorCR3;
2509 pVM->pgm.s.pfnR3GstMapCR3 = pModeData->pfnR3GstMapCR3;
2510 pVM->pgm.s.pfnR3GstUnmapCR3 = pModeData->pfnR3GstUnmapCR3;
2511 pVM->pgm.s.pfnR3GstWriteHandlerCR3 = pModeData->pfnR3GstWriteHandlerCR3;
2512 pVM->pgm.s.pszR3GstWriteHandlerCR3 = pModeData->pszR3GstWriteHandlerCR3;
2513 pVM->pgm.s.pfnR3GstPAEWriteHandlerCR3 = pModeData->pfnR3GstPAEWriteHandlerCR3;
2514 pVM->pgm.s.pszR3GstPAEWriteHandlerCR3 = pModeData->pszR3GstPAEWriteHandlerCR3;
2515
2516 pVM->pgm.s.pfnGCGstGetPage = pModeData->pfnGCGstGetPage;
2517 pVM->pgm.s.pfnGCGstModifyPage = pModeData->pfnGCGstModifyPage;
2518 pVM->pgm.s.pfnGCGstGetPDE = pModeData->pfnGCGstGetPDE;
2519 pVM->pgm.s.pfnGCGstMonitorCR3 = pModeData->pfnGCGstMonitorCR3;
2520 pVM->pgm.s.pfnGCGstUnmonitorCR3 = pModeData->pfnGCGstUnmonitorCR3;
2521 pVM->pgm.s.pfnGCGstMapCR3 = pModeData->pfnGCGstMapCR3;
2522 pVM->pgm.s.pfnGCGstUnmapCR3 = pModeData->pfnGCGstUnmapCR3;
2523 pVM->pgm.s.pfnGCGstWriteHandlerCR3 = pModeData->pfnGCGstWriteHandlerCR3;
2524 pVM->pgm.s.pfnGCGstPAEWriteHandlerCR3 = pModeData->pfnGCGstPAEWriteHandlerCR3;
2525
2526 pVM->pgm.s.pfnR0GstGetPage = pModeData->pfnR0GstGetPage;
2527 pVM->pgm.s.pfnR0GstModifyPage = pModeData->pfnR0GstModifyPage;
2528 pVM->pgm.s.pfnR0GstGetPDE = pModeData->pfnR0GstGetPDE;
2529 pVM->pgm.s.pfnR0GstMonitorCR3 = pModeData->pfnR0GstMonitorCR3;
2530 pVM->pgm.s.pfnR0GstUnmonitorCR3 = pModeData->pfnR0GstUnmonitorCR3;
2531 pVM->pgm.s.pfnR0GstMapCR3 = pModeData->pfnR0GstMapCR3;
2532 pVM->pgm.s.pfnR0GstUnmapCR3 = pModeData->pfnR0GstUnmapCR3;
2533 pVM->pgm.s.pfnR0GstWriteHandlerCR3 = pModeData->pfnR0GstWriteHandlerCR3;
2534 pVM->pgm.s.pfnR0GstPAEWriteHandlerCR3 = pModeData->pfnR0GstPAEWriteHandlerCR3;
2535
2536
2537 /* both */
2538 pVM->pgm.s.pfnR3BthRelocate = pModeData->pfnR3BthRelocate;
2539 pVM->pgm.s.pfnR3BthTrap0eHandler = pModeData->pfnR3BthTrap0eHandler;
2540 pVM->pgm.s.pfnR3BthInvalidatePage = pModeData->pfnR3BthInvalidatePage;
2541 pVM->pgm.s.pfnR3BthSyncCR3 = pModeData->pfnR3BthSyncCR3;
2542 Assert(pVM->pgm.s.pfnR3BthSyncCR3);
2543 pVM->pgm.s.pfnR3BthSyncPage = pModeData->pfnR3BthSyncPage;
2544 pVM->pgm.s.pfnR3BthPrefetchPage = pModeData->pfnR3BthPrefetchPage;
2545 pVM->pgm.s.pfnR3BthVerifyAccessSyncPage = pModeData->pfnR3BthVerifyAccessSyncPage;
2546#ifdef VBOX_STRICT
2547 pVM->pgm.s.pfnR3BthAssertCR3 = pModeData->pfnR3BthAssertCR3;
2548#endif
2549
2550 pVM->pgm.s.pfnGCBthTrap0eHandler = pModeData->pfnGCBthTrap0eHandler;
2551 pVM->pgm.s.pfnGCBthInvalidatePage = pModeData->pfnGCBthInvalidatePage;
2552 pVM->pgm.s.pfnGCBthSyncCR3 = pModeData->pfnGCBthSyncCR3;
2553 pVM->pgm.s.pfnGCBthSyncPage = pModeData->pfnGCBthSyncPage;
2554 pVM->pgm.s.pfnGCBthPrefetchPage = pModeData->pfnGCBthPrefetchPage;
2555 pVM->pgm.s.pfnGCBthVerifyAccessSyncPage = pModeData->pfnGCBthVerifyAccessSyncPage;
2556#ifdef VBOX_STRICT
2557 pVM->pgm.s.pfnGCBthAssertCR3 = pModeData->pfnGCBthAssertCR3;
2558#endif
2559
2560 pVM->pgm.s.pfnR0BthTrap0eHandler = pModeData->pfnR0BthTrap0eHandler;
2561 pVM->pgm.s.pfnR0BthInvalidatePage = pModeData->pfnR0BthInvalidatePage;
2562 pVM->pgm.s.pfnR0BthSyncCR3 = pModeData->pfnR0BthSyncCR3;
2563 pVM->pgm.s.pfnR0BthSyncPage = pModeData->pfnR0BthSyncPage;
2564 pVM->pgm.s.pfnR0BthPrefetchPage = pModeData->pfnR0BthPrefetchPage;
2565 pVM->pgm.s.pfnR0BthVerifyAccessSyncPage = pModeData->pfnR0BthVerifyAccessSyncPage;
2566#ifdef VBOX_STRICT
2567 pVM->pgm.s.pfnR0BthAssertCR3 = pModeData->pfnR0BthAssertCR3;
2568#endif
2569}
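/*
 * Editor's sketch (standalone, not part of this file): the dispatch pattern
 * pgmR3ModeDataSwitch implements. A mode change re-points the per-VM function
 * pointers at the precomputed entry for the new SHW+GST pair, so hot paths
 * dispatch with one indirect call instead of switching on the mode each time.
 * All names below are illustrative stand-ins.
 */
#if 0
typedef struct EXMODEDATA
{
    int (*pfnShwGetPage)(void *pvVM, unsigned long long GCPtr);
} EXMODEDATA;

typedef struct EXVM
{
    int (*pfnShwGetPage)(void *pvVM, unsigned long long GCPtr); /* current mode's worker */
} EXVM;

static void exModeDataSwitch(EXVM *pVM, const EXMODEDATA *pModeData)
{
    pVM->pfnShwGetPage = pModeData->pfnShwGetPage;
    /* Callers then simply do: pVM->pfnShwGetPage(pVM, GCPtr); */
}
#endif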
2570
2571
2572#ifdef DEBUG_bird
2573#include <stdlib.h> /* getenv() remove me! */
2574#endif
2575
2576/**
2577 * Calculates the shadow paging mode.
2578 *
2579 * @returns The shadow paging mode.
2580 * @param enmGuestMode The guest mode.
2581 * @param enmHostMode The host mode.
2582 * @param enmShadowMode The current shadow mode.
2583 * @param penmSwitcher Where to store the switcher to use.
2584 * VMMSWITCHER_INVALID means no change.
2585 */
2586static PGMMODE pgmR3CalcShadowMode(PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher)
2587{
2588 VMMSWITCHER enmSwitcher = VMMSWITCHER_INVALID;
2589 switch (enmGuestMode)
2590 {
2591 /*
2592 * When switching to real or protected mode we don't change
2593 * anything since it's likely that we'll switch back pretty soon.
2594 *
2595 * During pgmR3InitPaging we'll end up here with PGMMODE_INVALID
2596 * and are supposed to determine which shadow paging mode and
2597 * switcher to use during init.
2598 */
2599 case PGMMODE_REAL:
2600 case PGMMODE_PROTECTED:
2601 if (enmShadowMode != PGMMODE_INVALID)
2602 break; /* (no change) */
2603 switch (enmHostMode)
2604 {
2605 case SUPPAGINGMODE_32_BIT:
2606 case SUPPAGINGMODE_32_BIT_GLOBAL:
2607 enmShadowMode = PGMMODE_32_BIT;
2608 enmSwitcher = VMMSWITCHER_32_TO_32;
2609 break;
2610
2611 case SUPPAGINGMODE_PAE:
2612 case SUPPAGINGMODE_PAE_NX:
2613 case SUPPAGINGMODE_PAE_GLOBAL:
2614 case SUPPAGINGMODE_PAE_GLOBAL_NX:
2615 enmShadowMode = PGMMODE_PAE;
2616 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
2617#ifdef DEBUG_bird
2618if (getenv("VBOX_32BIT"))
2619{
2620 enmShadowMode = PGMMODE_32_BIT;
2621 enmSwitcher = VMMSWITCHER_PAE_TO_32;
2622}
2623#endif
2624 break;
2625
2626 case SUPPAGINGMODE_AMD64:
2627 case SUPPAGINGMODE_AMD64_GLOBAL:
2628 case SUPPAGINGMODE_AMD64_NX:
2629 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
2630 enmShadowMode = PGMMODE_PAE;
2631 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
2632 break;
2633
2634 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
2635 }
2636 break;
2637
2638 case PGMMODE_32_BIT:
2639 switch (enmHostMode)
2640 {
2641 case SUPPAGINGMODE_32_BIT:
2642 case SUPPAGINGMODE_32_BIT_GLOBAL:
2643 enmShadowMode = PGMMODE_32_BIT;
2644 enmSwitcher = VMMSWITCHER_32_TO_32;
2645 break;
2646
2647 case SUPPAGINGMODE_PAE:
2648 case SUPPAGINGMODE_PAE_NX:
2649 case SUPPAGINGMODE_PAE_GLOBAL:
2650 case SUPPAGINGMODE_PAE_GLOBAL_NX:
2651 enmShadowMode = PGMMODE_PAE;
2652 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
2653#ifdef DEBUG_bird
2654if (getenv("VBOX_32BIT"))
2655{
2656 enmShadowMode = PGMMODE_32_BIT;
2657 enmSwitcher = VMMSWITCHER_PAE_TO_32;
2658}
2659#endif
2660 break;
2661
2662 case SUPPAGINGMODE_AMD64:
2663 case SUPPAGINGMODE_AMD64_GLOBAL:
2664 case SUPPAGINGMODE_AMD64_NX:
2665 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
2666 enmShadowMode = PGMMODE_PAE;
2667 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
2668 break;
2669
2670 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
2671 }
2672 break;
2673
2674 case PGMMODE_PAE:
2675 case PGMMODE_PAE_NX: /** @todo This might require more switchers and guest+both modes. */
2676 switch (enmHostMode)
2677 {
2678 case SUPPAGINGMODE_32_BIT:
2679 case SUPPAGINGMODE_32_BIT_GLOBAL:
2680 enmShadowMode = PGMMODE_PAE;
2681 enmSwitcher = VMMSWITCHER_32_TO_PAE;
2682 break;
2683
2684 case SUPPAGINGMODE_PAE:
2685 case SUPPAGINGMODE_PAE_NX:
2686 case SUPPAGINGMODE_PAE_GLOBAL:
2687 case SUPPAGINGMODE_PAE_GLOBAL_NX:
2688 enmShadowMode = PGMMODE_PAE;
2689 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
2690 break;
2691
2692 case SUPPAGINGMODE_AMD64:
2693 case SUPPAGINGMODE_AMD64_GLOBAL:
2694 case SUPPAGINGMODE_AMD64_NX:
2695 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
2696 enmShadowMode = PGMMODE_PAE;
2697 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
2698 break;
2699
2700 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
2701 }
2702 break;
2703
2704 case PGMMODE_AMD64:
2705 case PGMMODE_AMD64_NX:
2706 switch (enmHostMode)
2707 {
2708 case SUPPAGINGMODE_32_BIT:
2709 case SUPPAGINGMODE_32_BIT_GLOBAL:
2710 enmShadowMode = PGMMODE_PAE;
2711 enmSwitcher = VMMSWITCHER_32_TO_AMD64;
2712 break;
2713
2714 case SUPPAGINGMODE_PAE:
2715 case SUPPAGINGMODE_PAE_NX:
2716 case SUPPAGINGMODE_PAE_GLOBAL:
2717 case SUPPAGINGMODE_PAE_GLOBAL_NX:
2718 enmShadowMode = PGMMODE_PAE;
2719 enmSwitcher = VMMSWITCHER_PAE_TO_AMD64;
2720 break;
2721
2722 case SUPPAGINGMODE_AMD64:
2723 case SUPPAGINGMODE_AMD64_GLOBAL:
2724 case SUPPAGINGMODE_AMD64_NX:
2725 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
2726 enmShadowMode = PGMMODE_AMD64;
2727 enmSwitcher = VMMSWITCHER_AMD64_TO_AMD64;
2728 break;
2729
2730 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
2731 }
2732 break;
2733
2734
2735 default:
2736 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
2737 return PGMMODE_INVALID;
2738 }
2739
2740 *penmSwitcher = enmSwitcher;
2741 return enmShadowMode;
2742}
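/*
 * Editor's sketch (standalone, not part of this file): pgmR3CalcShadowMode's
 * decision table restated for the init-time case, ignoring switcher selection
 * and the DEBUG_bird override. The enum values are illustrative stand-ins for
 * PGMMODE_* / SUPPAGINGMODE_*.
 */
#if 0
#include <stdio.h>

typedef enum { EXG_REAL, EXG_PROT, EXG_32BIT, EXG_PAE, EXG_AMD64 } EXGUESTMODE;
typedef enum { EXH_32BIT, EXH_PAE, EXH_AMD64 } EXHOSTMODE;

static const char *exCalcShadowMode(EXGUESTMODE enmGuest, EXHOSTMODE enmHost)
{
    static const char *s_aapsz[5][3] =
    {   /* host: 32BIT    PAE     AMD64 */
        {        "32BIT", "PAE",  "PAE"   },  /* real mode guest */
        {        "32BIT", "PAE",  "PAE"   },  /* protected mode guest */
        {        "32BIT", "PAE",  "PAE"   },  /* 32-bit guest */
        {        "PAE",   "PAE",  "PAE"   },  /* PAE guest */
        {        "PAE",   "PAE",  "AMD64" },  /* AMD64 guest, as the switch above stands */
    };
    return s_aapsz[enmGuest][enmHost];
}

int main(void)
{
    printf("PAE guest on AMD64 host -> %s shadow\n", exCalcShadowMode(EXG_PAE, EXH_AMD64));
    return 0;
}
#endif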
2743
2744
2745/**
2746 * Performs the actual mode change.
2747 * This is called by PGMChangeMode and pgmR3InitPaging().
2748 *
2749 * @returns VBox status code.
2750 * @param pVM VM handle.
2751 * @param enmGuestMode The new guest mode. This is assumed to be different from
2752 * the current mode.
2753 */
2754int pgmR3ChangeMode(PVM pVM, PGMMODE enmGuestMode)
2755{
2756 LogFlow(("pgmR3ChangeMode: Guest mode: %d -> %d\n", pVM->pgm.s.enmGuestMode, enmGuestMode));
2757 STAM_REL_COUNTER_INC(&pVM->pgm.s.cGuestModeChanges);
2758
2759 /*
2760 * Calc the shadow mode and switcher.
2761 */
2762 VMMSWITCHER enmSwitcher;
2763 PGMMODE enmShadowMode = pgmR3CalcShadowMode(enmGuestMode, pVM->pgm.s.enmHostMode, pVM->pgm.s.enmShadowMode, &enmSwitcher);
2764 if (enmSwitcher != VMMSWITCHER_INVALID)
2765 {
2766 /*
2767 * Select new switcher.
2768 */
2769 int rc = VMMR3SelectSwitcher(pVM, enmSwitcher);
2770 if (VBOX_FAILURE(rc))
2771 {
2772 AssertReleaseMsgFailed(("VMMR3SelectSwitcher(%d) -> %Vrc\n", enmSwitcher, rc));
2773 return rc;
2774 }
2775 }
2776
2777 /*
2778 * Exit old mode(s).
2779 */
2780 /* shadow */
2781 if (enmShadowMode != pVM->pgm.s.enmShadowMode)
2782 {
2783 LogFlow(("pgmR3ChangeMode: Shadow mode: %d -> %d\n", pVM->pgm.s.enmShadowMode, enmShadowMode));
2784 if (PGM_SHW_PFN(Exit, pVM))
2785 {
2786 int rc = PGM_SHW_PFN(Exit, pVM)(pVM);
2787 if (VBOX_FAILURE(rc))
2788 {
2789 AssertMsgFailed(("Exit failed for shadow mode %d: %Vrc\n", pVM->pgm.s.enmShadowMode, rc));
2790 return rc;
2791 }
2792 }
2793
2794 }
2795
2796 /* guest */
2797 if (PGM_GST_PFN(Exit, pVM))
2798 {
2799 int rc = PGM_GST_PFN(Exit, pVM)(pVM);
2800 if (VBOX_FAILURE(rc))
2801 {
2802 AssertMsgFailed(("Exit failed for guest mode %d: %Vrc\n", pVM->pgm.s.enmGuestMode, rc));
2803 return rc;
2804 }
2805 }
2806
2807 /*
2808 * Load new paging mode data.
2809 */
2810 pgmR3ModeDataSwitch(pVM, enmShadowMode, enmGuestMode);
2811
2812 /*
2813 * Enter new shadow mode (if changed).
2814 */
2815 if (enmShadowMode != pVM->pgm.s.enmShadowMode)
2816 {
2817 int rc;
2818 pVM->pgm.s.enmShadowMode = enmShadowMode;
2819 switch (enmShadowMode)
2820 {
2821 case PGMMODE_32_BIT:
2822 rc = PGM_SHW_NAME_32BIT(Enter)(pVM);
2823 break;
2824 case PGMMODE_PAE:
2825 case PGMMODE_PAE_NX:
2826 rc = PGM_SHW_NAME_PAE(Enter)(pVM);
2827 break;
2828 case PGMMODE_AMD64:
2829 case PGMMODE_AMD64_NX:
2830 rc = PGM_SHW_NAME_AMD64(Enter)(pVM);
2831 break;
2832 case PGMMODE_REAL:
2833 case PGMMODE_PROTECTED:
2834 default:
2835 AssertReleaseMsgFailed(("enmShadowMode=%d\n", enmShadowMode));
2836 return VERR_INTERNAL_ERROR;
2837 }
2838 if (VBOX_FAILURE(rc))
2839 {
2840 AssertReleaseMsgFailed(("Entering enmShadowMode=%d failed: %Vrc\n", enmShadowMode, rc));
2841 pVM->pgm.s.enmShadowMode = PGMMODE_INVALID;
2842 return rc;
2843 }
2844 }
2845
2846 /*
2847 * Enter the new guest and shadow+guest modes.
2848 */
2849 int rc = -1;
2850 int rc2 = -1;
2851 RTGCPHYS GCPhysCR3 = NIL_RTGCPHYS;
2852 pVM->pgm.s.enmGuestMode = enmGuestMode;
2853 switch (enmGuestMode)
2854 {
2855 case PGMMODE_REAL:
2856 rc = PGM_GST_NAME_REAL(Enter)(pVM, NIL_RTGCPHYS);
2857 switch (pVM->pgm.s.enmShadowMode)
2858 {
2859 case PGMMODE_32_BIT:
2860 rc2 = PGM_BTH_NAME_32BIT_REAL(Enter)(pVM, NIL_RTGCPHYS);
2861 break;
2862 case PGMMODE_PAE:
2863 case PGMMODE_PAE_NX:
2864 rc2 = PGM_BTH_NAME_PAE_REAL(Enter)(pVM, NIL_RTGCPHYS);
2865 break;
2866 case PGMMODE_AMD64:
2867 case PGMMODE_AMD64_NX:
2868 AssertMsgFailed(("Should use PAE shadow mode!\n"));
2869 default: AssertFailed(); break;
2870 }
2871 break;
2872
2873 case PGMMODE_PROTECTED:
2874 rc = PGM_GST_NAME_PROT(Enter)(pVM, NIL_RTGCPHYS);
2875 switch (pVM->pgm.s.enmShadowMode)
2876 {
2877 case PGMMODE_32_BIT:
2878 rc2 = PGM_BTH_NAME_32BIT_PROT(Enter)(pVM, NIL_RTGCPHYS);
2879 break;
2880 case PGMMODE_PAE:
2881 case PGMMODE_PAE_NX:
2882 rc2 = PGM_BTH_NAME_PAE_PROT(Enter)(pVM, NIL_RTGCPHYS);
2883 break;
2884 case PGMMODE_AMD64:
2885 case PGMMODE_AMD64_NX:
2886 AssertMsgFailed(("Should use PAE shadow mode!\n"));
2887 default: AssertFailed(); break;
2888 }
2889 break;
2890
2891 case PGMMODE_32_BIT:
2892 GCPhysCR3 = CPUMGetGuestCR3(pVM) & X86_CR3_PAGE_MASK;
2893 rc = PGM_GST_NAME_32BIT(Enter)(pVM, GCPhysCR3);
2894 switch (pVM->pgm.s.enmShadowMode)
2895 {
2896 case PGMMODE_32_BIT:
2897 rc2 = PGM_BTH_NAME_32BIT_32BIT(Enter)(pVM, GCPhysCR3);
2898 break;
2899 case PGMMODE_PAE:
2900 case PGMMODE_PAE_NX:
2901 rc2 = PGM_BTH_NAME_PAE_32BIT(Enter)(pVM, GCPhysCR3);
2902 break;
2903 case PGMMODE_AMD64:
2904 case PGMMODE_AMD64_NX:
2905 AssertMsgFailed(("Should use PAE shadow mode!\n"));
2906 default: AssertFailed(); break;
2907 }
2908 break;
2909
2910 //case PGMMODE_PAE_NX:
2911 case PGMMODE_PAE:
2912 GCPhysCR3 = CPUMGetGuestCR3(pVM) & X86_CR3_PAE_PAGE_MASK;
2913 rc = PGM_GST_NAME_PAE(Enter)(pVM, GCPhysCR3);
2914 switch (pVM->pgm.s.enmShadowMode)
2915 {
2916 case PGMMODE_PAE:
2917 case PGMMODE_PAE_NX:
2918 rc2 = PGM_BTH_NAME_PAE_PAE(Enter)(pVM, GCPhysCR3);
2919 break;
2920 case PGMMODE_32_BIT:
2921 case PGMMODE_AMD64:
2922 case PGMMODE_AMD64_NX:
2923 AssertMsgFailed(("Should use PAE shadow mode!\n"));
2924 default: AssertFailed(); break;
2925 }
2926 break;
2927
2928 //case PGMMODE_AMD64_NX:
2929 case PGMMODE_AMD64:
2930 GCPhysCR3 = CPUMGetGuestCR3(pVM) & 0xfffffffffffff000ULL; /** @todo define this mask and make CR3 64-bit in this case! */
2931 rc = PGM_GST_NAME_AMD64(Enter)(pVM, GCPhysCR3);
2932 switch (pVM->pgm.s.enmShadowMode)
2933 {
2934 case PGMMODE_AMD64:
2935 case PGMMODE_AMD64_NX:
2936 rc2 = PGM_BTH_NAME_AMD64_AMD64(Enter)(pVM, GCPhysCR3);
2937 break;
2938 case PGMMODE_32_BIT:
2939 case PGMMODE_PAE:
2940 case PGMMODE_PAE_NX:
2941 AssertMsgFailed(("Should use AMD64 shadow mode!\n"));
2942 default: AssertFailed(); break;
2943 }
2944 break;
2945
2946 default:
2947 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
2948 rc = VERR_NOT_IMPLEMENTED;
2949 break;
2950 }
2951
2952 /* status codes. */
2953 AssertRC(rc);
2954 AssertRC(rc2);
2955 if (VBOX_SUCCESS(rc))
2956 {
2957 rc = rc2;
2958 if (VBOX_SUCCESS(rc)) /* no informational status codes. */
2959 rc = VINF_SUCCESS;
2960 }
2961
2962 /*
2963 * Notify SELM so it can update the TSSes with correct CR3s.
2964 */
2965 SELMR3PagingModeChanged(pVM);
2966
2967 /* Notify HWACCM as well. */
2968 HWACCMR3PagingModeChanged(pVM, pVM->pgm.s.enmShadowMode);
2969 return rc;
2970}
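/*
 * Editor's sketch (standalone, not part of this file): the rc/rc2 merge at
 * the end of pgmR3ChangeMode, restated. VBox status convention as used here:
 * negative is failure, zero is VINF_SUCCESS, positive is an informational
 * success code, and VBOX_SUCCESS() means "not negative".
 */
#if 0
static int exMergeModeChangeStatus(int rc, int rc2)
{
    if (rc < 0)     /* guest mode Enter failed: report that. */
        return rc;
    if (rc2 < 0)    /* shadow+guest Enter failed: report that instead. */
        return rc2;
    return 0;       /* VINF_SUCCESS; informational codes are deliberately dropped. */
}
#endif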
2971
2972
2973/**
2974 * Dumps a PAE shadow page table.
2975 *
2976 * @returns VBox status code (VINF_SUCCESS).
2977 * @param pVM The VM handle.
2978 * @param pPT Pointer to the page table.
2979 * @param u64Address The virtual address at which the page table starts.
2980 * @param fLongMode Set if this is a long mode table; clear if it's a legacy mode table.
2981 * @param cMaxDepth The maximum depth.
2982 * @param pHlp Pointer to the output functions.
2983 */
2984static int pgmR3DumpHierarchyHCPaePT(PVM pVM, PX86PTPAE pPT, uint64_t u64Address, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
2985{
2986 for (unsigned i = 0; i < ELEMENTS(pPT->a); i++)
2987 {
2988 X86PTEPAE Pte = pPT->a[i];
2989 if (Pte.n.u1Present)
2990 {
2991 pHlp->pfnPrintf(pHlp,
2992 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
2993 ? "%016llx 3 | P %c %c %c %c %c %s %s %s %s 4K %c%c%c %016llx\n"
2994 : "%08llx 2 | P %c %c %c %c %c %s %s %s %s 4K %c%c%c %016llx\n",
2995 u64Address + ((uint64_t)i << X86_PT_PAE_SHIFT),
2996 Pte.n.u1Write ? 'W' : 'R',
2997 Pte.n.u1User ? 'U' : 'S',
2998 Pte.n.u1Accessed ? 'A' : '-',
2999 Pte.n.u1Dirty ? 'D' : '-',
3000 Pte.n.u1Global ? 'G' : '-',
3001 Pte.n.u1WriteThru ? "WT" : "--",
3002 Pte.n.u1CacheDisable? "CD" : "--",
3003 Pte.n.u1PAT ? "AT" : "--",
3004 Pte.n.u1NoExecute ? "NX" : "--",
3005 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
3006 Pte.u & RT_BIT(10) ? '1' : '0',
3007 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED? 'v' : '-',
3008 Pte.u & X86_PTE_PAE_PG_MASK);
3009 }
3010 }
3011 return VINF_SUCCESS;
3012}
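/*
 * Editor's sketch (standalone, not part of this file): decoding the same PTE
 * fields with plain masks instead of the X86PTEPAE bitfields. The bit
 * positions are the architectural x86 PAE/long-mode layout; the VBox-private
 * AVL bits (PGM_PTFLAGS_*) are omitted.
 */
#if 0
#include <stdint.h>
#include <stdio.h>

#define EX_PTE_P    (1ULL << 0)   /* present */
#define EX_PTE_RW   (1ULL << 1)   /* writable */
#define EX_PTE_US   (1ULL << 2)   /* user/supervisor */
#define EX_PTE_PWT  (1ULL << 3)   /* write-through */
#define EX_PTE_A    (1ULL << 5)   /* accessed */
#define EX_PTE_D    (1ULL << 6)   /* dirty */
#define EX_PTE_G    (1ULL << 8)   /* global */
#define EX_PTE_NX   (1ULL << 63)  /* no-execute */
#define EX_PTE_ADDR 0x000ffffffffff000ULL /* physical address, bits 12..51 */

static void exDumpPaePte(uint64_t u64Address, uint64_t uPte)
{
    if (!(uPte & EX_PTE_P))
        return; /* not present: skipped, just like the loop above. */
    printf("%016llx | P %c %c %c %c %c %s %s -> %016llx\n",
           (unsigned long long)u64Address,
           uPte & EX_PTE_RW  ? 'W' : 'R',
           uPte & EX_PTE_US  ? 'U' : 'S',
           uPte & EX_PTE_A   ? 'A' : '-',
           uPte & EX_PTE_D   ? 'D' : '-',
           uPte & EX_PTE_G   ? 'G' : '-',
           uPte & EX_PTE_PWT ? "WT" : "--",
           uPte & EX_PTE_NX  ? "NX" : "--",
           (unsigned long long)(uPte & EX_PTE_ADDR));
}
#endif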
3013
3014
3015/**
3016 * Dumps a PAE shadow page directory table.
3017 *
3018 * @returns VBox status code (VINF_SUCCESS).
3019 * @param pVM The VM handle.
3020 * @param HCPhys The physical address of the page directory table.
3021 * @param u64Address The virtual address at which the page directory starts.
3022 * @param cr4 The CR4; only the PSE bit is currently used.
3023 * @param fLongMode Set if this is a long mode table; clear if it's a legacy mode table.
3024 * @param cMaxDepth The maximum depth.
3025 * @param pHlp Pointer to the output functions.
3026 */
3027static int pgmR3DumpHierarchyHCPaePD(PVM pVM, RTHCPHYS HCPhys, uint64_t u64Address, uint32_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3028{
3029 PX86PDPAE pPD = (PX86PDPAE)MMPagePhys2Page(pVM, HCPhys);
3030 if (!pPD)
3031 {
3032 pHlp->pfnPrintf(pHlp, "%0*llx error! Page directory at HCPhys=%#VHp was not found in the page pool!\n",
3033 fLongMode ? 16 : 8, u64Address, HCPhys);
3034 return VERR_INVALID_PARAMETER;
3035 }
3036 int rc = VINF_SUCCESS;
3037 for (unsigned i = 0; i < ELEMENTS(pPD->a); i++)
3038 {
3039 X86PDEPAE Pde = pPD->a[i];
3040 if (Pde.n.u1Present)
3041 {
3042 if ((cr4 & X86_CR4_PSE) && Pde.b.u1Size)
3043 pHlp->pfnPrintf(pHlp,
3044 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
3045 ? "%016llx 2 | P %c %c %c %c %c %s %s %s %s 4M %c%c%c %016llx\n"
3046 : "%08llx 1 | P %c %c %c %c %c %s %s %s %s 4M %c%c%c %016llx\n",
3047 u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT),
3048 Pde.b.u1Write ? 'W' : 'R',
3049 Pde.b.u1User ? 'U' : 'S',
3050 Pde.b.u1Accessed ? 'A' : '-',
3051 Pde.b.u1Dirty ? 'D' : '-',
3052 Pde.b.u1Global ? 'G' : '-',
3053 Pde.b.u1WriteThru ? "WT" : "--",
3054 Pde.b.u1CacheDisable? "CD" : "--",
3055 Pde.b.u1PAT ? "AT" : "--",
3056 Pde.b.u1NoExecute ? "NX" : "--",
3057 Pde.u & RT_BIT_64(9) ? '1' : '0',
3058 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
3059 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
3060 Pde.u & X86_PDE_PAE_PG_MASK);
3061 else
3062 {
3063 pHlp->pfnPrintf(pHlp,
3064 fLongMode /*P R S A D G WT CD AT NX 4M a p ? */
3065 ? "%016llx 2 | P %c %c %c %c %c %s %s .. %s 4K %c%c%c %016llx\n"
3066 : "%08llx 1 | P %c %c %c %c %c %s %s .. %s 4K %c%c%c %016llx\n",
3067 u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT),
3068 Pde.n.u1Write ? 'W' : 'R',
3069 Pde.n.u1User ? 'U' : 'S',
3070 Pde.n.u1Accessed ? 'A' : '-',
3071 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
3072 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
3073 Pde.n.u1WriteThru ? "WT" : "--",
3074 Pde.n.u1CacheDisable? "CD" : "--",
3075 Pde.n.u1NoExecute ? "NX" : "--",
3076 Pde.u & RT_BIT_64(9) ? '1' : '0',
3077 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
3078 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
3079 Pde.u & X86_PDE_PAE_PG_MASK);
3080 if (cMaxDepth >= 1)
3081 {
3082 /** @todo what about using the page pool for mapping PTs? */
3083 uint64_t u64AddressPT = u64Address + ((uint64_t)i << X86_PD_PAE_SHIFT);
3084 RTHCPHYS HCPhysPT = Pde.u & X86_PDE_PAE_PG_MASK;
3085 PX86PTPAE pPT = NULL;
3086 if (!(Pde.u & PGM_PDFLAGS_MAPPING))
3087 pPT = (PX86PTPAE)MMPagePhys2Page(pVM, HCPhysPT);
3088 else
3089 {
3090 for (PPGMMAPPING pMap = pVM->pgm.s.pMappingsR3; pMap; pMap = pMap->pNextR3)
3091 {
3092 uint64_t off = u64AddressPT - pMap->GCPtr;
3093 if (off < pMap->cb)
3094 {
3095 const int iPDE = (uint32_t)(off >> X86_PD_SHIFT);
3096 const int iSub = (int)((off >> X86_PD_PAE_SHIFT) & 1); /* MSC is a pain sometimes */
3097 if ((iSub ? pMap->aPTs[iPDE].HCPhysPaePT1 : pMap->aPTs[iPDE].HCPhysPaePT0) != HCPhysPT)
3098 pHlp->pfnPrintf(pHlp, "%0*llx error! Mapping error! PT %d has HCPhysPT=%VHp not %VHp is in the PD.\n",
3099 fLongMode ? 16 : 8, u64AddressPT, iPDE,
3100 iSub ? pMap->aPTs[iPDE].HCPhysPaePT1 : pMap->aPTs[iPDE].HCPhysPaePT0, HCPhysPT);
3101 pPT = &pMap->aPTs[iPDE].paPaePTsR3[iSub];
3102 }
3103 }
3104 }
3105 int rc2 = VERR_INVALID_PARAMETER;
3106 if (pPT)
3107 rc2 = pgmR3DumpHierarchyHCPaePT(pVM, pPT, u64AddressPT, fLongMode, cMaxDepth - 1, pHlp);
3108 else
3109 pHlp->pfnPrintf(pHlp, "%0*llx error! Page table at HCPhys=%#VHp was not found in the page pool!\n",
3110 fLongMode ? 16 : 8, u64AddressPT, HCPhysPT);
3111 if (rc2 < rc && VBOX_SUCCESS(rc))
3112 rc = rc2;
3113 }
3114 }
3115 }
3116 }
3117 return rc;
3118}
3119
3120
3121/**
3122 * Dumps a PAE shadow page directory pointer table.
3123 *
3124 * @returns VBox status code (VINF_SUCCESS).
3125 * @param pVM The VM handle.
3126 * @param HCPhys The physical address of the page directory pointer table.
3127 * @param u64Address The virtual address at which the table starts.
3128 * @param cr4 The CR4; only the PSE bit is currently used.
3129 * @param fLongMode Set if this is a long mode table; clear if it's a legacy mode table.
3130 * @param cMaxDepth The maximum depth.
3131 * @param pHlp Pointer to the output functions.
3132 */
3133static int pgmR3DumpHierarchyHCPaePDPT(PVM pVM, RTHCPHYS HCPhys, uint64_t u64Address, uint32_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3134{
3135 PX86PDPT pPDPT = (PX86PDPT)MMPagePhys2Page(pVM, HCPhys);
3136 if (!pPDPT)
3137 {
3138 pHlp->pfnPrintf(pHlp, "%0*llx error! Page directory pointer table at HCPhys=%#VHp was not found in the page pool!\n",
3139 fLongMode ? 16 : 8, u64Address, HCPhys);
3140 return VERR_INVALID_PARAMETER;
3141 }
3142
3143 int rc = VINF_SUCCESS;
3144 const unsigned c = fLongMode ? ELEMENTS(pPDPT->a) : X86_PG_PAE_PDPE_ENTRIES;
3145 for (unsigned i = 0; i < c; i++)
3146 {
3147 X86PDPE Pdpe = pPDPT->a[i];
3148 if (Pdpe.n.u1Present)
3149 {
3150 if (fLongMode)
3151 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a p ? */
3152 "%016llx 1 | P %c %c %c %c %c %s %s %s %s .. %c%c%c %016llx\n",
3153 u64Address + ((uint64_t)i << X86_PDPT_SHIFT),
3154 Pdpe.n.u1Write ? 'W' : 'R',
3155 Pdpe.n.u1User ? 'U' : 'S',
3156 Pdpe.n.u1Accessed ? 'A' : '-',
3157 Pdpe.n.u3Reserved & 1? '?' : '.', /* ignored */
3158 Pdpe.n.u3Reserved & 4? '!' : '.', /* mbz */
3159 Pdpe.n.u1WriteThru ? "WT" : "--",
3160 Pdpe.n.u1CacheDisable? "CD" : "--",
3161 Pdpe.n.u3Reserved & 2? "!" : "..",/* mbz */
3162 Pdpe.n.u1NoExecute ? "NX" : "--",
3163 Pdpe.u & RT_BIT(9) ? '1' : '0',
3164 Pdpe.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
3165 Pdpe.u & RT_BIT(11) ? '1' : '0',
3166 Pdpe.u & X86_PDPE_PG_MASK);
3167 else
3168 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a p ? */
3169 "%08x 0 | P %c %c %c %c %c %s %s %s %s .. %c%c%c %016llx\n",
3170 i << X86_PDPT_SHIFT,
3171 Pdpe.n.u1Write ? '!' : '.', /* mbz */
3172 Pdpe.n.u1User ? '!' : '.', /* mbz */
3173 Pdpe.n.u1Accessed ? '!' : '.', /* mbz */
3174 Pdpe.n.u3Reserved & 1? '!' : '.', /* mbz */
3175 Pdpe.n.u3Reserved & 4? '!' : '.', /* mbz */
3176 Pdpe.n.u1WriteThru ? "WT" : "--",
3177 Pdpe.n.u1CacheDisable? "CD" : "--",
3178 Pdpe.n.u3Reserved & 2? "!" : "..",/* mbz */
3179 Pdpe.n.u1NoExecute ? "NX" : "--",
3180 Pdpe.u & RT_BIT(9) ? '1' : '0',
3181 Pdpe.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
3182 Pdpe.u & RT_BIT(11) ? '1' : '0',
3183 Pdpe.u & X86_PDPE_PG_MASK);
3184 if (cMaxDepth >= 1)
3185 {
3186 int rc2 = pgmR3DumpHierarchyHCPaePD(pVM, Pdpe.u & X86_PDPE_PG_MASK, u64Address + ((uint64_t)i << X86_PDPT_SHIFT),
3187 cr4, fLongMode, cMaxDepth - 1, pHlp);
3188 if (rc2 < rc && VBOX_SUCCESS(rc))
3189 rc = rc2;
3190 }
3191 }
3192 }
3193 return rc;
3194}
3195
3196
3197/**
3198 * Dumps a long mode shadow page map level 4 (PML4) table.
3199 *
3200 * @returns VBox status code (VINF_SUCCESS).
3201 * @param pVM The VM handle.
3202 * @param HCPhys The physical address of the page map level 4 table.
3203 * @param cr4 The CR4; only the PSE bit is currently used.
3204 * @param cMaxDepth The maximum depth.
3205 * @param pHlp Pointer to the output functions.
3206 */
3207static int pgmR3DumpHierarchyHcPaePML4(PVM pVM, RTHCPHYS HCPhys, uint32_t cr4, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3208{
3209 PX86PML4 pPML4 = (PX86PML4)MMPagePhys2Page(pVM, HCPhys);
3210 if (!pPML4)
3211 {
3212 pHlp->pfnPrintf(pHlp, "Page map level 4 at HCPhys=%#VHp was not found in the page pool!\n", HCPhys);
3213 return VERR_INVALID_PARAMETER;
3214 }
3215
3216 int rc = VINF_SUCCESS;
3217 for (unsigned i = 0; i < ELEMENTS(pPML4->a); i++)
3218 {
3219 X86PML4E Pml4e = pPML4->a[i];
3220 if (Pml4e.n.u1Present)
3221 {
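            /* Entries 256..511 map the upper half of the 48-bit canonical
               address space, so bits 63:48 must be sign-extended from bit 47;
               the (i >> 8) * 0xffff000000000000 term below sets them to all
               ones exactly when i >= 256. */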
3222 uint64_t u64Address = ((uint64_t)i << X86_PML4_SHIFT) | (((uint64_t)i >> (X86_PML4_SHIFT - X86_PDPT_SHIFT - 1)) * 0xffff000000000000ULL);
3223 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a p ? */
3224 "%016llx 0 | P %c %c %c %c %c %s %s %s %s .. %c%c%c %016llx\n",
3225 u64Address,
3226 Pml4e.n.u1Write ? 'W' : 'R',
3227 Pml4e.n.u1User ? 'U' : 'S',
3228 Pml4e.n.u1Accessed ? 'A' : '-',
3229 Pml4e.n.u3Reserved & 1? '?' : '.', /* ignored */
3230 Pml4e.n.u3Reserved & 4? '!' : '.', /* mbz */
3231 Pml4e.n.u1WriteThru ? "WT" : "--",
3232 Pml4e.n.u1CacheDisable? "CD" : "--",
3233 Pml4e.n.u3Reserved & 2? "!" : "..",/* mbz */
3234 Pml4e.n.u1NoExecute ? "NX" : "--",
3235 Pml4e.u & RT_BIT(9) ? '1' : '0',
3236 Pml4e.u & PGM_PLXFLAGS_PERMANENT ? 'p' : '-',
3237 Pml4e.u & RT_BIT(11) ? '1' : '0',
3238 Pml4e.u & X86_PML4E_PG_MASK);
3239
3240 if (cMaxDepth >= 1)
3241 {
3242 int rc2 = pgmR3DumpHierarchyHCPaePDPT(pVM, Pml4e.u & X86_PML4E_PG_MASK, u64Address, cr4, true, cMaxDepth - 1, pHlp);
3243 if (rc2 < rc && VBOX_SUCCESS(rc))
3244 rc = rc2;
3245 }
3246 }
3247 }
3248 return rc;
3249}
3250
3251
3252/**
3253 * Dumps a 32-bit shadow page table.
3254 *
3255 * @returns VBox status code (VINF_SUCCESS).
3256 * @param pVM The VM handle.
3257 * @param pPT Pointer to the page table.
3258 * @param u32Address The virtual address this table starts at.
3259 * @param pHlp Pointer to the output functions.
3260 */
3261int pgmR3DumpHierarchyHC32BitPT(PVM pVM, PX86PT pPT, uint32_t u32Address, PCDBGFINFOHLP pHlp)
3262{
3263 for (unsigned i = 0; i < ELEMENTS(pPT->a); i++)
3264 {
3265 X86PTE Pte = pPT->a[i];
3266 if (Pte.n.u1Present)
3267 {
3268 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
3269 "%08x 1 | P %c %c %c %c %c %s %s %s .. 4K %c%c%c %08x\n",
3270 u32Address + (i << X86_PT_SHIFT),
3271 Pte.n.u1Write ? 'W' : 'R',
3272 Pte.n.u1User ? 'U' : 'S',
3273 Pte.n.u1Accessed ? 'A' : '-',
3274 Pte.n.u1Dirty ? 'D' : '-',
3275 Pte.n.u1Global ? 'G' : '-',
3276 Pte.n.u1WriteThru ? "WT" : "--",
3277 Pte.n.u1CacheDisable? "CD" : "--",
3278 Pte.n.u1PAT ? "AT" : "--",
3279 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
3280 Pte.u & RT_BIT(10) ? '1' : '0',
3281 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED ? 'v' : '-',
3282 Pte.u & X86_PDE_PG_MASK);
3283 }
3284 }
3285 return VINF_SUCCESS;
3286}
3287
3288
3289/**
3290 * Dumps a 32-bit shadow page directory and page tables.
3291 *
3292 * @returns VBox status code (VINF_SUCCESS).
3293 * @param pVM The VM handle.
3294 * @param cr3 The root of the hierarchy.
3295 * @param cr4 The CR4, PSE is currently used.
3296 * @param cMaxDepth How deep into the hierarchy the dumper should go.
3297 * @param pHlp Pointer to the output functions.
3298 */
3299int pgmR3DumpHierarchyHC32BitPD(PVM pVM, uint32_t cr3, uint32_t cr4, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3300{
3301 PX86PD pPD = (PX86PD)MMPagePhys2Page(pVM, cr3 & X86_CR3_PAGE_MASK);
3302 if (!pPD)
3303 {
3304 pHlp->pfnPrintf(pHlp, "Page directory at %#x was not found in the page pool!\n", cr3 & X86_CR3_PAGE_MASK);
3305 return VERR_INVALID_PARAMETER;
3306 }
3307
3308 int rc = VINF_SUCCESS;
3309 for (unsigned i = 0; i < ELEMENTS(pPD->a); i++)
3310 {
3311 X86PDE Pde = pPD->a[i];
3312 if (Pde.n.u1Present)
3313 {
3314 const uint32_t u32Address = i << X86_PD_SHIFT;
3315 if ((cr4 & X86_CR4_PSE) && Pde.b.u1Size)
3316 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
3317 "%08x 0 | P %c %c %c %c %c %s %s %s .. 4M %c%c%c %08x\n",
3318 u32Address,
3319 Pde.b.u1Write ? 'W' : 'R',
3320 Pde.b.u1User ? 'U' : 'S',
3321 Pde.b.u1Accessed ? 'A' : '-',
3322 Pde.b.u1Dirty ? 'D' : '-',
3323 Pde.b.u1Global ? 'G' : '-',
3324 Pde.b.u1WriteThru ? "WT" : "--",
3325 Pde.b.u1CacheDisable? "CD" : "--",
3326 Pde.b.u1PAT ? "AT" : "--",
3327 Pde.u & RT_BIT_64(9) ? '1' : '0',
3328 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
3329 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
3330 Pde.u & X86_PDE4M_PG_MASK);
3331 else
3332 {
3333 pHlp->pfnPrintf(pHlp, /*P R S A D G WT CD AT NX 4M a m d */
3334 "%08x 0 | P %c %c %c %c %c %s %s .. .. 4K %c%c%c %08x\n",
3335 u32Address,
3336 Pde.n.u1Write ? 'W' : 'R',
3337 Pde.n.u1User ? 'U' : 'S',
3338 Pde.n.u1Accessed ? 'A' : '-',
3339 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
3340 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
3341 Pde.n.u1WriteThru ? "WT" : "--",
3342 Pde.n.u1CacheDisable? "CD" : "--",
3343 Pde.u & RT_BIT_64(9) ? '1' : '0',
3344 Pde.u & PGM_PDFLAGS_MAPPING ? 'm' : '-',
3345 Pde.u & PGM_PDFLAGS_TRACK_DIRTY ? 'd' : '-',
3346 Pde.u & X86_PDE_PG_MASK);
3347 if (cMaxDepth >= 1)
3348 {
3349 /** @todo what about using the page pool for mapping PTs? */
3350 RTHCPHYS HCPhys = Pde.u & X86_PDE_PG_MASK;
3351 PX86PT pPT = NULL;
3352 if (!(Pde.u & PGM_PDFLAGS_MAPPING))
3353 pPT = (PX86PT)MMPagePhys2Page(pVM, HCPhys);
3354 else
3355 {
3356 for (PPGMMAPPING pMap = pVM->pgm.s.pMappingsR3; pMap; pMap = pMap->pNextR3)
3357 if (u32Address - pMap->GCPtr < pMap->cb)
3358 {
3359 int iPDE = (u32Address - pMap->GCPtr) >> X86_PD_SHIFT;
3360 if (pMap->aPTs[iPDE].HCPhysPT != HCPhys)
3361 pHlp->pfnPrintf(pHlp, "%08x error! Mapping error! PT %d has HCPhysPT=%VHp not %VHp is in the PD.\n",
3362 u32Address, iPDE, pMap->aPTs[iPDE].HCPhysPT, HCPhys);
3363 pPT = pMap->aPTs[iPDE].pPTR3;
3364 }
3365 }
3366 int rc2 = VERR_INVALID_PARAMETER;
3367 if (pPT)
3368 rc2 = pgmR3DumpHierarchyHC32BitPT(pVM, pPT, u32Address, pHlp);
3369 else
3370 pHlp->pfnPrintf(pHlp, "%08x error! Page table at %#x was not found in the page pool!\n", u32Address, HCPhys);
3371 if (rc2 < rc && VBOX_SUCCESS(rc))
3372 rc = rc2;
3373 }
3374 }
3375 }
3376 }
3377
3378 return rc;
3379}
3380
3381
3382/**
3383 * Dumps a 32-bit guest page table.
3384 *
3385 * @returns VBox status code (VINF_SUCCESS).
3386 * @param pVM The VM handle.
3387 * @param pPT Pointer to the page table.
3388 * @param u32Address The virtual address this table starts at.
3389 * @param PhysSearch Address to search for.
3390 */
3391int pgmR3DumpHierarchyGC32BitPT(PVM pVM, PX86PT pPT, uint32_t u32Address, RTGCPHYS PhysSearch)
3392{
3393 for (unsigned i = 0; i < ELEMENTS(pPT->a); i++)
3394 {
3395 X86PTE Pte = pPT->a[i];
3396 if (Pte.n.u1Present)
3397 {
3398 Log(( /*P R S A D G WT CD AT NX 4M a m d */
3399 "%08x 1 | P %c %c %c %c %c %s %s %s .. 4K %c%c%c %08x\n",
3400 u32Address + (i << X86_PT_SHIFT),
3401 Pte.n.u1Write ? 'W' : 'R',
3402 Pte.n.u1User ? 'U' : 'S',
3403 Pte.n.u1Accessed ? 'A' : '-',
3404 Pte.n.u1Dirty ? 'D' : '-',
3405 Pte.n.u1Global ? 'G' : '-',
3406 Pte.n.u1WriteThru ? "WT" : "--",
3407 Pte.n.u1CacheDisable? "CD" : "--",
3408 Pte.n.u1PAT ? "AT" : "--",
3409 Pte.u & PGM_PTFLAGS_TRACK_DIRTY ? 'd' : '-',
3410 Pte.u & RT_BIT(10) ? '1' : '0',
3411 Pte.u & PGM_PTFLAGS_CSAM_VALIDATED ? 'v' : '-',
3412 Pte.u & X86_PDE_PG_MASK));
3413
3414 if ((Pte.u & X86_PDE_PG_MASK) == PhysSearch)
3415 {
3416 uint64_t fPageShw = 0;
3417 RTHCPHYS pPhysHC = 0;
3418
3419 PGMShwGetPage(pVM, (RTGCPTR)(u32Address + (i << X86_PT_SHIFT)), &fPageShw, &pPhysHC);
3420 Log(("Found %VGp at %VGv -> flags=%llx\n", PhysSearch, (RTGCPTR)(u32Address + (i << X86_PT_SHIFT)), fPageShw));
3421 }
3422 }
3423 }
3424 return VINF_SUCCESS;
3425}
3426
3427
3428/**
3429 * Dumps a 32-bit guest page directory and page tables.
3430 *
3431 * @returns VBox status code (VINF_SUCCESS).
3432 * @param pVM The VM handle.
3433 * @param cr3 The root of the hierarchy.
3434 * @param cr4 The CR4, PSE is currently used.
3435 * @param PhysSearch Address to search for.
3436 */
3437PGMR3DECL(int) PGMR3DumpHierarchyGC(PVM pVM, uint32_t cr3, uint32_t cr4, RTGCPHYS PhysSearch)
3438{
3439 bool fLongMode = false;
3440 const unsigned cch = fLongMode ? 16 : 8; NOREF(cch);
3441 PX86PD pPD = 0;
3442
3443 int rc = PGM_GCPHYS_2_PTR(pVM, cr3 & X86_CR3_PAGE_MASK, &pPD);
3444 if (VBOX_FAILURE(rc) || !pPD)
3445 {
3446 Log(("Page directory at %#x was not found in the page pool!\n", cr3 & X86_CR3_PAGE_MASK));
3447 return VERR_INVALID_PARAMETER;
3448 }
3449
3450 Log(("cr3=%08x cr4=%08x%s\n"
3451 "%-*s P - Present\n"
3452 "%-*s | R/W - Read (0) / Write (1)\n"
3453 "%-*s | | U/S - User (1) / Supervisor (0)\n"
3454 "%-*s | | | A - Accessed\n"
3455 "%-*s | | | | D - Dirty\n"
3456 "%-*s | | | | | G - Global\n"
3457 "%-*s | | | | | | WT - Write thru\n"
3458 "%-*s | | | | | | | CD - Cache disable\n"
3459 "%-*s | | | | | | | | AT - Attribute table (PAT)\n"
3460 "%-*s | | | | | | | | | NX - No execute (K8)\n"
3461 "%-*s | | | | | | | | | | 4K/4M/2M - Page size.\n"
3462 "%-*s | | | | | | | | | | | AVL - a=allocated; m=mapping; d=track dirty;\n"
3463 "%-*s | | | | | | | | | | | | p=permanent; v=validated;\n"
3464 "%-*s Level | | | | | | | | | | | | Page\n"
3465 /* xxxx n **** P R S A D G WT CD AT NX 4M AVL xxxxxxxxxxxxx
3466 - W U - - - -- -- -- -- -- 010 */
3467 , cr3, cr4, fLongMode ? " Long Mode" : "",
3468 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "",
3469 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "Address"));
3470
3471 for (unsigned i = 0; i < ELEMENTS(pPD->a); i++)
3472 {
3473 X86PDE Pde = pPD->a[i];
3474 if (Pde.n.u1Present)
3475 {
3476 const uint32_t u32Address = i << X86_PD_SHIFT;
3477
3478 if ((cr4 & X86_CR4_PSE) && Pde.b.u1Size)
3479 Log(( /*P R S A D G WT CD AT NX 4M a m d */
3480 "%08x 0 | P %c %c %c %c %c %s %s %s .. 4M %c%c%c %08x\n",
3481 u32Address,
3482 Pde.b.u1Write ? 'W' : 'R',
3483 Pde.b.u1User ? 'U' : 'S',
3484 Pde.b.u1Accessed ? 'A' : '-',
3485 Pde.b.u1Dirty ? 'D' : '-',
3486 Pde.b.u1Global ? 'G' : '-',
3487 Pde.b.u1WriteThru ? "WT" : "--",
3488 Pde.b.u1CacheDisable? "CD" : "--",
3489 Pde.b.u1PAT ? "AT" : "--",
3490 Pde.u & RT_BIT(9) ? '1' : '0',
3491 Pde.u & RT_BIT(10) ? '1' : '0',
3492 Pde.u & RT_BIT(11) ? '1' : '0',
3493 Pde.u & X86_PDE4M_PG_MASK));
3494 /** @todo PhysSearch */
3495 else
3496 {
3497 Log(( /*P R S A D G WT CD AT NX 4M a m d */
3498 "%08x 0 | P %c %c %c %c %c %s %s .. .. 4K %c%c%c %08x\n",
3499 u32Address,
3500 Pde.n.u1Write ? 'W' : 'R',
3501 Pde.n.u1User ? 'U' : 'S',
3502 Pde.n.u1Accessed ? 'A' : '-',
3503 Pde.n.u1Reserved0 ? '?' : '.', /* ignored */
3504 Pde.n.u1Reserved1 ? '?' : '.', /* ignored */
3505 Pde.n.u1WriteThru ? "WT" : "--",
3506 Pde.n.u1CacheDisable? "CD" : "--",
3507 Pde.u & RT_BIT(9) ? '1' : '0',
3508 Pde.u & RT_BIT(10) ? '1' : '0',
3509 Pde.u & RT_BIT(11) ? '1' : '0',
3510 Pde.u & X86_PDE_PG_MASK));
3511 ////if (cMaxDepth >= 1)
3512 {
3513 /** @todo what about using the page pool for mapping PTs? */
3514 RTGCPHYS GCPhys = Pde.u & X86_PDE_PG_MASK;
3515 PX86PT pPT = NULL;
3516
3517 rc = PGM_GCPHYS_2_PTR(pVM, GCPhys, &pPT);
3518
3519 int rc2 = VERR_INVALID_PARAMETER;
3520 if (pPT)
3521 rc2 = pgmR3DumpHierarchyGC32BitPT(pVM, pPT, u32Address, PhysSearch);
3522 else
3523 Log(("%08x error! Page table at %#x was not found in the page pool!\n", u32Address, GCPhys));
3524 if (rc2 < rc && VBOX_SUCCESS(rc))
3525 rc = rc2;
3526 }
3527 }
3528 }
3529 }
3530
3531 return rc;
3532}
3533
3534
3535/**
3536 * Dumps a page table hierarchy, using only physical addresses and cr4/lm flags.
3537 *
3538 * @returns VBox status code (VINF_SUCCESS).
3539 * @param pVM The VM handle.
3540 * @param cr3 The root of the hierarchy.
3541 * @param cr4 The cr4; only the PAE and PSE bits are currently used.
3542 * @param fLongMode Set if long mode, clear if not long mode.
3543 * @param cMaxDepth Number of levels to dump.
3544 * @param pHlp Pointer to the output functions.
3545 */
3546PGMR3DECL(int) PGMR3DumpHierarchyHC(PVM pVM, uint32_t cr3, uint32_t cr4, bool fLongMode, unsigned cMaxDepth, PCDBGFINFOHLP pHlp)
3547{
3548 if (!pHlp)
3549 pHlp = DBGFR3InfoLogHlp();
3550 if (!cMaxDepth)
3551 return VINF_SUCCESS;
3552 const unsigned cch = fLongMode ? 16 : 8;
3553 pHlp->pfnPrintf(pHlp,
3554 "cr3=%08x cr4=%08x%s\n"
3555 "%-*s P - Present\n"
3556 "%-*s | R/W - Read (0) / Write (1)\n"
3557 "%-*s | | U/S - User (1) / Supervisor (0)\n"
3558 "%-*s | | | A - Accessed\n"
3559 "%-*s | | | | D - Dirty\n"
3560 "%-*s | | | | | G - Global\n"
3561 "%-*s | | | | | | WT - Write thru\n"
3562 "%-*s | | | | | | | CD - Cache disable\n"
3563 "%-*s | | | | | | | | AT - Attribute table (PAT)\n"
3564 "%-*s | | | | | | | | | NX - No execute (K8)\n"
3565 "%-*s | | | | | | | | | | 4K/4M/2M - Page size.\n"
3566 "%-*s | | | | | | | | | | | AVL - a=allocated; m=mapping; d=track dirty;\n"
3567 "%-*s | | | | | | | | | | | | p=permanent; v=validated;\n"
3568 "%-*s Level | | | | | | | | | | | | Page\n"
3569 /* xxxx n **** P R S A D G WT CD AT NX 4M AVL xxxxxxxxxxxxx
3570 - W U - - - -- -- -- -- -- 010 */
3571 , cr3, cr4, fLongMode ? " Long Mode" : "",
3572 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "",
3573 cch, "", cch, "", cch, "", cch, "", cch, "", cch, "", cch, "Address");
3574 if (cr4 & X86_CR4_PAE)
3575 {
3576 if (fLongMode)
3577 return pgmR3DumpHierarchyHcPaePML4(pVM, cr3 & X86_CR3_PAGE_MASK, cr4, cMaxDepth, pHlp);
3578 return pgmR3DumpHierarchyHCPaePDPT(pVM, cr3 & X86_CR3_PAE_PAGE_MASK, 0, cr4, false, cMaxDepth, pHlp);
3579 }
3580 return pgmR3DumpHierarchyHC32BitPD(pVM, cr3 & X86_CR3_PAGE_MASK, cr4, cMaxDepth, pHlp);
3581}
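/*
 * Editor's usage sketch (assumptions flagged): dumping the current shadow
 * hierarchy three levels deep to the default log helper. CPUMGetHyperCR3 and
 * CPUMGetGuestCR4 are assumed accessors for the shadow CR3 and guest CR4;
 * substitute whatever the caller actually has at hand.
 */
#if 0
    PGMR3DumpHierarchyHC(pVM,
                         CPUMGetHyperCR3(pVM),  /* root of the shadow hierarchy */
                         CPUMGetGuestCR4(pVM),  /* PAE/PSE select the layout */
                         false /*fLongMode*/,
                         3 /*cMaxDepth*/,
                         NULL /*pHlp: default log helper*/);
#endif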
3582
3583
3584
3585#ifdef VBOX_WITH_DEBUGGER
3586/**
3587 * The '.pgmram' command.
3588 *
3589 * @returns VBox status.
3590 * @param pCmd Pointer to the command descriptor (as registered).
3591 * @param pCmdHlp Pointer to command helper functions.
3592 * @param pVM Pointer to the current VM (if any).
3593 * @param paArgs Pointer to (readonly) array of arguments.
3594 * @param cArgs Number of arguments in the array.
3595 */
3596static DECLCALLBACK(int) pgmR3CmdRam(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
3597{
3598 /*
3599 * Validate input.
3600 */
3601 if (!pVM)
3602 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires VM to be selected.\n");
3603 if (!pVM->pgm.s.pRamRangesGC)
3604 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Sorry, no RAM is registered.\n");
3605
3606 /*
3607 * Dump the ranges.
3608 */
3609 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "From - To (incl) pvHC\n");
3610 PPGMRAMRANGE pRam;
3611 for (pRam = pVM->pgm.s.pRamRangesR3; pRam; pRam = pRam->pNextR3)
3612 {
3613 rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL,
3614 "%VGp - %VGp %p\n",
3615 pRam->GCPhys, pRam->GCPhysLast, pRam->pvHC);
3616 if (VBOX_FAILURE(rc))
3617 return rc;
3618 }
3619
3620 return VINF_SUCCESS;
3621}
3622
3623
3624/**
3625 * The '.pgmmap' command.
3626 *
3627 * @returns VBox status.
3628 * @param pCmd Pointer to the command descriptor (as registered).
3629 * @param pCmdHlp Pointer to command helper functions.
3630 * @param pVM Pointer to the current VM (if any).
3631 * @param paArgs Pointer to (readonly) array of arguments.
3632 * @param cArgs Number of arguments in the array.
3633 */
3634static DECLCALLBACK(int) pgmR3CmdMap(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
3635{
3636 /*
3637 * Validate input.
3638 */
3639 if (!pVM)
3640 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires VM to be selected.\n");
3641 if (!pVM->pgm.s.pMappingsR3)
3642 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Sorry, no mappings are registered.\n");
3643
3644 /*
3645 * Print message about the fixedness of the mappings.
3646 */
3647 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, pVM->pgm.s.fMappingsFixed ? "The mappings are FIXED.\n" : "The mappings are FLOATING.\n");
3648 if (VBOX_FAILURE(rc))
3649 return rc;
3650
3651 /*
3652 * Dump the ranges.
3653 */
3654 PPGMMAPPING pCur;
3655 for (pCur = pVM->pgm.s.pMappingsR3; pCur; pCur = pCur->pNextR3)
3656 {
3657 rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL,
3658 "%08x - %08x %s\n",
3659 pCur->GCPtr, pCur->GCPtrLast, pCur->pszDesc);
3660 if (VBOX_FAILURE(rc))
3661 return rc;
3662 }
3663
3664 return VINF_SUCCESS;
3665}
3666
3667
3668/**
3669 * The '.pgmsync' command.
3670 *
3671 * @returns VBox status.
3672 * @param pCmd Pointer to the command descriptor (as registered).
3673 * @param pCmdHlp Pointer to command helper functions.
3674 * @param pVM Pointer to the current VM (if any).
3675 * @param paArgs Pointer to (readonly) array of arguments.
3676 * @param cArgs Number of arguments in the array.
3677 */
3678static DECLCALLBACK(int) pgmR3CmdSync(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
3679{
3680 /*
3681 * Validate input.
3682 */
3683 if (!pVM)
3684 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires VM to be selected.\n");
3685
3686 /*
3687 * Force page directory sync.
3688 */
3689 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);
3690
3691 int rc = pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Forcing page directory sync.\n");
3692 if (VBOX_FAILURE(rc))
3693 return rc;
3694
3695 return VINF_SUCCESS;
3696}
3697
3698
3699/**
3700 * The '.pgmsyncalways' command.
3701 *
3702 * @returns VBox status.
3703 * @param pCmd Pointer to the command descriptor (as registered).
3704 * @param pCmdHlp Pointer to command helper functions.
3705 * @param pVM Pointer to the current VM (if any).
3706 * @param paArgs Pointer to (readonly) array of arguments.
3707 * @param cArgs Number of arguments in the array.
3708 */
3709static DECLCALLBACK(int) pgmR3CmdSyncAlways(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PVM pVM, PCDBGCVAR paArgs, unsigned cArgs, PDBGCVAR pResult)
3710{
3711 /*
3712 * Validate input.
3713 */
3714 if (!pVM)
3715 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "error: The command requires VM to be selected.\n");
3716
3717 /*
3718 * Force page directory sync.
3719 */
3720 if (pVM->pgm.s.fSyncFlags & PGM_SYNC_ALWAYS)
3721 {
3722 ASMAtomicAndU32(&pVM->pgm.s.fSyncFlags, ~PGM_SYNC_ALWAYS);
3723 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Disabled permanent forced page directory syncing.\n");
3724 }
3725 else
3726 {
3727 ASMAtomicOrU32(&pVM->pgm.s.fSyncFlags, PGM_SYNC_ALWAYS);
3728 VM_FF_SET(pVM, VM_FF_PGM_SYNC_CR3);
3729 return pCmdHlp->pfnPrintf(pCmdHlp, NULL, "Enabled permanent forced page directory syncing.\n");
3730 }
3731}
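/*
 * Editor's usage note (console syntax assumed, see the DBGC documentation):
 * from the VBox debugger console these commands are invoked by the names they
 * are registered under, e.g. ".pgmram" to list RAM ranges, ".pgmmap" for the
 * hypervisor mappings, ".pgmsync" to force a single CR3 resync, and
 * ".pgmsyncalways" to toggle permanently forced syncing.
 */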
3732
3733#endif
3734
3735/**
3736 * pvUser argument of the pgmR3CheckIntegrity*Node callbacks.
3737 */
3738typedef struct PGMCHECKINTARGS
3739{
3740 bool fLeftToRight; /**< true: left-to-right; false: right-to-left. */
3741 PPGMPHYSHANDLER pPrevPhys;
3742 PPGMVIRTHANDLER pPrevVirt;
3743 PPGMPHYS2VIRTHANDLER pPrevPhys2Virt;
3744 PVM pVM;
3745} PGMCHECKINTARGS, *PPGMCHECKINTARGS;
3746
3747/**
3748 * Validate a node in the physical handler tree.
3749 *
3750 * @returns 0 if ok, otherwise 1.
3751 * @param pNode The handler node.
3752 * @param pvUser Pointer to a PGMCHECKINTARGS structure.
3753 */
3754static DECLCALLBACK(int) pgmR3CheckIntegrityPhysHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
3755{
3756 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
3757 PPGMPHYSHANDLER pCur = (PPGMPHYSHANDLER)pNode;
3758 AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
3759 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %VGp-%VGp %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
3760 AssertReleaseMsg( !pArgs->pPrevPhys
3761 || (pArgs->fLeftToRight ? pArgs->pPrevPhys->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys->Core.KeyLast > pCur->Core.Key),
3762 ("pPrevPhys=%p %VGp-%VGp %s\n"
3763 " pCur=%p %VGp-%VGp %s\n",
3764 pArgs->pPrevPhys, pArgs->pPrevPhys->Core.Key, pArgs->pPrevPhys->Core.KeyLast, pArgs->pPrevPhys->pszDesc,
3765 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
3766 pArgs->pPrevPhys = pCur;
3767 return 0;
3768}
3769
3770
3771/**
3772 * Validate a node in the virtual handler tree.
3773 *
3774 * @returns 0 if ok, otherwise 1.
3775 * @param pNode The handler node.
3776 * @param pvUser Pointer to a PGMCHECKINTARGS structure.
3777 */
3778static DECLCALLBACK(int) pgmR3CheckIntegrityVirtHandlerNode(PAVLROGCPTRNODECORE pNode, void *pvUser)
3779{
3780 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
3781 PPGMVIRTHANDLER pCur = (PPGMVIRTHANDLER)pNode;
3782 AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
3783 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %VGv-%VGv %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
3784 AssertReleaseMsg( !pArgs->pPrevVirt
3785 || (pArgs->fLeftToRight ? pArgs->pPrevVirt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevVirt->Core.KeyLast > pCur->Core.Key),
3786 ("pPrevVirt=%p %VGv-%VGv %s\n"
3787 " pCur=%p %VGv-%VGv %s\n",
3788 pArgs->pPrevVirt, pArgs->pPrevVirt->Core.Key, pArgs->pPrevVirt->Core.KeyLast, pArgs->pPrevVirt->pszDesc,
3789 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
3790 for (unsigned iPage = 0; iPage < pCur->cPages; iPage++)
3791 {
3792 AssertReleaseMsg(pCur->aPhysToVirt[iPage].offVirtHandler == -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage]),
3793 ("pCur=%p %VGv-%VGv %s\n"
3794 "iPage=%d offVirtHandle=%#x expected %#x\n",
3795 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc,
3796 iPage, pCur->aPhysToVirt[iPage].offVirtHandler, -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage])));
3797 }
3798 pArgs->pPrevVirt = pCur;
3799 return 0;
3800}
3801
3802
3803/**
3804 * Validate a node in the physical-to-virtual handler tree.
3805 *
3806 * @returns 0 if ok, otherwise 1.
3807 * @param pNode The handler node.
3808 * @param pvUser Pointer to a PGMCHECKINTARGS structure.
3809 */
3810static DECLCALLBACK(int) pgmR3CheckIntegrityPhysToVirtHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
3811{
3812 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
3813 PPGMPHYS2VIRTHANDLER pCur = (PPGMPHYS2VIRTHANDLER)pNode;
3814 AssertReleaseMsgReturn(!((uintptr_t)pCur & 3), ("\n"), 1);
3815 AssertReleaseMsgReturn(!(pCur->offVirtHandler & 3), ("\n"), 1);
3816 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %VGp-%VGp\n", pCur, pCur->Core.Key, pCur->Core.KeyLast));
3817 AssertReleaseMsg( !pArgs->pPrevPhys2Virt
3818 || (pArgs->fLeftToRight ? pArgs->pPrevPhys2Virt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys2Virt->Core.KeyLast > pCur->Core.Key),
3819 ("pPrevPhys2Virt=%p %VGp-%VGp\n"
3820 " pCur=%p %VGp-%VGp\n",
3821 pArgs->pPrevPhys2Virt, pArgs->pPrevPhys2Virt->Core.Key, pArgs->pPrevPhys2Virt->Core.KeyLast,
3822 pCur, pCur->Core.Key, pCur->Core.KeyLast));
3829 AssertReleaseMsg((pCur->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD),
3830 ("pCur=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
3831 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
3832 if (pCur->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK)
3833 {
3834 PPGMPHYS2VIRTHANDLER pCur2 = pCur;
3835 for (;;)
3836 {
3837 pCur2 = (PPGMPHYS2VIRTHANDLER)((intptr_t)pCur + (pCur->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK));
3838 AssertReleaseMsg(pCur2 != pCur,
3839 (" pCur=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
3840 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
3841 AssertReleaseMsg((pCur2->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == PGMPHYS2VIRTHANDLER_IN_TREE,
3842 (" pCur=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
3843 "pCur2=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
3844 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
3845 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
3846 AssertReleaseMsg((pCur2->Core.Key ^ pCur->Core.Key) < PAGE_SIZE,
3847 (" pCur=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
3848 "pCur2=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
3849 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
3850 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
3851 AssertReleaseMsg((pCur2->Core.KeyLast ^ pCur->Core.KeyLast) < PAGE_SIZE,
3852 (" pCur=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
3853 "pCur2=%p:{.Core.Key=%VGp, .Core.KeyLast=%VGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
3854 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
3855 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
3856 if (!(pCur2->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK))
3857 break;
3858 }
3859 }
3860
3861 pArgs->pPrevPhys2Virt = pCur;
3862 return 0;
3863}
3864
3865
3866/**
3867 * Perform an integrity check on the PGM component.
3868 *
3869 * @returns VINF_SUCCESS if everything is fine.
3870 * @returns VBox error status after asserting on integrity breach.
3871 * @param pVM The VM handle.
3872 */
3873PDMR3DECL(int) PGMR3CheckIntegrity(PVM pVM)
3874{
3875 AssertReleaseReturn(pVM->pgm.s.offVM, VERR_INTERNAL_ERROR);
3876
3877 /*
3878 * Check the trees.
3879 */
3880 int cErrors = 0;
3881 PGMCHECKINTARGS Args = { true, NULL, NULL, NULL, pVM };
3882 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesHC->PhysHandlers, true, pgmR3CheckIntegrityPhysHandlerNode, &Args);
3883 Args.fLeftToRight = false;
3884 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesHC->PhysHandlers, false, pgmR3CheckIntegrityPhysHandlerNode, &Args);
3885 Args.fLeftToRight = true;
3886 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesHC->VirtHandlers, true, pgmR3CheckIntegrityVirtHandlerNode, &Args);
3887 Args.fLeftToRight = false;
3888 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesHC->VirtHandlers, false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
3889 Args.fLeftToRight = true;
3890 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesHC->HyperVirtHandlers, true, pgmR3CheckIntegrityVirtHandlerNode, &Args);
3891 Args.fLeftToRight = false;
3892 cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesHC->HyperVirtHandlers, false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
3893 Args.fLeftToRight = true;
3894 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesHC->PhysToVirtHandlers, true, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
3895 Args.fLeftToRight = false;
3896 cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesHC->PhysToVirtHandlers, false, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
3897
3898 return !cErrors ? VINF_SUCCESS : VERR_INTERNAL_ERROR;
3899}
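/*
 * Editor's usage sketch (hypothetical call site): strict builds could
 * re-verify the handler trees after operations that reshuffle them, for
 * instance right after a paging mode change.
 */
#if 0
    int rcIntegrity = PGMR3CheckIntegrity(pVM);
    AssertReleaseRC(rcIntegrity);
#endif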
3900
3901
3902/**
3903 * Tells PGM whether all mappings should be put into the shadow page tables (necessary for e.g. VMX).
3904 *
3905 * @returns VBox status code.
3906 * @param pVM VM handle.
3907 * @param fEnable Enable or disable shadow mappings
3908 */
3909PGMR3DECL(int) PGMR3ChangeShwPDMappings(PVM pVM, bool fEnable)
3910{
3911 pVM->pgm.s.fDisableMappings = !fEnable;
3912
3913 uint32_t cb;
3914 int rc = PGMR3MappingsSize(pVM, &cb);
3915 AssertRCReturn(rc, rc);
3916
3917 /* Pretend the mappings are now fixed, forcing a refresh of the reserved PDEs. */
3918 rc = PGMR3MappingsFix(pVM, MM_HYPER_AREA_ADDRESS, cb);
3919 AssertRCReturn(rc, rc);
3920
3921 return VINF_SUCCESS;
3922}