VirtualBox

source: vbox/trunk/src/VBox/VMM/VMMR3/PGM.cpp@61574

Last change on this file since 61574 was 61570, checked in by vboxsync, 9 years ago

DBGFR3Info*: Added DBGFINFO_FLAGS_ALL_EMTS flag for cpum, apic and others dealing with per-vcpu state. Also got rid of some redundant VMR3ReqCallWaitU calls when we're already on the right EMT, replacing them with priority calls when we actually need them. Didn't make sense to only do priority calls from DBGFR3InfoEx.

/* $Id: PGM.cpp 61570 2016-06-08 10:55:10Z vboxsync $ */
/** @file
 * PGM - Page Manager and Monitor. (Mixing stuff here, not good?)
 */

/*
 * Copyright (C) 2006-2015 Oracle Corporation
 *
 * This file is part of VirtualBox Open Source Edition (OSE), as
 * available from http://www.virtualbox.org. This file is free software;
 * you can redistribute it and/or modify it under the terms of the GNU
 * General Public License (GPL) as published by the Free Software
 * Foundation, in version 2 as it comes in the "COPYING" file of the
 * VirtualBox OSE distribution. VirtualBox OSE is distributed in the
 * hope that it will be useful, but WITHOUT ANY WARRANTY of any kind.
 */


/** @page pg_pgm PGM - The Page Manager and Monitor
 *
 * @sa @ref grp_pgm
 * @subpage pg_pgm_pool
 * @subpage pg_pgm_phys
 *
 *
 * @section sec_pgm_modes Paging Modes
 *
 * There are three memory contexts: Host Context (HC), Guest Context (GC)
 * and intermediate context. When talking about paging, HC can also be
 * referred to as "host paging" and GC as "shadow paging".
 *
 * We define three basic paging modes: 32-bit, PAE and AMD64. The host paging
 * mode is defined by the host operating system. The mode used for shadow
 * paging depends on the host paging mode and the mode the guest is currently
 * in. The following relation between the two is defined:
 *
 * @verbatim
    Host >  32-bit |  PAE   | AMD64  |
 Guest    |        |        |        |
 ==v=================================
 32-bit     32-bit    PAE      PAE
 -------|--------|--------|--------|
 PAE        PAE      PAE      PAE
 -------|--------|--------|--------|
 AMD64      AMD64    AMD64    AMD64
 -------|--------|--------|--------| @endverbatim
 *
 * All configurations except those on the diagonal (upper left) are expected
 * to require special effort from the switcher (i.e. be a bit slower).
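 *
 * The upper-left cell of that table is the only one where the shadow mode
 * equals the guest mode; everything else promotes to PAE or AMD64. As a
 * minimal sketch of the relation (illustrative only; the real logic lives in
 * pgmR3CalcShadowMode and also deals with nested paging and switcher
 * selection):
 *
 * @verbatim
 static PGMMODE pgmSketchShadowMode(PGMMODE enmGuestMode, bool fHost32Bit)
 {
     switch (enmGuestMode)
     {
         case PGMMODE_32_BIT: return fHost32Bit ? PGMMODE_32_BIT : PGMMODE_PAE;
         case PGMMODE_PAE:    return PGMMODE_PAE;
         case PGMMODE_AMD64:  return PGMMODE_AMD64;
         default:             return PGMMODE_INVALID;
     }
 } @endverbatim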
 *
 *
 *
 *
 * @section sec_pgm_shw The Shadow Memory Context
 *
 *
 * [..]
 *
 * Because guest context mappings require PDPT and PML4 entries to allow
 * writing on AMD64, the two upper levels will have fixed flags whatever the
 * guest is thinking of using there. So, when shadowing the PD level we will
 * calculate the effective flags of the PD and all the higher levels. In
 * legacy PAE mode this only applies to the PWT and PCD bits (the rest are
 * ignored/reserved/MBZ). We will ignore those bits for the present.
 *
 *
 *
 * @section sec_pgm_int The Intermediate Memory Context
 *
 * The world switch goes through an intermediate memory context whose purpose
 * it is to provide different mappings of the switcher code. All guest
 * mappings are also present in this context.
 *
 * The switcher code is mapped at the same location as on the host, at an
 * identity mapped location (physical address equals virtual address), and at
 * the hypervisor location. The identity mapped location is for world
 * switches that involve disabling paging.
 *
 * PGM maintains page tables for the 32-bit, PAE and AMD64 paging modes. This
 * simplifies switching guest CPU modes and helps consistency at the cost of
 * more code to do the work. All memory used for those page tables is located
 * below 4GB (this includes page tables for guest context mappings).
 *
 * Note! The intermediate memory context is also used for 64-bit guest
 * execution on 32-bit hosts. Because we need to load 64-bit registers
 * prior to switching to guest context, we need to be in 64-bit mode
 * first. So, HM has some 64-bit worker routines in VMMRC.rc that get
 * invoked via the special world switcher code in LegacyToAMD64.asm.
 *
 *
 * @subsection subsec_pgm_int_gc Guest Context Mappings
 *
 * During assignment and relocation of a guest context mapping the intermediate
 * memory context is used to verify the new location.
 *
 * Guest context mappings are currently restricted to below 4GB, for reasons
 * of simplicity. This may change when we implement AMD64 support.
 *
 *
 *
 *
 * @section sec_pgm_misc Misc
 *
 *
 * @subsection sec_pgm_misc_A20 The A20 Gate
 *
 * PGM implements the A20 gate masking when translating a virtual guest address
 * into a physical address for CPU access, i.e. PGMGstGetPage (and friends) and
 * the code reading the guest page table entries during shadowing. The masking
 * is done consistently for all CPU modes, paged ones included. Large pages are
 * also masked correctly. (On current CPUs, experiments indicate that AMD does
 * not apply A20M in paged modes and Intel only does it for the 2nd MB of
 * memory.)
 *
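 * The masking itself is a single AND with a per-VCPU mask (the state lives in
 * PGMCPU::GCPhysA20Mask, initialized in PGMR3Init below). A minimal sketch of
 * the operation, with pgmSketchApplyA20 being a hypothetical helper:
 *
 * @verbatim
 static RTGCPHYS pgmSketchApplyA20(bool fA20Enabled, RTGCPHYS GCPhys)
 {
     // All ones when A20 is enabled; bit 20 cleared when it is disabled,
     // aliasing each odd megabyte onto the even one below it.
     RTGCPHYS const GCPhysA20Mask = ~((RTGCPHYS)!fA20Enabled << 20);
     return GCPhys & GCPhysA20Mask;
 } @endverbatim
 *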
 * The A20 gate implementation is per CPU core. It can be configured on a per
 * core basis via the keyboard device and the PC architecture device. This is
 * probably not exactly how real CPUs do it, but SMP and A20 isn't a place
 * where guest OSes try pushing things anyway, so who cares. (On current real
 * systems the A20M signal is probably only sent to the boot CPU and it
 * affects all threads and probably all cores in that package.)
 *
 * The keyboard device and the PC architecture device don't OR their A20
 * config bits together; rather they are currently implemented such that they
 * mirror the CPU state. So, flipping the bit in either of them will change
 * the A20 state. (On real hardware the bits of the two devices should
 * probably be ORed together to indicate enabled, i.e. both need to be
 * cleared to disable A20 masking.)
 *
 * The A20 state will change immediately, Transmeta fashion. There are no
 * delays due to buses, wiring or other physical stuff. (On real hardware
 * there are normally delays; the delays differ between the two devices and
 * probably also between chipsets and CPU generations. Note that it's said
 * that Transmeta CPUs do the change immediately like us; they apparently
 * intercept/handle the port accesses in microcode. Neat.)
 *
 * @sa http://en.wikipedia.org/wiki/A20_line#The_80286_and_the_high_memory_area
 *
 *
 * @subsection subsec_pgm_misc_diff Differences Between Legacy PAE and Long Mode PAE
 *
 * The differences between legacy PAE and long mode PAE are:
 * -# PDPE bits 1, 2, 5 and 6 are defined differently. In legacy mode they
 *    are all marked down as must-be-zero, while in long mode 1, 2 and 5 have
 *    the usual meanings while 6 is ignored (AMD). This means that upon
 *    switching to legacy PAE mode we'll have to clear these bits and when
 *    going to long mode they must be set. This applies to both intermediate
 *    and shadow contexts, however we don't need to do it for the intermediate
 *    one since we're executing with CR0.WP at that time.
 * -# CR3 allows a 32-byte aligned address in legacy mode, while in long mode
 *    a page aligned one is required. (See the sketch below.)
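 *
 * A minimal sketch of the second difference, the CR3 alignment (the PDPE bit
 * cleanup for the first difference is more involved; pgmSketchMaskCr3 is a
 * hypothetical helper):
 *
 * @verbatim
 static uint64_t pgmSketchMaskCr3(uint64_t uCr3, bool fLongMode)
 {
     // Legacy PAE only requires 32-byte alignment (bits 4:0 ignored),
     // while long mode requires page alignment (bits 11:0 ignored).
     return uCr3 & (fLongMode ? ~(uint64_t)0xfff : ~(uint64_t)0x1f);
 } @endverbatim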
 *
 *
 * @section sec_pgm_handlers Access Handlers
 *
 * Placeholder.
 *
 *
 * @subsection sec_pgm_handlers_phys Physical Access Handlers
 *
 * Placeholder.
 *
 *
 * @subsection sec_pgm_handlers_virt Virtual Access Handlers
 *
 * We currently implement three types of virtual access handlers: ALL, WRITE
 * and HYPERVISOR (WRITE). See PGMVIRTHANDLERKIND for some more details.
 *
 * The HYPERVISOR access handlers are kept in a separate tree
 * (PGMTREES::HyperVirtHandlers) since they don't apply to physical pages and
 * only need to be consulted in a special \#PF case. The ALL and WRITE
 * handlers are in the PGMTREES::VirtHandlers tree; the rest of this section
 * is going to be about these handlers.
 *
 * We'll go through the life cycle of a handler and try to make sense of it
 * all; don't know how successful this is gonna be...
 *
 * 1. A handler is registered through the PGMR3HandlerVirtualRegister and
 *    PGMHandlerVirtualRegisterEx APIs. We check for conflicting virtual
 *    handlers and create a new node that is inserted into the AVL tree
 *    (range key). Then a full PGM resync is flagged (clear pool, sync cr3,
 *    update virtual bit of PGMPAGE).
 *
 * 2. The following PGMSyncCR3/SyncCR3 operation will first invoke HandlerVirtualUpdate.
 *
 * 2a. HandlerVirtualUpdate will look up all the pages covered by virtual
 *     handlers via the current guest CR3 and update the physical page ->
 *     virtual handler translation. Needless to say, this doesn't exactly
 *     scale very well. If any changes are detected, it will flag a virtual
 *     bit update just like we did on registration. PGMPHYS pages with
 *     changes will have their virtual handler state reset to NONE.
 *
 * 2b. The virtual bit update process will iterate all the pages covered by
 *     all the virtual handlers and update the PGMPAGE virtual handler state
 *     to the max of all virtual handlers on that page. (See the sketch
 *     following this list.)
 *
 * 2c. Back in SyncCR3 we will now flush the entire shadow page cache to make
 *     sure we don't miss any alias mappings of the monitored pages.
 *
 * 2d. SyncCR3 will then proceed with syncing the CR3 table.
 *
 * 3. \#PF(np,read) on a page in the range. This will cause it to be synced
 *    read-only and resumed if it's a WRITE handler. If it's an ALL handler
 *    we will call the handlers like in the next step. If the physical
 *    mapping has changed we will - some time in the future - perform a
 *    handler callback (optional) and update the physical -> virtual handler
 *    cache.
 *
 * 4. \#PF(,write) on a page in the range. This will cause the handler to
 *    be invoked.
 *
 * 5. The guest invalidates the page and changes the physical backing or
 *    unmaps it. This should cause the invalidation callback to be invoked
 *    (it might not yet be 100% perfect). Exactly what happens next... is
 *    this where we mess up and end up out of sync for a while?
 *
 * 6. The handler is deregistered by the client via PGMHandlerVirtualDeregister.
 *    We will then set all PGMPAGEs in the physical -> virtual handler cache
 *    for this handler to NONE and trigger a full PGM resync (basically the
 *    same as in step 1). Which means 2 is executed again.
 *
 *
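 * The "max of all virtual handlers" rule in step 2b is a simple fold. A
 * minimal sketch (the state values and helper are hypothetical; the real
 * code walks the AVL trees and writes the PGMPAGE bits):
 *
 * @verbatim
 typedef enum { HSTATE_NONE = 0, HSTATE_WRITE = 1, HSTATE_ALL = 2 } HSTATE;

 static HSTATE pgmSketchMaxHandlerState(HSTATE const *paStates, size_t cStates)
 {
     HSTATE enmMax = HSTATE_NONE;
     for (size_t i = 0; i < cStates; i++)
         if (paStates[i] > enmMax) // the strictest handler wins
             enmMax = paStates[i];
     return enmMax;
 } @endverbatim
 *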
 * @subsubsection sub_sec_pgm_handler_virt_todo TODOs
 *
 * There are a bunch of things that need to be done to make the virtual
 * handlers work 100% correctly and work more efficiently.
 *
 * The first bit hasn't been implemented yet because it's going to slow the
 * whole mess down even more, and besides it seems to be working reliably for
 * our current uses. OTOH, some of the optimizations might end up more or less
 * implementing the missing bits, so we'll see.
 *
 * On the optimization side, the first thing to do is to try to avoid
 * unnecessary cache flushing. Then try to team up with the shadowing code to
 * track changes in mappings by means of access to them (shadow in), updates
 * to shadow pages, invlpg, and shadow PT discarding (perhaps).
 *
 * Some ideas that have popped up for optimizing current and new features:
 * - A bitmap indicating where there are virtual handlers installed.
 *   (One bit per 4KB page: 2^20 pages cover the 32-bit address space 1:1;
 *   see the sketch below.)
 * - Further optimize this by min/max (needs min/max avl getters).
 * - Shadow page table entry bit (if any left)?
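 *
 * A minimal sketch of the bitmap idea (purely illustrative, not existing PGM
 * code; one bit per 4KB page of the 32-bit address space = a 128KB table):
 *
 * @verbatim
 static uint32_t g_bmVirtHandlers[(1U << 20) / 32]; // 2^20 bits = 128KB

 static void pgmSketchBitmapSet(uint32_t iPage)
 {
     g_bmVirtHandlers[iPage / 32] |= UINT32_C(1) << (iPage % 32);
 }

 static bool pgmSketchBitmapTest(uint32_t iPage)
 {
     return (g_bmVirtHandlers[iPage / 32] >> (iPage % 32)) & 1;
 } @endverbatim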
 *
 */


/** @page pg_pgm_phys PGM Physical Guest Memory Management
 *
 *
 * Objectives:
 * - Guest RAM over-commitment using memory ballooning,
 *   zero pages and general page sharing.
 * - Moving or mirroring a VM onto a different physical machine.
 *
 *
 * @section sec_pgmPhys_Definitions Definitions
 *
 * Allocation chunk - An RTR0MemObjAllocPhysNC object and the tracking
 * machinery associated with it.
 *
 *
 *
 *
 * @section sec_pgmPhys_AllocPage Allocating a page.
 *
 * Initially we map *all* guest memory to the (per VM) zero page, which
 * means that none of the read functions will cause pages to be allocated.
 *
 * Exception: the accessed bit in page tables that have been shared. This must
 * be handled, but we must also make sure PGMGst*Modify doesn't make
 * unnecessary modifications.
 *
 * Allocation points:
 * - PGMPhysSimpleWriteGCPhys and PGMPhysWrite.
 * - Replacing a zero page mapping at \#PF.
 * - Replacing a shared page mapping at \#PF.
 * - ROM registration (currently MMR3RomRegister).
 * - VM restore (pgmR3Load).
 *
 * For the first three it would make sense to keep a few pages handy
 * until we've reached the max memory commitment for the VM.
 *
 * For the ROM registration, we know exactly how many pages we need
 * and will request these from ring-0. For restore, we will save
 * the number of non-zero pages in the saved state and allocate
 * them up front. This would allow the ring-0 component to refuse
 * the request if there isn't sufficient memory available for VM use.
 *
 * Btw. for both ROM and restore allocations we won't be requiring
 * zeroed pages as they are going to be filled instantly.
 *
 * @section sec_pgmPhys_FreePage Freeing a page
 *
 * There are a few points where a page can be freed:
 * - After being replaced by the zero page.
 * - After being replaced by a shared page.
 * - After being ballooned by the guest additions.
 * - At reset.
 * - At restore.
 *
 * When freeing one or more pages they will be returned to the ring-0
 * component and replaced by the zero page.
 *
 * The reasoning for clearing out all the pages on reset is that it will
 * return us to the exact same state as on power on, and may thereby help
 * us reduce the memory load on the system. Further it might have a
 * (temporary) positive influence on memory fragmentation (@see subsec_pgmPhys_Fragmentation).
 *
 * On restore, as mentioned under the allocation topic, pages should be
 * freed / allocated depending on how many are actually required by the
 * new VM state. The simplest approach is to do like on reset, and free
 * all non-ROM pages and then allocate what we need.
 *
 * A measure to prevent some fragmentation would be to let each allocation
 * chunk have some affinity towards the VM having allocated the most pages
 * from it. Also, try to make sure to allocate from allocation chunks that
 * are almost full. Admittedly, both these measures might work counter to
 * our intentions and it's probably not worth putting a lot of effort,
 * cpu time or memory into this.
 *
 *
 * @section sec_pgmPhys_SharePage Sharing a page
 *
 * The basic idea is that there will be an idle priority kernel
 * thread walking the non-shared VM pages, hashing them and looking for
 * pages with the same checksum. If such pages are found, it will compare
 * them byte-by-byte to see if they actually are identical. If found to be
 * identical it will allocate a shared page, copy the content, check that
 * the page didn't change while doing this, and finally request both the
 * VMs to use the shared page instead. If the page is all zeros (special
 * checksum and byte-by-byte check) it will request the VM that owns it
 * to replace it with the zero page.
 *
 * To make this efficient, we will have to make sure not to try to share a
 * page that will change its contents soon. This part requires the most work.
 * A simple idea would be to request the VM to write monitor the page for
 * a while to make sure it isn't modified any time soon. Also, it may
 * make sense to skip pages that are being write monitored since this
 * information is readily available to the thread if it works on the
 * per-VM guest memory structures (presently called PGMRAMRANGE).
 *
 *
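 * A minimal sketch of that scan for a single page (all the sketch* helpers
 * are hypothetical; only the control flow comes from the description above):
 *
 * @verbatim
 static void pgmSketchShareScanPage(PVM pVM, RTGCPHYS GCPhys)
 {
     if (sketchIsAllZeros(pVM, GCPhys))
         sketchRequestUseZeroPage(pVM, GCPhys);      // the all-zero special case
     else
     {
         uint32_t const uChecksum = sketchHashPage(pVM, GCPhys);
         RTGCPHYS GCPhysOther;
         if (   sketchFindPageWithChecksum(uChecksum, &GCPhysOther)
             && sketchComparePagesByteByByte(pVM, GCPhys, GCPhysOther))
             sketchRequestUseSharedPage(pVM, GCPhys, GCPhysOther); // re-checked on apply
     }
 } @endverbatim
 *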
 * @section sec_pgmPhys_Fragmentation Fragmentation Concerns and Counter Measures
 *
 * The pages are organized in allocation chunks in ring-0; this is a necessity
 * if we wish to have an OS agnostic approach to this whole thing. (On Linux we
 * could easily work on a page-by-page basis if we liked. Whether this is possible
 * or efficient on NT I don't quite know.) Fragmentation within these chunks may
 * become a problem as part of the idea here is that we wish to return memory to
 * the host system.
 *
 * For instance, starting two VMs at the same time, they will both allocate the
 * guest memory on-demand and if permitted their page allocations will be
 * intermixed. Shut down one of the two VMs and it will be difficult to return
 * any memory to the host system because the page allocations for the two VMs
 * are mixed up in the same allocation chunks.
 *
 * To further complicate matters, when pages are freed because they have been
 * ballooned or become shared/zero, the whole idea is that the page is supposed
 * to be reused by another VM or returned to the host system. This will cause
 * allocation chunks to contain pages belonging to different VMs and prevent
 * returning memory to the host when one of those VMs shuts down.
 *
 * The only way to really deal with this problem is to move pages. This can
 * either be done at VM shutdown and/or by the idle priority worker thread
 * that will be responsible for finding sharable/zero pages. The mechanisms
 * involved for coercing a VM to move a page (or to do it for it) will be
 * the same as when telling it to share/zero a page.
 *
 *
 * @section sec_pgmPhys_Tracking Tracking Structures And Their Cost
 *
 * There's a difficult balance between keeping the per-page tracking structures
 * (global and guest page) easy to use and keeping them from eating too much
 * memory. We have limited virtual memory resources available when operating in
 * 32-bit kernel space (on 64-bit it's quite a different story). The tracking
 * structures will be designed such that we can deal with up to 32GB of memory
 * on a 32-bit system and essentially unlimited amounts on 64-bit ones.
 *
 *
 * @subsection subsec_pgmPhys_Tracking_Kernel Kernel Space
 *
 * @see pg_GMM
 *
 * @subsection subsec_pgmPhys_Tracking_PerVM Per-VM
 *
 * Fixed info is the physical address of the page (HCPhys) and the page id
 * (described above). Theoretically we'd need 48(-12) bits for the HCPhys part.
 * Today we're restricting ourselves to 40(-12) bits because this is the current
 * restriction of all AMD64 implementations (I think Barcelona will up this
 * to 48(-12) bits, not that it really matters) and I needed the bits for
 * tracking mappings of a page. 48-12 = 36 bits for the frame number; that
 * leaves 28 bits of a 64-bit field, which means a decent range for the
 * page id: 2^(28+12) = 1TB.
 *
 * In addition to these, we'll have to keep maintaining the page flags as we
 * currently do. Although it wouldn't harm to optimize these quite a bit, like
 * for instance the ROM shouldn't depend on having a write handler installed
 * in order for it to become read-only. A RO/RW bit should be considered so
 * that the page syncing code doesn't have to mess about checking multiple
 * flag combinations (ROM || RW handler || write monitored) in order to
 * figure out how to set up a shadow PTE. But this, of course, is second
 * priority at present. Currently this requires 12 bits, but could probably
 * be optimized to ~8.
 *
 * Then there are the 24 bits used to track which shadow page tables are
 * currently mapping a page for the purpose of speeding up physical
 * access handlers, and thereby the page pool cache. More bits for this
 * purpose wouldn't hurt IIRC.
 *
 * Then there is a new bit in which we need to record what kind of page
 * this is: shared, zero, normal or write-monitored-normal. This'll
 * require 2 bits. One bit might be needed for indicating whether a
 * write monitored page has been written to. And yet another one or
 * two for tracking migration status. 3-4 bits total then.
 *
 * Whatever is left can be used to record the sharability of a
 * page. The page checksum will not be stored in the per-VM table as
 * the idle thread will not be permitted to do modifications to it.
 * It will instead have to keep its own working set of potentially
 * shareable pages and their checksums and stuff.
 *
 * For the present we'll keep the current packing of the
 * PGMRAMRANGE::aHCPhys to keep the changes simple, except of course
 * we'll have to change it to a struct with a total of 128 bits at
 * our disposal.
 *
 * The initial layout will be like this:
 * @verbatim
   RTHCPHYS HCPhys;             The current stuff.
       63:40                    Current shadow PT tracking stuff.
       39:12                    The physical page frame number.
       11:0                     The current flags.
   uint32_t u28PageId : 28;     The page id.
   uint32_t u2State : 2;        The page state { zero, shared, normal, write monitored }.
   uint32_t fWrittenTo : 1;     Whether a write monitored page was written to.
   uint32_t u1Reserved : 1;     Reserved for later.
   uint32_t u32Reserved;        Reserved for later, mostly sharing stats.
   @endverbatim
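 *
 * Expressed as an actual C struct this is a 128-bit entry (a sketch only;
 * the real PGMPAGE layout differs, this merely restates the budget above):
 *
 * @verbatim
 typedef struct PGMPAGESKETCH
 {
     RTHCPHYS HCPhys;            // 63:40 shadow PT tracking, 39:12 frame number, 11:0 flags
     uint32_t u28PageId  : 28;   // The page id.
     uint32_t u2State    : 2;    // zero, shared, normal, write monitored.
     uint32_t fWrittenTo : 1;    // Whether a write monitored page was written to.
     uint32_t u1Reserved : 1;    // Reserved for later.
     uint32_t u32Reserved;       // Reserved for later, mostly sharing stats.
 } PGMPAGESKETCH;
 AssertCompileSize(PGMPAGESKETCH, 16); // 16 bytes = 128 bits
   @endverbatim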
 *
 * The final layout will be something like this:
 * @verbatim
   RTHCPHYS HCPhys;             The current stuff.
       63:48                    High page id (12+).
       47:12                    The physical page frame number.
       11:0                     Low page id.
   uint32_t fReadOnly : 1;      Whether it's a read-only page (rom or monitored in some way).
   uint32_t u3Type : 3;         The page type {RESERVED, MMIO, MMIO2, ROM, shadowed ROM, RAM}.
   uint32_t u2PhysMon : 2;      Physical access handler type {none, read, write, all}.
   uint32_t u2VirtMon : 2;      Virtual access handler type {none, read, write, all}.
   uint32_t u2State : 2;        The page state { zero, shared, normal, write monitored }.
   uint32_t fWrittenTo : 1;     Whether a write monitored page was written to.
   uint32_t u20Reserved : 20;   Reserved for later, mostly sharing stats.
   uint32_t u32Tracking;        The shadow PT tracking stuff, roughly.
   @endverbatim
 *
 * Cost wise, this means we'll double the cost for guest memory. There isn't
 * any way around that I'm afraid. It means that the cost of dealing out 32GB
 * of memory to one or more VMs is: (32GB >> PAGE_SHIFT) * 16 bytes, or 128MB.
 * Or another example: the VM heap cost when assigning 1GB to a VM will be 4MB.
 *
 * A couple of cost examples for the total cost per-VM + kernel:
 * 32-bit Windows and 32-bit linux:
 *     1GB guest ram, 256K pages:    4MB +  2MB(+) =   6MB
 *     4GB guest ram, 1M pages:     16MB +  8MB(+) =  24MB
 *    32GB guest ram, 8M pages:    128MB + 64MB(+) = 192MB
 * 64-bit Windows and 64-bit linux:
 *     1GB guest ram, 256K pages:    4MB +  3MB(+) =   7MB
 *     4GB guest ram, 1M pages:     16MB + 12MB(+) =  28MB
 *    32GB guest ram, 8M pages:    128MB + 96MB(+) = 224MB
 *
 * UPDATE - 2007-09-27:
 * Will need a ballooned flag/state too because we cannot
 * trust the guest 100% and reporting the same page as ballooned more
 * than once will put the GMM off balance.
 *
 *
 * @section sec_pgmPhys_Serializing Serializing Access
 *
 * Initially, we'll try a simple scheme:
 *
 * - The per-VM RAM tracking structures (PGMRAMRANGE) are only modified
 *   by the EMT thread of that VM while in the pgm critsect.
 * - Other threads in the VM process that need to make reliable use of
 *   the per-VM RAM tracking structures will enter the critsect.
 * - No process external thread or kernel thread will ever try to enter
 *   the pgm critical section, as that just won't work.
 * - The idle thread (and similar threads) doesn't need 100% reliable
 *   data when performing its tasks as the EMT thread will be the one to
 *   do the actual changes later anyway. So, as long as it only accesses
 *   the main ram range, it can do so by somehow preventing the VM from
 *   being destroyed while it works on it...
 *
 * - The over-commitment management, including the allocating/freeing of
 *   chunks, is serialized by a ring-0 mutex lock (a fast one since the
 *   more mundane mutex implementation is broken on Linux).
 * - A separate mutex is protecting the set of allocation chunks so
 *   that pages can be shared or/and freed up while some other VM is
 *   allocating more chunks. This mutex can be taken from under the other
 *   one, but not the other way around.
 *
 *
 * @section sec_pgmPhys_Request VM Request interface
 *
 * When in ring-0 it will become necessary to send requests to a VM so it can,
 * for instance, move a page while defragmenting during VM destroy. The idle
 * thread will make use of this interface to request VMs to set up shared
 * pages and to perform write monitoring of pages.
 *
 * I would propose an interface similar to the current VMReq interface, similar
 * in that it doesn't require locking and that the one sending the request may
 * wait for completion if it wishes to. This shouldn't be very difficult to
 * realize.
 *
 * The requests themselves are also pretty simple. They are basically:
 * -# Check that some precondition is still true.
 * -# Do the update.
 * -# Update all shadow page tables involved with the page.
 * (A sketch of this pattern follows below.)
 *
 * The 3rd step is identical to what we're already doing when updating a
 * physical handler, see pgmHandlerPhysicalSetRamFlagsAndFlushShadowPTs.
 *
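 * Sketched as code (check/update/flush; every sketch* helper is hypothetical,
 * only the control flow comes from the list above):
 *
 * @verbatim
 static int pgmSketchHandleRequest(PVM pVM, RTGCPHYS GCPhys)
 {
     if (!sketchPreconditionStillTrue(pVM, GCPhys))  // 1. re-check
         return VERR_GENERAL_FAILURE;                //    caller retries or gives up
     sketchDoUpdate(pVM, GCPhys);                    // 2. do the update
     sketchFlushShadowPTsForPage(pVM, GCPhys);       // 3. fix the shadow page tables
     return VINF_SUCCESS;
 } @endverbatim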
 *
 *
 * @section sec_pgmPhys_MappingCaches Mapping Caches
 *
 * In order to be able to map in and out memory and to be able to support
 * guests with more RAM than we've got virtual address space, we'll be
 * employing a mapping cache. Normally ring-0 and ring-3 can share the same
 * cache, however on 32-bit darwin the ring-0 code is running in a different
 * memory context and therefore needs a separate cache. In raw-mode context
 * we also need a separate cache. The 32-bit darwin mapping cache and the one
 * for raw-mode context share a lot of code, see PGMRZDYNMAP.
 *
 *
 * @subsection subsec_pgmPhys_MappingCaches_R3 Ring-3
 *
 * We've considered implementing the ring-3 mapping cache page based but found
 * that this was bothersome when one had to take into account TLBs+SMP and
 * portability (missing the necessary APIs on several platforms). There were
 * also some performance concerns with this approach which hadn't quite been
 * worked out.
 *
 * Instead, we'll be mapping allocation chunks into the VM process. This
 * simplifies matters quite a bit since we don't need to invent any new ring-0
 * stuff, only some minor RTR0MEMOBJ mapping stuff. The main concern compared
 * to the previous idea is that mapping or unmapping a 1MB chunk is more
 * costly than a single page, although how much more costly is uncertain.
 * We'll try to address this by using a very big cache, preferably bigger
 * than the actual VM RAM size if possible. The current VM RAM sizes should
 * give some idea for 32-bit boxes, while on 64-bit we can probably get away
 * with employing an unlimited cache.
 *
 * The cache has two parts, as already indicated: the ring-3 side and the
 * ring-0 side.
 *
 * The ring-0 part will be tied to the page allocator since it will operate
 * on the memory objects it contains. It will therefore require the first
 * ring-0 mutex discussed in @ref sec_pgmPhys_Serializing. We'll get some
 * double housekeeping wrt who has mapped what, I think, since both VMMR0.r0
 * and RTR0MemObj will keep track of mapping relations.
 *
 * The ring-3 part will be protected by the pgm critsect. For simplicity, we'll
 * require anyone that desires to do changes to the mapping cache to do that
 * from within this critsect. Alternatively, we could employ a separate critsect
 * for serializing changes to the mapping cache as this would reduce potential
 * contention with other threads accessing mappings unrelated to the changes
 * that are in process. We can see about this later; contention will show
 * up in the statistics anyway, so it'll be simple to tell.
 *
 * The organization of the ring-3 part will be very much like how the allocation
 * chunks are organized in ring-0, that is in an AVL tree by chunk id. To avoid
 * having to walk the tree all the time, we'll have a couple of lookaside entries
 * like we do for I/O ports and MMIO in IOM.
 *
 * The simplified flow of a PGMPhysRead/Write function:
 * -# Enter the PGM critsect.
 * -# Lookup GCPhys in the ram ranges and get the Page ID.
 * -# Calc the Allocation Chunk ID from the Page ID.
 * -# Check the lookaside entries and then the AVL tree for the Chunk ID.
 *    If not found in cache:
 *    -# Call ring-0 and request it to be mapped and supply
 *       a chunk to be unmapped if the cache is maxed out already.
 *    -# Insert the new mapping into the AVL tree (id + R3 address).
 * -# Update the relevant lookaside entry and return the mapping address.
 * -# Do the read/write according to monitoring flags and everything.
 * -# Leave the critsect.
 * (The same flow is sketched as code right below.)
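 *
 * A minimal sketch of that flow (the sketch* helpers are hypothetical;
 * pgmLock/pgmUnlock are the PGM critsect helpers, and the real read code
 * lives in PGMAllPhys.cpp):
 *
 * @verbatim
 static int pgmSketchPhysRead(PVM pVM, RTGCPHYS GCPhys, void *pvDst, size_t cb)
 {
     pgmLock(pVM);                                                   // 1. enter the critsect
     uint32_t const idPage      = sketchPageIdFromRamRanges(pVM, GCPhys); // 2.
     uint32_t const cChunkPages = GMM_CHUNK_SIZE >> PAGE_SHIFT;
     uint32_t const idChunk     = idPage / cChunkPages;              // 3.
     uint8_t *pbChunk;
     int rc = sketchLookupChunkMapping(pVM, idChunk, &pbChunk);      // 4. lookaside + AVL tree
     if (rc == VERR_NOT_FOUND)
         rc = sketchMapChunkFromRing0(pVM, idChunk, &pbChunk);       //    ring-0 map + insert
     if (RT_SUCCESS(rc))                                             // 5. the actual read
         memcpy(pvDst, pbChunk + ((idPage % cChunkPages) << PAGE_SHIFT) + (GCPhys & PAGE_OFFSET_MASK), cb);
     pgmUnlock(pVM);                                                 // 6. leave the critsect
     return rc;
 } @endverbatim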
 *
 *
 * @section sec_pgmPhys_Fallback Fallback
 *
 * Currently, the "second tier" hosts do not support the RTR0MemObjAllocPhysNC
 * API and thus require a fallback.
 *
 * So, when RTR0MemObjAllocPhysNC returns VERR_NOT_SUPPORTED the page allocator
 * will return to the ring-3 caller (and later ring-0) and ask it to seed
 * the page allocator with some fresh pages (VERR_GMM_SEED_ME). Ring-3 will
 * then perform an SUPR3PageAlloc(cbChunk >> PAGE_SHIFT) call and make a
 * "SeededAllocPages" call to ring-0.
 *
 * The first time ring-0 sees the VERR_NOT_SUPPORTED failure it will disable
 * all page sharing (zero page detection will continue). It will also force
 * all allocations to come from the VM which seeded the page. Both these
 * measures are taken to make sure that there will never be any need for
 * mapping anything into ring-3 - everything will be mapped already.
 *
 * Whether we'll continue to use the current MM locked memory management
 * for this I don't quite know (I'd prefer not to and just ditch that
 * altogether); we'll see what's simplest to do.
 *
 *
 *
 * @section sec_pgmPhys_Changes Changes
 *
 * Breakdown of the changes involved?
 */


/*********************************************************************************************************************************
*   Header Files                                                                                                                 *
*********************************************************************************************************************************/
#define LOG_GROUP LOG_GROUP_PGM
#include <VBox/vmm/dbgf.h>
#include <VBox/vmm/pgm.h>
#include <VBox/vmm/cpum.h>
#include <VBox/vmm/iom.h>
#include <VBox/sup.h>
#include <VBox/vmm/mm.h>
#include <VBox/vmm/em.h>
#include <VBox/vmm/stam.h>
#ifdef VBOX_WITH_REM
# include <VBox/vmm/rem.h>
#endif
#include <VBox/vmm/selm.h>
#include <VBox/vmm/ssm.h>
#include <VBox/vmm/hm.h>
#include "PGMInternal.h"
#include <VBox/vmm/vm.h>
#include <VBox/vmm/uvm.h>
#include "PGMInline.h"

#include <VBox/dbg.h>
#include <VBox/param.h>
#include <VBox/err.h>

#include <iprt/asm.h>
#include <iprt/asm-amd64-x86.h>
#include <iprt/assert.h>
#include <iprt/env.h>
#include <iprt/mem.h>
#include <iprt/file.h>
#include <iprt/string.h>
#include <iprt/thread.h>

/*********************************************************************************************************************************
*   Structures and Typedefs                                                                                                      *
*********************************************************************************************************************************/
/**
 * Argument package for pgmR3RelocatePhysHandler, pgmR3RelocateVirtHandler and
 * pgmR3RelocateHyperVirtHandler.
 */
typedef struct PGMRELOCHANDLERARGS
{
    RTGCINTPTR offDelta;
    PVM        pVM;
} PGMRELOCHANDLERARGS;
/** Pointer to a page access handler relocation argument package. */
typedef PGMRELOCHANDLERARGS const *PCPGMRELOCHANDLERARGS;


/*********************************************************************************************************************************
*   Internal Functions                                                                                                           *
*********************************************************************************************************************************/
static int                pgmR3InitPaging(PVM pVM);
static int                pgmR3InitStats(PVM pVM);
static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs);
static DECLCALLBACK(int)  pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser);
#ifdef VBOX_WITH_RAW_MODE
static DECLCALLBACK(int)  pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
static DECLCALLBACK(int)  pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser);
#endif /* VBOX_WITH_RAW_MODE */
#ifdef VBOX_STRICT
static FNVMATSTATE        pgmR3ResetNoMorePhysWritesFlag;
#endif
static int                pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0);
static void               pgmR3ModeDataSwitch(PVM pVM, PVMCPU pVCpu, PGMMODE enmShw, PGMMODE enmGst);
static PGMMODE            pgmR3CalcShadowMode(PVM pVM, PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher);

#ifdef VBOX_WITH_DEBUGGER
static FNDBGCCMD          pgmR3CmdError;
static FNDBGCCMD          pgmR3CmdSync;
static FNDBGCCMD          pgmR3CmdSyncAlways;
# ifdef VBOX_STRICT
static FNDBGCCMD          pgmR3CmdAssertCR3;
# endif
static FNDBGCCMD          pgmR3CmdPhysToFile;
#endif

/*********************************************************************************************************************************
*   Global Variables                                                                                                             *
*********************************************************************************************************************************/
#ifdef VBOX_WITH_DEBUGGER
/** Argument descriptors for '.pgmerror' and '.pgmerroroff'. */
static const DBGCVARDESC g_aPgmErrorArgs[] =
{
    /* cTimesMin, cTimesMax, enmCategory,        fFlags, pszName,  pszDescription */
    {  0,         1,         DBGCVAR_CAT_STRING, 0,      "where",  "Error injection location." },
};

static const DBGCVARDESC g_aPgmPhysToFileArgs[] =
{
    /* cTimesMin, cTimesMax, enmCategory,        fFlags, pszName,  pszDescription */
    {  1,         1,         DBGCVAR_CAT_STRING, 0,      "file",   "The file name." },
    {  0,         1,         DBGCVAR_CAT_STRING, 0,      "nozero", "If present, zero pages are skipped." },
};

# ifdef DEBUG_sandervl
static const DBGCVARDESC g_aPgmCountPhysWritesArgs[] =
{
    /* cTimesMin, cTimesMax, enmCategory,                 fFlags, pszName,    pszDescription */
    {  1,         1,         DBGCVAR_CAT_STRING,          0,      "enabled",  "on/off." },
    {  1,         1,         DBGCVAR_CAT_NUMBER_NO_RANGE, 0,      "interval", "Interval in ms." },
};
# endif

/** Command descriptors. */
static const DBGCCMD g_aCmds[] =
{
    /* pszCmd,            cArgsMin, cArgsMax, paArgDesc,                cArgDescs, fFlags, pfnHandler,                  pszSyntax, pszDescription */
    { "pgmsync",          0,        0,        NULL,                     0,         0,      pgmR3CmdSync,                "", "Sync the CR3 page." },
    { "pgmerror",         0,        1,        &g_aPgmErrorArgs[0],      1,         0,      pgmR3CmdError,               "", "Enables injection of runtime errors into parts of PGM." },
    { "pgmerroroff",      0,        1,        &g_aPgmErrorArgs[0],      1,         0,      pgmR3CmdError,               "", "Disables injection of runtime errors into parts of PGM." },
# ifdef VBOX_STRICT
    { "pgmassertcr3",     0,        0,        NULL,                     0,         0,      pgmR3CmdAssertCR3,           "", "Check the shadow CR3 mapping." },
#  ifdef VBOX_WITH_PAGE_SHARING
    { "pgmcheckduppages", 0,        0,        NULL,                     0,         0,      pgmR3CmdCheckDuplicatePages, "", "Check for duplicate pages in all running VMs." },
    { "pgmsharedmodules", 0,        0,        NULL,                     0,         0,      pgmR3CmdShowSharedModules,   "", "Print shared modules info." },
#  endif
# endif
    { "pgmsyncalways",    0,        0,        NULL,                     0,         0,      pgmR3CmdSyncAlways,          "", "Toggle permanent CR3 syncing." },
    { "pgmphystofile",    1,        2,        &g_aPgmPhysToFileArgs[0], 2,         0,      pgmR3CmdPhysToFile,          "", "Save the physical memory to file." },
};
#endif




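/*
 * Note: The following blocks instantiate the mode specific code. PGMShw.h,
 * PGMGstDefs.h, PGMGst.h and PGMBth.h are template headers which get included
 * over and over with different PGM_SHW_xxx, PGM_GST_xxx and PGM_BTH_xxx macro
 * settings, stamping out one copy of the shadow, guest and combined-mode
 * workers for each shadow/guest paging mode pair.
 */
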
/*
 * Shadow - 32-bit mode
 */
#define PGM_SHW_TYPE PGM_TYPE_32BIT
#define PGM_SHW_NAME(name) PGM_SHW_NAME_32BIT(name)
#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_32BIT_STR(name)
#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_32BIT_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE PGM_TYPE_REAL
#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_32BIT_REAL(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_32BIT_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_32BIT_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_32BIT_PT_FOR_PHYS
#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_32BIT_PD_PHYS
#include "PGMBth.h"
#include "PGMGstDefs.h"
#include "PGMGst.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE PGM_TYPE_PROT
#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_32BIT_PROT(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_32BIT_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_32BIT_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_32BIT_PT_FOR_PHYS
#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_32BIT_PD_PHYS
#include "PGMBth.h"
#include "PGMGstDefs.h"
#include "PGMGst.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE PGM_TYPE_32BIT
#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_32BIT_32BIT(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_32BIT_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_32BIT_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_32BIT_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_32BIT_PT_FOR_32BIT_4MB
#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_32BIT_PD
#include "PGMBth.h"
#include "PGMGstDefs.h"
#include "PGMGst.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - PAE mode
 */
#define PGM_SHW_TYPE PGM_TYPE_PAE
#define PGM_SHW_NAME(name) PGM_SHW_NAME_PAE(name)
#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_PAE_STR(name)
#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_REAL(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE PGM_TYPE_REAL
#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_REAL(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_PAE_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_PAE_PDPT_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE PGM_TYPE_PROT
#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_PROT(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_PAE_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_PAE_PDPT_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE PGM_TYPE_32BIT
#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_32BIT(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_PAE_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_PAE_PDPT_FOR_32BIT
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE PGM_TYPE_PAE
#define PGM_GST_NAME(name) PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_PAE_PAE(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_PAE_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_PAE_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_PAE_PDPT
#include "PGMBth.h"
#include "PGMGstDefs.h"
#include "PGMGst.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef BTH_PGMPOOLKIND_ROOT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - AMD64 mode
 */
#define PGM_SHW_TYPE PGM_TYPE_AMD64
#define PGM_SHW_NAME(name) PGM_SHW_NAME_AMD64(name)
#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_AMD64_STR(name)
#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_AMD64_STR(name)
#include "PGMShw.h"

#ifdef VBOX_WITH_64_BITS_GUESTS
/* Guest - AMD64 mode */
# define PGM_GST_TYPE PGM_TYPE_AMD64
# define PGM_GST_NAME(name) PGM_GST_NAME_AMD64(name)
# define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_AMD64_STR(name)
# define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_AMD64_STR(name)
# define PGM_BTH_NAME(name) PGM_BTH_NAME_AMD64_AMD64(name)
# define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_AMD64_AMD64_STR(name)
# define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_AMD64_AMD64_STR(name)
# define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
# define BTH_PGMPOOLKIND_ROOT PGMPOOLKIND_64BIT_PML4
# include "PGMBth.h"
# include "PGMGstDefs.h"
# include "PGMGst.h"
# undef BTH_PGMPOOLKIND_PT_FOR_BIG
# undef BTH_PGMPOOLKIND_PT_FOR_PT
# undef BTH_PGMPOOLKIND_ROOT
# undef PGM_BTH_NAME
# undef PGM_BTH_NAME_RC_STR
# undef PGM_BTH_NAME_R0_STR
# undef PGM_GST_TYPE
# undef PGM_GST_NAME
# undef PGM_GST_NAME_RC_STR
# undef PGM_GST_NAME_R0_STR
#endif /* VBOX_WITH_64_BITS_GUESTS */

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - Nested paging mode
 */
#define PGM_SHW_TYPE PGM_TYPE_NESTED
#define PGM_SHW_NAME(name) PGM_SHW_NAME_NESTED(name)
#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_NESTED_STR(name)
#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_NESTED_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE PGM_TYPE_REAL
#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_REAL(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE PGM_TYPE_PROT
#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_PROT(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE PGM_TYPE_32BIT
#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_32BIT(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE PGM_TYPE_PAE
#define PGM_GST_NAME(name) PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_PAE(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

#ifdef VBOX_WITH_64_BITS_GUESTS
/* Guest - AMD64 mode */
# define PGM_GST_TYPE PGM_TYPE_AMD64
# define PGM_GST_NAME(name) PGM_GST_NAME_AMD64(name)
# define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_AMD64_STR(name)
# define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_AMD64_STR(name)
# define PGM_BTH_NAME(name) PGM_BTH_NAME_NESTED_AMD64(name)
# define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_NESTED_AMD64_STR(name)
# define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_NESTED_AMD64_STR(name)
# define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
# include "PGMGstDefs.h"
# include "PGMBth.h"
# undef BTH_PGMPOOLKIND_PT_FOR_BIG
# undef BTH_PGMPOOLKIND_PT_FOR_PT
# undef PGM_BTH_NAME
# undef PGM_BTH_NAME_RC_STR
# undef PGM_BTH_NAME_R0_STR
# undef PGM_GST_TYPE
# undef PGM_GST_NAME
# undef PGM_GST_NAME_RC_STR
# undef PGM_GST_NAME_R0_STR
#endif /* VBOX_WITH_64_BITS_GUESTS */

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR


/*
 * Shadow - EPT
 */
#define PGM_SHW_TYPE PGM_TYPE_EPT
#define PGM_SHW_NAME(name) PGM_SHW_NAME_EPT(name)
#define PGM_SHW_NAME_RC_STR(name) PGM_SHW_NAME_RC_EPT_STR(name)
#define PGM_SHW_NAME_R0_STR(name) PGM_SHW_NAME_R0_EPT_STR(name)
#include "PGMShw.h"

/* Guest - real mode */
#define PGM_GST_TYPE PGM_TYPE_REAL
#define PGM_GST_NAME(name) PGM_GST_NAME_REAL(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_REAL_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_REAL_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_REAL(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_REAL_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_REAL_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - protected mode */
#define PGM_GST_TYPE PGM_TYPE_PROT
#define PGM_GST_NAME(name) PGM_GST_NAME_PROT(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PROT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PROT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_PROT(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_PROT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_PROT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PHYS
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - 32-bit mode */
#define PGM_GST_TYPE PGM_TYPE_32BIT
#define PGM_GST_NAME(name) PGM_GST_NAME_32BIT(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_32BIT_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_32BIT_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_32BIT(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_32BIT_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_32BIT_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_32BIT_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_32BIT_4MB
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

/* Guest - PAE mode */
#define PGM_GST_TYPE PGM_TYPE_PAE
#define PGM_GST_NAME(name) PGM_GST_NAME_PAE(name)
#define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_PAE_STR(name)
#define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_PAE_STR(name)
#define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_PAE(name)
#define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_PAE_STR(name)
#define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_PAE_STR(name)
#define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
#define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
#include "PGMGstDefs.h"
#include "PGMBth.h"
#undef BTH_PGMPOOLKIND_PT_FOR_BIG
#undef BTH_PGMPOOLKIND_PT_FOR_PT
#undef PGM_BTH_NAME
#undef PGM_BTH_NAME_RC_STR
#undef PGM_BTH_NAME_R0_STR
#undef PGM_GST_TYPE
#undef PGM_GST_NAME
#undef PGM_GST_NAME_RC_STR
#undef PGM_GST_NAME_R0_STR

#ifdef VBOX_WITH_64_BITS_GUESTS
/* Guest - AMD64 mode */
# define PGM_GST_TYPE PGM_TYPE_AMD64
# define PGM_GST_NAME(name) PGM_GST_NAME_AMD64(name)
# define PGM_GST_NAME_RC_STR(name) PGM_GST_NAME_RC_AMD64_STR(name)
# define PGM_GST_NAME_R0_STR(name) PGM_GST_NAME_R0_AMD64_STR(name)
# define PGM_BTH_NAME(name) PGM_BTH_NAME_EPT_AMD64(name)
# define PGM_BTH_NAME_RC_STR(name) PGM_BTH_NAME_RC_EPT_AMD64_STR(name)
# define PGM_BTH_NAME_R0_STR(name) PGM_BTH_NAME_R0_EPT_AMD64_STR(name)
# define BTH_PGMPOOLKIND_PT_FOR_PT PGMPOOLKIND_PAE_PT_FOR_PAE_PT
# define BTH_PGMPOOLKIND_PT_FOR_BIG PGMPOOLKIND_PAE_PT_FOR_PAE_2MB
# include "PGMGstDefs.h"
# include "PGMBth.h"
# undef BTH_PGMPOOLKIND_PT_FOR_BIG
# undef BTH_PGMPOOLKIND_PT_FOR_PT
# undef PGM_BTH_NAME
# undef PGM_BTH_NAME_RC_STR
# undef PGM_BTH_NAME_R0_STR
# undef PGM_GST_TYPE
# undef PGM_GST_NAME
# undef PGM_GST_NAME_RC_STR
# undef PGM_GST_NAME_R0_STR
#endif /* VBOX_WITH_64_BITS_GUESTS */

#undef PGM_SHW_TYPE
#undef PGM_SHW_NAME
#undef PGM_SHW_NAME_RC_STR
#undef PGM_SHW_NAME_R0_STR



1231/**
1232 * Initiates the paging of VM.
1233 *
1234 * @returns VBox status code.
1235 * @param pVM The cross context VM structure.
1236 */
1237VMMR3DECL(int) PGMR3Init(PVM pVM)
1238{
1239 LogFlow(("PGMR3Init:\n"));
1240 PCFGMNODE pCfgPGM = CFGMR3GetChild(CFGMR3GetRoot(pVM), "/PGM");
1241 int rc;
1242
1243 /*
1244 * Assert alignment and sizes.
1245 */
1246 AssertCompile(sizeof(pVM->pgm.s) <= sizeof(pVM->pgm.padding));
1247 AssertCompile(sizeof(pVM->aCpus[0].pgm.s) <= sizeof(pVM->aCpus[0].pgm.padding));
1248 AssertCompileMemberAlignment(PGM, CritSectX, sizeof(uintptr_t));
1249
1250 /*
1251 * Init the structure.
1252 */
1253 pVM->pgm.s.offVM = RT_OFFSETOF(VM, pgm.s);
1254 pVM->pgm.s.offVCpuPGM = RT_OFFSETOF(VMCPU, pgm.s);
1255 /*pVM->pgm.s.fRestoreRomPagesAtReset = false;*/
1256
    for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.aHandyPages); i++)
    {
        pVM->pgm.s.aHandyPages[i].HCPhysGCPhys = NIL_RTHCPHYS;
        pVM->pgm.s.aHandyPages[i].idPage       = NIL_GMM_PAGEID;
        pVM->pgm.s.aHandyPages[i].idSharedPage = NIL_GMM_PAGEID;
    }

    for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.aLargeHandyPage); i++)
    {
        pVM->pgm.s.aLargeHandyPage[i].HCPhysGCPhys = NIL_RTHCPHYS;
        pVM->pgm.s.aLargeHandyPage[i].idPage       = NIL_GMM_PAGEID;
        pVM->pgm.s.aLargeHandyPage[i].idSharedPage = NIL_GMM_PAGEID;
    }

    /* Init the per-CPU part. */
    for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
    {
        PVMCPU  pVCpu = &pVM->aCpus[idCpu];
        PPGMCPU pPGM  = &pVCpu->pgm.s;

        pPGM->offVM   = (uintptr_t)&pVCpu->pgm.s - (uintptr_t)pVM;
        pPGM->offVCpu = RT_OFFSETOF(VMCPU, pgm.s);
        pPGM->offPGM  = (uintptr_t)&pVCpu->pgm.s - (uintptr_t)&pVM->pgm.s;

        pPGM->enmShadowMode = PGMMODE_INVALID;
        pPGM->enmGuestMode  = PGMMODE_INVALID;

        pPGM->GCPhysCR3 = NIL_RTGCPHYS;

        pPGM->pGst32BitPdR3    = NULL;
        pPGM->pGstPaePdptR3    = NULL;
        pPGM->pGstAmd64Pml4R3  = NULL;
#ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
        pPGM->pGst32BitPdR0    = NIL_RTR0PTR;
        pPGM->pGstPaePdptR0    = NIL_RTR0PTR;
        pPGM->pGstAmd64Pml4R0  = NIL_RTR0PTR;
#endif
        pPGM->pGst32BitPdRC    = NIL_RTRCPTR;
        pPGM->pGstPaePdptRC    = NIL_RTRCPTR;
        for (unsigned i = 0; i < RT_ELEMENTS(pVCpu->pgm.s.apGstPaePDsR3); i++)
        {
            pPGM->apGstPaePDsR3[i]             = NULL;
#ifndef VBOX_WITH_2X_4GB_ADDR_SPACE
            pPGM->apGstPaePDsR0[i]             = NIL_RTR0PTR;
#endif
            pPGM->apGstPaePDsRC[i]             = NIL_RTRCPTR;
            pPGM->aGCPhysGstPaePDs[i]          = NIL_RTGCPHYS;
            pPGM->aGstPaePdpeRegs[i].u         = UINT64_MAX;
            pPGM->aGCPhysGstPaePDsMonitored[i] = NIL_RTGCPHYS;
        }

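        /* The A20 mask is all ones while the gate is open and clears bit 20 when
           it is closed (!true yields 0, so ~0; !false yields 1, so ~RT_BIT_64(20)),
           mimicking the 1MB wrap-around of real hardware. */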
        pPGM->fA20Enabled      = true;
        pPGM->GCPhysA20Mask    = ~((RTGCPHYS)!pPGM->fA20Enabled << 20);
    }

    pVM->pgm.s.enmHostMode      = SUPPAGINGMODE_INVALID;
    pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(32) - 1; /* default; checked later */
    pVM->pgm.s.GCPtrPrevRamRangeMapping = MM_HYPER_AREA_ADDRESS;

    rc = CFGMR3QueryBoolDef(CFGMR3GetRoot(pVM), "RamPreAlloc", &pVM->pgm.s.fRamPreAlloc,
#ifdef VBOX_WITH_PREALLOC_RAM_BY_DEFAULT
                            true
#else
                            false
#endif
                           );
    AssertLogRelRCReturn(rc, rc);

#if HC_ARCH_BITS == 32
# ifdef RT_OS_DARWIN
    rc = CFGMR3QueryU32Def(pCfgPGM, "MaxRing3Chunks", &pVM->pgm.s.ChunkR3Map.cMax, _1G / GMM_CHUNK_SIZE * 3);
# else
    rc = CFGMR3QueryU32Def(pCfgPGM, "MaxRing3Chunks", &pVM->pgm.s.ChunkR3Map.cMax, _1G / GMM_CHUNK_SIZE);
# endif
#else
    rc = CFGMR3QueryU32Def(pCfgPGM, "MaxRing3Chunks", &pVM->pgm.s.ChunkR3Map.cMax, UINT32_MAX);
#endif
    AssertLogRelRCReturn(rc, rc);
    for (uint32_t i = 0; i < RT_ELEMENTS(pVM->pgm.s.ChunkR3Map.Tlb.aEntries); i++)
        pVM->pgm.s.ChunkR3Map.Tlb.aEntries[i].idChunk = NIL_GMM_CHUNKID;

    /*
     * Get the configured RAM size - to estimate saved state size.
     */
    uint64_t cbRam;
    rc = CFGMR3QueryU64(CFGMR3GetRoot(pVM), "RamSize", &cbRam);
    if (rc == VERR_CFGM_VALUE_NOT_FOUND)
        cbRam = 0;
    else if (RT_SUCCESS(rc))
    {
        if (cbRam < PAGE_SIZE)
            cbRam = 0;
        cbRam = RT_ALIGN_64(cbRam, PAGE_SIZE);
    }
    else
    {
        AssertMsgFailed(("Configuration error: Failed to query integer \"RamSize\", rc=%Rrc.\n", rc));
        return rc;
    }

    /*
     * Check for PCI pass-through and other configurables.
     */
    rc = CFGMR3QueryBoolDef(pCfgPGM, "PciPassThrough", &pVM->pgm.s.fPciPassthrough, false);
    AssertMsgRCReturn(rc, ("Configuration error: Failed to query boolean \"PciPassThrough\", rc=%Rrc.\n", rc), rc);
    AssertLogRelReturn(!pVM->pgm.s.fPciPassthrough || pVM->pgm.s.fRamPreAlloc, VERR_INVALID_PARAMETER);

    rc = CFGMR3QueryBoolDef(CFGMR3GetRoot(pVM), "PageFusionAllowed", &pVM->pgm.s.fPageFusionAllowed, false);
    AssertLogRelRCReturn(rc, rc);

    /** @cfgm{/PGM/ZeroRamPagesOnReset, boolean, true}
     * Whether to clear RAM pages on (hard) reset. */
    rc = CFGMR3QueryBoolDef(pCfgPGM, "ZeroRamPagesOnReset", &pVM->pgm.s.fZeroRamPagesOnReset, true);
    AssertLogRelRCReturn(rc, rc);

#ifdef VBOX_WITH_STATISTICS
    /*
     * Allocate memory for the statistics before someone tries to use them.
     */
    size_t cbTotalStats = RT_ALIGN_Z(sizeof(PGMSTATS), 64) + RT_ALIGN_Z(sizeof(PGMCPUSTATS), 64) * pVM->cCpus;
    void *pv;
    rc = MMHyperAlloc(pVM, RT_ALIGN_Z(cbTotalStats, PAGE_SIZE), PAGE_SIZE, MM_TAG_PGM, &pv);
    AssertRCReturn(rc, rc);

    pVM->pgm.s.pStatsR3 = (PGMSTATS *)pv;
    pVM->pgm.s.pStatsR0 = MMHyperCCToR0(pVM, pv);
    pVM->pgm.s.pStatsRC = MMHyperCCToRC(pVM, pv);
    pv = (uint8_t *)pv + RT_ALIGN_Z(sizeof(PGMSTATS), 64);

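    /* The per-CPU PGMCPUSTATS blocks follow the VM-wide PGMSTATS block in the
       same allocation, each carved out at a 64 byte aligned offset. */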
    for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
    {
        pVM->aCpus[iCpu].pgm.s.pStatsR3 = (PGMCPUSTATS *)pv;
        pVM->aCpus[iCpu].pgm.s.pStatsR0 = MMHyperCCToR0(pVM, pv);
        pVM->aCpus[iCpu].pgm.s.pStatsRC = MMHyperCCToRC(pVM, pv);

        pv = (uint8_t *)pv + RT_ALIGN_Z(sizeof(PGMCPUSTATS), 64);
    }
#endif /* VBOX_WITH_STATISTICS */

    /*
     * Register callbacks, string formatters and the saved state data unit.
     */
#ifdef VBOX_STRICT
    VMR3AtStateRegister(pVM->pUVM, pgmR3ResetNoMorePhysWritesFlag, NULL);
#endif
    PGMRegisterStringFormatTypes();

    rc = pgmR3InitSavedState(pVM, cbRam);
    if (RT_FAILURE(rc))
        return rc;

    /*
     * Initialize the PGM critical section and flush the phys TLBs
     */
    rc = PDMR3CritSectInit(pVM, &pVM->pgm.s.CritSectX, RT_SRC_POS, "PGM");
    AssertRCReturn(rc, rc);

    PGMR3PhysChunkInvalidateTLB(pVM);
    pgmPhysInvalidatePageMapTLB(pVM);

    /*
     * For the time being we sport a full set of handy pages in addition to the base
     * memory to simplify things.
     */
    rc = MMR3ReserveHandyPages(pVM, RT_ELEMENTS(pVM->pgm.s.aHandyPages)); /** @todo this should be changed to PGM_HANDY_PAGES_MIN but this needs proper testing... */
    AssertRCReturn(rc, rc);

    /*
     * Trees
     */
    rc = MMHyperAlloc(pVM, sizeof(PGMTREES), 0, MM_TAG_PGM, (void **)&pVM->pgm.s.pTreesR3);
    if (RT_SUCCESS(rc))
    {
        pVM->pgm.s.pTreesR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pTreesR3);
        pVM->pgm.s.pTreesRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pTreesR3);
    }

    /*
     * Allocate the zero page.
     */
    if (RT_SUCCESS(rc))
    {
        rc = MMHyperAlloc(pVM, PAGE_SIZE, PAGE_SIZE, MM_TAG_PGM, &pVM->pgm.s.pvZeroPgR3);
        if (RT_SUCCESS(rc))
        {
            pVM->pgm.s.pvZeroPgRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pvZeroPgR3);
            pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
            pVM->pgm.s.HCPhysZeroPg = MMR3HyperHCVirt2HCPhys(pVM, pVM->pgm.s.pvZeroPgR3);
            AssertRelease(pVM->pgm.s.HCPhysZeroPg != NIL_RTHCPHYS);
        }
    }
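
    /* This single page provides the backing for all zero guest pages, so
       untouched RAM does not consume additional host memory. */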

    /*
     * Allocate the invalid MMIO page.
     * (The invalid bits in HCPhysInvMmioPg are set later on init complete.)
     */
    if (RT_SUCCESS(rc))
    {
        rc = MMHyperAlloc(pVM, PAGE_SIZE, PAGE_SIZE, MM_TAG_PGM, &pVM->pgm.s.pvMmioPgR3);
        if (RT_SUCCESS(rc))
        {
            ASMMemFill32(pVM->pgm.s.pvMmioPgR3, PAGE_SIZE, 0xfeedface);
            pVM->pgm.s.HCPhysMmioPg = MMR3HyperHCVirt2HCPhys(pVM, pVM->pgm.s.pvMmioPgR3);
            AssertRelease(pVM->pgm.s.HCPhysMmioPg != NIL_RTHCPHYS);
            pVM->pgm.s.HCPhysInvMmioPg = pVM->pgm.s.HCPhysMmioPg;
        }
    }
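
    /* The 0xfeedface fill gives reads of the dummy MMIO page a recognizable
       pattern, which makes stray accesses easier to spot in dumps and logs. */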

    /*
     * Register the physical access handler protecting ROMs.
     */
    if (RT_SUCCESS(rc))
        rc = PGMR3HandlerPhysicalTypeRegister(pVM, PGMPHYSHANDLERKIND_WRITE,
                                              pgmPhysRomWriteHandler,
                                              NULL, NULL, "pgmPhysRomWritePfHandler",
                                              NULL, NULL, "pgmPhysRomWritePfHandler",
                                              "ROM write protection",
                                              &pVM->pgm.s.hRomPhysHandlerType);

    /*
     * Init the paging.
     */
    if (RT_SUCCESS(rc))
        rc = pgmR3InitPaging(pVM);

    /*
     * Init the page pool.
     */
    if (RT_SUCCESS(rc))
        rc = pgmR3PoolInit(pVM);

    if (RT_SUCCESS(rc))
    {
        for (VMCPUID i = 0; i < pVM->cCpus; i++)
        {
            PVMCPU pVCpu = &pVM->aCpus[i];
            rc = PGMR3ChangeMode(pVM, pVCpu, PGMMODE_REAL);
            if (RT_FAILURE(rc))
                break;
        }
    }

    if (RT_SUCCESS(rc))
    {
        /*
         * Info & statistics
         */
        DBGFR3InfoRegisterInternalEx(pVM, "mode",
                                     "Shows the current paging mode. "
                                     "Recognizes 'all', 'guest', 'shadow' and 'host' as arguments, defaulting to 'all' if nothing is given.",
                                     pgmR3InfoMode,
                                     DBGFINFO_FLAGS_ALL_EMTS);
        DBGFR3InfoRegisterInternal(pVM, "pgmcr3",
                                   "Dumps all the entries in the top level paging table. No arguments.",
                                   pgmR3InfoCr3);
        DBGFR3InfoRegisterInternal(pVM, "phys",
                                   "Dumps all the physical address ranges. No arguments.",
                                   pgmR3PhysInfo);
        DBGFR3InfoRegisterInternal(pVM, "handlers",
                                   "Dumps physical, virtual and hyper virtual handlers. "
                                   "Pass 'phys', 'virt', 'hyper' as argument if only one kind is wanted. "
                                   "Add 'nost' if the statistics are unwanted; use together with 'all' or explicit selection.",
                                   pgmR3InfoHandlers);
        DBGFR3InfoRegisterInternal(pVM, "mappings",
                                   "Dumps guest mappings.",
                                   pgmR3MapInfo);
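
        /* These handlers can then be invoked by name, e.g. the paging mode dump
           via DBGFR3Info(pVM->pUVM, "mode", "all", NULL) or an "info mode all"
           request from a debugger front-end. */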

        pgmR3InitStats(pVM);

#ifdef VBOX_WITH_DEBUGGER
        /*
         * Debugger commands.
         */
        static bool s_fRegisteredCmds = false;
        if (!s_fRegisteredCmds)
        {
            int rc2 = DBGCRegisterCommands(&g_aCmds[0], RT_ELEMENTS(g_aCmds));
            if (RT_SUCCESS(rc2))
                s_fRegisteredCmds = true;
        }
#endif
        return VINF_SUCCESS;
    }

    /* Almost no cleanup necessary, MM frees all memory. */
    PDMR3CritSectDelete(&pVM->pgm.s.CritSectX);

    return rc;
}


/**
 * Init paging.
 *
 * Since we need to check what mode the host is operating in before we can
 * choose the right paging functions for the host, this has to be delayed
 * until R0 has been initialized.
 *
 * @returns VBox status code.
 * @param   pVM     The cross context VM structure.
 */
static int pgmR3InitPaging(PVM pVM)
{
    /*
     * Force a recalculation of modes and switcher so everyone gets notified.
     */
    for (VMCPUID i = 0; i < pVM->cCpus; i++)
    {
        PVMCPU pVCpu = &pVM->aCpus[i];

        pVCpu->pgm.s.enmShadowMode = PGMMODE_INVALID;
        pVCpu->pgm.s.enmGuestMode  = PGMMODE_INVALID;
    }

    pVM->pgm.s.enmHostMode = SUPPAGINGMODE_INVALID;

    /*
     * Allocate static mapping space for whatever the cr3 register
     * points to and in the case of PAE mode to the 4 PDs.
     */
    int rc = MMR3HyperReserve(pVM, PAGE_SIZE * 5, "CR3 mapping", &pVM->pgm.s.GCPtrCR3Mapping);
    if (RT_FAILURE(rc))
    {
        AssertMsgFailed(("Failed to reserve five pages for the CR3 mapping in the HMA, rc=%Rrc\n", rc));
        return rc;
    }
    MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
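    /* The extra unmapped "fence" page acts as a guard between the CR3 mapping
       area and whatever gets reserved next in the hypervisor area. */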

    /*
     * Allocate pages for the three possible intermediate contexts
     * (AMD64, PAE and plain 32-Bit). We maintain all three contexts
     * for the sake of simplicity. The AMD64 context reuses the PAE
     * pages for the lower levels, making the total number of pages
     * 12 (3 + 7 + 2).
     *
     * We assume that two page tables will be enough for the core code
     * mappings (HC virtual and identity).
     */
    pVM->pgm.s.pInterPD         = (PX86PD)MMR3PageAllocLow(pVM);    AssertReturn(pVM->pgm.s.pInterPD,         VERR_NO_PAGE_MEMORY);
    pVM->pgm.s.apInterPTs[0]    = (PX86PT)MMR3PageAllocLow(pVM);    AssertReturn(pVM->pgm.s.apInterPTs[0],    VERR_NO_PAGE_MEMORY);
    pVM->pgm.s.apInterPTs[1]    = (PX86PT)MMR3PageAllocLow(pVM);    AssertReturn(pVM->pgm.s.apInterPTs[1],    VERR_NO_PAGE_MEMORY);
    pVM->pgm.s.apInterPaePTs[0] = (PX86PTPAE)MMR3PageAlloc(pVM);    AssertReturn(pVM->pgm.s.apInterPaePTs[0], VERR_NO_PAGE_MEMORY);
    pVM->pgm.s.apInterPaePTs[1] = (PX86PTPAE)MMR3PageAlloc(pVM);    AssertReturn(pVM->pgm.s.apInterPaePTs[1], VERR_NO_PAGE_MEMORY);
    pVM->pgm.s.apInterPaePDs[0] = (PX86PDPAE)MMR3PageAlloc(pVM);    AssertReturn(pVM->pgm.s.apInterPaePDs[0], VERR_NO_PAGE_MEMORY);
    pVM->pgm.s.apInterPaePDs[1] = (PX86PDPAE)MMR3PageAlloc(pVM);    AssertReturn(pVM->pgm.s.apInterPaePDs[1], VERR_NO_PAGE_MEMORY);
    pVM->pgm.s.apInterPaePDs[2] = (PX86PDPAE)MMR3PageAlloc(pVM);    AssertReturn(pVM->pgm.s.apInterPaePDs[2], VERR_NO_PAGE_MEMORY);
    pVM->pgm.s.apInterPaePDs[3] = (PX86PDPAE)MMR3PageAlloc(pVM);    AssertReturn(pVM->pgm.s.apInterPaePDs[3], VERR_NO_PAGE_MEMORY);
    pVM->pgm.s.pInterPaePDPT    = (PX86PDPT)MMR3PageAllocLow(pVM);  AssertReturn(pVM->pgm.s.pInterPaePDPT,    VERR_NO_PAGE_MEMORY);
    pVM->pgm.s.pInterPaePDPT64  = (PX86PDPT)MMR3PageAllocLow(pVM);  AssertReturn(pVM->pgm.s.pInterPaePDPT64,  VERR_NO_PAGE_MEMORY);
    pVM->pgm.s.pInterPaePML4    = (PX86PML4)MMR3PageAllocLow(pVM);  AssertReturn(pVM->pgm.s.pInterPaePML4,    VERR_NO_PAGE_MEMORY);

    pVM->pgm.s.HCPhysInterPD = MMPage2Phys(pVM, pVM->pgm.s.pInterPD);
    AssertRelease(pVM->pgm.s.HCPhysInterPD != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPD & PAGE_OFFSET_MASK));
    pVM->pgm.s.HCPhysInterPaePDPT = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT);
    AssertRelease(pVM->pgm.s.HCPhysInterPaePDPT != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePDPT & PAGE_OFFSET_MASK));
    pVM->pgm.s.HCPhysInterPaePML4 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePML4);
    AssertRelease(pVM->pgm.s.HCPhysInterPaePML4 != NIL_RTHCPHYS && !(pVM->pgm.s.HCPhysInterPaePML4 & PAGE_OFFSET_MASK) && pVM->pgm.s.HCPhysInterPaePML4 < 0xffffffff);

    /*
     * Initialize the pages, setting up the PML4 and PDPT for repetitive 4GB action.
     */
    ASMMemZeroPage(pVM->pgm.s.pInterPD);
    ASMMemZeroPage(pVM->pgm.s.apInterPTs[0]);
    ASMMemZeroPage(pVM->pgm.s.apInterPTs[1]);

    ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[0]);
    ASMMemZeroPage(pVM->pgm.s.apInterPaePTs[1]);

    ASMMemZeroPage(pVM->pgm.s.pInterPaePDPT);
    for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.apInterPaePDs); i++)
    {
        ASMMemZeroPage(pVM->pgm.s.apInterPaePDs[i]);
        pVM->pgm.s.pInterPaePDPT->a[i].u = X86_PDPE_P | PGM_PLXFLAGS_PERMANENT
                                         | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[i]);
    }

    for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.pInterPaePDPT64->a); i++)
    {
        const unsigned iPD = i % RT_ELEMENTS(pVM->pgm.s.apInterPaePDs);
        pVM->pgm.s.pInterPaePDPT64->a[i].u = X86_PDPE_P | X86_PDPE_RW | X86_PDPE_US | X86_PDPE_A | PGM_PLXFLAGS_PERMANENT
                                           | MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[iPD]);
    }

    RTHCPHYS HCPhysInterPaePDPT64 = MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64);
    for (unsigned i = 0; i < RT_ELEMENTS(pVM->pgm.s.pInterPaePML4->a); i++)
        pVM->pgm.s.pInterPaePML4->a[i].u = X86_PML4E_P | X86_PML4E_RW | X86_PML4E_US | X86_PML4E_A | PGM_PLXFLAGS_PERMANENT
                                         | HCPhysInterPaePDPT64;
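
    /* Net effect: every PML4 entry references the same 64-bit PDPT, whose
       entries cycle through the four PAE page directories, so the intermediate
       context repeats the same 4GB worth of mappings across the whole 64-bit
       address space. */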

    /*
     * Initialize paging workers and mode from current host mode
     * and the guest running in real mode.
     */
    pVM->pgm.s.enmHostMode = SUPR3GetPagingMode();
    switch (pVM->pgm.s.enmHostMode)
    {
        case SUPPAGINGMODE_32_BIT:
        case SUPPAGINGMODE_32_BIT_GLOBAL:
        case SUPPAGINGMODE_PAE:
        case SUPPAGINGMODE_PAE_GLOBAL:
        case SUPPAGINGMODE_PAE_NX:
        case SUPPAGINGMODE_PAE_GLOBAL_NX:
            break;

        case SUPPAGINGMODE_AMD64:
        case SUPPAGINGMODE_AMD64_GLOBAL:
        case SUPPAGINGMODE_AMD64_NX:
        case SUPPAGINGMODE_AMD64_GLOBAL_NX:
            if (ARCH_BITS != 64)
            {
                AssertMsgFailed(("Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
                LogRel(("PGM: Host mode %d (64-bit) is not supported by non-64bit builds\n", pVM->pgm.s.enmHostMode));
                return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
            }
            break;
        default:
            AssertMsgFailed(("Host mode %d is not supported\n", pVM->pgm.s.enmHostMode));
            return VERR_PGM_UNSUPPORTED_HOST_PAGING_MODE;
    }
    rc = pgmR3ModeDataInit(pVM, false /* don't resolve GC and R0 syms yet */);
    if (RT_SUCCESS(rc))
    {
        LogFlow(("pgmR3InitPaging: returns successfully\n"));
#if HC_ARCH_BITS == 64
        LogRel(("PGM: HCPhysInterPD=%RHp HCPhysInterPaePDPT=%RHp HCPhysInterPaePML4=%RHp\n",
                pVM->pgm.s.HCPhysInterPD, pVM->pgm.s.HCPhysInterPaePDPT, pVM->pgm.s.HCPhysInterPaePML4));
        LogRel(("PGM: apInterPTs={%RHp,%RHp} apInterPaePTs={%RHp,%RHp} apInterPaePDs={%RHp,%RHp,%RHp,%RHp} pInterPaePDPT64=%RHp\n",
                MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPTs[1]),
                MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePTs[1]),
                MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[0]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[1]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[2]), MMPage2Phys(pVM, pVM->pgm.s.apInterPaePDs[3]),
                MMPage2Phys(pVM, pVM->pgm.s.pInterPaePDPT64)));
#endif

        /*
         * Log the host paging mode. It may come in handy.
         */
        const char *pszHostMode;
        switch (pVM->pgm.s.enmHostMode)
        {
            case SUPPAGINGMODE_32_BIT:          pszHostMode = "32-bit";       break;
            case SUPPAGINGMODE_32_BIT_GLOBAL:   pszHostMode = "32-bit+PGE";   break;
            case SUPPAGINGMODE_PAE:             pszHostMode = "PAE";          break;
            case SUPPAGINGMODE_PAE_GLOBAL:      pszHostMode = "PAE+PGE";      break;
            case SUPPAGINGMODE_PAE_NX:          pszHostMode = "PAE+NXE";      break;
            case SUPPAGINGMODE_PAE_GLOBAL_NX:   pszHostMode = "PAE+PGE+NXE";  break;
            case SUPPAGINGMODE_AMD64:           pszHostMode = "AMD64";        break;
            case SUPPAGINGMODE_AMD64_GLOBAL:    pszHostMode = "AMD64+PGE";    break;
            case SUPPAGINGMODE_AMD64_NX:        pszHostMode = "AMD64+NX";     break;
            case SUPPAGINGMODE_AMD64_GLOBAL_NX: pszHostMode = "AMD64+PGE+NX"; break;
            default:                            pszHostMode = "???";          break;
        }
        LogRel(("PGM: Host paging mode: %s\n", pszHostMode));

        return VINF_SUCCESS;
    }

    LogFlow(("pgmR3InitPaging: returns %Rrc\n", rc));
    return rc;
}


/**
 * Init statistics.
 *
 * @returns VBox status code.
 * @param   pVM     The cross context VM structure.
 */
static int pgmR3InitStats(PVM pVM)
{
    PPGM pPGM = &pVM->pgm.s;
    int  rc;

    /*
     * Release statistics.
     */
    /* Common - misc variables */
    STAM_REL_REG(pVM, &pPGM->cAllPages,                     STAMTYPE_U32,     "/PGM/Page/cAllPages",            STAMUNIT_COUNT,     "The total number of pages.");
    STAM_REL_REG(pVM, &pPGM->cPrivatePages,                 STAMTYPE_U32,     "/PGM/Page/cPrivatePages",        STAMUNIT_COUNT,     "The number of private pages.");
    STAM_REL_REG(pVM, &pPGM->cSharedPages,                  STAMTYPE_U32,     "/PGM/Page/cSharedPages",         STAMUNIT_COUNT,     "The number of shared pages.");
    STAM_REL_REG(pVM, &pPGM->cReusedSharedPages,            STAMTYPE_U32,     "/PGM/Page/cReusedSharedPages",   STAMUNIT_COUNT,     "The number of reused shared pages.");
    STAM_REL_REG(pVM, &pPGM->cZeroPages,                    STAMTYPE_U32,     "/PGM/Page/cZeroPages",           STAMUNIT_COUNT,     "The number of zero backed pages.");
    STAM_REL_REG(pVM, &pPGM->cPureMmioPages,                STAMTYPE_U32,     "/PGM/Page/cPureMmioPages",       STAMUNIT_COUNT,     "The number of pure MMIO pages.");
    STAM_REL_REG(pVM, &pPGM->cMonitoredPages,               STAMTYPE_U32,     "/PGM/Page/cMonitoredPages",      STAMUNIT_COUNT,     "The number of write monitored pages.");
    STAM_REL_REG(pVM, &pPGM->cWrittenToPages,               STAMTYPE_U32,     "/PGM/Page/cWrittenToPages",      STAMUNIT_COUNT,     "The number of previously write monitored pages that have been written to.");
    STAM_REL_REG(pVM, &pPGM->cWriteLockedPages,             STAMTYPE_U32,     "/PGM/Page/cWriteLockedPages",    STAMUNIT_COUNT,     "The number of write(/read) locked pages.");
    STAM_REL_REG(pVM, &pPGM->cReadLockedPages,              STAMTYPE_U32,     "/PGM/Page/cReadLockedPages",     STAMUNIT_COUNT,     "The number of read (only) locked pages.");
    STAM_REL_REG(pVM, &pPGM->cBalloonedPages,               STAMTYPE_U32,     "/PGM/Page/cBalloonedPages",      STAMUNIT_COUNT,     "The number of ballooned pages.");
    STAM_REL_REG(pVM, &pPGM->cHandyPages,                   STAMTYPE_U32,     "/PGM/Page/cHandyPages",          STAMUNIT_COUNT,     "The number of handy pages (not included in cAllPages).");
    STAM_REL_REG(pVM, &pPGM->cLargePages,                   STAMTYPE_U32,     "/PGM/Page/cLargePages",          STAMUNIT_COUNT,     "The number of large pages allocated (includes disabled).");
    STAM_REL_REG(pVM, &pPGM->cLargePagesDisabled,           STAMTYPE_U32,     "/PGM/Page/cLargePagesDisabled",  STAMUNIT_COUNT,     "The number of disabled large pages.");
    STAM_REL_REG(pVM, &pPGM->cRelocations,                  STAMTYPE_COUNTER, "/PGM/cRelocations",              STAMUNIT_OCCURENCES,"Number of hypervisor relocations.");
    STAM_REL_REG(pVM, &pPGM->ChunkR3Map.c,                  STAMTYPE_U32,     "/PGM/ChunkR3Map/c",              STAMUNIT_COUNT,     "Number of mapped chunks.");
    STAM_REL_REG(pVM, &pPGM->ChunkR3Map.cMax,               STAMTYPE_U32,     "/PGM/ChunkR3Map/cMax",           STAMUNIT_COUNT,     "Maximum number of mapped chunks.");
    STAM_REL_REG(pVM, &pPGM->cMappedChunks,                 STAMTYPE_U32,     "/PGM/ChunkR3Map/Mapped",         STAMUNIT_COUNT,     "Number of times we mapped a chunk.");
    STAM_REL_REG(pVM, &pPGM->cUnmappedChunks,               STAMTYPE_U32,     "/PGM/ChunkR3Map/Unmapped",       STAMUNIT_COUNT,     "Number of times we unmapped a chunk.");

    STAM_REL_REG(pVM, &pPGM->StatLargePageReused,           STAMTYPE_COUNTER, "/PGM/LargePage/Reused",          STAMUNIT_OCCURENCES, "The number of times we've reused a large page.");
    STAM_REL_REG(pVM, &pPGM->StatLargePageRefused,          STAMTYPE_COUNTER, "/PGM/LargePage/Refused",         STAMUNIT_OCCURENCES, "The number of times we couldn't use a large page.");
    STAM_REL_REG(pVM, &pPGM->StatLargePageRecheck,          STAMTYPE_COUNTER, "/PGM/LargePage/Recheck",         STAMUNIT_OCCURENCES, "The number of times we've rechecked a disabled large page.");

    STAM_REL_REG(pVM, &pPGM->StatShModCheck,                STAMTYPE_PROFILE, "/PGM/ShMod/Check",               STAMUNIT_TICKS_PER_CALL, "Profiles the shared module checking.");

    /* Live save */
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.fActive,         STAMTYPE_U8,      "/PGM/LiveSave/fActive",          STAMUNIT_COUNT,     "Active or not.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.cIgnoredPages,   STAMTYPE_U32,     "/PGM/LiveSave/cIgnoredPages",    STAMUNIT_COUNT,     "The number of ignored pages in the RAM ranges (i.e. MMIO, MMIO2 and ROM).");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.cDirtyPagesLong, STAMTYPE_U32,     "/PGM/LiveSave/cDirtyPagesLong",  STAMUNIT_COUNT,     "Longer term dirty page average.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.cDirtyPagesShort,STAMTYPE_U32,     "/PGM/LiveSave/cDirtyPagesShort", STAMUNIT_COUNT,     "Short term dirty page average.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.cPagesPerSecond, STAMTYPE_U32,     "/PGM/LiveSave/cPagesPerSecond",  STAMUNIT_COUNT,     "Pages per second.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.cSavedPages,     STAMTYPE_U64,     "/PGM/LiveSave/cSavedPages",      STAMUNIT_COUNT,     "The total number of saved pages.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Ram.cReadyPages,     STAMTYPE_U32, "/PGM/LiveSave/Ram/cReadyPages",      STAMUNIT_COUNT, "RAM: Ready pages.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Ram.cDirtyPages,     STAMTYPE_U32, "/PGM/LiveSave/Ram/cDirtyPages",      STAMUNIT_COUNT, "RAM: Dirty pages.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Ram.cZeroPages,      STAMTYPE_U32, "/PGM/LiveSave/Ram/cZeroPages",       STAMUNIT_COUNT, "RAM: Ready zero pages.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Ram.cMonitoredPages, STAMTYPE_U32, "/PGM/LiveSave/Ram/cMonitoredPages",  STAMUNIT_COUNT, "RAM: Write monitored pages.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Rom.cReadyPages,     STAMTYPE_U32, "/PGM/LiveSave/Rom/cReadyPages",      STAMUNIT_COUNT, "ROM: Ready pages.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Rom.cDirtyPages,     STAMTYPE_U32, "/PGM/LiveSave/Rom/cDirtyPages",      STAMUNIT_COUNT, "ROM: Dirty pages.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Rom.cZeroPages,      STAMTYPE_U32, "/PGM/LiveSave/Rom/cZeroPages",       STAMUNIT_COUNT, "ROM: Ready zero pages.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Rom.cMonitoredPages, STAMTYPE_U32, "/PGM/LiveSave/Rom/cMonitoredPages",  STAMUNIT_COUNT, "ROM: Write monitored pages.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Mmio2.cReadyPages,   STAMTYPE_U32, "/PGM/LiveSave/Mmio2/cReadyPages",    STAMUNIT_COUNT, "MMIO2: Ready pages.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Mmio2.cDirtyPages,   STAMTYPE_U32, "/PGM/LiveSave/Mmio2/cDirtyPages",    STAMUNIT_COUNT, "MMIO2: Dirty pages.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Mmio2.cZeroPages,    STAMTYPE_U32, "/PGM/LiveSave/Mmio2/cZeroPages",     STAMUNIT_COUNT, "MMIO2: Ready zero pages.");
    STAM_REL_REG_USED(pVM, &pPGM->LiveSave.Mmio2.cMonitoredPages,STAMTYPE_U32,"/PGM/LiveSave/Mmio2/cMonitoredPages",STAMUNIT_COUNT, "MMIO2: Write monitored pages.");

#ifdef VBOX_WITH_STATISTICS

# define PGM_REG_COUNTER(a, b, c) \
        rc = STAMR3RegisterF(pVM, a, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, c, b); \
        AssertRC(rc);

# define PGM_REG_COUNTER_BYTES(a, b, c) \
        rc = STAMR3RegisterF(pVM, a, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_BYTES, c, b); \
        AssertRC(rc);

# define PGM_REG_PROFILE(a, b, c) \
        rc = STAMR3RegisterF(pVM, a, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_TICKS_PER_CALL, c, b); \
        AssertRC(rc);
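
    /* For example, PGM_REG_COUNTER(&pStats->StatR3PhysRead, "/PGM/R3/Phys/Read", "...")
       expands to a STAMR3RegisterF call that registers an always-visible occurrence
       counter under that exact STAM path and asserts the returned status code. */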

    PGMSTATS *pStats = pVM->pgm.s.pStatsR3;

    PGM_REG_PROFILE(&pStats->StatAllocLargePage,                "/PGM/LargePage/Prof/Alloc",              "Time spent by the host OS for large page allocation.");
    PGM_REG_PROFILE(&pStats->StatClearLargePage,                "/PGM/LargePage/Prof/Clear",              "Time spent clearing the newly allocated large pages.");
    PGM_REG_COUNTER(&pStats->StatLargePageOverflow,             "/PGM/LargePage/Overflow",                "The number of times allocating a large page took too long.");
    PGM_REG_PROFILE(&pStats->StatR3IsValidLargePage,            "/PGM/LargePage/Prof/R3/IsValid",         "pgmPhysIsValidLargePage profiling - R3.");
    PGM_REG_PROFILE(&pStats->StatRZIsValidLargePage,            "/PGM/LargePage/Prof/RZ/IsValid",         "pgmPhysIsValidLargePage profiling - RZ.");

    PGM_REG_COUNTER(&pStats->StatR3DetectedConflicts,           "/PGM/R3/DetectedConflicts",              "The number of times PGMR3CheckMappingConflicts() detected a conflict.");
    PGM_REG_PROFILE(&pStats->StatR3ResolveConflict,             "/PGM/R3/ResolveConflict",                "pgmR3SyncPTResolveConflict() profiling (includes the entire relocation).");
    PGM_REG_COUNTER(&pStats->StatR3PhysRead,                    "/PGM/R3/Phys/Read",                      "The number of times PGMPhysRead was called.");
    PGM_REG_COUNTER_BYTES(&pStats->StatR3PhysReadBytes,         "/PGM/R3/Phys/Read/Bytes",                "The number of bytes read by PGMPhysRead.");
    PGM_REG_COUNTER(&pStats->StatR3PhysWrite,                   "/PGM/R3/Phys/Write",                     "The number of times PGMPhysWrite was called.");
    PGM_REG_COUNTER_BYTES(&pStats->StatR3PhysWriteBytes,        "/PGM/R3/Phys/Write/Bytes",               "The number of bytes written by PGMPhysWrite.");
    PGM_REG_COUNTER(&pStats->StatR3PhysSimpleRead,              "/PGM/R3/Phys/Simple/Read",               "The number of times PGMPhysSimpleReadGCPtr was called.");
    PGM_REG_COUNTER_BYTES(&pStats->StatR3PhysSimpleReadBytes,   "/PGM/R3/Phys/Simple/Read/Bytes",         "The number of bytes read by PGMPhysSimpleReadGCPtr.");
    PGM_REG_COUNTER(&pStats->StatR3PhysSimpleWrite,             "/PGM/R3/Phys/Simple/Write",              "The number of times PGMPhysSimpleWriteGCPtr was called.");
    PGM_REG_COUNTER_BYTES(&pStats->StatR3PhysSimpleWriteBytes,  "/PGM/R3/Phys/Simple/Write/Bytes",        "The number of bytes written by PGMPhysSimpleWriteGCPtr.");

    PGM_REG_COUNTER(&pStats->StatRZChunkR3MapTlbHits,           "/PGM/ChunkR3Map/TlbHitsRZ",              "TLB hits.");
    PGM_REG_COUNTER(&pStats->StatRZChunkR3MapTlbMisses,         "/PGM/ChunkR3Map/TlbMissesRZ",            "TLB misses.");
    PGM_REG_PROFILE(&pStats->StatChunkAging,                    "/PGM/ChunkR3Map/Map/Aging",              "Chunk aging profiling.");
    PGM_REG_PROFILE(&pStats->StatChunkFindCandidate,            "/PGM/ChunkR3Map/Map/Find",               "Chunk unmap find profiling.");
    PGM_REG_PROFILE(&pStats->StatChunkUnmap,                    "/PGM/ChunkR3Map/Map/Unmap",              "Chunk unmap of address space profiling.");
    PGM_REG_PROFILE(&pStats->StatChunkMap,                      "/PGM/ChunkR3Map/Map/Map",                "Chunk map of address space profiling.");

    PGM_REG_COUNTER(&pStats->StatRZPageMapTlbHits,              "/PGM/RZ/Page/MapTlbHits",                "TLB hits.");
    PGM_REG_COUNTER(&pStats->StatRZPageMapTlbMisses,            "/PGM/RZ/Page/MapTlbMisses",              "TLB misses.");
    PGM_REG_COUNTER(&pStats->StatR3ChunkR3MapTlbHits,           "/PGM/ChunkR3Map/TlbHitsR3",              "TLB hits.");
    PGM_REG_COUNTER(&pStats->StatR3ChunkR3MapTlbMisses,         "/PGM/ChunkR3Map/TlbMissesR3",            "TLB misses.");
    PGM_REG_COUNTER(&pStats->StatR3PageMapTlbHits,              "/PGM/R3/Page/MapTlbHits",                "TLB hits.");
    PGM_REG_COUNTER(&pStats->StatR3PageMapTlbMisses,            "/PGM/R3/Page/MapTlbMisses",              "TLB misses.");
    PGM_REG_COUNTER(&pStats->StatPageMapTlbFlushes,             "/PGM/R3/Page/MapTlbFlushes",             "TLB flushes (all contexts).");
    PGM_REG_COUNTER(&pStats->StatPageMapTlbFlushEntry,          "/PGM/R3/Page/MapTlbFlushEntry",          "TLB entry flushes (all contexts).");

    PGM_REG_COUNTER(&pStats->StatRZRamRangeTlbHits,             "/PGM/RZ/RamRange/TlbHits",               "TLB hits.");
    PGM_REG_COUNTER(&pStats->StatRZRamRangeTlbMisses,           "/PGM/RZ/RamRange/TlbMisses",             "TLB misses.");
    PGM_REG_COUNTER(&pStats->StatR3RamRangeTlbHits,             "/PGM/R3/RamRange/TlbHits",               "TLB hits.");
    PGM_REG_COUNTER(&pStats->StatR3RamRangeTlbMisses,           "/PGM/R3/RamRange/TlbMisses",             "TLB misses.");

    PGM_REG_PROFILE(&pStats->StatRZSyncCR3HandlerVirtualUpdate, "/PGM/RZ/SyncCR3/Handlers/VirtualUpdate", "Profiling of the virtual handler updates.");
    PGM_REG_PROFILE(&pStats->StatRZSyncCR3HandlerVirtualReset,  "/PGM/RZ/SyncCR3/Handlers/VirtualReset",  "Profiling of the virtual handler resets.");
    PGM_REG_PROFILE(&pStats->StatR3SyncCR3HandlerVirtualUpdate, "/PGM/R3/SyncCR3/Handlers/VirtualUpdate", "Profiling of the virtual handler updates.");
    PGM_REG_PROFILE(&pStats->StatR3SyncCR3HandlerVirtualReset,  "/PGM/R3/SyncCR3/Handlers/VirtualReset",  "Profiling of the virtual handler resets.");

    PGM_REG_COUNTER(&pStats->StatRZPhysHandlerReset,            "/PGM/RZ/PhysHandlerReset",               "The number of times PGMHandlerPhysicalReset is called.");
    PGM_REG_COUNTER(&pStats->StatR3PhysHandlerReset,            "/PGM/R3/PhysHandlerReset",               "The number of times PGMHandlerPhysicalReset is called.");
    PGM_REG_COUNTER(&pStats->StatRZPhysHandlerLookupHits,       "/PGM/RZ/PhysHandlerLookupHits",          "The number of cache hits when looking up physical handlers.");
    PGM_REG_COUNTER(&pStats->StatR3PhysHandlerLookupHits,       "/PGM/R3/PhysHandlerLookupHits",          "The number of cache hits when looking up physical handlers.");
    PGM_REG_COUNTER(&pStats->StatRZPhysHandlerLookupMisses,     "/PGM/RZ/PhysHandlerLookupMisses",        "The number of cache misses when looking up physical handlers.");
    PGM_REG_COUNTER(&pStats->StatR3PhysHandlerLookupMisses,     "/PGM/R3/PhysHandlerLookupMisses",        "The number of cache misses when looking up physical handlers.");
    PGM_REG_PROFILE(&pStats->StatRZVirtHandlerSearchByPhys,     "/PGM/RZ/VirtHandlerSearchByPhys",        "Profiling of pgmHandlerVirtualFindByPhysAddr.");
    PGM_REG_PROFILE(&pStats->StatR3VirtHandlerSearchByPhys,     "/PGM/R3/VirtHandlerSearchByPhys",        "Profiling of pgmHandlerVirtualFindByPhysAddr.");

    PGM_REG_COUNTER(&pStats->StatRZPageReplaceShared,           "/PGM/RZ/Page/ReplacedShared",            "Times a shared page was replaced.");
    PGM_REG_COUNTER(&pStats->StatRZPageReplaceZero,             "/PGM/RZ/Page/ReplacedZero",              "Times the zero page was replaced.");
/// @todo PGM_REG_COUNTER(&pStats->StatRZPageHandyAllocs,       "/PGM/RZ/Page/HandyAllocs",               "Number of times we've allocated more handy pages.");
    PGM_REG_COUNTER(&pStats->StatR3PageReplaceShared,           "/PGM/R3/Page/ReplacedShared",            "Times a shared page was replaced.");
    PGM_REG_COUNTER(&pStats->StatR3PageReplaceZero,             "/PGM/R3/Page/ReplacedZero",              "Times the zero page was replaced.");
/// @todo PGM_REG_COUNTER(&pStats->StatR3PageHandyAllocs,       "/PGM/R3/Page/HandyAllocs",               "Number of times we've allocated more handy pages.");

    PGM_REG_COUNTER(&pStats->StatRZPhysRead,                    "/PGM/RZ/Phys/Read",                      "The number of times PGMPhysRead was called.");
    PGM_REG_COUNTER_BYTES(&pStats->StatRZPhysReadBytes,         "/PGM/RZ/Phys/Read/Bytes",                "The number of bytes read by PGMPhysRead.");
    PGM_REG_COUNTER(&pStats->StatRZPhysWrite,                   "/PGM/RZ/Phys/Write",                     "The number of times PGMPhysWrite was called.");
    PGM_REG_COUNTER_BYTES(&pStats->StatRZPhysWriteBytes,        "/PGM/RZ/Phys/Write/Bytes",               "The number of bytes written by PGMPhysWrite.");
    PGM_REG_COUNTER(&pStats->StatRZPhysSimpleRead,              "/PGM/RZ/Phys/Simple/Read",               "The number of times PGMPhysSimpleReadGCPtr was called.");
    PGM_REG_COUNTER_BYTES(&pStats->StatRZPhysSimpleReadBytes,   "/PGM/RZ/Phys/Simple/Read/Bytes",         "The number of bytes read by PGMPhysSimpleReadGCPtr.");
    PGM_REG_COUNTER(&pStats->StatRZPhysSimpleWrite,             "/PGM/RZ/Phys/Simple/Write",              "The number of times PGMPhysSimpleWriteGCPtr was called.");
    PGM_REG_COUNTER_BYTES(&pStats->StatRZPhysSimpleWriteBytes,  "/PGM/RZ/Phys/Simple/Write/Bytes",        "The number of bytes written by PGMPhysSimpleWriteGCPtr.");

    /* GC only: */
    PGM_REG_COUNTER(&pStats->StatRCInvlPgConflict,              "/PGM/RC/InvlPgConflict",                 "Number of times PGMInvalidatePage() detected a mapping conflict.");
    PGM_REG_COUNTER(&pStats->StatRCInvlPgSyncMonCR3,            "/PGM/RC/InvlPgSyncMonitorCR3",           "Number of times PGMInvalidatePage() ran into PGM_SYNC_MONITOR_CR3.");

    PGM_REG_COUNTER(&pStats->StatRCPhysRead,                    "/PGM/RC/Phys/Read",                      "The number of times PGMPhysRead was called.");
    PGM_REG_COUNTER_BYTES(&pStats->StatRCPhysReadBytes,         "/PGM/RC/Phys/Read/Bytes",                "The number of bytes read by PGMPhysRead.");
    PGM_REG_COUNTER(&pStats->StatRCPhysWrite,                   "/PGM/RC/Phys/Write",                     "The number of times PGMPhysWrite was called.");
    PGM_REG_COUNTER_BYTES(&pStats->StatRCPhysWriteBytes,        "/PGM/RC/Phys/Write/Bytes",               "The number of bytes written by PGMPhysWrite.");
    PGM_REG_COUNTER(&pStats->StatRCPhysSimpleRead,              "/PGM/RC/Phys/Simple/Read",               "The number of times PGMPhysSimpleReadGCPtr was called.");
    PGM_REG_COUNTER_BYTES(&pStats->StatRCPhysSimpleReadBytes,   "/PGM/RC/Phys/Simple/Read/Bytes",         "The number of bytes read by PGMPhysSimpleReadGCPtr.");
    PGM_REG_COUNTER(&pStats->StatRCPhysSimpleWrite,             "/PGM/RC/Phys/Simple/Write",              "The number of times PGMPhysSimpleWriteGCPtr was called.");
    PGM_REG_COUNTER_BYTES(&pStats->StatRCPhysSimpleWriteBytes,  "/PGM/RC/Phys/Simple/Write/Bytes",        "The number of bytes written by PGMPhysSimpleWriteGCPtr.");

    PGM_REG_COUNTER(&pStats->StatTrackVirgin,                   "/PGM/Track/Virgin",                      "The number of first time shadowings.");
    PGM_REG_COUNTER(&pStats->StatTrackAliased,                  "/PGM/Track/Aliased",                     "The number of times switching to cRef2, i.e. the page is being shadowed by two PTs.");
    PGM_REG_COUNTER(&pStats->StatTrackAliasedMany,              "/PGM/Track/AliasedMany",                 "The number of times we're tracking using cRef2.");
    PGM_REG_COUNTER(&pStats->StatTrackAliasedLots,              "/PGM/Track/AliasedLots",                 "The number of times we're hitting pages which have overflowed cRef2.");
    PGM_REG_COUNTER(&pStats->StatTrackOverflows,                "/PGM/Track/Overflows",                   "The number of times the extent list grows too long.");
    PGM_REG_COUNTER(&pStats->StatTrackNoExtentsLeft,            "/PGM/Track/NoExtentLeft",                "The number of times the extent list was exhausted.");
    PGM_REG_PROFILE(&pStats->StatTrackDeref,                    "/PGM/Track/Deref",                       "Profiling of SyncPageWorkerTrackDeref (expensive).");

# undef PGM_REG_COUNTER
# undef PGM_REG_COUNTER_BYTES
# undef PGM_REG_PROFILE
#endif

    /*
     * Note! The layout below matches the member layout exactly!
     */

    /*
     * Common - stats
     */
    for (VMCPUID idCpu = 0; idCpu < pVM->cCpus; idCpu++)
    {
        PPGMCPU pPgmCpu = &pVM->aCpus[idCpu].pgm.s;

#define PGM_REG_COUNTER(a, b, c) \
        rc = STAMR3RegisterF(pVM, a, STAMTYPE_COUNTER, STAMVISIBILITY_ALWAYS, STAMUNIT_OCCURENCES, c, b, idCpu); \
        AssertRC(rc);
#define PGM_REG_PROFILE(a, b, c) \
        rc = STAMR3RegisterF(pVM, a, STAMTYPE_PROFILE, STAMVISIBILITY_ALWAYS, STAMUNIT_TICKS_PER_CALL, c, b, idCpu); \
        AssertRC(rc);
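
        /* These per-CPU variants pass idCpu as an extra STAMR3RegisterF format
           argument, so the "%u" in paths like "/PGM/CPU%u/cGuestModeChanges"
           becomes the CPU id, e.g. "/PGM/CPU0/cGuestModeChanges". */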

        PGM_REG_COUNTER(&pPgmCpu->cGuestModeChanges, "/PGM/CPU%u/cGuestModeChanges", "Number of guest mode changes.");
        PGM_REG_COUNTER(&pPgmCpu->cA20Changes,       "/PGM/CPU%u/cA20Changes",       "Number of A20 gate changes.");

#ifdef VBOX_WITH_STATISTICS
        PGMCPUSTATS *pCpuStats = pVM->aCpus[idCpu].pgm.s.pStatsR3;

# if 0 /* rarely useful; leave for debugging. */
        for (unsigned j = 0; j < RT_ELEMENTS(pCpuStats->StatSyncPtPD); j++)
            STAMR3RegisterF(pVM, &pCpuStats->StatSyncPtPD[j], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
                            "The number of SyncPT per PD n.", "/PGM/CPU%u/PDSyncPT/%04X", idCpu, j);
        for (unsigned j = 0; j < RT_ELEMENTS(pCpuStats->StatSyncPagePD); j++)
            STAMR3RegisterF(pVM, &pCpuStats->StatSyncPagePD[j], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
                            "The number of SyncPage per PD n.", "/PGM/CPU%u/PDSyncPage/%04X", idCpu, j);
# endif
        /* R0 only: */
        PGM_REG_PROFILE(&pCpuStats->StatR0NpMiscfg,                     "/PGM/CPU%u/R0/NpMiscfg",                       "PGMR0Trap0eHandlerNPMisconfig() profiling.");
        PGM_REG_COUNTER(&pCpuStats->StatR0NpMiscfgSyncPage,             "/PGM/CPU%u/R0/NpMiscfgSyncPage",               "SyncPage calls from PGMR0Trap0eHandlerNPMisconfig().");

        /* RZ only: */
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0e,                       "/PGM/CPU%u/RZ/Trap0e",                         "Profiling of the PGMTrap0eHandler() body.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2Ballooned,         "/PGM/CPU%u/RZ/Trap0e/Time2/Ballooned",         "Profiling of the Trap0eHandler body when the cause is read access to a ballooned page.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2CSAM,              "/PGM/CPU%u/RZ/Trap0e/Time2/CSAM",              "Profiling of the Trap0eHandler body when the cause is CSAM.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2DirtyAndAccessed,  "/PGM/CPU%u/RZ/Trap0e/Time2/DirtyAndAccessedBits", "Profiling of the Trap0eHandler body when the cause is dirty and/or accessed bit emulation.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2GuestTrap,         "/PGM/CPU%u/RZ/Trap0e/Time2/GuestTrap",         "Profiling of the Trap0eHandler body when the cause is a guest trap.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2HndPhys,           "/PGM/CPU%u/RZ/Trap0e/Time2/HandlerPhysical",   "Profiling of the Trap0eHandler body when the cause is a physical handler.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2HndVirt,           "/PGM/CPU%u/RZ/Trap0e/Time2/HandlerVirtual",    "Profiling of the Trap0eHandler body when the cause is a virtual handler.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2HndUnhandled,      "/PGM/CPU%u/RZ/Trap0e/Time2/HandlerUnhandled",  "Profiling of the Trap0eHandler body when the cause is access outside the monitored areas of a monitored page.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2InvalidPhys,       "/PGM/CPU%u/RZ/Trap0e/Time2/InvalidPhys",       "Profiling of the Trap0eHandler body when the cause is access to an invalid physical guest address.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2MakeWritable,      "/PGM/CPU%u/RZ/Trap0e/Time2/MakeWritable",      "Profiling of the Trap0eHandler body when the cause is that a page needed to be made writeable.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2Mapping,           "/PGM/CPU%u/RZ/Trap0e/Time2/Mapping",           "Profiling of the Trap0eHandler body when the cause is related to the guest mappings.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2Misc,              "/PGM/CPU%u/RZ/Trap0e/Time2/Misc",              "Profiling of the Trap0eHandler body when the cause is not known.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2OutOfSync,         "/PGM/CPU%u/RZ/Trap0e/Time2/OutOfSync",         "Profiling of the Trap0eHandler body when the cause is an out-of-sync page.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2OutOfSyncHndPhys,  "/PGM/CPU%u/RZ/Trap0e/Time2/OutOfSyncHndPhys",  "Profiling of the Trap0eHandler body when the cause is an out-of-sync physical handler page.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2OutOfSyncHndVirt,  "/PGM/CPU%u/RZ/Trap0e/Time2/OutOfSyncHndVirt",  "Profiling of the Trap0eHandler body when the cause is an out-of-sync virtual handler page.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2OutOfSyncHndObs,   "/PGM/CPU%u/RZ/Trap0e/Time2/OutOfSyncObsHnd",   "Profiling of the Trap0eHandler body when the cause is an obsolete handler page.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2SyncPT,            "/PGM/CPU%u/RZ/Trap0e/Time2/SyncPT",            "Profiling of the Trap0eHandler body when the cause is lazy syncing of a PT.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2WPEmulation,       "/PGM/CPU%u/RZ/Trap0e/Time2/WPEmulation",       "Profiling of the Trap0eHandler body when the cause is CR0.WP emulation.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2Wp0RoUsHack,       "/PGM/CPU%u/RZ/Trap0e/Time2/WP0R0USHack",       "Profiling of the Trap0eHandler body when the cause is CR0.WP and the netware hack needs enabling.");
        PGM_REG_PROFILE(&pCpuStats->StatRZTrap0eTime2Wp0RoUsUnhack,     "/PGM/CPU%u/RZ/Trap0e/Time2/WP0R0USUnhack",     "Profiling of the Trap0eHandler body when the cause is CR0.WP and the netware hack needs disabling.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eConflicts,              "/PGM/CPU%u/RZ/Trap0e/Conflicts",               "The number of times #PF was caused by an undetected conflict.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersMapping,        "/PGM/CPU%u/RZ/Trap0e/Handlers/Mapping",        "Number of traps due to access handlers in mappings.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersOutOfSync,      "/PGM/CPU%u/RZ/Trap0e/Handlers/OutOfSync",      "Number of traps due to out-of-sync handled pages.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersPhysAll,        "/PGM/CPU%u/RZ/Trap0e/Handlers/PhysAll",        "Number of traps due to physical all-access handlers.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersPhysAllOpt,     "/PGM/CPU%u/RZ/Trap0e/Handlers/PhysAllOpt",     "Number of the physical all-access handler traps using the optimization.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersPhysWrite,      "/PGM/CPU%u/RZ/Trap0e/Handlers/PhysWrite",      "Number of traps due to physical write-access handlers.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersVirtual,        "/PGM/CPU%u/RZ/Trap0e/Handlers/Virtual",        "Number of traps due to virtual access handlers.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersVirtualByPhys,  "/PGM/CPU%u/RZ/Trap0e/Handlers/VirtualByPhys",  "Number of traps due to virtual access handlers by physical address.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersVirtualUnmarked,"/PGM/CPU%u/RZ/Trap0e/Handlers/VirtualUnmarked","Number of traps due to virtual access handlers by virtual address (without proper physical flags).");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersUnhandled,      "/PGM/CPU%u/RZ/Trap0e/Handlers/Unhandled",      "Number of traps due to access outside range of monitored page(s).");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eHandlersInvalid,        "/PGM/CPU%u/RZ/Trap0e/Handlers/Invalid",        "Number of traps due to access to invalid physical memory.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eUSNotPresentRead,       "/PGM/CPU%u/RZ/Trap0e/Err/User/NPRead",         "Number of user mode not present read page faults.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eUSNotPresentWrite,      "/PGM/CPU%u/RZ/Trap0e/Err/User/NPWrite",        "Number of user mode not present write page faults.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eUSWrite,                "/PGM/CPU%u/RZ/Trap0e/Err/User/Write",          "Number of user mode write page faults.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eUSReserved,             "/PGM/CPU%u/RZ/Trap0e/Err/User/Reserved",       "Number of user mode reserved bit page faults.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eUSNXE,                  "/PGM/CPU%u/RZ/Trap0e/Err/User/NXE",            "Number of user mode NXE page faults.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eUSRead,                 "/PGM/CPU%u/RZ/Trap0e/Err/User/Read",           "Number of user mode read page faults.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eSVNotPresentRead,       "/PGM/CPU%u/RZ/Trap0e/Err/Supervisor/NPRead",   "Number of supervisor mode not present read page faults.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eSVNotPresentWrite,      "/PGM/CPU%u/RZ/Trap0e/Err/Supervisor/NPWrite",  "Number of supervisor mode not present write page faults.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eSVWrite,                "/PGM/CPU%u/RZ/Trap0e/Err/Supervisor/Write",    "Number of supervisor mode write page faults.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eSVReserved,             "/PGM/CPU%u/RZ/Trap0e/Err/Supervisor/Reserved", "Number of supervisor mode reserved bit page faults.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eSNXE,                   "/PGM/CPU%u/RZ/Trap0e/Err/Supervisor/NXE",      "Number of supervisor mode NXE page faults.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eGuestPF,                "/PGM/CPU%u/RZ/Trap0e/GuestPF",                 "Number of real guest page faults.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eGuestPFMapping,         "/PGM/CPU%u/RZ/Trap0e/GuestPF/InMapping",       "Number of real guest page faults in a mapping.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eWPEmulInRZ,             "/PGM/CPU%u/RZ/Trap0e/WP/InRZ",                 "Number of guest page faults due to X86_CR0_WP emulation.");
        PGM_REG_COUNTER(&pCpuStats->StatRZTrap0eWPEmulToR3,             "/PGM/CPU%u/RZ/Trap0e/WP/ToR3",                 "Number of guest page faults due to X86_CR0_WP emulation (forward to R3 for emulation).");
#if 0 /* rarely useful; leave for debugging. */
        for (unsigned j = 0; j < RT_ELEMENTS(pCpuStats->StatRZTrap0ePD); j++)
            STAMR3RegisterF(pVM, &pCpuStats->StatRZTrap0ePD[j], STAMTYPE_COUNTER, STAMVISIBILITY_USED, STAMUNIT_OCCURENCES,
                            "The number of traps in page directory n.", "/PGM/CPU%u/RZ/Trap0e/PD/%04X", idCpu, j);
#endif
        PGM_REG_COUNTER(&pCpuStats->StatRZGuestCR3WriteHandled,         "/PGM/CPU%u/RZ/CR3WriteHandled",                "The number of times the Guest CR3 change was successfully handled.");
        PGM_REG_COUNTER(&pCpuStats->StatRZGuestCR3WriteUnhandled,       "/PGM/CPU%u/RZ/CR3WriteUnhandled",              "The number of times the Guest CR3 change was passed back to the recompiler.");
        PGM_REG_COUNTER(&pCpuStats->StatRZGuestCR3WriteConflict,        "/PGM/CPU%u/RZ/CR3WriteConflict",               "The number of times the Guest CR3 monitoring detected a conflict.");
        PGM_REG_COUNTER(&pCpuStats->StatRZGuestROMWriteHandled,         "/PGM/CPU%u/RZ/ROMWriteHandled",                "The number of times the Guest ROM change was successfully handled.");
        PGM_REG_COUNTER(&pCpuStats->StatRZGuestROMWriteUnhandled,       "/PGM/CPU%u/RZ/ROMWriteUnhandled",              "The number of times the Guest ROM change was passed back to the recompiler.");

        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapMigrateInvlPg,          "/PGM/CPU%u/RZ/DynMap/MigrateInvlPg",           "invlpg count in PGMR0DynMapMigrateAutoSet.");
        PGM_REG_PROFILE(&pCpuStats->StatRZDynMapGCPageInl,              "/PGM/CPU%u/RZ/DynMap/PageGCPageInl",           "Calls to pgmR0DynMapGCPageInlined.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapGCPageInlHits,          "/PGM/CPU%u/RZ/DynMap/PageGCPageInl/Hits",      "Hash table lookup hits.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapGCPageInlMisses,        "/PGM/CPU%u/RZ/DynMap/PageGCPageInl/Misses",    "Misses that fall back to the common code.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapGCPageInlRamHits,       "/PGM/CPU%u/RZ/DynMap/PageGCPageInl/RamHits",   "1st ram range hits.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapGCPageInlRamMisses,     "/PGM/CPU%u/RZ/DynMap/PageGCPageInl/RamMisses", "1st ram range misses, takes slow path.");
        PGM_REG_PROFILE(&pCpuStats->StatRZDynMapHCPageInl,              "/PGM/CPU%u/RZ/DynMap/PageHCPageInl",           "Calls to pgmRZDynMapHCPageInlined.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapHCPageInlHits,          "/PGM/CPU%u/RZ/DynMap/PageHCPageInl/Hits",      "Hash table lookup hits.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapHCPageInlMisses,        "/PGM/CPU%u/RZ/DynMap/PageHCPageInl/Misses",    "Misses that fall back to the common code.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPage,                   "/PGM/CPU%u/RZ/DynMap/Page",                    "Calls to pgmR0DynMapPage");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapSetOptimize,            "/PGM/CPU%u/RZ/DynMap/Page/SetOptimize",        "Calls to pgmRZDynMapOptimizeAutoSet.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapSetSearchFlushes,       "/PGM/CPU%u/RZ/DynMap/Page/SetSearchFlushes",   "Set search restoring to subset flushes.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapSetSearchHits,          "/PGM/CPU%u/RZ/DynMap/Page/SetSearchHits",      "Set search hits.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapSetSearchMisses,        "/PGM/CPU%u/RZ/DynMap/Page/SetSearchMisses",    "Set search misses.");
        PGM_REG_PROFILE(&pCpuStats->StatRZDynMapHCPage,                 "/PGM/CPU%u/RZ/DynMap/Page/HCPage",             "Calls to pgmRZDynMapHCPageCommon (ring-0).");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageHits0,              "/PGM/CPU%u/RZ/DynMap/Page/Hits0",              "Hits at iPage+0");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageHits1,              "/PGM/CPU%u/RZ/DynMap/Page/Hits1",              "Hits at iPage+1");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageHits2,              "/PGM/CPU%u/RZ/DynMap/Page/Hits2",              "Hits at iPage+2");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageInvlPg,             "/PGM/CPU%u/RZ/DynMap/Page/InvlPg",             "invlpg count in pgmR0DynMapPageSlow.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageSlow,               "/PGM/CPU%u/RZ/DynMap/Page/Slow",               "Calls to pgmR0DynMapPageSlow - subtract this from pgmR0DynMapPage to get 1st level hits.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageSlowLoopHits,       "/PGM/CPU%u/RZ/DynMap/Page/SlowLoopHits",       "Hits in the loop path.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageSlowLoopMisses,     "/PGM/CPU%u/RZ/DynMap/Page/SlowLoopMisses",     "Misses in the loop path. NonLoopMisses = Slow - SlowLoopHit - SlowLoopMisses");
        //PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPageSlowLostHits,     "/PGM/CPU%u/R0/DynMap/Page/SlowLostHits",       "Lost hits.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapSubsets,                "/PGM/CPU%u/RZ/DynMap/Subsets",                 "Times PGMRZDynMapPushAutoSubset was called.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDynMapPopFlushes,             "/PGM/CPU%u/RZ/DynMap/SubsetPopFlushes",        "Times PGMRZDynMapPopAutoSubset flushes the subset.");
        PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[0],       "/PGM/CPU%u/RZ/DynMap/SetFilledPct000..09",     "00-09% filled (RC: min(set-size, dynmap-size))");
        PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[1],       "/PGM/CPU%u/RZ/DynMap/SetFilledPct010..19",     "10-19% filled (RC: min(set-size, dynmap-size))");
        PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[2],       "/PGM/CPU%u/RZ/DynMap/SetFilledPct020..29",     "20-29% filled (RC: min(set-size, dynmap-size))");
        PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[3],       "/PGM/CPU%u/RZ/DynMap/SetFilledPct030..39",     "30-39% filled (RC: min(set-size, dynmap-size))");
        PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[4],       "/PGM/CPU%u/RZ/DynMap/SetFilledPct040..49",     "40-49% filled (RC: min(set-size, dynmap-size))");
        PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[5],       "/PGM/CPU%u/RZ/DynMap/SetFilledPct050..59",     "50-59% filled (RC: min(set-size, dynmap-size))");
        PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[6],       "/PGM/CPU%u/RZ/DynMap/SetFilledPct060..69",     "60-69% filled (RC: min(set-size, dynmap-size))");
        PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[7],       "/PGM/CPU%u/RZ/DynMap/SetFilledPct070..79",     "70-79% filled (RC: min(set-size, dynmap-size))");
        PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[8],       "/PGM/CPU%u/RZ/DynMap/SetFilledPct080..89",     "80-89% filled (RC: min(set-size, dynmap-size))");
        PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[9],       "/PGM/CPU%u/RZ/DynMap/SetFilledPct090..99",     "90-99% filled (RC: min(set-size, dynmap-size))");
        PGM_REG_COUNTER(&pCpuStats->aStatRZDynMapSetFilledPct[10],      "/PGM/CPU%u/RZ/DynMap/SetFilledPct100",         "100% filled (RC: min(set-size, dynmap-size))");

        /* HC only: */

        /* RZ & R3: */
        PGM_REG_PROFILE(&pCpuStats->StatRZSyncCR3,                      "/PGM/CPU%u/RZ/SyncCR3",                        "Profiling of the PGMSyncCR3() body.");
        PGM_REG_PROFILE(&pCpuStats->StatRZSyncCR3Handlers,              "/PGM/CPU%u/RZ/SyncCR3/Handlers",               "Profiling of the PGMSyncCR3() update handler section.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3Global,                "/PGM/CPU%u/RZ/SyncCR3/Global",                 "The number of global CR3 syncs.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3NotGlobal,             "/PGM/CPU%u/RZ/SyncCR3/NotGlobal",              "The number of non-global CR3 syncs.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3DstCacheHit,           "/PGM/CPU%u/RZ/SyncCR3/DstCacheHit",            "The number of times we got some kind of a cache hit.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3DstFreed,              "/PGM/CPU%u/RZ/SyncCR3/DstFreed",               "The number of times we've had to free a shadow entry.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3DstFreedSrcNP,         "/PGM/CPU%u/RZ/SyncCR3/DstFreedSrcNP",          "The number of times we've had to free a shadow entry for which the source entry was not present.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3DstNotPresent,         "/PGM/CPU%u/RZ/SyncCR3/DstNotPresent",          "The number of times we've encountered a not present shadow entry for a present guest entry.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3DstSkippedGlobalPD,    "/PGM/CPU%u/RZ/SyncCR3/DstSkippedGlobalPD",     "The number of times a global page directory wasn't flushed.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncCR3DstSkippedGlobalPT,    "/PGM/CPU%u/RZ/SyncCR3/DstSkippedGlobalPT",     "The number of times a page table with only global entries wasn't flushed.");
        PGM_REG_PROFILE(&pCpuStats->StatRZSyncPT,                       "/PGM/CPU%u/RZ/SyncPT",                         "Profiling of the pfnSyncPT() body.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncPTFailed,                 "/PGM/CPU%u/RZ/SyncPT/Failed",                  "The number of times pfnSyncPT() failed.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncPT4K,                     "/PGM/CPU%u/RZ/SyncPT/4K",                      "Number of 4KB PT syncs.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncPT4M,                     "/PGM/CPU%u/RZ/SyncPT/4M",                      "Number of 4MB PT syncs.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncPagePDNAs,                "/PGM/CPU%u/RZ/SyncPagePDNAs",                  "The number of times we've marked a PD not present from SyncPage to virtualize the accessed bit.");
        PGM_REG_COUNTER(&pCpuStats->StatRZSyncPagePDOutOfSync,          "/PGM/CPU%u/RZ/SyncPagePDOutOfSync",            "The number of times we've encountered an out-of-sync PD in SyncPage.");
        PGM_REG_COUNTER(&pCpuStats->StatRZAccessedPage,                 "/PGM/CPU%u/RZ/AccessedPage",                   "The number of pages marked not present for accessed bit emulation.");
        PGM_REG_PROFILE(&pCpuStats->StatRZDirtyBitTracking,             "/PGM/CPU%u/RZ/DirtyPage",                      "Profiling the dirty bit tracking in CheckPageFault().");
        PGM_REG_COUNTER(&pCpuStats->StatRZDirtyPage,                    "/PGM/CPU%u/RZ/DirtyPage/Mark",                 "The number of pages marked read-only for dirty bit tracking.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDirtyPageBig,                 "/PGM/CPU%u/RZ/DirtyPage/MarkBig",              "The number of 4MB pages marked read-only for dirty bit tracking.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDirtyPageSkipped,             "/PGM/CPU%u/RZ/DirtyPage/Skipped",              "The number of pages already dirty or read-only.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDirtyPageTrap,                "/PGM/CPU%u/RZ/DirtyPage/Trap",                 "The number of traps generated for dirty bit tracking.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDirtyPageStale,               "/PGM/CPU%u/RZ/DirtyPage/Stale",                "The number of traps generated for dirty bit tracking (stale tlb entries).");
        PGM_REG_COUNTER(&pCpuStats->StatRZDirtiedPage,                  "/PGM/CPU%u/RZ/DirtyPage/SetDirty",             "The number of pages marked dirty because of write accesses.");
        PGM_REG_COUNTER(&pCpuStats->StatRZDirtyTrackRealPF,             "/PGM/CPU%u/RZ/DirtyPage/RealPF",               "The number of real pages faults during dirty bit tracking.");
        PGM_REG_COUNTER(&pCpuStats->StatRZPageAlreadyDirty,             "/PGM/CPU%u/RZ/DirtyPage/AlreadySet",           "The number of pages already marked dirty because of write accesses.");
        PGM_REG_PROFILE(&pCpuStats->StatRZInvalidatePage,               "/PGM/CPU%u/RZ/InvalidatePage",                 "PGMInvalidatePage() profiling.");
        PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePage4KBPages,       "/PGM/CPU%u/RZ/InvalidatePage/4KBPages",        "The number of times PGMInvalidatePage() was called for a 4KB page.");
        PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePage4MBPages,       "/PGM/CPU%u/RZ/InvalidatePage/4MBPages",        "The number of times PGMInvalidatePage() was called for a 4MB page.");
        PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePage4MBPagesSkip,   "/PGM/CPU%u/RZ/InvalidatePage/4MBPagesSkip",    "The number of times PGMInvalidatePage() skipped a 4MB page.");
        PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePagePDMappings,     "/PGM/CPU%u/RZ/InvalidatePage/PDMappings",      "The number of times PGMInvalidatePage() was called for a page directory containing mappings (no conflict).");
        PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePagePDNAs,          "/PGM/CPU%u/RZ/InvalidatePage/PDNAs",           "The number of times PGMInvalidatePage() was called for a not accessed page directory.");
        PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePagePDNPs,          "/PGM/CPU%u/RZ/InvalidatePage/PDNPs",           "The number of times PGMInvalidatePage() was called for a not present page directory.");
        PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePagePDOutOfSync,    "/PGM/CPU%u/RZ/InvalidatePage/PDOutOfSync",     "The number of times PGMInvalidatePage() was called for an out of sync page directory.");
        PGM_REG_COUNTER(&pCpuStats->StatRZInvalidatePageSkipped,        "/PGM/CPU%u/RZ/InvalidatePage/Skipped",         "The number of times PGMInvalidatePage() was skipped due to not present shw or pending SyncCR3.");
        PGM_REG_COUNTER(&pCpuStats->StatRZPageOutOfSyncSupervisor,      "/PGM/CPU%u/RZ/OutOfSync/SuperVisor",           "Number of traps due to pages out of sync (P) and times VerifyAccessSyncPage calls SyncPage.");
        PGM_REG_COUNTER(&pCpuStats->StatRZPageOutOfSyncUser,            "/PGM/CPU%u/RZ/OutOfSync/User",                 "Number of traps due to pages out of sync (P) and times VerifyAccessSyncPage calls SyncPage.");
        PGM_REG_COUNTER(&pCpuStats->StatRZPageOutOfSyncSupervisorWrite, "/PGM/CPU%u/RZ/OutOfSync/SuperVisorWrite",      "Number of traps due to pages out of sync (RW) and times VerifyAccessSyncPage calls SyncPage.");
        PGM_REG_COUNTER(&pCpuStats->StatRZPageOutOfSyncUserWrite,       "/PGM/CPU%u/RZ/OutOfSync/UserWrite",            "Number of traps due to pages out of sync (RW) and times VerifyAccessSyncPage calls SyncPage.");
        PGM_REG_COUNTER(&pCpuStats->StatRZPageOutOfSyncBallloon,        "/PGM/CPU%u/RZ/OutOfSync/Balloon",              "The number of times a ballooned page was accessed (read).");
        PGM_REG_PROFILE(&pCpuStats->StatRZPrefetch,                     "/PGM/CPU%u/RZ/Prefetch",                       "PGMPrefetchPage profiling.");
        PGM_REG_PROFILE(&pCpuStats->StatRZFlushTLB,                     "/PGM/CPU%u/RZ/FlushTLB",                       "Profiling of the PGMFlushTLB() body.");
        PGM_REG_COUNTER(&pCpuStats->StatRZFlushTLBNewCR3,               "/PGM/CPU%u/RZ/FlushTLB/NewCR3",                "The number of times PGMFlushTLB was called with a new CR3, non-global. (switch)");
        PGM_REG_COUNTER(&pCpuStats->StatRZFlushTLBNewCR3Global,         "/PGM/CPU%u/RZ/FlushTLB/NewCR3Global",          "The number of times PGMFlushTLB was called with a new CR3, global. (switch)");
        PGM_REG_COUNTER(&pCpuStats->StatRZFlushTLBSameCR3,              "/PGM/CPU%u/RZ/FlushTLB/SameCR3",               "The number of times PGMFlushTLB was called with the same CR3, non-global. (flush)");
        PGM_REG_COUNTER(&pCpuStats->StatRZFlushTLBSameCR3Global,        "/PGM/CPU%u/RZ/FlushTLB/SameCR3Global",         "The number of times PGMFlushTLB was called with the same CR3, global. (flush)");
        PGM_REG_PROFILE(&pCpuStats->StatRZGstModifyPage,                "/PGM/CPU%u/RZ/GstModifyPage",                  "Profiling of the PGMGstModifyPage() body.");

        PGM_REG_PROFILE(&pCpuStats->StatR3SyncCR3,                      "/PGM/CPU%u/R3/SyncCR3",                        "Profiling of the PGMSyncCR3() body.");
        PGM_REG_PROFILE(&pCpuStats->StatR3SyncCR3Handlers,              "/PGM/CPU%u/R3/SyncCR3/Handlers",               "Profiling of the PGMSyncCR3() update handler section.");
        PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3Global,                "/PGM/CPU%u/R3/SyncCR3/Global",                 "The number of global CR3 syncs.");
        PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3NotGlobal,             "/PGM/CPU%u/R3/SyncCR3/NotGlobal",              "The number of non-global CR3 syncs.");
2070 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3DstCacheHit, "/PGM/CPU%u/R3/SyncCR3/DstCacheHit", "The number of times we got some kind of cache hit.");
2071 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3DstFreed, "/PGM/CPU%u/R3/SyncCR3/DstFreed", "The number of times we've had to free a shadow entry.");
2072 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3DstFreedSrcNP, "/PGM/CPU%u/R3/SyncCR3/DstFreedSrcNP", "The number of times we've had to free a shadow entry for which the source entry was not present.");
2073 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3DstNotPresent, "/PGM/CPU%u/R3/SyncCR3/DstNotPresent", "The number of times we've encountered a not present shadow entry for a present guest entry.");
2074 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3DstSkippedGlobalPD, "/PGM/CPU%u/R3/SyncCR3/DstSkippedGlobalPD", "The number of times a global page directory wasn't flushed.");
2075 PGM_REG_COUNTER(&pCpuStats->StatR3SyncCR3DstSkippedGlobalPT, "/PGM/CPU%u/R3/SyncCR3/DstSkippedGlobalPT", "The number of times a page table with only global entries wasn't flushed.");
2076 PGM_REG_PROFILE(&pCpuStats->StatR3SyncPT, "/PGM/CPU%u/R3/SyncPT", "Profiling of the pfnSyncPT() body.");
2077 PGM_REG_COUNTER(&pCpuStats->StatR3SyncPTFailed, "/PGM/CPU%u/R3/SyncPT/Failed", "The number of times pfnSyncPT() failed.");
2078 PGM_REG_COUNTER(&pCpuStats->StatR3SyncPT4K, "/PGM/CPU%u/R3/SyncPT/4K", "Nr of 4K PT syncs");
2079 PGM_REG_COUNTER(&pCpuStats->StatR3SyncPT4M, "/PGM/CPU%u/R3/SyncPT/4M", "Nr of 4M PT syncs");
2080 PGM_REG_COUNTER(&pCpuStats->StatR3SyncPagePDNAs, "/PGM/CPU%u/R3/SyncPagePDNAs", "The number of times we've marked a PD not present from SyncPage to virtualize the accessed bit.");
2081 PGM_REG_COUNTER(&pCpuStats->StatR3SyncPagePDOutOfSync, "/PGM/CPU%u/R3/SyncPagePDOutOfSync", "The number of times we've encountered an out-of-sync PD in SyncPage.");
2082 PGM_REG_COUNTER(&pCpuStats->StatR3AccessedPage, "/PGM/CPU%u/R3/AccessedPage", "The number of pages marked not present for accessed bit emulation.");
2083 PGM_REG_PROFILE(&pCpuStats->StatR3DirtyBitTracking, "/PGM/CPU%u/R3/DirtyPage", "Profiling the dirty bit tracking in CheckPageFault().");
2084 PGM_REG_COUNTER(&pCpuStats->StatR3DirtyPage, "/PGM/CPU%u/R3/DirtyPage/Mark", "The number of pages marked read-only for dirty bit tracking.");
2085 PGM_REG_COUNTER(&pCpuStats->StatR3DirtyPageBig, "/PGM/CPU%u/R3/DirtyPage/MarkBig", "The number of 4MB pages marked read-only for dirty bit tracking.");
2086 PGM_REG_COUNTER(&pCpuStats->StatR3DirtyPageSkipped, "/PGM/CPU%u/R3/DirtyPage/Skipped", "The number of pages already dirty or read-only.");
2087 PGM_REG_COUNTER(&pCpuStats->StatR3DirtyPageTrap, "/PGM/CPU%u/R3/DirtyPage/Trap", "The number of traps generated for dirty bit tracking.");
2088 PGM_REG_COUNTER(&pCpuStats->StatR3DirtiedPage, "/PGM/CPU%u/R3/DirtyPage/SetDirty", "The number of pages marked dirty because of write accesses.");
2089 PGM_REG_COUNTER(&pCpuStats->StatR3DirtyTrackRealPF, "/PGM/CPU%u/R3/DirtyPage/RealPF", "The number of real page faults during dirty bit tracking.");
2090 PGM_REG_COUNTER(&pCpuStats->StatR3PageAlreadyDirty, "/PGM/CPU%u/R3/DirtyPage/AlreadySet", "The number of pages already marked dirty because of write accesses.");
2091 PGM_REG_PROFILE(&pCpuStats->StatR3InvalidatePage, "/PGM/CPU%u/R3/InvalidatePage", "PGMInvalidatePage() profiling.");
2092 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePage4KBPages, "/PGM/CPU%u/R3/InvalidatePage/4KBPages", "The number of times PGMInvalidatePage() was called for a 4KB page.");
2093 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePage4MBPages, "/PGM/CPU%u/R3/InvalidatePage/4MBPages", "The number of times PGMInvalidatePage() was called for a 4MB page.");
2094 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePage4MBPagesSkip, "/PGM/CPU%u/R3/InvalidatePage/4MBPagesSkip","The number of times PGMInvalidatePage() skipped a 4MB page.");
2095 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePagePDMappings, "/PGM/CPU%u/R3/InvalidatePage/PDMappings", "The number of times PGMInvalidatePage() was called for a page directory containing mappings (no conflict).");
2096 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePagePDNAs, "/PGM/CPU%u/R3/InvalidatePage/PDNAs", "The number of times PGMInvalidatePage() was called for a not accessed page directory.");
2097 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePagePDNPs, "/PGM/CPU%u/R3/InvalidatePage/PDNPs", "The number of times PGMInvalidatePage() was called for a not present page directory.");
2098 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePagePDOutOfSync, "/PGM/CPU%u/R3/InvalidatePage/PDOutOfSync", "The number of times PGMInvalidatePage() was called for an out of sync page directory.");
2099 PGM_REG_COUNTER(&pCpuStats->StatR3InvalidatePageSkipped, "/PGM/CPU%u/R3/InvalidatePage/Skipped", "The number of times PGMInvalidatePage() was skipped due to a not-present shadow page or a pending SyncCR3.");
2100 PGM_REG_COUNTER(&pCpuStats->StatR3PageOutOfSyncSupervisor, "/PGM/CPU%u/R3/OutOfSync/SuperVisor", "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
2101 PGM_REG_COUNTER(&pCpuStats->StatR3PageOutOfSyncUser, "/PGM/CPU%u/R3/OutOfSync/User", "Number of traps due to pages out of sync and times VerifyAccessSyncPage calls SyncPage.");
2102 PGM_REG_COUNTER(&pCpuStats->StatR3PageOutOfSyncBallloon, "/PGM/CPU%u/R3/OutOfSync/Balloon", "The number of times a ballooned page was accessed (read).");
2103 PGM_REG_PROFILE(&pCpuStats->StatR3Prefetch, "/PGM/CPU%u/R3/Prefetch", "PGMPrefetchPage profiling.");
2104 PGM_REG_PROFILE(&pCpuStats->StatR3FlushTLB, "/PGM/CPU%u/R3/FlushTLB", "Profiling of the PGMFlushTLB() body.");
2105 PGM_REG_COUNTER(&pCpuStats->StatR3FlushTLBNewCR3, "/PGM/CPU%u/R3/FlushTLB/NewCR3", "The number of times PGMFlushTLB was called with a new CR3, non-global. (switch)");
2106 PGM_REG_COUNTER(&pCpuStats->StatR3FlushTLBNewCR3Global, "/PGM/CPU%u/R3/FlushTLB/NewCR3Global", "The number of times PGMFlushTLB was called with a new CR3, global. (switch)");
2107 PGM_REG_COUNTER(&pCpuStats->StatR3FlushTLBSameCR3, "/PGM/CPU%u/R3/FlushTLB/SameCR3", "The number of times PGMFlushTLB was called with the same CR3, non-global. (flush)");
2108 PGM_REG_COUNTER(&pCpuStats->StatR3FlushTLBSameCR3Global, "/PGM/CPU%u/R3/FlushTLB/SameCR3Global", "The number of times PGMFlushTLB was called with the same CR3, global. (flush)");
2109 PGM_REG_PROFILE(&pCpuStats->StatR3GstModifyPage, "/PGM/CPU%u/R3/GstModifyPage", "Profiling of the PGMGstModifyPage() body.");
2110#endif /* VBOX_WITH_STATISTICS */
2111
2112#undef PGM_REG_PROFILE
2113#undef PGM_REG_COUNTER
2114
2115 }
2116
2117 return VINF_SUCCESS;
2118}
2119
2120
2121/**
2122 * Init the PGM bits that rely on VMMR0 and MM to be fully initialized.
2123 *
2124 * The dynamic mapping area will also be allocated and initialized at this
2125 * time. We could allocate it during PGMR3Init of course, but the mapping
2126 * wouldn't be in place at that time, preventing us from setting up the
2127 * page table entries with the dummy page.
2128 *
2129 * @returns VBox status code.
2130 * @param pVM The cross context VM structure.
2131 */
2132VMMR3DECL(int) PGMR3InitDynMap(PVM pVM)
2133{
2134 RTGCPTR GCPtr;
2135 int rc;
2136
2137 /*
2138 * Reserve space for the dynamic mappings.
2139 */
2140 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping", &GCPtr);
2141 if (RT_SUCCESS(rc))
2142 pVM->pgm.s.pbDynPageMapBaseGC = GCPtr;
2143
2144 if ( RT_SUCCESS(rc)
2145 && (pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_PAE_SHIFT) != ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_PAE_SHIFT))
2146 {
2147 rc = MMR3HyperReserve(pVM, MM_HYPER_DYNAMIC_SIZE, "Dynamic mapping not crossing", &GCPtr);
2148 if (RT_SUCCESS(rc))
2149 pVM->pgm.s.pbDynPageMapBaseGC = GCPtr;
2150 }
2151 if (RT_SUCCESS(rc))
2152 {
2153 AssertRelease((pVM->pgm.s.pbDynPageMapBaseGC >> X86_PD_PAE_SHIFT) == ((pVM->pgm.s.pbDynPageMapBaseGC + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_PAE_SHIFT));
2154 MMR3HyperReserve(pVM, PAGE_SIZE, "fence", NULL);
2155 }
2156 return rc;
2157}
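
/* A minimal sketch (added for illustration, assuming X86_PD_PAE_SHIFT is 21,
 * i.e. one PAE page table covers 2 MB) of the condition the retry above
 * guards against: the dynamic mapping range must not straddle a 2 MB
 * boundary, so both ends must shift down to the same value:
 *
 *     bool fCrosses = (GCPtr >> X86_PD_PAE_SHIFT)
 *                  != ((GCPtr + MM_HYPER_DYNAMIC_SIZE - 1) >> X86_PD_PAE_SHIFT);
 *
 * If the first reservation crosses, a second one is attempted, and the
 * AssertRelease in the success path guarantees the invariant before the
 * fence page is appended. */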
2158
2159
2160/**
2161 * Ring-3 init finalizing.
2162 *
2163 * @returns VBox status code.
2164 * @param pVM The cross context VM structure.
2165 */
2166VMMR3DECL(int) PGMR3InitFinalize(PVM pVM)
2167{
2168 int rc;
2169
2170 /*
2171 * Finish the dynamic mapping area set up by PGMR3InitDynMap:
2172 * initialize the dynamic mapping pages with dummy pages to simplify the cache.
2173 */
2174 /* Get the pointer to the page table entries. */
2175 PPGMMAPPING pMapping = pgmGetMapping(pVM, pVM->pgm.s.pbDynPageMapBaseGC);
2176 AssertRelease(pMapping);
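    /* Locate the PTEs backing the start of the dynamic area: off is the byte
     * offset into the hypervisor mapping, iPT the index of the page table
     * covering it, and iPG the page entry within that table. Together these
     * give the RC addresses of the 32-bit and PAE PTE arrays set up below. */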
2177 const uintptr_t off = pVM->pgm.s.pbDynPageMapBaseGC - pMapping->GCPtr;
2178 const unsigned iPT = off >> X86_PD_SHIFT;
2179 const unsigned iPG = (off >> X86_PT_SHIFT) & X86_PT_MASK;
2180 pVM->pgm.s.paDynPageMap32BitPTEsGC = pMapping->aPTs[iPT].pPTRC + iPG * sizeof(pMapping->aPTs[0].pPTR3->a[0]);
2181 pVM->pgm.s.paDynPageMapPaePTEsGC = pMapping->aPTs[iPT].paPaePTsRC + iPG * sizeof(pMapping->aPTs[0].paPaePTsR3->a[0]);
2182
2183 /* init cache area */
2184 RTHCPHYS HCPhysDummy = MMR3PageDummyHCPhys(pVM);
2185 for (uint32_t offDynMap = 0; offDynMap < MM_HYPER_DYNAMIC_SIZE; offDynMap += PAGE_SIZE)
2186 {
2187 rc = PGMMap(pVM, pVM->pgm.s.pbDynPageMapBaseGC + offDynMap, HCPhysDummy, PAGE_SIZE, 0);
2188 AssertRCReturn(rc, rc);
2189 }
2190
2191 /*
2192 * Determine the max physical address width (MAXPHYADDR) and apply it to
2193 * all the mask members and stuff.
2194 */
2195 uint32_t cMaxPhysAddrWidth;
2196 uint32_t uMaxExtLeaf = ASMCpuId_EAX(0x80000000);
2197 if ( uMaxExtLeaf >= 0x80000008
2198 && uMaxExtLeaf <= 0x80000fff)
2199 {
2200 cMaxPhysAddrWidth = ASMCpuId_EAX(0x80000008) & 0xff;
2201 LogRel(("PGM: The CPU physical address width is %u bits\n", cMaxPhysAddrWidth));
2202 cMaxPhysAddrWidth = RT_MIN(52, cMaxPhysAddrWidth);
2203 pVM->pgm.s.fLessThan52PhysicalAddressBits = cMaxPhysAddrWidth < 52;
2204 for (uint32_t iBit = cMaxPhysAddrWidth; iBit < 52; iBit++)
2205 pVM->pgm.s.HCPhysInvMmioPg |= RT_BIT_64(iBit);
2206 }
2207 else
2208 {
2209 LogRel(("PGM: ASSUMING CPU physical address width of 48 bits (uMaxExtLeaf=%#x)\n", uMaxExtLeaf));
2210 cMaxPhysAddrWidth = 48;
2211 pVM->pgm.s.fLessThan52PhysicalAddressBits = true;
2212 pVM->pgm.s.HCPhysInvMmioPg |= UINT64_C(0x000f000000000000); /* bits 48..51, matching the 48-bit assumption above */
2213 }
2214
2215 /** @todo query from CPUM. */
2216 pVM->pgm.s.GCPhysInvAddrMask = 0;
2217 for (uint32_t iBit = cMaxPhysAddrWidth; iBit < 64; iBit++)
2218 pVM->pgm.s.GCPhysInvAddrMask |= RT_BIT_64(iBit);
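    /* Worked example (illustrative): with cMaxPhysAddrWidth = 36 the loop sets
     * bits 36..63, giving GCPhysInvAddrMask = UINT64_C(0xfffffff000000000);
     * any guest physical address with one of those bits set is invalid. */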
2219
2220 /*
2221 * Initialize the invalid paging entry masks, assuming NX is disabled.
2222 */
2223 uint64_t fMbzPageFrameMask = pVM->pgm.s.GCPhysInvAddrMask & UINT64_C(0x000ffffffffff000);
2224 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2225 {
2226 PVMCPU pVCpu = &pVM->aCpus[iCpu];
2227
2228 /** @todo The manuals are not entirely clear whether the physical
2229 * address width is relevant. See table 5-9 in the Intel
2230 * manual vs. the PDE4M descriptions. Write testcase (NP). */
2231 pVCpu->pgm.s.fGst32BitMbzBigPdeMask = ((uint32_t)(fMbzPageFrameMask >> (32 - 13)) & X86_PDE4M_PG_HIGH_MASK)
2232 | X86_PDE4M_MBZ_MASK;
2233
2234 pVCpu->pgm.s.fGstPaeMbzPteMask = fMbzPageFrameMask | X86_PTE_PAE_MBZ_MASK_NO_NX;
2235 pVCpu->pgm.s.fGstPaeMbzPdeMask = fMbzPageFrameMask | X86_PDE_PAE_MBZ_MASK_NO_NX;
2236 pVCpu->pgm.s.fGstPaeMbzBigPdeMask = fMbzPageFrameMask | X86_PDE2M_PAE_MBZ_MASK_NO_NX;
2237 pVCpu->pgm.s.fGstPaeMbzPdpeMask = fMbzPageFrameMask | X86_PDPE_PAE_MBZ_MASK;
2238
2239 pVCpu->pgm.s.fGstAmd64MbzPteMask = fMbzPageFrameMask | X86_PTE_LM_MBZ_MASK_NO_NX;
2240 pVCpu->pgm.s.fGstAmd64MbzPdeMask = fMbzPageFrameMask | X86_PDE_LM_MBZ_MASK_NX;
2241 pVCpu->pgm.s.fGstAmd64MbzBigPdeMask = fMbzPageFrameMask | X86_PDE2M_LM_MBZ_MASK_NX;
2242 pVCpu->pgm.s.fGstAmd64MbzPdpeMask = fMbzPageFrameMask | X86_PDPE_LM_MBZ_MASK_NO_NX;
2243 pVCpu->pgm.s.fGstAmd64MbzBigPdpeMask = fMbzPageFrameMask | X86_PDPE1G_LM_MBZ_MASK_NO_NX;
2244 pVCpu->pgm.s.fGstAmd64MbzPml4eMask = fMbzPageFrameMask | X86_PML4E_MBZ_MASK_NO_NX;
2245
2246 pVCpu->pgm.s.fGst64ShadowedPteMask = X86_PTE_P | X86_PTE_RW | X86_PTE_US | X86_PTE_G | X86_PTE_A | X86_PTE_D;
2247 pVCpu->pgm.s.fGst64ShadowedPdeMask = X86_PDE_P | X86_PDE_RW | X86_PDE_US | X86_PDE_A;
2248 pVCpu->pgm.s.fGst64ShadowedBigPdeMask = X86_PDE4M_P | X86_PDE4M_RW | X86_PDE4M_US | X86_PDE4M_A;
2249 pVCpu->pgm.s.fGst64ShadowedBigPde4PteMask =
2250 X86_PDE4M_P | X86_PDE4M_RW | X86_PDE4M_US | X86_PDE4M_G | X86_PDE4M_A | X86_PDE4M_D;
2251 pVCpu->pgm.s.fGstAmd64ShadowedPdpeMask = X86_PDPE_P | X86_PDPE_RW | X86_PDPE_US | X86_PDPE_A;
2252 pVCpu->pgm.s.fGstAmd64ShadowedPml4eMask = X86_PML4E_P | X86_PML4E_RW | X86_PML4E_US | X86_PML4E_A;
2253 }
2254
2255 /*
2256 * Note that AMD uses all 8 reserved bits for the address (so 40 bits in total);
2257 * Intel only goes up to 36 bits, so we stick to 36 as well.
2258 * Update: more recent Intel manuals specify 40 bits, just like AMD.
2259 */
2260 uint32_t u32Dummy, u32Features;
2261 CPUMGetGuestCpuId(VMMGetCpu(pVM), 1, 0, &u32Dummy, &u32Dummy, &u32Dummy, &u32Features);
2262 if (u32Features & X86_CPUID_FEATURE_EDX_PSE36)
2263 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(RT_MAX(36, cMaxPhysAddrWidth)) - 1;
2264 else
2265 pVM->pgm.s.GCPhys4MBPSEMask = RT_BIT_64(32) - 1;
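    /* Illustrative values: with PSE36 reported and cMaxPhysAddrWidth = 40 the
     * mask becomes RT_BIT_64(40) - 1 = UINT64_C(0x000000ffffffffff); without
     * PSE36, 4 MB pages cannot address beyond 4 GB (RT_BIT_64(32) - 1). */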
2266
2267 /*
2268 * Allocate memory if we're supposed to do that.
2269 */
2270 if (pVM->pgm.s.fRamPreAlloc)
2271 rc = pgmR3PhysRamPreAllocate(pVM);
2272
2273 LogRel(("PGM: PGMR3InitFinalize: 4 MB PSE mask %RGp\n", pVM->pgm.s.GCPhys4MBPSEMask));
2274 return rc;
2275}
2276
2277
2278/**
2279 * Init phase completed callback.
2280 *
2281 * @returns VBox status code.
2282 * @param pVM The cross context VM structure.
2283 * @param enmWhat What has been completed.
2284 * @thread EMT(0)
2285 */
2286VMMR3_INT_DECL(int) PGMR3InitCompleted(PVM pVM, VMINITCOMPLETED enmWhat)
2287{
2288 switch (enmWhat)
2289 {
2290 case VMINITCOMPLETED_HM:
2291#ifdef VBOX_WITH_PCI_PASSTHROUGH
2292 if (pVM->pgm.s.fPciPassthrough)
2293 {
2294 AssertLogRelReturn(pVM->pgm.s.fRamPreAlloc, VERR_PCI_PASSTHROUGH_NO_RAM_PREALLOC);
2295 AssertLogRelReturn(HMIsEnabled(pVM), VERR_PCI_PASSTHROUGH_NO_HM);
2296 AssertLogRelReturn(HMIsNestedPagingActive(pVM), VERR_PCI_PASSTHROUGH_NO_NESTED_PAGING);
2297
2298 /*
2299 * Report assignments to the IOMMU (hope that's good enough for now).
2300 */
2301 if (pVM->pgm.s.fPciPassthrough)
2302 {
2303 int rc = VMMR3CallR0(pVM, VMMR0_DO_PGM_PHYS_SETUP_IOMMU, 0, NULL);
2304 AssertRCReturn(rc, rc);
2305 }
2306 }
2307#else
2308 AssertLogRelReturn(!pVM->pgm.s.fPciPassthrough, VERR_PGM_PCI_PASSTHRU_MISCONFIG);
2309#endif
2310 break;
2311
2312 default:
2313 /* shut up gcc */
2314 break;
2315 }
2316
2317 return VINF_SUCCESS;
2318}
2319
2320
2321/**
2322 * Applies relocations to data and code managed by this component.
2323 *
2324 * This function will be called at init and whenever the VMM needs to relocate
2325 * itself inside the GC.
2326 *
2327 * @param pVM The cross context VM structure.
2328 * @param offDelta Relocation delta relative to old location.
2329 */
2330VMMR3DECL(void) PGMR3Relocate(PVM pVM, RTGCINTPTR offDelta)
2331{
2332 LogFlow(("PGMR3Relocate %RGv to %RGv\n", pVM->pgm.s.GCPtrCR3Mapping, pVM->pgm.s.GCPtrCR3Mapping + offDelta));
2333
2334 /*
2335 * Paging stuff.
2336 */
2337 pVM->pgm.s.GCPtrCR3Mapping += offDelta;
2338
2339 pgmR3ModeDataInit(pVM, true /* resolve GC/R0 symbols */);
2340
2341 /* Shadow, guest and both mode switch & relocation for each VCPU. */
2342 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2343 {
2344 PVMCPU pVCpu = &pVM->aCpus[i];
2345
2346 pgmR3ModeDataSwitch(pVM, pVCpu, pVCpu->pgm.s.enmShadowMode, pVCpu->pgm.s.enmGuestMode);
2347
2348 PGM_SHW_PFN(Relocate, pVCpu)(pVCpu, offDelta);
2349 PGM_GST_PFN(Relocate, pVCpu)(pVCpu, offDelta);
2350 PGM_BTH_PFN(Relocate, pVCpu)(pVCpu, offDelta);
2351 }
2352
2353 /*
2354 * Trees.
2355 */
2356 pVM->pgm.s.pTreesRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pTreesR3);
2357
2358 /*
2359 * RAM ranges.
2360 */
2361 if (pVM->pgm.s.pRamRangesXR3)
2362 {
2363 /* Update the pSelfRC pointers and relink them. */
2364 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesXR3; pCur; pCur = pCur->pNextR3)
2365 if (!(pCur->fFlags & PGM_RAM_RANGE_FLAGS_FLOATING))
2366 pCur->pSelfRC = MMHyperCCToRC(pVM, pCur);
2367 pgmR3PhysRelinkRamRanges(pVM);
2368
2369 /* Flush the RC TLB. */
2370 for (unsigned i = 0; i < PGM_RAMRANGE_TLB_ENTRIES; i++)
2371 pVM->pgm.s.apRamRangesTlbRC[i] = NIL_RTRCPTR;
2372 }
2373
2374 /*
2375 * Update the pSelfRC pointers of the MMIO2 RAM ranges since they might not
2376 * be mapped and thus not included in the above exercise.
2377 */
2378 for (PPGMMMIO2RANGE pCur = pVM->pgm.s.pMmio2RangesR3; pCur; pCur = pCur->pNextR3)
2379 if (!(pCur->RamRange.fFlags & PGM_RAM_RANGE_FLAGS_FLOATING))
2380 pCur->RamRange.pSelfRC = MMHyperCCToRC(pVM, &pCur->RamRange);
2381
2382 /*
2383 * Update the two page directories with all page table mappings.
2384 * (One or more of them have changed, that's why we're here.)
2385 */
2386 pVM->pgm.s.pMappingsRC = MMHyperR3ToRC(pVM, pVM->pgm.s.pMappingsR3);
2387 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur->pNextR3; pCur = pCur->pNextR3)
2388 pCur->pNextRC = MMHyperR3ToRC(pVM, pCur->pNextR3);
2389
2390 /* Relocate GC addresses of Page Tables. */
2391 for (PPGMMAPPING pCur = pVM->pgm.s.pMappingsR3; pCur; pCur = pCur->pNextR3)
2392 {
2393 for (RTHCUINT i = 0; i < pCur->cPTs; i++)
2394 {
2395 pCur->aPTs[i].pPTRC = MMHyperR3ToRC(pVM, pCur->aPTs[i].pPTR3);
2396 pCur->aPTs[i].paPaePTsRC = MMHyperR3ToRC(pVM, pCur->aPTs[i].paPaePTsR3);
2397 }
2398 }
2399
2400 /*
2401 * Dynamic page mapping area.
2402 */
2403 pVM->pgm.s.paDynPageMap32BitPTEsGC += offDelta;
2404 pVM->pgm.s.paDynPageMapPaePTEsGC += offDelta;
2405 pVM->pgm.s.pbDynPageMapBaseGC += offDelta;
2406
2407 if (pVM->pgm.s.pRCDynMap)
2408 {
2409 pVM->pgm.s.pRCDynMap += offDelta;
2410 PPGMRCDYNMAP pDynMap = (PPGMRCDYNMAP)MMHyperRCToCC(pVM, pVM->pgm.s.pRCDynMap);
2411
2412 pDynMap->paPages += offDelta;
2413 PPGMRCDYNMAPENTRY paPages = (PPGMRCDYNMAPENTRY)MMHyperRCToCC(pVM, pDynMap->paPages);
2414
2415 for (uint32_t iPage = 0; iPage < pDynMap->cPages; iPage++)
2416 {
2417 paPages[iPage].pvPage += offDelta;
2418 paPages[iPage].uPte.pLegacy += offDelta;
2419 paPages[iPage].uPte.pPae += offDelta;
2420 }
2421 }
2422
2423 /*
2424 * The Zero page.
2425 */
2426 pVM->pgm.s.pvZeroPgR0 = MMHyperR3ToR0(pVM, pVM->pgm.s.pvZeroPgR3);
2427#ifdef VBOX_WITH_2X_4GB_ADDR_SPACE
2428 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTR0PTR || !HMIsEnabled(pVM));
2429#else
2430 AssertRelease(pVM->pgm.s.pvZeroPgR0 != NIL_RTR0PTR);
2431#endif
2432
2433 /*
2434 * Physical and virtual handlers.
2435 */
2436 PGMRELOCHANDLERARGS Args = { offDelta, pVM };
2437 RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, true, pgmR3RelocatePhysHandler, &Args);
2438 pVM->pgm.s.pLastPhysHandlerRC = NIL_RTRCPTR;
2439
2440 PPGMPHYSHANDLERTYPEINT pCurPhysType;
2441 RTListOff32ForEach(&pVM->pgm.s.pTreesR3->HeadPhysHandlerTypes, pCurPhysType, PGMPHYSHANDLERTYPEINT, ListNode)
2442 {
2443 if (pCurPhysType->pfnHandlerRC != NIL_RTRCPTR)
2444 pCurPhysType->pfnHandlerRC += offDelta;
2445 if (pCurPhysType->pfnPfHandlerRC != NIL_RTRCPTR)
2446 pCurPhysType->pfnPfHandlerRC += offDelta;
2447 }
2448
2449#ifdef VBOX_WITH_RAW_MODE
2450 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesR3->VirtHandlers, true, pgmR3RelocateVirtHandler, &Args);
2451 RTAvlroGCPtrDoWithAll(&pVM->pgm.s.pTreesR3->HyperVirtHandlers, true, pgmR3RelocateHyperVirtHandler, &Args);
2452
2453 PPGMVIRTHANDLERTYPEINT pCurVirtType;
2454 RTListOff32ForEach(&pVM->pgm.s.pTreesR3->HeadVirtHandlerTypes, pCurVirtType, PGMVIRTHANDLERTYPEINT, ListNode)
2455 {
2456 if (pCurVirtType->pfnHandlerRC != NIL_RTRCPTR)
2457 pCurVirtType->pfnHandlerRC += offDelta;
2458 if (pCurVirtType->pfnPfHandlerRC != NIL_RTRCPTR)
2459 pCurVirtType->pfnPfHandlerRC += offDelta;
2460 }
2461#endif
2462
2463 /*
2464 * The page pool.
2465 */
2466 pgmR3PoolRelocate(pVM);
2467
2468#ifdef VBOX_WITH_STATISTICS
2469 /*
2470 * Statistics.
2471 */
2472 pVM->pgm.s.pStatsRC = MMHyperCCToRC(pVM, pVM->pgm.s.pStatsR3);
2473 for (VMCPUID iCpu = 0; iCpu < pVM->cCpus; iCpu++)
2474 pVM->aCpus[iCpu].pgm.s.pStatsRC = MMHyperCCToRC(pVM, pVM->aCpus[iCpu].pgm.s.pStatsR3);
2475#endif
2476}
2477
2478
2479/**
2480 * Callback function for relocating a physical access handler.
2481 *
2482 * @returns 0 (continue enum)
2483 * @param pNode Pointer to a PGMPHYSHANDLER node.
2484 * @param pvUser Pointer to a PGMRELOCHANDLERARGS.
2485 */
2486static DECLCALLBACK(int) pgmR3RelocatePhysHandler(PAVLROGCPHYSNODECORE pNode, void *pvUser)
2487{
2488 PPGMPHYSHANDLER pHandler = (PPGMPHYSHANDLER)pNode;
2489 PCPGMRELOCHANDLERARGS pArgs = (PCPGMRELOCHANDLERARGS)pvUser;
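    /* Only relocate what looks like a real RC pointer; small values (below
     * 64 KB) are taken to be opaque tokens rather than addresses. */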
2490 if (pHandler->pvUserRC >= 0x10000)
2491 pHandler->pvUserRC += pArgs->offDelta;
2492 return 0;
2493}
2494
2495#ifdef VBOX_WITH_RAW_MODE
2496
2497/**
2498 * Callback function for relocating a virtual access handler.
2499 *
2500 * @returns 0 (continue enum)
2501 * @param pNode Pointer to a PGMVIRTHANDLER node.
2502 * @param pvUser Pointer to a PGMRELOCHANDLERARGS.
2503 */
2504static DECLCALLBACK(int) pgmR3RelocateVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
2505{
2506 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
2507 PCPGMRELOCHANDLERARGS pArgs = (PCPGMRELOCHANDLERARGS)pvUser;
2508 Assert(PGMVIRTANDLER_GET_TYPE(pArgs->pVM, pHandler)->enmKind != PGMVIRTHANDLERKIND_HYPERVISOR);
2509
2510 if ( pHandler->pvUserRC != NIL_RTRCPTR
2511 && PGMVIRTANDLER_GET_TYPE(pArgs->pVM, pHandler)->fRelocUserRC)
2512 pHandler->pvUserRC += pArgs->offDelta;
2513 return 0;
2514}
2515
2516
2517/**
2518 * Callback function for relocating a virtual access handler for the hypervisor mapping.
2519 *
2520 * @returns 0 (continue enum)
2521 * @param pNode Pointer to a PGMVIRTHANDLER node.
2522 * @param pvUser Pointer to a PGMRELOCHANDLERARGS.
2523 */
2524static DECLCALLBACK(int) pgmR3RelocateHyperVirtHandler(PAVLROGCPTRNODECORE pNode, void *pvUser)
2525{
2526 PPGMVIRTHANDLER pHandler = (PPGMVIRTHANDLER)pNode;
2527 PCPGMRELOCHANDLERARGS pArgs = (PCPGMRELOCHANDLERARGS)pvUser;
2528 Assert(PGMVIRTANDLER_GET_TYPE(pArgs->pVM, pHandler)->enmKind == PGMVIRTHANDLERKIND_HYPERVISOR);
2529
2530 if ( pHandler->pvUserRC != NIL_RTRCPTR
2531 && PGMVIRTANDLER_GET_TYPE(pArgs->pVM, pHandler)->fRelocUserRC)
2532 pHandler->pvUserRC += pArgs->offDelta;
2533 return 0;
2534}
2535
2536#endif /* VBOX_WITH_RAW_MODE */
2537
2538/**
2539 * Resets a virtual CPU when unplugged.
2540 *
2541 * @param pVM The cross context VM structure.
2542 * @param pVCpu The cross context virtual CPU structure.
2543 */
2544VMMR3DECL(void) PGMR3ResetCpu(PVM pVM, PVMCPU pVCpu)
2545{
2546 int rc = PGM_GST_PFN(Exit, pVCpu)(pVCpu);
2547 AssertRC(rc);
2548
2549 rc = PGMR3ChangeMode(pVM, pVCpu, PGMMODE_REAL);
2550 AssertRC(rc);
2551
2552 STAM_REL_COUNTER_RESET(&pVCpu->pgm.s.cGuestModeChanges);
2553
2554 pgmR3PoolResetUnpluggedCpu(pVM, pVCpu);
2555
2556 /*
2557 * Re-init other members.
2558 */
2559 pVCpu->pgm.s.fA20Enabled = true;
2560 pVCpu->pgm.s.GCPhysA20Mask = ~((RTGCPHYS)!pVCpu->pgm.s.fA20Enabled << 20);
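    /* fA20Enabled is true at this point, so the expression above yields an
     * all-ones mask; with A20 disabled it would clear bit 20, wrapping
     * addresses at 1 MB like the real A20 gate. */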
2561
2562 /*
2563 * Clear the FFs PGM owns.
2564 */
2565 VMCPU_FF_CLEAR(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
2566 VMCPU_FF_CLEAR(pVCpu, VMCPU_FF_PGM_SYNC_CR3_NON_GLOBAL);
2567}
2568
2569
2570/**
2571 * The VM is being reset.
2572 *
2573 * For the PGM component this means that any PD write monitors
2574 * need to be removed.
2575 *
2576 * @param pVM The cross context VM structure.
2577 */
2578VMMR3_INT_DECL(void) PGMR3Reset(PVM pVM)
2579{
2580 LogFlow(("PGMR3Reset:\n"));
2581 VM_ASSERT_EMT(pVM);
2582
2583 pgmLock(pVM);
2584
2585 /*
2586 * Unfix any fixed mappings and disable CR3 monitoring.
2587 */
2588 pVM->pgm.s.fMappingsFixed = false;
2589 pVM->pgm.s.fMappingsFixedRestored = false;
2590 pVM->pgm.s.GCPtrMappingFixed = NIL_RTGCPTR;
2591 pVM->pgm.s.cbMappingFixed = 0;
2592
2593 /*
2594 * Exit the guest paging mode before the pgm pool gets reset.
2595 * Important to clean up the amd64 case.
2596 */
2597 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2598 {
2599 PVMCPU pVCpu = &pVM->aCpus[i];
2600 int rc = PGM_GST_PFN(Exit, pVCpu)(pVCpu);
2601 AssertReleaseRC(rc);
2602 }
2603
2604#ifdef DEBUG
2605 DBGFR3_INFO_LOG(pVM, "mappings", NULL);
2606 DBGFR3_INFO_LOG(pVM, "handlers", "all nostat");
2607#endif
2608
2609 /*
2610 * Switch mode back to real mode. (before resetting the pgm pool!)
2611 */
2612 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2613 {
2614 PVMCPU pVCpu = &pVM->aCpus[i];
2615
2616 int rc = PGMR3ChangeMode(pVM, pVCpu, PGMMODE_REAL);
2617 AssertReleaseRC(rc);
2618
2619 STAM_REL_COUNTER_RESET(&pVCpu->pgm.s.cGuestModeChanges);
2620 STAM_REL_COUNTER_RESET(&pVCpu->pgm.s.cA20Changes);
2621 }
2622
2623 /*
2624 * Reset the shadow page pool.
2625 */
2626 pgmR3PoolReset(pVM);
2627
2628 /*
2629 * Re-init various other members and clear the FFs that PGM owns.
2630 */
2631 for (VMCPUID i = 0; i < pVM->cCpus; i++)
2632 {
2633 PVMCPU pVCpu = &pVM->aCpus[i];
2634
2635 pVCpu->pgm.s.fGst32BitPageSizeExtension = false;
2636 PGMNotifyNxeChanged(pVCpu, false);
2637
2638 VMCPU_FF_CLEAR(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
2639 VMCPU_FF_CLEAR(pVCpu, VMCPU_FF_PGM_SYNC_CR3_NON_GLOBAL);
2640
2641 if (!pVCpu->pgm.s.fA20Enabled)
2642 {
2643 pVCpu->pgm.s.fA20Enabled = true;
2644 pVCpu->pgm.s.GCPhysA20Mask = ~((RTGCPHYS)!pVCpu->pgm.s.fA20Enabled << 20);
2645#ifdef PGM_WITH_A20
2646 pVCpu->pgm.s.fSyncFlags |= PGM_SYNC_UPDATE_PAGE_BIT_VIRTUAL;
2647 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
2648 pgmR3RefreshShadowModeAfterA20Change(pVCpu);
2649 HMFlushTLB(pVCpu);
2650#endif
2651 }
2652 }
2653
2654 pgmUnlock(pVM);
2655}
2656
2657
2658/**
2659 * Memory setup after VM construction or reset.
2660 *
2661 * @param pVM The cross context VM structure.
2662 * @param fAtReset Indicates the context, after reset if @c true or after
2663 * construction if @c false.
2664 */
2665VMMR3_INT_DECL(void) PGMR3MemSetup(PVM pVM, bool fAtReset)
2666{
2667 if (fAtReset)
2668 {
2669 pgmLock(pVM);
2670
2671 int rc = pgmR3PhysRamZeroAll(pVM);
2672 AssertReleaseRC(rc);
2673
2674 rc = pgmR3PhysRomReset(pVM);
2675 AssertReleaseRC(rc);
2676
2677 pgmUnlock(pVM);
2678 }
2679}
2680
2681
2682#ifdef VBOX_STRICT
2683/**
2684 * VM state change callback for clearing fNoMorePhysWrites after
2685 * a snapshot has been created.
2686 */
2687static DECLCALLBACK(void) pgmR3ResetNoMorePhysWritesFlag(PUVM pUVM, VMSTATE enmState, VMSTATE enmOldState, void *pvUser)
2688{
2689 if ( enmState == VMSTATE_RUNNING
2690 || enmState == VMSTATE_RESUMING)
2691 pUVM->pVM->pgm.s.fNoMorePhysWrites = false;
2692 NOREF(enmOldState); NOREF(pvUser);
2693}
2694#endif
2695
2696/**
2697 * Private API to reset fNoMorePhysWrites.
2698 */
2699VMMR3_INT_DECL(void) PGMR3ResetNoMorePhysWritesFlag(PVM pVM)
2700{
2701 pVM->pgm.s.fNoMorePhysWrites = false;
2702}
2703
2704/**
2705 * Terminates the PGM.
2706 *
2707 * @returns VBox status code.
2708 * @param pVM The cross context VM structure.
2709 */
2710VMMR3DECL(int) PGMR3Term(PVM pVM)
2711{
2712 /* Must free shared pages here. */
2713 pgmLock(pVM);
2714 pgmR3PhysRamTerm(pVM);
2715 pgmR3PhysRomTerm(pVM);
2716 pgmUnlock(pVM);
2717
2718 PGMDeregisterStringFormatTypes();
2719 return PDMR3CritSectDelete(&pVM->pgm.s.CritSectX);
2720}
2721
2722
2723/**
2724 * Show paging mode.
2725 *
2726 * @param pVM The cross context VM structure.
2727 * @param pHlp The info helpers.
2728 * @param pszArgs "all" (default), "guest", "shadow" or "host".
2729 */
2730static DECLCALLBACK(void) pgmR3InfoMode(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2731{
2732 /* digest argument. */
2733 bool fGuest, fShadow, fHost;
2734 if (pszArgs)
2735 pszArgs = RTStrStripL(pszArgs);
2736 if (!pszArgs || !*pszArgs || strstr(pszArgs, "all"))
2737 fShadow = fHost = fGuest = true;
2738 else
2739 {
2740 fShadow = fHost = fGuest = false;
2741 if (strstr(pszArgs, "guest"))
2742 fGuest = true;
2743 if (strstr(pszArgs, "shadow"))
2744 fShadow = true;
2745 if (strstr(pszArgs, "host"))
2746 fHost = true;
2747 }
2748
2749 PVMCPU pVCpu = VMMGetCpu(pVM);
2750 if (!pVCpu)
2751 pVCpu = &pVM->aCpus[0];
2752
2753
2754 /* print info. */
2755 if (fGuest)
2756 pHlp->pfnPrintf(pHlp, "Guest paging mode (VCPU #%u): %s (changed %RU64 times), A20 %s (changed %RU64 times)\n",
2757 pVCpu->idCpu, PGMGetModeName(pVCpu->pgm.s.enmGuestMode), pVCpu->pgm.s.cGuestModeChanges.c,
2758 pVCpu->pgm.s.fA20Enabled ? "enabled" : "disabled", pVCpu->pgm.s.cA20Changes.c);
2759 if (fShadow)
2760 pHlp->pfnPrintf(pHlp, "Shadow paging mode (VCPU #%u): %s\n", pVCpu->idCpu, PGMGetModeName(pVCpu->pgm.s.enmShadowMode));
2761 if (fHost)
2762 {
2763 const char *psz;
2764 switch (pVM->pgm.s.enmHostMode)
2765 {
2766 case SUPPAGINGMODE_INVALID: psz = "invalid"; break;
2767 case SUPPAGINGMODE_32_BIT: psz = "32-bit"; break;
2768 case SUPPAGINGMODE_32_BIT_GLOBAL: psz = "32-bit+G"; break;
2769 case SUPPAGINGMODE_PAE: psz = "PAE"; break;
2770 case SUPPAGINGMODE_PAE_GLOBAL: psz = "PAE+G"; break;
2771 case SUPPAGINGMODE_PAE_NX: psz = "PAE+NX"; break;
2772 case SUPPAGINGMODE_PAE_GLOBAL_NX: psz = "PAE+G+NX"; break;
2773 case SUPPAGINGMODE_AMD64: psz = "AMD64"; break;
2774 case SUPPAGINGMODE_AMD64_GLOBAL: psz = "AMD64+G"; break;
2775 case SUPPAGINGMODE_AMD64_NX: psz = "AMD64+NX"; break;
2776 case SUPPAGINGMODE_AMD64_GLOBAL_NX: psz = "AMD64+G+NX"; break;
2777 default: psz = "unknown"; break;
2778 }
2779 pHlp->pfnPrintf(pHlp, "Host paging mode: %s\n", psz);
2780 }
2781}
2782
2783
2784/**
2785 * Dump registered MMIO ranges to the log.
2786 *
2787 * @param pVM The cross context VM structure.
2788 * @param pHlp The info helpers.
2789 * @param pszArgs Arguments, ignored.
2790 */
2791static DECLCALLBACK(void) pgmR3PhysInfo(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2792{
2793 NOREF(pszArgs);
2794 pHlp->pfnPrintf(pHlp,
2795 "RAM ranges (pVM=%p)\n"
2796 "%.*s %.*s\n",
2797 pVM,
2798 sizeof(RTGCPHYS) * 4 + 1, "GC Phys Range ",
2799 sizeof(RTHCPTR) * 2, "pvHC ");
2800
2801 for (PPGMRAMRANGE pCur = pVM->pgm.s.pRamRangesXR3; pCur; pCur = pCur->pNextR3)
2802 pHlp->pfnPrintf(pHlp,
2803 "%RGp-%RGp %RHv %s\n",
2804 pCur->GCPhys,
2805 pCur->GCPhysLast,
2806 pCur->pvR3,
2807 pCur->pszDesc);
2808}
2809
2810
2811/**
2812 * Dump the page directory to the log.
2813 *
2814 * @param pVM The cross context VM structure.
2815 * @param pHlp The info helpers.
2816 * @param pszArgs Arguments, ignored.
2817 */
2818static DECLCALLBACK(void) pgmR3InfoCr3(PVM pVM, PCDBGFINFOHLP pHlp, const char *pszArgs)
2819{
2820 /** @todo SMP support!! */
2821 PVMCPU pVCpu = &pVM->aCpus[0];
2822
2823/** @todo fix this! Convert the PGMR3DumpHierarchyHC functions to do guest stuff. */
2824 /* Big pages supported? */
2825 const bool fPSE = !!(CPUMGetGuestCR4(pVCpu) & X86_CR4_PSE);
2826
2827 /* Global pages supported? */
2828 const bool fPGE = !!(CPUMGetGuestCR4(pVCpu) & X86_CR4_PGE);
2829
2830 NOREF(pszArgs);
2831
2832 /*
2833 * Get page directory addresses.
2834 */
2835 pgmLock(pVM);
2836 PX86PD pPDSrc = pgmGstGet32bitPDPtr(pVCpu);
2837 Assert(pPDSrc);
2838
2839 /*
2840 * Iterate the page directory.
2841 */
2842 for (unsigned iPD = 0; iPD < RT_ELEMENTS(pPDSrc->a); iPD++)
2843 {
2844 X86PDE PdeSrc = pPDSrc->a[iPD];
2845 if (PdeSrc.n.u1Present)
2846 {
2847 if (PdeSrc.b.u1Size && fPSE)
2848 pHlp->pfnPrintf(pHlp,
2849 "%04X - %RGp P=%d U=%d RW=%d G=%d - BIG\n",
2850 iPD,
2851 pgmGstGet4MBPhysPage(pVM, PdeSrc),
2852 PdeSrc.b.u1Present, PdeSrc.b.u1User, PdeSrc.b.u1Write, PdeSrc.b.u1Global && fPGE);
2853 else
2854 pHlp->pfnPrintf(pHlp,
2855 "%04X - %RGp P=%d U=%d RW=%d [G=%d]\n",
2856 iPD,
2857 (RTGCPHYS)(PdeSrc.u & X86_PDE_PG_MASK),
2858 PdeSrc.n.u1Present, PdeSrc.n.u1User, PdeSrc.n.u1Write, PdeSrc.b.u1Global && fPGE);
2859 }
2860 }
2861 pgmUnlock(pVM);
2862}
2863
2864
2865/**
2866 * Service a VMMCALLRING3_PGM_LOCK call.
2867 *
2868 * @returns VBox status code.
2869 * @param pVM The cross context VM structure.
2870 */
2871VMMR3DECL(int) PGMR3LockCall(PVM pVM)
2872{
2873 int rc = PDMR3CritSectEnterEx(&pVM->pgm.s.CritSectX, true /* fHostCall */);
2874 AssertRC(rc);
2875 return rc;
2876}
2877
2878
2879/**
2880 * Converts a PGMMODE value to a PGM_TYPE_* \#define.
2881 *
2882 * @returns PGM_TYPE_*.
2883 * @param pgmMode The mode value to convert.
2884 */
2885DECLINLINE(unsigned) pgmModeToType(PGMMODE pgmMode)
2886{
2887 switch (pgmMode)
2888 {
2889 case PGMMODE_REAL: return PGM_TYPE_REAL;
2890 case PGMMODE_PROTECTED: return PGM_TYPE_PROT;
2891 case PGMMODE_32_BIT: return PGM_TYPE_32BIT;
2892 case PGMMODE_PAE:
2893 case PGMMODE_PAE_NX: return PGM_TYPE_PAE;
2894 case PGMMODE_AMD64:
2895 case PGMMODE_AMD64_NX: return PGM_TYPE_AMD64;
2896 case PGMMODE_NESTED: return PGM_TYPE_NESTED;
2897 case PGMMODE_EPT: return PGM_TYPE_EPT;
2898 default:
2899 AssertFatalMsgFailed(("pgmMode=%d\n", pgmMode));
2900 }
2901}
2902
2903
2904/**
2905 * Gets the index into the paging mode data array of a SHW+GST mode.
2906 *
2907 * @returns PGM::paPagingData index.
2908 * @param uShwType The shadow paging mode type.
2909 * @param uGstType The guest paging mode type.
2910 */
2911DECLINLINE(unsigned) pgmModeDataIndex(unsigned uShwType, unsigned uGstType)
2912{
2913 Assert(uShwType >= PGM_TYPE_32BIT && uShwType <= PGM_TYPE_MAX);
2914 Assert(uGstType >= PGM_TYPE_REAL && uGstType <= PGM_TYPE_AMD64);
2915 return (uShwType - PGM_TYPE_32BIT) * (PGM_TYPE_AMD64 - PGM_TYPE_REAL + 1)
2916 + (uGstType - PGM_TYPE_REAL);
2917}
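
/* Illustrative example, assuming the PGM_TYPE_* values are consecutive as the
 * formula requires: with five guest types (REAL..AMD64) per shadow type,
 * pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_32BIT) yields
 * (PAE - 32BIT) * 5 + (32BIT - REAL) = 1 * 5 + 2 = 7, i.e. a row-major
 * layout with one row per shadow type. */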
2918
2919
2920/**
2921 * Gets the index into the paging mode data array of a SHW+GST mode.
2922 *
2923 * @returns PGM::paPagingData index.
2924 * @param enmShw The shadow paging mode.
2925 * @param enmGst The guest paging mode.
2926 */
2927DECLINLINE(unsigned) pgmModeDataIndexByMode(PGMMODE enmShw, PGMMODE enmGst)
2928{
2929 Assert(enmShw >= PGMMODE_32_BIT && enmShw <= PGMMODE_MAX);
2930 Assert(enmGst > PGMMODE_INVALID && enmGst < PGMMODE_MAX);
2931 return pgmModeDataIndex(pgmModeToType(enmShw), pgmModeToType(enmGst));
2932}
2933
2934
2935/**
2936 * Calculates the max data index.
2937 * @returns The number of entries in the paging data array.
2938 */
2939DECLINLINE(unsigned) pgmModeDataMaxIndex(void)
2940{
2941 return pgmModeDataIndex(PGM_TYPE_MAX, PGM_TYPE_AMD64) + 1;
2942}
2943
2944
2945/**
2946 * Initializes the paging mode data kept in PGM::paModeData.
2947 *
2948 * @param pVM The cross context VM structure.
2949 * @param fResolveGCAndR0 Indicate whether or not GC and Ring-0 symbols can be resolved now.
2950 * This is used early in the init process to avoid trouble with PDM
2951 * not being initialized yet.
2952 */
2953static int pgmR3ModeDataInit(PVM pVM, bool fResolveGCAndR0)
2954{
2955 PPGMMODEDATA pModeData;
2956 int rc;
2957
2958 /*
2959 * Allocate the array on the first call.
2960 */
2961 if (!pVM->pgm.s.paModeData)
2962 {
2963 pVM->pgm.s.paModeData = (PPGMMODEDATA)MMR3HeapAllocZ(pVM, MM_TAG_PGM, sizeof(PGMMODEDATA) * pgmModeDataMaxIndex());
2964 AssertReturn(pVM->pgm.s.paModeData, VERR_NO_MEMORY);
2965 }
2966
2967 /*
2968 * Initialize the array entries.
2969 */
2970 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_REAL)];
2971 pModeData->uShwType = PGM_TYPE_32BIT;
2972 pModeData->uGstType = PGM_TYPE_REAL;
2973 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2974 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2975 rc = PGM_BTH_NAME_32BIT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2976
2977 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_PROT)];
2978 pModeData->uShwType = PGM_TYPE_32BIT;
2979 pModeData->uGstType = PGM_TYPE_PROT;
2980 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2981 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2982 rc = PGM_BTH_NAME_32BIT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2983
2984 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_32BIT, PGM_TYPE_32BIT)];
2985 pModeData->uShwType = PGM_TYPE_32BIT;
2986 pModeData->uGstType = PGM_TYPE_32BIT;
2987 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2988 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2989 rc = PGM_BTH_NAME_32BIT_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2990
2991 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_REAL)];
2992 pModeData->uShwType = PGM_TYPE_PAE;
2993 pModeData->uGstType = PGM_TYPE_REAL;
2994 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2995 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2996 rc = PGM_BTH_NAME_PAE_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
2997
2998 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PROT)];
2999 pModeData->uShwType = PGM_TYPE_PAE;
3000 pModeData->uGstType = PGM_TYPE_PROT;
3001 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3002 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3003 rc = PGM_BTH_NAME_PAE_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3004
3005 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_32BIT)];
3006 pModeData->uShwType = PGM_TYPE_PAE;
3007 pModeData->uGstType = PGM_TYPE_32BIT;
3008 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3009 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3010 rc = PGM_BTH_NAME_PAE_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3011
3012 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_PAE, PGM_TYPE_PAE)];
3013 pModeData->uShwType = PGM_TYPE_PAE;
3014 pModeData->uGstType = PGM_TYPE_PAE;
3015 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3016 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3017 rc = PGM_BTH_NAME_PAE_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3018
3019#ifdef VBOX_WITH_64_BITS_GUESTS
3020 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_AMD64, PGM_TYPE_AMD64)];
3021 pModeData->uShwType = PGM_TYPE_AMD64;
3022 pModeData->uGstType = PGM_TYPE_AMD64;
3023 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3024 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3025 rc = PGM_BTH_NAME_AMD64_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3026#endif
3027
3028 /* The nested paging mode. */
3029 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_REAL)];
3030 pModeData->uShwType = PGM_TYPE_NESTED;
3031 pModeData->uGstType = PGM_TYPE_REAL;
3032 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3033 rc = PGM_BTH_NAME_NESTED_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3034
3035 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_PROT)];
3036 pModeData->uShwType = PGM_TYPE_NESTED;
3037 pModeData->uGstType = PGM_TYPE_PROT;
3038 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3039 rc = PGM_BTH_NAME_NESTED_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3040
3041 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_32BIT)];
3042 pModeData->uShwType = PGM_TYPE_NESTED;
3043 pModeData->uGstType = PGM_TYPE_32BIT;
3044 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3045 rc = PGM_BTH_NAME_NESTED_32BIT(InitData)(pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3046
3047 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_PAE)];
3048 pModeData->uShwType = PGM_TYPE_NESTED;
3049 pModeData->uGstType = PGM_TYPE_PAE;
3050 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3051 rc = PGM_BTH_NAME_NESTED_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3052
3053#ifdef VBOX_WITH_64_BITS_GUESTS
3054 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
3055 pModeData->uShwType = PGM_TYPE_NESTED;
3056 pModeData->uGstType = PGM_TYPE_AMD64;
3057 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3058 rc = PGM_BTH_NAME_NESTED_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3059#endif
3060
3061 /* The shadow part of the nested paging mode depends on the host paging mode (AMD-V only). */
3062 switch (pVM->pgm.s.enmHostMode)
3063 {
3064#if HC_ARCH_BITS == 32
3065 case SUPPAGINGMODE_32_BIT:
3066 case SUPPAGINGMODE_32_BIT_GLOBAL:
3067 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
3068 {
3069 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
3070 rc = PGM_SHW_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3071 }
3072# ifdef VBOX_WITH_64_BITS_GUESTS
3073 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
3074 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3075# endif
3076 break;
3077
3078 case SUPPAGINGMODE_PAE:
3079 case SUPPAGINGMODE_PAE_NX:
3080 case SUPPAGINGMODE_PAE_GLOBAL:
3081 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3082 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
3083 {
3084 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
3085 rc = PGM_SHW_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3086 }
3087# ifdef VBOX_WITH_64_BITS_GUESTS
3088 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, PGM_TYPE_AMD64)];
3089 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3090# endif
3091 break;
3092#endif /* HC_ARCH_BITS == 32 */
3093
3094#if HC_ARCH_BITS == 64 || defined(RT_OS_DARWIN)
3095 case SUPPAGINGMODE_AMD64:
3096 case SUPPAGINGMODE_AMD64_GLOBAL:
3097 case SUPPAGINGMODE_AMD64_NX:
3098 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3099# ifdef VBOX_WITH_64_BITS_GUESTS
3100 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_AMD64; i++)
3101# else
3102 for (unsigned i = PGM_TYPE_REAL; i <= PGM_TYPE_PAE; i++)
3103# endif
3104 {
3105 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_NESTED, i)];
3106 rc = PGM_SHW_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3107 }
3108 break;
3109#endif /* HC_ARCH_BITS == 64 || RT_OS_DARWIN */
3110
3111 default:
3112 AssertFailed();
3113 break;
3114 }
3115
3116 /* Extended paging (EPT) / Intel VT-x */
3117 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_REAL)];
3118 pModeData->uShwType = PGM_TYPE_EPT;
3119 pModeData->uGstType = PGM_TYPE_REAL;
3120 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3121 rc = PGM_GST_NAME_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3122 rc = PGM_BTH_NAME_EPT_REAL(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3123
3124 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_PROT)];
3125 pModeData->uShwType = PGM_TYPE_EPT;
3126 pModeData->uGstType = PGM_TYPE_PROT;
3127 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3128 rc = PGM_GST_NAME_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3129 rc = PGM_BTH_NAME_EPT_PROT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3130
3131 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_32BIT)];
3132 pModeData->uShwType = PGM_TYPE_EPT;
3133 pModeData->uGstType = PGM_TYPE_32BIT;
3134 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3135 rc = PGM_GST_NAME_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3136 rc = PGM_BTH_NAME_EPT_32BIT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3137
3138 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_PAE)];
3139 pModeData->uShwType = PGM_TYPE_EPT;
3140 pModeData->uGstType = PGM_TYPE_PAE;
3141 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3142 rc = PGM_GST_NAME_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3143 rc = PGM_BTH_NAME_EPT_PAE(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3144
3145#ifdef VBOX_WITH_64_BITS_GUESTS
3146 pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndex(PGM_TYPE_EPT, PGM_TYPE_AMD64)];
3147 pModeData->uShwType = PGM_TYPE_EPT;
3148 pModeData->uGstType = PGM_TYPE_AMD64;
3149 rc = PGM_SHW_NAME_EPT(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3150 rc = PGM_GST_NAME_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3151 rc = PGM_BTH_NAME_EPT_AMD64(InitData)( pVM, pModeData, fResolveGCAndR0); AssertRCReturn(rc, rc);
3152#endif
3153 return VINF_SUCCESS;
3154}
3155
3156
3157/**
3158 * Switch to different mode data (or to the relocated data in the relocate case).
3159 *
3160 * @param pVM The cross context VM structure.
3161 * @param pVCpu The cross context virtual CPU structure.
3162 * @param enmShw The shadow paging mode.
3163 * @param enmGst The guest paging mode.
3164 */
3165static void pgmR3ModeDataSwitch(PVM pVM, PVMCPU pVCpu, PGMMODE enmShw, PGMMODE enmGst)
3166{
3167 PPGMMODEDATA pModeData = &pVM->pgm.s.paModeData[pgmModeDataIndexByMode(enmShw, enmGst)];
3168
3169 Assert(pModeData->uGstType == pgmModeToType(enmGst));
3170 Assert(pModeData->uShwType == pgmModeToType(enmShw));
3171
3172 /* shadow */
3173 pVCpu->pgm.s.pfnR3ShwRelocate = pModeData->pfnR3ShwRelocate;
3174 pVCpu->pgm.s.pfnR3ShwExit = pModeData->pfnR3ShwExit;
3175 pVCpu->pgm.s.pfnR3ShwGetPage = pModeData->pfnR3ShwGetPage;
3176 Assert(pVCpu->pgm.s.pfnR3ShwGetPage);
3177 pVCpu->pgm.s.pfnR3ShwModifyPage = pModeData->pfnR3ShwModifyPage;
3178
3179 pVCpu->pgm.s.pfnRCShwGetPage = pModeData->pfnRCShwGetPage;
3180 pVCpu->pgm.s.pfnRCShwModifyPage = pModeData->pfnRCShwModifyPage;
3181
3182 pVCpu->pgm.s.pfnR0ShwGetPage = pModeData->pfnR0ShwGetPage;
3183 pVCpu->pgm.s.pfnR0ShwModifyPage = pModeData->pfnR0ShwModifyPage;
3184
3185
3186 /* guest */
3187 pVCpu->pgm.s.pfnR3GstRelocate = pModeData->pfnR3GstRelocate;
3188 pVCpu->pgm.s.pfnR3GstExit = pModeData->pfnR3GstExit;
3189 pVCpu->pgm.s.pfnR3GstGetPage = pModeData->pfnR3GstGetPage;
3190 Assert(pVCpu->pgm.s.pfnR3GstGetPage);
3191 pVCpu->pgm.s.pfnR3GstModifyPage = pModeData->pfnR3GstModifyPage;
3192 pVCpu->pgm.s.pfnR3GstGetPDE = pModeData->pfnR3GstGetPDE;
3193 pVCpu->pgm.s.pfnRCGstGetPage = pModeData->pfnRCGstGetPage;
3194 pVCpu->pgm.s.pfnRCGstModifyPage = pModeData->pfnRCGstModifyPage;
3195 pVCpu->pgm.s.pfnRCGstGetPDE = pModeData->pfnRCGstGetPDE;
3196 pVCpu->pgm.s.pfnR0GstGetPage = pModeData->pfnR0GstGetPage;
3197 pVCpu->pgm.s.pfnR0GstModifyPage = pModeData->pfnR0GstModifyPage;
3198 pVCpu->pgm.s.pfnR0GstGetPDE = pModeData->pfnR0GstGetPDE;
3199
3200 /* both */
3201 pVCpu->pgm.s.pfnR3BthRelocate = pModeData->pfnR3BthRelocate;
3202 pVCpu->pgm.s.pfnR3BthInvalidatePage = pModeData->pfnR3BthInvalidatePage;
3203 pVCpu->pgm.s.pfnR3BthSyncCR3 = pModeData->pfnR3BthSyncCR3;
3204 Assert(pVCpu->pgm.s.pfnR3BthSyncCR3);
3205 pVCpu->pgm.s.pfnR3BthPrefetchPage = pModeData->pfnR3BthPrefetchPage;
3206 pVCpu->pgm.s.pfnR3BthVerifyAccessSyncPage = pModeData->pfnR3BthVerifyAccessSyncPage;
3207#ifdef VBOX_STRICT
3208 pVCpu->pgm.s.pfnR3BthAssertCR3 = pModeData->pfnR3BthAssertCR3;
3209#endif
3210 pVCpu->pgm.s.pfnR3BthMapCR3 = pModeData->pfnR3BthMapCR3;
3211 pVCpu->pgm.s.pfnR3BthUnmapCR3 = pModeData->pfnR3BthUnmapCR3;
3212
3213 pVCpu->pgm.s.pfnRCBthTrap0eHandler = pModeData->pfnRCBthTrap0eHandler;
3214 pVCpu->pgm.s.pfnRCBthInvalidatePage = pModeData->pfnRCBthInvalidatePage;
3215 pVCpu->pgm.s.pfnRCBthSyncCR3 = pModeData->pfnRCBthSyncCR3;
3216 pVCpu->pgm.s.pfnRCBthPrefetchPage = pModeData->pfnRCBthPrefetchPage;
3217 pVCpu->pgm.s.pfnRCBthVerifyAccessSyncPage = pModeData->pfnRCBthVerifyAccessSyncPage;
3218#ifdef VBOX_STRICT
3219 pVCpu->pgm.s.pfnRCBthAssertCR3 = pModeData->pfnRCBthAssertCR3;
3220#endif
3221 pVCpu->pgm.s.pfnRCBthMapCR3 = pModeData->pfnRCBthMapCR3;
3222 pVCpu->pgm.s.pfnRCBthUnmapCR3 = pModeData->pfnRCBthUnmapCR3;
3223
3224 pVCpu->pgm.s.pfnR0BthTrap0eHandler = pModeData->pfnR0BthTrap0eHandler;
3225 pVCpu->pgm.s.pfnR0BthInvalidatePage = pModeData->pfnR0BthInvalidatePage;
3226 pVCpu->pgm.s.pfnR0BthSyncCR3 = pModeData->pfnR0BthSyncCR3;
3227 pVCpu->pgm.s.pfnR0BthPrefetchPage = pModeData->pfnR0BthPrefetchPage;
3228 pVCpu->pgm.s.pfnR0BthVerifyAccessSyncPage = pModeData->pfnR0BthVerifyAccessSyncPage;
3229#ifdef VBOX_STRICT
3230 pVCpu->pgm.s.pfnR0BthAssertCR3 = pModeData->pfnR0BthAssertCR3;
3231#endif
3232 pVCpu->pgm.s.pfnR0BthMapCR3 = pModeData->pfnR0BthMapCR3;
3233 pVCpu->pgm.s.pfnR0BthUnmapCR3 = pModeData->pfnR0BthUnmapCR3;
3234}
3235
3236
3237/**
3238 * Calculates the shadow paging mode.
3239 *
3240 * @returns The shadow paging mode.
3241 * @param pVM The cross context VM structure.
3242 * @param enmGuestMode The guest mode.
3243 * @param enmHostMode The host mode.
3244 * @param enmShadowMode The current shadow mode.
3245 * @param penmSwitcher Where to store the switcher to use.
3246 * VMMSWITCHER_INVALID means no change.
3247 */
3248static PGMMODE pgmR3CalcShadowMode(PVM pVM, PGMMODE enmGuestMode, SUPPAGINGMODE enmHostMode, PGMMODE enmShadowMode, VMMSWITCHER *penmSwitcher)
3249{
3250 VMMSWITCHER enmSwitcher = VMMSWITCHER_INVALID;
3251 switch (enmGuestMode)
3252 {
3253 /*
3254 * When switching to real or protected mode we don't change
3255 * anything since it's likely that we'll switch back pretty soon.
3256 *
3257 * During pgmR3InitPaging we'll end up here with PGMMODE_INVALID
3258 * and we're supposed to determine which shadow paging mode and switcher to
3259 * use during init.
3260 */
3261 case PGMMODE_REAL:
3262 case PGMMODE_PROTECTED:
3263 if ( enmShadowMode != PGMMODE_INVALID
3264 && !HMIsEnabled(pVM) /* always switch in hm mode! */)
3265 break; /* (no change) */
3266
3267 switch (enmHostMode)
3268 {
3269 case SUPPAGINGMODE_32_BIT:
3270 case SUPPAGINGMODE_32_BIT_GLOBAL:
3271 enmShadowMode = PGMMODE_32_BIT;
3272 enmSwitcher = VMMSWITCHER_32_TO_32;
3273 break;
3274
3275 case SUPPAGINGMODE_PAE:
3276 case SUPPAGINGMODE_PAE_NX:
3277 case SUPPAGINGMODE_PAE_GLOBAL:
3278 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3279 enmShadowMode = PGMMODE_PAE;
3280 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3281#ifdef DEBUG_bird
3282 if (RTEnvExist("VBOX_32BIT"))
3283 {
3284 enmShadowMode = PGMMODE_32_BIT;
3285 enmSwitcher = VMMSWITCHER_PAE_TO_32;
3286 }
3287#endif
3288 break;
3289
3290 case SUPPAGINGMODE_AMD64:
3291 case SUPPAGINGMODE_AMD64_GLOBAL:
3292 case SUPPAGINGMODE_AMD64_NX:
3293 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3294 enmShadowMode = PGMMODE_PAE;
3295 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3296#ifdef DEBUG_bird
3297 if (RTEnvExist("VBOX_32BIT"))
3298 {
3299 enmShadowMode = PGMMODE_32_BIT;
3300 enmSwitcher = VMMSWITCHER_AMD64_TO_32;
3301 }
3302#endif
3303 break;
3304
3305 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3306 }
3307 break;
3308
3309 case PGMMODE_32_BIT:
3310 switch (enmHostMode)
3311 {
3312 case SUPPAGINGMODE_32_BIT:
3313 case SUPPAGINGMODE_32_BIT_GLOBAL:
3314 enmShadowMode = PGMMODE_32_BIT;
3315 enmSwitcher = VMMSWITCHER_32_TO_32;
3316 break;
3317
3318 case SUPPAGINGMODE_PAE:
3319 case SUPPAGINGMODE_PAE_NX:
3320 case SUPPAGINGMODE_PAE_GLOBAL:
3321 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3322 enmShadowMode = PGMMODE_PAE;
3323 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3324#ifdef DEBUG_bird
3325 if (RTEnvExist("VBOX_32BIT"))
3326 {
3327 enmShadowMode = PGMMODE_32_BIT;
3328 enmSwitcher = VMMSWITCHER_PAE_TO_32;
3329 }
3330#endif
3331 break;
3332
3333 case SUPPAGINGMODE_AMD64:
3334 case SUPPAGINGMODE_AMD64_GLOBAL:
3335 case SUPPAGINGMODE_AMD64_NX:
3336 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3337 enmShadowMode = PGMMODE_PAE;
3338 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3339#ifdef DEBUG_bird
3340 if (RTEnvExist("VBOX_32BIT"))
3341 {
3342 enmShadowMode = PGMMODE_32_BIT;
3343 enmSwitcher = VMMSWITCHER_AMD64_TO_32;
3344 }
3345#endif
3346 break;
3347
3348 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3349 }
3350 break;
3351
3352 case PGMMODE_PAE:
3353 case PGMMODE_PAE_NX: /** @todo This might require more switchers and guest+both modes. */
3354 switch (enmHostMode)
3355 {
3356 case SUPPAGINGMODE_32_BIT:
3357 case SUPPAGINGMODE_32_BIT_GLOBAL:
3358 enmShadowMode = PGMMODE_PAE;
3359 enmSwitcher = VMMSWITCHER_32_TO_PAE;
3360 break;
3361
3362 case SUPPAGINGMODE_PAE:
3363 case SUPPAGINGMODE_PAE_NX:
3364 case SUPPAGINGMODE_PAE_GLOBAL:
3365 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3366 enmShadowMode = PGMMODE_PAE;
3367 enmSwitcher = VMMSWITCHER_PAE_TO_PAE;
3368 break;
3369
3370 case SUPPAGINGMODE_AMD64:
3371 case SUPPAGINGMODE_AMD64_GLOBAL:
3372 case SUPPAGINGMODE_AMD64_NX:
3373 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3374 enmShadowMode = PGMMODE_PAE;
3375 enmSwitcher = VMMSWITCHER_AMD64_TO_PAE;
3376 break;
3377
3378 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3379 }
3380 break;
3381
3382 case PGMMODE_AMD64:
3383 case PGMMODE_AMD64_NX:
3384 switch (enmHostMode)
3385 {
3386 case SUPPAGINGMODE_32_BIT:
3387 case SUPPAGINGMODE_32_BIT_GLOBAL:
3388 enmShadowMode = PGMMODE_AMD64;
3389 enmSwitcher = VMMSWITCHER_32_TO_AMD64;
3390 break;
3391
3392 case SUPPAGINGMODE_PAE:
3393 case SUPPAGINGMODE_PAE_NX:
3394 case SUPPAGINGMODE_PAE_GLOBAL:
3395 case SUPPAGINGMODE_PAE_GLOBAL_NX:
3396 enmShadowMode = PGMMODE_AMD64;
3397 enmSwitcher = VMMSWITCHER_PAE_TO_AMD64;
3398 break;
3399
3400 case SUPPAGINGMODE_AMD64:
3401 case SUPPAGINGMODE_AMD64_GLOBAL:
3402 case SUPPAGINGMODE_AMD64_NX:
3403 case SUPPAGINGMODE_AMD64_GLOBAL_NX:
3404 enmShadowMode = PGMMODE_AMD64;
3405 enmSwitcher = VMMSWITCHER_AMD64_TO_AMD64;
3406 break;
3407
3408 default: AssertMsgFailed(("enmHostMode=%d\n", enmHostMode)); break;
3409 }
3410 break;
3411
3412
3413 default:
3414 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
3415 *penmSwitcher = VMMSWITCHER_INVALID;
3416 return PGMMODE_INVALID;
3417 }
3418 /* Override the shadow mode if nested paging is active. */
3419 pVM->pgm.s.fNestedPaging = HMIsNestedPagingActive(pVM);
3420 if (pVM->pgm.s.fNestedPaging)
3421 enmShadowMode = HMGetShwPagingMode(pVM);
3422
3423 *penmSwitcher = enmSwitcher;
3424 return enmShadowMode;
3425}
3426
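/* Example: a PAE guest on an AMD64 host takes the SUPPAGINGMODE_AMD64* branch
 * above, yielding enmShadowMode = PGMMODE_PAE and the AMD64-to-PAE switcher;
 * the shadow mode tracks the guest while the switcher bridges the difference
 * to the host. When nested paging is active, HMGetShwPagingMode() overrides
 * whatever was calculated here. */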
3427
3428/**
3429 * Performs the actual mode change.
3430 * This is called by PGMChangeMode and pgmR3InitPaging().
3431 *
3432 * @returns VBox status code. May suspend or power off the VM on error, but
3433 *          such actions are signalled via forced action flags (FFs) rather than status codes.
3434 *
3435 * @param pVM The cross context VM structure.
3436 * @param pVCpu The cross context virtual CPU structure.
3437 * @param enmGuestMode The new guest mode. This is assumed to be different from
3438 * the current mode.
3439 */
3440VMMR3DECL(int) PGMR3ChangeMode(PVM pVM, PVMCPU pVCpu, PGMMODE enmGuestMode)
3441{
3442#if HC_ARCH_BITS == 32
3443 bool fIsOldGuestPagingMode64Bits = (pVCpu->pgm.s.enmGuestMode >= PGMMODE_AMD64);
3444#endif
3445 bool fIsNewGuestPagingMode64Bits = (enmGuestMode >= PGMMODE_AMD64);
3446
3447 Log(("PGMR3ChangeMode: Guest mode: %s -> %s\n", PGMGetModeName(pVCpu->pgm.s.enmGuestMode), PGMGetModeName(enmGuestMode)));
3448 STAM_REL_COUNTER_INC(&pVCpu->pgm.s.cGuestModeChanges);
3449
3450 /*
3451 * Calc the shadow mode and switcher.
3452 */
3453 VMMSWITCHER enmSwitcher;
3454 PGMMODE enmShadowMode;
3455 enmShadowMode = pgmR3CalcShadowMode(pVM, enmGuestMode, pVM->pgm.s.enmHostMode, pVCpu->pgm.s.enmShadowMode, &enmSwitcher);
3456
3457#ifdef VBOX_WITH_RAW_MODE
3458 if ( enmSwitcher != VMMSWITCHER_INVALID
3459 && !HMIsEnabled(pVM))
3460 {
3461 /*
3462 * Select new switcher.
3463 */
3464 int rc = VMMR3SelectSwitcher(pVM, enmSwitcher);
3465 if (RT_FAILURE(rc))
3466 {
3467 AssertReleaseMsgFailed(("VMMR3SelectSwitcher(%d) -> %Rrc\n", enmSwitcher, rc));
3468 return rc;
3469 }
3470 }
3471#endif
3472
3473 /*
3474 * Exit old mode(s).
3475 */
3476#if HC_ARCH_BITS == 32
3477    /* The nested shadow paging mode for AMD-V does change when running 64-bit guests on 32-bit hosts; typically PAE <-> AMD64. */
3478 const bool fForceShwEnterExit = ( fIsOldGuestPagingMode64Bits != fIsNewGuestPagingMode64Bits
3479 && enmShadowMode == PGMMODE_NESTED);
3480#else
3481 const bool fForceShwEnterExit = false;
3482#endif
3483 /* shadow */
3484 if ( enmShadowMode != pVCpu->pgm.s.enmShadowMode
3485 || fForceShwEnterExit)
3486 {
3487 LogFlow(("PGMR3ChangeMode: Shadow mode: %s -> %s\n", PGMGetModeName(pVCpu->pgm.s.enmShadowMode), PGMGetModeName(enmShadowMode)));
3488 if (PGM_SHW_PFN(Exit, pVCpu))
3489 {
3490 int rc = PGM_SHW_PFN(Exit, pVCpu)(pVCpu);
3491 if (RT_FAILURE(rc))
3492 {
3493 AssertMsgFailed(("Exit failed for shadow mode %d: %Rrc\n", pVCpu->pgm.s.enmShadowMode, rc));
3494 return rc;
3495 }
3496 }
3497
3498 }
3499 else
3500 LogFlow(("PGMR3ChangeMode: Shadow mode remains: %s\n", PGMGetModeName(pVCpu->pgm.s.enmShadowMode)));
3501
3502 /* guest */
3503 if (PGM_GST_PFN(Exit, pVCpu))
3504 {
3505 int rc = PGM_GST_PFN(Exit, pVCpu)(pVCpu);
3506 if (RT_FAILURE(rc))
3507 {
3508 AssertMsgFailed(("Exit failed for guest mode %d: %Rrc\n", pVCpu->pgm.s.enmGuestMode, rc));
3509 return rc;
3510 }
3511 }
3512
3513 /*
3514 * Load new paging mode data.
3515 */
3516 pgmR3ModeDataSwitch(pVM, pVCpu, enmShadowMode, enmGuestMode);
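    /* (Sketch of intent: pgmR3ModeDataSwitch repoints the per-mode function
     * tables behind the PGM_GST_PFN / PGM_SHW_PFN / PGM_BTH_PFN dispatch used
     * throughout this function; see its definition for the details.) */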
3517
3518 /*
3519 * Enter new shadow mode (if changed).
3520 */
3521 if ( enmShadowMode != pVCpu->pgm.s.enmShadowMode
3522 || fForceShwEnterExit)
3523 {
3524 int rc;
3525 pVCpu->pgm.s.enmShadowMode = enmShadowMode;
3526 switch (enmShadowMode)
3527 {
3528 case PGMMODE_32_BIT:
3529 rc = PGM_SHW_NAME_32BIT(Enter)(pVCpu, false);
3530 break;
3531 case PGMMODE_PAE:
3532 case PGMMODE_PAE_NX:
3533 rc = PGM_SHW_NAME_PAE(Enter)(pVCpu, false);
3534 break;
3535 case PGMMODE_AMD64:
3536 case PGMMODE_AMD64_NX:
3537 rc = PGM_SHW_NAME_AMD64(Enter)(pVCpu, fIsNewGuestPagingMode64Bits);
3538 break;
3539 case PGMMODE_NESTED:
3540 rc = PGM_SHW_NAME_NESTED(Enter)(pVCpu, fIsNewGuestPagingMode64Bits);
3541 break;
3542 case PGMMODE_EPT:
3543 rc = PGM_SHW_NAME_EPT(Enter)(pVCpu, fIsNewGuestPagingMode64Bits);
3544 break;
3545 case PGMMODE_REAL:
3546 case PGMMODE_PROTECTED:
3547 default:
3548 AssertReleaseMsgFailed(("enmShadowMode=%d\n", enmShadowMode));
3549 return VERR_INTERNAL_ERROR;
3550 }
3551 if (RT_FAILURE(rc))
3552 {
3553 AssertReleaseMsgFailed(("Entering enmShadowMode=%d failed: %Rrc\n", enmShadowMode, rc));
3554 pVCpu->pgm.s.enmShadowMode = PGMMODE_INVALID;
3555 return rc;
3556 }
3557 }
3558
3559 /*
3560 * Always flag the necessary updates
3561 */
3562 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
3563
3564 /*
3565 * Enter the new guest and shadow+guest modes.
3566 */
3567 int rc = -1;
3568 int rc2 = -1;
3569 RTGCPHYS GCPhysCR3 = NIL_RTGCPHYS;
3570 pVCpu->pgm.s.enmGuestMode = enmGuestMode;
3571 switch (enmGuestMode)
3572 {
3573 case PGMMODE_REAL:
3574 rc = PGM_GST_NAME_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3575 switch (pVCpu->pgm.s.enmShadowMode)
3576 {
3577 case PGMMODE_32_BIT:
3578 rc2 = PGM_BTH_NAME_32BIT_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3579 break;
3580 case PGMMODE_PAE:
3581 case PGMMODE_PAE_NX:
3582 rc2 = PGM_BTH_NAME_PAE_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3583 break;
3584 case PGMMODE_NESTED:
3585 rc2 = PGM_BTH_NAME_NESTED_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3586 break;
3587 case PGMMODE_EPT:
3588 rc2 = PGM_BTH_NAME_EPT_REAL(Enter)(pVCpu, NIL_RTGCPHYS);
3589 break;
3590 case PGMMODE_AMD64:
3591 case PGMMODE_AMD64_NX:
3592 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3593 default: AssertFailed(); break;
3594 }
3595 break;
3596
3597 case PGMMODE_PROTECTED:
3598 rc = PGM_GST_NAME_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3599 switch (pVCpu->pgm.s.enmShadowMode)
3600 {
3601 case PGMMODE_32_BIT:
3602 rc2 = PGM_BTH_NAME_32BIT_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3603 break;
3604 case PGMMODE_PAE:
3605 case PGMMODE_PAE_NX:
3606 rc2 = PGM_BTH_NAME_PAE_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3607 break;
3608 case PGMMODE_NESTED:
3609 rc2 = PGM_BTH_NAME_NESTED_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3610 break;
3611 case PGMMODE_EPT:
3612 rc2 = PGM_BTH_NAME_EPT_PROT(Enter)(pVCpu, NIL_RTGCPHYS);
3613 break;
3614 case PGMMODE_AMD64:
3615 case PGMMODE_AMD64_NX:
3616 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3617 default: AssertFailed(); break;
3618 }
3619 break;
3620
3621 case PGMMODE_32_BIT:
3622 GCPhysCR3 = CPUMGetGuestCR3(pVCpu) & X86_CR3_PAGE_MASK;
3623 rc = PGM_GST_NAME_32BIT(Enter)(pVCpu, GCPhysCR3);
3624 switch (pVCpu->pgm.s.enmShadowMode)
3625 {
3626 case PGMMODE_32_BIT:
3627 rc2 = PGM_BTH_NAME_32BIT_32BIT(Enter)(pVCpu, GCPhysCR3);
3628 break;
3629 case PGMMODE_PAE:
3630 case PGMMODE_PAE_NX:
3631 rc2 = PGM_BTH_NAME_PAE_32BIT(Enter)(pVCpu, GCPhysCR3);
3632 break;
3633 case PGMMODE_NESTED:
3634 rc2 = PGM_BTH_NAME_NESTED_32BIT(Enter)(pVCpu, GCPhysCR3);
3635 break;
3636 case PGMMODE_EPT:
3637 rc2 = PGM_BTH_NAME_EPT_32BIT(Enter)(pVCpu, GCPhysCR3);
3638 break;
3639 case PGMMODE_AMD64:
3640 case PGMMODE_AMD64_NX:
3641 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3642 default: AssertFailed(); break;
3643 }
3644 break;
3645
3646 case PGMMODE_PAE_NX:
3647 case PGMMODE_PAE:
3648 {
3649 uint32_t u32Dummy, u32Features;
3650
3651 CPUMGetGuestCpuId(pVCpu, 1, 0, &u32Dummy, &u32Dummy, &u32Dummy, &u32Features);
3652 if (!(u32Features & X86_CPUID_FEATURE_EDX_PAE))
3653 return VMSetRuntimeError(pVM, VMSETRTERR_FLAGS_FATAL, "PAEmode",
3654 N_("The guest is trying to switch to the PAE mode which is currently disabled by default in VirtualBox. PAE support can be enabled using the VM settings (System/Processor)"));
3655
3656 GCPhysCR3 = CPUMGetGuestCR3(pVCpu) & X86_CR3_PAE_PAGE_MASK;
3657 rc = PGM_GST_NAME_PAE(Enter)(pVCpu, GCPhysCR3);
3658 switch (pVCpu->pgm.s.enmShadowMode)
3659 {
3660 case PGMMODE_PAE:
3661 case PGMMODE_PAE_NX:
3662 rc2 = PGM_BTH_NAME_PAE_PAE(Enter)(pVCpu, GCPhysCR3);
3663 break;
3664 case PGMMODE_NESTED:
3665 rc2 = PGM_BTH_NAME_NESTED_PAE(Enter)(pVCpu, GCPhysCR3);
3666 break;
3667 case PGMMODE_EPT:
3668 rc2 = PGM_BTH_NAME_EPT_PAE(Enter)(pVCpu, GCPhysCR3);
3669 break;
3670 case PGMMODE_32_BIT:
3671 case PGMMODE_AMD64:
3672 case PGMMODE_AMD64_NX:
3673 AssertMsgFailed(("Should use PAE shadow mode!\n"));
3674 default: AssertFailed(); break;
3675 }
3676 break;
3677 }
3678
3679#ifdef VBOX_WITH_64_BITS_GUESTS
3680 case PGMMODE_AMD64_NX:
3681 case PGMMODE_AMD64:
3682 GCPhysCR3 = CPUMGetGuestCR3(pVCpu) & UINT64_C(0xfffffffffffff000); /** @todo define this mask! */
3683 rc = PGM_GST_NAME_AMD64(Enter)(pVCpu, GCPhysCR3);
3684 switch (pVCpu->pgm.s.enmShadowMode)
3685 {
3686 case PGMMODE_AMD64:
3687 case PGMMODE_AMD64_NX:
3688 rc2 = PGM_BTH_NAME_AMD64_AMD64(Enter)(pVCpu, GCPhysCR3);
3689 break;
3690 case PGMMODE_NESTED:
3691 rc2 = PGM_BTH_NAME_NESTED_AMD64(Enter)(pVCpu, GCPhysCR3);
3692 break;
3693 case PGMMODE_EPT:
3694 rc2 = PGM_BTH_NAME_EPT_AMD64(Enter)(pVCpu, GCPhysCR3);
3695 break;
3696 case PGMMODE_32_BIT:
3697 case PGMMODE_PAE:
3698 case PGMMODE_PAE_NX:
3699 AssertMsgFailed(("Should use AMD64 shadow mode!\n"));
3700 default: AssertFailed(); break;
3701 }
3702 break;
3703#endif
3704
3705 default:
3706 AssertReleaseMsgFailed(("enmGuestMode=%d\n", enmGuestMode));
3707 rc = VERR_NOT_IMPLEMENTED;
3708 break;
3709 }
3710
3711 /* status codes. */
3712 AssertRC(rc);
3713 AssertRC(rc2);
3714 if (RT_SUCCESS(rc))
3715 {
3716 rc = rc2;
3717 if (RT_SUCCESS(rc)) /* no informational status codes. */
3718 rc = VINF_SUCCESS;
3719 }
3720
3721 /* Notify HM as well. */
3722 HMR3PagingModeChanged(pVM, pVCpu, pVCpu->pgm.s.enmShadowMode, pVCpu->pgm.s.enmGuestMode);
3723 return rc;
3724}
3725
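/* Usage sketch (assumes the caller runs on the EMT of pVCpu and that the mode
 * really changed, as the function requires; illustrative only):
 *     if (enmNewMode != PGMGetGuestMode(pVCpu))
 *     {
 *         int rc = PGMR3ChangeMode(pVM, pVCpu, enmNewMode);
 *         AssertLogRelRC(rc);
 *     }
 */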
3726
3727/**
3728 * Called by pgmPoolFlushAllInt prior to flushing the pool.
3729 *
3730 * @returns VBox status code, fully asserted.
3731 * @param pVCpu The cross context virtual CPU structure.
3732 */
3733int pgmR3ExitShadowModeBeforePoolFlush(PVMCPU pVCpu)
3734{
3735 /* Unmap the old CR3 value before flushing everything. */
3736 int rc = PGM_BTH_PFN(UnmapCR3, pVCpu)(pVCpu);
3737 AssertRC(rc);
3738
3739 /* Exit the current shadow paging mode as well; nested paging and EPT use a root CR3 which will get flushed here. */
3740 rc = PGM_SHW_PFN(Exit, pVCpu)(pVCpu);
3741 AssertRC(rc);
3742 Assert(pVCpu->pgm.s.pShwPageCR3R3 == NULL);
3743 return rc;
3744}
3745
3746
3747/**
3748 * Called by pgmPoolFlushAllInt after flushing the pool.
3749 *
3750 * @returns VBox status code, fully asserted.
3751 * @param pVM The cross context VM structure.
3752 * @param pVCpu The cross context virtual CPU structure.
3753 */
3754int pgmR3ReEnterShadowModeAfterPoolFlush(PVM pVM, PVMCPU pVCpu)
3755{
3756 pVCpu->pgm.s.enmShadowMode = PGMMODE_INVALID;
3757 int rc = PGMR3ChangeMode(pVM, pVCpu, PGMGetGuestMode(pVCpu));
3758 Assert(VMCPU_FF_IS_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3));
3759 AssertRCReturn(rc, rc);
3760 AssertRCSuccessReturn(rc, VERR_IPE_UNEXPECTED_INFO_STATUS);
3761
3762 Assert(pVCpu->pgm.s.pShwPageCR3R3 != NULL);
3763 AssertMsg( pVCpu->pgm.s.enmShadowMode >= PGMMODE_NESTED
3764 || CPUMGetHyperCR3(pVCpu) == PGMGetHyperCR3(pVCpu),
3765 ("%RHp != %RHp %s\n", (RTHCPHYS)CPUMGetHyperCR3(pVCpu), PGMGetHyperCR3(pVCpu), PGMGetModeName(pVCpu->pgm.s.enmShadowMode)));
3766 return rc;
3767}
3768
3769
3770/**
3771 * Called by PGMR3PhysSetA20 after changing the A20 state.
3772 *
3773 * @param pVCpu The cross context virtual CPU structure.
3774 */
3775void pgmR3RefreshShadowModeAfterA20Change(PVMCPU pVCpu)
3776{
3777 /** @todo Probably doing a bit too much here. */
3778 int rc = pgmR3ExitShadowModeBeforePoolFlush(pVCpu);
3779 AssertReleaseRC(rc);
3780 rc = pgmR3ReEnterShadowModeAfterPoolFlush(pVCpu->CTX_SUFF(pVM), pVCpu);
3781 AssertReleaseRC(rc);
3782}
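/* Rationale for pgmR3RefreshShadowModeAfterA20Change: the A20 gate masks
 * guest physical addresses, so shadow paging state derived from them goes
 * stale when the gate is toggled; exiting and re-entering the shadow mode
 * around the pool flush rebuilds that state from scratch. */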
3783
3784
3785#ifdef VBOX_WITH_DEBUGGER
3786
3787/**
3788 * @callback_method_impl{FNDBGCCMD, The '.pgmerror' and '.pgmerroroff' commands.}
3789 */
3790static DECLCALLBACK(int) pgmR3CmdError(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PUVM pUVM, PCDBGCVAR paArgs, unsigned cArgs)
3791{
3792 /*
3793 * Validate input.
3794 */
3795 DBGC_CMDHLP_REQ_UVM_RET(pCmdHlp, pCmd, pUVM);
3796 PVM pVM = pUVM->pVM;
3797 DBGC_CMDHLP_ASSERT_PARSER_RET(pCmdHlp, pCmd, 0, cArgs == 0 || (cArgs == 1 && paArgs[0].enmType == DBGCVAR_TYPE_STRING));
3798
3799 if (!cArgs)
3800 {
3801 /*
3802 * Print the list of error injection locations with status.
3803 */
3804 DBGCCmdHlpPrintf(pCmdHlp, "PGM error inject locations:\n");
3805 DBGCCmdHlpPrintf(pCmdHlp, " handy - %RTbool\n", pVM->pgm.s.fErrInjHandyPages);
3806 }
3807 else
3808 {
3809 /*
3810 * String switch on where to inject the error.
3811 */
3812 bool const fNewState = !strcmp(pCmd->pszCmd, "pgmerror");
3813 const char *pszWhere = paArgs[0].u.pszString;
3814 if (!strcmp(pszWhere, "handy"))
3815 ASMAtomicWriteBool(&pVM->pgm.s.fErrInjHandyPages, fNewState);
3816 else
3817 return DBGCCmdHlpPrintf(pCmdHlp, "error: Invalid 'where' value: %s.\n", pszWhere);
3818 DBGCCmdHlpPrintf(pCmdHlp, "done\n");
3819 }
3820 return VINF_SUCCESS;
3821}
3822
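/* Example debugger console usage (the commands registered for this callback):
 *     .pgmerror             -- list the injection points and their state
 *     .pgmerror handy       -- inject errors into the handy page path
 *     .pgmerroroff handy    -- stop injecting them again
 */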
3823
3824/**
3825 * @callback_method_impl{FNDBGCCMD, The '.pgmsync' command.}
3826 */
3827static DECLCALLBACK(int) pgmR3CmdSync(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PUVM pUVM, PCDBGCVAR paArgs, unsigned cArgs)
3828{
3829 /*
3830 * Validate input.
3831 */
3832 NOREF(pCmd); NOREF(paArgs); NOREF(cArgs);
3833 DBGC_CMDHLP_REQ_UVM_RET(pCmdHlp, pCmd, pUVM);
3834 PVMCPU pVCpu = VMMR3GetCpuByIdU(pUVM, DBGCCmdHlpGetCurrentCpu(pCmdHlp));
3835 if (!pVCpu)
3836 return DBGCCmdHlpFail(pCmdHlp, pCmd, "Invalid CPU ID");
3837
3838 /*
3839 * Force page directory sync.
3840 */
3841 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
3842
3843 int rc = DBGCCmdHlpPrintf(pCmdHlp, "Forcing page directory sync.\n");
3844 if (RT_FAILURE(rc))
3845 return rc;
3846
3847 return VINF_SUCCESS;
3848}
3849
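/* Example: ".pgmsync" simply raises VMCPU_FF_PGM_SYNC_CR3 on the current
 * debugger CPU, so the resync happens on that CPU's next execution round. */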
3850#ifdef VBOX_STRICT
3851
3852/**
3853 * EMT callback for pgmR3CmdAssertCR3.
3854 *
3855 * @returns VBox status code.
3856 * @param pUVM The user mode VM handle.
3857 * @param pcErrors Where to return the error count.
3858 */
3859static DECLCALLBACK(int) pgmR3CmdAssertCR3EmtWorker(PUVM pUVM, unsigned *pcErrors)
3860{
3861 PVM pVM = pUVM->pVM;
3862 VM_ASSERT_VALID_EXT_RETURN(pVM, VERR_INVALID_VM_HANDLE);
3863 PVMCPU pVCpu = VMMGetCpu(pVM);
3864
3865 *pcErrors = PGMAssertCR3(pVM, pVCpu, CPUMGetGuestCR3(pVCpu), CPUMGetGuestCR4(pVCpu));
3866
3867 return VINF_SUCCESS;
3868}
3869
3870
3871/**
3872 * @callback_method_impl{FNDBGCCMD, The '.pgmassertcr3' command.}
3873 */
3874static DECLCALLBACK(int) pgmR3CmdAssertCR3(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PUVM pUVM, PCDBGCVAR paArgs, unsigned cArgs)
3875{
3876 /*
3877 * Validate input.
3878 */
3879 NOREF(pCmd); NOREF(paArgs); NOREF(cArgs);
3880 DBGC_CMDHLP_REQ_UVM_RET(pCmdHlp, pCmd, pUVM);
3881
3882 int rc = DBGCCmdHlpPrintf(pCmdHlp, "Checking shadow CR3 page tables for consistency.\n");
3883 if (RT_FAILURE(rc))
3884 return rc;
3885
3886 unsigned cErrors = 0;
3887 rc = VMR3ReqCallWaitU(pUVM, DBGCCmdHlpGetCurrentCpu(pCmdHlp), (PFNRT)pgmR3CmdAssertCR3EmtWorker, 2, pUVM, &cErrors);
3888 if (RT_FAILURE(rc))
3889 return DBGCCmdHlpFail(pCmdHlp, pCmd, "VMR3ReqCallWaitU failed: %Rrc", rc);
3890 if (cErrors > 0)
3891 return DBGCCmdHlpFail(pCmdHlp, pCmd, "PGMAssertCR3: %u error(s)", cErrors);
3892 return DBGCCmdHlpPrintf(pCmdHlp, "PGMAssertCR3: OK\n");
3893}
3894
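/* Strict-build helper: ".pgmassertcr3" runs PGMAssertCR3 on the chosen CPU's
 * EMT and fails with the number of shadow vs. guest page table mismatches. */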
3895#endif /* VBOX_STRICT */
3896
3897/**
3898 * @callback_method_impl{FNDBGCCMD, The '.pgmsyncalways' command.}
3899 */
3900static DECLCALLBACK(int) pgmR3CmdSyncAlways(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PUVM pUVM, PCDBGCVAR paArgs, unsigned cArgs)
3901{
3902 /*
3903 * Validate input.
3904 */
3905 NOREF(pCmd); NOREF(paArgs); NOREF(cArgs);
3906 DBGC_CMDHLP_REQ_UVM_RET(pCmdHlp, pCmd, pUVM);
3907 PVMCPU pVCpu = VMMR3GetCpuByIdU(pUVM, DBGCCmdHlpGetCurrentCpu(pCmdHlp));
3908 if (!pVCpu)
3909 return DBGCCmdHlpFail(pCmdHlp, pCmd, "Invalid CPU ID");
3910
3911 /*
3912 * Force page directory sync.
3913 */
3914 int rc;
3915 if (pVCpu->pgm.s.fSyncFlags & PGM_SYNC_ALWAYS)
3916 {
3917 ASMAtomicAndU32(&pVCpu->pgm.s.fSyncFlags, ~PGM_SYNC_ALWAYS);
3918 rc = DBGCCmdHlpPrintf(pCmdHlp, "Disabled permanent forced page directory syncing.\n");
3919 }
3920 else
3921 {
3922 ASMAtomicOrU32(&pVCpu->pgm.s.fSyncFlags, PGM_SYNC_ALWAYS);
3923 VMCPU_FF_SET(pVCpu, VMCPU_FF_PGM_SYNC_CR3);
3924 rc = DBGCCmdHlpPrintf(pCmdHlp, "Enabled permanent forced page directory syncing.\n");
3925 }
3926 return rc;
3927}
3928
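/* Example: ".pgmsyncalways" toggles PGM_SYNC_ALWAYS, i.e. (roughly speaking)
 * a forced page directory sync on every execution round until toggled off. */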
3929
3930/**
3931 * @callback_method_impl{FNDBGCCMD, The '.pgmphystofile' command.}
3932 */
3933static DECLCALLBACK(int) pgmR3CmdPhysToFile(PCDBGCCMD pCmd, PDBGCCMDHLP pCmdHlp, PUVM pUVM, PCDBGCVAR paArgs, unsigned cArgs)
3934{
3935 /*
3936 * Validate input.
3937 */
3938 NOREF(pCmd);
3939 DBGC_CMDHLP_REQ_UVM_RET(pCmdHlp, pCmd, pUVM);
3940 PVM pVM = pUVM->pVM;
3941 DBGC_CMDHLP_ASSERT_PARSER_RET(pCmdHlp, pCmd, 0, cArgs == 1 || cArgs == 2);
3942 DBGC_CMDHLP_ASSERT_PARSER_RET(pCmdHlp, pCmd, 0, paArgs[0].enmType == DBGCVAR_TYPE_STRING);
3943 if (cArgs == 2)
3944 {
3945 DBGC_CMDHLP_ASSERT_PARSER_RET(pCmdHlp, pCmd, 1, paArgs[1].enmType == DBGCVAR_TYPE_STRING);
3946 if (strcmp(paArgs[1].u.pszString, "nozero"))
3947 return DBGCCmdHlpFail(pCmdHlp, pCmd, "Invalid 2nd argument '%s', must be 'nozero'.\n", paArgs[1].u.pszString);
3948 }
3949 bool fIncZeroPgs = cArgs < 2;
3950
3951 /*
3952     * Open the output file and get the RAM parameters.
3953 */
3954 RTFILE hFile;
3955 int rc = RTFileOpen(&hFile, paArgs[0].u.pszString, RTFILE_O_WRITE | RTFILE_O_CREATE_REPLACE | RTFILE_O_DENY_WRITE);
3956 if (RT_FAILURE(rc))
3957 return DBGCCmdHlpPrintf(pCmdHlp, "error: RTFileOpen(,'%s',) -> %Rrc.\n", paArgs[0].u.pszString, rc);
3958
3959 uint32_t cbRamHole = 0;
3960 CFGMR3QueryU32Def(CFGMR3GetRootU(pUVM), "RamHoleSize", &cbRamHole, MM_RAM_HOLE_SIZE_DEFAULT);
3961 uint64_t cbRam = 0;
3962 CFGMR3QueryU64Def(CFGMR3GetRootU(pUVM), "RamSize", &cbRam, 0);
3963 RTGCPHYS GCPhysEnd = cbRam + cbRamHole;
3964
3965 /*
3966 * Dump the physical memory, page by page.
3967 */
3968 RTGCPHYS GCPhys = 0;
3969 char abZeroPg[PAGE_SIZE];
3970 RT_ZERO(abZeroPg);
3971
3972 pgmLock(pVM);
3973 for (PPGMRAMRANGE pRam = pVM->pgm.s.pRamRangesXR3;
3974 pRam && pRam->GCPhys < GCPhysEnd && RT_SUCCESS(rc);
3975 pRam = pRam->pNextR3)
3976 {
3977 /* fill the gap */
3978 if (pRam->GCPhys > GCPhys && fIncZeroPgs)
3979 {
3980 while (pRam->GCPhys > GCPhys && RT_SUCCESS(rc))
3981 {
3982 rc = RTFileWrite(hFile, abZeroPg, PAGE_SIZE, NULL);
3983 GCPhys += PAGE_SIZE;
3984 }
3985 }
3986
3987 PCPGMPAGE pPage = &pRam->aPages[0];
3988 while (GCPhys < pRam->GCPhysLast && RT_SUCCESS(rc))
3989 {
3990 if ( PGM_PAGE_IS_ZERO(pPage)
3991 || PGM_PAGE_IS_BALLOONED(pPage))
3992 {
3993 if (fIncZeroPgs)
3994 {
3995 rc = RTFileWrite(hFile, abZeroPg, PAGE_SIZE, NULL);
3996 if (RT_FAILURE(rc))
3997 DBGCCmdHlpPrintf(pCmdHlp, "error: RTFileWrite -> %Rrc at GCPhys=%RGp.\n", rc, GCPhys);
3998 }
3999 }
4000 else
4001 {
4002 switch (PGM_PAGE_GET_TYPE(pPage))
4003 {
4004 case PGMPAGETYPE_RAM:
4005 case PGMPAGETYPE_ROM_SHADOW: /* trouble?? */
4006 case PGMPAGETYPE_ROM:
4007 case PGMPAGETYPE_MMIO2:
4008 {
4009 void const *pvPage;
4010 PGMPAGEMAPLOCK Lock;
4011 rc = PGMPhysGCPhys2CCPtrReadOnly(pVM, GCPhys, &pvPage, &Lock);
4012 if (RT_SUCCESS(rc))
4013 {
4014 rc = RTFileWrite(hFile, pvPage, PAGE_SIZE, NULL);
4015 PGMPhysReleasePageMappingLock(pVM, &Lock);
4016 if (RT_FAILURE(rc))
4017 DBGCCmdHlpPrintf(pCmdHlp, "error: RTFileWrite -> %Rrc at GCPhys=%RGp.\n", rc, GCPhys);
4018 }
4019 else
4020 DBGCCmdHlpPrintf(pCmdHlp, "error: PGMPhysGCPhys2CCPtrReadOnly -> %Rrc at GCPhys=%RGp.\n", rc, GCPhys);
4021 break;
4022 }
4023
4024 default:
4025 AssertFailed();
4026 case PGMPAGETYPE_MMIO:
4027 case PGMPAGETYPE_MMIO2_ALIAS_MMIO:
4028 case PGMPAGETYPE_SPECIAL_ALIAS_MMIO:
4029 if (fIncZeroPgs)
4030 {
4031 rc = RTFileWrite(hFile, abZeroPg, PAGE_SIZE, NULL);
4032 if (RT_FAILURE(rc))
4033 DBGCCmdHlpPrintf(pCmdHlp, "error: RTFileWrite -> %Rrc at GCPhys=%RGp.\n", rc, GCPhys);
4034 }
4035 break;
4036 }
4037 }
4038
4039
4040 /* advance */
4041 GCPhys += PAGE_SIZE;
4042 pPage++;
4043 }
4044 }
4045 pgmUnlock(pVM);
4046
4047 RTFileClose(hFile);
4048 if (RT_SUCCESS(rc))
4049 return DBGCCmdHlpPrintf(pCmdHlp, "Successfully saved physical memory to '%s'.\n", paArgs[0].u.pszString);
4050 return VINF_SUCCESS;
4051}
4052
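/* Syntax sketch: ".pgmphystofile <filename>" writes every page, zero pages
 * included, so file offsets line up with guest physical addresses; appending
 * the literal "nozero" skips zero and ballooned pages (offsets then drift). */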
4053#endif /* VBOX_WITH_DEBUGGER */
4054
4055/**
4056 * pvUser argument of the pgmR3CheckIntegrity*Node callbacks.
4057 */
4058typedef struct PGMCHECKINTARGS
4059{
4060 bool fLeftToRight; /**< true: left-to-right; false: right-to-left. */
4061 PPGMPHYSHANDLER pPrevPhys;
4062#ifdef VBOX_WITH_RAW_MODE
4063 PPGMVIRTHANDLER pPrevVirt;
4064 PPGMPHYS2VIRTHANDLER pPrevPhys2Virt;
4065#else
4066 void *pvFiller1, *pvFiller2;
4067#endif
4068 PVM pVM;
4069} PGMCHECKINTARGS, *PPGMCHECKINTARGS;
4070
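/* The integrity check walks each AVL tree twice, once in each direction,
 * with fLeftToRight selecting which ordering assertion applies; a tree that
 * enumerates correctly one way but not the other has corrupt linkage. */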
4071/**
4072 * Validate a node in the physical handler tree.
4073 *
4074 * @returns 0 if ok, otherwise 1.
4075 * @param pNode The handler node.
4076 * @param pvUser Pointer to the check arguments (PGMCHECKINTARGS).
4077 */
4078static DECLCALLBACK(int) pgmR3CheckIntegrityPhysHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
4079{
4080 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4081 PPGMPHYSHANDLER pCur = (PPGMPHYSHANDLER)pNode;
4082 AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
4083 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,
4084 ("pCur=%p %RGp-%RGp %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4085 AssertReleaseMsg( !pArgs->pPrevPhys
4086 || ( pArgs->fLeftToRight
4087 ? pArgs->pPrevPhys->Core.KeyLast < pCur->Core.Key
4088 : pArgs->pPrevPhys->Core.KeyLast > pCur->Core.Key),
4089 ("pPrevPhys=%p %RGp-%RGp %s\n"
4090 " pCur=%p %RGp-%RGp %s\n",
4091 pArgs->pPrevPhys, pArgs->pPrevPhys->Core.Key, pArgs->pPrevPhys->Core.KeyLast, pArgs->pPrevPhys->pszDesc,
4092 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4093 pArgs->pPrevPhys = pCur;
4094 return 0;
4095}
4096
4097#ifdef VBOX_WITH_RAW_MODE
4098
4099/**
4100 * Validate a node in the virtual handler tree.
4101 *
4102 * @returns 0 if ok, otherwise 1.
4103 * @param pNode The handler node.
4104 * @param pvUser Pointer to the check arguments (PGMCHECKINTARGS).
4105 */
4106static DECLCALLBACK(int) pgmR3CheckIntegrityVirtHandlerNode(PAVLROGCPTRNODECORE pNode, void *pvUser)
4107{
4108 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4109 PPGMVIRTHANDLER pCur = (PPGMVIRTHANDLER)pNode;
4110 AssertReleaseReturn(!((uintptr_t)pCur & 7), 1);
4111 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGv-%RGv %s\n", pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4112 AssertReleaseMsg( !pArgs->pPrevVirt
4113 || (pArgs->fLeftToRight ? pArgs->pPrevVirt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevVirt->Core.KeyLast > pCur->Core.Key),
4114 ("pPrevVirt=%p %RGv-%RGv %s\n"
4115 " pCur=%p %RGv-%RGv %s\n",
4116 pArgs->pPrevVirt, pArgs->pPrevVirt->Core.Key, pArgs->pPrevVirt->Core.KeyLast, pArgs->pPrevVirt->pszDesc,
4117 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc));
4118 for (unsigned iPage = 0; iPage < pCur->cPages; iPage++)
4119 {
4120 AssertReleaseMsg(pCur->aPhysToVirt[iPage].offVirtHandler == -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage]),
4121 ("pCur=%p %RGv-%RGv %s\n"
4122 "iPage=%d offVirtHandle=%#x expected %#x\n",
4123 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->pszDesc,
4124 iPage, pCur->aPhysToVirt[iPage].offVirtHandler, -RT_OFFSETOF(PGMVIRTHANDLER, aPhysToVirt[iPage])));
4125 }
4126 pArgs->pPrevVirt = pCur;
4127 return 0;
4128}
4129
4130
4131/**
4132 * Validate a node in the physical-to-virtual handler tree.
4133 *
4134 * @returns 0 if ok, otherwise 1.
4135 * @param pNode The handler node.
4136 * @param pvUser Pointer to the check arguments (PGMCHECKINTARGS).
4137 */
4138static DECLCALLBACK(int) pgmR3CheckIntegrityPhysToVirtHandlerNode(PAVLROGCPHYSNODECORE pNode, void *pvUser)
4139{
4140 PPGMCHECKINTARGS pArgs = (PPGMCHECKINTARGS)pvUser;
4141 PPGMPHYS2VIRTHANDLER pCur = (PPGMPHYS2VIRTHANDLER)pNode;
4142 AssertReleaseMsgReturn(!((uintptr_t)pCur & 3), ("\n"), 1);
4143 AssertReleaseMsgReturn(!(pCur->offVirtHandler & 3), ("\n"), 1);
4144 AssertReleaseMsg(pCur->Core.Key <= pCur->Core.KeyLast,("pCur=%p %RGp-%RGp\n", pCur, pCur->Core.Key, pCur->Core.KeyLast));
4145 AssertReleaseMsg( !pArgs->pPrevPhys2Virt
4146 || (pArgs->fLeftToRight ? pArgs->pPrevPhys2Virt->Core.KeyLast < pCur->Core.Key : pArgs->pPrevPhys2Virt->Core.KeyLast > pCur->Core.Key),
4147 ("pPrevPhys2Virt=%p %RGp-%RGp\n"
4148 " pCur=%p %RGp-%RGp\n",
4149 pArgs->pPrevPhys2Virt, pArgs->pPrevPhys2Virt->Core.Key, pArgs->pPrevPhys2Virt->Core.KeyLast,
4150 pCur, pCur->Core.Key, pCur->Core.KeyLast));
4157 AssertReleaseMsg((pCur->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD),
4158 ("pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4159 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
4160 if (pCur->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK)
4161 {
4162 PPGMPHYS2VIRTHANDLER pCur2 = pCur;
4163 for (;;)
4164 {
4165            pCur2 = (PPGMPHYS2VIRTHANDLER)((intptr_t)pCur2 + (pCur2->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK));
4166 AssertReleaseMsg(pCur2 != pCur,
4167 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4168 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias));
4169 AssertReleaseMsg((pCur2->offNextAlias & (PGMPHYS2VIRTHANDLER_IN_TREE | PGMPHYS2VIRTHANDLER_IS_HEAD)) == PGMPHYS2VIRTHANDLER_IN_TREE,
4170 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
4171 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4172 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
4173 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
4174 AssertReleaseMsg((pCur2->Core.Key ^ pCur->Core.Key) < PAGE_SIZE,
4175 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
4176 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4177 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
4178 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
4179 AssertReleaseMsg((pCur2->Core.KeyLast ^ pCur->Core.KeyLast) < PAGE_SIZE,
4180 (" pCur=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n"
4181 "pCur2=%p:{.Core.Key=%RGp, .Core.KeyLast=%RGp, .offVirtHandler=%#RX32, .offNextAlias=%#RX32}\n",
4182 pCur, pCur->Core.Key, pCur->Core.KeyLast, pCur->offVirtHandler, pCur->offNextAlias,
4183 pCur2, pCur2->Core.Key, pCur2->Core.KeyLast, pCur2->offVirtHandler, pCur2->offNextAlias));
4184 if (!(pCur2->offNextAlias & PGMPHYS2VIRTHANDLER_OFF_MASK))
4185 break;
4186 }
4187 }
4188
4189 pArgs->pPrevPhys2Virt = pCur;
4190 return 0;
4191}
4192
4193#endif /* VBOX_WITH_RAW_MODE */
4194
4195/**
4196 * Perform an integrity check on the PGM component.
4197 *
4198 * @returns VINF_SUCCESS if everything is fine.
4199 * @returns VBox error status after asserting on integrity breach.
4200 * @param pVM The cross context VM structure.
4201 */
4202VMMR3DECL(int) PGMR3CheckIntegrity(PVM pVM)
4203{
4204 AssertReleaseReturn(pVM->pgm.s.offVM, VERR_INTERNAL_ERROR);
4205
4206 /*
4207 * Check the trees.
4208 */
4209 int cErrors = 0;
4210    const PGMCHECKINTARGS LeftToRight = {  true, NULL, NULL, NULL, pVM }; /* not static: must capture the current pVM */
4211    const PGMCHECKINTARGS RightToLeft = { false, NULL, NULL, NULL, pVM };
4212    PGMCHECKINTARGS Args = LeftToRight;
4213    cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, true, pgmR3CheckIntegrityPhysHandlerNode, &Args);
4214    Args = RightToLeft;
4215    cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysHandlers, false, pgmR3CheckIntegrityPhysHandlerNode, &Args);
4216#ifdef VBOX_WITH_RAW_MODE
4217    Args = LeftToRight;
4218    cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->VirtHandlers, true, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4219    Args = RightToLeft;
4220    cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->VirtHandlers, false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4221    Args = LeftToRight;
4222    cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->HyperVirtHandlers, true, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4223    Args = RightToLeft;
4224    cErrors += RTAvlroGCPtrDoWithAll( &pVM->pgm.s.pTreesR3->HyperVirtHandlers, false, pgmR3CheckIntegrityVirtHandlerNode, &Args);
4225    Args = LeftToRight;
4226    cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysToVirtHandlers, true, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
4227    Args = RightToLeft;
4228    cErrors += RTAvlroGCPhysDoWithAll(&pVM->pgm.s.pTreesR3->PhysToVirtHandlers, false, pgmR3CheckIntegrityPhysToVirtHandlerNode, &Args);
4229#endif /* VBOX_WITH_RAW_MODE */
4230
4231 return !cErrors ? VINF_SUCCESS : VERR_INTERNAL_ERROR;
4232}
4233