VirtualBox

Changeset 4536 in vbox


Timestamp: Sep 5, 2007 2:46:53 PM (17 years ago)
Author: vboxsync
Message: mapping cache notes.

File: 1 edited

Legend:

Removed lines show only their old (r4518) line number, added lines show only their new (r4536) line number, and unmodified lines show both.
  • trunk/src/VBox/VMM/PGM.cpp

    r4518 r4536  
    431 431 *
    432 432 *
    433  * @subsection subsec_pgmPhys_Changes           Changes
    434  *
    435  * Breakdown of the changes involved...
    436  *
    437  *
     433 *
     434 * @section sec_pgmPhys_MappingCaches   Mapping Caches
     435 *
     436 * In order to be able to map memory in and out and to be able to support
     437 * guests with more RAM than we've got virtual address space, we'll be employing
     438 * a mapping cache. There is already a tiny one for GC (see PGMGCDynMapGCPageEx)
     439 * and we'll create a similar one for ring-0 unless we decide to set up a dedicated
     440 * memory context for the HWACCM execution.
     441 *
     442 *
     443 * @subsection subsec_pgmPhys_MappingCaches_R3  Ring-3
     444 *
     445 * We've considered implementing the ring-3 mapping cache page based, but found
     446 * that this was bothersome when one had to take TLBs+SMP and portability into
     447 * account (the necessary APIs are missing on several platforms). There were
     448 * also some performance concerns with this approach which hadn't quite been
     449 * worked out.
     450 *
     451 * Instead, we'll be mapping allocation chunks into the VM process. This simplifies
     452 * matters quite a bit since we don't need to invent any new ring-0 stuff, only
     453 * some minor RTR0MEMOBJ mapping stuff. The main concern here, compared to the
     454 * previous idea, is that mapping or unmapping a 1MB chunk is more costly than
     455 * mapping or unmapping a single page, although how much more costly is uncertain.
     456 * We'll try to address this by using a very big cache, preferably bigger than the actual
     457 * VM RAM size if possible. The current VM RAM sizes should give some idea for
     458 * 32-bit boxes, while on 64-bit we can probably get away with employing an
     459 * unlimited cache.
     460 *
     461 * The cache has two parts, as already indicated: the ring-3 side and the
     462 * ring-0 side.
     463 *
     464 * The ring-0 side will be tied to the page allocator since it will operate on
     465 * the memory objects it contains. It will therefore require the first ring-0
     466 * mutex discussed in @ref subsec_pgmPhys_Serializing. We'll end up with some
     467 * double housekeeping wrt who has mapped what, I think, since both
     468 * VMMR0.r0 and RTR0MemObj will keep track of mapping relations.
     469 *
     470 * The ring-3 part will be protected by the pgm critsect. For simplicity, we'll
     471 * require anyone that desires to make changes to the mapping cache to do so
     472 * from within this critsect. Alternatively, we could employ a separate critsect
     473 * for serializing changes to the mapping cache, as this would reduce potential
     474 * contention with other threads accessing mappings unrelated to the changes
     475 * that are in progress. We can see about this later; contention will show
     476 * up in the statistics anyway, so it'll be simple to tell.
     477 *
     478 * The organization of the ring-3 part will be very much like how the allocation
     479 * chunks are organized in ring-0, that is, in an AVL tree by chunk id. To avoid
     480 * having to walk the tree all the time, we'll have a couple of lookaside entries
     481 * like we do for I/O ports and MMIO in IOM.
     482 *
     483 * The simplified flow of a PGMPhysRead/Write function:
     484 *      -# Enter the PGM critsect.
     485 *      -# Lookup GCPhys in the ram ranges and get the Page ID.
     486 *      -# Calc the Allocation Chunk ID from the Page ID.
     487 *      -# Check the lookaside entries and then the AVL tree for the Chunk ID.
     488 *         If not found in cache:
     489 *              -# Call ring-0 and request it to be mapped and supply
     490 *                 a chunk to be unmapped if the cache is maxed out already.
     491 *              -# Insert the new mapping into the AVL tree (id + R3 address).
     492 *      -# Update the relevant lookaside entry and return the mapping address.
     493 *      -# Do the read/write according to monitoring flags and everything.
     494 *      -# Leave the critsect.
     495 *
     496 *
     497 *
     498 * @section sec_pgmPhys_Changes             Changes
     499 *
     500 * Breakdown of the changes involved?
    438 501 */
    439 502
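
As an illustration of the ring-3 organization described under @subsection subsec_pgmPhys_MappingCaches_R3 above (an AVL tree keyed by chunk id, fronted by a couple of lookaside entries), here is a minimal, self-contained C++ sketch. It is not the actual PGM code: std::map stands in for the AVL tree, and the names (MappingCacheR3, ChunkMapping, chunkIdFromPageId) as well as the assumed 4KB-page / 1MB-chunk layout (256 pages per chunk) are inventions for the example only.

    #include <cstdint>
    #include <cstdio>
    #include <map>

    /* Assumed layout: 4KB pages and 1MB allocation chunks, i.e. 256 pages per
       chunk, so ChunkId = PageId >> 8. */
    static const uint32_t PAGES_PER_CHUNK_SHIFT = 8;

    /* Derive the allocation chunk id from a page id (hypothetical scheme). */
    static inline uint32_t chunkIdFromPageId(uint32_t idPage)
    {
        return idPage >> PAGES_PER_CHUNK_SHIFT;
    }

    /* One cached mapping: chunk id -> ring-3 base address of the mapped chunk. */
    struct ChunkMapping
    {
        uint32_t idChunk;   /* allocation chunk id (the tree key)      */
        uint8_t *pvR3;      /* ring-3 address the chunk is mapped at   */
    };

    /* The ring-3 mapping cache: a tree keyed by chunk id plus a couple of
       lookaside entries remembering recently used chunks, so the common case
       avoids walking the tree (std::map stands in for the AVL tree). */
    struct MappingCacheR3
    {
        std::map<uint32_t, ChunkMapping> Tree;          /* stand-in for the AVL tree */
        ChunkMapping aLookaside[2] = {};                /* tiny lookaside cache      */

        /* Look up the mapping of a chunk; returns NULL on a cache miss, in which
           case the caller would ask ring-0 to map the chunk and insert it here. */
        ChunkMapping *lookup(uint32_t idChunk)
        {
            for (ChunkMapping &entry : aLookaside)      /* 1. lookaside entries first */
                if (entry.pvR3 && entry.idChunk == idChunk)
                    return &entry;

            std::map<uint32_t, ChunkMapping>::iterator it = Tree.find(idChunk);
            if (it == Tree.end())                       /* 2. then the tree           */
                return 0;
            aLookaside[idChunk & 1] = it->second;       /* refresh a lookaside slot   */
            return &it->second;
        }
    };

    int main()
    {
        static uint8_t s_abChunk[1];                    /* pretend mapping address     */
        MappingCacheR3 cache;
        ChunkMapping chunk = { 42, s_abChunk };
        cache.Tree[chunk.idChunk] = chunk;              /* as if ring-0 just mapped it */
        ChunkMapping *pHit = cache.lookup(chunkIdFromPageId(42u << PAGES_PER_CHUNK_SHIFT));
        std::printf("chunk 42 %s\n", pHit ? "found" : "missed");
        return 0;
    }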
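
The numbered PGMPhysRead/Write flow above can likewise be sketched as a toy, compilable C++ function. Every name here (sketchPhysRead, pgmPhysGetPageId, pgmMapCacheLookup, pgmMapChunkFromRing0, the no-op lock helpers) is hypothetical scaffolding that merely mirrors the listed steps; it is not the real PGM/VMM interface, and monitoring flags are ignored.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    /* Hypothetical scaffolding so the sketch compiles stand-alone; none of this
       is the real PGM/VMM code. 4KB pages and 1MB chunks are assumed. */
    typedef uint64_t GCPHYS;

    static uint8_t  g_abFakeChunk[1024 * 1024];      /* stands in for one mapped 1MB chunk    */
    static uint32_t g_idChunkMapped = UINT32_MAX;    /* the single chunk this toy cache holds */

    static void pgmLock()   { /* enter the PGM critsect (no-op in this sketch) */ }
    static void pgmUnlock() { /* leave the critsect (no-op in this sketch)     */ }

    /* Lookup GCPhys in the ram ranges and get the Page ID (toy identity mapping). */
    static uint32_t pgmPhysGetPageId(GCPHYS GCPhys)
    {
        return (uint32_t)(GCPhys >> 12);
    }

    /* Check the lookaside entries and then the AVL tree for the Chunk ID
       (toy version: a single cached chunk). */
    static uint8_t *pgmMapCacheLookup(uint32_t idChunk)
    {
        return idChunk == g_idChunkMapped ? g_abFakeChunk : 0;
    }

    /* Call ring-0 and request the chunk to be mapped, supplying a victim chunk to
       unmap if the cache is maxed out, then insert the mapping into the tree
       (toy version: just replace the single cached chunk). */
    static uint8_t *pgmMapChunkFromRing0(uint32_t idChunk)
    {
        g_idChunkMapped = idChunk;
        return g_abFakeChunk;
    }

    /* Sketch of the simplified PGMPhysRead flow from the list above. */
    static int sketchPhysRead(void *pvDst, GCPHYS GCPhys, std::size_t cb)
    {
        pgmLock();                                             /* enter the PGM critsect         */
        uint32_t const idPage  = pgmPhysGetPageId(GCPhys);     /* ram ranges -> Page ID          */
        uint32_t const idChunk = idPage >> 8;                  /* Page ID -> Allocation Chunk ID */

        uint8_t *pbChunk = pgmMapCacheLookup(idChunk);         /* lookaside entries + tree       */
        if (!pbChunk)
            pbChunk = pgmMapChunkFromRing0(idChunk);           /* cache miss: ask ring-0         */

        std::size_t const offChunk = (std::size_t)(GCPhys & (1024 * 1024 - 1));
        std::memcpy(pvDst, pbChunk + offChunk, cb);            /* the read itself (monitoring
                                                                  flags ignored in this sketch)  */
        pgmUnlock();                                           /* leave the critsect             */
        return 0;
    }

    int main()
    {
        uint8_t abBuf[16];
        sketchPhysRead(abBuf, (GCPHYS)0x100000, sizeof(abBuf));
        std::printf("read %u bytes through the toy mapping cache\n", (unsigned)sizeof(abBuf));
        return 0;
    }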