VirtualBox

Changeset 6311 in vbox for trunk/src/VBox


Timestamp: Jan 9, 2008 6:46:06 PM (17 years ago)
Author: vboxsync
svn:sync-xref-src-repo-rev: 27180
Message: Documentation updates.
File: 1 edited

  • trunk/src/VBox/VMM/VMMR0/GMMR0.cpp

--- trunk/src/VBox/VMM/VMMR0/GMMR0.cpp (r5999)
+++ trunk/src/VBox/VMM/VMMR0/GMMR0.cpp (r6311)
@@ -28,24 +28,24 @@
  * unnecessary performance penalties.
  *
- *
  * The allocation chunks have a fixed size, defined at compile time
- * by the GMM_CHUNK_SIZE \#define.
+ * by the #GMM_CHUNK_SIZE \#define.
  *
  * Each chunk is given a unique ID. Each page also has a unique ID. The
  * relationship between the two IDs is:
- * @verbatim
-       (idChunk << GMM_CHUNK_SHIFT) | iPage
- @endverbatim
- * Where GMM_CHUNK_SHIFT is log2(GMM_CHUNK_SIZE / PAGE_SIZE) and iPage is
- * the index of the page within the chunk. This ID scheme permits efficient
- * chunk and page lookup, but it relies on the chunk size being set at compile
- * time. The chunks are organized in an AVL tree with their IDs being the keys.
+ * @code
+ *  GMM_CHUNK_SHIFT = log2(GMM_CHUNK_SIZE / PAGE_SIZE);
+ *  idPage = (idChunk << GMM_CHUNK_SHIFT) | iPage;
+ * @endcode
+ * Where iPage is the index of the page within the chunk. This ID scheme
+ * permits efficient chunk and page lookup, but it relies on the chunk size
+ * being set at compile time. The chunks are organized in an AVL tree with
+ * their IDs being the keys.
  *
  * The physical address of each page in an allocation chunk is maintained by
- * the RTR0MEMOBJ and obtained using RTR0MemObjGetPagePhysAddr. There is no
+ * the #RTR0MEMOBJ and obtained using #RTR0MemObjGetPagePhysAddr. There is no
  * need to duplicate this information (it'll cost 8 bytes per page if we did).
  *
- * So what do we need to track per page? Most importantly we need to know what
- * state the page is in:
+ * So what do we need to track per page? Most importantly we need to know
+ * which state the page is in:
  *   - Private - Allocated for (eventually) backing one particular VM page.
  *   - Shared  - Readonly page that is used by one or more VMs and treated
     
@@ -64,5 +64,5 @@
  * On 64-bit systems we will use a 64-bit bitfield per page, while on 32-bit
  * systems a 32-bit bitfield will have to suffice because of address space
- * limitations. The GMMPAGE structure shows the details.
+ * limitations. The #GMMPAGE structure shows the details.
  *
  *
     
@@ -71,6 +71,6 @@
  * The strategy for allocating pages has to take fragmentation and shared
- * pages into account, or we may end up with 2000 chunks with only
- * a few pages in each. The fragmentation wrt shared pages is that unlike
- * private pages they cannot easily be reallocated. Private pages can be
+ * pages into account, or we may end up with 2000 chunks with only
+ * a few pages in each. Shared pages cannot easily be reallocated because
+ * of the inaccurate usage accounting (see above). Private pages can be
  * reallocated by a defragmentation thread in the same manner that sharing
  * is done.
     
@@ -96,5 +96,5 @@
  * (sizeof(RTR0MEMOBJ) + sizeof(CHUNK)) / 2^CHUNK_SHIFT bytes per page.
  *
- * On Windows the per page RTR0MEMOBJ cost is 32-bit on 32-bit Windows
+ * On Windows the per page #RTR0MEMOBJ cost is 32-bit on 32-bit Windows
  * and 64-bit on 64-bit Windows (a PFN_NUMBER in the MDL). So, 64-bit per page.
  * The cost on Linux is identical, but here it's because of sizeof(struct page *).
     
@@ -104,5 +104,5 @@
  *
  * In legacy mode the page source is locked user pages and not
- * RTR0MemObjAllocPhysNC; this means that a page can only be allocated
+ * #RTR0MemObjAllocPhysNC; this means that a page can only be allocated
  * by the VM that locked it. We will make no attempt at implementing
  * page sharing on these systems, just do enough to make it all work.
     
@@ -114,5 +114,5 @@
  * two as mentioned in @ref subsec_pgmPhys_Serializing.
  *
- * @see subsec_pgmPhys_Serializing
+ * @see @ref subsec_pgmPhys_Serializing
  *
  *
     
@@ -132,5 +132,15 @@
  * daemon that will keep VMMR0.r0 in memory and enable the security measures.
  *
- * This will not be implemented this week. :-)
+ *
+ *
+ * @section sec_gmm_numa  NUMA
+ *
+ * NUMA considerations will be designed and implemented a bit later.
+ *
+ * The preliminary guess is that we will have to try to allocate memory as
+ * close as possible to the CPUs the VM is executed on (EMT and additional
+ * CPU threads). Which means it's mostly about allocation and sharing
+ * policies. Both the scheduler and the allocator interface will have to
+ * supply some NUMA info, and we'll need a way to calculate access costs.
  *
  */
     
@@ -2729,5 +2739,5 @@
     if (!pGMM->fLegacyMode)
     {
-        Log(("GMMR0MapUnmapChunk: not in legacy mode!\n"));
+        Log(("GMMR0SeedChunk: not in legacy mode!\n"));
         return VERR_NOT_SUPPORTED;
     }