Changeset 6311 in vbox for trunk/src/VBox
- Timestamp: Jan 9, 2008 6:46:06 PM
- svn:sync-xref-src-repo-rev: 27180
- Files: 1 edited
trunk/src/VBox/VMM/VMMR0/GMMR0.cpp
--- trunk/src/VBox/VMM/VMMR0/GMMR0.cpp (r5999)
+++ trunk/src/VBox/VMM/VMMR0/GMMR0.cpp (r6311)
@@ -28,24 +28,24 @@
  * unnecessary performance penalties.
  *
- *
  * The allocation chunks has fixed sized, the size defined at compile time
- * by the GMM_CHUNK_SIZE \#define.
+ * by the #GMM_CHUNK_SIZE \#define.
  *
  * Each chunk is given an unquie ID. Each page also has a unique ID. The
  * relation ship between the two IDs is:
- * @verbatim
-   (idChunk << GMM_CHUNK_SHIFT) | iPage
-   @endverbatim
- * Where GMM_CHUNK_SHIFT is log2(GMM_CHUNK_SIZE / PAGE_SIZE) and iPage is
- * the index of the page within the chunk. This ID scheme permits for efficient
- * chunk and page lookup, but it relies on the chunk size to be set at compile
- * time. The chunks are organized in an AVL tree with their IDs being the keys.
+ * @code
+ *      GMM_CHUNK_SHIFT = log2(GMM_CHUNK_SIZE / PAGE_SIZE);
+ *      idPage = (idChunk << GMM_CHUNK_SHIFT) | iPage;
+ * @endcode
+ * Where iPage is the index of the page within the chunk. This ID scheme
+ * permits for efficient chunk and page lookup, but it relies on the chunk size
+ * to be set at compile time. The chunks are organized in an AVL tree with their
+ * IDs being the keys.
  *
  * The physical address of each page in an allocation chunk is maintained by
- * the RTR0MEMOBJ and obtained using RTR0MemObjGetPagePhysAddr. There is no
+ * the #RTR0MEMOBJ and obtained using #RTR0MemObjGetPagePhysAddr. There is no
  * need to duplicate this information (it'll cost 8-bytes per page if we did).
  *
- * So what do we need to track per page? Most importantly we need to know what
- * state the page is in:
+ * So what do we need to track per page? Most importantly we need to know
+ * which state the page is in:
  *   - Private - Allocated for (eventually) backing one particular VM page.
  *   - Shared - Readonly page that is used by one or more VMs and treated
@@ -64,4 +64,4 @@
  * On 64-bit systems we will use a 64-bit bitfield per page, while on 32-bit
  * systems a 32-bit bitfield will have to suffice because of address space
- * limitations. The GMMPAGE structure shows the details.
+ * limitations. The #GMMPAGE structure shows the details.
  *
@@ -71,6 +71,6 @@
  * The strategy for allocating pages has to take fragmentation and shared
  * pages into account, or we may end up with with 2000 chunks with only
- * a few pages in each. The fragmentation wrt shared pages is that unlike
- * private pages they cannot easily be reallocated. Private pages can be
+ * a few pages in each. Shared pages cannot easily be reallocated because
+ * of the inaccurate usage accounting (see above). Private pages can be
  * reallocated by a defragmentation thread in the same manner that sharing
  * is done.
@@ -96,5 +96,5 @@
  * (sizeof(RT0MEMOBJ) + sizof(CHUNK)) / 2^CHUNK_SHIFT bytes per page.
  *
- * On Windows the per page RTR0MEMOBJ cost is 32-bit on 32-bit windows
+ * On Windows the per page #RTR0MEMOBJ cost is 32-bit on 32-bit windows
  * and 64-bit on 64-bit windows (a PFN_NUMBER in the MDL). So, 64-bit per page.
  * The cost on Linux is identical, but here it's because of sizeof(struct page *).
@@ -104,5 +104,5 @@
  *
  * In legacy mode the page source is locked user pages and not
- * RTR0MemObjAllocPhysNC, this means that a page can only be allocated
+ * #RTR0MemObjAllocPhysNC, this means that a page can only be allocated
  * by the VM that locked it. We will make no attempt at implementing
  * page sharing on these systems, just do enough to make it all work.
@@ -114,4 +114,4 @@
  * two as metioned in @ref subsec_pgmPhys_Serializing.
  *
- * @see subsec_pgmPhys_Serializing
+ * @see @ref subsec_pgmPhys_Serializing
  *
@@ -132,5 +132,15 @@
  * daemon the will keep VMMR0.r0 in memory and enable the security measures.
  *
- * This will not be implemented this week. :-)
+ *
+ *
+ * @section sec_gmm_numa NUMA
+ *
+ * NUMA considerations will be designed and implemented a bit later.
+ *
+ * The preliminary guesses is that we will have to try allocate memory as
+ * close as possible to the CPUs the VM is executed on (EMT and additional CPU
+ * threads). Which means it's mostly about allocation and sharing policies.
+ * Both the scheduler and allocator interface will to supply some NUMA info
+ * and we'll need to have a way to calc access costs.
  *
  */
@@ -2729,5 +2739,5 @@
     if (!pGMM->fLegacyMode)
     {
-        Log(("GMMR0MapUnmapChunk: not in legacy mode!\n"));
+        Log(("GMMR0SeedChunk: not in legacy mode!\n"));
         return VERR_NOT_SUPPORTED;
     }