Changeset 651 for trunk/kernel/mm/vmm.h


Timestamp: Nov 14, 2019, 11:50:09 AM
Author: alain
Message:

1) Improve the VMM MMAP allocator: implement the "buddy" algorithm
to allocate only aligned blocks.
2) Fix a bug in the pthread_join() / pthread_exit() mechanism.

File: 1 edited

Legend:

Unmodified : both the r640 and r651 line numbers are shown
Added      : only the r651 line number is shown, and the line is prefixed with "+"
Removed    : only the r640 line number is shown, and the line is prefixed with "-"
  • trunk/kernel/mm/vmm.h

   r640  r651
     64    64
     65    65   /*********************************************************************************************
     66          -  * This structure defines the MMAP allocator used by the VMM to dynamically handle
     67          -  * MMAP vsegs requested or released by an user process.
     68          -  * This allocator should be only used in the reference cluster.
     69          -  * - allocation policy : all allocated vsegs occupy an integer number of pages that is
     70          -  *   power of 2, and are aligned on a page boundary. The requested number of pages is
     71          -  *   rounded if required. The first_free_vpn variable defines completely the MMAP zone state.
     72          -  *   It is never decremented, as the released vsegs are simply registered in a zombi_list.
     73          -  *   The relevant zombi_list is checked first for each allocation request.
     74          -  * - release policy : a released MMAP vseg is registered in an array of zombi_lists.
     75          -  *   This array is indexed by ln(number of pages), and each entry contains the root of
     76          -  *   a local list of zombi vsegs that have the same size. The physical memory allocated
     77          -  *   for a zombi vseg descriptor is not released, to use the "list" field.
     78          -  *   This physical memory allocated for MMAP vseg descriptors is actually released
     79          -  *   when the VMM is destroyed.
           66    +  * This structure defines the MMAP allocator used by the VMM to dynamically handle MMAP vsegs
           67    +  * requested or released by a user process. It must only be used in the reference cluster.
           68    +  * - allocation policy :
           69    +  *   This allocator implements the buddy algorithm. All allocated vsegs occupy an integer
           70    +  *   number of pages that is a power of 2, and are aligned (vpn_base is a multiple of vpn_size).
           71    +  *   The requested number of pages is rounded up if required. The global allocator state is
           72    +  *   completely defined by the free_list_root[] array indexed by the vseg order.
           73    +  *   These free lists are local, but are implemented as xlists because we use the existing
           74    +  *   vseg.xlist field to register a free vseg in its free list.
           75    +  * - release policy :
           76    +  *   A released vseg is recursively merged with its "buddy" vseg when that buddy is free, in
           77    +  *   order to build the largest possible aligned free vsegs. The resulting vseg.vpn_size
           78    +  *   field is updated.
           79    +  * Implementation note:
           80    +  * The only significant (and documented) fields in the vsegs registered in the MMAP allocator
           81    +  * free lists are "xlist", "vpn_base", and "vpn_size".
     80    82    ********************************************************************************************/
     81    83

     85    87       vpn_t          vpn_base;           /*! first page of MMAP zone                          */
     86    88       vpn_t          vpn_size;           /*! number of pages in MMAP zone                     */
     87          -      vpn_t          first_free_vpn;     /*! first free page in MMAP zone                     */
     88          -      xlist_entry_t  zombi_list[32];     /*! array of roots of released vsegs lists           */
           89    +      xlist_entry_t  free_list_root[CONFIG_VMM_HEAP_MAX_ORDER + 1];  /* roots of free lists   */
     89    90   }
     90    91   mmap_mgr_t;
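
The allocation and release policies above reduce to two small computations: rounding a request up to a power-of-2 order, and locating the "buddy" of an aligned block by flipping one bit of its base VPN. The stand-alone C sketch below only illustrates these two steps; the helper names and example values are hypothetical and are not the actual vmm.c code.

#include <stdio.h>
#include <stdint.h>

typedef uint32_t vpn_t;

/* round a requested number of pages up to a power of 2, and return the order */
static uint32_t npages_to_order( vpn_t npages )
{
    uint32_t order = 0;
    while( ((vpn_t)1 << order) < npages ) order++;
    return order;
}

/* for an aligned block of 2^order pages starting at <vpn_base> (an offset from
 * the MMAP zone base), the buddy block is found by flipping bit <order>       */
static vpn_t buddy_base( vpn_t vpn_base , uint32_t order )
{
    return vpn_base ^ ((vpn_t)1 << order);
}

int main( void )
{
    vpn_t    req   = 5;                        /* request 5 pages              */
    uint32_t order = npages_to_order( req );   /* rounded up to 8 => order 3   */

    printf( "request %u pages -> order %u (%u pages)\n" ,
            (unsigned)req , (unsigned)order , 1u << order );
    printf( "buddy of the order-3 block at vpn 8 starts at vpn %u\n" ,
            (unsigned)buddy_base( 8 , order ) );   /* 8 ^ 8 = 0                */
    return 0;
}

On release, the same buddy computation would be repeated at increasing orders, merging the freed vseg with its buddy as long as that buddy is itself free, which is how the largest possible aligned free vsegs are rebuilt.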
     
    103   104    * 2. The VSL contains only local vsegs, but it is implemented as an xlist, and protected by
    104   105    *    a remote_rwlock, because it can be accessed by a thread running in a remote cluster.
    105          -  *    An exemple is the vmm_fork_copy() function.
          106    +  *    An example is the vmm_fork_copy() function.
    106   107    * 3. The GPT in the reference cluster can be directly accessed by remote threads to handle
    107   108    *    false page-fault (page is mapped in the reference GPT, but the PTE copy is missing
     
    119   120
    120   121       stack_mgr_t        stack_mgr;           /*! embedded STACK vsegs allocator              */
          122    +
    121   123       mmap_mgr_t         mmap_mgr;            /*! embedded MMAP vsegs allocator               */
    122   124
     
    156   158    * call to the vmm_user_init() function after an exec() syscall.
    157   159    * It removes from the VMM of the process identified by the <process> argument all
    158          -  * non kernel vsegs (i.e. all user vsegs), by calling the vmm_remove_vseg() function.
          160    +  * user vsegs, by calling the vmm_remove_vseg() function.
    159   161    * - the vsegs are removed from the VSL.
    160   162    * - the corresponding GPT entries are removed from the GPT.
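
As a rough illustration of the reset policy just described (remove every user vseg, keep the kernel vsegs, and release the corresponding GPT entries), here is a self-contained sketch; the toy list and helpers are invented for the example and do not match the real xlist-based VSL or the real vmm_remove_vseg() code.

#include <stddef.h>

typedef enum { TOY_VSEG_KERNEL , TOY_VSEG_USER } toy_vseg_kind_t;

typedef struct toy_vseg_s
{
    toy_vseg_kind_t     kind;
    struct toy_vseg_s * next;
}
toy_vseg_t;

/* stand-in for vmm_remove_vseg(): would unmap the GPT entries and free the vseg */
static void toy_remove_vseg( toy_vseg_t * vseg )
{
    (void)vseg;
}

/* remove every user vseg from the list, keep the kernel vsegs */
static void toy_user_reset( toy_vseg_t ** vsl_root )
{
    toy_vseg_t ** link = vsl_root;

    while( *link != NULL )
    {
        toy_vseg_t * vseg = *link;

        if( vseg->kind == TOY_VSEG_USER )
        {
            *link = vseg->next;          /* unlink the vseg from the VSL       */
            toy_remove_vseg( vseg );     /* then release GPT entries and vseg  */
        }
        else
        {
            link = &vseg->next;          /* keep kernel vsegs in the VSL       */
        }
    }
}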
     
    279   281   /*********************************************************************************************
    280   282    * This function allocates memory for a vseg descriptor, initialises it, and register it
    281          -  * in the VSL of the local process descriptor, that must be the reference process.
    282          -  * - For the FILE, ANON, & REMOTE types, it does not use the <base> and <size> arguments,
    283          -  *   but uses the specific MMAP virtual memory allocator.
          283    +  * in the VSL of the local process descriptor.
          284    +  * - For the FILE, ANON, & REMOTE types, it does not use the <base> argument, but uses
          285    +  *   the specific VMM MMAP allocator.
    284   286    * - For the STACK type, it does not use the <base> and <size> arguments, but uses the
    285          -  *   and the <base> argument the specific STACK virtual memory allocator.
          287    +  *   specific VMM STACK allocator.
    286   288    * It checks collision with pre-existing vsegs.
    287   289    * To comply with the "on-demand" paging policy, this function does NOT modify the GPT,
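
The base-address selection described in this comment can be pictured as a small dispatch on the vseg type: the MMAP types get their base from the buddy MMAP allocator, the STACK type from the STACK allocator, and the other types use the <base> argument directly. The sketch below only illustrates that dispatch; the enum values and helpers are hypothetical and do not match the actual vmm.h declarations.

#include <stdint.h>

typedef uint32_t vpn_t;

typedef enum { TOY_CODE , TOY_DATA , TOY_STACK , TOY_FILE , TOY_ANON , TOY_REMOTE } toy_vseg_type_t;

/* stand-ins for the two allocators embedded in the vmm_t structure */
static vpn_t toy_mmap_alloc ( vpn_t npages ) { (void)npages; return 0x1000; }
static vpn_t toy_stack_alloc( void )         { return 0x2000; }

/* choose the first page of the new vseg according to its type */
static vpn_t toy_select_vpn_base( toy_vseg_type_t type ,
                                  vpn_t           requested_base ,
                                  vpn_t           npages )
{
    switch( type )
    {
        case TOY_FILE:                    /* MMAP types : base chosen by the   */
        case TOY_ANON:                    /* buddy MMAP allocator              */
        case TOY_REMOTE:
            return toy_mmap_alloc( npages );

        case TOY_STACK:                   /* STACK type : base chosen by the   */
            return toy_stack_alloc();     /* STACK allocator                   */

        default:                          /* CODE, DATA, ... : base given by   */
            return requested_base;        /* the <base> argument               */
    }
}

For the STACK type the allocator also fixes the vseg size, which is why both <base> and <size> are ignored; the MMAP types only delegate the choice of <base>.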