
Data replication & distribution policy

The replication / distribution policy for user processes has two goals: enforce locality (as much as possible), and avoid contention (the main goal).

  • The read-only segment containing the user code is replicated in all clusters where at least one thread is using it.
  • The private segment containing the stack for a given thread is placed in the same cluster as the thread using it.
  • The shared segment containing the global data is distributed on all clusters as regularly as possible to avoid contention.
  • The segments dynamically allocated by the mmap() system call are placed as described below.

To actually control data placement on the physical memory banks, the kernel uses the paged virtual memory hardware (MMU) to map a virtual segment to a selected physical memory bank.

This replication / distribution policy is implemented by the Virtual Memory Manager (in the vmm.h / vmm.c files).

A vseg is a contiguous memory zone in the process virtual space, such that all addresses in this vseg can be accessed by the process without a segmentation violation: if the corresponding page is not mapped, the page fault is handled by the kernel, and a physical page is dynamically allocated (and initialized if required). A vseg always contains an integer number of pages. Depending on its type, a vseg has specific attributes regarding access rights, replication policy, and distribution policy. The vseg descriptor is defined by the vseg_t structure in the vseg.h file.
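To make this more concrete, here is a minimal sketch of such a descriptor. The field names and types below are assumptions chosen for exposition, not the actual vseg_t definition from the vseg.h file.

    #include <stdint.h>

    /* Illustrative sketch only: the real vseg_t is defined in vseg.h. */
    typedef enum { VSEG_STACK, VSEG_CODE, VSEG_DATA,
                   VSEG_ANON,  VSEG_FILE, VSEG_REMOTE } vseg_type_t;

    typedef struct vseg_s
    {
        vseg_type_t    type;       /* one of the six types described below  */
        uint32_t       vpn_base;   /* first virtual page number             */
        uint32_t       vpn_size;   /* number of pages (always an integer)   */
        uint32_t       flags;      /* access rights / replication policy    */
        uint32_t       cxy;        /* mapping cluster for a localized vseg  */
        struct vseg_s *next;       /* link in the VSL of the owner process  */
    } vseg_t;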

For each process P, the process descriptor is replicated in all clusters containing at least one thread of P, and these clusters are called active clusters. More precisely, the virtual memory manager VMM(P,K) of process P in an active cluster K contains two main structures, illustrated by the sketch after this list:

  • the VSL(P,K) is the list of all vsegs registered for process P in cluster K,
  • the GPT(P,K) is the generic page table, defining the actual physical mapping of those vsegs.
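A minimal sketch of this per-cluster container is given below, under the same caveat: the names are assumptions, and the real structures live in the vmm.h / vmm.c files.

    #include <stdint.h>

    typedef struct vseg_s vseg_t;   /* vseg descriptor (see sketch above) */
    typedef struct gpt_s  gpt_t;    /* generic page table, kept opaque    */

    /* Illustrative sketch of VMM(P,K): the copy of the virtual memory
     * manager of process P stored in active cluster K.                  */
    typedef struct vmm_s
    {
        vseg_t  *vsl_root;   /* VSL(P,K): vsegs registered for P in K     */
        uint32_t vsegs_nr;   /* number of registered vsegs                */
        gpt_t   *gpt;        /* GPT(P,K): physical mapping of those vsegs */
    } vmm_t;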

1) User segment types and attributes

  • A vseg is public when it can be accessed by any thread T of the process, whatever the cluster running T. It is private when it can only be accessed by the threads running in the cluster containing the physical memory bank where this vseg is mapped. A private vseg is entirely mapped in one single cluster K.
  • For a public vseg, ALMOS-MKH implements a global mapping: in all clusters, a given virtual address is mapped to the same physical address. For a private vseg, ALMOS-MKH implements a local mapping: the same virtual address can be mapped to different physical addresses in different clusters.
  • A public vseg can be localized (all vseg pages are mapped in the same cluster) or distributed (different pages are mapped on different clusters, using the least significant bits of the virtual page number (VPN) as distribution key). A private vseg is always localized.

ALMOS-MKH defines six vseg types, summarized in the table below:

type   | visibility | placement   | access     | physical mapping
-------|------------|-------------|------------|----------------------------------
STACK  | private    | localized   | Read Write | one physical mapping per thread
CODE   | private    | localized   | Read Only  | one physical mapping per cluster
DATA   | public     | distributed | Read Write | one single physical mapping
ANON   | public     | localized   | Read Write | one per mmap(anon)
FILE   | public     | localized   | Read Write | one per mmap(file)
REMOTE | public     | localized   | Read Write | one per remote_mmap()
  1. CODE: This private vseg contains the user application code. ALMOS-MKH creates one CODE vseg per active cluster. For a process P, the CODE vseg is registered in VSL(P,Z) when the process is created in the reference cluster Z. In the other clusters X, the CODE vseg is registered in VSL(P,X) when a page fault is signaled by a thread of P running in cluster X. In each active cluster X, the CODE vseg is localized and physically mapped in cluster X.
  2. DATA: This vseg contains the user application global data. ALMOS-MKH creates one single DATA vseg per process, registered in the reference VSL(P,Z) when process P is created in the reference cluster Z. In the other clusters X, the DATA vseg is registered in VSL(P,X) when a page fault is signaled by a thread of P running in cluster X. To avoid contention, this vseg is physically distributed on all clusters, with a page granularity: for each page, the physical mapping is defined by the LSB bits of the page VPN (see the sketch after this list).
  3. STACK: This private vseg contains the execution stack of a thread. For each thread T of process P running in cluster X, ALMOS-MKH creates one STACK vseg. This vseg is registered in VSL(P,X) when the thread descriptor is created in cluster X. To enforce locality, this vseg is physically mapped in cluster X.
  4. ANON: This type of vseg is dynamically created by ALMOS-MKH to serve an anonymous mmap() system call executed by a client thread running in cluster X. The first vseg registration and the physical mapping are done by the reference cluster Z, but the vseg is mapped in the client cluster X.
  5. FILE: This type of vseg is dynamically created by ALMOS-MKH to serve a file-based mmap() system call executed by a client thread running in cluster X. The first vseg registration and the physical mapping are done by the reference cluster Z, but the vseg is mapped in the cluster Y containing the file cache.
  6. REMOTE: This type of vseg is dynamically created by ALMOS-MKH to serve a remote mmap() system call, where a client thread running in cluster X requests the creation of a new vseg mapped in another cluster Y. The first vseg registration and the physical mapping are done by the reference cluster Z, but the vseg is mapped in the cluster Y specified by the user.
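The page-granularity distribution used for the DATA vseg can be illustrated by the following self-contained sketch, which assumes a power-of-two number of clusters (the helper name is hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the DATA distribution key: the least significant bits of
     * the virtual page number (VPN) select the cluster owning the page.  */
    static uint32_t data_vpn_to_cluster( uint32_t vpn, uint32_t nb_clusters )
    {
        return vpn & (nb_clusters - 1);   /* VPN modulo nb_clusters */
    }

    int main( void )
    {
        /* with 4 clusters, consecutive pages rotate over clusters 0..3 */
        for( uint32_t vpn = 0 ; vpn < 8 ; vpn++ )
            printf( "vpn %u -> cluster %u\n", (unsigned)vpn,
                    (unsigned)data_vpn_to_cluster( vpn, 4 ) );
        return 0;
    }

With this key, threads accessing consecutive pages of the DATA vseg address different physical memory banks, which is exactly the intended contention avoidance.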

The replication of the VSL(P,K) and GPT(P,K) kernel structures creates a coherence problem for the non-private vsegs.

  • A VSL(P,K) contains all private vsegs in cluster K, but contains only the public vsegs that have actually been accessed by a thread of P running in cluster K. Only the reference process descriptor, stored in the reference cluster Z, contains the complete list VSL(P,Z) of all public vsegs for process P.
  • A GPT(P,K) contains all mapped entries corresponding to private vsegs. For public vsegs, it contains only the entries corresponding to pages that have been accessed by a thread running in cluster K. Only the reference cluster Z contains the complete page table GPT(P,Z) of all mapped entries for process P.

Therefore, the process descriptors other than the reference one can be considered as read-only caches. When a given vseg or a given page table entry must be removed by the kernel, this modification must be done first in the reference cluster, and then broadcast to all other clusters for update.
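The following self-contained toy model sketches this update protocol. All names and structures are illustrative assumptions (the actual kernel relies on its own RPC infrastructure to reach the remote clusters):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdbool.h>

    #define NB_CLUSTERS 4

    /* Toy model: process copies in non-reference clusters behave as
     * read-only caches, so a vseg removal is applied in the reference
     * cluster first, and then broadcast to the other active clusters. */
    typedef struct
    {
        bool active[NB_CLUSTERS];   /* clusters holding a copy of P */
        int  ref_cxy;               /* reference cluster Z          */
    } process_model_t;

    static void local_remove_vseg( int cxy, uint32_t vpn_base )
    {
        printf( "cluster %d : vseg at vpn %#x removed from VSL/GPT\n",
                cxy, (unsigned)vpn_base );
    }

    static void remove_vseg( process_model_t *p, uint32_t vpn_base )
    {
        /* 1. update the reference cluster first */
        local_remove_vseg( p->ref_cxy, vpn_base );

        /* 2. broadcast the update to the other active clusters */
        for( int cxy = 0 ; cxy < NB_CLUSTERS ; cxy++ )
            if( p->active[cxy] && (cxy != p->ref_cxy) )
                local_remove_vseg( cxy, vpn_base );
    }

    int main( void )
    {
        process_model_t p = { .active = { true, true, false, true },
                              .ref_cxy = 0 };
        remove_vseg( &p, 0x300 );
        return 0;
    }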

2) User process virtual space organisation

The virtual space of a user process P is split into 5 fixed-size zones, defined by configuration parameters. Each zone contains one or several vsegs, as described below.

The utils zone

It is located in the lower part of the virtual space, and starts at address 0. It contains the three vsegs kentry, args, and envs, whose sizes are defined by specific configuration parameters.

  • The kentry vseg (CODE type) contains the HAL-specific code that must be executed to enter/exit the kernel from user space (on interrupts, exceptions, or syscalls). It also contains the HAL-specific code for context switch. For some architectures (namely the 32-bit TSAR architecture), this vseg must be identity mapped.
  • The args vseg (DATA type) contains the process main() arguments.
  • The envs vseg (DATA type) contains the process environment variables.

The elf zone

It is located on top of the utils zone, and starts at the address defined by the CONFIG_VSPACE_ELF_BASE parameter. It contains the text vseg (CODE type) and the data vseg (DATA type), defining the process binary code and global data. The actual vseg base addresses and sizes are defined in the .elf file and reported in the boot_info structure by the boot loader.

The heap zone

It is located on top of the elf zone, and starts at the address defined by the CONFIG_VSPACE_HEAP_BASE parameter. It contains all vsegs dynamically allocated or released by the mmap() / munmap() system calls (i.e. the FILE / ANON / REMOTE types). The VMM implements a specific MMAP allocator for this zone, based on the buddy algorithm.
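The role of the buddy algorithm here is to serve each request with a power-of-two number of pages, so that a released vseg can be merged with its buddy. A minimal sketch of the size-rounding step, assuming a 4 KB page size (the helper name is hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096

    /* Round an mmap() request up to the next power-of-two number of
     * pages, as required by a buddy allocation policy.               */
    static uint32_t mmap_vseg_pages( uint32_t bytes )
    {
        uint32_t pages = (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
        uint32_t order = 0;
        while( (1u << order) < pages ) order++;
        return 1u << order;
    }

    int main( void )
    {
        /* a 20 KB anonymous mmap() is served by an 8-page (32 KB) vseg */
        printf( "%u pages\n", (unsigned)mmap_vseg_pages( 20 * 1024 ) );
        return 0;
    }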

The stack zone

It is located on top of the heap zone, and starts at the address defined by the CONFIG_VSPACE_STACK_BASE parameter. It contains an array of fixed-size slots, where each slot contains one STACK vseg. The size of a slot is defined by the CONFIG_VSPACE_STACK_SIZE parameter. In each slot, the first page is not mapped, in order to detect stack overflows. As threads are dynamically created and destroyed, the VMM implements a specific STACK allocator for this zone, using a bitmap vector.
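A self-contained toy model of such a bitmap-based STACK allocator is sketched below; the two constants stand in for the actual configuration parameters and are assumptions:

    #include <stdint.h>
    #include <stdio.h>

    #define STACK_BASE 0x80000000u  /* stands for CONFIG_VSPACE_STACK_BASE */
    #define STACK_SIZE 0x00100000u  /* stands for CONFIG_VSPACE_STACK_SIZE */

    static uint32_t stack_bitmap = 0;   /* bit i set <=> slot i allocated */

    /* Return the base address of a free slot, or 0 when all 32 slots
     * are in use. The kernel additionally leaves the first page of the
     * slot unmapped to catch stack overflows (not modeled here).       */
    static uint32_t stack_alloc( void )
    {
        for( uint32_t i = 0 ; i < 32 ; i++ )
        {
            if( (stack_bitmap & (1u << i)) == 0 )
            {
                stack_bitmap |= (1u << i);
                return STACK_BASE + (i * STACK_SIZE);
            }
        }
        return 0;
    }

    static void stack_free( uint32_t base )
    {
        uint32_t i = (base - STACK_BASE) / STACK_SIZE;
        stack_bitmap &= ~(1u << i);
    }

    int main( void )
    {
        uint32_t s0 = stack_alloc();   /* slot 0 */
        uint32_t s1 = stack_alloc();   /* slot 1 */
        printf( "stacks at %#x and %#x\n", (unsigned)s0, (unsigned)s1 );
        stack_free( s0 );              /* slot 0 can be reused */
        return 0;
    }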