
Data replication & distribution policy

The replication / distribution policy has two goals: avoiding contention (the main goal), and enforcing locality whenever possible.

  • The read-only segments (type CODE) are replicated in all clusters where they are used.
  • The private segments (type STACK) are placed in the same cluster as the thread using them.
  • The shared segments (types DATA or HEAP) are distributed across all clusters as evenly as possible to avoid contention.
  • The pinned segments (types FILE or REMOTE) are placed in the specified cluster.

To actually control data placement on the physical memory banks, the kernel uses the paged virtual memory hardware (MMU) to map each virtual segment to a given physical memory bank.

This replication / distribution policy is implemented by the Virtual Memory Manager (in the vmm.h / vmm.c files).

A vseg is a contiguous memory zone in the process virtual space. Its size is always an integer number of pages. Depending on its type, a vseg has specific attributes regarding access rights, replication policy, and distribution policy. The vseg descriptor is defined by the structure vseg_t in the vseg.h file.
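The vseg descriptor can be sketched as below. This is a simplified, illustrative version: the field and enum names are assumptions for this example, and the actual vseg_t in vseg.h contains more fields.

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* Illustrative vseg types, mirroring the seven types listed below. */
typedef enum { VSEG_STACK, VSEG_CODE, VSEG_DATA, VSEG_HEAP,
               VSEG_ANON, VSEG_FILE, VSEG_REMOTE } vseg_type_t;

/* Simplified vseg descriptor: a page-aligned range [min, max) plus a type
 * that determines access rights and placement policy.                     */
typedef struct vseg_s {
    vseg_type_t type;   /* determines access rights and placement policy */
    uintptr_t   min;    /* base virtual address (page aligned)           */
    uintptr_t   max;    /* end  virtual address (min + size)             */
} vseg_t;

/* True when virtual address vaddr falls inside the vseg. */
static bool vseg_contains(const vseg_t *vseg, uintptr_t vaddr)
{
    return (vaddr >= vseg->min) && (vaddr < vseg->max);
}
```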

For each process P, the process descriptor is replicated in all clusters containing at least one thread of P, and these clusters are called active clusters. In each active cluster K, the virtual memory manager VMM(P,K) is stored in the local process descriptor, and contains two main structures:

  • the VSL(P,K) is the list of all vsegs registered for process P in cluster K,
  • the GPT(P,K) is the generic page table, defining the actual physical mapping of those vsegs.
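The two structures can be sketched as follows. Names and layout are illustrative assumptions, not the actual vmm.h definitions; the point is that each cluster searches its local VSL first, and a lookup miss is what triggers a request to the reference cluster.

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal vseg node: a virtual range [min, max) chained into the VSL. */
typedef struct vseg_s {
    uintptr_t      min, max;
    struct vseg_s *next;
} vseg_t;

/* Sketch of the per-cluster VMM(P,K). */
typedef struct vmm_s {
    vseg_t *vsl;   /* VSL(P,K): vsegs registered in this cluster        */
    void   *gpt;   /* GPT(P,K): root of the generic page table (opaque) */
} vmm_t;

/* Search the local VSL for the vseg covering vaddr. Returns NULL on a
 * miss, in which case the reference cluster must be queried.           */
static vseg_t *vsl_lookup(vmm_t *vmm, uintptr_t vaddr)
{
    for (vseg_t *v = vmm->vsl; v != NULL; v = v->next)
        if (vaddr >= v->min && vaddr < v->max)
            return v;
    return NULL;
}
```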

1) User segments types and attributes

  • A vseg is public when it can be accessed by any thread T of the process, whatever the cluster running the thread T. It is private when it can only be accessed by the threads running in the cluster containing the physical memory bank where this vseg is mapped. A private vseg is entirely mapped in one single cluster K.
  • For a public vseg, ALMOS-MK implements a global mapping: in all clusters, a given virtual address is mapped to the same physical address. For a private vseg, ALMOS-MK implements a local mapping: the same virtual address can be mapped to different physical addresses in different clusters.
  • A public vseg can be localized (all vseg pages are mapped in the same cluster), or distributed (different pages are mapped on different clusters, using the virtual page number (VPN) least significant bits as distribution key). A private vseg is always localized.
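The distribution key for a distributed vseg can be sketched as a simple modulo on the VPN. This is a hedged illustration: the helper name is hypothetical, and it assumes the number of clusters is a power of two so the modulo reduces to a mask.

```c
#include <stdint.h>

/* For a distributed public vseg, the target cluster of a page is derived
 * from the least significant bits of its virtual page number (VPN).
 * nclusters is assumed to be a power of two.                             */
static unsigned int vpn_to_cluster(uint32_t vpn, unsigned int nclusters)
{
    return (unsigned int)(vpn & (nclusters - 1));
}
```

Consecutive pages of a DATA or HEAP vseg thus land on consecutive clusters, which spreads accesses and avoids hot spots on a single memory bank.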

ALMOS-MK defines seven vseg types:

type   | scope   | placement   | access     | replication
-------+---------+-------------+------------+---------------------------------
STACK  | private | localized   | Read Write | one physical mapping per thread
CODE   | private | localized   | Read Only  | one physical mapping per cluster
DATA   | public  | distributed | Read Write | one single physical mapping
HEAP   | public  | distributed | Read Write | one single physical mapping
ANON   | public  | localized   | Read Write | one per mmap(anon)
FILE   | public  | localized   | Read Write | one per mmap(file)
REMOTE | public  | localized   | Read Write | one per remote_mmap()

  1. CODE : This private vseg contains the user application code. ALMOS-MK creates one CODE vseg per cluster. For a process P, the CODE vseg is registered in the VSL(P,Z) when the process is created in reference cluster Z. In the other clusters X, the CODE vseg is registered in VSL(P,X) when a page fault is signaled by a thread of P running in cluster X. In each cluster X, the CODE vseg is physically mapped in cluster X.
  2. DATA : This vseg contains the user application global data. ALMOS-MK creates one single DATA vseg per process, that is registered in the reference VSL(P,Z) when the process P is created in reference cluster Z. In the other clusters X, the DATA vseg is registered in VSL(P,X) when a page fault is signaled by a thread of P running in cluster X. To avoid contention, this vseg is physically distributed on all clusters. For each page, the physical mapping is decided by the reference cluster Z, but the page can be mapped on any cluster Y.
  3. HEAP : This vseg is used by the malloc() library. ALMOS-MK creates one single HEAP vseg per process, that is registered in the reference VSL(P,Z) when the process P is created in reference cluster Z. In the other clusters X, the HEAP vseg is registered in VSL(P,X) when a page fault is signaled by a thread of P running in cluster X. To avoid contention, this vseg is physically distributed on all clusters. For each page, the physical mapping is decided by the reference cluster Z, but the page can be mapped on any cluster Y.
  4. STACK : This private vseg contains the execution stack of a thread. For each thread T of process P running in cluster X, ALMOS-MK creates one STACK vseg. This vseg is registered in the VSL(P,X) when the thread descriptor is created in cluster X. To enforce locality, this vseg is physically mapped in cluster X.
  5. ANON : This type of vseg is dynamically created by ALMOS-MK to serve an anonymous mmap() system call executed by a client thread running in a cluster X. The first vseg registration and the physical mapping are done by the reference cluster Z, but the vseg is mapped in the client cluster X.
  6. FILE : This type of vseg is dynamically created by ALMOS-MK to serve a file-based mmap() system call executed by a client thread running in a cluster X. The first vseg registration and the physical mapping are done by the reference cluster Z, but the vseg is mapped in the cluster Y containing the file cache.
  7. REMOTE : This type of vseg is dynamically created by ALMOS-MK to serve a remote_mmap() system call executed by a client thread running in a cluster X. The first vseg registration and the physical mapping are done by the reference cluster Z, but the vseg is mapped in the cluster Y specified by the user.
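The attributes in the table above can be expressed as small predicates on the vseg type. The enum and function names below are assumptions for this sketch; they simply restate the table, which is useful wherever the kernel must branch on placement policy.

```c
#include <stdbool.h>

typedef enum { VSEG_STACK, VSEG_CODE, VSEG_DATA, VSEG_HEAP,
               VSEG_ANON, VSEG_FILE, VSEG_REMOTE } vseg_type_t;

/* Private vsegs use a local mapping (STACK and CODE). */
static bool vseg_is_private(vseg_type_t t)
{
    return (t == VSEG_STACK) || (t == VSEG_CODE);
}

/* Distributed vsegs spread their pages across all clusters (DATA and HEAP). */
static bool vseg_is_distributed(vseg_type_t t)
{
    return (t == VSEG_DATA) || (t == VSEG_HEAP);
}

/* Only CODE is Read Only; every other type is Read Write. */
static bool vseg_is_writable(vseg_type_t t)
{
    return t != VSEG_CODE;
}
```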

The replication of the VSL(P,K) and GPT(P,K) kernel structures creates a coherence problem for the non-private (public) vsegs.

  • A VSL(P,K) contains all private vsegs in cluster K, but contains only the public vsegs that have been actually accessed by a thread of P running in cluster K. Only the reference process descriptor stored in the reference cluster Z contains the complete list VSL(P,Z) of all public vsegs for the P process.
  • A GPT(P,K) contains all mapped entries corresponding to private vsegs. For public vsegs, it contains only the entries corresponding to pages that have been accessed by a thread running in cluster K. Only the reference cluster Z contains the complete GPT(P,Z) page table of all mapped entries for process P.

Therefore, the process descriptors - other than the reference one - are used as read-only caches. When a given vseg or a given page table entry must be removed by the kernel, this modification must be done first in the reference cluster, and then broadcast to all other clusters for update.
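The "reference first, then broadcast" rule can be illustrated with a toy model, where each cluster's cached VSL is reduced to a bitmap of registered public vsegs. All names here are hypothetical; the real kernel walks linked structures and sends remote requests instead of writing a shared array.

```c
#include <stdint.h>

#define NCLUSTERS 4

/* Toy replicated VSL: in cluster k, bit i of vsl_cache[k] is set when
 * public vseg i is registered there.                                   */
static uint32_t vsl_cache[NCLUSTERS];

/* Remove a public vseg: update the reference cluster Z first, so the
 * authoritative copy is never staler than a cache, then broadcast the
 * removal to every other cluster.                                      */
static void vseg_remove_global(unsigned int ref_cluster, unsigned int vseg_id)
{
    vsl_cache[ref_cluster] &= ~(1u << vseg_id);   /* reference copy first */
    for (unsigned int k = 0; k < NCLUSTERS; k++)  /* then all the caches  */
        if (k != ref_cluster)
            vsl_cache[k] &= ~(1u << vseg_id);
}
```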

2) User process virtual space organisation

The virtual space of a user process P in a given cluster K is split into 5 fixed-size zones, defined by configuration parameters. Each zone contains one or several vsegs, as described below.

The utils zone

It is located in the lower part of the virtual space, and starts at address 0. It contains the three vsegs kentry, args, and envs, whose sizes are defined by specific configuration parameters. The kentry vseg (CODE type) contains the code that must be executed to enter the kernel from user space. The args vseg (DATA type) contains the process main() arguments. The envs vseg (DATA type) contains the process environment variables.

The elf zone

It is located on top of the utils zone, and starts at the address defined by the CONFIG_VSPACE_ELF_BASE parameter. It contains the text vseg (CODE type) and the data vseg (DATA type) defining the process binary code and global data. The actual vseg base addresses and sizes are defined in the .elf file and reported in the boot_info structure by the boot loader.

The heap zone

It is located on top of the elf zone, and starts at the address defined by the CONFIG_VSPACE_HEAP_BASE parameter. It contains one single heap vseg, used by the malloc() library.

The mmap zone

It is located on top of the heap zone, and starts at the address defined by the CONFIG_VSPACE_MMAP_BASE parameter. It contains all vsegs of type ANON, FILE, or REMOTE that are dynamically allocated / released by the user application. The VMM implements a specific MMAP allocator for this zone, based on the buddy algorithm.
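A buddy allocator manages blocks whose sizes are powers of two. The sizing step it implies can be sketched as below; the helper name and the 4 KB page size are assumptions for this example, not the actual MMAP allocator code.

```c
#include <stddef.h>

#define PAGE_SIZE 4096

/* Round a request in bytes up to the buddy granule: the next
 * power-of-two number of pages. A buddy allocator only hands out
 * (and later coalesces) blocks of such sizes.                      */
static size_t mmap_granule_pages(size_t bytes)
{
    size_t pages = (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
    size_t granule = 1;
    while (granule < pages)
        granule <<= 1;
    return granule;
}
```

The power-of-two granularity wastes some virtual space but makes freeing cheap: a released block can always be merged with its single "buddy" of the same size.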

The stack zone

It is located on top of the mmap zone, and starts at the address defined by the CONFIG_VSPACE_STACK_BASE parameter. It contains an array of fixed-size slots, and each slot contains one stack vseg. The size of a slot is defined by the CONFIG_VSPACE_STACK_SIZE parameter. In each slot, the first page is not mapped, in order to detect stack overflows. As threads are dynamically created and destroyed, the VMM implements a specific STACK allocator for this zone, using a bitmap vector.
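A bitmap-based slot allocator of this kind can be sketched in a few lines. The names and the 32-slot limit are illustrative assumptions; the real allocator sizes its bitmap from the configuration parameters.

```c
#include <stdint.h>

#define STACK_SLOTS 32

/* One bit per fixed-size stack slot: bit i set => slot i allocated. */
static uint32_t stack_bitmap;

/* Allocate the first free slot; returns its index, or -1 when full. */
static int stack_slot_alloc(void)
{
    for (int i = 0; i < STACK_SLOTS; i++) {
        if (!(stack_bitmap & (1u << i))) {
            stack_bitmap |= (1u << i);
            return i;
        }
    }
    return -1;
}

/* Release a slot so it can be reused by a future thread. */
static void stack_slot_free(int i)
{
    stack_bitmap &= ~(1u << i);
}
```

The slot index directly determines the vseg base address (CONFIG_VSPACE_STACK_BASE plus index times the slot size), so no per-slot metadata is needed beyond the bitmap.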

Last modified on Jan 17, 2017, 11:20:24 AM