
Data replication & distribution policy

alain.greiner@…

The replication / distribution policy of data on the physical memory banks has two goals: enforcing locality (as much as possible), and avoiding contention (which is the main goal).

The data to be placed are the virtual segments defined - at compilation time - in the virtual space of the various user processes currently running, or in the virtual space of the operating system itself.

1. General principles

To actually control the placement of all these virtual segments on the physical memory banks, the kernel uses the paged virtual memory MMU to map a virtual segment to a given physical memory bank in a given cluster.

A vseg is a contiguous memory zone in the process virtual space, defined by two values (base, size). All addresses in this interval can be accessed without segmentation violation: if the corresponding page is not mapped, the page fault is handled by the kernel, and a physical page is dynamically allocated (and initialized if required). A vseg always occupies an integer number of pages, as a given page cannot be shared by two different vsegs.

In all UNIX systems (including almos-mkh), a vseg has some specific attributes defining access rights (readable, writable, executable, cachable, etc.). But in almos-mkh, the vseg type also defines the replication and distribution policy:

  • A vseg is public when it can be accessed by any thread T of the involved process, whatever the cluster running the thread T. It is private when it can only be accessed by the threads running in the cluster containing the physical memory bank where this vseg is defined and mapped.
  • For a public vseg, ALMOS-MKH implements a global mapping: in all clusters, a given virtual address is mapped to the same physical address. For a private vseg, ALMOS-MKH implements a local mapping: the same virtual address can be mapped to different physical addresses in different clusters.
  • A public vseg can be localized (all vseg pages are mapped in the same cluster), or distributed (different pages are mapped on different clusters). A private vseg is always localized.

The vseg structure and API are defined in the almos-mkh/kernel/mm/vseg.h and almos-mkh/kernel/mm/vseg.c files.
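
The sketch below is illustrative only: it shows the kind of information a vseg descriptor must carry to implement the policy described above. The field and type names are assumptions; the real declarations are those of the files cited above.

    #include <stdint.h>

    /* Minimal sketch of a vseg descriptor (illustrative only). */
    typedef enum
    {
        VSEG_TYPE_CODE,      /* private, localized   */
        VSEG_TYPE_DATA,      /* public,  distributed */
        VSEG_TYPE_STACK,     /* private, localized   */
        VSEG_TYPE_ANON,      /* public,  localized   */
        VSEG_TYPE_FILE,      /* public,  localized   */
        VSEG_TYPE_REMOTE,    /* public,  localized   */
    } vseg_type_t;

    typedef struct vseg_s
    {
        vseg_type_t type;    /* defines the replication / distribution policy          */
        intptr_t    min;     /* base virtual address                                   */
        intptr_t    max;     /* min + size (the vseg spans an integer number of pages) */
        uint32_t    flags;   /* access rights: readable, writable, executable, ...     */
        uint32_t    cxy;     /* mapping cluster, for localized vsegs                   */
    } vseg_t;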

In all UNIX systems, the process descriptor contains the table used by the MMU to make the virtual to physical address translation. An important feature of almos-mkh is the following: to avoid contention in parallel applications creating a large number of threads in one single process P, almos-mkh replicates the process descriptor in all clusters containing at least one thread of this process. These clusters are called active clusters.

In almos-mkh, the structure used by the MMU for address translation is called the VMM (Virtual Memory Manager). For a process P in cluster K, the VMM(P,K) structure contains two main sub-structures (a minimal sketch follows the list):

  • The VSL(P,K) is the list of virtual segments registered for process P in cluster K,
  • The GPT(P,K) is the generic page table, defining the actual physical mapping for each page of each vseg.
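
The sketch below (again illustrative: the names are assumptions, not the actual almos-mkh declarations) shows how the VMM(P,K) groups these two sub-structures.

    /* Minimal sketch of the per-cluster VMM(P,K) structure (illustrative only). */
    typedef struct vmm_s
    {
        struct vseg_s * vsegs_list;   /* VSL(P,K): list of vsegs registered in this cluster */
        struct gpt_s  * gpt;          /* GPT(P,K): generic page table for this cluster      */
        /* plus locks, and the STACK and MMAP allocators described in section 5 */
    } vmm_t;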

For a given process P, the different VMM(P,K) in different clusters can have different contents, for several reasons:

  1. A private vseg can be registered in only one VSL(P,K) in cluster K, and be totally undefined in the other VSL(P,K').
  2. A public vseg can be replicated in several VSL(P,K), but the registration of a vseg in a given VSL(P,K) is done on demand: the vseg is only registered in VSL(P,K) when a thread of process P running in cluster K tries to access this vseg.
  3. Similarly, the mapping of a given virtual page VPN of a given vseg (i.e. the allocation of a physical page PPN to this VPN, and the registration of this PPN in the GPT(P,K)) is done on demand: the page table entry is updated in the GPT(P,K) only when a thread of process P in cluster K tries to access this VPN.

We have the following properties for the private vsegs:

  • The VSL(P,K) always contains all the private vsegs defined in cluster K,
  • The GPT(P,K) contains all the mapped entries corresponding to a private vseg in cluster K.

We have the following properties for the public vsegs:

  • The VSL(P,K) contains only the public vsegs that have actually been accessed by a thread of P running in cluster K.
  • Only the reference cluster KREF contains the complete VSL(P,KREF) of all public vsegs for process P.
  • The GPT(P,K) contains only the entries that have been accessed by a thread running in cluster K.
  • Only the reference cluster KREF contains the complete GPT(P,KREF) of all mapped entries of public vsegs for process P.

For the public vsegs, the VMM(P,K) structures - other than the reference one - can be considered as local caches. This creates a coherence problem, which is solved by the following rules:

  1. For the private vsegs, and the corresponding entries in the page table, the VSL(P,K) and the GPT(P,K) are only shared by the threads of P running in cluster K, and these structures can be privately handled by the local kernel instance in cluster K.
  2. When a given public vseg in the VSL, or a given entry in the GPT must be removed or modified, this modification must be done first in the reference cluster, and broadcast to all other clusters for update of local VSL or GPT copies.
  3. When a miss is detected in a non-reference cluster, the reference VMM(P,KREF) must be accessed first, to check for a possible false segmentation fault or false page fault (see the sketch after this list).
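
As an illustration of this third rule, the sketch below outlines how a miss detected in a non-reference cluster K can be resolved by consulting the reference cluster. The helper functions are hypothetical names, not the actual almos-mkh API.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical helpers (assumptions): search a VSL, and register a vseg copy. */
    extern struct vseg_s * vsl_lookup       ( struct vmm_s * vmm , intptr_t vaddr );
    extern void            vsl_register_copy( struct vmm_s * vmm , struct vseg_s * vseg );

    /* Resolve a vseg miss in a non-reference cluster K:
     * returns 0 on success, -1 on a true segmentation fault. */
    int handle_vseg_miss( struct vmm_s * local_vmm ,     /* VMM(P,K)    */
                          struct vmm_s * ref_vmm ,       /* VMM(P,KREF) */
                          intptr_t       vaddr )
    {
        /* 1. search the local VSL(P,K) */
        struct vseg_s * vseg = vsl_lookup( local_vmm , vaddr );
        if( vseg != NULL ) return 0;

        /* 2. local miss: search the reference VSL(P,KREF) */
        vseg = vsl_lookup( ref_vmm , vaddr );
        if( vseg == NULL ) return -1;                    /* true segmentation fault */

        /* 3. false segmentation fault: register a copy in the local VSL(P,K) */
        vsl_register_copy( local_vmm , vseg );
        return 0;
    }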

For more details on the VMM implementation, see the API defined in the almos-mkh/kernel/mm/vmm.h and almos-mkh/kernel/mm/vmm.c files.

2. User vsegs

This section describes the six types of user virtual segments, and the associated replication / distribution policy, defined and implemented by almos-mkh:

2.1 CODE vsegs

This private vseg contains the application code. It is replicated in all clusters. ALMOS-MK creates one CODE vseg per active cluster. For a process P, the CODE vseg is registered in the VSL(P,KREF) when the process is created in reference cluster KREF. In the other clusters K, the CODE vseg is registered in VSL(P,K) when a page fault is signaled by a thread of P running in cluster K. In each active cluster K, the CODE vseg is mapped in cluster K.

2.2 DATA vseg

This public vseg contains the user application global data. ALMOS-MK creates one single DATA vseg, that is registered in the reference VSL(P,KREF) when the process P is created in reference cluster KREF. In the other clusters K, the DATA vseg is registered in VSL(P,K) when a page fault is signaled by a thread of P running in cluster K. To avoid contention, this vseg is physically distributed on all clusters, with a page granularity. For each page, the physical mapping is defined by the LSB bits of the VPN.
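
For illustration, assuming a power-of-two number of clusters, the physical cluster holding a given page of the DATA vseg could be computed as in the sketch below (the exact formula is an assumption; it is defined by the almos-mkh implementation).

    #include <stdint.h>

    /* Target cluster of a DATA vseg page, derived from the VPN least significant
     * bits (sketch: assumes nb_clusters is a power of two). */
    static inline uint32_t vpn_to_cxy( uint32_t vpn , uint32_t nb_clusters )
    {
        return vpn & (nb_clusters - 1);      /* equivalent to vpn % nb_clusters */
    }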

2.3 STACK vseg

This private vseg contains the execution stack of a thread. Almos-mkh creates one STACK vseg for each thread of P running in cluster K. This vseg is registered in the VSL(P,K) when the thread descriptor is created in cluster K. To enforce locality, this vseg is of course mapped in cluster K.

2.4 ANON vseg

This public vseg is dynamically created by ALMOS-MK to serve an anonymous mmap system call executed by a client thread running in a cluster K. The vseg is registered in VSL(P,KREF), but the vseg is mapped in the client cluster K.

2.5 FILE vseg

This public vseg is dynamically created by ALMOS-MK to serve a file-based mmap system call executed by a client thread running in a cluster K. The vseg is registered in VSL(P,KREF), but the vseg is mapped in the cluster containing the file cache.

2.6 REMOTE vseg

This public vseg is dynamically created by ALMOS-MK to serve a remote mmap system call, where a client thread running in a cluster X requests to create a new vseg mapped in another cluster Y. The vseg is registered in VSL(P,KREF), but the vseg is mapped in cluster Y specified by the user.

2.7 Summary

This table summarizes the replication, distribution & mapping rules for user vsegs:

Type                        | Access     | Replication     | Mapping in physical space           | Allocation policy in virtual space
STACK (private, localized)  | Read Write | one per thread  | same cluster as the thread using it | dynamic (one stack allocator per cluster)
CODE (private, localized)   | Read Only  | one per cluster | same cluster as the thread using it | static (defined in .elf file)
DATA (public, distributed)  | Read Write | non replicated  | distributed on all clusters         | static (defined in .elf file)
ANON (public, localized)    | Read Write | non replicated  | same cluster as the calling thread  | dynamic (one heap allocator per process)
FILE (public, localized)    | Read Write | non replicated  | same cluster as the file cache      | dynamic (one heap allocator per process)
REMOTE (public, localized)  | Read Write | non replicated  | cluster defined by the user         | dynamic (one heap allocator per process)

3. Kernel vsegs

For any process descriptor P in a cluster K, the VMM(P,K) contains not only the user vsegs defined above, but also the kernel vsegs, because all user threads can make system calls that must access both the kernel instructions and the kernel data structures, and this requires address translation. This section describes the four types of kernel virtual segments defined by almos-mkh.

3.1. KCODE vsegs

A KCODE vseg contains the kernel code defined in the kernel.elf file. To avoid contention and improve locality, almos-mkh replicates this code in all clusters; it has already been copied in all clusters by the bootloader. In each cluster K, and for all processes P in cluster K (including the kernel process_zero), almos-mkh registers the KCODE vseg in all VSL(P,K), and maps it to the local copy in all GPT(P,K). This vseg uses only big pages, and there is no on-demand paging for this type of vseg. With this local mapping, all accesses to virtual instruction addresses are simply translated by the MMU to the local physical addresses.

WARNING : there is only one vseg defined in the kernel.elf file, but there are as many KCODE vsegs as the number of clusters. All these vsegs have the same virtual base address and the same size, but the physical addresses (defined in the GPTs) depend on the cluster, because we want to access the local copy. This is not a problem, because a KCODE vseg is a private vseg, that is accessed only by local threads.

3.2. KDATA vsegs

A KDATA vseg contains the kernel global data, statically allocated at compilation time, and defined in the kernel.elf file. To avoid contention and improve locality, almos-mkh replicates the KDATA vseg in all clusters. The corresponding data have already been copied in all clusters. As a physical copy of the KDATA vseg is available in any cluster K, almos-mkh can register this vseg in all VSL(P,K), and map it to this local copy in all GPT(P,K). With this local mapping we expect that most accesses to any KDATA segment will be done by a local thread.

WARNING : there is only one vseg defined in the kernel.elf file, and there are as many KDATA vsegs as the number of clusters. All these vsegs have the same virtual base address and the same size, but the physical addresses (defined in the GPTs) depend on the cluster, because we generally want to access the local copy. This is a problem, because there are two big differences between the KCODE and the KDATA vsegs:

  1. The values contained in the N KDATA vsegs are initially identical, as they are all defined by the same kernel.elf file. But they are not read-only, and can evolve differently in different clusters.
  2. The N KDATA vsegs are public, and define an addressable storage space N times larger than one single KDATA vseg. Even if most accesses are local, a thread running in cluster K must be able to access a global variable stored in another cluster X, or to send a request to another kernel instance in cluster X, or to scan a globally distributed structure, such as the DQDT or the VFS.

To support this inter-cluster kernel-to-kernel communication, almos-mkh defines the hal_remote_load( cxy , ptr ) and hal_remote_store( cxy , ptr ) functions, where ptr is a normal pointer (in kernel virtual space) on a variable stored in the KDATA vseg, and cxy is the remote cluster identifier. Notice that a given global variable is now identified by an extended pointer XPTR( cxy , ptr ). With these remote access primitives, any kernel instance in cluster K can access any global variable in any cluster. Notice that local accesses can use the normal pointers in virtual kernel space, as the virtual addresses are simply translated by the MMU to the local physical addresses.

In other words, almos-mkh clearly distinguishes the local accesses, which can use standard pointers, from the remote accesses, which must use extended pointers. This can be seen as a constraint, but it also helps to improve locality, and to identify (and remove) contention.

The remote access primitives API is defined in the almos-mkh/hal/generic/hal_remote.h file.
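
As a usage illustration only (the exact types and prototypes are those of hal_remote.h, and may differ from the ones declared in this sketch), a kernel instance could read a KDATA variable located in a remote cluster as follows.

    #include <stdint.h>

    typedef uint32_t cxy_t;                    /* cluster identifier                */

    /* assumed prototype, following the hal_remote_load( cxy , ptr ) form above */
    extern uint32_t hal_remote_load( cxy_t cxy , void * ptr );

    uint32_t kdata_counter;                    /* global variable in the KDATA vseg */

    /* Read the copy of kdata_counter located in cluster cxy: the same virtual
     * address &kdata_counter designates a different physical copy in each cluster,
     * so the (cxy, pointer) pair, i.e. the extended pointer, identifies the copy. */
    uint32_t read_remote_counter( cxy_t cxy )
    {
        return hal_remote_load( cxy , &kdata_counter );
    }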

3.3. KHEAP vsegs

The KHEAP vsegs are used to store dynamically allocated kernel structures, such as the user process descriptors, the thread descriptors, the vseg descriptors, the file descriptors, etc. These structures can be requested by any thread running in any cluster, and are defined as global variables, that can be accessed by any thread. To avoid contention and improve locality, almos-mkh builds a physically distributed kernel heap. In each cluster K, almos-mkh registers this KHEAP vseg in all VSL(P,K), and maps it in all GPT(P,K). With this local mapping, a kernel structure requested by a thread running in any cluster is always allocated in the local physical memory.

WARNING : To unify the access to remote data (i.e. data stored in a remote cluster), almos-mkh uses the same policy for KHEAP and KDATA vsegs: all KHEAP segments have the same virtual base address. The local accesses to locally allocated kernel structures can use normal pointers, that are translated by the MMU to local physical addresses. The remote accesses to remotely allocated kernel structures must use the hal_remote_load() and hal_remote_store() functions and handle extended pointers.

3.4. KDEV vsegs

Finally, the KDEV vsegs are associated with the peripherals. There is one KDEV vseg per chdev (i.e. per channel device).

4. Address Translation for kernel vsegs

The detailed implementation of the virtual to physical address translation depends on the target architecture.

4.1 TSAR-MIPS32

As the TSAR architecture uses 32-bit cores (to reduce the power consumption), the virtual space is bounded to 4 Gbytes.

But the TSAR architecture provides two non-standard, but very useful, features to simplify the virtual to physical address translation for kernel vsegs:

  1. The TSAR 40-bit physical address has a specific format: it is the concatenation of an 8-bit CXY field and a 32-bit LPADDR field, where CXY defines the cluster identifier, and LPADDR is the local physical address inside the cluster (a sketch of this format follows the list).

  2. The MIPS32 core used by the TSAR architecture defines, besides the standard MMU, another non-standard hardware mechanism for address translation: a 40-bit physical address is simply built by appending, to each 32-bit virtual address, an 8-bit extension contained in a software-controllable register, called DATA_PADDR_EXT.
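
A minimal sketch of this physical address format (illustrative, following the 8-bit / 32-bit split described above):

    #include <stdint.h>

    /* Build a TSAR 40-bit physical address: CXY (8 bits) || LPADDR (32 bits). */
    static inline uint64_t tsar_paddr( uint32_t cxy , uint32_t lpaddr )
    {
        return ((uint64_t)(cxy & 0xFF) << 32) | lpaddr;
    }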

In the TSAR architecture, and for any process P in any cluster K, almos-mkh registers only one extra KCODE vseg in the VMM(P,K) for kernel addressing, because almos-mkh uses the INST-MMU for instruction address translation, but does NOT use the DATA-MMU for data address translation: when a core enters the kernel, the DATA-MMU is deactivated, and it is only reactivated when the core returns to user code.

When the value contained in the extension register is the local cluster identifier, any local kernel structure stored in the KDATA or KHEAP segments is accessed directly through its local physical address (identity mapping). To access a remote kernel structure, almos-mkh must use the hardware-dependent remote access functions presented in section 3.2. For the TSAR architecture, these load/store functions simply modify the extension register DATA_PADDR_EXT before the memory access, and restore it after the memory access.
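
The sketch below illustrates this mechanism for a 32-bit load. It is only a sketch (the actual code is in the file cited below), and the two helpers used to read and write the DATA_PADDR_EXT register are hypothetical names.

    #include <stdint.h>

    /* Hypothetical helpers (assumptions): read / write the DATA_PADDR_EXT register. */
    extern uint32_t mips32_get_data_ext( void );
    extern void     mips32_set_data_ext( uint32_t ext );

    /* Sketch of a remote 32-bit load on TSAR: point the physical address
     * extension to the remote cluster, perform the access, then restore it. */
    uint32_t tsar_remote_load_32( uint32_t cxy , void * ptr )
    {
        uint32_t save = mips32_get_data_ext();       /* save current extension   */
        mips32_set_data_ext( cxy );                  /* select the remote cluster */
        uint32_t val  = *(volatile uint32_t *)ptr;   /* the actual remote access  */
        mips32_set_data_ext( save );                 /* restore local extension   */
        return val;
    }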

This pseudo identity mapping imposes some constraints on the KCODE and the KDATA segments when compiling the kernel.

The implementation of the hal_remote_load() and hal_remote_store() functions for the TSAR architecture is available in the almos-mkh/hal/tsar_mips32/core/hal_remote.c file.

4.2 Intel 64 bits

TODO

5. Virtual space organisation

This section describes the almos-mkh assumptions regarding the virtual space organisation, which is strongly dependent on the size of the virtual space.

5.1 TSAR-MIPS32

The virtual address space of a user process P is split into 5 fixed-size zones, defined by configuration parameters in https://www-soc.lip6.fr/trac/almos-mkh/browser/trunk/kernel/kernel_config.h. Each zone contains one or several vsegs, as described below.

5.1.1 The kernel zone

It contains the kcode vseg (type KCODE), that must be mapped in all user processes. It is located in the lower part of the virtual space, and starts at address 0. Its size cannot be less than a big page size (2 Mbytes for the TSAR architecture), because it will be mapped as one (or several) big pages.

5.1.2 The utils zone

It contains the two args and envs vsegs, whose sizes are defined by specific configuration parameters. The args vseg (DATA type) contains the process main() arguments. The envs vseg (DATA type) contains the process environment variables. It is located on top of the kernel zone, and starts at the address defined by the CONFIG_VMM_ELF_BASE parameter.

5.1.3 The elf zone

It contains the text (CODE type) and data (DATA type) vsegs, defining the process binary code and global data. The actual vsegs base addresses and sizes are defined in the .elf file and reported in the boot_info_t structure by the boot loader.

5.1.4 The heap zone

It contains all vsegs dynamically allocated / released by the mmap / munmap system calls (i.e. FILE / ANON / REMOTE types). It is located on top of the elf zone, and starts at the address defined by the CONFIG_VMM_HEAP_BASE parameter. The VMM defines a specific MMAP allocator for this zone, implementing the buddy algorithm. The mmap( FILE ) syscall maps a file directly in user space. The user-level malloc library uses the mmap( ANON ) syscall to allocate virtual memory from the heap and map it in the same cluster as the calling thread. Besides the standard malloc() function, this library implements a non-standard remote_malloc() function, which uses the mmap( REMOTE ) syscall to dynamically allocate virtual memory from the heap, and map it to a remote physical cluster.
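
As a user-level illustration (the remote_malloc() prototype below is an assumption: the text above only names the function, and the real one is defined by the almos-mkh malloc library), a thread could allocate one buffer in its own cluster and one in a remote cluster as follows.

    #include <stdlib.h>

    /* assumed prototype of the non-standard allocator (illustrative only) */
    extern void * remote_malloc( size_t size , unsigned int cxy );

    void allocation_example( void )
    {
        /* standard allocation: backed by mmap( ANON ), mapped in the cluster
         * of the calling thread */
        int * local_buf  = malloc( 1024 * sizeof(int) );

        /* non-standard allocation: backed by mmap( REMOTE ), mapped in the
         * cluster chosen by the caller (here cluster 3) */
        int * remote_buf = remote_malloc( 1024 * sizeof(int) , 3 );

        /* ... use the buffers ... */

        free( local_buf );
        free( remote_buf );    /* assumption: free() also handles remote blocks */
    }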

5.1.5 The stack zone

It is located on top of the mmap zone, and starts at the address defined by the CONFIG_VMM_STACK_BASE parameter. It contains an array of fixed-size slots, and each slot contains one stack vseg. The size of a slot is defined by the CONFIG_VMM_STACK_SIZE parameter. In each slot, the first page is not mapped, in order to detect stack overflows. As threads are dynamically created and destroyed, the VMM implements a specific STACK allocator for this zone, using a bitmap vector. As the stack vsegs are private (the same virtual address can have different mappings, depending on the cluster), the number of slots in the stack zone actually defines the maximum number of threads for a given process in a given cluster.
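
For illustration, the base address of the stack slot of index i can be computed as in the sketch below (the two configuration values are assumed example values, not the actual kernel_config.h settings).

    #include <stdint.h>

    #define CONFIG_VMM_STACK_BASE  0x30000000     /* assumed example value */
    #define CONFIG_VMM_STACK_SIZE  0x00200000     /* assumed example value */

    /* Base virtual address of the STACK vseg stored in slot <index>;
     * the first page of each slot is left unmapped to catch stack overflows. */
    static inline intptr_t stack_slot_base( uint32_t index )
    {
        return CONFIG_VMM_STACK_BASE + (intptr_t)index * CONFIG_VMM_STACK_SIZE;
    }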

5.2 Intel 64 bits

TODO