
Version 14 (modified by phan, 3 years ago)


Page tables and vsegs lists implementation

1) vseg descriptors

A vseg descriptor contains the following fields:

  • TYPE : Defines the replication/distribution policy (CODE / STACK / DATA / HEAP / HEAPXY / FILE / ANON)
  • FLAGS : Defines access rights
  • VBASE : Base virtual address
  • LENGTH : Segment length
  • BIN : Pathname to the .elf file. (only for DATA and CODE types)
  • X,Y : Coordinates of the cluster where the vseg is mapped (only for a localized vseg)
  • MAPPER : radix-tree containing the physical pages allocated to this vseg (only for CODE, DATA and FILE types)
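The fields above can be sketched as a C structure. This is a hypothetical illustration based only on the list above: the names, types and field widths are assumptions, not the actual kernel definition.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of a vseg descriptor matching the fields listed
 * above. Names and types are illustrative, not the kernel sources. */
typedef enum {
    VSEG_TYPE_CODE,
    VSEG_TYPE_STACK,
    VSEG_TYPE_DATA,
    VSEG_TYPE_HEAP,
    VSEG_TYPE_HEAPXY,
    VSEG_TYPE_FILE,
    VSEG_TYPE_ANON,
} vseg_type_t;

typedef struct vseg_s {
    vseg_type_t type;     /* TYPE   : replication / distribution policy  */
    uint32_t    flags;    /* FLAGS  : access rights                      */
    uintptr_t   vbase;    /* VBASE  : base virtual address               */
    size_t      length;   /* LENGTH : segment length in bytes            */
    const char *binpath;  /* BIN    : .elf pathname (CODE and DATA only) */
    uint32_t    x, y;     /* X,Y    : mapping cluster (localized vsegs)  */
    void       *mapper;   /* MAPPER : radix tree of allocated pages      */
} vseg_t;
```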

2) Page tables and vseg lists

The information associated with a process P is recorded in the process descriptor (process_t structure). This process descriptor and the structures it points to are partly replicated in all clusters containing at least one thread of process P; these clusters are called "active" clusters.

The page table is used by the kernel to store the mapping of each page of each vseg of a process. The page table is part of this partly replicated information; we note PT(P,K) the page table of process P in cluster K.

The vsegs list of a process is used by the kernel when a page fault occurs: the kernel checks that the unmapped virtual address belongs to a registered vseg, and determines the vseg type. This list is also part of the partly replicated information; we note VSL(P,K) the vsegs list of process P in cluster K.

2.1) Evolution of PT(P,K)

For a given process P, the contents of the various page tables PT(P,K) change over time, and evolve differently in the active clusters. On the one hand, each PT(P,K) evolves dynamically, driven by the page faults triggered by the threads of P running in cluster K. On the other hand, the mapping of private vsegs (CODE and STACK types) differs from one cluster to another, since the same virtual address is mapped to different physical addresses depending on the cluster. For public vsegs, only the reference cluster contains the complete mapping state.

2.2) Evolution of VSL(P,K)

For a given process P, the contents of the various vsegs lists VSL(P,K) also change over time, and are not identical in all clusters. The vsegs lists must be identical for public vsegs, but each private vseg is registered only in the cluster it belongs to. For public vsegs, every dynamic creation of a new vseg, or extension of an existing vseg, must be propagated to all active clusters.

3) Registration and destruction of vsegs in VSL(P,K)

The registration and destruction policy in VSL(P,K) depends on the vseg type:

3.1) DATA

This vseg type is registered in VSL(P,Z), cluster Z being the owner of process P at its creation. It is also registered in VSL(P,A) for a cluster A every time a thread of P is created in cluster A, when cluster A did not yet contain a thread of P. The length is defined in the .elf file containing the process binary code. There is no mapping cluster, since this vseg is distributed. This vseg type is destroyed only when process P is destroyed.

3.2) CODE

This vseg type is registered in VSL(P,Z), cluster Z being the owner of process P at its creation. It is also registered in VSL(P,A) for a cluster A every time a thread of P is created in cluster A, when cluster A did not yet contain a thread of P. The length is defined in the .elf file containing the process binary code. The mapping cluster is always the local cluster, since this vseg is private. This vseg type is destroyed only when process P is destroyed.

3.3) STACK

This vseg type is registered in VSL(P,X) every time a new thread of process P is created in cluster X. The VSL(P,Y) of the other clusters Y do not need to be updated, because a STACK vseg mapped in a cluster X is never known nor accessed by another cluster Y. The length is defined by a global OS parameter: MIN_STACK_SIZE. The mapping cluster is always the local cluster, since this vseg is private. This vseg type is removed from VSL(P,X) when the thread is destroyed.

3.4) HEAP

This vseg type is registered in VSL(P,Z), cluster Z being the owner of process P at its creation. It is also registered in VSL(P,A) for a cluster A every time a thread of P is created in cluster A, when cluster A did not yet contain a thread of P. The length is defined by a global OS parameter: STANDARD_MALLOC_HEAP_SIZE. There is no mapping cluster, since this vseg is distributed. This vseg type is destroyed only when the process is destroyed.

3.5) REMOTE

This vseg type is registered in the VSL(P,A) of all clusters A containing at least one thread of P. It is registered when a thread of P executes a remote_malloc(x,y) in some cluster K: if there is no REMOTE vseg yet in VSL(P,K), the kernel instance in cluster K sends a VSEG_REQUEST_RPC to cluster Z, owner of P. The arguments are the PID and the type of the missing vseg. The length is defined by a global OS parameter: REMOTE_MALLOC_HEAP_SIZE. The mapping cluster is defined by the (x,y) arguments of remote_malloc(). This vseg type is destroyed only when the process is destroyed.

3.6) FILE

This vseg type is registered in the VSL(P,A) of all clusters A containing at least one thread of P. It is registered when a thread of P executes mmap(file, size) in some cluster K: the kernel instance running in cluster K sends a VSEG_REQUEST_RPC to cluster Z, owner of process P. The arguments are the PID, the vseg type, the file descriptor and the size. The kernel instance in cluster Z then broadcasts a VSEG_REGISTER_RPC to all the other active clusters of P. The vseg length is defined by the size argument of mmap(). The mapping cluster is defined by the file argument, and can be any cluster, since a file cache can be placed in any cluster (uniform dispatching policy). This vseg type is destroyed by a munmap() call, using the same two-RPC mechanism as for creation.

3.7) ANON

This vseg type is registered in the VSL(P,A) of all clusters A containing at least one thread of P. It is registered when a thread of P executes mmap(anonymous, size) in some cluster K: the kernel instance of cluster K sends a VSEG_REQUEST_RPC to cluster Z, owner of P. The arguments are the PID, the vseg type, the file descriptor, the size, ... to be completed... The kernel instance of cluster Z then broadcasts a VSEG_REGISTER_BCRPC to all active clusters of P. The vseg length is defined by the size argument of mmap(). There is no mapping cluster, since this vseg is distributed. This vseg type is destroyed by a munmap() call, using the same two-RPC mechanism as for creation.
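The per-type policies above can be summarized by a small classification helper: CODE and STACK are private (mapped in the local cluster), REMOTE and FILE are localized (an explicit mapping cluster), and DATA, HEAP and ANON are distributed (no mapping cluster). This helper is an illustration; the names are hypothetical.

```c
#include <assert.h>

/* Illustrative classification of the vseg types described in section 3.
 * Enum and function names are assumptions, not the kernel's own API. */
typedef enum { K_CODE, K_STACK, K_DATA, K_HEAP,
               K_REMOTE, K_FILE, K_ANON } vseg_kind_t;

typedef enum {
    POLICY_PRIVATE,     /* mapped in the local cluster only        */
    POLICY_LOCALIZED,   /* one explicit mapping cluster (x,y)      */
    POLICY_DISTRIBUTED, /* pages spread over all clusters, no
                           mapping cluster                          */
} vseg_policy_t;

vseg_policy_t vseg_policy(vseg_kind_t kind)
{
    switch (kind) {
    case K_CODE:
    case K_STACK:
        return POLICY_PRIVATE;
    case K_REMOTE:
    case K_FILE:
        return POLICY_LOCALIZED;
    default: /* K_DATA, K_HEAP, K_ANON */
        return POLICY_DISTRIBUTED;
    }
}
```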

4) Insertion of an entry in the page table

Adding a new entry in PT(P,K), for a process P in a cluster K, is the result of a page fault triggered by a thread of process P running in cluster K, following the "on-demand paging" principle. All threads of process P in cluster K exclusively use the local PT(P,K), and report page faults to the local kernel instance. The handling of a page fault depends on the vseg type:

4.1) CODE

There is a CODE vseg in the VSL of every cluster containing at least one thread of process P. If the cluster K that detects the page fault is not the owner cluster Z, the kernel of cluster K allocates a physical page in cluster K. To initialize this page, it sends a PT_MISS_RPC to cluster Z, owner of P. When it receives the PTE stored in PT(P,Z), it performs a remote_memcpy() to copy the contents of the physical page in cluster Z to the physical page in cluster K, and finally registers the missing PTE in PT(P,K). If cluster K is the owner cluster, it allocates a physical page, initializes it by asking the file system to retrieve the contents of the missing page from the .elf file cache, and then updates PT(P,Z).
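The non-owner path above can be modelled with a small self-contained sketch, where a per-cluster array stands in for PT(P,K), a lookup in the owner's table stands in for the PT_MISS_RPC, and memcpy() stands in for remote_memcpy(). All names are illustrative.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 16  /* tiny pages, for the sketch only */
#define NPAGES    4

/* Hypothetical model of one cluster's PT(P,K) plus its physical pages. */
typedef struct {
    int     mapped[NPAGES];            /* valid bit per virtual page    */
    uint8_t frames[NPAGES][PAGE_SIZE]; /* physical pages of the cluster */
} cluster_pt_t;

/* CODE page fault in a non-owner cluster K:
 * 1. PT_MISS_RPC  -> read the PTE from the owner's PT(P,Z);
 * 2. remote_memcpy -> copy the page contents from Z to the local frame;
 * 3. register the missing PTE in the local PT(P,K). */
void code_fault_handler(cluster_pt_t *local, const cluster_pt_t *owner,
                        int vpn)
{
    if (!owner->mapped[vpn])
        return;  /* the owner must hold the CODE mapping */
    memcpy(local->frames[vpn], owner->frames[vpn], PAGE_SIZE);
    local->mapped[vpn] = 1;
}
```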

QUESTION: in the owner cluster Z, must the page be copied from the file cache to another physical page? [AG]

4.2) STACK

The STACK vsegs associated with the threads placed in a cluster X are mapped in cluster X, and are handled independently in the different clusters. The kernel instance in cluster X allocates a physical page and registers it in the local PT(P,X), without initializing it. If the requested address is in the last possible page of the vseg, the STACK vseg length can be dynamically increased in the local VSL(P,X), provided there is enough space in the virtual address zone reserved for the stacks. As suggested by Franck, we can imagine an allocation policy by dichotomy, using two parameters: MAX_STACK_SIZE, defining the total length of the zone reserved for the stacks, and MIN_STACK_SIZE, defining the minimal length of one stack.
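One possible reading of this growth policy is geometric: start each stack at MIN_STACK_SIZE and double its length on a last-page fault, as long as the new length stays within the reserved zone. This is an assumption, a sketch of one candidate policy, not the documented algorithm; the parameter values are placeholders.

```c
#include <assert.h>
#include <stddef.h>

/* Placeholder values; the real ones are OS configuration parameters. */
#define MIN_STACK_SIZE 0x1000UL    /* minimal length of one stack    */
#define MAX_STACK_SIZE 0x100000UL  /* total zone reserved for stacks */

/* Hypothetical growth step: when a fault hits the last page of a STACK
 * vseg, double its length if the doubled stack still fits in the
 * reserved zone. Returns the new length, or the current length when no
 * further growth is possible. */
size_t stack_grow(size_t cur_len)
{
    size_t next = cur_len * 2;
    return (next <= MAX_STACK_SIZE) ? next : cur_len;
}
```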

4.3) DATA

This vseg being distributed, the physical pages are distributed over all clusters, according to the VPN low-order bits. If the cluster K that detects the page fault is not the owner cluster Z, the kernel instance of cluster K sends a PT_MISS_RPC to cluster Z, in order to obtain the PTE stored in PT(P,Z). The arguments are the PID and the VPN of the missing page. When it receives the response, it updates PT(P,K). If the cluster that detects the page fault is the owner cluster Z, it chooses a target cluster M from the VPN low-order bits, and sends a RPC_PMEM_GET_SPP to cluster M, in order to obtain the PPN of a physical page in cluster M. In response to this RPC, the kernel instance of cluster M allocates a physical page and returns its PPN. The kernel instance of cluster Z then asks the file system to retrieve the contents of the missing page from the .elf file cache, initializes the physical page in M via a remote_memcpy(), and finally updates PT(P,Z).
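The selection of the target cluster from the VPN low-order bits can be sketched as follows, assuming a 2D mesh whose dimensions are powers of two. The helper and its (x_size, y_size) parameters are illustrative, not the kernel's actual function.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: map a VPN to the (x, y) coordinates of the target cluster of
 * a distributed vseg, using the VPN low-order bits. x_size * y_size is
 * assumed to be a power of two. Illustrative names only. */
void cluster_from_vpn(uint32_t vpn, uint32_t x_size, uint32_t y_size,
                      uint32_t *x, uint32_t *y)
{
    uint32_t index = vpn & (x_size * y_size - 1); /* VPN LSBs */
    *x = index % x_size;
    *y = index / x_size;
}
```

Consecutive pages of the vseg thus land in consecutive clusters, which spreads the memory load evenly over the mesh.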

4.4) HEAP

This vseg being distributed, the physical pages are distributed over all clusters, according to the VPN low-order bits. If the cluster K that detects the page fault is not the owner cluster Z, the kernel instance of cluster K sends a PT_MISS_RPC to cluster Z, in order to obtain the PTE stored in PT(P,Z). The arguments are the PID and the VPN of the missing page. When it receives the response, it updates PT(P,K). If the cluster that detects the page fault is the owner cluster Z, it chooses a target cluster M from the VPN low-order bits, and sends a RPC_PMEM_GET_SPP to cluster M, in order to obtain the PPN of a physical page in cluster M. In response to this RPC, the kernel instance of cluster M allocates a physical page and returns its PPN. When the kernel of cluster Z obtains the PPN, it updates PT(P,Z).

4.5) REMOTE

This vseg being localized, the coordinates of the mapping cluster M are registered in the vseg descriptor. If the cluster K that detects the page fault is not the owner cluster Z, the kernel instance of cluster K sends a PT_MISS_RPC to cluster Z, in order to obtain the PTE stored in PT(P,Z). The arguments are the PID and the VPN of the missing page. When it receives the response, it updates PT(P,K). If the cluster that detects the page fault is the owner cluster Z, it sends a RPC_PMEM_GET_SPP to cluster M, in order to obtain the PPN of a physical page in cluster M. In response to this RPC, the kernel of cluster M allocates a physical page and returns its PPN. When the kernel of cluster Z obtains the PPN, it updates PT(P,Z).

4.6) FILE

This vseg being localized, the coordinates of the mapping cluster M are registered in the vseg descriptor. If the cluster K that detects the page fault is not the owner cluster Z, the kernel of cluster K sends a PT_MISS_RPC to cluster Z, in order to obtain the PTE stored in PT(P,Z). The arguments are the PID and the VPN of the missing page. When it receives the response, it updates PT(P,K). If the cluster that detects the page fault is the owner cluster Z, it sends a GET_FILE_CACHE_RPC to cluster M, which contains the file cache, in order to obtain the PPN. The arguments are the PID, the file descriptor, and the index of the page in the mapper. In response to this RPC, the kernel of cluster M accesses the mapper of the vseg and returns the corresponding PPN. When the kernel of cluster Z obtains the PPN, it updates PT(P,Z).

4.7) ANON

This vseg being distributed, the physical pages are distributed over all clusters, according to the VPN low-order bits. Page fault handling is the same as for a HEAP vseg.

5) Invalidation of a page table entry

In a cluster Z, owner of a process P, the kernel can decide to invalidate an entry of PT(P,Z). This can happen, for instance, in case of memory shortage in cluster Z, or simply on a munmap(). Unless the vseg is of STACK type, the entry invalidated in PT(P,Z) must also be invalidated in the PT(P,K) of the other clusters. To do so, the kernel of cluster Z must broadcast a PT_INVAL_BCRPC to all the other active clusters of P.

6) Broadcast RPC optimization

In a broadcast RPC, all destination clusters must signal completion by atomically incrementing a response counter, which is polled by the initiating cluster.

To reduce the number of destinations, the descriptor of process P in the owner cluster Z can maintain four variables XMIN, XMAX, YMIN, YMAX, defining the minimal rectangle covering all active clusters of P at any time. In this case, a broadcast RPC needs to be sent to only (XMAX - XMIN + 1) * (YMAX - YMIN + 1) destinations. These variables are updated at each thread creation.
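This covering-rectangle bookkeeping can be sketched as follows; structure and function names are illustrative, not the kernel's own.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal rectangle covering all active clusters of a process P.
 * Illustrative sketch of the XMIN/XMAX/YMIN/YMAX bookkeeping. */
typedef struct {
    uint32_t xmin, xmax, ymin, ymax;
} active_rect_t;

/* Extend the rectangle when a thread of P is created in cluster (x,y). */
void rect_add_cluster(active_rect_t *r, uint32_t x, uint32_t y)
{
    if (x < r->xmin) r->xmin = x;
    if (x > r->xmax) r->xmax = x;
    if (y < r->ymin) r->ymin = y;
    if (y > r->ymax) r->ymax = y;
}

/* Number of destinations of a broadcast RPC restricted to the rectangle:
 * (XMAX - XMIN + 1) * (YMAX - YMIN + 1). */
uint32_t rect_targets(const active_rect_t *r)
{
    return (r->xmax - r->xmin + 1) * (r->ymax - r->ymin + 1);
}
```

Note that the rectangle may cover clusters with no thread of P; the optimization only bounds the destination set, it does not make it exact.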

7) Page fault handling optimization

To reduce the number of RPCs caused by page faults, the kernel of a cluster X that detects a page fault can use a remote_read() in the PT(P,Z) of the reference cluster, instead of a PT_MISS_RPC. This however requires a multi-readers lock, to avoid an inconsistent state in case of a concurrent PT_INVAL_BCRPC transaction initiated by cluster Z. This lock must always be taken by the owner cluster before a PT_INVAL_BCRPC, and by the other clusters before a remote_read(). It guarantees that the PT_INVAL_BCRPC is launched only after completion of all ongoing remote_read() operations, and that no new remote_read() is accepted before completion of the PT_INVAL_BCRPC.
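A minimal sketch of such a multi-readers lock, using C11 atomics, is shown below. The readers are the clusters performing a remote_read() in PT(P,Z); the single writer is the owner cluster about to launch a PT_INVAL_BCRPC. This is an assumption about the lock's shape, not the kernel's actual primitive.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical multi-readers lock protecting PT(P,Z). */
typedef struct {
    atomic_int  readers; /* number of remote_read() in progress */
    atomic_bool writer;  /* owner cluster is invalidating       */
} pt_rwlock_t;

/* Called by a remote cluster before remote_read(); non-blocking try.
 * On failure, the caller falls back to the PT_MISS_RPC path. */
bool pt_read_tryenter(pt_rwlock_t *l)
{
    if (atomic_load(&l->writer))
        return false;                  /* invalidation pending */
    atomic_fetch_add(&l->readers, 1);
    if (atomic_load(&l->writer)) {     /* writer arrived meanwhile */
        atomic_fetch_sub(&l->readers, 1);
        return false;
    }
    return true;
}

void pt_read_exit(pt_rwlock_t *l)
{
    atomic_fetch_sub(&l->readers, 1);
}

/* Called by the owner cluster before the PT_INVAL_BCRPC: block new
 * readers, then spin until all ongoing remote_read() have completed. */
void pt_inval_enter(pt_rwlock_t *l)
{
    atomic_store(&l->writer, true);
    while (atomic_load(&l->readers) != 0)
        ; /* wait for ongoing remote_read() operations */
}

void pt_inval_exit(pt_rwlock_t *l)
{
    atomic_store(&l->writer, false);
}
```

The double check of the writer flag in pt_read_tryenter() closes the window where a reader increments the counter just as the owner raises the flag: either the reader backs off, or the owner's spin loop observes the non-zero counter.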