wiki:processus_thread

Version 57 (modified by alain, 4 years ago)

--

Process and thread creation/destruction

The process is the internal representation of a user application. A process can run as a single thread (called the main thread), or can be multi-threaded. ALMOS-MKH supports the POSIX thread API. For a multi-threaded application, the number of threads can be very large, and the threads of a given process can be distributed on all cores available in the shared memory architecture, for maximal parallelism. Therefore, a single process can spread over all clusters. To avoid contention, the process descriptor of a process P, and the associated structures, such as the list of registered vsegs (VSL), the generic page table (GPT), or the file descriptors table (FDT), are (partially) replicated in all clusters containing at least one thread of P.

1) Process

The PID (Process Identifier) is coded on 32 bits. It is unique in the system, and has a fixed format: the 16 MSB bits (CXY) contain the owner cluster identifier, and the 16 LSB bits (LPID) contain the local process index in the owner cluster. The owner cluster is therefore encoded in the PID itself.

As several copies of a process descriptor can exist, ALMOS-MKH defines a reference process descriptor, located in the reference cluster. The other copies are used as local caches, and ALMOS-MKH must guarantee the coherence between the reference and the copies.

As ALMOS-MKH supports process migration, the reference cluster can be different from the owner cluster. The owner cluster cannot change (because the PID is fixed), but the reference cluster can change in case of process migration.

In each cluster K, the local cluster manager ( cluster_t type in ALMOS-MKH ) contains a process manager ( pmgr_t type in ALMOS-MKH ) that maintains three structures for all processes owned by K :

  • The PREF_TBL[lpid] is an array indexed by the local process index. Each entry contains an extended pointer on the reference process descriptor.
  • The COPIES_ROOT[lpid] array is also indexed by the local process index. Each entry contains the root of the global list of copies for each process owned by cluster K.
  • The LOCAL_ROOT is the local list of all process descriptors in cluster K. A process descriptor copy of P is present in K, as soon as P has a thread in cluster K.

Here is a partial list of the information stored in a process descriptor ( process_t in ALMOS-MKH ):

  • PID : process identifier.
  • PPID : parent process identifier.
  • PREF : extended pointer on the reference process descriptor.
  • VSL : root of the local list of virtual segments defining the memory image.
  • GPT : generic page table defining the physical memory mapping.
  • FDT : open file descriptors table.
  • TH_TBL : local table of threads owned by this process in this cluster.
  • LOCAL_LIST : member of local list of all process descriptors in same cluster.
  • COPIES_LIST : member of global list of all descriptors of same process.
  • CHILDREN_LIST : member of global list of all children of same parent process.
  • CHILDREN_ROOT : root of global list of children process.

All elements of a local list are in the same cluster, and ALMOS-MKH uses local pointers. Elements of a global list can be distributed on all clusters, and ALMOS-MKH uses extended pointers.

2) Thread

ALMOS-MKH defines four types of threads :

  • a USR thread is created by a pthread_create() system call.
  • a DEV thread is created by the kernel to execute all I/O operations for a given channel device.
  • an RPC thread is activated by the kernel to execute pending RPC requests in the local RPC fifo.
  • the IDL thread is executed when there is no other thread to execute on a core.

From the point of view of scheduling, a thread can be in three states : RUNNING, RUNNABLE or BLOCKED.

This implementation of ALMOS-MKH does not support thread migration: a thread created by a pthread_create() system call is pinned on a given core in a given cluster. The only exception is the main thread of a process, that is automatically created by the kernel when a new process is created, and follows its owner process in case of process migration.

In a given process, a thread is identified by a fixed format TRDID identifier, coded on 32 bits : The 16 MSB bits (CXY) define the cluster where the thread has been pinned. The 16 LSB bits (LTID) define the thread local index in the local TH_TBL[K,P] of a process descriptor P in a cluster K. This LTID index is allocated by the local process descriptor when the thread is created.

Therefore, the TH_TBL(K,P) thread table for a given process P in a given cluster K contains only the threads of P placed in cluster K. The set of all threads of a given process is the union of all TH_TBL(K,P) for all active clusters K. To scan the set of all threads of a process P, ALMOS-MKH traverses the COPIES_LIST of all process descriptors associated with process P.

Here is a partial list of the information stored in a thread descriptor (thread_t in ALMOS-MKH):

  • TRDID : thread identifier
  • TYPE : KERNEL / USER / IDLE / RPC
  • FLAGS : thread attributes
  • STATE : CREATE / READY / USER / KERNEL / WAIT / ZOMBI / DEAD
  • PROCESS : pointer on the local process descriptor
  • LOCKS_COUNT : current number of locks taken by this thread
  • PWS : save area for the core registers.
  • SCHED : pointer on the scheduler in charge of this thread.
  • CORE : pointer on the owner processor core.
  • IO : allocated devices (in case of privately allocated devices).
  • SIGNALS : bit vector recording the signals received by the thread.
  • XLIST : member of the global list of threads waiting on the same resource.
  • CHILDREN_ROOT : root of the global list of children threads.
  • CHILDREN_LIST : member of the global list of all children of same parent.
  • etc.

3) Process creation

Process creation in a remote cluster implements the POSIX fork() / exec() mechanism. When a parent process P executes the fork() system call, a new child process C is created. The new process C inherits from the parent process P the open files (FDT) and the memory image (VSL and GPT). These structures must be replicated in the new process descriptor. After a fork(), the process C can execute an exec() system call, which allocates a new memory image to the process C, but the new process can also continue to execute with the inherited memory image. For load balancing, ALMOS-MKH uses the DQDT to create the child process C in a cluster different from the parent cluster, but the user application can also use the non-standard fork_place() system call to specify the target cluster.

3.1) fork()

The fork() system call is the only method to create a new process. A thread of the parent process P, running in a cluster X, executes the fork() system call to create a child process C in a remote cluster Y, which becomes both the owner and the reference cluster for the process C. A new process descriptor and a new thread descriptor are created and initialized in the target cluster Y for the child process. The calling thread can run in any cluster. If the target cluster Y is different from the calling thread cluster X, the calling thread uses an RPC to ask the target cluster Y to do the work, because only the target cluster Y can allocate memory for the new process and thread descriptors.

Regarding the process descriptor, a new PID is allocated in cluster Y. The child process C inherits the vsegs registered in the parent process reference VSL, but the ALMOS-MKH replication policy depends on the vseg type:

  • for the DATA, MMAP, REMOTE vsegs (containing shared, non replicated data), all vsegs registered in the parent reference VSL(Z,P) are registered in the child reference VSL(Y,C), and all valid GPT entries in the reference parent GPT(Z,P) are copied in the child reference GPT(Y,C). For all pages, the WRITABLE flag is reset and the COW flag is set, in both (parent and child) GPTs. This requires updating all corresponding entries in the parent GPT copies (in clusters other than the reference).
  • for the STACK vsegs (that are private), only one vseg is registered in the child reference VSL(Y,C). This vseg contains the user stack of the user thread requesting the fork, running in cluster X. All valid GPT entries in the parent GPT(X,P) are copied in the child GPT(Y,C). For all pages, the WRITABLE flag is reset and the COW flag is set, in both (parent and child) GPTs.
  • for the CODE vsegs (that must be replicated in all clusters containing a thread), all vsegs registered in the reference parent VSL(Z,P) are registered in the child reference VSL(Y,C), but the reference child GPT(Y,C) is not updated by the fork: It will be dynamically updated on demand in case of page fault.
  • for the FILE vsegs (containing shared memory mapped files), all vsegs registered in the reference parent VSL(Z,P) are registered in the child reference VSL(Y,C), and all valid entries registered in the reference parent GPT(Z,P) are copied in the reference child GPT(Y,C). The COW flag is not set for these shared data.

Regarding the thread descriptor, a new TRDID is allocated in cluster Y, and the calling parent thread context (current values stored in the CPU and FPU registers) is saved in the child thread CPU and FPU contexts, to be restored when the child thread will be selected for execution. Three CPU context slots are not simple copies of the parent value:

  • the thread pointer register contains the current thread descriptor address. This thread pointer register cannot have the same value for parent and child.
  • the stack pointer register contains the current pointer on the kernel stack. ALMOS-MKH uses a specific kernel stack when a user thread enters the kernel, and this kernel stack is implemented in the thread descriptor. As parent and child cannot use the same kernel stack, the parent kernel stack content is copied to the child kernel stack, and the stack pointer register cannot have the same value for parent and child.
  • the page table pointer register contains the physical base address of the current generic page table. As the child GPT is a copy of the parent GPT in the child cluster, this page table register cannot have the same value for parent and child.

At the end of the fork(), cluster Y is both the owner cluster and the reference cluster for the new C process, that contains one single thread running in the Y cluster. All pages of DATA, REMOTE, and MMAP vsegs are marked Copy On Write in the child C process GPT (clusters Y), and in all copies of the parent P process GPT (all clusters containing a copy of P).

3.2) exec()

After a fork() system call, any thread of the P process can execute an exec() system call. This system call forces the P process to execute a new application, while keeping the same PID, the same parent process, all open file descriptors, and the environment variables. The existing P process descriptors (both the reference and the copies) and all associated threads are destroyed. A new process descriptor and a new main thread descriptor are created in the reference cluster, and initialized from values found in the existing process descriptor, and from values contained in the .elf file defining the new application. The calling thread can run in any cluster. If the reference cluster Z for process P is different from the calling thread cluster X, the calling thread must use an RPC to ask the reference cluster Z to do the work.

At the end of the exec() system call, the cluster Z is both the owner and the reference cluster for process P, which contains one single thread in cluster Z.

4) Thread creation

Any thread T of any process P, running in any cluster K, can create a new thread NT in any cluster M. This creation is driven by the pthread_create system call. The target M cluster is called the host cluster. If the M cluster does not contain a process descriptor copy for process P (because the NT thread is the first thread of process P in cluster M), a new process descriptor must be created in cluster M.

  • The target cluster M can be specified by the user application, using the CXY field of the pthread_attr_t argument. If the CXY is not defined by the user, the target cluster M is selected by the kernel K, using the DQDT.
  • The target core in cluster M can be specified by the user application, using the CORE_LID field of the pthread_attr_t argument. If the CORE_LID is not defined by the user, the target core is selected by the target kernel M.

If the target cluster M is different from the client cluster K, the kernel K sends an RPC_THREAD_USER_CREATE request to cluster M. The argument is a complete pthread_attr_t structure (defined in the thread.h file in ALMOS-MKH), containing the PID, the function to execute and its arguments, and optionally the target cluster and target core. This RPC returns the thread TRDID. The detailed scenario is the following:

  1. The kernel M checks if it contains a copy of the P process descriptor.
  2. If not, the kernel M creates a process descriptor copy from the reference P process descriptor, using a remote_memcpy(), and using the cluster_get_reference_process_from_pid() function to get the extended pointer on the reference cluster. It allocates memory for the associated structures GPT(M,P), VSL(M,P), FDT(M,P). It initializes (partially) these structures by using remote_memcpy() from the reference cluster. The GPT(M,P) will be filled on demand by the page faults.
  3. The kernel M registers this new process descriptor in the COPIES_LIST and LOCAL_LIST.
  4. When the local process descriptor is set, the kernel M selects the core that will execute the thread, allocates a TRDID to this thread, and creates the thread descriptor for NT.
  5. The kernel M registers the thread descriptor in the local process descriptor TH_TBL(M,P), and in the selected core scheduler.
  6. The kernel M returns the TRDID to the client cluster K, and acknowledges the RPC.

5) Thread destruction

The destruction of a thread T running in cluster K can be caused by the thread itself, executing the thread_exit() function to terminate itself. It can also be caused by another thread, executing the thread_kill() function to request that the target thread stop execution.

5.1) thread_kill()

The thread_kill() function must be executed by a thread running in kernel mode in the same cluster as the target thread. It is called by the pthread_cancel system call, to destroy one single thread, or by the kill system call, to destroy all threads of a given process. The killer thread asks the target thread's scheduler to do the job by writing in the target thread descriptor, and the target scheduler signals completion to the killer by writing in the killer thread descriptor.

  • To request the kill, the killer thread sets the BLOCKED_GLOBAL bit in the target thread "blocked" field, sets the SIG_KILL bit in the target thread "signals" field, and registers in the target thread "kill_rsp" field the address of a response counter allocated in the killer thread stack.
  • If the target thread is running on another core than the killer thread, the killer thread sends an IPI to the core running the target thread, to ask the target scheduler to handle the request. If the target is running on the same core as the killer, the killer thread calls directly the sched_handle_signals() function.
  • In both cases, the sched_handle_signals() function - detecting the SIG_KILL signal - detaches the target thread from the scheduler, detaches the target thread from the local process descriptor, releases the memory allocated to the target thread descriptor, and atomically decrements the response counter in the killer thread to signal completion.

5.2) thread_exit() when DETACHED

The thread_exit() function is called by a user thread T executing the pthread_exit system call to terminate itself. The scenario is rather simple when the thread T is not running in ATTACHED mode.

  • The thread_exit() function sets the SIG_SUICIDE bit in the thread "signals" bit vector, sets the BLOCKED_GLOBAL bit in the thread "blocked" bit vector, and deschedules.
  • The scheduler, detecting the SIG_SUICIDE bit, detaches the thread from the scheduler, detaches the thread from the local process descriptor, and releases the memory allocated to the thread descriptor.

5.3) thread_exit() when ATTACHED

The thread_exit() scenario is more complex if the finishing thread T is running in ATTACHED mode, because another - possibly remote - thread PT, executing the pthread_join system call, must be informed of the exit of thread T. As the pthread_exit can happen before or after the pthread_join, this requires a synchronisation: the first arrived thread blocks and deschedules, and must be reactivated by the other thread. This synchronisation uses three specific fields in the thread descriptor: the "join_lock" field is a remote spin lock; the "join_value" field contains the exit value returned by the finishing thread T; the "join_xp" field contains an extended pointer on the PT thread that wants to join. The scenario is the following:

  • Both the T thread (executing the thread_exit() function), and the PT thread (executing the thread_join() function) try to take the "join_lock" implemented in the T thread descriptor (the "join_lock" in the PT thread is not used).
  • After taking the "join_lock", the T thread tests the JOIN_DONE flag in the T thread. If this flag is set, the PT thread arrived first: the T thread registers its exit value in the PT thread "join_value" field, sets the EXIT_DONE flag in the PT thread, resets the BLOCKED_EXIT bit in the PT thread (using the extended pointer stored in the "join_xp" field), releases the "join_lock", and exits as described for the DETACHED case.
  • If the JOIN_DONE flag is not set, the thread T arrived first: the T thread registers its exit value in its own "join_value" field, sets the EXIT_DONE flag in its own descriptor, releases the "join_lock", blocks on the BLOCKED_JOIN bit, and deschedules.
  • After taking the "join_lock", the PT thread tests the EXIT_DONE flag in the T thread. If this flag is set, the T thread arrived first: the PT thread resets the BLOCKED_JOIN bit in the T thread "blocked" field, gets the exit value from the "join_value" field, releases the "join_lock" in the T thread, and continues.
  • If the EXIT_DONE flag is not set, the PT thread arrived first: the PT thread registers its extended pointer in the T thread "join_xp" field, sets the JOIN_DONE flag in the T thread, releases the "join_lock" in the T thread, blocks on the BLOCKED_EXIT bit, and deschedules.

6) Process destruction

The destruction of process P can be caused by an exit() system call executed by any thread of process P, or by a signal sent by another process executing the kill() system call. In both cases, the owner cluster is in charge of the destruction.

6.1) process exit

The exit() is a four-step scenario:

  1. If the exit() system call is executed by a thread running in a cluster K different from the owner cluster Z, the kernel K sends an RPC_PROCESS_EXIT to cluster Z. The argument is the calling process PID.
  2. To execute this RPC, the owner kernel Z sends a multi-cast RPC_PROCESS_KILL to all clusters X that contain a copy of the process descriptor, using its COPIES_LIST. The argument of this RPC is the target process descriptor pointer.
  3. In each cluster X, the kernel receiving a RPC_PROCESS_KILL sets the FLAG_KILL bit signal in all thread descriptors associated to the target process, and polls the local TH_TBL(X,P). When it detects that the TH_TBL(X,P) is empty, it releases the memory allocated to the process descriptor, and acknowledges the RPC to cluster Z.
  4. When the kernel Z has received all expected responses to the multi-cast RPC_PROCESS_KILL, it releases all memory allocated to process PID in cluster Z; this completes the process destruction.

6.2) process kill

The kill() scenario is identical to the four-step exit() scenario above. It is initiated by a kill() system call executed by any thread running in any cluster.