wiki:kernel_synchro


This section describes the synchronisation primitives used by ALMOS-MKH, namely the barriers used during the parallel kernel initialization, and the locks used to protect concurrent access to the kernel data structures.

A) Synchronisation barriers

The kernel initialization is done in parallel in all clusters by the kernel idle threads, with one idle thread per core. These threads allocate and initialize shared and distributed data structures, such as the cluster managers, the core schedulers, the trans-cluster Virtual File System, or the trans-cluster DQDT. This requires synchronisation barriers.

ALMOS-MKH implements both local barriers and global barriers.

  • The local barriers are used to synchronize all idle threads in a given cluster K. The number of expected threads on a local barrier is defined by the number of cores in cluster K, which is obtained by the boot-loader and registered in the local boot_info structure.
  • The global barrier is used to synchronize all threads running on the first core (Core 0) of each cluster. The number of expected threads on the global barrier is defined by the total number of active clusters (i.e. clusters containing one kernel instance), which is also registered in each local boot_info structure.

Because these barriers are only used during kernel initialization, they implement a simple busy-waiting policy. They are implemented as global variables in the kdata segment. These toggle barriers can be used several times and don't need to be explicitly initialized. For the global barrier (xbarrier), the client threads use the remote_read(), remote_write(), and remote_atomic_add() primitives to access the barrier located in cluster 0.
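The sketch below illustrates the general principle of such a toggle (sense-reversing) busy-waiting barrier. It is only an illustration: the toggle_barrier_t type, the field names and the use of the GCC __sync_add_and_fetch() builtin are assumptions, not the actual ALMOS-MKH implementation (which accesses the global xbarrier through the remote_xxx() primitives).

{{{
/* Minimal sketch of a toggle (sense-reversing) busy-waiting barrier.
 * Type, field names and the atomic builtin are illustrative, not the ALMOS-MKH API. */

typedef struct toggle_barrier_s
{
    volatile unsigned int count;   // number of threads that reached the barrier
    volatile unsigned int sense;   // toggles at each barrier crossing
}
toggle_barrier_t;

// allocated in the kdata segment : all fields start at 0,
// so no explicit initialization is required
static toggle_barrier_t barrier;

void toggle_barrier_wait( toggle_barrier_t * b , unsigned int expected )
{
    unsigned int my_sense = b->sense;                          // sense value for this round

    if( __sync_add_and_fetch( &b->count , 1 ) == expected )    // atomic arrival count
    {
        b->count = 0;                                          // reset counter for next round
        b->sense = !my_sense;                                  // toggle sense : release all waiters
    }
    else
    {
        while( b->sense == my_sense );                         // busy-wait until sense toggles
    }
}
}}}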

B) Locks general principles

Most kernel data structures are shared: they can be concurrently accessed by several threads. These threads can be specialized kernel threads, such as the DEV threads or the RPC threads, or user threads running in kernel mode after a syscall.

There are actually two levels of sharing:

  • some structures are locally shared: they can only be accessed by threads running in the cluster containing the shared structure. Examples are the scheduler associated with a given core in a given cluster, the physical pages manager (PPM) in a given cluster, or the virtual memory manager (VMM) associated with a given process descriptor in a given cluster.
  • some structures are globally shared: they can be concurrently accessed by any thread running in any cluster. Examples are the waiting queues associated with the chdevs (channel devices), distributed on all clusters, or the kernel distributed virtual file system (VFS), which is also distributed on all clusters.

ALMOS-MKH defines three types of locks to implement exclusive access to these shared structures: busylocks, queuelocks, and rwlocks. Each type exists in both a local version and a global version.

C) busylocks

The busylock (local) and remote_busylock (global) are low-level locks implementing a busy-waiting policy for the calling threads: if the lock is already taken by another thread, the calling thread keeps polling the lock until it succeeds. They are used to protect higher level synchronisation primitives (such as the queuelocks or rwlocks described below), or simple data structures where the locking time is small and can be bounded.

A thread holding a busylock cannot deschedule. To enforce this rule, the busylock_acquire() function enters a critical section before taking the lock, and saves the SR value in the busylock descriptor. The thread holding the busylock exits the critical section when it calls the busylock_release() function, which releases the lock and restores the SR state. Each time a thread acquires a busylock, it increments a busylocks counter in the thread descriptor, and it decrements this counter when it releases the lock. The scheduler triggers a kernel panic if the current thread's busylocks counter is not null when it executes the sched_yield() function.

To improve fairness, the busylock_acquire() function uses a ticket policy: the calling thread makes an atomic increment on a ticket allocator in the lock descriptor, and keeps polling the "current" value until current == ticket. To release the lock, the busylock_release() function increments the "current" value in the lock descriptor.
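The sketch below combines the two mechanisms described above (critical section plus ticket policy). It is only an illustration: the field names, the hal_disable_irq() / hal_restore_irq() helpers and the current_thread pointer are assumptions, not the exact ALMOS-MKH API.

{{{
/* Minimal sketch of a ticket-based busylock.
 * Field names, the hal_*_irq() helpers and current_thread are illustrative :
 * they are assumed to be provided elsewhere by the kernel. */

typedef struct busylock_s
{
    volatile unsigned int ticket;    // next free ticket (atomically incremented)
    volatile unsigned int current;   // ticket currently owning the lock
    unsigned int          save_sr;   // SR value saved while the lock is held
    unsigned int          type;      // type of the protected resource (debug)
}
busylock_t;

void busylock_acquire( busylock_t * lock )
{
    unsigned int sr;
    unsigned int ticket;

    hal_disable_irq( &sr );                               // enter critical section, save SR

    ticket = __sync_fetch_and_add( &lock->ticket , 1 );   // get a ticket
    while( lock->current != ticket );                     // poll until my turn

    lock->save_sr = sr;                                   // save SR in lock descriptor
    current_thread->busylocks++;                          // one more busylock held
}

void busylock_release( busylock_t * lock )
{
    unsigned int sr = lock->save_sr;                      // read saved SR before handover

    current_thread->busylocks--;                          // one less busylock held
    lock->current++;                                      // pass the lock to the next ticket
    hal_restore_irq( sr );                                // exit critical section, restore SR
}
}}}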

D) queuelock

The queuelock (local) and remote_queuelock (global) are higher level locks implementing a descheduling policy, with registration in a waiting queue: if the lock is already taken by another thread, the calling thread registers in a (local or trans-cluster) waiting queue rooted in the queuelock, and deschedules. The first thread T registered in the waiting queue is re-activated by the thread T' holding the lock when T' releases the lock. Queuelocks are used to protect complex structures, where an access can require exclusive access to one (or more) other shared resources.

The queuelock descriptor itself contains a busylock, which is used by the queuelock_acquire() and queuelock_release() functions to protect exclusive access to the queuelock state.

A thread holding a queuelock can deschedule, and no special checking is done by the scheduler.
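The sketch below illustrates this policy. It is only an illustration: the list_*() helpers, the thread_block() / thread_unblock() functions, the wait_entry field and the current_thread pointer are assumptions, not the exact ALMOS-MKH API.

{{{
/* Minimal sketch of the queuelock descheduling policy.
 * The list and thread helpers are illustrative, not the exact ALMOS-MKH API. */

typedef struct queuelock_s
{
    busylock_t    lock;      // busylock protecting the queuelock state
    unsigned int  taken;     // non-zero when the queuelock is held
    list_entry_t  root;      // root of the waiting-threads queue
}
queuelock_t;

void queuelock_acquire( queuelock_t * q )
{
    busylock_acquire( &q->lock );                        // protect queuelock state

    while( q->taken )                                    // lock already held by another thread
    {
        thread_t * this = current_thread;

        list_add_last( &q->root , &this->wait_entry );   // register in waiting queue
        thread_block( this );                            // mark the thread as blocked
        busylock_release( &q->lock );                    // a blocked thread cannot hold a busylock
        sched_yield();                                   // deschedule
        busylock_acquire( &q->lock );                    // re-check the state when re-activated
    }

    q->taken = 1;                                        // take the queuelock
    busylock_release( &q->lock );
}

void queuelock_release( queuelock_t * q )
{
    busylock_acquire( &q->lock );

    q->taken = 0;                                        // release the queuelock
    if( ! list_is_empty( &q->root ) )                    // re-activate the first waiting thread
    {
        thread_t * first = list_first( &q->root );
        list_unlink( &first->wait_entry );
        thread_unblock( first );
    }

    busylock_release( &q->lock );
}
}}}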

E) rwlocks

The rwlock (local) and remote_rwlock (global) support several simultaneous read accesses, but only one write access to a given shared object. As for queuelocks, both readers and writers take the associated busylock before accessing or updating the rwlock state, and release the busylock after the rwlock state update. The policy is summarized below (a sketch follows the list).

  • when a reader tries to access the object, it increments the readers "count" if the lock is not "taken" by a writer. If the lock is taken, it registers in the "rd_root" waiting queue, blocks, and deschedules.
  • when a writer tries to take the rwlock, it checks the "taken" field. If the lock is already taken, or if the number of readers is non-zero, it registers in the "wr_root" waiting queue, blocks, and deschedules. Otherwise, it sets "taken".
  • when a reader completes its access, it decrements the readers "count", unblocks the first waiting writer if there are no other readers, and unblocks all waiting readers if there is no write request.
  • when a writer completes its access, it resets the "taken" field, unblocks the first waiting writer if the writers queue is not empty, or unblocks all waiting readers if there is no waiting writer.
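The sketch below illustrates this policy. It is only an illustration: the field names (taken, count, rd_root, wr_root) follow the description above, but the wait_on(), wake_first() and wake_all() helpers are stand-ins for the actual registration / blocking / re-activation code, and the release functions slightly simplify the wake-up policy described in the list.

{{{
/* Minimal sketch of the rwlock policy.
 * wait_on() is assumed to register the thread in the given queue, release the
 * busylock, deschedule, and re-take the busylock when re-activated.
 * wake_first() is assumed to return 0 when the queue is empty. */

typedef struct rwlock_s
{
    busylock_t    lock;      // busylock protecting the rwlock state
    unsigned int  taken;     // non-zero when a writer holds the lock
    unsigned int  count;     // number of current readers
    list_entry_t  rd_root;   // root of the waiting-readers queue
    list_entry_t  wr_root;   // root of the waiting-writers queue
}
rwlock_t;

void rwlock_rd_acquire( rwlock_t * rw )
{
    busylock_acquire( &rw->lock );
    while( rw->taken ) wait_on( &rw->rd_root , &rw->lock );   // block while a writer holds it
    rw->count++;                                              // one more active reader
    busylock_release( &rw->lock );
}

void rwlock_wr_acquire( rwlock_t * rw )
{
    busylock_acquire( &rw->lock );
    while( rw->taken || rw->count )                           // block while written or read
        wait_on( &rw->wr_root , &rw->lock );
    rw->taken = 1;                                            // take exclusive access
    busylock_release( &rw->lock );
}

void rwlock_rd_release( rwlock_t * rw )
{
    busylock_acquire( &rw->lock );
    rw->count--;                                              // one less active reader
    if( rw->count == 0 ) wake_first( &rw->wr_root );          // last reader wakes one writer
    busylock_release( &rw->lock );
}

void rwlock_wr_release( rwlock_t * rw )
{
    busylock_acquire( &rw->lock );
    rw->taken = 0;                                            // release exclusive access
    if( wake_first( &rw->wr_root ) == 0 )                     // prefer the first waiting writer
        wake_all( &rw->rd_root );                             // else wake all waiting readers
    busylock_release( &rw->lock );
}
}}}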

F) locks debug

Each local or remote busylock contains a <type> field, a non-zero value defined at lock initialization, that identifies the specific resource protected by this lock.

  • if the busylock is directly used to protect access to a shared kernel structure, this type field defines the type of the protected structure.
  • if the busylock is used to protect concurrent access to a higher level lock (queuelock or rwlock), the <type> field defines the type of the structure protected by this higher level lock.

The existing lock types are defined in the <kernel_config.h> file.

F.1 busylock debug

When the DEBUG_BUSYLOCK parameter is set to a non-zero value in the <kernel_config.h> file, two debug mechanisms are activated, thanks to conditional compilation.

  1. Each thread contains - besides the busylocks counter - an optional busylocks_root field, which is the root of an embedded xlist of (local or remote) busylocks held by this thread at a given time. This list is implemented by an optional xlist field in the busylock descriptor, and it is dynamically updated by the busylock_acquire() and busylock_release() functions. The set of taken busylocks is printed in the error message when the scheduler detects that a descheduling thread is holding one or several busylocks. This list can also be printed through the idbg interactive debugger, for any thread identified by its (pid,trdid).
  2. All busylock_acquire() / busylock_release() calls made by the thread identified by the (DEBUG_BUSYLOCK_PID, DEBUG_BUSYLOCK_TRDID) parameters are traced on the kernel TXT0 terminal. These parameters are defined in the <kernel_config.h> file (a hypothetical example is given after this list).
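As a purely hypothetical example, the corresponding configuration could look as follows (the parameter names come from the text above, the values are only illustrative):

{{{
/* hypothetical excerpt from kernel_config.h : values are only an example */
#define DEBUG_BUSYLOCK          1       // activate the busylock debug mechanisms
#define DEBUG_BUSYLOCK_PID      0x4     // PID of the traced thread
#define DEBUG_BUSYLOCK_TRDID    0x0     // TRDID of the traced thread
}}}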

F.2 higher level lock debug

When the DEBUG_QUEUELOCK_TYPE parameter is set to a non-zero value, all queuelock_acquire() / queuelock_release() calls made by any thread on a given queuelock, identified by its type and its (cxy,ptr) pointer, are traced on the kernel TXT0 terminal. The DEBUG_QUEUELOCK_TYPE, DEBUG_QUEUELOCK_CXY (cluster identifier) and DEBUG_QUEUELOCK_PTR (queuelock local pointer) parameters are defined in the <kernel_config.h> file. If the DEBUG_QUEUELOCK_CXY and DEBUG_QUEUELOCK_PTR parameters are set to zero, all queuelocks matching the specified type are traced.

When the DEBUG_RWLOCK_TYPE parameter is set to a non-zero value, all rwlock_acquire() / rwlock_release() calls made by any thread on a given rwlock, identified by its type and its (cxy,ptr) pointer, are traced on the kernel TXT0 terminal. The DEBUG_RWLOCK_TYPE, DEBUG_RWLOCK_CXY (cluster identifier) and DEBUG_RWLOCK_PTR (rwlock local pointer) parameters are defined in the <kernel_config.h> file. If the DEBUG_RWLOCK_CXY and DEBUG_RWLOCK_PTR parameters are set to zero, all rwlocks matching the specified type are traced.
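As above, a purely hypothetical configuration example (the parameter names come from the text, the type value and the zero cxy/ptr values are only illustrative):

{{{
/* hypothetical excerpt from kernel_config.h : values are only an example */
#define DEBUG_QUEUELOCK_TYPE    5       // type of the traced queuelocks (0 => no trace)
#define DEBUG_QUEUELOCK_CXY     0       // 0 => trace in all clusters
#define DEBUG_QUEUELOCK_PTR     0       // 0 => trace all queuelocks of this type

#define DEBUG_RWLOCK_TYPE       0       // 0 => rwlock tracing disabled
#define DEBUG_RWLOCK_CXY        0
#define DEBUG_RWLOCK_PTR        0
}}}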