Changes between Version 7 and Version 8 of kernel_locks


Timestamp:
Dec 5, 2014, 2:49:56 PM (9 years ago)
Author:
alain
Comment:


  • kernel_locks

    v7 v8  
    33[[PageOutline]]
    44
    5 The [source:soft/giet_vm/giet_common/locks.c locks.c] and [source:soft/giet_vm/giet_common/locks.h locks.h] files define the functions used by the kernel to take & release  locks protecting exclusive access to a shared resource. These locks are implemented as spin-locks, with a waiting queue (based on a ticket allocator scheme) to enforce fairness and avoid live-lock situations. The GIET_LOCK_MAX_TICKET define the wrapping value for the ticket allocator.
     5The [source:soft/giet_vm/giet_common/locks.c locks.c] and [source:soft/giet_vm/giet_common/locks.h locks.h] files define the functions used by the kernel to take and release locks protecting exclusive access to a shared resource.
     6
     7The GIET_VM kernel defines two types of spin-locks:
     8 * The '''spin_lock_t''' implements a spin-lock with a waiting queue (based on a ticket allocator scheme), to enforce fairness and avoid live-lock situations. The GIET_LOCK_MAX_TICKET constant defines the wrapping value for the ticket allocator.
     9 * The '''simple_lock_t''' implements a spin-lock without a waiting queue. It is only used by the TTY0 access functions, to get exclusive access to the TTY0 kernel terminal.
    610
    711The lock access functions are prefixed by "_" as a reminder that they can only be executed by a processor in kernel mode.
    812
    9 The ''spin_lock_t'' structure is defined to have one single lock in a 64 bytes cache line, and should be aligned on a cache line boundary.
     13Both the '''spin_lock_t''' and '''simple_lock_t''' structures are implemented so that a single lock occupies a full 64-byte cache line, and they should be aligned on a cache line boundary.
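
As an illustration, the two structures could look like the following sketch (the field names and the exact layout are assumptions made for this example, not the actual locks.h declarations):

{{{
#!c
/* illustrative sketch only: field names are assumptions, not the locks.h code */

typedef struct spin_lock_s          /* ticket-based spin-lock                  */
{
    unsigned int current;           /* ticket currently served                 */
    unsigned int free;              /* next free ticket                        */
    unsigned int padding[14];       /* pad the structure to 64 bytes           */
} __attribute__((aligned(64))) spin_lock_t;

typedef struct simple_lock_s        /* spin-lock without waiting queue         */
{
    unsigned int value;             /* 0 : lock is free / 1 : lock is taken    */
    unsigned int padding[15];       /* pad the structure to 64 bytes           */
} __attribute__((aligned(64))) simple_lock_t;
}}}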
    1014
    1115
    1216 === unsigned int '''_atomic_increment'''( unsigned int * shared , unsigned int increment ) ===
    13 This blocking function use a LL/SC to atomically increment a shared variable.
     17This blocking function uses an LL/SC sequence to atomically increment a shared variable.
    1418 * '''shared''' : pointer to the shared variable
    1519 * '''increment''' : increment value
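
A minimal sketch of such an LL/SC loop, assuming a MIPS32 target and assuming that the returned value is the value of the shared variable before the increment (an illustration, not the actual locks.c code):

{{{
#!c
unsigned int _atomic_increment( unsigned int * shared,
                                unsigned int   increment )
{
    unsigned int old;
    unsigned int tmp;

    asm volatile(
        "1:                        \n"
        "    ll    %0, 0(%2)       \n"    /* old = *shared (linked load)      */
        "    addu  %1, %0, %3      \n"    /* tmp = old + increment            */
        "    sc    %1, 0(%2)       \n"    /* conditional store of tmp         */
        "    beqz  %1, 1b          \n"    /* retry if the store failed        */
        "    nop                   \n"
        : "=&r" (old), "=&r" (tmp)
        : "r" (shared), "r" (increment)
        : "memory" );

    return old;                           /* value observed before the increment */
}
}}}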
     
    1923
    2024 === void '''_lock_acquire'''( spin_lock_t * lock ) ===
    21 This blocking function uses the atomic_increment() function, and returns only when the lock as been granted.
     25This blocking function uses the _atomic_increment() function to implement a ticket allocator and provide ordered access to the protected resource. It returns only when the lock has been granted.
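
A minimal sketch of the acquire side of the ticket scheme, reusing the hypothetical current / free fields from the structure sketch above (the actual code also wraps the ticket values at GIET_LOCK_MAX_TICKET, which is not shown here):

{{{
#!c
void _lock_acquire( spin_lock_t * lock )
{
    /* atomically take the next free ticket */
    unsigned int ticket = _atomic_increment( &lock->free, 1 );

    /* busy-wait until the served ticket matches the allocated ticket ;
       the volatile access forces a memory read at each iteration       */
    while ( *(volatile unsigned int *)&lock->current != ticket )
    {
        /* spin */
    }
}
}}}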
    2226
    23  === void '''_lock_release'''( giet_lock_t * lock ) ===
    24 This function releases the spin-lock. It must always be called after a successful _lock_acquire().
     27 === void '''_lock_release'''( spin_lock_t * lock ) ===
     28This function releases the lock. It must always be called after a successful _lock_acquire().
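
A matching sketch of the release side, under the same field-name assumptions, followed by a typical kernel-mode usage pattern (the lock and counter names are hypothetical):

{{{
#!c
void _lock_release( spin_lock_t * lock )
{
    /* serve the next ticket : unblocks the oldest waiting processor    */
    /* (the actual code also wraps the value at GIET_LOCK_MAX_TICKET)   */
    lock->current = lock->current + 1;
}

/* hypothetical usage : exclusive update of a shared kernel counter */
spin_lock_t  _counter_lock __attribute__((aligned(64)));
unsigned int _shared_counter;

void _shared_counter_increment( void )
{
    _lock_acquire( &_counter_lock );
    _shared_counter++;
    _lock_release( &_counter_lock );
}
}}}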
     29
     30 === void '''_simple_lock_acquire'''( simple_lock_t * lock ) ===
     31This blocking function does not implement any ordered allocation. It returns only when the lock has been granted.
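
A minimal sketch of such a non-ordered acquire, again assuming a MIPS32 target and the hypothetical value field shown earlier (illustration only):

{{{
#!c
void _simple_lock_acquire( simple_lock_t * lock )
{
    unsigned int old;
    unsigned int ok;

    do
    {
        asm volatile(
            "    ll    %0, 0(%2)    \n"   /* old = lock value (linked load)   */
            "    li    %1, 1        \n"
            "    sc    %1, 0(%2)    \n"   /* try to write 1 / ok = SC status  */
            : "=&r" (old), "=&r" (ok)
            : "r" (&lock->value)
            : "memory" );
    }
    while ( (old != 0) || (ok == 0) );    /* retry while taken or SC failed   */
}
}}}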
     32
     33 === void '''_simple_lock_release'''( simple_lock_t * lock ) ===
     34This function releases the lock. It must always be called after a successful _simple_lock_acquire().
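
A hedged usage sketch of the TTY0 pattern mentioned above (the lock and function names are hypothetical, not the actual GIET_VM identifiers):

{{{
#!c
simple_lock_t _tty0_lock __attribute__((aligned(64)));

void _tty0_write_string( char * string )
{
    _simple_lock_acquire( &_tty0_lock );    /* get exclusive access to TTY0      */
    (void)string;  /* ... copy the string to the TTY0 terminal registers ...     */
    _simple_lock_release( &_tty0_lock );    /* let other processors access TTY0  */
}
}}}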