Changes between Version 23 and Version 24 of io_operations


Timestamp:
Nov 3, 2016, 5:15:42 PM
Author:
meunier
Comment:

--

  • io_operations

    v23 v24  
    1313 * '''External peripherals''' are accessed through a bridge located in a single cluster (called ''cluster_io'', identified by the ''io_cxy'' parameter in the arch_info description). They are shared resources that can be used by any thread running in any cluster. Examples are the generic IOC device (Block Device Controller), the generic NIC device (Network Interface Controller), the generic TXT device (Text Terminal), and the generic FBF device (Frame Buffer for Graphical Display Controller).
    1414
    15  * '''Internal peripherals''' are replicated in all clusters. Each internal peripheral is associated to the local kernel instance, but can be accessed by ant thread running in any cluster. There is very few internal peripherals. Examples are the generic ICU device (Interrupt Controller Unit), or the generic MMC device (L2 Cache Configuration and coherence management).
     15 * '''Internal peripherals''' are replicated in all clusters. Each internal peripheral is associated to the local kernel instance, but can be accessed by any thread running in any cluster. There are very few internal peripherals. Examples are the generic ICU device (Interrupt Controller Unit), or the generic MMC device (L2 Cache Configuration and coherence management).
    1616
    1717ALMOS-MK supports ''multi-channel'' external peripherals, where a single peripheral controller contains N channels that can run in parallel. Each channel has a separate set of addressable registers, and each channel can be used by the OS as an independent device. Examples are the TXT peripheral (one channel per text terminal), or the NIC peripheral (one channel per MAC interface).
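In practice, each channel's register segment can be located at a fixed-size offset from the controller base. The sketch below is illustrative only: the window size and the names are assumptions, not values taken from arch_info.py or the ALMOS-MK sources.

{{{
#!c
#include <stdint.h>

/* Illustrative only : CHANNEL_SPAN and the function name are assumptions. */
#define CHANNEL_SPAN  0x1000   /* assumed size of one channel register segment */

/* physical base address of the registers of channel <channel>,
 * given the physical base address of the whole controller       */
static inline uint64_t channel_base( uint64_t ctrl_base , uint32_t channel )
{
    return ctrl_base + (uint64_t)channel * CHANNEL_SPAN;
}
}}}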
    1818
    19 The set of available peripherals, and their localisation in a given many-core architecture must be described in the '''arch_info.py''' file. For each peripheral, the composite index is implemented as a 32 bits integer, where the 16 MSB bits define the type, and the 16 LSB bits define the subtype.
     19The set of available peripherals, and their location in a given manycore architecture must be described in the '''arch_info.py''' file. For each peripheral, the composite index is implemented as a 32-bit integer, where the 16 most significant bits define the type, and the 16 least significant bits define the subtype.
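The 16/16 split can be expressed with the following helper macros; the identifiers are hypothetical, and the actual names used by arch_info.py and the kernel may differ.

{{{
#!c
#include <stdint.h>

/* Hypothetical helpers illustrating the 16 MSB / 16 LSB composite index. */
#define DEV_COMPOSITE( type , subtype )  ( ((uint32_t)(type) << 16) | ((uint32_t)(subtype) & 0xFFFF) )
#define DEV_TYPE( compo )                ( (uint32_t)(compo) >> 16 )       /* 16 MSB : type    */
#define DEV_SUBTYPE( compo )             ( (uint32_t)(compo) & 0xFFFF )    /* 16 LSB : subtype */
}}}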
    2020
    2121== B) Generic Devices  APIs  ==
    2222
    23 To represent the available peripherals in a given manicore architecture, ALMOS-MK uses generic ''device descriptors'' (implemented by the ''device_t'' structure). For multi-channels peripherals, ALMOS-MK defines one ''device descriptor'' per channel.  this descriptor contains the functional index, the implementation index, the channel index, and the physical base address of the segment containing the addressable registers for this peripheral channel.
     23To represent the available peripherals in a given manycore architecture, ALMOS-MK uses generic ''device descriptors'' (implemented by the ''device_t'' structure). For multi-channel peripherals, ALMOS-MK defines one ''device descriptor'' per channel. This descriptor contains the functional index, the implementation index, the channel index, and the physical base address of the segment containing the addressable registers for this peripheral channel.
    2424
    25 Each device descriptor contains a waiting queue of pending commands registered by the various clients threads.
     25Each device descriptor contains a waiting queue of pending commands registered by the various client threads.
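Putting the two previous paragraphs together, a minimal sketch of what such a per-channel descriptor could look like is given below. The xptr_t, xlist_entry_t and remote_lock_t definitions are simplified stand-ins for the actual ALMOS-MK types, and the field names are illustrative.

{{{
#!c
#include <stdint.h>

/* Simplified stand-ins for the ALMOS-MK extended pointer, distributed
 * list and remote lock types; the real definitions are not shown here. */
typedef uint64_t xptr_t;
typedef struct xlist_entry_s { xptr_t next; xptr_t pred; } xlist_entry_t;
typedef struct remote_lock_s { uint32_t taken; } remote_lock_t;

/* Minimal sketch of a per-channel device descriptor, limited to the
 * fields named in this section.                                       */
typedef struct device_s
{
    uint32_t        func;       /* functional index (IOC, TXT, NIC, ...)             */
    uint32_t        impl;       /* implementation index (hardware variant)           */
    uint32_t        channel;    /* channel index for multi-channel peripherals       */
    xptr_t          base;       /* physical base address of the register segment     */
    xlist_entry_t   wait_root;  /* root of the waiting queue of pending commands     */
    remote_lock_t   wait_lock;  /* lock protecting the waiting queue                 */
    xptr_t          server;     /* extended pointer on the associated server thread  */
}
device_t;
}}}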
    2626
    2727For each generic device type, the device-specific API defines the list of available commands, and the specific structure defining the command descriptor (containing the command type and arguments). This structure is embedded (as a union of the various device types) in the thread descriptor, to be passed to the hardware-specific driver.
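For instance, the embedded command descriptor could be a union such as the one below; the per-device argument lists are assumptions for illustration, since each device-specific API defines its own.

{{{
#!c
#include <stdint.h>

typedef uint64_t xptr_t;   /* extended pointer (simplified stand-in) */

/* Assumed command descriptors for three device types. */
typedef struct ioc_command_s { uint32_t type; uint32_t lba; uint32_t count; xptr_t buffer; uint32_t error; } ioc_command_t;
typedef struct txt_command_s { uint32_t type; xptr_t buffer; uint32_t count; uint32_t error; } txt_command_t;
typedef struct nic_command_s { uint32_t type; xptr_t buffer; uint32_t length; uint32_t error; } nic_command_t;

/* One slot of this union is embedded in each thread descriptor,
 * and passed to the hardware-specific driver.                    */
typedef union
{
    ioc_command_t  ioc;   /* block device command      */
    txt_command_t  txt;   /* text terminal command     */
    nic_command_t  nic;   /* network interface command */
}
command_t;
}}}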
    2828
    29 The set of supported generic devices, and the associated APIs are defined below:
     29The set of supported generic devices, and their associated APIs are defined below:
    3030
    31 || device || type ||  usage                                 || api definition                         ||
    32 || IOC     || ext   || block device controller        || [wiki:ioc_device_api ioc_api] ||
    33 || TXT     || ext   || text terminal controller        || [wiki:txt_device_api txt_api] ||
     31|| device  || type  ||  usage                       || api definition                ||
     32|| IOC     || ext   || block device controller      || [wiki:ioc_device_api ioc_api] ||
     33|| TXT     || ext   || text terminal controller     || [wiki:txt_device_api txt_api] ||
    3434|| NIC     || ext   || network interface controller || [wiki:nic_device_api nic_api] ||
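The functional types named in this page could be enumerated as below; the identifiers are illustrative, and the actual kernel names may differ.

{{{
#!c
/* Illustrative enumeration of the generic device functional types. */
typedef enum
{
    DEV_TYPE_ICU,   /* internal : interrupt controller unit    */
    DEV_TYPE_MMC,   /* internal : L2 cache configuration       */
    DEV_TYPE_IOC,   /* external : block device controller      */
    DEV_TYPE_TXT,   /* external : text terminal controller     */
    DEV_TYPE_NIC,   /* external : network interface controller */
    DEV_TYPE_FBF,   /* external : frame buffer controller      */
}
dev_func_t;
}}}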
    3535
    3636== C) Devices Descriptors Placement ==
    3737
    38 '''Internal peripherals'''  are replicated in all clusters.  In each cluster, the device descriptor is evidently stored in the same cluster as the hardware device itself. These device descriptors are mostly accessed by the local kernel instance, but  can also be accessed by threads running in another cluster (it is the case for both the ICU and MMC devices).
     38'''Internal peripherals''' are replicated in all clusters. In each cluster, the device descriptor is obviously stored in the same cluster as the hardware device itself. These device descriptors are mostly accessed by the local kernel instance, but can also be accessed by threads running in another cluster (this is the case for both the ICU and MMC devices).
    3939
    40 '''External peripherals''' are shared resources, located in the I/O cluster. To minimize contention, the corresponding device descriptors are distributed on all clusters, as uniformly as possible. Therefore, an I/O operation involve generally three clusters: the client cluster, the I/O cluster containing the external peripheral, and the server cluster containing the device descriptor.
     40'''External peripherals''' are shared resources, located in the I/O cluster. To minimize contention, the corresponding device descriptors are distributed over all clusters, as uniformly as possible. Therefore, an I/O operation generally involves three clusters: the client cluster, the I/O cluster containing the external peripheral, and the server cluster containing the device descriptor.
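One simple way to meet this uniformity goal is to place the descriptor of channel k in the cluster of continuous index k modulo the number of clusters. The sketch below is a hypothetical policy, not necessarily the one used by ALMOS-MK.

{{{
#!c
#include <stdint.h>

/* Hypothetical placement policy : descriptors of the channels of an
 * external peripheral are spread round-robin over the clusters.      */
static inline uint32_t descriptor_cluster( uint32_t channel , uint32_t nb_clusters )
{
    return channel % nb_clusters;   /* server cluster continuous index */
}
}}}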
    4141
    4242The ''devices_directory_t'' structure contains extended pointers on all generic device descriptors defined in the manycore architecture.
     
    4545 * There is one entry per cluster for each '''internal peripheral''', and the corresponding array is indexed by the cluster index (it is not indexed by the cluster identifier cxy, because cxy is not a continuous index).
    4646
    47 This device directory being implemented as a global variable, is replicated in all clusters, and is initialized in the kernel initialization phase.
     47This device directory, implemented as a global variable, is replicated in all clusters, and is initialized in the kernel initialization phase.
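A minimal sketch of this directory is given below. Internal peripherals get one entry per cluster, indexed by the continuous cluster index (not by cxy); the external entries shown here (one per channel) are an assumption, since the corresponding lines are not quoted in this section, and the bounds are illustrative.

{{{
#!c
#include <stdint.h>

typedef uint64_t xptr_t;        /* extended pointer (simplified stand-in) */

#define NB_CLUSTERS_MAX   256   /* assumed bound on the number of clusters  */
#define NB_TXT_CHANNELS   8     /* assumed channel counts, for illustration */
#define NB_NIC_CHANNELS   2

/* Sketch of the replicated device directory. */
typedef struct devices_directory_s
{
    xptr_t  icu[NB_CLUSTERS_MAX];   /* internal : one ICU descriptor per cluster */
    xptr_t  mmc[NB_CLUSTERS_MAX];   /* internal : one MMC descriptor per cluster */
    xptr_t  ioc;                    /* external : block device controller        */
    xptr_t  txt[NB_TXT_CHANNELS];   /* external : one descriptor per terminal    */
    xptr_t  nic[NB_NIC_CHANNELS];   /* external : one descriptor per MAC channel */
}
devices_directory_t;
}}}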
    4848
    4949== D) Waiting queue Management ==
    5050
    51 The commands waiting queue is implemented as a distributed XLIST, rooted in the device descriptor. To launch an I/O operation,  a client thread, running in any cluster, call a function of the device API. This function builds the command descriptor embedded in the thread descriptor, and registers the thread in the waiting queue.
     51The command waiting queue is implemented as a distributed XLIST, rooted in the device descriptor. To launch an I/O operation, a client thread, running in any cluster, calls a function of the device API. This function builds the command descriptor embedded in the thread descriptor, and registers the thread in the waiting queue.
    5252
    53 For all I/O operations, ALMOS-MK implements a blocking policy: The thread calling a command function is blocked on the THREAD_BLOCKED_IO condition, and deschedule. It will be re-activated by the driver ISR (Interrupt Service Routine) signaling the completion of the I/O operation.
     53For all I/O operations, ALMOS-MK implements a blocking policy: the thread calling a command function is blocked on the THREAD_BLOCKED_IO condition, and descheduled. It will be re-activated by the driver ISR (Interrupt Service Routine) signaling the completion of the I/O operation.
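The client side of this blocking policy can be sketched as follows. Every function name, constant and field below is a placeholder standing for the ALMOS-MK primitives named in this page, not the actual kernel API.

{{{
#!c
#include <stdint.h>

typedef uint64_t xptr_t;
enum { IOC_READ = 1 , THREAD_BLOCKED_IO = 0x4 };

typedef struct ioc_command_s { uint32_t type; uint32_t lba; uint32_t count; xptr_t buffer; uint32_t error; } ioc_command_t;
typedef struct thread_s { union { ioc_command_t ioc; } command; } thread_t;

/* Placeholder kernel primitives, assumed to exist in some form. */
extern thread_t * current_thread( void );
extern void device_register_command( xptr_t dev_xp , thread_t * thread );
extern void thread_block( thread_t * thread , uint32_t cond );
extern void sched_yield( void );

/* Sketch of the client side of a blocking I/O command. */
void dev_ioc_read( xptr_t dev_xp , uint32_t lba , uint32_t count , xptr_t buffer )
{
    thread_t * this = current_thread();

    /* 1. build the command descriptor embedded in the thread descriptor */
    this->command.ioc.type   = IOC_READ;
    this->command.ioc.lba    = lba;
    this->command.ioc.count  = count;
    this->command.ioc.buffer = buffer;

    /* 2. register this thread in the device waiting queue */
    device_register_command( dev_xp , this );

    /* 3. block on THREAD_BLOCKED_IO and deschedule : the driver ISR
          will re-activate this thread when the operation completes  */
    thread_block( this , THREAD_BLOCKED_IO );
    sched_yield();
}
}}}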
    5454
    5555The waiting queue is handled as a Multi-Writers / Single-Reader FIFO, protected by a remote_lock. The N writers are the client threads, whose number is not bounded.
    56 The single reader is a server thread associated to the device descriptor, and created at kernel initialization. This thread is in charge of consuming the pending commands from the waiting queue. When the queue is empty, the server thread blocks on the THREAD_BLOCKED_QUEUE condition, and deschedule. It is activated by the client thread when a new command is registered in the queue.
     56The single reader is a server thread associated to the device descriptor, and created at kernel initialization. This thread is in charge of consuming the pending commands from the waiting queue. When the queue is empty, the server thread blocks on the THREAD_BLOCKED_QUEUE condition, and is descheduled. It is activated by the client thread when a new command is registered in the queue.
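The server side can be sketched as a simple consumer loop; again, all helper names are placeholders, not the actual ALMOS-MK API.

{{{
#!c
#include <stdint.h>

typedef uint64_t xptr_t;
typedef struct thread_s thread_t;
typedef struct device_s device_t;
enum { THREAD_BLOCKED_QUEUE = 0x8 };
#define XPTR_NULL  0

/* Placeholder kernel primitives, assumed to exist in some form. */
extern thread_t * current_thread( void );
extern xptr_t     device_get_command( device_t * dev );  /* pop one command, XPTR_NULL if empty */
extern void       driver_cmd( xptr_t thread_xp , device_t * dev );
extern void       thread_block( thread_t * thread , uint32_t cond );
extern void       sched_yield( void );

/* Sketch of the single-reader server thread : it consumes pending
 * commands, and blocks on THREAD_BLOCKED_QUEUE when the queue is empty. */
void device_server( device_t * dev )
{
    while( 1 )
    {
        xptr_t client_xp = device_get_command( dev );

        if( client_xp != XPTR_NULL )    /* pending command : pass it to the driver */
        {
            driver_cmd( client_xp , dev );
        }
        else                            /* empty queue : block and deschedule      */
        {
            thread_block( current_thread() , THREAD_BLOCKED_QUEUE );
            sched_yield();
        }
    }
}
}}}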
    5757
    5858Finally, each generic device descriptor contains a link to the specific driver associated to the available hardware implementation. This link is established in the kernel initialization phase.
     
    6060== E) Drivers API ==
    6161
    62 To start an I/O operation the server thread associated to the device must call the specific driver corresponding to the hardware peripheral available in the manycore architecture.
     62To start an I/O operation, the server thread associated to the device must call the specific driver corresponding to the hardware peripheral available in the manycore architecture.
    6363
    6464To signal the completion of a given I/O operation, the peripheral raises an IRQ to execute a specific ISR (Interrupt Service Routine) in the client cluster, on the core running the client thread. This requires dynamically routing the IRQ to this core.
     
    6868'''driver_init()'''
    6969
    70 This functions initialises both the peripheral hardware registers, and the specific global variables defined by a given hardware implementation. It is called in the kernel initialization phase.
     70This function initializes both the peripheral hardware registers, and the specific global variables defined by a given hardware implementation. It is called in the kernel initialization phase.
    7171
    7272'''driver_cmd( xptr_t thread , device_t * device )'''
    7373
    74 This function  is called by the server thread. It access to the peripheral hardware registers to start the I/O operation. Depending on the hardware peripheral implementation, can be blocking or non-blocking for the server thread.
     74This function is called by the server thread. It accesses the peripheral hardware registers to start the I/O operation. Depending on the hardware peripheral implementation, it can be blocking or non-blocking for the server thread:
    7575 * It is blocking on the THREAD_BLOCKED_DEV_ISR condition if the hardware peripheral supports only one simultaneous I/O operation (see the sketch after this list). Examples are a simple disk controller, or a text terminal controller. The blocked server thread must be re-activated by the ISR signaling completion of the current I/O operation.
    7676 * It is non-blocking if the hardware peripheral supports several simultaneous I/O operations. An example is an AHCI-compliant disk controller. It blocks only if the number of simultaneous I/O operations becomes larger than the max number of concurrent operations supported by the hardware.
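For the blocking case, a driver_cmd() for a hypothetical single-operation disk controller could look like this; the register map, offsets and helper names are all assumptions.

{{{
#!c
#include <stdint.h>

typedef uint64_t xptr_t;
typedef struct thread_s thread_t;
typedef struct device_s { xptr_t base; } device_t;   /* simplified */
enum { THREAD_BLOCKED_DEV_ISR = 0x10 };

/* Placeholder primitives : remote register accesses, and accessors
 * reading the arguments from the (possibly remote) command descriptor. */
extern thread_t * current_thread( void );
extern void     hal_remote_write32( xptr_t addr , uint32_t value );
extern uint32_t command_lba( xptr_t thread_xp );
extern uint32_t command_count( xptr_t thread_xp );
extern void     thread_block( thread_t * thread , uint32_t cond );
extern void     sched_yield( void );

/* Assumed register map of a simple, one-operation-at-a-time controller. */
#define IOC_REG_LBA    0x0
#define IOC_REG_COUNT  0x4
#define IOC_REG_RUN    0x8

/* Sketch of a blocking driver_cmd() : the server thread programs the
 * peripheral, then blocks until the ISR signals completion, because
 * this controller supports only one simultaneous I/O operation.       */
void simple_ioc_cmd( xptr_t thread_xp , device_t * device )
{
    hal_remote_write32( device->base + IOC_REG_LBA   , command_lba( thread_xp ) );
    hal_remote_write32( device->base + IOC_REG_COUNT , command_count( thread_xp ) );
    hal_remote_write32( device->base + IOC_REG_RUN   , 1 );

    thread_block( current_thread() , THREAD_BLOCKED_DEV_ISR );
    sched_yield();
}
}}}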
     
    8080'''driver_isr( xptr_t device )'''
    8181
    82 This function is executed in the client cluster, on the core running the client thread.  It access the peripheral hardware register to get the I/O operation error status, acknowledge the IRQ, and unblock the client thread.
    83 If the server thread has been blocked, it unblock also the server thread.
     82This function is executed in the client cluster, on the core running the client thread. It accesses the peripheral hardware registers to get the I/O operation error status, acknowledges the IRQ, and unblocks the client thread.
     83If the server thread has been blocked, it also unblocks the server thread.
    8484The ''device'' argument is the extended pointer on the device descriptor.
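The matching driver_isr() for the same hypothetical controller could be sketched as below; all register offsets and helper names are assumptions.

{{{
#!c
#include <stdint.h>

typedef uint64_t xptr_t;
enum { THREAD_BLOCKED_IO = 0x4 , THREAD_BLOCKED_DEV_ISR = 0x10 };

/* Placeholder primitives and accessors; names are assumptions. */
extern uint32_t hal_remote_read32( xptr_t addr );
extern void     hal_remote_write32( xptr_t addr , uint32_t value );
extern xptr_t   device_base( xptr_t dev_xp );         /* registers segment base   */
extern xptr_t   device_get_client( xptr_t dev_xp );   /* current client thread    */
extern xptr_t   device_get_server( xptr_t dev_xp );   /* associated server thread */
extern void     command_set_error( xptr_t thread_xp , uint32_t error );
extern void     thread_unblock( xptr_t thread_xp , uint32_t cond );

/* Assumed status / acknowledge registers of the same controller. */
#define IOC_REG_STATUS  0xC
#define IOC_REG_ACK     0x10

/* Sketch of driver_isr() : get the error status, acknowledge the IRQ,
 * report the status, then unblock the client and server threads.       */
void simple_ioc_isr( xptr_t dev_xp )
{
    xptr_t base = device_base( dev_xp );

    uint32_t error = hal_remote_read32( base + IOC_REG_STATUS );   /* error status    */
    hal_remote_write32( base + IOC_REG_ACK , 1 );                  /* acknowledge IRQ */

    xptr_t client_xp = device_get_client( dev_xp );
    command_set_error( client_xp , error );                        /* report status   */

    thread_unblock( client_xp , THREAD_BLOCKED_IO );                          /* wake client */
    thread_unblock( device_get_server( dev_xp ) , THREAD_BLOCKED_DEV_ISR );   /* wake server */
}
}}}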
    8585
    8686== F) Implementation ==
    8787
    88 It is worth to note that this general I/O operation mechanism involves generally three clusters (client cluster / server cluster / IO cluster), but does not use any RPC:
    89  * to post a new command in the waiting queue of a given (remote) device descriptor, the client thread uses only few remote access to be registered in the distributed XLIST rooted in the server cluster.
    90  * to launch the I/O operation on the (remote) peripheral, the server thread uses only remote access to the physical registers located in the I/O cluster.
    91  * To complete the I/O operation, the ISR running on the client cluster access peripheral registers in I/O clusters, reports I/O operation status in the command descriptor, and unblock client thread and server thread, using only local or remote accesses.
     88It is worth noting that this I/O operation mechanism generally involves three clusters (client cluster / server cluster / IO cluster), but does not use any RPC:
     89 * To post a new command in the waiting queue of a given (remote) device descriptor, the client thread uses only a few remote accesses to be registered in the distributed XLIST rooted in the server cluster.
     90 * To launch the I/O operation on the (remote) peripheral, the server thread uses only remote accesses to the physical registers located in the I/O cluster.
     91 * To complete the I/O operation, the ISR running on the client cluster accesses peripheral registers in the I/O cluster, reports the I/O operation status in the command descriptor, and unblocks the client and server threads, using only local or remote accesses.
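As an illustration of the first point, registering in the remote XLIST only takes a lock acquisition and a few linked-list pointer updates, all done with remote accesses. The sketch below is hedged: the primitive names are assumptions about the ALMOS-MK API.

{{{
#!c
#include <stdint.h>

typedef uint64_t xptr_t;

/* Placeholder primitives : the remote lock functions and xlist_add_last()
 * are assumed to operate on extended pointers, using only a handful of
 * remote reads and writes.                                                */
extern void remote_lock_acquire( xptr_t lock_xp );
extern void remote_lock_release( xptr_t lock_xp );
extern void xlist_add_last( xptr_t root_xp , xptr_t entry_xp );

/* Sketch of the first point above : the client thread registers its
 * command in a waiting queue rooted in a possibly remote cluster,
 * without any RPC.                                                   */
void device_queue_register( xptr_t root_xp , xptr_t lock_xp , xptr_t entry_xp )
{
    remote_lock_acquire( lock_xp );
    xlist_add_last( root_xp , entry_xp );
    remote_lock_release( lock_xp );
}
}}}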