
Input & Output Operations

A) Peripheral identification

ALMOS-MK identifies a peripheral by a composite index (func,impl). The func index defines a functional type; the impl index defines a specific hardware implementation.

  • Each value of the functional index defines a generic (implementation independent) device XYZ, characterized by an API defined in the dev_xyz.h file. This generic API allows the kernel to access the peripheral without depending on the actual hardware implementation.
  • For each generic device XYZ, several hardware implementations can exist, and each value of the implementation index impl is associated with a specific driver, which must implement the API defined for the generic XYZ device.

ALMOS-MK supports two types of peripheral components:

  • External peripherals are accessed through a bridge located in one single cluster (called cluster_io, identified by the io_cxy parameter in the arch_info description). They are shared resources that can be used by any kernel instance, running in any cluster. Examples are the generic IOC device (Block Device Controller), the generic NIC device (Network Interface Controller), the generic TXT device (Text Terminal), and the generic FBF device (Frame Buffer for Graphical Display Controller).
  • Internal peripherals are replicated in all clusters. Each internal peripheral is associated with the local kernel instance. There are very few internal peripherals; examples are the generic ICU device (Interrupt Controller Unit) and the generic MMC device (L2 Cache Configuration).

ALMOS-MK supports multi-channel peripherals, where one single peripheral controller contains N channels that can run in parallel. Each channel has a separate set of addressable registers, and each channel can be used by the OS as an independent device. Examples are the TXT peripheral (one channel per text terminal) and the NIC peripheral (one channel per MAC interface).

The set of available peripherals, and their location in a given many-core architecture, must be described in the arch_info.py file. For each peripheral, the composite index is implemented as a 32-bit integer, where the 16 MSB define the functional index func, and the 16 LSB define the implementation index impl.
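
As an illustration, the sketch below shows how such a composite index can be packed and unpacked. The helper names are assumptions chosen for this example, not the actual ALMOS-MK identifiers.

  #include <stdint.h>

  /* pack the composite (func,impl) index: 16 MSB = func, 16 LSB = impl */
  static inline uint32_t dev_make_index( uint16_t func, uint16_t impl )
  {
      return ((uint32_t)func << 16) | (uint32_t)impl;
  }

  /* unpack the functional and implementation indexes */
  static inline uint16_t dev_get_func( uint32_t index ) { return (uint16_t)(index >> 16); }
  static inline uint16_t dev_get_impl( uint32_t index ) { return (uint16_t)(index & 0xFFFF); }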

B) Generic Device APIs

To represent the available peripherals in a given many-core architecture, ALMOS-MK uses generic device descriptors (implemented by the device_t structure). For multi-channel peripherals, ALMOS-MK defines one device descriptor per channel. This descriptor contains the functional index, the implementation index, the channel index, and the physical base address of the segment containing the addressable registers for this peripheral channel.

Each device descriptor contains a waiting queue of pending commands registered by the various client threads.

For each generic device type, the device-specific API defines the list of available commands, and the specific structure defining the command descriptor (containing the command type and arguments). This structure is embedded (as a union over the various device types) in the thread descriptor, to be passed to the hardware-specific driver.
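
The sketch below shows one possible shape for these two structures. All field and type names are assumptions made for illustration; the actual definitions live in the ALMOS-MK sources.

  #include <stdint.h>

  typedef uint64_t xptr_t;                                    /* extended (cluster,local) pointer */
  typedef struct { xptr_t next; xptr_t pred; } xlist_entry_t; /* distributed list link            */

  /* hypothetical IOC command descriptor: type and arguments of one command */
  typedef struct ioc_command_s
  {
      uint32_t type;      /* e.g. read / write                   */
      uint32_t lba;       /* first block index                   */
      uint32_t count;     /* number of blocks                    */
      xptr_t   buffer;    /* extended pointer on the user buffer */
      uint32_t error;     /* completion status, set by the ISR   */
  }
  ioc_command_t;

  /* hypothetical per-channel generic device descriptor */
  typedef struct device_s
  {
      uint16_t      func;     /* functional index (generic device type)    */
      uint16_t      impl;     /* implementation index (selects the driver) */
      uint32_t      channel;  /* channel index for multi-channel devices   */
      xptr_t        base;     /* physical base of addressable registers    */
      xlist_entry_t queue;    /* root of the commands waiting queue        */
      void (*cmd)( xptr_t thread_xp, struct device_s * device ); /* driver */
      void (*isr)( xptr_t device_xp );                           /* driver */
  }
  device_t;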

The set of supported generic devices and the associated APIs are listed below:

  device   type       usage                          API definition
  IOC      external   block device controller        ioc_device_api (ioc_api)
  TXT      external   text terminal controller       txt_device_api (txt_api)
  NIC      external   network interface controller   nic_device_api (nic_api)

C) Waiting Queue Management

The commands waiting queue is implemented as a distributed XLIST, rooted in the device descriptor. To launch an I/O operation, any thread, running in any cluster, calls a function of the device API. This function builds the command descriptor, registers the command in the thread descriptor, and registers the thread in the waiting queue.

For all I/O operations, ALMOS-MK implements a blocking policy: the thread calling a command function is blocked on the THREAD_BLOCKED_IO condition, and deschedules. It will be re-activated by the driver ISR (Interrupt Service Routine) signaling the completion of the I/O operation.
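
A client-side command function can therefore follow the pattern sketched below. The primitives declared as extern (current_thread, dev_queue_register, thread_block, sched_yield, ...) are assumptions standing in for the actual ALMOS-MK functions.

  #include <stdint.h>

  typedef uint64_t xptr_t;
  typedef struct thread_s thread_t;

  /* assumed primitives, standing in for the actual ALMOS-MK ones */
  extern thread_t * current_thread( void );
  extern void       ioc_command_set( thread_t * t, uint32_t type,
                                     uint32_t lba, uint32_t count, xptr_t buffer );
  extern void       dev_queue_register( xptr_t dev_xp, thread_t * t );
  extern void       thread_block( thread_t * t, uint32_t cond );
  extern void       sched_yield( void );
  extern uint32_t   ioc_command_error( thread_t * t );

  #define THREAD_BLOCKED_IO  0x0001   /* assumed blocking condition flag */
  #define IOC_READ           0        /* assumed command type            */

  /* sketch of a client-side command function for the generic IOC device */
  uint32_t dev_ioc_read( xptr_t dev_xp, uint32_t lba, uint32_t count, xptr_t buffer )
  {
      thread_t * this = current_thread();

      /* build the command descriptor embedded in the thread descriptor */
      ioc_command_set( this, IOC_READ, lba, count, buffer );

      /* register the client thread in the device waiting queue */
      dev_queue_register( dev_xp, this );

      /* block on THREAD_BLOCKED_IO and deschedule: the driver ISR
         re-activates this thread when the operation completes      */
      thread_block( this, THREAD_BLOCKED_IO );
      sched_yield();

      /* return the completion status written by the ISR */
      return ioc_command_error( this );
  }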

The waiting queue is handled as a Multi-Writers / Single-Reader FIFO, protected by a remote_lock. The N writers are the client threads, whose number is not bounded. The single reader is a server thread associated with the device descriptor, and created at kernel initialization. This thread is in charge of consuming the pending commands from the waiting queue. When the queue is empty, the server thread blocks on the THREAD_BLOCKED_QUEUE condition, and deschedules. It is re-activated by the client thread when a new command is registered in the queue, as sketched below.
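
A minimal sketch of the server thread main loop, under the same assumptions as above (the extern primitives are stand-ins, not the actual ALMOS-MK API):

  #include <stdint.h>

  typedef uint64_t xptr_t;
  typedef struct thread_s thread_t;
  typedef struct device_s device_t;

  /* assumed primitives, standing in for the actual ALMOS-MK ones */
  extern int        dev_queue_is_empty( device_t * dev );
  extern xptr_t     dev_queue_get_first( device_t * dev );
  extern thread_t * current_thread( void );
  extern void       thread_block( thread_t * t, uint32_t cond );
  extern void       sched_yield( void );
  extern void       driver_cmd( xptr_t client_xp, device_t * dev );

  #define THREAD_BLOCKED_QUEUE  0x0002   /* assumed blocking condition flag */

  /* sketch of the server thread: single reader of the multi-writers FIFO */
  void dev_server_func( device_t * dev )
  {
      thread_t * this = current_thread();

      while( 1 )
      {
          if( dev_queue_is_empty( dev ) )
          {
              /* no pending command: block until a client posts one */
              thread_block( this, THREAD_BLOCKED_QUEUE );
              sched_yield();
          }
          else
          {
              /* consume the first pending command and start the I/O
                 operation through the device-specific driver         */
              xptr_t client_xp = dev_queue_get_first( dev );
              driver_cmd( client_xp, dev );
          }
      }
  }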

Finally, each device descriptor for a generic device XYZ contains a link to the specific driver associated with the available hardware implementation. This link is established during the kernel initialization phase.

  • As internal peripherals are private resources, replicated in all clusters, the device descriptor is stored in the same cluster as the hardware device itself. Therefore, most accesses are local.
  • For external peripherals, the hardware devices are shared resources, located in the I/O cluster. To minimize contention, the device descriptors are distributed on all clusters, as uniformly as possible. Therefore, an I/O operation generally involves three clusters: the client cluster, the I/O cluster, and the server cluster containing the device descriptor.

D) Drivers API

To start an I/O operation, the server thread associated with the device must call the specific driver corresponding to the hardware peripheral available in the manycore architecture.

To signal the completion of a given I/O operation, the peripheral raises an IRQ to execute a specific ISR (Interrupt Service Routine) in the client cluster, on the core running the client thread. This requires the IRQ to be dynamically routed to this core.

Any driver must therefore implement the following three functions:

driver_init()

This function initializes both the peripheral hardware registers and the specific global variables defined by a given hardware implementation. It is called in the kernel initialization phase.
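
A minimal sketch, assuming a hypothetical simple block device controller and a hypothetical register map (all function and register names below are illustrative only):

  #include <stdint.h>

  typedef uint64_t xptr_t;
  typedef struct device_s device_t;

  /* assumed primitives: remote write, and register address computation */
  extern void   hal_remote_s32( xptr_t reg_xp, uint32_t value );
  extern xptr_t dev_reg_xp( device_t * dev, uint32_t reg );

  enum { REG_RESET, REG_IRQ_ENABLE };   /* assumed register map */

  /* sketch of driver_init(): called once per device at kernel init */
  void soclib_bdv_init( device_t * device )
  {
      hal_remote_s32( dev_reg_xp( device, REG_RESET      ), 1 );  /* reset HW   */
      hal_remote_s32( dev_reg_xp( device, REG_IRQ_ENABLE ), 1 );  /* enable IRQ */
  }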

driver_cmd( xptr_t thread , device_t * device )

This function is called by the server thread. It accesses the peripheral hardware registers to start the I/O operation. Depending on the hardware peripheral implementation, it can be blocking or non-blocking for the server thread:

  • It is blocking on the THREAD_BLOCKED_DEV_ISR condition if the hardware peripheral supports only one simultaneous I/O operation. Examples are a simple disk controller or a text terminal controller. The blocked server thread must be re-activated by the ISR signaling completion of the current I/O operation.
  • It is non-blocking if the hardware peripheral supports several simultaneous I/O operations. An example is an AHCI-compliant disk controller. It blocks only if the number of simultaneous I/O operations becomes larger than the maximum number supported by the hardware.

The thread argument is the extended pointer to the client thread, containing the embedded command descriptor. The device argument is the local pointer to the device descriptor.
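
The sketch below illustrates the blocking case for a hypothetical simple disk controller; as before, the extern helpers and the register map are assumptions, not the actual ALMOS-MK API.

  #include <stdint.h>

  typedef uint64_t xptr_t;
  typedef struct thread_s thread_t;
  typedef struct device_s device_t;

  /* assumed primitives, standing in for the actual ALMOS-MK ones */
  extern void       hal_remote_s32( xptr_t reg_xp, uint32_t value );
  extern void       hal_remote_s64( xptr_t reg_xp, uint64_t value );
  extern xptr_t     dev_reg_xp( device_t * dev, uint32_t reg );
  extern void       cmd_get_args( xptr_t thread_xp, uint32_t * type,
                                  uint32_t * lba, uint32_t * count, xptr_t * buffer );
  extern thread_t * current_thread( void );
  extern void       thread_block( thread_t * t, uint32_t cond );
  extern void       sched_yield( void );

  #define THREAD_BLOCKED_DEV_ISR 0x0004           /* assumed condition flag */
  enum { REG_LBA, REG_COUNT, REG_BUF, REG_CMD };  /* assumed register map   */

  /* sketch of driver_cmd() for a controller supporting a single
     simultaneous I/O operation: the server thread blocks until
     the ISR signals completion                                   */
  void soclib_bdv_cmd( xptr_t thread_xp, device_t * device )
  {
      uint32_t type, lba, count;
      xptr_t   buffer;

      /* get the command arguments from the (possibly remote) client thread */
      cmd_get_args( thread_xp, &type, &lba, &count, &buffer );

      /* write the peripheral registers: remote accesses to the I/O cluster */
      hal_remote_s32( dev_reg_xp( device, REG_LBA   ), lba );
      hal_remote_s32( dev_reg_xp( device, REG_COUNT ), count );
      hal_remote_s64( dev_reg_xp( device, REG_BUF   ), buffer );
      hal_remote_s32( dev_reg_xp( device, REG_CMD   ), type );

      /* single-operation controller: block until the ISR re-activates us */
      thread_block( current_thread(), THREAD_BLOCKED_DEV_ISR );
      sched_yield();
  }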

driver_isr( xptr_t device )

This function is executed in the client cluster, on the core running the client thread. It accesses the peripheral hardware registers to get the I/O operation error status, acknowledges the IRQ, and unblocks the client thread. If the server thread has been blocked, it also unblocks the server thread. The device argument is the extended pointer to the device descriptor.
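
A matching sketch of the ISR, under the same assumptions (illustrative helper names and register map, all peripheral accesses being remote):

  #include <stdint.h>

  typedef uint64_t xptr_t;

  /* assumed primitives, standing in for the actual ALMOS-MK ones */
  extern uint32_t hal_remote_l32( xptr_t reg_xp );                 /* remote read  */
  extern void     hal_remote_s32( xptr_t reg_xp, uint32_t value ); /* remote write */
  extern xptr_t   dev_reg_xp_remote( xptr_t dev_xp, uint32_t reg );
  extern xptr_t   dev_queue_get_client( xptr_t dev_xp );  /* first client thread */
  extern xptr_t   dev_get_server( xptr_t dev_xp );        /* server thread       */
  extern int      dev_server_is_blocked( xptr_t dev_xp );
  extern void     cmd_set_error( xptr_t thread_xp, uint32_t error );
  extern void     thread_unblock( xptr_t thread_xp, uint32_t cond );

  #define THREAD_BLOCKED_IO       0x0001
  #define THREAD_BLOCKED_DEV_ISR  0x0004
  enum { REG_STATUS = 4, REG_IRQ_ACK = 5 };   /* assumed register map */

  /* sketch of driver_isr(): executed in the client cluster, on the
     core running the client thread                                  */
  void soclib_bdv_isr( xptr_t dev_xp )
  {
      /* read the error status and acknowledge the IRQ in the I/O cluster */
      uint32_t error = hal_remote_l32( dev_reg_xp_remote( dev_xp, REG_STATUS ) );
      hal_remote_s32( dev_reg_xp_remote( dev_xp, REG_IRQ_ACK ), 1 );

      /* report the status in the command descriptor and unblock the client */
      xptr_t client_xp = dev_queue_get_client( dev_xp );
      cmd_set_error( client_xp, error );
      thread_unblock( client_xp, THREAD_BLOCKED_IO );

      /* unblock the server thread if driver_cmd() left it blocked */
      if( dev_server_is_blocked( dev_xp ) )
          thread_unblock( dev_get_server( dev_xp ), THREAD_BLOCKED_DEV_ISR );
  }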

E) Implementation

It is worth noting that this general I/O mechanism generally involves three clusters (client cluster / server cluster / I/O cluster), but does not use any RPC:

  • To post a new command in the waiting queue of a given (remote) device descriptor, the client thread uses only a few remote accesses to register itself in the distributed XLIST rooted in the server cluster.
  • To launch the I/O operation on the (remote) peripheral, the server thread uses only remote accesses to the physical registers located in the I/O cluster.
  • To complete the I/O operation, the ISR running in the client cluster accesses the peripheral registers in the I/O cluster, reports the I/O operation status in the command descriptor, and unblocks the client and server threads, using only remote accesses.