
GIET_VM / Mapping

The GIET_VM is a fully static operating system for shared-address-space, many-core architectures. These architectures are generally NUMA (Non Uniform Memory Access), because the memory is logically shared but physically distributed, and the main goal of the GIET_VM is to address these NUMA issues.

The GIET_VM bootloader maps the kernel and one or several multi-threaded user applications on the target architecture. All software objects (user application code and data, but also kernel code and critical kernel structures such as the page tables or the processor schedulers) are statically built and loaded from disk into physical memory by the GIET_VM bootloader in the boot phase.

The main advantage of this static approach is to allow the system designer to place the tasks on the processors, but also to place the software objects on the distributed physical memory banks. It supports replication of (read-only) critical objects such as kernel code, user code, or page tables. The page tables are statically initialised in the boot phase, and are not modified anymore in the execution phase.

To define the mapping, the system designer must provide a map.bin file containing a dedicated C binary data structure, which is loaded in memory by the bootloader.

The next section describes this C binary structure. The following sections describe how this binary file can be generated by the genmap tool from a Python description. Finally, the genmap tool can also generate a readable map.xml representation of the map.bin file.

C mapping data structure

The C mapping data structure contains the following information:

  1. It contains a description of the target clusterized hardware architecture, with the following constraints:
      • Processor cores are MIPS32.
      • The clusters are organised in a 2D mesh topology, and the number of clusters is variable (can be one).
      • The number of processors per cluster is variable (can be one).
      • The number of physical memory banks is variable (up to one physical memory bank per cluster).
      • Most peripherals are external and localized in one specific I/O cluster.
      • The physical address width is between 32 and 48 bits, and is the concatenation of 3 fields: the LSB field (32 bits) defines a 4 Gbytes physical address space inside a single cluster; the X and Y fields (up to 8 bits each) define the cluster coordinates.
  2. It contains a description of the user applications to be launched on the platform. A user application is characterized by a virtual address space, called a vspace. A user application can be multi-threaded, and the number of parallel tasks sharing the same address space in a given application is variable (can be one). The GIET_VM provides a specific Multi-Writer/Multi-Reader communication middleware for send/receive inter-task communication. Each vspace contains a variable number of virtual segments, called vsegs. The number of simultaneously mapped vspaces on a given architecture is variable (can be one).
  3. It contains the mapping directives: the tasks are statically allocated to processors. The various software objects (user and kernel code segments, task stacks, task heaps, communication channels, etc.) are called vobjs, and are statically placed on the distributed physical memory banks (called psegs), using the page tables (one page table per vspace) that define the mapping of the vsegs on the psegs.

The C binary mapping data structure is defined in the mapping_info.h file, and is organised as the concatenation of a fixed-size header and 11 variable-size arrays:

  • mapping_cluster_t cluster[]
  • mapping_pseg_t pseg[]
  • mapping_vspace_t vspace[]
  • mapping_vseg_t vseg[]
  • mapping_vobj_t vobj[]
  • mapping_task_t task[]
  • mapping_proc_t proc[]
  • mapping_irq_t irq[]
  • mapping_coproc_t coproc[]
  • mapping_cp_port_t cp_port[]
  • mapping_periph_t periph[]

The map.bin file must be stored on disk and will be loaded in memory by the GIET_VM bootloader in the seg_boot_mapping segment.

Python mapping description

A specific mapping requires at least two python files:

  • The arch.py file is attached to a given hardware architecture. It describes both the (possibly generic) hardware architecture, and the mapping of the kernel software objects on this hardware architecture.
  • The appli.py file is attached to a given user application. It describes both the application structure (tasks and communication channels), and the mapping of the application tasks and software objects on the architecture.

The various Python classes used by these files are defined in the mapping.py file.

Python hardware architecture description

The target hardware architecture must be defined in the arch.py file, using the following constructors:

1. Mapping

The Mapping( ) constructor builds a mapping object and defines the general parameters of the target architecture:

name mapping name == architecture name
x_size number of clusters in a row of the 2D mesh
y_size number of clusters in a column of the 2D mesh
nprocs max number of processors per cluster
x_width number of bits to encode X coordinate in paddr
y_width number of bits to encode Y coordinate in paddr
p_width number of bits to encode local processor index
paddr_width number of bits in physical address
coherence Boolean true if hardware cache coherence
irq_per_proc number of IRQ lines between XCU and one processor (the GIET_VM uses only one)
use_ramdisk Boolean true if the architecture contains a RamDisk
x_io io_cluster X coordinate
y_io io_cluster Y coordinate
peri_increment virtual address increment for peripherals replicated in all clusters
reset_address physical base address of the ROM containing the preloader code
ram_base physical memory bank base address in cluster [0,0]
ram_size physical memory bank size in one cluster (bytes)
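
As an illustration, a minimal arch.py fragment could build the mapping object as follows. This is only a hedged sketch: all parameter values are arbitrary examples, and the import assumes that mapping.py is visible on the Python path:

    from mapping import Mapping

    # 2x2 mesh, 4 processors per cluster, 40-bit physical addresses
    # (all values below are illustrative, not taken from a real platform)
    mapping = Mapping( name           = 'my_arch',
                       x_size         = 2,
                       y_size         = 2,
                       nprocs         = 4,
                       x_width        = 4,
                       y_width        = 4,
                       p_width        = 2,
                       paddr_width    = 40,
                       coherence      = True,
                       irq_per_proc   = 1,
                       use_ramdisk    = False,
                       x_io           = 0,
                       y_io           = 0,
                       peri_increment = 0x10000,
                       reset_address  = 0xFF000000,
                       ram_base       = 0x00000000,
                       ram_size       = 0x10000000 )   # 256 Mbytes per cluster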

2. Processor core

The mapping.addProc( ) constructor defines one MIPS32 processor core in a cluster (the number of processor cores can differ between clusters). It has the following arguments:

x cluster X coordinate
y cluster Y coordinate
p local processor index

The global physical processor index is: ( ( ( x << y_width ) + y ) << p_width ) + p
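
For instance, assuming the mapping object sketched above, the following call (with illustrative coordinates) would declare processor 1 of cluster (0,0):

    # local processor 1 in cluster (0,0):
    # global index = ( ( (0 << y_width) + 0 ) << p_width ) + 1
    mapping.addProc( x = 0, y = 0, p = 1 )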

3. Physical memory bank

The mapping.addRam( ) constructor defines one physical memory bank, and the associated physical segment in a cluster. It has the following arguments:

name segment name
base physical memory bank base address
size physical memory bank size (bytes)

The target cluster coordinates (x,y) are defined by the base address MSB bits.
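
A hedged example, assuming x_width = y_width = 4 and paddr_width = 40 as in the sketch above: for cluster (1,0), the X field occupies the 4 MSB bits of the physical address, so the base address is 1 << 36:

    # one 256 Mbytes memory bank in cluster (1,0):
    # x = 1 is encoded in the 4 MSB bits of the 40-bit physical address
    mapping.addRam( name = 'RAM_1_0', base = (1 << 36), size = 0x10000000 )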

4. Physical peripheral

The mapping.addPeriph( ) constructor adds one peripheral, and the associated physical segment in a cluster. It has the following arguments:

name segment name
base peripheral segment physical base address
size peripheral segment size (bytes)
ptype Peripheral type
subtype Peripheral subtype
channels number of channels for multi-channels peripherals
arg optional argument depending on peripheral type

The target cluster coordinates (x,y) are defined by the base address MSB bits. The supported peripheral types and subtypes are defined in the mapping.py file.
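
An illustrative sketch; the ptype string and the base address are assumptions, not taken from a real platform (check mapping.py for the actual supported type strings):

    # a 4-channel terminal controller in the I/O cluster (0,0)
    mapping.addPeriph( name = 'PERI_TTY', base = 0x00F4000000, size = 0x4000,
                       ptype = 'TTY', channels = 4 )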

5. Interrupt line

The mapping.addIrq( ) constructor adds one IRQ line input to an XCU peripheral, or to a PIC peripheral. It has the following arguments:

periph peripheral receiving the IRQ line
index input port index
isrtype Interrupt Service Routine type
channel channel index for multi-channel ISR

The supported ISR types are defined in the mapping.py file.
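
A hedged sketch, assuming that addPeriph( ) returns the peripheral object, and that 'XCU' and 'ISR_TTY_RX' are among the types listed in mapping.py:

    # declare the local XCU, then route the IRQ from TTY channel 0
    # to its input port 3, handled by the TTY receive ISR
    xcu = mapping.addPeriph( name = 'XCU_0_0', base = 0x00B0000000, size = 0x1000,
                             ptype = 'XCU', channels = 16 )
    mapping.addIrq( periph = xcu, index = 3, isrtype = 'ISR_TTY_RX', channel = 0 )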

Python kernel mapping

The mapping of the GIET_VM vsegs must be defined in the arch.py file.

Each kernel virtual segment has the global attribute, and must be mapped in all vspaces. It can be mapped to a set of consecutive small pages (4 Kbytes), or to a set of consecutive big pages (2 Mbytes).

The mapping.addGlobal( ) constructor defines the mapping for one kernel vseg. It has the following arguments:

name virtual segment name
vbase virtual base address
size segment size (bytes)
mode access rights (CXWU)
vtype vseg type
x destination cluster X coordinate
y destination cluster Y coordinate
pseg destination pseg name
identity identity mapping required (default = False)
binpath pathname for binary file if required (default = ' ')
align alignment constraint if required (default = 0)
local only mapped in local page table if true (default = False)
big to be mapped in big pages (default = False)

The supported values for the mode and vtype arguments are defined in the mapping.py file. The x, y, and pseg arguments actually define the mapping.
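
For instance, the kernel data vseg could be mapped in cluster (0,0) as follows; the mode and vtype strings, the addresses, and the binpath are illustrative values:

    # kernel cacheable data vseg, mapped on the 'RAM_0_0' pseg of cluster (0,0)
    mapping.addGlobal( name = 'seg_kernel_data', vbase = 0x80010000, size = 0x10000,
                       mode = 'C_WU', vtype = 'ELF', x = 0, y = 0,
                       pseg = 'RAM_0_0', binpath = 'build/kernel/kernel.elf' )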

1. Boot vsegs

There are four vsegs for the GIET_VM bootloader:

  • The seg_boot_mapping vseg contains the C binary structure defining the mapping. It is loaded from disk by the boot-loader.
  • The seg_boot_code vseg contains the boot-loader code. It is loaded from disk by the preloader.
  • The seg_boot_data vseg contains the boot-loader global data.
  • The seg_boot_stacks vseg contains the stacks for all processors.

These four vsegs must be identity mapped (because the page tables are not yet available), and are mapped in the first big physical page (2 Mbytes) in cluster [0][0].

2. Kernel vsegs

There are six vsegs for the GIET_VM kernel, and some of them are replicated in all clusters, to improve locality and minimize contention, as explained below:

  • The seg_kernel_ptab_x_y vseg has type PTAB. It contains the page tables for all vspaces (one page table per vspace). There is one such vseg in each cluster (one set of page tables per cluster). Each PTAB vseg is mapped in one big physical page.
  • The seg_kernel_code & seg_kernel_init vsegs have type ELF. They contain the kernel code. These two vsegs must be mapped in one big physical page, and are replicated in each cluster. The local attribute must be set, because the same virtual address will be mapped on different physical addresses depending on the cluster.
  • The seg_kernel_data & seg_kernel_uncdata vsegs have type ELF. They contain the kernel global data (cacheable or non-cacheable). They are not replicated, and must be mapped in cluster [0][0].
  • The seg_kernel_sched_x_y vseg has type SCHED. It contains the processor schedulers (one scheduler per processor). There is one such vseg in each cluster, and it must be mapped on small pages (two small pages per scheduler).

3. Peripheral vsegs

A global vseg must be defined for each addressable peripheral. As a general rule, we use big physical page(s) for each external peripheral, and one small physical page for each replicated peripheral.

Python user application mapping

The mapping of a given application must be defined in the appli.py file.

A vspace, containing a variable number of tasks, and a variable number of vsegs, must be defined for each application.

There are several types of user vsegs:

  • The code vseg can be optionally replicated in all clusters.
  • The data vseg is not replicated. It must contain the start_vector defining the entry points of the application tasks.
  • There must be as many stack vsegs as there are tasks.
  • One or several heap vseg(s) can be used by the malloc user library.
  • One or several mwmr vseg(s) can be used by the MWMR communication middleware.

1. Create the vspace

The mapping.addVspace( ) constructor defines a vspace. It has the following arguments:

name vspace name == application name
startname name of the vseg containing the start vector (task entry points)
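
A hedged sketch for a hypothetical 'shell' application, assuming that addVspace( ) returns the vspace object used by the addVseg( ) and addTask( ) calls shown in the next sub-sections:

    # create the vspace; 'shell_data' is the (illustrative) name of the
    # data vseg containing the start vector
    vspace = mapping.addVspace( name = 'shell', startname = 'shell_data' )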

2. Vseg mapping

The mapping.addVseg( ) constructor defines the mapping of a vseg in the vspace. It has the following arguments:

vspace vspace containing the vseg
name vseg name
vbase virtual base address
size vseg size (bytes)
mode access rights (CXWU)
vtype vseg type
x destination cluster X coordinate
y destination cluster Y coordinate
pseg destination pseg name
binpath pathname for binary file if required (default = ' ')
align alignment constraint if required (default = 0)
local only mapped in local page table if true (default = False)
big to be mapped in big pages (default = False)

The supported values for the mode and vtype arguments are defined in the mapping.py file. The x, y, and pseg arguments actually define the mapping.
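
Continuing the hypothetical 'shell' example, the code vseg could be mapped as follows (addresses, mode, vtype, and binpath are illustrative values):

    # code vseg of the 'shell' application, mapped in cluster (0,0)
    mapping.addVseg( vspace = vspace, name = 'shell_code',
                     vbase = 0x00400000, size = 0x10000,
                     mode = 'CXWU', vtype = 'ELF', x = 0, y = 0,
                     pseg = 'RAM_0_0', binpath = 'build/shell/shell.elf' )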

3. Task mapping

The mapping.addTask( ) constructor defines the mapping of a task in the vspace. It has the following arguments:

vspace vspace containing the task
name task name (unique in vspace)
trdid thread index (unique in vspace)
x destination cluster X coordinate
y destination cluster Y coordinate
lpid destination processor local index
stackname name of vseg containing stack
heapname name of vseg containing heap
startid index in start vector (defining the task entry point virtual address)

The x, y, and lpid arguments actually define the task placement.
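
Finally, still with the hypothetical 'shell' example, a task could be placed on processor 2 of cluster (1,1) as follows (all names and indexes are illustrative):

    # place task 'shell_0' on local processor 2 of cluster (1,1);
    # its entry point is the first entry of the start vector
    mapping.addTask( vspace = vspace, name = 'shell_0', trdid = 0,
                     x = 1, y = 1, lpid = 2,
                     stackname = 'shell_stack_0', heapname = 'shell_heap',
                     startid = 0 )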