GIET_VM / Mapping

The GIET_VM bootloader loads the GIET_VM kernel and the user application(s) on the target architecture. All user application segments (code, data, stack and heap), the kernel code, and the critical kernel structures (page tables and processor schedulers) are statically built by the GIET_VM bootloader, as specified by the mapping directives.

The main goal of this static approach is to let the system designer control the placement of tasks on processors, as well as the placement of software objects on the distributed physical memory banks. It supports replication of (read-only) critical objects such as kernel code, user code, or page tables. The page tables are statically initialized during the boot phase, and are no longer modified during the execution phase.

To define the mapping, the system designer must provide a map.bin file containing a dedicated C binary data structure, which is loaded in memory by the bootloader.

The next section describes this C binary structure. The following sections describe how this binary file can be generated by the genmap tool from Python scripts. The genmap tool also generates a readable map.xml representation of the map.bin file.

Mapping content

The mapping contains the following information:

  1. It contains a description of the target hardware architecture, with the following constraints:
    • All processor cores are identical (MIPS32).
    • The clusters form a 2D mesh topology. The mesh size is defined by the (X_SIZE,Y_SIZE) parameters.
    • The number of processors per cluster is defined by the NPROCS parameter.
    • The number of physical memory banks is variable (typically one physical memory bank per cluster).
    • Most peripherals are external and located in one specific I/O cluster.
    • A small number of peripherals (such as the XCU interrupt controller) are internal and replicated in each cluster containing processors.
    • The 40-bit physical address is the concatenation of three fields: the LSB field (32 bits) defines a 4 Gbytes physical address space inside a single cluster, and the [X,Y] MSB fields (4 bits each) define the cluster coordinates.
  2. It contains a description of the GIET_VM kernel software objects (called virtual segments or vsegs):
    • The kernel code is replicated in all clusters. Each copy is a vseg.
    • There is one page table for each user application. All page tables are packed in one single vseg, and this vseg is replicated in each cluster.
    • The kernel heap is distributed in all clusters. Each heap section is a vseg.
    • Finally there is a specific vseg for each peripheral (both internal and external), containing the peripheral addressable registers.
  3. It contains a description of the user application(s) to be launched on the platform. A user application is characterized by a virtual address space, called a vspace. A user application can be multi-threaded. The user threads respect the POSIX API. The number of threads can depend on the target architecture. Each thread must be statically placed on a given processor (x,y,p). Moreover, each application defines a variable number of vsegs:
    • The application code can be defined as a single vseg, in a single cluster. It can also be replicated in all clusters, with one vseg per cluster.
    • There is one stack per thread, and each stack vseg must be placed in a specific cluster (x,y).
    • The data vseg contains the global (shared) variables. It is not replicated, and must be placed in a single cluster.
    • The user heap can be physically distributed over all clusters: there can be one heap vseg per cluster.
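The 40-bit physical address layout described above can be sketched as a small Python helper. The 4-bit X and Y field widths used here are example values matching the constraints listed above; in a real arch.py they are the x_width and y_width parameters of the Mapping( ) constructor.

```python
# Sketch of the 40-bit physical address layout: X (4 bits) | Y (4 bits) |
# local offset (32 bits). Field widths are example values only.

X_WIDTH = 4
Y_WIDTH = 4
LSB_WIDTH = 32

def make_paddr(x, y, offset):
    """Concatenate cluster coordinates and a local offset into a 40-bit paddr."""
    assert 0 <= x < (1 << X_WIDTH)
    assert 0 <= y < (1 << Y_WIDTH)
    assert 0 <= offset < (1 << LSB_WIDTH)
    return (((x << Y_WIDTH) | y) << LSB_WIDTH) | offset

# Local offset 0x1000 in cluster [2,3]:
print(hex(make_paddr(2, 3, 0x1000)))  # -> 0x2300001000
```

This makes visible why two segments with the same 32-bit local offset but different cluster coordinates occupy disjoint physical addresses.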

All kernel vsegs are accessed by all user applications: they must be defined in all virtual spaces, and are mapped in all page tables. They are called global vsegs.

C mapping data structure

The C binary structure used by the boot code is defined in the mapping_info.h file. It is organised as the concatenation of a fixed-size header and 8 variable-size arrays:

mapping_cluster_t a cluster contains psegs, processors, peripherals and coprocessors
mapping_pseg_t a physical segment defined by a name, a base address and a size (bytes)
mapping_vspace_t a virtual space contains several vsegs and several parallel tasks
mapping_vseg_t a virtual segment contains a software object
mapping_task_t a task must be statically associated to a processor
mapping_proc_t a processor identified by a triple index (x,y,lpid)
mapping_irq_t an interrupt source defining the ISR to be executed
mapping_periph_t a peripheral associated to a specific pseg

The map.bin file is automatically built by the genmap tool from the Python scripts. It must be stored in the file system root directory on disk, and will be loaded by the GIET_VM bootloader into the seg_boot_mapping memory segment.

Python mapping description

A mapping requires two python files:

  • The arch.py file is attached to a given hardware architecture. It describes both the (possibly generic) hardware architecture, and the mapping of the kernel software objects on this hardware architecture.
  • The appli.py file is attached to a given user application. It describes the application structure (tasks and vsegs), and the mapping of tasks and vsegs on the architecture.

The various Python classes used by these files are defined in the mapping.py file.

Python hardware architecture description

The target hardware architecture must be defined in the arch.py file, using the following constructors:

1. Mapping

The Mapping( ) constructor builds a mapping object and defines the general parameters of the target architecture:

name mapping name == architecture name
x_size number of clusters in a row of the 2D mesh
y_size number of clusters in a column of the 2D mesh
nprocs max number of processors per cluster
x_width number of bits to encode X coordinate in paddr
y_width number of bits to encode Y coordinate in paddr
p_width number of bits to encode local processor index
paddr_width number of bits in physical address
coherence Boolean true if hardware cache coherence
irq_per_proc number of IRQ lines between the XCU and one processor (the GIET_VM uses only one)
use_ramdisk Boolean true if the architecture contains a RamDisk
x_io io_cluster X coordinate
y_io io_cluster Y coordinate
peri_increment virtual address increment for peripherals replicated in all clusters
reset_address physical base address of the ROM containing the preloader code
ram_base physical memory bank base address in cluster [0,0]
ram_size physical memory bank size in one cluster (bytes)

2. Processor core

The mapping.addProc( ) constructor adds one MIPS32 processor core to a cluster. It has the following arguments:

x cluster x coordinate
y cluster y coordinate
lpid processor local index

The global processor index (stored in the CP0_PROCID register) is: ( ( ( x << y_width ) + y ) << p_width ) + lpid
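The index computation above can be checked with a few lines of Python; the y_width and p_width values used in the example call are assumptions, not fixed by the GIET_VM.

```python
def proc_global_index(x, y, lpid, y_width, p_width):
    """Global processor index, as stored in the CP0_PROCID register:
    (((x << y_width) + y) << p_width) + lpid."""
    return (((x << y_width) + y) << p_width) + lpid

# Example: with y_width = 4 and p_width = 2, processor 1 of cluster [1,2]
print(proc_global_index(1, 2, 1, y_width=4, p_width=2))  # -> 73
```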

3. Physical memory bank

The mapping.addRam( ) constructor adds one physical memory segment to a cluster. It has the following arguments:

name pseg name
base physical base address
size segment size (bytes)

The target cluster coordinates (x,y) are implicitly defined by the MSB bits of the base address.
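The extraction of the cluster coordinates from the MSB bits can be sketched as below; the 40/4/4 bit widths are example defaults, in a real mapping they come from the paddr_width, x_width and y_width parameters of the Mapping( ) constructor.

```python
def cluster_of_paddr(base, paddr_width=40, x_width=4, y_width=4):
    """Recover the (x, y) cluster coordinates implicitly encoded
    in the MSB bits of a physical base address."""
    msb = base >> (paddr_width - x_width - y_width)
    x = msb >> y_width
    y = msb & ((1 << y_width) - 1)
    return x, y

# A RAM bank based at 0x2300000000 belongs to cluster [2,3]:
print(cluster_of_paddr(0x2300000000))  # -> (2, 3)
```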

4. Physical peripherals

The mapping.addPeriph( ) constructor adds one peripheral, and the associated physical segment, to a cluster. It has the following arguments:

name pseg name
base peripheral segment physical base address
size peripheral segment size (bytes)
ptype Peripheral type
subtype Peripheral subtype
channels number of channels for multi-channels peripherals
arg0 optional argument depending on peripheral type
arg1 optional argument depending on peripheral type
arg2 optional argument depending on peripheral type
arg3 optional argument depending on peripheral type

The target cluster coordinates (x,y) are implicitly defined by the MSB bits of the physical base address. The supported peripheral types and subtypes are defined in the mapping.py file. Hardware coprocessors using the MWMR_DMA controller to access memory are described as peripherals: they must be defined with the MWR ptype argument, and the subtype argument defines the coprocessor type.

Four peripheral components require specific arguments with the following semantics:

  • FBF (frame buffer): arg0 = number of pixels per line / arg1 = number of lines / arg2, arg3 unused.
  • XCU (interrupt controller): arg0 = number of HWI inputs / arg1 = number of PTI inputs / arg2 = number of WTI inputs / arg3 unused.
  • MWR (DMA controller): arg0 = number of TO_COPROC ports / arg1 = number of FROM_COPROC ports / arg2 = number of CONFIG registers / arg3 = number of STATUS registers.
  • PIC (programmable interrupt controller): arg0 = number of HWI inputs / arg1, arg2, arg3 unused.

5. Interrupt line

The mapping.addIrq( ) constructor adds one input IRQ line to an XCU or PIC peripheral. It has the following arguments:

periph peripheral receiving the IRQ line
index input port index
isrtype Interrupt Service Routine type
channel channel index for multi-channel ISR

The supported ISR types are defined in the mapping.py file.
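The hardware constructors above might be combined in an arch.py script as sketched below. Since mapping.py is not reproduced in this page, a minimal stub that simply records each call stands in for the real Mapping class; the architecture parameters, segment names and sizes are example values only, not the real mapping.py signatures.

```python
# Illustrative arch.py fragment. The real Mapping class comes from
# mapping.py; this stub merely records each call so the sketch runs.

class Mapping:
    def __init__(self, **kw):
        self.params = kw
        self.calls = []
    def addProc(self, x, y, lpid):
        self.calls.append(('proc', x, y, lpid))
    def addRam(self, name, base, size):
        self.calls.append(('ram', name, base, size))

# 2x2 mesh, 2 processors per cluster (example values only)
m = Mapping(name='my_arch', x_size=2, y_size=2, nprocs=2,
            x_width=4, y_width=4, p_width=2, paddr_width=40)

for x in range(2):
    for y in range(2):
        for p in range(2):
            m.addProc(x, y, p)
        # one RAM bank per cluster; the cluster coordinates are
        # implied by the MSB bits of the base address
        base = ((x << 4) | y) << 32
        m.addRam('RAM_%d_%d' % (x, y), base, 0x200000)

print(len(m.calls))  # 8 addProc + 4 addRam calls -> 12
```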

Python kernel mapping

The mapping of the GIET_VM vsegs must be defined in the arch.py file.

Each kernel virtual segment has the global attribute, and must be mapped in all vspaces. It can be mapped to a set of consecutive small pages (4 Kbytes), or to a set of consecutive big pages (2 Mbytes).

The mapping.addGlobal( ) constructor defines the mapping of one kernel vseg. It has the following arguments:

name vseg name
vbase virtual base address
size segment size (bytes)
mode access rights (CXWU)
vtype vseg type
x destination cluster X coordinate
y destination cluster Y coordinate
pseg destination pseg name
identity identity mapping required (default = False)
binpath pathname for binary file if required (default = ' ')
align alignment constraint if required (default = 0)
local only mapped in local page table if true (default = False)
big to be mapped in big pages (default = False)

The supported values for the mode and vtype arguments are defined in the mapping.py file.

The (x, y, pseg) arguments actually define the vseg placement.
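A replicated kernel vseg might be declared as sketched below. As mapping.py is not reproduced here, a minimal recording stub stands in for the real Mapping class; the vseg name, addresses, mode string, binpath and pseg names are example values only.

```python
# Illustrative addGlobal() usage for a kernel code vseg replicated in
# each cluster of a 2x2 mesh (stub stands in for mapping.py).

class Mapping:
    def __init__(self):
        self.globals = []
    def addGlobal(self, name, vbase, size, mode, vtype, x, y, pseg,
                  identity=False, binpath='', align=0,
                  local=False, big=False):
        self.globals.append((name, x, y, local, big))

m = Mapping()
# Same virtual base in every cluster, but a different destination pseg:
# 'local' is set because the same virtual address maps to a different
# physical address in each cluster; 'big' requests big (2 Mbytes) pages.
for x in range(2):
    for y in range(2):
        m.addGlobal('seg_kernel_code', 0x80000000, 0x200000, 'CXW_',
                    'ELF', x, y, 'RAM_%d_%d' % (x, y),
                    binpath='build/kernel/kernel.elf',
                    local=True, big=True)

print(len(m.globals))  # -> 4 copies, one per cluster
```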

1. Boot vsegs

There are four global vsegs for the GIET_VM bootloader:

  • The seg_boot_mapping vseg contains the C binary structure defining the mapping. It is loaded from disk by the boot-loader.
  • The seg_boot_code vseg contains the boot-loader code. It is loaded from disk by the preloader.
  • The seg_boot_data vseg contains the boot-loader global data.
  • The seg_boot_stacks vseg contains the temporary stacks for all processors.

These four vsegs must be identity mapped (because they are accessed before the page tables are available), and are mapped in the first big physical page (2 Mbytes) of cluster [0][0].

2. Kernel vsegs

Most kernel vsegs are replicated or distributed in all clusters, to improve locality and minimize contention during execution, as explained below:

  • The seg_kernel_ptab_x_y vsegs have type PTAB. They contain the page tables associated with the vspaces (one page table per vspace). There is one PTAB vseg per cluster (one set of page tables per cluster). Each PTAB vseg is mapped in one big physical page.
  • The seg_kernel_code & seg_kernel_init vsegs have type ELF. They contain the kernel code, and are mapped together in one big physical page, replicated in each cluster. The local attribute must be set, because the same virtual address is mapped to different physical addresses depending on the cluster.
  • The seg_kernel_data vseg has type ELF, and contains the kernel global data. It is not replicated, and is mapped in cluster [0][0].
  • The seg_kernel_sched_x_y vsegs have type SCHED. They contain the processor schedulers (one scheduler per processor). There is one SCHED vseg in each cluster, mapped on small pages (two small pages per scheduler).
  • The seg_kernel_heap_x_y vsegs have type HEAP, and contain the distributed kernel heap. There is one HEAP vseg per cluster, each mapped in (at least) one big page.

3. Peripheral vsegs

A global vseg must be defined for each addressable peripheral. As a general rule, we use big physical page(s) for each external peripheral, and one small physical page for each replicated peripheral.

Python user application mapping

The mapping of a given application must be defined in the appli.py file.

A vspace contains a variable number of tasks and a variable number of vsegs, which must be defined for each application.

There are several types of user vsegs:

  • The code vsegs must have the ELF type. They can be (optionally) replicated in all clusters.
  • The data vseg must have the ELF type. It is not replicated and must be mapped in one single cluster. It contains the start_vector defining the entry points of the application tasks.
  • There must be as many stack vsegs as there are tasks (one private stack per task). They have the BUFFER type. Each stack vseg should be placed in the cluster containing the processor running the associated task.
  • The distributed heap vsegs (one vseg per cluster) are handled by the malloc user library. These vsegs must have the HEAP type. The remote_malloc() function can be used to control the placement of specific data on the physical memory banks.

1. create the vspace

The mapping.addVspace( ) constructor defines a vspace. It has the following arguments:

name vspace name == application name
startname name of vseg containing the start_vector defining the entry points for all threads
active Boolean defining whether the application must be activated by the boot code (default is False)

2. vseg mapping

The mapping.addVseg( ) constructor defines the mapping of a vseg in the vspace. It has the following arguments:

vspace vspace containing the vseg
name vseg name
vbase virtual base address
size vseg size (bytes)
mode access rights (CXWU)
vtype vseg type
x destination cluster X coordinate
y destination cluster Y coordinate
pseg destination pseg name
local only mapped in local page table if true (default = False)
big to be mapped in big pages (default = False)
binpath pathname for binary file if required (default = ' ')

The supported values for the mode and vtype arguments are defined in the mapping.py file.

The (x, y, pseg) arguments actually define the vseg placement.

3. thread mapping

The mapping.addThread( ) constructor defines the static mapping of a thread on a processor. It has the following arguments:

vspace vspace containing the thread
name thread name (unique in vspace)
is_main Boolean defining the thread that must be activated to launch the application
x destination cluster X coordinate
y destination cluster Y coordinate
p destination processor local index
stackname name of vseg containing the thread stack
heapname name of vseg containing the thread heap
startid index in start vector (defining the thread entry point virtual address)

The (x, y, p) arguments actually define the thread placement.

mapping.addThread( vspace , 'name' , is_main , x , y , p , 'stackname' , 'heapname' , startid ) 
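The three constructors of this section might be combined in an appli.py script as sketched below. Since mapping.py is not reproduced in this page, a minimal stub that records each call stands in for the real Mapping class; the application name, vseg names, addresses, mode strings and binpath are example values only.

```python
# Illustrative appli.py fragment for a single-threaded application
# mapped in cluster [0,0] (stub stands in for mapping.py).

class Mapping:
    def __init__(self):
        self.vspaces, self.vsegs, self.threads = [], [], []
    def addVspace(self, name, startname, active=False):
        self.vspaces.append(name)
        return name
    def addVseg(self, vspace, name, vbase, size, mode, vtype,
                x, y, pseg, local=False, big=False, binpath=''):
        self.vsegs.append((vspace, name, x, y))
    def addThread(self, vspace, name, is_main, x, y, p,
                  stackname, heapname, startid):
        self.threads.append((vspace, name, x, y, p))

m = Mapping()
# the startname vseg contains the start_vector of entry points
vs = m.addVspace('hello', startname='hello_data', active=True)
m.addVseg(vs, 'hello_code', 0x10000000, 0x10000, 'CXWU', 'ELF',
          0, 0, 'RAM_0_0', binpath='bin/hello/hello.elf')
m.addVseg(vs, 'hello_data', 0x20000000, 0x10000, 'C_WU', 'ELF',
          0, 0, 'RAM_0_0', binpath='bin/hello/hello.elf')
# one private stack vseg, placed in the cluster running the thread
m.addVseg(vs, 'hello_stack_0', 0x30000000, 0x10000, 'C_WU', 'BUFFER',
          0, 0, 'RAM_0_0')
m.addThread(vs, 'main', True, 0, 0, 0,
            'hello_stack_0', 'hello_heap', 0)

print(len(m.vsegs), len(m.threads))  # -> 3 1
```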
Last modified on Aug 19, 2016, 5:58:51 PM