Shared Buffers

This topic explains the concepts of shared buffers and shared-buffer pools.

Copying large amounts of data between memory contexts consumes CPU cycles, which degrades performance and battery life. Sharing memory areas avoids these copies and improves efficiency. However, sharing also creates security and robustness risks, so it must only be used between trusted components.

Shared buffer

A buffer is a contiguous section of physical memory with defined characteristics and a known layout.

An RShBuf is a shared buffer: an area of shareable memory. Processes can share the handle to an RShBuf object to access the data it contains.
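
For illustration, the following sketch allocates a buffer from an existing pool, uses its memory, and then closes the handle. It assumes an already-open RShPool handle named pool; the calls shown (RShBuf::Alloc(), Ptr(), Size() and Close()) reflect the general shape of the shared-buffer API, and the exact signatures should be treated as assumptions.

    // Sketch: obtain a shared buffer from an existing pool, use it, release it.
    // 'pool' is assumed to be an open RShPool handle; exact signatures may differ.
    RShBuf buf;
    TInt r = buf.Alloc(pool);          // request a free buffer from the pool
    if (r == KErrNone)
        {
        TUint8* p = buf.Ptr();         // pointer to the buffer's memory
        Mem::FillZ(p, buf.Size());     // use the memory like any other buffer
        // ... the handle could now be passed to another trusted component ...
        buf.Close();                   // release this handle; the buffer returns
                                       // to the pool once all handles are closed
        }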

Shared-buffer pool

A pool is a collection of buffers with common characteristics: their size and their DMA requirements (physical memory address range). The pool preallocates the memory for its buffers and is responsible for allocating and managing them. Buffers can only be obtained from a pool.

An RShPool is a shared-buffer pool: it contains shared buffers of identical size and is the only provider of those buffers.

The RShPool grows and shrinks automatically. It allocates more memory when more shared buffers are required, and frees memory occupied by unused buffers when possible.
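
For example, creating a pool of fixed-size buffers might look broadly like the sketch below. The TShPoolCreateInfo constructor arguments, the ENonPageAlignedBuffer value and KDefaultPoolHandleFlags are written from memory and should be treated as assumptions about the API's shape rather than exact signatures.

    // Sketch: create a pool of 256-byte buffers, with 16 allocated up front.
    // Constructor arguments and flag names are assumptions, not exact signatures.
    const TUint KBufSize  = 256;
    const TUint KInitBufs = 16;

    TShPoolCreateInfo info(TShPoolCreateInfo::ENonPageAlignedBuffer,
                           KBufSize, KInitBufs, 0 /* alignment */);
    // Optional sizing attributes on TShPoolCreateInfo control how the pool
    // grows and shrinks automatically as demand changes.

    RShPool pool;
    TInt r = pool.Create(info, KDefaultPoolHandleFlags);
    // ... allocate RShBuf objects from the pool here ...
    pool.Close();                      // release this handle to the pool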

Use case

A typical use case for shared buffers involves a group of components that execute in different processes and need to share data. Some components in the group produce data, request a shared buffer for it, and send the buffer to other components. Each component can read the data, add to it, or overwrite it before passing the buffer to the next component. When the data has been processed, the buffer can be returned to the pool, which then makes it available for another request. A sketch of this flow appears at the end of this section.

Components in the group are trusted to access the shared memory: they usually execute in privileged processes. Some of these components may be device drivers, and therefore execute in kernel space.
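
The following sketch outlines this flow for one producer and one consumer. SendBufferToConsumerL(), ProduceDataInto() and ProcessData() are hypothetical helpers standing in for the application's own IPC and data-handling code; the RShBuf calls reflect the general shape of the API rather than exact signatures.

    // Sketch of the use case: a producer fills a buffer and hands it on;
    // the consumer processes it and closes its handle, returning the buffer
    // to the pool. Helper functions are hypothetical.
    void ProduceL(RShPool& aPool)
        {
        RShBuf buf;
        User::LeaveIfError(buf.Alloc(aPool));    // request a buffer from the pool
        ProduceDataInto(buf.Ptr(), buf.Size());  // fill it with data
        SendBufferToConsumerL(buf);              // pass the handle to the consumer
        buf.Close();                             // producer no longer needs it
        }

    void Consume(RShBuf aBuf)
        {
        ProcessData(aBuf.Ptr(), aBuf.Size());    // read, add to or overwrite data
        aBuf.Close();                            // last handle closed: the buffer
                                                 // becomes available in the pool again
        }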

Related information
Flexible Memory Model