This document describes shared memory between threads and how to avoid race conditions.
Shared memory is a method of Inter-Process Communication (IPC) where a single chunk of memory is shared between two or more processes. It can be used to communicate between threads within a process or between two unrelated processes, allowing both to access a given region of memory efficiently. In an SMP system, multiple cores run threads at the same time, not just virtually as on a unicore system, so extra caution is needed when handling memory that is shared between multiple threads.
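One common way to obtain such a shared region on this platform is a named global chunk (RChunk). The following is a minimal sketch, with an illustrative chunk name and sizes, of one process creating the chunk and writing to it while another opens it by name and reads from it.

```cpp
#include <e32std.h>

// Illustrative chunk name; both processes must use the same name.
_LIT(KChunkName, "ExampleSharedChunk");

TInt CreateAndWrite()
    {
    RChunk chunk;
    // Create a named global chunk: committed size and maximum size in bytes.
    TInt r = chunk.CreateGlobal(KChunkName, 4096, 4096);
    if (r != KErrNone)
        return r;
    TUint8* base = chunk.Base();   // start of the shared region
    base[0] = 42;                  // visible to any process that opens the chunk
    // Note: a global chunk exists only while at least one handle to it is open.
    chunk.Close();
    return KErrNone;
    }

TInt OpenAndRead(TUint8& aValue)
    {
    RChunk chunk;
    TInt r = chunk.OpenGlobal(KChunkName, ETrue); // open read-only by name
    if (r != KErrNone)
        return r;
    aValue = chunk.Base()[0];
    chunk.Close();
    return KErrNone;
    }
```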
A thread is the unit of execution within a process. Every time a process is initialized, a primary thread is created. For many applications the primary thread is the only one the application requires; however, a process can create additional threads. In an SMP system, multiple threads can execute the same (shared) functions simultaneously, so functions need to handle this situation in order to maintain the consistency of the system.
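As a rough illustration, the sketch below shows a primary thread creating and starting a secondary thread with RThread (listed in the API table below); the thread name, heap sizes and thread function are illustrative.

```cpp
#include <e32std.h>

// Illustrative name for the secondary thread.
_LIT(KThreadName, "ExampleWorker");

// Entry point run by the secondary thread.
TInt ThreadFunction(TAny* /*aParam*/)
    {
    // Work done by the secondary thread goes here.
    return KErrNone;
    }

TInt StartWorker()
    {
    RThread thread;
    TInt r = thread.Create(KThreadName, ThreadFunction,
                           KDefaultStackSize, KMinHeapSize, 0x10000, NULL);
    if (r != KErrNone)
        return r;

    TRequestStatus status;
    thread.Logon(status);   // request notification when the thread exits
    thread.Resume();        // threads are created suspended; start it running
    User::WaitForRequest(status);
    thread.Close();
    return status.Int();    // exit reason of the secondary thread
    }
```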
The following are the key concepts used to synchronize threads:
Mutexes are used to serialize access to a section of code that cannot be executed concurrently by more than one thread. A mutex object allows only one thread into a controlled section, forcing other threads that attempt to gain access to that section to wait until the first thread has exited from it. A mutex can be used by threads across any number of processes. If a resource is only shared between the threads within the same process, it can be more efficient to use a critical section.
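The following sketch illustrates this with RMutex (listed in the API table below); the shared counter and helper functions are illustrative.

```cpp
#include <e32std.h>

// Data shared by several threads of the same process (illustrative).
static TInt SharedCounter = 0;

TInt CreateCounterMutex(RMutex& aMutex)
    {
    // A process-local mutex is enough when only threads of one process share
    // the data; use CreateGlobal()/OpenGlobal() with a name to share a mutex
    // between processes.
    return aMutex.CreateLocal();
    }

void IncrementCounter(RMutex& aMutex)
    {
    aMutex.Wait();      // block until no other thread holds the mutex
    ++SharedCounter;    // only one thread at a time executes this section
    aMutex.Signal();    // release the mutex so a waiting thread can proceed
    }
```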
Semaphores restrict the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore), and can signal that they have finished using the resource (incrementing the semaphore). A thread that requests access to a busy resource is put in a waiting state. The semaphore maintains a First In First Out (FIFO) queue of such waiting threads. When another thread increments the semaphore, the first thread in this queue is resumed.
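A minimal sketch using RSemaphore (listed in the API table below) follows; the maximum count of two and the helper functions are illustrative.

```cpp
#include <e32std.h>

TInt CreatePoolSemaphore(RSemaphore& aSem)
    {
    // The initial count is the maximum number of simultaneous users.
    return aSem.CreateLocal(2);
    }

void UseResource(RSemaphore& aSem)
    {
    aSem.Wait();      // decrement; waits here if the count is already zero
    // ... use the shared resource ...
    aSem.Signal();    // increment; resumes the first waiting thread, if any
    }
```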
Locks are used to synchronize data between threads in the kernel. In SMP, threads execute in parallel, which means that if locks are not applied to the code it can result in a race condition. Race conditions lead to system crashes and data corruption. For more information about locking, see Locking.
API name | Description
---|---
TFindThread | Searches for threads by pattern matching against the names of thread objects.
RThread | A handle to a thread.
RMutex | A handle to a mutex.
TFindMutex | Finds all global mutexes whose full names match a specified pattern.
RSemaphore | A handle to a semaphore.
TFindSemaphore | Finds all global semaphores whose full names match a specified pattern.
Copyright © 2010 Nokia Corporation and/or its subsidiary(-ies). All rights reserved. Unless otherwise stated, these materials are provided under the terms of the Eclipse Public License v1.0.