|
<?xml version="1.0" encoding="utf-8"?>
<!-- Copyright (c) 2007-2010 Nokia Corporation and/or its subsidiary(-ies) All rights reserved. -->
<!-- This component and the accompanying materials are made available under the terms of the License
"Eclipse Public License v1.0" which accompanies this distribution,
and is available at the URL "http://www.eclipse.org/legal/epl-v10.html". -->
<!-- Initial Contributors:
Nokia Corporation - initial contribution.
Contributors:
-->
<!DOCTYPE concept
  PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
|
<concept id="GUID-16AED228-539F-4BF7-A7FD-9A01FF1A9A84" xml:lang="en"><title>Locking</title><shortdesc>This document describes SMP locks and outlines why locks need to be
introduced into code.</shortdesc><prolog><metadata><keywords/></metadata></prolog><conbody>
|
<p>Locks are used to synchronize access to data shared between threads in the kernel. They
can also be used to synchronize access to data in user-side threads. On SMP,
threads execute in parallel, which means that if locks are not applied to code
that touches shared data, race conditions can occur. Race conditions lead to system
crashes and data corruption. </p>
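<p>As an illustration of the problem and the cure, the following user-side sketch shows two
threads incrementing a shared counter. Without the lock, increments from the two threads can
interleave and be lost; holding an <codeph>RMutex</codeph> around the update serializes access.
The thread function and counter are invented for illustration and are not part of any existing
component.</p>
<codeblock xml:space="preserve">
#include &lt;e32std.h&gt;

static TInt SharedCount = 0;   // data shared between the two threads
static RMutex CountMutex;      // protects SharedCount; create with CountMutex.CreateLocal()

// Thread function run by both threads. Without the Wait()/Signal()
// pair the two threads race on SharedCount and increments can be
// lost; with the mutex held, the updates are serialized.
TInt IncrementThread(TAny* /*aArg*/)
    {
    for (TInt i = 0; i &lt; 100000; ++i)
        {
        CountMutex.Wait();      // acquire the lock
        ++SharedCount;          // critical section: one thread at a time
        CountMutex.Signal();    // release the lock
        }
    return KErrNone;
    }
</codeblock>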
|
<section id="GUID-6DA960AD-4E33-4B15-B960-F8077530AC88"><title>Locking Granularity</title><p>An
important property of a lock is its granularity. The granularity is a measure
of the amount of data the lock is protecting. There are two different granularities
for locks:</p><ul>
|
<li><p><b>Coarse-Grained Locks</b> enclose a large area of shared code or
multiple areas of unrelated data. Such a lock reduces the number of threads that
can run concurrently, resulting in more serial execution and making the code behave like
a single-threaded process. Different coarse locks can, however, be held in parallel:
for example, one lock can be applied to each kernel subsystem.</p></li>
|
</ul><p>The following diagram illustrates how a Coarse-Grained Lock covers
many parts of the code. This not only simplifies the locking itself but also
frees developers from having to identify and lock every individual piece of data.
To get concurrency, the operating
system must allow more than one process (or interrupt) to execute at the same
time. To do this, we divide the OS into sections and give each section a lock.
For a small number of processors, we only need a small number of locks, each
covering a large region of the OS. This model of coarse-grained locking provides
good scaling on small numbers of processors; a sketch of the approach follows the
diagram.</p><fig id="GUID-D70A45BC-E281-403E-9D7F-519D990F0DAE">
|
<title>Coarse-Grained Lock</title>
<desc/>
<image href="GUID-91BD4E81-4CDC-4279-8E19-5B79A63B838E_d0e638812_href.png" placement="inline"/>
|
</fig>
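<p>A minimal sketch of the coarse-grained approach: a hypothetical subsystem guards all of its
operations with one lock, so at most one thread can be inside the subsystem at a time. The class,
its operations and its data are invented for illustration.</p>
<codeblock xml:space="preserve">
#include &lt;e32std.h&gt;

// Hypothetical subsystem protected by a single coarse-grained lock:
// every public operation takes the same mutex, so the whole subsystem
// is serialized, however many processors are available.
class CNameCache
    {
public:
    TInt Construct()            { return iLock.CreateLocal(); }
    void Insert(TInt /*aKey*/)  { iLock.Wait(); /* update any cache data */ iLock.Signal(); }
    void Remove(TInt /*aKey*/)  { iLock.Wait(); /* update any cache data */ iLock.Signal(); }
private:
    RMutex iLock;               // one lock covers the whole subsystem
    };
</codeblock>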
|
<ul>
<li><p><b>Fine-Grained Locks</b> enclose a small area of code, for example
a single data structure. Many such locks are added throughout the code and the user must remember
to release every one of them, so fine locks are more error prone. </p></li>
|
</ul><p>As the number of processors increases, the number of locks also increases.
The following diagram illustrates how fine locks are applied to data: fine locks
protect individual data structures, or even parts of data
structures. All those locks add extra instructions and data. To apply them, we divide
the OS into sections, divide each section into small pieces of code, and
give each piece its own lock. Fine-grained locking can result in near
perfect scaling; a sketch of the approach follows the diagram.</p><fig id="GUID-97F40770-1B6C-435B-AFF0-3BA3AC66F7DA">
|
<title>Fine-Grained Lock</title>
<image href="GUID-2E3F9FBD-21FE-4F02-B410-F756012805D2_d0e638829_href.png" placement="inline"/>
|
</fig>
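<p>By contrast, a fine-grained design gives each part of a data structure its own lock, so threads
touching different parts do not block each other. The table class below is invented for
illustration; only the one-lock-per-bucket idea matters.</p>
<codeblock xml:space="preserve">
#include &lt;e32std.h&gt;

const TInt KBucketCount = 16;

// Hypothetical table with one lock per bucket rather than one lock for
// the whole table: threads working on different buckets proceed in
// parallel, at the cost of more locks to create, acquire and release.
class CCounterTable
    {
public:
    TInt Construct()
        {
        for (TInt i = 0; i &lt; KBucketCount; ++i)
            {
            iCounts[i] = 0;
            TInt err = iLocks[i].CreateLocal();
            if (err != KErrNone)
                return err;
            }
        return KErrNone;
        }
    void Increment(TInt aKey)
        {
        const TInt bucket = aKey % KBucketCount;
        iLocks[bucket].Wait();      // lock only the affected bucket
        ++iCounts[bucket];
        iLocks[bucket].Signal();
        }
private:
    RMutex iLocks[KBucketCount];    // one lock per bucket
    TInt   iCounts[KBucketCount];
    };
</codeblock>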
|
</section>
<section id="GUID-94FDD42D-9D26-4A41-BFB6-57648083EC41"><title>Types of Locks</title><ul>
|
<li><p><xref href="GUID-FB1605A8-9946-364C-A649-DEF60E1F761B.dita"><apiname>TSpinLock</apiname></xref> is the lightest-weight lock available on the
kernel side. If a thread attempts to acquire a spinlock that is not available,
it keeps trying (spinning) until it can acquire the lock. Spinlocks
should therefore only be used to lock data in situations where the lock is not held for a
long time; a sketch of the spinning behaviour follows this list.</p></li>
|
</ul>
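<p>The exact kernel-side <codeph>TSpinLock</codeph> interface is not reproduced here; the sketch
below only illustrates the spinning behaviour described above, using a busy-wait loop over a
standard C++ atomic flag.</p>
<codeblock xml:space="preserve">
#include &lt;atomic&gt;

// Illustration only: this is NOT the TSpinLock API, just the behaviour
// it embodies. A thread that finds the lock taken keeps retrying in a
// tight loop instead of sleeping, so a spinlock must only ever be held
// for a very short time.
class TSimpleSpinLock
    {
public:
    TSimpleSpinLock() { iFlag.clear(); }    // start in the unlocked state
    void Lock()
        {
        // Spin: keep retrying until the current holder releases the lock.
        while (iFlag.test_and_set(std::memory_order_acquire))
            {
            }
        }
    void Unlock()
        {
        iFlag.clear(std::memory_order_release);
        }
private:
    std::atomic_flag iFlag;
    };
</codeblock>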
|
<ul>
<li><p><xref href="GUID-669D0368-7ADE-35FA-881C-51D476D45B8A.dita"><apiname>RFastLock</apiname></xref> is the lightest-weight lock available on the
user side. It provides no priority inheritance. It is a thin layer over a standard
semaphore, and only calls into the kernel side if there is contention; a usage
sketch follows this list.</p></li>
|
</ul>
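<p>A minimal user-side sketch of <codeph>RFastLock</codeph>, assuming the usual
<codeph>CreateLocal()</codeph>/<codeph>Wait()</codeph>/<codeph>Signal()</codeph> pattern. The
shared counter and function names are invented for illustration.</p>
<codeblock xml:space="preserve">
#include &lt;e32std.h&gt;

static RFastLock ListLock;      // fast user-side lock, no priority inheritance
static TInt PendingItems = 0;   // illustrative shared state

TInt InitLock()
    {
    return ListLock.CreateLocal();  // create the lock before first use
    }

void AddItem()
    {
    ListLock.Wait();            // only enters the kernel if there is contention
    ++PendingItems;             // keep the critical section short
    ListLock.Signal();
    }
</codeblock>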
|
<ul>
<li><p><xref href="GUID-C0FEA3A0-7DD3-3B87-A919-CB973BC05766.dita"><apiname>RMutex</apiname></xref> is used to serialize access to a section
of code that cannot safely be executed concurrently by more than one thread.
A mutex object allows one thread into a controlled section, forcing other
threads which attempt to gain access to that section to wait until the first
thread has exited from that section; a usage sketch follows this list.</p></li>
|
</ul>
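<p>A minimal sketch of the <codeph>RMutex</codeph> pattern, assuming the usual
<codeph>CreateLocal()</codeph>/<codeph>Wait()</codeph>/<codeph>Signal()</codeph> calls; the
function names are invented for illustration.</p>
<codeblock xml:space="preserve">
#include &lt;e32std.h&gt;

static RMutex SectionMutex;     // guards the controlled section below

TInt InitMutex()
    {
    return SectionMutex.CreateLocal();
    }

// Only one thread at a time may execute the controlled section; any
// other thread calling this function blocks in Wait() until the
// current holder calls Signal().
void UpdateSharedState()
    {
    SectionMutex.Wait();
    // ... controlled section: code that must not run concurrently ...
    SectionMutex.Signal();
    }
</codeblock>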
|
<ul>
<li><p><xref href="GUID-AED27A76-3645-3A04-B80D-10473D9C5A27.dita"><apiname>RSemaphore</apiname></xref> is used for Inter-Process Communication
(IPC) and is similar in performance to <codeph>RMutex</codeph>. <codeph>RSemaphore</codeph> locks
are used when the lock must be held for a long time: they put the waiting thread
to sleep rather than letting it spin, and are used to synchronize user contexts; a usage
sketch follows this list.</p></li>
|
</ul>
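<p>A sketch of a simple hand-off between two processes using a global named
<codeph>RSemaphore</codeph>; the semaphore name and the split into producer and consumer
functions are invented for illustration.</p>
<codeblock xml:space="preserve">
#include &lt;e32std.h&gt;

_LIT(KSemName, "ExampleWorkSem");           // illustrative global semaphore name

// Producer process: creates the semaphore with a count of 0 and
// signals it once for every unit of work made available.
TInt ProducerInit(RSemaphore&amp; aSem)
    {
    return aSem.CreateGlobal(KSemName, 0);
    }

void PublishWork(RSemaphore&amp; aSem)
    {
    // ... make a work item visible to the consumer process ...
    aSem.Signal();                          // wake the consumer
    }

// Consumer process: sleeps in Wait() until work is signalled, instead
// of spinning, which is why a semaphore suits locks held for a long time.
TInt ConsumeOne()
    {
    RSemaphore sem;
    TInt err = sem.OpenGlobal(KSemName);
    if (err != KErrNone)
        return err;
    sem.Wait();                             // thread sleeps until Signal()
    // ... consume the work item ...
    sem.Close();
    return KErrNone;
    }
</codeblock>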
|
</section>
</conbody><related-links>
|
<link href="GUID-387E98B0-568D-4DBB-9A9E-616E41E96B58.dita"><linktext>SMP - Overview</linktext>
</link>
<link href="GUID-FA120B3F-4EC4-5A0A-8A36-D5CD032CF1B0.dita"><linktext>Using Mutexes</linktext>
</link>
<link href="GUID-9D00655C-AFBA-5DF7-B11B-6B2355BDF08D.dita"><linktext>Using Semaphores</linktext>
</link>
</related-links></concept>