Symbian3/PDK/Source/GUID-9D93F895-B975-4F2D-A2A3-817033EA5C12.dita
changeset 1 25a17d01db0c
child 3 46218c8b8afa
0:89d6a7a84779 1:25a17d01db0c
       
     1 <?xml version="1.0" encoding="utf-8"?>
       
     2 <!-- Copyright (c) 2007-2010 Nokia Corporation and/or its subsidiary(-ies) All rights reserved. -->
       
     3 <!-- This component and the accompanying materials are made available under the terms of the License 
       
     4 "Eclipse Public License v1.0" which accompanies this distribution, 
       
     5 and is available at the URL "http://www.eclipse.org/legal/epl-v10.html". -->
       
     6 <!-- Initial Contributors:
       
     7     Nokia Corporation - initial contribution.
       
     8 Contributors: 
       
     9 -->
       
    10 <!DOCTYPE concept
       
    11   PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
       
    12 <concept id="GUID-9D93F895-B975-4F2D-A2A3-817033EA5C12" xml:lang="en"><title> Data
       
    13 Integrity And Memory Barriers</title><shortdesc>This topic explains how memory barriers are used to maintain data
       
    14 integrity in a multi-CPU system with shared memory and I/O.</shortdesc><prolog><metadata><keywords/></metadata></prolog><conbody>
       
    15 <section id="GUID-8A6A93C2-AA57-4ABB-A6E0-64F34D12E05C"><title>Introduction</title><p>When
       
    16 a thread is executed on a single CPU system, there is an order to the read/write
       
    17 operations to shared memory and I/O. Since read and write operations cannot
       
    18 occur at the same time, the integrity of the data is maintained.</p><fig id="GUID-86081A73-848E-49A4-A663-77D681DC6784">
       
    19 <title>Shared Memory and I/O on a Single CPU System.</title>
       
    20 <image href="GUID-CFD41A5A-2FE2-47FE-8369-08E3C73CB9A5_d0e638914_href.png" placement="inline"/>
       
    21 </fig><p>Figure 1 shows how shared memory and I/O is handled on a single CPU
       
    22 system. The CPU switches between threads (this is called a context switch).
       
    23 Because only one thread can be executed at once, read and write operations
       
    24 to shared memory and I/O cannot occur at the same time. Hence the integrity
       
    25 of the data can be maintained.</p><fig id="GUID-38C5602A-15EF-4162-962B-932B13CC8377">
       
    26 <title>Shared Memory and I/O on a Multi CPU System.</title>
       
    27 <image href="GUID-4AB3C821-25B5-4B5B-BC20-C8FA42D69802_d0e638923_href.png" placement="inline"/>
       
    28 </fig><p>Figure 2 shows how shared memory and I/O is handled on a multi CPU
       
    29 system. In this system, it is possible that the read/write order will not
       
    30 be the one expected. This is due to performance decisions made by the hardware
       
    31 used to interface memory to the rest of the system. Without some form of synchronisation
       
    32 mechanism in place, data corruption can occur.</p><p>To safeguard the integrity
       
    33 of data, a new synchronisation method known as a memory barrier has been implemented.</p> 
       
    34    </section>
       
    35 <section id="GUID-1512CFA2-E4F7-4B02-90B3-A02BC560B821"><title>Memory Barriers</title><p>Memory
       
    36 barriers are used to enforce the order of memory access. They are also known
       
    37 as a membar, a memory fence or a fence instruction.</p><p>An example of their
       
    38 use is:</p><codeblock xml:space="preserve">thread 1: 
       
    39 
       
    40 	a = 1;
       
    41 	b = 1;
       
    42 
       
    43 thread 2: 
       
    44 
       
    45 	while (b != 1);
       
    46 	assert(a==1);
       
    47 </codeblock><p>In the above code on a single-core system, the condition in
       
    48 the assert statement would always be true, because the order of read/write
       
    49 operations can be guaranteed.</p><p>However, on a multi-core system the order of

    50 read/write operations cannot be guaranteed, so a synchronisation method has to

    51 be implemented that allows the contents of the shared memory to be synchronised
       
    52 across the whole system. This synchronisation method is the memory barrier.</p><p>With
       
    53 the above example, the memory barrier implementation would be:</p><codeblock xml:space="preserve">thread 1: 
       
    54 
       
    55 	a = 1;
       
    56 	memory_barrier;
       
    57 	b = 1;
       
    58 
       
    59 thread 2: 
       
    60 
       
    61 	while (b != 1);
       
    62 	memory_barrier;
       
    63 	assert(a==1);
       
    64 </codeblock><p>In the above example, the first memory barrier makes sure that
       
    65 the variables a and b are written to in the specified order. The second memory
       
    66 barrier makes sure that the read operations are done in the correct order
       
    67 (the value of variable b is read before the value of variable a).</p><p> Two
       
    68 forms of memory barrier are supported by the Symbian platform:</p><ul>
       
    69 <li><p>One that only allows subsequent memory accesses once all the previous
       
    70 memory access operations have been observed.</p><p> This memory barrier does
       
    71 not guarantee the order of completion of the memory requests.</p><p> This
       
    72 equates to the ARM DMB (Data Memory Barrier) instruction.</p><p> This is implemented
       
    73 via the <codeph>__e32_memory_barrier()</codeph> function; a usage sketch follows this list.</p></li>
       
    74 <li><p>One that ensures that all previous memory and I/O access operations
       
    75 are complete before any new access instructions can be executed.</p><p> This
       
    76 equates to the ARM DSB (Data Synchronisation Barrier) instruction. The difference
       
    77 between this form of memory barrier and the previous one is that all of the
       
    78 cache operations will have been completed before the memory barrier instruction
       
    79 completes and that no instruction can be executed until the memory barrier
       
    80 instruction has been completed.</p><p>This is implemented via the <codeph>__e32_io_completion_barrier()</codeph> function.</p></li>
       
</ul>
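<p>As an illustration only, the two-thread example above can be rewritten to
use the first of these functions. The variable names, the busy-wait loop and
the <codeph>e32atomics.h</codeph> include are assumptions made for this sketch;
the only barrier call used is <codeph>__e32_memory_barrier()</codeph>:</p>
<codeblock xml:space="preserve">#include &lt;e32atomics.h&gt;       /* assumed header declaring the barrier functions */

volatile TInt sharedData = 0;   /* the 'a' of the example above */
volatile TInt dataReady  = 0;   /* the 'b' of the example above */

/* Thread 1: publish the data, then raise the flag. */
void Publish()
	{
	sharedData = 1;
	__e32_memory_barrier();     /* the write to sharedData must be observed before the write to dataReady */
	dataReady = 1;
	}

/* Thread 2: wait for the flag, then read the data. */
void Consume()
	{
	while (dataReady != 1)
		{
		/* spin until the write to dataReady is observed */
		}
	__e32_memory_barrier();     /* the read of dataReady must be observed before sharedData is read */
	/* at this point sharedData is guaranteed to be 1 */
	}
</codeblock>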
       
<p>Memory barriers are used in implementing lockless algorithms, which
perform shared memory operations without using locks. They are used in areas
       
    83 where performance is a prime requirement.</p><p>It is unlikely that memory
       
    84 barriers would be used anywhere other than in device drivers (especially the <codeph>__e32_io_completion_barrier()</codeph> function).
       
    85 It is also unlikely that these functions would be used on their own. Instead
       
    86 they are most likely to be called via one of the atomic operation functions.
       
    87 An example of their use is:</p><codeblock xml:space="preserve">         /* Make sure the change to iTail is not observed before the trace data reads which preceded the call to this function. */
       
    88          __e32_memory_barrier();
       
    89          buffer-&gt;iTail += iLastGetDataSize;
       
</codeblock>
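<p>The example above calls <codeph>__e32_memory_barrier()</codeph>. As an
illustration only, a driver-style use of <codeph>__e32_io_completion_barrier()</codeph> might
look like the following sketch, in which a buffer shared with a peripheral is
filled and the device is only started once every buffer write has completed.
The buffer, the start register and the copy loop are assumptions made for
this sketch:</p>
<codeblock xml:space="preserve">#include &lt;e32atomics.h&gt;                 /* assumed header declaring the barrier functions */

static volatile TUint8*  SharedBuffer;    /* hypothetical buffer read by the peripheral; set up elsewhere by the driver */
static volatile TUint32* StartRegister;   /* hypothetical memory-mapped 'start transfer' register; set up elsewhere */

void StartTransfer(const TUint8* aSrc, TInt aLength)
	{
	/* Fill the buffer that the peripheral will read. */
	for (TInt i = 0; i &lt; aLength; ++i)
		{
		SharedBuffer[i] = aSrc[i];
		}

	/* Ensure all of the buffer writes (and any related cache operations)
	   have completed before the device is told to start the transfer. */
	__e32_io_completion_barrier();

	/* Start the transfer by writing to the device register. */
	*StartRegister = 1;
	}
</codeblock>
<p>Because <codeph>__e32_io_completion_barrier()</codeph> waits for completion
rather than just enforcing ordering, it is typically the more expensive of the
two barriers and is only needed where completion matters, as it does here.</p>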
       
</section>
</conbody></concept>