Adaptation/GUID-5B24741C-7CE0-58E8-98C9-1D1CACCD476F.dita
<?xml version="1.0" encoding="utf-8"?>
<!-- Copyright (c) 2007-2010 Nokia Corporation and/or its subsidiary(-ies) All rights reserved. -->
<!-- This component and the accompanying materials are made available under the terms of the License
"Eclipse Public License v1.0" which accompanies this distribution,
and is available at the URL "http://www.eclipse.org/legal/epl-v10.html". -->
<!-- Initial Contributors:
    Nokia Corporation - initial contribution.
Contributors:
-->
<!DOCTYPE concept
  PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<concept id="GUID-5B24741C-7CE0-58E8-98C9-1D1CACCD476F" xml:lang="en"><title>Fair Scheduling and File Caching Background</title><shortdesc>This document describes the fair scheduling
and file caching features of the File Server.</shortdesc><prolog><metadata><keywords/></metadata></prolog><conbody>
<section id="GUID-CBF527A2-EF30-5D4D-9BF4-631748315BE9"><title>Fair
scheduling</title><p>The current design of the file server supports
the processing of client requests concurrently, as long as those requests
are made to different drives in the system. For example, a read operation
may take place on the NAND user area partition while a write operation
to the MMC card takes place concurrently.</p><p>However, requests
to the same drive are serialized on a first-come first-served basis,
which under some circumstances leads to a poor user experience. For example:</p><ol>
<li id="GUID-6550B420-5545-42C9-B304-9A8FA7D9DA01"><p>An incoming
call arrives while a large video file is being written to the NAND
user area by an application that writes the file in very large chunks.</p></li>
<li id="GUID-7B08B626-E610-448B-917C-CB6CC127F2B9"><p>In order to
display the caller’s details, the phone needs to read from the contacts
database, which is also stored on the NAND user area.</p></li>
<li id="GUID-3A42569B-66CF-420F-A09D-F02A6BA4E4F9"><p>The write operation
takes a very long time to complete, so the call is lost.</p></li>
</ol><p>This is one of many scenarios where the single-threaded nature
of the file server may lead to unresponsive behavior. In order to
improve the responsiveness of the system, the Symbian platform implements
a fair scheduling policy that splits large requests into more manageable
chunks, thus providing clients of the file server with a more responsive
system when the file server is under heavy load.</p>
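<p>Fair scheduling is transparent to file server clients: requests are
split up inside the file server itself. The following sketch (the file
name and data buffer are illustrative) shows the kind of single large
write that fair scheduling breaks into smaller chunks, so that requests
from other clients to the same drive can be serviced in between:</p><codeblock xml:space="preserve">
#include &lt;f32file.h&gt;

// Illustrative only: a single large write to the NAND user area.
// With fair scheduling enabled, the file server splits this request
// into smaller chunks internally; other clients' requests to the
// same drive can be scheduled between the chunks.
void WriteVideoL(RFs&amp; aFs, const TDesC8&amp; aVideoData)
    {
    RFile file;
    User::LeaveIfError(file.Replace(aFs, _L("C:\\video.mp4"), EFileWrite));
    CleanupClosePushL(file);
    User::LeaveIfError(file.Write(aVideoData)); // may be several MB
    CleanupStack::PopAndDestroy(&amp;file);
    }
</codeblock>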
       
<p>See <xref href="GUID-89BCF9A5-ABBD-5523-BA76-FDB00B806A30.dita#GUID-89BCF9A5-ABBD-5523-BA76-FDB00B806A30/GUID-450D57A3-928B-5389-B941-595DB20A7CEA">Enabling fair scheduling</xref>.</p></section>
<section id="GUID-84C18F69-D193-4010-A9B2-B533FC6ACF51"><title>Read
caching</title><p>Read caching aims to improve file server performance
by addressing the following use case: <ul>
<li><p>A client (or multiple clients) issues repeated requests to
read data from the same locality within a file. Data that was previously
read (and is still in the cache) can be returned to the client without
continuously re-reading the data from the media.</p></li>
</ul></p><note>Read caching is of little benefit for clients/files
that are typically accessed sequentially in large blocks and never
re-read.</note><p>There may be a small degradation in performance
on some media due to the overhead of copying the data from the media
into the file cache. To some extent this may be mitigated by the effects
of read-ahead, but this clearly does not help large (&gt;= 4K) reads
and/or non-sequential reads. It should also be noted that any degradation
may be more significant for media where the read is entirely synchronous,
because there is no scope for a read-ahead to be running in the file
server drive thread at the same time as reads are being satisfied
in the context of the file server’s main thread.</p>
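<p>On configurations where the caching hints are supported, a client
can request buffered reads on a per-file basis when it opens the file.
The following minimal sketch (the file name is illustrative and error
handling is simplified) opens a file with the EFileReadBuffered hint so
that repeated reads from the same locality may be satisfied from the
file cache:</p><codeblock xml:space="preserve">
#include &lt;f32file.h&gt;

// Illustrative only: hint to the file server that reads on this file
// should go through the read cache (EFileReadBuffered). A client that
// wants to bypass the cache can use EFileReadDirectIO instead.
void ReadContactsL(RFs&amp; aFs)
    {
    RFile file;
    User::LeaveIfError(file.Open(aFs, _L("C:\\private\\contacts.cdb"),
                                 EFileRead | EFileReadBuffered));
    CleanupClosePushL(file);
    TBuf8&lt;256&gt; buf;
    User::LeaveIfError(file.Read(buf)); // repeat reads may hit the cache
    CleanupStack::PopAndDestroy(&amp;file);
    }
</codeblock>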
       
</section>
<section id="GUID-7C6E6FDB-1419-4A8E-82F8-74EE6E62D468"><title>Demand
paging effects</title><p>When ROM paging is enabled, the kernel maintains
a <i>live list</i> of pages that are currently being used to store
demand paged content. It is important to realize that this list also
contains non-dirty pages belonging to the file cache. The implication
of this is that reading some data into the file cache, or reading
data already stored in the file cache, may result in code pages being
evicted from the live list.</p><p>Having a large number of clients
reading through or from the file cache can have an adverse effect
on performance. For this reason it is probably not a good idea to
set the <xref href="GUID-3B777E89-8E4C-3540-958F-C621741895C8.dita"><apiname>FileCacheSize</apiname></xref> property to too large a value;
otherwise a single application reading a single large file through
the cache is more likely to cause code page evictions if the amount
of available RAM is restricted. See <xref href="GUID-EB2566BD-8F65-5A81-B215-E8B05CFE21C3.dita">Migration Tutorial:
Demand Paging and Media Drivers</xref>.</p>
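<p>The property is typically set in the file server configuration file
(estart.txt). The fragment below is purely illustrative: the section
name and the units of the value are assumptions here, so consult the
FileCacheSize reference page linked above for the authoritative syntax
before using it:</p><codeblock xml:space="preserve">
[FileCache]
FileCacheSize=256
</codeblock>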
       
</section>
<section id="GUID-9BCF8F2A-F919-4AC7-BA05-02CC763429C1"><title>Read-ahead
caching</title><p>Clients that read data sequentially (particularly
using small block lengths) impact system performance due to the overhead
in requesting data from the media. Read-ahead caching addresses this
issue by ensuring that subsequent small read operations may be satisfied
from the cache after issuing a large request to read ahead data from
the media.</p><p>Read-ahead caching builds on read caching by detecting
clients that are performing streaming operations and speculatively
reading ahead on the assumption that once the data is in the cache
it is likely to be accessed in the near future, thus improving performance.</p><p>The number of bytes requested by the read-ahead mechanism is initially
the greater of double the client’s last read length and one page (for
example, 4K), and it doubles each time the file server detects that
the client is due to read outside the extents of the read-ahead
cache, up to a pre-defined maximum (128K).</p>
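<p>The sizing policy can be summarized in code. The following sketch is
an illustration of the policy described above, not the actual file
server source:</p><codeblock xml:space="preserve">
#include &lt;e32std.h&gt;

const TInt KPageSize     = 4 * 1024;   // example page size (4K)
const TInt KMaxReadAhead = 128 * 1024; // pre-defined maximum (128K)

// First read-ahead: the greater of twice the client's last read
// length and one page.
TInt InitialReadAheadLen(TInt aLastReadLen)
    {
    return Max(2 * aLastReadLen, KPageSize);
    }

// Each time the client is about to read beyond the extents of the
// read-ahead cache, the read-ahead length doubles, up to the maximum.
TInt NextReadAheadLen(TInt aCurrentLen)
    {
    return Min(2 * aCurrentLen, KMaxReadAhead);
    }
</codeblock>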
       
</section>
<section id="GUID-976EE014-E3C3-42FD-8467-1D008DEC27FD"><title>Write
caching</title><p>The file server implements a small amount of write-back
caching. This overcomes the inefficiency of clients that perform small
write operations: it takes advantage of media that is written on a block
basis by consolidating multiple file updates into a single larger write,
and it minimizes the overhead of the metadata updates that the file
system performs.</p><p>By implementing write-back at the file level,
rather than at the level of the block device, the possibility of file
system corruption is removed, as the robustness features provided by
rugged FAT and LFFS are still applicable.</p><p>Furthermore, by disabling
write-back by default, allowing the licensee to specify the policy on
a per-drive basis, providing APIs on a per-session basis and respecting
Flush and Close operations, the risk of data corruption is minimized.</p>
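<p>The following minimal sketch (the file name and record content are
illustrative) shows a client opting in to write caching for a single
file with the EFileWriteBuffered hint, and then using Flush to force
cached data onto the media at a point where durability matters:</p><codeblock xml:space="preserve">
#include &lt;f32file.h&gt;

// Illustrative only: buffered writes with an explicit flush. Close()
// also causes cached data to be written out, but flushing explicitly
// bounds how much data could be lost on power failure.
void LogEventL(RFs&amp; aFs, const TDesC8&amp; aRecord)
    {
    RFile file;
    User::LeaveIfError(file.Replace(aFs, _L("C:\\logs\\app.log"),
                                    EFileWrite | EFileWriteBuffered));
    CleanupClosePushL(file);
    User::LeaveIfError(file.Write(aRecord)); // may be cached
    User::LeaveIfError(file.Flush());        // commit to the media
    CleanupStack::PopAndDestroy(&amp;file);
    }
</codeblock>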
       
<note>The possibility
of data loss increases, but the LFFS file system already implements
a simple write-back cache of 4K in size, so the impact of this should
be minimal.</note><note>Write caching must be handled with care, as
it could potentially result in the loss of user data.</note><p>Database
access needs special consideration, as corruption may occur if the
database code expects write operations to be committed to disk
immediately or in a certain order (write caching may re-order
write requests).</p><p>For these reasons, it is probably safer to
leave write caching off by default and to consider enabling it on
a per-application basis. See <xref href="GUID-89BCF9A5-ABBD-5523-BA76-FDB00B806A30.dita#GUID-89BCF9A5-ABBD-5523-BA76-FDB00B806A30/GUID-719C7E0C-790B-5C61-9EA8-E2687397340F">Enabling read and write caching</xref>.</p></section>
</conbody></concept>