diff -r 578be2adaf3e -r 307f4279f433 Adaptation/GUID-5B24741C-7CE0-58E8-98C9-1D1CACCD476F.dita
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/Adaptation/GUID-5B24741C-7CE0-58E8-98C9-1D1CACCD476F.dita Fri Oct 15 14:32:18 2010 +0100
@@ -0,0 +1,106 @@
+
+
+
+
+
+ The current design of the file server supports
+the processing of client requests concurrently, as long as those requests
+are made to different drives in the system. For example, a read operation
+may take place on the NAND user area partition while a write operation
+to the MMC card takes place concurrently. However, requests
+to the same drive are serialized on a first-come, first-served basis,
+which under some circumstances can lead to a poor user experience. For example: an incoming
+call arrives while a large video file is being written to the NAND
+user area by an application that writes the file in very large chunks. In order to
+display the caller’s details, the phone needs to read from the contacts
+database which is also stored on the NAND user area. The write operation
+takes a very long time to complete, so the call is lost.
+
+This is one of many scenarios where the single-threaded nature
+of the file server may lead to unresponsive behavior. In order to
+improve the responsiveness of the system, the Symbian platform implements
+a fair scheduling policy that splits up large requests into more manageable
+chunks, thus providing clients of the file server with a more responsive
+system when the file server is under heavy load. See
+
+Read caching aims to improve file server performance
+by addressing the following use case: A client (or multiple clients) issues repeated requests to
+read data from the same locality within a file. Data that was previously
+read (and is still in the cache) can be returned to the client without
+continuously re-reading the data from the media.
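The repeated-read use case above can be sketched as a toy segment cache. This is an illustrative model only (the class and member names are invented, and the real file cache operates on pages of media data rather than strings); it shows how a second read from the same locality is satisfied without touching the media again.

```cpp
#include <cstddef>
#include <map>
#include <string>

// Toy model of segment-based read caching (illustrative only).
class TToyReadCache
    {
public:
    // Returns the cached segment if present; otherwise "reads" it from
    // the media via the supplied function and caches it for later reads.
    template <typename MediaReadFn>
    std::string Read(std::size_t aSegment, MediaReadFn aReadFromMedia)
        {
        auto it = iSegments.find(aSegment);
        if (it != iSegments.end())
            {
            ++iHits;                    // satisfied from cache, no media access
            return it->second;
            }
        ++iMisses;
        std::string data = aReadFromMedia(aSegment);
        iSegments[aSegment] = data;     // keep for subsequent requests
        return data;
        }
    int Hits() const { return iHits; }
    int Misses() const { return iMisses; }
private:
    std::map<std::size_t, std::string> iSegments;
    int iHits = 0;
    int iMisses = 0;
    };
```

In this sketch the second read of the same segment is a cache hit, which is the behavior the use case above relies on.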
+
+
+There may be a small degradation in performance
+on some media due to the overhead of copying the data from the media
+into the file cache. To some extent this may be mitigated by the effects
+of read-ahead, but this clearly does not affect large (>= 4K) reads
+and/or non-sequential reads. It should also be noted that any degradation
+may be more significant for media where the read is entirely synchronous,
+because there is no scope for a read-ahead to be running in the file
+server drive thread at the same time as reads are being satisfied
+in the context of the file server’s main thread.
+When ROM paging is enabled, the kernel maintains
+a live list of pages that are currently being used to store
+demand paged content. It is important to realize that this list also
+contains non-dirty pages belonging to the file cache. The implication
+of this is that reading some data into the file cache, or reading
+data already stored in the file cache, may result in code pages being
+evicted from the live list.
+Having a large number of clients
+reading through or from the file cache can have an adverse effect
+on performance. For this reason it is probably not a good idea to
+set the
+Clients that read data sequentially (particularly
+using small block lengths) impact system performance due to the overhead
+in requesting data from the media. Read-ahead caching addresses this
+issue by ensuring that subsequent small read operations may be satisfied
+from the cache after issuing a large request to read ahead data from
+the media.
+Read-ahead caching builds on read caching by detecting
+clients that are performing streaming operations and speculatively
+reading ahead on the assumption that once the data is in the cache
+it is likely to be accessed in the near future, thus improving performance.
+The number of bytes requested by the read-ahead mechanism is initially
+equal to double the client’s last read length or one page (for example,
+4K), whichever is greater, and doubles each time the file server detects
+that the client is due to read outside of the extents of the read-ahead
+cache, up to a pre-defined maximum (128K).
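The read-ahead length policy just described can be modelled with two small functions. This is a sketch of the documented arithmetic only; the function and constant names are invented, not the file server's actual symbols, and the page size is assumed to be 4K as in the example above.

```cpp
#include <algorithm>
#include <cstddef>

// Assumed page size (4K, per the example above) and the stated
// pre-defined maximum read-ahead length (128K).
constexpr std::size_t KPageSize     = 4 * 1024;
constexpr std::size_t KMaxReadAhead = 128 * 1024;

// Initial read-ahead: double the client's last read length, or one
// page, whichever is greater.
std::size_t InitialReadAhead(std::size_t aLastReadLen)
    {
    return std::max(2 * aLastReadLen, KPageSize);
    }

// Each time the client is due to read outside the extents of the
// read-ahead cache, the length doubles, capped at the maximum.
std::size_t NextReadAhead(std::size_t aCurrent)
    {
    return std::min(2 * aCurrent, KMaxReadAhead);
    }
```

So a client issuing 1K reads starts with a 4K read-ahead (the page minimum dominates), which then grows 8K, 16K, ... up to the 128K ceiling while the streaming pattern continues.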
+Write caching is implemented to perform a small
+level of write-back caching. This overcomes inefficiencies of clients
+that perform small write operations, thus taking advantage of media
+that is written on a block basis by consolidating multiple file updates
+into a single larger write as well as minimizing the overhead of metadata
+updates that the file system performs.
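The consolidation idea can be sketched with a toy write-back buffer. This is an illustrative model only (the names are invented, and the real write cache also deals with file offsets, re-ordering and metadata); it shows small writes being coalesced into block-sized media writes, with Flush committing any remainder immediately.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Toy model of write-back caching: small writes accumulate in a buffer
// and are committed to "media" one block at a time (illustrative only).
class TToyWriteCache
    {
public:
    explicit TToyWriteCache(std::size_t aBlockSize) : iBlockSize(aBlockSize) {}

    // Buffers the data; commits whole blocks to the media as they fill.
    void Write(const std::string& aData, std::vector<std::string>& aMedia)
        {
        iBuffer += aData;
        while (iBuffer.size() >= iBlockSize)
            {
            aMedia.push_back(iBuffer.substr(0, iBlockSize)); // one block write
            iBuffer.erase(0, iBlockSize);
            }
        }

    // Commits any buffered data immediately; this is why honouring
    // explicit Flush and Close requests matters for data integrity.
    void Flush(std::vector<std::string>& aMedia)
        {
        if (!iBuffer.empty())
            {
            aMedia.push_back(iBuffer);
            iBuffer.clear();
            }
        }
private:
    std::size_t iBlockSize;
    std::string iBuffer;
    };
```

Two 2-byte writes against a 4-byte block size reach the media as a single block write rather than two separate operations, which is the consolidation the paragraph above describes.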
+By implementing write-back
+at the file level, rather than at the level of the block device,
+the possibility of file system corruption is removed, as the robustness
+features provided by rugged FAT and LFFS are still applicable.
+Furthermore, by disabling write-back by default, allowing the
+licensee to specify the policy on a per-drive basis, providing APIs
+on a per-session basis and respecting Flush and Close operations,
+the risk of data corruption is minimized.
+Database
+access needs special consideration, as corruption may occur if the
+database is written expecting write operations to be committed to
+disk immediately or in a certain order (write caching may re-order
+write requests).
+For these reasons, it is probably safer to
+leave write caching off by default and to consider enabling it on
+a per-application basis. See