ROMBUILD is the Symbian platform binary XIP (execute-in-place) ROM builder. It is normally invoked through If the OBY files are encoded in UTF-8 with non-ASCII character support, use the following ROMBUILD command syntax: If the OBY files are encoded in the local character set with non-ASCII character support, use the following ROMBUILD command syntax: Accepts a parameter file, which contains a list of command-line parameters specific to the ROM tools, as input. Displays more detailed help for the command. Uses a PE-COFF header instead of a Symbian platform header. PE-COFF (Portable Executable–Common Object File Format) is a Microsoft extension to the COFF standard. Compresses executable files where possible using the inflate (Deflate, Huffman+LZ77) algorithm unless the Specifies the compression algorithm to use. Can be used either with the No compression is used. Compresses executable files using the default (Deflate, Huffman+LZ77) algorithm. Compresses executable files using the bytepair algorithm. Bytepair compression allows faster decompression than the default Deflate, Huffman+LZ77 algorithm and supports demand paging by performing compression and decompression of code in independent 4 KB pages. Uses the specified core image file as the basis for creating the ROM image extension. If the given core ROM image file is invalid or does not exist, an error message is displayed. Sets the trace bitmask; this only applies to debug builds. The simplest way of specifying this is to use a string of hexadecimal characters starting with 0x (for example, 0x01234567). However, any string that can be interpreted and translated into a valid TUint value may be used. See the standard C function Reduces the physical memory consumption during image generation. Specifies the level of information written to the log file. The following log levels are available: Default level of information written to the log file. Logs the host or ROM filenames, the file size, and the hidden attribute in addition to the Logs the E32 file header attributes such as UIDs, data size, heap size, stack size, VID, SID, and priority in addition to the Suppresses the image loader header. Does not add sorted entry arrays (6.1 compatible). This is a table of offsets for the subdirectories and files that are not in sorted order. Compares the generated ROM image with the ROM image whose file name is specified. Displays a summary of the size to the specified destination, that is, to the log, to the screen, or to both the log and the screen. Verbose mode. Displays a warning if a file is placed in a non-standard directory when For example, the following instruction in an OBY file leads to a warning when Specifies the number of working threads that can run concurrently to create a ROM image. The If the If the If the Generates a dependencies file describing internal dependencies among executables or DLLs in a ROM image. Note: You can only generate dependency information for the paged section of a ROM image. Generates symbols for each data file or executable specified in the OBY file. Note: The symbols file is not generated
by default. See the
Calibration is done through a user-side calibration application. The platform specific layer performs the following calculation to convert between ADC values and co-ordinates: screenCoordinates = R × adcValues + T. Here the R matrix determines the scaling and rotation, and the T vector determines the translation. This conversion is implemented by the platform specific layer's implementation of All that you need to do is provide an initial set of values for R and T.
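The transform above can be sketched in code. The following is a minimal illustration of the R-and-T calculation only, not the actual platform specific layer API; the type names and the fixed-point divisor are hypothetical:

```cpp
#include <cstdint>

// Illustrative calibration constants: a 2x2 matrix R (scaling and
// rotation) plus a translation vector T. A real port would store R
// in fixed-point form; 'divisor' stands in for that scale factor.
struct CalibrationConstants {
    int r11, r12, r21, r22;   // R matrix
    int tx, ty;               // T vector
    int divisor;              // fixed-point scale factor for R
};

struct Point { int x; int y; };

// screen = R * adc + T
Point AdcToScreen(const CalibrationConstants& c, Point adc) {
    Point screen;
    screen.x = (c.r11 * adc.x + c.r12 * adc.y) / c.divisor + c.tx;
    screen.y = (c.r21 * adc.x + c.r22 * adc.y) / c.divisor + c.ty;
    return screen;
}
```

With an identity matrix, the transform reduces to a pure translation, which is a convenient sanity check for an initial set of values.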
An example of how to set them up is included in the template port. The principle is that the application draws cross hairs on the screen and then invites the user to touch these points. The digitiser ADC value is used to calculate the digitiser-to-screen pixel value transformation. Any calibration code will interface with the On a cold restart, the calibration application goes through the following steps:
Gets the calibration points by using
Sets calibration points by using:
Saves calibration data by using:
On a warm restart, the calibration application just restores calibration data using
The Time client interface is used when communicating with the
Real Time Clock (RTC). This client interface can be used to check that the RTC is working correctly, to maintain its request queue, and to set or reset the wake-up alarm.
The following classes provide the RTC interface:
Name Description
Provides the interface between the Real Time Clock (RTC) and the upper layers of the OS.
Provides an interface to the Application Specific Integrated Circuit (ASIC) that is being used.
The functionality provided by this class can be split into the following groups: Diagnostics Management of the wake-up alarm Request management Set and read the RTC
These methods are used to test that the RTC is working correctly. The methods that are included in this group are:
This group of methods relates to the setting and releasing of the wake-up alarm. The methods that are included in this group are:
Calls to the RTC are placed in a request queue, waiting for the RTC to process them. Calls to the other two groups of functionality add requests to the queue for the RTC; the methods in this group remove them. The methods that are included in this group are:
These functions are used to set and read the RTC. Both functions measure time as the number of seconds since the start of the year 2000.
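As an illustration of that time base, the helper below splits a seconds-since-2000 value into whole days and a time of day. It is a self-contained sketch for clarity, not part of the RTC client interface:

```cpp
#include <cstdint>

// Illustrative breakdown of an RTC value measured as seconds since
// 00:00:00 on 1 January 2000. Not a Symbian API.
struct RtcTime {
    std::uint32_t days;
    std::uint32_t hours, minutes, seconds;
};

RtcTime SplitRtcSeconds(std::uint32_t secondsSince2000) {
    RtcTime t;
    t.days = secondsSince2000 / 86400u;        // whole days elapsed
    std::uint32_t rem = secondsSince2000 % 86400u;
    t.hours   = rem / 3600u;
    t.minutes = (rem % 3600u) / 60u;
    t.seconds = rem % 60u;
    return t;
}
```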
The Base Starter is started by the File Server,
As the USB PDD runs in kernel mode, debug
The debug message type
A kernel object that takes a message string as a constructor.
Two debug message types are used:
The following code samples illustrate their use:
This document covers information which is generic to master and slave channel implementation.
Intended Audience
Base porting engineers.
Introduction
IIC buses (a term used in this document to represent serial inter-chip buses that IIC can operate on) are a class of bus used to transmit non-time-critical data between components of a hardware system.
Different IIC buses have a large amount of functionality in common, but some functionality is specific to individual implementations. For this reason, the IIC software has an architecture consisting of two layers, the Platform Independent Layer (PIL) and the SHAI implementation layer. The Platform Independent Layer is a set of classes which encapsulate generic functionality and have been implemented for you. The SHAI implementation layer is an interface which you must implement yourself to encapsulate the functionality specific to the platform you are working on.
You implement the SHAI implementation layer by subclassing the classes of the Platform Independent Layer and writing functions that provide an abstraction of your hardware. To access the hardware you are strongly recommended to use existing Symbian APIs such as
An IIC channel operates in one of two modes, master and slave, and there are separate master and slave APIs to be implemented. In master mode, a channel and a client communicating with that channel execute in two separate threads. In slave mode there is only one thread for both channel and client.
The SHAI implementation and platform independent APIs assume an interrupt-driven approach to implementation. An event on hardware triggers an interrupt, which then signals this event to the channel thread (where the client is master) or the client thread (where the client is slave). This means that implementation involves writing interrupt service routines (ISRs) and a DFC queue to hold the callbacks which respond to them. For both master and slave operation, the callbacks are queued for execution on the client thread, so the client thread must not be blocked, otherwise the callbacks will never run.
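The queued-callback pattern described above can be sketched generically. This is a plain C++ illustration of the idea of a producer (ISR side) queuing completion callbacks that a single thread later drains; it is not the kernel's DFC API, and all names are hypothetical:

```cpp
#include <deque>
#include <functional>
#include <utility>

// Minimal sketch of a callback queue: an interrupt-side producer
// appends callbacks, and the owning thread drains them in order.
// In a real driver this draining happens on the DFC thread, which
// is why that thread must never be blocked by a client.
class CallbackQueue {
public:
    void Add(std::function<void()> cb) { iQueue.push_back(std::move(cb)); }

    // Runs all pending callbacks and returns how many were executed.
    int Drain() {
        int n = 0;
        while (!iQueue.empty()) {
            std::function<void()> cb = std::move(iQueue.front());
            iQueue.pop_front();
            cb();
            ++n;
        }
        return n;
    }
private:
    std::deque<std::function<void()>> iQueue;
};
```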
Clients of a master channel request transactions for execution in either a synchronous or asynchronous manner. This results in the transaction being entered in the channel's request queue.
If synchronous execution is requested, the client thread is blocked until the transaction completes (either an ISR or a polling mechanism in the SHAI implementation layer recognizes that the transaction has completed, and it calls the PIL method
If asynchronous execution is requested, the client thread is not blocked, and may continue to perform other tasks. Meanwhile, the channel thread will retrieve the next transaction from the request queue and pass it to the SHAI implementation layer for processing. When the transaction completes, the SHAI implementation layer calls the PIL method
A client of a slave channel provides receive and transmit buffers to be used and specifies the types of event (trigger) that it wishes to be notified of. When any such event occurs, the SHAI implementation layer should call the PIL method
You should refer to the template port at
The MMC specification is published by the
Communication is based on a 7-pin serial bus designed to operate in a low voltage range (2.0–3.6 V). The MultiMediaCard specification defines a communication protocol referred to as MMC mode (0–20 MHz). In addition, for compatibility with existing controllers, MultiMediaCards may offer an alternative communication protocol which is based on the SPI standard (0–5 MHz).
The following features of the MultiMediaCard System Specification are not provided in the Symbian platform MultiMediaCard controller:
Stream read and write operations are not supported.
SPI bus mode is not supported.
The MultiMediaCard System Specification includes a feature where a MultiMediaCard can be put into a disconnected state. This can be performed on a card which is currently in the programming state, i.e. writing data to the disk. Placing that card into the disconnected state involves placing another card into the transfer state, which means giving the other card the current data transfer focus. A disconnected card will not terminate the programming operation and, if re-selected, moves back to the programming state. The Symbian platform MultiMediaCard controller does not support this mode of operation even if the underlying hardware interface does.
There are two types of user that are interested in DMA: those that want to use it to transfer data, and those that need to implement it for their hardware.
Device driver developers use DMA to reduce the burden on the main CPU. They open a channel and then use that channel to queue one or more DMA requests.
Hardware implementers, usually device creators, implement the DMA Platform Specific Layer (PSL) so that it will work with the particular DMA controller on a device.
Some devices will have more than one DMA controller, which means the DMA channel manager may require the hardware implementer to provide a means of identifying whether a particular DMA channel is of the correct type and on the appropriate controller.
The diagram represents the different classes related to DMA. The classes in green provide the Client Interface, which allows users to request data transfer; the classes in blue implement the platform specific layer; and the classes in white are the abstract classes.
Device driver writers, including the developers of physical device drivers and kernel extension writers, use the client interface of the DMA platform service.
DMA technology is described in the
The concepts of device drivers are described in the
The Client Interface is explained in the
If you are a device creator or are adapting the DMA Framework for your DMA controller, you must implement the Platform Specific Layer.
The hardware interface is explained in the
The hardware specific functions are implemented in the platform specific layer and the implementation is explained in the
Testing the implementation is described in the
Under normal circumstances this ought not to happen, but when the kernel faults, the device enters the kernel debug monitor.
There may be circumstances where you need to force a kernel crash, for example, if the system is locking up. Running the test program
For example, when the system locks up under certain conditions, run "crash 60", and then recreate the conditions that lead to the lockup. After 60 seconds, the kernel crash is forced and the debug monitor is entered.
Notes:
The EKA2 debug monitor is very similar to the EKA1 version, although the details displayed may be different.
You will occasionally find references to the crash debugger; this is the same as the debug monitor.
When the kernel faults, the device enters the debug monitor.
To make use of the debug monitor, do the following:
Plug the mains adaptor into the DC jack.
Connect the target device COM port to your PC, and set the PC serial port to 115200 baud, 8 bits, no parity, 1 stop bit, XON/XOFF flow control.
Press the ON key on the target device.
Start a terminal program on the PC (e.g. HyperTerminal).
Press
Enter the password "replacement" (all lower case, but without the quotes) on the PC. The target device should now reply:
You can now enter debug monitor commands.
Commands consist of a single letter describing the operation to be performed, followed by any arguments. Not all commands take arguments. Commands are case sensitive; the majority are lower case. Commands should be entered at the command prompt, on the PC. The set of supported commands is as follows:
This command displays information about the kernel fault that caused the debugger to be entered. The information has the following format.
Notes:
R15 is the program counter.
R14 is the link register.
R13 is the stack pointer.
This command dumps memory in both hexadecimal and ASCII format. Use one of the following command formats:
Address parameters are always virtual addresses (the MMU is still on).
The resulting format is similar to the EKA1 format.
For example:
If an illegal memory access occurs, the debugger traps the exception and displays an error message.
This command dumps memory in both hexadecimal and ASCII format, but excludes any unmapped memory space. If an illegal memory access occurs, it does not stop, but skips to the next page instead. This is useful for inspecting the content of discontiguous chunks.
The syntax and the display format are the same as for the
This command displays information for the current process and thread.
The format for the thread is:
The format for the process is:
This command in lower case displays basic information about the
where
For example:
This command in upper case displays full information about the
where
This command in lower case displays basic information about one or more code segments, as encapsulated by
where:
For example:
This command in upper case displays the full information about one or more code segments, as encapsulated by
where:
For example:
This command in lower case displays the contents of one of the kernel's object containers, a
The command has the following syntax:
where
For example:
The information displayed for each object is the same as that shown after using the
Notes
the
the type value passed as an argument to the command is one of the enum values of the
This command in upper case is exactly the same as the lower case
This command dumps the full ARM register set.
On ARM this dumps the full set of user mode registers and all the alternate registers for other modes.
For example:
This command, in upper case, dumps both the user and supervisor stacks used by each thread in the system. Some threads do not have a user stack, in which case this is indicated. Each set of stacks is displayed in turn, in the following format:
This command, in lower case, leaves the debugger and does a cold restart of the current ROM image.
This command, in upper case, leaves the debugger, and returns to the bootloader to wait for a new ROM image to be downloaded.
Displays a short summary of the crash debugger commands.
Direct Memory Access (DMA) channels allow you to copy data between memory locations, or between memory and peripherals, while minimizing the use of the CPU. The transfer operation is performed in hardware by the DMA controller. The purpose of the DMA platform service is to provide a common interface between hardware and device drivers which use DMA.
The framework is divided into a platform-independent layer and a platform-specific layer, with the DMA platform service being the interface between the two. The framework presents an API for use by device drivers and other client applications which transfer data by DMA. The PSL must be implemented by hardware developers to provide the same functionality on multiple platforms.
Before running this test, you must do the following:
Port the SDIO Controller for your platform (see
Build the test ROM.
Boot the device with the test ROM.
After you have ported the SDIO Controller on your platform, you can test it using the provided unit test application, SDIOTest.
The test application runs in a text shell and consists of two parts:
The test program,
The test driver,
The source code of the test application is in
You must build
To include the two test components in a ROM, specify the
The SDIOTest utility is not an automated test: it performs various individual operations which can be used to validate the behavior of the SDIO Controller. First, you request an operation by pressing the corresponding key on the command line. Then, you compare the resulting display with the data expected from the card.
The following steps provide an example of how to test your port.
The stack must report that no card is present.
The stack must report that a card is present.
The data returned by the stack must match the data sheet you have for the card.
The values must match the expected card data.
The values must match the expected card data.
The data returned by the stack must match the I/O specifications of the card.
Most device drivers need to use memory buffers for I/O operations. Drivers typically own memory buffers, allocate them, and access them through pointers. The thread should be in a critical section when it allocates or de-allocates memory.
Memory buffers must be freed when they are no longer required. If this is not done, a memory leak occurs.
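The critical-section rule can be illustrated with a RAII guard. In a real driver the guard would call NKern::ThreadEnterCS() and NKern::ThreadLeaveCS(); here a counter stands in for the kernel so the sketch is self-contained, and all names are hypothetical:

```cpp
// Stand-in for the kernel: tracks critical-section nesting so the
// pattern can be demonstrated without kernel headers.
struct FakeKern {
    static int csCount;
    static void EnterCS() { ++csCount; }
    static void LeaveCS() { --csCount; }
};
int FakeKern::csCount = 0;

// RAII guard: the thread enters a critical section for the lifetime
// of the guard, so it cannot be killed mid-allocation.
class CriticalSectionGuard {
public:
    CriticalSectionGuard()  { FakeKern::EnterCS(); }
    ~CriticalSectionGuard() { FakeKern::LeaveCS(); }
};

// Allocation wrapped in a critical section, as the text recommends.
char* AllocateBuffer(unsigned size) {
    CriticalSectionGuard guard;
    return new char[size];
}
```

The guard guarantees the section is left on every exit path, which is the main reason to prefer RAII over paired manual calls.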
Note: This is an optional step when creating a class driver.
Class drivers can allocate resources to endpoints. This allows the PDD PSL to manage DMA resources more efficiently and to enable and disable endpoint resources on particular interface settings.
Note: The functions
The USB Client PDD PSL must explicitly support the allocation of endpoint resources. The PDD will indicate if this is the case by setting the
This structure is passed to the LDD as part of
The successful outcome of the
Check that the driver supports endpoint resource allocation for DMA and double buffering by calling
Allocate resources for endpoints with
If more than one resource is to be specified then multiple calls to the function have to be made, each one selecting a single resource.
There is no direct and immediate feedback as to whether the resources were successfully allocated. The user can find out about the outcome by calling
A resource may be deallocated by the PDD if:
the host selects a different configuration (including zero) from the current one by sending a SET_CONFIGURATION request,
the host selects a different alternate interface setting from the current one by sending a SET_INTERFACE request,
the user calls
After you have allocated resources for the endpoints you should force a
KTRACE
During development, drivers can display information for debugging purposes. The basic API to use for this is
You can make the display of debugging messages conditional on whether certain flags have been set in the ROM. To do this, use the print command with the
This macro's first argument is a mask that must be enabled for the message to be displayed. The
The following are some of the mask definitions that can be found in
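The mask-gating idea can be sketched as follows. This is a hypothetical stand-in for the kernel trace macro, not the real __KTRACE_OPT definition; the mask names, the global mask variable, and the counter standing in for serial output are all invented for illustration:

```cpp
#include <cstdint>

// Illustrative global debug mask; in the real kernel the mask comes
// from flags set in the ROM.
std::uint32_t gDebugMask = 0;

// Hypothetical per-area mask bits.
const std::uint32_t KMyDriverTrace  = 1u << 0;
const std::uint32_t KMyVerboseTrace = 1u << 1;

// The statement runs only if its mask bit is enabled, mirroring the
// "first argument is a mask" behaviour described above.
#define MY_KTRACE(mask, stmt) \
    do { if (gDebugMask & (mask)) { stmt; } } while (0)

int gTraceCount = 0;  // stands in for actual debug output

void DoWork() {
    MY_KTRACE(KMyDriverTrace,  ++gTraceCount);
    MY_KTRACE(KMyVerboseTrace, ++gTraceCount);
}
```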
BTrace is a Kernel service that is designed to generate and capture trace information with minimal impact on a running system. It is useful for generating trace information in the Kernel and in drivers, for which fast tracing is required and general serial debug is not possible.
BTrace defines an API and macros to use in programs to create trace information in
The basic macros used for generating traces are:
The macros set category and sub-category values with their first two arguments. The category indicates the type of information being recorded, which is specified using a value from the enumeration
For longer traces that include a context ID, program counter values, and different filters, more macros are provided. Performance logging macros are also available which in turn use BTrace to
ROM demand paging is used when the files to be paged are in the core ROM image and not in another file system such as ROFS.
The following provide useful background information for demand paging using ROM demand paging:
Demand Paging
ROM paging
Demand paging (using ROM demand paging) provides the following features compared to
Lower RAM overhead.
Lower performance overhead.
The following are known limitations of demand paging (using ROM demand paging) compared to the other types of code paging:
If the executable has static dependencies, then it is best to place these dependencies in ROFS. This is a limitation of ROFS and not of ROM demand paging.
This paging system can only be used with files that are stored using the ROM filing system. This is because ROM images using the ROM filing system are designed to be executed in place.
The LCD Extension (or video driver) is implemented as a standard kernel extension and is loaded by the kernel at boot time.
The public interface to the LCD Extension is accessed through the User-Side Hardware Abstraction (HAL) interfaces
From Symbian^3 there is a new graphics architecture called
When ScreenPlay is not in use, the Graphics
The Text Window Server also implements a version of the Screen Driver for a simple console user interface.
The kernel-side framework manages the transition between power states on the device. A base port can implement a kernel extension to interface to the power management hardware that is provided on the phone. The framework also provides an interface for device drivers to use the power
The user-side interface to the framework is provided by the Power class in the User Library. This allows applications to be notified about, and to react to, changes in power state, and indeed to initiate changes to power state. The
In Symbian platform, an initial port of the system does not require the existence of a power model. In this sense it is optional, and its absence will result in default behaviour provided by Symbian platform. However, this is not likely to result in optimal power usage, so the development of a power model based on the framework provided by Symbian platform will be required for commercial devices.
The interfaces for power management on the kernel-side are shown in the following diagram:
The power manager manages the transition between power states.
The power controller is a kernel extension that manages the device-specific power management hardware such as an on-chip power management controller, oscillators etc. The power controller kernel extension is provided in the base port.
Power handlers are device drivers that interface to the power manager.
The following sections describe these concepts in more detail.
The power manager manages the transition between power states. It co-ordinates the change in the power state of peripherals and the CPU (hardware), and provides the necessary interactions with the device (peripheral) drivers and the power controller (software). It also manages interactions with the user-side model.
The power manager is an instance of the
The user-side uses the
The power manager object is the anchor point for the power controller object and the power handler objects.
The power controller manages the device-specific power management hardware such as an on-chip power management controller, oscillators etc.
The power controller is an instance of a
The power controller is responsible for tracking wake-up events, notifying the power manager of such events through a call to
Device drivers interface to the power manager through the
How a
The power manager calls
See the
The set of power sources on a device, how they are connected to peripherals, and their power states are totally dependent on the specific device. This means that Symbian platform cannot provide a comprehensive API for handling such sources.
However, Symbian platform does provide an interface,
Each (shared) power source is represented by a separate object.
The class, or class hierarchy, that represents the power source should implement the
Define a custom interface with the power controller. It is likely that the Variant will be responsible for providing the implementation for the power inputs, and for providing this custom interface.
Drivers that need power from a specific power source call
To represent a shared power source, a suggested implementation of
To write data from the device to the host (
The function
When sending buffers using this function, if a second buffer is ready to send before the first has finished transmitting then the second buffer can be sent to the LDD. The LDD will start sending the data in the second buffer when the first has finished. This reduces the delay between finishing one and transmitting the next. To do this the
Note: An offset is provided that allows a buffer to be split into parts. This allows you to fill one half while sending the other.
When the request has completed, if there is no more data to be sent then close the channel and unload the driver.
When you have finished reading and writing
This document explains how to implement a master channel on the SHAI implementation layer of IIC.
Intended Audience
Base porting engineers.
Introduction
IIC buses (a term used in this document to represent serial inter-chip buses that IIC can operate on) are a class of bus used to transmit non-time-critical data between components of a hardware system.
You should read and understand the template port at
To implement the SHAI implementation layer of an IIC channel as master you must write a class extending
The four functions you must implement are:
You must also provide the following functionality:
validation of input,
interrupt service routines, and
reporting on transmission status using the callback function
Your implementation of the SHAI implementation layer will need to call functions of the PIL. You are restricted to using a certain subset of these.
Implementing DoCreate()
call the PIL function
create the DFC queue for the driver and assign it to the driver by calling
initialize the hardware, setting up interrupts, and
initialize the other data structures representing the channel.
Implementing DoRequest()
Extract the transaction parameters, that is, the transaction header and transfers (
configure the interface with the configuration supplied in the transaction header,
set up the hardware to start the first transfer, that is
extract the first transfer for one direction by calling
and if the transfer is full duplex extract the first transfer for the second direction by calling
in each case filling hardware FIFO buffers, enabling interrupts and so on,
call the PIL function
instruct the hardware to start transmission, and
return either an error code or
The effect of calling
Implementing HandleSlaveTimeout()
If the timer started by the PIL method
stop the transmission,
disable the hardware, and
call the PIL function
Implementing CheckHdr()
Implement
check if the header of the transaction is valid.
Using the Platform Independent Layer
You can only use certain functions of the PIL. Most are used to access private data of the parent class. Others provide functionality which is generic to IIC buses. These are:
The functions accessing private members of
The functions accessing private members of
The actions needed in porting a Serial Port Driver are based on the experience of porting the template reference board port. Note however that the code shown here is idealized to show the basic principles; it is not an exact copy of the template port code.
In the template reference board port, the
The following diagram shows the general relationship between the various classes and structs that form the DMA Framework. The individual items are described in more detail below.
This is the
It defines the main interface between the platform independent and platform specific layers.
it is a container for channels, descriptors and descriptor headers
The channel manager is a
a function to open a DMA channel:
a function that is called when a channel is closed:
a function that can be used to extend the framework with platform specific functionality on a channel independent basis:
DMA controllers operating in scatter/gather mode are configured via a linked list of small data structures describing the data to be transferred. These data structures are called descriptors. (Note that the use of the term descriptor in the context of DMA should not be confused with the same term widely used in Symbian platform to refer to the family of
The Symbian platform DMA Framework always uses descriptor data structures to store transfer configuration information, even if the underlying DMA controller does not support scatter/gather mode.
The following example illustrates the idea: assume that a device driver needs to transfer two disjoint blocks of memory, A and B, into another block C. Block A starts at address 1000 and is 300 bytes long. Block B starts at address 2000 and is 700 bytes long. The destination buffer C starts at address 5000 and is 1000 bytes long. Assume that the DMA descriptors are allocated in a pool starting at address 600. The following diagram shows the scatter/gather list that the device driver might create:
If the DMA controller supports the scatter/gather arrangement, then the framework uses a structure that will be specific to the hardware. This structure is defined in the platform specific layer.
If the DMA controller does not support scatter/gather, then the framework uses the generic structure
a set of generic flags that characterise the transfer. For example, is the source of the data memory or a peripheral; is the destination memory or peripheral; what addressing mode is to be used?
the source and destination location. This is in the form of the base virtual address for a memory buffer, and a 32-bit value (or cookie) for peripherals. The meaning of the 32-bit value is interpreted by the platform specific layer.
the number of bytes to be transferred.
a word that only has meaning for the platform specific layer, and is passed by the client at request fragmentation time.
These are objects of type
A descriptor header allows additional information about a descriptor to be stored. The header is a separate object from the descriptor because it is difficult to embed additional data in a structure whose layout may be hardware-imposed.
Descriptors and descriptor headers are stored in two parallel arrays allocated at boot-time, and each descriptor is always associated with the header of the same index, so that there is always a one-to-one relationship between the header and the descriptor.
In the current design, the only information in the descriptor header is a pointer,
Descriptors are always accessed through their associated header.
The platform independent layer never directly accesses hardware-specific descriptors. It just passes descriptor headers to the platform specific layer. However, the platform independent layer does directly manipulate the generic descriptors,
A transfer request is the way in which a device driver sets up and initiates a DMA transfer, and is represented by a
A transfer request has a number of characteristics:
it is always associated with exactly one channel.
it stores a client-provided callback function, which is invoked when the whole request completes, whether successfully or not.
it can be in one of four states:
not configured
idle
being transferred
pending; this state only occurs in streaming mode.
Internally, a transfer request is represented as a singly linked list of descriptor headers. Each header in the list is associated with a descriptor that specifies how to transfer one fragment of the whole request. Transfer requests have pointers to the first and last headers in the list.
When the request is idle, the header list ends with a NULL pointer. This is not always true when the request is queued (i.e. when the request is being transferred or is still pending). The following diagram shows an idle request with three fragments.
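As a minimal sketch of this arrangement (all names here are invented; the real header and request classes belong to the DMA framework), an idle three-fragment request is simply a NULL-terminated singly linked list of headers:

```cpp
#include <cassert>

// Illustrative stand-ins for the framework's descriptor header and request.
struct SDmaDesHdr
    {
    SDmaDesHdr* iNext;   // next header in the request; NULL at the end when idle
    };

struct DDmaRequest
    {
    SDmaDesHdr* iFirstHdr;   // first header in the list
    SDmaDesHdr* iLastHdr;    // last header in the list
    };

// Count the fragments of an idle request by walking the NULL-terminated list.
inline int FragmentCount(const DDmaRequest& aReq)
    {
    int n = 0;
    for (const SDmaDesHdr* h = aReq.iFirstHdr; h; h = h->iNext)
        ++n;
    return n;
    }
```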
Splitting a request into fragments is useful because:
it insulates device drivers from the maximum transfer size supported by the underlying DMA controller.
the source and destination DMA buffers may not be physically contiguous and thus require fragmenting.
Both of these situations can be handled by using the generic fragmentation algorithm
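The core of any such fragmentation algorithm is splitting the byte count by the controller's maximum transfer size. The sketch below shows only that part, under the stated simplification; the real algorithm must also break fragments at physical page boundaries when buffers are not physically contiguous.

```cpp
#include <vector>
#include <cstdint>
#include <cassert>

// Split a transfer of aCount bytes into fragment sizes no larger than the
// controller's maximum transfer size. (Breaking at physical page boundaries,
// which the generic algorithm also does, is omitted here.)
std::vector<std::uint32_t> Fragment(std::uint32_t aCount, std::uint32_t aMaxTransferSize)
    {
    std::vector<std::uint32_t> sizes;
    while (aCount > 0)
        {
        const std::uint32_t n = aCount < aMaxTransferSize ? aCount : aMaxTransferSize;
        sizes.push_back(n);
        aCount -= n;
        }
    return sizes;
    }
```

For example, a 1000-byte request on a controller limited to 300-byte transfers becomes four fragments: 300, 300, 300 and 100 bytes.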
Some device drivers may have to create custom descriptor lists. For example, the USB section of the PXA250 manual describes how to build custom lists where a descriptor containing data transfer information is followed by another one poking an I/O port to issue a command to the USB controller.
To cover that case,
A channel is a
A channel can be in one of 4 states:
closed
open and idle
open and transferring data
suspended following an error.
On opening a channel, the client device driver specifies:
A 32-bit value (cookie) that is used by the platform specific layer to select which channel is to be opened.
The number of descriptors that must be reserved for this channel.
A DFC to be used by the framework to service DMA interrupts.
A channel maintains a queue of transfer requests. If the channel is being used for one-shot transfers, the queue will always contain an idle or a transferring request. In streaming mode, the queue may contain several requests, the first one being transferred and the remaining ones pending. When a request is completely transferred, it is removed from the queue. The first request is always the one being transferred.
A transferring channel has a pointer to the header associated with the descriptor being transferred. The headers of all queued requests are linked together on one linked list.
The following diagram shows a DMA channel with a three-fragment request being transferred and a two-fragment one pending. The fragment currently being transferred is the second one of the first request.
The
TDmaSbChannel State Machine
For reference purposes, the following diagram shows the state machine that
TDmaDbChannel State Machine
For reference purposes, the following diagram shows the state machine that
TDmaSgChannel State Machine
For reference purposes, the following diagram shows the state machine that
When a transfer request is queued onto a channel that is already transferring data, the header descriptor for the new list of descriptors is appended to the header of the previous request on the queue. If the underlying DMA controller supports hardware descriptors, they must also be linked together.
The following diagram shows how headers and descriptors for a new request are appended to the existing ones for a DMA controller. The dashed arrows represent the new links.
Note that hardware descriptors are linked together using physical addresses, not virtual ones.
Linking hardware descriptors together is implemented in the platform specific layer. There are two issues to consider:
The channel may go idle before the linking is complete, which means that the data for the new request would not be transferred.
If the channel is transferring the last descriptor in the list, then updating the “next” field in the descriptor will have no effect because the content of the descriptor will already have been loaded in the controller registers. Again the channel would go idle without transferring the data associated with the new request.
The solutions available when porting the DMA Framework are:
If the DMA controller has hardware support for appending descriptors while transferring data, then there is no problem.
If the peripheral attached to the channel can withstand a short disruption in the transfer flow, then the channel can be suspended while the appending takes place. This should be done with interrupts disabled to minimise disruption.
The alternative technique is to append the new list with interrupts disabled and set a flag. When the next interrupt occurs, if the flag is set and the channel idle, then the interrupt service routine must restart the transfer.
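The flag-based technique can be modelled in a few lines. This is a behavioural sketch only, with invented names: in a real port the "restart" step re-programs the controller with the newly linked descriptor list, and the append runs with interrupts disabled.

```cpp
#include <cassert>

// Behavioural model of the append-with-flag technique described above.
struct TChannelModel
    {
    bool iIdle = false;          // has the controller finished the old list?
    bool iListAppended = false;  // flag set when a new list was linked in
    bool iRestarted = false;     // did the ISR have to restart the transfer?
    };

// Called with interrupts disabled in a real driver: link the new
// descriptor list and record that it was appended.
inline void AppendDescriptorList(TChannelModel& aCh)
    {
    aCh.iListAppended = true;
    }

// Interrupt service routine: if the hardware went idle before it saw the
// newly linked descriptors, restart the transfer.
inline void Isr(TChannelModel& aCh)
    {
    if (aCh.iListAppended && aCh.iIdle)
        {
        aCh.iRestarted = true;       // re-program the controller with the new list
        aCh.iListAppended = false;
        aCh.iIdle = false;
        }
    }
```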
See
The platform specific layer must notify the platform independent layer when the following events occur:
when an error occurs.
when each fragment completes, for single-buffer and double-buffer controllers.
when the last fragment of a request completes, for scatter/gather controllers.
The ISR, as implemented in the platform specific layer, must:
determine which channel the interrupt is for. If the DMA controller uses dedicated interrupt lines per channel, the ASSP/variant interrupt dispatcher will do this. See
determine whether the interrupt was asserted following a successful transfer completion or a failure. If the DMA controller uses different interrupt lines for completion and failure, the ASSP/variant interrupt dispatcher will do this.
Call
The DFC updates the state of the channel and, for single and double buffer DMA controllers, configures the next fragment to transfer, if any. When all fragments making up a request have been transferred, the DFC calls the client-supplied callback associated with the completed request. The callback is also called if an error occurs during transfer.
For a given device driver,
Each
The
Some header files that the bootstrap uses are shared with the Kernel and are written in C++. The header files are shared to avoid duplicate definitions of, for example, super page layout in the kernel and in the bootstrap. Since the bootstrap is written in assembler, the C++ header files must be translated to assembler-syntax include files.
This is done automatically under the control of the generic makefile. The translation tool can produce include files in ARMASM, GNU AS or Turbo Assembler (for X86) syntax; the examples will all be in ARMASM syntax.
It should be noted that the translation tool does not process
The following elements of C++ header files are translated:
Constants at file scope have the same name in the assembler include file and constants at class scope have the name
For example:
translates to:
For example:
translates to:
For example:
translates to:
The offset of a class member within the class is given the assembler name
When computing these offsets and sizes the following basic types are known to the translation tool:
The tool will produce an error if an attempt is made to use a
Asynchronous requests are commonly initiated by a call to
The DMA controller chipset has multiple DMA channels that can be configured for different DMA transfers. Drivers generally initialize and open the required DMA channel and use this channel to do the DMA transfers.
For some peripherals, such as the Camera and the Display controller, there can be dedicated DMA channels that cannot be configured for DMA transfers by other peripherals.
To configure the channels and other features of the chipset, you may need to write code. This code should be executed prior to the DMA channels being used for the first time, or to configure dynamically-allocated channels.
This requires chipset specific code which is beyond the scope of this documentation. Please refer to your chipset's datasheet and any accompanying documentation for details on how to perform any required chipset configuration.
The Time platform service specifies functions to get and set the current time.
To implement the Time platform service, you implement the functions
The function
The platform now has a function to set the current time in seconds.
The function
The current state of the real time clock is a
You have now implemented the Time platform service in hardware.
The
A sample configuration file is available in the template. The configuration file looks like the one below:
The keyboard driver's job is to interpret keyboard presses and to generate key-up and key-down events. How the keyboard driver deals with the hardware depends to a great extent on that hardware, and also on whether the hardware generates interrupts as a result of key presses or whether the driver needs to poll the hardware at regular intervals.
The template Base port, which can be found at
Whichever type is used, the end result is the addition of a stream of events onto the kernel's event queue. Events are represented by
The general pattern, taken from the template keyboard driver, is as follows:
More generally,
The kernel event queue is a mechanism that allows this to happen. It is internal to Symbian platform, but to help you understand what happens, it works like this:
The kernel maintains a circular buffer of
As part of its initialization, the Window Server calls the internal function
A call to
The Window Server uses an active object to wait for and subsequently dispatch the handling of events (and most importantly, key press events). It makes a request for further events by a call to the internal function
There are two services that the Window Server needs when dealing with key presses:
it needs a translation of a (hardware) scancode to a (Symbian platform or logical) keycode.
it needs to know whether a key and combination of modifier key states is a "hot-key", i.e. one that is intended to be sent to a specific window group, instead of the window group that currently has focus. Note that, in this context, a "hot-key" is more commonly referred to as a capture key.
To perform the translation, it creates an instance of a
To deal with capture keys, it creates an instance of a
Both classes are implemented in
[Note that CKeyTranslator and CCaptureKeys are internal to Symbian platform and are not part of the public interface. This is an attempt to provide an outline understanding of the mechanisms involved.]
The Window Server also decides on the name of the key mapping tables DLL to be loaded. By default, the name of this DLL is
is 99, then the DLL loaded is
The loading of this DLL is done by a member of
On receipt of key-up and key-down events, the Window Server calls
Before the Window Server can be told that a capture key has been pressed, it must previously have registered a pair of items:
the capture key itself, i.e. the (logical) keycode and combination of modifier key states.
a handle to a window group.
Registration is simply the act of adding this information into the
The Sound Driver PDD must provide a factory class to create channels. All PDD factory classes are derived from
The PDD factory class provided by the template Sound Driver creates a new DFC queue and associated kernel thread,
The class
Ensure that the inherited data member
This is the second stage constructor for the PDD factory class. The template version creates the DFC queue and sets the driver name.
The physical device is identified by the driver name; alter this to reflect your device. The driver name is the same as the LDD name, but followed by a dot and a short string to represent the physical device. For example, the name used for template is "
This function is called by the kernel device driver framework to see whether this PDD is suitable for use with a particular driver channel. Ensure that the unit number checking code is correct for the unit numbers that are to be supported. The default version enables the PDD to open both the first playback driver channel number
This function is called by the kernel device driver framework to create a PDD object. Ensure that the unit number checking code is correct for the unit numbers that are to be supported. For configurations supporting both playback and record, it must be capable of creating either a playback or record PDD object according to the channel number specified. The template version is implemented as follows:
Off - a state where the device and all peripherals are powered off or inactive, or are characterised by negligible power consumption due to current leakage to the electric and electronic circuits that make up the system. This state is achieved as a result of controlled system shutdown resulting from a user action, an application request, UI inactivity, or as a result of accidental loss of power. This may also be achieved as a result of putting the system into a hibernation state. Note that a reboot is necessary to return the system to the Active state; this could be a cold reboot, or a warm reboot if the system was put into a hibernation state.
Standby - a low power consuming state that results from turning off most system resources (clocks, voltages), peripherals, memory banks (where possible), CPU and internal logic, while still retaining the state prior to the transition. Typically, the only systems that are active are those that are required to detect the events that force the transition back to the Active state (e.g. RTC, clocks and peripherals involved in detecting hardware events). Returning to the Active state will normally take a far shorter period of time than that required to reboot the system. This state is achieved as a result of user action or application request.
Active - the fully active state.
The three power states are defined by the enum values of the
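For illustration, the three states and the "reboot needed from Off" rule described above can be modelled as follows. The enumerator names here simply mirror the description; the real enum and its values are defined by the kernel's power framework headers.

```cpp
#include <cassert>

// Illustrative model of the three system power states.
enum TPowerState
    {
    EPwActive,    // the fully active state
    EPwStandby,   // low power, state retained; can resume directly to Active
    EPwOff        // powered off or hibernating; a reboot is required
    };

// Standby preserves state prior to the transition, so the system can return
// directly to Active; from Off a (cold or warm) reboot is required first.
inline bool CanResumeDirectly(TPowerState aState)
    {
    return aState == EPwStandby;
    }
```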
A HAL handler gets or sets hardware-specific settings, for example, the display contrast.
A HAL handler is normally implemented in the kernel extension or a driver that provides the hardware-specific functionality.
The easiest way to see the general pattern for implementing HAL handlers is to look at a real example. We will use the screen (i.e. video or LCD) driver for the template reference board, which is implemented in
The HAL handler function has a signature that is defined by the
For example:
Before the handler can do anything, the extension or driver must register its handler for a specific HAL group. It does so by calling:
where:
Nearly all the functionality of the template screen driver is implemented in a single class, the LCD power handler object; its
Note that the third parameter is, in general, an optional pointer. It is a pointer to the current object, i.e. the
Using the template screen driver as an example, the driver provides the HAL handler for dealing with information related to the display HAL group,
This is a stand-alone function. The first parameter
Whether you use this kind of technique depends on the way your drivers are implemented, but it is a pattern that is also used by the digitiser and the keyboard driver, as well as by the Symbian implemented HAL handlers.
The other parameters passed to the HAL handler function, i.e.
It's useful to note that a single HAL handler may end up being called as a result of calls to different accessory functions. For example,
To further distinguish between the different characteristics represented by a group, each group has an associated set of function-ids. The function-id is the second parameter passed to the HAL handler function, and the most common pattern for an implementation is to code a simple switch statement based on this value. For example, the function-ids associated with the
If the HAL handler function does not deal with a specific function-id, then it returns
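The switch-statement pattern can be sketched as below. The function-ids, parameter list and return values here are invented stand-ins; only the shape (switch on the function-id, fall through to `KErrNotSupported`, whose value on Symbian platform is -5) reflects the pattern described above.

```cpp
#include <cassert>

const int KErrNone = 0;
const int KErrNotSupported = -5;   // Symbian platform error code

// Hypothetical function-ids for a display HAL group.
enum THalFunctionId
    {
    EDisplayColors   = 0,
    EDisplayContrast = 1,
    };

// Simplified HAL handler: switch on the function-id and write the result
// through the output parameter; anything unhandled is "not supported".
int HalFunction(int aFunction, int* a1)
    {
    switch (aFunction)
        {
        case EDisplayColors:
            *a1 = 65536;           // hypothetical colour depth
            return KErrNone;
        case EDisplayContrast:
            *a1 = 50;              // hypothetical contrast setting
            return KErrNone;
        default:
            return KErrNotSupported;
        }
    }
```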
The meaning of the parameters
Dealing with the HAL::GetAll() function
Calls that come through the
For example, using a
The information cannot be retrieved by the
When the HAL handler is expecting a value that specifies a particular mode or setting but receives a -1, the HAL handler must return the error code
See
The HAL handler itself is only called by the kernel, and is the end result of a sequence that starts either with a call to the user interface functions provided by the HAL class (
The handler runs on the kernel side in the context of the calling thread with no kernel mutexes or fast mutexes held; the calling thread is not placed into a critical section. It is the responsibility of the handler to do any thread synchronisation required. The handler returns a
If you are providing or porting a HAL handler, you need to be aware of platform security issues. In principle, for each call into the HAL handler, you need to check the capabilities of the caller's process.
Recall that
For example, the screen (i.e. video or LCD) driver must check that the caller has the
To find the capabilities associated with function-ids, see the
This is a generic description of how to use SDIO in a kernel-side device driver. Specifics differ according to hardware and card function.
SDIO is an input/output protocol originally introduced to enable a device to communicate with SDIO (Secure Digital Input/Output) cards. It can also be used as input/output for media such as Bluetooth adapters and GPS receivers, and for input/output within a device (for instance to talk to an internal bus). These different uses of SDIO are called functions.
Symbian platform implements SDIO as part of a larger framework involving SD cards, which are a type of MMC (multimedia) card. For this reason, to use SDIO in a device driver you will need to use classes representing SD and MMC cards and the associated communications stack, even if you only want the basic I/O functionality. These classes are:
The work of data transfer is performed by reading from and writing to registers in response to interrupts, and sometimes the read and write operations are performed in sessions. The classes used are these:
This document illustrates the use of these classes to create a driver, with example code fragments in which the driver being created is called
The first step in writing a driver using SDIO is thus to create socket, stack and card objects.
The functions supported by SDIO are represented by the enumeration
Kernel polling means the use of the
To initialize the card you must power it up and test whether it is ready for use. The following code uses kernel polling to perform the test 10 times at 100 millisecond intervals.
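The polling pattern itself is simple: call a condition function up to a maximum number of times, waiting the poll period between attempts, and report success or timeout. The sketch below shows only that logic; the helper's name and signature are illustrative, the actual wait is elided (a real driver would sleep in the kernel for the poll period), and the error values shown are Symbian platform's `KErrNone` (0) and `KErrTimedOut` (-33).

```cpp
#include <functional>
#include <cassert>

const int KErrNone = 0;
const int KErrTimedOut = -33;   // Symbian platform timeout error code

// Poll aPoll up to aMaxPoll times, nominally waiting aPollPeriodMs between
// attempts; returns KErrNone on success, KErrTimedOut if it never succeeds.
int PollingWait(const std::function<bool()>& aPoll, int aPollPeriodMs, int aMaxPoll)
    {
    for (int i = 0; i < aMaxPoll; ++i)
        {
        if (aPoll())
            return KErrNone;
        (void)aPollPeriodMs;    // a real implementation sleeps here
        }
    return KErrTimedOut;
    }
```

For the card-ready test described above, the driver would poll a "card is ready" condition with a 100 ms period and a maximum of 10 attempts.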
You may also want to test that the card is an SDIO card.
Next you locate and enable the function of the card (Bluetooth, UART etc.) which you want to use. A function has a location which differs from card to card and which is stored in a data structure called the CIS (Card Information Structure). To use a card's CIS you load it using the
The following code is a simple test to determine whether the passed-in function type is present on the card.
Once you have the location of a function, you register the client driver with the stack and enable the function on the card.
SDIO cards place data to be read or written in a register whose address depends on the function and is defined in the relevant specification available at
Data can be transferred:
as individual bytes,
as the contents of byte buffers (both directly and using shared chunks), and
by setting bits in the register.
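These three styles can be sketched against a plain variable standing in for the memory-mapped function register; the helper names here are invented, not the stack's real API.

```cpp
#include <cstdint>
#include <cassert>

// Byte write and read on a function register (modelled as a plain variable).
inline void Write8(std::uint8_t& aReg, std::uint8_t aValue)  { aReg = aValue; }
inline std::uint8_t Read8(const std::uint8_t& aReg)          { return aReg; }

// Read-modify-write of individual bits: clear the bits in aClearMask,
// then set the bits in aSetMask.
inline void Modify8(std::uint8_t& aReg, std::uint8_t aClearMask, std::uint8_t aSetMask)
    {
    aReg = static_cast<std::uint8_t>((aReg & ~aClearMask) | aSetMask);
    }
```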
The following code uses the
This code demonstrates the use of the
When large amounts of data are being transferred it is good practice to use a shared chunk. You call the same functions as before with the chunk as additional argument. The following example writes 1024 bytes with an offset of zero and reads 1024 bytes with an offset of 1024.
The advantages of shared chunks are that they can be used from user space and reduce copying overhead.
The following code example shows how to set and clear bits in a register using the
Another advantage of shared chunks is that they make it possible to send commands to the card while a multibyte data transfer is taking place. Not all cards support this functionality: you can check whether a particular card does so by reading the SDC bit in the Card Capability register. To do this, you need to create a second register interface to write the commands in the form of one byte transfers. This second interface (or 'second session') must run in a different thread from the interface performing the multibyte transfer, and it is created in a different way, as in this code:
Cards generate interrupts which control the process of data transfer: for instance the arrival of data on the function of a card generates an interrupt which serves as a signal that a read operation should take place. There are two levels of interrupts. A card level interrupt indicates which function has triggered it, and a separate function level interrupt gives information specific to that function, which is covered in the documentation for a particular card.
You must provide ISRs (interrupt service routines) to handle interrupts. ISRs typically queue DFCs to perform the required actions, such as read and write operations on the card, since ISRs must complete very quickly to maintain real-time guarantees, whereas data transfer can wait for the current time slice to complete. The exact functionality of the DFCs will vary depending on the nature of the function and the hardware. Two things you must do in any driver implementation are enabling and disabling interrupts and binding the ISRs to the stack. Also, when an interrupt is triggered you must read the interrupt register to make sure that the interrupt is applicable, and then clear the function level interrupt and re-enable interrupts.
You enable card level interrupts as in this example code:
How you enable function level interrupts is described in the documentation for a particular card.
You bind ISRs to the stack as in the following code fragment.
Register callbacks to be notified of events. This mechanism is standard for handling the removal of a card and powering down.
The SDIO stack can notify clients of events they must react to, such as power down or the removal of the card. Events of this kind are listed in the enumeration
You respond to notifications with callback functions which you write to provide appropriate functionality. Use the callback to initialize a
You typically use notifications and callbacks to perform cleanup before a pending power down event. The socket class has functions to postpone power down at the start of cleanup and to resume power down when cleanup is finished. They are used like this.
A reference implementation using Bluetooth is described in
The only way to kill a kernel side thread is for code running in that thread to call
A kernel side thread can kill itself, but cannot kill any other kernel side thread. This avoids the overhead of critical sections. Remember that a kernel side thread can kill a user side thread.
In practice, device drivers will create threads that can run queued DFCs (Deferred Function Calls) only if they choose not to use the two supplied DFC threads, known as DFC thread 0 and DFC thread 1.
In principle the only way to kill a thread that runs queued DFCs is to schedule a DFC that simply calls
In practice you create a 'kill' DFC by creating a
You need to make sure that no other DFCs are on that DFC queue at the time that the 'kill' DFC runs, as there is no automatic cleanup.
Perform cleanup by calling
You need to make sure that the DFC queue object, i.e. the
You can do this by making both the DFC queue object and the 'kill' DFC object static objects within the driver. The device driver will only be unloaded from RAM by the null thread. By queuing the 'kill' DFC on the DFC thread, you mark it ready-to-run, which means that your 'kill' DFC will run before the driver is unloaded and before the DFC queue object and the 'kill' DFC object vanish.
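The essential behaviour of the 'kill' DFC pattern can be modelled with an ordinary work queue. This is a simulation, not kernel code: in the real driver the kill function terminates the DFC thread, and the queue and kill objects are the framework's DFC classes rather than the stand-ins used here.

```cpp
#include <queue>
#include <functional>
#include <cassert>

// Stand-ins for the DFC queue and its thread's running state.
std::queue<std::function<void()>> gDfcQueue;
bool gThreadRunning = true;

// The 'kill' DFC: in a real driver this terminates the DFC thread;
// here it simply stops the processing loop.
void KillDfcFn()
    {
    gThreadRunning = false;
    }

// Drain the queue the way the DFC thread would. Note that nothing queued
// after the kill DFC would ever run, which is why the queue must contain
// no other DFCs when the kill DFC is scheduled.
void RunDfcThread()
    {
    while (gThreadRunning && !gDfcQueue.empty())
        {
        std::function<void()> dfc = gDfcQueue.front();
        gDfcQueue.pop();
        dfc();
        }
    }
```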
You need to make sure that any code queued to run on your DFC thread never hangs.
The important point is that
It may be that you have to consider writing defensive style code; for example, if your thread is waiting for an event, you could consider using a timer to wake it up in case the expected event never happens. Alternatively, you could consider moving some processing to the user side.
Device drivers often need to share data between user space and kernel space threads. Though there are APIs for doing this, such as
To avoid unnecessary data transfer, Symbian platform provides shared chunks, which are similar to shared memory in other operating systems. Shared chunks can be created and used by both a driver and user-side code directly. An example application for shared chunks is a camera driver. Without shared chunks, the image data would have to be copied from the camera driver to the user process, which would then copy it to the display driver. This would have a high memory copy overhead. Using shared chunks instead would improve the performance of the camera view finder.
A shared chunk is created and controlled by the kernel side code, for example, the device driver, rather than the user code. This memory is safe to be used by ISRs and DMA. A shared chunk can be mapped into multiple user processes in sequence or at the same time. The memory object can be transferred to another process or another device driver. Multiple drivers and multiple user processes can have access to the same shared chunk.
The EKA1 versions of Symbian platform had a mechanism called shared I/O buffers. These are now deprecated, and shared chunks should be used instead. The disadvantages of a shared I/O buffer are that only one user process can access the buffer at a time, and that it does not have a user handle: the user process is only supplied with the address and size of the buffer. Shared chunks solve these issues, and give users more flexibility to perform safe operations.
This guide uses the port for the template board as an example. Other porting example code is also available.
The sound driver provides the conduit by which Symbian platform can access the audio hardware and hence create sounds: for example multi-media applications and ring tones.
Since audio hardware is rather complex, the reader of this document needs to know what is meant by:
Audio channels
Codec
Stands for COmpressor-DECompressor and refers to the algorithm used +to compress and decompress the audio data stream.
The LDD for the sound driver sits below the
The communication between the driver and the application on the user-side is handled by class
The sound driver is used whenever audio operations are required: for example an MP3 player, a phone call, or ring tones.
MP3 player
Multi-media
The
The function takes the
At its simplest, the function could be used to decide whether interrupts generate an IRQ or FIQ at the ARM. For hardware that supports interrupt priority in hardware, priorities can be modified through this function. If priority adjustment is not supported, or will not be implemented,
The client calls
The LDD breaks the audio request into manageable fragments and transfers them to the PDD by calling
When the PDD has transmitted the fragment it calls
To ensure uninterrupted playback, a client must have multiple play requests pending on the driver. As soon as one request completes, the client issues a further request until the end of the track. Typically, a client issues a series of calls to
Each
The LDD may need to break the play request down into fragments to support the maximum amount of data that the PDD can handle in a single transfer. The function
To support uninterrupted transfer of audio data, the PDD must be able to accept at least two transfer fragments simultaneously: the first being actively transferred by the audio hardware device, the other being queued for transfer on the same device. Therefore, as long as the LDD has transfer fragments still to queue, it continues to call
If the PDD accepts all the fragments for this initial play request then the LDD moves on to process the subsequent requests from the client. These are also fragmented until either the PDD reaches its capacity or all pending play requests are queued.
Each time the PDD completes the transfer of a fragment from a play buffer, it must signal this event back to the LDD by calling the function
In executing
If, on completing a request from the client, the LDD discovers that there are no further play requests pending from the client, then this is treated as an underflow situation and
When the audio request has been completed the LDD calls
The client may temporarily halt the progress of audio playback at any time by issuing
The client requests resumption of playback by calling
Since access to the hardware is required in both cases, pause and resume are handled in the context of the driver DFC thread.
If the PDD reports an error when setting up the audio device for playback then the LDD immediately completes the first play request back to the client, returning the error value as the result. For example, if the PDD returns a value other than
If the PDD reports an error when commencing transfer or as the result of the transfer of a playback fragment, then the LDD ceases transfer of the associated request and instead immediately completes the request back to the client, returning the error value as the result.
Unexpected errors from the PDD are returned to the LDD via the functions
The LDD does not cancel the transfer of other fragments for the same request which are already queued on the PDD, but it ignores their outcome. However, the LDD does try to carry on with the transfer of subsequent play requests queued by the client.
In any of the above situations, the client may choose to terminate playback operation by cancelling all outstanding play requests when it detects a playback error.
Writable Data Paging (WDP) allows writable memory, such as stacks and heaps, to be paged in and out of the RAM pool.
The output of this tutorial will be a configuration that allows the ROM build to use demand paging.
Below is a typical OBY file that uses data paging, XIP ROM and code paging:
The OBY file that determines the start of the primary ROFS partition, for example base.iby, would then be adjusted thus:
This tutorial only covers the configuration of the general demand paging parameters. To see how to make individual executables pageable, see
The next step is to
Typically, they are used by device drivers to communicate between a client thread, usually a user thread, and a kernel thread running the actual device driver code.
The mechanism consists of a message, containing data, and a queue that is associated with a DFC. The DFC runs in order to deal with each message.
A kernel-side message is represented by a
Both
The message queue is represented by a
The following shows, in simple terms, the relationship between messages and the message queues:
When a message is sent to the queue, either:
the message is accepted immediately, and the receiving thread’s DFC runs. This will happen if the message queue is ready to receive, which will be the case if the message queue is empty and the receiving thread has requested the next message.
or
the message is placed on the delivered message queue, and the DFC does not run. This will happen if there are other messages queued ahead of this one or if the receiving thread has not (yet) requested another message.
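The two delivery outcomes can be modelled in a few lines. This is a toy simulation with invented names, not the kernel's implementation: a message is accepted immediately only when the queue is empty and the receiver has asked for the next message; otherwise it sits on the delivered queue until the receiver asks again.

```cpp
#include <deque>
#include <cassert>

// The three message states described below.
enum TMsgState { EFree, EDelivered, EAccepted };

// Toy model of a kernel-side message queue's delivery rules.
struct TMsgQueueModel
    {
    std::deque<TMsgState*> iDelivered;   // messages delivered but not yet accepted
    bool iReady = false;                 // receiver has requested the next message

    void Send(TMsgState& aMsg)
        {
        if (iReady && iDelivered.empty())
            {
            aMsg = EAccepted;            // the receiving thread's DFC would run now
            iReady = false;
            }
        else
            {
            aMsg = EDelivered;           // queued behind earlier messages
            iDelivered.push_back(&aMsg);
            }
        }

    void Receive()                       // receiver asks for the next message
        {
        if (!iDelivered.empty())
            {
            *iDelivered.front() = EAccepted;
            iDelivered.pop_front();
            }
        else
            iReady = true;
        }
    };
```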
A kernel-side message may be in one of three states at any time:
FREE - represented by the
DELIVERED - represented by the
ACCEPTED - represented by the
Transitions between these states, including adding the message to, and removing the message from, a message queue, occur under the protection of the global
Kernel-side messages may be sent either synchronously or asynchronously.
Each
The
There are three stages involved in testing a port of the MultiMediaCard controller. The first stage tests the controller in isolation; subsequent stages test an additional area of functionality.
All three stages involve text shell based test programs and test drivers, and therefore these tests can be run by just building a text shell ROM. You can also run tests on the Emulator.
This is not a 'go/no-go' type test, but a test utility that can perform a small variety of operations. These are selected in a simple interactive way by pressing appropriate keys.
This is not a 'go/no-go' type test, but a test utility that can perform a small variety of operations. These are selected in a simple interactive way by pressing appropriate keys. It is used to test that card initialization and single block read and write requests are performed successfully.
This is a 'go/no-go' type test. If this test succeeds, you can have fairly high confidence that the F32 test suite will run successfully.
The final test stage is to run the entire file server test suite on a card drive.
To perform these tests:
build a text shell ROM that contains the F32 test suite, i.e. build the ROM with the option:
boot the test machine into the text shell.
launch the F32 automatic tests and check that all of the tests run without error. Use the command option "-d" to set the default drive, for example:
It is possible to emulate a user opening and closing the MMC drive door, replacing and removing the MMC card, and assigning a password to an emulated MMC card. This is important so that you can test:
how an application behaves when the MMC card, from which the application is running, is removed and/or swapped
how an application behaves when it is reading and writing data to and from an MMC card
that data stored on an MMC card is not lost or corrupted during MMC drive events
that MMC card locking and password notification works correctly.
The platform independent layer of the MultiMediaCard controller, as its name implies, is common to all platforms, including the emulator. However, the emulator provides its own implementation of the platform specific layer.
The emulator does not provide access to any kind of MultiMediaCard hardware. Instead, the memory area of each emulated card is represented by a file, a
This means that all the test programs described can be run on the emulator, and they will exercise the platform independent layer code in the same way as on real target hardware. It also means that entries to, and exits from, platform specific layer code can be traced, which may be useful for debugging problems on real hardware and for understanding how the controller operates.
The code for the emulator specific layer can be found in
The configuration descriptor contains values required by the class drivers. The settings are:
the number of interfaces in the configuration,
the maximum power taken by the device,
whether remote wakeup is enabled.
Note: the number of interfaces in the configuration must be set; all the other settings are optional.
The size of the data to be copied may be obtained by using
The size should normally be
Set the configuration descriptor using
Additional values may be set using the
Manufacturer,
Product,
Serial number.
After you have set the configuration descriptors you should
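As a rough illustration of the settings listed above, the following sketch models the descriptor as a plain struct. The type and field names are hypothetical and the real driver API packages the descriptor differently; the point is only that the interface count is mandatory while the other settings are optional.

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

// Illustrative model (hypothetical names, not the real driver API) of the
// configuration-descriptor settings: only the number of interfaces is
// mandatory; maximum power and remote wakeup are optional.
struct ConfigDescriptor {
    std::optional<uint8_t> numInterfaces;       // mandatory setting
    std::optional<uint16_t> maxPowerMilliAmps;  // optional setting
    bool remoteWakeup = false;                  // optional setting

    bool IsValid() const { return numInterfaces.has_value(); }
};
```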
This topic provides a summary of related documentation for the +Keyboard Driver to which you can refer.
Kernel Architecture 2
No specifications are published.
If an XIP ROM image is stored on XIP media such as NOR flash, it does not have to be copied into main memory before it can be executed. If it is stored on non-XIP media, such as NAND flash, then it must be copied into main memory before it can be executed. The entire XIP image can run to many megabytes, and therefore use a lot of main memory. If ROM paging is used, then only those sections of the XIP image which are required are loaded into memory. Additional sections are loaded as and when they are needed, and sections which have not been used for some time can be discarded. The overall effect is to leave more memory available for applications.
If the ROM image is in a non-XIP ROM, it has to be loaded into RAM before it can be executed. The ROM image is split into two parts:
Paged
Unpaged
The unpaged part of the ROM image is loaded by the bootloader and is always present in RAM. The paged part of the ROM image is paged in on demand.
The type of paging used, and the area of memory to use first, are specified in the OBY and MMP files; the image is then built by using specific parameters in the buildrom utility.
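As a hedged illustration, an OBY fragment enabling demand paging might look like the following. The keyword spellings and numeric values here are examples only; check them against your platform's buildrom documentation before use.

```
// Illustrative OBY fragment only; verify keywords and values for your platform.
pagedrom
compress
// minimum live pages, maximum live pages, young/old page ratio
demandpagingconfig 256 512 3
// page files by default unless individually marked in their MMP files
pagingoverride defaultpaged
```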
Device drivers use timers for tasks such as providing timeouts for device operations, and waiting for required device programming timings.
The Kernel provides timer APIs that device drivers can use, for example,
It is recommended that peripheral drivers monitor peripheral inactivity because they are in the best position to understand the current hardware requirements.
The preferred approach is to implement inactivity monitoring in the platform specific portion of peripheral drivers. However, it is not always possible to do this safely. In that case, an interface between the platform specific and generic portions of the driver might need to be devised, which may in future lead to inactivity monitoring functionality, or assistance for it, being included in generic components.
In some cases the developer of the driver might decide to implement inactivity monitoring with a grace delay, using timers. For example, consider a peripheral used as a data input device. If the only way that the platform specific layer of the peripheral driver knows that an input request is pending, and that data is arriving, is when an incoming unit of data causes an interrupt, then the resulting ISR (or deferred DFC) could restart a timer with a suitable period. Inactivity can be monitored in the timer callback. This guarantees that if another unit of data arrives before the timer expires, the peripheral is still in its operational power state.
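The timer-restart technique just described can be sketched as a small model. Time is an abstract tick count here rather than a real kernel timer, and the class name is invented for illustration; the structure (each data unit restarts a one-shot deadline, and expiry signals inactivity) is what matters.

```cpp
#include <cassert>

// Simplified model of inactivity monitoring with a grace delay: each unit
// of incoming data restarts a one-shot timer; if the timer expires before
// more data arrives, the driver may move the peripheral to low power.
struct InactivityMonitor {
    int deadline = -1;          // tick at which the timer fires (-1: not running)
    int gracePeriod;            // grace delay in ticks
    bool lowPower = false;      // peripheral moved to its low power state

    explicit InactivityMonitor(int grace) : gracePeriod(grace) {}

    void OnDataReceived(int now) {  // called from the ISR or deferred DFC
        lowPower = false;
        deadline = now + gracePeriod;   // restart the timer
    }
    void Tick(int now) {            // timer callback: check for expiry
        if (deadline >= 0 && now >= deadline) {
            lowPower = true;        // inactivity detected
            deadline = -1;
        }
    }
};
```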
It is possible that a particular ASSP allows peripheral hardware blocks to be independently transitioned to a low power state. From the point of view of power consumption, the base port might consider that those peripherals have three logical power states:
Operational Power – the peripheral is either processing some request, or performing a background task with data transactions to or from it, or the peripheral is being kept in this state in anticipation that data will be input to it, and is drawing an amount of power corresponding to one of its possible modes of operation.
Low Power – the peripheral is “idling” after a period of inactivity has been detected, drawing the minimal amount of power required to preserve its internal state. The peripheral should be able to keep its internal state and transition from this to the Operational Power state with a negligible delay and with no impact on the performance of either the peripheral driver or the upper layer components that use it.
Off – the peripheral power has been cut off, and only leakage accounts for power consumption. No state is preserved, and the delay to return to full power is not negligible.
Note that peripheral power states are a notion that applies only to the peripheral hardware block, and not to the driver. The peripheral driver is the component that requests the peripheral to transition between these states.
Note also that both the Operational and Low Power states are subsets of the peripheral "Active" state. In some cases the difference in power consumption between the Operational and Low Power modes might not be significant because, with some hardware, only part of the peripheral needs to be operational to service a particular type of request.
Some ASSPs make provision for a peripheral standby mode, where individual peripherals can be transitioned to a low power consumption mode of operation under software control, usually by writing to an ASIC internal register.
If the peripheral is capable of keeping its internal state, and the transition out of this mode (whether performed automatically by the hardware or controlled by software) has no impact on the performance of the peripheral driver or on the users of this peripheral, and can be done with negligible delay, then the peripheral standby mode can be considered a low power state as above.
It is assumed that peripherals can be transitioned between power states under the peripheral driver's software control. However, transitioning peripherals to a low power state should only be performed if it can be done transparently to the users and consumers of the services provided by that peripheral. Negligible interactive delays may be acceptable; data loss is not.
The transition of a peripheral to a low power state can be requested by the platform-dependent layers of that peripheral driver after inactivity, as defined above, has been detected. That request usually takes the form of a request to turn a controllable power resource off. The request should be handled by the ASSP/Variant component that manages platform power resources (the resource manager).
It can also take the form of an operation required to put the peripheral into standby mode. In this case the peripheral driver is entirely responsible for handling the transition to the low power state.
This guide is only concerned with peripheral low power states that can be reached and left with no impact on system performance. However, it is possible that transitions to and from a peripheral low power state are long enough to have an impact on system performance, or cause loss of data, or the peripheral is not capable of detecting the hardware events required to guarantee correct operation while in a low power state. If this is the case, then the decision to transition to a low power state cannot be taken solely by the peripheral driver, and has to be taken in agreement with the users and consumers of the services provided by this peripheral.
It can be assumed that when the peripheral driver is loaded, the peripheral is put into the Low Power state. If a new request is asserted on that peripheral driver by a user of that peripheral (which can include another dependent peripheral driver or a user side engine/server/application), the peripheral should move to its Operational Power state. The driver can then start monitoring for peripheral inactivity; when that is detected, and no other dependent peripheral is using this peripheral, it can move the peripheral back to the Low Power state.
If, however, the power manager requests the driver to power down the peripheral and no other dependent peripheral is using it, it then moves into its Off state, from which it can only return to the Operational Power state when the power manager issues a power up request. Moving from the Off state to the Operational Power state is a relatively lengthy operation and requires resetting the internal state of the peripheral.
If another dependent peripheral is using this peripheral, and the power manager issues a power down request, then the driver keeps the peripheral in its operational power state until the dependent peripheral relinquishes its claim on it.
From the Low Power state, a peripheral can also be turned Off (unconditionally) if the power manager so requests. It can be moved back to the Operational Power state on an internal event (interrupt, timer).
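The transition rules described above can be condensed into a toy state machine. The enum and method names are illustrative, not a driver API; each method corresponds to one of the triggers in the text (a client request, detected inactivity, and power manager power-down/power-up requests).

```cpp
#include <cassert>

// Toy model of the three logical peripheral power states and the
// transitions described in the text. Not a real driver interface.
enum class PState { Off, LowPower, Operational };

struct Peripheral {
    PState state = PState::LowPower;  // assumed state after driver load
    int dependents = 0;               // dependent peripherals using this one

    void OnRequest()    { state = PState::Operational; }          // client request
    void OnInactivity() {                                         // grace delay expired
        if (dependents == 0 && state == PState::Operational)
            state = PState::LowPower;
    }
    void OnPowerDown()  {                                         // power manager request
        if (dependents == 0) state = PState::Off;                 // else stay operational
    }
    void OnPowerUp()    {                                         // power manager request
        if (state == PState::Off) state = PState::Operational;
    }
};
```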
Describes how to use the debug monitor commands to get information about +the call stack.
Tracing the call stack is an advanced use of the
Every time a function is called, the return address is automatically saved into register
When you are debugging only ROM-based code, it is relatively easy to identify the pushed return addresses because all code addresses will be in the ROM range:
Note that
To trace back through a thread’s kernel or user stack, you first need to find the stack pointer value. On the ARM,
In thread context:
When handling interrupts, dedicated stacks are used:
To find out which stack to inspect, you need to know what mode the CPU was in when the fault occurred. The
use the
use the
The following examples show how to find the stack(s):
Kernel & user stacks of the current thread after a hardware exception
Use the
In this example:
the kernel stack is the value of
the user stack is the value of
Kernel & user stacks of the current thread after a panic
Use the
In this example:
the kernel stack is the value of
the user stack is the value of
Interrupt stacks
Use the
In this example:
the IRQ stack is the value of
the FIQ stack is the value of
Kernel & user stacks of a non-current thread
Use the output of the
In this example:
the kernel stack is the value of
the user stack is the value of
One way of tracing through the call stack is to assume that every word on the stack which looks like a ROM code address is a saved return address. We say that this is a heuristic because:
some data words may look like code addresses in ROM.
there may be saved return addresses left over from previous function calls. For example, suppose that
This scenario happens frequently when
If you want to trace applications loaded into RAM, then stack tracing is more difficult because RAM-loaded DLLs are given addresses assigned at load time.
On ARM, the stack pointer starts at the higher address end and moves 'down' towards the lower address end. This means that values at the top of the memory dump are more recent. You need to look back through this for code addresses. For ROM code this will be words with the most significant byte in the range
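The scan described above can be sketched as a short routine: every word of the dump that falls inside the ROM code range is collected as a candidate return address. The ROM bounds are parameters because the actual range is device specific; the values used in any example run are assumptions, and false positives are expected, as discussed.

```cpp
#include <cstdint>
#include <vector>

// Heuristic scan of a stack dump: treat every word inside the ROM code
// range as a candidate saved return address. romLo/romHi must come from
// the actual ROM range of the device; expect false positives.
std::vector<uint32_t> ScanForRomAddresses(const std::vector<uint32_t>& stackWords,
                                          uint32_t romLo, uint32_t romHi) {
    std::vector<uint32_t> candidates;
    for (uint32_t w : stackWords)
        if (w >= romLo && w < romHi)
            candidates.push_back(w);   // looks like a ROM code address
    return candidates;
}
```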
Let's follow this in an example session:
Decide whether the crash has been caused by a panic or an exception using the
This shows that the crash was caused by a panic, so now use the
The panic happened in supervisor mode, because
Using the
We can look for potential ROM addresses by scanning the log and looking up the corresponding function name in the symbol file generated
Alternatively, we can use the
There are several false positives in this output (and even more in the truncated parts). So some study of the source code is needed to discard the noise and find the actual call stack. Here it is (innermost frame first):
Note that for the sake of the example, a call to
All other function names are false positives and should be ignored.
The heuristic method is quick but produces lots of false positives. Another option is to manually reconstitute the call stack from the memory dump. This is relatively easy for debug builds because GCC uses R11 as a frame pointer (FP) and generates the same prologue/epilogue for every function.
For release builds, there is no generic solution. It is necessary to check the generated assembler code, as there is no standard prologue/epilogue and R11 is not used as a frame pointer.
A typical prologue for a debug ARM function looks like this:
noting that:
This code creates the following stack frame:
Looking at the example session listed in when
Looking up the saved return address,
Using the pointer to the previous stack frame saved into the current frame, we can decode the next frame:
Looking up the saved return address,
And here is the third stack frame:
So
Note that this mechanical way of walking the stack is valid only for debug functions. For release functions, it is necessary to study the code generated by the compiler.
For completeness, this is a typical prologue for a debug THUMB function:
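The mechanical frame walk can be simulated on a plain word array standing in for the memory dump. The frame layout assumed here follows the standard GCC ARM debug prologue (`mov ip, sp` / `stmfd sp!, {fp, ip, lr, pc}` / `sub fp, ip, #4`): `[fp]` holds the saved pc, `[fp-4]` the saved lr (return address), `[fp-8]` the saved sp, and `[fp-12]` the caller's fp. Offsets below are indices into a word array; on real hardware they are byte offsets read from the dump.

```cpp
#include <cstdint>
#include <vector>

// Walk GCC ARM debug-build stack frames in a simulated memory dump:
// at each frame, record the saved lr and follow the saved caller fp.
// A caller fp of 0 (or out of range) terminates the walk.
std::vector<uint32_t> WalkFrames(const std::vector<uint32_t>& mem, uint32_t fp) {
    std::vector<uint32_t> returnAddrs;
    while (fp >= 3 && fp < mem.size()) {
        returnAddrs.push_back(mem[fp - 1]);  // saved lr (return address)
        fp = mem[fp - 3];                    // caller's frame pointer
    }
    return returnAddrs;
}
```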
and this creates the following stack frame:
A call stack can mix ARM and THUMB frames. Odd return addresses are used for THUMB code and even ones for ARM code.
The output shown below is a typical result of using the
The first three lines and the fifth line of the output show the state of the kernel scheduler. This information is mainly of interest to kernel engineers, although the state of the kernel and the system locks can be useful when debugging device driver crashes.
The values are interpreted as follows:
The current thread is the thread that was executing when the fault occurred. The 23 lines starting at line 10 of the output give information relating to the current thread:
Thread object and access +count
The
The
The
The thread name
The thread name is the part after the colons. The part before the colons is the process name. This means that the thread is called Main inside the process test2.exe.
The thread state, exit information, +priority
The information that characterises the thread exit is described by
the thread has panicked, as indicated by: exit type 2. See also
the panic category was: USER
the panic number was: 100
the thread was running +or it was in a ready-to-run state: MState READY
The priority shown is for the underlying thread; see also
Thread flags
The
Handles
The
Kernel & user stack +addresses
These fields give the base address and size, in bytes, of +the kernel and user stacks respectively.
Thread id, RAllocator instances, +trap frame
The
The
The
The
Trap handler, active scheduler, +user-side exception handler
The
The
The
Temporary object, temporary +allocation, IPC count
The
The
The
Underlying nanokernel thread
The
The
The
The
The
The
Fast mutexes
The
The
The
Timing, request semaphore +count
The
The
The
Suspend count, critical +section counter, register values
The
The
The remaining +content is a list of register values. Note that they are not the register +values when the thread panicked. They are the values in the registers +the last time that this thread was pre-empted.
The current process is the process in whose address space the current thread was executing when the fault occurred. The 15 lines starting at line 33 of the output give information relating to the current process. This has some similarities with the current thread information:
Process object and access +count
The
The
The
The process name
The
See
Exit information
The
Process flags
The
Handles
The
Attributes
The
Information about memory
The
The
Secure id
The
Capability
The +second four bytes of the Capability field contains the set of bits that define +the capability for this process. This defines what the process can and cannot +do.
Code segments
The
Chunks owned by the process
The NumChunks field contains the number of chunks owned by +the process.
Successive lines contain information about each chunk:
the
the
the
Shared IO buffer information
This is information about shared IO buffers. The cookie is +only really of interest to base engineers.
Domain information
This is ARM MMU-specific protection information. Processes have domain -1 and DACR 0xFFFFFFFF.
In the moving memory model the current process +could be a fixed process, in which case there is also a current moving process. +The current data section process is the current moving process.
This field contains a pointer to the
The Time client interface provides the Real Time Clock (RTC) functionality +to the OS.
The MultiMediaCard subsystem performs multiple data transfers in a single bus transaction.
If your implementation uses double buffers, you must set the
If your MultiMediaCard controller uses DMA and your platform specific layer supports physical addresses, you must use double buffers.
See also:
Some drivers may only require an LDD, and not provide a PDD. However, if the driver has a PDD, then the driver must inform the framework of this. The LDD should set the
The device driver framework provides a method to set and provide general information to the user on the capabilities of the device driver. Typically, implementations simply return the version number of the LDD. The device driver framework allows both the LDD and PDD to set the device capabilities.
The user typically calls the
Note: The device capabilities in this context +refer to the services that the driver can offer. Do not confuse this with +the idea of platform security capabilities, which are completely different.
Symbian platform provides a Hardware Abstraction Layer (HAL) interface that can be used by device drivers and kernel extensions to provide information on device specific attributes to user code. It comprises set and get interfaces used by the Kernel and by user code. The attributes are divided into groups of similar functionality, each of which is managed by a function called a HAL handler.
Drivers must register any HAL handlers they provide, and +de-register them when unloading. Kernel extensions do not remove their HAL +handlers.
The arguments to
When the user calls
The stack is provided by the
The following sections describe how to implement these functions in the
The
If you have not declared it already, declare the following function in the definition of
Refer to the platform documentation to perform the operations and update the appropriate hardware registers when you intercept the following commands:
You should also handle the additional
In the following example, the hypothetical hardware platform has two requirements, which are implemented by the
As the SDIO Controller is instantiated at run time, the memory used to support the SDIO transfer mechanisms must be allocated dynamically. Therefore, you can use two different types of memory and of data transfer:
Programmatic input/output using virtual memory
DMA using physically-allocated
The
If you use the Programmatic I/O method, you need to queue pending data transfers +while you handle hardware interrupts. Your platform's documentation specifies +which interrupts may be received during transfers.
Declare the following function in the definition of
The following example is a typical implementation of
You must also ensure that your interrupt handler checks the SDIO interrupt register and the card slot before forwarding the interrupt to the stack.
Declare the following function in the definition of
In the SD protocol, the block size must be a power of two. SDIO does not have the same constraint: the block size can be anything between 1 and 2048 bytes.
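The two block-size rules just stated can be captured in a pair of small checks (illustrative helpers, not part of any driver API):

```cpp
#include <cstdint>

// SD requires the block size to be a power of two.
bool IsValidSdBlockSize(uint32_t n)   { return n > 0 && (n & (n - 1)) == 0; }

// SDIO accepts any block size from 1 to 2048 bytes.
bool IsValidSdioBlockSize(uint32_t n) { return n >= 1 && n <= 2048; }
```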
The following example is a typical implementation of
If a media driver uses a data transfer mechanism like
If the media driver code can handle physical addresses, it must tell the local media subsystem. This process is called registration. The media driver calls
After the call to
A
There are three pieces of information +that the local media subsystem needs from the media driver when the media +driver registers:
The minimum number of bytes that the media device can transfer. For example, the architecture of some memory card types requires data transfer in blocks of 512 bytes. The local media subsystem cannot support I/O requests for physical addresses where the length of the data is smaller than this minimum number. This limit prevents accidental access of data outside the limits of the request.
The maximum number of bytes that the media driver can transfer in one burst. This value depends on the hardware. For example,
The alignment of memory that the DMA controller requires. For example, a DMA controller might require 2 byte (word or 16 bit) alignment or 4 byte (double word or 32 bit) alignment. For 2 byte alignment, specify 2; for 4 byte alignment, specify 4, and so on. The local media subsystem cannot support I/O requests for physical addresses that are not aligned according to this value.
You get all of this information from the documentation for the platform.
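The three registration values above, and the checks the local media subsystem applies with them, can be sketched as follows. The struct name, fields, and example numbers are illustrative only, not the real registration API.

```cpp
#include <cstdint>

// Illustrative holder (hypothetical names) for the three values a media
// driver supplies at registration, with the validity checks applied to
// physical-address I/O requests.
struct DmaRegistrationInfo {
    uint32_t minTransferBytes;   // e.g. 512 for block-structured cards
    uint32_t maxBurstBytes;      // hardware-dependent burst limit
    uint32_t alignmentBytes;     // e.g. 2 or 4, as the DMA controller requires

    bool RequestOk(uint32_t physAddr, uint32_t length) const {
        return length >= minTransferBytes &&      // not below the minimum transfer
               length <= maxBurstBytes &&         // within one burst
               physAddr % alignmentBytes == 0;    // correctly aligned
    }
};
```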
This example code extends the code shown in the section
To use physical addresses, you need to make changes to the code that deals with
There are a number of points to consider:
Check if the address is physical
Call
Physical address code
If you want to use the physical address, you need to get the physical address and the length of contiguous memory at this location. Call
If you do not want to deal with fragmented physical memory, you can use your original code.
Virtual to physical address translation
Your code must not perform a virtual to physical address translation when it deals with physical memory. For example:
Eliminate inter-process copy
You must change your code to remove calls to
The same logic applies to
Test your changes
You are recommended to run regression tests on your changed code to make sure that the media driver operates correctly.
If the media driver can use physical addresses, you need to be aware of a number of issues.
The address scheme used by the hardware
All media devices have a minimum number of bytes that they can transfer. For example, the architecture of some memory card types requires data transfer in blocks of 512 bytes. To read one byte from this type of media device, the media driver must read a block of 512 bytes and extract the byte from the block. To write one byte to a media device, the media driver must read a block of 512 bytes, change the content of the byte, and write the block to the media device.
Data transfer smaller than the minimum size
If the local media subsystem receives a request to transfer data with a length smaller than the minimum transfer size, the local media subsystem does not make a physical address available to the media driver. A call to
Data transfer not aligned to the media device block boundary
If the local media subsystem receives a request to transfer data, and the address on the media device is not aligned to the media device block boundary, you need to adopt the technique suggested below. The local media subsystem will make the physical address available to the media driver. A call to
Consider the following case. A request has been made to read 1024 bytes from a media device that has a block size of 512 bytes. The 1024 bytes start at offset 256 on the media device.
To get the first 256 bytes, you must read the first block of 512 bytes from the media device. This can corrupt the physical memory passed in the I/O request. The solution is to read the first block from the media device into an intermediate buffer. Copy the 256 bytes from that buffer into the physical memory passed in the I/O request.
To get the last 256 bytes, you must read the third block of 512 bytes from the media device into the intermediate buffer. Copy the 256 bytes from that buffer into the correct position in the physical memory passed in the I/O request.
The middle 512 bytes are aligned on the media device block boundary. The media driver can read this data into the correct position in the physical memory passed in the I/O request.
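The technique described above can be sketched as a self-contained routine. The "device" is modelled as a byte vector and `ReadBlock` stands in for the real block-read operation; partial blocks go through an intermediate buffer while fully aligned blocks are read directly into the destination.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

static const size_t kBlock = 512;  // media device block size

// Stand-in for the device's block-read operation.
void ReadBlock(const std::vector<uint8_t>& dev, size_t blockNo, uint8_t* out) {
    std::memcpy(out, dev.data() + blockNo * kBlock, kBlock);
}

// Read len bytes starting at an arbitrary device offset: partial blocks
// are read into an intermediate buffer and only the needed bytes copied;
// fully aligned blocks are read directly into the destination.
void ReadUnaligned(const std::vector<uint8_t>& dev, size_t pos, size_t len,
                   uint8_t* out) {
    uint8_t tmp[kBlock];  // intermediate buffer
    size_t done = 0;
    while (done < len) {
        size_t blockNo = (pos + done) / kBlock;
        size_t offset  = (pos + done) % kBlock;
        size_t n = std::min(kBlock - offset, len - done);
        if (offset == 0 && n == kBlock) {
            ReadBlock(dev, blockNo, out + done);   // aligned: read in place
        } else {
            ReadBlock(dev, blockNo, tmp);          // partial: via intermediate buffer
            std::memcpy(out + done, tmp + offset, n);
        }
        done += n;
    }
}
```

With a 512-byte block size, a 1024-byte read starting at offset 256 (the case in the text) performs two partial-block reads through the intermediate buffer and one direct aligned read.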
Scatter/Gather DMA controllers
DMA controllers can support the Scatter/Gather mode of operation. Each request in this mode of operation consists of a set of smaller requests chained together. This chain of requests is called the Scatter/Gather list. Each item in the list consists of a physical address and a length.
Use
The following code fragment shows how you do this. The example assumes that the DMA controller supports a Scatter/Gather list with an unlimited number of entries. In practice, the number of entries in the list is finite.
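A simplified stand-in for such a fragment is sketched below. The lookup callback is hypothetical: it answers, for an offset into the request, the physical address and the number of contiguous bytes available there, standing in for the subsystem's physical-address query. Each list entry is one DMA-able run; as noted above, the list length is unbounded here, whereas real controllers cap it.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <vector>

struct SgEntry { uint32_t physAddr; uint32_t length; };  // one DMA-able run

// Build a scatter/gather list covering totalLen bytes. lookup(off, addr,
// contig) reports the physical address at request offset off and how many
// contiguous bytes are available there.
std::vector<SgEntry> BuildSgList(
        uint32_t totalLen,
        const std::function<void(uint32_t off, uint32_t& addr, uint32_t& contig)>& lookup) {
    std::vector<SgEntry> list;
    for (uint32_t off = 0; off < totalLen; ) {
        uint32_t addr = 0, contig = 0;
        lookup(off, addr, contig);
        uint32_t n = std::min(contig, totalLen - off);  // clamp to the request
        list.push_back({addr, n});
        off += n;
    }
    return list;
}
```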
The DMA framework provides an interface for clients, such as device drivers, to perform data transfers.
The following are the basic steps to use the DMA framework:
The
The
The
The
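The flow of those steps (open a channel, create a request, fragment it, queue it, and receive a completion callback) can be mirrored by a toy model. The real framework classes differ in detail and name; only the call sequence is represented here.

```cpp
#include <functional>
#include <queue>

// Toy model of the DMA client flow: a channel runs queued completions,
// and a request is fragmented, then queued for asynchronous completion.
struct Channel {
    std::queue<std::function<void()>> pending;   // queued transfers
    void Run() {                                 // simulate the hardware completing
        while (!pending.empty()) { pending.front()(); pending.pop(); }
    }
};

struct Request {
    Channel& chan;
    int fragments = 0;
    explicit Request(Channel& c) : chan(c) {}

    void Fragment(int bytes, int maxFragBytes) {
        fragments = (bytes + maxFragBytes - 1) / maxFragBytes;  // split into chunks
    }
    void Queue(const std::function<void()>& onDone) {
        chan.pending.push(onDone);   // transfer completes asynchronously
    }
};
```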
There is no specific test suite available for the Interrupt platform +service at the moment.
The local media sub-system allows the Symbian platform file system and other programs to communicate with local media devices.
Working with the local media sub-system requires an understanding of the Symbian platform file system and of local media devices.
A media drive mounted within the phone.
The
The local media subsystem provides the following:
Local media LDD
The kernel-side logical device driver which implements communication between physical devices and the applications which use them.
The
Local media subsystem:
Manages device connections
Manages data transfer
Handles notifications and device configuration.
This set of macros is available for use in source files, but not in platform specific configuration header files. Their definitions are obtained by including
GETCPSR
Reads the CPSR into the specified ARM general register
This macro should be used in preference to MRS instructions to avoid problems related to syntax incompatibility between different assembler versions.
CGETCPSR
Reads the CPSR into the specified ARM general register
This macro should be used in preference to MRS instructions to avoid problems related to syntax incompatibility between different assembler versions.
GETSPSR
Reads the SPSR into the specified ARM general register
This macro should be used in preference to MRS instructions to avoid problems related to syntax incompatibility between different assembler versions.
CGETSPSR
Reads the SPSR into the specified ARM general register
This macro should be used in preference to MRS instructions to avoid problems related to syntax incompatibility between different assembler versions.
SETCPSR
Writes the entire (all 32 bits) CPSR from the specified ARM general register
This macro should be used in preference to MSR instructions to avoid problems related to syntax incompatibility between different assembler versions.
CSETCPSR
Writes the entire (all 32 bits) CPSR from the specified ARM general register
This macro should be used in preference to MSR instructions to avoid problems related to syntax incompatibility between different assembler versions.
SETSPSR
Writes the entire (all 32 bits) SPSR from the specified ARM general register
This macro should be used in preference to MSR instructions to avoid problems related to syntax incompatibility between different assembler versions.
CSETSPSR
Writes the entire (all 32 bits) SPSR from the specified ARM general register
This macro should be used in preference to MSR instructions to avoid problems related to syntax incompatibility between different assembler versions.
BOOTCALL
Calls the specified function via the boot table.
The macro is transparent; the function is entered with all registers and flags having the same values as immediately before the macro.
GETPARAM
Retrieves the parameter with number
See the description of
GETMPARAM
Retrieves the parameter with number
See the description of
FAULT
Faults the system if condition
BTP_ENTRY
Declares MMU permissions and cache attributes. The macro takes a variable number of arguments depending on the processor in use.
For ARM architecture 6 CPUs:
For XScale CPUs:
For other CPUs:
ROM_BANK
Declares an XIP ROM bank entry.
See also
HW_MAPPING
Defines an I/O mapping using the standard permissions and cache attributes for I/O mappings, i.e. those defined for the
See also:
HW_MAPPING_EXT
Defines an I/O mapping using the permissions and cache attributes defined by a
See also:
HW_MAPPING_EXT2
Defines an I/O mapping using the standard permissions and cache attributes for I/O mappings, i.e. those defined for the
See also
HW_MAPPING_EXT3
Defines an I/O mapping using the permissions and cache attributes defined by a
See also
Granularity of the I/O mapping
The granularity of the I/O mapping is defined by the
The
In each case the unit in which the
declares a mapping of size 4K starting at physical address
Determining the linear address
For those macros that don't specify a linear address:
On the direct memory model, it is equal to the physical address.
On the moving memory model and the multiple memory model, the first such mapping is placed at
For example, on the moving memory model, the following mappings would
+have linear addresses
For the direct memory +model, all I/O mappings required by the system must be listed here since it +is not possible to make further mappings once the kernel is running.
This class provides the main client API between the SDIO implementation and the rest of the system.
For portability reasons, it is recommended that this class be allocated on behalf of the client by the appropriate function class after the client has been registered with the SDIO function.
The function declaration for the
Description
This method reads a single 8-bit value from the specified register.
Parameters
Return value
The function declaration for the
Description
This method writes a single 8-bit value to the specified register.
Parameters
Return value
The function declaration for the
Description
This method performs a bitwise read-modify-write operation on the specified register.
Parameters
Return value
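The read-modify-write behaviour described above can be sketched in plain C++. The register accessors and mask parameters below are illustrative stand-ins, not the real SDIO register interface API:

```cpp
#include <cstdint>

// Hypothetical backing store standing in for a memory-mapped SDIO register.
static uint8_t gReg = 0;

uint8_t ReadReg() { return gReg; }
void WriteReg(uint8_t aVal) { gReg = aVal; }

// Bitwise read-modify-write: clear the bits selected by aClearMask,
// then set the bits selected by aSetMask, as one logical operation.
uint8_t ModifyReg(uint8_t aClearMask, uint8_t aSetMask)
{
    uint8_t val = ReadReg();
    val = static_cast<uint8_t>((val & ~aClearMask) | aSetMask);
    WriteReg(val);
    return val;  // returning the updated value is an assumption of the sketch
}
```

A real implementation would perform the read and write over the SDIO bus and would need to guard against concurrent access to the register.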
The function declaration for the
Description
This method performs a bitwise read-modify-write operation on the specified register.
Parameters
Return value
The function declaration for the
Description
This method reads the specified number of bytes starting at the specified register offset.
Parameters
Return value
The function declaration for the
Description
This method reads the specified number of bytes starting at the specified register offset.
Parameters
Return value
The function declaration for the
Description
This method reads the specified number of bytes starting at the specified register offset.
Parameters
Return value
The function declaration for the
Description
This method reads the specified number of bytes starting at the specified register offset.
Parameters
Return value
The function declaration for the
Description
This method writes the specified number of bytes starting at the specified register.
Parameters
Return value
The function declaration for the
Description
This method writes the specified number of bytes starting at the specified register.
Parameters
Return value
The function declaration for the
Description
This method writes the specified number of bytes starting at the specified register.
Parameters
Return value
The function declaration for the
Description
This method writes the specified number of bytes starting at the specified register.
Parameters
Return value
The function declaration for the
Description
This function allows the user to disable the synchronous nature of the DSDIORegInterface class by using the specified callback function to indicate the completion of an operation.
Parameters
Return value
The function declaration for the
Description
Allows the synchronous nature of the DSDIORegInterface class to be enabled.
When the synchronous nature of the DSDIORegInterface class is enabled, completion of an operation is signalled by waiting on a semaphore.
Parameters
None
Return value
The request-handling kernel-side DFC is managed by a message queue object of the
The driver framework requires that the driver sets a DFC queue to use with the message queue. This is done by calling
The Kernel provides a standard DFC queue, which runs on a dedicated kernel thread called
The DFC thread that is created for the driver must be destroyed after use. To do this, the driver can create an Exit or Kill DFC request and queue it to the thread to be destroyed in the logical channel destructor. This exit DFC function cancels any other pending requests using
The
All synchronous and asynchronous requests are passed to the
The client thread is blocked until the message is completed, as the request uses the thread's message object. If the client thread were left free, it could corrupt the message by issuing another request.
When the driver has completed handling the message, it notifies the framework by calling
For synchronous requests, the message is not completed until the request itself is complete, and the driver calls the
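The blocking behaviour described above can be illustrated with a small sketch: the client thread blocks inside the send call until the driver side completes the message. The class below is a hypothetical stand-in built on standard C++ primitives, not the kernel-side message queue API:

```cpp
#include <condition_variable>
#include <mutex>

// Minimal sketch of a request message whose sender blocks until the
// handler calls Complete(), in the spirit of the message queue above.
class TSketchMessage
{
public:
    // Blocks the calling (client) thread until Complete() is invoked.
    int SendReceive()
    {
        std::unique_lock<std::mutex> lock(iLock);
        iCond.wait(lock, [this] { return iDone; });
        return iResult;
    }

    // Called by the driver side when handling has finished.
    void Complete(int aResult)
    {
        {
            std::lock_guard<std::mutex> lock(iLock);
            iResult = aResult;
            iDone = true;
        }
        iCond.notify_one();
    }

private:
    std::mutex iLock;
    std::condition_variable iCond;
    bool iDone = false;
    int iResult = 0;
};
```

In the real framework the completion would normally come from the driver's DFC thread while the client is blocked; the sketch only shows the rendezvous.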
Symbian platform physical RAM defragmentation is the process of moving physical pages, used to back virtual allocations, in order to create empty physical ranges. This enables the powering off of individual RAM banks or, if required, the allocation of a large contiguous buffer, for example, for a CCD image. This functionality allows a more efficient use of RAM in terms of both memory and power consumption.
There are two interrelated use cases for defragmenting physical RAM:
many device drivers require physical RAM for use as buffers. For example, a camera driver may require a physically contiguous buffer for holding the output of the CCD. In releases 9.3 and earlier, such memory was typically allocated at boot time. This practice increases initial RAM consumption after boot. Total RAM consumption can be reduced if memory for such buffers is allocated only as required, rather than at boot time. It is not possible to provide an absolute guarantee that the RAM will be available when needed, but allocation should succeed in all but exceptional cases.
typically, the memory consumption of a phone while idle is less than the total memory available. Idle and active power consumption can be decreased by powering down unused RAM chips or unused RAM banks within a RAM chip. This may also involve cooperation with a higher level component in the UI, which can shut down unused applications to reclaim more memory.
There are three overloads for each of these
The first overload is synchronous and blocks the calling thread until the defragmentation is complete (or is cancelled using
The second overload is asynchronous and takes a pointer to an
The third overload is asynchronous and takes a DFC. The DFC is queued when the defragmentation is complete. The return value is
All
All the
aPriority
Determining the result of a defragmentation operation
For synchronous overloads of the defragmentation methods, the error code returned by the defragmentation method can be used to determine the result of the defragmentation operation.
For asynchronous overloads of the defragmentation methods, the result of the defragmentation operation is determined by invoking
Cancelling a defragmentation operation
A defragmentation operation can be cancelled by invoking the
Depending on the current state of the system and the value of
aMaxPages
Some devices may define
Once a RAM zone has been successfully claimed, its memory can be used by the device driver after being mapped into a
A device driver may attempt to allocate memory from a specific RAM zone using
aId – the ID of the RAM zone to be emptied.
Once the empty defragmentation operation has completed, the RAM zone specific allocation can be attempted again. However, the RAM zone specific allocation may still fail in high RAM usage situations or if the RAM zone has too many fixed pages allocated.
Device drivers are typically run and debugged on target hardware rather than on the Emulator. The tool that provides this information is called the debug monitor or the crash debugger.
The debug monitor is used when the Kernel faults. This can happen because there is a fault in a kernel-side component, such as a device driver, or because a thread or process marked as system-critical crashes.
Note that the debug monitor is one of the basic ways of debugging software problems on target hardware. Full interactive debugging on reference hardware and some real phones is available through commercial IDEs.
You can provide a personality layer so that a phone can run existing protocol stacks, for example, mobile telephony signalling stacks or Bluetooth stacks, on the same CPU that runs the Symbian platform applications.
Such protocol stacks, often referred to as Real Time Applications (RTA), almost always run on an RTOS, and the aim of the personality layer is to provide the same API as the RTOS, or at least as much of it as is required by the RTA. The RTA can then run, using the Kernel Architecture 2 Nanokernel layer as the underlying real time kernel.
The following diagram illustrates the point simply.
There is sample code at
As a basis for emulating an RTOS, +the RTOS is assumed to provide the following features:
Threads
Thread synchronisation/communication
Thread scheduling +following a hardware interrupt
Timer management +functionality
Memory management.
Threads
Threads are independent units of execution, +usually scheduled on a strict highest-priority-first basis. There +are generally a fixed number of priorities. Round robin scheduling +of equal priority threads may be available but is usually not used +in real time applications. Dynamic creation and destruction of threads +may or may not be possible.
Thread synchronisation/communication
Typical examples +of such thread synchronisation and communication mechanisms are semaphores, +message queues and event flags.
There is wide variation between +systems as to which primitives are provided and what features they +support. Again, dynamic creation and destruction of such synchronisation +and communication objects may or may not be supported. Mutual exclusion +protection is often achieved by simply disabling interrupts, or occasionally +by disabling rescheduling.
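As an illustration of one such primitive, here is a minimal event flag group sketched with standard C++ synchronisation. The names and the exact semantics (wait-for-all, clear-on-return) are assumptions for the sketch rather than any particular RTOS's API:

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Sketch of an RTOS-style event flag group: threads block until a
// required combination of flags has been set by another thread or ISR.
class TEventFlags
{
public:
    // Set the given flag bits and wake any waiters.
    void Set(uint32_t aFlags)
    {
        {
            std::lock_guard<std::mutex> lock(iLock);
            iFlags |= aFlags;
        }
        iCond.notify_all();
    }

    // Block until every bit in aMask is set; then clear and return them.
    uint32_t WaitAll(uint32_t aMask)
    {
        std::unique_lock<std::mutex> lock(iLock);
        iCond.wait(lock, [&] { return (iFlags & aMask) == aMask; });
        iFlags &= ~aMask;
        return aMask;
    }

private:
    std::mutex iLock;
    std::condition_variable iCond;
    uint32_t iFlags = 0;
};
```

A personality layer would wrap whichever variant of this behaviour (wait-any, no-clear, timeouts) the emulated RTOS actually defines.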
Thread scheduling following a hardware interrupt
This +is usually achieved by allowing interrupt service routines (ISRs) +to make system calls which perform operations such as signalling a +semaphore, posting a message to a queue or setting event flags, which +would cause a thread waiting on the semaphore, message queue or event +flag to run.
Some systems don't allow ISRs to perform these +operations directly but require them to queue some kind of deferred +function call. This is a function which runs at a lower priority than +hardware interrupts (i.e. with interrupts enabled) but at a higher +priority than any thread - for example a Nucleus Plus HISR. The deferred +function call then performs the operation which causes thread rescheduling.
Timer management functionality
A timer management +function is usually also provided, allowing several software timers +to be driven from a single hardware timer. On expiry, a software timer +may call a supplied timer handler, post a message to a queue or set +an event flag.
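The pattern of multiplexing several software timers onto a single hardware timer can be sketched as follows; the tick-driven expiry queue and the handler signature are illustrative assumptions:

```cpp
#include <cstdint>
#include <functional>
#include <map>

// Sketch of a timer manager: software timers are kept ordered by expiry
// tick, and each hardware tick expires every timer that has become due.
class TTimerMgr
{
public:
    // Start a software timer that fires aTicksFromNow ticks in the future.
    void Start(uint32_t aTicksFromNow, std::function<void()> aHandler)
    {
        iQueue.emplace(iNow + aTicksFromNow, std::move(aHandler));
    }

    // Called from the single hardware timer interrupt.
    void Tick()
    {
        ++iNow;
        while (!iQueue.empty() && iQueue.begin()->first <= iNow)
        {
            auto handler = iQueue.begin()->second;
            iQueue.erase(iQueue.begin());
            handler();  // call a handler, post a message, or set a flag
        }
    }

private:
    uint32_t iNow = 0;
    std::multimap<uint32_t, std::function<void()>> iQueue;  // expiry -> action
};
```

Real implementations often store relative deltas rather than absolute ticks so that only the head entry needs decrementing, but the observable behaviour is the same.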
Memory management
An RTOS often provides memory management, usually in the form of fixed-size block management, as that allows real-time allocation and deallocation. Some RTOSs may also support full variable-size block management. However, most RTOSs do not support use of a hardware MMU and, even if the RTOS supports it (for example, OSE does), the real time applications under consideration do not make use of such support, since they are generally written to run on hardware without an MMU.
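A fixed-size block pool of the kind described above might look like this; it is a generic sketch (a free list threaded through the blocks themselves), not any specific RTOS's allocator:

```cpp
#include <cstddef>

// Sketch of a fixed-size block pool. Alloc() and Free() are each a single
// pointer swap on the free list, which is what makes this style of
// allocator usable from real-time code.
template<std::size_t BlockSize, std::size_t NumBlocks>
class TBlockPool
{
public:
    TBlockPool()
    {
        // Thread every block onto the free list.
        for (std::size_t i = 0; i < NumBlocks; ++i)
        {
            void** block = reinterpret_cast<void**>(iStore + i * BlockSize);
            *block = iFreeList;
            iFreeList = block;
        }
    }

    void* Alloc()
    {
        if (!iFreeList)
            return nullptr;  // pool exhausted
        void* block = iFreeList;
        iFreeList = *static_cast<void**>(block);
        return block;
    }

    void Free(void* aBlock)
    {
        *static_cast<void**>(aBlock) = iFreeList;
        iFreeList = aBlock;
    }

private:
    static_assert(BlockSize >= sizeof(void*), "block must hold a link");
    alignas(void*) unsigned char iStore[BlockSize * NumBlocks];
    void* iFreeList = nullptr;
};
```

Because allocation and deallocation have constant worst-case cost, the pool gives the deterministic timing that variable-size heap allocation cannot.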
When a number of drivers have a session with the MultiMediaCard Controller, +the MultiMediaCard Controller can have many requests to service.
To handle sessions, the MultiMediaCard controller implements a scheduler. The MultiMediaCard stack has three internal queues:
an entry queue - this is the queue onto which a session is initially added when a client submits the session object by calling
a ready queue - this is the queue into which a session is moved when it is ready to be handled; the scheduler moves all the sessions from the entry queue into the ready queue when it can. This is anchored in the
a working set queue - this is the queue of sessions to be processed, as chosen by the scheduler. This queue is limited to eight sessions. The scheduler moves a session from the ready queue to the working set queue if all current sessions in the working set queue are blocked and there are fewer than eight sessions in it. This is anchored in the
All three queues are circular queues, implemented using the internal Symbian platform class
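The movement of sessions between the three queues can be sketched as follows. The structure below is an illustrative simplification: it admits ready sessions whenever the working set has room, and omits the blocked-session condition and the circular-queue implementation:

```cpp
#include <cstddef>
#include <deque>

// Sketch of the entry / ready / working-set arrangement described above.
struct TStackScheduler
{
    static constexpr std::size_t KMaxWorkingSet = 8;
    std::deque<int> iEntry, iReady, iWorkSet;  // session ids

    // A client submitting a session adds it to the entry queue.
    void Submit(int aSession) { iEntry.push_back(aSession); }

    // The scheduler moves all entry-queue sessions to the ready queue.
    void MoveEntryToReady()
    {
        while (!iEntry.empty())
        {
            iReady.push_back(iEntry.front());
            iEntry.pop_front();
        }
    }

    // Admit ready sessions while the working set has room (at most eight).
    void FillWorkingSet()
    {
        while (!iReady.empty() && iWorkSet.size() < KMaxWorkingSet)
        {
            iWorkSet.push_back(iReady.front());
            iReady.pop_front();
        }
    }
};
```

In the real stack the admission step also requires every current working-set session to be blocked before a new one is moved in.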
Every time one of the following events occurs, the MultiMediaCard stack invokes its scheduler to deal with that event:
a client submitting a session to the MultiMediaCard controller
a client aborting a session
a card interrupt occurring.
The stack invokes the scheduler by calling the internal function
Sometimes, the MultiMediaCard controller has to perform its processing in a DFC context. In those cases the controller postpones the operation by queuing a DFC; the DFC callback function then resumes the operation by invoking the scheduler again. This means that the scheduler may be running as part of a kernel executive call, a DFC or an interrupt service routine.
In general, the MultiMediaCard controller processes a session in stages: issuing commands, checking responses, reading or writing data, and so on. Each stage is usually done asynchronously, first setting up the activity and then setting up a means of being notified when that activity is complete, for whatever reason (e.g. issuing a command and setting up an interrupt for when a response has been received).
If a session is waiting on an asynchronous operation, it is blocked. When one session is blocked, the stack tries to process another session which isn't blocked.
The blocking of sessions is set up and implemented internally to Symbian platform. However, the platform specific layer can block a session by calling the
Internally, the
The following flow-diagram shows the algorithm used by the scheduler.
The
The Kernel defines three system-wide power states. A base port can add new power states to improve power management for phone hardware.
The generic system-wide power states that are defined are Active, Standby and Off.
Any additional sub-states of the Active power state must be wholly managed by the base port. These states may not need to be declared explicitly and may result from peripherals, or groups of peripherals, having moved to their low power state (see also
Usually, the transition of the system into one of the additional low power states happens when the CPU enters idle mode. The transition may be automatic and wholly managed by the ASSP hardware, or may result from an action taken by the software routine that prepares the CPU to go to idle mode.
An example of this is when the base port uses the idle mode to put the CPU and the device into a hardware “Sleep” mode similar to that which can be achieved with a transition to Standby mode, as described in the implementation issues for
When called, the power controller’s
There is a balance between the power savings obtained from moving the CPU and platform into “Sleep” mode in Idle and the performance hit resulting from spending time restoring the status after coming out of “Sleep” mode. Usually the
The decision to move into “Sleep” mode may also be based on the current level of activity, or collected metrics on the length of time spent in Idle, or other historical information. This can be implemented entirely by the base port, or it may require the services of an external component (such as, for example, Speed Management).
The transition to a hardware “Sleep” mode may also depend on shared peripherals being in a particular state, for example, no pending requests from other peripherals. This means that the power controller may need to check with the peripheral driver before initiating the change. The power controller could use the ASSP/Variant method to access the resource manager and check the usage of a shared peripheral using its
The decision to move into “Sleep” mode could be dependent on a number of peripherals (shared and not shared) being in Standby and a number of controllable power resources being turned off. Therefore the power controller may also need to check with the resource manager (through the Variant or ASSP) for the state of a specific set of controllable power resources (using the
On going to “Sleep”, an action or set of actions might need to be taken that would affect the whole platform. An example of this is when DRAM is put into self-refresh mode on entering the CPU Idle mode after all peripherals that might access it (including the LCD controller and DSP) are in low power state. Unused DRAM banks may also be powered down.
The following diagram shows how the evolution of system power might look on a system where the most power saving “Sleep” mode can only be reached, from Idle, when Peripherals A, B and C are already in their low power mode. On going to the most power saving “Sleep” mode, additional actions can be taken to lower the system power level:
The system is active when a request for service is made on peripheral A; the peripheral driver for peripheral A requests its transition to the operational state, increasing the system power requirement (a).
After a period of activity related to servicing the request, the system enters the idle thread; in
The timer expires and the system wakes up to the same power level as before (c).
The inactivity timer implemented in the peripheral driver for peripheral A expires: the peripheral is transitioned to the low power state (d).
At (e) the system enters the idle thread again: the timer queue is investigated and then the system enters sleep mode, waking up again when an interrupt occurs (f).
The inactivity timer associated with the peripheral driver for peripheral B expires and the peripheral is transitioned to its low power state (g).
On the next call to
Finally, the inactivity timer for peripheral C expires and the peripheral is transitioned to low power state (j). On reaching another period of system idle, all conditions are met to send the system to the deepest sleep mode available, accompanied by the switching off of other power resources (k).
On waking up (l), the system resources are restored to the same power level as before.
Any transition to and from these low power states must be transparent +to the rest of the system. A transparent transition is one that can be instantly +reversed, perhaps automatically in hardware, and has no noticeable impact +on system performance. In other words, it should be possible to wake the processor +up and move the entire device to Active Mode as if coming from a “normal” +Idle Mode.
To perform a system wide power change which is not transparent, +peripherals that may be affected by the transition would need to be examined, +and interfaces would have to be provided so that the users of these peripherals +could query the peripheral and allow or disallow the system transition into +that state.
Device driver DLLs come in two types - the logical +device driver (LDD), and the physical device driver (PDD). Typically, a single +LDD supports functionality common to a class of hardware devices, whereas +a PDD supports a specific member of that class.
In Symbian platform +source code, PDDs are part of variant baseports, while LDDs are stored in +a shared location that is not specific to a particular variant:
PDD source files (
PDD source files (
LDD source files are in source directories named
Common test application source files are in source directories named
For both types of driver, the source files are generally organised in the following sub-directories:
<driver >
<driver >
<driver >
The project files of a device driver that is part of the Kernel code are similar to those for other components in Symbian platform. The following tables summarise the source and binary file types you will see:
Describes writable data paging and how to use it.
You implement DMA with the standard base porting tools (a compiler and a hardware-specific debugger). There are no tools specifically required for DMA implementation.
Checks if the symbol
If the symbol
Checks if the symbol
If
Checks if the symbol
If
The template port digitizer code can be found in
The template
Following the general pattern, your implementation will be contained in the directory:
Rename the class
Contains the guides that describe various aspects of writable data paging in more detail.
A base port must define the attributes that clients can use on a phone, and implement any functions that are required to get and set the attributes.
To define attributes, you must change the source of the
Power domains depend on the physical wiring of a device, and this must be taken into account by the base port. They are usually manipulated when peripherals or groups of peripherals transition between the Operational Power and Off states.
The following diagram is an example of such an arrangement.
To reduce power leakage, it may be useful to cut the power supply to an area or a group of peripherals in the ASIC or the hardware platform when all peripherals on that peripheral power domain have moved to the Off state.
In the arrangement shown here, when all peripherals on a power domain have been powered down, the power supply to that domain can be cut off using the RESn signal.
A suggested implementation would have peripheral power domains modelled by a
This is a suggested definition for a peripheral power domain class.
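In the absence of the original listing, a minimal sketch of such a class might look like this; the reference-counting scheme and all names are assumptions made for illustration:

```cpp
#include <cstdint>

// Hypothetical peripheral power domain: each peripheral on the domain takes
// a reference when it powers up and releases it when it powers down. The
// domain's supply is only cut when the last reference is released.
class TPowerDomain
{
public:
    void PowerUp()
    {
        if (iRefCount++ == 0)
            SetSupply(true);   // first user: re-establish the supply
    }

    void PowerDown()
    {
        if (--iRefCount == 0)
            SetSupply(false);  // last user gone: cut the supply (RESn)
    }

    bool IsPowered() const { return iOn; }

private:
    // Stand-in for driving the RESn line or a regulator control register.
    void SetSupply(bool aOn) { iOn = aOn; }

    int32_t iRefCount = 0;
    bool iOn = false;
};
```

Where re-establishing the supply takes significant time, PowerUp() would instead start an asynchronous operation, as the surrounding text notes.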
The
Often, re-establishing the power supply to a power domain is a lengthy operation and should be modelled as an asynchronous operation. In that case
We recommend that peripheral power domains are defined as part of, and accessed through, a resource manager object, as suggested in the discussion of
where
Peripheral power domains may also be a useful concept when transitioning peripherals to low power mode after peripheral inactivity detection. It is only after all peripherals on a peripheral power domain have requested moving the resource to the level corresponding to low power that the resource level can be changed. This is exemplified again in the block diagram above: when all peripherals in a domain have requested transitioning to low power (using a
The following base port software architecture diagram could be used to address control of the peripheral power domains shown in the hardware block diagram above:
The steps are:
Implement the driver entry point function and initialisation code. This function is called when the extension is loaded.
The initialisation code binds the hardware interrupt to the Interrupt Service Routine (ISR) and enables the interrupt.
Implement an ISR to handle key events. The ISR queues a keyboard DFC.
Implement a keyboard DFC. This function interrogates the keyboard, converts the scancode to a keypress event, and places it onto the kernel's event queue.
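The ISR-to-DFC hand-off in the last two steps can be sketched as follows, using simple software stand-ins for the hardware register, the DFC mechanism and the kernel event queue:

```cpp
#include <queue>

// Illustrative sketch of the keyboard driver's interrupt flow; these are
// stand-ins, not the Symbian kernel API.
struct TKeyDriver
{
    bool iIrqEnabled = true;
    bool iDfcPending = false;
    int  iScanCodeReg = 0;        // stands in for the hardware data register
    std::queue<int> iEventQueue;  // stands in for the kernel event queue

    // ISR: runs with the keypress latched in the data register. It disables
    // the interrupt so that repeated ISRs cannot queue the DFC again.
    void Isr()
    {
        iIrqEnabled = false;
        iDfcPending = true;       // stands in for queuing the keyboard DFC
    }

    // DFC: runs later in thread context. It drains the register, re-enables
    // the interrupt, translates the scan code and posts an event.
    void Dfc()
    {
        iDfcPending = false;
        int scanCode = iScanCodeReg;  // read the scan code data
        iIrqEnabled = true;           // input register is free again
        iEventQueue.push(Translate(scanCode));
    }

    // Dummy scancode-to-keypress mapping for the sketch.
    static int Translate(int aScanCode) { return aScanCode + 1000; }
};
```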
In the template reference board port, the
The source for the driver is contained entirely within
The driver is defined as a kernel extension and is loaded early in the boot sequence.
The driver functionality is encapsulated by the
As the driver is a kernel extension, it must have a
It simply creates an instance of the
Initialisation on construction
The constructor of the
See also
The ISR (Interrupt Service Routine) just schedules the DFC that handles the keypress. It can also optionally contribute the timing of key interrupts as a source of random data for the Random Number Generator (see
The ISR disables the keyboard interrupt, as repeated ISRs are not allowed to queue further DFC routines.
The DFC is the function
This:
reads the scan code data by calling
re-enables the keyboard interrupt, now that the DFC has freed up the input data register
translates the keypress event into a Symbian scancode
puts the translated event on to the event queue.
The platform-specific configuration header is a file that provides the configuration options that must be known at build time.
The configuration header is an include file written in assembler, and is named
Use the configuration header file in the Template port in
The file consists of a number of global logical symbol definitions (
symbols that define the CPU, listed below
a number of less commonly used macros and symbols, which are documented in the template configuration header file
a set of general bootstrap macros described in the
The CPU that the bootstrap runs on is indicated by defining one of the following symbols:
Note that
The template file contains all these symbol definitions; just move the comment symbol (
A number of commonly used symbols are derived from the supplied configuration options, and may be used in your code.
Precisely one of the following three logical symbols is true, indicating the memory model in use:
The following logical symbols are true or false depending on whether the CPU in use has the corresponding property. The property represented by the symbol is given by the symbol name.
Interrupt handling is implemented by writing platform specific versions of the structures and functions of the Interrupt class. The details of the implementation depend on the hardware and the architecture of the device. This document describes a simple implementation of the structures and functions, and then discusses the more elaborate strategies required by device specific interrupts, chained interrupts and multiple interrupt sources. It also covers implementation on unicore platforms only. The SMP version of the kernel now implements support for the ARM Generic Interrupt Controller and relies on its use.
Chained interrupts are interrupts which are output by one controller and input to another. They need to be distinguished in the ISR table and require extensions to the handler functions.
Multiple interrupt sources to the same ISR require the use of pseudo-interrupts.
When a Symbian port is split into an ASSP and variant (common and device specific) layer, the variant may include additional interrupt sources. Their API is defined by the port. Device specific interrupts are sometimes used to handle interrupts from peripherals; another technique is to route peripheral interrupts to a GPIO pin. Peripheral interrupts cannot be specified as part of the ASSP layer.
The Template Baseport provides a skeleton implementation for developers to modify at
The ISR table is a data structure which pairs each ISR with the interrupt source to which it will be bound. It must have enough space for each interrupt source on the device. It is implemented as an array of
Interrupts must be disabled and cleared with a call to
The kernel is initialized in phases, interrupts being involved in the first phase and sometimes the third. The Init1() function should be implemented to:
Initialize the ISR table, binding all ISRs to the spurious interrupt handler.
Register the dispatcher functions.
Bind ISRs which handle chained or pseudo-interrupts.
Interrupts must be disabled during first phase initialization.
This example code illustrates initialization of the ISR table.
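A sketch of that initialization, modelled loosely on the template port but with illustrative names and a fixed table size, might be:

```cpp
#include <cstdint>

// Hypothetical ISR table entry: the ISR and the parameter it is called with.
typedef void (*TIsr)(uint32_t aParam);

struct SInterruptHandler
{
    TIsr iIsr;
    uint32_t iParam;
};

const int KNumInterrupts = 32;          // table size is an assumption
SInterruptHandler Handlers[KNumInterrupts];

// Spurious-interrupt handler: every entry starts out bound to this. A real
// port would fault the kernel here; the parameter identifies the source.
void SpuriousIsr(uint32_t aId)
{
    (void)aId;
}

// Called from first-phase initialization (interrupts disabled).
void InitIsrTable()
{
    for (int i = 0; i < KNumInterrupts; ++i)
    {
        Handlers[i].iIsr = SpuriousIsr;
        Handlers[i].iParam = static_cast<uint32_t>(i);
    }
}
```

Binding the interrupt id as the spurious handler's parameter lets the fault report say which unbound source fired.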
This example code illustrates an implementation of
Third phase initialization involves initializing various interrupt handlers and sources which can only be initialized when the kernel is fully functional. This is done with a call to
It is important to remember that interrupts are enabled during third phase initialization.
Interrupts not bound to a real ISR must be bound to a 'spurious' handler function
The
The argument
The implementation should perform some preliminary checks:
The interrupt Id must be checked for validity.
The ISR must not already be bound to a real interrupt. It should have been bound to the spurious interrupt handler at initialization.
All interrupts must be disabled during binding.
This example code provides a basic implementation.
The implementation of
The
The argument
The implementation should perform some preliminary checks:
The interrupt Id must be checked for validity.
The ISR must not already be unbound (that is, bound to the spurious interrupt handler).
All interrupts must be disabled during unbinding.
This example code provides a basic implementation.
The implementation of
Device drivers call the
The implementation is entirely hardware dependent.
This example involves a check for chained interrupts, which are discussed in their own section below.
Device drivers call the
Device drivers call the
The
Priority is a property of interrupts on some hardware, an example being OMAP. Where the hardware is of this type,
The implementation is entirely hardware dependent.
The functions
In the simplest implementation, the interrupt Id is used as an index into the ISR table. The
This code is a simplified example which assumes that:
The interrupt controller provides 32 interrupt sources and has a 32-bit pending interrupt register where a 1 indicates a pending interrupt and all ones are cleared when the register is read.
The interrupt source represented by the low order bit in the pending interrupt register has interrupt Id 0, and so on.
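Under those two assumptions, the dispatcher can be sketched like this; the pending register and ISR table below are software stand-ins for the real hardware and kernel structures:

```cpp
#include <cstdint>

typedef void (*TIsr)(uint32_t aParam);

// Stand-ins for the controller's pending register and the ISR table.
uint32_t PendingReg = 0;
TIsr IsrTable[32];
uint32_t IsrParam[32];

// Example recording ISR, used to demonstrate the dispatcher below.
uint32_t LastId = 0xFFFFFFFF;
void RecordIsr(uint32_t aParam) { LastId = aParam; }

// Assumption: reading the pending register clears all pending bits.
uint32_t ReadAndClearPending()
{
    uint32_t p = PendingReg;
    PendingReg = 0;
    return p;
}

// Simplified dispatcher: 32 sources, interrupt Id n carried in bit n.
void IrqDispatch()
{
    uint32_t pending = ReadAndClearPending();
    while (pending)
    {
        uint32_t id = 0;
        while (!(pending & (1u << id)))  // find the lowest pending Id
            ++id;
        pending &= ~(1u << id);
        IsrTable[id](IsrParam[id]);      // the Id indexes the ISR table
    }
}
```

A production dispatcher would use a count-leading-zeros instruction rather than a bit-scan loop; the loop is kept here for clarity.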
Implementation will be more complex where chained interrupts +and multiple interrupt sources are involved, as discussed below.
Dispatch functions are time critical. You will probably write +an initial implementation in C++ to get them working and then rewrite +in assembler for maximum efficiency.
A platform often has multiple interrupt controllers of higher and lower priority (higher and lower level controllers), organized so that the output of a lower level controller is one of the inputs to a higher level controller. Interrupt sources organized in this way are called chained interrupts.
In a system with chained interrupts, the ISR table must be structured so that interrupts from higher and lower level controllers can be distinguished by their Ids.
The
In a system with chained interrupts it can be desirable to write the
The
In one technique, the main interrupt dispatcher calls the second level dispatcher if the relevant condition is satisfied.
In the other technique, the second level dispatcher is bound directly to an interrupt source as its ISR.
The first technique works well in cases where there are only a main and a secondary interrupt controller. It does not scale well in cases which make use of multiple controllers chained to substantial depth.
You need to allocate locations in your ISR table for the secondary controllers so that the interrupt Id identifies which hardware controller the input is on. For example, if each interrupt controller handles 32 interrupt sources, you could allocate the first 32 Ids to the highest level controller, the next 32 to a second level controller, and so on.
This example code illustrates a main dispatcher which calls a second level dispatcher.
This example code illustrates a second level dispatcher bound directly to an interrupt source.
This example assumes an ISR table in which the second level interrupt ISRs begin at location 32 (
In cases where multiple peripherals are +connected to the same interrupt source, multiple sources may generate +the same interrupt which will then require a different ISR depending +on the specific source. However, EKA2 does not allow binding of multiple +ISRs to the same interrupt. There are two strategies for solving this +problem, both of which involve assigning the multiple ISRs not to +the real interrupt but to pseudo-interrupt Ids. In one strategy the +dispatcher examines the hardware to determine where the interrupt +originated and calls the appropriate ISR. In the other strategy, the +ISRs are written so that they examine their peripheral hardware and +only run if it is actually signalling an interrupt: in this strategy +the dispatcher calls all the ISRs bound to the real interrupt but +only one of them runs.
There is no requirement to extend the
+implementation of
Multiple interrupt sources require you to extend the
+implementation of
The dispatch functions should be extended in the same +way as with chained interrupts, using one of the two techniques described +for that case.
The ISR table should be structured so that the +interrupt Id identifies the hardware controller the interrupt is on. +For instance the first 32 Ids might refer to the highest level controller, +the next 32 to a second level controller and so on.
Interrupts generated by peripherals +are sometimes routed to a GPIO pin and sometimes included in a variant +layer. Where they are part of the variant layer, they must be listed +in a separate ISR table which is part of the device implementation. +However, we want device drivers to be able to use the Interrupt class +functions on interrupts of either type. The solution is to write separate +device specific functions derived from those of the core class. Core +class functions are then written in such a way as to identify device +specific interrupts and pass them on to the derived functions.
A recommended way of labelling interrupts as being device specific +is to assign negative numbers as their Ids. The core functions can +then identify negative Ids as belonging to device specific interrupts +and pass them to the device specific derived functions. The device +specific functions can convert them to positive numbers which serve +as indexes to the device specific ISR table.
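The negative-Id scheme can be sketched as a host-side model. All names here (`BindInterrupt`, `coreIsrTable` and so on) are illustrative assumptions, not the real Symbian Interrupt API:

```cpp
#include <cassert>

// Host-side model of the negative-Id scheme: negative Ids mark device
// specific interrupts, which the core layer converts to a positive index
// into the device specific ISR table.
using Isr = void (*)(void*);

static Isr coreIsrTable[32];     // core ISR table, Ids 0..31
static Isr deviceIsrTable[16];   // device specific ISR table

// Map a device specific (negative) Id to its table index: -1 -> 0, -2 -> 1...
static int DeviceIndex(int aId) { return -aId - 1; }

// Core bind: negative Ids are passed on to the device specific layer.
bool BindInterrupt(int aId, Isr aIsr)
{
    if (aId < 0 && DeviceIndex(aId) < 16) {
        deviceIsrTable[DeviceIndex(aId)] = aIsr;
        return true;
    }
    if (aId >= 0 && aId < 32) {
        coreIsrTable[aId] = aIsr;
        return true;
    }
    return false;                // Id out of range for either table
}

// Look up the ISR bound to an Id, core or device specific.
Isr BoundIsr(int aId)
{
    return aId < 0 ? deviceIsrTable[DeviceIndex(aId)] : coreIsrTable[aId];
}
```

The core function never indexes its own table with a negative Id; the conversion confines device specific Ids to the device specific table.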
This example code +illustrates device specific interrupt handling.
The example provides a version of
The PRM is a framework for managing system power resources. This +framework improves portability across different platforms and reduces +device driver complexity.
The PRM framework is split into +two layers:
Platform Independent
+Layer - the PIL is a generic software layer that is implemented by
+Symbian. This is the base virtual class for the Power Resource Controller
+(
Platform Specific
+Layer - the PSL is developed specifically to interface with the target
+hardware by licensees. This is the class derived from
Other acronyms used in this document set:
LDD - Logical Device Driver. The higher layer of abstraction within the Symbian platform device driver framework, which implements functionality common to differing pieces of hardware of one type.
PDD - Physical Device Driver. The lower layer of abstraction within the Symbian platform device driver framework, which implements functionality that is specific to a particular piece of hardware.
Intended audience
This document is intended +to be used by Symbian platform device creators.
Required +background
The reader of this document is assumed to have
+knowledge of the Symbian platform device driver model and
Introduction
The PRM provides a unique place +where all the current power states for resources can be obtained at +any time. The sum of all internal and external power states defines +the current system-wide power state.
Setup and configuration +requirements
The PRM component is implemented as a kernel +extension with an exported public interface that is accessible to +kernel side components through statically linking against its export +library. The export library is built from the resource manager and +the resource manager libraries.
There are two versions available:
basic resource +manager - provides essential or required functionality for static +resources.
The basic version of the PIL layer is compiled
+into
extended resource +manager - provides additional support for dynamic resources and resource +dependencies.
The extended version of the PIL layer is compiled
+into
Device drivers that require the use of the PRM should link
+against the appropriate library.
Building the PRM for +the target platform
The PRM is an early extension, so
+the
Boot sequence
The
+PRM cannot be used to operate on power resources until later in the
+boot sequence when the kernel allows the scheduling of other threads.
+During this time it is not possible to read or change the state of
+resources, but
If a kernel extension needs to know whether certain resources have reached the post-boot state in order to complete its own initialisation, it should queue resource state change notifications for those resources when it first registers as a client with the PRM. Note: notification requests are accepted before the PRM is fully initialised.
Variants or kernel extensions can register static
+resources before the PRM is fully initialised with
Porting the PRM consists of +implementing the PSL layer for a given hardware platform and modifying +clients, such as device drivers, to use the PRM framework.
The following tasks are covered in this tutorial:
ARM provides a hardware floating point coprocessor that performs floating point computation fully compliant with IEEE Std 754-1985.
+To support a coprocessor, you need to:
+Configure the Kernel +to use VFP through a macro setting in the Variant.
Configure the ROM to +include a Kernel extension to support IEEE-without-exceptions mode.
Configure the ROM to +include VFP support libraries.
Port the User Library
+to implement its
Define the macro
Add the HAL attribute for VFP
+to your base port's
As an example, see the Integrator CM1136 base port, and specifically:
See
+also
Symbian platform supports two execution modes:
NOTE: There may be some applications that depend on the correct
+handling of calculations involving, or resulting in, very small numbers; for
+such applications,
RunFast mode
For
IEEE-without-exceptions +mode
For
As an example, see the Integrator CM1136 base port, and specifically:
Note:
This is the default +execution mode.
It is possible to switch
+between this execution mode and the
Symbian platform provides both
+the VFP version and the non-VFP version of the floating point support functions.
+You choose the VFP version when you build your ROM image by specifying the
There are two ways to specify the
by adding the following
+line into the
You +use this technique to permanently include the VFP versions in your ROM image.
by passing
You use this technique if you only want +to include the VFP versions for a specific ROM image, for example, when testing.
If you use the first technique, you can still provide non-VFP versions
+for a specific ROM image by passing
In effect, you are overriding the definition in the
For
+example, see the Integrator CM1136 base port, and specifically:
Platform-specific source code needs to implement a set of standard functions +that are called by the generic source code. It also needs to provide a set +of standard MMU permission and cache attribute definitions.
+Standard functions are organised in the following way:
+a set of public functions +that the Symbian platform generic source links to directly
a set of functions that +are contained in a table, known as the boot table.
a set of entries that +define MMU and cache attributes to be used for standard memory and I/O areas; +this set of entries is also contained in the boot table.
Refer to the Template bootstrap source code:
Drivers generally implement the
The implementation reads the request type and other passed arguments, and +initiates handling of the request appropriately.
+The return value of this function only indicates the status of the message +reception to the driver. It does not actually indicate the request completion +result.
+A single channel has a single handle which is shared by driver users. A
+driver can allow or prevent the sharing of a handle to a logical channel between
+multiple users. This policy is implemented by the
In the following example, the driver ensures that only the intended clients
+can get the handle and access the driver. Any other client that tries to share
+the handle gets a
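The policy can be sketched with a simplified host-side model. In EKA2 the actual hook is the logical channel's handle-request function; the types, names and error values below are stand-ins for illustration:

```cpp
#include <cassert>

const int KErrNone = 0;
const int KErrAccessDenied = -21;   // Symbian error code

struct TThread { int iId; };

// Simplified model of a logical channel that refuses to share its handle.
struct TChannel {
    int iOwnerId;                   // the thread that opened the channel

    // Called when a thread asks for a handle to this channel:
    // only the owning thread is allowed access.
    int RequestUserHandle(const TThread& aThread) const
    {
        return aThread.iId == iOwnerId ? KErrNone : KErrAccessDenied;
    }
};
```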
The Interrupt platform service provides an interface that allows +clients to have access to interrupts.
+The following are the basic steps to use the Interrupt +platform service:
The
The
The
The
There are no specific tools required to use or implement the Register +Access platform service.
+An LDD
+must define an entry point function using the macro
This factory object is created on the kernel heap. Its +purpose is to create the logical channel, the object through which all client +interaction with the driver will occur.
The file extension of an LDD DLL can be any permitted Symbian platform name but, by convention, the LDD DLL has the extension
An
+LDD is loaded by calling
loads the DLL into RAM, +if necessary
calls the exported function
+at ordinal 1 to create the factory object, the
places the factory object +into the appropriate object container.
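The three-step loading sequence above can be modelled on the host as follows; `LoadLogicalDevice`, the entry-point type and the container are simplified stand-ins for the real kernel machinery:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

struct LogicalDevice { };            // stands in for the factory object

using EntryPoint = std::unique_ptr<LogicalDevice> (*)();

// The kernel's object container for logical device factories (model).
static std::map<std::string, LogicalDevice*> factoryContainer;

// Model of the loading sequence: load the DLL if necessary, call the
// exported function at ordinal 1 to create the factory, then place the
// factory in the container.
int LoadLogicalDevice(const std::string& aName, EntryPoint aEntryAtOrdinal1)
{
    if (factoryContainer.count(aName))   // already loaded: nothing to do
        return 0;
    std::unique_ptr<LogicalDevice> factory = aEntryAtOrdinal1();
    factoryContainer[aName] = factory.release();
    return 0;
}
```

A second load of the same name is a no-op in this model, mirroring the "loads the DLL into RAM, if necessary" step.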
If an LDD needs to perform initialisation at boot time (before
+the driver is loaded by
For the kernel to initialise the LDD extension at boot time, the
The audio device driver +implementation may provide for playback, recording, or both at the same time, +but there are design considerations for each option.
+API Reference
+This is a list of state +machine functions defined for the MultiMediaCard controller.
No specifications are published.
The first three functions are exported from the kernel,
Typically, you implement your derived
+class in a power management kernel extension, which has the opportunity
+to perform initialisation before the system itself is fully initialised.
+This is done via the extension's DLL entry point. You use the
create an instance +of your power controller object
register your
+power controller object with the Symbian platform power manager, by
+calling
if you intend
+to use a battery monitor, then you would create it and register it
+as the HAL handler to deal with
For example:
The function
To support idle power management the Power Controller's
+PSL must override
Idle power management
The Power Controller can assemble
+a list of resources whose state it is interested in from a NULL thread
+in an array of
The example
+below creates a buffer to capture the state of all static resources.
+The PRM captures the information using
The first parameter passed to
Note:
The Symbian Power Model defines three generic system-wide
Each of the system-wide low power states, Standby and Off, has a defined set of wake-up events. If a wake-up event occurs while the system is preparing to transition into one of these low power states, or while the system is in the Standby state, the system can move back to the Active power state. Wake-up events may differ between target power states. They are platform-specific hardware events, and your base port decides which events count as wake-up events.
When is it called?
Context
When the user side entity decides to move
+the device to a low power state, it sends a notification to all of
+the user side power domains registered with it, giving their applications
+a chance to save their data and state. However, before doing this,
+it calls
Before calling
Once the user side transition is complete, it calls
Implementation issues
There are three possibilities +in dealing with wake-up events:
if wake-up events
+are not handled by specific peripheral drivers, you could set up the
+hardware so that wake-up events are recorded in a location accessible
+by the software. Prior to completing the system power transition,
+the Power controller would check the relevant hardware to see whether
+a wakeup event had occurred. On detecting a wake-up event it would
+tell the power manager by calling
if wake-up events
+are not handled by specific peripheral drivers, you could arrange
+for wake-up events to interrupt the CPU. The code that services those
+interrupts would tell the power manager by calling
if wakeup events +are intercepted, serviced and cleared by peripheral drivers, then +the following outline solution could be adopted:
The power controller +would need to publish a list of wake-up events, preferably as a bit +mask, and export an API which peripheral drivers could use to notify +the power controller that a wake-up event has occurred. The following +interface class could be used for this purpose:
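A minimal sketch of such an interface class; the class name, event names and member names are assumptions for illustration:

```cpp
#include <cassert>

typedef unsigned int TUint;

// Sketch of an interface the power controller could export so that
// peripheral drivers can report wake-up events as bits in a mask.
class TWakeupEventNotifier
{
public:
    // Published wake-up events (illustrative bit assignments).
    enum { EKeyPress = 1u << 0, ERtcAlarm = 1u << 1, EUartActivity = 1u << 2 };

    // Called by a peripheral driver when it detects a wake-up event.
    static void Notify(TUint aEvent) { iPending |= aEvent; }

    // Read and clear the pending events (used by the power controller).
    static TUint ClaimPending() { TUint p = iPending; iPending = 0; return p; }

private:
    static TUint iPending;
};

TUint TWakeupEventNotifier::iPending = 0;
```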
The class would be implemented
+as part of the power management kernel extension and
Those peripheral drivers that intercept +wake-up events would need to link to the power controller DLL (i.e. +the power management kernel extension).
On the occurrence
+of a wake-up event, the peripheral driver software servicing that
+event would notify the power controller by calling
You might
+implement
where
When the peripheral driver powers down, it should leave the +hardware that generates the wake-up event powered and enabled for +detection. Thus, if a wake-up event of the type handled by this peripheral +driver occurs, it will not be serviced in the normal way, but will +be left pending until the power controller services it.
When
+the power controller’s
When is it called?
Implementation issues
If absolute timer expiration
+is one of your wake-up events, then your implementation of
It is recommended that +you track absolute timers.
When is it called?
Typically,
+this is a result of the user side entity that is responsible for moving
+the device into a low power state calling
If
Context
The target state is defined by the value of
+the
Implementation issues
Implementation depends on the +target state:
if the target
+state is Standby, as implied by a value of
If at least one wake-up event has been
+detected and recorded since the last call to
if the target
+state is Off, as implied by a value of
The
The Symbian definition of the Standby state usually translates into the hardware manufacturer’s CPU “Standby” or “Sleep” modes of operation. Typically, the internal clocks associated with the core and some of the core peripherals, or their power supply, are suppressed, and their internal state is not preserved. In this case, the state of the core and core peripherals should be saved before going into Standby so that they can be restored when the system wakes up. Note the following:
for the core +state, save the current mode, the banked registers for each mode, +and the stack pointer for both the current mode and user mode
for the core +peripherals, save the state of the interrupt controller, the pin function +controller, the bus state controller, and the clock controller
for the MMU +state, save the control register, the translation table base address, +and the domain access control, if this is supported
flush the data +cache and drain the write buffer.
If all of this data is saved to a DRAM device, then this +should be put into self refresh mode.
Peripherals modules +involved in the detection of wake-up events should be left powered.
Tick timer events should be disabled, and the current count of +this and any other system timers should be saved; relevant wake-up +events should be left unmasked, and any others should be disabled.
When is it called?
If
Implementation issues
The implementation can call
+the Variant or ASSP implementation of
The idle state is usually implemented via a Wait-For-Interrupt +type instruction that will usually suspend execution of the CPU, possibly +triggering other ASIC power saving actions, and will resume execution +when any unmasked interrupt occurs.
Suppressing the system tick interrupt
To further increase
+power savings during CPU Idle, a base port can choose to suppress
+the system tick interrupt until the next nanokernel Timer (as implemented
+by
In EKA2, timing services rely on a hardware timer, which is programmed +by the base port Variant code, to generate the system tick. We refer +to this as the system timer.
Typically, the platform-specific
+ASSP or Variant object has a pointer to the nanokernel timer queue,
+an
Before going into Idle mode,
On returning from Idle mode, the software must examine the system
+timer and decide whether the
If waking up was due to the
If waking up was +due to another interrupt, then the software must read the system timer +and calculate the time elapsed since going into Idle mode, and adjust +the system tick count with the elapsed time (which could be a fractional +number of ticks) and reprogram the system timer to continue generating +the tick after the correct interval, as above.
If the hardware +timer that is used to generate the system ticks does not use a Compare-Match +function, then some care has to be taken not to introduce skewing +when manipulating the timer value directly. If the timer needs to +be reloaded with the new value to give the next tick, then the software +usually “spins”, waiting for the hardware timer to change, and then +reloads it. This way the timer will always be reloaded on an edge +of its internal clock.
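The tick adjustment described above reduces to arithmetic on the elapsed time; this is a sketch with illustrative names:

```cpp
#include <cassert>

struct TTickAdjust {
    unsigned iTicksToAdd;        // whole ticks elapsed while idle
    unsigned iNextTickDelayUs;   // interval to program for the next tick
};

// On waking for a reason other than the tick interrupt: advance the tick
// count by the whole ticks elapsed, and reprogram the system timer so the
// next tick arrives after the correct remaining interval.
TTickAdjust AdjustAfterIdle(unsigned aElapsedUs, unsigned aTickPeriodUs)
{
    TTickAdjust a;
    a.iTicksToAdd = aElapsedUs / aTickPeriodUs;
    unsigned intoTick = aElapsedUs % aTickPeriodUs;  // fractional tick
    a.iNextTickDelayUs = aTickPeriodUs - intoTick;
    return a;
}
```

For example, 2.5 tick periods of idle time advance the tick count by two and schedule the next tick half a period away.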
Waking up from “Sleep” modes with long wakeup latencies
Often, to further enhance power savings, the CPU and some peripherals
+are moved into a hardware “sleep” mode when
if the wakeup
+time can be determined precisely, then
If the wakeup +time cannot be known deterministically, then the above scheme must +be combined with another system to allow adjusting the system timer +from the hardware Real Time Clock (RTC).
Typically, the hardware +RTC is clocked with a 1Hz clock, and can be programmed to interrupt +the CPU on multiples of one second intervals. The clock supply to +the RTC must be left enabled on going into "sleep" mode.
To guarantee that timed events occur when they are due, the +CPU should only be allowed to go to into the long latency hardware +“sleep” mode if the RTC can be guaranteed to complete a second tick +before the CPU is due to wakeup.
Note that if waking up from hardware “Sleep” mode takes a non-negligible amount of time, extreme care should be taken when deciding to move the platform into that mode. For example, if a receive request is pending on a data input device, the CPU reaches the Idle mode, the platform is transitioned into a “sleep” state, and data arrives while it is still in that state, then the system may not wake up in time to service the incoming data.
The
The timing constants are defined near the beginning of the template port
+implementation of the platform specific layer in
The values and their meaning are as follows:
+This +constant determines the rate at which individual samples within a group are +collected.
A group of samples is collected when the pen is down, and,
+in the template port, this is handled by
Once
+all the samples in a group have been collected, the function calls
The setting of the timer +constant depends on the sample resolution of the digitizer:
A high resolution can
+lead to more jitter between samples, and therefore more smoothing is required
+to obtain nice steady values. This argues for higher sampling rates, and a
+smaller value of
A low resolution means
+that there is very little variation between consecutive samples, and therefore
+fewer are required. This argues for lower sampling rates, and a larger value
+of
Another consideration when setting this constant is the speed of +the communications between the MPU and the digitizer.
In the Symbian +reference ports, this value varies between 1 and 3.
This
+value is used in the implementation of the
The value is optional. Choose a value of 0 if +no delay is required. Typically, a value of 2 is chosen.
This
+value is used in
The
+inverse of the sum of
The delay is optional, a value of zero meaning that there is +no delay. Typically, a value of 1 is chosen.
This
+value is used in
If, after the delay, the pen is still up, then a pen-up event is issued; otherwise the state of the digitizer reverts to collecting mode, resetting the sample buffer.
Typically, this value is set to around 30.
This
+value is used in
If +the pen is down when the system powers on, then the digitizer is sampled using +this delay until the pen is up.
Typically, this value is set at 30.
This class is used to set up a session that forms SDIO-specific command sequences.
The stages involved in initializing a session are as follows:
The session is first +created with the required parameter values
The session is placed +on the stack.
On completion of initialization, +a callback is called.
It is intended that the
Some +functions have an auto-increment parameter. If the hardware function requires +you to use the register as a pointer to an address, auto-increment automatically +increments the register to point to the next address after each call. If the +hardware function requires you to use the register as a FIFO, auto-increment +should be disabled.
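The effect of the auto-increment parameter can be sketched as follows; this is an illustrative model of the addressing behaviour, not the SDIO API:

```cpp
#include <cassert>
#include <vector>

// Model of register addressing during a multi-byte transfer: with
// auto-increment, each access targets the next register address; without
// it, every access targets the same (FIFO) address.
std::vector<unsigned> AccessAddresses(unsigned aBase, unsigned aCount, bool aAutoInc)
{
    std::vector<unsigned> seq;
    for (unsigned i = 0; i < aCount; ++i)
        seq.push_back(aAutoInc ? aBase + i : aBase);
    return seq;
}
```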
The
+function declaration for the
Description
Sets the session to transfer a single byte of data to the card, and optionally read the register after the write operation.
Parameters
Return value
None
The
+function declaration for the
Description
Sets the session to perform a read of a single byte of data from the card.
Parameters
Return value
None
The
+function declaration for the
Description
Sets the session to perform a multi-byte (or multi-block) transfer +of data to the card.
Parameters
Return value
None
The
+function declaration for the
Description
Sets the session to perform a multi-byte (or multi-block) transfer of +data from the card.
Parameters
Return value
None
The
+function declaration for the
Description
Sets the session to perform a safe read-modify-write operation on a single +byte of data on the card.
Parameters
Return value
None
The macros have no effect in non-debug builds of the bootstrap, when
Sends
+the quoted text string
Appends
+a new-line character to the quoted text string
Sends
+the quoted text string
Dumps
+an area of memory of length
An ASSP hardware register is a physical interface to an item of hardware; it contains values, such as bit fields, for controlling ASSP features.
An ASSP (Application Specific Standard Product) is a class of integrated circuit. Different ASSPs have different architectures and instruction sets. The
Before implementing register access, the following must be decided:
The area in memory to which the registers are mapped. The start +of this memory region is known as the base address.
The location of each register in terms of an offset from the +base address.
The above information will be present in a specific header +file which is used by any device driver that uses this API. This header +file also contains other information required to access the registers, +such as:
constants for bit field offsets
bit field values
To provide register access, hardware implementers need to implement
+the
Generic access to a register is provided in the form of three +operations on a hardware register: read, write and modify. All these +functions can be called in any context.
Symbian platform provides +the class header and inline implementations of the 8, 16 and 32-bit +wide functions. You must implement all the register modify functions +and all functions to access 64-bit wide registers. You may also need +to replace the inline 8, 16 and 32-bit wide register read and write +functions if the ASSP contains write-only registers.
You must +implement these functions atomically so that there is no possibility +of another thread or interrupt service routine accessing the contents +of the register before the function returns. Where you cannot implement +the function with a single machine instruction you must disable interrupts +during the register operation (if you are working with a single core +architecture) or wrap a spinlock around the register operation (if +you are working with a multicore architecture).
There are three cases in which you must enforce atomicity with interrupts disabled or with spinlocks.
The modify operations involve reading the register, taking a local copy, changing the local copy and writing it back to the register; this cannot be done in a single machine instruction.
Operations on
+write-only registers involve maintaining a copy of the current contents
+of the register within the
Some architectures +contain 64-bit registers. In this case, 64-bit operations involve +at least two machine instructions.
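The read-modify-write case can be sketched as below. The disable/restore pair is a stand-in for the real kernel interrupt primitives on a single-core system; on a multicore system a spinlock would wrap the same critical section:

```cpp
#include <cassert>

typedef unsigned int TUint32;

// Stand-ins for the kernel's interrupt disable/restore primitives.
static bool gInterruptsEnabled = true;
static bool DisableInterrupts()
{
    bool was = gInterruptsEnabled;
    gInterruptsEnabled = false;
    return was;
}
static void RestoreInterrupts(bool aState) { gInterruptsEnabled = aState; }

// Atomically clear aClearMask then set aSetMask in a register. The
// read-modify-write cannot be done in one instruction, so interrupts are
// disabled for the duration of the operation.
void ModifyReg32(volatile TUint32& aReg, TUint32 aClearMask, TUint32 aSetMask)
{
    bool irq = DisableInterrupts();
    TUint32 v = aReg;                  // read
    v = (v & ~aClearMask) | aSetMask;  // modify the local copy
    aReg = v;                          // write back
    RestoreInterrupts(irq);
}
```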
Baseport implementations of the Register Access platform service come in the form of an ASSP library (or DLL). To avoid each device driver that uses the ASSP register class having to specify a baseport-specific library file corresponding to the ASSP library, the name of the baseport-specific library file should instead be included in a file used to define other variant-specific settings, such as the CPU core type, the memory model and the name of the variant. Each baseport will have a file called
To specify which baseport ASSP register setup is to be used in the client projects for this baseport, the following line is included in the MMP file of those projects:
A power controller is a kernel extension that manages the power management +hardware of a phone, for example on-chip power management controllers and +oscillators.
+The following build tools are available:
The following emulators are available:
Drivers, apart from kernel extensions, are not automatically started by the Kernel, so they must be explicitly loaded by an application.
When testing an LDD-PDD model driver, it is a general +practice to write a simple test application that loads the LDD and PDD by +name, opens a logical channel, and calls the driver API to test the driver's +functionality.
The following shows a command line test application +that does this:
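A host-side model of the flow such a test application follows is sketched below. On the target, the real user-side calls are User::LoadLogicalDevice(), User::LoadPhysicalDevice() and an RBusLogicalChannel-derived Open(); the function names and driver file names here are stand-ins:

```cpp
#include <cassert>
#include <set>
#include <string>

const int KErrNone = 0;
const int KErrAlreadyExists = -11;   // Symbian error code

static std::set<std::string> gLoadedDrivers;

// Model of loading a driver DLL by name.
int LoadDriver(const std::string& aName)
{
    return gLoadedDrivers.insert(aName).second ? KErrNone : KErrAlreadyExists;
}

// The test application's flow: load the LDD and PDD by name, then open a
// logical channel and exercise the driver API (driver specific, omitted).
int RunDriverTest()
{
    int r = LoadDriver("d_test.ldd");            // hypothetical LDD name
    if (r != KErrNone && r != KErrAlreadyExists)
        return r;
    r = LoadDriver("d_test.pdd");                // hypothetical PDD name
    if (r != KErrNone && r != KErrAlreadyExists)
        return r;
    return KErrNone;
}
```

Note that "already exists" is not treated as fatal, mirroring the usual practice of tolerating a driver that is already loaded.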
If the driver is a kernel extension, then the Kernel +loads the driver when it boots, so an application does not need to load the +driver explicitly. Applications simply open a channel to the driver and call +the driver API as required.
The
The drivers developed and built can be tested on the target hardware. The debug or release versions of the driver are built into the ROM and downloaded to the target. This is done by using an MMC card, USB, serial or JTAG interface, as supported by the particular hardware board. Once the board is powered on or reset, Symbian platform boots and the text shell is displayed (assuming a text shell ROM image). To test an LDD-PDD driver, run the test application from the command line.
If debug messages
+are enabled in the driver, by using
The implementation guide describes how SDIO is included in a build of the +Symbian platform.
+The testing guide describes techniques that can be used for testing the +SDIO functionality.
+The tools guide describes the tools that can be used to test an implementation +of SDIO.
+The build guide describes how to include the SDIO adaptation in a ROM image.
I2C is a serial bus technology invented by Philips. It is a multi-master serial interface used by low power peripherals to exchange data. I2C supports different modes of data transfer; the three commonly used modes are:
Standard mode supports 100Kbps data transfer
Fast mode supports 400Kbps data transfer
High speed mode supports 3.4Mbps data transfer
The I2C bus contains two bi-directional lines: a serial clock (SCL) line and a serial data (SDA) line. On the I2C bus, more than one device can act as a master, but only one master can be active at a given time. The master initiates the communication and the slave responds. The master and slave roles can be interchanged after a data
The master device can read and write data to the slave device. The I2C bus uses 7-bit or 10-bit addresses.
The features of the I2C bus are:
the master device +can send and receive data from the slave device
the I2C bus uses 7-bit or 10-bit addresses
only two bi-directional lines are used for communication
multiple master devices can be connected to the same bus
The first byte from the master is used to address the slave device. +In the first byte the master sends the address and read/write signal +to the slave. There are three types of messages:
master just +reads data from the slave
master just +writes data to the slave
master reads +and writes data
The communication is initiated with a start signal and completed with a stop bit. In the third type of message, the master starts the communication with a start signal and a read or write bit to the slave. The process continues until the master has completed the read and write tasks. The communication is terminated with a stop signal.
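The first byte the master sends combines the 7-bit slave address with the read/write bit, which can be sketched as follows (struct and function names are illustrative):

```cpp
#include <cassert>
#include <vector>

// One I2C transfer: a 7-bit slave address, a direction, and the data.
struct TI2cTransfer {
    unsigned char iAddr;               // 7-bit slave address
    bool iRead;                        // true = master reads, false = writes
    std::vector<unsigned char> iData;
};

// Build the first byte the master sends: the 7-bit address shifted left,
// with the read/write bit in the least significant position.
unsigned char AddressByte(const TI2cTransfer& aTransfer)
{
    return static_cast<unsigned char>(
        (aTransfer.iAddr << 1) | (aTransfer.iRead ? 1 : 0));
}
```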
The typical uses of the I2C bus are:
to read data +from various flash devices like EEPROM and SDRAM
to control LCD
to read real +time clock data
Demand paging is a change to how the kernel can use RAM from Symbian platform v9.3. This topic describes the possible effects on a base port.
+When demand paging is used, the contents of memory are available +to a program when they are required - that is, 'on demand'. When the +contents are no longer required, the RAM can be used for other content. +In this way, the total RAM required to store content is less than +if it were all permanently available.
+The Device Driver Guide provides
A base-port +component may provide services to device drivers, exposing to them +a shared resource; either hardware or software:
hardware - may +be a hardware controller whose non-instantaneous operation, once initiated, +cannot be disturbed until it completes
software - may +be a list of requests for services.
A hardware component has a control interface that can be used by a number of drivers. Operations on the control interface, although near instantaneous, are not atomic and cannot be interrupted.
In the case of the base-port component, when the state of a resource needs to be protected from the effects of pre-emption for a non-negligible period of time, the recommended approach is to use mutual exclusion, protecting the resource with a mutex, unless there is any chance that the same driver may trigger the same operation before the previous one has completed. For example, when operations are non-blocking and happen in a context different from the initiator’s, a
An example of the hardware component situation is a set-clear control
+interface, where a pair of registers (one containing the bits to be
+set, the other the bits to be cleared) have to be written to produce
+the desired change. If the operation is pre-empted after bits are
+set but before they are cleared for a desired final output, and a
+new set-clear operation is initiated, the final state of the interface
+may be undetermined. Pre-emption protection in this case is achieved
+by simply locking the Kernel using
A
Initially, when the request is made, the Kernel changes the status to
This documentation is retained for reference for those working with hardware or software that is based on older versions of the Symbian platform (in particular release 9.5). For later releases, see the
Controllable power resources are defined as being any device resource, +such as voltages on power-lines or clocks, which can be independently enabled, +disabled or modified under software control.
+The following hardware block diagram is an example of the way power resources +might be arranged:
+There is no default support or default implementation for controllable +power resources in the generic layers of Symbian platform. They must be managed +by the base port.
+We suggest that the management of controllable power resources (the resource
+manager) be implemented in the Variant DLL or the ASSP DLL. This allows the
+resource manager to be up and running by the time peripheral drivers, which
+might also be kernel extensions, are loaded. See the
A suggested implementation has the instance of the
For example, for a device that has a Variant and an ASSP:
+where class
where
The resource manager can be declared as a global object at the Variant
+(or ASSP) scope and can therefore be created when this component is first
+loaded. In this case, the
In this implementation, the Variant would export a header file(s) publishing +a list of controllable power resources.
+Some controllable power resources can be shared +between groups of peripherals and therefore may require usage tracking.
We
+recommend that shared power resources be represented by a class derived from
We suggest that your
While we have implied that
+a resource manager is a single class, which we have called
However, +for the purpose of explaining what a resource manager can do, it is easier +to assume that this behaviour is handled by a resource manager class. This +is the outline definition of such a class.
The
enable an appropriate +set of simple resources at boot time
initialise the shared
+power resource tracking objects, i.e. the instances of your
The function would be called by the Variant or ASSP implementation
+of
The
As an example, the most common method of switching +clock resources on or off is by setting and clearing bits in a hardware register. +A base port could choose to implement common code for setting bits in registers.
A
+shared power resource is further represented by a
The
+resource manager, or its functions and members, could be offered as part of
+a power controller interface class, the
The Variant or ASSP would also need to link to the +power controller DLL.
The following base port software architecture
+could be applied to the power supply arrangement as illustrated in the hardware
+block diagram above (see the beginning of this section at
Some power resources may have more than one operational level. For example, it is possible that an input clock to a peripheral can be operated at a nominal value corresponding to the fully active state, and may be lowered to another level corresponding to the power saving state.
The same could apply to the input voltages that the peripheral can operate on. The control model suggested above for simple resources may not be appropriate. A suggested alternative would have these multi-level resources implementing an interface that offers a public enumeration to cover all possible usage levels at which the resource can be used, and APIs such as:
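The interface described might look like the following purely illustrative sketch. The class and level names are hypothetical; a real implementation would program the clock or voltage hardware rather than store a member variable.

```cpp
#include <cassert>

// Sketch of a multi-level resource interface: the public enumeration
// covers all usage levels at which the resource can operate.
class MultiLevelResource
    {
public:
    enum TLevel { EOff, EPowerSaving, EFullyActive };

    void SetLevel(TLevel aLevel) { iLevel = aLevel; }   // program hardware
    TLevel Level() const { return iLevel; }             // query current level

private:
    TLevel iLevel = EOff;
    };
```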
More importantly, it is possible that resources may have to be operated in combination with other resources. For example, some peripherals may have more than one input clock, and on transition between power states, these clocks may have to be changed simultaneously.
If
+the resources to be actuated simultaneously are simple resources, then the
+model above with a
Modifying some resources may not be an instantaneous operation. For example, it may take a non-negligible amount of time to stabilise an input clock that has just been modified.
The nature of EKA2 strongly discourages actively waiting (or “spinning”) inside drivers; instead, it is suggested that the multithreading capabilities of EKA2 kernel-side code and the synchronisation mechanisms available be used to solve the problem of driver code having to wait for a resource to stabilise before proceeding with its use.
As an example, let us assume that the multi-level resource mentioned above has a non-negligible delay associated with any level changes:
If the length of time
+to wait for a power resource to stabilise is known, and is the order of a
+few milliseconds or less, or the driver executes on its unique kernel thread
+(not used by any other kernel-side code), then the best solution would be
+to sleep the driver thread, by calling
When the length of time
+to wait for a power resource to stabilise increases, then sleeping the driver
+thread may become a problem if the driver executes in a DFC queue used by
+other drivers, which is often the case. This is because blocking that thread
+means that no other drivers can run. Here, the best solution is for the driver
+to create another thread and DFC queue, by calling
If the length of time
+to change a power resource cannot be known in advance, but the hardware provides
+an indication of completion of the change, for example, in the form of an
+interrupt, then the solution may involve the driver code waiting on a Fast
+Mutex, which is signalled by the completion event (in code belonging to the
The generic files are stored in the
The files are:
+
The location of the source code and the header files for
+the platform independent layer of the
The suggested location of the source code and the header
+files for the platform specific layer of the
How you organize source and header files within this directory is up to you.
During first phase initialisation of start-up, the
+kernel calls the ASSP's implementation of
Note that interrupts are disabled during first phase initialisation.
Within
+your implementation of
Initialise
It is useful to initialise the ISR table so
+that each interrupt is bound, by default, to the
For example,
where the spurious interrupt handler function might be implemented as:
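The example code itself is not reproduced in this extract. A minimal self-contained sketch follows; in a real base port the handler would fault the kernel (typically via Kern::Fault()) reporting the interrupt id it was bound to, whereas here the fault is modelled by recording the id so the sketch can run standalone. All names are hypothetical.

```cpp
#include <cassert>

typedef void TAny;   // stand-in for the Symbian TAny type

int LastFaultedIntId = -1;

// Models Kern::Fault(): a real port would halt the system here.
void ReportFault(const char* /*aCategory*/, int aReason)
    {
    LastFaultedIntId = aReason;
    }

// Default ISR bound to every interrupt at initialisation time: any
// interrupt that fires without a real handler is treated as fatal.
void SpuriousInterrupt(TAny* aId)
    {
    // aId carries the interrupt number the handler was bound to
    ReportFault("SpuriousInt", (int)(long)aId);
    }
```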
On the template reference board, the ISR table is initialised
+by
Register the dispatcher
+functions. The Symbian platform kernel needs to know the location of the IRQ
+and FIQ dispatchers. It provides the two functions:
On the template reference board, the registration is
+done by
Bind any ISRs that handle chained or pseudo interrupts
During third phase initialisation of start-up, the
+kernel calls the ASSP's implementation of
Note that interrupts are enabled during third phase initialisation.
Within
+your implementation of
The personality layer assumes that the RTA will run in a single flat address space in which there is neither protection between different parts of the application nor protection of any hardware or CPU resource from any part of the application. For example, any part of the application code can access any I/O port, and can disable interrupts.
To get this behaviour under the Kernel Architecture 2, the RTA must run in supervisor mode in the kernel address space. The obvious way to do this is to make the RTA together with the personality layer a kernel extension. This also ensures that it is started automatically early on in the boot process.
In general the RTA will have
+its own memory management strategy and will not wish to use the standard
+Symbian platform memory management system. To achieve this, the personality
+layer will allocate a certain fixed amount of RAM for use by the real
+time application at boot time. For a telephony stack this will be
+around 128K - 256K. This can be done either by including it in the
+kernel extension's
A nanokernel thread will be used for each RTOS thread
A priority mapping scheme will be required to map RTOS priorities, of which there are typically 64 to 256 distinct values, to nanokernel priorities. As long as the RTA does not use more than 35 threads running simultaneously, which is usually the case, it should be possible to produce a mapping scheme that allows each thread to have a distinct priority, if needed. If this limit is exceeded, it will be necessary to fold some priorities together.
Note that any attempt to increase the number of priorities supported by both the nanokernel and the Symbian platform kernel would be prohibitively expensive in terms of RAM usage.
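A mapping scheme of the kind described above can be sketched as follows. This is illustrative only: it assumes RTOS priorities in the range 0–255 and 64 nanokernel priority levels, and the function name is hypothetical. Priorities actually in use each get a distinct nanokernel priority while that is possible; otherwise the range is folded down by division.

```cpp
#include <cassert>
#include <map>
#include <set>

// Map the set of RTOS priorities actually in use (0..255, ascending order
// of importance) onto nanokernel priorities (0..aNKernLevels-1).
std::map<int,int> MapPriorities(const std::set<int>& aUsed,
                                int aNKernLevels = 64)
    {
    std::map<int,int> mapping;
    if ((int)aUsed.size() <= aNKernLevels)
        {
        int n = 0;
        for (int p : aUsed)
            mapping[p] = n++;            // distinct priority per used level
        }
    else
        {
        for (int p : aUsed)
            mapping[p] = p * aNKernLevels / 256;   // fold priorities together
        }
    return mapping;
    }
```

Relative ordering of priorities is preserved in both branches, which is the property the scheduler actually depends on.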
To allow the functionality of the RTA to be available to Symbian platform applications, it is necessary that a mechanism exist by which Symbian platform code and the RTA may communicate with each other. In practice this means:
it must be possible for a Symbian platform thread to cause an RTOS thread to be scheduled and vice-versa
it must be possible for data to be transferred between Symbian platform and RTOS threads in both directions.
It will usually be possible for a Symbian platform thread to make standard personality layer calls (the same calls that RTOS threads would make) in order to cause an RTOS thread to be scheduled. This is because the nanokernel underlies both types of thread and most 'signal' type operations (i.e. those that make threads ready rather than blocking them) can be implemented using operations which make no reference to the calling thread, and which are therefore not sensitive to which type of thread they are called from.
The standard personality layer calls will not work in the other direction, since it will not be possible for a Symbian platform thread to wait on a personality layer wait object. The most straightforward way for RTOS threads to trigger scheduling of a Symbian platform thread would be to enqueue a DFC on a queue operated by a Symbian platform thread. Another possibility would be for the Symbian platform thread to wait on a fast semaphore which could then be signalled by the RTOS thread. However, the DFC method fits better with the way device drivers are generally written. A device driver will be necessary to mediate communication between Symbian platform user mode processes and the RTA since the latter runs kernel side.
All data transfer between the two
+environments must occur kernel side. It will not be possible for any
+RTOS thread to access normal user side memory since the functions
+provided for doing so access parts of the
A fairly common architecture for real time applications involves a fixed block size memory manager and message queues for inter-thread communication. The memory manager supports allocation and freeing of memory in constant time. The sending thread allocates a memory block, places data in it and posts it to the receiving thread's message queue. The receiving thread then processes the data and frees the memory block, or possibly passes the block to yet another thread. It would be a simple proposition to produce such a system in which the memory manager could be used by any thread. In that case a Symbian platform thread could pass messages to RTOS threads in the same way as other RTOS threads. Passing data back would involve a special type of message queue implemented in the personality layer. When a message was sent to a Symbian platform thread a DFC would be enqueued. That DFC would then process the message data and free the memory block as usual. This scheme combines the data transfer and scheduling aspects of communication.
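The architecture described can be sketched in a few lines of self-contained code. This is not Symbian code: the class names are hypothetical, the free list gives the constant-time allocate and free behaviour described, and the message queue is reduced to a plain FIFO of block pointers (a real system would block the receiver, or enqueue a DFC, when a message arrives).

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

// Fixed-block-size memory manager: constant-time Alloc/Free via a free list.
class FixedBlockPool
    {
public:
    FixedBlockPool(std::size_t aBlockSize, std::size_t aCount)
        : iStore(aBlockSize * aCount)
        {
        for (std::size_t i = 0; i < aCount; ++i)
            iFree.push_back(&iStore[i * aBlockSize]);   // build free list
        }
    void* Alloc()                     // constant time: pop the free list
        {
        if (iFree.empty()) return nullptr;
        void* p = iFree.back();
        iFree.pop_back();
        return p;
        }
    void Free(void* aBlock)           // constant time: push the free list
        {
        iFree.push_back(aBlock);
        }
private:
    std::vector<unsigned char> iStore;
    std::vector<void*> iFree;
    };

// A message queue is simply a FIFO of block pointers: the sender posts an
// allocated block; the receiver processes it and frees it back to the pool.
typedef std::deque<void*> MessageQueue;
```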
Any standard buffering arrangement could be used between the RTA and the device driver (e.g. circular buffers). Contention between threads could be prevented using nanokernel fast mutexes, on which any thread can wait, or by the simpler means of disabling preemption or disabling interrupts. It will also be possible for RTOS threads to make use of shared I/O buffers for transfer direct to user mode clients, provided that these buffers are set up by the Symbian platform device driver thread. This may be useful as a way to reduce copying overhead if bulk data transfer is necessary between the two domains.
The nanokernel does not support most of the synchronisation and communication primitives provided by a standard RTOS. Any such primitives required by the RTA will have to be implemented in the personality layer. This means that the personality layer needs to define new types of object on which threads can wait. This in turn means that new nanokernel thread states (N-state) must be defined to signify that a thread is waiting on an object of the new type. In general, each new type of wait object requires an accompanying new nanokernel thread state.
Blocking a thread on a wait object
To make a thread
+block on a new type of wait object, call the nanokernel function
Use the
State handler
Every thread that can use a new type of wait object must have a nanokernel state handler installed to handle operations on that thread when it is waiting on that wait object. A nanokernel state handler is a function that has the following signature:
Note that the state handler is always called with preemption disabled.
The possible values of
See the code in
Releasing the thread
When a thread's wait condition
+is resolved, the nanokernel function
The parameter is usually
sets the
cancels the +wait timer if it is still running
stores the supplied
+return code in
calls the
Most RTOSes allow interrupt service routines (ISRs) to perform operations such as semaphore signal, queue post, or set event flag directly, usually using the same API as would be used in a thread context. The Kernel Architecture 2 nanokernel does not allow this; ISRs can only queue an IDFC or DFC.
The way to get round this limitation is to incorporate an IDFC
+into each personality layer wait object. The personality layer API
+involved then needs to check whether it is being invoked from an ISR
+or a thread and, in the first case, it queues the IDFC. It may need
+to save some other information for use by the IDFC, for example, it
+may need to maintain a list of messages queued from ISRs, a count
+of semaphore signals from ISRs or a bit mask of event flags set by
+ISRs. Checking for invocation from an ISR can be done either by using
Hardware interrupts serviced
+by the RTA will need to conform to the same pattern as those serviced
+by Symbian platform extensions or device drivers. This means that
+the standard preamble must run before the actual service routine and
+the nanokernel interrupt postamble must run after the service routine
+to enable reschedules to occur if necessary. This can be done by calling
The client interface guide describes the generic procedure for using SDIO in a kernel-side device driver.
The Bluetooth example describes a reference client driver using Bluetooth.
The other tutorials describe the SDIO classes and commands in more detail.
SDIO is an input/output protocol used to communicate with SDIO (Secure Digital Input/Output) cards and other media such as Bluetooth adapters and GPS receivers.
This document is intended to be used by device driver writers.
A
+shared chunk is created in kernel space. The user is given a handle to the
+shared chunk to access it, and can pass the handle to other processes or other
+drivers. Chunks are represented by
A shared
+chunk is created by using
Chunk
+creation should be done in a critical section, created using
To allow a chunk to be properly cleaned up, a driver should close the chunk when it is no longer required. When a chunk is closed, the reference count is decremented by one. The chunk gets destroyed when the reference count becomes zero. Closing the chunk should be done within a critical section.
The destruction of the chunk happens
+asynchronously, and a notification of this can be requested. This is done
+using a DFC, by initialising
Shared chunks must be mapped to memory, which means that either RAM or an I/O device must be committed to a shared chunk before it can be used. This maps the chunk to a certain address. The memory can be physically contiguous RAM pages, an arbitrary set of RAM pages, a physical region, or a physical region with a list of physical addresses. The Kernel provides the following APIs for committing these types of memory:
This page lists the keywords starting from P to R.
+rombuild and rofsbuild
Use the
This is the
+same as specifying the
rombuild and rofsbuild
Same as
rombuild and rofsbuild
Use the
rombuild and rofsbuild
This overrides the code and data paging settings for all the files, such as EXEs and DLLs, in an OBY file. It takes a single argument, which can be one of the following:
For example, the following entry in the Obey file marks all the executables as unpaged:
rombuild and rofsbuild
This overrides the default
+settings for both code and data paging. It also overrides the settings
+from all the
For example, the following entry in the Obey file instructs the loader not to page the executables in the default state:
Identifies the payload of the SMR partition. This field allows kernel consumers (such as HCR) at runtime to locate the memory regions that contain their data.
Specifies the payload-specific flags that are defined at runtime by the payload provider tools or user and payload consumer. This field allows the provider to give metadata to the consumer of the payload.
BUILDROM only
This keyword was introduced in Symbian OS v9.3; it enables you to change the value of a constant that is exported by a binary while building the ROM image.
This means that the value of the constant can be changed without
+rebuilding the binary. This is useful for quickly building ROM images
+with different values for the constant, allowing you to make comparisons
+in behaviour between the ROMs. This keyword must be placed before
+or after the binary that it patches in the
For example, if a DLL named
then an executable file can import this constant; +for example:
If you add the following statement to the
Notes:
The value of the constant in the source is not changed. It is only its value in the copy of the binary incorporated into the ROM image that is changed.
Do not define
+a patchable constant (exported data) in the source file in which it
+is referred to, because the compiler may inline it. If a constant
+is inlined, it cannot be patched. Hence, the constant must be defined
+in a separate source file and must be referred to in other source
+files by using the
rombuild only
This is used when sectioning a
+ROM for language variants etc. If an executable is to be replaced,
+make it patched in the first section of the ROM and include a replacement
+in the top section of the ROM, after the
This keyword appears at the point in the obey file where the ROM is to be split. All files before this line appear in the first (constant) section and files after appear in the second (patch/language) section.
rombuild only
This is a keyword that affects Symbian platform security when building a ROM.
It controls whether or not diagnostic messages are emitted when a capability or other platform security policy check fails. A diagnostic message takes the general form:
if platform security is enforced
if platform security is NOT enforced.
The string xxxxx represents the text of the message that describes the capability being violated or the security policy check that is failing.
Specify
Specify
If neither
rombuild only
This is a keyword that affects Symbian platform security when building a ROM.
It allows capabilities to be added to, or removed from, all executables in the ROM image.
Specify a list of capability names prefixed either by a
Capabilities preceded by a
Any of the capabilities listed in the left-hand column in the table below can be specified; follow the corresponding link in the right-hand column for a description of that capability. Note that you can also use:
to add all capabilities, and
to remove all capabilities.
Note, however, that the combinations
TCB
CommDD
PowerMgmt
MultimediaDD
ReadDeviceData
WriteDeviceData
DRM
TrustedUI
ProtServ
DiskAdmin
NetworkControl
AllFiles
SwEvent
NetworkServices
LocalServices
ReadUserData
WriteUserData
Location
For example:
rombuild only
This is a keyword that affects Symbian platform security when building a ROM.
It controls whether or not platform security is enforced.
Specify
Specify
If neither
rombuild only
This is a keyword that affects Symbian platform security when building a ROM.
It controls
+whether or not to force the location of binary executables into the
Specifying
the loader only
+looks for files in the
Specifying
If neither
rombuild only
This is a keyword that affects Symbian platform security when building a ROM.
It controls whether or not insecure APIs inherited from EKA1 (versions 8.1a, 8.0a, 7.0s, and earlier) are to be disabled. These are APIs whose use is intended to be restricted. The kernel provides run-time checks for their correct usage.
See the
Specify
Specify
If neither
rombuild only
This keyword specifies that the major version of the executable binary must be preferred over its minor versions. The minor version specifies the revision of the executable. The major version enables you to identify whether two versions of an executable are compatible. Two versions of an executable can be compatible only if they have the same major version.
The executable's header stores the minor and major versions of the executable.
rombuild only
A standard executable file that is loaded directly, bypassing the file server. The Boot program loads and gives control to the primary; this is the Kernel.
As with all standard executable files, this is loaded, relocated and stripped of its relocation information.
Note that the
rombuild only
Sets the priority of the process. The priority can be a hexadecimal number, or one of the keywords listed below. The keywords correspond to the associated priority value.
rombuild only
This keyword specifies the process to which a DLL is attached.
rombuild only
Overrides the default stack size for the executable.
rombuild and rofsbuild
Defines a comment line. Text that appears after the rem keyword is interpreted as a comment.
rombuild and rofsbuild
Adding a file and then renaming it is equivalent to adding it directly at the rename destination. The existing and destination directories do not have to be the same.
BUILDROM only
A pre-defined substitution. This
+is replaced with the exact time in the format
Note that there is no UNDEFINE facility, and substitutions are applied in an unspecified order.
rofsbuild only
Defines the name of the core image.
rofsbuild only
Specifies the maximum size of the core image, or the maximum size of the extension.
rombuild only
The address alignment boundary for files in the ROM.
This value should be greater than 4
+and a multiple of 4. The recommended value is 0x10. If no value is
+specified,
BUILDROM only
Adds additional command line parameters
+to the eventual invocation of
rombuild and rofsbuild
The checksum in the final ROM image is made using the following algorithm:
checksum word = romchecksum value - (sum of all the other 32-bit words in the ROM image).
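One reading of this algorithm, sketched under the assumption that the image reserves a single checksum word: the word is chosen so that the wrap-around 32-bit sum of every word in the image equals the requested romchecksum value. The function name is hypothetical and this is arithmetic illustration only, not ROMBUILD source.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Compute the value of the reserved checksum word, given every other
// 32-bit word of the image and the target romchecksum value.
std::uint32_t ChecksumWord(const std::vector<std::uint32_t>& aOtherWords,
                           std::uint32_t aRomChecksum)
    {
    std::uint32_t sum = 0;
    for (std::uint32_t w : aOtherWords)
        sum += w;                  // unsigned 32-bit arithmetic wraps
    return aRomChecksum - sum;     // checksum word = target - sum of rest
    }
```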
BUILDROM only
Defines a ROM image.
This is a ROM configuration feature; up to 8 ROM images can be defined for a device.
To mark a file for inclusion in a ROM, it is prefixed with the keyword ROM_IMAGE. For example:
A block of files can be included using '{' '}' braces, for example:
File blocks can be nested, for example:
rombuild only
The virtual address of the start of ROM, in hex.
This is the address at which the kernel expects to find the start of the ROM. The recommended value depends on the memory model:
For the Multiple Memory Model, typically used on ARMV6-based hardware platforms, the recommended value is 0x80000000.
For the Moving Memory Model, typically used on ARMV5-based hardware platforms, the recommended value is 0xF8000000.
rombuild only
This is the name of the output
+file. It contains the ROM image that
rombuild only
rombuild only
rombuild and rofsbuild
The size of the entire ROM,
+in hex, for example,
Thrashing is a state where the vast majority of the processing time is spent paging memory in and out of the system and very little is spent on useful work. If this situation persists, then system performance rapidly becomes unacceptable and the device will appear to the user to have hung.
The following is useful background reading:
Thrashing
When thrashing occurs, the device will appear to do nothing or 'hang' for a period. Unlike a deadlock, however, the device can recover from thrashing.
The following can be used to reduce the chance of thrashing:
Compress pages in memory to decrease the amount of memory that has to be paged in and out
Limit the maximum size of each process's paged memory to the minimum size of the paging cache
Partition the page cache per-process and allocate pages to processes based on the Working Set Size (WSS) of threads in that process
Swap out the oldest pages in advance to free up memory to cope with spikes in paging demand
Don't let threads steal pages from other threads
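Two of the mitigations listed above (capping each process's share of the paging cache, and preventing threads from stealing pages from other threads' processes) can be sketched together. This is illustrative only; the class name and page-id representation are hypothetical, and eviction here is simple FIFO rather than a real page-ageing policy.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <map>

// Per-process paging cache partition: when a process exceeds its limit,
// its own oldest page is evicted, so it never steals pages from others.
class PerProcessPageCache
    {
public:
    explicit PerProcessPageCache(std::size_t aPerProcessLimit)
        : iLimit(aPerProcessLimit) {}

    // Page aPage in for aProcess; returns the page evicted to make room,
    // or -1 if the process was still under its limit.
    int PageIn(int aProcess, int aPage)
        {
        std::deque<int>& pages = iPages[aProcess];
        int evicted = -1;
        if (pages.size() >= iLimit)
            {
            evicted = pages.front();   // evict this process's oldest page
            pages.pop_front();
            }
        pages.push_back(aPage);
        return evicted;
        }
private:
    std::size_t iLimit;
    std::map<int, std::deque<int>> iPages;
    };
```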
The possible courses of action to undertake, should thrashing occur, are:
Reboot the device
Kill a thread
Decrease the number of paged threads running concurrently
Pick one thread with a high page fault rate and stop other threads stealing pages from the thread's associated process
Prompt the user-side memory manager to close down applications.
The Time platform service provides the state of the device (mobile). It is controlled by a state machine, the System State Manager (SSM). These states are defined by system state policies within an SSM plug-in, and system state changes are triggered by system-wide property changes or external requests.
+Alarm Server: manages all alarms on the device and enables client UI applications to use the services provided by the Alarm Server. It relies on the Alarm UI for notification, which includes displaying alerts and playing sounds.
Alarm Client and Alarm Shared: these are the static interface DLLs. Alarm Client is the client side of the Alarm Server, which allows other components to interact with the Alarm Server. Alarm Shared provides a common format, shared across the server and its clients, for defining an alarm. It consists of shared objects such as the alarm class, IPC messages, repeat definitions, and alarm states.
System State Manager (SSM): manages the state of a device throughout its lifecycle. SSM extends the existing Generic Start-up Architecture (GSA) and also manages the system state and the system-wide property.
Including session alarms
Adding, updating and deleting alarms.
Performing alarm category-based operations such as retrieval +and deletion.
Activating an alarm on a specific date.
There are no specific build instructions for Register Access platform
+service. The Register Access platform service is part of the ASSP layer. You
+should use the
The Power Resource Manager (PRM) is a framework for managing system power resources. This framework improves portability across different platforms and reduces device driver complexity.
+Introduction
The new keywords that are available in the OBY file are used to specify whether an object is paged, and if so what kind of demand paging it supports.
With the addition of writable data paging, the list of paging modifiers is shown below:
Procedure
These modifiers appear in OBY files at the end of statements
+relating to user-side ROM objects, i.e.
The data paging keywords only make sense when applied to executable binary files.
For an executable file, the use of one of the
+above keywords in the oby file will override the paging settings in
+the mmp file. However, this will not be true if the
Executing the buildrom command should build with no errors or warnings.
An example of an OBY file that uses the new demand paging keywords is given below:
The next step is to
Internally the scheduler always deals with nanokernel
+threads,
A
+Symbian platform thread, a
There are two ways of setting a priority for a Symbian platform thread:
using the two-level priority scheme
using an absolute priority.
The two-level priority scheme
In this scheme, a Symbian platform thread priority is relative to the priority of its owning process. By default, Symbian platform threads inherit the priority of their owning process when they are created. This priority can be raised or lowered relative to the process priority - this just sets the thread’s priority to the process priority plus or minus a specified priority weighting. If the priority of the process is changed, the priority of its threads will change relative to other threads in the system but will remain the same relative to each other.
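The two-level scheme described above amounts to a simple calculation, sketched here with illustrative numbers (the priority range and function name are assumptions, not the real kernel values):

```cpp
#include <algorithm>
#include <cassert>

// Thread priority = owning process's priority plus a signed weighting,
// clamped to the valid priority range (illustrative range 0..63).
int ThreadPriority(int aProcessPriority, int aWeighting)
    {
    const int KMin = 0, KMax = 63;
    return std::max(KMin, std::min(KMax, aProcessPriority + aWeighting));
    }
```

When the process priority changes, re-evaluating this for each thread moves them all together while preserving their relative order, which is exactly the behaviour the text describes.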
The default priority
+of a process is
The NULL thread, also known as the idle thread, runs at priority 0, which means that it will only run when there are no other threads ready to run.
Symbian
+platform thread priorities map onto
where:
the process priority
+values are defined by the internal Symbian platform enum
the thread priority
+values are defined by the internal Symbian platform enum
Absolute priority scheme
It is possible to set an absolute priority that is not relative to the process priority; it is not affected by changes in the process priority.
This is a brief summary of nanokernel thread states and Symbian platform thread states.
Nanokernel thread states
The state of a nanokernel thread is referred to as the NState (or N-state). This is to disambiguate it from any other state, such as the state of a Symbian platform thread (referred to as the MState or M-state).
The states of a nanokernel thread are defined by the
+values of the
Symbian platform thread states
The state of a Symbian platform
+thread is referred to as the MState (or M-state). This is in addition to the
+nanokernel N-state, and tracks threads waiting on Symbian platform synchronization
+objects. The
User threads and processes have “exit information”. When a thread or process terminates, the reason for the termination is found in the exit information. For example, a panic will store the panic category and reason in the exit information. Exit information has three parts: the exit type, exit reason and exit category.
Exit type
+is defined by the
When a thread or process is created, its exit type is set to 3. An exit type of 3 indicates that the thread is still active, though not necessarily running. If the thread terminates for any reason, then the exit type is changed to reflect the cause of the exit.
Once the thread or process has exited, the exit reason and exit type fields will contain useful information. The contents depend on the type of exit.
Note that if the main thread in a process exits, then the process will exit with the same exit information as the thread.
Exit category: Terminate
if
Exit category: Kill
If
Exit category: Panic
If a thread panics, then the exit category
+is
Marking a thread or process as “system critical” means that it is an integral and essential part of the system, for example, the file server. In effect, the thread or process is being declared necessary for correct functioning of the device. If a system critical thread exits or panics, then the device will reboot; during development it will enter the debug monitor. A thread can be set as process critical, which means that if it panics, the process will be panicked.
When a user thread makes a call into any kernel code, the kernel code continues to run in the context of the user thread. This applies to device driver code.
The stack is swapped to a kernel-side stack and the permissions of the thread are increased to kernel privilege, but otherwise the user thread is still running. Each thread has a small kernel stack used to handle kernel calls – it would be dangerous to continue using the normal thread stack in case it overflows. Some calls are handled in this state; others – typically device drivers – will post a message to a kernel side thread to carry out the request.
When
+a process is created, a chunk is allocated to hold the process executable's
By default, each thread is allocated 8K of user-side stack space. A guard of 8K is also allocated.
The stack
+area follows the
Return addresses are stored by pushing them on to the stack, so at any point you can trace through the stack looking at the saved return addresses to see the chain of function calls up to the present function.
The size of the user-side stack space has an indirect effect on the number of threads that a process can have. There are other factors involved, but this is an important one. The limit is a consequence of the fact that a process can have a maximum of 16 chunks. This means that if threads within a process can share a heap (allocated from a single chunk), then it is possible to have a maximum of 128 threads per process [2MB/(8K + 8K)]. More threads may be possible if you allow only 4K of stack per thread.
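The arithmetic behind the 128-thread figure is simply the stack-chunk budget divided by the per-thread cost (stack plus guard):

```cpp
#include <cassert>

// Worked arithmetic from the text: 2MB budget, 8K stack + 8K guard per
// thread, gives 2048 / (8 + 8) = 128 threads per process.
int MaxThreads(int aBudgetKB, int aStackKB, int aGuardKB)
    {
    return aBudgetKB / (aStackKB + aGuardKB);
    }
```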
Apart from the kernel stack attached to each thread, the kernel also maintains stacks that are used during processing of interrupts, exceptions and certain CPU states. Interrupts and exceptions can occur at any time, with the system in any state, and it would be dangerous to allow them to use the current stack, which may not even be valid, or may overflow and panic the kernel. The kernel stacks are guaranteed to be large enough for all interrupt and exception processing.
Symbian platform devices have an MMU which is used to map the addresses seen by running code to real addresses of memory and I/O. The MMU in effect creates a virtual memory map, allowing scattered blocks of RAM to appear contiguous, or for a section of memory to appear at different addresses in different processes, or not at all.
Symbian platform uses the MMU to provide memory protection between processes, to allow sharing of memory, efficient allocation of RAM, and to make all processes “see” the same memory layout. Three different memory models are supported by Symbian platform on ARM CPUs:
moving model: this is the model familiar from EKA1 where processes are moved to a run-address in low memory when executing and moved back to a home-address in high memory when not running.
direct model: this is used when the CPU does not have an MMU, or is emulating a system without an MMU. Not normally used, but occasionally useful for development boards.
multiple model: only supported in ARM architecture V6 and above, each process has its own set of MMU tables. A context switch changes the current MMU table to the new thread’s table, instead of moving memory about in a single table as with the moving model.
Fixed processes
For ARM architectures with a virtually-tagged cache, fixed processes avoid the need to flush the cache on context switches by keeping all the code and data at a fixed address. This implies that there can only ever be one instance of each fixed process, because the data chunk address cannot be changed.
Important servers such as the file server and window +server are fixed.
There is no limit to the number of fixed processes that can be supported. The kernel will attempt to use ARM domains for fixed process protection, but there is a limited number of domains, so when they are exhausted normal MMU techniques will be used. Domains are slightly faster in a context switch, but this is negligible compared to the real purpose of the fixed process: avoiding the cache flush.
When the kernel crashes, control is handed to the crash logger. It tries to read as much state information as it can reasonably find, and dumps it to a pre-reserved location in permanent storage (flash storage).
Support exists for dumping information to NOR flash and NAND flash. The crash logger must have its own drivers to do this, as it cannot be assumed that any specific part of the kernel is operating correctly in these circumstances. These drivers are essentially simplified versions of the standard drivers.
All monitoring functionality, crash logging and crash debugging is placed in kernel extensions. Functionality can be enabled or disabled by placing the appropriate kernel extensions into ROM at ROM build time.
To support both crash logging and debugging without duplicating code, a common module,
This
extension must be placed in ROM before either of the two monitoring extensions, as they both rely on its functionality. After the common monitoring code registers itself with the kernel, further debugging extensions may register with the common code. These extensions are called
when the kernel fails, in the order in which they register. The maximum number of extensions currently supported is nine, though only two (the interactive debugger and the non-interactive crash logger) are provided. However, this is an arbitrary limit set by the macro definition
These DLLs are also built from the Variant. In the case of the crash
+logger, two separate
one for NAND flash, building
one for NOR flash, building
The rombuild scripts ensure that the crash logger for only one type of flash is placed into the ROM, and named as
At the next system boot, the crash reader application uses the normal system flash drivers to read the crash log from the reserved non-filesystem (non-user-visible) flash area, and to write it into the user-visible file system.
The
+output file from the crash reader is a text file, or a compressed
The
+crash reader application is called
The macros are used in the platform-specific configuration header file,
Includes support for the L210 cache in the bootstrap.
Includes support for the L220 cache in the bootstrap.
Many aspects of the way that Sound Driver recording operates are similar to the way it handles playback. One difference is how the memory location in the
The driver commences
+recording when the client issues the first
The client specifies the number and size of the record buffers available to the driver within the shared chunk by calling either
When the driver starts recording, all the buffers in the shared chunk are empty and available to the driver. They are filled one by one. If the client is slow to request the recorded data, then once the driver has filled all of the available empty buffers, it is forced to discard the earliest one filled and re-use it to continue recording data.
Each time the client requests a buffer's worth of recorded data with
When buffers are in use by the client, the number of buffers available to the driver for capture is reduced. If the client is slow to release buffers and the number of available buffers falls to two, then further
Buffers
The driver maintains three buffer lists:
free list
completed list
in-use list.
A record buffer can exist in only one of these lists at any time. The free list contains buffers that are empty and not in use by the client. Once a buffer has been filled with record data, it is moved into the completed buffer list. Here the buffer remains until it is passed back to the client in response to a record request. While a client is using the buffer, it is deemed to be in use and is moved to the in-use list. Each time the client successfully calls
The driver also maintains two record buffers that are excluded from all three lists.
the current buffer, the one actively being filled with record data
the next buffer, which becomes the active buffer once the current buffer is filled
During recording there may be DMA requests pending for both the current buffer and the next buffer.
The numbers one to five show the buffer cycle under normal operation, +while the letters A to C show error induced operation.
When recording commences, the driver removes two buffers from the free list, making one the current buffer and the other the next buffer (4 and 5).
When the current buffer is set as filled, the LDD normally adds it to the completed list (1). If a record error occurred while recording to this buffer and it is only partially filled, or even empty, then the buffer is still added to the completed list, as the client needs to be informed of the error (1). The only exception is in handling record pause, where a record buffer ends up being completed with no error and with no data added. In this case the buffer is added straight to the free list (A).
Having added the current buffer to one of these lists, the driver makes the next buffer the current buffer (5) and then obtains a new next buffer (4). In normal operation this comes from the free list, but if the client is slow, this list may be empty and the buffer is taken from the completed list (B). This is a buffer overflow situation, which is reported to the client as a response to its next
Whenever a buffer is filled, the driver checks if there is a record request pending (1). Similarly, when the driver processes a new record request it checks if a filled buffer is already available. In either case, if a request can be completed to the client, then the earliest buffer completed is returned. If this buffer was filled successfully, then it is added to the in-use list (2). However, if an error occurred whilst filling the buffer, then it is returned straight to the free list instead (C).
Each time the client successfully
+calls
RecordData()
If
+the driver is not already recording data then the first
Returning to the case of a record request where the driver is not already in the process of recording data, the LDD first checks whether the client has specified or supplied a shared chunk to the driver channel and has set the audio configuration and record level. If the buffer configuration has not been specified, then the driver cannot proceed and returns
StartTransfer()
Depending on the mapping attributes of the shared chunk, the LDD may now need to purge the region of the record chunk concerned. Next, the LDD calls
TransferData()
The LDD may need to break down the record buffer into memory fragments. These specify a physically contiguous region and are manageable by the PDD as a single transfer. The LDD queues as many transfer fragments of the current buffer on the PDD as the PDD can accept, with a call to
RecordCallBack()
Each time the PDD completes the transfer of a fragment from a record buffer, it must signal this event back to the LDD by calling the function
The client can temporarily halt the progress of audio recording at any time by issuing
If the PDD reports an error when setting up the audio hardware device for recording, then the LDD immediately completes the first record request back to the client, returning the error value as the result. It will not restart record data transfer unless it receives a further record request from the client.
If the PDD reports an error when commencing the transfer of a record fragment, or as the result of the transfer of a record fragment, then the LDD ceases transfer to that record buffer and instead reports the error back to the client. The error is returned in response to the record request that corresponds with that buffer.
Unexpected errors from the PDD are returned to the LDD via the functions
The LDD does not try to cancel the transfer of other fragments for the same buffer that are already queued on the PDD, but it ignores their outcome. However, the LDD does try to carry on with the transfer to other available record buffers.
Unlike ROM, the data in Flash memory can be changed by an application.
Features of Flash memory are:
Writing and erasing of memory must be undertaken in blocks of memory.
Memory contents are retained until erased (regardless of whether the +power has been turned on/off).
The following are limitations of Flash memory:
There are a finite number of erase-write cycles (usually 100,000).
NAND Flash cannot be executed in place. Instead, the contents have to be loaded into RAM and then executed.
To read data from the host to
+the device (
If the buffer already has data in it, it will return immediately with the code
The first
+parameter required by
When the request has completed, process the +data in the buffer that was read from the driver.
The transfer buffer for an IN endpoint is always pointed to by the
If there is no more +data to be read then close the channel and unload the driver.
When you have finished reading and writing
The following diagram shows the main parts of the architecture:
The DMA Framework is implemented as a single DLL, which is split into two layers:
a platform-independent layer that implements the behaviour that is common to all hardware
a platform-specific layer that implements the behaviour that is specific to a particular platform.
The DLL is called
The platform specific layer interfaces to the DMA controller hardware via +the I/O port constants and functions exposed by the ASSP DLL and/or Variant +DLL.
The clients of the DMA Framework are physical device drivers (PDDs).
An asynchronous request is typically used to start an operation on a device that completes at a later point in time. It returns immediately after starting the operation on the device or making a request to the driver. The user-side thread is not blocked and can continue with other operations, including issuing other requests (synchronous or asynchronous).
+A driver lists the available asynchronous request types in an enumeration.
The DMA framework provides a simple, generic interface for clients to access the DMA resources in a device. The primary clients are device drivers that use the DMA framework to set up DMA transfers. A device can have more than one DMA controller (DMAC).
The key concepts related to DMA framework are:
A device driver or a kernel object that needs to use the resources +of the DMA framework. On the Symbian platform, the Physical Device +Drivers are the primary clients of the DMA framework.
The generic interface to provide DMA services to the clients.
The DMA clients link against the
The platform-independent layer contains the generic framework +independent from the hardware.
The platform-specific layer is specific to the baseport and +the hardware used.
The DMAC is the DMA hardware implementation. The number of DMACs in the implementation of the DMA Framework depends on the hardware. The DMAC is a peripheral to the CPU and is programmed to control data transfers without using the CPU.
The key concepts related to functionality +provided by the DMA framework are:
Some DMA controllers transfer data using scatter/gather mode. +In this mode, the operating system creates a DMA descriptor with source +and destination memory addresses and the configuration. The source +and destination data can be a list of scattered memory blocks. The +DMA controller uses this configuration to perform the data transfer.
The data structure used by the DMA framework to store the configuration of a data transfer. A descriptor stores details such as the source address, destination address, number of bytes, and a pointer to the next descriptor when DMA is used to transfer disjoint blocks of data. This structure is defined in the platform-specific layer. Descriptors are always used to store the configuration, even if the DMA controller does not support scatter/gather mode. When scatter/gather mode is not supported by the DMAC, the descriptors used to store the configuration are called pseudo-descriptors.
The descriptor headers are used to store additional information +about the descriptors. The descriptors are accessed using the descriptor +headers which contain the pointer to the descriptor.
The device drivers configure and initialize a data transfer
+using the class
memory to memory
memory to peripheral
peripheral to memory
The object that represents a single hardware, logical, or virtual channel. In the DMA platform service API, each DMA channel is referred to by a 32-bit value called a cookie. Each DMA transfer is in the form of a descriptor header and its associated DMA descriptor. The queue of transfer requests (both being transferred and pending) consists of a linked list of the descriptor headers.
The channel manager controls the opening and closing of DMA +channels. The channel manager functions are defined in the platform +independent layer and implemented in the platform specific layer.
The typical use cases of a DMA framework are:
programmable data transfer between hardware peripherals and +memory buffers
programmable data transfer of block of data between different +regions of memory
The Register Access platform service is used by many of the other +platform services for either reading values from or setting the configuration +of hardware in the device.
+The
The
The
The
The
The
Since this platform service is used by many of the other platform +services, it is of interest to both device driver developers and hardware +implementors.
A shared chunk is a mechanism that kernel-side code can use to share memory +buffers safely with user-side code. This topic describes this concept, and +explains how to use the shared chunk APIs.
+
You may find it useful to refer to the general sections:
A shared chunk is a mechanism that kernel-side code uses +to share memory buffers safely with user-side code. References to kernel-side +code always mean device driver code.
The main points to note about +shared chunks are:
They can be created +and destroyed only by device drivers. It is typical behaviour for user-side +code, which in this context we refer to as the client of the device +driver, to pass a request to the device driver for a handle to a shared chunk. +This causes the device driver to open a handle to the chunk and return the +handle value to the client. Successful handle creation also causes +the chunk's memory to be mapped into the address space of the process to which +the client's thread belongs. Note, however, that the driver dictates when the +chunk needs to be created, and when memory needs to be committed.
Like all kernel-side +objects, a shared chunk is reference counted. This means that it remains in +existence for as long as the reference count is greater than zero. Once all +opened references to the shared chunk have been closed, regardless +of whether the references are user-side or kernel-side, it is destroyed.
User-side code that +has gained access to a shared chunk from one device driver can pass this to +a second device driver. The second device driver must open the chunk +before it can be used.
More than one user-side +application can access the data in a shared chunk. A handle to a shared chunk +can be passed from one process to another using standard handle passing mechanisms. +In practice, handles are almost always passed in a client-server context via +inter process communication (IPC).
Processes that share +the data inside a chunk should communicate the location of that data as an +offset from the start of the chunk, and not as an absolute address. +The shared chunk may appear at different addresses in the address spaces of +different user processes.
A shared chunk can be created only by code running +on the kernel-side. This means that it is the device driver's responsibility +to create a chunk that is to be shared by user-side code. There is no user-side +API that allows user-side code to create shared chunks.
The device
+driver creates a shared chunk using the
Chunks are reference +counted kernel objects. When the chunk is created, the reference count is +set to one.
See
Before user-side code can access the memory in a shared chunk, the device driver must create a handle to the chunk and then pass the value back to the user-side. It does this by calling the
a pointer to the user-side +code's thread (or NULL if referring to the current thread)
a pointer to the shared +chunk
Typically, the device driver does this in response to a request from
+the user-side.
If the creation of the handle is successful, the device driver returns the handle value back to the user-side. The user-side code then assigns the value to an
The user-side code uses
Opening +a handle to a shared chunk increases the reference count by one.
See
After it has been opened, a device driver may need to perform further operations before the handle can be returned to the user-side. If these operations fail, the device driver code may want to unwind the processing it has done, including discarding the handle it has just created. It does this using the function
This
+reverses the operation performed by
There is no explicit method for deleting or destroying shared chunks. Instead, because chunks are reference counted kernel objects, a device driver can use the function
The chunk is not destroyed immediately; destruction happens asynchronously. If the device driver needs to know when the chunk has been destroyed, it must specify a DFC (Deferred Function Call) at the time the chunk is created. The device driver specifies the DFC through the
Note: For each call
+to
After a shared chunk has been created, it owns a region of contiguous virtual addresses. This region is empty, which means that it is not mapped to any physical RAM or memory-mapped I/O devices.
Before the chunk can be used, the virtual memory must be mapped to real physical memory. This is known as committing memory.
Committing RAM from the system free memory pool
You can commit RAM taken from the system free memory pool in the following ways:
by committing an arbitrary
+set of pages. Use the function
by committing a set
+of pages with physically contiguous addresses. Use the function
Committing specified physical addresses
You can commit
+specific physical addresses, but only if you set the data member
You can do this in the following ways:
by committing a region
+of contiguous addresses. Use the function
by committing an arbitrary
+set of physical pages. Use the function
Note: the same physical memory can be committed to two different
User-side code that has access to a shared chunk from one device driver may want to use it when communicating with another device driver. To enable this, the second device driver needs to gain access to the chunk and the addresses used by the memory it represents.
The second driver must open a handle on the shared chunk before any of its code can safely access the memory represented by that chunk. Once it has done this, the reference counted nature of chunks means that the chunk, and the memory it represents, remains accessible until the chunk is closed.
The general pattern is:
the first device driver +creates the shared chunk.
See
the user-side gets the
+handle value from the first device driver and calls
See
the user-side passes
+the handle value to the second device driver. This value is obtained
+by calling
the second device driver calls the variant of
to open a handle to the shared chunk.
Note: there are situations
+where the second device driver cannot use this variant of
The user-side application +may have obtained data by using a library API that uses shared chunks internally, +but which only presents a descriptor based API to the user application. For +example, an image conversion library may perform JPEG decoding using a DSP, +which puts its output into a shared chunk, but that library supplies the user +application with only a descriptor for the decoded data, not a chunk handle.
The communication channel +between the user-side application and the device driver supports descriptor +based APIs only, and does not have an API specifically for shared chunks. +The API items presented by the File Server are an example of this situation.
The second device driver will only have the address and size of the data (usually a descriptor). If the driver needs to optimise the case where it knows that this data resides in a shared chunk, it can use the variant of
to speculatively open the chunk.
Getting the virtual address of data in a shared chunk
Before device driver code can access a shared chunk that it has opened, it must get the address of the data within it. Typically, user-side code will pass offset and size values to the driver. The driver converts this information into an address, using the function
Getting the physical address of data in a shared chunk
Device driver code can get the physical address of a region within a shared chunk from the offset into the chunk and a size value. This is useful for DMA or any other task that needs a physical address. Use the function
Getting chunk attributes and checking for uncommitted memory
As a shared chunk may contain uncommitted regions of memory (gaps), it is important that these gaps are detected. Any attempt to access a bad address causes an exception. You can use the function
User-side code can access data in a shared chunk once it has opened a handle on that chunk. Handles can be passed between user processes using the various handle-passing functions. The most common scenario is client-server Inter Process Communication (IPC).
Passing a handle from client to server
The client passes the handle to the shared chunk as one of the four arguments in a
Passing a handle from server to client
The +server completes the client message using the chunk handle:
The
+client then assigns the returned handle value to an
Note:
Processes that share data within shared chunks must specify the address of that data as an offset from the start of the chunk, and not as an absolute address. This is because the chunk may appear at different addresses in the address spaces of different user processes.
Once a chunk is no longer needed for data sharing, user applications should close the handle they have on it. If they do not, the memory mapped by the chunk can never be freed.
See
When DMA or any other hardware device accesses the physical memory represented by a shared chunk, the contents of the CPU memory cache(s) must be synchronised with that memory. Use these functions for this purpose:
Note: both these functions take a
This section contains code snippets that show shared chunks in use. Most of the code is intended to be part of a device driver. A few snippets show user-side code and the interaction with device driver code.
Example: Creating a shared chunk
This code snippet shows how a device driver creates a shared chunk. The class
Implementation notes
a
If the device architecture
+allowed the device driver function
See also:
The line:
in
+the function
sets
+the DFC that runs when the shared chunk is finally closed. See
Example: Opening a handle to the shared chunk for user-side access
These code snippets show user-side code making a request to a device driver to open handles on two shared chunks.
The code snippets assume that the chunks have already been created.
User-side
The user-side interface to a device driver is always a class derived from
You call the function
issue a request to the +driver to create handles to two chunks
get handles to those +shared chunks so that they can be accessed by user-side code.
Implementation notes
The request is passed to the driver as a synchronous call; it calls the base class function
The driver returns the
+handle numbers in the
To access the shared chunks, user-side code needs handles to them. Handles to chunks are
The final step is to set the returned handle numbers into the
See
In this example the
+return value from
Device driver (kernel) side
This example code snippet shows the device driver side handling of the request made by the user-side to open handles on two shared chunks.
The request is handled by the
Details of how
+a user-side call to
The following code snippet is the implementation of
Implementation notes
The function calls
The first handle created +is closed if the second handle cannot be created:
The handle values are
+written back into the
Example: Using a DFC to notify destruction of a chunk
This set of code snippets shows how to set up a DFC that notifies a device driver when a shared chunk has finally been destroyed.
Implementation notes
The DFC is an item of information that is passed to
Ensure that the code +implementing this DFC is still loaded when the DFC runs. As you cannot know +when a chunk will be destroyed, the code should reside in a kernel extension.
Example: Committing memory
This code snippet shows how memory is committed. It is based on the implementation of the class
READIMAGE is a command-line tool that provides readable data from a ROM, ROFS, or E32 image. It can also generate an IBY file from a SIS file, which can be used to generate a data drive image.
Run the following command to use the READIMAGE tool:
where:
Reading ROM, ROFS and E32 images
Run the following command to read a ROM, ROFS, or E32 image:
The command-line options are as follows:
Note: Options are not case-sensitive. READIMAGE +reports an error if an argument is not passed for the -x or -z option.
Extracting one file
The command below
+extracts all the files from
Extracting a set of files with wildcard ?
The READIMAGE tool provides the -x option to extract one or more files from a ROM or ROFS image, based on a pattern that matches the name of the file. A question mark (?) matches a single instance of any character in a file name, within the directory.
The command below extracts all the
Extracting a set of files with wildcard *
The READIMAGE tool provides the -x option to extract one or more files from a ROM or ROFS image, based on a pattern that matches the name of the file. A star (*) matches zero or more instances of any character in a file name, within the directory.
The command below extracts all the
The command
+below extracts all the
The command below extracts +all the .dll files from the \sys\bin\ directory and its sub-directories +present in the sample.img file, into the current directory.
Generating an IBY file from a SIS file
Run the
+following command to generate an
The following command-line options are supported when generating
+an
To generate an
Reports an error
+if the
Extracts the
+contents of the
Creates an
The
For more information about creating
READIMAGE extracts the
Languages and Package-header section
The details of the Languages and Package-header sections are written in the
Install-file section
In the Install-file
+section, the standard installation files, which are specified with
+the installation option
Embedded-SIS section
The details of
+the Embedded-SIS section are written using
Conditions-block section
In the Conditions-block
+section, the IF statements are written using the
The condition-primitives
+and the functions in the
The relational and logical operator +keywords are written using the appropriate operators.
Note: Using the
The attributes that can be used on a device are defined in two configuration files.
+The purpose of this file,
are read-only
can be read +and written
are simply stored +by the HAL - the non-derived attributes
require calls +to other components to obtain or set their values - the derived attributes.
The file must be created in the
Note that:
all supported
+attributes must have an entry in the Config file
all non-derived
+attributes also need an entry in the
The build system uses the Perl script
After initial implementation, any changes to the files will be picked up by the supplied makefile,
Any additional functions must be added to the implementation file by hand, e.g. in a file called
The config file consists of a number of lines of text in +the form:
where the pair of square brackets indicates optional items.
<attrib>
This is one of the values of the enumeration
<prop1>, <prop2>,...
These are optional property specifications defined by the enumeration
The property
The property
<implementation>
Defines whether an
0, which means +that the specified attribute is not derived, but is just a constant +stored by the HAL.
the name of
+a function used to implement
Comments
The order in which attributes are defined +is not important. Comments may be included by starting a line with +//. A comment must be on its own line and cannot be appended +to an attribute specification.
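Following the grammar just described, entries in the config file might look like this. The attribute names come from the HAL attribute enumeration; the property keyword and the implementation function name shown here are illustrative assumptions, not taken from a real baseport:

```
// Non-derived attribute: a constant simply stored by the HAL.
EManufacturer = 0

// Derived, writable attribute: implemented by a (hypothetical) function.
EDisplayBrightness : set = ProcessDisplayBrightness
```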
The purpose of this file,
The build system uses the Perl script
After initial implementation, any changes to the file will be picked up by the supplied makefile,
Any additional functions must be added to the implementation file by hand, e.g. in a file called
The values file consists of a number of lines of text in +the form:
<attrib>
This is one of the values of the enumeration
<value>
This is the initial value of the attribute. +It may be specified in various ways, but depends on the attribute, +and may be one of the following:
a decimal integer +constant
a hexadecimal integer constant, by prepending
an enumerated value, for those attributes that have a corresponding enumeration in
Comments
The order in which attributes are defined +is not important. Comments may be included by starting a line with +//. A comment must be on its own line and cannot be appended +to an attribute specification.
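Entries in the values file might then look like this, showing the three styles of initial value described above (the pairings of attribute and value here are illustrative assumptions):

```
// Decimal integer constant.
EMemoryRAM = 8388608

// Hexadecimal integer constant.
EMachineHardwareRev = 0x100

// Enumerated value, for attributes with a corresponding enumeration.
EManufacturer = EManufacturer_Ericsson
```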
A line similar to the following
+in the
A line similar to the following in the
The following line in the
The following lines in the
This is read-only information, so the entries do not contain the "set" keyword. The values are derived via the API function
A ROFS image is used to store code that is not part of the Core OS section. The File Server uses ROFS (the Read-Only File System) to access this, and ROFS uses a media driver to read the data from the NAND Flash device and to load it into RAM for access or execution. The Composite File System combines this media area with the Core OS media area and presents the two as a single read-only drive.
A ROFS image contains the following main sections:
+a
an
The structure is defined as:
+See
See
See
A
A
A
A
A
An
On most hardware platforms, the variant is split into a common, or ASSP, layer and a device-specific, or variant, layer; for example, the Symbian port for the template board is split into a core layer for the ASSP (in
On some hardware configurations, the USB platform-specific layer +functionality is not only dependent on the USB device controller (in +the ASSP) but also on the variant. As an example, the USB cable connection/disconnection +notification mechanism is not defined in the USB specification. This +means that some USB device controllers have a mechanism for detecting +the status of a cable connection that is internal to the ASSP, while +for other controllers the mechanism is variant specific, and requires +software support in the variant layer.
+In the template example port, the ASSP layer is implemented by
+the
In the template example port, the variant layer is implemented
+by the
The implementation for these functions can be found in
The USB port also needs interrupt handling code to deal with USB
+external interrupts. See the
The
The
+function declaration for the
Description
Binds the
Parameters
Return value
The
+function declaration for the
Description
Unbinds +the callback from the interrupt controller, replacing the callback with a +dummy handler.
Parameters
None.
Return value
The
+function declaration for the
Description
Enables +the function's interrupt by setting the appropriate IEN bit in the CCCR. Note +that this only unmasks the global function interrupt. It is the responsibility + of the client to perform any function-specific interrupt enabling that may +be required.
Parameters
None.
Return value
The
+function declaration for the
Description
Disables the function's interrupt by clearing the appropriate IEN bit in +the CCCR. Note that this only masks the global function interrupt. It is +the responsibility of the client to perform any function-specific interrupt +disabling that may be required.
Parameters
None.
Return value
The
The
+function declaration for the
Description
Binds the
Parameters
Return value
The
+function declaration for the
Description
Unbinds +the callback from the interrupt controller, replacing the callback with a +dummy handler.
Parameters
None.
Return value
The
+function declaration for the
Description
Enables +the function's interrupt by setting the appropriate IEN bit in the CCCR. Note +that this only unmasks the global function interrupt. It is the responsibility + of the client to perform any function-specific interrupt enabling that may +be required.
Parameters
None.
Return value
The
+function declaration for the
Description
Disables the function's interrupt by clearing the appropriate IEN bit in +the CCCR. Note that this only masks the global function interrupt. It is +the responsibility of the client to perform any function-specific interrupt +disabling that may be required.
Parameters
None.
Return value
The
You need to derive
The
These functions start and stop transfers on a DMA channel +and are the main interface between the PIL and the PSL. The implementation +of these functions depends on the hardware available for performing +DMA, and on the characteristics used to specify a DMA transfer:
The DMA Framework manages the transfer descriptors according
+to the descriptor parameter passed into the
This function initiates a +previously constructed request on a specific channel. This is the +template implementation:
This is the template +implementation:
The following
+auxiliary functions are used to implement the scatter-gather transfer
+mode behavior by creating and manipulating the linked list of transfer
+fragment headers that describe a given scatter-gather transaction.
+They are called by the
This is a +function for creating a scatter-gather list. From the information +in the passed-in request, the function sets up the descriptor with +that fragment's:
This is the template implementation:
If the framework +needs to fragment the client’s request, either because of the transfer +size or because the memory is not a single contiguous block, then +the framework calls this function. It chains hardware descriptors +together by setting the next pointer of the original descriptor to +the physical address of the descriptor to be chained. It assumes that +the DMAC channel is quiescent when called.
This is the template +implementation:
This function
+is called by the
This is the template implementation:
When a driver reads or writes data from or to a user-side program, the +data must be copied between user address space and Kernel address space. The +Kernel provides a number of APIs that allow this to be done safely. The API +to use depends on whether the request is serviced in a user thread context, +or in a Kernel thread context.
+To read and write data to the user space from
+a kernel thread context, use Kernel APIs such as
The Kernel also provides other APIs
+to safely read the information about a descriptor in another thread's address
+space.
From user space
When +executing a read or write operation in the context of a user thread, use the +following APIs:
Kernel APIs are also available to do copy operations using descriptors:
The bootstrap is a program that the phone runs after a hardware reset. +The task of the bootstrap is to prepare a basic environment for the kernel +to run. If the phone uses NAND Flash, then the NAND Flash Core Loader loads +and starts the bootstrap.
+Symbian platform divides the bootstrap into a platform-independent layer +that can be used with any ARM device, and a platform-specific layer that you +must implement when you create a new phone.
+The base port must provide and register a HAL handler to handle these attributes.
+The power attributes are in the
A typical application will have a user side monitor that periodically calls
Typically, the handler is implemented as part of a
The
If there is some power-related
+data that can be accessed via the HAL component and that persists through
+power cycles (On/Off), it should be initialised here. Typically the
The handler itself is
+the function
The kernel hooks up +the handler to service the HAL functions at boot-time.
The power HAL functions
+that the handler can handle are enumerated by
The architecture has two layers:
+The template port provides a framework for implementing the platform specific +part of the digitiser. The diagram below shows the overall relationship:
+The standard Symbian platform ports all follow the same general pattern,
+including the H2. However, the H2 board implementation has two levels in its
+platform specific layer (an ASSP and a variant layer) and uses different source
+file names (e.g.
The Baseport Template is a minimal
+baseport for the Symbian platform that implements the basic functionality
+of a baseport without supplying any hardware specific code. It provides
+a skeleton of a working baseport, which base porters can modify in
+accordance with the instructions in the comments. Porting involves
+abstraction from hardware and must be performed in a set sequence
+(for example, the abstraction in the drivers depends on previous abstraction
+in HAL which depends on previous abstraction in ASIC and so on). Many
+of the header files are not in the Baseport Template files but are
+held elsewhere in the code base, for example,
The files which make up the Baseport Template are divided between two directories, ASSP and Variant, containing two classes of the same names. This division reflects a fundamental feature of the kernel architecture, which we recommend you retain in your implementation of the port. The baseport involves implementing functions to run on the target hardware, but it is useful to distinguish between functionality specific to the CPU (ARM and so on) and functionality specific to peripherals with their own processors. ASSP functions must be implemented in accordance with the device hardware, while Variant functions correspond to off-chip peripherals. The distinction is not absolutely mandatory (it is possible to provide just a Variant layer) but is strongly recommended and assumed in many other areas of the porting process.
A port with an ASSP/Variant architecture
+is implemented with two C++ classes and two
For more information see the Base Porting Guide.
+The bootstrap is the code that runs after a hardware
+reset, to initialize the basic system services that enable the kernel
+to run. Parts of it must be written in assembler (either GNU or ARM)
+and are specific to the device hardware, while others are written
+in C++ and are automatically translated into assembler. Bootstrap
+is implemented as the
This is supplied as an example only: implementation is entirely +dependent on the target hardware.
For more information see the
The Symbian platform is a real-time,
+interrupt-driven operating system. To port it you need to determine
+the number and function of the interrupts in your port and implement
+the interrupt dispatcher to handle them. Interrupt service routines
+(ISRs) are held in an array of class
For more information see the
The
For more information see the
The
The attributes are modified by get and set functions, called HAL handlers, defined in the relevant hardware drivers.
For more
+information see the
The rest of the work involves +porting drivers. A typical implementation would include these drivers:
DMA Framework
An
The implementation of these functions is completely dependent on the interrupt
+hardware. In the template port,
Interrupts that 'belong' to the Variant Layer are passed to that layer +through the call to:
+
The Variant layer equivalent of this function,
To use a DMA channel, it must first be opened.
A DMA channel has to be closed after completing all the DMA operations.
+This releases the DMA channel for other resources and peripherals.
+To close a previously opened channel, the channel should be idle,
+with no pending requests. So, before closing, call
To stop DMA without closing the channel, simply +cancel the request.
Writable +data paging allows the demand paging of any user-side data (for example thread +stacks and heaps). Writable data paging makes use of a fixed size backing +store to page out data. The backing store may use a NAND flash partition or +an embedded MMC (eMMC).
The aim of data paging is to enable more memory-hungry +use cases without increasing the amount of physical RAM present in the device.
For +example, data paging enables:
running applications +that are not designed with the memory constraints of embedded devices in mind, +for example applications ported from other environments with writable data +paging (such as a PC).
running multiple applications +at the same time where previously there would not have been enough memory.
However, writable data paging does have +the following limitations:
It is only possible +to page the following types of memory:
user heaps,
user thread stacks,
private chunks,
global chunks,
static data.
A single memory object (e.g. a heap, stack, DLL or chunk) is the smallest granularity at which paging can be configured.
No attempt will be made to page kernel-side data.
No attempt will be made to implement memory mapped files (e.g. POSIX's mmap).
No attempt will be made +to implement any kind of paging on the emulator.
This
+document assumes that the reader is familiar with the concept of
Memory is managed in fixed size units called pages. The size of a page +is usually determined by the hardware, and for ARM architectures this is 4K.
If a given page of memory is present in RAM it is said to be paged +in (or just 'present'), as opposed to paged out.
The process of moving a page of memory from being paged out to being +paged in.
The process of moving a page of memory from being paged in to being +paged out.
The external media used to hold paged out pages is referred +to as the swap area. This area may be much larger than physical RAM, +although a factor of around twice as large is considered normal in existing +systems.
The term working set is defined as the memory accessed during +a given time period.
Pinning pages refers to paging in demand-paged memory and forcing +it to remain in RAM until it is unpinned at a later time.
The +following classes are affected by the use of writable data demand paging:
The +following classes are involved in the use of threads and processes:
These classes are used in handling memory:
The following classes are used in client/server communication:
+The current design of the file server supports +the processing of client requests concurrently, as long as those requests +are made to different drives in the system. For example, a read operation +may take place on the NAND user area partition while a write operation +to the MMC card takes place concurrently.
However, requests to the same drive are serialized on a first-come, first-served basis, which under some circumstances leads to a poor user experience. For example:
An incoming +call arrives while a large video file is being written to the NAND +user area by an application that writes the file in very large chunks.
In order to +display the caller’s details, the phone needs to read from the contacts +database which is also stored on the NAND user area.
The write operation +takes a very long time to complete, so the call is lost.
This is one of many scenarios where the single threaded nature +of the file server may lead to unresponsive behavior. In order to +improve the responsiveness of the system, the Symbian platform implements +a fair scheduling policy that splits up large requests into more manageable +chunks, thus providing clients of the file server with a more responsive +system when the file server is under heavy load.
See
Read caching aims to improve file server performance +by addressing the following use case:
A client (or multiple clients) issues repeated requests to +read data from the same locality within a file. Data that was previously +read (and is still in the cache) can be returned to the client without +continuously re-reading the data from the media.
There may be a small degradation in performance on some media due to the overhead of copying the data from the media into the file cache. To some extent this may be mitigated by the effects of read-ahead, but this clearly does not affect large (>= 4K) reads and/or non-sequential reads. It should also be noted that any degradation may be more significant for media where the read is entirely synchronous, because there is no scope for a read-ahead to be running in the file server drive thread at the same time as reads are being satisfied in the context of the file server’s main thread.
When ROM paging is enabled, the kernel maintains +a live list of pages that are currently being used to store +demand paged content. It is important to realize that this list also +contains non-dirty pages belonging to the file cache. The implication +of this is that reading some data into the file cache, or reading +data already stored in the file cache, may result in code pages being +evicted from the live list.
Having a large number of clients
+reading through or from the file cache can have an adverse effect
+on performance. For this reason it is probably not a good idea to
+set the
Clients that read data sequentially (particularly +using small block lengths) impact system performance due to the overhead +in requesting data from the media. Read-ahead caching addresses this +issue by ensuring that subsequent small read operations may be satisfied +from the cache after issuing a large request to read ahead data from +the media.
Read-ahead caching builds on read caching by detecting +clients that are performing streaming operations and speculatively +reading ahead on the assumption that once the data is in the cache +it is likely to be accessed in the near future, thus improving performance.
The number of bytes requested by the read-ahead mechanism is initially +equal to double the client’s last read length or a page, for example, +4K (whichever is greater) and doubles each time the file server detects +that the client is due to read outside of the extents of the read-ahead +cache, up to a pre-defined maximum (128K).
Write caching is implemented to perform a small +level of write-back caching. This overcomes inefficiencies of clients +that perform small write operations, thus taking advantage of media +that is written on a block basis by consolidating multiple file updates +into a single larger write as well as minimizing the overhead of metadata +updates that the file system performs.
By implementing write +back at the file level, rather than at the level of the block device, +the possibility of file system corruption is removed as the robustness +features provided by rugged FAT and LFFS are still applicable.
Furthermore, by disabling write back by default, allowing the +licensee to specify the policy on a per drive basis, providing APIs +on a per session basis and respecting Flush and Close operations, +the risk of data corruption is minimized.
Database access needs special consideration, as corruption may occur if the database engine expects write operations to be committed to disk immediately or in a certain order (write caching may re-order write requests).
For these reasons, it is probably safer to
+leave write caching off by default and to consider enabling it on
+a per-application basis. See
This topic describes the source code for interrupt driven keyboard +drivers and related libraries that Symbian platform provides.
+In a reference board port, the
The source for the driver is contained entirely
+within
This DLL is part of Symbian +platform generic code and is built as part of the Text Window Server +component.
The mmp file is located in Symbian platform generic
+code in
The DLL is platform specific. It +is built as part of the Variant.
The mmp file has a name with
+format
The source code for the tables is located in
Local Media Subsystem uses +physical device drivers, called media drivers, that manage the storage media +hardware. A base port can implement new media drivers and implement ports +of the media drivers that are supplied in Symbian platform.
+The easiest way to create a base port is to take the supplied template port and expand it to suit your own hardware configuration(s). The template port is an outline, but working, framework that you can modify to suit your own hardware.
+The template port can be found under the
The first thing to do is to set up your +environment for building, downloading onto your hardware, and testing +that the port works.
As supplied, +the template port is designed to boot on any hardware. It should boot +successfully, but clearly, the system can do nothing more at this +time. A successful boot shows that your build environment has been +set up correctly.
When porting the base to a new platform, you will need to code and build the Variant. This provides the hardware-dependent services required by the kernel. In nearly all ports, this is split into an ASSP DLL and a Variant DLL. We usually refer to the ASSP layer and the Variant layer.
It is desirable +that code that depends only on the properties of the ASSP be segregated +from code that depends on details of the system outside the ASSP, +so that multiple systems that use the same ASSP can share common code.
For example, in the template reference port, the
The bootstrap consists of several generic +source and header files supplied as part of Symbian platform, and +a set of platform specific files. You need to create these platform +specific files as part of a base port.
For details, see
The updated port can then be built, downloaded and tested.
An interrupt is a condition that +causes the CPU to suspend normal execution, enter interrupt handling +state and jump to a section of code called an interrupt handler. The +ASSP/Variant part of the base port must implement an interrupt dispatcher +class to manage interrupts.
For details, see the
The updated port can then +be built, downloaded and tested.
The Kernel requires that the ASSP/Variant
+part of the base port provides an implementation of the
For details, see the
The updated port can then be +built, downloaded and tested.
The User-Side +Hardware Abstraction (HAL) component provides a simple interface for +programs to read and set hardware-specific settings, for example, +the display contrast. A base port must define the attributes that +clients can use on a phone, and implement any functions that are required +to get and set the attributes.
For details, see
The remaining drivers can now be ported. +Go and see:
ARM provides a hardware floating point coprocessor that performs floating point computation fully compliant with IEEE Std 754-1985. We refer to the coprocessor as the VFP unit.
+Symbian platform supports the use of VFPv2, on platforms where the required hardware is present, in both RunFast mode and IEEE-without-exceptions mode. See ARM's Vector Floating-point Coprocessor Technical Reference Manual for more details on the coprocessor, its architecture, and its execution modes.
+You should read Floating point support in Symbian^3 Tools Guide > Building. The guide contains information about applications and user-side code, which is also applicable to code running on the kernel side. However, there are a number of restrictions that must be observed:
+You cannot use VFP instructions in any interrupt service routine.
You cannot use VFP instructions when the kernel is locked, for example, in
+an IDFC or after calling
You cannot use VFP instructions in any section of code which runs with a fast +mutex held.
Using VFP instructions in these situations can lead to data being +corrupted, or the kernel panicking. If you rely on the compiler to +generate VFP instructions, rather than using inline assembler, it +is extremely important that you do not use any floating point values +in these situations. The compiler may generate VFP instructions for +the most trivial floating point operations and even for simple assignments.
+There is no conformance testing suite for the Time platform service, so the following tests must be conducted to check that the Time platform service is working correctly:
+The following table is a list of the tests that need +to be carried out in order to test the Time SHAI implementation.
Before setting the interface descriptors, get the device and endpoint capabilities of the USB Device Controller (UDC) hardware.
Use
Note: Endpoint zero is only supported as a control endpoint. The capabilities of endpoint zero are not included in the list of endpoint capabilities returned.
Before calling
In the example below we set the interface data in the
For each endpoint the
The endpoint type (
The members
A description string for the interface.
This number is the total number of endpoints you wish to use, excluding endpoint zero. For example, if you wish to use four endpoints plus the control endpoint, this value should be four.
Note: If you wish to implement a control-only interface then this value should be zero.
Sets the class type for this interface.
See
To set the interface pass the initialised
The interface number passed to
Note: The whole
On success, a chunk handle is created and passed back through
If you are using the
Note: Calling
After you have set the interface descriptors you could optionally
The Baseport Template is a reference +board template, which provides a basic framework consisting of base +parts of the Symbian platform. This framework can be modified to suit +your hardware configurations.
The Baseport Template code is
+categorized into Application Specific Standard Product (ASSP) layer
+and variant layer. The ASSP layer provides the source code for hardware-specific
+functions (for example, GPIO, DMA, IIC, USB Client Controller) that
+are used by the kernel. This layer is implemented by the
ASSP and variant layers
+provide control functions for hardware which is used by multiple devices.
+This is done by placing Baseport Template code in a single DLL, called
+the Variant DLL (
The Baseport Template is mostly +useful in creating a baseport and testing the software. The following +is a list of some of the key uses of the Baseport Template.
The Baseport Template is useful in handling:
data exchange between devices connected to the bus
data copy between memory locations or between memory location +and peripheral
communication interface between the microprocessor components
hardware events and registers
USB functions of the device
power resources
phone restart after a hardware reset
shared chunks of camera and sound physical device driver
CPU idle operations
HAL, interrupt and keyboard functions
data storage based on NOR type flash memory
variant specific initialization (not the same as ASSP specific +initialization).
manufacture devices and want to develop software and get an +early debug output.
develop baseport. See
The following table lists the Baseport +Template details:
Clients that need to use the DMA framework
+functionality use the
The
The
This page lists the keywords starting from G to O.
+rombuild only
Overrides the default maximum +heap size for the executable.
rombuild only
Overrides the default minimum +heap size for the executable.
rombuild and rofsbuild
Marks the existing file +as hidden. The file is still available to resolve DLL static links.
BUILDROM only
Specifies an ECom plug-in, consisting +of an implementation DLL and an ECom registration resource file, to +be hidden in the ROM and in the SPI file.
Symbian platform
+code does not use this keyword directly. Instead, it uses the macro
For example:
This feature allows
As part of the ROM building process,
+it is possible to add support for multiple languages. This affects
+how ECom resource files are specified by
If an SPI file
+is being created, the
During the
If an SPI file
+is not being created, the
During the
Specifies +the location of an HCR data file which is included in +the SMR partition image.
Creates +an SMR partition image.
rombuild only
Kernel data/bss chunk base virtual +address.
The recommended value is 0x64000000.
rombuild only
This keyword sets the specified bit
+of the kernel configuration flag to
rombuild only
Maximum size of the Kernel's heap.
The Kernel heap grows in virtual address space until this value is reached; it can then grow no further. The value is rounded up to a multiple of the virtual address space mapped by a single page directory entry - on ARM CPUs this is 1MB. Usually 1 page directory entry's worth of RAM is specified. The default value is 0x100000; this is used if none is supplied.
rombuild only
Minimum size of the Kernel heap, +allocated immediately when the Kernel boots. Specify either a minimal +value or a working set size based on envisaged use.
The recommended +value is 0x10000, and is the default value if none is explicitly supplied.
rombuild only
ROM image on which an extension ROM is based. This keyword is only valid for extension ROMs.
rombuild only
Initial value for the kernel trace
+mask. You can supply up to eight separate 32 bit masks in order to
+enable higher trace flags. The bit values are defined in
To define kernel trace in an
BUILDROM only
Localisation support. Specifies +the Symbian 2-digit codes for languages to be supported in this ROM. +The keyword may be used as many times as necessary.
rombuild only
Sets the maximum value for the
+size of uncompressed unpaged data for which you can flash the image.
+If the size of the unpaged data is greater than
rombuild only
Specifies the memory model to +be used at runtime together with the chunk size and the page size.
rombuild only
Specifies that the ROM image has +multiple kernel executables. These kernels are specified with multiple +primary keywords in the files-section.
Note that this keyword
+is mutually exclusive with
BUILDROM only
Localisation support. During the +localisation pass, the MULTI_LINGUIFY lines are expanded into one +line per language code.
For example, the line:
is expanded to the lines:
The first line is +for the default language code; the remaining lines are for all other +language codes.
This provides support for the
rombuild only
Specifies that no ROM wrapper +is required.
There is no conformance testing suite available for Baseport Template. +However, Kernel testing (E32) and File system testing (F32) tests +should be run with 100% pass rate for any successful baseport. The +E32 and F32 tests are written to test the generic base code using +very minimal hardware services. As a result, if the baseport cannot +pass the minimal requirements of the base code, it is unlikely that +it will be suitable for running a production device.
+The Baseport Template is designed to boot +on any hardware. It should boot successfully, but clearly, the system +can do nothing more at this time. A successful boot shows that your +build environment has been set up correctly.
You can conduct +a series of tests to check if the template works correctly. To test +different functionality of the template, see:
A pre-condition is +something that must be true before using the function.
A post-condition is +something that is true on return from the function.
Such conditions are expressed using a standard phrase, wherever possible, +and this section explains what those conditions mean.
+Where more than one pre-condition is stated for a given function, then
+you can assume that all pre-conditions apply. In this sense, there is an implied
+AND relation between conditions. For example, in the description of the function
Both conditions must be true before calling the function.
+There are exceptions to this rule, where a precondition applies only if
+some other factor is true. For example, in the description of the function
Clearly, only one pre-condition will apply, depending on the supplied value
+of the
A few conditions are self-explanatory, and these are not included in these +lists.
+Note that some familiarity with kernel-side programming is assumed.
+
This +describes the meaning of each precondition, and tries to provide insight as +to why it needs to be satisfied, and explains what you need to do to ensure +that the preconditions are met.
Calling thread must be in +a critical section
Critical sections are sections of code that
+leave the kernel in a compromised state if they cannot complete. A thread
+enters a critical section by calling
While
+in a critical section, the thread cannot be suspended or killed. These actions
+are deferred until the end of the critical section. If a thread takes an exception
+while inside a critical section, this is treated as fatal to the system, and
+causes a
The described function must +be in a critical section when called.
Calling thread must not +be in a critical section
Critical sections are sections of code
+that leave the kernel in a compromised state if they cannot complete. A thread
+enters a critical section by calling
While
+in a critical section, the thread cannot be suspended or killed. These actions
+are deferred until the end of the critical section. If a thread takes an exception
+while inside a critical section, this is treated as fatal to the system, and
+causes a
There are some functions within the system that must NOT be in a critical section when called. This applies to the described functions.
Calling thread can either +be in a critical section or not
Critical sections are sections
+of code that leave the kernel in a compromised state if they cannot complete.
+A thread enters a critical section by calling
While
+in a critical section, the thread cannot be suspended or killed. These actions
+are deferred until the end of the critical section. If a thread takes an exception
+while inside a critical section, this is treated as fatal to the system, and
+causes a
When this pre-condition applies +to the described function, it means that it does not matter whether the code +is in a critical section or not.
No fast mutex can be held
A +thread can hold no more than one fast mutex at any given time. The described +function uses a fast mutex, which means that on entry to the function, the +calling thread must not hold another one.
Kernel must be locked
Many kernel side functions run with interrupts enabled, including code that manipulates global structures, such as the thread ready list. To prevent such code from being reentered and potentially corrupting the structure concerned, a lock known as the kernel lock (sometimes referred to as the preemption lock) is used.
Sections of code that need to be protected against rescheduling call
The pre-condition means that the kernel lock must be set before calling the described function.
Kernel must be unlocked
Many kernel side functions run with interrupts enabled, including code that manipulates global structures, such as the thread ready list. To prevent such code from being reentered and potentially corrupting the structure concerned, a lock known as the kernel lock (sometimes referred to as the preemption lock) is used.
Sections of code that need to be protected against rescheduling call
The pre-condition means that the kernel lock must NOT be set when the described function is called.
Kernel can be locked or +unlocked
Many kernel side functions run with interrupts enabled, including code that manipulates global structures, such as the thread ready list. To prevent such code from being reentered and potentially corrupting the structure concerned, a lock known as the kernel lock (sometimes referred to as the preemption lock) is used.
Sections of code that need to be protected against rescheduling call
The pre-condition means that it does not matter whether the kernel lock is set or unset when the described function is called.
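The kernel lock behaves as a counting preemption lock. The following standard-C++ sketch, by loose analogy with the lock/unlock pattern described above, is illustrative only (the names and the boolean reschedule flag are invented): rescheduling requested while the lock count is non-zero is deferred until the count returns to zero.

```cpp
#include <cassert>

// Sketch of a counting preemption ("kernel") lock. Illustrative only;
// not the real Symbian implementation.
class KernelLock {
    int iCount = 0;                  // nesting depth; 0 = preemption enabled
    bool iRescheduleNeeded = false;  // reschedule requested while locked
public:
    void Lock() { ++iCount; }        // disable rescheduling (may nest)

    // Returns true when the unlock re-enabled preemption and a deferred
    // reschedule should now happen.
    bool Unlock() {
        assert(iCount > 0 && "unbalanced Unlock()");
        if (--iCount == 0 && iRescheduleNeeded) {
            iRescheduleNeeded = false;
            return true;
        }
        return false;
    }

    void RequestReschedule() { iRescheduleNeeded = true; }
    int Count() const { return iCount; }
};
```

A precondition of "kernel must be locked, with lock count 1" corresponds, in this model, to `Count() == 1` on entry, so that a single `Unlock()` inside the function re-enables preemption.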
Kernel must be locked, with +lock count 1
Many kernel side functions run with interrupts enabled, including code that manipulates global structures, such as the thread ready list. To prevent such code from being reentered and potentially corrupting the structure concerned, a lock known as the kernel lock (sometimes referred to as the preemption lock) is used.
Sections of code that need to be protected against rescheduling call
In addition, calls to
The pre-condition means that there must be exactly one call to
See also
Interrupts must be enabled
This pre-condition states that interrupts must be enabled before calling the described function.
Possible reasons why interrupts may need to be enabled include:
the function needs interrupts to occur; for example, it may wait for timer ticks.
the function may take a long or potentially unbounded time to run, so interrupts need to be enabled to guarantee bounded interrupt latency.
See also the function
Interrupts must be disabled
This +pre-condition states that interrupts must be disabled before calling the described +function.
See also the function
Interrupts can either be +enabled or disabled
This pre-condition states that it does not +matter whether interrupts are enabled or disabled before calling the described +function.
System lock must be held
The system lock is a specific fast mutex that only provides exclusion against other threads acquiring the same fast mutex. Acquiring the system lock means that a thread enters an implied critical section.
The major items protected by the system lock are:
the consistency of the memory map; on the kernel side, the state of user side memory or the mapping of a process is not guaranteed unless (a) you are a thread belonging to the process that owns the memory, or (b) you hold the system lock.
the lifetime of
Note that the system lock is different from the kernel lock; the +kernel lock protects against any rescheduling. When the system lock is set, +the calling thread can still be pre-empted, even in the locked section.
The
+system lock is set by a call to
The +pre-condition means that the system lock must be set before calling the described +function.
System lock must not be +held
See the pre-condition
The
+system lock is unset by a call to
The +pre-condition means that the system lock must not be set before calling the +described function.
Call in a thread context
This pre-condition means that the described function must be called directly, or indirectly, from a DFC or a thread. The thread can be a user thread or a kernel thread.
Call in an IDFC context
This pre-condition means that the described function must be called directly, or indirectly, from an IDFC.
Note that when calling a function from an +IDFC:
the kernel is locked, +so pre-emption is disabled
user memory cannot be +accessed
the function cannot +block or wait.
Call either in a thread +or an IDFC context
This pre-condition means that the described +function can be called directly, or indirectly, from an IDFC, a DFC or a thread. +The thread can be a user thread or a kernel thread.
Note that when +calling a function from an IDFC:
the kernel is locked, +so pre-emption is disabled
user memory cannot be +accessed
the function cannot +block or wait.
Call in any context
This +pre-condition means that the described function can be called directly, or +indirectly, from an IDFC, a DFC or a thread, or it can be called from an Interrupt +Service Routine (ISR).
A thread can be a user thread or a kernel thread.
Note +that when calling a function from an IDFC:
the kernel is locked, +so pre-emption is disabled
user memory cannot be +accessed
the function cannot +block or wait.
Do not call from an ISR
This +pre-condition means that the described function must not be called from an +Interrupt Service Routine (ISR).
Note that ISRs have the following +characteristics:
they have an unknown +context
they must not allocate +or free memory
they cannot access user +memory
they must not call functions +that interfere with critical sections of code.
The calling thread must +own the semaphore
A semaphore can be waited on only by the thread +that owns it. This precondition is needed when the described function calls +a semaphore wait function.
The calling thread must +not be explicitly suspended
This refers to nanokernel threads, +not Symbian platform threads. The described function must not be used if the +thread is in the suspended state. One of the possible reasons for this is +that the described function does not check the thread's suspend count.
A
+thread may be created suspended, or the thread may be put into a suspended
+state using
See
+also
Note +that these functions are for use only in an RTOS personality layer.
The calling thread must +hold the mutex
The calling thread has waited for a mutex and acquired it. This precondition is needed when the thread is about to release the mutex, i.e. call one of the
Call only from ISR, IDFC +or thread with the kernel locked
See the pre-condition
Call only from IDFC or thread +with the kernel locked
See the pre-condition
Do not call from thread +with the kernel unlocked
See the pre-condition
Do not call from ISR or +thread with the kernel unlocked
See the pre-condition
Thread must not already +be exiting
The pre-condition means that the described function should not be called after the thread has been killed.
In EKA2, threads do not die immediately; they are placed on a queue to be deleted later.
Functions with this pre-condition are unlikely to be used in a device driver.
Property has not been opened
A
+property is a single value used by “Publish and Subscribe”. Each property
+must be opened before it can be used. To open a property, use either
The pre-condition +means that the property must NOT already be open when the described +function is called.
Property has been opened
A
+property is a single value used by “Publish and Subscribe”. Each property
+must be opened before it can be used. To open a property, use either
The pre-condition +means that the property must already be open before calling the described +function.
Must be called under an +XTRAP harness or calling thread must not be in a critical section
Each Symbian platform thread can be associated with a kernel-side exception handler, set using XTRAP(); for example, to detect bad memory accesses.
The described function can legitimately cause an exception, and the pre-condition means that
either:
the described function should be called inside an XTRAP() harness to catch the exception
or
the thread must not be in a critical section, because exceptions are not allowed inside them. If a thread takes an exception while inside a critical section, this is treated as fatal to the system, and causes a
DObject::Lock fast mutex +held
The described function accesses an object whose internal +data needs to be protected by the specified fast mutex.
The operations +of:
obtaining an object’s +name
setting an object's +name
setting an object's +owner
are all protected by the global fast mutex,
Setting +the owner is protected as the owner's name is part of the object's full name.
DCodeSeg::CodeSegLock mutex +held
The
Any kind of lock can be +held
The described function can be called with any kind of lock.
Call only from Init1() in +base port
The pre-condition means that the described function can only be called during the first phase of kernel initialisation, i.e. during execution of the base port implementation of
This condition may apply because the described function:
must be called before any context switch
must be called before the MMU is turned on.
The various parameters must +be valid. The PIL or PSL will fault the kernel if not
This pre-condition refers to a DMA request.
The parameters used when calling the described function must be as specified. If they are not, the platform independent layer or the platform specific layer cannot recover and will cause the kernel to fault, i.e. the device will reset.
The request is not being +transferred or pending
This pre-condition refers to a DMA request.
The described function must not be called if a DMA request has already been set up, or is in progress. A possible reason for this is that the described function resets parameters that have already been set up.
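The precondition amounts to a small state machine on the request. The sketch below is a hypothetical standard-C++ model, not the Symbian DMA framework API: the state names and the `Fragment` method are invented to show why reconfiguring a request that is queued or transferring must be rejected.

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical sketch of the precondition: a DMA request may only be
// (re)configured while idle, never while queued or transferring.
enum class DmaState { EIdle, EQueued, ETransferring };

class DmaRequest {
    DmaState iState = DmaState::EIdle;
public:
    void Fragment(int /*aSrc*/, int /*aDest*/, int /*aCount*/) {
        // Precondition: the request is not being transferred or pending,
        // because fragmenting resets parameters already set up.
        if (iState != DmaState::EIdle)
            throw std::logic_error("DMA request busy");
        iState = DmaState::EQueued;
    }
    void Start() {
        assert(iState == DmaState::EQueued);
        iState = DmaState::ETransferring;
    }
    void Complete() {
        assert(iState == DmaState::ETransferring);
        iState = DmaState::EIdle;   // idle again: reconfiguration allowed
    }
    DmaState State() const { return iState; }
};
```

In the real framework, violating the precondition faults the kernel rather than throwing; the exception here simply makes the rejected transition observable.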
Wait on TimerMutex before +calling this
The
Message must be in ACCEPTED +state
This pre-condition indicates that the message has been read +by the receiving thread. It is not attached to a message queue but is currently +in use by the receiving thread.
Queue must not be in asynchronous +receive mode
This pre-condition refers to kernel side message +queues. A kernel thread can receive messages:
asynchronously by calling
by polling, by calling
A possible reason for this precondition is that the queue is about +to be polled.
The queue may be polled either:
before the first
while processing a message
+but before
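The two receive styles can be modelled with a minimal standard-C++ sketch. This is illustrative only, not the kernel message queue API: the method names are invented, and the point shown is simply that polling is invalid while an asynchronous receive is outstanding.

```cpp
#include <cassert>
#include <deque>

// Illustrative sketch (invented names): a queue may be received from
// asynchronously or by polling, but not both at once.
class MsgQueue {
    std::deque<int> iMessages;
    bool iAsyncReceivePending = false;
public:
    void Send(int aMsg) { iMessages.push_back(aMsg); }

    void ReceiveAsync() { iAsyncReceivePending = true; }   // async mode
    void CancelReceive() { iAsyncReceivePending = false; }

    // Returns true and fills aMsg if a message was available.
    bool Poll(int& aMsg) {
        // Precondition: queue must not be in asynchronous receive mode.
        assert(!iAsyncReceivePending && "poll while async receive pending");
        if (iMessages.empty()) return false;
        aMsg = iMessages.front();
        iMessages.pop_front();
        return true;
    }
};
```

A caller intending to poll would first cancel any outstanding asynchronous receive, matching the precondition above.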
Container mutex must be held / Thread container mutex must be held / Process container mutex must be held
Each of the containers is protected by a mutex.
The pre-condition means that the calling thread must acquire the relevant mutex before calling the described function. Object containers are
A post-condition describes what is true on return from a kernel API function.
Calling thread is in a critical +section
The code is in a critical section on return from the function.
See
+also the pre-condition:
Calling thread is not in +a critical section
The code is NOT in a critical section +on return from the function.
See also the pre-condition:
No fast mutex is held
A +thread can hold no more than one fast mutex at any given time. A fast mutex +is NOT held on exit from the function.
Kernel is locked
The post-condition means that, on exit from the described function, the kernel lock is on. The described function might have explicitly set the kernel lock or, more commonly, the lock was set before entry to the described function and has not been unset by that function.
See also the pre-condition
Kernel is unlocked
The +kernel is NOT locked on exit from the described function. The described +function might have explicitly unset the kernel lock or, more commonly, the +lock was not set before entry to the described function, and has not been +set by that function.
See also the pre-condition
Kernel is locked, with lock +count 1
See the pre-condition
Interrupts are enabled
This +post-condition states that interrupts are enabled on return from the described +function.
See the pre-condition
Interrupts are disabled
This +post-condition states that interrupts are disabled on return from the described +function.
See the pre-condition
System lock is held
This +post-condition states that the system lock is held on return from the described +function.
The system lock is a specific fast mutex that only provides exclusion against other threads acquiring the same fast mutex. Acquiring the system lock means that a thread enters an implied critical section.
The +major items protected by the system lock are:
the consistency of the +memory map; on the kernel side, the state of user side memory or the mapping +of a process is not guaranteed unless (a) you are a thread belonging to the +process that owns the memory or (b) you hold the system lock.
the lifetime of
The
+system lock is set by a call to
The calling thread holds +the mutex
The calling thread has waited for a mutex and acquired +it. On return from the described function, the thread still holds the mutex.
See
+also the pre-condition
Container mutex is held +/ Thread container mutex is held / Process container mutex is held
Each +of the containers is protected by a mutex.
The post-condition means +that the calling thread has the relevant mutex on return from the described +function. This is most likely because the mutex was acquired before entering +the described function.
Object containers are
DCodeSeg::CodeSegLock mutex +held
The
This post-condition means that the mutex is held on exit. The most common situation is that the mutex was acquired before entry to the described function. Relinquish the mutex by calling
The Baseport Template is the most important part of the system, +since it consists of the kernel and essential peripherals.
+The
The
The
The
The
The simplified architecture of the Baseport Template platform service, and how it fits into the Symbian platform, is shown below:
In the above diagram, the following are not part of the Baseport +Template:
In the above diagram, the following are part of the Baseport +Template:
The Baseport Template is of interest to engineers porting the Symbian platform to a new hardware platform, and to engineers producing drivers that cover the functionality in the above table. This document is of interest to:
Device driver and kernel-side component developers
Hardware implementors and base port developers.
The Interrupt platform service provides an interface for the kernel and device drivers to associate an interrupt with an Interrupt Service Routine (ISR). There are three levels of interrupt management:
CPU level: Control +of interrupts available only to the kernel.
Controller level:
+Control provided by functions of the
Device level: +Control of hardware sources that are managed by a device specific +control scheme.
An interrupt is a hardware or a software event that may need to be serviced. An example of a hardware interrupt is a key press.
The unique ID of an interrupt source. Interrupt IDs are defined by the developers creating the ASSP and variant.
Interrupts not associated with an Interrupt Service Routine are called spurious interrupts. Spurious interrupts are handled by a spurious interrupt handler, which is used only for debugging.
An Interrupt Service Routine (ISR) is the function that handles an interrupt. An ISR is not a class member. An ISR performs the minimum processing, such as storing data that may not be available later, and requests a DFC for further processing.
The Interrupt platform service allows developers to set a priority +for each interrupt source. The meaning of the priority value depends +on the hardware and the baseport implementation.
When an interrupt is received by the system, it calls the associated ISR. This is called interrupt dispatch. The interrupt dispatch is provided by the implementation of the
The table that maps the interrupt ID and the associated ISR. +The ISR table is implemented by the developers creating the baseport +variant.
The process of associating an interrupt ID with an ISR. Unbinding +removes the association.
The output of a low priority interrupt controller is provided +as an input to a higher priority interrupt controller.
Pseudo-interrupts correspond to multiple interrupt sources +sharing a single input to an interrupt controller but requiring separate +ISRs to service them. The ISRs cannot all be bound to the single real +interrupt and are therefore bound to the pseudo-interrupt instead.
An IRQ (interrupt request) is the signal generated by an item +of hardware or software to request an interrupt from the processor. +An FIQ (fast IRQ) is an IRQ with high priority on systems +which support prioritization of requests.
The Interrupt platform service allows the kernel and device drivers to:
associate an ISR with an interrupt ID
enable/disable a specific interrupt
clear pending actions on a specific interrupt
change the priority of a specific interrupt.
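The bind/unbind and dispatch ideas above can be sketched as a simple ISR table in standard C++. This is illustrative only and not the real `Interrupt` class API: the table size, return codes, and the spurious handler are all invented for the example.

```cpp
#include <cassert>

// Sketch of an ISR table, by analogy with the bind/unbind model described
// above. Types and behaviour are illustrative, not the Symbian API.
typedef void (*TIsr)(void* aParam);

const int KNumInterrupts = 8;

struct IsrEntry { TIsr iIsr; void* iParam; };

static int gSpuriousCount = 0;
static void SpuriousHandler(void*) { ++gSpuriousCount; }  // debug aid only

static IsrEntry gIsrTable[KNumInterrupts] = {};

int Bind(int aId, TIsr aIsr, void* aParam) {
    if (aId < 0 || aId >= KNumInterrupts) return -1;  // bad interrupt ID
    if (gIsrTable[aId].iIsr != nullptr) return -2;    // already bound
    gIsrTable[aId] = { aIsr, aParam };
    return 0;
}

int Unbind(int aId) {
    if (aId < 0 || aId >= KNumInterrupts || gIsrTable[aId].iIsr == nullptr)
        return -1;
    gIsrTable[aId] = { nullptr, nullptr };
    return 0;
}

void Dispatch(int aId) {  // called when the hardware raises interrupt aId
    const IsrEntry& e = gIsrTable[aId];
    if (e.iIsr) e.iIsr(e.iParam);
    else SpuriousHandler(nullptr);  // no ISR bound: spurious interrupt
}
```

Note how an interrupt with no bound ISR falls through to the spurious handler, matching the definition of spurious interrupts given earlier.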
The battery monitor code is implemented in the power controller kernel extension.
Note: to implement battery monitor code, the battery monitor hardware must be accessible to the Application CPU of the phone.
Typically, your battery monitor class would derive from the
Symbian platform considers that batteries can be in one of four possible logical states, defined by the
There are two pure virtual functions to be implemented:
We also suggest that the battery monitor component offer a public function prototyped as:
to supply information about the state of the main battery, the backup battery (if there is one), or external power to user side components that may require it (e.g. a user-side power monitor component that tracks the state of the batteries and initiates an orderly shutdown if one reaches a critically low level). It is usually called by the
This should read and return the logical state of the main battery or, if external power is connected, it should return
When is it called?
The function is called by peripheral drivers via a call to
Implementation issues
A suggested implementation would have the function reading the state of the main battery, or whether external power is connected, and mapping it to one of the logical states defined by
This function is now deprecated, and you should just define an empty implementation.
An LDD can support more than one device by +providing a separate channel to each device.
There can be more than +one PDD associated with a given LDD, where each PDD supports a different variation +of a similar device.
Alternatively, a single PDD can be designed to +support multiple instances of an identical device by supporting more than +one channel. For example, a platform that contains two identical UARTS could +support these by providing a PDD that can open a channel on either (or both) +UARTs.
Where a driver supports multiple devices on a platform, it uses a unit number to distinguish between each instance of the device. Clients open a channel to the driver for a particular unit. The following shows an example of this, and the example driver function that creates the channel:
The driver must inform the framework that it supports the use of unit numbers. A driver can use unit numbers to ensure that it only opens on one unit. This is done by setting the
The device driver framework validates whether the driver supports unit numbers. In the following example, the PDD checks if the unit number passed is valid.
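The original example code is not reproduced in this extract, so the following is a hedged standard-C++ sketch of the usual pattern: a units mask with one bit per supported unit, and a validation check before a channel is created on that unit. The mask value and return codes are assumptions for illustration.

```cpp
#include <cassert>

// Hypothetical units mask: bits 0 and 1 set, i.e. two supported units
// (for example, two identical UARTs on the platform).
const unsigned kUnitsMask = 0x3;

// Returns 0 on success (a KErrNone analogue) or -1 if the unit is not
// supported (a KErrNotSupported analogue).
int ValidateUnit(int aUnit) {
    if (aUnit < 0 || aUnit >= 32) return -1;         // out of range
    if ((kUnitsMask & (1u << aUnit)) == 0) return -1; // bit not set in mask
    return 0;
}
```

A PDD's channel-creation function would call a check like this first and refuse to open a channel on an unsupported unit.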
Devices generate interrupts to indicate hardware events. Generally, drivers provide an interrupt service routine (ISR) to handle the interrupts and perform the required responses to the events. Symbian provides an
Interrupt handling is typically done in a PDD, as device hardware access is done at that level. Interrupt handling is generally done in two stages in the driver: high priority, short-running tasks are done in the ISR, and the remaining processing is deferred for handling later.
Interrupt handling is a blocking, high priority task and needs to take a minimal amount of time. The kernel will be in an indeterminate state, which puts restrictions on performing various operations.
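The two-stage split can be sketched in standard C++. This is an invented analogue, not the Symbian DFC API: the ISR does only the minimal, time-critical work of latching the data and flagging a deferred function call, and the "DFC" later does the bulk of the processing in thread context.

```cpp
#include <cassert>
#include <vector>

// Illustrative two-stage handler (invented names): stage 1 is the ISR,
// stage 2 is the deferred function call that runs later.
class TwoStageHandler {
    std::vector<int> iCaptured;   // data latched by the ISR
    bool iDfcQueued = false;
    int iProcessed = 0;
public:
    void Isr(int aData) {         // stage 1: short, high priority
        iCaptured.push_back(aData);  // store data that may not survive
        iDfcQueued = true;           // defer the remaining work
    }
    void RunDfc() {               // stage 2: runs later in thread context
        if (!iDfcQueued) return;
        iProcessed += static_cast<int>(iCaptured.size());
        iCaptured.clear();
        iDfcQueued = false;
    }
    int Processed() const { return iProcessed; }
    bool Pending() const { return iDfcQueued; }
};
```

In a real driver the heap allocation implied by `std::vector` would itself be forbidden in the ISR; a fixed ring buffer would be used instead. The vector is used here only to keep the sketch short.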
+The Sound Driver is a device driver that controls the audio hardware of +a phone. Symbian platform provides a User-Side API, a Logical Device Driver, +and a base class for physical channels. You must provide a PDD that implements +the physical channels that interface to the sound hardware on your phone.
+Use this test suite to ensure that your +Power Resource Manager implementation runs correctly.
Introduction
The following tasks are performed by the test suite:
enumerate all registered +resources,
check that resources +behave consistently with their properties,
ensure that resources +are reset to their default level when not in use,
ensure all actual resource dependencies have been declared (only the extended version of the PRM allows dependencies).
To ensure that single-user resources are available to the test application,
The instructions to run the +PRM acceptance test suite are as follows:
build the test code
+in
build a ROM for H4HRP of type
This command generates +a self-starting ROM. The test output is captured from the default debug port +of the device.
To build a manual test ROM (so the test application does not run
+automatically), use the
Note: This test must be run with the minimum of required components, because the test driver is a kernel extension and takes control of all single-user resources during its entry point. This test randomly changes the state of all the resources between a minimum and a maximum value. If there are any restrictions on a resource's state change (i.e. the resource state can only be changed to a certain value), then that resource should not be registered while running this test.
The PSL must be implemented in order for +the PRM and the hardware to be able to communicate.
Introduction
The
+PSL side of the Resource Controller should be derived from the class
The following tasks are +covered in this tutorial:
Create an entry +Point
The PIL layer of the PRM offers the following macro. This
+needs to be invoked from the PSL. The macro calls the
If the PRM is a
Override +the pure virtual functions
Override the virtual functions of the
Here is an example H4 HRP implementation:
DoInitController()
Override
DoRegisterStaticResources()
creates all the static +resources in the kernel heap,
creates an array of +pointers to the static resources (indexed by their id),
sets the passed pointer
+(
updates the count of
+the static resources (in
The reference implementation is below:
Create +static resources that support dependencies
If there are static
+resources that support dependencies then the PSL must implement the function
The function
creates all static resources +that support dependencies in kernel heap,
creates an array of +pointers to them,
establishes the dependencies
+between them (using the resource's
sets the passed pointer
+(
updates the count of
+the static resource that supports dependency in
This function is called by the PIL during PRM initialisation. Below +is an example:
Creating +dependencies between static resources
Information
+within
The members
To link to dynamic resources that support
+dependency use the PRM function
Initialise +the pools
Pools are implemented as singly linked lists during
+the initialisation of the PRM. The PSL initialises the pools by invoking
The PRM has three +types of pools:
client pools to store +client information,
If a client pool is set to zero, then the PRM will not allow a client of that type to register. For example, if the kernel side client pool is set to zero, then the PIL will not allow any kernel side clients to register. The size of the client pools to specify depends on how many kernel side device drivers (clients) and user side clients will access the PRM; the size of the request pool depends on the number of long latency resources available in the system; and the size of the client level pool depends on the number of clients that will request a resource state change. If the client pools are exhausted, the PIL will try to increase the size of the pool. The size of a pool never decreases.
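The grow-but-never-shrink policy described above can be sketched in standard C++. This is an illustrative model only, not the PRM's actual pool implementation: a zero-sized pool refuses every allocation (mirroring a client type that is not allowed to register), and an exhausted pool grows in place.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the pool growth policy: grows when exhausted, never shrinks.
class Pool {
    std::vector<bool> iSlots;   // true = slot in use
public:
    explicit Pool(std::size_t aInitial) : iSlots(aInitial, false) {}

    // Returns a slot index, or -1 if this pool admits no clients at all.
    int Allocate() {
        if (iSlots.empty()) return -1;   // zero-sized pool: refuse
        for (std::size_t i = 0; i < iSlots.size(); ++i)
            if (!iSlots[i]) { iSlots[i] = true; return static_cast<int>(i); }
        iSlots.push_back(true);          // exhausted: grow, never shrink
        return static_cast<int>(iSlots.size() - 1);
    }

    void Free(int aId) { iSlots[static_cast<std::size_t>(aId)] = false; }
    std::size_t Size() const { return iSlots.size(); }
};
```

Freeing a slot marks it reusable but leaves the pool at its high-water size, matching "the size of a pool never decreases".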
Registering with the Power +Controller
The Resource Controller must call
The platform specific implementation of the Power Controller
+overrides the
Idle power management
The resource controller's APIs cannot be called from a NULL thread. However, the PSL may need to know the state of the resources from a NULL thread to take the system to the appropriate power states. To allow access to this information, the PRM provides the virtual function
The Time platform service is a plug-in to the System State Manager.
+System state manager
You must be familiar with building ROM for the Symbian platform.
A reference template is provided at
Note that the reference source code is not supported
+on the
The system state manager plug-ins are built as a DLL.
The Kernel provides APIs to wait in blocked state, unblocked state, and +nanokernel APIs to get the tick period, timer ticks, to sleep on nanothread, +and so on.
+The IIC bus has a standard
The following test code is available to +test an IIC port.
The above test system consists of the executable (t_iic.exe) and associated LDD files. The default version of t_iic.exe is used to test that the platform independent layers of the IIC component work correctly. The default version only works on the emulator, so the layer below the SHAI is a series of stubs. For this test harness to work with actual hardware, extensive modification of t_iic.exe is required.
The IIC test application is used to +test:
The basic master channel functionality.
The master channel data handling for transaction functionality.
The master channel preamble and multi-transaction functionality.
The slave channel capture and release APIs.
The slave channel capture for receive and transmit of data.
That MasterSlave channels can only be used for one mode at +a time.
The IIC test application has the following known limitations:
This test suite does not work on hardware.
Illustrations of the use of APIs are taken from a number of example device +drivers. These are summarised below. All of the drivers implement support +for serial communications over a UART. They are designed to show a number +of different implementation techniques.
+The source code for the example device drivers is delivered as part of
+the Symbian platform source code on kits, using the directory structure described
+previously in the
This page lists the keywords starting from S to Z.
+rombuild only
A standard executable file that +is loaded by the kernel; this is the file server. All subsequent files +can be loaded through the file server.
As with all standard +executable files, this is loaded, relocated and stripped of its relocation +information.
rombuild only
ROMs can be sectioned into two +parts allowing the upper part of the ROM to be switched for language +variations and file patching. This is independent of the extension +ROM mechanism.
This keyword appears at the point in the obey file where the ROM is to be split. All files before this line appear in the first (constant) section; files after it appear in the second (patch/language) section.
BUILDROM only
Provides support for
The
+two sections are termed the upper and lower section. The upper section
+can be replaced without needing to change the lower section. This
+facility is most often used to put localised resource files into the
+upper section. This keyword provides
All lines beginning with the
For +example:
becomes:
See also
rofsbuild only
Configures the number of bytes +in each sector for the file system in data-drive images.
rombuild only
Specifies that this ROM image +has one kernel executable within it. This is the default.
Note that this keyword is mutually exclusive with
BUILDROM only
Specifies the
Note: A
+directory containing
BUILDROM only
Specifies input files used to +create a static plug-in information (SPI) file.
Its parameters +are:
An SPI file concatenates several resource files together.
+It is currently used to record the ECom plug-ins that are in ROM.
+This allows the ECom framework to register the ROM-based plug-ins
+without having to scan the file system for individual resource files.
+IBY files are not expected to use the
Note that
+creation of SPI files is optional (see
the
copies of each +SPI file are placed in the same directory as the created ROM image. +This is necessary for the possibility of creating extension/composite +ROMs.
any resource +files included in an SPI file are not placed in the ROM image. This +avoids duplication and an unnecessary increase in the size of the +ROM.
If SPI creation is switched off all resource files are placed
+in the ROM image in the locations specified by the
BUILDROM only
Specifies the files that need +to be marked as hidden in the static plug-in information (SPI) file, +to hide the associated ECom plug-in in the ROM.
Its parameters +are:
The file is marked as hidden in the SPI file by writing
+the data length of the file as 0. A resource language file can be
+overridden using this keyword in the IBY file. If you intend to hide
+both the resource file and the DLL, use the
Note that creation of SPI files is optional (see
rombuild only
Destination address for S-record +download.
rombuild only
rombuild only
Overrides the default stack size +for the executable.
rombuild only
Overrides the maximum size of +the stack.
rombuild and rofsbuild
Stops processing the obey file. The ROM image is not generated.
rombuild and rofsbuild
If specified, overwrites +the date-time stamp of the ROM image with this value. If not specified, +the image is time and date stamped from the system clock of the build +PC.
BUILDROM only
A pre-defined substitution. This
+is replaced with today's date in the format
Note that there is no UNDEFINE facility, and substitutions +are applied in an unspecified order.
rombuild and rofsbuild
Turns on rombuild tracing. +This is internal to Symbian.
rombuild only
Overrides the first UID for the +executable.
rombuild only
Overrides the second UID for the +executable.
rombuild only
Overrides the third UID for the +executable.
rombuild only
Indicates that this is a Unicode
+build; this is the default if not present and
rombuild and rofsbuild
Use the
rombuild and rofsbuild
Use the
rombuild and rofsbuild
Use the
rombuild only
Defines hardware variants.
It should be applied to the variant DLL. The
rombuild and rofsbuild
The ROM version number +as represented by its three component values.
rofsbuild only
Configures the volume label for +the file system in data-drive images.
BUILDROM only
Prints the rest of the line following +the WARNING keyword to standard output, and reports the source file +name and the line number.
BUILDROM only
Specifies the name of the Z drive description image file (ROM, ROFS, extension ROFS or CORE image).
The clients +of the Interrupt platform service must know the following:
ISR function
Interrupt ID
Interrupts are sent by hardware to indicate that an event has occurred. Interrupts typically cause an Interrupt Service Routine (ISR) to be executed. The Interrupt platform service specifies the interface APIs for setting up ISRs and connecting them to specific interrupt IDs. Interrupt handling is a blocking, high priority task and needs to take a minimal amount of time. While the ISR is executing, the kernel is in an indeterminate state, and this puts restrictions on performing various operations, such as allocating heap storage.
The device driver provides
+an ISR to handle an interrupt and perform the required response to
+the events. Symbian platform provides an
An interrupt is identified by a number, defined as a
An ISR is a static function that is executed when an interrupt is received by the interrupt handler. The interrupt handler executes the ISR that is bound to the received interrupt ID. The ISR performs the actions necessary to service the event of the peripheral that generated the interrupt. The ISR must either remove the condition that caused the interrupt or call
The device +driver is able to handle interrupts.
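The binding of ISRs to Interrupt IDs described above can be illustrated with a small sketch. This is a hypothetical model, not the real Symbian Interrupt class: it keeps a table mapping Interrupt IDs to static handler functions plus a context pointer, the way an interrupt controller's dispatcher might.

```cpp
#include <array>
#include <cstddef>

// Hypothetical sketch of an ISR dispatch table; a real driver binds
// handlers through the Interrupt platform service's exported API.
using TIsr = void (*)(void* aContext);   // static ISR function type

constexpr std::size_t KMaxInterrupts = 32;

struct IsrBinding {
    TIsr  handler = nullptr;
    void* context = nullptr;
};

class InterruptTable {
public:
    // Bind a static ISR to an Interrupt ID; fails if already bound.
    bool Bind(std::size_t id, TIsr isr, void* ctx) {
        if (id >= KMaxInterrupts || iTable[id].handler) return false;
        iTable[id] = {isr, ctx};
        return true;
    }
    // Called by the interrupt handler when interrupt `id` fires:
    // executes the ISR bound to that Interrupt ID.
    bool Dispatch(std::size_t id) {
        if (id >= KMaxInterrupts || !iTable[id].handler) return false;
        iTable[id].handler(iTable[id].context);
        return true;
    }
private:
    std::array<IsrBinding, KMaxInterrupts> iTable{};
};
```

Since the ISR runs with the kernel in an indeterminate state, the handler body must stay short and must not allocate heap storage.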
This document describes the functionality that the DMA hardware has to provide in order to be compliant with the DMA platform service.
None
Dependent on the Application Specific Standard Product (ASSP) chip that is being used.
In order to use the DMA platform service, the following information is required:
The location of the data source.
The location of the data destination.
The channel to be used.
The amount of data to be transferred.
How much data is to be transferred at once (packet size).
Synchronization setup
Interrupt settings
How the above settings relate to the operation of the DMA is shown in the diagram below:
The settings listed above will now be discussed in more detail.
Location of the data source
This specifies where data is to be transferred from (the source). This can be one of the following:
Memory
Peripheral
If the source location is a peripheral, then its port will have to be specified along with the location of the data source.
Location of the data destination
This specifies the final location of the data to be transferred (the destination). As with the location of the data source, this can be one of the following:
Memory
Peripheral
If the destination is to be a peripheral, then the port configuration will have to be specified along with the location of the destination.
Channel
The DMA platform service transfers data over channels, which are configured independently. The priority order of each channel is specified in the DMA platform service API.
Amount of data to be transferred
This setting specifies the amount of data that is to be transferred from the source to the destination.
Data packet size
Data is transferred in packets of a specified size. The acceptable values are:
4 bytes
8 bytes
16 bytes
32 bytes
64 bytes
128 bytes
Synchronization settings
These specify how the transfer between the source and destination is to be controlled. This is used when either the source or the destination can only take part in a data transfer depending on external events. The synchronization can be set up to be one of the following:
No synchronized transfer
Synchronize the transfer after a preset number of bytes
Interrupt settings
These are used to specify how the DMA and/or specific channels should react to interrupt events.
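The settings listed above can be collected in a single configuration structure. The sketch below is illustrative only — the names are hypothetical, and a real port configures transfers through the DMA platform service API — but it shows the pieces of information a transfer needs, and checks the packet size against the acceptable values (4, 8, 16, 32, 64 or 128 bytes).

```cpp
#include <cstdint>

// Hypothetical container for the DMA transfer settings discussed above.
enum class EndpointType { Memory, Peripheral };

struct DmaEndpoint {
    EndpointType  type;
    std::uint32_t address;   // physical address of the data
    std::uint32_t port;      // only meaningful for a peripheral endpoint
};

struct DmaTransferSettings {
    DmaEndpoint   source;          // where data is transferred from
    DmaEndpoint   destination;     // final location of the data
    unsigned      channel;         // independently configured channel
    std::uint32_t byteCount;       // total amount of data to transfer
    std::uint32_t packetSize;      // bytes moved per packet
    bool          synchronised;    // synchronise after a preset byte count
    bool          interruptOnDone; // interrupt settings for this channel
};

// The acceptable packet sizes are the powers of two from 4 to 128 bytes.
inline bool IsValidPacketSize(std::uint32_t s) {
    return s >= 4 && s <= 128 && (s & (s - 1)) == 0;
}
```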
Once a logical channel and a physical channel, if appropriate, have been created and initialised, the driver is ready to handle requests.
On the kernel side, requests can be handled by one or more kernel-side threads, allowing for rich and complex behaviour. Alternatively, if appropriate, a request can be handled by code running in the context of the client's user-side thread, but running in supervisor mode.
There are two kinds of request: synchronous and asynchronous.
The topic uses a reference board port named
In the template reference board port, the
The source for the driver is contained entirely within
The driver is defined as a kernel extension and is loaded +early in the boot sequence.
The driver functionality is almost entirely encapsulated by the
As the driver is a kernel extension, it must have a
This simply creates an instance of the
map the video RAM
set up the video info structure
install the HAL handler
install the power handler.
Map the video RAM
The frame buffer is a
If the frame buffer resides in main RAM and there is no restriction on which physical addresses may be used for it, physical RAM for the frame buffer should be reserved by using
If the frame buffer does not reside in main RAM, reserving it is not a problem.
If the frame buffer must reside at a specific address in main RAM, there are two strategies available for reserving it:
If no conflicts are permitted between the frame buffer and memory allocations made during the kernel boot (for example, if the frame buffer must reside at the end of main memory), simply use
The required physical RAM region can be reserved in the bootstrap. The correct place to do this is in the implementation of the boot table function
Note that all Symbian platform base ports currently create a second frame buffer for a secure screen. However, as platform security is not yet implemented, this is wasteful of RAM and should be omitted.
Set up the video information structure
The video information structure is used to define several aspects of the display, including display size, bits per pixel, and the address of the frame buffer. This structure is the class
Install the HAL handler
Control of the display is done using the HAL (Hardware Abstraction Layer).
The
See
Install the power handler
A call must be made to the
Requests to get and set hardware attributes are made through calls to
For the LCD Extension, the relevant hardware attributes are:
The HAL handler is registered with the kernel as the handler for the
A call to
See
The HAL handler is implemented as a case statement, switching on the function ID. For example, the following code fragment taken from
where
If an attribute does not have an implementation, the HAL handler function should return
For platform security, the code only allows the attribute to be set if the current thread has been authorized to write system data. Otherwise, it returns
Switch on and switch off operations
All of the HAL operations are seen to be synchronous by the user side. However, there are some operations, such as turning the display on and off, which may need to be implemented asynchronously.
The display on/off code is implemented using synchronous kernel-side messages. There is only one message per thread, and the thread always blocks while a message is outstanding. This means it is possible to make an asynchronous operation appear synchronous.
When turning on the screen, the kernel-side message is queued and the thread is blocked until the message is completed, which happens when the display has been turned on.
If a display needs to be turned on and off truly asynchronously (for example, if millisecond timer waits are required during the process of turning on the display), the above functionality must be changed so that the completion occurs when the display is truly on.
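The "asynchronous operation made to appear synchronous" pattern above can be sketched in portable C++. This model uses standard threads and a condition variable rather than Symbian kernel-side messages (which are not available outside the kernel), but the shape is the same: the caller queues the work and blocks until a completion signal arrives.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Sketch, under stated assumptions: the asynchronous power-on work is
// modelled by a worker thread, and the caller blocks on a condition
// variable until the worker signals completion - so the operation
// appears synchronous to the caller.
class DisplayPowerModel {
public:
    // Returns only once the (asynchronous) power-on work has completed.
    void TurnOnSync() {
        std::unique_lock<std::mutex> lock(iMutex);
        iDone = false;
        // Model the asynchronous power-on running elsewhere.
        std::thread worker([this] {
            std::lock_guard<std::mutex> g(iMutex);
            iDisplayOn = true;          // hardware is now powered
            iDone = true;               // "message completed"
            iCond.notify_one();
        });
        iCond.wait(lock, [this] { return iDone; });  // caller blocks here
        worker.join();
    }
    bool IsOn() const { return iDisplayOn; }
private:
    std::mutex iMutex;
    std::condition_variable iCond;
    bool iDone = false;
    bool iDisplayOn = false;
};
```

The predicate form of `wait` handles spurious wakeups; in the kernel-side equivalent, the single-message-per-thread rule plays the role of the mutex here.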
Accessing the video information structure
When any part of the
The
Note:
These functions are called in the context of the thread that initiates power down or power up, and synchronization is required, typically by means of power up and power down DFCs.
These functions generally queue DFCs, which then call platform-specific functions to power the display up and down.
When power up or down is complete, the interface supplies a set of acknowledgment functions which must be called when the change of state has taken place.
DMA transfer requests are the way in which a device driver sets up and initiates a DMA transfer. A transfer request internally comprises a linked list of DMA descriptor headers and is associated with a single DMA channel. A transfer request also stores an optional client-provided callback function, which can be invoked when the whole request completes, whether successfully or not. A DMA request can be in any of four states: Not configured, Idle, Being transferred, and Pending. However, these states are not used by the driver.
A device driver creates a DMA request by specifying a DMA channel and an optional DMA callback function to be called after the request completes. A DMA request must be fragmented.
The following shows the creation of a DMA request:
There are no specific tools required to use or implement SDIO.
This is the final step required to build a ROM that can use Writable Data Paging (WDP). The previous steps required to produce a WDP-capable ROM image are:
Executing the buildrom command builds the ROM image with no errors or warnings.
An example of a buildrom command that produces a demand paging ROM is:
The Register Access platform service is intended for use in writing +device drivers. Writing device drivers involves frequent access to +hardware registers for reading, writing and modifying them.
A register is a memory location on the ASSP hardware that stores data relating to the operation of that hardware. For example, a register can be a counter, a bit field, the next sequence number, or the next byte or word of data that has arrived over a bus.
The Symbian platform provides access functions for registers that have the following sizes:
8-bit
16-bit
32-bit
64-bit
There are three types of function provided for register access:
Each function takes a first argument of a
Each type of function (read, write, modify) has 8-bit, 16-bit, 32-bit and 64-bit versions.
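The three access-function types — read, write and modify — can be illustrated for the 32-bit case. These helper names are hypothetical (a real driver uses the platform-provided access functions with a base address and offset), but the modify operation shows the important idea: a read-modify-write that clears one mask of bits and sets another in a single update.

```cpp
#include <cstdint>

// Illustrative 32-bit register access helpers, mirroring the three
// function types described above. Hypothetical names only.
inline std::uint32_t RegRead32(volatile std::uint32_t& reg) {
    return reg;
}

inline void RegWrite32(volatile std::uint32_t& reg, std::uint32_t value) {
    reg = value;
}

// Modify = read-modify-write: clear the bits in clearMask, then set
// the bits in setMask, writing the result back in one update.
inline void RegModify32(volatile std::uint32_t& reg,
                        std::uint32_t clearMask,
                        std::uint32_t setMask) {
    reg = (reg & ~clearMask) | setMask;
}
```

The 8-, 16- and 64-bit variants differ only in the operand width.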
The
See
The SDIO overview provides a description of the basic concepts relevant to SDIO.
The technology guide provides a description of the technologies used by SDIO.
The interface overview document provides a description of the SDIO adaptation APIs.
The implementation overview describes how SDIO is included in a build of the Symbian platform.
The following illustration shows the basic architecture of SDIO.
SDIO is a standard that combines an SD (Secure Digital) card and an I/O device to produce a peripheral that can use an SD slot.
As they can offer a service to more than one other peripheral, we suggest that the driver that manages them presents a
A typical example of a shared peripheral is an inter-component serial bus (I2C, SPI, etc.):
Special care must be taken not to create circular dependencies: for example, Peripheral A depends on Peripheral B, which depends on Peripheral C, which depends on Peripheral A. Such circular dependencies would cause the
The shared peripheral driver’s power handler asynchronous
Some peripherals may also be represented as power resources to prevent the base port power saving operations inhibiting their usage. For example, a base port may consider DRAM to be a shared power resource, with the DMA controller or the DSP controller as clients. This allows them to still access DRAM whilst the CPU is idling, thus stopping the DRAM from going into self-refresh if either one of its clients has pending operations on it.
As another example, to save power some base ports might decide to reduce the refresh rate of the LCD whilst the CPU is idling, or even power it down for short periods and rely on the persistence of the LCD to prevent degradation of the user experience. However, on a multiple core solution the LCD might be used by the second core (e.g. a DSP) for playing back video whilst the main CPU is idling. In this situation the LCD should be represented as a shared power resource and the DSP must be able to request usage of this resource. The DSP controller software could assert a request on the LCD, preventing the main CPU from initiating the power saving measures.
Sound Driver concepts are described here.
The Serial Port Driver PDD must provide a factory class to create channels.
The purpose of the PDD factory is to create the physical channel. For the UART, this is defined by the
This implements the four virtual functions that the base class defines, +as well as a default constructor.
In the template port, the name is the string
See also
GetCaps() can be implemented as a stub function, for example:
See also
Note that
See also:
Implementing this function may involve:
setting up any port-related hardware; for example, defining GPIO pins as UART port I/O
initialising the UART device itself; for example, enabling UART clock sources
binding any interrupt resources to the UART Interrupt Service Routine (ISR).
The function sets a reference argument to point to the newly instantiated physical channel object, but returns an error code if object allocation or initialisation fails, that is, if the call to the physical channel object’s
This code is typical, and can be copied from any current source.
where
Customisation depends on the functions in the
The Power Resource Manager (PRM) integrates the Symbian device driver with the existing power control functions used for changing and querying resource states. It defines an exported API that device drivers and other users of power resources (represented by kernel-side components) can use to control the state of power resources. The PRM assists the development of device drivers by providing a simple-to-use generic framework, removing the need to understand the complexities of platform-specific power resource management.
The PRM also allows user-side clients to gain access to its services. Clients of the PRM can:
request information on the clients and resources registered with the PRM
get the current state of the resources
request changes to the state of power resources
request notification of changes (for example, power-down or a change in clock frequency) to the power resources that they depend on.
The following diagram shows the generic and customisable parts of +the framework:
exported kernel level API (
base virtual class for the Power Resource Controller (
PDD required for the user-side proxy client
platform specific implementation of resource control at the level of the register interface to the hardware resource controller (
Power resources
A power resource is anything that has an effect on power consumption. These are typically platform specific and either physical or logical. Physical power resources are provided by hardware: for example, clocks, outputs from voltage regulators, switched power domains and performance levels. Logical power resources do not directly map to hardware components, but their state has an effect on overall power consumption. This category includes, for example, shared buses and devices that provide services to other components.
Power resources can be further categorised based on:
Number of
A power resource can have a single user or it can be shared. There is no limit to the number of clients sharing a resource. If the resource is shared, then a request system is used. This comprises a list of
States
The
A multi-property resource has different properties across different portions of the operating curve. For example, the level may be controllable within the ‘operational’ part of that curve but have different discrete fixed states for the ‘idle’ or ‘sleep’ modes of operation.
Execution time
The execution time of a resource can be either instantaneous or long latency. An instantaneous resource can be operated almost immediately (no longer than a few system clock cycles, comparable to a CPU instruction execution time). Operations on long latency resources take a significant time to complete (a non-negligible number of CPU clock cycles).
Power resources may be long latency for state change operations and instantaneous for obtaining the current state. This is typically the case for a PLL (Phase Locked Loop) device (
Other power resources may be long latency on both state change and state read operations. This is typically the case for any resources accessed through an inter-IC bus (even if their operation is instantaneous).
Resource sense
Each client sharing a resource may have a different requirement on its state. The resource sense is used to determine whose requirement prevails.
Static or Dynamic
Most power resources are known at device creation time, so their control should be addressed by such components as the Variant or ASSP. These resources are registered during the creation of the Resource Controller; these are static resources. Resources controlled by device drivers (internal or external) are only registered at driver-creation time, using the registration and deregistration API provided by the PRM; these are known as dynamic resources.
Note:
Dynamic resources are only available when you use the extended library. See
Physical resources may belong to one or more of these categories; for example, a multilevel resource may be shared between different clients and take a significant time to change state (long latency).
External devices may include and control their own power resources. In some cases, an external device can function as a bus expander, allowing other devices to be connected and, eventually, to share the power resources it controls.
Caching the prevailing level
The PRM caches the prevailing level of each resource as well as the client that requested it. The cached state is guaranteed to be the state that the resource was in when the last request to change or read the state from its hardware was issued.
On resource state read operations, the PRM allows clients to select the cached state instead of the current state of the hardware resource. A cached state read is always an instantaneous operation executed in the context of the calling client thread (even for long latency resources).
However, care must be taken, as the cached state may not be the state the resource is in at that moment. For example, a resource can be non-sticky: when it is turned on it stays on during an operation but then automatically switches itself off.
The consistency of the cached state relies on all resource state operations being requested through the
Dynamic resources
A generic layer provides basic functionality for static resources; an extended version of the PRM provides APIs for dynamic resources and resource dependencies. These dynamic resource and resource dependency APIs are only available through the extended library
A dynamic resource is registered with the PRM using
Pass the ID of the client that requests the dynamic resource and the dynamic resource (
Deregistering a dynamic resource
Use
Changing resource state on a dynamic resource
Because a dynamic resource can deregister from the PRM, it is necessary to indicate to clients that this has happened. Any remaining notification requests are completed with -2 in the client field of the callback function. This indicates that the notification is because the resource has been deregistered and not because the notification conditions have been met. When a client receives a notification of this type, the request notification should be cancelled using
Resource dependencies
A resource may register a dependency on another resource. At least one of the resources must be a dynamic resource. Request a dependency between resources with
This function is passed the ID of the client that sets the resource dependency and the dependency information of the resource (
The kernel will panic if a closed-loop dependency is found for the specified dependency or if a resource with the same dependency priority is already registered.
Use
Deregistering resource dependencies
Use
Note: These functions are only available when you use the extended library.
Changing resource state on a dependent resource
A resource state can change when the state of any of its dependent resources changes. A state change causes a notification if the requested conditions are met. The client ID in the callback function is updated with the ID of the resource that triggered this resource change (bit 16 of the ID is set for a dependency resource; this is used to distinguish between a resource ID and a client ID).
Clients
The PRM has both user and kernel-side clients. Kernel-side clients are:
device drivers
performance scaling and/or performance monitoring and prediction components
User-side clients are:
user-side system-wide power management framework
other power-aware servers and power-aware applications
Clients register with the PRM by name. Clients can request the allocation and reservation of client level objects and request message objects after registering with the PRM.
Client level objects
Client level objects represent a client requesting a level on a resource. They are used by resources to set the current level of the resource and the current owner of the resource level.
Each resource has a doubly linked list on which client level objects are held. When a client requests a level for the first time, a new client level object is added to the list; however, if the client already has an object on the list, this object is adjusted to reflect the change in the required level.
The following rules determine whether the change is allowed on positive and negative sense resources:
If the requested change is in the direction permitted by the resource sense and would result in exceeding the prevailing level, the change is allowed.
The new prevailing level is recorded, and the previous prevailing level and the requirements other clients have on the resource remain on record.
If the requested change is not in the direction permitted by the resource sense, but the client requesting the change is the owner of the prevailing level, the resource state changes to the next highest level for a positive sense resource, or the next lowest for a negative sense resource.
This value is now the prevailing level and the client that requested it remains the owner of the prevailing level.
If the requested change is not in the direction permitted by the resource sense, and the client requesting the change is not the owner of the prevailing level, the change is not allowed.
Even though the request was rejected, the client's existing requirement is adjusted to reflect the request.
Prevailing levels are recorded by either adjusting the previous requirement for the client that requested it or, if it is the first time this client has requested a level, creating a new requirement.
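The rules above can be modelled compactly for a positive sense resource. This is a simplified, hypothetical sketch (not the PRM implementation): every client's requirement is always recorded, and the prevailing level is simply the highest recorded requirement — which reproduces the behaviour that an owner lowering its requirement drops the resource to the next highest remaining requirement, while a non-owner's lower request is recorded but does not change the level.

```cpp
#include <algorithm>
#include <map>

// Simplified model of the positive-sense shared-resource rules above.
class PositiveSenseResource {
public:
    // Record the client's requested level (requirements are always
    // adjusted, even when the change is "not allowed") and return the
    // new prevailing level.
    int Request(int clientId, int level) {
        iRequirements[clientId] = level;
        return Prevailing();
    }
    // The prevailing level of a positive sense resource is the highest
    // requirement held by any client.
    int Prevailing() const {
        int best = 0;
        for (const auto& kv : iRequirements)
            best = std::max(best, kv.second);
        return best;
    }
private:
    std::map<int, int> iRequirements;  // clientId -> required level
};
```

A negative sense resource would use the minimum instead of the maximum.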
Notifications
A client may request a state change notification from a resource. Notifications can be conditional or unconditional; conditional notifications specify a threshold on the resource's state which, when crossed in the direction specified, triggers the notification.
Notifications are always issued post resource change and invoke callback functions executed in the context of the requesting client thread. The process of notifying a client involves queuing a DFC in that client's thread. The notification object encapsulating the DFC must be created in the kernel heap and passed to the notification API by the client that requested the notification. The client may share the same callback function between several notifications, but a unique DFC must be created and queued per notification.
Because notifications result in DFCs being queued, it is not possible to guarantee that all changes of resource state are notified. If the resource state changes again before the first DFC runs, a separate notification cannot be generated. It is also not possible to guarantee that by the time the notification callback runs, the resource state is still the same as the state that caused the notification to be issued: it is the responsibility of the driver to read the resource state after receiving a notification.
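The conditional-notification trigger described above — a threshold crossed in a specified direction — can be sketched as a small predicate. The structure and names are hypothetical, for illustration only.

```cpp
// Hypothetical check for a conditional notification: the client
// specifies a threshold and a direction, and is notified only when a
// state change crosses that threshold in that direction.
struct ConditionalNotification {
    int  threshold;
    bool risingDirection;  // true: notify when the state rises past it

    bool Triggered(int oldState, int newState) const {
        if (risingDirection)
            return oldState < threshold && newState >= threshold;
        return oldState > threshold && newState <= threshold;
    }
};
```

Note that a change from one side of the threshold to further on the same side does not trigger, matching "crossed in the direction specified".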
Introduction
The
The API enables concurrent access by multiple clients, with one channel supported for each. Sharing of channels between threads is not supported.
If a client wishes to access information on kernel-side clients of the Resource Controller, it must also exhibit the
Initialisation
Clients do not need to explicitly load the kernel-side LDD, as this is a kernel extension and so is loaded and initialised during kernel initialisation (attempts to re-load it return the error code
Clients open a channel on the LDD by calling the
The client then needs to call the
The number of request objects created at initialisation should be the maximum number of concurrent (asynchronous) requests that the client expects to have outstanding at any time.
Getting power resource information
The example below shows these functions in use. The number of available resources is returned by
Get the current resource state with
The API supports receiving a number of concurrent (asynchronous) requests from a client; the maximum number is determined by the number of resources that were specified at initialisation. If a client issues a request that exceeds its allocation, the error code
Some synchronous methods take a binary value,
Requesting notifications
Clients can request to be notified when a resource changes its state using the
To cancel a notification, use
It is not guaranteed that all changes of resource state result in a notification – a resource may change state more than once before a notification request is completed. In addition, it is not guaranteed that a resource is in the same state as it was when the notification was generated. For these reasons, it is the responsibility of the user-side client to read the state of a resource after receiving a notification.
Changing the state of power resources
To change a resource state, call the asynchronous
The Power Resource Manager component is implemented as a kernel extension. It has an exported public interface accessible to kernel-side components. Kernel components link to the resource manager kernel extension DLL (
To ease the development of the PRM, including the functionality provided by the generic layer and the mandatory PSL implementation, the generic layer is compiled into a kernel library (klib) (
Concepts
Request messages
Request message objects represent a request for an operation on a resource by a client, for example a request to change a resource's level, or a notification if a resource's level changes.
The client may attempt to increase its quota by requesting the allocation of request messages, but this may fail with
The pre-allocation and reservation of
See
Callbacks
A callback object is a customised DFC that is used to signal the completion of notifications and the PRM's asynchronous APIs. Use the class
The resource callback object is initialised with a callback function
This one specifies the DFC queue:
The user specified callback function is invoked with five arguments:
the ID of the client; this is:
the client that issued an asynchronous operation - if the callback function is called as a result of an asynchronous resource state read operation
the client that is currently holding the resource - if the callback function is called as a result of an asynchronous resource state change operation or resource state change notification.
Note:
If -1 is passed as the client ID, this specifies that no client is currently holding the resource.
In the extended version of the PRM (see Dynamic and Dependent resources), if -2 is passed as the client ID in the callback function, it signifies that the dynamic resource is deregistering. With dependency resources, the ID of the dependent resource is passed as the client ID if the state of the resource (specified in
issued an asynchronous API (state read or change) which leads to the callback function being called
requested the resource state change that leads to queuing a notification, which then leads to the callback function being called.
the resource ID - a client may have multiple notifications pending on different resources and a single callback function. The resource ID can be used to specify which resource change is being notified
the level of the resource when either:
the callback is called as a result of a notification of state change
when the callback is called as a result of a non-blocking form of
the requested level for the resource when the callback is called as a result of a non-blocking form of
the ID of the resource +level owner
the error code returned +when the callback is called as a result of an asynchronous request to change +the state of the resource. This is not relevant when the callback is called +as a result of a notification of state change
a user defined argument +that is passed to the constructor.
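The arguments enumerated above can be modelled as a callback function type. The type name and parameter names below are illustrative assumptions, not the PRM's exact declaration, but they show how a single callback can serve multiple notifications by switching on the resource ID and user argument.

```cpp
// Illustrative callback signature carrying the arguments listed above:
// client ID, resource ID, level, level-owner ID, error code, and the
// user-defined argument passed to the constructor. Hypothetical names.
using ResourceCallbackFn = void (*)(int aClientId,
                                    unsigned aResourceId,
                                    int aLevel,
                                    int aLevelOwnerId,
                                    int aResult,
                                    void* aUserArg);

// Captures the values a callback received, for inspection.
struct CallbackRecord {
    int clientId = 0;
    unsigned resourceId = 0;
    int level = 0;
};

// Toy dispatcher: invokes the callback as the PRM would after a state
// change or when a notification fires.
inline void InvokeCallback(ResourceCallbackFn fn, int client,
                           unsigned resource, int level, int owner,
                           int result, void* arg) {
    fn(client, resource, level, owner, result, arg);
}
```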
Cancel an asynchronous request, or its callback, by calling
Registration and initialisation
Clients need to register with the PRM using
Registration happens at different times for different clients:
kernel extensions register from their entry point during kernel boot
device drivers for internal or external devices register on opening a channel.
When a client registers with the PRM, a client link object is removed from the free pool, populated with the relevant information, and a pointer to it is added to a container of clients within the PRM. If no free link exists within the pool, a number of client links are created in the kernel heap. As a result, client registration may fail in out-of-memory situations. Client links are never destroyed after the client deregisters, but are released back into the free pool.
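The free-pool behaviour just described — take a link from the pool, grow the pool only when it is empty (the point at which registration can fail in out-of-memory situations), and release links back rather than destroying them — can be sketched as follows. This is a toy model with hypothetical names, not the PRM's container.

```cpp
#include <memory>
#include <vector>

// Toy model of the client-link free pool described above.
struct ClientLink { int clientId = -1; };

class ClientLinkPool {
public:
    explicit ClientLinkPool(std::size_t initial) {
        for (std::size_t i = 0; i < initial; ++i)
            iFree.push_back(std::make_unique<ClientLink>());
    }
    // Take a link from the pool, growing it on demand (the allocation
    // here is where registration could fail when memory is exhausted).
    std::unique_ptr<ClientLink> Acquire() {
        if (iFree.empty())
            iFree.push_back(std::make_unique<ClientLink>());
        auto link = std::move(iFree.back());
        iFree.pop_back();
        return link;
    }
    // Links are never destroyed on deregistration; they are scrubbed
    // and returned to the free pool for reuse.
    void Release(std::unique_ptr<ClientLink> link) {
        link->clientId = -1;
        iFree.push_back(std::move(link));
    }
    std::size_t FreeCount() const { return iFree.size(); }
private:
    std::vector<std::unique_ptr<ClientLink>> iFree;
};
```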
Clients are registered by name; the PRM does not check that a name is unique unless the
On registration, a client also specifies the type of ownership (
The default is
Pre-allocating client level and request message objects
The controller has an initial pool of free client levels and request messages, whose size can be configured at build time by the PSL. After registering with the PRM, a client can request the pre-allocation of a number of client levels and request messages to guarantee deterministic behaviour (to avoid out-of-memory failure) of the resource state change operation, using
A client may issue more than one request, possibly to the same resource, before the previous one completes; it is therefore more difficult to determine how many objects to pre-allocate.
It is recommended to pre-allocate as many request messages as there are asynchronous long latency resources that the client expects to use. The free pool should be relied on for the situations when a resource is requested more than once before the first request completes.
Deregistration
All notifications and asynchronous requests must be cancelled before the client is deregistered, otherwise the kernel will panic. Use the
Client deregistration can result in a resource state change:
If the resource is shared and the deregistering client owned the prevailing level, the resource state is moved to the next level according to its sense. If it is a custom sense resource, the resource state is changed as a result of calling a custom function.
If the resource is not shared, or if it is shared but the client is the only one using it at the time of deregistration, the resource is moved to the default state. The default state is only known by the PSL.
A deregistering client can have a number of active requirements on a number of resources, and more than one of these resources may need to change state as a result of the client deregistering. The combined operation of deregistering the client and adjusting the state of all resources it shares executes synchronously. That is, the client thread is blocked throughout, whether the resources whose state needs to change are instantaneous or long latency.
Getting power resource information
The
+resource information can be retrieved using the method
To get information about clients using a resource, use the
+method
Use
The asynchronous
If executing asynchronously, the callback function encapsulated in the
+callback object (called in the client’s context) is invoked when the state
+of the resource is available. The third argument of the callback function
+carries the resource state. If this function is called with
For instantaneous resources,
ETrue - the resource state is the cached value; this is returned faster but is not guaranteed to be correct.
EFalse - the resource state is read from the resource; this takes longer to return, but is much more likely to be correct.
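The cached/fresh trade-off can be modelled as follows; this is an illustrative sketch in standard C++ (TResource and its members are invented names, with a bool standing in for ETrue/EFalse), not the Symbian resource controller API:

```cpp
#include <functional>

// Illustrative model of reading an instantaneous resource's state either
// from a cached copy (fast, possibly stale) or from the resource itself
// (slower, up to date). Hypothetical names, not the Symbian API.
class TResource {
public:
    explicit TResource(std::function<int()> aReadHw)
        : iReadHw(aReadHw), iCached(iReadHw()) {}

    // aCached == true  -> return the cached value (ETrue behaviour)
    // aCached == false -> read the resource and refresh the cache (EFalse)
    int GetState(bool aCached) {
        if (!aCached) {
            iCached = iReadHw();   // slow path: query the actual resource
        }
        return iCached;
    }
private:
    std::function<int()> iReadHw;  // stands in for a hardware register read
    int iCached;
};
```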
Changing the state of power resources
The
The values passed to the method are:
the ID of the client requesting the change
the ID of the resource whose state is to be changed
the new state of the resource
a pointer to the resource callback object.
The requested state is either a binary value for a binary resource,
+an integer level for a multilevel resource or some platform specific token
+for a multi-property resource. The pointer to the callback object is defined
+by the class
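The four values passed to the method can be modelled in a minimal sketch. All names below are invented for illustration; this is not the Symbian power resource manager API:

```cpp
#include <functional>

// Illustrative sketch of a state-change request carrying the four values
// listed above. The types and the synchronous completion are hypothetical
// simplifications, not the Symbian power resource manager implementation.
using TCallback = std::function<void(int aClientId, int aResourceId, int aState)>;

struct TResourceModel { int iState = 0; };

// Change the resource state and, if a callback object is supplied, invoke
// it with the client ID, resource ID and the state now in force.
int ChangeResourceState(TResourceModel& aResource, int aClientId,
                        int aResourceId, int aNewState,
                        const TCallback* aCb = nullptr) {
    aResource.iState = aNewState;    // binary value, level or token
    if (aCb) {
        (*aCb)(aClientId, aResourceId, aResource.iState);
    }
    return 0;  // KErrNone-style success
}
```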
Changing resource state on a long latency resource
If the
+callback object is specified, then
If
+the callback object is NULL, then
Changing resource state on an instantaneous resource
On an
+instantaneous resource
If a callback object is specified, then the callback function encapsulated in the callback object is called from the client thread after the resource change and before returning from the function.
Use
Requesting notifications
Clients can request notification of a state change on a resource. Notifications are issued after the resource change and invoke the callback function encapsulated in a notification object. The callback is executed in the context of the requesting client thread.
The parameters passed to
the ID of the client requesting the notification
the ID of the resource for which notification of state changes is being requested
a reference to a notification object encapsulating a callback function called whenever a resource state change takes place.
A notification may be unconditional or conditional. An unconditional notification request notifies the client each time the state changes. A conditional notification request notifies the client when a threshold has been crossed in the direction specified. A conditional notification has these additional parameters:
aThreshold - the level of the resource state that triggers the notification
aDirection - the direction of the resource state change that triggers a notification:
EFalse - the resource state is equal to or below the threshold
ETrue - the resource state is equal to or above the threshold.
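The threshold and direction semantics can be expressed as a single predicate. This is an illustrative sketch (the function name is invented; booleans stand in for ETrue/EFalse):

```cpp
// Illustrative check for a conditional notification: the client is
// notified when the resource state meets aThreshold in the direction
// given by aDirection (false: at or below; true: at or above).
// Plain C++ sketch of the rule stated above, not the Symbian code.
bool NotificationDue(int aState, int aThreshold, bool aDirection) {
    return aDirection ? (aState >= aThreshold)   // ETrue: equal or above
                      : (aState <= aThreshold);  // EFalse: equal or below
}
```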
The client must create the notification object (
Because notifications result in DFCs being queued, it is not possible to guarantee that all resource state changes are notified. If the resource state changes again before the first DFC runs, the second change does not generate a separate notification. It is also not possible to guarantee that by the time the notification callback runs the resource state is still the state that caused the notification to be issued. The client should therefore read the resource state after receiving a notification.
Use
To handle events, the interrupt source and the ISR have to be registered with the OS: this is called binding the interrupt. An interrupt has to be unbound when its use is complete.
+ An
+interrupt source ID is bound with the driver's interrupt service routine.
+This is done using the function
An
+interrupt is unbound by calling
The BIL is a thin layer over the LDD APIs that removes the need for the class driver to handle buffers directly. The BIL is consistent between different class drivers and reduces the buffer-related understanding needed to use the USBSc LDD.
Using the BIL is optional for the class driver, but highly recommended.
Before completing these steps you must have followed
+the procedure from
+ the beginning of the tutorial set through to
+
When you have finished reading and writing
The paging options are:
Making individual executables (either exe or dll) paged gives the finest-grained control over paging at runtime. In the order of precedence for paging keywords, the values in this file are overridden by those in the oby file.
The following keywords are used to indicate whether the executable is unpaged or paged. If the executable is paged, the keywords indicate whether code paging, data paging or both are to be used.
When using the above keywords, the following points must be considered.
Building the executable with the new keyword does not produce any errors or warnings.
Below is an example mmp file with paging:
The above mmp file specifies that the
Contains guides that describe various aspects of demand paging in more detail.
+The IIC platform service provides APIs for a master channel and +a slave channel. The PIL provides the generic platform independent +functionality.
+You must derive a
+class from
The header file for the IIC can be found
Symbian converted the PDD of the
PDDs for the
For the remaining PDD code, although the LDD to PDD interface for the
One complication comes if the
Device drivers provide a mechanism for applications and other operating system functions to access hardware devices without needing to know how each specific piece of hardware works. In addition, device drivers may be written so that part of the device driver runs user-side and the rest runs kernel-side, so that the device driver can access shared memory and resources that belong to the kernel.
There are two main sets of users for device drivers:
Application developers that want to use device drivers
Developers that need to create or amend device drivers.
This documentation set is aimed at the second category of users. If you are an application developer and want to use a particular device driver, you should search for, and read, the documentation for that specific device driver.
The device driver guide explains the concepts of the device driver framework and how to implement a device driver.
For the basic concepts of device driver, see
To write a device driver, see
To use the kernel services in a device driver, see
To debug your device driver implementation, see
It is important to understand how memory is paged in the system,
+so you should also read
The MMC Controller manages the power to the MultiMediaCard hardware.
Before card commands can be issued to a card, three operations are required:
power must be applied to the card, i.e. VDD must be turned on
any requirement from the power model must be set, e.g. requesting a necessary system clock
the clock to the card interface must be switched on.
All three operations are performed as part of the
There are two cases:
Local drive requests,
+i.e. those originating from a media driver - if the card is not fully powered
+up when such a request is received, then the local media device driver automatically
+precedes the request with an instruction to the controller to power up the
+card. This results in
Once the MultiMediaCard stack has been initialized,
+the MultiMediaCard controller calls
This automatic re-powering of the card applies in all situations which can lead to the card not being powered: media change, machine power down, card inactivity timeout, VDD power problem, etc. In most cases, the process of restoring power results in the closure of any existing media driver and the opening of a new one. As the kernel thread used to perform controller requests is able to block, no special mechanism is necessary to allow for this potentially long-running power up sequence prior to a request on the device.
Requests not originating
+via a local media device driver, for example device drivers for I/O cards
+- if the MultiMediaCard stack is not initialized when the client submits a
+session, then the MultiMediaCard controller automatically precedes the request
+with a call to the
The MultiMediaCard controller will normally be configured with an
+inactivity timer. If a given period elapses where there is no bus activity,
+then the controller automatically turns off the card clock, removes any power
+requirements on the power model, and then removes power from the cards. This
+is the bus power down sequence. The length of this inactivity timeout period
+is set in the platform specific layer as part of porting the controller, and
+can be disabled completely if required; see the reference to porting the
Clients of the controller do not need to worry about re-initializing the stack following such a power down, as the controller does this automatically.
In the event of a PSU voltage check failure, the controller performs the bus power down sequence. It will not re-power the problem card and will expect it to be removed.
The power model
+calls
When the machine
+is powered back up after a normal power off, the power model calls
In an emergency power down situation, for example, where
+a battery is in a critically low power state, the MultiMediaCard controller
+performs the normal bus power down sequence as this is not a lengthy operation.
+The power model calls
However, there is always a risk that a block being written to a card may become corrupt. The solution to this problem lies in the hardware architecture. Two possible solutions are:
to provide enough early warning of power being removed before the battery level becomes too low for card operation. For example, a catch mechanism on a battery door, making it slow to remove, would provide sufficient time for any write operation in progress to be terminated at the next block boundary before the power supply is lost.
Note, however, that the media driver fails any operations involving writes to a card when the battery level is becoming dangerously low, so in general this is only a concern for unexpected battery removal.
to provide a backup battery so that the failing write operation can be retried once a good battery level has been restored.
Even with such mechanisms in place, if power is removed in the middle of a multi-block write operation, some blocks will contain new data while others will still contain old data.
When a door open event is detected by the MultiMediaCard controller, it attempts to remove power from the card as soon as possible. Power is not removed immediately if a bus operation is in progress, as powering down a card in the middle of writing a block could leave the block corrupted.
Power is only removed from a card immediately on a door open event if there are no sessions queued by clients on the controller. If one or more sessions are queued, these are allowed to complete, with power being removed once the last session has completed. Any attempt to engage a new session while the door is open fails with
To prevent a card becoming corrupt because of attempted removal during a write operation, it is important that the door mechanism and circuitry give enough early warning of a potential card removal before the card is actually removed. This provides sufficient time for any write operation in progress to proceed to the next block boundary before the card is removed.
Once the door is closed again, new sessions can be engaged and power can be re-applied to the card by the controller. However, power is only restored by the controller in response to a client request. The controller does not automatically re-power a card to resume an operation interrupted by a door open event, no matter what operation was in progress when the door opened.
An interrupt source is a hardware device or software action that can force the CPU to suspend normal execution, enter interrupt handling state and jump to a section of code called an interrupt handler.
Typically, a number of interrupt sources are monitored by an interrupt controller. This is hardware that generates a single interrupt notification to the CPU, and provides information about which interrupts are pending, i.e. which interrupts require action to be taken.
An interrupt service routine, or ISR, is code that deals with a pending interrupt. The Symbian platform kernel responds to an interrupt notification by calling an ISR for each pending interrupt. The process of calling ISRs is called interrupt dispatch.
The ISR is a single bare function. It is not a class member.
Each ISR takes
+a single 32-bit parameter that is, typically, a pointer to an owning
+class, although it can be any value that is appropriate. The parameter
+is defined as a
ISRs are usually kept in an ISR table.
An interrupt source is identified by number, defined
+as a
Where the ASSP layer is split into a common layer and a variant (device specific) layer, the variant layer may also define its own set of interrupt IDs.
This number is usually referred to as the interrupt ID.
Only one ISR can be associated with each possible interrupt source. Making this association is known as binding. ISRs can be bound and unbound during normal operation, but only one ISR can be bound to an interrupt source at any one time.
A
+device driver binds an ISR by calling
At its simplest, this is the process of deciding which interrupts are pending and calling the ISR for each.
The following pseudo code shows the general principle:
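The pseudo code itself is not reproduced in this extract; the following sketch, in illustrative C++, shows the general principle under simple assumptions (a pending-interrupt bitmask read from the controller, and an ISR table indexed by interrupt ID; the names are invented for the example):

```cpp
#include <cstdint>

// General principle of interrupt dispatch: walk the pending mask from
// the interrupt controller, and call the bound ISR for each pending
// source. Illustrative sketch only; real dispatchers must also interact
// with the controller hardware (acknowledge, re-read, prioritise).
typedef void (*TIsr)(void* aPtr);   // ISR signature: one parameter,
                                    // typically a pointer to an owning class

struct SIsrEntry { TIsr iIsr; void* iPtr; };

const int KMaxInts = 32;
SIsrEntry IsrTable[KMaxInts];       // one entry per interrupt ID

int Dispatch(uint32_t aPendingMask) {
    int dispatched = 0;
    for (int id = 0; id < KMaxInts; ++id) {
        if (aPendingMask & (1u << id)) {
            IsrTable[id].iIsr(IsrTable[id].iPtr);  // call the bound ISR
            ++dispatched;
        }
    }
    return dispatched;
}
```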
In practice the dispatcher may have to do some more work to communicate with the interrupt controller hardware.
A system may have multiple interrupt controllers to handle a large number of interrupt sources. These are usually prioritised by connecting the interrupt output of a lower-priority controller to an interrupt input of a higher-priority controller. This is called chaining.
An interrupt from a lower-priority controller then appears as an interrupt on the highest-priority controller.
When the interrupt dispatcher of the higher-priority controller detects that it is the chained interrupt that is pending, the usual way of dealing with this is to run a secondary dispatcher to determine which interrupt on the chained controller is pending.
There may be further levels of chaining before the true source of the interrupt has been identified.
It is possible that a single input to an interrupt controller is shared by several interrupt sources.
This appears to require binding multiple ISRs to the same interrupt, which is not possible. There are two ways of dealing with this:
Maintain a list of all ISRs that are bound to this single interrupt source, and call all the ISRs in the list when the interrupt is dispatched. This is most conveniently implemented by binding a single ISR to the interrupt, which then calls all the real ISRs bound to this interrupt.
Create pseudo
+interrupts. These are extra interrupt IDs that do not exist in the
+interrupt controller, but represent each of the interrupt sources
+connected to the single shared interrupt source. An ISR can then be
+bound to each pseudo interrupt. The interrupt dispatcher can then
+determine which of the sources are actually signalling and call the
+appropriate ISR via that pseudo interrupt ID. This is effectively
+an implementation of a
When a common ASSP extension is used, a device may have additional peripherals external to the ASSP, and there needs to be a way of allowing extra interrupt binding and dispatch functions to be added later by the variant layer. This must be handled by the port, as Symbian platform does not provide any additional API to support this.
Device drivers should
+be able to use the
To enable the core implementation of these functions to decide whether an interrupt ID refers to a core interrupt or a device specific interrupt, a common technique is to "tag" the interrupt ID. A simple way is to use positive numbers to identify core interrupts and negative numbers to identify device specific interrupts. The ISRs for device specific interrupts are not stored in the core ISR table; instead the device specific layer provides its own ISR table.
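The tagging technique can be sketched as follows, assuming the positive/negative convention described above; the table layout and names are invented for illustration:

```cpp
// Illustrative "tagged ID" scheme: positive IDs index the core ISR table,
// negative IDs index a device specific table provided by the variant
// layer. Hypothetical sketch of the technique described above; the
// tables hold flags rather than real ISR pointers to keep it short.
enum { KCoreInts = 8, KVariantInts = 8 };

int CoreBound[KCoreInts];       // 1 if an ISR is bound to this core ID
int VariantBound[KVariantInts]; // 1 if bound to this device specific ID

// Returns which table the bind request went to: 0 = core, 1 = variant.
int BindIsr(int aInterruptId) {
    if (aInterruptId >= 0) {
        CoreBound[aInterruptId] = 1;        // core interrupt
        return 0;
    }
    VariantBound[-aInterruptId - 1] = 1;    // device specific interrupt
    return 1;
}
```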
The general pattern for creating the core-device specific split
+is that the core derives an implementation from class
As an example,
+the core layer for the template reference board defines a class
In the Kernel Architecture 2, it is a convention that unbound interrupts should be bound to a "spurious" interrupt handler, i.e. an interrupt handler that faults the system indicating the number of the interrupt. This aids debugging by identifying interrupts that are enabled without corresponding ISRs.
The interrupt architecture supports the concept
+of adjustable interrupt priorities. Symbian platform defines the
The Variant must provide a table where each entry
+defines which
When the Variant is split into an ASSP layer and a Variant layer, the ISR table is put in the ASSP layer and will not normally include ISRs for the Variant interrupt sources - these are handled by separate chained dispatchers in the Variant layer.
Symbian platform provides the
Interrupts are identified in the system by their interrupt ID number, which is used to index into the ISR table. You are free to allocate these numbers in any way that is convenient for you.
On the template reference board, for example, the ISR
+table is defined as a static data member of the
where KNumTemplateInts is defined in the same header file.
Factors that decide the size of the ISR table
The number of entries to be reserved in the ISR table depends on the following factors:
Where the ASSP is targeted at only a single device, the number of possible interrupts is usually known, and the table can include an entry for each one.
If any
Other factors affecting the ISR table
IRQs
+and FIQs may need to be distinguished, although the exact requirement
+is hardware dependent. Although the table has one entry for each possible
+interrupt source, a possible scheme may be to group IRQs at the start
+of the table, and FIQs at the end of the table. If the hardware has
+separate interrupt controller hardware for IRQs and FIQs (or at least,
+different registers) then you will need to arrange the table so that
+you can determine from the
For example:
Most of the interrupt dispatching code is implemented in the ASSP layer. This includes a list of ISRs, code for adding and removing ISRs, enabling and disabling interrupt sources, and dispatching ISRs. The kernel only provides a pre-amble and post-amble.
The kernel defines, but does not implement,
+a class called
The class is defined in the header files
See Symbian OS Internals Book, Chapter 6 - Interrupts and Exceptions
Kernel
+objects such as
To
+show basic information about a
To show more detail, use the
As an example, use these commands to show information
+about a
This gives:
All objects derived from
You
+can use
Using the
Using the
The information displayed is memory model dependent. It is shown here for the moving memory model.
Notes:
The
The
The
The
The
The
The
The
These three lines show the offset, base address and size (the reserved size) of the chunk in the home region.
Internally,
+the kernel maintains lists of all current objects, organized by type. Each
+list is a container, a
The
The command effectively executes
+a
This section describes how to create a port of it for your phone hardware. Symbian platform provides a generic Platform-Independent part of the Controller. You must provide a Platform-Specific part to implement the interface to the MMC hardware on your phone.
Kernel side developers will sometimes need to know the algorithm used to move data in and out of paged memory.
Intended Audience:
This document is intended to be read by kernel side developers.
Terminology
Paged memory (virtual addresses and their contents) is called either:
'live', meaning present in the cache of physical RAM, or
'dead', meaning stored elsewhere in the backing store.
Physical RAM not being used to hold paged memory is called the free pool.
Items of paged memory can be reassigned from 'live' to 'dead' and vice versa. 'Live' items are classed as either 'old' or 'young' for this purpose.
Paging out memory
When more RAM is needed and the pool of free memory is empty, RAM is freed up. This means changing an item of paged memory from live to dead. The process is called paging out, and it involves these tasks:
The oldest virtual page is removed from the cache and physically stored elsewhere.
The MMU marks the virtual page as inaccessible.
The associated page of RAM cache is returned to the free pool.
Paging in memory
When a program attempts to access dead paged memory, the MMU generates a page fault and the executing thread is diverted to the Symbian platform exception handler. This performs the following tasks:
Obtain a page of RAM from the free pool. If the free pool is empty, free up some memory by paging out the oldest live page.
Read the contents of the dead paged memory from its actual location (e.g. NAND flash) and write them to the page of RAM.
Update the live list described in the next section.
The MMU maps the linear address of the dead paged memory on to the page of RAM.
Resume program execution.
The paging cache
The paging cache is only useful if it is used to hold the pages most likely to be required. The paging subsystem provides for this by selecting the oldest pages to be paged out, making space for new ones to be paged in.
All live pages are stored in a list called the 'live page list'. Live pages are labelled 'young' or 'old' and are stored in two sub-lists, one for each type: the item at the start of the young list is the youngest item and the one at the end of the old list is the oldest item. The MMU makes young pages accessible to programs and old pages inaccessible. However, old pages are different from dead pages because their contents are still held in RAM.
The young and old lists are maintained in accordance with these four rules:
When a dead page is paged in (made live), it is added to the start of the young list, becoming the youngest item.
When a live page must be paged out (made dead) to free up RAM, the oldest page is selected.
If an old page is accessed by a program, a page fault results because old pages are marked inaccessible. The paging system handles the fault by turning the page into a young page ('rejuvenating' it). To compensate, the oldest young page is then turned into an old page ('aging' it).
Efficient operation requires a stable ratio of young to old pages. If the number of young pages exceeds a specified ratio of old pages, the oldest young page is turned into the youngest old page ('aging' it).
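The four rules above can be modelled in a small sketch. This is illustrative standard C++ (the class name, the fixed young-list limit and the integer page IDs are assumptions for the example), not the kernel's implementation:

```cpp
#include <deque>
#include <cstddef>

// Illustrative model of the live page list: two sub-lists of page IDs,
// young (accessible) and old (inaccessible but still in RAM), maintained
// by the four rules above. A sketch of the policy, not the kernel code.
class TLivePageList {
public:
    explicit TLivePageList(std::size_t aMaxYoung) : iMaxYoung(aMaxYoung) {}

    // Rule 1: a page paged in becomes the youngest young page.
    void PageIn(int aPage) {
        iYoung.push_front(aPage);
        Balance();
    }
    // Rule 2: the oldest page is selected for page-out.
    int PageOutOldest() {
        int page = iOld.empty() ? iYoung.back() : iOld.back();
        (iOld.empty() ? iYoung : iOld).pop_back();
        return page;
    }
    // Rule 3: accessing an old page rejuvenates it; Balance() then applies
    // rule 4, aging the oldest young page to keep the ratio stable.
    void Access(int aPage) {
        for (auto it = iOld.begin(); it != iOld.end(); ++it) {
            if (*it == aPage) {
                iOld.erase(it);
                iYoung.push_front(aPage);
                Balance();
                return;
            }
        }
        // Young pages are accessible: no fault, nothing to do.
    }
    std::size_t YoungCount() const { return iYoung.size(); }
    std::size_t OldCount() const { return iOld.size(); }
private:
    void Balance() {    // age the oldest young page while over the limit
        while (iYoung.size() > iMaxYoung) {
            iOld.push_front(iYoung.back());
            iYoung.pop_back();
        }
    }
    std::size_t iMaxYoung;
    std::deque<int> iYoung, iOld;   // front = youngest end of each list
};
```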
InterpretSIS is a command-line tool used to install SIS files to
+a data drive image that can be flashed onto a device's
For more information on
The syntax for using
or:
For example:
Alternatively, the parameters can be placed in a file, which
+is passed to
Synchronous requests are typically used to set or retrieve some information for the device driver. These requests almost never need to access the hardware itself, and usually complete relatively quickly. They return only after the completion of the request, and the user side thread is blocked until completion. Synchronous requests are usually initiated by a call to
A driver lists its available synchronous requests in an enumeration, for example:
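The original enumeration is not preserved in this extract; the following is a hedged illustration in which the class and request names are invented, not a real Symbian driver:

```cpp
// Example control (synchronous) request enumeration for a hypothetical
// driver; the names are invented for illustration. By convention each
// request is identified by a small integer value passed to the LDD.
class RMyDevice   // illustrative user-side handle, not a real Symbian class
{
public:
    enum TControl
    {
        EGetConfig,     // retrieve the current device configuration
        ESetConfig,     // set a new device configuration
        EGetStatus      // read driver status information
    };
};
```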
Drivers
+generally implement the
This topic describes how to set up the source code and project files for an implementation of the USB client controller.
Another more complicated example is implemented in the OMAP/H4 platform. This platform has two USB device controllers. The code shows how two UDCs can be supported in the same source and build tree. The second controller (Fibula) is a High speed USB 2.0 compatible device controller; the code demonstrates how to support a USB 2.0 UDC and one or more USB 2.0 Client Devices. See the code in
The suggested steps are as follows:
Decide where to put the source and header files for the platform-specific layer. Normally the USB device controller is part of the ASSP, and you would put the source files in the ASSP directory. For an external USB device controller, you would use the Variant directory instead.
Implement the platform-specific layer within your chosen source and header files.
If you have a header file with a name of the form
Define the
Add the name
+of the
No published APIs.
These attributes must be appropriately described in the HAL
This document explains the points that have to be considered when writing a media driver in an environment that uses writable data paging.
The main issue to consider when writing a media device driver for a writable demand paging environment is to avoid page faults occurring in DFCs, since these can lead to a deadlock between the driver and the client process.
This can be avoided using the following methods:
Use shared chunks.
Shared chunks are memory areas that are accessible by both kernel-side and user-side code, and they are never paged.
This is the best solution for drivers that involve fast throughput, such as media drivers.
Use synchronous rather than asynchronous data transfer.
This can be done by implementing the following steps:
The client requests a notification when data is available.
The data arrives.
The driver writes data into an internal buffer and completes the client request.
The client makes a read request.
The driver writes the data back to the client in the client thread context.
This approach is easy to implement; however, it requires the buffering of data.
Use the
This provides the ability for the memory to be mapped into a driver's address space as unpaged.
This is an alternative to the use of shared chunks.
However, this is not supported on the moving or multiple memory models.
The intent behind SDIO is to allow the use of an SD slot for more than memory cards. SDIO allows I/O devices such as GPS, camera, Wi-Fi, FM radio, Ethernet, barcode readers and Bluetooth to be plugged into such a slot.
However, with space being limited in modern mobile phones, memory card slots are usually microSD slots, which are too small to accept SDIO or miniSDIO cards. Also, memory card slots are often internal to the phone, which prohibits the insertion of SDIO cards. Therefore in Symbian devices, SDIO is typically used only as an internal peripheral bus on all but hardware reference platforms. Because of its ability to provide high-speed data I/O with low power consumption, SDIO can be used to connect components such as Wi-Fi and FM radio peripheral chips to the main processor.
You must have an understanding of SD card before using SDIO.
The SDIO implementation is divided into the key modules described below.
Memory card: performs the SD or the MMC card operations. It registers the card driver, initializes the card, and sets permissions for the read and write operations.
SDIO Host Controller: the most important module in the SDIO architecture. It has a driver that allows card initialization, bus width settings, clock frequency setting, sending and receiving of commands, and SDIO interrupt handling.
General Card Functions: contains functions that are useful during the development of other card drivers. The module supplies the read/write SDIO/SD/MMC card register operations as well as select/deselect card, read card status and reset card procedures.
SDIO Interrupts: SDIO interrupts are level-sensitive interrupts. They communicate with the host through the SDIO line. The signal is held active until the host acknowledges the interrupt through some function unique I/O operation.
SDIO Class Driver: supports the SDIO Host Controller. The driver includes functions such as initialization, bus settings, clock frequency setting, sending and receiving of commands, and SDIO interrupt handling.
SDIO Card Controller: the SDIO card protocol is a super-set of the SD card protocol, and the SDIO Card Controller shares some of its functionality with the SD controller. The Symbian platform SDIO card controller is implemented as a set of SDIO card classes, each of which is derived from the corresponding SD card class. It provides support for SDIO cards within the E32 kernel, manages access to the SDIO card hardware interface, and provides an API for class drivers to access SDIO card functions.
This +API is used by the implementers of the class drivers.
Drivers can communicate with the Window Server, and through that with user programs, using events.
The kernel event queue maintains a circular
+buffer of
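A circular event buffer of this kind can be sketched as follows. This is an illustrative model in standard C++ (the template, the integer events, and the overwrite-oldest-when-full policy are assumptions made for the example), not the kernel's queue of event objects:

```cpp
#include <cstddef>

// Illustrative fixed-size circular buffer of events, modelling the kernel
// event queue described above. When the buffer is full the oldest event
// is overwritten (an assumed policy for this sketch).
template <std::size_t N>
class TEventQueue {
public:
    void Add(int aEvent) {
        iBuf[iHead] = aEvent;
        iHead = (iHead + 1) % N;           // advance the write position
        if (iCount == N) {
            iTail = (iTail + 1) % N;       // overwrite: drop the oldest event
        } else {
            ++iCount;
        }
    }
    bool Get(int& aEvent) {                // consume the oldest event
        if (iCount == 0) return false;
        aEvent = iBuf[iTail];
        iTail = (iTail + 1) % N;
        --iCount;
        return true;
    }
    std::size_t Count() const { return iCount; }
private:
    int iBuf[N] = {};
    std::size_t iHead = 0, iTail = 0, iCount = 0;
};
```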
When building a ROM image, all the processes and data needed to execute a build are included in a single file. This file is executed by the platform when the hardware is powered up.
The SDIO hardware must conform to the SDIO standards and must work correctly.
You must be familiar with building a ROM for the Symbian platform.
None
To include the SDIO PIL libraries
+in the final ROM image, the
An example of the use of this new parameter is:
This example produces a ROM.
The iby file that specifies the SDIO's PSL must be included in the oby file for the ROM image.
For more information, refer to
Not required
The Power Supply Unit (PSU) functionality is provided by the
The
The numeric value in the left hand column is the value of
+the
The lowest 4 bits of the FSR register indicate the fault generated by the MMU. The FSR register value is displayed as a result of entering an
The 5 least-significant bits of the CPSR
+register indicate the ARM processor mode. The CPSR register value is displayed
+as a result of entering an
The program completes the initialization of the File Server, then starts the System Starter process. You must customize the Base Starter when you create a port of Symbian platform to a phone. To customize the Base Starter, you change the configuration files and the Base Starter source code.
The Base Starter has often also been called EStart.
The MultiMediaCard Controller uses state machines to manage the interactions with the MultiMediaCard hardware.
State machines allow the controller to maintain the state of each submitted session, allowing it, for example, to schedule a second session when the first becomes blocked.
To handle the complex sequence of bus operations involved, the controller implements a state machine stack, allowing a parent state machine function to invoke a child state machine function. The state machine stack allows nesting of children down to a depth of 10 levels.
+Each session object has its own individual state machine stack because
+each session runs its own sequence of bus operations, with the controller
+managing these multiple sequences. This means that each session object has
+a state machine object, an instance of the
The state machine remembers the next state and child function name, and moves to that state as soon as control returns to the session.
The stack chooses the next session to be handled and manages the state machine through the state machine dispatcher.
+
The stack itself is represented as an array
+of
The state machine
+maintains a "pointer",
Each state entry maintains three pieces of information:
A pointer to the state machine function.
A variable containing the current state; the value and meaning of the state is defined by the state machine function.
A bitmask of
In general, the overall state of the state machine reflects the
+state of the current state entry; for example,
While the flow of control
+through a stack can be complex, the following diagram shows a short example
+snapshot. Here, the current state in the current stack entry is
For example, the
Most commands +and macros involve one or more asynchronous operations and while such operations +are outstanding, a session is blocked. While a session is blocked, its state +machine dispatch function is not called. As soon an asynchronous event removes +the blocking condition, control returns to the state machine.
State machine functions are defined and
+implemented in the
A state machine function can release control and wait to be +re-entered by returning zero. If its session is not blocked then that will +happen immediately. If the state machine function returns non-zero, the session +will be completed with that error code unless the parent state machine function +has explicitly intercepted such an error by setting the trap mask.
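The return-code convention above can be sketched in free-standing C++. This is an illustrative model only: the names (StateMachine, Dispatch, KErrNone, iTrapErrors) are invented simplifications, not the real MultiMediaCard controller interface.

```cpp
#include <cassert>
#include <functional>

// Simplified model of the dispatch convention described above.
constexpr int KErrNone = 0;

struct StateMachine {
    // A state function returns 0 to release control and wait to be
    // re-entered, or a non-zero error code otherwise.
    std::function<int(StateMachine&)> iStateFn;
    bool iBlocked = false;     // blocked sessions are not dispatched
    bool iCompleted = false;
    int  iExitCode = KErrNone;
    bool iTrapErrors = false;  // parent set a trap mask: intercept errors

    // Dispatch: run the current state function unless the session is blocked.
    void Dispatch() {
        if (iBlocked || iCompleted) return;
        int r = iStateFn(*this);
        if (r != KErrNone && !iTrapErrors) {
            // A non-zero return completes the session with that error code,
            // unless a parent has explicitly trapped such errors.
            iCompleted = true;
            iExitCode = r;
        }
    }
};
```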
Important points to note
Each
+state machine function must define a list of states that can exist. States
+are defined as an enumeration whose first and last values must be labelled
To make the state machine
+functions more readable, a number of macros exist to help with the layout
+of the function code, and to control program flow. The most basic macros are
Notes:
You need to be aware of the code that is generated by these macros.
+In particular, some such as SMF_BEGIN generate an opening curly brace, while
+others such as SMF_STATE generate a corresponding close curly brace. Also,
+you need to know whether a macro generates a
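The brace behaviour described above can be illustrated with simplified look-alike macros. The real SMF_* definitions are more elaborate; the expansions below, and the fall-through between states, are assumptions made purely for illustration.

```cpp
#include <cassert>

// Simplified look-alike macros: SMF_BEGIN opens the switch statement and an
// opening curly brace; each SMF_STATE closes the previous brace and opens
// the next case; SMF_END closes the final brace and the switch itself.
#define SMF_BEGIN    switch (aState) { case EStBegin: {
#define SMF_STATE(s) } case s: {
#define SMF_END      } } return 0;

enum TState { EStBegin, EStMiddle, EStEnd };

// A state machine function laid out with the macros. In this simplified
// model each state falls through to the next one.
int ExampleSMFunction(int aState, int& aTrace) {
    SMF_BEGIN
        aTrace += 1;        // work done in the initial state
    SMF_STATE(EStMiddle)
        aTrace += 10;
    SMF_STATE(EStEnd)
        aTrace += 100;
    SMF_END
}
```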
Blocking on an asynchronous +request
The state machine can be made to block. This is done so +that you can wait for an asynchronous operation to complete. When the operation +completes, it unblocks the state machine. There are two stages to blocking +a state machine:
Call
Execute one of the SMF_xxxWAIT +macros to return from the current state machine function and wait.
Note that the state machine function must return by calling +one of the SMF_xxxWAIT macros. It must not poll or sit in a loop! The state +machines are not threaded, and this means that CPU time can only be given +to other tasks if the function returns.
To
+unblock the state machine, call
Note that further
+state machine processing must take place in a DFC rather than within the interrupt
+service routine (ISR), and this can be forced by ORing the session status,
The following +code shows the idea:
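Since the original code fragment is not reproduced here, the following sketch shows the idea in free-standing C++. The flag names and the DFC stand-in are invented for illustration; they are not the real Symbian identifiers.

```cpp
#include <cassert>

// Illustrative stand-ins for the elided session status flags.
constexpr unsigned KSessionBlocked = 0x01;
constexpr unsigned KSessionDoDfc   = 0x02;  // force further work into a DFC

struct Session {
    unsigned iStatus = 0;
    bool iDfcQueued = false;
};

// Called from interrupt context: do the minimum work, then hand the rest
// of the state machine processing to a DFC by ORing in the flag, because
// further processing must not take place inside the ISR itself.
void UnblockFromIsr(Session& s) {
    s.iStatus &= ~KSessionBlocked;   // remove the blocking condition
    s.iStatus |= KSessionDoDfc;      // continue in a DFC, not in the ISR
    s.iDfcQueued = true;             // stand-in for queuing the DFC
}
```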
This shows the state machine calling
+sequence that implements the reading of a single block on a MultiMediaCard,
+where the card is not yet selected. Note that the lowest level state machine
+function called is
The DMA Framework implements its entry point function in the platform-specific +layer.
+The entry point for a kernel extension is declared by a
statement, followed by the block of code that runs on entry to the DLL. The following code is typical for a port of the DMA Framework, and is taken from the port for the template reference platform:
+where
Create() is a second-phase constructor defined in, and implemented by,
+the concrete class derived from
Note that if hardware-specific descriptors are used, they are allocated +in a hardware chunk. If pseudo-descriptors are used, they are allocated on +the kernel heap.
+This document explains the impact of data +paging on kernel side code.
Intended +Audience:
This document is intended for device driver writers.
New restrictions on access +to user memory.
Certain +exported and internal APIs access the address space of the current thread +and are subject to restrictions on their use enforced by assertions in the +code. The restrictions are these:
The APIs may only be +called from thread context.
They may not be called +while any mutexes are held. There are two particularly important cases when +mutexes are often held:
When publish and subscribe +is writing large binary properties to user space, and
When the multiple memory +model writes code segments' export directories to user space.
The APIs may not be +called when the system lock is held. There are two particularly important +cases when the system lock is often held:
When publish and subscribe +is writing large binary properties to user space, and
When the multiple memory
+model uses
The APIs concerned are these:
Certain +exported and internal APIs access the address space of another thread and +are subject to restrictions on their use enforced by assertions in the code. +The restrictions are these:
The APIs may not be +called when any mutexes are held. One particularly important case of this +is when undertakers are completed and handles written to user space.
The APIs concerned are these:
In non-paged code it
+is usual for a thread to have an asynchronous request outstanding and to complete
+it by calling
Instead,
+you should use the
Create a
Call the
Call
When the client thread
+next runs, the
The
In non-paged code it is common +for a client thread to send a message to a server and write it into the address +space of the server. When data paging is enabled, this creates the same risk +of page faults as the completion of asynchronous requests and can be mitigated +by the same techniques as above. In addition, descriptor information (type, +length and maximum length) is read by temporarily switching to the client's +address space, creating additional risk of page faults.
When data paging is enabled, messages to servers must be pre-parsed and their type, length and maximum length stored in the message structure. This involves changes to the kernel code but does not impact user-side code.
The kernel maintains a queue
+of user-input events which is read by the window server. The introduction
+of data paging involved a change to the kernel code which responds to the
+user-side API
Kernel Architecture
This topic describes the USB Client Driver code that Symbian platform +provides.
+USB client driver LDD:
+is implemented
+in
has its source
+code located in
has its project
+file located in
USB client controller PDD:
+The interface
This is a section that contains the detail for specific porting
+activities. You will normally access this material via links from
+the section on the
The same set of source code is used to build two different DMA test harnesses:
+It builds:
Some of the handlers are part of Symbian platform generic code and do not +require porting.
+Data paging can +affect performance guarantees. When writing software for a data paged platform, +you need to perform an impact assessment on the components of your application. +To do so, you perform the following analyses and implement the appropriate +mitigation for the impacts you discover:
Static analysis,
Use case analysis, and
IPC analysis.
Video playback,
VoIP phone calls, and
File download over USB.
Standard boot time,
Application start-up +time, and
Camera image capture +time.
Paged heaps and stacks,
RAM-loaded code, and
Read only XIP data structures +including
bitmaps,
constant descriptors,
constant data arrays +in code,
data files accessed +through a pointer, and
exported DLL data.
For all use cases, ensure +that the minimum paging cache size is large enough to accommodate both the +protected code path and any other paged data required at the same time.
This +method of protection may not always be practical, because compound use cases +may involve unpredictable amounts of data which exceed the size of the paging +cache.
Make the protected code +path unpaged. Mark the code itself as unpaged, and either mark whole regions +of memory as unpaged or pin memory to ensure that it is unpaged.
Separating the data +plane and the control plane, and
Splitting a monolithic +library into paged and unpaged parts.
The Time platform service starts during device start-up. It is responsible for maintaining a queue of all system-wide alarms. It allows clients to query the status of alarms, set alarms, remove alarms and perform other utility functions.
The Time platform service is required to manage alarms in the system, working with the System State Manager utility plug-in, specifically the RTC adaptation plug-in.
The Time platform service is relevant to all real-time clock and alarm implementations, which must fulfil the requirements set by the RTC adaptation plug-in interface introduced alongside the System State Manager. The SSM Adaptation Server contains an RTC adaptation plug-in.
+The
The purpose of DMA is to transfer data such as streaming audio +produced by a client application. A typical transfer involves passing +the data from user side to kernel side via a shared chunk and then +a transfer from one memory location to another by DMA over a buffer. +If the amount of data to be transferred exceeds the maximum transfer +size supported by the controller or involves non-contiguous physical +memory, the data is fragmented and transmitted in a sequence of transfers +in one of three modes.
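The fragmentation arithmetic described above can be sketched briefly; the function name and parameters are illustrative, not the DMA Framework API.

```cpp
#include <cassert>
#include <cstdint>

// If a request exceeds the controller's maximum transfer size, it is
// split into a sequence of fragments; every partial fragment still
// needs its own transfer, so the division rounds up.
std::uint32_t FragmentCount(std::uint32_t aTotalBytes,
                            std::uint32_t aMaxTransfer) {
    return (aTotalBytes + aMaxTransfer - 1) / aMaxTransfer;
}
```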
In single buffer mode the hardware controller provides a single set of transfer registers, used alternately to perform an active transfer and to set up a pending one; however, this mode incurs overhead in the form of significant periods when the driver is idle during the pending phase.
In double buffer mode the hardware controller provides two sets of transfer registers, one set for the active transfer and one for the pending transfer.
In scatter-gather mode, a single procedure call sequentially +writes data from multiple buffers to a single data stream or from +a data stream to multiple buffers.
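A scatter-gather transfer is driven by a chain of descriptors, each naming one physically contiguous fragment. The structure below is a minimal model of such a chain; the field names are invented for illustration and do not correspond to the framework's descriptor classes.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal model of a scatter-gather descriptor chain: the controller
// follows the chain of fragments in one programmed operation.
struct SgDescriptor {
    std::uint32_t iSrcAddr;    // physical address of this fragment
    std::uint32_t iByteCount;  // bytes in this fragment
    int iNext;                 // index of next descriptor; -1 terminates
};

// Total bytes covered by a chain starting at descriptor aFirst.
std::uint32_t ChainLength(const std::vector<SgDescriptor>& aDescs, int aFirst) {
    std::uint32_t total = 0;
    for (int i = aFirst; i != -1; i = aDescs[i].iNext)
        total += aDescs[i].iByteCount;
    return total;
}
```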
The PIL supplies three derived classes corresponding to the DMA +modes:
+These three classes are defined in
You may simply use one of these classes or derive further classes
+from them and implement those. For example, the template scatter-gather
+implementation defines a
In general, design decisions are made at the level of implementation +in hardware and are subsequently reflected in the structure of the +derived classes.
+There are two ways of extending the DMA Framework:
+to provide platform-specific functionality on a per-channel +basis, and
to provide platform-specific functionality on a channel independent +basis
In the first case,
The
+physical channel defines the interface between the logical device and the
+physical device. Typically, this interface is different for each device family
+and, therefore, the device driver framework does not require a particular
+type of interface. The only requirement is that the physical channel class
+is derived from
There are two possibilities:
+
To start, use the
The Fault Category field shows the type of fault, in this case an +exception.
+If the Fault Category is Exception, then +the fault is caused by an unhandled processor exception. You can get further +information on the type of exception by looking at the first three lines of +the generated output:
The
The number after
If the exception is +a prefetch abort, then the code address is invalid.
A data abort means that the data address is invalid.
The number after
The number after FSR is
+the
The
+number after CPSR is the value of the CPU's CPSR register when the exception
+occurred. The 5 least-significant bits of the CPSR register indicate the
If +the Fault Category is not Exception, then the fault is due to +a panic. In this case the only other valid field is the Fault reason; +the values of all other fields are meaningless.
The panic number is +the low 16-bits of the fault reason, shown in hexadecimal.
For example, +a KERN 27 panic would generate:
If the panic is KERN 4, then a thread or process marked as +protected has panicked. For other panics, kernel side code has panicked; this +code is either in the kernel itself or in a device driver.
See
The
To enable
+fair scheduling, the file system mounted on a particular drive must
+support
The
The file system indicates its support for
To enable caching, the file system must support reads and writes to local buffers, which are buffers created and owned by the file server itself rather than by a client of the file server.
The local media subsystem already supports
+local messages, but the file system and any file system extensions
+need to be examined carefully to ensure that they do not call any
The file server calls
If no file system extensions are loaded, then
+the query is eventually handled by
If there are any file system extensions, they must
+handle the
The global and drive-specific cache property defaults
+are defined in
In this example, write caching +is enabled on drives I and K and has been turned on by default on +drive D.
For details, see
Global +cache settings
The following table holds the global caching +properties:
File +cache settings
The properties
See
If
A deadlock occurs when two or more processes each hold some of the resources required to execute, but not all of them. The resources each process still requires are held by the other processes, which in turn want the resources held by the initial processes. In this state, no process can make progress.
The classic sign that a deadlock has occurred is that a collection of processes appear to do nothing, i.e. they 'just hang'.
In the context +of demand paging, the resource is a page of RAM that can be paged in or out. +If one process wants data in a page of RAM that is paged-in or out by another +process, then a potential deadlock condition can occur.
Deadlocks +are only likely to occur if you are altering or interacting with one of the +components used in paging in and out, such as media drivers or the paging +sub-system. The majority of user-side code does not need to worry about deadlocks +from paging.
This guide is most applicable to device side writers +and other engineers writing kernel-side code.
For a deadlock to occur, four conditions must hold simultaneously:
Mutual exclusion condition +
A resource cannot be used by more than one process at a time.
Hold +and wait condition
One process is holding one resource, but needs +another before it can finish.
No preemption condition
A resource can only be relinquished by the process holding it.
Circular +wait condition
One process is holding a resource and wants another +resource that is held by another process, which in turn wants access to the +resource held by the initial process.
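The circular-wait condition above is the one most easily broken in code: if every thread acquires its mutexes in one globally agreed order, no cycle can form. The sketch below uses standard C++ (std::scoped_lock, which acquires multiple mutexes with a deadlock-avoidance algorithm), not a Symbian kernel API.

```cpp
#include <cassert>
#include <mutex>
#include <thread>

// Two shared mutexes and a counter they both protect.
std::mutex gA, gB;
int gCounter = 0;

// Each worker needs both mutexes. Naive code that locked gA then gB in one
// thread and gB then gA in the other could deadlock; std::scoped_lock
// acquires both safely regardless of the argument order.
void Worker(bool aReversed) {
    for (int i = 0; i < 1000; ++i) {
        if (aReversed) { std::scoped_lock lk(gB, gA); ++gCounter; }
        else           { std::scoped_lock lk(gA, gB); ++gCounter; }
    }
}
```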
Since deadlocks (as far as demand paging is concerned) are caused by the paging of RAM in and out, the following points have to be considered:
Make sure all kernel-side components are always unpaged.
Pinning: new APIs have been added that allow a process to override the demand paging rules governing how RAM pages are paged in and out of the phone's memory. The name comes from the fact that the RAM page is fixed in the phone's RAM (as if a pin had been stuck into it) until an unpin command is executed. This is implemented by using the new
Mutex use in device drivers: if the nesting order is violated, then deadlock can occur. To overcome this, make sure that device driver operations that could cause a page fault do not use mutexes. In other words, any access to paged memory while holding a mutex has the potential to cause a deadlock.
Code running in DFC Thread 1 must not access user memory. This DFC +thread is used by the implementation of the system timer functionality, hence +paging RAM in or out of the system by this thread could cause serious performance +problems or a deadlock.
For media drivers, make sure that when the media driver services page-in requests, the thread that the driver runs in does not itself make requests to page in RAM pages. If this were to occur, the media driver would not be able to service the page-in request and a deadlock would result.
The functions
The implementation can be different if there is a single Variant DLL, or +if there is also an ASSP extension.
+The following example
+implementation of
The
+implementation shows the basic idea but may need to be extended, especially
+where
Note that functions in the
When +a common ASSP extension is used, the Variant DLL may have to implement extra +interrupt binding and dispatch functions for the device-specific interrupts.
The
+following code is an example of an implementation of
The default device specific implementation of the
The device specific implementation of
Now you need a way of dispatching the interrupts and
+since this is really a chained interrupt, the dispatch function can be packaged
+as an ISR and bound to the core interrupt source it chains from. See
The following example code is a simple dispatcher for IRQ interrupts. +It assumes a simplistic interrupt controller that provides 32 interrupt sources, +and has a 32-bit pending-interrupt register where a 'one' bit indicates a +pending interrupt and all ones are cleared when the register is read.
The code assumes that the interrupt source represented by the low-order bit in the pending-interrupt register has interrupt ID number 0, and so on.
+When implementing the dispatcher it is usual to write it in C++ initially, +but once you have it working you would probably want to rewrite it in assembler +to gain maximum efficiency for what is a time-critical section of code.
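Since the example dispatcher itself is not reproduced above, the following is a sketch of what such a C++ dispatcher can look like, written against a simulated pending-interrupt register. The table layout and names are assumptions, not the kernel's actual interrupt code.

```cpp
#include <cassert>
#include <cstdint>

typedef void (*TIsr)(void* aPtr);
struct IsrEntry { TIsr iIsr; void* iPtr; };

// One table slot per interrupt source; bit n of the register is ID n.
IsrEntry IsrTable[32];

// Stand-in for the memory-mapped pending-interrupt register: a 'one' bit
// indicates a pending interrupt, and reading the register clears all bits.
std::uint32_t PendingRegister;

std::uint32_t ReadAndClearPending() {
    std::uint32_t p = PendingRegister;
    PendingRegister = 0;    // all ones are cleared when the register is read
    return p;
}

// Call the ISR bound to every pending source, lowest ID first.
void IrqDispatch() {
    std::uint32_t pending = ReadAndClearPending();
    for (int id = 0; pending != 0; ++id, pending >>= 1)
        if ((pending & 1u) && IsrTable[id].iIsr)
            IsrTable[id].iIsr(IsrTable[id].iPtr);
}
```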
+There are two considerations when dealing with
how to identify interrupts +on the lower priority chained controllers
how to handle the dispatch +of a chained interrupt.
The first point is a question of allocating locations in your
There is
+no need to change the
There are at least two ways of dispatching a chained +interrupt:
Dispatching a chained interrupt (1)
One +way of dispatching a chained interrupt is simply to make it a special case +in the main interrupt dispatcher. For example:
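As the example itself is elided above, here is a sketch of the special-case approach: the main dispatcher tests for the chained source explicitly inside its dispatch loop. The IDs and register layout are invented for illustration.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical main-controller input that is fed by a secondary controller.
constexpr int KChainedSourceId = 5;

std::uint32_t MainPending, SecondPending;  // simulated pending registers
int MainHandled, SecondHandled;            // counters standing in for ISRs

void MainIrqDispatch() {
    std::uint32_t pending = MainPending; MainPending = 0;
    for (int id = 0; pending != 0; ++id, pending >>= 1) {
        if (!(pending & 1u)) continue;
        if (id == KChainedSourceId) {
            // Special case: walk the secondary controller's pending bits.
            std::uint32_t p2 = SecondPending; SecondPending = 0;
            for (; p2 != 0; p2 >>= 1)
                if (p2 & 1u) ++SecondHandled;
        } else {
            ++MainHandled;   // normal, unchained interrupt
        }
    }
}
```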
This approach works for a simple case, for example, where there is only a main and a secondary interrupt controller. It does not scale well, because the special cases in the dispatcher become an overhead on the dispatch of normal, unchained interrupts as the number and depth of the chained controllers increase.
Dispatching a chained interrupt (2)
A better +way of handling chained interrupts is to bind an ISR to the interrupt source +in the main interrupt controller and use it to dispatch the chained interrupt. +This is far more scalable because you can bind any number of ISRs without +having to add special cases to any of the interrupt dispatchers.
The +dispatcher code could then be re-implemented as:
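Since the re-implemented code is elided above, the following sketch shows the scalable approach: the second-level dispatcher is itself an ordinary ISR bound to the chained source in the main controller, so the main dispatcher needs no special cases. All names are illustrative.

```cpp
#include <cassert>
#include <cstdint>

typedef void (*TIsr)(void* aPtr);
struct Binding { TIsr iIsr; void* iPtr; };

Binding MainTable[32];                       // main controller's ISR table
std::uint32_t MainPending2, SecondPending2;  // simulated pending registers
int SecondCount;                             // stands in for secondary ISRs

// Bound to the chained source like any other ISR.
void SecondLevelDispatch(void*) {
    std::uint32_t p = SecondPending2; SecondPending2 = 0;
    for (; p != 0; p >>= 1)
        if (p & 1u) ++SecondCount;
}

// The main dispatcher has no chained special case at all.
void MainDispatch() {
    std::uint32_t p = MainPending2; MainPending2 = 0;
    for (int id = 0; p != 0; ++id, p >>= 1)
        if ((p & 1u) && MainTable[id].iIsr)
            MainTable[id].iIsr(MainTable[id].iPtr);
}
```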
The second level dispatcher,
The index count starts
+at offset
The case +where multiple peripherals are connected to the same interrupt source can +be handled through the technique of pseudo interrupt sources. This involves +assigning pseudo-interrupt IDs in the ISR table to correspond to each of the +peripherals that is attached to the interrupt line, i.e. ISRs are bound to +these pseudo-interrupt sources.
Dealing with pseudo interrupt sources
+is, in essence, a special case of
The dispatcher can do one of two things:
examine the peripheral +hardware to determine which of the interrupts are pending, and then call the +appropriate ISR
call all the ISRs and +leave them to determine whether their peripheral is actually signalling an +interrupt.
As usual, it is entirely up to you to choose the ID numbers for these +pseudo-interrupts.
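The second option above (call all the ISRs and let each decide) can be sketched as follows. The pseudo IDs, the table layout and the boolean-returning ISR signature are assumptions for illustration only.

```cpp
#include <cassert>

// Each pseudo-source ISR polls its own peripheral and reports whether it
// actually had an interrupt to service.
typedef bool (*TPseudoIsr)(void* aPtr);
struct PseudoBinding { TPseudoIsr iIsr; void* iPtr; };

// Hypothetically, three pseudo IDs share one physical interrupt line.
PseudoBinding PseudoTable[3];

// Dispatch for the shared line: call every bound ISR and leave each one to
// determine whether its peripheral is signalling an interrupt.
int DispatchSharedLine() {
    int handled = 0;
    for (auto& b : PseudoTable)
        if (b.iIsr && b.iIsr(b.iPtr))
            ++handled;
    return handled;
}
```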
There should be no need to alter the implementations
+of
Dispatching the interrupt can be done
+in either of the two ways described in dealing with
The purpose of the Base Starter is to:
+Define a handset's local +drives
Define what file systems
+(
Associate drive letters +with drives
Modify drive start-up +behaviour
Enable
Handset manufacturers need to customise the Base Starter as part of the
+process of porting Symbian platform to a new device.
Note that there are some
There are two ways to do this:
Create local drive mapping +files
A local drive mapping file is an ASCII text file that explicitly +defines the drive letter, file system, and file extension to be associated +with a drive. The file contains a set of records, one for each drive, that +specifies this information. Records are also referred to as drive mappings.
The +file is created as part of your hardware variant's source tree, but is copied +into a standard location in your ROM filing system when you build the ROM.
A +drive mapping file has a formal structure; the main rules are:
the file can contain +a maximum of 16 different drive mappings
each drive mapping is
+represented by a separate record; each record occupies one line; each line
+is separated by the new line character
information is represented +by items separated by at least one blank character
comments can occupy a whole line or can be added onto the end of a record (i.e. at the end of a line). They are marked by a
A record, or drive mapping, has the following items, each separated +by at least one blank character. Each item must appear in the order described:
Use automatic local drive +mapping
If no drive mapping file exists or is unavailable, the +Base Starter decides which file system to mount on each local drive by interrogating +the capabilities of those drives. This is known as auto-detection.
Internally,
+the Base Starter holds a table containing entries for every known supported
+local drive configuration. This table is known as the auto-configuration table.
+The information supplied by each entry in the table resembles that supplied
+by a
The filename of the
+file system for this configuration (the
The object name of the
+file system for this configuration. The object name corresponds to the
The filename of the
+file server extension for this configuration, if applicable. The filename
+corresponds to the
The set of mount flags;
+these correspond to the
A file system identification +function that is used to decide whether the designated file system is really +suitable for this drive. Each function is an internal part of the generic +Base Starter code.
The Base Starter uses the following default mapping of drive letters +to drive numbers:
Most of the functionality provided by the Base
+Starter is provided by the class
Disabling drive auto-detection
The
most common customisation is to use the supplied version of the Base Starter and to create one or more local drive mapping files. In this case, savings in code space can be made by removing the code that deals with auto-detection. This is achieved by adding the following line to the
Customising for multiple +hardware states
If the state of your hardware varies, and it requires
+different mapping files for different states, then it is still possible to
+use the local drive mapping scheme. Examples of hardware state variations
+might be differences in the state of a switch or jumper setting, or the presence
+or absence of a hardware feature. In this situation, the ROM can be built
+with more than one mapping file. A custom version of the Base Starter provides
+its own version of the virtual function
Overriding the default drive +mapping
To override the default mapping of drive letters to drive
+numbers on a drive by drive basis, a custom version of the Base Starter provides
+its own version of the virtual function
To
+override the auto-configuration table used by the automatic local drive mapping
+scheme, for example to add support for a new
Customising mount flags
Whether +you use the automatic local drive mapping scheme or an explicit local drive +mapping file, you can provide support for additional mount flags. A custom +version of the Base Starter provides its own version of the virtual functions:
Customising the drive initialisation +sequence
To override the entire local drive initialisation sequence
+provided by the generic version of the Base Starter, a custom version of the
+Base Starter provides its own version of the virtual function
Customising Loadlocale
Setting the
See
Customising the restart +mode
Symbian platform +does not define any meaning to restart mode values. It is for the use of the +device manufacturer.
The restart mode is defined by the HAL attribute
To
+use this attribute, define it in your variant’s
The value can be changed using
Calls to
Calls
+to
You +need to do the following:
If you choose to make
+the custom startup mode settable (in Symbian platform terminology, the attribute
+is said to be derived), you need to implement
If you choose to make
+the custom startup mode non-settable (in Symbian platform terminology, the
+attribute is said to be non-derived), you need to implement
You need to provide
+an implementation for your
The example below is the OMAP H4 variant implementation of
See
Customising other behaviour
You +can change other default behaviour. For example, to add additional functionality +related to File Server initialisation, or the initialisation of other base +related components, then the following virtual functions can also be customised. +Follow the links to the individual functions for more detail:
The Base Starter is responsible +for mapping local drives to drive letters and for loading the appropriate +file systems on local drives. It does not install media drivers, and +is not responsible for deciding which local drive a media driver will register +with. This is done by the individual media drivers and peripheral controllers +when they are initialised.
See
See
+also
The user-side interface to the LCD Extension is defined
+by the
The Time client interface methods can be executed independently. The following examples show the Time client interface functions in use, each with an explanation.
+An example of the use of this function is:
The first three lines define the instance of the
The fifth line invokes the
An example of the use of this function is:
In the above example, the RTC is being set with a time calibration value of 30 ppm (parts per million). A drift of 30 ppm corresponds to about one second gained or lost every 33,000 seconds of operation.
The value returned by
+this function is either
An example of the use of this function is:
In +the above example, the first line declares the variable to be used +and initializes it. The second line is used to set the wake-up alarm +time.
An example of the use of this function is:
The first line declares the variable that is to be +used. The second line deletes the current device wake-up alarm time.
An example of the use of this function is:
In the above example, the last request made to the
+instance of the
An example of the use of this function is:
In the above example, a call to the destructor in the
+instance of the
An example of the use of this function is:
The first and second lines declare and initialize the
+variables and classes that are to be used. The third line calls the
The value returned by this
+function is either
An example of the use of this function is:
The +first and second lines declare and initialize the variables and classes +that are to be used. The third line sets the RTC to be zero seconds +from the start of the year 2000.
The value returned by this
+function is either
The Register Access platform service is provided by functions of
+the
The difference between write and modify +functions is that the modify function allows the client to change +partial contents of a register using masks.
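The modify operation described above is a read-modify-write: only the bits selected by the mask change, while the rest keep their current value. The sketch below is free-standing C++, not the Register Access platform service API.

```cpp
#include <cassert>
#include <cstdint>

// Change only the bits of aCurrent selected by aMask, taking their new
// values from aValue; all other bits are preserved.
std::uint32_t ModifyRegister(std::uint32_t aCurrent,
                             std::uint32_t aMask,
                             std::uint32_t aValue) {
    return (aCurrent & ~aMask) | (aValue & aMask);
}
```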
The header file for the Registry Access platform service
+can be found
A real-time clock is a chip that keeps +track of elapsed time. It is more accurate and consumes less power +than computing time from elapsed execution cycles.
The real-time clock can generate interrupts at specific times. +This provides a reliable alarm mechanism.
Device drivers, like user-side programs, need to use APIs for basic tasks
+such as managing buffers and arrays. However, the EKA2 architecture does not
+allow kernel-side programs such as drivers to link and use the User Library
+(
However, some classes are available for use on both the user side and the
+kernel side. The first section of this document,
The detailed contents are given below:
+Contents
+
This is a list of USER side classes, types and APIs +that can still be used on the kernel side. However, a subset of the class +member functions may not be available.
8-bit descriptors
The
+following classes, defined in
For some classes, the kernel side can use +the same member functions that are available to the user side. However, for +other classes, the kernel side is restricted to using a subset of functions; +where this is the case, the functions are listed
Arrays
The
+following classes, defined in
For some classes, the kernel side can use +the same member functions that are available to the user side. However, for +other classes, the kernel side is restricted to using a subset of functions; +where this is the case, the functions are listed
Character representation
The
+following class, defined in
For some classes, the kernel side can use the +same member functions that are available to the user side. However, for other +classes, the kernel side is restricted to using a subset of functions; where +this is the case, the functions are listed
Basic utility +functions and classes
The following global utility functions and
+classes, defined in
The Package +Buffers API
The package buffers API, represented by classes defined
+in
The UID manipulation +APIs
The UID manipulation APIs, represented by classes defined
+in
Version handling +API
The version handling API, represented by the
TRequestStatus
The
TIpcArgs
The
Basic +graphic classes
The basic graphic classes
Buffers: replacing +HBufC8 with HBuf
In EKA2, the heap descriptor buffer,
If your code uses the
+typedef
On the kernel side,
+there is no explicit support for 16-bit buffers, which means that there is
+no class called HBuf16. In practice, this means that,
Unlike
The number of functions
+available to create an
There are no "leaving" variants of these functions. If your code uses NewL(), NewLC(), or other "leaving" variants, you will need to change it to check the return code explicitly, to make sure that the creation or reallocation of the heap descriptor buffer has worked.
As the descriptor is
+modifiable, there is no need for, and there is no equivalent of the function
If your code uses the
+assignment operators (i.e. the
The descriptor function
The following code fragments show code that is approximately equivalent +between EKA1 and EKA2. The fragments are a little artificial, and make assumptions +that would not necessarily be made in real code, but nevertheless still show +the essential differences.
Buffers: replacing +HBufC8 with C style pointers
Instead of replacing
You allocate memory
+from the kernel heap using
Although
+the EKA2 code here uses
Handling 16-bit +data items
If you need to handle 16-bit items on the kernel side, +then you can still use 8-bit heap descriptor buffers. You just need to be +aware that the data is 16 bit and cast accordingly. The following code fragments +are simplistic, but show you the basic idea.
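As the original fragments are not reproduced here, the following free-standing C++ sketch shows the basic idea: the buffer is declared as 8-bit storage, and code that knows the payload is 16-bit reinterprets it accordingly. (memcpy is used here to keep the access well-defined in portable C++; kernel code targeting a known platform may simply cast the data pointer.)

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Read the aIndex-th 16-bit item out of an 8-bit buffer.
std::uint16_t ReadItem16(const std::vector<std::uint8_t>& aBuf,
                         std::size_t aIndex) {
    std::uint16_t v;
    std::memcpy(&v, aBuf.data() + aIndex * sizeof v, sizeof v);
    return v;
}

// Write a 16-bit item into an 8-bit buffer at the aIndex-th 16-bit slot.
void WriteItem16(std::vector<std::uint8_t>& aBuf,
                 std::size_t aIndex, std::uint16_t aVal) {
    std::memcpy(aBuf.data() + aIndex * sizeof aVal, &aVal, sizeof aVal);
}
```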
Replacing CBase +with DBase
The
If you have a class derived from
change your class definition
+so that it is derived from
Remember to include
+the
you should not need to do anything else.
One additional feature
+provided by
Internally, asynchronous deletion works by +placing the object to be deleted onto a queue, and then triggering a DFC, +which runs in the context of the supervisor thread. This means that only a +small amount of code is executed by your (the calling) thread, and the actual +deletion is done by the supervisor thread.
Its use is straightforward.
Replacing +User::QueryVersionSupported() with Kern::QueryVersionSupported()
The parameters passed to these functions are the same, both
+in type and meaning. The behaviour of both functions is also the same. This
+means that all you need to do is replace
An ISR is a static function that will be executed when an interrupt occurs +on the interrupt source bound to the ISR.
An ISR performs the actions necessary to service the event of the peripheral that generated the interrupt and to remove the condition that caused it to interrupt. It queues a DFC to do the required processing. If required in order to handle the interrupt, it disables interrupts and FIQs on ARM, and re-enables them before leaving. It can also disable the source of the interrupt that it is servicing.
Demand paging is a change, introduced in Symbian platform v9.3, to how the kernel uses RAM and storage media. This topic
For general information on migrating media drivers, see
ROM and code paging can be enabled for a Multi Media Card (MMC) provided the card is non-removable. Removing a card would result in a kernel fault whenever the next page-in request is issued.
As the MMC media driver is entirely generic, a way of returning the paging-related information contained in variantmedia.def to the generic part of the MMC stack is required. This is achieved by modifying the Platform Specific Layer (PSL) of the MMC stack to implement the
This example is taken from the H4 HRP VariantMedia.def changes:
This example is from the H4 MMC stack class definition:
This example is from H4 MMC stack class implementation:
To support ROM paging from an internal card, the MMCLoader utility is used to write the ROM image to the card. MMCLoader can be found in
The paged image is written as a normal file under the FAT file system. For paging to work, however, the image file's clusters must all be contiguous, so, before doing anything else, MMCLoader formats the card. It then writes the paged part of the ROM to a file and checks that the file's clusters are contiguous, which is normally the case as the card has just been formatted. A pointer to the image file is stored in the boot sector. When the board is rebooted, the MMC/SD media driver reads the boot sector and uses this pointer to determine where the image file starts, so that it can begin to satisfy paging requests.
MMCLoader takes the filename of the original ROM image file as input. It then splits the given file into unpaged and paged files. The following code fragment shows the syntax:
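Separately from the command syntax above, the split itself can be sketched conceptually (SplitRom is a hypothetical helper, not part of the tool, and the real tool works on files on a FAT-formatted card):

```cpp
#include <cstddef>
#include <vector>

// Conceptual only: split a ROM image into its unpaged part (everything up to
// aUnpagedSize) and its paged part (the rest), as MMCLoader does with files.
typedef std::vector<unsigned char> TImage;

void SplitRom(const TImage& aRom, std::size_t aUnpagedSize,
              TImage& aUnpaged, TImage& aPaged) {
    aUnpaged.assign(aRom.begin(), aRom.begin() + aUnpagedSize);
    aPaged.assign(aRom.begin() + aUnpagedSize, aRom.end());
}
```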
Before you start, you must:
Understand
Understand
It is recommended that shared chunks are used for speed of transfer when transmitting a large amount of data. This approach is not recommended when transmitting smaller amounts of data infrequently.
Complete the tasks listed below to create a class driver that interfaces with the USB LDD using shared chunks.
SPITOOL is a utility for creating static plug-in information (SPI) +files, which contain registration data for ROM-based plug-ins.
The tool is normally invoked by
The tool creates an SPI file by concatenating the specified input +files, using the options specified.
You can specify the input files by giving the full path to each file, or by giving a directory name, in which case all files from that directory are included.
The boot table consists of a linear array of 4-byte entries whose index positions are defined by the
The entries in the array are divided into two groups, one whose index positions are defined by the enumerator names
Entries in the first group specify addresses of
Entries in the second group specify
A boot table entry is the offset of the function from the beginning of +the bootstrap code but, since the bootstrap is linked for a base address of +zero and is position independent, bare function addresses can be used.
Use
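Because the bootstrap is linked at base address zero and is position independent, resolving a boot table entry at run time is simple arithmetic, as this sketch shows (names are illustrative, not the bootstrap's actual code):

```cpp
#include <cstdint>

// Illustrative only: a boot table entry is a 4-byte offset from the start of
// the bootstrap image (linked at base address zero), so the run-time address
// of the function is simply load address + stored offset.
uint32_t ResolveBootEntry(uint32_t aLoadAddress,
                          const uint32_t* aTable, int aIndex) {
    return aLoadAddress + aTable[aIndex];
}
```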
In the Template port, available in
If any interface settings have been changed, then a re-enumeration must be forced by calling
If the re-enumeration has not completed within the expected time period, then use
If alternate interface settings have been set up, enquire which interface has been selected following a re-enumeration by using
After you have forced a re-enumeration you can use your USB class driver to
The DMA Framework is made up of a PIL and a PSL. You have to write new platform-specific implementation code whenever your hardware changes, but the PIL and the platform service API do not change.
You should be familiar with the
In the diagram below, the classes in green provide the platform service API and the classes in blue implement the platform-specific (PSL) functions.
The following table lists the important files of the +DMA Framework and what you should do with each of them.
To write the PSL for the DMA Framework, do the following:
Copy the template listed in the above section to your own variant directory (at chip level or at board level, depending on where the DMA hardware is located).
Implement the
Derive the
If necessary, derive the
Test the PSL implementation.
A platform service is a set of defined functionality, usually providing an abstraction or encapsulation of functionality provided by hardware.
For example, a platform service may provide a standard interface for using a serial bus to send commands and data between integrated circuits on the base board. Another example would be providing a standard interface for Direct Memory Access (DMA) hardware, for moving blocks of data from one block of memory locations to another or from a block of memory to a hardware device.
The platform service separates the functionality required by kernel, device driver and application processes from the specific hardware implementation. This means, for example, that a device driver can be written for a platform service using the published APIs, without needing to know the specifics of the actual hardware device/chipset.
The platform service is a consistent hardware interface at the bottom of the operating system platform software stack. In the past, much of the logic and configuration required to enable a new piece of hardware was higher up in the operating system stack, and that made it difficult to support new or variant hardware. The platform service architecture separates out the hardware-specific interfaces and encapsulates them behind a standard set of platform service (client interface) APIs.
The platform service consists of:
Client Interface/Platform service APIs
Implementation of the APIs:
Platform Independent Layer (PIL)/Framework/Generic implementation
SHAI interface
Platform Specific Layer (PSL)/SHAI Implementation/Hardware implementation
The PIL contains all the common functionality code, such as maintaining a list of available channels for communication, or holding the pending list of memory transfers to be processed by DMA when the hardware becomes available. Usually the PIL provides the implementation of the Client Interface APIs. This is sometimes referred to as the Framework.
The interface between the framework code and the PSL code is known as the Symbian Hardware Adaptation Interface (SHAI). The SHAI interface specifies how the PIL and PSL talk to each other (APIs, shared memory areas and so on).
The PSL consists of the specific code required to work with a particular piece of hardware. This means that supporting a new or variant chipset only requires the PSL to be amended, or to have detection code to handle a particular piece of hardware.
Some platform services, such as GPIO, provide APIs that directly address the hardware, and so the PSL provides the client interface API implementation directly. For example, there is no GPIO PIL/framework.
Not all platform services are as simple as the diagram above suggests. Some platform services do not talk to hardware directly or at all, but provide a higher-level abstraction, often with further low-level platform services below them in the OS stack. For example, the Power Manager Policy platform service provides APIs to control power management policy, but does not interface with any hardware directly. In these cases there is either no PSL, or the PSL provides configuration-specific responses to a generic PIL. For example, if a handset can be built in two variants, one with a hardware graphics accelerator and one without (but with a software-emulated graphics accelerator), then the PSL would be responsible for directing graphics requests either to the hardware or to the software, and for returning the results back through the PIL to the client.
Some platform services, for example TV-Out, may use additional platform services to output sound, as well as providing hardware support for the TV-Out chipset.
Some platform services may share a piece of hardware, such as a radio chip. The chip may provide cellular telephony, WiFi, FM radio transmission and other features all on one chip, but there may be several separate platform services, each exposing an API for one particular technology.
Both the PIL and PSL may make calls to other OS components, including other platform services, for information such as hardware configuration, power status, and the availability of other services.
Platform services benefit both device creators and third-party hardware/chipset developers, since they specify a standard software interface between the operating system and standard functionality provided by hardware.
This means that it is possible for a device creator to easily integrate the best hardware for their device, and allows hardware creators to work to a standard interface that can be used across multiple devices and multiple customers.
As the platform service interface provides a consistent interface to functionality, it allows the device creator to concentrate on the core functionality and not worry about how the hardware will provide it. And since the interface is standard for each type of hardware component, each manufacturer can provide the information or code for the PSL comparatively easily for each additional variant they produce. This means less code is required from hardware manufacturers per hardware component, and a larger market, as each new mobile device, from any device creator, can use that compliant hardware.
The platform services abstraction for hardware adaptation also means more common code that can be shared across the Symbian Foundation member community. This includes sharing PSL code with other hardware providers, and sharing code for device drivers and other higher-level components that need to use the platform services APIs. For example, if a device driver for a particular piece of hardware also needs to use a serial inter-IC bus (IIC) to send commands to that hardware, then it can use the IIC platform service APIs and re-use code that other components have already tested for using the IIC client interface APIs.
Platform services also allow the use of standardized test tools and test suites. Each new piece of hardware in a particular category (for example, camera, Bluetooth radio) has to comply with the SHAI interface (PIL/PSL interface). This means that an existing test suite that works with another camera module can be used to test that a new piece of hardware works. This reduces the time taken for testing and increases speed to market.
Application developers, who need to use the functionality of a hardware device, but do not need to know the specific details of the hardware implementation. They use the client interface and need to know what the APIs are, what order they must be called in, and so on.
Framework developers, who need to know how to implement each of the client interface APIs, and a description of the functionality required within the framework.
Hardware developers, who need to understand the hardware interface, and have to write or update the hardware implementation (PSL) to support their particular chipset/hardware.
In general, these are function calls, and associated enumerations. There are two main types of function call:
Synchronous
Asynchronous
A synchronous function call from the client/device driver waits for the platform service to process it and return any value. The client thread waits for the synchronous call to complete.
An asynchronous function call asks the platform service to do something, but then execution of the client continues immediately. For example, this would be the case for any function that starts a long data transmission or receive process, such as streaming video. One of the standard components of an asynchronous function call is a way for the result to be communicated back to the client. Often this is done by a “callback function”, where the client specifies a pointer to a function (and parameters) during the original call, and the platform service puts the callback on the Delayed Function Call (DFC) queue for the client when the platform service has completed the request. This may indicate “transfer completed” or “transfer failed, error code 123”; it may release a lock, or perform other functionality defined by the author of the client.
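The callback pattern described above can be sketched in standalone C++ (StartTransfer, RunDfcs and the pending queue are hypothetical stand-ins for the real DFC machinery):

```cpp
#include <functional>
#include <queue>
#include <utility>

// Hypothetical sketch of the asynchronous callback pattern: the client hands
// over a callback; the service defers completion by queuing the callback
// (the queue stands in for the client's DFC queue).
typedef std::function<void(int)> TCallback;   // parameter: completion code

static std::queue<std::pair<TCallback, int> > gPending;

void StartTransfer(TCallback aCb) {
    // Pretend the transfer completes successfully; queue the callback with
    // completion code 0 instead of running it now.
    gPending.push(std::make_pair(aCb, 0));
}

void RunDfcs() {
    // Stands in for the DFC queue thread draining pending completions.
    while (!gPending.empty()) {
        gPending.front().first(gPending.front().second);
        gPending.pop();
    }
}
```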
Some DMA controllers provide several modes of operation. For example, one ASSP provides both single-buffer mode and scatter-gather mode transfers.
There are two options:
Select a single mode of operation.
Select multiple modes of operation, and split the physical DMA controller into several logical controllers, one for each mode to be supported. If this option is chosen, the PSL must include one concrete controller class per logical controller; a controller class is derived from
If the DMA controller supports more than one mode of operation, implement the simplest one first. Implementing single-buffer mode first allows most of the PSL (platform-specific layer) to be debugged before you start to implement the more complex scatter-gather functionality.
One reference implementation adopts a mixed strategy; the mode of operation is selectable at build time by defining the macro
Scatter-gather support is essentially a superset of single-buffer support, because a scatter-gather list is an ordered set of variable-sized buffers. For this reason, the code shown here demonstrates the implementation of a scatter-gather DMA controller.
Almost all modern MCUs use scatter-gather capable DMA controllers. Note that we refer to descriptor-fetch mode synonymously with scatter-gather mode, as scatter-gather is implemented by passing the DMA controller a linked list of descriptors describing the buffers to be transferred. Non-descriptor mode requires that the DMA controller registers are programmed by software to describe the buffer transfer required. Since each transfer must be handled by an interrupt service routine triggered when the DMA controller has completed the transfer, non-descriptor mode is not capable of performing transfers past buffer boundaries.
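A descriptor chain of the kind described can be sketched as follows (the structure and names are illustrative; a real controller reads hardware-defined descriptors, and the "transfer" here is just a memcpy):

```cpp
#include <cstddef>
#include <cstring>

// Sketch of a scatter-gather descriptor chain: each descriptor describes one
// buffer fragment and links to the next, so a descriptor-fetch DMA controller
// can walk the whole chain without a per-buffer interrupt.
struct SDmaDesc {
    const unsigned char* iSrc;  // fragment source address
    std::size_t iCount;         // bytes in this fragment
    SDmaDesc* iNext;            // next descriptor, or NULL at end of chain
};

std::size_t RunChain(const SDmaDesc* aFirst, unsigned char* aDest) {
    std::size_t total = 0;
    for (const SDmaDesc* d = aFirst; d; d = d->iNext) {
        std::memcpy(aDest + total, d->iSrc, d->iCount);  // the "DMA transfer"
        total += d->iCount;
    }
    return total;
}
```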
This guide describes how to use the IIC client interface API.
In this document, a channel is a multi-wire bus with two or more devices connected to it. At least one of the devices can act as the master.
There are three possible modes that a device can be in with regard to the IIC client interface API (regardless of the underlying two-wire bus being used):
A single exchange of data in one direction is known as a transfer.
A list of transfers is known as a transaction.
Transactions can be performed in half-duplex or full-duplex mode (if the underlying bus supports it).
In certain instances, some pre-processing might be required by the devices before a transaction can occur.
This can be done by the transaction preamble functionality, which executes immediately before the transaction is performed. This takes the form of a function.
No spin locks are to be used.
It must not block or wait on a fast mutex.
It must not call code that uses spin locks, or that waits on or blocks a fast mutex.
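The preamble mechanism can be sketched in standalone C++ (all types and names here are hypothetical, not the real IIC API):

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch: a transaction is a list of transfers plus an optional
// preamble function that runs immediately before the transfers are performed.
typedef std::function<void(std::vector<std::string>&)> TPreamble;

struct TTransaction {
    TPreamble iPreamble;                  // may be empty (no pre-processing)
    std::vector<std::string> iTransfers;  // transfer names, for the sketch
};

void Perform(const TTransaction& aTr, std::vector<std::string>& aLog) {
    if (aTr.iPreamble)
        aTr.iPreamble(aLog);              // pre-processing, just before the transaction
    for (std::size_t i = 0; i < aTr.iTransfers.size(); ++i)
        aLog.push_back(aTr.iTransfers[i]);
}
```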
This is for a device that is in master mode.
This is where a number of separate transactions +are linked together. It is used, for example, in situations where +it is not known in advance how big the transaction will be.
This is for a device that is in master mode.
In addition to the above functionality, the IIC platform service API has two modes of operation:
With-controller and
Controller-less
With-controller operation is used in a situation where multiple device drivers can communicate with multiple channels. In controller-less operation, the link between a device driver and a channel is fixed.
In controller-less operation, the mode of the channel is set at build time, by using the following macros in the relevant mmp files:
The class that is used for the client interface depends on whether the IIC controller is to be used or not.
If the IIC Controller is to be used, then the client interface API class is:
If the IIC controller is not to be used, then the client interface API classes are:
The ASSP/Variant layer provides two main functions. First, it implements a small number of hardware-specific functions that are used by the Kernel. Second, it implements common peripheral control functions that other extensions and device drivers can use.
The most important of these functions is interrupt dispatching. During initialisation, the ASSP/Variant must specify a dispatch function to be called for all hardware interrupts.
In general, the ASSP/Variant provides control functions for hardware which is shared between multiple devices. For example, it is often not possible to do a read-modify-write on a GPIO port in order to change the state of an individual output line. This may be either because an output port register is write-only, or because reading the port register reads the actual logic levels on the pins, not the last value written to the register, and the pin level typically lags the written value due to capacitive loading. In this case, the ASSP/Variant could provide a function to set and clear individual port bits, keeping a RAM copy of the last value written to the port register.
The simplest implementation is to put all the code in a single DLL, called the Variant DLL (
In the Base Porting Guide, we refer to the ASSP layer and the Variant layer, where the ASSP layer contains the source code tailored to a range of different microprocessors (e.g. ARM720/920/SA1/Xscale), and the Variant layer contains the source code associated with off-chip hardware (same CPU, different peripherals).
For example, the standard Symbian port for the template reference board is split into a core layer (in directory
The heart of the ASSP/Variant is the
Where there is an ASSP/Variant split, the ASSP layer should derive a concrete implementation from the
The ASSP layer can, itself, define additional functions to be implemented in the Variant layer. For example, the ASSP layer defines the pure virtual function
The template reference board port has the ASSP/Variant split. The ASSP layer is implemented by the
For reference: the template port ASSP layer implementation's
Note that one of the source files that forms the ASSP layer must have the
This document describes a simple implementation of an IIC client +using the master channel functionality.
This tutorial explains how to use the IIC platform service API as a master and to communicate with a peripheral using half-duplex.
Intended Audience
Device driver writers.
Required Background
Before you start, you must:
Have installed the platform-specific implementation of IIC channels that is required to support the IIC platform service API.
Include the
Include the
The following tasks are covered in this tutorial:
Performing read/write +operations.
Basic Procedure
The high-level steps for performing read/write operations are shown here:
First, set the bus type, the channel number and the slave address.
An example of this is:
Configure the channel.
An example of this is:
Make linked lists of the transfers that are to be carried out.
An example of this is:
Use the
An example of this is:
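The four steps above can be condensed into a hedged sketch (all names are hypothetical; the real IIC client API uses Symbian bus-realisation and descriptor types):

```cpp
#include <cstddef>
#include <vector>

// All names hypothetical. TConfig carries step 1 (bus type, channel, slave
// address); the TTransfer entries are step 3's linked list, modelled as a
// vector; QueueTransaction stands in for step 4.
struct TConfig { int iBusType; int iChannel; int iSlaveAddr; };

struct TTransfer { bool iWrite; std::vector<unsigned char> iData; };

int QueueTransaction(const TConfig& aCfg,
                     const std::vector<TTransfer>& aTransfers) {
    if (aCfg.iChannel < 0 || aCfg.iSlaveAddr < 0)
        return -1;                               // invalid configuration
    return static_cast<int>(aTransfers.size());  // pretend all transfers completed
}
```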
SPI is a synchronous serial interface. SPI provides a full-duplex communication channel for devices that use the SPI interface. The devices and peripherals that use the SPI interface are described as SPI devices in this document.
On the SPI bus, one node must be a master device, and there can be one or more slave devices. Only one slave can be active at a time. The master device initiates and terminates communication between the SPI devices. SPI is a full-duplex serial data communication interface: in full-duplex communication, data can be transferred between the master and the slave devices simultaneously.
The SPI devices can be configured as master or slave. The SPI bus contains four wires. The first one is the serial clock signal sent by the master device to the slave device. The clock signal is used for synchronisation. The Master Output / Slave Input (
The master device is the node on the SPI bus that initiates and terminates the communication. The master must provide the configuration to the slave devices. The configuration includes details such as the channel number and the data length. SPI devices use 4-bit and 8-bit words to communicate. If a slave device or a channel is not available, the appropriate error is returned to the master.
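Full-duplex operation means the master and slave shift registers exchange bits on each clock, so one 8-bit transfer moves a byte in each direction simultaneously. A minimal sketch:

```cpp
#include <cstdint>

// Sketch of full-duplex SPI: on each clock pulse the MSB of each shift
// register is driven onto the bus (MOSI/MISO) and the incoming bit is
// shifted in, so after 8 clocks the two bytes have swapped.
void SpiExchange(uint8_t& aMaster, uint8_t& aSlave) {
    for (int bit = 0; bit < 8; ++bit) {
        uint8_t mosi = (aMaster >> 7) & 1;   // master out, slave in
        uint8_t miso = (aSlave  >> 7) & 1;   // slave out, master in
        aMaster = static_cast<uint8_t>((aMaster << 1) | miso);
        aSlave  = static_cast<uint8_t>((aSlave  << 1) | mosi);
    }
}
```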
The following are the advantages of the SPI interface:
Full-duplex synchronous communication
Lower power requirements compared to I2C devices
Less space required compared to a parallel bus
Slave devices do not require a unique address to be assigned
Typically, SPI devices are used for data transfers where a delay in the time taken to send the data is acceptable. The clients of the SPI interface are device drivers running on application-specific integrated circuits (ASICs). Some of the peripherals that use the SPI interface are:
Touch screens
Codecs
Real Time Clocks
Built-in cameras
Flash memory such as MMC and SD cards
The User-Side Hardware Abstraction (HAL) component provides a platform-independent way to access and control many device-specific features, and to hide hardware-specific functionality. For example, HAL can be used to find out if a screen backlight is present or not, or to set the display contrast. This topic
Specific items of hardware-related information are referred to as attributes. Symbian platform defines two types of attribute:
Non-derived attribute. This is an attribute where the information it represents is simply a value that is stored in, and retrieved from, the HAL.
Derived attribute. This is an attribute that requires a software call to obtain and set its value. The software component is, ultimately, provided by a driver, kernel extension or by the kernel itself. The drivers or kernel extensions normally need porting.
Attributes are identified by the enumeration values of the enumerator
To maintain backwards compatibility, the numbers identifying HAL attributes are allocated strictly increasing, consecutive values, and a given HAL attribute always has the same enumeration number on all implementations of Symbian platform. This means that new HAL attributes can only be added by Symbian. This also means that the addition of custom HAL attributes is not possible.
Symbian platform provides the following static functions to get and set information about specific hardware features:
The
For example, on return from code such as:
A request for fetching and setting information about hardware is dealt with by an entity called a HAL handler on the kernel side. This section describes the types and concepts used to implement a HAL handler.
It is useful to group together requests for information about related hardware features so that they can be dealt with by a single HAL handler. As an example, all the HAL screen attributes are in the display group and are handled by the screen (i.e. video or LCD) driver. This means that a HAL group has a one-to-one association with a HAL handler.
A HAL group has an associated set of function-ids, which identify requests for different pieces of information to the HAL handler.
Symbian platform identifies HAL groups by a number defined by the
The function-ids associated with each HAL group are also defined in
The idea of the HAL group is what allows HAL handling functionality to be implemented in the Variant, rather than in the kernel.
Up to 32 groups can be defined. Often, a HAL group is concerned with mode settings for a particular piece of hardware; for example there are standard HAL groups for keyboard, digitiser and display. However, some groups are concerned with some overall aspect of the platform which extends across all devices; for example, there is a group to handle emulation parameters on emulator platforms.
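The one-handler-per-group model, with at most 32 groups, can be sketched as a dispatch table (structures and error values here are illustrative, not the kernel's actual implementation):

```cpp
// Hypothetical sketch of HAL dispatch: one handler per group, registered in
// a fixed array of 32 entries and selected by group number.
const int KMaxHalGroups = 32;

typedef int (*THalFunc)(int aFunction, void* aInOut);

static THalFunc gHalHandlers[KMaxHalGroups];

bool RegisterHalHandler(int aGroup, THalFunc aHandler) {
    if (aGroup < 0 || aGroup >= KMaxHalGroups || gHalHandlers[aGroup])
        return false;                    // out of range or already registered
    gHalHandlers[aGroup] = aHandler;
    return true;
}

int HalDispatch(int aGroup, int aFunction, void* aInOut) {
    if (aGroup < 0 || aGroup >= KMaxHalGroups || !gHalHandlers[aGroup])
        return -6;                       // stand-in for a "not supported" error
    return gHalHandlers[aGroup](aFunction, aInOut);
}
```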
An
both result in calls to the screen (i.e. video or LCD) HAL handler, as represented by the HAL group number
However, there are HAL group/function-id pairs for which there are no corresponding generic attributes. In these cases, the hardware attributes represented by the HAL group/function-id pair are used internally, and can only be accessed using the kernel side function
The following picture shows the general idea:
**Technically, the user side function
Most HAL handlers are implemented as part of a driver, a kernel extension, the Variant or the kernel itself. Some will need porting. See
An extension or a device driver must register a handler for a HAL group. It does so by calling
Internally, pointers to the HAL handler are stored in the array
On the template board, for example, the HAL handler for the display group is part of the screen driver. The screen driver source can be found in
The following code, implemented in the Create() function of the main class, does the registration.
Security is the responsibility of the HAL handler. If necessary, it needs to check the capability of the caller's process. Each information item, as identified by the function-id, can have an associated capability.
Each function-id defined in
See
Code such as device drivers that runs on the kernel side need not use the generic interface provided by the HAL class functions. The
Unlike the HAL class functions,
Although an
non-derived, i.e. has a value that is simply stored in the HAL,
or:
derived, i.e. requires a call to a software entity to get and set the value.
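The distinction can be sketched as follows (hypothetical names; the idea is that derived attributes are routed to accessor functions while non-derived ones are simple stored values):

```cpp
#include <map>

// Hypothetical sketch: a non-derived attribute is a value stored in the HAL;
// a derived attribute is routed to an accessor function instead.
typedef int (*TAccessor)(bool aSet, int& aValue);

static std::map<int, int>       gStored;    // non-derived: attribute -> value
static std::map<int, TAccessor> gAccessors; // derived:     attribute -> function

int HalGet(int aAttr, int& aValue) {
    std::map<int, TAccessor>::iterator a = gAccessors.find(aAttr);
    if (a != gAccessors.end())
        return a->second(false, aValue);    // derived: call the accessor
    std::map<int, int>::iterator s = gStored.find(aAttr);
    if (s == gStored.end())
        return -6;                          // stand-in "not supported" error
    aValue = s->second;                     // non-derived: simple stored value
    return 0;
}
```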
This is the role of the Config file.
The Config file contains a list of all the attributes that have meaning on a platform, and uses the symbols defined in the enum
There is also a file known as the Values file, which defines the initial values for all of the non-derived attributes, i.e. those attributes that are simply stored in, and retrieved from, the HAL.
The Config and Values files are text files that, for a specific implementation of the HAL, reside in the
In effect, these C++ source files implement the static data arrays defined by the
The data member
See
Each derived attribute has a corresponding accessor function. For example,
All accessor functions are implemented in
Generally there is one accessor function per derived attribute, although there is nothing to prevent an accessor function dealing with more than one attribute. The attribute number is used to route calls made to
A new implementation of an accessor function should rarely, if ever, be needed, as all likely accessor functions are already defined and implemented by Symbian platform in
See
Typically, an accessor function implementation calls
This means that it is the accessor function, which is part of Symbian platform generic code, that decides which HAL handler is to deal with a request (as identified by the attribute).
The kernel uses the HAL group number as an index to dispatch the call to the correct HAL handler.
Note that there may be cases where access to a HAL attribute is handled through a device driver or server.
The static function
A call to this function results in a call to the kernel side function
For information purposes, the
To help enable a base port to be produced quickly, Symbian platform provides template source code for drivers for most peripherals. The template implements a framework and protocols for the type of peripheral; the base port implementer then supplies additional implementation for the particular hardware device. For example, for the UART driver, the PDD template implements the standard hardware abstraction interfaces for serial communication, such as the
The template drivers are located in the
Details of the templates for particular drivers are discussed in the relevant sections of the Base Porting Guide. This tutorial presents an additional example device driver.
The Interrupt platform service is implemented as a part of the ASSP variant layer and hence included in the baseport. For more information, refer to
The SDIO implementation shares most of its functionality with the Secure Digital (SD) Controller, which in turn extends the MultiMedia Card (MMC) Controller. SDIO adds support for data transfer between Input/Output hardware included on SDIO peripherals. The SDIO Controller initializes SDIO peripherals and provides an API so that class drivers can access the SDIO services.
Providing an SDIO adaptation for your platform means porting the SDIO Controller. This is done by providing platform-specific implementations of the SDIO SHAI interface functions.
SDIO depends on SD and MMC: the SDIO protocol is a super-set of the SD protocol (and of the MMC protocol). Therefore, in addition to the SDIO standard, you need to know about the following:
SD (Secure Digital)
MMC (MultiMedia Card)
The following diagram illustrates the implementation of the SDIO extension for the MMC Controller. The classes in green provide the hardware interface and the classes in blue implement the platform-specific functions.
The Platform-Specific Layer (PSL) should be located in a variant directory and have a similar folder hierarchy to the PIL. It should provide the following implementations:
Stack
Power Supply Unit (PSU)
The
When implementing the PSL, you need to provide at least one function:
This function opens a DMA channel and returns the object controlling this channel.
How the channel is allocated is hardware-dependent. The call to the
If you want to perform a hardware-specific operation when closing the channel, you also need to update
This example illustrates a simple allocation scheme where
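Such an allocation scheme might look like the following sketch (a fixed pool with first-free allocation; the names and pool size are illustrative, not the real PSL code):

```cpp
// Sketch of a simple fixed-pool channel allocation scheme, as a PSL might
// implement it: Open returns the index of the first free channel.
const int KChannelCount = 4;
static bool gInUse[KChannelCount];

int OpenDmaChannel() {
    for (int i = 0; i < KChannelCount; ++i)
        if (!gInUse[i]) { gInUse[i] = true; return i; }
    return -1;                    // no free channel
}

void CloseDmaChannel(int aChannel) {
    if (aChannel >= 0 && aChannel < KChannelCount)
        gInUse[aChannel] = false; // hardware-specific teardown would go here
}
```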
OBEY files are standard text files containing statements that are used to control the operation of the ROM building tools.
You can use the trace log to test the performance of the programs.
There are three macros that you can insert into your code. They only differ with respect to the optional data that you can add to the trace record. All are defined in
Note: the
You must specify an 8-bit integer subcategory value in the
The macros wrap around the standard
The generation and capture of trace information is implemented as a kernel extension. This means that it is loaded and activated at an early stage in the startup process of the device.
The following table shows which fields of a trace record are generated by each of the macros
To use the macros:
Include the header file
Link to
Call the
For examples, see the test code for the macros in
Rebuild your project
Specific items of hardware-related information are called
The base port customises the
Some attributes are simple values, but some attributes must be read and set with a function. These functions are called accessor functions. The base port sets identifiers for the accessor functions to use.
User-side programs use the
The accessor functions are implemented on the kernel side by functions called
The variant DLL and many device drivers in a base port can implement HAL handlers.
Kernel-side programs can use the
The Serial Port Driver provides a logical device driver,
The Register Access platform service is implemented in the ASSP layer. The Register Access functionality is provided to clients by implementing the
The Register Access platform service provides an interface for the kernel and device drivers to read, write or modify the contents of the ASSP registers only.
A register is a memory location on the ASSP hardware that stores data relating to the operation of that hardware. The Symbian platform supports registers that can store 8, 16, 32 and 64-bit data.
The
ASSP is an integrated circuit consisting of CPU, Memory Management +Unit (MMU), cache and a number of on-chip peripherals, which is intended +to be used in a class of devices. Typical examples include UARTs, +timers, LCD controller that are designed and marketed by a silicon +vendor.
The Register Access platform service allows the clients +to:
read data stored in 8, 16, 32 and 64–bit registers
store data in 8. 16, 32 and 64–bit registers
change certain bits of the data in 8, 16, 32 and 64–bit registers.
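As an illustrative sketch only (an `AsspRegister`-style class with `Read32()`, `Write32()` and `Modify32()` functions is the form used by some reference baseports; the register address here is invented), a driver might use the service like this:

```cpp
// Illustrative sketch: assumes an AsspRegister-style interface
// (Read32/Write32/Modify32); KHwExampleReg is an invented address.
const TUint32 KHwExampleReg = 0x40000000;

void ExampleRegisterAccess()
    {
    // Read the current 32-bit register contents.
    TUint32 value = AsspRegister::Read32(KHwExampleReg);
    // Store a new value.
    AsspRegister::Write32(KHwExampleReg, value | 0x10);
    // Modify in place: clear bit 0, set bit 1, leave the rest untouched.
    AsspRegister::Modify32(KHwExampleReg, 0x1 /*clear mask*/, 0x2 /*set mask*/);
    }
```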
This page lists the keywords starting from D to F.
rombuild only
This keyword specifies the alignment for the executable's data. It indicates the start address of a data segment that must be aligned to a specified value.
Note: The inclusion of
rombuild and rofsbuild
If the
rombuild and rofsbuild
This overrides the default settings for data paging and the settings from all the
This keyword takes a single argument, which can be one of the possible values listed in
rombuild and rofsbuild
A file that is copied from its source location into the ROM without any processing.
Note that the HWVD is optional but, if specified, must be enclosed within square brackets.
rombuild only
Linear address of data/bss chunks for all executables except the Kernel.
ROFSBUILD only
Specifies the name of the data drive image.
BUILDROM only
Defines a data drive image.
This is a data drive configuration feature. There is no limit on the number of data drive images that can be defined for internal media.
To mark a file for inclusion in a data drive, it is prefixed with the keyword DATA_IMAGE. For example:
The above information can also be included using '{' '}' braces, for example:
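As a purely schematic illustration (the file names and image ID are invented, and the exact argument layout should be checked against the ROM tools reference for your platform), the two forms might look like:

```
REM Schematic example only - prefix form, one file marked individually
DATA_IMAGE[0] data=\epoc32\data\example.txt   \example.txt

REM Schematic example only - the same information in brace form
DATA_IMAGE[0] {
    data=\epoc32\data\example.txt   \example.txt
}
```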
BUILDROM and ROFSBUILD only
Specifies the file system type of the data drive image. If this is not specified, then by default
ROFSBUILD only
Specifies the name of the data drive image. If this is not specified, then by default
ROFSBUILD only
Specifies the maximum size of a data drive image. The default size is the size of the data drive folder.
BUILDROM only
Localisation support. Specifies the default language as a Symbian 2-digit code. This keyword should only be used once.
BUILDROM only
Performs textual substitution. All subsequent instances of <name> are replaced by <replacement>.
Notes:
There is no UNDEFINE facility, and substitutions are applied in an unspecified order.
The C++ preprocessor cannot be used conveniently because it inserts whitespace around substituted text.
rombuild only
Defines kernel-mode logical or physical device drivers, which can have global data. The address of this data is generated by
Note that the
rombuild only
Specifies an executable file whose entry point must be called.
rombuild only
Specifies the destination port for Kernel and Bootstrap traces.
rombuild only
Specifies the maximum size of the stack.
rombuild only
Specifies settings for a demand paging enabled ROM.
This keyword takes four arguments, which are described below:
For example, the
For example, let us assume that the ratio is R, the number of young pages is Ny and the number of old pages is No. If Ny > R × No, a page is taken from the end of the young pages list and placed at the start of the old pages list. This process is called aging. For instance, with R = 3 and No = 4, aging occurs as soon as Ny exceeds 12.
Note: All the above listed attributes are limited to the value range 0-32767.
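The aging rule can be modelled in ordinary C++ (this is an illustrative model of the rule described above, not Symbian kernel code):

```cpp
#include <cassert>
#include <list>

// Model of the demand-paging aging rule: pages enter the young list;
// whenever Ny > R * No, the oldest young page is moved ("aged") to the
// front of the old list.
struct PageLists
    {
    std::list<int> iYoung;  // most recently accessed first
    std::list<int> iOld;    // aged pages, eviction candidates
    int iRatio;             // R

    explicit PageLists(int aRatio) : iRatio(aRatio) {}

    void Access(int aPage)
        {
        iYoung.push_front(aPage);
        // Age pages while the young list is over-full (Ny > R * No).
        while (static_cast<int>(iYoung.size()) >
               iRatio * static_cast<int>(iOld.size()))
            {
            iOld.push_front(iYoung.back());
            iYoung.pop_back();
            }
        }
    };
```

With R = 3, accessing pages 1 to 5 leaves three young pages and two old ones: the first access immediately ages page 1 (since Ny = 1 > 3 × 0), and the fifth access ages page 2.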
rombuild only
Specifies the top of the DLL data region.
BUILDROM only
Prints the rest of the line following the ECHO keyword to standard output.
BUILDROM only
Specifies an ECom plug-in, consisting of an implementation DLL and an ECom registration resource file, to include in ROM.
Symbian platform code does not use this keyword directly. Instead, it uses the macro
For example:
Note that the resource file name is specified using the
Use of this keyword allows
Note that as part of the ROM creation process, it is possible to add support for multiple languages. This affects how ECom resource files are specified by BUILDROM:
If an SPI file is being created, the
During the BUILDROM localisation stage these lines become:
where
If an SPI file is not being created,
During the BUILDROM localisation stage these lines become:
BUILDROM only
A pre-defined substitution. This is replaced with the value of the EPOCROOT environment variable.
Note that there is no UNDEFINE facility, and substitutions are applied in an unspecified order.
rombuild only
Indicates that a Symbian platform ROM wrapper is required.
BUILDROM only
Prints the rest of the line following the ERROR keyword to standard output, and reports the source file name and the line number. In addition, it causes
ROFSBUILD only
This keyword is used with the
The following example shows how this keyword is used to set the attribute for a text file, which is copied to the ROM image:
Assuming that the
Where
Note: The space between the
BUILDROM only
The EXCLUDE_FEATURE keyword is used to mark a feature as excluded.
rombuild only
Defines a kernel-mode DLL that can have global data, the address of which is generated by
Note that the
rofsbuild only
Marks the start of the definition of an optional extension ROFS.
rofsbuild only
Defines the name of the ROFS extension image.
Any new files added after this keyword will appear in the extension. The files in the core can be renamed, hidden, and aliased by using the other keywords.
rombuild only
This marks the start of an extension ROM section. A filename of "*" can be specified, which means use the file name specified on the
BUILDROM only
Used for invoking external tools through the IBY file keyword externaltool, specifying the list of tool names, each separated by a comma.
The same invocation can alternatively be achieved by using
rofsbuild only
Configures the number of FAT tables for the file system in the data-drive image.
Where
Note: The space between the
BUILDROM only
The FEATURE keyword is used to mark a feature as included.
rombuild and rofsbuild
A standard executable file, for example, a
For example, the following entry in the Obey file provides the source and destination locations of the file
Notes:
the
the information required to relocate the file is not preserved; this keyword provides a fully resolved uncompressed file.
rombuild only
Does not compress the resulting ROM image.
For example:
rombuild only
Compresses the resulting ROM image using the Deflate, Huffman+LZ77 algorithm.
For example:
rombuild only
Compresses the resulting ROM image using the bytepair algorithm.
For example:
rombuild only
An XIP (execute-in-place) executable to be loaded into the ROM in compressed format. Note that the information required to relocate the file is preserved.
rombuild only
An XIP (execute-in-place) executable to be loaded into the ROM in uncompressed format. Note that the information required to relocate the file is preserved.
rombuild only
This executable is loaded as a fixed address process, and has its data fixed in kernel space (high memory). The data of normal executables is paged in and out of low memory as the executable runs. Fixing a chosen subset of the system servers saves context switch time, and speeds execution considerably.
Specifies the SMR image format version. This value is checked by the SMR consumers (such as HCR) at runtime to ensure code compatibility with the image or data format.
The IIC client interface API creates a common interface from device drivers to serial inter-IC buses (IIC buses).
Intended audience
This document is intended to be used by device driver writers.
The IIC client interface API is part of a group of APIs which provide a common, hardware-independent interface from the kernel to different hardware. Other APIs of this type include GPIO.
Description
Physically, a bus represented by IIC is accessed over a node. The IIC client interface API wraps a node in a software construct called a channel. It provides functionality for creating and accessing channels directly. There are two possible modes of operation:
Controller
Where the device drivers that use the IIC client interface API can access different IIC channels.
Controller-less
Where the channels that the device driver can access are set at compile time.
Furthermore, there are three possible channel behaviours:
Master
The device driver initialises the data transfer.
Slave
The device driver carries out a data transfer that is controlled by another node attached to the bus.
MasterSlave
The device driver can carry out both roles.
If access to the IIC bus is to be via a controller, then the IIC platform service API class to use is:
If access to the IIC bus is to be without a controller (controller-less operation), then the IIC platform service API classes to use are:
Device drivers must follow the Symbian platform security guidelines. As part of platform security, drivers must be given the necessary platform security capabilities. A driver can also check the capabilities of a process opening a channel on the device, in order to restrict access to the device.
Driver-side definition
Because drivers are loaded by the Kernel, both LDDs and PDDs must have the same level of trust and capability as the Kernel. This means that platform security capabilities must be set to
The user program must have the necessary capability set in its
User-side verification
A device driver must check the capability of the process that is accessing it. This is typically done during channel creation and, if required, for specific requests to the LDD. The Kernel provides the
The following shows how the example driver checks during channel creation that the user has the
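A minimal sketch of such a check (the channel class and the chosen capability are examples only; `Kern::CurrentThreadHasCapability()` is the kernel-provided test):

```cpp
// Sketch: refuse channel creation unless the client process holds
// ECapabilityCommDD. DExampleChannel and the capability chosen are
// illustrative; a real driver picks the capability appropriate to it.
TInt DExampleChannel::DoCreate(TInt /*aUnit*/, const TDesC8* /*aInfo*/,
                               const TVersion& /*aVer*/)
    {
    if (!Kern::CurrentThreadHasCapability(ECapabilityCommDD,
            __PLATSEC_DIAGNOSTIC_STRING("Checked by example driver")))
        return KErrPermissionDenied;
    // ... remainder of channel construction ...
    return KErrNone;
    }
```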
Data caging
Symbian platform security requires that all DLLs and EXEs are placed in the folder
To create a port of the DMA Framework, you must create an implementation of the PSL for the DMA controller hardware.
The choice of class to be used for implementing DMA channels is governed by the mode of operation to be supported:
single-buffer channels should use
double-buffer channels should use
scatter-gather channels should use
These three classes are defined in
In addition, the PSL may need to store channel-specific data. One way of achieving this is by deriving a PSL-specific class from one of the generic ones. For example, the template scatter-gather implementation defines a
A controller can either be a physical controller or a logical one, depending on which
The PSL (platform-specific layer) must define one concrete controller class per supported controller. A controller class is derived from
Taking the template port as an example, the class is defined as:
Notes:
The array of channels must be defined in the PSL controller class because only the PSL knows what kind of channels are used.
The PSL controller class must be allocated in the BSS section of the DMA kernel extension. The second phase constructor must be called from the extension entry point.
The C++ constructor must pass a
where KInfo is a
The PSL controller class needs a second phase constructor. This is the
call the second phase constructor of the base class:
perform any hardware-specific initialisation.
bind and enable the DMA interrupt(s).
In the template port,
Channel allocation is implemented in the PSL (platform-specific layer) because this is a hardware-specific operation. There are two basic options:
Preallocate, at design time, one channel per DMA-aware peripheral. This is the simplest approach, and it should be acceptable for most Symbian platform devices because the set of supported peripherals is closed. In this case, cookies passed by client device drivers map uniquely to DMA channels.
Full dynamic allocation. This is also a simple approach, but DMA channels are, in general, not completely identical. For example, some channels may have greater priorities than others.
Mixed schemes are also possible; for example, client device driver cookies could be used to select a subset of channels, and dynamic allocation used inside this subset.
Whatever option is chosen, the PSL must provide an implementation for the function
The template DMA channel manager returns a pointer to a DMA channel if the channel has not been previously allocated. Note that since channels possess preset priorities, the device drivers must be aware of which channel(s) they require DMA service from, and configure the DMA controller to route sources to allocated channels accordingly.
The platform-specific cookies passed by client device drivers to the PSL must be defined somewhere so that client device drivers can access them.
In the template PSL (platform-specific layer), the function
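A sketch of the pre-allocated scheme, where the client's cookie is simply an index into the controller's channel array (the controller object, array and count are invented PSL names, and the exact `DmaChannelMgr::Open()` signature may differ between framework versions):

```cpp
// Sketch only: TheDmac, iChannels and KChannelCount are invented names.
TDmaChannel* DmaChannelMgr::Open(TUint32 aOpenId)
    {
    if (aOpenId >= KChannelCount)
        return NULL;                      // unknown cookie
    TDmaChannel* ch = TheDmac.iChannels + aOpenId;
    if (ch->IsOpened())
        return NULL;                      // already allocated to a driver
    ch->iController = &TheDmac;           // bind channel to its controller
    return ch;
    }
```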
The
These functions start and stop transfers on a DMA channel and are the main interface between the PIL (platform-independent layer) and the PSL. The implementation of these functions depends on the hardware available for performing DMA, and on the characteristics used to specify a DMA transfer:
the source and destination addresses
the burst size
the maximum transfer size
the transfer width, i.e. the number of bits per memory access
the memory alignment and endianness.
The DMA Framework manages the transfer descriptors according to the descriptor parameter passed into the
The transfer function: Transfer()
This function initiates a previously constructed request on a specific channel. This is the template implementation:
The stop transfer function: StopTransfer()
This function requires that the RUN mode is cleared. This is the template implementation:
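A schematic sketch of the pair (the controller class, register helper functions and descriptor type are all invented; only the general shape, program the channel then set the RUN bit, and clear it to stop, follows the description above):

```cpp
// Schematic only: register helpers and STemplateDes are invented names.
void TTemplateDmac::Transfer(const TDmaChannel& aChannel, const SDmaDesHdr& aHdr)
    {
    const STemplateDes* des = HdrToHwDes(aHdr);   // invented helper
    const TUint32 ch = aChannel.PslId();          // hardware channel number
    SetSrcAddr(ch, des->iSrc);                    // program source address
    SetDstAddr(ch, des->iDest);                   // program destination address
    SetCount(ch, des->iCount);                    // transfer length
    SetRunBit(ch);                                // start the transfer
    }

void TTemplateDmac::StopTransfer(const TDmaChannel& aChannel)
    {
    ClearRunBit(aChannel.PslId());                // clearing RUN halts the channel
    }
```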
The function: IsIdle()
The following auxiliary functions are used to implement the scatter-gather transfer mode behaviour by creating and manipulating the linked list of transfer fragment headers that describe a given scatter-gather transaction. They are called by the
First scatter-gather support function: InitHwDes()
This is a function for creating a scatter-gather list. From the information in the passed-in request, the function sets up the descriptor with that fragment's:
source and destination address
size
driver/DMA controller specific transfer parameters: memory/peripheral, burst size, transfer width.
This is the template implementation:
Second scatter-gather support function: ChainHwDes()
If the framework needs to fragment the client's request, for transfer size or memory discontiguity reasons, then the framework calls this function. It chains hardware descriptors together by setting the next pointer of the original descriptor to the physical address of the descriptor to be chained. It assumes that the DMAC channel is quiescent when called.
This is the template implementation:
Third scatter-gather support function: AppendHwDes()
This function is called by the
stop the DMAC to prevent any corruption of the scatter-gather list while appending the new fragment descriptor
append the new descriptor
re-enable the channel, ideally before the target has detected the gap in service.
This is the template implementation:
The interrupt service routine needs to do the following:
identify the channel that raised the interrupt
decide whether the interrupt was raised because of a successful data transfer or because of an error
call the base class function
This is the template implementation:
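A schematic ISR along those lines (the register reads and channel array are invented; the call into the PIL is the base class `HandleIsr()` function):

```cpp
// Schematic only: interrupt/status helpers and iChannels are invented.
void TTemplateDmac::Isr(TAny* aThis)
    {
    TTemplateDmac& self = *static_cast<TTemplateDmac*>(aThis);
    TUint32 pending = self.PendingInterrupts();        // one bit per channel
    for (TInt i = 0; pending != 0; ++i, pending >>= 1)
        {
        if (pending & 1)
            {
            const TBool ok = !self.ChannelHasError(i); // success or error?
            self.AckInterrupt(i);                      // clear the source
            HandleIsr(self.iChannels[i], ok);          // notify the PIL
            }
        }
    }
```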
The DMA Framework comes with a test harness that can be used to validate the port if the underlying DMA controller supports memory to memory transfers.
The test harness needs to know about the capabilities of the port being tested. The PSL provides the global function
See
You can optionally optimise critical functions by writing them in assembler. The two candidates for an assembler rewrite are:
The interrupt service routine
In scatter-gather mode, the
There are two ways of extending the DMA Framework:
to provide platform-specific functionality on a per-channel basis.
to provide platform-specific functionality on a channel-independent basis.
In the first case, the PSL provides an implementation of the virtual function
In the second case, the PSL provides an implementation of the static function
Kernel Architecture 2
No specifications are published.
The PDD factory creates the main media driver objects.
The
In implementing your
The following function is virtual in
See also
This PDD factory function is called after the PDD DLL has been successfully loaded, as a result of a call to
The function is a second stage constructor for the factory object, and allows any further initialisation of the factory object to be done. As a minimum, the function must set a name for the media driver's factory object. The name is important, as it is the way in which the object will subsequently be found. The name should be of the form:
where
When a
The following simple function is typical:
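A hedged sketch of such a function (the driver class and name are invented; the pattern of calling `SetName()` with a `Media.<id>`-style name follows the description above):

```cpp
// Sketch: the factory name follows the Media.<id> convention.
_LIT(KPddName, "Media.Example");

TInt DPhysicalDeviceMediaExample::Install()
    {
    return SetName(&KPddName);
    }
```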
See also
This PDD factory function is called by the device driver framework to create, and return, the media driver object. This is an instance of a class derived from
Typically,
Compares the build version of the media driver with the version requested, and returns
Creates an instance of the
Initiates second-phase construction of the media driver object. Typically, this involves initialisation that is capable of failing. This may be done synchronously or asynchronously. You would probably do this asynchronously for removable media that is slow to power up, or for slow internal media, in which case you would need to make sure that you attached a DFC queue during
Acknowledges creation of the media driver object. The way you do this depends on whether creation is done synchronously or asynchronously:
Synchronous creation - call
Note that
Asynchronous creation - return either
The following is a typical example:
See also
This PDD factory function is called by the kernel's device driver framework to check whether this PDD is suitable for use with the media type specified in the function.
A typical implementation of this function would perform the following steps:
Compare the build version of the media driver with the version requested
Confirm that this driver is responsible for the media type
The following is a very typical implementation:
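A sketch of the two steps (the class name is invented; `EFixedMedia0` is just an example media type):

```cpp
// Sketch: version check, then media type check.
TInt DPhysicalDeviceMediaExample::Validate(TInt aDeviceType,
                                           const TDesC8* /*aInfo*/,
                                           const TVersion& aVer)
    {
    if (!Kern::QueryVersionSupported(iVersion, aVer))
        return KErrNotSupported;       // requested version not supported
    if (aDeviceType != EFixedMedia0)
        return KErrNotSupported;       // not our media type
    return KErrNone;
    }
```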
Note that the value passed in via the
See also
For media drivers, this PDD factory function is not used. However, an implementation is required because the function is defined as pure virtual in the
See also
This PDD factory function is intended to return information relating to the media driver. The function can, potentially, return many different types of information, depending on the value passed as the first parameter. However, the only type of information that Symbian platform currently requires is the priority of the media driver. The returned priority value is used by Symbian platform to decide the order in which media drivers are to be opened.
The default implementation just returns 0, and therefore needs to be overridden.
Under most circumstances, you can return the value
The following code fragment is a typical implementation:
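A sketch of an `Info()` override that reports only the priority (the class name is invented):

```cpp
// Sketch: return the driver priority when asked, 0 otherwise.
TInt DPhysicalDeviceMediaExample::Info(TInt aFunction, TAny* /*a1*/)
    {
    if (aFunction == EPriority)
        return KMediaDriverPriorityNormal;
    return 0;
    }
```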
where
You use the
attach a DFC queue, if the underlying media driver supports asynchronous creation
register the media driver with the local media system.
If the underlying media driver supports asynchronous creation, then a DFC queue must be attached at this stage. However, media drivers that interface to the Peripheral Bus Controller should create a new
See also
The media driver must be registered with the Local Media Subsystem; this provides information such as the number of supported drives, partitions, names and drive numbers. This is done by calling
The media device type can be any of the
The values passed to this function are highly dependent on the target hardware platform, and it is common practice to define them in a file contained within the Variant directory, instead of hard-coding them into generic Symbian platform code. For example, Variant A may provide two PC Card slots, while Variant B may provide four.
The port for the template reference board has the header file
Your code may find it convenient to use the struct
The following code is used:
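A schematic registration call (the drive numbers, counts and names are invented for this example, and the exact `LocDrv::RegisterMediaDevice()` parameter list should be checked for your platform version):

```cpp
// Schematic only: the values are platform specific and invented here.
_LIT(KMediaDriveName, "ExampleMedia");
const TInt KDriveNumbers[] = { 1 };   // local drive number(s) served

TInt RegisterExampleMedia(DPrimaryMediaBase* aPrimaryMedia)
    {
    return LocDrv::RegisterMediaDevice(EFixedMedia0,     // media device type
                                       1,                // number of drives
                                       KDriveNumbers,    // their drive numbers
                                       aPrimaryMedia,
                                       1,                // number of media
                                       KMediaDriveName);
    }
```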
You can also do any further initialisation that is appropriate to your driver.
Peripheral driver power management is based on the
The class also provides the necessary support functions. The class is defined as:
Typically, at least one power handler object is implemented by the peripheral driver. In some cases the peripheral driver interface class may derive from
The first eight functions are exported from the kernel,
Notes:
Current consumption monitoring does not have a full implementation in the power manager at present. It is unclear whether this will ever be required. However,
Although we have described the context in which these functions would be used, we recommend that you do not use them.
Fixed media drivers do not have a power handler. This means there is currently no mechanism to power down media drivers and the associated fixed media when the system transitions to a low power state.
The
If implementing an old style power handler, the following functions must be implemented by the peripheral driver:
If an old style power handler is in use, the power manager will not complete any powering down (transition to Off or Standby). It is therefore recommended that old style power handlers are not used.
When is it called?
Implementation issues
After receiving a request to power down, as a result of a system transition to the Standby or Off states, a peripheral driver should perform the necessary activity to power down the peripheral and ancillary hardware, unless it is required for the detection of wake-up events. This activity might include requesting the removal of the power supply and any other power resources.
The power down operation can be done in the same thread in which
There are synchronisation issues related to calls to the
To avoid deadlock,
all run in the same thread. A common implementation of
When is it called?
Implementation issues
After receiving a notification to power up, as a result of a system transition from the Standby to the Active state, it is up to the peripheral driver to decide whether or not to power up the peripheral and ancillary hardware. The decision usually depends on whether or not the peripheral driver is currently in use.
The power up operation can be done in the same thread in which
There are synchronisation issues related to calls to the
To avoid deadlock,
all run in the same thread. A common implementation of
You can change HAL to add accessor functions for new derived attributes. This step is not often done, because all normal accessor functions are already defined in
Each derived attribute is declared with an associated function name in
This function is called whenever any client of the HAL references the associated attribute.
Once the config file has been written, the Perl script
produces the file
The full implementation for the function can now be written.
Notes:
The
The
On some platforms it may be necessary to access some HAL attributes via a device driver or server, rather than using
Access to any HAL attribute requiring the use of a server or device driver can fail due to lack of memory.
The code fragments in the
See also
This code fragment shows the implementation of accessor functions for two HAL derived attributes. One uses a device driver, and the other uses a server to provide the necessary functionality.
The examples assume that the attributes are defined in the
Using a device driver
Using a server
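Both variants follow the same shape. As an illustrative sketch of the device driver case (the driver class and its functions are invented; only the accessor signature follows the HAL convention for derived attributes):

```cpp
// Sketch: a derived-attribute accessor that delegates to a driver.
// RExampleDriver and GetValue() are invented names.
TInt GetExampleAttribute(TInt /*aDeviceNumber*/, TInt /*aAttrib*/,
                         TBool aSet, TAny* aInOut)
    {
    if (aSet)
        return KErrNotSupported;            // read-only attribute
    RExampleDriver driver;
    TInt r = driver.Open();                 // may fail, e.g. KErrNoMemory
    if (r != KErrNone)
        return r;                           // report failure to the HAL client
    r = driver.GetValue(*static_cast<TInt*>(aInOut));
    driver.Close();
    return r;
    }
```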
Both user-side code and drivers can read and write to shared chunks. Once the chunk is created, opened, and memory committed to it, data can be written to and read from the shared chunk using the base address.
If a shared chunk has already been created by a driver, then another driver can access that shared chunk through its handle by using one of the following functions:
The user can now obtain the base address of the chunk by calling
On the kernel side, a chunk is represented by a
Synchronisation between user access and kernel access to a shared chunk is handled by the shared chunk API.
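A sketch of the user-side access pattern (the channel class and its `GetChunkHandle()` request are invented; `RChunk::SetHandle()` and `RChunk::Base()` are the standard calls):

```cpp
// Sketch: adopt the handle supplied by the driver, then use the
// chunk's base address as ordinary memory.
void UseSharedChunk(RExampleChannel& aChannel)
    {
    RChunk chunk;
    chunk.SetHandle(aChannel.GetChunkHandle());  // invented driver request
    TUint8* base = chunk.Base();                 // start of committed memory
    base[0] = 0x42;                              // write through the chunk
    TUint8 value = base[0];                      // read it back
    (void)value;
    chunk.Close();
    }
```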
A kernel-side function or service almost always requires that various preconditions apply before that function or service can be called. There are also times when a call to a kernel-side function or service from device drivers may not be appropriate, for example, calls to some specific functions from within Interrupt Service Routines (ISRs) or from within Deferred Function Calls (DFCs).
For example, before you call
the calling thread must be in a critical section
interrupts must be enabled
the kernel must be unlocked
no fast mutex can be held
If conditions such as these are not met before you call the kernel-side function or service, then you risk damaging the integrity of the kernel and the whole system, with a very high risk of causing the kernel to fault, followed by a re-boot of the device.
The pre-conditions that apply vary with the function called; they are listed as part of the reference documentation for that function.
To help device driver test suites check that device driver code is valid, stringent checks can be switched on in debug (non-production) builds of Symbian platform (and in the resulting test ROM images). These checks help to catch non-compliant device driver code during the testing phase of a new device. This minimises the risk of production devices failing because of faulty device driver code.
Validity checking is done by code that is conditionally compiled into a build of Symbian platform. Such code has an impact on the performance of the device on which it runs, but is always restricted to test devices. On production builds of Symbian platform, validity checking is switched off, which means that the code is not compiled in and has no effect on performance.
Validity checking is switched ON for a build of Symbian platform if one or both of the following macros is defined in
Validity checking is switched OFF for a build of Symbian platform if both macros are undefined (in practice they will be marked as a comment rather than being completely deleted from the
__KERNEL_APIS_CONTEXT_CHECKS_WARNING__
If this macro is defined, then calling a kernel-side function when the necessary pre-conditions have not been satisfied causes a text message to be written to a trace port. The message has the format:
For example, when you call
__KERNEL_APIS_CONTEXT_CHECKS_FAULT__
If this macro is defined, then calling a kernel-side function when the necessary pre-conditions have not been satisfied:
causes a text message to be written to a trace port, as happens if
causes the kernel to fault, specifying the function name as the fault category, and zero as the fault number. The crash debugger is also started if this is present on the device.
__KERNEL_APIS_CONTEXT_CHECKS_WARNING__ and __KERNEL_APIS_CONTEXT_CHECKS_FAULT__
If both macros are defined, then validity checking behaves as if
The section
When building a ROM image, all the processes and data needed to execute a build are included in a single file. This file is executed by the platform when the hardware is powered up.
DMA framework.
You must be familiar with building a ROM for the Symbian platform.
There is a template DMA Framework consisting of the source file
Decide which directory your DMA PSL and associated
Copy the template framework into your chosen location. Be aware that the template
Change the Variant's
Include the
For more information, refer to
A physical channel defines the interface between the logical device and the physical device. The Serial Port Driver physical channel interface is defined by the
The
Note that
Implement the constructor +for the channel object. Since there is no unit number argument, the constructor +is limited to initialising only those members that are common across all possible +channels, with no access to any specific hardware.
Implement the destructor +for the channel object. It should release any hardware and Symbian platform +resources that have been allocated to the driver, typically unbinding the +UART interrupt ISR, and any private DFCs that are still in use. The destructor +merely unbinds the interrupt as no channel specific DFCs are active at destructor +invocation.
Implement the
Typical
+operations performed by this function are to setup any channel specific hardware
+states and bind the driver’s ISR to the (possibly channel specific) interrupt
+channel, as well as enabling the UART hardware itself. However, since the
Implement the
Received characters are placed in the LDD upstream buffer, either by the interrupt handler routine, or by a polling routine that tests for the existence of received characters (and their associated error state). This is done by calling the LDD’s
The function is called by the LDD when the owning thread opens the device channel. It should complete any UART data I/O enabling that has not already been performed in the
Implement the
Depending on the type of stop requested, i.e. emergency power down or otherwise,
The function is called by the LDD when the owning thread closes the device channel. At this point, the UART device should revert to the state in which all data I/O requests are ignored or failed. To save OS resources, the natural way to accomplish this is to disable UART interrupt processing or disable the UART itself. It may also be necessary to cancel any queued DFCs, so that event notifications are not sent to the LDD after it has requested an end to I/O.
Implement the
Implement the
This operation is explicitly requested by the owning thread through the LDD. A break state requires forcing the line into an abnormal state, which violates standard data format timings (a form of out-of-band signalling); it is detected by the remote device and is usually used to force it to cycle through its baud rate detection modes. Forcing a break state requires that the PDD explicitly set some state in the UART hardware.
Implement the
Implement the
Implement the
It is called by the LDD in response to a user request for the device's capabilities. The PDD must fill a
The object is defined in
The base object,
data rate
word format (i.e. parity, data bits, stop bits)
flow control lines
MODEM control lines
IrDA support.
Each attribute range is passed as a bitfield of possible settings, all assumed to be orthogonal, i.e. each attribute can take any of its values independently of the other attributes.
The attribute bitfields are defined in
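The orthogonality of capability bitfields can be sketched in standard C++. The names and bit assignments below are illustrative, not the actual Symbian capability classes:

```cpp
#include <cstdint>

// Illustrative capability bitmasks. Each attribute occupies its own
// bitfield, and attributes are orthogonal: any combination of settings
// across the different fields may be reported.
enum : std::uint32_t {
    KRate9600   = 1u << 0,  // supported data rates
    KRate115200 = 1u << 1,
    KData7Bit   = 1u << 0,  // supported word sizes (a separate field)
    KData8Bit   = 1u << 1,
};

struct SerialCaps {
    std::uint32_t iRates;     // bitfield of supported data rates
    std::uint32_t iDataBits;  // bitfield of supported data sizes
};

// A PDD fills in the full range of settings it supports.
inline SerialCaps TemplateCaps() {
    return SerialCaps{ KRate9600 | KRate115200, KData8Bit };
}

// A client tests one setting against the advertised range.
inline bool Supports(std::uint32_t field, std::uint32_t setting) {
    return (field & setting) != 0;
}
```

Because each attribute is a separate bitfield, a driver that supports two baud rates but only one word size simply reports two bits in one field and one bit in the other.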
The
The
The
The
The
Implement the
Configuration data is encapsulated by a
This function is called by the LDD when the physical channel is opened. The default configuration is set by the LDD when the
Note that the
Implement the
Implement the
Implement the
Implement the
Implement the
Implement the main data handling routines. These are entirely dependent on the UART hardware interface and cannot therefore be described generally. They are typically driven by the interrupt service routine as asynchronous events from the UART occur (e.g. data received, transmission of data finished, change in handshaking/flow control inputs).
The main ISR is typically driven by an interrupt multiplexed from all the sources in the UART, so the ISR needs to determine which of the interrupt sources requires attention. The most critical source is the data-received state, because received data must be processed before following data overwrites it, so this is usually the signal checked first. To avoid wasting time, the error state of the data is checked at the same time: data that has bad parity or framing must be noted as such. Typically the receive path saves the currently available data into a temporary buffer and queues a DFC to process the data further at a later time, when more client thread context is available (though this is a function of the LDD). The receive ISR just passes the location of the ISR buffer to an LDD function, which queues a DFC to process it.
The transmit path, called when the transmit FIFO contents drop below a programmable level, merely requests more data from the LDD. If none is available, it disables the transmit interrupt so as to prevent further requests when there is no data to be sent. Further data to transmit causes the transmit FIFO empty interrupt to be re-enabled. Hence the start of any transmission is always performed on an explicit start request from the LDD, and transmission then continues under PDD interrupt control until the source data is exhausted.
The status notification path informs the upstream LDD of changes in the input status signals (MODEM and flow control status). The UART can generate interrupts when any input handshake line changes state; the ISR merely reads the current state and queues a handler DFC so that the LDD can be informed. The LDD is responsible for determining which status change has occurred and dealing with it.
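The priority order described above can be sketched as follows. The register bits and names are invented for illustration, since the real layout depends on the UART:

```cpp
#include <cstdint>

// Illustrative interrupt-source bits; a real UART defines its own layout.
enum : std::uint32_t {
    KIntRx     = 1u << 0,  // data received (most critical source)
    KIntRxErr  = 1u << 1,  // parity/framing error on received data
    KIntTx     = 1u << 2,  // transmit FIFO below threshold
    KIntStatus = 1u << 3,  // MODEM/flow-control input changed
};

enum class IsrAction { HandleRx, HandleTx, HandleStatus, None };

// Decide which source to service next, in priority order. A real ISR
// would loop until all pending sources are handled, saving received
// data (noting errors) and queuing DFCs for the LDD.
inline IsrAction NextAction(std::uint32_t pending) {
    if (pending & (KIntRx | KIntRxErr))
        return IsrAction::HandleRx;      // save data first, queue DFC
    if (pending & KIntTx)
        return IsrAction::HandleTx;      // request more data from the LDD
    if (pending & KIntStatus)
        return IsrAction::HandleStatus;  // read inputs, queue status DFC
    return IsrAction::None;
}
```

Checking receive (and receive-error) state before transmit and status mirrors the reasoning in the text: received data is perishable, while a starved transmit FIFO or a stale status line costs nothing but time.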
A Sound Driver PDD must implement physical channels to use the audio hardware of the phone. The main Sound Driver PDD class is derived from
The template defines two classes:
Implement the PDD class constructor for both the playback and record driver channels. This is normally limited to:
initialising any data members that need values other than zero
initialising any DFC call-backs added.
Access to hardware has the potential to fail and so should be deferred to the second stage constructor
Implement the PDD class destructor for both the playback and record driver channels. The destructor must release any hardware and Symbian platform resources that have been allocated to the driver. For example, this might include:
unbinding ISRs
cancelling private DFCs
closing DMA channels
deleting any mono-to-stereo conversion buffers
The template versions of this function delete each DMA request object created in the PDD second stage constructor
Implement a PDD class second stage constructor for both the playback and record driver channels. In the template version, the function
binding an ISR to an audio-related interrupt
opening a DMA channel
allocating a mono-to-stereo conversion buffer
The template versions of this function include code to set up a DMA channel. This involves opening a DMA channel in the appropriate direction for the driver channel and then allocating a set of DMA request objects. However, to use this implementation you must supply the missing platform specific information:
Provide values for the maximum number of DMA requests outstanding on the DMA channel. These values determine the number of separate DMA request objects allocated for the DMA channel, and so the number of transfer fragments that the PDD can accept from the LDD at any time. See
Set the appropriate DMA values for your device within the file
Set up the DMA channel information for the device. This includes the following members of
Make sure that the template version of the
The supplied implementation is as follows:
Many requests are executed in the context of a kernel-side thread. Rather than assign a kernel thread for the driver in the LDD, the Sound Driver allows the PDD to specify the DFC thread, returned via the
The default implementation for the record and playback driver channels returns a pointer to the DFC queue created by the PDD factory class.
See also
Implement the
The supplied implementation is as follows:
The PDD must initialise the
Values for the following data members must be supplied by the PDD:
The data member
See
Implement the
The supplied implementation is as follows:
Values for the following variables must be supplied by the PDD:
Many of the attribute ranges are passed as bit settings, and can assume all the values independently of one another.
The following is a portion of the
The PDD maintains a
The data member
Implement the
The supplied implementation is as follows:
This function is called each time the LDD alters the audio configuration of the channel. For example, after calling
The value returned by
If the PDD has to employ mono-to-stereo data conversion using a conversion buffer when configured in mono mode, then it needs to return the value that corresponds to the size of the conversion buffer each time it is configured in mono mode. See
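As an illustration of why the buffer must be sized for the converted data, a minimal mono-to-stereo expansion in standard C++ (not the actual driver code) might look like this:

```cpp
#include <cstdint>
#include <vector>

// Duplicate each 16-bit mono sample into left and right channels.
// The conversion buffer must therefore be twice the size of the mono
// data for a given transfer, which is why a PDD using this scheme
// reports the conversion buffer size whenever it is configured in
// mono mode.
std::vector<std::int16_t> MonoToStereo(const std::vector<std::int16_t>& mono) {
    std::vector<std::int16_t> stereo;
    stereo.reserve(mono.size() * 2);
    for (std::int16_t s : mono) {
        stereo.push_back(s);  // left channel
        stereo.push_back(s);  // right channel
    }
    return stereo;
}
```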
Implement the
The supplied implementation is as follows:
This function initialises the codec device and any associated controller hardware that allows the CPU to communicate with it. However, at this stage only basic initialisation of these hardware components is required, as the specific audio configuration has not yet been specified, and data transfer has not yet started.
If the PDD supports both record and playback driver channels then most, if not all, of this hardware initialisation is common to both channels. It may be necessary to ensure that such initialisation on one channel cannot interfere with the other. For example, when calling
Implement the
A configuration buffer in a packaged
The PDD must read and locally save the contents of the
It is not necessary to check for configurations requested by the LDD that are not supported by the audio hardware device if the
The PDD sets up the audio hardware device according to the audio configuration specified, if required. Some, if not all, of this hardware configuration may be deferred until data transfer is started, which is when
If the PDD has to employ mono-to-stereo conversion of data using a conversion buffer while the audio configuration is set to mono, then the memory for the conversion buffer should be allocated at this time.
Implement the
The supplied implementation is as follows:
The PDD must first convert the volume/record level information specified into a form which can be used to program the hardware. This may require converting the value from a gain factor to an attenuation factor and/or applying a scaling factor. The LDD detects situations where the client specifies a record level/volume that is already in effect, and in this case does not unnecessarily call the PDD.
The PDD may opt to set up the audio hardware device within this function or it may defer this until data transfer is started with the function
Implement the
The supplied implementation is as follows:
This function performs any configurations of the audio hardware device that were deferred from the
If the PDD supports both record and playback driver channels then it may be necessary to ensure that such hardware configuration on one channel cannot interfere with the other.
Implement the
Once transfer has been started by calling
The template version for the record driver channel contains the following code:
The first step is for the PDD to check that it has the capacity to accept a further transfer. For example, check that the PDD has a DMA request object that is free and that the DMA controller has the capacity for another transfer to be queued. If the PDD does not have the capacity to accept another transfer then it should immediately return
Otherwise, the PDD can start a DMA transfer. To do this it must acquire a DMA request object. The class
The data member
The record PDD class owns an array of DMA request objects:
It also owns the data member
Next the function
If fragmentation is successful then the request object is queued on the DMA channel
The final part of this function requires platform specific code to start the audio hardware device transferring data. This needs to be executed for each fragment transferred, or just for the first fragment queued following the
Once the transfer is complete, either successfully or with an error, the DMA framework executes the static DMA callback function,
This function receives two arguments:
the result of the transfer from the DMA framework
an argument supplied to the DMA framework when the request object was created
In this specific case, the argument type is a pointer to the DMA request object, and allows the retrieval of the transfer ID and the transfer size. In the template driver version, an unsuccessful transfer returns an error value of
From the callback function, the record PDD function is called to decrement the count of transfer fragments outstanding and to inform the LDD of the completion of the transfer.
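The capacity check and fragment counting described above can be sketched in standard C++. This is illustrative bookkeeping only, not the template driver's actual classes:

```cpp
#include <cstddef>

// Illustrative bookkeeping for queued DMA transfer fragments. The PDD
// accepts a new fragment only while it has a free request object, and
// decrements the outstanding count from the DMA completion callback.
class FragmentTracker {
public:
    explicit FragmentTracker(std::size_t maxRequests) : iMax(maxRequests) {}

    // Called when a transfer fragment arrives from the LDD:
    // false means "no capacity, the LDD must retry later".
    bool TryAccept() {
        if (iOutstanding >= iMax)
            return false;
        ++iOutstanding;
        return true;
    }

    // Called from the DMA callback when a fragment completes.
    void Complete() {
        if (iOutstanding)
            --iOutstanding;
    }

    std::size_t Outstanding() const { return iOutstanding; }

private:
    std::size_t iMax;
    std::size_t iOutstanding = 0;
};
```

The maximum here corresponds to the number of DMA request objects allocated in the second stage constructor: one outstanding fragment per request object.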
Implement the
The PDD must reverse any operation performed on the audio hardware device as part of
The template version for the record driver channel contains the following code:
Implement the
When pausing playback, there is normally some way to temporarily stop the codec and pause the DMA channel so that playback can later be resumed from the next play sample.
Pausing record requires a different implementation. All active transfers must be aborted. When this has been achieved, the PDD must report back to the LDD how much data has been received for the fragment that was actively being transferred when the abort took place, by calling
The supplied template implementation is as follows:
Implement the
The template version for the record driver channel contains the following code:
To resume playback, it is normally necessary to re-start the codec and resume the DMA channel in order to restart playback from the next play sample.
To resume record, all active transfers should have been aborted when the device was paused with the function
There is no need for the PDD to perform any state checking, as this is already performed by the LDD; for example, checking that the device is not already paused.
Implement the
The PDD must reverse any operation performed on the audio hardware as part of
Implement the
If custom configuration is not supported, then the PDD should simply return
There are three main parts: the user side interface to the
The class
the
The class
the
The class
The following diagram illustrates the Sound Driver architecture. The interactions between the user side and kernel side classes are shown.
Thrashing is an undesirable state in which most of the processes in the system are busy paging in memory and none of the threads are doing useful work. This results in unacceptable system performance.
These topics are useful background information on thrashing:
Paging
Virtual memory
The signs of thrashing in demand paging are:
The system performance rapidly becomes unacceptable, since no threads are doing useful work. The device becomes unresponsive or appears to 'hang'.
Paging activity logs show that the same set of pages is paged in and out many times over the course of a use case.
Paging activity logs show long periods where many threads are waiting for pages to be paged in.
There are long periods of null thread activity.
The following are means of preventing thrashing:
Increase the size of the paging cache. This reduces page faults and hence the need to page in memory.
Mark the code or data involved as unpaged. For example, if a 20MB buffer is actively used throughout a use case and discarded afterwards, there is little point in making it paged, because it would always need to be in the paging cache.
Reduce the working set size so that it fits into the paging cache; for example, instead of running four activities concurrently, serialize them.
The Universal Serial Bus (USB) is a high speed data transport mechanism that uses a 4-wire cable attached to a USB Host to transfer signals and bus power to low cost peripherals (clients). Theoretically, up to 127 devices (such as keyboards, video cameras, and so on) can share the bus, although fewer than four devices is more typical.
There are two versions of the USB standard in use: V1.1 and V2.0. The following table shows the speeds supported by the different versions:
USB Speeds
As the table shows, USB V2.0 adds support for the higher speed of 480 Mbit/s.
The USB system consists of a single USB host controller (the USB Host) connected to a number of USB peripheral devices. The host is the bus master, initiating all transactions over the bus.
A client device is a USB peripheral device. It is connected to a USB Controller and responds to transactions addressed to it from the Controller. The Universal Serial Bus Specification 1.1 describes how a USB Host sends commands to a peripheral and how data can be transferred from and to a Client. The main points are:
USB is a Host polled bus. This means that a peripheral cannot send a data packet unless the Host has requested it. The request must either be accepted, and the data packet sent within a given time period, or refused.
Every peripheral attached to the USB is allocated a unique device address. The Host uses the device address to direct its communication to a particular device. In fact, the Host sees the peripheral as a collection of endpoints. Each endpoint is an address within the peripheral to which the Host sends packets or from which the Host receives packets.
Endpoints are grouped together by the peripheral into Interfaces. The Host regards each peripheral as a set of functions and communicates with each function via its Interface. Simple devices, such as mice, contain only one function; other devices may contain more than one function, e.g. a monitor may contain speakers and a microphone.
Each function requires its own driver at the Host end, which communicates with the function within the peripheral using a USB class driver or vendor defined protocol. The function informs the Host what protocol it understands via its Interface.
The USB Client Driver supports:
multiple USB interfaces, i.e. it supports licensee products that implement multiple USB interfaces simultaneously
alternate USB interfaces
a single USB Configuration only; the API does not provide a mechanism for specifying more than one USB configuration
extended standard endpoint descriptors
reading and modifying standard descriptors
setting class specific interface and endpoint descriptors
remote wakeup signalling
reading device directed Ep0 (Endpoint Zero) traffic.
There are four endpoint types corresponding to the four transfer types:
Control endpoints
Bulk endpoints
Interrupt endpoints
Isochronous endpoints.
Control endpoints
These are serially bi-directional. This means that they can send and receive data, but not simultaneously.
Every peripheral has at least one control endpoint. Although other control endpoints are possible, Symbian platform only allows Endpoint zero (Ep0) as the control endpoint. Ep0 is a shared resource, which must always be available.
Ep0 is vital for the operation of the device. Its primary function is to act as the control endpoint for the Host. When the device is first connected to the Host, a process known as enumeration must occur, and this takes place through Ep0. During enumeration, the client tells the Host what kind of device it is and what classes it supports. For the purpose of writing client drivers, enumeration is complete when the Host moves the client into its configured state. The Host moves the client into its configured state when a Host driver is available to communicate with the client.
Ep0 needs handling with some sensitivity. Every request arriving on Ep0 must be handled either by the Controller or by a client class driver. Each request must be moved through the data phase, even if there is no associated request data, so that the status phase may begin. It is imperative that this happens, otherwise endpoint zero will be unable to accept any further requests. This could result in the Host stopping communication with the device.
Bulk endpoints
These are unidirectional. They can have the IN or the OUT direction.
The IN direction means sending packets to the Host. The OUT direction means receiving packets from the Host.
Bulk endpoints can supply a maximum of 64 bytes of data per packet, but such transfers have no guaranteed bus bandwidth.
Interrupt endpoints
Interrupt endpoints are similar to Bulk endpoints but, in addition, the endpoint descriptor specifies how often the Host should poll the endpoint for data. The polling frequency may be anything from 1ms to 255ms.
Isochronous endpoints
Isochronous endpoints can deliver up to 1023 bytes of data per packet, but there is no retry capability if an error is detected. Like Interrupt endpoints, they have an associated polling frequency, which is fixed at 1ms.
Two numbering schemes are used within the USB implementation. There is also the external USB address numbering scheme.
Virtual endpoint numbers
Applications or class drivers use virtual endpoints when communicating with the USB client driver. These numbers are used when exchanging data over the USB.
Virtual endpoints range in value from 1 to
For applications that implement a specific USB device class, and which need access to endpoint-0, virtual endpoint-0 is available. This endpoint is special because it does not belong to any specific interface. It is always available and, at the interface to the USB client driver, is the only bi-directional endpoint.
Implementation note
Within the platform independent layer, virtual endpoint numbers are represented by instances of the internal class
Physical endpoints
Physical (or 'real') endpoints are used at the interface between the platform independent layer and the platform specific layer within the USB client controller. They represent endpoints physically present on a given device.
Implementation note
Within the platform independent layer, physical endpoint numbers are represented by instances of the internal class
To simplify array indexing, the numbering scheme uses "transformed" USB endpoint addresses. We represent each channel as a separate endpoint, the RX (OUT) channel as
Canonical endpoint numbers
The two layers of the USB client controller, i.e. the Platform Independent layer and the Platform Specific layer, use a numbering system known as canonical endpoint numbers when communicating with each other. The physical endpoint numbers are never used explicitly.
This system is based upon the physical endpoint number, but takes the direction of the endpoint into account: the canonical endpoint number is twice the physical endpoint number, plus one if the endpoint direction is IN, and plus zero if the endpoint direction is OUT.
This means that canonical endpoint numbers fall into a range from 0 to 31.
For example, Endpoint 0 (Ep0) is represented by 2 canonical endpoint numbers:
0, representing Ep0(OUT)
1, representing Ep0(IN).
Converting between Endpoint number representations
It is possible to convert between the Endpoint Address representation and the Canonical Endpoint Number. The following text explains how these values are stored in memory and shows the conversion process:
Endpoint Address
where the numbers 0-7 represent bits 0-7 of the byte containing the +endpoint address.
D = Direction: 0 = OUT (Receive), 1 = IN (Transmit)
P3 P2 P1 P0 represents the physical endpoint number (0-15)
Canonical Endpoint Number
The canonical endpoint number is just the endpoint address rotated left by 1 bit, as shown below:
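The rotation can be expressed directly in standard C++. This is a sketch of the stated rule, not code from the USB client controller:

```cpp
#include <cstdint>

// Canonical number = endpoint address rotated left by one bit within a
// byte: the direction bit (bit 7) moves to bit 0 and the physical
// endpoint number moves up one place, giving 2n for OUT endpoints and
// 2n+1 for IN endpoints.
inline std::uint8_t AddressToCanonical(std::uint8_t addr) {
    return static_cast<std::uint8_t>((addr << 1) | (addr >> 7));
}

// The inverse: rotate right by one bit to recover the endpoint address.
inline std::uint8_t CanonicalToAddress(std::uint8_t canonical) {
    return static_cast<std::uint8_t>((canonical >> 1) | (canonical << 7));
}
```

For example, Ep0(IN) has address 0x80, which rotates to canonical number 1, and Ep1(IN) has address 0x81, which rotates to canonical number 3 (2 × 1 + 1).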
Summary of Endpoint Number Representations
The APIs use the following conventions for representing endpoint number types:
Handles and binary data can be passed to a process at process creation +time using environment slots. This topic describes this concept, and explains +how to use the process APIs for environment slots.
Handles and binary data, in the form of descriptors or integer values, can be passed to a process at process creation time, using what are called environment slots.
Up to 16 separate pieces of information can be passed to a process on creation. For this purpose, a process has 16 environment slots that can contain the information passed to it by the launching process.
Slot 0 is reserved and is never available for general purpose information passing.
The parent (launching) process can only pass information to the child (created) process after the child process has been created. However, it should be done before the child process is resumed; it is an error to try to set environment data in a child process that has been resumed.
A child process can only extract the information from its environment slots once. Extracting information from a slot causes that information to be deleted from the slot.
It is a matter of convention between the parent and child process as to the meaning to be applied to a slot, and the type of data that it is to contain.
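The slot semantics described above (16 slots, slot 0 reserved, extract-once) can be modelled in standard C++. This is a simulation of the concept, not the Symbian process API:

```cpp
#include <array>
#include <optional>
#include <string>
#include <utility>

// Models the extract-once semantics of process environment slots:
// 16 slots, slot 0 reserved, and reading a slot empties it.
class EnvSlots {
public:
    static constexpr int KMaxSlots = 16;

    // Fails for slot 0 (reserved), an out-of-range slot, or a slot
    // that is already in use, mirroring the documented restrictions.
    bool Set(int slot, std::string data) {
        if (slot <= 0 || slot >= KMaxSlots || iSlots[slot])
            return false;
        iSlots[slot] = std::move(data);
        return true;
    }

    // Extraction deletes the data: a second read of the slot fails.
    std::optional<std::string> Extract(int slot) {
        if (slot < 0 || slot >= KMaxSlots)
            return std::nullopt;
        std::optional<std::string> out = std::move(iSlots[slot]);
        iSlots[slot].reset();
        return out;
    }

private:
    std::array<std::optional<std::string>, KMaxSlots> iSlots;
};
```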
To pass a handle, a client-server subsession, or binary data to a child process, the parent process calls
To extract a handle or a client-server subsession, a child process can use the
To extract descriptor data or integer data, a child process can use the
File server session handles and file handles can both be passed to a child process.
The child process adopts the file handle, which must then not be closed by the parent. To use a file handle, the session handle must also be passed.
For security reasons, when sharing a file handle, the parent process should create a separate file server session for the specific purpose of sharing the file handle. If the parent process has other files open with this file server session, the child process can gain access to those files by iterating through all the possible values for the file handle and attempting to
The following two code fragments show code in the parent process and corresponding code in the child process.
General handles derived from
The handle is duplicated when it is stored in the child process’s environment. The parent can close the handle immediately after calling
The following two code fragments show code in the parent process and corresponding code in the child process.
Both 8-bit and 16-bit descriptor data can be passed from a parent to a child process.
Internally, an
The child process retrieves the descriptor data by calling
The following two code fragments show code in the parent process and corresponding code in the child process.
where
Note that the descriptors,
An integer can be passed from a parent to a child process.
The following two code fragments show code in the parent process and corresponding code in the child process.
The parent process is panicked when calling
the parent process is not the creator of the child process
the slot number is out of range, i.e. is not in the range 0 to 15
the slot is in use
the handle is local.
The parent process is panicked when calling
the parent process is not the creator of the child process
the slot number is out of range, i.e. is not in the range 0 to 15
the slot is in use
the length of the data is negative.
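The panic conditions listed above amount to a set of pre-checks, which can be sketched in standard C++ (illustrative only; the real checks are made by the kernel):

```cpp
// Illustrative pre-checks mirroring the documented panic conditions
// for setting descriptor data in a child process's environment slot.
enum class SetDataCheck {
    Ok,
    NotCreator,      // caller did not create the child process
    SlotOutOfRange,  // slot not in the range 0 to 15
    SlotInUse,       // slot already holds data
    NegativeLength,  // descriptor length is negative
};

SetDataCheck ValidateSetData(bool isCreator, int slot,
                             bool slotInUse, int dataLength) {
    if (!isCreator)            return SetDataCheck::NotCreator;
    if (slot < 0 || slot > 15) return SetDataCheck::SlotOutOfRange;
    if (slotInUse)             return SetDataCheck::SlotInUse;
    if (dataLength < 0)        return SetDataCheck::NegativeLength;
    return SetDataCheck::Ok;
}
```

On the real platform a failed check panics the parent process rather than returning an error code; the enumeration here only makes the individual conditions explicit.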
The child process is panicked if the slot number is out of range.
The API functions that extract data from the process environment return
Symbian platform provides access to USB client hardware through its USB client driver.
The following diagram shows the USB client architecture. Devices can provide more than one USB function. To better show the typical use, the diagram shows the components in use with a phone that is configured as a combined WHCM and OBEX peripheral.
There are three main elements:
The user side interface to the USB client driver.
The USB client driver LDD that provides the driver's logical channel. This is hardware independent.
The PDD, or client controller, that manages access to the USB client hardware.
The user side interface to the USB client driver is provided by
The USB client driver provides an interface to the USB client device. The USB client driver is also referred to as a channel to the USB client device; it may also be referred to as the USB logical device driver.
Each user component that requires a connection to the USB client device opens a channel using the user side
A channel supports only one USB interface. A channel (i.e. the USB LDD) can be loaded as many times as needed; the decision is based on the number of interfaces required. This can be done either in the same process or in a different process, depending on the relationship between the interfaces.
If there is more than one channel open on the USB client device, then they can all share Control Endpoint 0 (Ep0). Each channel can make a request on this endpoint. The other endpoints cannot be shared between channels; instead each endpoint is used exclusively by one, and only one, channel.
Each channel can claim up to a maximum of five endpoints in addition to Ep0. Each endpoint claimed on a channel is locally numbered from one to five. This number need not correspond to the actual endpoint number of the hardware interface; it is called the logical endpoint number, and all transfer requests use the logical number for the endpoint. A driver can, however, discover the physical endpoint address of a logical endpoint by requesting the endpoint descriptor.
A channel can have asynchronous requests outstanding on one or more of its endpoints simultaneously; this includes Ep0. As Ep0 can be shared between channels, the underlying USB Controller must manage multiple requests on this endpoint.
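The logical-to-physical endpoint numbering can be sketched in standard C++. The class below is illustrative, not part of the USB client driver API:

```cpp
#include <array>
#include <cstdint>

// A channel claims at most five endpoints besides Ep0, numbered
// logically 1..5; each maps to some physical endpoint address.
class ChannelEndpoints {
public:
    static constexpr int KMaxEndpoints = 5;

    // Claim a physical endpoint for this channel. Returns the logical
    // number (1..5) assigned, or 0 if five endpoints are already held.
    int Claim(std::uint8_t physicalAddress) {
        if (iCount >= KMaxEndpoints)
            return 0;
        iMap[iCount++] = physicalAddress;
        return iCount;  // logical numbers start at 1
    }

    // Transfer requests use the logical number; this lookup mirrors
    // discovering the physical address via the endpoint descriptor.
    std::uint8_t PhysicalAddress(int logical) const {
        return (logical >= 1 && logical <= iCount) ? iMap[logical - 1] : 0;
    }

private:
    std::array<std::uint8_t, KMaxEndpoints> iMap{};
    int iCount = 0;
};
```

The indirection is the point: client code is written against stable logical numbers, while the hardware's endpoint assignment can differ between devices.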
The USB client controller manages access to the USB client hardware on behalf of all USB client drivers. It has a two layer implementation:
a licensee product independent layer that provides an API for the USB client driver. This is often referred to as the Platform Independent Layer, and forms the 'upper' part of the USB physical device driver (PDD).
a layer that interfaces with the hardware. This is often referred to as the Platform Specific Layer, and forms the 'lower' part of the USB physical device driver (PDD). It is this part that needs porting effort.
The Platform Independent Layer contains, as far as possible, the functionality that is defined in the USB specification. This includes the ‘Chapter 9’ device state handling and request processing. The Platform Specific Layer contains only that functionality which cannot be abstracted and made generic, because different USB Device Controller designs are programmed differently and operate differently.
The complete USB client controller (PDD) is an instance of a
The functionality of the Platform Independent Layer is provided by the base class
Shared chunks improve performance by eliminating the need to copy data between different processes. USB client applications gain efficiency because the kernel-side driver shares its chunks directly with the application.
The Baseport Template contains dummy functions. The Baseport Template is designed to boot without supporting any peripherals. Developers should port the individual drivers depending on the hardware used. This guide describes a generic build procedure for ARMv5 based devices.
The baseport code is available at
The following steps describe how to build the Baseport Template for an ARMv5 processor.
Create a file, for example,
Call
Building a device driver is similar to building any other component in Symbian platform. This section discusses the details that are specific to drivers, and assumes that you are familiar with the Symbian platform build tools.
Device drivers can be built either by using console commands (
The
When a target is specified using the keyword
For example, if the target to be built is
To include a driver in a ROM image, the driver's files must be included in the corresponding obey files (
For example, to include a driver in the H4 text shell ROM image, it should be included in
On building the test .inf files, test
After building the drivers, a ROM can be built. Device driver and base developers generally build text shell ROM images for development and testing. These can be built using the
For example, for the H4HRP variant, a text shell ROM image is built using:
This builds a ROM called
To include base test programs in a ROM image, the option
This includes the tests from
The function
This is useful in situations where an interrupt must be cleared explicitly rather than as a side effect of an I/O register access, especially where the clearing is done from generic code such as in the PC card and MMC controllers.
The implementation of this function is completely dependent on the interrupt hardware.
In the template port,
The following four constants defined at the top of the template implementation in
This is information that is returned when calls are made:
user side to
kernel side to
See
The following four constants define the digitizer origin and size in digitizer ADC coordinates. Modify the values for your own digitizer:
In the template port, this is implemented by DTemplatedigitizer::digitizerPowerUp(). Note that this function is not derived from any base class function.
Add code to this function to do these things:
Clear all digitizer interrupts.
Request power resources from the power controller. This will power the device up from sleep mode. Note that power up and power down can be implemented later.
Configure the hardware so that it generates an interrupt when the digitizer is touched.
Make sure that the digitizer interrupt is defined. In the template port, the interrupt id is defined as the constant:
in the file
Make sure that the interrupt controller is configured for the digitizer interrupt. See
This code is executed at startup or whenever the pen is lifted up. It implements a request for an interrupt to be generated when the pen next touches the digitizer. It is called by the platform independent layer:
at startup
when the pen is removed from the digitizer after a pen-up event has been issued.
There are two main cases to deal with: the digitizer is powering down; the digitizer is not powering down.
The digitizer is powering down
The implementation for this case can be left until later. It is discussed in
The digitizer is not powering down
To deal with this case, your implementation needs to:
clear the digitizer interrupt
set up the hardware so that it can detect when the digitizer is touched.
The pen interrupt is now enabled; if the digitizer is now touched, the pen interrupt ISR (interrupt service routine)
In the template port, the ISR includes the statement:
and means that if the
If the
In the template port, the interrupt service routine (ISR) that handles a pen interrupt is implemented by DTemplatedigitizer::PenInterrupt(). Note that this function is not derived from any base class function.
There are two main things to consider here:
You need to add code to the start of the function to decide whether the pen is now up (i.e. removed from the screen panel), or whether the pen is now down (i.e. touching the digitizer panel). To make the decision, you may need to read the appropriate hardware register. The detail depends on the hardware.
If the pen is down, you need to check the value of the configurable constant
If the value is greater than zero, then you do not need to change the code at this time. The existing code just starts a timer to delay the beginning of the collection of samples. Note, however, that you will need to modify the debounce timer callback function
If the value is zero, then you must clear the digitizer interrupt here in this function,
To contribute the timing of pen interrupts as a source of random data for the Random Number Generator (see
In the template port, the initialisation of sampling is implemented in the first half of the DTemplatedigitizer::TakeSample() function. Note that this function is not derived from any base class function.
There are two main things to consider here:
You need to decide whether the pen is up or down. Set the variable
To do this you may need to read the appropriate hardware register; the detail depends on the hardware.
Change the section of code that is executed when the pen is found to be up after a power on, i.e. the block of code:
The block of code needs to do the following:
reset the sample buffer
clear the digitizer interrupt
set up the hardware so that it can detect when the digitizer is touched
enable the digitizer interrupt.
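The four steps above can be sketched against a mock hardware interface; the flag writes below stand in for whatever register accesses the real digitizer hardware requires:

```cpp
// Hypothetical mock of the digitizer hardware; a real port writes to
// memory-mapped registers instead of setting booleans.
struct MockDigitizerHw
    {
    bool iInterruptPending = true;
    bool iPenDownDetectEnabled = false;
    bool iInterruptEnabled = false;
    };

struct SampleBuffer
    {
    int iCount = 5;            // pretend stale samples are present
    void Reset() { iCount = 0; }
    };

// Sketch of the pen-up / power-on reinitialisation path described above.
void ReinitialiseForPenDown(MockDigitizerHw& aHw, SampleBuffer& aBuf)
    {
    aBuf.Reset();                      // reset the sample buffer
    aHw.iInterruptPending = false;     // clear the digitizer interrupt
    aHw.iPenDownDetectEnabled = true;  // detect a touch on the panel
    aHw.iInterruptEnabled = true;      // enable the digitizer interrupt
    }
```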
In the template port, the collection of samples is implemented in the second half of the DTemplatedigitizer::TakeSample() function. Note that this function is not derived from any base class function.
This code is executed while the pen is down, and needs to do the following:
read the hardware for the digitizer samples and put the results into the sample buffer. The sample buffer resides in the platform independent layer.
set up the hardware so that it can detect when the digitizer is touched
schedule the reading of the next sample using a one-shot timer; the time interval is defined by the value of the configurable constant
when a complete group of samples has been taken, tell the platform independent layer by calling
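A minimal model of one sampling pass, assuming a hypothetical group size of four samples:

```cpp
#include <cstddef>
#include <vector>

// Assumed group size; the real value is a configurable constant.
const std::size_t KSamplesPerGroup = 4;

// Hypothetical model of one TakeSample() pass while the pen is down.
// Returns true when a complete group has been gathered, i.e. when the
// platform independent layer should be notified; returns false when
// the next read should instead be scheduled with a one-shot timer.
bool CollectOneSample(int aAdcReading, std::vector<int>& aBuffer)
    {
    aBuffer.push_back(aAdcReading);      // read hardware into the buffer
    return aBuffer.size() >= KSamplesPerGroup;
    }
```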
Tracing note
If the
In the template port, the debounce timer callback function is implemented by the local function timerIntExpired().
If not already done in
If the digitizer generates an interrupt when the pen is lifted, then you need to add code to the pen interrupt ISR to handle this. This is the same function, DTemplatedigitizer::PenInterrupt(), referred to in
The code should:
clear the digitizer interrupt.
set up the hardware so that it can detect when the digitizer panel is touched.
If there is no pen up interrupt by design, then you need to add code into the DTemplatedigitizer::TakeSample() function at the point where the pen is up and the digitizer is in the pen up debounce state (i.e.
At this point, you should have almost all digitizer capability: pen up handling, pen down handling, and the taking of digitizer readings.
This function is called when the digitizer is being powered down.
If the device is powered on, then the function needs to disable the digitizer interrupt.
If the digitizer is in the collect sample state (i.e.
set up the hardware so that the device wakes up from standby if the digitizer panel is touched.
relinquish the request for power resources; this will place the peripheral hardware into a low power "sleep" mode, which is capable of detecting an interrupt.
At this point, the device should be working successfully.
You now need to put the device into a low power "sleep" mode when it is switched off, and to bring it out of "sleep" mode when the digitizer is touched.
Add code to the template port function DTemplatedigitizer::digitizerPowerUp() to request power resources from the power controller. This will power the device up from sleep mode.
Add code to the powering down part of your implementation of
sets up the hardware to wake the device from standby if the digitizer is touched.
relinquishes the request for power resources; this will place the peripheral hardware into a low power "sleep" mode, which is capable of detecting an interrupt.
The platform-specific makefile sets some variables for file and path names and then includes the generic makefile.
The makefile consists of a collection of variables divided into a mandatory set, which must have values assigned, and an optional set.
The values are used by the Symbian platform generic makefile.
Implement your power resource according to the type of resource you wish to support.
Introduction
Power resources controlled by the PRM can be turned on, off and varied with software.
See the
static resources - derived from the
static resources that support dependency - derived from the
dynamic resources - derived from the
dynamic resources that support dependency - derived from the
custom sense resources - when shared, the power level of this resource may be increased or decreased freely by some privileged clients but not by others, which are bound by the requirements of the privileged clients.
Note: dynamic resources and resource dependencies are only supported in the extended version of the PRM. See
The following tasks are covered in this tutorial:
Override the pure virtual functions
The pure virtual functions of
Constructor
Information about the resource, based on the resource's category, must be set in the
Each resource is classified based on:
usage,
operating levels,
latency,
resource sense.
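One plausible way to encode such a classification is as bit flags; this is only an illustrative sketch, not the PRM's actual flag layout:

```cpp
#include <cstdint>

// Hypothetical bit-flag encoding of the four classification axes
// (usage, operating levels, latency, sense); the real PRM defines
// its own resource-information layout.
enum TResourceFlags : std::uint32_t
    {
    EShared      = 1u << 0,  // usage: shared vs single-user
    EMultiLevel  = 1u << 1,  // operating levels: multi-level vs binary
    ELongLatency = 1u << 2,  // latency: long vs instantaneous
    ENegSense    = 1u << 3   // sense: negative vs positive
    };

bool IsLongLatency(std::uint32_t aFlags)
    { return (aFlags & ELongLatency) != 0; }
```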
Below is an example implementation of a derived class constructor
GetInfo()
The PIL uses
The member variables of the
DoRequest()
Below is an example
The DoRequest() implementation should take care of blocking the Resource Controller thread, as operations on a long latency resource take a significant amount of time to complete (in hardware). Usually the request completion is notified either through an ISR (interrupt driven) or through register setting (polling wait) by the hardware.
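A polling-wait completion can be sketched as follows, using a mock status register in place of real hardware:

```cpp
// Hypothetical polling wait for a long latency resource change; a real
// PSL would poll a hardware status register and use kernel wait
// primitives rather than spinning.
struct MockResourceHw
    {
    int iPollsUntilDone;
    bool Done() { return iPollsUntilDone-- <= 0; }
    };

// Returns true if the change completed within aMaxPolls polls.
bool DoRequestPollingWait(MockResourceHw& aHw, int aMaxPolls)
    {
    for (int i = 0; i < aMaxPolls; ++i)
        {
        if (aHw.Done())
            return true;   // hardware reports the change is complete
        // a real implementation would sleep or yield here so that the
        // Resource Controller thread blocks rather than busy-waits
        }
    return false;          // timed out
    }
```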
Below is an example
Create resource dependencies
A resource has a dependency if the state of the resource depends on the state of one or more resources. For example, a clock whose frequency is derived from another clock or voltage is a dependent resource. The PSL usually handles resource dependencies transparently. However, if the resources have public users (clients of the PRM), then these resources should be registered with the PRM as
Each dependency must be assigned a priority value. The priority is used by the PIL when propagating the change and propagating the request for notifications. Each link stemming from a resource must have a unique priority value.
Note: Resource dependencies are only supported in the extended version of the PRM. Only long latency resources are allowed to have dependents and no closed loop dependencies are allowed.
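The "no closed loop" rule can be illustrated with a simple reachability check over a dependency graph; the PIL's actual enforcement mechanism may differ:

```cpp
#include <map>
#include <set>
#include <vector>

// Hypothetical dependency graph keyed by resource id; an edge A -> B
// means resource A depends on resource B.
using TDepGraph = std::map<int, std::vector<int>>;

bool Reaches(const TDepGraph& aG, int aFrom, int aTarget, std::set<int>& aSeen)
    {
    if (aFrom == aTarget) return true;
    if (!aSeen.insert(aFrom).second) return false;   // already visited
    auto it = aG.find(aFrom);
    if (it == aG.end()) return false;
    for (int next : it->second)
        if (Reaches(aG, next, aTarget, aSeen)) return true;
    return false;
    }

// Adding aFrom -> aTo closes a loop iff aTo can already reach aFrom.
bool WouldCreateLoop(const TDepGraph& aG, int aFrom, int aTo)
    {
    std::set<int> seen;
    return Reaches(aG, aTo, aFrom, seen);
    }
```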
Register resource dependencies for dynamic and static resources
Static resources that support dependencies are created and registered with the PRM during resource controller creation time and are derived from
Dependencies between static resources should be established by the PSL before registering these resources with the PIL. Use the
Dynamic resources that support dependency are derived from
Dependencies can be established between a dynamic resource and a static (dependency enabled) resource using the same API. A client can deregister dependencies between a pair of dynamic resources (or between a dynamic and a static resource) using
Change the state of dependent resources
When a state change is requested by a client on any resource in a dependency tree, the PIL must check, before proceeding with the change, that the change is allowed by the resource's dependents and by the resources it depends on. The PIL does this by calling the
This function is called by the PIL prior to a resource change on any of its dependents. The function returns one of these values:
Create custom resources
Clients on a shared resource may have different requirements on the state of a shared resource. The resource sense is used by the PIL to determine whose requirement prevails:
positive sense resources - when shared, can have their value increased without prejudice to their clients,
negative sense resources - when shared, can have their value decreased without prejudice to their clients,
custom sense resources - when shared, may be increased or decreased freely by some privileged clients but not by others, which are bound by the requirements of the privileged clients.
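Resolving the prevailing level for positive and negative sense resources can be sketched like this (a custom sense resource would instead delegate the decision to its custom function):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical resolution of a shared resource's state from its
// clients' requested levels, according to the resource sense.
enum TSense { EPositive, ENegative };

int PrevailingLevel(const std::vector<int>& aRequests, TSense aSense)
    {
    // Positive sense: raising the level hurts nobody, so the highest
    // request wins. Negative sense: the lowest request wins.
    return aSense == EPositive
        ? *std::max_element(aRequests.begin(), aRequests.end())
        : *std::min_element(aRequests.begin(), aRequests.end());
    }
```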
A custom function must be supplied for every
This function sets the custom function for the resource. If a custom function is not supplied for a custom sense resource, the PIL panics during a resource state change. An example of a custom sense resource is a shared domain clock when a domain client uses it as a bit clock.
The values held in TCustomFunction are:
the Id of the client requesting the change,
the Id of the resource on which the change is requested (this allows a single function to handle multiple resources),
the level the client is requesting,
a pointer to the list of the resource's
Custom functions can be set at any time before a resource state change is requested on a resource. Ideally a custom function is set at resource creation time in the resource's constructor.
The decision to allow the requested resource state change is specified in the custom function return value. This is
The function can change the state of the resource to a level other than the one requested and can choose a client other than the passed client to hold the
Supporting dependencies
If the custom sense resource supports dependencies, then use the function pointer
IIC is an abstraction of serial inter-IC buses such as I2C and SPI. It allows serial bus device drivers to be written that do not need to know the specifics of the underlying hardware technology.
Serial inter-chip buses are a class of bus used to transmit data between components of a hardware system. Examples are:
I2C,
SPI
SMBus
Microwire
SCCB
CCI.
These buses are commonly used to transmit commands, control signals and non time-critical data, though other, time-critical, uses are possible. The IIC API is used by developers creating client applications, typically device drivers.
IIC
IIC provides an abstraction of serial inter-IC buses. These buses allow for the exchange of commands and simple non time-critical data between devices (nodes) in a bus configuration. IIC is not designed for high-bandwidth data transfer. IIC is not a bus, but a set of functions and concepts so that device drivers can be written that are independent of the chip-specific implementation of each bus type. The Platform Independent Layer (PIL) specifies and implements the functions that are available to device drivers, and the SHAI implementation layer implements any parts of the IIC functions that are hardware dependent.
Bus
A bus, in this case a serial bus, is effectively one or more wires along which data or clock signals can be sent. More than one device/node can be attached to a bus. This means that the data on the bus must identify which node should receive that data. One node will be designated as the master node, which is responsible for initiating and terminating the data transfer on the bus. See below for more on nodes, master and slave nodes, configuring the bus etc.
Clients
Clients are applications/device drivers that use IIC to send commands and basic data over a serial bus. Clients are typically device drivers for devices such as digitizers, a built-in camera or the real time clock.
Nodes
Each device on the serial bus is a node. A particular node can send or receive commands and data, or can both send and receive commands and data. On each bus, one of the nodes is the phone/handset device, which is the one our device driver will be using to send commands onto the bus and to receive commands from the bus.
Master - a serial bus node that is always responsible for initiating and terminating the exchange of commands/data and for synchronizing the data transfer (clocking). A master node acts on behalf of clients of this bus. For example, if a client wants to send commands down a serial bus to a device, the device driver will request that the master initiate the command transfer. One node on each bus must perform the role of Master.
Slave - each slave node sends or receives commands under the control of the master node. A number of slave nodes can be present on a single bus. Only one slave node can be addressed by a master at one time. A slave must be addressed by a master before it is allowed to transmit on the bus. A slave is usually associated with one or more functions. Slave nodes sometimes drive the bus but only in response to instructions from the master.
The role of master may be exchanged between nodes. For example, in I2C, one or more nodes can perform the role of a master, but only one can be the active master at any one time. In IIC, this is supported by a 'MasterSlave' type, which can alternate between the two roles.
A Transfer is defined as a single exchange involving data flowing in one direction, from the master to a slave, or from a slave to the master.
A Transaction comprises a list of transfers, in both directions.
A basic transaction is half duplex (transfers in both directions, but only one at a time).
Full duplex transactions (simultaneous transfers in both directions) are enabled for buses that support them, such as SPI.
A transaction is a synchronous operation and takes control of the bus until the list of transfers in that transaction is complete. However, the client can start the transaction with a synchronous call (which waits for it to complete) or with an asynchronous call (the client thread continues while the transaction is processed; at the end of the transaction a callback function is called to inform the client that the transaction is complete).
A master node initiates a transaction by addressing a slave node; this establishes the two ends of the transaction. The transaction continues with data being exchanged in either direction. The transaction is explicitly terminated by the Master.
Transactions may be executed either synchronously or asynchronously.
Asynchronous execution requires the client to provide a callback. The client must wait for the callback to be executed before accessing the transaction's objects (buffers, transfers and the transaction itself).
For synchronous execution, the client thread is blocked until the transaction processing is complete. This means that the client may access the transaction's objects as soon as the client thread resumes.
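The synchronous/asynchronous distinction can be modelled with a deferred-work queue; this is a single-threaded illustration, not the real IIC API:

```cpp
#include <functional>
#include <queue>

// Hypothetical single-threaded model: the async path defers the
// transaction and its completion callback until the "scheduler" runs.
std::queue<std::function<void()>> gPending;

int ProcessTransaction(int aTransferCount)
    { return aTransferCount; }  // pretend all transfers completed

// Synchronous: the caller blocks (here, runs inline) until complete.
int QueueTransactionSync(int aTransferCount)
    { return ProcessTransaction(aTransferCount); }

// Asynchronous: returns immediately; the callback fires on completion.
void QueueTransactionAsync(int aTransferCount, std::function<void(int)> aCb)
    {
    gPending.push([aTransferCount, aCb]
        { aCb(ProcessTransaction(aTransferCount)); });
    }

// Drain the queue, as the channel's processing thread would.
void RunPending()
    { while (!gPending.empty()) { gPending.front()(); gPending.pop(); } }
```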
Transaction Preamble
The transaction preamble is an optional client-supplied function that is called just before a transaction takes place.
For example, the transaction preamble may perform a hardware operation required on the slave device in order for it to handle the transaction. This could be selecting a function on a multi-function device, or selecting a set of internal registers.
The client-supplied transaction preamble function shall not:
Spin.
Block or wait on a fast mutex.
Use any kernel or base port service that does any of the above. For example, alloc/free memory, signal a DMutex, complete a request, access user side memory.
An extended/multiple transaction (“multitransaction”) is formed from a chain of transactions, and appears to the client to be a single high-level transaction. An extended transaction should be used when the amount of data that is to be transferred (how many transfers) is not known in advance.
The next transfer in the chain is selected by the Extended Transaction callback. The transaction may be dynamically composed so that additional transfers are added to the transaction while transfers closer to the start of the transaction are being processed.
For example, an extended transaction could consist of a write transaction followed by a number of read transactions. The reason for making this a single extended transaction is that it prevents other clients performing a read transaction after the initial write transaction and so stealing the data that the client is expecting. Another example is where the multiple transaction consists of a read operation followed by several write operations. If another client can write data after the read, then the slave buffer may not have room for the subsequent write operations.
An application's ASIC may support a number of bus modules of different interface standards. Each bus module for a given interface standard may support more than one physical connection. For example, a particular ASIC might have two I2C physical connections and one SPI physical connection. So to set the master node on one of the I2C connections, it must be possible to identify which physical bus to use, which is done by allocating a 'channel number' to a particular node on each connection. That node is the one that the IIC controller or device driver talks to.
The SHAI implementation layer for each bus standard (I2C, SPI etc.) assigns unique channel number identifiers.
The IIC controller keeps track of the channels. When a client wants to send a command to a particular piece of hardware/function (a node), it asks the controller and passes the channel ID. The controller checks that the operation is allowed and forwards the request to the channel, or rejects the command if it is not allowed; for example, if a slave operation is requested on a master channel.
For application processors that possess IIC channels which may be used in a shared manner, the controller provides functionality to negotiate access between simultaneous requests from a number of client device drivers.
If a channel is intended to be dedicated to a single client, the controller is not necessary. In this case, the client device driver can access the channel interface directly. The channel is created and maintained by the client, independently of the IIC controller.
Transfers
A transfer is implemented as a buffer containing the data to be transmitted and information used to carry out the transmission, including:
the direction of transmission (read or write from the point of view of the master node),
the granularity of the buffer (the width of the words in bits), and
a pointer to the next buffer, used when transfers are combined into transactions as a linked list.
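The transfer descriptor described above might look roughly like this hypothetical sketch:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical shape of a transfer descriptor, per the list above;
// the real IIC headers define the actual class.
enum TDirection { ERead, EWrite };

struct TTransfer
    {
    TDirection iDirection;           // read or write (master's view)
    int iWordWidthInBits;            // granularity of the buffer
    std::vector<std::uint8_t> iBuf;  // the data itself
    TTransfer* iNext;                // link to the next transfer
    };

// A transaction is just the head of the chain; count its transfers.
int TransferCount(const TTransfer* aHead)
    {
    int n = 0;
    for (; aHead; aHead = aHead->iNext)
        ++n;
    return n;
    }
```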
The buffer is implemented with an 8 bit boundary but the data being exchanged may be differently structured. The potential conflict is tackled by the configuration mechanism.
Transactions
A transaction is a sequence of transfers implemented as a linked list. This is why transfers have pointers to transfers. Transactions are of two types.
Unidirectional transactions are a sequence of transfers, either all of them read or all of them write.
Combined transactions are a sequence of transfers, some of them read and the others write.
Some buses support duplex transmission, which is simultaneous transfer of data in each direction. The transfers within a transaction take place sequentially, never simultaneously, so that a combined transaction is only ever in half duplex mode. However, it is possible to use the full duplex capabilities of a bus by performing two transactions at the same time. The simplest case of this is two unidirectional transactions in opposite directions. It is also possible to interleave two combined transactions, matching the read and write transfers of each transaction to create a full duplex combined transaction. The IIC platform service API supports this functionality but implementation is a matter for client writers.
A callback is a function to be executed after a transaction in response to a condition called a trigger. A callback is supplied with the result of the information transmitted. Callbacks are of two kinds, master and slave.
When the client is master, a master callback runs on completion of an asynchronous transaction request (synchronous master transaction requests do not have callbacks). Its purpose is to notify the client of completion, since the client will have been performing other tasks during the transaction.
A second kind of master callback is a function called just before a master transaction. The Symbian platform name for a callback of this kind is 'preamble'.
Multitransactions may also be associated with master callbacks and preambles.
Slave callbacks are issued during a transaction. Since slave channels are mainly reactive, they need to be told what to do on receipt of each individual transfer, and this is the purpose of a slave callback. A slave callback object contains more information than a master callback because a slave channel requires more information in order to proceed with a transfer. The information packaged with a slave callback includes:
the Id of the channel and a pointer to the channel object (which contains the actual function to be called),
a pointer to the parameters to be passed to the callback function,
the trigger,
the number of words to be transmitted, and
the number of words to be received.
Client applications use the platform service API to communicate with an IIC bus. The bus API consists of a class representing a bus and contains two sets of functions: the master side API, used when the client is talking to a master channel, and the slave side API, used when the client is talking to a slave channel. A MasterSlave channel provides both sets of functions but returns an error if a master function is used while the channel is in slave mode, and similarly returns an error if a slave function is used when the channel is in master mode.
A client application of a master channel may use the functions of a number of devices on the same bus. A client may also talk to multiple buses over multiple channels. A master channel can also be shared between multiple clients.
The master side API provides functionality to:
queue transactions synchronously,
queue transactions asynchronously, and
cancel asynchronous transactions.
Slave nodes operate at the level of the transfer, not the transaction, and must be told what channel and buffer to use. They act in response to slave callbacks.
The slave side API provides functionality to:
capture a channel,
release a channel,
register a receive buffer,
register a transmit buffer, and
specify a trigger which starts the next transfer.
A channel may also be a MasterSlave channel. A MasterSlave channel enters either master mode or slave mode when certain entry conditions are fulfilled and continues in that mode until certain exit conditions are fulfilled. A MasterSlave channel can never operate in both modes simultaneously.
A MasterSlave channel enters master mode as soon as a transaction is queued. It continues in master mode until all transactions are completed and then exits master mode. While in master mode it accepts no slave API calls.
A MasterSlave channel enters slave mode when a client captures the channel. It continues in slave mode until the channel is released and then exits slave mode. While in slave mode it accepts no master API calls.
The master and slave side APIs both also supply a static extension used by developers to provide additional functionality.
The proprietary variants of IIC technology and the different devices which they support require configuration at the level of the bus and the node. Bus configuration is static and node configuration dynamic.
The static configuration of the bus is specified at design time and executed at build time. It involves designating nodes as master or slave and assigning addresses to nodes. The IIC platform service API encapsulates the bus configuration as a single structured integer called the
The dynamic configuration of the nodes is performed +by the clients. Each client configures its channel at the start of +a transaction, setting parameters relating to the physical node and +to the transaction: speed, endianness, word length and so on.
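Encoding a bus configuration as a single structured integer can be sketched as bit-field packing; the field positions and widths here are invented for illustration, not the real layout:

```cpp
#include <cstdint>

// Hypothetical layout for a single structured configuration word; the
// real field definitions come from the IIC API headers.
std::uint32_t PackBusConfig(std::uint32_t aBusType,   // e.g. 0 = I2C, 1 = SPI
                            std::uint32_t aChannel,   // channel number
                            std::uint32_t aSlaveAddr) // node address
    {
    return (aBusType & 0xFu) << 28
         | (aChannel & 0xFFu) << 20
         | (aSlaveAddr & 0x3FFu);
    }

std::uint32_t BusTypeOf(std::uint32_t aCfg)   { return aCfg >> 28 & 0xFu; }
std::uint32_t ChannelOf(std::uint32_t aCfg)   { return aCfg >> 20 & 0xFFu; }
std::uint32_t SlaveAddrOf(std::uint32_t aCfg) { return aCfg & 0x3FFu; }
```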
There are two timers that must be implemented in the SHAI implementation layer:
Client Timeout
Master Timeout
Client timeout - specifies how long to wait for the client to respond to bus events, such as data received, before telling the SHAI implementation layer to cancel the transaction.
Master timeout - specifies how long to wait for the master to perform the next transfer in the transaction. If this timer runs out, then terminate the transaction by calling the PIL
The serial port driver allows Symbian platform to communicate with the outside world via a serial port.
Serial ports are used for debugging by various tools, for example the EKA2 Run-Mode Debugger, as well as being used on actual phones to implement the IR port.
A means of sending data whereby transmission proceeds one bit after another over a single medium, for example a wire.
Serial communications on Symbian platform use a client/server framework. The client communicates with the serial communications server which, in turn, communicates with the appropriate plug-in. These plug-ins are known as CSY modules.
Demand paging trades off increased available RAM against increased data latency, increased media wear and increased power usage. Overall performance may be increased as there can be more RAM available to each application as it runs.
Demand paging is used to reduce the amount of RAM that needs to be shipped with a device and so reduces the device cost.
Demand paging relies on the fact that most memory access is likely to occur in a small region of memory and not be spread over the entire memory map. This means that if only that small area of memory is available to a process (or thread) at any one time, then the amount of RAM required can be reduced. The small area of memory is known as a page. The working RAM is broken down into pages, some of which are used directly by the processor and the rest are spare. Since Symbian platform is a multi-tasking operating system, there is more than one page in the working RAM at any one time. There is one page per thread.
There are three types of demand paging: ROM paging, code paging and writable data paging. All of these types of demand paging have some features in common:
A free RAM pool
Working RAM
Paging Fault Handler
A source of code and/or data that the processor needs to access.
With demand paging, the processor requires content to be present in the working RAM. If the content is not present, then a 'paging fault' occurs and the required information is loaded into a page of the working RAM. This is known as "paging in". When the contents of this page are no longer required, the page returns to the pool of free RAM. This is known as "paging out".
The difference between the types of demand paging is the source that is to be used:
For ROM paging, it is code and/or data stored under the
For code paging, it is code and/or data stored using
For writable data paging, it is the writable data stored in RAM, for example user stacks and heaps. In this case, the data is moved to a backing store.
The diagram above shows the basic operations involved in demand paging:
The processor wants to access content which is not available in the working RAM, causing a 'paging fault'.
The paging fault handler starts the process for copying a page of the source into a page of RAM.
A page is selected.
The contents of the source are loaded into the page, and this becomes part of the working memory.
When the contents of the page are no longer required, the page is returned to the free RAM pool.
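The numbered steps above can be modelled as a toy pager with a fixed number of frames and least-recently-used page-out; the real kernel's paging algorithm is more sophisticated:

```cpp
#include <cstddef>
#include <list>

// Toy model of the page-in / page-out cycle: a fixed pool of frames,
// a working set, and eviction of the least recently used page.
struct TinyPager
    {
    std::size_t iFrames;   // size of the free RAM pool
    std::list<int> iLru;   // working RAM, most recently used at front
    int iFaults = 0;

    explicit TinyPager(std::size_t aFrames) : iFrames(aFrames) {}

    void Access(int aPage)
        {
        for (auto it = iLru.begin(); it != iLru.end(); ++it)
            if (*it == aPage)
                {                    // already resident: no fault
                iLru.erase(it);
                iLru.push_front(aPage);
                return;
                }
        ++iFaults;                   // paging fault: content not resident
        if (iLru.size() == iFrames)
            iLru.pop_back();         // page out: return a frame to the pool
        iLru.push_front(aPage);      // page in the required content
        }
    };
```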
Demand Paging provides the following features:
Reduced amount of RAM required
Improved performance in applications that involve loading a large amount of code into RAM, since the number of memory access operations required has been reduced
Improved stability in Out Of Memory (OOM) conditions.
The following are known limitations of Demand Paging:
The access time cannot be guaranteed.
There is an orders-of-magnitude difference in access time between an access where no page fault occurs and one where a page fault occurs. If a page fault does not occur, then the time taken for a memory access is in the tens to hundreds of nanoseconds range. If a page fault does occur, then the time taken for a memory access could be in the millisecond range.
Device drivers have to be written to allow for data latency when a page fault occurs.
The
What the function should do
On non-debug bootstrap builds, where
On debug bootstrap builds, where
It may be necessary to examine the
This function can be called with the MMU either disabled or enabled. Different I/O port addresses will normally be required in these two cases. To simplify handling of this, the following macro is provided:
Where
On the direct memory model, the
Note that
Entry conditions
What the function should do
This function should return a pointer to a list of RAM banks that are present.
The list should be a sequence of two-word entries. Each entry has the following format:
The list is terminated by an entry that has a zero MAXSIZE.
Of the 32 flag bits, only one is currently defined; all undefined flags should be zero. Flag 0, which is bit 0 of the first word, is the
If clear, the specified physical address range may or may not contain RAM, and may be only partially occupied. In this case generic bootstrap code will probe the range to establish if any RAM is present and, if so, which parts of the range are occupied. This process is accompanied by calls to
If set, the specified physical range is known to be fully occupied by RAM, and furthermore, all memory controller setup for that range has already been completed. In this case
Note that all banks declared in this list and subsequently found to be occupied will be treated as standard RAM available for any purpose. For this reason, internal RAM or TCRAM banks are not generally included in the list.
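Walking such a zero-terminated two-word bank list, with flag 0 in bit 0 of the first word, can be sketched as follows; the exact entry layout here is assumed for illustration, not taken from the bootstrap headers:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical in-memory form of the two-word bank entries: word 0
// is assumed to hold the base address with flag 0 in its low bit
// (bases assumed well aligned so bit 0 is free); word 1 holds MAXSIZE.
const std::uint32_t KBankFlagNoProbe = 1u; // flag 0: range fully occupied

struct TRamBank
    {
    std::uint32_t iBase;
    std::uint32_t iMaxSize;
    bool iNeedsProbe;   // true when generic code must probe the range
    };

std::vector<TRamBank> ParseBankList(const std::uint32_t* aList)
    {
    std::vector<TRamBank> banks;
    for (; aList[1] != 0; aList += 2)   // zero MAXSIZE terminates the list
        banks.push_back({aList[0] & ~KBankFlagNoProbe,
                         aList[1],
                         (aList[0] & KBankFlagNoProbe) == 0});
    return banks;
    }
```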
Entry conditions
the MMU is disabled.
Exit conditions
What the function should do
The function should do any required setup for each RAM bank.
This function is called twice for each RAM bank that does not have the
The first call is prior to RAM bus width detection, and the second call is after width detection.
This function is only required if the system has a complex and very variable RAM setup, for example several banks of potentially different widths. Typical systems have one or two banks of RAM of known width, and all memory controller initialisation can be done in the
Entry conditions
the MMU is disabled.
Exit conditions
Other than flags, no registers should be modified.
What the function should do
This function should return a pointer to a list of XIP ROM banks that are present. It is not called if the bootstrap is found to be running in RAM; in this case it is assumed that all XIP code is in RAM.
The list should be a sequence of four-word entries. Each entry has the following format:
The list is terminated by a zero-valued four-word entry.
Only the first, second and fourth words of each entry are actually used by the rest of the bootstrap. The third is there mainly to support autodetection schemes.
The
Entry conditions
the MMU is disabled.
Exit conditions
What the function should do
The function should do any required setup for each ROM bank.
It is called once immediately after the call to
This function is intended to support autodetection of the system ROM configuration. For example, the first call with
Entry conditions
On the first call:
On subsequent calls:
the MMU is disabled.
Exit conditions
For calls where
If the entry size for a bank is set to zero, then that bank +is assumed not to exist, and is removed from the list of ROM banks.
Registers
What the function should do
This function should return a pointer to a list of required I/O mappings.
The list should be a sequence of one-word and two-word entries, terminated by a zero-filled word. The entries in the list are defined using the
In the template port, this is implemented in
The pointer in the boot table to this list of I/O mappings is defined by the symbol
To support Level 2 cache (either L210 or L220), you need to add the base address of the Level 2 cache controller to the list of hardware banks. For example:
See also the
Entry conditions
the MMU is disabled.
Exit conditions
What the function should do
The function reserves physical RAM if required.
It is called before the bootstrap's memory allocator (
There are two methods available for reserving physical RAM:
Use the generic bootstrap function
The following entry conditions apply when calling
Write a list of reserved RAM blocks to the address passed in R11. The list should consist of two-word entries, the first word being the physical base address of the block and the second word the size. The list is terminated by an entry with zero size. The listed blocks will still be recognised as RAM by the kernel but will be marked as allocated during kernel boot. The blocks in the list should be multiples of 4K in size and aligned to a 4K boundary.
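Building the reserved-block list described in the second method can be sketched as below. The helper names are invented for illustration; only the list format (two-word entries, zero-size terminator, 4K alignment) comes from the text above:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of writing the reserved-RAM list described above: two-word
// entries (physical base, size), terminated by an entry with zero size.
// Blocks must be multiples of 4K in size and aligned to a 4K boundary.
// AppendReservedBlock/TerminateList are illustrative names, not real
// bootstrap symbols.
constexpr uint32_t KPageSize = 0x1000;

bool AppendReservedBlock(std::vector<uint32_t>& list, uint32_t base, uint32_t size) {
    if (base % KPageSize != 0 || size % KPageSize != 0 || size == 0)
        return false;              // reject unaligned or empty blocks
    list.push_back(base);          // first word: physical base address
    list.push_back(size);          // second word: block size
    return true;
}

void TerminateList(std::vector<uint32_t>& list) {
    list.push_back(0);             // base of terminator entry (ignored)
    list.push_back(0);             // zero size terminates the list
}
```

In the real bootstrap the list would be written directly to the address passed in R11 rather than into a container.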
If both methods are used simultaneously, then the
Entry conditions
the MMU is disabled.
Exit conditions
The function can modify
What the function should do
The function should return the value of a run-time configurable boot parameter. The parameter is identified by one of the values of the
Typically, the function is implemented as follows:
This implementation calls the generic function
Note that the address of this parameter table is also passed to the
Each entry in the boot parameter table is identified by a
Entry conditions
R0 contains the number identifying the required parameter; see the
the MMU is disabled.
Exit conditions
If the parameter value is specified, it should be returned in
If the parameter value is not specified, the
Registers
What the function should do
The function should do any final +setup required before booting the kernel. It is called at the end of the bootstrap, +just before booting the kernel.
The function should:
Map any cache flushing areas required by the CPU.
Processors that flush the data cache by reading dummy addresses or by allocating dummy addresses into the cache (e.g. StrongARM and XScale processors) will need a main cache flush area. If the processor has an alternate data cache (e.g. the StrongARM and XScale mini data cache), a second flush area may be required for it. On the moving and multiple memory models the linear addresses used for these flushing areas are fixed
(
Populate any super page or CPU page fields with platform-specific information used by the variant, for example addresses of CPU idle routines in the bootstrap. Routines can be placed in the bootstrap if a known alignment is required for the routine.
Entry conditions
the MMU is disabled.
Exit conditions
Registers
What the function should do
Allocates RAM during boot.
This function is called at various points to initialise the memory allocator, or to allocate physical RAM. A generic implementation is provided for this function and this will normally be sufficient, so a typical implementation for this function reduces to:
in the boot table. For systems with no MMU, the function name should be changed to
It is advisable not to override this function.
Entry conditions
R2 contains the type of allocation required; this is defined by one of the values of the
R4 contains the size to allocate (as a log to base 2) for allocations of type
the MMU may be enabled or disabled.
The type of allocation required is defined by the value of the enumerator
Exit conditions
What the function should do
This function is called at various points to translate a standardised permission descriptor along with the size of mapping required to the PDE permission/attribute bits needed for such a mapping. A standardised permission descriptor can be either an index into the boot table (for any of the standard permission types) or the value generated by a
A generic implementation is provided for this function and it should not be necessary to override it. A typical implementation for this function then just reduces to:
in the boot table.
Note that for systems with no MMU, this function is +not required.
What the function should do
This function is called at various points to translate a standardised permission descriptor along with the size of mapping required to the PTE permission/attribute bits needed for such a mapping. A standardised permission descriptor can be either an index into the boot table (for any of the standard permission types) or the value generated by
A generic implementation is provided for this function and it should not be necessary to override it. In the boot table, a typical implementation for this function then reduces to:
Note +that for systems with no MMU, this function is not required.
What the function should do
This function is called whenever a PDE or PTE is updated. It performs whatever actions are required to make sure that the update takes effect. This usually means draining the write buffer and flushing the TLB (both TLBs on a Harvard architecture system).
A generic implementation is provided for this function and it should not be necessary to override it. In the boot table, a typical implementation for this function then reduces to:
Note +that for systems with no MMU, this function is not required.
What the function should do
This function is called to enable the MMU and thereby switch from operating with physical addresses to operating with virtual addresses.
A generic implementation is provided for this function and it should not be necessary to override it. In the boot table, a typical implementation for this function then reduces to:
Note +that for systems with no MMU, this function is not required.
If the PDD is to support audio transfer in a single direction, either record +or playback, then a conventional PDD implementation is required. The PDD opens +only a single driver channel and the PDD factory creates either a record or +playback PDD object.
If the PDD is to support audio transfer in both directions then it must be implemented to open two units, one playback unit and one record unit. For each unit the PDD factory must create the appropriate PDD object.
One complication in this configuration is the need to co-ordinate access to the single audio hardware device from the two separate PDD objects: detecting and preventing situations where the handling of a PDD function for the channel in one direction conflicts with the channel setup in the other direction, specifically:
preventing the setup of conflicting audio configurations between the record and playback channels.
preventing the channel in one direction from powering down the audio hardware device while it is being used for data transfer by the other channel.
The solution is to move the code that controls those aspects of audio hardware setup which are shared between the two driver channels into the PDD factory object, as this object is shared.
The porting process focuses on implementing a PDD that supports both record and playback, as this is the most common situation. The template port Sound Driver is set up for this configuration. A PDD that supports audio transfer in a single direction only omits the implementation for the direction not supported.
The following diagram shows the relationship between the Controller and its clients, and also shows aspects of the relationship between the constituent parts. In this case, the client is the MultiMediaCard driver, although in general this could be some other device driver.
The session is used to pass commands either to the entire stack, i.e. a broadcast command, or to individual cards in the stack.
The session provides access to the stack of cards, via a pointer to a stack object, the
Clients of the MMC Controller, such as the MMC Media Driver, use the MMC Controller as follows:
requests a pointer to the stack object, through which the client can obtain pointers to the individual card objects and interrogate the cards, finding out their media type, capacity, and capability (via the card's
creates a session object, an instance of the
initiates the session with the pointer to the stack object:
The client interacts with the stack in the following manner:
it sets the session up with the pointer to the card object, by calling
it sets the session up to do a specific job, such as a MultiMediaCard macro command, or a lower level card command, using the API provided by
it presents the session to the stack for execution, by calling
On completion, the stack calls the client's callback function. The client can then repeat the cycle from step 1 or 2.
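The submit-then-callback cycle above can be modelled with a small mock. All class and member names here (MockStack, MockSession, Engage, RunOne) are invented for this sketch; the real controller API differs:

```cpp
#include <cassert>
#include <functional>
#include <queue>

// Illustrative mock of the client/stack interaction described above:
// the client configures a session, presents it to the stack for
// execution, and is notified via a callback on completion.
struct MockSession {
    std::function<void(int)> callback; // completion callback, gets an error code
    int command = 0;                   // stands in for the configured job
};

struct MockStack {
    std::queue<MockSession*> pending;  // sessions awaiting bus time

    void Engage(MockSession* s) { pending.push(s); } // client submits session
    void RunOne() {                                  // stack executes one session
        MockSession* s = pending.front();
        pending.pop();
        s->callback(0 /* success, like KErrNone */); // completion -> client callback
    }
};
```

After the callback fires, the client can reconfigure the same session object and engage it again, mirroring the repeat-the-cycle step above.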
The client can override the default bus configuration settings, i.e. change the bus clock, enable retries, change time-out values, restore defaults etc., using the session's stack configuration object,
The
When issuing a series of application-specific commands, the client will want to lock the stack, preventing any other client from generating bus activity during this period. This is accomplished by calling
As it may take time for the controller to lock the stack for that session, the mechanism for locking the stack involves submitting a session even though no bus activity results. If this session completes successfully then the stack is locked. The client can then go ahead and invoke its series of commands, configuring that same session object as necessary each time and submitting it.
The SHAI Interface can include two types of services (also known as calls):
SHAI abstraction services (or down calls), which are implemented as part of the adaptation software.
SHAI adaptation services (or up calls), which are used within the adaptation software.
The Device Driver framework supports the following device driver models:
The LDD-PDD model: drivers of this type must be loaded by a user application.
The kernel extension model: drivers of this type are loaded by the Kernel at boot up.
The combined model is a combination of the kernel extension and LDD-PDD models. This is used when the driver must perform some initialisation at boot up.
For each model, the Kernel provides a standard API for the entry points to the driver.
For a standard LDD-PDD driver, the driver entry points are defined by the following macros:
For LDDs, use
For PDDs, use
For a kernel extension, the entry point is defined by the macro
For drivers with a combined model, the entry points are defined by the following macros:
For LDDs, use
For PDDs, use
A driver creates factory objects in the entry point routines.
This page lists the keywords starting from A to C.
BUILDROM only
Performs textual substitution. +All subsequent instances of the two characters ## are replaced with +an empty string.
Note that there is no UNDEFINE facility and +substitutions are applied in an unspecified order.
BUILDROM only
Allows
The use of this keyword is not appropriate +for production devices, but is useful in the development environment +as it increases the chances of producing a ROM in the presence of +build problems.
For example,
This substitutes \ARMI\ for \THUMB\ if a specified source file +cannot be found.
Another example is in localisation support.
Problem suppression allows
rombuild and rofsbuild
Creates an additional filesystem entry, which refers to an existing file.
rombuild only
Specifies the alignment boundary for the file that follows immediately after the
rombuild only
Defines the area in which the executable will be relocated. The specified name must have been previously defined in the
rombuild only
Defines a new relocation area. The area is identified by the specified name. Executable files placed in this area will be relocated so that they run in the address range
The Bootstrap is required to copy the relevant subset of ROM to the run address at boot time. The main purpose of this is to relocate time-critical executables to internal or tightly-coupled RAM.
rombuild only
Indicates that this is not a Unicode +build.
rombuild and rofsbuild
Files may have the system, hidden, read-only and writable attributes.
File attributes are copied from the source file and are then overridden by the attributes specified by this keyword. Specifying
As this is a ROM, the file cannot be physically modified even +if the read-only attribute is cleared, but it is useful to mark files +as writeable so that copies are made writeable, for example on a CF +card or a RAM file system.
Using the
BUILDROM only
Either selects a compressed MBM file if generating an XIP ROM image, or the original source file if generating a non-XIP ROM image.
This statement translates to
rofsbuild only
Reduces the size to a whole number +of blocks where <block size> defines the granularity.
For example, if
This causes the tools to check the given locations for files +in the specified order:
causes
BUILDROM only
Generates an uncompressed Symbian platform XIP ROM format MBM file from the <source> MBM +file and copies it into the ROM at <dest>.
rombuild only
The file name of the ROM's bootstrap +code, which on ARM CPUs appears at physical address 0x00000000 when +the machine is booted.
rombuild only
A keyword for configuring the behaviour of the Symbian platform mechanism for generating and capturing trace information (
Each trace has a category, which is an 8-bit value. The kernel implements a filter that enables traces in each category to be either discarded (disabled) or passed to the trace handler (enabled). This keyword sets the initial state of that filter, i.e. it indicates whether a trace category is enabled or disabled.
A trace category is one of the
The BTrace keyword takes up to eight 32-bit integers, representing a set of 256 bits. Each bit in the set is associated with a single category: if a bit is set, the corresponding category is enabled; if a bit is not set, the corresponding category is disabled.
The rule for mapping the bits in these eight integers to the
For example, to turn on the trace category
which turns on bit position 9 (counting from zero and starting at the right-hand side of the integer). Note that there is no need to specify the remaining seven integers if they all have zero values, as zero is assumed by default.
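The category-to-bit mapping can be sketched as below. The function name is invented; the mapping (category N sets bit N % 32 of integer N / 32) is an assumption consistent with the bit-position-9 example above:

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <initializer_list>

// Sketch of the 256-bit filter described above: eight 32-bit integers,
// one bit per trace category. Assumes category N maps to bit (N % 32)
// of integer (N / 32), matching the example where category 9 turns on
// bit position 9 (hex value 0x200) of the first integer.
std::array<uint32_t, 8> MakeBTraceFilter(std::initializer_list<int> categories) {
    std::array<uint32_t, 8> words{};           // all categories start disabled
    for (int c : categories)
        words[c / 32] |= (1u << (c % 32));     // enable category c
    return words;
}
```

Enabling category 9 alone yields 0x200 for the first integer and zero for the remaining seven, which by the rule above need not be written out in the OBY file.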
See also
rombuild only
A keyword for configuring the +behaviour of the Symbian platform trace handler.
Use this +keyword to set the initial size for any memory buffer used to store +trace data.
See also
rombuild only
A keyword for configuring the +behaviour of the Symbian platform trace handler.
Use this keyword to set the initial mode of the trace handler. This keyword takes a single integer that specifies the mode. For the default trace handler supplied with Symbian platform, this will be one of the
See also
rombuild +only
This keyword overrides the platform security capabilities of the executable specified in the OBY file. For information on the use of capabilities and the current capability set, see
This keyword specifies the cluster size for a FAT image generated by the ROM tools. The keyword must be specified in the
The value of the cluster size must meet the following conditions:
The cluster size must be between 512 bytes and 32KB (sector size).
For a FAT16 image, the cluster size must be in the range 4101– +65508 bytes.
For a FAT32 image, the cluster size must be greater than 65541 +bytes.
The cluster size must belong to the geometric progression 512, 1024, 2048, 4096, and so on; that is, the series whose first term is 512 and whose common ratio is 2.
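The first and last conditions can be combined into a simple check; since every term of the series 512, 1024, 2048, ... is a power of two, validating membership reduces to a power-of-two test. This sketch (with an invented function name) checks only those two conditions, not the FAT16/FAT32-specific ranges:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the geometric-progression rule above: a valid cluster size
// is 512 * 2^n, i.e. a power of two no smaller than 512, and no larger
// than the 32KB upper bound from the first condition. Illustrative only.
bool IsValidClusterSize(uint32_t size) {
    if (size < 512 || size > 32 * 1024)
        return false;                   // outside the 512B..32KB window
    return (size & (size - 1)) == 0;    // power of two => in the series
}
```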
rombuild only
This keyword specifies the alignment for the executable's code. It indicates that the start address of a code segment must be aligned to a specified value. If the original address of the code segment does not meet the alignment, the whole executable file is relocated to meet it; after the image is loaded in RAM, a block of memory is left unused.
Code alignment is an optimisation that depends on the hardware the ROM is being built for. Setting the code alignment can allow the ROM to be built in such a way that it reduces the work for the CPU when loading the code: the code can be loaded in one pass, which improves performance. If the code is unaligned, loading can take multiple passes.
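The relocation implied by this keyword is standard align-up arithmetic, sketched below for a power-of-two alignment. The function name is invented; this is not the tool's actual code:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the relocation described above: if the code start address
// does not meet the requested power-of-two alignment, the image is
// moved up to the next aligned address, leaving an unused gap between
// the end of the previous item and the new start address.
uint32_t AlignUp(uint32_t addr, uint32_t align) {
    // align must be a power of two for this bit trick to be valid
    return (addr + align - 1) & ~(align - 1);
}
```

The difference between the aligned and original addresses is the size of the unused block mentioned above.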
For example:
Define an
Build ROM with the above
The log file is:
In the log file, we can see that the code start address of
It is unique for
Note: If an
rombuild only
Indicates that a COFF ROM wrapper +is required.
rombuild only
This is only supported when the +<cpu> is ARM and the <compiler> is GCC.
The <mode> +can take one of the following values:
rombuild only
Compresses the resulting ROM image +using the Deflate, Huffman+LZ77 algorithm.
BUILDROM only
Generates a compressed Symbian +platform XIP ROM format MBM file from the <source> MBM file and +copies it into the ROM at <dest>.
rofsbuild only
This tells rofsbuild that the generation of the core image is not required and that the specified core image should be used as the base for generating the extension image.
rombuild and rofsbuild
This keyword sets the page attributes of executables at a global level. It is applicable to all executables in ROM or ROFS partitions. It takes a single argument, which can be one of the following:
For example, the following entry in the Obey file marks +all the executables as unpaged:
rombuild and rofsbuild
This sets a flag in the ROM when it is built, and the loader in the kernel decides a policy for executables that are in the default state (neither marked as paged nor as unpaged). This keyword takes a single argument, which can be one of the possible values listed in
For example, the following +entry in the Obey file instructs the loader not to page the executables +in default state:
Asynchronous request processing is normally +done in a Deferred Function Call (DFC). The second stage of interrupt +handling is deferred to run as a DFC, which runs in a non-critical +context.
Different asynchronous requests can be handled using a single DFC or multiple DFCs. The number of DFCs to be created must be decided when the driver is designed. For example, a driver that handles an asynchronous receive request and a transmit request at the same time would create two DFCs, one each for receive and transmit.
There +are two main types of deferred function call:
standard Deferred Function Call (DFC)
Immediate Deferred Function Call (IDFC).
A DFC is a kernel object that specifies a function to be run in a thread that is processing a DFC queue. A DFC is added to a DFC queue associated with a given thread, where it is cooperatively scheduled with other DFCs on that queue. Queued DFCs are run in order of their priority, and then in the order in which they were queued. When a DFC runs, its function runs kernel-side, and no other DFC in that queue runs until it completes. A DFC can be queued from any context.
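The ordering rule above (priority first, then FIFO within a priority) can be illustrated with a toy model. The names here are invented; the real kernel classes differ:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Toy model of the DFC queue ordering described above: queued DFCs run
// highest priority first, and in FIFO order within the same priority.
// ToyDfc/ToyDfcQueue are illustrative names, not the kernel's classes.
struct ToyDfc {
    int priority;        // higher runs first in this toy model
    std::string name;
};

struct ToyDfcQueue {
    std::vector<ToyDfc> queued;  // held in arrival order

    void Add(int priority, const std::string& name) {
        queued.push_back({priority, name});
    }
    std::vector<std::string> RunAll() {
        // stable_sort preserves arrival order within equal priorities (FIFO)
        std::stable_sort(queued.begin(), queued.end(),
            [](const ToyDfc& a, const ToyDfc& b) { return a.priority > b.priority; });
        std::vector<std::string> order;
        for (const auto& d : queued) order.push_back(d.name);
        queued.clear();
        return order;
    }
};
```

The stable sort is the essential detail: two DFCs of equal priority complete in the order they were queued, as the text requires.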
An IDFC is run as soon +as the scheduler is next run, which is:
Unlike a DFC, an IDFC is not run from a thread context, and its execution time must be much shorter. For these reasons, IDFCs are rarely used directly, but are used in the implementation of the kernel and RTOS personality layers. An important use of IDFCs is in the implementation of queuing DFCs from an ISR context. IDFCs are run with interrupts enabled but with the kernel locked.
DFCs are created using
The DFC is initialized with a DFC callback function, a pointer to +an object to be passed to the callback function, and a priority. It +can also provide the DFC queue to be used by this DFC at the time +of construction.
To run the deferred function, the DFC object must be queued to the DFC queue associated with the kernel thread.
Before adding the DFC to a DFC queue of the thread, it must be associated with the queue (
The DFC callback function is a static function +called when a DFC is executed. A pointer to this function is provided +at the time of DFC object creation. This function implements the deferred +functionality of an asynchronous request, such as reading or writing +data from or to an I/O peripheral. It would then either complete the +request or start another operation.
A queued DFC must be cancelled when cleaning up resources, for example, when closing the channel. The function
Decide on the number of interrupts that exist in the system. If the port is split into a core ASSP layer and a device Variant layer, the core layer should include only those interrupts that exist on the ASSP.
In the core ASSP implementation, declare a static
In the template port, for example, this table is a static member of the
Declare an enum in an exported header file that provides labels for each of the possible
For example, in the template port, the interrupt IDs are defined by the
Contains a collection of developer tips.
The
The function declaration for the
Description
This method returns the number of IO functions that are present on the SDIO card.
Parameters
None
Return value
The function declaration for the
Description
This returns a pointer to the class that carries out the required SDIO function.
Parameters
Return value
The function declaration for the
Description
This method provides support for the validation and the enumeration of the card functions.
This method finds the function that can carry out the required capabilities, and either returns a pointer to it (if it exists) or NULL if it does not. See
Parameters
Return value
The SDIO stack is created by implementing the following functions of the
The Power Supply Unit (PSU) functionality is provided by the
The header file for the SDIO can be found
Device Driver debugging is similar to kernel-mode debugging, as drivers run as part of the Kernel. Debug versions of the drivers can be debugged using debug tools such as Lauterbach through a JTAG interface or an IDE. Other debug tools, such as Metro TRK, can also be used.
Most of the hardware platforms supported by Symbian platform are ICE-enabled. Kernel developers and those porting the operating system to new hardware often have access to development boards exposing the JTAG interface, which allows the use of CPU-level debuggers. Using a host PC debugger, such as Carbide.c++ or CodeWarrior configured for remote debugging, a debug ROM image (including drivers) can be downloaded to the target and debugged over a JTAG interface.
For debugging, debug versions of the drivers are built, and the ROM image is built to include the kernel debug DLL, which enables kernel-side (stop mode) debugging. This is done by using the
This includes the kernel extension
When the ROM image is downloaded to the target, the system boots up, and the Kernel or driver can be debugged using the host-based IDE interface. Code can be stepped through and halted, and memory on the target can be viewed.
Symbian also provides a debug monitor (sometimes called the crash debugger) to provide information on Kernel crashes. See
There are no specific tools required to use or implement the Time +platform service.
This document explains the principles of code paging in Symbian platform.
Intended Audience:
This document is intended to be read by those interested in the Symbian platform kernel.
Code paging means the application of demand paging to executable code. Demand paging increases the apparent size of available RAM on a device by loading data into RAM only when needed. Since the memory locations used by the code cannot be determined before it is loaded, the code needs to be modified when it is paged into RAM.
Classes explained +here.
Executable code is paged in and out of memory in accordance with the demand paging algorithm, which is discussed in this document. The algorithm involves four basic operations:
paging in,
aging,
rejuvenating, and
freeing.
The remainder of this document discusses the kernel-side implementation of each of these operations in turn. Most of the work is done by the
When a program accesses an item of paged code for the first time, the code needs to be paged into RAM. The initial call generates a data abort: this is caught by the exception handler which calls the
Checks the MMU page table entry for the address which caused the data abort. If the entry is not
Verifies that the exception +was caused by an access to the code chunk memory region.
Finds the code segment +which is at the current address.
Verifies that the code +segment is the one being demand paged.
The
Obtains a
Obtains a physical page of RAM by calling
Maps the RAM at the temporary location
Reads the correct contents into the RAM page by calling
Initialises the
Maps the page at the correct address in the current process.
Adds the
When these calls have completed they return control to the program +which caused the data abort.
The demand paging algorithm defines pages in the live list to be either young or old. When a page changes status from young to old, the kernel changes the MMU mappings for the page to make it inaccessible. It does so by calling the
In the Moving Memory Model, the call to
Finds the MMU page table entry for the page and clears the bits
In the Multiple Memory Model,
Examines the bit array
Updates each mapping +in turn.
The status of a page may change during a call to
When a program accesses code held in an old page, it generates a data abort, because the kernel made the page inaccessible when it was set to old. The data abort is caught by the exception handler which calls the
Gets the MMU page table entry for the address which caused the abort. If the bits
Finds the
If it finds that the state of the page is
Updates the page table +entry to make the page accessible.
Moves the
These steps are performed with the system lock held.
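The live-list behaviour described in the paging-in, aging and rejuvenating sections can be modelled as below. The structure and names are illustrative, not the kernel's implementation:

```cpp
#include <cassert>
#include <cstdint>
#include <deque>

// Toy model of the live list: pages start young; aging moves the
// oldest young page to the old list (where the kernel would make it
// inaccessible); rejuvenating an old page on access makes it young
// again. LiveList and its members are invented names for this sketch.
struct LiveList {
    std::deque<uint32_t> young;  // accessible pages, most recent at front
    std::deque<uint32_t> old;    // inaccessible pages, candidates for freeing

    void PageIn(uint32_t page) { young.push_front(page); }

    void Age() {                         // oldest young page becomes old
        if (young.empty()) return;
        old.push_front(young.back());
        young.pop_back();
    }

    void Rejuvenate(uint32_t page) {     // old page becomes young on access
        for (auto it = old.begin(); it != old.end(); ++it)
            if (*it == page) {
                old.erase(it);
                young.push_front(page);
                return;
            }
    }
};
```

Freeing would then take pages from the tail of the old list, which holds the least recently used pages in this model.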
When a physical page of RAM holding demand-paged code is needed for other purposes, it must be freed up. The kernel does this by calling the
In the Moving Memory Model, the call to
Finds the MMU page table entry for the page and sets it to
In the Multiple Memory Model, the call to
Examines the bit array
Makes each page inaccessible +in turn.
The SDIO hardware interface extends the functionality of SD devices. The combination of SD card with I/O is usually found in portable devices that make heavy demands on memory.
The MultiMediaCard media driver defines and implements the standard media driver interface derived from
Requests to the controller can also come from other device drivers. +This means that the architectural picture can be extended as shown below:
The controller currently supports a single stack capable of holding up to four cards. In a future development, it may become capable of supporting more than one stack. Note, however, that Symbian platform allows more than one peripheral bus controller.
The controller can handle multiple outstanding requests simultaneously, i.e. multiple drivers each with a session engaged (a session is a unit of work for the MMC stack). Internally, it schedules these requests onto the bus, automatically putting the appropriate card into the transfer state, and so on.
A platform containing a card socket is expected +to provide a means of detecting when a card is inserted or removed from the +stack (e.g. a media door interrupt). The controller does not attempt to initialize +and identify newly inserted cards immediately. Instead, cards remain unidentified +until they are interrogated by a client.
For hardware interfaces that allow multiple cards in a single stack, the MultiMediaCard system supplies no means of identifying a particular socket position within the stack. As cards are inserted and removed, the potential exists for those cards which remain to be allocated different card numbers (and ultimately a different drive letter than that allocated previously) once the stack is re-initialised. To avoid this, the controller maintains a fixed card number for a given card for as long as it remains in the Symbian platform device (by storing the CID of each card present). This means that a card will have a constant drive letter for as long as it remains in the Symbian platform device. However, if a card is removed, and then subsequently re-inserted into exactly the same slot in the stack, there is no guarantee that it will be allocated the same card number, and therefore no guarantee that it will have the same drive letter as before.
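The fixed-card-number policy above can be sketched as a CID-keyed table. All names and types here are illustrative (a real CID is a binary register value, not a string), not the controller's data structures:

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch of the policy described above: the controller keeps a
// CID -> card number mapping for cards currently present, so a card
// keeps its number (and hence its drive letter) across stack
// re-initialisation while it stays in the device. Once removed, a
// re-inserted card may receive a different number.
struct CardNumberTable {
    std::map<std::string, int> byCid;   // CID of each card present
    int nextFree = 0;

    int NumberFor(const std::string& cid) {
        auto it = byCid.find(cid);
        if (it != byCid.end())
            return it->second;           // still present: keep old number
        return byCid[cid] = nextFree++;  // newly seen card: allocate a number
    }

    void Remove(const std::string& cid) {
        byCid.erase(cid);                // card removed: number no longer reserved
    }
};
```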
The +following diagram gives a more detailed view of the MultiMediaCard controller +in terms of the socket, stack, session, power supply and media change objects.
The socket
The Symbian platform implementation of MultiMediaCard uses the idea of a socket. A socket corresponds to a physical device on a given peripheral bus.
In general, a socket is associated with its own thread. However, although each card in a MultiMediaCard stack has its own individual slot, the bus architecture prevents more than one card from being accessed at any one time. Further, all cards in a stack must be powered up at the same time. All this suggests that having a separate thread for each card in a multi-card slot would not offer much benefit, and would probably lead to conflicting hardware accesses between the threads. This means that a single socket object is allocated to represent all cards in a MultiMediaCard stack.
The socket is the highest level object in the MultiMediaCard controller, and is an instance of a
You can define and implement a derived class in the variant DLL,
The stack
Access
+to the card stack is handled by an instance of a
You will normally define and implement a derived class
+in the variant DLL,
The session
A
+session is an instance of a
The
Power supply unit
The
+behavior of the power supply unit (PSU) is represented by the
You
+will normally define and implement a derived class in the variant DLL,
Media change
The behavior that deals with media change events
+is represented by the
You
+will normally define and implement a derived class in the variant DLL,
The controller factory
The
+controller factory, also known as the controller interface, is a
You create an instance
+of your factory class in the kernel extension entry point code in your DLL.
+This normally calls
Note that these functions
+are only called for sockets that are associated with MultiMediaCard devices.
ROFSBUILD is the Symbian platform non-XIP (execute-in-place) ROM
+builder. It is normally invoked through
ROFSBUILD understands a sub-set of the BUILDROM OBEY file syntax.
If the OBY files are encoded in UTF-8 with non-ASCII character support, use the following ROFSBUILD command syntax:
If the OBY files are encoded in the local character set with non-ASCII character support, use the following ROFSBUILD command syntax:
Uses the specified core image file as the basis for creating +the extension.
Specifies the data drive description IBY or OBY file.
Reduces the physical memory consumption during image generation.
Level of information to log file. The following valid log +levels are available:
Displays a warning if a file is placed in a non-standard
+directory when
For example, the following instruction in
+OBY file leads to a warning when
Specifies the number of working threads that can run concurrently
+to create a ROFS image. The
If the
If the
If the
Enables the cache mechanism. This ensures that ROFSBUILD uses cached executable files while creating a ROFS image, allowing ROFSBUILD to reuse or generate cached files.
Notes:
The cache mechanism +is disabled by default.
The cached files +are stored on the hard disk.
The cache command
+line options (
Prevents ROFSBUILD from using cached files while creating a ROFS image.
Deletes all cached files from the hard disk.
Generates symbols for each data file or executable specified in the OBY file.
Note: The symbols file is not generated +by default.
Creates SMR partition images.
Prepends EPOCROOT to the file location, if the specified
+location starts from
See the
This topic provides a summary of related documentation for the USB Client +Driver to which you can refer.
+Kernel Architecture
Note:
Create an ASCII text
+file containing your local drive mapping records; see
Put this text file into
+your variant source tree. The source tree is typically of the form
In the
If you need more than
+one mapping file, or your single mapping file has a name other than
To make use of auto-detection, you need to ensure
+that your variant cannot find local drive mapping files in your ROM's
The most common customisation is to provide your own implementation
+of the
Take a copy of
Place the copy into
+your variant's source tree. This is typically of the form
Create a
Your
In the
You do this only if you need to save +code space, and you do not need the autodetect functionality.
See
+also
Create a
In your
The location of all
+source files in the
In the
This is the general pattern for the other ports supplied with Symbian +platform.
This object is also used to contain response information resulting from +the execution of that command.
+As it can sometimes be necessary to temporarily save current command and
+parameter information, the controller implements a small stack of
The platform independent layer provides three functions through which command +and parameter information can be set up:
+Two variants of
These functions are used to fill the current
The controller executes a single command by calling the state machine function
It calls
Internally, the command stack is implemented as a simple
+array of
The +platform independent layer provides two protected functions that the platform +specific layer can use to change the command information object that is current:
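The stack behaviour described above can be sketched as follows. The class name and the bounds used here are illustrative assumptions, not the real controller declarations; the sketch only shows the push/pop discipline by which the platform specific layer saves and restores the current command information object:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical command descriptor standing in for the real controller type.
struct TMMCCommandDesc {
    uint32_t iCommand = 0;
    uint32_t iArgument = 0;
};

// Small fixed-size stack of command descriptors; the top entry is "current".
class TCommandStack {
public:
    static constexpr int KMaxDepth = 4;

    TMMCCommandDesc& Command() { return iStack[iTop]; }   // current entry

    // Save the current command so a sub-operation can use a fresh slot.
    void PushCommandStack() { assert(iTop + 1 < KMaxDepth); ++iTop; }

    // Restore the previously saved command.
    void PopCommandStack() { assert(iTop > 0); --iTop; }

private:
    TMMCCommandDesc iStack[KMaxDepth];
    int iTop = 0;
};
```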
Symbian platform provides a logical device driver, and a generic +Platform Independent Layer for the physical device driver. The physical +device driver is called the USB client controller. You must provide +a Platform Specific Layer of the USB client controller to implement +the interface to the USB hardware on your phone.
+The USB Manager component in the Short Link Services module provides +a higher-level framework that can be used to implement and control +the USB device functions on a phone.
+You can use the same set of source code to build the following +DMA test harnesses:
+Simulation +PSL: T_DMASIM.EXE, D_DMASIM.LDD, and DMASIM.DLL. This is a software-simulated +implementation of DMA that is used mainly for debugging and validating +the PIL (platform-independent layer), as it simulates all types of +DMA controller.
User-side +harness: T_DMA.EXE.
The following test code is available to test a DMA port.
The DMA test application is used to +test:
one shot single buffer transfer
one shot double buffer transfer
one shot scatter/gather transfer
streaming single buffer transfer
streaming double buffer transfer
streaming scatter/gather transfer
The DMA test application can be used both on reference hardware platforms and with software-simulated DMA.
The DMA test application has the following known limitations:
only supports one shot and streaming data transfer
many parameters such as the number of buffers, the transfer +size and the number of fragments are fixed
does not support performance measurements like CPU usage, setup +time and transfer time
does not support memory-to-peripheral data transfer
does not support SMP-related testing, such as allocating a DFC on the same core or a different core.
Device +drivers can also be kernel extensions, which means that they are loaded by +the Kernel when it boots. They are used for extending the Kernel, as the name +suggests. Generally, kernel extensions provide early initialisation of devices +that must be permanently available, such as LCD, DMA, and I2C and other peripheral +bus controllers. Because kernel extensions are loaded by the Kernel, they +are never unloaded and so their destructors are never called.
The
Extensions are built into the ROM image, by specifying
+the
A
+kernel extension's interface to other Kernel side components is usually exported
+using a static interface. Clients can access this interface by using the global
+instance of the object created and initialised in the
Kernel
+extensions can also be implemented that let user code open channels on them
+to use the interface. This model is used for devices where initialisation
+has to be done at system boot up, but which can then be used by the clients,
+for example, the media driver
To do
+this, drivers have to declare
This document gives an overview of how to implement the IIC platform service APIs and directs you to more specific documentation.
The IIC platform service APIs provide a means of accessing devices that are connected to a multi-wire bus within the phone. These platform service APIs rely on hardware-specific implementation in the SHAI implementation layer of IIC. This hardware-specific implementation primarily involves creating concrete hardware-specific implementations of functions defined in the Platform Independent Layer (PIL).
Intended Audience:
This document is intended for +hardware device implementers who want to write adaptation software +to use their specific serial bus hardware with IIC.
There are two main forms of IIC operation:
Master operation
Slave operation
A master node on a bus controls transactions and is responsible for sending commands along the bus to select the slave node which is to send or receive the commands and data. A slave node receives instructions from a master node and sends or receives commands and data. OS device drivers may act as a slave or a master node, and in some bus technologies the roles of master and slave can be exchanged.
IIC has channels, which represent a connection between two nodes +on the bus. The channel has a queue for commands and will process +each command in turn.
A device driver can either use the IIC +Controller to access channels, or if there is a dedicated node that +is going to be used by a particular device driver, then the device +driver can talk directly to that node through IIC without using the +IIC Controller.
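The queue-per-channel behaviour can be illustrated with a minimal sketch. The class name and the use of std::function for transactions are assumptions for illustration, not part of the IIC API; the point is only that queued commands are processed strictly in order:

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <vector>

// Hypothetical channel: queues transaction requests and processes them
// strictly in FIFO order, as an IIC channel does.
class IicChannelSketch {
public:
    using Transaction = std::function<int()>;   // returns a completion code

    void Queue(Transaction t) { iQueue.push_back(std::move(t)); }

    // Drain the queue in order, recording each completion code.
    std::vector<int> ProcessAll() {
        std::vector<int> results;
        while (!iQueue.empty()) {
            results.push_back(iQueue.front()());
            iQueue.pop_front();
        }
        return results;
    }

private:
    std::deque<Transaction> iQueue;
};
```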
The
A summary of methods in the
The platform service APIs for controller-less operation
+are almost exactly the same, but with the addition of channel constructors
+and destructors.
You must next read:
There is no specific test suite available for Register Access platform +service.
+Each +Symbian platform device driver has a name. When the driver is +loaded by the user, LDDs and PDDs register themselves using a name. LDDs and +PDDs have different names, which are used to identify the logical device and +physical device objects.
The following shows how the example device +drivers set their names:
The framework uses driver names to identify the PDDs +that can be used with a given LDD. This can happen in two ways:
The framework uses the +name of the PDD factory object passed by the user while creating the logical +channel. This requires a PDD with that name to be already loaded, and its +factory object created. The framework searches for a PDD factory object with +the specified name.
If the user does not +pass the PDD name explicitly, then the framework searches all PDD factory +objects that have a name in the form of x.y, where x is the name of the LDD +factory object.
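The second lookup rule, accepting any PDD factory whose name has the form x.y where x is the LDD factory name, can be sketched like this. The helper names and the example driver names are hypothetical:

```cpp
#include <cassert>
#include <string>
#include <vector>

// True if pddName has the form "<lddName>.<suffix>" with a non-empty suffix.
bool MatchesLdd(const std::string& pddName, const std::string& lddName) {
    if (pddName.size() <= lddName.size() + 1) return false;
    return pddName.compare(0, lddName.size(), lddName) == 0 &&
           pddName[lddName.size()] == '.';
}

// Collect the loaded PDD factory names that an LDD could be matched with.
std::vector<std::string> CandidatePdds(const std::vector<std::string>& loaded,
                                       const std::string& lddName) {
    std::vector<std::string> out;
    for (const auto& name : loaded)
        if (MatchesLdd(name, lddName)) out.push_back(name);
    return out;
}
```

Note that the check requires the dot to sit exactly after the LDD name, so "Commx.Y" does not match an LDD named "Comm".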
The device driver framework supplies each LDD with a pointer,
An example of this pointer +in use:
Similarly, a PDD can access a LDD, though +this access must be initialised by the LDD. In the following example, the +physical channel declares a pointer to a logical channel, which the LDD sets. +Callbacks to the LDD are done using this pointer.
The logical channel class,
The
The
+function declaration for the
Description
Enables the function by writing to the appropriate +register in the CCCR.
Parameters
Return value
The
+function declaration for the
Description
Disables +the function by writing to the appropriate register in the CCCR.
Parameters
None
Return +value
The
+function declaration for the
Description
Disables the function by writing to the appropriate register in the +CCCR.
Parameters
Return value
The
+function declaration for the
Description
Registers the client with the function.
Parameters
Return value
The
+function declaration for the
Description
Deregisters +the client from the function.
Parameters
Return value
The
+function declaration for the
Description
Sets the priority of accesses to the function. It is intended to allow the suspend/resume protocol to determine whether an access to a lower-priority function should be suspended while a higher-priority function is accessed. Note that the Suspend/Resume protocol is not currently implemented, but may be in a future release.
Parameters
Return value
The
+function declaration for the
Description
Returns
+a pointer to an instance of a
Parameters
Return value
A pointer to the
The
+function declaration for the
Description
Returns a reference to a TSDIOInterrupt class that may be used by +the client to enable the use of interrupts for the function.
Parameters
None
Return +value
A reference to the interrupt class associated with this function.
The
+function declaration for the
Description
Returns information about the basic capabilities +of the function (function number, function type etc.).
Parameters
None
Return +value
A pointer to the
The DMAC has different DMA channels that can be configured for +different DMA transfers. For some peripherals, such as the Camera +and the Display controller, there can be dedicated DMA channels that +cannot be configured for DMA transfers by other peripherals. However, +drivers generally initialize and open a multi-purpose DMA channel +and use that channel to perform the DMA transfers.
+Initialization and opening of DMA channels is done using the interface
DMA channels must be opened before use and closed after completion +of DMA operations.
+You can extend
Each HAL group has an associated
+set of function numbers. For example, the enumerators of
In this
+specific case, new state can be represented by a new function number,
+and by changing the
It is +important to note that any new function numbers should not follow +consecutively from those defined by Symbian. Instead, choose high +values so that they are guaranteed not to clash with any future Symbian +extensions. Symbian will always use new values that follow consecutively +from previous Symbian defined values.
Although up to 32 HAL groups can be defined, +Symbian platform does not use all 32. This leaves some values that +can be used for new hardware.
In this case choose a HAL group +number at the high end of the range, so that it is guaranteed not +to clash with any future Symbian extensions. Symbian will always use +new values that follow consecutively from previous Symbian defined +values.
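As a sketch of the numbering advice above, a licensee extension to a HAL group's function-number enumeration might look like the following. The enumerator names and the particular high base value are illustrative assumptions, not real Symbian definitions:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical HAL group function numbers. Symbian-defined values grow
// consecutively from zero, so licensee extensions are given high values
// that are guaranteed not to clash with future Symbian additions.
enum THalExampleFunction : std::uint32_t {
    EExampleFuncA = 0,                  // hypothetical Symbian-defined values...
    EExampleFuncB = 1,
    EExampleFuncC = 2,                  // ...growing consecutively from zero
    EExampleCustomFirst = 0x10000000,   // licensee values start high
    EExampleCustomSecond,               // 0x10000001
};
```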
The IIC is a technology-independent interface for serial bus technologies. +The IIC supports multi-master and multi-slave serial interfaces, used +by the low power peripherals to exchange control information between +devices attached to the bus. The IIC supports different modes of data +transfer.
+The IIC platform +service provides a set of functions for device drivers to be able +to use serial interfaces without needing to know the details of the +actual chipset implementing the particular serial interface technology, +for example I2C or SPI. The client will however need to understand +how to configure headers for the particular interface technology.
For the technical details about IIC, see the
IIC is used in a number of +different areas in the OS. These may include:
controlling flash memory devices
controlling the LCD
reading data from the Real Time Clock.
The IIC platform service gives a common set of functions to +initiate a connection and to transfer the data.
The IIC documentation covers two types of user:
those that want to write device drivers
those that need to write SHAI implementation code to interface +to their particular IIC chipset.
IIC is an abstraction interface for several different serial bus +communication technologies. There may be features of a particular +technology that are not available through IIC. IIC imposes no throughput +limitations.
The code for the template port Sound Driver is split into two files:
+This file holds the +code for the record driver channel
This file holds code +for the playback driver channel together with code which is common to both +channels.
The header file for the above can be found in:
+
The template
Following the general pattern, your implementation is contained in the +directory:
+
For example,
Now complete these steps:
+create a copy of the
+template port implementation files
create a copy of the
+header file
rename the Sound Driver
+PDD factory class
rename the classes
After doing the required DMA initialisation, a driver can start +a DMA operation. To do this, the driver fragments the request and +then queues the request.
+A DMA request must be split into different fragments that are +small enough to be transferred by the DMAC in a single transaction. +The size of each fragment is smaller than or equal to the maximum +transfer size supported by the DMAC. To fragment a DMA request, a +source and destination address, and the size of data to be transferred, +must be provided. If the source and/or destination is in memory, each +fragment points to memory which is physically contiguous.
The kind of transfer to perform is specified using a set of flags +and hardware-specific information. The flags can be:
The following shows an example of fragmenting a DMA request:
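As an illustrative sketch (not the real DMA framework API), a request can be split into fragments no larger than the DMAC's maximum transfer size like this; the Fragment structure, function name, and offset-based representation are assumptions:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One fragment of a larger DMA request; offsets stand in for the source
// and destination addresses, which must be physically contiguous within
// each fragment.
struct Fragment {
    std::size_t srcOffset;
    std::size_t dstOffset;
    std::size_t size;
};

// Split a request of `size` bytes into fragments of at most `maxTransfer`
// bytes, the largest transaction the DMAC can perform in one go.
std::vector<Fragment> FragmentRequest(std::size_t size, std::size_t maxTransfer) {
    std::vector<Fragment> frags;
    for (std::size_t done = 0; done < size; ) {
        std::size_t n = size - done;
        if (n > maxTransfer) n = maxTransfer;
        frags.push_back({done, done, n});   // contiguous chunk
        done += n;
    }
    return frags;
}
```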
DMA transfer is initiated by queuing the DMA request. A device
+driver queues the request by calling
If the request channel is idle when a request +is queued, the request is transferred immediately. Otherwise, it is +queued and transferred later. The client is responsible for ensuring +cache consistency before and/or after the transfer if necessary.
This is done by the DMA service callback function. +The completion notification is as follows:
Once the DMA request has been handled by the DMAC, the DMA service callback function is called by the DMA driver. This runs in a DFC context that is scheduled from a DMA interrupt service routine.
The DMA service callback +function pointer must be provided while creating the DMA request:
The DMA callback function is called for both success and failure +of the request, so it needs to check the request result. On success, +the function should either initiate another DMA request, or stop DMA +and close the request. On failure, the function should stop DMA.
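The result-checking pattern described above can be sketched as follows. DmaResult and the client structure are hypothetical names, and a real callback would run in DFC context rather than being invoked directly:

```cpp
#include <cassert>

// Hypothetical completion code for a DMA request.
enum class DmaResult { EOk, EError };

struct DmaClientSketch {
    int requestsIssued = 0;
    bool stopped = false;

    // The "DMA service callback": one function handles both outcomes, so
    // it must inspect the result before deciding what to do next.
    void OnComplete(DmaResult r) {
        if (r == DmaResult::EOk) {
            ++requestsIssued;   // success: initiate another request here
        } else {
            stopped = true;     // failure: stop DMA and close the request
        }
    }
};
```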
For testing the port, you will need:
A Symbian platform
+text shell ROM image for the device that also contains the USB drivers
+(the LDD and the PDD), and the unit test applications
A computer running
+either MS Windows 98 or above to run the Symbian host-side
+test program USBRFLCT, or MS Windows 2000 to run, in addition, the
A USB driver
+disk (or some other location accessible to the PC) containing the
In addition, a USB analyser, such as those made by LeCroy (formerly CATC), is invaluable for this kind of work. Without this equipment it may be impossible to be sure that the port is fully working.
These are the recommended steps for testing the port:
Run
Use a USB cable
+to connect the device under test to a PC and start
Try and run
Run the USB
+Implementers Forum's
The host (PC) side test program:
The generic USB device driver USBIO is a product developed and
+marketed by the German company Thesycon [USBIO Software Development
+Kit for Windows; see
The driver +provides Win32 applications with direct access to USB devices which +are otherwise only accessible from kernel-mode. It can be used with +any kind of USB device, enabling application developers to control +devices without having to develop a kernel-mode WDM driver. Symbian +owns a licence to the driver and its development package, and is entitled +to ship renamed copies of the driver binary to licensees and partners +as part of its own development kits, usually adopting the name of +the application that uses it.
This is a device-side only, stand-alone test program. It is used to verify the proper working of some basic USB API functionality. For T_USBAPI to run, the device does not need to be USB-connected to a PC, although if it is, some additional tests are performed. Tests run by T_USBAPI include:
loading the +USB LDD and opening a channel
searching for +suitable endpoints and setting up an interface
searching for +suitable endpoints and creating an 'alternate setting' for that interface +(if supported).
querying and +allocating endpoint resources (DMA and double buffering)
powering up +the UDC and connecting it to the bus.
manipulating +(modifying) USB descriptors (Device, Device_Qualifier, Configuration, +Other_Speed_Configuration, Interface, Endpoint, String)
testing the +USB OTG API extensions
requesting and +releasing Device Control
some static +state notification tests (alternate device status, endpoint status).
setting and +clearing endpoint Halt condition (where supported in unconfigured +state)
closing the +channel and freeing the LDD
Most of these tests verify the functioning of the platform-independent code (that is, the platform-independent layer plus the LDD). However, some of this code relies on platform-specific layer functionality, such as the stall state and the maximum packet size of endpoints. Therefore, a working, or at least partially working, platform-specific layer is required for T_USBAPI to pass.
When called normally, T_USBAPI runs
+once and then exits. It can also run endlessly in a loop, provided
+it is not failing, by adding the parameter soak to the command
+line. Once the program is running in endless mode, it can be stopped
+at any time by pressing the
T_USBAPI is a standalone program which can be safely
+run on a UDEB image. Choose to display error messages and warnings
+specific to the USB components using the appropriate kernel debug
+flags (
"trace 3 1"
+means "set (bit 1|bit 0) at index 1" (
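That mapping from a trace command onto the debug mask words can be sketched as follows; the structure, function names, and word count here are illustrative assumptions:

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Hypothetical set of 32-bit debug mask words, indexed as in "trace <value>
// <index>": the value is OR'ed into the word at the given index, so
// "trace 3 1" sets bits 0 and 1 of word 1.
constexpr int KNumTraceMaskWords = 8;

struct TraceMaskSketch {
    std::array<std::uint32_t, KNumTraceMaskWords> words{};

    void Trace(std::uint32_t value, int index) { words[index] |= value; }

    // Test a bit by its global position across all mask words.
    bool BitSet(int globalBit) const {
        return (words[globalBit / 32] >> (globalBit % 32)) & 1u;
    }
};
```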
These are the host and device side parts, respectively, +of a reflector-like USB test program. They are normally used together, +but T_USB can also be used to create a simple USB device to allow +USBCV to execute.
To ensure proper internetworking,
USBRFLCT
This is the host-side part of a reflector-like USB test program.
+This is normally used together with
The directory
The directory
The syntax for +invoking this is:
where
Three modes of operation are available:
Loop mode.
Data chunks are read from the USB device and then immediately
+written back to it. Successive data chunks are read, each being one
+byte larger than the one before, starting with size 0 until a specified
+maximum size has been reached. This maximum size value is specified
+on the device using
A preamble +packet is sent by the device before each data chunk, containing the +size of the chunk that is to follow. This kind of protocol is necessary, +because the host side driver has to specify the exact number of bytes +that it needs to read from the device.
Every 1024th loop, +the program displays statistics about the test run. For example, the +total number of bytes transferred, and the current average data throughput +in bytes/sec.
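The preamble protocol described above can be simulated in ordinary user-side code. The types below are illustrative only, not the actual USBRFLCT/T_USB implementation; they show why the host always knows exactly how many bytes to read next:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Device side: chunks grow by one byte per loop, starting at size 0, until
// the specified maximum is reached. Each chunk is preceded by a preamble
// carrying the size of the chunk that follows.
struct Device {
    std::uint32_t next = 0;
    std::uint32_t maxSize;

    explicit Device(std::uint32_t max) : maxSize(max) {}

    bool NextTransfer(std::uint32_t& preamble, std::vector<std::uint8_t>& chunk) {
        if (next > maxSize) return false;   // run finished
        preamble = next;
        chunk.assign(next, 0xA5);           // dummy payload
        ++next;
        return true;
    }
};

// Host side: read the preamble, then read exactly that many bytes. A real
// host would echo each chunk back; here we just count the bytes moved.
std::uint64_t RunLoopMode(Device& dev) {
    std::uint64_t total = 0;
    std::uint32_t size;
    std::vector<std::uint8_t> chunk;
    while (dev.NextTransfer(size, chunk)) {
        assert(chunk.size() == size);   // preamble matches the chunk
        total += chunk.size();
    }
    return total;
}
```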
Receive-only +mode.
In this mode, a single preamble packet containing the +size of the data chunks that are to follow is sent from the device. +The host-side program subsequently, and continuously, reads data chunks +of that size from the device. This mode can be used to determine the +raw transfer performance of the USB connection in the device upstream +direction.
Transmit-only +mode.
In this mode, no preamble packet is sent. Instead, maximum +(constant) sized data chunks are sent. This mode can be used to determine +the raw transfer performance of the USB connection in the device downstream +direction.
Verbose mode.
In this mode, the driver and program output are both displayed.
+Use this to track down problems that occur when trying to get
the first time +you test a port of the platform specific layer
after installation +changes on the PC
The error messages are particularly helpful: a list of error codes, together with explanations, for the USBRFLCT.SYS driver can be found in the USBIO Reference Manual. These codes are also available on Thesycon's website at
T_USB
This is the device-side part of a reflector-like USB test program.
+This is normally used together with
The source for this +program can be found in:
After starting the program, you are prompted to choose a +maximum size for a single transfer.
Note: The maximum +transfer size is not the same as the maximum packet size.
The prompt appears as follows:
Once you have chosen the maximum transfer size you are prompted
+to choose a bandwidth priority for the interface that will be created
+and used for the tests. All options except '0' are used when
Choose a bandwidth priority from the following menu for the interface +specifically created for the tests. All tests will be carried out +over this interface.
Choose endpoint options from the following two configuration +menus:
A message +prompts the user to start the host side test program USBRFLCT.
Once host enumeration
+and
Choose
+one of the six main test modes. The selected mode should correspond
+with the mode in which the host-side program is running. For example,
+device-side
The Loop test with
The standard 'Receive-only' test mode
There are two further tests +which include file I/O. These are:
'Receive' mode
'Transmit' mode
In both cases you must first enter a source or destination +from the list of available drives.
Thesycon provide this Win32 GUI application. Like
Use
The following steps describe an example test:
Connect the
+Symbian platform USB device to a PC on which the
Start
Launch
Press the
Test and exercise
+the USB device using the different options provided by the
Further details can be found in the USBIO manual, which contains
+a chapter on the
This test program is written and maintained by the USB Implementers Forum, Inc. (also referred to as the “USB Forum” or “USB IF”). It can be used to check the compatibility of a USB device with the USB specification (currently version 2.0). It contains a collection of different test suites, but the only tests that are of interest for the platform-specific layer port are the “Chapter 9 Tests”. Even though most of the "Chapter 9" request processing is done by the platform-independent layer, the tests can help to uncover potential problems in the endpoint-0 controlling platform-specific layer code. The tests will also show whether or not the platform-specific layer and the platform-independent layer work together without problems.
This program runs on Windows 2000 and Windows XP (English Version)
+only. At the time of writing, the version of the package is R1.2.1,
+and can be obtained from
This is not a dedicated test program, but it can be used to check that the USB driver stack is operating correctly. Since it is not a test program, USBMSAPP is not normally included in ROMs with any of the
Note: Add this macro to
Start the program from the command prompt with a drive letter +as a parameter:
This mounts the specified drive with the Mass Storage file system +('MSFS'). Check that you can see it as a Read/Write disk drive on +the USB host computer. The drive name on the host computer will be +the first free drive letter available.
Note: You +can only mount drives marked as 'removable' as USB Mass Storage devices. +The C: drive cannot be marked as 'removable'. Internal drives made +available for USB Mass Storage must be marked as 'removable' for reasons +of platform security.
Use the provided KTrace and BTrace macros +to debug your PRM implementation.
Introduction
These +kernel services allow you to generate and capture trace information in a manner +designed to minimise the impact on a running system.
Use the
The
The definitions are summarised in the following table. Each entry type
+is identified by its
Each entry is defined using the
Take the template port, in
This document explains how to implement a slave channel in the +SHAI implementation layer of IIC.
Intended Audience
Base porting engineers.
Introduction
IIC buses (a term used in this document to represent serial inter-chip buses that IIC can operate on) are a class of bus used to transmit non-time-critical data between components of a hardware system.
To implement the SHAI implementation layer
+of an IIC channel as slave you must write a class extending
The +four functions you must implement are:
You must also provide the following functionality:
From the constructor,
+call the constructor of the base class
assign the channel +number (if you are using the IIC Controller) and channel ID, and
report events
+signalled by the hardware by calling the
Your implementation of the SHAI implementation layer will +need to call functions of the PIL. You are restricted to using a certain +subset of these.
Implementing the Platform Specific Layer
Implementing DoCreate()
If you are using +the IIC controller, set the channel number,
call the
set the
Implementing DoRequest()
configure the +interface,
set triggers +and initiate transfers, and
disable interrupts +in the event of an abort.
The argument
+is
The argument
+is
The argument
+is
The argument
+is
The argument
+is
The argument
+is
Implementing ProcessData()
The easiest way to understand this is to review the template port at
You implement the function to update the callback according +to the value of the trigger.
If the trigger +is a receive trigger, update the callback with the number of words +already received to the buffer by calling
If the trigger +is a transmit trigger, update the callback with the number of words +already transmitted (moved to the hardware transmit FIFO) by calling
Whether the +trigger is receive or transmit, update the trigger member of the callback +containing the accumulated event history by calling
where
Implementing CheckHdr()
Implement
check if the +specified configuration is valid.
Using the Platform Independent Layer
You can +only use certain functions of the PIL. They provide functionality +which is generic to IIC buses. They are:
Implementing event notification
A slave channel
+reports events on the hardware by calling the PIL function
Pass
Pass
Pass
Pass
providing new +receive buffer, and
resetting its
+receive notification triggers for instance by calling
To make this possible, the SHAI implementation layer should try to keep the receive hardware FIFO at the lowest possible level.
Pass
providing new +transmit buffer, and
resetting its
+transmit notification triggers for instance by calling
To make this possible, the SHAI implementation layer should try to keep the transmit hardware FIFO at the highest possible level.
Pass
Pass
All interaction between user code and the driver code takes place
+through a logical channel. On the user side, a channel is encapsulated by
+a
A hardware component can be added to introduce a new feature or to create a new variant of an existing feature. Mobile phones are developed with various form factors, and a new variant is created for each form factor.
A particular new piece of hardware may require several platform services to support all of its functionality. For example, a camera may provide photos and video streams and require the use of a GPIO platform service to configure the camera resolution, to set the focal distance, or to trigger the shutter.
Platform services are intended to be platform agnostic and aim to reduce the time needed to port hardware adaptation from one platform to another. Platform services can be broadly classified into two categories:
Platform services that provide a service to other platform services.
Platform services that use various kernel services and other platform services to drive a hardware component.
The adaptation developer should identify the platform services required to implement the new hardware. Some platform services may already be implemented for other hardware, so it may be a simple matter of reusing another implementation with minor modifications for your particular component.
The adaptation will use certain hardware configurations, and some of the configuration parameters can be reused. Reusable hardware configurations can be stored in a repository called the Hardware Configuration Repository (HCR). The HCR allows the configurations to be stored in the repository during various stages of device creation. The HCR platform service provides APIs to the kernel and device drivers to read the values stored in the repository. For more information, see the
The adaptation developer needs to know the specifications to implement hardware on the device. The implementation process involves identifying other services that must be implemented to support the hardware. The other services may be provided by other platform services, kernel extensions or the drivers of other hardware components.
Once the hardware component is implemented, the developer must ensure that the implementation passes the relevant platform service compliance tests. If a test fails, the adaptation code should be modified until the tests pass.
The adaptation process can be summarized as:
Identify the hardware
Identify the platform services required to implement the hardware component
Identify the dependencies
Identify the required interfaces to be implemented
Implement the required interfaces
Build and test the interfaces
Modify the implementation, if required
A device driver is a program that controls a particular type of device +that is attached to a base port. Writing a device driver requires an in-depth +understanding of how the hardware and the software of a given platform function. +The Device Driver writing guide tells you what you need and how to create +a device driver.
For information about the parameters needed to create a valid Symbian platform device driver, see
For information about high-level implementation designs for your device driver, see
For information about how device drivers are implemented as logical device drivers (LDDs) and physical device drivers (PDDs), see
For information about organizing memory, managing data transfer across memory boundaries and the data transfer between LDD and PDD, see
For information about the user-side requests to a driver and the synchronization methods used by the driver, see
For information about how device drivers use interrupts, see
Kernel Architecture 2
The MAKSYM and MAKSYMROFS tools create a symbol file, which lists the addresses of every global and exported function in the ROM. They read a log file to generate the symbol file; the log file contains information about the contents of the ROM or ROFS image.
MAKSYM reads the log file created by ROMBUILD when building a ROM image. MAKSYMROFS reads the log file created by ROFSBUILD when building a ROFS image. The command syntax for MAKSYM and MAKSYMROFS is the same.
By default, ROMBUILD produces a log file called
This will generate an output text file (symbolic information file) called
The file contains all the global variables along with their address and length. For example, the start of
In this example, the function
Notes:
If you are distributing ROM images for testing, it is also useful to supply the symbolic information file for that image.
If you re-build the ROM image, you must also rebuild the symbolic information file using MAKSYM.
There is a template DMA Framework consisting of the source file
Decide which directory your DMA PSL and associated
Copy the template framework into your chosen location. Be aware that the template
Change the variant's
The remainder of this porting guide refers to the implementation of the DMA Framework for the template ASSP, which forms the basis for the template reference board; the source file can be found in
Demand paging documentation related to code demand paging.
User-side requests to a driver can be of three types: synchronous, asynchronous, and cancellation of an asynchronous request. User requests can be handled in two different ways, based on the logical channel model used by the device driver. A channel implementation is derived from either:
In the
All requests from the user result in a call to a single function
A driver uses the following synchronisation methods.
The APIs to access hardware peripheral registers are provided by a Symbian base port. For example, the base port for the OMAP 2420 provides the
To set the UART LCR register:
To get the UART MDR register:
To modify only the specified bits of the register (here, the UART LCR register bit 6 is being set to 0, i.e. LCR[6] = 0):
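As an illustration of the read-modify-write pattern behind these register-access helpers, here is a minimal sketch using an ordinary array as a stand-in for a memory-mapped register bank. The `MockAssp` names are invented for this example; a real base port maps such helpers onto the hardware base address.

```cpp
#include <cstdint>
#include <cassert>

// Hypothetical stand-in for memory-mapped register access: an ordinary
// volatile array plays the part of the peripheral's register block.
namespace MockAssp {
    inline volatile uint32_t& Reg(volatile uint32_t* aBase, uint32_t aOffset)
        { return aBase[aOffset / sizeof(uint32_t)]; }

    inline void Write32(volatile uint32_t* aBase, uint32_t aOffset, uint32_t aValue)
        { Reg(aBase, aOffset) = aValue; }

    inline uint32_t Read32(volatile uint32_t* aBase, uint32_t aOffset)
        { return Reg(aBase, aOffset); }

    // Read-modify-write: clear the bits in aClearMask, then set aSetMask.
    inline void Modify32(volatile uint32_t* aBase, uint32_t aOffset,
                         uint32_t aClearMask, uint32_t aSetMask)
    {
        uint32_t v = Read32(aBase, aOffset);
        v &= ~aClearMask;
        v |= aSetMask;
        Write32(aBase, aOffset, v);
    }
}
```

With this sketch, setting bit 6 of an LCR-like register to 0 is a modify call with a clear mask of `1u << 6` and an empty set mask.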
A Keyboard Driver is platform-specific, and is implemented +by the base port. Symbian platform provides template code for a driver. +A base port must also implement a keyboard mapping library, which +is used by user-side components to translate the codes used for hardware +keys into logical key codes.
The Secure Digital Input/Output (SDIO) protocol is based on the SD memory card protocols. The SDIO bus enables high speed data transfers between the host and a card. SDIO consumes little power and is efficient for use in mobile devices. SDIO cards, like SD cards, use a star topology with dedicated command and data signals. The purpose of the SDIO APIs is to enable easy porting of the Symbian SDIO services for a particular SDIO controller.
SDIO controller: the executable that manages access to the SDIO card hardware interface and provides an API for class drivers to access SDIO card functions.
Stack: handles multiple requests simultaneously. It internally schedules the requests on to the bus and allocates the appropriate card to transfer data accordingly.
Session: sets up a session object to perform SDIO-specific command sequences.
Register interface: allows single byte read/write operations to a given register address, together with the corresponding operations for multi-byte access.
Powering up a stack containing a single SDIO card.
Sending a command to the PSL.
Acquiring new cards into the SDIO stack.
Configuring an SDIO card.
The purpose of this document is to describe interrupt-specific functions for kernel-side device drivers.
The
The Interrupt class provides the following functions.
A driver can export its API to be used by other kernel code by using IMPORT_C in the declaration and EXPORT_C in the definition. When the driver is built, these functions are exported to a
Kernel extensions can also provide information to user programs by setting attributes that can be read through the system HAL (Hardware Abstraction Layer) API. See
Both LDDs and PDDs are DLLs. They implement a specific interface that allows the kernel to initialise them, and user-side code to communicate with them.
An LDD must implement:
A
A
A DLL entry point function at ordinal 1, which constructs the LDD factory object.
It is possible that a device driver is also an extension, in which case the entry point will also be used for extension initialisation.
A PDD must implement:
A
A
A DLL entry point function at ordinal 1, which constructs the PDD factory object.
In general, the presence of Level 2 cache is transparent to device drivers. The cache is physically indexed and physically tagged, which means that the L2 cache is not affected by the remapping of virtual-to-physical addresses in the MMU.
However, where data is being transferred using DMA, cached information in the data buffers involved in a DMA transfer must be flushed before a DMA write operation (for example, when transferring data from memory to a peripheral), and both before and after a DMA read operation (for example, when transferring data from a peripheral into memory).
The kernel provides three functions for this:
All three functions are defined and implemented as part of the
See also
A PDD must define an entry point function using the macro
This factory object is created on the kernel heap. Its purpose is to create the physical channel, if needed.
An LDD indicates that it requires an accompanying PDD by setting the LDD factory object member
A user can specify the name of a PDD through the
The user side passes a pointer to a descriptor containing the name of the required PDD as the fourth argument. A PDD with that name must already have been loaded, and its factory object created. The device driver framework searches for a PDD factory object with that name, and if found, calls
If no explicit PDD name is passed from the user side, then the device driver framework searches for all PDD factory objects, i.e.
The file extension of a PDD DLL can be any permitted Symbian platform name but, by convention, the PDD DLL has the extension
A PDD is loaded by calling
loads the DLL into RAM, if necessary
calls the exported function at ordinal 1 to create the factory object, the
places the factory object into the appropriate object container.
If a PDD needs to perform initialisation at boot time (before the driver is loaded by
In order for the kernel to initialise the PDD extension at boot time, the
A keyboard mapping DLL provides a set of lookup tables that the generic
The basic purpose of the tables is to provide a mapping from scancodes to keycodes, so that, given a scancode, the corresponding keycode can be found. The tables are organized so that there is, in effect, one set of lookup tables for each likely combination of modifier key states.
An outline set of tables is provided in the template port. The following list outlines the structure of the tables.
The tables are organized in a hierarchical structure. The start of this hierarchy is the root table, defined by a
an array of pointers to nodes, where each node is defined by the
a field containing the total number of such nodes.
The combination of modifier key states that each node represents is encapsulated by a
A
then a match +occurs only for the following combination of modifier key states:
i.e. a match occurs only if
In +C++ code, this is expressed as:
where
In principle, +each node represents scancode-to-keycode mappings by associating one +or more pairs (or range) of scancodes with corresponding blocks +of keycodes. Each pair represents an inclusive and contiguous range +of scancodes.
Each pair (or range) of scancodes may be "discontiguous" +from the next.
The association is made through a table defined
+by the
a
a pointer to +a table containing the target keycodes. The target keycodes are arranged +so that successive scancode pairs are associated with successive blocks +of keycodes as the following diagram shows.
This means that successive scancodes, for example, from +"A1" through to "B1" are represented by the successive keycodes "keycode +for A1" through to "keycode for B1"; scancode "A2" is represented +by "keycode for A2", which follows "keycode for B1" in the keycode +table.
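The range-to-block association described above can be sketched as follows. The structure and function names here are illustrative, not the actual kernel table types: each entry maps an inclusive, contiguous scancode range onto the next block of a shared keycode table.

```cpp
#include <cstdint>
#include <cstddef>
#include <cassert>

// Illustrative table layout: each range [iFirst, iLast] of scancodes maps
// onto a block of keycodes; successive ranges use successive blocks.
struct TScanCodeRange {
    uint16_t iFirst;   // first scancode in the range
    uint16_t iLast;    // last scancode in the range (inclusive)
};

struct TScanCodeBlock {
    const TScanCodeRange* iRanges;
    size_t iNumRanges;
    const uint16_t* iKeyCodes;   // one keycode per scancode, blocks laid end to end
};

// Return the keycode for aScanCode, or aNotFound if no range contains it.
uint16_t LookUpKeyCode(const TScanCodeBlock& aTable,
                       uint16_t aScanCode, uint16_t aNotFound)
{
    size_t base = 0;   // index of the first keycode for the current range
    for (size_t i = 0; i < aTable.iNumRanges; ++i) {
        const TScanCodeRange& r = aTable.iRanges[i];
        if (aScanCode >= r.iFirst && aScanCode <= r.iLast)
            return aTable.iKeyCodes[base + (aScanCode - r.iFirst)];
        base += r.iLast - r.iFirst + 1;   // skip this range's keycode block
    }
    return aNotFound;
}
```

Because ranges may be discontiguous, a scancode falling in a gap simply returns the not-found value, matching the fall-through behaviour described below.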
To allow for possible reuse of keycode tables, a node can point to more than one
If no keycode can be found that matches the scancode, for a given modifier combination, then the algorithm returns the
Kernel Architecture
None published.
The DMA client interface specifies the connection between the Symbian +platform and the DMA platform service. This document describes how +to open and close a DMA channel along with how to use the DMA client +interface to perform a DMA transfer.
The classes that process a DMA transfer are:
The steps involved in performing a DMA transfer are described +below.
The sequence diagram of a DMA transfer +is:
Initialization
Before any DMA operations can be carried out, the data structure that holds the configuration of the DMA channel must be initialized. An example of this is shown below:
The
The
The
The
Open a DMA channel
Next the DMA channel is opened and configured. An example of this is shown below:
The first parameter for the open method holds the +channel configuration details. The second parameter is the DMA channel +handle. In this example, if an error occurs during the open operation, +then the error code is returned to the rest of the system and the +transfer is aborted.
The first parameter contains the DMA channel +configuration information. The second parameter points to the instance +of the DMA channel class that is to be used. This class must be declared +before this method call can be made.
Create a request object
The next task is to create a request object, specifying the channel it will be queued on and the callback when it completes. An example of this is shown below:
The above line creates a new instance of the DDmaRequest class, which will be queued on the DMA channel specified in the first parameter. The callback function (
Transfer
Once the DMA channel has been set up, the DMA transfer can be carried out. Executing a DMA transfer consists of two parts:
Specifying the details of the DMA transfer.
Placing the transfer onto the DMA transfer queue.
Once a successful transfer has been completed, the specified +DMA callback function is executed.
An example of the use of the fragment method call is:
In this code snippet, the details of the DMA transfer are specified. If an error is returned by the fragment method, the error code is passed on to the rest of the system and the transfer does not take place. The first parameter specifies the data source. The second specifies the destination of the data. The third specifies the number of bytes to be sent. The fourth parameter consists of flags. The final parameter specifies any DMA PSL-specific settings (there are none in this example).
A code snippet that shows the DMA transfer request being placed onto the DMA queue is:
Close a DMA channel
Finally, once the DMA functionality is no longer required, the DMA channel is closed. A code snippet that shows this is:
Where
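The open, fragment, queue, and close steps above can be followed end to end with mocked stand-ins for the framework classes. A real driver uses the framework's `TDmaChannel` and `DDmaRequest`; everything below is an illustrative sketch whose mock completes the transfer synchronously instead of on a hardware interrupt.

```cpp
#include <cstdint>
#include <cstring>
#include <cstddef>
#include <cassert>

// Mock DMA channel: tracks only whether it is open.
struct MockDmaChannel {
    bool iOpen = false;
    int Open() { iOpen = true; return 0; }    // 0 plays the role of KErrNone
    void Close() { iOpen = false; }
};

// Mock request: records the transfer details on Fragment() and performs
// the copy on Queue(), then runs the completion callback.
struct MockDmaRequest {
    MockDmaChannel& iChannel;
    void (*iCallback)(bool aOk);
    const uint8_t* iSrc = nullptr;
    uint8_t* iDst = nullptr;
    size_t iCount = 0;

    MockDmaRequest(MockDmaChannel& aChannel, void (*aCb)(bool))
        : iChannel(aChannel), iCallback(aCb) {}

    // Record source, destination and byte count (the real Fragment() also
    // splits the transfer into hardware-sized descriptors).
    int Fragment(const uint8_t* aSrc, uint8_t* aDst, size_t aCount)
    {
        if (!iChannel.iOpen) return -18;      // -18 mirrors KErrNotReady
        iSrc = aSrc; iDst = aDst; iCount = aCount;
        return 0;
    }

    // "Queue" the request: the mock completes it immediately and invokes
    // the callback, as the framework would after the transfer interrupt.
    void Queue()
    {
        std::memcpy(iDst, iSrc, iCount);
        iCallback(true);
    }
};
```

The calling sequence mirrors the steps above: open the channel, construct the request with its callback, fragment, queue, and finally close the channel.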
Adaptation is the process of adding hardware or a platform service to the device. It includes:
the required hardware
a list of device drivers to be written
the list of required Platform services
the process of porting the operating system to the hardware
integration of peripherals possibly created by third parties
Base porting and device drivers involve platform specific +implementation on hardware.
The adaptation layer interfaces between the operating system and hardware: it is a hardware abstraction layer extended with modular functionality. It is partitioned into platform services and client interfaces. The hardware manufacturers implement the platform services with platform-specific instructions. The device driver writers use the client interfaces in their code to control various hardware.
The purpose of Platform services is:
to make porting easier,
to enable hardware vendors to supply standard solutions, +and
to create a common interface for use by chipset vendors, phone +vendors and peripheral vendors.
The simplest Platform services are sets of APIs running in +kernel space and corresponding to discrete items of hardware. Other +Platform services form part of frameworks which may extend into user +space and higher levels of the operating system. Some Platform services +interact with other Platform services within the adaptation layer, +forming a logical stack which may cross the boundary between adaptation +layer and operating system at several points. A Platform service is +not necessarily implemented on hardware: some are implemented wholly +or partly as device specific software.
Implementers are provided +with:
the APIs to be implemented, with an explanation of the intended +functionality
information about the associated framework and stack if appropriate
a statement of any associated standards and protocols
information about the tools required
tests to prove the implementation once it has been written
Kernel Architecture
None published.
The platform-specific source code can be written in GNU assembler syntax or ARM assembler syntax.
The generic source and header files are all written using the ARM assembler syntax, as are the source files for the template and example ports. However, the bootstrap can be built using the GNU assembler; in this case, source files are translated from ARM to GNU assembler syntax automatically.
The rules that you must follow to use GNU assembler syntax in the platform-specific source are:
The first non-blank line of any GNU-syntax source or header file should start with an @ (the GNU comment delimiter); this acts as a directive to the translation tool that no translation is required.
Files included from GNU source should be included with a
To enable the generic makefile to work correctly, assembler source files should always be given the extension
The Baseport Template platform service provides an interface to the kernel-side client for hardware-specific functions (for example, GPIO, DMA, IIC, USB Client Controller) that are used by the kernel.
The client interface for the Baseport Template platform service is:
The Baseport Template interface provides the following functions.
These are functions whose implementation is not provided in the class, but whose behavior can be overridden within an inheriting class by a function with the same return type.
External interrupt handling functions are used by second-level interrupt controllers at variant level.
USB client controller functions are called by the USB PSL and are to be implemented by the variant.
An LDD and a PDD each have a version number which helps to identify the interface. In order to communicate, an LDD and a PDD must have the same version number.
Each LDD and PDD has its own version number.
LDDs and PDDs must set their version numbers in their respective factory objects, using a
A version number defines the interface version supported by the LDD or PDD. It is used to check that an LDD and PDD are compatible. It is also checked against the version requested by a client when it opens a channel.
The following shows how the example device drivers set their version numbers:
The Kernel provides the
the major version of the client is less than the major version of the driver, or
the major version of the client is equal to the major version of the driver, and the minor version of the client is less than or equal to the minor version of the driver.
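The two compatibility conditions above amount to the following check. `TVer` and the free function here are minimal stand-ins for the Kernel's actual version type and query; they exist only to make the rule concrete.

```cpp
#include <cassert>

// Minimal stand-in for a driver version: major and minor numbers only.
struct TVer { int iMajor; int iMinor; };

// A client's requested version is supported when its major version is
// lower than the driver's, or equal with a minor version no higher than
// the driver's.
bool QueryVersionSupported(const TVer& aDriver, const TVer& aClient)
{
    return aClient.iMajor < aDriver.iMajor
        || (aClient.iMajor == aDriver.iMajor && aClient.iMinor <= aDriver.iMinor);
}
```

Note the asymmetry: a client built against an older interface is accepted, while one requesting a newer interface than the driver provides is rejected.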
When the device driver framework searches for a corresponding PDD factory object for an LDD, it calls
The example PDD's
If you know the address of the instruction which caused an exception, you can compare this address with the MAKSYM log to see which function it is in. You can narrow this down to the exact code within the function by using
The following example MAKSYM log is taken from an EKA1 build; however, the principle is the same for EKA2.
If, for example, the code address of the exception is at
Notice that you must specify the component that the file is part of, in this case
The listing file will be placed in the same directory as
Offset 8 is the first STR instruction. Comparing this with the C++ source: the first store is to PP::DmaMaxChannels, so clearly there is a problem writing this memory.
Each Symbian platform release may include APIs which are insecure. These APIs are inherited from earlier versions of the OS, which are based on the EKA1 kernel. You may choose whether to allow such APIs using the
Note: The list also states whether the API can be hidden at build time by defining the
The Register Access client interface is intended for use in writing device drivers. Writing device drivers involves frequent access to hardware registers by reading, writing and modifying them.
The client interface for the Register Access platform service is:
The Register Access client interface provides the following functions:
All these functions can be called in any context.
The address of a particular register on a particular platform is typically expressed as a base address and an offset: this is what you pass to the
The write functions take an unsigned integer (
The modify functions take two unsigned integers (
The following code reads the current value of a hardware register identified by a base address
The following code clears the bits specified by the bitmask
The design assumes that interrupts are generated by pen-down events; however, there is no difficulty in adjusting your implementation to deal with interrupts generated by pen-up events.
The heart of the design is a DFC that reads digitizer data samples when the pen goes down, and then samples the digitizer at regular intervals until the pen goes up. These samples are accumulated in a buffer in the platform independent layer. When enough samples have been gathered, the platform independent layer averages them, processes them, and issues pen movement events.
The flow of control through the digitizer is described by this state diagram. The notes following relate to the state diagram on the left-hand side.
The diagram on the right-hand side shows, in a simplified form, the flow of control. The numbers correspond to the state diagram numbers. Calls into the platform independent layer are shown in green boxes; important functions are shown in purple boxes.
If the pen is down at power on, i.e. when in the power up state, the device keeps sampling the digitizer at regular intervals until the pen is up. In the template port, the interval is defined by the constant
If the pen is up at power on, i.e. when in the power up state, or when the pen is first lifted, the sampling is initialised; see
If the pen is down in the collect sample state, then the digitizer coordinates are sampled at regular intervals, and stored in a buffer allocated by the platform independent layer. When a group of samples has been taken, the platform specific layer calls
In the template port, the interval is defined by the constant
If the pen is lifted while in the collect sample state, then the state changes to the pen up debounce state. This state describes the situation where a pen is lifted from the device and there is uncertainty as to whether this represents a positive move by the user, or whether the pen pressure has just been reduced momentarily. A delay is scheduled to see which case is true. At the end of the interval the state of the pen is checked again.
In the template port, the delay interval is defined by the constant
If the pen is found to be down at the end of this interval, then the state changes back to the collect sample state, and the sample buffer is reset.
If the pen is found to be still up at the end of this interval, then the pen is assumed to be intentionally up. The sample buffer is reset in preparation for future readings, and the platform independent layer is notified that the pen is now up by calling
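A minimal sketch of the pen-up debounce logic described above, with invented names: the real driver schedules hardware timers and calls into the platform independent layer, while this stand-in only tracks the state transitions and the sample buffer resets.

```cpp
#include <cstddef>
#include <cassert>

enum class TPenState { ECollectSamples, EPenUpDebounce, EPenUp };

// Illustrative state machine: samples accumulate while the pen is down; a
// pen-up event starts a debounce delay, and only a pen still up after the
// delay is reported as a genuine pen-up.
struct TDigitiserSm {
    TPenState iState = TPenState::EPenUp;
    size_t iSamples = 0;              // samples buffered for the PIL to average

    void PenDownSample()
    {
        // A pen seen down during debounce was a momentary lift: restart sampling.
        if (iState == TPenState::EPenUpDebounce) iSamples = 0;
        iState = TPenState::ECollectSamples;
        ++iSamples;
    }

    void PenUpSeen()                  // pen lifted while collecting
    {
        if (iState == TPenState::ECollectSamples)
            iState = TPenState::EPenUpDebounce;   // schedule the debounce delay
    }

    // End of the debounce interval; returns true if a pen-up is reported.
    bool DebounceExpired(bool aPenStillUp)
    {
        if (iState != TPenState::EPenUpDebounce) return false;
        iSamples = 0;                 // buffer is reset in either case
        if (aPenStillUp) {
            iState = TPenState::EPenUp;           // notify the PIL: pen is up
            return true;
        }
        iState = TPenState::ECollectSamples;      // back to collecting
        return false;
    }
};
```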
This section shows the main interactions between the two layers:
1
When the device is started, the platform independent layer calls the function
The platform independent layer then calls the function
See also
2
The platform specific layer calls
3
The platform independent layer calls
4
The platform independent layer calls
5
When the pen is lifted, the platform specific layer calls
6
The platform independent layer calls
7
8
There are two functions:
9
The platform independent layer provides the
This command has two formats:
Using the first format you provide the start and end addresses that you want to inspect; for example:
Using the second format you provide the start address and the number of bytes to dump (in hex); for example:
Both of the above examples dump 64 bytes from address 0x81240000. The output is a standard hex-dump:
You can use the
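For reference, here is a host-side sketch that produces output in the usual hex-dump shape: an address column, the hex bytes, and an ASCII column. The exact column layout of the debugger's own output is an assumption in this example.

```cpp
#include <cstdint>
#include <cstdio>
#include <cctype>
#include <cstddef>
#include <string>
#include <cassert>

// Produce one hex-dump line per 16 bytes: address, hex bytes, ASCII column.
// Non-printable bytes are shown as '.' in the ASCII column.
std::string HexDump(uint32_t aAddress, const uint8_t* aData, size_t aLength)
{
    std::string out;
    char buf[16];
    for (size_t off = 0; off < aLength; off += 16) {
        std::snprintf(buf, sizeof(buf), "%08x: ", aAddress + (uint32_t)off);
        out += buf;
        std::string ascii;
        for (size_t i = 0; i < 16 && off + i < aLength; ++i) {
            uint8_t b = aData[off + i];
            std::snprintf(buf, sizeof(buf), "%02x ", b);
            out += buf;
            ascii += std::isprint(b) ? (char)b : '.';
        }
        out += ' ' + ascii + '\n';
    }
    return out;
}
```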
Symbian platform is little-endian, which means that all values are stored with the least significant bytes at the lower addresses in memory (or "backwards" as commonly perceived).
For example, the value 0x1234ABCD would be shown in the memory dump as:
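The byte order can be checked with a few lines of C++: on a little-endian host the least significant byte, 0xCD, sits at the lowest address.

```cpp
#include <cstdint>
#include <cstring>
#include <cassert>

// Copy a 32-bit value into a byte array to inspect its in-memory layout.
// On a little-endian target (x86, most ARM configurations), 0x1234ABCD
// is stored as the byte sequence CD AB 34 12.
void DumpBytes(uint32_t aValue, uint8_t aOut[4])
{
    std::memcpy(aOut, &aValue, sizeof(aValue));
}
```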
The compiler may add padding between variables, either to speed up access or to satisfy alignment restrictions; for example, words cannot be on odd addresses.
As an example, the following struct:
would be laid out in memory as:
The padding and alignment are compiler-dependent. Generally, fields must be aligned on a boundary equal to their size; for example, a
When using GCC, classes which derive from
When using an EABI-compliant compiler, the virtual table pointer is always the first word of the class.
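A small example of the size-boundary rule: the offsets shown here hold for typical 32- and 64-bit ABIs, but, as noted above, padding is ultimately compiler-dependent.

```cpp
#include <cstddef>
#include <cstdint>
#include <cassert>

// The compiler inserts padding so each field sits on a boundary equal to
// its size: the char at offset 0 is followed by 3 bytes of padding so the
// 4-byte int can start at offset 4, and tail padding rounds the struct up
// so that arrays of it stay aligned.
struct TExample {
    char    iTag;     // offset 0 (offsets 1-3: padding)
    int32_t iValue;   // offset 4
    char    iFlag;    // offset 8 (offsets 9-11: tail padding)
};

static_assert(offsetof(TExample, iValue) == 4, "int aligned to its size");
static_assert(sizeof(TExample) == 12, "tail padding rounds the size up");
```

Reordering the fields as `iTag, iFlag, iValue` would shrink the struct to 8 bytes, which is why field order matters in tightly packed kernel structures.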
The section begins with a discussion of the APIs for fundamental types such as buffers and arrays. Kernel-side programs cannot use all of the same APIs as user-side programs, so you need to be aware of these restrictions and of the alternative APIs provided by the Kernel.
The guide then discusses a number of idioms for communicating between different threads and processes, including Publish and Subscribe, Kernel-side messages, shared chunks, and environment slots.
Some more advanced programming issues are then discussed, including how to design a device driver to behave correctly in a demand paged OS environment, in which client programs may not be continuously in memory, and how to integrate a device driver with system-wide power resource management.
The section ends with a discussion of how Kernel APIs encourage safe programming with the use of precondition checks.
The IIC is a technology-independent interface for serial bus technologies. The IIC supports multi-master and multi-slave serial interfaces, used by devices attached to the bus to exchange control information.
There are two categories of users who need to understand IIC:
Device driver creators, who need to know how to use IIC
Hardware adaptation developers, who need to understand how to write software to allow IIC to communicate with their hardware.
Both categories of user need to understand the basic concepts of IIC such as channels, nodes, transactions, transfers, and the IIC controller. You should read the
After reading and understanding the
After reading and understanding the
Because media drivers are implemented as kernel extensions, use the
where
Device drivers use DMA to copy data quickly between memory locations, and between memory and peripherals. This section describes how to create a port of the DMA Framework for your phone hardware. The Device Driver Guide documentation describes how to use the DMA Framework from a device driver.
The DMA Framework provides a Platform Independent Layer. You must provide a Platform Specific Layer to implement the interface to the DMA Controller hardware on your phone.
The sections are:
a
one or more
Every section contains a list of obey statements that specify ROM configuration information or specify the files to be included in the ROM image.
Extension ROM sections are marked by the
The structure is defined as:
See
See
See
A
A
A
A
A
An
The DMA client interface is the interface between DMA and the higher layers of the platform.
Devices with a Graphical User Interface (GUI) that accept input from a pen or a stylus must implement the digitizer driver. This section describes how to create a port of it for your phone hardware.
Symbian platform provides a generic Platform Independent Layer for the driver. You must provide a Platform Specific Layer to implement the interface to the digitizer hardware on your phone.
There are no specific tools required to use or implement the Interrupt platform service.
This tutorial describes how to implement the Asic class. This is a pure virtual interface that is defined and called by the Kernel, but which must be implemented by the ASSP/Variant. The tutorial assumes that the ASSP/Variant is split into an ASSP layer and a Variant layer.
For a minimal port, it isn't necessary to provide implementations for the entire
The
Taking the template port as a concrete example, the ASSP layer implementation of the
Entry conditions
called in the context of the initial (null) thread
interrupts are disabled
there is no kernel heap
memory management functions are not available.
What the function should do
This is called during stage 1 of kernel initialisation.
In this function, you need to:
initialise the real time clock
initialise the interrupt dispatcher before CPU interrupts are enabled.
set the threshold values for cache maintenance. You can set separate values for:
purging (invalidating) a cache
cleaning a cache
flushing (i.e. cleaning and invalidating) a cache.
You use the
As an example of what the threshold values mean, if you purge a memory region from cache, and the size of that region is greater than the threshold value, then the entire cache is purged. If the size of that region is less than or equal to the threshold value, then only the region is purged.
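The decision just described reduces to a simple comparison; the names below are illustrative, not the kernel's cache maintenance API.

```cpp
#include <cstdint>
#include <cassert>

enum class TCacheOp { ERegionOnly, EWholeCache };

// Threshold rule: maintaining a region larger than the threshold operates
// on the whole cache instead, because walking a large region line by line
// costs more than a full-cache operation.
TCacheOp PurgeDecision(uint32_t aRegionSize, uint32_t aPurgeThreshold)
{
    return aRegionSize > aPurgeThreshold ? TCacheOp::EWholeCache
                                         : TCacheOp::ERegionOnly;
}
```

This is why the thresholds must come from your own measurements: the crossover point depends on the cache line maintenance cost of the particular CPU.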
The threshold values are platform specific, and you need to choose your values based on your own performance measurements. Symbian cannot make recommendations. If you choose not to set your own values, Symbian platform supplies a set of default values, which are set by
Note that there is also a
set up the RAM zones. For details, see the
Typically, you would also initialise any memory devices not +initialised by the bootstrap. Any other initialisation that must happen +early on should also be done here.
The kernel calls the Variant's
The last line is a call into the ASSP layer, which is implemented as shown below. On the template port, it is the ASSP layer that initialises the interrupt dispatcher and the real time clock. The source for this is in
Entry conditions
called in the context of the supervisor thread
the kernel is ready to handle interrupts
the kernel heap and memory management system are fully functional.
What the function should do
This is called during stage 3 of kernel initialisation.
In this function, you need to:
enable interrupt sources
start the millisecond tick timer.
Optionally, replace the implementation used by
Any other general initialisation can also be done here.
As an example, on the template port, the function is implemented in the Variant layer, by
Millisecond tick timer
The kernel expects its tick handler routine to be called at a fixed microsecond period, the value of which is returned by the implementation of
The template implementation +is as follows:
Servicing the timer interrupt
The timer interrupt service routine is required only to call the
Note that it is a requirement that the timer period should be an integral number of microseconds, even if the exact period is not 1000 µs. It is always possible to add code to the interrupt handler, adjusting the exact number of ticks from tick to tick, so that over a large number of ticks the deviation from the average tends to 0. See also
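One way to realise the tick-to-tick adjustment described above is to carry the rounding error forward, as in this illustrative sketch. A real handler would adjust a hardware timer reload value rather than return a count; the class and member names are invented.

```cpp
#include <cstdint>
#include <cassert>

// Keep a long-run average tick period of iTargetNs nanoseconds using only
// whole-microsecond ticks, by carrying the rounding error from tick to tick.
struct TTickAdjuster {
    int64_t iTargetNs;
    int64_t iErrorNs = 0;   // accumulated deviation from the target

    explicit TTickAdjuster(int64_t aTargetNs) : iTargetNs(aTargetNs) {}

    // Return the next tick length in whole microseconds.
    int64_t NextTickUs()
    {
        int64_t wantNs = iTargetNs + iErrorNs;     // catch up on past error
        int64_t tickUs = (wantNs + 500) / 1000;    // round to the nearest us
        iErrorNs = wantNs - tickUs * 1000;         // remember the residue
        return tickUs;
    }
};
```

For a target of 999.5 µs this alternates between 1000 µs and 999 µs ticks, so the deviation from the average tends to zero exactly as the text requires.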
NanoWait() implementation
To replace the default implementation, you need to:
code your own function. This has the same signature as
where
register this implementation by adding the following call into your
You can see where this goes by looking at the template port at:
It is worth implementing this early so that it is possible to get trace output to see what the kernel is doing. This function is passed one character at a time. Normally this is sent to a UART, though it can be output through any convenient communications channel.
On the template port, this is implemented in the
+Variant layer, by
If no power management has been implemented,
+then this function is called when the system is to idle to allow power
+saving. This function can just return, until power management is implemented.
+Once power management has been implemented, then idling behaviour
+will be handled by the power controller, i.e. the Variant's implementation
+of the
This function is used to return the number of microseconds per tick. To avoid timing drift, a tick frequency should be chosen that gives a round number of microseconds per tick. The function can return zero until the tick timer has been implemented.
On the template port, this function is implemented in the ASSP
+layer, and can be found in the source file
See also
This is a function that the kernel +uses to get the system time. Its signature is
An implementation must set the
For the template reference board, the +implementation is as follows:
Until a real time clock is implemented, this function
+can just return
This function
+calls the register access functions in the
Note that tracing output is provided when the KHARDWARE bit in the kerneltrace flags is set for the debug build.
This is a function that the kernel uses +to set the system time. Its signature is
This sets the real time clock to the number of seconds that have elapsed since the start of the year 2000. A positive number represents a time after the start of 2000; a negative number is interpreted as a time before 2000.
For +the template reference board, the implementation is as follows:
Note that tracing output is provided when the KHARDWARE bit in the kerneltrace flags is set for the debug build. In this function, the trace output shows the value passed in from the kernel and then shows the value read back from the real time clock for verification.
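Since the interface counts seconds from the start of 2000, converting to and from Unix time (seconds since 1970) is a fixed-offset calculation. The helper names below are illustrative; the 946684800-second offset between the two epochs is a known constant.

```cpp
#include <cstdint>

// Seconds between 1970-01-01 00:00:00 and 2000-01-01 00:00:00 (UTC).
constexpr int64_t KSecondsTo2000 = 946684800LL;

// Unix time -> seconds-since-2000 (negative values mean "before 2000").
inline int64_t UnixToY2K(int64_t unixSecs) { return unixSecs - KSecondsTo2000; }

// Seconds-since-2000 -> Unix time.
inline int64_t Y2KToUnix(int64_t y2kSecs)  { return y2kSecs + KSecondsTo2000; }
```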
The function
The default implementation
+provided by Symbian platform that
This approach cannot always take into account factors
+such as processor frequency scaling. An alternative approach is for
+the Variant to supply its own implementation to be used by
On the template port,
You might find it useful to review
This is the HAL handler
+for the HAL group
This returns a
The address of this object is obtained by calling
In the template port, the function is implemented in the Variant +layer:
Here, the machine configuration information is represented
+by an object of type
Note that
If a startup reason is available from hardware
+or a preserved RAM location, it should be returned by the function.
+The default is to return
On the template port, this is implemented in the ASSP layer:
The
The
+function declaration for the
Description
This adds a new SDIO card to the SDIO card stack.
+This is an extension of the
Parameters
None
Return +value
The
+function declaration for the
Description
Implements +the state machine for the IO_RW_DIRECT command (CMD52).
Parameters
None
Return +value
The
+function declaration for the
Description
Used to write a CMD53 command to the card.
Parameters
None
Return +value
The following conventions apply to all obey file statements:
+A line that
+is a comment can be identified using the
Blank lines +are ignored.
The
A file name can, optionally, be enclosed within quotes; a file name that contains spaces must be enclosed within quotes.
The
Demand paging is a technique where memory appears to application programs to be present in RAM, but may in fact be stored on some external media and transparently loaded into RAM when needed. Demand paging trades off increased available RAM against decreased performance, increased media wear and increased power usage. More RAM is made available by loading pages only on demand, but a cost is incurred every time a page is loaded.
Demand paging is used to reduce the amount of RAM that needs to be shipped with a device and so reduce its cost.
For the code paging type of demand paging, the executable is stored in ROM. Since the memory locations that its pointers will refer to cannot be determined ahead of time, the pointers in the executable have to be modified after the page-in process. This process is known as 'relocation' and 'fix-up', and is usually done by the loader.
The executable is in ROM and so has to be loaded into RAM before it can be executed. When the required part of the executable is not present in RAM, a paging fault is generated, which starts the page-in process: this specifies which part of the ROM is to be paged in and which RAM page is to be used.
The above process describes, in very simple terms, how ROM paging works. With code paging, there is the added complication that the executable is not execute-in-place (it is probably stored in a file system such as ROFS), so when a page is paged in, any pointers it contains will not point to valid locations. An extra step is therefore needed to modify the pointers in the new page so that they point to meaningful locations once the page is in RAM. This process is known as 'relocation' and 'fix-up', and is usually done by the loader.
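The fix-up step can be sketched as follows. This is an illustration only, not the loader's actual code: real executable formats record which words in a page need patching; here the relocation offsets are supplied directly.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// After a code page is copied into RAM, absolute pointers inside it still
// refer to the address the executable was linked at. Adding the difference
// between the actual load address and the link address to each recorded
// relocation site makes them valid again.
void FixUpPage(uint32_t* page,                     // page contents in RAM
               const std::vector<size_t>& relocs,  // word offsets needing fix-up
               uint32_t linkBase, uint32_t loadBase) {
    const uint32_t delta = loadBase - linkBase;  // modular arithmetic handles
                                                 // loadBase < linkBase too
    for (size_t off : relocs)
        page[off] += delta;
}
```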
These +are the main components of code demand paging:
Rom image - The executable +is stored in a ROM.
RAM - Where the executable +will be executed.
Paging Fault Handler +- To detect that a page-in process is required and to carry it out.
Loader - This usually +does the 'relocation' and 'fix-up' process after the 'page-in' process.
The type of paging used, and the area of memory it applies to, is first specified in the
An audio hardware device is an individual hardware codec +device together with any associated controller hardware that allows the CPU +to communicate with it.
A basic audio hardware device typically provides +two communication paths: an input path for the audio recording and an output +path for audio playback.
Most basic audio hardware devices support +full duplex data transfer although some are only half-duplex or maybe just +simplex. Each input or output path may be used to transfer mono data or stereo +data. In the case of stereo this consists of two audio channels, the left +and the right; mono data consists of just a single audio channel.
A more complex audio hardware device could be an AC 97 codec or similar, plus its associated controller, which can support multiple input and output paths. Each input or output path may be used to transfer mono data, stereo data or 'multichannel data'. For example, left, right, centre, left and right surround and Low-Frequency Effects (LFE).
Units are used to provide access to the various audio hardware devices. Each unit supports just one communication path: either input or output.
Clients +of the audio hardware system can open a separate connection to each unit. +The mapping between the units on a given phone and the audio hardware devices +themselves is platform specific; this is determined by the implementer of +the Sound Driver PDD for that platform.
A basic full-duplex audio +hardware device is presented as two units, one input/record unit and one output/playback +unit. A more complex audio hardware device such as an AC 97 codec may be represented +to the rest of the OS as a number of audio input and output units.
An +audio channel is a data stream between a client and an audio unit. There are +one or more audio channels per driver channel.
A +driver channel is a session between a client and an audio unit. A client may +have driver channels open on more than one unit.
Many codecs that support stereo playback can only accept audio data delivered with the samples for the channels interleaved, for example, LRLRLR. For these audio hardware devices, to operate the channel in mono mode and play audio data that contains samples for only a single channel, it is necessary to perform mono-to-stereo conversion on the audio data before delivering it to the codec. So, for a section of mono audio data that contains three samples, let's call them S1, S2 and S3, each sample is duplicated, giving S1,S1,S2,S2,S3,S3, with identical samples being delivered to each channel.
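The duplication step can be sketched as below. This is an illustrative helper, not the template PDD's code; it shows what a conversion buffer would hold for the S1,S2,S3 example.

```cpp
#include <cstdint>
#include <vector>

// Mono-to-stereo conversion: each mono sample is written twice so the
// interleaved (LRLR...) stream carries identical left and right channels.
std::vector<int16_t> MonoToStereo(const std::vector<int16_t>& mono) {
    std::vector<int16_t> stereo;
    stereo.reserve(mono.size() * 2);
    for (int16_t s : mono) {
        stereo.push_back(s);  // left channel
        stereo.push_back(s);  // right channel
    }
    return stereo;
}
```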
Unfortunately, the only way for the
+PDD to implement this conversion is for it to allocate a conversion buffer
+and to copy each sample twice into this buffer. The PDD has to allocate a
+separate conversion buffer for each simultaneous transfer operation it supports,
+for example, for the template playback driver, a conversion buffer count equal
+to
When performing conversion
+in this manner, the maximum transfer size that the PDD can accept from the
+LDD becomes half the size of each conversion buffer, and when configured in
+this mode, the value returned to the LDD in response to
Likewise, the record data may be delivered by the audio hardware device only in stereo format, with the data samples for each channel interleaved. If the driver channel is configured in mono record mode, stereo-to-mono conversion has to be performed in the PDD to discard each alternate sample.
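The record-side conversion is the mirror image; again this is an illustrative sketch rather than the PDD's actual code, and it keeps the left channel (averaging the two channels is another common choice).

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Stereo-to-mono conversion for mono record mode: keep one sample of each
// interleaved L,R pair and discard the alternate sample.
std::vector<int16_t> StereoToMono(const std::vector<int16_t>& stereo) {
    std::vector<int16_t> mono;
    mono.reserve(stereo.size() / 2);
    for (size_t i = 0; i + 1 < stereo.size(); i += 2)
        mono.push_back(stereo[i]);  // keep left sample, drop right
    return mono;
}
```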
The
The
The
The
The
The
The following diagram shows the architecture of the Interrupt +platform service:
The Interrupt platform service provides an interface between +the interrupt controller in the hardware and device drivers in the +kernel-side. The Interrupt platform service is implemented in the +ASSP/Variant module.
Device driver developers – to bind and unbind interrupts.
Hardware implementers and base port developers - to implement +an interrupt dispatcher to manage interrupts.
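The dispatcher role described above can be sketched as a table that maps interrupt IDs to bound ISRs, with the hardware vector routing into a dispatch routine. All names, the table size, and the return convention here are illustrative; this is not the real Symbian Interrupt API.

```cpp
#include <cstddef>

typedef void (*TIsr)(void* aPtr);  // ISR signature: handler plus context pointer

const int KNumInterrupts = 32;     // hypothetical number of interrupt sources

struct SBinding { TIsr iIsr; void* iPtr; };
static SBinding Table[KNumInterrupts];  // zero-initialised: all slots unbound

// Bind an ISR to an interrupt ID; fails if the ID is invalid or already bound.
int Bind(int aId, TIsr aIsr, void* aPtr) {
    if (aId < 0 || aId >= KNumInterrupts || Table[aId].iIsr) return -1;
    Table[aId] = { aIsr, aPtr };
    return 0;
}

// Unbind a previously bound ISR.
int Unbind(int aId) {
    if (aId < 0 || aId >= KNumInterrupts || !Table[aId].iIsr) return -1;
    Table[aId] = { nullptr, nullptr };
    return 0;
}

// Called from the hardware vector with the pending interrupt's ID.
void Dispatch(int aId) {
    if (aId >= 0 && aId < KNumInterrupts && Table[aId].iIsr)
        Table[aId].iIsr(Table[aId].iPtr);
}
```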
BUILDROM is the Symbian platform ROM configuration tool. It is a unified front end that prepares the ROM configuration and then invokes ROMBUILD to generate the ROM image, or ROFSBUILD to generate the ROFS image.
If the OBY files are encoded in UTF-8 with non-ASCII character support, use the following BUILDROM command syntax:
If the OBY files are encoded in a local character set with non-ASCII character support, use the following BUILDROM command syntax:
Reduces the physical memory consumption during image generation.
Level of information to log file. The following valid log +levels are available:
Instructs
Instructs
The name of the ROM image. This overrides any name specified
+using the
The strict option. Specifying this means that any missing
+files will stop
If this option is not specified, then it is possible +to generate a ROM image with missing files.
Preserves the intermediate files and folders (data drive
+and
Specifies the external tools that need to be invoked. These are the names of the Perl modules (without the .pm extension). Multiple module names must be separated by commas. This is an optional parameter.
Alternatively, external tools can be invoked by using IBY file
+keyword
Parses the specified feature manager database XML files
+and generates the features data file(s) (
Notes
You can provide +multiple feature manager database XML files delimited by commas.
If a feature
+manager option (
Includes the specified features data file in the ROM image.
Note: This option is ignored by
Specifies the location to create the Z drive directory.
By default, a Z drive directory is created in the location from +where BUILDROM was invoked.
Specifies the location to create the data drive directory.
By default, the data drive directory is created in the location +from where BUILDROM was invoked.
Enables BUILDROM to continue to create the data drive image +even if the SIS file(s), non-SIS file(s), or the Z drive image file(s) +are missing or corrupted.
Specifies the Z drive image (ROM, ROFS, and CORE image).
Instructs BUILDROM not to delete any pre-existing data drive +folder.
Specifies a parameter file for INTERPRETSIS to take additional +parameters.
Logs all stub-sis and SWI certificate file(s) that are part +of the Z drive image onto a log file.
Specifies the command-line argument for the InterpretSIS +tool. These command-line arguments take precedence over the arguments +in the parameter file.
Specifies an optional path in which the IBY files are stored. +This option is case-sensitive.
Notes:
If the IBY files
+are found both in the default path (
If the IBY files
+are not found in the default path,
Specifies the number of working threads that can run concurrently
+to create a ROM image. The
If the
If the
If the
Compresses the ROM image. It supports three types of compression:
Warns if an incorrect input file is selected. For example, a warning is displayed if an incorrect binary file is selected when the binary variant is enabled.
Suppresses the generation of any ROM or ROFS symbol file. +This option generates all the relevant files except symbol files.
Generates an INC file of ROM image. The INC file contains
+size information of the paged and the unpaged section within the ROM
+image. The name of the output file is the same as that of the output
+ROM image, with
Creates an SPI file.
Enables positioning of the SPI file.
Specifies the compression algorithm to use. The following
+options are available with the
Enables the cache mechanism, which ensures that BUILDROM uses cached executable files while creating a ROFS image. This allows BUILDROM to reuse existing cached files or generate new ones.
Notes:
The cache mechanism +is disabled by default.
The cached files +are stored on the hard disk.
The cache command
+line options (
Disallows BUILDROM from using cached files while creating +a ROFS image.
Deletes all cached files from the hard disk.
Generates a dependencies file describing the internal dependencies among the executables or DLLs in a ROM image.
Note: You can only generate dependency information for the paged section of a ROM image.
Checks the character case of the path or name in OBY or +IBY files. By default, this option is disabled.
Note: + This option is only valid on Windows.
Generates the output files at the specified location.
Prepends EPOCROOT to the file location, if the specified
+location starts from
Note:
See the
BUILDROM allows you to pass parameters in a file using the
In
+the BUILDROM command above,
Rules of this argument file are as follows:
Single line +comments begin with a semicolon (;).
Comments can +also be specified after the parameters.
Parameters can be specified either on a single line separated by spaces, or on multiple lines.
Parameters provided +in a parameter file override those specified in the command line.
Environment +variables must be specified in the command line and not in the parameter +file.
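To illustrate the rules above, a hypothetical parameter file might look like this (the option names shown are examples for illustration, not a definitive list of BUILDROM options):

```
; build options for a hypothetical device (a ';' starts a comment)
-D_SERIAL_DOWNLOAD        ; comments may also follow a parameter
-nosymbols -loglevel1     ; several parameters on one line...
-strict                   ; ...or one per line
```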
This topic lists the HAL group/function ID pairs. The list is ordered by group.
Symbian defines capabilities for each function ID. To find out a function ID's capabilities, follow the link to its reference information. If no capabilities are listed in the reference documentation for a function ID, it means that no capabilities are required.
+Note:
An attribute can map to more than one function ID. This reflects the fact that any given attribute can be passed to both
There are group/function ID pairs for which there is no attribute. This reflects the fact that the hardware specific information represented by that group/function ID cannot be accessed from the user side using the generic interface.
Attributes are defined as values of the
HAL groups are defined as values of the
Function IDs are defined as values of various enums in
This +group is defined as internal and is listed here for information purposes only. +However, the attributes are public.
This +group is defined as internal and is listed here for information purposes only. +However, the attributes are public.
This class controls access to the MultiMediaCard stack. This
+class has a number of pure virtual functions that need to be implemented in
+your Variant DLL. The diagram at
There +is one virtual function with a default implementation that needs to be overridden.
Init()
The
+function is intended to initialize the stack, and is called during initialization
+of the MultiMediaCard controller Variant DLL from
You +will almost certainly need to provide your own implementation to perform any +platform-specific MultiMediaCard stack initialization. Whatever your implementation +provides, it is important that you call the base class function from within +your derived version.
Return
You will allocate a data transfer buffer here. The MultiMediaCard media driver needs a memory buffer to perform data transfer operations. Where supported, DMA is generally used to do this, and requires physically contiguous memory. However, the media driver is created each time a card is inserted into a machine and destroyed when the card is removed, and giving the media driver the responsibility for allocating the memory buffer means that it might not always be possible to allocate physically contiguous pages for it as memory becomes fragmented over time.
The MultiMediaCard media driver uses the
Although the MultiMediaCard +media driver only expects a single buffer, it actually uses this as two separate +buffers:
a minor buffer which +must have at least enough space for the MBR (512 bytes)
a cache buffer to cache +data blocks from the card.
The ideal size of the cache buffer depends on the characteristics +of the card present at the time, and it is possible to customize the MultiMediaCard +controller at the platform specific layer for a particular card.
The following example code allocates a physically contiguous buffer - a minor buffer size of one block is allocated together with a cache buffer size of eight blocks. The whole buffer is then rounded up to a whole number of memory pages.
MachineInfo()
The +function returns configuration information for the MultiMediaCard stack.
The
+function takes a reference to a
ProgramPeriodInMilliSeconds()
When a data block is written to a card, the data is read into an +internal buffer on the card and is then programmed into the payload memory. +While the card is in programming mode, it cannot be read from, or written +to, but it is possible to query its status using CMD13.
Immediately
+after a block of data is written by
For platforms that do not provide an interrupt to indicate when
+programming mode is finished,
AdjustPartialRead()
Some
+cards support a partial read feature, which is indicated by the
The MultiMediaCard media driver uses this feature to +read small amounts of data more quickly. However, many hardware implementations +impose restrictions on the granularity of the data that can be read from the +card. For example, they may use a 32-bit FIFO.
This function allows +you to enforce the limits imposed by the hardware.
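As an illustration of the kind of limit enforcement involved (this is not the template port's actual implementation), a 32-bit FIFO can be accommodated by rounding the requested start address down, and the end address up, to the nearest word boundary:

```cpp
#include <cstdint>

// Round a requested [start, end) byte range out to 4-byte (word) granularity,
// as a hardware FIFO with 32-bit accesses would require.
void WordAlign(uint32_t start, uint32_t end,
               uint32_t& physStart, uint32_t& physEnd) {
    const uint32_t mask = 0x3;         // 4-byte granularity
    physStart = start & ~mask;         // round start down
    physEnd   = (end + mask) & ~mask;  // round end up
}
```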
The
For example, to word align data, +the function would be implemented using the following code:
GetBufferInfo()
The
+MultiMediaCard media driver needs a memory buffer to perform data transfer
+operations, and this is, typically, allocated once only by the
The MultiMediaCard media driver is created each time a card is inserted into a machine and destroyed when the card is removed, and it uses this function, each time it is created, to get a pointer to the memory buffer, and to get its length. The MultiMediaCard media driver then uses this buffer, over its lifetime, for data transfer operations.
SetBusConfigDefaults()
The function returns information about the MultiMediaCard bus configuration +for this platform.
The function takes a
The information returned by the call to
First, it is called
+at the start of the card initialization stage with the bus speed argument,
Second, it is called
+after the CSD registers for each card have been read with the bus speed argument,
InitClockOff()
Switches +from identification mode of operation to data transfer mode operation.
When
+this function is called, the clock information in the
This function
+should, in general, just switch from open drain to push-pull bus mode, with
+the clock rate being changed at the start of
ASSPDisengage()
This +function is called by the platform independent layer each time a session has +completed or has been aborted.
The function gives the platform specific +layer the chance to free resources or disable any activities that were required +to perform the session.
The implementation should not turn off the clock to the hardware interface as this will be turned off by the inactivity timer. Typically, the implementation disables DMA and interface interrupts, and forces the hardware interface into idle.
At the end of your implementation,
+you must add a call
ASSPReset()
This
+function is called by the platform independent layer when the current session
+is being aborted, and platform specific asynchronous activity is to be cancelled.
+The function may also be called by the platform specific layer as part of
+the
The function gives the platform specific layer the chance to stop all activities
+on the host stack. It will, in general, perform the same operations as
At the end of your implementation, you must add a call
CardDetect()
Implement +this function to report whether a card is present in a specified card socket.
This
+function takes a
WriteProtected()
Implement +this function to report whether a card in a specified card socket is mechanically +write protected.
This function takes a
DoPowerDown()
This +function is called as part of the bus power down sequence:
by the power model, +in power standby and power emergency standby situations
when a door-open event +occurs
when the bus inactivity +timer has timed out
if a power supply unit +(PSU) voltage check fails.
The function should stop all activities on the host stack, turn off
+the clock to the hardware interface and release any requested power requirements
+made on the power model. The function is very often implemented as a call
+of
The
+function should not turn off the MultiMediaCard power supply unit as this
+will be performed immediately afterwards by a call to the
DoPowerUpSM()
This
+is a state machine function, called as a child function at the start of the
The function should perform the necessary platform +specific actions associated with powering up the bus. This includes turning +on the MultiMediaCard PSU. However, the hardware interface clock should not be +turned on as part of this function.
If the controller has to request +power resources from the power model, e.g. where a fast system clock is required +all the time the bus is powered, then this state machine function can be used +to wait asynchronously for this resource to become available.
If the +activity performed by this function completes successfully:
it must call
it returns
The function should return
See
+the general background information on
InitClockOnSM()
This
+is a state machine function, called as part of the
The function should turn on the clock to the hardware +interface. The function is so named because this clock is always first turned +on at the identification mode frequency.
The function is implemented +as a state machine function because it may be necessary to include a short +delay after the clock has been turned on to allow it to stabilize.
If +it is necessary for the MultiMediaCard controller to request any power resources +from the power model on this platform, for example, requesting a necessary +system clock, then it should be performed as part of this function. In some +cases, it may be necessary to wait for this power resource to become available.
At
+the beginning of your implementation, you must add a call
The function should return
Note:
the function is only called once for each invocation of the CIM_UPDATE_ACQ macro. The important point is that the interface clock is being turned on after a period when it has been off, and therefore often needs time to stabilize.
In the course of executing a session, the MultiMediaCard controller may switch the clock more than once between the identification mode frequency and the data transfer mode frequency, but this function only ever gets called once.
See the general background information on
IssueMMCCommandSM()
This
+is a
The
+input parameters for the command are passed via the current command descriptor,
+an instance of the
Information about the command
+response, the number of bytes transferred etc., is passed back using the same
+command descriptor. Specifically, the platform independent layer relies on
+responses to the following commands being returned in the
Returns the OCR register
+value in response to a SEND_OP_COND command (CMD1). Note that there is no
+CRC with this response. Your code should ignore any CRC failure indication
+from the MultiMediaCard controller hardware, and just copy the response into
Returns the CID register +value in response to an ALL_SEND_CID command (CMD2) and a SEND_CID command +(CMD10).
Returns the CSD register +value in response to a SEND_CSD command (CMD9).
Returns the card status +in response to all R1 and R1b commands.
Note that you can use the functions
The function should
+return
See also background +information:
This class controls the MultiMediaCard socket's power supply. +A class needs to be derived from this in the platform specific layer to handle +the Variant specific functionality of the power supply.
This class
+has a number of pure virtual functions that need to be implemented in your
+Variant DLL. The diagram at
There +is one virtual function with an empty default implementation that needs to +be overridden.
DoCreate()
The +function is intended to perform hardware initialization on the MultiMediaCard +power supply, for example, setting port direction registers.
The function
+is called after creation of the
The function has a default implementation
+that just returns
Your implementation
+should
PsuInfo()
The +function returns information about the MultiMediaCard power supply.
The
+function takes a reference to a
Note:
You can use the constant
Set
DoSetState()
The +function is called to turn the PSU on or off.
The requested state
+of the PSU depends on the
If the PSU supports voltage adjustment, rather than a single fixed
+value, then the required voltage setting is contained in the protected data
+member
Note that the stack may call this function to request the power to be turned on when it is already on. You should check for this and do nothing if the power is already in the requested state.
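The "do nothing if already in the requested state" check can be sketched as below; the type and member names are illustrative, not the real PSU class's members.

```cpp
// Idempotent power-state setting: a redundant request must not touch hardware.
enum TPsuState { EPsuOff, EPsuOn };

struct PsuSketch {
    TPsuState iCurrent = EPsuOff;
    int iHardwareWrites = 0;  // counts actual (simulated) register accesses

    void DoSetState(TPsuState aState) {
        if (aState == iCurrent)
            return;           // already as requested: avoid a redundant write
        // ... program the PSU hardware here ...
        ++iHardwareWrites;
        iCurrent = aState;
    }
};
```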
DoCheckVoltage()
The +function is called to check that the voltage level of the PSU is as expected.
Checking the voltage level may be a long running operation (e.g. using an ADC), and it may not always be appropriate to perform and complete the check directly within this function.
When voltage checking is complete, either synchronously
+in this function, or asynchronously at some later stage, the result should
+be returned by calling the base class function
Note that this function is not called as
+part of
This class provides support for dealing with media +change events, i.e. the insertion and removal of removable media.
A +class needs to be derived from this in the platform specific layer to handle +the Variant specific functionality.
This class has a number of pure
+virtual functions that need to be implemented in your Variant DLL. The diagram
+at
There +is one virtual function with an empty default implementation that needs to +be overridden.
Create()
The +function is intended to perform hardware initialization on the MultiMediaCard +media change hardware, for example, setting port direction registers, binding +to the door open interrupt etc.
The function is called after creation
+of the
The function has a default implementation
+that just returns
Your implementation
+should return
MediaState()
The
+function should return the current state of the media, i.e. whether the media
+door is open or closed. To indicate the state, it should return one of the
DoDoorOpen()
This +function should handle a media door open event. What needs to be done depends +on how door open and door closed events are detected.
The most common pattern is where the platform hardware is capable of generating an interrupt when a door open event occurs, but cannot generate an interrupt when a door closed event occurs. In this situation, the hardware provides a readable door status that can be checked for the door closed state on a periodic basis (i.e. polling).
Assuming this,
Note that the door open interrupt is cleared before this function is
+called. The interrupt results in a call to
Your implementation would necessarily be different if an open door event could not be signalled by an interrupt and a tick timer were to be used to poll for an open door status.
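The common pattern described above (interrupt on open, polling for closed) can be sketched as a small state machine. The names and the source of the polled status are illustrative, not the real media change class:

```cpp
// Door-open is interrupt-driven; door-closed is detected by periodic polling
// of a readable hardware door status, since closure raises no interrupt.
struct DoorMonitor {
    bool iOpen = false;

    // Bound to the door-open interrupt.
    void DoDoorOpen() { iOpen = true; }

    // Called on a periodic tick while the door is open; aDoorClosedStatus is
    // the hardware's readable "door is closed" state.
    void Poll(bool aDoorClosedStatus) {
        if (iOpen && aDoorClosedStatus)
            iOpen = false;  // door-closed event detected by polling
    }
};
```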
DoDoorClosed()
This +function should handle a media door closed event. What needs to be done depends +on how door open and door closed events are detected.
The most common pattern is where the platform hardware is capable of generating an interrupt when a door open event occurs, but cannot generate an interrupt when a door closed event occurs. In this situation, the hardware provides a readable door status that can be checked for the door closed state on a periodic basis (i.e. polling).
Assuming this,
Your implementation +would necessarily be different if a closed door event were to be signalled +by an interrupt.
ForceMediaChange()
This function is called by the local media device driver to force a remount of the media device, for example, to reopen a media driver in secure mode.
It
+should result in the same sequence of operations as would occur if a door
+open event had taken place; for example, disabling the door open interrupt
+and calling
This is a class, also known as
This class defines +a number of pure virtual functions that need to be implemented in your Variant +DLL to provide the functionality that is specific to your platform.
An
+instance of your
IsMMCSocket()
Implement this function to indicate whether the peripheral bus socket, as
+identified by the specified peripheral bus socket number, is designated as
+a MultiMediaCard socket on this platform. It should return
The function is called from
Internally,
+Symbian platform reserves space for an array of pointers to
(This +array is internal to Symbian platform.)
If, on this platform, a socket
+has been designated as a MultiMediaCard stack, then the function not only
+returns
NewSocket()
Implement
+this function to create, and return a pointer to, an instance of the
The function is called from
If
+you create a
If you create an instance of a
Note:
The socket number can
+fall into the range 0 to
This function is only
+called for sockets that are associated with MultiMediaCard devices as reported
+by the function
NewStack()
Implement
+this function to create, and return a pointer to, an instance of a
The function is called from
The
+peripheral bus socket number and pointer to the socket object should be forwarded
+to the
Note:
The socket number can
+fall into the range 0 to
The socket is the object
+created by
This function is only
+called for sockets that are associated with MultiMediaCard devices as reported
+by the function
MediaChangeID()
Implement this function to report which media change object is to be +associated with the specified peripheral bus socket number.
The function
+is called from
The
+media change object is represented by a number, which is simply an index value
+that ranges from 0 to
(This array is internal to Symbian platform.)
Note:
The socket number can
+fall into the range 0 to
This function is only
+called for sockets that are associated with MultiMediaCard devices as reported
+by the function
NewMediaChange()
Implement this function to create, and return a pointer to, an instance
+of a
The function is called from
The
+media change number should be forwarded to the
Note:
The media change number
+can fall into the range 0 to
This function is only
+called for sockets that are associated with MultiMediaCard devices as reported
+by the function
VccID()
Implement this function to report which power supply unit (PSU) object is to be associated with the specified peripheral bus socket number.
The function is called
+from
The
+PSU object is represented by a number, which is simply an index value that
+ranges from 0 to
(This array is internal to Symbian platform.)
Note:
The socket number can
+fall into the range 0 to
This function is only
+called for sockets that are associated with MultiMediaCard devices as reported
+by the function
NewVcc()
The
+function should create, and return a pointer to, an instance of a
The function is called from
The
+Power Supply Unit (PSU) number and the media change number should be forwarded
+to the
Note:
The PSU number can fall
+into the range 0 to
The media change number
+can fall into the range 0 to
This function is only
+called for sockets that are associated with MultiMediaCard devices as reported
+by the function
Init()
Implement this function to perform any initialization that the platform specific layer needs to do.
It should return
Note that you should not do +any initialization that is specifically associated with:
the stack - use
the power supply unit
+- use
dealing with media change
+events - use
The platform-specific layer as implemented in the Variant DLL is a standard kernel extension. The entry point for all standard kernel extensions is declared by a
statement, +followed by the block of code that runs on entry to the DLL.
Initialization
+of the MultiMediaCard DLL is done at this point, and follows the pattern shown
+below. It needs to create an instance of your
In this example,
To transfer data between a user side process and the media device, the Platform Specific Layer allocates a DMA-safe buffer at initialization. This buffer is allocated from physical memory. The memory in the user side process is virtual, and you perform an inter-process copy of data between the user side process and the buffer allocated by the Platform Specific Layer.
Data transfer is faster if the MultiMediaCard controller knows that an address passed in an I/O request is a physical address. The File caching and Demand Paging features in the file server and kernel can pass physical addresses. A physical address avoids the need for an inter-process copy operation.
If
+you use a mechanism like
Implement double buffers
If you enable double buffer behavior, the MultiMediaCard subsystem can perform multiple data transfers in a single bus transaction. The MultiMediaCard subsystem logically splits the buffer allocated by the platform specific layer into two segments. Data transfer to the media device is in progress from one segment - this is the active segment. Concurrently, the media driver can prepare data in the other segment.
To implement double buffers, you need to make changes to the platform specific layer.
Use the command descriptor functions
Use the function
Use the function
code where you test
+the progress of the data transfer operation and set up the MMC command. Do
+not change this code, because
code where you set up
+the DMA controller to transfer a number of blocks of data. Replace references
+to
You can use the function
Separate the command and data phases
Without double buffer behavior, a single MMC command is always associated with a single buffer into which the hardware transfers data. With double buffer behavior, multiple buffers or segments are used to transfer data within a single command. You need to separate the command and data transfer phases.
This code fragment is a simplified example of a platform specific layer that sets up the command and the data transfers in separate stages:
If you depend on the MMC controller to signal the completion of data transfer after all blocks have been transmitted or received, change the DMA controller code so that it blocks the stack when DMA transfer starts and unblocks the stack when the current DMA transfer finishes, while you wait for the final interrupt that signals the end of the data transfer.
The
+following code fragment shows how to set the
Implement the double buffer state machine
Update the platform
+specific layer to implement the double buffer state machine. You use the function
This
+function sets the static
The +following code fragment shows how you do this:
Register support for double buffers with the platform independent layer
You
+must tell the
Choose the size of the buffer
To choose the optimum size
+of buffer, you must perform benchmark tests on your system. A small buffer
+gives you a lower command setup latency, but DMA transfers and calls to the
+callback function
Testing
You need to do the standard E32 and F32 automated tests to check the operation of the MMC subsystem. You also need to perform the MMC specific manual test, T_MMCDRV. The test listed below performs data transfers in excess of the PSL buffer size to make sure that double buffer behavior is exercised.
The test T_MMCDRV must be performed on versions of the platform specific layer that have: double buffers enabled, double buffers disabled, and a number of different buffer sizes (for example, from 32k to 256k).
The test cannot dynamically set the size of the buffer. You must configure the buffer manually to test all configurations.
To measure performance, use T_FSYSBM, with and without double buffers enabled.
Register support for physical +addresses
There are three items to do:
you must tell the
If
+you set the
You must tell the
Each flag represents a maximum transfer length. The MultiMediaCard subsystem splits a data transfer request that is bigger than the maximum into multiple data transfers.
You must tell the
The following code is an example implementation of
Modify the data transfer +phase to handle physical memory
The implementation of double +buffers has separated the command setup and the data transfer phases. You +need to change the data transfer phase to handle physical memory.
The data member
You do not need to perform virtual address to physical address translation on physical addresses.
You do not need to perform DMA synchronization for physical addresses, because the local media subsystem performs DMA synchronization for you. You need to perform DMA synchronization for virtual addresses. DMA synchronization is performed by calls to
The following code is an example of the changes needed for a read +operation.
The use of data paging impacts the task of writing and migrating device drivers in two main ways: the preconditions for kernel API calls and the performance of the kernel APIs.
Firstly, kernel APIs which access user memory may only be called subject to preconditions. The preconditions are that
no fast mutex may be +held while calling them, and
no kernel mutex may +be held while calling them.
The APIs are these:
Device drivers use kernel side APIs to access user memory, and even when they are called in accordance with their preconditions they are no longer guaranteed to execute in a short and bounded time. This is because they may access paged memory and incur page faults which propagate from one thread to another. This document discusses how to mitigate the impact of data paging on device drivers.
Three general principles are involved in mitigating the impact of data paging on device drivers.
Device drivers should +not use shared DFC queues.
Device drivers should, +as far as possible, not access paged memory.
If a device driver needs +to access paged memory, it should do so in the context of the client thread.
There +are three main categories of device driver:
Boot-Loaded Non-Channel +Drivers
Boot loaded drivers are built as kernel extensions. They are typically simple device drivers such as keyboard drivers with limited or no client side interface, and are not much impacted by data paging. It is generally safe for them to pass data structures using the HAL in the context of a kernel thread and for them to execute in the context of a kernel thread; however, this assumption must always be verified for individual cases.
Media Drivers
Media drivers are both channel based drivers and kernel extensions. When written according to the recommended model, they either execute wholly in the context of their clients or use a unique DFC queue and associated kernel thread. If these recommendations are followed, no additional measures to mitigate the impact of data paging are required.
Dynamically loaded +channel based IO device drivers
Channel based IO device drivers
+are based on various models: all are dynamically loaded. They are derived
+either from
Channel based drivers derived from
The impact of data paging on device drivers is mitigated +by the use of various different techniques which are the subject of the rest +of this document.
Passing +data by value
Clients should pass data by value not as pointers. +Return values of calls should be return codes not data.
Using dedicated DFC queues
All drivers which use DFCs should
+use a dedicated DFC queue to service them. You should not use the kernel queues
To service boot loaded drivers and media drivers, you
+create a DFC queue by calling
To
+service dynamically loaded drivers derived from DLogicalChannelBase you call
To service a dynamically loaded driver derived from
You destroy queues
+by calling their function
Setting +realtime state
The realtime state of a thread determines whether
+it is enabled to access paged memory. If a thread is realtime (its realtime
+state is on) it is guaranteed not to access paged memory, so avoiding unpredictable
+delays. The realtime state of a thread may be set to
If a driver uses DFC threads and is subject to performance guarantees, the threads' realtime state should be set to on (this is the default when data paging is enabled). Otherwise the state should be set to off; the warning state is used for debugging.
Validating +arguments in client context
It is often necessary to validate +the arguments of a request function. This should be done in the context of +the client thread as far as possible.
When a driver derived from the
+class
Accessing user memory from client context
The DFC should access +user memory as little as possible. Whenever there is a need to access user +memory and it can be accessed in the context of the client thread, it should +be.
When the driver is derived from the class
When the
+driver is derived from the class
Message data
+can only be stored on the client thread's kernel stack if the message is synchronous
+and the size of the data is less than 4Kb. Since the stack is local to the
+client it can be used by more than one thread. One way of doing this is to
+implement
Where the message is asynchronous you can use a similar strategy
+for overriding the
Using +TClientDataRequest
An asynchronous request often needs to copy
+a structure of fixed size to its client to complete a request. The
The driver creates a
When the client makes
+a request the
The data to be written
+is copied into the buffer of the
A call to
The client is signalled +immediately.
When the client thread +next runs, the buffer contents and completion value are written to the client.
Using +TClientBufferRequest
When it is necessary to access user memory
+from a DFC thread context, that memory must be pinned for the duration of
+the request and unpinned when the request is completed. The pinning must be
+performed in the context of the client thread. The
The driver creates a
When a client makes a
+request, the
The driver calls
When the request is
+complete, the driver calls
When the client thread +next runs, the completion value is written back to the client along with the +updated length of any descriptors.
Using +Kern::RequestComplete()
The function
which is now deprecated, and its overloaded replacement
The +overloaded version should always be used, as it does not take a thread pointer +argument.
Using +shared chunks
Shared chunks are a mechanism by which kernel side +code shares buffers with user side code. As an alternative to pinning memory +they have the following advantages:
Shared chunks cannot +be paged and therefore paging faults never arise.
Shared chunks transfer +data with a minimum number of copying operations and are useful where high +speeds and large volumes are required.
Shared chunks present disadvantages when a driver is being migrated +rather than written from scratch, as the client API must be rewritten as well +as the driver code.
DMA
+requires contiguous memory chunks to be allocated to do copy and read operations.
+A driver can do this by using the Symbian Kernel API. A contiguous block of
+memory can be allocated using
After allocating the buffer, a global hardware chunk must be created, and its attributes set. The attributes define chunk properties such as whether it is non-cacheable, or whether it can be accessed only in supervisor mode, and so on.
Buffers can also be made cacheable, in which case the driver must synchronise by flushing the cache before writing and after reading.
However, unless required by design, DMA chunks are used in non-cacheable and non-buffered mode.
DMA buffers have to be deallocated when they are no longer used. Buffers are deallocated in the physical channel destructor.
Like allocation, deallocation is performed in two stages. When you allocate, the contiguous buffer is allocated and a hardware chunk is created; when you deallocate, the contiguous buffer is deallocated and the chunk is closed.
Device drivers must register as clients to access services of +the PRM.
Introduction
Kernel side components
+can access PRM services through exported kernel side APIs by including
Kernel side components register as clients with the PRM from their entry point during kernel boot, and device drivers register as clients on channel opening.
User side
+components can access PRM services through a user side proxy interface.
+See
To guarantee deterministic
+behaviour during resource state change and read operations (mainly
+on a long latency resource), clients should pre-allocate ‘client level’
+objects and request objects using
Before de-registering as clients from PRM, device +drivers must:
Cancel any pending
+asynchronous request with
Cancel all resource
+state change notification requests issued by this client using
Delete the asynchronous +callback objects and notification objects created by this client to +avoid memory leak.
Omissions
PRM APIs cannot be called from a null thread, an ISR, or from DFC thread 1. However, it is possible that an ISR needs to operate on a power resource; for example, a power supply may need to be turned on before a hardware register that an ISR is interested in can be read. In this case, base port developers need to access the power resource directly. For the PRM to provide a consistent view of power resources, any resource manipulation in an ISR must leave the resources unchanged; so in the above example, the ISR must turn OFF the power supply after it has read the registers.
+
There are no specific tools required to build the IIC platform
+service. Refer to
Kernel-side code can use a number of techniques to perform thread synchronisation, +to protect critical regions within threads or to ensure that shared data can +be safely read or modified.
+
A +mutex (mutual exclusion) is a mechanism to prevent more than one thread from +executing a section of code concurrently. The most common use is to synchronise +access to data shared between two or more threads.
There are two types +of mutex: the fast mutex, and a more general heavyweight mutex - the Symbian +platform mutex. Which one you use depends on the needs of your code and the +context in which it runs.
The fast mutex
A fast mutex is the fundamental way of allowing mutual exclusion between nanokernel threads. Remember that a Symbian platform thread, and a thread in a personality layer, are also nanokernel threads.
A fast mutex is represented by a
Rules
A fast mutex is, by definition, fast, and the price to be paid is that there are a few rules that must be obeyed:
a thread can only hold +one fast mutex at a time, i.e. a thread cannot wait on a fast mutex if it +already holds another fast mutex
a thread cannot wait +on the same fast mutex more than once
a thread must not block
+or exit while holding a fast mutex because the thread is in an implied
In the moving memory model, the user address space is not guaranteed +to be consistent while a kernel thread holds a fast mutex.
How to use
Typically you declare a fast mutex in a class declaration, for example:
When you want to get hold of the fast mutex, i.e. when you are about to enter a section of code that no other thread is executing concurrently, you wait on that fast mutex. If no other thread has the mutex, then your thread gets the mutex, and control flows into your critical code. On exiting the section of code, you signal the fast mutex, which relinquishes it.
If, on the other hand, another thread already has the fast mutex, then your thread blocks, and only resumes when the other thread exits the code section by signalling the fast mutex.
Getting and relinquishing the mutex is done using
+the
respectively, passing a pointer to your
The
+kernel lock must be held when
Although +this sounds like you will be blocking while holding the kernel lock, in reality +you do not because the thread is not blocked until after the kernel lock is +released.
Be aware however that there may be situations where you
+already have the kernel lock, or in the case of IDFCs, you do not need to
+acquire it as no preemption can occur. In these cases, you just call
The following diagram illustrates the general principle:
There are a number of assumptions here, one of which is that the +priorities are such that thread T1 does not run until a reschedule occurs, +after T2 has been interrupted.
Example using NFastMutex to protect a critical region
The file
The mutex itself is declared +in the channel class:
The function
The overloaded
Look at
+the test case for
Each thread takes a pointer to the channel object as an argument,
+this is the
Before
+entering the critical region, the threads call
The +Symbian platform mutex
The Symbian platform mutex provides mutual +exclusion between Symbian platform threads without the restrictions imposed +by the fast mutex.
The Symbian platform mutex is represented by a
Characteristics
Operations
+on a
it is possible to wait +on a Symbian platform mutex multiple times, provided it is signalled the exact +same number of times
It is possible to hold +several Symbian platform mutexes simultaneously, although care is needed to +avoid deadlock situations
A thread can block while +holding a Symbian platform mutex
A Symbian platform mutex
+provides priority inheritance, although there is a limit on the number of
+threads that can wait on any
When a Symbian platform mutex is created it is given an 'order' value. This is a deadlock prevention mechanism, although it is used only in debug builds. When waiting on a mutex, the system checks that its order value is less than the order value of any mutex that the thread already holds.
In general, most code written for device drivers should use values
+which are greater than any used by the kernel itself. There are 8 constants
+defined in
The kernel faults with “Mutex Ordering Violation” if you try to +wait on a mutex that violates the ordering rules.
Note: the only time +when these values would not be suitable is when the kernel calls back into +non-kernel code while a mutex is already held by the kernel. This occurs in +only two cases:
The debug event handler +callback
The various timer classes
+like
How to use
Typically +you declare the mutex in a class declaration, for example:
You do not create a
Getting +and relinquishing the mutex is done using the kernel functions:
respectively, passing a reference to the
Example +using DMutex to protect critical regions
This example code fragment
+uses two
The two
The names of the mutexes are passed as the literal descriptors:
Notice that the data mutex has an order value less than the handler mutex. This guards against deadlock - we are asking the kernel to check that any thread waits on the handler mutex before it waits on the data mutex.
When a thread
+panics, or an exception occurs, program control eventually reaches
A
In this example, both
A semaphore is a synchronisation primitive that you can use:
to signal one thread +from another thread
to signal a thread from +an Interrupt Service Routine using an IDFC.
In EKA2, there are two types of semaphore: the fast semaphore, and a more general semaphore - the Symbian platform semaphore. Which one you use depends on the needs of your code and the context in which it runs.
The fast semaphore
A fast semaphore is a fast, lightweight mechanism that a thread can use to wait for events. It provides a way of posting events to a single thread, because the semaphore can keep count of the number of events posted.
A fast
+semaphore is represented by a
Rules
Because of its lightweight structure, only the owning thread is allowed to wait on it.
How +to use
Typically you declare a fast semaphore in a class declaration, +for example:
You need to initialise the
constructing the semaphore
setting the thread that owns the semaphore, i.e. the thread that will be allowed to wait on it.
The semaphore is initialised when its constructor is called. However, +setting the owning thread requires explicit code. For example, the following +code fragment is typical and sets the owning thread to be the current thread:
Waiting
+and signalling the fast semaphore is done by using the
respectively, passing a pointer to your
The
+kernel lock must be held when
Although +this sounds like you will be blocking while holding the kernel lock, in reality +you do not because the thread is not blocked until after the kernel lock is +released.
Be aware however that there may be situations where you
+already have the kernel lock, or in the case of IDFCs, you do not need to
+acquire it as no preemption can occur. In these cases, you just call
You can use a fast semaphore to block a thread until an interrupt occurs, but you cannot signal the semaphore directly from the interrupt service routine (ISR) that services that interrupt; instead, you must queue an IDFC, and signal from there.
Example +using NFastSemaphore and the NKern functions
This is an example
+that synchronises threads using the
When
+a channel is opened, the
When a thread panics, or an exception occurs, program
+control eventually reaches
At a later time, the debugger calls the driver’s
Example +using the NFastSemaphore::Signal() function
This is an example
+code fragment taken from
This is a device driver that uses a timer. The driver's logical channel can start the timer, and it can wait for the timer to expire. The expiry of the timer results in an interrupt; this results in a call to an ISR that schedules an IDFC, which, in turn, signals the driver's logical channel.
Because
+the kernel is implicitly locked when the IDFC runs, there is no need to explicitly
+lock the kernel, and
The relevant part +of the driver's logical channel class is:
The semaphore's owning thread is set in the logical
+channel's constructor. Note that the constructor is called in the context
+of the client thread, and it is this thread that is the owner of the semaphore.
+This must also be the thread that waits for the semaphore, which it does when
+at some later time it sends an
The following code shows the implementation of this wait. +Note that it assumes that the timer has already been started, which we have +not shown here.
The wait is initiated using the
When the timer expires, the ISR runs, and this schedules +the IDFC, which in turn signals the client thread. The following code is the +IDFC implementation.
Note that this calls
The +Symbian platform semaphore
Symbian platform semaphores are standard counting semaphores that can be used by one or more Symbian platform threads. The most common use of semaphores is to synchronise processing between threads, i.e. to force a thread to wait until some processing is complete in one or more other threads, or until one or more events have occurred.
The
+Symbian platform semaphore is represented by a
Characteristics
A
+Symbian platform semaphore is based on the value of a count, which the
if the count is positive +or zero, then there are no threads waiting
if the count is negative, +the magnitude of the value is the number of threads that are waiting on it.
There are two basic operations on semaphores:
WAIT - this decrements +the count atomically. If the count remains non-negative the calling thread +continues to run; if the count becomes negative the calling thread is blocked.
SIGNAL - this increments +the count atomically. If the count was originally negative the next highest +priority waiting thread is released.
Waiting threads are released in descending order of priority. Note, however, that threads that are explicitly suspended while waiting on a semaphore are not kept on the semaphore wait queue; instead they are kept on a separate suspended queue. Such threads are not regarded as waiting for the semaphore; this means that if the semaphore is signalled, they will not be released, and the semaphore count will just increase and may become positive.
Symbian
+platform semaphore operations are protected by the
Although +somewhat artificial, and not based on real code, the following diagram nevertheless +shows the basic idea behind Symbian platform semaphores.
Rules
There +are a few rules about the use of Symbian platform semaphores:
Only Symbian platform +threads are allowed to use Symbian platform semaphores
An IDFC is not allowed +to signal a Symbian platform semaphore.
How to use
Typically +you declare the Symbian platform semaphore in a class declaration, for example:
You cannot create a
Waiting on the +semaphore and signalling the semaphore are done using the kernel functions:
respectively, passing a reference to the
Putting a thread into a thread critical section prevents it being killed or panicked. Any kill or panic request is deferred until the thread leaves the critical section.
A thread critical section is used to protect a section of code that is changing a global data structure or some other global resource. Killing a thread that is in the middle of manipulating such a global data structure might leave it in a corrupt state, or marked as being "in use".
A thread critical section only applies to code that is running on the kernel side but in the context of a user thread. Only user threads can be terminated or panicked by another thread.
In practice,
+a thread critical section only applies to code implementing a
How +to use
Enter a thread critical section by calling:
Exit
+a thread critical section by calling:
Note:
it is important that +you only hold a thread critical section for the absolute minimum amount of +time it takes to access and change the resource.
you do not need to be
+in a critical section to hold a
There are a large number of examples scattered throughout Symbian +platform source code.
There are a number of functions provided by the nanokernel that allow you to perform atomic operations; they may be useful when synchronising processing or ensuring that data is safely read and/or updated.
This is a list of the functions that are available. The function descriptions provide sufficient information for their use.
The system lock is a specific fast mutex that only provides exclusion against other threads acquiring the same fast mutex. Setting and acquiring the system lock means that a thread enters an implied critical section.
The +major items protected by the system lock are:
the consistency of the +memory map. On the kernel side, the state of user side memory or the mapping +of a process is not guaranteed unless one or other of the following conditions +is true:
you are a thread belonging +to the process that owns the memory.
you hold the system +lock.
the lifetime of
Note that the system lock is different from the kernel lock; the +kernel lock protects against any rescheduling. When the system lock is set, +the calling thread can still be preempted, even in the locked section.
How to use
The
+system lock is set by a call to
The
+system lock is unset by a call to
When to use
Only use the system lock when you access a kernel resource that is protected by the system lock. Generally you will not access these directly but will use a kernel function, and the preconditions will tell you whether you need to hold the system lock.
The kernel lock disables the scheduler so that the currently running thread cannot be pre-empted. It also prevents IDFCs from running. If the kernel lock is not set, then IDFCs can run immediately after ISRs.
Its main purpose is to prevent code from being reentered and corrupting important global structures such as the thread-ready list.
How +to use
The kernel lock is set by a call to
The
+kernel lock is unset by a call to
When to use
ALMOST +NEVER.
The kernel exports this primarily for use by personality
+layers, which need to modify the thread-ready list. In general, you should
+use a
This is the most drastic form of synchronisation. With interrupts disabled, timeslicing cannot occur. If interrupts are disabled for any length of time, the responsiveness of the whole system may be threatened, and real time guarantees may be invalidated.
How +to use
There are three functions supplied by the nanokernel involved +in disabling and enabling interrupts.
When to use
NEVER.
Unless there is absolutely no other suitable technique. You would probably only use this to protect some data that is shared between an interrupt service routine and a thread (or a DFC). Nevertheless, you may find that
Demand paging is a change made from Symbian platform v9.3 to how the Kernel +uses RAM and storage media. This topic
+If the ROM has been built with paging +enabled, the image is divided into two sections: an unpaged section and a +paged section. In addition to this, code that is not part of the ROM can be +loaded on demand into RAM from other non-XIP partitions and/or media, for +example FAT or ROFS partitions.
Two types of paging are currently +supported:
ROM paging - paging +from the paged section of the ROM image
code paging - paging +from non-removable media, for example, a FAT partition on an internal Multi +Media Card (MMC) drive or an internal NAND ROFS/FAT partition.
See
+also
Media drivers are typically
+PDDs with a filename of
A media driver that is capable of servicing page requests from the paging subsystem must ensure that the thread in which the media driver runs never takes a page fault itself, otherwise a deadlock could occur. In theory, the only time this can happen is when a media driver accepts a write request from a user side client that points to data in the paged section of the ROM or to code that has been loaded into RAM from paging-enabled media. To remedy this, the local media subsystem has been modified to lock write requests to paging media drivers before they are dispatched, and to split large write requests into a series of smaller ones to avoid exhausting available RAM.
The two initial stages relevant to this discussion +are:
the kernel extension
+entry point - identified by the
the PDD entry point
+- identified by the
To enable demand paging as soon as possible in the boot sequence +it is desirable to instantiate and install the PDD factory object earlier, +for example in the kernel extension entry point.
The +steps needed to support ROM and/or code paging are as follows:
determine whether code +paging is supported, and if so, identify the relevant local drive number or +numbers
modify the media drivers
The following should be defined using appropriate +names in the variant's variantmediadef.h file:
The
+macros defined in the file variantmediadef.h are passed to
Changes made to support paging on NAND:
The kernel-extension entry
+point must create a DFC queue to satisfy any page fault that occurs in the
+drive thread. Failure to do so results in a kernel fault. The entry point
+must then create a
The fifth parameter passed to the function
The
the
the
Additionally, the
Four new request types need to be handled to support paging:
Each sequence is terminated
+by a
the list of partitions
+reported by
when the read is complete
+the media driver needs to call
These request types are enumerated in the
In many respects,
If it is important +to maintain backwards compatibility and to prevent write requests from being +interleaved, the media driver must keep track of the current write-request +chain and defer requests from other drive threads while a write-fragment chain +is in progress by:
ensuring the local media
+subsystem LDD has been built with the
modifying the paging-media driver so that it keeps track of write-request chains and defers any read or format requests received after the first fragment and before the last in a sequence. When the macro is not defined, the subsystem does not issue more than one write-request chain at a time.
To achieve this the media driver +can maintain a bit mask, each bit of which represents a write in progress +flag for a particular drive:
If a read or format request is received while any of the bits in
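The per-drive bit mask described above can be sketched in plain C++. The class and member names here are hypothetical illustrations, not part of the local media subsystem:

```cpp
#include <cstdint>

// Sketch: one bit per local drive records whether a write-fragment
// chain is currently in progress on that drive.
class TWriteInProgressFlags {
public:
    void SetWriteInProgress(int aDriveNumber)   { iFlags |=  (1u << aDriveNumber); }
    void ClearWriteInProgress(int aDriveNumber) { iFlags &= ~(1u << aDriveNumber); }
    bool WriteInProgress(int aDriveNumber) const { return (iFlags >> aDriveNumber) & 1u; }
    // A read or format request must be deferred while any write chain is open.
    bool MustDefer() const { return iFlags != 0; }
private:
    std::uint32_t iFlags = 0;   // bit n set => write chain open on drive n
};
```

On receipt of a read or format request, the driver would test `MustDefer()` and park the request until the final fragment of the write chain clears the bit.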
This includes those drivers associated with fixed media, such as the internal +drive, or removable media, such as a PC Card or MultiMediaCard.
+
All the functions except the constructor and
The framework does not require the
There is, of course, nothing to stop you from adding your own functions and data members, if this is appropriate for your implementation. In addition, you are also free to add other classes, functions and enums to your media driver implementation.
+
The
+media driver object is created by your PDD factory object's implementation
+of the
Your constructor, prototyped as:
gives you the chance to do any initialisation that is safe, i.e. that +cannot fail. Typically, this is the kind of initialisation that does not need +to acquire resources. This is the first phase of the typical Symbian platform +two-phase construction process.
As this code fragment shows, you need to call the
+base class constructor first, forwarding the
The media driver object is created by
+your PDD factory object's implementation of the
This is a second-phase constructor that allows you
+to do more complex initialisation, and initialisation that might fail. Typically,
+this is initialisation that acquires resources (including memory). The outline
+implementation of
Depending on the complexity of your initialisation, +you can either do all your initialisation here, and complete immediately, +or you can do the initialisation as an asynchronous operation, in which case +initialisation will complete at some later time.
If you do this synchronously,
+then the return code should reflect the success or failure of the operation.
+In practice, this will almost always be
If
+you do this asynchronously, then, on completion of the initialisation
+processing, a call should be made to:
Once the media driver has been
+successfully created and initialised, and has informed the media driver subsystem
+of this fact by a call to
The prototype function +is:
A
Decoding
+of partition information may require media access, and as such may be a long
+running activity. Support is provided that allows this to be done asynchronously.
+You use the return code from
return KErrNone,
+if the decoding operation is to be done asynchronously. Note that on
+completion, the asynchronous operation must call
return a value other than KErrNone, if the decoding operation has been done synchronously. If the synchronous operation is successful, return KErrCompletion; otherwise return one of the other system-wide error codes, but not KErrNone.
Decoding +simple partitions
The following example shows the implementation
+of a simple
This implementation reports a single partition with the size of the entire media. The partition is expected to be mounted with the FAT filesystem.
Note that this operation
+is done synchronously, and the function returns
Decoding +FAT Partitions
More complex implementations of
This example shows
+a typical implementation for a FAT based removable media device. Here,
Note
+that
Note
+also that on completion,
This is the function that runs asynchronously
You handle requests by implementing your media
+driver's
This function is usually called in the context of the client thread that originally initiated the I/O request to the file server, although you should never assume this. Note that you may also see the originating thread referred to as the remote thread.
The request type, as identified by the request
+ID, and the information associated with the request is accessed through the
Each request ID, as defined by
Depending on the request ID, the operation can be done synchronously or asynchronously. It is the responsibility of the implementor of the media driver to handle incoming requests in the way appropriate to the specific media, i.e. synchronously or asynchronously.
In general, the function should return once the request is initiated. If the entire operation cannot be completed immediately, then further request processing must occur within ISRs and DFCs, i.e. using some hardware-specific mechanism to indicate completion, or by using timers to poll the device considerately for its current status, with the final request completion being done from within a DFC. The code that implements the asynchronous requests can run within its own thread, or use one of the default threads provided by the kernel (DFC queue thread 0).
The underlying media driver framework +allows multiple requests to be processed simultaneously. However, other than +being able to issue multiple requests, there is no inherent support in the +media driver framework to support the handling of multiple requests, so such +functionality must be handled by the media driver itself. The underlying media +driver framework does, however, provide basic support for deferring requests +for later processing should the media driver not be capable of supporting +multiple requests.
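The defer-while-busy behaviour can be modelled with a minimal, stand-alone C++ class. This is an illustrative model only (hypothetical names); a real driver defers through the framework and completes requests from a DFC:

```cpp
#include <deque>
#include <vector>

// Sketch: a driver that services one request at a time and parks
// later arrivals until the current request completes.
class TOneAtATimeDriver {
public:
    // Returns true if the request started, false if it was deferred.
    bool Request(int aId) {
        if (iBusy) { iDeferred.push_back(aId); return false; }
        iBusy = true;
        iActive = aId;
        return true;
    }
    // In a real driver this would run in the completion DFC.
    void Complete() {
        iDone.push_back(iActive);
        iBusy = false;
        if (!iDeferred.empty()) {
            int next = iDeferred.front();
            iDeferred.pop_front();
            Request(next);               // start the next deferred request
        }
    }
    std::vector<int> iDone;              // completion order, for inspection
private:
    bool iBusy = false;
    int iActive = 0;
    std::deque<int> iDeferred;
};
```

Deferred requests are restarted strictly in arrival order, which keeps the driver fair without any support for true concurrency.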
A simple implementation
This demonstrates the following behaviour:
The
This example only handles
+a single request at a time. If the media driver is busy handling a request,
+it can return the value
Each message is passed +on to the specific function that is responsible for handling the message. +This provides readability and ease of maintenance.
If an error occurs, +the request is completed immediately with the specified error code.
The following code is the implementation of the
The write operation to the hardware is performed
+by
This
+simple example has demonstrated how a simple
Issues about physical addresses
If +the media driver can use physical addresses, you need to be aware of a number +of issues.
The address scheme +used by the hardware
All media devices have a minimum number +of bytes that they can transfer. For example, the architecture of some memory +card types requires data transfer in blocks of 512 bytes. To read one byte +from this type of media device, the media driver must read a block of 512 +bytes and extract the byte from the block. To write one byte to a media device, +the media driver must read a block of 512 bytes, change the content of the +byte, and write the block to the media device.
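The single-byte write case above is a read-modify-write cycle. The following stand-alone sketch (simulated medium held in a vector, hypothetical function name) shows the idea:

```cpp
#include <cstring>
#include <vector>
#include <cstddef>

constexpr std::size_t KBlkSize = 512;   // minimum transfer unit of the medium

// To change one byte on a medium that only transfers whole 512-byte
// blocks: read the enclosing block, patch the byte, write the block back.
void WriteByte(std::vector<unsigned char>& aMedia, std::size_t aPos, unsigned char aValue)
{
    unsigned char block[KBlkSize];
    const std::size_t blockStart = (aPos / KBlkSize) * KBlkSize;
    std::memcpy(block, &aMedia[blockStart], KBlkSize);  // read the whole block
    block[aPos % KBlkSize] = aValue;                    // modify one byte
    std::memcpy(&aMedia[blockStart], block, KBlkSize);  // write it back
}
```

All neighbouring bytes in the block are preserved; only the addressed byte changes.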
Data transfer smaller +than the minimum size
If the local media subsystem receives a
+request to transfer data with a length smaller than the minimum transfer size,
+the local media subsystem does not make a physical address available to the
+media driver. A call to
Data transfer not +aligned to the media device block boundary
If the local media
+subsystem receives a request to transfer data, and the address on the media
+device is not aligned to the media device block boundary, you need
+to adopt the technique suggested below. The local media subsystem will make
+the physical address available to the media driver. A call to
Consider the following case. A request has been made to read +1024 bytes from a media device that has a block size of 512 bytes. The 1024 +bytes start at offset +256 on the media device.
To get the first 256 bytes, you must read the first block of 512 +bytes from the media device. This can corrupt the physical memory passed in +the I/O request. The solution is to read the first block from the media device +into an intermediate buffer. Copy the 256 bytes from that buffer into the +physical memory passed in the I/O request.
To get the last 256 bytes, +you must read the third block of 512 bytes from the media device into the +intermediate buffer. Copy the 256 bytes from that buffer into the correct +position in the physical memory passed in the I/O request.
The middle +512 bytes are aligned on the media device block boundary. The media driver +can read this data into the correct position in the physical memory passed +in the I/O request.
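The head/middle/tail split described above generalises to any unaligned request. The following self-contained sketch (simulated block device, hypothetical helper names) reads through an intermediate buffer for the partial blocks and straight into the destination for the aligned middle:

```cpp
#include <cstring>
#include <vector>
#include <cstddef>
#include <algorithm>

constexpr std::size_t KBlockSize = 512;

struct TBlockDevice {
    std::vector<unsigned char> iMedia;            // simulated medium
    void ReadBlock(std::size_t aBlock, unsigned char* aDest) const {
        std::memcpy(aDest, &iMedia[aBlock * KBlockSize], KBlockSize);
    }
};

// Read aLength bytes starting at media offset aPos into aDest.
void ReadUnaligned(const TBlockDevice& aDev, std::size_t aPos,
                   std::size_t aLength, unsigned char* aDest)
{
    unsigned char buf[KBlockSize];                // intermediate buffer
    std::size_t remaining = aLength;
    std::size_t pos = aPos;
    while (remaining) {
        const std::size_t block  = pos / KBlockSize;
        const std::size_t offset = pos % KBlockSize;
        const std::size_t chunk  = std::min(KBlockSize - offset, remaining);
        if (offset == 0 && chunk == KBlockSize) {
            aDev.ReadBlock(block, aDest);         // aligned: read straight in
        } else {
            aDev.ReadBlock(block, buf);           // partial: read, then copy
            std::memcpy(aDest, buf + offset, chunk);
        }
        aDest += chunk;
        pos += chunk;
        remaining -= chunk;
    }
}
```

For the worked example above (1024 bytes from offset 256), the loop performs exactly the three transfers described: a 256-byte head via the buffer, one aligned 512-byte block, and a 256-byte tail via the buffer.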
Scatter/Gather DMA +controllers
DMA controllers can support the Scatter/Gather mode +of operation. Each request in this mode of operation consists of a set of +smaller requests chained together. This chain of requests is called the Scatter/Gather +list. Each item in the list consists of a physical address and a length.
Use
The following code fragment +shows how you do this. The example assumes that the DMA controller supports +a Scatter/Gather list with an unlimited number of entries. In practice, the +number of entries in the list is finite.
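As a simplified stand-in, the sketch below builds such a list from a simulated page table. The types and the page-table array are hypothetical; a real implementation obtains physical addresses from the framework and must respect the controller's finite entry limit:

```cpp
#include <cstdint>
#include <vector>
#include <algorithm>

constexpr std::uint64_t KPageSize = 4096;

// One Scatter/Gather list item: a physical address and a length.
struct TSgEntry { std::uint64_t iPhysAddr; std::uint64_t iLength; };

// Build the list for a virtually contiguous buffer whose pages may be
// physically discontiguous. aPhysPages[n] is the physical base address
// of virtual page n (the buffer is assumed to start at virtual 0 here).
std::vector<TSgEntry> BuildSgList(std::uint64_t aVirtAddr, std::uint64_t aLength,
                                  const std::vector<std::uint64_t>& aPhysPages)
{
    std::vector<TSgEntry> list;
    std::uint64_t pos = aVirtAddr;
    std::uint64_t remaining = aLength;
    while (remaining) {
        const std::uint64_t page   = pos / KPageSize;
        const std::uint64_t offset = pos % KPageSize;
        const std::uint64_t len    = std::min(KPageSize - offset, remaining);
        const std::uint64_t phys   = aPhysPages[page] + offset;
        // Merge with the previous entry when physically contiguous.
        if (!list.empty() && list.back().iPhysAddr + list.back().iLength == phys) {
            list.back().iLength += len;
        } else {
            list.push_back({phys, len});
        }
        pos += len;
        remaining -= len;
    }
    return list;
}
```

Merging physically contiguous fragments keeps the list short, which matters in practice because real controllers support only a finite number of entries.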
See also
Device +driver DLLs come in two types - the logical device driver (LDD), and the physical +device driver (PDD). Typically, a single LDD supports functionality common +to a class of hardware devices, whereas a PDD supports a specific member of +that class. This means that the generic code in the LDD only needs to be written +once, and the same user-side API can be used for all variants of a device.
Many
+PDDs may be associated with a single LDD. For example, there is a single serial
+communications LDD (
Only LDDs communicate with user-side
+code; PDDs communicate only with the corresponding LDD, with the variant or
+kernel extensions, and with the hardware itself. Device drivers provide their
+interface for user side applications by implementing a class derived from
+the Kernel API
The +following diagram shows the general idea:
To make porting to particular hardware platforms easier, some drivers +make a further logical split in their PDD code between a platform-independent +layer (PIL), which contains code that is common to all the hardware platforms +that the driver could be deployed on, and a platform-specific layer (PSL), +which contains code such as the reading and writing of hardware-specific registers.
Depending +on the device or the type of device to access, this split between LDD and +PDD may not be necessary; the device driver may simply consist of an LDD alone.
+The general pattern for use on the kernel side is almost the same as
+for the user side. However, different classes are used on the kernel side.
+It may be useful to compare kernel side usage with user side usage as described
+in the
A property has two attributes: identity and type.
Identity
A +property is identified by a 64-bit integer made up of two 32-bit parts: the +category and the key.
A property belongs to a category, and a category +is identified by a UID.
A key is a 32-bit value that identifies a +specific property within a category. The meaning applied to the key depends +on the kind of enumeration scheme set up for the category. At its simplest, +a key can be an index value. It can also be another UID, if the category is +designed to be generally extensible.
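As an illustration of the two-part identity (this packing is for explanation only; the kernel's internal representation is not part of the API), the category UID and key can be combined into one 64-bit value:

```cpp
#include <cstdint>

// Illustrative packing: category UID in the high 32 bits, key in the low 32.
std::uint64_t PropertyId(std::uint32_t aCategory, std::uint32_t aKey)
{
    return (static_cast<std::uint64_t>(aCategory) << 32) | aKey;
}

std::uint32_t PropertyCategory(std::uint64_t aId)
{
    return static_cast<std::uint32_t>(aId >> 32);
}

std::uint32_t PropertyKey(std::uint64_t aId)
{
    return static_cast<std::uint32_t>(aId);
}
```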
Type
A property +can be:
a single 32-bit value
a contiguous set of +bytes, referred to as a byte-array, whose length can vary from 0 to 512 bytes
Once defined, a property value can change, but the property type cannot.
+Byte-array type properties can also change length provided the length does
+not exceed the value
The API allows +a byte-array text type property to be pre-allocated when it is defined. This +means that the time taken to set the values is bounded. However, if the length +of this property type subsequently increases, then memory allocation may take +place, and no guarantees can be made on the time taken to set them.
Note
+that the
For code running kernel side, properties
+and their values are defined, set and retrieved using a
On the kernel side, all accesses
+to a property must be done through a property reference, an instance
+of a
You must create a reference +to a property, before doing any operation on that property. By operation, +we mean defining a property, subscribing to a property, publishing and retrieving +a property value. The kernel will fault if you have not first created a reference.
Only
+one property, as uniquely identified by its category and key, can be accessed
+by an instance of
Internally,
+properties are represented by
Please note that the structure
+and management of
To establish a reference to a property, create an
You can call these functions from a user thread running in supervisor +mode, from a kernel thread or from a DFC. If calling from a user thread running +in supervisor mode, then your thread must be running in a critical section. +In debug mode, if a user thread is not in a critical section, then the kernel +will fault.
On successful return from these functions, the
It is difficult to make
+firm rules as to which one your code should use, but generally, you use
Note that responsibility +for creating the property does not necessarily align with who can define, +delete, publish (write) or retrieve (read) a property value. This is governed +by the intent of the property and, for retrieving and publishing, by the security +policies in place.
When the property is no longer needed, you can
+release it by calling
Note that it +is quite legitimate to attach to, or to open, a property that has not been +defined, and in this case no error will be returned either. This enables the +lazy definition of properties as used in some of the usage patterns.
Defining a property gives it "structure" i.e. attributes +such as the property type, and the security policies that define the capabilities +that a process must have to publish and retrieve the property value.
A
+property is defined using the
You can call this function from a user thread running +in supervisor mode, from a kernel thread or from a DFC. If calling from a +user thread running in supervisor mode, then your thread must be running in +a critical section. In debug mode, if a user thread is not in a critical section, +then the kernel will fault.
The information needed to define the property
+is passed to
A +property does not need to be defined before it can be accessed. This supports +programming patterns where both publishers and subscribers may define the +property.
Once defined, a property persists until the system reboots,
+or the property is explicitly deleted. Its lifetime is not tied to that of
+the thread or process that originally defined it. This means that, when defining
+a property, it is important to check the return code from the call to
The +following code shows the definition of two properties, which we call: 'name' +and 'counter':
Once defined, a property value can change, but the property
+type cannot. Byte-array type properties can also change length provided the
+length does not exceed the 512 bytes, for
The +API allows byte-array type properties to be pre-allocated when they are defined. +This means that the time taken to set the values is bounded. However, if the +length of these property types subsequently increases, then memory allocation +may take place, and no guarantees can then be made on the time taken to set +them.
Security notes:
Symbian platform defines
+a property category known as the system category that is reserved for system
+services, and is identified by the
To
+ensure that this security check is made, you must pass a pointer to the appropriate
Whether you pass a
You also need to define
+two security policies: one to define the capabilities that will be required
+to publish (write to) the property, and the other to define the capabilities
+that will be required to retrieve (read) the property. Security policies are
In the above code fragment, we specify +that all processes in the system will be able to read the defined property +but only those with power management capability will be able to write to the +property - this is an arbitrary choice and is for illustration only.
In the above code fragments,
The
+constructor code runs in the context of the client user thread. Note that
Deleting a property is the opposite of defining it. It +removes type and security information. It does not remove a reference +to the property.
A property is deleted using the
Any outstanding subscriptions for this property complete
+with
Security notes:
Only the owning process
+is allowed to delete the property. This is deemed to be the process that was
+current when the property was defined. However, to enforce this, you must pass
+into
For example, extending the code fragment introduced in defining a +property above:
A property is published (written), using the
This is guaranteed to have bounded execution time, suitable
+for high-priority, real-time tasks, except when publishing a byte-array property
+that requires the allocation of a larger space for the new value, or when
+publishing a large byte-array property type, as identified by
Property +values are written atomically. This means that it is not possible for threads +reading a property to get a garbled value.
All outstanding subscriptions +for a property are completed when the value is published, even if it is exactly +the same as the existing value. This means that a property can be used as +a simple broadcast notification service.
Publishing a property that
+is not defined is not necessarily a programming error. The
Security +notes:
If you pass a pointer
+to a
See the code fragment in the section
The current value of a property is retrieved
+(read) using the
This
+is guaranteed to have bounded execution time, suitable for high-priority,
+real-time tasks, except when retrieving a large byte-array property type,
+as identified by
Property values +are read atomically. This means that it is not possible for threads reading +a property to get a garbled value.
Retrieving a property that is not
+defined is not necessarily a programming error. The
Integer properties +must be accessed using the overload that takes an integer reference, whereas +a byte-array property is accessed using the overload that takes a descriptor +reference.
The following code fragment shows publication and retrieval +of a property. Note that it contains a race condition, especially if another +thread is executing the same sequence to increment the ‘counter’ value.
Security +notes:
If you pass a pointer
+to a
Subscribing to a property +is the act of making an asynchronous request to be notified of a change to +that property.
You make a request for notification of a change to
+a property by calling the
You
+can cancel an outstanding subscription request by calling
Subscribing to a property is a single request to be notified when the property is next updated; it does not generate an ongoing sequence of notifications for every change to that property's value. Neither does it provide the caller with the new value. In essence, the act of notification should be interpreted as “Property X has changed” rather than “Property X has changed to Y”. This means that the new value must be explicitly retrieved, if required. As a result, multiple updates may be collapsed into one notification, and subscribers may not have visibility of all intermediate values.
This might appear to introduce a window +of opportunity for a subscriber to be out of synchronisation with the property +value – in particular, if the property is updated again before the subscriber +thread has had the chance to process the original notification. However, a +simple programming pattern, outlined in the second example below ensures this +does not happen. The principle is that, before dealing with a subscription +completion event, you should re-issue the subscription request.
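The re-subscribe-before-handling pattern can be modelled with a toy, single-threaded property class. This is an illustrative model only, not the RProperty API: notifications carry no value, at most one subscription is outstanding, and the handler re-issues its subscription before reading:

```cpp
#include <functional>

// Toy model: a single integer property with one outstanding subscription.
class TToyProperty {
public:
    void Subscribe(std::function<void()> aOnChange) { iPending = aOnChange; }
    void Publish(int aValue) {
        iValue = aValue;
        if (iPending) {                  // complete the outstanding request
            auto cb = iPending;
            iPending = nullptr;          // a single notification, now consumed
            cb();
        }
    }
    int Get() const { return iValue; }
private:
    int iValue = 0;
    std::function<void()> iPending;      // at most one outstanding subscription
};
```

In the handler, re-subscribing first and only then retrieving the value guarantees that any publish occurring between the two steps generates a fresh notification, so the subscriber can never be left holding a stale value without knowing it.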
Note
+that if the property has not been defined, then a subscription request does
+not complete until the property is subsequently defined and published. Note
+that the request will complete with
If the property is already defined, then the request
+completes immediately with
The essence
+of subscribing to a property is that you pass a function into
The +following code fragments show the general idea.
There +are three usage patterns that can easily be identified, labelled as: standard +state, pure event distribution, and speculative publishing.
Standard state
This +pattern is used for events and state that are known to be used widely in the +system. Examples of this might be battery level and signal strength, which +are important in every phone.
The publisher calls
The memory to store the property
+value will be permanently allocated, even if it turns out that no-one in the
+system needs that value. This does ensure that the value can always be published,
+even if the system is in an out of memory situation. For this reason, this
+approach should be limited to widely used and important state. The
Pure event distribution
This +pattern is used when events need to be distributed, not values.
The
+publisher of the event simply uses an integer property, and calls
Subscribers +will be able to detect that an event has occurred, but will get no other information. +The minimum possible memory is wasted on storage for the dummy value.
Speculative publishing
This
+pattern is used when it is not known whether a value will be of interest to
+others or not. Unlike the
When
+other code in the system, i.e. a potential subscriber, is interested in the
+state, it calls
The
+subscriber then calls
Using this pattern, no memory is wasted on properties +that have no subscribers, while the publisher code is simpler as there is +no need for configuration as to which properties to publish.
The publisher, +however, wastes some time attempting to publish unneeded values, but this +should not be an issue unless the value is very frequently updated.
Where events are published very infrequently, the subscriber could have a dummy value for a long time, until the next publish event updates the value. Often this is not a problem, as a default value can be substituted: for example, a full/empty indicator for battery level, or 'none' for signal strength. This pattern is unlikely to be useful if there is no suitable default value.
The ARMv6-style page table reserves three bits +in the page/directory table for access permission, so eight possible values +are available. The use of four possible access permissions is sufficient. +Therefore, removing the surplus access permissions frees up one page table +bit that is used by Symbian platform internally.
Affected kernel interface
The
+shadow pages kernel interface is valid on all platforms except for the emulator.
+On ARMv6 and previous platforms, shadow pages are created using access permission
This represents a serious behaviour break in the kernel +interface. A device driver (running on ARMv7) that creates a shadow page and +then attempts to alter the content of the page now panics.
This is
+a common use case for run-mode debuggers. However, a debugging interface is
+already provided, see
After a shadow
+page is created using
ARMv6 architecture uses a large number of bits in the +page table to describe all of the options for inner and outer cachability. +No applications use all of these options simultaneously so a smaller number +of configurable options has been implemented to meet the needs of the system.
This +alternative cache mapping allows up to eight different mappings in page tables. +The Symbian platform kernel and device drivers do not need more +than four or five different cache mappings.
Cache mapping cannot be +altered during run-time. It must be configured before the MMU is initialised.
See
+the Bootstrap
Types of memory supported
The +kernel supports the following types of memory:
The
+complete set of memory types supported by Symbian platform are represented
+by the values of the
Mapping existing memory +types
The
Mapping ARMv6K or ARMv7 +onto TMappingAttributes
To describe memory on ARMv6K or ARMv7
+using the original
The DMA Framework provides the DMA platform service API to clients (usually drivers). It is necessary to implement a number of hardware-specific functions when adapting the framework to a new board. This section contains information about the Framework's implementation and hardware requirements.
+The DMA framework bypasses the CPU for data transfers. Without +DMA, the CPU would have to copy each piece of data from the source +to the destination.
Device Driver: A set of methods that tells the platform +how to use a specific piece of hardware.
DMA Framework: The DMA framework provides a simplified +and unified access to DMA resources in a device. Generic transfer +parameters abstract the underlying DMA hardware for basic services, +and fine-grained control is still available for platform-specific +features.
DMA Operation: Before any DMA transfers can be undertaken, +the DMA channel to be used and the source and destination addresses +of the data to be transferred have to be specified. The transfer details +are placed onto a queue for the DMA channel specified. The item at +the head of the queue will be the active transfer and the rest of +the queue will be pending transfers.
Data paging degrades the performance of +user side code. This document describes strategies to mitigate these effects. +It is intended for application developers whose code uses device drivers.
Intended Audience:
Application developers writing modules +which involve device drivers.
Data paging is a technique which increases the size of virtual RAM by holding data on external media and reading it into physical RAM when it is accessed. The technique trades off an increase in available RAM against decreased performance. Developers must allow for the impact on performance and aim to mitigate it by using the practices described in this document.
Data paging is mainly a property +of processes. Processes can be configured to be paged or unpaged when they +are built or put into a ROM image. Threads and the data which they use inherit +the data paging configuration of the creating process and that configuration +can be modified at the level of the thread or the individual items of data.
Thread scheduling
When a platform uses data paging there is +a higher risk of delays, timing-related defects and race conditions.
When a thread accesses paged memory, the required page may be paged in (actually in RAM) or paged out (stored on media). If it is paged out, a page fault results, slowing performance by a factor of thousands and sometimes up to a million. The delay can also expose latent race conditions and timing-related defects in existing code: for instance, an asynchronous request to a server may appear to complete synchronously, returning control to the client before the request has completed, with incorrect behaviour as a result.
The cure for this +problem is to configure data paging when chunks, heaps and threads are created.
When
+creating a thread of class
When creating a chunk of class
The
When creating a chunk heap of
+class
Inter-process +communication
Data paging impacts on inter-process communication +when a non-paged server accesses paged memory passed from a client. If the +memory being accessed is not paged in, unpredictable delays may occur, and +when the server offers performance guarantees to its clients, all the other +clients will be affected as well. There are three separate solutions to this +problem:
pinning memory automatically,
pinning memory as requested +by the client, and
using separate threads +for paged and unpaged clients.
Pinning paged memory means paging it into the RAM cache (if it is +not already present) and preventing it from being paged out until it is unpinned.
You
+can set a server so that all memory passed to it by a client call gets pinned
+for the duration of the call. You do so by calling the function
You
+can pin specified items of memory at the request of the client by calling
+the
Separate threads for paged and unpaged clients.
Thread performance
The set of pages accessed by a thread over +a given period of time is called its working set. If the working set is paged, +the performance of the thread degrades as the working set increases. When +working with paged memory it is therefore important to minimise the working +set.
The main solution to this problem is to choose data structures with high locality, that is, data structures residing in a single page or in adjacent pages. An example of this is a preference for arrays over linked lists, since an array usually occupies adjacent pages while the elements of a linked list may reside on many separate pages.
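A back-of-envelope way to compare layouts is to count the distinct 4 KB pages each one touches. The addresses below are simulated and the helper is hypothetical; the point is only the arithmetic:

```cpp
#include <cstdint>
#include <set>
#include <vector>
#include <cstddef>

constexpr std::uint64_t KPageSize = 4096;

// Count how many distinct 4 KB pages a set of element addresses spans.
std::size_t PagesTouched(const std::vector<std::uint64_t>& aAddresses)
{
    std::set<std::uint64_t> pages;
    for (auto a : aAddresses)
        pages.insert(a / KPageSize);
    return pages.size();
}
```

A contiguous array of 1000 four-byte elements fits in a single page, whereas 1000 list nodes scattered one per 8 KB region touch 1000 pages, so a paged working set built on the list is three orders of magnitude larger.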
Drivers can enable and disable the interrupts on the relevant interrupt
+sources while handling the interrupts using the
Some operations may need to be performed with all the interrupts, not just
+the particular interrupt source, disabled. In this case, the
Describes the concepts of Demand Paging and how to use Demand Paging.
The platform-specific layer only contains functionality that cannot be abstracted and made generic, because different USB device controller designs operate differently. For example, the internal (and therefore external) management of the endpoint FIFOs can be different. In some USB device controller designs, the endpoints have hardwired FIFOs of specific sizes, which may possibly be configured, whereas other designs have a defined maximum amount of FIFO RAM available that has to be shared by all endpoints in use in a given configuration. The way that the chip is programmed can also differ. Some designs have a single register into which all commands and their parameters are written, whereas others are programmed via a number of special purpose control registers.
Everything else, which has to be common because it is defined in the USB specification, is contained in the platform-independent layer.
The operation of the USB device controller is hardware specific and the implementers of the platform-specific layer are free to do whatever is necessary to use the hardware. All of this is transparent to the platform-independent layer, which uses only the fixed set of pure virtual functions defined in
The platform-specific layer is also responsible for managing the transfer of data over the endpoints, and it is important that it can provide the services expected by the platform-independent layer. Data transfers are normally set up by the platform-independent layer by calling one of the following member functions of
which the platform-specific layer implements.
These data transfer functions fall into two groups: one for handling endpoint-0 and another for handling general endpoints. Endpoint-0 is handled differently from general endpoints throughout the USB stack. The functions for handling general endpoints are used to transfer user data. In addition to taking an endpoint number, these functions also take a reference to a
General endpoints
The platform-independent layer issues a request to read data by calling
Data is read into a large buffer, whose address and length are passed as data members of the
For all other transfer types (Control, Interrupt, Isochronous) it is usual to return only single packets to the LDD. The amount of data returned is controlled, and limited, by the USB client driver (the LDD) through the
The
These arrays are logically linked; both are the same size and can hold information for two entries each.
Therefore, received single packets have to be merged into one 'superpacket' in the read buffer. It is assumed that these merged packets consist of maximum-packet-sized packets, possibly terminated by a short packet. A zero length packet (ZLP) must appear separately; this would be the optional second packet in the respective array.
For example, for a Bulk endpoint with a maximum packet size of 64 bytes:
If 10 x 64 byte packets and one 10 byte packet arrive, then these are marked as a single large 650 byte packet.
If 10 x 64 byte packets and one ZLP arrive, then these should be entered as two packets in the arrays; one of size 640 bytes and one of size zero.
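The merging rules above can be sketched as a small standalone function. This is illustrative only: the real platform-specific layer records entries in its packet arrays rather than returning a vector.

```cpp
#include <cstddef>
#include <vector>

// Merge received packets into 'superpacket' entries: maximum-packet-sized
// packets, optionally terminated by a short packet, are coalesced into one
// entry; a ZLP is always recorded as a separate zero-length entry.
inline std::vector<std::size_t> MergePackets(const std::vector<std::size_t>& pkts,
                                             std::size_t maxPacket)
{
    std::vector<std::size_t> entries;
    std::size_t current = 0;
    bool open = false;
    for (std::size_t len : pkts)
    {
        if (len == 0)                        // ZLP: close any open entry, add 0
        {
            if (open) { entries.push_back(current); current = 0; open = false; }
            entries.push_back(0);
        }
        else
        {
            current += len;
            open = true;
            if (len < maxPacket)             // short packet terminates the entry
            {
                entries.push_back(current);
                current = 0;
                open = false;
            }
        }
    }
    if (open)
        entries.push_back(current);
    return entries;
}
```

With a 64-byte maximum packet size, ten full packets plus a 10-byte packet merge to one 650-byte entry, while ten full packets plus a ZLP yield the two entries 640 and 0, matching the examples above.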
The general aim when servicing a Bulk read request from the platform-independent layer is to capture as much data as possible whilst not holding onto data too long before returning the data to the LDD and receiving another buffer. There is considerable flexibility in how this is achieved and the design does not mandate any particular method; it also depends to a certain extent on the USB device controller hardware used.
The platform implementation can use a re-startable timer to see whether data has been received in the time (usually milliseconds) since the last data was received. If data has been received, then the timer is restarted. If data has not been received, then the timer is not restarted, and the buffer is marked as complete for the platform-independent layer by calling
Note the following:
In the interrupt service routine (ISR), the flag
The timer is not restarted in the ISR; the timer is allowed to expire and then a test is made for received data.
Typical values for the timer range from 1 to 5 ms.
Each OUT endpoint requires a separate timer.
The read is not flagged as complete to the platform-independent layer if no data has been received. The timer is only started once the first packet/transfer has been received.
The bare minimum of work is done in the ISR. After draining the FIFO, update the
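One possible model of this restartable timer logic, written as plain C++ rather than the Symbian kernel API (all names here are assumptions), is:

```cpp
// Illustrative model of the restartable OUT-endpoint timer described above.
struct TRxTimerModel
{
    bool iDataSinceTimerSet = false;   // set by the ISR when a packet arrives
    unsigned iBytesInBuffer = 0;
    bool iTimerRunning = false;

    // Called from the ISR: store the data, start the timer on the first packet.
    void PacketReceived(unsigned len)
    {
        iBytesInBuffer += len;
        iDataSinceTimerSet = true;
        iTimerRunning = true;
    }

    // Called when the timer expires. Returns true if the read should be
    // completed to the platform-independent layer.
    bool OnTimerExpiry()
    {
        if (iDataSinceTimerSet)        // data arrived: restart and wait again
        {
            iDataSinceTimerSet = false;
            return false;
        }
        iTimerRunning = false;         // no new data: complete if any data held
        return iBytesInBuffer != 0;
    }
};
```

A first expiry after a packet merely restarts the timer; a second expiry with no new data completes the read, and an expiry with an empty buffer completes nothing, matching the notes above.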
The platform-specific layer completes a non-endpoint-0 read request by calling the platform-independent layer function
Summary of Completion Criteria for general Endpoints
The platform-specific layer completes a read request by calling
The requested number of bytes has been received.
A short packet, including a ZLP, is received. In the case of a ZLP being received, it must be represented separately in the
If the number of bytes in the current packet (still in the FIFO) were to cause the total number of bytes to exceed the buffer length
The timer has expired, data is available, but no further packets have been received.
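These criteria can be combined into a single illustrative predicate. The structure and field names here are assumptions for the sketch, not the real driver types:

```cpp
// State of one outstanding read request (illustrative model).
struct TReadState
{
    unsigned iRequested;     // bytes requested by the LDD
    unsigned iReceived;      // bytes received so far
    unsigned iNextPacket;    // bytes in the next packet still in the FIFO
    bool iShortPacket;       // a short packet (including a ZLP) was received
    bool iTimerExpired;      // timer expired with data but no new packets
};

// True if the read request should now be completed to the
// platform-independent layer, per the criteria listed above.
inline bool ReadComplete(const TReadState& s)
{
    if (s.iReceived >= s.iRequested) return true;                 // all data in
    if (s.iShortPacket) return true;                              // short/ZLP
    if (s.iReceived + s.iNextPacket > s.iRequested) return true;  // would overflow
    if (s.iTimerExpired && s.iReceived > 0) return true;          // timed out
    return false;
}
```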
Endpoint-0
The handling of endpoint-0 read requests is similar to general endpoints except for the following:
The platform-independent layer issues the request by calling
The endpoint number is known; it is zero.
The request is completed after every received packet, and a
All the platform-specific layer needs to know is where to place the received data, and its size. Both are fixed values:
An endpoint-0 read request is completed by calling
General endpoints
The platform-independent layer issues a request to write data by calling
The address of the buffer containing the data to be written is passed in the data member
The platform-specific layer's implementation needs to set up the transfer, either using DMA or a conventional interrupt-driven mechanism, and then wait for the host to collect the data, by sending an IN token, from the respective endpoint's primed FIFO. This continues until all data from the buffer has been transmitted to the host.
If the ZLP request flag
To summarize, a ZLP should be sent at the end of a write request if:
The ZLP flag is set.
The last packet of the write request was not a short packet (that is, it was a max-packet-sized packet).
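This rule reduces to a one-line helper (illustrative, not part of the Symbian API):

```cpp
// A ZLP terminates a write when the ZLP flag is set and the final data
// packet was exactly max-packet-sized, so the host cannot otherwise tell
// that the transfer has ended.
inline bool NeedsZlp(bool zlpFlag, unsigned lastPacketLen, unsigned maxPacket)
{
    return zlpFlag && lastPacketLen == maxPacket;
}
```

A final short packet already signals the end of the transfer to the host, which is why no ZLP is needed in that case.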
The platform-specific layer completes a non-endpoint-0 write request by calling the platform-independent layer function
Endpoint-0
The handling of endpoint-0 write requests is similar to general endpoints except for the following:
The platform-independent layer issues the request by calling
the address of the location containing the data to be written
the length of the data to be written
a
An endpoint-0 write request is completed by calling
There is another endpoint-0 write function,
When planning to use DMA for transferring data from and to the endpoints' FIFOs, there are two things that must be considered:
Flushing cached information for the data buffers
The cached information for the data buffers must be flushed before a DMA write operation (that is, a transfer from memory to a FIFO), and both before and after a DMA read operation (that is, a transfer from a FIFO to memory).
The kernel provides three functions for that purpose. These are static functions in the class
Implementing DMA mode for OUT transfers (DMA reads)
The implementation of DMA mode for IN transfers is normally relatively straightforward; however, complications can occur with OUT transfers (DMA reads), depending on the DMA controller, the USB device controller and the way the two are connected.
There are two issues:
If we set up a DMA read for 4 KB, for example, and this request returns after a short packet, we must be able to tell how much data has been received and is now in our buffer. In other words, the DMA controller must provide a way of finding out how many bytes have been received, otherwise we cannot complete the read request to the LDD.
Here is a theoretical solution for a scatter/gather controller that does not provide information about the number of bytes transferred directly. Note: this proposal has not been tested in practice!
Instead of using one large DMA descriptor for 4 KB, we could set up a chain of descriptors for max-packet-size bytes each. When the DMA completes, it will be:
for the whole transfer, in which case we know the number of bytes received.
because it is a short packet. In this case we can try to find out which descriptor was being served at the time. The number of bytes received is then
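Under that hypothetical scheme, the byte count could be reconstructed as in this sketch (an untested proposal mirroring the text above; the function and parameter names are invented for illustration):

```cpp
// Each descriptor in the chain covers exactly maxPacket bytes, so the total
// is the fully completed descriptors plus the residue in the descriptor that
// was being served when the short packet ended the transfer.
inline unsigned BytesReceived(unsigned completedDescriptors,
                              unsigned maxPacket,
                              unsigned bytesInCurrent)
{
    return completedDescriptors * maxPacket + bytesInCurrent;
}
```

For example, ten completed 64-byte descriptors plus a 10-byte residue give 650 bytes.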
Another potential problem is posed by the restartable OUT endpoint timer described in the section
if not, then there was no DMA transfer ongoing, and we can just complete our read request to the LDD.
otherwise we have to abandon, at this point, the plan to complete, and return from the timer callback (without doing any damage). In this case, the DMA complete DFC will run next, the LDD read request structure will be updated, the RX timer will be set again, and we can proceed as normal.
Any outstanding asynchronous request can be cancelled.
Before you start, you must:
Have installed the platform-specific implementation of IIC channels that is required to support the IIC platform service API.
Ensure that the
Include the
An application communicates over an IIC channel by operating as a client of the channel. When the channel uses a slave node, a transmission must be conducted at the level of the transfer, because slaves react to transfers between them and a master node. For this reason, a large part of the client functionality is implemented as a callback function which is passed to the channel object and placed on the client thread's DFC queue for execution. The callback notifies the client of the individual transfers and is the entry point to the application functionality which uses the data being transmitted. The other client API functions control the transmission as a whole.
Capture the channel by calling
Pass the channel Id as its
Pass the channel configuration header as
Pass a pointer to the callback function as
Pass the boolean
In synchronous transmission, when the function returns, the parameter
Register receive and transmit buffers by calling
Pass the channel Id as the
A buffer is represented by a descriptor containing a pointer to the start of the buffer, its total size in bytes and an indication of how much of it is in use. Initialise a descriptor with these values and pass it as
Call
Pass the channel Id as
The callback is called when the state of the channel satisfies one of a number of conditions, called triggers, which are specified in the enumeration
The transmission of data will now take place, controlled by the mechanism of callbacks and triggers. Different buses signal that a transmission has been completed in different ways: for instance, I2C uses a stop bit and SPI changes the voltage on Chip Select.
When you have finished using the channel, release it by calling
Implement the callback as a function containing a series of conditions on the triggers. Its prototype must match the typedef for (*
These functions serve three purposes.
The callback must react to the completion of an asynchronous transmission between it and the bus master.
It must implement the actual functionality of the application by doing something with the data which is written to or read from the buffers.
It must react to cases where the physical transfer of the data has not taken place exactly as expected. For example, there may be more data to transfer than had been anticipated.
Implement the reaction to a completed asynchronous transmission as a conditional statement where
Implement the actual functionality of the application as a conditional statement where
Implement the other cases as conditional statements where the other triggers in
Register the new buffers by calling
Start data transfer to and from the new buffers by calling
The channel is implemented as a state machine, as illustrated. The channel starts and finishes in the
Interrupts are sent by hardware or software to indicate that an event has occurred. Interrupts typically cause an Interrupt Service Routine (ISR) to be executed. The Interrupt platform service specifies the interface APIs for setting up the ISRs and connecting them to specific Interrupt IDs.
Interrupts can be managed using the
The Interrupt platform service is used by device drivers that use an interrupt source routed directly to the interrupt controller hardware. For more information on how to use GPIO to manage interrupts, see
Not all hardware events are indicated by interrupts; hardware may indicate a status change by setting a value in a register or sending a command/signal over a bus.
The Interrupt platform service specifies certain APIs for attaching an Interrupt Service Routine to an interrupt ID. It is the low-level interface between hardware/software interrupts, and the kernel and kernel-level device drivers.
You should be familiar with the following:
Writing a C++ function that is either global or a static class member, so that the function address can be used for the ISR.
The specification for your hardware, including determining interrupt IDs.
The following concepts are relevant to this component:
An identifier which says which interrupt has occurred; typically an unsigned integer.
An ISR is a static function which is called when an interrupt is received. It should be very short, allocate no memory on the heap, store any volatile information that needs to be captured for that event, and, if required, queue a DFC for further processing.
Connects an Interrupt ID to an Interrupt Service Routine. Unbind removes the connection.
When an interrupt occurs, execute the associated Interrupt Service Routine.
The interrupt interface is provided by functions of the
The key users of the Interrupt platform service are kernel and kernel-level device driver writers, and anyone who needs to specify code to be run when a system event occurs.
The ISR needs to complete in a short and finite time to prevent excessive interrupt latency.
To keep ISRs short, most of the event processing should be delegated to a DFC. The DFC will execute in the context of a DFC thread allocated to it by the device driver.
It is safe to access the device driver objects using appropriate synchronization techniques such as spin-locks.
The ISR should store critical data which may not be available later.
Interrupts can happen in the middle of updating the kernel heap free list or other non-atomic actions. ISRs must be coded with that in mind.
This means that ISRs cannot:
access user process memory. This includes completing an asynchronous request.
perform any allocation or de-allocation on heaps. The current interrupt might have occurred when a heap operation is already in progress.
It is therefore necessary that ISRs only use pre-allocated objects on the kernel heap.
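The constraints above can be modelled in standard C++ as an ISR that only writes into a pre-allocated ring buffer and defers all processing to a DFC-like routine. This is a sketch under those constraints, not the Symbian kernel API:

```cpp
#include <cstddef>

// Illustrative model: the ISR stores volatile data in a buffer that was
// pre-allocated at initialisation time and queues a DFC; the heavy lifting
// happens later in RunDfc(), which models the DFC thread's work.
class TIsrModel
{
public:
    void Isr(int volatileData)                 // runs in interrupt context
    {
        if (iCount < KSlots)                   // no allocation: just store
        {
            iRing[(iHead + iCount) % KSlots] = volatileData;
            ++iCount;
        }
        iDfcPending = true;                    // queue the DFC
    }

    int RunDfc()                               // runs later, in a DFC thread
    {
        int processed = 0;
        while (iCount)                         // drain the captured events
        {
            ++processed;
            iHead = (iHead + 1) % KSlots;
            --iCount;
        }
        iDfcPending = false;
        return processed;
    }

private:
    static const std::size_t KSlots = 8;       // pre-allocated capacity
    int iRing[KSlots] = {};
    std::size_t iHead = 0, iCount = 0;
    bool iDfcPending = false;
};
```

Note that the ISR body performs no heap operations and touches no user memory; everything it needs was allocated before interrupts were enabled.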
The nanokernel timer can be used in either single mode or in periodic mode. Typically, the timer is started with
The timeout callback function is called when a timer started with
A timer can be cancelled using
If a timer was queued and a DFC callback was requested, then the expiry handler might run even after
The IIC client interface API provides the IIC bus interface functionality to the OS. See the Template port (
The code can be found at the label:
Entry conditions
This function is called immediately after reset with the CPU in supervisor mode, with all interrupts disabled, and
What the function should do
The function is intended to perform the basic hardware initialisation that is required immediately after reset. It also sets up the address of the
In detail, it should:
Initialise the CPU/MMU registers. This is done by making a call to the generic function
Interrogate the hardware to determine the reset reason. In a transparent reset, all RAM contents are preserved and the reset should be invisible to the user, for example waking up from a low power mode such as the SA1110 sleep mode. In this case, perform whatever device-dependent sequence is required to restore the processor state, and jump back into the power down code:
For bootloader bootstraps, if the reset reason is power on reset or hardware cold reset, or if it is a software reset and a rerun of the bootloader was requested (i.e.
If the system contains FLASH memory which can be programmed with a ROM image, an additional check may be needed to see if an image already present in FLASH should be run instead of the bootloader following a hardware reset. A spare switch on the board is often used for this purpose.
Initialise enough hardware to satisfy the rest of the bootstrap. The precise split between what is initialised here, what is initialised later in the bootstrap, and what is done during variant initialisation is flexible, but as an absolute minimum this function must make some RAM accessible. Typically this function should set up ROM and RAM controllers, set the CPU clock frequency, set sensible states for GPIO lines and set up whatever bus controller is necessary to communicate with any external peripheral chips present. Note also that it is often useful to allow the ROM image to run from either ROM or RAM; this is one of the main reasons for making the bootstrap position independent. To allow for this, the initialisation code must check the PC when setting up the memory controllers; if code is already running from RAM then RAM must already have been initialised and it may not be necessary to touch the RAM controller.
Determine the physical address of the super page. This will usually be at the beginning of physical RAM, unless there is a good reason for it not to be. The main reason why the super page might not be at the beginning of physical RAM is usually that either that address is reserved for hardware (e.g. video RAM), or code is running from that address.
The super page is defined by the
Fill in valid values for the following minimum set of super page fields:
The super page field
The super page field
In debug builds of the bootstrap, as indicated by the symbol
If Level 2 cache is included in the bootstrap (specifically, L210 cache or L220 cache), then:
the function must initialise the cache. This involves setting cache properties: size, type, number of ways and cache invalidation. The following code is an example of L210 cache initialisation, but it is also valid for L220 cache initialisation:
the super page field
Note that the super page is defined by the
Exit conditions
The function can modify any CPU registers, subject to the conditions that:
it returns in supervisor mode with all interrupts disabled
it returns the physical address of the super page in
Entry conditions
The processor state is completely undefined except that
What the function should do
This function is called if a fatal error is detected during boot.
The implementation should halt the system and produce whatever diagnostics are possible. Typically, if debug tracing is enabled, the CPU state can be dumped over the debug port. A generic function is provided to do this, as the minimal implementation is simply:
Entry conditions
On entry the CPU state is indeterminate, as the restart may be caused by a kernel fault, with the exception that
This value is hardware specific except for the following two values:
What the function should do
The function is called by the kernel when a system restart is required, and should perform whatever actions are necessary to restart the system. If possible, a hardware reset should be used. Failing that, it should reset all peripheral controllers and CPU registers, disable the MMU and jump back to the reset vector.
The digitizer hardware provides coordinate values that are digitized raw voltage values. These values are not the same as screen coordinates, which are based on pixel positions. The digitizer converts the raw values into screen coordinates and vice versa.
Screen coordinates always have their origin at the top left-hand corner of the screen, but the digitizer has no such constraint: its origin, orientation and scaling are generally different from the screen coordinate system. This means that the digitizer needs to be calibrated so that digitizer coordinates can be translated to screen coordinates.
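A minimal sketch of such a translation, assuming a simple per-axis linear mapping with example constants (real scale and offset values come from the calibration process; the type and member names here are invented for illustration):

```cpp
// One axis of a linear digitizer calibration: screen = raw * scale + offset.
// Integer rational arithmetic avoids floating point, as is common in
// kernel-side code.
struct TCalibrationAxis
{
    int iScaleNum;   // scale factor numerator
    int iScaleDen;   // scale factor denominator
    int iOffset;     // offset in pixels

    int RawToScreen(int raw) const
    {
        return raw * iScaleNum / iScaleDen + iOffset;
    }
    int ScreenToRaw(int screen) const
    {
        return (screen - iOffset) * iScaleDen / iScaleNum;
    }
};
```

With example constants scale 3/40 and offset -10, a raw reading of 400 maps to screen position 20, and converting back recovers the raw value, which is the "vice versa" translation the text describes.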
On completion of the request, the driver calls
The user retrieves the result of the request by calling
An asynchronous request is one which is used, typically, to communicate with the hardware itself. The time taken for such a request to complete depends on the hardware, and the client user-side thread may want to continue with some other processing. The user-side thread is not blocked and can continue with other operations, including issuing other requests (synchronous or asynchronous).
An asynchronous request is started by a call to
More than one asynchronous request can be outstanding at the same time, each one associated with its own
An outstanding asynchronous request can be cancelled by a call to
The phone hardware must therefore provide a high-speed timer that can generate regular 1 ms interrupts. The ASSP/Variant part of the base port must use this timer hardware to call the kernel every millisecond.
The base port provides the timer service in an interrupt service routine called a tick handler. The functions used for this are as follows:
The tick handler must be started by the Variant's implementation of the
The tick handler calls the kernel's
The Variant reports the exact number of microseconds between ticks in its implementation of
The ASSP/Variant must decide how to implement a repeating tick interrupt for the hardware available. Typically this involves using either a free-running timer with a match register, which is reset on each match, or a self-reloading countdown timer, to generate a periodic interrupt.
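For the free-running-timer variant, the match-register reload is a simple unsigned addition, and wrap-around of the counter is handled automatically by modular arithmetic. This helper is illustrative and assumes a 32-bit counter:

```cpp
#include <cstdint>

// On each match interrupt, the next match value is the old one plus the
// number of timer ticks per millisecond. Unsigned 32-bit arithmetic wraps
// modulo 2^32, exactly as the free-running counter itself does, so the
// reload stays correct across counter wrap-around.
inline std::uint32_t NextMatch(std::uint32_t currentMatch, std::uint32_t ticksPerMs)
{
    return currentMatch + ticksPerMs;
}
```

For example, a match value near the top of the counter range reloads to a small value past zero, keeping the tick period constant across the wrap.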
A base port can implement other timers that are required by peripherals that have sub-millisecond timing requirements. This is optional, and depends on the available hardware.
There are four modes in which the Perl script
takes the
generates three tables:
the properties, i.e. whether valid and/or writable, for each item
the offset of each item within the
the function implementing each item, for derived attributes (i.e. those attributes that are not simply stored by the HAL)
generates a source file containing skeleton code for the implementation of the accessor function for each derived attribute
generates a header file containing prototypes for the accessor functions for each derived attribute
Note that the header file
The USB client driver,
the
the
Add the command line test applications. This can be done in a special purpose
The macro
Kernel Architecture
No specifications are published.
The Interrupt interface is provided by functions of the
The header file for the Interrupt platform service can be found
This topic describes the architecture of keyboard drivers and related components on the Symbian platform.
There are five components involved in this process:
The keyboard hardware.
A keyboard device driver.
The keyboard driver is implemented as a standard kernel extension, and is loaded by the kernel at boot time. A keyboard driver must be implemented in the base port, and has no platform-independent parts.
A keyboard mapping DLL.
A keyboard mapping DLL provides a set of lookup tables that map codes for the hardware keys to standard logical keycodes that user-side applications can process.
The DLL is platform specific. It must be implemented in the base port, and has no platform-independent parts. It is usually called
The Window Server.
The Window Server is a system server that manages access to the screen and input devices for user-side programs.
The keyboard translator.
The keyboard translator DLL translates scancodes into keycodes using the data in the platform-specific keyboard mapping DLL.
It is part of Symbian platform generic code and is built as part of the Text Window Server component.
The following diagram shows how these components fit together and gives an overview of the data flow.
Depending on the keyboard hardware, key presses in the form of key-down and key-up events are registered by the keyboard driver, either as a result of hardware interrupts, or as a result of polling the hardware. The driver assigns a standard scancode value to the key. This is one of the values defined by the
The driver creates
The Window Server captures these events and passes the scancode value to
The client will typically have a DMA class containing pointers to:
a
one or more
a
a callback which returns when the data transfer completes (successfully or not).
These classes are discussed in
To allow a client application to receive or transmit a block of data by DMA, you set up a transfer between a source or a destination buffer and a source or a destination peripheral. Data is copied between the client process and kernel memory by IPC in the form of a data transfer between two buffers. When a shared chunk or a shared data buffer is used, no copying of data to kernel memory is required. At the level of implementation there is no difference between receiving and transmitting; client drivers using DMA sometimes have separate receive and transmit functions defined from the driver point of view.
You transfer data in this sequence:
Open a DMA channel.
Fragment the data if necessary.
Queue the transfer.
Close the channel.
Example code in this tutorial is taken from the
The value should be set based on the number of simultaneous transfers, the transfer length and the type of memory used for the transfer. If you select too high a value, no descriptors will be left for other clients; if you select too low a value, the transfer may fail.
This is not necessary if the channel is already open due to a previous transfer request by the same application on the same channel.
The data is now queued on the channel.
The channel is now closed.
You have now conducted a data transfer by DMA.
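The fragmentation step of the sequence above can be sketched as follows. The maximum-fragment limit is an assumed example value, not the real DMA framework's limit:

```cpp
#include <cstddef>
#include <vector>

// Split a transfer into fragments no larger than the maximum number of
// bytes a single DMA descriptor can describe.
inline std::vector<std::size_t> Fragment(std::size_t totalBytes,
                                         std::size_t maxFragment)
{
    std::vector<std::size_t> frags;
    while (totalBytes > maxFragment)
    {
        frags.push_back(maxFragment);
        totalBytes -= maxFragment;
    }
    if (totalBytes)
        frags.push_back(totalBytes);    // final, possibly short, fragment
    return frags;
}
```

A 650-byte transfer with a 256-byte descriptor limit, for instance, needs three fragments, which also shows why the descriptor-count setting discussed above matters: each fragment consumes a descriptor.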
This document explains the points that have to be considered when migrating existing media drivers to a writable data paging environment.
The two main issues that have to be addressed when migrating existing media drivers are:
A deadlock condition can occur between the driver and the client process, where both are trying to access the same page of paged memory.
A deadlock condition can occur between the driver and the client process, where the client is allocating on the kernel heap, the paging system tries to page out some data and the media driver also tries to allocate on the kernel heap while servicing the resulting request.
Unpredictable delays in servicing a DFC can occur, since memory might need to be paged in.
To address the above issues, the following points will have to be considered or undertaken:
Pass data by value to DoRequest/DoControl
Return results by using a return code
Use of a dedicated DFC queue
Determine if unpredictable delays in servicing DFCs are acceptable
Validate the arguments in the client context as far as possible
Move user memory accesses to client context where possible
Replace the use of small fixed-size structures with the use of the TClientDataRequest object
Pin user memory accessed from DFCs
Do not allocate on the kernel heap while handling page out requests.
The paging algorithm can be overridden and pages of memory can be kept in memory (pinned). This can be reversed by 'unpinning' the page. The TClientBufferRequest API provides this functionality.
Re-write the driver to use shared chunks
No fast mutex may be held when calling APIs that access user memory
No kernel mutex may be held
Use the new APIs (see the
This document explains techniques for writing device drivers on data-paged systems and migrating device drivers to data-paged systems.
The functionality of a particular hardware technology is enabled when an interface to the standard for that technology is defined.
The hardware is enabled when the following exist:
LDD – Logical Device Driver
PDD – Physical Device Driver
Kernel extensions provide additional functionality in the kernel and run with the same access rights and permissions as the rest of the kernel.
Adaptation device drivers (LDD/PDD) require low-level access to the device, and implementing them as a kernel extension gives them this access.
LDD
Each LDD has the following features:
the LDD is a packaged DLL
provides user-side functionality
is loaded into the kernel
calls a function that opens a
opens an
allows user code to communicate with the hardware and send commands to control the hardware
defines an abstract class that must be implemented in the PDD
Mandatory: there must be one LDD for each hardware component.
PDD
PDDs have the following features:
the PDD is a packaged DLL
provides access to hardware
is loaded into the kernel
calls a function that opens a physical channel
receives control commands and other communications through its interface with the LDD
may be an extension to an existing device driver
Mandatory: there must be one PDD for each hardware platform for each hardware component.
A Kernel Extension is additional functionality that runs as part of the kernel.
For example, access to the SD Card is provided through a kernel extension.
Adaptation ready architecture
An adaptation-ready architecture means the following have been created:
The LDD
The abstract class that must be implemented in the PDD
The interface to the user code
Any optional kernel extension DLLs
Optional reference implementation PDD
Adaptation options
There are broadly two types of hardware adaptation:
New hardware
Updated hardware
New hardware
The entire adaptation architecture, as described above, must be defined. This includes:
Defining an abstract class that the PDD creators are required to implement, creating an LDD and so on.
The architecture enables the new hardware.
The architecture allows hardware manufacturers to create their own implementation (PDD).
New hardware adaptation will be supported with a reference implementation, which will go a long way towards guiding the device developer's efforts.
Updated hardware
Updated hardware which provides new or modified functionality requires a new PDD to be created, either to extend the functionality of the existing PDD or to replace it.
In most cases the new functionality of the hardware will be enabled by extending the architecture with a new extension PDD or by writing a new PDD that incorporates all the existing and new functionality.
There are situations where the entire adaptation architecture needs to be extended or redesigned, such as when a hardware component is extended in a way that was not expected. The Bluetooth baseband radio being extended to provide Bluetooth, 802.11 and FM radio from the same hardware component, for example, will require a redesign of the adaptation architecture so that the LDD knows about 802.11 and FM as well as Bluetooth. Such a redesign should ensure that older Bluetooth PDDs still work, while extending the set of PDDs that work with the LDD to include Bluetooth, 802.11, FM radio, and combinations of the above.
Adaptation is described in more detail in these +documents:
Requests from the user side are initially handled by the driver in the context of the client user-side thread. All requests are passed to the "gateway" function:
There are two options for implementing this:
Use the ready-made framework provided by the
In practice, this model makes the writing of device drivers easier, because the same kernel thread can be used to process requests from (potentially multiple) user-side clients and DFCs, thus in effect serialising access to the device driver and eliminating thread-related issues, such as the need to know about mutexes, pre-emption, and so on. Several drivers can use the same request/DFC kernel thread to reduce resource usage.
Derive your own logical channel class from
Option 1 lets you get a new driver up and running quickly. Option 2 gives you greater flexibility if the requirements of your driver demand it.
Before a channel can be opened, the Logical Device Driver (LDD) must be loaded. Load a logical device using
Open your USB channel using the LDD open function
After you have loaded and opened the USB LDD, you should
There are two places where you may want to consider using a registry to hold information for DMA.
Chipset configuration
Channel configuration and information.
On initialization of the chipset you can store and retrieve settings in the HCR.
In the
When a client opens a channel, it specifies a cookie parameter in the
One way that the implementation can communicate this information to the client is by using the Central Repository to hold a table of cookie values and matching channel identification. An alternative approach may be to use a Central Repository entry to specify the number of channels available; the client then knows the highest channel number it can request.
Again, this is entirely implementation specific. In the implementation mentioned above, the repository is not used.
The Time platform service interface is provided by the
The following functions should be implemented to provide the Time platform service functionality.
The functions related to the Time platform service are in the ASIC class.
The header file for the Time platform service can be found