--- a/Symbian3/SDK/Source/GUID-68417158-D625-56BF-BDD5-BE49A7651CED.dita Fri Jul 16 17:23:46 2010 +0100
+++ b/Symbian3/SDK/Source/GUID-68417158-D625-56BF-BDD5-BE49A7651CED.dita Tue Jul 20 12:00:49 2010 +0100
@@ -11,4 +11,4 @@
PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<concept xml:lang="en" id="GUID-68417158-D625-56BF-BDD5-BE49A7651CED"><title>Audio Output Streaming Overview </title><prolog><metadata><keywords/></metadata></prolog><conbody><p>This document provides an overview of Audio Output Streaming. </p> <section><title>Purpose</title> <p>Audio Output Streaming is the interface to streaming sampled audio data to the low level audio controller part of the Multimedia Framework (MMF). </p> <p><b>Audio Output Streaming Library Details</b> </p> <p>The DLL that provides the functionality and the library to which your code must link is identified below. </p> <table id="GUID-DE828986-946F-5519-97C6-9FB89E972AB8"><tgroup cols="3"><colspec colname="col0"/><colspec colname="col1"/><colspec colname="col2"/><thead><row><entry>DLL</entry> <entry>LIB</entry> <entry>Short Description</entry> </row> </thead> <tbody><row><entry><p> <b>mediaclientaudiostream.dll</b> </p> </entry> <entry><p> <b>mediaclientaudiostream.lib</b> </p> </entry> <entry><p>These files are used for implementing Audio Output Streaming. </p> </entry> </row> </tbody> </tgroup> </table> </section> <section><title>Architectural Relationship</title> <p>How the Audio Output Streaming classes interact with other components of MMF is shown below: </p> <fig id="GUID-ABA14A10-ACBD-5C46-8E74-DD1B81AE1EF7"><title>
Audio output streaming overview
- </title> <image href="GUID-C3A8290D-44BA-5AAD-8F0D-745FF3F10E0B_d0e308998_href.png" placement="inline"/></fig> </section> <section><title>Description</title> <p>Streamed audio data is sent and received incrementally. This means: </p> <ul><li id="GUID-0B12FC8E-B60A-51A7-A2AD-AE9637DF46BC"><p>sound clips sent to the low level audio controller (audio play) can be sent as they arrive rather than waiting until the entire clip is received. </p> <p>The user of the API should maintain the data fragments in a queue before sending them to the server. If the user attempts to send data faster than the server can receive it, the excess data fragments are maintained in another client side queue (invisible to the user), whose elements are references to the buffers passed to it. The server notifies the client using a callback each time it has received a data fragment. This indicates to the client that the data fragment can be deleted. </p> </li> <li id="GUID-E6A88A7A-AEF2-596D-828B-8449D95D4829"><p>sound clips that are being captured by the low level audio controller (audio record) can be read incrementally without having to wait until audio capture is complete. </p> <p>The low level audio controller maintains the received buffers where it can place the audio data that is being captured. The client uses a read function to read the received data into destination descriptors. </p> </li> </ul> <p>The client is also notified (for audio play and record) when the stream is opened and available for use (opening takes place asynchronously), and when the stream is closed. </p> <p>This API can only be used to stream audio data, with the data being stored or sourced from a descriptor. Client applications must ensure that the data is in 16 bit PCM format as this is the only format supported. The API does not support mixing. A priority mechanism is used to control access to the sound device by more than one client. </p> </section> <section><title>Key Audio Output Streaming Classes</title> <p>The functionality provided by Audio Output Streaming is contained within the <xref href="GUID-B87C8F92-9737-3636-9800-BA267A1DCA6D.dita"><apiname>CMdaAudioOutputStream</apiname></xref> class. </p> </section> <section><title>Using Audio Output Streaming</title> <p>Clients can use Audio Output Streaming to: </p> <ul><li id="GUID-79DC35F3-2B6C-55DF-B6DE-88C075F1200E"><p>Open, set audio properties, write to and close the audio stream. </p> </li> </ul> </section> <section><title>See Also</title> <p><xref href="GUID-ECBA6331-2187-52C9-A5DF-20CD1EEFE781.dita"> Audio Output Streaming Tutorial </xref> </p> </section> </conbody></concept>
\ No newline at end of file
+ </title> <image href="GUID-C3A8290D-44BA-5AAD-8F0D-745FF3F10E0B_d0e315468_href.png" placement="inline"/></fig> </section> <section><title>Description</title> <p>Streamed audio data is sent and received incrementally. This means: </p> <ul><li id="GUID-0B12FC8E-B60A-51A7-A2AD-AE9637DF46BC"><p>sound clips sent to the low level audio controller (audio play) can be sent as they arrive rather than waiting until the entire clip is received. </p> <p>The user of the API should maintain the data fragments in a queue before sending them to the server. If the user attempts to send data faster than the server can receive it, the excess data fragments are maintained in another client side queue (invisible to the user), whose elements are references to the buffers passed to it. The server notifies the client using a callback each time it has received a data fragment. This indicates to the client that the data fragment can be deleted. </p> </li> <li id="GUID-E6A88A7A-AEF2-596D-828B-8449D95D4829"><p>sound clips that are being captured by the low level audio controller (audio record) can be read incrementally without having to wait until audio capture is complete. </p> <p>The low level audio controller maintains the received buffers where it can place the audio data that is being captured. The client uses a read function to read the received data into destination descriptors. </p> </li> </ul> <p>The client is also notified (for audio play and record) when the stream is opened and available for use (opening takes place asynchronously), and when the stream is closed. </p> <p>This API can only be used to stream audio data, with the data being stored or sourced from a descriptor. Client applications must ensure that the data is in 16 bit PCM format as this is the only format supported. The API does not support mixing. A priority mechanism is used to control access to the sound device by more than one client. </p> </section> <section><title>Key Audio Output Streaming Classes</title> <p>The functionality provided by Audio Output Streaming is contained within the <xref href="GUID-B87C8F92-9737-3636-9800-BA267A1DCA6D.dita"><apiname>CMdaAudioOutputStream</apiname></xref> class. </p> </section> <section><title>Using Audio Output Streaming</title> <p>Clients can use Audio Output Streaming to: </p> <ul><li id="GUID-79DC35F3-2B6C-55DF-B6DE-88C075F1200E"><p>Open, set audio properties, write to and close the audio stream. </p> </li> </ul> </section> <section><title>See Also</title> <p><xref href="GUID-ECBA6331-2187-52C9-A5DF-20CD1EEFE781.dita"> Audio Output Streaming Tutorial </xref> </p> </section> </conbody></concept>
\ No newline at end of file
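
The overview names CMdaAudioOutputStream as the class that provides this functionality and describes the open / set-properties / write / close sequence, with the server notifying the client as each data fragment is copied. Below is a minimal, illustrative client-side sketch of that sequence. It assumes the MMdaAudioOutputStreamCallback observer interface and the mdaaudiooutputstream.h / mda/common/audio.h headers from the standard MMF client API (they are not spelled out in this topic), and uses 8 kHz mono purely as an example; for the authoritative walkthrough see the linked Audio Output Streaming Tutorial.

// Illustrative sketch only: a client that opens an audio output stream,
// writes descriptors of 16-bit PCM data, and is notified when the server
// has copied each fragment and when playback completes. The observer
// interface and header names are assumptions from the standard MMF client
// API, not taken from this overview.
#include <mdaaudiooutputstream.h>
#include <mda/common/audio.h>

class CStreamWriter : public CBase, public MMdaAudioOutputStreamCallback
    {
public:
    static CStreamWriter* NewL()
        {
        CStreamWriter* self = new (ELeave) CStreamWriter();
        CleanupStack::PushL(self);
        self->ConstructL();
        CleanupStack::Pop(self);
        return self;
        }

    ~CStreamWriter()
        {
        delete iStream;
        }

    // Hand one fragment of 16-bit PCM data to the stream. WriteL() returns
    // immediately; the descriptor must stay valid until MaoscBufferCopied()
    // reports that the server has received it.
    void WriteFragmentL(const TDesC8& aPcm)
        {
        iStream->WriteL(aPcm);
        }

    // From MMdaAudioOutputStreamCallback: the stream has opened (opening is asynchronous).
    void MaoscOpenComplete(TInt aError)
        {
        if (aError == KErrNone)
            {
            // Set the audio properties before writing any data; 8 kHz mono
            // is only an example format for this sketch.
            TRAP_IGNORE(iStream->SetAudioPropertiesL(
                TMdaAudioDataSettings::ESampleRate8000Hz,
                TMdaAudioDataSettings::EChannelsMono));
            iStream->SetVolume(iStream->MaxVolume() / 2);
            }
        }

    // The server has received this fragment; the client may now release it.
    void MaoscBufferCopied(TInt /*aError*/, const TDesC8& /*aBuffer*/)
        {
        }

    // All queued data has been played, or the stream was stopped/closed.
    void MaoscPlayComplete(TInt /*aError*/)
        {
        }

private:
    void ConstructL()
        {
        iStream = CMdaAudioOutputStream::NewL(*this);
        iStream->Open(&iSettings); // completes in MaoscOpenComplete()
        }

private:
    CMdaAudioOutputStream* iStream;
    TMdaAudioDataSettings  iSettings;
    };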