Symbian3/SDK/Source/GUID-B0449B60-B78E-5CC1-8FAF-E5EE24D88EB2.dita
changeset 8 ae94777fff8f
parent 7 51a74ef9ed63
child 13 48780e181b38
equal deleted inserted replaced
7:51a74ef9ed63 8:ae94777fff8f
-->
<!DOCTYPE concept
  PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<concept xml:lang="en" id="GUID-B0449B60-B78E-5CC1-8FAF-E5EE24D88EB2"><title>Advanced Audio Adaptation Framework Technology Guide</title><prolog><metadata><keywords/></metadata></prolog><conbody><p>This document provides additional information about the Advanced Audio Adaptation Framework. </p>
<section><title>Purpose</title> <p>The Advanced Audio Adaptation Framework (A3F) provides a common interface for accessing audio resources, playing tones, and configuring audio for playing and recording. </p> </section>
<section><title>Understanding A3F</title> <p>A3F functionality is provided by the <xref href="GUID-C4AD7B75-9027-3F62-889C-ADEF5E6DBC73.dita"><apiname>CAudioContextFactory</apiname></xref> class. <xref href="GUID-C4AD7B75-9027-3F62-889C-ADEF5E6DBC73.dita"><apiname>CAudioContextFactory</apiname></xref> is responsible for creating audio contexts. The Audio Context API (<xref href="GUID-67BE95B2-BE4A-32AF-8BDF-92FD8FBE6DC3.dita"><apiname>MAudioContext</apiname></xref>) maintains all audio components created in a context, where audio components are instances of the <xref href="GUID-C5B1FE01-DCFC-3CA5-931B-E371AEC918A6.dita"><apiname>MAudioStream</apiname></xref> and <xref href="GUID-0536EF5D-3DA6-3F30-A404-52FE9E21B359.dita"><apiname>MAudioProcessingUnit</apiname></xref> classes. </p>
<p>An <xref href="GUID-0536EF5D-3DA6-3F30-A404-52FE9E21B359.dita"><apiname>MAudioProcessingUnit</apiname></xref> is created in a context by specifying the type of the processing unit, such as a codec, source, or sink. <xref href="GUID-0536EF5D-3DA6-3F30-A404-52FE9E21B359.dita"><apiname>MAudioProcessingUnit</apiname></xref> allows extension interfaces to set additional audio processing unit settings. </p>
<p>The Audio Stream API (<xref href="GUID-C5B1FE01-DCFC-3CA5-931B-E371AEC918A6.dita"><apiname>MAudioStream</apiname></xref>) is an audio component that links processing units together into an audio stream. <xref href="GUID-C5B1FE01-DCFC-3CA5-931B-E371AEC918A6.dita"><apiname>MAudioStream</apiname></xref> manages the audio processing state requested by the client and interacts with the audio processing units added to the stream. <xref href="GUID-C5B1FE01-DCFC-3CA5-931B-E371AEC918A6.dita"><apiname>MAudioStream</apiname></xref> provides runtime control over a stream's (composite) behaviour and lifetime management of the entities in a stream (whether the state of the stream is uninitialized, initialized, idle, primed, active or dead). For more information, see <xref href="GUID-B0449B60-B78E-5CC1-8FAF-E5EE24D88EB2.dita#GUID-B0449B60-B78E-5CC1-8FAF-E5EE24D88EB2/GUID-28D6AB9C-8F4F-573A-853D-726138249390">Stream States</xref>. </p>
<p>The Audio Context API provides a grouping mechanism for multimedia resource allocations. <xref href="GUID-67BE95B2-BE4A-32AF-8BDF-92FD8FBE6DC3.dita"><apiname>MAudioContext</apiname></xref> groups the component actions made inside a context. This means that any allocation or loss of resources occurring in any single audio component affects all of the components grouped in the context. </p>
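<p>The following is a minimal sketch of how a client might assemble a playback chain: a context is created through <codeph>CAudioContextFactory</codeph>, the stream and processing units are created in that context, and the units are then linked into the stream. The <codeph>CMyAudioClient</codeph> members, the header names, and the processing-unit type UIDs used here are illustrative assumptions rather than a definitive recipe. </p>
<codeblock>// Minimal sketch: building an audio chain inside a context.
#include &lt;a3f/audiocontextfactory.h&gt;   // assumed header locations
#include &lt;a3f/maudiocontext.h&gt;
#include &lt;a3f/maudiostream.h&gt;
#include &lt;a3f/maudioprocessingunit.h&gt;

void CMyAudioClient::ConstructL()
    {
    // The factory creates contexts; a context groups the components
    // created in it for resource-management purposes.
    iFactory = CAudioContextFactory::NewL();
    User::LeaveIfError(iFactory-&gt;CreateAudioContext(iContext));

    // Create a stream and the processing units it will link together.
    User::LeaveIfError(iContext-&gt;CreateAudioStream(iStream));
    User::LeaveIfError(iContext-&gt;CreateAudioProcessingUnit(KUidAudioDecoder, iCodec));
    User::LeaveIfError(iContext-&gt;CreateAudioProcessingUnit(KUidMmfBufferSource, iSource));
    User::LeaveIfError(iContext-&gt;CreateAudioProcessingUnit(KUidAudioDeviceSink, iSink));

    // Attach the units to the stream: source, codec, sink.
    User::LeaveIfError(iStream-&gt;AddSource(iSource));
    User::LeaveIfError(iStream-&gt;AddAudioCodec(iCodec));
    User::LeaveIfError(iStream-&gt;AddSink(iSink));
    }</codeblock>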
<p id="GUID-C13977B1-D855-5D46-913C-15059421C225"><b>The Commit Cycle</b> </p>
<p>Changes made to an audio stream are handled in a transactional manner. If any change fails to be applied, all other changes made to the stream are considered to have failed as well. The application of changes depends on a successful commit cycle. If there is a failure, a "rollback" mechanism is executed and the stream states are reverted to their previous state. </p>
<p>The client requests changes through various <codeph>Set()</codeph> calls and then calls the <xref href="GUID-67BE95B2-BE4A-32AF-8BDF-92FD8FBE6DC3.dita#GUID-67BE95B2-BE4A-32AF-8BDF-92FD8FBE6DC3/GUID-7011BDC1-C4D8-3BB5-9B7C-8729FADCE67E"><apiname>MAudioContext::Commit()</apiname></xref> function. The platform-specific implementation then decides whether the <codeph>Commit()</codeph> request is allowed, dictates new values if needed, and then applies the changes. If successful, the client is informed through callbacks. If the context must be modified because resources are needed in another context, the platform-specific implementation can take the resources by pre-empting the context. In this case, the client is informed about the pre-emption. </p>
<p>The following diagram shows this behaviour: </p>
<fig id="GUID-18D00361-18E7-5A5C-B8C0-115E1D2DF29F"><title>A3F Commit / Pre-emption Cycle</title> <image href="GUID-F68FF4C2-F9DB-5935-9027-9BEC006D031F_d0e324544_href.png" placement="inline"/></fig>
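<p>A minimal sketch of this cycle, continuing the example above: the client records the desired settings first, and nothing takes effect until <codeph>Commit()</codeph> succeeds. The gain-control member and the chosen gain value are illustrative assumptions. </p>
<codeblock>void CMyAudioClient::StartPlaybackL()
    {
    // Request changes; these calls only record the desired settings
    // in the logical chain, they do not apply anything yet.
    User::LeaveIfError(iStream-&gt;Initialize());
    User::LeaveIfError(iGainControl-&gt;SetGain(iMaxGain / 2));

    // Ask the adaptation to apply everything requested above as one
    // transaction. The outcome arrives asynchronously through the
    // observers; on failure the whole set of changes is rolled back.
    User::LeaveIfError(iContext-&gt;Commit());
    }</codeblock>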
<p><b>Observers</b> </p>
<p>Most of the A3F API functions are asynchronous. For example, a call to <codeph>Foo()</codeph> or <codeph>SetFoo()</codeph> is followed by a <codeph>FooComplete(TInt aError)</codeph> or <codeph>FooSet(TInt aError)</codeph> callback. </p>
<p>The following A3F observers are defined: </p>
<ul><li id="GUID-11C7E019-8628-533E-ACAB-E7A7C4893C65"><p> <xref href="GUID-3336EC2B-4FB8-3FD0-A702-0CB50DE059B4.dita"><apiname>MAudioContextObserver</apiname></xref> </p> <p>Informs the client with <codeph>ContextEvent()</codeph> about the commit cycle. Note that <codeph>ContextEvent()</codeph> is called in the same thread context as the A3F client. </p> </li> <li id="GUID-1E628CBE-641F-54B2-B144-FFEC88AB2ECF"><p> <xref href="GUID-D2075F61-F6FA-3FAE-9FBB-20CEFE81334C.dita"><apiname>MAudioStreamObserver</apiname></xref> </p> <p>Informs the client with <codeph>StateEvent()</codeph> about state changes. </p> </li> <li id="GUID-611327B3-9278-5793-9BE2-072B898A5245"><p> <xref href="GUID-805E421D-9143-326D-9455-FF40205AA70A.dita"><apiname>MAudioCodecObserver</apiname></xref> </p> <p>Informs the client about the completion of setters and getters. </p> </li> <li id="GUID-CD6FC1B2-B581-5CD2-9355-8B3AF8681680"><p> <xref href="GUID-B235174E-E8AC-36EE-8BCC-F466EEB8E720.dita"><apiname>MAudioGainControlObserver</apiname></xref> </p> <p>Informs the client about changes in gain or in the maximum gain values. </p> </li> <li id="GUID-A46EF843-AD06-583E-8E1E-712CC0341421"><p> <xref href="GUID-C070F06A-E77A-3477-90A2-C2E38B0E823C.dita"><apiname>MAudioProcessingUnitObserver</apiname></xref> </p> <p>Informs the client about any additional errors in audio processing units. </p> </li> <li id="GUID-178B9281-35A8-519A-A01A-920134606A87"><p> <xref href="GUID-BC675A52-D3B5-3F97-B986-8643A8FEFE59.dita"><apiname>MMMFAudioDataConsumer</apiname></xref> </p> <p>Informs the client when a recorded buffer is ready for storing. Also informs the client if there is a buffer to be discarded. </p> </li> <li id="GUID-8F5E2944-DB55-5601-B150-490919C98C4E"><p> <xref href="GUID-5F500DE3-5253-326C-B94A-1CBD7C83C082.dita"><apiname>MMMFAudioDataSupplier</apiname></xref> </p> <p>Informs the client when a new buffer is ready to be filled with audio data. Also informs the client if a previously requested buffer should be ignored. </p> </li> </ul>
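<p>The sketch below shows how a client might implement two of these callbacks. Treat the <codeph>KUidA3FContextUpdateComplete</codeph> and <codeph>KUidA3FContextPreEmption</codeph> event identifiers as assumptions about the exact UID names. </p>
<codeblock>// MAudioContextObserver: called in the client's thread context at
// each phase of the commit cycle.
void CMyAudioClient::ContextEvent(TUid aEvent, TInt aError)
    {
    if (aEvent == KUidA3FContextUpdateComplete)   // assumed UID name
        {
        // Commit finished: aError is KErrNone on success; otherwise
        // every change in the transaction was rolled back.
        }
    else if (aEvent == KUidA3FContextPreEmption)  // assumed UID name
        {
        // A higher-importance client took the resources; expect a
        // StateEvent() demoting the stream.
        }
    }

// MAudioGainControlObserver: asynchronous completion of SetGain().
void CMyAudioClient::GainChanged(MAudioGainControl&amp; aGain, TInt aError)
    {
    // aError is KErrNone if the requested gain was applied.
    }</codeblock>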
<p id="GUID-28D6AB9C-8F4F-573A-853D-726138249390"><b>Stream States</b> </p>
<p>Clients using A3F should be aware that requested audio resources can be lost at any time while they are being used. A client with a higher importance can cause requests for resources to be denied or available resources to become unavailable. In these cases, the commit cycle informs the client through pre-emption events and the resulting stream state is usually demoted to the highest non-disturbing state. </p>
<p>The possible audio stream states are described in the following table: </p>
<table id="GUID-33D9A149-C1CF-52F0-87E1-234CF93F677E"><tgroup cols="2"><colspec colname="col0"/><colspec colname="col1"/><thead><row><entry>State</entry> <entry>Description</entry> </row> </thead> <tbody><row><entry><p> <codeph>EUninitialized</codeph> </p> </entry> <entry><p>This is the state of the chain before it is initialized. The settings in the logical chain cannot yet be related to the adaptation because no adaptation has been selected. </p> </entry> </row> <row><entry><p> <codeph>EInitialized</codeph> </p> </entry> <entry><p>This state is set after a successful initialization request. </p> <p>The physical adaptation has been selected but may not be fully allocated. </p> <p>There should be no externally visible buffers allocated at this point. </p> <p> <b>Note</b>: For zero copy and shared chunk buffers, a stream in the <codeph>EInitialized</codeph> state should not require buffer consumption. This is an important issue if the base port only has 16 addressable chunks per process. </p> <p>In terms of DevSound compatibility, some custom interfaces are available at this point (although others may require further construction before they are available). </p> </entry> </row> <row><entry><p> <codeph>EIdle</codeph> </p> </entry> <entry><p>All the chain resources are allocated. However, no processing time (other than that expended to put the chain into the <codeph>EIdle</codeph> state) is consumed in this phase. </p> <p>In the <codeph>EIdle</codeph> state, any existing allocated buffers continue to exist. </p> <p>The codec is allocated in terms of hardware memory consumption. </p> <p>A stream in the <codeph>EIdle</codeph> state can be at a mid-file position. There is no implied reset of position or runtime settings (that is, time played) by returning to <codeph>EIdle</codeph>. </p> </entry> </row> <row><entry><p> <codeph>EPrimed</codeph> </p> </entry> <entry><p>This state is the same as <codeph>EIdle</codeph> except that the stream can consume processing time by filling its buffers. The purpose of this state is to prepare a stream so that it is ready to play in as short a time as possible (for example, low latency for audio chains which can be pre-buffered). </p> <p>Note that once the buffer is full, the stream may continue to use some processing time. </p> <p>There is no automatic transition to the <codeph>EIdle</codeph> or <codeph>EActive</codeph> state when the buffer is full. If such behaviour is desired, the client must request it. </p> </entry> </row> <row><entry><p> <codeph>EActive</codeph> </p> </entry> <entry><p>The chain has the same resources as in <codeph>EIdle</codeph> and <codeph>EPrimed</codeph>, but has also started to process the actions requested by the client. </p> <p> <b>Note:</b> A chain in the <codeph>EActive</codeph> state can be performing a wide range of operations. However, the semantics are that it is processing the stream and consuming both processing and memory resources. </p> </entry> </row> <row><entry><p> <codeph>EDead</codeph> </p> </entry> <entry><p>The stream can no longer function due to a fatal error. The logical chain still exists, but any physical resources should be reclaimed by the adaptation. </p> </entry> </row> </tbody> </tgroup> </table>
<p>The following diagram shows the stream states: </p>
<fig id="GUID-1FA73F08-1C43-57AA-AEFA-DDEDD9464DDA"><title>A3F State Machine</title> <image href="GUID-D2DCBC1F-91B8-5F81-AAE8-546AE3EB1E29_d0e324862_href.png" placement="inline"/></fig>
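<p>The sketch below shows one way to walk a playback stream up through these states from <codeph>StateEvent()</codeph>. It assumes that <codeph>Load()</codeph> requests <codeph>EIdle</codeph> and <codeph>Activate()</codeph> requests <codeph>EActive</codeph>, and that transition requests, like setting changes, only take effect after a successful <codeph>Commit()</codeph>. </p>
<codeblock>// MAudioStreamObserver: each state change is reported here.
void CMyAudioClient::StateEvent(MAudioStream&amp; aStream, TInt aError,
                                TAudioState aNewState)
    {
    if (aError != KErrNone)
        {
        // A fatal error leaves the stream in EDead: only the logical
        // chain remains, so release the physical resources and rebuild.
        return;
        }
    TBool transitionRequested = EFalse;
    switch (aNewState)
        {
    case EInitialized:
        aStream.Load();      // request EIdle: allocate chain resources
        transitionRequested = ETrue;
        break;
    case EIdle:
        aStream.Activate();  // request EActive: start processing
        transitionRequested = ETrue;
        break;
    default:
        break;               // EActive: stream is playing
        }
    if (transitionRequested)
        {
        iContext-&gt;Commit();  // transitions also go through the commit cycle
        }
    }</codeblock>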
</section>
<section><title>A3F Tutorials</title> <p>The following tutorials are provided to help you create A3F solutions: </p> <ul><li id="GUID-EC27BD5D-A1DB-53E7-87F1-4A0AD6280F8B"><p><xref href="GUID-931207BE-3561-562D-8F67-0FB52CFF83CD.dita">Audio Component Framework Tutorial</xref> </p> </li> <li id="GUID-4BBFA92A-F174-57A6-B512-9F0D56C0EA6D"><p><xref href="GUID-2A543E1C-F3DE-59EF-8A43-1B655F367FBC.dita">Audio Processing Tutorial</xref> </p> </li> </ul> </section>
<section><title>See Also</title> <p><xref href="GUID-4AAABD77-C08E-5EE2-A02A-3B412EA6D23F.dita">Advanced Audio Adaptation Framework Overview</xref> </p> <p><xref href="GUID-174D98FF-6782-564E-9FDF-1AE32F770591.dita">Sound Device Overview</xref> </p> </section>
</conbody></concept>