This tutorial describes how to use the Advanced Audio Adaptation Framework (A3F) for audio processing. It shows you how to play and record audio using A3F.
Required Background
To understand this document, you need to be familiar with A3F. For more information, see the Advanced Audio Adaptation Framework Technology Guide.
Introduction
To start using A3F for audio processing, you must create an audio context, register an audio context observer, and then create and configure the audio stream and audio processing units that you require. The following paragraphs describe these steps.
Each client that wants to use audio resources must have an instance of CAudioContextFactory associated with it. To construct a new instance, use the CAudioContextFactory::NewL() function:
static IMPORT_C CAudioContextFactory* NewL();
Once the factory is created, you can use it to obtain a pointer to an MAudioContext. To do this, call CAudioContextFactory::CreateAudioContext():
IMPORT_C TInt CreateAudioContext(MAudioContext*& aContext);
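For example, a client might create the factory and an audio context as follows (a minimal sketch; the iFactory and iContext member variables are illustrative and are not part of the A3F API):

// Create the factory and ask it for an audio context.
iFactory = CAudioContextFactory::NewL();
MAudioContext* context = NULL;
User::LeaveIfError(iFactory->CreateAudioContext(context));
iContext = context;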
To observe the commit cycle, you must register an audio context observer using the MAudioContext::RegisterAudioContextObserver() function:
virtual TInt RegisterAudioContextObserver(MAudioContextObserver& aObserver);
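Assuming the client class itself implements MAudioContextObserver, registration might look like this (a sketch, not taken from the original tutorial):

// Register this object to receive commit-cycle notifications.
User::LeaveIfError(iContext->RegisterAudioContextObserver(*this));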
For more information about the commit cycle, see The Commit Cycle in the Advanced Audio Adaptation Framework Technology Guide.
You use the MAudioContext class to create references to all of the audio components you require. To create an MAudioStream reference, call the MAudioContext::CreateAudioStream() function:
virtual TInt CreateAudioStream(MAudioStream*& aAudioStream);
You can create other audio components by calling the MAudioContext::CreateAudioProcessingUnit() function:
virtual TInt CreateAudioProcessingUnit(TUid aTypeId, MAudioProcessingUnit*& aProcessingUnit);
The supported audio processing unit TypeIDs are defined in audioprocessingunittypeuids.h and are as follows (a sketch showing how to create a typical set of processing units follows the list):
KUidAudioStream
KUidMmfBufferSource
KUidAudioDeviceSource
KUidAudioCodec
KUidAudioDecoder
KUidAudioEncoder
KUidMmfBufferSink
KUidAudioDeviceSink
KUidAudioGainControl
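For a simple playback chain, a client might create the stream and its processing units like this (a sketch; iStream, iSource, iCodec, iSink and iGain are illustrative member variables, and the error handling is abbreviated):

// Create the stream and the processing units for a playback chain.
User::LeaveIfError(iContext->CreateAudioStream(iStream));
User::LeaveIfError(iContext->CreateAudioProcessingUnit(KUidMmfBufferSource, iSource));
User::LeaveIfError(iContext->CreateAudioProcessingUnit(KUidAudioDecoder, iCodec));
User::LeaveIfError(iContext->CreateAudioProcessingUnit(KUidAudioDeviceSink, iSink));
User::LeaveIfError(iContext->CreateAudioProcessingUnit(KUidAudioGainControl, iGain));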
Before making changes to a stream and committing them, you must set the Process ID that identifies the client. Note that you must do this before making any calls to Commit(). To set the client information, call the MAudioContext::SetClientSettings() function:
virtual TInt SetClientSettings(const TClientContextSettings& aSettings);
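A sketch of setting the client information follows; it assumes that TClientContextSettings carries the client Process ID in a member named iProcessId, which you should verify against the header:

// Identify this client to A3F before the first Commit().
TClientContextSettings settings;
settings.iProcessId = RProcess().Id();    // assumed member name: iProcessId
User::LeaveIfError(iContext->SetClientSettings(settings));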
Depending on the codec type, you set either the data supplier to be informed about MMMFAudioDataSupplier::BufferToBeFilled() notifications, or the data consumer to be informed about MMMFAudioDataConsumer::BufferToBeEmptied() notifications. You must do this before you add the data source or data sink to the audio stream. For a buffer source, using codec type KUidAudioDecoder, you can set the data supplier by obtaining the MMMFBufferSource extension interface for the audio source. Once you have the extension interface, use the MMMFBufferSource::SetDataSupplier() function for registration:
virtual TInt SetDataSupplier(MMMFAudioDataSupplier& aSupplier);
For a buffer sink, using codec type KUidAudioEncoder, you can set the data consumer by obtaining the MMMFBufferSink extension interface for the audio sink. Once you have the extension interface, use the MMMFBufferSink::SetDataConsumer() function for registration:
virtual TInt SetDataConsumer(MMMFAudioDataConsumer& aConsumer);
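For the playback case, registration might look like this (a sketch; it assumes the MMMFBufferSource extension interface is fetched from the source processing unit with its Interface() function and the KUidMmfBufferSource TypeID, and that the client class implements MMMFAudioDataSupplier):

// Fetch the buffer source extension interface and register this object as the data supplier.
MMMFBufferSource* bufferSource =
    static_cast<MMMFBufferSource*>(iSource->Interface(KUidMmfBufferSource));
if (bufferSource)
    {
    User::LeaveIfError(bufferSource->SetDataSupplier(*this));
    }
iBufferSource = bufferSource;   // kept for BufferFilled() calls later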
After you have created your audio processing units, you must add them to an audio stream in order to control them. The following Audio Stream API functions are available for adding audio processing units (a combined example follows the list):
MAudioStream::AddSource()
virtual TInt AddSource(MAudioProcessingUnit* aSource);
MAudioStream::AddSink()
virtual TInt AddSink(MAudioProcessingUnit* aSink);
MAudioStream::AddAudioCodec()
virtual TInt AddAudioCodec(MAudioProcessingUnit* aCodec);
MAudioStream::AddGainControl()
virtual TInt AddGainControl(MAudioProcessingUnit* aGainControl);
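Continuing the playback sketch, the processing units created earlier can be added to the stream as follows:

// Build the stream: source -> codec -> sink, plus gain control.
User::LeaveIfError(iStream->AddSource(iSource));
User::LeaveIfError(iStream->AddAudioCodec(iCodec));
User::LeaveIfError(iStream->AddSink(iSink));
User::LeaveIfError(iStream->AddGainControl(iGain));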
Before transitioning the audio stream to the Initialized state, you must define the audio format that the codec should use. To do this, fetch the codec extension interface using the Audio Processing Unit API Interface() function with the TypeID KUidAudioCodec:
virtual TAny* Interface(TUid aType);
You must cast the returned interface to the corresponding extension interface, that is, MAudioCodec . The MAudioCodec extension interface allows codec-specific functions like MAudioCodec::SetFormat() to be called:
virtual TInt SetFormat(TUid aFormat);
The audio format UIDs are defined in audioformatuids.h.
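Setting the format might look like this (a sketch; the PCM16 format UID used here is an assumption, so check audioformatuids.h for the UIDs available on your platform):

// Fetch the codec extension interface and choose the decode format.
MAudioCodec* codec = static_cast<MAudioCodec*>(iCodec->Interface(KUidAudioCodec));
if (codec)
    {
    User::LeaveIfError(codec->SetFormat(KUidFormatPCM16));   // assumed format UID
    }
iCodecIf = codec;   // kept for later SetMode()/SetSampleRate() calls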
When you have finished configuring the required settings for your audio components, you can request that the audio stream be transitioned to the Initialized state. You use the Audio Stream API MAudioStream::Initialize() function to prepare the stream to transition from EUninitialized to EInitialized :
virtual TInt Initialize();
After the required changes have been made to the stream, you must call the MAudioContext::Commit() function. Your request for audio resources depends on a successful commit cycle; a successful commit cycle means that the stream state has changed to EInitialized.
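In code, the request and the commit might look like this (a sketch using the illustrative iStream and iContext members from earlier):

// Ask for the EUninitialized -> EInitialized transition, then commit it.
User::LeaveIfError(iStream->Initialize());
User::LeaveIfError(iContext->Commit());
// The result is reported asynchronously through the observer callbacks.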
Once MAudioStream::Initialize() has been successfully committed, A3F calls MAudioStreamObserver::StateEvent():
virtual void StateEvent(MAudioStream& aStream, TInt aReason, TAudioState aNewState)=0;
If aNewState is EInitialized, and MAudioContextObserver::ContextEvent() is called on the MAudioContextObserver with the event KUidA3FContextUpdateComplete, then the process of initialization is complete. Otherwise, one of the system-wide error codes is returned.
Note: ContextEvent() is a callback in the same thread context as the A3F client.
If the transition to EInitialized fails, it may be caused by one of the following:
The codec format was not set when committing the transition to EInitialized. In this case, the MAudioStreamObserver::StateEvent() callback gives the error code KErrNotReady. The StateEvent() callback provides information about the new state of the stream, if a state change was involved in the commit cycle, and an error code if there were any problems with the state change.
The commit cycle itself failed, for example because of pre-emption. In this case, the MAudioContextObserver::ContextEvent() callback indicates the event and an error code if there were any problems with the commit cycle.
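A client's observer callbacks might therefore look something like this (a sketch; the class name is illustrative, and the ContextEvent() parameter list shown is an assumption to be checked against the MAudioContextObserver declaration):

void CMyAudioClient::StateEvent(MAudioStream& /*aStream*/, TInt aReason, TAudioState aNewState)
    {
    if (aReason == KErrNone && aNewState == EInitialized)
        {
        // The stream is now initialized; further configuration can follow.
        }
    else if (aReason != KErrNone)
        {
        // For example, KErrNotReady if the codec format was not set before the commit.
        }
    }

void CMyAudioClient::ContextEvent(TUid aEvent, TInt aError)
    {
    if (aEvent == KUidA3FContextUpdateComplete && aError != KErrNone)
        {
        // The commit cycle failed, for example because of pre-emption.
        }
    }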
The following sections describe how to manage and manipulate audio streams for audio playing and recording.
Basic Procedure
The high-level steps involved in processing audio are described in the following sections.
You can configure the following settings while the stream is in the EInitialized state:
To change the channels used for audio processing, use the MAudioCodec::SetMode() function to set the mode:
virtual TInt SetMode(TUid aMode);
To set the sampling rate, use the MAudioCodec::SetSampleRate() function:
virtual TInt SetSampleRate(TInt aSampleRate);
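For example, a decode chain might be configured for stereo output at 44.1 kHz like this (a sketch; iCodecIf is the MAudioCodec interface fetched earlier, and the mode UID name is an assumption to be checked against the A3F headers):

// Configure the codec while the stream is in EInitialized.
User::LeaveIfError(iCodecIf->SetMode(KUidAudioCodecModeStereo));   // assumed mode UID
User::LeaveIfError(iCodecIf->SetSampleRate(44100));                // sample rate in Hz
// The changes take effect on the next Commit().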
You can configure the following settings before or during audio processing:
To set the audio priority and preference-related settings for the implementation, use the MAudioStream::SetAudioType() function:
virtual TInt SetAudioType(TAudioTypeSettings& aAudioTypeSettings) const;
To change gain settings, fetch the MAudioGainControl extension interface and use MAudioGainControl::SetGain() :
virtual TInt SetGain(RArray<TAudioChannelGain>& aChannels);
The gain value for each channel can be any value between zero and aMaxGain (the maximum gain). To obtain aMaxGain, use the MAudioGainControl::GetMaxGain() function:
virtual TInt GetMaxGain(TInt& aMaxGain) const;
Alternatively, you can set the gain on a channel-by-channel basis and define a gain ramp at the same time. The aRampDuration parameter defines the period over which the gain change is applied. To set the channel gain with a ramp, use MAudioGainControl::SetGain():
virtual TInt SetGain(RArray<TAudioChannelGain>& aChannels, TUid aRampOperation, const TTimeIntervalMicroSeconds& aRampDuration);
The UID determines how the ramp is interpreted. The following table describes the different ramps:
KNullUid: The value of aRampDuration is ignored and the gain changes immediately.
KGainSawTooth: An explicit 0 to aGain sweep, rising linearly. If the channel is active, the effective gain drops to 0 as soon as possible after the request. If KGainSawTooth is used when the stream is not in the EActive state, then it acts as a fade-in during the subsequent Activate() transition.
KGainFadeOut: A drop from the current gain to 0, dropping linearly over the aRampDuration period.
KGainRamped: A gradual change from the current gain value to aGain over the aRampDuration period.
KGainContinue: If a previous ramp operation is in progress, then this continues but the concept of “target gain” is updated using the new values. The value of aRampDuration is ignored and the previous ramp duration is re-used, minus the time already spent ramping. The smoothness of this operation depends on the implementation. If no ramp operation is in progress, then this is the same as KNullUid. Note that this option is intended as a way of changing gain values without stopping an ongoing ramp operation.
Once you have finished setting configuration values, you must apply them using the MAudioContext::Commit() function:
virtual TInt Commit();
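For example, a ramped gain change could be requested and applied like this (a sketch; it assumes the MAudioGainControl extension interface has already been fetched from the gain-control processing unit, for example with its Interface() function and KUidAudioGainControl, into iGainControl, and the TAudioChannelGain member and enumeration names used are assumptions to be checked against the header):

// Ramp one channel from its current gain to half of the maximum over two seconds.
TInt maxGain = 0;
User::LeaveIfError(iGainControl->GetMaxGain(maxGain));

RArray<TAudioChannelGain> channels;
TAudioChannelGain channel;                     // member names below are assumptions
channel.iLocation = TAudioChannelGain::ELeft;
channel.iGain = maxGain / 2;
channels.AppendL(channel);

User::LeaveIfError(iGainControl->SetGain(channels, KGainRamped,
                                         TTimeIntervalMicroSeconds(2000000)));
User::LeaveIfError(iContext->Commit());        // apply the configuration change
channels.Close();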
Playing audio starts with the stream in the EInitialized state. The steps to play audio are shown here:
Request a transition from EInitialized to EIdle using the MAudioStream::Load() function:
virtual TInt Load();
Call MAudioContext::Commit() to commit the state change.
Once the commit cycle is complete and StateEvent() reports the new state, request a transition to the EActive state by calling MAudioStream::Activate():
virtual TInt Activate();
Call MAudioContext::Commit() to commit the state change. The state transition and audio playing take effect once the commit cycle has successfully completed.
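Put together, the state transitions for starting playback might look like this (a sketch; each transition only takes effect after its commit cycle completes, which is reported through the observer):

// Step 1: request EInitialized -> EIdle and commit.
User::LeaveIfError(iStream->Load());
User::LeaveIfError(iContext->Commit());

// Step 2: once StateEvent() reports EIdle, request EIdle -> EActive and commit.
// (Typically issued from the StateEvent() callback, not inline as shown here.)
User::LeaveIfError(iStream->Activate());
User::LeaveIfError(iContext->Commit());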
When the first buffer is ready to be filled with audio data, an MMMFAudioDataSupplier::BufferToBeFilled() notification is received:
virtual void BufferToBeFilled(CMMFBuffer* aBuffer)=0;
After filling the buffer, respond by using the MMMFBufferSource::BufferFilled() extension interface function:
virtual TInt BufferFilled(CMMFBuffer* aBuffer);
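A data supplier callback for playback might look like this (a sketch; it assumes iBufferSource was stored when the supplier was registered, and that iSoundData is a descriptor holding the audio data to play with iPosition tracking the read offset):

void CMyAudioClient::BufferToBeFilled(CMMFBuffer* aBuffer)
    {
    // Copy the next chunk of audio data into the buffer...
    CMMFDataBuffer* dataBuffer = static_cast<CMMFDataBuffer*>(aBuffer);
    TDes8& dest = dataBuffer->Data();
    const TInt bytes = Min(dest.MaxLength(), iSoundData.Length() - iPosition);
    dest.Copy(iSoundData.Mid(iPosition, bytes));
    iPosition += bytes;

    // ...then hand the buffer back to the source.
    iBufferSource->BufferFilled(aBuffer);
    }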
This is similar to the existing DevSound behaviour. For more information, see Playing Audio.
Recording audio starts with the stream in the EInitialized state. The steps to record audio are shown here:
Request a transition from EInitialized to EIdle using the MAudioStream::Load() function:
virtual TInt Load();
Call MAudioContext::Commit() to commit the state change.
Once the commit cycle is complete and StateEvent() reports the new state, request a transition to the EActive state by calling MAudioStream::Activate():
virtual TInt Activate();
Call MAudioContext::Commit() to commit the state change. The state transition and audio recording take effect once the commit cycle has successfully completed.
When the first buffer is ready to be emptied of audio data, an MMMFAudioDataConsumer::BufferToBeEmptied() notification is received:
virtual void BufferToBeEmptied(CMMFBuffer* aBuffer)=0;
After emptying the buffer, respond by using the MMMFBufferSink::BufferEmptied() extension interface function:
virtual TInt BufferEmptied(CMMFBuffer* aBuffer);
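The corresponding consumer callback for recording might look like this (a sketch; it assumes iBufferSink was stored when the consumer was registered, and iRecordedData is an illustrative client-side buffer with sufficient capacity):

void CMyAudioClient::BufferToBeEmptied(CMMFBuffer* aBuffer)
    {
    // Take the recorded data out of the buffer...
    CMMFDataBuffer* dataBuffer = static_cast<CMMFDataBuffer*>(aBuffer);
    iRecordedData.Append(dataBuffer->Data());   // illustrative storage

    // ...then hand the buffer back to the sink so that it can be reused.
    iBufferSink->BufferEmptied(aBuffer);
    }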
This is similar to the existing DevSound behaviour. For more information, see Recording Audio.
To pause the audio currently being processed, use the MAudioStream::Prime() function:
virtual TInt Prime();
Calling Prime() requests a transition to the EPrimed state. After calling Prime() , the state change must be committed.
Prime() temporarily stops the audio process. To continue audio processing from the EPrimed state, use the MAudioStream::Activate() function:
virtual TInt Activate();
After a successful commit cycle, the stream is transitioned to the EActive state, and processing resumes from the pause point.
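For example, pausing and resuming might be issued like this (a sketch; as before, each call pair only takes effect after its commit cycle completes):

// Pause: request EActive -> EPrimed and commit.
User::LeaveIfError(iStream->Prime());
User::LeaveIfError(iContext->Commit());

// Resume: request EPrimed -> EActive and commit; processing continues from the pause point.
User::LeaveIfError(iStream->Activate());
User::LeaveIfError(iContext->Commit());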
To stop audio processing while playing audio, use the MAudioStream::Stop() function:
virtual TInt Stop();
Stop() can be called while the stream state is EActive or EPrimed .
Note: After calling Stop(), do not call MMMFBufferSource::BufferFilled() for any outstanding MMMFAudioDataSupplier::BufferToBeFilled() callbacks, as the buffer may no longer be valid.
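A stop request follows the same pattern (a sketch):

// Request the stop and commit it; the state change takes effect after the commit cycle.
User::LeaveIfError(iStream->Stop());
User::LeaveIfError(iContext->Commit());
// Do not respond to any BufferToBeFilled() callbacks that are still outstanding.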
To stop audio processing while recording audio, use the MAudioStream::Stop() function:
virtual TInt Stop();
Stop() can be called while the stream state is EActive or EPrimed .
Note: Calling Stop() breaks the recording cycle and some buffers of audio data may be lost. To guarantee that no recorded data is lost when stopping, transition the stream from EActive to EPrimed, and wait for the MAudioStreamObserver::ProcessingFinished() callback. Upon receiving the callback, transition the stream from:
To retrieve the duration of the audio that has been processed, use the MAudioStream::GetStreamTime() function:
virtual TInt GetStreamTime(TTimeIntervalMicroSeconds& aStreamTime);
The result returned is the number of microseconds processed since the last EIdle state. GetStreamTime() returns zero until audio processing starts. The function returns the time processed when the stream is in the EPrimed and EActive states.
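Querying the position might look like this (a sketch):

// Read how much audio has been processed since the last EIdle state.
TTimeIntervalMicroSeconds position(0);
TInt err = iStream->GetStreamTime(position);
if (err == KErrNone)
    {
    // position.Int64() is the processed time in microseconds.
    }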
Copyright ©2010 Nokia Corporation and/or its subsidiary(-ies). All rights reserved. Unless otherwise stated, these materials are provided under the terms of the Eclipse Public License v1.0.