diff -r 51a74ef9ed63 -r ae94777fff8f Symbian3/SDK/Source/GUID-D1F29744-EB92-5811-A735-B0BC1B352ED5.dita
--- a/Symbian3/SDK/Source/GUID-D1F29744-EB92-5811-A735-B0BC1B352ED5.dita	Wed Mar 31 11:11:55 2010 +0100
+++ b/Symbian3/SDK/Source/GUID-D1F29744-EB92-5811-A735-B0BC1B352ED5.dita	Fri Jun 11 12:39:03 2010 +0100
@@ -1,115 +1,115 @@

Video Renderer Overview

This topic describes the Video Renderer component and is aimed at video controller and video adaptation developers who want to take advantage of rendering to graphics surfaces.
Purpose

The Video Renderer renders video to a graphics surface. The Video Renderer can be used by video decoders, video post-processors and video controllers. It can also be used to implement the ECam viewfinder using a graphics surface.

The Video Renderer has two modes: timed and non-timed. In timed mode, the Video Renderer renders a frame at a specific time. In non-timed mode, the Video Renderer renders a frame immediately.

On the Symbian platform, ScreenPlay (also known as the New Graphics Architecture or NGA) enables rendering to a graphics surface. The Video Renderer can only be used in conjunction with ScreenPlay. The Video Renderer does not contain any video or graphics processing software. All operations that are needed to display video content on the screen are handled by the Graphics Composition Components.

The Video Renderer component is classified as Optional Replaceable. This means that device creators can either substitute it with their own implementation or remove it if they do not want to use it.

Required background

To understand the Video Renderer, the reader must be familiar with the following:

  • Graphics Surfaces

Architecture

The Video Renderer can be implemented in two different ways, as shown in the following diagram:

Figure: The two Video Renderer architectures

Note: For simplicity, only the Multimedia Framework client/controller thread boundary has been shown; other thread boundaries may exist.

In both architectures, the Video Client API (CVideoPlayerUtility2) is responsible for retrieving graphics surface handles from the video player controller and registering them with the Window Server. The two approaches differ in the implementation of the video adaptation and the video player controller:

  • Video adaptation approach

    The video adaptation uses the Video Renderer to create and manage graphics surfaces and handle video rendering. The video player controller simply passes surface information between DevVideoPlay and the client. An example of this is the Symbian reference AVI player controller and XVid decoder. Symbian recommends this approach for new implementations.

  • Video player controller approach

    The video player controller uses the Video Renderer to create and manage graphics surfaces, and handle video rendering and timing. This approach is suitable for implementations where DevVideoPlay is only used as a codec interface or where DevVideoPlay is not used.

In both architectures, the Surface Update component provides a communication channel between the Video Renderer and the composition engine.
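The following minimal sketch shows the client side of either architecture: an application drives playback through CVideoPlayerUtility2, and the surface handle retrieval and Window Server registration described above happen inside that utility, so the application only supplies a window. The class name CExampleVideoClient, the surrounding setup (an existing window server session, screen device and window) and the call ordering are illustrative assumptions; error handling and cleanup are largely omitted.

#include <videoplayer2.h>  // CVideoPlayerUtility2, MVideoPlayerUtilityObserver

class CExampleVideoClient : public CBase, public MVideoPlayerUtilityObserver
    {
public:
    CExampleVideoClient(RWsSession& aWs, CWsScreenDevice& aScreen, RWindow& aWindow)
        : iWs(aWs), iScreen(aScreen), iWindow(aWindow), iPlayer(NULL) {}
    ~CExampleVideoClient() { delete iPlayer; }

    void StartL(const TDesC& aFileName)
        {
        iPlayer = CVideoPlayerUtility2::NewL(*this, EMdaPriorityNormal,
                                             EMdaPriorityPreferenceTimeAndQuality);
        iPlayer->OpenFileL(aFileName);
        }

    // MVideoPlayerUtilityObserver callbacks
    void MvpuoOpenComplete(TInt aError)
        {
        if (aError != KErrNone)
            return;
        // CVideoPlayerUtility2 fetches the surface handle from the controller
        // and registers it with the Window Server; the client only adds a window.
        TRAP_IGNORE(iPlayer->AddDisplayWindowL(iWs, iScreen, iWindow));
        iPlayer->Prepare();
        }
    void MvpuoPrepareComplete(TInt aError)
        {
        if (aError == KErrNone)
            iPlayer->Play();
        }
    void MvpuoFrameReady(CFbsBitmap& /*aFrame*/, TInt /*aError*/) {}
    void MvpuoPlayComplete(TInt /*aError*/) {}
    void MvpuoEvent(const TMMFEvent& /*aEvent*/) {}

private:
    RWsSession& iWs;
    CWsScreenDevice& iScreen;
    RWindow& iWindow;
    CVideoPlayerUtility2* iPlayer;
    };

Whichever adaptation-side architecture is chosen, this client code is unchanged; the choice only affects how the controller and DevVideoPlay cooperate with the Video Renderer.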

APIs

The Video Renderer component contains the following DLL with its associated APIs:

DLL                  Description
videorenderer.dll

The functionality of the Video Renderer is provided by the following key classes (a skeleton showing how they fit together follows the list):

  • CVideoRenderer - a utility class for rendering video to graphics surfaces on behalf of a client.

  • MVideoRendererObserver - an observer class that provides notifications about the availability and status of buffers submitted for rendering.

  • TVideoFrameBuffer - represents a buffer for a single decoded video picture.

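The skeleton below is provided for orientation only and shows how these classes relate: a CVideoRenderer is constructed with an observer, and the observer is notified as TVideoFrameBuffer objects become available, are displayed or are skipped. The header name, the callback names (MvroVideoBufferAvailable and so on) and the NewL parameters are assumptions inferred from the class descriptions above rather than verified declarations; consult the Video Renderer API reference for the exact signatures.

#include <videorenderer.h>  // assumed header for the Video Renderer classes

// Skeleton only: method names and parameters below are assumptions based on
// the class descriptions in this topic; check the API reference before use.
class CExampleRenderTarget : public CBase, public MVideoRendererObserver
    {
public:
    void ConstructL()
        {
        // Second parameter assumed to select timed mode: ETrue asks the
        // renderer to display each buffer at its presentation time.
        iRenderer = CVideoRenderer::NewL(*this, ETrue);
        }
    ~CExampleRenderTarget() { delete iRenderer; }

    // MVideoRendererObserver callbacks (assumed names)
    void MvroVideoBufferAvailable()
        {
        // A TVideoFrameBuffer is free: fetch it and hand it to the decoder
        // as the destination for the next decoded picture.
        }
    void MvroBufferDisplayed(TInt /*aBufferId*/, const TTime& /*aTime*/)
        {
        // The buffer with this identifier has reached the display.
        }
    void MvroBufferSkipped(TInt /*aBufferId*/)
        {
        // The buffer was dropped, for example because it arrived too late
        // for its presentation time in timed mode.
        }

private:
    CVideoRenderer* iRenderer;
    };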

The Video Renderer also uses a resource file (videorenderer.rss) to store supported pixel formats and timed mode values. For more information, see Video Renderer Resource File.

Typical uses

The Video Renderer is used for the following tasks (a sketch of the sequence follows the list):

  • Creating a surface for video rendering

  • Supplying buffers to decode video frames into

  • Rendering a buffer on the display (timed or non-timed)

  • Destroying a surface.

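The fragment below outlines that sequence in the order listed. The calls shown (CreateSurfaceL, NextBuffer, UpdateBuffer and DestroySurface) and their parameters are assumptions chosen to mirror the steps above, not verified signatures, so confirm the exact API in the Video Renderer reference documentation before relying on it.

#include <videorenderer.h>  // assumed header, as in the earlier skeleton

// Outline of the typical sequence for a renderer created in timed mode.
// Names and signatures are assumptions that mirror the steps listed above.
void RenderOneFrameL(CVideoRenderer& aRenderer,
                     const TUncompressedVideoFormat& aFormat,
                     const TTimeIntervalMicroSeconds& aPresentationTime)
    {
    // 1. Create a surface for video rendering; the surface id is handed back
    //    so that it can be passed up to the client for Window Server registration.
    TSurfaceId surfaceId;
    aRenderer.CreateSurfaceL(TSize(320, 240), 3 /*buffers*/, aFormat, surfaceId);

    // 2. Supply a buffer for the decoder to write the next decoded frame into.
    TVideoFrameBuffer* buffer = aRenderer.NextBuffer();
    if (buffer)
        {
        // ... decode one picture into the buffer here ...

        // 3. Render the buffer on the display: in timed mode the presentation
        //    time is honoured, in non-timed mode the frame is shown immediately.
        aRenderer.UpdateBuffer(buffer, aPresentationTime);
        }

    // 4. Destroy the surface when rendering has finished.
    aRenderer.DestroySurface(surfaceId);
    }
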
Related information

  • Video Client
  • Multimedia Framework (MMF)
  • Multimedia Plug-ins
  • Video HAI
  • Graphics Surface Composition Collection
  • Surface Manager Component