From patchwork Fri Mar 4 11:26:22 2011
X-Patchwork-Submitter: Kamil Debski
X-Patchwork-Id: 608671
From: Kamil Debski
Date: Fri, 04 Mar 2011 12:26:22 +0100
Subject: [PATCH/RFC v7 5/5] v4l: Documentation for the codec interface
To: linux-media@vger.kernel.org, linux-samsung-soc@vger.kernel.org
Cc: m.szyprowski@samsung.com, kyungmin.park@samsung.com,
 k.debski@samsung.com, jaeryul.oh@samsung.com, kgene.kim@samsung.com
In-reply-to: <1299237982-31687-1-git-send-email-k.debski@samsung.com>
References: <1299237982-31687-1-git-send-email-k.debski@samsung.com>
Message-id: <1299237982-31687-6-git-send-email-k.debski@samsung.com>
X-Mailer: git-send-email 1.7.2.3

diff --git a/Documentation/DocBook/v4l/dev-codec.xml b/Documentation/DocBook/v4l/dev-codec.xml
index 6e156dc..73ecee8 100644
--- a/Documentation/DocBook/v4l/dev-codec.xml
+++ b/Documentation/DocBook/v4l/dev-codec.xml
@@ -1,21 +1,162 @@
- Codec Interface
+ Codec Interface
-
- Suspended
-
- This interface has been be suspended from the V4L2 API
-implemented in Linux 2.6 until we have more experience with codec
-device interfaces.
-
- A V4L2 codec can compress, decompress, transform, or otherwise
-convert video data from one format into another format, in memory.
-Applications send data to be converted to the driver through a
-&func-write; call, and receive the converted data through a
-&func-read; call. For efficiency a driver may also support streaming
-I/O.
-
- [to do]
+
+A V4L2 codec can compress, decompress, transform, or otherwise convert
+video data from one format into another format, in memory. Applications
+can exchange data with the driver in two ways: the first method uses the
+&func-write; and &func-read; calls, while the second uses streaming I/O.
+
+Codec devices belong to the group of memory-to-memory devices: a memory
+buffer containing a frame in one format is converted and stored in
+another buffer in memory. The formats of the capture and output buffers
+are determined by the pixel formats passed in the &VIDIOC-S-FMT; calls.
+
+Many advanced video codecs, such as H.264 and MPEG-4, require that
+decoded buffers be kept as reference frames. As a result, a few output
+buffers may have to be processed before the first capture buffer is
+returned, and buffers may be dequeued in an arbitrary order.
+
+The codec hardware may allow tweaking of decoding parameters and will
+require the application to set encoding parameters. For this purpose the
+V4L2_CTRL_CLASS_CODEC control class has been introduced.
+
+The standard V4L2 naming of buffers is kept: output buffers carry the
+input of the device, while capture buffers carry the device's output.
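
A minimal sketch of the first, read/write based method could look like
the code below; it assumes a hypothetical codec node at /dev/video0 that
accepts data in the output format through write() and returns data in
the capture format through read():

#include <fcntl.h>
#include <unistd.h>

/* Feed one chunk of input data and fetch the converted result. */
static int convert_chunk(const void *in, size_t in_len,
                         void *out, size_t out_len)
{
        int fd = open("/dev/video0", O_RDWR);   /* hypothetical codec node */
        ssize_t n;

        if (fd < 0)
                return -1;
        if (write(fd, in, in_len) < 0) {        /* data in the output format */
                close(fd);
                return -1;
        }
        n = read(fd, out, out_len);             /* data in the capture format */
        close(fd);
        return n < 0 ? -1 : (int)n;
}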

+ Querying Capabilities
+
+Devices that support the codec interface set the V4L2_CAP_VIDEO_M2M
+flag in the capabilities field of the v4l2_capability structure
+returned by the &VIDIOC-QUERYCAP; ioctl. At least one of the read/write
+and streaming I/O methods must be supported.
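
A minimal userspace sketch of this check, assuming the V4L2_CAP_VIDEO_M2M
flag proposed by this series is available in the videodev2.h header in
use:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Return non-zero if the device looks like a memory-to-memory codec. */
static int is_m2m_codec(int fd)
{
        struct v4l2_capability cap;

        memset(&cap, 0, sizeof(cap));
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0)
                return 0;
        return (cap.capabilities & V4L2_CAP_VIDEO_M2M) &&
               (cap.capabilities & (V4L2_CAP_STREAMING | V4L2_CAP_READWRITE));
}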
+
+ Multiple Instance Capabilities
+
+As memory-to-memory devices, codecs can support multiple instances.
+Drivers for such devices should store a separate configuration context
+for every open file descriptor. This means that the configuration is
+kept only until the descriptor is closed, and that it is not possible
+to configure the device from one application and perform streaming from
+another. If the device can handle only one stream at a time, it can use
+a single context.
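
As an illustration of the per-descriptor context, an application can
simply open the same node twice to obtain two independently configurable
instances (sketch; the device path and function name are hypothetical):

#include <fcntl.h>
#include <unistd.h>

/* Each open file descriptor carries its own configuration context, so
 * the two instances below can be configured and streamed independently. */
static int open_two_instances(int fds[2])
{
        fds[0] = open("/dev/video0", O_RDWR);   /* hypothetical codec node */
        if (fds[0] < 0)
                return -1;
        fds[1] = open("/dev/video0", O_RDWR);
        if (fds[1] < 0) {
                close(fds[0]);
                return -1;
        }
        return 0;
}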
+
+ Image Format Negotiation
+
+When decoding a video stream, the stream header may have to be parsed
+before decoding can take place. Usually the dimensions of the decoded
+image and the minimum number of buffers are known only after the header
+has been processed. This requires that the output part of the interface
+is able to process the header before the capture buffers are allocated.
+The application can use &VIDIOC-G-FMT; to read the parameters of the
+capture buffers and, if necessary, &VIDIOC-S-FMT; to change them; it is
+up to the driver to validate and correct the requested values. In
+addition, the number of capture buffers has to be negotiated. In many
+cases the application needs to allocate N buffers more than the minimum
+required by the codec, for example when the device needs a minimum
+number of buffers queued in hardware to operate while the application
+keeps N buffers dequeued for its own processing. This cannot be done
+easily with &VIDIOC-REQBUFS; alone, so the
+V4L2_CID_CODEC_MIN_REQ_BUFS_CAP control has been introduced: it can be
+read to determine the minimum buffer count for the capture queue, and
+the application can use this number to calculate the count passed to
+&VIDIOC-REQBUFS;. The V4L2_CID_CODEC_MIN_REQ_BUFS_OUT control serves
+the same purpose when the minimum number of buffers has to be
+determined for the output queue, for example when encoding.
+
+When encoding, &VIDIOC-S-FMT; has to be called on both the capture and
+the output queue. &VIDIOC-S-FMT; on capture selects the video codec to
+be used, while the encoding parameters are configured by setting the
+appropriate controls. &VIDIOC-S-FMT; on output determines the
+parameters of the input images.
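
A sketch of the capture-side negotiation of a decoder, run after the
header has been processed; the control name is the one proposed in this
series, and EXTRA_CAP_BUFFERS is an application-chosen value for N:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

#define EXTRA_CAP_BUFFERS 3     /* N: buffers the application keeps dequeued */

/* Negotiate the capture queue of a decoder once the header has been parsed. */
static int setup_capture_queue(int fd)
{
        struct v4l2_format fmt;
        struct v4l2_control ctrl;
        struct v4l2_requestbuffers reqbufs;

        /* Frame parameters determined by the hardware from the header. */
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)
                return -1;

        /* Minimum number of capture buffers required by the codec
         * (control name as proposed by this series). */
        memset(&ctrl, 0, sizeof(ctrl));
        ctrl.id = V4L2_CID_CODEC_MIN_REQ_BUFS_CAP;
        if (ioctl(fd, VIDIOC_G_CTRL, &ctrl) < 0)
                return -1;

        memset(&reqbufs, 0, sizeof(reqbufs));
        reqbufs.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        reqbufs.memory = V4L2_MEMORY_MMAP;
        reqbufs.count = ctrl.value + EXTRA_CAP_BUFFERS;
        if (ioctl(fd, VIDIOC_REQBUFS, &reqbufs) < 0)
                return -1;

        return reqbufs.count;           /* the driver may adjust the count */
}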
+
+ Processing
+
+With streaming I/O, processing is done by queueing and dequeueing
+buffers on both the capture and the output queue. When decoding, the
+header may have to be parsed before the parameters of the video are
+known; in that case the size and number of the capture buffers are
+unknown, so only the output part of the device is initialized. The
+application should extract the header from the stream and queue it as
+the first buffer on the output queue. After the hardware has parsed the
+header, the capture part of the codec can be initialized. When
+encoding, both the capture and the output part have to be initialized
+before &VIDIOC-STREAMON; is called.
+
+When &VIDIOC-STREAMOFF; is called on a queue, the buffers queued on it
+are discarded. This can be used to support seeking in a compressed
+stream: after &VIDIOC-STREAMOFF; on both capture and output all frames
+are discarded, buffers from the new position in the stream are queued
+on the output queue, empty buffers are queued on the capture queue, and
+a subsequent &VIDIOC-STREAMON; resumes processing from the new position.
+
+Marking the end of the stream is necessary because a number of buffers
+may be kept queued in the hardware to act as reference frames. After
+the end of the compressed stream has been reached, the application
+marks it on the output side by queueing a buffer with bytesused set
+to 0. Once all remaining frames have been dequeued on the capture side,
+a buffer with bytesused set to 0 is dequeued there as well.
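
For example, the end of the stream could be signalled as sketched below,
assuming MMAP streaming I/O and the bytesused convention described
above:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Queue an empty output buffer to mark the end of the compressed stream.
 * On the capture side, a dequeued buffer with bytesused == 0 then means
 * that the last frame has already been delivered. */
static int send_eos(int fd, unsigned int index)
{
        struct v4l2_buffer buf;

        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = index;
        buf.bytesused = 0;              /* empty buffer == end-of-stream marker */
        return ioctl(fd, VIDIOC_QBUF, &buf);
}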
+
+ Resolution change
+
+The resolution of the decoded picture and the minimum number of buffers
+can change during streaming; this is widely used in digital TV
+broadcasts. If the hardware supports this feature, the driver has to
+notify the application about the event, since the application may have
+to reallocate its buffers. The notification is similar to the
+end-of-stream notification: when the resolution or the required number
+of buffers changes, the remaining capture buffers are dequeued and then
+one buffer with bytesused set to 0 is returned to the application. At
+this point the application should reallocate and requeue the capture
+buffers. To do so it calls &VIDIOC-STREAMOFF; on the capture queue,
+unmaps the buffers and frees the memory by calling &VIDIOC-REQBUFS;
+with count set to 0. It can then read the new resolution with
+&VIDIOC-G-FMT; and the new minimum number of buffers from the
+V4L2_CID_CODEC_MIN_REQ_BUFS_CAP control. After the application has
+allocated, mmapped and queued the new buffers, the hardware continues
+processing.
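
After the zero-sized capture buffer has been dequeued, the old capture
buffers might be dropped as sketched below (the application is assumed
to have munmap()ed them first); the capture queue is then set up again
in the same way as during the initial format negotiation:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Stop the capture queue and free its buffers after a resolution change
 * notification; G_FMT, the minimum-buffers control, REQBUFS, mmap() and
 * QBUF are then repeated for the new resolution. */
static int drop_capture_buffers(int fd)
{
        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        struct v4l2_requestbuffers reqbufs;

        if (ioctl(fd, VIDIOC_STREAMOFF, &type) < 0)
                return -1;

        memset(&reqbufs, 0, sizeof(reqbufs));
        reqbufs.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        reqbufs.memory = V4L2_MEMORY_MMAP;
        reqbufs.count = 0;              /* count == 0 frees the allocated buffers */
        return ioctl(fd, VIDIOC_REQBUFS, &reqbufs);
}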
+
+ Supplemental features
+
+Hardware and format constraints may influence the size of the capture
+buffer. For example, the hardware may require the image buffer to be
+aligned to a multiple of 128x32 pixels. To obtain the actual visible
+frame size the application should use the &VIDIOC-G-CROP; call when
+decoding; when encoding, the visible frame size is set with the
+&VIDIOC-S-CROP; call.
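
A sketch of reading the visible frame size on the capture side of a
decoder:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Read the visible frame size of the decoded video; the buffer itself
 * may be padded to the alignment required by the hardware. */
static int get_visible_size(int fd, unsigned int *width, unsigned int *height)
{
        struct v4l2_crop crop;

        memset(&crop, 0, sizeof(crop));
        crop.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        if (ioctl(fd, VIDIOC_G_CROP, &crop) < 0)
                return -1;
        *width = crop.c.width;
        *height = crop.c.height;
        return 0;
}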