
[v4.1,3/3] v4l: Add V4L2_BUF_FLAG_TIMESTAMP_SOF and use it

Message ID 1377703495-21112-1-git-send-email-sakari.ailus@iki.fi (mailing list archive)
State New, archived

Commit Message

Sakari Ailus Aug. 28, 2013, 3:24 p.m. UTC
Some devices such as the uvc produce timestamps at the beginning of the
frame rather than at the end of it. Add a buffer flag
(V4L2_BUF_FLAG_TIMESTAMP_SOF) to tell about this.

Also document timestamp_type in struct vb2_queue, and make the uvc set the
buffer flag.

Signed-off-by: Sakari Ailus <sakari.ailus@iki.fi>
---
since v4:
- Fixes according to Hans's comments.

- Note in comment the uvc driver will set the SOF flag from now on.

- Change comment of vb2_queue timestamp_type field: this is timestamp flags
  rather than just type. I stopped short of renaming the field.

 Documentation/DocBook/media/v4l/io.xml |   19 ++++++++++++++-----
 drivers/media/usb/uvc/uvc_queue.c      |    3 ++-
 include/media/videobuf2-core.h         |    1 +
 include/uapi/linux/videodev2.h         |   10 ++++++++++
 4 files changed, 27 insertions(+), 6 deletions(-)

Comments

Hans Verkuil Aug. 28, 2013, 3:30 p.m. UTC | #1
On 08/28/2013 05:24 PM, Sakari Ailus wrote:
> Some devices such as the uvc produce timestamps at the beginning of the
> frame rather than at the end of it. Add a buffer flag
> (V4L2_BUF_FLAG_TIMESTAMP_SOF) to tell about this.
> 
> Also document timestamp_type in struct vb2_queue, and make the uvc set the
> buffer flag.
> 
> Signed-off-by: Sakari Ailus <sakari.ailus@iki.fi>
> ---
> since v4:
> - Fixes according to Hans's comments.
> 
> - Note in comment the uvc driver will set the SOF flag from now on.
> 
> - Change comment of vb2_queue timestamp_type field: this is timestamp flags
>   rather than just type. I stopped short of renaming the field.
> 
>  Documentation/DocBook/media/v4l/io.xml |   19 ++++++++++++++-----
>  drivers/media/usb/uvc/uvc_queue.c      |    3 ++-
>  include/media/videobuf2-core.h         |    1 +
>  include/uapi/linux/videodev2.h         |   10 ++++++++++
>  4 files changed, 27 insertions(+), 6 deletions(-)
> 
> diff --git a/Documentation/DocBook/media/v4l/io.xml b/Documentation/DocBook/media/v4l/io.xml
> index 2c155cc..3aee210 100644
> --- a/Documentation/DocBook/media/v4l/io.xml
> +++ b/Documentation/DocBook/media/v4l/io.xml
> @@ -654,11 +654,12 @@ plane, are stored in struct <structname>v4l2_plane</structname> instead.
>  In that case, struct <structname>v4l2_buffer</structname> contains an array of
>  plane structures.</para>
>  
> -      <para>For timestamp types that are sampled from the system clock
> -(V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC) it is guaranteed that the timestamp is
> -taken after the complete frame has been received (or transmitted in
> -case of video output devices). For other kinds of
> -timestamps this may vary depending on the driver.</para>
> +      <para>The timestamp is taken once the complete frame has been
> +received (or transmitted for output devices) unless

unless -> unless the

> +<constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant> buffer flag is set.
> +If <constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant> is set, the

the -> then the

> +timestamp is taken when the first pixel of the frame is received
> +(or transmitted).</para>
>  
>      <table frame="none" pgwide="1" id="v4l2-buffer">
>        <title>struct <structname>v4l2_buffer</structname></title>
> @@ -1120,6 +1121,14 @@ in which case caches have not been used.</entry>
>  	    <entry>The CAPTURE buffer timestamp has been taken from the
>  	    corresponding OUTPUT buffer. This flag applies only to mem2mem devices.</entry>
>  	  </row>
> +	  <row>
> +	    <entry><constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant></entry>
> +	    <entry>0x00010000</entry>
> +	    <entry>The buffer timestamp has been taken when the first

I think 'has been' should be 'was' in this context.

> +	    pixel is received (or transmitted for output devices). If
> +	    this flag is not set, the timestamp is taken when the
> +	    entire frame has been received (or transmitted).</entry>

Ditto.

> +	  </row>
>  	</tbody>
>        </tgroup>
>      </table>
> diff --git a/drivers/media/usb/uvc/uvc_queue.c b/drivers/media/usb/uvc/uvc_queue.c
> index cd962be..0d80512 100644
> --- a/drivers/media/usb/uvc/uvc_queue.c
> +++ b/drivers/media/usb/uvc/uvc_queue.c
> @@ -149,7 +149,8 @@ int uvc_queue_init(struct uvc_video_queue *queue, enum v4l2_buf_type type,
>  	queue->queue.buf_struct_size = sizeof(struct uvc_buffer);
>  	queue->queue.ops = &uvc_queue_qops;
>  	queue->queue.mem_ops = &vb2_vmalloc_memops;
> -	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
> +	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC
> +		| V4L2_BUF_FLAG_TIMESTAMP_SOF;
>  	ret = vb2_queue_init(&queue->queue);
>  	if (ret)
>  		return ret;
> diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h
> index 6781258..033efc7 100644
> --- a/include/media/videobuf2-core.h
> +++ b/include/media/videobuf2-core.h
> @@ -307,6 +307,7 @@ struct v4l2_fh;
>   * @buf_struct_size: size of the driver-specific buffer structure;
>   *		"0" indicates the driver doesn't want to use a custom buffer
>   *		structure type, so sizeof(struct vb2_buffer) will is used
> + * @timestamp_type: Timestamp flags; V4L2_BUF_FLAGS_TIMESTAMP_*
>   * @gfp_flags:	additional gfp flags used when allocating the buffers.
>   *		Typically this is 0, but it may be e.g. GFP_DMA or __GFP_DMA32
>   *		to force the buffer allocation to a specific memory zone.
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index 691077d..c57765e 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -695,6 +695,16 @@ struct v4l2_buffer {
>  #define V4L2_BUF_FLAG_TIMESTAMP_UNKNOWN		0x00000000
>  #define V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC	0x00002000
>  #define V4L2_BUF_FLAG_TIMESTAMP_COPY		0x00004000
> +/*
> + * Timestamp taken once the first pixel is received (or transmitted). 
> + * If the flag is not set the buffer timestamp is taken at the end of
> + * the frame. This is not a timestamp type.
> + *
> + * In general drivers should not use this flag if the end-of-frame
> + * timestamps is as good quality as the start-of-frame one; the
> + * V4L2_EVENT_FRAME_SYNC event should be used in that case instead.
> + */
> +#define V4L2_BUF_FLAG_TIMESTAMP_SOF		0x00010000
>  
>  /**
>   * struct v4l2_exportbuffer - export of video buffer as DMABUF file descriptor
> 

After making the changes suggested above:

Acked-by: Hans Verkuil <hans.verkuil@cisco.com>

Regards,

	Hans
Laurent Pinchart Aug. 28, 2013, 4:03 p.m. UTC | #2
Hi Sakari,

Thank you for the patches.

On Wednesday 28 August 2013 18:24:55 Sakari Ailus wrote:
> Some devices such as the uvc produce timestamps at the beginning of the
> frame rather than at the end of it. Add a buffer flag
> (V4L2_BUF_FLAG_TIMESTAMP_SOF) to tell about this.
> 
> Also document timestamp_type in struct vb2_queue, and make the uvc set the
> buffer flag.
> 
> Signed-off-by: Sakari Ailus <sakari.ailus@iki.fi>
> ---
> since v4:
> - Fixes according to Hans's comments.
> 
> - Note in comment the uvc driver will set the SOF flag from now on.
> 
> - Change comment of vb2_queue timestamp_type field: this is timestamp flags
>   rather than just type. I stopped short of renaming the field.
> 
>  Documentation/DocBook/media/v4l/io.xml |   19 ++++++++++++++-----
>  drivers/media/usb/uvc/uvc_queue.c      |    3 ++-
>  include/media/videobuf2-core.h         |    1 +
>  include/uapi/linux/videodev2.h         |   10 ++++++++++
>  4 files changed, 27 insertions(+), 6 deletions(-)
> 
> diff --git a/Documentation/DocBook/media/v4l/io.xml
> b/Documentation/DocBook/media/v4l/io.xml index 2c155cc..3aee210 100644
> --- a/Documentation/DocBook/media/v4l/io.xml
> +++ b/Documentation/DocBook/media/v4l/io.xml
> @@ -654,11 +654,12 @@ plane, are stored in struct
> <structname>v4l2_plane</structname> instead. In that case, struct
> <structname>v4l2_buffer</structname> contains an array of plane
> structures.</para>
> 
> -      <para>For timestamp types that are sampled from the system clock
> -(V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC) it is guaranteed that the timestamp is
> -taken after the complete frame has been received (or transmitted in
> -case of video output devices). For other kinds of
> -timestamps this may vary depending on the driver.</para>
> +      <para>The timestamp is taken once the complete frame has been
> +received (or transmitted for output devices) unless
> +<constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant> buffer flag is set.
> +If <constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant> is set, the
> +timestamp is taken when the first pixel of the frame is received
> +(or transmitted).</para>
> 
>      <table frame="none" pgwide="1" id="v4l2-buffer">
>        <title>struct <structname>v4l2_buffer</structname></title>
> @@ -1120,6 +1121,14 @@ in which case caches have not been used.</entry>
>  	    <entry>The CAPTURE buffer timestamp has been taken from the
>  	    corresponding OUTPUT buffer. This flag applies only to mem2mem
> devices.</entry> </row>
> +	  <row>
> +	    <entry><constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant></entry>
> +	    <entry>0x00010000</entry>
> +	    <entry>The buffer timestamp has been taken when the first
> +	    pixel is received (or transmitted for output devices). If
> +	    this flag is not set, the timestamp is taken when the
> +	    entire frame has been received (or transmitted).</entry>
> +	  </row>
>  	</tbody>
>        </tgroup>
>      </table>
> diff --git a/drivers/media/usb/uvc/uvc_queue.c
> b/drivers/media/usb/uvc/uvc_queue.c index cd962be..0d80512 100644
> --- a/drivers/media/usb/uvc/uvc_queue.c
> +++ b/drivers/media/usb/uvc/uvc_queue.c
> @@ -149,7 +149,8 @@ int uvc_queue_init(struct uvc_video_queue *queue, enum
> v4l2_buf_type type, queue->queue.buf_struct_size = sizeof(struct
> uvc_buffer);
>  	queue->queue.ops = &uvc_queue_qops;
>  	queue->queue.mem_ops = &vb2_vmalloc_memops;
> -	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
> +	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC
> +		| V4L2_BUF_FLAG_TIMESTAMP_SOF;
>  	ret = vb2_queue_init(&queue->queue);
>  	if (ret)
>  		return ret;
> diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h
> index 6781258..033efc7 100644
> --- a/include/media/videobuf2-core.h
> +++ b/include/media/videobuf2-core.h
> @@ -307,6 +307,7 @@ struct v4l2_fh;
>   * @buf_struct_size: size of the driver-specific buffer structure;
>   *		"0" indicates the driver doesn't want to use a custom buffer
>   *		structure type, so sizeof(struct vb2_buffer) will is used
> + * @timestamp_type: Timestamp flags; V4L2_BUF_FLAGS_TIMESTAMP_*
>   * @gfp_flags:	additional gfp flags used when allocating the buffers.
>   *		Typically this is 0, but it may be e.g. GFP_DMA or __GFP_DMA32
>   *		to force the buffer allocation to a specific memory zone.
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index 691077d..c57765e 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -695,6 +695,16 @@ struct v4l2_buffer {
>  #define V4L2_BUF_FLAG_TIMESTAMP_UNKNOWN		0x00000000
>  #define V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC	0x00002000
>  #define V4L2_BUF_FLAG_TIMESTAMP_COPY		0x00004000
> +/*
> + * Timestamp taken once the first pixel is received (or transmitted).
> + * If the flag is not set the buffer timestamp is taken at the end of
> + * the frame. This is not a timestamp type.

UVC devices timestamp frames when the frame is captured, not when the first 
pixel is transmitted.

For the other two patches,

Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

> + * In general drivers should not use this flag if the end-of-frame
> + * timestamps is as good quality as the start-of-frame one; the
> + * V4L2_EVENT_FRAME_SYNC event should be used in that case instead.
> + */
> +#define V4L2_BUF_FLAG_TIMESTAMP_SOF		0x00010000
> 
>  /**
>   * struct v4l2_exportbuffer - export of video buffer as DMABUF file
> descriptor
Sakari Ailus Aug. 28, 2013, 4:06 p.m. UTC | #3
Hi Hans,

Thanks for your prompt comments.

On Wed, Aug 28, 2013 at 05:30:01PM +0200, Hans Verkuil wrote:
> On 08/28/2013 05:24 PM, Sakari Ailus wrote:
> > Some devices such as the uvc produce timestamps at the beginning of the
> > frame rather than at the end of it. Add a buffer flag
> > (V4L2_BUF_FLAG_TIMESTAMP_SOF) to tell about this.
> > 
> > Also document timestamp_type in struct vb2_queue, and make the uvc set the
> > buffer flag.
> > 
> > Signed-off-by: Sakari Ailus <sakari.ailus@iki.fi>
> > ---
> > since v4:
> > - Fixes according to Hans's comments.
> > 
> > - Note in comment the uvc driver will set the SOF flag from now on.
> > 
> > - Change comment of vb2_queue timestamp_type field: this is timestamp flags
> >   rather than just type. I stopped short of renaming the field.
> > 
> >  Documentation/DocBook/media/v4l/io.xml |   19 ++++++++++++++-----
> >  drivers/media/usb/uvc/uvc_queue.c      |    3 ++-
> >  include/media/videobuf2-core.h         |    1 +
> >  include/uapi/linux/videodev2.h         |   10 ++++++++++
> >  4 files changed, 27 insertions(+), 6 deletions(-)
> > 
> > diff --git a/Documentation/DocBook/media/v4l/io.xml b/Documentation/DocBook/media/v4l/io.xml
> > index 2c155cc..3aee210 100644
> > --- a/Documentation/DocBook/media/v4l/io.xml
> > +++ b/Documentation/DocBook/media/v4l/io.xml
> > @@ -654,11 +654,12 @@ plane, are stored in struct <structname>v4l2_plane</structname> instead.
> >  In that case, struct <structname>v4l2_buffer</structname> contains an array of
> >  plane structures.</para>
> >  
> > -      <para>For timestamp types that are sampled from the system clock
> > -(V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC) it is guaranteed that the timestamp is
> > -taken after the complete frame has been received (or transmitted in
> > -case of video output devices). For other kinds of
> > -timestamps this may vary depending on the driver.</para>
> > +      <para>The timestamp is taken once the complete frame has been
> > +received (or transmitted for output devices) unless
> 
> unless -> unless the
> 
> > +<constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant> buffer flag is set.
> > +If <constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant> is set, the
> 
> the -> then the

Fixed both.

> > +timestamp is taken when the first pixel of the frame is received
> > +(or transmitted).</para>
> >  
> >      <table frame="none" pgwide="1" id="v4l2-buffer">
> >        <title>struct <structname>v4l2_buffer</structname></title>
> > @@ -1120,6 +1121,14 @@ in which case caches have not been used.</entry>
> >  	    <entry>The CAPTURE buffer timestamp has been taken from the
> >  	    corresponding OUTPUT buffer. This flag applies only to mem2mem devices.</entry>
> >  	  </row>
> > +	  <row>
> > +	    <entry><constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant></entry>
> > +	    <entry>0x00010000</entry>
> > +	    <entry>The buffer timestamp has been taken when the first
> 
> I think 'has been' should be 'was' in this context.

Then I wonder if I should change all the other flags, too. :-) "Has been" is
consistent with the documentation of other flags.
Sakari Ailus Aug. 28, 2013, 4:09 p.m. UTC | #4
Hi Laurent,

Thanks for the comments!

On Wed, Aug 28, 2013 at 06:03:20PM +0200, Laurent Pinchart wrote:
> Hi Sakari,
> 
> Thank you for the patches.
> 
> On Wednesday 28 August 2013 18:24:55 Sakari Ailus wrote:
> > Some devices such as the uvc produce timestamps at the beginning of the
> > frame rather than at the end of it. Add a buffer flag
> > (V4L2_BUF_FLAG_TIMESTAMP_SOF) to tell about this.
> > 
> > Also document timestamp_type in struct vb2_queue, and make the uvc set the
> > buffer flag.
> > 
> > Signed-off-by: Sakari Ailus <sakari.ailus@iki.fi>
> > ---
> > since v4:
> > - Fixes according to Hans's comments.
> > 
> > - Note in comment the uvc driver will set the SOF flag from now on.
> > 
> > - Change comment of vb2_queue timestamp_type field: this is timestamp flags
> >   rather than just type. I stopped short of renaming the field.
> > 
> >  Documentation/DocBook/media/v4l/io.xml |   19 ++++++++++++++-----
> >  drivers/media/usb/uvc/uvc_queue.c      |    3 ++-
> >  include/media/videobuf2-core.h         |    1 +
> >  include/uapi/linux/videodev2.h         |   10 ++++++++++
> >  4 files changed, 27 insertions(+), 6 deletions(-)
> > 
> > diff --git a/Documentation/DocBook/media/v4l/io.xml
> > b/Documentation/DocBook/media/v4l/io.xml index 2c155cc..3aee210 100644
> > --- a/Documentation/DocBook/media/v4l/io.xml
> > +++ b/Documentation/DocBook/media/v4l/io.xml
> > @@ -654,11 +654,12 @@ plane, are stored in struct
> > <structname>v4l2_plane</structname> instead. In that case, struct
> > <structname>v4l2_buffer</structname> contains an array of plane
> > structures.</para>
> > 
> > -      <para>For timestamp types that are sampled from the system clock
> > -(V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC) it is guaranteed that the timestamp is
> > -taken after the complete frame has been received (or transmitted in
> > -case of video output devices). For other kinds of
> > -timestamps this may vary depending on the driver.</para>
> > +      <para>The timestamp is taken once the complete frame has been
> > +received (or transmitted for output devices) unless
> > +<constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant> buffer flag is set.
> > +If <constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant> is set, the
> > +timestamp is taken when the first pixel of the frame is received
> > +(or transmitted).</para>
> > 
> >      <table frame="none" pgwide="1" id="v4l2-buffer">
> >        <title>struct <structname>v4l2_buffer</structname></title>
> > @@ -1120,6 +1121,14 @@ in which case caches have not been used.</entry>
> >  	    <entry>The CAPTURE buffer timestamp has been taken from the
> >  	    corresponding OUTPUT buffer. This flag applies only to mem2mem
> > devices.</entry> </row>
> > +	  <row>
> > +	    <entry><constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant></entry>
> > +	    <entry>0x00010000</entry>
> > +	    <entry>The buffer timestamp has been taken when the first
> > +	    pixel is received (or transmitted for output devices). If
> > +	    this flag is not set, the timestamp is taken when the
> > +	    entire frame has been received (or transmitted).</entry>
> > +	  </row>
> >  	</tbody>
> >        </tgroup>
> >      </table>
> > diff --git a/drivers/media/usb/uvc/uvc_queue.c
> > b/drivers/media/usb/uvc/uvc_queue.c index cd962be..0d80512 100644
> > --- a/drivers/media/usb/uvc/uvc_queue.c
> > +++ b/drivers/media/usb/uvc/uvc_queue.c
> > @@ -149,7 +149,8 @@ int uvc_queue_init(struct uvc_video_queue *queue, enum
> > v4l2_buf_type type, queue->queue.buf_struct_size = sizeof(struct
> > uvc_buffer);
> >  	queue->queue.ops = &uvc_queue_qops;
> >  	queue->queue.mem_ops = &vb2_vmalloc_memops;
> > -	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
> > +	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC
> > +		| V4L2_BUF_FLAG_TIMESTAMP_SOF;
> >  	ret = vb2_queue_init(&queue->queue);
> >  	if (ret)
> >  		return ret;
> > diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h
> > index 6781258..033efc7 100644
> > --- a/include/media/videobuf2-core.h
> > +++ b/include/media/videobuf2-core.h
> > @@ -307,6 +307,7 @@ struct v4l2_fh;
> >   * @buf_struct_size: size of the driver-specific buffer structure;
> >   *		"0" indicates the driver doesn't want to use a custom buffer
> >   *		structure type, so sizeof(struct vb2_buffer) will is used
> > + * @timestamp_type: Timestamp flags; V4L2_BUF_FLAGS_TIMESTAMP_*
> >   * @gfp_flags:	additional gfp flags used when allocating the buffers.
> >   *		Typically this is 0, but it may be e.g. GFP_DMA or __GFP_DMA32
> >   *		to force the buffer allocation to a specific memory zone.
> > diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> > index 691077d..c57765e 100644
> > --- a/include/uapi/linux/videodev2.h
> > +++ b/include/uapi/linux/videodev2.h
> > @@ -695,6 +695,16 @@ struct v4l2_buffer {
> >  #define V4L2_BUF_FLAG_TIMESTAMP_UNKNOWN		0x00000000
> >  #define V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC	0x00002000
> >  #define V4L2_BUF_FLAG_TIMESTAMP_COPY		0x00004000
> > +/*
> > + * Timestamp taken once the first pixel is received (or transmitted).
> > + * If the flag is not set the buffer timestamp is taken at the end of
> > + * the frame. This is not a timestamp type.
> 
> UVC devices timestamp frames when the frame is captured, not when the first 
> pixel is transmitted.

I.e. we shouldn't set the SOF flag? "When the frame is captured" doesn't say
much, or almost anything in terms of *when*. The frames have exposure time
and rolling shutter makes a difference, too.

> For the other two patches,
> 
> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

Thanks! :-)
Laurent Pinchart Aug. 28, 2013, 4:14 p.m. UTC | #5
On Wednesday 28 August 2013 19:09:23 Sakari Ailus wrote:
> Hi Laurent,
> 
> Thanks for the comments!
> 
> On Wed, Aug 28, 2013 at 06:03:20PM +0200, Laurent Pinchart wrote:
> > Hi Sakari,
> > 
> > Thank you for the patches.
> > 
> > On Wednesday 28 August 2013 18:24:55 Sakari Ailus wrote:
> > > Some devices such as the uvc produce timestamps at the beginning of the
> > > frame rather than at the end of it. Add a buffer flag
> > > (V4L2_BUF_FLAG_TIMESTAMP_SOF) to tell about this.
> > > 
> > > Also document timestamp_type in struct vb2_queue, and make the uvc set
> > > the
> > > buffer flag.
> > > 
> > > Signed-off-by: Sakari Ailus <sakari.ailus@iki.fi>
> > > ---
> > > since v4:
> > > - Fixes according to Hans's comments.
> > > 
> > > - Note in comment the uvc driver will set the SOF flag from now on.
> > > 
> > > - Change comment of vb2_queue timestamp_type field: this is timestamp
> > > flags
> > > 
> > >   rather than just type. I stopped short of renaming the field.
> > >  
> > >  Documentation/DocBook/media/v4l/io.xml |   19 ++++++++++++++-----
> > >  drivers/media/usb/uvc/uvc_queue.c      |    3 ++-
> > >  include/media/videobuf2-core.h         |    1 +
> > >  include/uapi/linux/videodev2.h         |   10 ++++++++++
> > >  4 files changed, 27 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/Documentation/DocBook/media/v4l/io.xml
> > > b/Documentation/DocBook/media/v4l/io.xml index 2c155cc..3aee210 100644
> > > --- a/Documentation/DocBook/media/v4l/io.xml
> > > +++ b/Documentation/DocBook/media/v4l/io.xml
> > > @@ -654,11 +654,12 @@ plane, are stored in struct
> > > <structname>v4l2_plane</structname> instead. In that case, struct
> > > <structname>v4l2_buffer</structname> contains an array of plane
> > > structures.</para>
> > > 
> > > -      <para>For timestamp types that are sampled from the system clock
> > > -(V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC) it is guaranteed that the timestamp
> > > is
> > > -taken after the complete frame has been received (or transmitted in
> > > -case of video output devices). For other kinds of
> > > -timestamps this may vary depending on the driver.</para>
> > > +      <para>The timestamp is taken once the complete frame has been
> > > +received (or transmitted for output devices) unless
> > > +<constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant> buffer flag is set.
> > > +If <constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant> is set, the
> > > +timestamp is taken when the first pixel of the frame is received
> > > +(or transmitted).</para>
> > > 
> > >      <table frame="none" pgwide="1" id="v4l2-buffer">
> > >      
> > >        <title>struct <structname>v4l2_buffer</structname></title>
> > > 
> > > @@ -1120,6 +1121,14 @@ in which case caches have not been used.</entry>
> > > 
> > >  	    <entry>The CAPTURE buffer timestamp has been taken from the
> > >  	    corresponding OUTPUT buffer. This flag applies only to mem2mem
> > > 
> > > devices.</entry> </row>
> > > +	  <row>
> > > +	    <entry><constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant></entry>
> > > +	    <entry>0x00010000</entry>
> > > +	    <entry>The buffer timestamp has been taken when the first
> > > +	    pixel is received (or transmitted for output devices). If
> > > +	    this flag is not set, the timestamp is taken when the
> > > +	    entire frame has been received (or transmitted).</entry>
> > > +	  </row>
> > > 
> > >  	</tbody>
> > >  	
> > >        </tgroup>
> > >      
> > >      </table>
> > > 
> > > diff --git a/drivers/media/usb/uvc/uvc_queue.c
> > > b/drivers/media/usb/uvc/uvc_queue.c index cd962be..0d80512 100644
> > > --- a/drivers/media/usb/uvc/uvc_queue.c
> > > +++ b/drivers/media/usb/uvc/uvc_queue.c
> > > @@ -149,7 +149,8 @@ int uvc_queue_init(struct uvc_video_queue *queue,
> > > enum
> > > v4l2_buf_type type, queue->queue.buf_struct_size = sizeof(struct
> > > uvc_buffer);
> > > 
> > >  	queue->queue.ops = &uvc_queue_qops;
> > >  	queue->queue.mem_ops = &vb2_vmalloc_memops;
> > > 
> > > -	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
> > > +	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC
> > > +		| V4L2_BUF_FLAG_TIMESTAMP_SOF;
> > > 
> > >  	ret = vb2_queue_init(&queue->queue);
> > >  	if (ret)
> > >  	
> > >  		return ret;
> > > 
> > > diff --git a/include/media/videobuf2-core.h
> > > b/include/media/videobuf2-core.h index 6781258..033efc7 100644
> > > --- a/include/media/videobuf2-core.h
> > > +++ b/include/media/videobuf2-core.h
> > > @@ -307,6 +307,7 @@ struct v4l2_fh;
> > > 
> > >   * @buf_struct_size: size of the driver-specific buffer structure;
> > >   *		"0" indicates the driver doesn't want to use a custom buffer
> > >   *		structure type, so sizeof(struct vb2_buffer) will is used
> > > 
> > > + * @timestamp_type: Timestamp flags; V4L2_BUF_FLAGS_TIMESTAMP_*
> > > 
> > >   * @gfp_flags:	additional gfp flags used when allocating the buffers.
> > >   *		Typically this is 0, but it may be e.g. GFP_DMA or __GFP_DMA32
> > >   *		to force the buffer allocation to a specific memory zone.
> > > 
> > > diff --git a/include/uapi/linux/videodev2.h
> > > b/include/uapi/linux/videodev2.h index 691077d..c57765e 100644
> > > --- a/include/uapi/linux/videodev2.h
> > > +++ b/include/uapi/linux/videodev2.h
> > > @@ -695,6 +695,16 @@ struct v4l2_buffer {
> > > 
> > >  #define V4L2_BUF_FLAG_TIMESTAMP_UNKNOWN		0x00000000
> > >  #define V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC	0x00002000
> > >  #define V4L2_BUF_FLAG_TIMESTAMP_COPY		0x00004000
> > > 
> > > +/*
> > > + * Timestamp taken once the first pixel is received (or transmitted).
> > > + * If the flag is not set the buffer timestamp is taken at the end of
> > > + * the frame. This is not a timestamp type.
> > 
> > UVC devices timestamp frames when the frame is captured, not when the
> > first
> > pixel is transmitted.
> 
> I.e. we shouldn't set the SOF flag? "When the frame is captured" doesn't say
> much, or almost anything in terms of *when*. The frames have exposure time
> and rolling shutter makes a difference, too.

The UVC 1.1 specification defines the timestamp as

"The source clock time in native deviceclock units when the raw frame capture 
begins."

What devices do in practice may differ :-)

> > For the other two patches,
> > 
> > Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> 
> Thanks! :-)
Sakari Ailus Aug. 28, 2013, 4:39 p.m. UTC | #6
Hi Laurent,

On Wed, Aug 28, 2013 at 06:14:44PM +0200, Laurent Pinchart wrote:
...
> > > > diff --git a/drivers/media/usb/uvc/uvc_queue.c
> > > > b/drivers/media/usb/uvc/uvc_queue.c index cd962be..0d80512 100644
> > > > --- a/drivers/media/usb/uvc/uvc_queue.c
> > > > +++ b/drivers/media/usb/uvc/uvc_queue.c
> > > > @@ -149,7 +149,8 @@ int uvc_queue_init(struct uvc_video_queue *queue,
> > > > enum
> > > > v4l2_buf_type type, queue->queue.buf_struct_size = sizeof(struct
> > > > uvc_buffer);
> > > > 
> > > >  	queue->queue.ops = &uvc_queue_qops;
> > > >  	queue->queue.mem_ops = &vb2_vmalloc_memops;
> > > > 
> > > > -	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
> > > > +	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC
> > > > +		| V4L2_BUF_FLAG_TIMESTAMP_SOF;
> > > > 
> > > >  	ret = vb2_queue_init(&queue->queue);
> > > >  	if (ret)
> > > >  	
> > > >  		return ret;
> > > > 
> > > > diff --git a/include/media/videobuf2-core.h
> > > > b/include/media/videobuf2-core.h index 6781258..033efc7 100644
> > > > --- a/include/media/videobuf2-core.h
> > > > +++ b/include/media/videobuf2-core.h
> > > > @@ -307,6 +307,7 @@ struct v4l2_fh;
> > > > 
> > > >   * @buf_struct_size: size of the driver-specific buffer structure;
> > > >   *		"0" indicates the driver doesn't want to use a custom buffer
> > > >   *		structure type, so sizeof(struct vb2_buffer) will is used
> > > > 
> > > > + * @timestamp_type: Timestamp flags; V4L2_BUF_FLAGS_TIMESTAMP_*
> > > > 
> > > >   * @gfp_flags:	additional gfp flags used when allocating the buffers.
> > > >   *		Typically this is 0, but it may be e.g. GFP_DMA or __GFP_DMA32
> > > >   *		to force the buffer allocation to a specific memory zone.
> > > > 
> > > > diff --git a/include/uapi/linux/videodev2.h
> > > > b/include/uapi/linux/videodev2.h index 691077d..c57765e 100644
> > > > --- a/include/uapi/linux/videodev2.h
> > > > +++ b/include/uapi/linux/videodev2.h
> > > > @@ -695,6 +695,16 @@ struct v4l2_buffer {
> > > > 
> > > >  #define V4L2_BUF_FLAG_TIMESTAMP_UNKNOWN		0x00000000
> > > >  #define V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC	0x00002000
> > > >  #define V4L2_BUF_FLAG_TIMESTAMP_COPY		0x00004000
> > > > 
> > > > +/*
> > > > + * Timestamp taken once the first pixel is received (or transmitted).
> > > > + * If the flag is not set the buffer timestamp is taken at the end of
> > > > + * the frame. This is not a timestamp type.
> > > 
> > > UVC devices timestamp frames when the frame is captured, not when the
> > > first
> > > pixel is transmitted.
> > 
> > I.e. we shouldn't set the SOF flag? "When the frame is captured" doesn't say
> > much, or almost anything in terms of *when*. The frames have exposure time
> > and rolling shutter makes a difference, too.
> 
> The UVC 1.1 specification defines the timestamp as
> 
> "The source clock time in native deviceclock units when the raw frame capture 
> begins."
> 
> What devices do in practice may differ :-)

I think that this should mean start-of-frame - exposure time. I'd really
wonder if any practical implementation does that however.

What's your suggestion: should we use the SOF flag for this, or do you prefer
the end-of-frame timestamp instead? I think it'd be quite nice for drivers
to know which one is which without having to guess, and based on the above,
start-of-frame comes as close to that definition as is meaningful.

> > > For the other two patches,
> > > 
> > > Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
> > 
> > Thanks! :-)
Laurent Pinchart Aug. 28, 2013, 11:25 p.m. UTC | #7
Hi Sakari,

On Wednesday 28 August 2013 19:39:19 Sakari Ailus wrote:
> On Wed, Aug 28, 2013 at 06:14:44PM +0200, Laurent Pinchart wrote:
> ...
> 
> > > > > diff --git a/drivers/media/usb/uvc/uvc_queue.c
> > > > > b/drivers/media/usb/uvc/uvc_queue.c index cd962be..0d80512 100644
> > > > > --- a/drivers/media/usb/uvc/uvc_queue.c
> > > > > +++ b/drivers/media/usb/uvc/uvc_queue.c
> > > > > @@ -149,7 +149,8 @@ int uvc_queue_init(struct uvc_video_queue
> > > > > *queue, enum v4l2_buf_type type,
> > > > >  	queue->queue.buf_struct_size = sizeof(struct uvc_buffer);
> > > > >  	queue->queue.ops = &uvc_queue_qops;
> > > > >  	queue->queue.mem_ops = &vb2_vmalloc_memops;
> > > > > -	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
> > > > > +	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC
> > > > > +		| V4L2_BUF_FLAG_TIMESTAMP_SOF;
> > > > >  	ret = vb2_queue_init(&queue->queue);
> > > > >  	if (ret)
> > > > >  		return ret;
> > > > > diff --git a/include/media/videobuf2-core.h
> > > > > b/include/media/videobuf2-core.h index 6781258..033efc7 100644
> > > > > --- a/include/media/videobuf2-core.h
> > > > > +++ b/include/media/videobuf2-core.h
> > > > > @@ -307,6 +307,7 @@ struct v4l2_fh;
> > > > > 
> > > > >   * @buf_struct_size: size of the driver-specific buffer structure;
> > > > >   *		"0" indicates the driver doesn't want to use a custom buffer
> > > > >   *		structure type, so sizeof(struct vb2_buffer) will is used
> > > > > 
> > > > > + * @timestamp_type: Timestamp flags; V4L2_BUF_FLAGS_TIMESTAMP_*
> > > > > 
> > > > >   * @gfp_flags:	additional gfp flags used when allocating the
> > > > >   buffers.
> > > > >   *		Typically this is 0, but it may be e.g. GFP_DMA or 
__GFP_DMA32
> > > > >   *		to force the buffer allocation to a specific memory zone.
> > > > > 
> > > > > diff --git a/include/uapi/linux/videodev2.h
> > > > > b/include/uapi/linux/videodev2.h index 691077d..c57765e 100644
> > > > > --- a/include/uapi/linux/videodev2.h
> > > > > +++ b/include/uapi/linux/videodev2.h
> > > > > @@ -695,6 +695,16 @@ struct v4l2_buffer {
> > > > > 
> > > > >  #define V4L2_BUF_FLAG_TIMESTAMP_UNKNOWN		0x00000000
> > > > >  #define V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC	0x00002000
> > > > >  #define V4L2_BUF_FLAG_TIMESTAMP_COPY		0x00004000
> > > > > 
> > > > > +/*
> > > > > + * Timestamp taken once the first pixel is received (or
> > > > > transmitted).
> > > > > + * If the flag is not set the buffer timestamp is taken at the end
> > > > > of
> > > > > + * the frame. This is not a timestamp type.
> > > > 
> > > > UVC devices timestamp frames when the frame is captured, not when the
> > > > first pixel is transmitted.
> > > 
> > > I.e. we shouldn't set the SOF flag? "When the frame is captured" doesn't
> > > say much, or almost anything in terms of *when*. The frames have
> > > exposure time and rolling shutter makes a difference, too.
> > 
> > The UVC 1.1 specification defines the timestamp as
> > 
> > "The source clock time in native deviceclock units when the raw frame
> > capture begins."
> > 
> > What devices do in practice may differ :-)
> 
> I think that this should mean start-of-frame - exposure time. I'd really
> wonder if any practical implementation does that however.

It's start-of-frame - exposure time - internal delays (UVC webcams are 
supposed to report their internal delay value as well).

> What's your suggestion; should we use the SOF flag for this or do you prefer
> the end-of-frame timestamp instead? I think it'd be quite nice for drivers
> to know which one is which without having to guess, and based on the above
> start-of-frame comes as close to that definition as is meaningful.

SOF is better than EOF. Do we need a start-of-capture flag, or could we 
document SOF as meaning start-of-capture or start-of-reception depending on 
what the device can do ?
Sakari Ailus Aug. 29, 2013, 11:33 a.m. UTC | #8
Hi Laurent,

On Thu, Aug 29, 2013 at 01:25:05AM +0200, Laurent Pinchart wrote:
> Hi Sakari,
> 
> On Wednesday 28 August 2013 19:39:19 Sakari Ailus wrote:
> > On Wed, Aug 28, 2013 at 06:14:44PM +0200, Laurent Pinchart wrote:
> > ...
> > 
> > > > > > diff --git a/drivers/media/usb/uvc/uvc_queue.c
> > > > > > b/drivers/media/usb/uvc/uvc_queue.c index cd962be..0d80512 100644
> > > > > > --- a/drivers/media/usb/uvc/uvc_queue.c
> > > > > > +++ b/drivers/media/usb/uvc/uvc_queue.c
> > > > > > @@ -149,7 +149,8 @@ int uvc_queue_init(struct uvc_video_queue
> > > > > > *queue, enum v4l2_buf_type type,
> > > > > >  	queue->queue.buf_struct_size = sizeof(struct uvc_buffer);
> > > > > >  	queue->queue.ops = &uvc_queue_qops;
> > > > > >  	queue->queue.mem_ops = &vb2_vmalloc_memops;
> > > > > > -	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
> > > > > > +	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC
> > > > > > +		| V4L2_BUF_FLAG_TIMESTAMP_SOF;
> > > > > >  	ret = vb2_queue_init(&queue->queue);
> > > > > >  	if (ret)
> > > > > >  		return ret;
> > > > > > diff --git a/include/media/videobuf2-core.h
> > > > > > b/include/media/videobuf2-core.h index 6781258..033efc7 100644
> > > > > > --- a/include/media/videobuf2-core.h
> > > > > > +++ b/include/media/videobuf2-core.h
> > > > > > @@ -307,6 +307,7 @@ struct v4l2_fh;
> > > > > > 
> > > > > >   * @buf_struct_size: size of the driver-specific buffer structure;
> > > > > >   *		"0" indicates the driver doesn't want to use a custom buffer
> > > > > >   *		structure type, so sizeof(struct vb2_buffer) will is used
> > > > > > 
> > > > > > + * @timestamp_type: Timestamp flags; V4L2_BUF_FLAGS_TIMESTAMP_*
> > > > > > 
> > > > > >   * @gfp_flags:	additional gfp flags used when allocating the
> > > > > >   buffers.
> > > > > >   *		Typically this is 0, but it may be e.g. GFP_DMA or 
> __GFP_DMA32
> > > > > >   *		to force the buffer allocation to a specific memory zone.
> > > > > > 
> > > > > > diff --git a/include/uapi/linux/videodev2.h
> > > > > > b/include/uapi/linux/videodev2.h index 691077d..c57765e 100644
> > > > > > --- a/include/uapi/linux/videodev2.h
> > > > > > +++ b/include/uapi/linux/videodev2.h
> > > > > > @@ -695,6 +695,16 @@ struct v4l2_buffer {
> > > > > > 
> > > > > >  #define V4L2_BUF_FLAG_TIMESTAMP_UNKNOWN		0x00000000
> > > > > >  #define V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC	0x00002000
> > > > > >  #define V4L2_BUF_FLAG_TIMESTAMP_COPY		0x00004000
> > > > > > 
> > > > > > +/*
> > > > > > + * Timestamp taken once the first pixel is received (or
> > > > > > transmitted).
> > > > > > + * If the flag is not set the buffer timestamp is taken at the end
> > > > > > of
> > > > > > + * the frame. This is not a timestamp type.
> > > > > 
> > > > > UVC devices timestamp frames when the frame is captured, not when the
> > > > > first pixel is transmitted.
> > > > 
> > > > I.e. we shouldn't set the SOF flag? "When the frame is captured" doesn't
> > > > say much, or almost anything in terms of *when*. The frames have
> > > > exposure time and rolling shutter makes a difference, too.
> > > 
> > > The UVC 1.1 specification defines the timestamp as
> > > 
> > > "The source clock time in native deviceclock units when the raw frame
> > > capture begins."
> > > 
> > > What devices do in practice may differ :-)
> > 
> > I think that this should mean start-of-frame - exposure time. I'd really
> > wonder if any practical implementation does that however.
> 
> It's start-of-frame - exposure time - internal delays (UVC webcams are 
> supposed to report their internal delay value as well).

Do they report it? How about the exposure time?

If you know them all you can calculate the SOF timestamp. The fewer
timestamp variants are exposed to user programs, the better.

It's another matter then if there are webcams that report these values
wrong. Then you could get timestamps that are complete garbage. But I guess
you could compare them to the current monotonic timestamp and detect such
cases.

> > What's your suggestion; should we use the SOF flag for this or do you prefer
> > the end-of-frame timestamp instead? I think it'd be quite nice for drivers
> > to know which one is which without having to guess, and based on the above
> > start-of-frame comes as close to that definition as is meaningful.
> 
> SOF is better than EOF. Do we need a start-of-capture flag, or could we 
> document SOF as meaning start-of-capture or start-of-reception depending on 
> what the device can do ?

One possibility is to dedicate a few flags for this; by using three bits
we'd get eight different timestamps already. But I have to say that fewer is
better. :-)
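As a sketch of the bitfield idea: a dedicated three-bit field inside the buffer flags can encode eight timestamp sources. The shift and mask below are hypothetical, chosen only to illustrate the encoding; they are not values defined by videodev2.h:

```c
#include <stdint.h>

/* Hypothetical three-bit timestamp-source field in v4l2_buffer.flags.
 * The shift and mask are illustrative only, not values from videodev2.h.
 * Three bits encode up to eight timestamp-source variants. */
#define TSTAMP_SRC_SHIFT 16
#define TSTAMP_SRC_MASK  (0x7u << TSTAMP_SRC_SHIFT)

/* Extract the timestamp-source value from a buffer's flags word. */
static inline uint32_t tstamp_src(uint32_t buf_flags)
{
	return (buf_flags & TSTAMP_SRC_MASK) >> TSTAMP_SRC_SHIFT;
}
```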
Laurent Pinchart Aug. 30, 2013, 11:31 a.m. UTC | #9
Hi Sakari,

On Thursday 29 August 2013 14:33:39 Sakari Ailus wrote:
> On Thu, Aug 29, 2013 at 01:25:05AM +0200, Laurent Pinchart wrote:
> > On Wednesday 28 August 2013 19:39:19 Sakari Ailus wrote:
> > > On Wed, Aug 28, 2013 at 06:14:44PM +0200, Laurent Pinchart wrote:
> > > ...
> > > 
> > > > > > UVC devices timestamp frames when the frame is captured, not when
> > > > > > the first pixel is transmitted.
> > > > > 
> > > > > I.e. we shouldn't set the SOF flag? "When the frame is captured"
> > > > > doesn't say much, or almost anything in terms of *when*. The frames
> > > > > have exposure time and rolling shutter makes a difference, too.
> > > > 
> > > > The UVC 1.1 specification defines the timestamp as
> > > > 
> > > > "The source clock time in native deviceclock units when the raw frame
> > > > capture begins."
> > > > 
> > > > What devices do in practice may differ :-)
> > > 
> > > I think that this should mean start-of-frame - exposure time. I'd really
> > > wonder if any practical implementation does that however.
> > 
> > It's start-of-frame - exposure time - internal delays (UVC webcams are
> > supposed to report their internal delay value as well).
> 
> Do they report it? How about the exposure time?

It's supposed to be configurable.

> If you know them all you can calculate the SOF timestamp. The fewer
> timestamps are available for user programs the better.
> 
> It's another matter then if there are webcams that report these values
> wrong.

There most probably are :-)

> Then you could get timestamps that are complete garbage. But I guess you
> could compare them to the current monotonic timestamp and detect such cases.
> 
> > > What's your suggestion; should we use the SOF flag for this or do you
> > > prefer the end-of-frame timestamp instead? I think it'd be quite nice
> > > for drivers to know which one is which without having to guess, and
> > > based on the above start-of-frame comes as close to that definition as
> > > is meaningful.
> > 
> > SOF is better than EOF. Do we need a start-of-capture flag, or could we
> > document SOF as meaning start-of-capture or start-of-reception depending
> > on what the device can do ?
> 
> One possibility is to dedicate a few flags for this; by using three bits
> we'd get eight different timestamps already. But I have to say that fewer is
> better. :-)

Does it really need to be a per-buffer flag ? This seems to be a driver-wide 
(or at least device-wide) behaviour to me.
Sakari Ailus Aug. 30, 2013, 4:08 p.m. UTC | #10
Hi Laurent,

On Fri, Aug 30, 2013 at 01:31:44PM +0200, Laurent Pinchart wrote:
> Hi Sakari,
> 
> On Thursday 29 August 2013 14:33:39 Sakari Ailus wrote:
> > On Thu, Aug 29, 2013 at 01:25:05AM +0200, Laurent Pinchart wrote:
> > > On Wednesday 28 August 2013 19:39:19 Sakari Ailus wrote:
> > > > On Wed, Aug 28, 2013 at 06:14:44PM +0200, Laurent Pinchart wrote:
> > > > ...
> > > > 
> > > > > > > UVC devices timestamp frames when the frame is captured, not when
> > > > > > > the first pixel is transmitted.
> > > > > > 
> > > > > > I.e. we shouldn't set the SOF flag? "When the frame is captured"
> > > > > > doesn't say much, or almost anything in terms of *when*. The frames
> > > > > > have exposure time and rolling shutter makes a difference, too.
> > > > > 
> > > > > The UVC 1.1 specification defines the timestamp as
> > > > > 
> > > > > "The source clock time in native deviceclock units when the raw frame
> > > > > capture begins."
> > > > > 
> > > > > What devices do in practice may differ :-)
> > > > 
> > > > I think that this should mean start-of-frame - exposure time. I'd really
> > > > wonder if any practical implementation does that however.
> > > 
> > > It's start-of-frame - exposure time - internal delays (UVC webcams are
> > > supposed to report their internal delay value as well).
> > 
> > Do they report it? How about the exposure time?
> 
> It's supposed to be configurable.

Is the exposure reported with the frame so it could be used to construct the
per-frame SOF timestamp?

> > If you know them all you can calculate the SOF timestamp. The fewer
> > timestamps are available for user programs the better.
> > 
> > It's another matter then if there are webcams that report these values
> > wrong.
> 
> There most probably are :-)
> 
> > Then you could get timestamps that are complete garbage. But I guess you
> > could compare them to the current monotonic timestamp and detect such cases.
> > 
> > > > What's your suggestion; should we use the SOF flag for this or do you
> > > > prefer the end-of-frame timestamp instead? I think it'd be quite nice
> > > > for drivers to know which one is which without having to guess, and
> > > > based on the above start-of-frame comes as close to that definition as
> > > > is meaningful.
> > > 
> > > SOF is better than EOF. Do we need a start-of-capture flag, or could we
> > > document SOF as meaning start-of-capture or start-of-reception depending
> > > on what the device can do ?
> > 
> > One possibility is to dedicate a few flags for this; by using three bits
> > we'd get eight different timestamps already. But I have to say that fewer is
> > better. :-)
> 
> Does it really need to be a per-buffer flag ? This seems to be a driver-wide 
> (or at least device-wide) behaviour to me.

Same goes for timestamp clock sources. It was concluded to use buffer flags
for those as well.

Using a control for the purpose would, however, require a non-trivial amount
of initialisation code in each driver, so that would probably need to be
sorted out first.
Laurent Pinchart Aug. 31, 2013, 9:43 p.m. UTC | #11
Hi Sakari,

On Friday 30 August 2013 19:08:48 Sakari Ailus wrote:
> On Fri, Aug 30, 2013 at 01:31:44PM +0200, Laurent Pinchart wrote:
> > On Thursday 29 August 2013 14:33:39 Sakari Ailus wrote:
> > > On Thu, Aug 29, 2013 at 01:25:05AM +0200, Laurent Pinchart wrote:
> > > > On Wednesday 28 August 2013 19:39:19 Sakari Ailus wrote:
> > > > > On Wed, Aug 28, 2013 at 06:14:44PM +0200, Laurent Pinchart wrote:
> > > > > ...
> > > > > 
> > > > > > > > UVC devices timestamp frames when the frame is captured, not
> > > > > > > > when the first pixel is transmitted.
> > > > > > > 
> > > > > > > I.e. we shouldn't set the SOF flag? "When the frame is captured"
> > > > > > > doesn't say much, or almost anything in terms of *when*. The
> > > > > > > frames have exposure time and rolling shutter makes a
> > > > > > > difference, too.
> > > > > > 
> > > > > > The UVC 1.1 specification defines the timestamp as
> > > > > > 
> > > > > > "The source clock time in native deviceclock units when the raw
> > > > > > frame capture begins."
> > > > > > 
> > > > > > What devices do in practice may differ :-)
> > > > > 
> > > > > I think that this should mean start-of-frame - exposure time. I'd
> > > > > really wonder if any practical implementation does that however.
> > > > 
> > > > It's start-of-frame - exposure time - internal delays (UVC webcams are
> > > > supposed to report their internal delay value as well).
> > > 
> > > Do they report it? How about the exposure time?
> > 
> > It's supposed to be configurable.
> 
> Is the exposure reported with the frame so it could be used to construct the
> per-frame SOF timestamp?

Not when auto-exposure is turned on I'm afraid :-S

I believe that the capture timestamp makes more sense than the SOF timestamp 
for applications. SOF/EOF are more of a poor man's timestamp in case nothing 
else is available, but when you want to synchronize multiple audio and/or 
video streams the capture timestamp is what you're interested in. I don't 
think converting a capture timestamp to an SOF would be a good idea.

> > > If you know them all you can calculate the SOF timestamp. The fewer
> > > timestamps are available for user programs the better.
> > > 
> > > It's another matter then if there are webcams that report these values
> > > wrong.
> > 
> > There most probably are :-)
> > 
> > > Then you could get timestamps that are complete garbage. But I guess you
> > > could compare them to the current monotonic timestamp and detect such
> > > cases.
> > >
> > > > > What's your suggestion; should we use the SOF flag for this or do
> > > > > you prefer the end-of-frame timestamp instead? I think it'd be quite
> > > > > nice for drivers to know which one is which without having to guess,
> > > > > and based on the above start-of-frame comes as close to that
> > > > > definition as is meaningful.
> > > > 
> > > > SOF is better than EOF. Do we need a start-of-capture flag, or could
> > > > we document SOF as meaning start-of-capture or start-of-reception
> > > > depending on what the device can do ?
> > > 
> > > One possibility is to dedicate a few flags for this; by using three bits
> > > we'd get eight different timestamps already. But I have to say that
> > > fewer is better. :-)
> > 
> > Does it really need to be a per-buffer flag ? This seems to be a
> > driver-wide (or at least device-wide) behaviour to me.
> 
> Same goes for timestamp clock sources. It was concluded to use buffer flags
> for those as well.

Yes, and I don't think I was convinced, so I'm not convinced here either :-)

> Using a control for the purpose would however require quite non-zero amount
> of initialisation code from each driver so that would probably need to be
> sorted out first.

We could also use a capabilities flag.
Sakari Ailus Sept. 5, 2013, 4:31 p.m. UTC | #12
Hi Laurent,

On Sat, Aug 31, 2013 at 11:43:18PM +0200, Laurent Pinchart wrote:
> Hi Sakari,
> 
> On Friday 30 August 2013 19:08:48 Sakari Ailus wrote:
> > On Fri, Aug 30, 2013 at 01:31:44PM +0200, Laurent Pinchart wrote:
> > > On Thursday 29 August 2013 14:33:39 Sakari Ailus wrote:
> > > > On Thu, Aug 29, 2013 at 01:25:05AM +0200, Laurent Pinchart wrote:
> > > > > On Wednesday 28 August 2013 19:39:19 Sakari Ailus wrote:
> > > > > > On Wed, Aug 28, 2013 at 06:14:44PM +0200, Laurent Pinchart wrote:
> > > > > > ...
> > > > > > 
> > > > > > > > > UVC devices timestamp frames when the frame is captured, not
> > > > > > > > > when the first pixel is transmitted.
> > > > > > > > 
> > > > > > > > I.e. we shouldn't set the SOF flag? "When the frame is captured"
> > > > > > > > doesn't say much, or almost anything in terms of *when*. The
> > > > > > > > frames have exposure time and rolling shutter makes a
> > > > > > > > difference, too.
> > > > > > > 
> > > > > > > The UVC 1.1 specification defines the timestamp as
> > > > > > > 
> > > > > > > "The source clock time in native deviceclock units when the raw
> > > > > > > frame capture begins."
> > > > > > > 
> > > > > > > What devices do in practice may differ :-)
> > > > > > 
> > > > > > I think that this should mean start-of-frame - exposure time. I'd
> > > > > > really wonder if any practical implementation does that however.
> > > > > 
> > > > > It's start-of-frame - exposure time - internal delays (UVC webcams are
> > > > > supposed to report their internal delay value as well).
> > > > 
> > > > Do they report it? How about the exposure time?
> > > 
> > > It's supposed to be configurable.
> > 
> > Is the exposure reported with the frame so it could be used to construct the
> > per-frame SOF timestamp?
> 
> Not when auto-exposure is turned on I'm afraid :-S
> 
> I believe that the capture timestamp makes more sense than the SOF timestamp 
> for applications. SOF/EOF are more of a poor man's timestamp in case nothing 
> else is available, but when you want to synchronize multiple audio and/or 
> video streams the capture timestamp is what you're interested in. I don't 
> think converting a capture timestamp to an SOF would be a good idea.

I'm not quite sure of that --- I think the SOF/EOF will be more stable than
the exposure start which depends on the exposure time. If you're recording a
video you may want to keep the time between the frames constant.

Nevertheless --- if we don't get such a timestamp from the device this will
only remain speculation. Applications might be best off using e.g. half the
frame period as a guesstimate of the difference between the two
timestamps.
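That half-frame-period guesstimate could look like the following in application code. The helper name and the microsecond units are mine, not anything V4L2 defines; it simply backs off half the frame period from an SOF timestamp when the exposure time is unknown:

```c
#include <stdint.h>

/* Illustrative helper, not a V4L2 API: when only a start-of-frame (SOF)
 * timestamp is available and the exposure time is unknown, approximate
 * the capture time by subtracting half the frame period from the SOF
 * timestamp. All values are in microseconds. */
int64_t guess_capture_time_us(int64_t sof_us, int64_t frame_period_us)
{
	return sof_us - frame_period_us / 2;
}
```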

> > > > If you know them all you can calculate the SOF timestamp. The fewer
> > > > timestamps are available for user programs the better.
> > > > 
> > > > It's another matter then if there are webcams that report these values
> > > > wrong.
> > > 
> > > There most probably are :-)
> > > 
> > > > Then you could get timestamps that are complete garbage. But I guess you
> > > > could compare them to the current monotonic timestamp and detect such
> > > > cases.
> > > >
> > > > > > What's your suggestion; should we use the SOF flag for this or do
> > > > > > you prefer the end-of-frame timestamp instead? I think it'd be quite
> > > > > > nice for drivers to know which one is which without having to guess,
> > > > > > and based on the above start-of-frame comes as close to that
> > > > > > definition as is meaningful.
> > > > > 
> > > > > SOF is better than EOF. Do we need a start-of-capture flag, or could
> > > > > we document SOF as meaning start-of-capture or start-of-reception
> > > > > depending on what the device can do ?
> > > > 
> > > > One possibility is to dedicate a few flags for this; by using three bits
> > > > we'd get eight different timestamps already. But I have to say that
> > > > fewer is better. :-)
> > > 
> > > Does it really need to be a per-buffer flag ? This seems to be a
> > > driver-wide (or at least device-wide) behaviour to me.
> > 
> > Same goes for timestamp clock sources. It was concluded to use buffer flags
> > for those as well.
> 
> Yes, and I don't think I was convinced, so I'm not convinced here either :-)
> 
> > Using a control for the purpose would however require quite non-zero amount
> > of initialisation code from each driver so that would probably need to be
> > sorted out first.
> 
> We could also use a capabilities flag.

Interesting idea. I'm fine with that as well. Hans?
Laurent Pinchart Sept. 6, 2013, 11:05 a.m. UTC | #13
Hi Sakari,

On Thursday 05 September 2013 19:31:30 Sakari Ailus wrote:
> On Sat, Aug 31, 2013 at 11:43:18PM +0200, Laurent Pinchart wrote:
> > On Friday 30 August 2013 19:08:48 Sakari Ailus wrote:
> > > On Fri, Aug 30, 2013 at 01:31:44PM +0200, Laurent Pinchart wrote:
> > > > On Thursday 29 August 2013 14:33:39 Sakari Ailus wrote:
> > > > > On Thu, Aug 29, 2013 at 01:25:05AM +0200, Laurent Pinchart wrote:
> > > > > > On Wednesday 28 August 2013 19:39:19 Sakari Ailus wrote:
> > > > > > > On Wed, Aug 28, 2013 at 06:14:44PM +0200, Laurent Pinchart
> > > > > > > wrote:
> > > > > > > ...
> > > > > > > 
> > > > > > > > > > UVC devices timestamp frames when the frame is captured,
> > > > > > > > > > not when the first pixel is transmitted.
> > > > > > > > > 
> > > > > > > > > I.e. we shouldn't set the SOF flag? "When the frame is
> > > > > > > > > captured" doesn't say much, or almost anything in terms of
> > > > > > > > > *when*. The frames have exposure time and rolling shutter
> > > > > > > > > makes a difference, too.
> > > > > > > > 
> > > > > > > > The UVC 1.1 specification defines the timestamp as
> > > > > > > > 
> > > > > > > > "The source clock time in native deviceclock units when the
> > > > > > > > raw frame capture begins."
> > > > > > > > 
> > > > > > > > What devices do in practice may differ :-)
> > > > > > > 
> > > > > > > I think that this should mean start-of-frame - exposure time.
> > > > > > > I'd really wonder if any practical implementation does that
> > > > > > > however.
> > > > > > 
> > > > > > It's start-of-frame - exposure time - internal delays (UVC webcams
> > > > > > are supposed to report their internal delay value as well).
> > > > > 
> > > > > Do they report it? How about the exposure time?
> > > > 
> > > > It's supposed to be configurable.
> > > 
> > > Is the exposure reported with the frame so it could be used to construct
> > > the per-frame SOF timestamp?
> > 
> > Not when auto-exposure is turned on I'm afraid :-S
> > 
> > I believe that the capture timestamp makes more sense than the SOF
> > timestamp for applications. SOF/EOF are more of a poor man's timestamp in
> > case nothing else is available, but when you want to synchronize multiple
> > audio and/or video streams the capture timestamp is what you're
> > interested in. I don't think converting a capture timestamp to an SOF
> > would be a good idea.
>
> I'm not quite sure of that --- I think the SOF/EOF will be more stable than
> the exposure start which depends on the exposure time. If you're recording a
> video you may want to keep the time between the frames constant.

I can see two main use cases for timestamps. The first one is multi-stream 
synchronization (audio and video, stereo video, ...), the second one is 
playback rate control.

To synchronize media streams you need to timestamp samples with a common 
clock. Timestamps must be correlated to the time at which the sound and/or 
image events occur. If we consider the speed of sound and speed of light as 
negligible (the former could be compensated for if needed, but that's out of 
scope), the time at which the sound or image is produced can be considered as 
equal to the time at which they're captured. Given that we only need to 
synchronize streams here, an offset wouldn't matter, so any clock that is 
synchronized to the capture clock with a fixed offset would do. The SOF event, 
in particular, will do if the capture time and device processing time is 
constant, and if interrupt latencies are kept small enough.. So will the EOF 
event if the transmission time is also constant.

Granted, frames are not captured at a precise point of time, as the sensor 
needs to be exposed for a certain duration. There is thus no such thing as a 
capture time; we instead have a capture interval. However, that's irrelevant 
for multi-video synchronization purposes. It could matter for audio+video 
synchronization though.

Regarding playback rate control, the goal is to render frames at the same rate 
they are captured. If the frame rate isn't constant (for instance because of a 
variable exposure time), then a timestamp is required for every frame. Here 
we care about the difference between timestamps for two consecutive frames, 
and the start of capture timestamp is what will give best results.

Let's consider three frames, A, B and C, captured as follows.


00000000001111111111222222222233333333334444444444555555555566666666667777
01234567890123456789012345678901234567890123456789012345678901234567890123

| --------- A ------------ |      | ----- B ----- |      | ----- C ----- |

On the playback side, we want to display A for a duration of 34. If we 
timestamp the frames with the start of capture time, we will have the 
following timestamps.

A  0
B  34
C  57

B-A = 34, which is the time during which A needs to be displayed.

If we use the end of capture time, we will get

A  27
B  50
C  73

B-A = 23, which is too short.
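The arithmetic of the example can be checked with a short sketch; the timestamps are taken straight from the timeline above, and `display_duration` is just an illustrative name for the consecutive-timestamp difference:

```c
/* Start-of-capture and end-of-capture timestamps for frames A, B and C,
 * from the timeline in the example above. */
static const long capture_start[] = { 0, 34, 57 };
static const long capture_end[]   = { 27, 50, 73 };

/* Display duration of frame i: the difference between the timestamps of
 * frame i+1 and frame i, as used on the playback side. */
long display_duration(const long ts[], int i)
{
	return ts[i + 1] - ts[i];
}
```

With start-of-capture timestamps, A's duration comes out as 34 (the full interval A should be displayed); with end-of-capture timestamps it comes out as 23, which is too short.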

> Nevertheless --- if we don't get such a timestamp from the device this will
> only remain speculation. Applications might be best using e.g. half the
> frame period to get a guesstimate of the differences between the two
> timestamps.

Obviously if the device can't provide the start of capture timestamp we will 
need to use any source of timestamps, but I believe we should aim for start of 
capture as a first class citizen.

> > > > > If you know them all you can calculate the SOF timestamp. The fewer
> > > > > timestamps are available for user programs the better.
> > > > > 
> > > > > It's another matter then if there are webcams that report these
> > > > > values wrong.
> > > > 
> > > > There most probably are :-)
> > > > 
> > > > > Then you could get timestamps that are complete garbage. But I guess
> > > > > you could compare them to the current monotonic timestamp and detect
> > > > > such cases.
> > > > > 
> > > > > > > What's your suggestion; should we use the SOF flag for this or
> > > > > > > do you prefer the end-of-frame timestamp instead? I think it'd
> > > > > > > be quite nice for drivers to know which one is which without
> > > > > > > having to guess, and based on the above start-of-frame comes as
> > > > > > > close to that definition as is meaningful.
> > > > > > 
> > > > > > SOF is better than EOF. Do we need a start-of-capture flag, or
> > > > > > could we document SOF as meaning start-of-capture or start-of-
> > > > > > reception depending on what the device can do ?
> > > > > 
> > > > > One possibility is to dedicate a few flags for this; by using three
> > > > > bits we'd get eight different timestamps already. But I have to say
> > > > > that fewer is better. :-)
> > > > 
> > > > Does it really need to be a per-buffer flag ? This seems to be a
> > > > driver-wide (or at least device-wide) behaviour to me.
> > > 
> > > Same goes for timestamp clock sources. It was concluded to use buffer
> > > flags for those as well.
> > 
> > Yes, and I don't think I was convinced, so I'm not convinced here either
> > :-)
> > 
> > > Using a control for the purpose would however require quite non-zero
> > > amount of initialisation code from each driver so that would probably
> > > need to be sorted out first.
> > 
> > We could also use a capabilities flag.
> 
> Interesting idea. I'm fine that as well. Hans?
Hans Verkuil Dec. 12, 2013, 12:37 p.m. UTC | #14
Sakari asked me to reply to this old thread...

On 09/06/13 13:05, Laurent Pinchart wrote:
> Hi Sakari,
> 
> On Thursday 05 September 2013 19:31:30 Sakari Ailus wrote:
>> On Sat, Aug 31, 2013 at 11:43:18PM +0200, Laurent Pinchart wrote:
>>> On Friday 30 August 2013 19:08:48 Sakari Ailus wrote:
>>>> On Fri, Aug 30, 2013 at 01:31:44PM +0200, Laurent Pinchart wrote:
>>>>> On Thursday 29 August 2013 14:33:39 Sakari Ailus wrote:
>>>>>> On Thu, Aug 29, 2013 at 01:25:05AM +0200, Laurent Pinchart wrote:
>>>>>>> On Wednesday 28 August 2013 19:39:19 Sakari Ailus wrote:
>>>>>>>> On Wed, Aug 28, 2013 at 06:14:44PM +0200, Laurent Pinchart
>>>>>>>> wrote:
>>>>>>>> ...
>>>>>>>>
>>>>>>>>>>> UVC devices timestamp frames when the frame is captured,
>>>>>>>>>>> not when the first pixel is transmitted.
>>>>>>>>>>
>>>>>>>>>> I.e. we shouldn't set the SOF flag? "When the frame is
>>>>>>>>>> captured" doesn't say much, or almost anything in terms of
>>>>>>>>>> *when*. The frames have exposure time and rolling shutter
>>>>>>>>>> makes a difference, too.
>>>>>>>>>
>>>>>>>>> The UVC 1.1 specification defines the timestamp as
>>>>>>>>>
>>>>>>>>> "The source clock time in native deviceclock units when the
>>>>>>>>> raw frame capture begins."
>>>>>>>>>
>>>>>>>>> What devices do in practice may differ :-)
>>>>>>>>
>>>>>>>> I think that this should mean start-of-frame - exposure time.
>>>>>>>> I'd really wonder if any practical implementation does that
>>>>>>>> however.
>>>>>>>
>>>>>>> It's start-of-frame - exposure time - internal delays (UVC webcams
>>>>>>> are supposed to report their internal delay value as well).
>>>>>>
>>>>>> Do they report it? How about the exposure time?
>>>>>
>>>>> It's supposed to be configurable.
>>>>
>>>> Is the exposure reported with the frame so it could be used to construct
>>>> the per-frame SOF timestamp?
>>>
>>> Not when auto-exposure is turned on I'm afraid :-S
>>>
>>> I believe that the capture timestamp makes more sense than the SOF
>>> timestamp for applications. SOF/EOF are more of a poor man's timestamp in
>>> case nothing else is available, but when you want to synchronize multiple
>>> audio and/or video streams the capture timestamp is what you're
>>> interested in. I don't think converting a capture timestamp to an SOF
>>> would be a good idea.
>>
>> I'm not quite sure of that --- I think the SOF/EOF will be more stable than
>> the exposure start which depends on the exposure time. If you're recording a
>> video you may want to keep the time between the frames constant.
> 
> I can see two main use cases for timestamps. The first one is multi-stream 
> synchronization (audio and video, stereo video, ...), the second one is 
> playback rate control.
> 
> To synchronize media streams you need to timestamp samples with a common 
> clock. Timestamps must be correlated to the time at which the sound and/or 
> image events occur. If we consider the speed of sound and speed of light as 
> negligible (the former could be compensated for if needed, but that's out of 
> scope), the time at which the sound or image is produced can be considered as 
> equal to the time at which they're captured. Given that we only need to 
> synchronize streams here, an offset wouldn't matter, so any clock that is 
> synchronized to the capture clock with a fixed offset would do. The SOF event, 
> in particular, will do if the capture time and device processing time is 
> constant, and if interrupt latencies are kept small enough. So will the EOF
> event if the transmission time is also constant.
> 
> Granted, frames are not captured at a precise point of time, as the sensor 
> needs to be exposed for a certain duration. There is thus no such thing as a 
> capture time, we instead have a capture interval. However, that's irrelevant 
> for multi-video synchronization purposes. It could matter for audio+video 
> synchronization though.
> 
> Regarding playback rate control, the goal is to render frames at the same rate 
> they are captured. If the frame rate isn't constant (for instance because of a 
> variable exposure time), then a time stamp is required for every frame. Here 
> we care about the difference between timestamps for two consecutive frames, 
> and the start of capture timestamp is what will give best results.
> 
> Let's consider three frames, A, B and C, captured as follows.
> 
> 
> 00000000001111111111222222222233333333334444444444555555555566666666667777
> 01234567890123456789012345678901234567890123456789012345678901234567890123
> 
> | --------- A ------------ |      | ----- B ----- |      | ----- C ----- |
> 
> On the playback side, we want to display A for a duration of 34. If we 
> timestamp the frames with the start of capture time, we will have the 
> following timestamps.
> 
> A  0
> B  34
> C  57
> 
> B-A = 34, which is the time during which A needs to be displayed.
> 
> If we use the end of capture time, we will get
> 
> A  27
> B  50
> C  73
> 
> B-A = 23, which is too short.
> 
>> Nevertheless --- if we don't get such a timestamp from the device this will
>> only remain speculation. Applications might be best using e.g. half the
>> frame period to get a guesstimate of the differences between the two
>> timestamps.
> 
> Obviously if the device can't provide the start of capture timestamp we will 
> need to use any source of timestamps, but I believe we should aim for start of 
> capture as a first class citizen.
> 
>>>>>> If you know them all you can calculate the SOF timestamp. The fewer
>>>>>> timestamps are available for user programs the better.
>>>>>>
>>>>>> It's another matter then if there are webcams that report these
>>>>>> values wrong.
>>>>>
>>>>> There most probably are :-)
>>>>>
>>>>>> Then you could get timestamps that are complete garbage. But I guess
>>>>>> you could compare them to the current monotonic timestamp and detect
>>>>>> such cases.
>>>>>>
>>>>>>>> What's your suggestion; should we use the SOF flag for this or
>>>>>>>> do you prefer the end-of-frame timestamp instead? I think it'd
>>>>>>>> be quite nice for drivers to know which one is which without
>>>>>>>> having to guess, and based on the above start-of-frame comes as
>>>>>>>> close to that definition as is meaningful.
>>>>>>>
>>>>>>> SOF is better than EOF. Do we need a start-of-capture flag, or
>>>>>>> could we document SOF as meaning start-of-capture or start-of-
>>>>>>> reception depending on what the device can do ?
>>>>>>
>>>>>> One possibility is to dedicate a few flags for this; by using three
>>>>>> bits we'd get eight different timestamps already. But I have to say
>>>>>> that fewer is better. :-)
>>>>>
>>>>> Does it really need to be a per-buffer flag ? This seems to be a
>>>>> driver-wide (or at least device-wide) behaviour to me.
>>>>
>>>> Same goes for timestamp clock sources. It was concluded to use buffer
>>>> flags for those as well.
>>>
>>> Yes, and I don't think I was convinced, so I'm not convinced here either
>>> :-)
>>>
>>>> Using a control for the purpose would however require quite non-zero
>>>> amount of initialisation code from each driver so that would probably
>>>> need to be sorted out first.
>>>
>>> We could also use a capabilities flag.
>>
>> Interesting idea. I'm fine with that as well. Hans?

That would work for uvc, but not in the general case. Depending on the video
routing you might have either SOF or EOF timestamps. Unlikely, I admit, but
I feel keeping this flag in v4l2_buffers is the most generic solution.

Regards,

	Hans
--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Laurent Pinchart Jan. 31, 2014, 3:39 p.m. UTC | #15
Hi Hans,

On Thursday 12 December 2013 13:37:10 Hans Verkuil wrote:
> Sakari asked me to reply to this old thread...

He asked me to reply as well :-)

> > [...]
> >> Interesting idea. I'm fine with that as well. Hans?
> 
> That would work for uvc, but not in the general case. Depending on the video
> routing you might have either SOF or EOF timestamps. Unlikely, I admit, but
> I feel keeping this flag in v4l2_buffers is the most generic solution.

My main concern about this (besides using an extra buffer flags bit, which is a 
scarce resource - but OK, that's not really a big concern) is complexity for 
userspace. Correctly handling buffer timestamps when the timestamp type can 
vary per buffer isn't easy, and most applications will likely implement it 
wrong. I expect most applications to look at the timestamp type of the first 
buffer and use that information for all subsequent buffers. This would defeat 
the point of having per-buffer timestamp types.
Hans Verkuil Jan. 31, 2014, 3:45 p.m. UTC | #16
Hi Laurent,

On 01/31/2014 04:39 PM, Laurent Pinchart wrote:
> Hi Hans,
> 
> On Thursday 12 December 2013 13:37:10 Hans Verkuil wrote:
>> Sakari asked me to reply to this old thread...
> 
> He asked me to reply as well :-)
> 
>> [...]
>>
>> That would work for uvc, but not in the general case. Depending on the video
>> routing you might have either SOF or EOF timestamps. Unlikely, I admit, but
>> I feel keeping this flag in v4l2_buffers is the most generic solution.
> 
> My main concern about this (besides using an extra buffer flags bit, which is a 
> scarce resource - but OK, that's not really a big concern) is complexity for 
> userspace. Correctly handling buffer timestamps when the timestamp type can 
> vary per buffer isn't easy, and most applications will likely implement it 
> wrong. I expect most applications to look at the timestamp type of the first 
> buffer and use that information for all subsequent buffers. This would defeat 
> the point of having per-buffer timestamp types.

How about defining a capability for use with ENUMINPUT/OUTPUT? I agree that this
won't change between buffers, but it is a property of a specific input or output.

There are more than enough bits available in v4l2_input/output to add one for
SOF timestamps.

Regards,

	Hans
Sakari Ailus Jan. 31, 2014, 4:42 p.m. UTC | #17
Hi Hans and Laurent,

On Fri, Jan 31, 2014 at 04:45:56PM +0100, Hans Verkuil wrote:
> How about defining a capability for use with ENUMINPUT/OUTPUT? I agree that this
> won't change between buffers, but it is a property of a specific input or output.

Over 80 characters per line. :-P

> There are more than enough bits available in v4l2_input/output to add one for
> SOF timestamps.

In complex devices with a non-linear media graph inputs and outputs are not
very relevant, and for that reason many drivers do not even implement them.
I'd rather not bind video buffer queues to inputs or outputs.

My personal favourite is still to use controls for the purpose but the
buffer flags come close, too, especially now that we're using them for
timestamp sources.
Hans Verkuil Jan. 31, 2014, 5:21 p.m. UTC | #18
Hi Sakari,

On 01/31/2014 05:42 PM, Sakari Ailus wrote:
> Hi Hans and Laurent,
> 
> On Fri, Jan 31, 2014 at 04:45:56PM +0100, Hans Verkuil wrote:
>> How about defining a capability for use with ENUMINPUT/OUTPUT? I agree that this
>> won't change between buffers, but it is a property of a specific input or output.
> 
> Over 80 characters per line. :-P

Stupid thunderbird doesn't show the column, and I can't enable
automatic word-wrap because that plays hell with patches. Solutions
welcome!

> 
>> There are more than enough bits available in v4l2_input/output to add one for
>> SOF timestamps.
> 
> In complex devices with a non-linear media graph inputs and outputs are not
> very relevant, and for that reason many drivers do not even implement them.
> I'd rather not bind video buffer queues to inputs or outputs.

Then we end up again with buffer flags. It's a property of the selected input
or output, so if you can't/don't want to use that, then it's buffer flags.

Which I like as well, but it's probably useful that the documentation states
that it can only change if the input or output changes as well.

> My personal favourite is still to use controls for the purpose but the
> buffer flags come close, too, especially now that we're using them for
> timestamp sources.

Laurent, can we please end this discussion? It makes perfect sense to store
this information as a BUF_FLAG IMHO. You can just do a QUERYBUF once after
you called REQBUFS and you know what you have to deal with.

Regards,

	Hans
Sakari Ailus Feb. 1, 2014, 9:06 a.m. UTC | #19
Hi Hans,

Hans Verkuil wrote:
> Hi Sakari,
> 
> On 01/31/2014 05:42 PM, Sakari Ailus wrote:
>> Hi Hans and Laurent,
>>
>> On Fri, Jan 31, 2014 at 04:45:56PM +0100, Hans Verkuil wrote:
>>> How about defining a capability for use with ENUMINPUT/OUTPUT? I agree that this
>>> won't change between buffers, but it is a property of a specific input or output.
>>
>> Over 80 characters per line. :-P
> 
> Stupid thunderbird doesn't show the column, and I can't enable
> automatic word-wrap because that plays hell with patches. Solutions
> welcome!

Does it? I've used Thunderbird (well, mostly just tried) but I've
thought it behaves the same way as Seamonkey: wrapping only applies to
newly written lines. My $EDITOR does the same when I use Mutt.
Laurent Pinchart Feb. 2, 2014, 9:27 a.m. UTC | #20
Hi Hans,

On Friday 31 January 2014 18:21:15 Hans Verkuil wrote:
> On 01/31/2014 05:42 PM, Sakari Ailus wrote:
> > On Fri, Jan 31, 2014 at 04:45:56PM +0100, Hans Verkuil wrote:
> >>
> >> How about defining a capability for use with ENUMINPUT/OUTPUT? I agree
> >> that this won't change between buffers, but it is a property of a
> >> specific input or output.
> >
> > Over 80 characters per line. :-P
> 
> Stupid thunderbird doesn't show the column, and I can't enable
> automatic word-wrap because that plays hell with patches. Solutions
> welcome!
> 
> >> There are more than enough bits available in v4l2_input/output to add one
> >> for SOF timestamps.
> > 
> > In complex devices with a non-linear media graph inputs and outputs are
> > not very relevant, and for that reason many drivers do not even implement
> > them. I'd rather not bind video buffer queues to inputs or outputs.
> 
> Then we end up again with buffer flags. It's a property of the selected
> input or output, so if you can't/don't want to use that, then it's buffer
> flags.
> 
> Which I like as well, but it's probably useful that the documentation states
> that it can only change if the input or output changes as well.
> 
> > My personal favourite is still to use controls for the purpose but the
> > buffer flags come close, too, especially now that we're using them for
> > timestamp sources.
> 
> Laurent, can we please end this discussion? It makes perfect sense to store
> this information as a BUF_FLAG IMHO. You can just do a QUERYBUF once after
> you called REQBUFS and you know what you have to deal with.

I'm OK with a buffer flag. Can we state in the documentation that the same 
timestamp flag will be used for all buffers and that QUERYBUF can be used to 
query it before the first buffer gets dequeued ?
Sakari Ailus Feb. 5, 2014, 8:13 a.m. UTC | #21
On Sun, Feb 02, 2014 at 10:27:49AM +0100, Laurent Pinchart wrote:
> Hi Hans,
> 
> > [...]
> 
> I'm OK with a buffer flag. Can we state in the documentation that the same 
> timestamp flag will be used for all buffers and that QUERYBUF can be used to 
> query it before the first buffer gets dequeued ?

I think that'd be reasonable. Otherwise it'd be quite painful to use. I'll
resend.
diff mbox

Patch

diff --git a/Documentation/DocBook/media/v4l/io.xml b/Documentation/DocBook/media/v4l/io.xml
index 2c155cc..3aee210 100644
--- a/Documentation/DocBook/media/v4l/io.xml
+++ b/Documentation/DocBook/media/v4l/io.xml
@@ -654,11 +654,12 @@  plane, are stored in struct <structname>v4l2_plane</structname> instead.
 In that case, struct <structname>v4l2_buffer</structname> contains an array of
 plane structures.</para>
 
-      <para>For timestamp types that are sampled from the system clock
-(V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC) it is guaranteed that the timestamp is
-taken after the complete frame has been received (or transmitted in
-case of video output devices). For other kinds of
-timestamps this may vary depending on the driver.</para>
+      <para>The timestamp is taken once the complete frame has been
+received (or transmitted for output devices) unless the
+<constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant> buffer flag is set.
+If <constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant> is set, the
+timestamp is taken when the first pixel of the frame is received
+(or transmitted).</para>
 
     <table frame="none" pgwide="1" id="v4l2-buffer">
       <title>struct <structname>v4l2_buffer</structname></title>
@@ -1120,6 +1121,14 @@  in which case caches have not been used.</entry>
 	    <entry>The CAPTURE buffer timestamp has been taken from the
 	    corresponding OUTPUT buffer. This flag applies only to mem2mem devices.</entry>
 	  </row>
+	  <row>
+	    <entry><constant>V4L2_BUF_FLAG_TIMESTAMP_SOF</constant></entry>
+	    <entry>0x00010000</entry>
+	    <entry>The buffer timestamp has been taken when the first
+	    pixel is received (or transmitted for output devices). If
+	    this flag is not set, the timestamp is taken when the
+	    entire frame has been received (or transmitted).</entry>
+	  </row>
 	</tbody>
       </tgroup>
     </table>
diff --git a/drivers/media/usb/uvc/uvc_queue.c b/drivers/media/usb/uvc/uvc_queue.c
index cd962be..0d80512 100644
--- a/drivers/media/usb/uvc/uvc_queue.c
+++ b/drivers/media/usb/uvc/uvc_queue.c
@@ -149,7 +149,8 @@  int uvc_queue_init(struct uvc_video_queue *queue, enum v4l2_buf_type type,
 	queue->queue.buf_struct_size = sizeof(struct uvc_buffer);
 	queue->queue.ops = &uvc_queue_qops;
 	queue->queue.mem_ops = &vb2_vmalloc_memops;
-	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+	queue->queue.timestamp_type = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC
+		| V4L2_BUF_FLAG_TIMESTAMP_SOF;
 	ret = vb2_queue_init(&queue->queue);
 	if (ret)
 		return ret;
diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h
index 6781258..033efc7 100644
--- a/include/media/videobuf2-core.h
+++ b/include/media/videobuf2-core.h
@@ -307,6 +307,7 @@  struct v4l2_fh;
  * @buf_struct_size: size of the driver-specific buffer structure;
  *		"0" indicates the driver doesn't want to use a custom buffer
  *		structure type, so sizeof(struct vb2_buffer) will is used
+ * @timestamp_type: Timestamp flags; V4L2_BUF_FLAG_TIMESTAMP_*
  * @gfp_flags:	additional gfp flags used when allocating the buffers.
  *		Typically this is 0, but it may be e.g. GFP_DMA or __GFP_DMA32
  *		to force the buffer allocation to a specific memory zone.
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index 691077d..c57765e 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -695,6 +695,16 @@  struct v4l2_buffer {
 #define V4L2_BUF_FLAG_TIMESTAMP_UNKNOWN		0x00000000
 #define V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC	0x00002000
 #define V4L2_BUF_FLAG_TIMESTAMP_COPY		0x00004000
+/*
+ * Timestamp taken once the first pixel is received (or transmitted).
+ * If the flag is not set the buffer timestamp is taken at the end of
+ * the frame. This is not a timestamp type.
+ *
+ * In general drivers should not use this flag if the end-of-frame
+ * timestamp is as good as the start-of-frame one; the
+ * V4L2_EVENT_FRAME_SYNC event should be used in that case instead.
+ */
+#define V4L2_BUF_FLAG_TIMESTAMP_SOF		0x00010000
 
 /**
  * struct v4l2_exportbuffer - export of video buffer as DMABUF file descriptor