Message ID | 1364076274-726-1-git-send-email-sakari.ailus@iki.fi (mailing list archive) |
---|---|
State | New, archived |
On Sun, Mar 24, 2013 at 12:04:34AM +0200, Sakari Ailus wrote: > Document that monotonic timestamps are taken after the corresponding frame > has been received, not when the reception has begun. This corresponds to the > reality of current drivers: the timestamp is naturally taken when the > hardware triggers an interrupt to tell the driver to handle the received > frame. > > Remove the note on timestamp accurary as it is fairly subjective what is > actually an unstable timestamp. > > Also remove explanation that output buffer timestamps can be used to delay > outputting a frame. > > Remove the footnote saying we always use realtime clock. Ping.
On Sat March 23 2013 23:04:34 Sakari Ailus wrote: > Document that monotonic timestamps are taken after the corresponding frame > has been received, not when the reception has begun. This corresponds to the > reality of current drivers: the timestamp is naturally taken when the > hardware triggers an interrupt to tell the driver to handle the received > frame. > > Remove the note on timestamp accurary as it is fairly subjective what is > actually an unstable timestamp. > > Also remove explanation that output buffer timestamps can be used to delay > outputting a frame. > > Remove the footnote saying we always use realtime clock. > > Signed-off-by: Sakari Ailus <sakari.ailus@iki.fi> Sorry for the delay, for some reason this patch wasn't picked up by patchwork. > --- > Hi all, > > This is the second version of the patch fixing timestamp behaviour > documentation. I've tried to address the comments I've received albeit I > don't think there was a definitive conclusion on all the trails of > discussion. What has changed since v1 is: > > - Removed discussion on timestamp stability. > > - Removed notes that timestamps on output buffers define when frames will be > displayed. It appears no driver has ever implemented this, or at least > does not implement this now. > > - Monotonic time is not affected by harms that the wall clock time is > subjected to. Remove notes on that. > > Documentation/DocBook/media/v4l/io.xml | 47 ++++++-------------------------- > 1 file changed, 8 insertions(+), 39 deletions(-) > > diff --git a/Documentation/DocBook/media/v4l/io.xml b/Documentation/DocBook/media/v4l/io.xml > index e6c5855..46d5a41 100644 > --- a/Documentation/DocBook/media/v4l/io.xml > +++ b/Documentation/DocBook/media/v4l/io.xml > @@ -654,38 +654,11 @@ plane, are stored in struct <structname>v4l2_plane</structname> instead. > In that case, struct <structname>v4l2_buffer</structname> contains an array of > plane structures.</para> > > - <para>Nominally timestamps refer to the first data byte transmitted. > -In practice however the wide range of hardware covered by the V4L2 API > -limits timestamp accuracy. Often an interrupt routine will > -sample the system clock shortly after the field or frame was stored > -completely in memory. So applications must expect a constant > -difference up to one field or frame period plus a small (few scan > -lines) random error. The delay and error can be much > -larger due to compression or transmission over an external bus when > -the frames are not properly stamped by the sender. This is frequently > -the case with USB cameras. Here timestamps refer to the instant the > -field or frame was received by the driver, not the capture time. These > -devices identify by not enumerating any video standards, see <xref > -linkend="standard" />.</para> > - > - <para>Similar limitations apply to output timestamps. Typically > -the video hardware locks to a clock controlling the video timing, the > -horizontal and vertical synchronization pulses. At some point in the > -line sequence, possibly the vertical blanking, an interrupt routine > -samples the system clock, compares against the timestamp and programs > -the hardware to repeat the previous field or frame, or to display the > -buffer contents.</para> > - > - <para>Apart of limitations of the video device and natural > -inaccuracies of all clocks, it should be noted system time itself is > -not perfectly stable. 
It can be affected by power saving cycles, > -warped to insert leap seconds, or even turned back or forth by the > -system administrator affecting long term measurements. <footnote> > - <para>Since no other Linux multimedia > -API supports unadjusted time it would be foolish to introduce here. We > -must use a universally supported clock to synchronize different media, > -hence time of day.</para> > - </footnote></para> > + <para>On timestamp types that are sampled from the system clock 'On' -> 'For' > +(V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC) it is guaranteed that the timestamp is > +taken after the complete frame has been received (or transmitted in > +case of video output devices). For other kinds of > +timestamps this may vary depending on the driver.</para> > > <table frame="none" pgwide="1" id="v4l2-buffer"> > <title>struct <structname>v4l2_buffer</structname></title> > @@ -745,13 +718,9 @@ applications when an output stream.</entry> > byte was captured, as returned by the > <function>clock_gettime()</function> function for the relevant > clock id; see <constant>V4L2_BUF_FLAG_TIMESTAMP_*</constant> in > - <xref linkend="buffer-flags" />. For output streams the data > - will not be displayed before this time, secondary to the nominal > - frame rate determined by the current video standard in enqueued > - order. Applications can for example zero this field to display > - frames as soon as possible. The driver stores the time at which > - the first data byte was actually sent out in the > - <structfield>timestamp</structfield> field. This permits > + <xref linkend="buffer-flags" />. For output streams he driver 'he' -> 'the' > + stores the time at which the first data byte was actually sent out > + in the <structfield>timestamp</structfield> field. This permits Not true: the timestamp is taken after the whole frame was transmitted. Note that the 'timestamp' field documentation still says that it is the timestamp of the first data byte for capture as well, that's also wrong. > applications to monitor the drift between the video and system > clock.</para></entry> > </row> > Regards, Hans -- To unsubscribe from this list: send the line "unsubscribe linux-media" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Hi Hans. On Fri, Jun 07, 2013 at 05:21:52PM +0200, Hans Verkuil wrote: > On Sat March 23 2013 23:04:34 Sakari Ailus wrote: > > Document that monotonic timestamps are taken after the corresponding frame > > has been received, not when the reception has begun. This corresponds to the > > reality of current drivers: the timestamp is naturally taken when the > > hardware triggers an interrupt to tell the driver to handle the received > > frame. > > > > Remove the note on timestamp accurary as it is fairly subjective what is > > actually an unstable timestamp. > > > > Also remove explanation that output buffer timestamps can be used to delay > > outputting a frame. > > > > Remove the footnote saying we always use realtime clock. > > > > Signed-off-by: Sakari Ailus <sakari.ailus@iki.fi> > > Sorry for the delay, for some reason this patch wasn't picked up by patchwork. No problem --- this wasn't urgent anyway. And thanks for your comments! > > --- > > Hi all, > > > > This is the second version of the patch fixing timestamp behaviour > > documentation. I've tried to address the comments I've received albeit I > > don't think there was a definitive conclusion on all the trails of > > discussion. What has changed since v1 is: > > > > - Removed discussion on timestamp stability. > > > > - Removed notes that timestamps on output buffers define when frames will be > > displayed. It appears no driver has ever implemented this, or at least > > does not implement this now. > > > > - Monotonic time is not affected by harms that the wall clock time is > > subjected to. Remove notes on that. > > > > Documentation/DocBook/media/v4l/io.xml | 47 ++++++-------------------------- > > 1 file changed, 8 insertions(+), 39 deletions(-) > > > > diff --git a/Documentation/DocBook/media/v4l/io.xml b/Documentation/DocBook/media/v4l/io.xml > > index e6c5855..46d5a41 100644 > > --- a/Documentation/DocBook/media/v4l/io.xml > > +++ b/Documentation/DocBook/media/v4l/io.xml > > @@ -654,38 +654,11 @@ plane, are stored in struct <structname>v4l2_plane</structname> instead. > > In that case, struct <structname>v4l2_buffer</structname> contains an array of > > plane structures.</para> > > > > - <para>Nominally timestamps refer to the first data byte transmitted. > > -In practice however the wide range of hardware covered by the V4L2 API > > -limits timestamp accuracy. Often an interrupt routine will > > -sample the system clock shortly after the field or frame was stored > > -completely in memory. So applications must expect a constant > > -difference up to one field or frame period plus a small (few scan > > -lines) random error. The delay and error can be much > > -larger due to compression or transmission over an external bus when > > -the frames are not properly stamped by the sender. This is frequently > > -the case with USB cameras. Here timestamps refer to the instant the > > -field or frame was received by the driver, not the capture time. These > > -devices identify by not enumerating any video standards, see <xref > > -linkend="standard" />.</para> > > - > > - <para>Similar limitations apply to output timestamps. Typically > > -the video hardware locks to a clock controlling the video timing, the > > -horizontal and vertical synchronization pulses. 
At some point in the > > -line sequence, possibly the vertical blanking, an interrupt routine > > -samples the system clock, compares against the timestamp and programs > > -the hardware to repeat the previous field or frame, or to display the > > -buffer contents.</para> > > - > > - <para>Apart of limitations of the video device and natural > > -inaccuracies of all clocks, it should be noted system time itself is > > -not perfectly stable. It can be affected by power saving cycles, > > -warped to insert leap seconds, or even turned back or forth by the > > -system administrator affecting long term measurements. <footnote> > > - <para>Since no other Linux multimedia > > -API supports unadjusted time it would be foolish to introduce here. We > > -must use a universally supported clock to synchronize different media, > > -hence time of day.</para> > > - </footnote></para> > > + <para>On timestamp types that are sampled from the system clock > > 'On' -> 'For' Fixed. > > +(V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC) it is guaranteed that the timestamp is > > +taken after the complete frame has been received (or transmitted in > > +case of video output devices). For other kinds of > > +timestamps this may vary depending on the driver.</para> > > > > <table frame="none" pgwide="1" id="v4l2-buffer"> > > <title>struct <structname>v4l2_buffer</structname></title> > > @@ -745,13 +718,9 @@ applications when an output stream.</entry> > > byte was captured, as returned by the > > <function>clock_gettime()</function> function for the relevant > > clock id; see <constant>V4L2_BUF_FLAG_TIMESTAMP_*</constant> in > > - <xref linkend="buffer-flags" />. For output streams the data > > - will not be displayed before this time, secondary to the nominal > > - frame rate determined by the current video standard in enqueued > > - order. Applications can for example zero this field to display > > - frames as soon as possible. The driver stores the time at which > > - the first data byte was actually sent out in the > > - <structfield>timestamp</structfield> field. This permits > > + <xref linkend="buffer-flags" />. For output streams he driver > > 'he' -> 'the' Fixed. > > + stores the time at which the first data byte was actually sent out > > + in the <structfield>timestamp</structfield> field. This permits > > Not true: the timestamp is taken after the whole frame was transmitted. I've mostly done capture; so this is for OUTPUT, too..? Well --- it makes sense. I'll change that. > Note that the 'timestamp' field documentation still says that it is the > timestamp of the first data byte for capture as well, that's also wrong. I can't figure out how did I manage to miss that --- it was the primary reason for this patch to exist! Fixed now (s/time when the first/time right after the last/). I'll send the new version shortly.
Hi Hans, On Friday 07 June 2013 17:21:52 Hans Verkuil wrote: > On Sat March 23 2013 23:04:34 Sakari Ailus wrote: > > Document that monotonic timestamps are taken after the corresponding frame > > has been received, not when the reception has begun. This corresponds to > > the reality of current drivers: the timestamp is naturally taken when the > > hardware triggers an interrupt to tell the driver to handle the received > > frame. > > > > Remove the note on timestamp accurary as it is fairly subjective what is > > actually an unstable timestamp. > > > > Also remove explanation that output buffer timestamps can be used to delay > > outputting a frame. > > > > Remove the footnote saying we always use realtime clock. > > > > Signed-off-by: Sakari Ailus <sakari.ailus@iki.fi> > > Sorry for the delay, for some reason this patch wasn't picked up by > patchwork. > > --- > > Hi all, > > > > This is the second version of the patch fixing timestamp behaviour > > documentation. I've tried to address the comments I've received albeit I > > don't think there was a definitive conclusion on all the trails of > > discussion. What has changed since v1 is: > > > > - Removed discussion on timestamp stability. > > > > - Removed notes that timestamps on output buffers define when frames will > > be displayed. It appears no driver has ever implemented this, or at > > least does not implement this now. > > > > - Monotonic time is not affected by harms that the wall clock time is > > > > subjected to. Remove notes on that. > > > > Documentation/DocBook/media/v4l/io.xml | 47 +++++---------------------- > > 1 file changed, 8 insertions(+), 39 deletions(-) > > > > diff --git a/Documentation/DocBook/media/v4l/io.xml > > b/Documentation/DocBook/media/v4l/io.xml index e6c5855..46d5a41 100644 > > --- a/Documentation/DocBook/media/v4l/io.xml > > +++ b/Documentation/DocBook/media/v4l/io.xml [snip] > > @@ -745,13 +718,9 @@ applications when an output stream.</entry> > > > > byte was captured, as returned by the > > <function>clock_gettime()</function> function for the relevant > > clock id; see <constant>V4L2_BUF_FLAG_TIMESTAMP_*</constant> in > > > > - <xref linkend="buffer-flags" />. For output streams the data > > - will not be displayed before this time, secondary to the nominal > > - frame rate determined by the current video standard in enqueued > > - order. Applications can for example zero this field to display > > - frames as soon as possible. The driver stores the time at which > > - the first data byte was actually sent out in the > > - <structfield>timestamp</structfield> field. This permits > > + <xref linkend="buffer-flags" />. For output streams he driver > > 'he' -> 'the' > > > + stores the time at which the first data byte was actually sent out > > + in the <structfield>timestamp</structfield> field. This permits > > Not true: the timestamp is taken after the whole frame was transmitted. > > Note that the 'timestamp' field documentation still says that it is the > timestamp of the first data byte for capture as well, that's also wrong. I know we've already discussed this, but what about devices, such as uvcvideo, that can provide the time stamp at which the image has been captured ? I don't think it would be worth it making this configurable, or even reporting the information to userspace, but shouldn't we give some degree of freedom to drivers here ? > > applications to monitor the drift between the video and system > > clock.</para></entry> > > > > </row>
Hi Laurent, On Sat, Jun 08, 2013 at 08:59:43AM +0200, Laurent Pinchart wrote: > On Friday 07 June 2013 17:21:52 Hans Verkuil wrote: > > On Sat March 23 2013 23:04:34 Sakari Ailus wrote: > > > Document that monotonic timestamps are taken after the corresponding frame > > > has been received, not when the reception has begun. This corresponds to > > > the reality of current drivers: the timestamp is naturally taken when the > > > hardware triggers an interrupt to tell the driver to handle the received > > > frame. > > > > > > Remove the note on timestamp accurary as it is fairly subjective what is > > > actually an unstable timestamp. > > > > > > Also remove explanation that output buffer timestamps can be used to delay > > > outputting a frame. > > > > > > Remove the footnote saying we always use realtime clock. > > > > > > Signed-off-by: Sakari Ailus <sakari.ailus@iki.fi> > > > > Sorry for the delay, for some reason this patch wasn't picked up by > > patchwork. > > > --- > > > Hi all, > > > > > > This is the second version of the patch fixing timestamp behaviour > > > documentation. I've tried to address the comments I've received albeit I > > > don't think there was a definitive conclusion on all the trails of > > > discussion. What has changed since v1 is: > > > > > > - Removed discussion on timestamp stability. > > > > > > - Removed notes that timestamps on output buffers define when frames will > > > be displayed. It appears no driver has ever implemented this, or at > > > least does not implement this now. > > > > > > - Monotonic time is not affected by harms that the wall clock time is > > > > > > subjected to. Remove notes on that. > > > > > > Documentation/DocBook/media/v4l/io.xml | 47 +++++---------------------- > > > 1 file changed, 8 insertions(+), 39 deletions(-) > > > > > > diff --git a/Documentation/DocBook/media/v4l/io.xml > > > b/Documentation/DocBook/media/v4l/io.xml index e6c5855..46d5a41 100644 > > > --- a/Documentation/DocBook/media/v4l/io.xml > > > +++ b/Documentation/DocBook/media/v4l/io.xml > > [snip] > > > > @@ -745,13 +718,9 @@ applications when an output stream.</entry> > > > > > > byte was captured, as returned by the > > > <function>clock_gettime()</function> function for the relevant > > > clock id; see <constant>V4L2_BUF_FLAG_TIMESTAMP_*</constant> in > > > > > > - <xref linkend="buffer-flags" />. For output streams the data > > > - will not be displayed before this time, secondary to the nominal > > > - frame rate determined by the current video standard in enqueued > > > - order. Applications can for example zero this field to display > > > - frames as soon as possible. The driver stores the time at which > > > - the first data byte was actually sent out in the > > > - <structfield>timestamp</structfield> field. This permits > > > + <xref linkend="buffer-flags" />. For output streams he driver > > > > 'he' -> 'the' > > > > > + stores the time at which the first data byte was actually sent out > > > + in the <structfield>timestamp</structfield> field. This permits > > > > Not true: the timestamp is taken after the whole frame was transmitted. > > > > Note that the 'timestamp' field documentation still says that it is the > > timestamp of the first data byte for capture as well, that's also wrong. > > I know we've already discussed this, but what about devices, such as uvcvideo, > that can provide the time stamp at which the image has been captured ? 
I don't > think it would be worth it making this configurable, or even reporting the > information to userspace, but shouldn't we give some degree of freedom to > drivers here ? Hmm. That's a good question --- if we allow variation then we preferably should also provide a way for applications to know which case is which. Could the uvcvideo timestamps be meaningfully converted to the frame end time instead? I'd suppose that a frame rate dependent constant would suffice. However, how to calculate this I don't know.
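One way the "frame rate dependent constant" might be approximated, purely speculatively: add the nominal frame period from VIDIOC_G_PARM to a start-of-frame timestamp. The helper below is hypothetical and ignores blanking, exposure and bus transfer latency, so it is only a rough sketch:

/* Hypothetical sketch: shift a start-of-frame timestamp towards end-of-frame
 * by the nominal frame period reported by VIDIOC_G_PARM. */
#include <sys/ioctl.h>
#include <sys/time.h>
#include <linux/videodev2.h>

static void sof_to_approx_eof(int fd, struct timeval *ts)
{
	struct v4l2_streamparm parm = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
	const struct v4l2_fract *tpf = &parm.parm.capture.timeperframe;
	unsigned long long usec;

	if (ioctl(fd, VIDIOC_G_PARM, &parm) < 0 ||
	    !(parm.parm.capture.capability & V4L2_CAP_TIMEPERFRAME) ||
	    !tpf->denominator)
		return;	/* no usable nominal frame period */

	/* Frame period in microseconds, added with carry into the timeval. */
	usec = 1000000ULL * tpf->numerator / tpf->denominator;
	ts->tv_usec += usec % 1000000;
	ts->tv_sec += usec / 1000000 + ts->tv_usec / 1000000;
	ts->tv_usec %= 1000000;
}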
Hi Sakari, On Saturday 08 June 2013 19:31:43 Sakari Ailus wrote: > On Sat, Jun 08, 2013 at 08:59:43AM +0200, Laurent Pinchart wrote: > > On Friday 07 June 2013 17:21:52 Hans Verkuil wrote: > > > On Sat March 23 2013 23:04:34 Sakari Ailus wrote: > > > > Document that monotonic timestamps are taken after the corresponding > > > > frame has been received, not when the reception has begun. This > > > > corresponds to the reality of current drivers: the timestamp is > > > > naturally taken when the hardware triggers an interrupt to tell the > > > > driver to handle the received frame. > > > > > > > > Remove the note on timestamp accurary as it is fairly subjective what > > > > is actually an unstable timestamp. > > > > > > > > Also remove explanation that output buffer timestamps can be used to > > > > delay outputting a frame. > > > > > > > > Remove the footnote saying we always use realtime clock. > > > > > > > > Signed-off-by: Sakari Ailus <sakari.ailus@iki.fi> > > > > > > Sorry for the delay, for some reason this patch wasn't picked up by > > > patchwork. > > > > > > > --- > > > > Hi all, > > > > > > > > This is the second version of the patch fixing timestamp behaviour > > > > documentation. I've tried to address the comments I've received albeit > > > > I don't think there was a definitive conclusion on all the trails of > > > > discussion. What has changed since v1 is: > > > > > > > > - Removed discussion on timestamp stability. > > > > > > > > - Removed notes that timestamps on output buffers define when frames > > > > will be displayed. It appears no driver has ever implemented this, > > > > or at least does not implement this now. > > > > > > > > - Monotonic time is not affected by harms that the wall clock time is > > > > > > > > subjected to. Remove notes on that. > > > > > > > > Documentation/DocBook/media/v4l/io.xml | 47 ++++------------------- > > > > 1 file changed, 8 insertions(+), 39 deletions(-) > > > > > > > > diff --git a/Documentation/DocBook/media/v4l/io.xml > > > > b/Documentation/DocBook/media/v4l/io.xml index e6c5855..46d5a41 100644 > > > > --- a/Documentation/DocBook/media/v4l/io.xml > > > > +++ b/Documentation/DocBook/media/v4l/io.xml > > > > [snip] > > > > > > @@ -745,13 +718,9 @@ applications when an output stream.</entry> > > > > > > > > byte was captured, as returned by the > > > > <function>clock_gettime()</function> function for the relevant > > > > clock id; see <constant>V4L2_BUF_FLAG_TIMESTAMP_*</constant> in > > > > > > > > - <xref linkend="buffer-flags" />. For output streams the data > > > > - will not be displayed before this time, secondary to the nominal > > > > - frame rate determined by the current video standard in enqueued > > > > - order. Applications can for example zero this field to display > > > > - frames as soon as possible. The driver stores the time at which > > > > - the first data byte was actually sent out in the > > > > - <structfield>timestamp</structfield> field. This permits > > > > + <xref linkend="buffer-flags" />. For output streams he driver > > > > > > 'he' -> 'the' > > > > > > > + stores the time at which the first data byte was actually sent > > > > out > > > > + in the <structfield>timestamp</structfield> field. This permits > > > > > > Not true: the timestamp is taken after the whole frame was transmitted. > > > > > > Note that the 'timestamp' field documentation still says that it is the > > > timestamp of the first data byte for capture as well, that's also wrong. 
> > > > I know we've already discussed this, but what about devices, such as > > uvcvideo, that can provide the time stamp at which the image has been > > captured ? I don't think it would be worth it making this configurable, > > or even reporting the information to userspace, but shouldn't we give > > some degree of freedom to drivers here ? > > Hmm. That's a good question --- if we allow variation then we preferably > should also provide a way for applications to know which case is which. > > Could the uvcvideo timestamps be meaningfully converted to the frame end > time instead? I'd suppose that a frame rate dependent constant would > suffice. However, how to calculate this I don't know. I don't think that's a good idea. The time at which the last byte of the image is received is meaningless to applications. What they care about, for synchronization purposes, is the time at which the image has been captured. I'm wondering if we really need to care for now. I would be inclined to leave it as-is until an application runs into a real issue related to timestamps.
Hi Laurent, Laurent Pinchart wrote: ... >>>>> @@ -745,13 +718,9 @@ applications when an output stream.</entry> >>>>> >>>>> byte was captured, as returned by the >>>>> <function>clock_gettime()</function> function for the relevant >>>>> clock id; see <constant>V4L2_BUF_FLAG_TIMESTAMP_*</constant> in >>>>> >>>>> - <xref linkend="buffer-flags" />. For output streams the data >>>>> - will not be displayed before this time, secondary to the nominal >>>>> - frame rate determined by the current video standard in enqueued >>>>> - order. Applications can for example zero this field to display >>>>> - frames as soon as possible. The driver stores the time at which >>>>> - the first data byte was actually sent out in the >>>>> - <structfield>timestamp</structfield> field. This permits >>>>> + <xref linkend="buffer-flags" />. For output streams he driver >>>> >>>> 'he' -> 'the' >>>> >>>>> + stores the time at which the first data byte was actually sent >>>>> out >>>>> + in the <structfield>timestamp</structfield> field. This permits >>>> >>>> Not true: the timestamp is taken after the whole frame was transmitted. >>>> >>>> Note that the 'timestamp' field documentation still says that it is the >>>> timestamp of the first data byte for capture as well, that's also wrong. >>> >>> I know we've already discussed this, but what about devices, such as >>> uvcvideo, that can provide the time stamp at which the image has been >>> captured ? I don't think it would be worth it making this configurable, >>> or even reporting the information to userspace, but shouldn't we give >>> some degree of freedom to drivers here ? >> >> Hmm. That's a good question --- if we allow variation then we preferrably >> should also provide a way for applications to know which case is which. >> >> Could the uvcvideo timestamps be meaningfully converted to the frame end >> time instead? I'd suppose that a frame rate dependent constant would >> suffice. However, how to calculate this I don't know. > > I don't think that's a good idea. The time at which the last byte of the image > is received is meaningless to applications. What they care about, for > synchronization purpose, is the time at which the image has been captured. > > I'm wondering if we really need to care for now. I would be enclined to leave > it as-is until an application runs into a real issue related to timestamps. What do you mean by "image has been captured"? Which part of it? What I was thinking was the possibility that we could change the definition so that it'd be applicable to both cases: the time the whole image is fully in the system memory is of secondary importance in both cases anyway. As on embedded systems the time between the last pixel of the image is fully captured to it being in the host system memory is very, very short the two can be considered the same in most situations. I wonder if this change would have any undesirable consequences.
On Mon June 10 2013 00:35:44 Sakari Ailus wrote: > Hi Laurent, > > Laurent Pinchart wrote: > ... > >>>>> @@ -745,13 +718,9 @@ applications when an output stream.</entry> > >>>>> > >>>>> byte was captured, as returned by the > >>>>> <function>clock_gettime()</function> function for the relevant > >>>>> clock id; see <constant>V4L2_BUF_FLAG_TIMESTAMP_*</constant> in > >>>>> > >>>>> - <xref linkend="buffer-flags" />. For output streams the data > >>>>> - will not be displayed before this time, secondary to the nominal > >>>>> - frame rate determined by the current video standard in enqueued > >>>>> - order. Applications can for example zero this field to display > >>>>> - frames as soon as possible. The driver stores the time at which > >>>>> - the first data byte was actually sent out in the > >>>>> - <structfield>timestamp</structfield> field. This permits > >>>>> + <xref linkend="buffer-flags" />. For output streams he driver > >>>> > >>>> 'he' -> 'the' > >>>> > >>>>> + stores the time at which the first data byte was actually sent > >>>>> out > >>>>> + in the <structfield>timestamp</structfield> field. This permits > >>>> > >>>> Not true: the timestamp is taken after the whole frame was transmitted. > >>>> > >>>> Note that the 'timestamp' field documentation still says that it is the > >>>> timestamp of the first data byte for capture as well, that's also wrong. > >>> > >>> I know we've already discussed this, but what about devices, such as > >>> uvcvideo, that can provide the time stamp at which the image has been > >>> captured ? I don't think it would be worth it making this configurable, > >>> or even reporting the information to userspace, but shouldn't we give > >>> some degree of freedom to drivers here ? > >> > >> Hmm. That's a good question --- if we allow variation then we preferrably > >> should also provide a way for applications to know which case is which. > >> > >> Could the uvcvideo timestamps be meaningfully converted to the frame end > >> time instead? I'd suppose that a frame rate dependent constant would > >> suffice. However, how to calculate this I don't know. > > > > I don't think that's a good idea. The time at which the last byte of the image > > is received is meaningless to applications. What they care about, for > > synchronization purpose, is the time at which the image has been captured. > > > > I'm wondering if we really need to care for now. I would be enclined to leave > > it as-is until an application runs into a real issue related to timestamps. > > What do you mean by "image has been captured"? Which part of it? > > What I was thinking was the possibility that we could change the > definition so that it'd be applicable to both cases: the time the whole > image is fully in the system memory is of secondary importance in both > cases anyway. As on embedded systems the time between the last pixel of > the image is fully captured to it being in the host system memory is > very, very short the two can be considered the same in most situations. > > I wonder if this change would have any undesirable consequences. I really think we need to add a buffer flag that states whether the timestamp is taken at the start or at the end of the frame. For video receivers the timestamp at the end of the frame is the logical choice and this is what almost all drivers do. Only for sensors can the start of the frame be more suitable since the framerate can be variable. 
/* Timestamp is taken at the start-of-frame, not the end-of-frame */
#define V4L2_BUF_FLAG_TIMESTAMP_SOF	0x0200

I think it is a safe bet that we won't see 'middle of frame' timestamps, so let's just add this flag.

Regards,

	Hans
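If the flag is added as proposed, an application could then tell the two cases apart per buffer. A small sketch against the proposed definition above (the flag and its value are still only a proposal, so it is repeated here purely for illustration):

#include <linux/videodev2.h>

#ifndef V4L2_BUF_FLAG_TIMESTAMP_SOF
#define V4L2_BUF_FLAG_TIMESTAMP_SOF	0x0200	/* proposed value, not merged */
#endif

/* Report where in the frame the driver took the timestamp. */
static const char *timestamp_point(const struct v4l2_buffer *buf)
{
	return (buf->flags & V4L2_BUF_FLAG_TIMESTAMP_SOF) ?
		"start-of-frame" : "end-of-frame";
}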
Hi Hans, On Monday 10 June 2013 13:29:53 Hans Verkuil wrote: > On Mon June 10 2013 00:35:44 Sakari Ailus wrote: [snip] > > >>>> Note that the 'timestamp' field documentation still says that it is > > >>>> the timestamp of the first data byte for capture as well, that's also > > >>>> wrong. > > >>> > > >>> I know we've already discussed this, but what about devices, such as > > >>> uvcvideo, that can provide the time stamp at which the image has been > > >>> captured ? I don't think it would be worth it making this > > >>> configurable, or even reporting the information to userspace, but > > >>> shouldn't we give some degree of freedom to drivers here ? > > >> > > >> Hmm. That's a good question --- if we allow variation then we > > >> preferrably should also provide a way for applications to know which > > >> case is which. > > >> > > >> Could the uvcvideo timestamps be meaningfully converted to the frame > > >> end time instead? I'd suppose that a frame rate dependent constant > > >> would suffice. However, how to calculate this I don't know. > > > > > > I don't think that's a good idea. The time at which the last byte of the > > > image is received is meaningless to applications. What they care about, > > > for synchronization purpose, is the time at which the image has been > > > captured. > > > > > > I'm wondering if we really need to care for now. I would be enclined to > > > leave it as-is until an application runs into a real issue related to > > > timestamps. > > > > What do you mean by "image has been captured"? Which part of it? > > > > What I was thinking was the possibility that we could change the > > definition so that it'd be applicable to both cases: the time the whole > > image is fully in the system memory is of secondary importance in both > > cases anyway. As on embedded systems the time between the last pixel of > > the image is fully captured to it being in the host system memory is > > very, very short the two can be considered the same in most situations. > > > > I wonder if this change would have any undesirable consequences. > > I really think we need to add a buffer flag that states whether the > timestamp is taken at the start or at the end of the frame. > > For video receivers the timestamp at the end of the frame is the logical > choice and this is what almost all drivers do. Only for sensors can the > start of the frame be more suitable since the framerate can be variable. > > /* Timestamp is taken at the start-of-frame, not the end-of-frame */ > #define V4L2_BUF_FLAG_TIMESTAMP_SOF 0x0200 > > I think it is a safe bet that we won't see 'middle of frame' timestamps, so > let's just add this flag. Given that the timestamp will very likely not vary during the stream, wouldn't it make sense to put the flag somewhere else ? Otherwise applications won't be able to know when the timestamp is taken beforehand.
On Tue June 18 2013 21:55:26 Laurent Pinchart wrote: > Hi Hans, > > On Monday 10 June 2013 13:29:53 Hans Verkuil wrote: > > On Mon June 10 2013 00:35:44 Sakari Ailus wrote: > > [snip] > > > > >>>> Note that the 'timestamp' field documentation still says that it is > > > >>>> the timestamp of the first data byte for capture as well, that's also > > > >>>> wrong. > > > >>> > > > >>> I know we've already discussed this, but what about devices, such as > > > >>> uvcvideo, that can provide the time stamp at which the image has been > > > >>> captured ? I don't think it would be worth it making this > > > >>> configurable, or even reporting the information to userspace, but > > > >>> shouldn't we give some degree of freedom to drivers here ? > > > >> > > > >> Hmm. That's a good question --- if we allow variation then we > > > >> preferrably should also provide a way for applications to know which > > > >> case is which. > > > >> > > > >> Could the uvcvideo timestamps be meaningfully converted to the frame > > > >> end time instead? I'd suppose that a frame rate dependent constant > > > >> would suffice. However, how to calculate this I don't know. > > > > > > > > I don't think that's a good idea. The time at which the last byte of the > > > > image is received is meaningless to applications. What they care about, > > > > for synchronization purpose, is the time at which the image has been > > > > captured. > > > > > > > > I'm wondering if we really need to care for now. I would be enclined to > > > > leave it as-is until an application runs into a real issue related to > > > > timestamps. > > > > > > What do you mean by "image has been captured"? Which part of it? > > > > > > What I was thinking was the possibility that we could change the > > > definition so that it'd be applicable to both cases: the time the whole > > > image is fully in the system memory is of secondary importance in both > > > cases anyway. As on embedded systems the time between the last pixel of > > > the image is fully captured to it being in the host system memory is > > > very, very short the two can be considered the same in most situations. > > > > > > I wonder if this change would have any undesirable consequences. > > > > I really think we need to add a buffer flag that states whether the > > timestamp is taken at the start or at the end of the frame. > > > > For video receivers the timestamp at the end of the frame is the logical > > choice and this is what almost all drivers do. Only for sensors can the > > start of the frame be more suitable since the framerate can be variable. > > > > /* Timestamp is taken at the start-of-frame, not the end-of-frame */ > > #define V4L2_BUF_FLAG_TIMESTAMP_SOF 0x0200 > > > > I think it is a safe bet that we won't see 'middle of frame' timestamps, so > > let's just add this flag. > > Given that the timestamp will very likely not vary during the stream, wouldn't > it make sense to put the flag somewhere else ? Otherwise applications won't be > able to know when the timestamp is taken beforehand. Actually, they can. After calling REQBUFS they can call QUERYBUF and that should have the flag set. The only ioctls where adding such a flag makes sense are REQBUFS and CREATE_BUFS, which means taking a reserved field just for this. But I actually think it is much more logical to keep all the timestamp information in one place. And since it can be queried before starting streaming using QUERYBUF... 
Regards,

	Hans
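From the application side that query could look roughly like the sketch below (illustrative only; it assumes an MMAP capture queue and collapses error handling into returning 0):

/* Sketch: discover the driver's timestamp behaviour before streaming by
 * calling QUERYBUF on a freshly allocated buffer. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static __u32 query_timestamp_flags(int fd)
{
	struct v4l2_requestbuffers req;
	struct v4l2_buffer buf;

	memset(&req, 0, sizeof(req));
	req.count = 1;
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_MMAP;
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		return 0;

	memset(&buf, 0, sizeof(buf));
	buf.index = 0;
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0)
		return 0;

	/* The timestamp-related bits (V4L2_BUF_FLAG_TIMESTAMP_MASK, plus the
	 * proposed start-of-frame flag if it is merged) describe how the
	 * driver will stamp buffers for this queue. */
	return buf.flags;
}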
Hi Hans, Apologies for the much delayed answer. On Mon, Jun 10, 2013 at 01:29:53PM +0200, Hans Verkuil wrote: > On Mon June 10 2013 00:35:44 Sakari Ailus wrote: > > Hi Laurent, > > > > Laurent Pinchart wrote: > > ... > > >>>>> @@ -745,13 +718,9 @@ applications when an output stream.</entry> > > >>>>> > > >>>>> byte was captured, as returned by the > > >>>>> <function>clock_gettime()</function> function for the relevant > > >>>>> clock id; see <constant>V4L2_BUF_FLAG_TIMESTAMP_*</constant> in > > >>>>> > > >>>>> - <xref linkend="buffer-flags" />. For output streams the data > > >>>>> - will not be displayed before this time, secondary to the nominal > > >>>>> - frame rate determined by the current video standard in enqueued > > >>>>> - order. Applications can for example zero this field to display > > >>>>> - frames as soon as possible. The driver stores the time at which > > >>>>> - the first data byte was actually sent out in the > > >>>>> - <structfield>timestamp</structfield> field. This permits > > >>>>> + <xref linkend="buffer-flags" />. For output streams he driver > > >>>> > > >>>> 'he' -> 'the' > > >>>> > > >>>>> + stores the time at which the first data byte was actually sent > > >>>>> out > > >>>>> + in the <structfield>timestamp</structfield> field. This permits > > >>>> > > >>>> Not true: the timestamp is taken after the whole frame was transmitted. > > >>>> > > >>>> Note that the 'timestamp' field documentation still says that it is the > > >>>> timestamp of the first data byte for capture as well, that's also wrong. > > >>> > > >>> I know we've already discussed this, but what about devices, such as > > >>> uvcvideo, that can provide the time stamp at which the image has been > > >>> captured ? I don't think it would be worth it making this configurable, > > >>> or even reporting the information to userspace, but shouldn't we give > > >>> some degree of freedom to drivers here ? > > >> > > >> Hmm. That's a good question --- if we allow variation then we preferrably > > >> should also provide a way for applications to know which case is which. > > >> > > >> Could the uvcvideo timestamps be meaningfully converted to the frame end > > >> time instead? I'd suppose that a frame rate dependent constant would > > >> suffice. However, how to calculate this I don't know. > > > > > > I don't think that's a good idea. The time at which the last byte of the image > > > is received is meaningless to applications. What they care about, for > > > synchronization purpose, is the time at which the image has been captured. > > > > > > I'm wondering if we really need to care for now. I would be enclined to leave > > > it as-is until an application runs into a real issue related to timestamps. > > > > What do you mean by "image has been captured"? Which part of it? > > > > What I was thinking was the possibility that we could change the > > definition so that it'd be applicable to both cases: the time the whole > > image is fully in the system memory is of secondary importance in both > > cases anyway. As on embedded systems the time between the last pixel of > > the image is fully captured to it being in the host system memory is > > very, very short the two can be considered the same in most situations. > > > > I wonder if this change would have any undesirable consequences. > > I really think we need to add a buffer flag that states whether the timestamp > is taken at the start or at the end of the frame. 
> > For video receivers the timestamp at the end of the frame is the logical > choice and this is what almost all drivers do. Only for sensors can the start > of the frame be more suitable since the framerate can be variable. Do you have a use case in mind? The start-of-frame (frame sync) event can be subscribed for that purpose. Most of the time the start-of-frame event is also generated by a different hardware sub-block (and thus sub-device) than the one that finally writes the image to the system memory. In the future there could be cases where it's a different driver altogether, albeit we don't have one now. Besides possibly requiring at least a tiny hack to implement in a driver, I could hardly argue that kind of an implementation would be more beautiful: the buffer timestamp is better associated with end-of-frame if possible. Systems that need the start-of-frame event generally need both (at least the ones I'm aware of). > /* Timestamp is taken at the start-of-frame, not the end-of-frame */ > #define V4L2_BUF_FLAG_TIMESTAMP_SOF 0x0200 > > I think it is a safe bet that we won't see 'middle of frame' timestamps, so > let's just add this flag. Agreed. I'll add the flag and resend.
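For reference, the frame sync event mentioned above is already available. A minimal sketch of subscribing to it (illustrative only; which video or subdev node actually emits the event depends on the driver):

/* Sketch: per-frame start-of-frame notification via V4L2_EVENT_FRAME_SYNC,
 * as a complement to an end-of-frame buffer timestamp. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int watch_frame_sync(int fd)
{
	struct v4l2_event_subscription sub;
	struct v4l2_event ev;

	memset(&sub, 0, sizeof(sub));
	sub.type = V4L2_EVENT_FRAME_SYNC;
	if (ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub) < 0)
		return -1;

	/* On a blocking fd this waits for the next start-of-frame; a real
	 * application would normally poll() for POLLPRI instead. */
	if (ioctl(fd, VIDIOC_DQEVENT, &ev) < 0)
		return -1;

	return ev.u.frame_sync.frame_sequence;
}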
diff --git a/Documentation/DocBook/media/v4l/io.xml b/Documentation/DocBook/media/v4l/io.xml
index e6c5855..46d5a41 100644
--- a/Documentation/DocBook/media/v4l/io.xml
+++ b/Documentation/DocBook/media/v4l/io.xml
@@ -654,38 +654,11 @@ plane, are stored in struct <structname>v4l2_plane</structname> instead.
 In that case, struct <structname>v4l2_buffer</structname> contains an array of
 plane structures.</para>
 
-    <para>Nominally timestamps refer to the first data byte transmitted.
-In practice however the wide range of hardware covered by the V4L2 API
-limits timestamp accuracy. Often an interrupt routine will
-sample the system clock shortly after the field or frame was stored
-completely in memory. So applications must expect a constant
-difference up to one field or frame period plus a small (few scan
-lines) random error. The delay and error can be much
-larger due to compression or transmission over an external bus when
-the frames are not properly stamped by the sender. This is frequently
-the case with USB cameras. Here timestamps refer to the instant the
-field or frame was received by the driver, not the capture time. These
-devices identify by not enumerating any video standards, see <xref
-linkend="standard" />.</para>
-
-    <para>Similar limitations apply to output timestamps. Typically
-the video hardware locks to a clock controlling the video timing, the
-horizontal and vertical synchronization pulses. At some point in the
-line sequence, possibly the vertical blanking, an interrupt routine
-samples the system clock, compares against the timestamp and programs
-the hardware to repeat the previous field or frame, or to display the
-buffer contents.</para>
-
-    <para>Apart of limitations of the video device and natural
-inaccuracies of all clocks, it should be noted system time itself is
-not perfectly stable. It can be affected by power saving cycles,
-warped to insert leap seconds, or even turned back or forth by the
-system administrator affecting long term measurements. <footnote>
-      <para>Since no other Linux multimedia
-API supports unadjusted time it would be foolish to introduce here. We
-must use a universally supported clock to synchronize different media,
-hence time of day.</para>
-    </footnote></para>
+    <para>On timestamp types that are sampled from the system clock
+(V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC) it is guaranteed that the timestamp is
+taken after the complete frame has been received (or transmitted in
+case of video output devices). For other kinds of
+timestamps this may vary depending on the driver.</para>
 
     <table frame="none" pgwide="1" id="v4l2-buffer">
       <title>struct <structname>v4l2_buffer</structname></title>
@@ -745,13 +718,9 @@ applications when an output stream.</entry>
       byte was captured, as returned by the
       <function>clock_gettime()</function> function for the relevant
       clock id; see <constant>V4L2_BUF_FLAG_TIMESTAMP_*</constant> in
-      <xref linkend="buffer-flags" />. For output streams the data
-      will not be displayed before this time, secondary to the nominal
-      frame rate determined by the current video standard in enqueued
-      order. Applications can for example zero this field to display
-      frames as soon as possible. The driver stores the time at which
-      the first data byte was actually sent out in the
-      <structfield>timestamp</structfield> field. This permits
+      <xref linkend="buffer-flags" />. For output streams he driver
+      stores the time at which the first data byte was actually sent out
+      in the <structfield>timestamp</structfield> field. This permits
       applications to monitor the drift between the video and system
       clock.</para></entry>
     </row>
Document that monotonic timestamps are taken after the corresponding frame
has been received, not when the reception has begun. This corresponds to the
reality of current drivers: the timestamp is naturally taken when the
hardware triggers an interrupt to tell the driver to handle the received
frame.

Remove the note on timestamp accuracy as it is fairly subjective what
actually constitutes an unstable timestamp.

Also remove the explanation that output buffer timestamps can be used to
delay outputting a frame.

Remove the footnote saying we always use the realtime clock.

Signed-off-by: Sakari Ailus <sakari.ailus@iki.fi>
---
Hi all,

This is the second version of the patch fixing the timestamp behaviour
documentation. I've tried to address the comments I've received, although I
don't think there was a definitive conclusion on all the trails of
discussion. What has changed since v1:

- Removed the discussion on timestamp stability.

- Removed the notes saying that timestamps on output buffers define when
  frames will be displayed. It appears no driver has ever implemented this,
  or at least none implements it now.

- Monotonic time is not affected by the adjustments that wall clock time is
  subject to. Removed the notes on that.

 Documentation/DocBook/media/v4l/io.xml | 47 ++++++--------------------------
 1 file changed, 8 insertions(+), 39 deletions(-)
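The second hunk of the patch keeps the remark that the timestamp permits applications to monitor the drift between the video and system clock. As a rough illustration (an assumption-laden sketch, not part of the patch), with monotonic timestamps that comparison is a straightforward clock_gettime() against the buffer timestamp:

/* Illustrative only: age of a dequeued frame relative to "now". Only
 * meaningful when the buffer flags indicate CLOCK_MONOTONIC timestamps. */
#include <time.h>
#include <linux/videodev2.h>

static double buffer_age_seconds(const struct v4l2_buffer *buf)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);

	/* Seconds plus microseconds difference, folded into a double. */
	return (now.tv_sec - buf->timestamp.tv_sec) +
	       (now.tv_nsec / 1000 - buf->timestamp.tv_usec) / 1e6;
}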