
[RFC,3/4] media: Documentation: Add subdev configuration models, raw sensor model

Message ID 20241011075535.588140-4-sakari.ailus@linux.intel.com (mailing list archive)
State: New
Series: Sub-device configuration models

Commit Message

Sakari Ailus Oct. 11, 2024, 7:55 a.m. UTC
Sub-device configuration models define what V4L2 API elements are
available on a compliant sub-device and how they behave.

The patch also adds a model for common raw sensors.

Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
---
 .../media/drivers/camera-sensor.rst           |   5 +
 .../media/v4l/common-raw-sensor.dia           | 441 ++++++++++++++++++
 .../media/v4l/common-raw-sensor.svg           | 134 ++++++
 .../userspace-api/media/v4l/dev-subdev.rst    |   2 +
 .../media/v4l/subdev-config-model.rst         | 180 +++++++
 5 files changed, 762 insertions(+)
 create mode 100644 Documentation/userspace-api/media/v4l/common-raw-sensor.dia
 create mode 100644 Documentation/userspace-api/media/v4l/common-raw-sensor.svg
 create mode 100644 Documentation/userspace-api/media/v4l/subdev-config-model.rst

Comments

Jacopo Mondi Oct. 22, 2024, 4:02 p.m. UTC | #1
Hi Sakari

On Fri, Oct 11, 2024 at 10:55:34AM +0300, Sakari Ailus wrote:
> Sub-device configuration models define what V4L2 API elements are
> available on a compliant sub-device and how do they behave.
>
> The patch also adds a model for common raw sensors.
>
> Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
> ---
>  .../media/drivers/camera-sensor.rst           |   5 +
>  .../media/v4l/common-raw-sensor.dia           | 441 ++++++++++++++++++
>  .../media/v4l/common-raw-sensor.svg           | 134 ++++++
>  .../userspace-api/media/v4l/dev-subdev.rst    |   2 +
>  .../media/v4l/subdev-config-model.rst         | 180 +++++++
>  5 files changed, 762 insertions(+)
>  create mode 100644 Documentation/userspace-api/media/v4l/common-raw-sensor.dia
>  create mode 100644 Documentation/userspace-api/media/v4l/common-raw-sensor.svg
>  create mode 100644 Documentation/userspace-api/media/v4l/subdev-config-model.rst
>
> diff --git a/Documentation/userspace-api/media/drivers/camera-sensor.rst b/Documentation/userspace-api/media/drivers/camera-sensor.rst
> index ad4049ff7eec..727cc12bc624 100644
> --- a/Documentation/userspace-api/media/drivers/camera-sensor.rst
> +++ b/Documentation/userspace-api/media/drivers/camera-sensor.rst
> @@ -18,6 +18,9 @@ binning functionality. The sensor drivers belong to two distinct classes, freely
>  configurable and register list based drivers, depending on how the driver
>  configures this functionality.
>
> +Also see
> +:ref:`media_subdev_config_model_common_raw_sensor`.
> +
>  Freely configurable camera sensor drivers
>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> @@ -105,6 +108,8 @@ values programmed by the register sequences. The default values of these
>  controls shall be 0 (disabled). Especially these controls shall not be inverted,
>  independently of the sensor's mounting rotation.
>
> +.. _media_using_camera_sensor_drivers_embedded_data:
> +
>  Embedded data
>  -------------
>

[snip images]

> diff --git a/Documentation/userspace-api/media/v4l/dev-subdev.rst b/Documentation/userspace-api/media/v4l/dev-subdev.rst
> index dcfcbd52490d..4d145bd3bd09 100644
> --- a/Documentation/userspace-api/media/v4l/dev-subdev.rst
> +++ b/Documentation/userspace-api/media/v4l/dev-subdev.rst
> @@ -838,3 +838,5 @@ stream while it may be possible to enable and disable the embedded data stream.
>
>  The embedded data format does not need to be configured on the sensor's pads as
>  the format is dictated by the pixel data format in this case.
> +
> +.. include:: subdev-config-model.rst
> diff --git a/Documentation/userspace-api/media/v4l/subdev-config-model.rst b/Documentation/userspace-api/media/v4l/subdev-config-model.rst
> new file mode 100644
> index 000000000000..8ec801998f5f
> --- /dev/null
> +++ b/Documentation/userspace-api/media/v4l/subdev-config-model.rst
> @@ -0,0 +1,180 @@
> +.. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-no-invariants-or-later
> +
> +Sub-device configuration models
> +===============================
> +
> +A sub-device configuration model specifies in detail what the user space can
> +expect from a sub-device in terms of V4L2 sub-device interface support,
> +including semantics specific to a given configuration model.
> +
> +A sub-device may implement more than one configuration model at the same
> +time. The implemented configuration models can be obtained from the sub-device's
> +``V4L2_CID_CONFIG_MODEL`` control.

Isn't a control overkill ? Isn't it enough to identify that a sensor produces
RAW images and has an internal pad matching the description below ?

Also, would it be the single sensor driver that has to correctly
populate the control ?

> +
> +.. _media_subdev_config_model_common_raw_sensor:
> +
> +Common raw camera sensor model
> +------------------------------
> +
> +The common raw camera sensor model defines the configurability of a superset
> +that covers the vast majority of raw camera sensors. Not all of the
> +configuration and enumeration interfaces are offered by all drivers.
> +
> +A sub-device complies with the common raw sensor model if the
> +``V4L2_CONFIG_MODEL_COMMON_RAW`` bit is set in the ``V4L2_CID_CONFIG_MODEL``
> +control of the sub-device.
> +
> +The common raw camera sensor model is aligned with
> +:ref:`media_using_camera_sensor_drivers`. Please refer to that regarding aspects
> +not specified here.
> +
> +Each camera sensor implementing the common raw sensor model exposes a single
> +V4L2 sub-device. The sub-device contains a single source pad (0) and two or more
> +internal pads: an image data internal pad (1) and optionally an embedded data
> +pad (2). Additionally, further internal pads may be supported for other
> +features, in which case they are documented separately for the given device.

That's pretty easy to identify from userspace without a control, isn't
it ?

> +
> +This is show in :ref:`media_subdev_config_model_common_raw_sensor_subdev`.
> +
> +.. _media_subdev_config_model_common_raw_sensor_subdev:
> +
> +.. kernel-figure:: common-raw-sensor.svg
> +    :alt:    common-raw-sensor.svg
> +    :align:  center
> +
> +    **Common raw sensor sub-device**
> +
> +Routes
> +^^^^^^
> +
> +A sub-device conforming to common raw camera sensor model implements the
> +following routes.
> +
> +.. flat-table:: Routes
> +    :header-rows: 1
> +
> +    * - Sink pad/stream
> +      - Source pad/stream
> +      - Static (X/M(aybe)/-)

afaiu either the route is Static (cannot be changed) or it is not.
What does Maybe mean here ?

> +      - Mandatory (X/-)
> +      - Synopsis
> +    * - 1/0
> +      - 0/0
> +      - X
> +      - X
> +      - Image data
> +    * - 2/0
> +      - 0/1
> +      - M
> +      - -
> +      - Embedded data
> +
> +Some devices may support enabling and disabling the embedded data stream. Others
> +may not support it at all, in which case the embedded data route does not exist.

Does the driver need to expose a routing table at all if it has a
single, immutable image data stream ?

> +
> +Sensor pixel array size, cropping and binning
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +

This is not something I was expecting here. We teach how to compute
framerates for RAW sensors in camera-sensor.rst ("Using camera sensor
drivers") specifying which controls a sensor driver should register and
the expected controls units. It seems to me we define part of the expected
interface exposed by a raw camera sensor there and part here. I wonder
if camera-sensor.rst makes any sense at all if we define the "models"
here.
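
For reference, the frame rate derivation that camera-sensor.rst documents boils
down to roughly the sketch below (illustration only; the control names are the
standard V4L2 ones and "width"/"height" are the sensor's output image size):

/*
 * Frame rate of a raw sensor, derived from the blanking and pixel rate
 * controls as described in camera-sensor.rst.  Rough sketch only.
 */
static double sensor_frame_rate(double pixel_rate,   /* V4L2_CID_PIXEL_RATE */
				unsigned int width,   /* output image width */
				unsigned int height,  /* output image height */
				unsigned int hblank,  /* V4L2_CID_HBLANK */
				unsigned int vblank)  /* V4L2_CID_VBLANK */
{
	double line_length = width + hblank;	/* pixels per line */
	double frame_lines = height + vblank;	/* lines per frame */

	return pixel_rate / (line_length * frame_lines);
}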

> +The sensor's pixel array is divided into one or more areas. The areas around the
> +edge of the pixel array, usually one one or more sides, may contain optical
> +black pixels, dummy pixels and other non-image pixels.
> +
> +A rectangle within the pixel area contains the visible pixels. Capturing the
> +non-visible pixels may be supported by the sensor.

This is a bit of simplification, as I presume there might be
rectangles of visible pixels which overlap in the most advanced use
cases.
https://git.libcamera.org/libcamera/libcamera.git/tree/src/libcamera/property_ids_core.yaml#n594

> +
> +The sensor can perform three operations that affect the output image size. First
> +comes analogue crop. This configuration limits parts of the pixel array which
> +the sensor will read, affecting sensor timing as well. The granularity of the
> +analogue crop configuration varies greatly across sensors: some sensors support
> +a few different analogue crop configurations whereas others may support anything
> +divisible by a given number of pixels.
> +
> +The default analogue crop rectangle corresponds to the visible pixel area if
> +supported by the hardware.

In what sense "if supported by the hardware" ? Is this referring to
the "visibile pixel area" ?

> +
> +In the next step, binning is performed on the image data read from camera
> +sensor's pixel array. This will effectively result in an image smaller than the
> +original by given proportions horizontally and vertically. Typical values are

s/proportions/scaling factors/ ?

> +1/2 and 1/3 but others may well be supported by the hardware as well.
> +
> +The combination of the analogue crop and binning operations may result in an
> +image size that may be larger than desirable. For this purpose, a digital crop

This is highly optional it seems.

> +operation may be performed on the binned image. The resulting image size is
> +further outputted by the sensor.
> +
> +.. flat-table:: Selection targets on pads
> +    :header-rows: 1
> +
> +    * - Pad/Stream
> +      - Selection target/format
> +      - Mandatory (X/-)
> +      - Synopsis

What about an R/W column ?

> +    * - 1/0
> +      - Format
> +      - X
> +      - Image data format. The width and height fields of this format are the
> +        same than those for the V4L2_SEL_TGT_CROP_BOUNDS rectangle. The media

Can sizes be changed at all ?

> +        bus code of this format reflects the native pixel depth of the sensor.
> +    * - 1/0
> +      - V4L2_SEL_TGT_NATIVE_SIZE
> +      - X
> +      - The full size of the pixel array, including all pixels in the pixel
> +	array, even if they cannot be captured. This rectangle is relative to
> +	the format on the same (pad, stream).
> +    * - 1/0
> +      - V4L2_SEL_TGT_CROP_BOUNDS
> +      - X
> +      - The crop rectangle bounds. No pixels outside this area can be

I would describe it as "the readable part of the full pixel array
area" instead of repeating "crop rectangle bounds"

> +        captured. This rectangle is relative to the V4L2_SEL_TGT_NATIVE_SIZE

> +    * - 1/0
> +      - V4L2_SEL_TGT_CROP_DEFAULT
> +      - X
> +      - The visible pixel area. This rectangle is relative to the

Isn't this the default analogue crop rectangle ?

> +        V4L2_SEL_TGT_NATIVE_SIZE rectangle on the same (pad, stream).
> +    * - 1/0
> +      - V4L2_SEL_TGT_CROP
> +      - \-
> +      - Analogue crop. Analogue crop typically has a coarse granularity. This
> +        rectangle is relative to the V4L2_SEL_TGT_NATIVE_SIZE rectangle on the
> +        same (pad, stream).
> +    * - 1/0
> +      - V4L2_SEL_TGT_COMPOSE
> +      - \-
> +      - Binning. This rectangle is relative to the V4L2_SEL_TGT_CROP
> +        rectangle on the same (pad, stream).

The size ratio between the V4L2_SEL_TGT_CROP and V4L2_SEL_TGT_COMPOSE
rectangles selects the desired binning factor.
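
Perhaps a short example would help here too. A userspace sketch (hypothetical,
not part of the patch; it assumes a streams-aware kernel with the 'stream'
field in struct v4l2_subdev_selection and the pad numbering of this model):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/v4l2-subdev.h>

/* Analogue crop plus 2x2 binning on the internal image pad (1/0). */
static int set_crop_and_binning(int subdev_fd)
{
	struct v4l2_subdev_selection sel;

	memset(&sel, 0, sizeof(sel));
	sel.which = V4L2_SUBDEV_FORMAT_ACTIVE;
	sel.pad = 1;
	sel.stream = 0;

	/* Analogue crop: read a 2592x1944 window of the pixel array. */
	sel.target = V4L2_SEL_TGT_CROP;
	sel.r = (struct v4l2_rect){ .left = 0, .top = 0,
				    .width = 2592, .height = 1944 };
	if (ioctl(subdev_fd, VIDIOC_SUBDEV_S_SELECTION, &sel) < 0)
		return -1;

	/* COMPOSE at half the CROP size requests a 2x2 binning factor. */
	sel.target = V4L2_SEL_TGT_COMPOSE;
	sel.r = (struct v4l2_rect){ .left = 0, .top = 0,
				    .width = 1296, .height = 972 };
	return ioctl(subdev_fd, VIDIOC_SUBDEV_S_SELECTION, &sel);
}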

> +    * - 2/0
> +      - Format
> +      - X
> +      - Embedded data format.
> +    * - 0/0
> +      - V4L2_SEL_TGT_CROP
> +      - \-
> +      - Digital crop. This rectangle is relative to the V4L2_SEL_TGT_COMPOSE
> +        rectangle on (pad, stream) pair 1/0.
> +    * - 0/0
> +      - Format
> +      - X
> +      - Image data source format. The width and height fields of the format are
> +        the same than for the V4L2_SEL_TGT crop rectangle on (pad, stream) pair
> +        0/0 where as the media bus code reflects the pixel data output of the

s/where as/and ?
Or maybe I didn't get what you mean

> +        sensor.
> +    * - 0/1
> +      - Format
> +      - X
> +      - Embedded data source format.
> +
> +Embedded data
> +^^^^^^^^^^^^^
> +
> +The embedded data stream is produced by the sensor when the corresponding route
> +is enabled. The embedded data route may also be immutable or not exist at all,
> +in case the sensor (or the driver) does not support it.
> +
> +Generally the sensor embedded data width is determined by the width of the image
> +data whereas the number of lines are constant for the embedded data. The user
> +space may obtain the size of the embedded data once the image data size on the
> +source pad has been configured.
> +
> +Also see :ref:`media_using_camera_sensor_drivers_embedded_data`.
> +
> --
> 2.39.5
>
>
Laurent Pinchart Oct. 22, 2024, 10:10 p.m. UTC | #2
Hello,

On Tue, Oct 22, 2024 at 06:02:32PM +0200, Jacopo Mondi wrote:
> On Fri, Oct 11, 2024 at 10:55:34AM +0300, Sakari Ailus wrote:
> > Sub-device configuration models define what V4L2 API elements are
> > available on a compliant sub-device and how do they behave.
> >
> > The patch also adds a model for common raw sensors.
> >
> > Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
> > ---
> >  .../media/drivers/camera-sensor.rst           |   5 +
> >  .../media/v4l/common-raw-sensor.dia           | 441 ++++++++++++++++++
> >  .../media/v4l/common-raw-sensor.svg           | 134 ++++++
> >  .../userspace-api/media/v4l/dev-subdev.rst    |   2 +
> >  .../media/v4l/subdev-config-model.rst         | 180 +++++++
> >  5 files changed, 762 insertions(+)
> >  create mode 100644 Documentation/userspace-api/media/v4l/common-raw-sensor.dia
> >  create mode 100644 Documentation/userspace-api/media/v4l/common-raw-sensor.svg
> >  create mode 100644 Documentation/userspace-api/media/v4l/subdev-config-model.rst
> >
> > diff --git a/Documentation/userspace-api/media/drivers/camera-sensor.rst b/Documentation/userspace-api/media/drivers/camera-sensor.rst
> > index ad4049ff7eec..727cc12bc624 100644
> > --- a/Documentation/userspace-api/media/drivers/camera-sensor.rst
> > +++ b/Documentation/userspace-api/media/drivers/camera-sensor.rst
> > @@ -18,6 +18,9 @@ binning functionality. The sensor drivers belong to two distinct classes, freely
> >  configurable and register list based drivers, depending on how the driver
> >  configures this functionality.
> >
> > +Also see
> > +:ref:`media_subdev_config_model_common_raw_sensor`.

The line break doesn't seem necessary.

> > +
> >  Freely configurable camera sensor drivers
> >  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > @@ -105,6 +108,8 @@ values programmed by the register sequences. The default values of these
> >  controls shall be 0 (disabled). Especially these controls shall not be inverted,
> >  independently of the sensor's mounting rotation.
> >
> > +.. _media_using_camera_sensor_drivers_embedded_data:
> > +
> >  Embedded data
> >  -------------
> 
> [snip images]

One day we should probably try to unify the look & feel of all diagrams
in the V4L2 documentation. One day :-)

> > diff --git a/Documentation/userspace-api/media/v4l/dev-subdev.rst b/Documentation/userspace-api/media/v4l/dev-subdev.rst
> > index dcfcbd52490d..4d145bd3bd09 100644
> > --- a/Documentation/userspace-api/media/v4l/dev-subdev.rst
> > +++ b/Documentation/userspace-api/media/v4l/dev-subdev.rst
> > @@ -838,3 +838,5 @@ stream while it may be possible to enable and disable the embedded data stream.
> >
> >  The embedded data format does not need to be configured on the sensor's pads as
> >  the format is dictated by the pixel data format in this case.
> > +
> > +.. include:: subdev-config-model.rst

I think we should reorganize the documentation at some point. Things are
getting a bit too deep, with e.g. 4.13.5.1.2. That's for later.
Splitting the configuration model to a separate file as done in this
patch will be helpful.

> > diff --git a/Documentation/userspace-api/media/v4l/subdev-config-model.rst b/Documentation/userspace-api/media/v4l/subdev-config-model.rst
> > new file mode 100644
> > index 000000000000..8ec801998f5f
> > --- /dev/null
> > +++ b/Documentation/userspace-api/media/v4l/subdev-config-model.rst
> > @@ -0,0 +1,180 @@
> > +.. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-no-invariants-or-later
> > +
> > +Sub-device configuration models
> > +===============================
> > +
> > +A sub-device configuration model specifies in detail what the user space can
> > +expect from a sub-device in terms of V4L2 sub-device interface support,
> > +including semantics specific to a given configuration model.
> > +
> > +A sub-device may implement more than one configuration model at the same
> > +time.

I wonder how that would work. I assume that, with the control being
read-only, you don't envision userspace selecting a model at runtime,
but rather drivers implementing the union (or intersection ?) of
multiple models. Do you have use cases in mind, or is this only to
prepare for a hypothetical future where this could happen ?

> The implemented configuration models can be obtained from the sub-device's
> > +``V4L2_CID_CONFIG_MODEL`` control.
> 
> Isn't a control an overkill ? Isn't enough to identify that a sensor produces
> RAW images and has an internal pad to the below description ?

V4L2 has a set of low-level, generic interfaces such as pads (external
and internal), formats, selection rectangles, routes, and controls. We
have some documentation about how those should be used, with a
relatively generic model defined in the "Order of configuration and
format propagation" section of Documentation/userspace-api/media/v4l/dev-subdev.rst.
The different API elements are however mostly defined independently of
each other (very little is said about interactions between controls and
formats or selection rectangles for instance), and we don't document how
they should be mapped to device features. This results in different
drivers for similar devices using the API elements differently,
sometimes in ways that breach the documented API. Userspace ends up
implementing heuristics and praying that things work.

The whole idea behind configuration models is to avoid heuristics.
That's why we want an explicit API element to indicate what
configuration model a device supports, and document that model
explicitly in the kernel documentation.
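
To illustrate, detecting the model from userspace could then be as simple as
the sketch below. Note this is only a sketch: V4L2_CID_CONFIG_MODEL and
V4L2_CONFIG_MODEL_COMMON_RAW are the names proposed by this series, not part of
the mainline UAPI, and the control is assumed to be a 32-bit bitmask here.

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static bool is_common_raw_sensor(int subdev_fd)
{
	struct v4l2_ext_control ctrl = { .id = V4L2_CID_CONFIG_MODEL };
	struct v4l2_ext_controls ctrls = {
		.which = V4L2_CTRL_WHICH_CUR_VAL,
		.count = 1,
		.controls = &ctrl,
	};

	/* No control means no configuration model is advertised. */
	if (ioctl(subdev_fd, VIDIOC_G_EXT_CTRLS, &ctrls) < 0)
		return false;

	return ctrl.value & V4L2_CONFIG_MODEL_COMMON_RAW;
}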

> Also, would it be the single sensor driver that has to correctly
> populate the control ?
> 
> > +
> > +.. _media_subdev_config_model_common_raw_sensor:
> > +
> > +Common raw camera sensor model
> > +------------------------------
> > +
> > +The common raw camera sensor model defines the configurability of a superset

A superset of what ? Reading the next sentence I suspect you mean that
this model defines a set of features that compliant subdevs have to use
as documented, but where some API elements could be left unused by a
driver if the corresponding device doesn't implement the corresponding
hardware feature. Clarifying this would be nice.

> > +that covers the vast majority of raw camera sensors. Not all of the
> > +configuration and enumeration interfaces are offered by all drivers.
> > +
> > +A sub-device complies with the common raw sensor model if the
> > +``V4L2_CONFIG_MODEL_COMMON_RAW`` bit is set in the ``V4L2_CID_CONFIG_MODEL``
> > +control of the sub-device.
> > +
> > +The common raw camera sensor model is aligned with
> > +:ref:`media_using_camera_sensor_drivers`. Please refer to that regarding aspects
> > +not specified here.

Could we write this to emphasize the fact that
media_using_camera_sensor_drivers defines concepts common to all camera
sensors, while the model here focusses on how drivers expose API
elements for this particular model ? "is aligned with" is a bit vague.

> > +
> > +Each camera sensor implementing the common raw sensor model exposes a single
> > +V4L2 sub-device. The sub-device contains a single source pad (0) and two or more
> > +internal pads: an image data internal pad (1) and optionally an embedded data
> > +pad (2). Additionally, further internal pads may be supported for other
> > +features, in which case they are documented separately for the given device.
> 
> That's pretty easy to identify from userspace without a control, isn't
> it ?

See above.

> > +
> > +This is show in :ref:`media_subdev_config_model_common_raw_sensor_subdev`.

s/show/shown/

Maybe

This is shown in the :ref:`media_subdev_config_model_common_raw_sensor_subdev`
figure.


Or is there a sphinx directive other than ref that would automate this ?

> > +
> > +.. _media_subdev_config_model_common_raw_sensor_subdev:
> > +
> > +.. kernel-figure:: common-raw-sensor.svg
> > +    :alt:    common-raw-sensor.svg
> > +    :align:  center
> > +
> > +    **Common raw sensor sub-device**
> > +
> > +Routes
> > +^^^^^^
> > +
> > +A sub-device conforming to common raw camera sensor model implements the
> > +following routes.
> > +
> > +.. flat-table:: Routes
> > +    :header-rows: 1
> > +
> > +    * - Sink pad/stream
> > +      - Source pad/stream
> > +      - Static (X/M(aybe)/-)
> 
> afaiu either the route is Static (cannot be changed) or it is not.
> What does Maybe means here ?
>
> > +      - Mandatory (X/-)
> > +      - Synopsis
> > +    * - 1/0
> > +      - 0/0
> > +      - X
> > +      - X
> > +      - Image data
> > +    * - 2/0
> > +      - 0/1
> > +      - M
> > +      - -

This is interpreted as a list, translated to

<ul class="simple">
<li></li>
</ul>

and rendered as •

> > +      - Embedded data
> > +
> > +Some devices may support enabling and disabling the embedded data stream. Others
> > +may not support it at all, in which case the embedded data route does not exist.

"may not support it at all" can be interpreted as "maybe not support
enabling and disabling it at all", i.e. only support a static always
enabled route. Based on the last sentence that's not what you mean, it
could be reworded.

> 
> Does the driver need to expose a routing table at all if it has a
> single, immutable image data stream ?

I think it should. If there are internal pads, there should be an
internal routing table, at the very least for consistency. It will make
userspace simpler I believe (and we can check that statement with a
libcamera implementation :-)).
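
Something along the lines of the sketch below is what I'd expect userspace to
do (assuming a kernel recent enough to have the len_routes field in struct
v4l2_subdev_routing); even a single, immutable image data route would then
show up in the table:

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/v4l2-subdev.h>

#define MAX_ROUTES 8

static int dump_routes(int subdev_fd)
{
	struct v4l2_subdev_route routes[MAX_ROUTES] = {};
	struct v4l2_subdev_routing routing = {
		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
		.len_routes = MAX_ROUTES,
		.routes = (uintptr_t)routes,
	};
	unsigned int i;

	if (ioctl(subdev_fd, VIDIOC_SUBDEV_G_ROUTING, &routing) < 0)
		return -1;

	for (i = 0; i < routing.num_routes; i++)
		printf("route %u/%u -> %u/%u%s\n",
		       routes[i].sink_pad, routes[i].sink_stream,
		       routes[i].source_pad, routes[i].source_stream,
		       routes[i].flags & V4L2_SUBDEV_ROUTE_FL_ACTIVE ?
		       " (active)" : "");

	return 0;
}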

> > +
> > +Sensor pixel array size, cropping and binning
> > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > +
> 
> This is not something I was expecting here. We teach how to compute
> framerates for RAW sensors in camera-sensor.rst ("Using camera sensor
> drivers") specifying which controls a sensor driver should register and
> the expected controls units. It seems to me we define part of the expected
> interface exposed by a raw camera sensor there and part here. I wonder
> if camera-sensor.rst makes any sense at all if we define the "models"
> here.

Good question. I find the split to be a bit artificial indeed, and it's
not clear where some of the parts should live. I think we still need a
document that explains concepts common to raw sensors, regardless of the
model. We probably won't get the split entirely right on the first try,
and that's fine with me, we can refine it later.

> > +The sensor's pixel array is divided into one or more areas. The areas around the
> > +edge of the pixel array, usually one one or more sides, may contain optical

"one one" ?

> > +black pixels, dummy pixels and other non-image pixels.
> > +
> > +A rectangle within the pixel area contains the visible pixels. Capturing the

Above you wrote "pixel array", and here "pixel area".

> > +non-visible pixels may be supported by the sensor.
> 
> This is a bit of simplification, as I presume there might be
> rectangles of visible pixels which overlap in the most advanced use
> cases.
> https://git.libcamera.org/libcamera/libcamera.git/tree/src/libcamera/property_ids_core.yaml#n594

That's a more generic model. I'd like to see if there are sensors
implementing it. Sakari, does CCS support that ?

This being said, we could restrict this subdev model to supporting only
sensors with a rectangular visible pixels area.

> > +
> > +The sensor can perform three operations that affect the output image size. First
> > +comes analogue crop. This configuration limits parts of the pixel array which
> > +the sensor will read, affecting sensor timing as well. The granularity of the
> > +analogue crop configuration varies greatly across sensors: some sensors support
> > +a few different analogue crop configurations whereas others may support anything
> > +divisible by a given number of pixels.
> > +
> > +The default analogue crop rectangle corresponds to the visible pixel area if
> > +supported by the hardware.
> 
> In what sense "if supported by the hardware" ? Is this referring to
> the "visibile pixel area" ?

I assume this means "if analogue crop is supported by the hardware".
This should be clarified.

> > +
> > +In the next step, binning is performed on the image data read from camera
> > +sensor's pixel array. This will effectively result in an image smaller than the
> > +original by given proportions horizontally and vertically. Typical values are
> 
> s/proportions/scaling factors/ ?
> 
> > +1/2 and 1/3 but others may well be supported by the hardware as well.
> > +
> > +The combination of the analogue crop and binning operations may result in an
> > +image size that may be larger than desirable. For this purpose, a digital crop
> 
> This is highly optional it seems.
> 
> > +operation may be performed on the binned image. The resulting image size is
> > +further outputted by the sensor.

s/outputted/output/ ? I think "outputted" is correct, but I believe it's
less common.

> > +
> > +.. flat-table:: Selection targets on pads
> > +    :header-rows: 1
> > +
> > +    * - Pad/Stream
> > +      - Selection target/format
> > +      - Mandatory (X/-)
> > +      - Synopsis
> 
> What about an R/W column ?
> 
> > +    * - 1/0
> > +      - Format
> > +      - X
> > +      - Image data format. The width and height fields of this format are the
> > +        same than those for the V4L2_SEL_TGT_CROP_BOUNDS rectangle. The media
> 
> Can sizes be changed at all ?

For the internal image pad, I wouldn't think so. I think it should
expose the full readable array. Or, reading what you wrote below, do you
envision this including non-readable pixels too ?

Mapping pixel array definitions from datasheets to formats and selection
rectangles is hard, as datasheets, when they are available, are often
confusing. Datasheets from different vendors, or even for different
sensors of the same vendor, often have different models. I think we need
to be extra precise here, and possibly work with sensor vendors. Using
the CCS model as a base could be useful.

> > +        bus code of this format reflects the native pixel depth of the sensor.
> > +    * - 1/0
> > +      - V4L2_SEL_TGT_NATIVE_SIZE
> > +      - X
> > +      - The full size of the pixel array, including all pixels in the pixel
> > +	array, even if they cannot be captured. This rectangle is relative to
> > +	the format on the same (pad, stream).

Mix of tabs and spaces.

What do you mean by "relative to the format on the same (pad, stream)" ?
Does it mean the rectangle is expressed relatively to a rectangle
defined as (0,0),(fmt.width x fmt.height) ? That doesn't seem to match
the description of the format and V4L2_SEL_TGT_CROP_BOUNDS, as the
format is defined as having the same width and height as
V4L2_SEL_TGT_CROP_BOUNDS, and V4L2_SEL_TGT_CROP_BOUNDS is defined as
being relative to V4L2_SEL_TGT_NATIVE_SIZE. I assume you mean something
else, it should be clarified.

> > +    * - 1/0
> > +      - V4L2_SEL_TGT_CROP_BOUNDS
> > +      - X
> > +      - The crop rectangle bounds. No pixels outside this area can be
> 
> I would describe it as "the readable part of the full pixel array
> area" instead of repeating "crop rectangle bounds"

Agreed. I would also explicitly state this can include black pixels. I
wonder if we should also define border pixels in this document.

> > +        captured. This rectangle is relative to the V4L2_SEL_TGT_NATIVE_SIZE
> 
> > +    * - 1/0
> > +      - V4L2_SEL_TGT_CROP_DEFAULT
> > +      - X
> > +      - The visible pixel area. This rectangle is relative to the
> 
> Isn't this the default analogue crop rectangle ?

I assume so, and it should be documented as such explicitly. Also, if
the default crop rectangle doesn't include some of the readable pixels, 

> > +        V4L2_SEL_TGT_NATIVE_SIZE rectangle on the same (pad, stream).
> > +    * - 1/0
> > +      - V4L2_SEL_TGT_CROP
> > +      - \-
> > +      - Analogue crop. Analogue crop typically has a coarse granularity. This
> > +        rectangle is relative to the V4L2_SEL_TGT_NATIVE_SIZE rectangle on the
> > +        same (pad, stream).
> > +    * - 1/0
> > +      - V4L2_SEL_TGT_COMPOSE
> > +      - \-
> > +      - Binning. This rectangle is relative to the V4L2_SEL_TGT_CROP
> > +        rectangle on the same (pad, stream).
> 
> The size ratio between the V4L2_SEL_TGT_CROP and V4L2_SEL_TGT_COMPOSE
> rectangles selects the desired binning factor.
> 
> > +    * - 2/0
> > +      - Format
> > +      - X
> > +      - Embedded data format.
> > +    * - 0/0
> > +      - V4L2_SEL_TGT_CROP
> > +      - \-
> > +      - Digital crop. This rectangle is relative to the V4L2_SEL_TGT_COMPOSE
> > +        rectangle on (pad, stream) pair 1/0.
> > +    * - 0/0
> > +      - Format
> > +      - X
> > +      - Image data source format. The width and height fields of the format are
> > +        the same than for the V4L2_SEL_TGT crop rectangle on (pad, stream) pair

I think you meant "V4L2_SEL_TGT_CROP rectangle" here.

> > +        0/0 where as the media bus code reflects the pixel data output of the
> 
> s/where as/and ?
> Or maybe I didn't get what you mean
> 
> > +        sensor.
> > +    * - 0/1
> > +      - Format
> > +      - X
> > +      - Embedded data source format.

I think we also need to consider skipping.

> > +
> > +Embedded data
> > +^^^^^^^^^^^^^
> > +
> > +The embedded data stream is produced by the sensor when the corresponding route
> > +is enabled. The embedded data route may also be immutable or not exist at all,
> > +in case the sensor (or the driver) does not support it.
> > +
> > +Generally the sensor embedded data width is determined by the width of the image
> > +data whereas the number of lines are constant for the embedded data. The user
> > +space may obtain the size of the embedded data once the image data size on the
> > +source pad has been configured.
> > +
> > +Also see :ref:`media_using_camera_sensor_drivers_embedded_data`.
> > +
Sakari Ailus Nov. 13, 2024, 7:35 a.m. UTC | #3
Hi Jacopo, Laurent,

Thank you for the review.

On Wed, Oct 23, 2024 at 01:10:32AM +0300, Laurent Pinchart wrote:
> Hello,
> 
> On Tue, Oct 22, 2024 at 06:02:32PM +0200, Jacopo Mondi wrote:
> > On Fri, Oct 11, 2024 at 10:55:34AM +0300, Sakari Ailus wrote:
> > > Sub-device configuration models define what V4L2 API elements are
> > > available on a compliant sub-device and how do they behave.
> > >
> > > The patch also adds a model for common raw sensors.
> > >
> > > Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
> > > ---
> > >  .../media/drivers/camera-sensor.rst           |   5 +
> > >  .../media/v4l/common-raw-sensor.dia           | 441 ++++++++++++++++++
> > >  .../media/v4l/common-raw-sensor.svg           | 134 ++++++
> > >  .../userspace-api/media/v4l/dev-subdev.rst    |   2 +
> > >  .../media/v4l/subdev-config-model.rst         | 180 +++++++
> > >  5 files changed, 762 insertions(+)
> > >  create mode 100644 Documentation/userspace-api/media/v4l/common-raw-sensor.dia
> > >  create mode 100644 Documentation/userspace-api/media/v4l/common-raw-sensor.svg
> > >  create mode 100644 Documentation/userspace-api/media/v4l/subdev-config-model.rst
> > >
> > > diff --git a/Documentation/userspace-api/media/drivers/camera-sensor.rst b/Documentation/userspace-api/media/drivers/camera-sensor.rst
> > > index ad4049ff7eec..727cc12bc624 100644
> > > --- a/Documentation/userspace-api/media/drivers/camera-sensor.rst
> > > +++ b/Documentation/userspace-api/media/drivers/camera-sensor.rst
> > > @@ -18,6 +18,9 @@ binning functionality. The sensor drivers belong to two distinct classes, freely
> > >  configurable and register list based drivers, depending on how the driver
> > >  configures this functionality.
> > >
> > > +Also see
> > > +:ref:`media_subdev_config_model_common_raw_sensor`.
> 
> The line break doesn't seem necessary.

I agree.

> 
> > > +
> > >  Freely configurable camera sensor drivers
> > >  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > >
> > > @@ -105,6 +108,8 @@ values programmed by the register sequences. The default values of these
> > >  controls shall be 0 (disabled). Especially these controls shall not be inverted,
> > >  independently of the sensor's mounting rotation.
> > >
> > > +.. _media_using_camera_sensor_drivers_embedded_data:
> > > +
> > >  Embedded data
> > >  -------------
> > 
> > [snip images]
> 
> One day we should probably try to unify the look & feel of all diagrams
> in the V4L2 documentation. One day :-)

That day may or may not come. :-)

> 
> > > diff --git a/Documentation/userspace-api/media/v4l/dev-subdev.rst b/Documentation/userspace-api/media/v4l/dev-subdev.rst
> > > index dcfcbd52490d..4d145bd3bd09 100644
> > > --- a/Documentation/userspace-api/media/v4l/dev-subdev.rst
> > > +++ b/Documentation/userspace-api/media/v4l/dev-subdev.rst
> > > @@ -838,3 +838,5 @@ stream while it may be possible to enable and disable the embedded data stream.
> > >
> > >  The embedded data format does not need to be configured on the sensor's pads as
> > >  the format is dictated by the pixel data format in this case.
> > > +
> > > +.. include:: subdev-config-model.rst
> 
> I think we should reorganize the documentation at some point. Things are
> getting a bit too deep, with e.g. 4.13.5.1.2. That's for later.
> Splitting the configuration model to a separate file as done in this
> patch will be helpful.
> 
> > > diff --git a/Documentation/userspace-api/media/v4l/subdev-config-model.rst b/Documentation/userspace-api/media/v4l/subdev-config-model.rst
> > > new file mode 100644
> > > index 000000000000..8ec801998f5f
> > > --- /dev/null
> > > +++ b/Documentation/userspace-api/media/v4l/subdev-config-model.rst
> > > @@ -0,0 +1,180 @@
> > > +.. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-no-invariants-or-later
> > > +
> > > +Sub-device configuration models
> > > +===============================
> > > +
> > > +A sub-device configuration model specifies in detail what the user space can
> > > +expect from a sub-device in terms of V4L2 sub-device interface support,
> > > +including semantics specific to a given configuration model.
> > > +
> > > +A sub-device may implement more than one configuration model at the same
> > > +time.
> 
> I wonder how that would work. I assume that, with the control being
> read-only, you don't envision userspace selecting a model at runtime,
> but rather drivers implementing the union (or intersection ?) of
> multiple models. Do you have use cases in mind, or is this only to
> prepare for a hypothetical future where this could happen ?

Let's suppose we used configuration models to describe other aspects of the
sub-device than just which pads, links and streams may be present. A
number of things currently work implicitly, for instance whether a sensor
frame time is configured through controls or a dedicated IOCTL. I'm not
sure I'd add a configuration model for that exactly but something like that
seems entirely reasonable. It'd be orthogonal to the common raw sensor
model.

> 
> > The implemented configuration models can be obtained from the sub-device's
> > > +``V4L2_CID_CONFIG_MODEL`` control.
> > 
> > Isn't a control an overkill ? Isn't enough to identify that a sensor produces
> > RAW images and has an internal pad to the below description ?
> 
> V4L2 has a set of low-level, generic interfaces such as pads (external
> and internal), formats, selection rectangles, routes, and controls. We
> have some documentation about how those should be used, with a
> relatively generic model defined in the "Order of configuration and
> format propagation" section of Documentation/userspace-api/media/v4l/dev-subdev.rst.
> The different API elements are however mostly defined independently of
> each other (very little is said about interactions between controls and
> formats or selection rectangles for instance), and we don't document how
> they should be mapped to device features. This results in different
> drivers for similar devices using the API elements differently,
> sometimes in ways that breach the documented API. Userspace ends up
> implementing heuristics and pray that thing work.
> 
> The whole idea behind configuration models is to avoid heuristics.
> That's why we want an explicit API element to indicate what
> configuration model a device supports, and document that model
> explicitly in the kernel documentation.
> 
> > Also, would it be the single sensor driver that has to correctly
> > populate the control ?
> > 
> > > +
> > > +.. _media_subdev_config_model_common_raw_sensor:
> > > +
> > > +Common raw camera sensor model
> > > +------------------------------
> > > +
> > > +The common raw camera sensor model defines the configurability of a superset
> 
> A superset of what ? Reading the next sentence I suspect you mean that
> this model defines a set of features that compliant subdevs have to use
> as documented, but where some API elements could be left unused by a
> driver if the corresponding device doesn't implement the corresponding
> hardware feature. Clarifying this would be nice.

The intent is to say not everything may be implemented by the sensors but
there may be simpler ways to do that. How about:

The common raw camera sensor model defines a set of configuration options
(formats, selections etc.) that covers the vast majority of the functionality of raw
camera sensors. Not all of the configuration and enumeration interfaces are
offered by all drivers.

> 
> > > +that covers the vast majority of raw camera sensors. Not all of the
> > > +configuration and enumeration interfaces are offered by all drivers.
> > > +
> > > +A sub-device complies with the common raw sensor model if the
> > > +``V4L2_CONFIG_MODEL_COMMON_RAW`` bit is set in the ``V4L2_CID_CONFIG_MODEL``
> > > +control of the sub-device.
> > > +
> > > +The common raw camera sensor model is aligned with
> > > +:ref:`media_using_camera_sensor_drivers`. Please refer to that regarding aspects
> > > +not specified here.
> 
> Could we write this to emphasize the fact that
> media_using_camera_sensor_drivers defines concepts common to all camera
> sensors, while the model here focusses on how drivers expose API
> elements for this particular model ? "is aligned with" is a bit vague.

I think I'll just keep a reference here and add such a note to the other
file.

> 
> > > +
> > > +Each camera sensor implementing the common raw sensor model exposes a single
> > > +V4L2 sub-device. The sub-device contains a single source pad (0) and two or more
> > > +internal pads: an image data internal pad (1) and optionally an embedded data
> > > +pad (2). Additionally, further internal pads may be supported for other
> > > +features, in which case they are documented separately for the given device.
> > 
> > That's pretty easy to identify from userspace without a control, isn't
> > it ?
> 
> See above.
> 
> > > +
> > > +This is show in :ref:`media_subdev_config_model_common_raw_sensor_subdev`.
> 
> s/show/shown/
> 
> Maybe
> 
> This is show in the :ref:`media_subdev_config_model_common_raw_sensor_subdev`
> figure.
> 
> 
> Or is there a sphinx directive other than ref that would automate this ?

As these will be links, I'd just keep it as-is. Adding the note on what the
references are (figures, tables etc.) should be done separately from this,
including numbering.

> 
> > > +
> > > +.. _media_subdev_config_model_common_raw_sensor_subdev:
> > > +
> > > +.. kernel-figure:: common-raw-sensor.svg
> > > +    :alt:    common-raw-sensor.svg
> > > +    :align:  center
> > > +
> > > +    **Common raw sensor sub-device**
> > > +
> > > +Routes
> > > +^^^^^^
> > > +
> > > +A sub-device conforming to common raw camera sensor model implements the
> > > +following routes.
> > > +
> > > +.. flat-table:: Routes
> > > +    :header-rows: 1
> > > +
> > > +    * - Sink pad/stream
> > > +      - Source pad/stream
> > > +      - Static (X/M(aybe)/-)
> > 
> > afaiu either the route is Static (cannot be changed) or it is not.
> > What does Maybe means here ?

It depends on the device.

> >
> > > +      - Mandatory (X/-)
> > > +      - Synopsis
> > > +    * - 1/0
> > > +      - 0/0
> > > +      - X
> > > +      - X
> > > +      - Image data
> > > +    * - 2/0
> > > +      - 0/1
> > > +      - M
> > > +      - -
> 
> This is interpreted as a list, translated to
> 
> <ul class="simple">
> <li></li>
> </ul>
> 
> and rendered as •

The backslash was missing, I'll add one.

> 
> > > +      - Embedded data
> > > +
> > > +Some devices may support enabling and disabling the embedded data stream. Others
> > > +may not support it at all, in which case the embedded data route does not exist.
> 
> "may not support it at all" can be interpreted as "maybe not support
> enabling and disabling it at all", i.e. only support a static always
> enabled route. Based on the last sentence that's not what you mean, it
> could be reworded.

How about:

Some devices do not support the embedded data stream; others do support it, and
in some of the latter it can be turned on and off before streaming is started.
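
As a sketch of what user space could do with that (assuming the pad and stream
numbering of this model, and again the len_routes field from recent kernels;
devices with an immutable embedded data route would simply reject the change):

#include <stdbool.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/v4l2-subdev.h>

static int set_embedded_data(int subdev_fd, bool enable)
{
	struct v4l2_subdev_route routes[] = {
		/* Image data: internal pad 1/0 to source pad 0/0, always on. */
		{ .sink_pad = 1, .sink_stream = 0,
		  .source_pad = 0, .source_stream = 0,
		  .flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE },
		/* Embedded data: internal pad 2/0 to source pad 0/1. */
		{ .sink_pad = 2, .sink_stream = 0,
		  .source_pad = 0, .source_stream = 1,
		  .flags = enable ? V4L2_SUBDEV_ROUTE_FL_ACTIVE : 0 },
	};
	struct v4l2_subdev_routing routing = {
		.which = V4L2_SUBDEV_FORMAT_ACTIVE,
		.num_routes = 2,
		.len_routes = 2,
		.routes = (uintptr_t)routes,
	};

	return ioctl(subdev_fd, VIDIOC_SUBDEV_S_ROUTING, &routing);
}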

> 
> > 
> > Does the driver need to expose a routing table at all if it has a
> > single, immutable image data stream ?
> 
> I think it should. If there are internal pads, they should be an
> internal routing table, at the very least for consistency. It will make
> userspace simpler I believe (and we can check that statement with a
> libcamera implementation :-)).

I agree. The routing table does not only tell what's there, it also tells
what's not.

> 
> > > +
> > > +Sensor pixel array size, cropping and binning
> > > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > +
> > 
> > This is not something I was expecting here. We teach how to compute
> > framerates for RAW sensors in camera-sensor.rst ("Using camera sensor
> > drivers") specifying which controls a sensor driver should register and
> > the expected controls units. It seems to me we define part of the expected
> > interface exposed by a raw camera sensor there and part here. I wonder
> > if camera-sensor.rst makes any sense at all if we define the "models"
> > here.
> 
> Good question. I find the split to be a bit artificial indeed, and it's
> not clear where some of the parts should live. I think we still need a
> document that explains concept common to raw sensors, regardless of the
> model. We probably won't get the split entirely right on the first try,
> and that's fine with me, we can refine it later.

There are some aspects that are different from what is documented and
implemented elsewhere, such as some of the selection rectangles.

> 
> > > +The sensor's pixel array is divided into one or more areas. The areas around the
> > > +edge of the pixel array, usually one one or more sides, may contain optical
> 
> "one one" ?

One one might be enough indeed.

> 
> > > +black pixels, dummy pixels and other non-image pixels.
> > > +
> > > +A rectangle within the pixel area contains the visible pixels. Capturing the
> 
> Above you wrote "pixel array", and here "pixel area".

This should be pixel array, I'll fix it.

> 
> > > +non-visible pixels may be supported by the sensor.
> > 
> > This is a bit of simplification, as I presume there might be
> > rectangles of visible pixels which overlap in the most advanced use
> > cases.
> > https://git.libcamera.org/libcamera/libcamera.git/tree/src/libcamera/property_ids_core.yaml#n594
> 
> That's a more generic model. I'd like to see if there are sensors
> implementing it. Sakari, does CCS support that ?
> 
> This being said, we could restrict this subdev model to supporting only
> sensors with a rectangular visible pixels area.

The current frame format descriptors in CCS can't; I'm not sure if vendors
might prefer to be able to do that. The CSI-2 long packets (lines) still
need to be the same length.

In practice the visible pixel areas are rectangular, at least in those sensors
sensor I've seen, so perhaps it's fine to indeed assume that.

> 
> > > +
> > > +The sensor can perform three operations that affect the output image size. First
> > > +comes analogue crop. This configuration limits parts of the pixel array which
> > > +the sensor will read, affecting sensor timing as well. The granularity of the
> > > +analogue crop configuration varies greatly across sensors: some sensors support
> > > +a few different analogue crop configurations whereas others may support anything
> > > +divisible by a given number of pixels.
> > > +
> > > +The default analogue crop rectangle corresponds to the visible pixel area if
> > > +supported by the hardware.
> > 
> > In what sense "if supported by the hardware" ? Is this referring to
> > the "visibile pixel area" ?
> 
> I assume this means "if analogue crop is supported by the hardware".
> This should be clarified.

Yes, analogue crop is not supported by all sensors (nor drivers, even if
the sensor does). I'd just do: s/if.*//.

> 
> > > +
> > > +In the next step, binning is performed on the image data read from camera
> > > +sensor's pixel array. This will effectively result in an image smaller than the
> > > +original by given proportions horizontally and vertically. Typical values are
> > 
> > s/proportions/scaling factors/ ?

How about "binning factors"?

> > 
> > > +1/2 and 1/3 but others may well be supported by the hardware as well.
> > > +
> > > +The combination of the analogue crop and binning operations may result in an
> > > +image size that may be larger than desirable. For this purpose, a digital crop
> > 
> > This is highly optional it seems.
> > 
> > > +operation may be performed on the binned image. The resulting image size is
> > > +further outputted by the sensor.
> 
> s/outputted/output/ ? I think "outputted" is correct, but I believe it's
> less common.

Given how "put" inflects, it probably should be "output" indeed.

> 
> > > +
> > > +.. flat-table:: Selection targets on pads
> > > +    :header-rows: 1
> > > +
> > > +    * - Pad/Stream
> > > +      - Selection target/format
> > > +      - Mandatory (X/-)
> > > +      - Synopsis
> > 
> > What about an R/W column ?
> > 
> > > +    * - 1/0
> > > +      - Format
> > > +      - X
> > > +      - Image data format. The width and height fields of this format are the
> > > +        same than those for the V4L2_SEL_TGT_CROP_BOUNDS rectangle. The media
> > 
> > Can sizes be changed at all ?
> 
> For the internal image pad, I wouldn't think so. I think it should
> expose the full readable array. Or, reading what you wrote below, do you
> envision this including non-readable pixels too ?

I don't -- they can't be accessed in any case. Timing information (such as
the latching points) should probably be conveyed using a different API if
they're needed.

> 
> Mapping pixel array definitions from datasheets to formats and selection
> rectangles is hard, as datasheets, when they are available, are often
> confusing. Datasheets from different vendors, or even for different
> sensors of the same vendor, often have different models. I think we need
> to be extra precise here, and possibly work with sensor vendors. Using
> the CCS model as a base could be useful.

I wouldn't necessarily try to convey what non-visible pixels might be there,
apart from in the documentation. These tend to be fairly vendor/sensor
specific after all.

> 
> > > +        bus code of this format reflects the native pixel depth of the sensor.
> > > +    * - 1/0
> > > +      - V4L2_SEL_TGT_NATIVE_SIZE
> > > +      - X
> > > +      - The full size of the pixel array, including all pixels in the pixel
> > > +	array, even if they cannot be captured. This rectangle is relative to
> > > +	the format on the same (pad, stream).
> 
> Mix of tabs and spaces.
> 
> What do you mean by "relative to the format on the same (pad, stream)" ?
> Does it mean the rectangle is expressed relatively to a rectangle
> defined as (0,0),(fmt.width x fmt.height) ? That doesn't seem to match

Yes.

> the description of the format and V4L2_SEL_TGT_CROP_BOUNDS, as the
> format is defined as having the same width and height as
> V4L2_SEL_TGT_CROP_BOUNDS, and V4L2_SEL_TGT_CROP_BOUNDS is defined as
> being relative to V4L2_SEL_TGT_NATIVE_SIZE. I assume you mean something
> else, it should be clarified.

As discussed, I'll drop the NATIVE_SIZE rectangle from the documentation as
redundant.

> 
> > > +    * - 1/0
> > > +      - V4L2_SEL_TGT_CROP_BOUNDS
> > > +      - X
> > > +      - The crop rectangle bounds. No pixels outside this area can be
> > 
> > I would describe it as "the readable part of the full pixel array
> > area" instead of repeating "crop rectangle bounds"
> 
> Agreed. I would also explicitly state this can include black pixels. I
> wonder if we should also define border pixels in this document.

I wouldn't try to convey in APIs what qualities different non-visible
pixels might have or where they are located, just that they're non-visible
pixels.

> 
> > > +        captured. This rectangle is relative to the V4L2_SEL_TGT_NATIVE_SIZE
> > 
> > > +    * - 1/0
> > > +      - V4L2_SEL_TGT_CROP_DEFAULT
> > > +      - X
> > > +      - The visible pixel area. This rectangle is relative to the
> > 
> > Isn't this the default analogue crop rectangle ?
> 
> I assume so, and it should be documented as such explicitly. Also, if
> the default crop rectangle doesn't include some of the readable pixels, 

It does indeed say this is the "visible pixel area". I can elaborate this in
the analogue crop documentation -- it's possible the sensor doesn't support
capturing just visible pixels, even if it's unlikely.

> 
> > > +        V4L2_SEL_TGT_NATIVE_SIZE rectangle on the same (pad, stream).
> > > +    * - 1/0
> > > +      - V4L2_SEL_TGT_CROP
> > > +      - \-
> > > +      - Analogue crop. Analogue crop typically has a coarse granularity. This
> > > +        rectangle is relative to the V4L2_SEL_TGT_NATIVE_SIZE rectangle on the
> > > +        same (pad, stream).
> > > +    * - 1/0
> > > +      - V4L2_SEL_TGT_COMPOSE
> > > +      - \-
> > > +      - Binning. This rectangle is relative to the V4L2_SEL_TGT_CROP
> > > +        rectangle on the same (pad, stream).
> > 
> > The size ratio between the V4L2_SEL_TGT_CROP and V4L2_SEL_TGT_COMPOSE
> > rectangles selects the desired binning factor.
> > 
> > > +    * - 2/0
> > > +      - Format
> > > +      - X
> > > +      - Embedded data format.
> > > +    * - 0/0
> > > +      - V4L2_SEL_TGT_CROP
> > > +      - \-
> > > +      - Digital crop. This rectangle is relative to the V4L2_SEL_TGT_COMPOSE
> > > +        rectangle on (pad, stream) pair 1/0.
> > > +    * - 0/0
> > > +      - Format
> > > +      - X
> > > +      - Image data source format. The width and height fields of the format are
> > > +        the same than for the V4L2_SEL_TGT crop rectangle on (pad, stream) pair
> 
> I think you meant "V4L2_SEL_TGT_CROP rectangle" here.

Yes.

> 
> > > +        0/0 where as the media bus code reflects the pixel data output of the
> > 
> > s/where as/and ?
> > Or maybe I didn't get what you mean

"And" works here, yes.

> > 
> > > +        sensor.
> > > +    * - 0/1
> > > +      - Format
> > > +      - X
> > > +      - Embedded data source format.
> 
> I think we also need to consider skipping.

I'll add sub-sampling in v2.

> 
> > > +
> > > +Embedded data
> > > +^^^^^^^^^^^^^
> > > +
> > > +The embedded data stream is produced by the sensor when the corresponding route
> > > +is enabled. The embedded data route may also be immutable or not exist at all,
> > > +in case the sensor (or the driver) does not support it.
> > > +
> > > +Generally the sensor embedded data width is determined by the width of the image
> > > +data whereas the number of lines are constant for the embedded data. The user
> > > +space may obtain the size of the embedded data once the image data size on the
> > > +source pad has been configured.
> > > +
> > > +Also see :ref:`media_using_camera_sensor_drivers_embedded_data`.
> > > +
>
Jacopo Mondi Nov. 13, 2024, 8:20 a.m. UTC | #4
Hi Sakari

On Wed, Nov 13, 2024 at 07:35:46AM +0000, Sakari Ailus wrote:
> Hi Jacopo, Laurent,
>
> Thank you for the review.
>
> On Wed, Oct 23, 2024 at 01:10:32AM +0300, Laurent Pinchart wrote:
> > Hello,
> >
> > On Tue, Oct 22, 2024 at 06:02:32PM +0200, Jacopo Mondi wrote:
> > > On Fri, Oct 11, 2024 at 10:55:34AM +0300, Sakari Ailus wrote:
> > > > Sub-device configuration models define what V4L2 API elements are
> > > > available on a compliant sub-device and how do they behave.
> > > >
> > > > The patch also adds a model for common raw sensors.
> > > >
> > > > Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
> > > > ---
> > > >  .../media/drivers/camera-sensor.rst           |   5 +
> > > >  .../media/v4l/common-raw-sensor.dia           | 441 ++++++++++++++++++
> > > >  .../media/v4l/common-raw-sensor.svg           | 134 ++++++
> > > >  .../userspace-api/media/v4l/dev-subdev.rst    |   2 +
> > > >  .../media/v4l/subdev-config-model.rst         | 180 +++++++
> > > >  5 files changed, 762 insertions(+)
> > > >  create mode 100644 Documentation/userspace-api/media/v4l/common-raw-sensor.dia
> > > >  create mode 100644 Documentation/userspace-api/media/v4l/common-raw-sensor.svg
> > > >  create mode 100644 Documentation/userspace-api/media/v4l/subdev-config-model.rst
> > > >
> > > > diff --git a/Documentation/userspace-api/media/drivers/camera-sensor.rst b/Documentation/userspace-api/media/drivers/camera-sensor.rst
> > > > index ad4049ff7eec..727cc12bc624 100644
> > > > --- a/Documentation/userspace-api/media/drivers/camera-sensor.rst
> > > > +++ b/Documentation/userspace-api/media/drivers/camera-sensor.rst
> > > > @@ -18,6 +18,9 @@ binning functionality. The sensor drivers belong to two distinct classes, freely
> > > >  configurable and register list based drivers, depending on how the driver
> > > >  configures this functionality.
> > > >
> > > > +Also see
> > > > +:ref:`media_subdev_config_model_common_raw_sensor`.
> >
> > The line break doesn't seem necessary.
>
> I agree.
>
> >
> > > > +
> > > >  Freely configurable camera sensor drivers
> > > >  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > > >
> > > > @@ -105,6 +108,8 @@ values programmed by the register sequences. The default values of these
> > > >  controls shall be 0 (disabled). Especially these controls shall not be inverted,
> > > >  independently of the sensor's mounting rotation.
> > > >
> > > > +.. _media_using_camera_sensor_drivers_embedded_data:
> > > > +
> > > >  Embedded data
> > > >  -------------
> > >
> > > [snip images]
> >
> > One day we should probably try to unify the look & feel of all diagrams
> > in the V4L2 documentation. One day :-)
>
> That day may or may not come. :-)
>
> >
> > > > diff --git a/Documentation/userspace-api/media/v4l/dev-subdev.rst b/Documentation/userspace-api/media/v4l/dev-subdev.rst
> > > > index dcfcbd52490d..4d145bd3bd09 100644
> > > > --- a/Documentation/userspace-api/media/v4l/dev-subdev.rst
> > > > +++ b/Documentation/userspace-api/media/v4l/dev-subdev.rst
> > > > @@ -838,3 +838,5 @@ stream while it may be possible to enable and disable the embedded data stream.
> > > >
> > > >  The embedded data format does not need to be configured on the sensor's pads as
> > > >  the format is dictated by the pixel data format in this case.
> > > > +
> > > > +.. include:: subdev-config-model.rst
> >
> > I think we should reorganize the documentation at some point. Things are
> > getting a bit too deep, with e.g. 4.13.5.1.2. That's for later.
> > Splitting the configuration model to a separate file as done in this
> > patch will be helpful.
> >
> > > > diff --git a/Documentation/userspace-api/media/v4l/subdev-config-model.rst b/Documentation/userspace-api/media/v4l/subdev-config-model.rst
> > > > new file mode 100644
> > > > index 000000000000..8ec801998f5f
> > > > --- /dev/null
> > > > +++ b/Documentation/userspace-api/media/v4l/subdev-config-model.rst
> > > > @@ -0,0 +1,180 @@
> > > > +.. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-no-invariants-or-later
> > > > +
> > > > +Sub-device configuration models
> > > > +===============================
> > > > +
> > > > +A sub-device configuration model specifies in detail what the user space can
> > > > +expect from a sub-device in terms of V4L2 sub-device interface support,
> > > > +including semantics specific to a given configuration model.
> > > > +
> > > > +A sub-device may implement more than one configuration model at the same
> > > > +time.
> >
> > I wonder how that would work. I assume that, with the control being
> > read-only, you don't envision userspace selecting a model at runtime,
> > but rather drivers implementing the union (or intersection ?) of
> > multiple models. Do you have use cases in mind, or is this only to
> > prepare for a hypothetical future where this could happen ?
>
> Let's suppose we would use configuration models to tell about other parts
> of the sub-device than just what pads, links and streams may be present. A
> number of things currently work implicitly, for instance whether a sensor
> frame time is configured through controls or a dedicated IOCTL. I'm not
> sure I'd add a configuration model for that exactly but something like that
> seems entirely reasonable. It'd be orthogonal to the common raw sensor
> model.
>
> >
> > > > +The implemented configuration models can be obtained from the sub-device's
> > > > +``V4L2_CID_CONFIG_MODEL`` control.
> > >
> > > Isn't a control overkill ? Isn't it enough to identify that a sensor produces
> > > RAW images and has an internal pad, as in the description below ?
> >
> > V4L2 has a set of low-level, generic interfaces such as pads (external
> > and internal), formats, selection rectangles, routes, and controls. We
> > have some documentation about how those should be used, with a
> > relatively generic model defined in the "Order of configuration and
> > format propagation" section of Documentation/userspace-api/media/v4l/dev-subdev.rst.
> > The different API elements are however mostly defined independently of
> > each other (very little is said about interactions between controls and
> > formats or selection rectangles for instance), and we don't document how
> > they should be mapped to device features. This results in different
> > drivers for similar devices using the API elements differently,
> > sometimes in ways that breach the documented API. Userspace ends up
> > implementing heuristics and pray that thing work.
> >
> > The whole idea behind configuration models is to avoid heuristics.
> > That's why we want an explicit API element to indicate what
> > configuration model a device supports, and document that model
> > explicitly in the kernel documentation.
> >
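For reference, from the userspace side I picture the check looking roughly
like the below (untested, and V4L2_CID_CONFIG_MODEL /
V4L2_CONFIG_MODEL_COMMON_RAW are the names proposed in this series, so they
are provisional; I'm also assuming a 64-bit bitmask control here):

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Returns true if the sub-device advertises the common raw sensor model. */
static bool is_common_raw_sensor(int subdev_fd)
{
        struct v4l2_ext_control ctrl = {
                .id = V4L2_CID_CONFIG_MODEL,    /* proposed in this series */
        };
        struct v4l2_ext_controls ctrls = {
                .which = V4L2_CTRL_WHICH_CUR_VAL,
                .count = 1,
                .controls = &ctrl,
        };

        if (ioctl(subdev_fd, VIDIOC_G_EXT_CTRLS, &ctrls) < 0)
                return false;

        /* Assumes the control is a 64-bit bitmask; adjust if it ends up
         * being defined differently. */
        return ctrl.value64 & V4L2_CONFIG_MODEL_COMMON_RAW;
}
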
> > > Also, would it be the single sensor driver that has to correctly
> > > populate the control ?
> > >
> > > > +
> > > > +.. _media_subdev_config_model_common_raw_sensor:
> > > > +
> > > > +Common raw camera sensor model
> > > > +------------------------------
> > > > +
> > > > +The common raw camera sensor model defines the configurability of a superset
> >
> > A superset of what ? Reading the next sentence I suspect you mean that
> > this model defines a set of features that compliant subdevs have to use
> > as documented, but where some API elements could be left unused by a
> > driver if the corresponding device doesn't implement the corresponding
> > hardware feature. Clarifying this would be nice.
>
> The intent is to say that not everything may be implemented by the sensors,
> but there may be simpler ways to say that. How about:
>
> The common raw camera sensor model defines a set of configuration options
> (formats, selections etc.) that cover the vast majority of functionality of raw
> camera sensors. Not all of the configuration and enumeration interfaces are
> offered by all drivers.
>
> >
> > > > +that covers the vast majority of raw camera sensors. Not all of the
> > > > +configuration and enumeration interfaces are offered by all drivers.
> > > > +
> > > > +A sub-device complies with the common raw sensor model if the
> > > > +``V4L2_CONFIG_MODEL_COMMON_RAW`` bit is set in the ``V4L2_CID_CONFIG_MODEL``
> > > > +control of the sub-device.
> > > > +
> > > > +The common raw camera sensor model is aligned with
> > > > +:ref:`media_using_camera_sensor_drivers`. Please refer to that regarding aspects
> > > > +not specified here.
> >
> > Could we write this to emphasize the fact that
> > media_using_camera_sensor_drivers defines concepts common to all camera
> > sensors, while the model here focusses on how drivers expose API
> > elements for this particular model ? "is aligned with" is a bit vague.
>
> I think I'll just keep a reference here and add such a note to the other
> file.
>
> >
> > > > +
> > > > +Each camera sensor implementing the common raw sensor model exposes a single
> > > > +V4L2 sub-device. The sub-device contains a single source pad (0) and two or more
> > > > +internal pads: an image data internal pad (1) and optionally an embedded data
> > > > +pad (2). Additionally, further internal pads may be supported for other
> > > > +features, in which case they are documented separately for the given device.
> > >
> > > That's pretty easy to identify from userspace without a control, isn't
> > > it ?
> >
> > See above.
> >
> > > > +
> > > > +This is show in :ref:`media_subdev_config_model_common_raw_sensor_subdev`.
> >
> > s/show/shown/
> >
> > Maybe
> >
> > This is show in the :ref:`media_subdev_config_model_common_raw_sensor_subdev`
> > figure.
> >
> >
> > Or is there a sphinx directive other than ref that would automate this ?
>
> As these will be links, I'd just keep it as-is. Adding the note on what the
> references are (figures, tables etc.) should be done separately from this,
> including numbering.
>
> >
> > > > +
> > > > +.. _media_subdev_config_model_common_raw_sensor_subdev:
> > > > +
> > > > +.. kernel-figure:: common-raw-sensor.svg
> > > > +    :alt:    common-raw-sensor.svg
> > > > +    :align:  center
> > > > +
> > > > +    **Common raw sensor sub-device**
> > > > +
> > > > +Routes
> > > > +^^^^^^
> > > > +
> > > > +A sub-device conforming to common raw camera sensor model implements the
> > > > +following routes.
> > > > +
> > > > +.. flat-table:: Routes
> > > > +    :header-rows: 1
> > > > +
> > > > +    * - Sink pad/stream
> > > > +      - Source pad/stream
> > > > +      - Static (X/M(aybe)/-)
> > >
> > > afaiu either the route is Static (cannot be changed) or it is not.
> > > What does Maybe mean here ?
>
> It depends on the device.
>
> > >
> > > > +      - Mandatory (X/-)
> > > > +      - Synopsis
> > > > +    * - 1/0
> > > > +      - 0/0
> > > > +      - X
> > > > +      - X
> > > > +      - Image data
> > > > +    * - 2/0
> > > > +      - 0/1
> > > > +      - M
> > > > +      - -
> >
> > This is interpreted as a list, translated to
> >
> > <ul class="simple">
> > <li></li>
> > </ul>
> >
> > and rendered as •
>
> The backslash was missing, I'll add one.
>
> >
> > > > +      - Embedded data
> > > > +
> > > > +Some devices may support enabling and disabling the embedded data stream. Others
> > > > +may not support it at all, in which case the embedded data route does not exist.
> >
> > "may not support it at all" can be interpreted as "maybe not support
> > enabling and disabling it at all", i.e. only support a static always
> > enabled route. Based on the last sentence that's not what you mean, it
> > could be reworded.
>
> How about:
>
> Some devices do not support the embedded data stream, others do support it and
> in some of the latter, it can be turned on and off before streaming is started.
>
> >
> > >
> > > Does the driver need to expose a routing table at all if it has a
> > > single, immutable image data stream ?
> >
> > I think it should. If there are internal pads, there should be an
> > internal routing table, at the very least for consistency. It will make
> > userspace simpler I believe (and we can check that statement with a
> > libcamera implementation :-)).
>
> I agree. The routing table does not only tell what's there, it also tells
> what's not.
>
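For reference, on the userspace side I'd then expect enabling the optional
embedded data route before streaming to look more or less like this
(untested sketch; pad/stream numbers follow the table above, and the struct
field names are the ones in recent uAPI headers):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/v4l2-subdev.h>

static int enable_embedded_data(int subdev_fd)
{
        struct v4l2_subdev_client_capability cap = {
                .capabilities = V4L2_SUBDEV_CLIENT_CAP_STREAMS,
        };
        struct v4l2_subdev_route routes[8] = {};
        struct v4l2_subdev_routing routing = {
                .which = V4L2_SUBDEV_FORMAT_ACTIVE,
                .len_routes = 8,        /* capacity of the routes array */
                .routes = (uintptr_t)routes,
        };
        unsigned int i;

        /* Opt in to the streams API. */
        if (ioctl(subdev_fd, VIDIOC_SUBDEV_S_CLIENT_CAP, &cap) < 0)
                return -1;

        if (ioctl(subdev_fd, VIDIOC_SUBDEV_G_ROUTING, &routing) < 0)
                return -1;

        for (i = 0; i < routing.num_routes; i++) {
                /* Embedded data: internal pad 2/0 -> source pad 0/1. */
                if (routes[i].sink_pad == 2 && routes[i].source_pad == 0 &&
                    routes[i].source_stream == 1)
                        routes[i].flags |= V4L2_SUBDEV_ROUTE_FL_ACTIVE;
        }

        return ioctl(subdev_fd, VIDIOC_SUBDEV_S_ROUTING, &routing);
}
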
> >
> > > > +
> > > > +Sensor pixel array size, cropping and binning
> > > > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > > +
> > >
> > > This is not something I was expecting here. We teach how to compute
> > > framerates for RAW sensors in camera-sensor.rst ("Using camera sensor
> > > drivers") specifying which controls a sensor driver should register and
> > > the expected controls units. It seems to me we define part of the expected
> > > interface exposed by a raw camera sensor there and part here. I wonder
> > > if camera-sensor.rst makes any sense at all if we define the "models"
> > > here.
> >
> > Good question. I find the split to be a bit artificial indeed, and it's
> > not clear where some of the parts should live. I think we still need a
> > document that explains concepts common to raw sensors, regardless of the
> > model. We probably won't get the split entirely right on the first try,
> > and that's fine with me, we can refine it later.
>
> There are some aspects that are different from what is documented and
> implemented elsewhere, such as some of the selection rectangles.
>
> >
> > > > +The sensor's pixel array is divided into one or more areas. The areas around the
> > > > +edge of the pixel array, usually one one or more sides, may contain optical
> >
> > "one one" ?
>
> One one might be enough indeed.
>
> >
> > > > +black pixels, dummy pixels and other non-image pixels.
> > > > +
> > > > +A rectangle within the pixel area contains the visible pixels. Capturing the
> >
> > Above you wrote "pixel array", and here "pixel area".
>
> This should be pixel array, I'll fix it.
>
> >
> > > > +non-visible pixels may be supported by the sensor.
> > >
> > > This is a bit of a simplification, as I presume there might be
> > > rectangles of visible pixels which overlap in the most advanced use
> > > cases.
> > > https://git.libcamera.org/libcamera/libcamera.git/tree/src/libcamera/property_ids_core.yaml#n594
> >
> > That's a more generic model. I'd like to see if there are sensors
> > implementing it. Sakari, does CCS support that ?
> >
> > This being said, we could restrict this subdev model to supporting only
> > sensors with a rectangular visible pixels area.
>
> The current frame format descriptors in CCS can't; I'm not sure whether
> vendors might prefer to be able to do that. The CSI-2 long packets (lines) still
> need to be the same length.
>
> In practice the visible pixel areas are rectangular, at least in those
> sensors I've seen, so perhaps it's fine to indeed assume that.
>
> >
> > > > +
> > > > +The sensor can perform three operations that affect the output image size. First
> > > > +comes analogue crop. This configuration limits parts of the pixel array which
> > > > +the sensor will read, affecting sensor timing as well. The granularity of the
> > > > +analogue crop configuration varies greatly across sensors: some sensors support
> > > > +a few different analogue crop configurations whereas others may support anything
> > > > +divisible by a given number of pixels.
> > > > +
> > > > +The default analogue crop rectangle corresponds to the visible pixel area if
> > > > +supported by the hardware.
> > >
> > > In what sense "if supported by the hardware" ? Is this referring to
> > > the "visibile pixel area" ?
> >
> > I assume this means "if analogue crop is supported by the hardware".
> > This should be clarified.
>
> Yes, analogue crop is not supported by all sensors (nor drivers, even if
> the sensor does). I'd just do: s/if.*//.
>
> >
> > > > +
> > > > +In the next step, binning is performed on the image data read from camera
> > > > +sensor's pixel array. This will effectively result in an image smaller than the
> > > > +original by given proportions horizontally and vertically. Typical values are
> > >
> > > s/proportions/scaling factors/ ?
>
> How about "binning factors"?
>

ack
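
Just to double-check I read the model right: with the selection targets
defined below, a 2x2 binned capture would be configured on the image data
internal pad roughly as follows (untested, the sizes are made-up example
values, and I'm assuming the streams API has been enabled on the fd):

#include <sys/ioctl.h>
#include <linux/v4l2-subdev.h>

static int configure_2x2_binning(int subdev_fd)
{
        /* Analogue crop on the image data internal pad (1/0). */
        struct v4l2_subdev_selection crop = {
                .which = V4L2_SUBDEV_FORMAT_ACTIVE,
                .pad = 1,
                .stream = 0,
                .target = V4L2_SEL_TGT_CROP,
                .r = { .left = 0, .top = 0, .width = 4048, .height = 3040 },
        };
        /* Binning: the CROP/COMPOSE size ratio gives the binning factor. */
        struct v4l2_subdev_selection compose = {
                .which = V4L2_SUBDEV_FORMAT_ACTIVE,
                .pad = 1,
                .stream = 0,
                .target = V4L2_SEL_TGT_COMPOSE,
                .r = { .left = 0, .top = 0, .width = 2024, .height = 1520 },
        };

        if (ioctl(subdev_fd, VIDIOC_SUBDEV_S_SELECTION, &crop) < 0)
                return -1;

        return ioctl(subdev_fd, VIDIOC_SUBDEV_S_SELECTION, &compose);
}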

> > >
> > > > +1/2 and 1/3 but others may well be supported by the hardware as well.
> > > > +
> > > > +The combination of the analogue crop and binning operations may result in an
> > > > +image size that may be larger than desirable. For this purpose, a digital crop
> > >
> > > This is highly optional it seems.
> > >
> > > > +operation may be performed on the binned image. The resulting image size is
> > > > +further outputted by the sensor.
> >
> > s/outputted/output/ ? I think "outputted" is correct, but I believe it's
> > less common.
>
> Given how "put" inflects, it probably should be "output" indeed.
>
> >
> > > > +
> > > > +.. flat-table:: Selection targets on pads
> > > > +    :header-rows: 1
> > > > +
> > > > +    * - Pad/Stream
> > > > +      - Selection target/format
> > > > +      - Mandatory (X/-)
> > > > +      - Synopsis
> > >
> > > What about an R/W column ?
> > >
> > > > +    * - 1/0
> > > > +      - Format
> > > > +      - X
> > > > +      - Image data format. The width and height fields of this format are the
> > > > +        same than those for the V4L2_SEL_TGT_CROP_BOUNDS rectangle. The media
> > >
> > > Can sizes be changed at all ?
> >
> > For the internal image pad, I wouldn't think so. I think it should
> > expose the full readable array. Or, reading what you wrote below, do you
> > envision this including non-readable pixels too ?
>
> I don't -- they can't be accessed in any case. Timing information (such as
> the latching points) should probably be conveyed using a different API if
> they're needed.
>

Ok, what I was suggesting was to explicitly say those sizes are not
modifiable.
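
I.e. userspace would only ever read it, along the lines of the below
(untested; pad 1/stream 0 being the image data internal pad in the proposed
model):

#include <sys/ioctl.h>
#include <linux/v4l2-subdev.h>

static int get_readable_array_size(int subdev_fd, unsigned int *width,
                                   unsigned int *height)
{
        struct v4l2_subdev_format fmt = {
                .which = V4L2_SUBDEV_FORMAT_ACTIVE,
                .pad = 1,       /* image data internal pad */
                .stream = 0,
        };

        if (ioctl(subdev_fd, VIDIOC_SUBDEV_G_FMT, &fmt) < 0)
                return -1;

        /* Full readable pixel array size and native media bus code; a
         * VIDIOC_SUBDEV_S_FMT with a different size would simply be
         * adjusted back by the driver. */
        *width = fmt.format.width;
        *height = fmt.format.height;

        return 0;
}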

> >
> > Mapping pixel array definitions from datasheets to formats and selection
> > rectangles is hard, as datasheets, when they are available, are often
> > confusing. Datasheets from different vendors, or even for different
> > sensors of the same vendor, often have different models. I think we need
> > to be extra precise here, and possibly work with sensor vendors. Using
> > the CCS model as a base could be useful.
>
> I wouldn't necessarily try to convey what non-visible pixels might be there,
> apart from in the documentation. These tend to be fairly vendor/sensor
> specific after all.
>

In the whole document, I would use one of "non-readable" and
"non-visible".

> >
> > > > +        bus code of this format reflects the native pixel depth of the sensor.
> > > > +    * - 1/0
> > > > +      - V4L2_SEL_TGT_NATIVE_SIZE
> > > > +      - X
> > > > +      - The full size of the pixel array, including all pixels in the pixel
> > > > +	array, even if they cannot be captured. This rectangle is relative to
> > > > +	the format on the same (pad, stream).
> >
> > Mix of tabs and spaces.
> >
> > What do you mean by "relative to the format on the same (pad, stream)" ?
> > Does it mean the rectangle is expressed relatively to a rectangle
> > defined as (0,0),(fmt.width x fmt.height) ? That doesn't seem to match
>
> Yes.
>
> > the description of the format and V4L2_SEL_TGT_CROP_BOUNDS, as the
> > format is defined as having the same width and height as
> > V4L2_SEL_TGT_CROP_BOUNDS, and V4L2_SEL_TGT_CROP_BOUNDS is defined as
> > being relative to V4L2_SEL_TGT_NATIVE_SIZE. I assume you mean something
> > else, it should be clarified.
>
> As discussed, I'll drop the NATIVE_SIZE rectangle from the documentation as
> redundant.
>

Doesn't it help reporting the full pixel array size (readable and
non-readable pixels)? We care about that information in libcamera,
also to comply with Android's requirement to expose the full pixel
array size as a property of the camera.

I would just drop the "relative" part. NATIVE is a rectangle with
top-left corner in position (0,0).

> >
> > > > +    * - 1/0
> > > > +      - V4L2_SEL_TGT_CROP_BOUNDS
> > > > +      - X
> > > > +      - The crop rectangle bounds. No pixels outside this area can be
> > >
> > > I would describe it as "the readable part of the full pixel array
> > > area" instead of repeating "crop rectangle bounds"
> >
> > Agreed. I would also explicitly state this can include black pixels. I
> > wonder if we should also define border pixels in this document.
>
> I wouldn't try to convey in APIs what qualities different non-visible
> pixels might have or where they are located, just that they're non-visible
> pixels.
>
> >
> > > > +        captured. This rectangle is relative to the V4L2_SEL_TGT_NATIVE_SIZE
> > >
> > > > +    * - 1/0
> > > > +      - V4L2_SEL_TGT_CROP_DEFAULT
> > > > +      - X
> > > > +      - The visible pixel area. This rectangle is relative to the
> > >
> > > Isn't this the default analogue crop rectangle ?
> >
> > I assume so, and it should be documented as such explicitly. Also, if
> > the default crop rectangle doesn't include some of the readable pixels,
>
> It does indeed say this is the "visible pixel area". I can elaborate on this in
> the analogue crop documentation -- it's possible the sensor doesn't support
> capturing just visible pixels, even if it's unlikely.
>
> >
> > > > +        V4L2_SEL_TGT_NATIVE_SIZE rectangle on the same (pad, stream).
> > > > +    * - 1/0
> > > > +      - V4L2_SEL_TGT_CROP
> > > > +      - \-
> > > > +      - Analogue crop. Analogue crop typically has a coarse granularity. This
> > > > +        rectangle is relative to the V4L2_SEL_TGT_NATIVE_SIZE rectangle on the
> > > > +        same (pad, stream).
> > > > +    * - 1/0
> > > > +      - V4L2_SEL_TGT_COMPOSE
> > > > +      - \-
> > > > +      - Binning. This rectangle is relative to the V4L2_SEL_TGT_CROP
> > > > +        rectangle on the same (pad, stream).
> > >
> > > The size ratio between the V4L2_SEL_TGT_CROP and V4L2_SEL_TGT_COMPOSE
> > > rectangles selects the desired binning factor.
> > >
> > > > +    * - 2/0
> > > > +      - Format
> > > > +      - X
> > > > +      - Embedded data format.
> > > > +    * - 0/0
> > > > +      - V4L2_SEL_TGT_CROP
> > > > +      - \-
> > > > +      - Digital crop. This rectangle is relative to the V4L2_SEL_TGT_COMPOSE
> > > > +        rectangle on (pad, stream) pair 1/0.
> > > > +    * - 0/0
> > > > +      - Format
> > > > +      - X
> > > > +      - Image data source format. The width and height fields of the format are
> > > > +        the same than for the V4L2_SEL_TGT crop rectangle on (pad, stream) pair
> >
> > I think you meant "V4L2_SEL_TGT_CROP rectangle" here.
>
> Yes.
>
> >
> > > > +        0/0 where as the media bus code reflects the pixel data output of the
> > >
> > > s/where as/and ?
> > > Or maybe I didn't get what you mean
>
> And works here, yes.
>
> > >
> > > > +        sensor.
> > > > +    * - 0/1
> > > > +      - Format
> > > > +      - X
> > > > +      - Embedded data source format.
> >
> > I think we also need to consider skipping.
>
> I'll add sub-sampling in v2.
>
> >
> > > > +
> > > > +Embedded data
> > > > +^^^^^^^^^^^^^
> > > > +
> > > > +The embedded data stream is produced by the sensor when the corresponding route
> > > > +is enabled. The embedded data route may also be immutable or not exist at all,
> > > > +in case the sensor (or the driver) does not support it.
> > > > +
> > > > +Generally the sensor embedded data width is determined by the width of the image
> > > > +data whereas the number of lines are constant for the embedded data. The user
> > > > +space may obtain the size of the embedded data once the image data size on the
> > > > +source pad has been configured.
> > > > +
> > > > +Also see :ref:`media_using_camera_sensor_drivers_embedded_data`.
> > > > +
> >
>
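For completeness, reading back the embedded data size once the image size on
the source pad has been configured would then simply be (untested; stream 1
on the source pad being the embedded data stream):

#include <sys/ioctl.h>
#include <linux/v4l2-subdev.h>

static int get_embedded_data_size(int subdev_fd, unsigned int *width,
                                  unsigned int *lines)
{
        struct v4l2_subdev_format ed_fmt = {
                .which = V4L2_SUBDEV_FORMAT_ACTIVE,
                .pad = 0,       /* source pad */
                .stream = 1,    /* embedded data stream */
        };

        if (ioctl(subdev_fd, VIDIOC_SUBDEV_G_FMT, &ed_fmt) < 0)
                return -1;

        /* The width follows the configured image width; the height is the
         * sensor's fixed number of embedded data lines. */
        *width = ed_fmt.format.width;
        *lines = ed_fmt.format.height;

        return 0;
}
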
> --
> Kind regards,
>
> Sakari Ailus
Sakari Ailus Nov. 13, 2024, 10:18 a.m. UTC | #5
Hi Jacopo,

On Wed, Nov 13, 2024 at 09:20:32AM +0100, Jacopo Mondi wrote:
> > > > > +    * - 1/0
> > > > > +      - Format
> > > > > +      - X
> > > > > +      - Image data format. The width and height fields of this format are the
> > > > > +        same than those for the V4L2_SEL_TGT_CROP_BOUNDS rectangle. The media
> > > >
> > > > Can sizes be changed at all ?
> > >
> > > For the internal image pad, I wouldn't think so. I think it should
> > > expose the full readable array. Or, reading what you wrote below, do you
> > > envision this including non-readable pixels too ?
> >
> > I don't -- they can't be accessed in any case. Timing information (such as
> > the latching points) should probably be conveyed using a different API if
> > they're needed.
> >
> 
> Ok, what I was suggesting was to explicitly say those sizes are not
> modifiable.

I'll add that for v2.

> 
> > >
> > > Mapping pixel array definitions from datasheets to formats and selection
> > > rectangles is hard, as datasheets, when they are available, are often
> > > confusing. Datasheets from different vendors, or even for different
> > > sensors of the same vendor, often have different models. I think we need
> > > to be extra precise here, and possibly work with sensor vendors. Using
> > > the CCS model as a base could be useful.
> >
> > I wouldn't necessarily try to convey what non-visible pixels might be there,
> > apart from in the documentation. These tend to be fairly vendor/sensor
> > specific after all.
> >
> 
> In the whole document, I would use one of "non-readable" and
> "non-visible".

I'll check that these are the terms used. We probably won't say much -- if
anything -- about non-readable pixels as there's (by definition) no way to
access them.

> 
> > >
> > > > > +        bus code of this format reflects the native pixel depth of the sensor.
> > > > > +    * - 1/0
> > > > > +      - V4L2_SEL_TGT_NATIVE_SIZE
> > > > > +      - X
> > > > > +      - The full size of the pixel array, including all pixels in the pixel
> > > > > +	array, even if they cannot be captured. This rectangle is relative to
> > > > > +	the format on the same (pad, stream).
> > >
> > > Mix of tabs and spaces.
> > >
> > > What do you mean by "relative to the format on the same (pad, stream)" ?
> > > Does it mean the rectangle is expressed relatively to a rectangle
> > > defined as (0,0),(fmt.width x fmt.height) ? That doesn't seem to match
> >
> > Yes.
> >
> > > the description of the format and V4L2_SEL_TGT_CROP_BOUNDS, as the
> > > format is defined as having the same width and height as
> > > V4L2_SEL_TGT_CROP_BOUNDS, and V4L2_SEL_TGT_CROP_BOUNDS is defined as
> > > being relative to V4L2_SEL_TGT_NATIVE_SIZE. I assume you mean something
> > > else, it should be clarified.
> >
> > As discussed, I'll drop the NATIVE_SIZE rectangle from the documentation as
> > redundant.
> >
> 
> Doesn't it help reporting the full pixel array size (readable and
> non-readable pixels)? We care about that information in libcamera,
> also to comply with Android's requirement to expose the full pixel
> array size as a property of the camera.
> 
> I would just drop the "relative" part. NATIVE is a rectangle with
> top-left corner in position (0,0).

It's the same information (apart from the mbus code) that can be obtained
from the format on the internal pad. I'd avoid adding redundant selection
targets.
diff mbox series

Patch

diff --git a/Documentation/userspace-api/media/drivers/camera-sensor.rst b/Documentation/userspace-api/media/drivers/camera-sensor.rst
index ad4049ff7eec..727cc12bc624 100644
--- a/Documentation/userspace-api/media/drivers/camera-sensor.rst
+++ b/Documentation/userspace-api/media/drivers/camera-sensor.rst
@@ -18,6 +18,9 @@  binning functionality. The sensor drivers belong to two distinct classes, freely
 configurable and register list based drivers, depending on how the driver
 configures this functionality.
 
+Also see
+:ref:`media_subdev_config_model_common_raw_sensor`.
+
 Freely configurable camera sensor drivers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -105,6 +108,8 @@  values programmed by the register sequences. The default values of these
 controls shall be 0 (disabled). Especially these controls shall not be inverted,
 independently of the sensor's mounting rotation.
 
+.. _media_using_camera_sensor_drivers_embedded_data:
+
 Embedded data
 -------------
 
diff --git a/Documentation/userspace-api/media/v4l/common-raw-sensor.dia b/Documentation/userspace-api/media/v4l/common-raw-sensor.dia
new file mode 100644
index 000000000000..aa927527eae3
--- /dev/null
+++ b/Documentation/userspace-api/media/v4l/common-raw-sensor.dia
@@ -0,0 +1,441 @@ 
+<?xml version="1.0" encoding="UTF-8"?>
+<dia:diagram xmlns:dia="http://www.lysator.liu.se/~alla/dia/">
+  <dia:diagramdata>
+    <dia:attribute name="background">
+      <dia:color val="#ffffffff"/>
+    </dia:attribute>
+    <dia:attribute name="pagebreak">
+      <dia:color val="#000099ff"/>
+    </dia:attribute>
+    <dia:attribute name="paper">
+      <dia:composite type="paper">
+        <dia:attribute name="name">
+          <dia:string>#A4#</dia:string>
+        </dia:attribute>
+        <dia:attribute name="tmargin">
+          <dia:real val="2.8222"/>
+        </dia:attribute>
+        <dia:attribute name="bmargin">
+          <dia:real val="2.8222"/>
+        </dia:attribute>
+        <dia:attribute name="lmargin">
+          <dia:real val="2.8222"/>
+        </dia:attribute>
+        <dia:attribute name="rmargin">
+          <dia:real val="2.8222"/>
+        </dia:attribute>
+        <dia:attribute name="is_portrait">
+          <dia:boolean val="true"/>
+        </dia:attribute>
+        <dia:attribute name="scaling">
+          <dia:real val="1"/>
+        </dia:attribute>
+        <dia:attribute name="fitto">
+          <dia:boolean val="false"/>
+        </dia:attribute>
+      </dia:composite>
+    </dia:attribute>
+    <dia:attribute name="grid">
+      <dia:composite type="grid">
+        <dia:attribute name="dynamic">
+          <dia:boolean val="true"/>
+        </dia:attribute>
+        <dia:attribute name="width_x">
+          <dia:real val="1"/>
+        </dia:attribute>
+        <dia:attribute name="width_y">
+          <dia:real val="1"/>
+        </dia:attribute>
+        <dia:attribute name="visible_x">
+          <dia:int val="1"/>
+        </dia:attribute>
+        <dia:attribute name="visible_y">
+          <dia:int val="1"/>
+        </dia:attribute>
+        <dia:composite type="color"/>
+      </dia:composite>
+    </dia:attribute>
+    <dia:attribute name="color">
+      <dia:color val="#d8e5e5ff"/>
+    </dia:attribute>
+    <dia:attribute name="guides"/>
+    <dia:attribute name="guide_color">
+      <dia:color val="#00ff00ff"/>
+    </dia:attribute>
+    <dia:attribute name="display">
+      <dia:composite type="display">
+        <dia:attribute name="antialiased">
+          <dia:boolean val="true"/>
+        </dia:attribute>
+        <dia:attribute name="snap-to-grid">
+          <dia:boolean val="false"/>
+        </dia:attribute>
+        <dia:attribute name="snap-to-guides">
+          <dia:boolean val="true"/>
+        </dia:attribute>
+        <dia:attribute name="snap-to-object">
+          <dia:boolean val="true"/>
+        </dia:attribute>
+        <dia:attribute name="show-grid">
+          <dia:boolean val="true"/>
+        </dia:attribute>
+        <dia:attribute name="show-guides">
+          <dia:boolean val="true"/>
+        </dia:attribute>
+        <dia:attribute name="show-connection-points">
+          <dia:boolean val="true"/>
+        </dia:attribute>
+      </dia:composite>
+    </dia:attribute>
+  </dia:diagramdata>
+  <dia:layer name="Background" visible="true" connectable="false"/>
+  <dia:layer name="Background" visible="true" connectable="false"/>
+  <dia:layer name="Background" visible="true" connectable="true" active="true">
+    <dia:object type="Standard - Box" version="0" id="O0">
+      <dia:attribute name="obj_pos">
+        <dia:point val="16.35,11.5"/>
+      </dia:attribute>
+      <dia:attribute name="obj_bb">
+        <dia:rectangle val="16.3,11.45;25.8,18.45"/>
+      </dia:attribute>
+      <dia:attribute name="elem_corner">
+        <dia:point val="16.35,11.5"/>
+      </dia:attribute>
+      <dia:attribute name="elem_width">
+        <dia:real val="9.4000000000000021"/>
+      </dia:attribute>
+      <dia:attribute name="elem_height">
+        <dia:real val="6.9000000000000021"/>
+      </dia:attribute>
+      <dia:attribute name="show_background">
+        <dia:boolean val="true"/>
+      </dia:attribute>
+    </dia:object>
+    <dia:object type="Geometric - Perfect Circle" version="1" id="O1">
+      <dia:attribute name="obj_pos">
+        <dia:point val="25.5627,14.578"/>
+      </dia:attribute>
+      <dia:attribute name="obj_bb">
+        <dia:rectangle val="25.4627,14.478;26.0191,15.0344"/>
+      </dia:attribute>
+      <dia:attribute name="meta">
+        <dia:composite type="dict"/>
+      </dia:attribute>
+      <dia:attribute name="elem_corner">
+        <dia:point val="25.5627,14.578"/>
+      </dia:attribute>
+      <dia:attribute name="elem_width">
+        <dia:real val="0.35638032780853468"/>
+      </dia:attribute>
+      <dia:attribute name="elem_height">
+        <dia:real val="0.35638032780853468"/>
+      </dia:attribute>
+      <dia:attribute name="line_width">
+        <dia:real val="0.10000000000000001"/>
+      </dia:attribute>
+      <dia:attribute name="line_colour">
+        <dia:color val="#000000ff"/>
+      </dia:attribute>
+      <dia:attribute name="fill_colour">
+        <dia:color val="#ffffffff"/>
+      </dia:attribute>
+      <dia:attribute name="show_background">
+        <dia:boolean val="true"/>
+      </dia:attribute>
+      <dia:attribute name="line_style">
+        <dia:enum val="0"/>
+        <dia:real val="1"/>
+      </dia:attribute>
+      <dia:attribute name="flip_horizontal">
+        <dia:boolean val="false"/>
+      </dia:attribute>
+      <dia:attribute name="flip_vertical">
+        <dia:boolean val="false"/>
+      </dia:attribute>
+      <dia:attribute name="subscale">
+        <dia:real val="1"/>
+      </dia:attribute>
+    </dia:object>
+    <dia:object type="Geometric - Perfect Circle" version="1" id="O2">
+      <dia:attribute name="obj_pos">
+        <dia:point val="17.7702,13.2656"/>
+      </dia:attribute>
+      <dia:attribute name="obj_bb">
+        <dia:rectangle val="17.6702,13.1656;18.24,13.7354"/>
+      </dia:attribute>
+      <dia:attribute name="meta">
+        <dia:composite type="dict"/>
+      </dia:attribute>
+      <dia:attribute name="elem_corner">
+        <dia:point val="17.7702,13.2656"/>
+      </dia:attribute>
+      <dia:attribute name="elem_width">
+        <dia:real val="0.36980024191863681"/>
+      </dia:attribute>
+      <dia:attribute name="elem_height">
+        <dia:real val="0.36980024191863681"/>
+      </dia:attribute>
+      <dia:attribute name="line_width">
+        <dia:real val="0.10000000000000001"/>
+      </dia:attribute>
+      <dia:attribute name="line_colour">
+        <dia:color val="#000000ff"/>
+      </dia:attribute>
+      <dia:attribute name="fill_colour">
+        <dia:color val="#ffffffff"/>
+      </dia:attribute>
+      <dia:attribute name="show_background">
+        <dia:boolean val="true"/>
+      </dia:attribute>
+      <dia:attribute name="line_style">
+        <dia:enum val="0"/>
+        <dia:real val="1"/>
+      </dia:attribute>
+      <dia:attribute name="flip_horizontal">
+        <dia:boolean val="false"/>
+      </dia:attribute>
+      <dia:attribute name="flip_vertical">
+        <dia:boolean val="false"/>
+      </dia:attribute>
+      <dia:attribute name="subscale">
+        <dia:real val="1"/>
+      </dia:attribute>
+    </dia:object>
+    <dia:object type="Geometric - Perfect Circle" version="1" id="O3">
+      <dia:attribute name="obj_pos">
+        <dia:point val="17.7464,16.2056"/>
+      </dia:attribute>
+      <dia:attribute name="obj_bb">
+        <dia:rectangle val="17.6464,16.1056;18.2162,16.6754"/>
+      </dia:attribute>
+      <dia:attribute name="meta">
+        <dia:composite type="dict"/>
+      </dia:attribute>
+      <dia:attribute name="elem_corner">
+        <dia:point val="17.7464,16.2056"/>
+      </dia:attribute>
+      <dia:attribute name="elem_width">
+        <dia:real val="0.36980024191863681"/>
+      </dia:attribute>
+      <dia:attribute name="elem_height">
+        <dia:real val="0.36980024191863681"/>
+      </dia:attribute>
+      <dia:attribute name="line_width">
+        <dia:real val="0.10000000000000001"/>
+      </dia:attribute>
+      <dia:attribute name="line_colour">
+        <dia:color val="#000000ff"/>
+      </dia:attribute>
+      <dia:attribute name="fill_colour">
+        <dia:color val="#ffffffff"/>
+      </dia:attribute>
+      <dia:attribute name="show_background">
+        <dia:boolean val="true"/>
+      </dia:attribute>
+      <dia:attribute name="line_style">
+        <dia:enum val="0"/>
+        <dia:real val="1"/>
+      </dia:attribute>
+      <dia:attribute name="flip_horizontal">
+        <dia:boolean val="false"/>
+      </dia:attribute>
+      <dia:attribute name="flip_vertical">
+        <dia:boolean val="false"/>
+      </dia:attribute>
+      <dia:attribute name="subscale">
+        <dia:real val="1"/>
+      </dia:attribute>
+    </dia:object>
+    <dia:object type="Standard - Line" version="0" id="O4">
+      <dia:attribute name="obj_pos">
+        <dia:point val="18.1609,16.3413"/>
+      </dia:attribute>
+      <dia:attribute name="obj_bb">
+        <dia:rectangle val="18.1016,14.5712;25.6221,16.4007"/>
+      </dia:attribute>
+      <dia:attribute name="conn_endpoints">
+        <dia:point val="18.1609,16.3413"/>
+        <dia:point val="25.5627,14.7562"/>
+      </dia:attribute>
+      <dia:attribute name="numcp">
+        <dia:int val="1"/>
+      </dia:attribute>
+      <dia:attribute name="line_style">
+        <dia:enum val="1"/>
+      </dia:attribute>
+      <dia:attribute name="end_arrow">
+        <dia:enum val="1"/>
+      </dia:attribute>
+      <dia:attribute name="end_arrow_length">
+        <dia:real val="0.5"/>
+      </dia:attribute>
+      <dia:attribute name="end_arrow_width">
+        <dia:real val="0.5"/>
+      </dia:attribute>
+      <dia:connections>
+        <dia:connection handle="0" to="O3" connection="8"/>
+        <dia:connection handle="1" to="O1" connection="2"/>
+      </dia:connections>
+    </dia:object>
+    <dia:object type="Standard - Line" version="0" id="O5">
+      <dia:attribute name="obj_pos">
+        <dia:point val="18.14,13.4505"/>
+      </dia:attribute>
+      <dia:attribute name="obj_bb">
+        <dia:rectangle val="18.0821,13.3926;25.6206,14.9674"/>
+      </dia:attribute>
+      <dia:attribute name="conn_endpoints">
+        <dia:point val="18.14,13.4505"/>
+        <dia:point val="25.5627,14.7562"/>
+      </dia:attribute>
+      <dia:attribute name="numcp">
+        <dia:int val="1"/>
+      </dia:attribute>
+      <dia:attribute name="end_arrow">
+        <dia:enum val="1"/>
+      </dia:attribute>
+      <dia:attribute name="end_arrow_length">
+        <dia:real val="0.5"/>
+      </dia:attribute>
+      <dia:attribute name="end_arrow_width">
+        <dia:real val="0.5"/>
+      </dia:attribute>
+      <dia:connections>
+        <dia:connection handle="0" to="O2" connection="3"/>
+        <dia:connection handle="1" to="O1" connection="2"/>
+      </dia:connections>
+    </dia:object>
+    <dia:object type="Standard - Line" version="0" id="O6">
+      <dia:attribute name="obj_pos">
+        <dia:point val="25.919,14.7562"/>
+      </dia:attribute>
+      <dia:attribute name="obj_bb">
+        <dia:rectangle val="25.8686,14.3879;31.0068,15.0497"/>
+      </dia:attribute>
+      <dia:attribute name="conn_endpoints">
+        <dia:point val="25.919,14.7562"/>
+        <dia:point val="30.9564,14.7131"/>
+      </dia:attribute>
+      <dia:attribute name="numcp">
+        <dia:int val="1"/>
+      </dia:attribute>
+      <dia:attribute name="end_arrow">
+        <dia:enum val="1"/>
+      </dia:attribute>
+      <dia:attribute name="end_arrow_length">
+        <dia:real val="0.5"/>
+      </dia:attribute>
+      <dia:attribute name="end_arrow_width">
+        <dia:real val="0.5"/>
+      </dia:attribute>
+      <dia:connections>
+        <dia:connection handle="0" to="O1" connection="3"/>
+      </dia:connections>
+    </dia:object>
+    <dia:object type="Standard - Text" version="1" id="O7">
+      <dia:attribute name="obj_pos">
+        <dia:point val="17.9551,13.2656"/>
+      </dia:attribute>
+      <dia:attribute name="obj_bb">
+        <dia:rectangle val="17.9551,12.5181;22.7051,13.2656"/>
+      </dia:attribute>
+      <dia:attribute name="text">
+        <dia:composite type="text">
+          <dia:attribute name="string">
+            <dia:string>#image data (1)#</dia:string>
+          </dia:attribute>
+          <dia:attribute name="font">
+            <dia:font family="sans" style="0" name="Helvetica"/>
+          </dia:attribute>
+          <dia:attribute name="height">
+            <dia:real val="0.80000000000000004"/>
+          </dia:attribute>
+          <dia:attribute name="pos">
+            <dia:point val="17.9551,13.1131"/>
+          </dia:attribute>
+          <dia:attribute name="color">
+            <dia:color val="#000000ff"/>
+          </dia:attribute>
+          <dia:attribute name="alignment">
+            <dia:enum val="0"/>
+          </dia:attribute>
+        </dia:composite>
+      </dia:attribute>
+      <dia:attribute name="valign">
+        <dia:enum val="1"/>
+      </dia:attribute>
+      <dia:connections>
+        <dia:connection handle="0" to="O2" connection="1"/>
+      </dia:connections>
+    </dia:object>
+    <dia:object type="Standard - Text" version="1" id="O8">
+      <dia:attribute name="obj_pos">
+        <dia:point val="17.9313,16.5754"/>
+      </dia:attribute>
+      <dia:attribute name="obj_bb">
+        <dia:rectangle val="17.9313,16.5754;24.1238,17.3229"/>
+      </dia:attribute>
+      <dia:attribute name="text">
+        <dia:composite type="text">
+          <dia:attribute name="string">
+            <dia:string>#embedded data (2)#</dia:string>
+          </dia:attribute>
+          <dia:attribute name="font">
+            <dia:font family="sans" style="0" name="Helvetica"/>
+          </dia:attribute>
+          <dia:attribute name="height">
+            <dia:real val="0.80000000000000004"/>
+          </dia:attribute>
+          <dia:attribute name="pos">
+            <dia:point val="17.9313,17.1704"/>
+          </dia:attribute>
+          <dia:attribute name="color">
+            <dia:color val="#000000ff"/>
+          </dia:attribute>
+          <dia:attribute name="alignment">
+            <dia:enum val="0"/>
+          </dia:attribute>
+        </dia:composite>
+      </dia:attribute>
+      <dia:attribute name="valign">
+        <dia:enum val="0"/>
+      </dia:attribute>
+      <dia:connections>
+        <dia:connection handle="0" to="O3" connection="0"/>
+      </dia:connections>
+    </dia:object>
+    <dia:object type="Standard - Text" version="1" id="O9">
+      <dia:attribute name="obj_pos">
+        <dia:point val="26.1,14.125"/>
+      </dia:attribute>
+      <dia:attribute name="obj_bb">
+        <dia:rectangle val="26.1,13.53;30.7475,14.2775"/>
+      </dia:attribute>
+      <dia:attribute name="text">
+        <dia:composite type="text">
+          <dia:attribute name="string">
+            <dia:string>#source pad (0)#</dia:string>
+          </dia:attribute>
+          <dia:attribute name="font">
+            <dia:font family="sans" style="0" name="Helvetica"/>
+          </dia:attribute>
+          <dia:attribute name="height">
+            <dia:real val="0.80000000000000004"/>
+          </dia:attribute>
+          <dia:attribute name="pos">
+            <dia:point val="26.1,14.125"/>
+          </dia:attribute>
+          <dia:attribute name="color">
+            <dia:color val="#000000ff"/>
+          </dia:attribute>
+          <dia:attribute name="alignment">
+            <dia:enum val="0"/>
+          </dia:attribute>
+        </dia:composite>
+      </dia:attribute>
+      <dia:attribute name="valign">
+        <dia:enum val="3"/>
+      </dia:attribute>
+    </dia:object>
+  </dia:layer>
+</dia:diagram>
diff --git a/Documentation/userspace-api/media/v4l/common-raw-sensor.svg b/Documentation/userspace-api/media/v4l/common-raw-sensor.svg
new file mode 100644
index 000000000000..1d6055da2519
--- /dev/null
+++ b/Documentation/userspace-api/media/v4l/common-raw-sensor.svg
@@ -0,0 +1,134 @@ 
+<?xml version="1.0" encoding="UTF-8"?>
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="296pt" height="142pt" viewBox="0 0 296 142" version="1.1">
+<defs>
+<g>
+<symbol overflow="visible" id="glyph0-0">
+<path style="stroke:none;" d="M 0.640625 2.265625 L 0.640625 -9.015625 L 7.03125 -9.015625 L 7.03125 2.265625 Z M 1.359375 1.546875 L 6.328125 1.546875 L 6.328125 -8.296875 L 1.359375 -8.296875 Z M 1.359375 1.546875 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-1">
+<path style="stroke:none;" d="M 1.203125 -7 L 2.359375 -7 L 2.359375 0 L 1.203125 0 Z M 1.203125 -9.71875 L 2.359375 -9.71875 L 2.359375 -8.265625 L 1.203125 -8.265625 Z M 1.203125 -9.71875 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-2">
+<path style="stroke:none;" d="M 6.65625 -5.65625 C 6.9375 -6.164062 7.273438 -6.546875 7.671875 -6.796875 C 8.078125 -7.046875 8.550781 -7.171875 9.09375 -7.171875 C 9.820312 -7.171875 10.382812 -6.914062 10.78125 -6.40625 C 11.175781 -5.894531 11.375 -5.164062 11.375 -4.21875 L 11.375 0 L 10.21875 0 L 10.21875 -4.1875 C 10.21875 -4.851562 10.097656 -5.347656 9.859375 -5.671875 C 9.628906 -6.003906 9.269531 -6.171875 8.78125 -6.171875 C 8.1875 -6.171875 7.710938 -5.972656 7.359375 -5.578125 C 7.015625 -5.179688 6.84375 -4.640625 6.84375 -3.953125 L 6.84375 0 L 5.6875 0 L 5.6875 -4.1875 C 5.6875 -4.863281 5.566406 -5.363281 5.328125 -5.6875 C 5.097656 -6.007812 4.734375 -6.171875 4.234375 -6.171875 C 3.648438 -6.171875 3.179688 -5.96875 2.828125 -5.5625 C 2.484375 -5.164062 2.3125 -4.628906 2.3125 -3.953125 L 2.3125 0 L 1.15625 0 L 1.15625 -7 L 2.3125 -7 L 2.3125 -5.90625 C 2.582031 -6.332031 2.898438 -6.648438 3.265625 -6.859375 C 3.628906 -7.066406 4.0625 -7.171875 4.5625 -7.171875 C 5.070312 -7.171875 5.503906 -7.039062 5.859375 -6.78125 C 6.222656 -6.519531 6.488281 -6.144531 6.65625 -5.65625 Z M 6.65625 -5.65625 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-3">
+<path style="stroke:none;" d="M 4.390625 -3.515625 C 3.460938 -3.515625 2.816406 -3.40625 2.453125 -3.1875 C 2.097656 -2.976562 1.921875 -2.617188 1.921875 -2.109375 C 1.921875 -1.703125 2.050781 -1.378906 2.3125 -1.140625 C 2.582031 -0.898438 2.953125 -0.78125 3.421875 -0.78125 C 4.054688 -0.78125 4.566406 -1.003906 4.953125 -1.453125 C 5.335938 -1.910156 5.53125 -2.515625 5.53125 -3.265625 L 5.53125 -3.515625 Z M 6.671875 -4 L 6.671875 0 L 5.53125 0 L 5.53125 -1.0625 C 5.269531 -0.632812 4.941406 -0.316406 4.546875 -0.109375 C 4.160156 0.0859375 3.679688 0.1875 3.109375 0.1875 C 2.390625 0.1875 1.816406 -0.015625 1.390625 -0.421875 C 0.972656 -0.828125 0.765625 -1.363281 0.765625 -2.03125 C 0.765625 -2.820312 1.023438 -3.414062 1.546875 -3.8125 C 2.078125 -4.21875 2.867188 -4.421875 3.921875 -4.421875 L 5.53125 -4.421875 L 5.53125 -4.53125 C 5.53125 -5.0625 5.351562 -5.46875 5 -5.75 C 4.65625 -6.039062 4.171875 -6.1875 3.546875 -6.1875 C 3.140625 -6.1875 2.75 -6.140625 2.375 -6.046875 C 2 -5.953125 1.632812 -5.8125 1.28125 -5.625 L 1.28125 -6.671875 C 1.695312 -6.835938 2.101562 -6.960938 2.5 -7.046875 C 2.894531 -7.128906 3.28125 -7.171875 3.65625 -7.171875 C 4.675781 -7.171875 5.429688 -6.90625 5.921875 -6.375 C 6.421875 -5.851562 6.671875 -5.0625 6.671875 -4 Z M 6.671875 -4 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-4">
+<path style="stroke:none;" d="M 5.8125 -3.578125 C 5.8125 -4.410156 5.640625 -5.054688 5.296875 -5.515625 C 4.953125 -5.972656 4.46875 -6.203125 3.84375 -6.203125 C 3.226562 -6.203125 2.75 -5.972656 2.40625 -5.515625 C 2.0625 -5.054688 1.890625 -4.410156 1.890625 -3.578125 C 1.890625 -2.753906 2.0625 -2.113281 2.40625 -1.65625 C 2.75 -1.195312 3.226562 -0.96875 3.84375 -0.96875 C 4.46875 -0.96875 4.953125 -1.195312 5.296875 -1.65625 C 5.640625 -2.113281 5.8125 -2.753906 5.8125 -3.578125 Z M 6.953125 -0.875 C 6.953125 0.320312 6.6875 1.207031 6.15625 1.78125 C 5.632812 2.363281 4.828125 2.65625 3.734375 2.65625 C 3.328125 2.65625 2.945312 2.625 2.59375 2.5625 C 2.238281 2.507812 1.890625 2.421875 1.546875 2.296875 L 1.546875 1.171875 C 1.890625 1.359375 2.222656 1.492188 2.546875 1.578125 C 2.878906 1.671875 3.21875 1.71875 3.5625 1.71875 C 4.3125 1.71875 4.875 1.519531 5.25 1.125 C 5.625 0.726562 5.8125 0.132812 5.8125 -0.65625 L 5.8125 -1.234375 C 5.570312 -0.816406 5.265625 -0.503906 4.890625 -0.296875 C 4.523438 -0.0976562 4.082031 0 3.5625 0 C 2.707031 0 2.015625 -0.328125 1.484375 -0.984375 C 0.960938 -1.640625 0.703125 -2.503906 0.703125 -3.578125 C 0.703125 -4.660156 0.960938 -5.53125 1.484375 -6.1875 C 2.015625 -6.84375 2.707031 -7.171875 3.5625 -7.171875 C 4.082031 -7.171875 4.523438 -7.066406 4.890625 -6.859375 C 5.265625 -6.648438 5.570312 -6.34375 5.8125 -5.9375 L 5.8125 -7 L 6.953125 -7 Z M 6.953125 -0.875 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-5">
+<path style="stroke:none;" d="M 7.1875 -3.78125 L 7.1875 -3.21875 L 1.90625 -3.21875 C 1.957031 -2.425781 2.195312 -1.820312 2.625 -1.40625 C 3.050781 -1 3.644531 -0.796875 4.40625 -0.796875 C 4.84375 -0.796875 5.269531 -0.847656 5.6875 -0.953125 C 6.101562 -1.066406 6.515625 -1.226562 6.921875 -1.4375 L 6.921875 -0.359375 C 6.515625 -0.179688 6.09375 -0.046875 5.65625 0.046875 C 5.21875 0.140625 4.78125 0.1875 4.34375 0.1875 C 3.21875 0.1875 2.328125 -0.132812 1.671875 -0.78125 C 1.023438 -1.4375 0.703125 -2.320312 0.703125 -3.4375 C 0.703125 -4.582031 1.007812 -5.488281 1.625 -6.15625 C 2.25 -6.832031 3.085938 -7.171875 4.140625 -7.171875 C 5.078125 -7.171875 5.816406 -6.863281 6.359375 -6.25 C 6.910156 -5.644531 7.1875 -4.820312 7.1875 -3.78125 Z M 6.046875 -4.125 C 6.035156 -4.75 5.859375 -5.25 5.515625 -5.625 C 5.171875 -6 4.71875 -6.1875 4.15625 -6.1875 C 3.507812 -6.1875 2.992188 -6.003906 2.609375 -5.640625 C 2.222656 -5.285156 2 -4.78125 1.9375 -4.125 Z M 6.046875 -4.125 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-6">
+<path style="stroke:none;" d=""/>
+</symbol>
+<symbol overflow="visible" id="glyph0-7">
+<path style="stroke:none;" d="M 5.8125 -5.9375 L 5.8125 -9.71875 L 6.953125 -9.71875 L 6.953125 0 L 5.8125 0 L 5.8125 -1.046875 C 5.570312 -0.628906 5.265625 -0.316406 4.890625 -0.109375 C 4.523438 0.0859375 4.082031 0.1875 3.5625 0.1875 C 2.71875 0.1875 2.03125 -0.148438 1.5 -0.828125 C 0.96875 -1.503906 0.703125 -2.394531 0.703125 -3.5 C 0.703125 -4.59375 0.96875 -5.476562 1.5 -6.15625 C 2.03125 -6.832031 2.71875 -7.171875 3.5625 -7.171875 C 4.082031 -7.171875 4.523438 -7.066406 4.890625 -6.859375 C 5.265625 -6.660156 5.570312 -6.351562 5.8125 -5.9375 Z M 1.890625 -3.5 C 1.890625 -2.644531 2.0625 -1.976562 2.40625 -1.5 C 2.757812 -1.019531 3.238281 -0.78125 3.84375 -0.78125 C 4.457031 -0.78125 4.9375 -1.019531 5.28125 -1.5 C 5.632812 -1.976562 5.8125 -2.644531 5.8125 -3.5 C 5.8125 -4.34375 5.632812 -5.003906 5.28125 -5.484375 C 4.9375 -5.960938 4.457031 -6.203125 3.84375 -6.203125 C 3.238281 -6.203125 2.757812 -5.960938 2.40625 -5.484375 C 2.0625 -5.003906 1.890625 -4.34375 1.890625 -3.5 Z M 1.890625 -3.5 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-8">
+<path style="stroke:none;" d="M 2.34375 -8.984375 L 2.34375 -7 L 4.71875 -7 L 4.71875 -6.109375 L 2.34375 -6.109375 L 2.34375 -2.3125 C 2.34375 -1.738281 2.421875 -1.367188 2.578125 -1.203125 C 2.734375 -1.046875 3.050781 -0.96875 3.53125 -0.96875 L 4.71875 -0.96875 L 4.71875 0 L 3.53125 0 C 2.644531 0 2.03125 -0.164062 1.6875 -0.5 C 1.351562 -0.832031 1.1875 -1.4375 1.1875 -2.3125 L 1.1875 -6.109375 L 0.34375 -6.109375 L 0.34375 -7 L 1.1875 -7 L 1.1875 -8.984375 Z M 2.34375 -8.984375 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-9">
+<path style="stroke:none;" d="M 3.96875 -9.703125 C 3.40625 -8.742188 2.988281 -7.796875 2.71875 -6.859375 C 2.445312 -5.929688 2.3125 -4.984375 2.3125 -4.015625 C 2.3125 -3.054688 2.445312 -2.101562 2.71875 -1.15625 C 3 -0.21875 3.414062 0.726562 3.96875 1.6875 L 2.96875 1.6875 C 2.34375 0.707031 1.875 -0.253906 1.5625 -1.203125 C 1.25 -2.148438 1.09375 -3.085938 1.09375 -4.015625 C 1.09375 -4.941406 1.25 -5.875 1.5625 -6.8125 C 1.875 -7.757812 2.34375 -8.722656 2.96875 -9.703125 Z M 3.96875 -9.703125 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-10">
+<path style="stroke:none;" d="M 1.59375 -1.0625 L 3.65625 -1.0625 L 3.65625 -8.171875 L 1.40625 -7.734375 L 1.40625 -8.875 L 3.640625 -9.328125 L 4.90625 -9.328125 L 4.90625 -1.0625 L 6.953125 -1.0625 L 6.953125 0 L 1.59375 0 Z M 1.59375 -1.0625 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-11">
+<path style="stroke:none;" d="M 1.03125 -9.703125 L 2.03125 -9.703125 C 2.65625 -8.722656 3.117188 -7.757812 3.421875 -6.8125 C 3.734375 -5.875 3.890625 -4.941406 3.890625 -4.015625 C 3.890625 -3.085938 3.734375 -2.148438 3.421875 -1.203125 C 3.117188 -0.253906 2.65625 0.707031 2.03125 1.6875 L 1.03125 1.6875 C 1.582031 0.726562 1.992188 -0.21875 2.265625 -1.15625 C 2.535156 -2.101562 2.671875 -3.054688 2.671875 -4.015625 C 2.671875 -4.984375 2.535156 -5.929688 2.265625 -6.859375 C 1.992188 -7.796875 1.582031 -8.742188 1.03125 -9.703125 Z M 1.03125 -9.703125 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-12">
+<path style="stroke:none;" d="M 6.234375 -3.5 C 6.234375 -4.34375 6.054688 -5.003906 5.703125 -5.484375 C 5.359375 -5.960938 4.882812 -6.203125 4.28125 -6.203125 C 3.664062 -6.203125 3.179688 -5.960938 2.828125 -5.484375 C 2.484375 -5.003906 2.3125 -4.34375 2.3125 -3.5 C 2.3125 -2.644531 2.484375 -1.976562 2.828125 -1.5 C 3.179688 -1.019531 3.664062 -0.78125 4.28125 -0.78125 C 4.882812 -0.78125 5.359375 -1.019531 5.703125 -1.5 C 6.054688 -1.976562 6.234375 -2.644531 6.234375 -3.5 Z M 2.3125 -5.9375 C 2.5625 -6.351562 2.867188 -6.660156 3.234375 -6.859375 C 3.597656 -7.066406 4.039062 -7.171875 4.5625 -7.171875 C 5.40625 -7.171875 6.09375 -6.832031 6.625 -6.15625 C 7.15625 -5.476562 7.421875 -4.59375 7.421875 -3.5 C 7.421875 -2.394531 7.15625 -1.503906 6.625 -0.828125 C 6.09375 -0.148438 5.40625 0.1875 4.5625 0.1875 C 4.039062 0.1875 3.597656 0.0859375 3.234375 -0.109375 C 2.867188 -0.316406 2.5625 -0.628906 2.3125 -1.046875 L 2.3125 0 L 1.15625 0 L 1.15625 -9.71875 L 2.3125 -9.71875 Z M 2.3125 -5.9375 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-13">
+<path style="stroke:none;" d="M 2.453125 -1.0625 L 6.859375 -1.0625 L 6.859375 0 L 0.9375 0 L 0.9375 -1.0625 C 1.414062 -1.5625 2.066406 -2.226562 2.890625 -3.0625 C 3.722656 -3.894531 4.242188 -4.429688 4.453125 -4.671875 C 4.859375 -5.128906 5.140625 -5.515625 5.296875 -5.828125 C 5.460938 -6.140625 5.546875 -6.445312 5.546875 -6.75 C 5.546875 -7.25 5.367188 -7.65625 5.015625 -7.96875 C 4.671875 -8.28125 4.21875 -8.4375 3.65625 -8.4375 C 3.257812 -8.4375 2.84375 -8.363281 2.40625 -8.21875 C 1.96875 -8.082031 1.5 -7.878906 1 -7.609375 L 1 -8.875 C 1.507812 -9.082031 1.984375 -9.238281 2.421875 -9.34375 C 2.867188 -9.445312 3.273438 -9.5 3.640625 -9.5 C 4.609375 -9.5 5.378906 -9.253906 5.953125 -8.765625 C 6.523438 -8.285156 6.8125 -7.640625 6.8125 -6.828125 C 6.8125 -6.453125 6.738281 -6.09375 6.59375 -5.75 C 6.445312 -5.40625 6.1875 -5 5.8125 -4.53125 C 5.707031 -4.40625 5.375 -4.054688 4.8125 -3.484375 C 4.257812 -2.910156 3.472656 -2.101562 2.453125 -1.0625 Z M 2.453125 -1.0625 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-14">
+<path style="stroke:none;" d="M 5.671875 -6.796875 L 5.671875 -5.703125 C 5.347656 -5.867188 5.007812 -5.992188 4.65625 -6.078125 C 4.300781 -6.160156 3.9375 -6.203125 3.5625 -6.203125 C 3 -6.203125 2.570312 -6.113281 2.28125 -5.9375 C 2 -5.769531 1.859375 -5.507812 1.859375 -5.15625 C 1.859375 -4.882812 1.957031 -4.671875 2.15625 -4.515625 C 2.363281 -4.367188 2.773438 -4.226562 3.390625 -4.09375 L 3.78125 -4 C 4.601562 -3.832031 5.1875 -3.585938 5.53125 -3.265625 C 5.875 -2.941406 6.046875 -2.5 6.046875 -1.9375 C 6.046875 -1.28125 5.785156 -0.757812 5.265625 -0.375 C 4.753906 0 4.050781 0.1875 3.15625 0.1875 C 2.78125 0.1875 2.390625 0.148438 1.984375 0.078125 C 1.578125 0.00390625 1.144531 -0.101562 0.6875 -0.25 L 0.6875 -1.4375 C 1.113281 -1.21875 1.53125 -1.050781 1.9375 -0.9375 C 2.351562 -0.832031 2.765625 -0.78125 3.171875 -0.78125 C 3.710938 -0.78125 4.128906 -0.875 4.421875 -1.0625 C 4.710938 -1.25 4.859375 -1.507812 4.859375 -1.84375 C 4.859375 -2.15625 4.753906 -2.394531 4.546875 -2.5625 C 4.335938 -2.726562 3.875 -2.890625 3.15625 -3.046875 L 2.765625 -3.140625 C 2.046875 -3.285156 1.53125 -3.515625 1.21875 -3.828125 C 0.90625 -4.140625 0.75 -4.566406 0.75 -5.109375 C 0.75 -5.765625 0.976562 -6.269531 1.4375 -6.625 C 1.90625 -6.988281 2.570312 -7.171875 3.4375 -7.171875 C 3.851562 -7.171875 4.25 -7.140625 4.625 -7.078125 C 5 -7.015625 5.347656 -6.921875 5.671875 -6.796875 Z M 5.671875 -6.796875 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-15">
+<path style="stroke:none;" d="M 3.921875 -6.1875 C 3.304688 -6.1875 2.816406 -5.945312 2.453125 -5.46875 C 2.097656 -4.988281 1.921875 -4.332031 1.921875 -3.5 C 1.921875 -2.65625 2.097656 -1.992188 2.453125 -1.515625 C 2.804688 -1.035156 3.296875 -0.796875 3.921875 -0.796875 C 4.535156 -0.796875 5.019531 -1.035156 5.375 -1.515625 C 5.726562 -2.003906 5.90625 -2.664062 5.90625 -3.5 C 5.90625 -4.320312 5.726562 -4.972656 5.375 -5.453125 C 5.019531 -5.941406 4.535156 -6.1875 3.921875 -6.1875 Z M 3.921875 -7.171875 C 4.921875 -7.171875 5.703125 -6.84375 6.265625 -6.1875 C 6.835938 -5.539062 7.125 -4.644531 7.125 -3.5 C 7.125 -2.351562 6.835938 -1.453125 6.265625 -0.796875 C 5.703125 -0.140625 4.921875 0.1875 3.921875 0.1875 C 2.910156 0.1875 2.117188 -0.140625 1.546875 -0.796875 C 0.984375 -1.453125 0.703125 -2.351562 0.703125 -3.5 C 0.703125 -4.644531 0.984375 -5.539062 1.546875 -6.1875 C 2.117188 -6.84375 2.910156 -7.171875 3.921875 -7.171875 Z M 3.921875 -7.171875 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-16">
+<path style="stroke:none;" d="M 1.09375 -2.765625 L 1.09375 -7 L 2.234375 -7 L 2.234375 -2.8125 C 2.234375 -2.144531 2.363281 -1.644531 2.625 -1.3125 C 2.882812 -0.976562 3.269531 -0.8125 3.78125 -0.8125 C 4.40625 -0.8125 4.894531 -1.007812 5.25 -1.40625 C 5.613281 -1.800781 5.796875 -2.34375 5.796875 -3.03125 L 5.796875 -7 L 6.953125 -7 L 6.953125 0 L 5.796875 0 L 5.796875 -1.078125 C 5.515625 -0.648438 5.191406 -0.332031 4.828125 -0.125 C 4.460938 0.0820312 4.035156 0.1875 3.546875 0.1875 C 2.742188 0.1875 2.132812 -0.0625 1.71875 -0.5625 C 1.300781 -1.0625 1.09375 -1.796875 1.09375 -2.765625 Z M 3.984375 -7.171875 Z M 3.984375 -7.171875 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-17">
+<path style="stroke:none;" d="M 5.265625 -5.921875 C 5.128906 -5.992188 4.984375 -6.046875 4.828125 -6.078125 C 4.679688 -6.117188 4.519531 -6.140625 4.34375 -6.140625 C 3.6875 -6.140625 3.179688 -5.925781 2.828125 -5.5 C 2.484375 -5.082031 2.3125 -4.476562 2.3125 -3.6875 L 2.3125 0 L 1.15625 0 L 1.15625 -7 L 2.3125 -7 L 2.3125 -5.90625 C 2.5625 -6.332031 2.878906 -6.648438 3.265625 -6.859375 C 3.648438 -7.066406 4.117188 -7.171875 4.671875 -7.171875 C 4.753906 -7.171875 4.84375 -7.164062 4.9375 -7.15625 C 5.03125 -7.144531 5.132812 -7.128906 5.25 -7.109375 Z M 5.265625 -5.921875 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-18">
+<path style="stroke:none;" d="M 6.25 -6.734375 L 6.25 -5.65625 C 5.914062 -5.832031 5.585938 -5.960938 5.265625 -6.046875 C 4.941406 -6.140625 4.613281 -6.1875 4.28125 -6.1875 C 3.53125 -6.1875 2.945312 -5.953125 2.53125 -5.484375 C 2.125 -5.015625 1.921875 -4.351562 1.921875 -3.5 C 1.921875 -2.644531 2.125 -1.976562 2.53125 -1.5 C 2.945312 -1.03125 3.53125 -0.796875 4.28125 -0.796875 C 4.613281 -0.796875 4.941406 -0.835938 5.265625 -0.921875 C 5.585938 -1.015625 5.914062 -1.148438 6.25 -1.328125 L 6.25 -0.265625 C 5.925781 -0.117188 5.59375 -0.0078125 5.25 0.0625 C 4.90625 0.144531 4.539062 0.1875 4.15625 0.1875 C 3.09375 0.1875 2.25 -0.144531 1.625 -0.8125 C 1.007812 -1.476562 0.703125 -2.375 0.703125 -3.5 C 0.703125 -4.632812 1.015625 -5.53125 1.640625 -6.1875 C 2.273438 -6.84375 3.132812 -7.171875 4.21875 -7.171875 C 4.570312 -7.171875 4.914062 -7.132812 5.25 -7.0625 C 5.59375 -6.988281 5.925781 -6.878906 6.25 -6.734375 Z M 6.25 -6.734375 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-19">
+<path style="stroke:none;" d="M 2.3125 -1.046875 L 2.3125 2.65625 L 1.15625 2.65625 L 1.15625 -7 L 2.3125 -7 L 2.3125 -5.9375 C 2.5625 -6.351562 2.867188 -6.660156 3.234375 -6.859375 C 3.597656 -7.066406 4.039062 -7.171875 4.5625 -7.171875 C 5.40625 -7.171875 6.09375 -6.832031 6.625 -6.15625 C 7.15625 -5.476562 7.421875 -4.59375 7.421875 -3.5 C 7.421875 -2.394531 7.15625 -1.503906 6.625 -0.828125 C 6.09375 -0.148438 5.40625 0.1875 4.5625 0.1875 C 4.039062 0.1875 3.597656 0.0859375 3.234375 -0.109375 C 2.867188 -0.316406 2.5625 -0.628906 2.3125 -1.046875 Z M 6.234375 -3.5 C 6.234375 -4.34375 6.054688 -5.003906 5.703125 -5.484375 C 5.359375 -5.960938 4.882812 -6.203125 4.28125 -6.203125 C 3.664062 -6.203125 3.179688 -5.960938 2.828125 -5.484375 C 2.484375 -5.003906 2.3125 -4.34375 2.3125 -3.5 C 2.3125 -2.644531 2.484375 -1.976562 2.828125 -1.5 C 3.179688 -1.019531 3.664062 -0.78125 4.28125 -0.78125 C 4.882812 -0.78125 5.359375 -1.019531 5.703125 -1.5 C 6.054688 -1.976562 6.234375 -2.644531 6.234375 -3.5 Z M 6.234375 -3.5 "/>
+</symbol>
+<symbol overflow="visible" id="glyph0-20">
+<path style="stroke:none;" d="M 4.0625 -8.5 C 3.414062 -8.5 2.925781 -8.175781 2.59375 -7.53125 C 2.269531 -6.894531 2.109375 -5.9375 2.109375 -4.65625 C 2.109375 -3.375 2.269531 -2.410156 2.59375 -1.765625 C 2.925781 -1.128906 3.414062 -0.8125 4.0625 -0.8125 C 4.71875 -0.8125 5.207031 -1.128906 5.53125 -1.765625 C 5.863281 -2.410156 6.03125 -3.375 6.03125 -4.65625 C 6.03125 -5.9375 5.863281 -6.894531 5.53125 -7.53125 C 5.207031 -8.175781 4.71875 -8.5 4.0625 -8.5 Z M 4.0625 -9.5 C 5.113281 -9.5 5.914062 -9.082031 6.46875 -8.25 C 7.019531 -7.425781 7.296875 -6.226562 7.296875 -4.65625 C 7.296875 -3.082031 7.019531 -1.878906 6.46875 -1.046875 C 5.914062 -0.222656 5.113281 0.1875 4.0625 0.1875 C 3.019531 0.1875 2.222656 -0.222656 1.671875 -1.046875 C 1.117188 -1.878906 0.84375 -3.082031 0.84375 -4.65625 C 0.84375 -6.226562 1.117188 -7.425781 1.671875 -8.25 C 2.222656 -9.082031 3.019531 -9.5 4.0625 -9.5 Z M 4.0625 -9.5 "/>
+</symbol>
+</g>
+</defs>
+<g id="surface11910">
+<rect x="0" y="0" width="296" height="142" style="fill:rgb(100%,100%,100%);fill-opacity:1;stroke:none;"/>
+<path style="fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;stroke-width:0.1;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 16.35 11.5 L 25.75 11.5 L 25.75 18.4 L 16.35 18.4 Z M 16.35 11.5 " transform="matrix(20,0,0,20,-325,-228)"/>
+<path style="fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;stroke-width:0.1;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 25.919141 14.75625 C 25.919141 14.99375 25.562695 14.99375 25.562695 14.75625 C 25.562695 14.518555 25.919141 14.518555 25.919141 14.75625 " transform="matrix(20,0,0,20,-325,-228)"/>
+<path style="fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;stroke-width:0.1;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 18.140039 13.450586 C 18.140039 13.69707 17.770117 13.69707 17.770117 13.450586 C 17.770117 13.203906 18.140039 13.203906 18.140039 13.450586 " transform="matrix(20,0,0,20,-325,-228)"/>
+<path style="fill-rule:evenodd;fill:rgb(100%,100%,100%);fill-opacity:1;stroke-width:0.1;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 18.116211 16.39043 C 18.116211 16.637109 17.746484 16.637109 17.746484 16.39043 C 17.746484 16.143945 18.116211 16.143945 18.116211 16.39043 " transform="matrix(20,0,0,20,-325,-228)"/>
+<path style="fill:none;stroke-width:0.1;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-dasharray:1,1;stroke-miterlimit:10;" d="M 18.160937 16.341406 L 25.344141 14.80293 " transform="matrix(20,0,0,20,-325,-228)"/>
+<path style="fill:none;stroke-width:0.1;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 25.016797 15.128711 L 25.45332 14.779687 L 24.912109 14.639844 " transform="matrix(20,0,0,20,-325,-228)"/>
+<path style="fill:none;stroke-width:0.1;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 18.140039 13.450586 L 25.342383 14.717383 " transform="matrix(20,0,0,20,-325,-228)"/>
+<path style="fill:none;stroke-width:0.1;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 24.916797 14.896484 L 25.452539 14.736914 L 25.003516 14.403906 " transform="matrix(20,0,0,20,-325,-228)"/>
+<path style="fill:none;stroke-width:0.1;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 25.918945 14.75625 L 30.732813 14.715039 " transform="matrix(20,0,0,20,-325,-228)"/>
+<path style="fill:none;stroke-width:0.1;stroke-linecap:butt;stroke-linejoin:miter;stroke:rgb(0%,0%,0%);stroke-opacity:1;stroke-miterlimit:10;" d="M 30.34668 14.968359 L 30.844531 14.714062 L 30.342383 14.468359 " transform="matrix(20,0,0,20,-325,-228)"/>
+<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
+  <use xlink:href="#glyph0-1" x="34.101562" y="34.267687"/>
+  <use xlink:href="#glyph0-2" x="37.712674" y="34.267687"/>
+  <use xlink:href="#glyph0-3" x="50.212674" y="34.267687"/>
+  <use xlink:href="#glyph0-4" x="57.990451" y="34.267687"/>
+  <use xlink:href="#glyph0-5" x="66.046007" y="34.267687"/>
+  <use xlink:href="#glyph0-6" x="73.823785" y="34.267687"/>
+  <use xlink:href="#glyph0-7" x="77.990451" y="34.267687"/>
+  <use xlink:href="#glyph0-3" x="86.046007" y="34.267687"/>
+  <use xlink:href="#glyph0-8" x="93.823785" y="34.267687"/>
+  <use xlink:href="#glyph0-3" x="98.823785" y="34.267687"/>
+  <use xlink:href="#glyph0-6" x="106.601562" y="34.267687"/>
+  <use xlink:href="#glyph0-9" x="110.768229" y="34.267687"/>
+  <use xlink:href="#glyph0-10" x="115.768229" y="34.267687"/>
+  <use xlink:href="#glyph0-11" x="123.823785" y="34.267687"/>
+</g>
+<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
+  <use xlink:href="#glyph0-5" x="33.625" y="115.412218"/>
+  <use xlink:href="#glyph0-2" x="41.402778" y="115.412218"/>
+  <use xlink:href="#glyph0-12" x="53.902778" y="115.412218"/>
+  <use xlink:href="#glyph0-5" x="61.958333" y="115.412218"/>
+  <use xlink:href="#glyph0-7" x="69.736111" y="115.412218"/>
+  <use xlink:href="#glyph0-7" x="77.791667" y="115.412218"/>
+  <use xlink:href="#glyph0-5" x="85.847222" y="115.412218"/>
+  <use xlink:href="#glyph0-7" x="93.625" y="115.412218"/>
+  <use xlink:href="#glyph0-6" x="101.680556" y="115.412218"/>
+  <use xlink:href="#glyph0-7" x="105.847222" y="115.412218"/>
+  <use xlink:href="#glyph0-3" x="113.902778" y="115.412218"/>
+  <use xlink:href="#glyph0-8" x="121.680556" y="115.412218"/>
+  <use xlink:href="#glyph0-3" x="126.680556" y="115.412218"/>
+  <use xlink:href="#glyph0-6" x="134.458333" y="115.412218"/>
+  <use xlink:href="#glyph0-9" x="138.625" y="115.412218"/>
+  <use xlink:href="#glyph0-13" x="143.625" y="115.412218"/>
+  <use xlink:href="#glyph0-11" x="151.680556" y="115.412218"/>
+</g>
+<g style="fill:rgb(0%,0%,0%);fill-opacity:1;">
+  <use xlink:href="#glyph0-14" x="197" y="54.505968"/>
+  <use xlink:href="#glyph0-15" x="203.666667" y="54.505968"/>
+  <use xlink:href="#glyph0-16" x="211.444444" y="54.505968"/>
+  <use xlink:href="#glyph0-17" x="219.5" y="54.505968"/>
+  <use xlink:href="#glyph0-18" x="224.5" y="54.505968"/>
+  <use xlink:href="#glyph0-5" x="231.444444" y="54.505968"/>
+  <use xlink:href="#glyph0-6" x="239.222222" y="54.505968"/>
+  <use xlink:href="#glyph0-19" x="243.388889" y="54.505968"/>
+  <use xlink:href="#glyph0-3" x="251.444444" y="54.505968"/>
+  <use xlink:href="#glyph0-7" x="259.222222" y="54.505968"/>
+  <use xlink:href="#glyph0-6" x="267.277778" y="54.505968"/>
+  <use xlink:href="#glyph0-9" x="271.444444" y="54.505968"/>
+  <use xlink:href="#glyph0-20" x="276.444444" y="54.505968"/>
+  <use xlink:href="#glyph0-11" x="284.5" y="54.505968"/>
+</g>
+</g>
+</svg>
diff --git a/Documentation/userspace-api/media/v4l/dev-subdev.rst b/Documentation/userspace-api/media/v4l/dev-subdev.rst
index dcfcbd52490d..4d145bd3bd09 100644
--- a/Documentation/userspace-api/media/v4l/dev-subdev.rst
+++ b/Documentation/userspace-api/media/v4l/dev-subdev.rst
@@ -838,3 +838,5 @@  stream while it may be possible to enable and disable the embedded data stream.
 
 The embedded data format does not need to be configured on the sensor's pads as
 the format is dictated by the pixel data format in this case.
+
+.. include:: subdev-config-model.rst
diff --git a/Documentation/userspace-api/media/v4l/subdev-config-model.rst b/Documentation/userspace-api/media/v4l/subdev-config-model.rst
new file mode 100644
index 000000000000..8ec801998f5f
--- /dev/null
+++ b/Documentation/userspace-api/media/v4l/subdev-config-model.rst
@@ -0,0 +1,180 @@ 
+.. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-no-invariants-or-later
+
+Sub-device configuration models
+===============================
+
+A sub-device configuration model specifies in detail what the user space can
+expect from a sub-device in terms of V4L2 sub-device interface support,
+including semantics specific to a given configuration model.
+
+A sub-device may implement more than one configuration model at the same
+time. The implemented configuration models can be obtained from the sub-device's
+``V4L2_CID_CONFIG_MODEL`` control.
+
+.. _media_subdev_config_model_common_raw_sensor:
+
+Common raw camera sensor model
+------------------------------
+
+The common raw camera sensor model defines a superset of configuration
+functionality that covers the vast majority of raw camera sensors. Not all
+drivers offer all of the configuration and enumeration interfaces.
+
+A sub-device complies with the common raw sensor model if the
+``V4L2_CONFIG_MODEL_COMMON_RAW`` bit is set in the ``V4L2_CID_CONFIG_MODEL``
+control of the sub-device.
+
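+A minimal sketch of how user space could check for the common raw sensor
+model, assuming the ``V4L2_CID_CONFIG_MODEL`` control proposed here is a
+bitmask readable with ``VIDIOC_G_EXT_CTRLS`` on the sub-device node:
+
+.. code-block:: c
+
+    #include <sys/ioctl.h>
+    #include <linux/videodev2.h>
+
+    /* Return non-zero if the sub-device advertises the common raw model. */
+    static int is_common_raw_sensor(int subdev_fd)
+    {
+            struct v4l2_ext_control ctrl = {
+                    .id = V4L2_CID_CONFIG_MODEL,
+            };
+            struct v4l2_ext_controls ctrls = {
+                    .which = V4L2_CTRL_WHICH_CUR_VAL,
+                    .count = 1,
+                    .controls = &ctrl,
+            };
+
+            if (ioctl(subdev_fd, VIDIOC_G_EXT_CTRLS, &ctrls) < 0)
+                    return 0;
+
+            return !!(ctrl.value & V4L2_CONFIG_MODEL_COMMON_RAW);
+    }
+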
+The common raw camera sensor model is aligned with
+:ref:`media_using_camera_sensor_drivers`. Please refer to that documentation
+for aspects not specified here.
+
+Each camera sensor implementing the common raw sensor model exposes a single
+V4L2 sub-device. The sub-device contains a single source pad (0), an image data
+internal pad (1) and optionally an embedded data internal pad (2). Additionally,
+further internal pads may be supported for other features, in which case they
+are documented separately for the given device.
+
+This is shown in :ref:`media_subdev_config_model_common_raw_sensor_subdev`.
+
+.. _media_subdev_config_model_common_raw_sensor_subdev:
+
+.. kernel-figure:: common-raw-sensor.svg
+    :alt:    common-raw-sensor.svg
+    :align:  center
+
+    **Common raw sensor sub-device**
+
+Routes
+^^^^^^
+
+A sub-device conforming to the common raw camera sensor model implements the
+following routes.
+
+.. flat-table:: Routes
+    :header-rows: 1
+
+    * - Sink pad/stream
+      - Source pad/stream
+      - Static (X/M(aybe)/-)
+      - Mandatory (X/-)
+      - Synopsis
+    * - 1/0
+      - 0/0
+      - X
+      - X
+      - Image data
+    * - 2/0
+      - 0/1
+      - M
+      - -
+      - Embedded data
+
+Some devices may support enabling and disabling the embedded data stream. Others
+may not support it at all, in which case the embedded data route does not exist.
+
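+A sketch of enabling both routes with ``VIDIOC_SUBDEV_S_ROUTING``, assuming the
+current routing UAPI (including the ``len_routes`` field). The image data route
+is static, so listing it as active simply keeps it enabled:
+
+.. code-block:: c
+
+    #include <stdint.h>
+    #include <sys/ioctl.h>
+    #include <linux/v4l2-subdev.h>
+
+    /* Activate the image data and embedded data routes. */
+    static int enable_routes(int subdev_fd)
+    {
+            struct v4l2_subdev_route routes[] = {
+                    {
+                            .sink_pad = 1, .sink_stream = 0,
+                            .source_pad = 0, .source_stream = 0,
+                            .flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE,
+                    },
+                    {
+                            .sink_pad = 2, .sink_stream = 0,
+                            .source_pad = 0, .source_stream = 1,
+                            .flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE,
+                    },
+            };
+            struct v4l2_subdev_routing routing = {
+                    .which = V4L2_SUBDEV_FORMAT_ACTIVE,
+                    .len_routes = 2,
+                    .num_routes = 2,
+                    .routes = (uintptr_t)routes,
+            };
+
+            return ioctl(subdev_fd, VIDIOC_SUBDEV_S_ROUTING, &routing);
+    }
+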
+Sensor pixel array size, cropping and binning
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The sensor's pixel array is divided into one or more areas. The areas around
+the edge of the pixel array, usually on one or more sides, may contain optical
+black pixels, dummy pixels and other non-image pixels.
+
+A rectangle within the pixel array contains the visible pixels. Capturing the
+non-visible pixels may be supported by the sensor.
+
+The sensor can perform three operations that affect the output image size. The
+first is analogue crop, which limits the part of the pixel array the sensor
+reads out and thus also affects the sensor timing. The granularity of the
+analogue crop configuration varies greatly across sensors: some sensors support
+only a few different analogue crop configurations whereas others may support
+any crop rectangle whose size is divisible by a given number of pixels.
+
+The default analogue crop rectangle corresponds to the visible pixel area if
+supported by the hardware.
+
+In the next step, binning is performed on the image data read from the camera
+sensor's pixel array. This effectively results in an image smaller than the
+original by a given proportion horizontally and vertically. Typical factors are
+1/2 and 1/3, but the hardware may support others as well.
+
+The combination of the analogue crop and binning operations may still result in
+an image larger than desired. For this purpose, a digital crop operation may be
+performed on the binned image. The resulting image is then output by the
+sensor.
+
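+The sketch below, purely illustrative, applies the three operations with
+``VIDIOC_SUBDEV_S_SELECTION``, using the pad and selection target mapping
+listed in the table that follows and assuming the streams-aware sub-device
+UAPI. The rectangle sizes are hypothetical; real devices constrain them as
+described above.
+
+.. code-block:: c
+
+    #include <sys/ioctl.h>
+    #include <linux/v4l2-subdev.h>
+
+    static int configure_scaling(int subdev_fd)
+    {
+            struct v4l2_subdev_selection sel = {
+                    .which = V4L2_SUBDEV_FORMAT_ACTIVE,
+                    .pad = 1,
+                    .stream = 0,
+                    /* Analogue crop, relative to V4L2_SEL_TGT_NATIVE_SIZE. */
+                    .target = V4L2_SEL_TGT_CROP,
+                    .r = {
+                            .width = 3840,
+                            .height = 2160,
+                    },
+            };
+
+            if (ioctl(subdev_fd, VIDIOC_SUBDEV_S_SELECTION, &sel) < 0)
+                    return -1;
+
+            /* Binning by 1/2 in both directions. */
+            sel.target = V4L2_SEL_TGT_COMPOSE;
+            sel.r.width = 1920;
+            sel.r.height = 1080;
+            if (ioctl(subdev_fd, VIDIOC_SUBDEV_S_SELECTION, &sel) < 0)
+                    return -1;
+
+            /* Digital crop on the source pad. */
+            sel.pad = 0;
+            sel.target = V4L2_SEL_TGT_CROP;
+            sel.r.width = 1920;
+            sel.r.height = 1072;
+            return ioctl(subdev_fd, VIDIOC_SUBDEV_S_SELECTION, &sel);
+    }
+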
+.. flat-table:: Selection targets on pads
+    :header-rows: 1
+
+    * - Pad/Stream
+      - Selection target/format
+      - Mandatory (X/-)
+      - Synopsis
+    * - 1/0
+      - Format
+      - X
+      - Image data format. The width and height fields of this format are the
+        same as those of the V4L2_SEL_TGT_CROP_BOUNDS rectangle. The media bus
+        code of this format reflects the native pixel depth of the sensor.
+    * - 1/0
+      - V4L2_SEL_TGT_NATIVE_SIZE
+      - X
+      - The full size of the pixel array, including all pixels in the pixel
+        array, even if they cannot be captured. This rectangle is relative to
+        the format on the same (pad, stream).
+    * - 1/0
+      - V4L2_SEL_TGT_CROP_BOUNDS
+      - X
+      - The crop rectangle bounds. No pixels outside this area can be
+        captured. This rectangle is relative to the V4L2_SEL_TGT_NATIVE_SIZE
+        rectangle on the same (pad, stream).
+    * - 1/0
+      - V4L2_SEL_TGT_CROP_DEFAULT
+      - X
+      - The visible pixel area. This rectangle is relative to the
+        V4L2_SEL_TGT_NATIVE_SIZE rectangle on the same (pad, stream).
+    * - 1/0
+      - V4L2_SEL_TGT_CROP
+      - \-
+      - Analogue crop. Analogue crop typically has a coarse granularity. This
+        rectangle is relative to the V4L2_SEL_TGT_NATIVE_SIZE rectangle on the
+        same (pad, stream).
+    * - 1/0
+      - V4L2_SEL_TGT_COMPOSE
+      - \-
+      - Binning. This rectangle is relative to the V4L2_SEL_TGT_CROP
+        rectangle on the same (pad, stream).
+    * - 2/0
+      - Format
+      - X
+      - Embedded data format.
+    * - 0/0
+      - V4L2_SEL_TGT_CROP
+      - \-
+      - Digital crop. This rectangle is relative to the V4L2_SEL_TGT_COMPOSE
+        rectangle on (pad, stream) pair 1/0.
+    * - 0/0
+      - Format
+      - X
+      - Image data source format. The width and height fields of the format
+        are the same as those of the V4L2_SEL_TGT_CROP rectangle on (pad,
+        stream) pair 0/0, whereas the media bus code reflects the pixel data
+        output of the sensor.
+    * - 0/1
+      - Format
+      - X
+      - Embedded data source format.
+
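+As a further sketch, the visible pixel area and the resulting image data
+source format could be queried as follows:
+
+.. code-block:: c
+
+    #include <sys/ioctl.h>
+    #include <linux/v4l2-subdev.h>
+
+    /* Read the visible pixel area and the image data source format. */
+    static int query_sensor(int subdev_fd)
+    {
+            struct v4l2_subdev_selection sel = {
+                    .which = V4L2_SUBDEV_FORMAT_ACTIVE,
+                    .pad = 1,
+                    .stream = 0,
+                    .target = V4L2_SEL_TGT_CROP_DEFAULT,
+            };
+            struct v4l2_subdev_format fmt = {
+                    .which = V4L2_SUBDEV_FORMAT_ACTIVE,
+                    .pad = 0,
+                    .stream = 0,
+            };
+
+            if (ioctl(subdev_fd, VIDIOC_SUBDEV_G_SELECTION, &sel) < 0)
+                    return -1;
+
+            /* On success sel.r holds the visible area, fmt.format the output. */
+            return ioctl(subdev_fd, VIDIOC_SUBDEV_G_FMT, &fmt);
+    }
+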
+Embedded data
+^^^^^^^^^^^^^
+
+The embedded data stream is produced by the sensor when the corresponding route
+is enabled. The embedded data route may also be immutable, or it may not exist
+at all if the sensor (or the driver) does not support embedded data.
+
+Generally the width of the sensor's embedded data is determined by the width of
+the image data, whereas the number of embedded data lines is constant. The user
+space may obtain the size of the embedded data once the image data size on the
+source pad has been configured.
+
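+A short sketch of obtaining the embedded data size, read here from the source
+pad, stream 1, once the image data has been configured:
+
+.. code-block:: c
+
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <linux/v4l2-subdev.h>
+
+    /* Read the embedded data format produced on source pad 0, stream 1. */
+    static int get_embedded_data_format(int subdev_fd,
+                                        struct v4l2_subdev_format *fmt)
+    {
+            memset(fmt, 0, sizeof(*fmt));
+            fmt->which = V4L2_SUBDEV_FORMAT_ACTIVE;
+            fmt->pad = 0;
+            fmt->stream = 1;
+
+            /* fmt->format.width and fmt->format.height give the size. */
+            return ioctl(subdev_fd, VIDIOC_SUBDEV_G_FMT, fmt);
+    }
+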
+Also see :ref:`media_using_camera_sensor_drivers_embedded_data`.
+