Message ID | 1560242912-17138-1-git-send-email-hugues.fruchet@st.com |
---|---|
Series | DCMI bridge support |
Hi Hugues,

On Tue, Jun 11, 2019 at 10:48:29AM +0200, Hugues Fruchet wrote:
> This patch series allows connecting a non-parallel camera sensor to
> DCMI thanks to a bridge connected in between, such as the STMIPID02 [1].
>
> Media controller support is introduced first, then support of
> several sub-devices within the pipeline with dynamic linking
> between them.
> In order to keep backward compatibility with applications
> relying on the V4L2 interface only, the format set on the video node
> is propagated to all sub-devices connected to the camera interface.
>
> [1] https://www.spinics.net/lists/devicetree/msg278002.html

General notes on the set, not related to any single patch:

- Where's the sub-device representing the bridge itself?

- As the driver becomes MC-centric, crop configuration takes place
  through the V4L2 sub-device interface, not through the video device
  node.

- Same goes for accessing sensor configuration: it does not take place
  through the video node but through the sub-device nodes.
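To illustrate the crop point: with an MC-centric driver, userspace would
set the crop rectangle directly on the sensor's sub-device node instead
of on /dev/videoX. A minimal sketch, assuming the sensor is exposed as
/dev/v4l-subdev0 and crops on pad 0 (both are assumptions, for
illustration only):

	#include <fcntl.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/v4l2-subdev.h>

	int set_sensor_crop(void)
	{
		struct v4l2_subdev_selection sel;
		int fd, ret;

		fd = open("/dev/v4l-subdev0", O_RDWR);	/* assumed sensor node */
		if (fd < 0)
			return -1;

		memset(&sel, 0, sizeof(sel));
		sel.which = V4L2_SUBDEV_FORMAT_ACTIVE;
		sel.pad = 0;				/* assumed pad */
		sel.target = V4L2_SEL_TGT_CROP;
		sel.r.left = 0;
		sel.r.top = 0;
		sel.r.width = 1280;
		sel.r.height = 720;

		/* Apply the crop rectangle on the sub-device pad. */
		ret = ioctl(fd, VIDIOC_SUBDEV_S_SELECTION, &sel);
		close(fd);
		return ret;
	}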
Hi Sakari,

> - Where's the sub-device representing the bridge itself?

This is pointed to by [1]: drivers/media/i2c/st-mipid02.c

> - As the driver becomes MC-centric, crop configuration takes place
>   through the V4L2 sub-device interface, not through the video device
>   node.
> - Same goes for accessing sensor configuration: it does not take place
>   through the video node but through the sub-device nodes.

Our objective is to be able to support either a simple parallel sensor
or a CSI-2 sensor connected through a bridge without any changes on the
userspace side, because no additional processing or conversion is
involved, only deserialisation.
With the proposed set of patches we succeeded in doing so: the same
non-regression test campaign passes with the OV5640 parallel sensor
(STM32MP1 evaluation board) and with the OV5640 CSI-2 sensor (Avenger96
board with D3 mezzanine board).

We don't want the driver to be MC-centric; media controller support was
required only to get access to the set of functions needed to link and
walk through sub-devices: media_create_pad_link(),
media_entity_remote_pad(), etc.

We made a try with the v1 version of this patchset, delegating
sub-device handling to userspace by using media-controller, but this
requires configuring the pipeline first, for each single change of
resolution and format, before making any capture using v4l2-ctl or
GStreamer, which is quite heavy in practice.
Benjamin made another try using the new libcamera codebase, but even for
a basic capture use case the negotiation code is quite tricky in order
to match the right sub-device bus format to the required V4L2 format.
Moreover, it was not clear how to call the libcamera library prior to
any v4l2-ctl or GStreamer calls.

Adding 100 lines of code into DCMI to properly configure resolutions and
formats fixes the point and allows us to keep backward compatibility as
per our objective, so it seems far more reasonable to us to do so, even
if DCMI then controls more than the sub-device it is connected to.
Moreover, we found similar code in other video interface drivers, such
as qcom/camss/camss.c and xilinx/xilinx-dma.c, controlling the whole
pipeline, so it seems quite natural to us to go this way.

To summarize: if we cannot do the negotiation within the kernel,
delegating it to userspace implies far more complexity and breaks
compatibility with existing applications without adding any new
functionality.

Having all that in mind, what should be reconsidered in your opinion,
Sakari? Do you have some alternatives?

Best regards,
Hugues.

On 6/20/19 6:17 PM, Sakari Ailus wrote:
> Hi Hugues,
>
> On Tue, Jun 11, 2019 at 10:48:29AM +0200, Hugues Fruchet wrote:
>> This patch series allows connecting a non-parallel camera sensor to
>> DCMI thanks to a bridge connected in between, such as the STMIPID02 [1].
>>
>> Media controller support is introduced first, then support of
>> several sub-devices within the pipeline with dynamic linking
>> between them.
>> In order to keep backward compatibility with applications
>> relying on the V4L2 interface only, the format set on the video node
>> is propagated to all sub-devices connected to the camera interface.
>>
>> [1] https://www.spinics.net/lists/devicetree/msg278002.html
>
> General notes on the set, not related to any single patch:
>
> - Where's the sub-device representing the bridge itself?
>
> - As the driver becomes MC-centric, crop configuration takes place
>   through the V4L2 sub-device interface, not through the video device
>   node.
>
> - Same goes for accessing sensor configuration: it does not take place
>   through the video node but through the sub-device nodes.
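The format propagation Hugues describes (the format set on the video
node being pushed to every sub-device in the pipeline) can be sketched
roughly as below. This is a minimal sketch in the spirit of the series,
not the actual DCMI patch: names are simplified, a linear pipeline is
assumed where each entity's sink is pad 0, and the caller is assumed to
have set fmt->which to V4L2_SUBDEV_FORMAT_ACTIVE.

	#include <media/media-entity.h>
	#include <media/v4l2-subdev.h>

	static int pipeline_set_fmt(struct media_entity *entity,
				    struct v4l2_subdev_format *fmt)
	{
		struct media_pad *pad;
		struct v4l2_subdev *sd;
		int ret;

		while (1) {
			/* Find the remote source pad feeding our sink pad. */
			pad = &entity->pads[0];
			if (!(pad->flags & MEDIA_PAD_FL_SINK))
				break;

			pad = media_entity_remote_pad(pad);
			if (!pad || !is_media_entity_v4l2_subdev(pad->entity))
				break;

			/* Apply the same format on the upstream sub-device. */
			entity = pad->entity;
			sd = media_entity_to_v4l2_subdev(entity);
			fmt->pad = pad->index;
			ret = v4l2_subdev_call(sd, pad, set_fmt, NULL, fmt);
			if (ret < 0)
				return ret;
		}

		return 0;
	}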
Hi Hugues,

On Mon, Jun 24, 2019 at 10:10:05AM +0000, Hugues FRUCHET wrote:
> Hi Sakari,
>
> > - Where's the sub-device representing the bridge itself?
>
> This is pointed to by [1]: drivers/media/i2c/st-mipid02.c
>
> > - As the driver becomes MC-centric, crop configuration takes place
> >   through the V4L2 sub-device interface, not through the video
> >   device node.
> > - Same goes for accessing sensor configuration: it does not take
> >   place through the video node but through the sub-device nodes.
>
> Our objective is to be able to support either a simple parallel sensor
> or a CSI-2 sensor connected through a bridge without any changes on
> the userspace side, because no additional processing or conversion is
> involved, only deserialisation.
> With the proposed set of patches we succeeded in doing so: the same
> non-regression test campaign passes with the OV5640 parallel sensor
> (STM32MP1 evaluation board) and with the OV5640 CSI-2 sensor
> (Avenger96 board with D3 mezzanine board).
>
> We don't want the driver to be MC-centric; media controller support
> was required only to get access to the set of functions needed to link
> and walk through sub-devices: media_create_pad_link(),
> media_entity_remote_pad(), etc.
>
> We made a try with the v1 version of this patchset, delegating
> sub-device handling to userspace by using media-controller, but this
> requires configuring the pipeline first, for each single change of
> resolution and format, before making any capture using v4l2-ctl or
> GStreamer, which is quite heavy in practice.
> Benjamin made another try using the new libcamera codebase, but even
> for a basic capture use case the negotiation code is quite tricky in
> order to match the right sub-device bus format to the required V4L2
> format.

Why would it be trickier in userspace than in the kernel? The V4L2
subdev operations are more or less exposed verbatim through the subdev
userspace API.

> Moreover, it was not clear how to call the libcamera library prior to
> any v4l2-ctl or GStreamer calls.

libcamera isn't meant to be called before v4l2-ctl or GStreamer.
Applications are supposed to be based directly on libcamera, or, for
existing userspace APIs such as V4L2 or GStreamer, compatibility layers
are supposed to be developed. For V4L2 it will take the form of an
LD_PRELOAD-able .so that will intercept the V4L2 API calls, making most
V4L2 applications work with libcamera unmodified (I said most, as 100%
compatibility will likely not be achievable). For GStreamer it will take
the form of a GStreamer libcamera element that will replace the V4L2
source element.

> Adding 100 lines of code into DCMI to properly configure resolutions
> and formats fixes the point and allows us to keep backward
> compatibility as per our objective, so it seems far more reasonable to
> us to do so, even if DCMI then controls more than the sub-device it is
> connected to.
> Moreover, we found similar code in other video interface drivers, such
> as qcom/camss/camss.c and xilinx/xilinx-dma.c, controlling the whole
> pipeline, so it seems quite natural to us to go this way.

I can't comment on the qcom-camss driver as I'm not aware of its
internals, but where have you found such code in the Xilinx V4L2
drivers?

> To summarize: if we cannot do the negotiation within the kernel,
> delegating it to userspace implies far more complexity and breaks
> compatibility with existing applications without adding any new
> functionality.
>
> Having all that in mind, what should be reconsidered in your opinion,
> Sakari? Do you have some alternatives?

First of all, let's note that your patch series performs two related but
still independent changes: it enables MC support, *and* enables the V4L2
subdev userspace API. The former is clearly needed and will allow you to
use the MC API internally in the kernel, simplifying pipeline traversal.
The latter moves the pipeline configuration responsibility to userspace.

You could in theory move to the MC API inside the kernel, without
enabling support for the V4L2 subdev userspace API. Configuring the
pipeline and propagating the formats would then be the responsibility of
the kernel driver. However, this will limit your driver to the
following:

- Fully linear pipelines only (single sensor)
- No support for controls implemented by multiple entities in the
  pipeline (for instance controls that would exist in both the sensor
  and the bridge, such as gains)
- No proper support for scaling configuration if multiple components in
  the pipeline can scale

Are you willing to set those limitations in stone and give up on
supporting those features?

> On 6/20/19 6:17 PM, Sakari Ailus wrote:
> > ...
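The two independent changes Laurent distinguishes roughly map to two
distinct registration calls in the kernel. A minimal sketch, with error
handling omitted and the dcmi field names assumed from the driver
context quoted later in the thread:

	/* 1. MC support: exposes the media graph on /dev/mediaX. */
	ret = media_device_register(&dcmi->mdev);

	/*
	 * 2. V4L2 subdev userspace API: creates /dev/v4l-subdevX nodes
	 * for the sub-devices that set V4L2_SUBDEV_FL_HAS_DEVNODE.
	 */
	ret = v4l2_device_register_subdev_nodes(&dcmi->v4l2_dev);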
Hi Laurent,

Thanks for reviewing,

On 6/26/19 7:25 PM, Laurent Pinchart wrote:
> On Mon, Jun 24, 2019 at 10:10:05AM +0000, Hugues FRUCHET wrote:
>> Moreover, we found similar code in other video interface drivers,
>> such as qcom/camss/camss.c and xilinx/xilinx-dma.c, controlling the
>> whole pipeline, so it seems quite natural to us to go this way.
>
> I can't comment on the qcom-camss driver as I'm not aware of its
> internals, but where have you found such code in the Xilinx V4L2
> drivers?

For example, in xilinx/xilinx-dma.c, stream on/off is propagated to all
sub-devices within the pipeline:

/* Walk the entities chain starting at the pipeline output video node */
static int xvip_pipeline_start_stop(struct xvip_pipeline *pipe, bool start)

Same for qcom/camss/camss-video.c:

static int video_start_streaming(struct vb2_queue *q, unsigned int count)

For resolution/format, in exynos4-is/fimc-capture.c:

static int fimc_pipeline_try_format(struct fimc_ctx *ctx,
...
	while (1) {
		...
		/* set format on all pipeline subdevs */
		while (me != &fimc->vid_cap.subdev.entity) {
			...
			ret = v4l2_subdev_call(sd, pad, set_fmt, NULL, &sfmt);

> You could in theory move to the MC API inside the kernel, without
> enabling support for the V4L2 subdev userspace API. Configuring the
> pipeline and propagating the formats would then be the responsibility
> of the kernel driver.

Yes, this is exactly what we want to do.
If I understand well, to disable the V4L2 subdev userspace API, I just
have to remove the media device registration:

-	/* Register the media device */
-	ret = media_device_register(&dcmi->mdev);
-	if (ret) {
-		dev_err(dcmi->dev, "Failed to register media device (%d)\n",
-			ret);
-		goto err_media_device_cleanup;
-	}

Do you see any additional things to do?

> However, this will limit your driver to the following:
>
> - Fully linear pipelines only (single sensor)
> - No support for controls implemented by multiple entities in the
>   pipeline (for instance controls that would exist in both the sensor
>   and the bridge, such as gains)
> - No proper support for scaling configuration if multiple components
>   in the pipeline can scale
>
> Are you willing to set those limitations in stone and give up on
> supporting those features?

The involved hardware does not have those features and no extra
functionality needs to be exposed to userspace, so this is fine.

I'll push a v3 with this change and the other fixes related to Sakari's
and Hans' comments.

Please Sakari & Hans, also comment on that change so that we can
converge on v3.

Best regards,
Hugues.
Hi Hugues,

On Thu, Jun 27, 2019 at 12:38:40PM +0000, Hugues FRUCHET wrote:
> On 6/26/19 7:25 PM, Laurent Pinchart wrote:
> > I can't comment on the qcom-camss driver as I'm not aware of its
> > internals, but where have you found such code in the Xilinx V4L2
> > drivers?
>
> For example, in xilinx/xilinx-dma.c, stream on/off is propagated to
> all sub-devices within the pipeline:
>
> /* Walk the entities chain starting at the pipeline output video node */
> static int xvip_pipeline_start_stop(struct xvip_pipeline *pipe, bool start)
>
> Same for qcom/camss/camss-video.c:
>
> static int video_start_streaming(struct vb2_queue *q, unsigned int count)

For stream start/stop, that's expected. Userspace only controls the
stream start/stop on the video node, and the kernel propagates that
along the pipeline. There is no VIDIOC_STREAMON or VIDIOC_STREAMOFF
ioctl exposed to userspace for V4L2 subdevs. What is not propagated in
the kernel for MC-centric devices is the pipeline configuration (formats
and selection rectangles).

> For resolution/format, in exynos4-is/fimc-capture.c:
>
> static int fimc_pipeline_try_format(struct fimc_ctx *ctx,
> ...
> 	while (1) {
> 		...
> 		/* set format on all pipeline subdevs */
> 		while (me != &fimc->vid_cap.subdev.entity) {
> 			...
> 			ret = v4l2_subdev_call(sd, pad, set_fmt, NULL, &sfmt);

As explained below, propagating formats is fine for video node-centric
drivers, but comes with limitations.

> > You could in theory move to the MC API inside the kernel, without
> > enabling support for the V4L2 subdev userspace API. Configuring the
> > pipeline and propagating the formats would then be the
> > responsibility of the kernel driver.
>
> Yes, this is exactly what we want to do.
> If I understand well, to disable the V4L2 subdev userspace API, I just
> have to remove the media device registration:
>
> -	/* Register the media device */
> -	ret = media_device_register(&dcmi->mdev);
> -	if (ret) {
> -		dev_err(dcmi->dev, "Failed to register media device (%d)\n",
> -			ret);
> -		goto err_media_device_cleanup;
> -	}
>
> Do you see any additional things to do?

That should be it. Note that in that case pipeline configuration has to
be handled by the master driver (DCMI in this case); the external
subdevs involved (such as the CSI-2 to parallel bridge) must not handle
any propagation of formats or selection rectangles.

> > However, this will limit your driver to the following:
> >
> > - Fully linear pipelines only (single sensor)
> > - No support for controls implemented by multiple entities in the
> >   pipeline (for instance controls that would exist in both the
> >   sensor and the bridge, such as gains)
> > - No proper support for scaling configuration if multiple components
> >   in the pipeline can scale
> >
> > Are you willing to set those limitations in stone and give up on
> > supporting those features?
>
> The involved hardware does not have those features and no extra
> functionality needs to be exposed to userspace, so this is fine.
>
> I'll push a v3 with this change and the other fixes related to
> Sakari's and Hans' comments.
>
> Please Sakari & Hans, also comment on that change so that we can
> converge on v3.
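To make the start/stop distinction concrete, here is a minimal sketch of
in-kernel stream propagation, loosely modeled on the graph walk that
drivers such as xilinx-dma perform. It is a simplified illustration, not
the actual xvip_pipeline_start_stop() code:

	#include <media/media-device.h>
	#include <media/media-entity.h>
	#include <media/v4l2-subdev.h>

	static int pipeline_s_stream(struct media_device *mdev,
				     struct media_entity *start, bool on)
	{
		struct media_graph graph;
		struct media_entity *entity;
		struct v4l2_subdev *sd;
		int ret;

		mutex_lock(&mdev->graph_mutex);

		ret = media_graph_walk_init(&graph, mdev);
		if (ret) {
			mutex_unlock(&mdev->graph_mutex);
			return ret;
		}

		/* Visit every entity connected to the starting one. */
		media_graph_walk_start(&graph, start);
		while ((entity = media_graph_walk_next(&graph))) {
			if (!is_media_entity_v4l2_subdev(entity))
				continue;

			sd = media_entity_to_v4l2_subdev(entity);
			ret = v4l2_subdev_call(sd, video, s_stream, on);
			if (ret < 0 && ret != -ENOIOCTLCMD)
				break;
		}

		media_graph_walk_cleanup(&graph);
		mutex_unlock(&mdev->graph_mutex);

		return ret == -ENOIOCTLCMD ? 0 : ret;
	}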
Hi Laurent,

On Thu, Jun 27, 2019 at 04:38:24PM +0300, Laurent Pinchart wrote:
> On Thu, Jun 27, 2019 at 12:38:40PM +0000, Hugues FRUCHET wrote:
> ...
> > Yes, this is exactly what we want to do.
> > If I understand well, to disable the V4L2 subdev userspace API, I
> > just have to remove the media device registration:
> >
> > -	/* Register the media device */
> > -	ret = media_device_register(&dcmi->mdev);
> > -	if (ret) {
> > -		dev_err(dcmi->dev, "Failed to register media device (%d)\n",
> > -			ret);
> > -		goto err_media_device_cleanup;
> > -	}
> >
> > Do you see any additional things to do?
>
> That should be it. Note that in that case pipeline configuration has
> to be handled by the master driver (DCMI in this case); the external
> subdevs involved (such as the CSI-2 to parallel bridge) must not
> handle any propagation of formats or selection rectangles.

I wonder what we'd do in the case when someone needs to connect
something else to the pipeline, such as a sensor with more than one
sub-device, or a flash or a lens controller.

For future-proofing, I'd just use MC for hardware that may be part of a
complex pipeline. In this case, if you think backwards compatibility is
important (and for most hardware it probably is), I don't think there
are perfect solutions if your existing driver is not MC-enabled.

A reasonable compromise would be to add a Kconfig option that allows
enabling MC. This way you can provide backwards compatibility and allow
making use of the full potential of the hardware. That's also why
hardware that may be part of a non-trivial MC pipeline should start out
MC-enabled, so we wouldn't run into this situation.
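Sakari's compromise could look like the sketch below, guarding the media
device registration behind a hypothetical Kconfig symbol
(CONFIG_VIDEO_STM32_DCMI_MC is an assumption for illustration, not an
existing option):

	/*
	 * Only expose the media device, and with it the MC/subdev
	 * userspace API, when the (hypothetical) option is enabled;
	 * otherwise keep the video node-centric behaviour for existing
	 * applications.
	 */
	if (IS_ENABLED(CONFIG_VIDEO_STM32_DCMI_MC)) {
		ret = media_device_register(&dcmi->mdev);
		if (ret) {
			dev_err(dcmi->dev,
				"Failed to register media device (%d)\n",
				ret);
			goto err_media_device_cleanup;
		}
	}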
Hi Sakari,

On Fri, Jul 05, 2019 at 10:55:22AM +0300, Sakari Ailus wrote:
> On Thu, Jun 27, 2019 at 04:38:24PM +0300, Laurent Pinchart wrote:
> ...
> > That should be it. Note that in that case pipeline configuration has
> > to be handled by the master driver (DCMI in this case); the external
> > subdevs involved (such as the CSI-2 to parallel bridge) must not
> > handle any propagation of formats or selection rectangles.
>
> I wonder what we'd do in the case when someone needs to connect
> something else to the pipeline, such as a sensor with more than one
> sub-device, or a flash or a lens controller.
>
> For future-proofing, I'd just use MC for hardware that may be part of
> a complex pipeline. In this case, if you think backwards compatibility
> is important (and for most hardware it probably is), I don't think
> there are perfect solutions if your existing driver is not MC-enabled.

Oh, I fully agree with you, which is why I mentioned in another e-mail
that using a video node-centric approach would come with limitations,
such as not being able to support more complex pipelines, ever.

> A reasonable compromise would be to add a Kconfig option that allows
> enabling MC. This way you can provide backwards compatibility and
> allow making use of the full potential of the hardware. That's also
> why hardware that may be part of a non-trivial MC pipeline should
> start out MC-enabled, so we wouldn't run into this situation.

I really don't like this, as it introduces additional complexity. My
recommendation is to go for an MC-centric approach. Going for a video
node-centric approach is really shooting oneself in the foot regarding
future extensions. But that being said, if there's a strong desire to go
for foot self-shooting, the way to go is explained above.
Hi Laurent,

On Fri, Jul 05, 2019 at 11:04:24AM +0300, Laurent Pinchart wrote:
...
> > A reasonable compromise would be to add a Kconfig option that allows
> > enabling MC. This way you can provide backwards compatibility and
> > allow making use of the full potential of the hardware. That's also
> > why hardware that may be part of a non-trivial MC pipeline should
> > start out MC-enabled, so we wouldn't run into this situation.
>
> I really don't like this, as it introduces additional complexity. My
> recommendation is to go for an MC-centric approach. Going for a video
> node-centric approach is really shooting oneself in the foot regarding
> future extensions. But that being said, if there's a strong desire to
> go for foot self-shooting, the way to go is explained above.

Well, there's nothing that can be done anymore, as this has already
happened: this is an existing driver in the mainline kernel. Unless you
have a time machine of some sort, of course. :-)

The choice is now really between breaking existing applications (plain
V4L2) and supporting new functionality (MC-centric), so if you need
both, I don't really see another choice than a Kconfig option. On the
other hand, if we know there are no existing users that could not
support the MC-centric view of the device, we could just change the
driver API and forget the Kconfig option. It'd be much simpler that way
indeed. But I don't know which is the case.