
Surface Go VCM type (was: Need to pass acpi_enforce_resources=lax on the Surface Go (version1))

Message ID 3b61bb2d-1136-cf35-ba7a-724da9642855@gmail.com (mailing list archive)
State New, archived

Commit Message

Daniel Scally Oct. 29, 2021, 11:50 a.m. UTC
Hi all


+CC linux-media and libcamera-devel, as it's probably a good time to
broaden this out. Also Andy because I'm hoping you can help :) The
background of the discussion is how we identify and enumerate
(correctly, i.e. with a type matching the VCM driver's i2c_device_id;
there are a few different VCMs in scope, which seem to be encoded in the
SSDB buffer) which VCM module is linked to a sensor in Intel's
IPU3-centric ACPI tables. The I2C address for the device is just a second
I2cSerialBusV2 against the sensor's ACPI device rather than a separate
one, which is a bit awkward. We also need to get firmware nodes created
for the VCM such that the sensor will link to it via the lens-focus property.

On 28/10/2021 09:57, Hans de Goede wrote:
> Hi,
>
> On 10/28/21 10:49, Laurent Pinchart wrote:
>> Hi Hans,
>>
>> On Thu, Oct 28, 2021 at 09:51:08AM +0200, Hans de Goede wrote:
>>> On 10/28/21 09:10, Daniel Scally wrote:
>>>> On 27/10/2021 15:16, Hans de Goede wrote:
>>>>> On 10/27/21 12:07, Daniel Scally wrote:
>>>>>> On 26/10/2021 11:14, Hans de Goede wrote:
>>>>>>>>> So yesterday I already sorta guessed it would be the DW9714 because of
>>>>>>>>> the 0x0c address and I tried:
>>>>>>>>>
>>>>>>>>> i2ctransfer -y 2 w2@0x0c 0x00 0x00
>>>>>>>>>
>>>>>>>>> And the transfer fails, while according to the driver that is a valid
>>>>>>>>> value. So maybe we are missing a regulator enable? Or its not a DW9714.
>>>>>>>>>
>>>>>>>>> Also "i2cdetect -y -r 2" does not see anything at address 0x0c (but some of
>>>>>>>>> these VCMs seem to be write only...) it does OTOH see an unknown device at
>>>>>>>>> address 0x21.
>>>>>>>> Well, when debugging the necessary TPS68470 settings I used a poor man's
>>>>>>>> i2ctransfer on Windows whilst the camera was running to read the values
>>>>>>>> that were set for both the PMIC and the camera sensor. Using the same
>>>>>>>> program I can connect to and read values from a device at 0x0c,
>>>>>> Just as further testing I dumped the contents of the device at 0x0c,
>>>>>> which comes back as
>>>>>>
>>>>>> f1 1 2 1 61 0 40 60
>>>>>>
>>>>>> Byte 0 is given in the driver you linked as the ID field and expected to
>>>>>> be f1. The driver controls focus by writing to the 3rd and 4th byte
>>>>>> (with the 4th being the LSB); the only value that seemed to fluctuate
>>>>>> when running windows and moving my hand in front of the sensor was byte
>>>>>> 4 and testing it out I wrote values into that byte and the focus
>>>>>> changes. So the device at 0x0c is definitely the vcm and it sure looks
>>>>>> like it's the DW9719
>>>>>>
>>>>>> The device at 0x21 is only available on Windows when the camera is
>>>>>> running, I thought it was quite likely that one of the "spare"
>>>>>> regulators from the TPS68470. One line is called VCM, and sure enough
>>>>>> it's enabled whilst the world-facing camera is running. I switched to
>>>>>> linux and started streaming the back camera, then enabled that voltage
>>>>>> regulator via i2ctransfer:
>>>>>>
>>>>>> sudo i2ctransfer 2 w2@0x4d 0x3c 0x6d
>>>>>>
>>>>>> sudo i2ctransfer 2 w2@0x4d 0x44 0x01
>>>>>>
>>>>>> And now i2cdetect shows the device at 0x0c on bus 2 - so we need more
>>>>>> jiggery pokey to map that VCM regulator to this new device (once we've
>>>>>> gotten it enumerated...) and the driver needs to have a tweak to call
>>>>>> regulator get and do a power on at some point.
>>>>> Awesome, great job on figuring this out!
>>>>>
>>>>> As you know I can spend $dayjob time on this, so I'll take on the job
>>>>> of creating the i2c-client and hooking up the regulator in some
>>>>> upstreamable manner.
>>>> Okedokey cool. I'd probably start at the cio2-bridge, if only because we
>>>> already have the adev there and the SSDB buffer loaded, so should be
>>>> easy enough to add an enum for the vcm_type and a call to
>>>> i2c_acpi_new_device()...bit of a weird place for that though I guess.
>>> Ah, I was actually thinking about doing this in the int3472 code for
>>> a number of reasons:
>>>
>>> 1. We already have the regulator_init_data there and we will need to
>>> expand it for this.
>>>
>>> 2. It is sorta the central place where we deal with all this glue-stuff
>> I'm not too sure about that. The INT3472 model the "Intel camera PMIC"
>> (I don't remember the exact wording, but that's more or less how the
>> device is described in Windows, and it matches the intent we see in the
>> DSDT).
> I agree that the INT3472 models the PMIC, or whatever discrete bits
> which offer similar functionality.
>
>> Given that we already have cio2-bridge, and that it hooks up the
>> sensor to the CIO2, it seems to me that it would be a better central
>> place.
> Ok, I was sorta expecting you to want to keep glue code like this
> out of drivers/media. But I guess that only applies to putting ACPI
> specific stuff in sensor drivers; and since the cio2-bridge code is
> already x86/ACPI specific you are fine with adding ACPI code there?
>
> I'm fine with putting the VCM i2c-client instantiation in the
> cio2-bridge code, that may also make it easier to tie the 2 together
> at the media-controller level.


Having looked at this yesterday evening I'm more and more convinced it's
necessary. I hacked it into the ov8865 driver in the interim (just by
calling i2c_acpi_new_device() in probe) and then worked on that dw9719
code you found [1] to turn it into an i2c driver (attached, though still
needs a bit of work), which will successfully bind to the i2c client
enumerated by that i2c_acpi_new_device() call. From there though it
needs a way for the v4l2 subdev to be matched to the sensor's subdev.
This can happen automatically by way of the lens-focus firmware property
against the sensor - we currently build those in the cio2-bridge, so
adding another software node for the VCM and creating a lens-focus
property for the sensor's software_node with a pointer to the VCM's node
seems like the best way to do that.


To throw a spanner in the works though; I noticed this delightful _CRS
for the OV9734 sensor of a Surface Laptop 1 earlier:


Method (_CRS, 0, Serialized)  // _CRS: Current Resource Settings
{
    Name (SBUF, ResourceTemplate ()
    {
        I2cSerialBusV2 (0x0036, ControllerInitiated, 0x00061A80,
            AddressingMode7Bit, "\\_SB.PCI0.I2C2",
            0x00, ResourceConsumer, , Exclusive,
            )
        I2cSerialBusV2 (0x0050, ControllerInitiated, 0x00061A80,
            AddressingMode7Bit, "\\_SB.PCI0.I2C2",
            0x00, ResourceConsumer, , Exclusive,
            )
        I2cSerialBusV2 (0x0051, ControllerInitiated, 0x00061A80,
            AddressingMode7Bit, "\\_SB.PCI0.I2C2",
            0x00, ResourceConsumer, , Exclusive,
            )
        I2cSerialBusV2 (0x0052, ControllerInitiated, 0x00061A80,
            AddressingMode7Bit, "\\_SB.PCI0.I2C2",
            0x00, ResourceConsumer, , Exclusive,
            )
        I2cSerialBusV2 (0x0053, ControllerInitiated, 0x00061A80,
            AddressingMode7Bit, "\\_SB.PCI0.I2C2",
            0x00, ResourceConsumer, , Exclusive,
            )
    })
    Return (SBUF) /* \_SB_.PCI0.I2C2.CAMF._CRS.SBUF */
}


How do we know which one's the VCM when there's more than just two like
that? Andy: don't suppose you can shed any light there?

[1]
https://github.com/ZenfoneArea/android_kernel_asus_zenfone5/blob/master/linux/modules/camera/drivers/media/i2c/imx/dw9719.c

Comments

Andy Shevchenko Nov. 1, 2021, 3:55 p.m. UTC | #1
On Fri, Oct 29, 2021 at 12:50:31PM +0100, Daniel Scally wrote:
> Hi all
> 
> +CC linux-media and libcamera-devel, as it's probably a good time to
> broaden this out. Also Andy because I'm hoping you can help :) The
> background of the discussion is about how we identify and enumerate
> (correctly, I.E. with a type matching the vcm driver's i2c_device_id,
> and there are a few different vcm's in scope which seem encoded in the
> SSDB buffer) which VCM module is linked to a sensor in Intel's IPU3
> centric ACPI tables. The I2C address for the device is just a second
> I2cSerialBusV2 against the sensor's acpi device rather than a separate
> one, which is no awkward. We also need to get firmware created for the
> VCM such that the sensor will link to it via the lens-focus property.

> On 28/10/2021 09:57, Hans de Goede wrote:

...

> To throw a spanner in the works though; I noticed this delightful _CRS
> for the OV9734 sensor of a  Surface Laptop 1 earlier:
> 
> Method (_CRS, 0, Serialized)  // _CRS: Current Resource Settings
> {
>     Name (SBUF, ResourceTemplate ()
>     {
>         I2cSerialBusV2 (0x0036, ControllerInitiated, 0x00061A80,
>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>             0x00, ResourceConsumer, , Exclusive,
>             )
>         I2cSerialBusV2 (0x0050, ControllerInitiated, 0x00061A80,
>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>             0x00, ResourceConsumer, , Exclusive,
>             )
>         I2cSerialBusV2 (0x0051, ControllerInitiated, 0x00061A80,
>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>             0x00, ResourceConsumer, , Exclusive,
>             )
>         I2cSerialBusV2 (0x0052, ControllerInitiated, 0x00061A80,
>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>             0x00, ResourceConsumer, , Exclusive,
>             )
>         I2cSerialBusV2 (0x0053, ControllerInitiated, 0x00061A80,
>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>             0x00, ResourceConsumer, , Exclusive,
>             )
>     })
>     Return (SBUF) /* \_SB_.PCI0.I2C2.CAMF._CRS.SBUF */
> }
> 
> How do we know which one's the VCM when there's more than just two like
> that? Andy: don't suppose you can shed any light there?

Seems to me that the order is defined by address and if software engineers are
not (so) crazy, it shouldn't deviate from device to device.

At least this is stated in the internal documentation.

The order is

1. Sensor (single addr)
2. VCM (single addr)
3. EEPROM (addr per page)

Interestingly, your list has no VCM defined in the _CRS...

Not sure how to distinguish that if it's not a typo and indeed the case.
Sounds like DMI quirk :-( again (something like 3-bit flag to define
which devices are present in the _CRS taking into account the ordering
requirements).
Andy Shevchenko Nov. 1, 2021, 3:59 p.m. UTC | #2
On Mon, Nov 01, 2021 at 05:55:18PM +0200, Andy Shevchenko wrote:
> On Fri, Oct 29, 2021 at 12:50:31PM +0100, Daniel Scally wrote:
> > Hi all
> > 
> > +CC linux-media and libcamera-devel, as it's probably a good time to
> > broaden this out. Also Andy because I'm hoping you can help :) The
> > background of the discussion is about how we identify and enumerate
> > (correctly, I.E. with a type matching the vcm driver's i2c_device_id,
> > and there are a few different vcm's in scope which seem encoded in the
> > SSDB buffer) which VCM module is linked to a sensor in Intel's IPU3
> > centric ACPI tables. The I2C address for the device is just a second
> > I2cSerialBusV2 against the sensor's acpi device rather than a separate
> > one, which is no awkward. We also need to get firmware created for the
> > VCM such that the sensor will link to it via the lens-focus property.
> 
> > On 28/10/2021 09:57, Hans de Goede wrote:
> 
> ...
> 
> > To throw a spanner in the works though; I noticed this delightful _CRS
> > for the OV9734 sensor of a  Surface Laptop 1 earlier:
> > 
> > Method (_CRS, 0, Serialized)  // _CRS: Current Resource Settings
> > {
> >     Name (SBUF, ResourceTemplate ()
> >     {
> >         I2cSerialBusV2 (0x0036, ControllerInitiated, 0x00061A80,
> >             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
> >             0x00, ResourceConsumer, , Exclusive,
> >             )
> >         I2cSerialBusV2 (0x0050, ControllerInitiated, 0x00061A80,
> >             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
> >             0x00, ResourceConsumer, , Exclusive,
> >             )
> >         I2cSerialBusV2 (0x0051, ControllerInitiated, 0x00061A80,
> >             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
> >             0x00, ResourceConsumer, , Exclusive,
> >             )
> >         I2cSerialBusV2 (0x0052, ControllerInitiated, 0x00061A80,
> >             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
> >             0x00, ResourceConsumer, , Exclusive,
> >             )
> >         I2cSerialBusV2 (0x0053, ControllerInitiated, 0x00061A80,
> >             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
> >             0x00, ResourceConsumer, , Exclusive,
> >             )
> >     })
> >     Return (SBUF) /* \_SB_.PCI0.I2C2.CAMF._CRS.SBUF */
> > }
> > 
> > How do we know which one's the VCM when there's more than just two like
> > that? Andy: don't suppose you can shed any light there?
> 
> Seems to me that the order is defined by address and if software engineers are
> not (so) crazy, it shouldn't deviate from device to device.
> 
> At least this is stated in the internal documentation.
> 
> The order is
> 
> 1. Sensor (single addr)
> 2. VCM (single addr)
> 3. EEPROM (addr per page)
> 
> Interestingly that your list have no VCM in the _CRS defined...
> 
> Not sure how to distinguish that if it's not a typo and indeed the case.
> Sounds like DMI quirk :-( again (something like 3-bit flag to define
> which devices are present in the _CRS taking into account the ordering
> requirements).

Hold on, there is a way out!

SSDB has fields:

	u8 romtype;
	u8 vcmtype;

0 means no device present.

So it seems the documentation is consistent and no quirks are needed
(until proven otherwise).
Hans de Goede Nov. 1, 2021, 4:02 p.m. UTC | #3
On 10/29/21 13:50, Daniel Scally wrote:
> Hi all
> 
> 
> +CC linux-media and libcamera-devel, as it's probably a good time to
> broaden this out. Also Andy because I'm hoping you can help :) The
> background of the discussion is about how we identify and enumerate
> (correctly, I.E. with a type matching the vcm driver's i2c_device_id,
> and there are a few different vcm's in scope which seem encoded in the
> SSDB buffer) which VCM module is linked to a sensor in Intel's IPU3
> centric ACPI tables. The I2C address for the device is just a second
> I2cSerialBusV2 against the sensor's acpi device rather than a separate
> one, which is no awkward. We also need to get firmware created for the
> VCM such that the sensor will link to it via the lens-focus property.
> 
> On 28/10/2021 09:57, Hans de Goede wrote:
>> Hi,
>>
>> On 10/28/21 10:49, Laurent Pinchart wrote:
>>> Hi Hans,
>>>
>>> On Thu, Oct 28, 2021 at 09:51:08AM +0200, Hans de Goede wrote:
>>>> On 10/28/21 09:10, Daniel Scally wrote:
>>>>> On 27/10/2021 15:16, Hans de Goede wrote:
>>>>>> On 10/27/21 12:07, Daniel Scally wrote:
>>>>>>> On 26/10/2021 11:14, Hans de Goede wrote:
>>>>>>>>>> So yesterday I already sorta guessed it would be the DW9714 because of
>>>>>>>>>> the 0x0c address and I tried:
>>>>>>>>>>
>>>>>>>>>> i2ctransfer -y 2 w2@0x0c 0x00 0x00
>>>>>>>>>>
>>>>>>>>>> And the transfer fails, while according to the driver that is a valid
>>>>>>>>>> value. So maybe we are missing a regulator enable? Or its not a DW9714.
>>>>>>>>>>
>>>>>>>>>> Also "i2cdetect -y -r 2" does not see anything at address 0x0c (but some of
>>>>>>>>>> these VCMs seem to be write only...) it does OTOH see an unknown device at
>>>>>>>>>> address 0x21.
>>>>>>>>> Well, when debugging the necessary TPS68470 settings I used a poor man's
>>>>>>>>> i2ctransfer on Windows whilst the camera was running to read the values
>>>>>>>>> that were set for both the PMIC and the camera sensor. Using the same
>>>>>>>>> program I can connect to and read values from a device at 0x0c,
>>>>>>> Just as further testing I dumped the contents of the device at 0x0c,
>>>>>>> which comes back as
>>>>>>>
>>>>>>> f1 1 2 1 61 0 40 60
>>>>>>>
>>>>>>> Byte 0 is given in the driver you linked as the ID field and expected to
>>>>>>> be f1. The driver controls focus by writing to the 3rd and 4th byte
>>>>>>> (with the 4th being the LSB); the only value that seemed to fluctuate
>>>>>>> when running windows and moving my hand in front of the sensor was byte
>>>>>>> 4 and testing it out I wrote values into that byte and the focus
>>>>>>> changes. So the device at 0x0c is definitely the vcm and it sure looks
>>>>>>> like it's the DW9719
>>>>>>>
>>>>>>> The device at 0x21 is only available on Windows when the camera is
>>>>>>> running, I thought it was quite likely that one of the "spare"
>>>>>>> regulators from the TPS68470. One line is called VCM, and sure enough
>>>>>>> it's enabled whilst the world-facing camera is running. I switched to
>>>>>>> linux and started streaming the back camera, then enabled that voltage
>>>>>>> regulator via i2ctransfer:
>>>>>>>
>>>>>>> sudo i2ctransfer 2 w2@0x4d 0x3c 0x6d
>>>>>>>
>>>>>>> sudo i2ctransfer 2 w2@0x4d 0x44 0x01
>>>>>>>
>>>>>>> And now i2cdetect shows the device at 0x0c on bus 2 - so we need more
>>>>>>> jiggery pokey to map that VCM regulator to this new device (once we've
>>>>>>> gotten it enumerated...) and the driver needs to have a tweak to call
>>>>>>> regulator get and do a power on at some point.
>>>>>> Awesome, great job on figuring this out!
>>>>>>
>>>>>> As you know I can spend $dayjob time on this, so I'll take on the job
>>>>>> of creating the i2c-client and hooking up the regulator in some
>>>>>> upstreamable manner.
>>>>> Okedokey cool. I'd probably start at the cio2-bridge, if only because we
>>>>> already have the adev there and the SSDB buffer loaded, so should be
>>>>> easy enough to add an enum for the vcm_type and a call to
>>>>> i2c_acpi_new_device()...bit of a weird place for that though I guess.
>>>> Ah, I was actually thinking about doing this in the int3472 code for
>>>> a number of reasons:
>>>>
>>>> 1. We already have the regulator_init_data there and we will need to
>>>> expand it for this.
>>>>
>>>> 2. It is sorta the central place where we deal with all this glue-stuff
>>> I'm not too sure about that. The INT3472 model the "Intel camera PMIC"
>>> (I don't remember the exact wording, but that's more or less how the
>>> device is described in Windows, and it matches the intent we see in the
>>> DSDT).
>> I agree that the INT3472 models the PMIC, or whatever discrete bits
>> which offer similar functionality.
>>
>>> Given that we already have cio2-bridge, and that it hooks up the
>>> sensor to the CIO2, it seems to me that it would be a better central
>>> place.
>> Ok, I was sorta expecting you to want to keep glue code like this
>> out of drivers/media. But I guess that only applies to putting ACPI
>> specific stuff in sensor drivers; and since the cio2-bridge code is
>> already x86/ACPI specific you are fine with adding ACPI code there?
>>
>> I'm fine with putting the VCM i2c-client instantiation in the
>> cio2-bridge code, that may also make it easier to tie the 2 together
>> at the media-controller level.
> 
> 
> Having looked at this yesterday evening I'm more and more convinced it's
> necessary. I hacked it into the ov8865 driver in the interim (just by
> calling i2c_acpi_new_device() in probe) and then worked on that dw9719
> code you found [1] to turn it into an i2c driver (attached, though still
> needs a bit of work), which will successfully bind to the i2c client
> enumerated by that i2c_acpi_new_device() call. From there though it
> needs a way for the v4l2 subdev to be matched to the sensor's subdev.
> This can happen automatically by way of the lens-focus firmware property
> against the sensor - we currently build those in the cio2-bridge, so
> adding another software node for the VCM and creating a lens-focus
> property for the sensor's software_node with a pointer to the VCM's node
> seems like the best way to do that.

So besides prepping a v5 of my previous series, with updated regulator
init-data for the VCM, I've also been looking into this; attached are
the results.

Some notes from initial testing:

1. The driver you attached will only successfully probe if I insmod
it while streaming video from the sensor. So I think we need another
regulator or the clk for just the VCM too; I will investigate this
later this week.

2. I need some help with all the fwnode link stuff (I'm not very familiar
with this). There seems to be a chicken-and-egg problem here though:
the v4l2-subdev for the VCM does not register because of the async
matching, and if we add it to the "graph" then my idea to enumerate the
VCMs from the SSDB in the complete() callback won't work. But I guess we
can do this on a per-sensor basis from the cio2_notifier_bound()
callback instead?

Can someone give me some hints how the fwnode link code should look/work
and how I can get the async registering of the subdev for the VCM to
complete ?

Regards,

Hans




> 
> 
> To throw a spanner in the works though; I noticed this delightful _CRS
> for the OV9734 sensor of a  Surface Laptop 1 earlier:
> 
> 
> Method (_CRS, 0, Serialized)  // _CRS: Current Resource Settings
> {
>     Name (SBUF, ResourceTemplate ()
>     {
>         I2cSerialBusV2 (0x0036, ControllerInitiated, 0x00061A80,
>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>             0x00, ResourceConsumer, , Exclusive,
>             )
>         I2cSerialBusV2 (0x0050, ControllerInitiated, 0x00061A80,
>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>             0x00, ResourceConsumer, , Exclusive,
>             )
>         I2cSerialBusV2 (0x0051, ControllerInitiated, 0x00061A80,
>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>             0x00, ResourceConsumer, , Exclusive,
>             )
>         I2cSerialBusV2 (0x0052, ControllerInitiated, 0x00061A80,
>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>             0x00, ResourceConsumer, , Exclusive,
>             )
>         I2cSerialBusV2 (0x0053, ControllerInitiated, 0x00061A80,
>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>             0x00, ResourceConsumer, , Exclusive,
>             )
>     })
>     Return (SBUF) /* \_SB_.PCI0.I2C2.CAMF._CRS.SBUF */
> }

Hmm, we do have i2c_acpi_client_count(adev), so it is easy to use
that and just always use the last resource for the VCM. But that assumes
that is what is going on here and I have no idea.

Regards,

Hans
Andy Shevchenko Nov. 1, 2021, 7:18 p.m. UTC | #4
On Mon, Nov 01, 2021 at 05:02:58PM +0100, Hans de Goede wrote:
> On 10/29/21 13:50, Daniel Scally wrote:

...

> > To throw a spanner in the works though; I noticed this delightful _CRS
> > for the OV9734 sensor of a  Surface Laptop 1 earlier:
> > 
> > Method (_CRS, 0, Serialized)  // _CRS: Current Resource Settings
> > {
> >     Name (SBUF, ResourceTemplate ()
> >     {
> >         I2cSerialBusV2 (0x0036, ControllerInitiated, 0x00061A80,
> >             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
> >             0x00, ResourceConsumer, , Exclusive,
> >             )
> >         I2cSerialBusV2 (0x0050, ControllerInitiated, 0x00061A80,
> >             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
> >             0x00, ResourceConsumer, , Exclusive,
> >             )
> >         I2cSerialBusV2 (0x0051, ControllerInitiated, 0x00061A80,
> >             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
> >             0x00, ResourceConsumer, , Exclusive,
> >             )
> >         I2cSerialBusV2 (0x0052, ControllerInitiated, 0x00061A80,
> >             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
> >             0x00, ResourceConsumer, , Exclusive,
> >             )
> >         I2cSerialBusV2 (0x0053, ControllerInitiated, 0x00061A80,
> >             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
> >             0x00, ResourceConsumer, , Exclusive,
> >             )
> >     })
> >     Return (SBUF) /* \_SB_.PCI0.I2C2.CAMF._CRS.SBUF */
> > }
> 
> Hmm, we do have i2c_acpi_client_count(adev), so it is easy to use
> that and just always use the last resource for the VCM. But that assumes
> that is what is going on here and I have no idea.

You probably composed this message before reading my reply(ies).

...

> Change the first parameter of i2c_acpi_new_device() from
> a struct device * to a struct acpi_device *.
> 
> This is necessary because in some cases we may only have access
> to the fwnode / acpi_device and not to the matching physical-node
> struct device *.

Can we rather create an fwnode based API and then

static inline
struct i2c_client *i2c_acpi_new_device(struct device *dev, int index,
				       struct i2c_board_info *info)
{
	return i2c_acpi_new_device_by_fwnode(dev_fwnode(dev), index, info);
}

?
Hans de Goede Nov. 1, 2021, 7:51 p.m. UTC | #5
Hi,

On 11/1/21 20:18, Andy Shevchenko wrote:
> On Mon, Nov 01, 2021 at 05:02:58PM +0100, Hans de Goede wrote:
>> On 10/29/21 13:50, Daniel Scally wrote:
> 
> ...
> 
>>> To throw a spanner in the works though; I noticed this delightful _CRS
>>> for the OV9734 sensor of a  Surface Laptop 1 earlier:
>>>
>>> Method (_CRS, 0, Serialized)  // _CRS: Current Resource Settings
>>> {
>>>     Name (SBUF, ResourceTemplate ()
>>>     {
>>>         I2cSerialBusV2 (0x0036, ControllerInitiated, 0x00061A80,
>>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>>             0x00, ResourceConsumer, , Exclusive,
>>>             )
>>>         I2cSerialBusV2 (0x0050, ControllerInitiated, 0x00061A80,
>>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>>             0x00, ResourceConsumer, , Exclusive,
>>>             )
>>>         I2cSerialBusV2 (0x0051, ControllerInitiated, 0x00061A80,
>>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>>             0x00, ResourceConsumer, , Exclusive,
>>>             )
>>>         I2cSerialBusV2 (0x0052, ControllerInitiated, 0x00061A80,
>>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>>             0x00, ResourceConsumer, , Exclusive,
>>>             )
>>>         I2cSerialBusV2 (0x0053, ControllerInitiated, 0x00061A80,
>>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>>             0x00, ResourceConsumer, , Exclusive,
>>>             )
>>>     })
>>>     Return (SBUF) /* \_SB_.PCI0.I2C2.CAMF._CRS.SBUF */
>>> }
>>
>> Hmm, we do have i2c_acpi_client_count(adev), so it is easy to use
>> that and just always use the last resource for the VCM. But that assumes
>> that is what is going on here and I have no idea.
> 
> You probably composed this message before reading my reply(ies).

Yes, sorry about that; and thank you for your answer, the info
you provided is very helpful. If I understand things correctly
we can just always take the 2nd I2cSerialBusV2 resource entry
for the VCM, since if there is a VCM that is where its
I2cSerialBusV2 resource entry will be.

> ...
> 
>> Change the first parameter of i2c_acpi_new_device() from
>> a struct device * to a struct acpi_device *.
>>
>> This is necessary because in some cases we may only have access
>> to the fwnode / acpi_device and not to the matching physical-node
>> struct device *.
> 
> Can we rather create an fwnode based API and then
> 
> static inline
> struct i2c_client *i2c_acpi_new_device(struct device *dev, int index,
> 				       struct i2c_board_info *info)
> {
> 	return i2c_acpi_new_device_by_fwnode(dev_fwnode(dev), index, info);
> }
> 
> ?

Yes that is a good idea, I'll switch to that in my local tree, so
v1 (which I will post once I've something working) will have this.

Regards,

Hans
Daniel Scally Nov. 1, 2021, 11:26 p.m. UTC | #6
Hi Andy

On 01/11/2021 15:55, Andy Shevchenko wrote:
> On Fri, Oct 29, 2021 at 12:50:31PM +0100, Daniel Scally wrote:
>> Hi all
>>
>> +CC linux-media and libcamera-devel, as it's probably a good time to
>> broaden this out. Also Andy because I'm hoping you can help :) The
>> background of the discussion is about how we identify and enumerate
>> (correctly, I.E. with a type matching the vcm driver's i2c_device_id,
>> and there are a few different vcm's in scope which seem encoded in the
>> SSDB buffer) which VCM module is linked to a sensor in Intel's IPU3
>> centric ACPI tables. The I2C address for the device is just a second
>> I2cSerialBusV2 against the sensor's acpi device rather than a separate
>> one, which is no awkward. We also need to get firmware created for the
>> VCM such that the sensor will link to it via the lens-focus property.
>> On 28/10/2021 09:57, Hans de Goede wrote:
> ...
>
>> To throw a spanner in the works though; I noticed this delightful _CRS
>> for the OV9734 sensor of a  Surface Laptop 1 earlier:
>>
>> Method (_CRS, 0, Serialized)  // _CRS: Current Resource Settings
>> {
>>     Name (SBUF, ResourceTemplate ()
>>     {
>>         I2cSerialBusV2 (0x0036, ControllerInitiated, 0x00061A80,
>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>             0x00, ResourceConsumer, , Exclusive,
>>             )
>>         I2cSerialBusV2 (0x0050, ControllerInitiated, 0x00061A80,
>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>             0x00, ResourceConsumer, , Exclusive,
>>             )
>>         I2cSerialBusV2 (0x0051, ControllerInitiated, 0x00061A80,
>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>             0x00, ResourceConsumer, , Exclusive,
>>             )
>>         I2cSerialBusV2 (0x0052, ControllerInitiated, 0x00061A80,
>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>             0x00, ResourceConsumer, , Exclusive,
>>             )
>>         I2cSerialBusV2 (0x0053, ControllerInitiated, 0x00061A80,
>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>             0x00, ResourceConsumer, , Exclusive,
>>             )
>>     })
>>     Return (SBUF) /* \_SB_.PCI0.I2C2.CAMF._CRS.SBUF */
>> }
>>
>> How do we know which one's the VCM when there's more than just two like
>> that? Andy: don't suppose you can shed any light there?
> Seems to me that the order is defined by address and if software engineers are
> not (so) crazy, it shouldn't deviate from device to device.
>
> At least this is stated in the internal documentation.
>
> The order is
>
> 1. Sensor (single addr)
> 2. VCM (single addr)
> 3. EEPROM (addr per page)


Thank you! That's really helpful, much appreciated.

> Interestingly that your list have no VCM in the _CRS defined...
>
> Not sure how to distinguish that if it's not a typo and indeed the case.
> Sounds like DMI quirk :-( again (something like 3-bit flag to define
> which devices are present in the _CRS taking into account the ordering
> requirements).


There's a field in the SSDB buffer against the sensor's ACPI device,
with values from 0-4. We found an enumeration online, so we think that
the VCM in the Surface Go is a DW9719.
Daniel Scally Nov. 1, 2021, 11:43 p.m. UTC | #7
Hi Hans

On 01/11/2021 16:02, Hans de Goede wrote:
>
> On 10/29/21 13:50, Daniel Scally wrote:
>> Hi all
>>
>>
>> +CC linux-media and libcamera-devel, as it's probably a good time to
>> broaden this out. Also Andy because I'm hoping you can help :) The
>> background of the discussion is about how we identify and enumerate
>> (correctly, I.E. with a type matching the vcm driver's i2c_device_id,
>> and there are a few different vcm's in scope which seem encoded in the
>> SSDB buffer) which VCM module is linked to a sensor in Intel's IPU3
>> centric ACPI tables. The I2C address for the device is just a second
>> I2cSerialBusV2 against the sensor's acpi device rather than a separate
>> one, which is no awkward. We also need to get firmware created for the
>> VCM such that the sensor will link to it via the lens-focus property.
>>
>> On 28/10/2021 09:57, Hans de Goede wrote:
>>> Hi,
>>>
>>> On 10/28/21 10:49, Laurent Pinchart wrote:
>>>> Hi Hans,
>>>>
>>>> On Thu, Oct 28, 2021 at 09:51:08AM +0200, Hans de Goede wrote:
>>>>> On 10/28/21 09:10, Daniel Scally wrote:
>>>>>> On 27/10/2021 15:16, Hans de Goede wrote:
>>>>>>> On 10/27/21 12:07, Daniel Scally wrote:
>>>>>>>> On 26/10/2021 11:14, Hans de Goede wrote:
>>>>>>>>>>> So yesterday I already sorta guessed it would be the DW9714 because of
>>>>>>>>>>> the 0x0c address and I tried:
>>>>>>>>>>>
>>>>>>>>>>> i2ctransfer -y 2 w2@0x0c 0x00 0x00
>>>>>>>>>>>
>>>>>>>>>>> And the transfer fails, while according to the driver that is a valid
>>>>>>>>>>> value. So maybe we are missing a regulator enable? Or it's not a DW9714.
>>>>>>>>>>>
>>>>>>>>>>> Also "i2cdetect -y -r 2" does not see anything at address 0x0c (but some of
>>>>>>>>>>> these VCMs seem to be write only...) it does OTOH see an unknown device at
>>>>>>>>>>> address 0x21.
>>>>>>>>>> Well, when debugging the necessary TPS68470 settings I used a poor man's
>>>>>>>>>> i2ctransfer on Windows whilst the camera was running to read the values
>>>>>>>>>> that were set for both the PMIC and the camera sensor. Using the same
>>>>>>>>>> program I can connect to and read values from a device at 0x0c,
>>>>>>>> Just as further testing I dumped the contents of the device at 0x0c,
>>>>>>>> which comes back as
>>>>>>>>
>>>>>>>> f1 1 2 1 61 0 40 60
>>>>>>>>
>>>>>>>> Byte 0 is given in the driver you linked as the ID field and expected to
>>>>>>>> be f1. The driver controls focus by writing to the 3rd and 4th byte
>>>>>>>> (with the 4th being the LSB); the only value that seemed to fluctuate
>>>>>>>> when running windows and moving my hand in front of the sensor was byte
>>>>>>>> 4 and testing it out I wrote values into that byte and the focus
>>>>>>>> changes. So the device at 0x0c is definitely the vcm and it sure looks
>>>>>>>> like it's the DW9719
>>>>>>>>
>>>>>>>> The device at 0x21 is only available on Windows when the camera is
>>>>>>>> running; I thought it was quite likely one of the "spare"
>>>>>>>> regulators from the TPS68470. One line is called VCM, and sure enough
>>>>>>>> it's enabled whilst the world-facing camera is running. I switched to
>>>>>>>> linux and started streaming the back camera, then enabled that voltage
>>>>>>>> regulator via i2ctransfer:
>>>>>>>>
>>>>>>>> sudo i2ctransfer 2 w2@0x4d 0x3c 0x6d
>>>>>>>>
>>>>>>>> sudo i2ctransfer 2 w2@0x4d 0x44 0x01
>>>>>>>>
>>>>>>>> And now i2cdetect shows the device at 0x0c on bus 2 - so we need more
>>>>>>>> jiggery-pokery to map that VCM regulator to this new device (once we've
>>>>>>>> gotten it enumerated...) and the driver needs to have a tweak to call
>>>>>>>> regulator get and do a power on at some point.
>>>>>>> Awesome, great job on figuring this out!
>>>>>>>
>>>>>>> As you know I can spend $dayjob time on this, so I'll take on the job
>>>>>>> of creating the i2c-client and hooking up the regulator in some
>>>>>>> upstreamable manner.
>>>>>> Okedokey cool. I'd probably start at the cio2-bridge, if only because we
>>>>>> already have the adev there and the SSDB buffer loaded, so should be
>>>>>> easy enough to add an enum for the vcm_type and a call to
>>>>>> i2c_acpi_new_device()...bit of a weird place for that though I guess.
>>>>> Ah, I was actually thinking about doing this in the int3472 code for
>>>>> a number of reasons:
>>>>>
>>>>> 1. We already have the regulator_init_data there and we will need to
>>>>> expand it for this.
>>>>>
>>>>> 2. It is sorta the central place where we deal with all this glue-stuff
>>>> I'm not too sure about that. The INT3472 models the "Intel camera PMIC"
>>>> (I don't remember the exact wording, but that's more or less how the
>>>> device is described in Windows, and it matches the intent we see in the
>>>> DSDT).
>>> I agree that the INT3472 models the PMIC, or whatever discrete bits
>>> which offer similar functionality.
>>>
>>>> Given that we already have cio2-bridge, and that it hooks up the
>>>> sensor to the CIO2, it seems to me that it would be a better central
>>>> place.
>>> Ok, I was sorta expecting you to want to keep glue code like this
>>> out of drivers/media. But I guess that only applies to putting ACPI
>>> specific stuff in sensor drivers; and since the cio2-bridge code is
>>> already x86/ACPI specific you are fine with adding ACPI code there?
>>>
>>> I'm fine with putting the VCM i2c-client instantiation in the
>>> cio2-bridge code, that may also make it easier to tie the 2 together
>>> at the media-controller level.
>>
>> Having looked at this yesterday evening I'm more and more convinced it's
>> necessary. I hacked it into the ov8865 driver in the interim (just by
>> calling i2c_acpi_new_device() in probe) and then worked on that dw9719
>> code you found [1] to turn it into an i2c driver (attached, though still
>> needs a bit of work), which will successfully bind to the i2c client
>> enumerated by that i2c_acpi_new_device() call. From there though it
>> needs a way for the v4l2 subdev to be matched to the sensor's subdev.
>> This can happen automatically by way of the lens-focus firmware property
>> against the sensor - we currently build those in the cio2-bridge, so
>> adding another software node for the VCM and creating a lens-focus
>> property for the sensor's software_node with a pointer to the VCM's node
>> seems like the best way to do that.
> So besides prepping a v5 of my previous series, with updated regulator
> init-data for the VCM I've also been looking into this, attached are
> the results.
>
> Some notes from initial testing:
>
> 1. The driver you attached will only successfully probe if I insmod
> it while streaming video from the sensor. So I think we need another
> regulator or the clk for just the VCM too, I will investigate this
> later this week.

Oh really, I'll test that too; thanks for the patches. There's a couple
of tweaks to the driver anyway, so hopefully we'll be able to get it ironed out.


Regarding this comment on your 2nd patch:


Note there seems to be a pre-existing problem where there is no teardown
of the bridge?


I forget the exact reasoning, but this was deliberately done when we
originally merged the bridge code. I'll see if I can dig out the old
discussion where we decided to go that way, but my search-fu is failing
me at the moment.

> 2. I need some help with all the fwnode link stuff (I'm not very familiar
> with this). There seems to be a chicken and egg problem here though,
> because the v4l2-subdev for the VCM does not register due to async stuff
> and if we add it to the "graph" then my idea to enumerate the VCMs
> from the SSDB on the complete() callback won't work. But we can do this
> on a per-sensor basis from the cio2_notifier_bound() callback
> instead, I guess?


I think on top of your work in the cio2-bridge for patch 3 you can do this:


1. Create another software node against the cio2_sensor struct, with the
name coming from the vcm_types array

2. Assign that software node to board_info.swnode in
cio2_bridge_instantiate_vcm_i2c_client()

3. Add another entry to dev_properties for the sensor, that is named
"lens-focus" and contains a reference to the software_node created in #2
just like the references to the sensor/cio2 nodes.
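For what it's worth, the shape of that wiring can be modelled in a few lines. This is only a toy model of the idea, with plain structs standing in for software_nodes and the "lens-focus" reference property (it is not the real kernel software_node API); the "dw9719"/"ov8865" names come from this thread:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in for a software_node carrying a "lens-focus" reference. */
struct swnode {
	const char *name;
	const struct swnode *lens_focus;	/* models the reference property */
};

/* Step 1: a node named after the vcm_types[] entry for this sensor. */
static const struct swnode vcm_node = { .name = "dw9719" };

/* Step 3: the sensor node carries a lens-focus reference to the VCM node. */
static const struct swnode sensor_node = {
	.name = "ov8865",
	.lens_focus = &vcm_node,
};

/* A sensor with no VCM simply has no reference to follow. */
static const struct swnode bare_sensor = { .name = "sensor-without-vcm" };

/* Resolving the reference is what the async matching would rely on. */
static const char *sensor_vcm_name(const struct swnode *sensor)
{
	return sensor->lens_focus ? sensor->lens_focus->name : NULL;
}
```

In the real code, step 2 (board_info.swnode) is what attaches the VCM node to the instantiated i2c_client so the reference can be resolved at match time.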


This way when the sensor driver calls
v4l2_async_register_subdev_sensor() it should create a notifier that
looks for that VCM client to bind. I think then rather than putting
anything in the .bound() / .complete() callbacks, we should modify core
to do _something_ when async matching some subdevs. The something would
depend on the kind of devices that match, for example with the sensor
driver and the ipu3-cio2 driver, there's an entity whose function is
MEDIA_ENT_F_VID_IF_BRIDGE matching to an entity whose function is
MEDIA_ENT_F_CAM_SENSOR, and it seems to me that every scenario like that
is going to result in media pad links being created. Similarly for our
sensor that's a device with entity function MEDIA_ENT_F_LENS matching to
MEDIA_ENT_F_CAM_SENSOR, and I think that in those cases we can create
either an interface link or a new kind of link (maybe
"MEDIA_LNK_FL_ANCILLARY_LINK" or something...) between the two to show
that they form a single logical unit, which we can then report to libcamera.
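To make the matching rules above concrete, here is a rough sketch of that decision. The MEDIA_ENT_F_* values and the link kinds below are stand-in constants for illustration only, not the real uapi definitions:

```c
#include <assert.h>

/* Stand-in constants; the real values live in the media-controller uapi. */
#define MEDIA_ENT_F_CAM_SENSOR		1
#define MEDIA_ENT_F_VID_IF_BRIDGE	2
#define MEDIA_ENT_F_LENS		3

enum link_kind { LINK_NONE, LINK_PAD, LINK_ANCILLARY };

/* Decide what kind of link two async-matched entities should get. */
static enum link_kind link_for_match(unsigned int f1, unsigned int f2)
{
	/* bridge <-> sensor: create media pad links */
	if ((f1 == MEDIA_ENT_F_VID_IF_BRIDGE && f2 == MEDIA_ENT_F_CAM_SENSOR) ||
	    (f2 == MEDIA_ENT_F_VID_IF_BRIDGE && f1 == MEDIA_ENT_F_CAM_SENSOR))
		return LINK_PAD;
	/* lens <-> sensor: mark them as one logical unit */
	if ((f1 == MEDIA_ENT_F_LENS && f2 == MEDIA_ENT_F_CAM_SENSOR) ||
	    (f2 == MEDIA_ENT_F_LENS && f1 == MEDIA_ENT_F_CAM_SENSOR))
		return LINK_ANCILLARY;
	return LINK_NONE;
}
```

The point is only that the decision can be driven purely by the two entity functions, which is information core already has when a match occurs.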


Hope that makes sense...

>
> Can someone give me some hints how the fwnode link code should look/work
> and how I can get the async registering of the subdev for the VCM to
> complete ?
>
> Regards,
>
> Hans
>
>
>
>
>>
>> To throw a spanner in the works, though: I noticed this delightful _CRS
>> for the OV9734 sensor of a Surface Laptop 1 earlier:
>>
>>
>> Method (_CRS, 0, Serialized)  // _CRS: Current Resource Settings
>> {
>>     Name (SBUF, ResourceTemplate ()
>>     {
>>         I2cSerialBusV2 (0x0036, ControllerInitiated, 0x00061A80,
>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>             0x00, ResourceConsumer, , Exclusive,
>>             )
>>         I2cSerialBusV2 (0x0050, ControllerInitiated, 0x00061A80,
>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>             0x00, ResourceConsumer, , Exclusive,
>>             )
>>         I2cSerialBusV2 (0x0051, ControllerInitiated, 0x00061A80,
>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>             0x00, ResourceConsumer, , Exclusive,
>>             )
>>         I2cSerialBusV2 (0x0052, ControllerInitiated, 0x00061A80,
>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>             0x00, ResourceConsumer, , Exclusive,
>>             )
>>         I2cSerialBusV2 (0x0053, ControllerInitiated, 0x00061A80,
>>             AddressingMode7Bit, "\\_SB.PCI0.I2C2",
>>             0x00, ResourceConsumer, , Exclusive,
>>             )
>>     })
>>     Return (SBUF) /* \_SB_.PCI0.I2C2.CAMF._CRS.SBUF */
>> }
> Hmm, we do have i2c_acpi_client_count(adev), so it is easy to use
> that and just always use the last resource for the VCM. But that assumes
> that is what is going on here and I have no idea.
>
> Regards,
>
> Hans
Hans de Goede Nov. 4, 2021, 2:49 p.m. UTC | #8
Hi Daniel,

On 11/2/21 00:43, Daniel Scally wrote:
> Hi Hans
> 
> On 01/11/2021 16:02, Hans de Goede wrote:

<snip>

>>> Having looked at this yesterday evening I'm more and more convinced it's
>>> necessary. I hacked it into the ov8865 driver in the interim (just by
>>> calling i2c_acpi_new_device() in probe) and then worked on that dw9719
>>> code you found [1] to turn it into an i2c driver (attached, though still
>>> needs a bit of work), which will successfully bind to the i2c client
>>> enumerated by that i2c_acpi_new_device() call. From there though it
>>> needs a way for the v4l2 subdev to be matched to the sensor's subdev.
>>> This can happen automatically by way of the lens-focus firmware property
>>> against the sensor - we currently build those in the cio2-bridge, so
>>> adding another software node for the VCM and creating a lens-focus
>>> property for the sensor's software_node with a pointer to the VCM's node
>>> seems like the best way to do that.
>> So besides prepping a v5 of my previous series, with updated regulator
>> init-data for the VCM I've also been looking into this, attached are
>> the results.
>>
>> Some notes from initial testing:
>>
>> 1. The driver you attached will only successfully probe if I insmod
>> it while streaming video from the sensor. So I think we need another
>> regulator or the clk for just the VCM too, I will investigate this
>> later this week.
> 
> Oh really, I'll test that too; thanks for the patches. There's a couple
> of tweaks to the driver anyway, so hopefully we'll be able to get it ironed out.

Ok, I've figured this out now, with the attached patch (which also
explains what is going on) as well as an updated tps68470_board_data.c
with updated regulator_init_data for the VCM (also attached), the driver
can now successfully talk to the VCM in probe() while we are NOT
streaming from the ov8865.

Daniel, please feel free to squash this into your original dw9719 patch.

<snip>

>> 2. I need some help with all the fwnode link stuff (I'm not very familiar
>> with this). There seems to be a chicken and egg problem here though,
>> because the v4l2-subdev for the VCM does not register due to async stuff
>> and if we add it to the "graph" then my idea to enumerate the VCMs
>> from the SSDB on the complete() callback won't work. But we can do this
>> on a per-sensor basis from the cio2_notifier_bound() callback
>> instead, I guess?
> 
> 
> I think on top of your work in the cio2-bridge for patch 3 you can do this:
> 
> 
> 1. Create another software node against the cio2_sensor struct, with the
> name coming from the vcm_types array
> 
> 2. Assign that software node to board_info.swnode in
> cio2_bridge_instantiate_vcm_i2c_client()
> 
> 3. Add another entry to dev_properties for the sensor, that is named
> "lens-focus" and contains a reference to the software_node created in #2
> just like the references to the sensor/cio2 nodes.
> 
> 
> This way when the sensor driver calls
> v4l2_async_register_subdev_sensor() it should create a notifier that
> looks for that VCM client to bind. I think then rather than putting
> anything in the .bound() / .complete() callbacks, we should modify core
> to do _something_ when async matching some subdevs. The something would
> depend on the kind of devices that match, for example with the sensor
> driver and the ipu3-cio2 driver, there's an entity whose function is
> MEDIA_ENT_F_VID_IF_BRIDGE matching to an entity whose function is
> MEDIA_ENT_F_CAM_SENSOR, and it seems to me that every scenario like that
> is going to result in media pad links being created. Similarly for our
> sensor that's a device with entity function MEDIA_ENT_F_LENS matching to
> MEDIA_ENT_F_CAM_SENSOR, and I think that in those cases we can create
> either an interface link or a new kind of link (maybe
> "MEDIA_LNK_FL_ANCILLARY_LINK" or something...) between the two to show
> that they form a single logical unit, which we can then report to libcamera.
> 
> 
> Hope that makes sense...

Maybe? I have not looked into this closely yet. I'll continue working on
this coming Tuesday.

If you feel like tinkering I would not mind if you beat me to it this
weekend :)   OTOH please enjoy your weekend doing whatever, I can continue
working on this during office-hours next week.

Regards,

Hans
Andy Shevchenko Nov. 4, 2021, 6:14 p.m. UTC | #9
On Thu, Nov 04, 2021 at 03:49:48PM +0100, Hans de Goede wrote:
> On 11/2/21 00:43, Daniel Scally wrote:

...

> Ok, I've figured this out now, with the attached patch (which also
> explains what is going on) as well as an updated tps68470_board_data.c
> with updated regulator_init_data for the VCM (also attached), the driver
> can now successfully talk to the VCM in probe() while we are NOT
> streaming from the ov8865.

Thanks, Hans.

...

> const struct int3472_tps68470_board_data *int3472_tps68470_get_board_data(const char *dev_name)
> {
> 	const struct int3472_tps68470_board_data *board_data;
> 	const struct dmi_system_id *match;
> 
> 	match = dmi_first_match(int3472_tps68470_board_data_table);
> 	while (match) {
> 		board_data = match->driver_data;
> 		if (strcmp(board_data->dev_name, dev_name) == 0)
> 			return board_data;

> 		dmi_first_match(++match);

Not sure I understood the purpose of the call.

> 	}
> 	return NULL;
> }
Daniel Scally Nov. 4, 2021, 11:20 p.m. UTC | #10
Hi Hans

On 04/11/2021 14:49, Hans de Goede wrote:
> Hi Daniel,
>
> On 11/2/21 00:43, Daniel Scally wrote:
>> Hi Hans
>>
>> On 01/11/2021 16:02, Hans de Goede wrote:
> <snip>
>
>>>> Having looked at this yesterday evening I'm more and more convinced it's
>>>> necessary. I hacked it into the ov8865 driver in the interim (just by
>>>> calling i2c_acpi_new_device() in probe) and then worked on that dw9719
>>>> code you found [1] to turn it into an i2c driver (attached, though still
>>>> needs a bit of work), which will successfully bind to the i2c client
>>>> enumerated by that i2c_acpi_new_device() call. From there though it
>>>> needs a way for the v4l2 subdev to be matched to the sensor's subdev.
>>>> This can happen automatically by way of the lens-focus firmware property
>>>> against the sensor - we currently build those in the cio2-bridge, so
>>>> adding another software node for the VCM and creating a lens-focus
>>>> property for the sensor's software_node with a pointer to the VCM's node
>>>> seems like the best way to do that.
>>> So besides prepping a v5 of my previous series, with updated regulator
>>> init-data for the VCM I've also been looking into this, attached are
>>> the results.
>>>
>>> Some notes from initial testing:
>>>
>>> 1. The driver you attached will only successfully probe if I insmod
>>> it while streaming video from the sensor. So I think we need another
>>> regulator or the clk for just the VCM too, I will investigate this
>>> later this week.
>> Oh really, I'll test that too; thanks for the patches. There's a couple
>> of tweaks to the driver anyway, so hopefully we'll be able to get it ironed out.
> Ok, I've figured this out now, with the attached patch (which also
> explains what is going on) as well as an updated tps68470_board_data.c
> with updated regulator_init_data for the VCM (also attached), the driver
> can now successfully talk to the VCM in probe() while we are NOT
> streaming from the ov8865.
>
> Daniel, please feel free to squash this into your original dw9719 patch.
>

Nice thanks - I'll do that.

>
>>> 2. I need some help with all the fwnode link stuff (I'm not very familiar
>>> with this). There seems to be a chicken and egg problem here though,
>>> because the v4l2-subdev for the VCM does not register due to async stuff
>>> and if we add it to the "graph" then my idea to enumerate the VCMs
>>> from the SSDB on the complete() callback won't work. But we can do this
>>> on a per-sensor basis from the cio2_notifier_bound() callback
>>> instead, I guess?
>>
>> I think on top of your work in the cio2-bridge for patch 3 you can do this:
>>
>>
>> 1. Create another software node against the cio2_sensor struct, with the
>> name coming from the vcm_types array
>>
>> 2. Assign that software node to board_info.swnode in
>> cio2_bridge_instantiate_vcm_i2c_client()
>>
>> 3. Add another entry to dev_properties for the sensor, that is named
>> "lens-focus" and contains a reference to the software_node created in #2
>> just like the references to the sensor/cio2 nodes.
>>
>>
>> This way when the sensor driver calls
>> v4l2_async_register_subdev_sensor() it should create a notifier that
>> looks for that VCM client to bind. I think then rather than putting
>> anything in the .bound() / .complete() callbacks, we should modify core
>> to do _something_ when async matching some subdevs. The something would
>> depend on the kind of devices that match, for example with the sensor
>> driver and the ipu3-cio2 driver, there's an entity whose function is
>> MEDIA_ENT_F_VID_IF_BRIDGE matching to an entity whose function is
>> MEDIA_ENT_F_CAM_SENSOR, and it seems to me that every scenario like that
>> is going to result in media pad links being created. Similarly for our
>> sensor that's a device with entity function MEDIA_ENT_F_LENS matching to
>> MEDIA_ENT_F_CAM_SENSOR, and I think that in those cases we can create
>> either an interface link or a new kind of link (maybe
>> "MEDIA_LNK_FL_ANCILLARY_LINK" or something...) between the two to show
>> that they form a single logical unit, which we can then report to libcamera.
>>
>>
>> Hope that makes sense...
> Maybe? I have not looked into this closely yet. I'll continue working on
> this coming Tuesday.
>
> If you feel like tinkering I would not mind if you beat me to it this
> weekend :)   OTOH please enjoy your weekend doing whatever, I can continue
> working on this during office-hours next week.


I'll probably have some time to look at it over the next few days; I'll
let you know how I get on.

>
> Regards,
>
> Hans
Hans de Goede Nov. 6, 2021, 2:12 p.m. UTC | #11
Hi,

On 11/4/21 19:14, Andy Shevchenko wrote:
> On Thu, Nov 04, 2021 at 03:49:48PM +0100, Hans de Goede wrote:
>> On 11/2/21 00:43, Daniel Scally wrote:
> 
> ...
> 
>> Ok, I've figured this out now, with the attached patch (which also
>> explains what is going on) as well as an updated tps68470_board_data.c
>> with updated regulator_init_data for the VCM (also attached), the driver
>> can now successfully talk to the VCM in probe() while we are NOT
>> streaming from the ov8865.
> 
> Thanks, Hans.
> 
> ...
> 
>> const struct int3472_tps68470_board_data *int3472_tps68470_get_board_data(const char *dev_name)
>> {
>> 	const struct int3472_tps68470_board_data *board_data;
>> 	const struct dmi_system_id *match;
>>
>> 	match = dmi_first_match(int3472_tps68470_board_data_table);
>> 	while (match) {
>> 		board_data = match->driver_data;
>> 		if (strcmp(board_data->dev_name, dev_name) == 0)
>> 			return board_data;
> 
>> 		dmi_first_match(++match);
> 
> Not sure I understood the purpose of the call.

You are right, that should have a "match = " in front of it, but
I actually like this form, found elsewhere, better:

        for (match = dmi_first_match(int3472_tps68470_board_data_table);
             match;
             match = dmi_first_match(match + 1)) {

That IMHO makes the whole code a lot clearer, so I'll switch to that for
the next version, thank you for catching this.
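For completeness, the corrected loop shape can be exercised with toy stand-ins for the DMI API. The structures, names and table below are simplified stand-ins (not the real <linux/dmi.h> definitions), but the iteration pattern is the one discussed above:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in: the real dmi_first_match() scans from the given entry to
 * the table terminator and returns the first entry matching this system.
 */
struct dmi_system_id {
	int matches;		/* 1 if this entry matches "this system" */
	void *driver_data;
};

static const struct dmi_system_id *dmi_first_match(const struct dmi_system_id *d)
{
	for (; d->driver_data; d++)	/* terminator has NULL driver_data */
		if (d->matches)
			return d;
	return NULL;
}

struct board_data { const char *dev_name; };

/* Hypothetical device names, for illustration only. */
static struct board_data a = { "INT3472:05" };
static struct board_data b = { "INT3472:01" };

static const struct dmi_system_id table[] = {
	{ 1, &a },
	{ 1, &b },
	{ 0, NULL },	/* terminator */
};

/* The corrected iteration: restart the scan from match + 1 each time,
 * assigning the result back to match so the loop actually advances.
 */
static const struct board_data *get_board_data(const char *dev_name)
{
	const struct dmi_system_id *match;

	for (match = dmi_first_match(table); match;
	     match = dmi_first_match(match + 1)) {
		const struct board_data *bd = match->driver_data;

		if (strcmp(bd->dev_name, dev_name) == 0)
			return bd;
	}
	return NULL;
}
```

The original `dmi_first_match(++match);` both discarded the return value and advanced past the terminator check, which is exactly what the for-loop form avoids.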

Regards,

Hans
Andy Shevchenko Nov. 6, 2021, 6:39 p.m. UTC | #12
On Sat, Nov 6, 2021 at 8:15 PM Hans de Goede <hdegoede@redhat.com> wrote:
> On 11/4/21 19:14, Andy Shevchenko wrote:
> > On Thu, Nov 04, 2021 at 03:49:48PM +0100, Hans de Goede wrote:
> >> On 11/2/21 00:43, Daniel Scally wrote:

...

> >>              dmi_first_match(++match);
> >
> > Not sure I understood the purpose of the call.
>
> You are right, that should have a "match = " in front of it, but
> I actually like this form, found elsewhere, better:
>
>         for (match = dmi_first_match(int3472_tps68470_board_data_table);
>              match;
>              match = dmi_first_match(match + 1)) {
>
> That IMHO makes the whole code a lot clearer, so I'll switch to that for
> the next version, thank you for catching this.

I'm very glad that you read my mind!
I was too modest to express the same proposal as you do just above. Go for it!
Hans de Goede Nov. 8, 2021, 1:12 p.m. UTC | #13
Hi,

On 11/2/21 00:43, Daniel Scally wrote:
> Hi Hans

<snip>
 

>> 2. I need some help with all the fwnode link stuff (I'm not very familiar
>> with this). There seems to be a chicken and egg problem here though,
>> because the v4l2-subdev for the VCM does not register due to async stuff
>> and if we add it to the "graph" then my idea to enumerate the VCMs
>> from the SSDB on the complete() callback won't work. But we can do this
>> on a per-sensor basis from the cio2_notifier_bound() callback
>> instead, I guess?
> 
> 
> I think on top of your work in the cio2-bridge for patch 3 you can do this:
> 
> 
> 1. Create another software node against the cio2_sensor struct, with the
> name coming from the vcm_types array
> 
> 2. Assign that software node to board_info.swnode in
> cio2_bridge_instantiate_vcm_i2c_client()
> 
> 3. Add another entry to dev_properties for the sensor, that is named
> "lens-focus" and contains a reference to the software_node created in #2
> just like the references to the sensor/cio2 nodes.
> 
> 
> This way when the sensor driver calls
> v4l2_async_register_subdev_sensor() it should create a notifier that
> looks for that VCM client to bind. I think then rather than putting
> anything in the .bound() / .complete() callbacks, we should modify core
> to do _something_ when async matching some subdevs. The something would
> depend on the kind of devices that match, for example with the sensor
> driver and the ipu3-cio2 driver, there's an entity whose function is
> MEDIA_ENT_F_VID_IF_BRIDGE matching to an entity whose function is
> MEDIA_ENT_F_CAM_SENSOR, and it seems to me that every scenario like that
> is going to result in media pad links being created. Similarly for our
> sensor that's a device with entity function MEDIA_ENT_F_LENS matching to
> MEDIA_ENT_F_CAM_SENSOR, and I think that in those cases we can create
> either an interface link or a new kind of link (maybe
> "MEDIA_LNK_FL_ANCILLARY_LINK" or something...) between the two to show
> that they form a single logical unit, which we can then report to libcamera.
> 
> 
> Hope that makes sense...

Ok, so I gave this a try, see the attached patches, but the v4l2-subdev for
the VCM still does not show up.

I think that instead I need to build a full link between the sensor
and the VCM similar to the cio2 <-> sensor link. Both ends of that link
have:

<base-swnode attached to the device>
|
--<port-swnode named (SWNODE_GRAPH_PORT_NAME_FMT, X), where X is 0 on the
  |                           sensor side and the link nr on the cio2 side
  |
  --<end-point-swnode named (SWNODE_GRAPH_ENDPOINT_NAME_FMT, 0)

And then the 2 endpoints contain a swref property pointing to the
other endpoint swnode.

I think we need a similar setup adding a swnode child named
(SWNODE_GRAPH_PORT_NAME_FMT, 1), to the nodes[SWNODE_SENSOR_HID] node.

Note 1, since 0 is the "port" to the cio2, this new port child then
gets an endpoint "0" child itself, likewise we add a "port 0" child
to the vcm swnode, with a "endpoint 0" child below that and then have
the 2 endpoints contain swref properties pointing to each other.
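As a small sanity check of the naming scheme, here is what those format strings would produce for the sensor-side port 1 and its endpoint. The format strings are copied here only so the sketch is self-contained; they are believed to mirror the <linux/property.h> definitions, but double-check against the tree:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Believed to mirror the kernel's SWNODE_GRAPH_*_NAME_FMT definitions;
 * redefined locally so this sketch compiles on its own.
 */
#define SWNODE_GRAPH_PORT_NAME_FMT	"port@%u"
#define SWNODE_GRAPH_ENDPOINT_NAME_FMT	"endpoint@%u"

static void graph_names(unsigned int port, char *port_name, size_t port_sz,
			char *ep_name, size_t ep_sz)
{
	/* Sensor side: port 1 (port 0 already points at the cio2). */
	snprintf(port_name, port_sz, SWNODE_GRAPH_PORT_NAME_FMT, port);
	/* Each port gets a single "endpoint@0" child in this scheme. */
	snprintf(ep_name, ep_sz, SWNODE_GRAPH_ENDPOINT_NAME_FMT, 0);
}
```

So the sensor would gain a "port@1" child with an "endpoint@0" below it, and the VCM a "port@0"/"endpoint@0" pair, with the two endpoints referencing each other.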

I think that this will properly make the VCM part of the graph and
will make its v4l2-subdev get instantiated when the graph is
complete.  Before I spend a bunch of time on implementing this,
let me ask this:

Does this sound reasonable / like I'm heading in the right direction?

Regards,

Hans



p.s.

I have found a new solution for the probe-ordering problem which
is patch 2 of the attached patches; personally I'm happy with
this solution. I hope you like it too.
Andy Shevchenko Nov. 8, 2021, 2:12 p.m. UTC | #14
On Mon, Nov 08, 2021 at 02:12:38PM +0100, Hans de Goede wrote:
> On 11/2/21 00:43, Daniel Scally wrote:

> Does this sound reasonable / like I'm heading in the right direction?

It is up to you folks, since I have no time to participate in this with
a full dive right now. Below just some comments on the patches in case
they will go.

...

> -	struct acpi_device *adev = ACPI_COMPANION(dev);
> +	struct acpi_device *adev = to_acpi_device_node(fwnode);
>  	struct i2c_acpi_lookup lookup;
>  	struct i2c_adapter *adapter;
>  	LIST_HEAD(resource_list);
>  	int ret;

Make sense to move assignment here.

	adev = to_acpi_device_node(fwnode);

> +	if (!adev)
> +		return ERR_PTR(-ENODEV);

...

> +static inline struct i2c_client *i2c_acpi_new_device(struct device *dev,
> +						     int index,
> +						     struct i2c_board_info *info)
> +{
> +	return i2c_acpi_new_device_by_fwnode(dev->fwnode, index, info);

dev_fwnode(dev)

> +}

...

> +int cio2_bridge_sensors_are_ready(void)
> +{
> +	struct acpi_device *adev;

> +	bool ready = true;

Redundant. See below.

> +	unsigned int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(cio2_supported_sensors); i++) {
> +		const struct cio2_sensor_config *cfg =
> +			&cio2_supported_sensors[i];
> +
> +		for_each_acpi_dev_match(adev, cfg->hid, NULL, -1) {
> +			if (!adev->status.enabled)
> +				continue;

> +			if (!acpi_dev_ready_for_enumeration(adev))
> +				ready = false;

You may put the adev here and return false.

> +		}
> +	}

> +	return ready;

So return true.

> +}

...

> +	if (sensor->ssdb.vcmtype)
> +		nodes[SWNODE_VCM] = NODE_VCM(
> +					cio2_vcm_types[sensor->ssdb.vcmtype - 1]);

Wouldn't be better

		nodes[SWNODE_VCM] =
			NODE_VCM(cio2_vcm_types[sensor->ssdb.vcmtype - 1]);

?

...

> +	sensor->vcm_i2c_client = i2c_acpi_new_device_by_fwnode(
> +					acpi_fwnode_handle(sensor->adev),
> +					1, &board_info);

Ditto.
Daniel Scally Nov. 9, 2021, 12:43 a.m. UTC | #15
Hi Hans

On 08/11/2021 13:12, Hans de Goede wrote:
> Hi,
>
> On 11/2/21 00:43, Daniel Scally wrote:
>> Hi Hans
> <snip>
>  
>
>>> 2. I need some help with all the fwnode link stuff (I'm not very familiar
>>> with this). There seems to be a chicken and egg problem here though,
>>> because the v4l2-subdev for the VCM does not register due to async stuff
>>> and if we add it to the "graph" then my idea to enumerate the VCMs
>>> from the SSDB on the complete() callback won't work. But we can do this
>>> on a per-sensor basis from the cio2_notifier_bound() callback
>>> instead, I guess?
>>
>> I think on top of your work in the cio2-bridge for patch 3 you can do this:
>>
>>
>> 1. Create another software node against the cio2_sensor struct, with the
>> name coming from the vcm_types array
>>
>> 2. Assign that software node to board_info.swnode in
>> cio2_bridge_instantiate_vcm_i2c_client()
>>
>> 3. Add another entry to dev_properties for the sensor, that is named
>> "lens-focus" and contains a reference to the software_node created in #2
>> just like the references to the sensor/cio2 nodes.
>>
>>
>> This way when the sensor driver calls
>> v4l2_async_register_subdev_sensor() it should create a notifier that
>> looks for that VCM client to bind. I think then rather than putting
>> anything in the .bound() / .complete() callbacks, we should modify core
>> to do _something_ when async matching some subdevs. The something would
>> depend on the kind of devices that match, for example with the sensor
>> driver and the ipu3-cio2 driver, there's an entity whose function is
>> MEDIA_ENT_F_VID_IF_BRIDGE matching to an entity whose function is
>> MEDIA_ENT_F_CAM_SENSOR, and it seems to me that every scenario like that
>> is going to result in media pad links being created. Similarly for our
>> sensor that's a device with entity function MEDIA_ENT_F_LENS matching to
>> MEDIA_ENT_F_CAM_SENSOR, and I think that in those cases we can create
>> either an interface link or a new kind of link (maybe
>> "MEDIA_LNK_FL_ANCILLARY_LINK" or something...) between the two to show
>> that they form a single logical unit, which we can then report to libcamera.
>>
>>
>> Hope that makes sense...
> Ok, so I gave this a try, see the attached patches, but the v4l2-subdev for
> the VCM still does not show up.


This is exactly where I got to over the weekend too

> I think that instead I need to build a full link between the sensor
> and the VCM similar to the cio2 <-> sensor link. Both ends of that link
> have:
>
> <base-swnode attached to the device>
> |
> --<port-swnode named (SWNODE_GRAPH_PORT_NAME_FMT, X), where X is 0 on the
>   |                           sensor side and the link nr on the cio2 side
>   |
>   --<end-point-swnode named (SWNODE_GRAPH_ENDPOINT_NAME_FMT, 0)
>
> And then the 2 endpoints contain a swref property pointing to the
> other endpoint swnode.
>
> I think we need a similar setup adding a swnode child named
> (SWNODE_GRAPH_PORT_NAME_FMT, 1), to the nodes[SWNODE_SENSOR_HID] node.
>
> Note 1, since 0 is the "port" to the cio2, this new port child then
> gets an endpoint "0" child itself, likewise we add a "port 0" child
> to the vcm swnode, with a "endpoint 0" child below that and then have
> the 2 endpoints contain swref properties pointing to each other.
>
> I think that this will properly make the VCm part of the graph and
> will make its v4l2-subdev get instantiated when the graph is
> complete.  Before I spend a bunch of time on implementing this,
> let me ask this:
>
> Does this sound reasonable / like I'm heading in the right direction?
I don't think that we need to add the software nodes as
ports/endpoints... as far as I can tell it ought to work like this:


1. The sensor calls v4l2_async_register_subdev_sensor() which...

    a) creates a notifier

    b) looks for reference properties of the device's fwnode called
"lens-focus" and calls v4l2_async_notifier_add_fwnode_subdev() against
the reference, which tells the notifier it's connected to this other
fwnode and to expect it to bind.

2. When new subdevs are registered they get tested for a match against
the notifier registered in 1a, which matches on their fwnode using
match_fwnode() [1]. This should work, on the grounds that we registered
the device using the board_info.swnode and registered a lens-focus
property that points to that software_node.

3. When a match is found, the notifier's .bound() function is called.
When all the asds that the notifier expects are bound the notifier's
.complete() callback is called.


That's not working correctly for me at the moment, but I think this is a
surmountable problem rather than the wrong approach, so I'm just working
through the differences to try and get the matching working.


For the devnodes: the ipu3-cio2 driver itself creates the devnodes for
the subdevices that bind to it (like the sensor) as part of its
.complete() callback [2] by calling v4l2_device_register_subdev_nodes().
As far as I can tell there's nothing in the v4l2 core that handles that
automatically, so I think that's what's preventing the devnodes from
showing up. I think we should tackle the problem of the missing devnodes
by mimicking the effects of that function somewhere within core,
probably in v4l2_async_match_notify() (which calls the notifier's
.bound() callback). I think the creation of the links that expose to
userspace that this is a logical unit should probably happen in the
same place, using the entity.function fields of the subdev and the asd
to decide exactly what kind of link to create.


[1]
https://elixir.bootlin.com/linux/latest/source/drivers/media/v4l2-core/v4l2-async.c#L69

[2]
https://elixir.bootlin.com/linux/latest/source/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c#L1449

>
> Regards,
>
> Hans
>
>
>
> p.s.
>
> I have found a new solution for the probe-ordering problem which
> is patch 2 of the attached patches; I'm personally happy with
> this solution. I hope you like it too.
Daniel Scally Nov. 9, 2021, 12:09 p.m. UTC | #16
Hi Hans

On 09/11/2021 00:43, Daniel Scally wrote:
> Hi Hans
>
> On 08/11/2021 13:12, Hans de Goede wrote:
>> Hi,
>>
>> On 11/2/21 00:43, Daniel Scally wrote:
>>> Hi Hans
>> <snip>
>>  
>>
>>>> 2. I need some help with all the fwnode link stuff (I'm not very familiar
>>>> with this). There seems to be a chicken and egg problem here though,
>>>> because the v4l2-subdev for the VCM does not register because of async stuff
>>>> and if we add it to the "graph" then my idea to enumerate the VCMs
>>>> from the SSDB on the complete() callback won't work. But we can do this
>>>> on a per-sensor basis from the cio2_notifier_bound() callback
>>>> instead, I guess?
>>> I think on top of your work in the cio2-bridge for patch 3 you can do this:
>>>
>>>
>>> 1. Create another software node against the cio2_sensor struct, with the
>>> name coming from the vcm_types array
>>>
>>> 2. Assign that software node to board_info.swnode in
>>> cio2_bridge_instantiate_vcm_i2c_client()
>>>
>>> 3. Add another entry to dev_properties for the sensor, that is named
>>> "lens-focus" and contains a reference to the software_node created in #2
>>> just like the references to the sensor/cio2 nodes.
>>>
>>>
>>> This way when the sensor driver calls
>>> v4l2_async_register_subdev_sensor() it should create a notifier that
>>> looks for that VCM client to bind. I think then rather than putting
>>> anything in the .bound() / .complete() callbacks, we should modify core
>>> to do _something_ when async matching some subdevs. The something would
>>> depend on the kind of devices that match, for example with the sensor
>>> driver and the ipu3-cio2 driver, there's an entity whose function is
>>> MEDIA_ENT_F_VID_IF_BRIDGE matching to an entity whose function is
>>> MEDIA_ENT_F_CAM_SENSOR, and it seems to me that every scenario like that
>>> is going to result in media pad links being created. Similarly for our
>>> sensor that's a device with entity function MEDIA_ENT_F_LENS matching to
>>> MEDIA_ENT_F_CAM_SENSOR, and I think that in those cases we can create
>>> either an interface link or a new kind of link (maybe
>>> "MEDIA_LNK_FL_ANCILLARY_LINK" or something...) between the two to show
>>> that they form a single logical unit, which we can then report to libcamera.
>>>
>>>
>>> Hope that makes sense...
>> Ok, so I gave this a try, see the attached patches, but the v4l2-subdev for
>> the VCM still does not show up.
>
> This is exactly where I got to over the weekend too
>
>> I think that instead I need to build a full link between the sensor
>> and the VCM similar to the cio2 <-> sensor link. Both ends of that link
>> have:
>>
>> <base-swnode attached to the device>
>> |
>> --<port-swnode named (SWNODE_GRAPH_PORT_NAME_FMT, X), where X is 0 on the
>>   |                           sensor side and the link nr on the cio2 side
>>   |
>>   --<end-point-swnode named (SWNODE_GRAPH_ENDPOINT_NAME_FMT, 0)
>>
>> And then the 2 endpoints contain a swref property pointing to the
>> other endpoint swnode.
>>
>> I think we need a similar setup adding a swnode child named
>> (SWNODE_GRAPH_PORT_NAME_FMT, 1), to the nodes[SWNODE_SENSOR_HID] node.
>>
>> Note 1, since 0 is the "port" to the cio2, this new port child then
>> gets an endpoint "0" child itself, likewise we add a "port 0" child
>> to the vcm swnode, with a "endpoint 0" child below that and then have
>> the 2 endpoints contain swref properties pointing to each other.
>>
> I think that this will properly make the VCM part of the graph and
>> will make its v4l2-subdev get instantiated when the graph is
>> complete.  Before I spend a bunch of time on implementing this,
>> let me ask this:
>>
>> Does this sound reasonable / like I'm heading in the right direction?
> I don't think that we need to add the software nodes as
> ports/endpoints...as far as I can tell it ought to work like this:
>
>
> 1. The sensor calls v4l2_async_register_subdev_sensor() which...
>
>     a) creates a notifier
>
>     b) looks for reference properties of the device's fwnode called
> "lens-focus" and calls v4l2_async_notifier_add_fwnode_subdev() against
> the reference, which tells the notifier it's connected to this other
> fwnode and to expect it to bind.
>
> 2. When new subdevs are registered they get tested for a match against
> the notifier registered in 1a that matches to their fwnode using
> match_fwnode() [1]. This should work, on the grounds that we registered
> the device using the board_info.swnode and registered a lens-focus
> property that points to that software_node
>
> 3. When a match is found, the notifier's .bound() function is called.
> When all the asds that the notifier expects are bound the notifier's
> .complete() callback is called.
>
>
> That's not working correctly for me at the moment, but I think this is a
> surmountable problem rather than the wrong approach, so I'm just working
> through the differences to try and get the matching working.


OK, I eventually got this working - the dw9719 registers as
/dev/v4l-subdev7 for me now. Long story short, the attached patch
was needed to make the references work, as the internals of v4l2 aren't
checking fwnode->secondary. Prior to your latest series there was an
additional problem: once the VCM's fwnode was linked to the sensor's,
the .complete() callback for ipu3-cio2 would never run (because that
requires ALL the devices for the linked fwnodes to be bound)...which
meant the VCMs never got instantiated, because that was where the
instantiating function was called. With your new set separating those
processes it works well, so yes, I like the new approach very much :D


In the end we don't have to add a call creating the subdev nodes - it
turns out that v4l2 knows the VCM is part of ipu3-cio2's v4l2_device,
so it registers the nodes for the VCM when .complete() is called for
that driver. I still think we should add a bit creating the link to
expose this to userspace in match_notify() though.


Trying to list controls for the dw9719 with v4l2-ctl -d /dev/v4l-subdev7
-L fails with an ioctl error, so I have some remedial work on the driver
which I'll do tonight; I'd expect to be able to control focus with
v4l2-ctl -d /dev/v4l-subdev7 -c focus_absolute=n once this is sorted.

>
>
> For the devnodes, the ipu3-cio2 driver itself creates the devnodes for
> the subdevices that bind to it (like the sensor) as part of its
> .complete() callback [2] by calling v4l2_device_register_subdev_nodes(),
> as far as I can tell there's nothing in v4l2 core that handles that
> automatically so I think that that lack is what's preventing the
> devnodes from showing up. I think we should tackle the problem of the
> missing devnodes by mimicking the effects of that function somewhere
> within core, probably v4l2_async_match_notify() (which calls the
> notifier's .bound() callback). I think the creation of the links to
> expose to userspace that this is a logical unit should probably happen
> in the same place, using the entity.function field of the subdev and the
> asd to decide exactly what kind of link to create.
>
>
> [1]
> https://elixir.bootlin.com/linux/latest/source/drivers/media/v4l2-core/v4l2-async.c#L69
>
> [2]
> https://elixir.bootlin.com/linux/latest/source/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c#L1449
>
>> Regards,
>>
>> Hans
>>
>>
>>
>> p.s.
>>
>> I have found a new solution for the probe-ordering problem which
>> is patch 2 of the attached patches; I'm personally happy with
>> this solution. I hope you like it too.
Hans de Goede Nov. 9, 2021, 4:02 p.m. UTC | #17
Hi,

On 11/9/21 13:09, Daniel Scally wrote:
> Hi Hans
> 
> On 09/11/2021 00:43, Daniel Scally wrote:
>> Hi Hans
>>
>> On 08/11/2021 13:12, Hans de Goede wrote:
>>> Hi,
>>>
>>> On 11/2/21 00:43, Daniel Scally wrote:
>>>> Hi Hans
>>> <snip>
>>>  
>>>
>>>>> 2. I need some help with all the fwnode link stuff (I'm not very familiar
>>>>> with this). There seems to be a chicken and egg problem here though,
>>>>> because the v4l2subdev for the VCM does not register because of async stuff
>>>>> and if we add it to the "graph" then my idea to enumerate the VCMs
>>>>> from the SSDB on the complete() callback won't work. But we can do this
>>>>> on a per sensor basis instead from the cio2_notifier_bound() callback
>>>>> instead I guess ?
>>>> I think on top of your work in the cio2-bridge for patch 3 you can do this:
>>>>
>>>>
>>>> 1. Create another software node against the cio2_sensor struct, with the
>>>> name coming from the vcm_types array
>>>>
>>>> 2. Assign that software node to board_info.swnode in
>>>> cio2_bridge_instantiate_vcm_i2c_client()
>>>>
>>>> 3. Add another entry to dev_properties for the sensor, that is named
>>>> "lens-focus" and contains a reference to the software_node created in #2
>>>> just like the references to the sensor/cio2 nodes.
>>>>
>>>>
>>>> This way when the sensor driver calls
>>>> v4l2_async_register_subdev_sensor() it should create a notifier that
>>>> looks for that VCM client to bind. I think then rather than putting
>>>> anything in the .bound() / .complete() callbacks, we should modify core
>>>> to do _something_ when async matching some subdevs. The something would
>>>> depend on the kind of devices that match, for example with the sensor
>>>> driver and the ipu3-cio2 driver, there's an entity whose function is
>>>> MEDIA_ENT_F_VID_IF_BRIDGE matching to an entity whose function is
>>>> MEDIA_ENT_F_CAM_SENSOR, and it seems to me that every scenario like that
>>>> is going to result in media pad links being created. Similarly for our
>>>> sensor that's a device with entity function MEDIA_ENT_F_LENS matching to
>>>> MEDIA_ENT_F_CAM_SENSOR, and I think that in those cases we can create
>>>> either an interface link or a new kind of link (maybe
>>>> "MEDIA_LNK_FL_ANCILLARY_LINK" or something...) between the two to show
>>>> that they form a single logical unit, which we can then report to libcamera.
>>>>
>>>>
>>>> Hope that makes sense...
>>> Ok, so I gave this a try, see the attached patches, but the v4l2-subdev for
>>> the VCM still does not show up.
>>
>> This is exactly where I got to over the weekend too
>>
>>> I think that instead I need to build a full link between the sensor
>>> and the VCM similar to the cio2 <-> sensor link. Both ends of that link
>>> have:
>>>
>>> <base-swnode attached to the device>
>>> |
>>> --<port-swnode named (SWNODE_GRAPH_PORT_NAME_FMT, X), where X is 0 on the
>>>   |                           sensor side and the link nr on the cio2 side
>>>   |
>>>   --<end-point-swnode named (SWNODE_GRAPH_ENDPOINT_NAME_FMT, 0)
>>>
>>> And then the 2 endpoints contain a swref property pointing to the
>>> other endpoint swnode.
>>>
>>> I think we need a similar setup adding a swnode child named
>>> (SWNODE_GRAPH_PORT_NAME_FMT, 1), to the nodes[SWNODE_SENSOR_HID] node.
>>>
>>> Note 1, since 0 is the "port" to the cio2, this new port child then
>>> gets an endpoint "0" child itself, likewise we add a "port 0" child
>>> to the vcm swnode, with a "endpoint 0" child below that and then have
>>> the 2 endpoints contain swref properties pointing to each other.
>>>
>>> I think that this will properly make the VCM part of the graph and
>>> will make its v4l2-subdev get instantiated when the graph is
>>> complete.  Before I spend a bunch of time on implementing this,
>>> let me ask this:
>>>
>>> Does this sound reasonable / like I'm heading in the right direction?
>> I don't think that we need to add the software nodes as
>> ports/endpoints...as far as I can tell it ought to work like this:
>>
>>
>> 1. The sensor calls v4l2_async_register_subdev_sensor() which...
>>
>>     a) creates a notifier
>>
>>     b) looks for reference properties of the device's fwnode called
>> "lens-focus" and calls v4l2_async_notifier_add_fwnode_subdev() against
>> the reference, which tells the notifier it's connected to this other
>> fwnode and to expect it to bind.
>>
>> 2. When new subdevs are registered they get tested for a match against
>> the notifier registered in 1a that matches to their fwnode using
>> match_fwnode() [1]. This should work, on the grounds that we registered
>> the device using the board_info.swnode and registered a lens-focus
>> property that points to that software_node
>>
>> 3. When a match is found, the notifier's .bound() function is called.
>> When all the asds that the notifier expects are bound the notifier's
>> .complete() callback is called.
>>
>>
>> That's not working correctly for me at the moment, but I think this is a
>> surmountable problem rather than the wrong approach, so I'm just working
>> through the differences to try and get the matching working.
> 
> 
> OK, I eventually got this working - the dw9719 registers as
> /dev/v4l-subdev7 for me now ... long story short is the attached patch
> was needed to make the references work, as the internals of v4l2 aren't
> checking for fwnode->secondary. Prior to your latest series as well, an
> additional problem was that once the VCMs fwnode was linked to the
> sensor's the .complete() callback for ipu3-cio2 would never call
> (because it needs ALL the devices for the linked fwnodes to be bound to
> do that)...which meant the VCMs never got instantiated, because that was
> where that function was called. With your new set separating those
> processes it works well, so yes I like that new approach very much :D
> 
> 
> In the end we don't have to add a call creating the subdev's - it turns
> out that v4l2 knows it's part of ipu3-cio2's v4l2-device so it registers
> the nodes for the vcm when .complete() is called for that driver. I
> still think we should add a bit creating the link to expose to userspace
> in match_notify() though.
> 
> 
> Trying to list controls for the dw9719 with v4l2-ctl -d /dev/v4l-subdev7
> -L fails with an IOCTL error, so I have some remedial work on the driver
> which I'll do tonight; I'd expect to be able to control focus with
> v4l2-ctl -d /dev/v4l-subdev7 -c absolute_focus=n once this is sorted.

That is great, thank you so much. I wanted to look into this myself
today but I got distracted by other stuff.

Mainly getting Windows to work (including cameras) after a fresh
Windows install on a Dell Latitude 7285, which I just got as a second
device to test IPU3 stuff on :)

Talking about this Dell Latitude 7285, I haven't had a chance to
look into it at all. But chances are I will need to do some
I2C-register dumps under Windows; last time you mentioned you
had some small tool for this? It is ok if it is a bit hackish,
it will still be very useful to have :) And I believe I will
also need to override the DSDT under Windows for this, right?
I should be able to cope with that too.

Regards,

Hans

> 
>>
>>
>> For the devnodes, the ipu3-cio2 driver itself creates the devnodes for
>> the subdevices that bind to it (like the sensor) as part of its
>> .complete() callback [2] by calling v4l2_device_register_subdev_nodes(),
>> as far as I can tell there's nothing in v4l2 core that handles that
>> automatically so I think that that lack is what's preventing the
>> devnodes from showing up. I think we should tackle the problem of the
>> missing devnodes by mimicking the effects of that function somewhere
>> within core, probably v4l2_async_match_notify() (which calls the
>> notifier's .bound() callback). I think the creation of the links to
>> expose to userspace that this is a logical unit should probably happen
>> in the same place, using the entity.function field of the subdev and the
>> asd to decide exactly what kind of link to create.
>>
>>
>> [1]
>> https://elixir.bootlin.com/linux/latest/source/drivers/media/v4l2-core/v4l2-async.c#L69
>>
>> [2]
>> https://elixir.bootlin.com/linux/latest/source/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c#L1449
>>
>>> Regards,
>>>
>>> Hans
>>>
>>>
>>>
>>> p.s.
>>>
>>> I have found a new solution for the probe-ordering problem which
>>> is patch 2 of the attached patches; I'm personally happy with
>>> this solution. I hope you like it too.
Daniel Scally Nov. 9, 2021, 4:35 p.m. UTC | #18
Hi Hans

On 09/11/2021 16:02, Hans de Goede wrote:
> Hi,
>
> On 11/9/21 13:09, Daniel Scally wrote:
>> Hi Hans
>>
>> On 09/11/2021 00:43, Daniel Scally wrote:
>>> Hi Hans
>>>
>>> On 08/11/2021 13:12, Hans de Goede wrote:
>>>> Hi,
>>>>
>>>> On 11/2/21 00:43, Daniel Scally wrote:
>>>>> Hi Hans
>>>> <snip>
>>>>  
>>>>
>>>>>> 2. I need some help with all the fwnode link stuff (I'm not very familiar
>>>>>> with this). There seems to be a chicken and egg problem here though,
>>>>>> because the v4l2subdev for the VCM does not register because of async stuff
>>>>>> and if we add it to the "graph" then my idea to enumerate the VCMs
>>>>>> from the SSDB on the complete() callback won't work. But we can do this
>>>>>> on a per sensor basis instead from the cio2_notifier_bound() callback
>>>>>> instead I guess ?
>>>>> I think on top of your work in the cio2-bridge for patch 3 you can do this:
>>>>>
>>>>>
>>>>> 1. Create another software node against the cio2_sensor struct, with the
>>>>> name coming from the vcm_types array
>>>>>
>>>>> 2. Assign that software node to board_info.swnode in
>>>>> cio2_bridge_instantiate_vcm_i2c_client()
>>>>>
>>>>> 3. Add another entry to dev_properties for the sensor, that is named
>>>>> "lens-focus" and contains a reference to the software_node created in #2
>>>>> just like the references to the sensor/cio2 nodes.
>>>>>
>>>>>
>>>>> This way when the sensor driver calls
>>>>> v4l2_async_register_subdev_sensor() it should create a notifier that
>>>>> looks for that VCM client to bind. I think then rather than putting
>>>>> anything in the .bound() / .complete() callbacks, we should modify core
>>>>> to do _something_ when async matching some subdevs. The something would
>>>>> depend on the kind of devices that match, for example with the sensor
>>>>> driver and the ipu3-cio2 driver, there's an entity whose function is
>>>>> MEDIA_ENT_F_VID_IF_BRIDGE matching to an entity whose function is
>>>>> MEDIA_ENT_F_CAM_SENSOR, and it seems to me that every scenario like that
>>>>> is going to result in media pad links being created. Similarly for our
>>>>> sensor that's a device with entity function MEDIA_ENT_F_LENS matching to
>>>>> MEDIA_ENT_F_CAM_SENSOR, and I think that in those cases we can create
>>>>> either an interface link or a new kind of link (maybe
>>>>> "MEDIA_LNK_FL_ANCILLARY_LINK" or something...) between the two to show
>>>>> that they form a single logical unit, which we can then report to libcamera.
>>>>>
>>>>>
>>>>> Hope that makes sense...
>>>> Ok, so I gave this a try, see the attached patches, but the v4l2-subdev for
>>>> the VCM still does not show up.
>>> This is exactly where I got to over the weekend too
>>>
>>>> I think that instead I need to build a full link between the sensor
>>>> and the VCM similar to the cio2 <-> sensor link. Both ends of that link
>>>> have:
>>>>
>>>> <base-swnode attached to the device>
>>>> |
>>>> --<port-swnode named (SWNODE_GRAPH_PORT_NAME_FMT, X), where X is 0 on the
>>>>   |                           sensor side and the link nr on the cio2 side
>>>>   |
>>>>   --<end-point-swnode named (SWNODE_GRAPH_ENDPOINT_NAME_FMT, 0)
>>>>
>>>> And then the 2 endpoints contain a swref property pointing to the
>>>> other endpoint swnode.
>>>>
>>>> I think we need a similar setup adding a swnode child named
>>>> (SWNODE_GRAPH_PORT_NAME_FMT, 1), to the nodes[SWNODE_SENSOR_HID] node.
>>>>
>>>> Note 1, since 0 is the "port" to the cio2, this new port child then
>>>> gets an endpoint "0" child itself, likewise we add a "port 0" child
>>>> to the vcm swnode, with a "endpoint 0" child below that and then have
>>>> the 2 endpoints contain swref properties pointing to each other.
>>>>
>>>> I think that this will properly make the VCM part of the graph and
>>>> will make its v4l2-subdev get instantiated when the graph is
>>>> complete.  Before I spend a bunch of time on implementing this,
>>>> let me ask this:
>>>>
>>>> Does this sound reasonable / like I'm heading in the right direction?
>>> I don't think that we need to add the software nodes as
>>> ports/endpoints...as far as I can tell it ought to work like this:
>>>
>>>
>>> 1. The sensor calls v4l2_async_register_subdev_sensor() which...
>>>
>>>     a) creates a notifier
>>>
>>>     b) looks for reference properties of the device's fwnode called
>>> "lens-focus" and calls v4l2_async_notifier_add_fwnode_subdev() against
>>> the reference, which tells the notifier it's connected to this other
>>> fwnode and to expect it to bind.
>>>
>>> 2. When new subdevs are registered they get tested for a match against
>>> the notifier registered in 1a that matches to their fwnode using
>>> match_fwnode() [1]. This should work, on the grounds that we registered
>>> the device using the board_info.swnode and registered a lens-focus
>>> property that points to that software_node
>>>
>>> 3. When a match is found, the notifier's .bound() function is called.
>>> When all the asds that the notifier expects are bound the notifier's
>>> .complete() callback is called.
>>>
>>>
>>> That's not working correctly for me at the moment, but I think this is a
>>> surmountable problem rather than the wrong approach, so I'm just working
>>> through the differences to try and get the matching working.
>>
>> OK, I eventually got this working - the dw9719 registers as
>> /dev/v4l-subdev7 for me now ... long story short is the attached patch
>> was needed to make the references work, as the internals of v4l2 aren't
>> checking for fwnode->secondary. Prior to your latest series as well, an
>> additional problem was that once the VCMs fwnode was linked to the
>> sensor's the .complete() callback for ipu3-cio2 would never call
>> (because it needs ALL the devices for the linked fwnodes to be bound to
>> do that)...which meant the VCMs never got instantiated, because that was
>> where that function was called. With your new set separating those
>> processes it works well, so yes I like that new approach very much :D
>>
>>
>> In the end we don't have to add a call creating the subdev's - it turns
>> out that v4l2 knows it's part of ipu3-cio2's v4l2-device so it registers
>> the nodes for the vcm when .complete() is called for that driver. I
>> still think we should add a bit creating the link to expose to userspace
>> in match_notify() though.
>>
>>
>> Trying to list controls for the dw9719 with v4l2-ctl -d /dev/v4l-subdev7
>> -L fails with an IOCTL error, so I have some remedial work on the driver
>> which I'll do tonight; I'd expect to be able to control focus with
>> v4l2-ctl -d /dev/v4l-subdev7 -c absolute_focus=n once this is sorted.
> That is great, thank you so much. I wanted to look into this myself
> today but I got distracted by other stuff.


No problem; I'll link you the patches for the updated versions of
everything once I've sorted the IOCTL error tonight.

> Mainly getting Windows to work (including cameras) after a fresh
> Windows install on a Dell Latitude 7285 which I just got as a second
> device to test IPU3 stuff on :)
>
> Talking about this Dell Latitude 7285, I haven't had a chance to
> look into this at all. But chances are I will need to do some
> I2C-register dumps under Windows, last time you mentioned you
> had some small tool for this ? It is ok if it is a bit hackish,
> it will still be very useful to have :)  And I believe I will
> also need to override the DSDT under Windows for this, right?
> I should be able to cope with that too.


So the tool I was using was the I2cTestTool [1], which requires you to
first hack the DSDT to enable usermode access [2]. You need the
Microsoft ASL compiler [3] to insert the new DSDT, but fwiw I gave up
trying to use their tool to actually compile the table - it just
wouldn't compile for whatever reason - and instead compiled it with
iasl running under Ubuntu, then saved the file onto the Go2's SD card
and loaded it using asl.exe in Windows.


All that said, you don't actually need to do this for the Latitude 7285
- on the GitHub thread a chap with that device found the schematics and
posted them [4], so we should already have the information we need to
populate the board data for that one. The sensor drivers need some work
though - for the ov9734 I have a series somewhere that I think should
work but have never tested, and I don't think anyone's looked at the
ov8858 yet.


[1]
https://github.com/microsoft/Windows-iotcore-samples/tree/develop/BusTools/I2cTestTool

[2]
https://docs.microsoft.com/en-us/windows/uwp/devices-sensors/enable-usermode-access

[3]
https://docs.microsoft.com/en-gb/windows-hardware/drivers/bringup/microsoft-asl-compiler

[4]
https://github.com/linux-surface/linux-surface/issues/91#issuecomment-829641311

>
> Regards,
>
> Hans
>
>>>
>>> For the devnodes, the ipu3-cio2 driver itself creates the devnodes for
>>> the subdevices that bind to it (like the sensor) as part of its
>>> .complete() callback [2] by calling v4l2_device_register_subdev_nodes(),
>>> as far as I can tell there's nothing in v4l2 core that handles that
>>> automatically so I think that that lack is what's preventing the
>>> devnodes from showing up. I think we should tackle the problem of the
>>> missing devnodes by mimicking the effects of that function somewhere
>>> within core, probably v4l2_async_match_notify() (which calls the
>>> notifier's .bound() callback). I think the creation of the links to
>>> expose to userspace that this is a logical unit should probably happen
>>> in the same place, using the entity.function field of the subdev and the
>>> asd to decide exactly what kind of link to create.
>>>
>>>
>>> [1]
>>> https://elixir.bootlin.com/linux/latest/source/drivers/media/v4l2-core/v4l2-async.c#L69
>>>
>>> [2]
>>> https://elixir.bootlin.com/linux/latest/source/drivers/media/pci/intel/ipu3/ipu3-cio2-main.c#L1449
>>>
>>>> Regards,
>>>>
>>>> Hans
>>>>
>>>>
>>>>
>>>> p.s.
>>>>
>>>> I have found a new solution for the probe-ordering problem which
>>>> is patch 2 of the attached patches; I'm personally happy with
>>>> this solution. I hope you like it too.
Daniel Scally Nov. 10, 2021, 12:01 a.m. UTC | #19
Hi Hans

On 09/11/2021 16:35, Daniel Scally wrote:
>>>> That's not working correctly for me at the moment, but I think this is a
>>>> surmountable problem rather than the wrong approach, so I'm just working
>>>> through the differences to try and get the matching working.
>>> OK, I eventually got this working - the dw9719 registers as
>>> /dev/v4l-subdev7 for me now ... long story short is the attached patch
>>> was needed to make the references work, as the internals of v4l2 aren't
>>> checking for fwnode->secondary. Prior to your latest series as well, an
>>> additional problem was that once the VCMs fwnode was linked to the
>>> sensor's the .complete() callback for ipu3-cio2 would never call
>>> (because it needs ALL the devices for the linked fwnodes to be bound to
>>> do that)...which meant the VCMs never got instantiated, because that was
>>> where that function was called. With your new set separating those
>>> processes it works well, so yes I like that new approach very much :D
>>>
>>>
>>> In the end we don't have to add a call creating the subdev's - it turns
>>> out that v4l2 knows it's part of ipu3-cio2's v4l2-device so it registers
>>> the nodes for the vcm when .complete() is called for that driver. I
>>> still think we should add a bit creating the link to expose to userspace
>>> in match_notify() though.
>>>
>>>
>>> Trying to list controls for the dw9719 with v4l2-ctl -d /dev/v4l-subdev7
>>> -L fails with an IOCTL error, so I have some remedial work on the driver
>>> which I'll do tonight; I'd expect to be able to control focus with
>>> v4l2-ctl -d /dev/v4l-subdev7 -c absolute_focus=n once this is sorted.
>> That is great, thank you so much. I wanted to look into this myself
>> today but I got distracted by other stuff.
>
> No problem; I'll link you the patches for the updated versions of
> everything once I've sorted the IOCTL error tonight.


OK, this is running now. With the attached patches on top of your v5
series and the 4-patch series from earlier today, the dw9719 registers
as a v4l2 subdev and I can control it with v4l2-ctl -d /dev/v4l-subdev7
-c focus_absolute=1200 (or whatever value). One problem I'm experiencing
is that the focus position I set isn't maintained; it holds for a couple
of seconds and then resets to the "normal" focus. This happens when the
driver's .close() callback is called, which happens right after the
control value is applied. All the other VCM drivers in the kernel power
down on .close() so I did the same, but the behaviour is not
particularly useful - since removing the power seems to reset the lens
position, the VCM needs to stay powered whilst the linked sensor is
streaming, I suppose. Given that ascertaining the state of the sensor
will probably require some link established between them anyway, I
guess I will look at that next, unless you'd rather do it?
Andy Shevchenko Nov. 10, 2021, 8:15 a.m. UTC | #20
On Wed, Nov 10, 2021 at 12:01:19AM +0000, Daniel Scally wrote:
> On 09/11/2021 16:35, Daniel Scally wrote:

Some comments on the code below.

...

> +static int dw9719_i2c_rd8(struct i2c_client *client, u8 reg, u8 *val)
> +{
> +	struct i2c_msg msg[2];

> +	u8 buf[2] = { reg };

See below.

> +	int ret;
> +
> +	msg[0].addr = client->addr;
> +	msg[0].flags = 0;
> +	msg[0].len = 1;
> +	msg[0].buf = buf;
> +
> +	msg[1].addr = client->addr;
> +	msg[1].flags = I2C_M_RD;
> +	msg[1].len = 1;
> +	msg[1].buf = &buf[1];
> +	*val = 0;
> +
> +	ret = i2c_transfer(client->adapter, msg, 2);
> +	if (ret < 0)
> +		goto err;
> +
> +	*val = buf[1];
> +
> +	return 0;

> +err:
> +	return ret;

Useless. Return in-place.

> +}

...

> +	u8 buf[3] = { reg, (u8)(val >> 8), (u8)(val & 0xff)};

This, and similar cases, has an endianness issue.

You are supposed to have a __be16 or __le16 buffer with respect to the hardware.
Another way (since I looked at the other places) is to use put_unaligned_*().

As per the above, this requires put_unaligned_be16().

...

> +	pm_runtime_set_autosuspend_delay(&client->dev, 1000);

Why can't this be set by user space?

...

> Subject: [PATCH 2/3] device property: Check fwnode->secondary when finding
>  properties
> 
> fwnode_property_get_reference_args() searches for named properties
> against a fwnode_handle, but these could instead be against the fwnode's
> secondary. If the property isn't found against the primary, check the
> secondary to see if it's there instead.

Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>

> Signed-off-by: Daniel Scally <djrscally@gmail.com>
> ---
>  drivers/base/property.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/base/property.c b/drivers/base/property.c
> index 453918eb7390..054e62a4e710 100644
> --- a/drivers/base/property.c
> +++ b/drivers/base/property.c
> @@ -479,8 +479,16 @@ int fwnode_property_get_reference_args(const struct fwnode_handle *fwnode,
>  				       unsigned int nargs, unsigned int index,
>  				       struct fwnode_reference_args *args)
>  {
> -	return fwnode_call_int_op(fwnode, get_reference_args, prop, nargs_prop,
> -				  nargs, index, args);
> +	int ret;
> +
> +	ret = fwnode_call_int_op(fwnode, get_reference_args, prop, nargs_prop,
> +				 nargs, index, args);
> +
> +	if (ret < 0 && !IS_ERR_OR_NULL(fwnode->secondary))
> +		ret = fwnode_call_int_op(fwnode->secondary, get_reference_args,
> +					 prop, nargs_prop, nargs, index, args);
> +
> +	return ret;
>  }
>  EXPORT_SYMBOL_GPL(fwnode_property_get_reference_args);
>  
> -- 
> 2.25.1
> 

> From dd7532ddea71482502394b6b36c9fd3e5f2a0a37 Mon Sep 17 00:00:00 2001
> From: Daniel Scally <djrscally@gmail.com>
> Date: Tue, 9 Nov 2021 23:12:06 +0000
> Subject: [PATCH 1/3] platform/x86: int3472: Add vsio regulator supply to board
>  file
> 
> The Surface Go2 board file needs to additionally specify a supply name
> mapping the VSIO regulator to the world facing camera's VCM device, as
> it can sit behind an I2C daisy chain which requires this regulator be
> enabled to function.

Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>

> Signed-off-by: Daniel Scally <djrscally@gmail.com>
> ---
>  drivers/platform/x86/intel/int3472/tps68470_board_data.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/platform/x86/intel/int3472/tps68470_board_data.c b/drivers/platform/x86/intel/int3472/tps68470_board_data.c
> index 20615c342875..556a615afaa9 100644
> --- a/drivers/platform/x86/intel/int3472/tps68470_board_data.c
> +++ b/drivers/platform/x86/intel/int3472/tps68470_board_data.c
> @@ -29,6 +29,7 @@ static struct regulator_consumer_supply int347a_vcm_consumer_supplies[] = {
>  
>  static struct regulator_consumer_supply int347a_vsio_consumer_supplies[] = {
>  	REGULATOR_SUPPLY("dovdd", "i2c-INT347A:00"),
> +	REGULATOR_SUPPLY("vsio", "i2c-INT347A:00-VCM"),
>  };
>  
>  static const struct regulator_init_data surface_go_tps68470_core_reg_init_data = {
> -- 
> 2.25.1
>
Hans de Goede Nov. 11, 2021, 10:35 a.m. UTC | #21
Hi,

On 11/10/21 01:01, Daniel Scally wrote:
> Hi Hans
> 
> On 09/11/2021 16:35, Daniel Scally wrote:
>>>>> That's not working correctly for me at the moment, but I think this is a
>>>>> surmountable problem rather than the wrong approach, so I'm just working
>>>>> through the differences to try and get the matching working.
>>>> OK, I eventually got this working - the dw9719 registers as
>>>> /dev/v4l-subdev7 for me now ... long story short is the attached patch
>>>> was needed to make the references work, as the internals of v4l2 aren't
>>>> checking for fwnode->secondary. Prior to your latest series as well, an
>>>> additional problem was that once the VCMs fwnode was linked to the
>>>> sensor's the .complete() callback for ipu3-cio2 would never call
>>>> (because it needs ALL the devices for the linked fwnodes to be bound to
>>>> do that)...which meant the VCMs never got instantiated, because that was
>>>> where that function was called. With your new set separating those
>>>> processes it works well, so yes I like that new approach very much :D
>>>>
>>>>
>>>> In the end we don't have to add a call creating the subdev's - it turns
>>>> out that v4l2 knows it's part of ipu3-cio2's v4l2-device so it registers
>>>> the nodes for the vcm when .complete() is called for that driver. I
>>>> still think we should add a bit creating the link to expose to userspace
>>>> in match_notify() though.
>>>>
>>>>
>>>> Trying to list controls for the dw9719 with v4l2-ctl -d /dev/v4l-subdev7
>>>> -L fails with an IOCTL error, so I have some remedial work on the driver
>>>> which I'll do tonight; I'd expect to be able to control focus with
>>>> v4l2-ctl -d /dev/v4l-subdev7 -c absolute_focus=n once this is sorted.
>>> That is great, thank you so much. I wanted to look into this myself
>>> today but I got distracted by other stuff.
>>
>> No problem; I'll link you the patches for the updated versions of
>> everything once I've sorted the IOCTL error tonight.
> 
> 
> OK, this is running now. With the attached patches on top of your v5
> series and the 4-patch series from earlier today, the dw9719 registers
> as a v4l2 subdev and I can control it with v4l2-ctl -d /dev/v4l-subdev7
> -c focus_absolute=1200 (or whatever value).

Great, thank you! I've given this a quick test and indeed everything
works :)

I did notice a typo in a comment in the dw9719.c file which I added
myself; can you squash in this fix please?:

diff --git a/drivers/media/i2c/dw9719.c b/drivers/media/i2c/dw9719.c
index 047f7636efde..c647b50c2ebf 100644
--- a/drivers/media/i2c/dw9719.c
+++ b/drivers/media/i2c/dw9719.c
@@ -283,7 +283,7 @@ static int dw9719_probe(struct i2c_client *client)
 	 * the TPS68470 PMIC have I2C passthrough capability, to disconnect the
 	 * sensor's I2C pins from the I2C bus when the sensors VSIO (Sensor-IO)
 	 * is off, because some sensors then short these pins to ground;
-	 * and the DW9719 might sit behind this passthrough, this it needs to
+	 * and the DW9719 might sit behind this passthrough, thus it needs to
 	 * enable VSIO as that will also enable the I2C passthrough.
 	 */
 	dw9719->regulators[1].supply = "vsio";

Also, the

"device property: Check fwnode->secondary when finding properties"

patch looks good to me, so please add my:

Reviewed-by: Hans de Goede <hdegoede@redhat.com>

Can you submit this upstream please?

I will prepare a new version of my:

"[PATCH v5 00/11] Add support for X86/ACPI camera sensor/PMIC setup with clk and regulator platform data"

series, addressing the few remaining comments and adding the regulator
data + instantiating support for the VCM.

> One problem I'm experiencing
> is that the focus position I set isn't maintained; it holds for a couple
> of seconds and then resets to the "normal" focus...this happens when the
> .close() callback for the driver is called, which happens right after
> the control value is applied. All the other VCM drivers in the kernel
> power down on .close() so I did the same>

Right, I believe that this is fine though, we expect people to use
libcamera with this and once libcamera gets autofocus support, then
I would expect libcamera to keep the fd open the entire time while
streaming.

>, but the behaviour is not
> particularly useful - since removing the power seems to reset it, it
> needs to be on whilst the linked sensor is streaming I suppose. Given
> that ascertaining the state of the sensor probably will require some
> link established between them anyway I guess I will look at that next,
> unless you'd rather do it?

I don't think this is necessary, see above.

What is necessary is some way for libcamera to:

1. See if there is a VCM which belongs to the sensor; and
2. If there is a VCM figure out which v4l2-subdev it is.

Also see this email thread, where Hans Verkuil came to the
conclusion that this info is currently missing from the MC
representation (link is to the conclusion):

https://lists.libcamera.org/pipermail/libcamera-devel/2021-October/026144.html

Regards,

Hans
Daniel Scally Nov. 11, 2021, 11:18 a.m. UTC | #22
Hello

On 11/11/2021 10:35, Hans de Goede wrote:
> Hi,
>
> On 11/10/21 01:01, Daniel Scally wrote:
>> Hi Hans
>>
>> On 09/11/2021 16:35, Daniel Scally wrote:
>>>>>> That's not working correctly for me at the moment, but I think this is a
>>>>>> surmountable problem rather than the wrong approach, so I'm just working
>>>>>> through the differences to try and get the matching working.
>>>>> OK, I eventually got this working - the dw9719 registers as
>>>>> /dev/v4l-subdev7 for me now ... long story short is the attached patch
>>>>> was needed to make the references work, as the internals of v4l2 aren't
>>>>> checking for fwnode->secondary. Prior to your latest series as well, an
>>>>> additional problem was that once the VCMs fwnode was linked to the
>>>>> sensor's the .complete() callback for ipu3-cio2 would never call
>>>>> (because it needs ALL the devices for the linked fwnodes to be bound to
>>>>> do that)...which meant the VCMs never got instantiated, because that was
>>>>> where that function was called. With your new set separating those
>>>>> processes it works well, so yes I like that new approach very much :D
>>>>>
>>>>>
>>>>> In the end we don't have to add a call creating the subdev's - it turns
>>>>> out that v4l2 knows it's part of ipu3-cio2's v4l2-device so it registers
>>>>> the nodes for the vcm when .complete() is called for that driver. I
>>>>> still think we should add a bit creating the link to expose to userspace
>>>>> in match_notify() though.
>>>>>
>>>>>
>>>>> Trying to list controls for the dw9719 with v4l2-ctl -d /dev/v4l-subdev7
>>>>> -L fails with an IOCTL error, so I have some remedial work on the driver
>>>>> which I'll do tonight; I'd expect to be able to control focus with
>>>>> v4l2-ctl -d /dev/v4l-subdev7 -c absolute_focus=n once this is sorted.
>>>> That is great, thank you so much. I wanted to look into this myself
>>>> today but I got distracted by other stuff.
>>> No problem; I'll link you the patches for the updated versions of
>>> everything once I've sorted the IOCTL error tonight.
>>
>> OK, this is running now. With the attached patches on top of your v5
>> series and the 4-patch series from earlier today, the dw9719 registers
>> as a v4l2 subdev and I can control it with v4l2-ctl -d /dev/v4l-subdev7
>> -c focus_absolute=1200 (or whatever value).
> Great, thank you! I've given this a quick test and indeed everything
> works :)
>
> I did notice a typo in a comment in the dw9719.c file which I added
> myself, can you squash in this fix pleas? :


No problem, will do

> Also I think that the 
>
> "device property: Check fwnode->secondary when finding properties"
>
> That patch looks good to me, so please add my:
>
> Reviewed-by: Hans de Goede <hdegoede@redhat.com>
>
> Can you submit this upstream please?


Thanks; I'll post it later yes (and thanks for your R-b too Andy)

> I will prepare a new version of my:
>
> "[PATCH v5 00/11] Add support for X86/ACPI camera sensor/PMIC setup with clk and regulator platform data"
>
> series, addressing the few remaining comments and adding the regulator
> data + instantiating support for the VCM.
>
>> One problem I'm experiencing
>> is that the focus position I set isn't maintained; it holds for a couple
>> of seconds and then resets to the "normal" focus...this happens when the
>> .close() callback for the driver is called, which happens right after
>> the control value is applied. All the other VCM drivers in the kernel
>> power down on .close() so I did the same>
> Right, I believe that this is fine though, we expect people to use
> libcamera with this and once libcamera gets autofocus support, then
> I would expect libcamera to keep the fd open the entire time while
> streaming.


OK - as long as that's how it works then I agree that this is fine as is
yes.


> What is necessary is some way for libcamera to:
>
> 1. See if there is a VCM which belongs to the sensor; and
> 2. If there is a VCM figure out which v4l2-subdev it is.
>
> Also see this email thread, where Hans Verkuil came to the
> conclusion that this info is currently missing from the MC
> representation (link is to the conclusion):
>
> https://lists.libcamera.org/pipermail/libcamera-devel/2021-October/026144.html


Yeah I read through that thread too, and had a brief chat with Laurent
about it. My plan was to add a new type of link called an "ancillary
link" between two entities, and automatically create those in
match_notify() based on the function field of the matching entities, and
expose them as part of the media graph. I've started working on that but
not progressed far enough to share anything. Libcamera would need
updating with support for that too though.

>
> Regards,
>
> Hans
>
Hans de Goede Nov. 11, 2021, 3:23 p.m. UTC | #23
Hi,

On 11/11/21 12:18, Daniel Scally wrote:

<snip>

>>> One problem I'm experiencing
>>> is that the focus position I set isn't maintained; it holds for a couple
>>> of seconds and then resets to the "normal" focus...this happens when the
>>> .close() callback for the driver is called, which happens right after
>>> the control value is applied. All the other VCM drivers in the kernel
>>> power down on .close() so I did the same>
>> Right, I believe that this is fine though, we expect people to use
>> libcamera with this and once libcamera gets autofocus support, then
>> I would expect libcamera to keep the fd open the entire time while
>> streaming.
> 
> 
> OK - as long as that's how it works then I agree that this is fine as is
> yes.

So I've just picked up an old project of mine, called gtk-v4l, which
is a nice, simple v4l2 controls applet, and patched it up to also
work on v4l-subdevs:

https://github.com/jwrdegoede/gtk-v4l/

So now you can run:

sudo gtk-v4l -d /dev/v4l-subdev8

And it will give you a slider to control the focus; and as
a bonus it keeps the v4l-subdev open, so no more runtime-pm
issue :)

>> What is necessary is some way for libcamera to:
>>
>> 1. See if there is a VCM which belongs to the sensor; and
>> 2. If there is a VCM figure out which v4l2-subdev it is.
>>
>> Also see this email thread, where Hans Verkuil came to the
>> conclusion that this info is currently missing from the MC
>> representation (link is to the conclusion):
>>
>> https://lists.libcamera.org/pipermail/libcamera-devel/2021-October/026144.html
> 
> 
> Yeah I read through that thread too, and had a brief chat with Laurent
> about it. My plan was to add a new type of link called an "ancillary
> link" between two entities, and automatically create those in
> match_notify() based on the function field of the matching entities, and
> expose them as part of the media graph. I've started working on that but
> not progressed far enough to share anything.

Sounds good.

> Libcamera would need
> updating with support for that too though.

Right, I think libcamera will need updating no matter what; first we
need to come up with a userspace API for this.

Although I guess it would be good to also write libcamera patches
once the kernel patches are ready, but not yet merged, to make
sure the API is usable without problems by libcamera.

Regards,

Hans
Dave Stevenson Nov. 11, 2021, 3:51 p.m. UTC | #24
Hi Hans

On Thu, 11 Nov 2021 at 15:23, Hans de Goede <hdegoede@redhat.com> wrote:
>
> Hi,
>
> On 11/11/21 12:18, Daniel Scally wrote:
>
> <snip>
>
> >>> One problem I'm experiencing
> >>> is that the focus position I set isn't maintained; it holds for a couple
> >>> of seconds and then resets to the "normal" focus...this happens when the
> >>> .close() callback for the driver is called, which happens right after
> >>> the control value is applied. All the other VCM drivers in the kernel
> >>> power down on .close() so I did the same>
> >> Right, I believe that this is fine though, we expect people to use
> >> libcamera with this and once libcamera gets autofocus support, then
> >> I would expect libcamera to keep the fd open the entire time while
> >> streaming.
> >
> >
> > OK - as long as that's how it works then I agree that this is fine as is
> > yes.
>
> So I've just picked up an old project of mine, called gtk-v4l which
> is a nice simply v4l2 controls applet and patches it up to also
> work on v4l-subdevs:
>
> https://github.com/jwrdegoede/gtk-v4l/
>
> So now you can run:
>
> sudo gtk-v4l -d /dev/v4l-subdev8
>
> And it will give you a slider to control the focus; and as
> a bonus it keeps the v4l-subdev open, so no more runtime-pm
> issue :)

Do the lens and sensor share a regulator / enable GPIO?

I was looking at the same issue for a Sony IMX135 module with AD5398
VCM driver [1].
In my case they do share an enable GPIO, so using regulator-gpio we
can register via regulator_register_notifier for information on when
the regulator is powered up. It can then also reset to the last
position should the sensor subdev enable the regulator without the
lens driver being opened at all.

I don't know if that helps in your case.

 Dave

[1] https://github.com/6by9/linux/commit/e15e712e8c17afe03f121540178371ce2a8a7922
on branch https://github.com/6by9/linux/commits/rpi-5.10.y-imx135

> >> What is necessary is some way for libcamera to:
> >>
> >> 1. See if there is a VCM which belongs to the sensor; and
> >> 2. If there is a VCM figure out which v4l2-subdev it is.
> >>
> >> Also see this email thread, where Hans Verkuil came to the
> >> conclusion that this info is currently missing from the MC
> >> representation (link is to the conclusion):
> >>
> >> https://lists.libcamera.org/pipermail/libcamera-devel/2021-October/026144.html
> >
> >
> > Yeah I read through that thread too, and had a brief chat with Laurent
> > about it. My plan was to add a new type of link called an "ancillary
> > link" between two entities, and automatically create those in
> > match_notify() based on the function field of the matching entities, and
> > expose them as part of the media graph. I've started working on that but
> > not progressed far enough to share anything.
>
> Sounds good.
>
> > Libcamera would need
> > updating with support for that too though.
>
> Right I think libcamera will need updating no matter what, first we
> need to comeup with a userspace API for this.
>
> Although I guess it would be good to also write libcamera patches
> once the kernel patches are ready, but not yet merged, to make
> sure the API is usable without problems by libcamera.
>
> Regards,
>
> Hans
>
Laurent Pinchart Nov. 11, 2021, 3:51 p.m. UTC | #25
On Thu, Nov 11, 2021 at 04:23:38PM +0100, Hans de Goede wrote:
> On 11/11/21 12:18, Daniel Scally wrote:
> 
> <snip>
> 
> >>> One problem I'm experiencing
> >>> is that the focus position I set isn't maintained; it holds for a couple
> >>> of seconds and then resets to the "normal" focus...this happens when the
> >>> .close() callback for the driver is called, which happens right after
> >>> the control value is applied. All the other VCM drivers in the kernel
> >>> power down on .close() so I did the same>
> >> Right, I believe that this is fine though, we expect people to use
> >> libcamera with this and once libcamera gets autofocus support, then
> >> I would expect libcamera to keep the fd open the entire time while
> >> streaming.
> > 
> > 
> > OK - as long as that's how it works then I agree that this is fine as is
> > yes.
> 
> So I've just picked up an old project of mine, called gtk-v4l which
> is a nice simply v4l2 controls applet and patches it up to also
> work on v4l-subdevs:
> 
> https://github.com/jwrdegoede/gtk-v4l/
> 
> So now you can run:
> 
> sudo gtk-v4l -d /dev/v4l-subdev8
> 
> And it will give you a slider to control the focus; and as
> a bonus it keeps the v4l-subdev open, so no more runtime-pm
> issue :)
> 
> >> What is necessary is some way for libcamera to:
> >>
> >> 1. See if there is a VCM which belongs to the sensor; and
> >> 2. If there is a VCM figure out which v4l2-subdev it is.
> >>
> >> Also see this email thread, where Hans Verkuil came to the
> >> conclusion that this info is currently missing from the MC
> >> representation (link is to the conclusion):
> >>
> >> https://lists.libcamera.org/pipermail/libcamera-devel/2021-October/026144.html
> > 
> > 
> > Yeah I read through that thread too, and had a brief chat with Laurent
> > about it. My plan was to add a new type of link called an "ancillary
> > link" between two entities, and automatically create those in
> > match_notify() based on the function field of the matching entities, and
> > expose them as part of the media graph. I've started working on that but
> > not progressed far enough to share anything.
> 
> Sounds good.
> 
> > Libcamera would need
> > updating with support for that too though.
> 
> Right I think libcamera will need updating no matter what, first we
> need to comeup with a userspace API for this.
> 
> Although I guess it would be good to also write libcamera patches
> once the kernel patches are ready, but not yet merged, to make
> sure the API is usable without problems by libcamera.

I strongly agree with this. We're moving towards mandating a libcamera
implementation to get new APIs merged in the kernel.
Hans de Goede Nov. 11, 2021, 3:59 p.m. UTC | #26
Hi,

On 11/9/21 17:35, Daniel Scally wrote:

<snip>

>> Talking about this Dell Latitude 7285, I haven't had a chance to
>> look into this at all. But chances are I will need to do some
>> I2C-register dumps under Windows, last time you mentioned you
>> had some small tool for this ? It is ok if it is a bit hackish,
>> it will still be very useful to have :)  And I believe I will
>> also need to override the DSDT under Windows for this, right?
>> I should be able to cope with that too.
> 
> 
> So the tool I was using was the I2cTestTool [1], which requires you to
> first hack the DSDT to enable usermode access [2]. You need the
> Microsoft ASL compiler [3] to insert the new DSDT, but fwiw I gave up
> trying to use their tool to actually compile the table and just did it
> running Ubuntu with iasl, then saved the file onto the Go2's SD card and
> loaded it using asl.exe in Windows...the MS tool just wouldn't compile
> for whatever reason.
> 
> 
> All that said; you don't actually need to do this for the Latitude 7285
> - on the Github thread a chap with that device found the schematics and
> posted them [4], so we should already have the information we need to
> populate the board data for that one. The sensor drivers need some work
> though - the ov9734 I have a series somewhere that I think should work
> but haven't ever tested, the ov8858 I don't think anyone's looked at yet.

Awesome, thank you very much for these links.

If you can dig up the ov9734 patch series you have and email me a copy (1),
that would be great, then I can start looking into getting things to work
on the Latitude 7285.

Regards,

Hans


1) No need to make it compile with the latest, I can take care of that
just email me what you have :)
Hans de Goede Nov. 11, 2021, 4:50 p.m. UTC | #27
Hi,

On 11/11/21 16:51, Dave Stevenson wrote:
> Hi Hans
> 
> On Thu, 11 Nov 2021 at 15:23, Hans de Goede <hdegoede@redhat.com> wrote:
>>
>> Hi,
>>
>> On 11/11/21 12:18, Daniel Scally wrote:
>>
>> <snip>
>>
>>>>> One problem I'm experiencing
>>>>> is that the focus position I set isn't maintained; it holds for a couple
>>>>> of seconds and then resets to the "normal" focus...this happens when the
>>>>> .close() callback for the driver is called, which happens right after
>>>>> the control value is applied. All the other VCM drivers in the kernel
>>>>> power down on .close() so I did the same>
>>>> Right, I believe that this is fine though, we expect people to use
>>>> libcamera with this and once libcamera gets autofocus support, then
>>>> I would expect libcamera to keep the fd open the entire time while
>>>> streaming.
>>>
>>>
>>> OK - as long as that's how it works then I agree that this is fine as is
>>> yes.
>>
>> So I've just picked up an old project of mine, called gtk-v4l which
>> is a nice simply v4l2 controls applet and patches it up to also
>> work on v4l-subdevs:
>>
>> https://github.com/jwrdegoede/gtk-v4l/
>>
>> So now you can run:
>>
>> sudo gtk-v4l -d /dev/v4l-subdev8
>>
>> And it will give you a slider to control the focus; and as
>> a bonus it keeps the v4l-subdev open, so no more runtime-pm
>> issue :)
> 
> Do the lens and sensor share a regulator / enable GPIO?

No; if they did, there would be no runtime-pm issue, because the
VCM would not get turned off after a v4l2-set command (for a
quick test): the streaming from the sensor would keep the sensor,
and thus the regulator, on.

> I was looking at the same issue for a Sony IMX135 module with AD5398
> VCM driver [1].
> In my case they do share an enable GPIO, so using regulator-gpio we
> can register via regulator_register_notifier for information on when
> the regulator is powered up. It can then also reset to the last
> position should the sensor subdev enable the regulator without the
> lens driver being opened at all.

That sounds like it is relying on board-dependent behavior
(the enable GPIO and/or regulator being shared), which we don't
want in the VCM drivers, as those are supposed to be board
agnostic.

This really is something which should be fixed in userspace
where the userspace consumer of the sensor should also always
open the vcm v4l-subdev.

Regards,

Hans
Dave Stevenson Nov. 11, 2021, 7:30 p.m. UTC | #28
On Thu, 11 Nov 2021 at 16:50, Hans de Goede <hdegoede@redhat.com> wrote:
>
> Hi,
>
> On 11/11/21 16:51, Dave Stevenson wrote:
> > Hi Hans
> >
> > On Thu, 11 Nov 2021 at 15:23, Hans de Goede <hdegoede@redhat.com> wrote:
> >>
> >> Hi,
> >>
> >> On 11/11/21 12:18, Daniel Scally wrote:
> >>
> >> <snip>
> >>
> >>>>> One problem I'm experiencing
> >>>>> is that the focus position I set isn't maintained; it holds for a couple
> >>>>> of seconds and then resets to the "normal" focus...this happens when the
> >>>>> .close() callback for the driver is called, which happens right after
> >>>>> the control value is applied. All the other VCM drivers in the kernel
> >>>>> power down on .close() so I did the same>
> >>>> Right, I believe that this is fine though, we expect people to use
> >>>> libcamera with this and once libcamera gets autofocus support, then
> >>>> I would expect libcamera to keep the fd open the entire time while
> >>>> streaming.
> >>>
> >>>
> >>> OK - as long as that's how it works then I agree that this is fine as is
> >>> yes.
> >>
> >> So I've just picked up an old project of mine, called gtk-v4l which
> >> is a nice simply v4l2 controls applet and patches it up to also
> >> work on v4l-subdevs:
> >>
> >> https://github.com/jwrdegoede/gtk-v4l/
> >>
> >> So now you can run:
> >>
> >> sudo gtk-v4l -d /dev/v4l-subdev8
> >>
> >> And it will give you a slider to control the focus; and as
> >> a bonus it keeps the v4l-subdev open, so no more runtime-pm
> >> issue :)
> >
> > Do the lens and sensor share a regulator / enable GPIO?
>
> No, if they did then there would be no runtime-pm issue,
> because then the VCM would not get turned off after
> a v4l2-set command (for a quick test) since then the
> streaming from the sensor would keep the sensor and
> thus the regulator on.

Registering with the regulator was more so that it restored the
position on sensor power up, independent of whether the lens driver
was opened or not.

> > I was looking at the same issue for a Sony IMX135 module with AD5398
> > VCM driver [1].
> > In my case they do share an enable GPIO, so using regulator-gpio we
> > can register via regulator_register_notifier for information on when
> > the regulator is powered up. It can then also reset to the last
> > position should the sensor subdev enable the regulator without the
> > lens driver being opened at all.
>
> That sounds like it is relying on board-dependent behavior
> (the enable GPIO and/or regulator being shared) which we don't
> want in the VCM drivers as those are supposed to be board
> agnostic.

All platforms I've encountered so far have used the same GPIO to
control both VCM and sensor, hence why I asked. The number of use
cases where you want one without the other is incredibly low, and
hardware guys generally don't like wasting GPIOs or having to route
them around the PCB. It's interesting that your platform has separated
them.

> This really is something which should be fixed in userspace
> where the userspace consumer of the sensor should also always
> open the vcm v4l-subdev.

Not all use cases involve libcamera, and what you're proposing is
making life very difficult for the simple use cases.
There may be GStreamer folk on board with libcamera, but I've heard no
noises from FFmpeg about libcamera support. V4L2 is still the default
API that users generally care about. Particularly with mono sensors
the output is often directly usable without worrying about the
complexities of ISPs, but you're effectively saying "jump through lots
of hoops or you can't use a VCM with these sensors".

If userspace has called VIDIOC_STREAMON doesn't that mean they want
the whole entity (as configured) to be powered on and start streaming?
Are you saying that the lens isn't part of that entity? In which case
why does Media Controller include it (and eventually link it to the
sensor) in the media entity?

Would you advocate making backlight control in DRM a function that
userspace is responsible for independently of the panel pipeline?
There are significant similarities to this situation as the panel
isn't usable without the backlight being powered, same as the sensor
isn't usable without the VCM being powered.

Sorry, but I just see isolating power control for the VCM from the
sensor in this way to be a very odd design decision. It'd be
interesting to hear other views.

  Dave
Laurent Pinchart Nov. 11, 2021, 10:04 p.m. UTC | #29
Hello,

On Thu, Nov 11, 2021 at 07:30:39PM +0000, Dave Stevenson wrote:
> On Thu, 11 Nov 2021 at 16:50, Hans de Goede wrote:
> > On 11/11/21 16:51, Dave Stevenson wrote:
> > > On Thu, 11 Nov 2021 at 15:23, Hans de Goede wrote:
> > >> On 11/11/21 12:18, Daniel Scally wrote:
> > >>
> > >> <snip>
> > >>
> > >>>>> One problem I'm experiencing
> > >>>>> is that the focus position I set isn't maintained; it holds for a couple
> > >>>>> of seconds and then resets to the "normal" focus...this happens when the
> > >>>>> .close() callback for the driver is called, which happens right after
> > >>>>> the control value is applied. All the other VCM drivers in the kernel
> > >>>>> power down on .close() so I did the same>
> > >>>> Right, I believe that this is fine though, we expect people to use
> > >>>> libcamera with this and once libcamera gets autofocus support, then
> > >>>> I would expect libcamera to keep the fd open the entire time while
> > >>>> streaming.
> > >>>
> > >>>
> > >>> OK - as long as that's how it works then I agree that this is fine as is
> > >>> yes.
> > >>
> > >> So I've just picked up an old project of mine, called gtk-v4l which
> > >> is a nice simply v4l2 controls applet and patches it up to also
> > >> work on v4l-subdevs:
> > >>
> > >> https://github.com/jwrdegoede/gtk-v4l/
> > >>
> > >> So now you can run:
> > >>
> > >> sudo gtk-v4l -d /dev/v4l-subdev8
> > >>
> > >> And it will give you a slider to control the focus; and as
> > >> a bonus it keeps the v4l-subdev open, so no more runtime-pm
> > >> issue :)
> > >
> > > Do the lens and sensor share a regulator / enable GPIO?
> >
> > No, if they did then there would be no runtime-pm issue,
> > because then the VCM would not get turned off after
> > a v4l2-set command (for a quick test) since then the
> > streaming from the sensor would keep the sensor and
> > thus the regulator on.
> 
> Registering with the regulator was more so that it restored the
> position on sensor power up, independent of whether the lens driver
> was opened or not.
> 
> > > I was looking at the same issue for a Sony IMX135 module with AD5398
> > > VCM driver [1].
> > > In my case they do share an enable GPIO, so using regulator-gpio we
> > > can register via regulator_register_notifier for information on when
> > > the regulator is powered up. It can then also reset to the last
> > > position should the sensor subdev enable the regulator without the
> > > lens driver being opened at all.
> >
> > That sounds like it is relying on board-dependent behavior
> > (the enable GPIO and/or regulator being shared) which we don't
> > want in the VCM drivers as those are supposed to be board
> > agnostic.
> 
> All platforms I've encountered so far have used the same GPIO to
> control both VCM and sensor, hence why I asked. The number of use
> cases where you want one without the other is incredibly low, and
> hardware guys generally don't like wasting GPIOs or having to route
> them around the PCB. It's interesting that your platform has separated
> them.
> 
> > This really is something which should be fixed in userspace
> > where the userspace consumer of the sensor should also always
> > open the vcm v4l-subdev.
> 
> Not all use cases involve libcamera, and what you're proposing is
> making life very difficult for the simple use cases.
> There may be GStreamer folk on board with libcamera, but I've heard no
> noises from FFmpeg about libcamera support. V4L2 is still the default
> API that users generally care about. Particularly with mono sensors
> the output is often directly usable without worrying about the
> complexities of ISPs, but you're effectively saying "jump through lots
> of hoops or you can't use a VCM with these sensors".

Usage of libcamera is certainly not mandatory, but let's not forget that
we're dealing with complex devices. In most cases applications will want
auto-focus, which will require a userspace camera stack. Even when using
manual focus, apart from moving the lens to the infinity position, there
isn't much that an application could do without some sort of calibration
data. Having to keep the VCM subdev open is the easy part. As long as
this is documented properly in the V4L2 API, I don't think it's a big
issue.

> If userspace has called VIDIOC_STREAMON doesn't that mean they want
> the whole entity (as configured) to be powered on and start streaming?
> Are you saying that the lens isn't part of that entity? In which case
> why does Media Controller include it (and eventually link it to the
> sensor) in the media entity?
> 
> Would you advocate making backlight control in DRM a function that
> userspace is responsible for independently of the panel pipeline?
> There are significant similarities to this situation as the panel
> isn't usable without the backlight being powered, same as the sensor
> isn't usable without the VCM being powered.

Isn't the backlight actually controlled through sysfs separately from
the display pipeline ?

> Sorry, but I just see isolating power control for the VCM from the
> sensor in this way to be a very odd design decision. It'd be
> interesting to hear other views.

Despite the above, I wouldn't oppose powering the VCM automatically when
the sensor is streaming, but I'm concerned about corner cases. For
instance, one may want to keep the VCM powered when toggling streaming
off and then back on. I wouldn't be surprised if there were other needs
to have control of VCM power from userspace. I haven't studied the
question in detail, though.
Dave Stevenson Nov. 12, 2021, 10:32 a.m. UTC | #30
Hi Laurent

On Thu, 11 Nov 2021 at 22:04, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
>
> Hello,
>
> On Thu, Nov 11, 2021 at 07:30:39PM +0000, Dave Stevenson wrote:
> > On Thu, 11 Nov 2021 at 16:50, Hans de Goede wrote:
> > > On 11/11/21 16:51, Dave Stevenson wrote:
> > > > On Thu, 11 Nov 2021 at 15:23, Hans de Goede wrote:
> > > >> On 11/11/21 12:18, Daniel Scally wrote:
> > > >>
> > > >> <snip>
> > > >>
> > > >>>>> One problem I'm experiencing
> > > >>>>> is that the focus position I set isn't maintained; it holds for a couple
> > > >>>>> of seconds and then resets to the "normal" focus...this happens when the
> > > >>>>> .close() callback for the driver is called, which happens right after
> > > >>>>> the control value is applied. All the other VCM drivers in the kernel
> > > >>>>> power down on .close() so I did the same.
> > > >>>> Right, I believe that this is fine though, we expect people to use
> > > >>>> libcamera with this and once libcamera gets autofocus support, then
> > > >>>> I would expect libcamera to keep the fd open the entire time while
> > > >>>> streaming.
> > > >>>
> > > >>>
> > > >>> OK - as long as that's how it works then I agree that this is fine as is
> > > >>> yes.
> > > >>
> > > >> So I've just picked up an old project of mine, called gtk-v4l, which
> > > >> is a nice, simple v4l2 controls applet, and patched it up to also
> > > >> work on v4l-subdevs:
> > > >>
> > > >> https://github.com/jwrdegoede/gtk-v4l/
> > > >>
> > > >> So now you can run:
> > > >>
> > > >> sudo gtk-v4l -d /dev/v4l-subdev8
> > > >>
> > > >> And it will give you a slider to control the focus; and as
> > > >> a bonus it keeps the v4l-subdev open, so no more runtime-pm
> > > >> issue :)
> > > >
> > > > Do the lens and sensor share a regulator / enable GPIO?
> > >
> > > No, if they did then there would be no runtime-pm issue,
> > > because then the VCM would not get turned off after
> > > a v4l2-set command (for a quick test) since then the
> > > streaming from the sensor would keep the sensor and
> > > thus the regulator on.
> >
> > Registering with the regulator was more so that it restored the
> > position on sensor power up, independent of whether the lens driver
> > was opened or not.
> >
> > > > I was looking at the same issue for a Sony IMX135 module with AD5398
> > > > VCM driver [1].
> > > > In my case they do share an enable GPIO, so using regulator-gpio we
> > > > can register via regulator_register_notifier for information on when
> > > > the regulator is powered up. It can then also reset to the last
> > > > position should the sensor subdev enable the regulator without the
> > > > lens driver being opened at all.
> > >
> > > That sounds like it is relying on board-dependent behavior
> > > (the enable GPIO and/or regulator being shared) which we don't
> > > want in the VCM drivers as those are supposed to be board
> > > agnostic.
> >
> > All platforms I've encountered so far have used the same GPIO to
> > control both VCM and sensor, hence why I asked. The number of use
> > cases where you want one without the other is incredibly low, and
> > hardware guys generally don't like wasting GPIOs or having to route
> > them around the PCB. It's interesting that your platform has separated
> > them.
> >
> > > This really is something which should be fixed in userspace
> > > where the userspace consumer of the sensor should also always
> > > open the vcm v4l-subdev.
> >
> > Not all use cases involve libcamera, and what you're proposing is
> > making life very difficult for the simple use cases.
> > There may be GStreamer folk on board with libcamera, but I've heard no
> > noises from FFmpeg about libcamera support. V4L2 is still the default
> > API that users generally care about. Particularly with mono sensors
> > the output is often directly usable without worrying about the
> > complexities of ISPs, but you're effectively saying "jump through lots
> > of hoops or you can't use a VCM with these sensors".
>
> Usage of libcamera is certainly not mandatory, but let's not forget that
> we're dealing with complex devices. In most cases applications will want
> auto-focus, which will require a userspace camera stack. Even when using
> manual focus, apart from moving the lens to the infinity position, there
> isn't much that an application could do without some sort of calibration
> data. Having to keep the VCM subdev open is the easy part. As long as
> this is documented properly in the V4L2 API, I don't think it's a big
> issue.

You know I've never been a huge fan of Media Controller, but at least
there you can preconfigure your pipeline via media-ctl and then stream
with v4l2-ctl. If the VCM isn't powered, then v4l2-ctl becomes largely
useless as a test tool without now having a second program to hold the
subdev open (as Hans has found out). The same goes for anything else
that streams a pre-configured pipeline (eg GStreamer v4l2src or FFmpeg
v4l2 plugin).

Preconfigure your lens position via "v4l2-ctl
--set-ctrl=focus_absolute=X", or have a sensible default in the VCM
driver config (it describes the hardware, so it could be in DT), have
the pipeline handle power, and you still have a usable capture device
through just V4L2. Otherwise you're saying that the powered down
position of the VCM (wherever that might be) is the best you get.

> > If userspace has called VIDIOC_STREAMON doesn't that mean they want
> > the whole entity (as configured) to be powered on and start streaming?
> > Are you saying that the lens isn't part of that entity? In which case
> > why does Media Controller include it (and eventually link it to the
> > sensor) in the media entity?
> >
> > Would you advocate making backlight control in DRM a function that
> > userspace is responsible for independently of the panel pipeline?
> > There are significant similarities to this situation as the panel
> > isn't usable without the backlight being powered, same as the sensor
> > isn't usable without the VCM being powered.
>
> Isn't the backlight actually controlled through sysfs separately from
> the display pipeline ?

Brightness is controlled via sysfs, same as lens position is set via
the VCM subdev.
It allows for an override of the state via sysfs, same as you can have
userspace open the VCM subdev.
However drm_panel_enable [1] calls backlight_enable, and
drm_panel_disable [2] calls backlight_disable for automatic control by
the framework.

[1] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_panel.c#L151
[2] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_panel.c#L183

> > Sorry, but I just see isolating power control for the VCM from the
> > sensor in this way to be a very odd design decision. It'd be
> > interesting to hear other views.
>
> Despite the above, I wouldn't oppose powering the VCM automatically when
> the sensor is streaming, but I'm concerned about corner cases. For
> instance, one may want to keep the VCM powered when toggling streaming
> off and then back on. I wouldn't be surprised if there were other needs
> to have control of VCM power from userspace. I haven't studied the
> question in detail, though.

Refcount the users. Opening the subdev counts as one, and streaming
counts as one. You can now hold the power on if you wish to do so.

It's the "let userspace worry about it" that worries me. The same
approach was taken with MC, and it was a pain in the neck for users
until libcamera came along a decade later.
IMHO V4L2 as an API should be fit for purpose and usable with or
without libcamera.
Telling users that they need to go and read the EDID for their display
themselves and configure the mode would be viewed as daft, but the I2C
channel to a display is largely as independent of the display pipeline
as the VCM is to the video pipeline. Perhaps display pipelines aren't
complex enough?

Sorry, just my two-penneth as someone who has to support general
users, rather than just develop platforms or address specific use
cases.

  Dave
Laurent Pinchart Nov. 12, 2021, 10:46 a.m. UTC | #31
Hi Dave,

CC'ing Sakari.

On Fri, Nov 12, 2021 at 10:32:31AM +0000, Dave Stevenson wrote:
> On Thu, 11 Nov 2021 at 22:04, Laurent Pinchart wrote:
> > On Thu, Nov 11, 2021 at 07:30:39PM +0000, Dave Stevenson wrote:
> > > On Thu, 11 Nov 2021 at 16:50, Hans de Goede wrote:
> > > > On 11/11/21 16:51, Dave Stevenson wrote:
> > > > > On Thu, 11 Nov 2021 at 15:23, Hans de Goede wrote:
> > > > >> On 11/11/21 12:18, Daniel Scally wrote:
> > > > >>
> > > > >> <snip>
> > > > >>
> > > > >>>>> One problem I'm experiencing
> > > > >>>>> is that the focus position I set isn't maintained; it holds for a couple
> > > > >>>>> of seconds and then resets to the "normal" focus...this happens when the
> > > > >>>>> .close() callback for the driver is called, which happens right after
> > > > >>>>> the control value is applied. All the other VCM drivers in the kernel
> > > > >>>>> power down on .close() so I did the same.
> > > > >>>> Right, I believe that this is fine though, we expect people to use
> > > > >>>> libcamera with this and once libcamera gets autofocus support, then
> > > > >>>> I would expect libcamera to keep the fd open the entire time while
> > > > >>>> streaming.
> > > > >>>
> > > > >>>
> > > > >>> OK - as long as that's how it works then I agree that this is fine as is
> > > > >>> yes.
> > > > >>
> > > > >> So I've just picked up an old project of mine, called gtk-v4l, which
> > > > >> is a nice, simple v4l2 controls applet, and patched it up to also
> > > > >> work on v4l-subdevs:
> > > > >>
> > > > >> https://github.com/jwrdegoede/gtk-v4l/
> > > > >>
> > > > >> So now you can run:
> > > > >>
> > > > >> sudo gtk-v4l -d /dev/v4l-subdev8
> > > > >>
> > > > >> And it will give you a slider to control the focus; and as
> > > > >> a bonus it keeps the v4l-subdev open, so no more runtime-pm
> > > > >> issue :)
> > > > >
> > > > > Do the lens and sensor share a regulator / enable GPIO?
> > > >
> > > > No, if they did then there would be no runtime-pm issue,
> > > > because then the VCM would not get turned off after
> > > > a v4l2-set command (for a quick test) since then the
> > > > streaming from the sensor would keep the sensor and
> > > > thus the regulator on.
> > >
> > > Registering with the regulator was more so that it restored the
> > > position on sensor power up, independent of whether the lens driver
> > > was opened or not.
> > >
> > > > > I was looking at the same issue for a Sony IMX135 module with AD5398
> > > > > VCM driver [1].
> > > > > In my case they do share an enable GPIO, so using regulator-gpio we
> > > > > can register via regulator_register_notifier for information on when
> > > > > the regulator is powered up. It can then also reset to the last
> > > > > position should the sensor subdev enable the regulator without the
> > > > > lens driver being opened at all.
> > > >
> > > > That sounds like it is relying on board-dependent behavior
> > > > (the enable GPIO and/or regulator being shared) which we don't
> > > > want in the VCM drivers as those are supposed to be board
> > > > agnostic.
> > >
> > > All platforms I've encountered so far have used the same GPIO to
> > > control both VCM and sensor, hence why I asked. The number of use
> > > cases where you want one without the other is incredibly low, and
> > > hardware guys generally don't like wasting GPIOs or having to route
> > > them around the PCB. It's interesting that your platform has separated
> > > them.
> > >
> > > > This really is something which should be fixed in userspace
> > > > where the userspace consumer of the sensor should also always
> > > > open the vcm v4l-subdev.
> > >
> > > Not all use cases involve libcamera, and what you're proposing is
> > > making life very difficult for the simple use cases.
> > > There may be GStreamer folk on board with libcamera, but I've heard no
> > > noises from FFmpeg about libcamera support. V4L2 is still the default
> > > API that users generally care about. Particularly with mono sensors
> > > the output is often directly usable without worrying about the
> > > complexities of ISPs, but you're effectively saying "jump through lots
> > > of hoops or you can't use a VCM with these sensors".
> >
> > Usage of libcamera is certainly not mandatory, but let's not forget that
> > we're dealing with complex devices. In most cases applications will want
> > auto-focus, which will require a userspace camera stack. Even when using
> > manual focus, apart from moving the lens to the infinity position, there
> > isn't much that an application could do without some sort of calibration
> > data. Having to keep the VCM subdev open is the easy part. As long as
> > this is documented properly in the V4L2 API, I don't think it's a big
> > issue.
> 
> You know I've never been a huge fan of Media Controller, but at least
> there you can preconfigure your pipeline via media-ctl and then stream
> with v4l2-ctl. If the VCM isn't powered, then v4l2-ctl becomes largely
> useless as a test tool without now having a second program to hold the
> subdev open (as Hans has found out). The same goes for anything else
> that streams a pre-configured pipeline (eg GStreamer v4l2src or FFmpeg
> v4l2 plugin).
> 
> Preconfigure your lens position via "v4l2-ctl
> --set-ctrl=focus_absolute=X", or have a sensible default in the VCM
> driver config (it describes the hardware, so it could be in DT), have
> the pipeline handle power, and you still have a usable capture device
> through just V4L2. Otherwise you're saying that the powered down
> position of the VCM (wherever that might be) is the best you get.
> 
> > > If userspace has called VIDIOC_STREAMON doesn't that mean they want
> > > the whole entity (as configured) to be powered on and start streaming?
> > > Are you saying that the lens isn't part of that entity? In which case
> > > why does Media Controller include it (and eventually link it to the
> > > sensor) in the media entity?
> > >
> > > Would you advocate making backlight control in DRM a function that
> > > userspace is responsible for independently of the panel pipeline?
> > > There are significant similarities to this situation as the panel
> > > isn't usable without the backlight being powered, same as the sensor
> > > isn't usable without the VCM being powered.
> >
> > Isn't the backlight actually controlled through sysfs separately from
> > the display pipeline ?
> 
> Brightness is controlled via sysfs, same as lens position is set via
> the VCM subdev.
> It allows for an override of the state via sysfs, same as you can have
> userspace open the VCM subdev.
> However drm_panel_enable [1] calls backlight_enable, and
> drm_panel_disable [2] calls backlight_disable for automatic control by
> the framework.
> 
> [1] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_panel.c#L151
> [2] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_panel.c#L183
> 
> > > Sorry, but I just see isolating power control for the VCM from the
> > > sensor in this way to be a very odd design decision. It'd be
> > > interesting to hear other views.
> >
> > Despite the above, I wouldn't oppose powering the VCM automatically when
> > the sensor is streaming, but I'm concerned about corner cases. For
> > instance, one may want to keep the VCM powered when toggling streaming
> > off and then back on. I wouldn't be surprised if there were other needs
> > to have control of VCM power from userspace. I haven't studied the
> > question in detail, though.
> 
> Refcount the users. Opening the subdev counts as one, and streaming
> counts as one. You can now hold the power on if you wish to do so.
> 
> It's the "let userspace worry about it" that worries me. The same
> approach was taken with MC, and it was a pain in the neck for users
> until libcamera came along a decade later.
> IMHO V4L2 as an API should be fit for purpose and usable with or
> without libcamera.

It really depends on the type of device I'm afraid :-) If you want to
capture processed images with a raw Bayer sensor on RPi, you need to
control the ISP, and the 3A algorithms need to run in userspace. For
other types of devices, going straight to the kernel API is easier (and
can sometimes be preferred).

At the end of the day, I don't think it makes much of a difference
though. Once the libcamera API stabilizes, the library gets packaged by
distributions and applications start using it (or possibly even through
pipewire), nobody will complain about MC anymore :-) The important part,
in my opinion, is to handle the complexity somewhere in a framework so
that applications don't have to do so manually.

> Telling users that they need to go and read the EDID for their display
> themselves and configure the mode would be viewed as daft, but the I2C
> channel to a display is largely as independent of the display pipeline
> as the VCM is to the video pipeline. Perhaps display pipelines aren't
> complex enough?

Cameras are too complex :-S

> Sorry, just my two-penneth as someone who has to support general
> users, rather than just develop platforms or address specific use
> cases.

As mentioned above, I certainly don't oppose improving power management
for VCMs, as well as the VCM control API in general, as long as we can
cover all use cases. I'm not familiar enough with the use cases to tell
whether making the kernel side more "clever" would be just fine or could
cause issues.
Andy Shevchenko Nov. 12, 2021, 11:37 a.m. UTC | #32
On Fri, Nov 12, 2021 at 12:46:56PM +0200, Laurent Pinchart wrote:
> On Fri, Nov 12, 2021 at 10:32:31AM +0000, Dave Stevenson wrote:
> > On Thu, 11 Nov 2021 at 22:04, Laurent Pinchart wrote:

> > Sorry, just my two-penneth as someone who has to support general
> > users, rather than just develop platforms or address specific use
> > cases.
> 
> As mentioned above, I certainly don't oppose improving power management
> for VCMs, as well as the VCM control API in general, as long as we can
> cover all use cases. I'm not familiar enough with the use cases to tell
> whether making the kernel side more "clever" would be just fine or could
> cause issues.

Personally I have found the

  kernel <--> library in userspace <--> another library or app

scheme more flexible in many ways:
 - we unburden the kernel from heavy code that has nothing to
   do directly with the HW
 - we nevertheless still allow the kernel ABIs to be used if needed
 - we reduce the burden of ABI evolution by doing it in only
   two places

After all, this kind of scheme might at some point lead us to shift
the 'we don't break user space' paradigm to 'we try hard not to
break user space, and we do not break library ABIs / APIs in
user space'.
Dave Stevenson Nov. 12, 2021, 11:43 a.m. UTC | #33
On Fri, 12 Nov 2021 at 10:47, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
>
> Hi Dave,
>
> CC'ing Sakari.
>
> On Fri, Nov 12, 2021 at 10:32:31AM +0000, Dave Stevenson wrote:
> > On Thu, 11 Nov 2021 at 22:04, Laurent Pinchart wrote:
> > > On Thu, Nov 11, 2021 at 07:30:39PM +0000, Dave Stevenson wrote:
> > > > On Thu, 11 Nov 2021 at 16:50, Hans de Goede wrote:
> > > > > On 11/11/21 16:51, Dave Stevenson wrote:
> > > > > > On Thu, 11 Nov 2021 at 15:23, Hans de Goede wrote:
> > > > > >> On 11/11/21 12:18, Daniel Scally wrote:
> > > > > >>
> > > > > >> <snip>
> > > > > >>
> > > > > >>>>> One problem I'm experiencing
> > > > > >>>>> is that the focus position I set isn't maintained; it holds for a couple
> > > > > >>>>> of seconds and then resets to the "normal" focus...this happens when the
> > > > > >>>>> .close() callback for the driver is called, which happens right after
> > > > > >>>>> the control value is applied. All the other VCM drivers in the kernel
> > > > > >>>>> power down on .close() so I did the same.
> > > > > >>>> Right, I believe that this is fine though, we expect people to use
> > > > > >>>> libcamera with this and once libcamera gets autofocus support, then
> > > > > >>>> I would expect libcamera to keep the fd open the entire time while
> > > > > >>>> streaming.
> > > > > >>>
> > > > > >>>
> > > > > >>> OK - as long as that's how it works then I agree that this is fine as is
> > > > > >>> yes.
> > > > > >>
> > > > > >> So I've just picked up an old project of mine, called gtk-v4l, which
> > > > > >> is a nice, simple v4l2 controls applet, and patched it up to also
> > > > > >> work on v4l-subdevs:
> > > > > >>
> > > > > >> https://github.com/jwrdegoede/gtk-v4l/
> > > > > >>
> > > > > >> So now you can run:
> > > > > >>
> > > > > >> sudo gtk-v4l -d /dev/v4l-subdev8
> > > > > >>
> > > > > >> And it will give you a slider to control the focus; and as
> > > > > >> a bonus it keeps the v4l-subdev open, so no more runtime-pm
> > > > > >> issue :)
> > > > > >
> > > > > > Do the lens and sensor share a regulator / enable GPIO?
> > > > >
> > > > > No, if they did then there would be no runtime-pm issue,
> > > > > because then the VCM would not get turned off after
> > > > > a v4l2-set command (for a quick test) since then the
> > > > > streaming from the sensor would keep the sensor and
> > > > > thus the regulator on.
> > > >
> > > > Registering with the regulator was more so that it restored the
> > > > position on sensor power up, independent of whether the lens driver
> > > > was opened or not.
> > > >
> > > > > > I was looking at the same issue for a Sony IMX135 module with AD5398
> > > > > > VCM driver [1].
> > > > > > In my case they do share an enable GPIO, so using regulator-gpio we
> > > > > > can register via regulator_register_notifier for information on when
> > > > > > the regulator is powered up. It can then also reset to the last
> > > > > > position should the sensor subdev enable the regulator without the
> > > > > > lens driver being opened at all.
> > > > >
> > > > > That sounds like it is relying on board-dependent behavior
> > > > > (the enable GPIO and/or regulator being shared) which we don't
> > > > > want in the VCM drivers as those are supposed to be board
> > > > > agnostic.
> > > >
> > > > All platforms I've encountered so far have used the same GPIO to
> > > > control both VCM and sensor, hence why I asked. The number of use
> > > > cases where you want one without the other is incredibly low, and
> > > > hardware guys generally don't like wasting GPIOs or having to route
> > > > them around the PCB. It's interesting that your platform has separated
> > > > them.
> > > >
> > > > > This really is something which should be fixed in userspace
> > > > > where the userspace consumer of the sensor should also always
> > > > > open the vcm v4l-subdev.
> > > >
> > > > Not all use cases involve libcamera, and what you're proposing is
> > > > making life very difficult for the simple use cases.
> > > > There may be GStreamer folk on board with libcamera, but I've heard no
> > > > noises from FFmpeg about libcamera support. V4L2 is still the default
> > > > API that users generally care about. Particularly with mono sensors
> > > > the output is often directly usable without worrying about the
> > > > complexities of ISPs, but you're effectively saying "jump through lots
> > > > of hoops or you can't use a VCM with these sensors".
> > >
> > > Usage of libcamera is certainly not mandatory, but let's not forget that
> > > we're dealing with complex devices. In most cases applications will want
> > > auto-focus, which will require a userspace camera stack. Even when using
> > > manual focus, apart from moving the lens to the infinity position, there
> > > isn't much that an application could do without some sort of calibration
> > > data. Having to keep the VCM subdev open is the easy part. As long as
> > > this is documented properly in the V4L2 API, I don't think it's a big
> > > issue.
> >
> > You know I've never been a huge fan of Media Controller, but at least
> > there you can preconfigure your pipeline via media-ctl and then stream
> > with v4l2-ctl. If the VCM isn't powered, then v4l2-ctl becomes largely
> > useless as a test tool without now having a second program to hold the
> > subdev open (as Hans has found out). The same goes for anything else
> > that streams a pre-configured pipeline (eg GStreamer v4l2src or FFmpeg
> > v4l2 plugin).
> >
> > Preconfigure your lens position via "v4l2-ctl
> > --set-ctrl=focus_absolute=X", or have a sensible default in the VCM
> > driver config (it describes the hardware, so it could be in DT), have
> > the pipeline handle power, and you still have a usable capture device
> > through just V4L2. Otherwise you're saying that the powered down
> > position of the VCM (wherever that might be) is the best you get.
> >
> > > > If userspace has called VIDIOC_STREAMON doesn't that mean they want
> > > > the whole entity (as configured) to be powered on and start streaming?
> > > > Are you saying that the lens isn't part of that entity? In which case
> > > > why does Media Controller include it (and eventually link it to the
> > > > sensor) in the media entity?
> > > >
> > > > Would you advocate making backlight control in DRM a function that
> > > > userspace is responsible for independently of the panel pipeline?
> > > > There are significant similarities to this situation as the panel
> > > > isn't usable without the backlight being powered, same as the sensor
> > > > isn't usable without the VCM being powered.
> > >
> > > Isn't the backlight actually controlled through sysfs separately from
> > > the display pipeline ?
> >
> > Brightness is controlled via sysfs, same as lens position is set via
> > the VCM subdev.
> > It allows for an override of the state via sysfs, same as you can have
> > userspace open the VCM subdev.
> > However drm_panel_enable [1] calls backlight_enable, and
> > drm_panel_disable [2] calls backlight_disable for automatic control by
> > the framework.
> >
> > [1] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_panel.c#L151
> > [2] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_panel.c#L183
> >
> > > > Sorry, but I just see isolating power control for the VCM from the
> > > > sensor in this way to be a very odd design decision. It'd be
> > > > interesting to hear other views.
> > >
> > > Despite the above, I wouldn't oppose powering the VCM automatically when
> > > the sensor is streaming, but I'm concerned about corner cases. For
> > > instance, one may want to keep the VCM powered when toggling streaming
> > > off and then back on. I wouldn't be surprised if there were other needs
> > > to have control of VCM power from userspace. I haven't studied the
> > > question in detail, though.
> >
> > Refcount the users. Opening the subdev counts as one, and streaming
> > counts as one. You can now hold the power on if you wish to do so.
> >
> > It's the "let userspace worry about it" that worries me. The same
> > approach was taken with MC, and it was a pain in the neck for users
> > until libcamera came along a decade later.
> > IMHO V4L2 as an API should be fit for purpose and usable with or
> > without libcamera.
>
> It really depends on the type of device I'm afraid :-) If you want to
> capture processed images with a raw Bayer sensor on RPi, you need to
> control the ISP, and the 3A algorithms need to run in userspace. For
> other types of devices, going straight to the kernel API is easier (and
> can sometimes be preferred).

But you're forcing a YUYV sensor with VCM to jump through hoops.
Or a mono sensor with onboard AE (eg most of the OnSemi global shutter
sensors) and external VCM. Focus control there would have uses in CV
applications.

> At the end of the day, I don't think it makes much of a difference
> though. Once the libcamera API stabilizes, the library gets packaged by
> distributions and applications start using it (or possibly even through
> pipewire), nobody will complain about MC anymore :-) The important part,
> in my opinion, is to handle the complexity somewhere in a framework so
> that applications don't have to do so manually.

Has anyone approached FFmpeg then about doing a libcamera integration?
Or is the V4L2 compatibility layer being uprated from "best efforts"
to "complete support"?
Until ALL applications support libcamera somehow, then plain old V4L2
still has a significant place.

And don't get me started on getting all the examples on the internet
updated to reflect the new and shiny way of doing things. Have a look
at the Pi forums at the moment as we've switched to using libcamera
with Bullseye, and all the complaints of "but I followed these random
instructions (for V4L2) and it doesn't work".

> > Telling users that they need to go and read the EDID for their display
> > themselves and configure the mode would be viewed as daft, but the I2C
> > channel to a display is largely as independent of the display pipeline
> > as the VCM is to the video pipeline. Perhaps display pipelines aren't
> > complex enough?
>
> Cameras are too complex :-S

*Some* cameras are too complex.
Some display pipelines are pretty hideous as well.

> > Sorry, just my two-penneth as someone who has to support general
> > users, rather than just develop platforms or address specific use
> > cases.
>
> As mentioned above, I certainly don't oppose improving power management
> for VCMs, as well as the VCM control API in general, as long as we can
> cover all use cases. I'm not familiar enough with the use cases to tell
> whether making the kernel side more "clever" would be just fine or could
> cause issues.

Can I request that it is seriously considered then, rather than just
documenting it as userspace's responsibility?

OK, I'll stop pushing my point now.

  Dave
Sakari Ailus Nov. 12, 2021, 12:23 p.m. UTC | #34
Hi Laurent, Dave,

On Fri, Nov 12, 2021 at 12:46:56PM +0200, Laurent Pinchart wrote:
> Hi Dave,
> 
> CC'ing Sakari.
> 
> On Fri, Nov 12, 2021 at 10:32:31AM +0000, Dave Stevenson wrote:
> > On Thu, 11 Nov 2021 at 22:04, Laurent Pinchart wrote:
> > > On Thu, Nov 11, 2021 at 07:30:39PM +0000, Dave Stevenson wrote:
> > > > On Thu, 11 Nov 2021 at 16:50, Hans de Goede wrote:
> > > > > On 11/11/21 16:51, Dave Stevenson wrote:
> > > > > > On Thu, 11 Nov 2021 at 15:23, Hans de Goede wrote:
> > > > > >> On 11/11/21 12:18, Daniel Scally wrote:
> > > > > >>
> > > > > >> <snip>
> > > > > >>
> > > > > >>>>> One problem I'm experiencing
> > > > > >>>>> is that the focus position I set isn't maintained; it holds for a couple
> > > > > >>>>> of seconds and then resets to the "normal" focus...this happens when the
> > > > > >>>>> .close() callback for the driver is called, which happens right after
> > > > > >>>>> the control value is applied. All the other VCM drivers in the kernel
> > > > > >>>>> power down on .close() so I did the same>
> > > > > >>>> Right, I believe that this is fine though, we expect people to use
> > > > > >>>> libcamera with this and once libcamera gets autofocus support, then
> > > > > >>>> I would expect libcamera to keep the fd open the entire time while
> > > > > >>>> streaming.
> > > > > >>>
> > > > > >>>
> > > > > >>> OK - as long as that's how it works then I agree that this is fine as is
> > > > > >>> yes.
> > > > > >>
> > > > > >> So I've just picked up an old project of mine, called gtk-v4l which
> > > > > >> is a nice simply v4l2 controls applet and patches it up to also
> > > > > >> work on v4l-subdevs:
> > > > > >>
> > > > > >> https://github.com/jwrdegoede/gtk-v4l/
> > > > > >>
> > > > > >> So now you can run:
> > > > > >>
> > > > > >> sudo gtk-v4l -d /dev/v4l-subdev8
> > > > > >>
> > > > > >> And it will give you a slider to control the focus; and as
> > > > > >> a bonus it keeps the v4l-subdev open, so no more runtime-pm
> > > > > >> issue :)
> > > > > >
> > > > > > Do the lens and sensor share a regulator / enable GPIO?
> > > > >
> > > > > No, if they did then there would be no runtime-pm issue,
> > > > > because then the VCM would not get turned off after
> > > > > a v4l2-set command (for a quick test) since then the
> > > > > streaming from the sensor would keep the sensor and
> > > > > thus the regulator on.
> > > >
> > > > Registering with the regulator was more so that it restored the
> > > > position on sensor power up, independent of whether the lens driver
> > > > was opened or not.
> > > >
> > > > > > I was looking at the same issue for a Sony IMX135 module with AD5398
> > > > > > VCM driver [1].
> > > > > > In my case they do share an enable GPIO, so using regulator-gpio we
> > > > > > can register via regulator_register_notifier for information on when
> > > > > > the regulator is powered up. It can then also reset to the last
> > > > > > position should the sensor subdev enable the regulator without the
> > > > > > lens driver being opened at all.
> > > > >
> > > > > That sounds like it is relying on board-dependent behavior
> > > > > (the enable GPIO and/or regulator being shared) which we don't
> > > > > want in the VCM drivers as those are supposed to be board
> > > > > agnostic.
> > > >
> > > > All platforms I've encountered so far have used the same GPIO to
> > > > control both VCM and sensor, hence why I asked. The number of use
> > > > cases where you want one without the other is incredibly low, and
> > > > hardware guys generally don't like wasting GPIOs or having to route
> > > > them around the PCB. It's interesting that your platform has separated
> > > > them.
> > > >
> > > > > This really is something which should be fixed in userspace
> > > > > where the userspace consumer of the sensor should also always
> > > > > open the vcm v4l-subdev.
> > > >
> > > > Not all use cases involve libcamera, and what you're proposing is
> > > > making life very difficult for the simple use cases.
> > > > There may be GStreamer folk on board with libcamera, but I've heard no
> > > > noises from FFmpeg about libcamera support. V4L2 is still the default
> > > > API that users generally care about. Particularly with mono sensors
> > > > the output is often directly usable without worrying about the
> > > > complexities of ISPs, but you're effectively saying "jump through lots
> > > > of hoops or you can't use a VCM with these sensors".
> > >
> > > Usage of libcamera is certainly not mandatory, but let's not forget that
> > > we're dealing with complex devices. In most cases applications will want
> > > auto-focus, which will require a userspace camera stack. Even when using
> > > manual focus, apart from moving the lens to the infinity position, there
> > > isn't much that an application could do without some sort of calibration
> > > data. Having to keep the VCM subdev open is the easy part. As long as
> > > this is documented properly in the V4L2 API, I don't think it's a big
> > > issue.
> > 
> > You know I've never been a huge fan of Media Controller, but at least
> > there you can preconfigure your pipeline via media-ctl and then stream
> > with v4l2-ctl. If the VCM isn't powered, then v4l2-ctl becomes largely
> > useless as a test tool without now having a second program to hold the
> > subdev open (as Hans has found out). The same goes for anything else
> > that streams a pre-configured pipeline (eg GStreamer v4l2src or FFmpeg
> > v4l2 plugin).
> > 
> > Preconfigure your lens position via "v4l2-ctl
> > --set-ctrl=focus_absolute=X", or have a sensible default in the VCM
> > driver config (it describes the hardware, so it could be in DT), have
> > the pipeline handle power, and you still have a usable capture device
> > through just V4L2. Otherwise you're saying that the powered down
> > position of the VCM (wherever that might be) is the best you get.

Note that on some camera modules, the sensor and the VCM power-up and
power-down sequences are intertwined. So if you power up one without the
other, powering up the other later on may no longer be possible.

I have to say I don't know how common such camera modules are nowadays. My
guess is that they're rare.

> > 
> > > > If userspace has called VIDIOC_STREAMON doesn't that mean they want
> > > > the whole entity (as configured) to be powered on and start streaming?
> > > > Are you saying that the lens isn't part of that entity? In which case
> > > > why does Media Controller include it (and eventually link it to the
> > > > sensor) in the media entity?
> > > >
> > > > Would you advocate making backlight control in DRM a function that
> > > > userspace is responsible for independently of the panel pipeline?
> > > > There are significant similarities to this situation as the panel
> > > > isn't usable without the backlight being powered, same as the sensor
> > > > isn't usable without the VCM being powered.
> > >
> > > Isn't the backlight actually controlled through sysfs separately from
> > > the display pipeline ?
> > 
> > Brightness is controlled via sysfs, same as lens position is set via
> > the VCM subdev.
> > It allows for an override of the state via sysfs, same as you can have
> > userspace open the VCM subdev.
> > However drm_panel_enable [1] calls backlight_enable, and
> > drm_panel_disable [2] calls backlight_disable for automatic control by
> > the framework.
> > 
> > [1] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_panel.c#L151
> > [2] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_panel.c#L183
> > 
> > > > Sorry, but I just see isolating power control for the VCM from the
> > > > sensor in this way to be a very odd design decision. It'd be
> > > > interesting to hear other views.
> > >
> > > Despite the above, I wouldn't oppose powering the VCM automatically when
> > > the sensor is streaming, but I'm concerned about corner cases. For
> > > instance, one may want to keep the VCM powered when toggling streaming
> > > off and then back on. I wouldn't be surprised if there were other need
> > > to have control of VCM power from userspace. I haven't studied the
> > > question in details though.
> > 
> > Refcount the users. Opening the subdev counts as one, and streaming
> > counts as one. You can now hold the power on if you wish to do so.
> > 
> > It's the "let userspace worry about it" that worries me. The same
> > approach was taken with MC, and it was a pain in the neck for users
> > until libcamera comes along a decade later.
> > IMHO V4L2 as an API should be fit for purpose and usable with or
> > without libcamera.
> 
> It really depends on the type of device I'm afraid :-) If you want to
> capture processed image with a raw bayer sensor on RPi, you need to
> control the ISP, and the 3A algorithms need to run in userspace. For
> other types of devices, going straight to the kernel API is easier (and
> can sometimes be preferred).
> 
> At the end of the day, I don't think it makes much of a difference
> though. Once the libcamera API stabilizes, the library gets packaged by
> distributions and applications start using it (or possibly even through
> pipewire), nobody will complain about MC anymore :-) The important part,
> in my opinion, is to handle the complexity somewhere in a framework so
> that applications don't have to do so manually.

Agreed.

The user space would need to open the device node of the lens, and a
non-test user space would in any case keep the device node open in order to
change lens drive current.

In the kernel I think we'd probably need a new (subdev) API for just that.
Poking other devices with runtime PM calls isn't the right way to do it.

This is a trivial matter compared to video node centric vs. MC centric
case.

> As mentioned above, I certainly don't oppose improving power management
> for VCMs, as well as the VCM control API in general, as long as we can
> cover all use cases. I'm not familiar enough with the use cases to tell
> whether making the kernel side more "clever" would be just fine or could
> cause issues.

Usually if you're using a VCM then you need a way to focus the image. I
guess VCMs have been getting better, so you might be able to get decent
images without focussing, in the VCM resting position, if the target is
distant enough. But that's just a guess. So theoretically there's a
possibility of saving a little bit of power.
Kieran Bingham Nov. 12, 2021, 5:51 p.m. UTC | #35
Quoting Laurent Pinchart (2021-11-12 10:46:56)
> Hi Dave,
> 
> CC'ing Sakari.
> 
> On Fri, Nov 12, 2021 at 10:32:31AM +0000, Dave Stevenson wrote:
> > On Thu, 11 Nov 2021 at 22:04, Laurent Pinchart wrote:
> > > On Thu, Nov 11, 2021 at 07:30:39PM +0000, Dave Stevenson wrote:

<big snip>

> > Refcount the users. Opening the subdev counts as one, and streaming
> > counts as one. You can now hold the power on if you wish to do so.
> > 
> > It's the "let userspace worry about it" that worries me. The same
> > approach was taken with MC, and it was a pain in the neck for users
> > until libcamera comes along a decade later.
> > IMHO V4L2 as an API should be fit for purpose and usable with or
> > without libcamera.
> 
> It really depends on the type of device I'm afraid :-) If you want to
> capture processed image with a raw bayer sensor on RPi, you need to
> control the ISP, and the 3A algorithms need to run in userspace. For
> other types of devices, going straight to the kernel API is easier (and
> can sometimes be preferred).
> 
> At the end of the day, I don't think it makes much of a difference
> though. Once the libcamera API stabilizes, the library gets packaged by
> distributions and applications start using it (or possibly even through
> pipewire), nobody will complain about MC anymore :-) The important part,

I don't really want to pull this thread further away from $SUBJECT .. but:

Unfortunately, I don't think that's true.

We've still got a long way to go!

libcamera isn't enough to cover all MC use cases. The RPi for instance
has the ability to capture HDMI in through the CSI2 receiver with a
TC358743 or such. This won't need an IPA or 3A, but might want to go
through the ISP for scaling or format conversions...

Some time ago, I started to explore how we could handle 'easily'
capturing non-camera devices. But it was not in scope for libcamera.

> in my opinion, is to handle the complexity somewhere in a framework so
> that applications don't have to do so manually.

Yes, the complexity needs to be handled somewhere. Applications
should be able to work with a generic interface and get their video
frames. But right now - I don't think applications have this, and key
areas needed for supporting that are not under development or even
consideration yet as far as I can tell.

--
Kieran

> -- 
> Regards,
> 
> Laurent Pinchart
Laurent Pinchart Nov. 15, 2021, noon UTC | #36
Hi Sakari,

On Fri, Nov 12, 2021 at 02:23:12PM +0200, Sakari Ailus wrote:
> On Fri, Nov 12, 2021 at 12:46:56PM +0200, Laurent Pinchart wrote:
> > On Fri, Nov 12, 2021 at 10:32:31AM +0000, Dave Stevenson wrote:
> > > On Thu, 11 Nov 2021 at 22:04, Laurent Pinchart wrote:
> > > > On Thu, Nov 11, 2021 at 07:30:39PM +0000, Dave Stevenson wrote:
> > > > > On Thu, 11 Nov 2021 at 16:50, Hans de Goede wrote:
> > > > > > On 11/11/21 16:51, Dave Stevenson wrote:
> > > > > > > On Thu, 11 Nov 2021 at 15:23, Hans de Goede wrote:
> > > > > > >> On 11/11/21 12:18, Daniel Scally wrote:
> > > > > > >>
> > > > > > >> <snip>
> > > > > > >>
> > > > > > >>>>> One problem I'm experiencing
> > > > > > >>>>> is that the focus position I set isn't maintained; it holds for a couple
> > > > > > >>>>> of seconds and then resets to the "normal" focus...this happens when the
> > > > > > >>>>> .close() callback for the driver is called, which happens right after
> > > > > > >>>>> the control value is applied. All the other VCM drivers in the kernel
> > > > > > >>>>> power down on .close() so I did the same>
> > > > > > >>>> Right, I believe that this is fine though, we expect people to use
> > > > > > >>>> libcamera with this and once libcamera gets autofocus support, then
> > > > > > >>>> I would expect libcamera to keep the fd open the entire time while
> > > > > > >>>> streaming.
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> OK - as long as that's how it works then I agree that this is fine as is
> > > > > > >>> yes.
> > > > > > >>
> > > > > > >> So I've just picked up an old project of mine, called gtk-v4l which
> > > > > > >> is a nice simply v4l2 controls applet and patches it up to also
> > > > > > >> work on v4l-subdevs:
> > > > > > >>
> > > > > > >> https://github.com/jwrdegoede/gtk-v4l/
> > > > > > >>
> > > > > > >> So now you can run:
> > > > > > >>
> > > > > > >> sudo gtk-v4l -d /dev/v4l-subdev8
> > > > > > >>
> > > > > > >> And it will give you a slider to control the focus; and as
> > > > > > >> a bonus it keeps the v4l-subdev open, so no more runtime-pm
> > > > > > >> issue :)
> > > > > > >
> > > > > > > Do the lens and sensor share a regulator / enable GPIO?
> > > > > >
> > > > > > No, if they did then there would be no runtime-pm issue,
> > > > > > because then the VCM would not get turned off after
> > > > > > a v4l2-set command (for a quick test) since then the
> > > > > > streaming from the sensor would keep the sensor and
> > > > > > thus the regulator on.
> > > > >
> > > > > Registering with the regulator was more so that it restored the
> > > > > position on sensor power up, independent of whether the lens driver
> > > > > was opened or not.
> > > > >
> > > > > > > I was looking at the same issue for a Sony IMX135 module with AD5398
> > > > > > > VCM driver [1].
> > > > > > > In my case they do share an enable GPIO, so using regulator-gpio we
> > > > > > > can register via regulator_register_notifier for information on when
> > > > > > > the regulator is powered up. It can then also reset to the last
> > > > > > > position should the sensor subdev enable the regulator without the
> > > > > > > lens driver being opened at all.
> > > > > >
> > > > > > That sounds like it is relying on board-dependent behavior
> > > > > > (the enable GPIO and/or regulator being shared) which we don't
> > > > > > want in the VCM drivers as those are supposed to be board
> > > > > > agnostic.
> > > > >
> > > > > All platforms I've encountered so far have used the same GPIO to
> > > > > control both VCM and sensor, hence why I asked. The number of use
> > > > > cases where you want one without the other is incredibly low, and
> > > > > hardware guys generally don't like wasting GPIOs or having to route
> > > > > them around the PCB. It's interesting that your platform has separated
> > > > > them.
> > > > >
> > > > > > This really is something which should be fixed in userspace
> > > > > > where the userspace consumer of the sensor should also always
> > > > > > open the vcm v4l-subdev.
> > > > >
> > > > > Not all use cases involve libcamera, and what you're proposing is
> > > > > making life very difficult for the simple use cases.
> > > > > There may be GStreamer folk on board with libcamera, but I've heard no
> > > > > noises from FFmpeg about libcamera support. V4L2 is still the default
> > > > > API that users generally care about. Particularly with mono sensors
> > > > > the output is often directly usable without worrying about the
> > > > > complexities of ISPs, but you're effectively saying "jump through lots
> > > > > of hoops or you can't use a VCM with these sensors".
> > > >
> > > > Usage of libcamera is certainly not mandatory, but let's not forget that
> > > > we're dealing with complex devices. In most cases applications will want
> > > > auto-focus, which will require a userspace camera stack. Even when using
> > > > manual focus, apart from moving the lens to the infinity position, there
> > > > isn't much that an application could do without some sort of calibration
> > > > data. Having to keep the VCM subdev open is the easy part. As long as
> > > > this is documented properly in the V4L2 API, I don't think it's a big
> > > > issue.
> > > 
> > > You know I've never been a huge fan of Media Controller, but at least
> > > there you can preconfigure your pipeline via media-ctl and then stream
> > > with v4l2-ctl. If the VCM isn't powered, then v4l2-ctl becomes largely
> > > useless as a test tool without now having a second program to hold the
> > > subdev open (as Hans has found out). The same goes for anything else
> > > that streams a pre-configured pipeline (eg GStreamer v4l2src or FFmpeg
> > > v4l2 plugin).
> > > 
> > > Preconfigure your lens position via "v4l2-ctl
> > > --set-ctrl=focus_absolute=X", or have a sensible default in the VCM
> > > driver config (it describes the hardware, so it could be in DT), have
> > > the pipeline handle power, and you still have a usable capture device
> > > through just V4L2. Otherwise you're saying that the powered down
> > > position of the VCM (wherever that might be) is the best you get.
> 
> Note that on some camera modules, the sensor and the VCM power-up and
> power-down sequences are intertwined. So if you power up one without the
> other, powering up the other later on may no longer be possible.
> 
> I have to say I don't know how common such camera modules are nowadays. My
> guess is that they're rare.

If I recall correctly, this was the case on the N900 or N9. Hopefully we
won't have to face similar devices in the future, but... It's a problem
that needs to be handled in kernel space anyway.

> > > > > If userspace has called VIDIOC_STREAMON doesn't that mean they want
> > > > > the whole entity (as configured) to be powered on and start streaming?
> > > > > Are you saying that the lens isn't part of that entity? In which case
> > > > > why does Media Controller include it (and eventually link it to the
> > > > > sensor) in the media entity?
> > > > >
> > > > > Would you advocate making backlight control in DRM a function that
> > > > > userspace is responsible for independently of the panel pipeline?
> > > > > There are significant similarities to this situation as the panel
> > > > > isn't usable without the backlight being powered, same as the sensor
> > > > > isn't usable without the VCM being powered.
> > > >
> > > > Isn't the backlight actually controlled through sysfs separately from
> > > > the display pipeline ?
> > > 
> > > Brightness is controlled via sysfs, same as lens position is set via
> > > the VCM subdev.
> > > It allows for an override of the state via sysfs, same as you can have
> > > userspace open the VCM subdev.
> > > However drm_panel_enable [1] calls backlight_enable, and
> > > drm_panel_disable [2] calls backlight_disable for automatic control by
> > > the framework.
> > > 
> > > [1] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_panel.c#L151
> > > [2] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_panel.c#L183
> > > 
> > > > > Sorry, but I just see isolating power control for the VCM from the
> > > > > sensor in this way to be a very odd design decision. It'd be
> > > > > interesting to hear other views.
> > > >
> > > > Despite the above, I wouldn't oppose powering the VCM automatically when
> > > > the sensor is streaming, but I'm concerned about corner cases. For
> > > > instance, one may want to keep the VCM powered when toggling streaming
> > > > off and then back on. I wouldn't be surprised if there were other need
> > > > to have control of VCM power from userspace. I haven't studied the
> > > > question in details though.
> > > 
> > > Refcount the users. Opening the subdev counts as one, and streaming
> > > counts as one. You can now hold the power on if you wish to do so.
> > > 
> > > It's the "let userspace worry about it" that worries me. The same
> > > approach was taken with MC, and it was a pain in the neck for users
> > > until libcamera comes along a decade later.
> > > IMHO V4L2 as an API should be fit for purpose and usable with or
> > > without libcamera.
> > 
> > It really depends on the type of device I'm afraid :-) If you want to
> > capture processed image with a raw bayer sensor on RPi, you need to
> > control the ISP, and the 3A algorithms need to run in userspace. For
> > other types of devices, going straight to the kernel API is easier (and
> > can sometimes be preferred).
> > 
> > At the end of the day, I don't think it makes much of a difference
> > though. Once the libcamera API stabilizes, the library gets packaged by
> > distributions and applications start using it (or possibly even through
> > pipewire), nobody will complain about MC anymore :-) The important part,
> > in my opinion, is to handle the complexity somewhere in a framework so
> > that applications don't have to do so manually.
> 
> Agreed.
> 
> The user space would need to open the device node of the lens, and a
> non-test user space would in any case keep the device node open in order to
> change lens drive current.
> 
> In the kernel I think we'd need probably a new (subdev) API for just that.
> Poking other devices with runtime PM calls isn't the right way to do it.

Do you mean a kernel API to instruct the lens driver to power the device
up/down when streaming starts/stops ? Maybe that's a good use case for
.s_power() ?

> This is a trivial matter compared to video node centric vs. MC centric
> case.
> 
> > As mentioned above, I certainly don't oppose improving power management
> > for VCMs, as well as the VCM control API in general, as long as we can
> > cover all use cases. I'm not familiar enough with the use cases to tell
> > whether making the kernel side more "clever" would be just fine or could
> > cause issues.
> 
> Usually if you're using a VCM then you need a way to focus the image. I
> guess VCMs have been getting better so you might be able to get decent
> images without focussing in VCM resting position if the target is distant
> enough, too. But that's just a guess. So theoretically there's possibility
> of saving a little bit of power.

I was thinking about that too, but it seems to be quite a corner case.
It could also be better handled in the kernel by powering the device off
when the lens position is set to infinity (although we'd likely need
some uAPI extension, to set a power vs. latency hint).
Laurent Pinchart Nov. 15, 2021, 1:08 p.m. UTC | #37
Hi Kieran,

On Fri, Nov 12, 2021 at 05:51:57PM +0000, Kieran Bingham wrote:
> Quoting Laurent Pinchart (2021-11-12 10:46:56)
> > On Fri, Nov 12, 2021 at 10:32:31AM +0000, Dave Stevenson wrote:
> > > On Thu, 11 Nov 2021 at 22:04, Laurent Pinchart wrote:
> > > > On Thu, Nov 11, 2021 at 07:30:39PM +0000, Dave Stevenson wrote:
> 
> <big snip>
> 
> > > Refcount the users. Opening the subdev counts as one, and streaming
> > > counts as one. You can now hold the power on if you wish to do so.
> > > 
> > > It's the "let userspace worry about it" that worries me. The same
> > > approach was taken with MC, and it was a pain in the neck for users
> > > until libcamera comes along a decade later.
> > > IMHO V4L2 as an API should be fit for purpose and usable with or
> > > without libcamera.
> > 
> > It really depends on the type of device I'm afraid :-) If you want to
> > capture processed image with a raw bayer sensor on RPi, you need to
> > control the ISP, and the 3A algorithms need to run in userspace. For
> > other types of devices, going straight to the kernel API is easier (and
> > can sometimes be preferred).
> > 
> > At the end of the day, I don't think it makes much of a difference
> > though. Once the libcamera API stabilizes, the library gets packaged by
> > distributions and applications start using it (or possibly even through
> > pipewire), nobody will complain about MC anymore :-) The important part,
> 
> I don't really want to pull this thread further away from $SUBJECT .. but:
> 
> Unfortunately, I don't think that's true.
> 
> We've still got a long way to go!
> 
> libcamera isn't enough to cover all MC use cases. The RPi for instance
> has the ability to capture HDMI in through the CSI2 receiver with a
> TC358743 or such. This won't need an IPA or 3a, but might want to go
> through the ISP for scaling or format conversions...

I was indeed mostly thinking about the camera use cases, as we were
discussing lens control. There's certainly more than that, with a need
to at least configure the unicam MC pipeline to capture from, for
instance, an HDMI-to-CSI-2 converter. This isn't something that
libcamera was designed for, and I don't know whether the use case could
be retrofitted, or if a different userspace framework would be better.

> Some time ago, I started to explore how we could handle 'easily'
> capturing non-camera devices. But it was not in scope for libcamera.
> 
> > in my opinion, is to handle the complexity somewhere in a framework so
> > that applications don't have to do so manually.
> 
> Yes, the complexity needs to be handled somewhere. Applications
> should be able to work with a generic interface and get their video
> frames. But right now - I don't think applications have this, and key
> areas needed for supporting that are not under development or even
> consideration yet as far as I can tell.
Laurent Pinchart Nov. 15, 2021, 1:21 p.m. UTC | #38
Hi Dave,

On Fri, Nov 12, 2021 at 11:43:55AM +0000, Dave Stevenson wrote:
> On Fri, 12 Nov 2021 at 10:47, Laurent Pinchart wrote:
> > On Fri, Nov 12, 2021 at 10:32:31AM +0000, Dave Stevenson wrote:
> > > On Thu, 11 Nov 2021 at 22:04, Laurent Pinchart wrote:
> > > > On Thu, Nov 11, 2021 at 07:30:39PM +0000, Dave Stevenson wrote:
> > > > > On Thu, 11 Nov 2021 at 16:50, Hans de Goede wrote:
> > > > > > On 11/11/21 16:51, Dave Stevenson wrote:
> > > > > > > On Thu, 11 Nov 2021 at 15:23, Hans de Goede wrote:
> > > > > > >> On 11/11/21 12:18, Daniel Scally wrote:
> > > > > > >>
> > > > > > >> <snip>
> > > > > > >>
> > > > > > >>>>> One problem I'm experiencing
> > > > > > >>>>> is that the focus position I set isn't maintained; it holds for a couple
> > > > > > >>>>> of seconds and then resets to the "normal" focus...this happens when the
> > > > > > >>>>> .close() callback for the driver is called, which happens right after
> > > > > > >>>>> the control value is applied. All the other VCM drivers in the kernel
> > > > > > >>>>> power down on .close() so I did the same>
> > > > > > >>>> Right, I believe that this is fine though, we expect people to use
> > > > > > >>>> libcamera with this and once libcamera gets autofocus support, then
> > > > > > >>>> I would expect libcamera to keep the fd open the entire time while
> > > > > > >>>> streaming.
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> OK - as long as that's how it works then I agree that this is fine as is
> > > > > > >>> yes.
> > > > > > >>
> > > > > > >> So I've just picked up an old project of mine, called gtk-v4l which
> > > > > > >> is a nice simply v4l2 controls applet and patches it up to also
> > > > > > >> work on v4l-subdevs:
> > > > > > >>
> > > > > > >> https://github.com/jwrdegoede/gtk-v4l/
> > > > > > >>
> > > > > > >> So now you can run:
> > > > > > >>
> > > > > > >> sudo gtk-v4l -d /dev/v4l-subdev8
> > > > > > >>
> > > > > > >> And it will give you a slider to control the focus; and as
> > > > > > >> a bonus it keeps the v4l-subdev open, so no more runtime-pm
> > > > > > >> issue :)
> > > > > > >
> > > > > > > Do the lens and sensor share a regulator / enable GPIO?
> > > > > >
> > > > > > No, if they did then there would be no runtime-pm issue,
> > > > > > because then the VCM would not get turned off after
> > > > > > a v4l2-set command (for a quick test) since then the
> > > > > > streaming from the sensor would keep the sensor and
> > > > > > thus the regulator on.
> > > > >
> > > > > Registering with the regulator was more so that it restored the
> > > > > position on sensor power up, independent of whether the lens driver
> > > > > was opened or not.
> > > > >
> > > > > > > I was looking at the same issue for a Sony IMX135 module with AD5398
> > > > > > > VCM driver [1].
> > > > > > > In my case they do share an enable GPIO, so using regulator-gpio we
> > > > > > > can register via regulator_register_notifier for information on when
> > > > > > > the regulator is powered up. It can then also reset to the last
> > > > > > > position should the sensor subdev enable the regulator without the
> > > > > > > lens driver being opened at all.
> > > > > >
> > > > > > That sounds like it is relying on board-dependent behavior
> > > > > > (the enable GPIO and/or regulator being shared) which we don't
> > > > > > want in the VCM drivers as those are supposed to be board
> > > > > > agnostic.
> > > > >
> > > > > All platforms I've encountered so far have used the same GPIO to
> > > > > control both VCM and sensor, hence why I asked. The number of use
> > > > > cases where you want one without the other is incredibly low, and
> > > > > hardware guys generally don't like wasting GPIOs or having to route
> > > > > them around the PCB. It's interesting that your platform has separated
> > > > > them.
> > > > >
> > > > > > This really is something which should be fixed in userspace
> > > > > > where the userspace consumer of the sensor should also always
> > > > > > open the vcm v4l-subdev.
> > > > >
> > > > > Not all use cases involve libcamera, and what you're proposing is
> > > > > making life very difficult for the simple use cases.
> > > > > There may be GStreamer folk on board with libcamera, but I've heard no
> > > > > noises from FFmpeg about libcamera support. V4L2 is still the default
> > > > > API that users generally care about. Particularly with mono sensors
> > > > > the output is often directly usable without worrying about the
> > > > > complexities of ISPs, but you're effectively saying "jump through lots
> > > > > of hoops or you can't use a VCM with these sensors".
> > > >
> > > > Usage of libcamera is certainly not mandatory, but let's not forget that
> > > > we're dealing with complex devices. In most cases applications will want
> > > > auto-focus, which will require a userspace camera stack. Even when using
> > > > manual focus, apart from moving the lens to the infinity position, there
> > > > isn't much that an application could do without some sort of calibration
> > > > data. Having to keep the VCM subdev open is the easy part. As long as
> > > > this is documented properly in the V4L2 API, I don't think it's a big
> > > > issue.
> > >
> > > You know I've never been a huge fan of Media Controller, but at least
> > > there you can preconfigure your pipeline via media-ctl and then stream
> > > with v4l2-ctl. If the VCM isn't powered, then v4l2-ctl becomes largely
> > > useless as a test tool without now having a second program to hold the
> > > subdev open (as Hans has found out). The same goes for anything else
> > > that streams a pre-configured pipeline (eg GStreamer v4l2src or FFmpeg
> > > v4l2 plugin).
> > >
> > > Preconfigure your lens position via "v4l2-ctl
> > > --set-ctrl=focus_absolute=X", or have a sensible default in the VCM
> > > driver config (it describes the hardware, so it could be in DT), have
> > > the pipeline handle power, and you still have a usable capture device
> > > through just V4L2. Otherwise you're saying that the powered down
> > > position of the VCM (wherever that might be) is the best you get.
> > >
> > > > > If userspace has called VIDIOC_STREAMON doesn't that mean they want
> > > > > the whole entity (as configured) to be powered on and start streaming?
> > > > > Are you saying that the lens isn't part of that entity? In which case
> > > > > why does Media Controller include it (and eventually link it to the
> > > > > sensor) in the media entity?
> > > > >
> > > > > Would you advocate making backlight control in DRM a function that
> > > > > userspace is responsible for independently of the panel pipeline?
> > > > > There are significant similarities to this situation as the panel
> > > > > isn't usable without the backlight being powered, same as the sensor
> > > > > isn't usable without the VCM being powered.
> > > >
> > > > Isn't the backlight actually controlled through sysfs separately from
> > > > the display pipeline ?
> > >
> > > Brightness is controlled via sysfs, same as lens position is set via
> > > the VCM subdev.
> > > It allows for an override of the state via sysfs, same as you can have
> > > userspace open the VCM subdev.
> > > However drm_panel_enable [1] calls backlight_enable, and
> > > drm_panel_disable [2] calls backlight_disable for automatic control by
> > > the framework.
> > >
> > > [1] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_panel.c#L151
> > > [2] https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/drm_panel.c#L183
> > >
> > > > > Sorry, but I just see isolating power control for the VCM from the
> > > > > sensor in this way to be a very odd design decision. It'd be
> > > > > interesting to hear other views.
> > > >
> > > > Despite the above, I wouldn't oppose powering the VCM automatically when
> > > > the sensor is streaming, but I'm concerned about corner cases. For
> > > > instance, one may want to keep the VCM powered when toggling streaming
> > > > off and then back on. I wouldn't be surprised if there were other needs
> > > > to have control of VCM power from userspace. I haven't studied the
> > > > question in detail though.
> > >
> > > Refcount the users. Opening the subdev counts as one, and streaming
> > > counts as one. You can now hold the power on if you wish to do so.
> > >
> > > It's the "let userspace worry about it" that worries me. The same
> > > approach was taken with MC, and it was a pain in the neck for users
> > > until libcamera came along a decade later.
> > > IMHO V4L2 as an API should be fit for purpose and usable with or
> > > without libcamera.
> >
> > It really depends on the type of device I'm afraid :-) If you want to
> > capture processed image with a raw bayer sensor on RPi, you need to
> > control the ISP, and the 3A algorithms need to run in userspace. For
> > other types of devices, going straight to the kernel API is easier (and
> > can sometimes be preferred).
> 
> But you're forcing a YUYV sensor with VCM to jump through hoops.

Doesn't the YUV sensor with VCM control the VCM from the sensor (or
include a VCM controller in the sensor chip) ? The YUV camera modules
I've seen so far don't require manual power management of the VCM.

> Or a mono sensor with onboard AE (eg most of the OnSemi global shutter
> sensors) and external VCM. Focus control there would have uses in CV
> applications.

That's a valid use case, but I'd still expect it to go through libcamera
(at least on platforms that have an ISP).

> > At the end of the day, I don't think it makes much of a difference
> > though. Once the libcamera API stabilizes, the library gets packaged by
> > distributions and applications start using it (or possibly even through
> > pipewire), nobody will complain about MC anymore :-) The important part,
> > in my opinion, is to handle the complexity somewhere in a framework so
> > that applications don't have to do so manually.
> 
> Has anyone approached FFmpeg then about doing a libcamera integration?

Not that I know of. We've started with GStreamer, and I think we're
approaching a point where we can start working on native libcamera
support in more projects.

> Or is the V4L2 compatibility layer being uprated from "best efforts"
> to "complete support"?

100% compatibility isn't realistic I think, as there are corner cases in
V4L2 that are mostly unused and would be very difficult to emulate.
There's no strict line, and improvements to the V4L2 adaptation layer
are being considered (for instance DMABUF support, as we currently
support MMAP only).

> Until ALL applications support libcamera somehow, then plain old V4L2
> still has a significant place.
> 
> And don't get me started on getting all the examples on the internet
> updated to reflect the new and shiny way of doing things. Have a look
> at the Pi forums at the moment as we've switched to using libcamera
> with Bullseye, and all the complaints of "but I followed these random
> instructions (for V4L2) and it doesn't work".

I think we all agree it will take time (it reminds me a bit of the V4L1
to V4L2 transition).

> > > Telling users that they need to go and read the EDID for their display
> > > themselves and configure the mode would be viewed as daft, but the I2C
> > > channel to a display is largely as independent of the display pipeline
> > > as the VCM is to the video pipeline. Perhaps display pipelines aren't
> > > complex enough?
> >
> > Cameras are too complex :-S
> 
> *Some* cameras are too complex.
> Some display pipelines are pretty hideous as well.

Display pipelines are slowly getting there; I dread to think about the
first display device that will require pushing pipeline configuration
from kernelspace to userspace. I just hope I won't be the one who will
have to work on it :-)

> > > Sorry, just my two-penneth as someone who has to support general
> > > users, rather than just develop platforms or address specific use
> > > cases.
> >
> > As mentioned above, I certainly don't oppose improving power management
> > for VCMs, as well as the VCM control API in general, as long as we can
> > cover all use cases. I'm not familiar enough with the use cases to tell
> > whether making the kernel side more "clever" would be just fine or could
> > cause issues.
> 
> Can I request that it is seriously considered then, rather than just
> documenting it as userspace's responsibility?

Sure, I second that.

> OK, I'll stop pushing my point now.

Thanks for sharing your opinion, it was useful.
Laurent Pinchart Nov. 15, 2021, 1:33 p.m. UTC | #39
On Fri, Nov 12, 2021 at 01:37:27PM +0200, Andy Shevchenko wrote:
> On Fri, Nov 12, 2021 at 12:46:56PM +0200, Laurent Pinchart wrote:
> > On Fri, Nov 12, 2021 at 10:32:31AM +0000, Dave Stevenson wrote:
> > > On Thu, 11 Nov 2021 at 22:04, Laurent Pinchart wrote:
> 
> > > Sorry, just my two-penneth as someone who has to support general
> > > users, rather than just develop platforms or address specific use
> > > cases.
> > 
> > As mentioned above, I certainly don't oppose improving power management
> > for VCMs, as well as the VCM control API in general, as long as we can
> > cover all use cases. I'm not familiar enough with the use cases to tell
> > whether making the kernel side more "clever" would be just fine or could
> > cause issues.
> 
> Personally I found the
> 
>   kernel <--> library in userspace <--> another library or app
> 
> schema is more flexible in many ways:
>  - we unburden kernel from the heavy code that has nothing to
>    do directly with HW
>  - we allow nevertheless to use kernel ABIs if needed
>  - we decrease burden of the ABI evolution by doing it in only
>    two places

I think that's generally true (provided the low-level userspace library
is well designed). In this specific case, we're moving towards that
model, and even if it ends up being better, I agree with Dave that the
transition can be painful.

> After all this kind of schema might lead us at some point to the
> shifting of 'we don't break user space' paradigm to the 'we hardly
> try not to break user space and do not break library ABIs / APIs
> in user space'.

Is that an officially allowed policy for kernel subsystems ?
Andy Shevchenko Nov. 15, 2021, 3:03 p.m. UTC | #40
On Mon, Nov 15, 2021 at 03:33:56PM +0200, Laurent Pinchart wrote:
> On Fri, Nov 12, 2021 at 01:37:27PM +0200, Andy Shevchenko wrote:
> > On Fri, Nov 12, 2021 at 12:46:56PM +0200, Laurent Pinchart wrote:
> > > On Fri, Nov 12, 2021 at 10:32:31AM +0000, Dave Stevenson wrote:
> > > > On Thu, 11 Nov 2021 at 22:04, Laurent Pinchart wrote:
> > 
> > > > Sorry, just my two-penneth as someone who has to support general
> > > > users, rather than just develop platforms or address specific use
> > > > cases.
> > > 
> > > As mentioned above, I certainly don't oppose improving power management
> > > for VCMs, as well as the VCM control API in general, as long as we can
> > > cover all use cases. I'm not familiar enough with the use cases to tell
> > > whether making the kernel side more "clever" would be just fine or could
> > > cause issues.
> > 
> > Personally I found the
> > 
> >   kernel <--> library in userspace <--> another library or app
> > 
> > schema is more flexible in many ways:
> >  - we unburden kernel from the heavy code that has nothing to
> >    do directly with HW
> >  - we allow nevertheless to use kernel ABIs if needed
> >  - we decrease burden of the ABI evolution by doing it in only
> >    two places
> 
> I think that's generally true (provided the low-level userspace library
> is well designed). In this specific case, we're moving towards that
> model, and even if it ends up being better, I agree with Dave that the
> transition can be painful.
> 
> > After all this kind of schema might lead us at some point to the
> > shifting of 'we don't break user space' paradigm to the 'we hardly
> > try not to break user space and do not break library ABIs / APIs
> > in user space'.
> 
> Is that an officially allowed policy for kernel subsystems ?

Keyword is "might". And no, it's not allowed right now, but that's what I
recognize as a trend.
Daniel Scally Nov. 15, 2021, 11:43 p.m. UTC | #41
Hi Hans

On 11/11/2021 15:59, Hans de Goede wrote:
> Hi,
>
> On 11/9/21 17:35, Daniel Scally wrote:
>
> <snip>
>
>>> Talking about this Dell Latitude 7285, I haven't had a chance to
>>> look into this at all. But chances are I will need to do some
>>> I2C-register dumps under Windows, last time you mentioned you
>>> had some small tool for this ? It is ok if it is a bit hackish,
>>> it will still be very useful to have :)  And I believe I will
>>> also need to override the DSDT under Windows for this, right?
>>> I should be able to cope with that too.
>>
>> So the tool I was using was the I2cTestTool [1], which requires you to
>> first hack the DSDT to enable usermode access [2]. You need the
>> Microsoft ASL compiler [3] to insert the new DSDT, but fwiw I gave up
>> trying to use their tool to actually compile the table and just did it
>> running Ubuntu with iasl, then saved the file onto the Go2's SD card and
>> loaded it using asl.exe in Windows...the MS tool just wouldn't compile
>> for whatever reason.
>>
>>
>> All that said; you don't actually need to do this for the Latitude 7285
>> - on the Github thread a chap with that device found the schematics and
>> posted them [4], so we should already have the information we need to
>> populate the board data for that one. The sensor drivers need some work
>> though - the ov9734 I have a series somewhere that I think should work
>> but haven't ever tested, the ov8858 I don't think anyone's looked at yet.
> Awesome, thank you very much for these links.
>
> If you can dig up the ov9734 patch series you have and email me a copy (1),
> that would be great, then I can start looking into getting things to work
> on the Latitude 7285.


Sorry for the late reply - kid's birthday this weekend so haven't had
much time to do anything. I had a look for this but I suspect I spring
cleaned that branch as I can't find it, sorry. I can recall changing
ov9734_check_hwcfg() to return -EPROBE_DEFER if
fwnode_graph_get_next_endpoint() returns NULL (because the cio2-bridge
can build that endpoint later), and adding the entry for this sensor to
the cio2-bridge, with the 180MHz link frequency from the driver, but
past that memory fails.


> Regards,
>
> Hans
>
>
> 1) No need to make it compile with the latest, I can take care of that
> just email me what you have :)
>
>
Hans de Goede Nov. 16, 2021, 9:54 a.m. UTC | #42
Hi,

On 11/8/21 15:12, Andy Shevchenko wrote:
> On Mon, Nov 08, 2021 at 02:12:38PM +0100, Hans de Goede wrote:
>> On 11/2/21 00:43, Daniel Scally wrote:
> 
>> Does this sound reasonable / like I'm heading in the right direction?
> 
> It is up to you folks, since I have no time to participate in this with
> a full dive right now. Below just some comments on the patches in case
> they will go.
> 
> ...
> 
>> -	struct acpi_device *adev = ACPI_COMPANION(dev);
>> +	struct acpi_device *adev = to_acpi_device_node(fwnode);
>>  	struct i2c_acpi_lookup lookup;
>>  	struct i2c_adapter *adapter;
>>  	LIST_HEAD(resource_list);
>>  	int ret;
> 
> Make sense to move assignment here.

Ack, will fix.

> 
> 	adev = to_acpi_device_node(fwnode);
> 
>> +	if (!adev)
>> +		return ERR_PTR(-ENODEV);
> 
> ...
> 
>> +static inline struct i2c_client *i2c_acpi_new_device(struct device *dev,
>> +						     int index,
>> +						     struct i2c_board_info *info)
>> +{
>> +	return i2c_acpi_new_device_by_fwnode(dev->fwnode, index, info);
> 
> dev_fwnode(dev)

Ack, will fix.

> 
>> +}
> 
> ...
> 
>> +int cio2_bridge_sensors_are_ready(void)
>> +{
>> +	struct acpi_device *adev;
> 
>> +	bool ready = true;
> 
> Redundant. See below.
> 
>> +	unsigned int i;
>> +
>> +	for (i = 0; i < ARRAY_SIZE(cio2_supported_sensors); i++) {
>> +		const struct cio2_sensor_config *cfg =
>> +			&cio2_supported_sensors[i];
>> +
>> +		for_each_acpi_dev_match(adev, cfg->hid, NULL, -1) {
>> +			if (!adev->status.enabled)
>> +				continue;
> 
>> +			if (!acpi_dev_ready_for_enumeration(adev))
>> +				ready = false;
> 
> You may put the adev here and return false.
> 
>> +		}
>> +	}
> 
>> +	return ready;
> 
> So return true.

I actually did it this way deliberately, making use of the fact that
for_each_acpi_dev_match() doesn't "leak" a ref when you let it run
to the end.

I find this clearer because this way all the ref handling
is abstracted away in for_each_acpi_dev_match(), whereas with
a put in the middle of the loop a casual reader of the code
is going to wonder where the put ref is coming from.


> 
>> +}
> 
> ...
> 
>> +	if (sensor->ssdb.vcmtype)
>> +		nodes[SWNODE_VCM] = NODE_VCM(
>> +					cio2_vcm_types[sensor->ssdb.vcmtype - 1]);
> 
> Wouldn't be better
> 
> 		nodes[SWNODE_VCM] =
> 			NODE_VCM(cio2_vcm_types[sensor->ssdb.vcmtype - 1]);
> 
> ?
> 
> ...
> 
>> +	sensor->vcm_i2c_client = i2c_acpi_new_device_by_fwnode(
>> +					acpi_fwnode_handle(sensor->adev),
>> +					1, &board_info);
> 
> Ditto.

Ack, will fix both for the next version.

Regards,

Hans
Andy Shevchenko Nov. 16, 2021, 12:26 p.m. UTC | #43
On Tue, Nov 16, 2021 at 10:54:36AM +0100, Hans de Goede wrote:
> On 11/8/21 15:12, Andy Shevchenko wrote:
> > On Mon, Nov 08, 2021 at 02:12:38PM +0100, Hans de Goede wrote:
> >> On 11/2/21 00:43, Daniel Scally wrote:

...

> >> +int cio2_bridge_sensors_are_ready(void)
> >> +{
> >> +	struct acpi_device *adev;
> > 
> >> +	bool ready = true;
> > 
> > Redundant. See below.
> > 
> >> +	unsigned int i;
> >> +
> >> +	for (i = 0; i < ARRAY_SIZE(cio2_supported_sensors); i++) {
> >> +		const struct cio2_sensor_config *cfg =
> >> +			&cio2_supported_sensors[i];
> >> +
> >> +		for_each_acpi_dev_match(adev, cfg->hid, NULL, -1) {
> >> +			if (!adev->status.enabled)
> >> +				continue;
> > 
> >> +			if (!acpi_dev_ready_for_enumeration(adev))
> >> +				ready = false;
> > 
> > You may put the adev here and return false.
> > 
> >> +		}
> >> +	}
> > 
> >> +	return ready;
> > 
> > So return true.
> 
> I actually did it this way deliberately making use of
> for_each_acpi_dev_match() not "leaking" a ref when you let
> it run to the end.
> 
> I find this clearer because this way all the ref handling
> is abstracted away in for_each_acpi_dev_match(), where as with
> a put in the middle of the loop a causal reader of the code
> is going to wonder there the put ref is coming from.

There are pros and cons: currently you've abstracted it away, but if this
piece of code is ever extended (I don't believe it will happen, though) it may
trip up someone who forgets about this nuance of for_each_acpi_dev_match().

But it's fine for me, so you decide.

> >> +}
Daniel Scally Nov. 23, 2021, 12:10 p.m. UTC | #44
Hi Hans

On 11/11/2021 15:23, Hans de Goede wrote:
> Hi,
>
> On 11/11/21 12:18, Daniel Scally wrote:
>
> <snip>
>
>>>> One problem I'm experiencing
>>>> is that the focus position I set isn't maintained; it holds for a couple
>>>> of seconds and then resets to the "normal" focus...this happens when the
>>>> .close() callback for the driver is called, which happens right after
>>>> the control value is applied. All the other VCM drivers in the kernel
>>>> power down on .close() so I did the same.
>>> Right, I believe that this is fine though, we expect people to use
>>> libcamera with this and once libcamera gets autofocus support, then
>>> I would expect libcamera to keep the fd open the entire time while
>>> streaming.
>>
>> OK - as long as that's how it works then I agree that this is fine as is
>> yes.
> So I've just picked up an old project of mine, called gtk-v4l which
> is a nice simple v4l2 controls applet, and patched it up to also
> work on v4l-subdevs:
>
> https://github.com/jwrdegoede/gtk-v4l/
>
> So now you can run:
>
> sudo gtk-v4l -d /dev/v4l-subdev8
>
> And it will give you a slider to control the focus; and as
> a bonus it keeps the v4l-subdev open, so no more runtime-pm
> issue :)


This is just neat regardless of the problem we were having; thanks!

>>> What is necessary is some way for libcamera to:
>>>
>>> 1. See if there is a VCM which belongs to the sensor; and
>>> 2. If there is a VCM figure out which v4l2-subdev it is.
>>>
>>> Also see this email thread, where Hans Verkuil came to the
>>> conclusion that this info is currently missing from the MC
>>> representation (link is to the conclusion):
>>>
>>> https://lists.libcamera.org/pipermail/libcamera-devel/2021-October/026144.html
>>
>> Yeah I read through that thread too, and had a brief chat with Laurent
>> about it. My plan was to add a new type of link called an "ancillary
>> link" between two entities, and automatically create those in
>> match_notify() based on the function field of the matching entities, and
>> expose them as part of the media graph. I've started working on that but
>> not progressed far enough to share anything.
> Sounds good.
>
>> Libcamera would need
>> updating with support for that too though.
> Right I think libcamera will need updating no matter what, first we
> need to come up with a userspace API for this.
>
> Although I guess it would be good to also write libcamera patches
> once the kernel patches are ready, but not yet merged, to make
> sure the API is usable without problems by libcamera.


Realised I'd not updated you on this for a while - I've got the new
style of links created by the kernel when the fwnode match is made, and
those are visible in userspace, I'm just working on hacking libcamera to
accommodate them too. cpp is new to me though so it might take me a while

>
> Regards,
>
> Hans
>
Hans de Goede Nov. 23, 2021, 7:02 p.m. UTC | #45
Hi Daniel,

On 11/23/21 13:10, Daniel Scally wrote:
> Hi Hans
> 
> On 11/11/2021 15:23, Hans de Goede wrote:
>> Hi,
>>
>> On 11/11/21 12:18, Daniel Scally wrote:
>>
>> <snip>
>>
>>>>> One problem I'm experiencing
>>>>> is that the focus position I set isn't maintained; it holds for a couple
>>>>> of seconds and then resets to the "normal" focus...this happens when the
>>>>> .close() callback for the driver is called, which happens right after
>>>>> the control value is applied. All the other VCM drivers in the kernel
>>>>> power down on .close() so I did the same.
>>>> Right, I believe that this is fine though, we expect people to use
>>>> libcamera with this and once libcamera gets autofocus support, then
>>>> I would expect libcamera to keep the fd open the entire time while
>>>> streaming.
>>>
>>> OK - as long as that's how it works then I agree that this is fine as is
>>> yes.
>> So I've just picked up an old project of mine, called gtk-v4l which
>> is a nice simple v4l2 controls applet, and patched it up to also
>> work on v4l-subdevs:
>>
>> https://github.com/jwrdegoede/gtk-v4l/
>>
>> So now you can run:
>>
>> sudo gtk-v4l -d /dev/v4l-subdev8
>>
>> And it will give you a slider to control the focus; and as
>> a bonus it keeps the v4l-subdev open, so no more runtime-pm
>> issue :)
> 
> 
> This is just neat regardless of the problem we were having; thanks!
> 
>>>> What is necessary is some way for libcamera to:
>>>>
>>>> 1. See if there is a VCM which belongs to the sensor; and
>>>> 2. If there is a VCM figure out which v4l2-subdev it is.
>>>>
>>>> Also see this email thread, where Hans Verkuil came to the
>>>> conclusion that this info is currently missing from the MC
>>>> representation (link is to the conclusion):
>>>>
>>>> https://lists.libcamera.org/pipermail/libcamera-devel/2021-October/026144.html
>>>
>>> Yeah I read through that thread too, and had a brief chat with Laurent
>>> about it. My plan was to add a new type of link called an "ancillary
>>> link" between two entities, and automatically create those in
>>> match_notify() based on the function field of the matching entities, and
>>> expose them as part of the media graph. I've started working on that but
>>> not progressed far enough to share anything.
>> Sounds good.
>>
>>> Libcamera would need
>>> updating with support for that too though.
>> Right I think libcamera will need updating no matter what, first we
>> need to come up with a userspace API for this.
>>
>> Although I guess it would be good to also write libcamera patches
>> once the kernel patches are ready, but not yet merged, to make
>> sure the API is usable without problems by libcamera.
> 
> 
> Realised I'd not updated you on this for a while - I've got the new
> style of links created by the kernel when the fwnode match is made, and
> those are visible in userspace, I'm just working on hacking libcamera to
> accommodate them too. cpp is new to me though so it might take me a while

That is good to hear. You may have noticed I've been a bit quiet too.

I've been busy with other drivers/platform/x86 stuff, but I hope to wrap
that up soon and post a v6 of the INT3472 / surface go back camera series,
including VCM support.

Regards,

Hans
diff mbox series

Patch

From 0f0e2fa09b1bf8260f0d5f3753ebc27bd2c74ad1 Mon Sep 17 00:00:00 2001
From: Daniel Scally <djrscally@gmail.com>
Date: Thu, 28 Oct 2021 21:55:16 +0100
Subject: [PATCH] media: i2c: Add driver for DW9719 VCM

Add a driver for the DW9719 VCM

Signed-off-by: Daniel Scally <djrscally@gmail.com>
---
 MAINTAINERS                |   7 +
 drivers/media/i2c/Kconfig  |  11 ++
 drivers/media/i2c/Makefile |   1 +
 drivers/media/i2c/dw9719.c | 376 +++++++++++++++++++++++++++++++++++++
 4 files changed, 395 insertions(+)
 create mode 100644 drivers/media/i2c/dw9719.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 891189cecd51..3db7124c24ee 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5647,6 +5647,13 @@  T:	git git://linuxtv.org/media_tree.git
 F:	Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.txt
 F:	drivers/media/i2c/dw9714.c
 
+DONGWOON DW9719 LENS VOICE COIL DRIVER
+M:	Daniel Scally <djrscally@gmail.com>
+L:	linux-media@vger.kernel.org
+S:	Maintained
+T:	git git://linuxtv.org/media_tree.git
+F:	drivers/media/i2c/dw9719.c
+
 DONGWOON DW9768 LENS VOICE COIL DRIVER
 M:	Dongchun Zhu <dongchun.zhu@mediatek.com>
 L:	linux-media@vger.kernel.org
diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
index dee06f535f2c..505483e7b1df 100644
--- a/drivers/media/i2c/Kconfig
+++ b/drivers/media/i2c/Kconfig
@@ -1396,6 +1396,17 @@  config VIDEO_DW9714
 	  capability. This is designed for linear control of
 	  voice coil motors, controlled via I2C serial interface.
 
+config VIDEO_DW9719
+	tristate "DW9719 lens voice coil support"
+	depends on I2C && VIDEO_V4L2
+	select MEDIA_CONTROLLER
+	select VIDEO_V4L2_SUBDEV_API
+	select V4L2_ASYNC
+	help
+	  This is a driver for the DW9719 camera lens voice coil.
+	  This is designed for linear control of voice coil motors,
+	  controlled via I2C serial interface.
+
 config VIDEO_DW9768
 	tristate "DW9768 lens voice coil support"
 	depends on I2C && VIDEO_V4L2
diff --git a/drivers/media/i2c/Makefile b/drivers/media/i2c/Makefile
index 011e90c1a288..6a8f55b6c6b9 100644
--- a/drivers/media/i2c/Makefile
+++ b/drivers/media/i2c/Makefile
@@ -24,6 +24,7 @@  obj-$(CONFIG_VIDEO_SAA6752HS) += saa6752hs.o
 obj-$(CONFIG_VIDEO_AD5820)  += ad5820.o
 obj-$(CONFIG_VIDEO_AK7375)  += ak7375.o
 obj-$(CONFIG_VIDEO_DW9714)  += dw9714.o
+obj-$(CONFIG_VIDEO_DW9719)  += dw9719.o
 obj-$(CONFIG_VIDEO_DW9768)  += dw9768.o
 obj-$(CONFIG_VIDEO_DW9807_VCM)  += dw9807-vcm.o
 obj-$(CONFIG_VIDEO_ADV7170) += adv7170.o
diff --git a/drivers/media/i2c/dw9719.c b/drivers/media/i2c/dw9719.c
new file mode 100644
index 000000000000..9021caa915a3
--- /dev/null
+++ b/drivers/media/i2c/dw9719.c
@@ -0,0 +1,376 @@ 
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2012 Intel Corporation
+
+#include <linux/delay.h>
+#include <linux/i2c.h>
+#include <linux/pm_runtime.h>
+#include <linux/regulator/consumer.h>
+#include <linux/types.h>
+
+#include <media/v4l2-common.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-subdev.h>
+
+#define DW9719_MAX_FOCUS_POS	1023
+#define DELAY_PER_STEP_NS	1000000
+#define DELAY_MAX_PER_STEP_NS	(1000000 * 1023)
+
+#define DW9719_INFO			0
+#define DW9719_ID			0xF1
+#define DW9719_CONTROL			2
+#define DW9719_VCM_CURRENT		3
+
+#define DW9719_MODE			6
+#define DW9719_VCM_FREQ			7
+
+#define DW9719_MODE_SAC3		0x40
+#define DW9719_DEFAULT_VCM_FREQ		0x60
+#define DW9719_ENABLE_RINGING		0x02
+
+#define to_dw9719_device(x) container_of(x, struct dw9719_device, sd)
+
+struct dw9719_device {
+	struct device *dev;
+	struct i2c_client *client;
+	struct regulator *vdd;
+	struct v4l2_subdev sd;
+
+	struct dw9719_v4l2_ctrls {
+		struct v4l2_ctrl_handler handler;
+		struct v4l2_ctrl *focus;
+	} ctrls;
+};
+
+static int dw9719_i2c_rd8(struct i2c_client *client, u8 reg, u8 *val)
+{
+	struct i2c_msg msg[2];
+	u8 buf[2] = { reg };
+
+	msg[0].addr = client->addr;
+	msg[0].flags = 0;
+	msg[0].len = 1;
+	msg[0].buf = buf;
+
+	msg[1].addr = client->addr;
+	msg[1].flags = I2C_M_RD;
+	msg[1].len = 1;
+	msg[1].buf = &buf[1];
+	*val = 0;
+
+	if (i2c_transfer(client->adapter, msg, 2) != 2)
+		return -EIO;
+	*val = buf[1];
+
+	return 0;
+}
+
+static int dw9719_i2c_wr8(struct i2c_client *client, u8 reg, u8 val)
+{
+	struct i2c_msg msg;
+	u8 buf[2] = { reg, val };
+
+	msg.addr = client->addr;
+	msg.flags = 0;
+	msg.len = sizeof(buf);
+	msg.buf = buf;
+
+	if (i2c_transfer(client->adapter, &msg, 1) != 1)
+		return -EIO;
+
+	return 0;
+}
+
+static int dw9719_i2c_wr16(struct i2c_client *client, u8 reg, u16 val)
+{
+	struct i2c_msg msg;
+	u8 buf[3] = { reg, (u8)(val >> 8), (u8)(val & 0xff)};
+
+	msg.addr = client->addr;
+	msg.flags = 0;
+	msg.len = sizeof(buf);
+	msg.buf = buf;
+
+	if (i2c_transfer(client->adapter, &msg, 1) != 1)
+		return -EIO;
+
+	return 0;
+}
+
+static int dw9719_detect(struct dw9719_device *dw9719)
+{
+	int ret;
+	u8 val;
+
+	ret = dw9719_i2c_rd8(dw9719->client, DW9719_INFO, &val);
+	if (ret < 0)
+		return ret;
+
+	if (val != DW9719_ID) {
+		dev_err(dw9719->dev, "Failed to detect correct id\n");
+		return -ENXIO;
+	}
+
+	return 0;
+}
+
+static int dw9719_power_down(struct dw9719_device *dw9719)
+{
+	return regulator_disable(dw9719->vdd);
+}
+
+static int dw9719_power_up(struct dw9719_device *dw9719)
+{
+	int ret;
+
+	ret = regulator_enable(dw9719->vdd);
+	if (ret)
+		return ret;
+
+	/* Jiggle SCL pin to wake up device */
+	ret = dw9719_i2c_wr8(dw9719->client, DW9719_CONTROL, 1);
+
+	/* Need 100us to transition from SHUTDOWN to STANDBY */
+	usleep_range(100, 1000);
+
+	ret = dw9719_i2c_wr8(dw9719->client, DW9719_CONTROL,
+			     DW9719_ENABLE_RINGING);
+	if (ret < 0)
+		goto fail_powerdown;
+
+	ret = dw9719_i2c_wr8(dw9719->client, DW9719_MODE, DW9719_MODE_SAC3);
+	if (ret < 0)
+		goto fail_powerdown;
+
+	ret = dw9719_i2c_wr8(dw9719->client, DW9719_VCM_FREQ,
+			     DW9719_DEFAULT_VCM_FREQ);
+	if (ret < 0)
+		goto fail_powerdown;
+
+	return 0;
+
+fail_powerdown:
+	dw9719_power_down(dw9719);
+	return ret;
+}
+
+static int __maybe_unused dw9719_suspend(struct device *dev)
+{
+	struct v4l2_subdev *sd = dev_get_drvdata(dev);
+	struct dw9719_device *dw9719 = to_dw9719_device(sd);
+
+	return dw9719_power_down(dw9719);
+}
+
+static int __maybe_unused dw9719_resume(struct device *dev)
+{
+	struct v4l2_subdev *sd = dev_get_drvdata(dev);
+	struct dw9719_device *dw9719 = to_dw9719_device(sd);
+
+	return dw9719_power_up(dw9719);
+}
+
+static int dw9719_t_focus_abs(struct dw9719_device *dw9719, s32 value)
+{
+	int ret;
+
+	value = clamp(value, 0, DW9719_MAX_FOCUS_POS);
+	ret = dw9719_i2c_wr16(dw9719->client, DW9719_VCM_CURRENT, value);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+static int dw9719_t_focus_rel(struct dw9719_device *dw9719, s32 value)
+{
+	s32 cur_val = dw9719->ctrls.focus->val;
+
+	return dw9719_t_focus_abs(dw9719, cur_val + value);
+}
+
+static int dw9719_set_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct dw9719_device *dw9719 = container_of(ctrl->handler,
+						    struct dw9719_device,
+						    ctrls.handler);
+	int ret;
+
+	/*
+	 * Only apply changes to the controls if the device is powered up.
+	 * pm_runtime_get_if_in_use() returns 0 or a negative errno when the
+	 * device is not in use, in which case no usage count was taken.
+	 */
+	if (pm_runtime_get_if_in_use(dw9719->dev) <= 0)
+		return 0;
+
+	switch (ctrl->id) {
+	case V4L2_CID_FOCUS_ABSOLUTE:
+		ret = dw9719_t_focus_abs(dw9719, ctrl->val);
+		break;
+	case V4L2_CID_FOCUS_RELATIVE:
+		ret = dw9719_t_focus_rel(dw9719, ctrl->val);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	pm_runtime_put(dw9719->dev);
+
+	return ret;
+}
+
+static const struct v4l2_ctrl_ops dw9719_ctrl_ops = {
+	.s_ctrl = dw9719_set_ctrl,
+};
+
+static int dw9719_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
+{
+	return pm_runtime_resume_and_get(sd->dev);
+}
+
+static int dw9719_close(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
+{
+	pm_runtime_put(sd->dev);
+
+	return 0;
+}
+
+static const struct v4l2_subdev_internal_ops dw9719_internal_ops = {
+	.open = dw9719_open,
+	.close = dw9719_close,
+};
+
+static int dw9719_init_controls(struct dw9719_device *dw9719)
+{
+	const struct v4l2_ctrl_ops *ops = &dw9719_ctrl_ops;
+	int ret;
+
+	ret = v4l2_ctrl_handler_init(&dw9719->ctrls.handler, 1);
+	if (ret)
+		return ret;
+
+	dw9719->ctrls.focus = v4l2_ctrl_new_std(&dw9719->ctrls.handler, ops,
+						V4L2_CID_FOCUS_ABSOLUTE, 0,
+						DW9719_MAX_FOCUS_POS, 1, 0);
+
+	if (dw9719->ctrls.handler.error) {
+		dev_err(dw9719->dev, "Error initialising v4l2 ctrls\n");
+		ret = dw9719->ctrls.handler.error;
+		goto err_free_handler;
+	}
+
+	return 0;
+
+err_free_handler:
+	v4l2_ctrl_handler_free(&dw9719->ctrls.handler);
+	return ret;
+}
+
+static const struct v4l2_subdev_ops dw9719_ops = { };
+
+static int dw9719_probe(struct i2c_client *client)
+{
+	struct dw9719_device *dw9719;
+	int ret;
+
+	dw9719 = devm_kzalloc(&client->dev, sizeof(*dw9719), GFP_KERNEL);
+	if (!dw9719)
+		return -ENOMEM;
+
+	dw9719->client = client;
+	dw9719->dev = &client->dev;
+
+	dw9719->vdd = devm_regulator_get(&client->dev, "vdd");
+	if (IS_ERR(dw9719->vdd)) {
+		dev_err(&client->dev, "Error getting regulator\n");
+		return PTR_ERR(dw9719->vdd);
+	}
+
+	v4l2_i2c_subdev_init(&dw9719->sd, client, &dw9719_ops);
+	dw9719->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+	dw9719->sd.internal_ops = &dw9719_internal_ops;
+
+	ret = dw9719_init_controls(dw9719);
+	if (ret)
+		return ret;
+
+	ret = media_entity_pads_init(&dw9719->sd.entity, 0, NULL);
+	if (ret < 0)
+		goto err_free_ctrl_handler;
+
+	dw9719->sd.entity.function = MEDIA_ENT_F_LENS;
+
+	/*
+	 * We need the driver to work even when runtime PM is disabled in the
+	 * kernel, so power up and verify the chip now. When runtime PM is
+	 * disabled this leaves the chip powered on, so that the lens still
+	 * works.
+	 */
+	ret = dw9719_power_up(dw9719);
+	if (ret)
+		goto err_cleanup_media;
+
+	ret = dw9719_detect(dw9719);
+	if (ret)
+		goto err_powerdown;
+
+	pm_runtime_set_active(&client->dev);
+	pm_runtime_get_noresume(&client->dev);
+	pm_runtime_enable(&client->dev);
+
+	ret = v4l2_async_register_subdev(&dw9719->sd);
+	if (ret < 0)
+		goto err_pm_runtime;
+
+	pm_runtime_set_autosuspend_delay(&client->dev, 1000);
+	pm_runtime_use_autosuspend(&client->dev);
+	pm_runtime_put_autosuspend(&client->dev);
+
+	return ret;
+
+err_pm_runtime:
+	pm_runtime_disable(&client->dev);
+	pm_runtime_put_noidle(&client->dev);
+	pm_runtime_set_suspended(&client->dev);
+err_powerdown:
+	dw9719_power_down(dw9719);
+err_cleanup_media:
+	media_entity_cleanup(&dw9719->sd.entity);
+err_free_ctrl_handler:
+	v4l2_ctrl_handler_free(&dw9719->ctrls.handler);
+
+	return ret;
+}
+
+static int dw9719_remove(struct i2c_client *client)
+{
+	struct v4l2_subdev *sd = i2c_get_clientdata(client);
+	struct dw9719_device *dw9719 = container_of(sd, struct dw9719_device,
+						    sd);
+
+	pm_runtime_disable(&client->dev);
+	v4l2_async_unregister_subdev(sd);
+	v4l2_ctrl_handler_free(&dw9719->ctrls.handler);
+	media_entity_cleanup(&dw9719->sd.entity);
+
+	return 0;
+}
+
+static const struct i2c_device_id dw9719_id_table[] = {
+	{ "dw9719" },
+	{ }
+};
+MODULE_DEVICE_TABLE(i2c, dw9719_id_table);
+
+static const struct dev_pm_ops dw9719_pm_ops = {
+	SET_RUNTIME_PM_OPS(dw9719_suspend, dw9719_resume, NULL)
+};
+
+static struct i2c_driver dw9719_i2c_driver = {
+	.driver = {
+		.name = "dw9719",
+		.pm = &dw9719_pm_ops,
+	},
+	.probe_new = dw9719_probe,
+	.remove = dw9719_remove,
+	.id_table = dw9719_id_table,
+};
+module_i2c_driver(dw9719_i2c_driver);
+
+MODULE_DESCRIPTION("DW9719 VCM Driver");
+MODULE_LICENSE("GPL");
-- 
2.25.1