
[RFC,0/7] clk: sunxi-ng: Add a RTC CCU driver

Message ID 20210901053951.60952-1-samuel@sholland.org (mailing list archive)

Message

Samuel Holland Sept. 1, 2021, 5:39 a.m. UTC
This patch series adds a CCU driver for the RTC in the H616 and R329.
The extra patches at the end of this series show how it would be
expanded to additional hardware variants.

The driver is intended to support the existing binding used for the H6,
but also an updated binding which includes all RTC input clocks. I do
not know how to best represent that binding -- that is a major reason
why this series is an RFC.

A future patch series could add functionality to the driver to manage
IOSC calibration at boot and during suspend/resume.

It may be possible to support all of these hardware variants in the
existing RTC clock driver and avoid some duplicate code, but I'm
concerned about the complexity there, without any of the CCU
abstraction.

This series is currently based on top of the other series I just sent
(clk: sunxi-ng: Lifetime fixes and module support), but I can rebase it
elsewhere.

Samuel Holland (7):
  dt-bindings: rtc: sun6i: Add H616 and R329 compatibles
  clk: sunxi-ng: div: Add macro using CLK_HW_INIT_FW_NAME
  clk: sunxi-ng: mux: Add macro using CLK_HW_INIT_PARENTS_DATA
  clk: sunxi-ng: mux: Allow muxes to have keys
  clk: sunxi-ng: Add support for the sun50i RTC clocks
  [DO NOT MERGE] clk: sunxi-ng: Add support for H6
  [DO NOT MERGE] clk: sunxi-ng: Add support for T5

 .../bindings/rtc/allwinner,sun6i-a31-rtc.yaml |  55 ++-
 drivers/clk/sunxi-ng/Kconfig                  |   6 +
 drivers/clk/sunxi-ng/Makefile                 |   1 +
 drivers/clk/sunxi-ng/ccu_common.h             |   1 +
 drivers/clk/sunxi-ng/ccu_div.h                |  14 +
 drivers/clk/sunxi-ng/ccu_mux.c                |   7 +
 drivers/clk/sunxi-ng/ccu_mux.h                |  28 ++
 drivers/clk/sunxi-ng/sun50i-rtc-ccu.c         | 433 ++++++++++++++++++
 drivers/clk/sunxi-ng/sun50i-rtc-ccu.h         |  15 +
 drivers/rtc/rtc-sun6i.c                       |  17 -
 include/dt-bindings/clock/sun50i-rtc.h        |  12 +
 11 files changed, 566 insertions(+), 23 deletions(-)
 create mode 100644 drivers/clk/sunxi-ng/sun50i-rtc-ccu.c
 create mode 100644 drivers/clk/sunxi-ng/sun50i-rtc-ccu.h
 create mode 100644 include/dt-bindings/clock/sun50i-rtc.h

Comments

Maxime Ripard Sept. 3, 2021, 2:50 p.m. UTC | #1
Hi,

On Wed, Sep 01, 2021 at 12:39:44AM -0500, Samuel Holland wrote:
> This patch series adds a CCU driver for the RTC in the H616 and R329.
> The extra patches at the end of this series show how it would be
> expanded to additional hardware variants.
> 
> The driver is intended to support the existing binding used for the H6,
> but also an updated binding which includes all RTC input clocks. I do
> not know how to best represent that binding -- that is a major reason
> why this series is an RFC.
> 
> A future patch series could add functionality to the driver to manage
> IOSC calibration at boot and during suspend/resume.
> 
> It may be possible to support all of these hardware variants in the
> existing RTC clock driver and avoid some duplicate code, but I'm
> concerned about the complexity there, without any of the CCU
> abstraction.
> 
> This series is currently based on top of the other series I just sent
> (clk: sunxi-ng: Lifetime fixes and module support), but I can rebase it
> elsewhere.

I'm generally ok with this, it makes sense to move it to sunxi-ng,
especially with that other series of yours.

My main concern about this is the split driver approach. We used to have
that before in the RTC too, but it was mostly due to the early clock
requirements. With your previous work, that requirement is not there
anymore and we can just register it as a device just like the other
clock providers.

And since we can register all those clocks at device probe time, we
don't really need to split the driver in two (and especially in two
different places). The only obstacle to this after your previous series
is that we don't have of_sunxi_ccu_probe / devm_sunxi_ccu_probe
functions public, but that can easily be fixed by moving their
definition to include/linux/clk/sunxi-ng.h
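
What Maxime is proposing could look roughly like this; the prototypes below are a sketch based on the helpers discussed in this thread, assuming the signatures from the "Lifetime fixes and module support" series are kept unchanged:

```c
/* Sketch of include/linux/clk/sunxi-ng.h -- declarations only; the
 * sunxi_ccu_desc definition itself stays private to
 * drivers/clk/sunxi-ng. */
#ifndef _LINUX_CLK_SUNXI_NG_H_
#define _LINUX_CLK_SUNXI_NG_H_

struct device;
struct device_node;
struct sunxi_ccu_desc;

int devm_sunxi_ccu_probe(struct device *dev, void __iomem *reg,
			 const struct sunxi_ccu_desc *desc);
void of_sunxi_ccu_probe(struct device_node *node, void __iomem *reg,
			const struct sunxi_ccu_desc *desc);

#endif
```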

Maxime
Samuel Holland Sept. 3, 2021, 3:21 p.m. UTC | #2
On 9/3/21 9:50 AM, Maxime Ripard wrote:
> Hi,
> 
> On Wed, Sep 01, 2021 at 12:39:44AM -0500, Samuel Holland wrote:
>> This patch series adds a CCU driver for the RTC in the H616 and R329.
>> The extra patches at the end of this series show how it would be
>> expanded to additional hardware variants.
>>
>> The driver is intended to support the existing binding used for the H6,
>> but also an updated binding which includes all RTC input clocks. I do
>> not know how to best represent that binding -- that is a major reason
>> why this series is an RFC.
>>
>> A future patch series could add functionality to the driver to manage
>> IOSC calibration at boot and during suspend/resume.
>>
>> It may be possible to support all of these hardware variants in the
>> existing RTC clock driver and avoid some duplicate code, but I'm
>> concerned about the complexity there, without any of the CCU
>> abstraction.
>>
>> This series is currently based on top of the other series I just sent
>> (clk: sunxi-ng: Lifetime fixes and module support), but I can rebase it
>> elsewhere.
> 
> I'm generally ok with this, it makes sense to move it to sunxi-ng,
> especially with that other series of yours.
> 
> My main concern about this is the split driver approach. We used to have
> that before in the RTC too, but it was mostly due to the early clock
> requirements. With your previous work, that requirement is not there
> anymore and we can just register it as a device just like the other
> clock providers.

That's a good point. Originally, I had this RTC CCU providing osc24M, so
it did need to be an early provider. But with the current version, we
could have the RTC platform driver call devm_sunxi_ccu_probe. That does
seem cleaner.

(Since it wasn't immediately obvious to me why this works: the only
early provider remaining is the sun5i CCU, and it doesn't use the sun6i
RTC driver.)
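
A minimal sketch of what that could look like in the RTC driver's probe path, assuming devm_sunxi_ccu_probe is exported as suggested above and that a sun6i_rtc_ccu_desc descriptor exists (both names hypothetical at this point in the discussion):

```c
/* Hypothetical: the RTC platform driver registers its clocks through
 * the shared CCU helper instead of acting as an early OF clock
 * provider. */
static int sun6i_rtc_probe(struct platform_device *pdev)
{
	struct sun6i_rtc_dev *chip;
	int ret;

	chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL);
	if (!chip)
		return -ENOMEM;

	chip->base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(chip->base))
		return PTR_ERR(chip->base);

	/* sun6i_rtc_ccu_desc is an assumed descriptor defined alongside
	 * the clock data in drivers/clk/sunxi-ng. */
	ret = devm_sunxi_ccu_probe(&pdev->dev, chip->base,
				   &sun6i_rtc_ccu_desc);
	if (ret)
		return ret;

	/* ... rest of the RTC setup ... */
	return 0;
}
```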

> And since we can register all those clocks at device probe time, we
> don't really need to split the driver in two (and especially in two
> different places). The only obstacle to this after your previous series
> is that we don't have of_sunxi_ccu_probe / devm_sunxi_ccu_probe
> functions public, but that can easily be fixed by moving their
> definition to include/linux/clk/sunxi-ng.h

Where are you thinking the clock definitions would go? We don't export
any of those structures (ccu_mux, ccu_common) or macros
(SUNXI_CCU_GATE_DATA) in a public header either.

Would you want to export those? That seems like a lot of churn. Or would
we put the CCU descriptions in drivers/clk/sunxi-ng and export a
function that the RTC driver can call? (Or some other idea?)

Regards,
Samuel
Maxime Ripard Sept. 9, 2021, 8:45 a.m. UTC | #3
On Fri, Sep 03, 2021 at 10:21:13AM -0500, Samuel Holland wrote:
> On 9/3/21 9:50 AM, Maxime Ripard wrote:
> > Hi,
> > 
> > On Wed, Sep 01, 2021 at 12:39:44AM -0500, Samuel Holland wrote:
> >> This patch series adds a CCU driver for the RTC in the H616 and R329.
> >> The extra patches at the end of this series show how it would be
> >> expanded to additional hardware variants.
> >>
> >> The driver is intended to support the existing binding used for the H6,
> >> but also an updated binding which includes all RTC input clocks. I do
> >> not know how to best represent that binding -- that is a major reason
> >> why this series is an RFC.
> >>
> >> A future patch series could add functionality to the driver to manage
> >> IOSC calibration at boot and during suspend/resume.
> >>
> >> It may be possible to support all of these hardware variants in the
> >> existing RTC clock driver and avoid some duplicate code, but I'm
> >> concerned about the complexity there, without any of the CCU
> >> abstraction.
> >>
> >> This series is currently based on top of the other series I just sent
> >> (clk: sunxi-ng: Lifetime fixes and module support), but I can rebase it
> >> elsewhere.
> > 
> > I'm generally ok with this, it makes sense to move it to sunxi-ng,
> > especially with that other series of yours.
> > 
> > My main concern about this is the split driver approach. We used to have
> > that before in the RTC too, but it was mostly due to the early clock
> > requirements. With your previous work, that requirement is not there
> > anymore and we can just register it as a device just like the other
> > clock providers.
> 
> That's a good point. Originally, I had this RTC CCU providing osc24M, so
> it did need to be an early provider. But with the current version, we
> could have the RTC platform driver call devm_sunxi_ccu_probe. That does
> seem cleaner.
> 
> (Since it wasn't immediately obvious to me why this works: the only
> early provider remaining is the sun5i CCU, and it doesn't use the sun6i
> RTC driver.)
> 
> > And since we can register all those clocks at device probe time, we
> > don't really need to split the driver in two (and especially in two
> > different places). The only obstacle to this after your previous series
> > is that we don't have of_sunxi_ccu_probe / devm_sunxi_ccu_probe
> > functions public, but that can easily be fixed by moving their
> > definition to include/linux/clk/sunxi-ng.h
> 
> Where are you thinking the clock definitions would go? We don't export
> any of those structures (ccu_mux, ccu_common) or macros
> (SUNXI_CCU_GATE_DATA) in a public header either.

Ah, right...

> Would you want to export those? That seems like a lot of churn. Or would
> we put the CCU descriptions in drivers/clk/sunxi-ng and export a
> function that the RTC driver can call? (Or some other idea?)

I guess we could export it. There's some fairly big headers in
include/linux/clk already (tegra and ti), it's not uAPI and we do have
reasons to do so, so I guess it's fine.

I'd like to avoid having two drivers for the same device if possible,
especially in two separate places. This creates some confusion since the
general expectation is that there's only one driver per device. There's
also the fact that this could lead to subtle bugs since the probe order
is the link order (or module loading).

And synchronizing access to registers between those two drivers will be
hard, while we could just share the same spin lock between the RTC and
clock drivers if they are instantiated in the same place.

Maxime
Samuel Holland Sept. 28, 2021, 7:46 a.m. UTC | #4
On 9/9/21 3:45 AM, Maxime Ripard wrote:
> On Fri, Sep 03, 2021 at 10:21:13AM -0500, Samuel Holland wrote:
>> On 9/3/21 9:50 AM, Maxime Ripard wrote:
>>> Hi,
>>>
>>> On Wed, Sep 01, 2021 at 12:39:44AM -0500, Samuel Holland wrote:
>>>> This patch series adds a CCU driver for the RTC in the H616 and R329.
>>>> The extra patches at the end of this series show how it would be
>>>> expanded to additional hardware variants.
>>>>
>>>> The driver is intended to support the existing binding used for the H6,
>>>> but also an updated binding which includes all RTC input clocks. I do
>>>> not know how to best represent that binding -- that is a major reason
>>>> why this series is an RFC.
>>>>
>>>> A future patch series could add functionality to the driver to manage
>>>> IOSC calibration at boot and during suspend/resume.
>>>>
>>>> It may be possible to support all of these hardware variants in the
>>>> existing RTC clock driver and avoid some duplicate code, but I'm
>>>> concerned about the complexity there, without any of the CCU
>>>> abstraction.
>>>>
>>>> This series is currently based on top of the other series I just sent
>>>> (clk: sunxi-ng: Lifetime fixes and module support), but I can rebase it
>>>> elsewhere.
>>>
>>> I'm generally ok with this, it makes sense to move it to sunxi-ng,
>>> especially with that other series of yours.
>>>
>>> My main concern about this is the split driver approach. We used to have
>>> that before in the RTC too, but it was mostly due to the early clock
>>> requirements. With your previous work, that requirement is not there
>>> anymore and we can just register it as a device just like the other
>>> clock providers.
>>
>> That's a good point. Originally, I had this RTC CCU providing osc24M, so
>> it did need to be an early provider. But with the current version, we
>> could have the RTC platform driver call devm_sunxi_ccu_probe. That does
>> seem cleaner.
>>
>> (Since it wasn't immediately obvious to me why this works: the only
>> early provider remaining is the sun5i CCU, and it doesn't use the sun6i
>> RTC driver.)
>>
>>> And since we can register all those clocks at device probe time, we
>>> don't really need to split the driver in two (and especially in two
>>> different places). The only obstacle to this after your previous series
>>> is that we don't have of_sunxi_ccu_probe / devm_sunxi_ccu_probe
>>> functions public, but that can easily be fixed by moving their
>>> definition to include/linux/clk/sunxi-ng.h
>>
>> Where are you thinking the clock definitions would go? We don't export
>> any of those structures (ccu_mux, ccu_common) or macros
>> (SUNXI_CCU_GATE_DATA) in a public header either.
> 
> Ah, right...
> 
>> Would you want to export those? That seems like a lot of churn. Or would
>> we put the CCU descriptions in drivers/clk/sunxi-ng and export a
>> function that the RTC driver can call? (Or some other idea?)
> 
> I guess we could export it. There's some fairly big headers in
> include/linux/clk already (tegra and ti), it's not uAPI and we do have
> reasons to do so, so I guess it's fine.
> 
> I'd like to avoid having two drivers for the same device if possible,
> especially in two separate places. This creates some confusion since the
> general expectation is that there's only one driver per device. There's
> also the fact that this could lead to subtle bugs since the probe order
> is the link order (or module loading).

I don't think there can be two "struct device"s for a single OF node. So
if the CCU part is in drivers/clk/sunxi-ng, the CCU "probe" function
would have to be called from the RTC driver. Since there has to be
cooperation anyway, I don't think there would be any ordering problems.

> And synchronizing access to registers between those two drivers will be
> hard, while we could just share the same spin lock between the RTC and
> clock drivers if they are instantiated in the same place.

While the RTC driver currently shares a spinlock between the clock part
and the RTC part, there isn't actually any overlap in register usage
between the two. So there doesn't need to be any synchronization.

Regards,
Samuel
Maxime Ripard Sept. 28, 2021, 9:06 a.m. UTC | #5
Hi,

On Tue, Sep 28, 2021 at 02:46:39AM -0500, Samuel Holland wrote:
> On 9/9/21 3:45 AM, Maxime Ripard wrote:
> > On Fri, Sep 03, 2021 at 10:21:13AM -0500, Samuel Holland wrote:
> >> On 9/3/21 9:50 AM, Maxime Ripard wrote:
> >>> Hi,
> >>>
> >>> On Wed, Sep 01, 2021 at 12:39:44AM -0500, Samuel Holland wrote:
> >>>> This patch series adds a CCU driver for the RTC in the H616 and R329.
> >>>> The extra patches at the end of this series show how it would be
> >>>> expanded to additional hardware variants.
> >>>>
> >>>> The driver is intended to support the existing binding used for the H6,
> >>>> but also an updated binding which includes all RTC input clocks. I do
> >>>> not know how to best represent that binding -- that is a major reason
> >>>> why this series is an RFC.
> >>>>
> >>>> A future patch series could add functionality to the driver to manage
> >>>> IOSC calibration at boot and during suspend/resume.
> >>>>
> >>>> It may be possible to support all of these hardware variants in the
> >>>> existing RTC clock driver and avoid some duplicate code, but I'm
> >>>> concerned about the complexity there, without any of the CCU
> >>>> abstraction.
> >>>>
> >>>> This series is currently based on top of the other series I just sent
> >>>> (clk: sunxi-ng: Lifetime fixes and module support), but I can rebase it
> >>>> elsewhere.
> >>>
> >>> I'm generally ok with this, it makes sense to move it to sunxi-ng,
> >>> especially with that other series of yours.
> >>>
> >>> My main concern about this is the split driver approach. We used to have
> >>> that before in the RTC too, but it was mostly due to the early clock
> >>> requirements. With your previous work, that requirement is not there
> >>> anymore and we can just register it as a device just like the other
> >>> clock providers.
> >>
> >> That's a good point. Originally, I had this RTC CCU providing osc24M, so
> >> it did need to be an early provider. But with the current version, we
> >> could have the RTC platform driver call devm_sunxi_ccu_probe. That does
> >> seem cleaner.
> >>
> >> (Since it wasn't immediately obvious to me why this works: the only
> >> early provider remaining is the sun5i CCU, and it doesn't use the sun6i
> >> RTC driver.)
> >>
> >>> And since we can register all those clocks at device probe time, we
> >>> don't really need to split the driver in two (and especially in two
> >>> different places). The only obstacle to this after your previous series
> >>> is that we don't have of_sunxi_ccu_probe / devm_sunxi_ccu_probe
> >>> functions public, but that can easily be fixed by moving their
> >>> definition to include/linux/clk/sunxi-ng.h
> >>
> >> Where are you thinking the clock definitions would go? We don't export
> >> any of those structures (ccu_mux, ccu_common) or macros
> >> (SUNXI_CCU_GATE_DATA) in a public header either.
> > 
> > Ah, right...
> > 
> >> Would you want to export those? That seems like a lot of churn. Or would
> >> we put the CCU descriptions in drivers/clk/sunxi-ng and export a
> >> function that the RTC driver can call? (Or some other idea?)
> > 
> > I guess we could export it. There's some fairly big headers in
> > include/linux/clk already (tegra and ti), it's not uAPI and we do have
> > reasons to do so, so I guess it's fine.
> > 
> > I'd like to avoid having two drivers for the same device if possible,
> > especially in two separate places. This creates some confusion since the
> > general expectation is that there's only one driver per device. There's
> > also the fact that this could lead to subtle bugs since the probe order
> > is the link order (or module loading).
> 
> I don't think there can be two "struct device"s for a single OF node.

That's not what I meant, there's indeed a single of_node for a single
struct device. If we dig a bit into the core framework, the most likely
scenario is that we would register both the RTC and clock driver at
module_init, and with the device already created with its of_node set
during the initial DT parsing.

We register our platform driver using module_platform_driver, which
expands to calling driver_register() at module_init(), setting the
driver bus to the platform_bus in the process (in
__platform_driver_register()).

After some sanity check, driver_register() calls bus_add_driver(), which
will call driver_attach() if drivers_autoprobe is set (which is the
default, set into bus_register()).

driver_attach() will, for each device on the platform bus, call
__driver_attach(). If there's a match between that device and our driver
(which is evaluated by platform_match() in our case), we'll call our
driver probe with that device through driver_probe_device(),
__driver_probe_device() and finally really_probe().

However, at no point in time there's any check about whether that device
has already been bound to a driver, nor does it create a new device for
each driver. So this means that, if you have two drivers that match the
same device (like our clock and RTC drivers), you'll have both probes
called with the same device, and the probe order will be defined
by the link order. Worse, they would share the same driver_data, with
each driver not being aware of the other. This is incredibly fragile,
and hard to notice since it goes against the usual expectations.
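
To make the failure mode concrete, nothing below is taken from the series; it is just an illustration of the double-match situation being described, with two platform drivers sharing one of_match_table:

```c
/* Illustration only: two platform drivers matching the same device
 * node. Which probe runs first depends solely on link order or module
 * load order, and both would see the same struct device (and the same
 * drvdata). */
static const struct of_device_id sun6i_rtc_match[] = {
	{ .compatible = "allwinner,sun6i-a31-rtc" },
	{ /* sentinel */ }
};

static struct platform_driver sun6i_rtc_driver = {
	.probe	= sun6i_rtc_probe,
	.driver	= {
		.name		= "sun6i-rtc",
		.of_match_table	= sun6i_rtc_match,
	},
};

/* A second driver, elsewhere in the tree, matching the same node: */
static struct platform_driver sun6i_rtc_ccu_driver = {
	.probe	= sun6i_rtc_ccu_probe,
	.driver	= {
		.name		= "sun6i-rtc-ccu",
		.of_match_table	= sun6i_rtc_match,	/* same table */
	},
};
```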

> So if the CCU part is in drivers/clk/sunxi-ng, the CCU "probe"
> function would have to be called from the RTC driver.

No, it would be called by the core directly if there's a compatible to
match.

> Since there has to be cooperation anyway, I don't think there would be
> any ordering problems.

My initial point was that, with a direct function call, it's both
deterministic and obvious.

> > And synchronizing access to registers between those two drivers will be
> > hard, while we could just share the same spin lock between the RTC and
> > clock drivers if they are instantiated in the same place.
> 
> While the RTC driver currently shares a spinlock between the clock part
> and the RTC part, there isn't actually any overlap in register usage
> between the two. So there doesn't need to be any synchronization.

I know, but this was more of a social problem than a technical one. Each
contributor and reviewer in the future will have to know or remember
that it's there, and make sure that it's still the case after any change
they make or review.

This is again a fairly fragile assumption.

Maxime
Samuel Holland Sept. 29, 2021, 3:54 a.m. UTC | #6
Hi Maxime,

Thanks for your reply.

On 9/28/21 4:06 AM, Maxime Ripard wrote:
> On Tue, Sep 28, 2021 at 02:46:39AM -0500, Samuel Holland wrote:
>> On 9/9/21 3:45 AM, Maxime Ripard wrote:
>>> On Fri, Sep 03, 2021 at 10:21:13AM -0500, Samuel Holland wrote:
>>>> On 9/3/21 9:50 AM, Maxime Ripard wrote:
>>>>> And since we can register all those clocks at device probe time, we
>>>>> don't really need to split the driver in two (and especially in two
>>>>> different places). The only obstacle to this after your previous series
>>>>> is that we don't have of_sunxi_ccu_probe / devm_sunxi_ccu_probe
>>>>> functions public, but that can easily be fixed by moving their
>>>>> definition to include/linux/clk/sunxi-ng.h
>>>>
>>>> Where are you thinking the clock definitions would go? We don't export
>>>> any of those structures (ccu_mux, ccu_common) or macros
>>>> (SUNXI_CCU_GATE_DATA) in a public header either.
>>>
>>> Ah, right...
>>>
>>>> Would you want to export those? That seems like a lot of churn. Or would
>>>> we put the CCU descriptions in drivers/clk/sunxi-ng and export a
>>>> function that the RTC driver can call? (Or some other idea?)
>>>
>>> I guess we could export it. There's some fairly big headers in
>>> include/linux/clk already (tegra and ti), it's not uAPI and we do have
>>> reasons to do so, so I guess it's fine.
>>>
>>> I'd like to avoid having two drivers for the same device if possible,
>>> especially in two separate places. This creates some confusion since the
>>> general expectation is that there's only one driver per device. There's
>>> also the fact that this could lead to subtle bugs since the probe order
>>> is the link order (or module loading).
>>
>> I don't think there can be two "struct device"s for a single OF node.
> 
> That's not what I meant, there's indeed a single of_node for a single
> struct device. If we dig a bit into the core framework, the most likely
> scenario is that we would register both the RTC and clock driver at
> module_init, and with the device already created with its of_node set
> during the initial DT parsing.
> 
> We register our platform driver using module_platform_driver, which
> expands to calling driver_register() at module_init(), setting the
> driver bus to the platform_bus in the process (in
> __platform_driver_register()).
> 
> After some sanity check, driver_register() calls bus_add_driver(), which
> will call driver_attach() if drivers_autoprobe is set (which is the
> default, set into bus_register()).
> 
> driver_attach() will, for each device on the platform bus, call
> __driver_attach(). If there's a match between that device and our driver
> (which is evaluated by platform_match() in our case), we'll call our
> driver probe with that device through driver_probe_device(),
> __driver_probe_device() and finally really_probe().
> 
> However, at no point in time there's any check about whether that device
> has already been bound to a driver, nor does it create a new device for
> each driver.

I would expect this to hit the:

	if (dev->driver)
		return -EBUSY;

in __driver_probe_device(), or fail the "if (!dev->driver)" check in
__driver_attach() for the async case, once the first driver is bound.

> So this means that, if you have two drivers that match the
> same device (like our clock and RTC drivers), you'll have both probes
> called with the same device, and the probe order will be defined
> by the link order. Worse, they would share the same driver_data, with
> each driver not being aware of the other. This is incredibly fragile,
> and hard to notice since it goes against the usual expectations.
> 
>> So if the CCU part is in drivers/clk/sunxi-ng, the CCU "probe"
>> function would have to be called from the RTC driver.
> 
> No, it would be called by the core directly if there's a compatible to
> match.
> 
>> Since there has to be cooperation anyway, I don't think there would be
>> any ordering problems.
> 
> My initial point was that, with a direct function call, it's both
> deterministic and obvious.

I believe I did what you are suggesting for v2. From patch 7:

--- a/drivers/rtc/rtc-sun6i.c
+++ b/drivers/rtc/rtc-sun6i.c
@@ -683,6 +684,10 @@ static int sun6i_rtc_probe(struct platform_device *pdev)
 		chip->base = devm_platform_ioremap_resource(pdev, 0);
 		if (IS_ERR(chip->base))
 			return PTR_ERR(chip->base);
+
+		ret = sun6i_rtc_ccu_probe(&pdev->dev, chip->base);
+		if (ret)
+			return ret;
 	}

 	platform_set_drvdata(pdev, chip);

>>> And synchronizing access to registers between those two drivers will be
>>> hard, while we could just share the same spin lock between the RTC and
>>> clock drivers if they are instantiated in the same place.
>>
>> While the RTC driver currently shares a spinlock between the clock part
>> and the RTC part, there isn't actually any overlap in register usage
>> between the two. So there doesn't need to be any synchronization.
> 
> I know, but this was more of a social problem than a technical one. Each
> contributor and reviewer in the future will have to know or remember
> that it's there, and make sure that it's still the case after any change
> they make or review.
> 
> This is again a fairly fragile assumption.

Yeah, I agree that having a lock that is only sometimes safe to use with
certain registers is quite fragile.

Would splitting the spinlock in rtc-sun6i.c into "losc_lock" (for the
clock provider) and "alarm_lock" (for the RTC driver) make this
distinction clear enough?

Eventually, I want to split up the struct between the clock provider and
RTC driver so it's clear which members belong to whom, and there's no
ugly global pointer use. Maybe I should do this first?
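
As a sketch of that split (the field names are illustrative, not taken from the driver):

```c
/* Hypothetical split of the shared state in rtc-sun6i.c, so each half
 * owns its own lock and it is obvious which registers it guards. */
struct sun6i_rtc_clk {
	void __iomem		*base;
	spinlock_t		losc_lock;	/* guards LOSC control only */
	struct clk_hw		hw;
};

struct sun6i_rtc_dev {
	void __iomem		*base;
	spinlock_t		alarm_lock;	/* guards alarm/IRQ registers only */
	struct rtc_device	*rtc;
};
```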

Regards,
Samuel
Maxime Ripard Oct. 25, 2021, 3:54 p.m. UTC | #7
On Tue, Sep 28, 2021 at 10:54:26PM -0500, Samuel Holland wrote:
> Hi Maxime,
> 
> Thanks for your reply.
> 
> On 9/28/21 4:06 AM, Maxime Ripard wrote:
> > On Tue, Sep 28, 2021 at 02:46:39AM -0500, Samuel Holland wrote:
> >> On 9/9/21 3:45 AM, Maxime Ripard wrote:
> >>> On Fri, Sep 03, 2021 at 10:21:13AM -0500, Samuel Holland wrote:
> >>>> On 9/3/21 9:50 AM, Maxime Ripard wrote:
> >>>>> And since we can register all those clocks at device probe time, we
> >>>>> don't really need to split the driver in two (and especially in two
> >>>>> different places). The only obstacle to this after your previous series
> >>>>> is that we don't have of_sunxi_ccu_probe / devm_sunxi_ccu_probe
> >>>>> functions public, but that can easily be fixed by moving their
> >>>>> definition to include/linux/clk/sunxi-ng.h
> >>>>
> >>>> Where are you thinking the clock definitions would go? We don't export
> >>>> any of those structures (ccu_mux, ccu_common) or macros
> >>>> (SUNXI_CCU_GATE_DATA) in a public header either.
> >>>
> >>> Ah, right...
> >>>
> >>>> Would you want to export those? That seems like a lot of churn. Or would
> >>>> we put the CCU descriptions in drivers/clk/sunxi-ng and export a
> >>>> function that the RTC driver can call? (Or some other idea?)
> >>>
> >>> I guess we could export it. There's some fairly big headers in
> >>> include/linux/clk already (tegra and ti), it's not uAPI and we do have
> >>> reasons to do so, so I guess it's fine.
> >>>
> >>> I'd like to avoid having two drivers for the same device if possible,
> >>> especially in two separate places. This creates some confusion since the
> >>> general expectation is that there's only one driver per device. There's
> >>> also the fact that this could lead to subtle bugs since the probe order
> >>> is the link order (or module loading).
> >>
> >> I don't think there can be two "struct device"s for a single OF node.
> > 
> > That's not what I meant, there's indeed a single of_node for a single
> > struct device. If we dig a bit into the core framework, the most likely
> > scenario is that we would register both the RTC and clock driver at
> > module_init, and with the device already created with its of_node set
> > during the initial DT parsing.
> > 
> > We register our platform driver using module_platform_driver, which
> > expands to calling driver_register() at module_init(), setting the
> > driver bus to the platform_bus in the process (in
> > __platform_driver_register()).
> > 
> > After some sanity check, driver_register() calls bus_add_driver(), which
> > will call driver_attach() if drivers_autoprobe is set (which is the
> > default, set into bus_register()).
> > 
> > driver_attach() will, for each device on the platform bus, call
> > __driver_attach(). If there's a match between that device and our driver
> > (which is evaluated by platform_match() in our case), we'll call our
> > driver probe with that device through driver_probe_device(),
> > __driver_probe_device() and finally really_probe().
> > 
> > However, at no point in time there's any check about whether that device
> > has already been bound to a driver, nor does it create a new device for
> > each driver.
> 
> I would expect this to hit the:
> 
> 	if (dev->driver)
> 		return -EBUSY;
> 
> in __driver_probe_device(), or fail the "if (!dev->driver)" check in
> __driver_attach() for the async case, once the first driver is bound.

Hmmm, it might. I know we "leveraged" this some time ago for another
platform, but it might not be working anymore indeed.

> > So this means that, if you have two drivers that match the
> > same device (like our clock and RTC drivers), you'll have both probes
> > called with the same device, and the probe order will be defined
> > by the link order. Worse, they would share the same driver_data, with
> > each driver not being aware of the other. This is incredibly fragile,
> > and hard to notice since it goes against the usual expectations.
> > 
> >> So if the CCU part is in drivers/clk/sunxi-ng, the CCU "probe"
> >> function would have to be called from the RTC driver.
> > 
> > No, it would be called by the core directly if there's a compatible to
> > match.
> > 
> >> Since there has to be cooperation anyway, I don't think there would be
> >> any ordering problems.
> > 
> > My initial point was that, with a direct function call, it's both
> > deterministic and obvious.
> 
> I believe I did what you are suggesting for v2. From patch 7:
> 
> --- a/drivers/rtc/rtc-sun6i.c
> +++ b/drivers/rtc/rtc-sun6i.c
> @@ -683,6 +684,10 @@ static int sun6i_rtc_probe(struct platform_device *pdev)
>  		chip->base = devm_platform_ioremap_resource(pdev, 0);
>  		if (IS_ERR(chip->base))
>  			return PTR_ERR(chip->base);
> +
> +		ret = sun6i_rtc_ccu_probe(&pdev->dev, chip->base);
> +		if (ret)
> +			return ret;
>  	}

Ah, sorry, I entirely missed it. Yes, that's totally fine by me then. I'd
prefer to have the spinlock passed as an argument as well, but it can be
done in a follow-up patch.

>  	platform_set_drvdata(pdev, chip);
> 
> >>> And synchronizing access to registers between those two drivers will be
> >>> hard, while we could just share the same spin lock between the RTC and
> >>> clock drivers if they are instantiated in the same place.
> >>
> >> While the RTC driver currently shares a spinlock between the clock part
> >> and the RTC part, there isn't actually any overlap in register usage
> >> between the two. So there doesn't need to be any synchronization.
> > 
> > I know, but this was more of a social problem than a technical one. Each
> > contributor and reviewer in the future will have to know or remember
> > that it's there, and make sure that it's still the case after any change
> > they make or review.
> > 
> > This is again a fairly fragile assumption.
> 
> Yeah, I agree that having a lock that is only sometimes safe to use with
> certain registers is quite fragile.
> 
> Would splitting the spinlock in rtc-sun6i.c into "losc_lock" (for the
> clock provider) and "alarm_lock" (for the RTC driver) make this
> distinction clear enough?
> 
> Eventually, I want to split up the struct between the clock provider and
> RTC driver so it's clear which members belong to whom, and there's no
> ugly global pointer use. Maybe I should do this first?

Yeah, it sounds like a good plan

Thanks!
Maxime