| Message ID | 20231205103559.9605-10-fancer.lancer@gmail.com (mailing list archive) |
|---|---|
| State | Changes Requested |
| Delegated to: | Netdev Maintainers |
| Series | net: pcs: xpcs: Add memory-based management iface support |
Hi Serge,

On Tue, 5 Dec 2023 13:35:30 +0300
Serge Semin <fancer.lancer@gmail.com> wrote:

> Synopsys DesignWare XPCS IP-core can be synthesized with the device CSRs
> being accessible over the MCI or APB3 interface instead of the MDIO bus
> (see the CSR_INTERFACE HDL parameter). Thus all the PCS registers can be
> just memory mapped and be a subject of the standard MMIO operations, of
> course taking into account the way the Clause 45 CSRs mapping is defined.
> This commit adds a device driver for the DW XPCS Management Interface
> platform device and registers it in the framework of the kernel MDIO
> subsystem.
>
> The DW XPCS platform device is supposed to be described by the respective
> compatible string "snps,dw-xpcs-mi", the CSRs memory space and an optional
> peripheral bus clock source. Note that depending on the INDIRECT_ACCESS
> DW XPCS IP-core synthesis parameter the memory-mapped reg-space can be
> represented as either a directly or an indirectly mapped Clause 45 space.
> In the former case the particular address is determined based on the MMD
> device and the register offset (5 + 16 bits all together) within the
> device reg-space. In the latter case only the 256 lower addresses are
> utilized for the registers mapping; the upper bits are supposed to be
> written into the respective viewport CSR in order to reach the entire C45
> space.

Too bad the mdio-regmap driver can't be re-used here, it would deal
with reg width for you, for example. I guess the main reason would be
the direct vs indirect accesses?

I do have a comment though:

[...]

> +static inline ptrdiff_t dw_xpcs_mmio_addr_format(int dev, int reg)
> +{
> +        return FIELD_PREP(0x1f0000, dev) | FIELD_PREP(0xffff, reg);
> +}
> +
> +static inline u16 dw_xpcs_mmio_addr_page(ptrdiff_t csr)
> +{
> +        return FIELD_GET(0x1fff00, csr);
> +}
> +
> +static inline ptrdiff_t dw_xpcs_mmio_addr_offset(ptrdiff_t csr)
> +{
> +        return FIELD_GET(0xff, csr);
> +}

You shouldn't use inline in C files, only in headers.

Maxime
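For background on the suggestion above: mdio-regmap (drivers/net/mdio/mdio-regmap.c) wraps an existing regmap behind a standard mii_bus, so register width and stride are handled by the regmap layer, while the bus it registers only implements the C22 ops (which is why the thread below mentions having to add C45 support to it). A minimal sketch of that usual reuse path follows; the mdio_regmap_config field names, header path and the devm_mdio_regmap_register() signature are quoted from memory and should be treated as assumptions, and the example_* function is hypothetical.

#include <linux/err.h>
#include <linux/mdio/mdio-regmap.h>   /* assumed header location */
#include <linux/phy.h>
#include <linux/regmap.h>

/* Hypothetical sketch only: how a driver normally reuses mdio-regmap. */
static int example_mdio_regmap_setup(struct device *dev, void __iomem *base)
{
        struct regmap_config rcfg = {
                .reg_bits = 32,
                .val_bits = 16,     /* reg width handled here, not in the MDIO driver */
                .reg_stride = 4,
        };
        struct mdio_regmap_config mrc = { };
        struct regmap *regmap;
        struct mii_bus *bus;

        regmap = devm_regmap_init_mmio(dev, base, &rcfg);
        if (IS_ERR(regmap))
                return PTR_ERR(regmap);

        mrc.regmap = regmap;
        mrc.parent = dev;
        mrc.valid_addr = 0;          /* single device behind the bus */
        snprintf(mrc.name, MII_BUS_ID_SIZE, "example-regmap-mdio");

        bus = devm_mdio_regmap_register(dev, &mrc);
        return PTR_ERR_OR_ZERO(bus);
}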
Hi Maxime,

On Tue, Dec 05, 2023 at 01:32:05PM +0100, Maxime Chevallier wrote:
> Hi Serge,
>
> On Tue, 5 Dec 2023 13:35:30 +0300
> Serge Semin <fancer.lancer@gmail.com> wrote:
>
> > Synopsys DesignWare XPCS IP-core can be synthesized with the device CSRs
> > being accessible over the MCI or APB3 interface instead of the MDIO bus
> > (see the CSR_INTERFACE HDL parameter). [...]
>
> Too bad the mdio-regmap driver can't be re-used here, it would deal
> with reg width for you, for example. I guess the main reason would be
> the direct vs indirect accesses?

Right, it's one of the reasons. I could have used mdio-regmap here, but
that would have meant implementing an additional abstraction layer:
regspace<->regmap<->mdio-regmap<->mii_bus, instead of just
regspace<->mii_bus. It would have also required patching the mdio-regmap
module somehow in order to have the C45 ops supported. That would have
been much easier to do before commit 99d5fe9c7f3d ("net: mdio: Remove
support for building C45 muxed addresses"), but since then the MDIO
reg-address no longer carries a muxed dev-address.

Of course I could have brought that back for the mdio-regmap driver only,
mixed the C22/C45 address into the regmap 'addr' argument, then extracted
the MMD (for C45) and register addresses from the regmap IO-ops argument
and performed the respective MMIO access. But as you can see that is much
harder and would add extra address-translation steps compared to just
directly implementing the C22/C45 IO ops and registering the MDIO bus for
them. I couldn't find enough benefit to follow that road, sorry.

> I do have a comment though:
>
> [...]
>
> You shouldn't use inline in C files, only in headers.

Could you please clarify why? I couldn't find such a requirement in the
coding-style doc. Moreover there are multiple examples of static inline
helpers in C files in the kernel, including the net/mdio subsystem.

-Serge(y)

> Maxime
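To illustrate the extra translation layer described above — muxing the C45 (MMD, register) pair into the single address that a regmap-backed MDIO bus operates on, and demuxing it again inside the regmap I/O callbacks — here is a rough sketch. The ex_* names and masks are hypothetical (the masks simply mirror the ones used in the patch); this is not code from the series.

#include <linux/bitfield.h>
#include <linux/bits.h>

#define EX_C45_DEV      GENMASK(20, 16)  /* 5-bit MMD device number */
#define EX_C45_REG      GENMASK(15, 0)   /* 16-bit register number */

/* Pack (dev, reg) into the single address mdio-regmap would pass down. */
static unsigned int ex_c45_to_regmap_addr(int dev, int reg)
{
        return FIELD_PREP(EX_C45_DEV, dev) | FIELD_PREP(EX_C45_REG, reg);
}

/* Custom regmap reg_read() callback unpacking the muxed address again. */
static int ex_regmap_reg_read(void *context, unsigned int addr, unsigned int *val)
{
        unsigned int dev = FIELD_GET(EX_C45_DEV, addr);
        unsigned int reg = FIELD_GET(EX_C45_REG, addr);

        /* ...perform the direct or indirect MMIO access based on dev/reg... */
        (void)dev;
        (void)reg;
        *val = 0;
        return 0;
}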
> > You shouldn't use inline in C files, only in headers.
>
> Could you please clarify why? I couldn't find such a requirement in the
> coding-style doc. Moreover there are multiple examples of static inline
> helpers in C files in the kernel, including the net/mdio subsystem.

The compiler does a better job at deciding what to inline than we humans
do. If you can show the compiler is doing it wrong, then we might accept
them. But in general, netdev does not like inline in .c files.

Also, nothing in MDIO is hot path, it spends a lot of time waiting for a
slow bus. So inline is likely to just bloat the code for no gain.

	Andrew
Hi Andrew,

On Wed, Dec 06, 2023 at 06:01:30PM +0100, Andrew Lunn wrote:
> > > You shouldn't use inline in C files, only in headers.
> >
> > Could you please clarify why? I couldn't find such a requirement in the
> > coding-style doc. Moreover there are multiple examples of static inline
> > helpers in C files in the kernel, including the net/mdio subsystem.
>
> The compiler does a better job at deciding what to inline than we humans
> do. If you can show the compiler is doing it wrong, then we might accept
> them.

In general I would have agreed with you, especially if the methods were
heavier than what they are:

static inline ptrdiff_t dw_xpcs_mmio_addr_format(int dev, int reg)
{
        return FIELD_PREP(0x1f0000, dev) | FIELD_PREP(0xffff, reg);
}

static inline u16 dw_xpcs_mmio_addr_page(ptrdiff_t csr)
{
        return FIELD_GET(0x1fff00, csr);
}

static inline ptrdiff_t dw_xpcs_mmio_addr_offset(ptrdiff_t csr)
{
        return FIELD_GET(0xff, csr);
}

> But in general, netdev does not like inline in .c files.

I see. I'll do as you say if you don't change your mind after my
reasoning below.

> Also, nothing in MDIO is hot path, it spends a lot of time waiting for a
> slow bus. So inline is likely to just bloat the code for no gain.

I would have been absolutely with you in this matter if we were talking
about a normal MDIO bus. In this case the devices are accessed over the
system IO-memory, so the bus isn't that slow.

Regarding the compiler knowing better when to inline the code, here is
what it does with the methods above. If the inline keyword is specified,
the compiler inlines all three methods. If the keyword isn't specified,
dw_xpcs_mmio_addr_format() won't be inlined while the other two functions
will be. So the only part under consideration is the
dw_xpcs_mmio_addr_format() method, since the rest of the functions are
inlined anyway.

The dw_xpcs_mmio_addr_format() function body is 5 asm instructions long
(on MIPS). Since a function call in this case requires two jump
instructions (to the function and back), one instruction to save the
previous return address on the stack and two instructions for the
function arguments, the trade-off of having the function non-inlined is
those five additional instructions on each call. There are four
dw_xpcs_mmio_addr_format() calls. So here is what we get in both cases:

inlined:     5 func ins * 4 calls = 20 ins
non-inlined: (5 func + 1 jump) ins + (1 jump + 1 ra + 2 arg) ins * 4 calls = 22 ins

but seeing the return address needs to be saved in the callers anyway,
here is what we finally get:

non-inlined: (5 func + 1 jump) ins + (1 jump + 2 arg) ins * 4 calls = 18 ins

So unless I am mistaken in some of the aspects, if we have the function
non-inlined we'll save 2 instructions in the object file, but each call
will require an additional _4_ instructions to execute (2 jumps and 2 arg
creations), which makes the function execution almost two times longer
than it would have been had it been inlined.

IMO in this case saving 2 instructions in the object file isn't worth
paying four extra instructions on each call, seeing that the DW XPCS
MCI/APB3 buses are memory IO interfaces which don't require any long
waits to perform the ops. Thus I'd suggest keeping the inline keywords
specified here.

What is your conclusion?

-Serge(y)

>
> 	Andrew
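For reference, when the compiler's default inlining decision genuinely needs to be pinned down (the "show the compiler is doing it wrong" case mentioned above), the kernel provides explicit annotations for both directions. A minimal sketch with hypothetical ex_* copies of the helper, not part of the patch:

#include <linux/bitfield.h>
#include <linux/compiler.h>
#include <linux/types.h>

/* Never inlined, regardless of what the optimizer would otherwise pick. */
static noinline ptrdiff_t ex_addr_format_never(int dev, int reg)
{
        return FIELD_PREP(0x1f0000, dev) | FIELD_PREP(0xffff, reg);
}

/* Always inlined, even at -Os or when the heuristics say otherwise. */
static __always_inline ptrdiff_t ex_addr_format_always(int dev, int reg)
{
        return FIELD_PREP(0x1f0000, dev) | FIELD_PREP(0xffff, reg);
}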
On Thu, Dec 07, 2023 at 04:35:47PM +0300, Serge Semin wrote:
> Hi Andrew,
>
> On Wed, Dec 06, 2023 at 06:01:30PM +0100, Andrew Lunn wrote:
> > The compiler does a better job at deciding what to inline than we humans
> > do. If you can show the compiler is doing it wrong, then we might accept
> > them.
>
> In general I would have agreed with you, especially if the methods were
> heavier than what they are:
>
> [...]
>
> > But in general, netdev does not like inline in .c files.
>
> I see. I'll do as you say if you don't change your mind after my
> reasoning below.

This isn't Andrew saying it - you seem to have missed that detail:
"netdev". If Andrew doesn't say it, then DaveM, Jakub or Paolo will.

Have you read the "Inline functions" section in
Documentation/process/4.Coding.rst?

> > Also, nothing in MDIO is hot path, it spends a lot of time waiting for a
> > slow bus. So inline is likely to just bloat the code for no gain.
>
> [...]
>
> So unless I am mistaken in some of the aspects, if we have the function
> non-inlined we'll save 2 instructions in the object file, but each call
> will require an additional _4_ instructions to execute (2 jumps and 2 arg
> creations), which makes the function execution almost two times longer
> than it would have been had it been inlined.

Rather than just focusing on instruction count, you also need to consider
things like branch prediction, prefetching and I-cache usage. Modern CPUs
don't execute instruction-by-instruction anymore. It is entirely possible
that the compiler is making better choices even if it results in more
jumps in the code.
On Thu, Dec 07, 2023 at 04:35:47PM +0300, Serge Semin wrote:
> Hi Andrew,
>
> On Wed, Dec 06, 2023 at 06:01:30PM +0100, Andrew Lunn wrote:
> [...]
> > Also, nothing in MDIO is hot path, it spends a lot of time waiting for a
> > slow bus. So inline is likely to just bloat the code for no gain.
>
> I would have been absolutely with you in this matter if we were talking
> about a normal MDIO bus. In this case the devices are accessed over the
> system IO-memory, so the bus isn't that slow.

O.K., so it's not slow. But how often is it used? PHYs tend to get polled
once a second if interrupts are not used. But is the PCS also polled at
the same time? Does this optimisation make a noticeable difference at
once per second? Do you have a requirement that the system boots very
very fast, and does this optimisation make a difference when there is
heavier activity on the PCS at boot? Is the saving noticeable, when
auto-neg takes a little over 1 second?

The best way to make your case is to show real-world requirements and
benchmark results.

	Andrew
On Thu, Dec 07, 2023 at 03:54:08PM +0100, Andrew Lunn wrote:
> On Thu, Dec 07, 2023 at 04:35:47PM +0300, Serge Semin wrote:
> > [...]
> > I would have been absolutely with you in this matter if we were talking
> > about a normal MDIO bus. In this case the devices are accessed over the
> > system IO-memory, so the bus isn't that slow.
>
> O.K., so it's not slow. But how often is it used? PHYs tend to get polled
> once a second if interrupts are not used. But is the PCS also polled at
> the same time? Does this optimisation make a noticeable difference at
> once per second? Do you have a requirement that the system boots very
> very fast, and does this optimisation make a difference when there is
> heavier activity on the PCS at boot? Is the saving noticeable, when
> auto-neg takes a little over 1 second?
>
> The best way to make your case is to show real-world requirements and
> benchmark results.

You do know how to make your point well.) Polling is indeed what is
currently implemented in the XPCS driver, so in this case such a small
optimization won't even be noticeable. Theoretically the IO-access could
be performed on the fast paths, in the IRQ context, but that would be
relevant only if the DW XPCS device had the IRQ feature activated on the
particular platform and the DW XPCS driver supported it. Seeing that the
driver currently doesn't support it, and that the DW XPCS can also be
found on the DW MAC SMA MDIO bus (the non-memory-mapped case), always
handling the IRQ in the hardware IRQ context would be wrong, and
splitting up the handlers would be an over-complication for an indeed
doubtful reason, since inlining those methods won't bring significant
performance gains even in that case.

And of course I don't have such strict requirements as you describe. So
I'll drop the inline keywords then. Thank you very much for continuing to
discuss the topic in order to fully clarify it for me, even though the
issue could have seemed too unimportant to spend time on.

-Serge(y)

>
> 	Andrew
diff --git a/drivers/net/mdio/Kconfig b/drivers/net/mdio/Kconfig
index 4a7a303be2f7..39f7ce8087bf 100644
--- a/drivers/net/mdio/Kconfig
+++ b/drivers/net/mdio/Kconfig
@@ -185,6 +185,14 @@ config MDIO_IPQ8064
 	  This driver supports the MDIO interface found in the network
 	  interface units of the IPQ8064 SoC
 
+config MDIO_DW_XPCS
+	tristate "Synopsys DesignWare XPCS MI bus support"
+	depends on HAS_IOMEM
+	select MDIO_DEVRES
+	help
+	  This driver supports the MCI/APB3 Management Interface responsible
+	  for communicating with the Synopsys DesignWare XPCS devices.
+
 config MDIO_REGMAP
 	tristate
 	help
diff --git a/drivers/net/mdio/Makefile b/drivers/net/mdio/Makefile
index 1015f0db4531..6389d4c3b862 100644
--- a/drivers/net/mdio/Makefile
+++ b/drivers/net/mdio/Makefile
@@ -10,6 +10,7 @@ obj-$(CONFIG_MDIO_BCM_IPROC)	+= mdio-bcm-iproc.o
 obj-$(CONFIG_MDIO_BCM_UNIMAC)	+= mdio-bcm-unimac.o
 obj-$(CONFIG_MDIO_BITBANG)	+= mdio-bitbang.o
 obj-$(CONFIG_MDIO_CAVIUM)	+= mdio-cavium.o
+obj-$(CONFIG_MDIO_DW_XPCS)	+= mdio-dw-xpcs.o
 obj-$(CONFIG_MDIO_GPIO)		+= mdio-gpio.o
 obj-$(CONFIG_MDIO_HISI_FEMAC)	+= mdio-hisi-femac.o
 obj-$(CONFIG_MDIO_I2C)		+= mdio-i2c.o
diff --git a/drivers/net/mdio/mdio-dw-xpcs.c b/drivers/net/mdio/mdio-dw-xpcs.c
new file mode 100644
index 000000000000..c47f0a54d31b
--- /dev/null
+++ b/drivers/net/mdio/mdio-dw-xpcs.c
@@ -0,0 +1,384 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Synopsys DesignWare XPCS Management Interface driver
+ *
+ * Copyright (C) 2023 BAIKAL ELECTRONICS, JSC
+ */
+
+#include <linux/atomic.h>
+#include <linux/bitfield.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/mdio.h>
+#include <linux/module.h>
+#include <linux/of_mdio.h>
+#include <linux/phy.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/property.h>
+#include <linux/sizes.h>
+
+/* Page select register for the indirect MMIO CSRs access */
+#define DW_VR_CSR_VIEWPORT		0xff
+
+struct dw_xpcs_mi {
+	struct platform_device *pdev;
+	struct mii_bus *bus;
+	bool reg_indir;
+	int reg_width;
+	void __iomem *reg_base;
+	struct clk *pclk;
+};
+
+static inline ptrdiff_t dw_xpcs_mmio_addr_format(int dev, int reg)
+{
+	return FIELD_PREP(0x1f0000, dev) | FIELD_PREP(0xffff, reg);
+}
+
+static inline u16 dw_xpcs_mmio_addr_page(ptrdiff_t csr)
+{
+	return FIELD_GET(0x1fff00, csr);
+}
+
+static inline ptrdiff_t dw_xpcs_mmio_addr_offset(ptrdiff_t csr)
+{
+	return FIELD_GET(0xff, csr);
+}
+
+static int dw_xpcs_mmio_read_reg_indirect(struct dw_xpcs_mi *dxmi,
+					   int dev, int reg)
+{
+	ptrdiff_t csr, ofs;
+	u16 page;
+	int ret;
+
+	csr = dw_xpcs_mmio_addr_format(dev, reg);
+	page = dw_xpcs_mmio_addr_page(csr);
+	ofs = dw_xpcs_mmio_addr_offset(csr);
+
+	ret = pm_runtime_resume_and_get(&dxmi->pdev->dev);
+	if (ret)
+		return ret;
+
+	switch (dxmi->reg_width) {
+	case 4:
+		writel(page, dxmi->reg_base + (DW_VR_CSR_VIEWPORT << 2));
+		ret = readl(dxmi->reg_base + (ofs << 2));
+		break;
+	default:
+		writew(page, dxmi->reg_base + (DW_VR_CSR_VIEWPORT << 1));
+		ret = readw(dxmi->reg_base + (ofs << 1));
+		break;
+	}
+
+	pm_runtime_put(&dxmi->pdev->dev);
+
+	return ret;
+}
+
+static int dw_xpcs_mmio_write_reg_indirect(struct dw_xpcs_mi *dxmi,
+					    int dev, int reg, u16 val)
+{
+	ptrdiff_t csr, ofs;
+	u16 page;
+	int ret;
+
+	csr = dw_xpcs_mmio_addr_format(dev, reg);
+	page = dw_xpcs_mmio_addr_page(csr);
+	ofs = dw_xpcs_mmio_addr_offset(csr);
+
+	ret = pm_runtime_resume_and_get(&dxmi->pdev->dev);
+	if (ret)
+		return ret;
+
+	switch (dxmi->reg_width) {
+	case 4:
+		writel(page, dxmi->reg_base + (DW_VR_CSR_VIEWPORT << 2));
+		writel(val, dxmi->reg_base + (ofs << 2));
+		break;
+	default:
+		writew(page, dxmi->reg_base + (DW_VR_CSR_VIEWPORT << 1));
+		writew(val, dxmi->reg_base + (ofs << 1));
+		break;
+	}
+
+	pm_runtime_put(&dxmi->pdev->dev);
+
+	return 0;
+}
+
+static int dw_xpcs_mmio_read_reg_direct(struct dw_xpcs_mi *dxmi,
+					int dev, int reg)
+{
+	ptrdiff_t csr;
+	int ret;
+
+	csr = dw_xpcs_mmio_addr_format(dev, reg);
+
+	ret = pm_runtime_resume_and_get(&dxmi->pdev->dev);
+	if (ret)
+		return ret;
+
+	switch (dxmi->reg_width) {
+	case 4:
+		ret = readl(dxmi->reg_base + (csr << 2));
+		break;
+	default:
+		ret = readw(dxmi->reg_base + (csr << 1));
+		break;
+	}
+
+	pm_runtime_put(&dxmi->pdev->dev);
+
+	return ret;
+}
+
+static int dw_xpcs_mmio_write_reg_direct(struct dw_xpcs_mi *dxmi,
+					 int dev, int reg, u16 val)
+{
+	ptrdiff_t csr;
+	int ret;
+
+	csr = dw_xpcs_mmio_addr_format(dev, reg);
+
+	ret = pm_runtime_resume_and_get(&dxmi->pdev->dev);
+	if (ret)
+		return ret;
+
+	switch (dxmi->reg_width) {
+	case 4:
+		writel(val, dxmi->reg_base + (csr << 2));
+		break;
+	default:
+		writew(val, dxmi->reg_base + (csr << 1));
+		break;
+	}
+
+	pm_runtime_put(&dxmi->pdev->dev);
+
+	return 0;
+}
+
+static int dw_xpcs_mmio_read_c22(struct mii_bus *bus, int addr, int reg)
+{
+	struct dw_xpcs_mi *dxmi = bus->priv;
+
+	if (addr != 0)
+		return -ENODEV;
+
+	if (dxmi->reg_indir)
+		return dw_xpcs_mmio_read_reg_indirect(dxmi, MDIO_MMD_VEND2, reg);
+	else
+		return dw_xpcs_mmio_read_reg_direct(dxmi, MDIO_MMD_VEND2, reg);
+}
+
+static int dw_xpcs_mmio_write_c22(struct mii_bus *bus, int addr, int reg, u16 val)
+{
+	struct dw_xpcs_mi *dxmi = bus->priv;
+
+	if (addr != 0)
+		return -ENODEV;
+
+	if (dxmi->reg_indir)
+		return dw_xpcs_mmio_write_reg_indirect(dxmi, MDIO_MMD_VEND2, reg, val);
+	else
+		return dw_xpcs_mmio_write_reg_direct(dxmi, MDIO_MMD_VEND2, reg, val);
+}
+
+static int dw_xpcs_mmio_read_c45(struct mii_bus *bus, int addr, int dev, int reg)
+{
+	struct dw_xpcs_mi *dxmi = bus->priv;
+
+	if (addr != 0)
+		return -ENODEV;
+
+	if (dxmi->reg_indir)
+		return dw_xpcs_mmio_read_reg_indirect(dxmi, dev, reg);
+	else
+		return dw_xpcs_mmio_read_reg_direct(dxmi, dev, reg);
+}
+
+static int dw_xpcs_mmio_write_c45(struct mii_bus *bus, int addr, int dev,
+				  int reg, u16 val)
+{
+	struct dw_xpcs_mi *dxmi = bus->priv;
+
+	if (addr != 0)
+		return -ENODEV;
+
+	if (dxmi->reg_indir)
+		return dw_xpcs_mmio_write_reg_indirect(dxmi, dev, reg, val);
+	else
+		return dw_xpcs_mmio_write_reg_direct(dxmi, dev, reg, val);
+}
+
+static struct dw_xpcs_mi *dw_xpcs_mi_create_data(struct platform_device *pdev)
+{
+	struct dw_xpcs_mi *dxmi;
+
+	dxmi = devm_kzalloc(&pdev->dev, sizeof(*dxmi), GFP_KERNEL);
+	if (!dxmi)
+		return ERR_PTR(-ENOMEM);
+
+	dxmi->pdev = pdev;
+
+	dev_set_drvdata(&pdev->dev, dxmi);
+
+	return dxmi;
+}
+
+static int dw_xpcs_mi_init_res(struct dw_xpcs_mi *dxmi)
+{
+	struct device *dev = &dxmi->pdev->dev;
+	struct resource *res;
+
+	if (!device_property_read_u32(dev, "reg-io-width", &dxmi->reg_width)) {
+		if (dxmi->reg_width != 2 && dxmi->reg_width != 4) {
+			dev_err(dev, "Invalid regspace data width\n");
+			return -EINVAL;
+		}
+	} else {
+		dxmi->reg_width = 2;
+	}
+
+	res = platform_get_resource_byname(dxmi->pdev, IORESOURCE_MEM, "direct") ?:
+	      platform_get_resource_byname(dxmi->pdev, IORESOURCE_MEM, "indirect");
+	if (!res) {
+		dev_err(dev, "No regspace found\n");
+		return -EINVAL;
+	}
+
+	if (!strcmp(res->name, "indirect"))
+		dxmi->reg_indir = true;
+
+	if ((dxmi->reg_indir && resource_size(res) < dxmi->reg_width * SZ_256) ||
+	    (!dxmi->reg_indir && resource_size(res) < dxmi->reg_width * SZ_2M)) {
+		dev_err(dev, "Invalid regspace size\n");
+		return -EINVAL;
+	}
+
+	dxmi->reg_base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(dxmi->reg_base)) {
+		dev_err(dev, "Failed to map regspace\n");
+		return PTR_ERR(dxmi->reg_base);
+	}
+
+	return 0;
+}
+
+static int dw_xpcs_mi_init_clk(struct dw_xpcs_mi *dxmi)
+{
+	struct device *dev = &dxmi->pdev->dev;
+	int ret;
+
+	dxmi->pclk = devm_clk_get_optional(dev, "pclk");
+	if (IS_ERR(dxmi->pclk))
+		return dev_err_probe(dev, PTR_ERR(dxmi->pclk),
+				     "Failed to get ref clock\n");
+
+	pm_runtime_set_active(dev);
+	ret = devm_pm_runtime_enable(dev);
+	if (ret) {
+		dev_err(dev, "Failed to enable runtime-PM\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int dw_xpcs_mi_init_mdio(struct dw_xpcs_mi *dxmi)
+{
+	struct device *dev = &dxmi->pdev->dev;
+	static atomic_t id = ATOMIC_INIT(-1);
+	int ret;
+
+	dxmi->bus = devm_mdiobus_alloc_size(dev, 0);
+	if (!dxmi->bus)
+		return -ENOMEM;
+
+	dxmi->bus->name = "DW XPCS MI";
+	dxmi->bus->read = dw_xpcs_mmio_read_c22;
+	dxmi->bus->write = dw_xpcs_mmio_write_c22;
+	dxmi->bus->read_c45 = dw_xpcs_mmio_read_c45;
+	dxmi->bus->write_c45 = dw_xpcs_mmio_write_c45;
+	dxmi->bus->phy_mask = ~0;
+	dxmi->bus->parent = dev;
+	dxmi->bus->priv = dxmi;
+
+	snprintf(dxmi->bus->id, MII_BUS_ID_SIZE,
+		 "dwxpcs-%x", atomic_inc_return(&id));
+
+	ret = devm_of_mdiobus_register(dev, dxmi->bus, dev_of_node(dev));
+	if (ret) {
+		dev_err(dev, "Failed to create MDIO bus\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int dw_xpcs_mi_probe(struct platform_device *pdev)
+{
+	struct dw_xpcs_mi *dxmi;
+	int ret;
+
+	dxmi = dw_xpcs_mi_create_data(pdev);
+	if (IS_ERR(dxmi))
+		return PTR_ERR(dxmi);
+
+	ret = dw_xpcs_mi_init_res(dxmi);
+	if (ret)
+		return ret;
+
+	ret = dw_xpcs_mi_init_clk(dxmi);
+	if (ret)
+		return ret;
+
+	ret = dw_xpcs_mi_init_mdio(dxmi);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int __maybe_unused dw_xpcs_mi_pm_runtime_suspend(struct device *dev)
+{
+	struct dw_xpcs_mi *dxmi = dev_get_drvdata(dev);
+
+	clk_disable_unprepare(dxmi->pclk);
+
+	return 0;
+}
+
+static int __maybe_unused dw_xpcs_mi_pm_runtime_resume(struct device *dev)
+{
+	struct dw_xpcs_mi *dxmi = dev_get_drvdata(dev);
+
+	return clk_prepare_enable(dxmi->pclk);
+}
+
+const struct dev_pm_ops dw_xpcs_mi_pm_ops = {
+	SET_RUNTIME_PM_OPS(dw_xpcs_mi_pm_runtime_suspend, dw_xpcs_mi_pm_runtime_resume, NULL)
+};
+
+static const struct of_device_id dw_xpcs_mi_of_ids[] = {
+	{ .compatible = "snps,dw-xpcs-mi" },
+	{ /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, dw_xpcs_mi_of_ids);
+
+static struct platform_driver dw_xpcs_mi_driver = {
+	.probe = dw_xpcs_mi_probe,
+	.driver = {
+		.name = "dw-xpcs-mi",
+		.pm = &dw_xpcs_mi_pm_ops,
+		.of_match_table = dw_xpcs_mi_of_ids,
+	},
+};
+
+module_platform_driver(dw_xpcs_mi_driver);
+
+MODULE_DESCRIPTION("Synopsys DesignWare XPCS Management Interface driver");
+MODULE_AUTHOR("Serge Semin <Sergey.Semin@baikalelectronics.ru>");
+MODULE_LICENSE("GPL v2");
Synopsys DesignWare XPCS IP-core can be synthesized with the device CSRs
being accessible over the MCI or APB3 interface instead of the MDIO bus
(see the CSR_INTERFACE HDL parameter). Thus all the PCS registers can be
just memory mapped and be a subject of the standard MMIO operations, of
course taking into account the way the Clause 45 CSRs mapping is defined.
This commit adds a device driver for the DW XPCS Management Interface
platform device and registers it in the framework of the kernel MDIO
subsystem.

The DW XPCS platform device is supposed to be described by the respective
compatible string "snps,dw-xpcs-mi", the CSRs memory space and an optional
peripheral bus clock source. Note that depending on the INDIRECT_ACCESS
DW XPCS IP-core synthesis parameter the memory-mapped reg-space can be
represented as either a directly or an indirectly mapped Clause 45 space.
In the former case the particular address is determined based on the MMD
device and the register offset (5 + 16 bits all together) within the
device reg-space. In the latter case only the 256 lower addresses are
utilized for the registers mapping; the upper bits are supposed to be
written into the respective viewport CSR in order to reach the entire C45
space.

Signed-off-by: Serge Semin <fancer.lancer@gmail.com>
---
 drivers/net/mdio/Kconfig        |   8 +
 drivers/net/mdio/Makefile       |   1 +
 drivers/net/mdio/mdio-dw-xpcs.c | 384 ++++++++++++++++++++++++++++++++
 3 files changed, 393 insertions(+)
 create mode 100644 drivers/net/mdio/mdio-dw-xpcs.c
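As a worked example of the two mapping schemes described in the commit message (purely illustrative; the ex_* helper below is hypothetical and just replays the dw_xpcs_mmio_addr_*() math from the patch for one concrete register):

#include <linux/bitfield.h>

/*
 * Illustrative only: how a Clause 45 (MMD, register) pair maps onto the
 * memory space in the two synthesis variants. Example: MMD 0x1f (VEND2),
 * register 0x0010, 16-bit CSRs (reg_width = 2).
 */
static void ex_dw_xpcs_addr_demo(void)
{
        unsigned int csr = FIELD_PREP(0x1f0000, 0x1f) | FIELD_PREP(0xffff, 0x0010);

        /* Direct mapping: the 21-bit CSR index is scaled by the reg width. */
        unsigned int direct_off = csr << 1;             /* 0x3e0020 */

        /*
         * Indirect mapping: the upper bits go into the viewport CSR, the low
         * 8 bits index the 256-register window.
         */
        unsigned int page = FIELD_GET(0x1fff00, csr);   /* 0x1f00 */
        unsigned int offset = FIELD_GET(0xff, csr);     /* 0x10 */

        (void)direct_off;
        (void)page;
        (void)offset;
}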