[v7] cxl/region: check interleave capability

Message ID 20240612032544.39149-1-yaoxt.fnst@fujitsu.com
State New, archived

Commit Message

Xingtao Yao (Fujitsu) June 12, 2024, 3:25 a.m. UTC
Since the interleave capability is not verified, if the interleave
capability of a target does not match what the region needs, committing
the decoder will only fail at the device end.

In order to catch this error as early as possible, the driver needs to
check the interleave capability of a target while attaching it to a
region.

Per CXL specification r3.1(8.2.4.20.1 CXL HDM Decoder Capability Register),
bits 11 and 12 indicate the capability to establish interleaving in 3, 6,
12 and 16 ways. If these bits are not set, the target cannot be attached to
a region utilizing such interleave ways.
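
As a rough illustration only (not part of the patch, and simplified to
plain userspace C), these two capability bits can be folded into a
bitmask of supported interleave ways, which is essentially what the
parse_hdm_decoder_caps() hunk below does:

  /* Illustrative sketch: derive a mask of supported interleave ways
   * from HDM Decoder Capability bits 11 and 12 (CXL r3.1, 8.2.4.20.1). */
  #include <stdio.h>

  #define BIT(n)                        (1UL << (n))
  #define HDM_CAP_INTERLEAVE_3_6_12_WAY BIT(11)
  #define HDM_CAP_INTERLEAVE_16_WAY     BIT(12)

  static unsigned long iw_cap_mask(unsigned long hdm_cap)
  {
          /* 1-, 2-, 4- and 8-way interleave is treated as always supported */
          unsigned long mask = BIT(1) | BIT(2) | BIT(4) | BIT(8);

          if (hdm_cap & HDM_CAP_INTERLEAVE_3_6_12_WAY)
                  mask |= BIT(3) | BIT(6) | BIT(12);
          if (hdm_cap & HDM_CAP_INTERLEAVE_16_WAY)
                  mask |= BIT(16);
          return mask;
  }

  int main(void)
  {
          /* e.g. a decoder advertising only 3/6/12-way support */
          printf("iw_cap_mask = %#lx\n",
                 iw_cap_mask(HDM_CAP_INTERLEAVE_3_6_12_WAY));
          return 0;
  }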

Additionally, bits 8 and 9 represent the capability of the bits used for
interleaving in the address, Linux tracks this in the cxl_port
interleave_mask.

Per CXL specification r3.1(8.2.4.20.13 Decoder Protection):
  eIW means encoded Interleave Ways.
  eIG means encoded Interleave Granularity.

  in HPA:
  if eIW is 0 or 8 (interleave ways: 1, 3), all the bits of HPA are used,
  the interleave bits are none, the following check is ignored.

  if eIW is less than 8 (interleave ways: 2, 4, 8, 16), the interleave bits
  start at bit position eIG + 8 and end at eIG + eIW + 8 - 1.

  if eIW is greater than 8 (interleave ways: 6, 12), the interleave bits
  start at bit position eIG + 8 and end at eIG + eIW - 1.

  if the interleave mask is insufficient to cover the required interleave
  bits, the target cannot be attached to the region.
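
To make the rule above concrete, here is a minimal userspace sketch
(illustrative only; it mirrors the check_interleave_cap() helper added
by this patch) that computes the required HPA interleave bits for an
encoded IW/IG pair and tests them against a port's interleave mask:

  #include <stdio.h>

  /* bits [h:l] set, assuming a 64-bit unsigned long */
  #define GENMASK(h, l)   ((~0UL >> (63 - (h))) & (~0UL << (l)))

  /* HPA bits a decoder must interleave on for a given encoded IW/IG */
  static unsigned long required_interleave_bits(unsigned int eiw, unsigned int eig)
  {
          int low = eig + 8, high;

          if (eiw == 0 || eiw == 8)       /* 1-way or 3-way: no interleave bits */
                  return 0;
          if (eiw > 8)                    /* 6-way or 12-way */
                  high = eiw + eig - 1;
          else                            /* 2-, 4-, 8- or 16-way */
                  high = eiw + eig + 7;
          return GENMASK(high, low);
  }

  int main(void)
  {
          /* Example: 4-way (eIW = 2) at 256-byte granularity (eIG = 0)
           * needs HPA[9:8]; a port that decodes HPA[11:8] can cover it. */
          unsigned long need = required_interleave_bits(2, 0);
          unsigned long port_mask = GENMASK(11, 8);

          printf("need %#lx -> %s\n", need,
                 (need & ~port_mask) ? "not supported" : "supported");
          return 0;
  }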

Fixes: 384e624bb211 ("cxl/region: Attach endpoint decoders")
Signed-off-by: Yao Xingtao <yaoxt.fnst@fujitsu.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>

---
V6[6] -> V7:
-- update comment.

V5[5] -> V6:
-- fix some typos.
-- update comment.
-- set rc when the check fails in cxl_port_attach_region().

V4[4] -> V5:
-- update comment.
-- add nr_targets check while attaching a port to switch.
-- delete passthrough flag and allow all the capabilities for passthrough
   decoders.

V3[3] -> V4:
-- update comment.
-- optimize the code.
-- add a passthrough flag to mark the passthrough decoder.

V2[2] -> V3:
-- revert ig_cap_mask to interleave_mask.
-- fix the interleave bits check logical.

V1[1] -> V2:
-- rename interleave_mask to ig_cap_mask.
-- add a check for interleave granularity.
-- update commit message.
-- move hdm caps init to parse_hdm_decoder_caps().

[1]
https://lore.kernel.org/linux-cxl/20240401075635.9333-1-yaoxt.fnst@fujitsu.com

[2]
https://lore.kernel.org/linux-cxl/20240403021747.17260-1-yaoxt.fnst@fujitsu.com

[3]
https://lore.kernel.org/linux-cxl/20240409022621.29115-1-yaoxt.fnst@fujitsu.com

[4]
https://lore.kernel.org/linux-cxl/20240422091350.4701-1-yaoxt.fnst@fujitsu.com

[5]
https://lore.kernel.org/linux-cxl/20240524092740.4260-1-yaoxt.fnst@fujitsu.com

[6]
https://lore.kernel.org/linux-cxl/20240611021511.35315-1-yaoxt.fnst@fujitsu.com
---
 drivers/cxl/core/hdm.c    | 13 ++++++
 drivers/cxl/core/region.c | 89 +++++++++++++++++++++++++++++++++++++++
 drivers/cxl/cxl.h         |  2 +
 drivers/cxl/cxlmem.h      | 10 +++++
 4 files changed, 114 insertions(+)

Comments

Dan Williams June 12, 2024, 4:07 a.m. UTC | #1
Yao Xingtao wrote:
> Since the interleave capability is not verified, if the interleave
> capability of a target does not match what the region needs, committing
> the decoder will only fail at the device end.
> 
> In order to catch this error as early as possible, the driver needs to
> check the interleave capability of a target while attaching it to a
> region.
> 
> Per CXL specification r3.1(8.2.4.20.1 CXL HDM Decoder Capability Register),
> bits 11 and 12 indicate the capability to establish interleaving in 3, 6,
> 12 and 16 ways. If these bits are not set, the target cannot be attached to
> a region utilizing such interleave ways.
> 
> Additionally, bits 8 and 9 represent the capability of the bits used for
> interleaving in the address, Linux tracks this in the cxl_port
> interleave_mask.
> 
> Per CXL specification r3.1(8.2.4.20.13 Decoder Protection):
>   eIW means encoded Interleave Ways.
>   eIG means encoded Interleave Granularity.
> 
>   in HPA:
>   if eIW is 0 or 8 (interleave ways: 1, 3), all the bits of HPA are used,
>   the interleave bits are none, the following check is ignored.
> 
>   if eIW is less than 8 (interleave ways: 2, 4, 8, 16), the interleave bits
>   start at bit position eIG + 8 and end at eIG + eIW + 8 - 1.
> 
>   if eIW is greater than 8 (interleave ways: 6, 12), the interleave bits
>   start at bit position eIG + 8 and end at eIG + eIW - 1.
> 
>   if the interleave mask is insufficient to cover the required interleave
>   bits, the target cannot be attached to the region.
> 
> Fixes: 384e624bb211 ("cxl/region: Attach endpoint decoders")
> Signed-off-by: Yao Xingtao <yaoxt.fnst@fujitsu.com>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>
[..]
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index 36cee9c30ceb..7fe617122d33 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -848,11 +848,21 @@ static inline void cxl_mem_active_dec(void)
>  
>  int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd);
>  
> +/*

Minor detail that can come in a follow-on patch is that this needs to
be:

/**

...in order for the kernel-doc system to autoformat it like it does
other 'struct' documentation:

https://docs.kernel.org/driver-api/cxl/memory-devices.html

However, the reason it needs to be a follow-on patch is that this file
is not currently included for parsing and needs something like this:

diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
index 5149ecdc53c7..e33ee67ac1a2 100644
--- a/Documentation/driver-api/cxl/memory-devices.rst
+++ b/Documentation/driver-api/cxl/memory-devices.rst
@@ -325,6 +325,9 @@ CXL Memory Device
 .. kernel-doc:: drivers/cxl/pci.c
    :internal:
 
+.. kernel-doc:: drivers/cxl/cxlmem.h
+   :internal:
+
 .. kernel-doc:: drivers/cxl/mem.c
    :doc: cxl mem
Alison Schofield June 12, 2024, 5:04 p.m. UTC | #2
On Tue, Jun 11, 2024 at 11:25:44PM -0400, Yao Xingtao wrote:
> Since the interleave capability is not verified, if the interleave
> capability of a target does not match what the region needs, committing
> the decoder will only fail at the device end.

This needs some fixups to pass the cxl unit tests. 

snip...

>  
> +static int check_interleave_cap(struct cxl_decoder *cxld, int iw, int ig)
> +{
> +	struct cxl_port *port = to_cxl_port(cxld->dev.parent);
> +	struct cxl_hdm *cxlhdm = dev_get_drvdata(&port->dev);

Tried this out with cxl-test and NULL ptr deref trying to load
the cxl-test module. Needs something like this:

diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
index 3482248aa344..f7ed3dd19992 100644
--- a/tools/testing/cxl/test/cxl.c
+++ b/tools/testing/cxl/test/cxl.c
@@ -630,11 +630,13 @@ static struct cxl_hdm *mock_cxl_setup_hdm(struct cxl_port *port,
                                          struct cxl_endpoint_dvsec_info *info)
 {
        struct cxl_hdm *cxlhdm = devm_kzalloc(&port->dev, sizeof(*cxlhdm), GFP_KERNEL);
+       struct device *dev = &port->dev;

        if (!cxlhdm)
                return ERR_PTR(-ENOMEM);

        cxlhdm->port = port;
+       dev_set_drvdata(dev, cxlhdm);
        return cxlhdm;
 }


After that, we do load the cxl-test module but the autoconf region
fails to set up and other unit tests fail trying to setup regions.
I didn't go further into those, seems all failing here:

cxl_region_attach()
	dev_dbg(&cxlr->dev, "%s iw: %d ig: %d is not supported\n",
                       dev_name(&cxled->cxld.dev), p->interleave_ways,
                       p->interleave_granularity);


-- Alison

> +	unsigned int interleave_mask;
> +	u8 eiw;
> +	u16 eig;
> +	int rc, high_pos, low_pos;
> +
> +	rc = ways_to_eiw(iw, &eiw);
> +	if (rc)
> +		return rc;
> +
> +	if (!test_bit(iw, &cxlhdm->iw_cap_mask))
> +		return -ENXIO;
> +
> +	rc = granularity_to_eig(ig, &eig);
> +	if (rc)
> +		return rc;
> +
> +	/*
> +	 * Per CXL specification r3.1(8.2.4.20.13 Decoder Protection),
> +	 * if eiw < 8:
> +	 *   DPAOFFSET[51: eig + 8] = HPAOFFSET[51: eig + 8 + eiw]
> +	 *   DPAOFFSET[eig + 7: 0]  = HPAOFFSET[eig + 7: 0]
> +	 *
> +	 *   when the eiw is 0, all the bits of HPAOFFSET[51: 0] are used, the
> +	 *   interleave bits are none.
> +	 *
> +	 * if eiw >= 8:
> +	 *   DPAOFFSET[51: eig + 8] = HPAOFFSET[51: eig + eiw] / 3
> +	 *   DPAOFFSET[eig + 7: 0]  = HPAOFFSET[eig + 7: 0]
> +	 *
> +	 *   when the eiw is 8, all the bits of HPAOFFSET[51: 0] are used, the
> +	 *   interleave bits are none.
> +	 */
> +	if (eiw == 0 || eiw == 8)
> +		return 0;
> +
> +	if (eiw > 8)
> +		high_pos = eiw + eig - 1;
> +	else
> +		high_pos = eiw + eig + 7;
> +	low_pos = eig + 8;
> +	interleave_mask = GENMASK(high_pos, low_pos);
> +	if (interleave_mask & ~cxlhdm->interleave_mask)
> +		return -ENXIO;
> +
> +	return 0;
> +}
> +
>  static int cxl_port_setup_targets(struct cxl_port *port,
>  				  struct cxl_region *cxlr,
>  				  struct cxl_endpoint_decoder *cxled)
> @@ -1360,6 +1431,15 @@ static int cxl_port_setup_targets(struct cxl_port *port,
>  			return -ENXIO;
>  		}
>  	} else {
> +		rc = check_interleave_cap(cxld, iw, ig);
> +		if (rc) {
> +			dev_dbg(&cxlr->dev,
> +				"%s:%s iw: %d ig: %d is not supported\n",
> +				dev_name(port->uport_dev),
> +				dev_name(&port->dev), iw, ig);
> +			return rc;
> +		}
> +
>  		cxld->interleave_ways = iw;
>  		cxld->interleave_granularity = ig;
>  		cxld->hpa_range = (struct range) {
> @@ -1796,6 +1876,15 @@ static int cxl_region_attach(struct cxl_region *cxlr,
>  	struct cxl_dport *dport;
>  	int rc = -ENXIO;
>  
> +	rc = check_interleave_cap(&cxled->cxld, p->interleave_ways,
> +				  p->interleave_granularity);
> +	if (rc) {
> +		dev_dbg(&cxlr->dev, "%s iw: %d ig: %d is not supported\n",
> +			dev_name(&cxled->cxld.dev), p->interleave_ways,
> +			p->interleave_granularity);
> +		return rc;
> +	}
> +
>  	if (cxled->mode != cxlr->mode) {
>  		dev_dbg(&cxlr->dev, "%s region mode: %d mismatch: %d\n",
>  			dev_name(&cxled->cxld.dev), cxlr->mode, cxled->mode);
> diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> index 036d17db68e0..dc8e46a1fe82 100644
> --- a/drivers/cxl/cxl.h
> +++ b/drivers/cxl/cxl.h
> @@ -45,6 +45,8 @@
>  #define   CXL_HDM_DECODER_TARGET_COUNT_MASK GENMASK(7, 4)
>  #define   CXL_HDM_DECODER_INTERLEAVE_11_8 BIT(8)
>  #define   CXL_HDM_DECODER_INTERLEAVE_14_12 BIT(9)
> +#define   CXL_HDM_DECODER_INTERLEAVE_3_6_12_WAY BIT(11)
> +#define   CXL_HDM_DECODER_INTERLEAVE_16_WAY BIT(12)
>  #define CXL_HDM_DECODER_CTRL_OFFSET 0x4
>  #define   CXL_HDM_DECODER_ENABLE BIT(1)
>  #define CXL_HDM_DECODER0_BASE_LOW_OFFSET(i) (0x20 * (i) + 0x10)
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index 36cee9c30ceb..7fe617122d33 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -848,11 +848,21 @@ static inline void cxl_mem_active_dec(void)
>  
>  int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd);
>  
> +/*
> + * struct cxl_hdm - HDM Decoder registers and cached / decoded capabilities
> + * @regs: mapped registers, see devm_cxl_setup_hdm()
> + * @decoder_count: number of decoders for this port
> + * @target_count: for switch decoders, max downstream port targets
> + * @interleave_mask: interleave granularity capability, see check_interleave_cap()
> + * @iw_cap_mask: bitmask of supported interleave ways, see check_interleave_cap()
> + * @port: mapped cxl_port, see devm_cxl_setup_hdm()
> + */
>  struct cxl_hdm {
>  	struct cxl_component_regs regs;
>  	unsigned int decoder_count;
>  	unsigned int target_count;
>  	unsigned int interleave_mask;
> +	unsigned long iw_cap_mask;
>  	struct cxl_port *port;
>  };
>  
> -- 
> 2.37.3
>
Alison Schofield June 12, 2024, 5:45 p.m. UTC | #3
On Wed, Jun 12, 2024 at 10:04:11AM -0700, Alison Schofield wrote:
> On Tue, Jun 11, 2024 at 11:25:44PM -0400, Yao Xingtao wrote:
> > Since the interleave capability is not verified, if the interleave
> > capability of a target does not match what the region needs, committing
> > the decoder will only fail at the device end.
> 
> This needs some fixups to pass the cxl unit tests. 

BTW - not saying anything is broken in this code. It just doesn't
consider the cxl-test module and fails the unit tests. The cxl-test
module needs to mock this new stuff.

> 
> snip...
> 
> >  
> > +static int check_interleave_cap(struct cxl_decoder *cxld, int iw, int ig)
> > +{
> > +	struct cxl_port *port = to_cxl_port(cxld->dev.parent);
> > +	struct cxl_hdm *cxlhdm = dev_get_drvdata(&port->dev);
> 
> Tried this out with cxl-test and NULL ptr deref trying to load
> the cxl-test module. Needs something like this:
> 
> diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
> index 3482248aa344..f7ed3dd19992 100644
> --- a/tools/testing/cxl/test/cxl.c
> +++ b/tools/testing/cxl/test/cxl.c
> @@ -630,11 +630,13 @@ static struct cxl_hdm *mock_cxl_setup_hdm(struct cxl_port *port,
>                                           struct cxl_endpoint_dvsec_info *info)
>  {
>         struct cxl_hdm *cxlhdm = devm_kzalloc(&port->dev, sizeof(*cxlhdm), GFP_KERNEL);
> +       struct device *dev = &port->dev;
> 
>         if (!cxlhdm)
>                 return ERR_PTR(-ENOMEM);
> 
>         cxlhdm->port = port;
> +       dev_set_drvdata(dev, cxlhdm);
>         return cxlhdm;
>  }
> 
> 
> After that, we do load the cxl-test module but the autoconf region
> fails to set up and other unit tests fail trying to setup regions.
> I didn't go further into those, seems all failing here:
> 
> cxl_region_attach()
> 	dev_dbg(&cxlr->dev, "%s iw: %d ig: %d is not supported\n",
>                        dev_name(&cxled->cxld.dev), p->interleave_ways,
>                        p->interleave_granularity);
> 
> 
> -- Alison
> 
> > +	unsigned int interleave_mask;
> > +	u8 eiw;
> > +	u16 eig;
> > +	int rc, high_pos, low_pos;
> > +
> > +	rc = ways_to_eiw(iw, &eiw);
> > +	if (rc)
> > +		return rc;
> > +
> > +	if (!test_bit(iw, &cxlhdm->iw_cap_mask))
> > +		return -ENXIO;
> > +
> > +	rc = granularity_to_eig(ig, &eig);
> > +	if (rc)
> > +		return rc;
> > +
> > +	/*
> > +	 * Per CXL specification r3.1(8.2.4.20.13 Decoder Protection),
> > +	 * if eiw < 8:
> > +	 *   DPAOFFSET[51: eig + 8] = HPAOFFSET[51: eig + 8 + eiw]
> > +	 *   DPAOFFSET[eig + 7: 0]  = HPAOFFSET[eig + 7: 0]
> > +	 *
> > +	 *   when the eiw is 0, all the bits of HPAOFFSET[51: 0] are used, the
> > +	 *   interleave bits are none.
> > +	 *
> > +	 * if eiw >= 8:
> > +	 *   DPAOFFSET[51: eig + 8] = HPAOFFSET[51: eig + eiw] / 3
> > +	 *   DPAOFFSET[eig + 7: 0]  = HPAOFFSET[eig + 7: 0]
> > +	 *
> > +	 *   when the eiw is 8, all the bits of HPAOFFSET[51: 0] are used, the
> > +	 *   interleave bits are none.
> > +	 */
> > +	if (eiw == 0 || eiw == 8)
> > +		return 0;
> > +
> > +	if (eiw > 8)
> > +		high_pos = eiw + eig - 1;
> > +	else
> > +		high_pos = eiw + eig + 7;
> > +	low_pos = eig + 8;
> > +	interleave_mask = GENMASK(high_pos, low_pos);
> > +	if (interleave_mask & ~cxlhdm->interleave_mask)
> > +		return -ENXIO;
> > +
> > +	return 0;
> > +}
> > +
> >  static int cxl_port_setup_targets(struct cxl_port *port,
> >  				  struct cxl_region *cxlr,
> >  				  struct cxl_endpoint_decoder *cxled)
> > @@ -1360,6 +1431,15 @@ static int cxl_port_setup_targets(struct cxl_port *port,
> >  			return -ENXIO;
> >  		}
> >  	} else {
> > +		rc = check_interleave_cap(cxld, iw, ig);
> > +		if (rc) {
> > +			dev_dbg(&cxlr->dev,
> > +				"%s:%s iw: %d ig: %d is not supported\n",
> > +				dev_name(port->uport_dev),
> > +				dev_name(&port->dev), iw, ig);
> > +			return rc;
> > +		}
> > +
> >  		cxld->interleave_ways = iw;
> >  		cxld->interleave_granularity = ig;
> >  		cxld->hpa_range = (struct range) {
> > @@ -1796,6 +1876,15 @@ static int cxl_region_attach(struct cxl_region *cxlr,
> >  	struct cxl_dport *dport;
> >  	int rc = -ENXIO;
> >  
> > +	rc = check_interleave_cap(&cxled->cxld, p->interleave_ways,
> > +				  p->interleave_granularity);
> > +	if (rc) {
> > +		dev_dbg(&cxlr->dev, "%s iw: %d ig: %d is not supported\n",
> > +			dev_name(&cxled->cxld.dev), p->interleave_ways,
> > +			p->interleave_granularity);
> > +		return rc;
> > +	}
> > +
> >  	if (cxled->mode != cxlr->mode) {
> >  		dev_dbg(&cxlr->dev, "%s region mode: %d mismatch: %d\n",
> >  			dev_name(&cxled->cxld.dev), cxlr->mode, cxled->mode);
> > diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> > index 036d17db68e0..dc8e46a1fe82 100644
> > --- a/drivers/cxl/cxl.h
> > +++ b/drivers/cxl/cxl.h
> > @@ -45,6 +45,8 @@
> >  #define   CXL_HDM_DECODER_TARGET_COUNT_MASK GENMASK(7, 4)
> >  #define   CXL_HDM_DECODER_INTERLEAVE_11_8 BIT(8)
> >  #define   CXL_HDM_DECODER_INTERLEAVE_14_12 BIT(9)
> > +#define   CXL_HDM_DECODER_INTERLEAVE_3_6_12_WAY BIT(11)
> > +#define   CXL_HDM_DECODER_INTERLEAVE_16_WAY BIT(12)
> >  #define CXL_HDM_DECODER_CTRL_OFFSET 0x4
> >  #define   CXL_HDM_DECODER_ENABLE BIT(1)
> >  #define CXL_HDM_DECODER0_BASE_LOW_OFFSET(i) (0x20 * (i) + 0x10)
> > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> > index 36cee9c30ceb..7fe617122d33 100644
> > --- a/drivers/cxl/cxlmem.h
> > +++ b/drivers/cxl/cxlmem.h
> > @@ -848,11 +848,21 @@ static inline void cxl_mem_active_dec(void)
> >  
> >  int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd);
> >  
> > +/*
> > + * struct cxl_hdm - HDM Decoder registers and cached / decoded capabilities
> > + * @regs: mapped registers, see devm_cxl_setup_hdm()
> > + * @decoder_count: number of decoders for this port
> > + * @target_count: for switch decoders, max downstream port targets
> > + * @interleave_mask: interleave granularity capability, see check_interleave_cap()
> > + * @iw_cap_mask: bitmask of supported interleave ways, see check_interleave_cap()
> > + * @port: mapped cxl_port, see devm_cxl_setup_hdm()
> > + */
> >  struct cxl_hdm {
> >  	struct cxl_component_regs regs;
> >  	unsigned int decoder_count;
> >  	unsigned int target_count;
> >  	unsigned int interleave_mask;
> > +	unsigned long iw_cap_mask;
> >  	struct cxl_port *port;
> >  };
> >  
> > -- 
> > 2.37.3
> > 
>
Alison Schofield June 12, 2024, 9:56 p.m. UTC | #4
On Tue, Jun 11, 2024 at 09:07:09PM -0700, Dan Williams wrote:
> Yao Xingtao wrote:
> > Since the interleave capability is not verified, if the interleave
> > capability of a target does not match what the region needs, committing
> > the decoder will only fail at the device end.
> > 
> > In order to catch this error as early as possible, the driver needs to
> > check the interleave capability of a target while attaching it to a
> > region.
> > 
> > Per CXL specification r3.1(8.2.4.20.1 CXL HDM Decoder Capability Register),
> > bits 11 and 12 indicate the capability to establish interleaving in 3, 6,
> > 12 and 16 ways. If these bits are not set, the target cannot be attached to
> > a region utilizing such interleave ways.
> > 
> > Additionally, bits 8 and 9 represent the capability of the bits used for
> > interleaving in the address, Linux tracks this in the cxl_port
> > interleave_mask.
> > 
> > Per CXL specification r3.1(8.2.4.20.13 Decoder Protection):
> >   eIW means encoded Interleave Ways.
> >   eIG means encoded Interleave Granularity.
> > 
> >   in HPA:
> >   if eIW is 0 or 8 (interleave ways: 1, 3), all the bits of HPA are used,
> >   the interleave bits are none, the following check is ignored.
> > 
> >   if eIW is less than 8 (interleave ways: 2, 4, 8, 16), the interleave bits
> >   start at bit position eIG + 8 and end at eIG + eIW + 8 - 1.
> > 
> >   if eIW is greater than 8 (interleave ways: 6, 12), the interleave bits
> >   start at bit position eIG + 8 and end at eIG + eIW - 1.
> > 
> >   if the interleave mask is insufficient to cover the required interleave
> >   bits, the target cannot be attached to the region.
> > 
> > Fixes: 384e624bb211 ("cxl/region: Attach endpoint decoders")
> > Signed-off-by: Yao Xingtao <yaoxt.fnst@fujitsu.com>
> > Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> [..]
> > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> > index 36cee9c30ceb..7fe617122d33 100644
> > --- a/drivers/cxl/cxlmem.h
> > +++ b/drivers/cxl/cxlmem.h
> > @@ -848,11 +848,21 @@ static inline void cxl_mem_active_dec(void)
> >  
> >  int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd);
> >  
> > +/*
> 
> Minor detail that can come in a follow-on patch is that this needs to
> be:
> 
> /**
> 
> ...in order for the kernel-doc system to autoformat it like it does
> other 'struct' documentation:
> 
> https://docs.kernel.org/driver-api/cxl/memory-devices.html
> 
> However, the reason it needs to be a follow-on patch is that this file
> is not currently included for parsing and needs something like this:

We'll need another revision of this patch to address the cxl-test module
dependencies so how about adding the "/**" in the next revision.

I'm suggesting this because cxlmem.h contains other kernel-doc
comments that are not being picked up because cxlmem.h is missing
from Documentation/driver-api/cxl/memory-devices.rst. There also seem
to be other omissions when compared with the kernel-doc notations
in drivers/cxl/: core/cdat.c, core/hdm.c. I say 'seems' because
I guess it could be intentional.

Can Yao add the kernel doc notation in the next rev of this patch
and then come back soon and sync memory-devices.rst for all of
drivers/cxl/ ?

-- Alison

> 
> diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
> index 5149ecdc53c7..e33ee67ac1a2 100644
> --- a/Documentation/driver-api/cxl/memory-devices.rst
> +++ b/Documentation/driver-api/cxl/memory-devices.rst
> @@ -325,6 +325,9 @@ CXL Memory Device
>  .. kernel-doc:: drivers/cxl/pci.c
>     :internal:
>  
> +.. kernel-doc:: drivers/cxl/cxlmem.h
> +   :internal:
> +
>  .. kernel-doc:: drivers/cxl/mem.c
>     :doc: cxl mem
>
Xingtao Yao (Fujitsu) June 13, 2024, 12:31 a.m. UTC | #5
> -----Original Message-----
> From: Alison Schofield <alison.schofield@intel.com>
> Sent: Thursday, June 13, 2024 1:04 AM
> To: Yao, Xingtao/姚 幸涛 <yaoxt.fnst@fujitsu.com>
> Cc: dave@stgolabs.net; jonathan.cameron@huawei.com; dave.jiang@intel.com;
> vishal.l.verma@intel.com; ira.weiny@intel.com; dan.j.williams@intel.com;
> jim.harris@samsung.com; linux-cxl@vger.kernel.org
> Subject: Re: [PATCH v7] cxl/region: check interleave capability
> 
> On Tue, Jun 11, 2024 at 11:25:44PM -0400, Yao Xingtao wrote:
> > Since the interleave capability is not verified, if the interleave
> > capability of a target does not match what the region needs, committing
> > the decoder will only fail at the device end.
> 
> This needs some fixups to pass the cxl unit tests.
thanks, will fix.
> 
> snip...
> 
> >
> > +static int check_interleave_cap(struct cxl_decoder *cxld, int iw, int ig)
> > +{
> > +	struct cxl_port *port = to_cxl_port(cxld->dev.parent);
> > +	struct cxl_hdm *cxlhdm = dev_get_drvdata(&port->dev);
> 
> Tried this out with cxl-test and NULL ptr deref trying to load
> the cxl-test module. Needs something like this:
> 
> diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
> index 3482248aa344..f7ed3dd19992 100644
> --- a/tools/testing/cxl/test/cxl.c
> +++ b/tools/testing/cxl/test/cxl.c
> @@ -630,11 +630,13 @@ static struct cxl_hdm *mock_cxl_setup_hdm(struct cxl_port *port,
>                                           struct cxl_endpoint_dvsec_info *info)
>  {
>         struct cxl_hdm *cxlhdm = devm_kzalloc(&port->dev, sizeof(*cxlhdm), GFP_KERNEL);
> +       struct device *dev = &port->dev;
> 
>         if (!cxlhdm)
>                 return ERR_PTR(-ENOMEM);
> 
>         cxlhdm->port = port;
> +       dev_set_drvdata(dev, cxlhdm);
>         return cxlhdm;
>  }
> 
> 
> After that, we do load the cxl-test module but the autoconf region
> fails to set up and other unit tests fail trying to setup regions.
> I didn't go further into those, seems all failing here:
OK, I will find the reason and fix it.
> 
> cxl_region_attach()
> 	dev_dbg(&cxlr->dev, "%s iw: %d ig: %d is not supported\n",
>                        dev_name(&cxled->cxld.dev), p->interleave_ways,
>                        p->interleave_granularity);
> 
> 
> -- Alison
> 
> > +	unsigned int interleave_mask;
> > +	u8 eiw;
> > +	u16 eig;
> > +	int rc, high_pos, low_pos;
> > +
> > +	rc = ways_to_eiw(iw, &eiw);
> > +	if (rc)
> > +		return rc;
> > +
> > +	if (!test_bit(iw, &cxlhdm->iw_cap_mask))
> > +		return -ENXIO;
> > +
> > +	rc = granularity_to_eig(ig, &eig);
> > +	if (rc)
> > +		return rc;
> > +
> > +	/*
> > +	 * Per CXL specification r3.1(8.2.4.20.13 Decoder Protection),
> > +	 * if eiw < 8:
> > +	 *   DPAOFFSET[51: eig + 8] = HPAOFFSET[51: eig + 8 + eiw]
> > +	 *   DPAOFFSET[eig + 7: 0]  = HPAOFFSET[eig + 7: 0]
> > +	 *
> > +	 *   when the eiw is 0, all the bits of HPAOFFSET[51: 0] are used, the
> > +	 *   interleave bits are none.
> > +	 *
> > +	 * if eiw >= 8:
> > +	 *   DPAOFFSET[51: eig + 8] = HPAOFFSET[51: eig + eiw] / 3
> > +	 *   DPAOFFSET[eig + 7: 0]  = HPAOFFSET[eig + 7: 0]
> > +	 *
> > +	 *   when the eiw is 8, all the bits of HPAOFFSET[51: 0] are used, the
> > +	 *   interleave bits are none.
> > +	 */
> > +	if (eiw == 0 || eiw == 8)
> > +		return 0;
> > +
> > +	if (eiw > 8)
> > +		high_pos = eiw + eig - 1;
> > +	else
> > +		high_pos = eiw + eig + 7;
> > +	low_pos = eig + 8;
> > +	interleave_mask = GENMASK(high_pos, low_pos);
> > +	if (interleave_mask & ~cxlhdm->interleave_mask)
> > +		return -ENXIO;
> > +
> > +	return 0;
> > +}
> > +
> >  static int cxl_port_setup_targets(struct cxl_port *port,
> >  				  struct cxl_region *cxlr,
> >  				  struct cxl_endpoint_decoder *cxled)
> > @@ -1360,6 +1431,15 @@ static int cxl_port_setup_targets(struct cxl_port *port,
> >  			return -ENXIO;
> >  		}
> >  	} else {
> > +		rc = check_interleave_cap(cxld, iw, ig);
> > +		if (rc) {
> > +			dev_dbg(&cxlr->dev,
> > +				"%s:%s iw: %d ig: %d is not supported\n",
> > +				dev_name(port->uport_dev),
> > +				dev_name(&port->dev), iw, ig);
> > +			return rc;
> > +		}
> > +
> >  		cxld->interleave_ways = iw;
> >  		cxld->interleave_granularity = ig;
> >  		cxld->hpa_range = (struct range) {
> > @@ -1796,6 +1876,15 @@ static int cxl_region_attach(struct cxl_region *cxlr,
> >  	struct cxl_dport *dport;
> >  	int rc = -ENXIO;
> >
> > +	rc = check_interleave_cap(&cxled->cxld, p->interleave_ways,
> > +				  p->interleave_granularity);
> > +	if (rc) {
> > +		dev_dbg(&cxlr->dev, "%s iw: %d ig: %d is not supported\n",
> > +			dev_name(&cxled->cxld.dev), p->interleave_ways,
> > +			p->interleave_granularity);
> > +		return rc;
> > +	}
> > +
> >  	if (cxled->mode != cxlr->mode) {
> >  		dev_dbg(&cxlr->dev, "%s region mode: %d mismatch: %d\n",
> >  			dev_name(&cxled->cxld.dev), cxlr->mode, cxled->mode);
> > diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> > index 036d17db68e0..dc8e46a1fe82 100644
> > --- a/drivers/cxl/cxl.h
> > +++ b/drivers/cxl/cxl.h
> > @@ -45,6 +45,8 @@
> >  #define   CXL_HDM_DECODER_TARGET_COUNT_MASK GENMASK(7, 4)
> >  #define   CXL_HDM_DECODER_INTERLEAVE_11_8 BIT(8)
> >  #define   CXL_HDM_DECODER_INTERLEAVE_14_12 BIT(9)
> > +#define   CXL_HDM_DECODER_INTERLEAVE_3_6_12_WAY BIT(11)
> > +#define   CXL_HDM_DECODER_INTERLEAVE_16_WAY BIT(12)
> >  #define CXL_HDM_DECODER_CTRL_OFFSET 0x4
> >  #define   CXL_HDM_DECODER_ENABLE BIT(1)
> >  #define CXL_HDM_DECODER0_BASE_LOW_OFFSET(i) (0x20 * (i) + 0x10)
> > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> > index 36cee9c30ceb..7fe617122d33 100644
> > --- a/drivers/cxl/cxlmem.h
> > +++ b/drivers/cxl/cxlmem.h
> > @@ -848,11 +848,21 @@ static inline void cxl_mem_active_dec(void)
> >
> >  int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd);
> >
> > +/*
> > + * struct cxl_hdm - HDM Decoder registers and cached / decoded capabilities
> > + * @regs: mapped registers, see devm_cxl_setup_hdm()
> > + * @decoder_count: number of decoders for this port
> > + * @target_count: for switch decoders, max downstream port targets
> > + * @interleave_mask: interleave granularity capability, see check_interleave_cap()
> > + * @iw_cap_mask: bitmask of supported interleave ways, see check_interleave_cap()
> > + * @port: mapped cxl_port, see devm_cxl_setup_hdm()
> > + */
> >  struct cxl_hdm {
> >  	struct cxl_component_regs regs;
> >  	unsigned int decoder_count;
> >  	unsigned int target_count;
> >  	unsigned int interleave_mask;
> > +	unsigned long iw_cap_mask;
> >  	struct cxl_port *port;
> >  };
> >
> > --
> > 2.37.3
> >
Xingtao Yao (Fujitsu) June 13, 2024, 12:34 a.m. UTC | #6
> -----Original Message-----
> From: Alison Schofield <alison.schofield@intel.com>
> Sent: Thursday, June 13, 2024 5:56 AM
> To: Dan Williams <dan.j.williams@intel.com>
> Cc: Yao, Xingtao/姚 幸涛 <yaoxt.fnst@fujitsu.com>; dave@stgolabs.net;
> jonathan.cameron@huawei.com; dave.jiang@intel.com; vishal.l.verma@intel.com;
> ira.weiny@intel.com; jim.harris@samsung.com; linux-cxl@vger.kernel.org
> Subject: Re: [PATCH v7] cxl/region: check interleave capability
> 
> On Tue, Jun 11, 2024 at 09:07:09PM -0700, Dan Williams wrote:
> > Yao Xingtao wrote:
> > > Since the interleave capability is not verified, if the interleave
> > > capability of a target does not match what the region needs, committing
> > > the decoder will only fail at the device end.
> > >
> > > In order to catch this error as early as possible, the driver needs to
> > > check the interleave capability of a target while attaching it to a
> > > region.
> > >
> > > Per CXL specification r3.1(8.2.4.20.1 CXL HDM Decoder Capability Register),
> > > bits 11 and 12 indicate the capability to establish interleaving in 3, 6,
> > > 12 and 16 ways. If these bits are not set, the target cannot be attached to
> > > a region utilizing such interleave ways.
> > >
> > > Additionally, bits 8 and 9 represent the capability of the bits used for
> > > interleaving in the address, Linux tracks this in the cxl_port
> > > interleave_mask.
> > >
> > > Per CXL specification r3.1(8.2.4.20.13 Decoder Protection):
> > >   eIW means encoded Interleave Ways.
> > >   eIG means encoded Interleave Granularity.
> > >
> > >   in HPA:
> > >   if eIW is 0 or 8 (interleave ways: 1, 3), all the bits of HPA are used,
> > >   the interleave bits are none, the following check is ignored.
> > >
> > >   if eIW is less than 8 (interleave ways: 2, 4, 8, 16), the interleave bits
> > >   start at bit position eIG + 8 and end at eIG + eIW + 8 - 1.
> > >
> > >   if eIW is greater than 8 (interleave ways: 6, 12), the interleave bits
> > >   start at bit position eIG + 8 and end at eIG + eIW - 1.
> > >
> > >   if the interleave mask is insufficient to cover the required interleave
> > >   bits, the target cannot be attached to the region.
> > >
> > > Fixes: 384e624bb211 ("cxl/region: Attach endpoint decoders")
> > > Signed-off-by: Yao Xingtao <yaoxt.fnst@fujitsu.com>
> > > Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> > [..]
> > > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> > > index 36cee9c30ceb..7fe617122d33 100644
> > > --- a/drivers/cxl/cxlmem.h
> > > +++ b/drivers/cxl/cxlmem.h
> > > @@ -848,11 +848,21 @@ static inline void cxl_mem_active_dec(void)
> > >
> > >  int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd);
> > >
> > > +/*
> >
> > Minor detail that can come in a follow-on patch is that this needs to
> > be:
> >
> > /**
> >
> > ...in order for the kernel-doc system to autoformat it like it does
> > other 'struct' documentation:
> >
> > https://docs.kernel.org/driver-api/cxl/memory-devices.html
> >
> > However, the reason it needs to be a follow-on patch is that this file
> > is not currently included for parsing and needs something like this:
> 
> We'll need another revision of this patch to address the cxl-test module
> dependencies so how about adding the "/**" in the next revision.
> 
> I'm suggesting this because cxlmem.h contains other kernel-doc
> comments that are not being picked up because cxlmem.h is missing
> from Documentation/driver-api/cxl/memory-devices.rst. There also seem
> to be other omissions when compared with the kernel-doc notations
> in drivers/cxl/: core/cdat.c, core/hdm.c. I say 'seems' because
> I guess it could be intentional.
> 
> Can Yao add the kernel doc notation in the next rev of this patch
> and then come back soon and sync memory-devices.rst for all of
> drivers/cxl/ ?
ok, my pleasure.
> 
> -- Alison
> 
> >
> > diff --git a/Documentation/driver-api/cxl/memory-devices.rst b/Documentation/driver-api/cxl/memory-devices.rst
> > index 5149ecdc53c7..e33ee67ac1a2 100644
> > --- a/Documentation/driver-api/cxl/memory-devices.rst
> > +++ b/Documentation/driver-api/cxl/memory-devices.rst
> > @@ -325,6 +325,9 @@ CXL Memory Device
> >  .. kernel-doc:: drivers/cxl/pci.c
> >     :internal:
> >
> > +.. kernel-doc:: drivers/cxl/cxlmem.h
> > +   :internal:
> > +
> >  .. kernel-doc:: drivers/cxl/mem.c
> >     :doc: cxl mem
> >
Dan Williams June 13, 2024, 3:27 a.m. UTC | #7
Alison Schofield wrote:
[..]
> > > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> > > index 36cee9c30ceb..7fe617122d33 100644
> > > --- a/drivers/cxl/cxlmem.h
> > > +++ b/drivers/cxl/cxlmem.h
> > > @@ -848,11 +848,21 @@ static inline void cxl_mem_active_dec(void)
> > >  
> > >  int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd);
> > >  
> > > +/*
> > 
> > Minor detail that can come in a follow-on patch is that this needs to
> > be:
> > 
> > /**
> > 
> > ...in order for the kernel-doc system to autoformat it like it does
> > other 'struct' documentation:
> > 
> > https://docs.kernel.org/driver-api/cxl/memory-devices.html
> > 
> > However, the reason it needs to be a follow-on patch is that this file
> > is not currently included for parsing and needs something like this:
> 
> We'll need another revision of this patch to address the cxl-test module

Thanks for running that by the way!

Now, I tend to not want to leave people hanging figuring out how
cxl_test works, so in this case I think the incremental fix on top of
the crash fix is this:

diff --git a/tools/testing/cxl/test/cxl.c b/tools/testing/cxl/test/cxl.c
index 908e0d083936..4b2416d9f66f 100644
--- a/tools/testing/cxl/test/cxl.c
+++ b/tools/testing/cxl/test/cxl.c
@@ -635,6 +635,9 @@ static struct cxl_hdm *mock_cxl_setup_hdm(struct cxl_port *port,
                return ERR_PTR(-ENOMEM);
 
        cxlhdm->port = port;
+       cxlhdm->interleave_mask = ~0U;
+       cxlhdm->iw_cap_mask = ~0UL;
+
        return cxlhdm;
 }
 
I.e. allow everything for now.

The good news is the patch does work to reject decoders without the
proper capabilities.

> dependencies so how about adding the "/**" in the next revision.
> 
> I'm suggesting this because cxlmem.h contains other kernel-doc
> comments that are not being picked up because cxlmem.h is missing
> from Documentation/driver-api/cxl/memory-devices.rst. There also seem
> to be other omissions when compared with the kernel-doc notations
> in drivers/cxl/: core/cdat.c, core/hdm.c. I say 'seems' because
> I guess it could be intentional.
> 
> Can Yao add the kernel doc notation in the next rev of this patch
> and then come back soon and sync memory-devices.rst for all of
> drivers/cxl/ ?

Sure, but any review issues on the second patch should not hold up
merging the first since they're unrelated changes.

Patch

diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
index 7d97790b893d..e01c16fdc757 100644
--- a/drivers/cxl/core/hdm.c
+++ b/drivers/cxl/core/hdm.c
@@ -52,6 +52,14 @@  int devm_cxl_add_passthrough_decoder(struct cxl_port *port)
 	struct cxl_dport *dport = NULL;
 	int single_port_map[1];
 	unsigned long index;
+	struct cxl_hdm *cxlhdm = dev_get_drvdata(&port->dev);
+
+	/*
+	 * Capability checks are moot for passthrough decoders, support
+	 * any and all possibilities.
+	 */
+	cxlhdm->interleave_mask = ~0U;
+	cxlhdm->iw_cap_mask = ~0UL;
 
 	cxlsd = cxl_switch_decoder_alloc(port, 1);
 	if (IS_ERR(cxlsd))
@@ -79,6 +87,11 @@  static void parse_hdm_decoder_caps(struct cxl_hdm *cxlhdm)
 		cxlhdm->interleave_mask |= GENMASK(11, 8);
 	if (FIELD_GET(CXL_HDM_DECODER_INTERLEAVE_14_12, hdm_cap))
 		cxlhdm->interleave_mask |= GENMASK(14, 12);
+	cxlhdm->iw_cap_mask = BIT(1) | BIT(2) | BIT(4) | BIT(8);
+	if (FIELD_GET(CXL_HDM_DECODER_INTERLEAVE_3_6_12_WAY, hdm_cap))
+		cxlhdm->iw_cap_mask |= BIT(3) | BIT(6) | BIT(12);
+	if (FIELD_GET(CXL_HDM_DECODER_INTERLEAVE_16_WAY, hdm_cap))
+		cxlhdm->iw_cap_mask |= BIT(16);
 }
 
 static bool should_emulate_decoders(struct cxl_endpoint_dvsec_info *info)
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 5c186e0a39b9..5ba5a5f6923a 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -1101,6 +1101,26 @@  static int cxl_port_attach_region(struct cxl_port *port,
 	}
 	cxld = cxl_rr->decoder;
 
+	/*
+	 * the number of targets should not exceed the target_count
+	 * of the decoder
+	 */
+	if (is_switch_decoder(&cxld->dev)) {
+		struct cxl_switch_decoder *cxlsd;
+
+		cxlsd = to_cxl_switch_decoder(&cxld->dev);
+		if (cxl_rr->nr_targets > cxlsd->nr_targets) {
+			dev_dbg(&cxlr->dev,
+				"%s:%s %s add: %s:%s @ %d overflows targets: %d\n",
+				dev_name(port->uport_dev), dev_name(&port->dev),
+				dev_name(&cxld->dev), dev_name(&cxlmd->dev),
+				dev_name(&cxled->cxld.dev), pos,
+				cxlsd->nr_targets);
+			rc = -ENXIO;
+			goto out_erase;
+		}
+	}
+
 	rc = cxl_rr_ep_add(cxl_rr, cxled);
 	if (rc) {
 		dev_dbg(&cxlr->dev,
@@ -1210,6 +1230,57 @@  static int check_last_peer(struct cxl_endpoint_decoder *cxled,
 	return 0;
 }
 
+static int check_interleave_cap(struct cxl_decoder *cxld, int iw, int ig)
+{
+	struct cxl_port *port = to_cxl_port(cxld->dev.parent);
+	struct cxl_hdm *cxlhdm = dev_get_drvdata(&port->dev);
+	unsigned int interleave_mask;
+	u8 eiw;
+	u16 eig;
+	int rc, high_pos, low_pos;
+
+	rc = ways_to_eiw(iw, &eiw);
+	if (rc)
+		return rc;
+
+	if (!test_bit(iw, &cxlhdm->iw_cap_mask))
+		return -ENXIO;
+
+	rc = granularity_to_eig(ig, &eig);
+	if (rc)
+		return rc;
+
+	/*
+	 * Per CXL specification r3.1(8.2.4.20.13 Decoder Protection),
+	 * if eiw < 8:
+	 *   DPAOFFSET[51: eig + 8] = HPAOFFSET[51: eig + 8 + eiw]
+	 *   DPAOFFSET[eig + 7: 0]  = HPAOFFSET[eig + 7: 0]
+	 *
+	 *   when the eiw is 0, all the bits of HPAOFFSET[51: 0] are used, the
+	 *   interleave bits are none.
+	 *
+	 * if eiw >= 8:
+	 *   DPAOFFSET[51: eig + 8] = HPAOFFSET[51: eig + eiw] / 3
+	 *   DPAOFFSET[eig + 7: 0]  = HPAOFFSET[eig + 7: 0]
+	 *
+	 *   when the eiw is 8, all the bits of HPAOFFSET[51: 0] are used, the
+	 *   interleave bits are none.
+	 */
+	if (eiw == 0 || eiw == 8)
+		return 0;
+
+	if (eiw > 8)
+		high_pos = eiw + eig - 1;
+	else
+		high_pos = eiw + eig + 7;
+	low_pos = eig + 8;
+	interleave_mask = GENMASK(high_pos, low_pos);
+	if (interleave_mask & ~cxlhdm->interleave_mask)
+		return -ENXIO;
+
+	return 0;
+}
+
 static int cxl_port_setup_targets(struct cxl_port *port,
 				  struct cxl_region *cxlr,
 				  struct cxl_endpoint_decoder *cxled)
@@ -1360,6 +1431,15 @@  static int cxl_port_setup_targets(struct cxl_port *port,
 			return -ENXIO;
 		}
 	} else {
+		rc = check_interleave_cap(cxld, iw, ig);
+		if (rc) {
+			dev_dbg(&cxlr->dev,
+				"%s:%s iw: %d ig: %d is not supported\n",
+				dev_name(port->uport_dev),
+				dev_name(&port->dev), iw, ig);
+			return rc;
+		}
+
 		cxld->interleave_ways = iw;
 		cxld->interleave_granularity = ig;
 		cxld->hpa_range = (struct range) {
@@ -1796,6 +1876,15 @@  static int cxl_region_attach(struct cxl_region *cxlr,
 	struct cxl_dport *dport;
 	int rc = -ENXIO;
 
+	rc = check_interleave_cap(&cxled->cxld, p->interleave_ways,
+				  p->interleave_granularity);
+	if (rc) {
+		dev_dbg(&cxlr->dev, "%s iw: %d ig: %d is not supported\n",
+			dev_name(&cxled->cxld.dev), p->interleave_ways,
+			p->interleave_granularity);
+		return rc;
+	}
+
 	if (cxled->mode != cxlr->mode) {
 		dev_dbg(&cxlr->dev, "%s region mode: %d mismatch: %d\n",
 			dev_name(&cxled->cxld.dev), cxlr->mode, cxled->mode);
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 036d17db68e0..dc8e46a1fe82 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -45,6 +45,8 @@ 
 #define   CXL_HDM_DECODER_TARGET_COUNT_MASK GENMASK(7, 4)
 #define   CXL_HDM_DECODER_INTERLEAVE_11_8 BIT(8)
 #define   CXL_HDM_DECODER_INTERLEAVE_14_12 BIT(9)
+#define   CXL_HDM_DECODER_INTERLEAVE_3_6_12_WAY BIT(11)
+#define   CXL_HDM_DECODER_INTERLEAVE_16_WAY BIT(12)
 #define CXL_HDM_DECODER_CTRL_OFFSET 0x4
 #define   CXL_HDM_DECODER_ENABLE BIT(1)
 #define CXL_HDM_DECODER0_BASE_LOW_OFFSET(i) (0x20 * (i) + 0x10)
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 36cee9c30ceb..7fe617122d33 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -848,11 +848,21 @@  static inline void cxl_mem_active_dec(void)
 
 int cxl_mem_sanitize(struct cxl_memdev *cxlmd, u16 cmd);
 
+/*
+ * struct cxl_hdm - HDM Decoder registers and cached / decoded capabilities
+ * @regs: mapped registers, see devm_cxl_setup_hdm()
+ * @decoder_count: number of decoders for this port
+ * @target_count: for switch decoders, max downstream port targets
+ * @interleave_mask: interleave granularity capability, see check_interleave_cap()
+ * @iw_cap_mask: bitmask of supported interleave ways, see check_interleave_cap()
+ * @port: mapped cxl_port, see devm_cxl_setup_hdm()
+ */
 struct cxl_hdm {
 	struct cxl_component_regs regs;
 	unsigned int decoder_count;
 	unsigned int target_count;
 	unsigned int interleave_mask;
+	unsigned long iw_cap_mask;
 	struct cxl_port *port;
 };