Message ID | 20240715172835.24757-11-alejandro.lucero-palau@amd.com (mailing list archive)
---|---
State | Not Applicable
Delegated to | Netdev Maintainers
Series | cxl: add Type2 device support
Hi,

kernel test robot noticed the following build warnings:

[auto build test WARNING on linus/master]
[also build test WARNING on v6.10 next-20240715]
[cannot apply to cxl/next cxl/pending horms-ipvs/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/alejandro-lucero-palau-amd-com/cxl-add-type2-device-basic-support/20240716-015920
base:   linus/master
patch link:    https://lore.kernel.org/r/20240715172835.24757-11-alejandro.lucero-palau%40amd.com
patch subject: [PATCH v2 10/15] cxl: define a driver interface for DPA allocation
config: s390-allmodconfig (https://download.01.org/0day-ci/archive/20240716/202407161159.KA2METLk-lkp@intel.com/config)
compiler: clang version 19.0.0git (https://github.com/llvm/llvm-project a0c6b8aef853eedaa0980f07c0a502a5a8a9740e)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240716/202407161159.KA2METLk-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202407161159.KA2METLk-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/cxl/core/hdm.c:612: warning: Function parameter or struct member 'is_ram' not described in 'cxl_request_dpa'
>> drivers/cxl/core/hdm.c:612: warning: Excess function parameter 'mode' description in 'cxl_request_dpa'

vim +612 drivers/cxl/core/hdm.c

   589
   590  /**
   591   * cxl_request_dpa - search and reserve DPA given input constraints
   592   * @endpoint: an endpoint port with available decoders
   593   * @mode: DPA operation mode (ram vs pmem)
   594   * @min: the minimum amount of capacity the call needs
   595   * @max: extra capacity to allocate after min is satisfied
   596   *
   597   * Given that a region needs to allocate from limited HPA capacity it
   598   * may be the case that a device has more mappable DPA capacity than
   599   * available HPA. So, the expectation is that @min is a driver known
   600   * value for how much capacity is needed, and @max is based the limit of
   601   * how much HPA space is available for a new region.
   602   *
   603   * Returns a pinned cxl_decoder with at least @min bytes of capacity
   604   * reserved, or an error pointer. The caller is also expected to own the
   605   * lifetime of the memdev registration associated with the endpoint to
   606   * pin the decoder registered as well.
   607   */
   608  struct cxl_endpoint_decoder *cxl_request_dpa(struct cxl_port *endpoint,
   609                                               bool is_ram,
   610                                               resource_size_t min,
   611                                               resource_size_t max)
 > 612  {
   613          struct cxl_endpoint_decoder *cxled;
   614          enum cxl_decoder_mode mode;
   615          struct device *cxled_dev;
   616          resource_size_t alloc;
   617          int rc;
   618
   619          if (!IS_ALIGNED(min | max, SZ_256M))
   620                  return ERR_PTR(-EINVAL);
   621
   622          down_read(&cxl_dpa_rwsem);
   623
   624          cxled_dev = device_find_child(&endpoint->dev, NULL, find_free_decoder);
   625          if (!cxled_dev)
   626                  cxled = ERR_PTR(-ENXIO);
   627          else
   628                  cxled = to_cxl_endpoint_decoder(cxled_dev);
   629
   630          up_read(&cxl_dpa_rwsem);
   631
   632          if (IS_ERR(cxled))
   633                  return cxled;
   634
   635          if (is_ram)
   636                  mode = CXL_DECODER_RAM;
   637          else
   638                  mode = CXL_DECODER_PMEM;
   639
   640          rc = cxl_dpa_set_mode(cxled, mode);
   641          if (rc)
   642                  goto err;
   643
   644          down_read(&cxl_dpa_rwsem);
   645          alloc = cxl_dpa_freespace(cxled, NULL, NULL);
   646          up_read(&cxl_dpa_rwsem);
   647
   648          if (max)
   649                  alloc = min(max, alloc);
   650          if (alloc < min) {
   651                  rc = -ENOMEM;
   652                  goto err;
   653          }
   654
   655          rc = cxl_dpa_alloc(cxled, alloc);
   656          if (rc)
   657                  goto err;
   658
   659          return cxled;
   660  err:
   661          put_device(cxled_dev);
   662          return ERR_PTR(rc);
   663  }
   664  EXPORT_SYMBOL_NS_GPL(cxl_request_dpa, CXL);
   665
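Both kernel-doc warnings point at the same mismatch: the comment block still
documents a @mode parameter while the function signature now takes bool
@is_ram. A minimal fix, sketched here rather than taken from any posted
revision, is to rename the parameter description to match the signature:

/**
 * cxl_request_dpa - search and reserve DPA given input constraints
 * @endpoint: an endpoint port with available decoders
 * @is_ram: true to allocate from volatile (ram) capacity, false for pmem
 * @min: the minimum amount of capacity the call needs
 * @max: extra capacity to allocate after min is satisfied
 */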
On Mon, 15 Jul 2024 18:28:30 +0100
alejandro.lucero-palau@amd.com wrote:

> From: Alejandro Lucero <alucerop@amd.com>
>
> Region creation involves finding available DPA (device-physical-address)
> capacity to map into HPA (host-physical-address) space. Given the HPA
> capacity constraint, define an API, cxl_request_dpa(), that has the
> flexibility to map the minimum amount of memory the driver needs to
> operate vs the total possible that can be mapped given HPA availability.
>
> Factor out the core of cxl_dpa_alloc, that does free space scanning,
> into a cxl_dpa_freespace() helper, and use that to balance the capacity
> available to map vs the @min and @max arguments to cxl_request_dpa.
>
> Based on https://lore.kernel.org/linux-cxl/168592149709.1948938.8663425987110396027.stgit@dwillia2-xfh.jf.intel.com/T/#m4271ee49a91615c8af54e3ab20679f8be3099393

Use the permalink link under these to get shorter links.
https://lore.kernel.org/linux-cxl/168592158743.1948938.7622563891193802610.stgit@dwillia2-xfh.jf.intel.com/
goes to the same patch.

> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
> Co-developed-by: Dan Williams <dan.j.williams@intel.com>

> +
> +int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
> +{
> +        struct cxl_port *port = cxled_to_port(cxled);
> +        struct device *dev = &cxled->cxld.dev;
> +        resource_size_t start, avail, skip;
> +        int rc;
> +
> +        down_write(&cxl_dpa_rwsem);

Some cleanup.h magic would help here by allowing early returns.
Needs the scoped lock though to ensure it's released before the
devm_add_action_or_reset() as I'd guess we will deadlock otherwise
if that fails.

> +        if (cxled->cxld.region) {
> +                dev_dbg(dev, "EBUSY, decoder attached to %s\n",
> +                        dev_name(&cxled->cxld.region->dev));
> +                rc = -EBUSY;
>                  goto out;
>          }
>
> +        if (cxled->cxld.flags & CXL_DECODER_F_ENABLE) {
> +                dev_dbg(dev, "EBUSY, decoder enabled\n");
> +                rc = -EBUSY;
> +                goto out;
> +        }
> +
> +        avail = cxl_dpa_freespace(cxled, &start, &skip);
> +
>          if (size > avail) {
>                  dev_dbg(dev, "%pa exceeds available %s capacity: %pa\n", &size,
> -                        cxl_decoder_mode_name(cxled->mode), &avail);
> +                        cxled->mode == CXL_DECODER_RAM ? "ram" : "pmem",
> +                        &avail);
>                  rc = -ENOSPC;
>                  goto out;
>          }
> @@ -550,6 +570,99 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
>          return devm_add_action_or_reset(&port->dev, cxl_dpa_release, cxled);
>  }
>
> +static int find_free_decoder(struct device *dev, void *data)
> +{
> +        struct cxl_endpoint_decoder *cxled;
> +        struct cxl_port *port;
> +
> +        if (!is_endpoint_decoder(dev))
> +                return 0;
> +
> +        cxled = to_cxl_endpoint_decoder(dev);
> +        port = cxled_to_port(cxled);
> +
> +        if (cxled->cxld.id != port->hdm_end + 1) {
> +                return 0;

No brackets

> +        }
> +        return 1;
> +}
> +
> +/**
> + * cxl_request_dpa - search and reserve DPA given input constraints
> + * @endpoint: an endpoint port with available decoders
> + * @mode: DPA operation mode (ram vs pmem)
> + * @min: the minimum amount of capacity the call needs
> + * @max: extra capacity to allocate after min is satisfied
> + *
> + * Given that a region needs to allocate from limited HPA capacity it
> + * may be the case that a device has more mappable DPA capacity than
> + * available HPA. So, the expectation is that @min is a driver known
> + * value for how much capacity is needed, and @max is based the limit of
> + * how much HPA space is available for a new region.

We are going to need a policy control on the max value.
Otherwise, if you have two devices that support huge capacity and
not enough space, who gets it will just be a race.

Not a problem for now though!

> + *
> + * Returns a pinned cxl_decoder with at least @min bytes of capacity
> + * reserved, or an error pointer. The caller is also expected to own the
> + * lifetime of the memdev registration associated with the endpoint to
> + * pin the decoder registered as well.
> + */
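Jonathan's cleanup.h suggestion would look roughly like the following. This
is a sketch only, not code from the series: it assumes the rwsem_write guard
that <linux/cleanup.h> and <linux/rwsem.h> provide, and it reuses the
existing __cxl_dpa_reserve() helper for the part of cxl_dpa_alloc() that the
quoted hunks do not show.

int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
{
        struct cxl_port *port = cxled_to_port(cxled);
        struct device *dev = &cxled->cxld.dev;
        resource_size_t start, avail, skip;
        int rc;

        /* Every return inside this scope drops the rwsem automatically */
        scoped_guard(rwsem_write, &cxl_dpa_rwsem) {
                if (cxled->cxld.region) {
                        dev_dbg(dev, "EBUSY, decoder attached to %s\n",
                                dev_name(&cxled->cxld.region->dev));
                        return -EBUSY;
                }

                if (cxled->cxld.flags & CXL_DECODER_F_ENABLE) {
                        dev_dbg(dev, "EBUSY, decoder enabled\n");
                        return -EBUSY;
                }

                avail = cxl_dpa_freespace(cxled, &start, &skip);
                if (size > avail) {
                        dev_dbg(dev, "%pa exceeds available %s capacity: %pa\n",
                                &size,
                                cxled->mode == CXL_DECODER_RAM ? "ram" : "pmem",
                                &avail);
                        return -ENOSPC;
                }

                rc = __cxl_dpa_reserve(cxled, start, size, skip);
                if (rc)
                        return rc;
        }

        /*
         * The lock is released here, so if devm_add_action_or_reset() fails
         * and invokes cxl_dpa_release() directly (which presumably takes
         * cxl_dpa_rwsem itself), we avoid the deadlock Jonathan describes.
         */
        return devm_add_action_or_reset(&port->dev, cxl_dpa_release, cxled);
}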
On Mon, Jul 15, 2024 at 06:28:30PM +0100, alejandro.lucero-palau@amd.com wrote:
> From: Alejandro Lucero <alucerop@amd.com>
>
> Region creation involves finding available DPA (device-physical-address)
> capacity to map into HPA (host-physical-address) space. Given the HPA
> capacity constraint, define an API, cxl_request_dpa(), that has the
> flexibility to map the minimum amount of memory the driver needs to
> operate vs the total possible that can be mapped given HPA availability.
>
> Factor out the core of cxl_dpa_alloc, that does free space scanning,
> into a cxl_dpa_freespace() helper, and use that to balance the capacity
> available to map vs the @min and @max arguments to cxl_request_dpa.
>
> Based on https://lore.kernel.org/linux-cxl/168592149709.1948938.8663425987110396027.stgit@dwillia2-xfh.jf.intel.com/T/#m4271ee49a91615c8af54e3ab20679f8be3099393
>
> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
> Co-developed-by: Dan Williams <dan.j.williams@intel.com>
> ---
>  drivers/cxl/core/core.h            |   1 +
>  drivers/cxl/core/hdm.c             | 153 +++++++++++++++++++++++++----
>  drivers/net/ethernet/sfc/efx.c     |   2 +
>  drivers/net/ethernet/sfc/efx_cxl.c |  18 +++-
>  drivers/net/ethernet/sfc/efx_cxl.h |   1 +
>  include/linux/cxl_accel_mem.h      |   7 ++
>  6 files changed, 161 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
> index 625394486459..a243ff12c0f4 100644
> --- a/drivers/cxl/core/core.h
> +++ b/drivers/cxl/core/core.h
> @@ -76,6 +76,7 @@ int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxled,
>                       enum cxl_decoder_mode mode);
>  int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size);
>  int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);
> +int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);

Function declared twice here.
Fan
On 8/4/24 19:07, Jonathan Cameron wrote:
> On Mon, 15 Jul 2024 18:28:30 +0100
> alejandro.lucero-palau@amd.com wrote:
>
>> From: Alejandro Lucero <alucerop@amd.com>
>>
>> Region creation involves finding available DPA (device-physical-address)
>> capacity to map into HPA (host-physical-address) space. Given the HPA
>> capacity constraint, define an API, cxl_request_dpa(), that has the
>> flexibility to map the minimum amount of memory the driver needs to
>> operate vs the total possible that can be mapped given HPA availability.
>>
>> Factor out the core of cxl_dpa_alloc, that does free space scanning,
>> into a cxl_dpa_freespace() helper, and use that to balance the capacity
>> available to map vs the @min and @max arguments to cxl_request_dpa.
>>
>> Based on https://lore.kernel.org/linux-cxl/168592149709.1948938.8663425987110396027.stgit@dwillia2-xfh.jf.intel.com/T/#m4271ee49a91615c8af54e3ab20679f8be3099393
>>
> Use the permalink link under these to get shorter links.
> https://lore.kernel.org/linux-cxl/168592158743.1948938.7622563891193802610.stgit@dwillia2-xfh.jf.intel.com/
> goes to the same patch.

I'll do.

>
>> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
>> Co-developed-by: Dan Williams <dan.j.williams@intel.com>
>
>> +
>> +int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
>> +{
>> +        struct cxl_port *port = cxled_to_port(cxled);
>> +        struct device *dev = &cxled->cxld.dev;
>> +        resource_size_t start, avail, skip;
>> +        int rc;
>> +
>> +        down_write(&cxl_dpa_rwsem);
> Some cleanup.h magic would help here by allowing early returns.
> Needs the scoped lock though to ensure it's released before the
> devm_add_action_or_reset() as I'd guess we will deadlock otherwise
> if that fails.

Yes, I'll try to use it to make the code cleaner.

>> +        if (cxled->cxld.region) {
>> +                dev_dbg(dev, "EBUSY, decoder attached to %s\n",
>> +                        dev_name(&cxled->cxld.region->dev));
>> +                rc = -EBUSY;
>>                  goto out;
>>          }
>>
>> +        if (cxled->cxld.flags & CXL_DECODER_F_ENABLE) {
>> +                dev_dbg(dev, "EBUSY, decoder enabled\n");
>> +                rc = -EBUSY;
>> +                goto out;
>> +        }
>> +
>> +        avail = cxl_dpa_freespace(cxled, &start, &skip);
>> +
>>          if (size > avail) {
>>                  dev_dbg(dev, "%pa exceeds available %s capacity: %pa\n", &size,
>> -                        cxl_decoder_mode_name(cxled->mode), &avail);
>> +                        cxled->mode == CXL_DECODER_RAM ? "ram" : "pmem",
>> +                        &avail);
>>                  rc = -ENOSPC;
>>                  goto out;
>>          }
>> @@ -550,6 +570,99 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
>>          return devm_add_action_or_reset(&port->dev, cxl_dpa_release, cxled);
>>  }
>>
>> +static int find_free_decoder(struct device *dev, void *data)
>> +{
>> +        struct cxl_endpoint_decoder *cxled;
>> +        struct cxl_port *port;
>> +
>> +        if (!is_endpoint_decoder(dev))
>> +                return 0;
>> +
>> +        cxled = to_cxl_endpoint_decoder(dev);
>> +        port = cxled_to_port(cxled);
>> +
>> +        if (cxled->cxld.id != port->hdm_end + 1) {
>> +                return 0;
> No brackets

Sure.

>> +        }
>> +        return 1;
>> +}
>> +
>> +/**
>> + * cxl_request_dpa - search and reserve DPA given input constraints
>> + * @endpoint: an endpoint port with available decoders
>> + * @mode: DPA operation mode (ram vs pmem)
>> + * @min: the minimum amount of capacity the call needs
>> + * @max: extra capacity to allocate after min is satisfied
>> + *
>> + * Given that a region needs to allocate from limited HPA capacity it
>> + * may be the case that a device has more mappable DPA capacity than
>> + * available HPA. So, the expectation is that @min is a driver known
>> + * value for how much capacity is needed, and @max is based the limit of
>> + * how much HPA space is available for a new region.
> We are going to need a policy control on the max value.
> Otherwise, if you have two devices that support huge capacity and
> not enough space, who gets it will just be a race.
>
> Not a problem for now though!

I agree. If CXL ends up being what we hope, these races will need to be
better handled.

Thanks!

>> + *
>> + * Returns a pinned cxl_decoder with at least @min bytes of capacity
>> + * reserved, or an error pointer. The caller is also expected to own the
>> + * lifetime of the memdev registration associated with the endpoint to
>> + * pin the decoder registered as well.
>> + */
On 8/6/24 18:33, Fan Ni wrote:
> On Mon, Jul 15, 2024 at 06:28:30PM +0100, alejandro.lucero-palau@amd.com wrote:
>> From: Alejandro Lucero <alucerop@amd.com>
>>
>> Region creation involves finding available DPA (device-physical-address)
>> capacity to map into HPA (host-physical-address) space. Given the HPA
>> capacity constraint, define an API, cxl_request_dpa(), that has the
>> flexibility to map the minimum amount of memory the driver needs to
>> operate vs the total possible that can be mapped given HPA availability.
>>
>> Factor out the core of cxl_dpa_alloc, that does free space scanning,
>> into a cxl_dpa_freespace() helper, and use that to balance the capacity
>> available to map vs the @min and @max arguments to cxl_request_dpa.
>>
>> Based on https://lore.kernel.org/linux-cxl/168592149709.1948938.8663425987110396027.stgit@dwillia2-xfh.jf.intel.com/T/#m4271ee49a91615c8af54e3ab20679f8be3099393
>>
>> Signed-off-by: Alejandro Lucero <alucerop@amd.com>
>> Co-developed-by: Dan Williams <dan.j.williams@intel.com>
>> ---
>>  drivers/cxl/core/core.h            |   1 +
>>  drivers/cxl/core/hdm.c             | 153 +++++++++++++++++++++++++----
>>  drivers/net/ethernet/sfc/efx.c     |   2 +
>>  drivers/net/ethernet/sfc/efx_cxl.c |  18 +++-
>>  drivers/net/ethernet/sfc/efx_cxl.h |   1 +
>>  include/linux/cxl_accel_mem.h      |   7 ++
>>  6 files changed, 161 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
>> index 625394486459..a243ff12c0f4 100644
>> --- a/drivers/cxl/core/core.h
>> +++ b/drivers/cxl/core/core.h
>> @@ -76,6 +76,7 @@ int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxled,
>>                       enum cxl_decoder_mode mode);
>>  int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size);
>>  int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);
>> +int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);
> Function declared twice here.

I'll fix it. Thanks!
> Fan
diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 625394486459..a243ff12c0f4 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -76,6 +76,7 @@ int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxled,
                      enum cxl_decoder_mode mode);
 int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size);
 int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);
+int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);
 resource_size_t cxl_dpa_size(struct cxl_endpoint_decoder *cxled);
 resource_size_t cxl_dpa_resource_start(struct cxl_endpoint_decoder *cxled);

diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
index 4af9225d4b59..3e53ae222d40 100644
--- a/drivers/cxl/core/hdm.c
+++ b/drivers/cxl/core/hdm.c
@@ -3,6 +3,7 @@
 #include <linux/seq_file.h>
 #include <linux/device.h>
 #include <linux/delay.h>
+#include <linux/cxl_accel_mem.h>

 #include "cxlmem.h"
 #include "core.h"
@@ -420,6 +421,7 @@ int cxl_dpa_free(struct cxl_endpoint_decoder *cxled)
         up_write(&cxl_dpa_rwsem);
         return rc;
 }
+EXPORT_SYMBOL_NS_GPL(cxl_dpa_free, CXL);

 int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxled,
                      enum cxl_decoder_mode mode)
@@ -467,30 +469,17 @@ int cxl_dpa_set_mode(struct cxl_endpoint_decoder *cxled,
         return rc;
 }

-int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
+static resource_size_t cxl_dpa_freespace(struct cxl_endpoint_decoder *cxled,
+                                         resource_size_t *start_out,
+                                         resource_size_t *skip_out)
 {
         struct cxl_memdev *cxlmd = cxled_to_memdev(cxled);
         resource_size_t free_ram_start, free_pmem_start;
-        struct cxl_port *port = cxled_to_port(cxled);
         struct cxl_dev_state *cxlds = cxlmd->cxlds;
-        struct device *dev = &cxled->cxld.dev;
         resource_size_t start, avail, skip;
         struct resource *p, *last;
-        int rc;
-
-        down_write(&cxl_dpa_rwsem);
-        if (cxled->cxld.region) {
-                dev_dbg(dev, "decoder attached to %s\n",
-                        dev_name(&cxled->cxld.region->dev));
-                rc = -EBUSY;
-                goto out;
-        }

-        if (cxled->cxld.flags & CXL_DECODER_F_ENABLE) {
-                dev_dbg(dev, "decoder enabled\n");
-                rc = -EBUSY;
-                goto out;
-        }
+        lockdep_assert_held(&cxl_dpa_rwsem);

         for (p = cxlds->ram_res.child, last = NULL; p; p = p->sibling)
                 last = p;
@@ -528,14 +517,45 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
                 skip_end = start - 1;
                 skip = skip_end - skip_start + 1;
         } else {
-                dev_dbg(dev, "mode not set\n");
-                rc = -EINVAL;
+                avail = 0;
+        }
+
+        if (!avail)
+                return 0;
+        if (start_out)
+                *start_out = start;
+        if (skip_out)
+                *skip_out = skip;
+        return avail;
+}
+
+int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
+{
+        struct cxl_port *port = cxled_to_port(cxled);
+        struct device *dev = &cxled->cxld.dev;
+        resource_size_t start, avail, skip;
+        int rc;
+
+        down_write(&cxl_dpa_rwsem);
+        if (cxled->cxld.region) {
+                dev_dbg(dev, "EBUSY, decoder attached to %s\n",
+                        dev_name(&cxled->cxld.region->dev));
+                rc = -EBUSY;
                 goto out;
         }

+        if (cxled->cxld.flags & CXL_DECODER_F_ENABLE) {
+                dev_dbg(dev, "EBUSY, decoder enabled\n");
+                rc = -EBUSY;
+                goto out;
+        }
+
+        avail = cxl_dpa_freespace(cxled, &start, &skip);
+
         if (size > avail) {
                 dev_dbg(dev, "%pa exceeds available %s capacity: %pa\n", &size,
-                        cxl_decoder_mode_name(cxled->mode), &avail);
+                        cxled->mode == CXL_DECODER_RAM ? "ram" : "pmem",
+                        &avail);
                 rc = -ENOSPC;
                 goto out;
         }
@@ -550,6 +570,99 @@ int cxl_dpa_alloc(struct cxl_endpoint_decoder *cxled, unsigned long long size)
         return devm_add_action_or_reset(&port->dev, cxl_dpa_release, cxled);
 }

+static int find_free_decoder(struct device *dev, void *data)
+{
+        struct cxl_endpoint_decoder *cxled;
+        struct cxl_port *port;
+
+        if (!is_endpoint_decoder(dev))
+                return 0;
+
+        cxled = to_cxl_endpoint_decoder(dev);
+        port = cxled_to_port(cxled);
+
+        if (cxled->cxld.id != port->hdm_end + 1) {
+                return 0;
+        }
+        return 1;
+}
+
+/**
+ * cxl_request_dpa - search and reserve DPA given input constraints
+ * @endpoint: an endpoint port with available decoders
+ * @mode: DPA operation mode (ram vs pmem)
+ * @min: the minimum amount of capacity the call needs
+ * @max: extra capacity to allocate after min is satisfied
+ *
+ * Given that a region needs to allocate from limited HPA capacity it
+ * may be the case that a device has more mappable DPA capacity than
+ * available HPA. So, the expectation is that @min is a driver known
+ * value for how much capacity is needed, and @max is based the limit of
+ * how much HPA space is available for a new region.
+ *
+ * Returns a pinned cxl_decoder with at least @min bytes of capacity
+ * reserved, or an error pointer. The caller is also expected to own the
+ * lifetime of the memdev registration associated with the endpoint to
+ * pin the decoder registered as well.
+ */
+struct cxl_endpoint_decoder *cxl_request_dpa(struct cxl_port *endpoint,
+                                             bool is_ram,
+                                             resource_size_t min,
+                                             resource_size_t max)
+{
+        struct cxl_endpoint_decoder *cxled;
+        enum cxl_decoder_mode mode;
+        struct device *cxled_dev;
+        resource_size_t alloc;
+        int rc;
+
+        if (!IS_ALIGNED(min | max, SZ_256M))
+                return ERR_PTR(-EINVAL);
+
+        down_read(&cxl_dpa_rwsem);
+
+        cxled_dev = device_find_child(&endpoint->dev, NULL, find_free_decoder);
+        if (!cxled_dev)
+                cxled = ERR_PTR(-ENXIO);
+        else
+                cxled = to_cxl_endpoint_decoder(cxled_dev);
+
+        up_read(&cxl_dpa_rwsem);
+
+        if (IS_ERR(cxled))
+                return cxled;
+
+        if (is_ram)
+                mode = CXL_DECODER_RAM;
+        else
+                mode = CXL_DECODER_PMEM;
+
+        rc = cxl_dpa_set_mode(cxled, mode);
+        if (rc)
+                goto err;
+
+        down_read(&cxl_dpa_rwsem);
+        alloc = cxl_dpa_freespace(cxled, NULL, NULL);
+        up_read(&cxl_dpa_rwsem);
+
+        if (max)
+                alloc = min(max, alloc);
+        if (alloc < min) {
+                rc = -ENOMEM;
+                goto err;
+        }
+
+        rc = cxl_dpa_alloc(cxled, alloc);
+        if (rc)
+                goto err;
+
+        return cxled;
+err:
+        put_device(cxled_dev);
+        return ERR_PTR(rc);
+}
+EXPORT_SYMBOL_NS_GPL(cxl_request_dpa, CXL);
+
 static void cxld_set_interleave(struct cxl_decoder *cxld, u32 *ctrl)
 {
         u16 eig;
diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c
index cb3f74d30852..9cfe29002d98 100644
--- a/drivers/net/ethernet/sfc/efx.c
+++ b/drivers/net/ethernet/sfc/efx.c
@@ -901,6 +901,8 @@ static void efx_pci_remove(struct pci_dev *pci_dev)

         efx_fini_io(efx);

+        efx_cxl_exit(efx);
+
         pci_dbg(efx->pci_dev, "shutdown successful\n");

         efx_fini_devlink_and_unlock(efx);
diff --git a/drivers/net/ethernet/sfc/efx_cxl.c b/drivers/net/ethernet/sfc/efx_cxl.c
index 6d49571ccff7..b5626d724b52 100644
--- a/drivers/net/ethernet/sfc/efx_cxl.c
+++ b/drivers/net/ethernet/sfc/efx_cxl.c
@@ -84,12 +84,28 @@ void efx_cxl_init(struct efx_nic *efx)
                 goto out;
         }

-        if (max < EFX_CTPIO_BUFFER_SIZE)
+        if (max < EFX_CTPIO_BUFFER_SIZE) {
                 pci_info(pci_dev, "CXL accel not enough free HPA space %llu < %u\n",
                          max, EFX_CTPIO_BUFFER_SIZE);
+                goto out;
+        }
+
+        cxl->cxled = cxl_request_dpa(cxl->endpoint, true, EFX_CTPIO_BUFFER_SIZE,
+                                     EFX_CTPIO_BUFFER_SIZE);
+        if (IS_ERR(cxl->cxled))
+                pci_info(pci_dev, "CXL accel request DPA failed");
 out:
         cxl_release_endpoint(cxl->cxlmd, cxl->endpoint);
 }

+void efx_cxl_exit(struct efx_nic *efx)
+{
+        struct efx_cxl *cxl = efx->cxl;
+
+        if (cxl->cxled)
+                cxl_dpa_free(cxl->cxled);
+
+        return;
+}

 MODULE_IMPORT_NS(CXL);
diff --git a/drivers/net/ethernet/sfc/efx_cxl.h b/drivers/net/ethernet/sfc/efx_cxl.h
index 76c6794c20d8..59d5217a684c 100644
--- a/drivers/net/ethernet/sfc/efx_cxl.h
+++ b/drivers/net/ethernet/sfc/efx_cxl.h
@@ -26,4 +26,5 @@ struct efx_cxl {
 };

 void efx_cxl_init(struct efx_nic *efx);
+void efx_cxl_exit(struct efx_nic *efx);
 #endif
diff --git a/include/linux/cxl_accel_mem.h b/include/linux/cxl_accel_mem.h
index f3e77688ffe0..d4ecb5bb4fc8 100644
--- a/include/linux/cxl_accel_mem.h
+++ b/include/linux/cxl_accel_mem.h
@@ -2,6 +2,7 @@
 /* Copyright(c) 2024 Advanced Micro Devices, Inc. */

 #include <linux/cdev.h>
+#include <linux/pci.h>

 #ifndef __CXL_ACCEL_MEM_H
 #define __CXL_ACCEL_MEM_H
@@ -41,4 +42,10 @@ struct cxl_root_decoder *cxl_get_hpa_freespace(struct cxl_port *endpoint,
                                                int interleave_ways,
                                                unsigned long flags,
                                                resource_size_t *max);
+
+struct cxl_endpoint_decoder *cxl_request_dpa(struct cxl_port *endpoint,
+                                             bool is_ram,
+                                             resource_size_t min,
+                                             resource_size_t max);
+int cxl_dpa_free(struct cxl_endpoint_decoder *cxled);
 #endif
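Taken together, the efx hooks above reduce to a simple request/release
pairing for a Type2 driver. The sketch below condenses that pattern;
accel_map_buffer() and accel_unmap_buffer() are hypothetical names, and the
cxl->endpoint/cxl->cxlmd setup comes from earlier patches in the series:

/* Request exactly @size bytes of volatile DPA: @min == @max, and both
 * must be SZ_256M-aligned or cxl_request_dpa() returns ERR_PTR(-EINVAL). */
static int accel_map_buffer(struct efx_cxl *cxl, resource_size_t size)
{
        cxl->cxled = cxl_request_dpa(cxl->endpoint, true /* ram */,
                                     size, size);
        if (IS_ERR(cxl->cxled)) {
                int rc = PTR_ERR(cxl->cxled);

                cxl->cxled = NULL;
                return rc;
        }
        return 0;
}

/* Release the DPA reservation taken at init time; called at teardown */
static void accel_unmap_buffer(struct efx_cxl *cxl)
{
        if (cxl->cxled)
                cxl_dpa_free(cxl->cxled);
}

Passing @min == @max, as efx does with EFX_CTPIO_BUFFER_SIZE, turns the
min/max flexibility into an all-or-nothing request; a driver that can operate
with less would pass a smaller @min and let cxl_request_dpa() reserve
whatever HPA-bounded capacity is available.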