From patchwork Mon Jan 9 21:43:09 2023
X-Patchwork-Submitter: Dave Jiang
X-Patchwork-Id: 13094395
Subject: [PATCH v2 1/8] cxl: break out range register decoding from cxl_hdm_decode_init()
From: Dave Jiang
To: linux-cxl@vger.kernel.org
Cc: dan.j.williams@intel.com, ira.weiny@intel.com, vishal.l.verma@intel.com, alison.schofield@intel.com, jonathan.cameron@huawei.com
Date: Mon, 09 Jan 2023 14:43:09 -0700
Message-ID: <167330058797.975161.6614835520451455277.stgit@djiang5-mobl3.local>
In-Reply-To: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local>
References: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local>

There are two scenarios that require additional handling:

1. A device that has active ranges in the DVSEC range registers (RR) but no HDM decoder register block.
2. A device that has both active RRs and an HDM decoder register block, but whose HDM decoders are not programmed.

The goal is to create emulated decoder software structs based on the RR.

Move the CXL DVSEC range register decoding code block from cxl_hdm_decode_init() to its own function. Refactor the code in preparation for the HDM decoder emulation. There is no functional change to the code. Name the new function cxl_dvsec_rr_decode(). 
The only change is to set range->start and range->end to CXL_RESOURCE_NONE and skipping the reading of base registers if the range size is 0, which equates to range not active. Signed-off-by: Dave Jiang Reviewed-by: Jonathan Cameron --- v2: - Refactor to return when size is 0. (Jonathan) --- drivers/cxl/core/pci.c | 63 ++++++++++++++++++++++++++++++------------------ 1 file changed, 40 insertions(+), 23 deletions(-) diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c index 57764e9cd19d..a8ecc6ddb3d7 100644 --- a/drivers/cxl/core/pci.c +++ b/drivers/cxl/core/pci.c @@ -141,11 +141,10 @@ int cxl_await_media_ready(struct cxl_dev_state *cxlds) } EXPORT_SYMBOL_NS_GPL(cxl_await_media_ready, CXL); -static int wait_for_valid(struct cxl_dev_state *cxlds) +static int wait_for_valid(struct pci_dev *pdev, int d) { - struct pci_dev *pdev = to_pci_dev(cxlds->dev); - int d = cxlds->cxl_dvsec, rc; u32 val; + int rc; /* * Memory_Info_Valid: When set, indicates that the CXL Range 1 Size high @@ -334,20 +333,11 @@ static bool __cxl_hdm_decode_init(struct cxl_dev_state *cxlds, return true; } -/** - * cxl_hdm_decode_init() - Setup HDM decoding for the endpoint - * @cxlds: Device state - * @cxlhdm: Mapped HDM decoder Capability - * - * Try to enable the endpoint's HDM Decoder Capability - */ -int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm) +static int cxl_dvsec_rr_decode(struct pci_dev *pdev, int d, + struct cxl_endpoint_dvsec_info *info) { - struct pci_dev *pdev = to_pci_dev(cxlds->dev); - struct cxl_endpoint_dvsec_info info = { 0 }; int hdm_count, rc, i, ranges = 0; struct device *dev = &pdev->dev; - int d = cxlds->cxl_dvsec; u16 cap, ctrl; if (!d) { @@ -378,7 +368,7 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm) if (!hdm_count || hdm_count > 2) return -EINVAL; - rc = wait_for_valid(cxlds); + rc = wait_for_valid(pdev, d); if (rc) { dev_dbg(dev, "Failure awaiting MEM_INFO_VALID (%d)\n", rc); return rc; @@ -389,9 +379,9 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm) * disabled, and they will remain moot after the HDM Decoder * capability is enabled. 
*/ - info.mem_enabled = FIELD_GET(CXL_DVSEC_MEM_ENABLE, ctrl); - if (!info.mem_enabled) - goto hdm_init; + info->mem_enabled = FIELD_GET(CXL_DVSEC_MEM_ENABLE, ctrl); + if (!info->mem_enabled) + return 0; for (i = 0; i < hdm_count; i++) { u64 base, size; @@ -410,6 +400,13 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm) return rc; size |= temp & CXL_DVSEC_MEM_SIZE_LOW_MASK; + if (!size) { + info->dvsec_range[i] = (struct range) { + .start = CXL_RESOURCE_NONE, + .end = CXL_RESOURCE_NONE, + }; + continue; + } rc = pci_read_config_dword( pdev, d + CXL_DVSEC_RANGE_BASE_HIGH(i), &temp); @@ -425,22 +422,42 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm) base |= temp & CXL_DVSEC_MEM_BASE_LOW_MASK; - info.dvsec_range[i] = (struct range) { + info->dvsec_range[i] = (struct range) { .start = base, .end = base + size - 1 }; - if (size) - ranges++; + ranges++; } - info.ranges = ranges; + info->ranges = ranges; + + return 0; +} + +/** + * cxl_hdm_decode_init() - Setup HDM decoding for the endpoint + * @cxlds: Device state + * @cxlhdm: Mapped HDM decoder Capability + * + * Try to enable the endpoint's HDM Decoder Capability + */ +int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm) +{ + struct pci_dev *pdev = to_pci_dev(cxlds->dev); + struct cxl_endpoint_dvsec_info info = { 0 }; + struct device *dev = &pdev->dev; + int d = cxlds->cxl_dvsec; + int rc; + + rc = cxl_dvsec_rr_decode(pdev, d, &info); + if (rc < 0) + return rc; /* * If DVSEC ranges are being used instead of HDM decoder registers there * is no use in trying to manage those. */ -hdm_init: if (!__cxl_hdm_decode_init(cxlds, cxlhdm, &info)) { dev_err(dev, "Legacy range registers configuration prevents HDM operation.\n"); From patchwork Mon Jan 9 21:43:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Jiang X-Patchwork-Id: 13094400 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2FFF2C5479D for ; Mon, 9 Jan 2023 21:44:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235367AbjAIVoO (ORCPT ); Mon, 9 Jan 2023 16:44:14 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50076 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238095AbjAIVnX (ORCPT ); Mon, 9 Jan 2023 16:43:23 -0500 Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5FFA42702 for ; Mon, 9 Jan 2023 13:43:20 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1673300600; x=1704836600; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=l3z3942/jSY257oH83j36StrG93mI8mDsMbkC2OGTm0=; b=JJbqdkGPMXqxG1Ucxya3lAgP0cLE7u4jtxi2z/PS+NefmcKeibHZzPyA zmKUPPh/tRO/0bBEvLG3EthomhV0pH+N/+nOPMQTDMM6i44kUCQWcHDwD 8vuEHrOdl83BaGoTGQB5lAkejf8AKfZfBKDPDAqc9y/6WMmdCo3VTe0DK ZZSXc4RR1j0mgaHtQ35pP+sqp2047zRxB7ksWNF/jw0VIChR70A2nfiaY OpZL2+ARmSyz+0+IqBH4j6CsQ9pDMR5sID0Wk17vPOD8mvHi1J6w5JNoB wBtN4NEIW4nRs0z4enARBIeVWW9xLxwWqqK2N2dsWyzW6NUA3smj7J3c7 w==; X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="320687524" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="320687524" Received: from 
fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:43:20 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="985540615" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="985540615" Received: from djiang5-mobl3.amr.corp.intel.com (HELO djiang5-mobl3.local) ([10.212.37.174]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:43:19 -0800 Subject: [PATCH v2 2/8] cxl: export cxl_dvsec_rr_decode() to cxl_port From: Dave Jiang To: linux-cxl@vger.kernel.org Cc: dan.j.williams@intel.com, ira.weiny@intel.com, vishal.l.verma@intel.com, alison.schofield@intel.com, jonathan.cameron@huawei.com Date: Mon, 09 Jan 2023 14:43:17 -0700 Message-ID: <167330059655.975161.1643311250590533844.stgit@djiang5-mobl3.local> In-Reply-To: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> References: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> User-Agent: StGit/1.5 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org Call cxl_dvsec_rr_decode() in the beginning of cxl_port_probe() and preserve the decoded information in a local 'struct cxl_endpoint_dvsec_info'. This info can be passed to various functions later on in order to support the HDM decoder emulation. The invocation of cxl_dvsec_rr_decode() in cxl_hdm_decode_init() is removed and a pointer to the 'struct cxl_endpoint_dvsec_info' is passed in. Signed-off-by: Dave Jiang Reviewed-by: Jonathan Cameron --- v2: - Update kdoc comments (Jonathan) - Use a bool for is_cxl_endpoint() to make it easier for static analysis (Jonathan) --- drivers/cxl/core/pci.c | 18 +++++++----------- drivers/cxl/cxl.h | 14 ++++++++++++++ drivers/cxl/cxlmem.h | 12 ------------ drivers/cxl/cxlpci.h | 3 ++- drivers/cxl/port.c | 21 ++++++++++++++------- 5 files changed, 37 insertions(+), 31 deletions(-) diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c index a8ecc6ddb3d7..2b68b5d462da 100644 --- a/drivers/cxl/core/pci.c +++ b/drivers/cxl/core/pci.c @@ -333,8 +333,8 @@ static bool __cxl_hdm_decode_init(struct cxl_dev_state *cxlds, return true; } -static int cxl_dvsec_rr_decode(struct pci_dev *pdev, int d, - struct cxl_endpoint_dvsec_info *info) +int cxl_dvsec_rr_decode(struct pci_dev *pdev, int d, + struct cxl_endpoint_dvsec_info *info) { int hdm_count, rc, i, ranges = 0; struct device *dev = &pdev->dev; @@ -434,31 +434,27 @@ static int cxl_dvsec_rr_decode(struct pci_dev *pdev, int d, return 0; } +EXPORT_SYMBOL_NS_GPL(cxl_dvsec_rr_decode, CXL); /** * cxl_hdm_decode_init() - Setup HDM decoding for the endpoint * @cxlds: Device state * @cxlhdm: Mapped HDM decoder Capability + * @info: Cached DVSEC range registers info * * Try to enable the endpoint's HDM Decoder Capability */ -int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm) +int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm, + struct cxl_endpoint_dvsec_info *info) { struct pci_dev *pdev = to_pci_dev(cxlds->dev); - struct cxl_endpoint_dvsec_info info = { 0 }; struct device *dev = &pdev->dev; - int d = cxlds->cxl_dvsec; - int rc; - - rc = cxl_dvsec_rr_decode(pdev, d, &info); - if (rc < 0) - return rc; /* * If DVSEC ranges are being used instead of HDM decoder registers there * is no use in trying to manage those. 
*/ - if (!__cxl_hdm_decode_init(cxlds, cxlhdm, &info)) { + if (!__cxl_hdm_decode_init(cxlds, cxlhdm, info)) { dev_err(dev, "Legacy range registers configuration prevents HDM operation.\n"); return -EBUSY; diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 1b1cf459ac77..1057affb2db0 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -630,10 +630,24 @@ int cxl_decoder_add_locked(struct cxl_decoder *cxld, int *target_map); int cxl_decoder_autoremove(struct device *host, struct cxl_decoder *cxld); int cxl_endpoint_autoremove(struct cxl_memdev *cxlmd, struct cxl_port *endpoint); +/** + * struct cxl_endpoint_dvsec_info - Cached DVSEC info + * @mem_enabled: cached value of mem_enabled in the DVSEC, PCIE_DEVICE + * @ranges: Number of active HDM ranges this device uses. + * @dvsec_range: cached attributes of the ranges in the DVSEC, PCIE_DEVICE + */ +struct cxl_endpoint_dvsec_info { + bool mem_enabled; + int ranges; + struct range dvsec_range[2]; +}; + struct cxl_hdm; struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port); int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm); int devm_cxl_add_passthrough_decoder(struct cxl_port *port); +int cxl_dvsec_rr_decode(struct pci_dev *pdev, int dvsec, + struct cxl_endpoint_dvsec_info *info); bool is_cxl_region(struct device *dev); diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index ab138004f644..187a310780a9 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -181,18 +181,6 @@ static inline int cxl_mbox_cmd_rc2errno(struct cxl_mbox_cmd *mbox_cmd) */ #define CXL_CAPACITY_MULTIPLIER SZ_256M -/** - * struct cxl_endpoint_dvsec_info - Cached DVSEC info - * @mem_enabled: cached value of mem_enabled in the DVSEC, PCIE_DEVICE - * @ranges: Number of active HDM ranges this device uses. 
- * @dvsec_range: cached attributes of the ranges in the DVSEC, PCIE_DEVICE - */ -struct cxl_endpoint_dvsec_info { - bool mem_enabled; - int ranges; - struct range dvsec_range[2]; -}; - /** * struct cxl_dev_state - The driver device state * diff --git a/drivers/cxl/cxlpci.h b/drivers/cxl/cxlpci.h index 920909791bb9..430e23345a16 100644 --- a/drivers/cxl/cxlpci.h +++ b/drivers/cxl/cxlpci.h @@ -64,6 +64,7 @@ enum cxl_regloc_type { int devm_cxl_port_enumerate_dports(struct cxl_port *port); struct cxl_dev_state; -int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm); +int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm, + struct cxl_endpoint_dvsec_info *info); void read_cdat_data(struct cxl_port *port); #endif /* __CXL_PCI_H__ */ diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c index 5453771bf330..404639a1c3d0 100644 --- a/drivers/cxl/port.c +++ b/drivers/cxl/port.c @@ -32,12 +32,22 @@ static void schedule_detach(void *cxlmd) static int cxl_port_probe(struct device *dev) { + struct cxl_endpoint_dvsec_info info = { 0 }; struct cxl_port *port = to_cxl_port(dev); + bool is_ep = is_cxl_endpoint(port); + struct cxl_dev_state *cxlds; + struct cxl_memdev *cxlmd; struct cxl_hdm *cxlhdm; int rc; - - if (!is_cxl_endpoint(port)) { + if (is_ep) { + cxlmd = to_cxl_memdev(port->uport); + cxlds = cxlmd->cxlds; + rc = cxl_dvsec_rr_decode(to_pci_dev(cxlds->dev), + cxlds->cxl_dvsec, &info); + if (rc < 0) + return rc; + } else { rc = devm_cxl_port_enumerate_dports(port); if (rc < 0) return rc; @@ -49,10 +59,7 @@ static int cxl_port_probe(struct device *dev) if (IS_ERR(cxlhdm)) return PTR_ERR(cxlhdm); - if (is_cxl_endpoint(port)) { - struct cxl_memdev *cxlmd = to_cxl_memdev(port->uport); - struct cxl_dev_state *cxlds = cxlmd->cxlds; - + if (is_ep) { /* Cache the data early to ensure is_visible() works */ read_cdat_data(port); @@ -61,7 +68,7 @@ static int cxl_port_probe(struct device *dev) if (rc) return rc; - rc = cxl_hdm_decode_init(cxlds, cxlhdm); + rc = cxl_hdm_decode_init(cxlds, cxlhdm, &info); if (rc) return rc; From patchwork Mon Jan 9 21:43:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Jiang X-Patchwork-Id: 13094398 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3D91FC678D6 for ; Mon, 9 Jan 2023 21:44:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237879AbjAIVoM (ORCPT ); Mon, 9 Jan 2023 16:44:12 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53832 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238142AbjAIVn3 (ORCPT ); Mon, 9 Jan 2023 16:43:29 -0500 Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D55C4D116 for ; Mon, 9 Jan 2023 13:43:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1673300608; x=1704836608; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=u9aepsaXyirNhcbFiN9vznmR3hEwKZCc4Z+/uMAW3XI=; b=YL1u4nfYGaRYxOiqnY0dbf3EjErZnkWJIG1QvZ1mi0LXPCo4XJiLjjR/ HdPBt5uAPJiJKiW8w5BemkJrQoi42ppu+RweHZkm0DDaOfP5KCGD8ClHz 2uwhZI4VTR5wP6/gBvs3ViAs2zP2p2etgBnsyo5BXq6EkRNL6Ylbs28YT 
GLOHS8lDG6baJTWEYUA90UjKmhJIquIN+4/6lCrKukX8wVmH61fmKt3NU QId/F4jEml2gayaM0zSdc0i2RMF7TPqIdy2z+i+bwfuqWKNftpohLRz8w G0G7XJI1LWXUuSiK1ZojUzycRkMqC1YRVFT0ujj0MoHwUBDcD14/mWCft Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="320687549" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="320687549" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:43:28 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="985540657" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="985540657" Received: from djiang5-mobl3.amr.corp.intel.com (HELO djiang5-mobl3.local) ([10.212.37.174]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:43:28 -0800 Subject: [PATCH v2 3/8] cxl: refactor cxl_hdm_decode_init() From: Dave Jiang To: linux-cxl@vger.kernel.org Cc: dan.j.williams@intel.com, ira.weiny@intel.com, vishal.l.verma@intel.com, alison.schofield@intel.com, jonathan.cameron@huawei.com Date: Mon, 09 Jan 2023 14:43:26 -0700 Message-ID: <167330060509.975161.6672958439144493413.stgit@djiang5-mobl3.local> In-Reply-To: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> References: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> User-Agent: StGit/1.5 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org With the previous refactoring of DVSEC range registers out of cxl_hdm_decode_init(), it basically becomes a skeleton function. Squash __cxl_hdm_decode_init() with cxl_hdm_decode_init() to simplify the code. cxl_hdm_decode_init() now returns more error codes than just -EBUSY. Signed-off-by: Dave Jiang Reviewed-by: Jonathan Cameron --- v2: - Update commit log to indicate cxl_hdm_decode_init() return additional error codes after change. (Jonathan) --- drivers/cxl/core/pci.c | 137 ++++++++++++++++++++---------------------------- 1 file changed, 57 insertions(+), 80 deletions(-) diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c index 2b68b5d462da..fac6094bb6d4 100644 --- a/drivers/cxl/core/pci.c +++ b/drivers/cxl/core/pci.c @@ -259,80 +259,6 @@ static int devm_cxl_enable_hdm(struct device *host, struct cxl_hdm *cxlhdm) return devm_add_action_or_reset(host, disable_hdm, cxlhdm); } -static bool __cxl_hdm_decode_init(struct cxl_dev_state *cxlds, - struct cxl_hdm *cxlhdm, - struct cxl_endpoint_dvsec_info *info) -{ - void __iomem *hdm = cxlhdm->regs.hdm_decoder; - struct cxl_port *port = cxlhdm->port; - struct device *dev = cxlds->dev; - struct cxl_port *root; - int i, rc, allowed; - u32 global_ctrl; - - global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET); - - /* - * If the HDM Decoder Capability is already enabled then assume - * that some other agent like platform firmware set it up. 
- */ - if (global_ctrl & CXL_HDM_DECODER_ENABLE) { - rc = devm_cxl_enable_mem(&port->dev, cxlds); - if (rc) - return false; - return true; - } - - root = to_cxl_port(port->dev.parent); - while (!is_cxl_root(root) && is_cxl_port(root->dev.parent)) - root = to_cxl_port(root->dev.parent); - if (!is_cxl_root(root)) { - dev_err(dev, "Failed to acquire root port for HDM enable\n"); - return false; - } - - for (i = 0, allowed = 0; info->mem_enabled && i < info->ranges; i++) { - struct device *cxld_dev; - - cxld_dev = device_find_child(&root->dev, &info->dvsec_range[i], - dvsec_range_allowed); - if (!cxld_dev) { - dev_dbg(dev, "DVSEC Range%d denied by platform\n", i); - continue; - } - dev_dbg(dev, "DVSEC Range%d allowed by platform\n", i); - put_device(cxld_dev); - allowed++; - } - - if (!allowed) { - cxl_set_mem_enable(cxlds, 0); - info->mem_enabled = 0; - } - - /* - * Per CXL 2.0 Section 8.1.3.8.3 and 8.1.3.8.4 DVSEC CXL Range 1 Base - * [High,Low] when HDM operation is enabled the range register values - * are ignored by the device, but the spec also recommends matching the - * DVSEC Range 1,2 to HDM Decoder Range 0,1. So, non-zero info->ranges - * are expected even though Linux does not require or maintain that - * match. If at least one DVSEC range is enabled and allowed, skip HDM - * Decoder Capability Enable. - */ - if (info->mem_enabled) - return false; - - rc = devm_cxl_enable_hdm(&port->dev, cxlhdm); - if (rc) - return false; - - rc = devm_cxl_enable_mem(&port->dev, cxlds); - if (rc) - return false; - - return true; -} - int cxl_dvsec_rr_decode(struct pci_dev *pdev, int d, struct cxl_endpoint_dvsec_info *info) { @@ -449,17 +375,68 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm, { struct pci_dev *pdev = to_pci_dev(cxlds->dev); struct device *dev = &pdev->dev; + void __iomem *hdm = cxlhdm->regs.hdm_decoder; + struct cxl_port *port = cxlhdm->port; + struct cxl_port *root; + int i, rc, allowed; + u32 global_ctrl; + + global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET); /* - * If DVSEC ranges are being used instead of HDM decoder registers there - * is no use in trying to manage those. + * If the HDM Decoder Capability is already enabled then assume + * that some other agent like platform firmware set it up. */ - if (!__cxl_hdm_decode_init(cxlds, cxlhdm, info)) { - dev_err(dev, - "Legacy range registers configuration prevents HDM operation.\n"); - return -EBUSY; + if (global_ctrl & CXL_HDM_DECODER_ENABLE) + return devm_cxl_enable_mem(&port->dev, cxlds); + + root = to_cxl_port(port->dev.parent); + while (!is_cxl_root(root) && is_cxl_port(root->dev.parent)) + root = to_cxl_port(root->dev.parent); + if (!is_cxl_root(root)) { + dev_err(dev, "Failed to acquire root port for HDM enable\n"); + return -ENODEV; + } + + for (i = 0, allowed = 0; info->mem_enabled && i < info->ranges; i++) { + struct device *cxld_dev; + + cxld_dev = device_find_child(&root->dev, &info->dvsec_range[i], + dvsec_range_allowed); + if (!cxld_dev) { + dev_dbg(dev, "DVSEC Range%d denied by platform\n", i); + continue; + } + dev_dbg(dev, "DVSEC Range%d allowed by platform\n", i); + put_device(cxld_dev); + allowed++; } + if (!allowed) { + cxl_set_mem_enable(cxlds, 0); + info->mem_enabled = 0; + } + + /* + * Per CXL 2.0 Section 8.1.3.8.3 and 8.1.3.8.4 DVSEC CXL Range 1 Base + * [High,Low] when HDM operation is enabled the range register values + * are ignored by the device, but the spec also recommends matching the + * DVSEC Range 1,2 to HDM Decoder Range 0,1. 
So, non-zero info->ranges + * are expected even though Linux does not require or maintain that + * match. If at least one DVSEC range is enabled and allowed, skip HDM + * Decoder Capability Enable. + */ + if (info->mem_enabled) + return -EBUSY; + + rc = devm_cxl_enable_hdm(&port->dev, cxlhdm); + if (rc) + return rc; + + rc = devm_cxl_enable_mem(&port->dev, cxlds); + if (rc) + return rc; + return 0; } EXPORT_SYMBOL_NS_GPL(cxl_hdm_decode_init, CXL); From patchwork Mon Jan 9 21:43:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Jiang X-Patchwork-Id: 13094396 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CF35CC678D5 for ; Mon, 9 Jan 2023 21:44:12 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237345AbjAIVoK (ORCPT ); Mon, 9 Jan 2023 16:44:10 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49668 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238171AbjAIVnj (ORCPT ); Mon, 9 Jan 2023 16:43:39 -0500 Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E09DF1D for ; Mon, 9 Jan 2023 13:43:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1673300617; x=1704836617; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=zn2+OYolLqh1OIr2SufUiK9yt82vFscUf3lZRobNs5o=; b=Yj6IcPLphPWywfJJfS59b4uRkpyHZ7z8zmAbZwvpmzTcSb6g9yloQugD eWZScT30pNxfJVV43CTduGGQJM+CwFVncGHfCouyjiHJJkHRrG+qzZOGS H+D+AxcA9TYLvCq47424XOvDHhNe0IbbEe8e+3lmP6gQt2uzc4+Z3ax61 Z6CiI3PG9kA1YDCRfdlvXJdJUltsrC5MVASK8EpTXN/qRNBBcVFc2uaT+ te/cZWexsA54V81Wl2VQ5boAyw4Y/8607abJVOQEenEJp4R7+whK4BQ68 AkQBuT85co5LxSiKigOh25pNmeBgd2c0F6SNP8rCyKbHXhoOF8r1AE2SL g==; X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="320687579" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="320687579" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:43:37 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="985540697" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="985540697" Received: from djiang5-mobl3.amr.corp.intel.com (HELO djiang5-mobl3.local) ([10.212.37.174]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:43:36 -0800 Subject: [PATCH v2 4/8] cxl: emulate HDM decoder from DVSEC range registers From: Dave Jiang To: linux-cxl@vger.kernel.org Cc: dan.j.williams@intel.com, ira.weiny@intel.com, vishal.l.verma@intel.com, alison.schofield@intel.com, jonathan.cameron@huawei.com Date: Mon, 09 Jan 2023 14:43:35 -0700 Message-ID: <167330061368.975161.13239811394839772488.stgit@djiang5-mobl3.local> In-Reply-To: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> References: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> User-Agent: StGit/1.5 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org In the case where HDM decoder register block exists but is not programmed and at the same time the DVSEC range register range is active, populate the CXL decoder object 'cxl_decoder' with info from DVSEC range registers. 
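
To make the emulation path concrete: the new cxl_setup_hdm_decoder_from_dvsec() below boils down to copying a cached, active DVSEC range into the decoder's HPA range and marking it enabled and locked. A minimal standalone sketch of that idea, using made-up stand-in types (fake_decoder, fake_dvsec_info) and flag values rather than the kernel's real cxl_decoder and cxl_endpoint_dvsec_info definitions, could look like this:

        /* Illustrative sketch only: stand-in types and flag bits, not the kernel's. */
        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        #define CXL_RESOURCE_NONE  ((uint64_t)-1)  /* "range not active" marker */
        #define DECODER_F_ENABLE   (1u << 0)       /* assumed flag bits for this sketch */
        #define DECODER_F_LOCK     (1u << 1)

        struct range { uint64_t start, end; };

        struct fake_decoder {
                struct range hpa_range;
                unsigned int flags;
        };

        struct fake_dvsec_info {
                int ranges;
                struct range dvsec_range[2];
        };

        /* Populate a decoder from a cached DVSEC range, or report it inactive. */
        static int setup_decoder_from_dvsec(struct fake_decoder *cxld,
                                            const struct fake_dvsec_info *info, int which)
        {
                if (info->dvsec_range[which].start == CXL_RESOURCE_NONE)
                        return -1;      /* range register not active, nothing to emulate */

                cxld->hpa_range = info->dvsec_range[which];
                cxld->flags |= DECODER_F_ENABLE | DECODER_F_LOCK;
                return 0;
        }

        int main(void)
        {
                struct fake_dvsec_info info = {
                        .ranges = 1,
                        .dvsec_range = {
                                { .start = 0x100000000ULL, .end = 0x1ffffffffULL },
                                { .start = CXL_RESOURCE_NONE, .end = CXL_RESOURCE_NONE },
                        },
                };
                struct fake_decoder cxld = { 0 };

                if (setup_decoder_from_dvsec(&cxld, &info, 0) == 0)
                        printf("decoder0 emulated: %#" PRIx64 "-%#" PRIx64 "\n",
                               cxld.hpa_range.start, cxld.hpa_range.end);
                return 0;
        }

The real code in the diff additionally sets target_type to CXL_DECODER_EXPANDER and clears the commit/reset callbacks, since an emulated decoder cannot be reprogrammed.
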
Signed-off-by: Dave Jiang Reviewed-by: Jonathan Cameron --- v2: - Set target_type to CXL_DECODER_EXPANDER (type 3). (Jonathan) - Skip HDM enabling if DVSEC range is active. (Jonathan) --- drivers/cxl/core/hdm.c | 36 +++++++++++++++++++++++++++++++++--- drivers/cxl/core/pci.c | 2 +- drivers/cxl/cxl.h | 3 ++- drivers/cxl/port.c | 2 +- 4 files changed, 37 insertions(+), 6 deletions(-) diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c index dcc16d7cb8f3..af1f5f906f52 100644 --- a/drivers/cxl/core/hdm.c +++ b/drivers/cxl/core/hdm.c @@ -679,9 +679,34 @@ static int cxl_decoder_reset(struct cxl_decoder *cxld) return 0; } +static int cxl_setup_hdm_decoder_from_dvsec(struct cxl_port *port, + struct cxl_decoder *cxld, int which, + struct cxl_endpoint_dvsec_info *info) +{ + if (!is_cxl_endpoint(port)) + return -EOPNOTSUPP; + + if (info->dvsec_range[which].start == CXL_RESOURCE_NONE) + return -ENXIO; + + cxld->target_type = CXL_DECODER_EXPANDER; + cxld->commit = NULL; + cxld->reset = NULL; + + cxld->hpa_range = (struct range) { + .start = info->dvsec_range[which].start, + .end = info->dvsec_range[which].end, + }; + + cxld->flags |= CXL_DECODER_F_ENABLE | CXL_DECODER_F_LOCK; + port->commit_end = cxld->id; + + return 0; +} + static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld, int *target_map, void __iomem *hdm, int which, - u64 *dpa_base) + u64 *dpa_base, struct cxl_endpoint_dvsec_info *info) { struct cxl_endpoint_decoder *cxled = NULL; u64 size, base, skip, dpa_size; @@ -717,6 +742,10 @@ static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld, .end = base + size - 1, }; + if (cxled && !committed && + info->dvsec_range[which].start != CXL_RESOURCE_NONE) + return cxl_setup_hdm_decoder_from_dvsec(port, cxld, which, info); + /* decoders are enabled if committed */ if (committed) { cxld->flags |= CXL_DECODER_F_ENABLE; @@ -790,7 +819,8 @@ static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld, * devm_cxl_enumerate_decoders - add decoder objects per HDM register set * @cxlhdm: Structure to populate with HDM capabilities */ -int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm) +int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm, + struct cxl_endpoint_dvsec_info *info) { void __iomem *hdm = cxlhdm->regs.hdm_decoder; struct cxl_port *port = cxlhdm->port; @@ -842,7 +872,7 @@ int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm) cxld = &cxlsd->cxld; } - rc = init_hdm_decoder(port, cxld, target_map, hdm, i, &dpa_base); + rc = init_hdm_decoder(port, cxld, target_map, hdm, i, &dpa_base, info); if (rc) { put_device(&cxld->dev); return rc; diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c index fac6094bb6d4..d5eb3aa1df85 100644 --- a/drivers/cxl/core/pci.c +++ b/drivers/cxl/core/pci.c @@ -427,7 +427,7 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm, * Decoder Capability Enable. 
*/ if (info->mem_enabled) - return -EBUSY; + return 0; rc = devm_cxl_enable_hdm(&port->dev, cxlhdm); if (rc) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 1057affb2db0..ea9548cbc7eb 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -644,7 +644,8 @@ struct cxl_endpoint_dvsec_info { struct cxl_hdm; struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port); -int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm); +int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm, + struct cxl_endpoint_dvsec_info *info); int devm_cxl_add_passthrough_decoder(struct cxl_port *port); int cxl_dvsec_rr_decode(struct pci_dev *pdev, int dvsec, struct cxl_endpoint_dvsec_info *info); diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c index 404639a1c3d0..7f1b71c5cf15 100644 --- a/drivers/cxl/port.c +++ b/drivers/cxl/port.c @@ -79,7 +79,7 @@ static int cxl_port_probe(struct device *dev) } } - rc = devm_cxl_enumerate_decoders(cxlhdm); + rc = devm_cxl_enumerate_decoders(cxlhdm, &info); if (rc) { dev_err(dev, "Couldn't enumerate decoders (%d)\n", rc); return rc; From patchwork Mon Jan 9 21:43:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Jiang X-Patchwork-Id: 13094401 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A42EDC54EBD for ; Mon, 9 Jan 2023 21:44:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237206AbjAIVoP (ORCPT ); Mon, 9 Jan 2023 16:44:15 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:49668 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238271AbjAIVnv (ORCPT ); Mon, 9 Jan 2023 16:43:51 -0500 Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AE61F25B for ; Mon, 9 Jan 2023 13:43:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1673300630; x=1704836630; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=H7U4E/EiJmIW+DPLfAwq1viDQORc3aeOytCU3iyZYOc=; b=XwgpqeEEC4WzoALhTrDbMe5DtNecR409QBzXob1bESJCU9LJsWj77nDS sJapNvDUj3OOdaP5u6DnLwd09COvhJyv9SeI3mFuRD8xHRAn1C2gbJ2o1 qO6s5I/+OkWHllACFD2aAmLsA54+bfzZckFxYMHf0xGqedQ/qikXGyzML 7uMeBangpy6qRNMdZL1tCrxzklCOAgzgXUjDev5dGXSnSYruhzwVimJwK s5GIaeLTXpjdLbkFLvm+dnHgkILx29VFdvcpeT6D1bVpBuNb8h9jDzbjb uLbzm9lcpWamUoXuHNX87v3gn8njZW3vJrLLADxfVI1w0oB46RbT7kvwM g==; X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="320687623" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="320687623" Received: from fmsmga005.fm.intel.com ([10.253.24.32]) by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:43:50 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="985540754" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="985540754" Received: from djiang5-mobl3.amr.corp.intel.com (HELO djiang5-mobl3.local) ([10.212.37.174]) by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:43:47 -0800 Subject: [PATCH v2 5/8] cxl: create emulated cxl_hdm for devices that do not have HDM decoders From: Dave Jiang To: linux-cxl@vger.kernel.org Cc: dan.j.williams@intel.com, ira.weiny@intel.com, vishal.l.verma@intel.com, alison.schofield@intel.com, 
jonathan.cameron@huawei.com Date: Mon, 09 Jan 2023 14:43:43 -0700 Message-ID: <167330062229.975161.18102703412584824456.stgit@djiang5-mobl3.local> In-Reply-To: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> References: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> User-Agent: StGit/1.5 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org CXL rev3 spec 8.1.3 RCDs may not have HDM register blocks. Create a fake HDM with information from the CXL PCIe DVSEC registers. The decoder count will be set to the HDM count retrieved from the DVSEC cap register. Signed-off-by: Dave Jiang Reviewed-by: Jonathan Cameron --- v2: - Set target_count to same as number of ranges. (Jonathan) --- drivers/cxl/core/hdm.c | 27 ++++++++++++++++++++++++++- drivers/cxl/core/pci.c | 9 ++++++--- drivers/cxl/cxl.h | 3 ++- drivers/cxl/port.c | 2 +- 4 files changed, 35 insertions(+), 6 deletions(-) diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c index af1f5f906f52..165c0f382ce1 100644 --- a/drivers/cxl/core/hdm.c +++ b/drivers/cxl/core/hdm.c @@ -101,11 +101,33 @@ static int map_hdm_decoder_regs(struct cxl_port *port, void __iomem *crb, BIT(CXL_CM_CAP_CAP_ID_HDM)); } +static struct cxl_hdm *devm_cxl_setup_emulated_hdm(struct cxl_port *port, + struct cxl_endpoint_dvsec_info *info) +{ + struct device *dev = &port->dev; + struct cxl_hdm *cxlhdm; + + if (!info->mem_enabled) + return ERR_PTR(-ENODEV); + + cxlhdm = devm_kzalloc(dev, sizeof(*cxlhdm), GFP_KERNEL); + if (!cxlhdm) + return ERR_PTR(-ENOMEM); + + cxlhdm->port = port; + cxlhdm->decoder_count = info->ranges; + cxlhdm->target_count = info->ranges; + dev_set_drvdata(&port->dev, cxlhdm); + + return cxlhdm; +} + /** * devm_cxl_setup_hdm - map HDM decoder component registers * @port: cxl_port to map */ -struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port) +struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port, + struct cxl_endpoint_dvsec_info *info) { struct device *dev = &port->dev; struct cxl_hdm *cxlhdm; @@ -119,6 +141,9 @@ struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port) cxlhdm->port = port; crb = ioremap(port->component_reg_phys, CXL_COMPONENT_REG_BLOCK_SIZE); if (!crb) { + if (info->mem_enabled) + return devm_cxl_setup_emulated_hdm(port, info); + dev_err(dev, "No component registers mapped\n"); return ERR_PTR(-ENXIO); } diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c index d5eb3aa1df85..c6d6d7b720c5 100644 --- a/drivers/cxl/core/pci.c +++ b/drivers/cxl/core/pci.c @@ -379,16 +379,19 @@ int cxl_hdm_decode_init(struct cxl_dev_state *cxlds, struct cxl_hdm *cxlhdm, struct cxl_port *port = cxlhdm->port; struct cxl_port *root; int i, rc, allowed; - u32 global_ctrl; + u32 global_ctrl = 0; - global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET); + if (hdm) + global_ctrl = readl(hdm + CXL_HDM_DECODER_CTRL_OFFSET); /* * If the HDM Decoder Capability is already enabled then assume * that some other agent like platform firmware set it up. 
*/ - if (global_ctrl & CXL_HDM_DECODER_ENABLE) + if (global_ctrl & CXL_HDM_DECODER_ENABLE || (!hdm && info->mem_enabled)) return devm_cxl_enable_mem(&port->dev, cxlds); + else if (!hdm) + return -ENODEV; root = to_cxl_port(port->dev.parent); while (!is_cxl_root(root) && is_cxl_port(root->dev.parent)) diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index ea9548cbc7eb..0ec047cced90 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -643,7 +643,8 @@ struct cxl_endpoint_dvsec_info { }; struct cxl_hdm; -struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port); +struct cxl_hdm *devm_cxl_setup_hdm(struct cxl_port *port, + struct cxl_endpoint_dvsec_info *info); int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm, struct cxl_endpoint_dvsec_info *info); int devm_cxl_add_passthrough_decoder(struct cxl_port *port); diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c index 7f1b71c5cf15..875bf45db4ad 100644 --- a/drivers/cxl/port.c +++ b/drivers/cxl/port.c @@ -55,7 +55,7 @@ static int cxl_port_probe(struct device *dev) return devm_cxl_add_passthrough_decoder(port); } - cxlhdm = devm_cxl_setup_hdm(port); + cxlhdm = devm_cxl_setup_hdm(port, &info); if (IS_ERR(cxlhdm)) return PTR_ERR(cxlhdm); From patchwork Mon Jan 9 21:43:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Jiang X-Patchwork-Id: 13094397 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4B57FC54EBD for ; Mon, 9 Jan 2023 21:44:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237965AbjAIVoM (ORCPT ); Mon, 9 Jan 2023 16:44:12 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:53994 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S238297AbjAIVn7 (ORCPT ); Mon, 9 Jan 2023 16:43:59 -0500 Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1265825B for ; Mon, 9 Jan 2023 13:43:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1673300638; x=1704836638; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=tJuhAWwHWZvWyHuRW8/PxhUex/oDdjRNpevV7WeyONg=; b=gDGBPfAVjvbfgq4Uc+B4kdFfBzj50X3VPjYgzJwPCa2IoNbBgAWamAg2 Ru9TOlmKmb2MVs9QLqtg5lQ8VokNTxWBMxFBjIyql/F2fJCOJvsQ4PjI/ H5+r5HVB8byEfqHo6V+0aZwZ83Sb7uIf07/tAHTALL+27YB6LFm6qKDqY M+hblseCiCAR2eq8UJt3hcBIB9QK5KCQt2arUqw2PioLDLMOcMH7qKqHA kB4qfg2vcYQYF01ouKo7ljQZUp0fdHKzb1sWXkyvsbQHECf59tCToDBpJ AYWZg11UlVSj5xDbSMsRy5BgQRyJE7ZXA5Z4F8M9ZUj/x+O7Umxp/AGYq g==; X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="324996124" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="324996124" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:43:57 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="780833594" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="780833594" Received: from djiang5-mobl3.amr.corp.intel.com (HELO djiang5-mobl3.local) ([10.212.37.174]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:43:56 -0800 Subject: [PATCH v2 6/8] cxl: create emulated decoders for devices without HDM decoders From: Dave Jiang To: 
linux-cxl@vger.kernel.org Cc: dan.j.williams@intel.com, ira.weiny@intel.com, vishal.l.verma@intel.com, alison.schofield@intel.com, jonathan.cameron@huawei.com Date: Mon, 09 Jan 2023 14:43:54 -0700 Message-ID: <167330063330.975161.12886268473522769039.stgit@djiang5-mobl3.local> In-Reply-To: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> References: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> User-Agent: StGit/1.5 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org CXL rev3.0 spec 8.1.3 RCDs may not have HDM register blocks. Create fake decoders based on CXL PCIe DVSEC registers. The DVSEC Range Registers provide the memory range for these decoder structs. For the RCD, there can be up to 2 decoders depending on the DVSEC Capability register HDM_count. Signed-off-by: Dave Jiang --- v2: - Refactor to put error case out of line. (Jonathan) - kdoc update. (Jonathan) - Remove init_emulated_hdm_decoder(), duplicate of cxl_setup_hdm_decoder_from_dvsec(). --- drivers/cxl/core/hdm.c | 37 ++++++++++++++++++++++++++++--------- 1 file changed, 28 insertions(+), 9 deletions(-) diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c index 165c0f382ce1..ed5e9ef3aa9b 100644 --- a/drivers/cxl/core/hdm.c +++ b/drivers/cxl/core/hdm.c @@ -747,6 +747,13 @@ static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld, if (is_endpoint_decoder(&cxld->dev)) cxled = to_cxl_endpoint_decoder(&cxld->dev); + if (!hdm) { + if (!cxled) + return -EINVAL; + + return cxl_setup_hdm_decoder_from_dvsec(port, cxld, which, info); + } + ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(which)); base = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_BASE_LOW_OFFSET(which)); size = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_SIZE_LOW_OFFSET(which)); @@ -840,19 +847,15 @@ static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld, return 0; } -/** - * devm_cxl_enumerate_decoders - add decoder objects per HDM register set - * @cxlhdm: Structure to populate with HDM capabilities - */ -int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm, - struct cxl_endpoint_dvsec_info *info) +static void cxl_settle_decoders(struct cxl_hdm *cxlhdm) { void __iomem *hdm = cxlhdm->regs.hdm_decoder; - struct cxl_port *port = cxlhdm->port; - int i, committed; - u64 dpa_base = 0; + int committed, i; u32 ctrl; + if (!hdm) + return; + /* * Since the register resource was recently claimed via request_region() * be careful about trusting the "not-committed" status until the commit @@ -869,6 +872,22 @@ int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm, /* ensure that future checks of committed can be trusted */ if (committed != cxlhdm->decoder_count) msleep(20); +} + +/** + * devm_cxl_enumerate_decoders - add decoder objects per HDM register set + * @cxlhdm: Structure to populate with HDM capabilities + * @info: cached DVSEC range register info + */ +int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm, + struct cxl_endpoint_dvsec_info *info) +{ + void __iomem *hdm = cxlhdm->regs.hdm_decoder; + struct cxl_port *port = cxlhdm->port; + int i; + u64 dpa_base = 0; + + cxl_settle_decoders(cxlhdm); for (i = 0; i < cxlhdm->decoder_count; i++) { int target_map[CXL_DECODER_MAX_INTERLEAVE] = { 0 }; From patchwork Mon Jan 9 21:44:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Jiang X-Patchwork-Id: 13094402 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A0B86C54EBE for ; Mon, 9 Jan 2023 21:44:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237573AbjAIVoQ (ORCPT ); Mon, 9 Jan 2023 16:44:16 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54184 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237864AbjAIVoI (ORCPT ); Mon, 9 Jan 2023 16:44:08 -0500 Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1A594DC8 for ; Mon, 9 Jan 2023 13:44:07 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1673300647; x=1704836647; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=fx7sWD4wX92tsiAevGnOosHj/a5EJwFgkJqoukHvyf0=; b=epi742WxfnLAt3C4sfUcLa7ufSPhvBbZ9gMTqWxmTLq/SPNTYKI/Q34s gH2nWsoQQsxmrLhuwVjcNQ5zgjJm9w+PVtlvYER4a+6exNLFNgotvZG9n oa0XwP7HHt4HnqU7M1/dewdxBMcosLO1ML0nzfUGP1T8QEjP5NRccc+k8 SjMWXLEJxp3Hs53bWhBQb8i7/GsLB3DW8QX5E9WeSVYO04lLpBH6HVIWm nZbwrqMjRmLvY2hWGZahrLkuzW2bzDoNNlHp2F8qCQu97oX0qz4PjF8lG HIriFN5YTF7CiCBIhdw8NOKUtE56N8wSZiqVmUmt3HlpouTbs64xEq9m8 w==; X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="324996133" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="324996133" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:44:06 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="780833630" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="780833630" Received: from djiang5-mobl3.amr.corp.intel.com (HELO djiang5-mobl3.local) ([10.212.37.174]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:44:05 -0800 Subject: [PATCH v2 7/8] cxl: Add emulation when HDM decoders are not committed From: Dave Jiang To: linux-cxl@vger.kernel.org Cc: dan.j.williams@intel.com, ira.weiny@intel.com, vishal.l.verma@intel.com, alison.schofield@intel.com, jonathan.cameron@huawei.com Date: Mon, 09 Jan 2023 14:44:03 -0700 Message-ID: <167330064247.975161.16867413974628215063.stgit@djiang5-mobl3.local> In-Reply-To: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> References: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> User-Agent: StGit/1.5 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org For the case where DVSEC range register(s) are active and HDM decoders are not committed, use RR to provide emulation. A first pass is done to note whether any decoders are committed. If there are no committed endpoint decoders, then DVSEC ranges will be used for emulation. 
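
As a rough sketch of the first-pass check described above (not the kernel implementation; the register snapshot and committed-bit position below are assumptions for illustration), the decision reduces to: endpoint ports only, emulate when no HDM register block is mapped, and otherwise emulate only if no decoder control register has its committed bit set:

        /* Illustrative sketch only: fake register snapshot, assumed bit position. */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define FAKE_DECODER_CTRL_COMMITTED  (1u << 10)  /* assumed committed bit */

        struct fake_hdm {
                bool is_endpoint;
                int decoder_count;
                const uint32_t *decoder_ctrl;  /* NULL models a missing HDM register block */
        };

        static bool should_emulate_decoders(const struct fake_hdm *hdm)
        {
                int i;

                if (!hdm->is_endpoint)
                        return false;  /* only endpoints carry DVSEC range registers */

                if (!hdm->decoder_ctrl)
                        return true;   /* no HDM decoder registers at all: must emulate */

                /* Any committed decoder means HDM is already in use; do not emulate. */
                for (i = 0; i < hdm->decoder_count; i++)
                        if (hdm->decoder_ctrl[i] & FAKE_DECODER_CTRL_COMMITTED)
                                return false;

                return true;
        }

        int main(void)
        {
                const uint32_t ctrl[2] = { 0, 0 };  /* neither decoder committed */
                struct fake_hdm hdm = { true, 2, ctrl };

                printf("emulate: %s\n", should_emulate_decoders(&hdm) ? "yes" : "no");
                return 0;
        }

When the check passes, init_hdm_decoder() in the diff below short-circuits into the DVSEC-based setup added in patch 4 instead of reading the uncommitted HDM registers.
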
Signed-off-by: Dave Jiang Reviewed-by: Jonathan Cameron --- drivers/cxl/core/hdm.c | 39 ++++++++++++++++++++++++++++++++------- drivers/cxl/cxl.h | 1 + 2 files changed, 33 insertions(+), 7 deletions(-) diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c index ed5e9ef3aa9b..40844ff2fe52 100644 --- a/drivers/cxl/core/hdm.c +++ b/drivers/cxl/core/hdm.c @@ -729,6 +729,33 @@ static int cxl_setup_hdm_decoder_from_dvsec(struct cxl_port *port, return 0; } +static bool should_emulate_decoders(struct cxl_hdm *cxlhdm) +{ + void __iomem *hdm = cxlhdm->regs.hdm_decoder; + bool committed; + u32 ctrl; + int i; + + if (!is_cxl_endpoint(cxlhdm->port)) + return false; + + if (!hdm) + return true; + + /* + * If any decoders are committed already, there should not be any + * emulated DVSEC decoders. + */ + for (i = 0; i < cxlhdm->decoder_count; i++) { + ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(i)); + committed = !!(ctrl & CXL_HDM_DECODER0_CTRL_COMMITTED); + if (committed) + return false; + } + + return true; +} + static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld, int *target_map, void __iomem *hdm, int which, u64 *dpa_base, struct cxl_endpoint_dvsec_info *info) @@ -744,16 +771,12 @@ static int init_hdm_decoder(struct cxl_port *port, struct cxl_decoder *cxld, unsigned char target_id[8]; } target_list; + if (info->emulate_decoders) + return cxl_setup_hdm_decoder_from_dvsec(port, cxld, which, info); + if (is_endpoint_decoder(&cxld->dev)) cxled = to_cxl_endpoint_decoder(&cxld->dev); - if (!hdm) { - if (!cxled) - return -EINVAL; - - return cxl_setup_hdm_decoder_from_dvsec(port, cxld, which, info); - } - ctrl = readl(hdm + CXL_HDM_DECODER0_CTRL_OFFSET(which)); base = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_BASE_LOW_OFFSET(which)); size = ioread64_hi_lo(hdm + CXL_HDM_DECODER0_SIZE_LOW_OFFSET(which)); @@ -889,6 +912,8 @@ int devm_cxl_enumerate_decoders(struct cxl_hdm *cxlhdm, cxl_settle_decoders(cxlhdm); + info->emulate_decoders = should_emulate_decoders(cxlhdm); + for (i = 0; i < cxlhdm->decoder_count; i++) { int target_map[CXL_DECODER_MAX_INTERLEAVE] = { 0 }; int rc, target_count = cxlhdm->target_count; diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h index 0ec047cced90..f1aa57a95150 100644 --- a/drivers/cxl/cxl.h +++ b/drivers/cxl/cxl.h @@ -640,6 +640,7 @@ struct cxl_endpoint_dvsec_info { bool mem_enabled; int ranges; struct range dvsec_range[2]; + bool emulate_decoders; }; struct cxl_hdm; From patchwork Mon Jan 9 21:44:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Dave Jiang X-Patchwork-Id: 13094403 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 325F0C54EBD for ; Mon, 9 Jan 2023 21:44:31 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S238093AbjAIVoW (ORCPT ); Mon, 9 Jan 2023 16:44:22 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54364 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S237739AbjAIVoQ (ORCPT ); Mon, 9 Jan 2023 16:44:16 -0500 Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E9B2860E1 for ; Mon, 9 Jan 2023 13:44:15 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1673300655; 
x=1704836655; h=subject:from:to:cc:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=E/trlT13ETI3YmJPANL0iuM/1+/cvraUbIuYEE/imag=; b=hF4x41GChaTYTMircpuarWyM80JYjKx/y47z4fiXB0vCjP8MfzgGXuJd pJbfTRk6/JL8lWa1VU+iGUbyuNQ6S8qKqg5kwzAjevKgameUK58nrM6UF wj5haIsyS09hBM8ZA3FJy7xKnruGm5qsW2fTAx9yuBzIwKI4gmHvpgetN mESfVq770AORTMVf601lOWe31aQKfCXjfJI7/CVuqKChohmXLRULCXZMv pKvSbVM7f09d7p5eC4q469G3M/N9fL2PsJhPnrExqgZyYeiQ/ZKpyPaxv BCr4+h6Z/iybBOhGT1Hd+jrSW2UKLEae5XCRuTGnCSSeCNDnVhj38SDvc Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="324996149" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="324996149" Received: from orsmga004.jf.intel.com ([10.7.209.38]) by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:44:15 -0800 X-IronPort-AV: E=McAfee;i="6500,9779,10585"; a="780833670" X-IronPort-AV: E=Sophos;i="5.96,313,1665471600"; d="scan'208";a="780833670" Received: from djiang5-mobl3.amr.corp.intel.com (HELO djiang5-mobl3.local) ([10.212.37.174]) by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Jan 2023 13:44:15 -0800 Subject: [PATCH v2 8/8] cxl: remove locked check for dvsec_range_allowed() From: Dave Jiang To: linux-cxl@vger.kernel.org Cc: dan.j.williams@intel.com, ira.weiny@intel.com, vishal.l.verma@intel.com, alison.schofield@intel.com, jonathan.cameron@huawei.com Date: Mon, 09 Jan 2023 14:44:13 -0700 Message-ID: <167330065180.975161.18302590071676919347.stgit@djiang5-mobl3.local> In-Reply-To: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> References: <167330048147.975161.8832707018372221375.stgit@djiang5-mobl3.local> User-Agent: StGit/1.5 MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-cxl@vger.kernel.org There is no reason that the CFMWS will always set the "Fixed Device Configuration" bit in the "Window Restrictions" field. Remove the CXL_DECODER_F_LOCK check. Signed-off-by: Dave Jiang Reviewed-by: Jonathan Cameron --- drivers/cxl/core/pci.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c index c6d6d7b720c5..931ac4be539a 100644 --- a/drivers/cxl/core/pci.c +++ b/drivers/cxl/core/pci.c @@ -228,8 +228,6 @@ static int dvsec_range_allowed(struct device *dev, void *arg) cxld = to_cxl_decoder(dev); - if (!(cxld->flags & CXL_DECODER_F_LOCK)) - return 0; if (!(cxld->flags & CXL_DECODER_F_RAM)) return 0;