From patchwork Sat Jun 17 01:24:28 2023
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 13283391
Subject: [PATCH 1/3] cxl/region: Move cache invalidation before region teardown, and before setup
From: Dan Williams
To: linux-cxl@vger.kernel.org
Cc: Jonathan Cameron, Vikram Sethi
Date: Fri, 16 Jun 2023 18:24:28 -0700
Message-ID: <168696506886.3590522.4597053660991916591.stgit@dwillia2-xfh.jf.intel.com>
In-Reply-To: <168696506332.3590522.12981963617215460385.stgit@dwillia2-xfh.jf.intel.com>
References: <168696506332.3590522.12981963617215460385.stgit@dwillia2-xfh.jf.intel.com>
User-Agent: StGit/0.18-3-g996c
List-ID: X-Mailing-List: linux-cxl@vger.kernel.org

Vikram raised a concern about the theoretical case of a CPU sending
MemClnEvict to a device that is not prepared to receive it. MemClnEvict
is a message sent after a CPU has taken ownership of a cacheline from
accelerator memory (HDM-DB). In the case of hotplug or HDM decoder
reconfiguration it is possible that the CPU is holding stale contents
for a new device that has taken over the physical address range being
cached by the CPU.

To avoid this scenario, invalidate caches prior to tearing down an HDM
decoder configuration. That still leaves a gap: something may speculate
into that address space while the decode configuration is up, so to
close that gap also invalidate caches prior to establishing new contents
behind a given physical address range.

With this change the cache invalidation is now explicit and need not be
checked in cxl_region_probe(), which obviates the need for
CXL_REGION_F_INCOHERENT.
Cc: Jonathan Cameron
Fixes: d18bc74aced6 ("cxl/region: Manage CPU caches relative to DPA invalidation events")
Reported-by: Vikram Sethi
Closes: http://lore.kernel.org/r/BYAPR12MB33364B5EB908BF7239BB996BBD53A@BYAPR12MB3336.namprd12.prod.outlook.com
Signed-off-by: Dan Williams
Reviewed-by: Dave Jiang
Reviewed-by: Jonathan Cameron
---
 drivers/cxl/core/region.c | 66 +++++++++++++++++++++++++--------------------
 drivers/cxl/cxl.h         |  8 +----
 2 files changed, 38 insertions(+), 36 deletions(-)

diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 013f3656e661..d48c5916f2e9 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -125,10 +125,38 @@ static struct cxl_region_ref *cxl_rr_load(struct cxl_port *port,
 	return xa_load(&port->regions, (unsigned long)cxlr);
 }
 
+static int cxl_region_invalidate_memregion(struct cxl_region *cxlr)
+{
+	if (!cpu_cache_has_invalidate_memregion()) {
+		if (IS_ENABLED(CONFIG_CXL_REGION_INVALIDATION_TEST)) {
+			dev_warn_once(
+				&cxlr->dev,
+				"Bypassing cpu_cache_invalidate_memregion() for testing!\n");
+			return 0;
+		} else {
+			dev_err(&cxlr->dev,
+				"Failed to synchronize CPU cache state\n");
+			return -ENXIO;
+		}
+	}
+
+	cpu_cache_invalidate_memregion(IORES_DESC_CXL);
+	return 0;
+}
+
 static int cxl_region_decode_reset(struct cxl_region *cxlr, int count)
 {
 	struct cxl_region_params *p = &cxlr->params;
-	int i;
+	int i, rc = 0;
+
+	/*
+	 * Before region teardown attempt to flush, and if the flush
+	 * fails cancel the region teardown for data consistency
+	 * concerns
+	 */
+	rc = cxl_region_invalidate_memregion(cxlr);
+	if (rc)
+		return rc;
 
 	for (i = count - 1; i >= 0; i--) {
 		struct cxl_endpoint_decoder *cxled = p->targets[i];
@@ -136,7 +164,6 @@ static int cxl_region_decode_reset(struct cxl_region *cxlr, int count)
 		struct cxl_port *iter = cxled_to_port(cxled);
 		struct cxl_dev_state *cxlds = cxlmd->cxlds;
 		struct cxl_ep *ep;
-		int rc = 0;
 
 		if (cxlds->rcd)
 			goto endpoint_reset;
@@ -256,6 +283,14 @@ static ssize_t commit_store(struct device *dev, struct device_attribute *attr,
 		goto out;
 	}
 
+	/*
+	 * Invalidate caches before region setup to drop any speculative
+	 * consumption of this address space
+	 */
+	rc = cxl_region_invalidate_memregion(cxlr);
+	if (rc)
+		return rc;
+
 	if (commit)
 		rc = cxl_region_decode_commit(cxlr);
 	else {
@@ -1686,7 +1721,6 @@ static int cxl_region_attach(struct cxl_region *cxlr,
 		if (rc)
 			goto err_decrement;
 		p->state = CXL_CONFIG_ACTIVE;
-		set_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags);
 	}
 
 	cxled->cxld.interleave_ways = p->interleave_ways;
@@ -2815,30 +2849,6 @@ int cxl_add_to_region(struct cxl_port *root, struct cxl_endpoint_decoder *cxled)
 }
 EXPORT_SYMBOL_NS_GPL(cxl_add_to_region, CXL);
 
-static int cxl_region_invalidate_memregion(struct cxl_region *cxlr)
-{
-	if (!test_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags))
-		return 0;
-
-	if (!cpu_cache_has_invalidate_memregion()) {
-		if (IS_ENABLED(CONFIG_CXL_REGION_INVALIDATION_TEST)) {
-			dev_warn_once(
-				&cxlr->dev,
-				"Bypassing cpu_cache_invalidate_memregion() for testing!\n");
-			clear_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags);
-			return 0;
-		} else {
-			dev_err(&cxlr->dev,
-				"Failed to synchronize CPU cache state\n");
-			return -ENXIO;
-		}
-	}
-
-	cpu_cache_invalidate_memregion(IORES_DESC_CXL);
-	clear_bit(CXL_REGION_F_INCOHERENT, &cxlr->flags);
-	return 0;
-}
-
 static int is_system_ram(struct resource *res, void *arg)
 {
 	struct cxl_region *cxlr = arg;
@@ -2866,8 +2876,6 @@ static int cxl_region_probe(struct device *dev)
 		goto out;
 	}
 
-	rc = cxl_region_invalidate_memregion(cxlr);
-
 	/*
 	 * From this point on any path that changes the region's state away from
 	 * CXL_CONFIG_COMMIT is also responsible for releasing the driver.
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index f0c428cb9a71..187abd8ba673 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -462,18 +462,12 @@ struct cxl_region_params {
 	int nr_targets;
 };
 
-/*
- * Flag whether this region needs to have its HPA span synchronized with
- * CPU cache state at region activation time.
- */
-#define CXL_REGION_F_INCOHERENT 0
-
 /*
  * Indicate whether this region has been assembled by autodetection or
  * userspace assembly. Prevent endpoint decoders outside of automatic
  * detection from being added to the region.
  */
-#define CXL_REGION_F_AUTO 1
+#define CXL_REGION_F_AUTO 0
 
 /**
  * struct cxl_region - CXL region

From patchwork Sat Jun 17 01:24:34 2023
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 13283393
Subject: [PATCH 2/3] cxl/region: Flag partially torn down regions as unusable
From: Dan Williams
To: linux-cxl@vger.kernel.org
Cc: Jonathan Cameron
Date: Fri, 16 Jun 2023 18:24:34 -0700
Message-ID: <168696507423.3590522.16254212607926684429.stgit@dwillia2-xfh.jf.intel.com>
In-Reply-To: <168696506332.3590522.12981963617215460385.stgit@dwillia2-xfh.jf.intel.com>
References: <168696506332.3590522.12981963617215460385.stgit@dwillia2-xfh.jf.intel.com>
User-Agent: StGit/0.18-3-g996c
List-ID: X-Mailing-List: linux-cxl@vger.kernel.org

cxl_region_decode_reset() walks all the decoders associated with a given
region and disables them. Due to decoder ordering rules it is possible
that a switch in the topology notices that a given decoder cannot be
shut down before another region with a higher HPA is shut down first.
That can leave the region in a partially committed state.
Capture that state in a new CXL_REGION_F_NEEDS_RESET flag and require
that a successful cxl_region_decode_reset() attempt completes before
cxl_region_probe() accepts the region. This is a corollary of the bug
that Jonathan identified in "CXL/region : commit reset of out of order
region appears to succeed." [1].

Cc: Jonathan Cameron
Link: http://lore.kernel.org/r/20230316171441.0000205b@Huawei.com [1]
Fixes: 176baefb2eb5 ("cxl/hdm: Commit decoder state to hardware")
Signed-off-by: Dan Williams
Reviewed-by: Dave Jiang
---
 drivers/cxl/core/region.c | 12 ++++++++++++
 drivers/cxl/cxl.h         |  8 ++++++++
 2 files changed, 20 insertions(+)

diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index d48c5916f2e9..31f498f0fb3a 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -182,14 +182,19 @@ static int cxl_region_decode_reset(struct cxl_region *cxlr, int count)
 			rc = cxld->reset(cxld);
 			if (rc)
 				return rc;
+			set_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
 		}
 
endpoint_reset:
 		rc = cxled->cxld.reset(&cxled->cxld);
 		if (rc)
 			return rc;
+		set_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
 	}
 
+	/* all decoders associated with this region have been torn down */
+	clear_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags);
+
 	return 0;
 }
 
@@ -2876,6 +2881,13 @@ static int cxl_region_probe(struct device *dev)
 		goto out;
 	}
 
+	if (test_bit(CXL_REGION_F_NEEDS_RESET, &cxlr->flags)) {
+		dev_err(&cxlr->dev,
+			"failed to activate, re-commit region and retry\n");
+		rc = -ENXIO;
+		goto out;
+	}
+
 	/*
 	 * From this point on any path that changes the region's state away from
 	 * CXL_CONFIG_COMMIT is also responsible for releasing the driver.
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 187abd8ba673..cd7bf6697a51 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -469,6 +469,14 @@ struct cxl_region_params {
  */
 #define CXL_REGION_F_AUTO 0
 
+/*
+ * Require that a committed region successfully complete a teardown once
+ * any of its associated decoders have been torn down. This maintains
+ * the commit state for the region since there are committed decoders,
+ * but blocks cxl_region_probe().
+ */
+#define CXL_REGION_F_NEEDS_RESET 1
+
 /**
  * struct cxl_region - CXL region
  * @dev: This region's device

From patchwork Sat Jun 17 01:24:39 2023
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 13283392
Subject: [PATCH 3/3] cxl/region: Fix state transitions after reset failure
From: Dan Williams
To: linux-cxl@vger.kernel.org
Cc: Jonathan Cameron
Date: Fri, 16 Jun 2023 18:24:39 -0700
Message-ID: <168696507968.3590522.14484000711718573626.stgit@dwillia2-xfh.jf.intel.com>
In-Reply-To: <168696506332.3590522.12981963617215460385.stgit@dwillia2-xfh.jf.intel.com>
References: <168696506332.3590522.12981963617215460385.stgit@dwillia2-xfh.jf.intel.com>
User-Agent: StGit/0.18-3-g996c
List-ID: X-Mailing-List: linux-cxl@vger.kernel.org

Jonathan reports that failed attempts to reset a region (tear down its
HDM decoder configuration) mistakenly advance the state of the region to
"not committed". Revert to the previous state of the region on reset
failure so that the reset can be re-attempted.
Reported-by: Jonathan Cameron
Closes: http://lore.kernel.org/r/20230316171441.0000205b@Huawei.com
Fixes: 176baefb2eb5 ("cxl/hdm: Commit decoder state to hardware")
Signed-off-by: Dan Williams
Reviewed-by: Dave Jiang
Reviewed-by: Jonathan Cameron
---
 drivers/cxl/core/region.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 31f498f0fb3a..38db377e13f1 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -296,9 +296,11 @@ static ssize_t commit_store(struct device *dev, struct device_attribute *attr,
 	if (rc)
 		return rc;
 
-	if (commit)
+	if (commit) {
 		rc = cxl_region_decode_commit(cxlr);
-	else {
+		if (rc == 0)
+			p->state = CXL_CONFIG_COMMIT;
+	} else {
 		p->state = CXL_CONFIG_RESET_PENDING;
 		up_write(&cxl_region_rwsem);
 		device_release_driver(&cxlr->dev);
@@ -308,18 +310,20 @@ static ssize_t commit_store(struct device *dev, struct device_attribute *attr,
 		 * The lock was dropped, so need to revalidate that the reset is
 		 * still pending.
 		 */
-		if (p->state == CXL_CONFIG_RESET_PENDING)
+		if (p->state == CXL_CONFIG_RESET_PENDING) {
 			rc = cxl_region_decode_reset(cxlr, p->interleave_ways);
+			/*
+			 * Revert to committed since there may still be active
+			 * decoders associated with this region, or move forward
+			 * to active to mark the reset successful
+			 */
+			if (rc)
+				p->state = CXL_CONFIG_COMMIT;
+			else
+				p->state = CXL_CONFIG_ACTIVE;
+		}
 	}
 
-	if (rc)
-		goto out;
-
-	if (commit)
-		p->state = CXL_CONFIG_COMMIT;
-	else if (p->state == CXL_CONFIG_RESET_PENDING)
-		p->state = CXL_CONFIG_ACTIVE;
-
 out:
 	up_write(&cxl_region_rwsem);