From patchwork Wed Dec 11 03:42:28 2024
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13902901
From: Ira Weiny
Date: Tue, 10 Dec 2024 21:42:28 -0600
Subject: [PATCH v8 13/21] cxl/core: Return endpoint decoder information from region search
X-Mailing-List: linux-hardening@vger.kernel.org
Message-Id: <20241210-dcd-type2-upstream-v8-13-812852504400@intel.com>
References: <20241210-dcd-type2-upstream-v8-0-812852504400@intel.com>
In-Reply-To: <20241210-dcd-type2-upstream-v8-0-812852504400@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Jonathan Corbet, Andrew Morton,
 Kees Cook, "Gustavo A. R. Silva"
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma,
 Ira Weiny, linux-cxl@vger.kernel.org, linux-doc@vger.kernel.org,
 nvdimm@lists.linux.dev, linux-kernel@vger.kernel.org,
 linux-hardening@vger.kernel.org, Li Ming
X-Mailer: b4 0.15-dev-2a633

cxl_dpa_to_region() finds the region from a <memdev, DPA> tuple.  The
search involves finding the device endpoint decoder as well.  Dynamic
capacity extent processing uses the endpoint decoder HPA information to
calculate the HPA offset.  In addition, well behaved extents should be
contained within an endpoint decoder.

Return the endpoint decoder found to be used in subsequent DCD code.

Reviewed-by: Jonathan Cameron
Reviewed-by: Fan Ni
Reviewed-by: Dave Jiang
Reviewed-by: Li Ming
Reviewed-by: Alison Schofield
Signed-off-by: Ira Weiny
---
 drivers/cxl/core/core.h   | 6 ++++--
 drivers/cxl/core/mbox.c   | 2 +-
 drivers/cxl/core/memdev.c | 4 ++--
 drivers/cxl/core/region.c | 8 +++++++-
 4 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 03ab7e66102e1e1fa378b9afb1c6b3e8235e8ed4..cada2647966d91bf3997a1c3f1252c100f7d0b30 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -39,7 +39,8 @@ void cxl_decoder_kill_region(struct cxl_endpoint_decoder *cxled);
 int cxl_region_init(void);
 void cxl_region_exit(void);
 int cxl_get_poison_by_endpoint(struct cxl_port *port);
-struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa);
+struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa,
+				     struct cxl_endpoint_decoder **cxled);
 u64 cxl_dpa_to_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
 		   u64 dpa);
 
@@ -50,7 +51,8 @@ static inline u64 cxl_dpa_to_hpa(struct cxl_region *cxlr,
 	return ULLONG_MAX;
 }
 static inline
-struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa)
+struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa,
+				     struct cxl_endpoint_decoder **cxled)
 {
 	return NULL;
 }
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 56c4389e0031e15bc66056b8a73f4159864f6c4e..6305cce453c0e6fdef1a7ddf3444f6794831f9d0 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -916,7 +916,7 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
 
 		guard(rwsem_read)(&cxl_dpa_rwsem);
 		dpa = le64_to_cpu(evt->media_hdr.phys_addr) & CXL_DPA_MASK;
-		cxlr = cxl_dpa_to_region(cxlmd, dpa);
+		cxlr = cxl_dpa_to_region(cxlmd, dpa, NULL);
 		if (cxlr)
 			hpa = cxl_dpa_to_hpa(cxlr, cxlmd, dpa);
 
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index 56cdf09d3affb81969755769a8803f6bded7a4ce..e0dbf2a8398adb47e7b9c4261b77fa77dcde7463 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -313,7 +313,7 @@ int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa)
 	if (rc)
 		goto out;
 
-	cxlr = cxl_dpa_to_region(cxlmd, dpa);
+	cxlr = cxl_dpa_to_region(cxlmd, dpa, NULL);
 	if (cxlr)
 		dev_warn_once(cxl_mbox->host,
 			      "poison inject dpa:%#llx region: %s\n", dpa,
@@ -377,7 +377,7 @@ int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa)
 	if (rc)
 		goto out;
 
-	cxlr = cxl_dpa_to_region(cxlmd, dpa);
+	cxlr = cxl_dpa_to_region(cxlmd, dpa, NULL);
 	if (cxlr)
 		dev_warn_once(cxl_mbox->host,
 			      "poison clear dpa:%#llx region: %s\n", dpa,
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index a393c46871235e33b3f077951f191178be48f449..5154d00d2ee2026041d93bb4b20c9e0bb97f6449 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -2828,6 +2828,7 @@ int cxl_get_poison_by_endpoint(struct cxl_port *port)
 struct cxl_dpa_to_region_context {
 	struct cxl_region *cxlr;
 	u64 dpa;
+	struct cxl_endpoint_decoder *cxled;
 };
 
 static int __cxl_dpa_to_region(struct device *dev, void *arg)
@@ -2861,11 +2862,13 @@ static int __cxl_dpa_to_region(struct device *dev, void *arg)
 		dev_name(dev));
 
 	ctx->cxlr = cxlr;
+	ctx->cxled = cxled;
 
 	return 1;
 }
 
-struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa)
+struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa,
+				     struct cxl_endpoint_decoder **cxled)
 {
 	struct cxl_dpa_to_region_context ctx;
 	struct cxl_port *port;
@@ -2877,6 +2880,9 @@ struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa)
 	if (port && is_cxl_endpoint(port) && cxl_num_decoders_committed(port))
 		device_for_each_child(&port->dev, &ctx, __cxl_dpa_to_region);
 
+	if (cxled)
+		*cxled = ctx.cxled;
+
 	return ctx.cxlr;
 }
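
Not part of the patch itself: below is a minimal sketch of how follow-on DCD
extent code might consume the endpoint decoder now returned by
cxl_dpa_to_region(), per the commit message's note about using the decoder's
HPA information to calculate an HPA offset.  The helper name
cxl_extent_to_hpa_offset() is hypothetical, and the cxled->cxld.hpa_range
access assumes the decoder layout in drivers/cxl/cxl.h; only
cxl_dpa_to_region(), cxl_dpa_to_hpa() and cxl_dpa_rwsem come from this patch.

/* Illustrative sketch only -- not part of this series. */
#include <linux/cleanup.h>
#include <linux/rwsem.h>

#include "cxlmem.h"
#include "core.h"

/*
 * Hypothetical helper: translate an extent DPA into an offset within the
 * endpoint decoder's HPA window.
 */
static int cxl_extent_to_hpa_offset(const struct cxl_memdev *cxlmd, u64 dpa,
				    u64 *hpa_offset)
{
	struct cxl_endpoint_decoder *cxled;
	struct cxl_region *cxlr;
	u64 hpa;

	/* Same locking as the cxl_event_trace_record() caller above */
	guard(rwsem_read)(&cxl_dpa_rwsem);

	/* The region search now also reports the backing endpoint decoder */
	cxlr = cxl_dpa_to_region(cxlmd, dpa, &cxled);
	if (!cxlr || !cxled)
		return -ENXIO;

	hpa = cxl_dpa_to_hpa(cxlr, cxlmd, dpa);
	if (hpa == ULLONG_MAX)
		return -ENXIO;

	/* Offset relative to the decoder's HPA range (assumed member layout) */
	*hpa_offset = hpa - cxled->cxld.hpa_range.start;

	return 0;
}

Callers that only need the region, such as the poison inject/clear paths in
memdev.c above, simply pass NULL for the new argument.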