From patchwork Fri Apr 26 03:46:23 2024
X-Patchwork-Submitter: Alison Schofield
X-Patchwork-Id: 13644108
From: alison.schofield@intel.com
To: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
    Vishal Verma, Ira Weiny, Dan Williams
Cc: linux-cxl@vger.kernel.org, Steven Rostedt, Jonathan Cameron
Subject: [PATCH v4 1/3] cxl/region: Move cxl_dpa_to_region() work to the region driver
Date: Thu, 25 Apr 2024 20:46:23 -0700
From: Alison Schofield

This helper belongs in the region driver as it is only useful with
CONFIG_CXL_REGION. Add a stub in core.h for when the region driver is
not built.

Signed-off-by: Alison Schofield
Reviewed-by: Jonathan Cameron
Reviewed-by: Ira Weiny
---
 drivers/cxl/core/core.h   |  7 +++++++
 drivers/cxl/core/memdev.c | 44 ---------------------------------------
 drivers/cxl/core/region.c | 44 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 51 insertions(+), 44 deletions(-)

diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index bc5a95665aa0..87008505f8a9 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -27,7 +27,14 @@ void cxl_decoder_kill_region(struct cxl_endpoint_decoder *cxled);
 int cxl_region_init(void);
 void cxl_region_exit(void);
 int cxl_get_poison_by_endpoint(struct cxl_port *port);
+struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa);
+
 #else
+static inline
+struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa)
+{
+	return NULL;
+}
 static inline int cxl_get_poison_by_endpoint(struct cxl_port *port)
 {
 	return 0;
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index d4e259f3a7e9..0277726afd04 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -251,50 +251,6 @@ int cxl_trigger_poison_list(struct cxl_memdev *cxlmd)
 }
 EXPORT_SYMBOL_NS_GPL(cxl_trigger_poison_list, CXL);
 
-struct cxl_dpa_to_region_context {
-	struct cxl_region *cxlr;
-	u64 dpa;
-};
-
-static int __cxl_dpa_to_region(struct device *dev, void *arg)
-{
-	struct cxl_dpa_to_region_context *ctx = arg;
-	struct cxl_endpoint_decoder *cxled;
-	u64 dpa = ctx->dpa;
-
-	if (!is_endpoint_decoder(dev))
-		return 0;
-
-	cxled = to_cxl_endpoint_decoder(dev);
-	if (!cxled->dpa_res || !resource_size(cxled->dpa_res))
-		return 0;
-
-	if (dpa > cxled->dpa_res->end || dpa < cxled->dpa_res->start)
-		return 0;
-
-	dev_dbg(dev, "dpa:0x%llx mapped in region:%s\n", dpa,
-		dev_name(&cxled->cxld.region->dev));
-
-	ctx->cxlr = cxled->cxld.region;
-
-	return 1;
-}
-
-static struct cxl_region *cxl_dpa_to_region(struct cxl_memdev *cxlmd, u64 dpa)
-{
-	struct cxl_dpa_to_region_context ctx;
-	struct cxl_port *port;
-
-	ctx = (struct cxl_dpa_to_region_context) {
-		.dpa = dpa,
-	};
-	port = cxlmd->endpoint;
-	if (port && is_cxl_endpoint(port) && cxl_num_decoders_committed(port))
-		device_for_each_child(&port->dev, &ctx, __cxl_dpa_to_region);
-
-	return ctx.cxlr;
-}
-
 static int cxl_validate_poison_dpa(struct cxl_memdev *cxlmd, u64 dpa)
 {
 	struct cxl_dev_state *cxlds = cxlmd->cxlds;
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 5c186e0a39b9..4b227659e3f8 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -2679,6 +2679,50 @@ int cxl_get_poison_by_endpoint(struct cxl_port *port)
 	return rc;
 }
 
+struct cxl_dpa_to_region_context {
+	struct cxl_region *cxlr;
+	u64 dpa;
+};
+
+static int __cxl_dpa_to_region(struct device *dev, void *arg)
+{
+	struct cxl_dpa_to_region_context *ctx = arg;
+	struct cxl_endpoint_decoder *cxled;
+	u64 dpa = ctx->dpa;
+
+	if (!is_endpoint_decoder(dev))
+		return 0;
+
+	cxled = to_cxl_endpoint_decoder(dev);
+	if (!cxled->dpa_res || !resource_size(cxled->dpa_res))
+		return 0;
+
+	if (dpa > cxled->dpa_res->end || dpa < cxled->dpa_res->start)
+		return 0;
+
+	dev_dbg(dev, "dpa:0x%llx mapped in region:%s\n", dpa,
+		dev_name(&cxled->cxld.region->dev));
+
+	ctx->cxlr = cxled->cxld.region;
+
+	return 1;
+}
+
+struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa)
+{
+	struct cxl_dpa_to_region_context ctx;
+	struct cxl_port *port;
+
+	ctx = (struct cxl_dpa_to_region_context) {
+		.dpa = dpa,
+	};
+	port = cxlmd->endpoint;
+	if (port && is_cxl_endpoint(port) && cxl_num_decoders_committed(port))
+		device_for_each_child(&port->dev, &ctx, __cxl_dpa_to_region);
+
+	return ctx.cxlr;
+}
+
 static struct lock_class_key cxl_pmem_region_key;
 
 static struct cxl_pmem_region *cxl_pmem_region_alloc(struct cxl_region *cxlr)
From patchwork Fri Apr 26 03:46:24 2024
X-Patchwork-Submitter: Alison Schofield
X-Patchwork-Id: 13644109
From: alison.schofield@intel.com
To: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
    Vishal Verma, Ira Weiny, Dan Williams
Cc: linux-cxl@vger.kernel.org, Steven Rostedt, Jonathan Cameron
Subject: [PATCH v4 2/3] cxl/region: Move cxl_trace_hpa() work to the region driver
Date: Thu, 25 Apr 2024 20:46:24 -0700
Message-Id: <9a7b3700f9ab84d7d0ea087b7e212c63850e41f9.1714102202.git.alison.schofield@intel.com>

From: Alison Schofield

This work belongs in the region driver as it is only useful with
CONFIG_CXL_REGION. Add a stub in core.h for when the region driver is
not built.

Reviewed-by: Jonathan Cameron
Reviewed-by: Ira Weiny
Signed-off-by: Alison Schofield
---
 drivers/cxl/core/core.h   |  7 +++
 drivers/cxl/core/region.c | 91 +++++++++++++++++++++++++++++++++++++++
 drivers/cxl/core/trace.c  | 91 ---------------------------------------
 drivers/cxl/core/trace.h  |  2 -
 4 files changed, 98 insertions(+), 93 deletions(-)

diff --git a/drivers/cxl/core/core.h b/drivers/cxl/core/core.h
index 87008505f8a9..625394486459 100644
--- a/drivers/cxl/core/core.h
+++ b/drivers/cxl/core/core.h
@@ -28,8 +28,15 @@ int cxl_region_init(void);
 void cxl_region_exit(void);
 int cxl_get_poison_by_endpoint(struct cxl_port *port);
 struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa);
+u64 cxl_trace_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
+		  u64 dpa);
 
 #else
+static inline u64
+cxl_trace_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd, u64 dpa)
+{
+	return ULLONG_MAX;
+}
 static inline
 struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa)
 {
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index 4b227659e3f8..45eb9c560fd6 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -2723,6 +2723,97 @@ struct cxl_region *cxl_dpa_to_region(const struct cxl_memdev *cxlmd, u64 dpa)
 	return ctx.cxlr;
 }
 
+static bool cxl_is_hpa_in_range(u64 hpa, struct cxl_region *cxlr, int pos)
+{
+	struct cxl_region_params *p = &cxlr->params;
+	int gran = p->interleave_granularity;
+	int ways = p->interleave_ways;
+	u64 offset;
+
+	/* Is the hpa within this region at all */
+	if (hpa < p->res->start || hpa > p->res->end) {
+		dev_dbg(&cxlr->dev,
+			"Addr trans fail: hpa 0x%llx not in region\n", hpa);
+		return false;
+	}
+
+	/* Is the hpa in an expected chunk for its pos(-ition) */
+	offset = hpa - p->res->start;
+	offset = do_div(offset, gran * ways);
+	if ((offset >= pos * gran) && (offset < (pos + 1) * gran))
+		return true;
+
+	dev_dbg(&cxlr->dev,
+		"Addr trans fail: hpa 0x%llx not in expected chunk\n", hpa);
+
+	return false;
+}
+
+static u64 cxl_dpa_to_hpa(u64 dpa, struct cxl_region *cxlr,
+			  struct cxl_endpoint_decoder *cxled)
+{
+	u64 dpa_offset, hpa_offset, bits_upper, mask_upper, hpa;
+	struct cxl_region_params *p = &cxlr->params;
+	int pos = cxled->pos;
+	u16 eig = 0;
+	u8 eiw = 0;
+
+	ways_to_eiw(p->interleave_ways, &eiw);
+	granularity_to_eig(p->interleave_granularity, &eig);
+
+	/*
+	 * The device position in the region interleave set was removed
+	 * from the offset at HPA->DPA translation. To reconstruct the
+	 * HPA, place the 'pos' in the offset.
+	 *
+	 * The placement of 'pos' in the HPA is determined by interleave
+	 * ways and granularity and is defined in the CXL Spec 3.0 Section
+	 * 8.2.4.19.13 Implementation Note: Device Decode Logic
+	 */
+
+	/* Remove the dpa base */
+	dpa_offset = dpa - cxl_dpa_resource_start(cxled);
+
+	mask_upper = GENMASK_ULL(51, eig + 8);
+
+	if (eiw < 8) {
+		hpa_offset = (dpa_offset & mask_upper) << eiw;
+		hpa_offset |= pos << (eig + 8);
+	} else {
+		bits_upper = (dpa_offset & mask_upper) >> (eig + 8);
+		bits_upper = bits_upper * 3;
+		hpa_offset = ((bits_upper << (eiw - 8)) + pos) << (eig + 8);
+	}
+
+	/* The lower bits remain unchanged */
+	hpa_offset |= dpa_offset & GENMASK_ULL(eig + 7, 0);
+
+	/* Apply the hpa_offset to the region base address */
+	hpa = hpa_offset + p->res->start;
+
+	if (!cxl_is_hpa_in_range(hpa, cxlr, cxled->pos))
+		return ULLONG_MAX;
+
+	return hpa;
+}
+
+u64 cxl_trace_hpa(struct cxl_region *cxlr, const struct cxl_memdev *cxlmd,
+		  u64 dpa)
+{
+	struct cxl_region_params *p = &cxlr->params;
+	struct cxl_endpoint_decoder *cxled = NULL;
+
+	for (int i = 0; i < p->nr_targets; i++) {
+		cxled = p->targets[i];
+		if (cxlmd == cxled_to_memdev(cxled))
+			break;
+	}
+	if (!cxled || cxlmd != cxled_to_memdev(cxled))
+		return ULLONG_MAX;
+
+	return cxl_dpa_to_hpa(dpa, cxlr, cxled);
+}
+
 static struct lock_class_key cxl_pmem_region_key;
 
 static struct cxl_pmem_region *cxl_pmem_region_alloc(struct cxl_region *cxlr)
diff --git a/drivers/cxl/core/trace.c b/drivers/cxl/core/trace.c
index d0403dc3c8ab..7f2a9dd0d0e3 100644
--- a/drivers/cxl/core/trace.c
+++ b/drivers/cxl/core/trace.c
@@ -6,94 +6,3 @@
 
 #define CREATE_TRACE_POINTS
 #include "trace.h"
-
-static bool cxl_is_hpa_in_range(u64 hpa, struct cxl_region *cxlr, int pos)
-{
-	struct cxl_region_params *p = &cxlr->params;
-	int gran = p->interleave_granularity;
-	int ways = p->interleave_ways;
-	u64 offset;
-
-	/* Is the hpa within this region at all */
-	if (hpa < p->res->start || hpa > p->res->end) {
-		dev_dbg(&cxlr->dev,
-			"Addr trans fail: hpa 0x%llx not in region\n", hpa);
-		return false;
-	}
-
-	/* Is the hpa in an expected chunk for its pos(-ition) */
-	offset = hpa - p->res->start;
-	offset = do_div(offset, gran * ways);
-	if ((offset >= pos * gran) && (offset < (pos + 1) * gran))
-		return true;
-
-	dev_dbg(&cxlr->dev,
-		"Addr trans fail: hpa 0x%llx not in expected chunk\n", hpa);
-
-	return false;
-}
-
-static u64 cxl_dpa_to_hpa(u64 dpa, struct cxl_region *cxlr,
-			  struct cxl_endpoint_decoder *cxled)
-{
-	u64 dpa_offset, hpa_offset, bits_upper, mask_upper, hpa;
-	struct cxl_region_params *p = &cxlr->params;
-	int pos = cxled->pos;
-	u16 eig = 0;
-	u8 eiw = 0;
-
-	ways_to_eiw(p->interleave_ways, &eiw);
-	granularity_to_eig(p->interleave_granularity, &eig);
-
-	/*
-	 * The device position in the region interleave set was removed
-	 * from the offset at HPA->DPA translation. To reconstruct the
-	 * HPA, place the 'pos' in the offset.
-	 *
-	 * The placement of 'pos' in the HPA is determined by interleave
-	 * ways and granularity and is defined in the CXL Spec 3.0 Section
-	 * 8.2.4.19.13 Implementation Note: Device Decode Logic
-	 */
-
-	/* Remove the dpa base */
-	dpa_offset = dpa - cxl_dpa_resource_start(cxled);
-
-	mask_upper = GENMASK_ULL(51, eig + 8);
-
-	if (eiw < 8) {
-		hpa_offset = (dpa_offset & mask_upper) << eiw;
-		hpa_offset |= pos << (eig + 8);
-	} else {
-		bits_upper = (dpa_offset & mask_upper) >> (eig + 8);
-		bits_upper = bits_upper * 3;
-		hpa_offset = ((bits_upper << (eiw - 8)) + pos) << (eig + 8);
-	}
-
-	/* The lower bits remain unchanged */
-	hpa_offset |= dpa_offset & GENMASK_ULL(eig + 7, 0);
-
-	/* Apply the hpa_offset to the region base address */
-	hpa = hpa_offset + p->res->start;
-
-	if (!cxl_is_hpa_in_range(hpa, cxlr, cxled->pos))
-		return ULLONG_MAX;
-
-	return hpa;
-}
-
-u64 cxl_trace_hpa(struct cxl_region *cxlr, struct cxl_memdev *cxlmd,
-		  u64 dpa)
-{
-	struct cxl_region_params *p = &cxlr->params;
-	struct cxl_endpoint_decoder *cxled = NULL;
-
-	for (int i = 0; i < p->nr_targets; i++) {
-		cxled = p->targets[i];
-		if (cxlmd == cxled_to_memdev(cxled))
-			break;
-	}
-	if (!cxled || cxlmd != cxled_to_memdev(cxled))
-		return ULLONG_MAX;
-
-	return cxl_dpa_to_hpa(dpa, cxlr, cxled);
-}
diff --git a/drivers/cxl/core/trace.h b/drivers/cxl/core/trace.h
index e5f13260fc52..161bdb5734b0 100644
--- a/drivers/cxl/core/trace.h
+++ b/drivers/cxl/core/trace.h
@@ -642,8 +642,6 @@ TRACE_EVENT(cxl_memory_module,
 #define cxl_poison_overflow(flags, time)				\
 	(flags & CXL_POISON_FLAG_OVERFLOW ? le64_to_cpu(time) : 0)
 
-u64 cxl_trace_hpa(struct cxl_region *cxlr, struct cxl_memdev *memdev, u64 dpa);
-
 TRACE_EVENT(cxl_poison,
 
 	TP_PROTO(struct cxl_memdev *cxlmd, struct cxl_region *cxlr,
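The core of the move above is the DPA->HPA reconstruction in
cxl_dpa_to_hpa(). The bit manipulation for the eiw < 8 (power-of-two
interleave ways) case can be exercised on its own; the sketch below
re-derives GENMASK_ULL() locally and uses made-up interleave parameters,
so it illustrates the math rather than reproducing the kernel code.

/*
 * Standalone sketch of the eiw < 8 branch: put the device position
 * 'pos' back into the host offset, keeping the within-granule bits.
 * eig: granularity = 2^(eig + 8) bytes; eiw < 8: ways = 2^eiw.
 */
#include <stdint.h>
#include <stdio.h>

#define GENMASK_ULL(h, l) (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

static uint64_t dpa_to_hpa_offset(uint64_t dpa_offset, int pos,
				  unsigned int eig, unsigned int eiw)
{
	uint64_t mask_upper = GENMASK_ULL(51, eig + 8);
	uint64_t hpa_offset;

	/* shift the device chunk-index bits up to make room for 'pos' */
	hpa_offset = (dpa_offset & mask_upper) << eiw;
	/* place the device position back into the offset */
	hpa_offset |= (uint64_t)pos << (eig + 8);
	/* bits inside one granule are unchanged */
	hpa_offset |= dpa_offset & GENMASK_ULL(eig + 7, 0);

	return hpa_offset;
}

int main(void)
{
	/* 2-way interleave (eiw=1), 256B granularity (eig=0), position 1 */
	uint64_t hpa_offset = dpa_to_hpa_offset(0x100, 1, 0, 1);

	printf("hpa_offset = 0x%llx\n", (unsigned long long)hpa_offset);
	return 0;
}

For a 2-way, 256B-granularity region, device chunk 1 on position 1 lands
in host chunk 3, so the program prints 0x300; adding the region's resource
start, as cxl_dpa_to_hpa() does, would give the final HPA.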
From patchwork Fri Apr 26 03:46:25 2024
X-Patchwork-Submitter: Alison Schofield
X-Patchwork-Id: 13644110
From: alison.schofield@intel.com
To: Davidlohr Bueso, Jonathan Cameron, Dave Jiang, Alison Schofield,
    Vishal Verma, Ira Weiny, Dan Williams
Cc: linux-cxl@vger.kernel.org, Steven Rostedt
Subject: [PATCH v4 3/3] cxl/core: Add region info to cxl_general_media and cxl_dram events
Date: Thu, 25 Apr 2024 20:46:25 -0700

From: Alison Schofield

User space may need to know which region, if any, maps the DPAs
(device physical addresses) reported in a cxl_general_media or
cxl_dram event. Since the mapping can change, the kernel provides
this information at the time the event occurs. This informs user
space that, at the time of the event, this region mapped this DPA
to this HPA.

Add the same region info that is included in the cxl_poison trace
event: the DPA->HPA translation, region name, and region uuid.

The new fields are inserted in the trace event and no existing
fields are modified.

If the DPA is not mapped, user will see: hpa=ULLONG_MAX, region="",
and uuid=0.

This work must be protected by dpa_rwsem & region_rwsem since it is
looking up region mappings.
Signed-off-by: Alison Schofield
Reviewed-by: Dan Williams
---
 drivers/cxl/core/mbox.c   | 36 ++++++++++++++++++++++++++------
 drivers/cxl/core/trace.h  | 44 +++++++++++++++++++++++++++++++--------
 include/linux/cxl-event.h | 10 +++++++++
 3 files changed, 75 insertions(+), 15 deletions(-)

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 9adda4795eb7..df0fc2a4570f 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -842,14 +842,38 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
 			    enum cxl_event_type event_type,
 			    const uuid_t *uuid, union cxl_event *evt)
 {
-	if (event_type == CXL_CPER_EVENT_GEN_MEDIA)
-		trace_cxl_general_media(cxlmd, type, &evt->gen_media);
-	else if (event_type == CXL_CPER_EVENT_DRAM)
-		trace_cxl_dram(cxlmd, type, &evt->dram);
-	else if (event_type == CXL_CPER_EVENT_MEM_MODULE)
+	if (event_type == CXL_CPER_EVENT_MEM_MODULE) {
 		trace_cxl_memory_module(cxlmd, type, &evt->mem_module);
-	else
+		return;
+	}
+	if (event_type == CXL_CPER_EVENT_GENERIC) {
 		trace_cxl_generic_event(cxlmd, type, uuid, &evt->generic);
+		return;
+	}
+
+	if (trace_cxl_general_media_enabled() || trace_cxl_dram_enabled()) {
+		u64 dpa, hpa = ULLONG_MAX;
+		struct cxl_region *cxlr;
+
+		/*
+		 * These trace points are annotated with HPA and region
+		 * translations. Take topology mutation locks and lookup
+		 * { HPA, REGION } from { DPA, MEMDEV } in the event record.
+		 */
+		guard(rwsem_read)(&cxl_region_rwsem);
+		guard(rwsem_read)(&cxl_dpa_rwsem);
+
+		dpa = le64_to_cpu(evt->common.phys_addr) & CXL_DPA_MASK;
+		cxlr = cxl_dpa_to_region(cxlmd, dpa);
+		if (cxlr)
+			hpa = cxl_trace_hpa(cxlr, cxlmd, dpa);
+
+		if (event_type == CXL_CPER_EVENT_GEN_MEDIA)
+			trace_cxl_general_media(cxlmd, type, cxlr, hpa,
+						&evt->gen_media);
+		else if (event_type == CXL_CPER_EVENT_DRAM)
+			trace_cxl_dram(cxlmd, type, cxlr, hpa, &evt->dram);
+	}
 }
 EXPORT_SYMBOL_NS_GPL(cxl_event_trace_record, CXL);
diff --git a/drivers/cxl/core/trace.h b/drivers/cxl/core/trace.h
index 161bdb5734b0..790686a8a443 100644
--- a/drivers/cxl/core/trace.h
+++ b/drivers/cxl/core/trace.h
@@ -316,9 +316,9 @@ TRACE_EVENT(cxl_generic_event,
 TRACE_EVENT(cxl_general_media,
 
 	TP_PROTO(const struct cxl_memdev *cxlmd, enum cxl_event_log_type log,
-		 struct cxl_event_gen_media *rec),
+		 struct cxl_region *cxlr, u64 hpa, struct cxl_event_gen_media *rec),
 
-	TP_ARGS(cxlmd, log, rec),
+	TP_ARGS(cxlmd, log, cxlr, hpa, rec),
 
 	TP_STRUCT__entry(
 		CXL_EVT_TP_entry
@@ -330,10 +330,13 @@ TRACE_EVENT(cxl_general_media,
 		__field(u8, channel)
 		__field(u32, device)
 		__array(u8, comp_id, CXL_EVENT_GEN_MED_COMP_ID_SIZE)
-		__field(u16, validity_flags)
 		/* Following are out of order to pack trace record */
+		__field(u64, hpa)
+		__field_struct(uuid_t, region_uuid)
+		__field(u16, validity_flags)
 		__field(u8, rank)
 		__field(u8, dpa_flags)
+		__string(region_name, cxlr ? dev_name(&cxlr->dev) : "")
 	),
 
 	TP_fast_assign(
@@ -354,18 +357,28 @@ TRACE_EVENT(cxl_general_media,
 		memcpy(__entry->comp_id, &rec->component_id,
 		       CXL_EVENT_GEN_MED_COMP_ID_SIZE);
 		__entry->validity_flags = get_unaligned_le16(&rec->validity_flags);
+		__entry->hpa = hpa;
+		if (cxlr) {
+			__assign_str(region_name, dev_name(&cxlr->dev));
+			uuid_copy(&__entry->region_uuid, &cxlr->params.uuid);
+		} else {
+			__assign_str(region_name, "");
+			uuid_copy(&__entry->region_uuid, &uuid_null);
+		}
 	),
 
 	CXL_EVT_TP_printk("dpa=%llx dpa_flags='%s' " \
 		"descriptor='%s' type='%s' transaction_type='%s' channel=%u rank=%u " \
-		"device=%x comp_id=%s validity_flags='%s'",
+		"device=%x comp_id=%s validity_flags='%s' " \
+		"hpa=%llx region=%s region_uuid=%pUb",
 		__entry->dpa, show_dpa_flags(__entry->dpa_flags),
 		show_event_desc_flags(__entry->descriptor),
 		show_mem_event_type(__entry->type),
 		show_trans_type(__entry->transaction_type),
 		__entry->channel, __entry->rank, __entry->device,
 		__print_hex(__entry->comp_id, CXL_EVENT_GEN_MED_COMP_ID_SIZE),
-		show_valid_flags(__entry->validity_flags)
+		show_valid_flags(__entry->validity_flags),
+		__entry->hpa, __get_str(region_name), &__entry->region_uuid
 	)
 );
 
@@ -400,9 +413,9 @@ TRACE_EVENT(cxl_general_media,
 TRACE_EVENT(cxl_dram,
 
 	TP_PROTO(const struct cxl_memdev *cxlmd, enum cxl_event_log_type log,
-		 struct cxl_event_dram *rec),
+		 struct cxl_region *cxlr, u64 hpa, struct cxl_event_dram *rec),
 
-	TP_ARGS(cxlmd, log, rec),
+	TP_ARGS(cxlmd, log, cxlr, hpa, rec),
 
 	TP_STRUCT__entry(
 		CXL_EVT_TP_entry
@@ -417,10 +430,13 @@ TRACE_EVENT(cxl_dram,
 		__field(u32, nibble_mask)
 		__field(u32, row)
 		__array(u8, cor_mask, CXL_EVENT_DER_CORRECTION_MASK_SIZE)
+		__field(u64, hpa)
+		__field_struct(uuid_t, region_uuid)
 		__field(u8, rank)	/* Out of order to pack trace record */
 		__field(u8, bank_group)	/* Out of order to pack trace record */
 		__field(u8, bank)	/* Out of order to pack trace record */
 		__field(u8, dpa_flags)	/* Out of order to pack trace record */
+		__string(region_name, cxlr ? dev_name(&cxlr->dev) : "")
 	),
 
 	TP_fast_assign(
@@ -444,12 +460,21 @@ TRACE_EVENT(cxl_dram,
 		__entry->column = get_unaligned_le16(rec->column);
 		memcpy(__entry->cor_mask, &rec->correction_mask,
 		       CXL_EVENT_DER_CORRECTION_MASK_SIZE);
+		__entry->hpa = hpa;
+		if (cxlr) {
+			__assign_str(region_name, dev_name(&cxlr->dev));
+			uuid_copy(&__entry->region_uuid, &cxlr->params.uuid);
+		} else {
+			__assign_str(region_name, "");
+			uuid_copy(&__entry->region_uuid, &uuid_null);
+		}
 	),
 
 	CXL_EVT_TP_printk("dpa=%llx dpa_flags='%s' descriptor='%s' type='%s' " \
 		"transaction_type='%s' channel=%u rank=%u nibble_mask=%x " \
 		"bank_group=%u bank=%u row=%u column=%u cor_mask=%s " \
-		"validity_flags='%s'",
+		"validity_flags='%s' " \
+		"hpa=%llx region=%s region_uuid=%pUb",
 		__entry->dpa, show_dpa_flags(__entry->dpa_flags),
 		show_event_desc_flags(__entry->descriptor),
 		show_mem_event_type(__entry->type),
@@ -458,7 +483,8 @@ TRACE_EVENT(cxl_dram,
 		__entry->bank_group, __entry->bank, __entry->row,
 		__entry->column, __print_hex(__entry->cor_mask,
 		CXL_EVENT_DER_CORRECTION_MASK_SIZE),
-		show_dram_valid_flags(__entry->validity_flags)
+		show_dram_valid_flags(__entry->validity_flags),
+		__entry->hpa, __get_str(region_name), &__entry->region_uuid
 	)
 );
 
diff --git a/include/linux/cxl-event.h b/include/linux/cxl-event.h
index 03fa6d50d46f..5342755777cc 100644
--- a/include/linux/cxl-event.h
+++ b/include/linux/cxl-event.h
@@ -91,11 +91,21 @@ struct cxl_event_mem_module {
 	u8 reserved[0x3d];
 } __packed;
 
+/*
+ * General Media or DRAM Event Common Fields
+ * - provides common access to phys_addr
+ */
+struct cxl_event_common {
+	struct cxl_event_record_hdr hdr;
+	__le64 phys_addr;
+} __packed;
+
 union cxl_event {
 	struct cxl_event_generic generic;
 	struct cxl_event_gen_media gen_media;
 	struct cxl_event_dram dram;
 	struct cxl_event_mem_module mem_module;
+	struct cxl_event_common common;
 } __packed;
 
 /*
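The new cxl_event_common member works because the general media and DRAM
records share the same leading header and physical address layout, so one
union member can read phys_addr for either record type before masking off
the low flag bits and doing the region lookup. The sketch below shows that
union-overlay idea in isolation; the struct layouts and mask values are
simplified stand-ins, not the exact kernel definitions.

/*
 * Union overlay: gen_media and dram records share a common prefix,
 * so 'common' reads phys_addr regardless of which record arrived.
 */
#include <stdint.h>
#include <stdio.h>

struct ev_hdr { uint8_t id[16]; };	/* stand-in for cxl_event_record_hdr */

struct ev_gen_media { struct ev_hdr hdr; uint64_t phys_addr; uint8_t rest[0x20]; };
struct ev_dram      { struct ev_hdr hdr; uint64_t phys_addr; uint8_t rest[0x30]; };
struct ev_common    { struct ev_hdr hdr; uint64_t phys_addr; };

union ev {
	struct ev_gen_media gen_media;
	struct ev_dram dram;
	struct ev_common common;	/* common view of the shared prefix */
};

/* low bits of the reported physical address carry flags, not address */
#define DPA_FLAGS_MASK	0x3FULL
#define DPA_MASK	(~DPA_FLAGS_MASK)

int main(void)
{
	union ev evt = { .dram = { .phys_addr = 0x1234001ULL } };

	/* one access path regardless of which record type arrived */
	uint64_t dpa = evt.common.phys_addr & DPA_MASK;

	printf("dpa = 0x%llx\n", (unsigned long long)dpa);
	return 0;
}

Here the flag bit in the low byte is stripped and 0x1234000 is what would
be handed to cxl_dpa_to_region() for the { region, HPA } annotation.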