From patchwork Fri Aug 16 14:44:24 2024
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 13766423
From: ira.weiny@intel.com
Date: Fri, 16 Aug 2024 09:44:24 -0500
Subject: [PATCH v3 16/25] cxl/mem: Configure dynamic capacity interrupts
X-Mailing-List: nvdimm@lists.linux.dev
Message-Id: <20240816-dcd-type2-upstream-v3-16-7c9b96cba6d7@intel.com>
References: <20240816-dcd-type2-upstream-v3-0-7c9b96cba6d7@intel.com>
In-Reply-To: <20240816-dcd-type2-upstream-v3-0-7c9b96cba6d7@intel.com>
To: Dave Jiang, Fan Ni, Jonathan Cameron, Navneet Singh, Chris Mason,
    Josef Bacik, David Sterba, Petr Mladek, Steven Rostedt,
    Andy Shevchenko, Rasmus Villemoes, Sergey Senozhatsky,
    Jonathan Corbet, Andrew Morton
Cc: Dan Williams, Davidlohr Bueso, Alison Schofield, Vishal Verma,
    Ira Weiny, linux-btrfs@vger.kernel.org, linux-cxl@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    nvdimm@lists.linux.dev

From: Navneet Singh

Dynamic Capacity Devices (DCD) support extent change notifications
through the event log mechanism.  The interrupt mailbox commands were
extended in CXL 3.1 to support these notifications.  Firmware can't
configure DCD events to be FW controlled but can retain control of
memory events.

Configure DCD event log interrupts on devices supporting dynamic
capacity.  Disable DCD if interrupts are not supported.  Care is taken
to preserve the interrupt policy set by the FW if FW first has been
selected by the BIOS.

Signed-off-by: Navneet Singh
Co-developed-by: Ira Weiny
Signed-off-by: Ira Weiny
Reviewed-by: Dave Jiang
Reviewed-by: Jonathan Cameron
Reviewed-by: Li Ming

---
Changes:
[iweiny: update commit message]
[iweiny: rebase to upstream irq code]
[iweiny: disable DCD if irqs not supported]
[Jonathan: formatting fix]
[Fan: add text to debug print]
[djiang: make dcd helpers inline]
---
 drivers/cxl/cxlmem.h |  2 ++
 drivers/cxl/pci.c    | 72 +++++++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 62 insertions(+), 12 deletions(-)

diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index b4eb8164d05d..d41bec5433db 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -225,7 +225,9 @@ struct cxl_event_interrupt_policy {
 	u8 warn_settings;
 	u8 failure_settings;
 	u8 fatal_settings;
+	u8 dcd_settings;
 } __packed;
+#define CXL_EVENT_INT_POLICY_BASE_SIZE 4 /* info, warn, failure, fatal */
 
 /**
  * struct cxl_event_state - Event log driver state
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index 370c74eae323..e5430c4e3a3b 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -669,22 +669,33 @@ static int cxl_event_get_int_policy(struct cxl_memdev_state *mds,
 }
 
 static int cxl_event_config_msgnums(struct cxl_memdev_state *mds,
-				    struct cxl_event_interrupt_policy *policy)
+				    struct cxl_event_interrupt_policy *policy,
+				    bool native_cxl)
 {
+	size_t size_in = CXL_EVENT_INT_POLICY_BASE_SIZE;
 	struct cxl_mbox_cmd mbox_cmd;
 	int rc;
 
-	*policy = (struct cxl_event_interrupt_policy) {
-		.info_settings = CXL_INT_MSI_MSIX,
-		.warn_settings = CXL_INT_MSI_MSIX,
-		.failure_settings = CXL_INT_MSI_MSIX,
-		.fatal_settings = CXL_INT_MSI_MSIX,
-	};
+	/* memory event policy is left if FW has control */
+	if (native_cxl) {
+		*policy = (struct cxl_event_interrupt_policy) {
+			.info_settings = CXL_INT_MSI_MSIX,
+			.warn_settings = CXL_INT_MSI_MSIX,
+			.failure_settings = CXL_INT_MSI_MSIX,
+			.fatal_settings = CXL_INT_MSI_MSIX,
+			.dcd_settings = 0,
+		};
+	}
+
+	if (cxl_dcd_supported(mds)) {
+		policy->dcd_settings = CXL_INT_MSI_MSIX;
+		size_in += sizeof(policy->dcd_settings);
+	}
 
 	mbox_cmd = (struct cxl_mbox_cmd) {
 		.opcode = CXL_MBOX_OP_SET_EVT_INT_POLICY,
 		.payload_in = policy,
-		.size_in = sizeof(*policy),
+		.size_in = size_in,
 	};
 
 	rc = cxl_internal_send_cmd(mds, &mbox_cmd);
@@ -731,6 +742,31 @@ static int cxl_event_irqsetup(struct cxl_memdev_state *mds,
 	return 0;
 }
 
+static int cxl_irqsetup(struct cxl_memdev_state *mds,
+			struct cxl_event_interrupt_policy *policy,
+			bool native_cxl)
+{
+	struct cxl_dev_state *cxlds = &mds->cxlds;
+	int rc;
+
+	if (native_cxl) {
+		rc = cxl_event_irqsetup(mds, policy);
+		if (rc)
+			return rc;
+	}
+
+	if (cxl_dcd_supported(mds)) {
+		rc = cxl_event_req_irq(cxlds, policy->dcd_settings);
+		if (rc) {
+			dev_err(cxlds->dev, "Failed to get interrupt for DCD event log\n");
+			cxl_disable_dcd(mds);
+			return rc;
+		}
+	}
+
+	return 0;
+}
+
 static bool cxl_event_int_is_fw(u8 setting)
 {
 	u8 mode = FIELD_GET(CXLDEV_EVENT_INT_MODE_MASK, setting);
@@ -757,17 +793,25 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
 			    struct cxl_memdev_state *mds, bool irq_avail)
 {
 	struct cxl_event_interrupt_policy policy = { 0 };
+	bool native_cxl = host_bridge->native_cxl_error;
 	int rc;
 
 	/*
 	 * When BIOS maintains CXL error reporting control, it will process
 	 * event records.  Only one agent can do so.
+	 *
+	 * If BIOS has control of events and DCD is not supported skip event
+	 * configuration.
 	 */
-	if (!host_bridge->native_cxl_error)
+	if (!native_cxl && !cxl_dcd_supported(mds))
 		return 0;
 
 	if (!irq_avail) {
 		dev_info(mds->cxlds.dev, "No interrupt support, disable event processing.\n");
+		if (cxl_dcd_supported(mds)) {
+			dev_info(mds->cxlds.dev, "DCD requires interrupts, disable DCD\n");
+			cxl_disable_dcd(mds);
+		}
 		return 0;
 	}
 
@@ -775,10 +819,10 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
 	if (rc)
 		return rc;
 
-	if (!cxl_event_validate_mem_policy(mds, &policy))
+	if (native_cxl && !cxl_event_validate_mem_policy(mds, &policy))
 		return -EBUSY;
 
-	rc = cxl_event_config_msgnums(mds, &policy);
+	rc = cxl_event_config_msgnums(mds, &policy, native_cxl);
 	if (rc)
 		return rc;
 
@@ -786,12 +830,16 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
 	if (rc)
 		return rc;
 
-	rc = cxl_event_irqsetup(mds, &policy);
+	rc = cxl_irqsetup(mds, &policy, native_cxl);
 	if (rc)
 		return rc;
 
 	cxl_mem_get_event_records(mds, CXLDEV_EVENT_STATUS_ALL);
 
+	dev_dbg(mds->cxlds.dev, "Event config : %s DCD %s\n",
+		native_cxl ? "OS" : "BIOS",
+		cxl_dcd_supported(mds) ? "supported" : "not supported");
+
 	return 0;
 }
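
For readers following the payload sizing in cxl_event_config_msgnums() above,
here is a minimal standalone (user-space) sketch of the variable-length Set
Event Interrupt Policy payload this patch builds: the four base settings bytes
are always sent, and the DCD settings byte is appended only when the device
supports dynamic capacity.  Everything below is illustrative only; the struct
mirrors the one in cxlmem.h, while the numeric value chosen for
CXL_INT_MSI_MSIX and the helper policy_payload_size() are assumptions made for
the example, not driver code.

/*
 * Standalone sketch (not kernel code) of the variable payload sizing
 * used for CXL_MBOX_OP_SET_EVT_INT_POLICY in this patch.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative value: MSI/MSI-X interrupt mode in the settings byte */
#define CXL_INT_MSI_MSIX 0x1

struct cxl_event_interrupt_policy {
	uint8_t info_settings;
	uint8_t warn_settings;
	uint8_t failure_settings;
	uint8_t fatal_settings;
	uint8_t dcd_settings;		/* new in CXL 3.1, optional on the wire */
} __attribute__((packed));

#define CXL_EVENT_INT_POLICY_BASE_SIZE 4 /* info, warn, failure, fatal */

/* Hypothetical helper mirroring the size_in computation in the patch */
static size_t policy_payload_size(struct cxl_event_interrupt_policy *p,
				  bool dcd_supported)
{
	size_t size = CXL_EVENT_INT_POLICY_BASE_SIZE;

	if (dcd_supported) {
		p->dcd_settings = CXL_INT_MSI_MSIX;
		size += sizeof(p->dcd_settings);
	}
	return size;
}

int main(void)
{
	struct cxl_event_interrupt_policy policy = {
		.info_settings = CXL_INT_MSI_MSIX,
		.warn_settings = CXL_INT_MSI_MSIX,
		.failure_settings = CXL_INT_MSI_MSIX,
		.fatal_settings = CXL_INT_MSI_MSIX,
	};

	printf("payload size without DCD: %zu\n",
	       policy_payload_size(&policy, false));	/* 4 */
	printf("payload size with DCD:    %zu\n",
	       policy_payload_size(&policy, true));	/* 5 */
	return 0;
}

The point of the base-size constant is that a DCD-unaware device only accepts
the original four-byte payload; the fifth byte is sent only when
cxl_dcd_supported() reports dynamic capacity support, which is why the patch
switches size_in from sizeof(*policy) to a computed value.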