From patchwork Fri Sep 25 19:11:51 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dan Williams
X-Patchwork-Id: 11800641
Subject: [PATCH v5 02/17] device-dax/kmem: introduce dax_kmem_range()
From: Dan Williams
To: akpm@linux-foundation.org
Cc: David Hildenbrand, Vishal Verma, Dave Hansen, Pavel Tatashin,
 Brice Goglin, Dave Jiang, David Hildenbrand, Ira Weiny, Jia He,
 Joao Martins, Jonathan Cameron, linux-mm@kvack.org,
 linux-nvdimm@lists.01.org, linux-kernel@vger.kernel.org
Date: Fri, 25 Sep 2020 12:11:51 -0700
Message-ID: <160106111109.30709.3173462396758431559.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <160106109960.30709.7379926726669669398.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <160106109960.30709.7379926726669669398.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-3-g996c
MIME-Version: 1.0

Towards removing the mode-specific @dax_kmem_res attribute from the
generic 'struct dev_dax', and in preparation for multi-range support,
teach the driver to calculate the hotplug range from the device range.
The hotplug range is simply the device range aligned to the
memory-block size.

Cc: David Hildenbrand
Cc: Vishal Verma
Cc: Dave Hansen
Cc: Pavel Tatashin
Cc: Brice Goglin
Cc: Dave Jiang
Cc: David Hildenbrand
Cc: Ira Weiny
Cc: Jia He
Cc: Joao Martins
Cc: Jonathan Cameron
Signed-off-by: Dan Williams
Reviewed-by: David Hildenbrand
---
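For reference, below is a minimal user-space sketch of the alignment that
dax_kmem_range() performs; the 128MiB memory block size and the device range
used here are assumed example values, not taken from the driver:

/*
 * Hypothetical example: truncate a device range to whole memory blocks.
 * ALIGN/ALIGN_DOWN are re-implemented here only for the sketch.
 */
#include <stdio.h>
#include <stdint.h>

#define ALIGN(x, a)      (((x) + ((a) - 1)) & ~((uint64_t)(a) - 1))
#define ALIGN_DOWN(x, a) ((x) & ~((uint64_t)(a) - 1))

int main(void)
{
	uint64_t block = 128ULL << 20;		/* assumed memory_block_size_bytes() */
	uint64_t dev_start = 0x146000000ULL;	/* hypothetical device range start */
	uint64_t dev_end   = 0x24e7fffffULL;	/* hypothetical inclusive end */

	/* memory-block align the hotplug range, as dax_kmem_range() does */
	uint64_t start = ALIGN(dev_start, block);
	uint64_t end = ALIGN_DOWN(dev_end + 1, block) - 1;

	printf("device  [%#llx-%#llx]\n", (unsigned long long)dev_start,
			(unsigned long long)dev_end);
	printf("hotplug [%#llx-%#llx]\n", (unsigned long long)start,
			(unsigned long long)end);
	return 0;
}

Aligning range.end + 1 down and then subtracting 1 keeps the end inclusive,
matching the inclusive-end convention of 'struct range'.
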
 drivers/dax/kmem.c |   40 +++++++++++++++++-----------------------
 1 file changed, 17 insertions(+), 23 deletions(-)

diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
index 5bb133df147d..b0d6a99cf12d 100644
--- a/drivers/dax/kmem.c
+++ b/drivers/dax/kmem.c
@@ -19,13 +19,20 @@ static const char *kmem_name;
 /* Set if any memory will remain added when the driver will be unloaded. */
 static bool any_hotremove_failed;
 
+static struct range dax_kmem_range(struct dev_dax *dev_dax)
+{
+	struct range range;
+
+	/* memory-block align the hotplug range */
+	range.start = ALIGN(dev_dax->range.start, memory_block_size_bytes());
+	range.end = ALIGN_DOWN(dev_dax->range.end + 1, memory_block_size_bytes()) - 1;
+	return range;
+}
+
 int dev_dax_kmem_probe(struct device *dev)
 {
 	struct dev_dax *dev_dax = to_dev_dax(dev);
-	struct range *range = &dev_dax->range;
-	resource_size_t kmem_start;
-	resource_size_t kmem_size;
-	resource_size_t kmem_end;
+	struct range range = dax_kmem_range(dev_dax);
 	struct resource *new_res;
 	const char *new_res_name;
 	int numa_node;
@@ -44,25 +51,14 @@ int dev_dax_kmem_probe(struct device *dev)
 		return -EINVAL;
 	}
 
-	/* Hotplug starting at the beginning of the next block: */
-	kmem_start = ALIGN(range->start, memory_block_size_bytes());
-
-	kmem_size = range_len(range);
-	/* Adjust the size down to compensate for moving up kmem_start: */
-	kmem_size -= kmem_start - range->start;
-	/* Align the size down to cover only complete blocks: */
-	kmem_size &= ~(memory_block_size_bytes() - 1);
-	kmem_end = kmem_start + kmem_size;
-
 	new_res_name = kstrdup(dev_name(dev), GFP_KERNEL);
 	if (!new_res_name)
 		return -ENOMEM;
 
 	/* Region is permanently reserved if hotremove fails. */
-	new_res = request_mem_region(kmem_start, kmem_size, new_res_name);
+	new_res = request_mem_region(range.start, range_len(&range), new_res_name);
 	if (!new_res) {
-		dev_warn(dev, "could not reserve region [%pa-%pa]\n",
-				&kmem_start, &kmem_end);
+		dev_warn(dev, "could not reserve region [%#llx-%#llx]\n", range.start, range.end);
 		kfree(new_res_name);
 		return -EBUSY;
 	}
@@ -96,9 +92,8 @@ int dev_dax_kmem_probe(struct device *dev)
 static int dev_dax_kmem_remove(struct device *dev)
 {
 	struct dev_dax *dev_dax = to_dev_dax(dev);
+	struct range range = dax_kmem_range(dev_dax);
 	struct resource *res = dev_dax->dax_kmem_res;
-	resource_size_t kmem_start = res->start;
-	resource_size_t kmem_size = resource_size(res);
 	const char *res_name = res->name;
 	int rc;
 
@@ -108,12 +103,11 @@ static int dev_dax_kmem_remove(struct device *dev)
 	 * there is no way to hotremove this memory until reboot because device
 	 * unbind will succeed even if we return failure.
 	 */
-	rc = remove_memory(dev_dax->target_node, kmem_start, kmem_size);
+	rc = remove_memory(dev_dax->target_node, range.start, range_len(&range));
 	if (rc) {
 		any_hotremove_failed = true;
-		dev_err(dev,
-			"DAX region %pR cannot be hotremoved until the next reboot\n",
-			res);
+		dev_err(dev, "%#llx-%#llx cannot be hotremoved until the next reboot\n",
+				range.start, range.end);
 		return rc;
 	}