From patchwork Sun Nov 17 17:44:45 2019
X-Patchwork-Submitter: Dan Williams <dan.j.williams@intel.com>
X-Patchwork-Id: 11248501
Subject: [PATCH v2 02/18] libnvdimm: Move region attribute group definition
From: Dan Williams <dan.j.williams@intel.com>
To: linux-nvdimm@lists.01.org
Cc: Ira Weiny <ira.weiny@intel.com>, Vishal Verma <vishal.l.verma@intel.com>,
 "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>, peterz@infradead.org,
 dave.hansen@linux.intel.com, hch@lst.de, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-acpi@vger.kernel.org
Date: Sun, 17 Nov 2019 09:44:45 -0800
Message-ID: <157401268506.43284.15446878125298907341.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <157401267421.43284.2135775608523385279.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <157401267421.43284.2135775608523385279.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-3-g996c

In preparation for moving region attributes from device attribute groups
to the region device-type, reorder the declaration so that it can be
referenced by the device-type definition without forward declarations.
No functional changes are intended to result from this change.

Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Link: https://lore.kernel.org/r/157309900624.1582359.6929998072035982264.stgit@dwillia2-desk3.amr.corp.intel.com
---
 drivers/nvdimm/region_devs.c |  208 +++++++++++++++++++++---------------------
 1 file changed, 104 insertions(+), 104 deletions(-)

diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
index ef423ba1a711..e89f2eb3678c 100644
--- a/drivers/nvdimm/region_devs.c
+++ b/drivers/nvdimm/region_devs.c
@@ -140,36 +140,6 @@ static void nd_region_release(struct device *dev)
 	kfree(nd_region);
 }
 
-static struct device_type nd_blk_device_type = {
-	.name = "nd_blk",
-	.release = nd_region_release,
-};
-
-static struct device_type nd_pmem_device_type = {
-	.name = "nd_pmem",
-	.release = nd_region_release,
-};
-
-static struct device_type nd_volatile_device_type = {
-	.name = "nd_volatile",
-	.release = nd_region_release,
-};
-
-bool is_nd_pmem(struct device *dev)
-{
-	return dev ? dev->type == &nd_pmem_device_type : false;
-}
-
-bool is_nd_blk(struct device *dev)
-{
-	return dev ? dev->type == &nd_blk_device_type : false;
-}
-
-bool is_nd_volatile(struct device *dev)
-{
-	return dev ? dev->type == &nd_volatile_device_type : false;
-}
-
 struct nd_region *to_nd_region(struct device *dev)
 {
 	struct nd_region *nd_region = container_of(dev, struct nd_region, dev);
@@ -674,80 +644,6 @@ static umode_t region_visible(struct kobject *kobj, struct attribute *a, int n)
 	return 0;
 }
 
-struct attribute_group nd_region_attribute_group = {
-	.attrs = nd_region_attributes,
-	.is_visible = region_visible,
-};
-EXPORT_SYMBOL_GPL(nd_region_attribute_group);
-
-u64 nd_region_interleave_set_cookie(struct nd_region *nd_region,
-		struct nd_namespace_index *nsindex)
-{
-	struct nd_interleave_set *nd_set = nd_region->nd_set;
-
-	if (!nd_set)
-		return 0;
-
-	if (nsindex && __le16_to_cpu(nsindex->major) == 1
-			&& __le16_to_cpu(nsindex->minor) == 1)
-		return nd_set->cookie1;
-	return nd_set->cookie2;
-}
-
-u64 nd_region_interleave_set_altcookie(struct nd_region *nd_region)
-{
-	struct nd_interleave_set *nd_set = nd_region->nd_set;
-
-	if (nd_set)
-		return nd_set->altcookie;
-	return 0;
-}
-
-void nd_mapping_free_labels(struct nd_mapping *nd_mapping)
-{
-	struct nd_label_ent *label_ent, *e;
-
-	lockdep_assert_held(&nd_mapping->lock);
-	list_for_each_entry_safe(label_ent, e, &nd_mapping->labels, list) {
-		list_del(&label_ent->list);
-		kfree(label_ent);
-	}
-}
-
-/*
- * When a namespace is activated create new seeds for the next
- * namespace, or namespace-personality to be configured.
- */
-void nd_region_advance_seeds(struct nd_region *nd_region, struct device *dev)
-{
-	nvdimm_bus_lock(dev);
-	if (nd_region->ns_seed == dev) {
-		nd_region_create_ns_seed(nd_region);
-	} else if (is_nd_btt(dev)) {
-		struct nd_btt *nd_btt = to_nd_btt(dev);
-
-		if (nd_region->btt_seed == dev)
-			nd_region_create_btt_seed(nd_region);
-		if (nd_region->ns_seed == &nd_btt->ndns->dev)
-			nd_region_create_ns_seed(nd_region);
-	} else if (is_nd_pfn(dev)) {
-		struct nd_pfn *nd_pfn = to_nd_pfn(dev);
-
-		if (nd_region->pfn_seed == dev)
-			nd_region_create_pfn_seed(nd_region);
-		if (nd_region->ns_seed == &nd_pfn->ndns->dev)
-			nd_region_create_ns_seed(nd_region);
-	} else if (is_nd_dax(dev)) {
-		struct nd_dax *nd_dax = to_nd_dax(dev);
-
-		if (nd_region->dax_seed == dev)
-			nd_region_create_dax_seed(nd_region);
-		if (nd_region->ns_seed == &nd_dax->nd_pfn.ndns->dev)
-			nd_region_create_ns_seed(nd_region);
-	}
-	nvdimm_bus_unlock(dev);
-}
-
 static ssize_t mappingN(struct device *dev, char *buf, int n)
 {
 	struct nd_region *nd_region = to_nd_region(dev);
@@ -861,6 +757,110 @@ struct attribute_group nd_mapping_attribute_group = {
 };
 EXPORT_SYMBOL_GPL(nd_mapping_attribute_group);
 
+struct attribute_group nd_region_attribute_group = {
+	.attrs = nd_region_attributes,
+	.is_visible = region_visible,
+};
+EXPORT_SYMBOL_GPL(nd_region_attribute_group);
+
+static struct device_type nd_blk_device_type = {
+	.name = "nd_blk",
+	.release = nd_region_release,
+};
+
+static struct device_type nd_pmem_device_type = {
+	.name = "nd_pmem",
+	.release = nd_region_release,
+};
+
+static struct device_type nd_volatile_device_type = {
+	.name = "nd_volatile",
+	.release = nd_region_release,
+};
+
+bool is_nd_pmem(struct device *dev)
+{
+	return dev ? dev->type == &nd_pmem_device_type : false;
+}
+
+bool is_nd_blk(struct device *dev)
+{
+	return dev ? dev->type == &nd_blk_device_type : false;
+}
+
+bool is_nd_volatile(struct device *dev)
+{
+	return dev ? dev->type == &nd_volatile_device_type : false;
+}
+
+u64 nd_region_interleave_set_cookie(struct nd_region *nd_region,
+		struct nd_namespace_index *nsindex)
+{
+	struct nd_interleave_set *nd_set = nd_region->nd_set;
+
+	if (!nd_set)
+		return 0;
+
+	if (nsindex && __le16_to_cpu(nsindex->major) == 1
+			&& __le16_to_cpu(nsindex->minor) == 1)
+		return nd_set->cookie1;
+	return nd_set->cookie2;
+}
+
+u64 nd_region_interleave_set_altcookie(struct nd_region *nd_region)
+{
+	struct nd_interleave_set *nd_set = nd_region->nd_set;
+
+	if (nd_set)
+		return nd_set->altcookie;
+	return 0;
+}
+
+void nd_mapping_free_labels(struct nd_mapping *nd_mapping)
+{
+	struct nd_label_ent *label_ent, *e;
+
+	lockdep_assert_held(&nd_mapping->lock);
+	list_for_each_entry_safe(label_ent, e, &nd_mapping->labels, list) {
+		list_del(&label_ent->list);
+		kfree(label_ent);
+	}
+}
+
+/*
+ * When a namespace is activated create new seeds for the next
+ * namespace, or namespace-personality to be configured.
+ */
+void nd_region_advance_seeds(struct nd_region *nd_region, struct device *dev)
+{
+	nvdimm_bus_lock(dev);
+	if (nd_region->ns_seed == dev) {
+		nd_region_create_ns_seed(nd_region);
+	} else if (is_nd_btt(dev)) {
+		struct nd_btt *nd_btt = to_nd_btt(dev);
+
+		if (nd_region->btt_seed == dev)
+			nd_region_create_btt_seed(nd_region);
+		if (nd_region->ns_seed == &nd_btt->ndns->dev)
+			nd_region_create_ns_seed(nd_region);
+	} else if (is_nd_pfn(dev)) {
+		struct nd_pfn *nd_pfn = to_nd_pfn(dev);
+
+		if (nd_region->pfn_seed == dev)
+			nd_region_create_pfn_seed(nd_region);
+		if (nd_region->ns_seed == &nd_pfn->ndns->dev)
+			nd_region_create_ns_seed(nd_region);
+	} else if (is_nd_dax(dev)) {
+		struct nd_dax *nd_dax = to_nd_dax(dev);
+
+		if (nd_region->dax_seed == dev)
+			nd_region_create_dax_seed(nd_region);
+		if (nd_region->ns_seed == &nd_dax->nd_pfn.ndns->dev)
+			nd_region_create_ns_seed(nd_region);
+	}
+	nvdimm_bus_unlock(dev);
+}
+
 int nd_blk_region_init(struct nd_region *nd_region)
 {
 	struct device *dev = &nd_region->dev;