From patchwork Thu Mar 12 12:44:12 2020
X-Patchwork-Submitter: Baoquan He
X-Patchwork-Id: 11434141
From: Baoquan He
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, mhocko@suse.com,
    david@redhat.com, richard.weiyang@gmail.com, dan.j.williams@intel.com,
    bhe@redhat.com
Subject: [PATCH v4 3/5] mm/sparse.c: only use subsection map in VMEMMAP case
Date: Thu, 12 Mar 2020 20:44:12 +0800
Message-Id: <20200312124414.439-4-bhe@redhat.com>
In-Reply-To: <20200312124414.439-1-bhe@redhat.com>
References: <20200312124414.439-1-bhe@redhat.com>

Currently, to support subsection-aligned memory region adding for pmem,
the subsection map is added to track which subsections are present.
However, config ZONE_DEVICE depends on SPARSEMEM_VMEMMAP, which means the
subsection map only makes sense when SPARSEMEM_VMEMMAP is enabled. For
classic sparse it is meaningless, and worse, it may confuse people reading
code related to classic sparse.

As for classic sparse not supporting subsection hotplug, Dan said it is
more that the effort and maintenance burden outweighs the benefit.
Besides, the current 64-bit arches all enable SPARSEMEM_VMEMMAP_ENABLE by
default.

For these reasons, there is no need to provide the subsection map and the
relevant handling for classic sparse. Let's remove them.

Signed-off-by: Baoquan He
Reviewed-by: David Hildenbrand
---
 include/linux/mmzone.h |  2 ++
 mm/sparse.c            | 25 +++++++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 42b77d3b68e8..f3f264826423 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1143,7 +1143,9 @@ static inline unsigned long section_nr_to_pfn(unsigned long sec)
 #define SUBSECTION_ALIGN_DOWN(pfn)	((pfn) & PAGE_SUBSECTION_MASK)
 
 struct mem_section_usage {
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
 	DECLARE_BITMAP(subsection_map, SUBSECTIONS_PER_SECTION);
+#endif
 	/* See declaration of similar field in struct zone */
 	unsigned long pageblock_flags[0];
 };
diff --git a/mm/sparse.c b/mm/sparse.c
index 0be4d4ed96de..117fe4554c38 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -209,6 +209,7 @@ static inline unsigned long first_present_section_nr(void)
 	return next_present_section_nr(-1);
 }
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
 static void subsection_mask_set(unsigned long *map, unsigned long pfn,
 		unsigned long nr_pages)
 {
@@ -243,6 +244,11 @@ void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
 		nr_pages -= pfns;
 	}
 }
+#else
+void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
+{
+}
+#endif
 
 /* Record a memory area against a node. */
 void __init memory_present(int nid, unsigned long start, unsigned long end)
@@ -726,6 +732,7 @@ static void free_map_bootmem(struct page *memmap)
 }
 #endif /* CONFIG_SPARSEMEM_VMEMMAP */
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
 static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
 {
 	DECLARE_BITMAP(map, SUBSECTIONS_PER_SECTION) = { 0 };
@@ -752,6 +759,17 @@ static bool is_subsection_map_empty(struct mem_section *ms)
 	return bitmap_empty(&ms->usage->subsection_map[0],
 			    SUBSECTIONS_PER_SECTION);
 }
+#else
+static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+	return 0;
+}
+
+static bool is_subsection_map_empty(struct mem_section *ms)
+{
+	return true;
+}
+#endif
 
 static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
 		struct vmem_altmap *altmap)
@@ -807,6 +825,7 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
 		ms->section_mem_map = (unsigned long)NULL;
 }
 
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
 static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
 {
 	struct mem_section *ms = __pfn_to_section(pfn);
@@ -828,6 +847,12 @@ static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
 
 	return rc;
 }
+#else
+static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
+{
+	return 0;
+}
+#endif
 
 static struct page * __meminit section_activate(int nid, unsigned long pfn,
 		unsigned long nr_pages, struct vmem_altmap *altmap)
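
For readers who want to see the stub pattern outside of the kernel tree, below is a
minimal stand-alone C sketch of what the patch does: the real implementation is
compiled only when CONFIG_SPARSEMEM_VMEMMAP is defined, and the !VMEMMAP build gets
no-op stubs, so callers stay free of #ifdefs. This is an illustration only, not part
of the patch; the do_hotplug_remove() caller and the printf output are invented for
the example, and the real kernel functions take a struct mem_section rather than
nothing.

/*
 * Sketch of the #ifdef-with-stubs pattern used by the patch above.
 * Build with or without -DCONFIG_SPARSEMEM_VMEMMAP to see both paths.
 */
#include <stdbool.h>
#include <stdio.h>

#ifdef CONFIG_SPARSEMEM_VMEMMAP
/* "Real" version: stands in for the bitmap-manipulating kernel code. */
static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
{
	printf("clearing subsections for pfn %lu (+%lu pages)\n", pfn, nr_pages);
	return 0;
}

static bool is_subsection_map_empty(void)
{
	return false;	/* pretend some subsections are still present */
}
#else
/* Classic sparse: stubs that preserve section-granularity behaviour. */
static int clear_subsection_map(unsigned long pfn, unsigned long nr_pages)
{
	return 0;
}

static bool is_subsection_map_empty(void)
{
	return true;	/* the whole section is always removed at once */
}
#endif

/* Hypothetical caller, mirroring how section_deactivate() needs no #ifdef. */
static void do_hotplug_remove(unsigned long pfn, unsigned long nr_pages)
{
	if (clear_subsection_map(pfn, nr_pages))
		return;
	if (is_subsection_map_empty())
		printf("section fully removed\n");
}

int main(void)
{
	do_hotplug_remove(0x80000, 512);
	return 0;
}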