From patchwork Sun Apr 12 19:48:40 2020
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11484695
From: Mike Rapoport <rppt@kernel.org>
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Baoquan He, Brian Cain, Catalin Marinas, "David S. Miller",
 Geert Uytterhoeven, Greentime Hu, Greg Ungerer, Guan Xuetao, Guo Ren,
 Heiko Carstens, Helge Deller, Hoan Tran, "James E.J. Bottomley",
 Jonathan Corbet, Ley Foon Tan, Mark Salter, Matt Turner, Max Filippov,
 Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport, Nick Hu,
 Paul Walmsley, Richard Weinberger, Rich Felker, Russell King,
 Stafford Horne, Thomas Bogendoerfer, Tony Luck, Vineet Gupta,
 x86@kernel.org, Yoshinori Sato, linux-alpha@vger.kernel.org,
 linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-c6x-dev@linux-c6x.org, linux-csky@vger.kernel.org,
 linux-doc@vger.kernel.org, linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org,
 linux-xtensa@linux-xtensa.org, openrisc@lists.librecores.org,
 sparclinux@vger.kernel.org, uclinux-h8-devel@lists.sourceforge.jp,
 Mike Rapoport
Subject: [PATCH 02/21] mm: make early_pfn_to_nid() and related definitions close to each other
Date: Sun, 12 Apr 2020 22:48:40 +0300
Message-Id: <20200412194859.12663-3-rppt@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200412194859.12663-1-rppt@kernel.org>
References: <20200412194859.12663-1-rppt@kernel.org>

From: Mike Rapoport

The early_pfn_to_nid() and its helper __early_pfn_to_nid() are spread
around include/linux/mm.h, include/linux/mmzone.h and mm/page_alloc.c.

Drop the unused stub for __early_pfn_to_nid() and move its actual generic
implementation close to its users.
Signed-off-by: Mike Rapoport
Reviewed-by: Baoquan He
---
 include/linux/mm.h     |  4 ++--
 include/linux/mmzone.h |  9 --------
 mm/page_alloc.c        | 51 +++++++++++++++++++++---------------------
 3 files changed, 27 insertions(+), 37 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5a323422d783..a404026d14d4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2388,9 +2388,9 @@ extern void sparse_memory_present_with_active_regions(int nid);
 
 #if !defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP) && \
 	!defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID)
-static inline int __early_pfn_to_nid(unsigned long pfn,
-					struct mminit_pfnnid_cache *state)
+static inline int early_pfn_to_nid(unsigned long pfn)
 {
+	BUILD_BUG_ON(IS_ENABLED(CONFIG_NUMA));
 	return 0;
 }
 #else
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 1b9de7d220fb..7b5b6eba402f 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1078,15 +1078,6 @@ static inline struct zoneref *first_zones_zonelist(struct zonelist *zonelist,
 #include <asm/sparsemem.h>
 #endif
 
-#if !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID) && \
-	!defined(CONFIG_HAVE_MEMBLOCK_NODE_MAP)
-static inline unsigned long early_pfn_to_nid(unsigned long pfn)
-{
-	BUILD_BUG_ON(IS_ENABLED(CONFIG_NUMA));
-	return 0;
-}
-#endif
-
 #ifdef CONFIG_FLATMEM
 #define pfn_to_nid(pfn)		(0)
 #endif
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0d012eda1694..1ac775bfc9cf 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1504,6 +1504,31 @@ void __free_pages_core(struct page *page, unsigned int order)
 
 static struct mminit_pfnnid_cache early_pfnnid_cache __meminitdata;
 
+#ifndef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
+
+/*
+ * Required by SPARSEMEM. Given a PFN, return what node the PFN is on.
+ */
+int __meminit __early_pfn_to_nid(unsigned long pfn,
+					struct mminit_pfnnid_cache *state)
+{
+	unsigned long start_pfn, end_pfn;
+	int nid;
+
+	if (state->last_start <= pfn && pfn < state->last_end)
+		return state->last_nid;
+
+	nid = memblock_search_pfn_nid(pfn, &start_pfn, &end_pfn);
+	if (nid != NUMA_NO_NODE) {
+		state->last_start = start_pfn;
+		state->last_end = end_pfn;
+		state->last_nid = nid;
+	}
+
+	return nid;
+}
+#endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */
+
 int __meminit early_pfn_to_nid(unsigned long pfn)
 {
 	static DEFINE_SPINLOCK(early_pfn_lock);
@@ -6298,32 +6323,6 @@ void __meminit init_currently_empty_zone(struct zone *zone,
 	zone->initialized = 1;
 }
 
-#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
-#ifndef CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
-
-/*
- * Required by SPARSEMEM. Given a PFN, return what node the PFN is on.
- */
-int __meminit __early_pfn_to_nid(unsigned long pfn,
-					struct mminit_pfnnid_cache *state)
-{
-	unsigned long start_pfn, end_pfn;
-	int nid;
-
-	if (state->last_start <= pfn && pfn < state->last_end)
-		return state->last_nid;
-
-	nid = memblock_search_pfn_nid(pfn, &start_pfn, &end_pfn);
-	if (nid != NUMA_NO_NODE) {
-		state->last_start = start_pfn;
-		state->last_end = end_pfn;
-		state->last_nid = nid;
-	}
-
-	return nid;
-}
-#endif /* CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID */
-
 /**
  * free_bootmem_with_active_regions - Call memblock_free_early_nid for each active range
  * @nid: The node to free memory on. If MAX_NUMNODES, all nodes are freed.
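
For readers skimming the diff: the hunks above keep the generic __early_pfn_to_nid()
and its only caller, early_pfn_to_nid(), next to each other in mm/page_alloc.c. The
standalone userspace C sketch below only illustrates the last-range caching scheme
that __early_pfn_to_nid() implements; the pfnnid_cache struct and the node_ranges[]
table are hypothetical stand-ins for mminit_pfnnid_cache and the memblock region
search, not kernel APIs.

/*
 * Illustrative sketch of the PFN -> node lookup cache used by
 * __early_pfn_to_nid() in the patch above. Everything here is a made-up
 * userspace model; only the caching logic mirrors the kernel code.
 */
#include <stdio.h>

#define NUMA_NO_NODE (-1)

struct pfn_range {
	unsigned long start_pfn;
	unsigned long end_pfn;	/* exclusive */
	int nid;
};

/* Hypothetical early memory map: two nodes, one PFN range each. */
static const struct pfn_range node_ranges[] = {
	{ 0x0000, 0x8000, 0 },
	{ 0x8000, 0x10000, 1 },
};

struct pfnnid_cache {
	unsigned long last_start;
	unsigned long last_end;
	int last_nid;
};

static int pfn_to_nid_cached(unsigned long pfn, struct pfnnid_cache *state)
{
	/* Fast path: the previously found range still covers this PFN. */
	if (state->last_start <= pfn && pfn < state->last_end)
		return state->last_nid;

	/* Slow path: linear search, analogous to memblock_search_pfn_nid(). */
	for (size_t i = 0; i < sizeof(node_ranges) / sizeof(node_ranges[0]); i++) {
		if (node_ranges[i].start_pfn <= pfn && pfn < node_ranges[i].end_pfn) {
			state->last_start = node_ranges[i].start_pfn;
			state->last_end = node_ranges[i].end_pfn;
			state->last_nid = node_ranges[i].nid;
			return state->last_nid;
		}
	}
	return NUMA_NO_NODE;
}

int main(void)
{
	struct pfnnid_cache cache = { 0, 0, NUMA_NO_NODE };

	/* Consecutive PFNs in one range hit the cache after the first miss. */
	printf("pfn 0x100  -> node %d\n", pfn_to_nid_cached(0x100, &cache));
	printf("pfn 0x101  -> node %d\n", pfn_to_nid_cached(0x101, &cache));
	printf("pfn 0x9000 -> node %d\n", pfn_to_nid_cached(0x9000, &cache));
	return 0;
}

The cache pays off because early memmap initialization walks PFNs largely in
order, so the common case is a hit on the previously found range and the
range search runs roughly once per memblock region rather than once per PFN.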