From patchwork Fri Jun 29 02:29:19 2018
X-Patchwork-Submitter: Jia He
X-Patchwork-Id: 10495525
From: Jia He
To: Russell King, Andrew Morton, Michal Hocko, Catalin Marinas, Mel Gorman, Will Deacon, Mark Rutland, "H. Peter Anvin"
Cc: Pavel Tatashin, Daniel Jordan, AKASHI Takahiro, Gioh Kim, Steven Sistare, Daniel Vacek, Eugeniu Rosca, Vlastimil Babka, linux-kernel@vger.kernel.org, linux-mm@kvack.org, James Morse, Ard Biesheuvel, Steve Capper, Thomas Gleixner, Ingo Molnar, Greg Kroah-Hartman, Kate Stewart, Philippe Ombredanne, Johannes Weiner, Kemi Wang, Petr Tesarik, YASUAKI ISHIMATSU, Andrey Ryabinin, Nikolay Borisov, richard.weiyang@gmail.com, Jia He
Subject: [PATCH v9 2/6] mm: page_alloc: remain memblock_next_valid_pfn() on arm/arm64
Date: Fri, 29 Jun 2018 10:29:19 +0800
Message-Id: <1530239363-2356-3-git-send-email-hejianet@gmail.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1530239363-2356-1-git-send-email-hejianet@gmail.com>
References: <1530239363-2356-1-git-send-email-hejianet@gmail.com>

Commit b92df1de5d28 ("mm: page_alloc: skip over regions of invalid pfns
where possible") optimized the loop in memmap_init_zone(), but it caused
a possible panic, so Daniel Vacek later reverted it.

But as suggested by Daniel Vacek, it is fine to use memblock to skip
gaps and find the next valid pfn with CONFIG_HAVE_ARCH_PFN_VALID.
On arm and arm64, memblock is used by default.
But the generic version of pfn_valid() is based on mem sections, and
memblock_next_valid_pfn() does not always return the next valid pfn: it
can skip further ahead, so some valid frames were skipped as if they
were invalid. That is why the kernel eventually crashed on some !arm
machines.

As verified by Eugeniu Rosca, arm can benefit from commit b92df1de5d28.
So it is better to keep memblock_next_valid_pfn() on arm/arm64 and move
the related code into one file, include/linux/early_pfn.h.

Suggested-by: Daniel Vacek
Signed-off-by: Jia He
---
 arch/arm/mm/init.c        |  1 +
 arch/arm64/mm/init.c      |  1 +
 include/linux/early_pfn.h | 34 ++++++++++++++++++++++++++++++++++
 include/linux/mmzone.h    | 11 +++++++++++
 mm/page_alloc.c           |  5 ++++-
 5 files changed, 51 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/early_pfn.h

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index c186474..aa99f4d 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include <linux/early_pfn.h>
 
 #include
 #include
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 325cfb3..495e299 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -40,6 +40,7 @@
 #include
 #include
 #include
+#include <linux/early_pfn.h>
 
 #include
 #include
diff --git a/include/linux/early_pfn.h b/include/linux/early_pfn.h
new file mode 100644
index 0000000..1b001c7
--- /dev/null
+++ b/include/linux/early_pfn.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018 HXT-semitech Corp. */
+#ifndef __EARLY_PFN_H
+#define __EARLY_PFN_H
+#ifdef CONFIG_HAVE_MEMBLOCK_PFN_VALID
+ulong __init_memblock memblock_next_valid_pfn(ulong pfn)
+{
+	struct memblock_type *type = &memblock.memory;
+	unsigned int right = type->cnt;
+	unsigned int mid, left = 0;
+	phys_addr_t addr = PFN_PHYS(++pfn);
+
+	do {
+		mid = (right + left) / 2;
+
+		if (addr < type->regions[mid].base)
+			right = mid;
+		else if (addr >= (type->regions[mid].base +
+				  type->regions[mid].size))
+			left = mid + 1;
+		else {
+			/* addr is within the region, so pfn is valid */
+			return pfn;
+		}
+	} while (left < right);
+
+	if (right == type->cnt)
+		return -1UL;
+	else
+		return PHYS_PFN(type->regions[right].base);
+}
+EXPORT_SYMBOL(memblock_next_valid_pfn);
+#endif /*CONFIG_HAVE_MEMBLOCK_PFN_VALID*/
+#endif /*__EARLY_PFN_H*/
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 32699b2..57cdc42 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1241,6 +1241,8 @@ static inline int pfn_valid(unsigned long pfn)
 		return 0;
 	return valid_section(__nr_to_section(pfn_to_section_nr(pfn)));
 }
+
+#define next_valid_pfn(pfn)	(pfn + 1)
 #endif
 
 static inline int pfn_present(unsigned long pfn)
@@ -1266,6 +1268,10 @@ static inline int pfn_present(unsigned long pfn)
 #endif
 
 #define early_pfn_valid(pfn)	pfn_valid(pfn)
+#ifdef CONFIG_HAVE_MEMBLOCK_PFN_VALID
+extern ulong memblock_next_valid_pfn(ulong pfn);
+#define next_valid_pfn(pfn)	memblock_next_valid_pfn(pfn)
+#endif
 void sparse_init(void);
 #else
 #define sparse_init()	do {} while (0)
@@ -1287,6 +1293,11 @@ struct mminit_pfnnid_cache {
 #define early_pfn_valid(pfn)	(1)
 #endif
 
+/* fallback to default definitions*/
+#ifndef next_valid_pfn
+#define next_valid_pfn(pfn)	(pfn + 1)
+#endif
+
 void memory_present(int nid, unsigned long start, unsigned long end);
 
 /*
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cd3c7b9..607deff 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5485,8 +5485,11 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		if (context != MEMMAP_EARLY)
 			goto not_early;
 
-		if (!early_pfn_valid(pfn))
+		if (!early_pfn_valid(pfn)) {
+			pfn = next_valid_pfn(pfn) - 1;
 			continue;
+		}
+
 		if (!early_pfn_in_nid(pfn, nid))
 			continue;
 		if (!update_defer_init(pgdat, pfn, end_pfn, &nr_initialised))