From patchwork Sun Dec 1 01:54:07 2019
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11268337
Date: Sat, 30 Nov 2019 17:54:07 -0800
From: akpm@linux-foundation.org
To: akpm@linux-foundation.org, alexander.h.duyck@linux.intel.com,
 anshuman.khandual@arm.com, cai@lca.pw, dan.j.williams@intel.com,
 david@redhat.com, kernelfans@gmail.com, linux-mm@kvack.org,
 mgorman@techsingularity.net, mhocko@suse.com, mm-commits@vger.kernel.org,
 osalvador@suse.de, pasha.tatashin@soleen.com, pavel.tatashin@microsoft.com,
 richard.weiyang@gmail.com, rppt@linux.ibm.com, rppt@linux.vnet.ibm.com,
 torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 080/158] mm/page_isolation.c: convert SKIP_HWPOISON to MEMORY_OFFLINE
Message-ID: <20191201015407.BoZ_1bSSb%akpm@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: David Hildenbrand
Subject: mm/page_isolation.c: convert SKIP_HWPOISON to MEMORY_OFFLINE

We have two types of users of page isolation:

1. Memory offlining: offline memory so it can be unplugged; the memory
   won't be touched afterwards.

2. Memory allocation: allocate memory (e.g., alloc_contig_range()) to
   become the owner of the memory and make use of it.

For example, when we want to offline memory, we can ignore (skip over)
PageHWPoison() pages, as the memory won't get used, so the offlining can
be allowed to proceed.  In contrast, we don't want to allow allocating
such memory.

Let's generalize the approach so we can special-case other types of pages
we want to skip over when offlining memory.  While at it, also pass the
same flags to test_pages_isolated().  A minimal sketch of the resulting
caller contract follows below.

Link: http://lkml.kernel.org/r/20191021172353.3056-3-david@redhat.com
Signed-off-by: David Hildenbrand
Suggested-by: Michal Hocko
Acked-by: Michal Hocko
Cc: Oscar Salvador
Cc: Anshuman Khandual
Cc: David Hildenbrand
Cc: Pingfan Liu
Cc: Qian Cai
Cc: Pavel Tatashin
Cc: Dan Williams
Cc: Vlastimil Babka
Cc: Mel Gorman
Cc: Mike Rapoport
Cc: Alexander Duyck
Cc: Mike Rapoport
Cc: Pavel Tatashin
Cc: Wei Yang
Signed-off-by: Andrew Morton
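A minimal, self-contained userspace sketch of the contract described
above (MEMORY_OFFLINE tolerates HWPoison pages, allocation does not).
struct fake_page and page_ok_for_isolation() are stand-ins invented for
illustration only; they are not kernel APIs:

#include <stdbool.h>
#include <stdio.h>

#define MEMORY_OFFLINE  0x1     /* isolate in order to offline the memory */
#define REPORT_FAILURE  0x2     /* report details when isolation fails */

struct fake_page {
        bool hwpoison;          /* stand-in for PageHWPoison() */
};

/*
 * Models the check in has_unmovable_pages(): a HWPoisoned page is only
 * acceptable when the caller is offlining, because offlined memory will
 * never be touched again.
 */
static bool page_ok_for_isolation(const struct fake_page *page, int flags)
{
        if (page->hwpoison)
                return flags & MEMORY_OFFLINE;
        return true;
}

int main(void)
{
        struct fake_page poisoned = { .hwpoison = true };

        /* Offlining (__offline_pages()) combines flags in a bit mask... */
        printf("offline: %d\n",
               page_ok_for_isolation(&poisoned,
                                     MEMORY_OFFLINE | REPORT_FAILURE));
        /* ...while allocation (alloc_contig_range()) passes 0. */
        printf("alloc:   %d\n", page_ok_for_isolation(&poisoned, 0));
        return 0;
}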
---

 include/linux/page-isolation.h |    4 ++--
 mm/memory_hotplug.c            |    8 +++++---
 mm/page_alloc.c                |    4 ++--
 mm/page_isolation.c            |   12 ++++++------
 4 files changed, 15 insertions(+), 13 deletions(-)

--- a/include/linux/page-isolation.h~mm-page_isolationc-convert-skip_hwpoison-to-memory_offline
+++ a/include/linux/page-isolation.h
@@ -30,7 +30,7 @@ static inline bool is_migrate_isolate(in
 }
 #endif
 
-#define SKIP_HWPOISON	0x1
+#define MEMORY_OFFLINE	0x1
 #define REPORT_FAILURE	0x2
 
 bool has_unmovable_pages(struct zone *zone, struct page *page, int count,
@@ -58,7 +58,7 @@ undo_isolate_page_range(unsigned long st
 /*
  * Test all pages in [start_pfn, end_pfn) are isolated or not.
  */
 int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
-			bool skip_hwpoisoned_pages);
+			int isol_flags);
 
 struct page *alloc_migrate_target(struct page *page, unsigned long private);
--- a/mm/memory_hotplug.c~mm-page_isolationc-convert-skip_hwpoison-to-memory_offline
+++ a/mm/memory_hotplug.c
@@ -1187,7 +1187,8 @@ static bool is_pageblock_removable_noloc
 	if (!zone_spans_pfn(zone, pfn))
 		return false;
 
-	return !has_unmovable_pages(zone, page, 0, MIGRATE_MOVABLE, SKIP_HWPOISON);
+	return !has_unmovable_pages(zone, page, 0, MIGRATE_MOVABLE,
+				    MEMORY_OFFLINE);
 }
 
 /* Checks if this range of memory is likely to be hot-removable. */
@@ -1402,7 +1403,8 @@ static int
 check_pages_isolated_cb(unsigned long start_pfn, unsigned long nr_pages,
 			void *data)
 {
-	return test_pages_isolated(start_pfn, start_pfn + nr_pages, true);
+	return test_pages_isolated(start_pfn, start_pfn + nr_pages,
+				   MEMORY_OFFLINE);
 }
 
 static int __init cmdline_parse_movable_node(char *p)
@@ -1513,7 +1515,7 @@ static int __ref __offline_pages(unsigne
 	/* set above range as isolated */
 	ret = start_isolate_page_range(start_pfn, end_pfn,
 				       MIGRATE_MOVABLE,
-				       SKIP_HWPOISON | REPORT_FAILURE);
+				       MEMORY_OFFLINE | REPORT_FAILURE);
 	if (ret < 0) {
 		reason = "failure to isolate range";
 		goto failed_removal;
--- a/mm/page_alloc.c~mm-page_isolationc-convert-skip_hwpoison-to-memory_offline
+++ a/mm/page_alloc.c
@@ -8261,7 +8261,7 @@ bool has_unmovable_pages(struct zone *zo
 		 * The HWPoisoned page may be not in buddy system, and
 		 * page_count() is not 0.
 		 */
-		if ((flags & SKIP_HWPOISON) && PageHWPoison(page))
+		if ((flags & MEMORY_OFFLINE) && PageHWPoison(page))
 			continue;
 
 		if (__PageMovable(page))
@@ -8477,7 +8477,7 @@ int alloc_contig_range(unsigned long sta
 	}
 
 	/* Make sure the range is really isolated. */
-	if (test_pages_isolated(outer_start, end, false)) {
+	if (test_pages_isolated(outer_start, end, 0)) {
 		pr_info_ratelimited("%s: [%lx, %lx) PFNs busy\n",
 				    __func__, outer_start, end);
 		ret = -EBUSY;
--- a/mm/page_isolation.c~mm-page_isolationc-convert-skip_hwpoison-to-memory_offline
+++ a/mm/page_isolation.c
@@ -168,7 +168,8 @@ __first_valid_page(unsigned long pfn, un
  * @migratetype:	Migrate type to set in error recovery.
  * @flags:		The following flags are allowed (they can be combined in
  *			a bit mask)
- *			SKIP_HWPOISON - ignore hwpoison pages
+ *			MEMORY_OFFLINE - isolate to offline (!allocate) memory
+ *					 e.g., skip over PageHWPoison() pages
  *			REPORT_FAILURE - report details about the failure to
  *			isolate the range
  *
@@ -257,7 +258,7 @@ void undo_isolate_page_range(unsigned lo
  */
 static unsigned long
 __test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
-				  bool skip_hwpoisoned_pages)
+				  int flags)
 {
 	struct page *page;
 
@@ -274,7 +275,7 @@ __test_page_isolated_in_pageblock(unsign
 			 * simple way to verify that as VM_BUG_ON(), though.
 			 */
 			pfn += 1 << page_order(page);
-		else if (skip_hwpoisoned_pages && PageHWPoison(page))
+		else if ((flags & MEMORY_OFFLINE) && PageHWPoison(page))
 			/* A HWPoisoned page cannot be also PageBuddy */
 			pfn++;
 		else
@@ -286,7 +287,7 @@ __test_page_isolated_in_pageblock(unsign
 
 /* Caller should ensure that requested range is in a single zone */
 int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
-			bool skip_hwpoisoned_pages)
+			int isol_flags)
 {
 	unsigned long pfn, flags;
 	struct page *page;
@@ -308,8 +309,7 @@ int test_pages_isolated(unsigned long st
 	/* Check all pages are free or marked as ISOLATED */
 	zone = page_zone(page);
 	spin_lock_irqsave(&zone->lock, flags);
-	pfn = __test_page_isolated_in_pageblock(start_pfn, end_pfn,
-						skip_hwpoisoned_pages);
+	pfn = __test_page_isolated_in_pageblock(start_pfn, end_pfn, isol_flags);
 	spin_unlock_irqrestore(&zone->lock, flags);
 
 	trace_test_pages_isolated(start_pfn, end_pfn, pfn);
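For completeness, a similar standalone sketch of the walk performed by
__test_page_isolated_in_pageblock() after the conversion.  enum
fake_state and test_block() are invented for illustration and model only
the HWPoison special case, not the buddy-order handling:

#include <stdio.h>

#define MEMORY_OFFLINE  0x1

enum fake_state { FREE_BUDDY, HWPOISON, BUSY };

/*
 * Walks [pfn, end_pfn) the way __test_page_isolated_in_pageblock() does:
 * free buddy pages always pass, HWPoisoned pages pass only when
 * MEMORY_OFFLINE is set, and anything else stops the walk.  Like the
 * kernel helper, it returns the pfn where the walk stopped, so a return
 * value < end_pfn means the range is not fully isolated.
 */
static unsigned long test_block(const enum fake_state *pages,
                                unsigned long pfn, unsigned long end_pfn,
                                int flags)
{
        while (pfn < end_pfn) {
                if (pages[pfn] == FREE_BUDDY)
                        pfn++;  /* the kernel skips 1 << page_order(page) */
                else if ((flags & MEMORY_OFFLINE) && pages[pfn] == HWPOISON)
                        pfn++;  /* a HWPoisoned page cannot also be PageBuddy */
                else
                        break;
        }
        return pfn;
}

int main(void)
{
        enum fake_state range[4] = { FREE_BUDDY, HWPOISON, FREE_BUDDY, BUSY };

        /* Offlining skips the poisoned page and stops at the busy one (3)... */
        printf("offline: stopped at pfn %lu\n",
               test_block(range, 0, 4, MEMORY_OFFLINE));
        /* ...allocation stops right at the poisoned page (1). */
        printf("alloc:   stopped at pfn %lu\n", test_block(range, 0, 4, 0));
        return 0;
}

Converting the bool parameter to an int flag mask is what makes this
extensible: future "skip over X when offlining" cases can be added as new
bits without changing any function signatures again.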