From patchwork Sun Dec 1 01:54:03 2019
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11268335
Date: Sat, 30 Nov 2019 17:54:03 -0800
From: akpm@linux-foundation.org
To: akpm@linux-foundation.org, alexander.h.duyck@linux.intel.com, anshuman.khandual@arm.com,
	cai@lca.pw, dan.j.williams@intel.com, david@redhat.com, kernelfans@gmail.com, linux-mm@kvack.org, mgorman@techsingularity.net, mhocko@suse.com, mm-commits@vger.kernel.org, osalvador@suse.de, pasha.tatashin@soleen.com, pavel.tatashin@microsoft.com, richard.weiyang@gmail.com, rppt@linux.ibm.com, rppt@linux.vnet.ibm.com, torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 079/158] mm/page_alloc.c: don't set pages PageReserved() when offlining
Message-ID: <20191201015403.828G0Qt_w%akpm@linux-foundation.org>

From: David Hildenbrand
Subject: mm/page_alloc.c: don't set pages PageReserved() when offlining

Patch series "mm: Memory offlining + page isolation cleanups", v2.

This patch (of 2):

We call __offline_isolated_pages() from __offline_pages() after all pages
have been isolated and are either free (PageBuddy()) or PageHWPoison().
Nothing can stop us from offlining memory at this point.

In __offline_isolated_pages() we first set all affected memory sections
offline (offline_mem_sections(pfn, end_pfn)) to mark the memmap as
invalid (pfn_to_online_page() will no longer succeed), and then walk over
all pages to pull the free pages from the free lists (to the isolated
free lists, to be precise).

Note that re-onlining a memory block will result in the whole memmap
getting reinitialized, overwriting any old state.  We already poison the
memmap when offlining is complete to find any access to
stale/uninitialized memmaps.

So, setting the pages PageReserved() is not helpful.  The memmap is
marked offline and all pageblocks are isolated.  As soon as the memory is
offline, the memmap is stale either way.

This looks like a leftover from ancient times when we initialized the
memmap when adding memory, not when onlining it (the pages were set
PageReserved so that re-onlining would work as expected).
Link: http://lkml.kernel.org/r/20191021172353.3056-2-david@redhat.com
Signed-off-by: David Hildenbrand
Acked-by: Michal Hocko
Cc: Vlastimil Babka
Cc: Oscar Salvador
Cc: Mel Gorman
Cc: Mike Rapoport
Cc: Dan Williams
Cc: Wei Yang
Cc: Alexander Duyck
Cc: Anshuman Khandual
Cc: Pavel Tatashin
Cc: Mike Rapoport
Cc: Pavel Tatashin
Cc: Pingfan Liu
Cc: Qian Cai
Signed-off-by: Andrew Morton
---

 mm/memory_hotplug.c |    4 +---
 mm/page_alloc.c     |    5 +----
 2 files changed, 2 insertions(+), 7 deletions(-)

--- a/mm/memory_hotplug.c~mm-page_allocc-dont-set-pages-pagereserved-when-offlining
+++ a/mm/memory_hotplug.c
@@ -1384,9 +1384,7 @@ do_migrate_range(unsigned long start_pfn
 	return ret;
 }
 
-/*
- * remove from free_area[] and mark all as Reserved.
- */
+/* Mark all sections offline and remove all free pages from the buddy. */
 static int
 offline_isolated_pages_cb(unsigned long start, unsigned long nr_pages,
 			void *data)
--- a/mm/page_alloc.c~mm-page_allocc-dont-set-pages-pagereserved-when-offlining
+++ a/mm/page_alloc.c
@@ -8560,7 +8560,7 @@ __offline_isolated_pages(unsigned long s
 {
 	struct page *page;
 	struct zone *zone;
-	unsigned int order, i;
+	unsigned int order;
 	unsigned long pfn;
 	unsigned long flags;
 	unsigned long offlined_pages = 0;
@@ -8588,7 +8588,6 @@ __offline_isolated_pages(unsigned long s
 	 */
 	if (unlikely(!PageBuddy(page) && PageHWPoison(page))) {
 		pfn++;
-		SetPageReserved(page);
 		offlined_pages++;
 		continue;
 	}
@@ -8602,8 +8601,6 @@ __offline_isolated_pages(unsigned long s
 			pfn, 1 << order, end_pfn);
 #endif
 	del_page_from_free_area(page, &zone->free_area[order]);
-	for (i = 0; i < (1 << order); i++)
-		SetPageReserved((page+i));
 	pfn += (1 << order);
 }
 spin_unlock_irqrestore(&zone->lock, flags);