From patchwork Wed Jun 24 15:01:34 2020
X-Patchwork-Submitter: Naoya Horiguchi
X-Patchwork-Id: 11623393
From: nao.horiguchi@gmail.com
To: linux-mm@kvack.org
Cc: mhocko@kernel.org, akpm@linux-foundation.org, mike.kravetz@oracle.com,
 osalvador@suse.de, tony.luck@intel.com, david@redhat.com,
 aneesh.kumar@linux.vnet.ibm.com, zeil@yandex-team.ru,
 naoya.horiguchi@nec.com, linux-kernel@vger.kernel.org
Subject: [PATCH v3 12/15] mm,hwpoison: Rework soft offline for in-use pages
Date: Wed, 24 Jun 2020 15:01:34 +0000
Message-Id: <20200624150137.7052-13-nao.horiguchi@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200624150137.7052-1-nao.horiguchi@gmail.com>
References: <20200624150137.7052-1-nao.horiguchi@gmail.com>

From: Oscar Salvador <osalvador@suse.de>

This patch changes the way we set and handle in-use poisoned pages.
Until now, poisoned pages were released to the buddy allocator, trusting
that the checks performed before handing the page out would act as a
safety net and skip that page.

This has proved to be wrong, as there are pfn walkers out there, like
compaction, that only care whether a page is PageBuddy and sits in a
freelist. And even if compaction were the only such user, having
poisoned pages in the buddy allocator is a bad idea: freelists should
only contain pages that are ready and meant to be used as such.

Before explaining the approach taken, let us break down the kinds of
pages we can soft offline:

- Anonymous THP (after the split, they end up being 4K pages)
- Hugetlb
- Order-0 pages (which can be either migrated or invalidated)

* Normal pages (order-0 and anon THP)
  - If they are clean and unmapped page cache pages, we invalidate them
    by means of invalidate_inode_page().
  - If they are mapped/dirty, we do the isolate-and-migrate dance.

Either way, we do not call put_page() directly from those paths.
Instead, we keep the page and send it to page_handle_poison() to perform
the right handling. page_handle_poison() sets the HWPoison flag and does
the last put_page(). This call to put_page() is mainly there to be able
to call __page_cache_release(), since that function is not exported.
Down the chain, we placed a check for HWPoison pages in
free_pages_prepare() that simply refuses to free any poisoned page, so
those pages never end up in a pcplist or freelist. After that, we set
the refcount on the page to 1 and we increment the poisoned-pages
counter; the sketch below shows how these pieces fit together.
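To make the ordering concrete, here is a condensed, illustration-only
sketch of the in-use path. page_handle_poison(), invalidate_inode_page()
and the free_pages_prepare() check are the real pieces touched by this
patch; handle_inuse_sketch() and migrate_one_page() are made-up names
standing in for __soft_offline_page()'s logic:

/*
 * Illustration only, not part of the patch: how the pieces fit
 * together for an in-use order-0 page.
 */
static void handle_inuse_sketch(struct page *page)
{
	/* Cheap path: clean, unmapped page cache page. */
	if (invalidate_inode_page(page)) {
		page_handle_poison(page, true);
		return;
	}
	/* Mapped/dirty: isolate-and-migrate (hypothetical helper). */
	if (!migrate_one_page(page))
		page_handle_poison(page, true);
	/*
	 * page_handle_poison() sets PG_hwpoison and drops the last
	 * reference. free_pages_prepare() sees the flag and refuses
	 * to free the page, so it never reaches a pcplist/freelist;
	 * the refcount is then raised back to 1.
	 */
}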
We could do as we do for free pages:

 1) wait until the page hits the buddy's freelists
 2) take it off
 3) flag it

The problem is that we could race with an allocation, so that by the
time we want to take the page off the buddy, it has already been
allocated and we cannot soft-offline it anymore. This is not fatal, of
course, but it is better to close the race where doing so does not
require a lot of code.

* Hugetlb pages
  - We isolate-and-migrate them.

After the migration has been successful, we call
dissolve_free_huge_page(), and we set HWPoison on the page if we
succeed. Hugetlb gets slightly different handling though: while for
non-hugetlb pages we cared about closing the race with an allocation,
doing so for hugetlb pages requires quite some additional code (we would
need to hook into free_huge_page() and some other places). So I decided
not to make the code overly complicated and to just fail normally if the
page was allocated in the meantime.

Because of the way we now handle in-use pages, we no longer need the
put-as-isolation-migratetype dance that guarded against poisoned pages
ending up in pcplists.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
---
 include/linux/page-flags.h |  5 -----
 mm/memory-failure.c        | 44 +++++++++++++-------------------------
 mm/migrate.c               | 11 +++-------
 mm/page_alloc.c            | 31 +++------------------------
 4 files changed, 21 insertions(+), 70 deletions(-)

diff --git v5.8-rc1-mmots-2020-06-20-21-44/include/linux/page-flags.h v5.8-rc1-mmots-2020-06-20-21-44_patched/include/linux/page-flags.h
index 9fa5d4e2d69a..d1df51ed6eeb 100644
--- v5.8-rc1-mmots-2020-06-20-21-44/include/linux/page-flags.h
+++ v5.8-rc1-mmots-2020-06-20-21-44_patched/include/linux/page-flags.h
@@ -422,14 +422,9 @@ PAGEFLAG_FALSE(Uncached)
 PAGEFLAG(HWPoison, hwpoison, PF_ANY)
 TESTSCFLAG(HWPoison, hwpoison, PF_ANY)
 #define __PG_HWPOISON (1UL << PG_hwpoison)
-extern bool set_hwpoison_free_buddy_page(struct page *page);
 extern bool take_page_off_buddy(struct page *page);
 #else
 PAGEFLAG_FALSE(HWPoison)
-static inline bool set_hwpoison_free_buddy_page(struct page *page)
-{
-	return 0;
-}
 #define __PG_HWPOISON 0
 #endif
diff --git v5.8-rc1-mmots-2020-06-20-21-44/mm/memory-failure.c v5.8-rc1-mmots-2020-06-20-21-44_patched/mm/memory-failure.c
index d79e756a97be..f744eb90c15c 100644
--- v5.8-rc1-mmots-2020-06-20-21-44/mm/memory-failure.c
+++ v5.8-rc1-mmots-2020-06-20-21-44_patched/mm/memory-failure.c
@@ -78,9 +78,12 @@ EXPORT_SYMBOL_GPL(hwpoison_filter_dev_minor);
 EXPORT_SYMBOL_GPL(hwpoison_filter_flags_mask);
 EXPORT_SYMBOL_GPL(hwpoison_filter_flags_value);
 
-static void page_handle_poison(struct page *page)
+static void page_handle_poison(struct page *page, bool release)
 {
+	SetPageHWPoison(page);
+	if (release)
+		put_page(page);
 	page_ref_inc(page);
 	num_poisoned_pages_inc();
 }
@@ -1757,19 +1760,13 @@ static int soft_offline_huge_page(struct page *page)
 		ret = -EIO;
 	} else {
 		/*
-		 * We set PG_hwpoison only when the migration source hugepage
-		 * was successfully dissolved, because otherwise hwpoisoned
-		 * hugepage remains on free hugepage list, then userspace will
-		 * find it as SIGBUS by allocation failure. That's not expected
-		 * in soft-offlining.
+		 * We set PG_hwpoison only when we were able to take the page
+		 * off the buddy.
 		 */
-		ret = dissolve_free_huge_page(page);
-		if (!ret) {
-			if (set_hwpoison_free_buddy_page(page))
-				num_poisoned_pages_inc();
-			else
-				ret = -EBUSY;
-		}
+		if (!dissolve_free_huge_page(page) && take_page_off_buddy(page))
+			page_handle_poison(page, false);
+		else
+			ret = -EBUSY;
 	}
 	return ret;
 }
@@ -1804,10 +1801,8 @@ static int __soft_offline_page(struct page *page)
 	 * would need to fix isolation locking first.
 	 */
 	if (ret == 1) {
-		put_page(page);
 		pr_info("soft_offline: %#lx: invalidated\n", pfn);
-		SetPageHWPoison(page);
-		num_poisoned_pages_inc();
+		page_handle_poison(page, true);
 		return 0;
 	}
@@ -1838,7 +1833,9 @@ static int __soft_offline_page(struct page *page)
 		list_add(&page->lru, &pagelist);
 		ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
 					MIGRATE_SYNC, MR_MEMORY_FAILURE);
-		if (ret) {
+		if (!ret) {
+			page_handle_poison(page, true);
+		} else {
 			if (!list_empty(&pagelist))
 				putback_movable_pages(&pagelist);
@@ -1857,27 +1854,16 @@ static int __soft_offline_page(struct page *page)
 static int soft_offline_in_use_page(struct page *page)
 {
 	int ret;
-	int mt;
 	struct page *hpage = compound_head(page);
 
 	if (!PageHuge(page) && PageTransHuge(hpage))
 		if (try_to_split_thp_page(page, "soft offline") < 0)
 			return -EBUSY;
 
-	/*
-	 * Setting MIGRATE_ISOLATE here ensures that the page will be linked
-	 * to free list immediately (not via pcplist) when released after
-	 * successful page migration. Otherwise we can't guarantee that the
-	 * page is really free after put_page() returns, so
-	 * set_hwpoison_free_buddy_page() highly likely fails.
-	 */
-	mt = get_pageblock_migratetype(page);
-	set_pageblock_migratetype(page, MIGRATE_ISOLATE);
 	if (PageHuge(page))
 		ret = soft_offline_huge_page(page);
 	else
 		ret = __soft_offline_page(page);
-	set_pageblock_migratetype(page, mt);
 	return ret;
 }
@@ -1886,7 +1872,7 @@ static int soft_offline_free_page(struct page *page)
 	int rc = -EBUSY;
 
 	if (!dissolve_free_huge_page(page) && take_page_off_buddy(page)) {
-		page_handle_poison(page);
+		page_handle_poison(page, false);
 		rc = 0;
 	}
diff --git v5.8-rc1-mmots-2020-06-20-21-44/mm/migrate.c v5.8-rc1-mmots-2020-06-20-21-44_patched/mm/migrate.c
index c95912f74fe2..4381f76becee 100644
--- v5.8-rc1-mmots-2020-06-20-21-44/mm/migrate.c
+++ v5.8-rc1-mmots-2020-06-20-21-44_patched/mm/migrate.c
@@ -1255,16 +1255,11 @@ static ICE_noinline int unmap_and_move(new_page_t get_new_page,
 		 */
 		if (newpage && PageTransHuge(newpage))
 			thp_pmd_migration_success(true);
-		put_page(page);
-		if (reason == MR_MEMORY_FAILURE) {
+		if (reason != MR_MEMORY_FAILURE)
 			/*
-			 * Set PG_HWPoison on just freed page
-			 * intentionally. Although it's rather weird,
-			 * it's how HWPoison flag works at the moment.
+			 * We release the page in page_handle_poison.
 			 */
-			if (set_hwpoison_free_buddy_page(page))
-				num_poisoned_pages_inc();
-		}
+			put_page(page);
 	} else {
 		if (rc != -EAGAIN) {
 			if (likely(!__PageMovable(page))) {
diff --git v5.8-rc1-mmots-2020-06-20-21-44/mm/page_alloc.c v5.8-rc1-mmots-2020-06-20-21-44_patched/mm/page_alloc.c
index 3b145bceb477..5fbd28d63d60 100644
--- v5.8-rc1-mmots-2020-06-20-21-44/mm/page_alloc.c
+++ v5.8-rc1-mmots-2020-06-20-21-44_patched/mm/page_alloc.c
@@ -1175,6 +1175,9 @@ static __always_inline bool free_pages_prepare(struct page *page,
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
+	if (unlikely(PageHWPoison(page)) && !order)
+		return false;
+
 	trace_mm_page_free(page, order);
 
 	/*
@@ -8848,32 +8851,4 @@ bool take_page_off_buddy(struct page *page)
 	spin_unlock_irqrestore(&zone->lock, flags);
 	return ret;
 }
-
-/*
- * Set PG_hwpoison flag if a given page is confirmed to be a free page. This
- * test is performed under the zone lock to prevent a race against page
- * allocation.
- */
-bool set_hwpoison_free_buddy_page(struct page *page)
-{
-	struct zone *zone = page_zone(page);
-	unsigned long pfn = page_to_pfn(page);
-	unsigned long flags;
-	unsigned int order;
-	bool hwpoisoned = false;
-
-	spin_lock_irqsave(&zone->lock, flags);
-	for (order = 0; order < MAX_ORDER; order++) {
-		struct page *page_head = page - (pfn & ((1 << order) - 1));
-
-		if (PageBuddy(page_head) && page_order(page_head) >= order) {
-			if (!TestSetPageHWPoison(page))
-				hwpoisoned = true;
-			break;
-		}
-	}
-	spin_unlock_irqrestore(&zone->lock, flags);
-
-	return hwpoisoned;
-}
 #endif
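For anyone who wants to exercise the reworked path, soft offline can be
driven from userspace with madvise(MADV_SOFT_OFFLINE) (needs
CAP_SYS_ADMIN and a kernel built with CONFIG_MEMORY_FAILURE). A minimal,
illustration-only test, not part of this patch:

/* Illustration only: soft offline one anonymous page. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_SOFT_OFFLINE
#define MADV_SOFT_OFFLINE 101	/* from asm-generic/mman-common.h */
#endif

int main(void)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 'x', pagesize);	/* fault the page in */

	/* Kernel migrates the data and poisons the old physical page. */
	if (madvise(p, pagesize, MADV_SOFT_OFFLINE)) {
		perror("madvise(MADV_SOFT_OFFLINE)");
		return 1;
	}

	/* The contents must survive the migration. */
	printf("byte after soft offline: %c\n", p[0]);
	return 0;
}

After a successful run, HardwareCorrupted in /proc/meminfo should grow
by one page, and with this patch the poisoned page must no longer show
up on any pcplist or buddy freelist.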