From patchwork Thu Jul 16 12:38:07 2020
X-Patchwork-Submitter: Oscar Salvador
X-Patchwork-Id: 11667337
From: Oscar Salvador <osalvador@suse.de>
To: akpm@linux-foundation.org
Cc: mhocko@suse.com, linux-mm@kvack.org, mike.kravetz@oracle.com,
 david@redhat.com, aneesh.kumar@linux.vnet.ibm.com, naoya.horiguchi@nec.com,
 linux-kernel@vger.kernel.org, Oscar Salvador, Oscar Salvador
Subject: [PATCH v4 13/15] mm,hwpoison: Refactor soft_offline_huge_page and __soft_offline_page
Date: Thu, 16 Jul 2020 14:38:07 +0200
Message-Id: <20200716123810.25292-14-osalvador@suse.de>
X-Mailer: git-send-email 2.13.7
In-Reply-To: <20200716123810.25292-1-osalvador@suse.de>
References: <20200716123810.25292-1-osalvador@suse.de>
Merging soft_offline_huge_page and __soft_offline_page lets us get rid of
quite a bit of duplicated code, and makes the code much easier to follow.
Now, __soft_offline_page handles both normal and hugetlb pages.

Note that the put_page() block is moved to the beginning of
page_handle_poison(), together with drain_all_pages(), in order to make
sure that the target page is freed and sent back to the free list, so that
take_page_off_buddy() works properly.

Signed-off-by: Oscar Salvador
Signed-off-by: Naoya Horiguchi
---
 mm/memory-failure.c | 141 ++++++++++++++++----------------------------
 1 file changed, 52 insertions(+), 89 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index c0ebab4eed4c..c6c83337708a 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1723,62 +1723,50 @@ static int get_any_page(struct page *page, unsigned long pfn)
 	return ret;
 }
 
-static int soft_offline_huge_page(struct page *page)
+static bool isolate_page(struct page *page, struct list_head *pagelist)
 {
-	int ret = -EBUSY;
-	unsigned long pfn = page_to_pfn(page);
-	struct page *hpage = compound_head(page);
-	LIST_HEAD(pagelist);
+	bool isolated = false;
+	bool lru = PageLRU(page);
 
-	/*
-	 * This double-check of PageHWPoison is to avoid the race with
-	 * memory_failure(). See also comment in __soft_offline_page().
-	 */
-	lock_page(hpage);
-	if (PageHWPoison(hpage)) {
-		unlock_page(hpage);
-		put_page(hpage);
-		pr_info("soft offline: %#lx hugepage already poisoned\n", pfn);
-		return -EBUSY;
+	if (PageHuge(page)) {
+		isolated = isolate_huge_page(page, pagelist);
+	} else {
+		if (lru)
+			isolated = !isolate_lru_page(page);
+		else
+			isolated = !isolate_movable_page(page, ISOLATE_UNEVICTABLE);
+
+		if (isolated)
+			list_add(&page->lru, pagelist);
 	}
-	unlock_page(hpage);
 
-	ret = isolate_huge_page(hpage, &pagelist);
+	if (isolated && lru)
+		inc_node_page_state(page, NR_ISOLATED_ANON +
+				    page_is_file_lru(page));
+
 	/*
-	 * get_any_page() and isolate_huge_page() takes a refcount each,
-	 * so need to drop one here.
+	 * If we succeed to isolate the page, we grabbed another refcount on
+	 * the page, so we can safely drop the one we got from get_any_pages().
+	 * If we failed to isolate the page, it means that we cannot go further
+	 * and we will return an error, so drop the reference we got from
+	 * get_any_pages() as well.
 	 */
-	put_page(hpage);
-	if (!ret) {
-		pr_info("soft offline: %#lx hugepage failed to isolate\n", pfn);
-		return -EBUSY;
-	}
-
-	ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
-				MIGRATE_SYNC, MR_MEMORY_FAILURE);
-	if (ret) {
-		pr_info("soft offline: %#lx: hugepage migration failed %d, type %lx (%pGp)\n",
-			pfn, ret, page->flags, &page->flags);
-		if (!list_empty(&pagelist))
-			putback_movable_pages(&pagelist);
-		if (ret > 0)
-			ret = -EIO;
-	} else {
-		/*
-		 * At this point the page cannot be in-use since we do not
-		 * let the page to go back to hugetlb freelists.
-		 * In that case we just need to dissolve it.
-		 * page_handle_poison will take care of it.
-		 */
-		page_handle_poison(page, true, true, true);
-	}
-	return ret;
+	put_page(page);
+	return isolated;
 }
 
+/*
+ * __soft_offline_page handles hugetlb-pages and non-hugetlb pages.
+ * If the page is a non-dirty unmapped page-cache page, it simply invalidates.
+ */
 static int __soft_offline_page(struct page *page)
 {
-	int ret;
+	int ret = 0;
 	unsigned long pfn = page_to_pfn(page);
+	struct page *hpage = compound_head(page);
+	const char *msg_page[] = {"page", "hugepage"};
+	bool huge = PageHuge(page);
+	LIST_HEAD(pagelist);
 
 	/*
 	 * Check PageHWPoison again inside page lock because PageHWPoison
@@ -1787,88 +1775,63 @@ static int __soft_offline_page(struct page *page)
 	 * so there's no race between soft_offline_page() and memory_failure().
 	 */
 	lock_page(page);
-	wait_on_page_writeback(page);
+	if (!PageHuge(page))
+		wait_on_page_writeback(page);
 	if (PageHWPoison(page)) {
 		unlock_page(page);
 		put_page(page);
 		pr_info("soft offline: %#lx page already poisoned\n", pfn);
 		return -EBUSY;
 	}
-	/*
-	 * Try to invalidate first. This should work for
-	 * non dirty unmapped page cache pages.
-	 */
-	ret = invalidate_inode_page(page);
+
+	if (!PageHuge(page))
+		/*
+		 * Try to invalidate first. This should work for
+		 * non dirty unmapped page cache pages.
		 */
+		ret = invalidate_inode_page(page);
 	unlock_page(page);
+
 	/*
 	 * RED-PEN would be better to keep it isolated here, but we
 	 * would need to fix isolation locking first.
 	 */
 	if (ret == 1) {
 		pr_info("soft_offline: %#lx: invalidated\n", pfn);
-		page_handle_poison(page, true, true, false);
+		page_handle_poison(page, false, true, false);
 		return 0;
 	}
 
-	/*
-	 * Simple invalidation didn't work.
-	 * Try to migrate to a new page instead. migrate.c
-	 * handles a large number of cases for us.
-	 */
-	if (PageLRU(page))
-		ret = isolate_lru_page(page);
-	else
-		ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);
-	/*
-	 * Drop page reference which is came from get_any_page()
-	 * successful isolate_lru_page() already took another one.
-	 */
-	put_page(page);
-	if (!ret) {
-		LIST_HEAD(pagelist);
-		/*
-		 * After isolated lru page, the PageLRU will be cleared,
-		 * so use !__PageMovable instead for LRU page's mapping
-		 * cannot have PAGE_MAPPING_MOVABLE.
-		 */
-		if (!__PageMovable(page))
-			inc_node_page_state(page, NR_ISOLATED_ANON +
-						page_is_file_lru(page));
-		list_add(&page->lru, &pagelist);
-		ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
+	if (isolate_page(hpage, &pagelist)) {
+		ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
 					MIGRATE_SYNC, MR_MEMORY_FAILURE);
 		if (!ret) {
-			page_handle_poison(page, true, true, false);
+			page_handle_poison(page, true, true, huge);
 		} else {
 			if (!list_empty(&pagelist))
 				putback_movable_pages(&pagelist);
-			pr_info("soft offline: %#lx: migration failed %d, type %lx (%pGp)\n",
-				pfn, ret, page->flags, &page->flags);
+			pr_info("soft offline: %#lx: %s migration failed %d, type %lx (%pGp)\n",
+				pfn, msg_page[huge], ret, page->flags, &page->flags);
 			if (ret > 0)
 				ret = -EIO;
 		}
 	} else {
-		pr_info("soft offline: %#lx: isolation failed: %d, page count %d, type %lx (%pGp)\n",
-			pfn, ret, page_count(page), page->flags, &page->flags);
+		pr_info("soft offline: %#lx: %s isolation failed: %d, page count %d, type %lx (%pGp)\n",
+			pfn, msg_page[huge], ret, page_count(page), page->flags, &page->flags);
 	}
 	return ret;
 }
 
 static int soft_offline_in_use_page(struct page *page)
 {
-	int ret;
 	struct page *hpage = compound_head(page);
 
 	if (!PageHuge(page) && PageTransHuge(hpage))
 		if (try_to_split_thp_page(page, "soft offline") < 0)
 			return -EBUSY;
 
-	if (PageHuge(page))
-		ret = soft_offline_huge_page(page);
-	else
-		ret = __soft_offline_page(page);
-	return ret;
+	return __soft_offline_page(page);
 }
 
 static int soft_offline_free_page(struct page *page)
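
A note for readers following the changelog: page_handle_poison() itself is
introduced earlier in this series and is not visible in this diff. The sketch
below only illustrates the ordering the changelog describes, namely that
put_page() and drain_all_pages() run first so the target page actually reaches
the buddy freelists before take_page_off_buddy() is attempted. The helper name
and the simplified signature are made up for illustration; the real function
takes additional flags (as the call sites above show) and also covers the
hugetlb dissolve case.

/*
 * Illustrative sketch only -- not part of this patch. Name and signature
 * are hypothetical; the real page_handle_poison() in mm/memory-failure.c
 * takes extra flags and handles hugetlb pages as well.
 */
static bool handle_poison_sketch(struct page *page)
{
	/*
	 * Drop the reference we hold and flush the per-cpu free lists:
	 * an order-0 page freed by put_page() may sit on a pcp list,
	 * where the buddy allocator (and take_page_off_buddy()) cannot
	 * see it until the list is drained.
	 */
	put_page(page);
	drain_all_pages(page_zone(page));

	/* The page should now be in the buddy freelists; take it off them. */
	if (!take_page_off_buddy(page))
		return false;

	SetPageHWPoison(page);
	num_poisoned_pages_inc();
	return true;
}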