From patchwork Mon Jul 9 10:28:26 2018
X-Patchwork-Submitter: Naoya Horiguchi
X-Patchwork-Id: 10514241
From: Naoya Horiguchi
To: Xishi Qiu (裘稀石)
Cc: linux-mm, linux-kernel, 陈义全
Subject: Re: [RFC] a question about reuse hwpoison page in soft_offline_page()
Date: Mon, 9 Jul 2018 10:28:26 +0000
Message-ID: <20180709102825.GA21147@hori1.linux.bs1.fc.nec.co.jp>
In-Reply-To: <518e6b02-47ef-4ba8-ab98-8d807e2de7d5.xishi.qiuxishi@alibaba-inc.com>

On Mon, Jul 09, 2018 at 01:43:35PM +0800, Xishi Qiu (裘稀石) wrote:
> Hi Naoya,
>
> Shall we fix this path too? It also sets hwpoison before
> dissolve_free_huge_page():
>
>   soft_offline_huge_page
>     migrate_pages
>       unmap_and_move_huge_page
>         if (reason == MR_MEMORY_FAILURE && !test_set_page_hwpoison(hpage))
>           dissolve_free_huge_page

Thank you Xishi, I added it to the current (still draft) version below.

I am starting to feel that the current code is broken with respect to
the behavior of PageHWPoison (at least) in soft offline, and this patch
might not cover all of the issues.
My current questions/concerns are:

- does the same issue happen on soft offlining of normal pages?
- does hard offlining of a free (huge) page have a similar issue?

I'll try to clarify these next and will update the patch if necessary.
I'd be happy to get some comments around these.

Thanks,
Naoya Horiguchi
---
From 9ce4df899f4c859001571958be6a281cdaf5a58f Mon Sep 17 00:00:00 2001
From: Naoya Horiguchi
Date: Mon, 9 Jul 2018 13:07:46 +0900
Subject: [PATCH] mm: fix race on soft-offlining free huge pages

There's a race condition between soft offline and hugetlb_fault which
causes unexpected process killing and/or hugetlb allocation failure.

The process killing is caused by the following flow:

  CPU 0               CPU 1               CPU 2

  soft offline
    get_any_page
    // find the hugetlb is free
                      mmap a hugetlb file
                      page fault
                        ...
                          hugetlb_fault
                            hugetlb_no_page
                              alloc_huge_page
                              // succeed
    soft_offline_free_page
    // set hwpoison flag
                                          mmap the hugetlb file
                                          page fault
                                            ...
                                              hugetlb_fault
                                                hugetlb_no_page
                                                  find_lock_page
                                                  return VM_FAULT_HWPOISON
                                            mm_fault_error
                                              do_sigbus
                                              // kill the process

The hugetlb allocation failure comes from the following flow:

  CPU 0                          CPU 1

                                 mmap a hugetlb file
                                 // reserve all free pages but don't fault-in
  soft offline
    get_any_page
    // find the hugetlb is free
    soft_offline_free_page
    // set hwpoison flag
    dissolve_free_huge_page
    // fail because all free hugepages are reserved
                                 page fault
                                   ...
                                     hugetlb_fault
                                       hugetlb_no_page
                                         alloc_huge_page
                                           ...
                                             dequeue_huge_page_node_exact
                                             // ignore hwpoisoned hugepage
                                             // and finally fail due to no-mem

The root cause of this is that the current soft-offline code is written
based on the assumption that the PageHWPoison flag should be set first
to avoid accessing the corrupted data. This makes sense for
memory_failure() or hard offline, but not for soft offline, because
soft offline handles a page whose data is not actually corrupted, so
there is no risk of data loss.
This patch changes the soft offline semantics so that the PageHWPoison
flag is set only after containment of the error page completes
successfully.
Reported-by: Xishi Qiu
Suggested-by: Xishi Qiu
Signed-off-by: Naoya Horiguchi
---
 mm/hugetlb.c        | 11 +++++------
 mm/memory-failure.c | 13 +++++++------
 mm/migrate.c        |  2 --
 3 files changed, 12 insertions(+), 14 deletions(-)

-- 
2.7.4

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d34225c1cb5b..3c9ce4c05f1b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1479,22 +1479,20 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 /*
  * Dissolve a given free hugepage into free buddy pages. This function does
  * nothing for in-use (including surplus) hugepages. Returns -EBUSY if the
- * number of free hugepages would be reduced below the number of reserved
- * hugepages.
+ * dissolution fails because a given page is not a free hugepage, or because
+ * free hugepages are fully reserved.
  */
 int dissolve_free_huge_page(struct page *page)
 {
-	int rc = 0;
+	int rc = -EBUSY;
 
 	spin_lock(&hugetlb_lock);
 	if (PageHuge(page) && !page_count(page)) {
 		struct page *head = compound_head(page);
 		struct hstate *h = page_hstate(head);
 		int nid = page_to_nid(head);
-		if (h->free_huge_pages - h->resv_huge_pages == 0) {
-			rc = -EBUSY;
+		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		}
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
 		 * which makes any subpages rather than the error page reusable.
@@ -1508,6 +1506,7 @@ int dissolve_free_huge_page(struct page *page)
 		h->free_huge_pages_node[nid]--;
 		h->max_huge_pages--;
 		update_and_free_page(h, head);
+		rc = 0;
 	}
 out:
 	spin_unlock(&hugetlb_lock);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 9d142b9b86dc..7a519d947408 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1598,8 +1598,9 @@ static int soft_offline_huge_page(struct page *page, int flags)
 		if (ret > 0)
 			ret = -EIO;
 	} else {
-		if (PageHuge(page))
-			dissolve_free_huge_page(page);
+		ret = dissolve_free_huge_page(page);
+		if (!ret)
+			num_poisoned_pages_inc();
 	}
 	return ret;
 }
@@ -1715,13 +1716,13 @@ static int soft_offline_in_use_page(struct page *page, int flags)
 
 static void soft_offline_free_page(struct page *page)
 {
+	int rc = 0;
 	struct page *head = compound_head(page);
 
-	if (!TestSetPageHWPoison(head)) {
+	if (PageHuge(head))
+		rc = dissolve_free_huge_page(page);
+	if (!rc && !TestSetPageHWPoison(page))
 		num_poisoned_pages_inc();
-		if (PageHuge(head))
-			dissolve_free_huge_page(page);
-	}
 }
 
 /**
diff --git a/mm/migrate.c b/mm/migrate.c
index 198af4289f9b..3ae213b799a1 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1318,8 +1318,6 @@ static int unmap_and_move_huge_page(new_page_t get_new_page,
 out:
 	if (rc != -EAGAIN)
 		putback_active_hugepage(hpage);
-	if (reason == MR_MEMORY_FAILURE && !test_set_page_hwpoison(hpage))
-		num_poisoned_pages_inc();
 
 	/*
 	 * If migration was not successful and there's a freeing callback, use