From patchwork Tue Aug 14 00:30:58 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mike Kravetz <mike.kravetz@oracle.com>
X-Patchwork-Id: 10564969
From: Mike Kravetz <mike.kravetz@oracle.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Shutemov" , =?utf-8?b?SsOp?= =?utf-8?b?csO0bWUgR2xpc3Nl?= , Vlastimil Babka , Naoya Horiguchi , Davidlohr Bueso , Michal Hocko , Andrew Morton , stable@vger.kernel.org, Mike Kravetz Subject: [PATCH v2] mm: migration: fix migration of huge PMD shared pages Date: Mon, 13 Aug 2018 17:30:58 -0700 Message-Id: <20180814003058.19732-1-mike.kravetz@oracle.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <201808131221.zDDttbc8%fengguang.wu@intel.com> References: <201808131221.zDDttbc8%fengguang.wu@intel.com> X-Proofpoint-Virus-Version: vendor=nai engine=5900 definitions=8984 signatures=668707 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 malwarescore=0 phishscore=0 bulkscore=0 spamscore=0 mlxscore=0 mlxlogscore=878 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1807170000 definitions=main-1808140003 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP The page migration code employs try_to_unmap() to try and unmap the source page. This is accomplished by using rmap_walk to find all vmas where the page is mapped. This search stops when page mapcount is zero. For shared PMD huge pages, the page map count is always 1 no matter the number of mappings. Shared mappings are tracked via the reference count of the PMD page. Therefore, try_to_unmap stops prematurely and does not completely unmap all mappings of the source page. This problem can result is data corruption as writes to the original source page can happen after contents of the page are copied to the target page. Hence, data is lost. This problem was originally seen as DB corruption of shared global areas after a huge page was soft offlined due to ECC memory errors. DB developers noticed they could reproduce the issue by (hotplug) offlining memory used to back huge pages. A simple testcase can reproduce the problem by creating a shared PMD mapping (note that this must be at least PUD_SIZE in size and PUD_SIZE aligned (1GB on x86)), and using migrate_pages() to migrate process pages between nodes while continually writing to the huge pages being migrated. To fix, have the try_to_unmap_one routine check for huge PMD sharing by calling huge_pmd_unshare for hugetlbfs huge pages. If it is a shared mapping it will be 'unshared' which removes the page table entry and drops the reference on the PMD page. After this, flush caches and TLB. 
Fixes: 39dde65c9940 ("shared page table for hugetlb page")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
v2: Fixed build issue for !CONFIG_HUGETLB_PAGE and typos in comment

 include/linux/hugetlb.h |  6 ++++++
 mm/rmap.c               | 21 +++++++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 36fa6a2a82e3..7524663028ec 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -170,6 +170,12 @@ static inline unsigned long hugetlb_total_pages(void)
 	return 0;
 }
 
+static inline int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
+				pte_t *ptep)
+{
+	return 0;
+}
+
 #define follow_hugetlb_page(m,v,p,vs,a,b,i,w,n)	({ BUG(); 0; })
 #define follow_huge_addr(mm, addr, write)	ERR_PTR(-EINVAL)
 #define copy_hugetlb_page_range(src, dst, vma)	({ BUG(); 0; })
diff --git a/mm/rmap.c b/mm/rmap.c
index 09a799c9aebd..cf2340adad10 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1409,6 +1409,27 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
 		address = pvmw.address;
 
+		/*
+		 * PMDs for hugetlbfs pages could be shared.  In this case,
+		 * pages with shared PMDs will have a mapcount of 1 no matter
+		 * how many times they are actually mapped.  Map counting for
+		 * PMD sharing is mostly done via the reference count on the
+		 * PMD page itself.  If the page we are trying to unmap is a
+		 * hugetlbfs page, attempt to 'unshare' at the PMD level.
+		 * huge_pmd_unshare clears the PUD and adjusts reference
+		 * counting on the PMD page which effectively unmaps the page.
+		 * Take care of flushing cache and TLB for page in this
+		 * specific mapping here.
+		 */
+		if (PageHuge(page) &&
+		    huge_pmd_unshare(mm, &address, pvmw.pte)) {
+			unsigned long end_add = address + vma_mmu_pagesize(vma);
+
+			flush_cache_range(vma, address, end_add);
+			flush_tlb_range(vma, address, end_add);
+			mmu_notifier_invalidate_range(mm, address, end_add);
+			continue;
+		}
+
 		if (IS_ENABLED(CONFIG_MIGRATION) &&
 		    (flags & TTU_MIGRATION) &&
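For context (not part of this diff): the static inline stub added to
hugetlb.h only covers !CONFIG_HUGETLB_PAGE builds so the new call in
mm/rmap.c still links.  The real huge_pmd_unshare lives in
mm/hugetlb.c; its rough shape, paraphrased here from memory of the
v4.18-era source (details may differ), shows why the new block above
is needed and why the caller must do its own flushing:

	/*
	 * Paraphrase of the v4.18-era mm/hugetlb.c implementation, not
	 * part of this patch.  Sharing is tracked by the refcount on the
	 * PMD page, not by the page mapcount, which is why
	 * try_to_unmap_one has to call this explicitly.
	 */
	int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
			     pte_t *ptep)
	{
		pgd_t *pgd = pgd_offset(mm, *addr);
		p4d_t *p4d = p4d_offset(pgd, *addr);
		pud_t *pud = pud_offset(p4d, *addr);

		BUG_ON(page_count(virt_to_page(ptep)) == 0);
		if (page_count(virt_to_page(ptep)) == 1)
			return 0;	/* not shared; caller unmaps normally */

		pud_clear(pud);			/* detach shared PMD page */
		put_page(virt_to_page(ptep));	/* drop this mm's reference */
		mm_dec_nr_pmds(mm);
		*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
		return 1;
	}

Because pud_clear() tears down the mapping without touching caches or
TLBs, the new block in try_to_unmap_one flushes the cache and TLB and
issues the mmu_notifier invalidate for the affected range itself
before continuing the rmap walk.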