From patchwork Thu Aug 23 20:59:16 2018
X-Patchwork-Submitter: Mike Kravetz
X-Patchwork-Id: 10574577
From: Mike Kravetz
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Shutemov" , =?utf-8?b?SsOp?= =?utf-8?b?csO0bWUgR2xpc3Nl?= , Vlastimil Babka , Naoya Horiguchi , Davidlohr Bueso , Michal Hocko , Andrew Morton , Mike Kravetz , stable@vger.kernel.org Subject: [PATCH v6 1/2] mm: migration: fix migration of huge PMD shared pages Date: Thu, 23 Aug 2018 13:59:16 -0700 Message-Id: <20180823205917.16297-2-mike.kravetz@oracle.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20180823205917.16297-1-mike.kravetz@oracle.com> References: <20180823205917.16297-1-mike.kravetz@oracle.com> X-Proofpoint-Virus-Version: vendor=nai engine=5900 definitions=8994 signatures=668707 X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=2 malwarescore=0 phishscore=0 bulkscore=0 spamscore=0 mlxscore=0 mlxlogscore=999 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1807170000 definitions=main-1808230215 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP The page migration code employs try_to_unmap() to try and unmap the source page. This is accomplished by using rmap_walk to find all vmas where the page is mapped. This search stops when page mapcount is zero. For shared PMD huge pages, the page map count is always 1 no matter the number of mappings. Shared mappings are tracked via the reference count of the PMD page. Therefore, try_to_unmap stops prematurely and does not completely unmap all mappings of the source page. This problem can result is data corruption as writes to the original source page can happen after contents of the page are copied to the target page. Hence, data is lost. This problem was originally seen as DB corruption of shared global areas after a huge page was soft offlined due to ECC memory errors. DB developers noticed they could reproduce the issue by (hotplug) offlining memory used to back huge pages. A simple testcase can reproduce the problem by creating a shared PMD mapping (note that this must be at least PUD_SIZE in size and PUD_SIZE aligned (1GB on x86)), and using migrate_pages() to migrate process pages between nodes while continually writing to the huge pages being migrated. To fix, have the try_to_unmap_one routine check for huge PMD sharing by calling huge_pmd_unshare for hugetlbfs huge pages. If it is a shared mapping it will be 'unshared' which removes the page table entry and drops the reference on the PMD page. After this, flush caches and TLB. mmu notifiers are called before locking page tables, but we can not be sure of PMD sharing until page tables are locked. Therefore, check for the possibility of PMD sharing before locking so that notifiers can prepare for the worst possible case. 
Fixes: 39dde65c9940 ("shared page table for hugetlb page")
Cc: stable@vger.kernel.org
Signed-off-by: Mike Kravetz
Reviewed-by: Naoya Horiguchi
Acked-by: Michal Hocko
Signed-off-by: Jérôme Glisse
Reviewed-by: Andrea Arcangeli
Signed-off-by: Linus Torvalds
Signed-off-by: Michal Hocko # backport to 4.4
---
 include/linux/hugetlb.h | 14 ++++++++++++++
 mm/hugetlb.c            | 40 +++++++++++++++++++++++++++++++++++++--
 mm/rmap.c               | 42 ++++++++++++++++++++++++++++++++++++++---
 3 files changed, 91 insertions(+), 5 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 36fa6a2a82e3..4ee95d8c8413 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -140,6 +140,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
 pte_t *huge_pte_offset(struct mm_struct *mm,
 		       unsigned long addr, unsigned long sz);
 int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep);
+void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+				unsigned long *start, unsigned long *end);
 struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
 			      int write);
 struct page *follow_huge_pd(struct vm_area_struct *vma,
@@ -170,6 +172,18 @@ static inline unsigned long hugetlb_total_pages(void)
 	return 0;
 }
 
+static inline int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
+					pte_t *ptep)
+{
+	return 0;
+}
+
+static inline void adjust_range_if_pmd_sharing_possible(
+				struct vm_area_struct *vma,
+				unsigned long *start, unsigned long *end)
+{
+}
+
 #define follow_hugetlb_page(m,v,p,vs,a,b,i,w,n)	({ BUG(); 0; })
 #define follow_huge_addr(mm, addr, write)	ERR_PTR(-EINVAL)
 #define copy_hugetlb_page_range(src, dst, vma)	({ BUG(); 0; })
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3103099f64fd..a73c5728e961 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4548,6 +4548,9 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
 	return saddr;
 }
 
+#define _range_in_vma(vma, start, end) \
+	((vma)->vm_start <= (start) && (end) <= (vma)->vm_end)
+
 static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
 {
 	unsigned long base = addr & PUD_MASK;
@@ -4556,12 +4559,40 @@ static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
 	/*
 	 * check on proper vm_flags and page table alignment
 	 */
-	if (vma->vm_flags & VM_MAYSHARE &&
-	    vma->vm_start <= base && end <= vma->vm_end)
+	if (vma->vm_flags & VM_MAYSHARE && _range_in_vma(vma, base, end))
 		return true;
 	return false;
 }
 
+/*
+ * Determine if start,end range within vma could be mapped by shared pmd.
+ * If yes, adjust start and end to cover range associated with possible
+ * shared pmd mappings.
+ */
+void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+				unsigned long *start, unsigned long *end)
+{
+	unsigned long check_addr = *start;
+
+	if (!(vma->vm_flags & VM_MAYSHARE))
+		return;
+
+	for (check_addr = *start; check_addr < *end; check_addr += PUD_SIZE) {
+		unsigned long a_start = check_addr & PUD_MASK;
+		unsigned long a_end = a_start + PUD_SIZE;
+
+		/*
+		 * If sharing is possible, adjust start/end if necessary.
+		 */
+		if (_range_in_vma(vma, a_start, a_end)) {
+			if (a_start < *start)
+				*start = a_start;
+			if (a_end > *end)
+				*end = a_end;
+		}
+	}
+}
+
 /*
  * Search for a shareable pmd page for hugetlb.  In any case calls pmd_alloc()
  * and returns the corresponding pte.  While this is not necessary for the
@@ -4659,6 +4690,11 @@ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
 {
 	return 0;
 }
+
+void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
+				unsigned long *start, unsigned long *end)
+{
+}
 #define want_pmd_share()	(0)
 #endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
 
diff --git a/mm/rmap.c b/mm/rmap.c
index eb477809a5c0..1e79fac3186b 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1362,11 +1362,21 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	}
 
 	/*
-	 * We have to assume the worse case ie pmd for invalidation. Note that
-	 * the page can not be free in this function as call of try_to_unmap()
-	 * must hold a reference on the page.
+	 * For THP, we have to assume the worse case ie pmd for invalidation.
+	 * For hugetlb, it could be much worse if we need to do pud
+	 * invalidation in the case of pmd sharing.
+	 *
+	 * Note that the page can not be free in this function as call of
+	 * try_to_unmap() must hold a reference on the page.
 	 */
 	end = min(vma->vm_end, start + (PAGE_SIZE << compound_order(page)));
+	if (PageHuge(page)) {
+		/*
+		 * If sharing is possible, start and end will be adjusted
+		 * accordingly.
+		 */
+		adjust_range_if_pmd_sharing_possible(vma, &start, &end);
+	}
 	mmu_notifier_invalidate_range_start(vma->vm_mm, start, end);
 
 	while (page_vma_mapped_walk(&pvmw)) {
@@ -1409,6 +1419,32 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		subpage = page - page_to_pfn(page) + pte_pfn(*pvmw.pte);
 		address = pvmw.address;
 
+		if (PageHuge(page)) {
+			if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
+				/*
+				 * huge_pmd_unshare unmapped an entire PMD
+				 * page.  There is no way of knowing exactly
+				 * which PMDs may be cached for this mm, so
+				 * we must flush them all.  start/end were
+				 * already adjusted above to cover this range.
+				 */
+				flush_cache_range(vma, start, end);
+				flush_tlb_range(vma, start, end);
+				mmu_notifier_invalidate_range(mm, start, end);
+
+				/*
+				 * The ref count of the PMD page was dropped
+				 * which is part of the way map counting
+				 * is done for shared PMDs.  Return 'true'
+				 * here.  When there is no other sharing,
+				 * huge_pmd_unshare returns false and we will
+				 * unmap the actual page and drop map count
+				 * to zero.
+				 */
+				page_vma_mapped_walk_done(&pvmw);
+				break;
+			}
+		}
 
 		if (IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) &&