From patchwork Wed Apr 27 10:52:05 2022
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 12828597
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, mike.kravetz@oracle.com
Cc: almasrymina@google.com, songmuchun@bytedance.com,
    baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/3] mm: hugetlb: Considering PMD sharing when flushing cache/TLBs
Date: Wed, 27 Apr 2022 18:52:05 +0800
Message-Id: <0443c8cf20db554d3ff4b439b30e0ff26c0181dd.1651056365.git.baolin.wang@linux.alibaba.com>

When moving hugetlb page tables, the cache flushing is done in
move_page_tables() without considering shared PMDs, which may cause cache
issues on some architectures. Thus we should move the hugetlb cache
flushing into move_hugetlb_page_tables() and take the shared PMD range
into account, as calculated by adjust_range_if_pmd_sharing_possible().
Meanwhile, also expand the TLB flushing range in case of shared PMDs.

Note this was found by code inspection and has not been observed as a
real problem in practice so far.
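For reference, a minimal userspace sketch of the range widening this relies
on: when PMD sharing is possible, the flush window has to grow to the
enclosing PUD_SIZE-aligned boundaries, since unsharing tears down a page
table that maps that whole window. The helper name, the fixed 1 GiB
PUD_SIZE (x86-64 with 4 KiB base pages) and the example addresses below are
illustrative assumptions; the real adjust_range_if_pmd_sharing_possible()
also checks VM_MAYSHARE and that the widened range stays inside the VMA.

#include <stdio.h>

#define PUD_SIZE (1UL << 30)    /* assumed: 1 GiB, x86-64 with 4 KiB pages */

/* Illustrative only: widen [start, end) to PUD_SIZE-aligned boundaries. */
static void widen_for_pmd_sharing(unsigned long *start, unsigned long *end)
{
        *start &= ~(PUD_SIZE - 1);                       /* round start down */
        *end = (*end + PUD_SIZE - 1) & ~(PUD_SIZE - 1);  /* round end up */
}

int main(void)
{
        unsigned long start = 0x40200000UL, end = 0x40a00000UL;

        widen_for_pmd_sharing(&start, &end);
        /* prints: flush range: 0x40000000 - 0x80000000 */
        printf("flush range: 0x%lx - 0x%lx\n", start, end);
        return 0;
}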
Fixes: 550a7d60bd5e ("mm, hugepages: add mremap() support for hugepage backed vma")
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 17 +++++++++++++++--
 mm/mremap.c  |  2 +-
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1945dfb..d3a6094 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4937,10 +4937,17 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	unsigned long old_addr_copy;
 	pte_t *src_pte, *dst_pte;
 	struct mmu_notifier_range range;
+	bool shared_pmd = false;
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, old_addr,
 				old_end);
 	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
+	/*
+	 * In case of shared PMDs, we should cover the maximum possible
+	 * range.
+	 */
+	flush_cache_range(vma, range.start, range.end);
+
 	mmu_notifier_invalidate_range_start(&range);
 	/* Prevent race with file truncation */
 	i_mmap_lock_write(mapping);
@@ -4957,8 +4964,10 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 		 */
 		old_addr_copy = old_addr;
 
-		if (huge_pmd_unshare(mm, vma, &old_addr_copy, src_pte))
+		if (huge_pmd_unshare(mm, vma, &old_addr_copy, src_pte)) {
+			shared_pmd = true;
 			continue;
+		}
 
 		dst_pte = huge_pte_alloc(mm, new_vma, new_addr, sz);
 		if (!dst_pte)
@@ -4966,7 +4975,11 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 
 		move_huge_pte(vma, old_addr, new_addr, src_pte, dst_pte);
 	}
-	flush_tlb_range(vma, old_end - len, old_end);
+
+	if (shared_pmd)
+		flush_tlb_range(vma, range.start, range.end);
+	else
+		flush_tlb_range(vma, old_end - len, old_end);
 	mmu_notifier_invalidate_range_end(&range);
 	i_mmap_unlock_write(mapping);
 
diff --git a/mm/mremap.c b/mm/mremap.c
index 98f50e6..0970025 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -490,12 +490,12 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		return 0;
 
 	old_end = old_addr + len;
-	flush_cache_range(vma, old_addr, old_end);
 
 	if (is_vm_hugetlb_page(vma))
 		return move_hugetlb_page_tables(vma, new_vma, old_addr,
 						new_addr, len);
 
+	flush_cache_range(vma, old_addr, old_end);
 	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, vma->vm_mm,
 				old_addr, old_end);
 	mmu_notifier_invalidate_range_start(&range);
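As a usage note, the path changed above is reached via mremap() on a
hugetlb-backed VMA. Below is a hedged userspace sketch that exercises
move_hugetlb_page_tables() by moving a shared anonymous hugetlb mapping.
It is a sketch under stated assumptions, not a reproducer of a cache
problem: it does not by itself trigger PMD sharing (that additionally
needs a file-backed shared mapping spanning a PUD_SIZE-aligned region
mapped by multiple processes), and the 2 MiB default huge page size and a
pre-reserved huge page pool are assumptions about the test machine.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define HPAGE	(2UL << 20)	/* assumption: 2 MiB default huge pages */
#define LEN	(2 * HPAGE)

int main(void)
{
	/* Shared anonymous hugetlb mapping; needs huge pages reserved,
	 * e.g. "echo 8 > /proc/sys/vm/nr_hugepages". */
	unsigned char *old = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
				  MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (old == MAP_FAILED) {
		perror("mmap hugetlb");
		return 1;
	}
	memset(old, 0xab, LEN);

	/* Carve a huge-page-aligned destination out of an ordinary
	 * PROT_NONE reservation so MREMAP_FIXED has a legal target. */
	unsigned char *res = mmap(NULL, LEN + HPAGE, PROT_NONE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (res == MAP_FAILED) {
		perror("mmap reserve");
		return 1;
	}
	unsigned char *dst = (unsigned char *)
		(((unsigned long)res + HPAGE - 1) & ~(HPAGE - 1));

	/* Same-size move: for a hugetlb VMA this goes through
	 * move_hugetlb_page_tables() in the kernel. */
	unsigned char *new = mremap(old, LEN, LEN,
				    MREMAP_MAYMOVE | MREMAP_FIXED, dst);
	if (new == MAP_FAILED) {
		perror("mremap");
		return 1;
	}
	printf("moved %p -> %p, first byte 0x%02x\n",
	       (void *)old, (void *)new, new[0]);
	munmap(new, LEN);
	return 0;
}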