From patchwork Wed Mar 9 04:10:40 2022
X-Patchwork-Id: 12774734
From: Nadav Amit
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Nadav Amit,
	Andrea Arcangeli, Andrew Cooper, Andy Lutomirski, Dave Hansen,
	Peter Xu, Peter Zijlstra, Thomas Gleixner, Will Deacon, Yu Zhao,
	Nick Piggin, x86@kernel.org
Subject: [PATCH v3 2/5] x86/mm: check exec permissions on fault
Date: Tue, 8 Mar 2022 20:10:40 -0800
Message-Id: <20220309041043.302261-3-namit@vmware.com>
In-Reply-To: <20220309041043.302261-1-namit@vmware.com>
References: <20220309041043.302261-1-namit@vmware.com>

From: Nadav Amit

access_error() currently does not check for execution permission
violations. As a result, spurious page faults caused by an execution
permission violation lead to SIGSEGV. This has not been an issue so
far, but the next patches avoid TLB flushes on permission promotion,
which can lead to this scenario. nodejs, for instance, crashes when the
TLB flush is avoided on permission promotion.

Add a check so that access_error() does not mistakenly report spurious
page faults due to instruction fetch as access errors. The change
assumes that the "instruction fetch" and "write" bits of the hardware
error code are mutually exclusive. However, to be on the safe side,
especially in case hypervisors misbehave, assert that this is the case
and warn otherwise.

Cc: Andrea Arcangeli
Cc: Andrew Cooper
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Xu
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
Signed-off-by: Nadav Amit
---
 arch/x86/mm/fault.c | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index d0074c6ed31a..ad0ef0a6087a 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1107,10 +1107,28 @@ access_error(unsigned long error_code, struct vm_area_struct *vma)
 			       (error_code & X86_PF_INSTR), foreign))
 		return 1;
 
-	if (error_code & X86_PF_WRITE) {
+	if (error_code & (X86_PF_WRITE | X86_PF_INSTR)) {
+		/*
+		 * CPUs are not expected to set the two error code bits
+		 * together, but to ensure that hypervisors do not misbehave,
+		 * run an additional sanity check.
+		 */
+		if ((error_code & (X86_PF_WRITE|X86_PF_INSTR)) ==
+				(X86_PF_WRITE|X86_PF_INSTR)) {
+			WARN_ON_ONCE(1);
+			return 1;
+		}
+
 		/* write, present and write, not present: */
-		if (unlikely(!(vma->vm_flags & VM_WRITE)))
+		if ((error_code & X86_PF_WRITE) &&
+		    unlikely(!(vma->vm_flags & VM_WRITE)))
+			return 1;
+
+		/* exec, present and exec, not present: */
+		if ((error_code & X86_PF_INSTR) &&
+		    unlikely(!(vma->vm_flags & VM_EXEC)))
 			return 1;
+
 		return 0;
 	}
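
For readers following the change, the resulting logic boils down to the
stand-alone sketch below. It is illustrative only, not the kernel code
itself: the X86_PF_* and VM_* constants are redefined locally with the
kernel's values so the snippet compiles on its own, and the pkey/SGX
handling of the real access_error() is omitted.

	#include <stdio.h>

	#define X86_PF_WRITE	(1UL << 1)	/* fault was a write */
	#define X86_PF_INSTR	(1UL << 4)	/* fault was an instruction fetch */
	#define VM_WRITE	0x00000002UL
	#define VM_EXEC		0x00000004UL

	/* Simplified model of the write/exec part of access_error(). */
	static int access_error_wx(unsigned long error_code, unsigned long vm_flags)
	{
		/* The two bits are not expected to be set together. */
		if ((error_code & (X86_PF_WRITE | X86_PF_INSTR)) ==
		    (X86_PF_WRITE | X86_PF_INSTR))
			return 1;

		if ((error_code & X86_PF_WRITE) && !(vm_flags & VM_WRITE))
			return 1;	/* write to a non-writable VMA */

		if ((error_code & X86_PF_INSTR) && !(vm_flags & VM_EXEC))
			return 1;	/* fetch from a non-executable VMA */

		return 0;	/* possibly a spurious fault; retrying is fine */
	}

	int main(void)
	{
		/* Spurious fetch fault on an executable VMA: not an error. */
		printf("%d\n", access_error_wx(X86_PF_INSTR, VM_EXEC));
		/* Fetch fault on a VMA without VM_EXEC: a real violation. */
		printf("%d\n", access_error_wx(X86_PF_INSTR, VM_WRITE));
		return 0;
	}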

From patchwork Wed Mar 9 04:10:41 2022
X-Patchwork-Id: 12774735
From: Nadav Amit
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Nadav Amit,
	Andrea Arcangeli, Andrew Cooper, Andy Lutomirski, Dave Hansen,
	Peter Xu, Peter Zijlstra, Thomas Gleixner, Will Deacon, Yu Zhao,
	Nick Piggin, x86@kernel.org
Subject: [PATCH v3 3/5] mm/mprotect: use mmu_gather
Date: Tue, 8 Mar 2022 20:10:41 -0800
Message-Id: <20220309041043.302261-4-namit@vmware.com>
In-Reply-To: <20220309041043.302261-1-namit@vmware.com>
References: <20220309041043.302261-1-namit@vmware.com>

From: Nadav Amit

change_pXX_range() currently does not use mmu_gather, but instead
implements its own deferred TLB flush scheme. This both complicates the
code, as developers need to be aware of different invalidation schemes,
and prevents opportunities to avoid TLB flushes or perform them at
finer granularity.

The use of mmu_gather for modified PTEs has benefits in various
scenarios even if pages are not released. For instance, if only a
single page in a range of many pages needs to be flushed, only that
page is flushed. If a THP page is flushed, on x86 a single TLB invlpg
instruction can be used instead of 512 instructions (or a full TLB
flush, which Linux would actually use by default). mprotect() over
multiple VMAs requires only a single flush.

Use mmu_gather in change_pXX_range(). As the pages are not released,
only record the flushed range using tlb_flush_pXX_range().

Handle THP similarly and get rid of flush_cache_range(), which becomes
redundant since tlb_start_vma() calls it when needed.
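
For reference, the calling convention after this patch is roughly the
following. This is a condensed sketch of the call pattern taken from
the diff below, not code that compiles on its own:

	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm);		/* start batching */
	change_protection(&tlb, vma, start, end, newprot, cp_flags);
	/*
	 * ... which internally does, per VMA:
	 *	tlb_start_vma(tlb, vma);	   (takes care of flush_cache_range())
	 *	tlb_change_page_size(tlb, PAGE_SIZE);  (HPAGE_PMD_SIZE for THP)
	 *	tlb_flush_pte_range(tlb, addr, PAGE_SIZE);  for each changed PTE
	 *	tlb_end_vma(tlb, vma);
	 */
	tlb_finish_mmu(&tlb);			/* single batched TLB flush */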
Cc: Andrea Arcangeli
Cc: Andrew Cooper
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Xu
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
Signed-off-by: Nadav Amit
---
 fs/exec.c               |  6 ++-
 include/linux/huge_mm.h |  5 ++-
 include/linux/mm.h      |  5 ++-
 mm/huge_memory.c        | 10 ++++-
 mm/mprotect.c           | 93 ++++++++++++++++++++++-------------------
 mm/userfaultfd.c        |  6 ++-
 6 files changed, 75 insertions(+), 50 deletions(-)

diff --git a/fs/exec.c b/fs/exec.c
index a39108c1190a..1c46b7d7c32c 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -759,6 +759,7 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	unsigned long stack_size;
 	unsigned long stack_expand;
 	unsigned long rlim_stack;
+	struct mmu_gather tlb;
 
 #ifdef CONFIG_STACK_GROWSUP
 	/* Limit stack size */
@@ -813,8 +814,11 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	vm_flags |= mm->def_flags;
 	vm_flags |= VM_STACK_INCOMPLETE_SETUP;
 
-	ret = mprotect_fixup(vma, &prev, vma->vm_start, vma->vm_end,
+	tlb_gather_mmu(&tlb, mm);
+	ret = mprotect_fixup(&tlb, vma, &prev, vma->vm_start, vma->vm_end,
 			vm_flags);
+	tlb_finish_mmu(&tlb);
+
 	if (ret)
 		goto out_unlock;
 	BUG_ON(prev != vma);
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2999190adc22..9a26bd10e083 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -36,8 +36,9 @@ int zap_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		 pud_t *pud, unsigned long addr);
 bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 		   unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd);
-int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
-		    pgprot_t newprot, unsigned long cp_flags);
+int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
+		    unsigned long cp_flags);
 vm_fault_t vmf_insert_pfn_pmd_prot(struct vm_fault *vmf, pfn_t pfn,
 				   pgprot_t pgprot, bool write);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index dc69d2a69912..b2574bbdb76d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1969,10 +1969,11 @@ extern unsigned long move_page_tables(struct vm_area_struct *vma,
 #define  MM_CP_UFFD_WP_ALL                 (MM_CP_UFFD_WP | \
 					    MM_CP_UFFD_WP_RESOLVE)
 
-extern unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
+extern unsigned long change_protection(struct mmu_gather *tlb,
+			      struct vm_area_struct *vma, unsigned long start,
 			      unsigned long end, pgprot_t newprot,
 			      unsigned long cp_flags);
-extern int mprotect_fixup(struct vm_area_struct *vma,
+extern int mprotect_fixup(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			  struct vm_area_struct **pprev, unsigned long start,
 			  unsigned long end, unsigned long newflags);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3557aabe86fe..d58a5b498011 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1692,8 +1692,9 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
  *      or if prot_numa but THP migration is not supported
  *  - HPAGE_PMD_NR if protections changed and TLB flush necessary
  */
-int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
-		unsigned long addr, pgprot_t newprot, unsigned long cp_flags)
+int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+		    pmd_t *pmd, unsigned long addr, pgprot_t newprot,
+		    unsigned long cp_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	spinlock_t *ptl;
@@ -1704,6 +1705,8 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
 	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
 
+	tlb_change_page_size(tlb, HPAGE_PMD_SIZE);
+
 	if (prot_numa && !thp_migration_supported())
 		return 1;
 
@@ -1799,6 +1802,9 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 	ret = HPAGE_PMD_NR;
 	set_pmd_at(mm, addr, pmd, entry);
+
+	tlb_flush_pmd_range(tlb, addr, HPAGE_PMD_SIZE);
+
 	BUG_ON(vma_is_anonymous(vma) && !preserve_write && pmd_write(entry));
 unlock:
 	spin_unlock(ptl);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index b69ce7a7b2b7..f9730bac2d78 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -33,12 +33,13 @@
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
+#include <asm/tlb.h>
 
 #include "internal.h"
 
-static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
-		unsigned long addr, unsigned long end, pgprot_t newprot,
-		unsigned long cp_flags)
+static unsigned long change_pte_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pmd_t *pmd, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pte_t *pte, oldpte;
 	spinlock_t *ptl;
@@ -49,6 +50,8 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
 	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
 
+	tlb_change_page_size(tlb, PAGE_SIZE);
+
 	/*
 	 * Can be called with only the mmap_lock for reading by
 	 * prot_numa so we must check the pmd isn't constantly
@@ -149,6 +152,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				ptent = pte_mkwrite(ptent);
 			}
 			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
+			tlb_flush_pte_range(tlb, addr, PAGE_SIZE);
 			pages++;
 		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
@@ -230,9 +234,9 @@ static inline int pmd_none_or_clear_bad_unless_trans_huge(pmd_t *pmd)
 	return 0;
 }
 
-static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
-		pud_t *pud, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_pmd_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pud_t *pud, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pmd_t *pmd;
 	unsigned long next;
@@ -272,8 +276,12 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 			if (next - addr != HPAGE_PMD_SIZE) {
 				__split_huge_pmd(vma, pmd, addr, false, NULL);
 			} else {
-				int nr_ptes = change_huge_pmd(vma, pmd, addr,
-							      newprot, cp_flags);
+				/*
+				 * change_huge_pmd() does not defer TLB flushes,
+				 * so no need to propagate the tlb argument.
+				 */
+				int nr_ptes = change_huge_pmd(tlb, vma, pmd,
+						addr, newprot, cp_flags);
 
 				if (nr_ptes) {
 					if (nr_ptes == HPAGE_PMD_NR) {
@@ -287,8 +295,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 				}
 				/* fall through, the trans huge pmd just split */
 			}
-		this_pages = change_pte_range(vma, pmd, addr, next, newprot,
-					      cp_flags);
+		this_pages = change_pte_range(tlb, vma, pmd, addr, next,
+					      newprot, cp_flags);
 		pages += this_pages;
next:
 		cond_resched();
@@ -302,9 +310,9 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 	return pages;
 }
 
-static inline unsigned long change_pud_range(struct vm_area_struct *vma,
-		p4d_t *p4d, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_pud_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, p4d_t *p4d, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	pud_t *pud;
 	unsigned long next;
@@ -315,16 +323,16 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
 		next = pud_addr_end(addr, end);
 		if (pud_none_or_clear_bad(pud))
 			continue;
-		pages += change_pmd_range(vma, pud, addr, next, newprot,
+		pages += change_pmd_range(tlb, vma, pud, addr, next, newprot,
 					  cp_flags);
 	} while (pud++, addr = next, addr != end);
 
 	return pages;
 }
 
-static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
-		pgd_t *pgd, unsigned long addr, unsigned long end,
-		pgprot_t newprot, unsigned long cp_flags)
+static inline unsigned long change_p4d_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, pgd_t *pgd, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	p4d_t *p4d;
 	unsigned long next;
@@ -335,44 +343,40 @@ static inline unsigned long change_p4d_range(struct vm_area_struct *vma,
 		next = p4d_addr_end(addr, end);
 		if (p4d_none_or_clear_bad(p4d))
 			continue;
-		pages += change_pud_range(vma, p4d, addr, next, newprot,
+		pages += change_pud_range(tlb, vma, p4d, addr, next, newprot,
 					  cp_flags);
 	} while (p4d++, addr = next, addr != end);
 
 	return pages;
 }
 
-static unsigned long change_protection_range(struct vm_area_struct *vma,
-		unsigned long addr, unsigned long end, pgprot_t newprot,
-		unsigned long cp_flags)
+static unsigned long change_protection_range(struct mmu_gather *tlb,
+		struct vm_area_struct *vma, unsigned long addr,
+		unsigned long end, pgprot_t newprot, unsigned long cp_flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	pgd_t *pgd;
 	unsigned long next;
-	unsigned long start = addr;
 	unsigned long pages = 0;
 
 	BUG_ON(addr >= end);
 	pgd = pgd_offset(mm, addr);
-	flush_cache_range(vma, addr, end);
-	inc_tlb_flush_pending(mm);
+	tlb_start_vma(tlb, vma);
 	do {
 		next = pgd_addr_end(addr, end);
 		if (pgd_none_or_clear_bad(pgd))
 			continue;
-		pages += change_p4d_range(vma, pgd, addr, next, newprot,
+		pages += change_p4d_range(tlb, vma, pgd, addr, next, newprot,
 					  cp_flags);
 	} while (pgd++, addr = next, addr != end);
 
-	/* Only flush the TLB if we actually modified any entries: */
-	if (pages)
-		flush_tlb_range(vma, start, end);
-	dec_tlb_flush_pending(mm);
+	tlb_end_vma(tlb, vma);
 
 	return pages;
 }
 
-unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
+unsigned long change_protection(struct mmu_gather *tlb,
+		       struct vm_area_struct *vma, unsigned long start,
 		       unsigned long end, pgprot_t newprot,
 		       unsigned long cp_flags)
 {
@@ -383,7 +387,7 @@ unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
 	if (is_vm_hugetlb_page(vma))
 		pages = hugetlb_change_protection(vma, start, end, newprot);
 	else
-		pages = change_protection_range(vma, start, end, newprot,
+		pages = change_protection_range(tlb, vma, start, end, newprot,
 						cp_flags);
 
 	return pages;
@@ -417,8 +421,9 @@ static const struct mm_walk_ops prot_none_walk_ops = {
 };
 
 int
-mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
-	unsigned long start, unsigned long end, unsigned long newflags)
+mprotect_fixup(struct mmu_gather *tlb, struct vm_area_struct *vma,
+	       struct vm_area_struct **pprev, unsigned long start,
+	       unsigned long end, unsigned long newflags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long oldflags = vma->vm_flags;
@@ -505,7 +510,7 @@ mprotect_fixup(struct vm_area_struct *vma, struct vm_area_struct **pprev,
 	dirty_accountable = vma_wants_writenotify(vma, vma->vm_page_prot);
 	vma_set_page_prot(vma);
 
-	change_protection(vma, start, end, vma->vm_page_prot,
+	change_protection(tlb, vma, start, end, vma->vm_page_prot,
 			  dirty_accountable ? MM_CP_DIRTY_ACCT : 0);
 
 	/*
@@ -539,6 +544,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 	const int grows = prot & (PROT_GROWSDOWN|PROT_GROWSUP);
 	const bool rier = (current->personality & READ_IMPLIES_EXEC) &&
 				(prot & PROT_READ);
+	struct mmu_gather tlb;
 
 	start = untagged_addr(start);
 
@@ -598,6 +604,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 	else
 		prev = vma->vm_prev;
 
+	tlb_gather_mmu(&tlb, current->mm);
 	for (nstart = start ; ; ) {
 		unsigned long mask_off_old_flags;
 		unsigned long newflags;
@@ -624,18 +631,18 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		/* newflags >> 4 shift VM_MAY% in place of VM_% */
 		if ((newflags & ~(newflags >> 4)) & VM_ACCESS_FLAGS) {
 			error = -EACCES;
-			goto out;
+			goto out_tlb;
 		}
 
 		/* Allow architectures to sanity-check the new flags */
 		if (!arch_validate_flags(newflags)) {
 			error = -EINVAL;
-			goto out;
+			goto out_tlb;
 		}
 
 		error = security_file_mprotect(vma, reqprot, prot);
 		if (error)
-			goto out;
+			goto out_tlb;
 
 		tmp = vma->vm_end;
 		if (tmp > end)
@@ -644,27 +651,29 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		if (vma->vm_ops && vma->vm_ops->mprotect) {
 			error = vma->vm_ops->mprotect(vma, nstart, tmp, newflags);
 			if (error)
-				goto out;
+				goto out_tlb;
 		}
 
-		error = mprotect_fixup(vma, &prev, nstart, tmp, newflags);
+		error = mprotect_fixup(&tlb, vma, &prev, nstart, tmp, newflags);
 		if (error)
-			goto out;
+			goto out_tlb;
 
 		nstart = tmp;
 
 		if (nstart < prev->vm_end)
 			nstart = prev->vm_end;
 		if (nstart >= end)
-			goto out;
+			goto out_tlb;
 
 		vma = prev->vm_next;
 		if (!vma || vma->vm_start != nstart) {
 			error = -ENOMEM;
-			goto out;
+			goto out_tlb;
 		}
 		prot = reqprot;
 	}
+out_tlb:
+	tlb_finish_mmu(&tlb);
 out:
 	mmap_write_unlock(current->mm);
 	return error;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e9bb6db002aa..0fd7fc8010d7 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -16,6 +16,7 @@
 #include <linux/hugetlb.h>
 #include <linux/shmem_fs.h>
 #include <asm/tlbflush.h>
+#include <asm/tlb.h>
 #include "internal.h"
 
 static __always_inline
@@ -687,6 +688,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 			atomic_t *mmap_changing)
 {
 	struct vm_area_struct *dst_vma;
+	struct mmu_gather tlb;
 	pgprot_t newprot;
 	int err;
 
@@ -728,8 +730,10 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 	else
 		newprot = vm_get_page_prot(dst_vma->vm_flags);
 
-	change_protection(dst_vma, start, start + len, newprot,
+	tlb_gather_mmu(&tlb, dst_mm);
+	change_protection(&tlb, dst_vma, start, start + len, newprot,
 			  enable_wp ? MM_CP_UFFD_WP : MM_CP_UFFD_WP_RESOLVE);
+	tlb_finish_mmu(&tlb);
 
 	err = 0;
 out_unlock:
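
As a user-space illustration of the "mprotect() over multiple VMAs
requires only a single flush" point above (a hypothetical demo, not
part of the series): the first mprotect() below splits the mapping into
two VMAs with different permissions, and the second call spans both, so
the kernel can record both ranges in one mmu_gather and issue a single
batched flush in tlb_finish_mmu():

	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long page = sysconf(_SC_PAGESIZE);
		char *p;

		/* Two adjacent anonymous pages. */
		p = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/* Downgrading only the first page splits the mapping into
		 * two VMAs with different permissions. */
		if (mprotect(p, page, PROT_READ)) {
			perror("mprotect");
			return 1;
		}

		/* One mprotect() spanning both VMAs; the protection change
		 * for the whole range is now flushed in one batch. */
		if (mprotect(p, 2 * page, PROT_READ | PROT_WRITE)) {
			perror("mprotect");
			return 1;
		}

		p[0] = 1;	/* write through the re-promoted mapping */
		printf("ok\n");
		return 0;
	}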

From patchwork Wed Mar 9 04:10:43 2022
X-Patchwork-Id: 12774736
From: Nadav Amit
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Nadav Amit,
	Andrea Arcangeli, Andrew Cooper, Andy Lutomirski, Dave Hansen,
	Peter Xu, Peter Zijlstra, Thomas Gleixner, Will Deacon, Yu Zhao,
	Nick Piggin, x86@kernel.org
Subject: [PATCH v3 5/5] mm: avoid unnecessary flush on change_huge_pmd()
Date: Tue, 8 Mar 2022 20:10:43 -0800
Message-Id: <20220309041043.302261-6-namit@vmware.com>
In-Reply-To: <20220309041043.302261-1-namit@vmware.com>
References: <20220309041043.302261-1-namit@vmware.com>

From: Nadav Amit

Calls to change_protection_range() on THP can trigger, at least on x86,
two TLB flushes for one page: one immediately, when pmdp_invalidate()
is called by change_huge_pmd(), and then another one later (that can be
batched) when change_protection_range() finishes.

The first TLB flush is only needed to prevent the dirty bit (and, to a
lesser extent, the access bit) from changing while the PTE is modified.
However, this is not actually required, as x86 CPUs set the dirty bit
atomically, with an additional check that the PTE is (still) present.
One caveat is Intel's Knights Landing, which has a bug and does not do
so.

Leverage this behavior to eliminate the unnecessary TLB flush in
change_huge_pmd(). Introduce a new arch-specific pmdp_invalidate_ad()
that invalidates the PMD while only guaranteeing that hardware updates
of the access and dirty bits are not lost.

Cc: Andrea Arcangeli
Cc: Andrew Cooper
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Xu
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
Signed-off-by: Nadav Amit
---
 arch/x86/include/asm/pgtable.h |  5 +++++
 arch/x86/mm/pgtable.c          | 10 ++++++++++
 include/linux/pgtable.h        | 20 ++++++++++++++++++++
 mm/huge_memory.c               |  4 ++--
 mm/pgtable-generic.c           |  8 ++++++++
 5 files changed, 45 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 62ab07e24aef..23ad34edcc4b 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1173,6 +1173,11 @@ static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
 	}
 }
 #endif
+
+#define __HAVE_ARCH_PMDP_INVALIDATE_AD
+extern pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma,
+				unsigned long address, pmd_t *pmdp);
+
 /*
  * Page table pages are page-aligned.  The lower half of the top
  * level is used for userspace and the top half for the kernel.
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 3481b35cb4ec..b2fcb2c749ce 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -608,6 +608,16 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
 
 	return young;
 }
+
+pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
+			 pmd_t *pmdp)
+{
+	pmd_t old = pmdp_establish(vma, address, pmdp, pmd_mkinvalid(*pmdp));
+
+	if (cpu_feature_enabled(X86_BUG_PTE_LEAK))
+		flush_pmd_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+	return old;
+}
 #endif
 
 /**
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index f4f4077b97aa..5826e8e52619 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -570,6 +570,26 @@ extern pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 			     pmd_t *pmdp);
 #endif
 
+#ifndef __HAVE_ARCH_PMDP_INVALIDATE_AD
+
+/*
+ * pmdp_invalidate_ad() invalidates the PMD while changing a transparent
+ * hugepage mapping in the page tables. This function is similar to
+ * pmdp_invalidate(), but should only be used if the access and dirty bits
+ * would not be cleared by the software in the new PMD value. The function
+ * ensures that hardware updates of the access and dirty bits are not lost.
+ *
+ * Doing so allows certain architectures to avoid a TLB flush in most cases.
+ * Another TLB flush might still be necessary later if the PMD update itself
+ * requires such a flush (e.g., if the protection was made stricter). Even
+ * when a TLB flush is needed because of the update, the caller may be able
+ * to batch these TLB flushing operations, so fewer TLB flushes are needed
+ * overall.
+ */
+extern pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma,
+				unsigned long address, pmd_t *pmdp);
+#endif
+
 #ifndef __HAVE_ARCH_PTE_SAME
 static inline int pte_same(pte_t pte_a, pte_t pte_b)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 51b0f3cb1ba0..691d80edcfd7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1781,10 +1781,10 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	 * The race makes MADV_DONTNEED miss the huge pmd and don't clear it
 	 * which may break userspace.
 	 *
-	 * pmdp_invalidate() is required to make sure we don't miss
+	 * pmdp_invalidate_ad() is required to make sure we don't miss
 	 * dirty/young flags set by hardware.
 	 */
-	oldpmd = pmdp_invalidate(vma, addr, pmd);
+	oldpmd = pmdp_invalidate_ad(vma, addr, pmd);
 
 	entry = pmd_modify(oldpmd, newprot);
 	if (preserve_write)
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 6523fda274e5..90ab721a12a8 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -201,6 +201,14 @@ pmd_t pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
 }
 #endif
 
+#ifndef __HAVE_ARCH_PMDP_INVALIDATE_AD
+pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address,
+			 pmd_t *pmdp)
+{
+	return pmdp_invalidate(vma, address, pmdp);
+}
+#endif
+
 #ifndef pmdp_collapse_flush
 pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
 			  pmd_t *pmdp)
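
Taken together with the mmu_gather conversion in patch 3, the THP
protection-change path defers its flush roughly as follows. This is a
condensed sketch of the resulting flow, not literal kernel code:

	/* mprotect() / userfaultfd write-protect path, after this series: */
	tlb_gather_mmu(&tlb, mm);
	...
	/* change_huge_pmd(): */
	oldpmd = pmdp_invalidate_ad(vma, addr, pmd);	/* no immediate flush on
							   x86, except on CPUs with
							   X86_BUG_PTE_LEAK */
	entry = pmd_modify(oldpmd, newprot);
	set_pmd_at(mm, addr, pmd, entry);
	tlb_flush_pmd_range(tlb, addr, HPAGE_PMD_SIZE);	/* only records the range */
	...
	tlb_finish_mmu(&tlb);				/* single batched flush */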