From patchwork Sun Jan 31 00:11:21 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 12057457
From: Nadav Amit
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Nadav Amit, Andrea Arcangeli, Andrew Morton, Andy Lutomirski,
    Dave Hansen, Peter Zijlstra, Thomas Gleixner, Will Deacon,
    Yu Zhao, Nick Piggin, x86@kernel.org
Subject: [RFC 09/20] mm: create pte/pmd_tlb_flush_pending()
Date: Sat, 30 Jan 2021 16:11:21 -0800
Message-Id: <20210131001132.3368247-10-namit@vmware.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210131001132.3368247-1-namit@vmware.com>
References: <20210131001132.3368247-1-namit@vmware.com>
MIME-Version: 1.0

From: Nadav Amit

In preparation for finer-granularity TLB flush tracking, introduce
pte_tlb_flush_pending() and pmd_tlb_flush_pending(). For now, both
functions simply fall back to mm_tlb_flush_pending().

Change pte_accessible() to take the vma as well. No functional change.
Subsequent patches will use this information on architectures that use
per-table deferred TLB tracking.

Signed-off-by: Nadav Amit
Cc: Andrea Arcangeli
Cc: Andrew Morton
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yu Zhao
Cc: Nick Piggin
Cc: x86@kernel.org
---
 arch/arm/include/asm/pgtable.h      |  4 +++-
 arch/arm64/include/asm/pgtable.h    |  4 ++--
 arch/sparc/include/asm/pgtable_64.h |  9 ++++++---
 arch/sparc/mm/init_64.c             |  2 +-
 arch/x86/include/asm/pgtable.h      |  7 +++----
 include/linux/mm_types.h            | 10 ++++++++++
 include/linux/pgtable.h             |  2 +-
 mm/huge_memory.c                    |  2 +-
 mm/ksm.c                            |  2 +-
 mm/pgtable-generic.c                |  2 +-
 10 files changed, 29 insertions(+), 15 deletions(-)

diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index c02f24400369..59bcacc14dc3 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -190,7 +190,9 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)
 #define pte_none(pte)		(!pte_val(pte))
 #define pte_present(pte)	(pte_isset((pte), L_PTE_PRESENT))
 #define pte_valid(pte)		(pte_isset((pte), L_PTE_VALID))
-#define pte_accessible(mm, pte)	(mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))
+#define pte_accessible(vma, pte)	\
+	(pte_tlb_flush_pending(vma, pte) ?	\
+	 pte_present(*pte) : pte_valid(*pte))
 #define pte_write(pte)		(pte_isclear((pte), L_PTE_RDONLY))
 #define pte_dirty(pte)		(pte_isset((pte), L_PTE_DIRTY))
 #define pte_young(pte)		(pte_isset((pte), L_PTE_YOUNG))

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 501562793ce2..f14f1e9dbc3e 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -126,8 +126,8 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
  * flag, since ptep_clear_flush_young() elides a DSB when invalidating the
  * TLB.
  */
-#define pte_accessible(mm, pte)	\
-	(mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))
+#define pte_accessible(vma, pte)	\
+	(pte_tlb_flush_pending(vma, pte) ? pte_present(*pte) : pte_valid(*pte))

 /*
  * p??_access_permitted() is true for valid user mappings (subject to the

diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 550d3904de65..749efd9c49c9 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -673,9 +673,9 @@ static inline unsigned long pte_present(pte_t pte)
 }

 #define pte_accessible pte_accessible
-static inline unsigned long pte_accessible(struct mm_struct *mm, pte_t a)
+static inline unsigned long pte_accessible(struct vm_area_struct *vma, pte_t *a)
 {
-	return pte_val(a) & _PAGE_VALID;
+	return pte_val(*a) & _PAGE_VALID;
 }

 static inline unsigned long pte_special(pte_t pte)
@@ -906,8 +906,11 @@ static void maybe_tlb_batch_add(struct mm_struct *mm, unsigned long vaddr,
	 *
	 * SUN4V NOTE: _PAGE_VALID is the same value in both the SUN4U
	 * and SUN4V pte layout, so this inline test is fine.
+	 *
+	 * The vma is not propagated to this point, but it is not used by
+	 * sparc's pte_accessible(). We therefore provide NULL.
	 */
-	if (likely(mm != &init_mm) && pte_accessible(mm, orig))
+	if (likely(mm != &init_mm) && pte_accessible(NULL, ptep))
		tlb_batch_add(mm, vaddr, ptep, orig, fullmm, hugepage_shift);
 }

diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 182bb7bdaa0a..bda397aa9709 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -404,7 +404,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *

	mm = vma->vm_mm;

	/* Don't insert a non-valid PTE into the TSB, we'll deadlock.  */
-	if (!pte_accessible(mm, pte))
+	if (!pte_accessible(vma, ptep))
		return;

	spin_lock_irqsave(&mm->context.lock, flags);

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index a02c67291cfc..a0e069c15dbc 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -775,13 +775,12 @@ static inline int pte_devmap(pte_t a)
 #endif

 #define pte_accessible pte_accessible
-static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
+static inline bool pte_accessible(struct vm_area_struct *vma, pte_t *a)
 {
-	if (pte_flags(a) & _PAGE_PRESENT)
+	if (pte_flags(*a) & _PAGE_PRESENT)
		return true;

-	if ((pte_flags(a) & _PAGE_PROTNONE) &&
-			mm_tlb_flush_pending(mm))
+	if ((pte_flags(*a) & _PAGE_PROTNONE) && pte_tlb_flush_pending(vma, a))
		return true;

	return false;

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8a5eb4bfac59..812ee0fd4c35 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -682,6 +682,16 @@ static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
	return atomic_read(&mm->tlb_flush_pending);
 }

+static inline bool pte_tlb_flush_pending(struct vm_area_struct *vma, pte_t *pte)
+{
+	return mm_tlb_flush_pending(vma->vm_mm);
+}
+
+static inline bool pmd_tlb_flush_pending(struct vm_area_struct *vma, pmd_t *pmd)
+{
+	return mm_tlb_flush_pending(vma->vm_mm);
+}
+
 static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
 {
	/*

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 8fcdfa52eb4b..e8bce53ca3e8 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -725,7 +725,7 @@ static inline void arch_swap_restore(swp_entry_t entry, struct page *page)
 #endif

 #ifndef pte_accessible
-# define pte_accessible(mm, pte)	((void)(pte), 1)
+# define pte_accessible(vma, pte)	((void)(pte), 1)
 #endif

 #ifndef flush_tlb_fix_spurious_fault

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c345b8b06183..c4b7c00cc69c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1514,7 +1514,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
	 * We are not sure a pending tlb flush here is for a huge page
	 * mapping or not. Hence use the tlb range variant
	 */
-	if (mm_tlb_flush_pending(vma->vm_mm)) {
+	if (pmd_tlb_flush_pending(vma, vmf->pmd)) {
		flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
		/*
		 * change_huge_pmd() released the pmd lock before

diff --git a/mm/ksm.c b/mm/ksm.c
index 9694ee2c71de..515acbffc283 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1060,7 +1060,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
	if (pte_write(*pvmw.pte) || pte_dirty(*pvmw.pte) ||
	    (pte_protnone(*pvmw.pte) && pte_savedwrite(*pvmw.pte)) ||
-	    mm_tlb_flush_pending(mm)) {
+	    pte_tlb_flush_pending(vma, pvmw.pte)) {
		pte_t entry;

		swapped = PageSwapCache(page);

diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 9578db83e312..2ca66e269d33 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -93,7 +93,7 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
	struct mm_struct *mm = (vma)->vm_mm;
	pte_t pte;
	pte = ptep_get_and_clear(mm, address, ptep);
-	if (pte_accessible(mm, pte))
+	if (pte_accessible(vma, ptep))
		flush_tlb_page(vma, address);
	return pte;
 }
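
For illustration only (not part of the patch): a minimal stand-alone C
sketch of the call pattern this series introduces. The structures below are
simplified stand-ins so the sketch compiles with a plain C compiler; only
the helper signatures mirror the ones added to include/linux/mm_types.h
above, and everything else here is hypothetical.

/* Hypothetical sketch -- not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct mm_struct { int tlb_flush_pending; };        /* stand-in */
struct vm_area_struct { struct mm_struct *vm_mm; }; /* stand-in */
typedef struct { unsigned long val; } pte_t;        /* stand-in */

static bool mm_tlb_flush_pending(struct mm_struct *mm)
{
	return mm->tlb_flush_pending != 0;
}

/*
 * As in this patch: a per-PTE interface that, for now, falls back to the
 * mm-wide pending state. Later patches can consult per-table state through
 * the pte pointer without changing any caller again.
 */
static bool pte_tlb_flush_pending(struct vm_area_struct *vma, pte_t *pte)
{
	(void)pte;	/* unused until finer-granularity tracking lands */
	return mm_tlb_flush_pending(vma->vm_mm);
}

int main(void)
{
	struct mm_struct mm = { .tlb_flush_pending = 1 };
	struct vm_area_struct vma = { .vm_mm = &mm };
	pte_t pte = { .val = 0x1 };

	/*
	 * Caller pattern after this patch: pass the vma and a pointer to
	 * the PTE, rather than the mm and the PTE value.
	 */
	if (pte_tlb_flush_pending(&vma, &pte))
		printf("TLB flush pending for this PTE's mm\n");

	return 0;
}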