From patchwork Fri Apr  3 09:00:47 2020
X-Patchwork-Submitter: Zhenyu Ye <yezhenyu2@huawei.com>
X-Patchwork-Id: 11472205
From: Zhenyu Ye <yezhenyu2@huawei.com>
Subject: [PATCH v1 5/6] mm: tlb: Provide flush_*_tlb_range wrappers
Date: Fri, 3 Apr 2020 17:00:47 +0800
Message-ID: <20200403090048.938-6-yezhenyu2@huawei.com>
In-Reply-To: <20200403090048.938-1-yezhenyu2@huawei.com>
References: <20200403090048.938-1-yezhenyu2@huawei.com>

This patch provides flush_{pte|pmd|pud|p4d}_tlb_range() in generic code,
expressed through the mmu_gather APIs. These interfaces set
tlb->cleared_* and finally call tlb_flush(), so the TLB invalidation can
be done according to the information in struct mmu_gather.
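For illustration, the mechanism this relies on can be sketched in
isolation. The following is a minimal user-space mock, not kernel code:
struct mock_gather, mock_set_pmd_range() and mock_flush() are invented
stand-ins for struct mmu_gather, tlb_set_pmd_range() and tlb_flush(),
and the real mmu_gather accumulates ranges with min()/max() rather than
overwriting them. It only shows how recording which page-table level was
cleared lets the final flush walk the range with a stride larger than
PAGE_SIZE:

	/*
	 * Sketch only: invented stand-ins for struct mmu_gather,
	 * tlb_set_pmd_range() and tlb_flush(); not the kernel's code.
	 */
	#include <stdio.h>
	#include <stdbool.h>

	struct mock_gather {
		unsigned long start, end;
		/* which page-table levels had entries cleared */
		bool cleared_ptes, cleared_pmds, cleared_puds, cleared_p4ds;
	};

	/* Analogue of tlb_set_pmd_range(): record the range and the level. */
	static void mock_set_pmd_range(struct mock_gather *tlb,
				       unsigned long addr, unsigned long size)
	{
		tlb->start = addr;
		tlb->end = addr + size;
		tlb->cleared_pmds = true;
	}

	/* Analogue of tlb_flush(): derive the invalidation stride
	 * from the deepest level recorded in the gather. */
	static void mock_flush(struct mock_gather *tlb)
	{
		unsigned long stride = 1UL << 12;	/* PAGE_SIZE */

		if (tlb->cleared_pmds)
			stride = 1UL << 21;		/* PMD_SIZE  */
		if (tlb->cleared_puds)
			stride = 1UL << 30;		/* PUD_SIZE  */

		for (unsigned long va = tlb->start; va < tlb->end; va += stride)
			printf("invalidate [%#lx, %#lx)\n", va, va + stride);
	}

	int main(void)
	{
		struct mock_gather tlb = { 0 };

		/* what flush_pmd_tlb_range(vma, addr, end) boils down to */
		mock_set_pmd_range(&tlb, 0x400000, 2UL << 21);	/* two 2MiB PMDs */
		mock_flush(&tlb);
		return 0;
	}

Built with any stock C compiler, the mock invalidates the 4MiB range in
two 2MiB steps rather than 1024 4KiB steps, which is the kind of saving
the cleared_* hints are meant to enable.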
Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com>
---
 include/asm-generic/pgtable.h | 12 +++++++--
 mm/pgtable-generic.c          | 50 +++++++++++++++++++++++++++++++++++
 2 files changed, 60 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index e2e2bef07dd2..2bedeee94131 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1160,11 +1160,19 @@ static inline int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
  * invalidate the entire TLB which is not desirable.
  * e.g. see arch/arc: flush_pmd_tlb_range
  */
-#define flush_pmd_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
-#define flush_pud_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
+extern void flush_pte_tlb_range(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long end);
+extern void flush_pmd_tlb_range(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long end);
+extern void flush_pud_tlb_range(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long end);
+extern void flush_p4d_tlb_range(struct vm_area_struct *vma,
+				unsigned long addr, unsigned long end);
 #else
+#define flush_pte_tlb_range(vma, addr, end)	BUILD_BUG()
 #define flush_pmd_tlb_range(vma, addr, end)	BUILD_BUG()
 #define flush_pud_tlb_range(vma, addr, end)	BUILD_BUG()
+#define flush_p4d_tlb_range(vma, addr, end)	BUILD_BUG()
 #endif
 
 #endif

diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 3d7c01e76efc..0f5414a4a2ec 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -101,6 +101,56 @@ pte_t ptep_clear_flush(struct vm_area_struct *vma, unsigned long address,
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 
+#ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
+void flush_pte_tlb_range(struct vm_area_struct *vma,
+			 unsigned long addr, unsigned long end)
+{
+	struct mmu_gather tlb;
+
+	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
+	tlb_start_vma(&tlb, vma);
+	tlb_set_pte_range(&tlb, addr, end - addr);
+	tlb_end_vma(&tlb, vma);
+	tlb_finish_mmu(&tlb, addr, end);
+}
+
+void flush_pmd_tlb_range(struct vm_area_struct *vma,
+			 unsigned long addr, unsigned long end)
+{
+	struct mmu_gather tlb;
+
+	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
+	tlb_start_vma(&tlb, vma);
+	tlb_set_pmd_range(&tlb, addr, end - addr);
+	tlb_end_vma(&tlb, vma);
+	tlb_finish_mmu(&tlb, addr, end);
+}
+
+void flush_pud_tlb_range(struct vm_area_struct *vma,
+			 unsigned long addr, unsigned long end)
+{
+	struct mmu_gather tlb;
+
+	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
+	tlb_start_vma(&tlb, vma);
+	tlb_set_pud_range(&tlb, addr, end - addr);
+	tlb_end_vma(&tlb, vma);
+	tlb_finish_mmu(&tlb, addr, end);
+}
+
+void flush_p4d_tlb_range(struct vm_area_struct *vma,
+			 unsigned long addr, unsigned long end)
+{
+	struct mmu_gather tlb;
+
+	tlb_gather_mmu(&tlb, vma->vm_mm, addr, end);
+	tlb_start_vma(&tlb, vma);
+	tlb_set_p4d_range(&tlb, addr, end - addr);
+	tlb_end_vma(&tlb, vma);
+	tlb_finish_mmu(&tlb, addr, end);
+}
+#endif /* __HAVE_ARCH_FLUSH_PMD_TLB_RANGE */
+
 #ifndef __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
 int pmdp_set_access_flags(struct vm_area_struct *vma,
 			  unsigned long address, pmd_t *pmdp,