From patchwork Thu Aug  4 09:15:15 2016
From: Matthias Brugger <mbrugger@suse.com>
To: pbonzini@redhat.com, rkrcmar@redhat.com, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, linux@armlinux.org.uk, catalin.marinas@arm.com,
	will.deacon@arm.com
Cc: mark.rutland@arm.com, mbrugger@suse.com, kvm@vger.kernel.org,
	david.daney@cavium.com, ard.biesheuvel@linaro.org, zlim.lnx@gmail.com,
	suzuki.poulose@arm.com, agraf@suse.de, linux-kernel@vger.kernel.org,
	rrichter@cavium.com, lorenzo.pieralisi@arm.com, james.morse@arm.com,
	dave.long@linaro.org, kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH 2/4] arm64: Implement IPI based TLB invalidation
Date: Thu, 4 Aug 2016 11:15:15 +0200
Message-Id: <1470302117-32296-3-git-send-email-mbrugger@suse.com>
X-Mailer: git-send-email 2.6.6
In-Reply-To: <1470302117-32296-1-git-send-email-mbrugger@suse.com>
References: <1470302117-32296-1-git-send-email-mbrugger@suse.com>

Hardware may lack a sane implementation of TLB invalidation using the
broadcast TLBI instructions. Add a capability which, when set, makes the
kernel invalidate TLBs using IPIs instead, so that each CPU performs a
purely local flush.
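For readers unfamiliar with the mechanism: the fallback replaces a broadcast
"tlbi ... is" with an IPI that runs the local-only invalidation on every
online CPU. The snippet below is an illustrative sketch only, mirroring the
__flush_tlb_all_ipi() path added by the diff; the helper names
do_local_flush() and flush_all_by_ipi() are made up for the example and are
not part of the patch.

#include <linux/smp.h>
#include <asm/barrier.h>
#include <asm/tlbflush.h>

/* Run the non-broadcast flush on the CPU that received the IPI. */
static void do_local_flush(void *unused)
{
	local_flush_tlb_all();
}

static void flush_all_by_ipi(void)
{
	/* Make prior page-table updates visible before any CPU flushes. */
	dsb(ishst);
	/* wait=1: return only once every CPU has completed its local flush. */
	on_each_cpu(do_local_flush, NULL, 1);
}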
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Matthias Brugger <mbrugger@suse.com>
---
 arch/arm64/include/asm/cpufeature.h |  3 +-
 arch/arm64/include/asm/tlbflush.h   | 94 ++++++++++++++++++++++++++++++++-----
 arch/arm64/mm/flush.c               | 46 ++++++++++++++++++
 3 files changed, 129 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 49dd1bd..c4bf72b 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -36,8 +36,9 @@
 #define ARM64_HAS_VIRT_HOST_EXTN		11
 #define ARM64_WORKAROUND_CAVIUM_27456		12
 #define ARM64_HAS_32BIT_EL0			13
+#define ARM64_HAS_NO_BCAST_TLBI		14
 
-#define ARM64_NCAPS				14
+#define ARM64_NCAPS				15
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index b460ae2..edc5495 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -71,7 +71,10 @@ static inline void local_flush_tlb_all(void)
 	isb();
 }
 
-static inline void flush_tlb_all(void)
+void __flush_tlb_all_ipi(void);
+void __flush_tlb_mm_ipi(struct mm_struct *mm);
+
+static inline void __flush_tlb_all_tlbi(void)
 {
 	dsb(ishst);
 	asm("tlbi	vmalle1is");
@@ -79,7 +82,17 @@ static inline void flush_tlb_all(void)
 	isb();
 }
 
-static inline void flush_tlb_mm(struct mm_struct *mm)
+static inline void flush_tlb_all(void)
+{
+	if (cpus_have_cap(ARM64_HAS_NO_BCAST_TLBI)) {
+		__flush_tlb_all_ipi();
+		return;
+	}
+
+	__flush_tlb_all_tlbi();
+}
+
+static inline void __flush_tlb_mm_tlbi(struct mm_struct *mm)
 {
 	unsigned long asid = ASID(mm) << 48;
 
@@ -88,8 +101,18 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 	dsb(ish);
 }
 
-static inline void flush_tlb_page(struct vm_area_struct *vma,
-				  unsigned long uaddr)
+static inline void flush_tlb_mm(struct mm_struct *mm)
+{
+	if (cpus_have_cap(ARM64_HAS_NO_BCAST_TLBI)) {
+		__flush_tlb_mm_ipi(mm);
+		return;
+	}
+
+	__flush_tlb_mm_tlbi(mm);
+}
+
+static inline void __flush_tlb_page_tlbi(struct vm_area_struct *vma,
+					 unsigned long uaddr)
 {
 	unsigned long addr = uaddr >> 12 | (ASID(vma->vm_mm) << 48);
 
@@ -98,15 +121,26 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
 	dsb(ish);
 }
 
+static inline void flush_tlb_page(struct vm_area_struct *vma,
+				  unsigned long uaddr)
+{
+	if (cpus_have_cap(ARM64_HAS_NO_BCAST_TLBI)) {
+		__flush_tlb_mm_ipi(vma->vm_mm);
+		return;
+	}
+
+	__flush_tlb_page_tlbi(vma, uaddr);
+}
+
 /*
  * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
  * necessarily a performance improvement.
  */
 #define MAX_TLB_RANGE	(1024UL << PAGE_SHIFT)
 
-static inline void __flush_tlb_range(struct vm_area_struct *vma,
-				     unsigned long start, unsigned long end,
-				     bool last_level)
+static inline void __flush_tlb_range_tlbi(struct vm_area_struct *vma,
+					  unsigned long start, unsigned long end,
+					  bool last_level)
 {
 	unsigned long asid = ASID(vma->vm_mm) << 48;
 	unsigned long addr;
@@ -129,13 +163,26 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ish);
 }
 
+static inline void __flush_tlb_range(struct vm_area_struct *vma,
+				     unsigned long start, unsigned long end,
+				     bool last_level)
+{
+	if (cpus_have_cap(ARM64_HAS_NO_BCAST_TLBI)) {
+		__flush_tlb_mm_ipi(vma->vm_mm);
+		return;
+	}
+
+	__flush_tlb_range_tlbi(vma, start, end, last_level);
+}
+
 static inline void flush_tlb_range(struct vm_area_struct *vma,
-				   unsigned long start, unsigned long end)
+				unsigned long start, unsigned long end)
 {
 	__flush_tlb_range(vma, start, end, false);
 }
 
-static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+static inline void __flush_tlb_kernel_range_tlbi(unsigned long start,
+						 unsigned long end)
 {
 	unsigned long addr;
 
@@ -154,17 +201,38 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 	isb();
 }
 
+static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+	if (cpus_have_cap(ARM64_HAS_NO_BCAST_TLBI)) {
+		__flush_tlb_all_ipi();
+		return;
+	}
+
+	__flush_tlb_kernel_range_tlbi(start, end);
+}
+
+static inline void __flush_tlb_pgtable_tlbi(struct mm_struct *mm,
+					    unsigned long uaddr)
+{
+	unsigned long addr = uaddr >> 12 | (ASID(mm) << 48);
+
+	asm("tlbi	vae1is, %0" : : "r" (addr));
+	dsb(ish);
+}
+
 /*
  * Used to invalidate the TLB (walk caches) corresponding to intermediate page
  * table levels (pgd/pud/pmd).
  */
 static inline void __flush_tlb_pgtable(struct mm_struct *mm,
-				       unsigned long uaddr)
+				unsigned long uaddr)
 {
-	unsigned long addr = uaddr >> 12 | (ASID(mm) << 48);
+	if (cpus_have_cap(ARM64_HAS_NO_BCAST_TLBI)) {
+		__flush_tlb_mm_ipi(mm);
+		return;
+	}
 
-	asm("tlbi	vae1is, %0" : : "r" (addr));
-	dsb(ish);
+	__flush_tlb_pgtable_tlbi(mm, uaddr);
 }
 
 #endif
diff --git a/arch/arm64/mm/flush.c b/arch/arm64/mm/flush.c
index 43a76b0..402036a 100644
--- a/arch/arm64/mm/flush.c
+++ b/arch/arm64/mm/flush.c
@@ -27,6 +27,24 @@
 
 #include "mm.h"
 
+static void flush_tlb_local(void *info)
+{
+	local_flush_tlb_all();
+}
+
+static void flush_tlb_mm_local(void *info)
+{
+	unsigned long asid = (unsigned long)info;
+
+	asm volatile("\n"
+	"	dsb	nshst\n"
+	"	tlbi	aside1, %0\n"
+	"	dsb	nsh\n"
+	"	isb	sy"
+	: : "r" (asid)
+	);
+}
+
 void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
 		       unsigned long end)
 {
@@ -90,6 +108,34 @@ void flush_dcache_page(struct page *page)
 }
 EXPORT_SYMBOL(flush_dcache_page);
 
+void __flush_tlb_mm_ipi(struct mm_struct *mm)
+{
+	unsigned long asid;
+
+	if (!mm) {
+		flush_tlb_all();
+	} else {
+		asid = ASID(mm) << 48;
+		/* Make sure page table modifications are visible. */
+		dsb(ishst);
+		/* IPI to all CPUs to do local flush. */
+		on_each_cpu(flush_tlb_mm_local, (void *)asid, 1);
+	}
+}
+EXPORT_SYMBOL(__flush_tlb_mm_ipi);
+
+void __flush_tlb_all_ipi(void)
+{
+	/* Make sure page table modifications are visible. */
+	dsb(ishst);
+	if (num_online_cpus() <= 1)
+		local_flush_tlb_all();
+	else
+		/* IPI to all CPUs to do local flush. */
+		on_each_cpu(flush_tlb_local, NULL, 1);
+}
+EXPORT_SYMBOL(__flush_tlb_all_ipi);
+
 /*
  * Additional functions defined in assembly.
  */
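One design consequence is visible in the diff rather than stated in the
commit message: when ARM64_HAS_NO_BCAST_TLBI is set, the page-, range- and
pgtable-granular paths are all funnelled into __flush_tlb_mm_ipi(), i.e. a
whole-ASID flush of the owning mm performed locally on every CPU. A minimal
sketch of that behaviour follows; the caller name is hypothetical and the
snippet is not part of the patch.

#include <linux/mm_types.h>
#include <asm/tlbflush.h>

/*
 * Sketch only: with ARM64_HAS_NO_BCAST_TLBI set, even a single-page
 * flush is widened to a whole-ASID flush of the owning mm via IPI.
 */
static void example_single_page_flush(struct vm_area_struct *vma,
				      unsigned long uaddr)
{
	flush_tlb_page(vma, uaddr);	/* -> __flush_tlb_mm_ipi(vma->vm_mm) */
}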