From patchwork Thu Aug 30 16:15:44 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 10582263
From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH 10/12] arm64: tlb: Adjust stride and type of TLBI according to mmu_gather
Date: Thu, 30 Aug 2018 17:15:44 +0100
Message-Id: <1535645747-9823-11-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
References: <1535645747-9823-1-git-send-email-will.deacon@arm.com>
Cc: benh@au1.ibm.com, peterz@infradead.org, catalin.marinas@arm.com,
    Will Deacon <will.deacon@arm.com>, npiggin@gmail.com,
    torvalds@linux-foundation.org, linux-arm-kernel@lists.infradead.org

Now that the core mmu_gather code keeps track of both the levels of page
table cleared and also whether or not these entries correspond to
intermediate entries, we can use this in our tlb_flush() callback to
reduce the number of invalidations we issue as well as their scope.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/tlb.h | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index bd00017d529a..b078fdec10d5 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -34,20 +34,21 @@ static void tlb_flush(struct mmu_gather *tlb);
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
 	struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);
+	bool last_level = !tlb->freed_tables;
+	unsigned long stride = tlb_get_unmap_size(tlb);
 
 	/*
-	 * The ASID allocator will either invalidate the ASID or mark
-	 * it as used.
+	 * If we're tearing down the address space then we only care about
+	 * invalidating the walk-cache, since the ASID allocator won't
+	 * reallocate our ASID without invalidating the entire TLB.
 	 */
-	if (tlb->fullmm)
+	if (tlb->fullmm) {
+		if (!last_level)
+			flush_tlb_mm(tlb->mm);
 		return;
+	}
 
-	/*
-	 * The intermediate page table levels are already handled by
-	 * the __(pte|pmd|pud)_free_tlb() functions, so last level
-	 * TLBI is sufficient here.
-	 */
-	__flush_tlb_range(&vma, tlb->start, tlb->end, PAGE_SIZE, true);
+	__flush_tlb_range(&vma, tlb->start, tlb->end, stride, last_level);
 }
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
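
For context, the stride passed to __flush_tlb_range() above comes from the
tlb_get_unmap_size() helper introduced by the core mmu_gather patches earlier
in this series, which picks the smallest page-table level that was actually
cleared, while last_level is simply the absence of freed page tables. A
minimal, standalone sketch of that selection logic follows; the cleared_* and
freed_tables field names are assumed from the generic mmu_gather changes, and
the shift constants are illustrative rather than the kernel's definitions.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative shifts for a 4K-granule configuration (assumption). */
#define PAGE_SHIFT	12
#define PMD_SHIFT	21
#define PUD_SHIFT	30

/* Cut-down model of the mmu_gather tracking bits consumed by tlb_flush(). */
struct mmu_gather_model {
	unsigned int cleared_ptes : 1;
	unsigned int cleared_pmds : 1;
	unsigned int cleared_puds : 1;
	unsigned int freed_tables : 1;
};

/* The smallest level cleared determines the TLBI stride. */
static unsigned long unmap_shift(const struct mmu_gather_model *tlb)
{
	if (tlb->cleared_ptes)
		return PAGE_SHIFT;
	if (tlb->cleared_pmds)
		return PMD_SHIFT;
	if (tlb->cleared_puds)
		return PUD_SHIFT;
	return PAGE_SHIFT;
}

int main(void)
{
	/* e.g. a 2M huge-page unmap that did not free any page tables */
	struct mmu_gather_model tlb = { .cleared_pmds = 1 };
	unsigned long stride = 1UL << unmap_shift(&tlb);

	/* No intermediate tables freed, so a last-level-only TLBI is enough. */
	bool last_level = !tlb.freed_tables;

	printf("stride=%lu bytes, last_level=%d\n", stride, last_level);
	return 0;
}

In the real kernel these decisions live with struct mmu_gather in the generic
tlb code; the model above only illustrates why, after this patch, a 2M unmap
can be invalidated with a 2M stride and a leaf-only TLBI instead of one 4K
last-level invalidation per page.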