From patchwork Wed Sep 26 11:36:26 2018
X-Patchwork-Submitter: Peter Zijlstra <peterz@infradead.org>
X-Patchwork-Id: 10615785
Message-ID: <20180926114800.666150500@infradead.org>
User-Agent: quilt/0.65
Date: Wed, 26 Sep 2018 13:36:26 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com,
    akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, peterz@infradead.org,
    linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com,
    Dave Hansen
Subject: [PATCH 03/18] x86/mm: Page size aware flush_tlb_mm_range()
References: <20180926113623.863696043@infradead.org>
MIME-Version: 1.0

Use the new tlb_get_unmap_shift() to determine the stride of the
INVLPG loop.
Cc: Nick Piggin
Cc: Andrew Morton
Cc: "Aneesh Kumar K.V"
Cc: Will Deacon
Cc: Dave Hansen
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/x86/include/asm/tlb.h      |   21 ++++++++++++++-------
 arch/x86/include/asm/tlbflush.h |   12 ++++++++----
 arch/x86/mm/tlb.c               |   17 ++++++++---------
 mm/pgtable-generic.c            |    1 +
 4 files changed, 31 insertions(+), 20 deletions(-)

--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -6,16 +6,23 @@
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
 
-#define tlb_flush(tlb)							\
-{									\
-	if (!tlb->fullmm && !tlb->need_flush_all)			\
-		flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end, 0UL);	\
-	else								\
-		flush_tlb_mm_range(tlb->mm, 0UL, TLB_FLUSH_ALL, 0UL);	\
-}
+static inline void tlb_flush(struct mmu_gather *tlb);
 
 #include <asm-generic/tlb.h>
 
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	unsigned long start = 0UL, end = TLB_FLUSH_ALL;
+	unsigned int stride_shift = tlb_get_unmap_shift(tlb);
+
+	if (!tlb->fullmm && !tlb->need_flush_all) {
+		start = tlb->start;
+		end = tlb->end;
+	}
+
+	flush_tlb_mm_range(tlb->mm, start, end, stride_shift);
+}
+
 /*
  * While x86 architecture in general requires an IPI to perform TLB
  * shootdown, enablement code for several hypervisors overrides
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -547,23 +547,27 @@ struct flush_tlb_info {
 	unsigned long		start;
 	unsigned long		end;
 	u64			new_tlb_gen;
+	unsigned int		stride_shift;
 };
 
 #define local_flush_tlb() __flush_tlb()
 
 #define flush_tlb_mm(mm)	flush_tlb_mm_range(mm, 0UL, TLB_FLUSH_ALL, 0UL)
 
-#define flush_tlb_range(vma, start, end)	\
-		flush_tlb_mm_range(vma->vm_mm, start, end, vma->vm_flags)
+#define flush_tlb_range(vma, start, end)				\
+	flush_tlb_mm_range((vma)->vm_mm, start, end,			\
+			   ((vma)->vm_flags & VM_HUGETLB)		\
+			   ? huge_page_shift(hstate_vma(vma))		\
+			   : PAGE_SHIFT)
 
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
-				unsigned long end, unsigned long vmflag);
+				unsigned long end, unsigned int stride_shift);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
 static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 {
-	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, VM_NONE);
+	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT);
 }
 
 void native_flush_tlb_others(const struct cpumask *cpumask,
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -528,17 +528,16 @@ static void flush_tlb_func_common(const
 		    f->new_tlb_gen == local_tlb_gen + 1 &&
 		    f->new_tlb_gen == mm_tlb_gen) {
 			/* Partial flush */
-			unsigned long addr;
-			unsigned long nr_pages = (f->end - f->start) >> PAGE_SHIFT;
+			unsigned long nr_invalidate = (f->end - f->start) >> f->stride_shift;
+			unsigned long addr = f->start;
 
-			addr = f->start;
 			while (addr < f->end) {
 				__flush_tlb_one_user(addr);
-				addr += PAGE_SIZE;
+				addr += 1UL << f->stride_shift;
 			}
 			if (local)
-				count_vm_tlb_events(NR_TLB_LOCAL_FLUSH_ONE, nr_pages);
-			trace_tlb_flush(reason, nr_pages);
+				count_vm_tlb_events(NR_TLB_LOCAL_FLUSH_ONE, nr_invalidate);
+			trace_tlb_flush(reason, nr_invalidate);
 		} else {
 			/* Full flush. */
 			local_flush_tlb();
@@ -623,12 +622,13 @@ void native_flush_tlb_others(const struc
 static unsigned long tlb_single_page_flush_ceiling __read_mostly = 33;
 
 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
-				unsigned long end, unsigned long vmflag)
+				unsigned long end, unsigned int stride_shift)
 {
 	int cpu;
 
 	struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
 		.mm = mm,
+		.stride_shift = stride_shift,
 	};
 
 	cpu = get_cpu();
@@ -638,8 +638,7 @@ void flush_tlb_mm_range(struct mm_struct
 	/* Should we flush just the requested range? */
 	if ((end != TLB_FLUSH_ALL) &&
-	    !(vmflag & VM_HUGETLB) &&
-	    ((end - start) >> PAGE_SHIFT) <= tlb_single_page_flush_ceiling) {
+	    ((end - start) >> stride_shift) <= tlb_single_page_flush_ceiling) {
 		info.start = start;
 		info.end = end;
 	} else {
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -8,6 +8,7 @@
  */
 
 #include <linux/pagemap.h>
+#include <linux/hugetlb.h>
 #include <asm/tlb.h>
 #include <asm-generic/pgtable.h>