From patchwork Thu Sep 13 09:21:13 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10599091
Message-ID: <20180913092812.012757318@infradead.org>
User-Agent: quilt/0.65
Date: Thu, 13 Sep 2018 11:21:13 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com,
 akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk,
 heiko.carstens@de.ibm.com, Dave Hansen
Subject: [RFC][PATCH 03/11] x86/mm: Page size aware flush_tlb_mm_range()
References: <20180913092110.817204997@infradead.org>

Use the new tlb_get_unmap_shift() to
determine the stride of the INVLPG loop.

Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: Dave Hansen
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/x86/include/asm/tlb.h      | 21 ++++++++++++++-------
 arch/x86/include/asm/tlbflush.h | 10 ++++++----
 arch/x86/mm/tlb.c               | 10 +++++-----
 3 files changed, 25 insertions(+), 16 deletions(-)

--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -6,16 +6,23 @@
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
 
-#define tlb_flush(tlb)							\
-{									\
-	if (!tlb->fullmm && !tlb->need_flush_all)			\
-		flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end, 0UL);	\
-	else								\
-		flush_tlb_mm_range(tlb->mm, 0UL, TLB_FLUSH_ALL, 0UL);	\
-}
+static inline void tlb_flush(struct mmu_gather *tlb);
 
 #include <asm-generic/tlb.h>
 
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	unsigned long start = 0UL, end = TLB_FLUSH_ALL;
+	unsigned int invl_shift = tlb_get_unmap_shift(tlb);
+
+	if (!tlb->fullmm && !tlb->need_flush_all) {
+		start = tlb->start;
+		end = tlb->end;
+	}
+
+	flush_tlb_mm_range(tlb->mm, start, end, invl_shift);
+}
+
 /*
  * While x86 architecture in general requires an IPI to perform TLB
  * shootdown, enablement code for several hypervisors overrides
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -507,23 +507,25 @@ struct flush_tlb_info {
 	unsigned long		start;
 	unsigned long		end;
 	u64			new_tlb_gen;
+	unsigned int		invl_shift;
 };
 
 #define local_flush_tlb() __flush_tlb()
 
 #define flush_tlb_mm(mm) flush_tlb_mm_range(mm, 0UL, TLB_FLUSH_ALL, 0UL)
 
-#define flush_tlb_range(vma, start, end)	\
-		flush_tlb_mm_range(vma->vm_mm, start, end, vma->vm_flags)
+#define flush_tlb_range(vma, start, end)				\
+	flush_tlb_mm_range((vma)->vm_mm, start, end,			\
+			   (vma)->vm_flags & VM_HUGETLB ? PMD_SHIFT : PAGE_SHIFT)
 
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
-				unsigned long end, unsigned long vmflag);
+				unsigned long end, unsigned int invl_shift);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
 static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 {
-	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, VM_NONE);
+	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT);
 }
 
 void native_flush_tlb_others(const struct cpumask *cpumask,
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -522,12 +522,12 @@ static void flush_tlb_func_common(const
 	    f->new_tlb_gen == mm_tlb_gen) {
 		/* Partial flush */
 		unsigned long addr;
-		unsigned long nr_pages = (f->end - f->start) >> PAGE_SHIFT;
+		unsigned long nr_pages = (f->end - f->start) >> f->invl_shift;
 
 		addr = f->start;
 		while (addr < f->end) {
 			__flush_tlb_one_user(addr);
-			addr += PAGE_SIZE;
+			addr += 1UL << f->invl_shift;
 		}
 		if (local)
 			count_vm_tlb_events(NR_TLB_LOCAL_FLUSH_ONE, nr_pages);
@@ -616,12 +616,13 @@ void native_flush_tlb_others(const struc
 static unsigned long tlb_single_page_flush_ceiling __read_mostly = 33;
 
 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
-				unsigned long end, unsigned long vmflag)
+				unsigned long end, unsigned int invl_shift)
 {
 	int cpu;
 
 	struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
 		.mm = mm,
+		.invl_shift = invl_shift,
 	};
 
 	cpu = get_cpu();
@@ -631,8 +632,7 @@ void flush_tlb_mm_range(struct mm_struct
 
 	/* Should we flush just the requested range? */
 	if ((end != TLB_FLUSH_ALL) &&
-	    !(vmflag & VM_HUGETLB) &&
-	    ((end - start) >> PAGE_SHIFT) <= tlb_single_page_flush_ceiling) {
+	    ((end - start) >> invl_shift) <= tlb_single_page_flush_ceiling) {
		info.start = start;
		info.end = end;
	} else {
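
To see what the invl_shift parameter buys, consider flushing an 8 MiB
THP-backed range: at a 4 KiB stride that is 2048 INVLPGs, far over the
tlb_single_page_flush_ceiling of 33, so a PAGE_SHIFT caller (like the old
code) degrades to a full TLB flush; at a 2 MiB stride it is only 4 INVLPGs
and stays a cheap targeted flush. Below is a minimal user-space sketch of
that sizing decision. It only mirrors the heuristic in the last hunk above;
the *_demo names, the hard-coded shift values, and main() are illustrative
assumptions, not kernel API.

#include <stdio.h>

#define PAGE_SHIFT_DEMO		12	/* 4 KiB base pages */
#define PMD_SHIFT_DEMO		21	/* 2 MiB huge pages on x86-64 */
#define TLB_FLUSH_ALL_DEMO	(~0UL)

static unsigned long flush_ceiling_demo = 33;

/*
 * Mirrors the patched flush_tlb_mm_range() decision: return the number
 * of INVLPGs a targeted flush would issue, or 0 when the range is too
 * large at this stride and a full TLB flush is used instead.
 */
static unsigned long invlpg_count_demo(unsigned long start,
				       unsigned long end,
				       unsigned int invl_shift)
{
	unsigned long nr = (end - start) >> invl_shift;

	if (end != TLB_FLUSH_ALL_DEMO && nr <= flush_ceiling_demo)
		return nr;
	return 0;
}

int main(void)
{
	unsigned long start = 0, end = 8UL << 20;	/* 8 MiB range */

	printf("4K stride: %lu INVLPGs\n",
	       invlpg_count_demo(start, end, PAGE_SHIFT_DEMO));
	printf("2M stride: %lu INVLPGs\n",
	       invlpg_count_demo(start, end, PMD_SHIFT_DEMO));
	return 0;
}

Built with a plain cc, this prints 0 (meaning full flush) for the 4 KiB
stride and 4 for the 2 MiB stride, which is exactly why tlb_flush() now
feeds the shift from tlb_get_unmap_shift() into flush_tlb_mm_range().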