From patchwork Wed Sep 26 11:36:36 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615801
Message-ID: <20180926114801.199256189@infradead.org>
User-Agent: quilt/0.65
Date: Wed, 26 Sep 2018 13:36:36 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com,
    akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, peterz@infradead.org,
    linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com,
    Linus Torvalds, Martin Schwidefsky
Subject: [PATCH 13/18] asm-generic/tlb: Introduce HAVE_MMU_GATHER_NO_GATHER
References: <20180926113623.863696043@infradead.org>
From: Martin Schwidefsky

Add the Kconfig option HAVE_MMU_GATHER_NO_GATHER to the generic
mmu_gather code. If the option is set, the mmu_gather no longer tracks
individual pages for delayed page freeing. A platform that enables the
option must provide its own implementation of __tlb_remove_page_size()
to free pages (see the illustrative sketch after the patch below).

Cc: npiggin@gmail.com
Cc: heiko.carstens@de.ibm.com
Cc: will.deacon@arm.com
Cc: aneesh.kumar@linux.vnet.ibm.com
Cc: akpm@linux-foundation.org
Cc: Linus Torvalds
Cc: linux@armlinux.org.uk
Signed-off-by: Martin Schwidefsky
Signed-off-by: Peter Zijlstra (Intel)
Link: http://lkml.kernel.org/r/20180918125151.31744-2-schwidefsky@de.ibm.com
---
 arch/Kconfig              |   3 +
 include/asm-generic/tlb.h |   9 +++
 mm/mmu_gather.c           | 107 +++++++++++++++++++++++++---------------------
 3 files changed, 70 insertions(+), 49 deletions(-)

--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -368,6 +368,9 @@ config HAVE_RCU_TABLE_NO_INVALIDATE
 config HAVE_MMU_GATHER_PAGE_SIZE
 	bool
 
+config HAVE_MMU_GATHER_NO_GATHER
+	bool
+
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
 	bool
 
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -184,6 +184,7 @@ extern void tlb_remove_table(struct mmu_
 
 #endif
 
+#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
 /*
  * If we can't allocate a page to make a big batch of page pointers
  * to work on, then just handle a few from the on-stack structure.
@@ -208,6 +209,10 @@ struct mmu_gather_batch {
  */
 #define MAX_GATHER_BATCH_COUNT	(10000UL/MAX_GATHER_BATCH)
 
+extern bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
+				   int page_size);
+#endif
+
 /*
  * struct mmu_gather is an opaque type used by the mm code for passing around
  * any data needed by arch specific code for tlb_remove_page.
@@ -254,6 +259,7 @@ struct mmu_gather {
 
 	unsigned int		batch_count;
 
+#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch	local;
 	struct page		*__pages[MMU_GATHER_BUNDLE];
@@ -261,6 +267,7 @@ struct mmu_gather {
 #ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
 	unsigned int page_size;
 #endif
+#endif
 };
 
 void arch_tlb_gather_mmu(struct mmu_gather *tlb,
@@ -269,8 +276,6 @@ void tlb_flush_mmu(struct mmu_gather *tl
 void arch_tlb_finish_mmu(struct mmu_gather *tlb,
 			 unsigned long start, unsigned long end, bool force);
 void tlb_flush_mmu_free(struct mmu_gather *tlb);
-extern bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
-				   int page_size);
 
 static inline void __tlb_adjust_range(struct mmu_gather *tlb,
 				      unsigned long address,
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -13,6 +13,8 @@
 
 #ifdef HAVE_GENERIC_MMU_GATHER
 
+#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
+
 static bool tlb_next_batch(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
@@ -41,6 +43,56 @@ static bool tlb_next_batch(struct mmu_ga
 	return true;
 }
 
+static void tlb_batch_pages_flush(struct mmu_gather *tlb)
+{
+	struct mmu_gather_batch *batch;
+
+	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
+		free_pages_and_swap_cache(batch->pages, batch->nr);
+		batch->nr = 0;
+	}
+	tlb->active = &tlb->local;
+}
+
+static void tlb_batch_list_free(struct mmu_gather *tlb)
+{
+	struct mmu_gather_batch *batch, *next;
+
+	for (batch = tlb->local.next; batch; batch = next) {
+		next = batch->next;
+		free_pages((unsigned long)batch, 0);
+	}
+	tlb->local.next = NULL;
+}
+
+bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_size)
+{
+	struct mmu_gather_batch *batch;
+
+	VM_BUG_ON(!tlb->end);
+
+#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
+	VM_WARN_ON(tlb->page_size != page_size);
+#endif
+
+	batch = tlb->active;
+	/*
+	 * Add the page and check if we are full. If so
+	 * force a flush.
+	 */
+	batch->pages[batch->nr++] = page;
+	if (batch->nr == batch->max) {
+		if (!tlb_next_batch(tlb))
+			return true;
+		batch = tlb->active;
+	}
+	VM_BUG_ON_PAGE(batch->nr > batch->max, page);
+
+	return false;
+}
+
+#endif /* HAVE_MMU_GATHER_NO_GATHER */
+
 void arch_tlb_gather_mmu(struct mmu_gather *tlb,
 	struct mm_struct *mm, unsigned long start, unsigned long end)
 {
@@ -48,12 +100,15 @@ void arch_tlb_gather_mmu(struct mmu_gath
 
 	/* Is it from 0 to ~0? */
 	tlb->fullmm     = !(start | (end+1));
+
+#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
 	tlb->need_flush_all = 0;
 	tlb->local.next = NULL;
 	tlb->local.nr   = 0;
 	tlb->local.max  = ARRAY_SIZE(tlb->__pages);
 	tlb->active     = &tlb->local;
 	tlb->batch_count = 0;
+#endif
 
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb->batch = NULL;
@@ -67,16 +122,12 @@ void arch_tlb_gather_mmu(struct mmu_gath
 
 void tlb_flush_mmu_free(struct mmu_gather *tlb)
 {
-	struct mmu_gather_batch *batch;
-
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb_table_flush(tlb);
 #endif
-	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
-		free_pages_and_swap_cache(batch->pages, batch->nr);
-		batch->nr = 0;
-	}
-	tlb->active = &tlb->local;
+#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
+	tlb_batch_pages_flush(tlb);
+#endif
 }
 
 void tlb_flush_mmu(struct mmu_gather *tlb)
@@ -92,8 +143,6 @@ void tlb_flush_mmu(struct mmu_gather *tl
 void arch_tlb_finish_mmu(struct mmu_gather *tlb,
 		unsigned long start, unsigned long end, bool force)
 {
-	struct mmu_gather_batch *batch, *next;
-
 	if (force) {
 		__tlb_reset_range(tlb);
 		__tlb_adjust_range(tlb, start, end - start);
@@ -103,45 +152,9 @@ void arch_tlb_finish_mmu(struct mmu_gath
 
 	/* keep the page table cache within bounds */
 	check_pgt_cache();
-
-	for (batch = tlb->local.next; batch; batch = next) {
-		next = batch->next;
-		free_pages((unsigned long)batch, 0);
-	}
-	tlb->local.next = NULL;
-}
-
-/* __tlb_remove_page
- *	Must perform the equivalent to __free_pte(pte_get_and_clear(ptep)), while
- *	handling the additional races in SMP caused by other CPUs caching valid
- *	mappings in their TLBs. Returns the number of free page slots left.
- *	When out of page slots we must call tlb_flush_mmu().
- *returns true if the caller should flush.
- */
-bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_size)
-{
-	struct mmu_gather_batch *batch;
-
-	VM_BUG_ON(!tlb->end);
-
-#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
-	VM_WARN_ON(tlb->page_size != page_size);
+#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
+	tlb_batch_list_free(tlb);
 #endif
-
-	batch = tlb->active;
-	/*
-	 * Add the page and check if we are full. If so
-	 * force a flush.
-	 */
-	batch->pages[batch->nr++] = page;
-	if (batch->nr == batch->max) {
-		if (!tlb_next_batch(tlb))
-			return true;
-		batch = tlb->active;
-	}
-	VM_BUG_ON_PAGE(batch->nr > batch->max, page);
-
-	return false;
-}
 
 #endif /* HAVE_GENERIC_MMU_GATHER */
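
For reference, a minimal sketch of the per-arch hook this option demands.
This is not part of the patch: the name and signature come from the
declaration the patch moves under the #ifndef above, and the body is an
assumption modeled on how s390 (the platform this option was written for)
handles page removal. It presumes the architecture has already invalidated
the TLB entry by the time a page reaches this function, so the page can be
freed immediately instead of being queued in a gather batch:

/*
 * Illustrative sketch only, placed in the arch's asm/tlb.h.
 * Valid only if the arch flushes the TLB per PTE at unmap time,
 * so no other CPU can still hold a translation to this page.
 */
static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
					  struct page *page, int page_size)
{
	/* No gather batching: release the page and its swap cache entry now. */
	free_page_and_swap_cache(page);

	/* Nothing is queued, so never ask the caller to flush early. */
	return false;
}

Because nothing is ever batched, this always returns false: the generic
code only forces a tlb_flush_mmu() when a gather batch fills up, and with
HAVE_MMU_GATHER_NO_GATHER that condition can never arise.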