From patchwork Tue Sep  4 11:45:29 2018
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 10587199
From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, npiggin@gmail.com, linux-mm@kvack.org, kirill.shutemov@linux.intel.com, akpm@linux-foundation.org, mhocko@suse.com, aneesh.kumar@linux.vnet.ibm.com
Subject: [PATCH v2 1/5] asm-generic/tlb: Guard with #ifdef CONFIG_MMU
Date: Tue, 4 Sep 2018 12:45:29 +0100
Message-Id: <1536061533-16188-2-git-send-email-will.deacon@arm.com>
In-Reply-To: <1536061533-16188-1-git-send-email-will.deacon@arm.com>
References: <1536061533-16188-1-git-send-email-will.deacon@arm.com>

The inner workings of the mmu_gather-based TLB invalidation mechanism
are not relevant to nommu configurations, so guard them with an #ifdef.
This allows us to implement future functions using static inlines
without breaking the build.

Acked-by: Nicholas Piggin
Acked-by: Peter Zijlstra (Intel)
Signed-off-by: Will Deacon
---
 include/asm-generic/tlb.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index b3353e21f3b3..a25e236f7a7f 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -20,6 +20,8 @@
 #include
 #include
 
+#ifdef CONFIG_MMU
+
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 /*
  * Semi RCU freeing of the page directories.
@@ -310,6 +312,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #endif
 #endif
 
+#endif /* CONFIG_MMU */
+
 #define tlb_migrate_finish(mm) do {} while (0)
 
 #endif /* _ASM_GENERIC__TLB_H */
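The guard pattern above can be illustrated with a standalone sketch. Here `CONFIG_MMU` is an ordinary macro standing in for the Kconfig symbol, and the struct and helper are stand-ins rather than the real header contents; the point is that an MMU-only helper can later be paired with an empty nommu stub without breaking either build:

```c
#include <assert.h>

/* Stand-in for the Kconfig symbol; an MMU build is assumed in this sketch. */
#define CONFIG_MMU 1

#ifdef CONFIG_MMU
/* MMU build: the real tracking structure and helpers live inside the guard. */
struct mmu_gather {
	unsigned long start;
	unsigned long end;
};

static inline void tlb_gather_init(struct mmu_gather *tlb)
{
	tlb->start = ~0UL;	/* empty range: start > end */
	tlb->end = 0;
}
#else
/* nommu build: empty stubs keep callers compiling without the machinery. */
struct mmu_gather { int unused; };
static inline void tlb_gather_init(struct mmu_gather *tlb) { (void)tlb; }
#endif
```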
From patchwork Tue Sep  4 11:45:30 2018
X-Patchwork-Id: 10587203
From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, npiggin@gmail.com, linux-mm@kvack.org, kirill.shutemov@linux.intel.com, akpm@linux-foundation.org, mhocko@suse.com, aneesh.kumar@linux.vnet.ibm.com
Subject: [PATCH v2 2/5] asm-generic/tlb: Track freeing of page-table directories in struct mmu_gather
Date: Tue, 4 Sep 2018 12:45:30 +0100
Message-Id: <1536061533-16188-3-git-send-email-will.deacon@arm.com>
In-Reply-To: <1536061533-16188-1-git-send-email-will.deacon@arm.com>
References: <1536061533-16188-1-git-send-email-will.deacon@arm.com>

From: Peter Zijlstra

Some architectures require different TLB invalidation instructions
depending on whether it is only the last-level of page table being
changed, or whether there are also changes to the intermediate
(directory) entries higher up the tree.

Add a new bit to the flags bitfield in struct mmu_gather so that the
architecture code can operate accordingly if it's the intermediate
levels being invalidated.

Acked-by: Nicholas Piggin
Signed-off-by: Peter Zijlstra
Signed-off-by: Will Deacon
---
 include/asm-generic/tlb.h | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index a25e236f7a7f..2b444ad94566 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -99,12 +99,22 @@ struct mmu_gather {
 #endif
 	unsigned long		start;
 	unsigned long		end;
-	/* we are in the middle of an operation to clear
-	 * a full mm and can make some optimizations */
-	unsigned int		fullmm : 1,
-	/* we have performed an operation which
-	 * requires a complete flush of the tlb */
-				need_flush_all : 1;
+	/*
+	 * we are in the middle of an operation to clear
+	 * a full mm and can make some optimizations
+	 */
+	unsigned int		fullmm : 1;
+
+	/*
+	 * we have performed an operation which
+	 * requires a complete flush of the tlb
+	 */
+	unsigned int		need_flush_all : 1;
+
+	/*
+	 * we have removed page directories
+	 */
+	unsigned int		freed_tables : 1;
 
 	struct mmu_gather_batch	*active;
 	struct mmu_gather_batch	local;
@@ -139,6 +149,7 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 		tlb->start = TASK_SIZE;
 		tlb->end = 0;
 	}
+	tlb->freed_tables = 0;
 }
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
@@ -280,6 +291,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pte_free_tlb(tlb, ptep, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->freed_tables = 1;				\
 		__pte_free_tlb(tlb, ptep, address);		\
 	} while (0)
 #endif
@@ -287,7 +299,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #ifndef pmd_free_tlb
 #define pmd_free_tlb(tlb, pmdp, address)			\
 	do {							\
-		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->freed_tables = 1;				\
 		__pmd_free_tlb(tlb, pmdp, address);		\
 	} while (0)
 #endif
@@ -297,6 +310,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pud_free_tlb(tlb, pudp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->freed_tables = 1;				\
 		__pud_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif
@@ -306,7 +320,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #ifndef p4d_free_tlb
 #define p4d_free_tlb(tlb, pudp, address)			\
 	do {							\
-		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->freed_tables = 1;				\
 		__p4d_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif
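A hypothetical consumer of the new bit can be sketched in a standalone model. The struct layout mirrors the patch, but `pick_flush()` and the two flush kinds are invented for illustration; an architecture's real `tlb_flush()` would issue different invalidation instructions in each case (e.g. last-level-only vs. all-levels):

```c
#include <assert.h>

/* Model of the flags added to struct mmu_gather by this patch. */
struct mmu_gather {
	unsigned long start;
	unsigned long end;
	unsigned int fullmm : 1;
	unsigned int need_flush_all : 1;
	unsigned int freed_tables : 1;	/* new: page-table pages were freed */
};

enum flush_kind { FLUSH_LAST_LEVEL, FLUSH_ALL_LEVELS };

static enum flush_kind pick_flush(const struct mmu_gather *tlb)
{
	/* Freed directories mean intermediate walk-cache entries must go too. */
	if (tlb->freed_tables || tlb->need_flush_all || tlb->fullmm)
		return FLUSH_ALL_LEVELS;
	return FLUSH_LAST_LEVEL;
}
```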
From patchwork Tue Sep  4 11:45:31 2018
X-Patchwork-Id: 10587207
From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, npiggin@gmail.com, linux-mm@kvack.org, kirill.shutemov@linux.intel.com, akpm@linux-foundation.org, mhocko@suse.com, aneesh.kumar@linux.vnet.ibm.com
Subject: [PATCH v2 3/5] asm-generic/tlb: Track which levels of the page tables have been cleared
Date: Tue, 4 Sep 2018 12:45:31 +0100
Message-Id: <1536061533-16188-4-git-send-email-will.deacon@arm.com>
In-Reply-To: <1536061533-16188-1-git-send-email-will.deacon@arm.com>
References: <1536061533-16188-1-git-send-email-will.deacon@arm.com>

It is common for architectures with hugepage support to require only a
single TLB invalidation operation per hugepage during unmap(), rather
than iterating
through the mapping at a PAGE_SIZE increment. Currently,
however, the level in the page table where the unmap() operation occurs
is not stored in the mmu_gather structure, therefore forcing
architectures to issue additional TLB invalidation operations or to give
up and over-invalidate by e.g. invalidating the entire TLB.

Ideally, we could add an interval rbtree to the mmu_gather structure,
which would allow us to associate the correct mapping granule with the
various sub-mappings within the range being invalidated. However, this
is costly in terms of book-keeping and memory management, so instead we
approximate by keeping track of the page table levels that are cleared
and provide a means to query the smallest granule required for
invalidation.

Acked-by: Peter Zijlstra (Intel)
Acked-by: Nicholas Piggin
Signed-off-by: Will Deacon
---
 include/asm-generic/tlb.h | 58 ++++++++++++++++++++++++++++++++++++++++-------
 mm/memory.c               |  4 +++-
 2 files changed, 53 insertions(+), 9 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 2b444ad94566..9791e98122a0 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -116,6 +116,14 @@ struct mmu_gather {
 	 */
 	unsigned int		freed_tables : 1;
 
+	/*
+	 * at which levels have we cleared entries?
+	 */
+	unsigned int		cleared_ptes : 1;
+	unsigned int		cleared_pmds : 1;
+	unsigned int		cleared_puds : 1;
+	unsigned int		cleared_p4ds : 1;
+
 	struct mmu_gather_batch	*active;
 	struct mmu_gather_batch	local;
 	struct page		*__pages[MMU_GATHER_BUNDLE];
@@ -150,6 +158,10 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 		tlb->end = 0;
 	}
 	tlb->freed_tables = 0;
+	tlb->cleared_ptes = 0;
+	tlb->cleared_pmds = 0;
+	tlb->cleared_puds = 0;
+	tlb->cleared_p4ds = 0;
 }
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
@@ -199,6 +211,25 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 }
 #endif
 
+static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
+{
+	if (tlb->cleared_ptes)
+		return PAGE_SHIFT;
+	if (tlb->cleared_pmds)
+		return PMD_SHIFT;
+	if (tlb->cleared_puds)
+		return PUD_SHIFT;
+	if (tlb->cleared_p4ds)
+		return P4D_SHIFT;
+
+	return PAGE_SHIFT;
+}
+
+static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
+{
+	return 1UL << tlb_get_unmap_shift(tlb);
+}
+
 /*
  * In the case of tlb vma handling, we can optimise these away in the
  * case where we're doing a full MM flush. When we're doing a munmap,
@@ -232,13 +263,19 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define tlb_remove_tlb_entry(tlb, ptep, address)		\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
+		tlb->cleared_ptes = 1;				\
 		__tlb_remove_tlb_entry(tlb, ptep, address);	\
 	} while (0)
 
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	     \
-	do {							     \
-		__tlb_adjust_range(tlb, address, huge_page_size(h)); \
-		__tlb_remove_tlb_entry(tlb, ptep, address);	     \
+#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
+	do {							\
+		unsigned long _sz = huge_page_size(h);		\
+		__tlb_adjust_range(tlb, address, _sz);		\
+		if (_sz == PMD_SIZE)				\
+			tlb->cleared_pmds = 1;			\
+		else if (_sz == PUD_SIZE)			\
+			tlb->cleared_puds = 1;			\
+		__tlb_remove_tlb_entry(tlb, ptep, address);	\
 	} while (0)
 
 /**
@@ -252,6 +289,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define tlb_remove_pmd_tlb_entry(tlb, pmdp, address)			\
 	do {								\
 		__tlb_adjust_range(tlb, address, HPAGE_PMD_SIZE);	\
+		tlb->cleared_pmds = 1;					\
 		__tlb_remove_pmd_tlb_entry(tlb, pmdp, address);		\
 	} while (0)
 
@@ -266,6 +304,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define tlb_remove_pud_tlb_entry(tlb, pudp, address)			\
 	do {								\
 		__tlb_adjust_range(tlb, address, HPAGE_PUD_SIZE);	\
+		tlb->cleared_puds = 1;					\
 		__tlb_remove_pud_tlb_entry(tlb, pudp, address);		\
 	} while (0)
 
@@ -291,7 +330,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pte_free_tlb(tlb, ptep, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;				\
+		tlb->freed_tables = 1;				\
+		tlb->cleared_pmds = 1;				\
 		__pte_free_tlb(tlb, ptep, address);		\
 	} while (0)
 #endif
@@ -300,7 +340,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pmd_free_tlb(tlb, pmdp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;				\
+		tlb->freed_tables = 1;				\
+		tlb->cleared_puds = 1;				\
 		__pmd_free_tlb(tlb, pmdp, address);		\
 	} while (0)
 #endif
@@ -310,7 +351,8 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define pud_free_tlb(tlb, pudp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;				\
+		tlb->freed_tables = 1;				\
+		tlb->cleared_p4ds = 1;				\
 		__pud_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif
@@ -321,7 +363,7 @@ static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
 #define p4d_free_tlb(tlb, pudp, address)			\
 	do {							\
 		__tlb_adjust_range(tlb, address, PAGE_SIZE);	\
-		tlb->freed_tables = 1;				\
+		tlb->freed_tables = 1;				\
 		__p4d_free_tlb(tlb, pudp, address);		\
 	} while (0)
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index c467102a5cbc..9135f48e8d84 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -267,8 +267,10 @@ void arch_tlb_finish_mmu(struct mmu_gather *tlb,
 {
 	struct mmu_gather_batch *batch, *next;
 
-	if (force)
+	if (force) {
+		__tlb_reset_range(tlb);
 		__tlb_adjust_range(tlb, start, end - start);
+	}
 
 	tlb_flush_mmu(tlb);
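The query helpers introduced here can be modeled standalone. The priority logic below mirrors the patch (the smallest cleared level wins, since invalidating with too large a stride could skip TLB entries for finer-grained mappings), but the shift constants are illustrative x86-64-style values supplied for the sketch, not taken from the patch:

```c
#include <assert.h>

/* Illustrative shift values (4K pages, 2M PMD, 1G PUD, 512G P4D). */
#define PAGE_SHIFT 12
#define PMD_SHIFT  21
#define PUD_SHIFT  30
#define P4D_SHIFT  39

/* Model of the level-tracking bits added to struct mmu_gather. */
struct mmu_gather {
	unsigned int cleared_ptes : 1;
	unsigned int cleared_pmds : 1;
	unsigned int cleared_puds : 1;
	unsigned int cleared_p4ds : 1;
};

/* Smallest cleared level wins, so the flush stride never overshoots. */
static unsigned long tlb_get_unmap_shift(const struct mmu_gather *tlb)
{
	if (tlb->cleared_ptes)
		return PAGE_SHIFT;
	if (tlb->cleared_pmds)
		return PMD_SHIFT;
	if (tlb->cleared_puds)
		return PUD_SHIFT;
	if (tlb->cleared_p4ds)
		return P4D_SHIFT;
	return PAGE_SHIFT;	/* nothing recorded: be conservative */
}

static unsigned long tlb_get_unmap_size(const struct mmu_gather *tlb)
{
	return 1UL << tlb_get_unmap_shift(tlb);
}
```

An architecture can then stride its per-page invalidation loop by `tlb_get_unmap_size()` instead of always using PAGE_SIZE.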
From patchwork Tue Sep  4 11:45:32 2018
X-Patchwork-Id: 10587201
From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, npiggin@gmail.com, linux-mm@kvack.org, kirill.shutemov@linux.intel.com, akpm@linux-foundation.org, mhocko@suse.com, aneesh.kumar@linux.vnet.ibm.com
Subject: [PATCH v2 4/5] mm/memory: Move mmu_gather and TLB invalidation code into its own file
Date: Tue, 4 Sep 2018 12:45:32 +0100
Message-Id: <1536061533-16188-5-git-send-email-will.deacon@arm.com>
In-Reply-To: <1536061533-16188-1-git-send-email-will.deacon@arm.com>
References: <1536061533-16188-1-git-send-email-will.deacon@arm.com>

From: Peter Zijlstra

In preparation for maintaining the mmu_gather code as its own entity,
move the implementation out of memory.c and into its own file.

Cc: "Kirill A. Shutemov"
Cc: Andrew Morton
Cc: Michal Hocko
Signed-off-by: Peter Zijlstra
---
 include/asm-generic/tlb.h |   1 +
 mm/Makefile               |   6 +-
 mm/memory.c               | 249 --------------------------------------------
 mm/mmu_gather.c           | 259 ++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 263 insertions(+), 252 deletions(-)
 create mode 100644 mm/mmu_gather.c

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 9791e98122a0..6be86c1c5c58 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -138,6 +138,7 @@ void arch_tlb_gather_mmu(struct mmu_gather *tlb,
 void tlb_flush_mmu(struct mmu_gather *tlb);
 void arch_tlb_finish_mmu(struct mmu_gather *tlb,
 			 unsigned long start, unsigned long end, bool force);
+void tlb_flush_mmu_free(struct mmu_gather *tlb);
 
 extern bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
 				   int page_size);
diff --git a/mm/Makefile b/mm/Makefile
index 8716bdabe1e6..7c48e0d3d8ab 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -23,9 +23,9 @@ KCOV_INSTRUMENT_vmstat.o := n
 
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
-			   mlock.o mmap.o mprotect.o mremap.o msync.o \
-			   page_vma_mapped.o pagewalk.o pgtable-generic.o \
-			   rmap.o vmalloc.o
+			   mlock.o mmap.o mmu_gather.o mprotect.o mremap.o \
+			   msync.o page_vma_mapped.o pagewalk.o \
+			   pgtable-generic.o rmap.o vmalloc.o
 
 ifdef CONFIG_CROSS_MEMORY_ATTACH
diff --git a/mm/memory.c b/mm/memory.c
index 9135f48e8d84..21a5e6e4758b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -186,255 +186,6 @@ static void check_sync_rss_stat(struct task_struct *task)
 
 #endif /* SPLIT_RSS_COUNTING */
 
-#ifdef HAVE_GENERIC_MMU_GATHER
-
-static bool tlb_next_batch(struct mmu_gather *tlb)
-{
-	struct mmu_gather_batch *batch;
-
-	batch = tlb->active;
-	if (batch->next) {
-		tlb->active = batch->next;
-		return true;
-	}
-
-	if (tlb->batch_count == MAX_GATHER_BATCH_COUNT)
-		return false;
-
-	batch = (void *)__get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
-	if (!batch)
return false; - - tlb->batch_count++; - batch->next = NULL; - batch->nr = 0; - batch->max = MAX_GATHER_BATCH; - - tlb->active->next = batch; - tlb->active = batch; - - return true; -} - -void arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, - unsigned long start, unsigned long end) -{ - tlb->mm = mm; - - /* Is it from 0 to ~0? */ - tlb->fullmm = !(start | (end+1)); - tlb->need_flush_all = 0; - tlb->local.next = NULL; - tlb->local.nr = 0; - tlb->local.max = ARRAY_SIZE(tlb->__pages); - tlb->active = &tlb->local; - tlb->batch_count = 0; - -#ifdef CONFIG_HAVE_RCU_TABLE_FREE - tlb->batch = NULL; -#endif - tlb->page_size = 0; - - __tlb_reset_range(tlb); -} - -static void tlb_flush_mmu_free(struct mmu_gather *tlb) -{ - struct mmu_gather_batch *batch; - -#ifdef CONFIG_HAVE_RCU_TABLE_FREE - tlb_table_flush(tlb); -#endif - for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { - free_pages_and_swap_cache(batch->pages, batch->nr); - batch->nr = 0; - } - tlb->active = &tlb->local; -} - -void tlb_flush_mmu(struct mmu_gather *tlb) -{ - tlb_flush_mmu_tlbonly(tlb); - tlb_flush_mmu_free(tlb); -} - -/* tlb_finish_mmu - * Called at the end of the shootdown operation to free up any resources - * that were required. - */ -void arch_tlb_finish_mmu(struct mmu_gather *tlb, - unsigned long start, unsigned long end, bool force) -{ - struct mmu_gather_batch *batch, *next; - - if (force) { - __tlb_reset_range(tlb); - __tlb_adjust_range(tlb, start, end - start); - } - - tlb_flush_mmu(tlb); - - /* keep the page table cache within bounds */ - check_pgt_cache(); - - for (batch = tlb->local.next; batch; batch = next) { - next = batch->next; - free_pages((unsigned long)batch, 0); - } - tlb->local.next = NULL; -} - -/* __tlb_remove_page - * Must perform the equivalent to __free_pte(pte_get_and_clear(ptep)), while - * handling the additional races in SMP caused by other CPUs caching valid - * mappings in their TLBs. Returns the number of free page slots left. 
- * When out of page slots we must call tlb_flush_mmu(). - *returns true if the caller should flush. - */ -bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_size) -{ - struct mmu_gather_batch *batch; - - VM_BUG_ON(!tlb->end); - VM_WARN_ON(tlb->page_size != page_size); - - batch = tlb->active; - /* - * Add the page and check if we are full. If so - * force a flush. - */ - batch->pages[batch->nr++] = page; - if (batch->nr == batch->max) { - if (!tlb_next_batch(tlb)) - return true; - batch = tlb->active; - } - VM_BUG_ON_PAGE(batch->nr > batch->max, page); - - return false; -} - -#endif /* HAVE_GENERIC_MMU_GATHER */ - -#ifdef CONFIG_HAVE_RCU_TABLE_FREE - -/* - * See the comment near struct mmu_table_batch. - */ - -/* - * If we want tlb_remove_table() to imply TLB invalidates. - */ -static inline void tlb_table_invalidate(struct mmu_gather *tlb) -{ -#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE - /* - * Invalidate page-table caches used by hardware walkers. Then we still - * need to RCU-sched wait while freeing the pages because software - * walkers can still be in-flight. - */ - tlb_flush_mmu_tlbonly(tlb); -#endif -} - -static void tlb_remove_table_smp_sync(void *arg) -{ - /* Simply deliver the interrupt */ -} - -static void tlb_remove_table_one(void *table) -{ - /* - * This isn't an RCU grace period and hence the page-tables cannot be - * assumed to be actually RCU-freed. - * - * It is however sufficient for software page-table walkers that rely on - * IRQ disabling. See the comment near struct mmu_table_batch. 
- */ - smp_call_function(tlb_remove_table_smp_sync, NULL, 1); - __tlb_remove_table(table); -} - -static void tlb_remove_table_rcu(struct rcu_head *head) -{ - struct mmu_table_batch *batch; - int i; - - batch = container_of(head, struct mmu_table_batch, rcu); - - for (i = 0; i < batch->nr; i++) - __tlb_remove_table(batch->tables[i]); - - free_page((unsigned long)batch); -} - -void tlb_table_flush(struct mmu_gather *tlb) -{ - struct mmu_table_batch **batch = &tlb->batch; - - if (*batch) { - tlb_table_invalidate(tlb); - call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu); - *batch = NULL; - } -} - -void tlb_remove_table(struct mmu_gather *tlb, void *table) -{ - struct mmu_table_batch **batch = &tlb->batch; - - if (*batch == NULL) { - *batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN); - if (*batch == NULL) { - tlb_table_invalidate(tlb); - tlb_remove_table_one(table); - return; - } - (*batch)->nr = 0; - } - - (*batch)->tables[(*batch)->nr++] = table; - if ((*batch)->nr == MAX_TABLE_BATCH) - tlb_table_flush(tlb); -} - -#endif /* CONFIG_HAVE_RCU_TABLE_FREE */ - -/** - * tlb_gather_mmu - initialize an mmu_gather structure for page-table tear-down - * @tlb: the mmu_gather structure to initialize - * @mm: the mm_struct of the target address space - * @start: start of the region that will be removed from the page-table - * @end: end of the region that will be removed from the page-table - * - * Called to initialize an (on-stack) mmu_gather structure for page-table - * tear-down from @mm. The @start and @end are set to 0 and -1 - * respectively when @mm is without users and we're going to destroy - * the full address space (exit/execve). 
- */ -void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, - unsigned long start, unsigned long end) -{ - arch_tlb_gather_mmu(tlb, mm, start, end); - inc_tlb_flush_pending(tlb->mm); -} - -void tlb_finish_mmu(struct mmu_gather *tlb, - unsigned long start, unsigned long end) -{ - /* - * If there are parallel threads are doing PTE changes on same range - * under non-exclusive lock(e.g., mmap_sem read-side) but defer TLB - * flush by batching, a thread has stable TLB entry can fail to flush - * the TLB by observing pte_none|!pte_dirty, for example so flush TLB - * forcefully if we detect parallel PTE batching threads. - */ - bool force = mm_tlb_flush_nested(tlb->mm); - - arch_tlb_finish_mmu(tlb, start, end, force); - dec_tlb_flush_pending(tlb->mm); -} - /* * Note: this doesn't free the actual pages themselves. That * has been handled earlier when unmapping all the memory regions. diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c new file mode 100644 index 000000000000..d41b63d9cdaa --- /dev/null +++ b/mm/mmu_gather.c @@ -0,0 +1,259 @@ +#include +#include +#include +#include +#include +#include +#include + +#include +#include + +#ifdef HAVE_GENERIC_MMU_GATHER + +static bool tlb_next_batch(struct mmu_gather *tlb) +{ + struct mmu_gather_batch *batch; + + batch = tlb->active; + if (batch->next) { + tlb->active = batch->next; + return true; + } + + if (tlb->batch_count == MAX_GATHER_BATCH_COUNT) + return false; + + batch = (void *)__get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0); + if (!batch) + return false; + + tlb->batch_count++; + batch->next = NULL; + batch->nr = 0; + batch->max = MAX_GATHER_BATCH; + + tlb->active->next = batch; + tlb->active = batch; + + return true; +} + +void arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, + unsigned long start, unsigned long end) +{ + tlb->mm = mm; + + /* Is it from 0 to ~0? 
*/ + tlb->fullmm = !(start | (end+1)); + tlb->need_flush_all = 0; + tlb->local.next = NULL; + tlb->local.nr = 0; + tlb->local.max = ARRAY_SIZE(tlb->__pages); + tlb->active = &tlb->local; + tlb->batch_count = 0; + +#ifdef CONFIG_HAVE_RCU_TABLE_FREE + tlb->batch = NULL; +#endif + tlb->page_size = 0; + + __tlb_reset_range(tlb); +} + +void tlb_flush_mmu_free(struct mmu_gather *tlb) +{ + struct mmu_gather_batch *batch; + +#ifdef CONFIG_HAVE_RCU_TABLE_FREE + tlb_table_flush(tlb); +#endif + for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { + free_pages_and_swap_cache(batch->pages, batch->nr); + batch->nr = 0; + } + tlb->active = &tlb->local; +} + +void tlb_flush_mmu(struct mmu_gather *tlb) +{ + tlb_flush_mmu_tlbonly(tlb); + tlb_flush_mmu_free(tlb); +} + +/* tlb_finish_mmu + * Called at the end of the shootdown operation to free up any resources + * that were required. + */ +void arch_tlb_finish_mmu(struct mmu_gather *tlb, + unsigned long start, unsigned long end, bool force) +{ + struct mmu_gather_batch *batch, *next; + + if (force) { + __tlb_reset_range(tlb); + __tlb_adjust_range(tlb, start, end - start); + } + + tlb_flush_mmu(tlb); + + /* keep the page table cache within bounds */ + check_pgt_cache(); + + for (batch = tlb->local.next; batch; batch = next) { + next = batch->next; + free_pages((unsigned long)batch, 0); + } + tlb->local.next = NULL; +} + +/* __tlb_remove_page + * Must perform the equivalent to __free_pte(pte_get_and_clear(ptep)), while + * handling the additional races in SMP caused by other CPUs caching valid + * mappings in their TLBs. Returns the number of free page slots left. + * When out of page slots we must call tlb_flush_mmu(). + *returns true if the caller should flush. 
+ */ +bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_size) +{ + struct mmu_gather_batch *batch; + + VM_BUG_ON(!tlb->end); + VM_WARN_ON(tlb->page_size != page_size); + + batch = tlb->active; + /* + * Add the page and check if we are full. If so + * force a flush. + */ + batch->pages[batch->nr++] = page; + if (batch->nr == batch->max) { + if (!tlb_next_batch(tlb)) + return true; + batch = tlb->active; + } + VM_BUG_ON_PAGE(batch->nr > batch->max, page); + + return false; +} + +#endif /* HAVE_GENERIC_MMU_GATHER */ + +#ifdef CONFIG_HAVE_RCU_TABLE_FREE + +/* + * See the comment near struct mmu_table_batch. + */ + +/* + * If we want tlb_remove_table() to imply TLB invalidates. + */ +static inline void tlb_table_invalidate(struct mmu_gather *tlb) +{ +#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE + /* + * Invalidate page-table caches used by hardware walkers. Then we still + * need to RCU-sched wait while freeing the pages because software + * walkers can still be in-flight. + */ + tlb_flush_mmu_tlbonly(tlb); +#endif +} + +static void tlb_remove_table_smp_sync(void *arg) +{ + /* Simply deliver the interrupt */ +} + +static void tlb_remove_table_one(void *table) +{ + /* + * This isn't an RCU grace period and hence the page-tables cannot be + * assumed to be actually RCU-freed. + * + * It is however sufficient for software page-table walkers that rely on + * IRQ disabling. See the comment near struct mmu_table_batch. 
+ */ + smp_call_function(tlb_remove_table_smp_sync, NULL, 1); + __tlb_remove_table(table); +} + +static void tlb_remove_table_rcu(struct rcu_head *head) +{ + struct mmu_table_batch *batch; + int i; + + batch = container_of(head, struct mmu_table_batch, rcu); + + for (i = 0; i < batch->nr; i++) + __tlb_remove_table(batch->tables[i]); + + free_page((unsigned long)batch); +} + +void tlb_table_flush(struct mmu_gather *tlb) +{ + struct mmu_table_batch **batch = &tlb->batch; + + if (*batch) { + tlb_table_invalidate(tlb); + call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu); + *batch = NULL; + } +} + +void tlb_remove_table(struct mmu_gather *tlb, void *table) +{ + struct mmu_table_batch **batch = &tlb->batch; + + if (*batch == NULL) { + *batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN); + if (*batch == NULL) { + tlb_table_invalidate(tlb); + tlb_remove_table_one(table); + return; + } + (*batch)->nr = 0; + } + + (*batch)->tables[(*batch)->nr++] = table; + if ((*batch)->nr == MAX_TABLE_BATCH) + tlb_table_flush(tlb); +} + +#endif /* CONFIG_HAVE_RCU_TABLE_FREE */ + +/** + * tlb_gather_mmu - initialize an mmu_gather structure for page-table tear-down + * @tlb: the mmu_gather structure to initialize + * @mm: the mm_struct of the target address space + * @start: start of the region that will be removed from the page-table + * @end: end of the region that will be removed from the page-table + * + * Called to initialize an (on-stack) mmu_gather structure for page-table + * tear-down from @mm. The @start and @end are set to 0 and -1 + * respectively when @mm is without users and we're going to destroy + * the full address space (exit/execve). 
+ */ +void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, + unsigned long start, unsigned long end) +{ + arch_tlb_gather_mmu(tlb, mm, start, end); + inc_tlb_flush_pending(tlb->mm); +} + +void tlb_finish_mmu(struct mmu_gather *tlb, + unsigned long start, unsigned long end) +{ + /* + * If there are parallel threads are doing PTE changes on same range + * under non-exclusive lock(e.g., mmap_sem read-side) but defer TLB + * flush by batching, a thread has stable TLB entry can fail to flush + * the TLB by observing pte_none|!pte_dirty, for example so flush TLB + * forcefully if we detect parallel PTE batching threads. + */ + bool force = mm_tlb_flush_nested(tlb->mm); + + arch_tlb_finish_mmu(tlb, start, end, force); + dec_tlb_flush_pending(tlb->mm); +}

From patchwork Tue Sep 4 11:45:33 2018
From: Will Deacon
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, npiggin@gmail.com, linux-mm@kvack.org, kirill.shutemov@linux.intel.com, akpm@linux-foundation.org, mhocko@suse.com, aneesh.kumar@linux.vnet.ibm.com
Subject: [PATCH v2 5/5] MAINTAINERS: Add entry for MMU GATHER AND TLB INVALIDATION
Date: Tue, 4 Sep 2018 12:45:33 +0100
Message-Id: <1536061533-16188-6-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1536061533-16188-1-git-send-email-will.deacon@arm.com>
References: <1536061533-16188-1-git-send-email-will.deacon@arm.com>

We recently had to debug a TLB invalidation problem on the munmap() path,
which was made more difficult than necessary because:

(a) The MMU gather code had changed without people realising

(b) Many people subtly misunderstood the operation of the MMU gather code
and its interactions with RCU and arch-specific TLB invalidation

(c) Untangling the intended behaviour involved educated guesswork and
    plenty of discussion

Hopefully, we can avoid getting into this mess again by designating a
cross-arch group of people to look after this code. It is not intended
that they will have a separate tree, but they will at least provide a
point of contact for anybody working in this area and can co-ordinate
any proposed future changes to the internal API.

Cc: Peter Zijlstra
Cc: Nicholas Piggin
Cc: Linus Torvalds
Cc: "Aneesh Kumar K.V"
Cc: "Kirill A. Shutemov"
Cc: Andrew Morton
Cc: Michal Hocko
Signed-off-by: Will Deacon
---
 MAINTAINERS | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 9ad052aeac39..e490a0a0605a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9681,6 +9681,18 @@ S: Maintained
 F: arch/arm/boot/dts/mmp*
 F: arch/arm/mach-mmp/
 
+MMU GATHER AND TLB INVALIDATION
+M: Will Deacon
+M: "Aneesh Kumar K.V"
+M: Nick Piggin
+M: Peter Zijlstra
+L: linux-arch@vger.kernel.org
+L: linux-mm@kvack.org
+S: Maintained
+F: arch/*/include/asm/tlb.h
+F: include/asm-generic/tlb.h
+F: mm/mmu_gather.c
+
 MN88472 MEDIA DRIVER
 M: Antti Palosaari
 L: linux-media@vger.kernel.org