From patchwork Tue Jan 14 10:01:37 2020
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 11331783
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: akpm@linux-foundation.org, peterz@infradead.org, will@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: [PATCH v3 1/9] powerpc/mmu_gather: Enable RCU_TABLE_FREE even for !SMP case
Date: Tue, 14 Jan 2020 15:31:37 +0530
Message-Id: <20200114100145.365527-2-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com>
References: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com>

A follow-up patch is going to make sure we correctly invalidate the page
walk cache before we free page table pages. In order to keep things
simple, enable RCU_TABLE_FREE even for !SMP so that we don't have to fix
up the !SMP case differently in that follow-up patch.

The !SMP case is currently broken for radix translation w.r.t. the page
walk cache flush: we can get interrupted between the page table free and
the page walk cache flush, which would leave page walk cache entries
pointing to tables that have already been freed.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/Kconfig                         | 2 +-
 arch/powerpc/include/asm/book3s/32/pgalloc.h | 8 --------
 arch/powerpc/include/asm/book3s/64/pgalloc.h | 2 --
 arch/powerpc/include/asm/nohash/pgalloc.h    | 8 --------
 arch/powerpc/mm/book3s64/pgtable.c           | 7 -------
 5 files changed, 1 insertion(+), 26 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1ec34e16ed65..04240205f38c 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -222,7 +222,7 @@ config PPC
 	select HAVE_HARDLOCKUP_DETECTOR_PERF	if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select HAVE_RCU_TABLE_FREE		if SMP
+	select HAVE_RCU_TABLE_FREE
 	select HAVE_RCU_TABLE_NO_INVALIDATE	if HAVE_RCU_TABLE_FREE
 	select HAVE_MMU_GATHER_PAGE_SIZE
 	select HAVE_REGS_AND_STACK_ACCESS_API
diff --git a/arch/powerpc/include/asm/book3s/32/pgalloc.h b/arch/powerpc/include/asm/book3s/32/pgalloc.h
index 998317702630..dc5c039eb28e 100644
--- a/arch/powerpc/include/asm/book3s/32/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/32/pgalloc.h
@@ -49,7 +49,6 @@ static inline void pgtable_free(void *table, unsigned index_size)
 
 #define get_hugepd_cache_index(x)  (x)
 
-#ifdef CONFIG_SMP
 static inline void pgtable_free_tlb(struct mmu_gather *tlb,
 				    void *table, int shift)
 {
@@ -66,13 +65,6 @@ static inline void __tlb_remove_table(void *_table)
 
 	pgtable_free(table, shift);
 }
-#else
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
-				    void *table, int shift)
-{
-	pgtable_free(table, shift);
-}
-#endif
 
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 				  unsigned long address)
diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index f6968c811026..a41e91bd0580 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -19,9 +19,7 @@ extern struct vmemmap_backing *vmemmap_list;
 extern pmd_t *pmd_fragment_alloc(struct mm_struct *, unsigned long);
 extern void pmd_fragment_free(unsigned long *);
 extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
-#ifdef CONFIG_SMP
 extern void __tlb_remove_table(void *_table);
-#endif
 void pte_frag_destroy(void *pte_frag);
 
 static inline pgd_t *radix__pgd_alloc(struct mm_struct *mm)
diff --git a/arch/powerpc/include/asm/nohash/pgalloc.h b/arch/powerpc/include/asm/nohash/pgalloc.h
index 332b13b4ecdb..29c43665a753 100644
--- a/arch/powerpc/include/asm/nohash/pgalloc.h
+++ b/arch/powerpc/include/asm/nohash/pgalloc.h
@@ -46,7 +46,6 @@ static inline void pgtable_free(void *table, int shift)
 
 #define get_hugepd_cache_index(x)	(x)
 
-#ifdef CONFIG_SMP
 static inline void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
 {
 	unsigned long pgf = (unsigned long)table;
@@ -64,13 +63,6 @@ static inline void __tlb_remove_table(void *_table)
 
 	pgtable_free(table, shift);
 }
-#else
-static inline void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
-{
-	pgtable_free(table, shift);
-}
-#endif
-
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 				  unsigned long address)
 {
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 75483b40fcb1..2bf7e1b4fd82 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -378,7 +378,6 @@ static inline void pgtable_free(void *table, int index)
 	}
 }
 
-#ifdef CONFIG_SMP
 void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int index)
 {
 	unsigned long pgf = (unsigned long)table;
@@ -395,12 +394,6 @@ void __tlb_remove_table(void *_table)
 
 	return pgtable_free(table, index);
 }
-#else
-void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int index)
-{
-	return pgtable_free(table, index);
-}
-#endif
 
 #ifdef CONFIG_PROC_FS
 atomic_long_t direct_pages_count[MMU_PAGE_COUNT];
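[Editor's illustration] The ordering bug this patch prepares to fix is easy to
state in isolation. Below is a hedged, userspace-only sketch in plain C
(compiles with any C99 compiler; none of these names are kernel APIs, and the
"walk cache" is a toy stand-in for the hardware page-walk cache): if the table
is freed before the cached walker reference is invalidated, a stale entry
survives the free, which is exactly the !SMP radix breakage described above.

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins for illustration only -- not kernel APIs. */
struct pte_table { unsigned long entry[4]; };

/* Models a hardware page-walk-cache slot pointing at a table. */
static struct pte_table *walk_cache;

static void free_page_table(struct pte_table *t, int invalidate_first)
{
	if (invalidate_first)
		walk_cache = NULL;	/* models the PWC invalidate (TLBI) */
	free(t);
	/*
	 * Without the invalidate, walk_cache still holds a pointer to
	 * freed memory; a later "hardware walk" through it would be a
	 * use-after-free.
	 */
}

int main(void)
{
	struct pte_table *t = calloc(1, sizeof(*t));

	walk_cache = t;			/* walker cached this table */
	free_page_table(t, 0);		/* wrong order: free, no invalidate */
	if (walk_cache)
		printf("stale walk-cache entry survives the free\n");
	return 0;
}
```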
From patchwork Tue Jan 14 10:01:38 2020
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 11331771
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: akpm@linux-foundation.org, peterz@infradead.org, will@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: [PATCH v3 2/9] mm/mmu_gather: Invalidate TLB correctly on batch allocation failure and flush
Date: Tue, 14 Jan 2020 15:31:38 +0530
Message-Id: <20200114100145.365527-3-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com>
References: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com>

From: Peter Zijlstra <peterz@infradead.org>

Architectures that have hardware walkers of the Linux page tables should
flush the TLB on mmu_gather batch allocation failure and on batch flush.
Some architectures, like POWER, support multiple translation modes (hash
and radix), and on POWER only radix translation mode needs the above
TLBI. In hash translation mode there are no hardware walkers of the
Linux page tables, so the kernel wants to avoid the extra flush there.
With radix translation the hardware does walk the Linux page tables, and
the kernel must invalidate the TLB page walk cache before page table
pages are freed.
More details in commit d86564a2f085 ("mm/tlb, x86/mm: Support
invalidating TLB caches for RCU_TABLE_FREE").

Fixes: a46cc7a90fd8 ("powerpc/mm/radix: Improve TLB/PWC flushes")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/Kconfig                    |  3 ---
 arch/powerpc/Kconfig            |  1 -
 arch/powerpc/include/asm/tlb.h  | 11 +++++++++++
 arch/sparc/Kconfig              |  1 -
 arch/sparc/include/asm/tlb_64.h |  9 +++++++++
 include/asm-generic/tlb.h       | 22 +++++++++++++++-------
 mm/mmu_gather.c                 | 16 ++++++++--------
 7 files changed, 43 insertions(+), 20 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 48b5e103bdb0..208aad121630 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -396,9 +396,6 @@ config HAVE_ARCH_JUMP_LABEL_RELATIVE
 config HAVE_RCU_TABLE_FREE
 	bool
 
-config HAVE_RCU_TABLE_NO_INVALIDATE
-	bool
-
 config HAVE_MMU_GATHER_PAGE_SIZE
 	bool
 
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 04240205f38c..f9970f87612e 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -223,7 +223,6 @@ config PPC
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE
-	select HAVE_RCU_TABLE_NO_INVALIDATE	if HAVE_RCU_TABLE_FREE
 	select HAVE_MMU_GATHER_PAGE_SIZE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE		if PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
index b2c0be93929d..7f3a8b902325 100644
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -26,6 +26,17 @@
 #define tlb_flush tlb_flush
 extern void tlb_flush(struct mmu_gather *tlb);
 
+/*
+ * book3s:
+ * Hash does not use the linux page-tables, so we can avoid
+ * the TLB invalidate for page-table freeing, Radix otoh does use the
+ * page-tables and needs the TLBI.
+ *
+ * nohash:
+ * We still do TLB invalidate in the __pte_free_tlb routine before we
+ * add the page table pages to mmu gather table batch.
+ */
+#define tlb_needs_table_invalidate()	radix_enabled()
+
 /* Get the generic bits... */
 #include <asm-generic/tlb.h>
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index eb24cb1afc11..18e9fb6fcf1b 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -65,7 +65,6 @@ config SPARC64
 	select HAVE_KRETPROBES
 	select HAVE_KPROBES
 	select HAVE_RCU_TABLE_FREE if SMP
-	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select HAVE_DYNAMIC_FTRACE
diff --git a/arch/sparc/include/asm/tlb_64.h b/arch/sparc/include/asm/tlb_64.h
index a2f3fa61ee36..8cb8f3833239 100644
--- a/arch/sparc/include/asm/tlb_64.h
+++ b/arch/sparc/include/asm/tlb_64.h
@@ -28,6 +28,15 @@ void flush_tlb_pending(void);
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
 #define tlb_flush(tlb)	flush_tlb_pending()
 
+/*
+ * SPARC64's hardware TLB fill does not use the Linux page-tables
+ * and therefore we don't need a TLBI when freeing page-table pages.
+ */
+
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#define tlb_needs_table_invalidate()	(false)
+#endif
+
 #include <asm-generic/tlb.h>
 
 #endif /* _SPARC64_TLB_H */
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 2b10036fefd0..9e22ac369d1d 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -137,13 +137,6 @@
  * When used, an architecture is expected to provide __tlb_remove_table()
  * which does the actual freeing of these pages.
  *
- * HAVE_RCU_TABLE_NO_INVALIDATE
- *
- * This makes HAVE_RCU_TABLE_FREE avoid calling tlb_flush_mmu_tlbonly() before
- * freeing the page-table pages. This can be avoided if you use
- * HAVE_RCU_TABLE_FREE and your architecture does _NOT_ use the Linux
- * page-tables natively.
- *
  * MMU_GATHER_NO_RANGE
  *
  * Use this if your architecture lacks an efficient flush_tlb_range().
@@ -189,8 +182,23 @@ struct mmu_table_batch {
 
 extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
 
+/*
+ * This allows an architecture that does not use the linux page-tables for
+ * hardware to skip the TLBI when freeing page tables.
+ */
+#ifndef tlb_needs_table_invalidate
+#define tlb_needs_table_invalidate() (true)
+#endif
+
+#else
+
+#ifdef tlb_needs_table_invalidate
+#error tlb_needs_table_invalidate() requires HAVE_RCU_TABLE_FREE
 #endif
 
+#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
+
+
 #ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
 /*
  * If we can't allocate a page to make a big batch of page pointers
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 7d70e5c78f97..7c1b8f67af7b 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -102,14 +102,14 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
  */
 static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 {
-#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
-	/*
-	 * Invalidate page-table caches used by hardware walkers. Then we still
-	 * need to RCU-sched wait while freeing the pages because software
-	 * walkers can still be in-flight.
-	 */
-	tlb_flush_mmu_tlbonly(tlb);
-#endif
+	if (tlb_needs_table_invalidate()) {
+		/*
+		 * Invalidate page-table caches used by hardware walkers. Then
+		 * we still need to RCU-sched wait while freeing the pages
+		 * because software walkers can still be in-flight.
+		 */
+		tlb_flush_mmu_tlbonly(tlb);
+	}
 }
 
 static void tlb_remove_table_smp_sync(void *arg)
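[Editor's illustration] The new hook is deliberately easy for an architecture
to override. As a hedged sketch (arch/foo and foo_deep_walk_enabled() are
invented names, not part of this series), a port whose hardware walker only
sometimes consumes the Linux page tables would mirror the powerpc pattern
above in its asm/tlb.h:

```c
/* Hypothetical arch/foo/include/asm/tlb.h -- illustration only. */
#ifndef _ASM_FOO_TLB_H
#define _ASM_FOO_TLB_H

#define tlb_flush tlb_flush
extern void tlb_flush(struct mmu_gather *tlb);

/*
 * Like powerpc's radix_enabled() test above: only pay for the
 * page-walk-cache invalidate when the hardware walker actually
 * consumes the Linux page tables.
 */
#define tlb_needs_table_invalidate()	foo_deep_walk_enabled()

#include <asm-generic/tlb.h>

#endif /* _ASM_FOO_TLB_H */
```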
From patchwork Tue Jan 14 10:01:39 2020
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 11331799
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: akpm@linux-foundation.org, peterz@infradead.org, will@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: [PATCH v3 3/9] asm-generic/tlb: Avoid potential double flush
Date: Tue, 14 Jan 2020 15:31:39 +0530
Message-Id: <20200114100145.365527-4-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com>
References: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com>
From: Peter Zijlstra <peterz@infradead.org>

Aneesh reported that:

	tlb_flush_mmu()
	  tlb_flush_mmu_tlbonly()
	    tlb_flush()			<-- #1
	  tlb_flush_mmu_free()
	    tlb_table_flush()
	      tlb_table_invalidate()
		tlb_flush_mmu_tlbonly()
		  tlb_flush()		<-- #2

does two TLBIs when tlb->fullmm, because __tlb_reset_range() will not
clear tlb->end in that case.

Observe that any caller to __tlb_adjust_range() also sets at least one
of the tlb->freed_tables || tlb->cleared_p* bits, and those are
unconditionally cleared by __tlb_reset_range().

Change the condition for actually issuing the TLBI to having one of
those bits set, as opposed to having tlb->end != 0.

Reported-by: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 include/asm-generic/tlb.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 9e22ac369d1d..b36b3bef5661 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -402,7 +402,12 @@ tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
-	if (!tlb->end)
+	/*
+	 * Anything calling __tlb_adjust_range() also sets at least one of
+	 * these bits.
+	 */
+	if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
+	      tlb->cleared_puds || tlb->cleared_p4ds))
 		return;
 
 	tlb_flush(tlb);
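[Editor's illustration] For readers tracing the fix, the decisive detail is
what __tlb_reset_range() clears. The snippet below is a hedged paraphrase of
that reset behaviour (illustration only, not a verbatim copy of the mm code):
it shows why tlb->end is an unreliable "work pending" test in the fullmm case
while the freed_tables/cleared_* bits are reliable.

```c
/*
 * Paraphrased sketch of __tlb_reset_range().  For a fullmm teardown
 * the range is left covering everything, so a "tlb->end != 0" test
 * stays true and fires a second, redundant flush.  The per-level
 * bits, by contrast, always drop to zero here, so testing them
 * flushes exactly once per batch of real work.
 */
struct gather_state {
	unsigned long start, end;
	unsigned int fullmm : 1, freed_tables : 1, cleared_ptes : 1,
		     cleared_pmds : 1, cleared_puds : 1, cleared_p4ds : 1;
};

static void reset_range(struct gather_state *tlb)
{
	if (tlb->fullmm) {
		tlb->start = tlb->end = ~0UL;	/* end stays non-zero! */
	} else {
		tlb->start = ~0UL;		/* TASK_SIZE in the real code */
		tlb->end = 0;
	}
	tlb->freed_tables = 0;
	tlb->cleared_ptes = 0;
	tlb->cleared_pmds = 0;
	tlb->cleared_puds = 0;
	tlb->cleared_p4ds = 0;
}
```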
From patchwork Tue Jan 14 10:01:40 2020
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 11331765
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: akpm@linux-foundation.org, peterz@infradead.org, will@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: [PATCH v3 4/9] asm-generic/tlb: Remove stray function declarations
Date: Tue, 14 Jan 2020 15:31:40 +0530
Message-Id: <20200114100145.365527-5-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com>
References: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com>

From: Peter Zijlstra <peterz@infradead.org>

We removed the actual functions a while ago.

Fixes: 1808d65b55e4 ("asm-generic/tlb: Remove arch_tlb*_mmu()")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 include/asm-generic/tlb.h | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index b36b3bef5661..1a4cea5f95df 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -285,11 +285,7 @@ struct mmu_gather {
 #endif
 };
 
-void arch_tlb_gather_mmu(struct mmu_gather *tlb,
-	struct mm_struct *mm, unsigned long start, unsigned long end);
 void tlb_flush_mmu(struct mmu_gather *tlb);
-void arch_tlb_finish_mmu(struct mmu_gather *tlb,
-	unsigned long start, unsigned long end, bool force);
 
 static inline void __tlb_adjust_range(struct mmu_gather *tlb,
 				      unsigned long address,
From patchwork Tue Jan 14 10:01:41 2020
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 11331763
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: akpm@linux-foundation.org, peterz@infradead.org, will@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: [PATCH v3 5/9] asm-generic/tlb: Add missing CONFIG symbol
Date: Tue, 14 Jan 2020 15:31:41 +0530
Message-Id: <20200114100145.365527-6-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com>
References: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com>

From: Peter Zijlstra <peterz@infradead.org>

Without this the symbol will not actually end up in .config files.

Fixes: a30e32bd79e9 ("asm-generic/tlb: Provide generic tlb_flush() based on flush_tlb_mm()")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/Kconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index 208aad121630..5e907a954532 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -399,6 +399,9 @@ config HAVE_RCU_TABLE_FREE
 config HAVE_MMU_GATHER_PAGE_SIZE
 	bool
 
+config MMU_GATHER_NO_RANGE
+	bool
+
 config HAVE_MMU_GATHER_NO_GATHER
 	bool
From patchwork Tue Jan 14 10:01:42 2020
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 11331777
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: akpm@linux-foundation.org, peterz@infradead.org, will@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: [PATCH v3 6/9] asm-generic/tlb: Rename HAVE_RCU_TABLE_FREE
Date: Tue, 14 Jan 2020 15:31:42 +0530
Message-Id: <20200114100145.365527-7-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com>
References: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com>

From: Peter Zijlstra <peterz@infradead.org>

Towards a more consistent naming scheme.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/Kconfig                    |  2 +-
 arch/arm/Kconfig                |  2 +-
 arch/arm/include/asm/tlb.h      |  2 +-
 arch/arm64/Kconfig              |  2 +-
 arch/powerpc/Kconfig            |  2 +-
 arch/s390/Kconfig               |  2 +-
 arch/sparc/Kconfig              |  2 +-
 arch/sparc/include/asm/tlb_64.h |  2 +-
 arch/x86/Kconfig                |  2 +-
 arch/x86/include/asm/tlb.h      |  4 ++--
 include/asm-generic/tlb.h       | 10 +++++-----
 mm/gup.c                        |  2 +-
 mm/mmu_gather.c                 |  8 ++++----
 13 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 5e907a954532..501d565690b5 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -393,7 +393,7 @@ config HAVE_ARCH_JUMP_LABEL
 config HAVE_ARCH_JUMP_LABEL_RELATIVE
 	bool
 
-config HAVE_RCU_TABLE_FREE
+config MMU_GATHER_RCU_TABLE_FREE
 	bool
 
 config HAVE_MMU_GATHER_PAGE_SIZE
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 96dab76da3b3..36445579243c 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -102,7 +102,7 @@ config ARM
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select HAVE_RCU_TABLE_FREE if SMP && ARM_LPAE
+	select MMU_GATHER_RCU_TABLE_FREE if SMP && ARM_LPAE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RSEQ
 	select HAVE_STACKPROTECTOR
diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index 669474add486..46a21cee3442 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -37,7 +37,7 @@ static inline void __tlb_remove_table(void *_table)
 
 #include <asm-generic/tlb.h>
 
-#ifndef CONFIG_HAVE_RCU_TABLE_FREE
+#ifndef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 #define tlb_remove_table(tlb, entry) tlb_remove_page(tlb, entry)
 #endif
 
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e688dfad0b72..a434f7c2438f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -162,7 +162,7 @@ config ARM64
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_FUNCTION_ARG_ACCESS_API
-	select HAVE_RCU_TABLE_FREE
+	select MMU_GATHER_RCU_TABLE_FREE
 	select HAVE_RSEQ
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index f9970f87612e..955759234776 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -222,7 +222,7 @@ config PPC
 	select HAVE_HARDLOCKUP_DETECTOR_PERF	if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select HAVE_RCU_TABLE_FREE
+	select MMU_GATHER_RCU_TABLE_FREE
 	select HAVE_MMU_GATHER_PAGE_SIZE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE		if PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index bc88841d335d..e2cde82a1a3c 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -169,7 +169,7 @@ config S390
 	select HAVE_OPROFILE
 	select HAVE_PCI
 	select HAVE_PERF_EVENTS
-	select HAVE_RCU_TABLE_FREE
+	select MMU_GATHER_RCU_TABLE_FREE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE
 	select HAVE_RSEQ
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 18e9fb6fcf1b..c703eb6b7461 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -64,7 +64,7 @@ config SPARC64
 	select HAVE_FUNCTION_GRAPH_TRACER
 	select HAVE_KRETPROBES
 	select HAVE_KPROBES
-	select HAVE_RCU_TABLE_FREE if SMP
+	select MMU_GATHER_RCU_TABLE_FREE
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select HAVE_DYNAMIC_FTRACE
diff --git a/arch/sparc/include/asm/tlb_64.h b/arch/sparc/include/asm/tlb_64.h
index 8cb8f3833239..6820d357581c 100644
--- a/arch/sparc/include/asm/tlb_64.h
+++ b/arch/sparc/include/asm/tlb_64.h
@@ -33,7 +33,7 @@ void flush_tlb_pending(void);
  * and therefore we don't need a TLBI when freeing page-table pages.
  */
 
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 #define tlb_needs_table_invalidate()	(false)
 #endif
 
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5e8949953660..f809bed408dd 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -200,7 +200,7 @@ config X86
 	select HAVE_PCI
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
-	select HAVE_RCU_TABLE_FREE		if PARAVIRT
+	select MMU_GATHER_RCU_TABLE_FREE	if PARAVIRT
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE		if X86_64 && (UNWINDER_FRAME_POINTER || UNWINDER_ORC) && STACK_VALIDATION
 	select HAVE_FUNCTION_ARG_ACCESS_API
diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index f23e7aaff4cd..820082bd6880 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -29,8 +29,8 @@ static inline void tlb_flush(struct mmu_gather *tlb)
  * shootdown, enablement code for several hypervisors overrides
  * .flush_tlb_others hook in pv_mmu_ops and implements it by issuing
  * a hypercall. To keep software pagetable walkers safe in this case we
- * switch to RCU based table free (HAVE_RCU_TABLE_FREE). See the comment
- * below 'ifdef CONFIG_HAVE_RCU_TABLE_FREE' in include/asm-generic/tlb.h
+ * switch to RCU based table free (MMU_GATHER_RCU_TABLE_FREE). See the comment
+ * below 'ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE' in include/asm-generic/tlb.h
  * for more details.
  */
 static inline void __tlb_remove_table(void *table)
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 1a4cea5f95df..04a1b8f08eea 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -126,7 +126,7 @@
  * This ensures we call tlb_flush() every time tlb_change_page_size() actually
  * changes the size and provides mmu_gather::page_size to tlb_flush().
  *
- * HAVE_RCU_TABLE_FREE
+ * MMU_GATHER_RCU_TABLE_FREE
  *
  * This provides tlb_remove_table(), to be used instead of tlb_remove_page()
  * for page directores (__p*_free_tlb()). This provides separate freeing of
@@ -142,7 +142,7 @@
  * Use this if your architecture lacks an efficient flush_tlb_range().
  */
 
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 /*
  * Semi RCU freeing of the page directories.
  *
@@ -193,10 +193,10 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
 #else
 
 #ifdef tlb_needs_table_invalidate
-#error tlb_needs_table_invalidate() requires HAVE_RCU_TABLE_FREE
+#error tlb_needs_table_invalidate() requires MMU_GATHER_RCU_TABLE_FREE
 #endif
 
-#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
+#endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
 
 #ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
@@ -235,7 +235,7 @@ extern bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
 struct mmu_gather {
 	struct mm_struct	*mm;
 
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 	struct mmu_table_batch	*batch;
 #endif
 
diff --git a/mm/gup.c b/mm/gup.c
index 7646bf993b25..789fadc011b8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1729,7 +1729,7 @@ EXPORT_SYMBOL(get_user_pages_unlocked);
  * Before activating this code, please be aware that the following assumptions
  * are currently made:
  *
- * *) Either HAVE_RCU_TABLE_FREE is enabled, and tlb_remove_table() is used to
+ * *) Either MMU_GATHER_RCU_TABLE_FREE is enabled, and tlb_remove_table() is used to
  *    free pages containing page tables or TLB flushing requires IPI broadcast.
  *
  * *) ptes can be read atomically by the architecture.
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 7c1b8f67af7b..86bb2176e173 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -91,7 +91,7 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
 
 #endif /* HAVE_MMU_GATHER_NO_GATHER */
 
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 
 /*
  * See the comment near struct mmu_table_batch.
@@ -173,11 +173,11 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
 		tlb_table_flush(tlb);
 }
 
-#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
+#endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
 static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 {
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 	tlb_table_flush(tlb);
 #endif
 #ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
@@ -220,7 +220,7 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 	tlb->batch_count = 0;
 #endif
 
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 	tlb->batch = NULL;
 #endif
 #ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0153.hostedemail.com [216.40.44.153]) by kanga.kvack.org (Postfix) with ESMTP id E65DD8E0008 for ; Tue, 14 Jan 2020 05:02:28 -0500 (EST) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with SMTP id A71C8582C for ; Tue, 14 Jan 2020 10:02:28 +0000 (UTC) X-FDA: 76375799976.26.north13_158e7672ef20a X-Spam-Summary: 2,0,0,d82414ad0db7250c,d41d8cd98f00b204,aneesh.kumar@linux.ibm.com,:akpm@linux-foundation.org:peterz@infradead.org:will@kernel.org::linux-kernel@vger.kernel.org:linux-arch@vger.kernel.org:aneesh.kumar@linux.ibm.com,RULES_HIT:41:355:379:541:800:960:966:968:973:988:989:1260:1261:1311:1314:1345:1359:1431:1437:1515:1535:1543:1711:1730:1747:1777:1792:1978:1981:2194:2196:2199:2200:2393:2559:2562:3138:3139:3140:3141:3142:3354:3865:3866:3867:3868:3870:4117:4250:4321:4385:4605:5007:6119:6261:7576:8603:8634:8660:8784:10004:11026:11473:11658:11914:12043:12114:12296:12297:12438:12555:12679:12895:12986:13148:13230:13894:14096:14181:14394:14721:21080:21451:21627:30054:30055:30089,0,RBL:148.163.156.1:@linux.ibm.com:.lbl8.mailshell.net-62.2.0.100 64.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:25,LUA_SUMMARY:none X-HE-Tag: north13_158e7672ef20a X-Filterd-Recvd-Size: 6359 Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) by imf16.hostedemail.com (Postfix) with ESMTP for ; Tue, 14 Jan 2020 10:02:27 +0000 (UTC) Received: from pps.filterd (m0098399.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 00EA2EO9004231; Tue, 14 Jan 2020 05:02:18 -0500 Received: from ppma04dal.us.ibm.com (7a.29.35a9.ip4.static.sl-reverse.com [169.53.41.122]) by mx0a-001b2d01.pphosted.com with ESMTP id 2xfvt0gkdm-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 14 Jan 2020 05:02:18 -0500 Received: from pps.filterd (ppma04dal.us.ibm.com [127.0.0.1]) by ppma04dal.us.ibm.com (8.16.0.27/8.16.0.27) with SMTP id 00EA1kS1008515; Tue, 14 Jan 2020 10:02:16 GMT Received: from b03cxnp08026.gho.boulder.ibm.com (b03cxnp08026.gho.boulder.ibm.com [9.17.130.18]) by ppma04dal.us.ibm.com with ESMTP id 2xf75ks482-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 14 Jan 2020 10:02:16 +0000 Received: from b03ledav006.gho.boulder.ibm.com (b03ledav006.gho.boulder.ibm.com [9.17.130.237]) by b03cxnp08026.gho.boulder.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 00EA2Fs560686770 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Tue, 14 Jan 2020 10:02:15 GMT Received: from b03ledav006.gho.boulder.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 15D89C6062; Tue, 14 Jan 2020 10:02:15 +0000 (GMT) Received: from b03ledav006.gho.boulder.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id AE191C6055; Tue, 14 Jan 2020 10:02:12 +0000 (GMT) Received: from skywalker.in.ibm.com (unknown [9.124.35.105]) by b03ledav006.gho.boulder.ibm.com (Postfix) with ESMTP; Tue, 14 Jan 2020 10:02:12 +0000 (GMT) From: "Aneesh Kumar K.V" To: akpm@linux-foundation.org, peterz@infradead.org, will@kernel.org Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, "Aneesh Kumar K . 
V" Subject: [PATCH v3 7/9] asm-generic/tlb: Rename HAVE_MMU_GATHER_PAGE_SIZE Date: Tue, 14 Jan 2020 15:31:43 +0530 Message-Id: <20200114100145.365527-8-aneesh.kumar@linux.ibm.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com> References: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com> MIME-Version: 1.0 X-TM-AS-GCONF: 00 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138,18.0.572 definitions=2020-01-14_02:2020-01-13,2020-01-14 signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 spamscore=0 priorityscore=1501 lowpriorityscore=0 adultscore=0 malwarescore=0 mlxlogscore=998 suspectscore=2 bulkscore=0 phishscore=0 mlxscore=0 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-1910280000 definitions=main-2001140090 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Peter Zijlstra Towards a more consistent naming scheme. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Aneesh Kumar K.V --- arch/Kconfig | 2 +- arch/powerpc/Kconfig | 2 +- include/asm-generic/tlb.h | 9 ++++++--- mm/mmu_gather.c | 4 ++-- 4 files changed, 10 insertions(+), 7 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index 501d565690b5..e8548211b6a9 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -396,7 +396,7 @@ config HAVE_ARCH_JUMP_LABEL_RELATIVE config MMU_GATHER_RCU_TABLE_FREE bool -config HAVE_MMU_GATHER_PAGE_SIZE +config MMU_GATHER_PAGE_SIZE bool config MMU_GATHER_NO_RANGE diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 955759234776..cefacb9c8f48 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -223,7 +223,7 @@ config PPC select HAVE_PERF_REGS select HAVE_PERF_USER_STACK_DUMP select MMU_GATHER_RCU_TABLE_FREE - select HAVE_MMU_GATHER_PAGE_SIZE + select MMU_GATHER_PAGE_SIZE select HAVE_REGS_AND_STACK_ACCESS_API select HAVE_RELIABLE_STACKTRACE if PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN select HAVE_SYSCALL_TRACEPOINTS diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h index 04a1b8f08eea..53befa5acb27 100644 --- a/include/asm-generic/tlb.h +++ b/include/asm-generic/tlb.h @@ -121,11 +121,14 @@ * * Additionally there are a few opt-in features: * - * HAVE_MMU_GATHER_PAGE_SIZE + * MMU_GATHER_PAGE_SIZE * * This ensures we call tlb_flush() every time tlb_change_page_size() actually * changes the size and provides mmu_gather::page_size to tlb_flush(). * + * This might be useful if your architecture has size specific TLB + * invalidation instructions. 
+ *
  * MMU_GATHER_RCU_TABLE_FREE
  *
  * This provides tlb_remove_table(), to be used instead of tlb_remove_page()
@@ -279,7 +282,7 @@ struct mmu_gather {
 	struct mmu_gather_batch	local;
 	struct page		*__pages[MMU_GATHER_BUNDLE];
 
-#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
+#ifdef CONFIG_MMU_GATHER_PAGE_SIZE
 	unsigned int		page_size;
 #endif
 #endif
@@ -435,7 +438,7 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
 static inline void tlb_change_page_size(struct mmu_gather *tlb,
 					unsigned int page_size)
 {
-#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
+#ifdef CONFIG_MMU_GATHER_PAGE_SIZE
 	if (tlb->page_size && tlb->page_size != page_size) {
 		if (!tlb->fullmm && !tlb->need_flush_all)
 			tlb_flush_mmu(tlb);
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 86bb2176e173..297c70307367 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -69,7 +69,7 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
 
 	VM_BUG_ON(!tlb->end);
 
-#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
+#ifdef CONFIG_MMU_GATHER_PAGE_SIZE
 	VM_WARN_ON(tlb->page_size != page_size);
 #endif
 
@@ -223,7 +223,7 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 #ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 	tlb->batch = NULL;
 #endif
-#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
+#ifdef CONFIG_MMU_GATHER_PAGE_SIZE
 	tlb->page_size = 0;
 #endif
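[Aside: a sketch of what MMU_GATHER_PAGE_SIZE buys an architecture, not
taken from any in-tree port. With the option selected, tlb_flush() can
dispatch on mmu_gather::page_size; foo_flush_tlb_range_psize() below is a
made-up helper standing in for a size-specific invalidate instruction.]

/* hypothetical arch/foo/include/asm/tlb.h */
static inline void tlb_flush(struct mmu_gather *tlb)
{
	if (tlb->fullmm || tlb->need_flush_all) {
		flush_tlb_mm(tlb->mm);
		return;
	}

	/*
	 * tlb_change_page_size() flushed whenever the size changed, so
	 * everything gathered in [start, end) is of one page size and a
	 * size-specific invalidate can be used.
	 */
	foo_flush_tlb_range_psize(tlb->mm, tlb->start, tlb->end,
				  tlb->page_size);
}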
From patchwork Tue Jan 14 10:01:44 2020
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: akpm@linux-foundation.org, peterz@infradead.org, will@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-arch@vger.kernel.org, "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
V" Subject: [PATCH v3 8/9] asm-generic/tlb: Rename HAVE_MMU_GATHER_NO_GATHER Date: Tue, 14 Jan 2020 15:31:44 +0530 Message-Id: <20200114100145.365527-9-aneesh.kumar@linux.ibm.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com> References: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com> MIME-Version: 1.0 X-TM-AS-GCONF: 00 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138,18.0.572 definitions=2020-01-14_02:2020-01-13,2020-01-14 signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0 mlxlogscore=999 priorityscore=1501 spamscore=0 impostorscore=0 phishscore=0 bulkscore=0 adultscore=0 clxscore=1015 mlxscore=0 suspectscore=2 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-1910280000 definitions=main-2001140090 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Peter Zijlstra Towards a more consistent naming scheme. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Aneesh Kumar K.V --- arch/Kconfig | 2 +- arch/s390/Kconfig | 2 +- include/asm-generic/tlb.h | 14 ++++++++++++-- mm/mmu_gather.c | 10 +++++----- 4 files changed, 19 insertions(+), 9 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index e8548211b6a9..c35668fbf4d4 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -402,7 +402,7 @@ config MMU_GATHER_PAGE_SIZE config MMU_GATHER_NO_RANGE bool -config HAVE_MMU_GATHER_NO_GATHER +config MMU_GATHER_NO_GATHER bool config ARCH_HAVE_NMI_SAFE_CMPXCHG diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index e2cde82a1a3c..de39c2e92435 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -163,7 +163,7 @@ config S390 select HAVE_PERF_USER_STACK_DUMP select HAVE_MEMBLOCK_NODE_MAP select HAVE_MEMBLOCK_PHYS_MAP - select HAVE_MMU_GATHER_NO_GATHER + select MMU_GATHER_NO_GATHER select HAVE_MOD_ARCH_SPECIFIC select HAVE_NOP_MCOUNT select HAVE_OPROFILE diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h index 53befa5acb27..ca0fe75b5355 100644 --- a/include/asm-generic/tlb.h +++ b/include/asm-generic/tlb.h @@ -143,6 +143,16 @@ * MMU_GATHER_NO_RANGE * * Use this if your architecture lacks an efficient flush_tlb_range(). + * + * MMU_GATHER_NO_GATHER + * + * If the option is set the mmu_gather will not track individual pages for + * delayed page free anymore. A platform that enables the option needs to + * provide its own implementation of the __tlb_remove_page_size() function to + * free pages. + * + * This is useful if your architecture already flushes TLB entries in the + * various ptep_get_and_clear() functions. */ #ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE @@ -202,7 +212,7 @@ extern void tlb_remove_table(struct mmu_gather *tlb, void *table); #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */ -#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER +#ifndef CONFIG_MMU_GATHER_NO_GATHER /* * If we can't allocate a page to make a big batch of page pointers * to work on, then just handle a few from the on-stack structure. 
@@ -277,7 +287,7 @@ struct mmu_gather {
 
 	unsigned int		batch_count;
 
-#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
+#ifndef CONFIG_MMU_GATHER_NO_GATHER
 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch	local;
 	struct page		*__pages[MMU_GATHER_BUNDLE];
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 297c70307367..a28c74328085 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -11,7 +11,7 @@
 #include <asm/pgalloc.h>
 #include <asm/tlb.h>
 
-#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
+#ifndef CONFIG_MMU_GATHER_NO_GATHER
 
 static bool tlb_next_batch(struct mmu_gather *tlb)
 {
@@ -89,7 +89,7 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
 	return false;
 }
 
-#endif /* HAVE_MMU_GATHER_NO_GATHER */
+#endif /* MMU_GATHER_NO_GATHER */
 
 #ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
@@ -180,7 +180,7 @@ static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 #ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 	tlb_table_flush(tlb);
 #endif
-#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
+#ifndef CONFIG_MMU_GATHER_NO_GATHER
 	tlb_batch_pages_flush(tlb);
 #endif
 }
@@ -211,7 +211,7 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 	/* Is it from 0 to ~0? */
 	tlb->fullmm     = !(start | (end+1));
 
-#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
+#ifndef CONFIG_MMU_GATHER_NO_GATHER
 	tlb->need_flush_all = 0;
 	tlb->local.next = NULL;
 	tlb->local.nr   = 0;
@@ -271,7 +271,7 @@ void tlb_finish_mmu(struct mmu_gather *tlb,
 
 	tlb_flush_mmu(tlb);
 
-#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
+#ifndef CONFIG_MMU_GATHER_NO_GATHER
 	tlb_batch_list_free(tlb);
 #endif
 	dec_tlb_flush_pending(tlb->mm);
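[Aside: what an MMU_GATHER_NO_GATHER architecture has to provide, as a
simplified sketch modelled on s390, the option's only user. Since the
architecture's ptep_get_and_clear() variants already flushed the TLB,
__tlb_remove_page_size() can free pages immediately instead of batching.]

/* hypothetical arch/foo/include/asm/tlb.h, s390-style */
static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
					  struct page *page, int page_size)
{
	/*
	 * The TLB entry was invalidated when the pte was cleared, so the
	 * page can go right away; returning false means "batch not full,
	 * no flush needed", which is always true as nothing is batched.
	 */
	free_page_and_swap_cache(page);
	return false;
}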
From patchwork Tue Jan 14 10:01:45 2020
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: akpm@linux-foundation.org, peterz@infradead.org, will@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-arch@vger.kernel.org, "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
V" Subject: [PATCH v3 9/9] asm-generic/tlb: Provide MMU_GATHER_TABLE_FREE Date: Tue, 14 Jan 2020 15:31:45 +0530 Message-Id: <20200114100145.365527-10-aneesh.kumar@linux.ibm.com> X-Mailer: git-send-email 2.24.1 In-Reply-To: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com> References: <20200114100145.365527-1-aneesh.kumar@linux.ibm.com> MIME-Version: 1.0 X-TM-AS-GCONF: 00 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.138,18.0.572 definitions=2020-01-14_02:2020-01-13,2020-01-14 signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 spamscore=0 priorityscore=1501 lowpriorityscore=0 adultscore=0 malwarescore=0 mlxlogscore=895 suspectscore=2 bulkscore=0 phishscore=0 mlxscore=0 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-1910280000 definitions=main-2001140090 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Peter Zijlstra As described in the comment, the correct order for freeing pages is: 1) unhook page 2) TLB invalidate page 3) free page This order equally applies to page directories. Currently there are two correct options: - use tlb_remove_page(), when all page directores are full pages and there are no futher contraints placed by things like software walkers (HAVE_FAST_GUP). - use MMU_GATHER_RCU_TABLE_FREE and tlb_remove_table() when the architecture does not do IPI based TLB invalidate and has HAVE_FAST_GUP (or software TLB fill). This however leaves architectures that don't have page based directories but don't need RCU in a bind. For those, provide MMU_GATHER_TABLE_FREE, which provides the independent batching for directories without the additional RCU freeing. Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Aneesh Kumar K.V --- arch/Kconfig | 5 ++ arch/arm/include/asm/tlb.h | 4 -- include/asm-generic/tlb.h | 72 +++++++++++----------- mm/mmu_gather.c | 120 +++++++++++++++++++++++++++---------- 4 files changed, 130 insertions(+), 71 deletions(-) diff --git a/arch/Kconfig b/arch/Kconfig index c35668fbf4d4..98de654b79b3 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -393,8 +393,12 @@ config HAVE_ARCH_JUMP_LABEL config HAVE_ARCH_JUMP_LABEL_RELATIVE bool +config MMU_GATHER_TABLE_FREE + bool + config MMU_GATHER_RCU_TABLE_FREE bool + select MMU_GATHER_TABLE_FREE config MMU_GATHER_PAGE_SIZE bool @@ -404,6 +408,7 @@ config MMU_GATHER_NO_RANGE config MMU_GATHER_NO_GATHER bool + depends on MMU_GATHER_TABLE_FREE config ARCH_HAVE_NMI_SAFE_CMPXCHG bool diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h index 46a21cee3442..4d4e7b6aabff 100644 --- a/arch/arm/include/asm/tlb.h +++ b/arch/arm/include/asm/tlb.h @@ -37,10 +37,6 @@ static inline void __tlb_remove_table(void *_table) #include -#ifndef CONFIG_MMU_GATHER_RCU_TABLE_FREE -#define tlb_remove_table(tlb, entry) tlb_remove_page(tlb, entry) -#endif - static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h index ca0fe75b5355..f391f6b500b4 100644 --- a/include/asm-generic/tlb.h +++ b/include/asm-generic/tlb.h @@ -56,6 +56,15 @@ * Defaults to flushing at tlb_end_vma() to reset the range; helps when * there's large holes between the VMAs. * + * - tlb_remove_table() + * + * tlb_remove_table() is the basic primitive to free page-table directories + * (__p*_free_tlb()). 
+ *    In its most primitive form it is an alias for tlb_remove_page() below,
+ *    for when page directories are pages and have no additional constraints.
+ *
+ *    See also MMU_GATHER_TABLE_FREE and MMU_GATHER_RCU_TABLE_FREE.
+ *
  *  - tlb_remove_page() / __tlb_remove_page()
  *  - tlb_remove_page_size() / __tlb_remove_page_size()
  *
@@ -129,17 +138,24 @@
  * This might be useful if your architecture has size specific TLB
  * invalidation instructions.
  *
- * MMU_GATHER_RCU_TABLE_FREE
+ * MMU_GATHER_TABLE_FREE
  *
  * This provides tlb_remove_table(), to be used instead of tlb_remove_page()
- * for page directores (__p*_free_tlb()). This provides separate freeing of
- * the page-table pages themselves in a semi-RCU fashion (see comment below).
- * Useful if your architecture doesn't use IPIs for remote TLB invalidates
- * and therefore doesn't naturally serialize with software page-table walkers.
+ * for page directories (__p*_free_tlb()).
+ *
+ * Useful if your architecture has non-page page directories.
  *
  * When used, an architecture is expected to provide __tlb_remove_table()
  * which does the actual freeing of these pages.
  *
+ * MMU_GATHER_RCU_TABLE_FREE
+ *
+ * Like MMU_GATHER_TABLE_FREE, and adds semi-RCU semantics to the free (see
+ * comment below).
+ *
+ * Useful if your architecture doesn't use IPIs for remote TLB invalidates
+ * and therefore doesn't naturally serialize with software page-table walkers.
+ *
  * MMU_GATHER_NO_RANGE
  *
  * Use this if your architecture lacks an efficient flush_tlb_range().
@@ -155,37 +171,12 @@
  *    various ptep_get_and_clear() functions.
  */
 
-#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
-/*
- * Semi RCU freeing of the page directories.
- *
- * This is needed by some architectures to implement software pagetable walkers.
- *
- * gup_fast() and other software pagetable walkers do a lockless page-table
- * walk and therefore needs some synchronization with the freeing of the page
- * directories. The chosen means to accomplish that is by disabling IRQs over
- * the walk.
- *
- * Architectures that use IPIs to flush TLBs will then automagically DTRT,
- * since we unlink the page, flush TLBs, free the page. Since the disabling of
- * IRQs delays the completion of the TLB flush we can never observe an already
- * freed page.
- *
- * Architectures that do not have this (PPC) need to delay the freeing by some
- * other means, this is that means.
- *
- * What we do is batch the freed directory pages (tables) and RCU free them.
- * We use the sched RCU variant, as that guarantees that IRQ/preempt disabling
- * holds off grace periods.
- *
- * However, in order to batch these pages we need to allocate storage, this
- * allocation is deep inside the MM code and can thus easily fail on memory
- * pressure. To guarantee progress we fall back to single table freeing, see
- * the implementation of tlb_remove_table_one().
- *
- */
+#ifdef CONFIG_MMU_GATHER_TABLE_FREE
+
 struct mmu_table_batch {
+#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 	struct rcu_head		rcu;
+#endif
 	unsigned int		nr;
 	void			*tables[0];
 };
@@ -195,6 +186,17 @@ struct mmu_table_batch {
 
 extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
 
+#else /* !CONFIG_MMU_GATHER_TABLE_FREE */
+
+/*
+ * Without MMU_GATHER_TABLE_FREE the architecture is assumed to have page based
+ * page directories and we can use the normal page batching to free them.
+ */
+#define tlb_remove_table(tlb, page) tlb_remove_page((tlb), (page))
+
+#endif /* CONFIG_MMU_GATHER_TABLE_FREE */
+
+#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 /*
  * This allows an architecture that does not use the linux page-tables for
  * hardware to skip the TLBI when freeing page tables.
@@ -248,7 +250,7 @@ extern bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page,
 struct mmu_gather {
 	struct mm_struct	*mm;
 
-#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
+#ifdef CONFIG_MMU_GATHER_TABLE_FREE
 	struct mmu_table_batch	*batch;
 #endif
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index a28c74328085..a3538cb2bcbe 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -91,56 +91,106 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
 
 #endif /* MMU_GATHER_NO_GATHER */
 
-#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
+#ifdef CONFIG_MMU_GATHER_TABLE_FREE
 
-/*
- * See the comment near struct mmu_table_batch.
- */
+static void __tlb_remove_table_free(struct mmu_table_batch *batch)
+{
+	int i;
+
+	for (i = 0; i < batch->nr; i++)
+		__tlb_remove_table(batch->tables[i]);
+
+	free_page((unsigned long)batch);
+}
+
+#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 
 /*
- * If we want tlb_remove_table() to imply TLB invalidates.
+ * Semi RCU freeing of the page directories.
+ *
+ * This is needed by some architectures to implement software pagetable walkers.
+ *
+ * gup_fast() and other software pagetable walkers do a lockless page-table
+ * walk and therefore need some synchronization with the freeing of the page
+ * directories. The chosen means to accomplish that is by disabling IRQs over
+ * the walk.
+ *
+ * Architectures that use IPIs to flush TLBs will then automagically DTRT,
+ * since we unlink the page, flush TLBs, free the page. Since the disabling of
+ * IRQs delays the completion of the TLB flush we can never observe an already
+ * freed page.
+ *
+ * Architectures that do not have this (PPC) need to delay the freeing by some
+ * other means, this is that means.
+ *
+ * What we do is batch the freed directory pages (tables) and RCU free them.
+ * We use the sched RCU variant, as that guarantees that IRQ/preempt disabling
+ * holds off grace periods.
+ *
+ * However, in order to batch these pages we need to allocate storage, this
+ * allocation is deep inside the MM code and can thus easily fail on memory
+ * pressure. To guarantee progress we fall back to single table freeing, see
+ * the implementation of tlb_remove_table_one().
+ *
  */
-static inline void tlb_table_invalidate(struct mmu_gather *tlb)
-{
-	if (tlb_needs_table_invalidate()) {
-		/*
-		 * Invalidate page-table caches used by hardware walkers. Then
-		 * we still need to RCU-sched wait while freeing the pages
-		 * because software walkers can still be in-flight.
-		 */
-		tlb_flush_mmu_tlbonly(tlb);
-	}
-}
 
 static void tlb_remove_table_smp_sync(void *arg)
 {
 	/* Simply deliver the interrupt */
 }
 
-static void tlb_remove_table_one(void *table)
+static void tlb_remove_table_sync_one(void)
 {
 	/*
 	 * This isn't an RCU grace period and hence the page-tables cannot be
 	 * assumed to be actually RCU-freed.
 	 *
 	 * It is however sufficient for software page-table walkers that rely on
-	 * IRQ disabling. See the comment near struct mmu_table_batch.
+	 * IRQ disabling.
 	 */
 	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
-	__tlb_remove_table(table);
 }
 
 static void tlb_remove_table_rcu(struct rcu_head *head)
 {
-	struct mmu_table_batch *batch;
-	int i;
+	__tlb_remove_table_free(container_of(head, struct mmu_table_batch, rcu));
+}
 
-	batch = container_of(head, struct mmu_table_batch, rcu);
+static void tlb_remove_table_free(struct mmu_table_batch *batch)
+{
+	call_rcu(&batch->rcu, tlb_remove_table_rcu);
+}
 
-	for (i = 0; i < batch->nr; i++)
-		__tlb_remove_table(batch->tables[i]);
+#else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
-	free_page((unsigned long)batch);
+static void tlb_remove_table_sync_one(void) { }
+
+static void tlb_remove_table_free(struct mmu_table_batch *batch)
+{
+	__tlb_remove_table_free(batch);
 }
 
+#endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
+
+/*
+ * If we want tlb_remove_table() to imply TLB invalidates.
+ */
+static inline void tlb_table_invalidate(struct mmu_gather *tlb)
+{
+	if (tlb_needs_table_invalidate()) {
+		/*
+		 * Invalidate page-table caches used by hardware walkers. Then
+		 * we still need to RCU-sched wait while freeing the pages
+		 * because software walkers can still be in-flight.
+		 */
+		tlb_flush_mmu_tlbonly(tlb);
+	}
+}
+
+static void tlb_remove_table_one(void *table)
+{
+	tlb_remove_table_sync_one();
+	__tlb_remove_table(table);
 }
 
 static void tlb_table_flush(struct mmu_gather *tlb)
@@ -149,7 +199,7 @@ static void tlb_table_flush(struct mmu_gather *tlb)
 
 	if (*batch) {
 		tlb_table_invalidate(tlb);
-		call_rcu(&(*batch)->rcu, tlb_remove_table_rcu);
+		tlb_remove_table_free(*batch);
 		*batch = NULL;
 	}
 }
@@ -173,13 +223,21 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
 		tlb_table_flush(tlb);
 }
 
-#endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
+static inline void tlb_table_init(struct mmu_gather *tlb)
+{
+	tlb->batch = NULL;
+}
+
+#else /* !CONFIG_MMU_GATHER_TABLE_FREE */
+
+static inline void tlb_table_flush(struct mmu_gather *tlb) { }
+static inline void tlb_table_init(struct mmu_gather *tlb) { }
+
+#endif /* CONFIG_MMU_GATHER_TABLE_FREE */
 
 static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 {
-#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
 	tlb_table_flush(tlb);
-#endif
 #ifndef CONFIG_MMU_GATHER_NO_GATHER
 	tlb_batch_pages_flush(tlb);
 #endif
@@ -220,9 +278,7 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 	tlb->batch_count = 0;
 #endif
 
-#ifdef CONFIG_MMU_GATHER_RCU_TABLE_FREE
-	tlb->batch = NULL;
-#endif
+	tlb_table_init(tlb);
 #ifdef CONFIG_MMU_GATHER_PAGE_SIZE
 	tlb->page_size = 0;
 #endif
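[Aside: a hypothetical example of the new option's intended user, not from
the series. An architecture whose page directories are kmem_cache objects
rather than full pages selects MMU_GATHER_TABLE_FREE, routes its
__p*_free_tlb() hooks through tlb_remove_table(), and supplies
__tlb_remove_table() to do the actual free after the TLB flush;
foo_pmd_cache is a made-up cache.]

/* hypothetical arch/foo/include/asm/tlb.h */
#define __pmd_free_tlb(tlb, pmd, addr)	tlb_remove_table((tlb), (pmd))

static inline void __tlb_remove_table(void *table)
{
	/* The flush has already happened; just return the directory memory. */
	kmem_cache_free(foo_pmd_cache, table);
}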