From patchwork Wed Aug 11 00:42:41 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429657
From: Vineet Gupta
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 01/18] ARC: mm: simplify mmu scratch register assignment to mmu needs
Date: Tue, 10 Aug 2021 17:42:41 -0700
Message-Id: <20210811004258.138075-2-vgupta@kernel.org>
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>
ARC700 SMP uses the MMU scratch reg for re-entrant interrupt handling, as
opposed to the canonical usage on ARCv2 and ARC700 UP builds: caching the
task pgd pointer. However this requires fabricating a #define in a header,
with the usual issues of dependency nesting and ugliness.

So clean this up and use the register as intended for ARCv2 only. On ARC700
simply don't use it for mmu needs at all: even UP, which potentially could,
only degrades slightly without it, and that config is not a big deal in this
day and age. (An illustrative sketch of what the pgd caching buys follows
the diff.)

Signed-off-by: Vineet Gupta
---
 arch/arc/include/asm/entry-compact.h | 8 --------
 arch/arc/include/asm/mmu.h           | 4 ----
 arch/arc/include/asm/mmu_context.h   | 2 +-
 arch/arc/mm/tlb.c                    | 4 ++--
 arch/arc/mm/tlbex.S                  | 2 +-
 5 files changed, 4 insertions(+), 16 deletions(-)

diff --git a/arch/arc/include/asm/entry-compact.h b/arch/arc/include/asm/entry-compact.h
index 6dbf5cecc8cc..5aab4f93ab8a 100644
--- a/arch/arc/include/asm/entry-compact.h
+++ b/arch/arc/include/asm/entry-compact.h
@@ -126,19 +126,11 @@
  * to be saved again on kernel mode stack, as part of pt_regs.
  *-------------------------------------------------------------*/
 .macro PROLOG_FREEUP_REG	reg, mem
-#ifndef ARC_USE_SCRATCH_REG
-	sr  \reg, [ARC_REG_SCRATCH_DATA0]
-#else
 	st  \reg, [\mem]
-#endif
 .endm

 .macro PROLOG_RESTORE_REG	reg, mem
-#ifndef ARC_USE_SCRATCH_REG
-	lr  \reg, [ARC_REG_SCRATCH_DATA0]
-#else
 	ld  \reg, [\mem]
-#endif
 .endm

 /*--------------------------------------------------------------

diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h
index a81d1975866a..4065335a7922 100644
--- a/arch/arc/include/asm/mmu.h
+++ b/arch/arc/include/asm/mmu.h
@@ -31,10 +31,6 @@
 #define ARC_REG_SCRATCH_DATA0	0x46c
 #endif

-#if defined(CONFIG_ISA_ARCV2) || !defined(CONFIG_SMP)
-#define ARC_USE_SCRATCH_REG
-#endif
-
 /* Bits in MMU PID register */
 #define __TLB_ENABLE	(1 << 31)
 #define __PROG_ENABLE	(1 << 30)

diff --git a/arch/arc/include/asm/mmu_context.h b/arch/arc/include/asm/mmu_context.h
index df164066e172..49318a126879 100644
--- a/arch/arc/include/asm/mmu_context.h
+++ b/arch/arc/include/asm/mmu_context.h
@@ -146,7 +146,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	 */
 	cpumask_set_cpu(cpu, mm_cpumask(next));

-#ifdef ARC_USE_SCRATCH_REG
+#ifdef CONFIG_ISA_ARCV2
 	/* PGD cached in MMU reg to avoid 3 mem lookups: task->mm->pgd */
 	write_aux_reg(ARC_REG_SCRATCH_DATA0, next->pgd);
 #endif

diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 8696829d37c0..349fb7a75d1d 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -719,8 +719,8 @@ void arc_mmu_init(void)
 	/* Enable the MMU */
 	write_aux_reg(ARC_REG_PID, MMU_ENABLE);

-	/* In smp we use this reg for interrupt 1 scratch */
-#ifdef ARC_USE_SCRATCH_REG
+	/* In arc700/smp needed for re-entrant interrupt handling */
+#ifdef CONFIG_ISA_ARCV2
 	/* swapper_pg_dir is the pgd for the kernel, used by vmalloc */
 	write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir);
 #endif

diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index 96c3a5de9dd4..bcd2909c691f 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -202,7 +202,7 @@ ex_saved_reg1:

 	lr  r2, [efa]

-#ifdef ARC_USE_SCRATCH_REG
+#ifdef CONFIG_ISA_ARCV2
 	lr  r1, [ARC_REG_SCRATCH_DATA0]	; current pgd
 #else
 	GET_CURR_TASK_ON_CPU  r1
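For illustration (not part of the patch), a minimal C sketch of the trade-off
the commit message describes. Only read_aux_reg() and ARC_REG_SCRATCH_DATA0
are real ARC names taken from the diff; the surrounding scaffolding is
hypothetical stand-ins:

    /* Illustrative sketch only: the two ways a TLB-refill path can find
     * the faulting task's page directory (pgd).
     */
    struct mm_struct { unsigned long *pgd; };
    struct task_struct { struct mm_struct *mm; };

    extern struct task_struct *current;                   /* stand-in for the kernel's current */
    extern unsigned long read_aux_reg(unsigned long reg); /* stand-in for the ARC aux-reg read */
    #define ARC_REG_SCRATCH_DATA0	0x46c

    /* ARC700 path after this patch: three dependent loads, current -> mm -> pgd */
    static unsigned long *pgd_via_memory(void)
    {
    	return current->mm->pgd;
    }

    /* ARCv2 path: a single aux-register read; switch_mm() cached the pgd there */
    static unsigned long *pgd_via_scratch(void)
    {
    	return (unsigned long *)read_aux_reg(ARC_REG_SCRATCH_DATA0);
    }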
From patchwork Wed Aug 11 00:42:42 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429659
From: Vineet Gupta
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 02/18] ARC: mm: remove tlb paranoid code
Date: Tue, 10 Aug 2021 17:42:42 -0700
Message-Id: <20210811004258.138075-3-vgupta@kernel.org>
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>

This was used way back when, during ARC700 debugging, when the ASID
allocator was still a bit flaky. It has not been needed in the last 5 years.

Signed-off-by: Vineet Gupta
---
 arch/arc/Kconfig           |  3 ---
 arch/arc/include/asm/mmu.h |  6 -----
 arch/arc/mm/tlb.c          | 40 ------------------------------
 arch/arc/mm/tlbex.S        | 50 --------------------------------------
 4 files changed, 99 deletions(-)

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 0680b1de0fc3..59d5b2a179f6 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -537,9 +537,6 @@ config ARC_DW2_UNWIND
 	  If you don't debug the kernel, you can say N, but we may not be able
 	  to solve problems without frame unwind information

-config ARC_DBG_TLB_PARANOIA
-	bool "Paranoia Checks in Low Level TLB Handlers"
-
 config ARC_DBG_JUMP_LABEL
 	bool "Paranoid checks in Static Keys (jump labels) code"
 	depends on JUMP_LABEL

diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h
index 4065335a7922..38a036508699 100644
--- a/arch/arc/include/asm/mmu.h
+++ b/arch/arc/include/asm/mmu.h
@@ -64,12 +64,6 @@ typedef struct {
 	unsigned long asid[NR_CPUS];	/* 8 bit MMU PID + Generation cycle */
 } mm_context_t;

-#ifdef CONFIG_ARC_DBG_TLB_PARANOIA
-void tlb_paranoid_check(unsigned int mm_asid, unsigned long address);
-#else
-#define tlb_paranoid_check(a, b)
-#endif
-
 void arc_mmu_init(void);
 extern char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len);
 void read_decode_mmu_bcr(void);

diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 349fb7a75d1d..6079dfd129b9 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -400,7 +400,6 @@ void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep)
 	 *
 	 * Removing the assumption involves
 	 * -Using vma->mm->context{ASID,SASID}, as opposed to MMU reg.
-	 * -Fix the TLB paranoid debug code to not trigger false negatives.
 	 * -More importantly it makes this handler inconsistent with fast-path
 	 *  TLB Refill handler which always deals with "current"
 	 *
@@ -423,8 +422,6 @@ void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep)

 	local_irq_save(flags);

-	tlb_paranoid_check(asid_mm(vma->vm_mm, smp_processor_id()), vaddr);
-
 	vaddr &= PAGE_MASK;

 	/* update this PTE credentials */
@@ -818,40 +815,3 @@ void do_tlb_overlap_fault(unsigned long cause, unsigned long address,

 	local_irq_restore(flags);
 }
-
-/***********************************************************************
- * Diagnostic Routines
- *  -Called from Low Level TLB Handlers if things don;t look good
- **********************************************************************/
-
-#ifdef CONFIG_ARC_DBG_TLB_PARANOIA
-
-/*
- * Low Level ASM TLB handler calls this if it finds that HW and SW ASIDS
- * don't match
- */
-void print_asid_mismatch(int mm_asid, int mmu_asid, int is_fast_path)
-{
-	pr_emerg("ASID Mismatch in %s Path Handler: sw-pid=0x%x hw-pid=0x%x\n",
-	       is_fast_path ? "Fast" : "Slow", mm_asid, mmu_asid);
-
-	__asm__ __volatile__("flag 1");
-}
-
-void tlb_paranoid_check(unsigned int mm_asid, unsigned long addr)
-{
-	unsigned int mmu_asid;
-
-	mmu_asid = read_aux_reg(ARC_REG_PID) & 0xff;
-
-	/*
-	 * At the time of a TLB miss/installation
-	 *   - HW version needs to match SW version
-	 *   - SW needs to have a valid ASID
-	 */
-	if (addr < 0x70000000 &&
-	    ((mm_asid == MM_CTXT_NO_ASID) ||
-	      (mmu_asid != (mm_asid & MM_CTXT_ASID_MASK))))
-		print_asid_mismatch(mm_asid, mmu_asid, 0);
-}
-#endif

diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index bcd2909c691f..0b4bb62fa0ab 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -93,11 +93,6 @@ ex_saved_reg1:
 	st_s  r1, [r0, 4]
 	st_s  r2, [r0, 8]
 	st_s  r3, [r0, 12]
-
-	; VERIFY if the ASID in MMU-PID Reg is same as
-	; one in Linux data structures
-
-	tlb_paranoid_check_asm
 .endm

 .macro TLBMISS_RESTORE_REGS
@@ -146,51 +141,6 @@ ex_saved_reg1:

 #endif

-;============================================================================
-;  Troubleshooting Stuff
-;============================================================================
-
-; Linux keeps ASID (Address Space ID) in task->active_mm->context.asid
-; When Creating TLB Entries, instead of doing 3 dependent loads from memory,
-; we use the MMU PID Reg to get current ASID.
-; In bizzare scenrios SW and HW ASID can get out-of-sync which is trouble.
-; So we try to detect this in TLB Mis shandler
-
-.macro tlb_paranoid_check_asm
-
-#ifdef CONFIG_ARC_DBG_TLB_PARANOIA
-
-	GET_CURR_TASK_ON_CPU  r3
-	ld r0, [r3, TASK_ACT_MM]
-	ld r0, [r0, MM_CTXT+MM_CTXT_ASID]
-	breq r0, 0, 55f	; Error if no ASID allocated
-
-	lr r1, [ARC_REG_PID]
-	and r1, r1, 0xFF
-
-	and r2, r0, 0xFF	; MMU PID bits only for comparison
-	breq r1, r2, 5f
-
-55:
-	; Error if H/w and S/w ASID don't match, but NOT if in kernel mode
-	lr  r2, [erstatus]
-	bbit0 r2, STATUS_U_BIT, 5f
-
-	; We sure are in troubled waters, Flag the error, but to do so
-	; need to switch to kernel mode stack to call error routine
-	GET_TSK_STACK_BASE   r3, sp
-
-	; Call printk to shoutout aloud
-	mov r2, 1
-	j print_asid_mismatch
-
-5:	; ASIDs match so proceed normally
-	nop
-
-#endif
-
-.endm
-
 ;============================================================================
 ;TLB Miss handling Code
 ;============================================================================
From patchwork Wed Aug 11 00:42:43 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429661
From: Vineet Gupta
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 03/18] ARC: mm: move mmu/cache externs out to setup.h
Date: Tue, 10 Aug 2021 17:42:43 -0700
Message-Id: <20210811004258.138075-4-vgupta@kernel.org>
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>

Signed-off-by: Vineet Gupta
---
 arch/arc/include/asm/cache.h |  4 ----
 arch/arc/include/asm/mmu.h   |  4 ----
 arch/arc/include/asm/setup.h | 12 ++++++++++--
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/arc/include/asm/cache.h b/arch/arc/include/asm/cache.h
index d8ece4292388..f0f1fc5d62b6 100644
--- a/arch/arc/include/asm/cache.h
+++ b/arch/arc/include/asm/cache.h
@@ -62,10 +62,6 @@
 #define ARCH_SLAB_MINALIGN	8
 #endif

-extern void arc_cache_init(void);
-extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len);
-extern void read_decode_cache_bcr(void);
-
 extern int ioc_enable;
 extern unsigned long perip_base, perip_end;

diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h
index 38a036508699..762cfe66e16b 100644
--- a/arch/arc/include/asm/mmu.h
+++ b/arch/arc/include/asm/mmu.h
@@ -64,10 +64,6 @@ typedef struct {
 	unsigned long asid[NR_CPUS];	/* 8 bit MMU PID + Generation cycle */
 } mm_context_t;

-void arc_mmu_init(void);
-extern char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len);
-void read_decode_mmu_bcr(void);
-
 static inline int is_pae40_enabled(void)
 {
 	return IS_ENABLED(CONFIG_ARC_HAS_PAE40);

diff --git a/arch/arc/include/asm/setup.h b/arch/arc/include/asm/setup.h
index 01f85478170d..028a8cf76206 100644
--- a/arch/arc/include/asm/setup.h
+++ b/arch/arc/include/asm/setup.h
@@ -2,8 +2,8 @@
 /*
  * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
  */
-#ifndef __ASMARC_SETUP_H
-#define __ASMARC_SETUP_H
+#ifndef __ASM_ARC_SETUP_H
+#define __ASM_ARC_SETUP_H

 #include

@@ -34,4 +34,12 @@ long __init arc_get_mem_sz(void);
 #define IS_AVAIL2(v, s, cfg)	IS_AVAIL1(v, s), IS_AVAIL1(v, IS_USED_CFG(cfg))
 #define IS_AVAIL3(v, v2, s)	IS_AVAIL1(v, s), IS_AVAIL1(v, IS_DISABLED_RUN(v2))

+extern void arc_mmu_init(void);
+extern char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len);
+extern void read_decode_mmu_bcr(void);
+
+extern void arc_cache_init(void);
+extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len);
+extern void read_decode_cache_bcr(void);
+
 #endif /* __ASMARC_SETUP_H */
From patchwork Wed Aug 11 00:42:44 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429663
From: Vineet Gupta
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 04/18] ARC: mm: remove pgd_offset_fast
Date: Tue, 10 Aug 2021 17:42:44 -0700
Message-Id: <20210811004258.138075-5-vgupta@kernel.org>
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>

Signed-off-by: Vineet Gupta
---
 arch/arc/include/asm/pgtable.h | 23 -----------------------
 arch/arc/mm/fault.c            |  2 +-
 2 files changed, 1 insertion(+), 24 deletions(-)

diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 0c3e220bd2b4..80b57c14b430 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -284,29 +284,6 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 	set_pte(ptep, pteval);
 }

-/*
- * Macro to quickly access the PGD entry, utlising the fact that some
- * arch may cache the pointer to Page Directory of "current" task
- * in a MMU register
- *
- * Thus task->mm->pgd (3 pointer dereferences, cache misses etc simply
- * becomes read a register
- *
- * ********CAUTION*******:
- * Kernel code might be dealing with some mm_struct of NON "current"
- * Thus use this macro only when you are certain that "current" is current
- * e.g. when dealing with signal frame setup code etc
- */
-#ifdef ARC_USE_SCRATCH_REG
-#define pgd_offset_fast(mm, addr)	\
-({					\
-	pgd_t *pgd_base = (pgd_t *) read_aux_reg(ARC_REG_SCRATCH_DATA0);  \
-	pgd_base + pgd_index(addr);	\
-})
-#else
-#define pgd_offset_fast(mm, addr)	pgd_offset(mm, addr)
-#endif
-
 extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE);
 void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
 		      pte_t *ptep);

diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index f5657cb68e4f..41f154320964 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -33,7 +33,7 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address)
 	pud_t *pud, *pud_k;
 	pmd_t *pmd, *pmd_k;

-	pgd = pgd_offset_fast(current->active_mm, address);
+	pgd = pgd_offset(current->active_mm, address);
 	pgd_k = pgd_offset_k(address);

 	if (!pgd_present(*pgd_k))
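For illustration (not part of the patch), a simplified sketch of what the
generic pgd_offset() the code now uses boils down to. The PGDIR_SHIFT and
PTRS_PER_PGD values below are illustrative, not ARC's actual configuration:

    /* Simplified sketch: pgd_offset() is just indexing off mm->pgd */
    #define PGDIR_SHIFT	21
    #define PTRS_PER_PGD	(1U << (32 - PGDIR_SHIFT))

    struct mm_struct { unsigned long *pgd; };

    #define pgd_index(addr)	(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))

    static unsigned long *pgd_offset_sketch(struct mm_struct *mm, unsigned long addr)
    {
    	/* one load of mm->pgd plus pointer arithmetic: cheap enough that
    	 * the scratch-register shortcut no longer earns a special macro */
    	return mm->pgd + pgd_index(addr);
    }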
From patchwork Wed Aug 11 00:42:45 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429665
From: Vineet Gupta
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 05/18] ARC: mm: Fixes to allow STRICT_MM_TYPECHECKS
Date: Tue, 10 Aug 2021 17:42:45 -0700
Message-Id: <20210811004258.138075-6-vgupta@kernel.org>
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>

Signed-off-by: Vineet Gupta
---
 arch/arc/mm/tlb.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 6079dfd129b9..15cbc285b0de 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -71,7 +71,7 @@ static void tlb_entry_erase(unsigned int vaddr_n_asid)
 	}
 }

-static void tlb_entry_insert(unsigned int pd0, pte_t pd1)
+static void tlb_entry_insert(unsigned int pd0, phys_addr_t pd1)
 {
 	unsigned int idx;

@@ -109,13 +109,16 @@ static void tlb_entry_erase(unsigned int vaddr_n_asid)
 	write_aux_reg(ARC_REG_TLBCOMMAND, TLBDeleteEntry);
 }

-static void tlb_entry_insert(unsigned int pd0, pte_t pd1)
+static void tlb_entry_insert(unsigned int pd0, phys_addr_t pd1)
 {
 	write_aux_reg(ARC_REG_TLBPD0, pd0);
-	write_aux_reg(ARC_REG_TLBPD1, pd1);

-	if (is_pae40_enabled())
+	if (!is_pae40_enabled()) {
+		write_aux_reg(ARC_REG_TLBPD1, pd1);
+	} else {
+		write_aux_reg(ARC_REG_TLBPD1, pd1 & 0xFFFFFFFF);
 		write_aux_reg(ARC_REG_TLBPD1HI, (u64)pd1 >> 32);
+	}

 	write_aux_reg(ARC_REG_TLBCOMMAND, TLBInsertEntry);
 }
@@ -391,7 +394,7 @@ void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep)
 	unsigned long flags;
 	unsigned int asid_or_sasid, rwx;
 	unsigned long pd0;
-	pte_t pd1;
+	phys_addr_t pd1;

 	/*
 	 * create_tlb() assumes that current->mm == vma->mm, since
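For illustration (not part of the patch), a standalone sketch of the PAE40
split the new tlb_entry_insert() performs: an up-to-40-bit pd1 value is
programmed as two 32-bit register writes, the low word to TLBPD1 and bits
39:32 to TLBPD1HI:

    #include <assert.h>
    #include <stdint.h>

    static void split_pd1(uint64_t pd1, uint32_t *pd1_lo, uint32_t *pd1_hi)
    {
    	*pd1_lo = (uint32_t)(pd1 & 0xFFFFFFFFu);  /* would go to TLBPD1 */
    	*pd1_hi = (uint32_t)(pd1 >> 32);          /* would go to TLBPD1HI */
    }

    int main(void)
    {
    	uint32_t lo, hi;

    	split_pd1(0xAB12345678ull, &lo, &hi);	/* a 40-bit physical address */
    	assert(lo == 0x12345678u && hi == 0xABu);
    	return 0;
    }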
From patchwork Wed Aug 11 00:42:46 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429667
From: Vineet Gupta
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 06/18] ARC: mm: Enable STRICT_MM_TYPECHECKS
Date: Tue, 10 Aug 2021 17:42:46 -0700
Message-Id: <20210811004258.138075-7-vgupta@kernel.org>
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>

In the past I've refrained from doing this (at least 2 times) due to the
slight code bloat from the ABI implications of pte_t etc becoming struct.

Per the ARC ABI, functions return a struct via memory and not through
register r0, even if the struct would fit in register(s):

 - caller allocates space on stack and passes the address as first arg
   (r0), shifting the rest of the args by one
 - callee creates the return struct in memory (referenced via r0)

This time around the code actually shrunk slightly (due to subtle inlining
heuristic effects), but is still slightly inefficient due to return values
passed through memory. That however seems like a small cost compared to the
maintenance burden, given the impending new mmu support for page walk etc.
(A toy illustration of the wrapped types follows the diff.)

Signed-off-by: Vineet Gupta
---
 arch/arc/include/asm/page.h | 26 --------------------------
 arch/arc/mm/ioremap.c       |  2 +-
 2 files changed, 1 insertion(+), 27 deletions(-)

diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
index 4a9d33372fe2..c4ac827379cd 100644
--- a/arch/arc/include/asm/page.h
+++ b/arch/arc/include/asm/page.h
@@ -34,12 +34,6 @@ void copy_user_highpage(struct page *to, struct page *from,
 			unsigned long u_vaddr, struct vm_area_struct *vma);
 void clear_user_page(void *to, unsigned long u_vaddr, struct page *page);

-#undef STRICT_MM_TYPECHECKS
-
-#ifdef STRICT_MM_TYPECHECKS
-/*
- * These are used to make use of C type-checking..
- */
 typedef struct {
 #ifdef CONFIG_ARC_HAS_PAE40
 	unsigned long long pte;
@@ -64,26 +58,6 @@ typedef struct {

 #define pte_pgprot(x)	__pgprot(pte_val(x))

-#else /* !STRICT_MM_TYPECHECKS */
-
-#ifdef CONFIG_ARC_HAS_PAE40
-typedef unsigned long long pte_t;
-#else
-typedef unsigned long pte_t;
-#endif
-typedef unsigned long pgd_t;
-typedef unsigned long pgprot_t;
-
-#define pte_val(x)	(x)
-#define pgd_val(x)	(x)
-#define pgprot_val(x)	(x)
-#define __pte(x)	(x)
-#define __pgd(x)	(x)
-#define __pgprot(x)	(x)
-#define pte_pgprot(x)	(x)
-
-#endif
-
 typedef pte_t * pgtable_t;

 /*

diff --git a/arch/arc/mm/ioremap.c b/arch/arc/mm/ioremap.c
index 95c649fbc95a..052bbd8b1e5f 100644
--- a/arch/arc/mm/ioremap.c
+++ b/arch/arc/mm/ioremap.c
@@ -39,7 +39,7 @@ void __iomem *ioremap(phys_addr_t paddr, unsigned long size)
 	if (arc_uncached_addr_space(paddr))
 		return (void __iomem *)(u32)paddr;

-	return ioremap_prot(paddr, size, PAGE_KERNEL_NO_CACHE);
+	return ioremap_prot(paddr, size, pgprot_val(PAGE_KERNEL_NO_CACHE));
 }
 EXPORT_SYMBOL(ioremap);
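A standalone toy (not part of the patch) of the STRICT_MM_TYPECHECKS idea:
wrapping the values in one-member structs makes pte/pgd mixups a compile
error. The definitions mirror the ones kept in arch/arc/include/asm/page.h;
mk_pte_sketch() is a hypothetical helper:

    typedef struct { unsigned long pte; } pte_t;
    typedef struct { unsigned long pgd; } pgd_t;

    #define pte_val(x)	((x).pte)
    #define __pte(x)	((pte_t) { (x) })

    static pte_t mk_pte_sketch(unsigned long paddr, unsigned long prot)
    {
    	/* per the ABI note above, this struct return travels via a hidden
    	 * memory slot addressed through r0, not in r0 itself */
    	return __pte(paddr | prot);
    }

    /* pgd_t g = mk_pte_sketch(0x1000, 0x3);  // now a compile-time error */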
From patchwork Wed Aug 11 00:42:47 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429669
From: Vineet Gupta
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 07/18] ARC: ioremap: use more commonly used PAGE_KERNEL based uncached flag
Date: Tue, 10 Aug 2021 17:42:47 -0700
Message-Id: <20210811004258.138075-8-vgupta@kernel.org>
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>

and remove the one-off uncached definition for ARC

Signed-off-by: Vineet Gupta
---
 arch/arc/include/asm/pgtable.h | 3 ---
 arch/arc/mm/ioremap.c          | 3 ++-
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 80b57c14b430..b054c14f8bf6 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -103,9 +103,6 @@
  */
 #define PAGE_KERNEL	__pgprot(_K_PAGE_PERMS | _PAGE_CACHEABLE)

-/* ioremap */
-#define PAGE_KERNEL_NO_CACHE	__pgprot(_K_PAGE_PERMS)
-
 /* Masks for actual TLB "PD"s */
 #define PTE_BITS_IN_PD0	(_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_HW_SZ)
 #define PTE_BITS_RWX	(_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ)

diff --git a/arch/arc/mm/ioremap.c b/arch/arc/mm/ioremap.c
index 052bbd8b1e5f..0ee75aca6e10 100644
--- a/arch/arc/mm/ioremap.c
+++ b/arch/arc/mm/ioremap.c
@@ -39,7 +39,8 @@ void __iomem *ioremap(phys_addr_t paddr, unsigned long size)
 	if (arc_uncached_addr_space(paddr))
 		return (void __iomem *)(u32)paddr;

-	return ioremap_prot(paddr, size, pgprot_val(PAGE_KERNEL_NO_CACHE));
+	return ioremap_prot(paddr, size,
+			    pgprot_val(pgprot_noncached(PAGE_KERNEL)));
 }
 EXPORT_SYMBOL(ioremap);
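A sketch (not part of the patch) of the equivalence the one-liner relies on.
Per the removed define, PAGE_KERNEL is _K_PAGE_PERMS | _PAGE_CACHEABLE and
the old PAGE_KERNEL_NO_CACHE was bare _K_PAGE_PERMS, so a pgprot_noncached()
that clears the cacheable bit reproduces the old flags. Bit positions below
are illustrative, and the real pgprot_noncached() may clear more bits:

    #define _PAGE_CACHEABLE		(1 << 0)	/* illustrative bit values */
    #define _K_PAGE_PERMS		(0x7 << 1)

    #define PAGE_KERNEL			(_K_PAGE_PERMS | _PAGE_CACHEABLE)
    #define PAGE_KERNEL_NO_CACHE	(_K_PAGE_PERMS)	/* the define being removed */

    #define pgprot_noncached_sketch(prot)	((prot) & ~_PAGE_CACHEABLE)

    _Static_assert(pgprot_noncached_sketch(PAGE_KERNEL) == PAGE_KERNEL_NO_CACHE,
    	       "noncached(PAGE_KERNEL) reproduces the old flags");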
From patchwork Wed Aug 11 00:42:48 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429671
From: Vineet Gupta
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 08/18] ARC: mm: pmd_populate* to use the canonical set_pmd (and drop pmd_set)
Date: Tue, 10 Aug 2021 17:42:48 -0700
Message-Id: <20210811004258.138075-9-vgupta@kernel.org>
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>

Signed-off-by: Vineet Gupta
---
 arch/arc/include/asm/pgalloc.h | 21 ++++++++++++++-------
 arch/arc/include/asm/pgtable.h |  6 ------
 2 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h
index a32ca3104ced..356237b9c537 100644
--- a/arch/arc/include/asm/pgalloc.h
+++ b/arch/arc/include/asm/pgalloc.h
@@ -33,16 +33,23 @@
 #include

 static inline void
-pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte)
+pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *ptep)
 {
-	pmd_set(pmd, pte);
+	/*
+	 * The cast to long below is OK even when pte is long long (PAE40)
+	 * Despite "wider" pte, the pte table needs to be in non-PAE low memory
+	 * as all higher levels can only hold long pointers.
+	 *
+	 * The cast itself is needed given simplistic definition of set_pmd()
+	 */
+	set_pmd(pmdp, __pmd((unsigned long)ptep));
 }

-static inline void
-pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t ptep)
-{
-	pmd_set(pmd, (pte_t *) ptep);
-}
+/*
+ * pmd_populate can be implemented in terms of pmd_populate_kernel since
+ * pgtable_t is pte * on ARC
+ */
+#define pmd_populate(mm, pmdp, ptep) pmd_populate_kernel(mm, pmdp, ptep)

 static inline int __get_order_pgd(void)
 {

diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index b054c14f8bf6..f762bacb2358 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -222,12 +222,6 @@ extern char empty_zero_page[PAGE_SIZE];
 /* find the logical addr (phy for ARC) of the Page Tbl ref by PMD entry */
 #define pmd_page_vaddr(pmd)	(pmd_val(pmd) & PAGE_MASK)

-/* In a 2 level sys, setup the PGD entry with PTE value */
-static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
-{
-	pmd_val(*pmdp) = (unsigned long)ptep;
-}
-
 #define pte_none(x)	(!pte_val(x))
 #define pte_present(x)	(pte_val(x) & _PAGE_PRESENT)
 #define pte_clear(mm, addr, ptep)	set_pte_at(mm, addr, ptep, __pte(0))
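A standalone sketch (not part of the patch, hypothetical types) of what
populating a pmd entry means on a 2-level table: the entry simply holds the
pte table's address, which is why the cast to long is safe even with 64-bit
PAE40 ptes:

    typedef struct { unsigned long pmd; } pmd_t;
    #define __pmd(x)	((pmd_t) { (x) })

    static inline void set_pmd_sketch(pmd_t *pmdp, pmd_t pmdval)
    {
    	*pmdp = pmdval;
    }

    static inline void pmd_populate_sketch(pmd_t *pmdp, unsigned long *pte_table)
    {
    	/* the pte table itself lives in non-PAE low memory, so a 32-bit
    	 * long holds its address even when individual ptes are 64-bit */
    	set_pmd_sketch(pmdp, __pmd((unsigned long)pte_table));
    }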
From patchwork Wed Aug 11 00:42:49 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429673
From: Vineet Gupta
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 09/18] ARC: mm: non-functional code cleanup ahead of 3 levels
Date: Tue, 10 Aug 2021 17:42:49 -0700
Message-Id: <20210811004258.138075-10-vgupta@kernel.org>
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>

Signed-off-by: Vineet Gupta
---
 arch/arc/include/asm/page.h    | 30 ++++++++++++++++--------------
 arch/arc/include/asm/pgalloc.h |  7 ++++++-
 2 files changed, 22 insertions(+), 15 deletions(-)

diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
index c4ac827379cd..313e6f543d2d 100644
--- a/arch/arc/include/asm/page.h
+++ b/arch/arc/include/asm/page.h
@@ -34,6 +34,13 @@ void copy_user_highpage(struct page *to, struct page *from,
 			unsigned long u_vaddr, struct vm_area_struct *vma);
 void clear_user_page(void *to, unsigned long u_vaddr, struct page *page);

+typedef struct {
+	unsigned long pgd;
+} pgd_t;
+
+#define pgd_val(x)	((x).pgd)
+#define __pgd(x)	((pgd_t) { (x) })
+
 typedef struct {
 #ifdef CONFIG_ARC_HAS_PAE40
 	unsigned long long pte;
@@ -41,22 +48,17 @@ typedef struct {
 	unsigned long pte;
 #endif
 } pte_t;
-typedef struct {
-	unsigned long pgd;
-} pgd_t;
+
+#define pte_val(x)	((x).pte)
+#define __pte(x)	((pte_t) { (x) })
+
 typedef struct {
 	unsigned long pgprot;
 } pgprot_t;

-#define pte_val(x)	((x).pte)
-#define pgd_val(x)	((x).pgd)
-#define pgprot_val(x)	((x).pgprot)
-
-#define __pte(x)	((pte_t) { (x) })
-#define __pgd(x)	((pgd_t) { (x) })
-#define __pgprot(x)	((pgprot_t) { (x) })
-
-#define pte_pgprot(x)	__pgprot(pte_val(x))
+#define pgprot_val(x)	((x).pgprot)
+#define __pgprot(x)	((pgprot_t) { (x) })
+#define pte_pgprot(x)	__pgprot(pte_val(x))

 typedef pte_t * pgtable_t;

@@ -96,8 +98,8 @@ extern int pfn_valid(unsigned long pfn);
  * virt here means link-address/program-address as embedded in object code.
  * And for ARC, link-addr = physical address
  */
-#define __pa(vaddr)	((unsigned long)(vaddr))
-#define __va(paddr)	((void *)((unsigned long)(paddr)))
+#define __pa(vaddr)		((unsigned long)(vaddr))
+#define __va(paddr)		((void *)((unsigned long)(paddr)))

 #define virt_to_page(kaddr)	pfn_to_page(virt_to_pfn(kaddr))
 #define virt_addr_valid(kaddr)	pfn_valid(virt_to_pfn(kaddr))

diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h
index 356237b9c537..0cf73431eb89 100644
--- a/arch/arc/include/asm/pgalloc.h
+++ b/arch/arc/include/asm/pgalloc.h
@@ -29,6 +29,11 @@
 #ifndef _ASM_ARC_PGALLOC_H
 #define _ASM_ARC_PGALLOC_H

+/*
+ * For ARC, pgtable_t is not struct page *, but pte_t * (to avoid
+ * extraneous page_address() calculations) hence can't use
+ * use asm-generic/pgalloc.h which assumes it being struct page *
+ */
 #include
 #include

@@ -36,7 +41,7 @@ static inline void
 pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *ptep)
 {
 	/*
-	 * The cast to long below is OK even when pte is long long (PAE40)
+	 * The cast to long below is OK in 32-bit PAE40 regime with long long pte
 	 * Despite "wider" pte, the pte table needs to be in non-PAE low memory
 	 * as all higher levels can only hold long pointers.
 	 *
[198.145.29.99]) by imf14.hostedemail.com (Postfix) with ESMTP id 90146600AAAC for ; Wed, 11 Aug 2021 00:43:21 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 8227160F11; Wed, 11 Aug 2021 00:43:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628642600; bh=ALJrrGc6ovgV0DuDRmC1sw13c6c8XRN1Ta2YoCgCtk8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=CN7kjHjEJKlf7pYQhCwbva8FhCN0JrpHf0lPuJGgsbNcMyWCLb6FuxHHHMH6Gsuyi 77eEhwoGxcXUZexu83WSrfDFXWGxaXY2JzLGAKFgvWYXUcM9Ynmx71I0sGiW7XVxDp H++DJ7Ee8iwEj9HYBKi7JT4VaiW0qM9ZqP4IFI8p8onJvTtSgdKuGBrQHL2ZikMypN sb0MGUPqJmr089evcC1CnuzDZHe3pcPOQOVF+s+PmHvzQOUyQUumLJ4iJrQ1g08rQo aRZ2kd1bLswsExSdBtwxu+rEvXMd0TXUmMjePX9duerkkEgXQEe/0xv/htz9hX/Utn cba0kb11dVa6w== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH 10/18] ARC: mm: move MMU specific bits out of ASID allocator Date: Tue, 10 Aug 2021 17:42:50 -0700 Message-Id: <20210811004258.138075-11-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org> References: <20210811004258.138075-1-vgupta@kernel.org> MIME-Version: 1.0 Authentication-Results: imf14.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=CN7kjHjE; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf14.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-Stat-Signature: stmpaj9ycttne4guzhq77qsyb8dfmkww X-Rspamd-Queue-Id: 90146600AAAC X-Rspamd-Server: rspam01 X-HE-Tag: 1628642601-12271 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: And while at it, rewrite commentary on ASID allocator Signed-off-by: Vineet Gupta --- arch/arc/include/asm/mmu.h | 13 +++++++++++++ arch/arc/include/asm/mmu_context.h | 28 +++++++++++++--------------- arch/arc/mm/tlb.c | 11 ++++------- 3 files changed, 30 insertions(+), 22 deletions(-) diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h index 762cfe66e16b..2cabdfaf2afb 100644 --- a/arch/arc/include/asm/mmu.h +++ b/arch/arc/include/asm/mmu.h @@ -64,6 +64,19 @@ typedef struct { unsigned long asid[NR_CPUS]; /* 8 bit MMU PID + Generation cycle */ } mm_context_t; +static void inline mmu_setup_asid(struct mm_struct *mm, unsigned int asid) +{ + write_aux_reg(ARC_REG_PID, asid | MMU_ENABLE); +} + +static void inline mmu_setup_pgd(struct mm_struct *mm, pgd_t *pgd) +{ + /* PGD cached in MMU reg to avoid 3 mem lookups: task->mm->pgd */ +#ifdef CONFIG_ISA_ARCV2 + write_aux_reg(ARC_REG_SCRATCH_DATA0, (unsigned int)pgd); +#endif +} + static inline int is_pae40_enabled(void) { return IS_ENABLED(CONFIG_ARC_HAS_PAE40); diff --git a/arch/arc/include/asm/mmu_context.h b/arch/arc/include/asm/mmu_context.h index 49318a126879..dda471f5f05b 100644 --- a/arch/arc/include/asm/mmu_context.h +++ b/arch/arc/include/asm/mmu_context.h @@ -15,22 +15,23 @@ #ifndef _ASM_ARC_MMU_CONTEXT_H #define _ASM_ARC_MMU_CONTEXT_H -#include -#include #include +#include #include -/* ARC700 ASID Management +/* ARC ASID Management + * + * MMU tags TLBs with an 8-bit ASID, avoiding need to flush the TLB on + * context-switch. * - * ARC MMU provides 8-bit ASID (0..255) to TAG TLB entries, allowing entries - * with same vaddr (different tasks) to co-exit. 
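As an illustration of the allocation scheme described in this rewritten comment, a minimal C sketch of the generation-based round-robin cycle follows; the demo_* names and masks are stand-ins assumed for illustration, not the kernel's actual MM_CTXT_* helpers:

	#define DEMO_ASID_MASK	0xffUL		/* 8-bit hw ASID: 0..255 */
	#define DEMO_CYCLE_MASK	(~0xffUL)	/* allocation cycle (generation) */

	static unsigned long demo_asid_cpu = DEMO_ASID_MASK + 1;	/* first cycle */

	/* returns cycle | hw ASID; low 8 bits get programmed into the MMU PID reg */
	static unsigned long demo_get_new_asid(unsigned long mm_asid)
	{
		/* same cycle: the ASID cached in the mm is still valid */
		if (!((mm_asid ^ demo_asid_cpu) & DEMO_CYCLE_MASK))
			return mm_asid;

		/* 8-bit serials exhausted: flush TLB and start a new cycle */
		if (!(++demo_asid_cpu & DEMO_ASID_MASK))
			local_flush_tlb_all();

		return demo_asid_cpu;
	}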
This provides for - "Fast Context Switch" i.e. no TLB flush on ctxt-switch + * ASIDs are managed per cpu, so threads of a task can have different + * ASIDs on different CPUs. Global ASID management is needed if hardware + * supports TLB shootdown and/or shared TLB across cores, which ARC doesn't. * - * Linux assigns each task a unique ASID. A simple round-robin allocation - * of H/w ASID is done using software tracker @asid_cpu. - * When it reaches max 255, the allocation cycle starts afresh by flushing - * the entire TLB and wrapping ASID back to zero. + * Each task is assigned a unique ASID, with a simple round-robin allocator + * tracked in @asid_cpu. When the 8-bit value rolls over, a new cycle starts + * from 0 and the TLB is flushed * * A new allocation cycle, post rollover, could potentially reassign an ASID * to a different task. Thus the rule is to refresh the ASID in a new cycle. @@ -93,7 +94,7 @@ static inline void get_new_mmu_context(struct mm_struct *mm) asid_mm(mm, cpu) = asid_cpu(cpu); set_hw: - write_aux_reg(ARC_REG_PID, hw_pid(mm, cpu) | MMU_ENABLE); + mmu_setup_asid(mm, hw_pid(mm, cpu)); local_irq_restore(flags); } @@ -146,10 +147,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, */ cpumask_set_cpu(cpu, mm_cpumask(next)); -#ifdef CONFIG_ISA_ARCV2 - /* PGD cached in MMU reg to avoid 3 mem lookups: task->mm->pgd */ - write_aux_reg(ARC_REG_SCRATCH_DATA0, next->pgd); -#endif + mmu_setup_pgd(next, next->pgd); get_new_mmu_context(next); } diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c index 15cbc285b0de..b68d5798327b 100644 --- a/arch/arc/mm/tlb.c +++ b/arch/arc/mm/tlb.c @@ -716,14 +716,11 @@ void arc_mmu_init(void) if (IS_ENABLED(CONFIG_ARC_HAS_PAE40) && !mmu->pae) panic("Hardware doesn't support PAE40\n"); - /* Enable the MMU */ - write_aux_reg(ARC_REG_PID, MMU_ENABLE); + /* Enable the MMU with ASID 0 */ + mmu_setup_asid(NULL, 0); - /* In arc700/smp needed for re-entrant interrupt handling */ -#ifdef CONFIG_ISA_ARCV2 - /* swapper_pg_dir is the pgd for the kernel, used by vmalloc */ - write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir); -#endif + /* cache the pgd pointer in MMU SCRATCH reg (ARCv2 only) */ + mmu_setup_pgd(NULL, swapper_pg_dir); if (pae40_exist_but_not_enab()) write_aux_reg(ARC_REG_TLBPD1HI, 0); From patchwork Wed Aug 11 00:42:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12429677 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 54D21C4320E for ; Wed, 11 Aug 2021 00:43:39 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EF49D60ED8 for ; Wed, 11 Aug 2021 00:43:38 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org EF49D60ED8 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 26D0F6B0085; Tue, 10 Aug 2021 20:43:23 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 
12FFA6B0080; Tue, 10 Aug 2021 20:43:23 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C7D806B0081; Tue, 10 Aug 2021 20:43:22 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0053.hostedemail.com [216.40.44.53]) by kanga.kvack.org (Postfix) with ESMTP id 9421C6B0080 for ; Tue, 10 Aug 2021 20:43:22 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 4E5E021960 for ; Wed, 11 Aug 2021 00:43:22 +0000 (UTC) X-FDA: 78460951044.02.CBD4C01 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf23.hostedemail.com (Postfix) with ESMTP id 00B729001B22 for ; Wed, 11 Aug 2021 00:43:21 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id E229B60EDF; Wed, 11 Aug 2021 00:43:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628642601; bh=rQG6R4IYtK009L51KoLDJPAjJafHG5PoJg6sbJHan6E=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=DJg+8ueSH0w/uLEJilaVjKfT7/1op5f9m25OKoODB2hgOotmDiKaiWNsqlSSJFjWa I184RC/qwaFr3Qxsf9BJ26Wjz+PnlteUyda2kOpu1K6VbC/nasY6d5hW8DYjn/WWDL hDzArlo4G20vmIyrDFCP94XrRzXCnTmzUfnY9X2IYALSRgT66zRyKRXAJliAEtpt6E Y8r94qPZjjfFn2porTkDsupsx0dUBiMIqZzn6ww0B0d1Sv1nWhSn7L5BYLwxWXm3kq L2IG9lDzZVjkulLxDzXvYk2Tx4sRr1sIbfds+xs2KQPooGRGNYWAXU42SlIi64Pbf5 N7KCYFHYLALnA== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH 11/18] ARC: mm: move MMU specific bits out of entry code Date: Tue, 10 Aug 2021 17:42:51 -0700 Message-Id: <20210811004258.138075-12-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org> References: <20210811004258.138075-1-vgupta@kernel.org> MIME-Version: 1.0 Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=DJg+8ueS; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf23.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-Stat-Signature: ihaoffo7ktt6bm5y369agdsra4a7nmck X-Rspamd-Queue-Id: 00B729001B22 X-Rspamd-Server: rspam01 X-HE-Tag: 1628642601-554931 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Signed-off-by: Vineet Gupta --- arch/arc/kernel/entry.S | 6 ------ arch/arc/mm/tlb.c | 3 +++ 2 files changed, 3 insertions(+), 6 deletions(-) diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S index 2cb8dfe866b6..684efd094520 100644 --- a/arch/arc/kernel/entry.S +++ b/arch/arc/kernel/entry.S @@ -101,12 +101,6 @@ ENTRY(EV_MachineCheck) lr r0, [efa] mov r1, sp - ; hardware auto-disables MMU, re-enable it to allow kernel vaddr - ; access for say stack unwinding of modules for crash dumps - lr r3, [ARC_REG_PID] - or r3, r3, MMU_ENABLE - sr r3, [ARC_REG_PID] - lsr r3, r2, 8 bmsk r3, r3, 7 brne r3, ECR_C_MCHK_DUP_TLB, 1f diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c index b68d5798327b..34f16e0b41e6 100644 --- a/arch/arc/mm/tlb.c +++ b/arch/arc/mm/tlb.c @@ -813,5 +813,8 @@ void do_tlb_overlap_fault(unsigned long cause, unsigned long address, } } + /* Re-enable MMU as hardware may have auto-disabled it upon exception */ + write_aux_reg(ARC_REG_PID, 
read_aux_reg(ARC_REG_PID) | MMU_ENABLE); + local_irq_restore(flags); } From patchwork Wed Aug 11 00:42:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12429679 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 330CAC4320A for ; Wed, 11 Aug 2021 00:43:41 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B35A560E97 for ; Wed, 11 Aug 2021 00:43:40 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org B35A560E97 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 546FC6B0080; Tue, 10 Aug 2021 20:43:23 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 268FD6B0082; Tue, 10 Aug 2021 20:43:23 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0BB856B0083; Tue, 10 Aug 2021 20:43:22 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id C2F6D6B0085 for ; Tue, 10 Aug 2021 20:43:22 -0400 (EDT) Received: from smtpin31.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 70386180945DF for ; Wed, 11 Aug 2021 00:43:22 +0000 (UTC) X-FDA: 78460951044.31.A541A65 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf01.hostedemail.com (Postfix) with ESMTP id 00DCF504B363 for ; Wed, 11 Aug 2021 00:43:21 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 47A0D610A3; Wed, 11 Aug 2021 00:43:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628642601; bh=L2ko66DkoJjHQTi6LLynJZ5hIWAzaz3TxQBWShlZROY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=caFPEOdX8sLFSJ91RT+Bqvni8K7mR61I4lTaqPH22gnLnj6HUZ38TSGgxW47eLhAk vX4qGEGOUdGPUZRf/NPu39s+lMVgjI7j5w1vAHOE2+WWO7IHROcE85laoGnAUV2rHC yhflhAA/1v4m2ksutdopSKWTXXWGRC3x/ZGWph2AWA0jQF/WG8mPURQV8H20dddDzT rx8hs+wE8wy3oSb80gvf3Pt7zWcyXM80/XohWh7k5ypxOGNG3zbD24adKkn5c0iF/U hNCFdaveYIiEqk0/tk3ThxkZv6tZbHxPULLlF1ncX3Z5sAHn12onG3ycdZL7l5m5Df o8dRZR5fdjN/A== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH 12/18] ARC: mm: disintegrate mmu.h (arcv2 bits out) Date: Tue, 10 Aug 2021 17:42:52 -0700 Message-Id: <20210811004258.138075-13-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org> References: <20210811004258.138075-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 00DCF504B363 Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=caFPEOdX; dmarc=pass (policy=none) header.from=kernel.org; spf=pass 
(imf01.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-Stat-Signature: 9bmnkt7z6z9w74hx1arqjpsganrh8i6i X-HE-Tag: 1628642601-696933 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: non functional change Signed-off-by: Vineet Gupta --- arch/arc/include/asm/mmu-arcv2.h | 94 ++++++++++++++++++++++++++++++ arch/arc/include/asm/mmu.h | 72 +---------------------- arch/arc/include/asm/mmu_context.h | 1 + arch/arc/include/asm/pgtable.h | 6 -- arch/arc/mm/tlbex.S | 2 +- 5 files changed, 97 insertions(+), 78 deletions(-) create mode 100644 arch/arc/include/asm/mmu-arcv2.h diff --git a/arch/arc/include/asm/mmu-arcv2.h b/arch/arc/include/asm/mmu-arcv2.h new file mode 100644 index 000000000000..837a54e39539 --- /dev/null +++ b/arch/arc/include/asm/mmu-arcv2.h @@ -0,0 +1,94 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2004, 2007-2010, 2011-2012, 2019-20 Synopsys, Inc. (www.synopsys.com) + * + * MMUv3 (arc700) / MMUv4 (archs) are software page walked and software managed. + * This file contains the TLB access registers and commands + */ + +#ifndef _ASM_ARC_MMU_ARCV2_H +#define _ASM_ARC_MMU_ARCV2_H + +/* + * TLB Management regs + */ +#define ARC_REG_MMU_BCR 0x06f + +#ifdef CONFIG_ARC_MMU_V3 +#define ARC_REG_TLBPD0 0x405 +#define ARC_REG_TLBPD1 0x406 +#define ARC_REG_TLBPD1HI 0 /* Dummy: allows common code */ +#define ARC_REG_TLBINDEX 0x407 +#define ARC_REG_TLBCOMMAND 0x408 +#define ARC_REG_PID 0x409 +#define ARC_REG_SCRATCH_DATA0 0x418 +#else +#define ARC_REG_TLBPD0 0x460 +#define ARC_REG_TLBPD1 0x461 +#define ARC_REG_TLBPD1HI 0x463 +#define ARC_REG_TLBINDEX 0x464 +#define ARC_REG_TLBCOMMAND 0x465 +#define ARC_REG_PID 0x468 +#define ARC_REG_SCRATCH_DATA0 0x46c +#endif + +/* Bits in MMU PID reg */ +#define __TLB_ENABLE (1 << 31) +#define __PROG_ENABLE (1 << 30) +#define MMU_ENABLE (__TLB_ENABLE | __PROG_ENABLE) + +/* Bits in TLB Index reg */ +#define TLB_LKUP_ERR 0x80000000 + +#ifdef CONFIG_ARC_MMU_V3 +#define TLB_DUP_ERR (TLB_LKUP_ERR | 0x00000001) +#else +#define TLB_DUP_ERR (TLB_LKUP_ERR | 0x40000000) +#endif + +/* + * TLB Commands + */ +#define TLBWrite 0x1 +#define TLBRead 0x2 +#define TLBGetIndex 0x3 +#define TLBProbe 0x4 +#define TLBWriteNI 0x5 /* write JTLB without inv uTLBs */ +#define TLBIVUTLB 0x6 /* explicitly inv uTLBs */ + +#ifdef CONFIG_ARC_MMU_V4 +#define TLBInsertEntry 0x7 +#define TLBDeleteEntry 0x8 +#endif + +/* Masks for actual TLB "PD"s */ +#define PTE_BITS_IN_PD0 (_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_HW_SZ) +#define PTE_BITS_RWX (_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ) + +#define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK_PHYS | _PAGE_CACHEABLE) + +#ifndef __ASSEMBLY__ + +extern int pae40_exist_but_not_enab(void); + +static inline int is_pae40_enabled(void) +{ + return IS_ENABLED(CONFIG_ARC_HAS_PAE40); +} + +static void inline mmu_setup_asid(struct mm_struct *mm, unsigned long asid) +{ + write_aux_reg(ARC_REG_PID, asid | MMU_ENABLE); +} + +static void inline mmu_setup_pgd(struct mm_struct *mm, pgd_t *pgd) +{ + /* PGD cached in MMU reg to avoid 3 mem lookups: task->mm->pgd */ +#ifdef CONFIG_ISA_ARCV2 + write_aux_reg(ARC_REG_SCRATCH_DATA0, (unsigned int)pgd); +#endif +} + +#endif /* !__ASSEMBLY__ */ + +#endif diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h index 2cabdfaf2afb..6a27a4caa44c 100644 --- a/arch/arc/include/asm/mmu.h +++ b/arch/arc/include/asm/mmu.h 
@@ -7,83 +7,13 @@ #define _ASM_ARC_MMU_H #ifndef __ASSEMBLY__ -#include /* NR_CPUS */ -#endif - -/* MMU Management regs */ -#define ARC_REG_MMU_BCR 0x06f - -#ifdef CONFIG_ARC_MMU_V3 -#define ARC_REG_TLBPD0 0x405 -#define ARC_REG_TLBPD1 0x406 -#define ARC_REG_TLBPD1HI 0 /* Dummy: allows code sharing with ARC700 */ -#define ARC_REG_TLBINDEX 0x407 -#define ARC_REG_TLBCOMMAND 0x408 -#define ARC_REG_PID 0x409 -#define ARC_REG_SCRATCH_DATA0 0x418 -#else -#define ARC_REG_TLBPD0 0x460 -#define ARC_REG_TLBPD1 0x461 -#define ARC_REG_TLBPD1HI 0x463 -#define ARC_REG_TLBINDEX 0x464 -#define ARC_REG_TLBCOMMAND 0x465 -#define ARC_REG_PID 0x468 -#define ARC_REG_SCRATCH_DATA0 0x46c -#endif - -/* Bits in MMU PID register */ -#define __TLB_ENABLE (1 << 31) -#define __PROG_ENABLE (1 << 30) -#define MMU_ENABLE (__TLB_ENABLE | __PROG_ENABLE) - -/* Error code if probe fails */ -#define TLB_LKUP_ERR 0x80000000 - -#ifdef CONFIG_ARC_MMU_V3 -#define TLB_DUP_ERR (TLB_LKUP_ERR | 0x00000001) -#else -#define TLB_DUP_ERR (TLB_LKUP_ERR | 0x40000000) -#endif -/* TLB Commands */ -#define TLBWrite 0x1 -#define TLBRead 0x2 -#define TLBGetIndex 0x3 -#define TLBProbe 0x4 -#define TLBWriteNI 0x5 /* write JTLB without inv uTLBs */ -#define TLBIVUTLB 0x6 /* explicitly inv uTLBs */ - -#ifdef CONFIG_ARC_MMU_V4 -#define TLBInsertEntry 0x7 -#define TLBDeleteEntry 0x8 -#endif - -#ifndef __ASSEMBLY__ +#include /* NR_CPUS */ typedef struct { unsigned long asid[NR_CPUS]; /* 8 bit MMU PID + Generation cycle */ } mm_context_t; -static void inline mmu_setup_asid(struct mm_struct *mm, unsigned int asid) -{ - write_aux_reg(ARC_REG_PID, asid | MMU_ENABLE); -} - -static void inline mmu_setup_pgd(struct mm_struct *mm, pgd_t *pgd) -{ - /* PGD cached in MMU reg to avoid 3 mem lookups: task->mm->pgd */ -#ifdef CONFIG_ISA_ARCV2 - write_aux_reg(ARC_REG_SCRATCH_DATA0, (unsigned int)pgd); #endif -} - -static inline int is_pae40_enabled(void) -{ - return IS_ENABLED(CONFIG_ARC_HAS_PAE40); -} - -extern int pae40_exist_but_not_enab(void); - -#endif /* !__ASSEMBLY__ */ #endif diff --git a/arch/arc/include/asm/mmu_context.h b/arch/arc/include/asm/mmu_context.h index dda471f5f05b..2057f55c7685 100644 --- a/arch/arc/include/asm/mmu_context.h +++ b/arch/arc/include/asm/mmu_context.h @@ -19,6 +19,7 @@ #include #include +#include /* ARC ASID Management * diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h index f762bacb2358..de4576e8d17a 100644 --- a/arch/arc/include/asm/pgtable.h +++ b/arch/arc/include/asm/pgtable.h @@ -103,12 +103,6 @@ */ #define PAGE_KERNEL __pgprot(_K_PAGE_PERMS | _PAGE_CACHEABLE) -/* Masks for actual TLB "PD"s */ -#define PTE_BITS_IN_PD0 (_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_HW_SZ) -#define PTE_BITS_RWX (_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ) - -#define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK_PHYS | _PAGE_CACHEABLE) - /************************************************************************** * Mapping of vm_flags (Generic VM) to PTE flags (arch specific) * diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S index 0b4bb62fa0ab..6b5872197005 100644 --- a/arch/arc/mm/tlbex.S +++ b/arch/arc/mm/tlbex.S @@ -35,7 +35,7 @@ #include #include #include -#include +#include #include #include #include From patchwork Wed Aug 11 00:42:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12429683 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: 
X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 72980C4320A for ; Wed, 11 Aug 2021 00:43:45 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1378A60EB5 for ; Wed, 11 Aug 2021 00:43:45 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 1378A60EB5 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 26CED6B0081; Tue, 10 Aug 2021 20:43:24 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1D1D06B0088; Tue, 10 Aug 2021 20:43:23 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D8A718D0001; Tue, 10 Aug 2021 20:43:23 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0104.hostedemail.com [216.40.44.104]) by kanga.kvack.org (Postfix) with ESMTP id AD36D6B0082 for ; Tue, 10 Aug 2021 20:43:23 -0400 (EDT) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 607A61803952B for ; Wed, 11 Aug 2021 00:43:23 +0000 (UTC) X-FDA: 78460951086.05.01A1C72 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf16.hostedemail.com (Postfix) with ESMTP id CFE60F002499 for ; Wed, 11 Aug 2021 00:43:22 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id CC4F160EB5; Wed, 11 Aug 2021 00:43:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628642602; bh=c2E/fYN0v+g0xfHg2rVvxKo6rGmQOQAUbmipN+98JqE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kdRdzw3qxJOpftjPzoaR2QYxiuPNhjRjnbGlQtgdRzul9n6YVAHff9qtPvEGeuSBs 0rH6kI76O6pGa2zYNsHbW/JYicslHs7/2kH5b90dnrfswZ2kmxXc2fYkuMwkX5+qi4 5i4bIeZLr+o4ufWg6b9Zjao9c/tALI7Pt77Qek0dGP94OeMyhrvaeHaJEehiEhoGX2 gfwWJ+UkyAfgPAoIvY9ZGp8UEaEj6kHstPykkoqn3ro70WyF4yEoIJGVJ1QVLCEWg7 W224wNdj8HoFZbvMzmErwfq+okZVaJde9BN6VOGz4xCTioVWavat9IZYvxLa4FsBMY REFOzkmx5OrXw== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH 13/18] ARC: mm: disintegrate pgtable.h into levels and flags Date: Tue, 10 Aug 2021 17:42:53 -0700 Message-Id: <20210811004258.138075-14-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org> References: <20210811004258.138075-1-vgupta@kernel.org> MIME-Version: 1.0 Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=kdRdzw3q; spf=pass (imf16.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org; dmarc=pass (policy=none) header.from=kernel.org X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: CFE60F002499 X-Stat-Signature: sux811xzwyuifsobuxc6pbu61c8p3egf X-HE-Tag: 1628642602-885311 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: - pgtable-bits-arcv2.h (MMU 
specific page table flags) - pgtable-levels.h (paging levels) No functional changes, but paves way for easy addition of new MMU code with different bits and levels etc Signed-off-by: Vineet Gupta --- arch/arc/include/asm/pgtable-bits-arcv2.h | 149 ++++++++++++ arch/arc/include/asm/pgtable-levels.h | 91 +++++++ arch/arc/include/asm/pgtable.h | 277 +--------------------- 3 files changed, 244 insertions(+), 273 deletions(-) create mode 100644 arch/arc/include/asm/pgtable-bits-arcv2.h create mode 100644 arch/arc/include/asm/pgtable-levels.h diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h new file mode 100644 index 000000000000..183d23bc1e00 --- /dev/null +++ b/arch/arc/include/asm/pgtable-bits-arcv2.h @@ -0,0 +1,149 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com) + */ + +/* + * page table flags for software walked/managed MMUv3 (ARC700) and MMUv4 (HS) + * These correspond to the equivalent bits in the TLB + */ + +#ifndef _ASM_ARC_PGTABLE_BITS_ARCV2_H +#define _ASM_ARC_PGTABLE_BITS_ARCV2_H + +#ifdef CONFIG_ARC_CACHE_PAGES +#define _PAGE_CACHEABLE (1 << 0) /* Cached (H) */ +#else +#define _PAGE_CACHEABLE 0 +#endif + +#define _PAGE_EXECUTE (1 << 1) /* User Execute (H) */ +#define _PAGE_WRITE (1 << 2) /* User Write (H) */ +#define _PAGE_READ (1 << 3) /* User Read (H) */ +#define _PAGE_ACCESSED (1 << 4) /* Accessed (s) */ +#define _PAGE_DIRTY (1 << 5) /* Modified (s) */ +#define _PAGE_SPECIAL (1 << 6) +#define _PAGE_GLOBAL (1 << 8) /* ASID agnostic (H) */ +#define _PAGE_PRESENT (1 << 9) /* PTE/TLB Valid (H) */ + +#ifdef CONFIG_ARC_MMU_V4 +#define _PAGE_HW_SZ (1 << 10) /* Normal/super (H) */ +#else +#define _PAGE_HW_SZ 0 +#endif + +/* Defaults for every user page */ +#define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE) + +/* Set of bits not changed in pte_modify */ +#define _PAGE_CHG_MASK (PAGE_MASK_PHYS | _PAGE_ACCESSED | _PAGE_DIRTY | \ + _PAGE_SPECIAL) + +/* More Abbreviated helpers */ +#define PAGE_U_NONE __pgprot(___DEF) +#define PAGE_U_R __pgprot(___DEF | _PAGE_READ) +#define PAGE_U_W_R __pgprot(___DEF | _PAGE_READ | _PAGE_WRITE) +#define PAGE_U_X_R __pgprot(___DEF | _PAGE_READ | _PAGE_EXECUTE) +#define PAGE_U_X_W_R __pgprot(___DEF \ + | _PAGE_READ | _PAGE_WRITE | _PAGE_EXECUTE) +#define PAGE_KERNEL __pgprot(___DEF | _PAGE_GLOBAL \ + | _PAGE_READ | _PAGE_WRITE | _PAGE_EXECUTE) + +#define PAGE_SHARED PAGE_U_W_R + +#define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) & ~_PAGE_CACHEABLE)) + +/* + * Mapping of vm_flags (Generic VM) to PTE flags (arch specific) + * + * Certain cases have 1:1 mapping + * e.g. __P101 means VM_READ, VM_EXEC and !VM_SHARED + * which directly corresponds to PAGE_U_X_R + * + * Other rules which cause the divergence from 1:1 mapping + * + * 1. Although ARC700 can do exclusive execute/write protection (meaning R + * can be tracked independent of X/W unlike some other CPUs), still to + * keep things consistent with other archs: + * -Write implies Read: W => R + * -Execute implies Read: X => R + * + * 2. 
Pvt Writable doesn't have Write Enabled initially: Pvt-W => !W + * This is to enable COW mechanism + */ + /* xwr */ +#define __P000 PAGE_U_NONE +#define __P001 PAGE_U_R +#define __P010 PAGE_U_R /* Pvt-W => !W */ +#define __P011 PAGE_U_R /* Pvt-W => !W */ +#define __P100 PAGE_U_X_R /* X => R */ +#define __P101 PAGE_U_X_R +#define __P110 PAGE_U_X_R /* Pvt-W => !W and X => R */ +#define __P111 PAGE_U_X_R /* Pvt-W => !W */ + +#define __S000 PAGE_U_NONE +#define __S001 PAGE_U_R +#define __S010 PAGE_U_W_R /* W => R */ +#define __S011 PAGE_U_W_R +#define __S100 PAGE_U_X_R /* X => R */ +#define __S101 PAGE_U_X_R +#define __S110 PAGE_U_X_W_R /* X => R */ +#define __S111 PAGE_U_X_W_R + +#ifndef __ASSEMBLY__ + +#define pte_write(pte) (pte_val(pte) & _PAGE_WRITE) +#define pte_dirty(pte) (pte_val(pte) & _PAGE_DIRTY) +#define pte_young(pte) (pte_val(pte) & _PAGE_ACCESSED) +#define pte_special(pte) (pte_val(pte) & _PAGE_SPECIAL) + +#define PTE_BIT_FUNC(fn, op) \ + static inline pte_t pte_##fn(pte_t pte) { pte_val(pte) op; return pte; } + +PTE_BIT_FUNC(mknotpresent, &= ~(_PAGE_PRESENT)); +PTE_BIT_FUNC(wrprotect, &= ~(_PAGE_WRITE)); +PTE_BIT_FUNC(mkwrite, |= (_PAGE_WRITE)); +PTE_BIT_FUNC(mkclean, &= ~(_PAGE_DIRTY)); +PTE_BIT_FUNC(mkdirty, |= (_PAGE_DIRTY)); +PTE_BIT_FUNC(mkold, &= ~(_PAGE_ACCESSED)); +PTE_BIT_FUNC(mkyoung, |= (_PAGE_ACCESSED)); +PTE_BIT_FUNC(mkspecial, |= (_PAGE_SPECIAL)); +PTE_BIT_FUNC(mkhuge, |= (_PAGE_HW_SZ)); + +static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) +{ + return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)); +} + +static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval) +{ + set_pte(ptep, pteval); +} + +void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep); + +/* Encode swap {type,off} tuple into PTE + * We reserve 13 bits for 5-bit @type, keeping bits 12-5 zero, ensuring that + * PAGE_PRESENT is zero in a PTE holding swap "identifier" + */ +#define __swp_entry(type, off) ((swp_entry_t) \ + { ((type) & 0x1f) | ((off) << 13) }) + +/* Decode a PTE containing swap "identifier "into constituents */ +#define __swp_type(pte_lookalike) (((pte_lookalike).val) & 0x1f) +#define __swp_offset(pte_lookalike) ((pte_lookalike).val >> 13) + +#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) +#define __swp_entry_to_pte(x) ((pte_t) { (x).val }) + +#define kern_addr_valid(addr) (1) + +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +#include +#endif + +#endif /* __ASSEMBLY__ */ + +#endif diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h new file mode 100644 index 000000000000..8ece75335bb5 --- /dev/null +++ b/arch/arc/include/asm/pgtable-levels.h @@ -0,0 +1,91 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2020 Synopsys, Inc. 
(www.synopsys.com) + */ + +/* + * Helpers for implementing paging levels + */ + +#ifndef _ASM_ARC_PGTABLE_LEVELS_H +#define _ASM_ARC_PGTABLE_LEVELS_H + +/* + * 2 level paging setup for software walked MMUv3 (ARC700) and MMUv4 (HS) + * + * [31] 32 bit virtual address [0] + * ------------------------------------------------------- + * | | <---------- PGDIR_SHIFT ----------> | + * | | | <-- PAGE_SHIFT --> | + * ------------------------------------------------------- + * | | | + * | | --> off in page frame + * | ---> index into Page Table + * ----> index into Page Directory + * + * Given software walk, the vaddr split is arbitrarily set to 11:8:13 + * However, enabling super pages in a 2 level regime pegs PGDIR_SHIFT to + * the super page size. + */ + +#if defined(CONFIG_ARC_HUGEPAGE_16M) +#define PGDIR_SHIFT 24 +#elif defined(CONFIG_ARC_HUGEPAGE_2M) +#define PGDIR_SHIFT 21 +#else +/* No Super page case: in theory this can be any number */ +#define PGDIR_SHIFT 21 +#endif + +#define PGDIR_SIZE BIT(PGDIR_SHIFT) /* vaddr span, not PDG sz */ +#define PGDIR_MASK (~(PGDIR_SIZE - 1)) + +#define PTRS_PER_PGD BIT(32 - PGDIR_SHIFT) + +#define PTRS_PER_PTE BIT(PGDIR_SHIFT - PAGE_SHIFT) + +#ifndef __ASSEMBLY__ + +#include + +/* + * 1st level paging: pgd + */ +#define pgd_index(addr) ((addr) >> PGDIR_SHIFT) +#define pgd_offset(mm, addr) (((mm)->pgd) + pgd_index(addr)) +#define pgd_offset_k(addr) pgd_offset(&init_mm, addr) +#define pgd_ERROR(e) \ + pr_crit("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e)) + +/* + * Due to the strange way generic pgtable level folding works, in a 2 level + * setup, pmd_val() returns pgd, so these pmd_* macros actually work on pgd + */ +#define pmd_none(x) (!pmd_val(x)) +#define pmd_bad(x) ((pmd_val(x) & ~PAGE_MASK)) +#define pmd_present(x) (pmd_val(x)) +#define pmd_clear(xp) do { pmd_val(*(xp)) = 0; } while (0) +#define pmd_page_vaddr(pmd) (pmd_val(pmd) & PAGE_MASK) +#define pmd_page(pmd) virt_to_page(pmd_page_vaddr(pmd)) +#define set_pmd(pmdp, pmd) (*(pmdp) = pmd) +#define pmd_pgtable(pmd) ((pgtable_t) pmd_page_vaddr(pmd)) + +#define pte_ERROR(e) \ + pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e)) + +#define pte_none(x) (!pte_val(x)) +#define pte_present(x) (pte_val(x) & _PAGE_PRESENT) +#define pte_clear(mm,addr,ptep) set_pte_at(mm, addr, ptep, __pte(0)) +#define pte_page(pte) pfn_to_page(pte_pfn(pte)) +#define set_pte(ptep, pte) ((*(ptep)) = (pte)) +#define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT) +#define pfn_pte(pfn, prot) __pte(__pfn_to_phys(pfn) | pgprot_val(prot)) +#define mk_pte(page, prot) pfn_pte(page_to_pfn(page), prot) + +#ifdef CONFIG_ISA_ARCV2 +#define pmd_leaf(x) (pmd_val(x) & _PAGE_HW_SZ) +#endif + +#endif /* !__ASSEMBLY__ */ + +#endif diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h index de4576e8d17a..9320b04c04bf 100644 --- a/arch/arc/include/asm/pgtable.h +++ b/arch/arc/include/asm/pgtable.h @@ -1,304 +1,35 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com) - * - * vineetg: May 2011 - * -Folded PAGE_PRESENT (used by VM) and PAGE_VALID (used by MMU) into 1. 
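To make the 11:8:13 split above concrete, a quick worked example, assuming the 8K page default (PAGE_SHIFT == 13) and the PGDIR_SHIFT of 21 defined above:

	PTRS_PER_PGD = 1 << (32 - 21) = 2048 entries -> 8KB pgd table (4-byte entries)
	PTRS_PER_PTE = 1 << (21 - 13) =  256 entries -> 1KB per pte table
	PGDIR_SIZE   = 1 << 21        = 2MB of vaddr spanned per pgd entry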
- * They are semantically the same although in different contexts - * VALID marks a TLB entry exists and it will only happen if PRESENT - * - Utilise some unused free bits to confine PTE flags to 12 bits - * This is a must for 4k pg-sz - * - * vineetg: Mar 2011 - changes to accommodate MMU TLB Page Descriptor mods - * -TLB Locking never really existed, except for initial specs - * -SILENT_xxx not needed for our port - * -Per my request, MMU V3 changes the layout of some of the bits - * to avoid a few shifts in TLB Miss handlers. - * - * vineetg: April 2010 - * -PGD entry no longer contains any flags. If empty it is 0, otherwise has - * Pg-Tbl ptr. Thus pmd_present(), pmd_valid(), pmd_set( ) become simpler - * - * vineetg: April 2010 - * -Switched form 8:11:13 split for page table lookup to 11:8:13 - * -this speeds up page table allocation itself as we now have to memset 1K - * instead of 8k per page table. - * -TODO: Right now page table alloc is 8K and rest 7K is unused - * need to optimise it - * - * Amit Bhor, Sameer Dhavale: Codito Technologies 2004 */ #ifndef _ASM_ARC_PGTABLE_H #define _ASM_ARC_PGTABLE_H #include -#include + +#include +#include #include #include -/************************************************************************** - * Page Table Flags - * - * ARC700 MMU only deals with softare managed TLB entries. - * Page Tables are purely for Linux VM's consumption and the bits below are - * suited to that (uniqueness). Hence some are not implemented in the TLB and - * some have different value in TLB. - * e.g. MMU v2: K_READ bit is 8 and so is GLOBAL (possible because they live in - * seperate PD0 and PD1, which combined forms a translation entry) - * while for PTE perspective, they are 8 and 9 respectively - * with MMU v3: Most bits (except SHARED) represent the exact hardware pos - * (saves some bit shift ops in TLB Miss hdlrs) - */ - -#define _PAGE_CACHEABLE (1<<0) /* Page is cached (H) */ -#define _PAGE_EXECUTE (1<<1) /* Page has user execute perm (H) */ -#define _PAGE_WRITE (1<<2) /* Page has user write perm (H) */ -#define _PAGE_READ (1<<3) /* Page has user read perm (H) */ -#define _PAGE_ACCESSED (1<<4) /* Page is accessed (S) */ -#define _PAGE_DIRTY (1<<5) /* Page modified (dirty) (S) */ -#define _PAGE_SPECIAL (1<<6) - -#define _PAGE_GLOBAL (1<<8) /* Page is global (H) */ -#define _PAGE_PRESENT (1<<9) /* TLB entry is valid (H) */ - -#ifdef CONFIG_ARC_MMU_V4 -#define _PAGE_HW_SZ (1<<10) /* Page Size indicator (H): 0 normal, 1 super */ -#endif - -#define _PAGE_SHARED_CODE (1<<11) /* Shared Code page with cmn vaddr - usable for shared TLB entries (H) */ -/* vmalloc permissions */ -#define _K_PAGE_PERMS (_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ | \ - _PAGE_GLOBAL | _PAGE_PRESENT) - -#ifndef CONFIG_ARC_CACHE_PAGES -#undef _PAGE_CACHEABLE -#define _PAGE_CACHEABLE 0 -#endif - -#ifndef _PAGE_HW_SZ -#define _PAGE_HW_SZ 0 -#endif - -/* Defaults for every user page */ -#define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE) - -/* Set of bits not changed in pte_modify */ -#define _PAGE_CHG_MASK (PAGE_MASK_PHYS | _PAGE_ACCESSED | _PAGE_DIRTY | \ - _PAGE_SPECIAL) -/* More Abbrevaited helpers */ -#define PAGE_U_NONE __pgprot(___DEF) -#define PAGE_U_R __pgprot(___DEF | _PAGE_READ) -#define PAGE_U_W_R __pgprot(___DEF | _PAGE_READ | _PAGE_WRITE) -#define PAGE_U_X_R __pgprot(___DEF | _PAGE_READ | _PAGE_EXECUTE) -#define PAGE_U_X_W_R __pgprot(___DEF | _PAGE_READ | _PAGE_WRITE | \ - _PAGE_EXECUTE) - -#define PAGE_SHARED PAGE_U_W_R - -/* While kernel runs out of unstranslated space, 
vmalloc/modules use a chunk of - * user vaddr space - visible in all addr spaces, but kernel mode only - * Thus Global, all-kernel-access, no-user-access, cached - */ -#define PAGE_KERNEL __pgprot(_K_PAGE_PERMS | _PAGE_CACHEABLE) - -/************************************************************************** - * Mapping of vm_flags (Generic VM) to PTE flags (arch specific) - * - * Certain cases have 1:1 mapping - * e.g. __P101 means VM_READ, VM_EXEC and !VM_SHARED - * which directly corresponds to PAGE_U_X_R - * - * Other rules which cause the divergence from 1:1 mapping - * - * 1. Although ARC700 can do exclusive execute/write protection (meaning R - * can be tracked independet of X/W unlike some other CPUs), still to - * keep things consistent with other archs: - * -Write implies Read: W => R - * -Execute implies Read: X => R - * - * 2. Pvt Writable doesn't have Write Enabled initially: Pvt-W => !W - * This is to enable COW mechanism - */ - /* xwr */ -#define __P000 PAGE_U_NONE -#define __P001 PAGE_U_R -#define __P010 PAGE_U_R /* Pvt-W => !W */ -#define __P011 PAGE_U_R /* Pvt-W => !W */ -#define __P100 PAGE_U_X_R /* X => R */ -#define __P101 PAGE_U_X_R -#define __P110 PAGE_U_X_R /* Pvt-W => !W and X => R */ -#define __P111 PAGE_U_X_R /* Pvt-W => !W */ - -#define __S000 PAGE_U_NONE -#define __S001 PAGE_U_R -#define __S010 PAGE_U_W_R /* W => R */ -#define __S011 PAGE_U_W_R -#define __S100 PAGE_U_X_R /* X => R */ -#define __S101 PAGE_U_X_R -#define __S110 PAGE_U_X_W_R /* X => R */ -#define __S111 PAGE_U_X_W_R - -/**************************************************************** - * 2 tier (PGD:PTE) software page walker - * - * [31] 32 bit virtual address [0] - * ------------------------------------------------------- - * | | <------------ PGDIR_SHIFT ----------> | - * | | | - * | BITS_FOR_PGD | BITS_FOR_PTE | <-- PAGE_SHIFT --> | - * ------------------------------------------------------- - * | | | - * | | --> off in page frame - * | ---> index into Page Table - * ----> index into Page Directory - * - * In a single page size configuration, only PAGE_SHIFT is fixed - * So both PGD and PTE sizing can be tweaked - * e.g. 8K page (PAGE_SHIFT 13) can have - * - PGDIR_SHIFT 21 -> 11:8:13 address split - * - PGDIR_SHIFT 24 -> 8:11:13 address split - * - * If Super Page is configured, PGDIR_SHIFT becomes fixed too, - * so the sizing flexibility is gone. - */ - -#if defined(CONFIG_ARC_HUGEPAGE_16M) -#define PGDIR_SHIFT 24 -#elif defined(CONFIG_ARC_HUGEPAGE_2M) -#define PGDIR_SHIFT 21 -#else -/* - * Only Normal page support so "hackable" (see comment above) - * Default value provides 11:8:13 (8K), 11:9:12 (4K) - */ -#define PGDIR_SHIFT 21 -#endif - -#define BITS_FOR_PTE (PGDIR_SHIFT - PAGE_SHIFT) -#define BITS_FOR_PGD (32 - PGDIR_SHIFT) - -#define PGDIR_SIZE BIT(PGDIR_SHIFT) /* vaddr span, not PDG sz */ -#define PGDIR_MASK (~(PGDIR_SIZE-1)) - -#define PTRS_PER_PTE BIT(BITS_FOR_PTE) -#define PTRS_PER_PGD BIT(BITS_FOR_PGD) - /* * Number of entries a user land program use. * TASK_SIZE is the maximum vaddr that can be used by a userland program. 
*/ #define USER_PTRS_PER_PGD (TASK_SIZE / PGDIR_SIZE) - -/**************************************************************** - * Bucket load of VM Helpers - */ - #ifndef __ASSEMBLY__ -#define pte_ERROR(e) \ - pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e)) -#define pgd_ERROR(e) \ - pr_crit("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e)) - -/* the zero page used for uninitialized and anonymous pages */ extern char empty_zero_page[PAGE_SIZE]; #define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page)) -#define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval)) -#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval) - -/* find the page descriptor of the Page Tbl ref by PMD entry */ -#define pmd_page(pmd) virt_to_page(pmd_val(pmd) & PAGE_MASK) - -/* find the logical addr (phy for ARC) of the Page Tbl ref by PMD entry */ -#define pmd_page_vaddr(pmd) (pmd_val(pmd) & PAGE_MASK) - -#define pte_none(x) (!pte_val(x)) -#define pte_present(x) (pte_val(x) & _PAGE_PRESENT) -#define pte_clear(mm, addr, ptep) set_pte_at(mm, addr, ptep, __pte(0)) - -#define pmd_none(x) (!pmd_val(x)) -#define pmd_bad(x) ((pmd_val(x) & ~PAGE_MASK)) -#define pmd_present(x) (pmd_val(x)) -#define pmd_leaf(x) (pmd_val(x) & _PAGE_HW_SZ) -#define pmd_clear(xp) do { pmd_val(*(xp)) = 0; } while (0) - -#define pte_page(pte) pfn_to_page(pte_pfn(pte)) -#define mk_pte(page, prot) pfn_pte(page_to_pfn(page), prot) -#define pfn_pte(pfn, prot) __pte(__pfn_to_phys(pfn) | pgprot_val(prot)) - -/* Don't use virt_to_pfn for macros below: could cause truncations for PAE40*/ -#define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT) - -/* Zoo of pte_xxx function */ -#define pte_read(pte) (pte_val(pte) & _PAGE_READ) -#define pte_write(pte) (pte_val(pte) & _PAGE_WRITE) -#define pte_dirty(pte) (pte_val(pte) & _PAGE_DIRTY) -#define pte_young(pte) (pte_val(pte) & _PAGE_ACCESSED) -#define pte_special(pte) (pte_val(pte) & _PAGE_SPECIAL) - -#define PTE_BIT_FUNC(fn, op) \ - static inline pte_t pte_##fn(pte_t pte) { pte_val(pte) op; return pte; } - -PTE_BIT_FUNC(mknotpresent, &= ~(_PAGE_PRESENT)); -PTE_BIT_FUNC(wrprotect, &= ~(_PAGE_WRITE)); -PTE_BIT_FUNC(mkwrite, |= (_PAGE_WRITE)); -PTE_BIT_FUNC(mkclean, &= ~(_PAGE_DIRTY)); -PTE_BIT_FUNC(mkdirty, |= (_PAGE_DIRTY)); -PTE_BIT_FUNC(mkold, &= ~(_PAGE_ACCESSED)); -PTE_BIT_FUNC(mkyoung, |= (_PAGE_ACCESSED)); -PTE_BIT_FUNC(exprotect, &= ~(_PAGE_EXECUTE)); -PTE_BIT_FUNC(mkexec, |= (_PAGE_EXECUTE)); -PTE_BIT_FUNC(mkspecial, |= (_PAGE_SPECIAL)); -PTE_BIT_FUNC(mkhuge, |= (_PAGE_HW_SZ)); - -static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) -{ - return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)); -} +extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE); /* Macro to mark a page protection as uncacheable */ #define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) & ~_PAGE_CACHEABLE)) -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) -{ - set_pte(ptep, pteval); -} - extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE); -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, - pte_t *ptep); - -/* Encode swap {type,off} tuple into PTE - * We reserve 13 bits for 5-bit @type, keeping bits 12-5 zero, ensuring that - * PAGE_PRESENT is zero in a PTE holding swap "identifier" - */ -#define __swp_entry(type, off) ((swp_entry_t) { \ - ((type) & 0x1f) | ((off) << 13) }) - -/* Decode a PTE containing swap "identifier "into constituents */ -#define __swp_type(pte_lookalike) (((pte_lookalike).val) & 0x1f) -#define 
__swp_offset(pte_lookalike) ((pte_lookalike).val >> 13) - -/* NOPs, to keep generic kernel happy */ -#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) -#define __swp_entry_to_pte(x) ((pte_t) { (x).val }) - -#define kern_addr_valid(addr) (1) - -#define pmd_pgtable(pmd) ((pgtable_t) pmd_page_vaddr(pmd)) - -/* - * remap a physical page `pfn' of size `size' with page protection `prot' - * into virtual address `from' - */ -#ifdef CONFIG_TRANSPARENT_HUGEPAGE -#include -#endif /* to cope with aliasing VIPT cache */ #define HAVE_ARCH_UNMAPPED_AREA From patchwork Wed Aug 11 00:42:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12429681 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0F6E0C4338F for ; Wed, 11 Aug 2021 00:43:43 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B91E96056C for ; Wed, 11 Aug 2021 00:43:42 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org B91E96056C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id EEC026B0082; Tue, 10 Aug 2021 20:43:23 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E783B6B0081; Tue, 10 Aug 2021 20:43:23 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C7AA06B0088; Tue, 10 Aug 2021 20:43:23 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 93AF46B0081 for ; Tue, 10 Aug 2021 20:43:23 -0400 (EDT) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 3F3888249980 for ; Wed, 11 Aug 2021 00:43:23 +0000 (UTC) X-FDA: 78460951086.30.16586FA Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf01.hostedemail.com (Postfix) with ESMTP id E38EE504BC75 for ; Wed, 11 Aug 2021 00:43:22 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 3FF6260E97; Wed, 11 Aug 2021 00:43:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628642602; bh=xDJ908ZzfO6+yH0suS+Wf2y1mcWCsd5q/TbB74bMqLk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Dx/7wL9VqmoKx8zT58MEHcTkH3sRnAUzpeBYJPrDooFmOZWP2OEPHhDvVRq1SWBpn rI3jxw5FPKPfAnrRZqqV22NN4vrCt+0MgWAaAcv6iAmMfGOL5CcUf9ZZMWGZFRhFBC TrZM6Pf4/X6/FFbDo4TxsniUc0aO+lVH0C/Rsf4XoTQNd2ZiUOzLDYkEVuRC2Xf0eg t6jq2Trd3/FOcPVYECbXSpQcAFgGAuulYPKg7lyK3GA5xXZmxN8T91uBiKk+Ruhzfh cWneKU1nY8fqyoUdpdf5AW5KWz84GyO70HYRgl7oiNE0miDCnjFtg2lYuVFvIvyT7o mHiBMOwpkp2rA== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH 14/18] ARC: mm: hack to allow 2 level build with 4 level code Date: Tue, 10 Aug 
2021 17:42:54 -0700 Message-Id: <20210811004258.138075-15-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org> References: <20210811004258.138075-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: E38EE504BC75 Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b="Dx/7wL9V"; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf01.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-Stat-Signature: 443am89tg3buyp8h4p6pp5hu7goexsda X-HE-Tag: 1628642602-200069 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: PMD_SHIFT is mapped to PUD_SHIFT or PGDIR_SHIFT by the asm-generic/pgtable-* headers, but only for !__ASSEMBLY__. tlbex.S asm code has PTRS_PER_PTE, which uses PMD_SHIFT, hence it barfs for CONFIG_PGTABLE_LEVELS={2,3} and works for 4. So add a workaround local to tlbex.S - the proper fix is to change the asm-generic/pgtable-* headers to expose the defines for __ASSEMBLY__ too Signed-off-by: Vineet Gupta --- arch/arc/mm/tlbex.S | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S index 6b5872197005..d08bd09a0afc 100644 --- a/arch/arc/mm/tlbex.S +++ b/arch/arc/mm/tlbex.S @@ -145,6 +145,14 @@ ex_saved_reg1: ;TLB Miss handling Code ;============================================================================ +#ifndef PMD_SHIFT +#define PMD_SHIFT PUD_SHIFT +#endif + +#ifndef PUD_SHIFT +#define PUD_SHIFT PGDIR_SHIFT +#endif + ;----------------------------------------------------------------------------- ; This macro does the page-table lookup for the faulting address. 
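For context on the PMD_SHIFT/PUD_SHIFT fallbacks just added: the generic headers fold the unused levels roughly as sketched below, but only under #ifndef __ASSEMBLY__, which is why tlbex.S never sees the defines (paraphrased from the asm-generic pgtable-nopXd.h headers, not quoted verbatim):

	/* pgtable-nop4d.h */	#define P4D_SHIFT	PGDIR_SHIFT
	/* pgtable-nopud.h */	#define PUD_SHIFT	P4D_SHIFT
	/* pgtable-nopmd.h */	#define PMD_SHIFT	PUD_SHIFT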
; OUT: r0 = PTE faulted on, r1 = ptr to PTE, r2 = Faulting V-address From patchwork Wed Aug 11 00:42:55 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12429685 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 14965C4320E for ; Wed, 11 Aug 2021 00:43:48 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B36AF6056C for ; Wed, 11 Aug 2021 00:43:47 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org B36AF6056C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id C2D2B6B0089; Tue, 10 Aug 2021 20:43:24 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BB5D56B008C; Tue, 10 Aug 2021 20:43:24 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 91F006B008A; Tue, 10 Aug 2021 20:43:24 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0020.hostedemail.com [216.40.44.20]) by kanga.kvack.org (Postfix) with ESMTP id 591316B0083 for ; Tue, 10 Aug 2021 20:43:24 -0400 (EDT) Received: from smtpin21.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 134B121960 for ; Wed, 11 Aug 2021 00:43:24 +0000 (UTC) X-FDA: 78460951128.21.F18F748 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf24.hostedemail.com (Postfix) with ESMTP id AEC5BB000838 for ; Wed, 11 Aug 2021 00:43:23 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id B562B61008; Wed, 11 Aug 2021 00:43:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628642602; bh=fbU0hMmleCw5vRgQgXl1CKzt9JscXC+sV2x5UO3YuGo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LqPNYuK2wTt0KTP/CLyyyDZ+7qk/5ZvutiPZwv+6AkJVRYI0LKwaa1YjBXbbsRz1n zjEu0d8Eub46NnLcuoa+2ThJ7y5XJtzvkRe3QmLCpcAVUcOAZMKrEoJxlj+TnmPKV5 x38JQmkSO8xA6cVdO5ksMmGXYczvX71ocM+rx+k1Ds2GnbXa8OcfY0kBb91xrAEnuG v+3gU2t1eM3m79CFAT845BEO5PPztfBR41ymp6IFS0FIQMlk9biORrxHHJf1YAV2wx EtF4Vb7lGwYHzLAr6n4Jtxt/EzW8euzUhTxQEHXtRUC0vTCrgYdKY7jV0k28WIIC7M xwmMoRf/1m+gA== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH 15/18] ARC: mm: support 3 levels of page tables Date: Tue, 10 Aug 2021 17:42:55 -0700 Message-Id: <20210811004258.138075-16-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org> References: <20210811004258.138075-1-vgupta@kernel.org> MIME-Version: 1.0 Authentication-Results: imf24.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=LqPNYuK2; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf24.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as 
permitted sender) smtp.mailfrom=vgupta@kernel.org X-Stat-Signature: qeoynp8q7kd9rdzjjkut4m65i85phd1i X-Rspamd-Queue-Id: AEC5BB000838 X-Rspamd-Server: rspam01 X-HE-Tag: 1628642603-859128 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The ARCv2 MMU is software walked and Linux implements 2 levels of paging: pgd/pte. Forthcoming hardware will have multiple levels, so this change preps the mm code for the same. It is also a useful exercise to try multiple levels even on software-walked hardware, to ensure the generic mm code is robust enough to handle it. diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig index 59d5b2a179f6..43cb8aaf57a2 100644 --- a/arch/arc/Kconfig +++ b/arch/arc/Kconfig @@ -314,6 +314,10 @@ config ARC_HUGEPAGE_16M endchoice +config PGTABLE_LEVELS + int "Number of Page table levels" + default 2 + config ARC_COMPACT_IRQ_LEVELS depends on ISA_ARCOMPACT bool "Setup Timer IRQ as high Priority" diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h index 313e6f543d2d..df3cc154ae4a 100644 --- a/arch/arc/include/asm/page.h +++ b/arch/arc/include/asm/page.h @@ -41,6 +41,17 @@ typedef struct { #define pgd_val(x) ((x).pgd) #define __pgd(x) ((pgd_t) { (x) }) +#if CONFIG_PGTABLE_LEVELS > 2 + +typedef struct { + unsigned long pmd; +} pmd_t; + +#define pmd_val(x) ((x).pmd) +#define __pmd(x) ((pmd_t) { (x) }) + +#endif + typedef struct { #ifdef CONFIG_ARC_HAS_PAE40 unsigned long long pte; diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h index 0cf73431eb89..01c2d84418ed 100644 --- a/arch/arc/include/asm/pgalloc.h +++ b/arch/arc/include/asm/pgalloc.h @@ -86,6 +86,28 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) } +#if CONFIG_PGTABLE_LEVELS > 2 + +static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp) +{ + set_pud(pudp, __pud((unsigned long)pmdp)); +} + +static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr) +{ + return (pmd_t *)__get_free_page( + GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_ZERO); +} + +static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) +{ + free_page((unsigned long)pmd); +} + +#define __pmd_free_tlb(tlb, pmd, addr) pmd_free((tlb)->mm, pmd) + +#endif + /* * With software-only page-tables, addr-split for traversal is tweakable and * that directly governs how big tables would be at each level. 
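As a sanity check on the 3 level default introduced in the next hunk (PGDIR_SHIFT 28, PMD_SHIFT 21, 8K page i.e. PAGE_SHIFT 13), the table geometry works out as follows; the arithmetic is illustrative, only the defines come from the patch:

	PTRS_PER_PGD = 1 << (32 - 28) =  16 entries (4 pgd index bits)
	PTRS_PER_PMD = 1 << (28 - 21) = 128 entries (7 pmd index bits)
	PTRS_PER_PTE = 1 << (21 - 13) = 256 entries (8 pte index bits)
	4 + 7 + 8 + 13 = 32 bits, matching the <4> : <7> : <8> : <13> split quoted below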
diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h index 8ece75335bb5..1c2f022d4ad0 100644 --- a/arch/arc/include/asm/pgtable-levels.h +++ b/arch/arc/include/asm/pgtable-levels.h @@ -10,6 +10,8 @@ #ifndef _ASM_ARC_PGTABLE_LEVELS_H #define _ASM_ARC_PGTABLE_LEVELS_H +#if CONFIG_PGTABLE_LEVELS == 2 + /* * 2 level paging setup for software walked MMUv3 (ARC700) and MMUv4 (HS) * @@ -37,16 +39,38 @@ #define PGDIR_SHIFT 21 #endif -#define PGDIR_SIZE BIT(PGDIR_SHIFT) /* vaddr span, not PDG sz */ -#define PGDIR_MASK (~(PGDIR_SIZE - 1)) +#else + +/* + * A default 3 level paging testing setup in software walked MMU + * MMUv4 (8K page): <4> : <7> : <8> : <13> + */ +#define PGDIR_SHIFT 28 +#if CONFIG_PGTABLE_LEVELS > 2 +#define PMD_SHIFT 21 +#endif + +#endif +#define PGDIR_SIZE BIT(PGDIR_SHIFT) +#define PGDIR_MASK (~(PGDIR_SIZE - 1)) #define PTRS_PER_PGD BIT(32 - PGDIR_SHIFT) -#define PTRS_PER_PTE BIT(PGDIR_SHIFT - PAGE_SHIFT) +#if CONFIG_PGTABLE_LEVELS > 2 +#define PMD_SIZE BIT(PMD_SHIFT) +#define PMD_MASK (~(PMD_SIZE - 1)) +#define PTRS_PER_PMD BIT(PGDIR_SHIFT - PMD_SHIFT) +#endif + +#define PTRS_PER_PTE BIT(PMD_SHIFT - PAGE_SHIFT) #ifndef __ASSEMBLY__ +#if CONFIG_PGTABLE_LEVELS > 2 +#include +#else #include +#endif /* * 1st level paging: pgd @@ -57,9 +81,35 @@ #define pgd_ERROR(e) \ pr_crit("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e)) +#if CONFIG_PGTABLE_LEVELS > 2 + +/* In 3 level paging, pud_* macros work on pgd */ +#define pud_none(x) (!pud_val(x)) +#define pud_bad(x) ((pud_val(x) & ~PAGE_MASK)) +#define pud_present(x) (pud_val(x)) +#define pud_clear(xp) do { pud_val(*(xp)) = 0; } while (0) +#define pud_pgtable(pud) ((pmd_t *)(pud_val(pud) & PAGE_MASK)) +#define pud_page(pud) virt_to_page(pud_pgtable(pud)) +#define set_pud(pudp, pud) (*(pudp) = pud) + +/* + * 2nd level paging: pmd + */ +#define pmd_ERROR(e) \ + pr_crit("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e)) + +#define pmd_pfn(pmd) ((pmd_val(pmd) & PMD_MASK) >> PAGE_SHIFT) +#define pfn_pmd(pfn,prot) __pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot)) +#define mk_pmd(page,prot) pfn_pmd(page_to_pfn(page),prot) + +#endif + /* - * Due to the strange way generic pgtable level folding works, in a 2 level - * setup, pmd_val() returns pgd, so these pmd_* macros actually work on pgd + * Due to the strange way generic pgtable level folding works, the pmd_* macros + * - are valid even for 2 levels (which supposedly only has pgd - pte) + * - behave differently for 2 vs. 
+ * In 2 level paging (pgd -> pte), pmd_* macros work on pgd
+ * In 3+ level paging (pgd -> pmd -> pte), pmd_* macros work on pmd
  */
 #define pmd_none(x)		(!pmd_val(x))
 #define pmd_bad(x)		((pmd_val(x) & ~PAGE_MASK))
@@ -70,6 +120,9 @@
 #define set_pmd(pmdp, pmd)	(*(pmdp) = pmd)
 #define pmd_pgtable(pmd)	((pgtable_t) pmd_page_vaddr(pmd))

+/*
+ * 3rd level paging: pte
+ */
 #define pte_ERROR(e) \
 	pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
diff --git a/arch/arc/include/asm/processor.h b/arch/arc/include/asm/processor.h
index e4031ecd3c8c..f28afcf5c6d1 100644
--- a/arch/arc/include/asm/processor.h
+++ b/arch/arc/include/asm/processor.h
@@ -93,7 +93,7 @@ extern unsigned int get_wchan(struct task_struct *p);
 #define VMALLOC_START	(PAGE_OFFSET - (CONFIG_ARC_KVADDR_SIZE << 20))

 /* 1 PGDIR_SIZE each for fixmap/pkmap, 2 PGDIR_SIZE gutter (see asm/highmem.h) */
-#define VMALLOC_SIZE	((CONFIG_ARC_KVADDR_SIZE << 20) - PGDIR_SIZE * 4)
+#define VMALLOC_SIZE	((CONFIG_ARC_KVADDR_SIZE << 20) - PMD_SIZE * 4)

 #define VMALLOC_END	(VMALLOC_START + VMALLOC_SIZE)
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 41f154320964..8da2f0ad8c69 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -39,6 +39,8 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address)
 	if (!pgd_present(*pgd_k))
 		goto bad_area;

+	set_pgd(pgd, *pgd_k);
+
 	p4d = p4d_offset(pgd, address);
 	p4d_k = p4d_offset(pgd_k, address);
 	if (!p4d_present(*p4d_k))
@@ -49,6 +51,8 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address)
 	if (!pud_present(*pud_k))
 		goto bad_area;

+	set_pud(pud, *pud_k);
+
 	pmd = pmd_offset(pud, address);
 	pmd_k = pmd_offset(pud_k, address);
 	if (!pmd_present(*pmd_k))
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 34f16e0b41e6..77da83569b36 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -658,8 +658,8 @@ char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len)
 			IS_USED_CFG(CONFIG_TRANSPARENT_HUGEPAGE));

 	n += scnprintf(buf + n, len - n,
-		      "MMU [v%x]\t: %dk PAGE, %sJTLB %d (%dx%d), uDTLB %d, uITLB %d%s%s\n",
-		       p_mmu->ver, p_mmu->pg_sz_k, super_pg,
+		      "MMU [v%x]\t: %dk PAGE, %s, swalk %d lvl, JTLB %d (%dx%d), uDTLB %d, uITLB %d%s%s\n",
+		       p_mmu->ver, p_mmu->pg_sz_k, super_pg, CONFIG_PGTABLE_LEVELS,
 		       p_mmu->sets * p_mmu->ways, p_mmu->sets, p_mmu->ways,
 		       p_mmu->u_dtlb, p_mmu->u_itlb,
 		       IS_AVAIL2(p_mmu->pae, ", PAE40 ", CONFIG_ARC_HAS_PAE40));
diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index d08bd09a0afc..5f6bfdfda1be 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -173,6 +173,15 @@ ex_saved_reg1:
 	tst	r3, r3
 	bz	do_slow_path_pf		; if no Page Table, do page fault

+#if CONFIG_PGTABLE_LEVELS > 2
+	lsr	r0, r2, PMD_SHIFT	; Bits for indexing into PMD
+	and	r0, r0, (PTRS_PER_PMD - 1)
+	ld.as	r1, [r3, r0]		; PMD entry
+	tst	r1, r1
+	bz	do_slow_path_pf
+	mov	r3, r1
+#endif
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	and.f	0, r3, _PAGE_HW_SZ	; Is this Huge PMD (thp)
 	add2.nz	r1, r1, r0
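Before moving on to 4 levels: the TLB-miss fast path added above is easier to
follow in C. The sketch below is my own condensation (not from the series) of
what the assembly does, assuming the PGDIR_SHIFT/PMD_SHIFT and PTRS_PER_*
definitions from this patch; the real code starts from the pgd cached in
ARC_REG_SCRATCH_DATA0 and branches to do_slow_path_pf on a missing entry.

/* Illustrative C rendition of the software-walked TLB miss fast path */
static unsigned long *walk_3lvl(unsigned long *pgd_base, unsigned long va)
{
        unsigned long ent;

        ent = pgd_base[(va >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1)];
        if (!ent)
                return NULL;            /* bz do_slow_path_pf */

#if CONFIG_PGTABLE_LEVELS > 2
        ent = ((unsigned long *)ent)[(va >> PMD_SHIFT) & (PTRS_PER_PMD - 1)];
        if (!ent)
                return NULL;
#endif
        /* ent now points to the pte table; index it with the pte bits */
        return &((unsigned long *)ent)[(va >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)];
}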
From patchwork Wed Aug 11 00:42:56 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429687
From: Vineet Gupta
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 16/18] ARC: mm: support 4 levels of page tables
Date: Tue, 10 Aug 2021 17:42:56 -0700
Message-Id: <20210811004258.138075-17-vgupta@kernel.org>
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>

Signed-off-by: Vineet Gupta
---
 arch/arc/include/asm/page.h           | 11 +++++++
 arch/arc/include/asm/pgalloc.h        | 22 +++++++++++++
 arch/arc/include/asm/pgtable-levels.h | 45 ++++++++++++++++++++++++---
 arch/arc/mm/fault.c                   |  2 ++
 arch/arc/mm/tlbex.S                   |  9 ++++++
 5 files changed, 84 insertions(+), 5 deletions(-)
diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
index df3cc154ae4a..883856f12afe 100644
--- a/arch/arc/include/asm/page.h
+++ b/arch/arc/include/asm/page.h
@@ -41,6 +41,17 @@ typedef struct {
 #define pgd_val(x)	((x).pgd)
 #define __pgd(x)	((pgd_t) { (x) })

+#if CONFIG_PGTABLE_LEVELS > 3
+
+typedef struct {
+	unsigned long pud;
+} pud_t;
+
+#define pud_val(x)	((x).pud)
+#define __pud(x)	((pud_t) { (x) })
+
+#endif
+
 #if CONFIG_PGTABLE_LEVELS > 2

 typedef struct {
diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h
index 01c2d84418ed..e99c724d9235 100644
--- a/arch/arc/include/asm/pgalloc.h
+++ b/arch/arc/include/asm/pgalloc.h
@@ -86,6 +86,28 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)

 }

+#if CONFIG_PGTABLE_LEVELS > 3
+
+static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
+{
+	set_p4d(p4dp, __p4d((unsigned long)pudp));
+}
+
+static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+	return (pud_t *)__get_free_page(
+		GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_ZERO);
+}
+
+static inline void pud_free(struct mm_struct *mm, pud_t *pudp)
+{
+	free_page((unsigned long)pudp);
+}
+
+#define __pud_free_tlb(tlb, pmd, addr)	pud_free((tlb)->mm, pmd)
+
+#endif
+
 #if CONFIG_PGTABLE_LEVELS > 2

 static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h
index 1c2f022d4ad0..2da3c4e52a91 100644
--- a/arch/arc/include/asm/pgtable-levels.h
+++ b/arch/arc/include/asm/pgtable-levels.h
@@ -44,8 +44,13 @@
 /*
  * A default 3 level paging testing setup in software walked MMU
  *   MMUv4 (8K page): <4> : <7> : <8> : <13>
+ *
+ * A default 4 level paging testing setup in software walked MMU
+ *   MMUv4 (8K page): <4> : <3> : <4> : <8> : <13>
  */
 #define PGDIR_SHIFT		28
+#if CONFIG_PGTABLE_LEVELS > 3
+#define PUD_SHIFT		25
+#endif
 #if CONFIG_PGTABLE_LEVELS > 2
 #define PMD_SHIFT		21
 #endif
@@ -56,17 +61,25 @@
 #define PGDIR_MASK		(~(PGDIR_SIZE - 1))
 #define PTRS_PER_PGD		BIT(32 - PGDIR_SHIFT)

+#if CONFIG_PGTABLE_LEVELS > 3
+#define PUD_SIZE		BIT(PUD_SHIFT)
+#define PUD_MASK		(~(PUD_SIZE - 1))
+#define PTRS_PER_PUD		BIT(PGDIR_SHIFT - PUD_SHIFT)
+#endif
+
 #if CONFIG_PGTABLE_LEVELS > 2
 #define PMD_SIZE		BIT(PMD_SHIFT)
 #define PMD_MASK		(~(PMD_SIZE - 1))
-#define PTRS_PER_PMD		BIT(PGDIR_SHIFT - PMD_SHIFT)
+#define PTRS_PER_PMD		BIT(PUD_SHIFT - PMD_SHIFT)
 #endif

 #define PTRS_PER_PTE		BIT(PMD_SHIFT - PAGE_SHIFT)

 #ifndef __ASSEMBLY__

-#if CONFIG_PGTABLE_LEVELS > 2
+#if CONFIG_PGTABLE_LEVELS > 3
+#include <asm-generic/pgtable-nop4d.h>
+#elif CONFIG_PGTABLE_LEVELS > 2
 #include <asm-generic/pgtable-nopud.h>
 #else
 #include <asm-generic/pgtable-nopmd.h>
@@ -81,9 +94,31 @@
 #define pgd_ERROR(e) \
 	pr_crit("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))

+#if CONFIG_PGTABLE_LEVELS > 3
+
+/* In 4 level paging, p4d_* macros work on pgd */
+#define p4d_none(x)		(!p4d_val(x))
+#define p4d_bad(x)		((p4d_val(x) & ~PAGE_MASK))
+#define p4d_present(x)		(p4d_val(x))
+#define p4d_clear(xp)		do { p4d_val(*(xp)) = 0; } while (0)
+#define p4d_pgtable(p4d)	((pud_t *)(p4d_val(p4d) & PAGE_MASK))
+#define p4d_page(p4d)		virt_to_page(p4d_pgtable(p4d))
+#define set_p4d(p4dp, p4d)	(*(p4dp) = p4d)
+
+/*
+ * 2nd level paging: pud
+ */
+#define pud_ERROR(e) \
+	pr_crit("%s:%d: bad pud %08lx.\n", __FILE__, __LINE__, pud_val(e))
+
+#endif
+
 #if CONFIG_PGTABLE_LEVELS > 2

-/* In 3 level paging, pud_* macros work on pgd */
+/*
+ * In 3 level paging, pud_* macros work on pgd
+ * In 4 level paging, pud_* macros work on pud
+ */
 #define pud_none(x)		(!pud_val(x))
 #define pud_bad(x)		((pud_val(x) & ~PAGE_MASK))
 #define pud_present(x)		(pud_val(x))
@@ -93,7 +128,7 @@
 #define set_pud(pudp, pud)	(*(pudp) = pud)

 /*
- * 2nd level paging: pmd
+ * 3rd level paging: pmd
  */
 #define pmd_ERROR(e) \
 	pr_crit("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
@@ -121,7 +156,7 @@
 #define pmd_pgtable(pmd)	((pgtable_t) pmd_page_vaddr(pmd))

 /*
- * 3rd level paging: pte
+ * 4th level paging: pte
  */
 #define pte_ERROR(e) \
 	pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index 8da2f0ad8c69..f8994164fa36 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -46,6 +46,8 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address)
 	if (!p4d_present(*p4d_k))
 		goto bad_area;

+	set_p4d(p4d, *p4d_k);
+
 	pud = pud_offset(p4d, address);
 	pud_k = pud_offset(p4d_k, address);
 	if (!pud_present(*pud_k))
diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index 5f6bfdfda1be..e1831b6fafa9 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -173,6 +173,15 @@ ex_saved_reg1:
 	tst	r3, r3
 	bz	do_slow_path_pf		; if no Page Table, do page fault

+#if CONFIG_PGTABLE_LEVELS > 3
+	lsr	r0, r2, PUD_SHIFT	; Bits for indexing into PUD
+	and	r0, r0, (PTRS_PER_PUD - 1)
+	ld.as	r1, [r3, r0]		; PUD entry
+	tst	r1, r1
+	bz	do_slow_path_pf
+	mov	r3, r1
+#endif
+
 #if CONFIG_PGTABLE_LEVELS > 2
 	lsr	r0, r2, PMD_SHIFT	; Bits for indexing into PMD
 	and	r0, r0, (PTRS_PER_PMD - 1)
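As a cross-check of the 4-level split above (my arithmetic, not part of the
patch), the per-level table geometry works out as follows:

/* Illustrative sanity arithmetic for <4> : <3> : <4> : <8> : <13>
 * (PGDIR_SHIFT=28, PUD_SHIFT=25, PMD_SHIFT=21, PAGE_SHIFT=13)
 */
_Static_assert(1 << (32 - 28) == 16,  "PTRS_PER_PGD: 4 bits -> 16 entries");
_Static_assert(1 << (28 - 25) == 8,   "PTRS_PER_PUD: 3 bits -> 8 entries");
_Static_assert(1 << (25 - 21) == 16,  "PTRS_PER_PMD: 4 bits -> 16 entries");
_Static_assert(1 << (21 - 13) == 256, "PTRS_PER_PTE: 8 bits -> 256 entries");

With 4-byte entries, even the largest table (256 ptes, 1KB) fits comfortably
in one 8K page, which is why the *_alloc_one() helpers above can simply grab
a single zeroed page per table.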
From patchwork Wed Aug 11 00:42:57 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429689
From: Vineet Gupta
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 17/18] ARC: mm: vmalloc sync from kernel to user table to
 update PMD ...
Date: Tue, 10 Aug 2021 17:42:57 -0700
Message-Id: <20210811004258.138075-18-vgupta@kernel.org>
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>

... not PGD

vmalloc() sets up the kernel page table (starting from @swapper_pg_dir).
However, when a vmalloc area is accessed in the context of a user task,
say opening a terminal in n_tty_open(), the user page tables need to be
synced from the kernel page tables so that the TLB entry is created in
"user context".

The old code was doing this incorrectly: it updated the user pgd entry
(the first level itself) to point to the kernel pud table (the 2nd level),
effectively yanking away the entire user-space translation and replacing
it with the kernel one.

The correct way to do this is to ONLY update a user space pgd/pud/pmd
entry if it is not populated already. This ensures that only the missing
leaf pmd entry gets updated, to point to the relevant kernel pte table.
From a code-change point of view, the pattern changes from:

	p4d = p4d_offset(pgd, address);
	p4d_k = p4d_offset(pgd_k, address);
	if (!p4d_present(*p4d_k))
		goto bad_area;
	set_p4d(p4d, *p4d_k);

to:

	p4d = p4d_offset(pgd, address);
	p4d_k = p4d_offset(pgd_k, address);
	if (p4d_none(*p4d_k))
		goto bad_area;
	if (!p4d_present(*p4d))
		set_p4d(p4d, *p4d_k);

Signed-off-by: Vineet Gupta
---
 arch/arc/mm/fault.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c
index f8994164fa36..5787c261c9a4 100644
--- a/arch/arc/mm/fault.c
+++ b/arch/arc/mm/fault.c
@@ -36,31 +36,31 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address)
 	pgd = pgd_offset(current->active_mm, address);
 	pgd_k = pgd_offset_k(address);

-	if (!pgd_present(*pgd_k))
+	if (pgd_none(*pgd_k))
 		goto bad_area;
-
-	set_pgd(pgd, *pgd_k);
+	if (!pgd_present(*pgd))
+		set_pgd(pgd, *pgd_k);

 	p4d = p4d_offset(pgd, address);
 	p4d_k = p4d_offset(pgd_k, address);
-	if (!p4d_present(*p4d_k))
+	if (p4d_none(*p4d_k))
 		goto bad_area;
-
-	set_p4d(p4d, *p4d_k);
+	if (!p4d_present(*p4d))
+		set_p4d(p4d, *p4d_k);

 	pud = pud_offset(p4d, address);
 	pud_k = pud_offset(p4d_k, address);
-	if (!pud_present(*pud_k))
+	if (pud_none(*pud_k))
 		goto bad_area;
-
-	set_pud(pud, *pud_k);
+	if (!pud_present(*pud))
+		set_pud(pud, *pud_k);

 	pmd = pmd_offset(pud, address);
 	pmd_k = pmd_offset(pud_k, address);
-	if (!pmd_present(*pmd_k))
+	if (pmd_none(*pmd_k))
 		goto bad_area;
-
-	set_pmd(pmd, *pmd_k);
+	if (!pmd_present(*pmd))
+		set_pmd(pmd, *pmd_k);

 	/* XXX: create the TLB entry here */
 	return 0;
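A condensed way to see the new invariant (my sketch, not part of the patch):
at every level the kernel table is only consulted, and a kernel entry is
copied only into an empty user slot, so a live user subtree can never be
clobbered:

/* Illustrative one-level helper capturing the sync rule open-coded above */
static int sync_one_level(unsigned long *user_ent, const unsigned long *kern_ent)
{
        if (!*kern_ent)                 /* *_none(): nothing to sync from */
                return -1;              /* caller: goto bad_area */
        if (!*user_ent)                 /* !*_present(): user slot empty */
                *user_ent = *kern_ent;  /* fill the hole, nothing else */
        return 0;
}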
From patchwork Wed Aug 11 00:42:58 2021
X-Patchwork-Submitter: Vineet Gupta
X-Patchwork-Id: 12429691
From: Vineet Gupta
To: linux-snps-arc@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Anshuman Khandual, Mike Rapoport, Vineet Gupta
Subject: [PATCH 18/18] ARC: mm: introduce _PAGE_TABLE to explicitly link
 pgd,pud,pmd entries
Date: Tue, 10 Aug 2021 17:42:58 -0700
Message-Id: <20210811004258.138075-19-vgupta@kernel.org>
In-Reply-To: <20210811004258.138075-1-vgupta@kernel.org>
References: <20210811004258.138075-1-vgupta@kernel.org>

The ARCv3 hardware walker expects Table Descriptors to have b'11 in the 2
LSB bits in order to continue to the next level. This commit adds that
(to the ARCv2 code) and ensures that it works in the software-walked
regime.

The pte entries still need tagging, but that is not possible in ARCv2
since the LSB 2 bits are currently used.
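Concretely (an illustrative sketch with a hypothetical helper name, not from
the patch): a next-level table is page aligned, so its two low address bits
are free to carry the b'11 marker when the descriptor is written:

#define _PAGE_TABLE	0x3

/* hypothetical helper, for illustration only; cf. the *_populate() changes below */
static inline unsigned long mk_table_desc(void *next_table)
{
        /* next_table is page aligned, so bits [1:0] are free for the tag */
        return (unsigned long)next_table | _PAGE_TABLE;
}

This is also why the *_bad() checks in the diff below flip from "any stray
low bits set" to "table tag missing".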
Signed-off-by: Vineet Gupta
---
 arch/arc/include/asm/pgalloc.h            | 6 +++---
 arch/arc/include/asm/pgtable-bits-arcv2.h | 2 ++
 arch/arc/include/asm/pgtable-levels.h     | 6 +++---
 arch/arc/mm/tlbex.S                       | 4 +++-
 4 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h
index e99c724d9235..230d43a998af 100644
--- a/arch/arc/include/asm/pgalloc.h
+++ b/arch/arc/include/asm/pgalloc.h
@@ -47,7 +47,7 @@ pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *ptep)
 	 *
 	 * The cast itself is needed given simplistic definition of set_pmd()
 	 */
-	set_pmd(pmdp, __pmd((unsigned long)ptep));
+	set_pmd(pmdp, __pmd((unsigned long)ptep | _PAGE_TABLE));
 }

 /*
@@ -90,7 +90,7 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)

 static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
 {
-	set_p4d(p4dp, __p4d((unsigned long)pudp));
+	set_p4d(p4dp, __p4d((unsigned long)pudp | _PAGE_TABLE));
 }

 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
@@ -112,7 +112,7 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pudp)

 static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
 {
-	set_pud(pudp, __pud((unsigned long)pmdp));
+	set_pud(pudp, __pud((unsigned long)pmdp | _PAGE_TABLE));
 }

 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h
index 183d23bc1e00..54aba0d3ae34 100644
--- a/arch/arc/include/asm/pgtable-bits-arcv2.h
+++ b/arch/arc/include/asm/pgtable-bits-arcv2.h
@@ -32,6 +32,8 @@
 #define _PAGE_HW_SZ		0
 #endif

+#define _PAGE_TABLE		0x3
+
 /* Defaults for every user page */
 #define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE)
diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h
index 2da3c4e52a91..6c7a8360d986 100644
--- a/arch/arc/include/asm/pgtable-levels.h
+++ b/arch/arc/include/asm/pgtable-levels.h
@@ -98,7 +98,7 @@
 /* In 4 level paging, p4d_* macros work on pgd */
 #define p4d_none(x)		(!p4d_val(x))
-#define p4d_bad(x)		((p4d_val(x) & ~PAGE_MASK))
+#define p4d_bad(x)		(!(p4d_val(x) & _PAGE_TABLE))
 #define p4d_present(x)		(p4d_val(x))
 #define p4d_clear(xp)		do { p4d_val(*(xp)) = 0; } while (0)
 #define p4d_pgtable(p4d)	((pud_t *)(p4d_val(p4d) & PAGE_MASK))
@@ -120,7 +120,7 @@
  * In 4 level paging, pud_* macros work on pud
  */
 #define pud_none(x)		(!pud_val(x))
-#define pud_bad(x)		((pud_val(x) & ~PAGE_MASK))
+#define pud_bad(x)		(!(pud_val(x) & _PAGE_TABLE))
 #define pud_present(x)		(pud_val(x))
 #define pud_clear(xp)		do { pud_val(*(xp)) = 0; } while (0)
 #define pud_pgtable(pud)	((pmd_t *)(pud_val(pud) & PAGE_MASK))
@@ -147,7 +147,7 @@
  * In 3+ level paging (pgd -> pmd -> pte), pmd_* macros work on pmd
  */
 #define pmd_none(x)		(!pmd_val(x))
-#define pmd_bad(x)		((pmd_val(x) & ~PAGE_MASK))
+#define pmd_bad(pmd)		(!(pmd_val(pmd) & _PAGE_TABLE))
 #define pmd_present(x)		(pmd_val(x))
 #define pmd_clear(xp)		do { pmd_val(*(xp)) = 0; } while (0)
 #define pmd_page_vaddr(pmd)	(pmd_val(pmd) & PAGE_MASK)
diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S
index e1831b6fafa9..24a9670186b3 100644
--- a/arch/arc/mm/tlbex.S
+++ b/arch/arc/mm/tlbex.S
@@ -171,11 +171,12 @@ ex_saved_reg1:
 	lsr	r0, r2, PGDIR_SHIFT	; Bits for indexing into PGD
 	ld.as	r3, [r1, r0]		; PGD entry corresp to faulting addr
 	tst	r3, r3
-	bz	do_slow_path_pf		; if no Page Table, do page fault
+	bz	do_slow_path_pf		; next level table missing, handover to linux vm code
 #if CONFIG_PGTABLE_LEVELS > 3
 	lsr	r0, r2, PUD_SHIFT	; Bits for indexing into PUD
 	and	r0, r0, (PTRS_PER_PUD - 1)
+	bmskn	r3, r3, 1		; clear _PAGE_TABLE bits
 	ld.as	r1, [r3, r0]		; PUD entry
 	tst	r1, r1
 	bz	do_slow_path_pf
@@ -185,6 +186,7 @@ ex_saved_reg1:
 #if CONFIG_PGTABLE_LEVELS > 2
 	lsr	r0, r2, PMD_SHIFT	; Bits for indexing into PMD
 	and	r0, r0, (PTRS_PER_PMD - 1)
+	bmskn	r3, r3, 1		; clear _PAGE_TABLE bits
 	ld.as	r1, [r3, r0]		; PMD entry
 	tst	r1, r1
 	bz	do_slow_path_pf
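Read together with the C walk sketched after the 3-level patch, the net effect
of this change on the fast path is one extra masking step per level. A minimal
illustration (mine, not from the series) of what each bmskn above does before
the descriptor is used as the next table's base:

#define _PAGE_TABLE	0x3UL

/*
 * Illustrative only: the per-level step of the earlier walk sketch, updated
 * for tagged descriptors. bmskn r3, r3, 1 clears bits [1:0], turning a
 * tagged table descriptor back into the next table's base address.
 */
static unsigned long next_level(unsigned long desc, unsigned long idx)
{
        unsigned long *tbl = (unsigned long *)(desc & ~_PAGE_TABLE);

        return tbl[idx];        /* ld.as r1, [r3, r0] */
}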