From patchwork Thu Aug 12 23:37:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434429 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 733B4C432BE for ; Thu, 12 Aug 2021 23:38:01 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id F18FE61107 for ; Thu, 12 Aug 2021 23:38:00 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org F18FE61107 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 76B696B0071; Thu, 12 Aug 2021 19:37:59 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 71BCF8D0001; Thu, 12 Aug 2021 19:37:59 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 630EA6B0073; Thu, 12 Aug 2021 19:37:59 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0100.hostedemail.com [216.40.44.100]) by kanga.kvack.org (Postfix) with ESMTP id 4A7A36B0071 for ; Thu, 12 Aug 2021 19:37:59 -0400 (EDT) Received: from smtpin03.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id EE91EB9F6 for ; Thu, 12 Aug 2021 23:37:58 +0000 (UTC) X-FDA: 78468043836.03.5ED5479 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf08.hostedemail.com (Postfix) with ESMTP id 9A36B300C70D for ; Thu, 12 Aug 2021 23:37:58 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 784C160EE2; Thu, 12 Aug 2021 23:37:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811477; bh=iGK8PU/tmDB3AhUhP6XkaxS4+cLDTMouE860/hGTGtk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kmHBNkN4z+GRV91Y98Q1O5gj2BS2hSbsyBU1Rx42aTRA4/P9lVbJnrvjY5Eb8eMnv 0pqmEK6WYQTWwCKA5BnsmlHJ2HO7Vp7NLGBqchILI58wf97BmDY4YcH8upSiFpcwB9 MFvVipwSqrmvgtXplIi9CggjYMAbUEIeoOePlmwCC7EOLUYS87fEoGbvCBaDAfdTuC mDstBKy/+HjdigeqxLcTwOd6iP0OizA7OhuBZ6JW4YQxAIFxMSeoFPgheg066VCrGI +wwmROgzgSqQvsTRy6KYUPX1JPktQyDyUHbcmqLvoyW7bQBvvJK6O1Vnb0qUTFJeN7 wPm/0VpPpF59Q== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 01/19] ARC: mm: use SCRATCH_DATA0 register for caching pgdir in ARCv2 only Date: Thu, 12 Aug 2021 16:37:35 -0700 Message-Id: <20210812233753.104217-2-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 9A36B300C70D Authentication-Results: imf08.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=kmHBNkN4; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf08.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) 
smtp.mailfrom=vgupta@kernel.org X-Rspamd-Server: rspam04 X-Stat-Signature: 9buwcpxn6baht7b4yofzuiro1eidw93t X-HE-Tag: 1628811478-905717 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: MMU SCRATCH_DATA0 register is intended to cache task pgd. However in ARC700 SMP port, it has to be repurposed for reentrant interrupt handling, while UP port doesn't. We currently handle both use cases using a fabricated ARC_USE_SCRATCH_REG define, which has the usual issues of dependency nesting and ugliness. So clean this up: for ARC700 don't use the register to cache pgd (even in UP) and do the opposite for ARCv2. And while here, switch to the canonical pgd_offset(). Signed-off-by: Vineet Gupta --- arch/arc/include/asm/entry-compact.h | 8 -------- arch/arc/include/asm/mmu.h | 4 ---- arch/arc/include/asm/mmu_context.h | 2 +- arch/arc/include/asm/pgtable.h | 23 ----------------------- arch/arc/mm/fault.c | 2 +- arch/arc/mm/tlb.c | 4 ++-- arch/arc/mm/tlbex.S | 2 +- 7 files changed, 5 insertions(+), 40 deletions(-) diff --git a/arch/arc/include/asm/entry-compact.h b/arch/arc/include/asm/entry-compact.h index 6dbf5cecc8cc..5aab4f93ab8a 100644 --- a/arch/arc/include/asm/entry-compact.h +++ b/arch/arc/include/asm/entry-compact.h @@ -126,19 +126,11 @@ * to be saved again on kernel mode stack, as part of pt_regs. *-------------------------------------------------------------*/ .macro PROLOG_FREEUP_REG reg, mem -#ifndef ARC_USE_SCRATCH_REG - sr \reg, [ARC_REG_SCRATCH_DATA0] -#else st \reg, [\mem] -#endif .endm .macro PROLOG_RESTORE_REG reg, mem -#ifndef ARC_USE_SCRATCH_REG - lr \reg, [ARC_REG_SCRATCH_DATA0] -#else ld \reg, [\mem] -#endif .endm /*-------------------------------------------------------------- diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h index a81d1975866a..4065335a7922 100644 --- a/arch/arc/include/asm/mmu.h +++ b/arch/arc/include/asm/mmu.h @@ -31,10 +31,6 @@ #define ARC_REG_SCRATCH_DATA0 0x46c #endif -#if defined(CONFIG_ISA_ARCV2) || !defined(CONFIG_SMP) -#define ARC_USE_SCRATCH_REG -#endif - /* Bits in MMU PID register */ #define __TLB_ENABLE (1 << 31) #define __PROG_ENABLE (1 << 30) diff --git a/arch/arc/include/asm/mmu_context.h b/arch/arc/include/asm/mmu_context.h index df164066e172..49318a126879 100644 --- a/arch/arc/include/asm/mmu_context.h +++ b/arch/arc/include/asm/mmu_context.h @@ -146,7 +146,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, */ cpumask_set_cpu(cpu, mm_cpumask(next)); -#ifdef ARC_USE_SCRATCH_REG +#ifdef CONFIG_ISA_ARCV2 /* PGD cached in MMU reg to avoid 3 mem lookups: task->mm->pgd */ write_aux_reg(ARC_REG_SCRATCH_DATA0, next->pgd); #endif diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h index 0c3e220bd2b4..80b57c14b430 100644 --- a/arch/arc/include/asm/pgtable.h +++ b/arch/arc/include/asm/pgtable.h @@ -284,29 +284,6 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, set_pte(ptep, pteval); } -/* - * Macro to quickly access the PGD entry, utlising the fact that some - * arch may cache the pointer to Page Directory of "current" task - * in a MMU register - * - * Thus task->mm->pgd (3 pointer dereferences, cache misses etc simply - * becomes read a register - * - * ********CAUTION*******: - * Kernel code might be dealing with some mm_struct of NON "current" - * Thus use this macro only when you are certain that "current" is current - * e.g.
when dealing with signal frame setup code etc - */ -#ifdef ARC_USE_SCRATCH_REG -#define pgd_offset_fast(mm, addr) \ -({ \ - pgd_t *pgd_base = (pgd_t *) read_aux_reg(ARC_REG_SCRATCH_DATA0); \ - pgd_base + pgd_index(addr); \ -}) -#else -#define pgd_offset_fast(mm, addr) pgd_offset(mm, addr) -#endif - extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE); void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep); diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c index f5657cb68e4f..41f154320964 100644 --- a/arch/arc/mm/fault.c +++ b/arch/arc/mm/fault.c @@ -33,7 +33,7 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address) pud_t *pud, *pud_k; pmd_t *pmd, *pmd_k; - pgd = pgd_offset_fast(current->active_mm, address); + pgd = pgd_offset(current->active_mm, address); pgd_k = pgd_offset_k(address); if (!pgd_present(*pgd_k)) diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c index 8696829d37c0..349fb7a75d1d 100644 --- a/arch/arc/mm/tlb.c +++ b/arch/arc/mm/tlb.c @@ -719,8 +719,8 @@ void arc_mmu_init(void) /* Enable the MMU */ write_aux_reg(ARC_REG_PID, MMU_ENABLE); - /* In smp we use this reg for interrupt 1 scratch */ -#ifdef ARC_USE_SCRATCH_REG + /* In arc700/smp needed for re-entrant interrupt handling */ +#ifdef CONFIG_ISA_ARCV2 /* swapper_pg_dir is the pgd for the kernel, used by vmalloc */ write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir); #endif diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S index 96c3a5de9dd4..bcd2909c691f 100644 --- a/arch/arc/mm/tlbex.S +++ b/arch/arc/mm/tlbex.S @@ -202,7 +202,7 @@ ex_saved_reg1: lr r2, [efa] -#ifdef ARC_USE_SCRATCH_REG +#ifdef CONFIG_ISA_ARCV2 lr r1, [ARC_REG_SCRATCH_DATA0] ; current pgd #else GET_CURR_TASK_ON_CPU r1 From patchwork Thu Aug 12 23:37:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434431 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 73F67C4338F for ; Thu, 12 Aug 2021 23:38:03 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1D2236112F for ; Thu, 12 Aug 2021 23:38:03 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 1D2236112F Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 1EF726B0072; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 150DE6B0073; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EE3AC8D0001; Thu, 12 Aug 2021 19:37:59 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0102.hostedemail.com [216.40.44.102]) by kanga.kvack.org (Postfix) with ESMTP id D62A76B0072 for ; Thu, 12 Aug 2021 19:37:59 -0400 (EDT) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with 
ESMTP id 6F2D78249980 for ; Thu, 12 Aug 2021 23:37:59 +0000 (UTC) X-FDA: 78468043878.09.6A61C2B Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf29.hostedemail.com (Postfix) with ESMTP id 06E3A901E0CA for ; Thu, 12 Aug 2021 23:37:58 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 02C38610A4; Thu, 12 Aug 2021 23:37:57 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811478; bh=yKgAZpCv9bEnKnaHu5fqX12u7Ay6u1osj+T/lRSld68=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ZGpgx3NY0Whw2BFWOk6BtzWzxx0jTpr4YQiu0JigeB8zNzkn4UxnpPpWXDeeOJt7R c/3LJUbGlThxuLdu8JFlWtustZcS9YG49Nqi+kpM7Uz3zyQfxcg21caTlQa9beQTX1 oUjgy8PZQ3T3pOsyHusjhDr1BZIH5iw0BthWH/Uv1y961iQj0Vn8qNhNq2xMnQ+9Vm 75p455RzRdsBWBeiFAY/XBpGxnZ1YK/rWJ6gnkd5/LJszkzAqiSZe2djDLSUyXgr+y 56sdKqbtiBoSYPSjk07qo2bbwdhv9jbh6XXG1KpcMns5IjNoHNerHvNJa4KY2yY31F iUU6H2VWWk5KA== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 02/19] ARC: mm: remove tlb paranoid code Date: Thu, 12 Aug 2021 16:37:36 -0700 Message-Id: <20210812233753.104217-3-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 06E3A901E0CA Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=ZGpgx3NY; spf=pass (imf29.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org; dmarc=pass (policy=none) header.from=kernel.org X-Rspamd-Server: rspam01 X-Stat-Signature: 8df5grk5db8tr1aaa3q355y13mfngz9c X-HE-Tag: 1628811478-660376 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This was used back in arc700 days when ASID allocator was fragile. 
Not needed in last 5 years Signed-off-by: Vineet Gupta --- arch/arc/Kconfig | 3 --- arch/arc/include/asm/mmu.h | 6 ----- arch/arc/mm/tlb.c | 40 ------------------------------ arch/arc/mm/tlbex.S | 50 -------------------------------------- 4 files changed, 99 deletions(-) diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig index 0680b1de0fc3..59d5b2a179f6 100644 --- a/arch/arc/Kconfig +++ b/arch/arc/Kconfig @@ -537,9 +537,6 @@ config ARC_DW2_UNWIND If you don't debug the kernel, you can say N, but we may not be able to solve problems without frame unwind information -config ARC_DBG_TLB_PARANOIA - bool "Paranoia Checks in Low Level TLB Handlers" - config ARC_DBG_JUMP_LABEL bool "Paranoid checks in Static Keys (jump labels) code" depends on JUMP_LABEL diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h index 4065335a7922..38a036508699 100644 --- a/arch/arc/include/asm/mmu.h +++ b/arch/arc/include/asm/mmu.h @@ -64,12 +64,6 @@ typedef struct { unsigned long asid[NR_CPUS]; /* 8 bit MMU PID + Generation cycle */ } mm_context_t; -#ifdef CONFIG_ARC_DBG_TLB_PARANOIA -void tlb_paranoid_check(unsigned int mm_asid, unsigned long address); -#else -#define tlb_paranoid_check(a, b) -#endif - void arc_mmu_init(void); extern char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len); void read_decode_mmu_bcr(void); diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c index 349fb7a75d1d..6079dfd129b9 100644 --- a/arch/arc/mm/tlb.c +++ b/arch/arc/mm/tlb.c @@ -400,7 +400,6 @@ void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep) * * Removing the assumption involves * -Using vma->mm->context{ASID,SASID}, as opposed to MMU reg. - * -Fix the TLB paranoid debug code to not trigger false negatives. * -More importantly it makes this handler inconsistent with fast-path * TLB Refill handler which always deals with "current" * @@ -423,8 +422,6 @@ void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep) local_irq_save(flags); - tlb_paranoid_check(asid_mm(vma->vm_mm, smp_processor_id()), vaddr); - vaddr &= PAGE_MASK; /* update this PTE credentials */ @@ -818,40 +815,3 @@ void do_tlb_overlap_fault(unsigned long cause, unsigned long address, local_irq_restore(flags); } - -/*********************************************************************** - * Diagnostic Routines - * -Called from Low Level TLB Handlers if things don;t look good - **********************************************************************/ - -#ifdef CONFIG_ARC_DBG_TLB_PARANOIA - -/* - * Low Level ASM TLB handler calls this if it finds that HW and SW ASIDS - * don't match - */ -void print_asid_mismatch(int mm_asid, int mmu_asid, int is_fast_path) -{ - pr_emerg("ASID Mismatch in %s Path Handler: sw-pid=0x%x hw-pid=0x%x\n", - is_fast_path ? 
"Fast" : "Slow", mm_asid, mmu_asid); - - __asm__ __volatile__("flag 1"); -} - -void tlb_paranoid_check(unsigned int mm_asid, unsigned long addr) -{ - unsigned int mmu_asid; - - mmu_asid = read_aux_reg(ARC_REG_PID) & 0xff; - - /* - * At the time of a TLB miss/installation - * - HW version needs to match SW version - * - SW needs to have a valid ASID - */ - if (addr < 0x70000000 && - ((mm_asid == MM_CTXT_NO_ASID) || - (mmu_asid != (mm_asid & MM_CTXT_ASID_MASK)))) - print_asid_mismatch(mm_asid, mmu_asid, 0); -} -#endif diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S index bcd2909c691f..0b4bb62fa0ab 100644 --- a/arch/arc/mm/tlbex.S +++ b/arch/arc/mm/tlbex.S @@ -93,11 +93,6 @@ ex_saved_reg1: st_s r1, [r0, 4] st_s r2, [r0, 8] st_s r3, [r0, 12] - - ; VERIFY if the ASID in MMU-PID Reg is same as - ; one in Linux data structures - - tlb_paranoid_check_asm .endm .macro TLBMISS_RESTORE_REGS @@ -146,51 +141,6 @@ ex_saved_reg1: #endif -;============================================================================ -; Troubleshooting Stuff -;============================================================================ - -; Linux keeps ASID (Address Space ID) in task->active_mm->context.asid -; When Creating TLB Entries, instead of doing 3 dependent loads from memory, -; we use the MMU PID Reg to get current ASID. -; In bizzare scenrios SW and HW ASID can get out-of-sync which is trouble. -; So we try to detect this in TLB Mis shandler - -.macro tlb_paranoid_check_asm - -#ifdef CONFIG_ARC_DBG_TLB_PARANOIA - - GET_CURR_TASK_ON_CPU r3 - ld r0, [r3, TASK_ACT_MM] - ld r0, [r0, MM_CTXT+MM_CTXT_ASID] - breq r0, 0, 55f ; Error if no ASID allocated - - lr r1, [ARC_REG_PID] - and r1, r1, 0xFF - - and r2, r0, 0xFF ; MMU PID bits only for comparison - breq r1, r2, 5f - -55: - ; Error if H/w and S/w ASID don't match, but NOT if in kernel mode - lr r2, [erstatus] - bbit0 r2, STATUS_U_BIT, 5f - - ; We sure are in troubled waters, Flag the error, but to do so - ; need to switch to kernel mode stack to call error routine - GET_TSK_STACK_BASE r3, sp - - ; Call printk to shoutout aloud - mov r2, 1 - j print_asid_mismatch - -5: ; ASIDs match so proceed normally - nop - -#endif - -.endm - ;============================================================================ ;TLB Miss handling Code ;============================================================================ From patchwork Thu Aug 12 23:37:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434433 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id B0A03C43216 for ; Thu, 12 Aug 2021 23:38:05 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4CF0361107 for ; Thu, 12 Aug 2021 23:38:05 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 4CF0361107 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 
65B5B6B0073; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 609506B0074; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 483FF8D0001; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0213.hostedemail.com [216.40.44.213]) by kanga.kvack.org (Postfix) with ESMTP id 1B3156B0074 for ; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id BF575B9F6 for ; Thu, 12 Aug 2021 23:37:59 +0000 (UTC) X-FDA: 78468043878.08.285081E Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf06.hostedemail.com (Postfix) with ESMTP id 7E706802820C for ; Thu, 12 Aug 2021 23:37:59 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 77E26610A7; Thu, 12 Aug 2021 23:37:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811478; bh=EzG5ZCaN0Ov/2bBrqjCJNBMDzydYaPitAAplmJJosTQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QWX4/XqTEYX3uK7kYwjc64idMadI7TWJKNlhgJxfe+uTnye9wMyCedpg3BohCpUEZ Cw519zJ2THhizOzD5E8kB9KiYcb4EeXRYG3k4yb2ANwFQJBHsKI0N+gM0IrJbinlQs 40c7rNVVTc7tnjJNy2IKKz9pJrBTzyw6Oe9RDBmNEkkxugYcMEEmTUUQCnaagkiFKs cQ28PAGRMSzWHrkYd7KohwInsnhgLgzs9Z3xDugDQWaXqM0GF2mJqI4xOYzkB+lEJD gwuMSRJ8T+UAmNro6UggatV1907Rej9TM2BAnvdDGHXat3YUoMD61lKQqn1RUM4S+7 oOJab22wO8s6A== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 03/19] ARC: mm: move mmu/cache externs out to setup.h Date: Thu, 12 Aug 2021 16:37:37 -0700 Message-Id: <20210812233753.104217-4-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 7E706802820C X-Stat-Signature: hutdbn1uzgumgdddn15w4bg748irdghd Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b="QWX4/XqT"; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf06.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-HE-Tag: 1628811479-853731 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Signed-off-by: Vineet Gupta --- arch/arc/include/asm/cache.h | 4 ---- arch/arc/include/asm/mmu.h | 4 ---- arch/arc/include/asm/setup.h | 12 ++++++++++-- 3 files changed, 10 insertions(+), 10 deletions(-) diff --git a/arch/arc/include/asm/cache.h b/arch/arc/include/asm/cache.h index d8ece4292388..f0f1fc5d62b6 100644 --- a/arch/arc/include/asm/cache.h +++ b/arch/arc/include/asm/cache.h @@ -62,10 +62,6 @@ #define ARCH_SLAB_MINALIGN 8 #endif -extern void arc_cache_init(void); -extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len); -extern void read_decode_cache_bcr(void); - extern int ioc_enable; extern unsigned long perip_base, perip_end; diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h index 38a036508699..762cfe66e16b 100644 --- a/arch/arc/include/asm/mmu.h +++ b/arch/arc/include/asm/mmu.h @@ -64,10 +64,6 @@ typedef struct { unsigned long 
asid[NR_CPUS]; /* 8 bit MMU PID + Generation cycle */ } mm_context_t; -void arc_mmu_init(void); -extern char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len); -void read_decode_mmu_bcr(void); - static inline int is_pae40_enabled(void) { return IS_ENABLED(CONFIG_ARC_HAS_PAE40); diff --git a/arch/arc/include/asm/setup.h b/arch/arc/include/asm/setup.h index 01f85478170d..028a8cf76206 100644 --- a/arch/arc/include/asm/setup.h +++ b/arch/arc/include/asm/setup.h @@ -2,8 +2,8 @@ /* * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com) */ -#ifndef __ASMARC_SETUP_H -#define __ASMARC_SETUP_H +#ifndef __ASM_ARC_SETUP_H +#define __ASM_ARC_SETUP_H #include @@ -34,4 +34,12 @@ long __init arc_get_mem_sz(void); #define IS_AVAIL2(v, s, cfg) IS_AVAIL1(v, s), IS_AVAIL1(v, IS_USED_CFG(cfg)) #define IS_AVAIL3(v, v2, s) IS_AVAIL1(v, s), IS_AVAIL1(v, IS_DISABLED_RUN(v2)) +extern void arc_mmu_init(void); +extern char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len); +extern void read_decode_mmu_bcr(void); + +extern void arc_cache_init(void); +extern char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len); +extern void read_decode_cache_bcr(void); + #endif /* __ASMARC_SETUP_H */ From patchwork Thu Aug 12 23:37:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434435 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 154D7C432BE for ; Thu, 12 Aug 2021 23:38:08 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id AA021610A5 for ; Thu, 12 Aug 2021 23:38:07 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org AA021610A5 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id C746F6B0075; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C25E96B0074; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A9EFD6B007B; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0190.hostedemail.com [216.40.44.190]) by kanga.kvack.org (Postfix) with ESMTP id 7ED3A6B0075 for ; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 27A408249980 for ; Thu, 12 Aug 2021 23:38:00 +0000 (UTC) X-FDA: 78468043920.02.BD4F6E3 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf18.hostedemail.com (Postfix) with ESMTP id DE0C4401007A for ; Thu, 12 Aug 2021 23:37:59 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id DF3F8610A5; Thu, 12 Aug 2021 23:37:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811479; bh=SDwFDPOQmz/pmHMM+4prMbb/FBKaF/3A8kiP4jUiFjY=; 
h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=roa0nbg6P0b49j69EM6FvuJQOVTYh9WC3m/MSLBaWFNjUV+xkyM3Sep4d0y4LtEHo f5CaYtQWvkn4tZ3DHgCgkHKPXNkc+32AAjUCGiV39QnXL3t5bdYB/5wGizl+5rc2Cn 9DaypbSSEqCr1e3eBWDofGqHqUwVLMVjgPNujXwg/B0jYRDyrnN8c1o9foaRw7+0hm iZ3uHbCyeDgFh1QMjdXKP5Go5wH/hXnkju2P+6OelxdXM3z/eOh5Zm/sf+xKs7eEv5 +qBR8Iaju8svs91k0gm4lKV3K7Cls/dHPTqk+BHqo9h0NODF2NAm7K9k486yY4Ka9H /djBZrU5z5bnA== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 04/19] ARC: mm: Fixes to allow STRICT_MM_TYPECHECKS Date: Thu, 12 Aug 2021 16:37:38 -0700 Message-Id: <20210812233753.104217-5-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: DE0C4401007A X-Stat-Signature: kdcu3h6mytq8pfu7fpaa5ebacxj4eyy5 Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=roa0nbg6; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf18.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-HE-Tag: 1628811479-301339 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Signed-off-by: Vineet Gupta --- arch/arc/mm/tlb.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c index 6079dfd129b9..15cbc285b0de 100644 --- a/arch/arc/mm/tlb.c +++ b/arch/arc/mm/tlb.c @@ -71,7 +71,7 @@ static void tlb_entry_erase(unsigned int vaddr_n_asid) } } -static void tlb_entry_insert(unsigned int pd0, pte_t pd1) +static void tlb_entry_insert(unsigned int pd0, phys_addr_t pd1) { unsigned int idx; @@ -109,13 +109,16 @@ static void tlb_entry_erase(unsigned int vaddr_n_asid) write_aux_reg(ARC_REG_TLBCOMMAND, TLBDeleteEntry); } -static void tlb_entry_insert(unsigned int pd0, pte_t pd1) +static void tlb_entry_insert(unsigned int pd0, phys_addr_t pd1) { write_aux_reg(ARC_REG_TLBPD0, pd0); - write_aux_reg(ARC_REG_TLBPD1, pd1); - if (is_pae40_enabled()) + if (!is_pae40_enabled()) { + write_aux_reg(ARC_REG_TLBPD1, pd1); + } else { + write_aux_reg(ARC_REG_TLBPD1, pd1 & 0xFFFFFFFF); write_aux_reg(ARC_REG_TLBPD1HI, (u64)pd1 >> 32); + } write_aux_reg(ARC_REG_TLBCOMMAND, TLBInsertEntry); } @@ -391,7 +394,7 @@ void create_tlb(struct vm_area_struct *vma, unsigned long vaddr, pte_t *ptep) unsigned long flags; unsigned int asid_or_sasid, rwx; unsigned long pd0; - pte_t pd1; + phys_addr_t pd1; /* * create_tlb() assumes that current->mm == vma->mm, since From patchwork Thu Aug 12 23:37:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434437 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 70786C4320A for ; Thu, 12 Aug 2021 23:38:10 +0000 (UTC) 
Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 147F8610CD for ; Thu, 12 Aug 2021 23:38:09 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 147F8610CD Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id EC5CB6B0074; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D43366B007B; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B5F118D0001; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0062.hostedemail.com [216.40.44.62]) by kanga.kvack.org (Postfix) with ESMTP id 958E26B0074 for ; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) Received: from smtpin31.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 40C921AEC0 for ; Thu, 12 Aug 2021 23:38:00 +0000 (UTC) X-FDA: 78468043920.31.CA80FC1 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf06.hostedemail.com (Postfix) with ESMTP id F3CC7801B0F7 for ; Thu, 12 Aug 2021 23:37:59 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 4C30B610CF; Thu, 12 Aug 2021 23:37:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811479; bh=cpK9jdDVnyzUhhaLu6k4Nl4QhGWKsalzqRxrkovXOy4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=rHFvkmg3DnqKPsG484pJFu4vngYzfYgCM9Sn03Lcy1JGuAc2LeyuOYEyMZqKwjB9j fPDVQ//D+UGAIIg5iL/aY8yqTzUWYXK14bPDweaNhyKYFXs2x/F6X7TK5TovUd5Dtx 44WSgUgpAiJENUVnnaVri3UPdb2uhceOR/3RO2Vc+wABGbduL1thNSpaACoTH0YvaB IvhlBTSgQfzOFoQ5rPMIBPwWIGWcZ14ZVoXFgFhElMIEyD59vuwPEzqCpaCS2CPkV3 t/CI0MCKfcn/w0OabuJeBrYpPPKLzWZjwEnC5OSe7A08+U9LIr2jbgo1PM39rXgUZA wSM2zygyM1bsg== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 05/19] ARC: mm: Enable STRICT_MM_TYPECHECKS Date: Thu, 12 Aug 2021 16:37:39 -0700 Message-Id: <20210812233753.104217-6-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: F3CC7801B0F7 X-Stat-Signature: 6dmajr3z446rep74t5t1u3tfznc7ntas Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=rHFvkmg3; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf06.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-HE-Tag: 1628811479-966251 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In the past I've refrained from doing this (at least 2 times) due to the slight code bloat due to ABI implications of pte_t etc becoming struct Per ARC ABI, functions return struct via memory and not through register r0, even if the struct would fit in register(s) - caller allocates space on stack and passes the address as first arg (r0), shifting rest of args by one - callee creates return struct in memory (referenced via r0) This time around the 
code actually shrunk slightly (due to subtle inlining heuristic effects), but still slightly inefficient due to return values passed through memory. That however seems like a small cost compared to maintenance burden given the impending new mmu support for page walk etc Signed-off-by: Vineet Gupta --- arch/arc/include/asm/page.h | 26 -------------------------- arch/arc/mm/ioremap.c | 2 +- 2 files changed, 1 insertion(+), 27 deletions(-) diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h index 4a9d33372fe2..c4ac827379cd 100644 --- a/arch/arc/include/asm/page.h +++ b/arch/arc/include/asm/page.h @@ -34,12 +34,6 @@ void copy_user_highpage(struct page *to, struct page *from, unsigned long u_vaddr, struct vm_area_struct *vma); void clear_user_page(void *to, unsigned long u_vaddr, struct page *page); -#undef STRICT_MM_TYPECHECKS - -#ifdef STRICT_MM_TYPECHECKS -/* - * These are used to make use of C type-checking.. - */ typedef struct { #ifdef CONFIG_ARC_HAS_PAE40 unsigned long long pte; @@ -64,26 +58,6 @@ typedef struct { #define pte_pgprot(x) __pgprot(pte_val(x)) -#else /* !STRICT_MM_TYPECHECKS */ - -#ifdef CONFIG_ARC_HAS_PAE40 -typedef unsigned long long pte_t; -#else -typedef unsigned long pte_t; -#endif -typedef unsigned long pgd_t; -typedef unsigned long pgprot_t; - -#define pte_val(x) (x) -#define pgd_val(x) (x) -#define pgprot_val(x) (x) -#define __pte(x) (x) -#define __pgd(x) (x) -#define __pgprot(x) (x) -#define pte_pgprot(x) (x) - -#endif - typedef pte_t * pgtable_t; /* diff --git a/arch/arc/mm/ioremap.c b/arch/arc/mm/ioremap.c index 95c649fbc95a..052bbd8b1e5f 100644 --- a/arch/arc/mm/ioremap.c +++ b/arch/arc/mm/ioremap.c @@ -39,7 +39,7 @@ void __iomem *ioremap(phys_addr_t paddr, unsigned long size) if (arc_uncached_addr_space(paddr)) return (void __iomem *)(u32)paddr; - return ioremap_prot(paddr, size, PAGE_KERNEL_NO_CACHE); + return ioremap_prot(paddr, size, pgprot_val(PAGE_KERNEL_NO_CACHE)); } EXPORT_SYMBOL(ioremap); From patchwork Thu Aug 12 23:37:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434439 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 88AC6C19F39 for ; Thu, 12 Aug 2021 23:38:12 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 2D664610CD for ; Thu, 12 Aug 2021 23:38:12 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 2D664610CD Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 373256B0078; Thu, 12 Aug 2021 19:38:01 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 325166B007D; Thu, 12 Aug 2021 19:38:01 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 214EB6B007B; Thu, 12 Aug 2021 19:38:01 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0236.hostedemail.com 
[216.40.44.236]) by kanga.kvack.org (Postfix) with ESMTP id F24006B0078 for ; Thu, 12 Aug 2021 19:38:00 -0400 (EDT) Received: from smtpin36.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 9B654181AF5E6 for ; Thu, 12 Aug 2021 23:38:00 +0000 (UTC) X-FDA: 78468043920.36.0BA7D69 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf18.hostedemail.com (Postfix) with ESMTP id 538C74010061 for ; Thu, 12 Aug 2021 23:38:00 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id CD134610A8; Thu, 12 Aug 2021 23:37:59 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811480; bh=GnoibyRqmuzmp9rlWYYiDkJSczSXQYZSBk2r8Ru2kb8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=HJ5LZ4ULgVjyz2u3mRUlYamQ/7jmPMb7gl8kc5FY89Ndfg1EO7yHTvHxz7q8XDKaA bck5ngmrh8YhCbTADIdDXRaYTj8Nh2Vr8b24023FVZvRD4426mg78uWRO2Pfdom4A8 oB5v4Cozx+FrwVZIErcM8A/GwPNE/+vMszDu1dnhDa5gHUgN4OULVKJ+qqYbLierIW LqShWqr+RbK1q4XPVDe25+wvzUFfBLMSlGwLA0cRMtq2fbEoRw9aELP3HDq+1ArdAU /c5X97hoSq4KcTPBqFFrVy2KLzBHy4uWQONQnKUzKwpxb5xxk1GxpEyLpjgTbAMdzx BgRFn/rB65VUw== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 06/19] ARC: ioremap: use more commonly used PAGE_KERNEL based uncached flag Date: Thu, 12 Aug 2021 16:37:40 -0700 Message-Id: <20210812233753.104217-7-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 538C74010061 X-Stat-Signature: tpuwc5xeocb3e91jjzus949a4q65h4gh Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=HJ5LZ4UL; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf18.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-HE-Tag: 1628811480-522781 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: and remove the one off uncached definition for ARC Signed-off-by: Vineet Gupta --- arch/arc/include/asm/pgtable.h | 3 --- arch/arc/mm/ioremap.c | 3 ++- 2 files changed, 2 insertions(+), 4 deletions(-) diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h index 80b57c14b430..b054c14f8bf6 100644 --- a/arch/arc/include/asm/pgtable.h +++ b/arch/arc/include/asm/pgtable.h @@ -103,9 +103,6 @@ */ #define PAGE_KERNEL __pgprot(_K_PAGE_PERMS | _PAGE_CACHEABLE) -/* ioremap */ -#define PAGE_KERNEL_NO_CACHE __pgprot(_K_PAGE_PERMS) - /* Masks for actual TLB "PD"s */ #define PTE_BITS_IN_PD0 (_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_HW_SZ) #define PTE_BITS_RWX (_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ) diff --git a/arch/arc/mm/ioremap.c b/arch/arc/mm/ioremap.c index 052bbd8b1e5f..0ee75aca6e10 100644 --- a/arch/arc/mm/ioremap.c +++ b/arch/arc/mm/ioremap.c @@ -39,7 +39,8 @@ void __iomem *ioremap(phys_addr_t paddr, unsigned long size) if (arc_uncached_addr_space(paddr)) return (void __iomem *)(u32)paddr; - return ioremap_prot(paddr, size, pgprot_val(PAGE_KERNEL_NO_CACHE)); + return ioremap_prot(paddr, size, + pgprot_val(pgprot_noncached(PAGE_KERNEL))); } EXPORT_SYMBOL(ioremap); From patchwork Thu Aug 12 23:37:41 2021 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434441 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 96A93C432BE for ; Thu, 12 Aug 2021 23:38:14 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 2BBB06108C for ; Thu, 12 Aug 2021 23:38:14 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 2BBB06108C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 3E8C06B007B; Thu, 12 Aug 2021 19:38:02 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 3992E6B007D; Thu, 12 Aug 2021 19:38:02 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 23A426B007E; Thu, 12 Aug 2021 19:38:02 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0189.hostedemail.com [216.40.44.189]) by kanga.kvack.org (Postfix) with ESMTP id F17D66B007B for ; Thu, 12 Aug 2021 19:38:01 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id A31F4181AF5E6 for ; Thu, 12 Aug 2021 23:38:01 +0000 (UTC) X-FDA: 78468043962.02.5A8EBD1 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf28.hostedemail.com (Postfix) with ESMTP id 5D7CA900BEB0 for ; Thu, 12 Aug 2021 23:38:01 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 3CC94610FC; Thu, 12 Aug 2021 23:38:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811480; bh=756huhpCIkumJJNbUzXKUy3Y4Jb2tLDhfukRVM8geaQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=p8eBEeTBrCxLXIaLILkA2TGAhtbxiRSFBgwckHnqQ5EgTXPG4fpzE9H2hdqfIAbH+ 8255LaxxXNfBXSFz1+BH+10nGjdGLcuav46otwMidYiKWTxqhlFubzPvw7z73sTX17 jXfSGZlKphIU4bGxMIuP2SU2D7AIp8qGOFkx+AyPQV8IB+l7FiykgvLYoz6WXzfy8W P6BwvqPFlYm/eiOT8KXX7skUHeWR/pkQ4Bf0B9GPgBuiSh33x9zmz2RuQeMjECKgFd zx+NM36kRPClDOwJpwC+jdZEk7Fx/LgRKzA1BgGu1NTpSg415ewnkuFQB2aO0QWDgi bJs6uCGfEpgpw== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 07/19] ARC: mm: pmd_populate* to use the canonical set_pmd (and drop pmd_set) Date: Thu, 12 Aug 2021 16:37:41 -0700 Message-Id: <20210812233753.104217-8-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=p8eBEeTB; spf=pass (imf28.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org; dmarc=pass (policy=none) header.from=kernel.org X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 5D7CA900BEB0 X-Stat-Signature: 
9gjjymixoe6uj7keeire1utb4ixtb4ce X-HE-Tag: 1628811481-481419 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Signed-off-by: Vineet Gupta --- arch/arc/include/asm/pgalloc.h | 14 ++++++++++---- arch/arc/include/asm/pgtable.h | 6 ------ 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h index a32ca3104ced..408bc4b0842d 100644 --- a/arch/arc/include/asm/pgalloc.h +++ b/arch/arc/include/asm/pgalloc.h @@ -35,13 +35,19 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) { - pmd_set(pmd, pte); + /* + * The cast to long below is OK in 32-bit PAE40 regime with long long pte + * Despite "wider" pte, the pte table needs to be in non-PAE low memory + * as all higher levels can only hold long pointers. + * + * The cast itself is needed given simplistic definition of set_pmd() + */ + set_pmd(pmd, __pmd((unsigned long)pte)); } -static inline void -pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t ptep) +static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t pte) { - pmd_set(pmd, (pte_t *) ptep); + set_pmd(pmd, __pmd((unsigned long)pte)); } static inline int __get_order_pgd(void) diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h index b054c14f8bf6..f762bacb2358 100644 --- a/arch/arc/include/asm/pgtable.h +++ b/arch/arc/include/asm/pgtable.h @@ -222,12 +222,6 @@ extern char empty_zero_page[PAGE_SIZE]; /* find the logical addr (phy for ARC) of the Page Tbl ref by PMD entry */ #define pmd_page_vaddr(pmd) (pmd_val(pmd) & PAGE_MASK) -/* In a 2 level sys, setup the PGD entry with PTE value */ -static inline void pmd_set(pmd_t *pmdp, pte_t *ptep) -{ - pmd_val(*pmdp) = (unsigned long)ptep; -} - #define pte_none(x) (!pte_val(x)) #define pte_present(x) (pte_val(x) & _PAGE_PRESENT) #define pte_clear(mm, addr, ptep) set_pte_at(mm, addr, ptep, __pte(0)) From patchwork Thu Aug 12 23:37:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434445 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0920FC19F36 for ; Thu, 12 Aug 2021 23:38:18 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 94A5B60EE2 for ; Thu, 12 Aug 2021 23:38:17 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 94A5B60EE2 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id F0EC66B007D; Thu, 12 Aug 2021 19:38:02 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E47506B0082; Thu, 12 Aug 2021 19:38:02 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B60FC6B0080; Thu, 12 Aug 2021 19:38:02 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: 
from forelay.hostedemail.com (smtprelay0167.hostedemail.com [216.40.44.167]) by kanga.kvack.org (Postfix) with ESMTP id 980826B007E for ; Thu, 12 Aug 2021 19:38:02 -0400 (EDT) Received: from smtpin31.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 3DD91C5D4 for ; Thu, 12 Aug 2021 23:38:02 +0000 (UTC) X-FDA: 78468044004.31.CF52A48 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf23.hostedemail.com (Postfix) with ESMTP id DA18F9007069 for ; Thu, 12 Aug 2021 23:38:01 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id C3860610FA; Thu, 12 Aug 2021 23:38:00 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811481; bh=IWyNaBVOFxoQxqyiGMzz51disJ1/hK0R28q3NRLGK10=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=bMzaNmK8JyKpzEfgJszpOn24QyfZmmJmIytZ9u72rJ4MIEDPomyV+8NLRWsCvdEU8 xKk4nJ33OyT3Wlm56ooihYeoFpRckoUvE106CUqsWHhktfKPD03Qf4g07PTjvujD51 OTJfMKqss4dsWzqWwyw4GDz5tAJ32m+XqQkhfEGJDhyIhrrgTmocPdGipZIHWF9UrF qi1pJ3FHjUZwMKWSt3LHZk5VU+uI0jeNB9aR14kjPKQHcj8iSwYXTtB17hMqnlg1z6 yZVZFDLTw4E2SPNDA1vwsdHOaI4t5BEFXaTkCHLn2b0rB4FgBLxIIeQQ33X+cZqtx/ A5VFk27meYc5g== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 08/19] ARC: mm: switch pgtable_t back to struct page * Date: Thu, 12 Aug 2021 16:37:42 -0700 Message-Id: <20210812233753.104217-9-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=bMzaNmK8; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf23.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-Stat-Signature: dqnsobyosiczrjy7mg3bzhkgyurx9jfk X-Rspamd-Queue-Id: DA18F9007069 X-Rspamd-Server: rspam05 X-HE-Tag: 1628811481-959725 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: So far ARC pgtable_t has not been struct page based to avoid extra page_address() calls involved. However the differences are down to noise and get in the way of using generic code, hence this patch. 
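To make the page_address() cost concrete, here is a minimal sketch of the two shapes of pmd_populate() — both lines are lifted from the hunks in this series (patch 07 and this patch), not new code:

/* pgtable_t == pte_t *: the table's kernel virtual address is already in hand */
static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t pte)
{
	set_pmd(pmd, __pmd((unsigned long)pte));
}

/* pgtable_t == struct page * (this patch): one extra struct page -> vaddr hop */
static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t pte_page)
{
	set_pmd(pmd, __pmd((unsigned long)page_address(pte_page)));
}

That extra hop is the "noise" the message refers to; it buys the ability to reuse the generic pgalloc code in the next patch.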
Suggested-by: Mike Rapoport Signed-off-by: Vineet Gupta Reported-by: kernel test robot --- arch/arc/include/asm/page.h | 2 +- arch/arc/include/asm/pgalloc.h | 57 ++++++++++------------------------ arch/arc/mm/init.c | 3 ++ 3 files changed, 21 insertions(+), 41 deletions(-) diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h index c4ac827379cd..514b8b70df50 100644 --- a/arch/arc/include/asm/page.h +++ b/arch/arc/include/asm/page.h @@ -58,7 +58,7 @@ typedef struct { #define pte_pgprot(x) __pgprot(pte_val(x)) -typedef pte_t * pgtable_t; +typedef struct page *pgtable_t; /* * Use virt_to_pfn with caution: diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h index 408bc4b0842d..8ab1af3da6e7 100644 --- a/arch/arc/include/asm/pgalloc.h +++ b/arch/arc/include/asm/pgalloc.h @@ -45,22 +45,17 @@ pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) set_pmd(pmd, __pmd((unsigned long)pte)); } -static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t pte) +static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t pte_page) { - set_pmd(pmd, __pmd((unsigned long)pte)); -} - -static inline int __get_order_pgd(void) -{ - return get_order(PTRS_PER_PGD * sizeof(pgd_t)); + set_pmd(pmd, __pmd((unsigned long)page_address(pte_page))); } static inline pgd_t *pgd_alloc(struct mm_struct *mm) { - int num, num2; - pgd_t *ret = (pgd_t *) __get_free_pages(GFP_KERNEL, __get_order_pgd()); + pgd_t *ret = (pgd_t *) __get_free_page(GFP_KERNEL); if (ret) { + int num, num2; num = USER_PTRS_PER_PGD + USER_KERNEL_GUTTER / PGDIR_SIZE; memzero(ret, num * sizeof(pgd_t)); @@ -76,61 +71,43 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) { - free_pages((unsigned long)pgd, __get_order_pgd()); -} - - -/* - * With software-only page-tables, addr-split for traversal is tweakable and - * that directly governs how big tables would be at each level. - * Further, the MMU page size is configurable. - * Thus we need to programatically assert the size constraint - * All of this is const math, allowing gcc to do constant folding/propagation. 
- */ - -static inline int __get_order_pte(void) -{ - return get_order(PTRS_PER_PTE * sizeof(pte_t)); + free_page((unsigned long)pgd); } static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) { pte_t *pte; - pte = (pte_t *) __get_free_pages(GFP_KERNEL | __GFP_ZERO, - __get_order_pte()); + pte = (pte_t *) __get_free_page(GFP_KERNEL | __GFP_ZERO); return pte; } -static inline pgtable_t -pte_alloc_one(struct mm_struct *mm) +static inline pgtable_t pte_alloc_one(struct mm_struct *mm) { - pgtable_t pte_pg; struct page *page; - pte_pg = (pgtable_t)__get_free_pages(GFP_KERNEL, __get_order_pte()); - if (!pte_pg) - return 0; - memzero((void *)pte_pg, PTRS_PER_PTE * sizeof(pte_t)); - page = virt_to_page(pte_pg); + page = (pgtable_t)alloc_page(GFP_KERNEL | __GFP_ZERO | __GFP_ACCOUNT); + if (!page) + return NULL; + if (!pgtable_pte_page_ctor(page)) { __free_page(page); - return 0; + return NULL; } - return pte_pg; + return page; } static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) { - free_pages((unsigned long)pte, __get_order_pte()); /* takes phy addr */ + free_page((unsigned long)pte); } -static inline void pte_free(struct mm_struct *mm, pgtable_t ptep) +static inline void pte_free(struct mm_struct *mm, pgtable_t pte_page) { - pgtable_pte_page_dtor(virt_to_page(ptep)); - free_pages((unsigned long)ptep, __get_order_pte()); + pgtable_pte_page_dtor(pte_page); + __free_page(pte_page); } #define __pte_free_tlb(tlb, pte, addr) pte_free((tlb)->mm, pte) diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c index c083bf660cec..46ad9aee7a73 100644 --- a/arch/arc/mm/init.c +++ b/arch/arc/mm/init.c @@ -189,6 +189,9 @@ void __init mem_init(void) { memblock_free_all(); highmem_init(); + + BUILD_BUG_ON((PTRS_PER_PGD * sizeof(pgd_t)) > PAGE_SIZE); + BUILD_BUG_ON((PTRS_PER_PTE * sizeof(pte_t)) > PAGE_SIZE); } #ifdef CONFIG_HIGHMEM From patchwork Thu Aug 12 23:37:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434443 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 508C5C43216 for ; Thu, 12 Aug 2021 23:38:16 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EAC916109F for ; Thu, 12 Aug 2021 23:38:15 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org EAC916109F Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id C27D16B007E; Thu, 12 Aug 2021 19:38:02 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id BD6B96B007D; Thu, 12 Aug 2021 19:38:02 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A77DD8D0001; Thu, 12 Aug 2021 19:38:02 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0215.hostedemail.com [216.40.44.215]) by kanga.kvack.org (Postfix) with ESMTP id 8402A6B007D for ; Thu, 12 Aug 2021 19:38:02 -0400 (EDT) 
Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 26FCF180AD81A for ; Thu, 12 Aug 2021 23:38:02 +0000 (UTC) X-FDA: 78468044004.29.1AD861D Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf06.hostedemail.com (Postfix) with ESMTP id D371C801B0F7 for ; Thu, 12 Aug 2021 23:38:01 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 311AD6113E; Thu, 12 Aug 2021 23:38:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811481; bh=P/zEitpJGWXqQnjfHf92tqfSZBtCS0Di5vxOaU/9MD8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=SoMM9n1Tr9XrlR6f4RTBC9zHGaQ7Tv1e0NIHSbudb/XWoxRn7yJH/EpDjhLAoPG1U k2VsEjQP02DPXvFXHVj0R/S5Y2782k0scB4WxtX6mUIyBLJ7DJBN+h7ftKfmMPBqL/ +DwhvMMSy8nq0eWpPPrIbZSFCcFjBC7vIKQLYgC3XG9amedpT0Z2lEUgg4tURebKiG YLkUqikSOR7OvDrVu4UmlG/b0J5Fj+t4XXjgRaiRZjacqnGyXqsrLy0Xfxr5Ihhlte 5Eng0QYUmJZbsFRfwkwDPu7hBvayhRq+wFiEagK/TeSRFOxgwNdiBAFPKisC4F4rIU +uvGLl4pZGrxQ== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 09/19] ARC: mm: switch to asm-generic/pgalloc.h Date: Thu, 12 Aug 2021 16:37:43 -0700 Message-Id: <20210812233753.104217-10-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: D371C801B0F7 X-Stat-Signature: zfk8di37h8mh1qq6cdxjfn44o3riuj9j Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=SoMM9n1T; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf06.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-HE-Tag: 1628811481-555680 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: With previous patch ARC pgalloc functions are same as generic, hence switch to that. 
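For reference, the generic pte helpers being adopted look roughly like the sketch below (paraphrased from memory of asm-generic/pgalloc.h around this kernel version — treat it as a sketch, not verbatim upstream code). After the previous patch, the ARC-private versions removed below are functionally identical, which is what makes this switch a pure deletion:

/* Sketch of what <asm-generic/pgalloc.h> provides (assumed, not quoted) */
static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp)
{
	struct page *pte;

	pte = alloc_page(gfp);
	if (!pte)
		return NULL;
	if (!pgtable_pte_page_ctor(pte)) {
		__free_page(pte);
		return NULL;
	}
	return pte;
}

static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
{
	/* GFP_PGTABLE_USER is roughly GFP_KERNEL | __GFP_ZERO | __GFP_ACCOUNT */
	return __pte_alloc_one(mm, GFP_PGTABLE_USER);
}

static inline void pte_free(struct mm_struct *mm, struct page *pte_page)
{
	pgtable_pte_page_dtor(pte_page);
	__free_page(pte_page);
}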
Suggested-by: Mike Rapoport Signed-off-by: Vineet Gupta --- arch/arc/include/asm/pgalloc.h | 42 +--------------------------------- 1 file changed, 1 insertion(+), 41 deletions(-) diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h index 8ab1af3da6e7..0cde9e5eefd7 100644 --- a/arch/arc/include/asm/pgalloc.h +++ b/arch/arc/include/asm/pgalloc.h @@ -31,6 +31,7 @@ #include #include +#include static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) @@ -69,47 +70,6 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) return ret; } -static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) -{ - free_page((unsigned long)pgd); -} - -static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) -{ - pte_t *pte; - - pte = (pte_t *) __get_free_page(GFP_KERNEL | __GFP_ZERO); - - return pte; -} - -static inline pgtable_t pte_alloc_one(struct mm_struct *mm) -{ - struct page *page; - - page = (pgtable_t)alloc_page(GFP_KERNEL | __GFP_ZERO | __GFP_ACCOUNT); - if (!page) - return NULL; - - if (!pgtable_pte_page_ctor(page)) { - __free_page(page); - return NULL; - } - - return page; -} - -static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) -{ - free_page((unsigned long)pte); -} - -static inline void pte_free(struct mm_struct *mm, pgtable_t pte_page) -{ - pgtable_pte_page_dtor(pte_page); - __free_page(pte_page); -} - #define __pte_free_tlb(tlb, pte, addr) pte_free((tlb)->mm, pte) #endif /* _ASM_ARC_PGALLOC_H */ From patchwork Thu Aug 12 23:37:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434447 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id F0A24C19F3A for ; Thu, 12 Aug 2021 23:38:19 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 8E90A6108C for ; Thu, 12 Aug 2021 23:38:19 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 8E90A6108C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 3C60A6B0080; Thu, 12 Aug 2021 19:38:03 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 34B1C8D0001; Thu, 12 Aug 2021 19:38:03 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 212E36B0082; Thu, 12 Aug 2021 19:38:03 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0246.hostedemail.com [216.40.44.246]) by kanga.kvack.org (Postfix) with ESMTP id 03E146B0080 for ; Thu, 12 Aug 2021 19:38:02 -0400 (EDT) Received: from smtpin27.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 82595180ACF62 for ; Thu, 12 Aug 2021 23:38:02 +0000 (UTC) X-FDA: 78468044004.27.68DA1CD Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf06.hostedemail.com (Postfix) with ESMTP id 4C9A4801B0F7 for ; Thu, 12 Aug 2021 
23:38:02 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id B0B6E6108C; Thu, 12 Aug 2021 23:38:01 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811482; bh=ToKfOhhiER8j02fngobXA+3bITHxEeKJM0blznUvlNM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qz4+ACaX+VhL6UDoG4GATHnXNG72f1jie44pd2QKRM/Fe242x5Rqwzt62yiHkhABI MJ/1t7szUYGw8tdaKtR8D9vCerC3VuDIIJtYTJWFWb0pVK8YkdwVZFN3/cC7PoaxVo Y+thDihH1djtm3oHMhYWc8ik+VvG4d58g49pit5v+E6Avw3D1LtRGJriJJ99kRM7+b BoH5ZmPGOCl0RZ3RbAOOdfUod3kXbWyaxhtNCcWKf6XLiOhhFwxV84smL9w5OC2FSs bVNGScJpcJGUt98HJx8A8QBEXBFWzYUXbsUC2wEVQpuzDpOBj8cOxRKwU32diPvcU/ WPaoB/kKJ0qTA== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 10/19] ARC: mm: non-functional code cleanup ahead of 3 levels Date: Thu, 12 Aug 2021 16:37:44 -0700 Message-Id: <20210812233753.104217-11-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 4C9A4801B0F7 X-Stat-Signature: in71doa5rpcx9ey46j73dyq7xuzwjygy Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=qz4+ACaX; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf06.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-HE-Tag: 1628811482-529470 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Signed-off-by: Vineet Gupta --- arch/arc/include/asm/page.h | 30 ++++++++++++++++-------------- 1 file changed, 16 insertions(+), 14 deletions(-) diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h index 514b8b70df50..28ed82b1800f 100644 --- a/arch/arc/include/asm/page.h +++ b/arch/arc/include/asm/page.h @@ -34,6 +34,13 @@ void copy_user_highpage(struct page *to, struct page *from, unsigned long u_vaddr, struct vm_area_struct *vma); void clear_user_page(void *to, unsigned long u_vaddr, struct page *page); +typedef struct { + unsigned long pgd; +} pgd_t; + +#define pgd_val(x) ((x).pgd) +#define __pgd(x) ((pgd_t) { (x) }) + typedef struct { #ifdef CONFIG_ARC_HAS_PAE40 unsigned long long pte; @@ -41,22 +48,17 @@ typedef struct { unsigned long pte; #endif } pte_t; -typedef struct { - unsigned long pgd; -} pgd_t; + +#define pte_val(x) ((x).pte) +#define __pte(x) ((pte_t) { (x) }) + typedef struct { unsigned long pgprot; } pgprot_t; -#define pte_val(x) ((x).pte) -#define pgd_val(x) ((x).pgd) -#define pgprot_val(x) ((x).pgprot) - -#define __pte(x) ((pte_t) { (x) }) -#define __pgd(x) ((pgd_t) { (x) }) -#define __pgprot(x) ((pgprot_t) { (x) }) - -#define pte_pgprot(x) __pgprot(pte_val(x)) +#define pgprot_val(x) ((x).pgprot) +#define __pgprot(x) ((pgprot_t) { (x) }) +#define pte_pgprot(x) __pgprot(pte_val(x)) typedef struct page *pgtable_t; @@ -96,8 +98,8 @@ extern int pfn_valid(unsigned long pfn); * virt here means link-address/program-address as embedded in object code. 
* And for ARC, link-addr = physical address */ -#define __pa(vaddr) ((unsigned long)(vaddr)) -#define __va(paddr) ((void *)((unsigned long)(paddr))) +#define __pa(vaddr) ((unsigned long)(vaddr)) +#define __va(paddr) ((void *)((unsigned long)(paddr))) #define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr)) #define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr)) From patchwork Thu Aug 12 23:37:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434449 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C914FC4338F for ; Thu, 12 Aug 2021 23:38:21 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 742CB6108C for ; Thu, 12 Aug 2021 23:38:21 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 742CB6108C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 5E4F86B0081; Thu, 12 Aug 2021 19:38:04 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 547606B0082; Thu, 12 Aug 2021 19:38:04 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3C2286B0083; Thu, 12 Aug 2021 19:38:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0039.hostedemail.com [216.40.44.39]) by kanga.kvack.org (Postfix) with ESMTP id 1882E6B0081 for ; Thu, 12 Aug 2021 19:38:04 -0400 (EDT) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id AB584180AD81A for ; Thu, 12 Aug 2021 23:38:03 +0000 (UTC) X-FDA: 78468044046.07.7EE49D0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf28.hostedemail.com (Postfix) with ESMTP id 5007E900BEB4 for ; Thu, 12 Aug 2021 23:38:03 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 204C86109F; Thu, 12 Aug 2021 23:38:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811482; bh=Jvemhc6NK+B85RnvnFOWoDrTakqtEWuGs1UKrKyANmU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ido4Na3TV+bU6rUPv7EqSinKJX4s2BCubbt0YOja5AkqcyvCgaKyZxoURN5dq0sQ5 4Szcj2DRWNI4n+VdNRzdjf/WTcnDR/sFlcn/dlIW/PA8O+15aWQeQtM6stnG17wJ5S 7DWHwEvEA+0fc6t9KLaG0Ls1Y08rrvZ1893noPHT5P/3l4AAjFbK6Mgc22ZssdVyWJ hTjKUqHLsAI0vF2BTCX3MMBuQI6X115LHn7/k4ZAziRE0GqIk85CJemkNJNm/oI+EB gXMhsLQOWPjtNTsJBlJbExC7Om5SwEky8LySkkptpImSNm25rzCSGhHwKrsWvuYpas LfWF4py32WNEg== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 11/19] ARC: mm: move MMU specific bits out of ASID allocator Date: Thu, 12 Aug 2021 16:37:45 -0700 Message-Id: <20210812233753.104217-12-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: 
<20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 5007E900BEB4 Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=ido4Na3T; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf28.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-Rspamd-Server: rspam04 X-Stat-Signature: 3dcpa9pkarbui9759y744eirb7164c98 X-HE-Tag: 1628811483-904028 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: And while at it, rewrite commentary on ASID allocator Signed-off-by: Vineet Gupta --- arch/arc/include/asm/mmu.h | 13 +++++++++++++ arch/arc/include/asm/mmu_context.h | 28 +++++++++++++--------------- arch/arc/mm/tlb.c | 11 ++++------- 3 files changed, 30 insertions(+), 22 deletions(-) diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h index 762cfe66e16b..0b117ea07048 100644 --- a/arch/arc/include/asm/mmu.h +++ b/arch/arc/include/asm/mmu.h @@ -64,6 +64,19 @@ typedef struct { unsigned long asid[NR_CPUS]; /* 8 bit MMU PID + Generation cycle */ } mm_context_t; +static void inline mmu_setup_asid(struct mm_struct *mm, unsigned int asid) +{ + write_aux_reg(ARC_REG_PID, asid | MMU_ENABLE); +} + +static void inline mmu_setup_pgd(struct mm_struct *mm, void *pgd) +{ + /* PGD cached in MMU reg to avoid 3 mem lookups: task->mm->pgd */ +#ifdef CONFIG_ISA_ARCV2 + write_aux_reg(ARC_REG_SCRATCH_DATA0, (unsigned int)pgd); +#endif +} + static inline int is_pae40_enabled(void) { return IS_ENABLED(CONFIG_ARC_HAS_PAE40); diff --git a/arch/arc/include/asm/mmu_context.h b/arch/arc/include/asm/mmu_context.h index 49318a126879..dda471f5f05b 100644 --- a/arch/arc/include/asm/mmu_context.h +++ b/arch/arc/include/asm/mmu_context.h @@ -15,22 +15,23 @@ #ifndef _ASM_ARC_MMU_CONTEXT_H #define _ASM_ARC_MMU_CONTEXT_H -#include -#include #include +#include #include -/* ARC700 ASID Management +/* ARC ASID Management + * + * MMU tags TLBs with an 8-bit ASID, avoiding need to flush the TLB on + * context-switch. * - * ARC MMU provides 8-bit ASID (0..255) to TAG TLB entries, allowing entries - * with same vaddr (different tasks) to co-exit. This provides for - * "Fast Context Switch" i.e. no TLB flush on ctxt-switch + * ASID is managed per cpu, so task threads across CPUs can have different + * ASID. Global ASID management is needed if hardware supports TLB shootdown + * and/or shared TLB across cores, which ARC doesn't. * - * Linux assigns each task a unique ASID. A simple round-robin allocation - * of H/w ASID is done using software tracker @asid_cpu. - * When it reaches max 255, the allocation cycle starts afresh by flushing - * the entire TLB and wrapping ASID back to zero. + * Each task is assigned unique ASID, with a simple round-robin allocator + * tracked in @asid_cpu. When 8-bit value rolls over,a new cycle is started + * over from 0, and TLB is flushed * * A new allocation cycle, post rollover, could potentially reassign an ASID * to a different task. Thus the rule is to refresh the ASID in a new cycle. 
@@ -93,7 +94,7 @@ static inline void get_new_mmu_context(struct mm_struct *mm) asid_mm(mm, cpu) = asid_cpu(cpu); set_hw: - write_aux_reg(ARC_REG_PID, hw_pid(mm, cpu) | MMU_ENABLE); + mmu_setup_asid(mm, hw_pid(mm, cpu)); local_irq_restore(flags); } @@ -146,10 +147,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, */ cpumask_set_cpu(cpu, mm_cpumask(next)); -#ifdef CONFIG_ISA_ARCV2 - /* PGD cached in MMU reg to avoid 3 mem lookups: task->mm->pgd */ - write_aux_reg(ARC_REG_SCRATCH_DATA0, next->pgd); -#endif + mmu_setup_pgd(next, next->pgd); get_new_mmu_context(next); } diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c index 15cbc285b0de..b68d5798327b 100644 --- a/arch/arc/mm/tlb.c +++ b/arch/arc/mm/tlb.c @@ -716,14 +716,11 @@ void arc_mmu_init(void) if (IS_ENABLED(CONFIG_ARC_HAS_PAE40) && !mmu->pae) panic("Hardware doesn't support PAE40\n"); - /* Enable the MMU */ - write_aux_reg(ARC_REG_PID, MMU_ENABLE); + /* Enable the MMU with ASID 0 */ + mmu_setup_asid(NULL, 0); - /* In arc700/smp needed for re-entrant interrupt handling */ -#ifdef CONFIG_ISA_ARCV2 - /* swapper_pg_dir is the pgd for the kernel, used by vmalloc */ - write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir); -#endif + /* cache the pgd pointer in MMU SCRATCH reg (ARCv2 only) */ + mmu_setup_pgd(NULL, swapper_pg_dir); if (pae40_exist_but_not_enab()) write_aux_reg(ARC_REG_TLBPD1HI, 0); From patchwork Thu Aug 12 23:37:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434451 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 71CE9C4320E for ; Thu, 12 Aug 2021 23:38:23 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0B34460EE2 for ; Thu, 12 Aug 2021 23:38:23 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 0B34460EE2 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id AC9AD6B0082; Thu, 12 Aug 2021 19:38:04 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A51466B0083; Thu, 12 Aug 2021 19:38:04 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 93EB76B0085; Thu, 12 Aug 2021 19:38:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0093.hostedemail.com [216.40.44.93]) by kanga.kvack.org (Postfix) with ESMTP id 6972F6B0083 for ; Thu, 12 Aug 2021 19:38:04 -0400 (EDT) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 146B018F7E for ; Thu, 12 Aug 2021 23:38:04 +0000 (UTC) X-FDA: 78468044088.06.F2B8A88 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf30.hostedemail.com (Postfix) with ESMTP id BB8F7E0104D8 for ; Thu, 12 Aug 2021 23:38:03 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 
9A10960EE2; Thu, 12 Aug 2021 23:38:02 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811482; bh=t0y1Zd6GQZMYoGFevFFkBQTS73Vy19z+gE09o2JvAIk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=luSYuKdNhi/DuXx9297D2f0p5HeU1kvjcfpqOAUjBqzOC056SojDpSuEeMGm+c5Hl tc/kj/Q/oeVF3Pm8oBm5gXgDjjQyKveTpxj4vfVEW4HUR7ur8Z4E9Sj66F99vTaKgh 8hEGmQk6aJ6lZF3gRT7+xVn70njJ6pUKDT01x7Id6OprW1K04FSo2k9gBXRXGEqK5g Dabm6safYGwJcxgYUJ6f5i4Rv/2jadNXcg97MnXpN2evl/cbkwcJNplatxAGigNJ1N baFT2fEpFwqRarqgI4zuo46i5WN+YwQUzQlR3lHaDzPsJUml14VMqikoaL9g809yzh J0u+PNKR+6hrA== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta , Jose Abreu Subject: [PATCH v2 12/19] ARC: mm: move MMU specific bits out of entry code ... Date: Thu, 12 Aug 2021 16:37:46 -0700 Message-Id: <20210812233753.104217-13-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: BB8F7E0104D8 Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=luSYuKdN; spf=pass (imf30.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org; dmarc=pass (policy=none) header.from=kernel.org X-Rspamd-Server: rspam01 X-Stat-Signature: r1koshzay53kyzf83k31whnnpxthrqir X-HE-Tag: 1628811483-410195 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: ... to avoid polluting shared entry code (across three ISA variants) with ISA/MMU specific code. Cc: Jose Abreu Signed-off-by: Vineet Gupta --- arch/arc/include/asm/mmu.h | 8 ++++++++ arch/arc/kernel/entry-arcv2.S | 1 + arch/arc/kernel/entry.S | 7 ++----- 3 files changed, 11 insertions(+), 5 deletions(-) diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h index 0b117ea07048..414a27e806b6 100644 --- a/arch/arc/include/asm/mmu.h +++ b/arch/arc/include/asm/mmu.h @@ -84,6 +84,14 @@ static inline int is_pae40_enabled(void) extern int pae40_exist_but_not_enab(void); +#else + +.macro ARC_MMU_REENABLE reg + lr \reg, [ARC_REG_PID] + or \reg, \reg, MMU_ENABLE + sr \reg, [ARC_REG_PID] +.endm + #endif /* !__ASSEMBLY__ */ #endif diff --git a/arch/arc/kernel/entry-arcv2.S b/arch/arc/kernel/entry-arcv2.S index 12d5f12d10d2..a7e6a2174187 100644 --- a/arch/arc/kernel/entry-arcv2.S +++ b/arch/arc/kernel/entry-arcv2.S @@ -10,6 +10,7 @@ #include #include #include +#include ; A maximum number of supported interrupts in the core interrupt controller. 
; This number is not equal to the maximum interrupt number (256) because diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S index 2cb8dfe866b6..dd77a0c8f740 100644 --- a/arch/arc/kernel/entry.S +++ b/arch/arc/kernel/entry.S @@ -101,11 +101,8 @@ ENTRY(EV_MachineCheck) lr r0, [efa] mov r1, sp - ; hardware auto-disables MMU, re-enable it to allow kernel vaddr - ; access for say stack unwinding of modules for crash dumps - lr r3, [ARC_REG_PID] - or r3, r3, MMU_ENABLE - sr r3, [ARC_REG_PID] + ; MC excpetions disable MMU + ARC_MMU_REENABLE r3 lsr r3, r2, 8 bmsk r3, r3, 7 From patchwork Thu Aug 12 23:37:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434453 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3AA06C43214 for ; Thu, 12 Aug 2021 23:38:25 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id D117C6108C for ; Thu, 12 Aug 2021 23:38:24 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org D117C6108C Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 1797B6B0085; Thu, 12 Aug 2021 19:38:05 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 128A66B0087; Thu, 12 Aug 2021 19:38:05 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EE6476B0088; Thu, 12 Aug 2021 19:38:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0094.hostedemail.com [216.40.44.94]) by kanga.kvack.org (Postfix) with ESMTP id C047A6B0085 for ; Thu, 12 Aug 2021 19:38:04 -0400 (EDT) Received: from smtpin11.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 6CDEE181AF5E6 for ; Thu, 12 Aug 2021 23:38:04 +0000 (UTC) X-FDA: 78468044088.11.E972C4D Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf23.hostedemail.com (Postfix) with ESMTP id 0ECAF9009B4A for ; Thu, 12 Aug 2021 23:38:03 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 0FB61610FD; Thu, 12 Aug 2021 23:38:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811483; bh=2Fl+UaQd2LbBkvpg/DazzYYTlhrohlhzA+PnuOctBVY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=aPK0kff4ECHeCAQ5/7/T8RrOf3uCFlgE4teUXWOt0D32NUoOkPIkN5V9THIfZtjas kw1gv/ujim8O654BSs6OKq+uDfgJOQQSGzf1c8ii07AWLy+c9WLQqUcB9SM+sVZuxe 0Y+XfnZRNZ0B7gbFefwxETwkm8QKVbl2AzPaAt72it1DgB3p27p26vPlUwDwG8SmCv VjdtYqTwVtYvTUtF0H+tseJNpFtqJp1QhNN9cSsJ1qB4aDdfqQs2RkggLFobtEatMn j83jsD5tMIau/9zOlhfvTOWHkD3DvRFtItE3nQvaZbw6bC9SpDHIDMUQ1xEpc0Vmac kNVgGqnboAfkw== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 13/19] ARC: mm: disintegrate mmu.h 
(arcv2 bits out) Date: Thu, 12 Aug 2021 16:37:47 -0700 Message-Id: <20210812233753.104217-14-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 0ECAF9009B4A Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=aPK0kff4; spf=pass (imf23.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org; dmarc=pass (policy=none) header.from=kernel.org X-Rspamd-Server: rspam01 X-Stat-Signature: rqddj6jsaq5th57txsrjs5qhzzhe5xnr X-HE-Tag: 1628811483-70761 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: non functional change Signed-off-by: Vineet Gupta Reported-by: kernel test robot --- arch/arc/include/asm/mmu-arcv2.h | 103 +++++++++++++++++++++++++++++++ arch/arc/include/asm/mmu.h | 80 +----------------------- arch/arc/include/asm/pgtable.h | 6 -- 3 files changed, 105 insertions(+), 84 deletions(-) create mode 100644 arch/arc/include/asm/mmu-arcv2.h diff --git a/arch/arc/include/asm/mmu-arcv2.h b/arch/arc/include/asm/mmu-arcv2.h new file mode 100644 index 000000000000..4c47dd3864d1 --- /dev/null +++ b/arch/arc/include/asm/mmu-arcv2.h @@ -0,0 +1,103 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2004, 2007-2010, 2011-2012, 2019-20 Synopsys, Inc. (www.synopsys.com) + * + * MMUv3 (arc700) / MMUv4 (archs) are software page walked and software managed. + * This file contains the TLB access registers and commands + */ + +#ifndef _ASM_ARC_MMU_ARCV2_H +#define _ASM_ARC_MMU_ARCV2_H + +/* + * TLB Management regs + */ +#define ARC_REG_MMU_BCR 0x06f + +#ifdef CONFIG_ARC_MMU_V3 +#define ARC_REG_TLBPD0 0x405 +#define ARC_REG_TLBPD1 0x406 +#define ARC_REG_TLBPD1HI 0 /* Dummy: allows common code */ +#define ARC_REG_TLBINDEX 0x407 +#define ARC_REG_TLBCOMMAND 0x408 +#define ARC_REG_PID 0x409 +#define ARC_REG_SCRATCH_DATA0 0x418 +#else +#define ARC_REG_TLBPD0 0x460 +#define ARC_REG_TLBPD1 0x461 +#define ARC_REG_TLBPD1HI 0x463 +#define ARC_REG_TLBINDEX 0x464 +#define ARC_REG_TLBCOMMAND 0x465 +#define ARC_REG_PID 0x468 +#define ARC_REG_SCRATCH_DATA0 0x46c +#endif + +/* Bits in MMU PID reg */ +#define __TLB_ENABLE (1 << 31) +#define __PROG_ENABLE (1 << 30) +#define MMU_ENABLE (__TLB_ENABLE | __PROG_ENABLE) + +/* Bits in TLB Index reg */ +#define TLB_LKUP_ERR 0x80000000 + +#ifdef CONFIG_ARC_MMU_V3 +#define TLB_DUP_ERR (TLB_LKUP_ERR | 0x00000001) +#else +#define TLB_DUP_ERR (TLB_LKUP_ERR | 0x40000000) +#endif + +/* + * TLB Commands + */ +#define TLBWrite 0x1 +#define TLBRead 0x2 +#define TLBGetIndex 0x3 +#define TLBProbe 0x4 +#define TLBWriteNI 0x5 /* write JTLB without inv uTLBs */ +#define TLBIVUTLB 0x6 /* explicitly inv uTLBs */ + +#ifdef CONFIG_ARC_MMU_V4 +#define TLBInsertEntry 0x7 +#define TLBDeleteEntry 0x8 +#endif + +/* Masks for actual TLB "PD"s */ +#define PTE_BITS_IN_PD0 (_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_HW_SZ) +#define PTE_BITS_RWX (_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ) + +#define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK_PHYS | _PAGE_CACHEABLE) + +#ifndef __ASSEMBLY__ + +struct mm_struct; +extern int pae40_exist_but_not_enab(void); + +static inline int is_pae40_enabled(void) +{ + return IS_ENABLED(CONFIG_ARC_HAS_PAE40); +} + +static void inline mmu_setup_asid(struct mm_struct *mm, unsigned long asid) +{ + 
write_aux_reg(ARC_REG_PID, asid | MMU_ENABLE); +} + +static void inline mmu_setup_pgd(struct mm_struct *mm, void *pgd) +{ + /* PGD cached in MMU reg to avoid 3 mem lookups: task->mm->pgd */ +#ifdef CONFIG_ISA_ARCV2 + write_aux_reg(ARC_REG_SCRATCH_DATA0, (unsigned int)pgd); +#endif +} + +#else + +.macro ARC_MMU_REENABLE reg + lr \reg, [ARC_REG_PID] + or \reg, \reg, MMU_ENABLE + sr \reg, [ARC_REG_PID] +.endm + +#endif /* !__ASSEMBLY__ */ + +#endif diff --git a/arch/arc/include/asm/mmu.h b/arch/arc/include/asm/mmu.h index 414a27e806b6..ca427c30f70e 100644 --- a/arch/arc/include/asm/mmu.h +++ b/arch/arc/include/asm/mmu.h @@ -7,91 +7,15 @@ #define _ASM_ARC_MMU_H #ifndef __ASSEMBLY__ -#include /* NR_CPUS */ -#endif - -/* MMU Management regs */ -#define ARC_REG_MMU_BCR 0x06f - -#ifdef CONFIG_ARC_MMU_V3 -#define ARC_REG_TLBPD0 0x405 -#define ARC_REG_TLBPD1 0x406 -#define ARC_REG_TLBPD1HI 0 /* Dummy: allows code sharing with ARC700 */ -#define ARC_REG_TLBINDEX 0x407 -#define ARC_REG_TLBCOMMAND 0x408 -#define ARC_REG_PID 0x409 -#define ARC_REG_SCRATCH_DATA0 0x418 -#else -#define ARC_REG_TLBPD0 0x460 -#define ARC_REG_TLBPD1 0x461 -#define ARC_REG_TLBPD1HI 0x463 -#define ARC_REG_TLBINDEX 0x464 -#define ARC_REG_TLBCOMMAND 0x465 -#define ARC_REG_PID 0x468 -#define ARC_REG_SCRATCH_DATA0 0x46c -#endif - -/* Bits in MMU PID register */ -#define __TLB_ENABLE (1 << 31) -#define __PROG_ENABLE (1 << 30) -#define MMU_ENABLE (__TLB_ENABLE | __PROG_ENABLE) - -/* Error code if probe fails */ -#define TLB_LKUP_ERR 0x80000000 - -#ifdef CONFIG_ARC_MMU_V3 -#define TLB_DUP_ERR (TLB_LKUP_ERR | 0x00000001) -#else -#define TLB_DUP_ERR (TLB_LKUP_ERR | 0x40000000) -#endif - -/* TLB Commands */ -#define TLBWrite 0x1 -#define TLBRead 0x2 -#define TLBGetIndex 0x3 -#define TLBProbe 0x4 -#define TLBWriteNI 0x5 /* write JTLB without inv uTLBs */ -#define TLBIVUTLB 0x6 /* explicitly inv uTLBs */ -#ifdef CONFIG_ARC_MMU_V4 -#define TLBInsertEntry 0x7 -#define TLBDeleteEntry 0x8 -#endif - -#ifndef __ASSEMBLY__ +#include /* NR_CPUS */ typedef struct { unsigned long asid[NR_CPUS]; /* 8 bit MMU PID + Generation cycle */ } mm_context_t; -static void inline mmu_setup_asid(struct mm_struct *mm, unsigned int asid) -{ - write_aux_reg(ARC_REG_PID, asid | MMU_ENABLE); -} - -static void inline mmu_setup_pgd(struct mm_struct *mm, void *pgd) -{ - /* PGD cached in MMU reg to avoid 3 mem lookups: task->mm->pgd */ -#ifdef CONFIG_ISA_ARCV2 - write_aux_reg(ARC_REG_SCRATCH_DATA0, (unsigned int)pgd); #endif -} - -static inline int is_pae40_enabled(void) -{ - return IS_ENABLED(CONFIG_ARC_HAS_PAE40); -} - -extern int pae40_exist_but_not_enab(void); - -#else - -.macro ARC_MMU_REENABLE reg - lr \reg, [ARC_REG_PID] - or \reg, \reg, MMU_ENABLE - sr \reg, [ARC_REG_PID] -.endm -#endif /* !__ASSEMBLY__ */ +#include #endif diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h index f762bacb2358..de4576e8d17a 100644 --- a/arch/arc/include/asm/pgtable.h +++ b/arch/arc/include/asm/pgtable.h @@ -103,12 +103,6 @@ */ #define PAGE_KERNEL __pgprot(_K_PAGE_PERMS | _PAGE_CACHEABLE) -/* Masks for actual TLB "PD"s */ -#define PTE_BITS_IN_PD0 (_PAGE_GLOBAL | _PAGE_PRESENT | _PAGE_HW_SZ) -#define PTE_BITS_RWX (_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ) - -#define PTE_BITS_NON_RWX_IN_PD1 (PAGE_MASK_PHYS | _PAGE_CACHEABLE) - /************************************************************************** * Mapping of vm_flags (Generic VM) to PTE flags (arch specific) * From patchwork Thu Aug 12 23:37:48 2021 Content-Type: text/plain; charset="utf-8" 
MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434455 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 237D9C4320E for ; Thu, 12 Aug 2021 23:38:27 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9B2716103E for ; Thu, 12 Aug 2021 23:38:26 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 9B2716103E Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 460E26B0083; Thu, 12 Aug 2021 19:38:05 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 3E7638D0001; Thu, 12 Aug 2021 19:38:05 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 03E466B0083; Thu, 12 Aug 2021 19:38:04 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0081.hostedemail.com [216.40.44.81]) by kanga.kvack.org (Postfix) with ESMTP id D36B56B0087 for ; Thu, 12 Aug 2021 19:38:04 -0400 (EDT) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 854CA8249980 for ; Thu, 12 Aug 2021 23:38:04 +0000 (UTC) X-FDA: 78468044088.07.555AC2C Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf06.hostedemail.com (Postfix) with ESMTP id 24648802820C for ; Thu, 12 Aug 2021 23:38:04 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 891E46103E; Thu, 12 Aug 2021 23:38:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811483; bh=c2E/fYN0v+g0xfHg2rVvxKo6rGmQOQAUbmipN+98JqE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=FYTPuT8n3GgV0Xucjd68vJ6CHAUXHzjTflotEQmAyyX+8HP6E9WuMgnxjI66/5Fha KVimx/dzzrvWY/KTN5XYjHwpNE24BI/VhHe39jQGVDZx/VTp0tK3KLlYwV4gvEQYKw Qa5ueqtgIxHTqSThAVDoaBafxES4L/z9AkVW+Qzl4oeN/YfRBvnmcrITzl3Ifu6n5+ DJGAyr2sQAbKJWQoRIJSvugwQxj63oGyXJlG/AEVXiD/qs6Ap9RBSvwwjtdHvOKOPR ZghQYx9ukRRXEYWL/O5LVk4b5cWXIsxOnzmdL9vRnTD9G0Jk0Zmxdkf5J6RBQG18xD shGwSKcC1I4Dw== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 14/19] ARC: mm: disintegrate pgtable.h into levels and flags Date: Thu, 12 Aug 2021 16:37:48 -0700 Message-Id: <20210812233753.104217-15-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 24648802820C X-Stat-Signature: 73d3dijrmk3txd3okyofdxr6gik6ku9q Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=FYTPuT8n; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf06.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) 
smtp.mailfrom=vgupta@kernel.org X-HE-Tag: 1628811484-495921 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: - pgtable-bits-arcv2.h (MMU specific page table flags) - pgtable-levels.h (paging levels) No functional changes, but paves way for easy addition of new MMU code with different bits and levels etc Signed-off-by: Vineet Gupta --- arch/arc/include/asm/pgtable-bits-arcv2.h | 149 ++++++++++++ arch/arc/include/asm/pgtable-levels.h | 91 +++++++ arch/arc/include/asm/pgtable.h | 277 +--------------------- 3 files changed, 244 insertions(+), 273 deletions(-) create mode 100644 arch/arc/include/asm/pgtable-bits-arcv2.h create mode 100644 arch/arc/include/asm/pgtable-levels.h diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h new file mode 100644 index 000000000000..183d23bc1e00 --- /dev/null +++ b/arch/arc/include/asm/pgtable-bits-arcv2.h @@ -0,0 +1,149 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com) + */ + +/* + * page table flags for software walked/managed MMUv3 (ARC700) and MMUv4 (HS) + * There correspond to the corresponding bits in the TLB + */ + +#ifndef _ASM_ARC_PGTABLE_BITS_ARCV2_H +#define _ASM_ARC_PGTABLE_BITS_ARCV2_H + +#ifdef CONFIG_ARC_CACHE_PAGES +#define _PAGE_CACHEABLE (1 << 0) /* Cached (H) */ +#else +#define _PAGE_CACHEABLE 0 +#endif + +#define _PAGE_EXECUTE (1 << 1) /* User Execute (H) */ +#define _PAGE_WRITE (1 << 2) /* User Write (H) */ +#define _PAGE_READ (1 << 3) /* User Read (H) */ +#define _PAGE_ACCESSED (1 << 4) /* Accessed (s) */ +#define _PAGE_DIRTY (1 << 5) /* Modified (s) */ +#define _PAGE_SPECIAL (1 << 6) +#define _PAGE_GLOBAL (1 << 8) /* ASID agnostic (H) */ +#define _PAGE_PRESENT (1 << 9) /* PTE/TLB Valid (H) */ + +#ifdef CONFIG_ARC_MMU_V4 +#define _PAGE_HW_SZ (1 << 10) /* Normal/super (H) */ +#else +#define _PAGE_HW_SZ 0 +#endif + +/* Defaults for every user page */ +#define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE) + +/* Set of bits not changed in pte_modify */ +#define _PAGE_CHG_MASK (PAGE_MASK_PHYS | _PAGE_ACCESSED | _PAGE_DIRTY | \ + _PAGE_SPECIAL) + +/* More Abbrevaited helpers */ +#define PAGE_U_NONE __pgprot(___DEF) +#define PAGE_U_R __pgprot(___DEF | _PAGE_READ) +#define PAGE_U_W_R __pgprot(___DEF | _PAGE_READ | _PAGE_WRITE) +#define PAGE_U_X_R __pgprot(___DEF | _PAGE_READ | _PAGE_EXECUTE) +#define PAGE_U_X_W_R __pgprot(___DEF \ + | _PAGE_READ | _PAGE_WRITE | _PAGE_EXECUTE) +#define PAGE_KERNEL __pgprot(___DEF | _PAGE_GLOBAL \ + | _PAGE_READ | _PAGE_WRITE | _PAGE_EXECUTE) + +#define PAGE_SHARED PAGE_U_W_R + +#define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) & ~_PAGE_CACHEABLE)) + +/* + * Mapping of vm_flags (Generic VM) to PTE flags (arch specific) + * + * Certain cases have 1:1 mapping + * e.g. __P101 means VM_READ, VM_EXEC and !VM_SHARED + * which directly corresponds to PAGE_U_X_R + * + * Other rules which cause the divergence from 1:1 mapping + * + * 1. Although ARC700 can do exclusive execute/write protection (meaning R + * can be tracked independet of X/W unlike some other CPUs), still to + * keep things consistent with other archs: + * -Write implies Read: W => R + * -Execute implies Read: X => R + * + * 2. 
Pvt Writable doesn't have Write Enabled initially: Pvt-W => !W + * This is to enable COW mechanism + */ + /* xwr */ +#define __P000 PAGE_U_NONE +#define __P001 PAGE_U_R +#define __P010 PAGE_U_R /* Pvt-W => !W */ +#define __P011 PAGE_U_R /* Pvt-W => !W */ +#define __P100 PAGE_U_X_R /* X => R */ +#define __P101 PAGE_U_X_R +#define __P110 PAGE_U_X_R /* Pvt-W => !W and X => R */ +#define __P111 PAGE_U_X_R /* Pvt-W => !W */ + +#define __S000 PAGE_U_NONE +#define __S001 PAGE_U_R +#define __S010 PAGE_U_W_R /* W => R */ +#define __S011 PAGE_U_W_R +#define __S100 PAGE_U_X_R /* X => R */ +#define __S101 PAGE_U_X_R +#define __S110 PAGE_U_X_W_R /* X => R */ +#define __S111 PAGE_U_X_W_R + +#ifndef __ASSEMBLY__ + +#define pte_write(pte) (pte_val(pte) & _PAGE_WRITE) +#define pte_dirty(pte) (pte_val(pte) & _PAGE_DIRTY) +#define pte_young(pte) (pte_val(pte) & _PAGE_ACCESSED) +#define pte_special(pte) (pte_val(pte) & _PAGE_SPECIAL) + +#define PTE_BIT_FUNC(fn, op) \ + static inline pte_t pte_##fn(pte_t pte) { pte_val(pte) op; return pte; } + +PTE_BIT_FUNC(mknotpresent, &= ~(_PAGE_PRESENT)); +PTE_BIT_FUNC(wrprotect, &= ~(_PAGE_WRITE)); +PTE_BIT_FUNC(mkwrite, |= (_PAGE_WRITE)); +PTE_BIT_FUNC(mkclean, &= ~(_PAGE_DIRTY)); +PTE_BIT_FUNC(mkdirty, |= (_PAGE_DIRTY)); +PTE_BIT_FUNC(mkold, &= ~(_PAGE_ACCESSED)); +PTE_BIT_FUNC(mkyoung, |= (_PAGE_ACCESSED)); +PTE_BIT_FUNC(mkspecial, |= (_PAGE_SPECIAL)); +PTE_BIT_FUNC(mkhuge, |= (_PAGE_HW_SZ)); + +static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) +{ + return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)); +} + +static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t pteval) +{ + set_pte(ptep, pteval); +} + +void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, + pte_t *ptep); + +/* Encode swap {type,off} tuple into PTE + * We reserve 13 bits for 5-bit @type, keeping bits 12-5 zero, ensuring that + * PAGE_PRESENT is zero in a PTE holding swap "identifier" + */ +#define __swp_entry(type, off) ((swp_entry_t) \ + { ((type) & 0x1f) | ((off) << 13) }) + +/* Decode a PTE containing swap "identifier "into constituents */ +#define __swp_type(pte_lookalike) (((pte_lookalike).val) & 0x1f) +#define __swp_offset(pte_lookalike) ((pte_lookalike).val >> 13) + +#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) +#define __swp_entry_to_pte(x) ((pte_t) { (x).val }) + +#define kern_addr_valid(addr) (1) + +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +#include +#endif + +#endif /* __ASSEMBLY__ */ + +#endif diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h new file mode 100644 index 000000000000..8ece75335bb5 --- /dev/null +++ b/arch/arc/include/asm/pgtable-levels.h @@ -0,0 +1,91 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2020 Synopsys, Inc. 
(www.synopsys.com) + */ + +/* + * Helpers for implemenintg paging levels + */ + +#ifndef _ASM_ARC_PGTABLE_LEVELS_H +#define _ASM_ARC_PGTABLE_LEVELS_H + +/* + * 2 level paging setup for software walked MMUv3 (ARC700) and MMUv4 (HS) + * + * [31] 32 bit virtual address [0] + * ------------------------------------------------------- + * | | <---------- PGDIR_SHIFT ----------> | + * | | | <-- PAGE_SHIFT --> | + * ------------------------------------------------------- + * | | | + * | | --> off in page frame + * | ---> index into Page Table + * ----> index into Page Directory + * + * Given software walk, the vaddr split is arbitrary set to 11:8:13 + * However enabling of super page in a 2 level regime pegs PGDIR_SHIFT to + * super page size. + */ + +#if defined(CONFIG_ARC_HUGEPAGE_16M) +#define PGDIR_SHIFT 24 +#elif defined(CONFIG_ARC_HUGEPAGE_2M) +#define PGDIR_SHIFT 21 +#else +/* No Super page case: in theory this can be any number */ +#define PGDIR_SHIFT 21 +#endif + +#define PGDIR_SIZE BIT(PGDIR_SHIFT) /* vaddr span, not PDG sz */ +#define PGDIR_MASK (~(PGDIR_SIZE - 1)) + +#define PTRS_PER_PGD BIT(32 - PGDIR_SHIFT) + +#define PTRS_PER_PTE BIT(PGDIR_SHIFT - PAGE_SHIFT) + +#ifndef __ASSEMBLY__ + +#include + +/* + * 1st level paging: pgd + */ +#define pgd_index(addr) ((addr) >> PGDIR_SHIFT) +#define pgd_offset(mm, addr) (((mm)->pgd) + pgd_index(addr)) +#define pgd_offset_k(addr) pgd_offset(&init_mm, addr) +#define pgd_ERROR(e) \ + pr_crit("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e)) + +/* + * Due to the strange way generic pgtable level folding works, in a 2 level + * setup, pmd_val() returns pgd, so these pmd_* macros actually work on pgd + */ +#define pmd_none(x) (!pmd_val(x)) +#define pmd_bad(x) ((pmd_val(x) & ~PAGE_MASK)) +#define pmd_present(x) (pmd_val(x)) +#define pmd_clear(xp) do { pmd_val(*(xp)) = 0; } while (0) +#define pmd_page_vaddr(pmd) (pmd_val(pmd) & PAGE_MASK) +#define pmd_page(pmd) virt_to_page(pmd_page_vaddr(pmd)) +#define set_pmd(pmdp, pmd) (*(pmdp) = pmd) +#define pmd_pgtable(pmd) ((pgtable_t) pmd_page_vaddr(pmd)) + +#define pte_ERROR(e) \ + pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e)) + +#define pte_none(x) (!pte_val(x)) +#define pte_present(x) (pte_val(x) & _PAGE_PRESENT) +#define pte_clear(mm,addr,ptep) set_pte_at(mm, addr, ptep, __pte(0)) +#define pte_page(pte) pfn_to_page(pte_pfn(pte)) +#define set_pte(ptep, pte) ((*(ptep)) = (pte)) +#define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT) +#define pfn_pte(pfn, prot) __pte(__pfn_to_phys(pfn) | pgprot_val(prot)) +#define mk_pte(page, prot) pfn_pte(page_to_pfn(page), prot) + +#ifdef CONFIG_ISA_ARCV2 +#define pmd_leaf(x) (pmd_val(x) & _PAGE_HW_SZ) +#endif + +#endif /* !__ASSEMBLY__ */ + +#endif diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h index de4576e8d17a..9320b04c04bf 100644 --- a/arch/arc/include/asm/pgtable.h +++ b/arch/arc/include/asm/pgtable.h @@ -1,304 +1,35 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* * Copyright (C) 2004, 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com) - * - * vineetg: May 2011 - * -Folded PAGE_PRESENT (used by VM) and PAGE_VALID (used by MMU) into 1. 
- * They are semantically the same although in different contexts - * VALID marks a TLB entry exists and it will only happen if PRESENT - * - Utilise some unused free bits to confine PTE flags to 12 bits - * This is a must for 4k pg-sz - * - * vineetg: Mar 2011 - changes to accommodate MMU TLB Page Descriptor mods - * -TLB Locking never really existed, except for initial specs - * -SILENT_xxx not needed for our port - * -Per my request, MMU V3 changes the layout of some of the bits - * to avoid a few shifts in TLB Miss handlers. - * - * vineetg: April 2010 - * -PGD entry no longer contains any flags. If empty it is 0, otherwise has - * Pg-Tbl ptr. Thus pmd_present(), pmd_valid(), pmd_set( ) become simpler - * - * vineetg: April 2010 - * -Switched form 8:11:13 split for page table lookup to 11:8:13 - * -this speeds up page table allocation itself as we now have to memset 1K - * instead of 8k per page table. - * -TODO: Right now page table alloc is 8K and rest 7K is unused - * need to optimise it - * - * Amit Bhor, Sameer Dhavale: Codito Technologies 2004 */ #ifndef _ASM_ARC_PGTABLE_H #define _ASM_ARC_PGTABLE_H #include -#include + +#include +#include #include #include -/************************************************************************** - * Page Table Flags - * - * ARC700 MMU only deals with softare managed TLB entries. - * Page Tables are purely for Linux VM's consumption and the bits below are - * suited to that (uniqueness). Hence some are not implemented in the TLB and - * some have different value in TLB. - * e.g. MMU v2: K_READ bit is 8 and so is GLOBAL (possible because they live in - * seperate PD0 and PD1, which combined forms a translation entry) - * while for PTE perspective, they are 8 and 9 respectively - * with MMU v3: Most bits (except SHARED) represent the exact hardware pos - * (saves some bit shift ops in TLB Miss hdlrs) - */ - -#define _PAGE_CACHEABLE (1<<0) /* Page is cached (H) */ -#define _PAGE_EXECUTE (1<<1) /* Page has user execute perm (H) */ -#define _PAGE_WRITE (1<<2) /* Page has user write perm (H) */ -#define _PAGE_READ (1<<3) /* Page has user read perm (H) */ -#define _PAGE_ACCESSED (1<<4) /* Page is accessed (S) */ -#define _PAGE_DIRTY (1<<5) /* Page modified (dirty) (S) */ -#define _PAGE_SPECIAL (1<<6) - -#define _PAGE_GLOBAL (1<<8) /* Page is global (H) */ -#define _PAGE_PRESENT (1<<9) /* TLB entry is valid (H) */ - -#ifdef CONFIG_ARC_MMU_V4 -#define _PAGE_HW_SZ (1<<10) /* Page Size indicator (H): 0 normal, 1 super */ -#endif - -#define _PAGE_SHARED_CODE (1<<11) /* Shared Code page with cmn vaddr - usable for shared TLB entries (H) */ -/* vmalloc permissions */ -#define _K_PAGE_PERMS (_PAGE_EXECUTE | _PAGE_WRITE | _PAGE_READ | \ - _PAGE_GLOBAL | _PAGE_PRESENT) - -#ifndef CONFIG_ARC_CACHE_PAGES -#undef _PAGE_CACHEABLE -#define _PAGE_CACHEABLE 0 -#endif - -#ifndef _PAGE_HW_SZ -#define _PAGE_HW_SZ 0 -#endif - -/* Defaults for every user page */ -#define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE) - -/* Set of bits not changed in pte_modify */ -#define _PAGE_CHG_MASK (PAGE_MASK_PHYS | _PAGE_ACCESSED | _PAGE_DIRTY | \ - _PAGE_SPECIAL) -/* More Abbrevaited helpers */ -#define PAGE_U_NONE __pgprot(___DEF) -#define PAGE_U_R __pgprot(___DEF | _PAGE_READ) -#define PAGE_U_W_R __pgprot(___DEF | _PAGE_READ | _PAGE_WRITE) -#define PAGE_U_X_R __pgprot(___DEF | _PAGE_READ | _PAGE_EXECUTE) -#define PAGE_U_X_W_R __pgprot(___DEF | _PAGE_READ | _PAGE_WRITE | \ - _PAGE_EXECUTE) - -#define PAGE_SHARED PAGE_U_W_R - -/* While kernel runs out of unstranslated space, 
vmalloc/modules use a chunk of - * user vaddr space - visible in all addr spaces, but kernel mode only - * Thus Global, all-kernel-access, no-user-access, cached - */ -#define PAGE_KERNEL __pgprot(_K_PAGE_PERMS | _PAGE_CACHEABLE) - -/************************************************************************** - * Mapping of vm_flags (Generic VM) to PTE flags (arch specific) - * - * Certain cases have 1:1 mapping - * e.g. __P101 means VM_READ, VM_EXEC and !VM_SHARED - * which directly corresponds to PAGE_U_X_R - * - * Other rules which cause the divergence from 1:1 mapping - * - * 1. Although ARC700 can do exclusive execute/write protection (meaning R - * can be tracked independet of X/W unlike some other CPUs), still to - * keep things consistent with other archs: - * -Write implies Read: W => R - * -Execute implies Read: X => R - * - * 2. Pvt Writable doesn't have Write Enabled initially: Pvt-W => !W - * This is to enable COW mechanism - */ - /* xwr */ -#define __P000 PAGE_U_NONE -#define __P001 PAGE_U_R -#define __P010 PAGE_U_R /* Pvt-W => !W */ -#define __P011 PAGE_U_R /* Pvt-W => !W */ -#define __P100 PAGE_U_X_R /* X => R */ -#define __P101 PAGE_U_X_R -#define __P110 PAGE_U_X_R /* Pvt-W => !W and X => R */ -#define __P111 PAGE_U_X_R /* Pvt-W => !W */ - -#define __S000 PAGE_U_NONE -#define __S001 PAGE_U_R -#define __S010 PAGE_U_W_R /* W => R */ -#define __S011 PAGE_U_W_R -#define __S100 PAGE_U_X_R /* X => R */ -#define __S101 PAGE_U_X_R -#define __S110 PAGE_U_X_W_R /* X => R */ -#define __S111 PAGE_U_X_W_R - -/**************************************************************** - * 2 tier (PGD:PTE) software page walker - * - * [31] 32 bit virtual address [0] - * ------------------------------------------------------- - * | | <------------ PGDIR_SHIFT ----------> | - * | | | - * | BITS_FOR_PGD | BITS_FOR_PTE | <-- PAGE_SHIFT --> | - * ------------------------------------------------------- - * | | | - * | | --> off in page frame - * | ---> index into Page Table - * ----> index into Page Directory - * - * In a single page size configuration, only PAGE_SHIFT is fixed - * So both PGD and PTE sizing can be tweaked - * e.g. 8K page (PAGE_SHIFT 13) can have - * - PGDIR_SHIFT 21 -> 11:8:13 address split - * - PGDIR_SHIFT 24 -> 8:11:13 address split - * - * If Super Page is configured, PGDIR_SHIFT becomes fixed too, - * so the sizing flexibility is gone. - */ - -#if defined(CONFIG_ARC_HUGEPAGE_16M) -#define PGDIR_SHIFT 24 -#elif defined(CONFIG_ARC_HUGEPAGE_2M) -#define PGDIR_SHIFT 21 -#else -/* - * Only Normal page support so "hackable" (see comment above) - * Default value provides 11:8:13 (8K), 11:9:12 (4K) - */ -#define PGDIR_SHIFT 21 -#endif - -#define BITS_FOR_PTE (PGDIR_SHIFT - PAGE_SHIFT) -#define BITS_FOR_PGD (32 - PGDIR_SHIFT) - -#define PGDIR_SIZE BIT(PGDIR_SHIFT) /* vaddr span, not PDG sz */ -#define PGDIR_MASK (~(PGDIR_SIZE-1)) - -#define PTRS_PER_PTE BIT(BITS_FOR_PTE) -#define PTRS_PER_PGD BIT(BITS_FOR_PGD) - /* * Number of entries a user land program use. * TASK_SIZE is the maximum vaddr that can be used by a userland program. 
*/ #define USER_PTRS_PER_PGD (TASK_SIZE / PGDIR_SIZE) - -/**************************************************************** - * Bucket load of VM Helpers - */ - #ifndef __ASSEMBLY__ -#define pte_ERROR(e) \ - pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e)) -#define pgd_ERROR(e) \ - pr_crit("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e)) - -/* the zero page used for uninitialized and anonymous pages */ extern char empty_zero_page[PAGE_SIZE]; #define ZERO_PAGE(vaddr) (virt_to_page(empty_zero_page)) -#define set_pte(pteptr, pteval) ((*(pteptr)) = (pteval)) -#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval) - -/* find the page descriptor of the Page Tbl ref by PMD entry */ -#define pmd_page(pmd) virt_to_page(pmd_val(pmd) & PAGE_MASK) - -/* find the logical addr (phy for ARC) of the Page Tbl ref by PMD entry */ -#define pmd_page_vaddr(pmd) (pmd_val(pmd) & PAGE_MASK) - -#define pte_none(x) (!pte_val(x)) -#define pte_present(x) (pte_val(x) & _PAGE_PRESENT) -#define pte_clear(mm, addr, ptep) set_pte_at(mm, addr, ptep, __pte(0)) - -#define pmd_none(x) (!pmd_val(x)) -#define pmd_bad(x) ((pmd_val(x) & ~PAGE_MASK)) -#define pmd_present(x) (pmd_val(x)) -#define pmd_leaf(x) (pmd_val(x) & _PAGE_HW_SZ) -#define pmd_clear(xp) do { pmd_val(*(xp)) = 0; } while (0) - -#define pte_page(pte) pfn_to_page(pte_pfn(pte)) -#define mk_pte(page, prot) pfn_pte(page_to_pfn(page), prot) -#define pfn_pte(pfn, prot) __pte(__pfn_to_phys(pfn) | pgprot_val(prot)) - -/* Don't use virt_to_pfn for macros below: could cause truncations for PAE40*/ -#define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT) - -/* Zoo of pte_xxx function */ -#define pte_read(pte) (pte_val(pte) & _PAGE_READ) -#define pte_write(pte) (pte_val(pte) & _PAGE_WRITE) -#define pte_dirty(pte) (pte_val(pte) & _PAGE_DIRTY) -#define pte_young(pte) (pte_val(pte) & _PAGE_ACCESSED) -#define pte_special(pte) (pte_val(pte) & _PAGE_SPECIAL) - -#define PTE_BIT_FUNC(fn, op) \ - static inline pte_t pte_##fn(pte_t pte) { pte_val(pte) op; return pte; } - -PTE_BIT_FUNC(mknotpresent, &= ~(_PAGE_PRESENT)); -PTE_BIT_FUNC(wrprotect, &= ~(_PAGE_WRITE)); -PTE_BIT_FUNC(mkwrite, |= (_PAGE_WRITE)); -PTE_BIT_FUNC(mkclean, &= ~(_PAGE_DIRTY)); -PTE_BIT_FUNC(mkdirty, |= (_PAGE_DIRTY)); -PTE_BIT_FUNC(mkold, &= ~(_PAGE_ACCESSED)); -PTE_BIT_FUNC(mkyoung, |= (_PAGE_ACCESSED)); -PTE_BIT_FUNC(exprotect, &= ~(_PAGE_EXECUTE)); -PTE_BIT_FUNC(mkexec, |= (_PAGE_EXECUTE)); -PTE_BIT_FUNC(mkspecial, |= (_PAGE_SPECIAL)); -PTE_BIT_FUNC(mkhuge, |= (_PAGE_HW_SZ)); - -static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) -{ - return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)); -} +extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE); /* Macro to mark a page protection as uncacheable */ #define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) & ~_PAGE_CACHEABLE)) -static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, - pte_t *ptep, pte_t pteval) -{ - set_pte(ptep, pteval); -} - extern pgd_t swapper_pg_dir[] __aligned(PAGE_SIZE); -void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, - pte_t *ptep); - -/* Encode swap {type,off} tuple into PTE - * We reserve 13 bits for 5-bit @type, keeping bits 12-5 zero, ensuring that - * PAGE_PRESENT is zero in a PTE holding swap "identifier" - */ -#define __swp_entry(type, off) ((swp_entry_t) { \ - ((type) & 0x1f) | ((off) << 13) }) - -/* Decode a PTE containing swap "identifier "into constituents */ -#define __swp_type(pte_lookalike) (((pte_lookalike).val) & 0x1f) -#define 
__swp_offset(pte_lookalike) ((pte_lookalike).val >> 13) - -/* NOPs, to keep generic kernel happy */ -#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) -#define __swp_entry_to_pte(x) ((pte_t) { (x).val }) - -#define kern_addr_valid(addr) (1) - -#define pmd_pgtable(pmd) ((pgtable_t) pmd_page_vaddr(pmd)) - -/* - * remap a physical page `pfn' of size `size' with page protection `prot' - * into virtual address `from' - */ -#ifdef CONFIG_TRANSPARENT_HUGEPAGE -#include -#endif /* to cope with aliasing VIPT cache */ #define HAVE_ARCH_UNMAPPED_AREA From patchwork Thu Aug 12 23:37:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434457 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 80E2CC4338F for ; Thu, 12 Aug 2021 23:38:29 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1A6DE60EE2 for ; Thu, 12 Aug 2021 23:38:29 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 1A6DE60EE2 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id E0FC76B0088; Thu, 12 Aug 2021 19:38:05 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D602C6B0089; Thu, 12 Aug 2021 19:38:05 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B6DAD8D0001; Thu, 12 Aug 2021 19:38:05 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0037.hostedemail.com [216.40.44.37]) by kanga.kvack.org (Postfix) with ESMTP id 9525B6B0087 for ; Thu, 12 Aug 2021 19:38:05 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 4E13DC5D4 for ; Thu, 12 Aug 2021 23:38:05 +0000 (UTC) X-FDA: 78468044130.02.60D8214 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf28.hostedemail.com (Postfix) with ESMTP id 0226A900BEB2 for ; Thu, 12 Aug 2021 23:38:04 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 00929610FF; Thu, 12 Aug 2021 23:38:03 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811484; bh=OznPj0H31NuyGJ0zu5NrPr8zmibKxgGabmxj0tfu3OI=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qSLCgRfDUgYypTq6vAUv99MIOcg2OnKf0tnDXJqcySf3ubU/7uoPK7McrjorJIkxX VfcDxT+ITm2tR4CPvh0KFoYz7ntI9yAluS95kThBPQzj4peUcTgFXvXnixaRcjf4oz mvs8MWR0B4l9tfx3cNDVtWbyU9kofkJhuFdo243+DMU9Jh8DGEgSEz+EkTdio3gE+g J/H9DmrTQ4RbJIOPOW9rTn8IDNhacihpSUGjLyQzwhEITlQo9bQeNjdriIb90bWMvn ETyBJ0iKYzooOHqC2RVmF1lRDL8zf7EGYlVavXPmsVbI08yOdOsazanNOQZVC2nb8z xgdu49URX80Dg== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 15/19] ARC: mm: hack to allow 2 level build with 4 level code Date: Thu, 12 Aug 
2021 16:37:49 -0700 Message-Id: <20210812233753.104217-16-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 0226A900BEB2 Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=qSLCgRfD; spf=pass (imf28.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org; dmarc=pass (policy=none) header.from=kernel.org X-Rspamd-Server: rspam01 X-Stat-Signature: 6pcug1jdnwfrgm6m533euns816dfxkwu X-HE-Tag: 1628811484-803515 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: PMD_SHIFT is mapped to PUD_SHIFT or PGD_SHIFT by asm-generic/pgtable-* but only for !__ASSEMBLY__ tlbex.S asm code has PTRS_PER_PTE which uses PMD_SHIFT hence barfs for CONFIG_PGTABLE_LEVEL={2,3} and works for 4. So add a workaround local to tlbex.S - the proper fix is to change asm-generic/pgtable-* headers to expose the defines for __ASSEMBLY__ too Signed-off-by: Vineet Gupta --- arch/arc/mm/tlbex.S | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S index 0b4bb62fa0ab..c4a5f16444ce 100644 --- a/arch/arc/mm/tlbex.S +++ b/arch/arc/mm/tlbex.S @@ -145,6 +145,14 @@ ex_saved_reg1: ;TLB Miss handling Code ;============================================================================ +#ifndef PMD_SHIFT +#define PMD_SHIFT PUD_SHIFT +#endif + +#ifndef PUD_SHIFT +#define PUD_SHIFT PGDIR_SHIFT +#endif + ;----------------------------------------------------------------------------- ; This macro does the page-table lookup for the faulting address. 
; OUT: r0 = PTE faulted on, r1 = ptr to PTE, r2 = Faulting V-address From patchwork Thu Aug 12 23:37:50 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434459 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D78FAC432BE for ; Thu, 12 Aug 2021 23:38:31 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6CBF860EE2 for ; Thu, 12 Aug 2021 23:38:31 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 6CBF860EE2 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 21CB56B0087; Thu, 12 Aug 2021 19:38:06 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 08C996B008C; Thu, 12 Aug 2021 19:38:05 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C274A6B0087; Thu, 12 Aug 2021 19:38:05 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0210.hostedemail.com [216.40.44.210]) by kanga.kvack.org (Postfix) with ESMTP id A12116B0088 for ; Thu, 12 Aug 2021 19:38:05 -0400 (EDT) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 57F898249980 for ; Thu, 12 Aug 2021 23:38:05 +0000 (UTC) X-FDA: 78468044130.05.6FD40D3 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf06.hostedemail.com (Postfix) with ESMTP id 0D21F801B0F7 for ; Thu, 12 Aug 2021 23:38:04 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 7766D6112E; Thu, 12 Aug 2021 23:38:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811484; bh=WwnrEymPtlTXjy10ynCidFxMgHguuIzE/6UZ1pzghrQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=isKycoi2XNHSvdcerZYs00WsF4IWnL5ab2yY6BhAjcr2inMrF6bR8DVcPtADZxM8F AJjsG0M1wX88FKJC28LkQR7tKiD+Bj9DqMWCFa6si6z+T7eHxUA1bdps461M0zUsHJ kAymUno45/Ii8RwiJfwJCwKx0DgIudQuZ96ar1H3Gxppy9ToPKiXS6yPCOMACYOBrm nCXiUVWWbbqCg7xb7NvWybTlWSLsdiceMPphU4knfALUvXHFT4bEB6gF+mSlOAPiTw nOiiBiobX1lQZ8G0/Bf1thsTcjMDL6xfQqiLUylLgcBIj+q9uavsAQq9UWetDcWnmv 5IRDjaJSovhLw== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 16/19] ARC: mm: support 3 levels of page tables Date: Thu, 12 Aug 2021 16:37:50 -0700 Message-Id: <20210812233753.104217-17-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 0D21F801B0F7 X-Stat-Signature: j83isw7s71aa88uuggaziet1g1per3nw Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=isKycoi2; dmarc=pass (policy=none) 
header.from=kernel.org; spf=pass (imf06.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-HE-Tag: 1628811484-898171 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The ARCv2 MMU is software walked and Linux implements 2 levels of paging: pgd/pte. Forthcoming hw will have multiple levels, so this change preps the mm code for the same. It is also fun to exercise multiple levels even on software-walked code, to ensure the generic mm code is robust enough to handle them.
diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig index 59d5b2a179f6..43cb8aaf57a2 100644 --- a/arch/arc/Kconfig +++ b/arch/arc/Kconfig @@ -314,6 +314,10 @@ config ARC_HUGEPAGE_16M endchoice +config PGTABLE_LEVELS + int "Number of Page table levels" + default 2 + config ARC_COMPACT_IRQ_LEVELS depends on ISA_ARCOMPACT bool "Setup Timer IRQ as high Priority"
diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h index 28ed82b1800f..5d7899d87c08 100644 --- a/arch/arc/include/asm/page.h +++ b/arch/arc/include/asm/page.h @@ -41,6 +41,17 @@ typedef struct { #define pgd_val(x) ((x).pgd) #define __pgd(x) ((pgd_t) { (x) }) +#if CONFIG_PGTABLE_LEVELS > 2 + +typedef struct { + unsigned long pmd; +} pmd_t; + +#define pmd_val(x) ((x).pmd) +#define __pmd(x) ((pmd_t) { (x) }) + +#endif + typedef struct { #ifdef CONFIG_ARC_HAS_PAE40 unsigned long long pte;
diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h index 0cde9e5eefd7..781620d2e86f 100644 --- a/arch/arc/include/asm/pgalloc.h +++ b/arch/arc/include/asm/pgalloc.h @@ -70,6 +70,17 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) return ret; } +#if CONFIG_PGTABLE_LEVELS > 2 + +static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp) +{ + set_pud(pudp, __pud((unsigned long)pmdp)); +} + +#define __pmd_free_tlb(tlb, pmd, addr) pmd_free((tlb)->mm, pmd) + +#endif + #define __pte_free_tlb(tlb, pte, addr) pte_free((tlb)->mm, pte) #endif /* _ASM_ARC_PGALLOC_H */
diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h index 8ece75335bb5..1c2f022d4ad0 100644 --- a/arch/arc/include/asm/pgtable-levels.h +++ b/arch/arc/include/asm/pgtable-levels.h @@ -10,6 +10,8 @@ #ifndef _ASM_ARC_PGTABLE_LEVELS_H #define _ASM_ARC_PGTABLE_LEVELS_H +#if CONFIG_PGTABLE_LEVELS == 2 + /* * 2 level paging setup for software walked MMUv3 (ARC700) and MMUv4 (HS) * @@ -37,16 +39,38 @@ #define PGDIR_SHIFT 21 #endif -#define PGDIR_SIZE BIT(PGDIR_SHIFT) /* vaddr span, not PDG sz */ -#define PGDIR_MASK (~(PGDIR_SIZE - 1)) +#else + +/* + * A default 3 level paging testing setup in software walked MMU + * MMUv4 (8K page): <4> : <7> : <8> : <13> + */ +#define PGDIR_SHIFT 28 +#if CONFIG_PGTABLE_LEVELS > 2 +#define PMD_SHIFT 21 +#endif + +#endif +#define PGDIR_SIZE BIT(PGDIR_SHIFT) +#define PGDIR_MASK (~(PGDIR_SIZE - 1)) #define PTRS_PER_PGD BIT(32 - PGDIR_SHIFT) -#define PTRS_PER_PTE BIT(PGDIR_SHIFT - PAGE_SHIFT) +#if CONFIG_PGTABLE_LEVELS > 2 +#define PMD_SIZE BIT(PMD_SHIFT) +#define PMD_MASK (~(PMD_SIZE - 1)) +#define PTRS_PER_PMD BIT(PGDIR_SHIFT - PMD_SHIFT) +#endif + +#define PTRS_PER_PTE BIT(PMD_SHIFT - PAGE_SHIFT) #ifndef __ASSEMBLY__ +#if CONFIG_PGTABLE_LEVELS > 2 +#include +#else #include +#endif /* * 1st level paging: pgd * @@ -57,9 +81,35 @@ #define pgd_ERROR(e) \ pr_crit("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e)) +#if CONFIG_PGTABLE_LEVELS > 2 + +/* In 
3 level paging, pud_* macros work on pgd */ +#define pud_none(x) (!pud_val(x)) +#define pud_bad(x) ((pud_val(x) & ~PAGE_MASK)) +#define pud_present(x) (pud_val(x)) +#define pud_clear(xp) do { pud_val(*(xp)) = 0; } while (0) +#define pud_pgtable(pud) ((pmd_t *)(pud_val(pud) & PAGE_MASK)) +#define pud_page(pud) virt_to_page(pud_pgtable(pud)) +#define set_pud(pudp, pud) (*(pudp) = pud) + +/* + * 2nd level paging: pmd + */ +#define pmd_ERROR(e) \ + pr_crit("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e)) + +#define pmd_pfn(pmd) ((pmd_val(pmd) & PMD_MASK) >> PAGE_SHIFT) +#define pfn_pmd(pfn,prot) __pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot)) +#define mk_pmd(page,prot) pfn_pmd(page_to_pfn(page),prot) + +#endif + /* - * Due to the strange way generic pgtable level folding works, in a 2 level - * setup, pmd_val() returns pgd, so these pmd_* macros actually work on pgd + * Due to the strange way generic pgtable level folding works, the pmd_* macros + * - are valid even for 2 levels (which supposedly only has pgd - pte) + * - behave differently for 2 vs. 3 + * In 2 level paging (pgd -> pte), pmd_* macros work on pgd + * In 3+ level paging (pgd -> pmd -> pte), pmd_* macros work on pmd */ #define pmd_none(x) (!pmd_val(x)) #define pmd_bad(x) ((pmd_val(x) & ~PAGE_MASK)) @@ -70,6 +120,9 @@ #define set_pmd(pmdp, pmd) (*(pmdp) = pmd) #define pmd_pgtable(pmd) ((pgtable_t) pmd_page_vaddr(pmd)) +/* + * 3rd level paging: pte + */ #define pte_ERROR(e) \ pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e)) diff --git a/arch/arc/include/asm/processor.h b/arch/arc/include/asm/processor.h index e4031ecd3c8c..f28afcf5c6d1 100644 --- a/arch/arc/include/asm/processor.h +++ b/arch/arc/include/asm/processor.h @@ -93,7 +93,7 @@ extern unsigned int get_wchan(struct task_struct *p); #define VMALLOC_START (PAGE_OFFSET - (CONFIG_ARC_KVADDR_SIZE << 20)) /* 1 PGDIR_SIZE each for fixmap/pkmap, 2 PGDIR_SIZE gutter (see asm/highmem.h) */ -#define VMALLOC_SIZE ((CONFIG_ARC_KVADDR_SIZE << 20) - PGDIR_SIZE * 4) +#define VMALLOC_SIZE ((CONFIG_ARC_KVADDR_SIZE << 20) - PMD_SIZE * 4) #define VMALLOC_END (VMALLOC_START + VMALLOC_SIZE) diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c index 41f154320964..8da2f0ad8c69 100644 --- a/arch/arc/mm/fault.c +++ b/arch/arc/mm/fault.c @@ -39,6 +39,8 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address) if (!pgd_present(*pgd_k)) goto bad_area; + set_pgd(pgd, *pgd_k); + p4d = p4d_offset(pgd, address); p4d_k = p4d_offset(pgd_k, address); if (!p4d_present(*p4d_k)) @@ -49,6 +51,8 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address) if (!pud_present(*pud_k)) goto bad_area; + set_pud(pud, *pud_k); + pmd = pmd_offset(pud, address); pmd_k = pmd_offset(pud_k, address); if (!pmd_present(*pmd_k)) diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c index 46ad9aee7a73..f7ba2a5d5ec8 100644 --- a/arch/arc/mm/init.c +++ b/arch/arc/mm/init.c @@ -191,6 +191,7 @@ void __init mem_init(void) highmem_init(); BUILD_BUG_ON((PTRS_PER_PGD * sizeof(pgd_t)) > PAGE_SIZE); + BUILD_BUG_ON((PTRS_PER_PMD * sizeof(pmd_t)) > PAGE_SIZE); BUILD_BUG_ON((PTRS_PER_PTE * sizeof(pte_t)) > PAGE_SIZE); } diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c index b68d5798327b..352abb35a2ad 100644 --- a/arch/arc/mm/tlb.c +++ b/arch/arc/mm/tlb.c @@ -658,8 +658,8 @@ char *arc_mmu_mumbojumbo(int cpu_id, char *buf, int len) IS_USED_CFG(CONFIG_TRANSPARENT_HUGEPAGE)); n += scnprintf(buf + n, len - n, - "MMU [v%x]\t: %dk PAGE, %sJTLB %d (%dx%d), uDTLB %d, uITLB %d%s%s\n", - p_mmu->ver, 
p_mmu->pg_sz_k, super_pg, + "MMU [v%x]\t: %dk PAGE, %s, swalk %d lvl, JTLB %d (%dx%d), uDTLB %d, uITLB %d%s%s\n", + p_mmu->ver, p_mmu->pg_sz_k, super_pg, CONFIG_PGTABLE_LEVELS, p_mmu->sets * p_mmu->ways, p_mmu->sets, p_mmu->ways, p_mmu->u_dtlb, p_mmu->u_itlb, IS_AVAIL2(p_mmu->pae, ", PAE40 ", CONFIG_ARC_HAS_PAE40)); diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S index c4a5f16444ce..5f57eba1089d 100644 --- a/arch/arc/mm/tlbex.S +++ b/arch/arc/mm/tlbex.S @@ -173,6 +173,15 @@ ex_saved_reg1: tst r3, r3 bz do_slow_path_pf ; if no Page Table, do page fault +#if CONFIG_PGTABLE_LEVELS > 2 + lsr r0, r2, PMD_SHIFT ; Bits for indexing into PMD + and r0, r0, (PTRS_PER_PMD - 1) + ld.as r1, [r3, r0] ; PMD entry + tst r1, r1 + bz do_slow_path_pf + mov r3, r1 +#endif + #ifdef CONFIG_TRANSPARENT_HUGEPAGE and.f 0, r3, _PAGE_HW_SZ ; Is this Huge PMD (thp) add2.nz r1, r1, r0 From patchwork Thu Aug 12 23:37:51 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434461 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E7A75C4338F for ; Thu, 12 Aug 2021 23:38:33 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 8B1A760EE2 for ; Thu, 12 Aug 2021 23:38:33 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 8B1A760EE2 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 7B9286B0089; Thu, 12 Aug 2021 19:38:06 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 71B436B008A; Thu, 12 Aug 2021 19:38:06 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 51F4D6B008C; Thu, 12 Aug 2021 19:38:06 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0238.hostedemail.com [216.40.44.238]) by kanga.kvack.org (Postfix) with ESMTP id 35F9D6B0089 for ; Thu, 12 Aug 2021 19:38:06 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id C6E2A180AD81A for ; Thu, 12 Aug 2021 23:38:05 +0000 (UTC) X-FDA: 78468044130.29.F924B6C Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf28.hostedemail.com (Postfix) with ESMTP id 7503C900BEB2 for ; Thu, 12 Aug 2021 23:38:05 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id D3AA8610A4; Thu, 12 Aug 2021 23:38:04 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811485; bh=Zfvf7PqDmvH0bSGDxCh5bZxbmsW668jVrn2bJhjQeHQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Wksp8+pKY+hKDYRj3YOG2np/XDZg2A/rgtEyHJv+Z7IJHIR2YKymr4LM7dJtCPWHe d7H9kXfhKrwl6GcqX//o0NVW/3nFAPaQM9LB2RlSlN67weKsXMeT62WLZh6xCnw/7M BNmomwKekBhYPuxADNKln+49+7LtDIQfpDGOZ5pQCFn1eFC64MkMGV+Z0ptrw7Zs3p Jmrh3sHNXA70eEz0NI/bqH0jIY0U55Mg3fztXSlFY/zEUzVKTaxQaVj8RAtP7Szb/y 
tNxS+fE2M6vTKp4u6Me5JzfoKEv0juo/o/u5m4b9V63useHWfEJOv/mbIbmVqhLcIX ZnLOtrexr4zhQ== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 17/19] ARC: mm: support 4 levels of page tables Date: Thu, 12 Aug 2021 16:37:51 -0700 Message-Id: <20210812233753.104217-18-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Queue-Id: 7503C900BEB2 Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=Wksp8+pK; spf=pass (imf28.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org; dmarc=pass (policy=none) header.from=kernel.org X-Rspamd-Server: rspam01 X-Stat-Signature: yy5h3wbnq6549m7mq1fuag7frcbhzu8i X-HE-Tag: 1628811485-443280 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Signed-off-by: Vineet Gupta --- arch/arc/include/asm/page.h | 11 +++++++ arch/arc/include/asm/pgalloc.h | 11 +++++++ arch/arc/include/asm/pgtable-levels.h | 45 ++++++++++++++++++++++++--- arch/arc/mm/fault.c | 2 ++ arch/arc/mm/init.c | 1 + arch/arc/mm/tlbex.S | 9 ++++++ 6 files changed, 74 insertions(+), 5 deletions(-) diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h index 5d7899d87c08..9a62e1d87967 100644 --- a/arch/arc/include/asm/page.h +++ b/arch/arc/include/asm/page.h @@ -41,6 +41,17 @@ typedef struct { #define pgd_val(x) ((x).pgd) #define __pgd(x) ((pgd_t) { (x) }) +#if CONFIG_PGTABLE_LEVELS > 3 + +typedef struct { + unsigned long pud; +} pud_t; + +#define pud_val(x) ((x).pud) +#define __pud(x) ((pud_t) { (x) }) + +#endif + #if CONFIG_PGTABLE_LEVELS > 2 typedef struct { diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h index 781620d2e86f..096b8ef58edb 100644 --- a/arch/arc/include/asm/pgalloc.h +++ b/arch/arc/include/asm/pgalloc.h @@ -70,6 +70,17 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) return ret; } +#if CONFIG_PGTABLE_LEVELS > 3 + +static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp) +{ + set_p4d(p4dp, __p4d((unsigned long)pudp)); +} + +#define __pud_free_tlb(tlb, pmd, addr) pud_free((tlb)->mm, pmd) + +#endif + #if CONFIG_PGTABLE_LEVELS > 2 static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp) diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h index 1c2f022d4ad0..2da3c4e52a91 100644 --- a/arch/arc/include/asm/pgtable-levels.h +++ b/arch/arc/include/asm/pgtable-levels.h @@ -44,8 +44,13 @@ /* * A default 3 level paging testing setup in software walked MMU * MMUv4 (8K page): <4> : <7> : <8> : <13> + * A default 4 level paging testing setup in software walked MMU + * MMUv4 (8K page): <4> : <3> : <4> : <8> : <13> */ #define PGDIR_SHIFT 28 +#if CONFIG_PGTABLE_LEVELS > 3 +#define PUD_SHIFT 25 +#endif #if CONFIG_PGTABLE_LEVELS > 2 #define PMD_SHIFT 21 #endif @@ -56,17 +61,25 @@ #define PGDIR_MASK (~(PGDIR_SIZE - 1)) #define PTRS_PER_PGD BIT(32 - PGDIR_SHIFT) +#if CONFIG_PGTABLE_LEVELS > 3 +#define PUD_SIZE BIT(PUD_SHIFT) +#define PUD_MASK (~(PUD_SIZE - 1)) +#define PTRS_PER_PUD BIT(PGDIR_SHIFT - PUD_SHIFT) +#endif + #if CONFIG_PGTABLE_LEVELS > 2 #define PMD_SIZE BIT(PMD_SHIFT) #define 
PMD_MASK (~(PMD_SIZE - 1)) -#define PTRS_PER_PMD BIT(PGDIR_SHIFT - PMD_SHIFT) +#define PTRS_PER_PMD BIT(PUD_SHIFT - PMD_SHIFT) #endif #define PTRS_PER_PTE BIT(PMD_SHIFT - PAGE_SHIFT) #ifndef __ASSEMBLY__ -#if CONFIG_PGTABLE_LEVELS > 2 +#if CONFIG_PGTABLE_LEVELS > 3 +#include +#elif CONFIG_PGTABLE_LEVELS > 2 #include #else #include @@ -81,9 +94,31 @@ #define pgd_ERROR(e) \ pr_crit("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e)) +#if CONFIG_PGTABLE_LEVELS > 3 + +/* In 4 level paging, p4d_* macros work on pgd */ +#define p4d_none(x) (!p4d_val(x)) +#define p4d_bad(x) ((p4d_val(x) & ~PAGE_MASK)) +#define p4d_present(x) (p4d_val(x)) +#define p4d_clear(xp) do { p4d_val(*(xp)) = 0; } while (0) +#define p4d_pgtable(p4d) ((pud_t *)(p4d_val(p4d) & PAGE_MASK)) +#define p4d_page(p4d) virt_to_page(p4d_pgtable(p4d)) +#define set_p4d(p4dp, p4d) (*(p4dp) = p4d) + +/* + * 2nd level paging: pud + */ +#define pud_ERROR(e) \ + pr_crit("%s:%d: bad pud %08lx.\n", __FILE__, __LINE__, pud_val(e)) + +#endif + #if CONFIG_PGTABLE_LEVELS > 2 -/* In 3 level paging, pud_* macros work on pgd */ +/* + * In 3 level paging, pud_* macros work on pgd + * In 4 level paging, pud_* macros work on pud + */ #define pud_none(x) (!pud_val(x)) #define pud_bad(x) ((pud_val(x) & ~PAGE_MASK)) #define pud_present(x) (pud_val(x)) @@ -93,7 +128,7 @@ #define set_pud(pudp, pud) (*(pudp) = pud) /* - * 2nd level paging: pmd + * 3rd level paging: pmd */ #define pmd_ERROR(e) \ pr_crit("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e)) @@ -121,7 +156,7 @@ #define pmd_pgtable(pmd) ((pgtable_t) pmd_page_vaddr(pmd)) /* - * 3rd level paging: pte + * 4th level paging: pte */ #define pte_ERROR(e) \ pr_crit("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e)) diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c index 8da2f0ad8c69..f8994164fa36 100644 --- a/arch/arc/mm/fault.c +++ b/arch/arc/mm/fault.c @@ -46,6 +46,8 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address) if (!p4d_present(*p4d_k)) goto bad_area; + set_p4d(p4d, *p4d_k); + pud = pud_offset(p4d, address); pud_k = pud_offset(p4d_k, address); if (!pud_present(*pud_k)) diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c index f7ba2a5d5ec8..699ecf119641 100644 --- a/arch/arc/mm/init.c +++ b/arch/arc/mm/init.c @@ -191,6 +191,7 @@ void __init mem_init(void) highmem_init(); BUILD_BUG_ON((PTRS_PER_PGD * sizeof(pgd_t)) > PAGE_SIZE); + BUILD_BUG_ON((PTRS_PER_PUD * sizeof(pud_t)) > PAGE_SIZE); BUILD_BUG_ON((PTRS_PER_PMD * sizeof(pmd_t)) > PAGE_SIZE); BUILD_BUG_ON((PTRS_PER_PTE * sizeof(pte_t)) > PAGE_SIZE); } diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S index 5f57eba1089d..e054780a8fe0 100644 --- a/arch/arc/mm/tlbex.S +++ b/arch/arc/mm/tlbex.S @@ -173,6 +173,15 @@ ex_saved_reg1: tst r3, r3 bz do_slow_path_pf ; if no Page Table, do page fault +#if CONFIG_PGTABLE_LEVELS > 3 + lsr r0, r2, PUD_SHIFT ; Bits for indexing into PUD + and r0, r0, (PTRS_PER_PUD - 1) + ld.as r1, [r3, r0] ; PMD entry + tst r1, r1 + bz do_slow_path_pf + mov r3, r1 +#endif + #if CONFIG_PGTABLE_LEVELS > 2 lsr r0, r2, PMD_SHIFT ; Bits for indexing into PMD and r0, r0, (PTRS_PER_PMD - 1) From patchwork Thu Aug 12 23:37:52 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434463 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, 
DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 192F2C4338F for ; Thu, 12 Aug 2021 23:38:36 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B88B660EE2 for ; Thu, 12 Aug 2021 23:38:35 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org B88B660EE2 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id 3EA716B008A; Thu, 12 Aug 2021 19:38:07 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 371B86B008C; Thu, 12 Aug 2021 19:38:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2606C6B0092; Thu, 12 Aug 2021 19:38:07 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0197.hostedemail.com [216.40.44.197]) by kanga.kvack.org (Postfix) with ESMTP id 09B5D6B008A for ; Thu, 12 Aug 2021 19:38:07 -0400 (EDT) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id B6914C5D4 for ; Thu, 12 Aug 2021 23:38:06 +0000 (UTC) X-FDA: 78468044172.06.01A2C4E Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf27.hostedemail.com (Postfix) with ESMTP id 6FB277008035 for ; Thu, 12 Aug 2021 23:38:06 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id 4087A61131; Thu, 12 Aug 2021 23:38:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811485; bh=7WJKlO9bHU1KTACHVSZNKYdiOMEX5YhZuZxQV/SdLuA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=Dt1uufFxiBpM64WdTSAuTk8+1zrKO6HU17TYlEUlNN2kujLxQYptBPWGS+0DQsO8c qLICm2qZyETLEzLvScOtCQYp5JedDoefi3B5ziqpYXQusDyQeCNzXRPINiFGzyB0ik AIsE96Dk0G2t12AyO1Ka1LfR5NFh9OOSpW0+yL/l0+qDRko4spy5cH0V3OuvtRJ3Fg nIKGR1NJko/owd0DL01zdM6HsvN6k862UPaFztDZw0p5ZGHiObtNREUaZCUqexrveP m580WAdbM/F0mF37GPiVk2k5f33D1sPQrkwPSbiJSQCipodfYOi5O5jCZ9SrkyCP0n GSOvLOSh6E24w== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 18/19] ARC: mm: vmalloc sync from kernel to user table to update PMD ... Date: Thu, 12 Aug 2021 16:37:52 -0700 Message-Id: <20210812233753.104217-19-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 6FB277008035 X-Stat-Signature: r54e1iiqmfupmi9kmgcajyecr8kam1qy Authentication-Results: imf27.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b=Dt1uufFx; dmarc=pass (policy=none) header.from=kernel.org; spf=pass (imf27.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org X-HE-Tag: 1628811486-691910 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: ... not PGD vmalloc() sets up the kernel page table (starting from @swapper_pg_dir). 
But when the vmalloc area is accessed in the context of a user task, say when opening a terminal in n_tty_open(), the user page tables need to be synced from the kernel page tables so that the TLB entry is created in "user context". The old code was doing this incorrectly, as it was updating the user pgd entry (the first level itself) to point to the kernel pud table (2nd level), effectively replacing the entire user space translation with the kernel one. The correct way to do this is to ONLY update a user space pgd/pud/pmd entry if it is not populated already. This ensures that only the missing leaf pmd entry gets updated to point to the relevant kernel pte table. From a code change point of view, we are changing the pattern: p4d = p4d_offset(pgd, address); p4d_k = p4d_offset(pgd_k, address); if (!p4d_present(*p4d_k)) goto bad_area; set_p4d(p4d, *p4d_k); with p4d = p4d_offset(pgd, address); p4d_k = p4d_offset(pgd_k, address); if (p4d_none(*p4d_k)) goto bad_area; if (!p4d_present(*p4d)) set_p4d(p4d, *p4d_k);
Signed-off-by: Vineet Gupta --- arch/arc/mm/fault.c | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/arc/mm/fault.c b/arch/arc/mm/fault.c index f8994164fa36..5787c261c9a4 100644 --- a/arch/arc/mm/fault.c +++ b/arch/arc/mm/fault.c @@ -36,31 +36,31 @@ noinline static int handle_kernel_vaddr_fault(unsigned long address) pgd = pgd_offset(current->active_mm, address); pgd_k = pgd_offset_k(address); - if (!pgd_present(*pgd_k)) + if (pgd_none (*pgd_k)) goto bad_area; - - set_pgd(pgd, *pgd_k); + if (!pgd_present(*pgd)) + set_pgd(pgd, *pgd_k); p4d = p4d_offset(pgd, address); p4d_k = p4d_offset(pgd_k, address); - if (!p4d_present(*p4d_k)) + if (p4d_none(*p4d_k)) goto bad_area; - - set_p4d(p4d, *p4d_k); + if (!p4d_present(*p4d)) + set_p4d(p4d, *p4d_k); pud = pud_offset(p4d, address); pud_k = pud_offset(p4d_k, address); - if (!pud_present(*pud_k)) + if (pud_none(*pud_k)) goto bad_area; - - set_pud(pud, *pud_k); + if (!pud_present(*pud)) + set_pud(pud, *pud_k); pmd = pmd_offset(pud, address); pmd_k = pmd_offset(pud_k, address); - if (!pmd_present(*pmd_k)) + if (pmd_none(*pmd_k)) goto bad_area; - - set_pmd(pmd, *pmd_k); + if (!pmd_present(*pmd)) + set_pmd(pmd, *pmd_k); /* XXX: create the TLB entry here */ return 0;
From patchwork Thu Aug 12 23:37:53 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vineet Gupta X-Patchwork-Id: 12434465 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00,DKIM_INVALID, DKIM_SIGNED,INCLUDES_CR_TRAILER,INCLUDES_PATCH,MAILING_LIST_MULTI, SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 85EB7C4338F for ; Thu, 12 Aug 2021 23:38:38 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1788A60EE2 for ; Thu, 12 Aug 2021 23:38:37 +0000 (UTC) DMARC-Filter: OpenDMARC Filter v1.4.1 mail.kernel.org 1788A60EE2 Authentication-Results: mail.kernel.org; dmarc=fail (p=none dis=none) header.from=kernel.org Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kvack.org Received: by kanga.kvack.org (Postfix) id D5EF96B008C; Thu, 12 Aug 2021 19:38:07 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id CE8666B0092; Thu, 12 Aug 2021
19:38:07 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B38FB8D0001; Thu, 12 Aug 2021 19:38:07 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 953506B008C for ; Thu, 12 Aug 2021 19:38:07 -0400 (EDT) Received: from smtpin09.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 4229C181AF5E6 for ; Thu, 12 Aug 2021 23:38:07 +0000 (UTC) X-FDA: 78468044214.09.343B306 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by imf18.hostedemail.com (Postfix) with ESMTP id D6A6F401007A for ; Thu, 12 Aug 2021 23:38:06 +0000 (UTC) Received: by mail.kernel.org (Postfix) with ESMTPSA id D23A661101; Thu, 12 Aug 2021 23:38:05 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1628811486; bh=VGtZCbKljVwjXPskC+/bFDWf9l1SbMAK3xd6joWYr3c=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=dh3f/J/QQ5q0R8A4sxpCNodX8JWCbztZwgSkoR0mwiE1JPB3LjQhc4rHCOFvEsFUz Z0gRIJyWG38cuR4CyMOXJ6fdZ/Z3i2B1XJe3FVGhSYklCqrtLWcz2tms1tFmgLaczJ mmiKZdQWV7pRKUAJnkYDnGyvX1O0uwRdkBE+UG/WvEMeT30FMXroO0s914EwJdn12t 5MCEQm4i53THfTpslaFBf1snNLHe/Snr1vKnHWhQHYYrXW/zhaFJzoKGEAVWceKe6+ n2PMoIx0/vvIIGG+vrxMJtKA3bIomvupXvJ4UwiQn7/orvyqUt+lx/uBX09r9Jyjvj kVgvrQGf1V30g== From: Vineet Gupta To: linux-snps-arc@lists.infradead.org Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Anshuman Khandual , Mike Rapoport , Vineet Gupta Subject: [PATCH v2 19/19] ARC: mm: introduce _PAGE_TABLE to explicitly link pgd,pud,pmd entries Date: Thu, 12 Aug 2021 16:37:53 -0700 Message-Id: <20210812233753.104217-20-vgupta@kernel.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210812233753.104217-1-vgupta@kernel.org> References: <20210812233753.104217-1-vgupta@kernel.org> MIME-Version: 1.0 Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=kernel.org header.s=k20201202 header.b="dh3f/J/Q"; spf=pass (imf18.hostedemail.com: domain of vgupta@kernel.org designates 198.145.29.99 as permitted sender) smtp.mailfrom=vgupta@kernel.org; dmarc=pass (policy=none) header.from=kernel.org X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: D6A6F401007A X-Stat-Signature: noumqfipray3yzw1zxthmw19j1cgjwdf X-HE-Tag: 1628811486-282720 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The ARCv3 hardware walker expects Table Descriptors to have b'11 in the LSB bits to continue moving to the next level. This commit adds that (to the ARCv2 code) and ensures that it works in the software-walked regime. The pte entries still need tagging, but that is not possible in ARCv2 since the LSB 2 bits are currently used.
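For illustration only (this is not part of the patch): a minimal user-space C sketch of the table-descriptor linkage described above. An upper-level entry carries the address of the next-level table with b'11 (_PAGE_TABLE) in its two LSBs, and a walker checks and strips that marker before following the pointer. The 0x3 value and the 8K page size mirror this series; the helper names and the sample address are made up for the demo. In the kernel, pud_populate() ORs the marker in, pud_bad() checks for it, pud_pgtable() masks it off via PAGE_MASK, and the assembly walker clears it with bmskn.

#include <stdio.h>

#define PAGE_SHIFT   13                        /* 8K pages, as in this series */
#define PAGE_MASK    (~((1UL << PAGE_SHIFT) - 1))
#define _PAGE_TABLE  0x3UL                     /* b'11 marker in the descriptor LSBs */

/* Link a next-level table into an upper-level entry (cf. pud_populate) */
static unsigned long make_table_desc(unsigned long next_table)
{
        return (next_table & PAGE_MASK) | _PAGE_TABLE;
}

/* A valid table descriptor must carry the marker (cf. the new pud_bad) */
static int desc_is_table(unsigned long desc)
{
        return (desc & _PAGE_TABLE) == _PAGE_TABLE;
}

/* Strip the marker to recover the table pointer (cf. bmskn in tlbex.S) */
static unsigned long desc_to_table(unsigned long desc)
{
        return desc & ~_PAGE_TABLE;
}

int main(void)
{
        unsigned long pmd_table = 0x80004000UL;  /* hypothetical, page-aligned table */
        unsigned long pud_entry = make_table_desc(pmd_table);

        printf("entry=%#lx is_table=%d next=%#lx\n",
               pud_entry, desc_is_table(pud_entry), desc_to_table(pud_entry));
        return 0;
}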
Signed-off-by: Vineet Gupta --- arch/arc/include/asm/pgalloc.h | 8 ++++---- arch/arc/include/asm/pgtable-bits-arcv2.h | 2 ++ arch/arc/include/asm/pgtable-levels.h | 6 +++--- arch/arc/mm/tlbex.S | 4 +++- 4 files changed, 12 insertions(+), 8 deletions(-) diff --git a/arch/arc/include/asm/pgalloc.h b/arch/arc/include/asm/pgalloc.h index 096b8ef58edb..a8c01eceba1b 100644 --- a/arch/arc/include/asm/pgalloc.h +++ b/arch/arc/include/asm/pgalloc.h @@ -43,12 +43,12 @@ pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) * * The cast itself is needed given simplistic definition of set_pmd() */ - set_pmd(pmd, __pmd((unsigned long)pte)); + set_pmd(pmd, __pmd((unsigned long)pte | _PAGE_TABLE)); } static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, pgtable_t pte_page) { - set_pmd(pmd, __pmd((unsigned long)page_address(pte_page))); + set_pmd(pmd, __pmd((unsigned long)page_address(pte_page) | _PAGE_TABLE)); } static inline pgd_t *pgd_alloc(struct mm_struct *mm) @@ -74,7 +74,7 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp) { - set_p4d(p4dp, __p4d((unsigned long)pudp)); + set_p4d(p4dp, __p4d((unsigned long)pudp | _PAGE_TABLE)); } #define __pud_free_tlb(tlb, pmd, addr) pud_free((tlb)->mm, pmd) @@ -85,7 +85,7 @@ static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp) static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp) { - set_pud(pudp, __pud((unsigned long)pmdp)); + set_pud(pudp, __pud((unsigned long)pmdp | _PAGE_TABLE)); } #define __pmd_free_tlb(tlb, pmd, addr) pmd_free((tlb)->mm, pmd) diff --git a/arch/arc/include/asm/pgtable-bits-arcv2.h b/arch/arc/include/asm/pgtable-bits-arcv2.h index 183d23bc1e00..54aba0d3ae34 100644 --- a/arch/arc/include/asm/pgtable-bits-arcv2.h +++ b/arch/arc/include/asm/pgtable-bits-arcv2.h @@ -32,6 +32,8 @@ #define _PAGE_HW_SZ 0 #endif +#define _PAGE_TABLE 0x3 + /* Defaults for every user page */ #define ___DEF (_PAGE_PRESENT | _PAGE_CACHEABLE) diff --git a/arch/arc/include/asm/pgtable-levels.h b/arch/arc/include/asm/pgtable-levels.h index 2da3c4e52a91..6c7a8360d986 100644 --- a/arch/arc/include/asm/pgtable-levels.h +++ b/arch/arc/include/asm/pgtable-levels.h @@ -98,7 +98,7 @@ /* In 4 level paging, p4d_* macros work on pgd */ #define p4d_none(x) (!p4d_val(x)) -#define p4d_bad(x) ((p4d_val(x) & ~PAGE_MASK)) +#define p4d_bad(x) (!(p4d_val(x) & _PAGE_TABLE)) #define p4d_present(x) (p4d_val(x)) #define p4d_clear(xp) do { p4d_val(*(xp)) = 0; } while (0) #define p4d_pgtable(p4d) ((pud_t *)(p4d_val(p4d) & PAGE_MASK)) @@ -120,7 +120,7 @@ * In 4 level paging, pud_* macros work on pud */ #define pud_none(x) (!pud_val(x)) -#define pud_bad(x) ((pud_val(x) & ~PAGE_MASK)) +#define pud_bad(x) (!(pud_val(x) & _PAGE_TABLE)) #define pud_present(x) (pud_val(x)) #define pud_clear(xp) do { pud_val(*(xp)) = 0; } while (0) #define pud_pgtable(pud) ((pmd_t *)(pud_val(pud) & PAGE_MASK)) @@ -147,7 +147,7 @@ * In 3+ level paging (pgd -> pmd -> pte), pmd_* macros work on pmd */ #define pmd_none(x) (!pmd_val(x)) -#define pmd_bad(x) ((pmd_val(x) & ~PAGE_MASK)) +#define pmd_bad(pmd) (!(pmd_val(pmd) & _PAGE_TABLE)) #define pmd_present(x) (pmd_val(x)) #define pmd_clear(xp) do { pmd_val(*(xp)) = 0; } while (0) #define pmd_page_vaddr(pmd) (pmd_val(pmd) & PAGE_MASK) diff --git a/arch/arc/mm/tlbex.S b/arch/arc/mm/tlbex.S index e054780a8fe0..3874a8086591 100644 --- a/arch/arc/mm/tlbex.S +++ b/arch/arc/mm/tlbex.S @@ -171,11 +171,12 @@ ex_saved_reg1: lsr 
r0, r2, PGDIR_SHIFT ; Bits for indexing into PGD ld.as r3, [r1, r0] ; PGD entry corresp to faulting addr tst r3, r3 - bz do_slow_path_pf ; if no Page Table, do page fault + bz do_slow_path_pf ; next level table missing, handover to linux vm code #if CONFIG_PGTABLE_LEVELS > 3 lsr r0, r2, PUD_SHIFT ; Bits for indexing into PUD and r0, r0, (PTRS_PER_PUD - 1) + bmskn r3, r3, 1 ; clear _PAGE_TABLE bits ld.as r1, [r3, r0] ; PMD entry tst r1, r1 bz do_slow_path_pf @@ -185,6 +186,7 @@ ex_saved_reg1: #if CONFIG_PGTABLE_LEVELS > 2 lsr r0, r2, PMD_SHIFT ; Bits for indexing into PMD and r0, r0, (PTRS_PER_PMD - 1) + bmskn r3, r3, 1 ; clear _PAGE_TABLE bits ld.as r1, [r3, r0] ; PMD entry tst r1, r1 bz do_slow_path_pf
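To close out, a quick host-side sanity check (not from the patches themselves) of the level geometry the series quotes for MMUv4 with 8K pages: the 3-level split <4> : <7> : <8> : <13> and the 4-level split <4> : <3> : <4> : <8> : <13>. Only the SHIFT values below are taken from pgtable-levels.h in the 4-level configuration; the program itself is a throwaway illustration.

#include <assert.h>
#include <stdio.h>

/* SHIFT values from the 4-level setup in pgtable-levels.h (8K pages) */
#define PAGE_SHIFT   13
#define PMD_SHIFT    21
#define PUD_SHIFT    25
#define PGDIR_SHIFT  28
#define VA_BITS      32

int main(void)
{
        int pgd_bits = VA_BITS - PGDIR_SHIFT;    /* 4 bits -> 16 PGD entries  */
        int pud_bits = PGDIR_SHIFT - PUD_SHIFT;  /* 3 bits -> 8 PUD entries   */
        int pmd_bits = PUD_SHIFT - PMD_SHIFT;    /* 4 bits -> 16 PMD entries  */
        int pte_bits = PMD_SHIFT - PAGE_SHIFT;   /* 8 bits -> 256 PTE entries */

        /* the per-level index bits plus the page offset must cover all 32 VA bits */
        assert(pgd_bits + pud_bits + pmd_bits + pte_bits + PAGE_SHIFT == VA_BITS);

        printf("PGD %d x PUD %d x PMD %d x PTE %d entries, %d-byte pages\n",
               1 << pgd_bits, 1 << pud_bits, 1 << pmd_bits, 1 << pte_bits,
               1 << PAGE_SHIFT);
        return 0;
}

With 4-byte entries (the non-PAE40 case) each of these tables also fits comfortably in one 8K page, which is what the BUILD_BUG_ON checks added to mem_init() in patches 16 and 17 assert at compile time.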