From patchwork Sat Mar 13 08:45:05 2021
From: Alexandre Ghiti <alex@ghiti.fr>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Nylon Chen, Nick Hu,
 Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com
Cc: Alexandre Ghiti, Palmer Dabbelt
Subject: [PATCH v3 2/2] riscv: Cleanup KASAN_VMALLOC support
Date: Sat, 13 Mar 2021 03:45:05 -0500
Message-Id: <20210313084505.16132-3-alex@ghiti.fr>
In-Reply-To: <20210313084505.16132-1-alex@ghiti.fr>
References: <20210313084505.16132-1-alex@ghiti.fr>

When the KASAN vmalloc region is populated, there is no userspace process
and the page table in use is swapper_pg_dir, so there is no need to read
SATP. We can then use the same scheme as the kasan_populate_p*d functions
to walk the page table, which harmonizes the code.

In addition, make use of set_pgd(), which goes through all unused page
table levels, contrary to the p*d_populate functions; this makes the
function work whatever the number of page table levels.

Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Reviewed-by: Palmer Dabbelt
---
 arch/riscv/mm/kasan_init.c | 59 ++++++++++++--------------------------
 1 file changed, 18 insertions(+), 41 deletions(-)

diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 57bf4ae09361..c16178918239 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -11,18 +11,6 @@
 #include
 #include
 
-static __init void *early_alloc(size_t size, int node)
-{
-	void *ptr = memblock_alloc_try_nid(size, size,
-		__pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, node);
-
-	if (!ptr)
-		panic("%pS: Failed to allocate %zu bytes align=%zx nid=%d from=%llx\n",
-			__func__, size, size, node, (u64)__pa(MAX_DMA_ADDRESS));
-
-	return ptr;
-}
-
 extern pgd_t early_pg_dir[PTRS_PER_PGD];
 asmlinkage void __init kasan_early_init(void)
 {
@@ -155,38 +143,27 @@ static void __init kasan_populate(void *start, void *end)
 	memset(start, KASAN_SHADOW_INIT, end - start);
 }
 
-void __init kasan_shallow_populate(void *start, void *end)
+static void __init kasan_shallow_populate_pgd(unsigned long vaddr, unsigned long end)
 {
-	unsigned long vaddr = (unsigned long)start & PAGE_MASK;
-	unsigned long vend = PAGE_ALIGN((unsigned long)end);
-	unsigned long pfn;
-	int index;
+	unsigned long next;
 	void *p;
-	pud_t *pud_dir, *pud_k;
-	pgd_t *pgd_dir, *pgd_k;
-	p4d_t *p4d_dir, *p4d_k;
-
-	while (vaddr < vend) {
-		index = pgd_index(vaddr);
-		pfn = csr_read(CSR_SATP) & SATP_PPN;
-		pgd_dir = (pgd_t *)pfn_to_virt(pfn) + index;
-		pgd_k = init_mm.pgd + index;
-		pgd_dir = pgd_offset_k(vaddr);
-		set_pgd(pgd_dir, *pgd_k);
-
-		p4d_dir = p4d_offset(pgd_dir, vaddr);
-		p4d_k = p4d_offset(pgd_k, vaddr);
-
-		vaddr = (vaddr + PUD_SIZE) & PUD_MASK;
-		pud_dir = pud_offset(p4d_dir, vaddr);
-		pud_k = pud_offset(p4d_k, vaddr);
-
-		if (pud_present(*pud_dir)) {
-			p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
-			pud_populate(&init_mm, pud_dir, p);
+	pgd_t *pgd_k = pgd_offset_k(vaddr);
+
+	do {
+		next = pgd_addr_end(vaddr, end);
+		if (pgd_page_vaddr(*pgd_k) == (unsigned long)lm_alias(kasan_early_shadow_pmd)) {
+			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+			set_pgd(pgd_k, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
 		}
-		vaddr += PAGE_SIZE;
-	}
+	} while (pgd_k++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_shallow_populate(void *start, void *end)
+{
+	unsigned long vaddr = (unsigned long)start & PAGE_MASK;
+	unsigned long vend = PAGE_ALIGN((unsigned long)end);
+
+	kasan_shallow_populate_pgd(vaddr, vend);
 
 	local_flush_tlb_all();
 }
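
For readers unfamiliar with the pgd_addr_end() iteration idiom that the new
kasan_shallow_populate_pgd() relies on, below is a minimal userspace sketch
of the walk. It is only an illustration, not kernel code: the 1 GiB
PGDIR_SIZE, the sample addresses and the entry counter are made-up
stand-ins, and the local pgd_addr_end() simply mirrors the kernel's generic
helper of the same name.

/* Userspace sketch of the pgd_addr_end() walk used by
 * kasan_shallow_populate_pgd() above.  Toy values only: a "PGD entry"
 * here covers 1 GiB (as under Sv39), chosen purely for the demo. */
#include <stdio.h>
#include <stdint.h>

#define PGDIR_SHIFT 30
#define PGDIR_SIZE  (UINT64_C(1) << PGDIR_SHIFT)
#define PGDIR_MASK  (~(PGDIR_SIZE - 1))

/* Mirrors the kernel's generic pgd_addr_end(): the end of the current
 * PGD entry's range, clamped to the overall end.  The "- 1" comparison
 * keeps the clamp correct even when the boundary wraps to 0 at the top
 * of the address space. */
static uint64_t pgd_addr_end(uint64_t addr, uint64_t end)
{
	uint64_t boundary = (addr + PGDIR_SIZE) & PGDIR_MASK;

	return (boundary - 1 < end - 1) ? boundary : end;
}

int main(void)
{
	/* Arbitrary region straddling several 1 GiB entries. */
	uint64_t vaddr = 0x7fff0000;
	uint64_t end   = 0x100010000;
	uint64_t next;
	int idx = 0;	/* stand-in for the pgd_k++ pointer walk */

	do {
		next = pgd_addr_end(vaddr, end);
		printf("entry %d covers [%#llx, %#llx)\n", idx,
		       (unsigned long long)vaddr, (unsigned long long)next);
		/* The patch would allocate a page table here, when the
		 * entry still points at kasan_early_shadow_pmd. */
	} while (idx++, vaddr = next, vaddr != end);

	return 0;
}

The do/while with the comma expression lets a single loop visit exactly one
PGD entry per iteration, with pgd_addr_end() clamping the step so the first
and last entries may be only partially covered. And since pgd_offset_k()
resolves through init_mm.pgd (i.e. swapper_pg_dir), the SATP read in the
old loop becomes unnecessary.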