From patchwork Tue Nov 3 17:58:35 2020
From: Marco Elver <elver@google.com>
To: elver@google.com, akpm@linux-foundation.org, glider@google.com
Cc: mark.rutland@arm.com, hdanton@sina.com, linux-doc@vger.kernel.org,
 peterz@infradead.org, catalin.marinas@arm.com, dave.hansen@linux.intel.com,
 linux-mm@kvack.org, edumazet@google.com, hpa@zytor.com, cl@linux.com,
 will@kernel.org, sjpark@amazon.com, corbet@lwn.net, x86@kernel.org,
 kasan-dev@googlegroups.com, mingo@redhat.com, vbabka@suse.cz,
 rientjes@google.com, aryabinin@virtuozzo.com, joern@purestorage.com,
 keescook@chromium.org, paulmck@kernel.org, jannh@google.com,
 andreyknvl@google.com, bp@alien8.de, luto@kernel.org,
 Jonathan.Cameron@huawei.com, tglx@linutronix.de, dvyukov@google.com,
 linux-arm-kernel@lists.infradead.org, gregkh@linuxfoundation.org,
 linux-kernel@vger.kernel.org, penberg@kernel.org, iamjoonsoo.kim@lge.com
Date: Tue, 3 Nov 2020 18:58:35 +0100
Subject: [PATCH v7 3/9] arm64, kfence: enable KFENCE for ARM64
Message-Id: <20201103175841.3495947-4-elver@google.com>
In-Reply-To: <20201103175841.3495947-1-elver@google.com>
References: <20201103175841.3495947-1-elver@google.com>

Add architecture specific implementation details for KFENCE and enable
KFENCE for the arm64 architecture. In particular, this implements the
required interface in <asm/kfence.h>.

KFENCE requires that attributes for pages from its memory pool can
individually be set. Therefore, force the entire linear map to be mapped
at page granularity. Doing so may result in extra memory allocated for
page tables in case rodata=full is not set; however, currently
CONFIG_RODATA_FULL_DEFAULT_ENABLED=y is the default, and the common case
is therefore not affected by this change.

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Co-developed-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Alexander Potapenko <glider@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Reviewed-by: Jann Horn <jannh@google.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
---
v7:
* Remove dependency on page size [reported by Mark Rutland].
* Fault normally on permission faults [reported by Jann Horn].

v5:
* Move generic page allocation code to core.c [suggested by Jann Horn].
* Remove comment about HAVE_ARCH_KFENCE_STATIC_POOL, since we no longer
  support static pools.
* Force page granularity for the linear map [suggested by Mark Rutland].
---
 arch/arm64/Kconfig              |  1 +
 arch/arm64/include/asm/kfence.h | 19 +++++++++++++++++++
 arch/arm64/mm/fault.c           |  4 ++++
 arch/arm64/mm/mmu.c             |  7 ++++++-
 4 files changed, 30 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/include/asm/kfence.h

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1d466addb078..e524c07c3eda 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -135,6 +135,7 @@ config ARM64
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KASAN_SW_TAGS if HAVE_ARCH_KASAN
+	select HAVE_ARCH_KFENCE
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
new file mode 100644
index 000000000000..5ac0f599cc9a
--- /dev/null
+++ b/arch/arm64/include/asm/kfence.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_KFENCE_H
+#define __ASM_KFENCE_H
+
+#include <asm/cacheflush.h>
+
+#define KFENCE_SKIP_ARCH_FAULT_HANDLER "el1_sync"
+
+static inline bool arch_kfence_init_pool(void) { return true; }
+
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+	set_memory_valid(addr, 1, !protect);
+
+	return true;
+}
+
+#endif /* __ASM_KFENCE_H */
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 1ee94002801f..2d60204b4ed2 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -10,6 +10,7 @@
 #include <linux/acpi.h>
 #include <linux/bitfield.h>
 #include <linux/extable.h>
+#include <linux/kfence.h>
 #include <linux/signal.h>
 #include <linux/mm.h>
 #include <linux/hardirq.h>
@@ -322,6 +323,9 @@ static void __do_kernel_fault(unsigned long addr, unsigned int esr,
 	} else if (addr < PAGE_SIZE) {
 		msg = "NULL pointer dereference";
 	} else {
+		if (kfence_handle_page_fault(addr))
+			return;
+
 		msg = "paging request";
 	}
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 1c0f3e02f731..86be6d1a78ab 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1449,7 +1449,12 @@ int arch_add_memory(int nid, u64 start, u64 size,
 {
 	int ret, flags = 0;
 
-	if (rodata_full || debug_pagealloc_enabled())
+	/*
+	 * KFENCE requires linear map to be mapped at page granularity, so that
+	 * it is possible to protect/unprotect single pages in the KFENCE pool.
+	 */
+	if (rodata_full || debug_pagealloc_enabled() ||
+	    IS_ENABLED(CONFIG_KFENCE))
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
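
Note (illustration only, not part of the patch): kfence_protect_page() above
toggles the valid bit of the KFENCE pool's linear-map PTE via
set_memory_valid(), so a stray access to a protected page faults and is
reported. A minimal userspace sketch of the same guard-page idea follows,
with mprotect() standing in for set_memory_valid(); protect_page() is a
hypothetical helper written only for this note.

/* Userspace analogue of kfence_protect_page(); illustrative only. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical stand-in for kfence_protect_page(addr, protect). */
static bool protect_page(void *addr, bool protect)
{
	/* PROT_NONE plays the role of a cleared PTE valid bit. */
	int prot = protect ? PROT_NONE : (PROT_READ | PROT_WRITE);

	return mprotect(addr, getpagesize(), prot) == 0;
}

int main(void)
{
	size_t page = getpagesize();
	char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return EXIT_FAILURE;

	p[0] = 'x';		/* page is "valid": accesses succeed */
	protect_page(p, true);	/* guard state: any access would fault */
	/* p[0] = 'y' here would SIGSEGV, like an OOB access hitting KFENCE */
	protect_page(p, false);	/* back to "valid" */
	p[0] = 'y';		/* accesses succeed again */

	printf("guard page toggled, value: %c\n", p[0]);
	munmap(p, page);
	return EXIT_SUCCESS;
}

In the kernel this per-page toggle only works if the pool is covered by
page-granular PTEs rather than block or contiguous mappings, which is why
arch_add_memory() above adds NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS whenever
CONFIG_KFENCE is enabled.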