From patchwork Thu Oct 29 13:16:42 2020
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 11866333
Date: Thu, 29 Oct 2020 14:16:42 +0100
In-Reply-To: <20201029131649.182037-1-elver@google.com>
Message-Id: <20201029131649.182037-3-elver@google.com>
References: <20201029131649.182037-1-elver@google.com>
Subject: [PATCH v6 2/9] x86, kfence: enable KFENCE for x86
From: Marco Elver <elver@google.com>
To: elver@google.com, akpm@linux-foundation.org, glider@google.com
Cc: hpa@zytor.com, paulmck@kernel.org, andreyknvl@google.com,
    aryabinin@virtuozzo.com, luto@kernel.org, bp@alien8.de,
    catalin.marinas@arm.com, cl@linux.com, dave.hansen@linux.intel.com,
    rientjes@google.com, dvyukov@google.com, edumazet@google.com,
    gregkh@linuxfoundation.org, hdanton@sina.com, mingo@redhat.com,
    jannh@google.com, Jonathan.Cameron@huawei.com, corbet@lwn.net,
    iamjoonsoo.kim@lge.com, joern@purestorage.com, keescook@chromium.org,
    mark.rutland@arm.com, penberg@kernel.org, peterz@infradead.org,
    sjpark@amazon.com, tglx@linutronix.de, vbabka@suse.cz, will@kernel.org,
    x86@kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org

From: Alexander Potapenko <glider@google.com>

Add architecture-specific implementation details for KFENCE and enable
KFENCE for the x86 architecture. In particular, this implements the
required interface in <asm/kfence.h> for setting up the pool and
providing helper functions for protecting and unprotecting pages.

For x86, we need to ensure that the pool uses 4K pages, which is done
using the set_memory_4k() helper function.

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
Co-developed-by: Marco Elver <elver@google.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Alexander Potapenko <glider@google.com>
---
v5:
* MAJOR CHANGE: Switch to the memblock_alloc'd pool. Running benchmarks
  with the newly optimized is_kfence_address() (see the sketch after
  this changelog), no difference between baseline and KFENCE is
  observed.
* Suggested by Jann Horn:
  * Move x86 kfence_handle_page_fault before oops handling.
  * WARN_ON in kfence_protect_page if non-4K pages.
  * Better comments for x86 kfence_protect_page.
v4:
* Define __kfence_pool_attrs.
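
For reviewers' context (not part of this patch): is_kfence_address() is
provided by <linux/kfence.h> in the core KFENCE patch of this series.
With the memblock_alloc'd pool it reduces to a single range check
against the pool bounds; a minimal sketch, assuming the __kfence_pool
and KFENCE_POOL_SIZE definitions from that patch:

static __always_inline bool is_kfence_address(const void *addr)
{
	/*
	 * Unsigned wrap-around turns the two-sided bounds check into a
	 * single comparison: if addr is below __kfence_pool, the
	 * subtraction underflows to a value larger than KFENCE_POOL_SIZE.
	 */
	return unlikely((unsigned long)((char *)addr - __kfence_pool) < KFENCE_POOL_SIZE);
}

This constant-time check is what keeps the always-on fast path cheap
enough that the benchmarks above show no measurable difference.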
---
 arch/x86/Kconfig              |  1 +
 arch/x86/include/asm/kfence.h | 65 +++++++++++++++++++++++++++++++++++
 arch/x86/mm/fault.c           |  4 +++
 3 files changed, 70 insertions(+)
 create mode 100644 arch/x86/include/asm/kfence.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f6946b81f74a..c9ec6b5ba358 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -144,6 +144,7 @@ config X86
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN			if X86_64
 	select HAVE_ARCH_KASAN_VMALLOC		if X86_64
+	select HAVE_ARCH_KFENCE
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT
diff --git a/arch/x86/include/asm/kfence.h b/arch/x86/include/asm/kfence.h
new file mode 100644
index 000000000000..beeac105dae7
--- /dev/null
+++ b/arch/x86/include/asm/kfence.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_X86_KFENCE_H
+#define _ASM_X86_KFENCE_H
+
+#include <linux/bug.h>
+#include <linux/kfence.h>
+
+#include <asm/pgalloc.h>
+#include <asm/pgtable_types.h>
+#include <asm/set_memory.h>
+#include <asm/tlbflush.h>
+
+/*
+ * The page fault handler entry function, up to which the stack trace is
+ * truncated in reports.
+ */
+#define KFENCE_SKIP_ARCH_FAULT_HANDLER "asm_exc_page_fault"
+
+/* Force 4K pages for __kfence_pool. */
+static inline bool arch_kfence_init_pool(void)
+{
+	unsigned long addr;
+
+	for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
+	     addr += PAGE_SIZE) {
+		unsigned int level;
+
+		if (!lookup_address(addr, &level))
+			return false;
+
+		if (level != PG_LEVEL_4K)
+			set_memory_4k(addr, 1);
+	}
+
+	return true;
+}
+
+/* Protect the given page and flush TLB. */
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+	unsigned int level;
+	pte_t *pte = lookup_address(addr, &level);
+
+	if (WARN_ON(!pte || level != PG_LEVEL_4K))
+		return false;
+
+	/*
+	 * We need to avoid IPIs, as we may get KFENCE allocations or faults
+	 * with interrupts disabled. Therefore, the below is best-effort, and
+	 * does not flush TLBs on all CPUs. We can tolerate some inaccuracy;
+	 * lazy fault handling takes care of faults after the page is PRESENT.
+	 */
+
+	if (protect)
+		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
+	else
+		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
+
+	/* Flush this CPU's TLB. */
+	flush_tlb_one_kernel(addr);
+	return true;
+}
+
+#endif /* _ASM_X86_KFENCE_H */
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 82bf37a5c9ec..380638745f42 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -9,6 +9,7 @@
 #include <linux/kdebug.h>		/* oops_begin/end, ...		*/
 #include <linux/extable.h>		/* search_exception_tables	*/
 #include <linux/memblock.h>		/* max_low_pfn			*/
+#include <linux/kfence.h>		/* kfence_handle_page_fault	*/
 #include <linux/kprobes.h>		/* NOKPROBE_SYMBOL, ...		*/
 #include <linux/mmiotrace.h>		/* kmmio_handler, ...		*/
 #include <linux/perf_event.h>		/* perf_sw_event		*/
@@ -725,6 +726,9 @@ no_context(struct pt_regs *regs, unsigned long error_code,
 	if (IS_ENABLED(CONFIG_EFI))
 		efi_recover_from_page_fault(address);

+	if (kfence_handle_page_fault(address))
+		return;
+
 oops:
 	/*
 	 * Oops. The kernel tried to access some bad page. We'll have to