From patchwork Mon Feb 27 22:29:45 2023
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 13154260
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
    Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
    Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
    "H . J . Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
    Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
    Weijiang Yang, "Kirill A . Shutemov", John Allen, kcc@google.com,
    eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com,
    dethoma@microsoft.com, akpm@linux-foundation.org,
    Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com,
    debug@rivosinc.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu
Subject: [PATCH v7 29/41] x86/shstk: Add user-mode shadow stack support
Date: Mon, 27 Feb 2023 14:29:45 -0800
Message-Id: <20230227222957.24501-30-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230227222957.24501-1-rick.p.edgecombe@intel.com>
References: <20230227222957.24501-1-rick.p.edgecombe@intel.com>

From: Yu-cheng Yu

Introduce basic shadow stack enabling/disabling/allocation routines.
A task's shadow stack is allocated from memory with the VM_SHADOW_STACK
flag and has a fixed size of min(RLIMIT_STACK, 4GB). Keep the task's
shadow stack address and size in thread_struct. This will be copied when
cloning new threads, but needs to be cleared during exec, so add a
function to do this.

32-bit shadow stack is not expected to have many users, and it would
complicate the signal implementation, so do not support IA32 emulation
or x32.
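
As an illustration of the user ABI (not part of the diff below), a minimal
x86-64 program could exercise the enable/disable prctls roughly as in the
sketch below. The ARCH_SHSTK_* values mirror
arch/x86/include/uapi/asm/prctl.h from this series (ARCH_SHSTK_ENABLE comes
from an earlier patch in the series), and the raw_arch_prctl() helper macro
exists only for this example: the syscall is issued with inline asm so that
no call frame created before enabling has to return while the new, empty
shadow stack is active.

/*
 * Illustrative only: enable, use, and disable a userspace shadow stack
 * through arch_prctl(). Needs a kernel with CONFIG_X86_USER_SHADOW_STACK
 * and shadow stack capable hardware; otherwise the enable fails with
 * -EOPNOTSUPP (seen as a negative return value from the raw syscall).
 */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Fallbacks for older installed headers; values match this series. */
#ifndef ARCH_SHSTK_ENABLE
#define ARCH_SHSTK_ENABLE	0x5001
#define ARCH_SHSTK_DISABLE	0x5002
#define ARCH_SHSTK_SHSTK	(1ULL << 0)
#endif

/*
 * Example-only helper: issue arch_prctl() with inline asm in the caller's
 * frame. Going through syscall(2) would mean returning from a function
 * whose call predates the enable, which would pop the empty shadow stack
 * and fault. Real users (e.g. ld.so) enable shadow stack very early.
 */
#define raw_arch_prctl(option, arg, ret)				\
	asm volatile("syscall"						\
		     : "=a" (ret)					\
		     : "0" (SYS_arch_prctl), "D" ((long)(option)),	\
		       "S" ((unsigned long)(arg))			\
		     : "rcx", "r11", "memory")

int main(void)
{
	long ret;

	raw_arch_prctl(ARCH_SHSTK_ENABLE, ARCH_SHSTK_SHSTK, ret);
	if (ret) {
		fprintf(stderr, "shadow stack enable failed: %ld\n", ret);
		return 1;
	}

	/* Calls made from here on are tracked by the new shadow stack. */
	puts("shadow stack enabled");

	/* Disable again before main() returns through its pre-enable frame. */
	raw_arch_prctl(ARCH_SHSTK_DISABLE, ARCH_SHSTK_SHSTK, ret);

	return ret ? 1 : 0;
}
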
Tested-by: Pengfei Xu
Tested-by: John Allen
Tested-by: Kees Cook
Acked-by: Mike Rapoport (IBM)
Reviewed-by: Kees Cook
Signed-off-by: Yu-cheng Yu
Co-developed-by: Rick Edgecombe
Signed-off-by: Rick Edgecombe
Cc: Kees Cook
---
v7:
 - Add explanation for not supporting 32 bit in commit log (Boris)

v5:
 - Switch to EOPNOTSUPP
 - Use MAP_ABOVE4G
 - Move set_clr_bits_msrl() to patch where it is first used

v4:
 - Just set MSR_IA32_U_CET when disabling shadow stack, since we don't
   have IBT yet. (Peterz)

v3:
 - Use define for set_clr_bits_msrl() (Kees)
 - Make some functions static (Kees)
 - Change feature_foo() to features_foo() (Kees)
 - Centralize shadow stack size rlimit checks (Kees)
 - Disable x32 support

v2:
 - Get rid of unnecessary shstk->base checks
 - Don't support IA32 emulation
---
 arch/x86/include/asm/processor.h  |   2 +
 arch/x86/include/asm/shstk.h      |   7 ++
 arch/x86/include/uapi/asm/prctl.h |   3 +
 arch/x86/kernel/shstk.c           | 145 ++++++++++++++++++++++++++++++
 4 files changed, 157 insertions(+)

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index bd16e012b3e9..ff98cd6d5af2 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -479,6 +479,8 @@ struct thread_struct {
 #ifdef CONFIG_X86_USER_SHADOW_STACK
 	unsigned long		features;
 	unsigned long		features_locked;
+
+	struct thread_shstk	shstk;
 #endif
 
 	/* Floating point and extended processor state */
diff --git a/arch/x86/include/asm/shstk.h b/arch/x86/include/asm/shstk.h
index ec753809f074..2b1f7c9b9995 100644
--- a/arch/x86/include/asm/shstk.h
+++ b/arch/x86/include/asm/shstk.h
@@ -8,12 +8,19 @@ struct task_struct;
 
 #ifdef CONFIG_X86_USER_SHADOW_STACK
+struct thread_shstk {
+	u64	base;
+	u64	size;
+};
+
 long shstk_prctl(struct task_struct *task, int option, unsigned long features);
 void reset_thread_features(void);
+void shstk_free(struct task_struct *p);
 #else
 static inline long shstk_prctl(struct task_struct *task, int option,
 			       unsigned long arg2) { return -EINVAL; }
 static inline void reset_thread_features(void) {}
+static inline void shstk_free(struct task_struct *p) {}
 #endif /* CONFIG_X86_USER_SHADOW_STACK */
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/x86/include/uapi/asm/prctl.h b/arch/x86/include/uapi/asm/prctl.h
index b2b3b7200b2d..7dfd9dc00509 100644
--- a/arch/x86/include/uapi/asm/prctl.h
+++ b/arch/x86/include/uapi/asm/prctl.h
@@ -26,4 +26,7 @@
 #define ARCH_SHSTK_DISABLE	0x5002
 #define ARCH_SHSTK_LOCK		0x5003
 
+/* ARCH_SHSTK_ features bits */
+#define ARCH_SHSTK_SHSTK	(1ULL << 0)
+
 #endif /* _ASM_X86_PRCTL_H */
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index 41ed6552e0a5..3cb85224d856 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -8,14 +8,159 @@
 
 #include <linux/sched.h>
 #include <linux/bitops.h>
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/sched/signal.h>
+#include <linux/compat.h>
+#include <linux/sizes.h>
+#include <linux/user.h>
+#include <asm/msr.h>
+#include <asm/fpu/xstate.h>
+#include <asm/fpu/types.h>
+#include <asm/shstk.h>
+#include <asm/special_insns.h>
+#include <asm/fpu/api.h>
 #include <asm/prctl.h>
 
+static bool features_enabled(unsigned long features)
+{
+	return current->thread.features & features;
+}
+
+static void features_set(unsigned long features)
+{
+	current->thread.features |= features;
+}
+
+static void features_clr(unsigned long features)
+{
+	current->thread.features &= ~features;
+}
+
+static unsigned long alloc_shstk(unsigned long size)
+{
+	int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_ABOVE4G;
+	struct mm_struct *mm = current->mm;
+	unsigned long addr, unused;
+
+	mmap_write_lock(mm);
+	addr = do_mmap(NULL, addr, size, PROT_READ, flags,
+		       VM_SHADOW_STACK | VM_WRITE, 0, &unused, NULL);
+
+	mmap_write_unlock(mm);
+
+	return addr;
+}
+
+static unsigned long adjust_shstk_size(unsigned long size)
+{
+	if (size)
+		return PAGE_ALIGN(size);
+
+	return PAGE_ALIGN(min_t(unsigned long long, rlimit(RLIMIT_STACK), SZ_4G));
+}
+
+static void unmap_shadow_stack(u64 base, u64 size)
+{
+	while (1) {
+		int r;
+
+		r = vm_munmap(base, size);
+
+		/*
+		 * vm_munmap() returns -EINTR when mmap_lock is held by
+		 * something else, and that lock should not be held for a
+		 * long time. Retry it for the case.
+		 */
+		if (r == -EINTR) {
+			cond_resched();
+			continue;
+		}
+
+		/*
+		 * For all other types of vm_munmap() failure, either the
+		 * system is out of memory or there is a bug.
+		 */
+		WARN_ON_ONCE(r);
+		break;
+	}
+}
+
+static int shstk_setup(void)
+{
+	struct thread_shstk *shstk = &current->thread.shstk;
+	unsigned long addr, size;
+
+	/* Already enabled */
+	if (features_enabled(ARCH_SHSTK_SHSTK))
+		return 0;
+
+	/* Also not supported for 32 bit and x32 */
+	if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK) || in_32bit_syscall())
+		return -EOPNOTSUPP;
+
+	size = adjust_shstk_size(0);
+	addr = alloc_shstk(size);
+	if (IS_ERR_VALUE(addr))
+		return PTR_ERR((void *)addr);
+
+	fpregs_lock_and_load();
+	wrmsrl(MSR_IA32_PL3_SSP, addr + size);
+	wrmsrl(MSR_IA32_U_CET, CET_SHSTK_EN);
+	fpregs_unlock();
+
+	shstk->base = addr;
+	shstk->size = size;
+	features_set(ARCH_SHSTK_SHSTK);
+
+	return 0;
+}
+
 void reset_thread_features(void)
 {
+	memset(&current->thread.shstk, 0, sizeof(struct thread_shstk));
 	current->thread.features = 0;
 	current->thread.features_locked = 0;
 }
 
+void shstk_free(struct task_struct *tsk)
+{
+	struct thread_shstk *shstk = &tsk->thread.shstk;
+
+	if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK) ||
+	    !features_enabled(ARCH_SHSTK_SHSTK))
+		return;
+
+	if (!tsk->mm)
+		return;
+
+	unmap_shadow_stack(shstk->base, shstk->size);
+}
+
+static int shstk_disable(void)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK))
+		return -EOPNOTSUPP;
+
+	/* Already disabled? */
+	if (!features_enabled(ARCH_SHSTK_SHSTK))
+		return 0;
+
+	fpregs_lock_and_load();
+	/* Disable WRSS too when disabling shadow stack */
+	wrmsrl(MSR_IA32_U_CET, 0);
+	wrmsrl(MSR_IA32_PL3_SSP, 0);
+	fpregs_unlock();
+
+	shstk_free(current);
+	features_clr(ARCH_SHSTK_SHSTK);
+
+	return 0;
+}
+
 long shstk_prctl(struct task_struct *task, int option, unsigned long features)
 {
 	if (option == ARCH_SHSTK_LOCK) {