From patchwork Sun Mar 19 00:15:16 2023
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 13180149
From: Rick Edgecombe
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org,
 linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann,
 Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
 Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn,
 Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov,
 Pavel Machek, Peter Zijlstra, Randy Dunlap, Weijiang Yang,
 "Kirill A. Shutemov", John Allen, kcc@google.com, eranian@google.com,
 rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com,
 akpm@linux-foundation.org, Andrew.Cooper3@citrix.com,
 christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com,
 szabolcs.nagy@arm.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu
Subject: [PATCH v8 21/40] mm: Add guard pages around a shadow stack.
Date: Sat, 18 Mar 2023 17:15:16 -0700
Message-Id: <20230319001535.23210-22-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20230319001535.23210-1-rick.p.edgecombe@intel.com>
References: <20230319001535.23210-1-rick.p.edgecombe@intel.com>

The x86 Control-flow Enforcement Technology (CET) feature includes
a new type of memory called shadow stack. This shadow stack memory
has some unusual properties, which require some core mm changes to
function properly.

The architecture of shadow stack constrains the ability of userspace
to move the shadow stack pointer (SSP) in order to prevent corrupting
or switching to other shadow stacks. The RSTORSSP instruction can
move the SSP to different shadow stacks, but it requires a specially
placed token in order to do this. However, the architecture does not
prevent incrementing the stack pointer to wander onto an adjacent
shadow stack. To prevent this in software, enforce guard pages at the
beginning of shadow stack VMAs, such that there will always be a gap
between adjacent shadow stacks.

Make the gap big enough so that no userspace SSP changing operations
(besides RSTORSSP) can move the SSP from one stack to the next. The
SSP can be incremented or decremented by CALL, RET and INCSSP. CALL
and RET can move the SSP by a maximum of 8 bytes, at which point the
shadow stack would be accessed.

The INCSSP instruction can also increment the shadow stack pointer.
It is the shadow stack analog of an instruction like:

	addq	$0x80, %rsp

However, there is one important difference between an ADD on %rsp
and INCSSP. In addition to modifying SSP, INCSSP also reads from the
memory of the first and last elements that were "popped". It can be
thought of as acting like this:

	READ_ONCE(ssp);       // read+discard top element on stack
	ssp += nr_to_pop * 8; // move the shadow stack
	READ_ONCE(ssp-8);     // read+discard last popped stack element

The maximum distance INCSSP can move the SSP is 2040 bytes, before
it would read the memory. Therefore, a single page gap will be enough
to prevent any operation from shifting the SSP to an adjacent stack,
since it would have to land in the gap at least once, causing a
fault.

This could be accomplished by using VM_GROWSDOWN, but this has a
downside.
The behavior would allow shadow stacks to grow, which is unneeded and
adds a strange difference to how most regular stacks work.

Co-developed-by: Yu-cheng Yu
Signed-off-by: Yu-cheng Yu
Signed-off-by: Rick Edgecombe
Reviewed-by: Kees Cook
Acked-by: Mike Rapoport (IBM)
Tested-by: Pengfei Xu
Tested-by: John Allen
Tested-by: Kees Cook
---
v8:
 - Update commit log verbiage (Boris)
 - Move and update comment (Boris, David Hildenbrand)

v5:
 - Fix typo in commit log

v4:
 - Drop references to 32 bit instructions
 - Switch to generic code to drop __weak (Peterz)

v2:
 - Use __weak instead of #ifdef (Dave Hansen)
 - Only have start gap on shadow stack (Andy Luto)
 - Create stack_guard_start_gap() to not duplicate code in an arch
   version of vm_start_gap() (Dave Hansen)
 - Improve commit log partly with verbiage from (Dave Hansen)
---
 include/linux/mm.h | 52 ++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 46 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 097544afb1aa..d09fbe9f43f8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -349,7 +349,36 @@ extern unsigned int kobjsize(const void *objp);
 #endif /* CONFIG_ARCH_HAS_PKEYS */
 
 #ifdef CONFIG_X86_USER_SHADOW_STACK
-# define VM_SHADOW_STACK	VM_HIGH_ARCH_5 /* Should not be set with VM_SHARED */
+/*
+ * This flag should not be set with VM_SHARED because of lack of support
+ * in core mm. It will also get a guard page. This helps userspace protect
+ * itself from attacks. The reasoning is as follows:
+ *
+ * The shadow stack pointer (SSP) is moved by CALL, RET, and INCSSPQ. The
+ * INCSSP instruction can increment the shadow stack pointer. It is the
+ * shadow stack analog of an instruction like:
+ *
+ *	addq $0x80, %rsp
+ *
+ * However, there is one important difference between an ADD on %rsp
+ * and INCSSP. In addition to modifying SSP, INCSSP also reads from the
+ * memory of the first and last elements that were "popped". It can be
+ * thought of as acting like this:
+ *
+ * READ_ONCE(ssp);       // read+discard top element on stack
+ * ssp += nr_to_pop * 8; // move the shadow stack
+ * READ_ONCE(ssp-8);     // read+discard last popped stack element
+ *
+ * The maximum distance INCSSP can move the SSP is 2040 bytes, before
+ * it would read the memory. Therefore a single page gap will be enough
+ * to prevent any operation from shifting the SSP to an adjacent stack,
+ * since it would have to land in the gap at least once, causing a
+ * fault.
+ *
+ * Prevent using INCSSP to move the SSP between shadow stacks by
+ * having a PAGE_SIZE guard gap.
+ */
+# define VM_SHADOW_STACK	VM_HIGH_ARCH_5
 #else
 # define VM_SHADOW_STACK	VM_NONE
 #endif
@@ -3107,15 +3136,26 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
 	return mtree_load(&mm->mm_mt, addr);
 }
 
+static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_GROWSDOWN)
+		return stack_guard_gap;
+
+	/* See reasoning around the VM_SHADOW_STACK definition */
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		return PAGE_SIZE;
+
+	return 0;
+}
+
 static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 {
+	unsigned long gap = stack_guard_start_gap(vma);
 	unsigned long vm_start = vma->vm_start;
 
-	if (vma->vm_flags & VM_GROWSDOWN) {
-		vm_start -= stack_guard_gap;
-		if (vm_start > vma->vm_start)
-			vm_start = 0;
-	}
+	vm_start -= gap;
+	if (vm_start > vma->vm_start)
+		vm_start = 0;
 	return vm_start;
 }