From patchwork Sat Dec 3 00:35:46 2022
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 13063360
X-IronPort-AV:
From: Rick Edgecombe
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org,
 linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann,
 Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
 Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H. J. Lu", Jann Horn,
 Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov,
 Pavel Machek, Peter Zijlstra, Randy Dunlap, Weijiang Yang,
 "Kirill A. Shutemov", John Allen, kcc@google.com, eranian@google.com,
 rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com,
 akpm@linux-foundation.org, Andrew.Cooper3@citrix.com,
 christina.schimpe@intel.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu
Subject: [PATCH v4 19/39] mm: Add guard pages around a shadow stack.
Date: Fri, 2 Dec 2022 16:35:46 -0800
Message-Id: <20221203003606.6838-20-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20221203003606.6838-1-rick.p.edgecombe@intel.com>
References: <20221203003606.6838-1-rick.p.edgecombe@intel.com>

From: Yu-cheng Yu

The x86 Control-flow Enforcement Technology (CET) feature includes a new
type of memory called shadow stack. This shadow stack memory has some
unusual properties, which require some core mm changes to function
properly.

The architecture of shadow stack constrains the ability of userspace to
move the shadow stack pointer (SSP) in order to prevent corrupting or
switching to other shadow stacks. The RSTORSSP instruction can move the
SSP to a different shadow stack, but it requires a specially placed token
in order to do this. However, the architecture does not prevent
incrementing the stack pointer to wander onto an adjacent shadow stack.
To prevent this in software, enforce guard pages at the beginning of
shadow stack vmas, such that there will always be a gap between adjacent
shadow stacks. Make the gap big enough so that no userspace SSP-changing
operations (besides RSTORSSP) can move the SSP from one stack to the
next.
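As a minimal sanity check of this sizing (a sketch, assuming the
255 * 8 = 2040-byte INCSSPQ limit described below; the constant names
are invented for illustration and are not kernel symbols):

	#include <assert.h>

	/* Illustrative values only: the largest single-instruction SSP
	 * move (INCSSPQ popping 255 8-byte entries) vs. one guard page. */
	#define MAX_INCSSPQ_MOVE	(255 * 8)	/* 2040 bytes */
	#define GUARD_GAP		4096		/* one 4 KB page */

	int main(void)
	{
		/* An SSP move must read inside the guard page before it
		 * could cross it, so the move faults instead. */
		assert(MAX_INCSSPQ_MOVE < GUARD_GAP);
		return 0;
	}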
The SSP can be incremented or decremented by CALL, RET, and INCSSP. CALL
and RET can move the SSP by a maximum of 8 bytes, at which point the
shadow stack would be accessed.

The INCSSP instruction can also increment the shadow stack pointer. It
is the shadow stack analog of an instruction like:

	addq	$0x80, %rsp

However, there is one important difference between an ADD on %rsp and
INCSSP. In addition to modifying SSP, INCSSP also reads from the memory
of the first and last elements that were "popped". It can be thought of
as acting like this:

	READ_ONCE(ssp);       // read+discard top element on stack
	ssp += nr_to_pop * 8; // move the shadow stack
	READ_ONCE(ssp-8);     // read+discard last popped stack element

The maximum distance INCSSP can move the SSP is 2040 bytes, before it
would read the memory. Therefore, a single-page gap is enough to prevent
any operation from shifting the SSP to an adjacent stack, since it would
have to land in the gap at least once, causing a fault.

This could be accomplished by using VM_GROWSDOWN, but this has a
downside. That behavior would allow shadow stacks to grow, which is
unneeded and introduces a strange difference from how most regular
stacks work.

Tested-by: Pengfei Xu
Tested-by: John Allen
Reviewed-by: Kees Cook
Signed-off-by: Yu-cheng Yu
Co-developed-by: Rick Edgecombe
Signed-off-by: Rick Edgecombe
Cc: Kees Cook
---
v4:
 - Drop references to 32 bit instructions
 - Switch to generic code to drop __weak (Peterz)

v2:
 - Use __weak instead of #ifdef (Dave Hansen)
 - Only have start gap on shadow stack (Andy Luto)
 - Create stack_guard_start_gap() to not duplicate code
   in an arch version of vm_start_gap() (Dave Hansen)
 - Improve commit log, partly with verbiage from Dave Hansen

Yu-cheng v25:
 - Move SHADOW_STACK_GUARD_GAP to arch/x86/mm/mmap.c.

Yu-cheng v24:
 - Instead of changing vm_*_gap(), create x86-specific versions.

 include/linux/mm.h | 31 ++++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f10797a1b236..e0991d2fc5a8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2821,15 +2821,36 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
 	return mtree_load(&mm->mm_mt, addr);
 }
 
+static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_GROWSDOWN)
+		return stack_guard_gap;
+
+	/*
+	 * Shadow stack pointer is moved by CALL, RET, and INCSSPQ.
+	 * INCSSPQ moves shadow stack pointer up to 255 * 8 = ~2 KB
+	 * and touches the first and the last element in the range, which
+	 * triggers a page fault if the range is not in a shadow stack.
+	 * Because of this, creating 4-KB guard pages around a shadow
+	 * stack prevents these instructions from going beyond.
+	 *
+	 * Creation of VM_SHADOW_STACK is tightly controlled, so a vma
+	 * can't be both VM_GROWSDOWN and VM_SHADOW_STACK
+	 */
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		return PAGE_SIZE;
+
+	return 0;
+}
+
 static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 {
+	unsigned long gap = stack_guard_start_gap(vma);
 	unsigned long vm_start = vma->vm_start;
 
-	if (vma->vm_flags & VM_GROWSDOWN) {
-		vm_start -= stack_guard_gap;
-		if (vm_start > vma->vm_start)
-			vm_start = 0;
-	}
+	vm_start -= gap;
+	if (vm_start > vma->vm_start)
+		vm_start = 0;
 	return vm_start;
 }
 
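For experimentation outside the kernel, below is a minimal standalone
sketch of the same logic; the flag bits, the stack_guard_gap value, and
the pared-down vm_area_struct are stand-ins for this sketch only, not
the kernel's definitions:

	#include <stdio.h>

	#define PAGE_SIZE	4096UL
	#define VM_GROWSDOWN	0x0100UL	/* illustrative bit */
	#define VM_SHADOW_STACK	0x0200UL	/* illustrative bit */

	static unsigned long stack_guard_gap = 256UL * PAGE_SIZE;

	struct vm_area_struct {
		unsigned long vm_start;
		unsigned long vm_flags;
	};

	static unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
	{
		if (vma->vm_flags & VM_GROWSDOWN)
			return stack_guard_gap;
		if (vma->vm_flags & VM_SHADOW_STACK)
			return PAGE_SIZE;	/* one page > 2040-byte INCSSPQ max */
		return 0;
	}

	static unsigned long vm_start_gap(struct vm_area_struct *vma)
	{
		unsigned long gap = stack_guard_start_gap(vma);
		unsigned long vm_start = vma->vm_start - gap;

		if (vm_start > vma->vm_start)	/* clamp on underflow */
			vm_start = 0;
		return vm_start;
	}

	int main(void)
	{
		struct vm_area_struct shstk = { 0x7f0000001000UL, VM_SHADOW_STACK };
		struct vm_area_struct stack = { 0x7ffff0000000UL, VM_GROWSDOWN };

		printf("shadow stack guarded start: %#lx\n", vm_start_gap(&shstk));
		printf("normal stack guarded start: %#lx\n", vm_start_gap(&stack));
		return 0;
	}

Note how a shadow stack vma only reserves a single leading guard page,
while a VM_GROWSDOWN stack keeps the much larger stack_guard_gap.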