From patchwork Sat Feb 18 21:14:13 2023
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 13145660
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
	Balbir Singh, Borislav Petkov, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, "H . J . Lu", Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Weijiang Yang, "Kirill A . Shutemov", John Allen, kcc@google.com,
	eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com,
	dethoma@microsoft.com, akpm@linux-foundation.org,
	Andrew.Cooper3@citrix.com, christina.schimpe@intel.com,
	david@redhat.com, debug@rivosinc.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu
Subject: [PATCH v6 21/41] mm: Add guard pages around a shadow stack.
Date: Sat, 18 Feb 2023 13:14:13 -0800
Message-Id: <20230218211433.26859-22-rick.p.edgecombe@intel.com>
In-Reply-To: <20230218211433.26859-1-rick.p.edgecombe@intel.com>
References: <20230218211433.26859-1-rick.p.edgecombe@intel.com>

From: Yu-cheng Yu

The x86 Control-flow Enforcement Technology (CET) feature includes a new
type of memory called shadow stack.
This shadow stack memory has some unusual properties, which require some
core mm changes to function properly.

The architecture of shadow stack constrains the ability of userspace to
move the shadow stack pointer (SSP) in order to prevent corrupting or
switching to other shadow stacks. The RSTORSSP instruction can move the
SSP to a different shadow stack, but it requires a specially placed
token in order to do this. However, the architecture does not prevent
incrementing the stack pointer to wander onto an adjacent shadow stack.
To prevent this in software, enforce guard pages at the beginning of
shadow stack VMAs, such that there will always be a gap between adjacent
shadow stacks. Make the gap big enough so that no userspace SSP-changing
operation (besides RSTORSSP) can move the SSP from one stack to the
next.

The SSP can be incremented or decremented by CALL, RET and INCSSP. CALL
and RET can move the SSP by a maximum of 8 bytes, at which point the
shadow stack would be accessed.

The INCSSP instruction can also increment the shadow stack pointer. It
is the shadow stack analog of an instruction like:

	addq $0x80, %rsp

However, there is one important difference between an ADD on %rsp and
INCSSP. In addition to modifying SSP, INCSSP also reads from the memory
of the first and last elements that were "popped". It can be thought of
as acting like this:

	READ_ONCE(ssp);       // read+discard top element on stack
	ssp += nr_to_pop * 8; // move the shadow stack
	READ_ONCE(ssp-8);     // read+discard last popped stack element

The maximum distance INCSSP can move the SSP is 2040 bytes, before it
would read the memory. Therefore a single page gap will be enough to
prevent any operation from shifting the SSP to an adjacent stack, since
it would have to land in the gap at least once, causing a fault.

This could be accomplished by using VM_GROWSDOWN, but this has a
downside. The behavior would allow shadow stacks to grow, which is
unneeded and adds a strange difference to how most regular stacks work.
Tested-by: Pengfei Xu
Tested-by: John Allen
Reviewed-by: Kees Cook
Signed-off-by: Yu-cheng Yu
Co-developed-by: Rick Edgecombe
Signed-off-by: Rick Edgecombe
Cc: Kees Cook
---
v5:
 - Fix typo in commit log

v4:
 - Drop references to 32 bit instructions
 - Switch to generic code to drop __weak (Peterz)

v2:
 - Use __weak instead of #ifdef (Dave Hansen)
 - Only have start gap on shadow stack (Andy Luto)
 - Create stack_guard_start_gap() to not duplicate code in an arch
   version of vm_start_gap() (Dave Hansen)
 - Improve commit log partly with verbiage from (Dave Hansen)

Yu-cheng v25:
 - Move SHADOW_STACK_GUARD_GAP to arch/x86/mm/mmap.c.
---
 include/linux/mm.h | 31 ++++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 76e0a09aeffe..a41577c5bf3e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2980,15 +2980,36 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
 	return mtree_load(&mm->mm_mt, addr);
 }
 
+static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_GROWSDOWN)
+		return stack_guard_gap;
+
+	/*
+	 * Shadow stack pointer is moved by CALL, RET, and INCSSPQ.
+	 * INCSSPQ moves shadow stack pointer up to 255 * 8 = ~2 KB
+	 * and touches the first and the last element in the range, which
+	 * triggers a page fault if the range is not in a shadow stack.
+	 * Because of this, creating 4-KB guard pages around a shadow
+	 * stack prevents these instructions from going beyond.
+	 *
+	 * Creation of VM_SHADOW_STACK is tightly controlled, so a vma
+	 * can't be both VM_GROWSDOWN and VM_SHADOW_STACK
+	 */
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		return PAGE_SIZE;
+
+	return 0;
+}
+
 static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 {
+	unsigned long gap = stack_guard_start_gap(vma);
 	unsigned long vm_start = vma->vm_start;
 
-	if (vma->vm_flags & VM_GROWSDOWN) {
-		vm_start -= stack_guard_gap;
-		if (vm_start > vma->vm_start)
-			vm_start = 0;
-	}
+	vm_start -= gap;
+	if (vm_start > vma->vm_start)
+		vm_start = 0;
 	return vm_start;
 }