From patchwork Thu Jan 19 21:22:55 2023
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 13108795
From: Rick Edgecombe
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org,
 linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann,
 Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
 Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H. J. Lu", Jann Horn,
 Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov,
 Pavel Machek, Peter Zijlstra, Randy Dunlap, Weijiang Yang,
 "Kirill A. Shutemov", John Allen, kcc@google.com, eranian@google.com,
 rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com,
 akpm@linux-foundation.org, Andrew.Cooper3@citrix.com,
 christina.schimpe@intel.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu
Subject: [PATCH v5 17/39] x86/mm: Update maybe_mkwrite() for shadow stack
Date: Thu, 19 Jan 2023 13:22:55 -0800
Message-Id: <20230119212317.8324-18-rick.p.edgecombe@intel.com>
In-Reply-To: <20230119212317.8324-1-rick.p.edgecombe@intel.com>
References: <20230119212317.8324-1-rick.p.edgecombe@intel.com>

From: Yu-cheng Yu

When serving a page fault, maybe_mkwrite() makes a PTE writable if the
fault was a write access and the vma has VM_WRITE. Shadow stack accesses
to shadow stack vmas are also treated as write accesses by the fault
handler. This is because shadow stack memory is writable via some
instructions, so COW has to happen even for shadow stack reads.
So maybe_mkwrite() should continue to set VM_WRITE vmas as normally
writable, but also set VM_WRITE|VM_SHADOW_STACK vmas as shadow stack. Do
this by adding a pte_mkwrite_shstk() and a cross-arch stub. Check for
VM_SHADOW_STACK in maybe_mkwrite() and call pte_mkwrite_shstk()
accordingly.

Apply the same changes to maybe_pmd_mkwrite().

Reviewed-by: Kees Cook
Tested-by: Pengfei Xu
Tested-by: John Allen
Signed-off-by: Yu-cheng Yu
Co-developed-by: Rick Edgecombe
Signed-off-by: Rick Edgecombe
Cc: Kees Cook
---
v3:
 - Remove unneeded define for maybe_mkwrite (Peterz)
 - Switch to cleaner version of maybe_mkwrite() (Peterz)

v2:
 - Change to handle shadow stacks that are VM_WRITE|VM_SHADOW_STACK
 - Ditch arch specific maybe_mkwrite(), and make the code generic
 - Move do_anonymous_page() to next patch (Kirill)

Yu-cheng v29:
 - Remove likely()'s.

 arch/x86/include/asm/pgtable.h |  2 ++
 include/linux/mm.h             | 13 ++++++++++---
 include/linux/pgtable.h        | 14 ++++++++++++++
 mm/huge_memory.c               | 10 +++++++---
 4 files changed, 33 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index e96558abc8ec..45b1a8f058fe 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -445,6 +445,7 @@ static inline pte_t pte_mkdirty(pte_t pte)
 	return __pte_mkdirty(pte, true);
 }
 
+#define pte_mkwrite_shstk pte_mkwrite_shstk
 static inline pte_t pte_mkwrite_shstk(pte_t pte)
 {
 	/* pte_clear_cow() also sets Dirty=1 */
@@ -589,6 +590,7 @@ static inline pmd_t pmd_mkdirty(pmd_t pmd)
 	return __pmd_mkdirty(pmd, true);
 }
 
+#define pmd_mkwrite_shstk pmd_mkwrite_shstk
 static inline pmd_t pmd_mkwrite_shstk(pmd_t pmd)
 {
 	return pmd_clear_cow(pmd);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 824e730b21af..e15d2fc04007 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1106,12 +1106,19 @@ void free_compound_page(struct page *page);
  * servicing faults for write access. In the normal case, do always want
  * pte_mkwrite. But get_user_pages can cause write faults for mappings
  * that do not have writing enabled, when used by access_process_vm.
+ *
+ * If a vma is shadow stack (a type of writable memory), mark the pte shadow
+ * stack.
  */
 static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 {
-	if (likely(vma->vm_flags & VM_WRITE))
-		pte = pte_mkwrite(pte);
-	return pte;
+	if (!(vma->vm_flags & VM_WRITE))
+		return pte;
+
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		return pte_mkwrite_shstk(pte);
+
+	return pte_mkwrite(pte);
 }
 
 vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 1159b25b0542..14a820a45a37 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -532,6 +532,20 @@ static inline pte_t pte_sw_mkyoung(pte_t pte)
 #define pte_sw_mkyoung pte_sw_mkyoung
 #endif
 
+#ifndef pte_mkwrite_shstk
+static inline pte_t pte_mkwrite_shstk(pte_t pte)
+{
+	return pte;
+}
+#endif
+
+#ifndef pmd_mkwrite_shstk
+static inline pmd_t pmd_mkwrite_shstk(pmd_t pmd)
+{
+	return pmd;
+}
+#endif
+
 #ifndef __HAVE_ARCH_PMDP_SET_WRPROTECT
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index abe6cfd92ffa..fbb8beb9265e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -553,9 +553,13 @@ __setup("transparent_hugepage=", setup_transparent_hugepage);
 
 pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 {
-	if (likely(vma->vm_flags & VM_WRITE))
-		pmd = pmd_mkwrite(pmd);
-	return pmd;
+	if (!(vma->vm_flags & VM_WRITE))
+		return pmd;
+
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		return pmd_mkwrite_shstk(pmd);
+
+	return pmd_mkwrite(pmd);
 }
 
 #ifdef CONFIG_MEMCG