From patchwork Sat Feb 18 21:14:08 2023
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
 Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
 Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
 "H.J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
 Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
 Weijiang Yang, "Kirill A. Shutemov", John Allen, kcc@google.com,
 eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com,
 dethoma@microsoft.com, akpm@linux-foundation.org,
 Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com,
 debug@rivosinc.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu
Subject: [PATCH v6 16/41] x86/mm: Start actually marking _PAGE_SAVED_DIRTY
Date: Sat, 18 Feb 2023 13:14:08 -0800
Message-Id: <20230218211433.26859-17-rick.p.edgecombe@intel.com>
In-Reply-To: <20230218211433.26859-1-rick.p.edgecombe@intel.com>
References: <20230218211433.26859-1-rick.p.edgecombe@intel.com>

The recently introduced _PAGE_SAVED_DIRTY should be used instead of the
HW Dirty bit whenever a PTE is Write=0, in order to not inadvertently
create shadow stack PTEs. Update pte_mk*() helpers to do this, and apply
the same changes to pmd and pud.
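As an aside, here is a minimal userspace sketch (illustration only, not
part of the patch) of the bit logic described above. The X_PAGE_* values
are made-up stand-ins for the real _PAGE_* definitions:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define X_PAGE_RW          (1ULL << 1)  /* stand-in for _PAGE_RW */
#define X_PAGE_DIRTY       (1ULL << 6)  /* stand-in for _PAGE_DIRTY */
#define X_PAGE_SAVED_DIRTY (1ULL << 58) /* stand-in for _PAGE_SAVED_DIRTY */

/* Write=0,Dirty=1 is the encoding the CPU treats as shadow stack */
static bool is_shstk(uint64_t flags)
{
	return (flags & (X_PAGE_RW | X_PAGE_DIRTY)) == X_PAGE_DIRTY;
}

/* A dirty PTE that loses Write must carry its dirty state in the SW bit */
static uint64_t wrprotect(uint64_t flags)
{
	flags &= ~X_PAGE_RW;
	if (flags & X_PAGE_DIRTY) {
		flags &= ~X_PAGE_DIRTY;
		flags |= X_PAGE_SAVED_DIRTY;
	}
	return flags;
}

int main(void)
{
	uint64_t pte = X_PAGE_RW | X_PAGE_DIRTY; /* ordinary dirty, writable PTE */

	pte = wrprotect(pte);
	printf("shadow stack after wrprotect? %s\n", is_shstk(pte) ? "yes" : "no");
	printf("dirty preserved in SW bit?    %s\n",
	       (pte & X_PAGE_SAVED_DIRTY) ? "yes" : "no");
	return 0;
}

Without the fixup in wrprotect(), the first line would print "yes",
which is exactly the accidental shadow stack encoding the helpers below
are changed to avoid.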
For pte_modify() this is a bit trickier. It takes a "raw" pgprot_t which
was not necessarily created with any of the existing PTE bit helpers.
That means that it can return a pte_t with Write=0,Dirty=1, a shadow
stack PTE, when it did not intend to create one. Modify it to also move
_PAGE_DIRTY to _PAGE_SAVED_DIRTY.

To avoid creating Write=0,Dirty=1 PTEs, pte_modify() needs to avoid:
1. Marking Write=0 PTEs Dirty=1
2. Marking Dirty=1 PTEs Write=0

The first case cannot happen, as the existing behavior of pte_modify()
is to filter out any Dirty bit passed in newprot. Handle the second case
by shifting _PAGE_DIRTY=1 to _PAGE_SAVED_DIRTY=1 if the PTE was write
protected by the pte_modify() call. Apply the same changes to
pmd_modify().

Reviewed-by: Kees Cook
Tested-by: Pengfei Xu
Tested-by: John Allen
Co-developed-by: Yu-cheng Yu
Signed-off-by: Yu-cheng Yu
Signed-off-by: Rick Edgecombe
---
v6:
 - Rename _PAGE_COW to _PAGE_SAVED_DIRTY (David Hildenbrand)
 - Open code the _PAGE_SAVED_DIRTY part in pte_modify() (Boris)
 - Change the logic so the open coded part is not too ugly
 - Merge the pte_modify() patch with this one because of the above

v4:
 - Break apart patch for better bisectability
---
 arch/x86/include/asm/pgtable.h | 168 ++++++++++++++++++++++++++++-----
 1 file changed, 145 insertions(+), 23 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index e5f00c077039..292a3b75d7fa 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -124,9 +124,17 @@ extern pmdval_t early_pmd_flags;
  * The following only work if pte_present() is true.
  * Undefined behaviour if not..
  */
-static inline int pte_dirty(pte_t pte)
+static inline bool pte_dirty(pte_t pte)
 {
-	return pte_flags(pte) & _PAGE_DIRTY;
+	return pte_flags(pte) & _PAGE_DIRTY_BITS;
+}
+
+static inline bool pte_shstk(pte_t pte)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK))
+		return false;
+
+	return (pte_flags(pte) & (_PAGE_RW | _PAGE_DIRTY)) == _PAGE_DIRTY;
 }
 
 static inline int pte_young(pte_t pte)
@@ -134,9 +142,18 @@ static inline int pte_young(pte_t pte)
 	return pte_flags(pte) & _PAGE_ACCESSED;
 }
 
-static inline int pmd_dirty(pmd_t pmd)
+static inline bool pmd_dirty(pmd_t pmd)
 {
-	return pmd_flags(pmd) & _PAGE_DIRTY;
+	return pmd_flags(pmd) & _PAGE_DIRTY_BITS;
+}
+
+static inline bool pmd_shstk(pmd_t pmd)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK))
+		return false;
+
+	return (pmd_flags(pmd) & (_PAGE_RW | _PAGE_DIRTY | _PAGE_PSE)) ==
+	       (_PAGE_DIRTY | _PAGE_PSE);
 }
 
 #define pmd_young pmd_young
@@ -145,9 +162,9 @@ static inline int pmd_young(pmd_t pmd)
 	return pmd_flags(pmd) & _PAGE_ACCESSED;
 }
 
-static inline int pud_dirty(pud_t pud)
+static inline bool pud_dirty(pud_t pud)
 {
-	return pud_flags(pud) & _PAGE_DIRTY;
+	return pud_flags(pud) & _PAGE_DIRTY_BITS;
 }
 
 static inline int pud_young(pud_t pud)
@@ -157,13 +174,21 @@ static inline int pud_young(pud_t pud)
 
 static inline int pte_write(pte_t pte)
 {
-	return pte_flags(pte) & _PAGE_RW;
+	/*
+	 * Shadow stack pages are logically writable, but do not have
+	 * _PAGE_RW. Check for them separately from _PAGE_RW itself.
+	 */
+	return (pte_flags(pte) & _PAGE_RW) || pte_shstk(pte);
 }
 
 #define pmd_write pmd_write
 static inline int pmd_write(pmd_t pmd)
 {
-	return pmd_flags(pmd) & _PAGE_RW;
+	/*
+	 * Shadow stack pages are logically writable, but do not have
+	 * _PAGE_RW. Check for them separately from _PAGE_RW itself.
+	 */
+	return (pmd_flags(pmd) & _PAGE_RW) || pmd_shstk(pmd);
 }
 
 #define pud_write pud_write
@@ -375,7 +400,7 @@ static inline pte_t pte_clear_uffd_wp(pte_t pte)
 
 static inline pte_t pte_mkclean(pte_t pte)
 {
-	return pte_clear_flags(pte, _PAGE_DIRTY);
+	return pte_clear_flags(pte, _PAGE_DIRTY_BITS);
 }
 
 static inline pte_t pte_mkold(pte_t pte)
@@ -385,7 +410,16 @@ static inline pte_t pte_mkold(pte_t pte)
 
 static inline pte_t pte_wrprotect(pte_t pte)
 {
-	return pte_clear_flags(pte, _PAGE_RW);
+	pte = pte_clear_flags(pte, _PAGE_RW);
+
+	/*
+	 * Blindly clearing _PAGE_RW might accidentally create
+	 * a shadow stack PTE (Write=0,Dirty=1). Move the hardware
+	 * dirty value to the software bit.
+	 */
+	if (pte_dirty(pte))
+		pte = pte_mksaveddirty(pte);
+	return pte;
 }
 
 static inline pte_t pte_mkexec(pte_t pte)
@@ -395,7 +429,19 @@ static inline pte_t pte_mkexec(pte_t pte)
 
 static inline pte_t pte_mkdirty(pte_t pte)
 {
-	return pte_set_flags(pte, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
+	pteval_t dirty = _PAGE_DIRTY;
+
+	/* Avoid creating Dirty=1,Write=0 PTEs */
+	if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK) && !pte_write(pte))
+		dirty = _PAGE_SAVED_DIRTY;
+
+	return pte_set_flags(pte, dirty | _PAGE_SOFT_DIRTY);
+}
+
+static inline pte_t pte_mkwrite_shstk(pte_t pte)
+{
+	/* pte_clear_saveddirty() also sets Dirty=1 */
+	return pte_clear_saveddirty(pte);
 }
 
 static inline pte_t pte_mkyoung(pte_t pte)
@@ -412,7 +458,12 @@ struct vm_area_struct;
 
 static inline pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma)
 {
-	return pte_mkwrite_kernel(pte);
+	pte = pte_mkwrite_kernel(pte);
+
+	if (pte_dirty(pte))
+		pte = pte_clear_saveddirty(pte);
+
+	return pte;
 }
 
 static inline pte_t pte_mkhuge(pte_t pte)
@@ -503,17 +554,36 @@ static inline pmd_t pmd_mkold(pmd_t pmd)
 
 static inline pmd_t pmd_mkclean(pmd_t pmd)
 {
-	return pmd_clear_flags(pmd, _PAGE_DIRTY);
+	return pmd_clear_flags(pmd, _PAGE_DIRTY_BITS);
 }
 
 static inline pmd_t pmd_wrprotect(pmd_t pmd)
 {
-	return pmd_clear_flags(pmd, _PAGE_RW);
+	pmd = pmd_clear_flags(pmd, _PAGE_RW);
+	/*
+	 * Blindly clearing _PAGE_RW might accidentally create
+	 * a shadow stack PMD (RW=0, Dirty=1). Move the hardware
+	 * dirty value to the software bit.
+	 */
+	if (pmd_dirty(pmd))
+		pmd = pmd_mksaveddirty(pmd);
+	return pmd;
 }
 
 static inline pmd_t pmd_mkdirty(pmd_t pmd)
 {
-	return pmd_set_flags(pmd, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
+	pmdval_t dirty = _PAGE_DIRTY;
+
+	/* Avoid creating (HW)Dirty=1, Write=0 PMDs */
+	if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK) && !pmd_write(pmd))
+		dirty = _PAGE_SAVED_DIRTY;
+
+	return pmd_set_flags(pmd, dirty | _PAGE_SOFT_DIRTY);
+}
+
+static inline pmd_t pmd_mkwrite_shstk(pmd_t pmd)
+{
+	return pmd_clear_saveddirty(pmd);
 }
 
 static inline pmd_t pmd_mkdevmap(pmd_t pmd)
@@ -533,7 +603,12 @@ static inline pmd_t pmd_mkyoung(pmd_t pmd)
 
 static inline pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 {
-	return pmd_set_flags(pmd, _PAGE_RW);
+	pmd = pmd_set_flags(pmd, _PAGE_RW);
+
+	if (pmd_dirty(pmd))
+		pmd = pmd_clear_saveddirty(pmd);
+
+	return pmd;
 }
 
 static inline pud_t pud_set_flags(pud_t pud, pudval_t set)
@@ -577,17 +652,32 @@ static inline pud_t pud_mkold(pud_t pud)
 
 static inline pud_t pud_mkclean(pud_t pud)
 {
-	return pud_clear_flags(pud, _PAGE_DIRTY);
+	return pud_clear_flags(pud, _PAGE_DIRTY_BITS);
 }
 
 static inline pud_t pud_wrprotect(pud_t pud)
 {
-	return pud_clear_flags(pud, _PAGE_RW);
+	pud = pud_clear_flags(pud, _PAGE_RW);
+
+	/*
+	 * Blindly clearing _PAGE_RW might accidentally create
+	 * a shadow stack PUD (RW=0, Dirty=1). Move the hardware
+	 * dirty value to the software bit.
+	 */
+	if (pud_dirty(pud))
+		pud = pud_mksaveddirty(pud);
+	return pud;
 }
 
 static inline pud_t pud_mkdirty(pud_t pud)
 {
-	return pud_set_flags(pud, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
+	pudval_t dirty = _PAGE_DIRTY;
+
+	/* Avoid creating (HW)Dirty=1, Write=0 PUDs */
+	if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK) && !pud_write(pud))
+		dirty = _PAGE_SAVED_DIRTY;
+
+	return pud_set_flags(pud, dirty | _PAGE_SOFT_DIRTY);
 }
 
 static inline pud_t pud_mkdevmap(pud_t pud)
@@ -607,7 +697,11 @@ static inline pud_t pud_mkyoung(pud_t pud)
 
 static inline pud_t pud_mkwrite(pud_t pud)
 {
-	return pud_set_flags(pud, _PAGE_RW);
+	pud = pud_set_flags(pud, _PAGE_RW);
+
+	if (pud_dirty(pud))
+		pud = pud_clear_saveddirty(pud);
+	return pud;
 }
 
 #ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
@@ -724,6 +818,8 @@ static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask);
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
 	pteval_t val = pte_val(pte), oldval = val;
+	bool wr_protected;
+	pte_t pte_result;
 
 	/*
 	 * Chop off the NX bit (if present), and add the NX portion of
@@ -732,17 +828,43 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 	val &= _PAGE_CHG_MASK;
 	val |= check_pgprot(newprot) & ~_PAGE_CHG_MASK;
 	val = flip_protnone_guard(oldval, val, PTE_PFN_MASK);
-	return __pte(val);
+
+	pte_result = __pte(val);
+
+	/*
+	 * Do the saveddirty fixup if the PTE was just write protected and
+	 * it's dirty.
+	 */
+	wr_protected = (oldval & _PAGE_RW) && !(val & _PAGE_RW);
+	if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK) && wr_protected &&
+	    (val & _PAGE_DIRTY))
+		pte_result = pte_mksaveddirty(pte_result);
+
+	return pte_result;
 }
 
 static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 {
 	pmdval_t val = pmd_val(pmd), oldval = val;
+	bool wr_protected;
+	pmd_t pmd_result;
 
-	val &= _HPAGE_CHG_MASK;
+	val &= (_HPAGE_CHG_MASK & ~_PAGE_DIRTY);
 	val |= check_pgprot(newprot) & ~_HPAGE_CHG_MASK;
 	val = flip_protnone_guard(oldval, val, PHYSICAL_PMD_PAGE_MASK);
-	return __pmd(val);
+
+	pmd_result = __pmd(val);
+
+	/*
+	 * Do the saveddirty fixup if the PMD was just write protected and
+	 * it's dirty.
+	 */
+	wr_protected = (oldval & _PAGE_RW) && !(val & _PAGE_RW);
+	if (cpu_feature_enabled(X86_FEATURE_USER_SHSTK) && wr_protected &&
+	    (val & _PAGE_DIRTY))
+		pmd_result = pmd_mksaveddirty(pmd_result);
+
+	return pmd_result;
 }
 
 /*
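(Note for reviewers, placed below the patch so it stays out of the
commit message: an illustration-only userspace model of the
pte_modify() saveddirty fixup above, reusing the made-up X_PAGE_*
stand-ins from the sketch earlier in this mail.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define X_PAGE_RW          (1ULL << 1)
#define X_PAGE_DIRTY       (1ULL << 6)
#define X_PAGE_SAVED_DIRTY (1ULL << 58)

/*
 * Mirrors the fixup in pte_modify()/pmd_modify(): only shift Dirty into
 * the software bit when this call itself removed Write from a PTE that
 * is still hardware dirty.
 */
static uint64_t modify_fixup(uint64_t oldval, uint64_t val)
{
	bool wr_protected = (oldval & X_PAGE_RW) && !(val & X_PAGE_RW);

	if (wr_protected && (val & X_PAGE_DIRTY)) {
		val &= ~X_PAGE_DIRTY;	/* models pte_mksaveddirty() */
		val |= X_PAGE_SAVED_DIRTY;
	}
	return val;
}

int main(void)
{
	/* A dirty, writable PTE being write protected by pte_modify() */
	uint64_t oldval = X_PAGE_RW | X_PAGE_DIRTY;
	uint64_t val = modify_fixup(oldval, oldval & ~X_PAGE_RW);

	/* The Write=0,Dirty=1 shadow stack encoding must not survive */
	printf("HW dirty: %d, saved dirty: %d\n",
	       !!(val & X_PAGE_DIRTY), !!(val & X_PAGE_SAVED_DIRTY));
	return 0;
}

A PTE that already had Write=0 going in is left alone, which is why the
wr_protected check uses oldval: the filtering of the Dirty bit passed in
newprot already covers that first case, exactly as described in the
commit message.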