From patchwork Tue Jun 13 00:10:43 2023
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 13277731
From: Rick Edgecombe
To: x86@kernel.org, "H . Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
	Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
	Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
	"H . J . Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
	Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Weijiang Yang, "Kirill A . Shutemov", John Allen, kcc@google.com,
	eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com,
	dethoma@microsoft.com, akpm@linux-foundation.org,
	Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com,
	debug@rivosinc.com, szabolcs.nagy@arm.com, torvalds@linux-foundation.org,
	broonie@kernel.org
Cc: rick.p.edgecombe@intel.com, Pengfei Xu
Subject: [PATCH v9 17/42] mm: Warn on shadow stack memory in wrong vma
Date: Mon, 12 Jun 2023 17:10:43 -0700
Message-Id: <20230613001108.3040476-18-rick.p.edgecombe@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230613001108.3040476-1-rick.p.edgecombe@intel.com>
References: <20230613001108.3040476-1-rick.p.edgecombe@intel.com>
MIME-Version: 1.0

The x86 Control-flow Enforcement Technology (CET) feature includes a new
type of memory called shadow stack. This shadow stack memory has some
unusual properties, which require some core mm changes to function
properly.

One sharp edge is that PTEs that are both Write=0 and Dirty=1 are
treated as shadow stack by the CPU, but this combination used to be
created by the kernel on x86. Previous patches have changed the kernel
to now avoid creating these PTEs unless they are for shadow stack memory.
In case any missed corners of the kernel are still creating PTEs like
this for non-shadow stack memory, and to catch any re-introductions of
the logic, warn if any shadow stack PTEs (Write=0, Dirty=1) are found in
non-shadow stack VMAs when they are being zapped. This won't catch
transient cases but should have decent coverage.

In order to check if a PTE is shadow stack in core mm code, add two arch
breakouts, arch_check_zapped_pte/pmd(). This will allow shadow stack
specific code to be kept in arch/x86.

Only do the check if shadow stack is supported by the CPU and configured,
because in rare cases older CPUs may write Dirty=1 to a Write=0 PTE.
This check is handled in pte_shstk()/pmd_shstk().

Signed-off-by: Rick Edgecombe
Acked-by: Mike Rapoport (IBM)
Tested-by: Pengfei Xu
Tested-by: John Allen
Tested-by: Kees Cook
Reviewed-by: Mark Brown
---
v9:
 - Add comments about not doing the check on non-shadow stack CPUs
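
Reviewer note (below the cut, so it won't land in git history): the helpers
the warning relies on, pte_shstk()/pmd_shstk(), are added earlier in this
series rather than in this patch. A minimal sketch of what pte_shstk()
checks is shown below; the exact feature flag in the cpu_feature_enabled()
guard is an assumption here, so see the earlier patches for the
authoritative definition. pmd_shstk() is the analogous check for huge pages.

static inline bool pte_shstk(pte_t pte)
{
	/*
	 * Shadow stack encoding: Dirty=1 with Write=0, i.e. _PAGE_DIRTY set
	 * while _PAGE_RW is clear. The cpu_feature_enabled() guard keeps the
	 * zap-time warning quiet on pre-shadow-stack CPUs, which can
	 * legitimately leave this combination behind.
	 */
	return cpu_feature_enabled(X86_FEATURE_SHSTK) &&
	       (pte_flags(pte) & (_PAGE_RW | _PAGE_DIRTY)) == _PAGE_DIRTY;
}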
---
 arch/x86/include/asm/pgtable.h |  6 ++++++
 arch/x86/mm/pgtable.c          | 20 ++++++++++++++++++++
 include/linux/pgtable.h        | 14 ++++++++++++++
 mm/huge_memory.c               |  1 +
 mm/memory.c                    |  1 +
 5 files changed, 42 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index d8724f5b1202..89cfa93d0ad6 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1664,6 +1664,12 @@ static inline bool arch_has_hw_pte_young(void)
 	return true;
 }
 
+#define arch_check_zapped_pte arch_check_zapped_pte
+void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte);
+
+#define arch_check_zapped_pmd arch_check_zapped_pmd
+void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd);
+
 #ifdef CONFIG_XEN_PV
 #define arch_has_hw_nonleaf_pmd_young arch_has_hw_nonleaf_pmd_young
 static inline bool arch_has_hw_nonleaf_pmd_young(void)
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 0ad2c62ac0a8..101e721d74aa 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -894,3 +894,23 @@ pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 
 	return pmd_clear_saveddirty(pmd);
 }
+
+void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte)
+{
+	/*
+	 * Hardware before shadow stack can (rarely) set Dirty=1
+	 * on a Write=0 PTE. So the below condition
+	 * only indicates a software bug when shadow stack is
+	 * supported by the HW. This checking is covered in
+	 * pte_shstk().
+	 */
+	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
+			pte_shstk(pte));
+}
+
+void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd)
+{
+	/* See note in arch_check_zapped_pte() */
+	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
+			pmd_shstk(pmd));
+}
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 0f3cf726812a..feb1fd2c814f 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -291,6 +291,20 @@ static inline bool arch_has_hw_pte_young(void)
 }
 #endif
 
+#ifndef arch_check_zapped_pte
+static inline void arch_check_zapped_pte(struct vm_area_struct *vma,
+					 pte_t pte)
+{
+}
+#endif
+
+#ifndef arch_check_zapped_pmd
+static inline void arch_check_zapped_pmd(struct vm_area_struct *vma,
+					 pmd_t pmd)
+{
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_GET_AND_CLEAR
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 				       unsigned long address,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 37dd56b7b3d1..c3cc20c1b26c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1681,6 +1681,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	 */
 	orig_pmd = pmdp_huge_get_and_clear_full(vma, addr, pmd,
 						tlb->fullmm);
+	arch_check_zapped_pmd(vma, orig_pmd);
 	tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
 	if (vma_is_special_huge(vma)) {
 		if (arch_needs_pgtable_deposit())
diff --git a/mm/memory.c b/mm/memory.c
index c1b6fe944c20..40c0b233b61d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1412,6 +1412,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			continue;
 		ptent = ptep_get_and_clear_full(mm, addr, pte,
 						tlb->fullmm);
+		arch_check_zapped_pte(vma, ptent);
 		tlb_remove_tlb_entry(tlb, pte, addr);
 		zap_install_uffd_wp_if_needed(vma, addr, pte, details,
 					      ptent);
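
A side note on the include/linux/pgtable.h hunk: rather than __weak symbols
or a new Kconfig option, the hooks follow the usual "#define the hook to its
own name" convention, so the empty generic stubs compile away on every
architecture that doesn't opt in. A hypothetical non-x86 user would only
need to mirror the x86 declaration block in its own asm/pgtable.h (the
architecture and path below are made up for illustration):

/* arch/foo/include/asm/pgtable.h -- hypothetical, not part of this series */
#define arch_check_zapped_pte arch_check_zapped_pte
void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte);

#define arch_check_zapped_pmd arch_check_zapped_pmd
void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd);

With the macros defined, zap_pte_range() and zap_huge_pmd() call the
architecture's out-of-line versions; everywhere else the calls still compile
to nothing.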