From patchwork Sun Jan 30 21:18:15 2022
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 12730143
From: Rick Edgecombe
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org,
 linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann,
 Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
 Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn,
 Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov,
 Pavel Machek, Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar",
 Dave Martin, Weijiang Yang, "Kirill A. Shutemov", joao.moreira@intel.com,
 John Allen, kcc@google.com, eranian@google.com
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu
Subject: [PATCH 12/35] x86/mm: Update ptep_set_wrprotect() and
 pmdp_set_wrprotect() for transition from _PAGE_DIRTY to _PAGE_COW
Date: Sun, 30 Jan 2022 13:18:15 -0800
Message-Id: <20220130211838.8382-13-rick.p.edgecombe@intel.com>
In-Reply-To: <20220130211838.8382-1-rick.p.edgecombe@intel.com>
References: <20220130211838.8382-1-rick.p.edgecombe@intel.com>

From: Yu-cheng Yu

When Shadow Stack is introduced, the [R/O + _PAGE_DIRTY] PTE encoding is
reserved for shadow stack, and copy-on-write PTEs carry
[R/O + _PAGE_COW] instead. When a PTE goes from [R/W + _PAGE_DIRTY] to
[R/O + _PAGE_COW], it could pass through a transient shadow stack PTE in
two cases:

The first case is that some processors can start a write but end up
seeing a read-only PTE by the time they get to the Dirty bit, creating a
transient shadow stack PTE. However, this will not occur on processors
supporting Shadow Stack, so a TLB flush is not necessary.

The second case is that when _PAGE_DIRTY is replaced with _PAGE_COW
non-atomically, a transient shadow stack PTE can be created as a result.
Prevent that by doing the replacement with cmpxchg.

Dave Hansen, Jann Horn, Andy Lutomirski, and Peter Zijlstra provided
many insights into the issue. Jann Horn provided the cmpxchg solution.

Signed-off-by: Yu-cheng Yu
Reviewed-by: Kees Cook
Reviewed-by: Kirill A. Shutemov
Signed-off-by: Rick Edgecombe
---

Yu-cheng v30:
 - Replace (pmdval_t) cast with CONFIG_PGTABLE_LEVELS > 2 (Borislav Petkov).

 arch/x86/include/asm/pgtable.h | 38 ++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 5c3886f6ccda..e1061b9cba6a 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1295,6 +1295,24 @@ static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pte_t *ptep)
 {
+	/*
+	 * If Shadow Stack is enabled, pte_wrprotect() moves _PAGE_DIRTY
+	 * to _PAGE_COW (see comments at pte_wrprotect()).
+	 * When a thread reads a RW=1, Dirty=0 PTE and, before it changes
+	 * the PTE to RW=0, Dirty=0, another thread writes to the page, the
+	 * PTE is now RW=1, Dirty=1.  Use try_cmpxchg() to detect PTE
+	 * changes and update old_pte, then try again.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		pte_t old_pte, new_pte;
+
+		old_pte = READ_ONCE(*ptep);
+		do {
+			new_pte = pte_wrprotect(old_pte);
+		} while (!try_cmpxchg(&ptep->pte, &old_pte.pte, new_pte.pte));
+
+		return;
+	}
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
 }

@@ -1347,6 +1365,26 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pmd_t *pmdp)
 {
+#if CONFIG_PGTABLE_LEVELS > 2
+	/*
+	 * If Shadow Stack is enabled, pmd_wrprotect() moves _PAGE_DIRTY
+	 * to _PAGE_COW (see comments at pmd_wrprotect()).
+	 * When a thread reads a RW=1, Dirty=0 PMD and, before it changes
+	 * the PMD to RW=0, Dirty=0, another thread writes to the page, the
+	 * PMD is now RW=1, Dirty=1.  Use try_cmpxchg() to detect PMD
+	 * changes and update old_pmd, then try again.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		pmd_t old_pmd, new_pmd;
+
+		old_pmd = READ_ONCE(*pmdp);
+		do {
+			new_pmd = pmd_wrprotect(old_pmd);
+		} while (!try_cmpxchg(&pmdp->pmd, &old_pmd.pmd, new_pmd.pmd));
+
+		return;
+	}
+#endif
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
 }
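
The encoding scheme the commit message relies on can be sketched outside
the kernel. Below is a minimal user-space model, assuming the
architectural positions of the R/W (bit 1) and Dirty (bit 6) bits; the
_PAGE_COW position is a software-available bit chosen elsewhere in the
series, assumed here to be bit 57, and the FAKE_PAGE_* macros and
classify() helper are invented for illustration:

#include <stdint.h>
#include <stdio.h>

#define FAKE_PAGE_RW	(1ULL << 1)	/* architectural R/W bit */
#define FAKE_PAGE_DIRTY	(1ULL << 6)	/* architectural Dirty bit */
#define FAKE_PAGE_COW	(1ULL << 57)	/* assumed software bit for _PAGE_COW */

static const char *classify(uint64_t pte)
{
	if (pte & FAKE_PAGE_RW)
		return "ordinary writable PTE";
	if (pte & FAKE_PAGE_DIRTY)
		return "shadow stack PTE [R/O + _PAGE_DIRTY]";
	if (pte & FAKE_PAGE_COW)
		return "copy-on-write PTE [R/O + _PAGE_COW]";
	return "read-only PTE";
}

int main(void)
{
	/* A dirty, writable PTE... */
	printf("%s\n", classify(FAKE_PAGE_RW | FAKE_PAGE_DIRTY));
	/* ...must never be observed as this while being write-protected... */
	printf("%s\n", classify(FAKE_PAGE_DIRTY));
	/* ...and must end up as this instead. */
	printf("%s\n", classify(FAKE_PAGE_COW));
	return 0;
}

The middle case, [R/O + _PAGE_DIRTY], is exactly the combination the two
functions in the patch must never let a racing observer see.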
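
The retry pattern itself can likewise be modeled with C11 atomics in
place of the kernel's try_cmpxchg(). In this sketch fake_pte,
hw_set_dirty() and wrprotect_cmpxchg() are invented names, and the second
thread stands in for a processor setting the Dirty bit through a
still-writable PTE:

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_PAGE_RW	(1ULL << 1)	/* architectural R/W bit */
#define FAKE_PAGE_DIRTY	(1ULL << 6)	/* architectural Dirty bit */
#define FAKE_PAGE_COW	(1ULL << 57)	/* assumed software bit for _PAGE_COW */

static _Atomic uint64_t fake_pte = FAKE_PAGE_RW;	/* RW=1, Dirty=0 */

/* Stands in for the CPU setting Dirty on a write through a RW=1 PTE. */
static void *hw_set_dirty(void *arg)
{
	uint64_t old = atomic_load(&fake_pte);

	while (old & FAKE_PAGE_RW) {
		if (atomic_compare_exchange_weak(&fake_pte, &old,
						 old | FAKE_PAGE_DIRTY))
			break;
	}
	return NULL;
}

/* The patch's pattern: recompute the wrprotected value until it sticks. */
static void wrprotect_cmpxchg(void)
{
	uint64_t old = atomic_load(&fake_pte), new;

	do {
		new = old & ~FAKE_PAGE_RW;
		if (new & FAKE_PAGE_DIRTY)	/* Dirty -> COW, as pte_wrprotect() */
			new = (new & ~FAKE_PAGE_DIRTY) | FAKE_PAGE_COW;
	} while (!atomic_compare_exchange_weak(&fake_pte, &old, new));
}

int main(void)
{
	pthread_t t;
	uint64_t v;

	pthread_create(&t, NULL, hw_set_dirty, NULL);
	wrprotect_cmpxchg();
	pthread_join(&t, NULL);

	v = atomic_load(&fake_pte);
	/* RW=0, Dirty=1 (a shadow stack encoding) is never reachable here. */
	printf("final: RW=%d Dirty=%d COW=%d\n",
	       !!(v & FAKE_PAGE_RW), !!(v & FAKE_PAGE_DIRTY),
	       !!(v & FAKE_PAGE_COW));
	return 0;
}

Compile with cc -pthread. Whenever the compare-exchange fails because the
Dirty bit appeared after the value was read, the loop recomputes the
write-protected value, so the published PTE is either [R/O + _PAGE_COW]
or plain read-only, never the transient [R/O + Dirty] shadow stack
encoding that a plain read-modify-write could expose.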