From patchwork Thu Sep 29 22:29:17 2022
X-Patchwork-Submitter: "Edgecombe, Rick P"
X-Patchwork-Id: 12994674
From: Rick Edgecombe
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
	Balbir Singh, Borislav Petkov, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, "H. J. Lu", Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	"Ravi V. Shankar", Weijiang Yang, "Kirill A. Shutemov",
Shutemov" , joao.moreira@intel.com, John Allen , kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH v2 20/39] mm/mprotect: Exclude shadow stack from preserve_write Date: Thu, 29 Sep 2022 15:29:17 -0700 Message-Id: <20220929222936.14584-21-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220929222936.14584-1-rick.p.edgecombe@intel.com> References: <20220929222936.14584-1-rick.p.edgecombe@intel.com> ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1664490627; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:content-type: content-transfer-encoding:in-reply-to:in-reply-to: references:references:dkim-signature; bh=EksmLWFFd0h6xLIFCIvhVRecSrF3cMgLHZG5B52jltk=; b=Ia5nUDjnUA92jV6r2cP5BXxo2HPd07ESR12U33Not+Dl+gRWbxLsVo52n7wsP7YoGrZatK bvIucyaISlXsGSZXMvxu2G1bsND+4WU1vK+z8N+6D0fJZgkcU1zCrB7MuGcBhYJl8boa+B qRCP3rbNT//lSau0zJHkpWQgm6otl3g= ARC-Authentication-Results: i=1; imf26.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=S6+2yhOx; spf=pass (imf26.hostedemail.com: domain of rick.p.edgecombe@intel.com designates 192.55.52.88 as permitted sender) smtp.mailfrom=rick.p.edgecombe@intel.com; dmarc=pass (policy=none) header.from=intel.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1664490627; a=rsa-sha256; cv=none; b=Nk87/VI0j1KvsSm2ejByBW6FEvaG1vvj0ygrKlkERnLe7KlBo69qA/9+7uJ+PUW2vJgakQ e2eZY6gtdEuxZUekSgfd3kLDvSj9w2lO/uRRgysNV8KmgSsyqkLbfQLPY5NC2z2R/iL1tB fAGqFuPXpdt0FlAwvUuOG6OKgN7BDFk= Authentication-Results: imf26.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=S6+2yhOx; spf=pass (imf26.hostedemail.com: domain of rick.p.edgecombe@intel.com designates 192.55.52.88 as permitted sender) smtp.mailfrom=rick.p.edgecombe@intel.com; dmarc=pass (policy=none) header.from=intel.com X-Rspam-User: X-Stat-Signature: cc5makqkbkhtc7munrentz61y5uyhp3s X-Rspamd-Queue-Id: D8D3214000B X-Rspamd-Server: rspam08 X-HE-Tag: 1664490626-122983 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Yu-cheng Yu In change_pte_range(), when a PTE is changed for prot_numa, _PAGE_RW is preserved to avoid the additional write fault after the NUMA hinting fault. However, pte_write() now includes both normal writable and shadow stack (Write=0, Dirty=1) PTEs, but the latter does not have _PAGE_RW and has no need to preserve it. Exclude shadow stack from preserve_write test, and apply the same change to change_huge_pmd(). Signed-off-by: Yu-cheng Yu Reviewed-by: Kirill A. Shutemov Signed-off-by: Rick Edgecombe --- Yu-cheng v25: - Move is_shadow_stack_mapping() to a separate line. Yu-cheng v24: - Change arch_shadow_stack_mapping() to is_shadow_stack_mapping(). mm/huge_memory.c | 7 +++++++ mm/mprotect.c | 7 +++++++ 2 files changed, 14 insertions(+) diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 11fc69eb4717..492c4f190f55 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1800,6 +1800,13 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, return 0; preserve_write = prot_numa && pmd_write(*pmd); + + /* + * Preserve only normal writable huge PMD, but not shadow + * stack (RW=0, Dirty=1). 
+	 */
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		preserve_write = false;
 	ret = 1;
 
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
diff --git a/mm/mprotect.c b/mm/mprotect.c
index bc6bddd156ca..983206529dce 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -114,6 +114,13 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
 			pte_t ptent;
 			bool preserve_write = prot_numa && pte_write(oldpte);
 
+			/*
+			 * Preserve only normal writable PTE, but not shadow
+			 * stack (RW=0, Dirty=1).
+			 */
+			if (vma->vm_flags & VM_SHADOW_STACK)
+				preserve_write = false;
+
 			/*
 			 * Avoid trapping faults against the zero or KSM
 			 * pages. See similar comment in change_huge_pmd.
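
For context, a minimal sketch (not part of the patch) of what the two hunks
above boil down to, assuming the VM_SHADOW_STACK vm_flags bit introduced
earlier in this series; the helper name should_preserve_write() is
hypothetical and only restates the logic the patch open-codes at each site:

#include <linux/mm.h>

/*
 * Illustration only: pte_write() reports shadow stack PTEs
 * (Write=0, Dirty=1) as writable even though _PAGE_RW is clear, so the
 * VMA flag is used to keep them out of the NUMA-hinting write
 * preservation.
 */
static inline bool should_preserve_write(struct vm_area_struct *vma,
					 pte_t oldpte, bool prot_numa)
{
	bool preserve_write = prot_numa && pte_write(oldpte);

	/* Shadow stack mappings never carry _PAGE_RW; nothing to preserve. */
	if (vma->vm_flags & VM_SHADOW_STACK)
		preserve_write = false;

	return preserve_write;
}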