From patchwork Sun Jan 30 21:18:23 2022
X-Patchwork-Submitter: Rick Edgecombe
X-Patchwork-Id: 12730150
From: Rick Edgecombe
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org,
 linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann,
 Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
 Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H. J. Lu", Jann Horn,
 Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov,
 Pavel Machek, Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar",
 Dave Martin, Weijiang Yang,
Shutemov" , joao.moreira@intel.com, John Allen , kcc@google.com, eranian@google.com Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu Subject: [PATCH 20/35] mm: Update can_follow_write_pte() for shadow stack Date: Sun, 30 Jan 2022 13:18:23 -0800 Message-Id: <20220130211838.8382-21-rick.p.edgecombe@intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220130211838.8382-1-rick.p.edgecombe@intel.com> References: <20220130211838.8382-1-rick.p.edgecombe@intel.com> X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 67A4740002 X-Rspam-User: nil Authentication-Results: imf27.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=dKgsIdou; dmarc=pass (policy=none) header.from=intel.com; spf=none (imf27.hostedemail.com: domain of rick.p.edgecombe@intel.com has no SPF policy when checking 192.55.52.93) smtp.mailfrom=rick.p.edgecombe@intel.com X-Stat-Signature: 94i1o7nkf5hfrhq79utfgk85be17tw7q X-HE-Tag: 1643577722-15063 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: Yu-cheng Yu Can_follow_write_pte() ensures a read-only page is COWed by checking the FOLL_COW flag, and uses pte_dirty() to validate the flag is still valid. Like a writable data page, a shadow stack page is writable, and becomes read-only during copy-on-write, but it is always dirty. Thus, in the can_follow_write_pte() check, it belongs to the writable page case and should be excluded from the read-only page pte_dirty() check. Apply the same changes to can_follow_write_pmd(). While at it, also split the long line into smaller ones. Signed-off-by: Yu-cheng Yu Reviewed-by: Kirill A. Shutemov Signed-off-by: Rick Edgecombe Cc: Kees Cook --- Yu-cheng v26: - Instead of passing vm_flags, pass down vma pointer to can_follow_write_*(). Yu-cheng v25: - Split long line into smaller ones. Yu-cheng v24: - Change arch_shadow_stack_mapping() to is_shadow_stack_mapping(). mm/gup.c | 16 ++++++++++++---- mm/huge_memory.c | 16 ++++++++++++---- 2 files changed, 24 insertions(+), 8 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index f0af462ac1e2..95b7d1084c44 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -464,10 +464,18 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address, * FOLL_FORCE can write to even unwritable pte's, but only * after we've gone through a COW cycle and they are dirty. 
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write_pte(pte_t pte, unsigned int flags,
+					struct vm_area_struct *vma)
 {
-	return pte_write(pte) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+	if (pte_write(pte))
+		return true;
+	if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW))
+		return false;
+	if (!pte_dirty(pte))
+		return false;
+	if (is_shadow_stack_mapping(vma->vm_flags))
+		return false;
+	return true;
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -510,7 +518,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags, vma)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3588e9fefbe0..1c7167e6f223 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1346,10 +1346,18 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags,
+					struct vm_area_struct *vma)
 {
-	return pmd_write(pmd) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+	if (pmd_write(pmd))
+		return true;
+	if ((flags & (FOLL_FORCE | FOLL_COW)) != (FOLL_FORCE | FOLL_COW))
+		return false;
+	if (!pmd_dirty(pmd))
+		return false;
+	if (is_shadow_stack_mapping(vma->vm_flags))
+		return false;
+	return true;
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1362,7 +1370,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags, vma))
 		goto out;
 
 	/* Avoid dumping huge zero page */
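
Note for readers outside this series: is_shadow_stack_mapping() used above is
introduced by an earlier patch in the series, not by this one. As a minimal
sketch of its intended semantics, assuming the series reserves a
VM_SHADOW_STACK bit in vma->vm_flags for shadow stack mappings (the exact
name and definition belong to that earlier patch, not to this one), it is
just a flag test:

/*
 * Illustrative sketch only, not part of this patch. Assumes a
 * VM_SHADOW_STACK vm_flags bit is defined elsewhere in the series.
 */
static inline bool is_shadow_stack_mapping(vm_flags_t vm_flags)
{
	return vm_flags & VM_SHADOW_STACK;
}

With such a helper, the new can_follow_write_pte()/can_follow_write_pmd()
checks decline the FOLL_FORCE/FOLL_COW "dirty means already COWed" shortcut
for shadow stack VMAs, whose PTEs are dirty by construction rather than as a
result of a COW cycle.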