From patchwork Thu Feb 15 12:17:53 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13558311
From: Ryan Roberts
To: David Hildenbrand, Mark Rutland, Catalin Marinas, Will Deacon,
 Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Andrew Morton,
 Muchun Song
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [RFC PATCH v1 1/4] mm: Introduce ptep_get_lockless_norecency()
Date: Thu, 15 Feb 2024 12:17:53 +0000
Message-Id: <20240215121756.2734131-2-ryan.roberts@arm.com>
In-Reply-To: <20240215121756.2734131-1-ryan.roberts@arm.com>
References: <20240215121756.2734131-1-ryan.roberts@arm.com>

With the introduction of contpte mapping support for arm64, that
architecture's implementation of ptep_get_lockless() has become very
complex, due to the need to gather access and dirty bits from across
all of the ptes in the contpte block. This requires careful
implementation to ensure the returned value is consistent (because it
is not possible to read all ptes atomically), but even in the common
case when there is no racing modification, we have to read all ptes,
which gives an ~O(n^2) cost if the core-mm is iterating over a range
and performing a ptep_get_lockless() on each pte.

Solve this by introducing ptep_get_lockless_norecency(), which makes no
guarantees about the access and dirty bits and can therefore simply
read the single target pte. At the same time, convert all call sites
that previously used ptep_get_lockless() but don't care about access
and dirty state.

We may want to do something similar for ptep_get() (i.e.
ptep_get_norecency()) in future; it doesn't suffer from the consistency
problem, because the PTL serializes it with any modifications, but it
does suffer the same O(n^2) cost.

Signed-off-by: Ryan Roberts
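
For illustration, the intended calling pattern looks like this (a
minimal sketch modelled on the pte_range_none() call site converted
below, not part of the patch; range_any_present() is a made-up example
caller):

/*
 * Scan n ptes locklessly when only presence matters. With
 * ptep_get_lockless() on arm64 contpte, each call would read every pte
 * in the contpte block to gather young/dirty, so the scan costs ~n^2
 * reads; the norecency variant performs a single read per pte.
 */
static bool range_any_present(pte_t *ptep, int nr)
{
	int i;

	for (i = 0; i < nr; i++) {
		/* Young/dirty in this value may be stale; presence is not. */
		if (pte_present(ptep_get_lockless_norecency(ptep + i)))
			return true;
	}
	return false;
}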
---
 include/linux/pgtable.h | 37 ++++++++++++++++++++++++++++++++++---
 kernel/events/core.c    |  2 +-
 mm/hugetlb.c            |  2 +-
 mm/khugepaged.c         |  2 +-
 mm/memory.c             |  2 +-
 mm/swap_state.c         |  2 +-
 mm/swapfile.c           |  2 +-
 7 files changed, 40 insertions(+), 9 deletions(-)

--
2.25.1

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index a36cf4e124b0..9dd40fdbd825 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -528,16 +528,47 @@ static inline pmd_t pmdp_get_lockless(pmd_t *pmdp)
 #endif /* CONFIG_PGTABLE_LEVELS > 2 */
 #endif /* CONFIG_GUP_GET_PXX_LOW_HIGH */
 
-/*
- * We require that the PTE can be read atomically.
- */
 #ifndef ptep_get_lockless
+/**
+ * ptep_get_lockless - Get a pte without holding the page table lock. Young and
+ *                     dirty bits are guaranteed to accurately reflect the state
+ *                     of the pte at the time of the call.
+ * @ptep: Page table pointer for pte to get.
+ *
+ * If young and dirty information is not required, use
+ * ptep_get_lockless_norecency() which can be faster on some architectures.
+ *
+ * May be overridden by the architecture; otherwise, implemented using
+ * ptep_get(), on the assumption that it is atomic.
+ *
+ * Context: Any.
+ */
 static inline pte_t ptep_get_lockless(pte_t *ptep)
 {
 	return ptep_get(ptep);
 }
 #endif
 
+#ifndef ptep_get_lockless_norecency
+/**
+ * ptep_get_lockless_norecency - Get a pte without holding the page table lock.
+ *                               Young and dirty bits may not be accurate.
+ * @ptep: Page table pointer for pte to get.
+ *
+ * Prefer this over ptep_get_lockless() when young and dirty information is not
+ * required since it can be faster on some architectures.
+ *
+ * May be overridden by the architecture; otherwise, implemented using the more
+ * precise ptep_get_lockless().
+ *
+ * Context: Any.
+ */
+static inline pte_t ptep_get_lockless_norecency(pte_t *ptep)
+{
+	return ptep_get_lockless(ptep);
+}
+#endif
+
 #ifndef pmdp_get_lockless
 static inline pmd_t pmdp_get_lockless(pmd_t *pmdp)
 {
diff --git a/kernel/events/core.c b/kernel/events/core.c
index f0f0f71213a1..27991312d635 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7583,7 +7583,7 @@ static u64 perf_get_pgtable_size(struct mm_struct *mm, unsigned long addr)
 	if (!ptep)
 		goto again;
 
-	pte = ptep_get_lockless(ptep);
+	pte = ptep_get_lockless_norecency(ptep);
 	if (pte_present(pte))
 		size = pte_leaf_size(pte);
 	pte_unmap(ptep);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 68283e54c899..41dc44eb8454 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -7517,7 +7517,7 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 	}
 
 	if (pte) {
-		pte_t pteval = ptep_get_lockless(pte);
+		pte_t pteval = ptep_get_lockless_norecency(pte);
 
 		BUG_ON(pte_present(pteval) && !pte_huge(pteval));
 	}
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2771fc043b3b..1a6c9ed8237a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1019,7 +1019,7 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
 			}
 		}
 
-		vmf.orig_pte = ptep_get_lockless(pte);
+		vmf.orig_pte = ptep_get_lockless_norecency(pte);
 		if (!is_swap_pte(vmf.orig_pte))
 			continue;
diff --git a/mm/memory.c b/mm/memory.c
index 4dd8e35b593a..8e65fa1884f1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4353,7 +4353,7 @@ static bool pte_range_none(pte_t *pte, int nr_pages)
 	int i;
 
 	for (i = 0; i < nr_pages; i++) {
-		if (!pte_none(ptep_get_lockless(pte + i)))
+		if (!pte_none(ptep_get_lockless_norecency(pte + i)))
 			return false;
 	}
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 2f540748f7c0..061c6c16c7ff 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -837,7 +837,7 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 			if (!pte)
 				break;
 		}
-		pentry = ptep_get_lockless(pte);
+		pentry = ptep_get_lockless_norecency(pte);
 		if (!is_swap_pte(pentry))
 			continue;
 		entry = pte_to_swp_entry(pentry);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index d1bd8d1e17bd..c414dd238814 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1857,7 +1857,7 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			break;
 		}
 
-		ptent = ptep_get_lockless(pte);
+		ptent = ptep_get_lockless_norecency(pte);
 		if (!is_swap_pte(ptent))
 			continue;
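
The conversions above are safe because each call site only tests
properties that are independent of the access and dirty bits. A
hypothetical sanity check (not part of the patch; the helper name is
made up) makes the invariant explicit:

/*
 * The predicates used by the converted call sites must give the same
 * answer when young/dirty are cleared from a present pte.
 */
static void check_norecency_invariant(pte_t pte)
{
	if (pte_present(pte)) {
		pte_t soft = pte_mkold(pte_mkclean(pte));

		WARN_ON(pte_none(soft) != pte_none(pte));
		WARN_ON(pte_present(soft) != pte_present(pte));
		WARN_ON(is_swap_pte(soft) != is_swap_pte(pte));
	}
	/* A non-present pte has no recency bits to lose. */
}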
From patchwork Thu Feb 15 12:17:54 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13558312
From: Ryan Roberts
To: David Hildenbrand, Mark Rutland, Catalin Marinas, Will Deacon,
 Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Andrew Morton,
 Muchun Song
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [RFC PATCH v1 2/4] mm/gup: Use ptep_get_lockless_norecency()
Date: Thu, 15 Feb 2024 12:17:54 +0000
Message-Id: <20240215121756.2734131-3-ryan.roberts@arm.com>
In-Reply-To: <20240215121756.2734131-1-ryan.roberts@arm.com>
References: <20240215121756.2734131-1-ryan.roberts@arm.com>

Gup needs to read ptes locklessly, so it uses ptep_get_lockless().
However, the returned access and dirty bits are unimportant, so let's
switch over to ptep_get_lockless_norecency().

The wrinkle is that gup needs to check that the pte hasn't changed once
it has pinned the folio, following this model:

  pte = ptep_get_lockless_norecency(ptep)
  ...
  if (!pte_same(pte, ptep_get_lockless(ptep)))
          // RACE!
  ...

Now that pte may not contain correct access and dirty information, the
pte_same() comparison could spuriously fail. So let's introduce a new
pte_same_norecency() helper which ignores the access and dirty bits
when doing the comparison.

Note that previously, ptep_get() was being used for the comparison;
this was technically incorrect because the PTL is not held. I've also
converted the comparison to use the preferred pmd_same() helper instead
of doing a raw value comparison.

As a side effect, this new approach removes the possibility of a
concurrent read/write to the page causing a spurious fast gup failure,
because the access and dirty bits are no longer used in the comparison.

Signed-off-by: Ryan Roberts
Acked-by: David Hildenbrand
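
Concretely, the spurious failure mode that pte_same_norecency()
eliminates looks like this (an illustrative sketch only; gup_try_one()
is a made-up condensation of the gup_pte_range() logic below):

/*
 * CPU0 runs gup-fast; between its two reads, hardware (or a racing
 * access) sets young/dirty on the same pte:
 *
 *   CPU0: pte = ptep_get_lockless_norecency(ptep);    // dirty=0
 *   CPU0: pin the folio
 *   MMU:  a write hits the page; the dirty bit is set // dirty=1
 *   CPU0: recheck with pte_same() -> false            // spurious abort
 *
 * Masking the recency bits on both sides keeps the recheck stable:
 */
static bool gup_try_one(pmd_t pmd, pmd_t *pmdp, pte_t *ptep,
			struct folio *folio, unsigned int flags)
{
	pte_t pte = ptep_get_lockless_norecency(ptep);

	/* ... pin the folio ... */

	if (unlikely(!pmd_same(pmd, *pmdp)) ||
	    unlikely(!pte_same_norecency(pte,
					 ptep_get_lockless_norecency(ptep)))) {
		gup_put_folio(folio, 1, flags);
		return false;	/* fall back to the slow path */
	}
	return true;
}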
---
 include/linux/pgtable.h | 18 ++++++++++++++++++
 mm/gup.c                |  7 ++++---
 2 files changed, 22 insertions(+), 3 deletions(-)

--
2.25.1

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 9dd40fdbd825..8123affa8baf 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -936,6 +936,24 @@ static inline int pte_same(pte_t pte_a, pte_t pte_b)
 }
 #endif
 
+/**
+ * pte_same_norecency - Compare pte_a and pte_b, ignoring young and dirty bits,
+ *                      if the ptes are present.
+ * @pte_a: First pte to compare.
+ * @pte_b: Second pte to compare.
+ *
+ * Returns 1 if the ptes match, else 0.
+ */
+static inline int pte_same_norecency(pte_t pte_a, pte_t pte_b)
+{
+	if (pte_present(pte_a))
+		pte_a = pte_mkold(pte_mkclean(pte_a));
+	if (pte_present(pte_b))
+		pte_b = pte_mkold(pte_mkclean(pte_b));
+	return pte_same(pte_a, pte_b);
+}
+
 #ifndef __HAVE_ARCH_PTE_UNUSED
 /*
  * Some architectures provide facilities to virtualization guests
diff --git a/mm/gup.c b/mm/gup.c
index df83182ec72d..0f96d0a5ec09 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2576,7 +2576,7 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 	if (!ptep)
 		return 0;
 	do {
-		pte_t pte = ptep_get_lockless(ptep);
+		pte_t pte = ptep_get_lockless_norecency(ptep);
 		struct page *page;
 		struct folio *folio;
 
@@ -2617,8 +2617,9 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 			goto pte_unmap;
 		}
 
-		if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
-		    unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
+		if (unlikely(!pmd_same(pmd, *pmdp)) ||
+		    unlikely(!pte_same_norecency(pte,
+				ptep_get_lockless_norecency(ptep)))) {
 			gup_put_folio(folio, 1, flags);
 			goto pte_unmap;
 		}
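
Note that pte_same_norecency() only normalizes present entries; for
non-present entries (swap entries, pte markers) the raw values are
compared, since those bit positions may encode part of the swap entry.
For example (hypothetical assertions, not part of the patch):

static void pte_same_norecency_examples(pte_t present_pte)
{
	if (!pte_present(present_pte))
		return;

	/* Recency changes are invisible to the comparison... */
	WARN_ON(!pte_same_norecency(present_pte, pte_mkyoung(present_pte)));
	WARN_ON(!pte_same_norecency(present_pte, pte_mkdirty(present_pte)));

	/* ...but any other difference still fails it, as with pte_same(). */
}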
From patchwork Thu Feb 15 12:17:55 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13558313
From: Ryan Roberts
To: David Hildenbrand, Mark Rutland, Catalin Marinas, Will Deacon,
 Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Andrew Morton,
 Muchun Song
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [RFC PATCH v1 3/4] mm/memory: Use ptep_get_lockless_norecency() for
 orig_pte
Date: Thu, 15 Feb 2024 12:17:55 +0000
Message-Id: <20240215121756.2734131-4-ryan.roberts@arm.com>
In-Reply-To: <20240215121756.2734131-1-ryan.roberts@arm.com>
References: <20240215121756.2734131-1-ryan.roberts@arm.com>

Let's convert handle_pte_fault()'s use of ptep_get_lockless() to
ptep_get_lockless_norecency() to save orig_pte.

There are a number of places that follow this model:

  orig_pte = ptep_get_lockless(ptep)
  ...
  if (!pte_same(orig_pte, ptep_get(ptep)))
          // RACE!
  ...

So we need to be careful to convert all of those to use
pte_same_norecency() so that the access and dirty bits are excluded
from the comparison.

Additionally, there are a couple of places that genuinely rely on the
access and dirty bits of orig_pte; with some careful refactoring, we
can use ptep_get() once we are holding the lock to achieve equivalent
logic.

Signed-off-by: Ryan Roberts
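
The resulting lifecycle of orig_pte in the fault path looks like this
(a condensed sketch of the handle_pte_fault() flow after this patch;
fault_model() is a made-up name):

static vm_fault_t fault_model(struct vm_fault *vmf)
{
	/* 1. Lockless snapshot; young/dirty in it may be stale. */
	vmf->orig_pte = ptep_get_lockless_norecency(vmf->pte);
	vmf->flags |= FAULT_FLAG_ORIG_PTE_VALID;

	/* ... fault handling may drop the PTL and even sleep ... */

	/* 2. Revalidate under the PTL with the recency bits masked out. */
	spin_lock(vmf->ptl);
	if (unlikely(!pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte))) {
		spin_unlock(vmf->ptl);
		return VM_FAULT_NOPAGE;	/* raced with another fault */
	}

	/*
	 * 3. Anything that genuinely needs young/dirty re-reads with
	 *    ptep_get() while the lock is held, instead of trusting
	 *    vmf->orig_pte.
	 */
	spin_unlock(vmf->ptl);
	return 0;
}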
---
 mm/memory.c | 55 +++++++++++++++++++++++++++++++++--------------------
 1 file changed, 34 insertions(+), 21 deletions(-)

--
2.25.1

diff --git a/mm/memory.c b/mm/memory.c
index 8e65fa1884f1..075245314ec3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3014,7 +3014,7 @@ static inline int pte_unmap_same(struct vm_fault *vmf)
 #if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
 	if (sizeof(pte_t) > sizeof(unsigned long)) {
 		spin_lock(vmf->ptl);
-		same = pte_same(ptep_get(vmf->pte), vmf->orig_pte);
+		same = pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte);
 		spin_unlock(vmf->ptl);
 	}
 #endif
@@ -3062,11 +3062,14 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	 * take a double page fault, so mark it accessed here.
 	 */
 	vmf->pte = NULL;
-	if (!arch_has_hw_pte_young() && !pte_young(vmf->orig_pte)) {
+	if (!arch_has_hw_pte_young()) {
 		pte_t entry;
 
 		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
-		if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+		if (likely(vmf->pte))
+			entry = ptep_get(vmf->pte);
+		if (unlikely(!vmf->pte ||
+			     !pte_same_norecency(entry, vmf->orig_pte))) {
 			/*
 			 * Other thread has already handled the fault
 			 * and update local tlb only
@@ -3077,9 +3080,11 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 			goto pte_unlock;
 		}
 
-		entry = pte_mkyoung(vmf->orig_pte);
-		if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
-			update_mmu_cache_range(vmf, vma, addr, vmf->pte, 1);
+		if (!pte_young(entry)) {
+			entry = pte_mkyoung(entry);
+			if (ptep_set_access_flags(vma, addr, vmf->pte, entry, 0))
+				update_mmu_cache_range(vmf, vma, addr, vmf->pte, 1);
+		}
 	}
 
 	/*
@@ -3094,7 +3099,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 
 	/* Re-validate under PTL if the page is still mapped */
 	vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
-	if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+	if (unlikely(!vmf->pte ||
+		     !pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte))) {
 		/* The PTE changed under us, update local tlb */
 		if (vmf->pte)
 			update_mmu_tlb(vma, addr, vmf->pte);
@@ -3369,14 +3375,17 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 	 * Re-check the pte - we dropped the lock
 	 */
 	vmf->pte = pte_offset_map_lock(mm, vmf->pmd, vmf->address, &vmf->ptl);
-	if (likely(vmf->pte && pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+	if (likely(vmf->pte))
+		entry = ptep_get(vmf->pte);
+	if (likely(vmf->pte && pte_same_norecency(entry, vmf->orig_pte))) {
 		if (old_folio) {
 			if (!folio_test_anon(old_folio)) {
 				dec_mm_counter(mm, mm_counter_file(old_folio));
 				inc_mm_counter(mm, MM_ANONPAGES);
 			}
 		} else {
-			ksm_might_unmap_zero_page(mm, vmf->orig_pte);
+			/* Needs dirty bit so can't use vmf->orig_pte. */
+			ksm_might_unmap_zero_page(mm, entry);
 			inc_mm_counter(mm, MM_ANONPAGES);
 		}
 		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
@@ -3494,7 +3503,7 @@ static vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf, struct folio *folio
 	 * We might have raced with another page fault while we released the
 	 * pte_offset_map_lock.
 	 */
-	if (!pte_same(ptep_get(vmf->pte), vmf->orig_pte)) {
+	if (!pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte)) {
 		update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		return VM_FAULT_NOPAGE;
@@ -3883,7 +3892,8 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
 				&vmf->ptl);
-	if (likely(vmf->pte && pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
+	if (likely(vmf->pte &&
+		   pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte)))
 		restore_exclusive_pte(vma, vmf->page, vmf->address, vmf->pte);
 
 	if (vmf->pte)
@@ -3928,7 +3938,7 @@ static vm_fault_t pte_marker_clear(struct vm_fault *vmf)
 	 * quickly from a PTE_MARKER_UFFD_WP into PTE_MARKER_POISONED.
 	 * So is_pte_marker() check is not enough to safely drop the pte.
 	 */
-	if (pte_same(vmf->orig_pte, ptep_get(vmf->pte)))
+	if (pte_same_norecency(vmf->orig_pte, ptep_get(vmf->pte)))
 		pte_clear(vmf->vma->vm_mm, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	return 0;
@@ -4028,8 +4038,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 					vmf->address, &vmf->ptl);
 			if (unlikely(!vmf->pte ||
-				     !pte_same(ptep_get(vmf->pte),
-							vmf->orig_pte)))
+				     !pte_same_norecency(ptep_get(vmf->pte),
+							 vmf->orig_pte)))
 				goto unlock;
 
 			/*
@@ -4117,7 +4127,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 					vmf->address, &vmf->ptl);
 			if (likely(vmf->pte &&
-				   pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
+				   pte_same_norecency(ptep_get(vmf->pte),
+						      vmf->orig_pte)))
 				ret = VM_FAULT_OOM;
 			goto unlock;
 		}
@@ -4187,7 +4198,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	 */
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
 			&vmf->ptl);
-	if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
+	if (unlikely(!vmf->pte ||
+		     !pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte)))
 		goto out_nomap;
 
 	if (unlikely(!folio_test_uptodate(folio))) {
@@ -4747,7 +4759,7 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
 static bool vmf_pte_changed(struct vm_fault *vmf)
 {
 	if (vmf->flags & FAULT_FLAG_ORIG_PTE_VALID)
-		return !pte_same(ptep_get(vmf->pte), vmf->orig_pte);
+		return !pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte);
 
 	return !pte_none(ptep_get(vmf->pte));
 }
@@ -5125,7 +5137,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	 * the pfn may be screwed if the read is non atomic.
 	 */
 	spin_lock(vmf->ptl);
-	if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+	if (unlikely(!pte_same_norecency(ptep_get(vmf->pte), vmf->orig_pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 		goto out;
 	}
@@ -5197,7 +5209,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 				       vmf->address, &vmf->ptl);
 		if (unlikely(!vmf->pte))
 			goto out;
-		if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+		if (unlikely(!pte_same_norecency(ptep_get(vmf->pte),
+						 vmf->orig_pte))) {
 			pte_unmap_unlock(vmf->pte, vmf->ptl);
 			goto out;
 		}
@@ -5343,7 +5356,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 				       vmf->address, &vmf->ptl);
 		if (unlikely(!vmf->pte))
 			return 0;
-		vmf->orig_pte = ptep_get_lockless(vmf->pte);
+		vmf->orig_pte = ptep_get_lockless_norecency(vmf->pte);
 		vmf->flags |= FAULT_FLAG_ORIG_PTE_VALID;
 
 		if (pte_none(vmf->orig_pte)) {
@@ -5363,7 +5376,7 @@ static vm_fault_t handle_pte_fault(struct vm_fault *vmf)
 
 	spin_lock(vmf->ptl);
 	entry = vmf->orig_pte;
-	if (unlikely(!pte_same(ptep_get(vmf->pte), entry))) {
+	if (unlikely(!pte_same_norecency(ptep_get(vmf->pte), entry))) {
 		update_mmu_tlb(vmf->vma, vmf->address, vmf->pte);
 		goto unlock;
 	}
From patchwork Thu Feb 15 12:17:56 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13558310
From: Ryan Roberts
To: David Hildenbrand, Mark Rutland, Catalin Marinas, Will Deacon,
 Alexander Shishkin, Jiri Olsa, Ian Rogers, Adrian Hunter, Andrew Morton,
 Muchun Song
Cc: Ryan Roberts, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [RFC PATCH v1 4/4] arm64/mm: Override ptep_get_lockless_norecency()
Date: Thu, 15 Feb 2024 12:17:56 +0000
Message-Id: <20240215121756.2734131-5-ryan.roberts@arm.com>
In-Reply-To: <20240215121756.2734131-1-ryan.roberts@arm.com>
References: <20240215121756.2734131-1-ryan.roberts@arm.com>

Override ptep_get_lockless_norecency() when CONFIG_ARM64_CONTPTE is
enabled. Because this API doesn't require the access and dirty bits to
be accurate, in the contpte case we can avoid reading all ptes in the
contpte block to collect those bits, in contrast to
ptep_get_lockless().

Signed-off-by: Ryan Roberts
Reviewed-by: David Hildenbrand
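
To see what the override avoids, compare it with a simplified model of
the precise read (illustrative only; contpte_align_down() is modelled
on the arm64 contpte code, and the consistency/retry logic of the real
contpte_ptep_get_lockless() is omitted):

/* Precise read: gather young/dirty from every pte in the block. */
static pte_t precise_model(pte_t *ptep)
{
	pte_t *first = contpte_align_down(ptep);	/* block start */
	pte_t pte = __ptep_get(ptep);
	int i;

	for (i = 0; i < CONT_PTES; i++) {	/* e.g. 16 reads with 4K pages */
		pte_t sub = __ptep_get(first + i);

		if (pte_young(sub))
			pte = pte_mkyoung(pte);
		if (pte_dirty(sub))
			pte = pte_mkdirty(pte);
	}
	return pte;	/* plus consistency retries, omitted here */
}

/* Norecency read: a single, naturally atomic load. */
static inline pte_t norecency_model(pte_t *ptep)
{
	return __ptep_get(ptep);
}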
---
 arch/arm64/include/asm/pgtable.h | 6 ++++++
 1 file changed, 6 insertions(+)

--
2.25.1

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 401087e8a43d..c0e4ccf74714 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1287,6 +1287,12 @@ static inline pte_t ptep_get_lockless(pte_t *ptep)
 	return contpte_ptep_get_lockless(ptep);
 }
 
+#define ptep_get_lockless_norecency ptep_get_lockless_norecency
+static inline pte_t ptep_get_lockless_norecency(pte_t *ptep)
+{
+	return __ptep_get(ptep);
+}
+
 static inline void set_pte(pte_t *ptep, pte_t pte)
 {
 	/*