From patchwork Sat Apr 13 00:22:18 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13628386
From: Lance Yang
To: akpm@linux-foundation.org
Cc: zokeefe@google.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
 shy828301@gmail.com, david@redhat.com, mhocko@suse.com,
 fengwei.yin@intel.com, xiehuan09@gmail.com, wangkefeng.wang@huawei.com,
 songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v6 1/2] mm/madvise: optimize lazyfreeing with mTHP in
 madvise_free
Date: Sat, 13 Apr 2024 08:22:18 +0800
Message-Id: <20240413002219.71246-2-ioworker0@gmail.com>
In-Reply-To: <20240413002219.71246-1-ioworker0@gmail.com>
References: <20240413002219.71246-1-ioworker0@gmail.com>
MIME-Version: 1.0
This patch optimizes lazyfreeing with PTE-mapped mTHP [1] (inspired by
David Hildenbrand [2]). The aim is to avoid unnecessary folio splitting
when the large folio is fully mapped within the target range.

If a large folio is locked or shared, or if splitting it fails, we just
leave it in place and advance to the next PTE in the range. Note that
this changes the behavior: previously, any such failure caused the
entire operation to give up. As large folios become more common,
sticking to the old way would waste opportunities.

On an Intel i5 CPU, lazyfreeing a 1 GiB VMA backed by PTE-mapped folios
of the same size results in the following runtimes for
madvise(MADV_FREE) in seconds (shorter is better):

Folio Size |   Old    |   New    | Change
------------------------------------------
      4KiB | 0.590251 | 0.590259 |    0%
     16KiB | 2.990447 | 0.185655 |  -94%
     32KiB | 2.547831 | 0.104870 |  -95%
     64KiB | 2.457796 | 0.052812 |  -97%
    128KiB | 2.281034 | 0.032777 |  -99%
    256KiB | 2.230387 | 0.017496 |  -99%
    512KiB | 2.189106 | 0.010781 |  -99%
   1024KiB | 2.183949 | 0.007753 |  -99%
   2048KiB | 0.002799 | 0.002804 |    0%

[1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
[2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com

Signed-off-by: Lance Yang
---
 include/linux/mm_types.h |   9 +++
 include/linux/pgtable.h  |  42 +++++++++++
 mm/internal.h            |  12 +++-
 mm/madvise.c             | 147 ++++++++++++++++++++++-----------------
 mm/memory.c              |   4 +-
 5 files changed, 147 insertions(+), 67 deletions(-)
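A minimal user-space sketch of the benchmark above (an illustrative
harness, not part of this series; the folio size is assumed to be
selected beforehand via the per-size mTHP sysfs controls under
/sys/kernel/mm/transparent_hugepage/):

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <time.h>

	int main(void)
	{
		const size_t len = 1UL << 30;	/* 1 GiB VMA */
		struct timespec t0, t1;

		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED)
			return 1;

		/* Fault the range in so it is backed by (m)THP folios. */
		memset(buf, 1, len);

		clock_gettime(CLOCK_MONOTONIC, &t0);
		if (madvise(buf, len, MADV_FREE))	/* lazyfree the whole range */
			return 1;
		clock_gettime(CLOCK_MONOTONIC, &t1);

		printf("MADV_FREE: %.6f s\n",
		       (t1.tv_sec - t0.tv_sec) +
		       (t1.tv_nsec - t0.tv_nsec) / 1e9);
		munmap(buf, len);
		return 0;
	}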
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index c432add95913..3c224e25f473 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1367,6 +1367,15 @@ enum fault_flag {
 
 typedef unsigned int __bitwise zap_flags_t;
 
+/* Flags for clear_young_dirty_ptes(). */
+typedef int __bitwise cydp_t;
+
+/* Make PTEs old, as pte_mkold() does */
+#define CYDP_CLEAR_YOUNG	((__force cydp_t)BIT(0))
+
+/* Make PTEs clean, as pte_mkclean() does */
+#define CYDP_CLEAR_DIRTY	((__force cydp_t)BIT(1))
+
 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
  * other. Here is what they mean, and how to use them:
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index e2f45e22a6d1..d7958243f099 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -489,6 +489,48 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 }
 #endif
 
+#ifndef clear_young_dirty_ptes
+/**
+ * clear_young_dirty_ptes - Mark PTEs that map consecutive pages of the
+ *		same folio as old/clean.
+ * @mm: Address space the pages are mapped into.
+ * @addr: Address the first page is mapped at.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of entries to mark old/clean.
+ * @flags: Flags to modify the PTE batch semantics.
+ *
+ * May be overridden by the architecture; otherwise, implemented by
+ * get_and_clear/modify/set for each pte in the range.
+ *
+ * Note that PTE bits in the PTE range besides the PFN can differ. For example,
+ * some PTEs might be write-protected.
+ *
+ * Context: The caller holds the page table lock. The PTEs map consecutive
+ * pages that belong to the same folio. The PTEs are all in the same PMD.
+ */
+static inline void clear_young_dirty_ptes(struct mm_struct *mm,
+					  unsigned long addr, pte_t *ptep,
+					  unsigned int nr, cydp_t flags)
+{
+	pte_t pte;
+
+	for (;;) {
+		pte = ptep_get_and_clear(mm, addr, ptep);
+
+		if (flags & CYDP_CLEAR_YOUNG)
+			pte = pte_mkold(pte);
+		if (flags & CYDP_CLEAR_DIRTY)
+			pte = pte_mkclean(pte);
+
+		set_pte_at(mm, addr, ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		addr += PAGE_SIZE;
+	}
+}
+#endif
+
 static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep)
 {
diff --git a/mm/internal.h b/mm/internal.h
index 3c0f3e3f9d99..ab8fcdeaf6eb 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -134,6 +134,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  *		 first one is writable.
  * @any_young: Optional pointer to indicate whether any entry except the
  *		 first one is young.
+ * @any_dirty: Optional pointer to indicate whether any entry except the
+ *		 first one is dirty.
  *
  * Detect a PTE batch: consecutive (present) PTEs that map consecutive
  * pages of the same large folio.
@@ -149,18 +151,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  */
 static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
-		bool *any_writable, bool *any_young)
+		bool *any_writable, bool *any_young, bool *any_dirty)
 {
 	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
 	const pte_t *end_ptep = start_ptep + max_nr;
 	pte_t expected_pte, *ptep;
-	bool writable, young;
+	bool writable, young, dirty;
 	int nr;
 
 	if (any_writable)
 		*any_writable = false;
 	if (any_young)
 		*any_young = false;
+	if (any_dirty)
+		*any_dirty = false;
 
 	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
 	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
@@ -176,6 +180,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 			writable = !!pte_write(pte);
 		if (any_young)
 			young = !!pte_young(pte);
+		if (any_dirty)
+			dirty = !!pte_dirty(pte);
 		pte = __pte_batch_clear_ignored(pte, flags);
 
 		if (!pte_same(pte, expected_pte))
@@ -193,6 +199,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 			*any_writable |= writable;
 		if (any_young)
 			*any_young |= young;
+		if (any_dirty)
+			*any_dirty |= dirty;
 
 		nr = pte_batch_hint(ptep, pte);
 		expected_pte = pte_advance_pfn(expected_pte, nr);
diff --git a/mm/madvise.c b/mm/madvise.c
index d34ca6983227..b4103e2df346 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -321,6 +321,39 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
 	       file_permission(vma->vm_file, MAY_WRITE) == 0;
 }
 
+static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
+					  struct folio *folio, pte_t *ptep,
+					  pte_t pte, bool *any_young,
+					  bool *any_dirty)
+{
+	int max_nr = (end - addr) / PAGE_SIZE;
+	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+
+	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
+			       any_young, any_dirty);
+}
+
+static inline bool
+madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
+			unsigned long addr, struct folio *folio,
+			pte_t **pte, spinlock_t **ptl)
+{
+	int err;
+
+	if (!folio_trylock(folio))
+		return false;
+
+	folio_get(folio);
+	pte_unmap_unlock(*pte, *ptl);
+	err = split_folio(folio);
+	folio_unlock(folio);
+	folio_put(folio);
+
+	*pte = pte_offset_map_lock(mm, pmd, addr, ptl);
+
+	return err == 0;
+}
+
 static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 				unsigned long addr, unsigned long end,
 				struct mm_walk *walk)
@@ -456,41 +489,30 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		 * next pte in the range.
 		 */
 		if (folio_test_large(folio)) {
-			const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
-						FPB_IGNORE_SOFT_DIRTY;
-			int max_nr = (end - addr) / PAGE_SIZE;
 			bool any_young;
 
-			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
-					     fpb_flags, NULL, &any_young);
-			if (any_young)
-				ptent = pte_mkyoung(ptent);
+			nr = madvise_folio_pte_batch(addr, end, folio, pte,
+						     ptent, &any_young, NULL);
 
 			if (nr < folio_nr_pages(folio)) {
-				int err;
-
 				if (folio_likely_mapped_shared(folio))
 					continue;
 				if (pageout_anon_only_filter && !folio_test_anon(folio))
 					continue;
-				if (!folio_trylock(folio))
-					continue;
-				folio_get(folio);
+
 				arch_leave_lazy_mmu_mode();
-				pte_unmap_unlock(start_pte, ptl);
-				start_pte = NULL;
-				err = split_folio(folio);
-				folio_unlock(folio);
-				folio_put(folio);
-				start_pte = pte =
-					pte_offset_map_lock(mm, pmd, addr, &ptl);
+				if (madvise_pte_split_folio(mm, pmd, addr,
+							    folio, &start_pte, &ptl))
+					nr = 0;
 				if (!start_pte)
 					break;
+				pte = start_pte;
 				arch_enter_lazy_mmu_mode();
-				if (!err)
-					nr = 0;
 				continue;
 			}
+
+			if (any_young)
+				ptent = pte_mkyoung(ptent);
 		}
 
 		/*
@@ -507,7 +529,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 			continue;
 
 		if (!pageout && pte_young(ptent)) {
-			mkold_ptes(vma, addr, pte, nr);
+			clear_young_dirty_ptes(mm, addr, pte, nr,
+					       CYDP_CLEAR_YOUNG);
 			tlb_remove_tlb_entries(tlb, pte, nr, addr);
 		}
 
@@ -687,44 +710,51 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 			continue;
 
 		/*
-		 * If pmd isn't transhuge but the folio is large and
-		 * is owned by only this process, split it and
-		 * deactivate all pages.
+		 * If we encounter a large folio, only split it if it is not
+		 * fully mapped within the range we are operating on. Otherwise
+		 * leave it as is so that it can be marked as lazyfree. If we
+		 * fail to split a folio, leave it in place and advance to the
+		 * next pte in the range.
		 */
 		if (folio_test_large(folio)) {
-			int err;
+			bool any_young, any_dirty;
 
-			if (folio_likely_mapped_shared(folio))
-				break;
-			if (!folio_trylock(folio))
-				break;
-			folio_get(folio);
-			arch_leave_lazy_mmu_mode();
-			pte_unmap_unlock(start_pte, ptl);
-			start_pte = NULL;
-			err = split_folio(folio);
-			folio_unlock(folio);
-			folio_put(folio);
-			if (err)
-				break;
-			start_pte = pte =
-				pte_offset_map_lock(mm, pmd, addr, &ptl);
-			if (!start_pte)
-				break;
-			arch_enter_lazy_mmu_mode();
-			pte--;
-			addr -= PAGE_SIZE;
-			continue;
+			nr = madvise_folio_pte_batch(addr, end, folio, pte,
+						     ptent, &any_young, &any_dirty);
+
+			if (nr < folio_nr_pages(folio)) {
+				if (folio_likely_mapped_shared(folio))
+					continue;
+
+				arch_leave_lazy_mmu_mode();
+				if (madvise_pte_split_folio(mm, pmd, addr,
+							    folio, &start_pte, &ptl))
+					nr = 0;
+				if (!start_pte)
+					break;
+				pte = start_pte;
+				arch_enter_lazy_mmu_mode();
+				continue;
+			}
+
+			if (any_young)
+				ptent = pte_mkyoung(ptent);
+			if (any_dirty)
+				ptent = pte_mkdirty(ptent);
 		}
 
+		if (folio_mapcount(folio) != folio_nr_pages(folio))
+			continue;
+
 		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
 			if (!folio_trylock(folio))
 				continue;
 			/*
-			 * If folio is shared with others, we mustn't clear
-			 * the folio's dirty flag.
+			 * If we have a large folio at this point, we know it is
+			 * fully mapped so if its mapcount is the same as its
+			 * number of pages, it must be exclusive.
 			 */
-			if (folio_mapcount(folio) != 1) {
+			if (folio_mapcount(folio) != folio_nr_pages(folio)) {
 				folio_unlock(folio);
 				continue;
 			}
@@ -740,19 +770,10 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 		}
 
 		if (pte_young(ptent) || pte_dirty(ptent)) {
-			/*
-			 * Some of architecture(ex, PPC) don't update TLB
-			 * with set_pte_at and tlb_remove_tlb_entry so for
-			 * the portability, remap the pte with old|clean
-			 * after pte clearing.
-			 */
-			ptent = ptep_get_and_clear_full(mm, addr, pte,
-							tlb->fullmm);
-
-			ptent = pte_mkold(ptent);
-			ptent = pte_mkclean(ptent);
-			set_pte_at(mm, addr, pte, ptent);
-			tlb_remove_tlb_entry(tlb, pte, addr);
+			clear_young_dirty_ptes(mm, addr, pte, nr,
+					       CYDP_CLEAR_YOUNG |
+					       CYDP_CLEAR_DIRTY);
+			tlb_remove_tlb_entries(tlb, pte, nr, addr);
 		}
 		folio_mark_lazyfree(folio);
 	}
diff --git a/mm/memory.c b/mm/memory.c
index 76157b32faa8..b6fa5146b260 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 			flags |= FPB_IGNORE_SOFT_DIRTY;
 
 		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
-				     &any_writable, NULL);
+				     &any_writable, NULL, NULL);
 		folio_ref_add(folio, nr);
 		if (folio_test_anon(folio)) {
 			if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
@@ -1558,7 +1558,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
 	 */
 	if (unlikely(folio_test_large(folio) && max_nr != 1)) {
 		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
-				     NULL, NULL);
+				     NULL, NULL, NULL);
 		zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
 				       addr, details, rss, force_flush,
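As a usage note (an editor's sketch, not part of the patch): the effect
of MADV_FREE can be observed from user space via the LazyFree field
that /proc/<pid>/smaps reports per mapping. Roughly:

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	/* Sum the LazyFree fields of /proc/self/smaps (illustrative parsing). */
	static long lazyfree_kb(void)
	{
		char line[256];
		long total = 0, kb;
		FILE *f = fopen("/proc/self/smaps", "r");

		if (!f)
			return -1;
		while (fgets(line, sizeof(line), f))
			if (sscanf(line, "LazyFree: %ld kB", &kb) == 1)
				total += kb;
		fclose(f);
		return total;
	}

	int main(void)
	{
		const size_t len = 64UL << 20;	/* 64 MiB */
		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return 1;
		memset(buf, 1, len);

		madvise(buf, len, MADV_FREE);
		printf("LazyFree after MADV_FREE: %ld kB\n", lazyfree_kb());
		return 0;
	}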
From patchwork Sat Apr 13 00:22:19 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13628387
From: Lance Yang
To: akpm@linux-foundation.org
Cc: zokeefe@google.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
 shy828301@gmail.com, david@redhat.com, mhocko@suse.com,
 fengwei.yin@intel.com, xiehuan09@gmail.com, wangkefeng.wang@huawei.com,
 songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v6 2/2] mm/arm64: override clear_young_dirty_ptes() batch
 helper
Date: Sat, 13 Apr 2024 08:22:19 +0800
Message-Id: <20240413002219.71246-3-ioworker0@gmail.com>
In-Reply-To: <20240413002219.71246-1-ioworker0@gmail.com>
References: <20240413002219.71246-1-ioworker0@gmail.com>
MIME-Version: 1.0
The per-PTE get_and_clear/modify/set approach would result in
unfolding/refolding for contpte mappings on arm64, so we override
clear_young_dirty_ptes() for arm64 to avoid it.

Suggested-by: David Hildenbrand
Suggested-by: Barry Song <21cnbao@gmail.com>
Signed-off-by: Ryan Roberts
Signed-off-by: Lance Yang
---
 arch/arm64/include/asm/pgtable.h | 37 ++++++++++++++++++++++++++++++++
 arch/arm64/mm/contpte.c          | 28 ++++++++++++++++++++++++
 2 files changed, 65 insertions(+)
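To make the block-widening done by contpte_clear_young_dirty_ptes() in
the diff below concrete, here is a small user-space model (an editor's
illustration only; 16-entry contpte blocks assumed, as with 4 KiB base
pages): when the first or last PTE of the requested range carries the
contiguous hint, the clear is expanded to cover those whole blocks.

	#include <stdbool.h>
	#include <stdio.h>

	#define CONT_PTES 16u	/* assumed entries per contpte block */

	struct toy_pte { bool cont, young, dirty; };

	static void toy_clear_young_dirty(struct toy_pte *ptes,
					  unsigned int idx, unsigned int nr)
	{
		unsigned int start = idx, end = idx + nr;

		if (ptes[end - 1].cont)	/* round end up to a block boundary */
			end = (end + CONT_PTES - 1) & ~(CONT_PTES - 1);
		if (ptes[start].cont)	/* round start down to a block boundary */
			start &= ~(CONT_PTES - 1);

		for (unsigned int i = start; i < end; i++)
			ptes[i].young = ptes[i].dirty = false;
	}

	int main(void)
	{
		struct toy_pte ptes[64];

		for (unsigned int i = 0; i < 64; i++)
			ptes[i] = (struct toy_pte){ true, true, true };

		/* Request [19, 39); widened to the whole blocks [16, 48). */
		toy_clear_young_dirty(ptes, 19, 20);

		for (unsigned int i = 0; i < 64; i += CONT_PTES)
			printf("block %u: young=%d\n", i / CONT_PTES,
			       ptes[i].young);
		return 0;
	}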
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 9fd8613b2db2..f951774dd2d6 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1223,6 +1223,28 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
 		__ptep_set_wrprotect(mm, address, ptep);
 }
 
+static inline void __clear_young_dirty_ptes(struct mm_struct *mm,
+					    unsigned long addr, pte_t *ptep,
+					    unsigned int nr, cydp_t flags)
+{
+	pte_t pte;
+
+	for (;;) {
+		pte = __ptep_get(ptep);
+
+		if (flags & CYDP_CLEAR_YOUNG)
+			pte = pte_mkold(pte);
+		if (flags & CYDP_CLEAR_DIRTY)
+			pte = pte_mkclean(pte);
+
+		__set_pte(ptep, pte);
+		if (--nr == 0)
+			break;
+		ptep++;
+		addr += PAGE_SIZE;
+	}
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_PMDP_SET_WRPROTECT
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
@@ -1379,6 +1401,9 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 				unsigned long addr, pte_t *ptep,
 				pte_t entry, int dirty);
+extern void contpte_clear_young_dirty_ptes(struct mm_struct *mm,
+				unsigned long addr, pte_t *ptep,
+				unsigned int nr, cydp_t flags);
 
 static __always_inline void contpte_try_fold(struct mm_struct *mm,
 				unsigned long addr, pte_t *ptep, pte_t pte)
@@ -1603,6 +1628,17 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 	return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
 }
 
+#define clear_young_dirty_ptes clear_young_dirty_ptes
+static inline void
+clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
+		       pte_t *ptep, unsigned int nr, cydp_t flags)
+{
+	if (likely(nr == 1 && !pte_cont(__ptep_get(ptep))))
+		__clear_young_dirty_ptes(mm, addr, ptep, nr, flags);
+	else
+		contpte_clear_young_dirty_ptes(mm, addr, ptep, nr, flags);
+}
+
 #else /* CONFIG_ARM64_CONTPTE */
 
 #define ptep_get				__ptep_get
@@ -1622,6 +1658,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 #define wrprotect_ptes				__wrprotect_ptes
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 #define ptep_set_access_flags			__ptep_set_access_flags
+#define clear_young_dirty_ptes			__clear_young_dirty_ptes
 
 #endif /* CONFIG_ARM64_CONTPTE */
 
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 1b64b4c3f8bf..bf3b089d9641 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -361,6 +361,34 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
 
+void contpte_clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
+				    pte_t *ptep, unsigned int nr, cydp_t flags)
+{
+	/*
+	 * We can safely clear access/dirty without needing to unfold from
+	 * the architecture's perspective, even when contpte is set. If the
+	 * range starts or ends midway through a contpte block, we can just
+	 * expand to include the full contpte block. While this is not
+	 * exactly what the core-mm asked for, it tracks access/dirty per
+	 * folio, not per page. And since we only create a contpte block
+	 * when it is covered by a single folio, we can get away with
+	 * clearing access/dirty for the whole block.
+	 */
+	unsigned long start = addr;
+	unsigned long end = start + nr * PAGE_SIZE;
+
+	if (pte_cont(__ptep_get(ptep + nr - 1)))
+		end = ALIGN(end, CONT_PTE_SIZE);
+
+	if (pte_cont(__ptep_get(ptep))) {
+		start = ALIGN_DOWN(start, CONT_PTE_SIZE);
+		ptep = contpte_align_down(ptep);
+	}
+
+	__clear_young_dirty_ptes(mm, start, ptep, (end - start) / PAGE_SIZE, flags);
+}
+EXPORT_SYMBOL_GPL(contpte_clear_young_dirty_ptes);
+
 int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 				  unsigned long addr, pte_t *ptep,
 				  pte_t entry, int dirty)