From patchwork Mon Apr 8 04:24:36 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13620558
From: Lance Yang
To: akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com,
    fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com,
    xiehuan09@gmail.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com,
    peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
Date: Mon, 8 Apr 2024 12:24:36 +0800
Message-Id: <20240408042437.10951-2-ioworker0@gmail.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20240408042437.10951-1-ioworker0@gmail.com>
References: <20240408042437.10951-1-ioworker0@gmail.com>

This patch optimizes lazyfreeing with PTE-mapped mTHP[1] (inspired by
David Hildenbrand[2]). We aim to avoid unnecessary folio splitting if
the large folio is fully mapped within the target range.

If a large folio is locked or shared, or if we fail to split it, we
just leave it in place and advance to the next PTE in the range. Note
that this changes the existing behavior; previously, any such failure
caused the entire operation to give up. As large folios become more
common, sticking to the old way could result in wasted opportunities.

On an Intel i5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios
of the same size results in the following runtimes for madvise(MADV_FREE)
in seconds (shorter is better):

Folio Size |   Old    |   New    | Change
------------------------------------------
      4KiB | 0.590251 | 0.590259 |    0%
     16KiB | 2.990447 | 0.185655 |  -94%
     32KiB | 2.547831 | 0.104870 |  -95%
     64KiB | 2.457796 | 0.052812 |  -97%
    128KiB | 2.281034 | 0.032777 |  -99%
    256KiB | 2.230387 | 0.017496 |  -99%
    512KiB | 2.189106 | 0.010781 |  -99%
   1024KiB | 2.183949 | 0.007753 |  -99%
   2048KiB | 0.002799 | 0.002804 |    0%

[1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
[2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com

Signed-off-by: Lance Yang
---
 include/linux/pgtable.h |  34 +++++++++
 mm/internal.h           |  12 +++-
 mm/madvise.c            | 149 ++++++++++++++++++++++------------
 mm/memory.c             |   4 +-
 4 files changed, 129 insertions(+), 70 deletions(-)
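[Editor's note] The madvise(MADV_FREE) call timed above can be exercised from
userspace roughly as sketched below. This is an illustration only, not part of
the patch or the original benchmark harness: it assumes anon mTHP has been
enabled via the per-size /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB
knobs so that the 1GiB VMA ends up backed by PTE-mapped large folios, and it
omits the timing and error handling a real benchmark would need.

    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 1UL << 30; /* 1GiB VMA, as in the table above */
            char *buf;

            buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED)
                    return 1;

            /* Opt in to (m)THP; the actual folio size depends on sysfs settings. */
            madvise(buf, len, MADV_HUGEPAGE);

            /* Fault in every page so the range is populated with large folios. */
            memset(buf, 1, len);

            /* The call being timed: mark the whole range as lazily freeable. */
            madvise(buf, len, MADV_FREE);

            munmap(buf, len);
            return 0;
    }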
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 0f4b2faa1d71..4dd442787420 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -489,6 +489,40 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 }
 #endif
 
+#ifndef mkold_clean_ptes
+/**
+ * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same folio
+ *		as old and clean.
+ * @mm: Address space the pages are mapped into.
+ * @addr: Address the first page is mapped at.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of entries to mark old and clean.
+ *
+ * May be overridden by the architecture; otherwise, implemented by
+ * get_and_clear/modify/set for each pte in the range.
+ *
+ * Note that PTE bits in the PTE range besides the PFN can differ. For example,
+ * some PTEs might be write-protected.
+ *
+ * Context: The caller holds the page table lock. The PTEs map consecutive
+ * pages that belong to the same folio. The PTEs are all in the same PMD.
+ */
+static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
+				    pte_t *ptep, unsigned int nr)
+{
+	pte_t pte;
+
+	for (;;) {
+		pte = ptep_get_and_clear(mm, addr, ptep);
+		set_pte_at(mm, addr, ptep, pte_mkclean(pte_mkold(pte)));
+		if (--nr == 0)
+			break;
+		ptep++;
+		addr += PAGE_SIZE;
+	}
+}
+#endif
+
 static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep)
 {
diff --git a/mm/internal.h b/mm/internal.h
index 57c1055d5568..792a9baf0d14 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -132,6 +132,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  *		  first one is writable.
  * @any_young: Optional pointer to indicate whether any entry except the
  *		  first one is young.
+ * @any_dirty: Optional pointer to indicate whether any entry except the
+ *		  first one is dirty.
  *
  * Detect a PTE batch: consecutive (present) PTEs that map consecutive
  * pages of the same large folio.
@@ -147,18 +149,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  */
 static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
-		bool *any_writable, bool *any_young)
+		bool *any_writable, bool *any_young, bool *any_dirty)
 {
 	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
 	const pte_t *end_ptep = start_ptep + max_nr;
 	pte_t expected_pte, *ptep;
-	bool writable, young;
+	bool writable, young, dirty;
 	int nr;
 
 	if (any_writable)
 		*any_writable = false;
 	if (any_young)
 		*any_young = false;
+	if (any_dirty)
+		*any_dirty = false;
 
 	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
 	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
@@ -174,6 +178,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 			writable = !!pte_write(pte);
 		if (any_young)
 			young = !!pte_young(pte);
+		if (any_dirty)
+			dirty = !!pte_dirty(pte);
 		pte = __pte_batch_clear_ignored(pte, flags);
 
 		if (!pte_same(pte, expected_pte))
@@ -191,6 +197,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 			*any_writable |= writable;
 		if (any_young)
 			*any_young |= young;
+		if (any_dirty)
+			*any_dirty |= dirty;
 
 		nr = pte_batch_hint(ptep, pte);
 		expected_pte = pte_advance_pfn(expected_pte, nr);
diff --git a/mm/madvise.c b/mm/madvise.c
index bf26cf2b7715..0777df2e3691 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -321,6 +321,39 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
 	       file_permission(vma->vm_file, MAY_WRITE) == 0;
 }
 
+static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
+					  struct folio *folio, pte_t *ptep,
+					  pte_t pte, bool *any_young,
+					  bool *any_dirty)
+{
+	int max_nr = (end - addr) / PAGE_SIZE;
+	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+
+	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
+			       any_young, any_dirty);
+}
+
+static inline bool madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
+					   unsigned long addr,
+					   struct folio *folio, pte_t **pte,
+					   spinlock_t **ptl)
+{
+	int err;
+
+	if (!folio_trylock(folio))
+		return false;
+
+	folio_get(folio);
+	pte_unmap_unlock(*pte, *ptl);
+	err = split_folio(folio);
+	folio_unlock(folio);
+	folio_put(folio);
+
+	*pte = pte_offset_map_lock(mm, pmd, addr, ptl);
+
+	return err == 0;
+}
+
 static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 				unsigned long addr, unsigned long end,
 				struct mm_walk *walk)
@@ -456,41 +489,29 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 		 * next pte in the range.
 		 */
 		if (folio_test_large(folio)) {
-			const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
-						FPB_IGNORE_SOFT_DIRTY;
-			int max_nr = (end - addr) / PAGE_SIZE;
 			bool any_young;
 
-			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
-					     fpb_flags, NULL, &any_young);
-			if (any_young)
-				ptent = pte_mkyoung(ptent);
+			nr = madvise_folio_pte_batch(addr, end, folio, pte,
+						     ptent, &any_young, NULL);
 			if (nr < folio_nr_pages(folio)) {
-				int err;
-
 				if (folio_likely_mapped_shared(folio))
 					continue;
 				if (pageout_anon_only_filter && !folio_test_anon(folio))
 					continue;
-				if (!folio_trylock(folio))
-					continue;
-				folio_get(folio);
+
 				arch_leave_lazy_mmu_mode();
-				pte_unmap_unlock(start_pte, ptl);
-				start_pte = NULL;
-				err = split_folio(folio);
-				folio_unlock(folio);
-				folio_put(folio);
-				start_pte = pte =
-					pte_offset_map_lock(mm, pmd, addr, &ptl);
+				if (madvise_pte_split_folio(mm, pmd, addr,
+							    folio, &start_pte, &ptl))
+					nr = 0;
 				if (!start_pte)
 					break;
+				pte = start_pte;
 				arch_enter_lazy_mmu_mode();
-				if (!err)
-					nr = 0;
 				continue;
 			}
+
+			if (any_young)
+				ptent = pte_mkyoung(ptent);
 		}
 
 		/*
@@ -687,47 +708,54 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 			continue;
 
 		/*
-		 * If pmd isn't transhuge but the folio is large and
-		 * is owned by only this process, split it and
-		 * deactivate all pages.
+		 * If we encounter a large folio, only split it if it is not
+		 * fully mapped within the range we are operating on. Otherwise
+		 * leave it as is so that it can be marked as lazyfree. If we
+		 * fail to split a folio, leave it in place and advance to the
+		 * next pte in the range.
 		 */
 		if (folio_test_large(folio)) {
-			int err;
+			bool any_young, any_dirty;
 
+			nr = madvise_folio_pte_batch(addr, end, folio, pte,
+						     ptent, &any_young, &any_dirty);
 
-			if (folio_likely_mapped_shared(folio))
-				break;
-			if (!folio_trylock(folio))
-				break;
-			folio_get(folio);
-			arch_leave_lazy_mmu_mode();
-			pte_unmap_unlock(start_pte, ptl);
-			start_pte = NULL;
-			err = split_folio(folio);
+			if (nr < folio_nr_pages(folio)) {
+				if (folio_likely_mapped_shared(folio))
+					continue;
+
+				arch_leave_lazy_mmu_mode();
+				if (madvise_pte_split_folio(mm, pmd, addr,
+							    folio, &start_pte, &ptl))
+					nr = 0;
+				if (!start_pte)
+					break;
+				pte = start_pte;
+				arch_enter_lazy_mmu_mode();
+				continue;
+			}
+
+			if (any_young)
+				ptent = pte_mkyoung(ptent);
+			if (any_dirty)
+				ptent = pte_mkdirty(ptent);
+		}
+
+		if (!folio_trylock(folio))
+			continue;
+		/*
+		 * If we have a large folio at this point, we know it is fully mapped
+		 * so if its mapcount is the same as its number of pages, it must be
+		 * exclusive.
+		 */
+		if (folio_mapcount(folio) != folio_nr_pages(folio)) {
 			folio_unlock(folio);
-			folio_put(folio);
-			if (err)
-				break;
-			start_pte = pte =
-				pte_offset_map_lock(mm, pmd, addr, &ptl);
-			if (!start_pte)
-				break;
-			arch_enter_lazy_mmu_mode();
-			pte--;
-			addr -= PAGE_SIZE;
 			continue;
 		}
+		folio_unlock(folio);
 
 		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
 			if (!folio_trylock(folio))
 				continue;
-			/*
-			 * If folio is shared with others, we mustn't clear
-			 * the folio's dirty flag.
-			 */
-			if (folio_mapcount(folio) != 1) {
-				folio_unlock(folio);
-				continue;
-			}
 
 			if (folio_test_swapcache(folio) &&
 			    !folio_free_swap(folio)) {
@@ -740,19 +768,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 		}
 
 		if (pte_young(ptent) || pte_dirty(ptent)) {
-			/*
-			 * Some of architecture(ex, PPC) don't update TLB
-			 * with set_pte_at and tlb_remove_tlb_entry so for
-			 * the portability, remap the pte with old|clean
-			 * after pte clearing.
-			 */
-			ptent = ptep_get_and_clear_full(mm, addr, pte,
-							tlb->fullmm);
-
-			ptent = pte_mkold(ptent);
-			ptent = pte_mkclean(ptent);
-			set_pte_at(mm, addr, pte, ptent);
-			tlb_remove_tlb_entry(tlb, pte, addr);
+			mkold_clean_ptes(mm, addr, pte, nr);
+			tlb_remove_tlb_entries(tlb, pte, nr, addr);
 		}
 		folio_mark_lazyfree(folio);
 	}
diff --git a/mm/memory.c b/mm/memory.c
index 1723c8ddf9cb..fe9d4d64c627 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 		flags |= FPB_IGNORE_SOFT_DIRTY;
 
 	nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
-			     &any_writable, NULL);
+			     &any_writable, NULL, NULL);
 	folio_ref_add(folio, nr);
 	if (folio_test_anon(folio)) {
 		if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
@@ -1559,7 +1559,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
 	 */
 	if (unlikely(folio_test_large(folio) && max_nr != 1)) {
 		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
-				     NULL, NULL);
+				     NULL, NULL, NULL);
 
 		zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
 				       addr, details, rss, force_flush,

From patchwork Mon Apr 8 04:24:37 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13620559
From: Lance Yang
To: akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com, mhocko@suse.com,
    fengwei.yin@intel.com, zokeefe@google.com, shy828301@gmail.com,
    xiehuan09@gmail.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com,
    peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v5 2/2] mm/arm64: override mkold_clean_ptes() batch helper
Date: Mon, 8 Apr 2024 12:24:37 +0800
Message-Id: <20240408042437.10951-3-ioworker0@gmail.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20240408042437.10951-1-ioworker0@gmail.com>
References: <20240408042437.10951-1-ioworker0@gmail.com>

The per-pte get_and_clear/modify/set approach would result in
unfolding/refolding for contpte mappings on arm64. So we need to
override mkold_clean_ptes() for arm64 to avoid it.

Suggested-by: David Hildenbrand
Suggested-by: Barry Song <21cnbao@gmail.com>
Suggested-by: Ryan Roberts
Signed-off-by: Lance Yang
---
 arch/arm64/include/asm/pgtable.h | 55 ++++++++++++++++++++++++++++++++
 arch/arm64/mm/contpte.c          | 15 +++++++++
 2 files changed, 70 insertions(+)
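[Editor's note] For background on why a batch override pays off here: with
arm64's contiguous-PTE (contpte) optimization, a block of consecutive entries
(16 PTEs covering 64KiB with 4KiB pages) shares the contiguous bit, so
rewriting a single entry forces the whole block to be unfolded and later
refolded. The sketch below only illustrates the boundary condition separating
the "whole blocks" case from the "partial range" case handled in the patch;
the constants are assumptions for the 4KiB-page configuration and this is not
the kernel's contpte implementation.

    /* Illustration only, not kernel code. Assumes 4KiB pages, 16 PTEs per block. */
    #define EXAMPLE_PAGE_SIZE  4096UL
    #define EXAMPLE_CONT_PTES  16UL
    #define EXAMPLE_CONT_SIZE  (EXAMPLE_CONT_PTES * EXAMPLE_PAGE_SIZE)

    /*
     * A range [addr, addr + nr * EXAMPLE_PAGE_SIZE) that starts and ends on a
     * contpte block boundary can be marked old/clean in place; a range that
     * cuts into a block at either end forces that block to be unfolded first.
     */
    static inline int range_covers_whole_contpte_blocks(unsigned long addr,
                                                        unsigned int nr)
    {
            unsigned long len = nr * EXAMPLE_PAGE_SIZE;

            return !(addr & (EXAMPLE_CONT_SIZE - 1)) &&
                   !(len & (EXAMPLE_CONT_SIZE - 1));
    }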
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 9fd8613b2db2..395754638a9a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1223,6 +1223,34 @@ static inline void __wrprotect_ptes(struct mm_struct *mm, unsigned long address,
 		__ptep_set_wrprotect(mm, address, ptep);
 }
 
+static inline void ___ptep_mkold_clean(struct mm_struct *mm, unsigned long addr,
+				       pte_t *ptep, pte_t pte)
+{
+	pte_t old_pte;
+
+	do {
+		old_pte = pte;
+		pte = pte_mkclean(pte_mkold(pte));
+		pte_val(pte) = cmpxchg_relaxed(&pte_val(*ptep),
+					       pte_val(old_pte), pte_val(pte));
+	} while (pte_val(pte) != pte_val(old_pte));
+}
+
+static inline void __ptep_mkold_clean(struct mm_struct *mm, unsigned long addr,
+				      pte_t *ptep)
+{
+	___ptep_mkold_clean(mm, addr, ptep, __ptep_get(ptep));
+}
+
+static inline void __mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
+				      pte_t *ptep, unsigned int nr)
+{
+	unsigned int i;
+
+	for (i = 0; i < nr; i++, addr += PAGE_SIZE, ptep++)
+		__ptep_mkold_clean(mm, addr, ptep);
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_PMDP_SET_WRPROTECT
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
@@ -1379,6 +1407,8 @@ extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 				unsigned long addr, pte_t *ptep,
 				pte_t entry, int dirty);
+extern void contpte_mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
+				     pte_t *ptep, unsigned int nr);
 
 static __always_inline void contpte_try_fold(struct mm_struct *mm,
 				unsigned long addr, pte_t *ptep, pte_t pte)
@@ -1603,6 +1633,30 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 	return contpte_ptep_set_access_flags(vma, addr, ptep, entry, dirty);
 }
 
+#define mkold_clean_ptes mkold_clean_ptes
+static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
+				    pte_t *ptep, unsigned int nr)
+{
+	if (likely(nr == 1)) {
+		/*
+		 * Optimization: mkold_clean_ptes() can only be called for present
+		 * ptes so we only need to check contig bit as condition for unfold,
+		 * and we can remove the contig bit from the pte we read to avoid
+		 * re-reading. This speeds up madvise(MADV_FREE) which is sensitive
+		 * for order-0 folios. Equivalent to contpte_try_unfold().
+		 */
+		pte_t orig_pte = __ptep_get(ptep);
+
+		if (unlikely(pte_cont(orig_pte))) {
+			__contpte_try_unfold(mm, addr, ptep, orig_pte);
+			orig_pte = pte_mknoncont(orig_pte);
+		}
+		___ptep_mkold_clean(mm, addr, ptep, orig_pte);
+	} else {
+		contpte_mkold_clean_ptes(mm, addr, ptep, nr);
+	}
+}
+
 #else /* CONFIG_ARM64_CONTPTE */
 
 #define ptep_get				__ptep_get
@@ -1622,6 +1676,7 @@ static inline int ptep_set_access_flags(struct vm_area_struct *vma,
 #define wrprotect_ptes				__wrprotect_ptes
 #define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
 #define ptep_set_access_flags			__ptep_set_access_flags
+#define mkold_clean_ptes			__mkold_clean_ptes
 
 #endif /* CONFIG_ARM64_CONTPTE */
 
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 1b64b4c3f8bf..dbff9c5e9eff 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -361,6 +361,21 @@ void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(contpte_wrprotect_ptes);
 
+void contpte_mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
+			      pte_t *ptep, unsigned int nr)
+{
+	/*
+	 * If clearing the young and dirty bits for an entire contig range, we can
+	 * avoid unfolding. Just set old/clean and wait for the later mmu_gather
+	 * flush to invalidate the tlb. If it's a partial range though, we need to
+	 * unfold.
+	 */
+
+	contpte_try_unfold_partial(mm, addr, ptep, nr);
+	__mkold_clean_ptes(mm, addr, ptep, nr);
+}
+EXPORT_SYMBOL_GPL(contpte_mkold_clean_ptes);
+
 int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 					unsigned long addr, pte_t *ptep,
 					pte_t entry, int dirty)