From patchwork Thu Apr 18 10:57:50 2024
X-Patchwork-Submitter: Lance Yang
X-Patchwork-Id: 13634516
From: Lance Yang
To: akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, david@redhat.com, 21cnbao@gmail.com,
    mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com,
    shy828301@gmail.com, xiehuan09@gmail.com, wangkefeng.wang@huawei.com,
    songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Lance Yang
Subject: [PATCH v9 4/4] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
Date: Thu, 18 Apr 2024 18:57:50 +0800
Message-Id: <20240418105750.98866-5-ioworker0@gmail.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20240418105750.98866-1-ioworker0@gmail.com>
References: <20240418105750.98866-1-ioworker0@gmail.com>
MIME-Version: 1.0

This patch optimizes lazyfreeing with PTE-mapped mTHP [1] (inspired by
David Hildenbrand [2]). We aim to avoid unnecessary folio splitting if
the large folio is fully mapped within the target range.

If a large folio is locked or shared, or if we fail to split it, we just
leave it in place and advance to the next PTE in the range. Note that this
changes the previous behavior: any failure of this sort used to cause the
entire operation to give up. As large folios become more common, sticking
to the old way could result in wasted opportunities.

On an Intel i5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
the same size results in the following runtimes for madvise(MADV_FREE) in
seconds (shorter is better):

Folio Size |   Old    |   New    | Change
------------------------------------------
      4KiB | 0.590251 | 0.590259 |    0%
     16KiB | 2.990447 | 0.185655 |  -94%
     32KiB | 2.547831 | 0.104870 |  -95%
     64KiB | 2.457796 | 0.052812 |  -97%
    128KiB | 2.281034 | 0.032777 |  -99%
    256KiB | 2.230387 | 0.017496 |  -99%
    512KiB | 2.189106 | 0.010781 |  -99%
   1024KiB | 2.183949 | 0.007753 |  -99%
   2048KiB | 0.002799 | 0.002804 |    0%

[1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
[2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com

Reviewed-by: Ryan Roberts
Signed-off-by: Lance Yang
Acked-by: David Hildenbrand
---
 mm/madvise.c | 85 +++++++++++++++++++++++++++-------------------------
 1 file changed, 44 insertions(+), 41 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 4597a3568e7e..375ab3234603 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -643,6 +643,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 				unsigned long end, struct mm_walk *walk)
 
 {
+	const cydp_t cydp_flags = CYDP_CLEAR_YOUNG | CYDP_CLEAR_DIRTY;
 	struct mmu_gather *tlb = walk->private;
 	struct mm_struct *mm = tlb->mm;
 	struct vm_area_struct *vma = walk->vma;
@@ -697,44 +698,57 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 			continue;
 
 		/*
-		 * If pmd isn't transhuge but the folio is large and
-		 * is owned by only this process, split it and
-		 * deactivate all pages.
+		 * If we encounter a large folio, only split it if it is not
+		 * fully mapped within the range we are operating on. Otherwise
+		 * leave it as is so that it can be marked as lazyfree. If we
+		 * fail to split a folio, leave it in place and advance to the
+		 * next pte in the range.
 		 */
 		if (folio_test_large(folio)) {
-			int err;
+			bool any_young, any_dirty;
 
-			if (folio_likely_mapped_shared(folio))
-				break;
-			if (!folio_trylock(folio))
-				break;
-			folio_get(folio);
-			arch_leave_lazy_mmu_mode();
-			pte_unmap_unlock(start_pte, ptl);
-			start_pte = NULL;
-			err = split_folio(folio);
-			folio_unlock(folio);
-			folio_put(folio);
-			if (err)
-				break;
-			start_pte = pte =
-				pte_offset_map_lock(mm, pmd, addr, &ptl);
-			if (!start_pte)
-				break;
-			arch_enter_lazy_mmu_mode();
-			pte--;
-			addr -= PAGE_SIZE;
-			continue;
+			nr = madvise_folio_pte_batch(addr, end, folio, pte,
+						     ptent, &any_young, &any_dirty);
+
+			if (nr < folio_nr_pages(folio)) {
+				int err;
+
+				if (folio_likely_mapped_shared(folio))
+					continue;
+				if (!folio_trylock(folio))
+					continue;
+				folio_get(folio);
+				arch_leave_lazy_mmu_mode();
+				pte_unmap_unlock(start_pte, ptl);
+				start_pte = NULL;
+				err = split_folio(folio);
+				folio_unlock(folio);
+				folio_put(folio);
+				start_pte = pte =
+					pte_offset_map_lock(mm, pmd, addr, &ptl);
+				if (!start_pte)
+					break;
+				arch_enter_lazy_mmu_mode();
+				if (!err)
+					nr = 0;
+				continue;
+			}
+
+			if (any_young)
+				ptent = pte_mkyoung(ptent);
+			if (any_dirty)
+				ptent = pte_mkdirty(ptent);
 		}
 
 		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
 			if (!folio_trylock(folio))
 				continue;
 			/*
-			 * If folio is shared with others, we mustn't clear
-			 * the folio's dirty flag.
+			 * If we have a large folio at this point, we know it is
+			 * fully mapped so if its mapcount is the same as its
+			 * number of pages, it must be exclusive.
 			 */
-			if (folio_mapcount(folio) != 1) {
+			if (folio_mapcount(folio) != folio_nr_pages(folio)) {
 				folio_unlock(folio);
 				continue;
 			}
@@ -750,19 +764,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
 		}
 
 		if (pte_young(ptent) || pte_dirty(ptent)) {
-			/*
-			 * Some of architecture(ex, PPC) don't update TLB
-			 * with set_pte_at and tlb_remove_tlb_entry so for
-			 * the portability, remap the pte with old|clean
-			 * after pte clearing.
-			 */
-			ptent = ptep_get_and_clear_full(mm, addr, pte,
-							tlb->fullmm);
-
-			ptent = pte_mkold(ptent);
-			ptent = pte_mkclean(ptent);
-			set_pte_at(mm, addr, pte, ptent);
-			tlb_remove_tlb_entry(tlb, pte, addr);
+			clear_young_dirty_ptes(vma, addr, pte, nr, cydp_flags);
+			tlb_remove_tlb_entries(tlb, pte, nr, addr);
 		}
 		folio_mark_lazyfree(folio);
 	}
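
For reference, below is a minimal userspace sketch of how a measurement like
the table above could be taken; this is an illustrative assumption, not the
harness actually used for those numbers. It maps and populates a 1 GiB
anonymous VMA, then times a single madvise(MADV_FREE) call over the whole
range. Selecting the mTHP folio size is assumed to happen separately, e.g.
via the per-size /sys/kernel/mm/transparent_hugepage/hugepages-*kB/enabled
knobs.

/*
 * Hypothetical benchmark sketch (not the author's actual harness): map a
 * 1 GiB anonymous region, write to every page so it is populated (and can
 * be backed by mTHP where enabled), then time one madvise(MADV_FREE) call
 * over the whole range.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define SIZE (1UL << 30)	/* 1 GiB, matching the table above */

int main(void)
{
	struct timespec t0, t1;
	char *buf;

	buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Populate the VMA so MADV_FREE has mapped folios to work on. */
	memset(buf, 1, SIZE);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	if (madvise(buf, SIZE, MADV_FREE)) {
		perror("madvise(MADV_FREE)");
		return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("madvise(MADV_FREE) over 1 GiB: %.6f s\n",
	       (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);

	munmap(buf, SIZE);
	return 0;
}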