From patchwork Fri Feb 14 09:30:14 2025
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13974667
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: 21cnbao@gmail.com, baolin.wang@linux.alibaba.com, chrisl@kernel.org,
 david@redhat.com, ioworker0@gmail.com, kasong@tencent.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 linux-riscv@lists.infradead.org, lorenzo.stoakes@oracle.com,
 ryan.roberts@arm.com, v-songbaohua@oppo.com, x86@kernel.org,
 ying.huang@intel.com, zhengtangquan@oppo.com
Subject: [PATCH v4 3/4] mm: Support batched unmap for lazyfree large folios
 during reclamation
Date: Fri, 14 Feb 2025 22:30:14 +1300
Message-Id: <20250214093015.51024-4-21cnbao@gmail.com>
In-Reply-To: <20250214093015.51024-1-21cnbao@gmail.com>
References: <20250214093015.51024-1-21cnbao@gmail.com>
From: Barry Song

Currently, the PTEs and rmap of a large folio are removed one at a
time. This is not only slow but also causes the large folio to be
unnecessarily added to deferred_split, which can lead to races between
the deferred_split shrinker callback and memory reclamation.

This patch releases all PTEs and rmap entries in a batch. Currently,
it only handles lazyfree large folios.
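For intuition on the deferred_split side of this, here is a toy
userspace model (an illustration only, not kernel code: NR_PAGES, the
map[] array, and check_partial() are invented stand-ins for the
folio's mapcount tracking). Removing one PTE at a time leaves a large
folio partially mapped after every step except the last, and each
partial-map state is what queues the folio for deferred split; a
single batched clear never exposes such a window:

 #include <stdio.h>
 #include <string.h>

 #define NR_PAGES 16	/* e.g. a 64KiB folio built from 4KiB pages */

 static int deferred_split_events;

 /* Queue the folio for deferred split if it is partially mapped. */
 static void check_partial(const char *map)
 {
 	int mapped = 0;

 	for (int i = 0; i < NR_PAGES; i++)
 		mapped += map[i];
 	if (mapped > 0 && mapped < NR_PAGES)
 		deferred_split_events++;
 }

 int main(void)
 {
 	char map[NR_PAGES];

 	/* Old behaviour: remove one PTE + rmap entry per iteration. */
 	memset(map, 1, sizeof(map));
 	deferred_split_events = 0;
 	for (int i = 0; i < NR_PAGES; i++) {
 		map[i] = 0;
 		check_partial(map);	/* partial until the last step */
 	}
 	printf("per-pte: %d partial-map states\n", deferred_split_events);

 	/* New behaviour: clear all entries of the folio in one batch. */
 	memset(map, 1, sizeof(map));
 	deferred_split_events = 0;
 	memset(map, 0, sizeof(map));
 	check_partial(map);		/* fully unmapped, nothing queued */
 	printf("batched: %d partial-map states\n", deferred_split_events);

 	return 0;
 }

The per-PTE loop reports 15 partial-map states for a 16-page folio,
the batched path reports none.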
The microbenchmark below reclaims 128MB of lazyfree large folios
whose size is 64KiB:

 #include <stdio.h>
 #include <sys/mman.h>
 #include <string.h>
 #include <time.h>

 #define SIZE 128*1024*1024 // 128 MB

 unsigned long read_split_deferred()
 {
 	FILE *file = fopen("/sys/kernel/mm/transparent_hugepage"
 			"/hugepages-64kB/stats/split_deferred", "r");
 	if (!file) {
 		perror("Error opening file");
 		return 0;
 	}

 	unsigned long value;
 	if (fscanf(file, "%lu", &value) != 1) {
 		perror("Error reading value");
 		fclose(file);
 		return 0;
 	}

 	fclose(file);
 	return value;
 }

 int main(int argc, char *argv[])
 {
 	while (1) {
 		volatile int *p = mmap(0, SIZE, PROT_READ | PROT_WRITE,
 				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

 		memset((void *)p, 1, SIZE);
 		madvise((void *)p, SIZE, MADV_FREE);

 		clock_t start_time = clock();
 		unsigned long start_split = read_split_deferred();
 		madvise((void *)p, SIZE, MADV_PAGEOUT);
 		clock_t end_time = clock();
 		unsigned long end_split = read_split_deferred();

 		double elapsed_time = (double)(end_time - start_time) / CLOCKS_PER_SEC;
 		printf("Time taken by reclamation: %f seconds, split_deferred: %ld\n",
 			elapsed_time, end_split - start_split);

 		munmap((void *)p, SIZE);
 	}

 	return 0;
 }

w/o patch:
~ # ./a.out
Time taken by reclamation: 0.177418 seconds, split_deferred: 2048
Time taken by reclamation: 0.178348 seconds, split_deferred: 2048
Time taken by reclamation: 0.174525 seconds, split_deferred: 2048
Time taken by reclamation: 0.171620 seconds, split_deferred: 2048
Time taken by reclamation: 0.172241 seconds, split_deferred: 2048
Time taken by reclamation: 0.174003 seconds, split_deferred: 2048
Time taken by reclamation: 0.171058 seconds, split_deferred: 2048
Time taken by reclamation: 0.171993 seconds, split_deferred: 2048
Time taken by reclamation: 0.169829 seconds, split_deferred: 2048
Time taken by reclamation: 0.172895 seconds, split_deferred: 2048
Time taken by reclamation: 0.176063 seconds, split_deferred: 2048
Time taken by reclamation: 0.172568 seconds, split_deferred: 2048
Time taken by reclamation: 0.171185 seconds, split_deferred: 2048
Time taken by reclamation: 0.170632 seconds, split_deferred: 2048
Time taken by reclamation: 0.170208 seconds, split_deferred: 2048
Time taken by reclamation: 0.174192 seconds, split_deferred: 2048
...

w/ patch:
~ # ./a.out
Time taken by reclamation: 0.074231 seconds, split_deferred: 0
Time taken by reclamation: 0.071026 seconds, split_deferred: 0
Time taken by reclamation: 0.072029 seconds, split_deferred: 0
Time taken by reclamation: 0.071873 seconds, split_deferred: 0
Time taken by reclamation: 0.073573 seconds, split_deferred: 0
Time taken by reclamation: 0.071906 seconds, split_deferred: 0
Time taken by reclamation: 0.073604 seconds, split_deferred: 0
Time taken by reclamation: 0.075903 seconds, split_deferred: 0
Time taken by reclamation: 0.073191 seconds, split_deferred: 0
Time taken by reclamation: 0.071228 seconds, split_deferred: 0
Time taken by reclamation: 0.071391 seconds, split_deferred: 0
Time taken by reclamation: 0.071468 seconds, split_deferred: 0
Time taken by reclamation: 0.071896 seconds, split_deferred: 0
Time taken by reclamation: 0.072508 seconds, split_deferred: 0
Time taken by reclamation: 0.071884 seconds, split_deferred: 0
Time taken by reclamation: 0.072433 seconds, split_deferred: 0
Time taken by reclamation: 0.071939 seconds, split_deferred: 0
...
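Averaged over the runs shown, reclamation time drops from about 0.173
seconds to about 0.072 seconds, roughly a 2.4x speedup, while
split_deferred falls from 2048 (128MB / 64KiB, i.e. every reclaimed
folio) to 0, confirming that batched unmap keeps these folios off the
deferred_split queue entirely.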
Signed-off-by: Barry Song
---
 mm/rmap.c | 72 ++++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 50 insertions(+), 22 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 89e51a7a9509..8786704bd466 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1781,6 +1781,25 @@ void folio_remove_rmap_pud(struct folio *folio, struct page *page,
 #endif
 }
 
+/* We support batch unmapping of PTEs for lazyfree large folios */
+static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
+			struct folio *folio, pte_t *ptep)
+{
+	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+	int max_nr = folio_nr_pages(folio);
+	pte_t pte = ptep_get(ptep);
+
+	if (!folio_test_anon(folio) || folio_test_swapbacked(folio))
+		return false;
+	if (pte_unused(pte))
+		return false;
+	if (pte_pfn(pte) != folio_pfn(folio))
+		return false;
+
+	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
+			       NULL, NULL) == max_nr;
+}
+
 /*
  * @arg: enum ttu_flags will be passed to this argument
  */
@@ -1794,6 +1813,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	struct page *subpage;
 	struct mmu_notifier_range range;
 	enum ttu_flags flags = (enum ttu_flags)(long)arg;
+	unsigned long nr_pages = 1, end_addr;
 	unsigned long pfn;
 	unsigned long hsz = 0;
 
@@ -1933,23 +1953,26 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			if (pte_dirty(pteval))
 				folio_mark_dirty(folio);
 		} else if (likely(pte_present(pteval))) {
-			flush_cache_page(vma, address, pfn);
-			/* Nuke the page table entry. */
-			if (should_defer_flush(mm, flags)) {
-				/*
-				 * We clear the PTE but do not flush so potentially
-				 * a remote CPU could still be writing to the folio.
-				 * If the entry was previously clean then the
-				 * architecture must guarantee that a clear->dirty
-				 * transition on a cached TLB entry is written through
-				 * and traps if the PTE is unmapped.
-				 */
-				pteval = ptep_get_and_clear(mm, address, pvmw.pte);
+			if (folio_test_large(folio) && !(flags & TTU_HWPOISON) &&
+			    can_batch_unmap_folio_ptes(address, folio, pvmw.pte))
+				nr_pages = folio_nr_pages(folio);
+			end_addr = address + nr_pages * PAGE_SIZE;
+			flush_cache_range(vma, address, end_addr);
 
-				set_tlb_ubc_flush_pending(mm, pteval, address, address + PAGE_SIZE);
-			} else {
-				pteval = ptep_clear_flush(vma, address, pvmw.pte);
-			}
+			/* Nuke the page table entry. */
+			pteval = get_and_clear_full_ptes(mm, address, pvmw.pte, nr_pages, 0);
+			/*
+			 * We clear the PTE but do not flush so potentially
+			 * a remote CPU could still be writing to the folio.
+			 * If the entry was previously clean then the
+			 * architecture must guarantee that a clear->dirty
+			 * transition on a cached TLB entry is written through
+			 * and traps if the PTE is unmapped.
+			 */
+			if (should_defer_flush(mm, flags))
+				set_tlb_ubc_flush_pending(mm, pteval, address, end_addr);
+			else
+				flush_tlb_range(vma, address, end_addr);
 			if (pte_dirty(pteval))
 				folio_mark_dirty(folio);
 		} else {
@@ -2027,7 +2050,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 * redirtied either using the page table or a previously
 				 * obtained GUP reference.
 				 */
-				set_pte_at(mm, address, pvmw.pte, pteval);
+				set_ptes(mm, address, pvmw.pte, pteval, nr_pages);
 				folio_set_swapbacked(folio);
 				goto walk_abort;
 			} else if (ref_count != 1 + map_count) {
@@ -2040,10 +2063,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 				 * We'll come back here later and detect if the folio was
 				 * dirtied when the additional reference is gone.
 				 */
-				set_pte_at(mm, address, pvmw.pte, pteval);
+				set_ptes(mm, address, pvmw.pte, pteval, nr_pages);
 				goto walk_abort;
 			}
-			dec_mm_counter(mm, MM_ANONPAGES);
+			add_mm_counter(mm, MM_ANONPAGES, -nr_pages);
 			goto discard;
 		}
 
@@ -2108,13 +2131,18 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 			dec_mm_counter(mm, mm_counter_file(folio));
 		}
 discard:
-		if (unlikely(folio_test_hugetlb(folio)))
+		if (unlikely(folio_test_hugetlb(folio))) {
 			hugetlb_remove_rmap(folio);
-		else
-			folio_remove_rmap_pte(folio, subpage, vma);
+		} else {
+			folio_remove_rmap_ptes(folio, subpage, nr_pages, vma);
+			folio_ref_sub(folio, nr_pages - 1);
+		}
 		if (vma->vm_flags & VM_LOCKED)
 			mlock_drain_local();
 		folio_put(folio);
+		/* We have already batched the entire folio */
+		if (nr_pages > 1)
+			goto walk_done;
 		continue;
walk_abort:
 		ret = false;