From patchwork Thu Apr 17 15:43:15 2025
X-Patchwork-Submitter: Fan Ni
X-Patchwork-Id: 14055816
From: nifan.cxl@gmail.com
To: muchun.song@linux.dev, willy@infradead.org
Cc: mcgrof@kernel.org, a.manzanares@samsung.com, dave@stgolabs.net,
	akpm@linux-foundation.org, david@redhat.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Fan Ni
Subject: [PATCH 3/3] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
Date: Thu, 17 Apr 2025 08:43:15 -0700
Message-ID: <20250417155530.124073-3-nifan.cxl@gmail.com>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250417155530.124073-1-nifan.cxl@gmail.com>
References: <20250417155530.124073-1-nifan.cxl@gmail.com>
From: Fan Ni

The function __unmap_hugepage_range() has two kinds of users:

1) unmap_hugepage_range(), which passes in the head page of a folio.
   Since unmap_hugepage_range() already takes a folio and there are no
   other uses of the folio struct in the function, it is natural for
   __unmap_hugepage_range() to take a folio as well.

2) All other users, which pass in a NULL pointer.

In both cases a folio can be passed in, so refactor
__unmap_hugepage_range() to take a folio.

Signed-off-by: Fan Ni
Reviewed-by: Sidhartha Kumar

---
Question: if the change in this patch makes sense, should we try to
convert all "page" uses in __unmap_hugepage_range() to folio?
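As an illustration only (not part of this patch), one way such a conversion
could look is to do the reference check at the folio level rather than
against the head page, assuming the loop still derives "page" from the PTE
as it does today:

	/*
	 * Illustrative sketch, not from this patch: page_folio() maps any
	 * page (head or tail) back to its folio, so the comparison stays
	 * correct without calling folio_page(ref_folio, 0).
	 */
	if (ref_folio) {
		if (page_folio(page) != ref_folio) {
			spin_unlock(ptl);
			continue;
		}
	}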
---
 include/linux/hugetlb.h |  2 +-
 mm/hugetlb.c            | 10 +++++-----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index b7699f35c87f..d6c503dd2f7d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -133,7 +133,7 @@ void unmap_hugepage_range(struct vm_area_struct *,
 void __unmap_hugepage_range(struct mmu_gather *tlb,
			  struct vm_area_struct *vma,
			  unsigned long start, unsigned long end,
-			  struct page *ref_page, zap_flags_t zap_flags);
+			  struct folio *ref_folio, zap_flags_t zap_flags);
 void hugetlb_report_meminfo(struct seq_file *);
 int hugetlb_report_node_meminfo(char *buf, int len, int nid);
 void hugetlb_show_meminfo_node(int nid);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3181dbe0c4bb..7d280ab23784 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5833,7 +5833,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 
 void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
			    unsigned long start, unsigned long end,
-			    struct page *ref_page, zap_flags_t zap_flags)
+			    struct folio *ref_folio, zap_flags_t zap_flags)
 {
	struct mm_struct *mm = vma->vm_mm;
	unsigned long address;
@@ -5910,8 +5910,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
		 * page is being unmapped, not a range. Ensure the page we
		 * are about to unmap is the actual page of interest.
		 */
-		if (ref_page) {
-			if (page != ref_page) {
+		if (ref_folio) {
+			if (page != folio_page(ref_folio, 0)) {
				spin_unlock(ptl);
				continue;
			}
@@ -5977,7 +5977,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
		/*
		 * Bail out after unmapping reference page if supplied
		 */
-		if (ref_page)
+		if (ref_folio)
			break;
	}
	tlb_end_vma(tlb, vma);
@@ -6052,7 +6052,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
	tlb_gather_mmu(&tlb, vma->vm_mm);
 
	__unmap_hugepage_range(&tlb, vma, start, end,
-			       folio_page(ref_folio, 0), zap_flags);
+			       ref_folio, zap_flags);
 
	mmu_notifier_invalidate_range_end(&range);
	tlb_finish_mmu(&tlb);