From patchwork Sat Mar 25 06:56:07 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peng Zhang
X-Patchwork-Id: 13187634
From: Peng Zhang
Subject: [PATCH v3 5/6] mm: convert copy_user_huge_page() to copy_user_folio()
Date: Sat, 25 Mar 2023 14:56:07 +0800
Message-ID: <20230325065608.601391-6-zhangpeng362@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230325065608.601391-1-zhangpeng362@huawei.com>
References: <20230325065608.601391-1-zhangpeng362@huawei.com>

From: ZhangPeng

Replace copy_user_huge_page() with copy_user_folio(). copy_user_folio()
does the same as copy_user_huge_page(), but takes in folios instead of
pages. Convert copy_user_gigantic_page() to take in folios.
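
As a usage sketch (illustrative only, not taken verbatim from this patch;
the variable names dst_folio, src_page, addr, vma and npages are
placeholders), a hugetlb call site that used to pass raw struct page
pointers now passes folios, converting an existing page pointer with
page_folio():

	/* before: dst folio's head page plus a raw source page */
	copy_user_huge_page(&dst_folio->page, src_page, addr, vma, npages);

	/* after: operate on folios directly */
	copy_user_folio(dst_folio, page_folio(src_page), addr, vma, npages);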
Signed-off-by: ZhangPeng
---
 include/linux/mm.h |  8 ++++----
 mm/hugetlb.c       | 12 ++++++------
 mm/memory.c        | 28 ++++++++++++++--------------
 3 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 69dfadee23e8..6a787fe66ea1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3542,10 +3542,10 @@ extern const struct attribute_group memory_failure_attr_group;
 extern void clear_huge_page(struct page *page,
 			    unsigned long addr_hint,
 			    unsigned int pages_per_huge_page);
-extern void copy_user_huge_page(struct page *dst, struct page *src,
-				unsigned long addr_hint,
-				struct vm_area_struct *vma,
-				unsigned int pages_per_huge_page);
+void copy_user_folio(struct folio *dst, struct folio *src,
+		     unsigned long addr_hint,
+		     struct vm_area_struct *vma,
+		     unsigned int pages_per_huge_page);
 long copy_folio_from_user(struct folio *dst_folio,
 			   const void __user *usr_src,
 			   bool allow_pagefault);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1cfd20e5fe8b..85657f9007ee 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5093,8 +5093,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				ret = PTR_ERR(new_folio);
 				break;
 			}
-			copy_user_huge_page(&new_folio->page, ptepage, addr, dst_vma,
-					    npages);
+			copy_user_folio(new_folio, page_folio(ptepage), addr, dst_vma,
+					npages);
 			put_page(ptepage);
 
 			/* Install the new hugetlb folio if src pte stable */
@@ -5602,8 +5602,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto out_release_all;
 	}
 
-	copy_user_huge_page(&new_folio->page, old_page, address, vma,
-			    pages_per_huge_page(h));
+	copy_user_folio(new_folio, page_folio(old_page), address, vma,
+			pages_per_huge_page(h));
 	__folio_mark_uptodate(new_folio);
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, haddr,
@@ -6244,8 +6244,8 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 			*foliop = NULL;
 			goto out;
 		}
-		copy_user_huge_page(&folio->page, &(*foliop)->page, dst_addr, dst_vma,
-				    pages_per_huge_page(h));
+		copy_user_folio(folio, *foliop, dst_addr, dst_vma,
+				pages_per_huge_page(h));
 		folio_put(*foliop);
 		*foliop = NULL;
 	}
diff --git a/mm/memory.c b/mm/memory.c
index faf79742e0b6..4752f0e829b6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5716,21 +5716,21 @@ void clear_huge_page(struct page *page,
 	process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, page);
 }
 
-static void copy_user_gigantic_page(struct page *dst, struct page *src,
-				    unsigned long addr,
-				    struct vm_area_struct *vma,
-				    unsigned int pages_per_huge_page)
+static void copy_user_gigantic_page(struct folio *dst, struct folio *src,
+				    unsigned long addr,
+				    struct vm_area_struct *vma,
+				    unsigned int pages_per_huge_page)
 {
 	int i;
-	struct page *dst_base = dst;
-	struct page *src_base = src;
+	struct page *dst_page;
+	struct page *src_page;
 
 	for (i = 0; i < pages_per_huge_page; i++) {
-		dst = nth_page(dst_base, i);
-		src = nth_page(src_base, i);
+		dst_page = folio_page(dst, i);
+		src_page = folio_page(src, i);
 
 		cond_resched();
-		copy_user_highpage(dst, src, addr + i*PAGE_SIZE, vma);
+		copy_user_highpage(dst_page, src_page, addr + i*PAGE_SIZE, vma);
 	}
 }
 
@@ -5748,15 +5748,15 @@ static void copy_subpage(unsigned long addr, int idx, void *arg)
 			   addr, copy_arg->vma);
 }
 
-void copy_user_huge_page(struct page *dst, struct page *src,
-			 unsigned long addr_hint, struct vm_area_struct *vma,
-			 unsigned int pages_per_huge_page)
+void copy_user_folio(struct folio *dst, struct folio *src,
+		     unsigned long addr_hint, struct vm_area_struct *vma,
+		     unsigned int pages_per_huge_page)
 {
 	unsigned long addr = addr_hint &
 		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
 	struct copy_subpage_arg arg = {
-		.dst = dst,
-		.src = src,
+		.dst = &dst->page,
+		.src = &src->page,
 		.vma = vma,
 	};