From patchwork Mon Apr 10 13:39:27 2023
X-Patchwork-Submitter: Peng Zhang
X-Patchwork-Id: 13206370
From: Peng Zhang <zhangpeng362@huawei.com>
Subject: [PATCH v6 1/6] userfaultfd: convert mfill_atomic_pte_copy() to use a folio
Date: Mon, 10 Apr 2023 21:39:27 +0800
Message-ID: <20230410133932.32288-2-zhangpeng362@huawei.com>
In-Reply-To: <20230410133932.32288-1-zhangpeng362@huawei.com>
References: <20230410133932.32288-1-zhangpeng362@huawei.com>

From: ZhangPeng <zhangpeng362@huawei.com>

Call vma_alloc_folio() directly instead of alloc_page_vma() and convert
page_kaddr to kaddr in mfill_atomic_pte_copy(). This removes several
calls to compound_head().

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Sidhartha Kumar
Reviewed-by: Mike Kravetz
---
 mm/userfaultfd.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 7f1b5f8b712c..313bc683c2b6 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -135,17 +135,18 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 				 uffd_flags_t flags,
 				 struct page **pagep)
 {
-	void *page_kaddr;
+	void *kaddr;
 	int ret;
-	struct page *page;
+	struct folio *folio;
 
 	if (!*pagep) {
 		ret = -ENOMEM;
-		page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, dst_vma, dst_addr);
-		if (!page)
+		folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma,
+					dst_addr, false);
+		if (!folio)
 			goto out;
 
-		page_kaddr = kmap_local_page(page);
+		kaddr = kmap_local_folio(folio, 0);
 		/*
 		 * The read mmap_lock is held here.  Despite the
 		 * mmap_lock being read recursive a deadlock is still
@@ -162,45 +163,44 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 		 * and retry the copy outside the mmap_lock.
 		 */
 		pagefault_disable();
-		ret = copy_from_user(page_kaddr,
-				     (const void __user *) src_addr,
+		ret = copy_from_user(kaddr, (const void __user *) src_addr,
 				     PAGE_SIZE);
 		pagefault_enable();
-		kunmap_local(page_kaddr);
+		kunmap_local(kaddr);
 
 		/* fallback to copy_from_user outside mmap_lock */
 		if (unlikely(ret)) {
 			ret = -ENOENT;
-			*pagep = page;
+			*pagep = &folio->page;
 			/* don't free the page */
 			goto out;
 		}
 
-		flush_dcache_page(page);
+		flush_dcache_folio(folio);
 	} else {
-		page = *pagep;
+		folio = page_folio(*pagep);
 		*pagep = NULL;
 	}
 
 	/*
-	 * The memory barrier inside __SetPageUptodate makes sure that
+	 * The memory barrier inside __folio_mark_uptodate makes sure that
	 * preceding stores to the page contents become visible before
 	 * the set_pte_at() write.
 	 */
-	__SetPageUptodate(page);
+	__folio_mark_uptodate(folio);
 
 	ret = -ENOMEM;
-	if (mem_cgroup_charge(page_folio(page), dst_vma->vm_mm, GFP_KERNEL))
+	if (mem_cgroup_charge(folio, dst_vma->vm_mm, GFP_KERNEL))
 		goto out_release;
 
 	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
-				       page, true, flags);
+				       &folio->page, true, flags);
 	if (ret)
 		goto out_release;
 out:
 	return ret;
 out_release:
-	put_page(page);
+	folio_put(folio);
 	goto out;
 }
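
As an illustrative aside (a sketch, not part of the patch): alloc_page_vma()
is essentially a page-returning wrapper around vma_alloc_folio(), roughly as
below (hypothetical name; modelled on the helper in include/linux/gfp.h), so
the old code allocated a folio, immediately degraded it to a struct page, and
page-based helpers such as __SetPageUptodate() and put_page() then had to
recover the folio again via compound_head().

  /* Rough sketch of what alloc_page_vma() boils down to. */
  static inline struct page *alloc_page_vma_sketch(gfp_t gfp,
  		struct vm_area_struct *vma, unsigned long addr)
  {
  	struct folio *folio = vma_alloc_folio(gfp, 0, vma, addr, false);

  	return &folio->page;	/* page-based callers start from here */
  }

Calling vma_alloc_folio() directly keeps the folio in hand, so the function
can use flush_dcache_folio(), __folio_mark_uptodate() and folio_put() without
any page-to-folio conversions.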
From patchwork Mon Apr 10 13:39:28 2023
X-Patchwork-Submitter: Peng Zhang
X-Patchwork-Id: 13206367
From: Peng Zhang <zhangpeng362@huawei.com>
Subject: [PATCH v6 2/6] userfaultfd: use kmap_local_page() in copy_huge_page_from_user()
Date: Mon, 10 Apr 2023 21:39:28 +0800
Message-ID: <20230410133932.32288-3-zhangpeng362@huawei.com>
In-Reply-To: <20230410133932.32288-1-zhangpeng362@huawei.com>
References: <20230410133932.32288-1-zhangpeng362@huawei.com>

From: ZhangPeng <zhangpeng362@huawei.com>

kmap() and kmap_atomic() are being deprecated in favor of
kmap_local_page(), which is appropriate for any thread-local context.[1]

Let's replace kmap() and kmap_atomic() with kmap_local_page() in
copy_huge_page_from_user(). When allow_pagefault is false, disable page
faults to prevent potential deadlock.[2]

[1] https://lore.kernel.org/all/20220813220034.806698-1-ira.weiny@intel.com/
[2] https://lkml.kernel.org/r/20221025220136.2366143-1-ira.weiny@intel.com

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Mike Kravetz
---
 mm/memory.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 387226d6094d..808f354bce65 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5880,16 +5880,14 @@ long copy_huge_page_from_user(struct page *dst_page,
 
 	for (i = 0; i < pages_per_huge_page; i++) {
 		subpage = nth_page(dst_page, i);
-		if (allow_pagefault)
-			page_kaddr = kmap(subpage);
-		else
-			page_kaddr = kmap_atomic(subpage);
+		page_kaddr = kmap_local_page(subpage);
+		if (!allow_pagefault)
+			pagefault_disable();
 		rc = copy_from_user(page_kaddr,
 				usr_src + i * PAGE_SIZE, PAGE_SIZE);
-		if (allow_pagefault)
-			kunmap(subpage);
-		else
-			kunmap_atomic(page_kaddr);
+		if (!allow_pagefault)
+			pagefault_enable();
+		kunmap_local(page_kaddr);
 
 		ret_val -= (PAGE_SIZE - rc);
 		if (rc)
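
Worth spelling out with a sketch (illustrative only, not part of the patch):
kmap_atomic() disables page faults (and, in most configurations, preemption)
as a side effect, while kmap_local_page() does not, so the conversion has to
make that behaviour explicit wherever the old atomic path was used. Roughly,
per subpage:

  /* Sketch of the per-subpage copy after the conversion. */
  static unsigned long copy_one_subpage_sketch(struct page *subpage,
  					       const void __user *usr_src,
  					       bool allow_pagefault)
  {
  	void *kaddr = kmap_local_page(subpage);
  	unsigned long rc;

  	if (!allow_pagefault)
  		pagefault_disable();	/* what kmap_atomic() did implicitly */
  	rc = copy_from_user(kaddr, usr_src, PAGE_SIZE);
  	if (!allow_pagefault)
  		pagefault_enable();
  	kunmap_local(kaddr);

  	return rc;	/* bytes left uncopied, as copy_from_user() reports */
  }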
From patchwork Mon Apr 10 13:39:29 2023
X-Patchwork-Submitter: Peng Zhang
X-Patchwork-Id: 13206373
From: Peng Zhang <zhangpeng362@huawei.com>
Subject: [PATCH v6 3/6] userfaultfd: convert copy_huge_page_from_user() to copy_folio_from_user()
Date: Mon, 10 Apr 2023 21:39:29 +0800
Message-ID: <20230410133932.32288-4-zhangpeng362@huawei.com>
In-Reply-To: <20230410133932.32288-1-zhangpeng362@huawei.com>
References: <20230410133932.32288-1-zhangpeng362@huawei.com>

From: ZhangPeng <zhangpeng362@huawei.com>

Replace copy_huge_page_from_user() with copy_folio_from_user().
copy_folio_from_user() does the same as copy_huge_page_from_user(), but
takes a folio instead of a page. Also convert page_kaddr to kaddr in
copy_folio_from_user(), which cleans up the indentation.
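
For quick reference, the interface change made by the diff below is

  long copy_huge_page_from_user(struct page *dst_page,
  				const void __user *usr_src,
  				unsigned int pages_per_huge_page,
  				bool allow_pagefault);

becoming

  long copy_folio_from_user(struct folio *dst_folio,
  			    const void __user *usr_src,
  			    bool allow_pagefault);

The explicit page count disappears because the helper now derives it from
folio_nr_pages(dst_folio).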
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Sidhartha Kumar
Reviewed-by: Mike Kravetz
---
 include/linux/mm.h |  7 +++----
 mm/hugetlb.c       |  5 ++---
 mm/memory.c        | 23 +++++++++++------------
 mm/userfaultfd.c   |  6 ++----
 4 files changed, 18 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 243bfba378c5..a978413b40a4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3698,10 +3698,9 @@ extern void copy_user_huge_page(struct page *dst, struct page *src,
 				unsigned long addr_hint,
 				struct vm_area_struct *vma,
 				unsigned int pages_per_huge_page);
-extern long copy_huge_page_from_user(struct page *dst_page,
-				const void __user *usr_src,
-				unsigned int pages_per_huge_page,
-				bool allow_pagefault);
+long copy_folio_from_user(struct folio *dst_folio,
+			  const void __user *usr_src,
+			  bool allow_pagefault);
 
 /**
  * vma_is_special_huge - Are transhuge page-table entries considered special?
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7e4a80769c9e..aade1b513474 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6217,9 +6217,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			goto out;
 		}
 
-		ret = copy_huge_page_from_user(&folio->page,
-						(const void __user *) src_addr,
-						pages_per_huge_page(h), false);
+		ret = copy_folio_from_user(folio, (const void __user *) src_addr,
+					   false);
 
 		/* fallback to copy_from_user outside mmap_lock */
 		if (unlikely(ret)) {
diff --git a/mm/memory.c b/mm/memory.c
index 808f354bce65..021cab989703 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5868,26 +5868,25 @@ void copy_user_huge_page(struct page *dst, struct page *src,
 	process_huge_page(addr_hint, pages_per_huge_page, copy_subpage, &arg);
 }
 
-long copy_huge_page_from_user(struct page *dst_page,
-				const void __user *usr_src,
-				unsigned int pages_per_huge_page,
-				bool allow_pagefault)
+long copy_folio_from_user(struct folio *dst_folio,
+			  const void __user *usr_src,
+			  bool allow_pagefault)
 {
-	void *page_kaddr;
+	void *kaddr;
 	unsigned long i, rc = 0;
-	unsigned long ret_val = pages_per_huge_page * PAGE_SIZE;
+	unsigned int nr_pages = folio_nr_pages(dst_folio);
+	unsigned long ret_val = nr_pages * PAGE_SIZE;
 	struct page *subpage;
 
-	for (i = 0; i < pages_per_huge_page; i++) {
-		subpage = nth_page(dst_page, i);
-		page_kaddr = kmap_local_page(subpage);
+	for (i = 0; i < nr_pages; i++) {
+		subpage = folio_page(dst_folio, i);
+		kaddr = kmap_local_page(subpage);
 		if (!allow_pagefault)
 			pagefault_disable();
-		rc = copy_from_user(page_kaddr,
-				usr_src + i * PAGE_SIZE, PAGE_SIZE);
+		rc = copy_from_user(kaddr, usr_src + i * PAGE_SIZE, PAGE_SIZE);
 		if (!allow_pagefault)
 			pagefault_enable();
-		kunmap_local(page_kaddr);
+		kunmap_local(kaddr);
 
 		ret_val -= (PAGE_SIZE - rc);
 		if (rc)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 313bc683c2b6..1e7dba6c4c5f 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -421,10 +421,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!page);
 
-			err = copy_huge_page_from_user(page,
-						(const void __user *)src_addr,
-						vma_hpagesize / PAGE_SIZE,
-						true);
+			err = copy_folio_from_user(page_folio(page),
+						   (const void __user *)src_addr, true);
 			if (unlikely(err)) {
 				err = -EFAULT;
 				goto out;

From patchwork Mon Apr 10 13:39:30 2023
X-Patchwork-Submitter: Peng Zhang
X-Patchwork-Id: 13206369
From: Peng Zhang <zhangpeng362@huawei.com>
Subject: [PATCH v6 4/6] userfaultfd: convert mfill_atomic_hugetlb() to use a folio
Date: Mon, 10 Apr 2023 21:39:30 +0800
Message-ID: <20230410133932.32288-5-zhangpeng362@huawei.com>
In-Reply-To: <20230410133932.32288-1-zhangpeng362@huawei.com>
References: <20230410133932.32288-1-zhangpeng362@huawei.com>

From: ZhangPeng <zhangpeng362@huawei.com>

Convert hugetlb_mfill_atomic_pte() to take in a folio pointer instead of
a page pointer. Convert mfill_atomic_hugetlb() to use a folio.

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Sidhartha Kumar
Reviewed-by: Mike Kravetz
---
 include/linux/hugetlb.h |  4 ++--
 mm/hugetlb.c            | 26 +++++++++++++-------------
 mm/userfaultfd.c        | 16 ++++++++--------
 3 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 2a758bcd6719..28703fe22386 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -163,7 +163,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			     unsigned long dst_addr,
 			     unsigned long src_addr,
 			     uffd_flags_t flags,
-			     struct page **pagep);
+			     struct folio **foliop);
 #endif /* CONFIG_USERFAULTFD */
 bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
 						struct vm_area_struct *vma,
@@ -397,7 +397,7 @@ static inline int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 					   unsigned long dst_addr,
 					   unsigned long src_addr,
 					   uffd_flags_t flags,
-					   struct page **pagep)
+					   struct folio **foliop)
 {
 	BUG();
 	return 0;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index aade1b513474..c88f856ec2e2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6178,7 +6178,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			     unsigned long dst_addr,
 			     unsigned long src_addr,
 			     uffd_flags_t flags,
-			     struct page **pagep)
+			     struct folio **foliop)
 {
 	struct mm_struct *dst_mm = dst_vma->vm_mm;
 	bool is_continue = uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE);
@@ -6201,8 +6201,8 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 		if (IS_ERR(folio))
 			goto out;
 		folio_in_pagecache = true;
-	} else if (!*pagep) {
-		/* If a page already exists, then it's UFFDIO_COPY for
+	} else if (!*foliop) {
+		/* If a folio already exists, then it's UFFDIO_COPY for
 		 * a non-missing case. Return -EEXIST.
 		 */
 		if (vm_shared &&
@@ -6237,33 +6237,33 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 				ret = -ENOMEM;
 				goto out;
 			}
-			*pagep = &folio->page;
-			/* Set the outparam pagep and return to the caller to
+			*foliop = folio;
+			/* Set the outparam foliop and return to the caller to
 			 * copy the contents outside the lock. Don't free the
-			 * page.
+			 * folio.
 			 */
 			goto out;
 		}
 	} else {
 		if (vm_shared &&
 		    hugetlbfs_pagecache_present(h, dst_vma, dst_addr)) {
-			put_page(*pagep);
+			folio_put(*foliop);
 			ret = -EEXIST;
-			*pagep = NULL;
+			*foliop = NULL;
 			goto out;
 		}
 
 		folio = alloc_hugetlb_folio(dst_vma, dst_addr, 0);
 		if (IS_ERR(folio)) {
-			put_page(*pagep);
+			folio_put(*foliop);
 			ret = -ENOMEM;
-			*pagep = NULL;
+			*foliop = NULL;
 			goto out;
 		}
-		copy_user_huge_page(&folio->page, *pagep, dst_addr, dst_vma,
+		copy_user_huge_page(&folio->page, &(*foliop)->page, dst_addr, dst_vma,
 				    pages_per_huge_page(h));
-		put_page(*pagep);
-		*pagep = NULL;
+		folio_put(*foliop);
+		*foliop = NULL;
 	}
 
 	/*
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 1e7dba6c4c5f..2f263afb823d 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -321,7 +321,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	pte_t *dst_pte;
 	unsigned long src_addr, dst_addr;
 	long copied;
-	struct page *page;
+	struct folio *folio;
 	unsigned long vma_hpagesize;
 	pgoff_t idx;
 	u32 hash;
@@ -341,7 +341,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	src_addr = src_start;
 	dst_addr = dst_start;
 	copied = 0;
-	page = NULL;
+	folio = NULL;
 	vma_hpagesize = vma_kernel_pagesize(dst_vma);
 
 	/*
@@ -410,7 +410,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 		}
 
 		err = hugetlb_mfill_atomic_pte(dst_pte, dst_vma, dst_addr,
-					       src_addr, flags, &page);
+					       src_addr, flags, &folio);
 
 		hugetlb_vma_unlock_read(dst_vma);
 		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
@@ -419,9 +419,9 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 
 		if (unlikely(err == -ENOENT)) {
 			mmap_read_unlock(dst_mm);
-			BUG_ON(!page);
+			BUG_ON(!folio);
 
-			err = copy_folio_from_user(page_folio(page),
+			err = copy_folio_from_user(folio,
 						   (const void __user *)src_addr, true);
 			if (unlikely(err)) {
 				err = -EFAULT;
@@ -432,7 +432,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 			dst_vma = NULL;
 			goto retry;
 		} else
-			BUG_ON(page);
+			BUG_ON(folio);
 
 		if (!err) {
 			dst_addr += vma_hpagesize;
@@ -449,8 +449,8 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 out_unlock:
 	mmap_read_unlock(dst_mm);
 out:
-	if (page)
-		put_page(page);
+	if (folio)
+		folio_put(folio);
 	BUG_ON(copied < 0);
 	BUG_ON(err > 0);
 	BUG_ON(!copied && !err);
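
An illustrative caller-side sketch of the -ENOENT protocol with the new folio
out-parameter (fragment only; variable names follow mfill_atomic_hugetlb()
above and error handling is trimmed):

  struct folio *folio = NULL;
  ssize_t err;

  err = hugetlb_mfill_atomic_pte(dst_pte, dst_vma, dst_addr, src_addr,
  				 flags, &folio);
  if (unlikely(err == -ENOENT)) {
  	/* The callee allocated a folio it could not fill under mmap_lock. */
  	mmap_read_unlock(dst_mm);
  	err = copy_folio_from_user(folio, (const void __user *)src_addr,
  				   true);
  	/* On success, re-take the lock and retry with *foliop already set. */
  }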
From patchwork Mon Apr 10 13:39:31 2023
X-Patchwork-Submitter: Peng Zhang
X-Patchwork-Id: 13206372
From: Peng Zhang <zhangpeng362@huawei.com>
Subject: [PATCH v6 5/6] mm: convert copy_user_huge_page() to copy_user_large_folio()
Date: Mon, 10 Apr 2023 21:39:31 +0800
Message-ID: <20230410133932.32288-6-zhangpeng362@huawei.com>
In-Reply-To: <20230410133932.32288-1-zhangpeng362@huawei.com>
References: <20230410133932.32288-1-zhangpeng362@huawei.com>

From: ZhangPeng <zhangpeng362@huawei.com>

Replace copy_user_huge_page() with copy_user_large_folio().
copy_user_large_folio() does the same as copy_user_huge_page(), but
takes in folios instead of pages. Remove pages_per_huge_page from
copy_user_large_folio(), because we can get that from
folio_nr_pages(dst). Convert copy_user_gigantic_page() to take in
folios.
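
For reference, the prototype change carried by the diff below is

  void copy_user_huge_page(struct page *dst, struct page *src,
  			   unsigned long addr_hint, struct vm_area_struct *vma,
  			   unsigned int pages_per_huge_page);

becoming

  void copy_user_large_folio(struct folio *dst, struct folio *src,
  			     unsigned long addr_hint, struct vm_area_struct *vma);

with pages_per_huge_page computed inside the helper as folio_nr_pages(dst).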
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
 include/linux/mm.h |  7 +++----
 mm/hugetlb.c       | 11 +++++------
 mm/memory.c        | 28 ++++++++++++++--------------
 3 files changed, 22 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a978413b40a4..c8f05c3e1acb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3694,10 +3694,9 @@ extern const struct attribute_group memory_failure_attr_group;
 extern void clear_huge_page(struct page *page,
 			    unsigned long addr_hint,
 			    unsigned int pages_per_huge_page);
-extern void copy_user_huge_page(struct page *dst, struct page *src,
-				unsigned long addr_hint,
-				struct vm_area_struct *vma,
-				unsigned int pages_per_huge_page);
+void copy_user_large_folio(struct folio *dst, struct folio *src,
+			   unsigned long addr_hint,
+			   struct vm_area_struct *vma);
 long copy_folio_from_user(struct folio *dst_folio,
 			  const void __user *usr_src,
 			  bool allow_pagefault);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c88f856ec2e2..f16b25b1a6b9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5097,8 +5097,9 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 					ret = PTR_ERR(new_folio);
 					break;
 				}
-				copy_user_huge_page(&new_folio->page, ptepage, addr, dst_vma,
-						    npages);
+				copy_user_large_folio(new_folio,
+						      page_folio(ptepage),
+						      addr, dst_vma);
 				put_page(ptepage);
 
 				/* Install the new hugetlb folio if src pte stable */
@@ -5616,8 +5617,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto out_release_all;
 	}
 
-	copy_user_huge_page(&new_folio->page, old_page, address, vma,
-			    pages_per_huge_page(h));
+	copy_user_large_folio(new_folio, page_folio(old_page), address, vma);
 	__folio_mark_uptodate(new_folio);
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, haddr,
@@ -6260,8 +6260,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			*foliop = NULL;
 			goto out;
 		}
-		copy_user_huge_page(&folio->page, &(*foliop)->page, dst_addr, dst_vma,
-				    pages_per_huge_page(h));
+		copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
 		folio_put(*foliop);
 		*foliop = NULL;
 	}
diff --git a/mm/memory.c b/mm/memory.c
index 021cab989703..f315c2198098 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5815,21 +5815,21 @@ void clear_huge_page(struct page *page,
 	process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, page);
 }
 
-static void copy_user_gigantic_page(struct page *dst, struct page *src,
-				    unsigned long addr,
-				    struct vm_area_struct *vma,
-				    unsigned int pages_per_huge_page)
+static void copy_user_gigantic_page(struct folio *dst, struct folio *src,
+				    unsigned long addr,
+				    struct vm_area_struct *vma,
+				    unsigned int pages_per_huge_page)
 {
 	int i;
-	struct page *dst_base = dst;
-	struct page *src_base = src;
+	struct page *dst_page;
+	struct page *src_page;
 
 	for (i = 0; i < pages_per_huge_page; i++) {
-		dst = nth_page(dst_base, i);
-		src = nth_page(src_base, i);
+		dst_page = folio_page(dst, i);
+		src_page = folio_page(src, i);
 
 		cond_resched();
-		copy_user_highpage(dst, src, addr + i*PAGE_SIZE, vma);
+		copy_user_highpage(dst_page, src_page, addr + i*PAGE_SIZE, vma);
 	}
 }
 
@@ -5847,15 +5847,15 @@ static void copy_subpage(unsigned long addr, int idx, void *arg)
 			   addr, copy_arg->vma);
 }
 
-void copy_user_huge_page(struct page *dst, struct page *src,
-			 unsigned long addr_hint, struct vm_area_struct *vma,
-			 unsigned int pages_per_huge_page)
+void copy_user_large_folio(struct folio *dst, struct folio *src,
+			   unsigned long addr_hint, struct vm_area_struct *vma)
 {
+	unsigned int pages_per_huge_page = folio_nr_pages(dst);
 	unsigned long addr = addr_hint &
 		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
 	struct copy_subpage_arg arg = {
-		.dst = dst,
-		.src = src,
+		.dst = &dst->page,
+		.src = &src->page,
 		.vma = vma,
 	};

From patchwork Mon Apr 10 13:39:32 2023
X-Patchwork-Submitter: Peng Zhang
X-Patchwork-Id: 13206371
From: Peng Zhang <zhangpeng362@huawei.com>
Subject: [PATCH v6 6/6] userfaultfd: convert mfill_atomic() to use a folio
Date: Mon, 10 Apr 2023 21:39:32 +0800
Message-ID: <20230410133932.32288-7-zhangpeng362@huawei.com>
In-Reply-To: <20230410133932.32288-1-zhangpeng362@huawei.com>
References: <20230410133932.32288-1-zhangpeng362@huawei.com>

From: ZhangPeng <zhangpeng362@huawei.com>

Convert mfill_atomic_pte_copy(), shmem_mfill_atomic_pte() and
mfill_atomic_pte() to take in a folio pointer. Convert mfill_atomic()
to use a folio. Convert page_kaddr to kaddr in mfill_atomic().
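
Illustrative sketch (not taken verbatim from the diff): the fallback copy
outside the mmap_lock, expressed with folio helpers. kmap_local_folio(folio, 0)
maps the folio at byte offset 0, which is all that is needed here because the
UFFDIO_COPY fallback copies a single PAGE_SIZE chunk into an order-0 folio.

  void *kaddr;

  kaddr = kmap_local_folio(folio, 0);
  err = copy_from_user(kaddr, (const void __user *)src_addr, PAGE_SIZE);
  kunmap_local(kaddr);
  if (!err)
  	flush_dcache_folio(folio);	/* keep D-cache coherent for other mappings */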
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
Reviewed-by: Mike Kravetz
---
 include/linux/shmem_fs.h |  4 ++--
 mm/shmem.c               | 16 ++++++++--------
 mm/userfaultfd.c         | 40 ++++++++++++++++++++--------------------
 3 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 3bb8d21edbb3..9e151ba45068 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -158,10 +158,10 @@ extern int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 				  unsigned long dst_addr,
 				  unsigned long src_addr,
 				  uffd_flags_t flags,
-				  struct page **pagep);
+				  struct folio **foliop);
 #else /* !CONFIG_SHMEM */
 #define shmem_mfill_atomic_pte(dst_pmd, dst_vma, dst_addr, \
-			       src_addr, flags, pagep) ({ BUG(); 0; })
+			       src_addr, flags, foliop) ({ BUG(); 0; })
 #endif /* CONFIG_SHMEM */
 #endif /* CONFIG_USERFAULTFD */
diff --git a/mm/shmem.c b/mm/shmem.c
index 6c08f5a75d3a..9218c955f482 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2548,7 +2548,7 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 			   unsigned long dst_addr,
 			   unsigned long src_addr,
 			   uffd_flags_t flags,
-			   struct page **pagep)
+			   struct folio **foliop)
 {
 	struct inode *inode = file_inode(dst_vma->vm_file);
 	struct shmem_inode_info *info = SHMEM_I(inode);
@@ -2566,14 +2566,14 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 		 * and now we find ourselves with -ENOMEM. Release the page, to
 		 * avoid a BUG_ON in our caller.
 		 */
-		if (unlikely(*pagep)) {
-			put_page(*pagep);
-			*pagep = NULL;
+		if (unlikely(*foliop)) {
+			folio_put(*foliop);
+			*foliop = NULL;
 		}
 		return -ENOMEM;
 	}
 
-	if (!*pagep) {
+	if (!*foliop) {
 		ret = -ENOMEM;
 		folio = shmem_alloc_folio(gfp, info, pgoff);
 		if (!folio)
@@ -2605,7 +2605,7 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 
 			/* fallback to copy_from_user outside mmap_lock */
 			if (unlikely(ret)) {
-				*pagep = &folio->page;
+				*foliop = folio;
 				ret = -ENOENT;
 				/* don't free the page */
 				goto out_unacct_blocks;
@@ -2616,9 +2616,9 @@ int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 			clear_user_highpage(&folio->page, dst_addr);
 		}
 	} else {
-		folio = page_folio(*pagep);
+		folio = *foliop;
 		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
-		*pagep = NULL;
+		*foliop = NULL;
 	}
 
 	VM_BUG_ON(folio_test_locked(folio));
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 2f263afb823d..11cfd82c6726 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -133,13 +133,13 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 				 unsigned long dst_addr,
 				 unsigned long src_addr,
 				 uffd_flags_t flags,
-				 struct page **pagep)
+				 struct folio **foliop)
 {
 	void *kaddr;
 	int ret;
 	struct folio *folio;
 
-	if (!*pagep) {
+	if (!*foliop) {
 		ret = -ENOMEM;
 		folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, dst_vma,
 					dst_addr, false);
@@ -171,15 +171,15 @@ static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 		/* fallback to copy_from_user outside mmap_lock */
 		if (unlikely(ret)) {
 			ret = -ENOENT;
-			*pagep = &folio->page;
+			*foliop = folio;
 			/* don't free the page */
 			goto out;
 		}
 
 		flush_dcache_folio(folio);
 	} else {
-		folio = page_folio(*pagep);
-		*pagep = NULL;
+		folio = *foliop;
+		*foliop = NULL;
 	}
 
 	/*
@@ -470,7 +470,7 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
 					      unsigned long dst_addr,
 					      unsigned long src_addr,
 					      uffd_flags_t flags,
-					      struct page **pagep)
+					      struct folio **foliop)
 {
 	ssize_t err;
 
@@ -493,14 +493,14 @@ static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
 		if (uffd_flags_mode_is(flags, MFILL_ATOMIC_COPY))
 			err = mfill_atomic_pte_copy(dst_pmd, dst_vma,
 						    dst_addr, src_addr,
-						    flags, pagep);
+						    flags, foliop);
 		else
 			err = mfill_atomic_pte_zeropage(dst_pmd,
 						 dst_vma, dst_addr);
 	} else {
 		err = shmem_mfill_atomic_pte(dst_pmd, dst_vma,
 					     dst_addr, src_addr,
-					     flags, pagep);
+					     flags, foliop);
 	}
 
 	return err;
@@ -518,7 +518,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	pmd_t *dst_pmd;
 	unsigned long src_addr, dst_addr;
 	long copied;
-	struct page *page;
+	struct folio *folio;
 
 	/*
 	 * Sanitize the command parameters:
@@ -533,7 +533,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	src_addr = src_start;
 	dst_addr = dst_start;
 	copied = 0;
-	page = NULL;
+	folio = NULL;
 retry:
 	mmap_read_lock(dst_mm);
@@ -629,28 +629,28 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 		BUG_ON(pmd_trans_huge(*dst_pmd));
 
 		err = mfill_atomic_pte(dst_pmd, dst_vma, dst_addr,
-				       src_addr, flags, &page);
+				       src_addr, flags, &folio);
 		cond_resched();
 
 		if (unlikely(err == -ENOENT)) {
-			void *page_kaddr;
+			void *kaddr;
 
 			mmap_read_unlock(dst_mm);
-			BUG_ON(!page);
+			BUG_ON(!folio);
 
-			page_kaddr = kmap_local_page(page);
-			err = copy_from_user(page_kaddr,
+			kaddr = kmap_local_folio(folio, 0);
+			err = copy_from_user(kaddr,
 					     (const void __user *) src_addr,
 					     PAGE_SIZE);
-			kunmap_local(page_kaddr);
+			kunmap_local(kaddr);
 			if (unlikely(err)) {
 				err = -EFAULT;
 				goto out;
 			}
-			flush_dcache_page(page);
+			flush_dcache_folio(folio);
 			goto retry;
 		} else
-			BUG_ON(page);
+			BUG_ON(folio);
 
 		if (!err) {
 			dst_addr += PAGE_SIZE;
@@ -667,8 +667,8 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 out_unlock:
 	mmap_read_unlock(dst_mm);
 out:
-	if (page)
-		put_page(page);
+	if (folio)
+		folio_put(folio);
 	BUG_ON(copied < 0);
 	BUG_ON(err > 0);
 	BUG_ON(!copied && !err);