From patchwork Wed Mar 8 22:19:30 2023
From: Axel Rasmussen
Date: Wed, 8 Mar 2023 14:19:30 -0800
Subject: [PATCH v4 2/4] mm: userfaultfd: don't pass around both mm and vma
Message-ID: <20230308221932.1548827-3-axelrasmussen@google.com>
In-Reply-To: <20230308221932.1548827-1-axelrasmussen@google.com>
References: <20230308221932.1548827-1-axelrasmussen@google.com>
To: Alexander Viro, Andrew Morton, Hugh Dickins, Jan Kara, "Liam R. Howlett",
    Matthew Wilcox, Mike Kravetz, Mike Rapoport, Muchun Song, Nadav Amit,
    Peter Xu, Shuah Khan
Cc: James Houghton, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, linux-kselftest@vger.kernel.org, Axel Rasmussen

Quite a few userfaultfd functions took both mm and vma pointers as
arguments. Since the mm is trivially accessible via vma->vm_mm, there's
no reason to pass both; it just needlessly extends the already long
argument list.

Get rid of the mm pointer, where possible, to shorten the argument list.
Acked-by: Peter Xu
Signed-off-by: Axel Rasmussen
Acked-by: Mike Rapoport (IBM)
---
 fs/userfaultfd.c              |  2 +-
 include/linux/hugetlb.h       |  5 ++-
 include/linux/shmem_fs.h      |  4 +--
 include/linux/userfaultfd_k.h |  4 +--
 mm/hugetlb.c                  |  4 +--
 mm/shmem.c                    |  7 ++--
 mm/userfaultfd.c              | 61 +++++++++++++++++------------------
 7 files changed, 41 insertions(+), 46 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 365bf00dd8dd..84d5d402214a 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1629,7 +1629,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 
 		/* Reset ptes for the whole vma range if wr-protected */
 		if (userfaultfd_wp(vma))
-			uffd_wp_range(mm, vma, start, vma_end - start, false);
+			uffd_wp_range(vma, start, vma_end - start, false);
 
 		new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS;
 		prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags,
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 8f0467bf1cbd..8b9325f77ac3 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -158,7 +158,7 @@ unsigned long hugetlb_total_pages(void);
 vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long address, unsigned int flags);
 #ifdef CONFIG_USERFAULTFD
-int hugetlb_mfill_atomic_pte(struct mm_struct *dst_mm, pte_t *dst_pte,
+int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			     struct vm_area_struct *dst_vma,
 			     unsigned long dst_addr,
 			     unsigned long src_addr,
@@ -393,8 +393,7 @@ static inline void hugetlb_free_pgd_range(struct mmu_gather *tlb,
 }
 
 #ifdef CONFIG_USERFAULTFD
-static inline int hugetlb_mfill_atomic_pte(struct mm_struct *dst_mm,
-					   pte_t *dst_pte,
+static inline int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 					   struct vm_area_struct *dst_vma,
 					   unsigned long dst_addr,
 					   unsigned long src_addr,
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 103d1000a5a2..b82916c25e61 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -151,14 +151,14 @@ extern void shmem_uncharge(struct inode *inode, long pages);
 
 #ifdef CONFIG_USERFAULTFD
 #ifdef CONFIG_SHMEM
-extern int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+extern int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 				  struct vm_area_struct *dst_vma,
 				  unsigned long dst_addr,
 				  unsigned long src_addr,
 				  bool zeropage, bool wp_copy,
 				  struct page **pagep);
 #else /* !CONFIG_SHMEM */
-#define shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr, \
+#define shmem_mfill_atomic_pte(dst_pmd, dst_vma, dst_addr, \
 			       src_addr, zeropage, wp_copy, pagep) ({ BUG(); 0; })
 #endif /* CONFIG_SHMEM */
 #endif /* CONFIG_USERFAULTFD */
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 468080125612..ba79e296fcc7 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -56,7 +56,7 @@ enum mcopy_atomic_mode {
 	MCOPY_ATOMIC_CONTINUE,
 };
 
-extern int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+extern int mfill_atomic_install_pte(pmd_t *dst_pmd,
 				    struct vm_area_struct *dst_vma,
 				    unsigned long dst_addr, struct page *page,
 				    bool newly_allocated, bool wp_copy);
@@ -73,7 +73,7 @@ extern ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long dst
 extern int mwriteprotect_range(struct mm_struct *dst_mm,
 			       unsigned long start, unsigned long len,
 			       bool enable_wp, atomic_t *mmap_changing);
-extern long uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *vma,
+extern long uffd_wp_range(struct vm_area_struct *vma,
 			  unsigned long start, unsigned long len,
 			  bool enable_wp);
 
 /* mm helpers */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4c9276549394..fe043034ab46 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6157,8 +6157,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
  * Used by userfaultfd UFFDIO_* ioctls. Based on userfaultfd's mfill_atomic_pte
  * with modifications for hugetlb pages.
  */
-int hugetlb_mfill_atomic_pte(struct mm_struct *dst_mm,
-			     pte_t *dst_pte,
+int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 			     struct vm_area_struct *dst_vma,
 			     unsigned long dst_addr,
 			     unsigned long src_addr,
@@ -6166,6 +6165,7 @@ int hugetlb_mfill_atomic_pte(struct mm_struct *dst_mm,
 			     struct page **pagep,
 			     bool wp_copy)
 {
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
 	bool is_continue = (mode == MCOPY_ATOMIC_CONTINUE);
 	struct hstate *h = hstate_vma(dst_vma);
 	struct address_space *mapping = dst_vma->vm_file->f_mapping;
diff --git a/mm/shmem.c b/mm/shmem.c
index 448f393d8ab2..1d751b6cf1ac 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2415,8 +2415,7 @@ static struct inode *shmem_get_inode(struct mnt_idmap *idmap, struct super_block
 }
 
 #ifdef CONFIG_USERFAULTFD
-int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
-			   pmd_t *dst_pmd,
+int shmem_mfill_atomic_pte(pmd_t *dst_pmd,
 			   struct vm_area_struct *dst_vma,
 			   unsigned long dst_addr,
 			   unsigned long src_addr,
@@ -2506,11 +2505,11 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
 		goto out_release;
 
 	ret = shmem_add_to_page_cache(folio, mapping, pgoff, NULL,
-				      gfp & GFP_RECLAIM_MASK, dst_mm);
+				      gfp & GFP_RECLAIM_MASK, dst_vma->vm_mm);
 	if (ret)
 		goto out_release;
 
-	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
 				       &folio->page, true, wp_copy);
 	if (ret)
 		goto out_delete_from_cache;
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 84db5b2fad3a..4fc373476739 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -55,12 +55,13 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
  * This function handles both MCOPY_ATOMIC_NORMAL and _CONTINUE for both shmem
  * and anon, and for both shared and private VMAs.
  */
-int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+int mfill_atomic_install_pte(pmd_t *dst_pmd,
 			     struct vm_area_struct *dst_vma,
 			     unsigned long dst_addr, struct page *page,
 			     bool newly_allocated, bool wp_copy)
 {
 	int ret;
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
 	pte_t _dst_pte, *dst_pte;
 	bool writable = dst_vma->vm_flags & VM_WRITE;
 	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
@@ -127,8 +128,7 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
 	return ret;
 }
 
-static int mfill_atomic_pte_copy(struct mm_struct *dst_mm,
-				 pmd_t *dst_pmd,
+static int mfill_atomic_pte_copy(pmd_t *dst_pmd,
 				 struct vm_area_struct *dst_vma,
 				 unsigned long dst_addr,
 				 unsigned long src_addr,
@@ -190,10 +190,10 @@ static int mfill_atomic_pte_copy(struct mm_struct *dst_mm,
 	__SetPageUptodate(page);
 
 	ret = -ENOMEM;
-	if (mem_cgroup_charge(page_folio(page), dst_mm, GFP_KERNEL))
+	if (mem_cgroup_charge(page_folio(page), dst_vma->vm_mm, GFP_KERNEL))
 		goto out_release;
 
-	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
 				       page, true, wp_copy);
 	if (ret)
 		goto out_release;
@@ -204,8 +204,7 @@ static int mfill_atomic_pte_copy(struct mm_struct *dst_mm,
 	goto out;
 }
 
-static int mfill_atomic_pte_zeropage(struct mm_struct *dst_mm,
-				     pmd_t *dst_pmd,
+static int mfill_atomic_pte_zeropage(pmd_t *dst_pmd,
 				     struct vm_area_struct *dst_vma,
 				     unsigned long dst_addr)
 {
@@ -217,7 +216,7 @@ static int mfill_atomic_pte_zeropage(struct mm_struct *dst_mm,
 	_dst_pte = pte_mkspecial(pfn_pte(my_zero_pfn(dst_addr),
 					 dst_vma->vm_page_prot));
 
-	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+	dst_pte = pte_offset_map_lock(dst_vma->vm_mm, dst_pmd, dst_addr, &ptl);
 	if (dst_vma->vm_file) {
 		/* the shmem MAP_PRIVATE case requires checking the i_size */
 		inode = dst_vma->vm_file->f_inode;
@@ -230,7 +229,7 @@ static int mfill_atomic_pte_zeropage(struct mm_struct *dst_mm,
 	ret = -EEXIST;
 	if (!pte_none(*dst_pte))
 		goto out_unlock;
-	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+	set_pte_at(dst_vma->vm_mm, dst_addr, dst_pte, _dst_pte);
 	/* No need to invalidate - it was non-present before */
 	update_mmu_cache(dst_vma, dst_addr, dst_pte);
 	ret = 0;
@@ -240,8 +239,7 @@ static int mfill_atomic_pte_zeropage(struct mm_struct *dst_mm,
 }
 
 /* Handles UFFDIO_CONTINUE for all shmem VMAs (shared or private). */
-static int mfill_atomic_pte_continue(struct mm_struct *dst_mm,
-				     pmd_t *dst_pmd,
+static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
 				     struct vm_area_struct *dst_vma,
 				     unsigned long dst_addr,
 				     bool wp_copy)
@@ -269,7 +267,7 @@ static int mfill_atomic_pte_continue(struct mm_struct *dst_mm,
 		goto out_release;
 	}
 
-	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+	ret = mfill_atomic_install_pte(dst_pmd, dst_vma, dst_addr,
 				       page, false, wp_copy);
 	if (ret)
 		goto out_release;
@@ -310,7 +308,7 @@ static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
  * mfill_atomic processing for HUGETLB vmas.  Note that this routine is
  * called with mmap_lock held, it will release mmap_lock before returning.
  */
-static __always_inline ssize_t mfill_atomic_hugetlb(struct mm_struct *dst_mm,
+static __always_inline ssize_t mfill_atomic_hugetlb(
 					      struct vm_area_struct *dst_vma,
 					      unsigned long dst_start,
 					      unsigned long src_start,
@@ -318,6 +316,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(struct mm_struct *dst_mm,
 					      enum mcopy_atomic_mode mode,
 					      bool wp_copy)
 {
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
 	int vm_shared = dst_vma->vm_flags & VM_SHARED;
 	ssize_t err;
 	pte_t *dst_pte;
@@ -411,7 +410,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(struct mm_struct *dst_mm,
 			goto out_unlock;
 		}
 
-		err = hugetlb_mfill_atomic_pte(dst_mm, dst_pte, dst_vma,
+		err = hugetlb_mfill_atomic_pte(dst_pte, dst_vma,
 					       dst_addr, src_addr, mode, &page,
 					       wp_copy);
 
@@ -463,17 +462,15 @@ static __always_inline ssize_t mfill_atomic_hugetlb(struct mm_struct *dst_mm,
 }
 #else /* !CONFIG_HUGETLB_PAGE */
 /* fail at build time if gcc attempts to use this */
-extern ssize_t mfill_atomic_hugetlb(struct mm_struct *dst_mm,
-				    struct vm_area_struct *dst_vma,
-				    unsigned long dst_start,
-				    unsigned long src_start,
-				    unsigned long len,
-				    enum mcopy_atomic_mode mode,
-				    bool wp_copy);
+extern ssize_t mfill_atomic_hugetlb(struct vm_area_struct *dst_vma,
+				    unsigned long dst_start,
+				    unsigned long src_start,
+				    unsigned long len,
+				    enum mcopy_atomic_mode mode,
+				    bool wp_copy);
 #endif /* CONFIG_HUGETLB_PAGE */
 
-static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
-						pmd_t *dst_pmd,
+static __always_inline ssize_t mfill_atomic_pte(pmd_t *dst_pmd,
 						struct vm_area_struct *dst_vma,
 						unsigned long dst_addr,
 						unsigned long src_addr,
@@ -484,7 +481,7 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
 	ssize_t err;
 
 	if (mode == MCOPY_ATOMIC_CONTINUE) {
-		return mfill_atomic_pte_continue(dst_mm, dst_pmd, dst_vma,
+		return mfill_atomic_pte_continue(dst_pmd, dst_vma,
 						 dst_addr, wp_copy);
 	}
 
@@ -500,14 +497,14 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
 	 */
 	if (!(dst_vma->vm_flags & VM_SHARED)) {
 		if (mode == MCOPY_ATOMIC_NORMAL)
-			err = mfill_atomic_pte_copy(dst_mm, dst_pmd, dst_vma,
+			err = mfill_atomic_pte_copy(dst_pmd, dst_vma,
 						    dst_addr, src_addr, page,
 						    wp_copy);
 		else
-			err = mfill_atomic_pte_zeropage(dst_mm, dst_pmd,
+			err = mfill_atomic_pte_zeropage(dst_pmd,
 							dst_vma, dst_addr);
 	} else {
-		err = shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
+		err = shmem_mfill_atomic_pte(dst_pmd, dst_vma,
 					     dst_addr, src_addr,
 					     mode != MCOPY_ATOMIC_NORMAL,
 					     wp_copy, page);
@@ -588,7 +585,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 	 * If this is a HUGETLB vma, pass off to appropriate routine
 	 */
 	if (is_vm_hugetlb_page(dst_vma))
-		return  mfill_atomic_hugetlb(dst_mm, dst_vma, dst_start,
+		return  mfill_atomic_hugetlb(dst_vma, dst_start,
 					     src_start, len, mcopy_mode,
 					     wp_copy);
 
@@ -641,7 +638,7 @@ static __always_inline ssize_t mfill_atomic(struct mm_struct *dst_mm,
 		BUG_ON(pmd_none(*dst_pmd));
 		BUG_ON(pmd_trans_huge(*dst_pmd));
 
-		err = mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+		err = mfill_atomic_pte(dst_pmd, dst_vma, dst_addr,
 				       src_addr, &page, mcopy_mode, wp_copy);
 		cond_resched();
@@ -710,7 +707,7 @@ ssize_t mfill_atomic_continue(struct mm_struct *dst_mm, unsigned long start,
 			      mmap_changing, 0);
 }
 
-long uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *dst_vma,
+long uffd_wp_range(struct vm_area_struct *dst_vma,
 		   unsigned long start, unsigned long len, bool enable_wp)
 {
 	unsigned int mm_cp_flags;
@@ -730,7 +727,7 @@ long uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *dst_vma,
 	 */
 	if (!enable_wp && vma_wants_manual_pte_write_upgrade(dst_vma))
 		mm_cp_flags |= MM_CP_TRY_CHANGE_WRITABLE;
-	tlb_gather_mmu(&tlb, dst_mm);
+	tlb_gather_mmu(&tlb, dst_vma->vm_mm);
 	ret = change_protection(&tlb, dst_vma, start, start + len, mm_cp_flags);
 	tlb_finish_mmu(&tlb);
 
@@ -782,7 +779,7 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 		goto out_unlock;
 	}
 
-	err = uffd_wp_range(dst_mm, dst_vma, start, len, enable_wp);
+	err = uffd_wp_range(dst_vma, start, len, enable_wp);
 
 	/* Return 0 on success, <0 on failures */
 	if (err > 0)