From patchwork Tue Jan 10 02:57:22 2023
X-Patchwork-Submitter: Alistair Popple <apopple@nvidia.com>
X-Patchwork-Id: 13094621
From: Alistair Popple <apopple@nvidia.com>
To: Andrew Morton, linux-mm@kvack.org
Cc: John Hubbard, Ralph Campbell, Jérôme Glisse, Ira Weiny,
 Jason Gunthorpe, Christoph Hellwig, Mike Kravetz, Alistair Popple,
 Mike Rapoport
Subject: [PATCH v2] mm/mmu_notifier: Remove unused mmu_notifier_range_update_to_read_only export
Date: Tue, 10 Jan 2023 13:57:22 +1100
Message-Id: <20230110025722.600912-1-apopple@nvidia.com>
X-Mailer: git-send-email 2.35.1
MIME-Version: 1.0

mmu_notifier_range_update_to_read_only() was originally introduced in
commit c6d23413f81b ("mm/mmu_notifier:
mmu_notifier_range_update_to_read_only() helper") as an optimisation for
device drivers that know a range has only been mapped read-only. However,
there are no users of this feature, so remove it. As it was the only user
of the struct mmu_notifier_range.vma field, remove that field as well.

Signed-off-by: Alistair Popple <apopple@nvidia.com>
Acked-by: Mike Rapoport (IBM)
Reviewed-by: Jason Gunthorpe
Reviewed-by: Christoph Hellwig
Reviewed-by: Mike Kravetz
---
Changes for v2:

 - Added Mike's Acked-by
 - Added Jason's Reviewed-by
 - Removed the now unused vma parameter from
   mmu_notifier_range_init{_owner}()
---
 fs/proc/task_mmu.c           |  2 +-
 include/linux/mmu_notifier.h | 13 +++++--------
 kernel/events/uprobes.c      |  2 +-
 mm/huge_memory.c             |  4 ++--
 mm/hugetlb.c                 | 13 ++++++-------
 mm/khugepaged.c              |  6 +++---
 mm/ksm.c                     |  5 ++---
 mm/madvise.c                 |  2 +-
 mm/mapping_dirty_helpers.c   |  2 +-
 mm/memory.c                  | 12 ++++++------
 mm/migrate_device.c          |  4 ++--
 mm/mmu_notifier.c            | 10 ----------
 mm/mprotect.c                |  2 +-
 mm/mremap.c                  |  2 +-
 mm/oom_kill.c                |  2 +-
 mm/rmap.c                    | 11 +++++------
 16 files changed, 38 insertions(+), 54 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 8a74cdcc9af0..b61d00af6cc2 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1300,7 +1300,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 
 			inc_tlb_flush_pending(mm);
 			mmu_notifier_range_init(&range, MMU_NOTIFY_SOFT_DIRTY,
-						0, NULL, mm, 0, -1UL);
+						0, mm, 0, -1UL);
 			mmu_notifier_invalidate_range_start(&range);
 		}
 		walk_page_range(mm, 0, -1, &clear_refs_walk_ops, &cp);
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index d6c06e140277..64a3e051c3c4 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -269,7 +269,6 @@ extern struct lockdep_map __mmu_notifier_invalidate_range_start_map;
 #endif
 
 struct mmu_notifier_range {
-	struct vm_area_struct *vma;
 	struct mm_struct *mm;
 	unsigned long start;
 	unsigned long end;
@@ -514,12 +513,10 @@ static inline void mmu_notifier_subscriptions_destroy(struct mm_struct *mm)
 static inline void mmu_notifier_range_init(struct mmu_notifier_range *range,
 					   enum mmu_notifier_event event,
 					   unsigned flags,
-					   struct vm_area_struct *vma,
 					   struct mm_struct *mm,
 					   unsigned long start,
 					   unsigned long end)
 {
-	range->vma = vma;
 	range->event = event;
 	range->mm = mm;
 	range->start = start;
@@ -530,10 +527,10 @@ static inline void mmu_notifier_range_init(struct mmu_notifier_range *range,
 
 static inline void mmu_notifier_range_init_owner(
 			struct mmu_notifier_range *range,
 			enum mmu_notifier_event event, unsigned int flags,
-			struct vm_area_struct *vma, struct mm_struct *mm,
-			unsigned long start, unsigned long end, void *owner)
+			struct mm_struct *mm, unsigned long start,
+			unsigned long end, void *owner)
 {
-	mmu_notifier_range_init(range, event, flags, vma, mm, start, end);
+	mmu_notifier_range_init(range, event, flags, mm, start, end);
 	range->owner = owner;
 }
@@ -659,9 +656,9 @@ static inline void _mmu_notifier_range_init(struct mmu_notifier_range *range,
 	range->end = end;
 }
 
-#define mmu_notifier_range_init(range,event,flags,vma,mm,start,end) \
+#define mmu_notifier_range_init(range,event,flags,mm,start,end) \
 	_mmu_notifier_range_init(range, start, end)
-#define mmu_notifier_range_init_owner(range, event, flags, vma, mm, start, \
+#define mmu_notifier_range_init_owner(range, event, flags, mm, start, \
 				      end, owner) \
 	_mmu_notifier_range_init(range, start, end)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index d9e357b7e17c..29f36d2ae129 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -161,7 +161,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 	int err;
 	struct mmu_notifier_range range;
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, addr,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
 				addr + PAGE_SIZE);
 
 	if (new_page) {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 811d19b5c4f6..39fd20026172 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1980,7 +1980,7 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 	spinlock_t *ptl;
 	struct mmu_notifier_range range;
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				address & HPAGE_PUD_MASK,
 				(address & HPAGE_PUD_MASK) + HPAGE_PUD_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
@@ -2270,7 +2270,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	spinlock_t *ptl;
 	struct mmu_notifier_range range;
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				address & HPAGE_PMD_MASK,
 				(address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e36ca75311a5..77cf3910819d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4797,7 +4797,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 	int ret = 0;
 
 	if (cow) {
-		mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_vma, src,
+		mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src,
 					src_vma->vm_start,
 					src_vma->vm_end);
 		mmu_notifier_invalidate_range_start(&range);
@@ -5005,7 +5005,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 	struct mmu_notifier_range range;
 	bool shared_pmd = false;
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, old_addr,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, old_addr,
 				old_end);
 	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
 	/*
@@ -5084,8 +5084,7 @@ static void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct
 	/*
 	 * If sharing possible, alert mmu notifiers of worst case.
 	 */
-	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, mm, start,
-				end);
+	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, mm, start, end);
 	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
 	mmu_notifier_invalidate_range_start(&range);
 	last_addr_mask = hugetlb_mask_last_page(h);
@@ -5434,7 +5433,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 				pages_per_huge_page(h));
 	__SetPageUptodate(new_page);
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, haddr,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, haddr,
 				haddr + huge_page_size(h));
 	mmu_notifier_invalidate_range_start(&range);
 
@@ -6423,7 +6422,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 	 * range if PMD sharing is possible.
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA,
-				0, vma, mm, start, end);
+				0, mm, start, end);
 	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
 
 	BUG_ON(address >= end);
@@ -7451,7 +7450,7 @@ void hugetlb_unshare_all_pmds(struct vm_area_struct *vma)
 	 * No need to call adjust_range_if_pmd_sharing_possible(), because
 	 * we have already done the PUD_SIZE alignment.
 	 */
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
 				start, end);
 	mmu_notifier_invalidate_range_start(&range);
 	hugetlb_vma_lock_write(vma);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 3703a56571c1..0dd71f6e1739 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1032,8 +1032,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 
 	anon_vma_lock_write(vma->anon_vma);
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
-				address, address + HPAGE_PMD_SIZE);
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
+				address + HPAGE_PMD_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
 
 	pte = pte_offset_map(pmd, address);
@@ -1411,7 +1411,7 @@ static void collapse_and_free_pmd(struct mm_struct *mm, struct vm_area_struct *v
 	if (vma->anon_vma)
 		lockdep_assert_held_write(&vma->anon_vma->root->rwsem);
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, NULL, mm, addr,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
 				addr + HPAGE_PMD_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
 	pmd = pmdp_collapse_flush(vma, addr, pmdp);
diff --git a/mm/ksm.c b/mm/ksm.c
index c19fcca9bc03..47e8eb8e0b2d 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1029,8 +1029,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 
 	BUG_ON(PageTransCompound(page));
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm,
-				pvmw.address,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, pvmw.address,
 				pvmw.address + PAGE_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
 
@@ -1137,7 +1136,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 	if (!pmd_present(pmde) || pmd_trans_huge(pmde))
 		goto out;
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, addr,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
 				addr + PAGE_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
 
diff --git a/mm/madvise.c b/mm/madvise.c
index b913ba6efc10..38e1700e9b9d 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -750,7 +750,7 @@ static int madvise_free_single_vma(struct vm_area_struct *vma,
 	range.end = min(vma->vm_end, end_addr);
 	if (range.end <= vma->vm_start)
 		return -EINVAL;
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
 				range.start, range.end);
 
 	lru_add_drain();
diff --git a/mm/mapping_dirty_helpers.c b/mm/mapping_dirty_helpers.c
index 1b0ab8fcfd8b..fca62dfd001b 100644
--- a/mm/mapping_dirty_helpers.c
+++ b/mm/mapping_dirty_helpers.c
@@ -191,7 +191,7 @@ static int wp_clean_pre_vma(unsigned long start, unsigned long end,
 	wpwalk->tlbflush_end = start;
 
 	mmu_notifier_range_init(&wpwalk->range, MMU_NOTIFY_PROTECTION_PAGE, 0,
-				walk->vma, walk->mm, start, end);
+				walk->mm, start, end);
 	mmu_notifier_invalidate_range_start(&wpwalk->range);
 	flush_cache_range(walk->vma, start, end);
 
diff --git a/mm/memory.c b/mm/memory.c
index 8c8420934d60..da2e29e51d89 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1307,7 +1307,7 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 
 	if (is_cow) {
 		mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
-					0, src_vma, src_mm, addr, end);
+					0, src_mm, addr, end);
 		mmu_notifier_invalidate_range_start(&range);
 		/*
 		 * Disabling preemption is not needed for the write side, as
@@ -1717,7 +1717,7 @@ void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
 	};
 	MA_STATE(mas, mt, vma->vm_end, vma->vm_end);
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, vma->vm_mm,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma->vm_mm,
 				start_addr, end_addr);
 	mmu_notifier_invalidate_range_start(&range);
 	do {
@@ -1744,7 +1744,7 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
 	MA_STATE(mas, mt, vma->vm_end, vma->vm_end);
 
 	lru_add_drain();
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				start, start + size);
 	tlb_gather_mmu(&tlb, vma->vm_mm);
 	update_hiwater_rss(vma->vm_mm);
@@ -1773,7 +1773,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
 	struct mmu_gather tlb;
 
 	lru_add_drain();
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				address, end);
 	if (is_vm_hugetlb_page(vma))
 		adjust_range_if_pmd_sharing_possible(vma, &range.start,
@@ -3143,7 +3143,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 
 	__SetPageUptodate(new_page);
 
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
 				vmf->address & PAGE_MASK,
 				(vmf->address & PAGE_MASK) + PAGE_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
@@ -3625,7 +3625,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
 		return VM_FAULT_RETRY;
 
-	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
+	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
 				(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
 	mmu_notifier_invalidate_range_start(&range);
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 721b2365dbca..6c3740318a98 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -306,7 +306,7 @@ static void migrate_vma_collect(struct migrate_vma *migrate)
 	 * private page mappings that won't be migrated.
 	 */
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_MIGRATE, 0,
-			migrate->vma, migrate->vma->vm_mm, migrate->start, migrate->end,
+			migrate->vma->vm_mm, migrate->start, migrate->end,
 			migrate->pgmap_owner);
 	mmu_notifier_invalidate_range_start(&range);
 
@@ -733,7 +733,7 @@ static void __migrate_device_pages(unsigned long *src_pfns,
 				notified = true;
 
 				mmu_notifier_range_init_owner(&range,
-						MMU_NOTIFY_MIGRATE, 0, migrate->vma,
+						MMU_NOTIFY_MIGRATE, 0,
 						migrate->vma->vm_mm, addr, migrate->end,
 						migrate->pgmap_owner);
 				mmu_notifier_invalidate_range_start(&range);
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index f45ff1b7626a..50c0dde1354f 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -1120,13 +1120,3 @@ void mmu_notifier_synchronize(void)
 	synchronize_srcu(&srcu);
 }
 EXPORT_SYMBOL_GPL(mmu_notifier_synchronize);
-
-bool
-mmu_notifier_range_update_to_read_only(const struct mmu_notifier_range *range)
-{
-	if (!range->vma || range->event != MMU_NOTIFY_PROTECTION_VMA)
-		return false;
-	/* Return true if the vma still have the read flag set. */
-	return range->vma->vm_flags & VM_READ;
-}
-EXPORT_SYMBOL_GPL(mmu_notifier_range_update_to_read_only);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 668bfaa6ed2a..c12c15fdf007 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -381,7 +381,7 @@ static inline unsigned long change_pmd_range(struct mmu_gather *tlb,
 
 		if (!range.start) {
 			mmu_notifier_range_init(&range,
 				MMU_NOTIFY_PROTECTION_VMA, 0,
-				vma, vma->vm_mm, addr, end);
+				vma->vm_mm, addr, end);
 			mmu_notifier_invalidate_range_start(&range);
 		}
diff --git a/mm/mremap.c b/mm/mremap.c
index e465ffe279bb..d6cabaab738d 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -498,7 +498,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 				       new_addr, len);
 
 	flush_cache_range(vma, old_addr, old_end);
-	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, vma->vm_mm,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma->vm_mm,
 				old_addr, old_end);
 	mmu_notifier_invalidate_range_start(&range);
 
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 1276e49b31b0..044e1eed720e 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -542,7 +542,7 @@ static bool __oom_reap_task_mm(struct mm_struct *mm)
 			struct mmu_gather tlb;
 
 			mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0,
-						vma, mm, vma->vm_start,
+						mm, vma->vm_start,
 						vma->vm_end);
 			tlb_gather_mmu(&tlb, mm);
 			if (mmu_notifier_invalidate_range_start_nonblock(&range)) {
diff --git a/mm/rmap.c b/mm/rmap.c
index 2ec925e5fa6a..130349cb4240 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -950,9 +950,8 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 	 * We have to assume the worse case ie pmd for invalidation. Note that
 	 * the folio can not be freed from this function.
 	 */
-	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
-				0, vma, vma->vm_mm, address,
-				vma_address_end(pvmw));
+	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE, 0,
+				vma->vm_mm, address, vma_address_end(pvmw));
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(pvmw)) {
@@ -1499,7 +1498,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	 * try_to_unmap() must hold a reference on the folio.
 	 */
 	range.end = vma_address_end(&pvmw);
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				address, range.end);
 	if (folio_test_hugetlb(folio)) {
 		/*
@@ -1874,7 +1873,7 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 	 * try_to_unmap() must hold a reference on the page.
 	 */
 	range.end = vma_address_end(&pvmw);
-	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
 				address, range.end);
 	if (folio_test_hugetlb(folio)) {
 		/*
@@ -2204,7 +2203,7 @@ static bool page_make_device_exclusive_one(struct folio *folio,
 	swp_entry_t entry;
 	pte_t swp_pte;
 
-	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
+	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				      vma->vm_mm, address, min(vma->vm_end,
 				      address + folio_size(folio)),
 				      args->owner);
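
[Editor's note: for out-of-tree drivers or backports, the caller-side
conversion this patch performs is mechanical: drop the vma argument and
pass only the mm_struct, since the range's vma field was only ever
consumed by the helper removed above. A minimal before/after sketch of a
typical caller; the surrounding declarations (range, vma, start, end) are
illustrative and not taken from this patch:

	struct mmu_notifier_range range;

	/* Before this patch: the vma was passed alongside the mm. */
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
				start, end);

	/* After this patch: only the mm is needed. */
	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
				start, end);

	mmu_notifier_invalidate_range_start(&range);
	/* ... clear or write-protect PTEs in [start, end) ... */
	mmu_notifier_invalidate_range_end(&range);
]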