From patchwork Tue Jan 17 15:58:34 2023
X-Patchwork-Submitter: Jason Gunthorpe
X-Patchwork-Id: 13104839
From: Jason Gunthorpe
To: 
Cc: Alistair Popple , John Hubbard , linux-mm@kvack.org
Subject: [PATCH 3/8] mm/gup: simplify the external interface functions and
 consolidate invariants
Date: Tue, 17 Jan 2023 11:58:34 -0400
Message-Id: <3-v1-dd94f8f0d5ad+716-gup_tidy_jgg@nvidia.com>
In-Reply-To: <0-v1-dd94f8f0d5ad+716-gup_tidy_jgg@nvidia.com>

The GUP family of functions have a complex, but fairly well defined, set
of invariants for their arguments. Currently these are sprinkled about,
sometimes in duplicate through many functions.

Internally we don't follow all the invariants that the external
interface has to follow, so place these checks directly at the exported
interface. This ensures the internal functions never reach a violated
invariant.
Remove the duplicated invariant checks.

The end result is to make these functions fully internal:
 __get_user_pages_locked()
 internal_get_user_pages_fast()
 __gup_longterm_locked()

And all the other functions call directly into one of these.

Suggested-by: John Hubbard 
Signed-off-by: Jason Gunthorpe 
---
 mm/gup.c         | 150 +++++++++++++++++++++++------------------------
 mm/huge_memory.c |  10 ----
 2 files changed, 75 insertions(+), 85 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 2c833f862d0354..9e332e3f6ea8e2 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -215,7 +215,6 @@ int __must_check try_grab_page(struct page *page, unsigned int flags)
 {
 	struct folio *folio = page_folio(page);
 
-	WARN_ON_ONCE((flags & (FOLL_GET | FOLL_PIN)) == (FOLL_GET | FOLL_PIN));
 	if (WARN_ON_ONCE(folio_ref_count(folio) <= 0))
 		return -ENOMEM;
 
@@ -818,7 +817,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 	if (vma_is_secretmem(vma))
 		return NULL;
 
-	if (foll_flags & FOLL_PIN)
+	if (WARN_ON_ONCE(foll_flags & FOLL_PIN))
 		return NULL;
 
 	page = follow_page_mask(vma, address, foll_flags, &ctx);
@@ -975,9 +974,6 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
 		return -EOPNOTSUPP;
 
-	if ((gup_flags & FOLL_LONGTERM) && (gup_flags & FOLL_PCI_P2PDMA))
-		return -EOPNOTSUPP;
-
 	if (vma_is_secretmem(vma))
 		return -EFAULT;
 
@@ -1345,11 +1341,6 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 	long ret, pages_done;
 	bool lock_dropped = false;
 
-	if (locked) {
-		/* if VM_FAULT_RETRY can be returned, vmas become invalid */
-		BUG_ON(vmas);
-	}
-
 	/*
 	 * The internal caller expects GUP to manage the lock internally and the
 	 * lock must be released when this returns.
@@ -2075,16 +2066,6 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 		return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
 					       locked, gup_flags);
 
-	/*
-	 * If we get to this point then FOLL_LONGTERM is set, and FOLL_LONGTERM
-	 * implies FOLL_PIN (although the reverse is not true). Therefore it is
-	 * correct to unconditionally call check_and_migrate_movable_pages()
-	 * which assumes pages have been pinned via FOLL_PIN.
-	 *
-	 * Enforce the above reasoning by asserting that FOLL_PIN is set.
-	 */
-	if (WARN_ON(!(gup_flags & FOLL_PIN)))
-		return -EINVAL;
-
 	flags = memalloc_pin_save();
 	do {
 		nr_pinned_pages = __get_user_pages_locked(mm, start, nr_pages,
@@ -2094,28 +2075,66 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 			rc = nr_pinned_pages;
 			break;
 		}
+
+		/* FOLL_LONGTERM implies FOLL_PIN */
 		rc = check_and_migrate_movable_pages(nr_pinned_pages, pages);
 	} while (rc == -EAGAIN);
 	memalloc_pin_restore(flags);
 	return rc ? rc : nr_pinned_pages;
 }
 
-static bool is_valid_gup_flags(unsigned int gup_flags)
+/*
+ * Check that the given flags are valid for the exported gup/pup interface, and
+ * update them with the required flags that the caller must have set.
+ */
+static bool is_valid_gup_args(struct page **pages, struct vm_area_struct **vmas,
+			      int *locked, unsigned int *gup_flags_p,
+			      unsigned int to_set)
 {
+	unsigned int gup_flags = *gup_flags_p;
+
 	/*
-	 * FOLL_PIN must only be set internally by the pin_user_pages*() APIs,
-	 * never directly by the caller, so enforce that with an assertion:
+	 * These flags not allowed to be specified externally to the gup
+	 * interfaces:
+	 * - FOLL_PIN/FOLL_TRIED/FOLL_FAST_ONLY is internal only
+	 * - FOLL_REMOTE is internal only and used on follow_page()
 	 */
-	if (WARN_ON_ONCE(gup_flags & FOLL_PIN))
+	if (WARN_ON_ONCE(gup_flags & (FOLL_PIN | FOLL_TRIED |
+				      FOLL_REMOTE | FOLL_FAST_ONLY)))
 		return false;
+
+	gup_flags |= to_set;
+
+	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
+	if (WARN_ON_ONCE((gup_flags & (FOLL_PIN | FOLL_GET)) ==
+			 (FOLL_PIN | FOLL_GET)))
+		return false;
+
+	/* LONGTERM can only be specified when pinning */
+	if (WARN_ON_ONCE(!(gup_flags & FOLL_PIN) && (gup_flags & FOLL_LONGTERM)))
+		return false;
+
+	/* Pages input must be given if using GET/PIN */
+	if (WARN_ON_ONCE((gup_flags & (FOLL_GET | FOLL_PIN)) && !pages))
+		return false;
+
+	/* At the external interface locked must be set */
+	if (WARN_ON_ONCE(locked && *locked != 1))
+		return false;
+
+	/* We want to allow the pgmap to be hot-unplugged at all times */
+	if (WARN_ON_ONCE((gup_flags & FOLL_LONGTERM) &&
+			 (gup_flags & FOLL_PCI_P2PDMA)))
+		return false;
+
 	/*
-	 * FOLL_PIN is a prerequisite to FOLL_LONGTERM. Another way of saying
-	 * that is, FOLL_LONGTERM is a specific case, more restrictive case of
-	 * FOLL_PIN.
+	 * Can't use VMAs with locked, as locked allows GUP to unlock
+	 * which invalidates the vmas array
 	 */
-	if (WARN_ON_ONCE(gup_flags & FOLL_LONGTERM))
+	if (WARN_ON_ONCE(vmas && locked))
 		return false;
 
+	*gup_flags_p = gup_flags;
 	return true;
 }
 
@@ -2185,11 +2204,12 @@ long get_user_pages_remote(struct mm_struct *mm,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked)
 {
-	if (!is_valid_gup_flags(gup_flags))
+	if (!is_valid_gup_args(pages, vmas, locked, &gup_flags,
+			       FOLL_TOUCH | FOLL_REMOTE))
 		return -EINVAL;
 
 	return __get_user_pages_locked(mm, start, nr_pages, pages, vmas, locked,
-				       gup_flags | FOLL_TOUCH | FOLL_REMOTE);
+				       gup_flags);
 }
 EXPORT_SYMBOL(get_user_pages_remote);
 
@@ -2223,11 +2243,11 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages,
 		    struct vm_area_struct **vmas)
 {
-	if (!is_valid_gup_flags(gup_flags))
+	if (!is_valid_gup_args(pages, vmas, NULL, &gup_flags, FOLL_TOUCH))
 		return -EINVAL;
 
 	return __get_user_pages_locked(current->mm, start, nr_pages, pages,
-				       vmas, NULL, gup_flags | FOLL_TOUCH);
+				       vmas, NULL, gup_flags);
 }
 EXPORT_SYMBOL(get_user_pages);
 
@@ -2251,8 +2271,11 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 {
 	int locked = 0;
 
+	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_TOUCH))
+		return -EINVAL;
+
 	return __get_user_pages_locked(current->mm, start, nr_pages, pages,
-				       NULL, &locked, gup_flags | FOLL_TOUCH);
+				       NULL, &locked, gup_flags);
 }
 EXPORT_SYMBOL(get_user_pages_unlocked);
 
@@ -2980,7 +3003,9 @@ int get_user_pages_fast_only(unsigned long start, int nr_pages,
 	 * FOLL_FAST_ONLY is required in order to match the API description of
 	 * this routine: no fall back to regular ("slow") GUP.
 	 */
-	gup_flags |= FOLL_GET | FOLL_FAST_ONLY;
+	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags,
+			       FOLL_GET | FOLL_FAST_ONLY))
+		return -EINVAL;
 
 	nr_pinned = internal_get_user_pages_fast(start, nr_pages, gup_flags,
 						 pages);
@@ -3017,16 +3042,14 @@ EXPORT_SYMBOL_GPL(get_user_pages_fast_only);
 int get_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages)
 {
-	if (!is_valid_gup_flags(gup_flags))
-		return -EINVAL;
-
 	/*
 	 * The caller may or may not have explicitly set FOLL_GET; either way is
 	 * OK. However, internally (within mm/gup.c), gup fast variants must set
 	 * FOLL_GET, because gup fast is always a "pin with a +1 page refcount"
 	 * request.
 	 */
-	gup_flags |= FOLL_GET;
+	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_GET))
+		return -EINVAL;
+
 	return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages);
 }
 EXPORT_SYMBOL_GPL(get_user_pages_fast);
 
@@ -3050,14 +3073,8 @@ EXPORT_SYMBOL_GPL(get_user_pages_fast);
 int pin_user_pages_fast(unsigned long start, int nr_pages,
 			unsigned int gup_flags, struct page **pages)
 {
-	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
-	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
-		return -EINVAL;
-
-	if (WARN_ON_ONCE(!pages))
+	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags, FOLL_PIN))
 		return -EINVAL;
-
-	gup_flags |= FOLL_PIN;
 	return internal_get_user_pages_fast(start, nr_pages, gup_flags, pages);
 }
 EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 
@@ -3073,20 +3090,14 @@ int pin_user_pages_fast_only(unsigned long start, int nr_pages,
 {
 	int nr_pinned;
 
-	/*
-	 * FOLL_GET and FOLL_PIN are mutually exclusive. Note that the API
-	 * rules require returning 0, rather than -errno:
-	 */
-	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
-		return 0;
-
-	if (WARN_ON_ONCE(!pages))
-		return 0;
 	/*
 	 * FOLL_FAST_ONLY is required in order to match the API description of
 	 * this routine: no fall back to regular ("slow") GUP.
 	 */
-	gup_flags |= (FOLL_PIN | FOLL_FAST_ONLY);
+	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags,
+			       FOLL_PIN | FOLL_FAST_ONLY))
+		return 0;
+
 	nr_pinned = internal_get_user_pages_fast(start, nr_pages, gup_flags,
 						 pages);
 	/*
@@ -3128,16 +3139,11 @@ long pin_user_pages_remote(struct mm_struct *mm,
 			   unsigned int gup_flags, struct page **pages,
 			   struct vm_area_struct **vmas, int *locked)
 {
-	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
-	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
-		return -EINVAL;
-
-	if (WARN_ON_ONCE(!pages))
-		return -EINVAL;
-
+	if (!is_valid_gup_args(pages, vmas, locked, &gup_flags,
+			       FOLL_PIN | FOLL_TOUCH | FOLL_REMOTE))
+		return 0;
 	return __gup_longterm_locked(mm, start, nr_pages, pages, vmas, locked,
-				     gup_flags | FOLL_PIN | FOLL_TOUCH |
-				     FOLL_REMOTE);
+				     gup_flags);
 }
 EXPORT_SYMBOL(pin_user_pages_remote);
 
@@ -3162,14 +3168,8 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages,
 		    struct vm_area_struct **vmas)
 {
-	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
-	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
-		return -EINVAL;
-
-	if (WARN_ON_ONCE(!pages))
-		return -EINVAL;
-
-	gup_flags |= FOLL_PIN;
+	if (!is_valid_gup_args(pages, vmas, NULL, &gup_flags, FOLL_PIN))
+		return 0;
 	return __gup_longterm_locked(current->mm, start, nr_pages, pages,
 				     vmas, NULL, gup_flags);
 }
@@ -3185,10 +3185,10 @@ long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 {
 	int locked = 0;
 
-	if (WARN_ON_ONCE(!pages))
-		return -EINVAL;
+	if (!is_valid_gup_args(pages, NULL, NULL, &gup_flags,
+			       FOLL_PIN | FOLL_TOUCH))
+		return 0;
 
-	gup_flags |= FOLL_PIN | FOLL_TOUCH;
 	return __gup_longterm_locked(current->mm, start, nr_pages, pages, NULL,
 				     &locked, gup_flags);
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index abe6cfd92ffa0e..eaf879c835de44 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1039,11 +1039,6 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
-	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
-			 (FOLL_PIN | FOLL_GET)))
-		return NULL;
-
 	if (flags & FOLL_WRITE && !pmd_write(*pmd))
 		return NULL;
 
@@ -1202,11 +1197,6 @@ struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
 	if (flags & FOLL_WRITE && !pud_write(*pud))
 		return NULL;
 
-	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
-	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
-			 (FOLL_PIN | FOLL_GET)))
-		return NULL;
-
 	if (pud_present(*pud) && pud_devmap(*pud))
 		/* pass */;
 	else
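
The pattern the patch adopts — one argument-validation choke point at each exported entry that both checks the caller's flags and merges in the flags the wrapper requires — can be sketched outside the kernel. The sketch below is not kernel code: every `demo_*` identifier and `DEMO_*` flag bit is invented for illustration, and the checks only mirror the shape of `is_valid_gup_args()`, not its full set of invariants.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical flag bits, loosely modeled on the FOLL_* scheme. */
#define DEMO_GET       0x1u  /* internal: take a transient reference */
#define DEMO_PIN       0x2u  /* internal: pin the page */
#define DEMO_LONGTERM  0x4u  /* caller-visible: long-term pin */

/*
 * Single validation choke point: reject flags callers may not pass,
 * merge in the wrapper-required flags, then re-check the combined set.
 * On success the augmented flags are written back through *flags_p,
 * the same in/out convention as is_valid_gup_args().
 */
static bool demo_valid_args(void **pages, unsigned int *flags_p,
			    unsigned int to_set)
{
	unsigned int flags = *flags_p;

	/* PIN is internal only; callers may never set it directly. */
	if (flags & DEMO_PIN)
		return false;

	flags |= to_set;

	/* GET and PIN are mutually exclusive. */
	if ((flags & (DEMO_GET | DEMO_PIN)) == (DEMO_GET | DEMO_PIN))
		return false;

	/* LONGTERM only makes sense when pinning. */
	if ((flags & DEMO_LONGTERM) && !(flags & DEMO_PIN))
		return false;

	/* An output array is required when taking references. */
	if ((flags & (DEMO_GET | DEMO_PIN)) && !pages)
		return false;

	*flags_p = flags;
	return true;
}

/* Exported wrapper: validate once, then call the internal worker. */
static int demo_pin_pages(void **pages, unsigned int flags)
{
	if (!demo_valid_args(pages, &flags, DEMO_PIN))
		return -1;
	/* an internal worker would run here with fully-validated flags */
	return 0;
}
```

Because every exported wrapper funnels through the same helper, the internal workers can assume their invariants hold instead of re-checking them, which is the consolidation the patch performs.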