From patchwork Tue Jan 24 20:34:22 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jason Gunthorpe <jgg@nvidia.com>
X-Patchwork-Id: 13114779
From: Jason Gunthorpe <jgg@nvidia.com>
To:
Cc: Alistair Popple, David Hildenbrand, David Howells, Christoph Hellwig,
	John Hubbard, linux-mm@kvack.org, "Mike Rapoport (IBM)"
Subject: [PATCH v2 01/13] mm/gup: have internal functions get the mmap_read_lock()
Date: Tue, 24 Jan 2023 16:34:22 -0400
Message-Id: <1-v2-987e91b59705+36b-gup_tidy_jgg@nvidia.com>
In-Reply-To: <0-v2-987e91b59705+36b-gup_tidy_jgg@nvidia.com>
References:
__get_user_pages_locked() and __gup_longterm_locked() both require the
mmap lock to be held. They have a slightly unusual locked parameter that
is used to allow these functions to unlock and relock the mmap lock and
convey that fact to the caller.

Several places wrap these functions with a simple mmap_read_lock() just
so they can follow the optimized locked protocol.

Consolidate this internally to the functions. Allow internal callers to
set locked = 0 to cause the functions to acquire and release the lock on
their own.

Reorganize __gup_longterm_locked() to use the autolocking in
__get_user_pages_locked().

Replace all the places obtaining the mmap_read_lock() just to call
__get_user_pages_locked() with the new mechanism.

Replace all the internal callers of get_user_pages_unlocked() with
direct calls to __gup_longterm_locked() using the new mechanism.

A following patch will add assertions ensuring the external interface
continues to always pass in locked = 1.

Acked-by: Mike Rapoport (IBM)
Signed-off-by: Jason Gunthorpe
Reviewed-by: John Hubbard
---
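Notes (below the scissors line, not part of the commit message): an
illustrative sketch of the two locked conventions as seen by a caller
after this patch. demo_internal_gup() and demo_external_gup() are
hypothetical helpers, written as if they lived in mm/gup.c; they are not
part of this patch.

/* Internal caller: let GUP take and drop the mmap_lock itself. */
static long demo_internal_gup(struct mm_struct *mm, unsigned long start,
			      unsigned long nr_pages, struct page **pages)
{
	int locked = 0;	/* 0 = GUP acquires/releases the lock for us */

	/*
	 * No mmap_read_lock() here. __get_user_pages_locked() sees
	 * *locked == 0, takes the lock with mmap_read_lock_killable(),
	 * and guarantees it is dropped again before returning, so
	 * locked is still 0 afterwards.
	 */
	return __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
				       &locked, FOLL_GET);
}

/* External-style caller: holds the lock, follows the locked protocol. */
static long demo_external_gup(struct mm_struct *mm, unsigned long start,
			      unsigned long nr_pages, struct page **pages)
{
	int locked = 1;	/* 1 = caller already holds the mmap_lock */
	long ret;

	mmap_read_lock(mm);
	ret = __get_user_pages_locked(mm, start, nr_pages, pages, NULL,
				      &locked, FOLL_GET);
	/*
	 * The function may have dropped the lock to fault pages in; if
	 * so it set *locked = 0 and we must not unlock again.
	 */
	if (locked)
		mmap_read_unlock(mm);
	return ret;
}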
 mm/gup.c | 113 ++++++++++++++++++++++++++++++++-----------------------
 1 file changed, 65 insertions(+), 48 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 920ee4d85e70ba..7007b3afc4fda8 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1331,8 +1331,17 @@ static bool gup_signal_pending(unsigned int flags)
 }
 
 /*
- * Please note that this function, unlike __get_user_pages will not
- * return 0 for nr_pages > 0 without FOLL_NOWAIT
+ * Locking: (*locked == 1) means that the mmap_lock has already been acquired by
+ * the caller. This function may drop the mmap_lock. If it does so, then it will
+ * set (*locked = 0).
+ *
+ * (*locked == 0) means that the caller expects this function to acquire and
+ * drop the mmap_lock. Therefore, the value of *locked will still be zero when
+ * the function returns, even though it may have changed temporarily during
+ * function execution.
+ *
+ * Please note that this function, unlike __get_user_pages(), will not return 0
+ * for nr_pages > 0, unless FOLL_NOWAIT is used.
  */
 static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 						    unsigned long start,
@@ -1343,13 +1352,22 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 					unsigned int flags)
 {
 	long ret, pages_done;
-	bool lock_dropped;
+	bool must_unlock = false;
 
 	if (locked) {
 		/* if VM_FAULT_RETRY can be returned, vmas become invalid */
 		BUG_ON(vmas);
-		/* check caller initialized locked */
-		BUG_ON(*locked != 1);
+	}
+
+	/*
+	 * The internal caller expects GUP to manage the lock internally and the
+	 * lock must be released when this returns.
+	 */
+	if (locked && !*locked) {
+		if (mmap_read_lock_killable(mm))
+			return -EAGAIN;
+		must_unlock = true;
+		*locked = 1;
 	}
 
 	if (flags & FOLL_PIN)
@@ -1368,7 +1386,6 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 		flags |= FOLL_GET;
 
 	pages_done = 0;
-	lock_dropped = false;
 	for (;;) {
 		ret = __get_user_pages(mm, start, nr_pages, flags, pages,
 				       vmas, locked);
@@ -1404,7 +1421,9 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 		if (likely(pages))
 			pages += ret;
 		start += ret << PAGE_SHIFT;
-		lock_dropped = true;
+
+		/* The lock was temporarily dropped, so we must unlock later */
+		must_unlock = true;
 
 retry:
 		/*
@@ -1451,10 +1470,11 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 			pages++;
 			start += PAGE_SIZE;
 		}
-	if (lock_dropped && *locked) {
+	if (must_unlock && *locked) {
 		/*
-		 * We must let the caller know we temporarily dropped the lock
-		 * and so the critical section protected by it was lost.
+		 * We either temporarily dropped the lock, or the caller
+		 * requested that we both acquire and drop the lock. Either way,
+		 * we must now unlock, and notify the caller of that state.
 		 */
 		mmap_read_unlock(mm);
 		*locked = 0;
@@ -1659,9 +1679,24 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 		unsigned int foll_flags)
 {
 	struct vm_area_struct *vma;
+	bool must_unlock = false;
 	unsigned long vm_flags;
 	long i;
 
+	if (!nr_pages)
+		return 0;
+
+	/*
+	 * The internal caller expects GUP to manage the lock internally and the
+	 * lock must be released when this returns.
+	 */
+	if (locked && !*locked) {
+		if (mmap_read_lock_killable(mm))
+			return -EAGAIN;
+		must_unlock = true;
+		*locked = 1;
+	}
+
 	/* calculate required read or write permissions.
 	 * If FOLL_FORCE is set, we only require the "MAY" flags.
 	 */
@@ -1673,12 +1708,12 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 	for (i = 0; i < nr_pages; i++) {
 		vma = find_vma(mm, start);
 		if (!vma)
-			goto finish_or_fault;
+			break;
 
 		/* protect what we can, including chardevs */
 		if ((vma->vm_flags & (VM_IO | VM_PFNMAP)) ||
 		    !(vm_flags & vma->vm_flags))
-			goto finish_or_fault;
+			break;
 
 		if (pages) {
 			pages[i] = virt_to_page((void *)start);
@@ -1690,9 +1725,11 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
 		start = (start + PAGE_SIZE) & PAGE_MASK;
 	}
 
-	return i;
+	if (must_unlock && *locked) {
+		mmap_read_unlock(mm);
+		*locked = 0;
+	}
 
-finish_or_fault:
 	return i ? : -EFAULT;
 }
 #endif /* !CONFIG_MMU */
@@ -1861,17 +1898,13 @@ EXPORT_SYMBOL(fault_in_readable);
 #ifdef CONFIG_ELF_CORE
 struct page *get_dump_page(unsigned long addr)
 {
-	struct mm_struct *mm = current->mm;
 	struct page *page;
-	int locked = 1;
+	int locked = 0;
 	int ret;
 
-	if (mmap_read_lock_killable(mm))
-		return NULL;
-	ret = __get_user_pages_locked(mm, addr, 1, &page, NULL, &locked,
+	ret = __get_user_pages_locked(current->mm, addr, 1, &page, NULL,
+				      &locked,
 				      FOLL_FORCE | FOLL_DUMP | FOLL_GET);
-	if (locked)
-		mmap_read_unlock(mm);
 	return (ret == 1) ? page : NULL;
 }
 #endif /* CONFIG_ELF_CORE */
@@ -2047,13 +2080,9 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 					   int *locked,
 					   unsigned int gup_flags)
 {
-	bool must_unlock = false;
 	unsigned int flags;
 	long rc, nr_pinned_pages;
 
-	if (locked && WARN_ON_ONCE(!*locked))
-		return -EINVAL;
-
 	if (!(gup_flags & FOLL_LONGTERM))
 		return __get_user_pages_locked(mm, start, nr_pages, pages, vmas,
 					       locked, gup_flags);
@@ -2070,11 +2099,6 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 		return -EINVAL;
 	flags = memalloc_pin_save();
 	do {
-		if (locked && !*locked) {
-			mmap_read_lock(mm);
-			must_unlock = true;
-			*locked = 1;
-		}
 		nr_pinned_pages = __get_user_pages_locked(mm, start, nr_pages,
 							  pages, vmas, locked,
 							  gup_flags);
@@ -2085,11 +2109,6 @@ static long __gup_longterm_locked(struct mm_struct *mm,
 		rc = check_and_migrate_movable_pages(nr_pinned_pages, pages);
 	} while (rc == -EAGAIN);
 	memalloc_pin_restore(flags);
-
-	if (locked && *locked && must_unlock) {
-		mmap_read_unlock(mm);
-		*locked = 0;
-	}
 	return rc ? rc : nr_pinned_pages;
 }
 
@@ -2242,16 +2261,10 @@ EXPORT_SYMBOL(get_user_pages);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 			     struct page **pages, unsigned int gup_flags)
 {
-	struct mm_struct *mm = current->mm;
-	int locked = 1;
-	long ret;
+	int locked = 0;
 
-	mmap_read_lock(mm);
-	ret = __gup_longterm_locked(mm, start, nr_pages, pages, NULL, &locked,
-				    gup_flags | FOLL_TOUCH);
-	if (locked)
-		mmap_read_unlock(mm);
-	return ret;
+	return __gup_longterm_locked(current->mm, start, nr_pages, pages, NULL,
+				     &locked, gup_flags | FOLL_TOUCH);
 }
 EXPORT_SYMBOL(get_user_pages_unlocked);
 
@@ -2904,6 +2917,7 @@ static int internal_get_user_pages_fast(unsigned long start,
 {
 	unsigned long len, end;
 	unsigned long nr_pinned;
+	int locked = 0;
 	int ret;
 
 	if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
@@ -2932,8 +2946,9 @@ static int internal_get_user_pages_fast(unsigned long start,
 	/* Slow path: try to get the remaining pages with get_user_pages */
 	start += nr_pinned << PAGE_SHIFT;
 	pages += nr_pinned;
-	ret = get_user_pages_unlocked(start, nr_pages - nr_pinned, pages,
-				      gup_flags);
+	ret = __gup_longterm_locked(current->mm, start, nr_pages - nr_pinned,
+				    pages, NULL, &locked,
+				    gup_flags | FOLL_TOUCH);
 	if (ret < 0) {
 		/*
 		 * The caller has to unpin the pages we already pinned so
@@ -3183,11 +3198,13 @@ long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
+	int locked = 0;
+
 	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
 	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
 		return -EINVAL;
 
 	if (WARN_ON_ONCE(!pages))
 		return -EINVAL;
 
-	gup_flags |= FOLL_PIN;
-	return get_user_pages_unlocked(start, nr_pages, pages, gup_flags);
+	gup_flags |= FOLL_PIN | FOLL_TOUCH;
+	return __gup_longterm_locked(current->mm, start, nr_pages, pages, NULL,
+				     &locked, gup_flags);
 }
 EXPORT_SYMBOL(pin_user_pages_unlocked);
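
Postscript (illustrative, not part of the patch): the external API keeps
its existing behaviour; only internal callers change. A hypothetical
driver helper -- demo_pin_user_buf() does not exist in the tree -- would
still pin a long-term user buffer without touching the mmap_lock itself:

static int demo_pin_user_buf(unsigned long uaddr, unsigned long nr_pages,
			     struct page **pages)
{
	long pinned;

	/* No mmap_lock held here; the unlocked variant manages it. */
	pinned = pin_user_pages_unlocked(uaddr, nr_pages, pages,
					 FOLL_WRITE | FOLL_LONGTERM);
	if (pinned < 0)
		return (int)pinned;
	if (pinned != nr_pages) {
		/* Partial pin: release what we got and report failure. */
		unpin_user_pages(pages, pinned);
		return -EFAULT;
	}
	return 0;
}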