From patchwork Wed Nov 7 06:06:40 2018
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 10671873
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: Keith Busch, Linus Torvalds, peterx@redhat.com, Dan Williams,
    linux-mm@kvack.org, Matthew Wilcox, Al Viro, Andrea Arcangeli,
    Huang Ying, Mike Kravetz, Mike Rapoport, Jerome Glisse,
    "Michael S. Tsirkin", "Kirill A. Shutemov", Michal Hocko,
    Vlastimil Babka, Pavel Tatashin, Andrew Morton
Subject: [PATCH RFC v2 1/4] mm: gup: rename "nonblocking" to "locked" where proper
Date: Wed, 7 Nov 2018 14:06:40 +0800
Message-Id: <20181107060643.10950-2-peterx@redhat.com>
In-Reply-To: <20181107060643.10950-1-peterx@redhat.com>
References: <20181107060643.10950-1-peterx@redhat.com>

There are plenty of places around __get_user_pages() with a parameter
named "nonblocking" which does not really mean "it won't block"
(because it can block); instead it indicates whether the mmap_sem has
been released by up_read() during page fault handling, mostly when
VM_FAULT_RETRY is returned.  We already use the correct name "locked"
in e.g. get_user_pages_locked() and get_user_pages_remote(), but many
places still use "nonblocking".  Rename those to "locked" where proper
to better suit the functionality of the variable.  While at it, fix up
some of the comments accordingly.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 mm/gup.c     | 44 +++++++++++++++++++++-----------------------
 mm/hugetlb.c |  8 ++++----
 2 files changed, 25 insertions(+), 27 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 841d7ef53591..6faff46cd409 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -500,12 +500,12 @@ static int get_gate_page(struct mm_struct *mm, unsigned long address,
 }
 
 /*
- * mmap_sem must be held on entry.  If @nonblocking != NULL and
- * *@flags does not include FOLL_NOWAIT, the mmap_sem may be released.
- * If it is, *@nonblocking will be set to 0 and -EBUSY returned.
+ * mmap_sem must be held on entry.  If @locked != NULL and *@flags
+ * does not include FOLL_NOWAIT, the mmap_sem may be released.  If it
+ * is, *@locked will be set to 0 and -EBUSY returned.
  */
 static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
-		unsigned long address, unsigned int *flags, int *nonblocking)
+		unsigned long address, unsigned int *flags, int *locked)
 {
 	unsigned int fault_flags = 0;
 	vm_fault_t ret;
@@ -517,7 +517,7 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 		fault_flags |= FAULT_FLAG_WRITE;
 	if (*flags & FOLL_REMOTE)
 		fault_flags |= FAULT_FLAG_REMOTE;
-	if (nonblocking)
+	if (locked)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY;
 	if (*flags & FOLL_NOWAIT)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
@@ -543,8 +543,8 @@ static int faultin_page(struct task_struct *tsk, struct vm_area_struct *vma,
 	}
 
 	if (ret & VM_FAULT_RETRY) {
-		if (nonblocking && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
-			*nonblocking = 0;
+		if (locked && !(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
+			*locked = 0;
 		return -EBUSY;
 	}
 
@@ -621,7 +621,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
  *		only intends to ensure the pages are faulted in.
  * @vmas:	array of pointers to vmas corresponding to each page.
  *		Or NULL if the caller does not require them.
- * @nonblocking: whether waiting for disk IO or mmap_sem contention
+ * @locked:     whether we're still with the mmap_sem held
  *
  * Returns number of pages pinned. This may be fewer than the number
  * requested. If nr_pages is 0 or negative, returns 0. If no pages
@@ -650,13 +650,11 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
  * appropriate) must be called after the page is finished with, and
  * before put_page is called.
  *
- * If @nonblocking != NULL, __get_user_pages will not wait for disk IO
- * or mmap_sem contention, and if waiting is needed to pin all pages,
- * *@nonblocking will be set to 0.  Further, if @gup_flags does not
- * include FOLL_NOWAIT, the mmap_sem will be released via up_read() in
- * this case.
+ * If @locked != NULL, *@locked will be set to 0 when mmap_sem is
+ * released by an up_read().  That can happen if @gup_flags does not
+ * have FOLL_NOWAIT.
  *
- * A caller using such a combination of @nonblocking and @gup_flags
+ * A caller using such a combination of @locked and @gup_flags
  * must therefore hold the mmap_sem for reading only, and recognize
  * when it's been released.  Otherwise, it must be held for either
  * reading or writing and will not be released.
@@ -668,7 +666,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		unsigned long start, unsigned long nr_pages,
 		unsigned int gup_flags, struct page **pages,
-		struct vm_area_struct **vmas, int *nonblocking)
+		struct vm_area_struct **vmas, int *locked)
 {
 	long ret = 0, i = 0;
 	struct vm_area_struct *vma = NULL;
@@ -713,7 +711,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 			if (is_vm_hugetlb_page(vma)) {
 				i = follow_hugetlb_page(mm, vma, pages, vmas,
 						&start, &nr_pages, i,
-						gup_flags, nonblocking);
+						gup_flags, locked);
 				continue;
 			}
 		}
@@ -731,7 +729,7 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		page = follow_page_mask(vma, start, foll_flags, &ctx);
 		if (!page) {
 			ret = faultin_page(tsk, vma, start, &foll_flags,
-					nonblocking);
+					locked);
 			switch (ret) {
 			case 0:
 				goto retry;
@@ -1190,7 +1188,7 @@ EXPORT_SYMBOL(get_user_pages_longterm);
  * @vma:   target vma
  * @start: start address
  * @end:   end address
- * @nonblocking:
+ * @locked: whether the mmap_sem is still held
  *
  * This takes care of mlocking the pages too if VM_LOCKED is set.
  *
@@ -1198,14 +1196,14 @@ EXPORT_SYMBOL(get_user_pages_longterm);
  *
  * vma->vm_mm->mmap_sem must be held.
  *
- * If @nonblocking is NULL, it may be held for read or write and will
+ * If @locked is NULL, it may be held for read or write and will
  * be unperturbed.
  *
- * If @nonblocking is non-NULL, it must held for read only and may be
- * released.  If it's released, *@nonblocking will be set to 0.
+ * If @locked is non-NULL, it must be held for read only and may be
+ * released.  If it's released, *@locked will be set to 0.
  */
 long populate_vma_page_range(struct vm_area_struct *vma,
-		unsigned long start, unsigned long end, int *nonblocking)
+		unsigned long start, unsigned long end, int *locked)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	unsigned long nr_pages = (end - start) / PAGE_SIZE;
@@ -1240,7 +1238,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	 * not result in a stack expansion that recurses back here.
 	 */
 	return __get_user_pages(current, mm, start, nr_pages, gup_flags,
-				NULL, NULL, nonblocking);
+				NULL, NULL, locked);
 }
 
 /*
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7b5c0ad9a6bd..c700d4dfbbc3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4166,7 +4166,7 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			 struct page **pages, struct vm_area_struct **vmas,
 			 unsigned long *position, unsigned long *nr_pages,
-			 long i, unsigned int flags, int *nonblocking)
+			 long i, unsigned int flags, int *locked)
 {
 	unsigned long pfn_offset;
 	unsigned long vaddr = *position;
@@ -4237,7 +4237,7 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			spin_unlock(ptl);
 			if (flags & FOLL_WRITE)
 				fault_flags |= FAULT_FLAG_WRITE;
-			if (nonblocking)
+			if (locked)
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY;
 			if (flags & FOLL_NOWAIT)
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY |
@@ -4254,8 +4254,8 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				break;
 			}
 			if (ret & VM_FAULT_RETRY) {
-				if (nonblocking)
-					*nonblocking = 0;
+				if (locked)
+					*locked = 0;
 				*nr_pages = 0;
 				/*
 				 * VM_FAULT_RETRY must not return an