From patchwork Sun May 31 23:41:31 2020
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11581027
From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
CC: David Hildenbrand, Pankaj Gupta, Souptick Joarder, LKML,
 linux-mm@kvack.org, John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH v2 2/2] mm/gup: frame_vector: convert get_user_pages() --> pin_user_pages()
Date: Sun, 31 May 2020 16:41:31 -0700
Message-ID: <20200531234131.770697-3-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200531234131.770697-1-jhubbard@nvidia.com>
References: <20200531234131.770697-1-jhubbard@nvidia.com>

This code was using get_user_pages*(), and all of the callers so far were
in a "Case 2" scenario (DMA/RDMA), using the categorization from [1]. That
means that it's time to convert the get_user_pages*() + put_page() calls to
pin_user_pages*() + unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small part of
fixing a long-standing disconnect between pinning pages and file systems'
use of those pages.

[1] Documentation/core-api/pin_user_pages.rst
[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/

Cc: David Hildenbrand
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 mm/frame_vector.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/frame_vector.c b/mm/frame_vector.c
index c431ca81dad5..4107dbca0056 100644
--- a/mm/frame_vector.c
+++ b/mm/frame_vector.c
@@ -72,7 +72,7 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
 	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP))) {
 		vec->got_ref = true;
 		vec->is_pfns = false;
-		ret = get_user_pages_locked(start, nr_frames,
+		ret = pin_user_pages_locked(start, nr_frames,
 			gup_flags, (struct page **)(vec->ptrs), &locked);
 		goto out;
 	}
@@ -122,7 +122,6 @@ EXPORT_SYMBOL(get_vaddr_frames);
  */
 void put_vaddr_frames(struct frame_vector *vec)
 {
-	int i;
 	struct page **pages;
 
 	if (!vec->got_ref)
@@ -135,8 +134,8 @@ void put_vaddr_frames(struct frame_vector *vec)
 	 */
 	if (WARN_ON(IS_ERR(pages)))
 		goto out;
-	for (i = 0; i < vec->nr_frames; i++)
-		put_page(pages[i]);
+
+	unpin_user_pages(pages, vec->nr_frames);
 	vec->got_ref = false;
 out:
 	vec->nr_frames = 0;
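
For reference, here is a minimal sketch (not part of the patch) of the
"Case 2" pattern that pin_user_pages*()/unpin_user_pages() is intended
for: a driver pins a user buffer for DMA and unpins it when the I/O is
done. The function names demo_pin_user_buffer()/demo_unpin_user_buffer()
are hypothetical, and the exact pin_user_pages_fast() signature has
shifted across kernel versions; treat this as an illustration of the
pin/unpin pairing, not as code from this series.

#include <linux/mm.h>
#include <linux/slab.h>

static int demo_pin_user_buffer(unsigned long uaddr, int nr_pages,
				struct page ***pages_out)
{
	struct page **pages;
	int pinned;

	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/*
	 * FOLL_WRITE: the device may write into the buffer.
	 * FOLL_LONGTERM: the pin can be held across long-running I/O.
	 * pin_user_pages_fast() applies FOLL_PIN internally; callers
	 * must not pass FOLL_PIN themselves.
	 */
	pinned = pin_user_pages_fast(uaddr, nr_pages,
				     FOLL_WRITE | FOLL_LONGTERM, pages);
	if (pinned != nr_pages) {
		/* A partial pin still has to be balanced with unpins. */
		if (pinned > 0)
			unpin_user_pages(pages, pinned);
		kfree(pages);
		return pinned < 0 ? pinned : -EFAULT;
	}

	*pages_out = pages;
	return 0;
}

static void demo_unpin_user_buffer(struct page **pages, int nr_pages)
{
	/* Balance the pins; put_page() here would break pin accounting. */
	unpin_user_pages(pages, nr_pages);
	kfree(pages);
}

The key invariant, and the point of this conversion, is that pages
obtained via pin_user_pages*() must be released with unpin_user_pages*(),
never with bare put_page() calls, so that FOLL_PIN references can be
tracked separately from ordinary page references.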