From patchwork Wed May 27 22:32:42 2020
X-Patchwork-Submitter: John Hubbard <jhubbard@nvidia.com>
X-Patchwork-Id: 11574005
From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
CC: LKML, linux-mm@kvack.org, John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 1/2] mm/gup: introduce pin_user_pages_locked()
Date: Wed, 27 May 2020 15:32:42 -0700
Message-ID: <20200527223243.884385-2-jhubbard@nvidia.com>
In-Reply-To: <20200527223243.884385-1-jhubbard@nvidia.com>
References: <20200527223243.884385-1-jhubbard@nvidia.com>

Introduce pin_user_pages_locked(), which is nearly identical to
get_user_pages_locked() except that it sets FOLL_PIN and rejects
FOLL_GET.

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: David Hildenbrand
Acked-by: Pankaj Gupta
---
 include/linux/mm.h |  2 ++
 mm/gup.c           | 30 ++++++++++++++++++++++++++++++
 2 files changed, 32 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 98be7289d7e9..d16951087c93 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1700,6 +1700,8 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		    struct vm_area_struct **vmas);
 long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
 	    unsigned int gup_flags, struct page **pages, int *locked);
+long pin_user_pages_locked(unsigned long start, unsigned long nr_pages,
+	    unsigned int gup_flags, struct page **pages, int *locked);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 		    struct page **pages, unsigned int gup_flags);
 long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
diff --git a/mm/gup.c b/mm/gup.c
index 2f0a0b497c23..17418a949067 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2992,3 +2992,33 @@ long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 	return get_user_pages_unlocked(start, nr_pages, pages, gup_flags);
 }
 EXPORT_SYMBOL(pin_user_pages_unlocked);
+
+/*
+ * pin_user_pages_locked() is the FOLL_PIN variant of get_user_pages_locked().
+ * Behavior is the same, except that this one sets FOLL_PIN and rejects
+ * FOLL_GET.
+ */
+long pin_user_pages_locked(unsigned long start, unsigned long nr_pages,
+			   unsigned int gup_flags, struct page **pages,
+			   int *locked)
+{
+	/*
+	 * FIXME: Current FOLL_LONGTERM behavior is incompatible with
+	 * FAULT_FLAG_ALLOW_RETRY because of the FS DAX check requirement on
+	 * vmas. As there are no users of this flag in this call we simply
+	 * disallow this option for now.
+	 */
+	if (WARN_ON_ONCE(gup_flags & FOLL_LONGTERM))
+		return -EINVAL;
+
+	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
+	if (WARN_ON_ONCE(gup_flags & FOLL_GET))
+		return -EINVAL;
+
+	gup_flags |= FOLL_PIN;
+	return __get_user_pages_locked(current, current->mm, start, nr_pages,
+				       pages, NULL, locked,
+				       gup_flags | FOLL_TOUCH);
+}
+EXPORT_SYMBOL(pin_user_pages_locked);
+
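
For context, the sketch below shows roughly how a caller would use the new
API; it is an illustration, not part of the patch. The function name
example_pin_range(), the FOLL_WRITE flag choice, and the error handling are
assumptions, and the locking uses the pre-5.8 mmap_sem naming that matches
this kernel version:

#include <linux/mm.h>
#include <linux/sched.h>

/* Illustrative caller: pin a user-space range, use it, then unpin it. */
static long example_pin_range(unsigned long start, unsigned long nr_pages,
			      struct page **pages)
{
	int locked = 1;
	long pinned;

	/* The *_locked() variants expect mmap_sem to be held for read. */
	down_read(&current->mm->mmap_sem);
	pinned = pin_user_pages_locked(start, nr_pages, FOLL_WRITE,
				       pages, &locked);
	/* The callee may drop mmap_sem; *locked reports whether it did. */
	if (locked)
		up_read(&current->mm->mmap_sem);

	if (pinned <= 0)
		return pinned ? pinned : -EFAULT;

	/* ... access the pinned pages ... */

	/*
	 * FOLL_PIN references must be released via unpin_user_pages(),
	 * never put_page().
	 */
	unpin_user_pages(pages, pinned);
	return 0;
}

The point of the FOLL_PIN variants is that the pin is accounted separately
from ordinary get_page()/put_page() references, which lets the mm and file
systems distinguish pages pinned for DMA-style access from transiently
referenced ones.
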
From patchwork Wed May 27 22:32:43 2020
X-Patchwork-Submitter: John Hubbard <jhubbard@nvidia.com>
X-Patchwork-Id: 11574009
From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
CC: LKML, linux-mm@kvack.org, John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 2/2] mm/gup: frame_vector: convert get_user_pages() --> pin_user_pages()
Date: Wed, 27 May 2020 15:32:43 -0700
Message-ID: <20200527223243.884385-3-jhubbard@nvidia.com>
In-Reply-To: <20200527223243.884385-1-jhubbard@nvidia.com>
References: <20200527223243.884385-1-jhubbard@nvidia.com>

This code was using get_user_pages*(), and all of the callers so far were
in a "Case 2" scenario (DMA/RDMA), using the categorization from [1]. That
means that it's time to convert the get_user_pages*() + put_page() calls to
pin_user_pages*() + unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small part of
fixing a long-standing disconnect between pinning pages, and file systems'
use of those pages.

[1] Documentation/core-api/pin_user_pages.rst
[2] "Explicit pinning of user-space pages": https://lwn.net/Articles/807108/

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 mm/frame_vector.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/mm/frame_vector.c b/mm/frame_vector.c
index c431ca81dad5..4107dbca0056 100644
--- a/mm/frame_vector.c
+++ b/mm/frame_vector.c
@@ -72,7 +72,7 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames,
 	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP))) {
 		vec->got_ref = true;
 		vec->is_pfns = false;
-		ret = get_user_pages_locked(start, nr_frames,
+		ret = pin_user_pages_locked(start, nr_frames,
 			gup_flags, (struct page **)(vec->ptrs), &locked);
 		goto out;
 	}
@@ -122,7 +122,6 @@ EXPORT_SYMBOL(get_vaddr_frames);
  */
 void put_vaddr_frames(struct frame_vector *vec)
 {
-	int i;
 	struct page **pages;
 
 	if (!vec->got_ref)
@@ -135,8 +134,8 @@ void put_vaddr_frames(struct frame_vector *vec)
 	 */
 	if (WARN_ON(IS_ERR(pages)))
 		goto out;
-	for (i = 0; i < vec->nr_frames; i++)
-		put_page(pages[i]);
+
+	unpin_user_pages(pages, vec->nr_frames);
 	vec->got_ref = false;
 out:
 	vec->nr_frames = 0;
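
For reference, the general shape of this kind of "Case 2" (DMA/RDMA)
conversion, along the lines described in [1], is sketched below. The
function example_dma_to_user_buffer() and its details are illustrative
assumptions, not code from frame_vector.c:

#include <linux/mm.h>
#include <linux/slab.h>

/*
 * Illustrative Case 2 caller: pin user pages, let a device DMA into them,
 * then mark them dirty and unpin them in one call.
 */
static int example_dma_to_user_buffer(unsigned long start,
				      unsigned long nr_pages)
{
	struct page **pages;
	long pinned;

	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/* The _unlocked variant takes and drops mmap_sem internally. */
	pinned = pin_user_pages_unlocked(start, nr_pages, pages, FOLL_WRITE);
	if (pinned != nr_pages) {
		/* Partial pins are still pins: release them before bailing. */
		if (pinned > 0)
			unpin_user_pages(pages, pinned);
		kfree(pages);
		return pinned < 0 ? pinned : -EFAULT;
	}

	/* ... program the device to DMA into these pages and wait ... */

	/*
	 * Release the pins; passing 'true' also marks the pages dirty,
	 * since the device wrote into them.
	 */
	unpin_user_pages_dirty_lock(pages, pinned, true);
	kfree(pages);
	return 0;
}

Using unpin_user_pages_dirty_lock() here follows roughly the pattern that
pin_user_pages.rst recommends in place of the older set_page_dirty_lock()
plus put_page() sequence.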