From patchwork Tue May 19 00:21:24 2020
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11556547
From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
CC: Souptick Joarder, Matthew Wilcox, Jani Nikula, Joonas Lahtinen,
    Rodrigo Vivi, David Airlie, Daniel Vetter, Chris Wilson,
    Tvrtko Ursulin, Matthew Auld,
    LKML, John Hubbard
Subject: [PATCH 4/4] drm/i915: convert get_user_pages() --> pin_user_pages()
Date: Mon, 18 May 2020 17:21:24 -0700
Message-ID: <20200519002124.2025955-5-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200519002124.2025955-1-jhubbard@nvidia.com>
References: <20200519002124.2025955-1-jhubbard@nvidia.com>

This code was using get_user_pages*() in a "Case 2" scenario (DMA/RDMA),
using the categorization from [1]. That means it's time to convert the
get_user_pages*() + put_page() calls to pin_user_pages*() +
unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small part
of fixing a long-standing disconnect between pinning pages and file
systems' use of those pages.

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 22 ++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 7ffd7afeb7a5..b55ac7563189 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -471,7 +471,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 					down_read(&mm->mmap_sem);
 					locked = 1;
 				}
-				ret = get_user_pages_remote
+				ret = pin_user_pages_remote
 					(work->task, mm,
 					 obj->userptr.ptr + pinned * PAGE_SIZE,
 					 npages - pinned,
@@ -507,7 +507,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 	}
 	mutex_unlock(&obj->mm.lock);
 
-	release_pages(pvec, pinned);
+	unpin_user_pages(pvec, pinned);
 	kvfree(pvec);
 
 	i915_gem_object_put(obj);
@@ -564,6 +564,7 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 	struct sg_table *pages;
 	bool active;
 	int pinned;
+	unsigned int gup_flags = 0;
 
 	/* If userspace should engineer that these pages are replaced in
 	 * the vma between us binding this page into the GTT and completion
@@ -598,11 +599,14 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 				      GFP_KERNEL |
 				      __GFP_NORETRY |
 				      __GFP_NOWARN);
-		if (pvec) /* defer to worker if malloc fails */
-			pinned = __get_user_pages_fast(obj->userptr.ptr,
-						       num_pages,
-						       !i915_gem_object_is_readonly(obj),
-						       pvec);
+		/* defer to worker if malloc fails */
+		if (pvec) {
+			if (!i915_gem_object_is_readonly(obj))
+				gup_flags |= FOLL_WRITE;
+			pinned = pin_user_pages_fast_only(obj->userptr.ptr,
+							  num_pages, gup_flags,
+							  pvec);
+		}
 	}
 
 	active = false;
@@ -620,7 +624,7 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 		__i915_gem_userptr_set_active(obj, true);
 
 	if (IS_ERR(pages))
-		release_pages(pvec, pinned);
+		unpin_user_pages(pvec, pinned);
 	kvfree(pvec);
 
 	return PTR_ERR_OR_ZERO(pages);
@@ -675,7 +679,7 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
 		}
 
 		mark_page_accessed(page);
-		put_page(page);
+		unpin_user_page(page);
 	}
 	obj->mm.dirty = false;
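
Note for readers new to the FOLL_PIN API: the sketch below is not part
of the patch; it is a minimal illustration of the "Case 2" pin/unpin
pairing that the commit message describes. The helper name
my_dma_pin_pages() and its parameters are hypothetical, while
pin_user_pages_fast(), unpin_user_pages(), and FOLL_WRITE are the real
kernel APIs involved.

#include <linux/errno.h>
#include <linux/mm.h>

/*
 * Hypothetical helper, for illustration only: pin a user-space buffer
 * for DMA ("Case 2"). Pages pinned with pin_user_pages*() take a
 * FOLL_PIN reference and must be released with unpin_user_page*(),
 * never with put_page() or release_pages().
 */
static int my_dma_pin_pages(unsigned long start, int nr_pages,
			    bool writable, struct page **pages)
{
	unsigned int gup_flags = writable ? FOLL_WRITE : 0;
	int pinned;

	pinned = pin_user_pages_fast(start, nr_pages, gup_flags, pages);
	if (pinned < 0)
		return pinned;		/* error: nothing was pinned */

	if (pinned < nr_pages) {
		/* Partial success: undo the pin and report failure. */
		unpin_user_pages(pages, pinned);
		return -EFAULT;
	}

	return 0;	/* all nr_pages pages are pinned for DMA */
}

The patch itself uses the _remote variant in the worker (which can pin
pages on behalf of another task's mm) and pin_user_pages_fast_only()
in the fast path (which, unlike pin_user_pages_fast(), does not fall
back to the slow path); the pin/unpin pairing rule is the same for all
of them.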