From patchwork Wed Jan 23 22:23:14 2019
X-Patchwork-Submitter: Jerome Glisse
X-Patchwork-Id: 10778009
From: jglisse@redhat.com
To: linux-mm@kvack.org
Cc: Andrew Morton, linux-kernel@vger.kernel.org, Jérôme Glisse,
    Christian König, Jan Kara, Felix Kuehling, Jason Gunthorpe,
    Matthew Wilcox, Ross Zwisler, Dan Williams, Paolo Bonzini,
    Radim Krčmář, Michal Hocko, Ralph Campbell, John Hubbard,
    kvm@vger.kernel.org, dri-devel@lists.freedesktop.org,
    linux-rdma@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    Arnd Bergmann
Subject: [PATCH v4 8/9] gpu/drm/i915: optimize out the case when a range is updated to read only
Date: Wed, 23 Jan 2019 17:23:14 -0500
Message-Id: <20190123222315.1122-9-jglisse@redhat.com>
In-Reply-To: <20190123222315.1122-1-jglisse@redhat.com>
References: <20190123222315.1122-1-jglisse@redhat.com>

From: Jérôme Glisse

When a range of virtual addresses is updated to read-only and the
corresponding userptr object is already read-only, it is pointless to
do anything. Optimize this case out.
Signed-off-by: Jérôme Glisse
Cc: Christian König
Cc: Jan Kara
Cc: Felix Kuehling
Cc: Jason Gunthorpe
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Ross Zwisler
Cc: Dan Williams
Cc: Paolo Bonzini
Cc: Radim Krčmář
Cc: Michal Hocko
Cc: Ralph Campbell
Cc: John Hubbard
Cc: kvm@vger.kernel.org
Cc: dri-devel@lists.freedesktop.org
Cc: linux-rdma@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org
Cc: Arnd Bergmann
---
 drivers/gpu/drm/i915/i915_gem_userptr.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 9558582c105e..23330ac3d7ea 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -59,6 +59,7 @@ struct i915_mmu_object {
 	struct interval_tree_node it;
 	struct list_head link;
 	struct work_struct work;
+	bool read_only;
 	bool attached;
 };
 
@@ -119,6 +120,7 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
 		container_of(_mn, struct i915_mmu_notifier, mn);
 	struct i915_mmu_object *mo;
 	struct interval_tree_node *it;
+	bool update_to_read_only;
 	LIST_HEAD(cancelled);
 	unsigned long end;
 
@@ -128,6 +130,8 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
 	/* interval ranges are inclusive, but invalidate range is exclusive */
 	end = range->end - 1;
 
+	update_to_read_only = mmu_notifier_range_update_to_read_only(range);
+
 	spin_lock(&mn->lock);
 	it = interval_tree_iter_first(&mn->objects, range->start, end);
 	while (it) {
@@ -145,6 +149,17 @@ static int i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
 		 * object if it is not in the process of being destroyed.
 		 */
 		mo = container_of(it, struct i915_mmu_object, it);
+
+		/*
+		 * If it is already read only and we are updating to
+		 * read only then we do not need to change anything.
+		 * So save time and skip this one.
+		 */
+		if (update_to_read_only && mo->read_only) {
+			it = interval_tree_iter_next(it, range->start, end);
+			continue;
+		}
+
 		if (kref_get_unless_zero(&mo->obj->base.refcount))
 			queue_work(mn->wq, &mo->work);
 
@@ -270,6 +285,7 @@ i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
 	mo->mn = mn;
 	mo->obj = obj;
 	mo->it.start = obj->userptr.ptr;
+	mo->read_only = i915_gem_object_is_readonly(obj);
 	mo->it.last = obj->userptr.ptr + obj->base.size - 1;
 	INIT_WORK(&mo->work, cancel_userptr);
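
For context, here is a condensed, self-contained sketch of the walk
that i915_gem_userptr_mn_invalidate_range_start() performs with this
patch applied. It is illustrative only: the function name
example_invalidate_userptr() is hypothetical, the sketch omits the
non-blockable and unbind handling of the real function, and
mmu_notifier_range_update_to_read_only() is the helper introduced
earlier in this series; the remaining identifiers come from the diff
above.

/*
 * Sketch of the optimized invalidation walk: a userptr object the GPU
 * only reads from does not care if the CPU mapping loses write
 * permission, since the underlying pages are unchanged, so the costly
 * cancel work can be skipped for read-only objects.
 */
static void example_invalidate_userptr(struct i915_mmu_notifier *mn,
				       const struct mmu_notifier_range *range)
{
	/* interval tree is inclusive, invalidate range is exclusive */
	unsigned long end = range->end - 1;
	/* true when the CPU PTEs only lose write permission */
	bool update_to_read_only =
		mmu_notifier_range_update_to_read_only(range);
	struct interval_tree_node *it;

	spin_lock(&mn->lock);
	for (it = interval_tree_iter_first(&mn->objects, range->start, end);
	     it != NULL;
	     it = interval_tree_iter_next(it, range->start, end)) {
		struct i915_mmu_object *mo =
			container_of(it, struct i915_mmu_object, it);

		/* read-only object + read-only update: nothing to do */
		if (update_to_read_only && mo->read_only)
			continue;

		/* otherwise schedule the usual cancel_userptr() work */
		if (kref_get_unless_zero(&mo->obj->base.refcount))
			queue_work(mn->wq, &mo->work);
	}
	spin_unlock(&mn->lock);
}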