From patchwork Mon Oct 12 02:09:28 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 11831367
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, Rob Clark, Rob Clark, Sean Paul, David Airlie,
    linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
    freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 01/22] drm/msm/gem: Add obj->lock wrappers
Date: Sun, 11 Oct 2020 19:09:28 -0700
Message-Id: <20201012020958.229288-2-robdclark@gmail.com>
In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com>
References: <20201012020958.229288-1-robdclark@gmail.com>

From: Rob Clark

This will make it easier to transition over to obj->resv locking for
everything that is per-bo locking.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c | 99 ++++++++++++++++-------------
 drivers/gpu/drm/msm/msm_gem.h | 28 ++++++++++
 2 files changed, 74 insertions(+), 53 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 14e14caf90f9..afef9c6b1a1c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -178,15 +178,15 @@ struct page **msm_gem_get_pages(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct page **p;
 
-	mutex_lock(&msm_obj->lock);
+	msm_gem_lock(obj);
 
 	if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) {
-		mutex_unlock(&msm_obj->lock);
+		msm_gem_unlock(obj);
 		return ERR_PTR(-EBUSY);
 	}
 
 	p = get_pages(obj);
-	mutex_unlock(&msm_obj->lock);
+	msm_gem_unlock(obj);
 
 	return p;
 }
@@ -252,14 +252,14 @@ vm_fault_t msm_gem_fault(struct vm_fault *vmf)
 	 * vm_ops.open/drm_gem_mmap_obj and close get and put
 	 * a reference on obj. So, we dont need to hold one here.
*/ - err = mutex_lock_interruptible(&msm_obj->lock); + err = msm_gem_lock_interruptible(obj); if (err) { ret = VM_FAULT_NOPAGE; goto out; } if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) { - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return VM_FAULT_SIGBUS; } @@ -280,7 +280,7 @@ vm_fault_t msm_gem_fault(struct vm_fault *vmf) ret = vmf_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, PFN_DEV)); out_unlock: - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); out: return ret; } @@ -289,10 +289,9 @@ vm_fault_t msm_gem_fault(struct vm_fault *vmf) static uint64_t mmap_offset(struct drm_gem_object *obj) { struct drm_device *dev = obj->dev; - struct msm_gem_object *msm_obj = to_msm_bo(obj); int ret; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); /* Make it mmapable */ ret = drm_gem_create_mmap_offset(obj); @@ -308,11 +307,10 @@ static uint64_t mmap_offset(struct drm_gem_object *obj) uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) { uint64_t offset; - struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); offset = mmap_offset(obj); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return offset; } @@ -322,7 +320,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); vma = kzalloc(sizeof(*vma), GFP_KERNEL); if (!vma) @@ -341,7 +339,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); list_for_each_entry(vma, &msm_obj->vmas, list) { if (vma->aspace == aspace) @@ -360,14 +358,14 @@ static void del_vma(struct msm_gem_vma *vma) kfree(vma); } -/* Called with msm_obj->lock locked */ +/* Called with msm_obj locked */ static void put_iova(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma, *tmp; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { if (vma->aspace) { @@ -382,11 +380,10 @@ static int msm_gem_get_iova_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; int ret = 0; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); vma = lookup_vma(obj, aspace); @@ -421,7 +418,7 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj, if (msm_obj->flags & MSM_BO_MAP_PRIV) prot |= IOMMU_PRIV; - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) return -EBUSY; @@ -446,11 +443,10 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); u64 local; int ret; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); ret = msm_gem_get_iova_locked(obj, aspace, &local, range_start, range_end); @@ -461,7 +457,7 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, if (!ret) *iova = local; - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ret; } @@ -479,12 +475,11 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, int 
msm_gem_get_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); int ret; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); ret = msm_gem_get_iova_locked(obj, aspace, iova, 0, U64_MAX); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ret; } @@ -495,12 +490,11 @@ int msm_gem_get_iova(struct drm_gem_object *obj, uint64_t msm_gem_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); vma = lookup_vma(obj, aspace); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); WARN_ON(!vma); return vma ? vma->iova : 0; @@ -514,16 +508,15 @@ uint64_t msm_gem_iova(struct drm_gem_object *obj, void msm_gem_unpin_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); vma = lookup_vma(obj, aspace); if (!WARN_ON(!vma)) msm_gem_unmap_vma(aspace, vma); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, @@ -564,20 +557,20 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) if (obj->import_attach) return ERR_PTR(-ENODEV); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); if (WARN_ON(msm_obj->madv > madv)) { DRM_DEV_ERROR(obj->dev->dev, "Invalid madv state: %u vs %u\n", msm_obj->madv, madv); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ERR_PTR(-EBUSY); } /* increment vmap_count *before* vmap() call, so shrinker can - * check vmap_count (is_vunmapable()) outside of msm_obj->lock. + * check vmap_count (is_vunmapable()) outside of msm_obj lock. 
* This guarantees that we won't try to msm_gem_vunmap() this * same object from within the vmap() call (while we already - * hold msm_obj->lock) + * hold msm_obj lock) */ msm_obj->vmap_count++; @@ -595,12 +588,12 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) } } - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return msm_obj->vaddr; fail: msm_obj->vmap_count--; - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return ERR_PTR(ret); } @@ -624,10 +617,10 @@ void msm_gem_put_vaddr(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); WARN_ON(msm_obj->vmap_count < 1); msm_obj->vmap_count--; - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } /* Update madvise status, returns true if not purged, else @@ -637,7 +630,7 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); @@ -646,7 +639,7 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) madv = msm_obj->madv; - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); return (madv != __MSM_MADV_PURGED); } @@ -683,14 +676,14 @@ void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass) invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } static void msm_gem_vunmap_locked(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - WARN_ON(!mutex_is_locked(&msm_obj->lock)); + WARN_ON(!msm_gem_is_locked(obj)); if (!msm_obj->vaddr || WARN_ON(!is_vunmapable(msm_obj))) return; @@ -705,7 +698,7 @@ void msm_gem_vunmap(struct drm_gem_object *obj, enum msm_gem_lock subclass) mutex_lock_nested(&msm_obj->lock, subclass); msm_gem_vunmap_locked(obj); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } /* must be called before _move_to_active().. 
*/ @@ -816,7 +809,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m) uint64_t off = drm_vma_node_start(&obj->vma_node); const char *madv; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); switch (msm_obj->madv) { case __MSM_MADV_PURGED: @@ -884,7 +877,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m) describe_fence(fence, "Exclusive", m); rcu_read_unlock(); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); } void msm_gem_describe_objects(struct list_head *list, struct seq_file *m) @@ -929,7 +922,7 @@ static void free_object(struct msm_gem_object *msm_obj) list_del(&msm_obj->mm_list); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); put_iova(obj); @@ -950,7 +943,7 @@ static void free_object(struct msm_gem_object *msm_obj) drm_gem_object_release(obj); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); kfree(msm_obj); } @@ -1070,10 +1063,10 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev, struct msm_gem_vma *vma; struct page **pages; - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); vma = add_vma(obj, NULL); - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); if (IS_ERR(vma)) { ret = PTR_ERR(vma); goto fail; @@ -1157,22 +1150,22 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, npages = size / PAGE_SIZE; msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + msm_gem_lock(obj); msm_obj->sgt = sgt; msm_obj->pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); if (!msm_obj->pages) { - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); ret = -ENOMEM; goto fail; } ret = drm_prime_sg_to_page_addr_arrays(sgt, msm_obj->pages, NULL, npages); if (ret) { - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); goto fail; } - mutex_unlock(&msm_obj->lock); + msm_gem_unlock(obj); mutex_lock(&dev->struct_mutex); list_add_tail(&msm_obj->mm_list, &priv->inactive_list); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index a1bf741b9b89..f6482154e8bb 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -93,6 +93,34 @@ struct msm_gem_object { }; #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) +static inline void +msm_gem_lock(struct drm_gem_object *obj) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + mutex_lock(&msm_obj->lock); +} + +static inline int +msm_gem_lock_interruptible(struct drm_gem_object *obj) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + return mutex_lock_interruptible(&msm_obj->lock); +} + +static inline void +msm_gem_unlock(struct drm_gem_object *obj) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + mutex_unlock(&msm_obj->lock); +} + +static inline bool +msm_gem_is_locked(struct drm_gem_object *obj) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + return mutex_is_locked(&msm_obj->lock); +} + static inline bool is_active(struct msm_gem_object *msm_obj) { return atomic_read(&msm_obj->active_count); From patchwork Mon Oct 12 02:09:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831379 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 985F414D5 for ; Mon, 12 Oct 2020 02:09:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7296420FC3 for ; Mon, 12 Oct 2020 02:09:03 +0000 (UTC) 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, Rob Clark, Rob Clark, Sean Paul, David Airlie,
    linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
    freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 02/22] drm/msm/gem: Rename internal get_iova_locked helper
Date: Sun, 11 Oct 2020 19:09:29 -0700
Message-Id: <20201012020958.229288-3-robdclark@gmail.com>
In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com>
References: <20201012020958.229288-1-robdclark@gmail.com>

From: Rob Clark

We'll need to introduce a _locked() version of msm_gem_get_iova(), so we
need to make that name available.
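For context, the end state this rename makes room for is the usual locked/unlocked
pairing built on top of the patch 01 wrappers. The sketch below is illustrative
only: msm_gem_get_iova() matches the hunk in the diff that follows, while the
msm_gem_get_iova_locked() shape is an assumption about what the later _locked()
variant might look like, not code from this patch.

/* Illustrative sketch only, not part of this patch. */
static int get_iova_locked(struct drm_gem_object *obj,
		struct msm_gem_address_space *aspace, uint64_t *iova,
		u64 range_start, u64 range_end);

/* Assumed shape of the later _locked() variant: the caller already holds
 * the object lock (e.g. the submit path), so only assert it here.
 */
int msm_gem_get_iova_locked(struct drm_gem_object *obj,
		struct msm_gem_address_space *aspace, uint64_t *iova)
{
	WARN_ON(!msm_gem_is_locked(obj));
	return get_iova_locked(obj, aspace, iova, 0, U64_MAX);
}

/* Unlocked convenience wrapper, as in the hunk below: take the per-bo
 * lock around the renamed internal helper.
 */
int msm_gem_get_iova(struct drm_gem_object *obj,
		struct msm_gem_address_space *aspace, uint64_t *iova)
{
	int ret;

	msm_gem_lock(obj);
	ret = get_iova_locked(obj, aspace, iova, 0, U64_MAX);
	msm_gem_unlock(obj);

	return ret;
}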
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index afef9c6b1a1c..dec89fe79025 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -376,7 +376,7 @@ put_iova(struct drm_gem_object *obj) } } -static int msm_gem_get_iova_locked(struct drm_gem_object *obj, +static int get_iova_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end) { @@ -448,7 +448,7 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, msm_gem_lock(obj); - ret = msm_gem_get_iova_locked(obj, aspace, &local, + ret = get_iova_locked(obj, aspace, &local, range_start, range_end); if (!ret) @@ -478,7 +478,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, int ret; msm_gem_lock(obj); - ret = msm_gem_get_iova_locked(obj, aspace, iova, 0, U64_MAX); + ret = get_iova_locked(obj, aspace, iova, 0, U64_MAX); msm_gem_unlock(obj); return ret; From patchwork Mon Oct 12 02:09:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831389 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 45E2814D5 for ; Mon, 12 Oct 2020 02:09:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1B0D520790 for ; Mon, 12 Oct 2020 02:09:12 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="ImVpx2qz" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727280AbgJLCJL (ORCPT ); Sun, 11 Oct 2020 22:09:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37066 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726959AbgJLCIx (ORCPT ); Sun, 11 Oct 2020 22:08:53 -0400 Received: from mail-pg1-x544.google.com (mail-pg1-x544.google.com [IPv6:2607:f8b0:4864:20::544]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E53A3C0613CE; Sun, 11 Oct 2020 19:08:52 -0700 (PDT) Received: by mail-pg1-x544.google.com with SMTP id j7so2059008pgk.5; Sun, 11 Oct 2020 19:08:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=hhbdajz6aII8kCOttV+DZEeHN2D3wgXXxPE8UzNjZ0o=; b=ImVpx2qzdud9P/qiJMlO95dfOZHex7S0Q9b1NpjJipbeAb7g/gVXNP6tBNd3q/G3EN elD/xhxgNyoUaYXyf+GaMoC/k7QP26FrHnocJQ0QKruYOPTOujQEg8Iut87ZU4badZwa SQYOnqNpmmZr0r/Zy61YSfX12EBNw1Rd0/sN5gUjZkSJD11m6FtJtkRo36UikiogFi5/ K+TcpE+gIGbo+mvYFTDiNbguuSJvXWnNKEmvPi051ypfaE2Wh1ZZEUbV4j1qPopPshxh aloi2kboOuMji6dsJcL7T2PbBGgeMxQUdwZvxutfSsZe3OwOLXgWe4zvkltuMjf2wtCh K7sQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=hhbdajz6aII8kCOttV+DZEeHN2D3wgXXxPE8UzNjZ0o=; b=HpIeLlFP0rLc65tyKQgsStT5hPLFLp7KAG70TlA6WM5jnEN/f/PWgqM5gky97Ztb+j mThd8FTCXL6kGcCPQAYislnOIUaDbPiVgK1xoMi+/Mz+zpsixoriIffO/6+qX4eBX8DD YSk/iequDOadG6sGvkchbYDOlqIry6EOhRZZP3+gJIR/szhAKBUBzLmbk6b+ArzE8eba PRz01nyKN5kpmGcmJqnb17/q7gtMPknV8UKeuCf0RvOS8gIJ2TVbONP9qIZvO9zwvnlz 
fWriAKLC9Ql+nkvlEn/wVeS9SuHD2DmPD+v1zfVkBmXRfgJQ303bBdX2A7zvzGY2pa8S kupQ== X-Gm-Message-State: AOAM533Ty5xnh5A2JEmLPgx88wTvYcEIxOsX0F37nW8xvW/S9auOomo5 VYhvVe8LndKM/wfUVyJJqY4= X-Google-Smtp-Source: ABdhPJy4zR1YrP0tZaFrOu6lZbyt4LyeJn/5LBtWEYyz8l4zcyGFNkOgZ+CNTlRjsAgfCVC35yu3Ag== X-Received: by 2002:a17:90a:bf8c:: with SMTP id d12mr18101532pjs.157.1602468532430; Sun, 11 Oct 2020 19:08:52 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id x5sm18259999pfp.113.2020.10.11.19.08.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:08:51 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , Thomas Zimmermann , Sam Ravnborg , Emil Velikov , Christophe JAILLET , Brian Masney , Harigovindan P , Jeffrey Hugo , Rajendra Nayak , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK) Subject: [PATCH v2 03/22] drm/msm/gem: Move prototypes to msm_gem.h Date: Sun, 11 Oct 2020 19:09:30 -0700 Message-Id: <20201012020958.229288-4-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c | 1 + drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 1 + drivers/gpu/drm/msm/dsi/dsi_host.c | 1 + drivers/gpu/drm/msm/msm_drv.h | 54 ---------------------- drivers/gpu/drm/msm/msm_fbdev.c | 1 + drivers/gpu/drm/msm/msm_gem.h | 56 +++++++++++++++++++++++ 6 files changed, 60 insertions(+), 54 deletions(-) diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c index a0253297bc76..b65b2329cc8d 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c @@ -11,6 +11,7 @@ #include #include "mdp4_kms.h" +#include "msm_gem.h" struct mdp4_crtc { struct drm_crtc base; diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c index c39dad151bb6..81fbd52ad7e7 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c @@ -15,6 +15,7 @@ #include #include "mdp5_kms.h" +#include "msm_gem.h" #define CURSOR_WIDTH 64 #define CURSOR_HEIGHT 64 diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index b17ac6c27554..5e7cdc11c764 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -26,6 +26,7 @@ #include "sfpb.xml.h" #include "dsi_cfg.h" #include "msm_kms.h" +#include "msm_gem.h" #define DSI_RESET_TOGGLE_DELAY_MS 20 diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index b9dd8f8f4887..79ee7d05b363 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -273,28 +273,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, void msm_gem_shrinker_init(struct drm_device *dev); void msm_gem_shrinker_cleanup(struct drm_device *dev); -int msm_gem_mmap_obj(struct 
drm_gem_object *obj, - struct vm_area_struct *vma); -int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma); -vm_fault_t msm_gem_fault(struct vm_fault *vmf); -uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); -int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); -int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, - u64 range_start, u64 range_end); -int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); -uint64_t msm_gem_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); -void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); -struct page **msm_gem_get_pages(struct drm_gem_object *obj); -void msm_gem_put_pages(struct drm_gem_object *obj); -int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, - struct drm_mode_create_dumb *args); -int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, - uint32_t handle, uint64_t *offset); struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj); void *msm_gem_prime_vmap(struct drm_gem_object *obj); void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); @@ -303,38 +281,8 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); -void *msm_gem_get_vaddr(struct drm_gem_object *obj); -void *msm_gem_get_vaddr_active(struct drm_gem_object *obj); -void msm_gem_put_vaddr(struct drm_gem_object *obj); -int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv); -int msm_gem_sync_object(struct drm_gem_object *obj, - struct msm_fence_context *fctx, bool exclusive); -void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu); -void msm_gem_active_put(struct drm_gem_object *obj); -int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout); -int msm_gem_cpu_fini(struct drm_gem_object *obj); -void msm_gem_free_object(struct drm_gem_object *obj); -int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, - uint32_t size, uint32_t flags, uint32_t *handle, char *name); -struct drm_gem_object *msm_gem_new(struct drm_device *dev, - uint32_t size, uint32_t flags); -struct drm_gem_object *msm_gem_new_locked(struct drm_device *dev, - uint32_t size, uint32_t flags); -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, - struct drm_gem_object **bo, uint64_t *iova); -void *msm_gem_kernel_new_locked(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, - struct drm_gem_object **bo, uint64_t *iova); -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_address_space *aspace, bool locked); -struct drm_gem_object *msm_gem_import(struct drm_device *dev, - struct dma_buf *dmabuf, struct sg_table *sgt); void msm_gem_free_work(struct work_struct *work); -__printf(2, 3) -void msm_gem_object_set_name(struct drm_gem_object *bo, const char *fmt, ...); - int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct msm_gem_address_space *aspace); void msm_framebuffer_cleanup(struct drm_framebuffer *fb, @@ -447,8 +395,6 @@ void __init msm_dpu_register(void); void __exit msm_dpu_unregister(void); #ifdef CONFIG_DEBUG_FS -void 
msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m); -void msm_gem_describe_objects(struct list_head *list, struct seq_file *m); void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m); int msm_debugfs_late_init(struct drm_device *dev); int msm_rd_debugfs_init(struct drm_minor *minor); diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c index 47235f8c5922..678dba1725a6 100644 --- a/drivers/gpu/drm/msm/msm_fbdev.c +++ b/drivers/gpu/drm/msm/msm_fbdev.c @@ -9,6 +9,7 @@ #include #include "msm_drv.h" +#include "msm_gem.h" #include "msm_kms.h" extern int msm_gem_mmap_obj(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index f6482154e8bb..fbad08badf43 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -93,6 +93,62 @@ struct msm_gem_object { }; #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) +int msm_gem_mmap_obj(struct drm_gem_object *obj, + struct vm_area_struct *vma); +int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma); +vm_fault_t msm_gem_fault(struct vm_fault *vmf); +uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); +int msm_gem_get_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova); +int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova, + u64 range_start, u64 range_end); +int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova); +uint64_t msm_gem_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace); +void msm_gem_unpin_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace); +struct page **msm_gem_get_pages(struct drm_gem_object *obj); +void msm_gem_put_pages(struct drm_gem_object *obj); +int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, + struct drm_mode_create_dumb *args); +int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, + uint32_t handle, uint64_t *offset); +void *msm_gem_get_vaddr(struct drm_gem_object *obj); +void *msm_gem_get_vaddr_active(struct drm_gem_object *obj); +void msm_gem_put_vaddr(struct drm_gem_object *obj); +int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv); +int msm_gem_sync_object(struct drm_gem_object *obj, + struct msm_fence_context *fctx, bool exclusive); +void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu); +void msm_gem_active_put(struct drm_gem_object *obj); +int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout); +int msm_gem_cpu_fini(struct drm_gem_object *obj); +void msm_gem_free_object(struct drm_gem_object *obj); +int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, + uint32_t size, uint32_t flags, uint32_t *handle, char *name); +struct drm_gem_object *msm_gem_new(struct drm_device *dev, + uint32_t size, uint32_t flags); +struct drm_gem_object *msm_gem_new_locked(struct drm_device *dev, + uint32_t size, uint32_t flags); +void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, + uint32_t flags, struct msm_gem_address_space *aspace, + struct drm_gem_object **bo, uint64_t *iova); +void *msm_gem_kernel_new_locked(struct drm_device *dev, uint32_t size, + uint32_t flags, struct msm_gem_address_space *aspace, + struct drm_gem_object **bo, uint64_t *iova); +void msm_gem_kernel_put(struct drm_gem_object *bo, + struct msm_gem_address_space *aspace, bool locked); 
+struct drm_gem_object *msm_gem_import(struct drm_device *dev, + struct dma_buf *dmabuf, struct sg_table *sgt); +__printf(2, 3) +void msm_gem_object_set_name(struct drm_gem_object *bo, const char *fmt, ...); +#ifdef CONFIG_DEBUG_FS +void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m); +void msm_gem_describe_objects(struct list_head *list, struct seq_file *m); +#endif + static inline void msm_gem_lock(struct drm_gem_object *obj) { From patchwork Mon Oct 12 02:09:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831447 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A02B314D5 for ; Mon, 12 Oct 2020 02:10:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7E8FA22202 for ; Mon, 12 Oct 2020 02:10:16 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Vn1vzJFV" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729751AbgJLCJg (ORCPT ); Sun, 11 Oct 2020 22:09:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37074 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727346AbgJLCI4 (ORCPT ); Sun, 11 Oct 2020 22:08:56 -0400 Received: from mail-pl1-x644.google.com (mail-pl1-x644.google.com [IPv6:2607:f8b0:4864:20::644]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6A791C0613D0; Sun, 11 Oct 2020 19:08:55 -0700 (PDT) Received: by mail-pl1-x644.google.com with SMTP id t18so7761169plo.1; Sun, 11 Oct 2020 19:08:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=kS25d5qKVGDZxWvcUI8CruK0rHlMGZPnc/KSyG6rgIQ=; b=Vn1vzJFVJFmP0O9FSHMj/0BY5nyOCoW//lFof2um+CcmUcqgekVvZaUYEjr/OleuSI wk57WLpqvzk3V315/2nEsIhh2+fD+OD0xpsx759PvRaWk2phS4Hg0xYHrlsZ7RKRWR5b oBtDrE49YGIBCEiuEAiLgmra30QcGqPswn8ODBRhWL3bWc8sxuUjSvIMVFAuLwPljMmR ToByKGFSPcj5clkw8Ow9fHbJ8LGD7uHjCEhGOLl2CWmUQcHU7cSDNZAvqibNynqIe2R9 Mv8BcWEAJiunU4DcWZw767KmlULTzmXSAYi0HUbq5lWadz5RfwkGtu8hmMdsKXDRijey /i8w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=kS25d5qKVGDZxWvcUI8CruK0rHlMGZPnc/KSyG6rgIQ=; b=KHUXbBVuACQWHMjmnJ0Is8PacSw2ihg1/pke+R1B3BT9fd6zSSy7vUQmnkdcT3whaN vYCY8IqHZ2+UVdDUQZOpIpPfdXgAE8zG+CvQA3XuEAPq/ZJ8qKJX9qSE5tFCnY2/947Q CNGNtlVa12OjdEG+0sOPW8qhbhh4iZbDswbtM5Wv1oMv9mkLS8sV9msUJBkD59Di1va5 WmkeB2L+AfFEcHgO55SsB8kYDEWKDreuoPeCt+92dIe6ybCrUFhbQNifrFVPQtYPiFq2 jbzk4iWUqmtCFjWvQTwB16XJW+XJqsNhCekilXf6XYW3mlzvD8Fh8VMCH9m88nk1a3Ky gQkg== X-Gm-Message-State: AOAM531s3/fQXK777PZ13KG6wv7ub4/n03gcJjBUDdx5tGI0JNppRV3W qn6Avz16drgZdCvwKqHkiww= X-Google-Smtp-Source: ABdhPJwxSla3igtA9DqgAD5GKP2q0CYqbs7fJho7xFNHP7UMTXM1LmB7nqEkXjoNuhlMAOui6nh3vA== X-Received: by 2002:a17:902:d68d:b029:d3:dcce:d7f1 with SMTP id v13-20020a170902d68db02900d3dcced7f1mr21330798ply.84.1602468534924; Sun, 11 Oct 2020 19:08:54 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id q5sm17078811pgh.16.2020.10.11.19.08.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:08:53 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 04/22] drm/msm/gem: Add some _locked() helpers Date: Sun, 11 Oct 2020 19:09:31 -0700 Message-Id: <20201012020958.229288-5-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark When we cut-over to using dma_resv_lock/etc instead of msm_obj->lock, we'll need these for the submit path (where resv->lock is already held). Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 50 +++++++++++++++++++++++++++-------- drivers/gpu/drm/msm/msm_gem.h | 4 +++ 2 files changed, 43 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index dec89fe79025..7bca2e815933 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -435,18 +435,14 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj, msm_obj->sgt, obj->size >> PAGE_SHIFT); } -/* - * get iova and pin it. Should have a matching put - * limits iova to specified range (in pages) - */ -int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, +static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end) { u64 local; int ret; - msm_gem_lock(obj); + WARN_ON(!msm_gem_is_locked(obj)); ret = get_iova_locked(obj, aspace, &local, range_start, range_end); @@ -457,10 +453,32 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, if (!ret) *iova = local; + return ret; +} + +/* + * get iova and pin it. Should have a matching put + * limits iova to specified range (in pages) + */ +int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova, + u64 range_start, u64 range_end) +{ + int ret; + + msm_gem_lock(obj); + ret = get_and_pin_iova_range_locked(obj, aspace, iova, range_start, range_end); msm_gem_unlock(obj); + return ret; } +int msm_gem_get_and_pin_iova_locked(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova) +{ + return get_and_pin_iova_range_locked(obj, aspace, iova, 0, U64_MAX); +} + /* get iova and pin it. Should have a matching put */ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova) @@ -501,21 +519,31 @@ uint64_t msm_gem_iova(struct drm_gem_object *obj, } /* - * Unpin a iova by updating the reference counts. 
The memory isn't actually - * purged until something else (shrinker, mm_notifier, destroy, etc) decides - * to get rid of it + * Locked variant of msm_gem_unpin_iova() */ -void msm_gem_unpin_iova(struct drm_gem_object *obj, +void msm_gem_unpin_iova_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace) { struct msm_gem_vma *vma; - msm_gem_lock(obj); + WARN_ON(!msm_gem_is_locked(obj)); + vma = lookup_vma(obj, aspace); if (!WARN_ON(!vma)) msm_gem_unmap_vma(aspace, vma); +} +/* + * Unpin a iova by updating the reference counts. The memory isn't actually + * purged until something else (shrinker, mm_notifier, destroy, etc) decides + * to get rid of it + */ +void msm_gem_unpin_iova(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace) +{ + msm_gem_lock(obj); + msm_gem_unpin_iova_locked(obj, aspace); msm_gem_unlock(obj); } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index fbad08badf43..016f616dd118 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -103,10 +103,14 @@ int msm_gem_get_iova(struct drm_gem_object *obj, int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova, u64 range_start, u64 range_end); +int msm_gem_get_and_pin_iova_locked(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace, uint64_t *iova); int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace, uint64_t *iova); uint64_t msm_gem_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace); +void msm_gem_unpin_iova_locked(struct drm_gem_object *obj, + struct msm_gem_address_space *aspace); void msm_gem_unpin_iova(struct drm_gem_object *obj, struct msm_gem_address_space *aspace); struct page **msm_gem_get_pages(struct drm_gem_object *obj); From patchwork Mon Oct 12 02:09:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831417 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3909A109B for ; Mon, 12 Oct 2020 02:09:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0502C2087D for ; Mon, 12 Oct 2020 02:09:34 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="elG/l8t0" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727452AbgJLCJL (ORCPT ); Sun, 11 Oct 2020 22:09:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37080 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727362AbgJLCI7 (ORCPT ); Sun, 11 Oct 2020 22:08:59 -0400 Received: from mail-pf1-x441.google.com (mail-pf1-x441.google.com [IPv6:2607:f8b0:4864:20::441]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1515EC0613D1; Sun, 11 Oct 2020 19:08:58 -0700 (PDT) Received: by mail-pf1-x441.google.com with SMTP id c20so1527761pfr.8; Sun, 11 Oct 2020 19:08:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=5Oh42LDxapEVsr08xso+zIkteFrHOEURl7Ox6/NemvA=; b=elG/l8t0KoDrN6fA3BMNkVOrL1bj7/pqOd7/fdpGUlQ9ItbHJZuFMP4hhW3LajCayv 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, Rob Clark, Rob Clark, Sean Paul, David Airlie,
    linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
    freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 05/22] drm/msm/gem: Move locking in shrinker path
Date: Sun, 11 Oct 2020 19:09:32 -0700
Message-Id: <20201012020958.229288-6-robdclark@gmail.com>
In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com>
References: <20201012020958.229288-1-robdclark@gmail.com>

From: Rob Clark

Move grabbing the bo lock into the shrinker, with a msm_gem_trylock() to
skip over bo's that are already locked. This gets rid of the nested lock
classes.
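To make the new locking rule concrete, here is a condensed sketch of the purge
half of the shrinker scan after this change, simplified from the
msm_gem_shrinker_scan() hunk in the diff that follows; the helper name
scan_inactive_list() is only for illustration and does not exist in the patch.

/* Condensed, illustrative version of the scan loop below: no lockdep
 * subclasses, just trylock each inactive bo and skip it if some other
 * path already holds its lock.
 */
static unsigned long scan_inactive_list(struct msm_drm_private *priv,
		unsigned long nr_to_scan)
{
	struct msm_gem_object *msm_obj;
	unsigned long freed = 0;

	list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) {
		if (freed >= nr_to_scan)
			break;
		if (!msm_gem_trylock(&msm_obj->base))
			continue;	/* contended, skip instead of nesting */
		if (is_purgeable(msm_obj)) {
			msm_gem_purge(&msm_obj->base);
			freed += msm_obj->base.size >> PAGE_SHIFT;
		}
		msm_gem_unlock(&msm_obj->base);
	}

	return freed;
}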
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 24 +++++---------------- drivers/gpu/drm/msm/msm_gem.h | 29 ++++++++++---------------- drivers/gpu/drm/msm/msm_gem_shrinker.c | 27 +++++++++++++++++------- 3 files changed, 35 insertions(+), 45 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 7bca2e815933..ff8ca257bdc6 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -17,8 +17,6 @@ #include "msm_gpu.h" #include "msm_mmu.h" -static void msm_gem_vunmap_locked(struct drm_gem_object *obj); - static dma_addr_t physaddr(struct drm_gem_object *obj) { @@ -672,20 +670,19 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) return (madv != __MSM_MADV_PURGED); } -void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass) +void msm_gem_purge(struct drm_gem_object *obj) { struct drm_device *dev = obj->dev; struct msm_gem_object *msm_obj = to_msm_bo(obj); WARN_ON(!mutex_is_locked(&dev->struct_mutex)); + WARN_ON(!msm_gem_is_locked(obj)); WARN_ON(!is_purgeable(msm_obj)); WARN_ON(obj->import_attach); - mutex_lock_nested(&msm_obj->lock, subclass); - put_iova(obj); - msm_gem_vunmap_locked(obj); + msm_gem_vunmap(obj); put_pages(obj); @@ -703,11 +700,9 @@ void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass) invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, (loff_t)-1); - - msm_gem_unlock(obj); } -static void msm_gem_vunmap_locked(struct drm_gem_object *obj) +void msm_gem_vunmap(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -720,15 +715,6 @@ static void msm_gem_vunmap_locked(struct drm_gem_object *obj) msm_obj->vaddr = NULL; } -void msm_gem_vunmap(struct drm_gem_object *obj, enum msm_gem_lock subclass) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - - mutex_lock_nested(&msm_obj->lock, subclass); - msm_gem_vunmap_locked(obj); - msm_gem_unlock(obj); -} - /* must be called before _move_to_active().. 
*/ int msm_gem_sync_object(struct drm_gem_object *obj, struct msm_fence_context *fctx, bool exclusive) @@ -965,7 +951,7 @@ static void free_object(struct msm_gem_object *msm_obj) drm_prime_gem_destroy(obj, msm_obj->sgt); } else { - msm_gem_vunmap_locked(obj); + msm_gem_vunmap(obj); put_pages(obj); } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 016f616dd118..947eeaca661d 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -160,6 +160,13 @@ msm_gem_lock(struct drm_gem_object *obj) mutex_lock(&msm_obj->lock); } +static inline bool __must_check +msm_gem_trylock(struct drm_gem_object *obj) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + return mutex_trylock_recursive(&msm_obj->lock) == MUTEX_TRYLOCK_SUCCESS; +} + static inline int msm_gem_lock_interruptible(struct drm_gem_object *obj) { @@ -188,6 +195,7 @@ static inline bool is_active(struct msm_gem_object *msm_obj) static inline bool is_purgeable(struct msm_gem_object *msm_obj) { + WARN_ON(!msm_gem_is_locked(&msm_obj->base)); WARN_ON(!mutex_is_locked(&msm_obj->base.dev->struct_mutex)); return (msm_obj->madv == MSM_MADV_DONTNEED) && msm_obj->sgt && !msm_obj->base.dma_buf && !msm_obj->base.import_attach; @@ -195,27 +203,12 @@ static inline bool is_purgeable(struct msm_gem_object *msm_obj) static inline bool is_vunmapable(struct msm_gem_object *msm_obj) { + WARN_ON(!msm_gem_is_locked(&msm_obj->base)); return (msm_obj->vmap_count == 0) && msm_obj->vaddr; } -/* The shrinker can be triggered while we hold objA->lock, and need - * to grab objB->lock to purge it. Lockdep just sees these as a single - * class of lock, so we use subclasses to teach it the difference. - * - * OBJ_LOCK_NORMAL is implicit (ie. normal mutex_lock() call), and - * OBJ_LOCK_SHRINKER is used by shrinker. - * - * It is *essential* that we never go down paths that could trigger the - * shrinker for a purgable object. This is ensured by checking that - * msm_obj->madv == MSM_MADV_WILLNEED. 
- */ -enum msm_gem_lock { - OBJ_LOCK_NORMAL, - OBJ_LOCK_SHRINKER, -}; - -void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass); -void msm_gem_vunmap(struct drm_gem_object *obj, enum msm_gem_lock subclass); +void msm_gem_purge(struct drm_gem_object *obj); +void msm_gem_vunmap(struct drm_gem_object *obj); void msm_gem_free_work(struct work_struct *work); /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index 482576d7a39a..2dc0ffa925b4 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -52,8 +52,11 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) return 0; list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { + if (!msm_gem_trylock(&msm_obj->base)) + continue; if (is_purgeable(msm_obj)) count += msm_obj->base.size >> PAGE_SHIFT; + msm_gem_unlock(&msm_obj->base); } if (unlock) @@ -78,10 +81,13 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (freed >= sc->nr_to_scan) break; + if (!msm_gem_trylock(&msm_obj->base)) + continue; if (is_purgeable(msm_obj)) { - msm_gem_purge(&msm_obj->base, OBJ_LOCK_SHRINKER); + msm_gem_purge(&msm_obj->base); freed += msm_obj->base.size >> PAGE_SHIFT; } + msm_gem_unlock(&msm_obj->base); } if (unlock) @@ -107,15 +113,20 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) return NOTIFY_DONE; list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { + if (!msm_gem_trylock(&msm_obj->base)) + continue; if (is_vunmapable(msm_obj)) { - msm_gem_vunmap(&msm_obj->base, OBJ_LOCK_SHRINKER); - /* since we don't know any better, lets bail after a few - * and if necessary the shrinker will be invoked again. - * Seems better than unmapping *everything* - */ - if (++unmapped >= 15) - break; + msm_gem_vunmap(&msm_obj->base); + unmapped++; } + msm_gem_unlock(&msm_obj->base); + + /* since we don't know any better, lets bail after a few + * and if necessary the shrinker will be invoked again. 
+ * Seems better than unmapping *everything* + */ + if (++unmapped >= 15) + break; } if (unlock) From patchwork Mon Oct 12 02:09:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831391 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A3AEC109B for ; Mon, 12 Oct 2020 02:09:12 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7BEE82078E for ; Mon, 12 Oct 2020 02:09:12 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="UlsauAHQ" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727482AbgJLCJL (ORCPT ); Sun, 11 Oct 2020 22:09:11 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37088 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727386AbgJLCJA (ORCPT ); Sun, 11 Oct 2020 22:09:00 -0400 Received: from mail-pg1-x541.google.com (mail-pg1-x541.google.com [IPv6:2607:f8b0:4864:20::541]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 65B6CC0613D2; Sun, 11 Oct 2020 19:09:00 -0700 (PDT) Received: by mail-pg1-x541.google.com with SMTP id o3so2814001pgr.11; Sun, 11 Oct 2020 19:09:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+0Usxb0puoG06FcnRH1q78kIYCgqSaNY6zI0DBCZlYA=; b=UlsauAHQIt5BqQ1Lsq8LIVoQ1zxVO6RSFFbZPeS7jiiyB5nBIUzHPNPAjgSQaw5gLQ 6pbiQAgjhKlywEdwq3hJS27LmzZvJGRJE79RIG+0OlymHfepsXgVMX8ywJGROb1H9qtM F/Zs09fMnlS7UJwmrdGBHPU6b2qDt0CHCn4HOhLyTHk8Z/C15xhkFu2dWw3vqGq6Zewl Iw+1IXbdPC5XhX+6lKxY6/a+VmS0D7f2fkJgGbxTVCskMY+exjGR1VIElpYBXmUPQGdw +2bkA5mSbIbZN9Ubha8Bbx6z8TCWDm6w2++0JGYswK7Q9/PPl8cyHWUpf4xf7kJKQfnh i67A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=+0Usxb0puoG06FcnRH1q78kIYCgqSaNY6zI0DBCZlYA=; b=r7BQ4udyHxdHagNdU6irMPkzuTv7yIRHfU4nwHjwMNUefzu6xBN3Wn84KMkvYRY/3A jQwGEmGyS364oSXBWmQH9qTBOrvErnfvhAPzHioc6uP3V6snTZx33w5HavASn33odjzt jkup5CmsQ0lmA3onR3j62de0xMWHp/RKeszMgDjtT+yHQcoJpRfy3njjmK5W54obHUoe DZWnMd8rIEppAp4qzs+uwI9vk5478hfPelv7W/iT4WsBb3oDXg65ZiW4kcDIEqpbRkam cQhYbaakRi5ddXi3qwIKP2etsPNurlnvm2uWjXfgLn9sbmmmhBTeWMH6i2ArrJuquGAq CODA== X-Gm-Message-State: AOAM530A5+EDO5HY6Ta1h5ma5l0KpajqRLjFGLHbBJWn6d77jPVkExxl zuRYgnW4epjjHkKx6JZX+50= X-Google-Smtp-Source: ABdhPJw5vwHQ1vQzPXfCm4wrWjisKLswLu62DqfKRUaxCjJlYoGme5NhhF5MKFR7rzvTIPTKcT70vw== X-Received: by 2002:a17:90b:1114:: with SMTP id gi20mr16851866pjb.12.1602468539940; Sun, 11 Oct 2020 19:08:59 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, Rob Clark, Rob Clark, Sean Paul, David Airlie,
    linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
    freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 06/22] drm/msm/submit: Move copy_from_user ahead of locking bos
Date: Sun, 11 Oct 2020 19:09:33 -0700
Message-Id: <20201012020958.229288-7-robdclark@gmail.com>
In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com>
References: <20201012020958.229288-1-robdclark@gmail.com>

From: Rob Clark

We cannot switch to using obj->resv for locking without first moving all
of the copy_from_user() calls ahead of submit_lock_objects(). Otherwise,
in the mm fault path we acquire mm->mmap_sem before the obj lock, but in
the submit path the order is reversed.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h        |   3 +
 drivers/gpu/drm/msm/msm_gem_submit.c | 121 ++++++++++++++++-----------
 2 files changed, 76 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 947eeaca661d..744889436a98 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -238,7 +238,10 @@ struct msm_gem_submit {
 		uint32_t type;
 		uint32_t size;  /* in dwords */
 		uint64_t iova;
+		uint32_t offset;/* in dwords */
 		uint32_t idx;   /* cmdstream buffer idx in bos[] */
+		uint32_t nr_relocs;
+		struct drm_msm_gem_submit_reloc *relocs;
 	} *cmd;  /* array of size nr_cmds */
 	struct {
 		uint32_t flags;
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index aa5c60a7132d..002130d826aa 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -62,11 +62,16 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 
 void msm_gem_submit_free(struct msm_gem_submit *submit)
 {
+	unsigned i;
+
 	dma_fence_put(submit->fence);
 	list_del(&submit->node);
 	put_pid(submit->pid);
 	msm_submitqueue_put(submit->queue);
 
+	for (i = 0; i < submit->nr_cmds; i++)
+		kfree(submit->cmd[i].relocs);
+
 	kfree(submit);
 }
 
@@ -150,6 +155,60 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
 	return ret;
 }
 
+static int submit_lookup_cmds(struct msm_gem_submit *submit,
+		struct drm_msm_gem_submit *args, struct drm_file *file)
+{
+	unsigned i, sz;
+	int ret = 0;
+
+	for (i = 0; i < args->nr_cmds; i++) {
+		struct drm_msm_gem_submit_cmd submit_cmd;
+		void __user *userptr =
+			u64_to_user_ptr(args->cmds + (i * sizeof(submit_cmd)));
+
+		ret = copy_from_user(&submit_cmd, userptr, sizeof(submit_cmd));
+		if (ret) {
+			ret = -EFAULT;
+			goto out;
+		}
+
+		/* validate input from userspace: */
+		switch (submit_cmd.type) {
+		case MSM_SUBMIT_CMD_BUF:
+		case MSM_SUBMIT_CMD_IB_TARGET_BUF:
+		case MSM_SUBMIT_CMD_CTX_RESTORE_BUF:
+			break;
+		default:
+			DRM_ERROR("invalid type: %08x\n", submit_cmd.type);
+			return -EINVAL;
+		}
+
+		if (submit_cmd.size % 4) {
+			DRM_ERROR("non-aligned cmdstream buffer size: %u\n",
+				submit_cmd.size);
+			ret = -EINVAL;
+			goto out;
+		}
+
+		submit->cmd[i].type = submit_cmd.type;
+		submit->cmd[i].size = submit_cmd.size / 4;
+		submit->cmd[i].offset = submit_cmd.submit_offset / 4;
+
submit->cmd[i].idx = submit_cmd.submit_idx; + submit->cmd[i].nr_relocs = submit_cmd.nr_relocs; + + sz = sizeof(struct drm_msm_gem_submit_reloc) * submit_cmd.nr_relocs; + submit->cmd[i].relocs = kmalloc(sz, GFP_KERNEL); + ret = copy_from_user(submit->cmd[i].relocs, userptr, sz); + if (ret) { + ret = -EFAULT; + goto out; + } + } + +out: + return ret; +} + static void submit_unlock_unpin_bo(struct msm_gem_submit *submit, int i, bool backoff) { @@ -301,7 +360,7 @@ static int submit_bo(struct msm_gem_submit *submit, uint32_t idx, /* process the reloc's and patch up the cmdstream as needed: */ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *obj, - uint32_t offset, uint32_t nr_relocs, uint64_t relocs) + uint32_t offset, uint32_t nr_relocs, struct drm_msm_gem_submit_reloc *relocs) { uint32_t i, last_offset = 0; uint32_t *ptr; @@ -327,18 +386,11 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob } for (i = 0; i < nr_relocs; i++) { - struct drm_msm_gem_submit_reloc submit_reloc; - void __user *userptr = - u64_to_user_ptr(relocs + (i * sizeof(submit_reloc))); + struct drm_msm_gem_submit_reloc submit_reloc = relocs[i]; uint32_t off; uint64_t iova; bool valid; - if (copy_from_user(&submit_reloc, userptr, sizeof(submit_reloc))) { - ret = -EFAULT; - goto out; - } - if (submit_reloc.submit_offset % 4) { DRM_ERROR("non-aligned reloc offset: %u\n", submit_reloc.submit_offset); @@ -694,6 +746,10 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (ret) goto out; + ret = submit_lookup_cmds(submit, args, file); + if (ret) + goto out; + /* copy_*_user while holding a ww ticket upsets lockdep */ ww_acquire_init(&submit->ticket, &reservation_ww_class); has_ww_ticket = true; @@ -710,60 +766,29 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, goto out; for (i = 0; i < args->nr_cmds; i++) { - struct drm_msm_gem_submit_cmd submit_cmd; - void __user *userptr = - u64_to_user_ptr(args->cmds + (i * sizeof(submit_cmd))); struct msm_gem_object *msm_obj; uint64_t iova; - ret = copy_from_user(&submit_cmd, userptr, sizeof(submit_cmd)); - if (ret) { - ret = -EFAULT; - goto out; - } - - /* validate input from userspace: */ - switch (submit_cmd.type) { - case MSM_SUBMIT_CMD_BUF: - case MSM_SUBMIT_CMD_IB_TARGET_BUF: - case MSM_SUBMIT_CMD_CTX_RESTORE_BUF: - break; - default: - DRM_ERROR("invalid type: %08x\n", submit_cmd.type); - ret = -EINVAL; - goto out; - } - - ret = submit_bo(submit, submit_cmd.submit_idx, + ret = submit_bo(submit, submit->cmd[i].idx, &msm_obj, &iova, NULL); if (ret) goto out; - if (submit_cmd.size % 4) { - DRM_ERROR("non-aligned cmdstream buffer size: %u\n", - submit_cmd.size); + if (!submit->cmd[i].size || + ((submit->cmd[i].size + submit->cmd[i].offset) > + msm_obj->base.size / 4)) { + DRM_ERROR("invalid cmdstream size: %u\n", submit->cmd[i].size * 4); ret = -EINVAL; goto out; } - if (!submit_cmd.size || - ((submit_cmd.size + submit_cmd.submit_offset) > - msm_obj->base.size)) { - DRM_ERROR("invalid cmdstream size: %u\n", submit_cmd.size); - ret = -EINVAL; - goto out; - } - - submit->cmd[i].type = submit_cmd.type; - submit->cmd[i].size = submit_cmd.size / 4; - submit->cmd[i].iova = iova + submit_cmd.submit_offset; - submit->cmd[i].idx = submit_cmd.submit_idx; + submit->cmd[i].iova = iova + (submit->cmd[i].offset * 4); if (submit->valid) continue; - ret = submit_reloc(submit, msm_obj, submit_cmd.submit_offset, - submit_cmd.nr_relocs, submit_cmd.relocs); + ret = submit_reloc(submit, msm_obj, submit->cmd[i].offset * 4, 
+ submit->cmd[i].nr_relocs, submit->cmd[i].relocs); if (ret) goto out; } From patchwork Mon Oct 12 02:09:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831397 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5446E17E6 for ; Mon, 12 Oct 2020 02:09:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 38B702078E for ; Mon, 12 Oct 2020 02:09:13 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="cYOdqk6J" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726959AbgJLCJM (ORCPT ); Sun, 11 Oct 2020 22:09:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37094 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727389AbgJLCJC (ORCPT ); Sun, 11 Oct 2020 22:09:02 -0400 Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BCF47C0613D5; Sun, 11 Oct 2020 19:09:02 -0700 (PDT) Received: by mail-pg1-x543.google.com with SMTP id r10so12804928pgb.10; Sun, 11 Oct 2020 19:09:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=t6k0W0LUpGL7/GToLy9vCWhiEXYilH3isH63u8Yx9dc=; b=cYOdqk6JAEyJPNarImCdXuJLSaTP5hF3m1LbjGxMAKM4MFzOh/QWL4z8r0onTZixmv yYkmmpQDdFX0OwxHTLU8XHVitUgxmHbY+VgNmfTHJh0WSrMa/xANqnLQuTOcWIInbzDw UfcxFAepPMoCFtwEzdSyf2Ye3iKQUTWpwtNijBFPh0bmxE62Swxqg7NXW9H1c2cIZ+gs 5q7uX9NGAwR5FW5q/lEs1s+TzByAGZcWfIF+LWklwrapace46kSUKaz+J1Ups9zY5HXt mcdofmZpHFDCXRkEXSYV8gza5oxh222MiaZcqqSrb1mB18AGlDIOYvKvOJ8DhpnrTJbT KycQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=t6k0W0LUpGL7/GToLy9vCWhiEXYilH3isH63u8Yx9dc=; b=ichaJuk0Krvr60T+0jhTzclCmzpb3W4H/uCEieL7jYBp/Eh2OJggi0WSNqGO+YOCfG EmYHg1aftuIiOTK0I9iHIBEM3t3scAQFx3NWdkyf4cSGQUvPcDgCpgBHqRSK3yoprQHU 5ppdBoDjFNg84aivk5aI1CpPXWQoL2vGilYZ/pZVla4sb7cYd46rL7Z+5CvaLyTIe/jl ahc10Gqhpvq4MOCE6JxY7jvcCuBfmQwyMTgoHsvbt5bc+zfrQyP2i2dQLmHxQouBGoD4 XPbzr9Y2K8cv1Q17CK5c1Lh0sqVKZJh/QY9j6uhvvt1M7UsS3VQnJoeaSKvfjE3cStcr cd8g== X-Gm-Message-State: AOAM533usB2knY6ZtpQt5FzfpLCf8iUqvuUTnI9gN+dXOByjUl5gn3JY Gyt1aO4pA34pW/IbXp/VWLE= X-Google-Smtp-Source: ABdhPJxO6HzuIjY5oyjtW2Q9UuwL21khrzejoxEMJIqlzoRA6aoWAf2AHjFZUW1waNpmmy9hyrFHAA== X-Received: by 2002:a17:90a:17ad:: with SMTP id q42mr17790795pja.36.1602468542200; Sun, 11 Oct 2020 19:09:02 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id g4sm17835780pgg.75.2020.10.11.19.09.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:01 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 07/22] drm/msm: Do rpm get sooner in the submit path Date: Sun, 11 Oct 2020 19:09:34 -0700 Message-Id: <20201012020958.229288-8-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Unfortunately, due to a dev_pm_opp locking interaction with mm->mmap_sem, we need to do pm get before acquiring obj locks, otherwise we can anger lockdep with the chain: opp_table_lock --> &mm->mmap_sem --> reservation_ww_class_mutex For an explicit fencing userspace, the impact should be minimal as we do all the fence waits before this point. It could result in some needless resumes in error cases, etc. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem_submit.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 002130d826aa..a9422d043bfe 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -744,11 +744,20 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, ret = submit_lookup_objects(submit, args, file); if (ret) - goto out; + goto out_pre_pm; ret = submit_lookup_cmds(submit, args, file); if (ret) - goto out; + goto out_pre_pm; + + /* + * Thanks to dev_pm_opp opp_table_lock interactions with mm->mmap_sem + * in the resume path, we need to do the rpm get before we lock objs. + * Which unfortunately might involve powering up the GPU sooner than + * is necessary. But at least in the explicit fencing case, we will + * have already done all the fence waiting. 
+ */ + pm_runtime_get_sync(&gpu->pdev->dev); /* copy_*_user while holding a ww ticket upsets lockdep */ ww_acquire_init(&submit->ticket, &reservation_ww_class); @@ -825,6 +834,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, out: + pm_runtime_put(&gpu->pdev->dev); +out_pre_pm: submit_cleanup(submit); if (has_ww_ticket) ww_acquire_fini(&submit->ticket); From patchwork Mon Oct 12 02:09:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831395 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0430514D5 for ; Mon, 12 Oct 2020 02:09:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D81282078A for ; Mon, 12 Oct 2020 02:09:12 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Wv9cKwpC" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727626AbgJLCJM (ORCPT ); Sun, 11 Oct 2020 22:09:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37122 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727050AbgJLCJL (ORCPT ); Sun, 11 Oct 2020 22:09:11 -0400 Received: from mail-pg1-x542.google.com (mail-pg1-x542.google.com [IPv6:2607:f8b0:4864:20::542]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 35F07C0613D6; Sun, 11 Oct 2020 19:09:05 -0700 (PDT) Received: by mail-pg1-x542.google.com with SMTP id h6so12822066pgk.4; Sun, 11 Oct 2020 19:09:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=eTjQ89IuXOz109TMRi5cWNXEovWk2LGV0dqyy4UoVe4=; b=Wv9cKwpCli62Q4QI5TYRHb6fmfORY/h7oOs2N8NEq5yfdwT2V2AI1gzS87gzk5D2Ln c3UW9P4LiW0C/L3Qoakm/9gz29zl0BLJfdiSyeOMUEQrjSwqtOo3MV+vlgvw9M/C6rds EsbSVhbkx9NoaVurJmovYGejeOqOZm4Uj2eFrGJNV50E2QdOoSesBHNXE1s1bGml+8wc Xv6+Zq1QI5aAxod5p7CISnG/PxCZaD6telUrYsnITfA1l4EFNlGgdMLMqx7ofySVmChF pMBgXh8MLLU61ovVLU91PL+rQrWB5jaWdwxh9Kt3HILnNniPZ1eCXdfBIPaAwavAsQaT jTCA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=eTjQ89IuXOz109TMRi5cWNXEovWk2LGV0dqyy4UoVe4=; b=RjKJu0oqlSCLipieROj4aaVCSQ3FNwkYDxKdNdZuHG/mCKM5oyqS1Hui/zzgc+IyT6 CD2+WCh06Sb7X25tNvaE4JzESCtEYY4czxBfEoUZaxd21aoVRHUEjVWvjxuGwv+CRfe0 mBGAHQWZm0KjFw0t5XfbaN98B86e+KRxZJoMgnG0mpMIB+M6MwyeayY3lzLJiWtm9XDs 9Gmah/+RCd7uo25bG4Kya8/pMqMJ3KUWJ22V2fxLWb8+6K8pVaiaPeNwy+8Opp6qh0Qt qcQSkMC6iq05LkZhpu/T5hxLaIUIu1tlcSOIOLzGqGtcaysRkrz09QpJGvz68x2VMs/D YRVA== X-Gm-Message-State: AOAM531q3NeE32pcUN1B6jWobt4ZY+oBBqxIm2J9lMVos8R3/f885hVy 1+z30mKzW+5LAutD5evvFFw= X-Google-Smtp-Source: ABdhPJw507oY7r38qpLpkm1my+hNtuyXGzBuHetbSpVZufP0ikC6GPn4Asn7U13jpu02xkFihu9c+w== X-Received: by 2002:a63:c20f:: with SMTP id b15mr11512847pgd.8.1602468544681; Sun, 11 Oct 2020 19:09:04 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id q13sm6047990pfg.3.2020.10.11.19.09.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:03 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 08/22] drm/msm/gem: Switch over to obj->resv for locking Date: Sun, 11 Oct 2020 19:09:35 -0700 Message-Id: <20201012020958.229288-9-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 4 +--- drivers/gpu/drm/msm/msm_gem.h | 16 +++++----------- drivers/gpu/drm/msm/msm_gem_submit.c | 4 ++-- drivers/gpu/drm/msm/msm_gpu.c | 2 +- 4 files changed, 9 insertions(+), 17 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index ff8ca257bdc6..210bf5c9c2dd 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -955,9 +955,9 @@ static void free_object(struct msm_gem_object *msm_obj) put_pages(obj); } + msm_gem_unlock(obj); drm_gem_object_release(obj); - msm_gem_unlock(obj); kfree(msm_obj); } @@ -1029,8 +1029,6 @@ static int msm_gem_new_impl(struct drm_device *dev, if (!msm_obj) return -ENOMEM; - mutex_init(&msm_obj->lock); - msm_obj->flags = flags; msm_obj->madv = MSM_MADV_WILLNEED; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 744889436a98..ec01f35ce57b 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -85,7 +85,6 @@ struct msm_gem_object { * an IOMMU. Also used for stolen/splashscreen buffer. 
*/ struct drm_mm_node *vram_node; - struct mutex lock; /* Protects resources associated with bo */ char name[32]; /* Identifier to print for the debugfs files */ @@ -156,36 +155,31 @@ void msm_gem_describe_objects(struct list_head *list, struct seq_file *m); static inline void msm_gem_lock(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_lock(&msm_obj->lock); + dma_resv_lock(obj->resv, NULL); } static inline bool __must_check msm_gem_trylock(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return mutex_trylock_recursive(&msm_obj->lock) == MUTEX_TRYLOCK_SUCCESS; + return dma_resv_trylock(obj->resv); } static inline int msm_gem_lock_interruptible(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return mutex_lock_interruptible(&msm_obj->lock); + return dma_resv_lock_interruptible(obj->resv, NULL); } static inline void msm_gem_unlock(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - mutex_unlock(&msm_obj->lock); + dma_resv_unlock(obj->resv); } static inline bool msm_gem_is_locked(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return mutex_is_locked(&msm_obj->lock); + return dma_resv_is_locked(obj->resv); } static inline bool is_active(struct msm_gem_object *msm_obj) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index a9422d043bfe..35b7d9d06850 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -215,7 +215,7 @@ static void submit_unlock_unpin_bo(struct msm_gem_submit *submit, struct msm_gem_object *msm_obj = submit->bos[i].obj; if (submit->bos[i].flags & BO_PINNED) - msm_gem_unpin_iova(&msm_obj->base, submit->aspace); + msm_gem_unpin_iova_locked(&msm_obj->base, submit->aspace); if (submit->bos[i].flags & BO_LOCKED) dma_resv_unlock(msm_obj->base.resv); @@ -318,7 +318,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) uint64_t iova; /* if locking succeeded, pin bo: */ - ret = msm_gem_get_and_pin_iova(&msm_obj->base, + ret = msm_gem_get_and_pin_iova_locked(&msm_obj->base, submit->aspace, &iova); if (ret) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 55d16489d0f3..dbd9020713e5 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -784,7 +784,7 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) /* submit takes a reference to the bo and iova until retired: */ drm_gem_object_get(&msm_obj->base); - msm_gem_get_and_pin_iova(&msm_obj->base, submit->aspace, &iova); + msm_gem_get_and_pin_iova_locked(&msm_obj->base, submit->aspace, &iova); if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE) dma_resv_add_excl_fence(drm_obj->resv, submit->fence); From patchwork Mon Oct 12 02:09:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831451 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9DB81109B for ; Mon, 12 Oct 2020 02:10:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 7E5FF2087D for ; Mon, 12 Oct 2020 02:10:17 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="JUgmYERx" Received: 
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729736AbgJLCJf (ORCPT ); Sun, 11 Oct 2020 22:09:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37124 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727218AbgJLCJL (ORCPT ); Sun, 11 Oct 2020 22:09:11 -0400 Received: from mail-pf1-x443.google.com (mail-pf1-x443.google.com [IPv6:2607:f8b0:4864:20::443]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 77CD6C0613D7; Sun, 11 Oct 2020 19:09:07 -0700 (PDT) Received: by mail-pf1-x443.google.com with SMTP id y14so12115853pfp.13; Sun, 11 Oct 2020 19:09:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=lsQ9N+apQGOcHYXFGYjFIayjQSXunpwI7ZmJDjhbJNE=; b=JUgmYERxIKERmHzyVxJYNyCIKZCW4l6H9/iNSMbxyEj6wSVpDicXKDpGflFnL2TBaf cqLULbhM01JjAPrqcvuuxRpCs1qwPO/0FL+1lgWwcUAn39pXTOolxgBnccrv7mB+ttaq 84tG+Hkx+WnjvDC+UHIvltYeG2zWffdK7lJ3CvDMGyFY9SKFXkcJ7vCoKxazGWoR5D8L KnzM7uSX8nnH22mrySu0Zf2aKaujqsnmCw9yimWnGESy3wwvQuYi8xJA40PqI1G4SNoB gVPbpHg+LyGsPA+rIQ5IrXrkvYcq1elmkbRst1dV6bWWeaiACWBfes2YF3AJp1daR8Fr 3CiA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=lsQ9N+apQGOcHYXFGYjFIayjQSXunpwI7ZmJDjhbJNE=; b=CaKUxuI7jHyGcGBR0xruyPjaR82jSo1nN/r4VnhrFqwoSs1X3hnRM/oygsrKw1FMIS XWINgxZqrVUQJQOzhEzmBRxnzhDskzJOmFNVF8zeMSr4WeU0GldX1LThX6ku4wM/zNBD 4CGoLPKO6G4rsLa+NLzj6Tj8P+Q5JfOmJMguH6SwIpIkU2ueumHHskR/MSQZpDIUuXL4 HC7E/7EG96r8/UAz87QASPYXxmg0rRWBeS2E6hDzJvki6lw9Q8XnZKTuOygkOa3zfip9 KVQoNvNKvwt2FOLoMDz1dDUsuH2ghUfcU6lWk2HwPnhwobYIdT909cgH9wJuOwGIyM8k mhHg== X-Gm-Message-State: AOAM533VgtZ3qojJxdAkEUYl0zTeN5OJgZL3ymHzYSBfuzyTo3hJKd0i 6g8kv1u/u96+T6M7WTek8sc= X-Google-Smtp-Source: ABdhPJzWL0w2g4Ew/5WKB5u0zWmvJb22AOeLQlFsz2hwOWA3PxPUbRr1+f7sEMnI2Tb4VxTDFsJJkA== X-Received: by 2002:a17:90a:46c2:: with SMTP id x2mr17574218pjg.60.1602468546995; Sun, 11 Oct 2020 19:09:06 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id j25sm17648278pfn.212.2020.10.11.19.09.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:06 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 09/22] drm/msm: Use correct drm_gem_object_put() in fail case Date: Sun, 11 Oct 2020 19:09:36 -0700 Message-Id: <20201012020958.229288-10-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark We only want to use the _unlocked() variant in the unlocked case. 
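For illustration, the intended pattern is roughly the hypothetical helper below (not part of the patch; the actual change open-codes this in the fail: path of _msm_gem_new(), shown in the diff that follows, using the existing struct_mutex_locked flag passed in by the caller):

    /* Hypothetical sketch only: pick the put() variant matching the caller's locking */
    static void put_new_obj(struct drm_gem_object *obj, bool struct_mutex_locked)
    {
            if (struct_mutex_locked)
                    drm_gem_object_put_locked(obj); /* caller already holds dev->struct_mutex */
            else
                    drm_gem_object_put(obj);        /* caller does not hold dev->struct_mutex */
    }
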
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 210bf5c9c2dd..833e3d3c6e8c 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -1120,7 +1120,11 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev, return obj; fail: - drm_gem_object_put(obj); + if (struct_mutex_locked) { + drm_gem_object_put_locked(obj); + } else { + drm_gem_object_put(obj); + } return ERR_PTR(ret); } From patchwork Mon Oct 12 02:09:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831449 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EF9EB1592 for ; Mon, 12 Oct 2020 02:10:16 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C9ECA2222C for ; Mon, 12 Oct 2020 02:10:16 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="Wr8RnpP+" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729746AbgJLCJg (ORCPT ); Sun, 11 Oct 2020 22:09:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37126 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726963AbgJLCJL (ORCPT ); Sun, 11 Oct 2020 22:09:11 -0400 Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B2369C0613D8; Sun, 11 Oct 2020 19:09:09 -0700 (PDT) Received: by mail-pg1-x543.google.com with SMTP id l18so3729220pgg.0; Sun, 11 Oct 2020 19:09:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=kMl9yrbj49Q3BQxacHHpJSzD9mDjKljs1vKng3vrmmU=; b=Wr8RnpP+DKiQ8rLbmnjHRsXOyQyFSWALLo/UAqujFApoQq4ZZVPQgRcBkSxxFcKhNm vwxMociFcdb2pNKFj8flGOIyQko9+2Em6A9t6d6w08OT2qVfMah9wsHM/qijOW4FvcUh MUq5QVi/P7WSXilooCfbrhBLynMHMt46noLDfWtoPAWIXnKv92yLEuYtETwvaLIDBJJr 1KZdoLNT6YtXGEOhAZmjn82+CSq0pQv701YqhdxqVHJ2mN0YBvlpt94F9LC/KUdbNKDf 1UaRBV8w0tXDp+gFmsSCOCidbsmxG7SBPzhA50c3d1PWzsQ2Yteif+h1pXVBbvkVdNN7 2s3g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=kMl9yrbj49Q3BQxacHHpJSzD9mDjKljs1vKng3vrmmU=; b=kmCVE1BVuU+EU2yr1OoLAH6tVXIVPrnAatx2lKl7wcROW3G1PXKhiyYHT8uHbgdR1y 9MpDi5o2oj/BbOQ4PnzVi2Qp8X+INUU2ReBRtHe/d1WQO4XTchm1b4pnSFZhF2t1eByv MYorwjL40XxtIwhPViEUTxNFl++P9sGY46aNJnRJH4tbtJTefc9luQKSJQqMrzvQFBm4 aEL7JX/MDizprVg5pODQxDJq6/t50PBD3gQsyMBgIpYcpx0fWpY3gb/j6wSAIPqyLs9G 35EAITusy5rVF4uznwmE8ps4xIDu8+5xvJRuWGDKD6dLxCLsg62WrpVAzLTvufGXb0ig 7UaQ== X-Gm-Message-State: AOAM532zkiAqA9NCyHriYkGkfEvnALAJZeRPyFRAvnlwDdCpxF0Mlgz3 Z2QUZ3Hu0tM6cDgQsXs1FTA= X-Google-Smtp-Source: ABdhPJxyxbNBiotDIT/cDYvF6QqCUgr2LglsXzEClcUjc7Oi/NU+STJLnUbvz8pmg0pP8j9pAoijLw== X-Received: by 2002:aa7:8588:0:b029:152:a38c:fbba with SMTP id w8-20020aa785880000b0290152a38cfbbamr22020945pfn.0.1602468549247; Sun, 11 Oct 2020 19:09:09 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id kv19sm21560346pjb.22.2020.10.11.19.09.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:08 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Jordan Crouse , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 10/22] drm/msm: Drop chatty trace Date: Sun, 11 Oct 2020 19:09:37 -0700 Message-Id: <20201012020958.229288-11-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark It is somewhat redundant with the gpu tracepoints, and anyways not too useful to justify spamming the log when debug traces are enabled. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_gpu.c | 1 - 1 file changed, 1 deletion(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index dbd9020713e5..677b11c5a151 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -535,7 +535,6 @@ static void recover_worker(struct work_struct *work) static void hangcheck_timer_reset(struct msm_gpu *gpu) { - DBG("%s", gpu->name); mod_timer(&gpu->hangcheck_timer, round_jiffies_up(jiffies + DRM_MSM_HANGCHECK_JIFFIES)); } From patchwork Mon Oct 12 02:09:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831411 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D440A14D5 for ; Mon, 12 Oct 2020 02:09:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id B80C720790 for ; Mon, 12 Oct 2020 02:09:30 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="MYI0Ly5E" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729707AbgJLCJ3 (ORCPT ); Sun, 11 Oct 2020 22:09:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37128 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727582AbgJLCJM (ORCPT ); Sun, 11 Oct 2020 22:09:12 -0400 Received: from mail-pl1-x642.google.com (mail-pl1-x642.google.com [IPv6:2607:f8b0:4864:20::642]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 27D12C0613CE; Sun, 11 Oct 2020 19:09:12 -0700 (PDT) Received: by mail-pl1-x642.google.com with SMTP id v12so1102370ply.12; Sun, 11 Oct 2020 19:09:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=wkKO6gE/6KJG3tJnrM8NMN9Fo3JxBHm6Yh5bWEF2DRw=; b=MYI0Ly5EzyTuj4BMLUbYhrlVJJqPXLF/i9xH5If4hAi0yXUSNMD+OTvpRjNCoRQHii VBg4F9mxPyqkWf6/oDvdA473nQFhgd8nnNoMjs2roHv0tfkEATgMrAMqQ2vDEabMbWrd h5bK15LAp8a/HKaWMHt+KNyXFofASY5i+BqSl6SiSz9xY4KFSA8UW1pKSx2d/i5w3UMk TvWEnW3YFzGpDbaIXJT2yy9/UchcGdMIIkWSOGFQj/VrUfg5SnWuWXgIFBCD3iU21egT x7jb8ZSiyhlBG6JGnEsMVzKITzKWHLgUGATjwFs+wbzNPlDVD15gmpF+wDC194TK3XiF IwtA== 
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=wkKO6gE/6KJG3tJnrM8NMN9Fo3JxBHm6Yh5bWEF2DRw=; b=hfMkCR+t4V2ZYOHJoZ7JjYeylppE0nkzqz0smodMju6xwGhgA0pDqToIxQK4b/YrKB soEgfL8w55B+rkutraA6hq7VkZLmIajN3QWKsGRjm7dTa/zV5dX+ACR9c0cs63xE88PL mB6kC5TYapRxZU2omkVhM60FnEvYlnKrC9G4Iu2Ux35WK3bAhTfFH0uZR/ImLYWFDG3o f7thikdrddIB3T86MKAn82XO8g4c7WUFfIcrxGdkwj6hopUEr+tmRh41u3JL5/8nMtWg UYcsszDSZqGzrBEhLYPNBnB8ciD4wsDYdwgwL1dfOkXvSVRdTLmVGMkSy0j7FdS3d3qK lxxw== X-Gm-Message-State: AOAM533+MM0BDE47gZg9edc14DGyyYmY+CswhbQiC/YNbbQz4H1Fr/T0 IlirKkb3VQs1RSbzWWp1sSE= X-Google-Smtp-Source: ABdhPJz8PFPRUJXoh1vK+0F0u53LCMifZs8ilkAAIk7Rc54/1QJo+KiCddguw4OAOhsB7F9ZxB9ypg== X-Received: by 2002:a17:902:9349:b029:d4:df10:353c with SMTP id g9-20020a1709029349b02900d4df10353cmr2467354plp.20.1602468551695; Sun, 11 Oct 2020 19:09:11 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id na9sm12662085pjb.45.2020.10.11.19.09.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:10 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Jordan Crouse , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 11/22] drm/msm: Move update_fences() Date: Sun, 11 Oct 2020 19:09:38 -0700 Message-Id: <20201012020958.229288-12-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Small cleanup, update_fences() is used in the hangcheck path, but also in the normal retire path. 
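For context, the two users look roughly like this (paraphrased sketch for illustration only; the exact call sites live in msm_gpu.c and may differ slightly):

    /* retire path: the ring's fence write-back has advanced, retire completed submits */
    update_fences(gpu, ring, ring->memptrs->fence);

    /* hangcheck path: retire whatever did complete before handling the hang */
    update_fences(gpu, ring, fence);
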
Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_gpu.c | 28 ++++++++++++++-------------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 677b11c5a151..e5b7c8a77c99 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -265,6 +265,20 @@ int msm_gpu_hw_init(struct msm_gpu *gpu) return ret; } +static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, + uint32_t fence) +{ + struct msm_gem_submit *submit; + + list_for_each_entry(submit, &ring->submits, node) { + if (submit->seqno > fence) + break; + + msm_update_fence(submit->ring->fctx, + submit->fence->seqno); + } +} + #ifdef CONFIG_DEV_COREDUMP static ssize_t msm_gpu_devcoredump_read(char *buffer, loff_t offset, size_t count, void *data, size_t datalen) @@ -411,20 +425,6 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, * Hangcheck detection for locked gpu: */ -static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, - uint32_t fence) -{ - struct msm_gem_submit *submit; - - list_for_each_entry(submit, &ring->submits, node) { - if (submit->seqno > fence) - break; - - msm_update_fence(submit->ring->fctx, - submit->fence->seqno); - } -} - static struct msm_gem_submit * find_submit(struct msm_ringbuffer *ring, uint32_t fence) { From patchwork Mon Oct 12 02:09:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831407 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C11901592 for ; Mon, 12 Oct 2020 02:09:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 966982078E for ; Mon, 12 Oct 2020 02:09:25 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="mX5yz3gG" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729666AbgJLCJV (ORCPT ); Sun, 11 Oct 2020 22:09:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37140 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727386AbgJLCJP (ORCPT ); Sun, 11 Oct 2020 22:09:15 -0400 Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 19A13C0613CE; Sun, 11 Oct 2020 19:09:15 -0700 (PDT) Received: by mail-pg1-x543.google.com with SMTP id i2so12814329pgh.7; Sun, 11 Oct 2020 19:09:15 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=J1iwndiMUr9VfCtVQZjzhg8oZ7yMlIzr50XaZ5kg0U4=; b=mX5yz3gG79LxNkNQdG27Oo+um/zCWpthkww71YWcHTskj7Gl0aNDVMuDsOVKKnuq+3 1GcijIoNJZsJl4cZplV44WDC13wBLtfJ0vQiboFmGmvu+3Ek7TrKN6pKURye3wVO3Rwo 48M7tGFPkrvPl+6Zy9e0zIMD3rufn3Ry2Qv2VCNLwdShmc2kVAoGkTI2gJq9/4tcOnku SQTTFhKGmEW0ANA3TfmWRVGmSM28p/7i9PWZ+YERywVUSerB5/linKl00QbPU7QevRIf Ge2Bx89ZmZcId+PPmS/sbmMb8QojlQ1ZU91Fs9n3Blb6Av+Dees0dW25DP2n7PcRUmPL r/DQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; 
bh=J1iwndiMUr9VfCtVQZjzhg8oZ7yMlIzr50XaZ5kg0U4=; b=WVf+ej7jJNHZZSOmT69iltrVydx8KfTfJs5R0xdJbJHSzgalzVnGw4GpxJ8x/j+BAH D4C+rQCJ3gJ7lr7fmsY49W8K1LQaMkusTUClCKufF8l9HVYtLIQyOcZIOinn9uIHo58a ag332CJRLxZRJ+Vydo0oVKqI22uxH05r49VlsOEbLiJqkXHvWONBlxLb6/Zebi30M9NV es/g23NmBuSQC3VIxscIHCGAnDljyZNQg26y8jgTdH+RBRmBj/qj+rafyVy6KIYw+ZUp gy9A0nj3SfEp5T3G4Wg0PgIhPAdIHo20ps3m1W6wlHZzw95590xyws94y6wwr3s5UnWH 75/g== X-Gm-Message-State: AOAM5336fofzHxTe9QJLWmTC9qRAFEhGqk3SXRVBmIDvLR9aMfU2CsT5 wzED3cBj3XUGzuCRbKnEMlg= X-Google-Smtp-Source: ABdhPJw6zB1Q1+SsyBblqnkEKoO+pcfD7BBEEQhviz8Zaij6HkComFLpPpW4OWATPUtFvqxQ1zlA4Q== X-Received: by 2002:a17:90a:9414:: with SMTP id r20mr923796pjo.29.1602468554603; Sun, 11 Oct 2020 19:09:14 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id x18sm18492725pfj.90.2020.10.11.19.09.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:13 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Jordan Crouse , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 12/22] drm/msm: Add priv->mm_lock to protect active/inactive lists Date: Sun, 11 Oct 2020 19:09:39 -0700 Message-Id: <20201012020958.229288-13-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Rather than relying on the big dev->struct_mutex hammer, introduce a more specific lock for protecting the bo lists. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_debugfs.c | 7 +++++++ drivers/gpu/drm/msm/msm_drv.c | 7 +++++++ drivers/gpu/drm/msm/msm_drv.h | 13 +++++++++++- drivers/gpu/drm/msm/msm_gem.c | 28 +++++++++++++++----------- drivers/gpu/drm/msm/msm_gem_shrinker.c | 12 +++++++++++ drivers/gpu/drm/msm/msm_gpu.h | 5 ++++- 6 files changed, 58 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c index ee2e270f464c..64afbed89821 100644 --- a/drivers/gpu/drm/msm/msm_debugfs.c +++ b/drivers/gpu/drm/msm/msm_debugfs.c @@ -112,6 +112,11 @@ static int msm_gem_show(struct drm_device *dev, struct seq_file *m) { struct msm_drm_private *priv = dev->dev_private; struct msm_gpu *gpu = priv->gpu; + int ret; + + ret = mutex_lock_interruptible(&priv->mm_lock); + if (ret) + return ret; if (gpu) { seq_printf(m, "Active Objects (%s):\n", gpu->name); @@ -121,6 +126,8 @@ static int msm_gem_show(struct drm_device *dev, struct seq_file *m) seq_printf(m, "Inactive Objects:\n"); msm_gem_describe_objects(&priv->inactive_list, m); + mutex_unlock(&priv->mm_lock); + return 0; } diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 49685571dc0e..81cb2cecc829 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -7,6 +7,7 @@ #include #include +#include #include #include @@ -441,6 +442,12 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv) init_llist_head(&priv->free_list); INIT_LIST_HEAD(&priv->inactive_list); + mutex_init(&priv->mm_lock); + + /* Teach lockdep about lock ordering wrt. 
shrinker: */ + fs_reclaim_acquire(GFP_KERNEL); + might_lock(&priv->mm_lock); + fs_reclaim_release(GFP_KERNEL); drm_mode_config_init(ddev); diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 79ee7d05b363..a17dadd38685 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -174,8 +174,19 @@ struct msm_drm_private { struct msm_rd_state *hangrd; /* debugfs to dump hanging submits */ struct msm_perf_state *perf; - /* list of GEM objects: */ + /* + * List of inactive GEM objects. Every bo is either in the inactive_list + * or gpu->active_list (for the gpu it is active on[1]) + * + * These lists are protected by mm_lock. If struct_mutex is involved, it + * should be aquired prior to mm_lock. One should *not* hold mm_lock in + * get_pages()/vmap()/etc paths, as they can trigger the shrinker. + * + * [1] if someone ever added support for the old 2d cores, there could be + * more than one gpu object + */ struct list_head inactive_list; + struct mutex mm_lock; /* worker for delayed free of objects: */ struct work_struct free_work; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 833e3d3c6e8c..15f81ed2e154 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -753,13 +753,17 @@ int msm_gem_sync_object(struct drm_gem_object *obj, void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); + struct msm_drm_private *priv = obj->dev->dev_private; + + might_sleep(); WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED); if (!atomic_fetch_inc(&msm_obj->active_count)) { + mutex_lock(&priv->mm_lock); msm_obj->gpu = gpu; list_del_init(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &gpu->active_list); + mutex_unlock(&priv->mm_lock); } } @@ -768,12 +772,14 @@ void msm_gem_active_put(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_drm_private *priv = obj->dev->dev_private; - WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); + might_sleep(); if (!atomic_dec_return(&msm_obj->active_count)) { + mutex_lock(&priv->mm_lock); msm_obj->gpu = NULL; list_del_init(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &priv->inactive_list); + mutex_unlock(&priv->mm_lock); } } @@ -928,13 +934,16 @@ static void free_object(struct msm_gem_object *msm_obj) { struct drm_gem_object *obj = &msm_obj->base; struct drm_device *dev = obj->dev; + struct msm_drm_private *priv = dev->dev_private; WARN_ON(!mutex_is_locked(&dev->struct_mutex)); /* object should not be on active list: */ WARN_ON(is_active(msm_obj)); + mutex_lock(&priv->mm_lock); list_del(&msm_obj->mm_list); + mutex_unlock(&priv->mm_lock); msm_gem_lock(obj); @@ -1108,14 +1117,9 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev, mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); } - if (struct_mutex_locked) { - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - list_add_tail(&msm_obj->mm_list, &priv->inactive_list); - } else { - mutex_lock(&dev->struct_mutex); - list_add_tail(&msm_obj->mm_list, &priv->inactive_list); - mutex_unlock(&dev->struct_mutex); - } + mutex_lock(&priv->mm_lock); + list_add_tail(&msm_obj->mm_list, &priv->inactive_list); + mutex_unlock(&priv->mm_lock); return obj; @@ -1183,9 +1187,9 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, msm_gem_unlock(obj); - mutex_lock(&dev->struct_mutex); + mutex_lock(&priv->mm_lock); list_add_tail(&msm_obj->mm_list, 
&priv->inactive_list); - mutex_unlock(&dev->struct_mutex); + mutex_unlock(&priv->mm_lock); return obj; diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index 2dc0ffa925b4..6be073b8ca08 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -51,6 +51,8 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) if (!msm_gem_shrinker_lock(dev, &unlock)) return 0; + mutex_lock(&priv->mm_lock); + list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (!msm_gem_trylock(&msm_obj->base)) continue; @@ -59,6 +61,8 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) msm_gem_unlock(&msm_obj->base); } + mutex_unlock(&priv->mm_lock); + if (unlock) mutex_unlock(&dev->struct_mutex); @@ -78,6 +82,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) if (!msm_gem_shrinker_lock(dev, &unlock)) return SHRINK_STOP; + mutex_lock(&priv->mm_lock); + list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (freed >= sc->nr_to_scan) break; @@ -90,6 +96,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) msm_gem_unlock(&msm_obj->base); } + mutex_unlock(&priv->mm_lock); + if (unlock) mutex_unlock(&dev->struct_mutex); @@ -112,6 +120,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) if (!msm_gem_shrinker_lock(dev, &unlock)) return NOTIFY_DONE; + mutex_lock(&priv->mm_lock); + list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (!msm_gem_trylock(&msm_obj->base)) continue; @@ -129,6 +139,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) break; } + mutex_unlock(&priv->mm_lock); + if (unlock) mutex_unlock(&dev->struct_mutex); diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 6c9e1fdc1a76..1806e87600c0 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -94,7 +94,10 @@ struct msm_gpu { struct msm_ringbuffer *rb[MSM_GPU_MAX_RINGS]; int nr_rings; - /* list of GEM active objects: */ + /* + * List of GEM active objects on this gpu. Protected by + * msm_drm_private::mm_lock + */ struct list_head active_list; /* does gpu need hw_init? 
*/ From patchwork Mon Oct 12 02:09:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831455 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 87003109B for ; Mon, 12 Oct 2020 02:10:25 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 615E02078E for ; Mon, 12 Oct 2020 02:10:25 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="RazKJWqE" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729650AbgJLCKY (ORCPT ); Sun, 11 Oct 2020 22:10:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37152 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727389AbgJLCJU (ORCPT ); Sun, 11 Oct 2020 22:09:20 -0400 Received: from mail-pg1-x541.google.com (mail-pg1-x541.google.com [IPv6:2607:f8b0:4864:20::541]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CE64EC0613CE; Sun, 11 Oct 2020 19:09:19 -0700 (PDT) Received: by mail-pg1-x541.google.com with SMTP id x16so12832621pgj.3; Sun, 11 Oct 2020 19:09:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ndjT/5bxuvqBvTi00Id+BLmDSJw4qRZ7/j5AVgTmjhg=; b=RazKJWqE7grplBA/jh3dthZ3fOyd4oW+YFlRUQ1FeUMeyHgxA3OQ5NaOS8rlFqP/rJ EDe/lq+yZuj1Qty2VUVQ9zW8narHyfW/Hy2whbsdHvygNzoNG35LtuOczZO8anuWBF8d rfud3qI2nFO+p7YtFaJo1/sbcTtsYtXrYgtNGXr0AY9ZOU4NoFV3QX9M/Neyr0vbC3j3 sQjPr/SFDlRvgO5uVTAe4l6Z6LgjlPfysN4nRxC1+R3mgU2oFP75j27Cibm6HbECGIO+ a2xbEakmIm5MFlL6aEI/zmW+lVX/+qOpF0cCaVwiBJRYJnxuDVw3WiYT3OuSQDHlPljV ZgKg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ndjT/5bxuvqBvTi00Id+BLmDSJw4qRZ7/j5AVgTmjhg=; b=H7Ej3zC+LeuD6y2vAzFPCtTwyc3GGnHsLLHUNYoBBaKNZTdgNuX/cO5b+vuPkv9A1M NUsKVHdFh4EDOncWlpz6sOUGZDWrX9XhqYemAPP3AfOVxf3kD5eY4HG+SbR4f+XOvGQf 5s/KbtD/MwWGGlb5mnN+OQ+2VlegmkdSpafrxjpLx+J9Ks5Tc7sTKc9FABRbvVt+qp/t jSK166SivpXnltSIzJzyCVrHHk/ByoQf3LW1zMtfee8lV+LNuEUTmrj4xAnequjyShK0 NnyThrlnLQ11PCXnCHJ40a98itOsKXQ9WFH4LGpVcYsVet04AalQbFv9jntSOZvxTwlj M3TA== X-Gm-Message-State: AOAM532zi3HYzIMQ0a/1Ad530Y331qioJZ9qiYvM40XoJG14EpKxmM/J dsYjVnOhWjA7esCBe2OIgZs= X-Google-Smtp-Source: ABdhPJxzgd1jr4nHh6sHHjJ0l/yUeChZPcVDF0wpW7VXyTjFzyXu4JEY9I2OFQ5KrDe/3isXOxlRfA== X-Received: by 2002:a17:90a:e697:: with SMTP id s23mr16865949pjy.16.1602468559365; Sun, 11 Oct 2020 19:09:19 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id q16sm18644029pfj.117.2020.10.11.19.09.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:18 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Jordan Crouse , Rob Clark , Sean Paul , David Airlie , Eric Anholt , AngeloGioacchino Del Regno , Emil Velikov , Jonathan Marek , Sharat Masetty , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 13/22] drm/msm: Document and rename preempt_lock Date: Sun, 11 Oct 2020 19:09:40 -0700 Message-Id: <20201012020958.229288-14-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Before adding another lock, give ring->lock a more descriptive name. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 4 ++-- drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 12 ++++++------ drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 4 ++-- drivers/gpu/drm/msm/msm_ringbuffer.c | 2 +- drivers/gpu/drm/msm/msm_ringbuffer.h | 7 ++++++- 5 files changed, 17 insertions(+), 12 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index c941c8138f25..543437a2186e 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -36,7 +36,7 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring, OUT_RING(ring, upper_32_bits(shadowptr(a5xx_gpu, ring))); } - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); /* Copy the shadow to the actual register */ ring->cur = ring->next; @@ -44,7 +44,7 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring, /* Make sure to wrap wptr if we need to */ wptr = get_wptr(ring); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); /* Make sure everything is posted before making a decision */ mb(); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c index 7e04509c4e1f..183de1139eeb 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c @@ -45,9 +45,9 @@ static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) if (!ring) return; - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); wptr = get_wptr(ring); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); gpu_write(gpu, REG_A5XX_CP_RB_WPTR, wptr); } @@ -62,9 +62,9 @@ static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu) bool empty; struct msm_ringbuffer *ring = gpu->rb[i]; - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); empty = (get_wptr(ring) == ring->memptrs->rptr); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); if (!empty) return ring; @@ -132,9 +132,9 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu) } /* Make sure the wptr doesn't update while we're in motion */ - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); 
a5xx_gpu->preempt[ring->id]->wptr = get_wptr(ring); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); /* Set the address of the incoming preemption record */ gpu_write64(gpu, REG_A5XX_CP_CONTEXT_SWITCH_RESTORE_ADDR_LO, diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 8915882e4444..fc85f008d69d 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -65,7 +65,7 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring) OUT_RING(ring, upper_32_bits(shadowptr(a6xx_gpu, ring))); } - spin_lock_irqsave(&ring->lock, flags); + spin_lock_irqsave(&ring->preempt_lock, flags); /* Copy the shadow to the actual register */ ring->cur = ring->next; @@ -73,7 +73,7 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring) /* Make sure to wrap wptr if we need to */ wptr = get_wptr(ring); - spin_unlock_irqrestore(&ring->lock, flags); + spin_unlock_irqrestore(&ring->preempt_lock, flags); /* Make sure everything is posted before making a decision */ mb(); diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index 935bf9b1d941..1b6958e908dc 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -46,7 +46,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, ring->memptrs_iova = memptrs_iova; INIT_LIST_HEAD(&ring->submits); - spin_lock_init(&ring->lock); + spin_lock_init(&ring->preempt_lock); snprintf(name, sizeof(name), "gpu-ring-%d", ring->id); diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h index 0987d6bf848c..4956d1bc5d0e 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.h +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h @@ -46,7 +46,12 @@ struct msm_ringbuffer { struct msm_rbmemptrs *memptrs; uint64_t memptrs_iova; struct msm_fence_context *fctx; - spinlock_t lock; + + /* + * preempt_lock protects preemption and serializes wptr updates against + * preemption. Can be aquired from irq context. 
+ */ + spinlock_t preempt_lock; }; struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, From patchwork Mon Oct 12 02:09:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831453 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6BFFA14D5 for ; Mon, 12 Oct 2020 02:10:23 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 474EC20782 for ; Mon, 12 Oct 2020 02:10:23 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="e46AKw7J" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1727582AbgJLCKS (ORCPT ); Sun, 11 Oct 2020 22:10:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37158 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729687AbgJLCJX (ORCPT ); Sun, 11 Oct 2020 22:09:23 -0400 Received: from mail-pf1-x443.google.com (mail-pf1-x443.google.com [IPv6:2607:f8b0:4864:20::443]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A126BC0613D0; Sun, 11 Oct 2020 19:09:22 -0700 (PDT) Received: by mail-pf1-x443.google.com with SMTP id e10so12144488pfj.1; Sun, 11 Oct 2020 19:09:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ZyL5oO8OrN2wimj2udT1EhCNNwPm13wKBqPVtQcCgb8=; b=e46AKw7Ja86lF+PzViNHSEmngqxbLCrq7srlRybORCfJElIQ2kLMnDSAcpnXQWVLWA 39JlDDqryd6+vnF2vvZixwyBsdxevuSUwxU+ZZWH93rvpIZZOuhE59u0VwA/fUYMcABZ IvIZLoMXqJkd0zadNWxxPYJ0MQOSb5cxDIZzjg8PR/rSX8/Uwy8+VNbRETTmW4v8/qjG +YDTbMxap3bltcYUcdHDePFb+bJstz9r6YSoxr5h22kmUS4vfc3GwOUg7hdNdeSdRg4m /yR3qfEhSD5TKj8bea+FV6rJ0qlAsZKJ2Xa4lorrzXRJFVrkyHLNzQoxLVSlb3nAqqBI LP9Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ZyL5oO8OrN2wimj2udT1EhCNNwPm13wKBqPVtQcCgb8=; b=EcwKjltHe9anP3K38kJYVfgxFt2DC4Z/ucz6AHESu3dXwkroEAuW5wm2e0XbhUoj+J bpa2D3MwwXDh3sq8T5a4mIuZObovOoBMkXX91d58imxXK3OoiuMfX0R4ksSY+btrNKsZ 4cJE1rs+KGWrH17I4jGK8mrn2ZgGf/9Bf003YiWFezuFHHdzJhe0RevA+ZVXx/GC98Gn y10s9IrXY6zWvTK2RaASUiErEpG4nk6XY5UtVmsoTt7zFplp7MH/kXglDcewknjk1BH0 v6u8IZGYnTn24/BNHvybNhmksCGPWQ5BZuwjPKNc+dN8kZvHK25O+bg4No9KIFNQ1CFL Lklw== X-Gm-Message-State: AOAM533GjjZDmhjj2k2qP7EVWt4SU758j5wqeRlBxQP0mrJWcAgZ6gX7 qmwC6jc9dhmS7fg1Ox1ou2pOkp7cE1ntLoJZ X-Google-Smtp-Source: ABdhPJxHXz9/ZXrNkq8BOQH2/sjUdcP8edWZlDkMalRmfx6hLdR2t36zB1O0lUNSY0tSJSgoIe9VEw== X-Received: by 2002:a17:90a:109:: with SMTP id b9mr17897165pjb.35.1602468562114; Sun, 11 Oct 2020 19:09:22 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id c12sm17267726pgd.57.2020.10.11.19.09.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:20 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Jordan Crouse , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 14/22] drm/msm: Protect ring->submits with it's own lock Date: Sun, 11 Oct 2020 19:09:41 -0700 Message-Id: <20201012020958.229288-15-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark One less place to rely on dev->struct_mutex. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_gem_submit.c | 2 ++ drivers/gpu/drm/msm/msm_gpu.c | 37 ++++++++++++++++++++++------ drivers/gpu/drm/msm/msm_ringbuffer.c | 1 + drivers/gpu/drm/msm/msm_ringbuffer.h | 6 +++++ 4 files changed, 39 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 35b7d9d06850..a91c1b99db97 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -65,7 +65,9 @@ void msm_gem_submit_free(struct msm_gem_submit *submit) unsigned i; dma_fence_put(submit->fence); + spin_lock(&submit->ring->submit_lock); list_del(&submit->node); + spin_unlock(&submit->ring->submit_lock); put_pid(submit->pid); msm_submitqueue_put(submit->queue); diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index e5b7c8a77c99..bb904e467b24 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -270,6 +270,7 @@ static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, { struct msm_gem_submit *submit; + spin_lock(&ring->submit_lock); list_for_each_entry(submit, &ring->submits, node) { if (submit->seqno > fence) break; @@ -277,6 +278,7 @@ static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, msm_update_fence(submit->ring->fctx, submit->fence->seqno); } + spin_unlock(&ring->submit_lock); } #ifdef CONFIG_DEV_COREDUMP @@ -430,11 +432,14 @@ find_submit(struct msm_ringbuffer *ring, uint32_t fence) { struct msm_gem_submit *submit; - WARN_ON(!mutex_is_locked(&ring->gpu->dev->struct_mutex)); - - list_for_each_entry(submit, &ring->submits, node) - if (submit->seqno == fence) + spin_lock(&ring->submit_lock); + list_for_each_entry(submit, &ring->submits, node) { + if (submit->seqno == fence) { + spin_unlock(&ring->submit_lock); return submit; + } + } + spin_unlock(&ring->submit_lock); return NULL; } @@ -523,8 +528,10 @@ static void recover_worker(struct work_struct *work) for (i = 0; i < gpu->nr_rings; i++) { struct msm_ringbuffer *ring = gpu->rb[i]; + spin_lock(&ring->submit_lock); list_for_each_entry(submit, &ring->submits, node) gpu->funcs->submit(gpu, submit); + spin_unlock(&ring->submit_lock); } } @@ -711,7 +718,6 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, static void retire_submits(struct msm_gpu *gpu) { struct drm_device *dev = gpu->dev; - struct msm_gem_submit *submit, *tmp; int i; WARN_ON(!mutex_is_locked(&dev->struct_mutex)); @@ -720,9 +726,24 @@ static void 
retire_submits(struct msm_gpu *gpu) for (i = 0; i < gpu->nr_rings; i++) { struct msm_ringbuffer *ring = gpu->rb[i]; - list_for_each_entry_safe(submit, tmp, &ring->submits, node) { - if (dma_fence_is_signaled(submit->fence)) + while (true) { + struct msm_gem_submit *submit = NULL; + + spin_lock(&ring->submit_lock); + submit = list_first_entry_or_null(&ring->submits, + struct msm_gem_submit, node); + spin_unlock(&ring->submit_lock); + + /* + * If no submit, we are done. If submit->fence hasn't + * been signalled, then later submits are not signalled + * either, so we are also done. + */ + if (submit && dma_fence_is_signaled(submit->fence)) { retire_submit(gpu, ring, submit); + } else { + break; + } } } } @@ -765,7 +786,9 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) submit->seqno = ++ring->seqno; + spin_lock(&ring->submit_lock); list_add_tail(&submit->node, &ring->submits); + spin_unlock(&ring->submit_lock); msm_rd_dump_submit(priv->rd, submit, NULL); diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index 1b6958e908dc..4d2a2a4abef8 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -46,6 +46,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, ring->memptrs_iova = memptrs_iova; INIT_LIST_HEAD(&ring->submits); + spin_lock_init(&ring->submit_lock); spin_lock_init(&ring->preempt_lock); snprintf(name, sizeof(name), "gpu-ring-%d", ring->id); diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h index 4956d1bc5d0e..fe55d4a1aa16 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.h +++ b/drivers/gpu/drm/msm/msm_ringbuffer.h @@ -39,7 +39,13 @@ struct msm_ringbuffer { int id; struct drm_gem_object *bo; uint32_t *start, *end, *cur, *next; + + /* + * List of in-flight submits on this ring. Protected by submit_lock. 
+ */ struct list_head submits; + spinlock_t submit_lock; + uint64_t iova; uint32_t seqno; uint32_t hangcheck_fence; From patchwork Mon Oct 12 02:09:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831433 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 74C1514D5 for ; Mon, 12 Oct 2020 02:09:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 56F022078E for ; Mon, 12 Oct 2020 02:09:48 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="KgxKD7H/" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726461AbgJLCJr (ORCPT ); Sun, 11 Oct 2020 22:09:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37210 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729795AbgJLCJj (ORCPT ); Sun, 11 Oct 2020 22:09:39 -0400 Received: from mail-pf1-x441.google.com (mail-pf1-x441.google.com [IPv6:2607:f8b0:4864:20::441]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 54842C0613D6; Sun, 11 Oct 2020 19:09:25 -0700 (PDT) Received: by mail-pf1-x441.google.com with SMTP id 144so12129617pfb.4; Sun, 11 Oct 2020 19:09:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=0Y1ZXL5tzeULxGQEQ98upbCTjOEGFjR0SJeNEjRygY4=; b=KgxKD7H/reOXnjxIspELqz3EGjOjlW01OL0qkzO8E8u33VbhpjUJjBGOiHgz8pF6fU iTyzaFLJGKjMd9MVlq0mmB1TZAu/ikMcN4P4SqAPpJmJL3YrB/p/hk5WPlvHa26OxXFs 9pcxJVqwc12BI2o0Z5GWP6KuBPSdv6Fm/yrfGu/sGLzyTsMwUtRrYO71fkVpm5PY8dUN 3/FdcnVea7ScXPEtqr+JUlN6ntlnD5tcCKKLr5m44pljDchWW5VA1pTZF/QSlGA3Sdqy DjWgM5x9TAZB27g6rdiYIPB8Wxgxdhv25viYt7hMh2sVRkIxo3Ae1ZRn41h+C0DattF6 7hDg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=0Y1ZXL5tzeULxGQEQ98upbCTjOEGFjR0SJeNEjRygY4=; b=qCtPTzCt6ntWBdtkVkUSG/7TnO++Ys3LSavgSogn42XO5CCe4mL2OE9iirC8W411ap ssbTSVbR9OLeAotAFERSrPYh3V+9AlvagtYTjP42nxuKPnZtoAWBgEtF+lzQzO4DHLp6 2Obkbrohup/DS0snXKXX4TMYWe00ROniHB7QSDSXwEjU6dCDNphPwD5nEywCkQ0fD2YF i/8pEm1P6PrYQWzKd17/Oe6MsC+7F0d1rYf8+oVzzuMAaJDrRQUfC7H5FMmPRzruPAvu sqTzCJMNb322E6F1FOVuc0H4EL3+OwY+zqDp2Ow5ZSMCyt8d50l4ay1kDeajX0EkTb0J djmQ== X-Gm-Message-State: AOAM533fe6iae7kaGSpR92HLTGWAA3rdUnM4vNt6OF4HSNFd1xoZvjCz 7P2lGAI5xBjJCr1h/IUBhyk= X-Google-Smtp-Source: ABdhPJwkHmYG3JLUCbaKXxgn2zEXUFi4lRRA+7075Pf3yqklwMpFZlrqr3qhkwPNZ3pPnR6GgPupew== X-Received: by 2002:a17:90b:e15:: with SMTP id ge21mr16365454pjb.188.1602468564802; Sun, 11 Oct 2020 19:09:24 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id kv19sm21560902pjb.22.2020.10.11.19.09.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:23 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Jordan Crouse , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 15/22] drm/msm: Refcount submits Date: Sun, 11 Oct 2020 19:09:42 -0700 Message-Id: <20201012020958.229288-16-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Before we remove dev->struct_mutex from the retire path, we have to deal with the situation of a submit retiring before the submit ioctl returns. To deal with this, ring->submits will hold a reference to the submit, which is dropped when the submit is retired. And the submit ioctl path holds it's own ref, which it drops when it is done with the submit. Also, add to submit list *after* getting/pinning bo's, to prevent badness in case the completed fence is corrupted, and retire_worker mistakenly believes the submit is done too early. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_drv.h | 1 - drivers/gpu/drm/msm/msm_gem.h | 13 +++++++++++++ drivers/gpu/drm/msm/msm_gem_submit.c | 11 +++++------ drivers/gpu/drm/msm/msm_gpu.c | 21 ++++++++++++++++----- 4 files changed, 34 insertions(+), 12 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index a17dadd38685..2ef5cff19883 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -277,7 +277,6 @@ void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); bool msm_use_mmu(struct drm_device *dev); -void msm_gem_submit_free(struct msm_gem_submit *submit); int msm_ioctl_gem_submit(struct drm_device *dev, void *data, struct drm_file *file); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index ec01f35ce57b..93ee73c620ed 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -211,6 +211,7 @@ void msm_gem_free_work(struct work_struct *work); * lasts for the duration of the submit-ioctl. 
*/ struct msm_gem_submit { + struct kref ref; struct drm_device *dev; struct msm_gpu *gpu; struct msm_gem_address_space *aspace; @@ -247,6 +248,18 @@ struct msm_gem_submit { } bos[]; }; +void __msm_gem_submit_destroy(struct kref *kref); + +static inline void msm_gem_submit_get(struct msm_gem_submit *submit) +{ + kref_get(&submit->ref); +} + +static inline void msm_gem_submit_put(struct msm_gem_submit *submit) +{ + kref_put(&submit->ref, __msm_gem_submit_destroy); +} + /* helper to determine of a buffer in submit should be dumped, used for both * devcoredump and debugfs cmdstream dumping: */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index a91c1b99db97..3151a0ca8904 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -42,6 +42,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, if (!submit) return NULL; + kref_init(&submit->ref); submit->dev = dev; submit->aspace = queue->ctx->aspace; submit->gpu = gpu; @@ -60,14 +61,13 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, return submit; } -void msm_gem_submit_free(struct msm_gem_submit *submit) +void __msm_gem_submit_destroy(struct kref *kref) { + struct msm_gem_submit *submit = + container_of(kref, struct msm_gem_submit, ref); unsigned i; dma_fence_put(submit->fence); - spin_lock(&submit->ring->submit_lock); - list_del(&submit->node); - spin_unlock(&submit->ring->submit_lock); put_pid(submit->pid); msm_submitqueue_put(submit->queue); @@ -841,8 +841,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, submit_cleanup(submit); if (has_ww_ticket) ww_acquire_fini(&submit->ticket); - if (ret) - msm_gem_submit_free(submit); + msm_gem_submit_put(submit); out_unlock: if (ret && (out_fence_fd >= 0)) put_unused_fd(out_fence_fd); diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index bb904e467b24..18a7948ac437 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -712,7 +712,12 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, pm_runtime_mark_last_busy(&gpu->pdev->dev); pm_runtime_put_autosuspend(&gpu->pdev->dev); - msm_gem_submit_free(submit); + + spin_lock(&ring->submit_lock); + list_del(&submit->node); + spin_unlock(&ring->submit_lock); + + msm_gem_submit_put(submit); } static void retire_submits(struct msm_gpu *gpu) @@ -786,10 +791,6 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) submit->seqno = ++ring->seqno; - spin_lock(&ring->submit_lock); - list_add_tail(&submit->node, &ring->submits); - spin_unlock(&ring->submit_lock); - msm_rd_dump_submit(priv->rd, submit, NULL); update_sw_cntrs(gpu); @@ -816,6 +817,16 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) msm_gem_active_get(drm_obj, gpu); } + /* + * ring->submits holds a ref to the submit, to deal with the case + * that a submit completes before msm_ioctl_gem_submit() returns. 
+ */ + msm_gem_submit_get(submit); + + spin_lock(&ring->submit_lock); + list_add_tail(&submit->node, &ring->submits); + spin_unlock(&ring->submit_lock); + gpu->funcs->submit(gpu, submit); priv->lastctx = submit->queue->ctx; From patchwork Mon Oct 12 02:09:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831445 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4BA34109B for ; Mon, 12 Oct 2020 02:10:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 268E42078E for ; Mon, 12 Oct 2020 02:10:14 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="qs9Kwg6v" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729944AbgJLCKC (ORCPT ); Sun, 11 Oct 2020 22:10:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37208 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729763AbgJLCJh (ORCPT ); Sun, 11 Oct 2020 22:09:37 -0400 Received: from mail-pg1-x541.google.com (mail-pg1-x541.google.com [IPv6:2607:f8b0:4864:20::541]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BF0ACC0613D7; Sun, 11 Oct 2020 19:09:27 -0700 (PDT) Received: by mail-pg1-x541.google.com with SMTP id n9so12815999pgf.9; Sun, 11 Oct 2020 19:09:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=1sDR5wASc9ZPDU+65532Xft9syZCOfG7kJBBHeoYHbg=; b=qs9Kwg6vcM9brPA6GlK1ADDUgEHmOOGzqCiBb2QaNa5kkmW0jkbU3dTeogS1OfDHaZ QeSRR77mwxQV6UCUPmtvIaO8UhLnmZHtl7U8HY90kvy2d/bgVuI6kx0O/bD0EycyIvU0 W9TCbQkLW34ikTZ7CnHlhcRWhQQVtiSogLFTHf7RqpAQtxtavdef5MxSCyhjCfCZk/Hs IpcRZK5E5o0bydx9ymSirQmPKu/RFBF6ZVYo/fnnIeGkfZtYWXmB5ztu4H+I5ghZqt8B u1Cabn83lwSPn8Sl/Bnz0qJjpQUNuwTqvSPHRYiRNOz1SMWBGLLek+R4/YmKI4KoA+Th L9lw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=1sDR5wASc9ZPDU+65532Xft9syZCOfG7kJBBHeoYHbg=; b=FuLHM+tWOZqrgLX1Lih3JdU6ZAgmUvifw0mVn9aq1SLicCg+dXGGRavP1+4GDZCQNk Bj8vgpArJeUhLP8GBs2Hn2C/z9keFQdajoPtSjVb6AGevjqwmyDtC8YChd6YrD+r8VHX I0mSeK51+EGLOC2lrLt3V2yWrWSiJTyEivbnbh15Q6mQgGaWwEzNQrgYBy9p9CbJwyav JLGJfuXNQa2UwoCT2h6Oyx46A/XpcFzcjgTfFHkuXSC9VxBw2XG501wtO/SIMAjJN17q LAcos1lWC4W0TQkVJghTZI+nsqYx3O2FjRqN2Ny1Alchnaz/khrQHZ/SYqM7QaBOvJbp sYnA== X-Gm-Message-State: AOAM531lViHscvBzfVH2S/w2KAH8/g23cHcfLcRpAXul8EIyu4EqmCM0 AhY3n3XbbF1M9ycZsixHapU= X-Google-Smtp-Source: ABdhPJwSmhu5yL3ga9OwBqL7M0FfmLC6fLlO0LI7uhSL0N+DrMYZdHT9en3D+tK808DyrOX1zSVS4w== X-Received: by 2002:aa7:9358:0:b029:152:b349:8af8 with SMTP id 24-20020aa793580000b0290152b3498af8mr22631648pfn.9.1602468567273; Sun, 11 Oct 2020 19:09:27 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id p6sm21597805pjd.1.2020.10.11.19.09.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:26 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 16/22] drm/msm: Remove obj->gpu Date: Sun, 11 Oct 2020 19:09:43 -0700 Message-Id: <20201012020958.229288-17-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark It cannot be atomically updated with obj->active_count, and the only purpose is a useless WARN_ON() (which becomes a buggy WARN_ON() once retire_submits() is not serialized with incoming submits via struct_mutex) Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 2 -- drivers/gpu/drm/msm/msm_gem.h | 1 - drivers/gpu/drm/msm/msm_gpu.c | 5 ----- 3 files changed, 8 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 15f81ed2e154..cdbbdd848fe3 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -760,7 +760,6 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu) if (!atomic_fetch_inc(&msm_obj->active_count)) { mutex_lock(&priv->mm_lock); - msm_obj->gpu = gpu; list_del_init(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &gpu->active_list); mutex_unlock(&priv->mm_lock); @@ -776,7 +775,6 @@ void msm_gem_active_put(struct drm_gem_object *obj) if (!atomic_dec_return(&msm_obj->active_count)) { mutex_lock(&priv->mm_lock); - msm_obj->gpu = NULL; list_del_init(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &priv->inactive_list); mutex_unlock(&priv->mm_lock); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 93ee73c620ed..bf5f9e94d0d3 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -64,7 +64,6 @@ struct msm_gem_object { * */ struct list_head mm_list; - struct msm_gpu *gpu; /* non-null if active */ /* Transiently in the process of submit ioctl, objects associated * with the submit are on submit->bo_list.. this only lasts for diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 18a7948ac437..8278a4df331a 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -800,11 +800,6 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit) struct drm_gem_object *drm_obj = &msm_obj->base; uint64_t iova; - /* can't happen yet.. 
but when we add 2d support we'll have - * to deal w/ cross-ring synchronization: - */ - WARN_ON(is_active(msm_obj) && (msm_obj->gpu != gpu)); - /* submit takes a reference to the bo and iova until retired: */ drm_gem_object_get(&msm_obj->base); msm_gem_get_and_pin_iova_locked(&msm_obj->base, submit->aspace, &iova); From patchwork Mon Oct 12 02:09:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831443 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 73806109B for ; Mon, 12 Oct 2020 02:10:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 522272078A for ; Mon, 12 Oct 2020 02:10:13 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="afbaQ4gV" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729954AbgJLCKD (ORCPT ); Sun, 11 Oct 2020 22:10:03 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37212 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729777AbgJLCJh (ORCPT ); Sun, 11 Oct 2020 22:09:37 -0400 Received: from mail-pl1-x644.google.com (mail-pl1-x644.google.com [IPv6:2607:f8b0:4864:20::644]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3F305C0613D8; Sun, 11 Oct 2020 19:09:30 -0700 (PDT) Received: by mail-pl1-x644.google.com with SMTP id t18so7761805plo.1; Sun, 11 Oct 2020 19:09:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=ycYG7+4yZf0+EawqBCWFVGsugPtbJtT4YGGzzwI3EFY=; b=afbaQ4gVwtW7UpRdOzzW2yXOcH1/zUxpxTtkIMNtP7+mHx5erxv40REjWFquDcrnu7 aKtsd7Ny1Kgeb2np5L4Tk6gVFnLei/u8KKP6szTjkLPm8/XeQtaa3yieh71HEy7UOGob swQDSNZWnXlbMLD6bmylhV2w0o/B7ELqBaEaCxgihbIGhDKOK2Ishd2Dwphu8fiYzYbP hQvPa4GZSug7CD54x8aLpJGPlJ8ZcQNht/53yGmoYcmZddTLxtzv/R/btas+my/I0xqt ideudZNuUUDjVtj9Bze7+Jk/rPeOITf7g8aMOGvDLZexHzZ8XybPZFa8Av/MOpTPTF45 gGmw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ycYG7+4yZf0+EawqBCWFVGsugPtbJtT4YGGzzwI3EFY=; b=jsmLfdTpzOy7AGwrmZPuTcolaAPVoacuTPgXstnDdJhKiyUVqm7+p4RqfHLSDQXy1w HzUnOFpfEVKZRjeyHpy9+rlAv0rCbx5MqBdwrnUGE9ZguSVIQ2ZaCy4jhGmKNruaNRNI c4sBGYyWC+FKB2L1gsRn1rf+nPakdsAMSQn5B9H2HxmkgMol/i4DDCgJfy5HLxpUSY8z 5GUNQIO61j0pF36REu4QBNvz7cKFYf+PXihb1ATrwmpnm3VH/3Jhq9zsnUoZ4/oUj0jY VjV6nBRmp/MtUxpZ/LfC8ElzpHPq6q577GeilNM9ttS+dDGHqLLBOmDeB2NpxsW2Z/AX YqBg== X-Gm-Message-State: AOAM530izyVDTcYFeTzDn+reTPc/s6EVXQPyg2730KWF8N+Op+l/7AB8 vcOl8hiIUWsQOvNA2Rozjj4= X-Google-Smtp-Source: ABdhPJz4Kg/jGVAqQXCBqhA6ZjgK+Nv2BP6zRV2+ijSWDL3K4XKVh1dGgPPwaINSnIHTRCVz5ZZ9lg== X-Received: by 2002:a17:902:8307:b029:d3:89e2:7866 with SMTP id bd7-20020a1709028307b02900d389e27866mr21601707plb.42.1602468569792; Sun, 11 Oct 2020 19:09:29 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id i1sm22456217pjh.52.2020.10.11.19.09.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:28 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Jordan Crouse , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 17/22] drm/msm: Drop struct_mutex from the retire path Date: Sun, 11 Oct 2020 19:09:44 -0700 Message-Id: <20201012020958.229288-18-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that we are not relying on dev->struct_mutex to protect the ring->submits lists, drop the struct_mutex lock. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_gpu.c | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 8278a4df331a..a754e84b8b5d 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -707,7 +707,7 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, msm_gem_active_put(&msm_obj->base); msm_gem_unpin_iova(&msm_obj->base, submit->aspace); - drm_gem_object_put_locked(&msm_obj->base); + drm_gem_object_put(&msm_obj->base); } pm_runtime_mark_last_busy(&gpu->pdev->dev); @@ -722,11 +722,8 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, static void retire_submits(struct msm_gpu *gpu) { - struct drm_device *dev = gpu->dev; int i; - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - /* Retire the commits starting with highest priority */ for (i = 0; i < gpu->nr_rings; i++) { struct msm_ringbuffer *ring = gpu->rb[i]; @@ -756,15 +753,12 @@ static void retire_submits(struct msm_gpu *gpu) static void retire_worker(struct work_struct *work) { struct msm_gpu *gpu = container_of(work, struct msm_gpu, retire_work); - struct drm_device *dev = gpu->dev; int i; for (i = 0; i < gpu->nr_rings; i++) update_fences(gpu, gpu->rb[i], gpu->rb[i]->memptrs->fence); - mutex_lock(&dev->struct_mutex); retire_submits(gpu); - mutex_unlock(&dev->struct_mutex); } /* call from irq handler to schedule work to retire bo's */ From patchwork Mon Oct 12 02:09:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831441 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 76E2B109B for ; Mon, 12 Oct 2020 02:10:03 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 51F452078E for ; Mon, 12 Oct 2020 02:10:03 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="T5eHSyoh" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729936AbgJLCKC (ORCPT ); Sun, 11 Oct 2020 22:10:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37214 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
S1729774AbgJLCJh (ORCPT ); Sun, 11 Oct 2020 22:09:37 -0400 Received: from mail-pg1-x543.google.com (mail-pg1-x543.google.com [IPv6:2607:f8b0:4864:20::543]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 68B61C0613D9; Sun, 11 Oct 2020 19:09:32 -0700 (PDT) Received: by mail-pg1-x543.google.com with SMTP id i2so12814860pgh.7; Sun, 11 Oct 2020 19:09:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=E/TFR+2oS8aXAXQSVrSXxUZHArJIJxnLPvbb0HwdP28=; b=T5eHSyohXJkOLlo42v+Hi4QsriRdqy23BLoyzBTOrjdq9FBsd41nbdePj0h5LRzXHa 08YC8PkYjvPxlIHrFUnl1RfWL2SRs9qszqeM7CyIrPnUr6YHKypPLYU0KgnCaRc+rDd2 JyD3DJvFrOdFXEaYedVspZh4cPgjq1ov2o/tFLC/H+aSQkh3W3LSFQcq8DsyQCnUMeiQ gCnrCSQcU9/rfwfJnaRW9iPRo/3JF870WsYTTmbf4rtUuzp5vKqn2Kv++svKwGm9kJWC et1wlh9pMBw5cYhdS2hqhx/Sksmk62iupSXXf1b4SVbpOtZUEhlPZ+KumNkcxYg/DyDo Ri4Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=E/TFR+2oS8aXAXQSVrSXxUZHArJIJxnLPvbb0HwdP28=; b=IGiRxAiKRuTZv36vuJWAkM7j2nj1Uq58wZ9ixNz0v09Kh2bPD09fVMit20uQl8d64h DkwArEuENU9KHGXl50O2WexAkCSZKuh++HE1K52rJaPZ8cEf6hZz++jQepo6jIO1Hkr7 gZeCttelNk0euCJd6pnVzvcl0X0vLOZMm0iW2k3PMu560ckXb5AmWVkA1mDq49GtFrW7 luKJYWS8rlpXeyIauWEJkslcn6JzEHeeGjLYdTJytRBHFVfcsbDRSuzi36k2UVxTz3fG TXL6poyEV+H71ulojiuLQsbZek1kikq8/Ye+dFO3t0A0ujkXeGB6R9bHkQXwSxDjeR// NIFA== X-Gm-Message-State: AOAM531d/1UO+AmIQcWFDD67eaQCp1Z42LD2unOm5Pjq8E1Qm/pdivAm wL1tija3WiCIToAnK0YiJo0= X-Google-Smtp-Source: ABdhPJxaWc5G/1wsqt+UdJIQsybc5F/w1jq2ins3zBia7tRzGL51TsgYMNfBcPQwZZHYWheq6MVeIw== X-Received: by 2002:a63:5914:: with SMTP id n20mr10707355pgb.69.1602468571975; Sun, 11 Oct 2020 19:09:31 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id fa12sm12653649pjb.25.2020.10.11.19.09.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:31 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 18/22] drm/msm: Drop struct_mutex in free_object() path Date: Sun, 11 Oct 2020 19:09:45 -0700 Message-Id: <20201012020958.229288-19-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that active_list/inactive_list is protected by mm_lock, we no longer need dev->struct_mutex in the free_object() path. 
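Roughly, the resulting free path only needs the driver-wide mm_lock for list membership plus the per-object lock; the sketch below is illustrative only (simplified names, helpers elided), not the verbatim driver code:

/*
 * Sketch of the free path this change allows: list membership is
 * guarded by priv->mm_lock, everything else is per-object state,
 * so dev->struct_mutex is no longer needed here.
 */
static void example_free_object(struct msm_gem_object *msm_obj)
{
        struct drm_gem_object *obj = &msm_obj->base;
        struct msm_drm_private *priv = obj->dev->dev_private;

        /* drop the object from the active/inactive list: */
        mutex_lock(&priv->mm_lock);
        list_del(&msm_obj->mm_list);
        mutex_unlock(&priv->mm_lock);

        /* the rest only touches per-object state (iova, pages, vmap),
         * serialized by the object's own lock:
         */
        msm_gem_lock(obj);
        /* ... release iova and backing pages here ... */
        msm_gem_unlock(obj);

        drm_gem_object_release(obj);
        kfree(msm_obj);
}
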
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 8 -------- 1 file changed, 8 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index cdbbdd848fe3..9ead1bf223e9 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -934,8 +934,6 @@ static void free_object(struct msm_gem_object *msm_obj) struct drm_device *dev = obj->dev; struct msm_drm_private *priv = dev->dev_private; - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - /* object should not be on active list: */ WARN_ON(is_active(msm_obj)); @@ -972,20 +970,14 @@ void msm_gem_free_work(struct work_struct *work) { struct msm_drm_private *priv = container_of(work, struct msm_drm_private, free_work); - struct drm_device *dev = priv->dev; struct llist_node *freed; struct msm_gem_object *msm_obj, *next; while ((freed = llist_del_all(&priv->free_list))) { - - mutex_lock(&dev->struct_mutex); - llist_for_each_entry_safe(msm_obj, next, freed, freed) free_object(msm_obj); - mutex_unlock(&dev->struct_mutex); - if (need_resched()) break; } From patchwork Mon Oct 12 02:09:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831437 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CF0B714D5 for ; Mon, 12 Oct 2020 02:10:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A1EE72078A for ; Mon, 12 Oct 2020 02:10:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="tt6Y8ZDJ" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729820AbgJLCJz (ORCPT ); Sun, 11 Oct 2020 22:09:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37216 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729778AbgJLCJi (ORCPT ); Sun, 11 Oct 2020 22:09:38 -0400 Received: from mail-pg1-x542.google.com (mail-pg1-x542.google.com [IPv6:2607:f8b0:4864:20::542]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0A5C0C0613DA; Sun, 11 Oct 2020 19:09:35 -0700 (PDT) Received: by mail-pg1-x542.google.com with SMTP id j7so2060441pgk.5; Sun, 11 Oct 2020 19:09:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=8cenh8sbhC3ofASWaW0WRGlig0FwCUNWxn0tSgVv6pc=; b=tt6Y8ZDJZ9YBeu5MGzAIHCPhBMA8WR5nSRsmdpYTiUnXuLe5ogyJr5tKdETiwYAj2a BmFOiGSD5cGX+JDZu0i6KMtFEVJ1saOz5LZhGH7N76J2rNkL+hvAokBbdRBVymnlifrI vMBZwCjpk6ZazGPP+kNWUndXO5Je/PUqnmQ9sC0Tm8OLknUKUmFcpFptrTfYTARZAYhT StLVGFCCrlSSNE3EhzlPj3bp4thgiMD8SITpWGgipTxl/9cV1yunJltxZO397Hg8ILCC V6xKotoKwViFCZgbAQ8EgaAJxf6mIsK2UfenR6kVPD+h4c8eSpQtmafr9rYN1bcMxhAw SlbQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=8cenh8sbhC3ofASWaW0WRGlig0FwCUNWxn0tSgVv6pc=; b=AjhSU1G756v/2fqYqtpsx+f54s40yEreHdgJIpTIMuv6TtqEG49WoEudH2fNX8/kIh IbmLfMOoGDFtP3MrM6zEKxbIZlF8v1BLoit8/UUpWS91U/ejj334FMDF4Fu9821jMYWV LmtSwGsRlMYFca0OW3Ms1MGg5aK1mQt6IGIy8AC/AlipKdoJ8Iim5xrJz8WllPIXEby6 j3Hz4O0zfSKwwQFkr4KY4gxa2XGb7DnKcVexQH/Am7TLBK6owidf10GNuaRkTGcKUJHl 
dslJznXkj4rfGhgn9bulOHRgOjqrDJdZi66or0YexQzNUxaJLyeQGDrdbBtL3yK5Cpx5 Htww== X-Gm-Message-State: AOAM532kFSA4665GD1r0/BcnqaQEs2ymWY6WnWqIbxtY0eqyUmNeVkAW ZG6sJPL8RJSLJV8TIeNQMnw= X-Google-Smtp-Source: ABdhPJxnwTIgUU01/dpi3IS0zT1litH2V2HbLH5VFgU5aAhqnvTB+FCFBqwzLcZ76v/fE/QR/bwvGA== X-Received: by 2002:a17:90a:160f:: with SMTP id n15mr17171197pja.75.1602468574533; Sun, 11 Oct 2020 19:09:34 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id l3sm6517081pju.28.2020.10.11.19.09.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:33 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 19/22] drm/msm: remove msm_gem_free_work Date: Sun, 11 Oct 2020 19:09:46 -0700 Message-Id: <20201012020958.229288-20-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that we don't need struct_mutex in the free path, we can get rid of the asynchronous free altogether. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_drv.c | 3 --- drivers/gpu/drm/msm/msm_drv.h | 5 ----- drivers/gpu/drm/msm/msm_gem.c | 27 --------------------------- drivers/gpu/drm/msm/msm_gem.h | 1 - 4 files changed, 36 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 81cb2cecc829..49e6daf30b42 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -438,9 +438,6 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv) priv->wq = alloc_ordered_workqueue("msm", 0); - INIT_WORK(&priv->free_work, msm_gem_free_work); - init_llist_head(&priv->free_list); - INIT_LIST_HEAD(&priv->inactive_list); mutex_init(&priv->mm_lock); diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 2ef5cff19883..af296712eae8 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -188,10 +188,6 @@ struct msm_drm_private { struct list_head inactive_list; struct mutex mm_lock; - /* worker for delayed free of objects: */ - struct work_struct free_work; - struct llist_head free_list; - struct workqueue_struct *wq; unsigned int num_planes; @@ -291,7 +287,6 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); -void msm_gem_free_work(struct work_struct *work); int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct msm_gem_address_space *aspace); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 9ead1bf223e9..b60eaf6266e2 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -924,16 +924,6 @@ void msm_gem_free_object(struct drm_gem_object *obj) struct drm_device *dev = obj->dev; struct msm_drm_private *priv = dev->dev_private; - if (llist_add(&msm_obj->freed, &priv->free_list)) - queue_work(priv->wq, &priv->free_work); -} - -static void free_object(struct msm_gem_object *msm_obj) -{ 
- struct drm_gem_object *obj = &msm_obj->base; - struct drm_device *dev = obj->dev; - struct msm_drm_private *priv = dev->dev_private; - /* object should not be on active list: */ WARN_ON(is_active(msm_obj)); @@ -966,23 +956,6 @@ static void free_object(struct msm_gem_object *msm_obj) kfree(msm_obj); } -void msm_gem_free_work(struct work_struct *work) -{ - struct msm_drm_private *priv = - container_of(work, struct msm_drm_private, free_work); - struct llist_node *freed; - struct msm_gem_object *msm_obj, *next; - - while ((freed = llist_del_all(&priv->free_list))) { - llist_for_each_entry_safe(msm_obj, next, - freed, freed) - free_object(msm_obj); - - if (need_resched()) - break; - } -} - /* convenience method to construct a GEM buffer object, and userspace handle */ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, uint32_t size, uint32_t flags, uint32_t *handle, diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index bf5f9e94d0d3..c12fedf88e85 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -202,7 +202,6 @@ static inline bool is_vunmapable(struct msm_gem_object *msm_obj) void msm_gem_purge(struct drm_gem_object *obj); void msm_gem_vunmap(struct drm_gem_object *obj); -void msm_gem_free_work(struct work_struct *work); /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, * associated with the cmdstream submission for synchronization (and From patchwork Mon Oct 12 02:09:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831431 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 53D2F14D5 for ; Mon, 12 Oct 2020 02:09:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 303C82078A for ; Mon, 12 Oct 2020 02:09:47 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="iv62+x78" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729825AbgJLCJl (ORCPT ); Sun, 11 Oct 2020 22:09:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37218 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729808AbgJLCJj (ORCPT ); Sun, 11 Oct 2020 22:09:39 -0400 Received: from mail-pf1-x443.google.com (mail-pf1-x443.google.com [IPv6:2607:f8b0:4864:20::443]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 83098C0613DB; Sun, 11 Oct 2020 19:09:37 -0700 (PDT) Received: by mail-pf1-x443.google.com with SMTP id e7so3113143pfn.12; Sun, 11 Oct 2020 19:09:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=B9RVbg78ux75/+n4NUnNwRZeJdtfJ/ZCin3JTFeaEqU=; b=iv62+x78oebBSf9WRr77yT04o2Ecq8lN0AzXaBzNBW9eEulyTt8DRPFTRoLgrwAI4Y qs4te07B2lWie8HE1qsOu8vl2R1IeTzzAf7thCFjN5hev8QY2zLY/2rgouq71JHXJNfv YZ4+r99NmSzoBhrRIdPkhSrOEIDUqUDwTypkxu/sDj7FblYs69iYi3UQdX6mKvp46Prx IJWavR2yE2/LkpUj7hI4TYJMWC0tD8DuhIu/eyziXQ2TyRUcs3nsOceLht/wvKu6HkBM NEiU7nAlT/BhCFm9zJoUNu5v5q6lFadz1CXihiWmegaSFoScLBFa/lJVYvbD1YMmEmEh g0VA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to 
:references:mime-version:content-transfer-encoding; bh=B9RVbg78ux75/+n4NUnNwRZeJdtfJ/ZCin3JTFeaEqU=; b=YSV5apf9eV6LRfemdc0WM2mlOQGVCR+jK9RkNA30UE+m2waR9YQfpSVzxyfptTZpkR VOlxbce5i4xOAhAJ4dR0QP1mY3yX9hOMkXYoGmgnHx1FIiAceriXRcR/jSTW0jhXEJJi mijOwB+UQof3DyHRYmO6TUPRe8zFfSS36hDfYq7ZJlod3YtOy+IyxgnAqmoh7FxopEfo nNsAiTSVUxEEcxfQRJwa0dQWGeDBqofdMuXRxRGYYydsGgEVPZ8181Gi6cjYGEMd4AZN MQpdh7hVSNJvn0RB+NzdrKz49pz8q+r+pKwVZOytxAv5DctgsP0nV5y/NGlq8nzruhYX JvIQ== X-Gm-Message-State: AOAM5307m0uxkZRONs8XVusunRikplKW+N8xN5UL4EsJRcE6qII0m26N yx4efZPh5H/e3Bo9dBD7Cik= X-Google-Smtp-Source: ABdhPJx/pnTfRG5VIKHQvFAKpwUpJG1uTKngcpb2xP8fYZlQWrAl43pmv9un5EYkA1S/Ixx0XR3GXQ== X-Received: by 2002:a63:4c4e:: with SMTP id m14mr10981834pgl.199.1602468577032; Sun, 11 Oct 2020 19:09:37 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id ng7sm3510804pjb.14.2020.10.11.19.09.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:36 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 20/22] drm/msm: drop struct_mutex in madvise path Date: Sun, 11 Oct 2020 19:09:47 -0700 Message-Id: <20201012020958.229288-21-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark The obj->lock is sufficient for what we need. This *does* have the implication that userspace can try to shoot themselves in the foot by racing madvise(DONTNEED) with submit. But the result will be about the same if they did madvise(DONTNEED) before the submit ioctl, ie. they might not get want they want if they race with shrinker. But iova fault handling is robust enough, and userspace is only shooting it's own foot. 
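Condensed, the madvise path after this change takes only the per-object lock; this is a sketch of the logic in the diff below, with the return-value handling paraphrased rather than quoted:

static int example_gem_madvise(struct drm_gem_object *obj, unsigned madv)
{
        struct msm_gem_object *msm_obj = to_msm_bo(obj);

        msm_gem_lock(obj);

        /* never resurrect an object the shrinker already purged: */
        if (msm_obj->madv != __MSM_MADV_PURGED)
                msm_obj->madv = madv;

        madv = msm_obj->madv;

        msm_gem_unlock(obj);

        /* tell the caller whether the backing pages are still retained */
        return (madv != __MSM_MADV_PURGED);
}
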
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_drv.c | 11 ++--------- drivers/gpu/drm/msm/msm_gem.c | 4 +--- drivers/gpu/drm/msm/msm_gem.h | 2 -- 3 files changed, 3 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 49e6daf30b42..f2d58fe25497 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -912,14 +912,9 @@ static int msm_ioctl_gem_madvise(struct drm_device *dev, void *data, return -EINVAL; } - ret = mutex_lock_interruptible(&dev->struct_mutex); - if (ret) - return ret; - obj = drm_gem_object_lookup(file, args->handle); if (!obj) { - ret = -ENOENT; - goto unlock; + return -ENOENT; } ret = msm_gem_madvise(obj, args->madv); @@ -928,10 +923,8 @@ static int msm_ioctl_gem_madvise(struct drm_device *dev, void *data, ret = 0; } - drm_gem_object_put_locked(obj); + drm_gem_object_put(obj); -unlock: - mutex_unlock(&dev->struct_mutex); return ret; } diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index b60eaf6266e2..8852c05775dc 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -658,8 +658,6 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) msm_gem_lock(obj); - WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); - if (msm_obj->madv != __MSM_MADV_PURGED) msm_obj->madv = madv; @@ -676,7 +674,6 @@ void msm_gem_purge(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - WARN_ON(!msm_gem_is_locked(obj)); WARN_ON(!is_purgeable(msm_obj)); WARN_ON(obj->import_attach); @@ -756,6 +753,7 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu) struct msm_drm_private *priv = obj->dev->dev_private; might_sleep(); + WARN_ON(!msm_gem_is_locked(obj)); WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED); if (!atomic_fetch_inc(&msm_obj->active_count)) { diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index c12fedf88e85..1f8f5f3d08c0 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -188,8 +188,6 @@ static inline bool is_active(struct msm_gem_object *msm_obj) static inline bool is_purgeable(struct msm_gem_object *msm_obj) { - WARN_ON(!msm_gem_is_locked(&msm_obj->base)); - WARN_ON(!mutex_is_locked(&msm_obj->base.dev->struct_mutex)); return (msm_obj->madv == MSM_MADV_DONTNEED) && msm_obj->sgt && !msm_obj->base.dma_buf && !msm_obj->base.import_attach; } From patchwork Mon Oct 12 02:09:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831435 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A99DC109B for ; Mon, 12 Oct 2020 02:09:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 85098208B8 for ; Mon, 12 Oct 2020 02:09:48 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="pOPzTKYV" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729854AbgJLCJr (ORCPT ); Sun, 11 Oct 2020 22:09:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37226 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729819AbgJLCJk (ORCPT ); Sun, 11 Oct 2020 22:09:40 -0400 Received: from 
mail-pl1-x641.google.com (mail-pl1-x641.google.com [IPv6:2607:f8b0:4864:20::641]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E20D8C0613CE; Sun, 11 Oct 2020 19:09:39 -0700 (PDT) Received: by mail-pl1-x641.google.com with SMTP id w11so22583pll.8; Sun, 11 Oct 2020 19:09:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=aM7b0pwaJ3Kic6t5ul6vT7+G6Sk2C9laYoEYMRioRSw=; b=pOPzTKYVTMfn/vTZiKhTePR9dfuJ5aJjHeqFD2Vq5MeZMfeAaXaIiIAUSaR69PTSf3 B/UuIWCh1WvISMnFM7zeo9WIWjat7foLonk1dYlDEXaJJHMaaN1b5X92o71U5tsZpvtf OYb8TsEP/Adh39mEODcqhSAIdzkRVbLQKpp9mx7lJkeNjkDOOKIAth7f1acrkG8biPCu J+TNtU+EBlMRHQt7bqpZhbGF6oggkiynt149iL5udC7va/u4C4wB6XjvD+i0kzspy+JZ UfiHf6IdOFfQytGRqkdTXPJr4/u17dl+ekA21Gxb/cFuyKrsfzhJ0pKkmJq7Pv0s7wlM KNUQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=aM7b0pwaJ3Kic6t5ul6vT7+G6Sk2C9laYoEYMRioRSw=; b=LJ8mxJuNGvawPVpEXQdS2xf1QubdBqAcWzwClcfyy4JwyDxSNawOSvuiwCY7roRYh8 Mgb7UE4jqH0irXXYVc1VcctfIj5Ci4TQTivqUMNGgpGBc/E7yosPu05FPlMh3dEM6c4m AFceDiF9BnanC0X35Df5OEsoibZe32WohnVXBo7lKqBf8JhEFS8cx0xXi++f/5EI1UYd vd3r8o3ntgEh5veXb+WpI0UtWr+/4M4KHFVVVjNnb8R3bWH3gVFPI7FknKCSTewuWXNd yeZOc+ifCdfpTxnIm/ZnEk/jnzVsgrO5AXLvV1IJR5D+fB0anAzmLvq/Af6QbB8FLlwq grfg== X-Gm-Message-State: AOAM532S785Hrx6HXRecBLVDfUo7iMo3HmhYefhqy2mphbq19rfQ0ow7 lpDaJLMc1UnIrN3pihGDlFo= X-Google-Smtp-Source: ABdhPJw2DDnmYqy+aSPVa6Gk57Xifw79VHiFSVfIjhfOuGVCMllrvyE2SNen/jQ96sfk3SPxk3ZOyQ== X-Received: by 2002:a17:902:b18f:b029:d2:1ec0:4161 with SMTP id s15-20020a170902b18fb02900d21ec04161mr21604424plr.58.1602468579414; Sun, 11 Oct 2020 19:09:39 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id cu5sm12696349pjb.49.2020.10.11.19.09.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:38 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 21/22] drm/msm: Drop struct_mutex in shrinker path Date: Sun, 11 Oct 2020 19:09:48 -0700 Message-Id: <20201012020958.229288-22-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that the inactive_list is protected by mm_lock, and everything else on per-obj basis is protected by obj->lock, we no longer depend on struct_mutex. 
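With mm_lock in place, the shrinker callbacks reduce to walking the inactive list under that lock. The sketch below assumes the usual purgeable-size accounting in the loop body (which the diff below elides) and is not the verbatim driver code:

static unsigned long
example_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
{
        struct msm_drm_private *priv =
                container_of(shrinker, struct msm_drm_private, shrinker);
        struct msm_gem_object *msm_obj;
        unsigned long count = 0;

        /* inactive_list is protected by the driver's own mm_lock, so
         * there is no struct_mutex / mutex_trylock_recursive() dance:
         */
        mutex_lock(&priv->mm_lock);
        list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) {
                if (is_purgeable(msm_obj))
                        count += msm_obj->base.size >> PAGE_SHIFT;
        }
        mutex_unlock(&priv->mm_lock);

        return count;
}
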
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 1 - drivers/gpu/drm/msm/msm_gem_shrinker.c | 54 -------------------------- 2 files changed, 55 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 8852c05775dc..ca00c3ccd413 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -673,7 +673,6 @@ void msm_gem_purge(struct drm_gem_object *obj) struct drm_device *dev = obj->dev; struct msm_gem_object *msm_obj = to_msm_bo(obj); - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); WARN_ON(!is_purgeable(msm_obj)); WARN_ON(obj->import_attach); diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index 6be073b8ca08..6f4b1355725f 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -8,48 +8,13 @@ #include "msm_gem.h" #include "msm_gpu_trace.h" -static bool msm_gem_shrinker_lock(struct drm_device *dev, bool *unlock) -{ - /* NOTE: we are *closer* to being able to get rid of - * mutex_trylock_recursive().. the msm_gem code itself does - * not need struct_mutex, although codepaths that can trigger - * shrinker are still called in code-paths that hold the - * struct_mutex. - * - * Also, msm_obj->madv is protected by struct_mutex. - * - * The next step is probably split out a seperate lock for - * protecting inactive_list, so that shrinker does not need - * struct_mutex. - */ - switch (mutex_trylock_recursive(&dev->struct_mutex)) { - case MUTEX_TRYLOCK_FAILED: - return false; - - case MUTEX_TRYLOCK_SUCCESS: - *unlock = true; - return true; - - case MUTEX_TRYLOCK_RECURSIVE: - *unlock = false; - return true; - } - - BUG(); -} - static unsigned long msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) { struct msm_drm_private *priv = container_of(shrinker, struct msm_drm_private, shrinker); - struct drm_device *dev = priv->dev; struct msm_gem_object *msm_obj; unsigned long count = 0; - bool unlock; - - if (!msm_gem_shrinker_lock(dev, &unlock)) - return 0; mutex_lock(&priv->mm_lock); @@ -63,9 +28,6 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) mutex_unlock(&priv->mm_lock); - if (unlock) - mutex_unlock(&dev->struct_mutex); - return count; } @@ -74,13 +36,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) { struct msm_drm_private *priv = container_of(shrinker, struct msm_drm_private, shrinker); - struct drm_device *dev = priv->dev; struct msm_gem_object *msm_obj; unsigned long freed = 0; - bool unlock; - - if (!msm_gem_shrinker_lock(dev, &unlock)) - return SHRINK_STOP; mutex_lock(&priv->mm_lock); @@ -98,9 +55,6 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) mutex_unlock(&priv->mm_lock); - if (unlock) - mutex_unlock(&dev->struct_mutex); - if (freed > 0) trace_msm_gem_purge(freed << PAGE_SHIFT); @@ -112,13 +66,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) { struct msm_drm_private *priv = container_of(nb, struct msm_drm_private, vmap_notifier); - struct drm_device *dev = priv->dev; struct msm_gem_object *msm_obj; unsigned unmapped = 0; - bool unlock; - - if (!msm_gem_shrinker_lock(dev, &unlock)) - return NOTIFY_DONE; mutex_lock(&priv->mm_lock); @@ -141,9 +90,6 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) mutex_unlock(&priv->mm_lock); - if (unlock) - mutex_unlock(&dev->struct_mutex); - *(unsigned long *)ptr += unmapped; if (unmapped > 0) From patchwork Mon Oct 
12 02:09:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11831439 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2A0921592 for ; Mon, 12 Oct 2020 02:10:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 07CC72078E for ; Mon, 12 Oct 2020 02:10:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="WmJqGmK6" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729782AbgJLCJz (ORCPT ); Sun, 11 Oct 2020 22:09:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:37232 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729808AbgJLCJn (ORCPT ); Sun, 11 Oct 2020 22:09:43 -0400 Received: from mail-pf1-x441.google.com (mail-pf1-x441.google.com [IPv6:2607:f8b0:4864:20::441]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 00372C0613D0; Sun, 11 Oct 2020 19:09:41 -0700 (PDT) Received: by mail-pf1-x441.google.com with SMTP id x13so9509002pfa.9; Sun, 11 Oct 2020 19:09:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=fCvUZrMqyJzQQ14uf8jQABI7a2ChVtRsN/nTnfqUZZE=; b=WmJqGmK6lYy+ITsDu6cyf835lQJCk+kswy9EShZfZTjxTyPKJZhY/Nq2SnAR8mrYgn voh2SgimOlbEHSM1sgAXI6MZ4iiR+oMtHhvjjMSJyOoX8oBLsJd491F6BTdkDXY7mJQY sCyh4HdSgNCZxsDX3zlIfjqJ3s7llujnW5q30K9IOJGjeX+7f40WVNZYYOcN+T6OuxP2 DGBhCy+FJBU19f7cVdussY0FQmhf4UoDc5xg+Ix6ifbWz/nwb6s/2fZePVULezLdQ1Qe GnF4QluzByU8B40DC+oMh5ynodygrx2lXvWCTq5Uo6wmei8o4u0UcHL6FXW7+PJvxuir dYLQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=fCvUZrMqyJzQQ14uf8jQABI7a2ChVtRsN/nTnfqUZZE=; b=sSihDWW9YJBv7OGC1zBggFcjX4IpCnSCnNmg5ZUDel4zjBua9QRj2AkR7wZE3qdc6B X+7Acak5YLyLNWOPhtrAf6iJxlF84z1Xdvy2ISq20iTVcSuOrGs1z7D/cko5hV62gZUG vkzbNKp1a3Nlwio1x1jjXpmi3KkE5HUi7kp8WIFd6RfSDDBTzpKI115B6lX3I//BM6fP 7Hf7Rzow5iFRlYlNlI/O04FV8I5JUUQt9wuE/A1759NMBTpISmGF22lUGiHnN8W3N9ph xU/yooT9N/IcG4ja2SF2lCGnSChMU1Z0BR4hL2K5uqW29UI0MQMtcSR6Zn06u0cAxGgI Wrog== X-Gm-Message-State: AOAM530W0AZFIQrUCZ9Q0T8wsIH+BNpXKQCOOMvjaWrjmrsYCtN+PG3k mcg0GrOvoY/Xvh5YHfLhYzI= X-Google-Smtp-Source: ABdhPJyvg9NYY3SSw3ECtdf96uoOyKdmHmuRhh+WcKRTDhDtAQOFmYqKak+UQH6tgAOrhK77M8YCMQ== X-Received: by 2002:a17:90a:7d16:: with SMTP id g22mr17496646pjl.135.1602468581497; Sun, 11 Oct 2020 19:09:41 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. 
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id q81sm18970519pfc.36.2020.10.11.19.09.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 11 Oct 2020 19:09:40 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , Rob Clark , Rob Clark , Sean Paul , David Airlie , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 22/22] drm/msm: Don't implicit-sync if only a single ring Date: Sun, 11 Oct 2020 19:09:49 -0700 Message-Id: <20201012020958.229288-23-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201012020958.229288-1-robdclark@gmail.com> References: <20201012020958.229288-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Any cross-device sync use-cases *must* use explicit sync. And if there is only a single ring (no-preemption), everything is FIFO order and there is no need to implicit-sync. Mesa should probably just always use MSM_SUBMIT_NO_IMPLICIT, as behavior is undefined when fences are not used to synchronize buffer usage across contexts (which is the only case where multiple different priority rings could come into play). Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem_submit.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 3151a0ca8904..c69803ea53c8 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -277,7 +277,7 @@ static int submit_lock_objects(struct msm_gem_submit *submit) return ret; } -static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit) +static int submit_fence_sync(struct msm_gem_submit *submit, bool implicit_sync) { int i, ret = 0; @@ -297,7 +297,7 @@ static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit) return ret; } - if (no_implicit) + if (!implicit_sync) continue; ret = msm_gem_sync_object(&msm_obj->base, submit->ring->fctx, @@ -768,7 +768,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (ret) goto out; - ret = submit_fence_sync(submit, !!(args->flags & MSM_SUBMIT_NO_IMPLICIT)); + ret = submit_fence_sync(submit, (gpu->nr_rings > 1) && + !(args->flags & MSM_SUBMIT_NO_IMPLICIT)); if (ret) goto out;
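
The new condition boils down to the following; the helper is purely illustrative (it is not part of the patch), written out to make the decision explicit:

/*
 * Implicit sync is only worth doing when there is more than one ring
 * (so submissions can be reordered across priorities) and userspace
 * has not opted out via MSM_SUBMIT_NO_IMPLICIT.
 */
static bool example_want_implicit_sync(struct msm_gpu *gpu, uint32_t flags)
{
        return (gpu->nr_rings > 1) && !(flags & MSM_SUBMIT_NO_IMPLICIT);
}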