From patchwork Sun Oct  4 19:21:45 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 11815927
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Rob Clark, Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
	linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
	freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 13/14] drm/msm: Drop struct_mutex in shrinker path
Date: Sun, 4 Oct 2020 12:21:45 -0700
Message-Id: <20201004192152.3298573-14-robdclark@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com>
References: <20201004192152.3298573-1-robdclark@gmail.com>

From: Rob Clark

Now that the inactive_list is protected by mm_lock, and everything else
that is per-obj is protected by obj->lock, we no longer depend on
struct_mutex.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c          |  1 -
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 54 --------------------------
 2 files changed, 55 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 9cdac4f7228c..e749a1c6f4e0 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -654,7 +654,6 @@ void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass)
 	struct drm_device *dev = obj->dev;
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
-	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
 	WARN_ON(!is_purgeable(msm_obj, subclass));
 	WARN_ON(obj->import_attach);
 
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 39a1b5327267..2c7bda1e2bf9 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -8,48 +8,13 @@
 #include "msm_gem.h"
 #include "msm_gpu_trace.h"
 
-static bool msm_gem_shrinker_lock(struct drm_device *dev, bool *unlock)
-{
-	/* NOTE: we are *closer* to being able to get rid of
-	 * mutex_trylock_recursive().. the msm_gem code itself does
-	 * not need struct_mutex, although codepaths that can trigger
-	 * shrinker are still called in code-paths that hold the
-	 * struct_mutex.
-	 *
-	 * Also, msm_obj->madv is protected by struct_mutex.
-	 *
-	 * The next step is probably split out a seperate lock for
-	 * protecting inactive_list, so that shrinker does not need
-	 * struct_mutex.
-	 */
-	switch (mutex_trylock_recursive(&dev->struct_mutex)) {
-	case MUTEX_TRYLOCK_FAILED:
-		return false;
-
-	case MUTEX_TRYLOCK_SUCCESS:
-		*unlock = true;
-		return true;
-
-	case MUTEX_TRYLOCK_RECURSIVE:
-		*unlock = false;
-		return true;
-	}
-
-	BUG();
-}
-
 static unsigned long
 msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 {
 	struct msm_drm_private *priv =
 		container_of(shrinker, struct msm_drm_private, shrinker);
-	struct drm_device *dev = priv->dev;
 	struct msm_gem_object *msm_obj;
 	unsigned long count = 0;
-	bool unlock;
-
-	if (!msm_gem_shrinker_lock(dev, &unlock))
-		return 0;
 
 	mutex_lock(&priv->mm_lock);
 
@@ -60,9 +25,6 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 
 	mutex_unlock(&priv->mm_lock);
 
-	if (unlock)
-		mutex_unlock(&dev->struct_mutex);
-
 	return count;
 }
 
@@ -71,13 +33,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 {
 	struct msm_drm_private *priv =
 		container_of(shrinker, struct msm_drm_private, shrinker);
-	struct drm_device *dev = priv->dev;
 	struct msm_gem_object *msm_obj;
 	unsigned long freed = 0;
-	bool unlock;
-
-	if (!msm_gem_shrinker_lock(dev, &unlock))
-		return SHRINK_STOP;
 
 	mutex_lock(&priv->mm_lock);
 
@@ -92,9 +49,6 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 
 	mutex_unlock(&priv->mm_lock);
 
-	if (unlock)
-		mutex_unlock(&dev->struct_mutex);
-
 	if (freed > 0)
 		trace_msm_gem_purge(freed << PAGE_SHIFT);
 
@@ -106,13 +60,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 {
 	struct msm_drm_private *priv =
 		container_of(nb, struct msm_drm_private, vmap_notifier);
-	struct drm_device *dev = priv->dev;
 	struct msm_gem_object *msm_obj;
 	unsigned unmapped = 0;
-	bool unlock;
-
-	if (!msm_gem_shrinker_lock(dev, &unlock))
-		return NOTIFY_DONE;
 
 	mutex_lock(&priv->mm_lock);
 
@@ -130,9 +79,6 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 
 	mutex_unlock(&priv->mm_lock);
 
-	if (unlock)
-		mutex_unlock(&dev->struct_mutex);
-
 	*(unsigned long *)ptr += unmapped;
 
 	if (unmapped > 0)
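
For readers following the locking change: after this patch the shrinker
paths serialize only on priv->mm_lock while walking inactive_list, and any
per-object state they consult is protected by obj->lock rather than
struct_mutex. Below is a minimal sketch of what the count path looks like
with the lock-wrapper removed. It is assembled from the context lines of
the hunks above, not copied from the tree; the list_for_each_entry() walk,
the mm_list link, the is_purgeable() call, and the size accounting are
assumptions drawn from the surrounding msm_gem code, so treat the exact
loop body as illustrative only.

/* Sketch, not the verbatim post-patch function. */
static unsigned long
msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
{
	struct msm_drm_private *priv =
		container_of(shrinker, struct msm_drm_private, shrinker);
	struct msm_gem_object *msm_obj;
	unsigned long count = 0;

	/* Only the list lock is needed to walk inactive_list now;
	 * struct_mutex is never taken in the shrinker path.
	 */
	mutex_lock(&priv->mm_lock);

	/* Loop body is assumed from surrounding msm_gem code: count the
	 * pages of objects that are currently purgeable.  Per-object
	 * madv state is protected by obj->lock, not struct_mutex.
	 */
	list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) {
		if (is_purgeable(msm_obj))
			count += msm_obj->base.size >> PAGE_SHIFT;
	}

	mutex_unlock(&priv->mm_lock);

	return count;
}

The scan and vmap notifier paths follow the same shape: take mm_lock, walk
the list, drop mm_lock, and report what was freed or unmapped.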