From patchwork Mon Sep 11 09:25:17 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 13379035
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru, vbabka@suse.cz,
    roman.gushchin@linux.dev, djwong@kernel.org, brauner@kernel.org,
    paulmck@kernel.org, tytso@mit.edu, steven.price@arm.com, cel@kernel.org,
    senozhatsky@chromium.org, yujie.liu@intel.com, gregkh@linuxfoundation.org,
    muchun.song@linux.dev, joel@joelfernandes.org, christian.koenig@amd.com,
    daniel@ffwll.ch
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    dri-devel@lists.freedesktop.org, linux-fsdevel@vger.kernel.org,
    Qi Zheng, Muchun Song, Daniel Vetter
Subject: [PATCH v4 4/4] drm/ttm: introduce pool_shrink_rwsem
Date: Mon, 11 Sep 2023 17:25:17 +0800
Message-Id: <20230911092517.64141-5-zhengqi.arch@bytedance.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <20230911092517.64141-1-zhengqi.arch@bytedance.com>
References: <20230911092517.64141-1-zhengqi.arch@bytedance.com>

Currently, synchronize_shrinkers() is only used by the TTM pool. It only
requires that no shrinkers run in parallel.

After we switch to the RCU+refcount method to implement lockless slab
shrinking, we can no longer use shrinker_rwsem or synchronize_rcu() to
guarantee that all shrinker invocations have seen an update before memory
is freed. So introduce a new, TTM-private pool_shrink_rwsem and use it to
implement ttm_pool_synchronize_shrinkers(), which achieves the same purpose.
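
As an aside for readers unfamiliar with the trick, the guarantee comes from a
plain rwsem barrier pattern: every shrink invocation holds the semaphore for
read, so taking it for write and immediately dropping it cannot complete until
all in-flight invocations have finished. A minimal sketch of the idea follows
(the example_* names are hypothetical; the real code is in the ttm_pool.c
hunks below):

#include <linux/rwsem.h>

static DECLARE_RWSEM(example_shrink_rwsem);

/* Shrink path: hold the rwsem for read across the whole invocation. */
static unsigned int example_shrink(void)
{
	unsigned int num_pages;

	down_read(&example_shrink_rwsem);
	num_pages = 0;	/* ... free pages from the pool here ... */
	up_read(&example_shrink_rwsem);

	return num_pages;
}

/*
 * Waiter: acquiring the rwsem for write succeeds only after every
 * in-flight reader has released it, so an empty write/unlock pair
 * waits for all running shrink invocations to complete.
 */
static void example_synchronize_shrinkers(void)
{
	down_write(&example_shrink_rwsem);
	up_write(&example_shrink_rwsem);
}

This trades the global shrinker_rwsem, which the lockless-shrink series can no
longer rely on, for a lock that is private to TTM, so the barrier only
interacts with TTM's own shrinker.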
Signed-off-by: Qi Zheng
Reviewed-by: Muchun Song
Reviewed-by: Christian König
Acked-by: Daniel Vetter
---
 drivers/gpu/drm/ttm/ttm_pool.c | 17 ++++++++++++++++-
 include/linux/shrinker.h       |  1 -
 mm/shrinker.c                  | 15 ---------------
 3 files changed, 16 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index cddb9151d20f..648ca70403a7 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -74,6 +74,7 @@ static struct ttm_pool_type global_dma32_uncached[MAX_ORDER + 1];
 static spinlock_t shrinker_lock;
 static struct list_head shrinker_list;
 static struct shrinker mm_shrinker;
+static DECLARE_RWSEM(pool_shrink_rwsem);
 
 /* Allocate pages of size 1 << order with the given gfp_flags */
 static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
@@ -317,6 +318,7 @@ static unsigned int ttm_pool_shrink(void)
 	unsigned int num_pages;
 	struct page *p;
 
+	down_read(&pool_shrink_rwsem);
 	spin_lock(&shrinker_lock);
 	pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list);
 	list_move_tail(&pt->shrinker_list, &shrinker_list);
@@ -329,6 +331,7 @@ static unsigned int ttm_pool_shrink(void)
 	} else {
 		num_pages = 0;
 	}
+	up_read(&pool_shrink_rwsem);
 
 	return num_pages;
 }
@@ -572,6 +575,18 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
 }
 EXPORT_SYMBOL(ttm_pool_init);
 
+/**
+ * ttm_pool_synchronize_shrinkers - Wait for all running shrinkers to complete.
+ *
+ * This is useful to guarantee that all shrinker invocations have seen an
+ * update, before freeing memory, similar to rcu.
+ */
+static void ttm_pool_synchronize_shrinkers(void)
+{
+	down_write(&pool_shrink_rwsem);
+	up_write(&pool_shrink_rwsem);
+}
+
 /**
  * ttm_pool_fini - Cleanup a pool
  *
@@ -593,7 +608,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
 	/* We removed the pool types from the LRU, but we need to also make sure
 	 * that no shrinker is concurrently freeing pages from the pool.
 	 */
-	synchronize_shrinkers();
+	ttm_pool_synchronize_shrinkers();
 }
 EXPORT_SYMBOL(ttm_pool_fini);
 
diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 8dc15aa37410..6b5843c3b827 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -103,7 +103,6 @@ extern int __printf(2, 3) register_shrinker(struct shrinker *shrinker,
 					    const char *fmt, ...);
 extern void unregister_shrinker(struct shrinker *shrinker);
 extern void free_prealloced_shrinker(struct shrinker *shrinker);
-extern void synchronize_shrinkers(void);
 
 #ifdef CONFIG_SHRINKER_DEBUG
 extern int __printf(2, 3) shrinker_debugfs_rename(struct shrinker *shrinker,
diff --git a/mm/shrinker.c b/mm/shrinker.c
index 043c87ccfab4..a16cd448b924 100644
--- a/mm/shrinker.c
+++ b/mm/shrinker.c
@@ -692,18 +692,3 @@ void unregister_shrinker(struct shrinker *shrinker)
 	shrinker->nr_deferred = NULL;
 }
 EXPORT_SYMBOL(unregister_shrinker);
-
-/**
- * synchronize_shrinkers - Wait for all running shrinkers to complete.
- *
- * This is equivalent to calling unregister_shrink() and register_shrinker(),
- * but atomically and with less overhead. This is useful to guarantee that all
- * shrinker invocations have seen an update, before freeing memory, similar to
- * rcu.
- */
-void synchronize_shrinkers(void)
-{
-	down_write(&shrinker_rwsem);
-	up_write(&shrinker_rwsem);
-}
-EXPORT_SYMBOL(synchronize_shrinkers);