From patchwork Thu Jun 22 08:53:28 2023
X-Patchwork-Submitter: Qi Zheng <zhengqi.arch@bytedance.com>
X-Patchwork-Id: 13288654
From: Qi Zheng <zhengqi.arch@bytedance.com>
To: akpm@linux-foundation.org, david@fromorbit.com, tkhai@ya.ru, vbabka@suse.cz,
    roman.gushchin@linux.dev, djwong@kernel.org, brauner@kernel.org,
    paulmck@kernel.org, tytso@mit.edu
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
    linux-arm-msm@vger.kernel.org, dm-devel@redhat.com,
    linux-raid@vger.kernel.org, linux-bcache@vger.kernel.org,
    virtualization@lists.linux-foundation.org, linux-fsdevel@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-nfs@vger.kernel.org,
    linux-xfs@vger.kernel.org, linux-btrfs@vger.kernel.org,
    Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [PATCH 22/29] drm/ttm: introduce pool_shrink_rwsem
Date: Thu, 22 Jun 2023 16:53:28 +0800
Message-Id: <20230622085335.77010-23-zhengqi.arch@bytedance.com>
In-Reply-To: <20230622085335.77010-1-zhengqi.arch@bytedance.com>
References: <20230622085335.77010-1-zhengqi.arch@bytedance.com>
Currently, synchronize_shrinkers() is only used by the TTM pool. It only
requires that no shrinkers run in parallel.

Once the lockless slab shrink is implemented with the RCU+refcount method,
we can no longer use shrinker_rwsem or synchronize_rcu() to guarantee that
all shrinker invocations have seen an update before memory is freed. So
introduce a private pool_shrink_rwsem and use it to implement a TTM-local
synchronize_shrinkers() that serves the same purpose.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 drivers/gpu/drm/ttm/ttm_pool.c | 15 +++++++++++++++
 include/linux/shrinker.h       |  1 -
 mm/vmscan.c                    | 15 ---------------
 3 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index cddb9151d20f..713b1c0a70e1 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -74,6 +74,7 @@ static struct ttm_pool_type global_dma32_uncached[MAX_ORDER + 1];
 static spinlock_t shrinker_lock;
 static struct list_head shrinker_list;
 static struct shrinker mm_shrinker;
+static DECLARE_RWSEM(pool_shrink_rwsem);
 
 /* Allocate pages of size 1 << order with the given gfp_flags */
 static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags,
@@ -317,6 +318,7 @@ static unsigned int ttm_pool_shrink(void)
 	unsigned int num_pages;
 	struct page *p;
 
+	down_read(&pool_shrink_rwsem);
 	spin_lock(&shrinker_lock);
 	pt = list_first_entry(&shrinker_list, typeof(*pt), shrinker_list);
 	list_move_tail(&pt->shrinker_list, &shrinker_list);
@@ -329,6 +331,7 @@ static unsigned int ttm_pool_shrink(void)
 	} else {
 		num_pages = 0;
 	}
+	up_read(&pool_shrink_rwsem);
 
 	return num_pages;
 }
@@ -572,6 +575,18 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
 }
 EXPORT_SYMBOL(ttm_pool_init);
 
+/**
+ * synchronize_shrinkers - Wait for all running shrinkers to complete.
+ *
+ * This is useful to guarantee that all shrinker invocations have seen an
+ * update, before freeing memory, similar to rcu.
+ */
+static void synchronize_shrinkers(void)
+{
+	down_write(&pool_shrink_rwsem);
+	up_write(&pool_shrink_rwsem);
+}
+
 /**
  * ttm_pool_fini - Cleanup a pool
  *
diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 8e9ba6fa3fcc..4094e4c44e80 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -105,7 +105,6 @@ extern int __printf(2, 3) register_shrinker(struct shrinker *shrinker,
 					    const char *fmt, ...);
 extern void unregister_shrinker(struct shrinker *shrinker);
 extern void free_prealloced_shrinker(struct shrinker *shrinker);
-extern void synchronize_shrinkers(void);
 
 typedef unsigned long (*count_objects_cb)(struct shrinker *s,
 					  struct shrink_control *sc);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 64ff598fbad9..3a8d50ad6ff6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -844,21 +844,6 @@ void unregister_and_free_shrinker(struct shrinker *shrinker)
 }
 EXPORT_SYMBOL(unregister_and_free_shrinker);
 
-/**
- * synchronize_shrinkers - Wait for all running shrinkers to complete.
- *
- * This is equivalent to calling unregister_shrink() and register_shrinker(),
- * but atomically and with less overhead. This is useful to guarantee that all
- * shrinker invocations have seen an update, before freeing memory, similar to
- * rcu.
- */
-void synchronize_shrinkers(void)
-{
-	down_write(&shrinker_rwsem);
-	up_write(&shrinker_rwsem);
-}
-EXPORT_SYMBOL(synchronize_shrinkers);
-
 #define SHRINK_BATCH 128
 
 static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
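
The empty down_write()/up_write() pair above acts as a barrier: the write
lock cannot be acquired until every reader currently holding
pool_shrink_rwsem inside ttm_pool_shrink() has released it, so the new
TTM-local synchronize_shrinkers() returns only after all in-flight shrink
passes have finished. Below is a minimal, self-contained userspace sketch
of the same pattern, using a POSIX rwlock in place of the kernel
rw_semaphore; the names pool_shrink_lock, shrink_one() and
wait_for_shrinkers() are illustrative only and are not part of this patch.

	#include <pthread.h>
	#include <stdio.h>

	static pthread_rwlock_t pool_shrink_lock = PTHREAD_RWLOCK_INITIALIZER;

	/* Reader side: each shrink pass holds the lock for reading while it runs. */
	static void *shrink_one(void *arg)
	{
		(void)arg;
		pthread_rwlock_rdlock(&pool_shrink_lock);
		printf("shrink pass running under the read lock\n");
		pthread_rwlock_unlock(&pool_shrink_lock);
		return NULL;
	}

	/*
	 * Writer side: acquiring and immediately releasing the write lock cannot
	 * complete until every reader that already holds the lock has dropped it,
	 * so it behaves as a "wait for all running shrinkers" barrier.
	 */
	static void wait_for_shrinkers(void)
	{
		pthread_rwlock_wrlock(&pool_shrink_lock);
		pthread_rwlock_unlock(&pool_shrink_lock);
	}

	int main(void)
	{
		pthread_t t;

		pthread_create(&t, NULL, shrink_one, NULL);
		wait_for_shrinkers();	/* waits out any shrink pass already under way */
		pthread_join(t, NULL);
		return 0;
	}

The sketch can be built with something like: cc -pthread sketch.c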