From patchwork Tue Feb 27 03:02:55 2024
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 13573220
From: Chengming Zhou
Date: Tue, 27 Feb 2024 03:02:55 +0000
Subject: [PATCH 2/2] mm/zsmalloc: remove the deferred free mechanism
MIME-Version: 1.0
Message-Id: <20240226-zsmalloc-zspage-rcu-v1-2-456b0ef1a89d@bytedance.com>
References: <20240226-zsmalloc-zspage-rcu-v1-0-456b0ef1a89d@bytedance.com>
In-Reply-To: <20240226-zsmalloc-zspage-rcu-v1-0-456b0ef1a89d@bytedance.com>
To: yosryahmed@google.com, Sergey Senozhatsky, hannes@cmpxchg.org, nphamcs@gmail.com, Andrew Morton, Minchan Kim
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Chengming Zhou
Since the only user of kick_deferred_free() has gone, remove all the code related to the deferred free mechanism.
Signed-off-by: Chengming Zhou
---
 mm/zsmalloc.c | 109 ----------------------------------------------------------
 1 file changed, 109 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b153f2e5fc0f..1a044690b389 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -232,9 +232,6 @@ struct zs_pool {
 
 #ifdef CONFIG_ZSMALLOC_STAT
 	struct dentry *stat_dentry;
-#endif
-#ifdef CONFIG_COMPACTION
-	struct work_struct free_work;
 #endif
 	spinlock_t lock;
 	atomic_t compaction_in_progress;
@@ -281,12 +278,8 @@ static void migrate_write_lock(struct zspage *zspage);
 static void migrate_write_unlock(struct zspage *zspage);
 
 #ifdef CONFIG_COMPACTION
-static void kick_deferred_free(struct zs_pool *pool);
-static void init_deferred_free(struct zs_pool *pool);
 static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage);
 #else
-static void kick_deferred_free(struct zs_pool *pool) {}
-static void init_deferred_free(struct zs_pool *pool) {}
 static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage) {}
 #endif
 
@@ -1632,50 +1625,6 @@ static int putback_zspage(struct size_class *class, struct zspage *zspage)
 	return fullness;
 }
 
-#ifdef CONFIG_COMPACTION
-/*
- * To prevent zspage destroy during migration, zspage freeing should
- * hold locks of all pages in the zspage.
- */
-static void lock_zspage(struct zspage *zspage)
-{
-	struct page *curr_page, *page;
-
-	/*
-	 * Pages we haven't locked yet can be migrated off the list while we're
-	 * trying to lock them, so we need to be careful and only attempt to
-	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
-	 * may no longer belong to the zspage. This means that we may wait for
-	 * the wrong page to unlock, so we must take a reference to the page
-	 * prior to waiting for it to unlock outside migrate_read_lock().
-	 */
-	while (1) {
-		migrate_read_lock(zspage);
-		page = get_first_page(zspage);
-		if (trylock_page(page))
-			break;
-		get_page(page);
-		migrate_read_unlock(zspage);
-		wait_on_page_locked(page);
-		put_page(page);
-	}
-
-	curr_page = page;
-	while ((page = get_next_page(curr_page))) {
-		if (trylock_page(page)) {
-			curr_page = page;
-		} else {
-			get_page(page);
-			migrate_read_unlock(zspage);
-			wait_on_page_locked(page);
-			put_page(page);
-			migrate_read_lock(zspage);
-		}
-	}
-	migrate_read_unlock(zspage);
-}
-#endif /* CONFIG_COMPACTION */
-
 static void migrate_lock_init(struct zspage *zspage)
 {
 	rwlock_init(&zspage->lock);
@@ -1730,10 +1679,6 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 
 static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
-	/*
-	 * Page is locked so zspage couldn't be destroyed. For detail, look at
-	 * lock_zspage in free_zspage.
-	 */
 	VM_BUG_ON_PAGE(PageIsolated(page), page);
 
 	return true;
@@ -1848,56 +1793,6 @@ static const struct movable_operations zsmalloc_mops = {
 	.putback_page = zs_page_putback,
 };
 
-/*
- * Caller should hold page_lock of all pages in the zspage
- * In here, we cannot use zspage meta data.
- */
-static void async_free_zspage(struct work_struct *work)
-{
-	int i;
-	struct size_class *class;
-	struct zspage *zspage, *tmp;
-	LIST_HEAD(free_pages);
-	struct zs_pool *pool = container_of(work, struct zs_pool,
-					free_work);
-
-	for (i = 0; i < ZS_SIZE_CLASSES; i++) {
-		class = pool->size_class[i];
-		if (class->index != i)
-			continue;
-
-		spin_lock(&pool->lock);
-		list_splice_init(&class->fullness_list[ZS_INUSE_RATIO_0],
-				 &free_pages);
-		spin_unlock(&pool->lock);
-	}
-
-	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
-		list_del(&zspage->list);
-		lock_zspage(zspage);
-
-		spin_lock(&pool->lock);
-		class = zspage_class(pool, zspage);
-		__free_zspage(pool, class, zspage);
-		spin_unlock(&pool->lock);
-	}
-};
-
-static void kick_deferred_free(struct zs_pool *pool)
-{
-	schedule_work(&pool->free_work);
-}
-
-static void zs_flush_migration(struct zs_pool *pool)
-{
-	flush_work(&pool->free_work);
-}
-
-static void init_deferred_free(struct zs_pool *pool)
-{
-	INIT_WORK(&pool->free_work, async_free_zspage);
-}
-
 static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage)
 {
 	struct page *page = get_first_page(zspage);
@@ -1908,8 +1803,6 @@ static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage)
 		unlock_page(page);
 	} while ((page = get_next_page(page)) != NULL);
 }
-#else
-static inline void zs_flush_migration(struct zs_pool *pool) { }
 #endif
 
 /*
@@ -2121,7 +2014,6 @@ struct zs_pool *zs_create_pool(const char *name)
 	if (!pool)
 		return NULL;
 
-	init_deferred_free(pool);
 	spin_lock_init(&pool->lock);
 	atomic_set(&pool->compaction_in_progress, 0);
@@ -2229,7 +2121,6 @@ void zs_destroy_pool(struct zs_pool *pool)
 	int i;
 
 	zs_unregister_shrinker(pool);
-	zs_flush_migration(pool);
 	zs_pool_stat_destroy(pool);
 
 	for (i = 0; i < ZS_SIZE_CLASSES; i++) {