From patchwork Wed Nov 10 18:54:31 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12612583
From: Minchan Kim
To: Andrew Morton
Cc: Sergey Senozhatsky, linux-mm, Minchan Kim
Subject: [PATCH 6/8] zsmalloc: remove zspage isolation for migration
Date: Wed, 10 Nov 2021 10:54:31 -0800
Message-Id: <20211110185433.1981097-7-minchan@kernel.org>
In-Reply-To: <20211110185433.1981097-1-minchan@kernel.org>
References: <20211110185433.1981097-1-minchan@kernel.org>

zspage isolation for migration introduced additional exceptions to deal
with, because isolation removed the zspage from its class list. The
reason I took the zspage off the class list was to prevent a race
between obj_malloc and page migration, by blocking any further object
allocation from the zspage under migration. However, that could not
prevent objects from being freed from the zspage, so corner-case
handling was still required. This patch removes the whole mess. We are
now safe because class->lock and zspage->lock together prevent the
race.
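Not part of the patch, only an aid for review: a small userspace model
of the locking claim above. Every name in it (class_model,
zspage_model, obj_alloc_free, map_object, migrate_page) is invented for
illustration; only the locking pattern mirrors what this series relies
on in mm/zsmalloc.c: class->lock serializes object allocation/free
against migration, while zspage->lock is taken shared by paths touching
zspage contents and exclusive by the migration path. Builds with
cc -pthread.

/* Userspace sketch only -- not kernel code. The mutex stands in for
 * class->lock, the rwlock for zspage->lock (migrate_read/write_lock). */
#include <pthread.h>
#include <stdio.h>

struct class_model {
	pthread_mutex_t lock;		/* models class->lock */
};

struct zspage_model {
	pthread_rwlock_t lock;		/* models zspage->lock */
	int inuse;			/* models the zspage inuse count */
};

/* obj_malloc()/obj_free() style path: class->lock alone is enough,
 * because migration takes it as well. */
static void obj_alloc_free(struct class_model *class,
			   struct zspage_model *zspage, int delta)
{
	pthread_mutex_lock(&class->lock);
	zspage->inuse += delta;
	pthread_mutex_unlock(&class->lock);
}

/* zs_map_object() style path: a shared hold on zspage->lock keeps the
 * pages from being swapped out from under the mapping. */
static void map_object(struct zspage_model *zspage)
{
	pthread_rwlock_rdlock(&zspage->lock);
	/* read or write the mapped object here */
	pthread_rwlock_unlock(&zspage->lock);
}

/* zs_page_migrate() style path: holding both locks excludes object
 * alloc/free and mapped access, so the zspage no longer has to be
 * pulled off the class list while a page is replaced. */
static void migrate_page(struct class_model *class,
			 struct zspage_model *zspage)
{
	pthread_mutex_lock(&class->lock);
	pthread_rwlock_wrlock(&zspage->lock);
	/* safe to copy objects and replace the sub-page here */
	pthread_rwlock_unlock(&zspage->lock);
	pthread_mutex_unlock(&class->lock);
}

int main(void)
{
	struct class_model class = { .lock = PTHREAD_MUTEX_INITIALIZER };
	struct zspage_model zspage = {
		.lock = PTHREAD_RWLOCK_INITIALIZER,
		.inuse = 0,
	};

	obj_alloc_free(&class, &zspage, 1);	/* allocate */
	map_object(&zspage);			/* access */
	migrate_page(&class, &zspage);		/* migrate, fully serialized */
	obj_alloc_free(&class, &zspage, -1);	/* free */
	printf("inuse = %d\n", zspage.inuse);
	return 0;
}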
Signed-off-by: Minchan Kim
---
 mm/zsmalloc.c | 156 +++-----------------------------------------------
 1 file changed, 8 insertions(+), 148 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index da15d98a6e29..b8b098be92fa 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -254,10 +254,6 @@ struct zs_pool {
 #ifdef CONFIG_COMPACTION
 	struct inode *inode;
 	struct work_struct free_work;
-	/* A wait queue for when migration races with async_free_zspage() */
-	struct wait_queue_head migration_wait;
-	atomic_long_t isolated_pages;
-	bool destroying;
 #endif
 };
 
@@ -454,11 +450,6 @@ MODULE_ALIAS("zpool-zsmalloc");
 /* per-cpu VM mapping areas for zspage accesses that cross page boundaries */
 static DEFINE_PER_CPU(struct mapping_area, zs_map_area);
 
-static bool is_zspage_isolated(struct zspage *zspage)
-{
-	return zspage->isolated;
-}
-
 static __maybe_unused int is_first_page(struct page *page)
 {
 	return PagePrivate(page);
@@ -744,7 +735,6 @@ static void remove_zspage(struct size_class *class,
 				enum fullness_group fullness)
 {
 	VM_BUG_ON(list_empty(&class->fullness_list[fullness]));
-	VM_BUG_ON(is_zspage_isolated(zspage));
 
 	list_del_init(&zspage->list);
 	class_stat_dec(class, fullness, 1);
@@ -770,13 +760,9 @@ static enum fullness_group fix_fullness_group(struct size_class *class,
 	if (newfg == currfg)
 		goto out;
 
-	if (!is_zspage_isolated(zspage)) {
-		remove_zspage(class, zspage, currfg);
-		insert_zspage(class, zspage, newfg);
-	}
-
+	remove_zspage(class, zspage, currfg);
+	insert_zspage(class, zspage, newfg);
 	set_zspage_mapping(zspage, class_idx, newfg);
-
 out:
 	return newfg;
 }
@@ -1511,7 +1497,6 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	unsigned long obj;
 	struct size_class *class;
 	enum fullness_group fullness;
-	bool isolated;
 
 	if (unlikely(!handle))
 		return;
@@ -1533,11 +1518,9 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 		goto out;
 	}
 
-	isolated = is_zspage_isolated(zspage);
 	migrate_read_unlock(zspage);
 	/* If zspage is isolated, zs_page_putback will free the zspage */
-	if (likely(!isolated))
-		free_zspage(pool, class, zspage);
+	free_zspage(pool, class, zspage);
 out:
 
 	spin_unlock(&class->lock);
@@ -1718,7 +1701,6 @@ static struct zspage *isolate_zspage(struct size_class *class, bool source)
 		zspage = list_first_entry_or_null(&class->fullness_list[fg[i]],
 							struct zspage, list);
 		if (zspage) {
-			VM_BUG_ON(is_zspage_isolated(zspage));
 			remove_zspage(class, zspage, fg[i]);
 			return zspage;
 		}
@@ -1739,8 +1721,6 @@ static enum fullness_group putback_zspage(struct size_class *class,
 {
 	enum fullness_group fullness;
 
-	VM_BUG_ON(is_zspage_isolated(zspage));
-
 	fullness = get_fullness_group(class, zspage);
 	insert_zspage(class, zspage, fullness);
 	set_zspage_mapping(zspage, class->index, fullness);
@@ -1822,34 +1802,10 @@ static void inc_zspage_isolation(struct zspage *zspage)
 
 static void dec_zspage_isolation(struct zspage *zspage)
 {
+	VM_BUG_ON(zspage->isolated == 0);
 	zspage->isolated--;
 }
 
-static void putback_zspage_deferred(struct zs_pool *pool,
-				struct size_class *class,
-				struct zspage *zspage)
-{
-	enum fullness_group fg;
-
-	fg = putback_zspage(class, zspage);
-	if (fg == ZS_EMPTY)
-		schedule_work(&pool->free_work);
-
-}
-
-static inline void zs_pool_dec_isolated(struct zs_pool *pool)
-{
-	VM_BUG_ON(atomic_long_read(&pool->isolated_pages) <= 0);
-	atomic_long_dec(&pool->isolated_pages);
-	/*
-	 * There's no possibility of racing, since wait_for_isolated_drain()
-	 * checks the isolated count under &class->lock after enqueuing
-	 * on migration_wait.
-	 */
-	if (atomic_long_read(&pool->isolated_pages) == 0 && pool->destroying)
-		wake_up_all(&pool->migration_wait);
-}
-
 static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 				struct page *newpage, struct page *oldpage)
 {
@@ -1875,10 +1831,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 
 static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
-	struct zs_pool *pool;
-	struct size_class *class;
 	struct zspage *zspage;
-	struct address_space *mapping;
 
 	/*
 	 * Page is locked so zspage couldn't be destroyed. For detail, look at
@@ -1888,39 +1841,9 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 	VM_BUG_ON_PAGE(PageIsolated(page), page);
 
 	zspage = get_zspage(page);
-
-	mapping = page_mapping(page);
-	pool = mapping->private_data;
-
-	class = zspage_class(pool, zspage);
-
-	spin_lock(&class->lock);
-	if (get_zspage_inuse(zspage) == 0) {
-		spin_unlock(&class->lock);
-		return false;
-	}
-
-	/* zspage is isolated for object migration */
-	if (list_empty(&zspage->list) && !is_zspage_isolated(zspage)) {
-		spin_unlock(&class->lock);
-		return false;
-	}
-
-	/*
-	 * If this is first time isolation for the zspage, isolate zspage from
-	 * size_class to prevent further object allocation from the zspage.
-	 */
-	if (!list_empty(&zspage->list) && !is_zspage_isolated(zspage)) {
-		enum fullness_group fullness;
-		unsigned int class_idx;
-
-		get_zspage_mapping(zspage, &class_idx, &fullness);
-		atomic_long_inc(&pool->isolated_pages);
-		remove_zspage(class, zspage, fullness);
-	}
-
+	migrate_write_lock(zspage);
 	inc_zspage_isolation(zspage);
-	spin_unlock(&class->lock);
+	migrate_write_unlock(zspage);
 
 	return true;
 }
@@ -2003,21 +1926,6 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 	dec_zspage_isolation(zspage);
 
-	/*
-	 * Page migration is done so let's putback isolated zspage to
-	 * the list if @page is final isolated subpage in the zspage.
-	 */
-	if (!is_zspage_isolated(zspage)) {
-		/*
-		 * We cannot race with zs_destroy_pool() here because we wait
-		 * for isolation to hit zero before we start destroying.
-		 * Also, we ensure that everyone can see pool->destroying before
-		 * we start waiting.
-		 */
-		putback_zspage_deferred(pool, class, zspage);
-		zs_pool_dec_isolated(pool);
-	}
-
 	if (page_zone(newpage) != page_zone(page)) {
 		dec_zone_page_state(page, NR_ZSPAGES);
 		inc_zone_page_state(newpage, NR_ZSPAGES);
@@ -2045,30 +1953,15 @@ static int zs_page_migrate(struct address_space *mapping, struct page *newpage,
 
 static void zs_page_putback(struct page *page)
 {
-	struct zs_pool *pool;
-	struct size_class *class;
-	struct address_space *mapping;
 	struct zspage *zspage;
 
 	VM_BUG_ON_PAGE(!PageMovable(page), page);
 	VM_BUG_ON_PAGE(!PageIsolated(page), page);
 
 	zspage = get_zspage(page);
-	mapping = page_mapping(page);
-	pool = mapping->private_data;
-	class = zspage_class(pool, zspage);
-
-	spin_lock(&class->lock);
+	migrate_write_lock(zspage);
 	dec_zspage_isolation(zspage);
-	if (!is_zspage_isolated(zspage)) {
-		/*
-		 * Due to page_lock, we cannot free zspage immediately
-		 * so let's defer.
-		 */
-		putback_zspage_deferred(pool, class, zspage);
-		zs_pool_dec_isolated(pool);
-	}
-	spin_unlock(&class->lock);
+	migrate_write_unlock(zspage);
 }
 
 static const struct address_space_operations zsmalloc_aops = {
@@ -2090,36 +1983,8 @@ static int zs_register_migration(struct zs_pool *pool)
 	return 0;
 }
 
-static bool pool_isolated_are_drained(struct zs_pool *pool)
-{
-	return atomic_long_read(&pool->isolated_pages) == 0;
-}
-
-/* Function for resolving migration */
-static void wait_for_isolated_drain(struct zs_pool *pool)
-{
-
-	/*
-	 * We're in the process of destroying the pool, so there are no
-	 * active allocations. zs_page_isolate() fails for completely free
-	 * zspages, so we need only wait for the zs_pool's isolated
-	 * count to hit zero.
-	 */
-	wait_event(pool->migration_wait,
-		   pool_isolated_are_drained(pool));
-}
-
 static void zs_unregister_migration(struct zs_pool *pool)
 {
-	pool->destroying = true;
-	/*
-	 * We need a memory barrier here to ensure global visibility of
-	 * pool->destroying. Thus pool->isolated pages will either be 0 in which
-	 * case we don't care, or it will be > 0 and pool->destroying will
-	 * ensure that we wake up once isolation hits 0.
-	 */
-	smp_mb();
-	wait_for_isolated_drain(pool); /* This can block */
 	flush_work(&pool->free_work);
 	iput(pool->inode);
 }
@@ -2149,7 +2014,6 @@ static void async_free_zspage(struct work_struct *work)
 		spin_unlock(&class->lock);
 	}
 
-
 	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
 		list_del(&zspage->list);
 		lock_zspage(zspage);
@@ -2362,10 +2226,6 @@ struct zs_pool *zs_create_pool(const char *name)
 	if (!pool->name)
 		goto err;
 
-#ifdef CONFIG_COMPACTION
-	init_waitqueue_head(&pool->migration_wait);
-#endif
-
 	if (create_cache(pool))
 		goto err;
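For reviewers, not part of the patch: the two migration callbacks as
they read once the hunks above are applied, assembled from the diff
itself. Lines that fall between hunks and are not shown by the diff are
marked elided rather than guessed.

static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
{
	struct zspage *zspage;

	/* ... unchanged comment and PageMovable check elided ... */
	VM_BUG_ON_PAGE(PageIsolated(page), page);

	zspage = get_zspage(page);
	migrate_write_lock(zspage);	/* all that isolation needs now */
	inc_zspage_isolation(zspage);
	migrate_write_unlock(zspage);

	return true;
}

static void zs_page_putback(struct page *page)
{
	struct zspage *zspage;

	VM_BUG_ON_PAGE(!PageMovable(page), page);
	VM_BUG_ON_PAGE(!PageIsolated(page), page);

	zspage = get_zspage(page);
	migrate_write_lock(zspage);
	dec_zspage_isolation(zspage);	/* no deferred putback, no pool-wide counter */
	migrate_write_unlock(zspage);
}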