From patchwork Thu Jan 11 18:33:19 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Kairui Song
X-Patchwork-Id: 13517706
From: Kairui Song <ryncsn@gmail.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Yu Zhao, Chris Li, Matthew Wilcox,
    linux-kernel@vger.kernel.org, Kairui Song
Subject: [PATCH v2 1/3] mm, lru_gen: batch update counters on aging
Date: Fri, 12 Jan 2024 02:33:19 +0800
Message-ID: <20240111183321.19984-2-ryncsn@gmail.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240111183321.19984-1-ryncsn@gmail.com>
References: <20240111183321.19984-1-ryncsn@gmail.com>
Reply-To: Kairui Song
MIME-Version: 1.0
From: Kairui Song

When lru_gen is aging, it updates the mm counters page by page, which
causes higher overhead when aging happens frequently or when many pages
in one generation are being moved. Optimize this by updating the
counters in batches. Although most of the __mod_*_state helpers have
their own caches, the overhead is still observable.
Tested in a 4G memcg on an EPYC 7K62 with:

  memcached -u nobody -m 16384 -s /tmp/memcached.socket \
    -a 0766 -t 16 -B binary &

  memtier_benchmark -S /tmp/memcached.socket \
    -P memcache_binary -n allkeys \
    --key-minimum=1 --key-maximum=16000000 -d 1024 \
    --ratio=1:0 --key-pattern=P:P -c 2 -t 16 --pipeline 8 -x 6

Average result of 18 test runs:

Before: 44017.78 Ops/sec
After:  44687.08 Ops/sec (+1.5%)

Signed-off-by: Kairui Song <ryncsn@gmail.com>
---
 mm/vmscan.c | 64 +++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 55 insertions(+), 9 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4f9c854ce6cc..185d53607c7e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3113,9 +3113,47 @@ static int folio_update_gen(struct folio *folio, int gen)
 	return ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
 }
 
+/*
+ * Update LRU gen in batch for each lru_gen LRU list. The batch is limited to
+ * each gen / type / zone level LRU. The batch is applied after scanning of
+ * one LRU list finishes or is aborted.
+ */
+struct gen_update_batch {
+	int delta[MAX_NR_GENS];
+};
+
+static void lru_gen_update_batch(struct lruvec *lruvec, int type, int zone,
+				 struct gen_update_batch *batch)
+{
+	int gen;
+	int promoted = 0;
+	struct lru_gen_folio *lrugen = &lruvec->lrugen;
+	enum lru_list lru = type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON;
+
+	for (gen = 0; gen < MAX_NR_GENS; gen++) {
+		int delta = batch->delta[gen];
+
+		if (!delta)
+			continue;
+
+		WRITE_ONCE(lrugen->nr_pages[gen][type][zone],
+			   lrugen->nr_pages[gen][type][zone] + delta);
+
+		if (lru_gen_is_active(lruvec, gen))
+			promoted += delta;
+	}
+
+	if (promoted) {
+		__update_lru_size(lruvec, lru, zone, -promoted);
+		__update_lru_size(lruvec, lru + LRU_ACTIVE, zone, promoted);
+	}
+}
+
 /* protect pages accessed multiple times through file descriptors */
-static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
+static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio,
+			 bool reclaiming, struct gen_update_batch *batch)
 {
+	int delta = folio_nr_pages(folio);
 	int type = folio_is_file_lru(folio);
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
@@ -3138,7 +3176,8 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai
 		new_flags |= BIT(PG_reclaim);
 	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
 
-	lru_gen_update_size(lruvec, folio, old_gen, new_gen);
+	batch->delta[old_gen] -= delta;
+	batch->delta[new_gen] += delta;
 
 	return new_gen;
 }
@@ -3672,6 +3711,7 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 {
 	int zone;
 	int remaining = MAX_LRU_BATCH;
+	struct gen_update_batch batch = { };
 	struct lru_gen_folio *lrugen = &lruvec->lrugen;
 	int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
 
@@ -3690,12 +3730,15 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
 			VM_WARN_ON_ONCE_FOLIO(folio_is_file_lru(folio) != type, folio);
 			VM_WARN_ON_ONCE_FOLIO(folio_zonenum(folio) != zone, folio);
 
-			new_gen = folio_inc_gen(lruvec, folio, false);
+			new_gen = folio_inc_gen(lruvec, folio, false, &batch);
 			list_move_tail(&folio->lru, &lrugen->folios[new_gen][type][zone]);
 
-			if (!--remaining)
+			if (!--remaining) {
+				lru_gen_update_batch(lruvec, type, zone, &batch);
 				return false;
+			}
 		}
+		lru_gen_update_batch(lruvec, type, zone, &batch);
 	}
 done:
 	reset_ctrl_pos(lruvec, type, true);
@@ -4215,7 +4258,7 @@ void lru_gen_soft_reclaim(struct mem_cgroup *memcg, int nid)
 ******************************************************************************/
 
 static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_control *sc,
-		       int tier_idx)
+		       int tier_idx, struct gen_update_batch *batch)
 {
 	bool success;
 	int gen = folio_lru_gen(folio);
@@ -4257,7 +4300,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	if (tier > tier_idx || refs == BIT(LRU_REFS_WIDTH)) {
 		int hist = lru_hist_from_seq(lrugen->min_seq[type]);
 
-		gen = folio_inc_gen(lruvec, folio, false);
+		gen = folio_inc_gen(lruvec, folio, false, batch);
 		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 
 		WRITE_ONCE(lrugen->protected[hist][type][tier - 1],
@@ -4267,7 +4310,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 
 	/* ineligible */
 	if (zone > sc->reclaim_idx || skip_cma(folio, sc)) {
-		gen = folio_inc_gen(lruvec, folio, false);
+		gen = folio_inc_gen(lruvec, folio, false, batch);
 		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;
 	}
@@ -4275,7 +4318,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	/* waiting for writeback */
 	if (folio_test_locked(folio) || folio_test_writeback(folio) ||
 	    (type == LRU_GEN_FILE && folio_test_dirty(folio))) {
-		gen = folio_inc_gen(lruvec, folio, true);
+		gen = folio_inc_gen(lruvec, folio, true, batch);
 		list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;
 	}
@@ -4341,6 +4384,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 	for (i = MAX_NR_ZONES; i > 0; i--) {
 		LIST_HEAD(moved);
 		int skipped_zone = 0;
+		struct gen_update_batch batch = { };
 		int zone = (sc->reclaim_idx + i) % MAX_NR_ZONES;
 		struct list_head *head = &lrugen->folios[gen][type][zone];
 
@@ -4355,7 +4399,7 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 
 			scanned += delta;
 
-			if (sort_folio(lruvec, folio, sc, tier))
+			if (sort_folio(lruvec, folio, sc, tier, &batch))
 				sorted += delta;
 			else if (isolate_folio(lruvec, folio, sc)) {
 				list_add(&folio->lru, list);
@@ -4375,6 +4419,8 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
 			skipped += skipped_zone;
 		}
 
+		lru_gen_update_batch(lruvec, type, zone, &batch);
+
 		if (!remaining || isolated >= MIN_LRU_BATCH)
 			break;
 	}