From patchwork Fri Dec 22 06:25:39 2023
X-Patchwork-Submitter: Chris Li
X-Patchwork-Id: 13502993
From: Chris Li <chrisl@kernel.org>
Date: Thu, 21 Dec 2023 22:25:39 -0800
Subject: [PATCH] mm: swap: async free swap slot cache entries
Message-Id: <20231221-async-free-v1-1-94b277992cb0@kernel.org>
To: Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Wei Xu, Yu Zhao,
    Greg Thelen, Chun-Tse Shao, Suren Baghdasaryan, Yosry Ahmed,
    Brian Geffon, Minchan Kim, Michal Hocko, Mel Gorman, Huang Ying,
    Nhat Pham, Johannes Weiner, Kairui Song, Zhongkun He, Kemeng Shi,
    Barry Song, Chris Li
X-Mailer: b4 0.12.3
We discovered that 1% of swap page faults take 100us+, while 50% of swap
faults complete in under 20us. Further investigation shows that in the
long-tail cases a large portion of the time is spent in the
free_swap_slots() function.

The percpu cache of swap slots is freed in a batch of 64 entries inside
free_swap_slots(). These cache entries are accumulated from previous
page faults, which may not be related to the current process. Doing the
batch free in the page fault handler causes longer tail latencies and
penalizes the current process.

Move free_swap_slots() outside of the swapin page fault handler into an
async work queue to avoid such long tail latencies.

Testing:

Chun-Tse ran some benchmarks on a Chromebook, showing that
zram_wait_metrics improved by about 15% with 80% and 95% confidence.

I recently ran some experiments on about 1000 Google production
machines.
They show that swapin latency drops dramatically in the long-tail
100us - 500us bucket:

platform	(100-500us)		(0-100us)
A		1.12% -> 0.36%		98.47% -> 99.22%
B		0.65% -> 0.15%		98.96% -> 99.46%
C		0.61% -> 0.23%		98.96% -> 99.38%

Signed-off-by: Chris Li <chrisl@kernel.org>
To: Andrew Morton
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Wei Xu
Cc: Yu Zhao
Cc: Greg Thelen
Cc: Chun-Tse Shao
Cc: Suren Baghdasaryan
Cc: Yosry Ahmed
Cc: Brian Geffon
Cc: Minchan Kim
Cc: Michal Hocko
Cc: Mel Gorman
Cc: Huang Ying
Cc: Nhat Pham
Cc: Johannes Weiner
Cc: Kairui Song
Cc: Zhongkun He
Cc: Kemeng Shi
Cc: Barry Song
---
 include/linux/swap_slots.h |  1 +
 mm/swap_slots.c            | 37 +++++++++++++++++++++++++++++--------
 2 files changed, 30 insertions(+), 8 deletions(-)
---
base-commit: eacce8189e28717da6f44ee492b7404c636ae0de
change-id: 20231216-async-free-bef392015432

Best regards,

diff --git a/include/linux/swap_slots.h b/include/linux/swap_slots.h
index 15adfb8c813a..67bc8fa30d63 100644
--- a/include/linux/swap_slots.h
+++ b/include/linux/swap_slots.h
@@ -19,6 +19,7 @@ struct swap_slots_cache {
 	spinlock_t	free_lock; /* protects slots_ret, n_ret */
 	swp_entry_t	*slots_ret;
 	int		n_ret;
+	struct work_struct async_free;
 };
 
 void disable_swap_slots_cache_lock(void);
diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 0bec1f705f8e..a3b306550732 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -42,8 +42,10 @@ static bool swap_slot_cache_initialized;
 static DEFINE_MUTEX(swap_slots_cache_mutex);
 /* Serialize swap slots cache enable/disable operations */
 static DEFINE_MUTEX(swap_slots_cache_enable_mutex);
+static struct workqueue_struct *swap_free_queue;
 
 static void __drain_swap_slots_cache(unsigned int type);
+static void swapcache_async_free_entries(struct work_struct *data);
 
 #define use_swap_slot_cache (swap_slot_cache_active && swap_slot_cache_enabled)
 #define SLOTS_CACHE 0x1
@@ -149,6 +151,7 @@ static int alloc_swap_slot_cache(unsigned int cpu)
 		spin_lock_init(&cache->free_lock);
 		cache->lock_initialized = true;
 	}
+	INIT_WORK(&cache->async_free, swapcache_async_free_entries);
 	cache->nr = 0;
 	cache->cur = 0;
 	cache->n_ret = 0;
@@ -269,6 +272,20 @@ static int refill_swap_slots_cache(struct swap_slots_cache *cache)
 	return cache->nr;
 }
 
+static void swapcache_async_free_entries(struct work_struct *data)
+{
+	struct swap_slots_cache *cache;
+
+	cache = container_of(data, struct swap_slots_cache, async_free);
+	spin_lock_irq(&cache->free_lock);
+	/* Swap slots cache may be deactivated before acquiring lock */
+	if (cache->slots_ret) {
+		swapcache_free_entries(cache->slots_ret, cache->n_ret);
+		cache->n_ret = 0;
+	}
+	spin_unlock_irq(&cache->free_lock);
+}
+
 void free_swap_slot(swp_entry_t entry)
 {
 	struct swap_slots_cache *cache;
@@ -282,17 +299,14 @@ void free_swap_slot(swp_entry_t entry)
 			goto direct_free;
 		}
 		if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE) {
-			/*
-			 * Return slots to global pool.
-			 * The current swap_map value is SWAP_HAS_CACHE.
-			 * Set it to 0 to indicate it is available for
-			 * allocation in global pool
-			 */
-			swapcache_free_entries(cache->slots_ret, cache->n_ret);
-			cache->n_ret = 0;
+			spin_unlock_irq(&cache->free_lock);
+			queue_work(swap_free_queue, &cache->async_free);
+			goto direct_free;
 		}
 		cache->slots_ret[cache->n_ret++] = entry;
 		spin_unlock_irq(&cache->free_lock);
+		if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE)
+			queue_work(swap_free_queue, &cache->async_free);
 	} else {
 direct_free:
 		swapcache_free_entries(&entry, 1);
@@ -348,3 +362,10 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
 	}
 	return entry;
 }
+
+static int __init async_queue_init(void)
+{
+	swap_free_queue = create_workqueue("async swap cache");
+	return 0;
+}
+subsys_initcall(async_queue_init);