From patchwork Mon Apr  8 18:39:42 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13621504
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
	Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang,
	Barry Song <21cnbao@gmail.com>, Chris Li, Lance Yang
Cc: Ryan Roberts, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v7 3/7] mm: swap: Simplify struct percpu_cluster
Date: Mon, 8 Apr 2024 19:39:42 +0100
Message-Id: <20240408183946.2991168-4-ryan.roberts@arm.com>
In-Reply-To: <20240408183946.2991168-1-ryan.roberts@arm.com>
References: <20240408183946.2991168-1-ryan.roberts@arm.com>

struct percpu_cluster stores the index of the cpu's current cluster and the
offset of the next entry that will be allocated for that cpu. These two pieces
of information are redundant because the cluster index is just
(offset / SWAPFILE_CLUSTER). The only reason for explicitly keeping the
cluster index is that the structure used for it also has a flag to indicate
"no cluster". However, that data structure also contains a spin lock, which is
never used in this context, and as a side effect the code copies the whole
spinlock_t structure, which is questionable coding practice in my view.

So let's clean this up: store only the next offset, and use a sentinel value
(SWAP_NEXT_INVALID) to indicate "no cluster". SWAP_NEXT_INVALID is chosen to
be 0 because 0 will never be seen legitimately: the first page in the swap
file is the swap header, which is always marked bad to prevent it from being
allocated as an entry. This also prevents the cluster to which it belongs from
being marked free, so it will never appear on the free list.

This change saves 16 bytes per cpu. And given we are shortly going to extend
this mechanism to be per-cpu-AND-per-order, we will end up saving
16 * 9 = 144 bytes per cpu, which adds up if you have 256 cpus in the system.
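To make the redundancy concrete, here is a small stand-alone sketch (not part
of the patch). It assumes the usual SWAPFILE_CLUSTER value of 256, and the
offset_to_cluster() helper is hypothetical, used only for illustration:

/*
 * Illustrative user-space sketch: the cluster index can always be derived
 * from the next offset, and 0 works as a "no cluster" sentinel because
 * offset 0 is the swap header and is never handed out as an entry.
 * SWAPFILE_CLUSTER == 256 is an assumption; offset_to_cluster() is a
 * made-up helper, not a kernel API.
 */
#include <assert.h>

#define SWAPFILE_CLUSTER	256	/* swap entries per cluster (typical) */
#define SWAP_NEXT_INVALID	0	/* offset 0 == swap header, never allocated */

static unsigned int offset_to_cluster(unsigned int next)
{
	return next / SWAPFILE_CLUSTER;	/* index is recoverable from the offset */
}

int main(void)
{
	unsigned int next = 3 * SWAPFILE_CLUSTER + 42;	/* some live offset */

	assert(offset_to_cluster(next) == 3);	/* no need to store the index */
	assert(next != SWAP_NEXT_INVALID);	/* a live offset can never be 0 */
	return 0;
}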
Reviewed-by: "Huang, Ying"
Signed-off-by: Ryan Roberts
---
 include/linux/swap.h |  9 ++++++++-
 mm/swapfile.c        | 22 +++++++++++-----------
 2 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 5737236dc3ce..5e1e4f5bf0cb 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -260,13 +260,20 @@ struct swap_cluster_info {
 #define CLUSTER_FLAG_FREE 1 /* This cluster is free */
 #define CLUSTER_FLAG_NEXT_NULL 2 /* This cluster has no next cluster */
 
+/*
+ * The first page in the swap file is the swap header, which is always marked
+ * bad to prevent it from being allocated as an entry. This also prevents the
+ * cluster to which it belongs being marked free. Therefore 0 is safe to use as
+ * a sentinel to indicate next is not valid in percpu_cluster.
+ */
+#define SWAP_NEXT_INVALID	0
+
 /*
  * We assign a cluster to each CPU, so each CPU can allocate swap entry from
  * its own cluster and swapout sequentially. The purpose is to optimize swapout
  * throughput.
  */
 struct percpu_cluster {
-	struct swap_cluster_info index; /* Current cluster index */
 	unsigned int next; /* Likely next allocation offset */
 };
 
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 20c45757f2b2..e3f855475278 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -609,7 +609,7 @@ scan_swap_map_ssd_cluster_conflict(struct swap_info_struct *si,
 		return false;
 
 	percpu_cluster = this_cpu_ptr(si->percpu_cluster);
-	cluster_set_null(&percpu_cluster->index);
+	percpu_cluster->next = SWAP_NEXT_INVALID;
 	return true;
 }
 
@@ -622,14 +622,14 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
 {
 	struct percpu_cluster *cluster;
 	struct swap_cluster_info *ci;
-	unsigned long tmp, max;
+	unsigned int tmp, max;
 
 new_cluster:
 	cluster = this_cpu_ptr(si->percpu_cluster);
-	if (cluster_is_null(&cluster->index)) {
+	tmp = cluster->next;
+	if (tmp == SWAP_NEXT_INVALID) {
 		if (!cluster_list_empty(&si->free_clusters)) {
-			cluster->index = si->free_clusters.head;
-			cluster->next = cluster_next(&cluster->index) *
+			tmp = cluster_next(&si->free_clusters.head) *
 					SWAPFILE_CLUSTER;
 		} else if (!cluster_list_empty(&si->discard_clusters)) {
 			/*
@@ -649,9 +649,7 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
 	 * Other CPUs can use our cluster if they can't find a free cluster,
 	 * check if there is still free entry in the cluster
 	 */
-	tmp = cluster->next;
-	max = min_t(unsigned long, si->max,
-		    (cluster_next(&cluster->index) + 1) * SWAPFILE_CLUSTER);
+	max = min_t(unsigned long, si->max, ALIGN(tmp + 1, SWAPFILE_CLUSTER));
 	if (tmp < max) {
 		ci = lock_cluster(si, tmp);
 		while (tmp < max) {
@@ -662,12 +660,13 @@ static bool scan_swap_map_try_ssd_cluster(struct swap_info_struct *si,
 		unlock_cluster(ci);
 	}
 	if (tmp >= max) {
-		cluster_set_null(&cluster->index);
+		cluster->next = SWAP_NEXT_INVALID;
 		goto new_cluster;
 	}
-	cluster->next = tmp + 1;
 	*offset = tmp;
 	*scan_base = tmp;
+	tmp += 1;
+	cluster->next = tmp < max ? tmp : SWAP_NEXT_INVALID;
 	return true;
 }
 
@@ -3163,8 +3162,9 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		}
 		for_each_possible_cpu(cpu) {
 			struct percpu_cluster *cluster;
+
 			cluster = per_cpu_ptr(p->percpu_cluster, cpu);
-			cluster_set_null(&cluster->index);
+			cluster->next = SWAP_NEXT_INVALID;
 		}
 	} else {
 		atomic_inc(&nr_rotate_swap);