From patchwork Mon Feb 19 13:33:52 2024
X-Patchwork-Submitter: Chengming Zhou
X-Patchwork-Id: 13562697
From: Chengming Zhou
Date: Mon, 19 Feb 2024 13:33:52 +0000
Subject: [PATCH 2/3] mm/zsmalloc: remove migrate_write_lock_nested()
MIME-Version: 1.0
Message-Id: <20240219-b4-szmalloc-migrate-v1-2-34cd49c6545b@bytedance.com>
References: <20240219-b4-szmalloc-migrate-v1-0-34cd49c6545b@bytedance.com>
In-Reply-To: <20240219-b4-szmalloc-migrate-v1-0-34cd49c6545b@bytedance.com>
To: nphamcs@gmail.com, yosryahmed@google.com, Sergey Senozhatsky, Minchan Kim, Andrew Morton, hannes@cmpxchg.org
Cc: linux-mm@kvack.org, Chengming Zhou, linux-kernel@vger.kernel.org

The migrate write lock protects against races between zspage migration and users mapping zspage objects. We only need to lock out the map users of the src zspage, not the dst zspage: the dst zspage is safe for users to map concurrently, since migration only does obj_malloc() from it. So we can remove the migrate_write_lock_nested() use case.

While we are here, clean up __zs_compact() by moving putback_zspage() outside of the migrate_write_unlock section: since we hold the pool lock, no malloc or free users can come in.
Signed-off-by: Chengming Zhou
---
 mm/zsmalloc.c | 22 +++++-----------------
 1 file changed, 5 insertions(+), 17 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 64d5533fa5d8..f2ae7d4c6f21 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -279,7 +279,6 @@ static void migrate_lock_init(struct zspage *zspage);
 static void migrate_read_lock(struct zspage *zspage);
 static void migrate_read_unlock(struct zspage *zspage);
 static void migrate_write_lock(struct zspage *zspage);
-static void migrate_write_lock_nested(struct zspage *zspage);
 static void migrate_write_unlock(struct zspage *zspage);
 
 #ifdef CONFIG_COMPACTION
@@ -1727,11 +1726,6 @@ static void migrate_write_lock(struct zspage *zspage)
 	write_lock(&zspage->lock);
 }
 
-static void migrate_write_lock_nested(struct zspage *zspage)
-{
-	write_lock_nested(&zspage->lock, SINGLE_DEPTH_NESTING);
-}
-
 static void migrate_write_unlock(struct zspage *zspage)
 {
 	write_unlock(&zspage->lock);
@@ -2003,19 +1997,17 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 			dst_zspage = isolate_dst_zspage(class);
 			if (!dst_zspage)
 				break;
-			migrate_write_lock(dst_zspage);
 		}
 
 		src_zspage = isolate_src_zspage(class);
 		if (!src_zspage)
 			break;
 
-		migrate_write_lock_nested(src_zspage);
-
+		migrate_write_lock(src_zspage);
 		migrate_zspage(pool, src_zspage, dst_zspage);
-		fg = putback_zspage(class, src_zspage);
 		migrate_write_unlock(src_zspage);
+		fg = putback_zspage(class, src_zspage);
 
 		if (fg == ZS_INUSE_RATIO_0) {
 			free_zspage(pool, class, src_zspage);
 			pages_freed += class->pages_per_zspage;
@@ -2025,7 +2017,6 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		if (get_fullness_group(class, dst_zspage) == ZS_INUSE_RATIO_100
 		    || spin_is_contended(&pool->lock)) {
 			putback_zspage(class, dst_zspage);
-			migrate_write_unlock(dst_zspage);
 			dst_zspage = NULL;
 
 			spin_unlock(&pool->lock);
@@ -2034,15 +2025,12 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		}
 	}
 
-	if (src_zspage) {
+	if (src_zspage)
 		putback_zspage(class, src_zspage);
-		migrate_write_unlock(src_zspage);
-	}
 
-	if (dst_zspage) {
+	if (dst_zspage)
 		putback_zspage(class, dst_zspage);
-		migrate_write_unlock(dst_zspage);
-	}
+
	spin_unlock(&pool->lock);
 
	return pages_freed;