From patchwork Thu Feb 27 04:35:31 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13993743
From: Sergey Senozhatsky
To: Andrew Morton
Cc: Yosry Ahmed, Hillf Danton, Kairui Song, Sebastian Andrzej Siewior,
	Minchan Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Sergey Senozhatsky
Subject: [PATCH v9 13/19] zsmalloc: make zspage lock preemptible
Date: Thu, 27 Feb 2025 13:35:31 +0900
Message-ID: <20250227043618.88380-14-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.658.g4767266eb4-goog
In-Reply-To: <20250227043618.88380-1-senozhatsky@chromium.org>
References: <20250227043618.88380-1-senozhatsky@chromium.org>
MIME-Version: 1.0

In order to implement preemptible object mapping we need a zspage lock
that satisfies several preconditions:
- it should be a reader-writer type of lock
- it should be possible to hold it from any context, but it should also
  be preemptible if the context allows it
- we never sleep while acquiring, but we can sleep while holding it in
  read mode

An rw_semaphore doesn't suffice due to the atomicity requirement, and
an rwlock doesn't satisfy the reader-preemptibility requirement. It's
also worth mentioning that a per-zspage rwsem is a little too memory
heavy (we can easily have double-digit megabytes used only on
rwsemaphores).

Switch over from rwlock_t to a spinlock-based implementation of a
reader-writer lock that satisfies all of the preconditions. The
spinlock-based zspage_lock was suggested by Hillf Danton.

Suggested-by: Hillf Danton
Signed-off-by: Sergey Senozhatsky
---
 mm/zsmalloc.c | 171 ++++++++++++++++++++++++++++++++++----------------
 1 file changed, 118 insertions(+), 53 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 1424ee73cbb5..74a7aaebf7a0 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -226,6 +226,7 @@ struct zs_pool {
 	/* protect zspage migration/compaction */
 	rwlock_t lock;
 	atomic_t compaction_in_progress;
+	struct lock_class_key lock_class;
 };
 
 static inline void zpdesc_set_first(struct zpdesc *zpdesc)
@@ -257,6 +258,15 @@ static inline void free_zpdesc(struct zpdesc *zpdesc)
 	__free_page(page);
 }
 
+#define ZS_PAGE_UNLOCKED	0
+#define ZS_PAGE_WRLOCKED	-1
+
+struct zspage_lock {
+	spinlock_t lock;
+	int cnt;
+	struct lockdep_map dep_map;
+};
+
 struct zspage {
 	struct {
 		unsigned int huge:HUGE_BITS;
@@ -269,7 +279,7 @@ struct zspage {
 	struct zpdesc *first_zpdesc;
 	struct list_head list; /* fullness list */
 	struct zs_pool *pool;
-	rwlock_t lock;
+	struct zspage_lock zsl;
 };
 
 struct mapping_area {
@@ -279,6 +289,85 @@ struct mapping_area {
 	enum zs_mapmode vm_mm; /* mapping mode */
 };
 
+static void zspage_lock_init(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	lockdep_init_map(&zsl->dep_map, "zspage->lock",
+			 &zspage->pool->lock_class, 0);
+	spin_lock_init(&zsl->lock);
+	zsl->cnt = ZS_PAGE_UNLOCKED;
+}
+
+/*
+ * The zspage lock can be held from atomic contexts, but it needs to remain
+ * preemptible when held for reading because it remains held outside of those
+ * atomic contexts, otherwise we unnecessarily lose preemptibility.
+ *
+ * To achieve this, the following rules are enforced on readers and writers:
+ *
+ * - Writers are blocked by both writers and readers, while readers are only
+ *   blocked by writers (i.e. normal rwlock semantics).
+ *
+ * - Writers are always atomic (to allow readers to spin waiting for them).
+ *
+ * - Writers always use trylock (as the lock may be held by sleeping readers).
+ *
+ * - Readers may spin on the lock (as they can only wait for atomic writers).
+ *
+ * - Readers may sleep while holding the lock (as writes only use trylock).
+ */
+static void zspage_read_lock(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	rwsem_acquire_read(&zsl->dep_map, 0, 0, _RET_IP_);
+
+	spin_lock(&zsl->lock);
+	zsl->cnt++;
+	spin_unlock(&zsl->lock);
+
+	lock_acquired(&zsl->dep_map, _RET_IP_);
+}
+
+static void zspage_read_unlock(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	rwsem_release(&zsl->dep_map, _RET_IP_);
+
+	spin_lock(&zsl->lock);
+	zsl->cnt--;
+	spin_unlock(&zsl->lock);
+}
+
+static __must_check bool zspage_write_trylock(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	spin_lock(&zsl->lock);
+	if (zsl->cnt == ZS_PAGE_UNLOCKED) {
+		zsl->cnt = ZS_PAGE_WRLOCKED;
+		rwsem_acquire(&zsl->dep_map, 0, 1, _RET_IP_);
+		lock_acquired(&zsl->dep_map, _RET_IP_);
+		return true;
+	}
+
+	lock_contended(&zsl->dep_map, _RET_IP_);
+	spin_unlock(&zsl->lock);
+	return false;
+}
+
+static void zspage_write_unlock(struct zspage *zspage)
+{
+	struct zspage_lock *zsl = &zspage->zsl;
+
+	rwsem_release(&zsl->dep_map, _RET_IP_);
+
+	zsl->cnt = ZS_PAGE_UNLOCKED;
+	spin_unlock(&zsl->lock);
+}
+
 /* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
 static void SetZsHugePage(struct zspage *zspage)
 {
@@ -290,12 +379,6 @@ static bool ZsHugePage(struct zspage *zspage)
 	return zspage->huge;
 }
 
-static void migrate_lock_init(struct zspage *zspage);
-static void migrate_read_lock(struct zspage *zspage);
-static void migrate_read_unlock(struct zspage *zspage);
-static void migrate_write_lock(struct zspage *zspage);
-static void migrate_write_unlock(struct zspage *zspage);
-
 #ifdef CONFIG_COMPACTION
 static void kick_deferred_free(struct zs_pool *pool);
 static void init_deferred_free(struct zs_pool *pool);
@@ -992,7 +1075,9 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		return NULL;
 
 	zspage->magic = ZSPAGE_MAGIC;
-	migrate_lock_init(zspage);
+	zspage->pool = pool;
+	zspage->class = class->index;
+	zspage_lock_init(zspage);
 
 	for (i = 0; i < class->pages_per_zspage; i++) {
 		struct zpdesc *zpdesc;
@@ -1015,8 +1100,6 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 
 	create_page_chain(class, zspage, zpdescs);
 	init_zspage(class, zspage);
-	zspage->pool = pool;
-	zspage->class = class->index;
 
 	return zspage;
 }
@@ -1217,7 +1300,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	 * zs_unmap_object API so delegate the locking from class to zspage
 	 * which is smaller granularity.
 	 */
-	migrate_read_lock(zspage);
+	zspage_read_lock(zspage);
 	read_unlock(&pool->lock);
 
 	class = zspage_class(pool, zspage);
@@ -1277,7 +1360,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	}
 	local_unlock(&zs_map_area.lock);
 
-	migrate_read_unlock(zspage);
+	zspage_read_unlock(zspage);
 }
 EXPORT_SYMBOL_GPL(zs_unmap_object);
 
@@ -1671,18 +1754,18 @@ static void lock_zspage(struct zspage *zspage)
 	/*
 	 * Pages we haven't locked yet can be migrated off the list while we're
 	 * trying to lock them, so we need to be careful and only attempt to
-	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
+	 * lock each page under zspage_read_lock(). Otherwise, the page we lock
 	 * may no longer belong to the zspage. This means that we may wait for
 	 * the wrong page to unlock, so we must take a reference to the page
-	 * prior to waiting for it to unlock outside migrate_read_lock().
+	 * prior to waiting for it to unlock outside zspage_read_lock().
	 */
	while (1) {
-		migrate_read_lock(zspage);
+		zspage_read_lock(zspage);
		zpdesc = get_first_zpdesc(zspage);
		if (zpdesc_trylock(zpdesc))
			break;
		zpdesc_get(zpdesc);
-		migrate_read_unlock(zspage);
+		zspage_read_unlock(zspage);
		zpdesc_wait_locked(zpdesc);
		zpdesc_put(zpdesc);
	}
@@ -1693,41 +1776,16 @@ static void lock_zspage(struct zspage *zspage)
 			curr_zpdesc = zpdesc;
 		} else {
 			zpdesc_get(zpdesc);
-			migrate_read_unlock(zspage);
+			zspage_read_unlock(zspage);
 			zpdesc_wait_locked(zpdesc);
 			zpdesc_put(zpdesc);
-			migrate_read_lock(zspage);
+			zspage_read_lock(zspage);
 		}
 	}
-	migrate_read_unlock(zspage);
+	zspage_read_unlock(zspage);
 }
 #endif /* CONFIG_COMPACTION */
 
-static void migrate_lock_init(struct zspage *zspage)
-{
-	rwlock_init(&zspage->lock);
-}
-
-static void migrate_read_lock(struct zspage *zspage) __acquires(&zspage->lock)
-{
-	read_lock(&zspage->lock);
-}
-
-static void migrate_read_unlock(struct zspage *zspage) __releases(&zspage->lock)
-{
-	read_unlock(&zspage->lock);
-}
-
-static void migrate_write_lock(struct zspage *zspage)
-{
-	write_lock(&zspage->lock);
-}
-
-static void migrate_write_unlock(struct zspage *zspage)
-{
-	write_unlock(&zspage->lock);
-}
-
 #ifdef CONFIG_COMPACTION
 
 static const struct movable_operations zsmalloc_mops;
@@ -1785,9 +1843,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 
 	VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc));
 
-	/* We're committed, tell the world that this is a Zsmalloc page. */
-	__zpdesc_set_zsmalloc(newzpdesc);
-
 	/* The page is locked, so this pointer must remain valid */
 	zspage = get_zspage(zpdesc);
 	pool = zspage->pool;
@@ -1803,8 +1858,15 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 * the class lock protects zpage alloc/free in the zspage.
 	 */
 	spin_lock(&class->lock);
-	/* the migrate_write_lock protects zpage access via zs_map_object */
-	migrate_write_lock(zspage);
+	/* the zspage write_lock protects zpage access via zs_map_object */
+	if (!zspage_write_trylock(zspage)) {
+		spin_unlock(&class->lock);
+		write_unlock(&pool->lock);
+		return -EINVAL;
+	}
+
+	/* We're committed, tell the world that this is a Zsmalloc page. */
+	__zpdesc_set_zsmalloc(newzpdesc);
 
 	offset = get_first_obj_offset(zpdesc);
 	s_addr = kmap_local_zpdesc(zpdesc);
@@ -1835,7 +1897,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	 */
 	write_unlock(&pool->lock);
 	spin_unlock(&class->lock);
-	migrate_write_unlock(zspage);
+	zspage_write_unlock(zspage);
 
 	zpdesc_get(newzpdesc);
 	if (zpdesc_zone(newzpdesc) != zpdesc_zone(zpdesc)) {
@@ -1971,9 +2033,11 @@ static unsigned long __zs_compact(struct zs_pool *pool,
 		if (!src_zspage)
 			break;
 
-		migrate_write_lock(src_zspage);
+		if (!zspage_write_trylock(src_zspage))
+			break;
+
 		migrate_zspage(pool, src_zspage, dst_zspage);
-		migrate_write_unlock(src_zspage);
+		zspage_write_unlock(src_zspage);
 
 		fg = putback_zspage(class, src_zspage);
 		if (fg == ZS_INUSE_RATIO_0) {
@@ -2141,6 +2205,7 @@ struct zs_pool *zs_create_pool(const char *name)
 	init_deferred_free(pool);
 	rwlock_init(&pool->lock);
 	atomic_set(&pool->compaction_in_progress, 0);
+	lockdep_register_key(&pool->lock_class);
 
 	pool->name = kstrdup(name, GFP_KERNEL);
 	if (!pool->name)
@@ -2233,7 +2298,6 @@ struct zs_pool *zs_create_pool(const char *name)
 	 * trigger compaction manually. Thus, ignore return code.
	 */
	zs_register_shrinker(pool);
-
	return pool;
 
 err:
@@ -2270,6 +2334,7 @@ void zs_destroy_pool(struct zs_pool *pool)
 		kfree(class);
 	}
 
+	lockdep_unregister_key(&pool->lock_class);
 	destroy_cache(pool);
 	kfree(pool->name);
 	kfree(pool);
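
To illustrate the semantics of the zspage_lock scheme above outside of
the kernel: the following is a minimal userspace sketch of the same
spinlock-plus-counter reader-writer lock. This is an illustration only,
not code from this patch -- pthread_spinlock_t stands in for the
kernel's spinlock_t, the names are made up for the example, and the
lockdep annotations are omitted:

	#include <pthread.h>
	#include <stdbool.h>

	#define PAGE_UNLOCKED	 0
	#define PAGE_WRLOCKED	-1

	struct page_lock {
		pthread_spinlock_t lock;	/* protects cnt; held by writers */
		int cnt;			/* > 0: readers, -1: writer */
	};

	static void page_lock_init(struct page_lock *pl)
	{
		pthread_spin_init(&pl->lock, PTHREAD_PROCESS_PRIVATE);
		pl->cnt = PAGE_UNLOCKED;
	}

	/* Readers never fail; they only spin while a writer holds ->lock. */
	static void page_read_lock(struct page_lock *pl)
	{
		pthread_spin_lock(&pl->lock);
		pl->cnt++;
		pthread_spin_unlock(&pl->lock);
		/* the "lock" is now just the elevated counter: sleeping is fine */
	}

	static void page_read_unlock(struct page_lock *pl)
	{
		pthread_spin_lock(&pl->lock);
		pl->cnt--;
		pthread_spin_unlock(&pl->lock);
	}

	/* Writers never wait for readers: trylock fails if a reader is active. */
	static bool page_write_trylock(struct page_lock *pl)
	{
		pthread_spin_lock(&pl->lock);
		if (pl->cnt == PAGE_UNLOCKED) {
			pl->cnt = PAGE_WRLOCKED;
			/* ->lock stays held for the whole write-side section */
			return true;
		}
		pthread_spin_unlock(&pl->lock);
		return false;
	}

	static void page_write_unlock(struct page_lock *pl)
	{
		pl->cnt = PAGE_UNLOCKED;
		pthread_spin_unlock(&pl->lock);
	}

The property this mirrors is the one the comment block in the patch
spells out: the write side holds the inner spinlock for its entire
critical section, so writers are atomic and readers may spin waiting
for them, while the read side holds the lock only as an elevated
counter and therefore remains preemptible and may sleep. A writer never
waits for readers; it simply fails the trylock, which is why
zs_page_migrate() and __zs_compact() above bail out when
zspage_write_trylock() fails.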