From patchwork Mon Mar 3 02:03:11 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13998052
From: Sergey Senozhatsky
To: Andrew Morton
Cc: Yosry Ahmed, Hillf Danton, Kairui Song, Sebastian Andrzej Siewior,
	Minchan Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Sergey Senozhatsky
Subject: [PATCH v10 02/19] zram: permit preemption with active compression stream
Date: Mon, 3 Mar 2025 11:03:11 +0900
Message-ID: <20250303022425.285971-3-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog
In-Reply-To: <20250303022425.285971-1-senozhatsky@chromium.org>
References: <20250303022425.285971-1-senozhatsky@chromium.org>
MIME-Version: 1.0

Currently, per-CPU stream access is done from a non-preemptible
(atomic) section, which imposes the same atomicity requirements on
compression backends as the entry spin-lock, and makes it impossible
to use algorithms that can schedule/wait/sleep during
compression and decompression.

Switch to a preemptible per-CPU model, similar to the one used in
zswap.  Instead of a per-CPU local lock, each stream carries a mutex
which is locked for the entire time zram uses it for compression or
decompression, so that the cpu-dead event waits for zram to stop
using a particular per-CPU stream and release it.

Suggested-by: Yosry Ahmed
Signed-off-by: Sergey Senozhatsky
Reviewed-by: Yosry Ahmed
---
 drivers/block/zram/zcomp.c    | 41 +++++++++++++++++++++++++----------
 drivers/block/zram/zcomp.h    |  6 ++---
 drivers/block/zram/zram_drv.c | 20 ++++++++---------
 3 files changed, 42 insertions(+), 25 deletions(-)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index bb514403e305..53e4c37441be 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -6,7 +6,7 @@
 #include
 #include
 #include
-#include <linux/local_lock.h>
+#include <linux/mutex.h>
 #include
 #include
 
@@ -109,13 +109,29 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
 {
-	local_lock(&comp->stream->lock);
-	return this_cpu_ptr(comp->stream);
+	for (;;) {
+		struct zcomp_strm *zstrm = raw_cpu_ptr(comp->stream);
+
+		/*
+		 * Inspired by zswap
+		 *
+		 * stream is returned with ->mutex locked which prevents
+		 * cpu_dead() from releasing this stream under us, however
+		 * there is still a race window between raw_cpu_ptr() and
+		 * mutex_lock(), during which we could have been migrated
+		 * from a CPU that has already destroyed its stream.  If
+		 * so then unlock and re-try on the current CPU.
+		 */
+		mutex_lock(&zstrm->lock);
+		if (likely(zstrm->buffer))
+			return zstrm;
+		mutex_unlock(&zstrm->lock);
+	}
 }
 
-void zcomp_stream_put(struct zcomp *comp)
+void zcomp_stream_put(struct zcomp_strm *zstrm)
 {
-	local_unlock(&comp->stream->lock);
+	mutex_unlock(&zstrm->lock);
 }
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
@@ -151,12 +167,9 @@ int zcomp_decompress(struct zcomp *comp, struct zcomp_strm *zstrm,
 int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
 {
 	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
-	struct zcomp_strm *zstrm;
+	struct zcomp_strm *zstrm = per_cpu_ptr(comp->stream, cpu);
 	int ret;
 
-	zstrm = per_cpu_ptr(comp->stream, cpu);
-	local_lock_init(&zstrm->lock);
-
 	ret = zcomp_strm_init(comp, zstrm);
 	if (ret)
 		pr_err("Can't allocate a compression stream\n");
@@ -166,16 +179,17 @@ int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
 int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)
 {
 	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
-	struct zcomp_strm *zstrm;
+	struct zcomp_strm *zstrm = per_cpu_ptr(comp->stream, cpu);
 
-	zstrm = per_cpu_ptr(comp->stream, cpu);
+	mutex_lock(&zstrm->lock);
 	zcomp_strm_free(comp, zstrm);
+	mutex_unlock(&zstrm->lock);
 	return 0;
 }
 
 static int zcomp_init(struct zcomp *comp, struct zcomp_params *params)
 {
-	int ret;
+	int ret, cpu;
 
 	comp->stream = alloc_percpu(struct zcomp_strm);
 	if (!comp->stream)
@@ -186,6 +200,9 @@ static int zcomp_init(struct zcomp *comp, struct zcomp_params *params)
 	if (ret)
 		goto cleanup;
 
+	for_each_possible_cpu(cpu)
+		mutex_init(&per_cpu_ptr(comp->stream, cpu)->lock);
+
 	ret = cpuhp_state_add_instance(CPUHP_ZCOMP_PREPARE, &comp->node);
 	if (ret < 0)
 		goto cleanup;
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index ad5762813842..23b8236b9090 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -3,7 +3,7 @@
 #ifndef _ZCOMP_H_
 #define _ZCOMP_H_
 
-#include <linux/local_lock.h>
+#include <linux/mutex.h>
 
 #define ZCOMP_PARAM_NO_LEVEL	INT_MIN
@@ -31,7 +31,7 @@ struct zcomp_ctx {
 };
 
 struct zcomp_strm {
-	local_lock_t lock;
+	struct mutex lock;
 	/* compression buffer */
 	void *buffer;
 	struct zcomp_ctx ctx;
@@ -77,7 +77,7 @@ struct zcomp *zcomp_create(const char *alg, struct zcomp_params *params);
 void zcomp_destroy(struct zcomp *comp);
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp);
-void zcomp_stream_put(struct zcomp *comp);
+void zcomp_stream_put(struct zcomp_strm *zstrm);
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
 		   const void *src, unsigned int *dst_len);
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 70599d41b828..dd669d48ae6f 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1607,7 +1607,7 @@ static int read_compressed_page(struct zram *zram, struct page *page, u32 index)
 	ret = zcomp_decompress(zram->comps[prio], zstrm, src, size, dst);
 	kunmap_local(dst);
 	zs_unmap_object(zram->mem_pool, handle);
-	zcomp_stream_put(zram->comps[prio]);
+	zcomp_stream_put(zstrm);
 
 	return ret;
 }
@@ -1768,14 +1768,14 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	kunmap_local(mem);
 
 	if (unlikely(ret)) {
-		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+		zcomp_stream_put(zstrm);
 		pr_err("Compression failed! err=%d\n", ret);
 		zs_free(zram->mem_pool, handle);
 		return ret;
 	}
 
 	if (comp_len >= huge_class_size) {
-		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+		zcomp_stream_put(zstrm);
 		return write_incompressible_page(zram, page, index);
 	}
@@ -1799,7 +1799,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 				   __GFP_HIGHMEM |
 				   __GFP_MOVABLE);
 	if (IS_ERR_VALUE(handle)) {
-		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+		zcomp_stream_put(zstrm);
 		atomic64_inc(&zram->stats.writestall);
 		handle = zs_malloc(zram->mem_pool, comp_len,
 				   GFP_NOIO | __GFP_HIGHMEM |
@@ -1811,7 +1811,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	}
 
 	if (!zram_can_store_page(zram)) {
-		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+		zcomp_stream_put(zstrm);
 		zs_free(zram->mem_pool, handle);
 		return -ENOMEM;
 	}
@@ -1819,7 +1819,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
 	memcpy(dst, zstrm->buffer, comp_len);
-	zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+	zcomp_stream_put(zstrm);
 	zs_unmap_object(zram->mem_pool, handle);
 
 	zram_slot_lock(zram, index);
@@ -1978,7 +1978,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
 	kunmap_local(src);
 
 	if (ret) {
-		zcomp_stream_put(zram->comps[prio]);
+		zcomp_stream_put(zstrm);
 		return ret;
 	}
@@ -1988,7 +1988,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
 		/* Continue until we make progress */
 		if (class_index_new >= class_index_old ||
 		    (threshold && comp_len_new >= threshold)) {
-			zcomp_stream_put(zram->comps[prio]);
+			zcomp_stream_put(zstrm);
 			continue;
 		}
@@ -2046,13 +2046,13 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
 			       __GFP_HIGHMEM |
 			       __GFP_MOVABLE);
 	if (IS_ERR_VALUE(handle_new)) {
-		zcomp_stream_put(zram->comps[prio]);
+		zcomp_stream_put(zstrm);
 		return PTR_ERR((void *)handle_new);
 	}
 
 	dst = zs_map_object(zram->mem_pool, handle_new, ZS_MM_WO);
 	memcpy(dst, zstrm->buffer, comp_len_new);
-	zcomp_stream_put(zram->comps[prio]);
+	zcomp_stream_put(zstrm);
 	zs_unmap_object(zram->mem_pool, handle_new);
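
The lock-then-revalidate retry loop in zcomp_stream_get() can be illustrated
outside the kernel. Below is a minimal userspace sketch of the same pattern,
using pthread mutexes in place of kernel mutexes; pick_stream(), stream_dead(),
NR_STREAMS and the other names are made up for illustration and are not part of
the zram API:

```c
/* Userspace sketch of the preemptible per-CPU stream pattern. */
#include <assert.h>
#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

#define NR_STREAMS 4

struct zstrm {
	pthread_mutex_t lock;	/* plays the role of the per-stream mutex */
	void *buffer;		/* NULL once teardown has run */
};

static struct zstrm streams[NR_STREAMS];

/* Stand-in for raw_cpu_ptr(): which stream is "ours" can change under us. */
static struct zstrm *pick_stream(void)
{
	return &streams[rand() % NR_STREAMS];
}

/* Mirrors zcomp_stream_get(): lock, then check the stream still exists. */
static struct zstrm *stream_get(void)
{
	for (;;) {
		struct zstrm *zstrm = pick_stream();

		/*
		 * Between pick_stream() and taking the lock, the stream we
		 * picked may have been torn down (the "migrated off a dead
		 * CPU" window).  If its buffer is gone, unlock and retry.
		 */
		pthread_mutex_lock(&zstrm->lock);
		if (zstrm->buffer)
			return zstrm;
		pthread_mutex_unlock(&zstrm->lock);
	}
}

static void stream_put(struct zstrm *zstrm)
{
	pthread_mutex_unlock(&zstrm->lock);
}

/* Mirrors zcomp_cpu_dead(): teardown serializes on the same mutex,
 * so it waits for any in-flight user of this stream to finish. */
static void stream_dead(struct zstrm *zstrm)
{
	pthread_mutex_lock(&zstrm->lock);
	free(zstrm->buffer);
	zstrm->buffer = NULL;
	pthread_mutex_unlock(&zstrm->lock);
}

static void streams_init(void)
{
	for (int i = 0; i < NR_STREAMS; i++) {
		pthread_mutex_init(&streams[i].lock, NULL);
		streams[i].buffer = malloc(4096);
	}
}
```

The key design point carried over from the patch: the stream is returned with
its mutex held, so teardown cannot free the buffer while a user holds the
stream, and the buffer check after mutex_lock() closes the window between
picking a stream and locking it.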