From patchwork Thu Feb 27 04:35:20 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13993732
From: Sergey Senozhatsky
To: Andrew Morton
Cc: Yosry Ahmed, Hillf Danton, Kairui Song, Sebastian Andrzej Siewior,
    Minchan Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Sergey Senozhatsky
Subject: [PATCH v9 02/19] zram: permit preemption with active compression stream
Date: Thu, 27 Feb 2025 13:35:20 +0900
Message-ID: <20250227043618.88380-3-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.658.g4767266eb4-goog
In-Reply-To: <20250227043618.88380-1-senozhatsky@chromium.org>
References: <20250227043618.88380-1-senozhatsky@chromium.org>
Currently, per-CPU stream access is done from a non-preemptible
(atomic) section, which imposes the same atomicity requirements on
compression backends as the entry spin-lock does, and makes it
impossible to use algorithms that can schedule/wait/sleep during
compression and decompression.

Switch to a preemptible per-CPU model, similar to the one used in
zswap.  Instead of a per-CPU local lock, each stream carries a mutex
which is locked for the entire time zram uses it for compression or
decompression, so that the cpu-dead event waits for zram to stop
using a particular per-CPU stream and release it.

Suggested-by: Yosry Ahmed
Reviewed-by: Yosry Ahmed
Signed-off-by: Sergey Senozhatsky
---
A minimal caller-side sketch of the new zcomp_stream_get()/
zcomp_stream_put() usage follows the diff below, for illustration.

 drivers/block/zram/zcomp.c    | 41 +++++++++++++++++++++++++----------
 drivers/block/zram/zcomp.h    |  6 ++---
 drivers/block/zram/zram_drv.c | 20 ++++++++---------
 3 files changed, 42 insertions(+), 25 deletions(-)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index bb514403e305..53e4c37441be 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -6,7 +6,7 @@
 #include
 #include
 #include
-#include
+#include
 #include
 #include
 
@@ -109,13 +109,29 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
 {
-	local_lock(&comp->stream->lock);
-	return this_cpu_ptr(comp->stream);
+	for (;;) {
+		struct zcomp_strm *zstrm = raw_cpu_ptr(comp->stream);
+
+		/*
+		 * Inspired by zswap
+		 *
+		 * stream is returned with ->mutex locked which prevents
+		 * cpu_dead() from releasing this stream under us, however
+		 * there is still a race window between raw_cpu_ptr() and
+		 * mutex_lock(), during which we could have been migrated
+		 * from a CPU that has already destroyed its stream. If
+		 * so then unlock and re-try on the current CPU.
+		 */
+		mutex_lock(&zstrm->lock);
+		if (likely(zstrm->buffer))
+			return zstrm;
+		mutex_unlock(&zstrm->lock);
+	}
 }
 
-void zcomp_stream_put(struct zcomp *comp)
+void zcomp_stream_put(struct zcomp_strm *zstrm)
 {
-	local_unlock(&comp->stream->lock);
+	mutex_unlock(&zstrm->lock);
 }
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
@@ -151,12 +167,9 @@ int zcomp_decompress(struct zcomp *comp, struct zcomp_strm *zstrm,
 int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
 {
 	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
-	struct zcomp_strm *zstrm;
+	struct zcomp_strm *zstrm = per_cpu_ptr(comp->stream, cpu);
 	int ret;
 
-	zstrm = per_cpu_ptr(comp->stream, cpu);
-	local_lock_init(&zstrm->lock);
-
 	ret = zcomp_strm_init(comp, zstrm);
 	if (ret)
 		pr_err("Can't allocate a compression stream\n");
@@ -166,16 +179,17 @@ int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
 int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)
 {
 	struct zcomp *comp = hlist_entry(node, struct zcomp, node);
-	struct zcomp_strm *zstrm;
+	struct zcomp_strm *zstrm = per_cpu_ptr(comp->stream, cpu);
 
-	zstrm = per_cpu_ptr(comp->stream, cpu);
+	mutex_lock(&zstrm->lock);
 	zcomp_strm_free(comp, zstrm);
+	mutex_unlock(&zstrm->lock);
 	return 0;
 }
 
 static int zcomp_init(struct zcomp *comp, struct zcomp_params *params)
 {
-	int ret;
+	int ret, cpu;
 
 	comp->stream = alloc_percpu(struct zcomp_strm);
 	if (!comp->stream)
@@ -186,6 +200,9 @@ static int zcomp_init(struct zcomp *comp, struct zcomp_params *params)
 	if (ret)
 		goto cleanup;
 
+	for_each_possible_cpu(cpu)
+		mutex_init(&per_cpu_ptr(comp->stream, cpu)->lock);
+
 	ret = cpuhp_state_add_instance(CPUHP_ZCOMP_PREPARE, &comp->node);
 	if (ret < 0)
 		goto cleanup;
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index ad5762813842..23b8236b9090 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -3,7 +3,7 @@
 #ifndef _ZCOMP_H_
 #define _ZCOMP_H_
 
-#include
+#include
 
 #define ZCOMP_PARAM_NO_LEVEL	INT_MIN
@@ -31,7 +31,7 @@ struct zcomp_ctx {
 };
 
 struct zcomp_strm {
-	local_lock_t lock;
+	struct mutex lock;
 	/* compression buffer */
 	void *buffer;
 	struct zcomp_ctx ctx;
@@ -77,7 +77,7 @@ struct zcomp *zcomp_create(const char *alg, struct zcomp_params *params);
 void zcomp_destroy(struct zcomp *comp);
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp);
-void zcomp_stream_put(struct zcomp *comp);
+void zcomp_stream_put(struct zcomp_strm *zstrm);
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
 		   const void *src, unsigned int *dst_len);
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index ddf03f6cbeed..545e64ee6234 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1607,7 +1607,7 @@ static int read_compressed_page(struct zram *zram, struct page *page, u32 index)
 	ret = zcomp_decompress(zram->comps[prio], zstrm, src, size, dst);
 	kunmap_local(dst);
 	zs_unmap_object(zram->mem_pool, handle);
-	zcomp_stream_put(zram->comps[prio]);
+	zcomp_stream_put(zstrm);
 
 	return ret;
 }
@@ -1768,14 +1768,14 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	kunmap_local(mem);
 
 	if (unlikely(ret)) {
-		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+		zcomp_stream_put(zstrm);
 		pr_err("Compression failed! err=%d\n", ret);
err=%d\n", ret); zs_free(zram->mem_pool, handle); return ret; } if (comp_len >= huge_class_size) { - zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); + zcomp_stream_put(zstrm); return write_incompressible_page(zram, page, index); } @@ -1799,7 +1799,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) __GFP_HIGHMEM | __GFP_MOVABLE); if (IS_ERR_VALUE(handle)) { - zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); + zcomp_stream_put(zstrm); atomic64_inc(&zram->stats.writestall); handle = zs_malloc(zram->mem_pool, comp_len, GFP_NOIO | __GFP_HIGHMEM | @@ -1811,7 +1811,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) } if (!zram_can_store_page(zram)) { - zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); + zcomp_stream_put(zstrm); zs_free(zram->mem_pool, handle); return -ENOMEM; } @@ -1819,7 +1819,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index) dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO); memcpy(dst, zstrm->buffer, comp_len); - zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]); + zcomp_stream_put(zstrm); zs_unmap_object(zram->mem_pool, handle); zram_slot_lock(zram, index); @@ -1978,7 +1978,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, kunmap_local(src); if (ret) { - zcomp_stream_put(zram->comps[prio]); + zcomp_stream_put(zstrm); return ret; } @@ -1988,7 +1988,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, /* Continue until we make progress */ if (class_index_new >= class_index_old || (threshold && comp_len_new >= threshold)) { - zcomp_stream_put(zram->comps[prio]); + zcomp_stream_put(zstrm); continue; } @@ -2046,13 +2046,13 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page, __GFP_HIGHMEM | __GFP_MOVABLE); if (IS_ERR_VALUE(handle_new)) { - zcomp_stream_put(zram->comps[prio]); + zcomp_stream_put(zstrm); return PTR_ERR((void *)handle_new); } dst = zs_map_object(zram->mem_pool, handle_new, ZS_MM_WO); memcpy(dst, zstrm->buffer, comp_len_new); - zcomp_stream_put(zram->comps[prio]); + zcomp_stream_put(zstrm); zs_unmap_object(zram->mem_pool, handle_new);