From patchwork Mon Mar 3 02:03:14 2025
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 13998055
From: Sergey Senozhatsky
To: Andrew Morton
Cc: Yosry Ahmed, Hillf Danton, Kairui Song, Sebastian Andrzej Siewior,
    Minchan Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Sergey Senozhatsky
Subject: [PATCH v10 05/19] zram: remove second stage of handle allocation
Date: Mon, 3 Mar 2025 11:03:14 +0900
Message-ID: <20250303022425.285971-6-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog
In-Reply-To: <20250303022425.285971-1-senozhatsky@chromium.org>
References: <20250303022425.285971-1-senozhatsky@chromium.org>
MIME-Version: 1.0
Previously zram write() was atomic, which required us to pass
__GFP_KSWAPD_RECLAIM to the zsmalloc handle allocation on the fast path
and to attempt a slow-path allocation (with recompression) if the fast
path failed.

Since we are no longer in atomic context we can permit direct reclaim
during handle allocation and hence can have a single allocation path.
With the slow path gone we never unlock the per-CPU stream (and never
lose the compressed data), so there is no need to recompress, which
should reduce CPU and battery usage.

Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zram_drv.c | 39 +++++++--------------------------------
 1 file changed, 7 insertions(+), 32 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 93cedc60ac16..f043f35b17a4 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1723,11 +1723,11 @@ static int write_incompressible_page(struct zram *zram, struct page *page,
 static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 {
 	int ret = 0;
-	unsigned long handle = -ENOMEM;
-	unsigned int comp_len = 0;
+	unsigned long handle;
+	unsigned int comp_len;
 	void *dst, *mem;
 	struct zcomp_strm *zstrm;
-	unsigned long element = 0;
+	unsigned long element;
 	bool same_filled;
 
 	/* First, free memory allocated to this slot (if any) */
@@ -1741,7 +1741,6 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	if (same_filled)
 		return write_same_filled_page(zram, element, index);
 
-compress_again:
 	zstrm = zcomp_stream_get(zram->comps[ZRAM_PRIMARY_COMP]);
 	mem = kmap_local_page(page);
 	ret = zcomp_compress(zram->comps[ZRAM_PRIMARY_COMP], zstrm,
@@ -1751,7 +1750,6 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	if (unlikely(ret)) {
 		zcomp_stream_put(zstrm);
 		pr_err("Compression failed! err=%d\n", ret);
-		zs_free(zram->mem_pool, handle);
 		return ret;
 	}
 
@@ -1760,35 +1758,12 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 		return write_incompressible_page(zram, page, index);
 	}
 
-	/*
-	 * handle allocation has 2 paths:
-	 * a) fast path is executed with preemption disabled (for
-	 *    per-cpu streams) and has __GFP_DIRECT_RECLAIM bit clear,
-	 *    since we can't sleep;
-	 * b) slow path enables preemption and attempts to allocate
-	 *    the page with __GFP_DIRECT_RECLAIM bit set. we have to
-	 *    put per-cpu compression stream and, thus, to re-do
-	 *    the compression once handle is allocated.
-	 *
-	 * if we have a 'non-null' handle here then we are coming
-	 * from the slow path and handle has already been allocated.
-	 */
-	if (IS_ERR_VALUE(handle))
-		handle = zs_malloc(zram->mem_pool, comp_len,
-				__GFP_KSWAPD_RECLAIM |
-				__GFP_NOWARN |
-				__GFP_HIGHMEM |
-				__GFP_MOVABLE);
+	handle = zs_malloc(zram->mem_pool, comp_len,
+			GFP_NOIO | __GFP_NOWARN |
+			__GFP_HIGHMEM | __GFP_MOVABLE);
 	if (IS_ERR_VALUE(handle)) {
 		zcomp_stream_put(zstrm);
-		atomic64_inc(&zram->stats.writestall);
-		handle = zs_malloc(zram->mem_pool, comp_len,
-				GFP_NOIO | __GFP_HIGHMEM |
-				__GFP_MOVABLE);
-		if (IS_ERR_VALUE(handle))
-			return PTR_ERR((void *)handle);
-
-		goto compress_again;
+		return PTR_ERR((void *)handle);
 	}
 
 	if (!zram_can_store_page(zram)) {