From patchwork Fri Feb 14 04:50:17 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sergey Senozhatsky <senozhatsky@chromium.org>
X-Patchwork-Id: 13974467
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton
Cc: Yosry Ahmed, Hillf Danton, Kairui Song, Minchan Kim,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Sergey Senozhatsky <senozhatsky@chromium.org>
Subject: [PATCH v6 05/17] zram: remove two-staged handle allocation
Date: Fri, 14 Feb 2025 13:50:17 +0900
Message-ID: <20250214045208.1388854-6-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.48.1.601.g30ceb7b040-goog
In-Reply-To: <20250214045208.1388854-1-senozhatsky@chromium.org>
References: <20250214045208.1388854-1-senozhatsky@chromium.org>
MIME-Version: 1.0
Previously zram write() was atomic, which required us to pass
__GFP_KSWAPD_RECLAIM to zsmalloc handle allocation on the fast
path and attempt a slow-path allocation (with recompression)
when the fast path failed.  Since write() is not atomic anymore,
we can permit direct reclaim during handle allocation, remove
the fast allocation path and also drop the recompression path
(which should reduce CPU/battery usage).

Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 drivers/block/zram/zram_drv.c | 38 ++++++-----------------------------
 1 file changed, 6 insertions(+), 32 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index cc4afa01b281..b6bb52c49990 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1766,11 +1766,11 @@ static int write_incompressible_page(struct zram *zram, struct page *page,
 static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 {
 	int ret = 0;
-	unsigned long handle = -ENOMEM;
-	unsigned int comp_len = 0;
+	unsigned long handle;
+	unsigned int comp_len;
 	void *dst, *mem;
 	struct zcomp_strm *zstrm;
-	unsigned long element = 0;
+	unsigned long element;
 	bool same_filled;
 
 	/* First, free memory allocated to this slot (if any) */
@@ -1784,7 +1784,6 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	if (same_filled)
 		return write_same_filled_page(zram, element, index);
 
-compress_again:
 	zstrm = zcomp_stream_get(zram->comps[ZRAM_PRIMARY_COMP]);
 	mem = kmap_local_page(page);
 	ret = zcomp_compress(zram->comps[ZRAM_PRIMARY_COMP], zstrm,
@@ -1794,7 +1793,6 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	if (unlikely(ret)) {
 		zcomp_stream_put(zstrm);
 		pr_err("Compression failed! err=%d\n", ret);
-		zs_free(zram->mem_pool, handle);
 		return ret;
 	}
 
@@ -1803,35 +1801,11 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 		return write_incompressible_page(zram, page, index);
 	}
 
-	/*
-	 * handle allocation has 2 paths:
-	 * a) fast path is executed with preemption disabled (for
-	 *  per-cpu streams) and has __GFP_DIRECT_RECLAIM bit clear,
-	 *  since we can't sleep;
-	 * b) slow path enables preemption and attempts to allocate
-	 *  the page with __GFP_DIRECT_RECLAIM bit set. we have to
-	 *  put per-cpu compression stream and, thus, to re-do
-	 *  the compression once handle is allocated.
-	 *
-	 * if we have a 'non-null' handle here then we are coming
-	 * from the slow path and handle has already been allocated.
-	 */
-	if (IS_ERR_VALUE(handle))
-		handle = zs_malloc(zram->mem_pool, comp_len,
-				__GFP_KSWAPD_RECLAIM |
-				__GFP_NOWARN |
-				__GFP_HIGHMEM |
-				__GFP_MOVABLE);
+	handle = zs_malloc(zram->mem_pool, comp_len,
+			   GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);
 	if (IS_ERR_VALUE(handle)) {
 		zcomp_stream_put(zstrm);
-		atomic64_inc(&zram->stats.writestall);
-		handle = zs_malloc(zram->mem_pool, comp_len,
-				GFP_NOIO | __GFP_HIGHMEM |
-				__GFP_MOVABLE);
-		if (IS_ERR_VALUE(handle))
-			return PTR_ERR((void *)handle);
-
-		goto compress_again;
+		return PTR_ERR((void *)handle);
 	}
 
 	if (!zram_can_store_page(zram)) {
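
[Not part of the patch; an illustrative reassembly for readers of the
archive.] With the hunks above applied, the handle-allocation section of
zram_write_page() reduces to a single zs_malloc() attempt. The snippet
below stitches the added and surviving context lines together, with
comments restating the rationale from the changelog; identifiers and GFP
flags are exactly those in the diff.

	/*
	 * Post-patch allocation flow in zram_write_page(), condensed from
	 * the diff above.  With zram write() no longer atomic, one
	 * zs_malloc() call that allows direct reclaim (GFP_NOIO) replaces
	 * the old two-stage fast/slow attempt and the compress_again
	 * recompression loop.
	 */
	handle = zs_malloc(zram->mem_pool, comp_len,
			   GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);
	if (IS_ERR_VALUE(handle)) {
		/*
		 * Allocation failed even with direct reclaim permitted:
		 * put the per-CPU compression stream back and return the
		 * error; there is no second allocation attempt or
		 * recompression anymore.
		 */
		zcomp_stream_put(zstrm);
		return PTR_ERR((void *)handle);
	}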