From patchwork Wed Dec 18 06:34:21 2024
X-Patchwork-Submitter: Sergey Senozhatsky <senozhatsky@chromium.org>
X-Patchwork-Id: 13913087
From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton
Cc: Minchan Kim, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Sergey Senozhatsky
Subject: [PATCHv2 4/7] zram: factor out ZRAM_HUGE write
Date: Wed, 18 Dec 2024 15:34:21 +0900
Message-ID: <20241218063513.297475-5-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
In-Reply-To: <20241218063513.297475-1-senozhatsky@chromium.org>
References: <20241218063513.297475-1-senozhatsky@chromium.org>
MIME-Version: 1.0

zram_write_page() currently handles three cases: ZRAM_SAME page stores
(already factored out), regular page stores, and ZRAM_HUGE page stores.
The ZRAM_HUGE handling adds a significant amount of complexity, so
handle ZRAM_HUGE in a separate function instead. This allows us to
simplify the zs_handle allocation slow path, as it no longer needs to
handle the ZRAM_HUGE case. ZRAM_HUGE zs_handle allocation, on the other
hand, can now drop __GFP_KSWAPD_RECLAIM, because ZRAM_HUGE is handled
in a preemptible context (outside of the local-lock scope).

Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
 drivers/block/zram/zram_drv.c | 136 +++++++++++++++++++++-------------
 1 file changed, 83 insertions(+), 53 deletions(-)
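As an aside for reviewers (not part of the patch): update_used_max(),
moved up so that zram_can_store_page() can call it, is the usual
lock-free "racy maximum" loop. Below is a minimal userspace rendition
of the same idea using C11 atomics; all names are invented for the
illustration.

	#include <stdatomic.h>
	#include <stdio.h>

	static _Atomic unsigned long max_used_pages;

	static void update_used_max_sketch(unsigned long pages)
	{
		unsigned long cur_max = atomic_load(&max_used_pages);

		do {
			/* A concurrent writer may already have raised the max. */
			if (cur_max >= pages)
				return;
			/*
			 * On failure cur_max is refreshed with the current
			 * value, so the loop re-checks against the latest max.
			 */
		} while (!atomic_compare_exchange_weak(&max_used_pages,
						       &cur_max, pages));
	}

	int main(void)
	{
		update_used_max_sketch(128);
		update_used_max_sketch(64);	/* no-op: 128 is larger */
		printf("%lu\n", atomic_load(&max_used_pages));	/* 128 */
		return 0;
	}

The weak compare-exchange may fail spuriously, which is fine here:
the loop simply re-reads the current maximum and retries.
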
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 89f3aaa23329..1339776bc6c5 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -132,6 +132,27 @@ static inline bool zram_allocated(struct zram *zram, u32 index)
 		zram_test_flag(zram, index, ZRAM_WB);
 }
 
+static inline void update_used_max(struct zram *zram, const unsigned long pages)
+{
+	unsigned long cur_max = atomic_long_read(&zram->stats.max_used_pages);
+
+	do {
+		if (cur_max >= pages)
+			return;
+	} while (!atomic_long_try_cmpxchg(&zram->stats.max_used_pages,
+					  &cur_max, pages));
+}
+
+static bool zram_can_store_page(struct zram *zram)
+{
+	unsigned long alloced_pages;
+
+	alloced_pages = zs_get_total_pages(zram->mem_pool);
+	update_used_max(zram, alloced_pages);
+
+	return !zram->limit_pages || alloced_pages <= zram->limit_pages;
+}
+
 #if PAGE_SIZE != 4096
 static inline bool is_partial_io(struct bio_vec *bvec)
 {
@@ -266,18 +287,6 @@ static struct zram_pp_slot *select_pp_slot(struct zram_pp_ctl *ctl)
 }
 #endif
 
-static inline void update_used_max(struct zram *zram,
-				   const unsigned long pages)
-{
-	unsigned long cur_max = atomic_long_read(&zram->stats.max_used_pages);
-
-	do {
-		if (cur_max >= pages)
-			return;
-	} while (!atomic_long_try_cmpxchg(&zram->stats.max_used_pages,
-					  &cur_max, pages));
-}
-
 static inline void zram_fill_page(void *ptr, unsigned long len,
 				  unsigned long value)
 {
@@ -1638,13 +1647,54 @@ static int write_same_filled_page(struct zram *zram, unsigned long fill,
 	return 0;
 }
 
+static int write_incompressible_page(struct zram *zram, struct page *page,
+				     u32 index)
+{
+	unsigned long handle;
+	void *src, *dst;
+
+	/*
+	 * This function is called from preemptible context so we don't need
+	 * to do optimistic and fallback to pessimistic handle allocation,
+	 * like we do for compressible pages.
+	 */
+	handle = zs_malloc(zram->mem_pool, PAGE_SIZE,
+			   GFP_NOIO | __GFP_HIGHMEM | __GFP_MOVABLE);
+	if (IS_ERR_VALUE(handle))
+		return PTR_ERR((void *)handle);
+
+	if (!zram_can_store_page(zram)) {
+		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+		zs_free(zram->mem_pool, handle);
+		return -ENOMEM;
+	}
+
+	dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
+	src = kmap_local_page(page);
+	memcpy(dst, src, PAGE_SIZE);
+	kunmap_local(src);
+	zs_unmap_object(zram->mem_pool, handle);
+
+	zram_slot_lock(zram, index);
+	zram_set_flag(zram, index, ZRAM_HUGE);
+	zram_set_handle(zram, index, handle);
+	zram_set_obj_size(zram, index, PAGE_SIZE);
+	zram_slot_unlock(zram, index);
+
+	atomic64_add(PAGE_SIZE, &zram->stats.compr_data_size);
+	atomic64_inc(&zram->stats.huge_pages);
+	atomic64_inc(&zram->stats.huge_pages_since);
+	atomic64_inc(&zram->stats.pages_stored);
+
+	return 0;
+}
+
 static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 {
 	int ret = 0;
-	unsigned long alloced_pages;
 	unsigned long handle = -ENOMEM;
 	unsigned int comp_len = 0;
-	void *src, *dst, *mem;
+	void *dst, *mem;
 	struct zcomp_strm *zstrm;
 	unsigned long element = 0;
 	bool same_filled;
@@ -1662,10 +1712,10 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 
 compress_again:
 	zstrm = zcomp_stream_get(zram->comps[ZRAM_PRIMARY_COMP]);
-	src = kmap_local_page(page);
+	mem = kmap_local_page(page);
 	ret = zcomp_compress(zram->comps[ZRAM_PRIMARY_COMP], zstrm,
-			     src, &comp_len);
-	kunmap_local(src);
+			     mem, &comp_len);
+	kunmap_local(mem);
 
 	if (unlikely(ret)) {
 		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
@@ -1674,8 +1724,11 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 		return ret;
 	}
 
-	if (comp_len >= huge_class_size)
-		comp_len = PAGE_SIZE;
+	if (comp_len >= huge_class_size) {
+		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+		return write_incompressible_page(zram, page, index);
+	}
+
 	/*
 	 * handle allocation has 2 paths:
 	 * a) fast path is executed with preemption disabled (for
@@ -1691,35 +1744,23 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	 */
 	if (IS_ERR_VALUE(handle))
 		handle = zs_malloc(zram->mem_pool, comp_len,
-				   __GFP_KSWAPD_RECLAIM |
-				   __GFP_NOWARN |
-				   __GFP_HIGHMEM |
-				   __GFP_MOVABLE);
+				__GFP_KSWAPD_RECLAIM |
+				__GFP_NOWARN |
+				__GFP_HIGHMEM |
+				__GFP_MOVABLE);
 	if (IS_ERR_VALUE(handle)) {
 		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
 		atomic64_inc(&zram->stats.writestall);
 		handle = zs_malloc(zram->mem_pool, comp_len,
-				   GFP_NOIO | __GFP_HIGHMEM |
-				   __GFP_MOVABLE);
+				GFP_NOIO | __GFP_HIGHMEM |
+				__GFP_MOVABLE);
 		if (IS_ERR_VALUE(handle))
 			return PTR_ERR((void *)handle);
 
-		if (comp_len != PAGE_SIZE)
-			goto compress_again;
-		/*
-		 * If the page is not compressible, you need to acquire the
-		 * lock and execute the code below. The zcomp_stream_get()
-		 * call is needed to disable the cpu hotplug and grab the
-		 * zstrm buffer back. It is necessary that the dereferencing
-		 * of the zstrm variable below occurs correctly.
-		 */
-		zstrm = zcomp_stream_get(zram->comps[ZRAM_PRIMARY_COMP]);
+		goto compress_again;
 	}
 
-	alloced_pages = zs_get_total_pages(zram->mem_pool);
-	update_used_max(zram, alloced_pages);
-
-	if (zram->limit_pages && alloced_pages > zram->limit_pages) {
+	if (!zram_can_store_page(zram)) {
 		zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
 		zs_free(zram->mem_pool, handle);
 		return -ENOMEM;
@@ -1727,30 +1768,19 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 
 	dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
 
-	src = zstrm->buffer;
-	if (comp_len == PAGE_SIZE)
-		src = kmap_local_page(page);
-	memcpy(dst, src, comp_len);
-	if (comp_len == PAGE_SIZE)
-		kunmap_local(src);
-
+	memcpy(dst, zstrm->buffer, comp_len);
 	zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
 	zs_unmap_object(zram->mem_pool, handle);
-	atomic64_add(comp_len, &zram->stats.compr_data_size);
 
 	zram_slot_lock(zram, index);
-	if (comp_len == PAGE_SIZE) {
-		zram_set_flag(zram, index, ZRAM_HUGE);
-		atomic64_inc(&zram->stats.huge_pages);
-		atomic64_inc(&zram->stats.huge_pages_since);
-	}
-
 	zram_set_handle(zram, index, handle);
 	zram_set_obj_size(zram, index, comp_len);
 	zram_slot_unlock(zram, index);
 
 	/* Update stats */
 	atomic64_inc(&zram->stats.pages_stored);
+	atomic64_add(comp_len, &zram->stats.compr_data_size);
+
 	return ret;
}
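
One more aside (again not part of the patch): the slow path that remains
for compressible pages keeps the "compress, try a non-sleeping
allocation, on failure re-enable preemption, allocate with reclaim
allowed, re-compress" dance. A self-contained toy of just that control
flow, with every name invented for the illustration, only the shape of
the loop mirrors the kernel code:

	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>

	static bool atomic_pool_empty = true;	/* force the slow path once */

	/* Stand-in for zs_malloc() with __GFP_KSWAPD_RECLAIM (cannot sleep). */
	static void *toy_alloc_atomic(size_t len)
	{
		return atomic_pool_empty ? NULL : malloc(len);
	}

	/* Stand-in for zs_malloc() with GFP_NOIO (may sleep and reclaim). */
	static void *toy_alloc_sleepable(size_t len)
	{
		atomic_pool_empty = false;	/* "reclaim" succeeded */
		return malloc(len);
	}

	static int store_compressible(size_t comp_len)
	{
		void *handle = NULL;

	compress_again:
		/* zcomp_stream_get() would disable preemption here. */
		if (!handle) {
			handle = toy_alloc_atomic(comp_len);	/* fast path */
			if (!handle) {
				/*
				 * zcomp_stream_put() re-enables preemption, so
				 * the allocation below may sleep; the per-CPU
				 * buffer is gone, hence the re-compression via
				 * the label above.
				 */
				handle = toy_alloc_sleepable(comp_len);
				if (!handle)
					return -1;
				goto compress_again;
			}
		}
		/* ... copy compressed data, zcomp_stream_put(), set flags ... */
		free(handle);
		return 0;
	}

	int main(void)
	{
		printf("store: %d\n", store_compressible(64));
		return 0;
	}

The point of the patch is that incompressible pages no longer enter
this loop at all: write_incompressible_page() runs in a preemptible
context and can allocate with GFP_NOIO directly.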