From patchwork Tue Mar 25 03:42:04 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 14028018
From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim
Cc: Brian Geffon, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Sergey Senozhatsky
Subject: [PATCH] zram: modernize writeback interface
Date: Tue, 25 Mar 2025 12:42:04 +0900
Message-ID: <20250325034210.3337080-1-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.49.0.395.g12beb8f557-goog
MIME-Version: 1.0
The writeback interface supports a page_index=N parameter which performs
writeback of the given page. Since we rarely need to write back just a
single page, the typical use case involves a number of writeback calls,
each performing writeback of one page:

	echo page_index=100 > zram0/writeback
	...
	echo page_index=200 > zram0/writeback
	echo page_index=500 > zram0/writeback
	...
	echo page_index=700 > zram0/writeback

One obvious downside of this is that it increases the number of syscalls.
A less obvious, but significantly more important, downside is that when
given only one page to post-process, zram cannot perform optimal target
selection. This becomes a critical limitation when writeback_limit is
enabled, because under writeback_limit we want to guarantee the highest
memory savings, so we need to write back first the pages that release the
largest amount of zsmalloc pool memory.

This patch adds a page_index_range=LOW-HIGH parameter to the writeback
interface:

	echo page_index_range=100-200 \
		page_index_range=500-700 > zram0/writeback

This gives zram a chance to apply an optimal target selection strategy on
each iteration of the writeback loop.

Apart from that, the patch also unifies parameter passing so that it
resembles other "modern" zram device attributes (e.g. recompression),
whereas the old interface used a mixed scheme: value-less parameters for
the writeback mode and a key=value format for page_index. We still
support the old value-less format for compatibility reasons.

Signed-off-by: Sergey Senozhatsky
---
 Documentation/admin-guide/blockdev/zram.rst |  11 +
 drivers/block/zram/zram_drv.c               | 321 +++++++++++++-------
 2 files changed, 227 insertions(+), 105 deletions(-)

diff --git a/Documentation/admin-guide/blockdev/zram.rst b/Documentation/admin-guide/blockdev/zram.rst
index 9bdb30901a93..9dca86365a4d 100644
--- a/Documentation/admin-guide/blockdev/zram.rst
+++ b/Documentation/admin-guide/blockdev/zram.rst
@@ -369,6 +369,17 @@ they could write a page index into the interface::
 
 	echo "page_index=1251" > /sys/block/zramX/writeback
 
+In Linux 6.16 this interface underwent some rework. First, the interface
+now supports a `key=value` format for all of its parameters (`type=huge_idle`,
+etc.). Second, support for `page_index_range` was introduced, which
+specifies a `LOW-HIGH` range (or ranges) of pages to be written back. This
+reduces the number of syscalls, but, more importantly, it enables an optimal
+post-processing target selection strategy. Usage example::
+
+	echo "type=idle" > /sys/block/zramX/writeback
+	echo "page_index_range=1-100 page_index_range=200-300" > \
+		/sys/block/zramX/writeback
+
 If there are lots of write IO with flash device, potentially, it has
 flash wearout problem so that admin needs to design write limitation
 to guarantee storage health for entire product life.
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index fda7d8624889..2c39d12bd2d4 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -734,114 +734,19 @@ static void read_from_bdev_async(struct zram *zram, struct page *page,
 	submit_bio(bio);
 }
 
-#define PAGE_WB_SIG "page_index="
-
-#define PAGE_WRITEBACK 0
-#define HUGE_WRITEBACK (1<<0)
-#define IDLE_WRITEBACK (1<<1)
-#define INCOMPRESSIBLE_WRITEBACK (1<<2)
-
-static int scan_slots_for_writeback(struct zram *zram, u32 mode,
-				    unsigned long nr_pages,
-				    unsigned long index,
-				    struct zram_pp_ctl *ctl)
+static int zram_writeback_slots(struct zram *zram, struct zram_pp_ctl *ctl)
 {
-	for (; nr_pages != 0; index++, nr_pages--) {
-		bool ok = true;
-
-		zram_slot_lock(zram, index);
-		if (!zram_allocated(zram, index))
-			goto next;
-
-		if (zram_test_flag(zram, index, ZRAM_WB) ||
-		    zram_test_flag(zram, index, ZRAM_SAME))
-			goto next;
-
-		if (mode & IDLE_WRITEBACK &&
-		    !zram_test_flag(zram, index, ZRAM_IDLE))
-			goto next;
-		if (mode & HUGE_WRITEBACK &&
-		    !zram_test_flag(zram, index, ZRAM_HUGE))
-			goto next;
-		if (mode & INCOMPRESSIBLE_WRITEBACK &&
-		    !zram_test_flag(zram, index, ZRAM_INCOMPRESSIBLE))
-			goto next;
-
-		ok = place_pp_slot(zram, ctl, index);
-next:
-		zram_slot_unlock(zram, index);
-		if (!ok)
-			break;
-	}
-
-	return 0;
-}
-
-static ssize_t writeback_store(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t len)
-{
-	struct zram *zram = dev_to_zram(dev);
-	unsigned long nr_pages = zram->disksize >> PAGE_SHIFT;
-	struct zram_pp_ctl *ctl = NULL;
+	unsigned long blk_idx = 0;
+	struct page *page = NULL;
 	struct zram_pp_slot *pps;
-	unsigned long index = 0;
-	struct bio bio;
 	struct bio_vec bio_vec;
-	struct page *page = NULL;
-	ssize_t ret = len;
-	int mode, err;
-	unsigned long blk_idx = 0;
-
-	if (sysfs_streq(buf, "idle"))
-		mode = IDLE_WRITEBACK;
-	else if (sysfs_streq(buf, "huge"))
-		mode = HUGE_WRITEBACK;
-	else if (sysfs_streq(buf, "huge_idle"))
-		mode = IDLE_WRITEBACK | HUGE_WRITEBACK;
-	else if (sysfs_streq(buf, "incompressible"))
-		mode = INCOMPRESSIBLE_WRITEBACK;
-	else {
-		if (strncmp(buf, PAGE_WB_SIG, sizeof(PAGE_WB_SIG) - 1))
-			return -EINVAL;
-
-		if (kstrtol(buf + sizeof(PAGE_WB_SIG) - 1, 10, &index) ||
-				index >= nr_pages)
-			return -EINVAL;
-
-		nr_pages = 1;
-		mode = PAGE_WRITEBACK;
-	}
-
-	down_read(&zram->init_lock);
-	if (!init_done(zram)) {
-		ret = -EINVAL;
-		goto release_init_lock;
-	}
-
-	/* Do not permit concurrent post-processing actions. */
-	if (atomic_xchg(&zram->pp_in_progress, 1)) {
-		up_read(&zram->init_lock);
-		return -EAGAIN;
-	}
-
-	if (!zram->backing_dev) {
-		ret = -ENODEV;
-		goto release_init_lock;
-	}
+	struct bio bio;
+	int ret, err;
+	u32 index;
 
 	page = alloc_page(GFP_KERNEL);
-	if (!page) {
-		ret = -ENOMEM;
-		goto release_init_lock;
-	}
-
-	ctl = init_pp_ctl();
-	if (!ctl) {
-		ret = -ENOMEM;
-		goto release_init_lock;
-	}
-
-	scan_slots_for_writeback(zram, mode, nr_pages, index, ctl);
+	if (!page)
+		return -ENOMEM;
 
 	while ((pps = select_pp_slot(ctl))) {
 		spin_lock(&zram->wb_limit_lock);
@@ -929,10 +834,216 @@ static ssize_t writeback_store(struct device *dev,
 	if (blk_idx)
 		free_block_bdev(zram, blk_idx);
-
-release_init_lock:
 	if (page)
 		__free_page(page);
+
+	return ret;
+}
+
+#define PAGE_WRITEBACK 0
+#define HUGE_WRITEBACK (1 << 0)
+#define IDLE_WRITEBACK (1 << 1)
+#define INCOMPRESSIBLE_WRITEBACK (1 << 2)
+
+static int parse_page_index(char *val, unsigned long nr_pages,
+			    unsigned long *lo, unsigned long *hi)
+{
+	int ret;
+
+	ret = kstrtoul(val, 10, lo);
+	if (ret)
+		return ret;
+	*hi = *lo + 1;
+	if (*lo >= nr_pages || *hi > nr_pages)
+		return -ERANGE;
+	return 0;
+}
+
+static int parse_page_index_range(char *val, unsigned long nr_pages,
+				  unsigned long *lo, unsigned long *hi)
+{
+	char *delim;
+	int ret;
+
+	delim = strchr(val, '-');
+	if (!delim)
+		return -EINVAL;
+
+	*delim = 0x00;
+	ret = kstrtoul(val, 10, lo);
+	if (ret)
+		return ret;
+	if (*lo >= nr_pages)
+		return -ERANGE;
+
+	ret = kstrtoul(delim + 1, 10, hi);
+	if (ret)
+		return ret;
+	if (*hi >= nr_pages || *lo > *hi)
+		return -ERANGE;
+	*hi += 1;
+	return 0;
+}
+
+static int parse_mode(char *val, u32 *mode)
+{
+	*mode = 0;
+
+	if (!strcmp(val, "idle"))
+		*mode = IDLE_WRITEBACK;
+	if (!strcmp(val, "huge"))
+		*mode = HUGE_WRITEBACK;
+	if (!strcmp(val, "huge_idle"))
+		*mode = IDLE_WRITEBACK | HUGE_WRITEBACK;
+	if (!strcmp(val, "incompressible"))
+		*mode = INCOMPRESSIBLE_WRITEBACK;
+
+	if (*mode == 0)
+		return -EINVAL;
+	return 0;
+}
+
+static int scan_slots_for_writeback(struct zram *zram, u32 mode,
+				    unsigned long lo, unsigned long hi,
+				    struct zram_pp_ctl *ctl)
+{
+	u32 index = lo;
+
+	while (index < hi) {
+		bool ok = true;
+
+		zram_slot_lock(zram, index);
+		if (!zram_allocated(zram, index))
+			goto next;
+
+		if (zram_test_flag(zram, index, ZRAM_WB) ||
+		    zram_test_flag(zram, index, ZRAM_SAME))
+			goto next;
+
+		if (mode & IDLE_WRITEBACK &&
+		    !zram_test_flag(zram, index, ZRAM_IDLE))
+			goto next;
+		if (mode & HUGE_WRITEBACK &&
+		    !zram_test_flag(zram, index, ZRAM_HUGE))
+			goto next;
+		if (mode & INCOMPRESSIBLE_WRITEBACK &&
+		    !zram_test_flag(zram, index, ZRAM_INCOMPRESSIBLE))
+			goto next;
+
+		ok = place_pp_slot(zram, ctl, index);
+next:
+		zram_slot_unlock(zram, index);
+		if (!ok)
+			break;
+		index++;
+	}
+
+	return 0;
+}
+
+static ssize_t writeback_store(struct device *dev,
+			       struct device_attribute *attr,
+			       const char *buf, size_t len)
+{
+	struct zram *zram = dev_to_zram(dev);
+	u64 nr_pages = zram->disksize >> PAGE_SHIFT;
+	unsigned long lo = 0, hi = nr_pages;
+	struct zram_pp_ctl *ctl = NULL;
+	char *args, *param, *val;
+	ssize_t ret = len;
+	int err, mode = 0;
+
+	down_read(&zram->init_lock);
+	if (!init_done(zram)) {
+		up_read(&zram->init_lock);
+		return -EINVAL;
+	}
+
+	/* Do not permit concurrent post-processing actions. */
+	if (atomic_xchg(&zram->pp_in_progress, 1)) {
+		up_read(&zram->init_lock);
+		return -EAGAIN;
+	}
+
+	if (!zram->backing_dev) {
+		ret = -ENODEV;
+		goto release_init_lock;
+	}
+
+	ctl = init_pp_ctl();
+	if (!ctl) {
+		ret = -ENOMEM;
+		goto release_init_lock;
+	}
+
+	args = skip_spaces(buf);
+	while (*args) {
+		args = next_arg(args, &param, &val);
+
+		/*
+		 * Workaround to support the old writeback interface.
+		 *
+		 * The old writeback interface has a minor inconsistency and
+		 * requires key=value only for page_index parameter, while the
+		 * writeback mode is a valueless parameter.
+		 *
+		 * This is not the case anymore and now all parameters are
+		 * required to have values, however, we need to support the
+		 * legacy writeback interface format so we check if we can
+		 * recognize a valueless parameter as the (legacy) writeback
+		 * mode.
+		 */
+		if (!val || !*val) {
+			err = parse_mode(param, &mode);
+			if (err) {
+				ret = err;
+				goto release_init_lock;
+			}
+
+			scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+			break;
+		}
+
+		if (!strcmp(param, "type")) {
+			err = parse_mode(val, &mode);
+			if (err) {
+				ret = err;
+				goto release_init_lock;
+			}
+
+			scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+			break;
+		}
+
+		if (!strcmp(param, "page_index")) {
+			err = parse_page_index(val, nr_pages, &lo, &hi);
+			if (err) {
+				ret = err;
+				goto release_init_lock;
+			}
+
+			scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+			break;
+		}
+
+		/* There can be several page index ranges */
+		if (!strcmp(param, "page_index_range")) {
+			err = parse_page_index_range(val, nr_pages, &lo, &hi);
+			if (err) {
+				ret = err;
+				goto release_init_lock;
+			}
+
+			scan_slots_for_writeback(zram, mode, lo, hi, ctl);
+			continue;
+		}
+	}
+
+	err = zram_writeback_slots(zram, ctl);
+	if (err)
+		ret = err;
+
+release_init_lock:
 	release_pp_ctl(zram, ctl);
 	atomic_set(&zram->pp_in_progress, 0);
 	up_read(&zram->init_lock);
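
For reference, a minimal usage sketch of the reworked interface, combining the
examples from the commit message and the Documentation hunk above; it assumes a
zram0 device with a backing device already configured, and the page indexes are
purely illustrative:

	echo "type=huge_idle" > /sys/block/zram0/writeback
	echo "page_index=1251" > /sys/block/zram0/writeback
	echo "page_index_range=100-200 page_index_range=500-700" > /sys/block/zram0/writeback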