From patchwork Mon Aug 8 16:50:06 2022
X-Patchwork-Submitter: Brian Geffon <bgeffon@google.com>
X-Patchwork-Id: 12938897
Date: Mon, 8 Aug 2022 12:50:06 -0400
In-Reply-To: <20220808165006.2451180-1-bgeffon@google.com>
Message-Id: <20220808165006.2451180-2-bgeffon@google.com>
References: <20220808165006.2451180-1-bgeffon@google.com>
Subject: [RFC PATCH 1/1] zram: Allow rw_page when page isn't written back.
From: Brian Geffon <bgeffon@google.com>
To: Andrew Morton, Minchan Kim
Cc: Nitin Gupta, Sergey Senozhatsky, linux-kernel@vger.kernel.org,
    Suleiman Souhlal, linux-mm@kvack.org, Brian Geffon

Today, when a zram device is given a backing device, we swap its
block_device_operations for a set that does not expose a rw_page
method. This prevents the upper layers from ever attempting a
synchronous rw, which penalizes every rw even when it could still
have been performed synchronously.

With this change zram always exposes a rw_page function; if the page
has been written back, rw_page returns -EOPNOTSUPP, which forces the
upper layers to retry with a normal bio.

To safely allow a synchronous read to proceed for pages which have not
yet been written back, we introduce a new flag, ZRAM_NO_WB. On the
first synchronous read, if the page is not written back, we set the
ZRAM_NO_WB flag. This flag, which is never cleared, prevents writeback
from ever happening to that page. This approach works because when
zram backs swap, the page will be removed from zram shortly after the
read, so losing the ability to write it back is fine. However, if zram
is used as a generic block device, this may prevent some pages from
ever being written back.
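For clarity, the caller-side contract this relies on can be sketched
as follows. This is a minimal illustration modeled on the existing
bdev_read_page()/swap_readpage() behavior, not code from this patch;
read_page_via_bio() is a hypothetical stand-in for the normal bio
path:

	/*
	 * Sketch only: any error from rw_page, including the
	 * -EOPNOTSUPP this patch returns for written-back pages,
	 * sends the caller down the asynchronous bio path.
	 */
	static int swap_read_sketch(struct block_device *bdev,
				    sector_t sector, struct page *page)
	{
		const struct block_device_operations *ops = bdev->bd_disk->fops;
		int ret = -EOPNOTSUPP;

		if (ops->rw_page)
			ret = ops->rw_page(bdev, sector, page, REQ_OP_READ);
		if (!ret)
			return 0; /* synchronous read completed */

		/* Fall back to the normal asynchronous bio path. */
		return read_page_via_bio(bdev, sector, page);
	}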
Signed-off-by: Brian Geffon <bgeffon@google.com>
---
 drivers/block/zram/zram_drv.c | 65 +++++++++++++++++++++--------------
 drivers/block/zram/zram_drv.h |  1 +
 2 files changed, 41 insertions(+), 25 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 92cb929a45b7..196392353bd3 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -52,9 +52,6 @@ static unsigned int num_devices = 1;
 static size_t huge_class_size;
 
 static const struct block_device_operations zram_devops;
-#ifdef CONFIG_ZRAM_WRITEBACK
-static const struct block_device_operations zram_wb_devops;
-#endif
 
 static void zram_free_page(struct zram *zram, size_t index);
 static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
@@ -439,7 +436,6 @@ static void reset_bdev(struct zram *zram)
 	filp_close(zram->backing_dev, NULL);
 	zram->backing_dev = NULL;
 	zram->bdev = NULL;
-	zram->disk->fops = &zram_devops;
 	kvfree(zram->bitmap);
 	zram->bitmap = NULL;
 }
@@ -543,17 +539,6 @@ static ssize_t backing_dev_store(struct device *dev,
 	zram->backing_dev = backing_dev;
 	zram->bitmap = bitmap;
 	zram->nr_pages = nr_pages;
-	/*
-	 * With writeback feature, zram does asynchronous IO so it's no longer
-	 * synchronous device so let's remove synchronous io flag. Othewise,
-	 * upper layer(e.g., swap) could wait IO completion rather than
-	 * (submit and return), which will cause system sluggish.
-	 * Furthermore, when the IO function returns(e.g., swap_readpage),
-	 * upper layer expects IO was done so it could deallocate the page
-	 * freely but in fact, IO is going on so finally could cause
-	 * use-after-free when the IO is really done.
-	 */
-	zram->disk->fops = &zram_wb_devops;
 	up_write(&zram->init_lock);
 
 	pr_info("setup backing device %s\n", file_name);
@@ -722,7 +707,8 @@ static ssize_t writeback_store(struct device *dev,
 
 		if (zram_test_flag(zram, index, ZRAM_WB) ||
 		    zram_test_flag(zram, index, ZRAM_SAME) ||
-		    zram_test_flag(zram, index, ZRAM_UNDER_WB))
+		    zram_test_flag(zram, index, ZRAM_UNDER_WB) ||
+		    zram_test_flag(zram, index, ZRAM_NO_WB))
 			goto next;
 
 		if (mode & IDLE_WRITEBACK &&
@@ -1226,6 +1212,10 @@ static void zram_free_page(struct zram *zram, size_t index)
 		goto out;
 	}
 
+	if (zram_test_flag(zram, index, ZRAM_NO_WB)) {
+		zram_clear_flag(zram, index, ZRAM_NO_WB);
+	}
+
 	/*
 	 * No memory is allocated for same element filled pages.
 	 * Simply clear same page flag.
@@ -1654,6 +1644,40 @@ static int zram_rw_page(struct block_device *bdev, sector_t sector,
 	index = sector >> SECTORS_PER_PAGE_SHIFT;
 	offset = (sector & (SECTORS_PER_PAGE - 1)) << SECTOR_SHIFT;
 
+#ifdef CONFIG_ZRAM_WRITEBACK
+	/*
+	 * With the writeback feature, zram does asynchronous IO, so it is no
+	 * longer a synchronous device and the synchronous io flag is dropped.
+	 * Otherwise, an upper layer (e.g., swap) could wait for IO completion
+	 * rather than (submit and return), which would make the system
+	 * sluggish. Furthermore, when the IO function returns (e.g.,
+	 * swap_readpage), the upper layer expects the IO to be done so it may
+	 * free the page, while in fact the IO is still in flight, which could
+	 * finally cause a use-after-free once the IO really completes.
+	 *
+	 * If the page is not currently being written back then we may proceed
+	 * to read the page synchronously; otherwise, we must fail with
+	 * -EOPNOTSUPP to force the upper layers to use a normal bio.
+	 */
+	zram_slot_lock(zram, index);
+	if (zram_test_flag(zram, index, ZRAM_WB) ||
+	    zram_test_flag(zram, index, ZRAM_UNDER_WB)) {
+		zram_slot_unlock(zram, index);
+		/* We cannot proceed with a synchronous read */
+		return -EOPNOTSUPP;
+	}
+
+	/*
+	 * Don't allow the page to be written back while we read it;
+	 * this flag is never cleared. It shouldn't be a problem that
+	 * we don't clear this flag because in the case of swap this
+	 * page will be removed shortly after this read anyway.
+	 */
+	if (op == REQ_OP_READ)
+		zram_set_flag(zram, index, ZRAM_NO_WB);
+	zram_slot_unlock(zram, index);
+#endif
+
 	bv.bv_page = page;
 	bv.bv_len = PAGE_SIZE;
 	bv.bv_offset = 0;
@@ -1827,15 +1851,6 @@ static const struct block_device_operations zram_devops = {
 	.owner = THIS_MODULE
 };
 
-#ifdef CONFIG_ZRAM_WRITEBACK
-static const struct block_device_operations zram_wb_devops = {
-	.open = zram_open,
-	.submit_bio = zram_submit_bio,
-	.swap_slot_free_notify = zram_slot_free_notify,
-	.owner = THIS_MODULE
-};
-#endif
-
 static DEVICE_ATTR_WO(compact);
 static DEVICE_ATTR_RW(disksize);
 static DEVICE_ATTR_RO(initstate);
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index 158c91e54850..20e4c6a579e0 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -50,6 +50,7 @@ enum zram_pageflags {
 	ZRAM_UNDER_WB,	/* page is under writeback */
 	ZRAM_HUGE,	/* Incompressible page */
 	ZRAM_IDLE,	/* not accessed page since last idle marking */
+	ZRAM_NO_WB,	/* Do not allow page to be written back */
 
 	__NR_ZRAM_PAGEFLAGS,
 };
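
The locking that makes this safe, restated in one place (a condensed
sketch with a hypothetical helper name; the real checks live inline in
zram_rw_page() and writeback_store() above): both paths test and set
the slot flags under zram_slot_lock(), so a slot is either claimed for
a synchronous read (ZRAM_NO_WB) or for writeback (ZRAM_UNDER_WB /
ZRAM_WB), never both.

	/* Sketch only: mirrors the read-side locking in zram_rw_page(). */
	static bool zram_claim_for_sync_read(struct zram *zram, u32 index)
	{
		bool claimed;

		zram_slot_lock(zram, index);
		claimed = !zram_test_flag(zram, index, ZRAM_WB) &&
			  !zram_test_flag(zram, index, ZRAM_UNDER_WB);
		if (claimed)
			/* Writeback will now skip this slot for good. */
			zram_set_flag(zram, index, ZRAM_NO_WB);
		zram_slot_unlock(zram, index);
		return claimed;
	}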