
zram: set zram bio priority to REQ_PRIO.

Message ID 20230718071154.21566-1-xinhuanpeng9@gmail.com (mailing list archive)
State New, archived
Series zram: set zram bio priority to REQ_PRIO.

Commit Message

Huanpeng Xin July 18, 2023, 7:11 a.m. UTC
From: xinhuanpeng <xinhuanpeng@xiaomi.com>

When system memory pressure is high, setting the zram bio priority
to REQ_PRIO can quickly swap zram's memory to the backing device,
freeing up more space for zram.

Signed-off-by: xinhuanpeng <xinhuanpeng@xiaomi.com>
---
 drivers/block/zram/zram_drv.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Sergey Senozhatsky July 18, 2023, 7:49 a.m. UTC | #1
Cc-ing Christoph

On (23/07/18 15:11), Huanpeng Xin wrote:
> 
> When system memory pressure is high, setting the zram bio priority
> to REQ_PRIO can quickly swap zram's memory to the backing device,

read_from_bdev_async() does the opposite.

[..]
> @@ -616,7 +616,7 @@ static int read_from_bdev_async(struct zram *zram, struct bio_vec *bvec,
>  {
> +	bio = bio_alloc(zram->bdev, 1, parent ? parent->bi_opf : REQ_OP_READ | REQ_PRIO,
>  			GFP_NOIO);

[..]

> @@ -746,7 +746,7 @@ static ssize_t writeback_store(struct device *dev,
> ...
>  		bio_init(&bio, zram->bdev, &bio_vec, 1,
> -			 REQ_OP_WRITE | REQ_SYNC);
> +			 REQ_OP_WRITE | REQ_SYNC | REQ_PRIO);

In general, zram writeback is not for situations when the system
is critically low on memory; performance there is not that important,
so I'm not sure whether we want to boost requests' priorities.
Sergey Senozhatsky July 18, 2023, 11:51 a.m. UTC | #2
On (23/07/18 16:49), Sergey Senozhatsky wrote:
> On (23/07/18 15:11), Huanpeng Xin wrote:
> > 
> > When system memory pressure is high, setting the zram bio priority
> > to REQ_PRIO can quickly swap zram's memory to the backing device,
> > freeing up more space for zram.


This is not how zram writeback works. The only time you can be sure
that writeback frees memory is when you write back ZRAM_HUGE objects,
because each such object uses a whole physical page on the zsmalloc
side. In any other case, compressed objects share the physical pages
that zspages consist of, so writeback simply punches holes in zspages
without actually freeing any memory immediately. You either need
zspages to become empty after writeback, or pool compaction; otherwise
writeback does not save any memory, no matter how fast it works.
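
A minimal sketch of the accounting described above (an illustrative toy
model with made-up structures and numbers, not actual zsmalloc code): a
zspage keeps all of its backing pages until its last live object is gone,
so writing back objects one by one frees nothing until the zspage empties
or compaction repacks the survivors.

/* toy_zspage.c - illustrative model only, not zsmalloc code */
#include <stdio.h>

struct toy_zspage {
	unsigned int pages;	/* physical pages backing this zspage */
	unsigned int objects;	/* live compressed objects stored in it */
};

/*
 * Writing back one object only "punches a hole": the backing pages are
 * released only once the zspage holds no live objects at all (or after
 * pool compaction migrates the survivors elsewhere).
 */
static unsigned int writeback_one(struct toy_zspage *zs)
{
	if (zs->objects && --zs->objects == 0)
		return zs->pages;	/* zspage now empty: pages come back */
	return 0;			/* hole punched, nothing freed yet */
}

int main(void)
{
	struct toy_zspage zs = { .pages = 4, .objects = 10 };
	unsigned int freed = 0;

	for (int i = 0; i < 10; i++)
		freed += writeback_one(&zs);

	/* The first nine writebacks free nothing; only the last one
	 * releases the zspage's 4 backing pages. */
	printf("pages freed: %u\n", freed);
	return 0;
}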
Christoph Hellwig July 20, 2023, 8:34 a.m. UTC | #3
People really need to stop randomly using REQ_PRIO.  It was added
back then because I did not want to overload REQ_META for WBT
prioritisation, but it looks like others added this somewhat bogus
prioritisation back anyway.  If anything, that should probably just
go away entirely, and it certainly should not be added as a magic
booster for normal data I/O.

Patch

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index b8549c61ff2c..af56766a036b 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -616,7 +616,7 @@  static int read_from_bdev_async(struct zram *zram, struct bio_vec *bvec,
 {
 	struct bio *bio;
 
-	bio = bio_alloc(zram->bdev, 1, parent ? parent->bi_opf : REQ_OP_READ,
+	bio = bio_alloc(zram->bdev, 1, parent ? parent->bi_opf : REQ_OP_READ | REQ_PRIO,
 			GFP_NOIO);
 	if (!bio)
 		return -ENOMEM;
@@ -746,7 +746,7 @@  static ssize_t writeback_store(struct device *dev,
 		}
 
 		bio_init(&bio, zram->bdev, &bio_vec, 1,
-			 REQ_OP_WRITE | REQ_SYNC);
+			 REQ_OP_WRITE | REQ_SYNC | REQ_PRIO);
 		bio.bi_iter.bi_sector = blk_idx * (PAGE_SIZE >> 9);
 
 		bio_add_page(&bio, bvec.bv_page, bvec.bv_len,