Message ID | 20230718071154.21566-1-xinhuanpeng9@gmail.com (mailing list archive) |
---|---|
State | New, archived |
Series | zram: set zram bio priority to REQ_PRIO. |
Cc-ing Christoph

On (23/07/18 15:11), Huanpeng Xin wrote:
>
> When the system memory pressure is high, set zram bio priority
> to REQ_PRIO can quickly swap zram's memory to backing device,

read_from_bdev_async() does the opposite.

[..]
> @@ -616,7 +616,7 @@ static int read_from_bdev_async(struct zram *zram, struct bio_vec *bvec,
>  {
> +	bio = bio_alloc(zram->bdev, 1, parent ? parent->bi_opf : REQ_OP_READ | REQ_PRIO,
> 			GFP_NOIO);

[..]
> @@ -746,7 +746,7 @@ static ssize_t writeback_store(struct device *dev,
> ...
> 		bio_init(&bio, zram->bdev, &bio_vec, 1,
> -			 REQ_OP_WRITE | REQ_SYNC);
> +			 REQ_OP_WRITE | REQ_SYNC | REQ_PRIO);

In general, zram writeback is not meant for situations when the system is
critically low on memory; performance there is not that important, so I'm
not sure whether we want to boost the requests' priorities.
On (23/07/18 16:49), Sergey Senozhatsky wrote:
> On (23/07/18 15:11), Huanpeng Xin wrote:
> >
> > When the system memory pressure is high, set zram bio priority
> > to REQ_PRIO can quickly swap zram's memory to backing device,
> > freeing up more space for zram.

This is not how zram writeback works. The only time you can be sure that
writeback frees memory is when you write back ZRAM_HUGE objects, because
each such object uses a whole physical page on the zsmalloc side. In any
other case, compressed objects share the physical pages that zspages
consist of, so writeback simply punches holes in zspages without actually
freeing any memory immediately. You either need zspages to become empty
after writeback, or pool compaction; otherwise writeback does not save any
memory, no matter how fast it works.
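[Editorial note: a minimal userspace sketch of the accounting described
above. The struct, sizes, and numbers are purely illustrative, not
zsmalloc's real data structures: writing back one object only "punches a
hole", and physical pages come back only once the whole zspage is empty.]

#include <stdio.h>

struct zspage_model {
	int phys_pages;	/* physical pages backing this zspage       */
	int objects;	/* live compressed objects stored in it      */
};

/* Returns how many physical pages this writeback actually freed. */
static int writeback_one_object(struct zspage_model *zs)
{
	if (zs->objects == 0)
		return 0;
	zs->objects--;					 /* hole punched          */
	return zs->objects ? 0 : zs->phys_pages;	 /* freed only when empty */
}

int main(void)
{
	struct zspage_model zs = { .phys_pages = 4, .objects = 50 };
	int freed = 0, i;

	for (i = 0; i < 49; i++)
		freed += writeback_one_object(&zs);
	printf("after 49 writebacks: %d pages freed\n", freed);	/* 0 */

	freed += writeback_one_object(&zs);
	printf("after the 50th:      %d pages freed\n", freed);	/* 4 */
	return 0;
}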
People really need to stop randomly using REQ_PRIO. It was added back then
because I did not want to overload REQ_META for WBT prioritisation, but it
looks like others added this somewhat bogus prioritisation back anyway. If
anything, it should probably just go away entirely; it certainly should not
be added as a magic booster for normal data I/O.
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index b8549c61ff2c..af56766a036b 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -616,7 +616,7 @@ static int read_from_bdev_async(struct zram *zram, struct bio_vec *bvec,
 {
 	struct bio *bio;
 
-	bio = bio_alloc(zram->bdev, 1, parent ? parent->bi_opf : REQ_OP_READ,
+	bio = bio_alloc(zram->bdev, 1, parent ? parent->bi_opf : REQ_OP_READ | REQ_PRIO,
 			GFP_NOIO);
 	if (!bio)
 		return -ENOMEM;
@@ -746,7 +746,7 @@ static ssize_t writeback_store(struct device *dev,
 		}
 
 		bio_init(&bio, zram->bdev, &bio_vec, 1,
-			 REQ_OP_WRITE | REQ_SYNC);
+			 REQ_OP_WRITE | REQ_SYNC | REQ_PRIO);
 		bio.bi_iter.bi_sector = blk_idx * (PAGE_SIZE >> 9);
 		bio_add_page(&bio, bvec.bv_page, bvec.bv_len,
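[Editorial note: one detail of the first hunk worth spelling out. In C,
'|' binds tighter than '?:', so the patched expression parses as
"parent ? parent->bi_opf : (REQ_OP_READ | REQ_PRIO)"; REQ_PRIO only ends up
on the fallback flags used when there is no parent bio, while a parent
bio's flags pass through unchanged. A quick userspace check with stand-in
flag values (not the kernel's blk_types.h definitions) illustrates this.]

#include <stdio.h>

/* Stand-in values for illustration only, not the kernel's definitions. */
#define REQ_OP_READ	0u
#define REQ_PRIO	(1u << 5)

int main(void)
{
	unsigned int parent_opf = 0x100;	/* pretend parent bio flags */
	unsigned int *parent = &parent_opf;
	unsigned int opf;

	/* With a parent bio: REQ_PRIO is not OR-ed in. */
	opf = parent ? *parent : REQ_OP_READ | REQ_PRIO;
	printf("with parent:    0x%x\n", opf);	/* 0x100 */

	/* Without a parent bio: the fallback (REQ_OP_READ | REQ_PRIO) is used;
	 * the true branch (*parent) is never evaluated, so this is safe. */
	parent = NULL;
	opf = parent ? *parent : REQ_OP_READ | REQ_PRIO;
	printf("without parent: 0x%x\n", opf);	/* 0x20 */
	return 0;
}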