Message ID: 20211111085238.942492-1-shinichiro.kawasaki@wdc.com
State: New, archived
Series: block: Hold invalidate_lock in BLKRESETZONE ioctl
On Thu 11-11-21 17:52:38, Shin'ichiro Kawasaki wrote:
> When BLKRESETZONE ioctl and data read race, the data read leaves stale
> page cache. The commit e5113505904e ("block: Discard page cache of zone
> reset target range") added page cache truncation to avoid stale page
> cache after the ioctl. However, the stale page cache still can be read
> during the reset zone operation for the ioctl. To avoid the stale page
> cache completely, hold invalidate_lock of the block device file mapping.
>
> Fixes: e5113505904e ("block: Discard page cache of zone reset target range")
> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
> Cc: stable@vger.kernel.org # v5.15

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  block/blk-zoned.c | 15 +++++----------
>  1 file changed, 5 insertions(+), 10 deletions(-)
>
> diff --git a/block/blk-zoned.c b/block/blk-zoned.c
> index 1d0c76c18fc5..774ecc598bee 100644
> --- a/block/blk-zoned.c
> +++ b/block/blk-zoned.c
> @@ -429,9 +429,10 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
>  		op = REQ_OP_ZONE_RESET;
>
>  		/* Invalidate the page cache, including dirty pages. */
> +		filemap_invalidate_lock(bdev->bd_inode->i_mapping);
>  		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
>  		if (ret)
> -			return ret;
> +			goto fail;
>  		break;
>  	case BLKOPENZONE:
>  		op = REQ_OP_ZONE_OPEN;
> @@ -449,15 +450,9 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
>  	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
>  			       GFP_KERNEL);
>
> -	/*
> -	 * Invalidate the page cache again for zone reset: writes can only be
> -	 * direct for zoned devices so concurrent writes would not add any page
> -	 * to the page cache after/during reset. The page cache may be filled
> -	 * again due to concurrent reads though and dropping the pages for
> -	 * these is fine.
> -	 */
> -	if (!ret && cmd == BLKRESETZONE)
> -		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
> +fail:
> +	if (cmd == BLKRESETZONE)
> +		filemap_invalidate_unlock(bdev->bd_inode->i_mapping);
>
>  	return ret;
>  }
> --
> 2.33.1
>
On Thu, Nov 11, 2021 at 05:52:38PM +0900, Shin'ichiro Kawasaki wrote:
> When BLKRESETZONE ioctl and data read race, the data read leaves stale
> page cache. The commit e5113505904e ("block: Discard page cache of zone
> reset target range") added page cache truncation to avoid stale page
> cache after the ioctl. However, the stale page cache still can be read
> during the reset zone operation for the ioctl. To avoid the stale page
> cache completely, hold invalidate_lock of the block device file mapping.
>
> Fixes: e5113505904e ("block: Discard page cache of zone reset target range")
> Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
> Cc: stable@vger.kernel.org # v5.15
> ---

Looks fine:

Reviewed-by: Ming Lei <ming.lei@redhat.com>
On Thu, 11 Nov 2021 17:52:38 +0900, Shin'ichiro Kawasaki wrote:
> When BLKRESETZONE ioctl and data read race, the data read leaves stale
> page cache. The commit e5113505904e ("block: Discard page cache of zone
> reset target range") added page cache truncation to avoid stale page
> cache after the ioctl. However, the stale page cache still can be read
> during the reset zone operation for the ioctl. To avoid the stale page
> cache completely, hold invalidate_lock of the block device file mapping.
>
> [...]

Applied, thanks!

[1/1] block: Hold invalidate_lock in BLKRESETZONE ioctl
      commit: 86399ea071099ec8ee0a83ac9ad67f7df96a50ad

Best regards,
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index 1d0c76c18fc5..774ecc598bee 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -429,9 +429,10 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 		op = REQ_OP_ZONE_RESET;
 
 		/* Invalidate the page cache, including dirty pages. */
+		filemap_invalidate_lock(bdev->bd_inode->i_mapping);
 		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
 		if (ret)
-			return ret;
+			goto fail;
 		break;
 	case BLKOPENZONE:
 		op = REQ_OP_ZONE_OPEN;
@@ -449,15 +450,9 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, fmode_t mode,
 	ret = blkdev_zone_mgmt(bdev, op, zrange.sector, zrange.nr_sectors,
 			       GFP_KERNEL);
 
-	/*
-	 * Invalidate the page cache again for zone reset: writes can only be
-	 * direct for zoned devices so concurrent writes would not add any page
-	 * to the page cache after/during reset. The page cache may be filled
-	 * again due to concurrent reads though and dropping the pages for
-	 * these is fine.
-	 */
-	if (!ret && cmd == BLKRESETZONE)
-		ret = blkdev_truncate_zone_range(bdev, mode, &zrange);
+fail:
+	if (cmd == BLKRESETZONE)
+		filemap_invalidate_unlock(bdev->bd_inode->i_mapping);
 
 	return ret;
 }
When BLKRESETZONE ioctl and data read race, the data read leaves stale
page cache. The commit e5113505904e ("block: Discard page cache of zone
reset target range") added page cache truncation to avoid stale page
cache after the ioctl. However, the stale page cache still can be read
during the reset zone operation for the ioctl. To avoid the stale page
cache completely, hold invalidate_lock of the block device file mapping.

Fixes: e5113505904e ("block: Discard page cache of zone reset target range")
Signed-off-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Cc: stable@vger.kernel.org # v5.15
---
 block/blk-zoned.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)