| Message ID | 20220831072111.3569680-5-yi.zhang@huawei.com |
|---|---|
| State | New, archived |
| Series | buffer: remove ll_rw_block() |
On Wed 31-08-22 15:21:01, Zhang Yi wrote:
> ll_rw_block() is not safe for the sync read path because it cannot
> guarantee that the read IO is always submitted if the buffer is locked,
> so stop using it. We also switch to the new bh_readahead() helper for
> the readahead path.
>
> Signed-off-by: Zhang Yi <yi.zhang@huawei.com>

Looks good to me. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  fs/gfs2/meta_io.c | 6 ++----
>  fs/gfs2/quota.c   | 4 +---
>  2 files changed, 3 insertions(+), 7 deletions(-)
>
> diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
> index 7e70e0ba5a6c..07e882aa7ebd 100644
> --- a/fs/gfs2/meta_io.c
> +++ b/fs/gfs2/meta_io.c
> @@ -525,8 +525,7 @@ struct buffer_head *gfs2_meta_ra(struct gfs2_glock *gl, u64 dblock, u32 extlen)
>
>  	if (buffer_uptodate(first_bh))
>  		goto out;
> -	if (!buffer_locked(first_bh))
> -		ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &first_bh);
> +	bh_read_nowait(first_bh, REQ_META | REQ_PRIO);
>
>  	dblock++;
>  	extlen--;
> @@ -535,8 +534,7 @@ struct buffer_head *gfs2_meta_ra(struct gfs2_glock *gl, u64 dblock, u32 extlen)
>  		bh = gfs2_getbuf(gl, dblock, CREATE);
>
>  		if (!buffer_uptodate(bh) && !buffer_locked(bh))
> -			ll_rw_block(REQ_OP_READ | REQ_RAHEAD | REQ_META |
> -				    REQ_PRIO, 1, &bh);
> +			bh_readahead(bh, REQ_RAHEAD | REQ_META | REQ_PRIO);
>  		brelse(bh);
>  		dblock++;
>  		extlen--;
> diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
> index f201eaf59d0d..0c2ef4226aba 100644
> --- a/fs/gfs2/quota.c
> +++ b/fs/gfs2/quota.c
> @@ -746,9 +746,7 @@ static int gfs2_write_buf_to_page(struct gfs2_inode *ip, unsigned long index,
>  		if (PageUptodate(page))
>  			set_buffer_uptodate(bh);
>  		if (!buffer_uptodate(bh)) {
> -			ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &bh);
> -			wait_on_buffer(bh);
> -			if (!buffer_uptodate(bh))
> +			if (bh_read(bh, REQ_META | REQ_PRIO))
>  				goto unlock_out;
>  		}
>  		if (gfs2_is_jdata(ip))
> --
> 2.31.1
>
diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
index 7e70e0ba5a6c..07e882aa7ebd 100644
--- a/fs/gfs2/meta_io.c
+++ b/fs/gfs2/meta_io.c
@@ -525,8 +525,7 @@ struct buffer_head *gfs2_meta_ra(struct gfs2_glock *gl, u64 dblock, u32 extlen)
 
 	if (buffer_uptodate(first_bh))
 		goto out;
-	if (!buffer_locked(first_bh))
-		ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &first_bh);
+	bh_read_nowait(first_bh, REQ_META | REQ_PRIO);
 
 	dblock++;
 	extlen--;
@@ -535,8 +534,7 @@ struct buffer_head *gfs2_meta_ra(struct gfs2_glock *gl, u64 dblock, u32 extlen)
 		bh = gfs2_getbuf(gl, dblock, CREATE);
 
 		if (!buffer_uptodate(bh) && !buffer_locked(bh))
-			ll_rw_block(REQ_OP_READ | REQ_RAHEAD | REQ_META |
-				    REQ_PRIO, 1, &bh);
+			bh_readahead(bh, REQ_RAHEAD | REQ_META | REQ_PRIO);
 		brelse(bh);
 		dblock++;
 		extlen--;
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index f201eaf59d0d..0c2ef4226aba 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2/quota.c
@@ -746,9 +746,7 @@ static int gfs2_write_buf_to_page(struct gfs2_inode *ip, unsigned long index,
 		if (PageUptodate(page))
 			set_buffer_uptodate(bh);
 		if (!buffer_uptodate(bh)) {
-			ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &bh);
-			wait_on_buffer(bh);
-			if (!buffer_uptodate(bh))
+			if (bh_read(bh, REQ_META | REQ_PRIO))
 				goto unlock_out;
 		}
 		if (gfs2_is_jdata(ip))
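[Editor's note, not part of the patch] The three replacement helpers used in the hunks above differ mainly in how they take the buffer lock and whether they wait for the read to complete. The sketch below only approximates their behaviour; the *_sketch names are made up, return conventions are simplified, and the real helpers (added earlier in this series) live in include/linux/buffer_head.h and fs/buffer.c.

#include <linux/buffer_head.h>

/* Readahead: purely opportunistic, gives up if the buffer is already locked. */
static void bh_readahead_sketch(struct buffer_head *bh, blk_opf_t op_flags)
{
	if (!buffer_uptodate(bh) && trylock_buffer(bh)) {
		if (buffer_uptodate(bh)) {
			unlock_buffer(bh);
			return;
		}
		get_bh(bh);
		bh->b_end_io = end_buffer_read_sync;	/* unlocks bh on completion */
		submit_bh(REQ_OP_READ | op_flags, bh);	/* do not wait for the I/O */
	}
}

/* Start the read even if the buffer is currently locked, but do not wait for it. */
static void bh_read_nowait_sketch(struct buffer_head *bh, blk_opf_t op_flags)
{
	lock_buffer(bh);				/* may sleep, unlike trylock */
	if (buffer_uptodate(bh)) {
		unlock_buffer(bh);
		return;
	}
	get_bh(bh);
	bh->b_end_io = end_buffer_read_sync;
	submit_bh(REQ_OP_READ | op_flags, bh);
}

/* Synchronous read: submit if needed, wait for completion, report errors. */
static int bh_read_sketch(struct buffer_head *bh, blk_opf_t op_flags)
{
	bh_read_nowait_sketch(bh, op_flags);
	wait_on_buffer(bh);
	return buffer_uptodate(bh) ? 0 : -EIO;
}

Read against the diff: the first metadata block's read is started without waiting via bh_read_nowait(), the speculative blocks behind it use the best-effort bh_readahead(), and the quota page fill uses the fully synchronous bh_read() because the caller needs the data before it can modify the page.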
ll_rw_block() is not safe for the sync read path because it cannot
guarantee that the read IO is always submitted if the buffer is locked,
so stop using it. We also switch to the new bh_readahead() helper for
the readahead path.

Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
---
 fs/gfs2/meta_io.c | 6 ++----
 fs/gfs2/quota.c   | 4 +---
 2 files changed, 3 insertions(+), 7 deletions(-)
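[Editor's note, not part of the patch] The problem the commit message refers to is that ll_rw_block() only trylocks each buffer and silently skips any buffer it cannot lock, so a sync-read caller that follows it with wait_on_buffer() may end up looking at a buffer that was never read at all. A minimal illustration; the helper below is hypothetical (not code from gfs2) and uses the pre-series ll_rw_block() API being removed:

#include <linux/buffer_head.h>

/* Hypothetical sync read built on the old API, showing the failure mode. */
static int metadata_read_unsafe(struct buffer_head *bh)
{
	/*
	 * If another task holds the buffer lock right now, ll_rw_block()
	 * skips the buffer and submits nothing.
	 */
	ll_rw_block(REQ_OP_READ | REQ_META | REQ_PRIO, 1, &bh);

	/* Returns as soon as the other lock holder unlocks the buffer... */
	wait_on_buffer(bh);

	/*
	 * ...and if that holder did not bring the buffer uptodate, we report
	 * an I/O error even though no read was ever issued.
	 */
	if (!buffer_uptodate(bh))
		return -EIO;
	return 0;
}

/*
 * bh_read(bh, REQ_META | REQ_PRIO) replaces the whole sequence above: it
 * waits for the buffer lock, submits the read only if the buffer is still
 * not uptodate, waits for completion, and returns a negative error on failure.
 */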