| Message ID | 20220908002616.3189675-6-shr@fb.com |
|---|---|
| State | New |
| Series | io-uring/btrfs: support async buffered writes |
On Thu, Sep 8, 2022 at 1:26 AM Stefan Roesch <shr@fb.com> wrote:
>
> From: Josef Bacik <josef@toxicpanda.com>
>
> For IOCB_NOWAIT we're going to want to use try lock on the extent lock,
> and simply bail if there's an ordered extent in the range because the
> only choice there is to wait for the ordered extent to complete.
>
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> Signed-off-by: Stefan Roesch <shr@fb.com>
> ---
>  fs/btrfs/ordered-data.c | 28 ++++++++++++++++++++++++++++
>  fs/btrfs/ordered-data.h |  1 +
>  2 files changed, 29 insertions(+)
>
> diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
> index 1952ac85222c..3cdfdcedb088 100644
> --- a/fs/btrfs/ordered-data.c
> +++ b/fs/btrfs/ordered-data.c
> @@ -1041,6 +1041,34 @@ void btrfs_lock_and_flush_ordered_range(struct btrfs_inode *inode, u64 start,
>  	}
>  }
>
> +/*
> + * btrfs_try_lock_ordered_range - lock the passed range and ensure all pending
> + * ordered extents in it are run to completion in nowait mode.
> + *
> + * @inode:        Inode whose ordered tree is to be searched
> + * @start:        Beginning of range to flush
> + * @end:          Last byte of range to lock
> + *
> + * This function returns 1 if btrfs_lock_ordered_range does not return any
> + * extents, otherwise 0.

Why not a bool, true/false? That's all that is needed, and it's clear.

Thanks.

> + */
> +int btrfs_try_lock_ordered_range(struct btrfs_inode *inode, u64 start, u64 end)
> +{
> +	struct btrfs_ordered_extent *ordered;
> +
> +	if (!try_lock_extent(&inode->io_tree, start, end))
> +		return 0;
> +
> +	ordered = btrfs_lookup_ordered_range(inode, start, end - start + 1);
> +	if (!ordered)
> +		return 1;
> +
> +	btrfs_put_ordered_extent(ordered);
> +	unlock_extent(&inode->io_tree, start, end);
> +	return 0;
> +}
> +
> +
>  static int clone_ordered_extent(struct btrfs_ordered_extent *ordered, u64 pos,
> 				 u64 len)
>  {
> diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
> index 87792f85e2c4..ec27ebf0af4b 100644
> --- a/fs/btrfs/ordered-data.h
> +++ b/fs/btrfs/ordered-data.h
> @@ -218,6 +218,7 @@ void btrfs_wait_ordered_roots(struct btrfs_fs_info *fs_info, u64 nr,
>  void btrfs_lock_and_flush_ordered_range(struct btrfs_inode *inode, u64 start,
> 					 u64 end,
> 					 struct extent_state **cached_state);
> +int btrfs_try_lock_ordered_range(struct btrfs_inode *inode, u64 start, u64 end);
>  int btrfs_split_ordered_extent(struct btrfs_ordered_extent *ordered, u64 pre,
> 				u64 post);
>  int __init ordered_data_init(void);
> --
> 2.30.2
>
On 9/8/22 3:18 AM, Filipe Manana wrote:
>
>
> On Thu, Sep 8, 2022 at 1:26 AM Stefan Roesch <shr@fb.com> wrote:
>>
>> From: Josef Bacik <josef@toxicpanda.com>
>>
>> For IOCB_NOWAIT we're going to want to use try lock on the extent lock,
>> and simply bail if there's an ordered extent in the range because the
>> only choice there is to wait for the ordered extent to complete.
>>
>> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
>> Signed-off-by: Stefan Roesch <shr@fb.com>
>> ---
>>  fs/btrfs/ordered-data.c | 28 ++++++++++++++++++++++++++++
>>  fs/btrfs/ordered-data.h |  1 +
>>  2 files changed, 29 insertions(+)
>>
>> diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
>> index 1952ac85222c..3cdfdcedb088 100644
>> --- a/fs/btrfs/ordered-data.c
>> +++ b/fs/btrfs/ordered-data.c
>> @@ -1041,6 +1041,34 @@ void btrfs_lock_and_flush_ordered_range(struct btrfs_inode *inode, u64 start,
>>  	}
>>  }
>>
>> +/*
>> + * btrfs_try_lock_ordered_range - lock the passed range and ensure all pending
>> + * ordered extents in it are run to completion in nowait mode.
>> + *
>> + * @inode:        Inode whose ordered tree is to be searched
>> + * @start:        Beginning of range to flush
>> + * @end:          Last byte of range to lock
>> + *
>> + * This function returns 1 if btrfs_lock_ordered_range does not return any
>> + * extents, otherwise 0.
>
> Why not a bool, true/false? That's all that is needed, and it's clear.
>
> Thanks.
>

The next version of the patch series will return bool instead of int.

>> + */
>> +int btrfs_try_lock_ordered_range(struct btrfs_inode *inode, u64 start, u64 end)
>> +{
>> +	struct btrfs_ordered_extent *ordered;
>> +
>> +	if (!try_lock_extent(&inode->io_tree, start, end))
>> +		return 0;
>> +
>> +	ordered = btrfs_lookup_ordered_range(inode, start, end - start + 1);
>> +	if (!ordered)
>> +		return 1;
>> +
>> +	btrfs_put_ordered_extent(ordered);
>> +	unlock_extent(&inode->io_tree, start, end);
>> +	return 0;
>> +}
>> +
>> +
>>  static int clone_ordered_extent(struct btrfs_ordered_extent *ordered, u64 pos,
>> 				 u64 len)
>>  {
>> diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
>> index 87792f85e2c4..ec27ebf0af4b 100644
>> --- a/fs/btrfs/ordered-data.h
>> +++ b/fs/btrfs/ordered-data.h
>> @@ -218,6 +218,7 @@ void btrfs_wait_ordered_roots(struct btrfs_fs_info *fs_info, u64 nr,
>>  void btrfs_lock_and_flush_ordered_range(struct btrfs_inode *inode, u64 start,
>> 					 u64 end,
>> 					 struct extent_state **cached_state);
>> +int btrfs_try_lock_ordered_range(struct btrfs_inode *inode, u64 start, u64 end);
>>  int btrfs_split_ordered_extent(struct btrfs_ordered_extent *ordered, u64 pre,
>> 				 u64 post);
>>  int __init ordered_data_init(void);
>> --
>> 2.30.2
>>
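For reference, a minimal sketch of what the bool-returning variant discussed above could look like. This is only an illustration derived from the posted diff, not the actual next revision of the series; the assumption is that nothing changes except the return type and the 0/1 literals, with try_lock_extent(), btrfs_lookup_ordered_range(), btrfs_put_ordered_extent() and unlock_extent() used exactly as in the patch.

```c
/*
 * Illustrative sketch only: the bool-returning variant suggested in the
 * review. Assumes the next revision changes nothing but the return type
 * and the 0/1 literals of the function posted above.
 */
bool btrfs_try_lock_ordered_range(struct btrfs_inode *inode, u64 start, u64 end)
{
	struct btrfs_ordered_extent *ordered;

	/* Nowait semantics: do not block if the extent range is already locked. */
	if (!try_lock_extent(&inode->io_tree, start, end))
		return false;

	/*
	 * A pending ordered extent in the range would force us to wait for it
	 * to complete, so drop the lock and report failure instead.
	 */
	ordered = btrfs_lookup_ordered_range(inode, start, end - start + 1);
	if (!ordered)
		return true;

	btrfs_put_ordered_extent(ordered);
	unlock_extent(&inode->io_tree, start, end);
	return false;
}
```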
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 1952ac85222c..3cdfdcedb088 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -1041,6 +1041,34 @@ void btrfs_lock_and_flush_ordered_range(struct btrfs_inode *inode, u64 start,
 	}
 }
 
+/*
+ * btrfs_try_lock_ordered_range - lock the passed range and ensure all pending
+ * ordered extents in it are run to completion in nowait mode.
+ *
+ * @inode:        Inode whose ordered tree is to be searched
+ * @start:        Beginning of range to flush
+ * @end:          Last byte of range to lock
+ *
+ * This function returns 1 if btrfs_lock_ordered_range does not return any
+ * extents, otherwise 0.
+ */
+int btrfs_try_lock_ordered_range(struct btrfs_inode *inode, u64 start, u64 end)
+{
+	struct btrfs_ordered_extent *ordered;
+
+	if (!try_lock_extent(&inode->io_tree, start, end))
+		return 0;
+
+	ordered = btrfs_lookup_ordered_range(inode, start, end - start + 1);
+	if (!ordered)
+		return 1;
+
+	btrfs_put_ordered_extent(ordered);
+	unlock_extent(&inode->io_tree, start, end);
+	return 0;
+}
+
+
 static int clone_ordered_extent(struct btrfs_ordered_extent *ordered, u64 pos,
 				u64 len)
 {
diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
index 87792f85e2c4..ec27ebf0af4b 100644
--- a/fs/btrfs/ordered-data.h
+++ b/fs/btrfs/ordered-data.h
@@ -218,6 +218,7 @@ void btrfs_wait_ordered_roots(struct btrfs_fs_info *fs_info, u64 nr,
 void btrfs_lock_and_flush_ordered_range(struct btrfs_inode *inode, u64 start,
 					u64 end,
 					struct extent_state **cached_state);
+int btrfs_try_lock_ordered_range(struct btrfs_inode *inode, u64 start, u64 end);
 int btrfs_split_ordered_extent(struct btrfs_ordered_extent *ordered, u64 pre,
 			       u64 post);
 int __init ordered_data_init(void);
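The commit message describes the intended use: an IOCB_NOWAIT write should bail rather than wait on an ordered extent. A purely hypothetical caller along those lines is sketched below; nowait_buffered_write_prep() and its surrounding logic are invented for illustration and are not taken from this series or from the btrfs sources.

```c
/*
 * Hypothetical example of how a nowait write path might use the helper.
 * The function name and surrounding logic are invented for illustration;
 * only btrfs_try_lock_ordered_range() and unlock_extent() come from the
 * posted patch.
 */
static int nowait_buffered_write_prep(struct btrfs_inode *inode, u64 start, u64 end)
{
	/*
	 * Either the extent lock is contended or an ordered extent is pending
	 * in the range; in both cases waiting would be required, so return
	 * -EAGAIN and let the caller retry from a context that may block.
	 */
	if (!btrfs_try_lock_ordered_range(inode, start, end))
		return -EAGAIN;

	/* ... prepare the buffered write over [start, end] ... */

	unlock_extent(&inode->io_tree, start, end);
	return 0;
}
```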