diff mbox series

[2/2] xfs: kick extra large ioends to completion workqueue

Message ID 20201002153357.56409-3-bfoster@redhat.com (mailing list archive)
State New, archived
Headers show
Series iomap: avoid soft lockup warnings on large ioends | expand

Commit Message

Brian Foster Oct. 2, 2020, 3:33 p.m. UTC
We've had reports of soft lockup warnings in the iomap ioend
completion path due to very large bios and/or bio chains. Divert any
ioends with 256k or more pages to process to the workqueue so
completion occurs in non-atomic context and can reschedule to avoid
soft lockup warnings.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---
 fs/xfs/xfs_aops.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

Comments

Darrick J. Wong Oct. 2, 2020, 4:19 p.m. UTC | #1
On Fri, Oct 02, 2020 at 11:33:57AM -0400, Brian Foster wrote:
> We've had reports of soft lockup warnings in the iomap ioend
> completion path due to very large bios and/or bio chains. Divert any
> ioends with 256k or more pages to process to the workqueue so
> completion occurs in non-atomic context and can reschedule to avoid
> soft lockup warnings.
> 
> Signed-off-by: Brian Foster <bfoster@redhat.com>
> ---
>  fs/xfs/xfs_aops.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> index 3e061ea99922..84ee917014f1 100644
> --- a/fs/xfs/xfs_aops.c
> +++ b/fs/xfs/xfs_aops.c
> @@ -30,6 +30,13 @@ XFS_WPC(struct iomap_writepage_ctx *ctx)
>  	return container_of(ctx, struct xfs_writepage_ctx, ctx);
>  }
>  
> +/*
> + * Kick extra large ioends off to the workqueue. Completion will process a lot
> + * of pages for a large bio or bio chain and a non-atomic context is required to
> + * reschedule and avoid soft lockup warnings.
> + */
> +#define XFS_LARGE_IOEND	(262144 << PAGE_SHIFT)

Hm, shouldn't that 262144 have to be annotated with a 'ULL' so that a
dumb compiler won't turn that into a u32 and shift that all the way to
zero?

I still kind of wonder about letting the limit hit 16G on power with
64k pages, but I guess the number of pages we have to whack is ... not
that high?

I dunno, if you fire up a 64k-page system with fantastical IO
capabilities, attach a realtime volume, fallocate a 32G file and then
try to write to that, will it actually turn that into one gigantic IO?

> +
>  /*
>   * Fast and loose check if this write could update the on-disk inode size.
>   */
> @@ -239,7 +246,8 @@ static inline bool xfs_ioend_needs_workqueue(struct iomap_ioend *ioend)
>  {
>  	return ioend->io_private ||
>  		ioend->io_type == IOMAP_UNWRITTEN ||
> -		(ioend->io_flags & IOMAP_F_SHARED);
> +		(ioend->io_flags & IOMAP_F_SHARED) ||
> +		(ioend->io_size >= XFS_LARGE_IOEND);
>  }
>  
>  STATIC void
> -- 
> 2.25.4
>
Brian Foster Oct. 2, 2020, 4:38 p.m. UTC | #2
On Fri, Oct 02, 2020 at 09:19:23AM -0700, Darrick J. Wong wrote:
> On Fri, Oct 02, 2020 at 11:33:57AM -0400, Brian Foster wrote:
> > We've had reports of soft lockup warnings in the iomap ioend
> > completion path due to very large bios and/or bio chains. Divert any
> > ioends with 256k or more pages to process to the workqueue so
> > completion occurs in non-atomic context and can reschedule to avoid
> > soft lockup warnings.
> > 
> > Signed-off-by: Brian Foster <bfoster@redhat.com>
> > ---
> >  fs/xfs/xfs_aops.c | 10 +++++++++-
> >  1 file changed, 9 insertions(+), 1 deletion(-)
> > 
> > diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
> > index 3e061ea99922..84ee917014f1 100644
> > --- a/fs/xfs/xfs_aops.c
> > +++ b/fs/xfs/xfs_aops.c
> > @@ -30,6 +30,13 @@ XFS_WPC(struct iomap_writepage_ctx *ctx)
> >  	return container_of(ctx, struct xfs_writepage_ctx, ctx);
> >  }
> >  
> > +/*
> > + * Kick extra large ioends off to the workqueue. Completion will process a lot
> > + * of pages for a large bio or bio chain and a non-atomic context is required to
> > + * reschedule and avoid soft lockup warnings.
> > + */
> > +#define XFS_LARGE_IOEND	(262144 << PAGE_SHIFT)
> 
> Hm, shouldn't that 262144 have to be annotated with a 'ULL' so that a
> dumb compiler won't turn that into a u32 and shift that all the way to
> zero?
> 

Probably.. will fix.

> I still kind of wonder about letting the limit hit 16G on power with
> 64k pages, but I guess the number of pages we have to whack is ... not
> that high?
> 

TBH, the limit is kind of picked out of a hat since we don't have any
real data on the point where the page count becomes generally too high.
I originally was capping the size of the ioend, so for that I figured
1GB on 4k pages was conservative enough to still allow fairly large
ioends without doing too much page processing. This patch doesn't cap
the I/O size, so I suppose it might be more reasonable to reduce the
threshold if we wanted to. I don't really have a strong preference
either way. Hm?

> I dunno, if you fire up a 64k-page system with fantastical IO
> capabilities, attach a realtime volume, fallocate a 32G file and then
> try to write to that, will it actually turn that into one gigantic IO?
> 

Not sure, but one report we had was an x86_64 box pushing a 10GB+ bio
chain... :P

Brian

> > +
> >  /*
> >   * Fast and loose check if this write could update the on-disk inode size.
> >   */
> > @@ -239,7 +246,8 @@ static inline bool xfs_ioend_needs_workqueue(struct iomap_ioend *ioend)
> >  {
> >  	return ioend->io_private ||
> >  		ioend->io_type == IOMAP_UNWRITTEN ||
> > -		(ioend->io_flags & IOMAP_F_SHARED);
> > +		(ioend->io_flags & IOMAP_F_SHARED) ||
> > +		(ioend->io_size >= XFS_LARGE_IOEND);
> >  }
> >  
> >  STATIC void
> > -- 
> > 2.25.4
> > 
>
kernel test robot Oct. 3, 2020, 12:26 a.m. UTC | #3
Hi Brian,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on xfs-linux/for-next]
[also build test WARNING on v5.9-rc7]
[cannot apply to next-20201002]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Brian-Foster/iomap-avoid-soft-lockup-warnings-on-large-ioends/20201002-233417
base:   https://git.kernel.org/pub/scm/fs/xfs/xfs-linux.git for-next
config: arm64-randconfig-r014-20201002 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project bcd05599d0e53977a963799d6ee4f6e0bc21331b)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install arm64 cross compiling tool for clang build
        # apt-get install binutils-aarch64-linux-gnu
        # https://github.com/0day-ci/linux/commit/bfa33e76564c1e273d89f17ec39c1c5fd305c763
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Brian-Foster/iomap-avoid-soft-lockup-warnings-on-large-ioends/20201002-233417
        git checkout bfa33e76564c1e273d89f17ec39c1c5fd305c763
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=arm64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> fs/xfs/xfs_aops.c:250:22: warning: signed shift result (0x100000000) requires 34 bits to represent, but 'int' only has 32 bits [-Wshift-overflow]
                   (ioend->io_size >= XFS_LARGE_IOEND);
                                      ^~~~~~~~~~~~~~~
   fs/xfs/xfs_aops.c:38:33: note: expanded from macro 'XFS_LARGE_IOEND'
   #define XFS_LARGE_IOEND (262144 << PAGE_SHIFT)
                            ~~~~~~ ^  ~~~~~~~~~~
   1 warning generated.

vim +/int +250 fs/xfs/xfs_aops.c

   244	
   245	static inline bool xfs_ioend_needs_workqueue(struct iomap_ioend *ioend)
   246	{
   247		return ioend->io_private ||
   248			ioend->io_type == IOMAP_UNWRITTEN ||
   249			(ioend->io_flags & IOMAP_F_SHARED) ||
 > 250			(ioend->io_size >= XFS_LARGE_IOEND);
   251	}
   252	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

Patch

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 3e061ea99922..84ee917014f1 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -30,6 +30,13 @@  XFS_WPC(struct iomap_writepage_ctx *ctx)
 	return container_of(ctx, struct xfs_writepage_ctx, ctx);
 }
 
+/*
+ * Kick extra large ioends off to the workqueue. Completion will process a lot
+ * of pages for a large bio or bio chain and a non-atomic context is required to
+ * reschedule and avoid soft lockup warnings.
+ */
+#define XFS_LARGE_IOEND	(262144 << PAGE_SHIFT)
+
 /*
  * Fast and loose check if this write could update the on-disk inode size.
  */
@@ -239,7 +246,8 @@  static inline bool xfs_ioend_needs_workqueue(struct iomap_ioend *ioend)
 {
 	return ioend->io_private ||
 		ioend->io_type == IOMAP_UNWRITTEN ||
-		(ioend->io_flags & IOMAP_F_SHARED);
+		(ioend->io_flags & IOMAP_F_SHARED) ||
+		(ioend->io_size >= XFS_LARGE_IOEND);
 }
 
 STATIC void