Message ID | 20191031234618.15403-5-david@fromorbit.com (mailing list archive) |
---|---|
State | New, archived |
Series | mm, xfs: non-blocking inode reclaim |
On Fri, Nov 01, 2019 at 10:45:54AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
>
> The buffer cache shrinker frees more than just the xfs_buf slab
> objects - it also frees the pages attached to the buffers. Make sure
> the memory reclaim code accounts for this memory being freed
> correctly, similar to how the inode shrinker accounts for pages
> freed from the page cache due to mapping invalidation.
>
> We also need to make sure that the mm subsystem knows these are
> reclaimable objects. We provide the memory reclaim subsystem with a
> a shrinker to reclaim xfs_bufs, so we should really mark the slab
> that way.
>
> We also have a lot of xfs_bufs in a busy system, spread them around
> like we do inodes.
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---

I still don't see why we wouldn't set the spread flag on the bli cache
as well, but AFAICT it doesn't matter in most cases unless the spread
knob is enabled. Unless I'm misunderstanding how that works, I think
the commit log could be improved to describe that, since to me it
implies the flag by itself has an effect. Otherwise the change seems
fine:

Reviewed-by: Brian Foster <bfoster@redhat.com>

>  fs/xfs/xfs_buf.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 1e63dd3d1257..d34e5d2edacd 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -324,6 +324,9 @@ xfs_buf_free(
>
>                          __free_page(page);
>                  }
> +                if (current->reclaim_state)
> +                        current->reclaim_state->reclaimed_slab +=
> +                                                bp->b_page_count;
>          } else if (bp->b_flags & _XBF_KMEM)
>                  kmem_free(bp->b_addr);
>          _xfs_buf_free_pages(bp);
> @@ -2061,7 +2064,8 @@ int __init
>  xfs_buf_init(void)
>  {
>          xfs_buf_zone = kmem_zone_init_flags(sizeof(xfs_buf_t), "xfs_buf",
> -                        KM_ZONE_HWALIGN, NULL);
> +                        KM_ZONE_HWALIGN | KM_ZONE_SPREAD | KM_ZONE_RECLAIM,
> +                        NULL);
>          if (!xfs_buf_zone)
>                  goto out;
>
> --
> 2.24.0.rc0
>
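For context on the "spread knob" Brian mentions: KM_ZONE_SPREAD maps onto the slab allocator's SLAB_MEM_SPREAD flag, which (at least in the SLAB implementation of this era) is only consulted when the allocating task sits in a cpuset with memory_spread_slab enabled. The snippet below is a simplified sketch of that gate, assuming the contemporaneous cpuset_do_slab_mem_spread() helper; it is an illustration, not a quote of the mm/ code.

```c
#include <linux/cpuset.h>
#include <linux/slab.h>

/*
 * Simplified sketch (not verbatim kernel code): SLAB_MEM_SPREAD on a
 * cache only changes allocation placement when the current task's
 * cpuset has the memory_spread_slab knob enabled, so setting the flag
 * on a cache is effectively a no-op for most configurations.
 */
static inline bool cache_should_spread(slab_flags_t cache_flags)
{
	return (cache_flags & SLAB_MEM_SPREAD) &&
	       cpuset_do_slab_mem_spread();
}
```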
On Fri, Nov 01, 2019 at 10:45:54AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
>
> The buffer cache shrinker frees more than just the xfs_buf slab
> objects - it also frees the pages attached to the buffers. Make sure
> the memory reclaim code accounts for this memory being freed
> correctly, similar to how the inode shrinker accounts for pages
> freed from the page cache due to mapping invalidation.
>
> We also need to make sure that the mm subsystem knows these are
> reclaimable objects. We provide the memory reclaim subsystem with a
> a shrinker to reclaim xfs_bufs, so we should really mark the slab
> that way.
>
> We also have a lot of xfs_bufs in a busy system, spread them around
> like we do inodes.
>
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---
>  fs/xfs/xfs_buf.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 1e63dd3d1257..d34e5d2edacd 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -324,6 +324,9 @@ xfs_buf_free(
>
>                          __free_page(page);
>                  }
> +                if (current->reclaim_state)
> +                        current->reclaim_state->reclaimed_slab +=
> +                                                bp->b_page_count;
>          } else if (bp->b_flags & _XBF_KMEM)
>                  kmem_free(bp->b_addr);
>          _xfs_buf_free_pages(bp);
> @@ -2061,7 +2064,8 @@ int __init
>  xfs_buf_init(void)
>  {
>          xfs_buf_zone = kmem_zone_init_flags(sizeof(xfs_buf_t), "xfs_buf",
> -                        KM_ZONE_HWALIGN, NULL);
> +                        KM_ZONE_HWALIGN | KM_ZONE_SPREAD | KM_ZONE_RECLAIM,

As discussed on the previous iteration of this series, I'd like to
capture the reasons for adding KM_ZONE_SPREAD as a separate patch.

--D

> +                        NULL);
>          if (!xfs_buf_zone)
>                  goto out;
>
> --
> 2.24.0.rc0
>
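For reference on the KM_ZONE_* flags being debated: in the XFS kmem wrappers they are thin aliases for the generic slab flags, and kmem_zone_init_flags() is essentially a kmem_cache_create() wrapper. The sketch below paraphrases fs/xfs/kmem.h as it looked around this series; treat it as an approximation rather than an exact quote of the header.

```c
#include <linux/slab.h>

/* Paraphrased from fs/xfs/kmem.h of this era (approximate, not verbatim) */
#define KM_ZONE_HWALIGN	SLAB_HWCACHE_ALIGN	/* cacheline-align objects */
#define KM_ZONE_RECLAIM	SLAB_RECLAIM_ACCOUNT	/* account slab as reclaimable */
#define KM_ZONE_SPREAD	SLAB_MEM_SPREAD		/* spread objects across nodes
						   when cpuset spreading is on */
#define kmem_zone_t	struct kmem_cache

static inline kmem_zone_t *
kmem_zone_init_flags(int size, char *zone_name, slab_flags_t flags,
		     void (*construct)(void *))
{
	return kmem_cache_create(zone_name, size, 0, flags, construct);
}
```

So, beyond the reclaim_state accounting, the functional change in this patch is that the xfs_buf cache is now created with SLAB_RECLAIM_ACCOUNT (so the mm subsystem counts its memory as reclaimable) and SLAB_MEM_SPREAD, in addition to the existing cacheline alignment.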
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 1e63dd3d1257..d34e5d2edacd 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -324,6 +324,9 @@ xfs_buf_free(
 
                         __free_page(page);
                 }
+                if (current->reclaim_state)
+                        current->reclaim_state->reclaimed_slab +=
+                                                bp->b_page_count;
         } else if (bp->b_flags & _XBF_KMEM)
                 kmem_free(bp->b_addr);
         _xfs_buf_free_pages(bp);
@@ -2061,7 +2064,8 @@ int __init
 xfs_buf_init(void)
 {
         xfs_buf_zone = kmem_zone_init_flags(sizeof(xfs_buf_t), "xfs_buf",
-                        KM_ZONE_HWALIGN, NULL);
+                        KM_ZONE_HWALIGN | KM_ZONE_SPREAD | KM_ZONE_RECLAIM,
+                        NULL);
         if (!xfs_buf_zone)
                 goto out;
 
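The reclaim_state hook in the first hunk feeds the same accounting path the inode and dentry shrinkers already use: when a shrinker frees pages during kswapd or direct reclaim, vmscan folds reclaim_state->reclaimed_slab back into the pass's reclaimed-page total. Below is a rough sketch of that consumer, assuming the struct reclaim_state layout of this era and simplified away from the real mm/vmscan.c scan_control plumbing.

```c
#include <linux/sched.h>
#include <linux/swap.h>	/* struct reclaim_state */

/*
 * Rough sketch (not the real mm/vmscan.c code): after shrinkers run,
 * reclaim adds whatever they reported via reclaimed_slab - e.g. the
 * bp->b_page_count bumped in xfs_buf_free() above - to the number of
 * pages reclaimed in this pass, then resets the counter.
 */
static unsigned long fold_reclaimed_slab(unsigned long nr_reclaimed)
{
	struct reclaim_state *rs = current->reclaim_state;

	if (rs) {
		nr_reclaimed += rs->reclaimed_slab;
		rs->reclaimed_slab = 0;
	}
	return nr_reclaimed;
}
```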