Message ID | 20200824145511.10500-4-willy@infradead.org
---|---
State | Superseded
Series | THP iomap patches for 5.10
On Mon, Aug 24, 2020 at 03:55:04PM +0100, Matthew Wilcox (Oracle) wrote:
> We can skip most of the initialisation, although spinlocks still
> need explicit initialisation as architectures may use a non-zero
> value to indicate unlocked. The comment is no longer useful as
> attach_page_private() handles the refcount now.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/iomap/buffered-io.c | 10 +---------
>  1 file changed, 1 insertion(+), 9 deletions(-)

The sooner this goes in the better :)

Reviewed-by: Dave Chinner <dchinner@redhat.com>
On Mon, Aug 24, 2020 at 03:55:04PM +0100, Matthew Wilcox (Oracle) wrote:
> We can skip most of the initialisation, although spinlocks still
> need explicit initialisation as architectures may use a non-zero
> value to indicate unlocked. The comment is no longer useful as
> attach_page_private() handles the refcount now.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Looks good to me,
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>

--D

> ---
>  fs/iomap/buffered-io.c | 10 +---------
>  1 file changed, 1 insertion(+), 9 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 13d5cdab8dcd..639d54a4177e 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -49,16 +49,8 @@ iomap_page_create(struct inode *inode, struct page *page)
>  	if (iop || i_blocks_per_page(inode, page) <= 1)
>  		return iop;
>  
> -	iop = kmalloc(sizeof(*iop), GFP_NOFS | __GFP_NOFAIL);
> -	atomic_set(&iop->read_count, 0);
> -	atomic_set(&iop->write_count, 0);
> +	iop = kzalloc(sizeof(*iop), GFP_NOFS | __GFP_NOFAIL);
>  	spin_lock_init(&iop->uptodate_lock);
> -	bitmap_zero(iop->uptodate, PAGE_SIZE / SECTOR_SIZE);
> -
> -	/*
> -	 * migrate_page_move_mapping() assumes that pages with private data have
> -	 * their count elevated by 1.
> -	 */
>  	attach_page_private(page, iop);
>  	return iop;
>  }
> -- 
> 2.28.0
>
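For context, the structure being allocated here looked roughly like this at the time (a sketch reconstructed from the initialisations the patch removes, not copied verbatim from the tree). Every field other than the spinlock is valid when zero-filled, which is what lets kzalloc() replace the explicit atomic_set() and bitmap_zero() calls:

	/* Sketch of the struct iomap_page that iomap_page_create() allocates. */
	struct iomap_page {
		atomic_t	read_count;	/* read I/Os in flight; idle == 0 */
		atomic_t	write_count;	/* write I/Os in flight; idle == 0 */
		spinlock_t	uptodate_lock;	/* serialises uptodate bitmap updates */
		DECLARE_BITMAP(uptodate, PAGE_SIZE / SECTOR_SIZE);
	};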
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 13d5cdab8dcd..639d54a4177e 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -49,16 +49,8 @@ iomap_page_create(struct inode *inode, struct page *page)
 	if (iop || i_blocks_per_page(inode, page) <= 1)
 		return iop;
 
-	iop = kmalloc(sizeof(*iop), GFP_NOFS | __GFP_NOFAIL);
-	atomic_set(&iop->read_count, 0);
-	atomic_set(&iop->write_count, 0);
+	iop = kzalloc(sizeof(*iop), GFP_NOFS | __GFP_NOFAIL);
 	spin_lock_init(&iop->uptodate_lock);
-	bitmap_zero(iop->uptodate, PAGE_SIZE / SECTOR_SIZE);
-
-	/*
-	 * migrate_page_move_mapping() assumes that pages with private data have
-	 * their count elevated by 1.
-	 */
 	attach_page_private(page, iop);
 	return iop;
 }
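Two notes on the commit message for readers less familiar with the MM helpers. First, attach_page_private() (added to include/linux/pagemap.h around v5.8) takes the extra page reference itself, which is why the removed comment about migrate_page_move_mapping() no longer adds information. A rough sketch of that helper, from memory of the 5.8-era code rather than quoted verbatim:

	/* Approximate shape of attach_page_private(); illustrative only. */
	static inline void attach_page_private(struct page *page, void *data)
	{
		get_page(page);		/* the elevated count the removed comment described */
		set_page_private(page, (unsigned long)data);
		SetPagePrivate(page);
	}

Second, spin_lock_init() has to stay even after the switch to kzalloc(): as the commit message notes, an unlocked spinlock is not guaranteed to be the all-zeroes pattern on every architecture, and with lock debugging enabled the explicit initialiser also sets up the lock's debug state.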