Message ID | 20210423171010.12-1-jack@suse.cz (mailing list archive)
---|---
Series | fs: Hole punch vs page cache filling races
Hi Jan,

In future, can you please use the same cc-list for the entire patchset?

The stuff that has hit the XFS list (where I'm replying from) doesn't give
me any context as to what the core changes are that allow XFS to be
changed, so I can't review them in isolation.

I've got to spend time now reconstructing the patchset into a single
series because the delivery has been spread across three different
mailing lists and so hit 3 different procmail filters. I'll comment on
the patches once I've reconstructed the series and read through it as a
whole...

/me considers the way people use "cc" tags in git commits for including
mailing lists on individual patches actively harmful. Unless the
recipient is subscribed to all the mailing lists the patchset was CC'd
to, they can't easily find the bits of the patchset that didn't arrive in
their mailbox. Individual mailing lists should receive entire patchsets
for review, not random, individual, context-free patches.

And, FWIW, cc'ing the cover letter to all the mailing lists is not good
enough. Being able to see the code change as a whole is what matters for
review, not the cover letter...

Cheers,

Dave.

On Fri, Apr 23, 2021 at 07:29:29PM +0200, Jan Kara wrote:
> Hello,
>
> here is another version of my patches to address races between hole punching
> and page cache filling functions for ext4 and other filesystems. I think
> we are coming close to a complete solution so I've removed the RFC tag from
> the subject. I went through all filesystems supporting hole punching and
> converted them from their private locks to a generic one (usually fixing the
> race ext4 had as a side effect). I also found out ceph & cifs didn't have
> any protection from the hole punch vs page fault race either, so I've added
> appropriate protections there. Still open are the GFS2 and OCFS2 filesystems.
> GFS2 actually avoids the race but is prone to deadlocks (it acquires the same
> lock both above and below mmap_sem); OCFS2 locking seems kind of hosed and
> some read, write, and hole punch paths are not properly serialized, possibly
> leading to fs corruption. Both issues are non-trivial, so the respective fs
> maintainers have to deal with them (I've informed them and the problems were
> generally confirmed). Anyway, for all the other filesystems this kind of race
> should be closed.
>
> As a next step, I'd like to actually make sure all calls to
> truncate_inode_pages() happen under mapping->invalidate_lock, add the assert,
> and then we can also get rid of i_size checks in some places (truncate can
> use the same serialization scheme as hole punch). But that step is mostly
> a cleanup, so I'd like to get these functional fixes in first.
>
> Changes since v3:
> * Renamed and moved the lock to struct address_space
> * Added conversions of tmpfs, ceph, cifs, fuse, f2fs
> * Fixed error handling path in filemap_read()
> * Removed the .page_mkwrite() cleanup from the series for now
>
> Changes since v2:
> * Added documentation and comments regarding lock ordering and how the lock
>   is supposed to be used
> * Added conversions of ext2, xfs, zonefs
> * Added a patch removing i_mapping_sem protection from .page_mkwrite handlers
>
> Changes since v1:
> * Moved to using inode->i_mapping_sem instead of an aops handler to acquire
>   the appropriate lock
>
> ---
> Motivation:
>
> Amir has reported [1] that ext4 has a potential issue when reads can race
> with hole punching, possibly exposing stale data from freed blocks or even
> corrupting the filesystem when stale mapping data gets used for writeout. The
> problem is that during hole punching, new page cache pages can get
> instantiated and the block mapping for them looked up in the punched range
> after truncate_inode_pages() has run but before the filesystem removes the
> blocks from the file. In principle, any filesystem implementing hole punching
> thus needs to implement a mechanism that blocks instantiation of page cache
> pages during hole punching to avoid this race. This is further complicated by
> the fact that there are multiple places that can instantiate pages in the
> page cache. We can have a regular read(2) or a page fault doing this, but
> fadvise(2) or madvise(2) can also result in reading in page cache pages
> through force_page_cache_readahead().
>
> There are a couple of ways to fix this. The first way (currently implemented
> by XFS) is to protect read(2) and *advise(2) calls with i_rwsem so that they
> are serialized with hole punching. This is easy to do, but as a result all
> reads would then be serialized with writes, and thus mixed read-write
> workloads suffer heavily on ext4. Thus this series introduces
> inode->i_mapping_sem and uses it when creating new pages in the page cache
> and looking up their corresponding block mapping. We also replace
> EXT4_I(inode)->i_mmap_sem with this new rwsem, which provides the necessary
> serialization with hole punching for ext4.
>
> Honza
>
> [1] https://lore.kernel.org/linux-fsdevel/CAOQ4uxjQNmxqmtA_VbYW0Su9rKRk2zobJmahcyeaEVOFKVQ5dw@mail.gmail.com/
>
> Previous versions:
> Link: https://lore.kernel.org/linux-fsdevel/20210208163918.7871-1-jack@suse.cz/
> Link: http://lore.kernel.org/r/20210413105205.3093-1-jack@suse.cz
>
> CC: ceph-devel@vger.kernel.org
> CC: Chao Yu <yuchao0@huawei.com>
> CC: Damien Le Moal <damien.lemoal@wdc.com>
> CC: "Darrick J. Wong" <darrick.wong@oracle.com>
> CC: Hugh Dickins <hughd@google.com>
> CC: Jaegeuk Kim <jaegeuk@kernel.org>
> CC: Jeff Layton <jlayton@kernel.org>
> CC: Johannes Thumshirn <jth@kernel.org>
> CC: linux-cifs@vger.kernel.org
> CC: <linux-ext4@vger.kernel.org>
> CC: linux-f2fs-devel@lists.sourceforge.net
> CC: <linux-fsdevel@vger.kernel.org>
> CC: <linux-mm@kvack.org>
> CC: <linux-xfs@vger.kernel.org>
> CC: Miklos Szeredi <miklos@szeredi.hu>
> CC: Steve French <sfrench@samba.org>
> CC: Ted Tso <tytso@mit.edu>
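For readers without the rest of the series in front of them, the serialization the cover letter describes boils down to a per-address_space rw_semaphore: hole punching takes it exclusively around page cache truncation and block removal, while anything that can instantiate page cache pages and look up their block mapping takes it shared. Below is a minimal sketch of that scheme, assuming the v4 naming (mapping->invalidate_lock as a struct rw_semaphore); it is not code from the series, and the fs_*() helpers are hypothetical placeholders for a filesystem's own block-mapping routines.

    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/pagemap.h>

    /* Hypothetical filesystem-specific helpers, not part of the series. */
    static int fs_remove_blocks(struct inode *inode, loff_t start, loff_t end);
    static int fs_read_page_and_map_blocks(struct inode *inode, pgoff_t index);

    /* Hole punch side: exclude page cache filling while pages and blocks go away. */
    static int fs_punch_hole(struct inode *inode, loff_t start, loff_t end)
    {
            struct address_space *mapping = inode->i_mapping;
            int ret;

            down_write(&mapping->invalidate_lock);
            truncate_pagecache_range(inode, start, end);
            ret = fs_remove_blocks(inode, start, end);      /* placeholder */
            up_write(&mapping->invalidate_lock);
            return ret;
    }

    /*
     * Filling side: read(2), page faults and *advise(2) readahead instantiate
     * pages and look up the block mapping under the shared lock, so they can
     * never see a mapping that a concurrent hole punch is about to free.
     */
    static int fs_fill_page(struct inode *inode, pgoff_t index)
    {
            struct address_space *mapping = inode->i_mapping;
            int ret;

            down_read(&mapping->invalidate_lock);
            ret = fs_read_page_and_map_blocks(inode, index);        /* placeholder */
            up_read(&mapping->invalidate_lock);
            return ret;
    }

Because writers are comparatively rare (hole punch, truncate) and readers are the common paths, this keeps mixed read-write workloads from serializing behind i_rwsem, which is the ext4 regression the cover letter wants to avoid.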
On Sat, Apr 24, 2021 at 08:07:51AM +1000, Dave Chinner wrote:
> I've got to spend time now reconstructing the patchset into a single
> series because the delivery has been spread across three different
> mailing lists and so hit 3 different procmail filters. I'll comment
> on the patches once I've reconstructed the series and read through
> it as a whole...

$ b4 mbox 20210423171010.12-1-jack@suse.cz
Looking up https://lore.kernel.org/r/20210423171010.12-1-jack%40suse.cz
Grabbing thread from lore.kernel.org/ceph-devel
6 messages in the thread
Saved ./20210423171010.12-1-jack@suse.cz.mbx
On Sat, Apr 24, 2021 at 12:51:49AM +0100, Matthew Wilcox wrote:
> On Sat, Apr 24, 2021 at 08:07:51AM +1000, Dave Chinner wrote:
> > I've got to spend time now reconstructing the patchset into a single
> > series because the delivery has been spread across three different
> > mailing lists and so hit 3 different procmail filters. I'll comment
> > on the patches once I've reconstructed the series and read through
> > it as a whole...
>
> $ b4 mbox 20210423171010.12-1-jack@suse.cz
> Looking up https://lore.kernel.org/r/20210423171010.12-1-jack%40suse.cz
> Grabbing thread from lore.kernel.org/ceph-devel
> 6 messages in the thread
> Saved ./20210423171010.12-1-jack@suse.cz.mbx

Yikes. Just send them damn mails. Or switch the lists to NNTP, but
don't let the people who are reviewing your patches do stupid work
with weird tools.