| Message ID | 2244151.1677251586@warthog.procyon.org.uk |
|---|---|
| State | New |
| Series | [RFC] cifs: Improve use of filemap_get_folios_tag() |
On Fri, Feb 24, 2023 at 7:13 AM David Howells <dhowells@redhat.com> wrote:
>
> The inefficiency derived from filemap_get_folios_tag() getting a batch of
> contiguous folios in Vishal's change to afs that got copied into cifs can
> be reduced by skipping over those folios that have been passed by the start
> position rather than going through the process of locking, checking and
> trying to write them.

This patch just makes me go "Ugh". There's something wrong with this code
for it to need these games. That just makes me convinced that your other
patch that just gets rid of the batching entirely is the right one.

Of course, I'd be even happier if Willy is right and the code could use
the generic write_cache_pages() and avoid all of these things entirely.
I'm not clear on why cifs and afs are being so different in the first
place, and some of the differences are just odd (like that skip count).

Linus
Linus Torvalds <torvalds@linux-foundation.org> wrote:
> Of course, I'd be even happier if Willy is right and the code could
> use the generic write_cache_pages() and avoid all of these things
> entirely. I'm not clear on why cifs and afs are being so different in
> the first place, and some of the differences are just odd (like that
> skip count).

The main reason is that write_cache_pages() doesn't (and can't) check
PG_fscache (btrfs uses PG_private_2 for other purposes). NFS, 9p and
ceph, for the moment, don't cache files that are open for writing, but
I'm intending to change that at some point.

The intention is to unify the writepages code for at least 9p, afs, ceph
and cifs in netfslib in the future.

David
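[Editorial note: to make the PG_fscache point concrete, here is a minimal, hypothetical sketch — not code from any of these filesystems, and the helper name is invented — of the extra per-folio step a fscache-aware writeback loop performs. write_cache_pages() offers no hook for this, and since btrfs reuses PG_private_2 the check cannot simply be added to the generic helper.]

```c
/*
 * Hypothetical sketch: before a folio can be written back to the server,
 * any in-flight write of it to the local cache (PG_fscache, which aliases
 * PG_private_2) must have completed.
 */
static int example_wait_for_fscache(struct folio *folio,
				    struct writeback_control *wbc)
{
	if (folio_test_fscache(folio)) {	/* cache write still in flight */
		if (wbc->sync_mode == WB_SYNC_NONE)
			return -EBUSY;		/* don't block; revisit later */
		folio_wait_fscache(folio);	/* wait for the cache write */
	}
	return 0;
}
```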
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index ebfcaae8c437..bae1a9709e32 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2839,6 +2839,7 @@ static ssize_t cifs_write_back_from_locked_folio(struct address_space *mapping,
 	free_xid(xid);
 	if (rc == 0) {
 		wbc->nr_to_write = count;
+		rc = len;
 	} else if (is_retryable_error(rc)) {
 		cifs_pages_write_redirty(inode, start, len);
 	} else {
@@ -2873,6 +2874,13 @@ static int cifs_writepages_region(struct address_space *mapping,
 		for (int i = 0; i < nr; i++) {
 			ssize_t ret;
 			struct folio *folio = fbatch.folios[i];
+			unsigned long long fstart;
+
+			fstart = folio_pos(folio); /* May go backwards with THPs */
+			if (fstart < start &&
+			    folio_size(folio) <= start - fstart)
+				continue;
+			start = fstart;

redo_folio:
 			start = folio_pos(folio); /* May regress with THPs */
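[Editorial note: the skip test in the second hunk is equivalent to checking that the folio ends at or before the current start position (fstart + folio_size <= start), written so the unsigned subtraction start - fstart is only evaluated when fstart < start. A standalone userspace sketch with hypothetical values, not kernel code:]

```c
#include <stdio.h>
#include <stdbool.h>
#include <stddef.h>

/* Same predicate as the patch: skip a folio that lies entirely before
 * the current write position, avoiding the lock/check/write attempt.
 */
static bool skip_folio(unsigned long long start,
		       unsigned long long fstart, size_t fsize)
{
	return fstart < start && fsize <= start - fstart;
}

int main(void)
{
	/* start has advanced to 32768: a 16KiB folio at 16384 ends exactly
	 * at start and is skipped; a 64KiB THP at 16384 straddles start and
	 * must still be processed; a folio at 65536 lies ahead of start.
	 */
	printf("%d\n", skip_folio(32768, 16384, 16384));	/* 1: skip  */
	printf("%d\n", skip_folio(32768, 16384, 65536));	/* 0: write */
	printf("%d\n", skip_folio(32768, 65536, 16384));	/* 0: ahead */
	return 0;
}
```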
[This is in addition to the "cifs: Fix cifs_writepages_region()" patch that
I posted]

The inefficiency derived from filemap_get_folios_tag() getting a batch of
contiguous folios in Vishal's change to afs that got copied into cifs can
be reduced by skipping over those folios that have been passed by the start
position rather than going through the process of locking, checking and
trying to write them.

A similar change would need to be made in afs, in addition to fixing the
bugs there.

There's also a fix in cifs_write_back_from_locked_folio() where it doesn't
return the amount of data dispatched to the server, as ->async_writev()
just returns 0 on success.

Signed-off-by: David Howells <dhowells@redhat.com>
---
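[Editorial note: to illustrate the rc = len fix with a hypothetical userspace model — not the cifs code — if the write-back helper reports 0 bytes on success, a caller that advances its scan position by the return value makes no progress, whereas returning len lets it move past the region just written:]

```c
#include <stdio.h>

/* Stand-in for cifs_write_back_from_locked_folio(); "fixed" selects
 * whether it returns the bytes dispatched (after the patch) or the 0
 * that ->async_writev() reports on success (before the patch).
 */
static long write_back_region(unsigned long long len, int fixed)
{
	return fixed ? (long)len : 0;
}

int main(void)
{
	unsigned long long start = 0, end = 262144;
	int passes = 0;

	while (start < end && passes < 100) {
		long ret = write_back_region(65536, 1);

		passes++;
		if (ret <= 0)
			break;	/* with fixed=0 the scan would stall here */
		start += ret;	/* advance by the amount actually written */
	}
	printf("reached %llu in %d passes\n", start, passes);
	return 0;
}
```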