
[27/30] iomap: Remove calls to set and clear folio error flag

Message ID 20240420025029.2166544-28-willy@infradead.org (mailing list archive)
State Superseded, archived
Series: None

Commit Message

Matthew Wilcox (Oracle) April 20, 2024, 2:50 a.m. UTC
The folio error flag is not checked anywhere, so we can remove the calls
to set and clear it.

Cc: Christian Brauner <brauner@kernel.org>
Cc: Darrick J. Wong <djwong@kernel.org>
Cc: linux-xfs@vger.kernel.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 8 --------
 1 file changed, 8 deletions(-)
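
The read side makes the redundancy easy to see: iomap_finish_folio_read() already folds success or failure into the value it passes to folio_end_read(), and callers such as __iomap_write_begin() only ever test folio_test_uptodate(). The sketch below is a minimal userspace model of that completion path, using simplified stand-in types and toy_-prefixed names rather than the real kernel definitions, to show that a separate error bit carries no information a reader could act on.

/* Toy model of folio read completion, buildable in userspace with any C
 * compiler.  The struct and helpers are simplified stand-ins, not the
 * kernel's definitions. */
#include <stdbool.h>
#include <stdio.h>

struct folio {
	bool locked;
	bool uptodate;
	bool error;	/* the flag this patch stops setting */
};

/* Stand-in for folio_end_read(): success is folded into uptodate. */
static void toy_folio_end_read(struct folio *folio, bool success)
{
	folio->uptodate = success;
	folio->locked = false;	/* unlock; the kernel also wakes waiters */
}

/* Stand-in for folio_test_uptodate(), the only state readers check. */
static bool toy_folio_test_uptodate(const struct folio *folio)
{
	return folio->uptodate;
}

int main(void)
{
	struct folio ok = { .locked = true };
	struct folio bad = { .locked = true };

	toy_folio_end_read(&ok, true);	/* read completed successfully */
	toy_folio_end_read(&bad, false);	/* read failed: uptodate stays clear */

	/* Nothing ever looks at ->error, so setting it on failure would
	 * change nothing a caller can observe. */
	printf("ok:  uptodate=%d\n", toy_folio_test_uptodate(&ok));
	printf("bad: uptodate=%d\n", toy_folio_test_uptodate(&bad));
	return 0;
}

The real folio_end_read(folio, uptodate) call, visible in the first hunk of the patch below, behaves analogously: a failed read simply leaves the folio !uptodate.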

Comments

Christoph Hellwig April 22, 2024, 6:16 a.m. UTC | #1
On Sat, Apr 20, 2024 at 03:50:22AM +0100, Matthew Wilcox (Oracle) wrote:
> The folio error flag is not checked anywhere, so we can remove the calls
> to set and clear it.

This patch on its own looks good, but seeing this is a 27/30 I have
no chance to actually fully review it.

Matthew Wilcox (Oracle) April 22, 2024, 3:05 p.m. UTC | #2
On Sun, Apr 21, 2024 at 11:16:01PM -0700, Christoph Hellwig wrote:
> On Sat, Apr 20, 2024 at 03:50:22AM +0100, Matthew Wilcox (Oracle) wrote:
> > The folio error flag is not checked anywhere, so we can remove the calls
> > to set and clear it.
> 
> This patch on its own looks good, but seeing this is a 27/30 I have
> no chance to actually fully review it.

You were bcc'd on 0/30 which fully explained this.

Christoph Hellwig April 22, 2024, 3:26 p.m. UTC | #3
On Mon, Apr 22, 2024 at 04:05:59PM +0100, Matthew Wilcox wrote:
> On Sun, Apr 21, 2024 at 11:16:01PM -0700, Christoph Hellwig wrote:
> > On Sat, Apr 20, 2024 at 03:50:22AM +0100, Matthew Wilcox (Oracle) wrote:
> > > The folio error flag is not checked anywhere, so we can remove the calls
> > > to set and clear it.
> > 
> > This patch on its own looks good, but seeing this is a 27/30 I have
> > no chance to actually fully review it.
> 
> You were bcc'd on 0/30 which fully explained this.

Not on the XFS list, through which I'm reading this, at least.  If it
was sent to me personally it goes to >/dev/null anyway, as all mails
Cc'ed to mailing lists do.

Please always send the damn series to everyone, fishing individual
mails out of it is just a giant pain in the butt.

Matthew Wilcox (Oracle) April 22, 2024, 5:51 p.m. UTC | #4
On Mon, Apr 22, 2024 at 08:26:34AM -0700, Christoph Hellwig wrote:
> On Mon, Apr 22, 2024 at 04:05:59PM +0100, Matthew Wilcox wrote:
> > On Sun, Apr 21, 2024 at 11:16:01PM -0700, Christoph Hellwig wrote:
> > > On Sat, Apr 20, 2024 at 03:50:22AM +0100, Matthew Wilcox (Oracle) wrote:
> > > > The folio error flag is not checked anywhere, so we can remove the calls
> > > > to set and clear it.
> > > 
> > > This patch on its own looks good, but seeing this is a 27/30 I have
> > > no chance to actually fully review it.
> > 
> > You were bcc'd on 0/30 which fully explained this.
> 
> Not on the XFS list, through which I'm reading this, at least.  If it
> was sent to me personally it goes to >/dev/null anyway, as all mails
> Cc'ed to mailing lists do.
> 
> Please always send the damn series to everyone, fishing individual
> mails out of it is just a giant pain in the butt.

If I do that then half the mailing lists bounce them for having too
many recipients.  b4 can fetch the entire series for you if you've
decided to break your email workflow.  And yes, 0/30 was bcc'd to
linux-xfs as well.

Christoph Hellwig April 25, 2024, 12:23 p.m. UTC | #5
On Mon, Apr 22, 2024 at 06:51:37PM +0100, Matthew Wilcox wrote:
> If I do that then half the mailing lists bounce them for having too
> many recipients.  b4 can fetch the entire series for you if you've
> decided to break your email workflow.  And yes, 0/30 was bcc'd to
> linux-xfs as well.

I can't find it on linux-xfs still.  And please just don't make up
your own workflow or require odd tools.

Matthew Wilcox (Oracle) April 25, 2024, 12:44 p.m. UTC | #6
On Thu, Apr 25, 2024 at 05:23:44AM -0700, Christoph Hellwig wrote:
> On Mon, Apr 22, 2024 at 06:51:37PM +0100, Matthew Wilcox wrote:
> > If I do that then half the mailing lists bounce them for having too
> > many recipients.  b4 can fetch the entire series for you if you've
> > decided to break your email workflow.  And yes, 0/30 was bcc'd to
> > linux-xfs as well.
> 
> I can't find it on linux-xfs still.  And please just don't make up
> your own workflow or require odd tools.

You even quoted the bit where I explained that the workflow you insist I
follow doesn't work.

Christoph Hellwig April 25, 2024, 12:51 p.m. UTC | #7
On Thu, Apr 25, 2024 at 01:44:49PM +0100, Matthew Wilcox wrote:
> On Thu, Apr 25, 2024 at 05:23:44AM -0700, Christoph Hellwig wrote:
> > On Mon, Apr 22, 2024 at 06:51:37PM +0100, Matthew Wilcox wrote:
> > > If I do that then half the mailing lists bounce them for having too
> > > many recipients.  b4 can fetch the entire series for you if you've
> > > decided to break your email workflow.  And yes, 0/30 was bcc'd to
> > > linux-xfs as well.
> > 
> > I can't find it on linux-xfs still.  And please just don't make up
> > your own workflow or require odd tools.
> 
> You even quoted the bit where I explained that the workflow you insist I
> follow doesn't work.

I've regularly sent series to more lists than you'd need for 30
patches even if they were entirely unrelated.  But if they are
entirely unrelated it shouldn't be a series to start with.

Darrick J. Wong April 25, 2024, 4:47 p.m. UTC | #8
On Thu, Apr 25, 2024 at 05:51:45AM -0700, Christoph Hellwig wrote:
> On Thu, Apr 25, 2024 at 01:44:49PM +0100, Matthew Wilcox wrote:
> > On Thu, Apr 25, 2024 at 05:23:44AM -0700, Christoph Hellwig wrote:
> > > On Mon, Apr 22, 2024 at 06:51:37PM +0100, Matthew Wilcox wrote:
> > > > If I do that then half the mailing lists bounce them for having too
> > > > many recipients.  b4 can fetch the entire series for you if you've
> > > > decided to break your email workflow.  And yes, 0/30 was bcc'd to
> > > > linux-xfs as well.
> > > 
> > > I can't find it on linux-xfs still.  And please just don't make up
> > > your own workflow or require odd tools.
> > 
> > You even quoted the bit where I explained that the workflow you insist I
> > follow doesn't work.
> 
> I've regularly sent series to more lists than you'd need for 30
> patches even if they were entirely unrelated.  But if they are
> entirely unrelated it shouldn't be a series to start with.

One thing I didn't realize until willy pointed this out separately is
that some list processing software will silently ignore an email if it
has too many entries (~10) in the to/cc list, because of spam
heuristics.  I think vger/linux.dev is fairly forgiving about that, but
indie listservs might not be, and that adds friction to treewide
changes.

At least the whole series made it to fsdevel, but fsdevel is such a
firehose now that I can't keep up with it.  It's too bad that linux-xfs
can't simply mirror patchsets sent to mm/fsdevel with "xfs:" in the
title, and then I wouldn't have to look at the firehose.

All I'm really trying to say is, patchbombs are crap for collaboration.

--D

Darrick J. Wong April 25, 2024, 4:55 p.m. UTC | #9
On Sat, Apr 20, 2024 at 03:50:22AM +0100, Matthew Wilcox (Oracle) wrote:
> The folio error flag is not checked anywhere, so we can remove the calls
> to set and clear it.
> 
> Cc: Christian Brauner <brauner@kernel.org>
> Cc: Darrick J. Wong <djwong@kernel.org>
> Cc: linux-xfs@vger.kernel.org
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Looks fine to me,
Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D

> ---
>  fs/iomap/buffered-io.c | 8 --------
>  1 file changed, 8 deletions(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 4e8e41c8b3c0..41352601f939 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -306,8 +306,6 @@ static void iomap_finish_folio_read(struct folio *folio, size_t off,
>  		spin_unlock_irqrestore(&ifs->state_lock, flags);
>  	}
>  
> -	if (error)
> -		folio_set_error(folio);
>  	if (finished)
>  		folio_end_read(folio, uptodate);
>  }
> @@ -460,9 +458,6 @@ int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops)
>  	while ((ret = iomap_iter(&iter, ops)) > 0)
>  		iter.processed = iomap_readpage_iter(&iter, &ctx, 0);
>  
> -	if (ret < 0)
> -		folio_set_error(folio);
> -
>  	if (ctx.bio) {
>  		submit_bio(ctx.bio);
>  		WARN_ON_ONCE(!ctx.cur_folio_in_bio);
> @@ -697,7 +692,6 @@ static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
>  
>  	if (folio_test_uptodate(folio))
>  		return 0;
> -	folio_clear_error(folio);
>  
>  	do {
>  		iomap_adjust_read_range(iter->inode, folio, &block_start,
> @@ -1528,8 +1522,6 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
>  
>  	/* walk all folios in bio, ending page IO on them */
>  	bio_for_each_folio_all(fi, bio) {
> -		if (error)
> -			folio_set_error(fi.folio);
>  		iomap_finish_folio_write(inode, fi.folio, fi.length);
>  		folio_count++;
>  	}
> -- 
> 2.43.0
> 
>

Patch

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 4e8e41c8b3c0..41352601f939 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -306,8 +306,6 @@  static void iomap_finish_folio_read(struct folio *folio, size_t off,
 		spin_unlock_irqrestore(&ifs->state_lock, flags);
 	}
 
-	if (error)
-		folio_set_error(folio);
 	if (finished)
 		folio_end_read(folio, uptodate);
 }
@@ -460,9 +458,6 @@  int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops)
 	while ((ret = iomap_iter(&iter, ops)) > 0)
 		iter.processed = iomap_readpage_iter(&iter, &ctx, 0);
 
-	if (ret < 0)
-		folio_set_error(folio);
-
 	if (ctx.bio) {
 		submit_bio(ctx.bio);
 		WARN_ON_ONCE(!ctx.cur_folio_in_bio);
@@ -697,7 +692,6 @@  static int __iomap_write_begin(const struct iomap_iter *iter, loff_t pos,
 
 	if (folio_test_uptodate(folio))
 		return 0;
-	folio_clear_error(folio);
 
 	do {
 		iomap_adjust_read_range(iter->inode, folio, &block_start,
@@ -1528,8 +1522,6 @@  iomap_finish_ioend(struct iomap_ioend *ioend, int error)
 
 	/* walk all folios in bio, ending page IO on them */
 	bio_for_each_folio_all(fi, bio) {
-		if (error)
-			folio_set_error(fi.folio);
 		iomap_finish_folio_write(inode, fi.folio, fi.length);
 		folio_count++;
 	}
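
The writeback hunk is safe for a similar reason: a per-folio error bit was never the channel through which write errors reach user space. Writeback errors are recorded against the address_space with mapping_set_error() and surface when the file is fsync()ed, via the errseq_t mechanism. The fragment below is a rough userspace model of that reporting path, again using simplified stand-in types and toy_-prefixed names rather than the kernel's own.

/* Toy model of how a writeback error reaches user space: via the
 * mapping, not via a per-folio error flag.  Stand-in types and names. */
#include <errno.h>
#include <stdio.h>

struct toy_address_space {
	int wb_err;	/* stands in for the kernel's errseq_t cursor */
};

/* Stand-in for mapping_set_error(): record a writeback failure. */
static void toy_mapping_set_error(struct toy_address_space *mapping, int error)
{
	if (error)
		mapping->wb_err = error;
}

/* Stand-in for the error check fsync() performs before returning. */
static int toy_check_wb_error(struct toy_address_space *mapping)
{
	int err = mapping->wb_err;

	mapping->wb_err = 0;	/* the real errseq_t advances a sequence cursor */
	return err;
}

int main(void)
{
	struct toy_address_space mapping = { 0 };

	/* Writeback of some folio fails with -EIO ... */
	toy_mapping_set_error(&mapping, -EIO);

	/* ... and fsync() still sees it, with no folio error flag involved. */
	printf("fsync -> %d\n", toy_check_wb_error(&mapping));
	return 0;
}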