| Message ID | 1399590979-15331-1-git-send-email-zab@redhat.com (mailing list archive) |
|---|---|
| State | Superseded, archived |
On Thu, May 08, 2014 at 07:16:17PM -0400, Zach Brown wrote:
> The compression layer seems to have been built to return -1 and have
> callers make up errors that make sense. This isn't great because there
> are different classes of errors that originate down in the compression
> layer. Allocation failure and corrupt compressed data to name two.
>
> --- a/fs/btrfs/lzo.c
> +++ b/fs/btrfs/lzo.c
> @@ -143,7 +143,7 @@ static int lzo_compress_pages(struct list_head *ws,
>  		if (ret != LZO_E_OK) {
>  			printk(KERN_DEBUG "BTRFS: deflate in loop returned %d\n",
>  			       ret);
> -			ret = -1;
> +			ret = -EIO;
>  			goto out;
>  		}
>
> @@ -189,7 +189,7 @@ static int lzo_compress_pages(struct list_head *ws,
>  			kunmap(out_page);
>  			if (nr_pages == nr_dest_pages) {
>  				out_page = NULL;
> -				ret = -1;
> +				ret = -EIO;

This is not a true EIO: the error condition says that the caller
prepared nr_dest_pages for the compressed data but the compression wants
more.

The number of pages is at most 128k / PAGE_SIZE.

It's a soft error, the data are written uncompressed. The closest errno
here seems to be E2BIG, which would apply in the following hunk as well.

>  				goto out;
>  			}
>
> @@ -208,7 +208,7 @@ static int lzo_compress_pages(struct list_head *ws,
>
>  		/* we're making it bigger, give up */
>  		if (tot_in > 8192 && tot_in < tot_out) {
> -			ret = -1;
> +			ret = -EIO;

Here, E2BIG.

>  			goto out;
>  		}
>
> @@ -335,7 +335,7 @@ cont:
>  				break;
>
>  			if (page_in_index + 1 >= total_pages_in) {
> -				ret = -1;
> +				ret = -EIO;

That looks like an internal error, we should never ask for more pages
than are in the input, so the buffer offset calculations are wrong.

>  				goto done;
>  			}
>

Analogously, the same applies to zlib. The rest of the EIOs look ok.

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Fri, May 09, 2014 at 03:39:26PM +0200, David Sterba wrote:
> On Thu, May 08, 2014 at 07:16:17PM -0400, Zach Brown wrote:
> > The compression layer seems to have been built to return -1 and have
> > callers make up errors that make sense. This isn't great because there
> > are different classes of errors that originate down in the compression
> > layer. Allocation failure and corrupt compressed data to name two.
> >
> > --- a/fs/btrfs/lzo.c
> > +++ b/fs/btrfs/lzo.c
> > @@ -143,7 +143,7 @@ static int lzo_compress_pages(struct list_head *ws,
> >  		if (ret != LZO_E_OK) {
> >  			printk(KERN_DEBUG "BTRFS: deflate in loop returned %d\n",
> >  			       ret);
> > -			ret = -1;
> > +			ret = -EIO;
> >  			goto out;
> >  		}
> >
> > @@ -189,7 +189,7 @@ static int lzo_compress_pages(struct list_head *ws,
> >  			kunmap(out_page);
> >  			if (nr_pages == nr_dest_pages) {
> >  				out_page = NULL;
> > -				ret = -1;
> > +				ret = -EIO;
>
> This is not a true EIO: the error condition says that the caller
> prepared nr_dest_pages for the compressed data but the compression wants
> more.
>
> The number of pages is at most 128k / PAGE_SIZE.
>
> It's a soft error, the data are written uncompressed. The closest errno
> here seems to be E2BIG, which would apply in the following hunk as well.

Sure. *Anything* is better than the raw -EPERM :). I'll change these
and resend v2.

> > @@ -335,7 +335,7 @@ cont:
> >  				break;
> >
> >  			if (page_in_index + 1 >= total_pages_in) {
> > -				ret = -1;
> > +				ret = -EIO;
>
> That looks like an internal error, we should never ask for more pages
> than are in the input, so the buffer offset calculations are wrong.

Yeah, but EIO is still arguably the right thing. There's nothing
userspace can do about broken kernel code. We don't want to give them
an error that could be misinterpreted as them having used an interface
incorrectly. We could wrap WARN_ON_ONCE() around it, I suppose. I'm
inclined to leave it as is.

- z
On Fri, May 09, 2014 at 01:40:15PM -0700, Zach Brown wrote:
> > > @@ -335,7 +335,7 @@ cont:
> > >  				break;
> > >
> > >  			if (page_in_index + 1 >= total_pages_in) {
> > > -				ret = -1;
> > > +				ret = -EIO;
> >
> > That looks like an internal error, we should never ask for more pages
> > than are in the input, so the buffer offset calculations are wrong.
>
> Yeah, but EIO is still arguably the right thing. There's nothing
> userspace can do about broken kernel code. We don't want to give them
> an error that could be misinterpreted as them having used an interface
> incorrectly. We could wrap WARN_ON_ONCE() around it, I suppose. I'm
> inclined to leave it as is.

Ok. An EIO is better than an ASSERT or BUG_ON, the error paths will
handle it.

I think adding the WARN_ON_ONCE is a good compromise, I'm not expecting
to see this error but the stack trace will help.
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index b47f669..24f7ede 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -143,7 +143,7 @@ static int lzo_compress_pages(struct list_head *ws,
 		if (ret != LZO_E_OK) {
 			printk(KERN_DEBUG "BTRFS: deflate in loop returned %d\n",
 			       ret);
-			ret = -1;
+			ret = -EIO;
 			goto out;
 		}
 
@@ -189,7 +189,7 @@ static int lzo_compress_pages(struct list_head *ws,
 			kunmap(out_page);
 			if (nr_pages == nr_dest_pages) {
 				out_page = NULL;
-				ret = -1;
+				ret = -EIO;
 				goto out;
 			}
 
@@ -208,7 +208,7 @@ static int lzo_compress_pages(struct list_head *ws,
 
 		/* we're making it bigger, give up */
 		if (tot_in > 8192 && tot_in < tot_out) {
-			ret = -1;
+			ret = -EIO;
 			goto out;
 		}
 
@@ -335,7 +335,7 @@ cont:
 				break;
 
 			if (page_in_index + 1 >= total_pages_in) {
-				ret = -1;
+				ret = -EIO;
 				goto done;
 			}
 
@@ -358,7 +358,7 @@ cont:
 			kunmap(pages_in[page_in_index - 1]);
 			if (ret != LZO_E_OK) {
 				printk(KERN_WARNING "BTRFS: decompress failed\n");
-				ret = -1;
+				ret = -EIO;
 				break;
 			}
 
@@ -402,12 +402,12 @@ static int lzo_decompress(struct list_head *ws, unsigned char *data_in,
 	ret = lzo1x_decompress_safe(data_in, in_len, workspace->buf, &out_len);
 	if (ret != LZO_E_OK) {
 		printk(KERN_WARNING "BTRFS: decompress failed!\n");
-		ret = -1;
+		ret = -EIO;
 		goto out;
 	}
 
 	if (out_len < start_byte) {
-		ret = -1;
+		ret = -EIO;
 		goto out;
 	}
 
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 8e57191..f14c4a0 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -98,7 +98,7 @@ static int zlib_compress_pages(struct list_head *ws,
 
 	if (Z_OK != zlib_deflateInit(&workspace->def_strm, 3)) {
 		printk(KERN_WARNING "BTRFS: deflateInit failed\n");
-		ret = -1;
+		ret = -EIO;
 		goto out;
 	}
 
@@ -110,7 +110,7 @@ static int zlib_compress_pages(struct list_head *ws,
 
 	out_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
 	if (out_page == NULL) {
-		ret = -1;
+		ret = -ENOMEM;
 		goto out;
 	}
 	cpage_out = kmap(out_page);
@@ -128,7 +128,7 @@ static int zlib_compress_pages(struct list_head *ws,
 			printk(KERN_DEBUG "BTRFS: deflate in loop returned %d\n",
 			       ret);
 			zlib_deflateEnd(&workspace->def_strm);
-			ret = -1;
+			ret = -EIO;
 			goto out;
 		}
 
@@ -136,7 +136,7 @@ static int zlib_compress_pages(struct list_head *ws,
 		if (workspace->def_strm.total_in > 8192 &&
 		    workspace->def_strm.total_in <
 		    workspace->def_strm.total_out) {
-			ret = -1;
+			ret = -EIO;
 			goto out;
 		}
 		/* we need another page for writing out. Test this
@@ -147,12 +147,12 @@ static int zlib_compress_pages(struct list_head *ws,
 			kunmap(out_page);
 			if (nr_pages == nr_dest_pages) {
 				out_page = NULL;
-				ret = -1;
+				ret = -EIO;
 				goto out;
 			}
 			out_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
 			if (out_page == NULL) {
-				ret = -1;
+				ret = -ENOMEM;
 				goto out;
 			}
 			cpage_out = kmap(out_page);
@@ -188,12 +188,12 @@ static int zlib_compress_pages(struct list_head *ws,
 	zlib_deflateEnd(&workspace->def_strm);
 
 	if (ret != Z_STREAM_END) {
-		ret = -1;
+		ret = -EIO;
 		goto out;
 	}
 
 	if (workspace->def_strm.total_out >=
 	    workspace->def_strm.total_in) {
-		ret = -1;
+		ret = -EIO;
 		goto out;
 	}
 
@@ -253,7 +253,7 @@ static int zlib_decompress_biovec(struct list_head *ws, struct page **pages_in,
 
 	if (Z_OK != zlib_inflateInit2(&workspace->inf_strm, wbits)) {
 		printk(KERN_WARNING "BTRFS: inflateInit failed\n");
-		return -1;
+		return -EIO;
 	}
 	while (workspace->inf_strm.total_in < srclen) {
 		ret = zlib_inflate(&workspace->inf_strm, Z_NO_FLUSH);
@@ -295,7 +295,7 @@ static int zlib_decompress_biovec(struct list_head *ws, struct page **pages_in,
 		}
 	}
 	if (ret != Z_STREAM_END)
-		ret = -1;
+		ret = -EIO;
 	else
 		ret = 0;
 done:
@@ -337,7 +337,7 @@ static int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 
 	if (Z_OK != zlib_inflateInit2(&workspace->inf_strm, wbits)) {
 		printk(KERN_WARNING "BTRFS: inflateInit failed\n");
-		return -1;
+		return -EIO;
 	}
 
 	while (bytes_left > 0) {
@@ -354,7 +354,7 @@ static int zlib_decompress(struct list_head *ws, unsigned char *data_in,
 		total_out = workspace->inf_strm.total_out;
 
 		if (total_out == buf_start) {
-			ret = -1;
+			ret = -EIO;
 			break;
 		}
 
@@ -382,7 +382,7 @@ next:
 	}
 	if (ret != Z_STREAM_END && bytes_left != 0)
-		ret = -1;
+		ret = -EIO;
 	else
 		ret = 0;
The compression layer seems to have been built to return -1 and have
callers make up errors that make sense. This isn't great because there
are different classes of errors that originate down in the compression
layer. Allocation failure and corrupt compressed data to name two.

So let's return real negative errnos from the compression layer so that
callers can pass on the error without having to guess what happened.
This helps a future path return errors from btrfs_decompress().

Signed-off-by: Zach Brown <zab@redhat.com>
---
 fs/btrfs/lzo.c  | 14 +++++++-------
 fs/btrfs/zlib.c | 26 +++++++++++++-------------
 2 files changed, 20 insertions(+), 20 deletions(-)