Message ID: alpine.LNX.2.00.1501032237360.30995@cheri.shyou.org (mailing list archive)
State: New, archived
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 1dc0455..df7d957 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -89,7 +89,7 @@ static int zlib_compress_pages(struct list_head *ws,
 	struct page *in_page = NULL;
 	struct page *out_page = NULL;
 	unsigned long bytes_left;
-	int deflate_flush = Z_SYNC_FLUSH;
+	int deflate_flush = Z_NO_FLUSH;
 
 	*out_pages = 0;
 	*total_out = 0;
Moving the Z_FINISH into the loop also means we don't have to force a flush after every input page to guarantee that there won't be more than 4 KiB to write at the end.

This patch lets zlib decide when to flush its buffer, which offers a very modest space saving (on my system, a 400 MB test logfile goes from an 11.9% compression ratio to 11.2%, which is nothing to write home about) and may offer a similarly slight performance boost. Since the end result is still a valid zlib stream, it is completely backwards-compatible with the existing method.

Signed-off-by: Danielle Church <dchurch@cheri.shyou.org>
---
 fs/btrfs/zlib.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
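The effect described above can be sketched outside the kernel: Python's zlib module exposes the same flush constants as the zlib library the kernel wraps. This hypothetical userspace demo (the data, chunk size, and helper name are illustrative, not from the patch) compresses the same input in fixed-size chunks, once forcing Z_SYNC_FLUSH after every chunk and once using Z_NO_FLUSH so the library flushes only when it chooses, with a single Z_FINISH at the end in both cases:

```python
import zlib

# Illustrative input: repetitive data, fed in page-sized chunks.
data = b"The quick brown fox jumps over the lazy dog. " * 200

def compress_in_chunks(payload, chunk_size, flush_mode):
    """Compress payload chunk by chunk, applying flush_mode after each chunk."""
    c = zlib.compressobj()
    out = b""
    for i in range(0, len(payload), chunk_size):
        out += c.compress(payload[i:i + chunk_size])
        out += c.flush(flush_mode)   # per-chunk flush (the knob the patch changes)
    out += c.flush(zlib.Z_FINISH)    # finish the stream exactly once, at the end
    return out

sync = compress_in_chunks(data, 4096, zlib.Z_SYNC_FLUSH)
lazy = compress_in_chunks(data, 4096, zlib.Z_NO_FLUSH)

# Each Z_SYNC_FLUSH ends the current deflate block and emits a 4-byte sync
# marker, so the eagerly-flushed stream is larger; both decompress identically.
print(len(sync), len(lazy))
```

Both outputs are valid zlib streams, which mirrors why the kernel change is backwards-compatible: decompression code cannot tell (and does not care) where the compressor chose to flush.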