From patchwork Fri Nov 15 20:01:53 2024
X-Patchwork-Submitter: Brian Foster
X-Patchwork-Id: 13876746
From: Brian Foster
To: linux-fsdevel@vger.kernel.org
Cc: linux-xfs@vger.kernel.org, hch@infradead.org, djwong@kernel.org
Subject: [PATCH v4 1/3] iomap: reset per-iter state on non-error iter advances
Date: Fri, 15 Nov 2024 15:01:53 -0500
Message-ID: <20241115200155.593665-2-bfoster@redhat.com>
In-Reply-To: <20241115200155.593665-1-bfoster@redhat.com>
References: <20241115200155.593665-1-bfoster@redhat.com>

iomap_iter_advance() zeroes the processed and mapping fields on every
non-error iteration except for the last expected iteration (i.e. a
return of 0, which is expected to terminate the iteration loop). This
appears to be circumstantial, as nothing currently relies on these
fields after the final iteration. Therefore, to better facilitate
iomap_iter reuse in subsequent patches, update iomap_iter_advance() to
always reset per-iteration state on successful completion.

Signed-off-by: Brian Foster
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/iter.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/fs/iomap/iter.c b/fs/iomap/iter.c
index 79a0614eaab7..3790918646af 100644
--- a/fs/iomap/iter.c
+++ b/fs/iomap/iter.c
@@ -22,26 +22,25 @@
 static inline int iomap_iter_advance(struct iomap_iter *iter)
 {
         bool stale = iter->iomap.flags & IOMAP_F_STALE;
+        int ret = 1;
 
         /* handle the previous iteration (if any) */
         if (iter->iomap.length) {
                 if (iter->processed < 0)
                         return iter->processed;
-                if (!iter->processed && !stale)
-                        return 0;
                 if (WARN_ON_ONCE(iter->processed > iomap_length(iter)))
                         return -EIO;
                 iter->pos += iter->processed;
                 iter->len -= iter->processed;
-                if (!iter->len)
-                        return 0;
+                if (!iter->len || (!iter->processed && !stale))
+                        ret = 0;
         }
 
-        /* clear the state for the next iteration */
+        /* clear the per iteration state */
         iter->processed = 0;
         memset(&iter->iomap, 0, sizeof(iter->iomap));
         memset(&iter->srcmap, 0, sizeof(iter->srcmap));
-        return 1;
+        return ret;
 }
 
 static inline void iomap_iter_done(struct iomap_iter *iter)
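For readers reviewing the change without the rest of iter.c in view, here is a
minimal stand-alone sketch (user-space C, not kernel code) of the return
convention the patch settles on: a negative value reports an error, 0 ends the
loop, 1 asks for another pass, and the per-iteration state is cleared on both
non-error returns. The toy_iter type and toy_iter_advance() helper are invented
for illustration and deliberately omit the IOMAP_F_STALE and WARN_ON_ONCE
handling.

/* Toy model of the advance/return contract; not kernel code. */
#include <stdio.h>

struct toy_iter {
        long long pos;
        long long len;
        long long processed;    /* bytes handled by the last iteration */
        int       have_mapping; /* stands in for iter->iomap.length */
};

static int toy_iter_advance(struct toy_iter *it)
{
        int ret = 1;

        if (it->have_mapping) {
                if (it->processed < 0)
                        return (int)it->processed;      /* error from the body */
                it->pos += it->processed;
                it->len -= it->processed;
                if (!it->len || !it->processed)
                        ret = 0;                        /* done (or no progress) */
        }
        /* always clear per-iteration state on non-error return */
        it->processed = 0;
        it->have_mapping = 0;
        return ret;
}

int main(void)
{
        struct toy_iter it = { .pos = 0, .len = 8192 };
        int ret;

        /* model a caller that processes 4096 bytes per pass */
        do {
                it.have_mapping = 1;
                it.processed = 4096;
        } while ((ret = toy_iter_advance(&it)) > 0);

        printf("ret=%d pos=%lld len=%lld processed=%lld\n",
               ret, it.pos, it.len, it.processed);
        return ret < 0;
}

Built with a stock C compiler this should print ret=0 with processed reset to
0 after the final pass, which is the behavior the patch makes unconditional.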
From patchwork Fri Nov 15 20:01:54 2024
X-Patchwork-Submitter: Brian Foster
X-Patchwork-Id: 13876748
From: Brian Foster
To: linux-fsdevel@vger.kernel.org
Cc: linux-xfs@vger.kernel.org, hch@infradead.org, djwong@kernel.org
Subject: [PATCH v4 2/3] iomap: lift zeroed mapping handling into iomap_zero_range()
Date: Fri, 15 Nov 2024 15:01:54 -0500
Message-ID: <20241115200155.593665-3-bfoster@redhat.com>
In-Reply-To: <20241115200155.593665-1-bfoster@redhat.com>
References: <20241115200155.593665-1-bfoster@redhat.com>

In preparation for special handling of subranges, lift the zeroed
mapping logic from the iterator into the caller. Since this puts the
pagecache dirty check and flushing in the same place, streamline the
comments a bit as well.

Signed-off-by: Brian Foster
Reviewed-by: Darrick J. Wong
---
 fs/iomap/buffered-io.c | 66 +++++++++++++++---------------------------
 1 file changed, 24 insertions(+), 42 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index ef0b68bccbb6..9c1aa0355c71 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1350,40 +1350,12 @@ static inline int iomap_zero_iter_flush_and_stale(struct iomap_iter *i)
         return filemap_write_and_wait_range(mapping, i->pos, end);
 }
 
-static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero,
-                bool *range_dirty)
+static loff_t iomap_zero_iter(struct iomap_iter *iter, bool *did_zero)
 {
-        const struct iomap *srcmap = iomap_iter_srcmap(iter);
         loff_t pos = iter->pos;
         loff_t length = iomap_length(iter);
         loff_t written = 0;
 
-        /*
-         * We must zero subranges of unwritten mappings that might be dirty in
-         * pagecache from previous writes. We only know whether the entire range
-         * was clean or not, however, and dirty folios may have been written
-         * back or reclaimed at any point after mapping lookup.
-         *
-         * The easiest way to deal with this is to flush pagecache to trigger
-         * any pending unwritten conversions and then grab the updated extents
-         * from the fs. The flush may change the current mapping, so mark it
-         * stale for the iterator to remap it for the next pass to handle
-         * properly.
-         *
-         * Note that holes are treated the same as unwritten because zero range
-         * is (ab)used for partial folio zeroing in some cases. Hole backed
-         * post-eof ranges can be dirtied via mapped write and the flush
-         * triggers writeback time post-eof zeroing.
-         */
-        if (srcmap->type == IOMAP_HOLE || srcmap->type == IOMAP_UNWRITTEN) {
-                if (*range_dirty) {
-                        *range_dirty = false;
-                        return iomap_zero_iter_flush_and_stale(iter);
-                }
-                /* range is clean and already zeroed, nothing to do */
-                return length;
-        }
-
         do {
                 struct folio *folio;
                 int status;
@@ -1433,24 +1405,34 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
         bool range_dirty;
 
         /*
-         * Zero range wants to skip pre-zeroed (i.e. unwritten) mappings, but
-         * pagecache must be flushed to ensure stale data from previous
-         * buffered writes is not exposed. A flush is only required for certain
-         * types of mappings, but checking pagecache after mapping lookup is
-         * racy with writeback and reclaim.
+         * Zero range can skip mappings that are zero on disk so long as
+         * pagecache is clean. If pagecache was dirty prior to zero range, the
+         * mapping converts on writeback completion and so must be zeroed.
          *
-         * Therefore, check the entire range first and pass along whether any
-         * part of it is dirty. If so and an underlying mapping warrants it,
-         * flush the cache at that point. This trades off the occasional false
-         * positive (and spurious flush, if the dirty data and mapping don't
-         * happen to overlap) for simplicity in handling a relatively uncommon
-         * situation.
+         * The simplest way to deal with this across a range is to flush
+         * pagecache and process the updated mappings. To avoid an unconditional
+         * flush, check pagecache state and only flush if dirty and the fs
+         * returns a mapping that might convert on writeback.
          */
         range_dirty = filemap_range_needs_writeback(inode->i_mapping,
                         pos, pos + len - 1);
+        while ((ret = iomap_iter(&iter, ops)) > 0) {
+                const struct iomap *srcmap = iomap_iter_srcmap(&iter);
 
-        while ((ret = iomap_iter(&iter, ops)) > 0)
-                iter.processed = iomap_zero_iter(&iter, did_zero, &range_dirty);
+                if (srcmap->type == IOMAP_HOLE ||
+                    srcmap->type == IOMAP_UNWRITTEN) {
+                        loff_t proc = iomap_length(&iter);
+
+                        if (range_dirty) {
+                                range_dirty = false;
+                                proc = iomap_zero_iter_flush_and_stale(&iter);
+                        }
+                        iter.processed = proc;
+                        continue;
+                }
+
+                iter.processed = iomap_zero_iter(&iter, did_zero);
+        }
         return ret;
 }
 EXPORT_SYMBOL_GPL(iomap_zero_range);
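As a rough aid to reviewing the lifted logic, here is a small user-space sketch
of the per-mapping decision iomap_zero_range() makes inline after this change;
the enum, the zero_range_action() helper, and the string results are made up
purely to model the control flow (skip clean zero-on-disk mappings, flush at
most once when the range was dirty, otherwise fall through to
iomap_zero_iter()). The real code acts on struct iomap_iter rather than
returning strings.

/* Toy model of the zero-range mapping decision; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

enum toy_type { TOY_HOLE, TOY_UNWRITTEN, TOY_MAPPED };

/* returns a human-readable action for one mapping in the range */
static const char *zero_range_action(enum toy_type type, bool *range_dirty)
{
        if (type == TOY_HOLE || type == TOY_UNWRITTEN) {
                if (*range_dirty) {
                        /* flush once, then revalidate mappings (stale) */
                        *range_dirty = false;
                        return "flush and re-lookup";
                }
                return "skip (already zero on disk)";
        }
        return "zero folios via iomap_zero_iter()";
}

int main(void)
{
        bool dirty = true;
        enum toy_type maps[] = { TOY_UNWRITTEN, TOY_UNWRITTEN, TOY_MAPPED };

        for (unsigned i = 0; i < sizeof(maps) / sizeof(maps[0]); i++)
                printf("mapping %u: %s\n", i, zero_range_action(maps[i], &dirty));
        return 0;
}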
From patchwork Fri Nov 15 20:01:55 2024
X-Patchwork-Submitter: Brian Foster
X-Patchwork-Id: 13876747
From: Brian Foster
To: linux-fsdevel@vger.kernel.org
Cc: linux-xfs@vger.kernel.org, hch@infradead.org, djwong@kernel.org
Subject: [PATCH v4 3/3] iomap: elide flush from partial eof zero range
Date: Fri, 15 Nov 2024 15:01:55 -0500
Message-ID: <20241115200155.593665-4-bfoster@redhat.com>
In-Reply-To: <20241115200155.593665-1-bfoster@redhat.com>
References: <20241115200155.593665-1-bfoster@redhat.com>

iomap zero range flushes pagecache in certain situations to determine
which parts of the range might require zeroing if dirty data is present
in pagecache. The kernel robot recently reported a regression
associated with this flushing in the following stress-ng workload on
XFS:

stress-ng --timeout 60 --times --verify --metrics --no-rand-seed --metamix 64

This workload involves repeated small, strided, extending writes. On
XFS, this produces a pattern of post-eof speculative preallocation,
conversion of preallocation from delalloc to unwritten, dirtying
pagecache over newly unwritten blocks, and then rinse and repeat from
the new EOF. This leads to repetitive flushing of the EOF folio via the
zero range call XFS uses for writes that start beyond the current EOF.

To mitigate this problem, special case EOF block zeroing to prefer
zeroing the folio over a flush when the EOF folio is already dirty. To
do this, split out and open code handling of an unaligned start offset.
This brings most of the performance back by avoiding flushes on zero
range calls via write and truncate extension operations. The flush
doesn't occur in these situations because the entire range is post-eof
and therefore the folio that overlaps EOF is the only one in the range.

Signed-off-by: Brian Foster
Reviewed-by: Darrick J. Wong
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 28 ++++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 9c1aa0355c71..af2f59817779 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -1401,6 +1401,10 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
                 .len            = len,
                 .flags          = IOMAP_ZERO,
         };
+        struct address_space *mapping = inode->i_mapping;
+        unsigned int blocksize = i_blocksize(inode);
+        unsigned int off = pos & (blocksize - 1);
+        loff_t plen = min_t(loff_t, len, blocksize - off);
         int ret;
         bool range_dirty;
 
@@ -1410,12 +1414,28 @@ iomap_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
          * mapping converts on writeback completion and so must be zeroed.
          *
          * The simplest way to deal with this across a range is to flush
-         * pagecache and process the updated mappings. To avoid an unconditional
-         * flush, check pagecache state and only flush if dirty and the fs
-         * returns a mapping that might convert on writeback.
+         * pagecache and process the updated mappings. To avoid excessive
+         * flushing on partial eof zeroing, special case it to zero the
+         * unaligned start portion if already dirty in pagecache.
+         */
+        if (off &&
+            filemap_range_needs_writeback(mapping, pos, pos + plen - 1)) {
+                iter.len = plen;
+                while ((ret = iomap_iter(&iter, ops)) > 0)
+                        iter.processed = iomap_zero_iter(&iter, did_zero);
+
+                iter.len = len - (iter.pos - pos);
+                if (ret || !iter.len)
+                        return ret;
+        }
+
+        /*
+         * To avoid an unconditional flush, check pagecache state and only flush
+         * if dirty and the fs returns a mapping that might convert on
+         * writeback.
          */
         range_dirty = filemap_range_needs_writeback(inode->i_mapping,
-                        pos, pos + len - 1);
+                        iter.pos, iter.pos + iter.len - 1);
         while ((ret = iomap_iter(&iter, ops)) > 0) {
                 const struct iomap *srcmap = iomap_iter_srcmap(&iter);
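For what it's worth, the unaligned-start arithmetic this patch introduces can
be sanity checked outside the kernel; the snippet below mirrors the off/plen
computation with arbitrary example values (pos, len and blocksize here are
made up, and min_t() is replaced by a plain conditional).

/* User-space check of the partial-block arithmetic; values are illustrative. */
#include <stdio.h>

int main(void)
{
        unsigned long long pos = 10000, len = 20000;    /* arbitrary example */
        unsigned int blocksize = 4096;                  /* power of two */

        unsigned int off = pos & (blocksize - 1);
        unsigned long long rem = blocksize - off;
        unsigned long long plen = len < rem ? len : rem;

        /* only this leading sub-block gets the dirty-folio special case */
        printf("off=%u plen=%llu (first block covers [%llu, %llu))\n",
               off, plen, pos, pos + plen);
        return 0;
}

With pos = 10000 and a 4096-byte block size this yields off = 1808 and
plen = 2288, i.e. only the sub-block range [10000, 12288) is considered for
the dirty-folio special case.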