From patchwork Tue Aug 20 23:20:55 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13770650
From: David Howells
To: Christian Brauner
Cc: David Howells, Pankaj Raghav, Jeff Layton, Matthew Wilcox,
    netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Marc Dionne
Subject: [PATCH 1/4] mm: Fix missing folio invalidation calls during truncation
Date: Wed, 21 Aug 2024 00:20:55 +0100
Message-ID: <20240820232105.3792638-2-dhowells@redhat.com>
In-Reply-To: <20240820232105.3792638-1-dhowells@redhat.com>
References: <20240820232105.3792638-1-dhowells@redhat.com>

When AS_RELEASE_ALWAYS is set on a mapping, the ->release_folio() and
->invalidate_folio() calls should be invoked even if PG_private and
PG_private_2 aren't set.  This is used by netfslib to keep track of the
point above which reads can be skipped in favour of just zeroing the
pagecache locally.

There are a couple of places in the truncation code where invalidation is
only performed when folio_has_private() is true.  Fix these to check
folio_needs_release() instead.

Without this, the generic/075 and generic/112 xfstests (both fsx-based
tests) fail with the minimum folio size patches applied[1].

Fixes: b4fa966f03b7 ("mm, netfs, fscache: stop read optimisation when folio removed from pagecache")
Signed-off-by: David Howells
cc: Matthew Wilcox (Oracle)
cc: Pankaj Raghav
cc: Jeff Layton
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
cc: netfs@lists.linux.dev
cc: linux-mm@kvack.org
cc: linux-fsdevel@vger.kernel.org
Link: https://lore.kernel.org/r/20240815090849.972355-1-kernel@pankajraghav.com/ [1]
---
 mm/truncate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/truncate.c b/mm/truncate.c
index 4d61fbdd4b2f..0668cd340a46 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -157,7 +157,7 @@ static void truncate_cleanup_folio(struct folio *folio)
 	if (folio_mapped(folio))
 		unmap_mapping_folio(folio);
 
-	if (folio_has_private(folio))
+	if (folio_needs_release(folio))
 		folio_invalidate(folio, 0, folio_size(folio));
 
 	/*
@@ -219,7 +219,7 @@ bool truncate_inode_partial_folio(struct folio *folio, loff_t start, loff_t end)
 	if (!mapping_inaccessible(folio->mapping))
 		folio_zero_range(folio, offset, length);
 
-	if (folio_has_private(folio))
+	if (folio_needs_release(folio))
 		folio_invalidate(folio, offset, length);
 	if (!folio_test_large(folio))
 		return true;
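For reference, the helper that the hunks above switch to looks roughly
like the sketch below (based on the commit named in the Fixes tag; the
exact form in mm/internal.h may differ slightly):

	/* Sketch: release/invalidate hooks are needed either when the folio
	 * carries private data or when the mapping has asked to always be
	 * consulted (AS_RELEASE_ALWAYS, as netfslib does).
	 */
	static inline bool folio_needs_release(struct folio *folio)
	{
		struct address_space *mapping = folio_mapping(folio);

		return folio_has_private(folio) ||
			(mapping && mapping_release_always(mapping));
	}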
From patchwork Tue Aug 20 23:20:56 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13770651
From: David Howells
To: Christian Brauner
Cc: David Howells, Pankaj Raghav, Jeff Layton, Matthew Wilcox,
    netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Marc Dionne
Subject: [PATCH 2/4] afs: Fix post-setattr file edit to do truncation correctly
Date: Wed, 21 Aug 2024 00:20:56 +0100
Message-ID: <20240820232105.3792638-3-dhowells@redhat.com>
In-Reply-To: <20240820232105.3792638-1-dhowells@redhat.com>
References: <20240820232105.3792638-1-dhowells@redhat.com>

At the end of a kAFS RPC operation, there is an "edit" phase (originally
intended for post-directory-modification ops to edit the local image) that
the setattr VFS op uses to fix up the pagecache if the RPC that requested
truncation of a file was successful.
afs_setattr_edit_file() calls truncate_setsize(), which sets i_size,
expands the pagecache if needed and truncates the pagecache.  The first
two of those, however, are redundant as they've already been done by
afs_setattr_success() under the io_lock, and the first is also done under
the callback lock (cb_lock).

Fix afs_setattr_edit_file() to call truncate_pagecache() instead (which is
what truncate_setsize() calls internally), thereby skipping the redundant
parts.

Fixes: 100ccd18bb41 ("netfs: Optimise away reads above the point at which there can be no data")
Signed-off-by: David Howells
cc: Matthew Wilcox (Oracle)
cc: Pankaj Raghav
cc: Jeff Layton
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
cc: netfs@lists.linux.dev
cc: linux-mm@kvack.org
cc: linux-fsdevel@vger.kernel.org
---
 fs/afs/inode.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/fs/afs/inode.c b/fs/afs/inode.c
index 3acf5e050072..a95e77670b49 100644
--- a/fs/afs/inode.c
+++ b/fs/afs/inode.c
@@ -695,13 +695,18 @@ static void afs_setattr_edit_file(struct afs_operation *op)
 {
 	struct afs_vnode_param *vp = &op->file[0];
 	struct afs_vnode *vnode = vp->vnode;
+	struct inode *inode = &vnode->netfs.inode;
 
 	if (op->setattr.attr->ia_valid & ATTR_SIZE) {
 		loff_t size = op->setattr.attr->ia_size;
-		loff_t i_size = op->setattr.old_i_size;
+		loff_t old = op->setattr.old_i_size;
+
+		/* Note: inode->i_size was updated by afs_apply_status() inside
+		 * the I/O and callback locks.
+		 */
 
-		if (size != i_size) {
-			truncate_setsize(&vnode->netfs.inode, size);
+		if (size != old) {
+			truncate_pagecache(inode, size);
 			netfs_resize_file(&vnode->netfs, size, true);
 			fscache_resize_cookie(afs_vnode_cache(vnode), size);
 		}
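To see the redundancy described above, compare with the general shape of
truncate_setsize(); the sketch below is simplified and the exact
mm/truncate.c code may differ:

	/* Simplified sketch: truncate_setsize() updates i_size and handles
	 * pagecache extension before truncating the pagecache.  In the AFS
	 * case the first two steps have already been done by
	 * afs_setattr_success() under the io_lock/cb_lock, so calling
	 * truncate_pagecache() directly is sufficient.
	 */
	void truncate_setsize(struct inode *inode, loff_t newsize)
	{
		loff_t oldsize = inode->i_size;

		i_size_write(inode, newsize);
		if (newsize > oldsize)
			pagecache_isize_extended(inode, oldsize, newsize);
		truncate_pagecache(inode, newsize);
	}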
From patchwork Tue Aug 20 23:20:57 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13770653
From: David Howells
To: Christian Brauner
Cc: David Howells, Pankaj Raghav, Jeff Layton, Matthew Wilcox,
    netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Marc Dionne
Subject: [PATCH 3/4] netfs: Fix netfs_release_folio() to say no if folio dirty
Date: Wed, 21 Aug 2024 00:20:57 +0100
Message-ID: <20240820232105.3792638-4-dhowells@redhat.com>
In-Reply-To: <20240820232105.3792638-1-dhowells@redhat.com>
References: <20240820232105.3792638-1-dhowells@redhat.com>

Fix netfs_release_folio() to say no (i.e. return false) if the folio is
dirty, analogous with iomap's behaviour.  Without this, it will say yes to
the release of a dirty page by split_huge_page_to_list_to_order(), which
will result in the loss of untruncated data in the folio.

Without this, the generic/075 and generic/112 xfstests (both fsx-based
tests) fail with the minimum folio size patches applied[1].
Fixes: c1ec4d7c2e13 ("netfs: Provide invalidate_folio and release_folio calls")
Signed-off-by: David Howells
cc: Matthew Wilcox (Oracle)
cc: Pankaj Raghav
cc: Jeff Layton
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
cc: netfs@lists.linux.dev
cc: linux-mm@kvack.org
cc: linux-fsdevel@vger.kernel.org
Link: https://lore.kernel.org/r/20240815090849.972355-1-kernel@pankajraghav.com/ [1]
---
 fs/netfs/misc.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index 554a1a4615ad..69324761fcf7 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -161,6 +161,9 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp)
 	struct netfs_inode *ctx = netfs_inode(folio_inode(folio));
 	unsigned long long end;
 
+	if (folio_test_dirty(folio))
+		return false;
+
 	end = folio_pos(folio) + folio_size(folio);
 	if (end > ctx->zero_point)
 		ctx->zero_point = end;
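For context, the sketch below models how the VM consults ->release_folio()
before discarding a folio's private state (loosely modelled on
filemap_release_folio(); the function name and the exact checks are
illustrative, not the actual mm code):

	/* Illustrative model: when splitting or reclaiming a folio, the VM
	 * asks the filesystem whether its private state may be dropped.
	 * Returning false from ->release_folio() makes the caller back off
	 * rather than discard dirty, not-uptodate streaming-write data.
	 */
	static bool example_release_folio(struct address_space *mapping,
					  struct folio *folio, gfp_t gfp)
	{
		if (!folio_needs_release(folio))
			return true;
		if (folio_test_writeback(folio))
			return false;
		if (mapping->a_ops->release_folio)
			return mapping->a_ops->release_folio(folio, gfp);
		return true;
	}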
From patchwork Tue Aug 20 23:20:58 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13770652
From: David Howells
To: Christian Brauner
Cc: David Howells, Pankaj Raghav, Jeff Layton, Matthew Wilcox,
    netfs@lists.linux.dev, linux-afs@lists.infradead.org,
    linux-cifs@vger.kernel.org, linux-nfs@vger.kernel.org,
    ceph-devel@vger.kernel.org, v9fs@lists.linux.dev,
    linux-erofs@lists.ozlabs.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Marc Dionne
Subject: [PATCH 4/4] netfs: Fix trimming of streaming-write folios in netfs_inval_folio()
Date: Wed, 21 Aug 2024 00:20:58 +0100
Message-ID: <20240820232105.3792638-5-dhowells@redhat.com>
In-Reply-To: <20240820232105.3792638-1-dhowells@redhat.com>
References: <20240820232105.3792638-1-dhowells@redhat.com>

When netfslib writes to a folio that it doesn't have data for, but for
which the data exists on the server, it will make a 'streaming write'
whereby it stores data in a folio that is marked dirty, but not uptodate.
When it does this, it attaches a record to folio->private to track the
dirty region.

When truncate() or fallocate() wants to invalidate part of such a folio,
it will call into ->invalidate_folio(), specifying the part of the folio
that is to be invalidated.  netfs_invalidate_folio(), on behalf of the
filesystem, must then determine how to trim the streaming-write record.

In a couple of cases, however, it does this incorrectly: the reduce-length
and move-start cases are switched over and don't, in any case, calculate
the new values correctly.

Fix this by making the logic tree more obvious and fixing the cases.
Fixes: 9ebff83e6481 ("netfs: Prep to use folio->private for write grouping and streaming write")
Signed-off-by: David Howells
cc: Matthew Wilcox (Oracle)
cc: Pankaj Raghav
cc: Jeff Layton
cc: Marc Dionne
cc: linux-afs@lists.infradead.org
cc: netfs@lists.linux.dev
cc: linux-mm@kvack.org
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/misc.c | 50 ++++++++++++++++++++++++++++++++++---------------
 1 file changed, 35 insertions(+), 15 deletions(-)

diff --git a/fs/netfs/misc.c b/fs/netfs/misc.c
index 69324761fcf7..c1f321cf5999 100644
--- a/fs/netfs/misc.c
+++ b/fs/netfs/misc.c
@@ -97,10 +97,20 @@ EXPORT_SYMBOL(netfs_clear_inode_writeback);
 void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 {
 	struct netfs_folio *finfo;
+	struct netfs_inode *ctx = netfs_inode(folio_inode(folio));
 	size_t flen = folio_size(folio);
 
 	_enter("{%lx},%zx,%zx", folio->index, offset, length);
 
+	if (offset == 0 && length == flen) {
+		unsigned long long i_size = i_size_read(&ctx->inode);
+		unsigned long long fpos = folio_pos(folio), end;
+
+		end = umin(fpos + flen, i_size);
+		if (fpos < i_size && end > ctx->zero_point)
+			ctx->zero_point = end;
+	}
+
 	folio_wait_private_2(folio); /* [DEPRECATED] */
 
 	if (!folio_test_private(folio))
@@ -115,18 +125,34 @@ void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 		/* We have a partially uptodate page from a streaming write. */
 		unsigned int fstart = finfo->dirty_offset;
 		unsigned int fend = fstart + finfo->dirty_len;
-		unsigned int end = offset + length;
+		unsigned int iend = offset + length;
 
 		if (offset >= fend)
 			return;
-		if (end <= fstart)
+		if (iend <= fstart)
+			return;
+
+		/* The invalidation region overlaps the data.  If the region
+		 * covers the start of the data, we either move along the start
+		 * or just erase the data entirely.
+		 */
+		if (offset <= fstart) {
+			if (iend >= fend)
+				goto erase_completely;
+			/* Move the start of the data. */
+			finfo->dirty_len = fend - iend;
+			finfo->dirty_offset = offset;
+			return;
+		}
+
+		/* Reduce the length of the data if the invalidation region
+		 * covers the tail part.
+		 */
+		if (iend >= fend) {
+			finfo->dirty_len = offset - fstart;
 			return;
-		if (offset <= fstart && end >= fend)
-			goto erase_completely;
-		if (offset <= fstart && end > fstart)
-			goto reduce_len;
-		if (offset > fstart && end >= fend)
-			goto move_start;
+		}
+
 		/* A partial write was split.  The caller has already zeroed
 		 * it, so just absorb the hole.
 		 */
@@ -139,12 +165,6 @@ void netfs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 	folio_clear_uptodate(folio);
 	kfree(finfo);
 	return;
-reduce_len:
-	finfo->dirty_len = offset + length - finfo->dirty_offset;
-	return;
-move_start:
-	finfo->dirty_len -= offset - finfo->dirty_offset;
-	finfo->dirty_offset = offset;
 }
 EXPORT_SYMBOL(netfs_invalidate_folio);
 
@@ -164,7 +184,7 @@ bool netfs_release_folio(struct folio *folio, gfp_t gfp)
 	if (folio_test_dirty(folio))
 		return false;
 
-	end = folio_pos(folio) + folio_size(folio);
+	end = umin(folio_pos(folio) + folio_size(folio), i_size_read(&ctx->inode));
 	if (end > ctx->zero_point)
 		ctx->zero_point = end;
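As an illustration of the overlap cases that the rewritten logic above
distinguishes, here is a small standalone model (purely illustrative; the
names and test values below are invented and it does not reproduce the
kernel's trimming arithmetic):

	/* Model of the decision tree for trimming a dirty streaming-write
	 * region [fstart, fend) when [offset, iend) of the folio is
	 * invalidated.
	 */
	#include <stdio.h>

	enum trim_case { NO_OVERLAP, ERASE_COMPLETELY, MOVE_START, TRIM_TAIL, ABSORB_HOLE };

	static enum trim_case classify(unsigned int fstart, unsigned int fend,
				       unsigned int offset, unsigned int iend)
	{
		if (offset >= fend || iend <= fstart)
			return NO_OVERLAP;
		if (offset <= fstart)
			return iend >= fend ? ERASE_COMPLETELY : MOVE_START;
		if (iend >= fend)
			return TRIM_TAIL;
		return ABSORB_HOLE;	/* hole punched inside the dirty region */
	}

	int main(void)
	{
		/* Dirty region [1024, 3072) within a 4KiB folio. */
		static const char *names[] = {
			"no overlap", "erase completely", "move start",
			"trim tail", "absorb hole",
		};
		struct { unsigned int off, iend; } tests[] = {
			{ 0, 512 }, { 0, 4096 }, { 0, 2048 },
			{ 2048, 4096 }, { 1536, 2560 },
		};

		for (unsigned int i = 0; i < 5; i++)
			printf("invalidate [%u,%u) -> %s\n",
			       tests[i].off, tests[i].iend,
			       names[classify(1024, 3072, tests[i].off, tests[i].iend)]);
		return 0;
	}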