From patchwork Mon Dec 16 20:41:22 2024
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13910544
From: David Howells
To: Christian Brauner, Steve French, Matthew Wilcox
Cc: David Howells, Jeff Layton, Gao Xiang, Dominique Martinet,
    Marc Dionne, Paulo Alcantara, Shyam Prasad N, Tom Talpey,
    Eric Van Hensbergen, Ilya Dryomov, netfs@lists.linux.dev,
    linux-afs@lists.infradead.org, linux-cifs@vger.kernel.org,
    linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
    v9fs@lists.linux.dev, linux-erofs@lists.ozlabs.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
    syzbot+af5c06208fa71bf31b16@syzkaller.appspotmail.com, Chang Yu
Subject: [PATCH v5 32/32] netfs: Report on NULL folioq in netfs_writeback_unlock_folios()
Date: Mon, 16 Dec 2024 20:41:22 +0000
Message-ID: <20241216204124.3752367-33-dhowells@redhat.com>
In-Reply-To: <20241216204124.3752367-1-dhowells@redhat.com>
References: <20241216204124.3752367-1-dhowells@redhat.com>

It seems that it's possible to get to netfs_writeback_unlock_folios() with
an empty rolling buffer during buffered writes.  This should not be
possible as the rolling buffer is initialised as the write request is set
up and thereafter maintains at least one folio_queue struct therein until
it gets destroyed.  This allows lockless addition and removal of
folio_queue structs in the buffer because, unlike with a ring buffer, the
producer and consumer each only need to look at and alter one pointer into
the buffer.

Now, the rolling buffer is only used for buffered I/O operations as
netfs_collect_write_results() should only call
netfs_writeback_unlock_folios() if the request is of origin type
NETFS_WRITEBACK, NETFS_WRITETHROUGH or NETFS_PGPRIV2_COPY_TO_CACHE.

So it would seem that one of the following occurred: (1) I/O started
before the request was fully initialised, (2) the origin got switched
mid-flow or (3) the request has already been freed and this is a UAF
error.  I think the last is the most likely.

Make netfs_writeback_unlock_folios() report information about the request
and subrequests if folioq is seen to be NULL to try and help debug this,
throw a warning and return.

Note that this does not try to fix the problem.
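To make the lockless property described above concrete, here is a rough,
single-threaded userspace sketch of the idea.  It is illustration only: the
names struct rolling, struct seg, SEG_SLOTS, rolling_append() and
rolling_pop_consumed() are made up for this example and are not the netfs
rolling_buffer/folio_queue API.  The buffer is a chain of fixed-size
segments; the producer only ever advances the tail pointer, the consumer
only ever advances the head pointer, and at least one segment is always
retained.  A genuinely concurrent version would also need appropriate
memory barriers; this sketch only shows the pointer discipline.

/* Illustrative sketch only -- not the netfs implementation. */
#include <stdio.h>
#include <stdlib.h>

#define SEG_SLOTS 16

struct seg {
	struct seg *next;
	void *slots[SEG_SLOTS];
	int nr;
};

struct rolling {
	struct seg *head;	/* only the consumer moves this */
	struct seg *tail;	/* only the producer moves this */
};

static void rolling_init(struct rolling *r)
{
	struct seg *s = calloc(1, sizeof(*s));

	r->head = r->tail = s;	/* the buffer always holds >= 1 segment */
}

/* Producer side: append an item, growing the chain at the tail as needed. */
static void rolling_append(struct rolling *r, void *item)
{
	struct seg *t = r->tail;

	if (t->nr == SEG_SLOTS) {
		struct seg *n = calloc(1, sizeof(*n));

		t->next = n;	/* link the new segment... */
		r->tail = n;	/* ...then advance only the tail pointer */
		t = n;
	}
	t->slots[t->nr++] = item;
}

/* Consumer side: discard fully consumed segments, keeping the last one. */
static void rolling_pop_consumed(struct rolling *r, struct seg *consumed_to)
{
	while (r->head != consumed_to && r->head->next) {
		struct seg *s = r->head;

		r->head = s->next;	/* only the head pointer moves */
		free(s);
	}
}

int main(void)
{
	struct rolling r;
	long i;

	rolling_init(&r);
	for (i = 0; i < 100; i++)
		rolling_append(&r, (void *)i);
	rolling_pop_consumed(&r, r.tail);
	printf("head==tail: %d, items in last segment: %d\n",
	       r.head == r.tail, r.tail->nr);
	free(r.head);
	return 0;
}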
Reported-by: syzbot+af5c06208fa71bf31b16@syzkaller.appspotmail.com
Link: https://syzkaller.appspot.com/bug?extid=af5c06208fa71bf31b16
Signed-off-by: David Howells
cc: Chang Yu
Link: https://lore.kernel.org/r/ZxshMEW4U7MTgQYa@gmail.com/
cc: Jeff Layton
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
---
 fs/netfs/write_collect.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
index 1b7f53d01b8d..294f67795f79 100644
--- a/fs/netfs/write_collect.c
+++ b/fs/netfs/write_collect.c
@@ -21,6 +21,34 @@
 #define NEED_RETRY	0x10	/* A front op requests retrying */
 #define SAW_FAILURE	0x20	/* One stream or hit a permanent failure */
 
+static void netfs_dump_request(const struct netfs_io_request *rreq)
+{
+	pr_err("Request R=%08x r=%d fl=%lx or=%x e=%ld\n",
+	       rreq->debug_id, refcount_read(&rreq->ref), rreq->flags,
+	       rreq->origin, rreq->error);
+	pr_err(" st=%llx tsl=%zx/%llx/%llx\n",
+	       rreq->start, rreq->transferred, rreq->submitted, rreq->len);
+	pr_err(" cci=%llx/%llx/%llx\n",
+	       rreq->cleaned_to, rreq->collected_to, atomic64_read(&rreq->issued_to));
+	pr_err(" iw=%pSR\n", rreq->netfs_ops->issue_write);
+	for (int i = 0; i < NR_IO_STREAMS; i++) {
+		const struct netfs_io_subrequest *sreq;
+		const struct netfs_io_stream *s = &rreq->io_streams[i];
+
+		pr_err(" str[%x] s=%x e=%d acnf=%u,%u,%u,%u\n",
+		       s->stream_nr, s->source, s->error,
+		       s->avail, s->active, s->need_retry, s->failed);
+		pr_err(" str[%x] ct=%llx t=%zx\n",
+		       s->stream_nr, s->collected_to, s->transferred);
+		list_for_each_entry(sreq, &s->subrequests, rreq_link) {
+			pr_err(" sreq[%x:%x] sc=%u s=%llx t=%zx/%zx r=%d f=%lx\n",
+			       sreq->stream_nr, sreq->debug_index, sreq->source,
+			       sreq->start, sreq->transferred, sreq->len,
+			       refcount_read(&sreq->ref), sreq->flags);
+		}
+	}
+}
+
 /*
  * Successful completion of write of a folio to the server and/or cache.  Note
  * that we are not allowed to lock the folio here on pain of deadlocking with
@@ -87,6 +115,12 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,
 	unsigned long long collected_to = wreq->collected_to;
 	unsigned int slot = wreq->buffer.first_tail_slot;
 
+	if (WARN_ON_ONCE(!folioq)) {
+		pr_err("[!] Writeback unlock found empty rolling buffer!\n");
+		netfs_dump_request(wreq);
+		return;
+	}
+
 	if (wreq->origin == NETFS_PGPRIV2_COPY_TO_CACHE) {
 		if (netfs_pgpriv2_unlock_copied_folios(wreq))
 			*notes |= MADE_PROGRESS;
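For completeness, the guard-and-dump pattern the patch adopts (warn once,
dump whatever state may help a post-mortem, and return rather than
dereference a pointer that should never be NULL) can also be sketched in
plain userspace C.  Everything below -- the WARN_ON_ONCE stand-in, struct
request, dump_request() and unlock_folios() -- is a hypothetical analogue,
not kernel code, and the macro relies on the GCC/Clang statement-expression
extension.

#include <stdbool.h>
#include <stdio.h>

/* Userspace stand-in for a warn-once check; evaluates to the condition. */
#define WARN_ON_ONCE(cond) ({					\
	static bool __warned;					\
	bool __c = (cond);					\
	if (__c && !__warned) {					\
		__warned = true;				\
		fprintf(stderr, "WARNING: %s:%d: %s\n",		\
			__FILE__, __LINE__, #cond);		\
	}							\
	__c;							\
})

struct request {
	unsigned int debug_id;
	void *buffer_head;	/* should never be NULL while in flight */
};

/* Dump whatever state might help debug the broken invariant. */
static void dump_request(const struct request *req)
{
	fprintf(stderr, "Request R=%08x head=%p\n",
		req->debug_id, req->buffer_head);
}

static void unlock_folios(struct request *req)
{
	if (WARN_ON_ONCE(!req->buffer_head)) {
		dump_request(req);
		return;		/* back out instead of crashing */
	}
	/* ... the normal unlock path would walk req->buffer_head here ... */
}

int main(void)
{
	struct request req = { .debug_id = 0x2a, .buffer_head = NULL };

	unlock_folios(&req);	/* triggers the warning exactly once */
	unlock_folios(&req);
	return 0;
}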