From patchwork Mon Oct 2 02:35:45 2023
X-Patchwork-Submitter: Al Viro
X-Patchwork-Id: 13405516
Date: Mon, 2 Oct 2023 03:35:45 +0100
From: Al Viro
To: linux-fsdevel@vger.kernel.org
Cc: Christian Brauner, Christoph Hellwig, Linus Torvalds, Namjae Jeon,
    David Sterba, David Howells, Miklos Szeredi, Amir Goldstein,
    Trond Myklebust, Bob Peterson, Steve French, Luis Chamberlain
Subject: [PATCH 12/15] afs: fix __afs_break_callback() / afs_drop_open_mmap() race
Message-ID: <20231002023545.GM3389589@ZenIV>
References: <20231002022815.GQ800259@ZenIV>
 <20231002022846.GA3389589@ZenIV>
In-Reply-To: <20231002022846.GA3389589@ZenIV>

In __afs_break_callback() we might check ->cb_nr_mmap and, if it's
non-zero, do queue_work(&vnode->cb_work).  In afs_drop_open_mmap() we
decrement ->cb_nr_mmap and do flush_work(&vnode->cb_work) if it reaches
zero.

The trouble is, there's nothing to prevent __afs_break_callback() from
seeing ->cb_nr_mmap before the decrement and doing queue_work() after
both the decrement and the flush_work().  If that happens, we might be
in trouble - the vnode might get freed before the queued work runs.

__afs_break_callback() is always done under ->cb_lock, so let's make
sure that ->cb_nr_mmap can change from non-zero to zero only while
->cb_lock is held (the spinlock component of it - it's a seqlock and we
don't need to mess with the counter).
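[Editorial note: to make the ordering easier to see, below is a hypothetical
userspace model of the fixed protocol - not the actual fs/afs code.  Here
cb_lock stands in for the spinlock side of the vnode's seqlock, nr_mmap for
->cb_nr_mmap, and work_queued for queue_work()/flush_work().  The breaker
only looks at the counter under cb_lock, and the dropper only performs the
final 1 -> 0 transition under the same lock, which closes the window
described above.]

/* Hypothetical userspace model of the fixed protocol; all names are
 * invented and do not match fs/afs internals.  Build with: cc -pthread */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t cb_lock = PTHREAD_MUTEX_INITIALIZER; /* spinlock side of ->cb_lock */
static atomic_int nr_mmap = 1;                              /* models ->cb_nr_mmap */
static bool work_queued;                                    /* stands in for queue_work() */

/* Breaker side (cf. __afs_break_callback()): only looks at the counter
 * while holding cb_lock, so it cannot act on a stale non-zero value
 * after the dropper's final decrement. */
static void break_callback(void)
{
        pthread_mutex_lock(&cb_lock);
        if (atomic_load(&nr_mmap) > 0)
                work_queued = true;
        pthread_mutex_unlock(&cb_lock);
}

/* Dropper side (cf. afs_drop_open_mmap()): the lock-free fast path only
 * decrements while the result stays above zero; the 1 -> 0 transition is
 * done under cb_lock, after which flushing the work is safe. */
static void drop_open_mmap(void)
{
        int old = atomic_load(&nr_mmap);

        while (old > 1)         /* analogue of atomic_add_unless(.., -1, 1) */
                if (atomic_compare_exchange_weak(&nr_mmap, &old, old - 1))
                        return;

        pthread_mutex_lock(&cb_lock);
        if (atomic_fetch_sub(&nr_mmap, 1) == 1) /* analogue of atomic_dec_and_test() */
                printf("last mapping gone\n");
        pthread_mutex_unlock(&cb_lock);

        /* flush_work() would run here: any queue_work() either happened
         * before the locked decrement (and so gets flushed), or observed
         * zero under cb_lock and never queued at all. */
}

int main(void)
{
        break_callback();
        drop_open_mmap();
        printf("work_queued=%d nr_mmap=%d\n", work_queued, atomic_load(&nr_mmap));
        return 0;
}

[Called sequentially the model only demonstrates the shape of the two sides;
the property of interest is that any interleaving of the two functions either
flushes the queued work or never queues it.]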
Signed-off-by: Al Viro
---
 fs/afs/file.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/afs/file.c b/fs/afs/file.c
index d37dd201752b..0012ea300eb5 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -529,13 +529,17 @@ static void afs_add_open_mmap(struct afs_vnode *vnode)
 
 static void afs_drop_open_mmap(struct afs_vnode *vnode)
 {
-        if (!atomic_dec_and_test(&vnode->cb_nr_mmap))
+        if (atomic_add_unless(&vnode->cb_nr_mmap, -1, 1))
                 return;
 
         down_write(&vnode->volume->cell->fs_open_mmaps_lock);
 
-        if (atomic_read(&vnode->cb_nr_mmap) == 0)
+        read_seqlock_excl(&vnode->cb_lock);
+        // the only place where ->cb_nr_mmap may hit 0
+        // see __afs_break_callback() for the other side...
+        if (atomic_dec_and_test(&vnode->cb_nr_mmap))
                 list_del_init(&vnode->cb_mmap_link);
+        read_sequnlock_excl(&vnode->cb_lock);
 
         up_write(&vnode->volume->cell->fs_open_mmaps_lock);
         flush_work(&vnode->cb_work);
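
[Editorial note: two primitives carry the weight of the hunk above.
atomic_add_unless(&vnode->cb_nr_mmap, -1, 1) decrements the counter only if
it is not currently 1 and returns non-zero when it did, so the lock-free fast
path never performs the final 1 -> 0 step.  read_seqlock_excl() /
read_sequnlock_excl() take just the spinlock half of ->cb_lock, which is
enough to serialize against __afs_break_callback() without bumping the
sequence counter.  Below is a hypothetical userspace model of the
atomic_add_unless() contract (model_add_unless is an invented name), not the
kernel implementation.]

#include <assert.h>
#include <stdatomic.h>

/* Model of atomic_add_unless(v, a, u): add 'a' to *v unless *v == u;
 * return 1 if the add was done, 0 otherwise. */
static int model_add_unless(atomic_int *v, int a, int u)
{
        int old = atomic_load(v);

        do {
                if (old == u)
                        return 0;
        } while (!atomic_compare_exchange_weak(v, &old, old + a));

        return 1;
}

int main(void)
{
        atomic_int n = 2;

        assert(model_add_unless(&n, -1, 1) == 1);  /* 2 -> 1: fast path taken */
        assert(model_add_unless(&n, -1, 1) == 0);  /* stays 1: caller must take the locks */
        assert(atomic_load(&n) == 1);
        return 0;
}

[A failed fast path (return value 0) is exactly the case that the new
afs_drop_open_mmap() pushes under fs_open_mmaps_lock and ->cb_lock.]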