From patchwork Mon Apr 10 14:56:28 2023
X-Patchwork-Submitter: Tetsuo Handa
X-Patchwork-Id: 13206439
Message-ID: <38509ddd-51a2-70da-6564-9ded34b2f363@I-love.SAKURA.ne.jp>
Date: Mon, 10 Apr 2023 23:56:28 +0900
Subject: [PATCH v2] sysv: don't call sb_bread() with pointers_lock held
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: Al Viro
Cc: syzbot, syzkaller-bugs@googlegroups.com, Andrew Morton, hch@infradead.org,
    linux-fsdevel, "Matthew Wilcox (Oracle)", Chen Zhongjin, Dave Chinner,
    Christian Brauner, Christoph Hellwig
References: <0000000000000ccf9a05ee84f5b0@google.com>
 <6fcbdc89-6aff-064b-a040-0966152856e0@I-love.SAKURA.ne.jp>
 <20230327000440.GF3390869@ZenIV>
 <0d195f93-a22a-49a2-0020-103534d6f7f6@I-love.SAKURA.ne.jp>
In-Reply-To: <0d195f93-a22a-49a2-0020-103534d6f7f6@I-love.SAKURA.ne.jp>
X-Mailing-List: linux-fsdevel@vger.kernel.org

syzbot is reporting a sleep-in-atomic-context bug in the SysV filesystem [1],
because sb_bread() is called with the pointers_lock rw_spinlock held.

A "write_lock(&pointers_lock) => read_lock(&pointers_lock) deadlock" bug and a
"sb_bread() with write_lock(&pointers_lock)" bug were introduced by
"Replace BKL for chain locking with sysvfs-private rwlock" in Linux 2.5.12.
Then, "[PATCH] err1-40: sysvfs locking fix" in Linux 2.6.8 fixed the former bug
by moving the pointers_lock locking to the callers, but in doing so introduced
a "sb_bread() with read_lock(&pointers_lock)" bug (which made this problem
easier to hit).

Al Viro suggested doing what get_branch()/get_block()/find_shared() in the
Minix filesystem do. That is effectively a revert of
"[PATCH] err1-40: sysvfs locking fix", except that get_branch() is called from
find_shared() without write_lock(&pointers_lock) held.
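To illustrate with simplified excerpts of the code touched by this patch:
before the patch, get_block() held pointers_lock across the whole chain walk,
so the sb_bread() call inside get_branch() could sleep in atomic context:

	read_lock(&pointers_lock);
	partial = get_branch(inode, depth, offsets, chain, &err);
			/* get_branch() -> sb_bread(), which may sleep */
	read_unlock(&pointers_lock);

After the patch, get_branch() takes pointers_lock itself, only around the
chain verification, after sb_bread() has returned:

	bh = sb_bread(sb, block);	/* may sleep; no spinlock held here */
	if (!bh)
		goto failure;
	read_lock(&pointers_lock);
	if (!verify_chain(chain, p))
		goto changed;
	add_chain(++p, bh, (sysv_zone_t*)bh->b_data + *++offsets);
	read_unlock(&pointers_lock);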
Reported-by: syzbot
Link: https://syzkaller.appspot.com/bug?extid=69b40dc5fd40f32c199f
Suggested-by: Al Viro
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
Changes in v2: Correct patch description.

 fs/sysv/itree.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/fs/sysv/itree.c b/fs/sysv/itree.c
index b22764fe669c..ab4756c94755 100644
--- a/fs/sysv/itree.c
+++ b/fs/sysv/itree.c
@@ -82,9 +82,6 @@ static inline sysv_zone_t *block_end(struct buffer_head *bh)
 	return (sysv_zone_t*)((char*)bh->b_data + bh->b_size);
 }
 
-/*
- * Requires read_lock(&pointers_lock) or write_lock(&pointers_lock)
- */
 static Indirect *get_branch(struct inode *inode,
 			    int depth,
 			    int offsets[],
@@ -104,15 +101,18 @@ static Indirect *get_branch(struct inode *inode,
 		bh = sb_bread(sb, block);
 		if (!bh)
 			goto failure;
+		read_lock(&pointers_lock);
 		if (!verify_chain(chain, p))
 			goto changed;
 		add_chain(++p, bh, (sysv_zone_t*)bh->b_data + *++offsets);
+		read_unlock(&pointers_lock);
 		if (!p->key)
 			goto no_block;
 	}
 	return NULL;
 
 changed:
+	read_unlock(&pointers_lock);
 	brelse(bh);
 	*err = -EAGAIN;
 	goto no_block;
@@ -214,9 +214,7 @@ static int get_block(struct inode *inode, sector_t iblock, struct buffer_head *b
 		goto out;
 
 reread:
-	read_lock(&pointers_lock);
 	partial = get_branch(inode, depth, offsets, chain, &err);
-	read_unlock(&pointers_lock);
 
 	/* Simplest case - block found, no allocation needed */
 	if (!partial) {
@@ -286,9 +284,9 @@ static Indirect *find_shared(struct inode *inode,
 	*top = 0;
 	for (k = depth; k > 1 && !offsets[k-1]; k--)
 		;
+	partial = get_branch(inode, k, offsets, chain, &err);
 
 	write_lock(&pointers_lock);
-	partial = get_branch(inode, k, offsets, chain, &err);
 	if (!partial)
 		partial = chain + k-1;
 	/*