From patchwork Fri Jan 22 12:15:11 2016
X-Patchwork-Submitter: Zhaolei
X-Patchwork-Id: 8089591
From: Zhao Lei <zhaolei@cn.fujitsu.com>
Subject: [PATCH] [RFC] btrfs: reada: avoid undone reada extents in btrfs_reada_wait
Date: Fri, 22 Jan 2016 20:15:11 +0800
Message-ID: <1eb65b4f7a07c532a8a823326de405c0d707070f.1453464904.git.zhaolei@cn.fujitsu.com>
X-Mailer: git-send-email 1.8.5.1
List-ID: linux-btrfs@vger.kernel.org

In some cases the reada background works exit before all extents are read,
for example when a device reaches its workload limit (MAX_IN_FLIGHT), or
the total number of reads reaches its maximum limit.

In the old code, every work queued 2x new works, so the large number of
works made the above problem rare. After we limited the maximum number of
works in the patch titled:
  btrfs: reada: limit max works count
the chance of hitting the above problem increased.

Fix:
Check for running background works in btrfs_reada_wait(), and create one
work if none exist.

Note:
1: This is a patch for debugging, discussed in the following thread on the
   mailing list:
     Re: [PATCH 1/2] btrfs: reada: limit max works count
   I have not reproduced the problem from the above mail so far; this patch
   was created by reviewing the code. I also have not reproduced the problem
   before the patch; I only confirmed there is no problem after this patch
   is applied.
2: It is based on the patch named:
     btrfs: reada: limit max works count
   Both that patch and some details of this one need more improvement
   before being applied.
Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
---
 fs/btrfs/reada.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c
index af1e7b6..e67ce05 100644
--- a/fs/btrfs/reada.c
+++ b/fs/btrfs/reada.c
@@ -957,6 +957,8 @@ int btrfs_reada_wait(void *handle)
 	struct reada_control *rc = handle;
 
 	while (atomic_read(&rc->elems)) {
+		if (!atomic_read(&works_cnt))
+			reada_start_machine(rc->root->fs_info);
 		wait_event_timeout(rc->wait, atomic_read(&rc->elems) == 0,
 				   5 * HZ);
 		dump_devs(rc->root->fs_info,
@@ -975,6 +977,8 @@ int btrfs_reada_wait(void *handle)
 	struct reada_control *rc = handle;
 
 	while (atomic_read(&rc->elems)) {
+		if (!atomic_read(&works_cnt))
+			reada_start_machine(rc->root->fs_info);
 		wait_event(rc->wait, atomic_read(&rc->elems) == 0);
 	}