From patchwork Thu Dec 31 04:40:41 2015
X-Patchwork-Submitter: Wenwei Tao
X-Patchwork-Id: 7935451
From: Wenwei Tao
To: mb@lightnvm.io
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: [PATCH v2] lightnvm: fix rrpc_lun_gc
Date: Thu, 31 Dec 2015 12:40:41 +0800
Message-Id: <1451536841-20494-1-git-send-email-ww.tao0320@gmail.com>

This patch fixes two issues in rrpc_lun_gc:

1. prio_list is protected by the rrpc_lun's lock, not the nvm_lun's,
   so acquire rlun's lock instead of lun's before operating on the
   list.

2. The block is deleted from prio_list before the gcb is allocated,
   but the gcb allocation may fail. When it does, the block has
   already been removed from the list and is never put back, so it
   can never be reclaimed. To solve this, delete the block from the
   list only after the gcb allocation has succeeded.

Signed-off-by: Wenwei Tao
---
Changed in v2:
- Advanced the gcb allocation so that the debug log delivers the
  correct message.
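For reviewers, a minimal, self-contained userspace sketch of the
ordering problem in issue 2 and of the fix. All names here (struct
blk, gc_ctx, pick_for_gc, malloc standing in for mempool_alloc) are
illustrative only, not the rrpc API:

#include <stdio.h>
#include <stdlib.h>

struct blk {
	long id;
	struct blk *next;	/* prio list link */
};

struct gc_ctx {			/* stands in for rrpc_block_gc */
	struct blk *blk;
};

/*
 * Fixed ordering: allocate first, unlink only on success. The buggy
 * version unlinked the block first; a failed allocation then left it
 * on neither the list nor a GC work item, so it was never reclaimed.
 */
static struct gc_ctx *pick_for_gc(struct blk **prio_list)
{
	struct gc_ctx *gcb = malloc(sizeof(*gcb));

	if (!gcb)
		return NULL;	/* block stays on the list, still reclaimable */

	gcb->blk = *prio_list;
	*prio_list = gcb->blk->next;	/* unlink only after success */
	return gcb;
}

int main(void)
{
	struct blk b = { .id = 42, .next = NULL };
	struct blk *prio_list = &b;
	struct gc_ctx *gcb = pick_for_gc(&prio_list);

	if (gcb)
		printf("block '%ld' selected for GC\n", gcb->blk->id);
	free(gcb);
	return 0;
}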
 drivers/lightnvm/rrpc.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
index 67b14d4..40b0309 100644
--- a/drivers/lightnvm/rrpc.c
+++ b/drivers/lightnvm/rrpc.c
@@ -443,7 +443,7 @@ static void rrpc_lun_gc(struct work_struct *work)
 	if (nr_blocks_need < rrpc->nr_luns)
 		nr_blocks_need = rrpc->nr_luns;
 
-	spin_lock(&lun->lock);
+	spin_lock(&rlun->lock);
 	while (nr_blocks_need > lun->nr_free_blocks &&
 					!list_empty(&rlun->prio_list)) {
 		struct rrpc_block *rblock = block_prio_find_max(rlun);
@@ -452,16 +452,16 @@ static void rrpc_lun_gc(struct work_struct *work)
 		if (!rblock->nr_invalid_pages)
 			break;
 
+		gcb = mempool_alloc(rrpc->gcb_pool, GFP_ATOMIC);
+		if (!gcb)
+			break;
+
 		list_del_init(&rblock->prio);
 
 		BUG_ON(!block_is_full(rrpc, rblock));
 
 		pr_debug("rrpc: selected block '%lu' for GC\n", block->id);
 
-		gcb = mempool_alloc(rrpc->gcb_pool, GFP_ATOMIC);
-		if (!gcb)
-			break;
-
 		gcb->rrpc = rrpc;
 		gcb->rblk = rblock;
 		INIT_WORK(&gcb->ws_gc, rrpc_block_gc);
@@ -470,7 +470,7 @@ static void rrpc_lun_gc(struct work_struct *work)
 
 		nr_blocks_need--;
 	}
-	spin_unlock(&lun->lock);
+	spin_unlock(&rlun->lock);
 
 	/* TODO: Hint that request queue can be started again */
 }
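Postscript for reviewers: the rule behind issue 1 is that a list must
be guarded by the lock declared next to it. A minimal pthread sketch
of that pairing; the types here (dev_lun, gc_lun, prio_pop) are
made-up stand-ins for nvm_lun and rrpc_lun, not the lightnvm
structures:

#include <pthread.h>
#include <stddef.h>

struct blk {
	struct blk *next;
};

struct dev_lun {			/* stands in for nvm_lun */
	pthread_mutex_t lock;		/* guards device-side state only */
	unsigned int nr_free_blocks;
};

struct gc_lun {				/* stands in for rrpc_lun */
	struct dev_lun *parent;
	pthread_mutex_t lock;		/* guards prio_list */
	struct blk *prio_list;
};

static struct blk *prio_pop(struct gc_lun *rlun)
{
	struct blk *b;

	/* Take the lock declared next to the list, not parent->lock. */
	pthread_mutex_lock(&rlun->lock);
	b = rlun->prio_list;
	if (b)
		rlun->prio_list = b->next;
	pthread_mutex_unlock(&rlun->lock);
	return b;
}

int main(void)
{
	struct blk b = { .next = NULL };
	struct gc_lun rlun = {
		.parent = NULL,
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.prio_list = &b,
	};

	return prio_pop(&rlun) == &b ? 0 : 1;
}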