From patchwork Tue Dec 29 09:56:55 2015
X-Patchwork-Submitter: Wenwei Tao
X-Patchwork-Id: 7929381
From: Wenwei Tao
To: mb@lightnvm.io
Cc: linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Subject: [PATCH] lightnvm: fix rrpc_lun_gc
Date: Tue, 29 Dec 2015 17:56:55 +0800
Message-Id: <1451383015-15711-1-git-send-email-ww.tao0320@gmail.com>
X-Mailer: git-send-email 1.8.3.1

This patch fixes two issues in rrpc_lun_gc:

1. The prio_list is protected by the rrpc_lun's lock, not the nvm_lun's,
   so acquire rlun's lock instead of lun's before operating on the list.

2. We delete the block from the prio_list before allocating the gcb, but
   gcb allocation may fail, and we then return without putting the block
   back on the list, so it will never be reclaimed. To fix this, delete
   the block from the list only after the gcb allocation succeeds.
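The second fix follows a general pattern: when removing an item from a shared list requires a follow-up allocation, allocate first and unlink only once the allocation has succeeded, so a failure leaves the item reachable for a later pass. A minimal userspace sketch of that pattern is below; the list helpers and `gc_one_block` are simplified stand-ins, not the kernel's list.h or rrpc code.

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal circular doubly-linked list, standing in for the kernel's
 * struct list_head (hypothetical simplified version). */
struct node {
	struct node *prev, *next;
};

static void list_add(struct node *n, struct node *head)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

static void list_del_init(struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->prev = n->next = n;
}

static int list_empty(const struct node *head)
{
	return head->next == head;
}

/* The fixed ordering: allocate the work item first, unlink the block
 * only after allocation succeeds. On allocation failure the block is
 * left on the list, so a later GC pass can still reclaim it. */
static int gc_one_block(struct node *head, int simulate_alloc_failure)
{
	struct node *rblock;
	void *gcb;

	if (list_empty(head))
		return -1;
	rblock = head->next;

	gcb = simulate_alloc_failure ? NULL : malloc(16);
	if (!gcb)
		return -1;	/* block stays on the list */

	list_del_init(rblock);	/* safe: we are committed to GC'ing it */
	free(gcb);
	return 0;
}
```

With the pre-fix ordering (unlink before allocating), the failure path would leave `rblock` off the list with no owner, which is exactly the leak the patch closes.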
Signed-off-by: Wenwei Tao
---
 drivers/lightnvm/rrpc.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
index 67b14d4..373dd9c 100644
--- a/drivers/lightnvm/rrpc.c
+++ b/drivers/lightnvm/rrpc.c
@@ -443,7 +443,7 @@ static void rrpc_lun_gc(struct work_struct *work)
 	if (nr_blocks_need < rrpc->nr_luns)
 		nr_blocks_need = rrpc->nr_luns;
 
-	spin_lock(&lun->lock);
+	spin_lock(&rlun->lock);
 	while (nr_blocks_need > lun->nr_free_blocks &&
 					!list_empty(&rlun->prio_list)) {
 		struct rrpc_block *rblock = block_prio_find_max(rlun);
@@ -452,7 +452,6 @@ static void rrpc_lun_gc(struct work_struct *work)
 		if (!rblock->nr_invalid_pages)
 			break;
 
-		list_del_init(&rblock->prio);
 
 		BUG_ON(!block_is_full(rrpc, rblock));
@@ -462,6 +461,8 @@ static void rrpc_lun_gc(struct work_struct *work)
 		if (!gcb)
 			break;
 
+		list_del_init(&rblock->prio);
+
 		gcb->rrpc = rrpc;
 		gcb->rblk = rblock;
 		INIT_WORK(&gcb->ws_gc, rrpc_block_gc);
@@ -470,7 +471,7 @@ static void rrpc_lun_gc(struct work_struct *work)
 		nr_blocks_need--;
 	}
 
-	spin_unlock(&lun->lock);
+	spin_unlock(&rlun->lock);
 
 	/* TODO: Hint that request queue can be started again */
 }