From patchwork Fri Oct 13 12:46:38 2017
X-Patchwork-Submitter: Matias Bjørling
X-Patchwork-Id: 10004513
From: Matias Bjørling
To: axboe@fb.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
	Hans Holmberg, Matias Bjørling
Subject: [GIT PULL 49/58] lightnvm: pblk: consider bad sectors in emeta during recovery
Date: Fri, 13 Oct 2017 14:46:38 +0200
Message-Id: <20171013124647.32668-50-m@bjorling.me>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20171013124647.32668-1-m@bjorling.me>
References: <20171013124647.32668-1-m@bjorling.me>

From: Hans Holmberg

When recovering lines, we need to consider that bad blocks in a line
affect the size of the emeta area.

Previously, it was assumed that the emeta area would grow by the number
of sectors per page * the number of bad blocks in the line. This
assumption is not correct: the number of "extra" pages consumed can be
either smaller (depending on the emeta size) or larger (depending on
the placement of the bad blocks).

Fix this by calculating the emeta start by iterating backwards through
the line, skipping ppas that map to bad blocks.

Also fix the data types used for ppa indices/counts in
pblk_recov_l2p_from_emeta - they should be u64.

Signed-off-by: Hans Holmberg
Signed-off-by: Matias Bjørling
---
 drivers/lightnvm/pblk-recovery.c | 46 +++++++++++++++++++++++++++-------------
 1 file changed, 31 insertions(+), 15 deletions(-)

diff --git a/drivers/lightnvm/pblk-recovery.c b/drivers/lightnvm/pblk-recovery.c
index a080cf8..9772a94 100644
--- a/drivers/lightnvm/pblk-recovery.c
+++ b/drivers/lightnvm/pblk-recovery.c
@@ -133,16 +133,16 @@ static int pblk_recov_l2p_from_emeta(struct pblk *pblk, struct pblk_line *line)
 	struct pblk_emeta *emeta = line->emeta;
 	struct line_emeta *emeta_buf = emeta->buf;
 	__le64 *lba_list;
-	int data_start, data_end;
-	int nr_valid_lbas, nr_lbas = 0;
-	int i;
+	u64 data_start, data_end;
+	u64 nr_valid_lbas, nr_lbas = 0;
+	u64 i;
 
 	lba_list = pblk_recov_get_lba_list(pblk, emeta_buf);
 	if (!lba_list)
 		return 1;
 
 	data_start = pblk_line_smeta_start(pblk, line) + lm->smeta_sec;
-	data_end = lm->sec_per_line - lm->emeta_sec[0];
+	data_end = line->emeta_ssec;
 	nr_valid_lbas = le64_to_cpu(emeta_buf->nr_valid_lbas);
 
 	for (i = data_start; i < data_end; i++) {
@@ -172,8 +172,8 @@ static int pblk_recov_l2p_from_emeta(struct pblk *pblk, struct pblk_line *line)
 	}
 
 	if (nr_valid_lbas != nr_lbas)
-		pr_err("pblk: line %d - inconsistent lba list(%llu/%d)\n",
-				line->id, emeta_buf->nr_valid_lbas, nr_lbas);
+		pr_err("pblk: line %d - inconsistent lba list(%llu/%llu)\n",
+				line->id, nr_valid_lbas, nr_lbas);
 
 	line->left_msecs = 0;
 
@@ -827,10 +827,32 @@ static void pblk_recov_line_add_ordered(struct list_head *head,
 	__list_add(&line->list, t->list.prev, &t->list);
 }
 
+static u64 pblk_line_emeta_start(struct pblk *pblk, struct pblk_line *line)
+{
+	struct nvm_tgt_dev *dev = pblk->dev;
+	struct nvm_geo *geo = &dev->geo;
+	struct pblk_line_meta *lm = &pblk->lm;
+	unsigned int emeta_secs;
+	u64 emeta_start;
+	struct ppa_addr ppa;
+	int pos;
+
+	emeta_secs = lm->emeta_sec[0];
+	emeta_start = lm->sec_per_line;
+
+	while (emeta_secs) {
+		emeta_start--;
+		ppa = addr_to_pblk_ppa(pblk, emeta_start, line->id);
+		pos = pblk_ppa_to_pos(geo, ppa);
+		if (!test_bit(pos, line->blk_bitmap))
+			emeta_secs--;
+	}
+
+	return emeta_start;
+}
+
 struct pblk_line *pblk_recov_l2p(struct pblk *pblk)
 {
-	struct nvm_tgt_dev *dev = pblk->dev;
-	struct nvm_geo *geo = &dev->geo;
 	struct pblk_line_meta *lm = &pblk->lm;
 	struct pblk_line_mgmt *l_mg = &pblk->l_mg;
 	struct pblk_line *line, *tline, *data_line = NULL;
@@ -930,15 +952,9 @@ struct pblk_line *pblk_recov_l2p(struct pblk *pblk)
 
 	/* Verify closed blocks and recover this portion of L2P table*/
 	list_for_each_entry_safe(line, tline, &recov_list, list) {
-		int off, nr_bb;
-
 		recovered_lines++;
 
-		/* Calculate where emeta starts based on the line bb */
-		off = lm->sec_per_line - lm->emeta_sec[0];
-		nr_bb = bitmap_weight(line->blk_bitmap, lm->blk_per_line);
-		off -= nr_bb * geo->sec_per_pl;
-		line->emeta_ssec = off;
+		line->emeta_ssec = pblk_line_emeta_start(pblk, line);
 		line->emeta = emeta;
 		memset(line->emeta->buf, 0, lm->emeta_len[0]);
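
For readers less familiar with the pblk line layout, the sketch below
(standalone userspace C, not kernel code) models why the backward walk
gives a different answer than the old bitmap_weight() formula. The
striped sec_to_blk() mapping and the BLK_PER_LINE/SEC_PER_BLK geometry
are illustrative assumptions; pblk instead resolves positions via
addr_to_pblk_ppa() and pblk_ppa_to_pos().

	/*
	 * Minimal model of the emeta start calculation. Assumed for
	 * illustration: sectors are striped across the line's blocks,
	 * so sector s lands in block (s % BLK_PER_LINE).
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define BLK_PER_LINE	8	/* blocks striped per line */
	#define SEC_PER_BLK	4	/* stands in for geo->sec_per_pl */
	#define SEC_PER_LINE	(BLK_PER_LINE * SEC_PER_BLK)

	static int sec_to_blk(int sec)
	{
		return sec % BLK_PER_LINE;
	}

	/* Mirrors the logic of the new pblk_line_emeta_start(): walk
	 * backwards from the end of the line, counting only sectors
	 * that fall in good blocks. */
	static int emeta_start(const bool *bad_blk, int emeta_secs)
	{
		int start = SEC_PER_LINE;

		while (emeta_secs) {
			start--;
			if (!bad_blk[sec_to_blk(start)])
				emeta_secs--;
		}
		return start;
	}

	int main(void)
	{
		bool bad_blk[BLK_PER_LINE] = { false };
		int emeta_secs = 6, nr_bb = 1;

		bad_blk[BLK_PER_LINE - 1] = true;	/* one bad block */

		/* Old calculation: reserve a full page per bad block. */
		printf("old formula:   %d\n",
		       SEC_PER_LINE - emeta_secs - nr_bb * SEC_PER_BLK);
		/* New calculation: skip only bad sectors actually crossed. */
		printf("backward walk: %d\n",
		       emeta_start(bad_blk, emeta_secs));
		return 0;
	}

With one bad block at the tail of the stripe and six emeta sectors,
the old formula reserves a full block's worth of extra sectors and
yields start 22, while the walk finds that only one skipped sector
actually falls inside the emeta region and yields start 25 - the
"smaller" case described in the commit message; clustering more bad
sectors inside the emeta region produces the "larger" case.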