From patchwork Mon May 28 08:58:41 2018
X-Patchwork-Submitter: Matias Bjørling
X-Patchwork-Id: 10430157
From: Matias Bjørling
To: axboe@fb.com
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    Igor Konopko, Marcin Dziegielewski, Matias Bjørling
Subject: [GIT PULL 20/20] lightnvm: pblk: sync RB and RL states during GC
Date: Mon, 28 May 2018 10:58:41 +0200
Message-Id: <20180528085841.26684-21-mb@lightnvm.io>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20180528085841.26684-1-mb@lightnvm.io>
References: <20180528085841.26684-1-mb@lightnvm.io>
X-Mailing-List: linux-block@vger.kernel.org

From: Igor Konopko

During sequential workloads we can hit the case where almost all lines
are fully written with data. In that case the rate limiter significantly
reduces the maximum number of requests available for user I/O.

Unfortunately, when the ring buffer has been flushed to the drive but
its entries have not yet been reclaimed (which is fine, since there are
still enough free entries in the ring buffer for user I/O), user I/O
hangs because there are not enough entries left in the rate limiter.
The reason is that the rate limiter's user entries are only released
when the ring buffer entries are freed, which does not happen as long
as there is still plenty of space in the ring buffer.

This patch forces the ring buffer entries to be freed by calling
pblk_rb_sync_l2p, thus making new entries available in the rate limiter
when there are not enough of them for user I/O.

Signed-off-by: Igor Konopko
Signed-off-by: Marcin Dziegielewski

Reworded description.

Signed-off-by: Matias Bjørling
---
 drivers/lightnvm/pblk-init.c | 2 ++
 drivers/lightnvm/pblk-rb.c   | 7 +++----
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 25aa1e73984f..9d7d9e3b8506 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -1159,7 +1159,9 @@ static void pblk_tear_down(struct pblk *pblk, bool graceful)
 	__pblk_pipeline_flush(pblk);
 	__pblk_pipeline_stop(pblk);
 	pblk_writer_stop(pblk);
+	spin_lock(&pblk->rwb.w_lock);
 	pblk_rb_sync_l2p(&pblk->rwb);
+	spin_unlock(&pblk->rwb.w_lock);
 	pblk_rl_free(&pblk->rl);
 
 	pr_debug("pblk: consistent tear down (graceful:%d)\n", graceful);
diff --git a/drivers/lightnvm/pblk-rb.c b/drivers/lightnvm/pblk-rb.c
index 1b74ec51a4ad..91824cd3e8d8 100644
--- a/drivers/lightnvm/pblk-rb.c
+++ b/drivers/lightnvm/pblk-rb.c
@@ -266,21 +266,18 @@ static int pblk_rb_update_l2p(struct pblk_rb *rb, unsigned int nr_entries,
  * Update the l2p entry for all sectors stored on the write buffer. This means
  * that all future lookups to the l2p table will point to a device address, not
  * to the cacheline in the write buffer.
+ * Caller must ensure that rb->w_lock is taken.
  */
 void pblk_rb_sync_l2p(struct pblk_rb *rb)
 {
 	unsigned int sync;
 	unsigned int to_update;
 
-	spin_lock(&rb->w_lock);
-
 	/* Protect from reads and writes */
 	sync = smp_load_acquire(&rb->sync);
 
 	to_update = pblk_rb_ring_count(sync, rb->l2p_update, rb->nr_entries);
 	__pblk_rb_update_l2p(rb, to_update);
-
-	spin_unlock(&rb->w_lock);
 }
 
 /*
@@ -462,6 +459,8 @@ int pblk_rb_may_write_user(struct pblk_rb *rb, struct bio *bio,
 	spin_lock(&rb->w_lock);
 	io_ret = pblk_rl_user_may_insert(&pblk->rl, nr_entries);
 	if (io_ret) {
+		/* Sync RB & L2P in order to update rate limiter values */
+		pblk_rb_sync_l2p(rb);
 		spin_unlock(&rb->w_lock);
 		return io_ret;
 	}
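
Note for context: the spin_lock()/spin_unlock() pair moves out of
pblk_rb_sync_l2p() because the new call site in pblk_rb_may_write_user()
already holds rb->w_lock, so keeping the lock inside the function would
self-deadlock. A minimal sketch of the resulting calling convention
(the helper name below is illustrative, not part of the patch):

/*
 * Sketch only, not part of the patch: callers must now take rb->w_lock
 * themselves before syncing the L2P table, as pblk_tear_down() does.
 */
static void example_force_l2p_sync(struct pblk_rb *rb)
{
	spin_lock(&rb->w_lock);		/* protect from reads and writes */
	pblk_rb_sync_l2p(rb);		/* free flushed entries, replenishing the rate limiter */
	spin_unlock(&rb->w_lock);
}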