From patchwork Wed Oct 25 16:47:23 2017
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 10027003
From: Mike Christie <mchristi@redhat.com>
To: target-devel@vger.kernel.org, nab@linux-iscsi.org
Cc: Mike Christie <mchristi@redhat.com>
Subject: [PATCH 13/20] tcmu: take blocks from devs waiting on blocks
Date: Wed, 25 Oct 2017 11:47:23 -0500
Message-Id: <1508950050-10120-14-git-send-email-mchristi@redhat.com>
In-Reply-To: <1508950050-10120-1-git-send-email-mchristi@redhat.com>
References: <1508950050-10120-1-git-send-email-mchristi@redhat.com>
X-Mailing-List: target-devel@vger.kernel.org

We can end up in a state where the devs with waiting_blocks set hold
all of the blocks, in which case find_free_blocks() will always fail.
Add a forced second round: if the first pass frees nothing, take pages
from all devs.
Signed-off-by: Mike Christie <mchristi@redhat.com>
---
 drivers/target/target_core_user.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index dff7b14..9dad078 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -2040,10 +2040,12 @@ static struct target_backend_ops tcmu_ops = {
 static uint32_t find_free_blocks(void)
 {
 	struct tcmu_dev *udev;
+	bool force = false;
 	loff_t off;
 	uint32_t start, end, block, free_blocks = 0;
 
 	mutex_lock(&root_udev_mutex);
+retry:
 	list_for_each_entry(udev, &root_udev, node) {
 		mutex_lock(&udev->cmdr_lock);
 
@@ -2051,7 +2053,7 @@ static uint32_t find_free_blocks(void)
 		tcmu_handle_completions(udev);
 
 		/* Skip the udevs waiting the global pool or in idle */
-		if (udev->waiting_blocks || !udev->dbi_thresh) {
+		if (!force && (udev->waiting_blocks || !udev->dbi_thresh)) {
 			mutex_unlock(&udev->cmdr_lock);
 			continue;
 		}
@@ -2080,10 +2082,35 @@ static uint32_t find_free_blocks(void)
 
 		/* Release the block pages */
 		tcmu_blocks_release(&udev->data_blocks, start, end);
+
+		if (list_empty(&udev->waiter)) {
+			/*
+			 * if we had to take pages from a dev that hit its
+			 * DATA_BLOCK_BITS limit put it on the waiter
+			 * list so it gets rescheduled when pages are free.
+			 */
+			spin_lock(&root_udev_waiter_lock);
+			list_add_tail(&udev->waiter, &root_udev_waiter);
+			spin_unlock(&root_udev_waiter_lock);
+		}
+
 		mutex_unlock(&udev->cmdr_lock);
+
+		pr_debug("Freed %u blocks from %s. Forced %d\n", end - start,
+			 udev->name, force);
 		free_blocks += end - start;
 	}
+
+	if (!force && !free_blocks) {
+		/*
+		 * if all pages were held by devs with waiting_blocks > 0
+		 * then we have to force the release to prevent deadlock.
+		 */
+		force = true;
+		goto retry;
+	}
+
 	mutex_unlock(&root_udev_mutex);
 
 	return free_blocks;
 }