From patchwork Wed Oct 18 07:14:05 2017
X-Patchwork-Submitter: Mike Christie <mchristi@redhat.com>
X-Patchwork-Id: 10013821
From: Mike Christie <mchristi@redhat.com>
To: lixiubo@cmss.chinamobile.com, target-devel@vger.kernel.org,
    nab@linux-iscsi.org
Cc: Mike Christie <mchristi@redhat.com>
Subject: [PATCH 10/17] tcmu: take blocks from devs waiting on blocks
Date: Wed, 18 Oct 2017 02:14:05 -0500
Message-Id: <1508310852-15366-11-git-send-email-mchristi@redhat.com>
In-Reply-To: <1508310852-15366-1-git-send-email-mchristi@redhat.com>
References: <1508310852-15366-1-git-send-email-mchristi@redhat.com>
X-Mailing-List: target-devel@vger.kernel.org

We could end up in a state where the devs with waiting_blocks set hold
all of the blocks, so find_free_blocks will always fail. This adds a
force round: if the first round frees nothing, we retry and take pages
from all devs.
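
Not kernel code, just to illustrate the idea: below is a minimal user-space
sketch of the two-pass reclaim, with a made-up toy_dev type standing in for
struct tcmu_dev. The first pass skips devs that are themselves waiting for
blocks; if that pass frees nothing, a forced second pass takes blocks from
every dev so reclaim cannot deadlock.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for a tcmu dev; fields are illustrative only. */
struct toy_dev {
	const char *name;
	unsigned int waiting_blocks;	/* dev is itself waiting for blocks */
	unsigned int reclaimable;	/* blocks we could take back from it */
};

/*
 * Two-pass reclaim: pass 1 skips devs waiting on the global pool; if that
 * frees nothing (the waiters hold everything), pass 2 ("force") takes
 * blocks from all devs.
 */
static unsigned int toy_find_free_blocks(struct toy_dev *devs, int ndevs)
{
	unsigned int free_blocks = 0;
	bool force = false;
	int i;

retry:
	for (i = 0; i < ndevs; i++) {
		if (!force && devs[i].waiting_blocks)
			continue;	/* pass 1: leave waiters alone */

		printf("Freed %u blocks from %s. Forced %d\n",
		       devs[i].reclaimable, devs[i].name, force);
		free_blocks += devs[i].reclaimable;
		devs[i].reclaimable = 0;
	}

	if (!force && !free_blocks) {
		/* every block was held by a waiter: force the release */
		force = true;
		goto retry;
	}

	return free_blocks;
}

int main(void)
{
	/* Both devs are waiting, so only the forced pass frees anything. */
	struct toy_dev devs[] = {
		{ .name = "dev0", .waiting_blocks = 4, .reclaimable = 8 },
		{ .name = "dev1", .waiting_blocks = 2, .reclaimable = 8 },
	};

	printf("total freed: %u\n", toy_find_free_blocks(devs, 2));
	return 0;
}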
Signed-off-by: Mike Christie <mchristi@redhat.com>
---
 drivers/target/target_core_user.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index b5cdbeb..0bc8217 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -2040,10 +2040,12 @@ static struct target_backend_ops tcmu_ops = {
 static uint32_t find_free_blocks(void)
 {
 	struct tcmu_dev *udev;
+	bool force = false;
 	loff_t off;
 	uint32_t start, end, block, free_blocks = 0;
 
 	mutex_lock(&root_udev_mutex);
+retry:
 	list_for_each_entry(udev, &root_udev, node) {
 		mutex_lock(&udev->cmdr_lock);
 
@@ -2051,7 +2053,7 @@ static uint32_t find_free_blocks(void)
 		tcmu_handle_completions(udev);
 
 		/* Skip the udevs waiting the global pool or in idle */
-		if (udev->waiting_blocks || !udev->dbi_thresh) {
+		if (!force && (udev->waiting_blocks || !udev->dbi_thresh)) {
 			mutex_unlock(&udev->cmdr_lock);
 			continue;
 		}
@@ -2080,10 +2082,35 @@ static uint32_t find_free_blocks(void)
 
 		/* Release the block pages */
 		tcmu_blocks_release(&udev->data_blocks, start, end);
+
+		if (list_empty(&udev->waiter)) {
+			/*
+			 * if we had to take pages from a dev that hit its
+			 * DATA_BLOCK_BITS limit put it on the waiter
+			 * list so it gets rescheduled when pages are free.
+			 */
+			spin_lock(&root_udev_waiter_lock);
+			list_add_tail(&udev->waiter, &root_udev_waiter);
+			spin_unlock(&root_udev_waiter_lock);
+		}
+
 		mutex_unlock(&udev->cmdr_lock);
 
+		pr_debug("Freed %u blocks from %s. Forced %d\n", end - start,
+			 udev->name, force);
+
 		free_blocks += end - start;
 	}
+
+	if (!force && !free_blocks) {
+		/*
+		 * if all pages were held by devs with waiting_blocks > 0
+		 * then we have to force the release to prevent deadlock.
+		 */
+		force = true;
+		goto retry;
+	}
+
 	mutex_unlock(&root_udev_mutex);
 	return free_blocks;
 }
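
Not part of the patch, just for illustration: the list_empty(&udev->waiter)
check above is the usual kernel idiom for "this entry is not on any list"
(assuming the entry is set up with INIT_LIST_HEAD and removed with
list_del_init), so a dev whose pages were taken is queued on
root_udev_waiter at most once. Below is a minimal user-space sketch of that
"queue only if not already queued" guard, with a made-up toy_dev type, a
hand-rolled circular list standing in for the kernel's list.h, and a
pthread mutex in place of root_udev_waiter_lock.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Minimal circular doubly-linked list, in the spirit of the kernel's list.h. */
struct list_node {
	struct list_node *prev, *next;
};

static void list_init(struct list_node *n)
{
	n->prev = n->next = n;	/* self-pointing == not on a list */
}

static bool list_is_queued(const struct list_node *n)
{
	return n->next != n;	/* inverse of list_empty() on an entry */
}

static void list_add_tail_node(struct list_node *n, struct list_node *head)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

struct toy_dev {
	const char *name;
	struct list_node waiter;	/* entry on the global waiter list */
};

static struct list_node waiter_list;		/* stand-in for root_udev_waiter */
static pthread_mutex_t waiter_lock = PTHREAD_MUTEX_INITIALIZER;

/* Queue the dev for rescheduling, but only if it is not already queued. */
static void requeue_waiter(struct toy_dev *dev)
{
	pthread_mutex_lock(&waiter_lock);
	if (!list_is_queued(&dev->waiter))
		list_add_tail_node(&dev->waiter, &waiter_list);
	pthread_mutex_unlock(&waiter_lock);
}

int main(void)
{
	struct toy_dev dev = { .name = "dev0" };

	list_init(&waiter_list);
	list_init(&dev.waiter);

	requeue_waiter(&dev);
	requeue_waiter(&dev);	/* second call is a no-op thanks to the guard */

	printf("%s queued: %d\n", dev.name, list_is_queued(&dev.waiter));
	return 0;
}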