From patchwork Wed Oct 25 16:47:27 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mike Christie
X-Patchwork-Id: 10027011
From: Mike Christie <mchristi@redhat.com>
To: target-devel@vger.kernel.org, nab@linux-iscsi.org
Cc: Mike Christie <mchristi@redhat.com>
Subject: [PATCH 17/20] tcmu: run the unmap thread/wq if waiters.
Date: Wed, 25 Oct 2017 11:47:27 -0500
Message-Id: <1508950050-10120-18-git-send-email-mchristi@redhat.com>
In-Reply-To: <1508950050-10120-1-git-send-email-mchristi@redhat.com>
References: <1508950050-10120-1-git-send-email-mchristi@redhat.com>
X-Mailing-List: target-devel@vger.kernel.org

If tcmu_dev 1 took exactly TCMU_GLOBAL_MAX_BLOCKS blocks and tcmu_dev 2
then tried to allocate blocks, dev 2 would be put on the waiter list.
Later, when dev 1's commands complete, tcmu_irqcontrol would see that
dev 1 is not on the waiter list and so would run only dev 1's queue.
dev 2 could then be starved.

This patch adds a check in tcmu_irqcontrol for whether we have hit the
global limit and have waiters. In that case we put the completing dev
on the waiter list if needed and then wake up the unmap thread/wq.
Signed-off-by: Mike Christie <mchristi@redhat.com>
---
 drivers/target/target_core_user.c | 47 ++++++++++++++++++++++++++-------------
 1 file changed, 32 insertions(+), 15 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 4fe5249..1433838 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -1271,23 +1271,41 @@ static bool run_cmdr_queue(struct tcmu_dev *udev)
 	return drained;
 }
 
+static bool tcmu_waiting_on_dev_blocks(struct tcmu_dev *udev)
+{
+	return list_empty(&udev->waiter) && !list_empty(&udev->cmdr_queue);
+}
+
 static int tcmu_irqcontrol(struct uio_info *info, s32 irq_on)
 {
-	struct tcmu_dev *tcmu_dev = container_of(info, struct tcmu_dev, uio_info);
+	struct tcmu_dev *udev = container_of(info, struct tcmu_dev, uio_info);
+	bool run_local = true;
 
-	mutex_lock(&tcmu_dev->cmdr_lock);
-	/*
-	 * If the current udev is also in waiter list, this will
-	 * make sure that the other waiters in list be fed ahead
-	 * of it.
-	 */
-	if (!list_empty(&tcmu_dev->waiter)) {
-		schedule_work(&tcmu_unmap_work);
-	} else {
-		tcmu_handle_completions(tcmu_dev);
-		run_cmdr_queue(tcmu_dev);
+	mutex_lock(&udev->cmdr_lock);
+
+	if (atomic_read(&global_db_count) == TCMU_GLOBAL_MAX_BLOCKS) {
+		spin_lock(&root_udev_waiter_lock);
+		if (!list_empty(&root_udev_waiter)) {
+			/*
+			 * If we only hit the per block limit then make sure
+			 * we are added to the global list so we get run
+			 * after the other waiters.
+			 */
+			if (tcmu_waiting_on_dev_blocks(udev))
+				list_add_tail(&udev->waiter, &root_udev_waiter);
+
+			run_local = false;
+			schedule_work(&tcmu_unmap_work);
+		}
+		spin_unlock(&root_udev_waiter_lock);
 	}
-	mutex_unlock(&tcmu_dev->cmdr_lock);
+
+	if (run_local) {
+		tcmu_handle_completions(udev);
+		run_cmdr_queue(udev);
+	}
+
+	mutex_unlock(&udev->cmdr_lock);
 
 	return 0;
 }
@@ -2186,8 +2204,7 @@ static uint32_t find_free_blocks(void)
 		/* Release the block pages */
 		tcmu_blocks_release(&udev->data_blocks, start, end);
 
-		if (list_empty(&udev->waiter) &&
-		    !list_empty(&udev->cmdr_queue)) {
+		if (tcmu_waiting_on_dev_blocks(udev)) {
 			/*
 			 * if we had to take pages from a dev that hit its
 			 * DATA_BLOCK_BITS limit put it on the waiter