From patchwork Mon Oct 30 03:44:28 2017
X-Patchwork-Submitter: Mike Christie <mchristi@redhat.com>
X-Patchwork-Id: 10031973
From: Mike Christie <mchristi@redhat.com>
To: martin.petersen@oracle.com, jejb@linux.vnet.ibm.com,
    linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
    nab@linux-iscsi.org
Cc: Mike Christie <mchristi@redhat.com>
Subject: [PATCH 08/19] tcmu: move expired command completion to unmap thread
Date: Sun, 29 Oct 2017 22:44:28 -0500
Message-Id: <1509335079-5276-9-git-send-email-mchristi@redhat.com>
In-Reply-To: <1509335079-5276-1-git-send-email-mchristi@redhat.com>
References: <1509335079-5276-1-git-send-email-mchristi@redhat.com>

This moves the expired command completion handling to the unmap
workqueue, so the next patch can use a mutex in tcmu_check_expired_cmd.

Notes:
tcmu_device_timedout's use of spin_lock_irqsave was not needed. The
commands_lock is shared between thread context (tcmu_queue_cmd_ring and
tcmu_irqcontrol, which despite its name does not run in irq context)
and timer/bh context. In timer/bh context bhs are already disabled, so
only the _bh lock variants are needed from the thread-context callers.
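For reference, here is a minimal sketch (not part of the patch) of the
locking rule the note above describes: a lock shared between thread
context and a timer callback only needs the _bh variants on the
thread-context side, because the timer already runs with bottom halves
disabled. The names example_lock, example_list, example_timer_fn and
example_thread_path are illustrative only.

#include <linux/list.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);
static LIST_HEAD(example_list);

/* Timer/bh context: bhs are already disabled here, so plain spin_lock()
 * is sufficient; the irq-disabling variants are not needed. */
static void example_timer_fn(unsigned long data)
{
	spin_lock(&example_lock);
	/* ... walk or modify example_list ... */
	spin_unlock(&example_lock);
}

/* Thread context: take the _bh variant so the timer cannot fire on this
 * CPU while the lock is held and deadlock against us. */
static void example_thread_path(struct list_head *entry)
{
	spin_lock_bh(&example_lock);
	list_add_tail(entry, &example_list);
	spin_unlock_bh(&example_lock);
}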
Signed-off-by: Mike Christie <mchristi@redhat.com>
---
 drivers/target/target_core_user.c | 48 +++++++++++++++++++++++++++++++--------
 1 file changed, 39 insertions(+), 9 deletions(-)

diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
index 14d9b79..7271ec2 100644
--- a/drivers/target/target_core_user.c
+++ b/drivers/target/target_core_user.c
@@ -143,6 +143,7 @@ struct tcmu_dev {
 	struct timer_list timeout;
 	unsigned int cmd_time_out;
 
+	struct list_head timedout_entry;
 	spinlock_t nl_cmd_lock;
 	struct tcmu_nl_cmd curr_nl_cmd;
@@ -179,6 +180,9 @@ struct tcmu_cmd {
 static DEFINE_MUTEX(root_udev_mutex);
 static LIST_HEAD(root_udev);
 
+static DEFINE_SPINLOCK(timed_out_udevs_lock);
+static LIST_HEAD(timed_out_udevs);
+
 static atomic_t global_db_count = ATOMIC_INIT(0);
 
 static struct work_struct tcmu_unmap_work;
@@ -1055,18 +1059,15 @@ static int tcmu_check_expired_cmd(int id, void *p, void *data)
 static void tcmu_device_timedout(unsigned long data)
 {
 	struct tcmu_dev *udev = (struct tcmu_dev *)data;
-	unsigned long flags;
 
-	spin_lock_irqsave(&udev->commands_lock, flags);
-	idr_for_each(&udev->commands, tcmu_check_expired_cmd, NULL);
-	spin_unlock_irqrestore(&udev->commands_lock, flags);
+	pr_debug("%s cmd timeout has expired\n", udev->name);
 
-	schedule_work(&tcmu_unmap_work);
+	spin_lock(&timed_out_udevs_lock);
+	if (list_empty(&udev->timedout_entry))
+		list_add_tail(&udev->timedout_entry, &timed_out_udevs);
+	spin_unlock(&timed_out_udevs_lock);
 
-	/*
-	 * We don't need to wakeup threads on wait_cmdr since they have their
-	 * own timeout.
-	 */
+	schedule_work(&tcmu_unmap_work);
 }
 
 static int tcmu_attach_hba(struct se_hba *hba, u32 host_id)
@@ -1110,6 +1111,7 @@ static struct se_device *tcmu_alloc_device(struct se_hba *hba, const char *name)
 
 	init_waitqueue_head(&udev->wait_cmdr);
 	mutex_init(&udev->cmdr_lock);
+	INIT_LIST_HEAD(&udev->timedout_entry);
 
 	idr_init(&udev->commands);
 	spin_lock_init(&udev->commands_lock);
@@ -1324,6 +1326,11 @@ static void tcmu_dev_kref_release(struct kref *kref)
 	vfree(udev->mb_addr);
 	udev->mb_addr = NULL;
 
+	spin_lock_bh(&timed_out_udevs_lock);
+	if (!list_empty(&udev->timedout_entry))
+		list_del(&udev->timedout_entry);
+	spin_unlock_bh(&timed_out_udevs_lock);
+
 	/* Upper layer should drain all requests before calling this */
 	spin_lock_irq(&udev->commands_lock);
 	idr_for_each_entry(&udev->commands, cmd, i) {
@@ -2039,8 +2046,31 @@ static void run_cmdr_queues(void)
 	mutex_unlock(&root_udev_mutex);
 }
 
+static void check_timedout_devices(void)
+{
+	struct tcmu_dev *udev, *tmp_dev;
+	LIST_HEAD(devs);
+
+	spin_lock_bh(&timed_out_udevs_lock);
+	list_splice_init(&timed_out_udevs, &devs);
+
+	list_for_each_entry_safe(udev, tmp_dev, &devs, timedout_entry) {
+		list_del_init(&udev->timedout_entry);
+		spin_unlock_bh(&timed_out_udevs_lock);
+
+		spin_lock(&udev->commands_lock);
+		idr_for_each(&udev->commands, tcmu_check_expired_cmd, NULL);
+		spin_unlock(&udev->commands_lock);
+
+		spin_lock_bh(&timed_out_udevs_lock);
+	}
+
+	spin_unlock_bh(&timed_out_udevs_lock);
+}
+
 static void tcmu_unmap_work_fn(struct work_struct *work)
 {
+	check_timedout_devices();
 	find_free_blocks();
 	run_cmdr_queues();
 }