From patchwork Wed Sep 13 23:04:47 2023
X-Patchwork-Submitter: Tyrel Datwyler
X-Patchwork-Id: 13383898
From: Tyrel Datwyler
To: martin.petersen@oracle.com
Cc: linux-scsi@vger.kernel.org, brking@linux.ibm.com, james.bottomley@hansenpartnership.com, Tyrel Datwyler, Brian King
Subject: [PATCH 01/11] ibmvfc: remove BUG_ON in the case of an empty event pool
Date: Wed, 13 Sep 2023 18:04:47 -0500
Message-Id: <20230913230457.2575849-2-tyreld@linux.ibm.com>
In-Reply-To: <20230913230457.2575849-1-tyreld@linux.ibm.com>

In practice the driver should never send more commands than are allocated to a queue's event pool. In the unlikely event that this happens, the code asserts a BUG_ON, and if the kernel is not configured to crash on panic it returns a junk event pointer from the empty event list, causing things to spiral from there. The BUG_ON is a historical artifact of the ibmvfc driver first being upstreamed, and it is now well understood that BUG_ON is bad practice except in the most unrecoverable scenarios. Nothing about this scenario prevents the driver from recovering and carrying on.

Remove the BUG_ON in question from ibmvfc_get_event() and return a NULL pointer in the case of an empty event pool. Update all call sites of ibmvfc_get_event() to check for a NULL pointer and perform the appropriate failure or recovery action.

Signed-off-by: Tyrel Datwyler
Reviewed-by: Brian King
---
 drivers/scsi/ibmvscsi/ibmvfc.c | 124 ++++++++++++++++++++++++++++++++-
 1 file changed, 122 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index ce9eb00e2ca0..10435ddddfe5 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -1519,7 +1519,11 @@ static struct ibmvfc_event *ibmvfc_get_event(struct ibmvfc_queue *queue) unsigned long flags; spin_lock_irqsave(&queue->l_lock, flags); - BUG_ON(list_empty(&queue->free)); + if (list_empty(&queue->free)) { + ibmvfc_log(queue->vhost, 4, "empty event pool on queue:%ld\n", queue->hwq_id); + spin_unlock_irqrestore(&queue->l_lock, flags); + return NULL; + } evt = list_entry(queue->free.next, struct ibmvfc_event, queue_list); atomic_set(&evt->free, 0); list_del(&evt->queue_list);
@@ -1948,9 +1952,15 @@ static int ibmvfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd) if (vhost->using_channels) { scsi_channel = hwq % vhost->scsi_scrqs.active_queues; evt = ibmvfc_get_event(&vhost->scsi_scrqs.scrqs[scsi_channel]); + if (!evt) + return SCSI_MLQUEUE_HOST_BUSY; + evt->hwq = hwq % vhost->scsi_scrqs.active_queues; - } else + } else { evt = ibmvfc_get_event(&vhost->crq); + if (!evt) + return SCSI_MLQUEUE_HOST_BUSY; + } ibmvfc_init_event(evt, ibmvfc_scsi_done, IBMVFC_CMD_FORMAT); evt->cmnd = cmnd;
@@ -2038,6 +2048,11 @@ static int ibmvfc_bsg_timeout(struct bsg_job *job) vhost->aborting_passthru = 1; evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + spin_unlock_irqrestore(vhost->host->host_lock, flags); + return -ENOMEM; + } + ibmvfc_init_event(evt, ibmvfc_bsg_timeout_done, IBMVFC_MAD_FORMAT); tmf = &evt->iu.tmf;
@@ -2096,6 +2111,10 @@ static int ibmvfc_bsg_plogi(struct ibmvfc_host *vhost, unsigned int port_id) goto unlock_out; evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + rc = -ENOMEM; + goto unlock_out; + } ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_MAD_FORMAT); plogi = &evt->iu.plogi; memset(plogi, 0, sizeof(*plogi));
@@ -2214,6 +2233,11 @@ static int ibmvfc_bsg_request(struct bsg_job *job) } evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + spin_unlock_irqrestore(vhost->host->host_lock, flags); + rc = -ENOMEM; + goto out; + } ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_MAD_FORMAT); mad = &evt->iu.passthru;
@@ -2302,6 +2326,11 @@ static int ibmvfc_reset_device(struct scsi_device *sdev, int type, char *desc) else evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + spin_unlock_irqrestore(vhost->host->host_lock, flags); + return -ENOMEM; + } + ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_CMD_FORMAT); tmf = ibmvfc_init_vfc_cmd(evt, sdev); iu = ibmvfc_get_fcp_iu(vhost, tmf);
@@ -2505,6 +2534,8 @@ static struct ibmvfc_event *ibmvfc_init_tmf(struct ibmvfc_queue *queue, struct ibmvfc_tmf *tmf; evt = ibmvfc_get_event(queue); + if (!evt) + return NULL; ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_MAD_FORMAT); tmf = &evt->iu.tmf;
@@ -2561,6 +2592,11 @@ static int ibmvfc_cancel_all_mq(struct scsi_device *sdev, int type) if (found_evt && vhost->logged_in) { evt = ibmvfc_init_tmf(&queues[i], sdev, type); + if (!evt) { + spin_unlock(queues[i].q_lock); + spin_unlock_irqrestore(vhost->host->host_lock, flags); + return -ENOMEM; + } evt->sync_iu = &queues[i].cancel_rsp; ibmvfc_send_event(evt, vhost, default_timeout); list_add_tail(&evt->cancel, &cancelq);
@@ -2774,6 +2810,10 @@ static int ibmvfc_abort_task_set(struct scsi_device *sdev) if (vhost->state == IBMVFC_ACTIVE) { evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + spin_unlock_irqrestore(vhost->host->host_lock, flags); + return -ENOMEM; + } ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_CMD_FORMAT); tmf = ibmvfc_init_vfc_cmd(evt, sdev); iu = ibmvfc_get_fcp_iu(vhost, tmf);
@@ -4032,6 +4072,12 @@ static void ibmvfc_tgt_send_prli(struct ibmvfc_target *tgt) kref_get(&tgt->kref); evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); + kref_put(&tgt->kref, ibmvfc_release_tgt); + __ibmvfc_reset_host(vhost); + return; + } vhost->discovery_threads++; ibmvfc_init_event(evt, ibmvfc_tgt_prli_done, IBMVFC_MAD_FORMAT); evt->tgt = tgt;
@@ -4139,6 +4185,12 @@ static void ibmvfc_tgt_send_plogi(struct ibmvfc_target *tgt) kref_get(&tgt->kref); tgt->logo_rcvd = 0; evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); + kref_put(&tgt->kref, ibmvfc_release_tgt); + __ibmvfc_reset_host(vhost); + return; + } vhost->discovery_threads++; ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_INIT_WAIT); ibmvfc_init_event(evt, ibmvfc_tgt_plogi_done, IBMVFC_MAD_FORMAT);
@@ -4215,6 +4267,8 @@ static struct ibmvfc_event *__ibmvfc_tgt_get_implicit_logout_evt(struct ibmvfc_t kref_get(&tgt->kref); evt = ibmvfc_get_event(&vhost->crq); + if (!evt) + return NULL; ibmvfc_init_event(evt, done, IBMVFC_MAD_FORMAT); evt->tgt = tgt; mad = &evt->iu.implicit_logout;
@@ -4242,6 +4296,13 @@ static void ibmvfc_tgt_implicit_logout(struct ibmvfc_target *tgt) vhost->discovery_threads++; evt = __ibmvfc_tgt_get_implicit_logout_evt(tgt, ibmvfc_tgt_implicit_logout_done); + if (!evt) { + vhost->discovery_threads--; + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); + kref_put(&tgt->kref, ibmvfc_release_tgt); + __ibmvfc_reset_host(vhost); + return; + } ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_INIT_WAIT); if (ibmvfc_send_event(evt, vhost, default_timeout)) {
@@ -4381,6 +4442,12 @@ static void ibmvfc_tgt_move_login(struct ibmvfc_target *tgt) kref_get(&tgt->kref); evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_DEL_RPORT); + kref_put(&tgt->kref, ibmvfc_release_tgt); + __ibmvfc_reset_host(vhost); + return; + } vhost->discovery_threads++; ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_INIT_WAIT); ibmvfc_init_event(evt, ibmvfc_tgt_move_login_done, IBMVFC_MAD_FORMAT);
@@ -4547,6 +4614,14 @@ static void ibmvfc_adisc_timeout(struct timer_list *t) vhost->abort_threads++; kref_get(&tgt->kref); evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + tgt_err(tgt, "Failed to send cancel event for ADISC. rc=%d\n", rc); + vhost->abort_threads--; + kref_put(&tgt->kref, ibmvfc_release_tgt); + __ibmvfc_reset_host(vhost); + spin_unlock_irqrestore(vhost->host->host_lock, flags); + return; + } ibmvfc_init_event(evt, ibmvfc_tgt_adisc_cancel_done, IBMVFC_MAD_FORMAT); evt->tgt = tgt;
@@ -4597,6 +4672,12 @@ static void ibmvfc_tgt_adisc(struct ibmvfc_target *tgt) kref_get(&tgt->kref); evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); + kref_put(&tgt->kref, ibmvfc_release_tgt); + __ibmvfc_reset_host(vhost); + return; + } vhost->discovery_threads++; ibmvfc_init_event(evt, ibmvfc_tgt_adisc_done, IBMVFC_MAD_FORMAT); evt->tgt = tgt;
@@ -4700,6 +4781,12 @@ static void ibmvfc_tgt_query_target(struct ibmvfc_target *tgt) kref_get(&tgt->kref); evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); + kref_put(&tgt->kref, ibmvfc_release_tgt); + __ibmvfc_reset_host(vhost); + return; + } vhost->discovery_threads++; evt->tgt = tgt; ibmvfc_init_event(evt, ibmvfc_tgt_query_target_done, IBMVFC_MAD_FORMAT);
@@ -4872,6 +4959,13 @@ static void ibmvfc_discover_targets(struct ibmvfc_host *vhost) { struct ibmvfc_discover_targets *mad; struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq); + int level = IBMVFC_DEFAULT_LOG_LEVEL; + + if (!evt) { + ibmvfc_log(vhost, level, "Discover Targets failed: no available events\n"); + ibmvfc_hard_reset_host(vhost); + return; + } ibmvfc_init_event(evt, ibmvfc_discover_targets_done, IBMVFC_MAD_FORMAT); mad = &evt->iu.discover_targets;
@@ -4949,8 +5043,15 @@ static void ibmvfc_channel_setup(struct ibmvfc_host *vhost) struct ibmvfc_scsi_channels *scrqs = &vhost->scsi_scrqs; unsigned int num_channels = min(vhost->client_scsi_channels, vhost->max_vios_scsi_channels); + int level = IBMVFC_DEFAULT_LOG_LEVEL; int i; + if (!evt) { + ibmvfc_log(vhost, level, "Channel Setup failed: no available events\n"); + ibmvfc_hard_reset_host(vhost); + return; + } + memset(setup_buf, 0, sizeof(*setup_buf)); if (num_channels == 0) setup_buf->flags = cpu_to_be32(IBMVFC_CANCEL_CHANNELS);
@@ -5012,6 +5113,13 @@ static void ibmvfc_channel_enquiry(struct ibmvfc_host *vhost) { struct ibmvfc_channel_enquiry *mad; struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq); + int level = IBMVFC_DEFAULT_LOG_LEVEL; + + if (!evt) { + ibmvfc_log(vhost, level, "Channel Enquiry failed: no available events\n"); + ibmvfc_hard_reset_host(vhost); + return; + } ibmvfc_init_event(evt, ibmvfc_channel_enquiry_done, IBMVFC_MAD_FORMAT); mad = &evt->iu.channel_enquiry;
@@ -5134,6 +5242,12 @@ static void ibmvfc_npiv_login(struct ibmvfc_host *vhost) struct ibmvfc_npiv_login_mad *mad; struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + ibmvfc_dbg(vhost, "NPIV Login failed: no available events\n"); + ibmvfc_hard_reset_host(vhost); + return; + } + ibmvfc_gather_partition_info(vhost); ibmvfc_set_login_info(vhost); ibmvfc_init_event(evt, ibmvfc_npiv_login_done, IBMVFC_MAD_FORMAT);
@@ -5198,6 +5312,12 @@ static void ibmvfc_npiv_logout(struct ibmvfc_host *vhost) struct ibmvfc_event *evt; evt = ibmvfc_get_event(&vhost->crq); + if (!evt) { + ibmvfc_dbg(vhost, "NPIV Logout failed: no available events\n"); + ibmvfc_hard_reset_host(vhost); + return; + } + ibmvfc_init_event(evt, ibmvfc_npiv_logout_done, IBMVFC_MAD_FORMAT); mad = &evt->iu.npiv_logout;
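To illustrate the recovery pattern this patch adopts, here is a minimal, self-contained user-space sketch (not the driver's actual types or API) of a fixed-size free-list pool whose get() returns NULL when exhausted, with the caller translating that into a retryable busy status instead of crashing:

#include <stdio.h>
#include <stddef.h>

#define POOL_SIZE 4
#define HOST_BUSY 1	/* stand-in for SCSI_MLQUEUE_HOST_BUSY */

struct event { int in_use; };

static struct event pool[POOL_SIZE];

/* Return a free event, or NULL when the pool is exhausted (no BUG_ON). */
static struct event *get_event(void)
{
	for (size_t i = 0; i < POOL_SIZE; i++) {
		if (!pool[i].in_use) {
			pool[i].in_use = 1;
			return &pool[i];
		}
	}
	return NULL;
}

/* Caller maps an empty pool to a retryable status, mirroring queuecommand. */
static int queue_command(void)
{
	struct event *evt = get_event();

	if (!evt)
		return HOST_BUSY;	/* the midlayer would requeue and retry */
	/* ... build and send the command here ... */
	return 0;
}

int main(void)
{
	for (int i = 0; i < POOL_SIZE + 1; i++)
		printf("cmd %d -> %s\n", i, queue_command() ? "busy" : "sent");
	return 0;
}

The final iteration prints "busy", which is the recoverable outcome the patch trades the BUG_ON for.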
From patchwork Wed Sep 13 23:04:48 2023
X-Patchwork-Submitter: Tyrel Datwyler
X-Patchwork-Id: 13383900
From: Tyrel Datwyler
To: martin.petersen@oracle.com
Cc: linux-scsi@vger.kernel.org, brking@linux.ibm.com, james.bottomley@hansenpartnership.com, Tyrel Datwyler, Brian King
Subject: [PATCH 02/11] ibmvfc: implement channel queue depth and event buffer accounting
Date: Wed, 13 Sep 2023 18:04:48 -0500
Message-Id: <20230913230457.2575849-3-tyreld@linux.ibm.com>
In-Reply-To: <20230913230457.2575849-1-tyreld@linux.ibm.com>

Extend ibmvfc_queue, ibmvfc_event, and ibmvfc_event_pool to provide queue depths for general I/O commands and reserved commands, as well as proper accounting of the free events of each type from the general event pool. Further, calculate the max command limit negotiated with the VIOS at NPIV login time as a function of the number of queues times their total queue depth (general and reserved depths combined). This does away with the legacy max_requests value and allows the driver to better manage and track its resources.

Signed-off-by: Tyrel Datwyler
Reviewed-by: Brian King
---
 drivers/scsi/ibmvscsi/ibmvfc.c | 108 +++++++++++++++++++++------------
 drivers/scsi/ibmvscsi/ibmvfc.h |   9 +++
 2 files changed, 78 insertions(+), 39 deletions(-)

diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index 10435ddddfe5..9cd11cab4f3e 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -38,6 +38,7 @@ static unsigned int default_timeout = IBMVFC_DEFAULT_TIMEOUT; static u64 max_lun = IBMVFC_MAX_LUN; static unsigned int max_targets = IBMVFC_MAX_TARGETS; static unsigned int max_requests = IBMVFC_MAX_REQUESTS_DEFAULT; +static u16 scsi_qdepth = IBMVFC_SCSI_QDEPTH; static unsigned int disc_threads = IBMVFC_MAX_DISC_THREADS; static unsigned int ibmvfc_debug = IBMVFC_DEBUG; static unsigned int log_level = IBMVFC_DEFAULT_LOG_LEVEL;
@@ -83,6 +84,9 @@ MODULE_PARM_DESC(default_timeout, module_param_named(max_requests, max_requests, uint, S_IRUGO); MODULE_PARM_DESC(max_requests, "Maximum requests for this adapter. " "[Default=" __stringify(IBMVFC_MAX_REQUESTS_DEFAULT) "]"); +module_param_named(scsi_qdepth, scsi_qdepth, ushort, S_IRUGO); +MODULE_PARM_DESC(scsi_qdepth, "Maximum scsi command depth per adapter queue. " + "[Default=" __stringify(IBMVFC_SCSI_QDEPTH) "]"); module_param_named(max_lun, max_lun, ullong, S_IRUGO); MODULE_PARM_DESC(max_lun, "Maximum allowed LUN. " "[Default=" __stringify(IBMVFC_MAX_LUN) "]");
@@ -781,23 +785,22 @@ static int ibmvfc_send_crq_init_complete(struct ibmvfc_host *vhost) * Returns zero on success. **/ static int ibmvfc_init_event_pool(struct ibmvfc_host *vhost, - struct ibmvfc_queue *queue, - unsigned int size) + struct ibmvfc_queue *queue) { int i; struct ibmvfc_event_pool *pool = &queue->evt_pool; ENTER; - if (!size) + if (!queue->total_depth) return 0; - pool->size = size; - pool->events = kcalloc(size, sizeof(*pool->events), GFP_KERNEL); + pool->size = queue->total_depth; + pool->events = kcalloc(pool->size, sizeof(*pool->events), GFP_KERNEL); if (!pool->events) return -ENOMEM; pool->iu_storage = dma_alloc_coherent(vhost->dev, - size * sizeof(*pool->iu_storage), + pool->size * sizeof(*pool->iu_storage), &pool->iu_token, 0); if (!pool->iu_storage) {
@@ -807,9 +810,11 @@ static int ibmvfc_init_event_pool(struct ibmvfc_host *vhost, INIT_LIST_HEAD(&queue->sent); INIT_LIST_HEAD(&queue->free); + queue->evt_free = queue->evt_depth; + queue->reserved_free = queue->reserved_depth; spin_lock_init(&queue->l_lock); - for (i = 0; i < size; ++i) { + for (i = 0; i < pool->size; ++i) { struct ibmvfc_event *evt = &pool->events[i]; /*
@@ -1033,6 +1038,12 @@ static void ibmvfc_free_event(struct ibmvfc_event *evt) spin_lock_irqsave(&evt->queue->l_lock, flags); list_add_tail(&evt->queue_list, &evt->queue->free); + if (evt->reserved) { + evt->reserved = 0; + evt->queue->reserved_free++; + } else { + evt->queue->evt_free++; + } if (evt->eh_comp) complete(evt->eh_comp); spin_unlock_irqrestore(&evt->queue->l_lock, flags);
@@ -1475,6 +1486,12 @@ static void ibmvfc_set_login_info(struct ibmvfc_host *vhost) struct ibmvfc_queue *async_crq = &vhost->async_crq; struct device_node *of_node = vhost->dev->of_node; const char *location; + u16 max_cmds; + + max_cmds = scsi_qdepth + IBMVFC_NUM_INTERNAL_REQ; + if (mq_enabled) + max_cmds += (scsi_qdepth + IBMVFC_NUM_INTERNAL_SUBQ_REQ) * + vhost->client_scsi_channels; memset(login_info, 0, sizeof(*login_info));
@@ -1489,7 +1506,7 @@ static void ibmvfc_set_login_info(struct ibmvfc_host *vhost) if (vhost->client_migrated) login_info->flags |= cpu_to_be16(IBMVFC_CLIENT_MIGRATED); - login_info->max_cmds = cpu_to_be32(max_requests + IBMVFC_NUM_INTERNAL_REQ); + login_info->max_cmds = cpu_to_be32(max_cmds); login_info->capabilities = cpu_to_be64(IBMVFC_CAN_MIGRATE | IBMVFC_CAN_SEND_VF_WWPN); if (vhost->mq_enabled || vhost->using_channels)
@@ -1513,24 +1530,33 @@ static void ibmvfc_set_login_info(struct ibmvfc_host *vhost) * * Returns a free event from the pool. **/ -static struct ibmvfc_event *ibmvfc_get_event(struct ibmvfc_queue *queue) +static struct ibmvfc_event *__ibmvfc_get_event(struct ibmvfc_queue *queue, int reserved) { - struct ibmvfc_event *evt; + struct ibmvfc_event *evt = NULL; unsigned long flags; spin_lock_irqsave(&queue->l_lock, flags); - if (list_empty(&queue->free)) { - ibmvfc_log(queue->vhost, 4, "empty event pool on queue:%ld\n", queue->hwq_id); - spin_unlock_irqrestore(&queue->l_lock, flags); - return NULL; + if (reserved && queue->reserved_free) { + evt = list_entry(queue->free.next, struct ibmvfc_event, queue_list); + evt->reserved = 1; + queue->reserved_free--; + } else if (queue->evt_free) { + evt = list_entry(queue->free.next, struct ibmvfc_event, queue_list); + queue->evt_free--; + } else { + goto out; } - evt = list_entry(queue->free.next, struct ibmvfc_event, queue_list); + atomic_set(&evt->free, 0); list_del(&evt->queue_list); +out: spin_unlock_irqrestore(&queue->l_lock, flags); return evt; } +#define ibmvfc_get_event(queue) __ibmvfc_get_event(queue, 0) +#define ibmvfc_get_reserved_event(queue) __ibmvfc_get_event(queue, 1) + /** * ibmvfc_locked_done - Calls evt completion with host_lock held * @evt: ibmvfc evt to complete
@@ -2047,7 +2073,7 @@ static int ibmvfc_bsg_timeout(struct bsg_job *job) } vhost->aborting_passthru = 1; - evt = ibmvfc_get_event(&vhost->crq); + evt = ibmvfc_get_reserved_event(&vhost->crq); if (!evt) { spin_unlock_irqrestore(vhost->host->host_lock, flags); return -ENOMEM;
@@ -2110,7 +2136,7 @@ static int ibmvfc_bsg_plogi(struct ibmvfc_host *vhost, unsigned int port_id) if (unlikely((rc = ibmvfc_host_chkready(vhost)))) goto unlock_out; - evt = ibmvfc_get_event(&vhost->crq); + evt = ibmvfc_get_reserved_event(&vhost->crq); if (!evt) { rc = -ENOMEM; goto unlock_out;
@@ -2232,7 +2258,7 @@ static int ibmvfc_bsg_request(struct bsg_job *job) goto out; } - evt = ibmvfc_get_event(&vhost->crq); + evt = ibmvfc_get_reserved_event(&vhost->crq); if (!evt) { spin_unlock_irqrestore(vhost->host->host_lock, flags); rc = -ENOMEM;
@@ -2533,7 +2559,7 @@ static struct ibmvfc_event *ibmvfc_init_tmf(struct ibmvfc_queue *queue, struct ibmvfc_event *evt; struct ibmvfc_tmf *tmf; - evt = ibmvfc_get_event(queue); + evt = ibmvfc_get_reserved_event(queue); if (!evt) return NULL; ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_MAD_FORMAT);
@@ -3673,7 +3699,6 @@ static const struct scsi_host_template driver_template = { .max_sectors = IBMVFC_MAX_SECTORS, .shost_groups = ibmvfc_host_groups, .track_queue_depth = 1, - .host_tagset = 1, };
@@ -4071,7 +4096,7 @@ static void ibmvfc_tgt_send_prli(struct ibmvfc_target *tgt) return; kref_get(&tgt->kref); - evt = ibmvfc_get_event(&vhost->crq); + evt = ibmvfc_get_reserved_event(&vhost->crq); if (!evt) { ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); kref_put(&tgt->kref, ibmvfc_release_tgt);
@@ -4184,7 +4209,7 @@ static void ibmvfc_tgt_send_plogi(struct ibmvfc_target *tgt) kref_get(&tgt->kref); tgt->logo_rcvd = 0; - evt = ibmvfc_get_event(&vhost->crq); + evt = ibmvfc_get_reserved_event(&vhost->crq); if (!evt) { ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); kref_put(&tgt->kref, ibmvfc_release_tgt);
@@ -4266,7 +4291,7 @@ static struct ibmvfc_event *__ibmvfc_tgt_get_implicit_logout_evt(struct ibmvfc_t struct ibmvfc_event *evt; kref_get(&tgt->kref); - evt = ibmvfc_get_event(&vhost->crq); + evt = ibmvfc_get_reserved_event(&vhost->crq); if (!evt) return NULL; ibmvfc_init_event(evt, done, IBMVFC_MAD_FORMAT);
@@ -4441,7 +4466,7 @@ static void ibmvfc_tgt_move_login(struct ibmvfc_target *tgt) return; kref_get(&tgt->kref); - evt = ibmvfc_get_event(&vhost->crq); + evt = ibmvfc_get_reserved_event(&vhost->crq); if (!evt) { ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_DEL_RPORT); kref_put(&tgt->kref, ibmvfc_release_tgt);
@@ -4613,7 +4638,7 @@ static void ibmvfc_adisc_timeout(struct timer_list *t) vhost->abort_threads++; kref_get(&tgt->kref); - evt = ibmvfc_get_event(&vhost->crq); + evt = ibmvfc_get_reserved_event(&vhost->crq); if (!evt) { tgt_err(tgt, "Failed to send cancel event for ADISC. rc=%d\n", rc); vhost->abort_threads--;
@@ -4671,7 +4696,7 @@ static void ibmvfc_tgt_adisc(struct ibmvfc_target *tgt) return; kref_get(&tgt->kref); - evt = ibmvfc_get_event(&vhost->crq); + evt = ibmvfc_get_reserved_event(&vhost->crq); if (!evt) { ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); kref_put(&tgt->kref, ibmvfc_release_tgt);
@@ -4780,7 +4805,7 @@ static void ibmvfc_tgt_query_target(struct ibmvfc_target *tgt) return; kref_get(&tgt->kref); - evt = ibmvfc_get_event(&vhost->crq); + evt = ibmvfc_get_reserved_event(&vhost->crq); if (!evt) { ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE); kref_put(&tgt->kref, ibmvfc_release_tgt);
@@ -4958,7 +4983,7 @@ static void ibmvfc_discover_targets_done(struct ibmvfc_event *evt) static void ibmvfc_discover_targets(struct ibmvfc_host *vhost) { struct ibmvfc_discover_targets *mad; - struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq); + struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq); int level = IBMVFC_DEFAULT_LOG_LEVEL; if (!evt) {
@@ -5039,7 +5064,7 @@ static void ibmvfc_channel_setup(struct ibmvfc_host *vhost) { struct ibmvfc_channel_setup_mad *mad; struct ibmvfc_channel_setup *setup_buf = vhost->channel_setup_buf; - struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq); + struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq); struct ibmvfc_scsi_channels *scrqs = &vhost->scsi_scrqs; unsigned int num_channels = min(vhost->client_scsi_channels, vhost->max_vios_scsi_channels);
@@ -5112,7 +5137,7 @@ static void ibmvfc_channel_enquiry_done(struct ibmvfc_event *evt) static void ibmvfc_channel_enquiry(struct ibmvfc_host *vhost) { struct ibmvfc_channel_enquiry *mad; - struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq); + struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq); int level = IBMVFC_DEFAULT_LOG_LEVEL; if (!evt) {
@@ -5240,7 +5265,7 @@ static void ibmvfc_npiv_login_done(struct ibmvfc_event *evt) static void ibmvfc_npiv_login(struct ibmvfc_host *vhost) { struct ibmvfc_npiv_login_mad *mad; - struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq); + struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq); if (!evt) { ibmvfc_dbg(vhost, "NPIV Login failed: no available events\n");
@@ -5311,7 +5336,7 @@ static void ibmvfc_npiv_logout(struct ibmvfc_host *vhost) struct ibmvfc_npiv_logout_mad *mad; struct ibmvfc_event *evt; - evt = ibmvfc_get_event(&vhost->crq); + evt = ibmvfc_get_reserved_event(&vhost->crq); if (!evt) { ibmvfc_dbg(vhost, "NPIV Logout failed: no available events\n"); ibmvfc_hard_reset_host(vhost);
@@ -5765,7 +5790,6 @@ static int ibmvfc_alloc_queue(struct ibmvfc_host *vhost, { struct device *dev = vhost->dev; size_t fmt_size; - unsigned int pool_size = 0; ENTER; spin_lock_init(&queue->_lock);
@@ -5774,7 +5798,9 @@ static int ibmvfc_alloc_queue(struct ibmvfc_host *vhost, switch (fmt) { case IBMVFC_CRQ_FMT: fmt_size = sizeof(*queue->msgs.crq); - pool_size = max_requests + IBMVFC_NUM_INTERNAL_REQ; + queue->total_depth = scsi_qdepth + IBMVFC_NUM_INTERNAL_REQ; + queue->evt_depth = scsi_qdepth; + queue->reserved_depth = IBMVFC_NUM_INTERNAL_REQ; break; case IBMVFC_ASYNC_FMT: fmt_size = sizeof(*queue->msgs.async);
@@ -5782,14 +5808,17 @@ static int ibmvfc_alloc_queue(struct ibmvfc_host *vhost, case IBMVFC_SUB_CRQ_FMT: fmt_size = sizeof(*queue->msgs.scrq); /* We need one extra event for Cancel Commands */ - pool_size = max_requests + 1; + queue->total_depth = scsi_qdepth + IBMVFC_NUM_INTERNAL_SUBQ_REQ; + queue->evt_depth = scsi_qdepth; + queue->reserved_depth = IBMVFC_NUM_INTERNAL_SUBQ_REQ; break; default: dev_warn(dev, "Unknown command/response queue message format: %d\n", fmt); return -EINVAL; } - if (ibmvfc_init_event_pool(vhost, queue, pool_size)) { + queue->fmt = fmt; + if (ibmvfc_init_event_pool(vhost, queue)) { dev_err(dev, "Couldn't initialize event pool.\n"); return -ENOMEM; }
@@ -5808,7 +5837,6 @@ static int ibmvfc_alloc_queue(struct ibmvfc_host *vhost, } queue->cur = 0; - queue->fmt = fmt; queue->size = PAGE_SIZE / fmt_size; queue->vhost = vhost;
@@ -6243,7 +6271,7 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id) } shost->transportt = ibmvfc_transport_template; - shost->can_queue = max_requests; + shost->can_queue = scsi_qdepth; shost->max_lun = max_lun; shost->max_id = max_targets; shost->max_sectors = IBMVFC_MAX_SECTORS;
@@ -6402,7 +6430,9 @@ static int ibmvfc_resume(struct device *dev) */ static unsigned long ibmvfc_get_desired_dma(struct vio_dev *vdev) { - unsigned long pool_dma = max_requests * sizeof(union ibmvfc_iu); + unsigned long pool_dma; + + pool_dma = (IBMVFC_MAX_SCSI_QUEUES * scsi_qdepth) * sizeof(union ibmvfc_iu); return pool_dma + ((512 * 1024) * driver_template.cmd_per_lun); }
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h
index c39a245f43d0..0e641a880e1c 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.h
+++ b/drivers/scsi/ibmvscsi/ibmvfc.h
@@ -27,6 +27,7 @@ #define IBMVFC_ABORT_TIMEOUT 8 #define IBMVFC_ABORT_WAIT_TIMEOUT 40 #define IBMVFC_MAX_REQUESTS_DEFAULT 100 +#define IBMVFC_SCSI_QDEPTH 128 #define IBMVFC_DEBUG 0 #define IBMVFC_MAX_TARGETS 1024
@@ -57,6 +58,8 @@ * 2 for each discovery thread */ #define IBMVFC_NUM_INTERNAL_REQ (1 + 1 + 1 + 2 + (disc_threads * 2)) +/* Reserved subset of events for cancelling channelized IO commands */ +#define IBMVFC_NUM_INTERNAL_SUBQ_REQ 4 #define IBMVFC_MAD_SUCCESS 0x00 #define IBMVFC_MAD_NOT_SUPPORTED 0xF1
@@ -758,6 +761,7 @@ struct ibmvfc_event { struct completion *eh_comp; struct timer_list timer; u16 hwq; + u8 reserved; };
@@ -793,6 +797,11 @@ struct ibmvfc_queue { struct ibmvfc_event_pool evt_pool; struct list_head sent; struct list_head free; + u16 total_depth; + u16 evt_depth; + u16 reserved_depth; + u16 evt_free; + u16 reserved_free; spinlock_t l_lock; union ibmvfc_iu cancel_rsp;
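As a rough illustration of the new login-time sizing described above, the sketch below computes the negotiated command limit from the constants named in this patch. The channel count is hypothetical and disc_threads is assumed at its default; this is not driver code, just the arithmetic:

#include <stdio.h>

/* Values taken from the patch; DISC_THREADS is an assumed default. */
#define SCSI_QDEPTH		128	/* IBMVFC_SCSI_QDEPTH */
#define DISC_THREADS		4
#define NUM_INTERNAL_REQ	(1 + 1 + 1 + 2 + (DISC_THREADS * 2))
#define NUM_INTERNAL_SUBQ_REQ	4	/* IBMVFC_NUM_INTERNAL_SUBQ_REQ */

int main(void)
{
	int channels = 8;	/* hypothetical number of sub-CRQ channels */

	/* Base CRQ depth plus, per channel, its general depth and the
	 * reserved slice kept for internal (e.g. cancel) commands. */
	int max_cmds = SCSI_QDEPTH + NUM_INTERNAL_REQ +
		       (SCSI_QDEPTH + NUM_INTERNAL_SUBQ_REQ) * channels;

	printf("negotiated max_cmds = %d\n", max_cmds);
	return 0;
}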
From patchwork Wed Sep 13 23:04:49 2023
X-Patchwork-Submitter: Tyrel Datwyler
X-Patchwork-Id: 13383897
From: Tyrel Datwyler
To: martin.petersen@oracle.com
Cc: linux-scsi@vger.kernel.org, brking@linux.ibm.com, james.bottomley@hansenpartnership.com, Tyrel Datwyler, Brian King
Subject: [PATCH 03/11] ibmvfc: limit max hw queues by num_online_cpus()
Date: Wed, 13 Sep 2023 18:04:49 -0500
Message-Id: <20230913230457.2575849-4-tyreld@linux.ibm.com>
In-Reply-To: <20230913230457.2575849-1-tyreld@linux.ibm.com>

An LPAR could potentially be configured with a logical CPU count that is smaller than the default hardware queue maximum. Ensure that we don't allocate more hardware queues than there are online CPUs.

Signed-off-by: Tyrel Datwyler
Reviewed-by: Brian King
---
 drivers/scsi/ibmvscsi/ibmvfc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index 9cd11cab4f3e..f35fed60fdae 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -6261,7 +6261,8 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id) struct Scsi_Host *shost; struct device *dev = &vdev->dev; int rc = -ENOMEM; - unsigned int max_scsi_queues = IBMVFC_MAX_SCSI_QUEUES; + unsigned int online_cpus = num_online_cpus(); + unsigned int max_scsi_queues = min((unsigned int)IBMVFC_MAX_SCSI_QUEUES, online_cpus); ENTER; shost = scsi_host_alloc(&driver_template, sizeof(*vhost));
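The change amounts to clamping the queue count, as in this small stand-alone sketch; num_online_cpus() is stubbed here and the maximum is a stand-in value, not the driver's actual constant:

#include <stdio.h>

#define MAX_SCSI_QUEUES 16	/* stand-in for IBMVFC_MAX_SCSI_QUEUES */

/* Stub: pretend the partition has only 4 online logical CPUs. */
static unsigned int num_online_cpus(void) { return 4; }

static unsigned int min_uint(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int max_scsi_queues = min_uint(MAX_SCSI_QUEUES, num_online_cpus());

	printf("hw queues allocated: %u\n", max_scsi_queues);	/* 4, not 16 */
	return 0;
}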
From patchwork Wed Sep 13 23:04:50 2023
X-Patchwork-Submitter: Tyrel Datwyler
X-Patchwork-Id: 13383917
From: Tyrel Datwyler
To: martin.petersen@oracle.com
Cc: linux-scsi@vger.kernel.org, brking@linux.ibm.com, james.bottomley@hansenpartnership.com, Tyrel Datwyler, Brian King
Subject: [PATCH 04/11] ibmvfc: fix erroneous use of rtas_busy_delay with hcall return code
Date: Wed, 13 Sep 2023 18:04:50 -0500
Message-Id: <20230913230457.2575849-5-tyreld@linux.ibm.com>
In-Reply-To: <20230913230457.2575849-1-tyreld@linux.ibm.com>

Commit 0217a272fe13 ("scsi: ibmvfc: Store return code of H_FREE_SUB_CRQ during cleanup") wrongly changed the busy loop check to use rtas_busy_delay() instead of H_BUSY and H_IS_LONG_BUSY(). The busy return codes for RTAS and hypercalls are not the same. Fix this issue by restoring the use of H_BUSY and H_IS_LONG_BUSY().

Fixes: 0217a272fe13 ("scsi: ibmvfc: Store return code of H_FREE_SUB_CRQ during cleanup")
Signed-off-by: Tyrel Datwyler
Reviewed-by: Brian King
---
 drivers/scsi/ibmvscsi/ibmvfc.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index f35fed60fdae..a2f6a9ba5955 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -22,7 +22,6 @@ #include #include #include -#include #include #include #include
@@ -5952,7 +5951,7 @@ static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost, irq_failed: do { rc = plpar_hcall_norets(H_FREE_SUB_CRQ, vdev->unit_address, scrq->cookie); - } while (rtas_busy_delay(rc)); + } while (rc == H_BUSY || H_IS_LONG_BUSY(rc)); reg_failed: LEAVE; return rc;
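The point being restored is that hypervisor-call busy codes, not RTAS busy codes, drive the retry loop. A toy model of the corrected loop is below; the constants are purely illustrative stand-ins, not the real powerpc hvcall values:

#include <stdio.h>

/* Illustrative stand-ins for the powerpc hvcall return codes. */
#define H_SUCCESS	0
#define H_BUSY		1
#define H_LONG_BUSY_LO	9900
#define H_LONG_BUSY_HI	9905
#define H_IS_LONG_BUSY(rc) ((rc) >= H_LONG_BUSY_LO && (rc) <= H_LONG_BUSY_HI)

static int attempts;

/* Pretend hypercall: reports busy twice, then succeeds. */
static long fake_h_free_sub_crq(void)
{
	return ++attempts < 3 ? H_BUSY : H_SUCCESS;
}

int main(void)
{
	long rc;

	do {
		rc = fake_h_free_sub_crq();
	} while (rc == H_BUSY || H_IS_LONG_BUSY(rc));	/* hcall codes, not RTAS ones */

	printf("freed after %d attempts, rc=%ld\n", attempts, rc);
	return 0;
}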
From patchwork Wed Sep 13 23:04:51 2023
X-Patchwork-Submitter: Tyrel Datwyler
X-Patchwork-Id: 13383916
From: Tyrel Datwyler
To: martin.petersen@oracle.com
Cc: linux-scsi@vger.kernel.org, brking@linux.ibm.com, james.bottomley@hansenpartnership.com, Tyrel Datwyler, Brian King
Subject: [PATCH 05/11] ibmvfc: use a bitfield for boolean flags
Date: Wed, 13 Sep 2023 18:04:51 -0500
Message-Id: <20230913230457.2575849-6-tyreld@linux.ibm.com>
In-Reply-To: <20230913230457.2575849-1-tyreld@linux.ibm.com>

There are currently 9 binary flag fields in the ibmvfc host structure. Converting each of these to a single-bit bitfield member reduces the footprint of the structure by 32 bytes.

Signed-off-by: Tyrel Datwyler
Reviewed-by: Brian King
---
 drivers/scsi/ibmvscsi/ibmvfc.h | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h
index 0e641a880e1c..8ae52c239009 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.h
+++ b/drivers/scsi/ibmvscsi/ibmvfc.h
@@ -877,21 +877,21 @@ struct ibmvfc_host { struct ibmvfc_discover_targets_entry *disc_buf; struct mutex passthru_mutex; int max_vios_scsi_channels; + int client_scsi_channels; int task_set; int init_retries; int discovery_threads; int abort_threads; - int client_migrated; - int reinit; - int delay_init; - int scan_complete; + int client_migrated:1; + int reinit:1; + int delay_init:1; + int logged_in:1; + int mq_enabled:1; + int using_channels:1; + int do_enquiry:1; + int aborting_passthru:1; + int scan_complete:1; int scan_timeout; - int logged_in; - int mq_enabled; - int using_channels; - int do_enquiry; - int client_scsi_channels; - int aborting_passthru; int events_to_log; #define IBMVFC_AE_LINKUP 0x0001 #define IBMVFC_AE_LINKDOWN 0x0002
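The space saving is easy to see with a stand-alone comparison of the two layouts. Field names are abbreviated and exact sizes depend on the ABI, but on a typical LP64 target the nine int flags occupy 36 bytes while the packed bitfields fit in a single 4-byte word, a 32-byte reduction:

#include <stdio.h>

struct flags_as_ints {
	int client_migrated, reinit, delay_init, scan_complete, logged_in,
	    mq_enabled, using_channels, do_enquiry, aborting_passthru;
};

struct flags_as_bits {
	int client_migrated:1, reinit:1, delay_init:1, scan_complete:1,
	    logged_in:1, mq_enabled:1, using_channels:1, do_enquiry:1,
	    aborting_passthru:1;
};

int main(void)
{
	printf("nine int flags:       %zu bytes\n", sizeof(struct flags_as_ints));
	printf("nine 1-bit bitfields: %zu bytes\n", sizeof(struct flags_as_bits));
	return 0;
}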
From patchwork Wed Sep 13 23:04:52 2023
X-Patchwork-Submitter: Tyrel Datwyler
X-Patchwork-Id: 13383923
From: Tyrel Datwyler
To: martin.petersen@oracle.com
Cc: linux-scsi@vger.kernel.org, brking@linux.ibm.com, james.bottomley@hansenpartnership.com, Tyrel Datwyler, Brian King
Subject: [PATCH 06/11] ibmvfc: rename ibmvfc_scsi_channels to ibmvfc_channels
Date: Wed, 13 Sep 2023 18:04:52 -0500
Message-Id: <20230913230457.2575849-7-tyreld@linux.ibm.com>
In-Reply-To: <20230913230457.2575849-1-tyreld@linux.ibm.com>

There is nothing SCSI-specific about the ibmvfc_scsi_channels struct. It is meant to encapsulate a set of channels regardless of protocol. Remove _scsi from the struct name to reflect this generic nature.

Signed-off-by: Tyrel Datwyler
Reviewed-by: Brian King
---
 drivers/scsi/ibmvscsi/ibmvfc.c | 4 ++--
 drivers/scsi/ibmvscsi/ibmvfc.h | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index a2f6a9ba5955..b03b68b3b1b6 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -5013,7 +5013,7 @@ static void ibmvfc_channel_setup_done(struct ibmvfc_event *evt) { struct ibmvfc_host *vhost = evt->vhost; struct ibmvfc_channel_setup *setup = vhost->channel_setup_buf; - struct ibmvfc_scsi_channels *scrqs = &vhost->scsi_scrqs; + struct ibmvfc_channels *scrqs = &vhost->scsi_scrqs; u32 mad_status = be16_to_cpu(evt->xfer_iu->channel_setup.common.status); int level = IBMVFC_DEFAULT_LOG_LEVEL; int flags, active_queues, i;
@@ -5064,7 +5064,7 @@ static void ibmvfc_channel_setup(struct ibmvfc_host *vhost) struct ibmvfc_channel_setup_mad *mad; struct ibmvfc_channel_setup *setup_buf = vhost->channel_setup_buf; struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq); - struct ibmvfc_scsi_channels *scrqs = &vhost->scsi_scrqs; + struct ibmvfc_channels *scrqs = &vhost->scsi_scrqs; unsigned int num_channels = min(vhost->client_scsi_channels, vhost->max_vios_scsi_channels); int level = IBMVFC_DEFAULT_LOG_LEVEL;
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h
index 8ae52c239009..d88a528b8cc1 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.h
+++ b/drivers/scsi/ibmvscsi/ibmvfc.h
@@ -815,7 +815,7 @@ struct ibmvfc_queue { char name[32]; }; -struct ibmvfc_scsi_channels { +struct ibmvfc_channels { struct ibmvfc_queue *scrqs; unsigned int active_queues; };
@@ -866,7 +866,7 @@ struct ibmvfc_host { mempool_t *tgt_pool; struct ibmvfc_queue crq; struct ibmvfc_queue async_crq; - struct ibmvfc_scsi_channels scsi_scrqs; + struct ibmvfc_channels scsi_scrqs; struct ibmvfc_npiv_login login_info; union ibmvfc_npiv_login_data *login_buf; dma_addr_t login_buf_dma;
GMoXR7v2cOsaXAlVBmdHCWmSf83tx8LLFbzOYHZPaiy7nTOF38wQGNIvh9hk4apQKepj /2GqKypFsB4GoPr/bMJn9f3yL7/6aqRmRJpMO+/TUTVDvs3Q5C8IgwpyItGk8JUp6wWQ gypsaqVrolaYXbWlOcpoDVscVdpPSdFQ452T8AnIPiXVcp9W8UW8S8Rgm28Vf8b2y2zW nKBtKcek+Wgx0cY64WcsYB8hdaRBvjBU7GOqQg4AUn3Mj2YhLfl8xQAgTvy+TbHWY6uk YQ== Received: from ppma11.dal12v.mail.ibm.com (db.9e.1632.ip4.static.sl-reverse.com [50.22.158.219]) by mx0a-001b2d01.pphosted.com (PPS) with ESMTPS id 3t3kfgcn6c-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 13 Sep 2023 23:30:58 +0000 Received: from pps.filterd (ppma11.dal12v.mail.ibm.com [127.0.0.1]) by ppma11.dal12v.mail.ibm.com (8.17.1.19/8.17.1.19) with ESMTP id 38DMPBfQ012005; Wed, 13 Sep 2023 23:05:05 GMT Received: from smtprelay02.dal12v.mail.ibm.com ([172.16.1.4]) by ppma11.dal12v.mail.ibm.com (PPS) with ESMTPS id 3t15r2632r-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 13 Sep 2023 23:05:05 +0000 Received: from smtpav05.wdc07v.mail.ibm.com (smtpav05.wdc07v.mail.ibm.com [10.39.53.232]) by smtprelay02.dal12v.mail.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 38DN540X27787548 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Wed, 13 Sep 2023 23:05:04 GMT Received: from smtpav05.wdc07v.mail.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 0B97158059; Wed, 13 Sep 2023 23:05:04 +0000 (GMT) Received: from smtpav05.wdc07v.mail.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 91BD158053; Wed, 13 Sep 2023 23:05:03 +0000 (GMT) Received: from vios4361.aus.stglabs.ibm.com (unknown [9.3.43.61]) by smtpav05.wdc07v.mail.ibm.com (Postfix) with ESMTP; Wed, 13 Sep 2023 23:05:03 +0000 (GMT) From: Tyrel Datwyler To: martin.petersen@oracle.com Cc: linux-scsi@vger.kernel.org, brking@linux.ibm.com, james.bottomley@hansenpartnership.com, Tyrel Datwyler , Brian King Subject: [PATCH 07/11] ibmvfc: track max and desired queue size in ibmvfc_channels Date: Wed, 13 Sep 2023 18:04:53 -0500 Message-Id: <20230913230457.2575849-8-tyreld@linux.ibm.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20230913230457.2575849-1-tyreld@linux.ibm.com> References: <20230913230457.2575849-1-tyreld@linux.ibm.com> MIME-Version: 1.0 X-TM-AS-GCONF: 00 X-Proofpoint-GUID: EFucLsOTZWBT9dAuIH2zVOe_2Xx-stMb X-Proofpoint-ORIG-GUID: EFucLsOTZWBT9dAuIH2zVOe_2Xx-stMb X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-13_17,2023-09-13_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 malwarescore=0 bulkscore=0 adultscore=0 mlxlogscore=999 phishscore=0 spamscore=0 priorityscore=1501 clxscore=1015 mlxscore=0 impostorscore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2308100000 definitions=main-2309130191 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Add fields for the desired and maximum number of queues to ibmvfc_channels. With support for the NVMeoF protocol coming, these values should be tracked in the protocol-specific channels struct instead of the overarching host adapter struct.
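As a reader aside (not part of the patch), the sketch below shows the bookkeeping this change enables: the channel group itself carries the requested count (desired_queues) and an upper bound (max_queues), and channel setup clamps the request against the VIOS limit. The clamp_channels() helper and the literal numbers are assumptions for illustration; the driver open-codes min() at sysfs-store, probe, and channel-setup time rather than using a helper like this.

/* Hypothetical stand-alone sketch; not ibmvfc driver code. */
#include <stdio.h>

struct channels_stub {                 /* mirrors the new fields in struct ibmvfc_channels */
	unsigned int active_queues;
	unsigned int desired_queues;   /* what was requested (e.g. via sysfs) */
	unsigned int max_queues;       /* upper bound fixed at probe time */
};

/* Illustrative helper: bound the request by the group's own limit and by
 * the number of channels the VIOS is willing to grant. */
static unsigned int clamp_channels(const struct channels_stub *c,
				   unsigned int max_vios_channels)
{
	unsigned int n = c->desired_queues;

	if (n > c->max_queues)
		n = c->max_queues;
	if (n > max_vios_channels)
		n = max_vios_channels;
	return n;
}

int main(void)
{
	struct channels_stub scsi = { 0, 16, 8 };

	/* analogous to min(scrqs->desired_queues, vhost->max_vios_scsi_channels) */
	printf("channels to request: %u\n", clamp_channels(&scsi, 4));
	return 0;
}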
Signed-off-by: Tyrel Datwyler Reviewed-by: Brian King --- drivers/scsi/ibmvscsi/ibmvfc.c | 13 ++++++++----- drivers/scsi/ibmvscsi/ibmvfc.h | 5 +++-- 2 files changed, 11 insertions(+), 7 deletions(-) diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c index b03b68b3b1b6..443223c16f39 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.c +++ b/drivers/scsi/ibmvscsi/ibmvfc.c @@ -1490,7 +1490,7 @@ static void ibmvfc_set_login_info(struct ibmvfc_host *vhost) max_cmds = scsi_qdepth + IBMVFC_NUM_INTERNAL_REQ; if (mq_enabled) max_cmds += (scsi_qdepth + IBMVFC_NUM_INTERNAL_SUBQ_REQ) * - vhost->client_scsi_channels; + vhost->scsi_scrqs.desired_queues; memset(login_info, 0, sizeof(*login_info)); @@ -3578,11 +3578,12 @@ static ssize_t ibmvfc_show_scsi_channels(struct device *dev, { struct Scsi_Host *shost = class_to_shost(dev); struct ibmvfc_host *vhost = shost_priv(shost); + struct ibmvfc_channels *scsi = &vhost->scsi_scrqs; unsigned long flags = 0; int len; spin_lock_irqsave(shost->host_lock, flags); - len = snprintf(buf, PAGE_SIZE, "%d\n", vhost->client_scsi_channels); + len = snprintf(buf, PAGE_SIZE, "%d\n", scsi->desired_queues); spin_unlock_irqrestore(shost->host_lock, flags); return len; } @@ -3593,12 +3594,13 @@ static ssize_t ibmvfc_store_scsi_channels(struct device *dev, { struct Scsi_Host *shost = class_to_shost(dev); struct ibmvfc_host *vhost = shost_priv(shost); + struct ibmvfc_channels *scsi = &vhost->scsi_scrqs; unsigned long flags = 0; unsigned int channels; spin_lock_irqsave(shost->host_lock, flags); channels = simple_strtoul(buf, NULL, 10); - vhost->client_scsi_channels = min(channels, nr_scsi_hw_queues); + scsi->desired_queues = min(channels, shost->nr_hw_queues); ibmvfc_hard_reset_host(vhost); spin_unlock_irqrestore(shost->host_lock, flags); return strlen(buf); @@ -5066,7 +5068,7 @@ static void ibmvfc_channel_setup(struct ibmvfc_host *vhost) struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq); struct ibmvfc_channels *scrqs = &vhost->scsi_scrqs; unsigned int num_channels = - min(vhost->client_scsi_channels, vhost->max_vios_scsi_channels); + min(scrqs->desired_queues, vhost->max_vios_scsi_channels); int level = IBMVFC_DEFAULT_LOG_LEVEL; int i; @@ -6290,7 +6292,8 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id) vhost->task_set = 1; vhost->mq_enabled = mq_enabled; - vhost->client_scsi_channels = min(shost->nr_hw_queues, nr_scsi_channels); + vhost->scsi_scrqs.desired_queues = min(shost->nr_hw_queues, nr_scsi_channels); + vhost->scsi_scrqs.max_queues = shost->nr_hw_queues; vhost->using_channels = 0; vhost->do_enquiry = 1; vhost->scan_timeout = 0; diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h index d88a528b8cc1..79e1a3bbb2f7 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.h +++ b/drivers/scsi/ibmvscsi/ibmvfc.h @@ -818,6 +818,8 @@ struct ibmvfc_queue { struct ibmvfc_channels { struct ibmvfc_queue *scrqs; unsigned int active_queues; + unsigned int desired_queues; + unsigned int max_queues; }; enum ibmvfc_host_action { @@ -876,8 +878,7 @@ struct ibmvfc_host { int log_level; struct ibmvfc_discover_targets_entry *disc_buf; struct mutex passthru_mutex; - int max_vios_scsi_channels; - int client_scsi_channels; + unsigned int max_vios_scsi_channels; int task_set; int init_retries; int discovery_threads; From patchwork Wed Sep 13 23:04:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tyrel Datwyler X-Patchwork-Id: 13383899 Return-Path: 
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5CB87EE0213 for ; Wed, 13 Sep 2023 23:30:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233168AbjIMXaz (ORCPT ); Wed, 13 Sep 2023 19:30:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50610 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233122AbjIMXax (ORCPT ); Wed, 13 Sep 2023 19:30:53 -0400 Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ACDB8E3 for ; Wed, 13 Sep 2023 16:30:49 -0700 (PDT) Received: from pps.filterd (m0353726.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38DNJPp2029718; Wed, 13 Sep 2023 23:30:46 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=pp1; bh=AF/qZLC4l62hVAW5W0ATTYfoPgerK9hCqqy0+Tp0Q9w=; b=XZ9s8g25Ekg/4bJDWdAG12w8YVAv4r761BWM7mat9UrAnc5c9RHrTYCFrvP/bp/5HSrW pFrTKuI3QYndHRaLSUcSnnsBWoHJlFAAEe+FBL2kBuRqepwd/H+9v+Smb4ydxymeWx7K c+nrMlKj3v8F/S/3EgO10OF0cxKwf9XbcVCv/uQY0o+5vfPH5bh1vNuqJ2Ire0022UcL VZoJ8Q254mxtVJff1DLKQChPLiqK/eRwMMyItEtkSsSvBeaNHXtGFZmDiPvTCumL0Rkh ZqIkocVQrgkmZeh+oeDRHE9WPFTNiV3syr0B29NlZZl2YV9F3oEz/8QoRzV0BFuLxkx2 sg== Received: from ppma21.wdc07v.mail.ibm.com (5b.69.3da9.ip4.static.sl-reverse.com [169.61.105.91]) by mx0a-001b2d01.pphosted.com (PPS) with ESMTPS id 3t3kfgcn20-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 13 Sep 2023 23:30:46 +0000 Received: from pps.filterd (ppma21.wdc07v.mail.ibm.com [127.0.0.1]) by ppma21.wdc07v.mail.ibm.com (8.17.1.19/8.17.1.19) with ESMTP id 38DKgcXK022941; Wed, 13 Sep 2023 23:05:06 GMT Received: from smtprelay02.dal12v.mail.ibm.com ([172.16.1.4]) by ppma21.wdc07v.mail.ibm.com (PPS) with ESMTPS id 3t141nxm0g-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 13 Sep 2023 23:05:06 +0000 Received: from smtpav05.wdc07v.mail.ibm.com (smtpav05.wdc07v.mail.ibm.com [10.39.53.232]) by smtprelay02.dal12v.mail.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 38DN547M27787550 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Wed, 13 Sep 2023 23:05:05 GMT Received: from smtpav05.wdc07v.mail.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 9BE9B58059; Wed, 13 Sep 2023 23:05:04 +0000 (GMT) Received: from smtpav05.wdc07v.mail.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 241D958053; Wed, 13 Sep 2023 23:05:04 +0000 (GMT) Received: from vios4361.aus.stglabs.ibm.com (unknown [9.3.43.61]) by smtpav05.wdc07v.mail.ibm.com (Postfix) with ESMTP; Wed, 13 Sep 2023 23:05:04 +0000 (GMT) From: Tyrel Datwyler To: martin.petersen@oracle.com Cc: linux-scsi@vger.kernel.org, brking@linux.ibm.com, james.bottomley@hansenpartnership.com, Tyrel Datwyler , Brian King Subject: [PATCH 08/11] ibmvfc: make channel allocation generic Date: Wed, 13 Sep 2023 18:04:54 -0500 Message-Id: <20230913230457.2575849-9-tyreld@linux.ibm.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20230913230457.2575849-1-tyreld@linux.ibm.com> References: <20230913230457.2575849-1-tyreld@linux.ibm.com> MIME-Version: 1.0 X-TM-AS-GCONF: 00 X-Proofpoint-GUID: 
1uKYZPm3jub1GtvW4IeHj7iDqX89w8q6 X-Proofpoint-ORIG-GUID: 1uKYZPm3jub1GtvW4IeHj7iDqX89w8q6 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-13_17,2023-09-13_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 malwarescore=0 bulkscore=0 adultscore=0 mlxlogscore=999 phishscore=0 spamscore=0 priorityscore=1501 clxscore=1011 mlxscore=0 impostorscore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2308100000 definitions=main-2309130191 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org With the coming of NVMeoF support, the driver will also need to allocate channels for NVMe. Implement generic channel allocation wrappers that can be used for both SCSI and NVMeoF protocol setup. Signed-off-by: Tyrel Datwyler Reviewed-by: Brian King --- drivers/scsi/ibmvscsi/ibmvfc.c | 127 +++++++++++++++++++-------------- 1 file changed, 75 insertions(+), 52 deletions(-) diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c index 443223c16f39..8b2421fa2b2c 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.c +++ b/drivers/scsi/ibmvscsi/ibmvfc.c @@ -163,8 +163,8 @@ static void ibmvfc_npiv_logout(struct ibmvfc_host *); static void ibmvfc_tgt_implicit_logout_and_del(struct ibmvfc_target *); static void ibmvfc_tgt_move_login(struct ibmvfc_target *); -static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *); -static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *); +static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *, struct ibmvfc_channels *); +static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *, struct ibmvfc_channels *); static const char *unknown_error = "unknown error"; @@ -926,7 +926,7 @@ static int ibmvfc_reenable_crq_queue(struct ibmvfc_host *vhost) struct vio_dev *vdev = to_vio_dev(vhost->dev); unsigned long flags; - ibmvfc_dereg_sub_crqs(vhost); + ibmvfc_dereg_sub_crqs(vhost, &vhost->scsi_scrqs); /* Re-enable the CRQ */ do { @@ -945,7 +945,7 @@ static int ibmvfc_reenable_crq_queue(struct ibmvfc_host *vhost) spin_unlock(vhost->crq.q_lock); spin_unlock_irqrestore(vhost->host->host_lock, flags); - ibmvfc_reg_sub_crqs(vhost); + ibmvfc_reg_sub_crqs(vhost, &vhost->scsi_scrqs); return rc; } @@ -964,7 +964,7 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost) struct vio_dev *vdev = to_vio_dev(vhost->dev); struct ibmvfc_queue *crq = &vhost->crq; - ibmvfc_dereg_sub_crqs(vhost); + ibmvfc_dereg_sub_crqs(vhost, &vhost->scsi_scrqs); /* Close the CRQ */ do { @@ -997,7 +997,7 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost) spin_unlock(vhost->crq.q_lock); spin_unlock_irqrestore(vhost->host->host_lock, flags); - ibmvfc_reg_sub_crqs(vhost); + ibmvfc_reg_sub_crqs(vhost, &vhost->scsi_scrqs); return rc; } @@ -5906,12 +5906,13 @@ static int ibmvfc_init_crq(struct ibmvfc_host *vhost) return retrc; } -static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost, - int index) +static int ibmvfc_register_channel(struct ibmvfc_host *vhost, + struct ibmvfc_channels *channels, + int index) { struct device *dev = vhost->dev; struct vio_dev *vdev = to_vio_dev(dev); - struct ibmvfc_queue *scrq = &vhost->scsi_scrqs.scrqs[index]; + struct ibmvfc_queue *scrq = &channels->scrqs[index]; int rc = -ENOMEM; ENTER; @@ -5959,11 +5960,13 @@ static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost, return rc; } -static void ibmvfc_deregister_scsi_channel(struct ibmvfc_host *vhost, int index) +static void 
ibmvfc_deregister_channel(struct ibmvfc_host *vhost, + struct ibmvfc_channels *channels, + int index) { struct device *dev = vhost->dev; struct vio_dev *vdev = to_vio_dev(dev); - struct ibmvfc_queue *scrq = &vhost->scsi_scrqs.scrqs[index]; + struct ibmvfc_queue *scrq = &channels->scrqs[index]; long rc; ENTER; @@ -5987,18 +5990,19 @@ static void ibmvfc_deregister_scsi_channel(struct ibmvfc_host *vhost, int index) LEAVE; } -static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *vhost) +static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *vhost, + struct ibmvfc_channels *channels) { int i, j; ENTER; - if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs) + if (!vhost->mq_enabled || !channels->scrqs) return; - for (i = 0; i < nr_scsi_hw_queues; i++) { - if (ibmvfc_register_scsi_channel(vhost, i)) { + for (i = 0; i < channels->max_queues; i++) { + if (ibmvfc_register_channel(vhost, channels, i)) { for (j = i; j > 0; j--) - ibmvfc_deregister_scsi_channel(vhost, j - 1); + ibmvfc_deregister_channel(vhost, channels, j - 1); vhost->do_enquiry = 0; return; } @@ -6007,77 +6011,96 @@ static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *vhost) LEAVE; } -static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *vhost) +static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *vhost, + struct ibmvfc_channels *channels) { int i; ENTER; - if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs) + if (!vhost->mq_enabled || !channels->scrqs) return; - for (i = 0; i < nr_scsi_hw_queues; i++) - ibmvfc_deregister_scsi_channel(vhost, i); + for (i = 0; i < channels->max_queues; i++) + ibmvfc_deregister_channel(vhost, channels, i); LEAVE; } -static void ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost) +static int ibmvfc_alloc_channels(struct ibmvfc_host *vhost, + struct ibmvfc_channels *channels) { struct ibmvfc_queue *scrq; int i, j; + int rc = 0; + channels->scrqs = kcalloc(channels->max_queues, + sizeof(*channels->scrqs), + GFP_KERNEL); + if (!channels->scrqs) + return -ENOMEM; + + for (i = 0; i < channels->max_queues; i++) { + scrq = &channels->scrqs[i]; + rc = ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT); + if (rc) { + for (j = i; j > 0; j--) { + scrq = &channels->scrqs[j - 1]; + ibmvfc_free_queue(vhost, scrq); + } + kfree(channels->scrqs); + channels->scrqs = NULL; + channels->active_queues = 0; + return rc; + } + } + + return rc; +} + +static void ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost) +{ ENTER; if (!vhost->mq_enabled) return; - vhost->scsi_scrqs.scrqs = kcalloc(nr_scsi_hw_queues, - sizeof(*vhost->scsi_scrqs.scrqs), - GFP_KERNEL); - if (!vhost->scsi_scrqs.scrqs) { + if (ibmvfc_alloc_channels(vhost, &vhost->scsi_scrqs)) { vhost->do_enquiry = 0; + vhost->mq_enabled = 0; return; } - for (i = 0; i < nr_scsi_hw_queues; i++) { - scrq = &vhost->scsi_scrqs.scrqs[i]; - if (ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT)) { - for (j = i; j > 0; j--) { - scrq = &vhost->scsi_scrqs.scrqs[j - 1]; - ibmvfc_free_queue(vhost, scrq); - } - kfree(vhost->scsi_scrqs.scrqs); - vhost->scsi_scrqs.scrqs = NULL; - vhost->scsi_scrqs.active_queues = 0; - vhost->do_enquiry = 0; - vhost->mq_enabled = 0; - return; - } - } - - ibmvfc_reg_sub_crqs(vhost); + ibmvfc_reg_sub_crqs(vhost, &vhost->scsi_scrqs); LEAVE; } -static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost) +static void ibmvfc_release_channels(struct ibmvfc_host *vhost, + struct ibmvfc_channels *channels) { struct ibmvfc_queue *scrq; int i; + if (channels->scrqs) { + for (i = 0; i < channels->max_queues; i++) { + scrq = &channels->scrqs[i]; + ibmvfc_free_queue(vhost, 
scrq); + } + + kfree(channels->scrqs); + channels->scrqs = NULL; + channels->active_queues = 0; + } +} + +static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost) +{ ENTER; if (!vhost->scsi_scrqs.scrqs) return; - ibmvfc_dereg_sub_crqs(vhost); - - for (i = 0; i < nr_scsi_hw_queues; i++) { - scrq = &vhost->scsi_scrqs.scrqs[i]; - ibmvfc_free_queue(vhost, scrq); - } + ibmvfc_dereg_sub_crqs(vhost, &vhost->scsi_scrqs); - kfree(vhost->scsi_scrqs.scrqs); - vhost->scsi_scrqs.scrqs = NULL; - vhost->scsi_scrqs.active_queues = 0; + ibmvfc_release_channels(vhost, &vhost->scsi_scrqs); LEAVE; } From patchwork Wed Sep 13 23:04:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tyrel Datwyler X-Patchwork-Id: 13383902 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0A23CEE0213 for ; Wed, 13 Sep 2023 23:31:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233232AbjIMXbJ (ORCPT ); Wed, 13 Sep 2023 19:31:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50656 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233189AbjIMXbA (ORCPT ); Wed, 13 Sep 2023 19:31:00 -0400 Received: from mx0b-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EBC21E6 for ; Wed, 13 Sep 2023 16:30:56 -0700 (PDT) Received: from pps.filterd (m0360072.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38DNCmIg003073; Wed, 13 Sep 2023 23:30:54 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=pp1; bh=sAIQzkw/kB4NfEtnFxVZJx1xHjRJophkcktwVasGzOs=; b=gVXiixbGTMdHSrc8uzam8WWvag90355enGEuWp2g5JdEvkdBNF1lJ1S73H32vNvPS7I3 lKnT/MzUZReR+Izq/YGiaLb5qGZBwHSNj3k7yxz9r+EqtrpgHuIFtJPvBE+KVHVSepw6 p8SiOmOeafU+totFsPaVbpGmrX5Wzd2buX+g/8ElKXjEEcrXPdhxkoseGiqknv7gpeKr p4FRcDouke3357f/MBN9VdZOz4ADroq9E1BxTIfOXBz8x6Bg2UrJNq3ktNrLEKltwoM8 ugp7t4tlKViGlB+X6QSz3I5iOuMRXghvnJY+8q0LtyeHoN2lu0RIucogrOwIwIps0g0q kg== Received: from ppma13.dal12v.mail.ibm.com (dd.9e.1632.ip4.static.sl-reverse.com [50.22.158.221]) by mx0a-001b2d01.pphosted.com (PPS) with ESMTPS id 3t3k6rdybb-4 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 13 Sep 2023 23:30:53 +0000 Received: from pps.filterd (ppma13.dal12v.mail.ibm.com [127.0.0.1]) by ppma13.dal12v.mail.ibm.com (8.17.1.19/8.17.1.19) with ESMTP id 38DM37ex002367; Wed, 13 Sep 2023 23:05:06 GMT Received: from smtprelay03.dal12v.mail.ibm.com ([172.16.1.5]) by ppma13.dal12v.mail.ibm.com (PPS) with ESMTPS id 3t158ke7dd-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 13 Sep 2023 23:05:06 +0000 Received: from smtpav05.wdc07v.mail.ibm.com (smtpav05.wdc07v.mail.ibm.com [10.39.53.232]) by smtprelay03.dal12v.mail.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 38DN55DL1770054 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Wed, 13 Sep 2023 23:05:05 GMT Received: from smtpav05.wdc07v.mail.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 30B7758053; Wed, 13 Sep 2023 23:05:05 +0000 (GMT) Received: from smtpav05.wdc07v.mail.ibm.com (unknown [127.0.0.1]) 
by IMSVA (Postfix) with ESMTP id B503D58061; Wed, 13 Sep 2023 23:05:04 +0000 (GMT) Received: from vios4361.aus.stglabs.ibm.com (unknown [9.3.43.61]) by smtpav05.wdc07v.mail.ibm.com (Postfix) with ESMTP; Wed, 13 Sep 2023 23:05:04 +0000 (GMT) From: Tyrel Datwyler To: martin.petersen@oracle.com Cc: linux-scsi@vger.kernel.org, brking@linux.ibm.com, james.bottomley@hansenpartnership.com, Tyrel Datwyler , Brian King Subject: [PATCH 09/11] ibmvfc: add protocol field to ibmvfc_channels Date: Wed, 13 Sep 2023 18:04:55 -0500 Message-Id: <20230913230457.2575849-10-tyreld@linux.ibm.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20230913230457.2575849-1-tyreld@linux.ibm.com> References: <20230913230457.2575849-1-tyreld@linux.ibm.com> MIME-Version: 1.0 X-TM-AS-GCONF: 00 X-Proofpoint-GUID: MA__lAu4Uow14nD_188P2fnOvaulkJBG X-Proofpoint-ORIG-GUID: MA__lAu4Uow14nD_188P2fnOvaulkJBG X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-13_17,2023-09-13_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxlogscore=999 phishscore=0 clxscore=1015 malwarescore=0 adultscore=0 impostorscore=0 bulkscore=0 mlxscore=0 spamscore=0 priorityscore=1501 lowpriorityscore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2308100000 definitions=main-2309130191 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org There are cases in the generic code where protocol specific configuration or actions may need to be taken. Add a protocol field to struct ibmvfc_channels and initial IBMVFC_PROTO_[SCSI/NVME] definitions. Signed-off-by: Tyrel Datwyler Reviewed-by: Brian King --- drivers/scsi/ibmvscsi/ibmvfc.c | 24 ++++++++++++++++++++---- drivers/scsi/ibmvscsi/ibmvfc.h | 7 +++++++ 2 files changed, 27 insertions(+), 4 deletions(-) diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c index 8b2421fa2b2c..0c6900bc6588 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.c +++ b/drivers/scsi/ibmvscsi/ibmvfc.c @@ -3935,7 +3935,7 @@ static void ibmvfc_drain_sub_crq(struct ibmvfc_queue *scrq) } } -static irqreturn_t ibmvfc_interrupt_scsi(int irq, void *scrq_instance) +static irqreturn_t ibmvfc_interrupt_mq(int irq, void *scrq_instance) { struct ibmvfc_queue *scrq = (struct ibmvfc_queue *)scrq_instance; @@ -5936,9 +5936,24 @@ static int ibmvfc_register_channel(struct ibmvfc_host *vhost, goto irq_failed; } - snprintf(scrq->name, sizeof(scrq->name), "ibmvfc-%x-scsi%d", - vdev->unit_address, index); - rc = request_irq(scrq->irq, ibmvfc_interrupt_scsi, 0, scrq->name, scrq); + switch (channels->protocol) { + case IBMVFC_PROTO_SCSI: + snprintf(scrq->name, sizeof(scrq->name), "ibmvfc-%x-scsi%d", + vdev->unit_address, index); + scrq->handler = ibmvfc_interrupt_mq; + break; + case IBMVFC_PROTO_NVME: + snprintf(scrq->name, sizeof(scrq->name), "ibmvfc-%x-nvmf%d", + vdev->unit_address, index); + scrq->handler = ibmvfc_interrupt_mq; + break; + default: + dev_err(dev, "Unknown channel protocol (%d)\n", + channels->protocol); + goto irq_failed; + } + + rc = request_irq(scrq->irq, scrq->handler, 0, scrq->name, scrq); if (rc) { dev_err(dev, "Couldn't register sub-crq[%d] irq\n", index); @@ -6317,6 +6332,7 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id) vhost->mq_enabled = mq_enabled; vhost->scsi_scrqs.desired_queues = min(shost->nr_hw_queues, nr_scsi_channels); vhost->scsi_scrqs.max_queues = shost->nr_hw_queues; + vhost->scsi_scrqs.protocol = 
IBMVFC_PROTO_SCSI; vhost->using_channels = 0; vhost->do_enquiry = 1; vhost->scan_timeout = 0; diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h index 79e1a3bbb2f7..085dfc38446a 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.h +++ b/drivers/scsi/ibmvscsi/ibmvfc.h @@ -813,10 +813,17 @@ struct ibmvfc_queue { unsigned long irq; unsigned long hwq_id; char name[32]; + irq_handler_t handler; +}; + +enum ibmvfc_protocol { + IBMVFC_PROTO_SCSI = 0, + IBMVFC_PROTO_NVME = 1, }; struct ibmvfc_channels { struct ibmvfc_queue *scrqs; + enum ibmvfc_protocol protocol; unsigned int active_queues; unsigned int desired_queues; unsigned int max_queues; From patchwork Wed Sep 13 23:04:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tyrel Datwyler X-Patchwork-Id: 13383901 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4A76BEE0212 for ; Wed, 13 Sep 2023 23:31:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233170AbjIMXbI (ORCPT ); Wed, 13 Sep 2023 19:31:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50660 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233190AbjIMXbA (ORCPT ); Wed, 13 Sep 2023 19:31:00 -0400 Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0E3E5E7 for ; Wed, 13 Sep 2023 16:30:57 -0700 (PDT) Received: from pps.filterd (m0356517.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38DN7qhE021722; Wed, 13 Sep 2023 23:30:55 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=pp1; bh=wdkw+UTQ4xVLMyUFAWl3sV/pqIRFm7FVo1BDrncRKgY=; b=AxsRZ7B1qiUyLRJBR5fGVnuMtMspMWuzqOQmlt7yqHyEHgI3P4hj0L8WVvCSYkAQlwl7 scm46aNxnMW4fYsGVK7vmYRWoCG/rq65gVlQ1FEX8vBXMcfHtyPPANs1/GmrlHnVNhkw PdH8qmV0ZPe3UzNAT+BvviZ3ymq2pC4wKbaTCuqnUC77pdnzRL20Ik4u7APzLchlb+Cv DE0QiqNIEx5tmk8V03gfuDIpqLiCOFHZVWT0NISqyPxRgsG+n88RjjQR1hbWhPiGe6pE 6yNxRmEm2BvGyjxsRxxt8LOH6LZWymXKp3nQx6TrfrPvoQVWBQxuUkvioR30TpSrDCeO Nw== Received: from ppma23.wdc07v.mail.ibm.com (5d.69.3da9.ip4.static.sl-reverse.com [169.61.105.93]) by mx0a-001b2d01.pphosted.com (PPS) with ESMTPS id 3t3nu51frs-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 13 Sep 2023 23:30:54 +0000 Received: from pps.filterd (ppma23.wdc07v.mail.ibm.com [127.0.0.1]) by ppma23.wdc07v.mail.ibm.com (8.17.1.19/8.17.1.19) with ESMTP id 38DKPQBf002967; Wed, 13 Sep 2023 23:05:07 GMT Received: from smtprelay03.dal12v.mail.ibm.com ([172.16.1.5]) by ppma23.wdc07v.mail.ibm.com (PPS) with ESMTPS id 3t14hm6etq-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 13 Sep 2023 23:05:07 +0000 Received: from smtpav05.wdc07v.mail.ibm.com (smtpav05.wdc07v.mail.ibm.com [10.39.53.232]) by smtprelay03.dal12v.mail.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 38DN55H91770056 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Wed, 13 Sep 2023 23:05:06 GMT Received: from smtpav05.wdc07v.mail.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id B773D58043; Wed, 13 Sep 2023 23:05:05 +0000 
(GMT) Received: from smtpav05.wdc07v.mail.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 4A30658059; Wed, 13 Sep 2023 23:05:05 +0000 (GMT) Received: from vios4361.aus.stglabs.ibm.com (unknown [9.3.43.61]) by smtpav05.wdc07v.mail.ibm.com (Postfix) with ESMTP; Wed, 13 Sep 2023 23:05:05 +0000 (GMT) From: Tyrel Datwyler To: martin.petersen@oracle.com Cc: linux-scsi@vger.kernel.org, brking@linux.ibm.com, james.bottomley@hansenpartnership.com, Tyrel Datwyler , Brian King Subject: [PATCH 10/11] ibmvfc: make discovery buffer per protocol channel group Date: Wed, 13 Sep 2023 18:04:56 -0500 Message-Id: <20230913230457.2575849-11-tyreld@linux.ibm.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20230913230457.2575849-1-tyreld@linux.ibm.com> References: <20230913230457.2575849-1-tyreld@linux.ibm.com> MIME-Version: 1.0 X-TM-AS-GCONF: 00 X-Proofpoint-ORIG-GUID: OX98-w4mdR_qehIccV8AjKTN7Py5x7qG X-Proofpoint-GUID: OX98-w4mdR_qehIccV8AjKTN7Py5x7qG X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-13_17,2023-09-13_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1015 adultscore=0 lowpriorityscore=0 suspectscore=0 bulkscore=0 spamscore=0 malwarescore=0 priorityscore=1501 mlxscore=0 impostorscore=0 phishscore=0 mlxlogscore=999 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2308100000 definitions=main-2309130191 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org The target discovery buffer that the VIOS populates with targets is currently a host adapter field. To facilitate the discovery of NVMe targets as well as SCSI targets, another discovery buffer is required. Move the discovery buffer out of the host struct and into the ibmvfc_channels struct so that each channels instance for a given protocol has its own discovery buffer.
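As an editorial aside (not part of the patch), the stand-alone sketch below illustrates the ownership change: allocation and release of the discovery buffer take a channel group rather than the host, so a second protocol's group simply gets its own buffer. malloc()/free() stand in for dma_alloc_coherent()/dma_free_coherent(), and the *_stub types and sizes are assumptions for illustration.

/* Hypothetical stand-alone sketch; not ibmvfc driver code. */
#include <stdio.h>
#include <stdlib.h>

struct disc_entry_stub {               /* stand-in for struct ibmvfc_discover_targets_entry */
	unsigned long long scsi_id;
	unsigned long long wwpn;
};

struct channels_stub {                 /* carries its own discovery buffer, as in the patch */
	int disc_buf_sz;
	struct disc_entry_stub *disc_buf;
};

static int alloc_disc_buf(struct channels_stub *c, int max_targets)
{
	c->disc_buf_sz = sizeof(*c->disc_buf) * max_targets;
	c->disc_buf = malloc(c->disc_buf_sz);   /* dma_alloc_coherent() in the driver */
	return c->disc_buf ? 0 : -1;
}

static void free_disc_buf(struct channels_stub *c)
{
	free(c->disc_buf);                      /* dma_free_coherent() in the driver */
	c->disc_buf = NULL;
	c->disc_buf_sz = 0;
}

int main(void)
{
	struct channels_stub scsi = { 0, NULL }, nvme = { 0, NULL };

	/* each protocol's channel group owns a separate buffer */
	if (!alloc_disc_buf(&scsi, 512) && !alloc_disc_buf(&nvme, 512))
		printf("scsi buf: %d bytes, nvme buf: %d bytes\n",
		       scsi.disc_buf_sz, nvme.disc_buf_sz);
	free_disc_buf(&scsi);
	free_disc_buf(&nvme);
	return 0;
}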
Signed-off-by: Tyrel Datwyler Reviewed-by: Brian King --- drivers/scsi/ibmvscsi/ibmvfc.c | 47 ++++++++++++++++++++++------------ drivers/scsi/ibmvscsi/ibmvfc.h | 6 ++--- 2 files changed, 33 insertions(+), 20 deletions(-) diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c index 0c6900bc6588..6f69821f903f 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.c +++ b/drivers/scsi/ibmvscsi/ibmvfc.c @@ -4935,7 +4935,7 @@ static int ibmvfc_alloc_targets(struct ibmvfc_host *vhost) int i, rc; for (i = 0, rc = 0; !rc && i < vhost->num_targets; i++) - rc = ibmvfc_alloc_target(vhost, &vhost->disc_buf[i]); + rc = ibmvfc_alloc_target(vhost, &vhost->scsi_scrqs.disc_buf[i]); return rc; } @@ -4999,9 +4999,9 @@ static void ibmvfc_discover_targets(struct ibmvfc_host *vhost) mad->common.version = cpu_to_be32(1); mad->common.opcode = cpu_to_be32(IBMVFC_DISC_TARGETS); mad->common.length = cpu_to_be16(sizeof(*mad)); - mad->bufflen = cpu_to_be32(vhost->disc_buf_sz); - mad->buffer.va = cpu_to_be64(vhost->disc_buf_dma); - mad->buffer.len = cpu_to_be32(vhost->disc_buf_sz); + mad->bufflen = cpu_to_be32(vhost->scsi_scrqs.disc_buf_sz); + mad->buffer.va = cpu_to_be64(vhost->scsi_scrqs.disc_buf_dma); + mad->buffer.len = cpu_to_be32(vhost->scsi_scrqs.disc_buf_sz); mad->flags = cpu_to_be32(IBMVFC_DISC_TGT_PORT_ID_WWPN_LIST); ibmvfc_set_host_action(vhost, IBMVFC_HOST_ACTION_INIT_WAIT); @@ -6119,6 +6119,12 @@ static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost) LEAVE; } +static void ibmvfc_free_disc_buf(struct device *dev, struct ibmvfc_channels *channels) +{ + dma_free_coherent(dev, channels->disc_buf_sz, channels->disc_buf, + channels->disc_buf_dma); +} + /** * ibmvfc_free_mem - Free memory for vhost * @vhost: ibmvfc host struct @@ -6133,8 +6139,7 @@ static void ibmvfc_free_mem(struct ibmvfc_host *vhost) ENTER; mempool_destroy(vhost->tgt_pool); kfree(vhost->trace); - dma_free_coherent(vhost->dev, vhost->disc_buf_sz, vhost->disc_buf, - vhost->disc_buf_dma); + ibmvfc_free_disc_buf(vhost->dev, &vhost->scsi_scrqs); dma_free_coherent(vhost->dev, sizeof(*vhost->login_buf), vhost->login_buf, vhost->login_buf_dma); dma_free_coherent(vhost->dev, sizeof(*vhost->channel_setup_buf), @@ -6144,6 +6149,21 @@ static void ibmvfc_free_mem(struct ibmvfc_host *vhost) LEAVE; } +static int ibmvfc_alloc_disc_buf(struct device *dev, struct ibmvfc_channels *channels) +{ + channels->disc_buf_sz = sizeof(*channels->disc_buf) * max_targets; + channels->disc_buf = dma_alloc_coherent(dev, channels->disc_buf_sz, + &channels->disc_buf_dma, GFP_KERNEL); + + if (!channels->disc_buf) { + dev_err(dev, "Couldn't allocate %s Discover Targets buffer\n", + (channels->protocol == IBMVFC_PROTO_SCSI) ? 
"SCSI" : "NVMe"); + return -ENOMEM; + } + + return 0; +} + /** * ibmvfc_alloc_mem - Allocate memory for vhost * @vhost: ibmvfc host struct @@ -6179,21 +6199,15 @@ static int ibmvfc_alloc_mem(struct ibmvfc_host *vhost) goto free_sg_pool; } - vhost->disc_buf_sz = sizeof(*vhost->disc_buf) * max_targets; - vhost->disc_buf = dma_alloc_coherent(dev, vhost->disc_buf_sz, - &vhost->disc_buf_dma, GFP_KERNEL); - - if (!vhost->disc_buf) { - dev_err(dev, "Couldn't allocate Discover Targets buffer\n"); + if (ibmvfc_alloc_disc_buf(dev, &vhost->scsi_scrqs)) goto free_login_buffer; - } vhost->trace = kcalloc(IBMVFC_NUM_TRACE_ENTRIES, sizeof(struct ibmvfc_trace_entry), GFP_KERNEL); atomic_set(&vhost->trace_index, -1); if (!vhost->trace) - goto free_disc_buffer; + goto free_scsi_disc_buffer; vhost->tgt_pool = mempool_create_kmalloc_pool(IBMVFC_TGT_MEMPOOL_SZ, sizeof(struct ibmvfc_target)); @@ -6219,9 +6233,8 @@ static int ibmvfc_alloc_mem(struct ibmvfc_host *vhost) mempool_destroy(vhost->tgt_pool); free_trace: kfree(vhost->trace); -free_disc_buffer: - dma_free_coherent(dev, vhost->disc_buf_sz, vhost->disc_buf, - vhost->disc_buf_dma); +free_scsi_disc_buffer: + ibmvfc_free_disc_buf(dev, &vhost->scsi_scrqs); free_login_buffer: dma_free_coherent(dev, sizeof(*vhost->login_buf), vhost->login_buf, vhost->login_buf_dma); diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h index 085dfc38446a..ab3a7070171b 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.h +++ b/drivers/scsi/ibmvscsi/ibmvfc.h @@ -827,6 +827,9 @@ struct ibmvfc_channels { unsigned int active_queues; unsigned int desired_queues; unsigned int max_queues; + int disc_buf_sz; + struct ibmvfc_discover_targets_entry *disc_buf; + dma_addr_t disc_buf_dma; }; enum ibmvfc_host_action { @@ -881,9 +884,7 @@ struct ibmvfc_host { dma_addr_t login_buf_dma; struct ibmvfc_channel_setup *channel_setup_buf; dma_addr_t channel_setup_dma; - int disc_buf_sz; int log_level; - struct ibmvfc_discover_targets_entry *disc_buf; struct mutex passthru_mutex; unsigned int max_vios_scsi_channels; int task_set; @@ -904,7 +905,6 @@ struct ibmvfc_host { #define IBMVFC_AE_LINKUP 0x0001 #define IBMVFC_AE_LINKDOWN 0x0002 #define IBMVFC_AE_RSCN 0x0004 - dma_addr_t disc_buf_dma; unsigned int partition_number; char partition_name[97]; void (*job_step) (struct ibmvfc_host *); From patchwork Wed Sep 13 23:04:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tyrel Datwyler X-Patchwork-Id: 13383904 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 38548EE0211 for ; Wed, 13 Sep 2023 23:33:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233157AbjIMXdJ (ORCPT ); Wed, 13 Sep 2023 19:33:09 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46136 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233129AbjIMXdI (ORCPT ); Wed, 13 Sep 2023 19:33:08 -0400 Received: from mx0b-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A1295183 for ; Wed, 13 Sep 2023 16:33:04 -0700 (PDT) Received: from pps.filterd (m0353723.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38DN85V9030406; Wed, 13 Sep 2023 23:33:01 GMT DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=ibm.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=pp1; bh=LrlW+Ts3B5EKcpkWgZSqInwiSmlmaMbJ5kPpiOkfGjk=; b=T7tFAFlLvjnL9A+2hMsioGSG9fs4REfXqVDU+V2M3PTMjGefdW5O3t6gboxfCINQ7tIv nRkDhE00MA2mUepFiwYkD9BaPtoqSFHWkmeyB3TitSKaew9apuU3h5aRHblzB8rm6TVA 5GnTphRiofYOrrDbHETFKsA+ftPkRKn9IjQdSeJA8GyeORE5N2YDc/HnGbnpieAPthqY Lnvyjyc/UytZjHSc2je25ZXEjuREBdGC33SX1rek2tP7SFBadfIWCstHSbMRgRMzKBlS rhIZerYuYjEaoSzq2cwOQPj1fXsxABxw9eFHYypGBOHur5OZSiwyGO33WJhc2XXqPUGL Ig== Received: from ppma12.dal12v.mail.ibm.com (dc.9e.1632.ip4.static.sl-reverse.com [50.22.158.220]) by mx0a-001b2d01.pphosted.com (PPS) with ESMTPS id 3t3kpfn4q2-5 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 13 Sep 2023 23:33:00 +0000 Received: from pps.filterd (ppma12.dal12v.mail.ibm.com [127.0.0.1]) by ppma12.dal12v.mail.ibm.com (8.17.1.19/8.17.1.19) with ESMTP id 38DLvoVw024103; Wed, 13 Sep 2023 23:05:07 GMT Received: from smtprelay04.dal12v.mail.ibm.com ([172.16.1.6]) by ppma12.dal12v.mail.ibm.com (PPS) with ESMTPS id 3t131tey53-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 13 Sep 2023 23:05:07 +0000 Received: from smtpav05.wdc07v.mail.ibm.com (smtpav05.wdc07v.mail.ibm.com [10.39.53.232]) by smtprelay04.dal12v.mail.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 38DN566p10945174 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Wed, 13 Sep 2023 23:05:06 GMT Received: from smtpav05.wdc07v.mail.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 68A295805F; Wed, 13 Sep 2023 23:05:06 +0000 (GMT) Received: from smtpav05.wdc07v.mail.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id DFED958059; Wed, 13 Sep 2023 23:05:05 +0000 (GMT) Received: from vios4361.aus.stglabs.ibm.com (unknown [9.3.43.61]) by smtpav05.wdc07v.mail.ibm.com (Postfix) with ESMTP; Wed, 13 Sep 2023 23:05:05 +0000 (GMT) From: Tyrel Datwyler To: martin.petersen@oracle.com Cc: linux-scsi@vger.kernel.org, brking@linux.ibm.com, james.bottomley@hansenpartnership.com, Tyrel Datwyler , Brian King Subject: [PATCH 11/11] ibmvfc: add protocol field to target structure Date: Wed, 13 Sep 2023 18:04:57 -0500 Message-Id: <20230913230457.2575849-12-tyreld@linux.ibm.com> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20230913230457.2575849-1-tyreld@linux.ibm.com> References: <20230913230457.2575849-1-tyreld@linux.ibm.com> MIME-Version: 1.0 X-TM-AS-GCONF: 00 X-Proofpoint-GUID: R8Dfzxiu2t0ypDIvLLHMjJT_NgsIsmJo X-Proofpoint-ORIG-GUID: R8Dfzxiu2t0ypDIvLLHMjJT_NgsIsmJo X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-13_17,2023-09-13_01,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 suspectscore=0 phishscore=0 spamscore=0 bulkscore=0 lowpriorityscore=0 malwarescore=0 priorityscore=1501 clxscore=1015 adultscore=0 impostorscore=0 mlxlogscore=873 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2308100000 definitions=main-2309130191 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org Add a per-target protocol field so that target code can determine the correct protocol-specific actions and identify the correct channel group target list.
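As a reader aside (not part of the patch), the sketch below shows the kind of dispatch a per-target protocol tag makes possible: shared target code can branch on tgt->protocol to pick protocol-specific behaviour or the matching channel-group target list. Only the enum values mirror the patch; the stub types, the helper, and the sample WWPN are assumptions for illustration.

/* Hypothetical stand-alone sketch; not ibmvfc driver code. */
#include <stdio.h>

enum ibmvfc_protocol {                 /* values mirror the patch */
	IBMVFC_PROTO_SCSI = 0,
	IBMVFC_PROTO_NVME = 1,
};

struct target_stub {                   /* reduced stand-in for struct ibmvfc_target */
	enum ibmvfc_protocol protocol;
	unsigned long long wwpn;
};

/* Illustrative helper: name the channel group a target belongs to. */
static const char *target_group_name(const struct target_stub *tgt)
{
	switch (tgt->protocol) {
	case IBMVFC_PROTO_SCSI:
		return "scsi channel group";
	case IBMVFC_PROTO_NVME:
		return "nvme channel group";
	}
	return "unknown protocol";
}

int main(void)
{
	struct target_stub t = { IBMVFC_PROTO_NVME, 0x5005076801234567ULL };

	printf("target wwpn 0x%llx -> %s\n", t.wwpn, target_group_name(&t));
	return 0;
}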
Signed-off-by: Tyrel Datwyler Reviewed-by: Brian King --- drivers/scsi/ibmvscsi/ibmvfc.h | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h index ab3a7070171b..331ecf8254be 100644 --- a/drivers/scsi/ibmvscsi/ibmvfc.h +++ b/drivers/scsi/ibmvscsi/ibmvfc.h @@ -716,9 +716,15 @@ enum ibmvfc_target_action { IBMVFC_TGT_ACTION_LOGOUT_DELETED_RPORT, }; +enum ibmvfc_protocol { + IBMVFC_PROTO_SCSI = 0, + IBMVFC_PROTO_NVME = 1, +}; + struct ibmvfc_target { struct list_head queue; struct ibmvfc_host *vhost; + enum ibmvfc_protocol protocol; u64 scsi_id; u64 wwpn; u64 new_scsi_id; @@ -816,11 +822,6 @@ struct ibmvfc_queue { irq_handler_t handler; }; -enum ibmvfc_protocol { - IBMVFC_PROTO_SCSI = 0, - IBMVFC_PROTO_NVME = 1, -}; - struct ibmvfc_channels { struct ibmvfc_queue *scrqs; enum ibmvfc_protocol protocol;