From patchwork Thu Jan 14 20:31:36 2021
X-Patchwork-Submitter: Tyrel Datwyler
X-Patchwork-Id: 12020845
From: Tyrel Datwyler
To: james.bottomley@hansenpartnership.com
Cc: martin.petersen@oracle.com, linux-scsi@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
    brking@linux.ibm.com, Tyrel Datwyler, Brian King
Subject: [PATCH v5 09/21] ibmvfc: add handlers to drain and complete Sub-CRQ responses
Date: Thu, 14 Jan 2021 14:31:36 -0600
Message-Id: <20210114203148.246656-10-tyreld@linux.ibm.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210114203148.246656-1-tyreld@linux.ibm.com>
References: <20210114203148.246656-1-tyreld@linux.ibm.com>
X-Mailing-List: linux-scsi@vger.kernel.org

The logic for iterating over the Sub-CRQ responses is similar to that
of the primary CRQ. Add the necessary handlers for processing those
responses.

Signed-off-by: Tyrel Datwyler
Reviewed-by: Brian King
---
 drivers/scsi/ibmvscsi/ibmvfc.c | 86 ++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)

diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index 5d7ada0ed0d6..f3cd092478ee 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -3485,6 +3485,92 @@ static int ibmvfc_toggle_scrq_irq(struct ibmvfc_queue *scrq, int enable)
 	return rc;
 }
 
+static void ibmvfc_handle_scrq(struct ibmvfc_crq *crq, struct ibmvfc_host *vhost,
+			       struct list_head *evt_doneq)
+{
+	struct ibmvfc_event *evt = (struct ibmvfc_event *)be64_to_cpu(crq->ioba);
+
+	switch (crq->valid) {
+	case IBMVFC_CRQ_CMD_RSP:
+		break;
+	case IBMVFC_CRQ_XPORT_EVENT:
+		return;
+	default:
+		dev_err(vhost->dev, "Got an invalid message type 0x%02x\n", crq->valid);
+		return;
+	}
+
+	/* The only kind of payload CRQs we should get are responses to
+	 * things we send.
+	 * Make sure this response is to something we
+	 * actually sent
+	 */
+	if (unlikely(!ibmvfc_valid_event(&evt->queue->evt_pool, evt))) {
+		dev_err(vhost->dev, "Returned correlation_token 0x%08llx is invalid!\n",
+			crq->ioba);
+		return;
+	}
+
+	if (unlikely(atomic_read(&evt->free))) {
+		dev_err(vhost->dev, "Received duplicate correlation_token 0x%08llx!\n",
+			crq->ioba);
+		return;
+	}
+
+	spin_lock(&evt->queue->l_lock);
+	list_move_tail(&evt->queue_list, evt_doneq);
+	spin_unlock(&evt->queue->l_lock);
+}
+
+static struct ibmvfc_crq *ibmvfc_next_scrq(struct ibmvfc_queue *scrq)
+{
+	struct ibmvfc_crq *crq;
+
+	crq = &scrq->msgs.scrq[scrq->cur].crq;
+	if (crq->valid & 0x80) {
+		if (++scrq->cur == scrq->size)
+			scrq->cur = 0;
+		rmb();
+	} else
+		crq = NULL;
+
+	return crq;
+}
+
+static void ibmvfc_drain_sub_crq(struct ibmvfc_queue *scrq)
+{
+	struct ibmvfc_crq *crq;
+	struct ibmvfc_event *evt, *temp;
+	unsigned long flags;
+	int done = 0;
+	LIST_HEAD(evt_doneq);
+
+	spin_lock_irqsave(scrq->q_lock, flags);
+	while (!done) {
+		while ((crq = ibmvfc_next_scrq(scrq)) != NULL) {
+			ibmvfc_handle_scrq(crq, scrq->vhost, &evt_doneq);
+			crq->valid = 0;
+			wmb();
+		}
+
+		ibmvfc_toggle_scrq_irq(scrq, 1);
+		if ((crq = ibmvfc_next_scrq(scrq)) != NULL) {
+			ibmvfc_toggle_scrq_irq(scrq, 0);
+			ibmvfc_handle_scrq(crq, scrq->vhost, &evt_doneq);
+			crq->valid = 0;
+			wmb();
+		} else
+			done = 1;
+	}
+	spin_unlock_irqrestore(scrq->q_lock, flags);
+
+	list_for_each_entry_safe(evt, temp, &evt_doneq, queue_list) {
+		del_timer(&evt->timer);
+		list_del(&evt->queue_list);
+		ibmvfc_trc_end(evt);
+		evt->done(evt);
+	}
+}
+
 /**
  * ibmvfc_init_tgt - Set the next init job step for the target
  * @tgt:	ibmvfc target struct
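
[Editorial note, not part of the patch] The drain loop in ibmvfc_drain_sub_crq() above follows a
"drain until empty, re-enable the interrupt, then poll once more" pattern: the extra poll after
re-arming closes the window in which a response could land after the last check but before the
interrupt is enabled again. Below is a minimal, self-contained user-space sketch of that control
flow only; the sim_* names are hypothetical and are not part of ibmvfc.

/*
 * Illustrative sketch (hypothetical names): the drain/re-enable/recheck
 * pattern used by ibmvfc_drain_sub_crq(), modeled with a plain ring buffer.
 */
#include <stdio.h>
#include <stdbool.h>

#define RING_SIZE 4

struct sim_entry {
	int valid;	/* top bit set means the entry holds a response */
	int payload;
};

struct sim_queue {
	struct sim_entry ring[RING_SIZE];
	int cur;
	bool irq_enabled;
};

/* Return the next valid entry and advance the cursor, or NULL if the ring is empty. */
static struct sim_entry *sim_next(struct sim_queue *q)
{
	struct sim_entry *e = &q->ring[q->cur];

	if (!(e->valid & 0x80))
		return NULL;
	if (++q->cur == RING_SIZE)
		q->cur = 0;
	return e;
}

static void sim_irq_enable(struct sim_queue *q, bool enable)
{
	q->irq_enabled = enable;
}

/* Drain until empty, re-enable notifications, then recheck once before exiting. */
static void sim_drain(struct sim_queue *q)
{
	struct sim_entry *e;
	bool done = false;

	while (!done) {
		while ((e = sim_next(q)) != NULL) {
			printf("handled payload %d\n", e->payload);
			e->valid = 0;
		}

		sim_irq_enable(q, true);
		if ((e = sim_next(q)) != NULL) {
			/* An entry slipped in before notifications were re-armed. */
			sim_irq_enable(q, false);
			printf("handled payload %d\n", e->payload);
			e->valid = 0;
		} else {
			done = true;
		}
	}
}

int main(void)
{
	struct sim_queue q = {
		.ring = { { 0x80, 1 }, { 0x80, 2 }, { 0, 0 }, { 0, 0 } },
	};

	sim_drain(&q);
	return 0;
}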