From patchwork Thu Jun 22 02:13:32 2017
X-Patchwork-Submitter: Uma Krishnan
X-Patchwork-Id: 9803313
From: Uma Krishnan
To: linux-scsi@vger.kernel.org, James Bottomley,
    "Martin K. Petersen", "Matthew R. Ochs", "Manoj N. Kumar"
Kumar" Cc: linuxppc-dev@lists.ozlabs.org, Ian Munsie , Andrew Donnellan , Frederic Barrat , Christophe Lombard Subject: [PATCH 01/17] cxlflash: Combine the send queue locks Date: Wed, 21 Jun 2017 21:13:32 -0500 X-Mailer: git-send-email 2.1.0 In-Reply-To: <1498097563-8680-1-git-send-email-ukrishn@linux.vnet.ibm.com> References: <1498097563-8680-1-git-send-email-ukrishn@linux.vnet.ibm.com> X-TM-AS-GCONF: 00 x-cbid: 17062202-2213-0000-0000-000001E48483 X-IBM-SpamModules-Scores: X-IBM-SpamModules-Versions: BY=3.00007269; HX=3.00000241; KW=3.00000007; PH=3.00000004; SC=3.00000214; SDB=6.00878072; UDB=6.00437481; IPR=6.00658212; BA=6.00005434; NDR=6.00000001; ZLA=6.00000005; ZF=6.00000009; ZB=6.00000000; ZP=6.00000000; ZH=6.00000000; ZU=6.00000002; MB=3.00015913; XFM=3.00000015; UTC=2017-06-22 02:13:42 X-IBM-AV-DETECTION: SAVI=unused REMOTE=unused XFE=unused x-cbparentid: 17062202-2214-0000-0000-00005698D90A Message-Id: <1498097612-8717-1-git-send-email-ukrishn@linux.vnet.ibm.com> X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:, , definitions=2017-06-22_01:, , signatures=0 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 suspectscore=0 malwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.0.1-1703280000 definitions=main-1706220035 Sender: linux-scsi-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Currently there are separate spin locks for the two supported I/O queueing models. This makes it difficult to serialize with paths outside the enqueue path. As a design simplification and to support serialization with enqueue operations, move to only a single lock that is used for enqueueing regardless of the queueing model. Signed-off-by: Uma Krishnan Acked-by: Matthew R. Ochs --- drivers/scsi/cxlflash/common.h | 3 +-- drivers/scsi/cxlflash/main.c | 9 +++++---- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/drivers/scsi/cxlflash/common.h b/drivers/scsi/cxlflash/common.h index 256af81..6fc32cfc 100644 --- a/drivers/scsi/cxlflash/common.h +++ b/drivers/scsi/cxlflash/common.h @@ -193,7 +193,7 @@ struct hwq { u32 index; /* Index of this hwq */ atomic_t hsq_credits; - spinlock_t hsq_slock; + spinlock_t hsq_slock; /* Hardware send queue lock */ struct sisl_ioarcb *hsq_start; struct sisl_ioarcb *hsq_end; struct sisl_ioarcb *hsq_curr; @@ -204,7 +204,6 @@ struct hwq { bool toggle; s64 room; - spinlock_t rrin_slock; /* Lock to rrin queuing and cmd_room updates */ struct irq_poll irqpoll; } __aligned(cache_line_size()); diff --git a/drivers/scsi/cxlflash/main.c b/drivers/scsi/cxlflash/main.c index a7d57c3..64ea597ca 100644 --- a/drivers/scsi/cxlflash/main.c +++ b/drivers/scsi/cxlflash/main.c @@ -261,7 +261,7 @@ static int send_cmd_ioarrin(struct afu *afu, struct afu_cmd *cmd) * To avoid the performance penalty of MMIO, spread the update of * 'room' over multiple commands. 
 	 */
-	spin_lock_irqsave(&hwq->rrin_slock, lock_flags);
+	spin_lock_irqsave(&hwq->hsq_slock, lock_flags);
 	if (--hwq->room < 0) {
 		room = readq_be(&hwq->host_map->cmd_room);
 		if (room <= 0) {
@@ -277,7 +277,7 @@ static int send_cmd_ioarrin(struct afu *afu, struct afu_cmd *cmd)
 	writeq_be((u64)&cmd->rcb, &hwq->host_map->ioarrin);
 out:
-	spin_unlock_irqrestore(&hwq->rrin_slock, lock_flags);
+	spin_unlock_irqrestore(&hwq->hsq_slock, lock_flags);
 	dev_dbg(dev, "%s: cmd=%p len=%u ea=%016llx rc=%d\n", __func__,
 		cmd, cmd->rcb.data_len, cmd->rcb.data_ea, rc);
 	return rc;
 }
@@ -1722,7 +1722,10 @@ static int start_afu(struct cxlflash_cfg *cfg)
 		hwq->hrrq_end = &hwq->rrq_entry[NUM_RRQ_ENTRY - 1];
 		hwq->hrrq_curr = hwq->hrrq_start;
 		hwq->toggle = 1;
+
+		/* Initialize spin locks */
 		spin_lock_init(&hwq->hrrq_slock);
+		spin_lock_init(&hwq->hsq_slock);
 
 		/* Initialize SQ */
 		if (afu_is_sq_cmd_mode(afu)) {
@@ -1731,7 +1734,6 @@ static int start_afu(struct cxlflash_cfg *cfg)
 			hwq->hsq_end = &hwq->sq[NUM_SQ_ENTRY - 1];
 			hwq->hsq_curr = hwq->hsq_start;
 
-			spin_lock_init(&hwq->hsq_slock);
 			atomic_set(&hwq->hsq_credits, NUM_SQ_ENTRY - 1);
 		}
 
@@ -1984,7 +1986,6 @@ static int init_afu(struct cxlflash_cfg *cfg)
 	for (i = 0; i < afu->num_hwqs; i++) {
 		hwq = get_hwq(afu, i);
 
-		spin_lock_init(&hwq->rrin_slock);
 		hwq->room = readq_be(&hwq->host_map->cmd_room);
 	}
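
For readers following the series, here is a minimal sketch (not part of the patch) of
the locking pattern this change converges on: both queueing models take the same
per-hwq lock, hsq_slock, around their enqueue work, so code outside the enqueue path
has only one lock to serialize against. The trimmed-down struct and the
enqueue_ioarrin()/enqueue_sq() helpers below are hypothetical stand-ins for the
driver's IOARRIN and SQ command modes; only hsq_slock, room, and hsq_credits
correspond to fields touched by the patch.

#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Reduced illustration of a hardware queue -- not the driver's struct hwq */
struct demo_hwq {
	spinlock_t hsq_slock;	/* single lock shared by both enqueue paths */
	s64 room;		/* credit tracking for the IOARRIN model */
	atomic_t hsq_credits;	/* credit tracking for the SQ model */
};

/* IOARRIN-style enqueue: 'room' bookkeeping is serialized by hsq_slock */
static void enqueue_ioarrin(struct demo_hwq *hwq)
{
	unsigned long flags;

	spin_lock_irqsave(&hwq->hsq_slock, flags);
	hwq->room--;			/* consume an IOARRIN credit */
	/* ... MMIO write of the command would go here ... */
	spin_unlock_irqrestore(&hwq->hsq_slock, flags);
}

/* SQ-style enqueue: the same lock now covers the send queue state */
static void enqueue_sq(struct demo_hwq *hwq)
{
	unsigned long flags;

	spin_lock_irqsave(&hwq->hsq_slock, flags);
	atomic_dec(&hwq->hsq_credits);	/* consume an SQ credit */
	/* ... copy of the IOARCB into the send queue would go here ... */
	spin_unlock_irqrestore(&hwq->hsq_slock, flags);
}

As in the patch, hsq_slock would be initialized once with spin_lock_init() during
setup (start_afu() in the driver). With both paths funneled through one lock, later
work that needs to quiesce or drain commands from outside the enqueue path can
serialize against either queueing model by taking a single lock per hwq, which is
the motivation stated in the commit message.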