From patchwork Wed Sep 26 16:25:31 2018
X-Patchwork-Submitter: "Madhani, Himanshu"
X-Patchwork-Id: 10616215
From: Himanshu Madhani
Subject: [PATCH v2 1/5] qla2xxx_nvmet: Add files for FC-NVMe Target support
Date: Wed, 26 Sep 2018 09:25:31 -0700
Message-ID: <20180926162535.24314-2-himanshu.madhani@cavium.com>
X-Mailer: git-send-email 2.12.0
In-Reply-To: <20180926162535.24314-1-himanshu.madhani@cavium.com>
References: <20180926162535.24314-1-himanshu.madhani@cavium.com>
Sender: linux-scsi-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-scsi@vger.kernel.org

From: Anil Gurumurthy

This patch adds initial files to enable FC-NVMe Target support.

Signed-off-by: Anil Gurumurthy
Signed-off-by: Giridhar Malavali
Signed-off-by: Darren Trapp
Signed-off-by: Himanshu Madhani
---
 drivers/scsi/qla2xxx/Makefile    |   3 +-
 drivers/scsi/qla2xxx/qla_nvmet.c | 798 +++++++++++++++++++++++++++++++++++++++
 drivers/scsi/qla2xxx/qla_nvmet.h | 129 +++++++
 3 files changed, 929 insertions(+), 1 deletion(-)
 create mode 100644 drivers/scsi/qla2xxx/qla_nvmet.c
 create mode 100644 drivers/scsi/qla2xxx/qla_nvmet.h

diff --git a/drivers/scsi/qla2xxx/Makefile b/drivers/scsi/qla2xxx/Makefile
index 17d5bc1cc56b..ec924733c10e 100644
--- a/drivers/scsi/qla2xxx/Makefile
+++ b/drivers/scsi/qla2xxx/Makefile
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: GPL-2.0
 qla2xxx-y := qla_os.o qla_init.o qla_mbx.o qla_iocb.o qla_isr.o qla_gs.o \
 		qla_dbg.o qla_sup.o qla_attr.o qla_mid.o qla_dfs.o qla_bsg.o \
-		qla_nx.o qla_mr.o qla_nx2.o qla_target.o qla_tmpl.o qla_nvme.o
+		qla_nx.o qla_mr.o qla_nx2.o qla_target.o qla_tmpl.o qla_nvme.o \
+		qla_nvmet.o
 
 obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx.o
 obj-$(CONFIG_TCM_QLA2XXX) += tcm_qla2xxx.o
diff --git a/drivers/scsi/qla2xxx/qla_nvmet.c b/drivers/scsi/qla2xxx/qla_nvmet.c
new file mode 100644
index 000000000000..5335c0618f00
--- /dev/null
+++ b/drivers/scsi/qla2xxx/qla_nvmet.c
@@ -0,0 +1,798 @@
+/*
+ * QLogic Fibre Channel HBA Driver
+ * Copyright (c) 2003-2017 QLogic Corporation
+ *
+ * See LICENSE.qla2xxx for copyright and licensing details.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include "qla_nvme.h"
+#include "qla_nvmet.h"
+
+static void qla_nvmet_send_resp_ctio(struct qla_qpair *qpair,
+	struct qla_nvmet_cmd *cmd, struct nvmefc_tgt_fcp_req *rsp);
+static void qla_nvmet_send_abts_ctio(struct scsi_qla_host *vha,
+	struct abts_recv_from_24xx *abts, bool flag);
+
+/*
+ * qla_nvmet_targetport_delete -
+ * Invoked by the nvmet to indicate that the target port has
+ * been deleted
+ */
+static void
+qla_nvmet_targetport_delete(struct nvmet_fc_target_port *targetport)
+{
+	struct qla_nvmet_tgtport *tport = targetport->private;
+
+	if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+		return;
+
+	complete(&tport->tport_del);
+}
+
+/*
+ * qlt_nvmet_ls_done -
+ * Invoked by the firmware interface to indicate the completion
+ * of an LS cmd
+ * Free all associated resources of the LS cmd
+ */
+static void qlt_nvmet_ls_done(void *ptr, int res)
+{
+	struct srb *sp = ptr;
+	struct srb_iocb *nvme = &sp->u.iocb_cmd;
+	struct nvmefc_tgt_ls_req *rsp = nvme->u.nvme.desc;
+	struct qla_nvmet_cmd *tgt_cmd = nvme->u.nvme.cmd;
+
+	if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+		return;
+
+	ql_log(ql_log_info, sp->vha, 0x11000,
+	    "Done with NVME LS4 req\n");
+
+	ql_log(ql_log_info, sp->vha, 0x11001,
+	    "sp: %p vha: %p, rsp: %p, cmd: %p\n",
+	    sp, sp->vha, nvme->u.nvme.desc, nvme->u.nvme.cmd);
+
+	rsp->done(rsp);
+	/* Free tgt_cmd */
+	kfree(tgt_cmd->buf);
+	kfree(tgt_cmd);
+	qla2x00_rel_sp(sp);
+}
+
+/*
+ * qla_nvmet_ls_rsp -
+ * Invoked by the nvme-t to complete the LS req.
+ * Prepare and send a response CTIO to the firmware.
+ */
+static int
+qla_nvmet_ls_rsp(struct nvmet_fc_target_port *tgtport,
+	struct nvmefc_tgt_ls_req *rsp)
+{
+	struct qla_nvmet_cmd *tgt_cmd =
+		container_of(rsp, struct qla_nvmet_cmd, cmd.ls_req);
+	struct scsi_qla_host *vha = tgt_cmd->vha;
+	struct srb_iocb *nvme;
+	int rval = QLA_FUNCTION_FAILED;
+	srb_t *sp;
+
+	ql_log(ql_log_info, vha, 0x11002,
+	    "Dumping the NVMET-LS response buffer\n");
+	ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075,
+	    (uint8_t *)rsp->rspbuf, rsp->rsplen);
+
+	/* Alloc SRB structure */
+	sp = qla2x00_get_sp(vha, NULL, GFP_ATOMIC);
+	if (!sp) {
+		ql_log(ql_log_info, vha, 0x11003, "Failed to allocate SRB\n");
+		return -ENOMEM;
+	}
+
+	sp->type = SRB_NVMET_LS;
+	sp->done = qlt_nvmet_ls_done;
+	sp->vha = vha;
+	sp->fcport = tgt_cmd->fcport;
+
+	nvme = &sp->u.iocb_cmd;
+	nvme->u.nvme.rsp_dma = rsp->rspdma;
+	nvme->u.nvme.rsp_len = rsp->rsplen;
+	nvme->u.nvme.exchange_address = tgt_cmd->atio.u.pt_ls4.exchange_address;
+	nvme->u.nvme.nport_handle = tgt_cmd->atio.u.pt_ls4.nport_handle;
+	nvme->u.nvme.vp_index = tgt_cmd->atio.u.pt_ls4.vp_index;
+
+	nvme->u.nvme.cmd = tgt_cmd;	/* To be freed */
+	nvme->u.nvme.desc = rsp;	/* Call back to nvmet */
+
+	rval = qla2x00_start_sp(sp);
+	if (rval != QLA_SUCCESS) {
+		ql_log(ql_log_warn, vha, 0x11004,
+		    "qla2x00_start_sp failed = %d\n", rval);
+		return rval;
+	}
+
+	return 0;
+}
+
+/*
+ * qla_nvmet_fcp_op -
+ * Invoked by the nvme-t to complete the IO.
+ * Prepare and send a response CTIO to the firmware.
+ */
+static int
+qla_nvmet_fcp_op(struct nvmet_fc_target_port *tgtport,
+	struct nvmefc_tgt_fcp_req *rsp)
+{
+	struct qla_nvmet_cmd *tgt_cmd =
+		container_of(rsp, struct qla_nvmet_cmd, cmd.fcp_req);
+	struct scsi_qla_host *vha = tgt_cmd->vha;
+
+	if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+		return 0;
+
+	/* Prepare and send CTIO 82h */
+	qla_nvmet_send_resp_ctio(vha->qpair, tgt_cmd, rsp);
+
+	return 0;
+}
+
+/*
+ * qla_nvmet_fcp_abort_done
+ * free up the used resources
+ */
+static void qla_nvmet_fcp_abort_done(void *ptr, int res)
+{
+	srb_t *sp = ptr;
+
+	qla2x00_rel_sp(sp);
+}
+
+/*
+ * qla_nvmet_fcp_abort -
+ * Invoked by the nvme-t to abort an IO
+ * Send an abort to the firmware
+ */
+static void
+qla_nvmet_fcp_abort(struct nvmet_fc_target_port *tgtport,
+	struct nvmefc_tgt_fcp_req *req)
+{
+	struct qla_nvmet_cmd *tgt_cmd =
+		container_of(req, struct qla_nvmet_cmd, cmd.fcp_req);
+	struct scsi_qla_host *vha = tgt_cmd->vha;
+	struct qla_hw_data *ha = vha->hw;
+	srb_t *sp;
+
+	if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+		return;
+
+	/* Alloc SRB structure */
+	sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL);
+	if (!sp) {
+		ql_log(ql_log_info, vha, 0x11005, "Failed to allocate SRB\n");
+		return;
+	}
+
+	sp->type = SRB_NVMET_SEND_ABTS;
+	sp->done = qla_nvmet_fcp_abort_done;
+	sp->vha = vha;
+	sp->fcport = tgt_cmd->fcport;
+
+	ha->isp_ops->abort_command(sp);
+}
+
+/*
+ * qla_nvmet_fcp_req_release -
+ * Delete the cmd from the list and free the cmd
+ */
+static void
+qla_nvmet_fcp_req_release(struct nvmet_fc_target_port *tgtport,
+	struct nvmefc_tgt_fcp_req *rsp)
+{
+	struct qla_nvmet_cmd *tgt_cmd =
+		container_of(rsp, struct qla_nvmet_cmd, cmd.fcp_req);
+	scsi_qla_host_t *vha = tgt_cmd->vha;
+	unsigned long flags;
+
+	if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+		return;
+
+	spin_lock_irqsave(&vha->cmd_list_lock, flags);
+	list_del(&tgt_cmd->cmd_list);
+	spin_unlock_irqrestore(&vha->cmd_list_lock, flags);
+
+	kfree(tgt_cmd);
+}
+
+static struct nvmet_fc_target_template qla_nvmet_fc_transport = {
+	.targetport_delete = qla_nvmet_targetport_delete,
+	.xmt_ls_rsp = qla_nvmet_ls_rsp,
+	.fcp_op = qla_nvmet_fcp_op,
+	.fcp_abort = qla_nvmet_fcp_abort,
+	.fcp_req_release = qla_nvmet_fcp_req_release,
+	.max_hw_queues = 8,
+	.max_sgl_segments = 128,
+	.max_dif_sgl_segments = 64,
+	.dma_boundary = 0xFFFFFFFF,
+	.target_features = NVMET_FCTGTFEAT_READDATA_RSP |
+		NVMET_FCTGTFEAT_CMD_IN_ISR |
+		NVMET_FCTGTFEAT_OPDONE_IN_ISR,
+	.target_priv_sz = sizeof(struct nvme_private),
+};
+
+/*
+ * qla_nvmet_create_targetport -
+ * Create a targetport. Registers the template with the nvme-t
+ * layer
+ */
+int qla_nvmet_create_targetport(struct scsi_qla_host *vha)
+{
+	struct nvmet_fc_port_info pinfo;
+	struct qla_nvmet_tgtport *tport;
+	int error = 0;
+
+	if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+		return 0;
+
+	ql_dbg(ql_dbg_nvme, vha, 0xe081,
+	    "Creating target port for :%p\n", vha);
+
+	memset(&pinfo, 0, (sizeof(struct nvmet_fc_port_info)));
+	pinfo.node_name = wwn_to_u64(vha->node_name);
+	pinfo.port_name = wwn_to_u64(vha->port_name);
+	pinfo.port_id = vha->d_id.b24;
+
+	error = nvmet_fc_register_targetport(&pinfo,
+	    &qla_nvmet_fc_transport, &vha->hw->pdev->dev,
+	    &vha->targetport);
+
+	if (error) {
+		ql_dbg(ql_dbg_nvme, vha, 0xe082,
+		    "Cannot register NVME transport:%d\n", error);
+		return error;
+	}
+	tport = (struct qla_nvmet_tgtport *)vha->targetport->private;
+	tport->vha = vha;
+	ql_dbg(ql_dbg_nvme, vha, 0xe082,
+	    " Registered NVME transport:%p WWPN:%llx\n",
+	    tport, pinfo.port_name);
+	return 0;
+}
+
+/*
+ * qla_nvmet_delete -
+ * Delete a targetport.
+ */
+int qla_nvmet_delete(struct scsi_qla_host *vha)
+{
+	struct qla_nvmet_tgtport *tport;
+
+	if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+		return 0;
+
+	if (!vha->flags.nvmet_enabled)
+		return 0;
+	if (vha->targetport) {
+		tport = (struct qla_nvmet_tgtport *)vha->targetport->private;
+
+		ql_dbg(ql_dbg_nvme, vha, 0xe083,
+		    "Deleting target port :%p\n", tport);
+		init_completion(&tport->tport_del);
+		nvmet_fc_unregister_targetport(vha->targetport);
+		wait_for_completion_timeout(&tport->tport_del, 5);
+
+		nvmet_release_sessions(vha);
+	}
+	return 0;
+}
+
+/*
+ * qla_nvmet_handle_ls -
+ * Handle a link service request from the initiator.
+ * Get the LS payload from the ATIO queue, invoke
+ * nvmet_fc_rcv_ls_req to pass the LS req to nvmet.
+ */
+int qla_nvmet_handle_ls(struct scsi_qla_host *vha,
+	struct pt_ls4_rx_unsol *pt_ls4, void *buf)
+{
+	struct qla_nvmet_cmd *tgt_cmd;
+	uint32_t size;
+	int ret;
+	uint32_t look_up_sid;
+	fc_port_t *sess = NULL;
+
+	if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+		return 0;
+
+	look_up_sid = pt_ls4->s_id[2] << 16 |
+	    pt_ls4->s_id[1] << 8 | pt_ls4->s_id[0];
+
+	ql_log(ql_log_info, vha, 0x11005,
+	    "%s - Look UP sid: %#x\n", __func__, look_up_sid);
+
+	sess = qla_nvmet_find_sess_by_s_id(vha, look_up_sid);
+	if (unlikely(!sess))
+		WARN_ON(1);
+
+	size = cpu_to_le16(pt_ls4->desc_len) + 8;
+
+	tgt_cmd = kzalloc(sizeof(struct qla_nvmet_cmd), GFP_ATOMIC);
+	if (tgt_cmd == NULL)
+		return -ENOMEM;
+
+	tgt_cmd->vha = vha;
+	tgt_cmd->ox_id = pt_ls4->ox_id;
+	tgt_cmd->buf = buf;
+	/* Store the received nphdl, rx_exh_addr etc */
+	memcpy(&tgt_cmd->atio.u.pt_ls4, pt_ls4, sizeof(struct pt_ls4_rx_unsol));
+	tgt_cmd->fcport = sess;
+
+	ql_log(ql_log_info, vha, 0x11006,
+	    "Dumping the PURLS-ATIO request\n");
+	ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075,
+	    (uint8_t *)pt_ls4, sizeof(struct pt_ls4_rx_unsol));
+
+	ql_log(ql_log_info, vha, 0x11007,
+	    "Sending LS to nvmet buf: %p, len: %#x\n", buf, size);
+
+	ret = nvmet_fc_rcv_ls_req(vha->targetport,
+	    &tgt_cmd->cmd.ls_req, buf, size);
+
+	if (ret == 0) {
+		ql_log(ql_log_info, vha, 0x11008,
+		    "LS req handled successfully\n");
+		return 0;
+	}
+	ql_log(ql_log_warn, vha, 0x11009,
+	    "LS req failed\n");
+
+	return ret;
+}
+
+/*
+ * qla_nvmet_process_cmd -
+ * Handle NVME cmd request from the initiator.
+ * Get the NVME payload from the ATIO queue, invoke
+ * nvmet_fc_rcv_fcp_req to pass the cmd to nvmet.
+ * On a failure send an abts to the initiator?
+ */
+int qla_nvmet_process_cmd(struct scsi_qla_host *vha,
+	struct qla_nvmet_cmd *tgt_cmd)
+{
+	int ret;
+	struct atio7_nvme_cmnd *nvme_cmd;
+
+	if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+		return 0;
+
+	nvme_cmd = (struct atio7_nvme_cmnd *)&tgt_cmd->nvme_cmd_iu;
+
+	ret = nvmet_fc_rcv_fcp_req(vha->targetport, &tgt_cmd->cmd.fcp_req,
+	    nvme_cmd, tgt_cmd->cmd_len);
+	if (ret != 0) {
+		ql_log(ql_log_warn, vha, 0x1100a,
+		    "%s-%d - Failed (ret: %#x) to process NVME command\n",
+		    __func__, __LINE__, ret);
+		/* Send ABTS to initiator ? */
+	}
+	return 0;
+}
+
+/*
+ * qla_nvmet_handle_abts
+ * Handle an abort from the initiator
+ * Invoke nvmet_fc_rcv_fcp_abort to pass the abts to the nvmet
+ */
+int qla_nvmet_handle_abts(struct scsi_qla_host *vha,
+	struct abts_recv_from_24xx *abts)
+{
+	uint16_t ox_id = cpu_to_be16(abts->fcp_hdr_le.ox_id);
+	unsigned long flags;
+	struct qla_nvmet_cmd *cmd = NULL;
+
+	if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+		return 0;
+
+	/* Retrieve the cmd from cmd list */
+	spin_lock_irqsave(&vha->cmd_list_lock, flags);
+	list_for_each_entry(cmd, &vha->qla_cmd_list, cmd_list) {
+		if (cmd->ox_id == ox_id)
+			break;	/* Found the cmd */
+	}
+	spin_unlock_irqrestore(&vha->cmd_list_lock, flags);
+	if (!cmd) {
+		ql_log(ql_log_warn, vha, 0x1100b,
+		    "%s-%d - Command not found\n", __func__, __LINE__);
+		/* Send a RJT */
+		qla_nvmet_send_abts_ctio(vha, abts, 0);
+		return 0;
+	}
+
+	nvmet_fc_rcv_fcp_abort(vha->targetport, &cmd->cmd.fcp_req);
+	/* Send an ACC */
+	qla_nvmet_send_abts_ctio(vha, abts, 1);
+
+	return 0;
+}
+
+/*
+ * qla_nvmet_abts_done
+ * Complete the cmd back to the nvme-t and
+ * free up the used resources
+ */
+static void qla_nvmet_abts_done(void *ptr, int res)
+{
+	srb_t *sp = ptr;
+
+	if (!IS_ENABLED(CONFIG_NVME_TARGET_FC))
+		return;
+
+	qla2x00_rel_sp(sp);
+}
+
+/*
+ * qla_nvmet_fcp_done
+ * Complete the cmd back to the nvme-t and
+ * free up the used resources
+ */
+static void qla_nvmet_fcp_done(void *ptr, int res)
+{
+	srb_t *sp = ptr;
+	struct nvmefc_tgt_fcp_req *rsp;
+
+	rsp = sp->u.iocb_cmd.u.nvme.desc;
+
+	if (res) {
+		rsp->fcp_error = NVME_SC_SUCCESS;
+		if (rsp->op == NVMET_FCOP_RSP)
+			rsp->transferred_length = 0;
+		else
+			rsp->transferred_length = rsp->transfer_length;
+	} else {
+		rsp->fcp_error = NVME_SC_DATA_XFER_ERROR;
+		rsp->transferred_length = 0;
+	}
+	rsp->done(rsp);
+	qla2x00_rel_sp(sp);
+}
+
+/*
+ * qla_nvmet_send_resp_ctio
+ * Send the response CTIO to the firmware
+ */
+static void qla_nvmet_send_resp_ctio(struct qla_qpair *qpair,
+	struct qla_nvmet_cmd *cmd, struct nvmefc_tgt_fcp_req *rsp_buf)
+{
+	struct atio_from_isp *atio = &cmd->atio;
+	struct ctio_nvme_to_27xx *ctio;
+	struct scsi_qla_host *vha = cmd->vha;
+	struct qla_hw_data *ha = vha->hw;
+	struct fcp_hdr *fchdr = &atio->u.nvme_isp27.fcp_hdr;
+	srb_t *sp;
+	unsigned long flags;
+	uint16_t temp, c_flags = 0;
+	struct req_que *req = vha->hw->req_q_map[0];
+	uint32_t req_cnt = 1;
+	uint32_t *cur_dsd;
+	uint16_t avail_dsds;
+	uint16_t tot_dsds, i, cnt;
+	struct scatterlist *sgl, *sg;
+
+	spin_lock_irqsave(&ha->hardware_lock, flags);
+
+	/* Alloc SRB structure */
+	sp = qla2x00_get_sp(vha, cmd->fcport, GFP_ATOMIC);
+	if (!sp) {
+		ql_log(ql_log_info, vha, 0x1100c, "Failed to allocate SRB\n");
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+		return;
+	}
+
+	sp->type = SRB_NVMET_FCP;
+	sp->name = "nvmet_fcp";
+	sp->done = qla_nvmet_fcp_done;
+	sp->u.iocb_cmd.u.nvme.desc = rsp_buf;
+	sp->u.iocb_cmd.u.nvme.cmd = cmd;
+
+	ctio = (struct ctio_nvme_to_27xx *)qla2x00_alloc_iocbs(vha, sp);
+	if (!ctio) {
+		ql_dbg(ql_dbg_nvme, vha, 0x3067,
+		    "qla2x00t(%ld): %s failed: unable to allocate request packet",
+		    vha->host_no, __func__);
+		spin_unlock_irqrestore(&ha->hardware_lock, flags);
+		return;
+	}
+
+	ctio->entry_type = CTIO_NVME;
+	ctio->entry_count = 1;
+	ctio->handle = sp->handle;
+	ctio->nport_handle = cpu_to_le16(cmd->fcport->loop_id);
+	ctio->timeout = cpu_to_le16(QLA_TGT_TIMEOUT);
+	ctio->vp_index = vha->vp_idx;
+	ctio->initiator_id[0] = fchdr->s_id[2];
+	ctio->initiator_id[1] = fchdr->s_id[1];
+	ctio->initiator_id[2] = fchdr->s_id[0];
+	ctio->exchange_addr = atio->u.nvme_isp27.exchange_addr;
+	temp = be16_to_cpu(fchdr->ox_id);
+	ctio->ox_id = cpu_to_le16(temp);
+	tot_dsds = ctio->dseg_count = cpu_to_le16(rsp_buf->sg_cnt);
+	c_flags = atio->u.nvme_isp27.attr << 9;
+
+	if ((ctio->dseg_count > 1) && (rsp_buf->op != NVMET_FCOP_RSP)) {
+		/* Check for additional continuation IOCB space */
+		req_cnt = qla24xx_calc_iocbs(vha, ctio->dseg_count);
+		ctio->entry_count = req_cnt;
+
+		if (req->cnt < (req_cnt + 2)) {
+			cnt = (uint16_t)RD_REG_DWORD_RELAXED(req->req_q_out);
+
+			if (req->ring_index < cnt)
+				req->cnt = cnt - req->ring_index;
+			else
+				req->cnt = req->length -
+					(req->ring_index - cnt);
+
+			if (unlikely(req->cnt < (req_cnt + 2))) {
+				ql_log(ql_log_warn, vha, 0xfff,
+				    "Running out of IOCB space for continuation IOCBs\n");
+				goto err_exit;
+			}
+		}
+	}
+
+	switch (rsp_buf->op) {
+	case NVMET_FCOP_READDATA:
+	case NVMET_FCOP_READDATA_RSP:
+		/* Populate the CTIO resp with the SGL present in the rsp */
+		ql_log(ql_log_info, vha, 0x1100c,
+		    "op: %#x, ox_id=%x c_flags=%x transfer_length: %#x req_cnt: %#x, tot_dsds: %#x\n",
+		    rsp_buf->op, ctio->ox_id, c_flags,
+		    rsp_buf->transfer_length, req_cnt, tot_dsds);
+
+		avail_dsds = 1;
+		cur_dsd = (uint32_t *)
+			&ctio->u.nvme_status_mode0.dsd0[0];
+		sgl = rsp_buf->sg;
+
+		/* Load data segments */
+		for_each_sg(sgl, sg, tot_dsds, i) {
+			dma_addr_t sle_dma;
+			cont_a64_entry_t *cont_pkt;
+
+			/* Allocate additional continuation packets? */
+			if (avail_dsds == 0) {
+				/*
+				 * Five DSDs are available in the Cont
+				 * Type 1 IOCB.
+				 */
+
+				/* Adjust ring index */
+				req->ring_index++;
+				if (req->ring_index == req->length) {
+					req->ring_index = 0;
+					req->ring_ptr = req->ring;
+				} else {
+					req->ring_ptr++;
+				}
+				cont_pkt = (cont_a64_entry_t *)
+					req->ring_ptr;
+				*((uint32_t *)(&cont_pkt->entry_type)) =
+					cpu_to_le32(CONTINUE_A64_TYPE);
+
+				cur_dsd = (uint32_t *)
+					cont_pkt->dseg_0_address;
+				avail_dsds = 5;
+			}
+
+			sle_dma = sg_dma_address(sg);
+			*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
+			*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
+			*cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
+			avail_dsds--;
+		}
+
+		ctio->u.nvme_status_mode0.transfer_len =
+			cpu_to_le32(rsp_buf->transfer_length);
+		ctio->u.nvme_status_mode0.relative_offset =
+			cpu_to_le32(rsp_buf->offset);
+		ctio->flags = cpu_to_le16(c_flags | 0x2);
+
+		if (rsp_buf->op == NVMET_FCOP_READDATA_RSP) {
+			if (rsp_buf->rsplen == 12) {
+				ctio->flags |=
+					NVMET_CTIO_STS_MODE0 |
+					NVMET_CTIO_SEND_STATUS;
+			} else if (rsp_buf->rsplen == 32) {
+				struct nvme_fc_ersp_iu *ersp =
+					rsp_buf->rspaddr;
+				uint32_t iter = 4, *inbuf, *outbuf;
+
+				ctio->flags |=
+					NVMET_CTIO_STS_MODE1 |
+					NVMET_CTIO_SEND_STATUS;
+				inbuf = (uint32_t *)
+					&((uint8_t *)rsp_buf->rspaddr)[16];
+				outbuf = (uint32_t *)
+					ctio->u.nvme_status_mode1.nvme_comp_q_entry;
+				for (; iter; iter--)
+					*outbuf++ = cpu_to_be32(*inbuf++);
+
+				ctio->u.nvme_status_mode1.rsp_seq_num =
+					cpu_to_be32(ersp->rsn);
+				ctio->u.nvme_status_mode1.transfer_len =
+					cpu_to_be32(ersp->xfrd_len);
+			} else
+				ql_log(ql_log_warn, vha, 0x1100d,
+				    "unhandled resp len = %x\n",
+				    rsp_buf->rsplen);
+		}
+		break;
+
+	case NVMET_FCOP_WRITEDATA:
+		/* Send transfer rdy */
+		ql_log(ql_log_info, vha, 0x1100e,
+		    "FCOP_WRITE: ox_id=%x c_flags=%x transfer_length: %#x req_cnt: %#x, tot_dsds: %#x\n",
+		    ctio->ox_id, c_flags, rsp_buf->transfer_length,
+		    req_cnt, tot_dsds);
+
+		ctio->flags = cpu_to_le16(c_flags | 0x1);
+
+		avail_dsds = 1;
+		cur_dsd = (uint32_t *)&ctio->u.nvme_status_mode0.dsd0[0];
+		sgl = rsp_buf->sg;
+
+		/* Load data segments */
+		for_each_sg(sgl, sg, tot_dsds, i) {
+			dma_addr_t sle_dma;
+			cont_a64_entry_t *cont_pkt;
+
+			/* Allocate additional continuation packets? */
+			if (avail_dsds == 0) {
+				/*
+				 * Five DSDs are available in the Continuation
+				 * Type 1 IOCB.
+				 */
+
+				/* Adjust ring index */
+				req->ring_index++;
+				if (req->ring_index == req->length) {
+					req->ring_index = 0;
+					req->ring_ptr = req->ring;
+				} else {
+					req->ring_ptr++;
+				}
+				cont_pkt = (cont_a64_entry_t *)req->ring_ptr;
+				*((uint32_t *)(&cont_pkt->entry_type)) =
+					cpu_to_le32(CONTINUE_A64_TYPE);
+
+				cur_dsd = (uint32_t *)cont_pkt->dseg_0_address;
+				avail_dsds = 5;
+			}
+
+			sle_dma = sg_dma_address(sg);
+			*cur_dsd++ = cpu_to_le32(LSD(sle_dma));
+			*cur_dsd++ = cpu_to_le32(MSD(sle_dma));
+			*cur_dsd++ = cpu_to_le32(sg_dma_len(sg));
+			avail_dsds--;
+		}
+
+		ctio->u.nvme_status_mode0.transfer_len =
+			cpu_to_le32(rsp_buf->transfer_length);
+		ctio->u.nvme_status_mode0.relative_offset =
+			cpu_to_le32(rsp_buf->offset);
+
+		break;
+	case NVMET_FCOP_RSP:
+		/* Send a response frame */
+		ctio->flags = cpu_to_le16(c_flags);
+		if (rsp_buf->rsplen == 12) {
+			ctio->flags |=
+				NVMET_CTIO_STS_MODE0 | NVMET_CTIO_SEND_STATUS;
+		} else if (rsp_buf->rsplen == 32) {
+			struct nvme_fc_ersp_iu *ersp = rsp_buf->rspaddr;
+			uint32_t iter = 4, *inbuf, *outbuf;
+
+			ctio->flags |=
+				NVMET_CTIO_STS_MODE1 | NVMET_CTIO_SEND_STATUS;
+			inbuf = (uint32_t *)
+				&((uint8_t *)rsp_buf->rspaddr)[16];
+			outbuf = (uint32_t *)
+				ctio->u.nvme_status_mode1.nvme_comp_q_entry;
+			for (; iter; iter--)
+				*outbuf++ = cpu_to_be32(*inbuf++);
+			ctio->u.nvme_status_mode1.rsp_seq_num =
+				cpu_to_be32(ersp->rsn);
+			ctio->u.nvme_status_mode1.transfer_len =
+				cpu_to_be32(ersp->xfrd_len);
+
+			ql_log(ql_log_info, vha, 0x1100f,
+			    "op: %#x, rsplen: %#x\n", rsp_buf->op,
+			    rsp_buf->rsplen);
+		} else
+			ql_log(ql_log_warn, vha, 0x11010,
+			    "unhandled resp len = %x for op NVMET_FCOP_RSP\n",
+			    rsp_buf->rsplen);
+		break;
+	}
+
+	/* Memory Barrier */
+	wmb();
+
+	qla2x00_start_iocbs(vha, vha->hw->req_q_map[0]);
+err_exit:
+	spin_unlock_irqrestore(&ha->hardware_lock, flags);
+}
+
+/*
+ * qla_nvmet_send_abts_ctio
+ * Send the abts CTIO to the firmware
+ */
+static void qla_nvmet_send_abts_ctio(struct scsi_qla_host *vha,
+	struct abts_recv_from_24xx *rabts, bool flag)
+{
+	struct abts_resp_to_24xx *resp;
+	srb_t *sp;
+	uint32_t f_ctl;
+	uint8_t *p;
+
+	/* Alloc SRB structure */
+	sp = qla2x00_get_sp(vha, NULL, GFP_ATOMIC);
+	if (!sp) {
+		ql_log(ql_log_info, vha, 0x11011, "Failed to allocate SRB\n");
+		return;
+	}
+
+	sp->type = SRB_NVMET_ABTS;
+	sp->name = "nvmet_abts";
+	sp->done = qla_nvmet_abts_done;
+
+	resp = (struct abts_resp_to_24xx *)qla2x00_alloc_iocbs(vha, sp);
+	if (!resp) {
+		ql_dbg(ql_dbg_nvme, vha, 0x3067,
+		    "qla2x00t(%ld): %s failed: unable to allocate request packet",
+		    vha->host_no, __func__);
+		return;
+	}
+
+	resp->entry_type = ABTS_RESP_24XX;
+	resp->entry_count = 1;
+	resp->handle = sp->handle;
+
+	resp->nport_handle = rabts->nport_handle;
+	resp->vp_index = rabts->vp_index;
+	resp->exchange_address = rabts->exchange_addr_to_abort;
+	resp->fcp_hdr_le = rabts->fcp_hdr_le;
+	f_ctl = cpu_to_le32(F_CTL_EXCH_CONTEXT_RESP |
+	    F_CTL_LAST_SEQ | F_CTL_END_SEQ |
+	    F_CTL_SEQ_INITIATIVE);
+	p = (uint8_t *)&f_ctl;
+	resp->fcp_hdr_le.f_ctl[0] = *p++;
+	resp->fcp_hdr_le.f_ctl[1] = *p++;
+	resp->fcp_hdr_le.f_ctl[2] = *p;
+
+	resp->fcp_hdr_le.d_id[0] = rabts->fcp_hdr_le.s_id[0];
+	resp->fcp_hdr_le.d_id[1] = rabts->fcp_hdr_le.s_id[1];
+	resp->fcp_hdr_le.d_id[2] = rabts->fcp_hdr_le.s_id[2];
+	resp->fcp_hdr_le.s_id[0] = rabts->fcp_hdr_le.d_id[0];
+	resp->fcp_hdr_le.s_id[1] = rabts->fcp_hdr_le.d_id[1];
+	resp->fcp_hdr_le.s_id[2] = rabts->fcp_hdr_le.d_id[2];
+
+	if (flag) { /* BA_ACC */
+		resp->fcp_hdr_le.r_ctl = R_CTL_BASIC_LINK_SERV | R_CTL_B_ACC;
+		resp->payload.ba_acct.seq_id_valid = SEQ_ID_INVALID;
+		resp->payload.ba_acct.low_seq_cnt = 0x0000;
+		resp->payload.ba_acct.high_seq_cnt = 0xFFFF;
+		resp->payload.ba_acct.ox_id = rabts->fcp_hdr_le.ox_id;
+		resp->payload.ba_acct.rx_id = rabts->fcp_hdr_le.rx_id;
+	} else {
+		resp->fcp_hdr_le.r_ctl = R_CTL_BASIC_LINK_SERV | R_CTL_B_RJT;
+		resp->payload.ba_rjt.reason_code =
+			BA_RJT_REASON_CODE_UNABLE_TO_PERFORM;
+	}
+	/* Memory Barrier */
+	wmb();
+
+	qla2x00_start_iocbs(vha, vha->hw->req_q_map[0]);
+}
diff --git a/drivers/scsi/qla2xxx/qla_nvmet.h b/drivers/scsi/qla2xxx/qla_nvmet.h
new file mode 100644
index 000000000000..188ad2c5e3f1
--- /dev/null
+++ b/drivers/scsi/qla2xxx/qla_nvmet.h
@@ -0,0 +1,129 @@
+/*
+ * QLogic Fibre Channel HBA Driver
+ * Copyright (c) 2003-2017 QLogic Corporation
+ *
+ * See LICENSE.qla2xxx for copyright and licensing details.
+ */
+#ifndef __QLA_NVMET_H
+#define __QLA_NVMET_H
+
+#include 
+#include 
+#include 
+#include 
+
+#include "qla_def.h"
+
+struct qla_nvmet_tgtport {
+	struct scsi_qla_host *vha;
+	struct completion tport_del;
+};
+
+struct qla_nvmet_cmd {
+	union {
+		struct nvmefc_tgt_ls_req ls_req;
+		struct nvmefc_tgt_fcp_req fcp_req;
+	} cmd;
+	struct scsi_qla_host *vha;
+	void *buf;
+	struct atio_from_isp atio;
+	struct atio7_nvme_cmnd nvme_cmd_iu;
+	uint16_t cmd_len;
+	spinlock_t nvme_cmd_lock;
+	struct list_head cmd_list;	/* List of cmds */
+	struct work_struct work;
+
+	struct scatterlist *sg;	/* cmd data buffer SG vector */
+	int sg_cnt;		/* SG segments count */
+	int bufflen;		/* cmd buffer length */
+	int offset;
+	enum dma_data_direction dma_data_direction;
+	uint16_t ox_id;
+	struct fc_port *fcport;
+};
+
+#define CTIO_NVME	0x82	/* CTIO FC-NVMe IOCB */
+struct ctio_nvme_to_27xx {
+	uint8_t entry_type;		/* Entry type. */
+	uint8_t entry_count;		/* Entry count. */
+	uint8_t sys_define;		/* System defined. */
+	uint8_t entry_status;		/* Entry Status. */
+
+	uint32_t handle;		/* System handle. */
+	uint16_t nport_handle;		/* N_PORT handle. */
+	uint16_t timeout;		/* Command timeout. */
+
+	uint16_t dseg_count;		/* Data segment count. */
+	uint8_t vp_index;		/* vp_index */
+	uint8_t addl_flags;		/* Additional flags */
+
+	uint8_t initiator_id[3];	/* Initiator ID */
+	uint8_t rsvd1;
+
+	uint32_t exchange_addr;		/* Exch addr */
+
+	uint16_t ox_id;			/* Ox ID */
+	uint16_t flags;
+#define NVMET_CTIO_STS_MODE0	0
+#define NVMET_CTIO_STS_MODE1	BIT_6
+#define NVMET_CTIO_STS_MODE2	BIT_7
+#define NVMET_CTIO_SEND_STATUS	BIT_15
+	union {
+		struct {
+			uint8_t reserved1[8];
+			uint32_t relative_offset;
+			uint8_t reserved2[4];
+			uint32_t transfer_len;
+			uint8_t reserved3[4];
+			uint32_t dsd0[2];
+			uint32_t dsd0_len;
+		} nvme_status_mode0;
+		struct {
+			uint8_t nvme_comp_q_entry[16];
+			uint32_t transfer_len;
+			uint32_t rsp_seq_num;
+			uint32_t dsd0[2];
+			uint32_t dsd0_len;
+		} nvme_status_mode1;
+		struct {
+			uint32_t reserved4[4];
+			uint32_t transfer_len;
+			uint32_t reserved5;
+			uint32_t rsp_dsd[2];
+			uint32_t rsp_dsd_len;
+		} nvme_status_mode2;
+	} u;
+} __packed;
+
+/*
+ * ISP queue - CTIO type FC NVMe from ISP to target driver
+ * returned entry structure.
+ */
+struct ctio_nvme_from_27xx {
+	uint8_t entry_type;		/* Entry type. */
+	uint8_t entry_count;		/* Entry count. */
+	uint8_t sys_define;		/* System defined. */
+	uint8_t entry_status;		/* Entry Status. */
+	uint32_t handle;		/* System defined handle */
+	uint16_t status;
+	uint16_t timeout;
+	uint16_t dseg_count;		/* Data segment count.
*/ + uint8_t vp_index; + uint8_t reserved1[5]; + uint32_t exchange_address; + uint16_t ox_id; + uint16_t flags; + uint32_t residual; + uint8_t reserved2[32]; +} __packed; + +int qla_nvmet_handle_ls(struct scsi_qla_host *vha, + struct pt_ls4_rx_unsol *ls4, void *buf); +int qla_nvmet_create_targetport(struct scsi_qla_host *vha); +int qla_nvmet_delete(struct scsi_qla_host *vha); +int qla_nvmet_handle_abts(struct scsi_qla_host *vha, + struct abts_recv_from_24xx *abts); +int qla_nvmet_process_cmd(struct scsi_qla_host *vha, + struct qla_nvmet_cmd *cmd); + +#endif From patchwork Wed Sep 26 16:25:32 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Madhani, Himanshu" X-Patchwork-Id: 10616219 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EE1D0112B for ; Wed, 26 Sep 2018 16:32:08 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DD7AD2B4D0 for ; Wed, 26 Sep 2018 16:32:08 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id DB8D12B4DB; Wed, 26 Sep 2018 16:32:08 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5512A2B4FF for ; Wed, 26 Sep 2018 16:32:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728412AbeIZWpw (ORCPT ); Wed, 26 Sep 2018 18:45:52 -0400 Received: from mail-cys01nam02on0089.outbound.protection.outlook.com ([104.47.37.89]:52436 "EHLO NAM02-CY1-obe.outbound.protection.outlook.com" 
From: Himanshu Madhani
Subject: [PATCH v2 2/5] qla2xxx_nvmet: Add FC-NVMe Target Link Service request handling
Date: Wed, 26 Sep 2018 09:25:32 -0700
Message-ID: <20180926162535.24314-3-himanshu.madhani@cavium.com>
In-Reply-To: <20180926162535.24314-1-himanshu.madhani@cavium.com>
References: <20180926162535.24314-1-himanshu.madhani@cavium.com>
X-Mailing-List: linux-scsi@vger.kernel.org

From: Anil Gurumurthy

This patch provides link-service pass-through handling in the driver.
The feature is implemented mainly by the firmware; the driver drives
that implementation through an IOCB interface.

Signed-off-by: Anil Gurumurthy
Signed-off-by: Giridhar Malavali
Signed-off-by: Darren Trapp
Signed-off-by: Himanshu Madhani
---
 drivers/scsi/qla2xxx/qla_dbg.c  |  1 +
 drivers/scsi/qla2xxx/qla_dbg.h  |  2 ++
 drivers/scsi/qla2xxx/qla_iocb.c | 42 ++++++++++++++++++++++++++++++++++++++++-
 3 files changed, 44 insertions(+), 1 deletion(-)

diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c
index c7533fa7f46e..ed9c228f7d11 100644
--- a/drivers/scsi/qla2xxx/qla_dbg.c
+++ b/drivers/scsi/qla2xxx/qla_dbg.c
@@ -67,6 +67,7 @@
  * | Target Mode Management       |      0xf09b    | 0xf002         |
  * |                              |                | 0xf046-0xf049  |
  * | Target Mode Task Management  |      0x1000d   |                |
+ * | NVME                         |      0x11000   |                |
  * ----------------------------------------------------------------------
  */

diff --git a/drivers/scsi/qla2xxx/qla_dbg.h b/drivers/scsi/qla2xxx/qla_dbg.h
index 8877aa97d829..4ad97923e40b 100644
--- a/drivers/scsi/qla2xxx/qla_dbg.h
+++ b/drivers/scsi/qla2xxx/qla_dbg.h
@@ -367,6 +367,8 @@ ql_log_qp(uint32_t, struct qla_qpair *, int32_t, const char *fmt, ...);
 #define ql_dbg_tgt_tmr	0x00001000 /* Target mode task management */
 #define ql_dbg_tgt_dif	0x00000800 /* Target mode dif */

+#define ql_dbg_nvme	0x00000400 /* NVME Target */
+
 extern int qla27xx_dump_mpi_ram(struct qla_hw_data *, uint32_t, uint32_t *,
 	uint32_t, void **);
 extern int qla24xx_dump_ram(struct qla_hw_data *, uint32_t, uint32_t *,

diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
index 4de910231ba6..cce32362cf21 100644
--- a/drivers/scsi/qla2xxx/qla_iocb.c
+++ b/drivers/scsi/qla2xxx/qla_iocb.c
@@ -2113,7 +2113,7 @@ __qla2x00_alloc_iocbs(struct qla_qpair *qpair, srb_t *sp)
 	req_cnt = 1;
 	handle = 0;

-	if (sp && (sp->type != SRB_SCSI_CMD)) {
+	if (sp && (sp->type != SRB_SCSI_CMD) && (sp->type != SRB_NVMET_FCP)) {
 		/* Adjust entry-counts as needed. */
 		req_cnt = sp->iocbs;
 	}
@@ -3433,6 +3433,40 @@ qla24xx_prlo_iocb(srb_t *sp, struct logio_entry_24xx *logio)
 	logio->vp_index = sp->fcport->vha->vp_idx;
 }

+/*
+ * Build NVMET LS response
+ */
+static int
+qla_nvmet_ls(srb_t *sp, struct pt_ls4_request *rsp_pkt)
+{
+	struct srb_iocb *nvme;
+	int rval = QLA_SUCCESS;
+
+	nvme = &sp->u.iocb_cmd;
+
+	rsp_pkt->entry_type = PT_LS4_REQUEST;
+	rsp_pkt->entry_count = 1;
+	rsp_pkt->control_flags = cpu_to_le16(CF_LS4_RESPONDER << CF_LS4_SHIFT);
+	rsp_pkt->handle = sp->handle;
+
+	rsp_pkt->nport_handle = sp->fcport->loop_id;
+	rsp_pkt->vp_index = nvme->u.nvme.vp_index;
+	rsp_pkt->exchange_address = cpu_to_le32(nvme->u.nvme.exchange_address);
+
+	rsp_pkt->tx_dseg_count = 1;
+	rsp_pkt->tx_byte_count = cpu_to_le16(nvme->u.nvme.rsp_len);
+	rsp_pkt->dseg0_len = cpu_to_le16(nvme->u.nvme.rsp_len);
+	rsp_pkt->dseg0_address[0] = cpu_to_le32(LSD(nvme->u.nvme.rsp_dma));
+	rsp_pkt->dseg0_address[1] = cpu_to_le32(MSD(nvme->u.nvme.rsp_dma));
+
+	ql_log(ql_log_info, sp->vha, 0xffff,
+	    "Dumping the NVME-LS response IOCB\n");
+	ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, sp->vha, 0x2075,
+	    (uint8_t *)rsp_pkt, sizeof(*rsp_pkt));
+
+	return rval;
+}
+
 int
 qla2x00_start_sp(srb_t *sp)
 {
@@ -3493,6 +3527,9 @@ qla2x00_start_sp(srb_t *sp)
 	case SRB_NVME_LS:
 		qla_nvme_ls(sp, pkt);
 		break;
+	case SRB_NVMET_LS:
+		qla_nvmet_ls(sp, pkt);
+		break;
 	case SRB_ABT_CMD:
 		IS_QLAFX00(ha) ?
 			qlafx00_abort_iocb(sp, pkt) :
@@ -3518,6 +3555,9 @@ qla2x00_start_sp(srb_t *sp)
 	case SRB_PRLO_CMD:
 		qla24xx_prlo_iocb(sp, pkt);
 		break;
+	case SRB_NVME_ELS_RSP:
+		qlt_send_els_resp(sp, pkt);
+		break;
 	default:
 		break;
 	}

From patchwork Wed Sep 26 16:25:33 2018
X-Patchwork-Submitter: "Madhani, Himanshu"
X-Patchwork-Id: 10616225
From: Himanshu Madhani
Subject: [PATCH v2 3/5] qla2xxx_nvmet: Add FC-NVMe Target handling
Date: Wed, 26 Sep 2018 09:25:33 -0700
Message-ID: <20180926162535.24314-4-himanshu.madhani@cavium.com>
In-Reply-To: <20180926162535.24314-1-himanshu.madhani@cavium.com>
References: <20180926162535.24314-1-himanshu.madhani@cavium.com>
X-Mailing-List: linux-scsi@vger.kernel.org

From: Anil Gurumurthy

This patch adds the following code in the driver to support FC-NVMe Target:
- Updated ql2xenablenvme to allow FC-NVMe Target operation
- Added Link Service Request handling for NVMe Target
- Added passthru IOCB for LS4 request
- Added CTIO for sending response to FW
- Added FC4 Registration for FC-NVMe Target
- Added PUREX IOCB support for login processing in FC-NVMe Target mode
- Added Continuation IOCB for PUREX
- Added Session creation with PUREX IOCB in FC-NVMe Target mode

Signed-off-by: Anil Gurumurthy
Signed-off-by: Giridhar Malavali
Signed-off-by: Darren Trapp
Signed-off-by: Himanshu Madhani
---
 drivers/scsi/qla2xxx/qla_def.h    |  35 +-
 drivers/scsi/qla2xxx/qla_fw.h     | 263 ++++++++++
 drivers/scsi/qla2xxx/qla_gbl.h    |  17 +-
 drivers/scsi/qla2xxx/qla_gs.c     |  14 +-
 drivers/scsi/qla2xxx/qla_init.c   |  46 +-
 drivers/scsi/qla2xxx/qla_isr.c    | 112 ++++-
 drivers/scsi/qla2xxx/qla_mbx.c    | 101 +++-
 drivers/scsi/qla2xxx/qla_nvme.h   |  33 --
 drivers/scsi/qla2xxx/qla_os.c     |  77 ++-
 drivers/scsi/qla2xxx/qla_target.c | 977 +++++++++++++++++++++++++++++++++++++-
 drivers/scsi/qla2xxx/qla_target.h |  90 ++++
 11 files changed, 1697 insertions(+), 68 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
index 26b93c563f92..feda0b90f62e 100644
--- a/drivers/scsi/qla2xxx/qla_def.h
+++ b/drivers/scsi/qla2xxx/qla_def.h
@@ -480,6 +480,10 @@ struct srb_iocb {
 		uint32_t dl;
 		uint32_t timeout_sec;
 		struct list_head entry;
+		uint32_t exchange_address;
+		uint16_t nport_handle;
+		uint8_t vp_index;
+		void *cmd;
 	} nvme;
 	struct {
 		u16 cmd;
@@ -490,7 +494,11 @@ struct srb_iocb {
 	struct timer_list timer;
 	void (*timeout)(void *);
 };
-
+struct srb_nvme_els_rsp {
+	dma_addr_t dma_addr;
+	void *dma_ptr;
+	void *ptr;
+};
 /* Values for srb_ctx type */
 #define SRB_LOGIN_CMD	1
 #define SRB_LOGOUT_CMD	2
@@ -515,6 +523,11 @@ struct srb_iocb {
 #define SRB_PRLI_CMD	21
 #define SRB_CTRL_VP	22
 #define SRB_PRLO_CMD	23
+#define SRB_NVME_ELS_RSP	24
+#define SRB_NVMET_LS	25
+#define SRB_NVMET_FCP	26
+#define SRB_NVMET_ABTS	27
+#define SRB_NVMET_SEND_ABTS	28

 enum {
 	TYPE_SRB,
@@ -545,10 +558,13 @@ typedef struct srb {
 	int rc;
 	int retry_count;
 	struct completion comp;
+	struct work_struct nvmet_comp_work;
+	uint16_t comp_status;
 	union {
 		struct srb_iocb iocb_cmd;
 		struct bsg_job *bsg_job;
 		struct srb_cmd scmd;
+		struct srb_nvme_els_rsp snvme_els;
 	} u;
 	void (*done)(void *, int);
 	void (*free)(void *);
@@ -2273,6 +2289,15 @@ struct qlt_plogi_ack_t {
 	void *fcport;
 };

+/* NVMET */
+struct qlt_purex_plogi_ack_t {
+	struct list_head list;
+	struct __fc_plogi rcvd_plogi;
+	port_id_t id;
+	int ref_count;
+	void *fcport;
+};
+
 struct ct_sns_desc {
 	struct ct_sns_pkt *ct_sns;
 	dma_addr_t ct_sns_dma;
@@ -3235,6 +3260,7 @@ enum qla_work_type {
 	QLA_EVT_SP_RETRY,
 	QLA_EVT_IIDMA,
 	QLA_EVT_ELS_PLOGI,
+	QLA_EVT_NEW_NVMET_SESS,
 };

@@ -4229,6 +4255,7 @@ typedef struct scsi_qla_host {
 		uint32_t qpairs_req_created:1;
 		uint32_t qpairs_rsp_created:1;
 		uint32_t nvme_enabled:1;
+		uint32_t nvmet_enabled:1;
 	} flags;

 	atomic_t loop_state;
@@ -4274,6 +4301,7 @@ typedef struct scsi_qla_host {
 #define N2N_LOGIN_NEEDED	30
 #define IOCB_WORK_ACTIVE	31
 #define SET_ZIO_THRESHOLD_NEEDED 32
+#define NVMET_PUREX		33

 	unsigned long pci_flags;
 #define PFLG_DISCONNECTED	0 /* PCI device removed */
@@ -4314,6 +4342,7 @@ typedef struct scsi_qla_host {
 	uint8_t fabric_node_name[WWN_SIZE];

 	struct nvme_fc_local_port *nvme_local_port;
+	struct nvmet_fc_target_port *targetport;
 	struct completion nvme_del_done;
 	struct list_head nvme_rport_list;
@@ -4394,6 +4423,9 @@ typedef struct scsi_qla_host {
 	uint16_t n2n_id;
 	struct list_head gpnid_list;
 	struct fab_scan scan;
+	/*NVMET*/
+	struct list_head purex_atio_list;
+	struct completion purex_plogi_sess;
 } scsi_qla_host_t;

 struct qla27xx_image_status {
@@ -4664,6 +4696,7 @@ struct sff_8247_a0 {
 	!ha->current_topology)

 #include "qla_target.h"
+#include "qla_nvmet.h"
 #include "qla_gbl.h"
 #include "qla_dbg.h"
 #include "qla_inline.h"
diff --git a/drivers/scsi/qla2xxx/qla_fw.h b/drivers/scsi/qla2xxx/qla_fw.h
index 50c1e6c62e31..67a42d153f64 100644
--- a/drivers/scsi/qla2xxx/qla_fw.h
+++ b/drivers/scsi/qla2xxx/qla_fw.h
@@ -723,6 +723,269 @@ struct ct_entry_24xx {
 	uint32_t dseg_1_len;		/* Data segment 1 length. */
 };

+/* NVME-T changes */
+/*
+ * Fibre Channel Header
+ * Little Endian format. As received in PUREX and PURLS
+ */
+struct __fc_hdr {
+	uint16_t did_lo;
+	uint8_t did_hi;
+	uint8_t r_ctl;
+	uint16_t sid_lo;
+	uint8_t sid_hi;
+	uint8_t cs_ctl;
+	uint16_t f_ctl_lo;
+	uint8_t f_ctl_hi;
+	uint8_t type;
+	uint16_t seq_cnt;
+	uint8_t df_ctl;
+	uint8_t seq_id;
+	uint16_t rx_id;
+	uint16_t ox_id;
+	uint32_t param;
+};
+
+/*
+ * Fibre Channel LOGO acc
+ * In big endian format
+ */
+struct __fc_logo_acc {
+	uint8_t op_code;
+	uint8_t reserved[3];
+};
+
+struct __fc_lsrjt {
+	uint8_t op_code;
+	uint8_t reserved[3];
+	uint8_t reserved2;
+	uint8_t reason;
+	uint8_t exp;
+	uint8_t vendor;
+};
+
+/*
+ * Fibre Channel LOGO Frame
+ * Little Endian format. As received in PUREX
+ */
+struct __fc_logo {
+	struct __fc_hdr hdr;
+	uint16_t reserved;
+	uint8_t reserved1;
+	uint8_t op_code;
+	uint16_t sid_lo;
+	uint8_t sid_hi;
+	uint8_t reserved2;
+	uint8_t pname[8];
+};
+
+/*
+ * Fibre Channel PRLI Frame
+ * Little Endian format. As received in PUREX
+ */
+struct __fc_prli {
+	struct __fc_hdr hdr;
+	uint16_t pyld_length;	/* word 0 of prli */
+	uint8_t page_length;
+	uint8_t op_code;
+	uint16_t common;	/* word 1. 1st word of SP page */
+	uint8_t type_ext;
+	uint8_t prli_type;
+#define PRLI_TYPE_FCP	0x8
+#define PRLI_TYPE_NVME	0x28
+	union {
+		struct {
+			uint32_t reserved[2];
+			uint32_t sp_info;
+		} fcp;
+		struct {
+			uint32_t reserved[2];
+			uint32_t sp_info;
+#define NVME_PRLI_DISC	BIT_3
+#define NVME_PRLI_TRGT	BIT_4
+#define NVME_PRLI_INIT	BIT_5
+#define NVME_PRLI_CONFIRMATION	BIT_7
+			uint32_t reserved1;
+		} nvme;
+	};
+};
+
+/*
+ * Fibre Channel PLOGI Frame
+ * Little Endian format. As received in PUREX
+ */
+struct __fc_plogi {
+	uint16_t did_lo;
+	uint8_t did_hi;
+	uint8_t r_ctl;
+	uint16_t sid_lo;
+	uint8_t sid_hi;
+	uint8_t cs_ctl;
+	uint16_t f_ctl_lo;
+	uint8_t f_ctl_hi;
+	uint8_t type;
+	uint16_t seq_cnt;
+	uint8_t df_ctl;
+	uint8_t seq_id;
+	uint16_t rx_id;
+	uint16_t ox_id;
+	uint32_t param;
+	uint8_t rsvd[3];
+	uint8_t op_code;
+	uint32_t cs_params[4];	/* common service params */
+	uint8_t pname[8];	/* port name */
+	uint8_t nname[8];	/* node name */
+	uint32_t class1[4];	/* class 1 service params */
+	uint32_t class2[4];	/* class 2 service params */
+	uint32_t class3[4];	/* class 3 service params */
+	uint32_t class4[4];
+	uint32_t vndr_vers[4];
+};
+
+#define IOCB_TYPE_ELS_PASSTHRU	0x53
+
+/* ELS Pass-Through IOCB (IOCB_TYPE_ELS_PASSTHRU = 0x53)
+ */
+struct __els_pt {
+	uint8_t entry_type;	/* Entry type. */
+	uint8_t entry_count;	/* Entry count. */
+	uint8_t sys_define;	/* System defined. */
+	uint8_t entry_status;	/* Entry Status. */
+	uint32_t handle;
+	uint16_t status;	/* when returned from fw */
+	uint16_t nphdl;
+	uint16_t tx_dsd_cnt;
+	uint8_t vp_index;
+	uint8_t sof;		/* bits 7:4 */
+	uint32_t rcv_exchg_id;
+	uint16_t rx_dsd_cnt;
+	uint8_t op_code;
+	uint8_t rsvd1;
+	uint16_t did_lo;
+	uint8_t did_hi;
+	uint8_t sid_hi;
+	uint16_t sid_lo;
+	uint16_t cntl_flags;
+#define ELS_PT_RESPONDER_ACC	(1 << 13)
+	uint32_t rx_bc;
+	uint32_t tx_bc;
+	uint32_t tx_dsd[2];	/* Data segment 0 address. */
+	uint32_t tx_dsd_len;	/* Data segment 0 length.
*/ + uint32_t rx_dsd[2]; /* Data segment 1 address. */ + uint32_t rx_dsd_len; /* Data segment 1 length. */ +}; + +/* + * Reject an FCP PRLI + * + */ +struct __fc_prli_rjt { + uint8_t op_code; /* word 0 of prli rjt */ + uint8_t rsvd1[3]; + uint8_t rsvd2; /* word 1 of prli rjt */ + uint8_t reason; +#define PRLI_RJT_REASON 0x3 /* logical error */ + uint8_t expl; + uint8_t vendor; +#define PRLI_RJT_FCP_RESP_LEN 8 +}; + +/* + * Fibre Channel PRLI ACC + * Payload only + */ +struct __fc_prli_acc { +/* payload only. In big-endian format */ + uint8_t op_code; /* word 0 of prli acc */ + uint8_t page_length; +#define PRLI_FCP_PAGE_LENGTH 16 +#define PRLI_NVME_PAGE_LENGTH 20 + uint16_t pyld_length; + uint8_t type; /* word 1 of prli acc */ + uint8_t type_ext; + uint16_t common; +#define PRLI_EST_FCP_PAIR 0x2000 +#define PRLI_REQ_EXEC 0x0100 +#define PRLI_REQ_DOES_NOT_EXIST 0x0400 + union { + struct { + uint32_t reserved[2]; + uint32_t sp_info; + /* hard coding resp. target, rdxfr disabled.*/ +#define FCP_PRLI_SP 0x12 + } fcp; + struct { + uint32_t reserved[2]; + uint32_t sp_info; + uint16_t reserved2; + uint16_t first_burst; + } nvme; + }; +#define PRLI_ACC_FCP_RESP_LEN 20 +#define PRLI_ACC_NVME_RESP_LEN 24 + +}; + +/* + * ISP queue - PUREX IOCB entry structure definition + */ +#define PUREX_IOCB_TYPE 0x51 /* PUREX IOCB entry */ +struct purex_entry_24xx { + uint8_t entry_type; /* Entry type. */ + uint8_t entry_count; /* Entry count. */ + uint8_t sys_define; /* System defined. */ + uint8_t entry_status; /* Entry Status.
*/ + + uint16_t reserved1; + uint8_t vp_idx; + uint8_t reserved2; + + uint16_t status_flags; + uint16_t nport_handle; + + uint16_t frame_size; + uint16_t trunc_frame_size; + + uint32_t rx_xchg_addr; + + uint8_t d_id[3]; + uint8_t r_ctl; + + uint8_t s_id[3]; + uint8_t cs_ctl; + + uint8_t f_ctl[3]; + uint8_t type; + + uint16_t seq_cnt; + uint8_t df_ctl; + uint8_t seq_id; + + uint16_t rx_id; + uint16_t ox_id; + uint32_t param; + + uint8_t pyld[20]; +#define PUREX_PYLD_SIZE 44 /* Number of bytes (hdr+pyld) in this IOCB */ +}; + +#define PUREX_ENTRY_SIZE (sizeof(struct purex_entry_24xx)) + +#define CONT_SENSE_DATA 60 +/* + * Continuation Status Type 0 (IOCB_TYPE_STATUS_CONT = 0x10) + * Section 5.6 FW Interface Spec + */ +struct __status_cont { + uint8_t entry_type; /* Entry type. - 0x10 */ + uint8_t entry_count; /* Entry count. */ + uint8_t entry_status; /* Entry Status. */ + uint8_t reserved; + + uint8_t data[CONT_SENSE_DATA]; +} __packed; + + /* * ISP queue - ELS Pass-Through entry structure definition.
*/ diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h index 3673fcdb033a..531f7b049caa 100644 --- a/drivers/scsi/qla2xxx/qla_gbl.h +++ b/drivers/scsi/qla2xxx/qla_gbl.h @@ -313,7 +313,10 @@ extern int qla2x00_set_fw_options(scsi_qla_host_t *, uint16_t *); extern int -qla2x00_mbx_reg_test(scsi_qla_host_t *); +qla2x00_set_purex_mode(scsi_qla_host_t *vha); + +extern int +qla2x00_mbx_reg_test(scsi_qla_host_t *vha); extern int qla2x00_verify_checksum(scsi_qla_host_t *, uint32_t); @@ -899,4 +902,16 @@ void qlt_remove_target_resources(struct qla_hw_data *); void qlt_clr_qp_table(struct scsi_qla_host *vha); void qlt_set_mode(struct scsi_qla_host *); +extern int qla2x00_get_plogi_template(scsi_qla_host_t *vha, dma_addr_t buf, + uint16_t length); +extern void qlt_dequeue_purex(struct scsi_qla_host *vha); +int qla24xx_post_nvmet_newsess_work(struct scsi_qla_host *vha, port_id_t *id, + u8 *port_name, void *pla); +int qlt_send_els_resp(srb_t *sp, struct __els_pt *pkt); +extern void nvmet_release_sessions(struct scsi_qla_host *vha); +struct fc_port *qla_nvmet_find_sess_by_s_id(scsi_qla_host_t *vha, + const uint32_t s_id); +void qla_nvme_cmpl_io(struct srb_iocb *); +void qla24xx_nvmet_abts_resp_iocb(struct scsi_qla_host *vha, + struct abts_resp_to_24xx *pkt, struct req_que *req); #endif /* _QLA_GBL_H */ diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c index a24b0c2a2f00..55dc11d91b35 100644 --- a/drivers/scsi/qla2xxx/qla_gs.c +++ b/drivers/scsi/qla2xxx/qla_gs.c @@ -646,9 +646,11 @@ static int qla_async_rftid(scsi_qla_host_t *vha, port_id_t *d_id) ct_req->req.rft_id.port_id[0] = vha->d_id.b.domain; ct_req->req.rft_id.port_id[1] = vha->d_id.b.area; ct_req->req.rft_id.port_id[2] = vha->d_id.b.al_pa; - ct_req->req.rft_id.fc4_types[2] = 0x01; /* FCP-3 */ - if (vha->flags.nvme_enabled) + if (!vha->flags.nvmet_enabled) + ct_req->req.rft_id.fc4_types[2] = 0x01; /* FCP-3 */ + + if (vha->flags.nvme_enabled || vha->flags.nvmet_enabled) 
ct_req->req.rft_id.fc4_types[6] = 1; /* NVMe type 28h */ sp->u.iocb_cmd.u.ctarg.req_size = RFT_ID_REQ_SIZE; @@ -691,6 +693,10 @@ qla2x00_rff_id(scsi_qla_host_t *vha, u8 type) return (QLA_SUCCESS); } + /* only single mode for now */ + if ((vha->flags.nvmet_enabled) && (type == FC4_TYPE_FCP_SCSI)) + return (QLA_SUCCESS); + return qla_async_rffid(vha, &vha->d_id, qlt_rff_id(vha), FC4_TYPE_FCP_SCSI); } @@ -2355,7 +2361,7 @@ qla2x00_fdmiv2_rpa(scsi_qla_host_t *vha) eiter->a.fc4_types[2], eiter->a.fc4_types[1]); - if (vha->flags.nvme_enabled) { + if (vha->flags.nvme_enabled || vha->flags.nvmet_enabled) { eiter->a.fc4_types[6] = 1; /* NVMe type 28h */ ql_dbg(ql_dbg_disc, vha, 0x211f, "NVME FC4 Type = %02x 0x0 0x0 0x0 0x0 0x0.\n", @@ -2559,7 +2565,7 @@ qla2x00_fdmiv2_rpa(scsi_qla_host_t *vha) "Port Active FC4 Type = %02x %02x.\n", eiter->a.port_fc4_type[2], eiter->a.port_fc4_type[1]); - if (vha->flags.nvme_enabled) { + if (vha->flags.nvme_enabled || vha->flags.nvmet_enabled) { eiter->a.port_fc4_type[4] = 0; eiter->a.port_fc4_type[5] = 0; eiter->a.port_fc4_type[6] = 1; /* NVMe type 28h */ diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c index 41e5358d3739..841541201671 100644 --- a/drivers/scsi/qla2xxx/qla_init.c +++ b/drivers/scsi/qla2xxx/qla_init.c @@ -19,6 +19,7 @@ #include #include "qla_target.h" +#include "qla_nvmet.h" /* * QLogic ISP2x00 Hardware Support Function Prototypes. 
@@ -1094,6 +1095,23 @@ int qla24xx_post_gpdb_work(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt) return qla2x00_post_work(vha, e); } +/* NVMET */ +int qla24xx_post_nvmet_newsess_work(struct scsi_qla_host *vha, port_id_t *id, + u8 *port_name, void *pla) +{ + struct qla_work_evt *e; + + e = qla2x00_alloc_work(vha, QLA_EVT_NEW_NVMET_SESS); + if (!e) + return QLA_FUNCTION_FAILED; + + e->u.new_sess.id = *id; + e->u.new_sess.pla = pla; + memcpy(e->u.new_sess.port_name, port_name, WWN_SIZE); + + return qla2x00_post_work(vha, e); +} + int qla24xx_async_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt) { srb_t *sp; @@ -3591,6 +3609,13 @@ qla2x00_setup_chip(scsi_qla_host_t *vha) rval = qla2x00_get_fw_version(vha); if (rval != QLA_SUCCESS) goto failed; + + if (vha->flags.nvmet_enabled) { + ql_log(ql_log_info, vha, 0xffff, + "Enabling PUREX mode\n"); + qla2x00_set_purex_mode(vha); + } + ha->flags.npiv_supported = 0; if (IS_QLA2XXX_MIDTYPE(ha) && (ha->fw_attributes & BIT_2)) { @@ -3811,11 +3836,14 @@ qla24xx_update_fw_options(scsi_qla_host_t *vha) /* Move PUREX, ABTS RX & RIDA to ATIOQ */ if (ql2xmvasynctoatio && (IS_QLA83XX(ha) || IS_QLA27XX(ha))) { - if (qla_tgt_mode_enabled(vha) || - qla_dual_mode_enabled(vha)) + if ((qla_tgt_mode_enabled(vha) || qla_dual_mode_enabled(vha)) && + qlt_op_target_mode) { + ql_log(ql_log_info, vha, 0xffff, + "Moving Purex to ATIO Q\n"); ha->fw_options[2] |= BIT_11; - else + } else { ha->fw_options[2] &= ~BIT_11; + } } if (IS_QLA25XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha)) { @@ -5463,7 +5491,8 @@ qla2x00_configure_fabric(scsi_qla_host_t *vha) &vha->dpc_flags)) break; } - if (vha->flags.nvme_enabled) { + if (vha->flags.nvme_enabled || + vha->flags.nvmet_enabled) { if (qla2x00_rff_id(vha, FC_TYPE_NVME)) { ql_dbg(ql_dbg_disc, vha, 0x2049, "Register NVME FC Type Features failed.\n"); @@ -5631,7 +5660,8 @@ qla2x00_find_all_fabric_devs(scsi_qla_host_t *vha) new_fcport->nvme_flag = 0; new_fcport->fc4f_nvme = 0; - if 
(vha->flags.nvme_enabled && + if ((vha->flags.nvme_enabled || + vha->flags.nvmet_enabled) && swl[swl_idx].fc4f_nvme) { new_fcport->fc4f_nvme = swl[swl_idx].fc4f_nvme; @@ -8457,6 +8487,12 @@ qla81xx_update_fw_options(scsi_qla_host_t *vha) ha->fw_options[2] |= BIT_11; else ha->fw_options[2] &= ~BIT_11; + + if (ql2xnvmeenable == 2 && qlt_op_target_mode) { + /* Enabled PUREX node */ + ha->fw_options[1] |= FO1_ENABLE_PUREX; + ha->fw_options[2] |= BIT_11; + } } if (qla_tgt_mode_enabled(vha) || diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c index d73b04e40590..5c1833f030a4 100644 --- a/drivers/scsi/qla2xxx/qla_isr.c +++ b/drivers/scsi/qla2xxx/qla_isr.c @@ -23,6 +23,8 @@ static void qla2x00_status_cont_entry(struct rsp_que *, sts_cont_entry_t *); static int qla2x00_error_entry(scsi_qla_host_t *, struct rsp_que *, sts_entry_t *); +extern struct workqueue_struct *qla_nvmet_comp_wq; + /** * qla2100_intr_handler() - Process interrupts for the ISP2100 and ISP2200. * @irq: @@ -1583,6 +1585,12 @@ qla24xx_els_ct_entry(scsi_qla_host_t *vha, struct req_que *req, sp->name); sp->done(sp, res); return; + case SRB_NVME_ELS_RSP: + type = "nvme els"; + ql_log(ql_log_info, vha, 0xffff, + "Completing %s: (%p) type=%d.\n", type, sp, sp->type); + sp->done(sp, 0); + return; default: ql_dbg(ql_dbg_user, vha, 0x503e, "Unrecognized SRB: (%p) type=%d.\n", sp, sp->type); @@ -2456,6 +2464,13 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt) return; } + if (sp->type == SRB_NVMET_LS) { + ql_log(ql_log_info, vha, 0xffff, + "Dump NVME-LS response pkt\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)pkt, 64); + } + if (unlikely((state_flags & BIT_1) && (sp->type == SRB_BIDI_CMD))) { qla25xx_process_bidir_status_iocb(vha, pkt, req, handle); return; @@ -2825,6 +2840,12 @@ qla2x00_error_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, sts_entry_t *pkt) "iocb type %xh with error status %xh, handle %xh, rspq id %d\n", 
pkt->entry_type, pkt->entry_status, pkt->handle, rsp->id); + ql_log(ql_log_info, vha, 0xffff, + "(%s-%d)Dumping the NVMET-ERROR pkt IOCB\n", + __func__, __LINE__); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)pkt, 64); + if (que >= ha->max_req_queues || !ha->req_q_map[que]) goto fatal; @@ -2918,6 +2939,23 @@ qla24xx_abort_iocb_entry(scsi_qla_host_t *vha, struct req_que *req, sp->done(sp, 0); } +/* + * Post a completion to the NVMET layer + */ + +static void qla_nvmet_comp_work(struct work_struct *work) +{ + srb_t *sp = container_of(work, srb_t, nvmet_comp_work); + + sp->done(sp, sp->comp_status); +} + +/** + * qla24xx_nvme_ls4_iocb() - Process LS4 completions + * @vha: SCSI driver HA context + * @pkt: LS4 req packet + * @req: Request Queue + */ void qla24xx_nvme_ls4_iocb(struct scsi_qla_host *vha, struct pt_ls4_request *pkt, struct req_que *req) { @@ -2929,11 +2967,78 @@ void qla24xx_nvme_ls4_iocb(struct scsi_qla_host *vha, if (!sp) return; + ql_log(ql_log_info, vha, 0xc01f, + "Dumping response pkt for SRB type: %#x\n", sp->type); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)pkt, 16); + comp_status = le16_to_cpu(pkt->status); - sp->done(sp, comp_status); + sp->comp_status = comp_status; + /* Queue the work item */ + INIT_WORK(&sp->nvmet_comp_work, qla_nvmet_comp_work); + queue_work(qla_nvmet_comp_wq, &sp->nvmet_comp_work); } /** + * qla24xx_nvmet_fcp_iocb() - Process FCP completions + * @vha: SCSI driver HA context + * @pkt: FCP completion from firmware + * @req: Request Queue + */ +static void qla24xx_nvmet_fcp_iocb(struct scsi_qla_host *vha, + struct ctio_nvme_from_27xx *pkt, struct req_que *req) +{ + srb_t *sp; + const char func[] = "NVMET_FCP_IOCB"; + uint16_t comp_status; + + sp = qla2x00_get_sp_from_handle(vha, func, req, pkt); + if (!sp) + return; + + if ((pkt->entry_status) || (pkt->status != 1)) { + ql_log(ql_log_info, vha, 0xc01f, + "Dumping response pkt for SRB type: %#x\n", sp->type); + 
ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)pkt, 16); + } + + comp_status = le16_to_cpu(pkt->status); + sp->comp_status = comp_status; + /* Queue the work item */ + INIT_WORK(&sp->nvmet_comp_work, qla_nvmet_comp_work); + queue_work(qla_nvmet_comp_wq, &sp->nvmet_comp_work); +} + +/** + * qla24xx_nvmet_abts_resp_iocb() - Process ABTS completions + * @vha: SCSI driver HA context + * @pkt: ABTS completion from firmware + * @req: Request Queue + */ +void qla24xx_nvmet_abts_resp_iocb(struct scsi_qla_host *vha, + struct abts_resp_to_24xx *pkt, struct req_que *req) +{ + srb_t *sp; + const char func[] = "NVMET_ABTS_RESP_IOCB"; + uint16_t comp_status; + + sp = qla2x00_get_sp_from_handle(vha, func, req, pkt); + if (!sp) + return; + + ql_log(ql_log_info, vha, 0xc01f, + "Dumping response pkt for SRB type: %#x\n", sp->type); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)pkt, 16); + + comp_status = le16_to_cpu(pkt->entry_status); + sp->comp_status = comp_status; + /* Queue the work item */ + INIT_WORK(&sp->nvmet_comp_work, qla_nvmet_comp_work); + queue_work(qla_nvmet_comp_wq, &sp->nvmet_comp_work); +} +/** * qla24xx_process_response_queue() - Process response queue entries. 
* @vha: SCSI driver HA context * @rsp: response queue @@ -3011,6 +3116,11 @@ void qla24xx_process_response_queue(struct scsi_qla_host *vha, qla24xx_nvme_ls4_iocb(vha, (struct pt_ls4_request *)pkt, rsp->req); break; + case CTIO_NVME: + qla24xx_nvmet_fcp_iocb(vha, + (struct ctio_nvme_from_27xx *)pkt, + rsp->req); + break; case NOTIFY_ACK_TYPE: if (pkt->handle == QLA_TGT_SKIP_HANDLE) qlt_response_pkt_all_vps(vha, rsp, diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c index e016ee9c6d8e..0269566acae2 100644 --- a/drivers/scsi/qla2xxx/qla_mbx.c +++ b/drivers/scsi/qla2xxx/qla_mbx.c @@ -61,6 +61,7 @@ static struct rom_cmd { { MBC_READ_SFP }, { MBC_GET_RNID_PARAMS }, { MBC_GET_SET_ZIO_THRESHOLD }, + { MBC_SET_RNID_PARAMS }, }; static int is_rom_cmd(uint16_t cmd) @@ -1109,12 +1110,15 @@ qla2x00_get_fw_version(scsi_qla_host_t *vha) * FW supports nvme and driver load parameter requested nvme. * BIT 26 of fw_attributes indicates NVMe support. */ - if ((ha->fw_attributes_h & 0x400) && ql2xnvmeenable) { + if ((ha->fw_attributes_h & 0x400) && (ql2xnvmeenable == 1)) { vha->flags.nvme_enabled = 1; ql_log(ql_log_info, vha, 0xd302, "%s: FC-NVMe is Enabled (0x%x)\n", __func__, ha->fw_attributes_h); } + + if ((ha->fw_attributes_h & 0x400) && (ql2xnvmeenable == 2)) + vha->flags.nvmet_enabled = 1; } if (IS_QLA27XX(ha)) { @@ -1189,6 +1193,101 @@ qla2x00_get_fw_options(scsi_qla_host_t *vha, uint16_t *fwopts) return rval; } +#define OPCODE_PLOGI_TMPLT 7 +int +qla2x00_get_plogi_template(scsi_qla_host_t *vha, dma_addr_t buf, + uint16_t length) +{ + mbx_cmd_t mc; + mbx_cmd_t *mcp = &mc; + int rval; + + mcp->mb[0] = MBC_GET_RNID_PARAMS; + mcp->mb[1] = OPCODE_PLOGI_TMPLT << 8; + mcp->mb[2] = MSW(LSD(buf)); + mcp->mb[3] = LSW(LSD(buf)); + mcp->mb[6] = MSW(MSD(buf)); + mcp->mb[7] = LSW(MSD(buf)); + mcp->mb[8] = length; + mcp->out_mb = MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0; + mcp->in_mb = MBX_1|MBX_0; + mcp->buf_size = length; + mcp->flags = MBX_DMA_IN; + mcp->tov = 
MBX_TOV_SECONDS; + rval = qla2x00_mailbox_command(vha, mcp); + + ql_dbg(ql_dbg_mbx, vha, 0x118f, + "%s: %s rval=%x mb[0]=%x,%x.\n", __func__, + (rval == QLA_SUCCESS) ? "Success" : "Failed", + rval, mcp->mb[0], mcp->mb[1]); + + return rval; +} + +#define OPCODE_LIST_LENGTH 32 /* ELS opcode list */ +#define OPCODE_ELS_CMD 5 /* MBx1 cmd param */ +/* + * qla2x00_set_purex_mode + * Enable purex mode for ELS commands + * + * Input: + * vha = adapter block pointer. + * + * Returns: + * qla2x00 local function return status code. + * + * Context: + * Kernel context. + */ +int +qla2x00_set_purex_mode(scsi_qla_host_t *vha) +{ + int rval; + mbx_cmd_t mc; + mbx_cmd_t *mcp = &mc; + uint8_t *els_cmd_map; + dma_addr_t els_cmd_map_dma; + struct qla_hw_data *ha = vha->hw; + + ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1197, + "Entered %s.\n", __func__); + + els_cmd_map = dma_zalloc_coherent(&ha->pdev->dev, OPCODE_LIST_LENGTH, + &els_cmd_map_dma, GFP_KERNEL); + if (!els_cmd_map) { + ql_log(ql_log_warn, vha, 0x7101, + "Failed to allocate RDP els command param.\n"); + return QLA_MEMORY_ALLOC_FAILED; + } + + els_cmd_map[0] = 0x28; /* enable PLOGI and LOGO ELS */ + els_cmd_map[4] = 0x13; /* enable PRLI ELS */ + els_cmd_map[10] = 0x5; + + mcp->mb[0] = MBC_SET_RNID_PARAMS; + mcp->mb[1] = OPCODE_ELS_CMD << 8; + mcp->mb[2] = MSW(LSD(els_cmd_map_dma)); + mcp->mb[3] = LSW(LSD(els_cmd_map_dma)); + mcp->mb[6] = MSW(MSD(els_cmd_map_dma)); + mcp->mb[7] = LSW(MSD(els_cmd_map_dma)); + mcp->out_mb = MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0; + mcp->in_mb = MBX_1|MBX_0; + mcp->tov = MBX_TOV_SECONDS; + mcp->flags = MBX_DMA_OUT; + mcp->buf_size = OPCODE_LIST_LENGTH; + rval = qla2x00_mailbox_command(vha, mcp); + + ql_dbg(ql_dbg_mbx, vha, 0x118d, + "%s: %s rval=%x mb[0]=%x,%x.\n", __func__, + (rval == QLA_SUCCESS) ? 
"Success" : "Failed", + rval, mcp->mb[0], mcp->mb[1]); + + dma_free_coherent(&ha->pdev->dev, OPCODE_LIST_LENGTH, + els_cmd_map, els_cmd_map_dma); + + return rval; +} + /* * qla2x00_set_fw_options diff --git a/drivers/scsi/qla2xxx/qla_nvme.h b/drivers/scsi/qla2xxx/qla_nvme.h index 4941d107fb1c..0902c3a27adc 100644 --- a/drivers/scsi/qla2xxx/qla_nvme.h +++ b/drivers/scsi/qla2xxx/qla_nvme.h @@ -106,39 +106,6 @@ struct pt_ls4_request { uint32_t dseg1_address[2]; uint32_t dseg1_len; }; - -#define PT_LS4_UNSOL 0x56 /* pass-up unsolicited rec FC-NVMe request */ -struct pt_ls4_rx_unsol { - uint8_t entry_type; - uint8_t entry_count; - uint16_t rsvd0; - uint16_t rsvd1; - uint8_t vp_index; - uint8_t rsvd2; - uint16_t rsvd3; - uint16_t nport_handle; - uint16_t frame_size; - uint16_t rsvd4; - uint32_t exchange_address; - uint8_t d_id[3]; - uint8_t r_ctl; - uint8_t s_id[3]; - uint8_t cs_ctl; - uint8_t f_ctl[3]; - uint8_t type; - uint16_t seq_cnt; - uint8_t df_ctl; - uint8_t seq_id; - uint16_t rx_id; - uint16_t ox_id; - uint32_t param; - uint32_t desc0; -#define PT_LS4_PAYLOAD_OFFSET 0x2c -#define PT_LS4_FIRST_PACKET_LEN 20 - uint32_t desc_len; - uint32_t payload[3]; -}; - /* * Global functions prototype in qla_nvme.c source file. */ diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c index d21dd7700d5d..d10ef1577197 100644 --- a/drivers/scsi/qla2xxx/qla_os.c +++ b/drivers/scsi/qla2xxx/qla_os.c @@ -137,13 +137,17 @@ MODULE_PARM_DESC(ql2xenabledif, #if (IS_ENABLED(CONFIG_NVME_FC)) int ql2xnvmeenable = 1; +#elif (IS_ENABLED(CONFIG_NVME_TARGET_FC)) +int ql2xnvmeenable = 2; #else int ql2xnvmeenable; #endif module_param(ql2xnvmeenable, int, 0644); MODULE_PARM_DESC(ql2xnvmeenable, - "Enables NVME support. " - "0 - no NVMe. Default is Y"); + "Enables NVME support.\n" + "0 - no NVMe.\n" + "1 - initiator,\n" + "2 - target. 
Default is 1\n"); int ql2xenablehba_err_chk = 2; module_param(ql2xenablehba_err_chk, int, S_IRUGO|S_IWUSR); @@ -3421,6 +3425,9 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id) qlt_add_target(ha, base_vha); + if (ql2xnvmeenable == 2) + qla_nvmet_create_targetport(base_vha); + clear_bit(PFLG_DRIVER_PROBING, &base_vha->pci_flags); if (test_bit(UNLOADING, &base_vha->dpc_flags)) @@ -3701,6 +3708,8 @@ qla2x00_remove_one(struct pci_dev *pdev) qla_nvme_delete(base_vha); + qla_nvmet_delete(base_vha); + dma_free_coherent(&ha->pdev->dev, base_vha->gnl.size, base_vha->gnl.l, base_vha->gnl.ldma); @@ -5024,6 +5033,53 @@ static void qla_sp_retry(struct scsi_qla_host *vha, struct qla_work_evt *e) qla24xx_sp_unmap(vha, sp); } } +/* NVMET */ +static +void qla24xx_create_new_nvmet_sess(struct scsi_qla_host *vha, + struct qla_work_evt *e) +{ + unsigned long flags; + fc_port_t *fcport = NULL; + + spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags); + fcport = qla2x00_find_fcport_by_wwpn(vha, e->u.new_sess.port_name, 1); + if (fcport) { + ql_log(ql_log_info, vha, 0x11020, + "Found fcport: %p for WWN: %8phC\n", fcport, + e->u.new_sess.port_name); + fcport->d_id = e->u.new_sess.id; + + /* Session existing with No loop_ID assigned */ + if (fcport->loop_id == FC_NO_LOOP_ID) { + fcport->loop_id = qla2x00_find_new_loop_id(vha, fcport); + ql_log(ql_log_info, vha, 0x11021, + "Allocated new loop_id: %#x for fcport: %p\n", + fcport->loop_id, fcport); + fcport->fw_login_state = DSC_LS_PLOGI_PEND; + } + } else { + fcport = qla2x00_alloc_fcport(vha, GFP_KERNEL); + if (fcport) { + fcport->d_id = e->u.new_sess.id; + fcport->loop_id = qla2x00_find_new_loop_id(vha, fcport); + ql_log(ql_log_info, vha, 0x11022, + "Allocated new loop_id: %#x for fcport: %p\n", + fcport->loop_id, fcport); + + fcport->scan_state = QLA_FCPORT_FOUND; + fcport->flags |= FCF_FABRIC_DEVICE; + fcport->fw_login_state = DSC_LS_PLOGI_PEND; + + memcpy(fcport->port_name, e->u.new_sess.port_name, + WWN_SIZE); + + 
list_add_tail(&fcport->list, &vha->vp_fcports); + } + } + spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags); + + complete(&vha->purex_plogi_sess); +} void qla2x00_do_work(struct scsi_qla_host *vha) @@ -5129,6 +5185,10 @@ qla2x00_do_work(struct scsi_qla_host *vha) qla24xx_els_dcmd2_iocb(vha, ELS_DCMD_PLOGI, e->u.fcport.fcport, false); break; + /* FC-NVMe Target */ + case QLA_EVT_NEW_NVMET_SESS: + qla24xx_create_new_nvmet_sess(vha, e); + break; } if (e->flags & QLA_EVT_FLAG_FREE) kfree(e); @@ -6100,6 +6160,12 @@ qla2x00_do_dpc(void *data) set_bit(ISP_ABORT_NEEDED, &base_vha->dpc_flags); } + if (test_and_clear_bit(NVMET_PUREX, &base_vha->dpc_flags)) { + ql_log(ql_log_info, base_vha, 0x11022, + "qla2xxx-nvmet: Received a frame on the wire\n"); + qlt_dequeue_purex(base_vha); + } + if (test_and_clear_bit (ISP_ABORT_NEEDED, &base_vha->dpc_flags) && !test_bit(UNLOADING, &base_vha->dpc_flags)) { @@ -6273,6 +6339,13 @@ qla2x00_do_dpc(void *data) ha->nvme_last_rptd_aen); } } +#if (IS_ENABLED(CONFIG_NVME_TARGET_FC)) + if (test_and_clear_bit(NVMET_PUREX, &base_vha->dpc_flags)) { + ql_log(ql_log_info, base_vha, 0x11025, + "nvmet: Received a frame on the wire\n"); + qlt_dequeue_purex(base_vha); + } +#endif if (test_and_clear_bit(SET_ZIO_THRESHOLD_NEEDED, &base_vha->dpc_flags)) { diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c index a69ec4519d81..6f61d4e04902 100644 --- a/drivers/scsi/qla2xxx/qla_target.c +++ b/drivers/scsi/qla2xxx/qla_target.c @@ -40,6 +40,7 @@ #include #include "qla_def.h" +#include "qla_nvmet.h" #include "qla_target.h" static int ql2xtgt_tape_enable; @@ -78,6 +79,8 @@ int ql2x_ini_mode = QLA2XXX_INI_MODE_EXCLUSIVE; static int qla_sam_status = SAM_STAT_BUSY; static int tc_sam_status = SAM_STAT_TASK_SET_FULL; /* target core */ +int qlt_op_target_mode; + /* * From scsi/fc/fc_fcp.h */ @@ -149,11 +152,16 @@ static inline uint32_t qlt_make_handle(struct qla_qpair *); */ static struct kmem_cache *qla_tgt_mgmt_cmd_cachep; struct 
kmem_cache *qla_tgt_plogi_cachep; +static struct kmem_cache *qla_tgt_purex_plogi_cachep; static mempool_t *qla_tgt_mgmt_cmd_mempool; static struct workqueue_struct *qla_tgt_wq; +static struct workqueue_struct *qla_nvmet_wq; static DEFINE_MUTEX(qla_tgt_mutex); static LIST_HEAD(qla_tgt_glist); +/* WQ for nvmet completions */ +struct workqueue_struct *qla_nvmet_comp_wq; + static const char *prot_op_str(u32 prot_op) { switch (prot_op) { @@ -348,13 +356,653 @@ void qlt_unknown_atio_work_fn(struct work_struct *work) qlt_try_to_dequeue_unknown_atios(vha, 0); } +#define ELS_RJT 0x01 +#define ELS_ACC 0x02 + +struct fc_port *qla_nvmet_find_sess_by_s_id( + scsi_qla_host_t *vha, + const uint32_t s_id) +{ + struct fc_port *sess = NULL, *other_sess; + uint32_t other_sid; + + list_for_each_entry(other_sess, &vha->vp_fcports, list) { + other_sid = other_sess->d_id.b.domain << 16 | + other_sess->d_id.b.area << 8 | + other_sess->d_id.b.al_pa; + + if (other_sid == s_id) { + sess = other_sess; + break; + } + } + return sess; +} + +/* Send an ELS response */ +int qlt_send_els_resp(srb_t *sp, struct __els_pt *els_pkt) +{ + struct purex_entry_24xx *purex = (struct purex_entry_24xx *) + sp->u.snvme_els.ptr; + dma_addr_t udma = sp->u.snvme_els.dma_addr; + struct fc_port *fcport; + port_id_t port_id; + uint16_t loop_id; + + port_id.b.domain = purex->s_id[2]; + port_id.b.area = purex->s_id[1]; + port_id.b.al_pa = purex->s_id[0]; + port_id.b.rsvd_1 = 0; + + fcport = qla2x00_find_fcport_by_nportid(sp->vha, &port_id, 1); + if (fcport) + loop_id = fcport->loop_id; + else + /* No session with this n_port id; use the unassigned handle */ + loop_id = 0xFFFF; + + ql_log(ql_log_info, sp->vha, 0xfff9, + "sp: %p, purex: %p, udma: %pad, loop_id: 0x%x\n", + sp, purex, &udma, loop_id); + + els_pkt->entry_type = ELS_IOCB_TYPE; + els_pkt->entry_count = 1; + + els_pkt->handle = sp->handle; + els_pkt->nphdl = cpu_to_le16(loop_id); + els_pkt->tx_dsd_cnt = cpu_to_le16(1); + els_pkt->vp_index = purex->vp_idx; + els_pkt->sof = EST_SOFI3;
+ els_pkt->rcv_exchg_id = cpu_to_le32(purex->rx_xchg_addr); + els_pkt->op_code = sp->cmd_type; + els_pkt->did_lo = cpu_to_le16(purex->s_id[0] | (purex->s_id[1] << 8)); + els_pkt->did_hi = purex->s_id[2]; + els_pkt->sid_hi = purex->d_id[2]; + els_pkt->sid_lo = cpu_to_le16(purex->d_id[0] | (purex->d_id[1] << 8)); + + if (sp->gen2 == ELS_ACC) + els_pkt->cntl_flags = cpu_to_le16(EPD_ELS_ACC); + else + els_pkt->cntl_flags = cpu_to_le16(EPD_ELS_RJT); + + els_pkt->tx_bc = cpu_to_le32(sp->gen1); + els_pkt->tx_dsd[0] = cpu_to_le32(LSD(udma)); + els_pkt->tx_dsd[1] = cpu_to_le32(MSD(udma)); + els_pkt->tx_dsd_len = cpu_to_le32(sp->gen1); + /* Memory Barrier */ + wmb(); + + ql_log(ql_log_info, sp->vha, 0x11030, "Dumping PLOGI ELS\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, sp->vha, 0xffff, + (uint8_t *)els_pkt, sizeof(*els_pkt)); + + return 0; +} + +static void qlt_nvme_els_done(void *s, int res) +{ + struct srb *sp = s; + + ql_log(ql_log_info, sp->vha, 0x11031, + "Done with NVME els command\n"); + + ql_log(ql_log_info, sp->vha, 0x11032, + "sp: %p vha: %p, dma_ptr: %p, dma_addr: %pad, len: %#x\n", + sp, sp->vha, sp->u.snvme_els.dma_ptr, &sp->u.snvme_els.dma_addr, + sp->gen1); + + qla2x00_rel_sp(sp); +} + +static int qlt_send_plogi_resp(struct scsi_qla_host *vha, uint8_t op_code, + struct purex_entry_24xx *purex, struct fc_port *fcport) +{ + int ret, rval, i; + dma_addr_t plogi_ack_udma = vha->vha_tgt.qla_tgt->nvme_els_rsp; + void *plogi_ack_buf = vha->vha_tgt.qla_tgt->nvme_els_ptr; + uint8_t *tmp; + uint32_t *opcode; + srb_t *sp; + + /* Alloc SRB structure */ + sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL); + if (!sp) { + ql_log(ql_log_info, vha, 0x11033, + "Failed to allocate SRB\n"); + return -ENOMEM; + } + + sp->type = SRB_NVME_ELS_RSP; + sp->done = qlt_nvme_els_done; + sp->vha = vha; + + ql_log(ql_log_info, vha, 0x11034, + "sp: %p, vha: %p, plogi_ack_buf: %p\n", + sp, vha, plogi_ack_buf); + + sp->u.snvme_els.dma_addr = plogi_ack_udma; + sp->u.snvme_els.dma_ptr = 
plogi_ack_buf; + sp->gen1 = 116; + sp->gen2 = ELS_ACC; + sp->u.snvme_els.ptr = (struct purex_entry_24xx *)purex; + sp->cmd_type = ELS_PLOGI; + + tmp = (uint8_t *)plogi_ack_udma; + + tmp += 4; /* fw doesn't return 1st 4 bytes where opcode goes */ + + ret = qla2x00_get_plogi_template(vha, (dma_addr_t)tmp, (116/4 - 1)); + if (ret) { + ql_log(ql_log_warn, vha, 0x11035, + "Failed to get plogi template\n"); + return -ENOMEM; + } + + opcode = (uint32_t *) plogi_ack_buf; + *opcode = cpu_to_be32(ELS_ACC << 24); + + for (i = 0; i < 0x1c; i++) { + ++opcode; + *opcode = cpu_to_be32(*opcode); + } + + ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xfff3, + "Dumping the PLOGI from fw\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_verbose, vha, 0x70cf, + (uint8_t *)plogi_ack_buf, 116); + + rval = qla2x00_start_sp(sp); + if (rval != QLA_SUCCESS) + qla2x00_rel_sp(sp); + + return 0; +} + +static struct qlt_purex_plogi_ack_t * +qlt_plogi_find_add(struct scsi_qla_host *vha, port_id_t *id, + struct __fc_plogi *rcvd_plogi) +{ + struct qlt_purex_plogi_ack_t *pla; + + list_for_each_entry(pla, &vha->plogi_ack_list, list) { + if (pla->id.b24 == id->b24) + return pla; + } + + pla = kmem_cache_zalloc(qla_tgt_purex_plogi_cachep, GFP_ATOMIC); + if (!pla) { + ql_dbg(ql_dbg_async, vha, 0x5088, + "qla_target(%d): Allocation of plogi_ack failed\n", + vha->vp_idx); + return NULL; + } + + pla->id = *id; + memcpy(&pla->rcvd_plogi, rcvd_plogi, sizeof(struct __fc_plogi)); + ql_log(ql_log_info, vha, 0xf101, + "New session(%p) created for port: %#x\n", + pla, pla->id.b24); + + list_add_tail(&pla->list, &vha->plogi_ack_list); + + return pla; +} + +static void __swap_wwn(uint8_t *ptr, uint32_t size) +{ + uint32_t *iptr = (uint32_t *)ptr; + uint32_t *optr = (uint32_t *)ptr; + uint32_t i = size >> 2; + + for (; i ; i--) + *optr++ = be32_to_cpu(*iptr++); +} + +static int abort_cmds_for_s_id(struct scsi_qla_host *vha, port_id_t *s_id); +/* + * Parse the PLOGI from the peer port + * Retrieve WWPN, WWNN from the payload + 
* Create an fc port if it is a new WWN + * else clean up the prev exchange + * Return a response + */ +static void qlt_process_plogi(struct scsi_qla_host *vha, + struct purex_entry_24xx *purex, void *buf) +{ + uint64_t pname, nname; + struct __fc_plogi *rcvd_plogi = (struct __fc_plogi *)buf; + struct qla_tgt *tgt = vha->vha_tgt.qla_tgt; + uint16_t loop_id; + unsigned long flags; + struct fc_port *sess = NULL, *conflict_sess = NULL; + struct qlt_purex_plogi_ack_t *pla; + port_id_t port_id; + int sess_handling = 0; + + port_id.b.domain = purex->s_id[2]; + port_id.b.area = purex->s_id[1]; + port_id.b.al_pa = purex->s_id[0]; + port_id.b.rsvd_1 = 0; + + if (IS_SW_RESV_ADDR(port_id)) { + ql_log(ql_log_info, vha, 0x11036, + "Received plogi from switch, just send an ACC\n"); + goto send_plogi_resp; + } + + loop_id = le16_to_cpu(purex->nport_handle); + + /* Clean up prev commands if any */ + if (sess_handling) { + ql_log(ql_log_info, vha, 0x11037, + "%s %d Cleaning up prev commands\n", + __func__, __LINE__); + abort_cmds_for_s_id(vha, &port_id); + } + + __swap_wwn(rcvd_plogi->pname, 4); + __swap_wwn(&rcvd_plogi->pname[4], 4); + pname = wwn_to_u64(rcvd_plogi->pname); + + __swap_wwn(rcvd_plogi->nname, 4); + __swap_wwn(&rcvd_plogi->nname[4], 4); + nname = wwn_to_u64(rcvd_plogi->nname); + + ql_log(ql_log_info, vha, 0x11038, + "%s %d, pname:%llx, nname:%llx port_id: %#x\n", + __func__, __LINE__, pname, nname, loop_id); + + /* Invalidate other sessions if any */ + spin_lock_irqsave(&tgt->ha->tgt.sess_lock, flags); + sess = qlt_find_sess_invalidate_other(vha, pname, + port_id, loop_id, &conflict_sess); + spin_unlock_irqrestore(&tgt->ha->tgt.sess_lock, flags); + + /* Add the inbound plogi (if from a new device) to the list */ + pla = qlt_plogi_find_add(vha, &port_id, rcvd_plogi); + + /* If there is no existing session, create one */ + if (unlikely(!sess)) { + ql_log(ql_log_info, vha, 0xf102, + "Creating a new session\n"); + init_completion(&vha->purex_plogi_sess); + 
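An aside on the byte order used when the handlers above rebuild the 24-bit FC port ID: the PUREX IOCB stores the source ID low byte first (s_id[0] = AL_PA, s_id[1] = area, s_id[2] = domain). A minimal standalone sketch of that packing, outside the driver (the helper name is illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper: pack the 3-byte s_id from a PUREX IOCB into a
 * 24-bit port id, matching how the driver fills port_id.b.domain/area/
 * al_pa and computes look_up_sid in the LOGO/PRLI handlers. */
static uint32_t sid_to_u24(const uint8_t s_id[3])
{
	return ((uint32_t)s_id[2] << 16) |	/* domain */
	       ((uint32_t)s_id[1] << 8)  |	/* area   */
	        (uint32_t)s_id[0];		/* al_pa  */
}
```

The same shift-and-or expression appears verbatim where the driver builds look_up_sid before calling qla_nvmet_find_sess_by_s_id().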
qla24xx_post_nvmet_newsess_work(vha, &port_id, + rcvd_plogi->pname, pla); + wait_for_completion_timeout(&vha->purex_plogi_sess, 500); + /* Send a PLOGI response */ + goto send_plogi_resp; + } else { + /* Session existing with No loop_ID assigned */ + if (sess->loop_id == FC_NO_LOOP_ID) { + sess->loop_id = qla2x00_find_new_loop_id(vha, sess); + ql_log(ql_log_info, vha, 0x11039, + "Allocated new loop_id: %#x for fcport: %p\n", + sess->loop_id, sess); + } + sess->d_id = port_id; + + sess->fw_login_state = DSC_LS_PLOGI_PEND; + } +send_plogi_resp: + /* Send a PLOGI response */ + qlt_send_plogi_resp(vha, ELS_PLOGI, purex, sess); +} + +static int qlt_process_logo(struct scsi_qla_host *vha, + struct purex_entry_24xx *purex, void *buf) +{ + struct __fc_logo_acc *logo_acc; + dma_addr_t logo_ack_udma = vha->vha_tgt.qla_tgt->nvme_els_rsp; + void *logo_ack_buf = vha->vha_tgt.qla_tgt->nvme_els_ptr; + srb_t *sp; + int rval; + uint32_t look_up_sid; + fc_port_t *sess = NULL; + port_id_t port_id; + + port_id.b.domain = purex->s_id[2]; + port_id.b.area = purex->s_id[1]; + port_id.b.al_pa = purex->s_id[0]; + port_id.b.rsvd_1 = 0; + + if (!IS_SW_RESV_ADDR(port_id)) { + look_up_sid = purex->s_id[2] << 16 | purex->s_id[1] << 8 | + purex->s_id[0]; + ql_log(ql_log_info, vha, 0x11040, + "%s - Look UP sid: %#x\n", __func__, look_up_sid); + + sess = qla_nvmet_find_sess_by_s_id(vha, look_up_sid); + if (unlikely(!sess)) + WARN_ON(1); + } + + /* Alloc SRB structure */ + sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL); + if (!sp) { + ql_log(ql_log_info, vha, 0x11041, + "Failed to allocate SRB\n"); + return -ENOMEM; + } + + sp->type = SRB_NVME_ELS_RSP; + sp->done = qlt_nvme_els_done; + sp->vha = vha; + sp->fcport = sess; + + ql_log(ql_log_info, vha, 0x11042, + "sp: %p, vha: %p, logo_ack_buf: %p\n", + sp, vha, logo_ack_buf); + + logo_acc = (struct __fc_logo_acc *)logo_ack_buf; + memset(logo_acc, 0, sizeof(*logo_acc)); + logo_acc->op_code = ELS_ACC; + + /* Send response */ + sp->u.snvme_els.dma_addr = 
logo_ack_udma; + sp->u.snvme_els.dma_ptr = logo_ack_buf; + sp->gen1 = sizeof(struct __fc_logo_acc); + sp->gen2 = ELS_ACC; + sp->u.snvme_els.ptr = (struct purex_entry_24xx *)purex; + sp->cmd_type = ELS_LOGO; + + rval = qla2x00_start_sp(sp); + if (rval != QLA_SUCCESS) + qla2x00_rel_sp(sp); + + return 0; +} + +static int qlt_process_prli(struct scsi_qla_host *vha, + struct purex_entry_24xx *purex, void *buf) +{ + struct __fc_prli *prli = (struct __fc_prli *)buf; + struct __fc_prli_acc *prli_acc; + struct __fc_prli_rjt *prli_rej; + dma_addr_t prli_ack_udma = vha->vha_tgt.qla_tgt->nvme_els_rsp; + void *prli_ack_buf = vha->vha_tgt.qla_tgt->nvme_els_ptr; + srb_t *sp; + struct fc_port *sess = NULL; + int rval; + uint32_t look_up_sid; + port_id_t port_id; + + port_id.b.domain = purex->s_id[2]; + port_id.b.area = purex->s_id[1]; + port_id.b.al_pa = purex->s_id[0]; + port_id.b.rsvd_1 = 0; + + if (!IS_SW_RESV_ADDR(port_id)) { + look_up_sid = purex->s_id[2] << 16 | purex->s_id[1] << 8 | + purex->s_id[0]; + ql_log(ql_log_info, vha, 0x11043, + "%s - Look UP sid: %#x\n", __func__, look_up_sid); + + sess = qla_nvmet_find_sess_by_s_id(vha, look_up_sid); + if (unlikely(!sess)) + WARN_ON(1); + } + /* Alloc SRB structure */ + sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL); + if (!sp) { + ql_log(ql_log_info, vha, 0x11044, + "Failed to allocate SRB\n"); + return -ENOMEM; + } + + sp->type = SRB_NVME_ELS_RSP; + sp->done = qlt_nvme_els_done; + sp->vha = vha; + sp->fcport = sess; + + ql_log(ql_log_info, vha, 0x11045, + "sp: %p, vha: %p, prli_ack_buf: %p, prli_ack_udma: %pad\n", + sp, vha, prli_ack_buf, &prli_ack_udma); + + memset(prli_ack_buf, 0, sizeof(struct __fc_prli_acc)); + + /* Parse PRLI */ + if (prli->prli_type == PRLI_TYPE_FCP) { + /* Send a RJT for FCP */ + prli_rej = (struct __fc_prli_rjt *)prli_ack_buf; + prli_rej->op_code = ELS_RJT; + prli_rej->reason = PRLI_RJT_REASON; + } else if (prli->prli_type == PRLI_TYPE_NVME) { + uint32_t spinfo; + + prli_acc = (struct __fc_prli_acc 
*)prli_ack_buf; + prli_acc->op_code = ELS_ACC; + prli_acc->type = PRLI_TYPE_NVME; + prli_acc->page_length = PRLI_NVME_PAGE_LENGTH; + prli_acc->common = cpu_to_be16(PRLI_REQ_EXEC); + prli_acc->pyld_length = cpu_to_be16(PRLI_ACC_NVME_RESP_LEN); + spinfo = NVME_PRLI_DISC | NVME_PRLI_TRGT; + prli_acc->nvme.sp_info = cpu_to_be32(spinfo); + } + + /* Send response */ + sp->u.snvme_els.dma_addr = prli_ack_udma; + sp->u.snvme_els.dma_ptr = prli_ack_buf; + + if (prli->prli_type == PRLI_TYPE_FCP) { + sp->gen1 = sizeof(struct __fc_prli_rjt); + sp->gen2 = ELS_RJT; + } else if (prli->prli_type == PRLI_TYPE_NVME) { + sp->gen1 = sizeof(struct __fc_prli_acc); + sp->gen2 = ELS_ACC; + } + + sp->u.snvme_els.ptr = (struct purex_entry_24xx *)purex; + sp->cmd_type = ELS_PRLI; + + rval = qla2x00_start_sp(sp); + if (rval != QLA_SUCCESS) + qla2x00_rel_sp(sp); + + return 0; +} + +static void *qlt_get_next_atio_pkt(struct scsi_qla_host *vha) +{ + struct qla_hw_data *ha = vha->hw; + void *pkt; + + ha->tgt.atio_ring_index++; + if (ha->tgt.atio_ring_index == ha->tgt.atio_q_length) { + ha->tgt.atio_ring_index = 0; + ha->tgt.atio_ring_ptr = ha->tgt.atio_ring; + } else { + ha->tgt.atio_ring_ptr++; + } + pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; + + return pkt; +} + +static void qlt_process_purex(struct scsi_qla_host *vha, + struct qla_tgt_purex_op *p) +{ + struct atio_from_isp *atio = &p->atio; + struct purex_entry_24xx *purex = + (struct purex_entry_24xx *)&atio->u.raw; + uint16_t len = purex->frame_size; + + ql_log(ql_log_info, vha, 0xf100, + "Purex IOCB: EC:%#x, Len:%#x ELS_OP:%#x oxid:%#x rxid:%#x\n", + purex->entry_count, len, purex->pyld[3], + purex->ox_id, purex->rx_id); + + switch (purex->pyld[3]) { + case ELS_PLOGI: + qlt_process_plogi(vha, purex, p->purex_pyld); + break; + case ELS_PRLI: + qlt_process_prli(vha, purex, p->purex_pyld); + break; + case ELS_LOGO: + qlt_process_logo(vha, purex, p->purex_pyld); + break; + default: + ql_log(ql_log_warn, vha, 0x11046, + "Unexpected ELS 
0x%x\n", purex->pyld[3]); + break; + } +} + +void qlt_dequeue_purex(struct scsi_qla_host *vha) +{ + struct qla_tgt_purex_op *p, *t; + unsigned long flags; + + list_for_each_entry_safe(p, t, &vha->purex_atio_list, cmd_list) { + ql_log(ql_log_info, vha, 0xff1e, + "Processing ATIO %p\n", &p->atio); + + qlt_process_purex(vha, p); + spin_lock_irqsave(&vha->cmd_list_lock, flags); + list_del(&p->cmd_list); + spin_unlock_irqrestore(&vha->cmd_list_lock, flags); + kfree(p->purex_pyld); + kfree(p); + } +} + +static void qlt_queue_purex(scsi_qla_host_t *vha, + struct atio_from_isp *atio) +{ + struct qla_tgt_purex_op *p; + unsigned long flags; + struct purex_entry_24xx *purex = + (struct purex_entry_24xx *)&atio->u.raw; + uint16_t len = purex->frame_size; + uint8_t *purex_pyld_tmp; + + p = kzalloc(sizeof(*p), GFP_ATOMIC); + if (p == NULL) + goto out; + + p->vha = vha; + memcpy(&p->atio, atio, sizeof(*atio)); + + ql_dbg(ql_dbg_disc + ql_dbg_buffer, vha, 0xff11, + "Dumping the Purex IOCB received\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0xe012, + (uint8_t *)purex, 64); + + p->purex_pyld = kzalloc(sizeof(purex->entry_count) * 64, GFP_ATOMIC); + if (p->purex_pyld == NULL) { + kfree(p); + goto out; + } + purex_pyld_tmp = (uint8_t *)p->purex_pyld; + p->purex_pyld_len = len; + + if (len < PUREX_PYLD_SIZE) + len = PUREX_PYLD_SIZE; + + memcpy(p->purex_pyld, &purex->d_id, PUREX_PYLD_SIZE); + purex_pyld_tmp += PUREX_PYLD_SIZE; + len -= PUREX_PYLD_SIZE; + + while (len > 0) { + int cpylen; + struct __status_cont *cont_atio; + + cont_atio = (struct __status_cont *)qlt_get_next_atio_pkt(vha); + cpylen = len > CONT_SENSE_DATA ? 
CONT_SENSE_DATA : len; + ql_log(ql_log_info, vha, 0xff12, + "cont_atio: %p, cpylen: %#x\n", cont_atio, cpylen); + + memcpy(purex_pyld_tmp, &cont_atio->data[0], cpylen); + + purex_pyld_tmp += cpylen; + len -= cpylen; + } + + ql_dbg(ql_dbg_disc + ql_dbg_buffer, vha, 0xff11, + "Dumping the Purex IOCB(%p) received\n", p->purex_pyld); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0xe011, + (uint8_t *)p->purex_pyld, p->purex_pyld_len); + + INIT_LIST_HEAD(&p->cmd_list); + + spin_lock_irqsave(&vha->cmd_list_lock, flags); + list_add_tail(&p->cmd_list, &vha->purex_atio_list); + spin_unlock_irqrestore(&vha->cmd_list_lock, flags); + +out: + return; +} + +static void sys_to_be32_cpy(uint8_t *dest, uint8_t *src, uint16_t len) +{ + uint32_t *d, *s, i; + + d = (uint32_t *) dest; + s = (uint32_t *) src; + for (i = 0; i < len; i++) + d[i] = cpu_to_be32(s[i]); +} + +/* Prepare an LS req received from the wire to be sent to the nvmet */ +static void *qlt_nvmet_prepare_ls(struct scsi_qla_host *vha, + struct pt_ls4_rx_unsol *ls4) +{ + int desc_len = cpu_to_le16(ls4->desc_len) + 8; + int copy_len, bc; + void *buf; + uint8_t *cpy_buf; + int i; + struct __status_cont *cont_atio; + + ql_dbg(ql_dbg_tgt, vha, 0xe072, + "%s: desc_len:%d\n", __func__, desc_len); + + buf = kzalloc(desc_len, GFP_ATOMIC); + if (!buf) + return NULL; + + cpy_buf = buf; + bc = desc_len; + + if (bc < PT_LS4_FIRST_PACKET_LEN) + copy_len = bc; + else + copy_len = PT_LS4_FIRST_PACKET_LEN; + + sys_to_be32_cpy(cpy_buf, &((uint8_t *)ls4)[PT_LS4_PAYLOAD_OFFSET], + copy_len/4); + + bc -= copy_len; + cpy_buf += copy_len; + + cont_atio = (struct __status_cont *)ls4; + + for (i = 1; i < ls4->entry_count && bc > 0; i++) { + if (bc < CONT_SENSE_DATA) + copy_len = bc; + else + copy_len = CONT_SENSE_DATA; + + cont_atio = (struct __status_cont *)qlt_get_next_atio_pkt(vha); + + sys_to_be32_cpy(cpy_buf, (uint8_t *)&cont_atio->data, + copy_len/4); + cpy_buf += copy_len; + bc -= copy_len; + } + + ql_dbg(ql_dbg_disc + ql_dbg_buffer, 
vha, 0xc0f1, + "Dump the first 128 bytes of LS request\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)buf, 128); + + return buf; +} + static bool qlt_24xx_atio_pkt_all_vps(struct scsi_qla_host *vha, struct atio_from_isp *atio, uint8_t ha_locked) { - ql_dbg(ql_dbg_tgt, vha, 0xe072, - "%s: qla_target(%d): type %x ox_id %04x\n", - __func__, vha->vp_idx, atio->u.raw.entry_type, - be16_to_cpu(atio->u.isp24.fcp_hdr.ox_id)); + void *buf; switch (atio->u.raw.entry_type) { case ATIO_TYPE7: @@ -414,31 +1062,74 @@ static bool qlt_24xx_atio_pkt_all_vps(struct scsi_qla_host *vha, { struct abts_recv_from_24xx *entry = (struct abts_recv_from_24xx *)atio; - struct scsi_qla_host *host = qlt_find_host_by_vp_idx(vha, - entry->vp_index); - unsigned long flags; - if (unlikely(!host)) { - ql_dbg(ql_dbg_tgt, vha, 0xe00a, - "qla_target(%d): Response pkt (ABTS_RECV_24XX) " - "received, with unknown vp_index %d\n", - vha->vp_idx, entry->vp_index); + if (unlikely(atio->u.nvme_isp27.fcnvme_hdr.scsi_fc_id == + NVMEFC_CMD_IU_SCSI_FC_ID)) { + qla_nvmet_handle_abts(vha, entry); + break; + } + + { + struct abts_recv_from_24xx *entry = + (struct abts_recv_from_24xx *)atio; + struct scsi_qla_host *host = qlt_find_host_by_vp_idx + (vha, entry->vp_index); + unsigned long flags; + + if (unlikely(!host)) { + ql_dbg(ql_dbg_tgt, vha, 0xe00a, + "qla_target(%d): Response pkt (ABTS_RECV_24XX) received, with unknown vp_index %d\n", + vha->vp_idx, entry->vp_index); + break; + } + if (!ha_locked) + spin_lock_irqsave(&host->hw->hardware_lock, + flags); + qlt_24xx_handle_abts(host, + (struct abts_recv_from_24xx *)atio); + if (!ha_locked) + spin_unlock_irqrestore( + &host->hw->hardware_lock, flags); break; } - if (!ha_locked) - spin_lock_irqsave(&host->hw->hardware_lock, flags); - qlt_24xx_handle_abts(host, (struct abts_recv_from_24xx *)atio); - if (!ha_locked) - spin_unlock_irqrestore(&host->hw->hardware_lock, flags); - break; } - /* case PUREX_IOCB_TYPE: ql2xmvasynctoatio */ + /* 
NVME */ + case ATIO_PURLS: + { + struct scsi_qla_host *host = vha; + unsigned long flags; + + /* Received an LS4 from the init, pass it to the NVMEt */ + ql_log(ql_log_info, vha, 0x11047, + "%s %d Received an LS4 from the initiator on ATIO\n", + __func__, __LINE__); + spin_lock_irqsave(&host->hw->hardware_lock, flags); + buf = qlt_nvmet_prepare_ls(host, + (struct pt_ls4_rx_unsol *)atio); + if (buf) + qla_nvmet_handle_ls(host, + (struct pt_ls4_rx_unsol *)atio, buf); + spin_unlock_irqrestore(&host->hw->hardware_lock, flags); + } + break; + + case PUREX_IOCB_TYPE: /* NVMET */ + { + /* Received a PUREX IOCB */ + /* Queue the iocb and wake up dpc */ + qlt_queue_purex(vha, atio); + set_bit(NVMET_PUREX, &vha->dpc_flags); + qla2xxx_wake_dpc(vha); + break; + } default: ql_dbg(ql_dbg_tgt, vha, 0xe040, "qla_target(%d): Received unknown ATIO atio " "type %x\n", vha->vp_idx, atio->u.raw.entry_type); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0xe011, + (uint8_t *)atio, sizeof(*atio)); break; } @@ -541,6 +1232,10 @@ void qlt_response_pkt_all_vps(struct scsi_qla_host *vha, break; } qlt_response_pkt(host, rsp, pkt); + if (unlikely(qlt_op_target_mode)) + qla24xx_nvmet_abts_resp_iocb(vha, + (struct abts_resp_to_24xx *)pkt, + rsp->req); break; } default: @@ -1623,6 +2318,11 @@ static void qlt_release(struct qla_tgt *tgt) vha->vha_tgt.target_lport_ptr) ha->tgt.tgt_ops->remove_target(vha); + if (tgt->nvme_els_ptr) { + dma_free_coherent(&vha->hw->pdev->dev, 256, + tgt->nvme_els_ptr, tgt->nvme_els_rsp); + } + vha->vha_tgt.qla_tgt = NULL; ql_dbg(ql_dbg_tgt_mgt, vha, 0xf00d, @@ -5648,6 +6348,101 @@ qlt_chk_qfull_thresh_hold(struct scsi_qla_host *vha, struct qla_qpair *qpair, return 1; } +/* + * Worker thread that dequeues the nvme cmd off the list and + * called nvme-t to process the cmd + */ +static void qla_nvmet_work(struct work_struct *work) +{ + struct qla_nvmet_cmd *cmd = + container_of(work, struct qla_nvmet_cmd, work); + scsi_qla_host_t *vha = cmd->vha; + + 
qla_nvmet_process_cmd(vha, cmd); +} +/* + * Handle the NVME cmd IU + */ +static void qla_nvmet_handle_cmd(struct scsi_qla_host *vha, + struct atio_from_isp *atio) +{ + struct qla_nvmet_cmd *tgt_cmd; + unsigned long flags; + struct qla_hw_data *ha = vha->hw; + struct fc_port *fcport; + struct fcp_hdr *fcp_hdr; + uint32_t s_id = 0; + void *next_pkt; + uint8_t *nvmet_cmd_ptr; + uint32_t nvmet_cmd_iulen = 0; + uint32_t nvmet_cmd_iulen_min = 64; + + /* Create an NVME cmd and queue it up to the work queue */ + tgt_cmd = kzalloc(sizeof(struct qla_nvmet_cmd), GFP_ATOMIC); + if (tgt_cmd == NULL) + return; + + tgt_cmd->vha = vha; + + fcp_hdr = &atio->u.nvme_isp27.fcp_hdr; + + /* Get the session for this command */ + s_id = fcp_hdr->s_id[0] << 16 | fcp_hdr->s_id[1] << 8 + | fcp_hdr->s_id[2]; + tgt_cmd->ox_id = fcp_hdr->ox_id; + + fcport = qla_nvmet_find_sess_by_s_id(vha, s_id); + if (unlikely(!fcport)) { + ql_log(ql_log_warn, vha, 0x11049, + "Can't find the session for port_id: %#x\n", s_id); + kfree(tgt_cmd); + return; + } + + tgt_cmd->fcport = fcport; + + memcpy(&tgt_cmd->atio, atio, sizeof(*atio)); + + /* The FC-NVME cmd covers 2 ATIO IOCBs */ + + nvmet_cmd_ptr = (uint8_t *)&tgt_cmd->nvme_cmd_iu; + nvmet_cmd_iulen = be16_to_cpu(atio->u.nvme_isp27.fcnvme_hdr.iu_len) * 4; + tgt_cmd->cmd_len = nvmet_cmd_iulen; + + if (unlikely(ha->tgt.atio_ring_index + atio->u.raw.entry_count > + ha->tgt.atio_q_length)) { + uint8_t i; + + memcpy(nvmet_cmd_ptr, &((uint8_t *)atio)[NVME_ATIO_CMD_OFF], + ATIO_NVME_FIRST_PACKET_CMDLEN); + nvmet_cmd_ptr += ATIO_NVME_FIRST_PACKET_CMDLEN; + nvmet_cmd_iulen -= ATIO_NVME_FIRST_PACKET_CMDLEN; + + for (i = 1; i < atio->u.raw.entry_count; i++) { + uint8_t cplen = min(nvmet_cmd_iulen_min, + nvmet_cmd_iulen); + + next_pkt = qlt_get_next_atio_pkt(vha); + memcpy(nvmet_cmd_ptr, (uint8_t *)next_pkt, cplen); + nvmet_cmd_ptr += cplen; + nvmet_cmd_iulen -= cplen; + } + } else { + memcpy(nvmet_cmd_ptr, &((uint8_t *)atio)[NVME_ATIO_CMD_OFF], + nvmet_cmd_iulen); + 
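When a command IU spills across more than one ATIO entry, the copy loop above walks the ring via qlt_get_next_atio_pkt(), which wraps the index back to 0 at atio_q_length. The wraparound arithmetic, reduced to a standalone sketch (helper name is illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helper: advance a ring index by one entry, wrapping to 0
 * at the end of the queue -- the same arithmetic qlt_get_next_atio_pkt()
 * applies to ha->tgt.atio_ring_index. */
static uint32_t atio_ring_next(uint32_t index, uint32_t q_length)
{
	return (index + 1 == q_length) ? 0 : index + 1;
}
```

The ATIO-queue processing loops later in the patch advance the index with exactly this check before resetting atio_ring_ptr to the ring base.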
next_pkt = qlt_get_next_atio_pkt(vha); + } + + /* Add cmd to the list */ + spin_lock_irqsave(&vha->cmd_list_lock, flags); + list_add_tail(&tgt_cmd->cmd_list, &vha->qla_cmd_list); + spin_unlock_irqrestore(&vha->cmd_list_lock, flags); + + /* Queue the work item */ + INIT_WORK(&tgt_cmd->work, qla_nvmet_work); + queue_work(qla_nvmet_wq, &tgt_cmd->work); +} + /* ha->hardware_lock supposed to be held on entry */ /* called via callback from qla2xxx */ static void qlt_24xx_atio_pkt(struct scsi_qla_host *vha, @@ -5687,6 +6482,13 @@ static void qlt_24xx_atio_pkt(struct scsi_qla_host *vha, break; } + /* NVME Target*/ + if (unlikely(atio->u.nvme_isp27.fcnvme_hdr.scsi_fc_id + == NVMEFC_CMD_IU_SCSI_FC_ID)) { + qla_nvmet_handle_cmd(vha, atio); + break; + } + if (likely(atio->u.isp24.fcp_cmnd.task_mgmt_flags == 0)) { rc = qlt_chk_qfull_thresh_hold(vha, ha->base_qpair, atio, ha_locked); @@ -6537,6 +7339,14 @@ int qlt_add_target(struct qla_hw_data *ha, struct scsi_qla_host *base_vha) if (ha->tgt.tgt_ops && ha->tgt.tgt_ops->add_target) ha->tgt.tgt_ops->add_target(base_vha); + tgt->nvme_els_ptr = dma_alloc_coherent(&base_vha->hw->pdev->dev, 256, + &tgt->nvme_els_rsp, GFP_KERNEL); + if (!tgt->nvme_els_ptr) { + ql_dbg(ql_dbg_tgt, base_vha, 0xe066, + "Unable to allocate DMA buffer for NVME ELS request\n"); + return -ENOMEM; + } + return 0; } @@ -6831,6 +7641,7 @@ qlt_rff_id(struct scsi_qla_host *vha) u8 fc4_feature = 0; /* * FC-4 Feature bit 0 indicates target functionality to the name server. 
+ * NVME FC-4 Feature bit 2 indicates discovery controller */ if (qla_tgt_mode_enabled(vha)) { fc4_feature = BIT_0; @@ -6868,6 +7679,76 @@ qlt_init_atio_q_entries(struct scsi_qla_host *vha) } +static void +qlt_27xx_process_nvme_atio_queue(struct scsi_qla_host *vha, uint8_t ha_locked) +{ + struct qla_hw_data *ha = vha->hw; + struct atio_from_isp *pkt; + int cnt; + uint32_t atio_q_in; + uint16_t num_atios = 0; + uint8_t nvme_pkts = 0; + + if (!ha->flags.fw_started) + return; + + pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; + while (num_atios < pkt->u.raw.entry_count) { + atio_q_in = RD_REG_DWORD(ISP_ATIO_Q_IN(vha)); + if (atio_q_in < ha->tgt.atio_ring_index) + num_atios = ha->tgt.atio_q_length - + (ha->tgt.atio_ring_index - atio_q_in); + else + num_atios = atio_q_in - ha->tgt.atio_ring_index; + if (num_atios == 0) + return; + } + + while ((num_atios) || fcpcmd_is_corrupted(ha->tgt.atio_ring_ptr)) { + pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; + cnt = pkt->u.raw.entry_count; + + if (unlikely(fcpcmd_is_corrupted(ha->tgt.atio_ring_ptr))) { + /* + * This packet is corrupted. The header + payload + * can not be trusted. There is no point in passing + * it further up. 
+ */ + ql_log(ql_log_warn, vha, 0xd03c, + "corrupted fcp frame SID[%3phN] OXID[%04x] EXCG[%x] %64phN\n", + pkt->u.isp24.fcp_hdr.s_id, + be16_to_cpu(pkt->u.isp24.fcp_hdr.ox_id), + le32_to_cpu(pkt->u.isp24.exchange_addr), pkt); + + adjust_corrupted_atio(pkt); + qlt_send_term_exchange(ha->base_qpair, NULL, pkt, + ha_locked, 0); + } else { + qlt_24xx_atio_pkt_all_vps(vha, + (struct atio_from_isp *)pkt, ha_locked); + nvme_pkts++; + } + + /* Just move by one index since we have already accounted the + * additional ones while processing individual ATIOs + */ + ha->tgt.atio_ring_index++; + if (ha->tgt.atio_ring_index == ha->tgt.atio_q_length) { + ha->tgt.atio_ring_index = 0; + ha->tgt.atio_ring_ptr = ha->tgt.atio_ring; + } else + ha->tgt.atio_ring_ptr++; + + pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; + num_atios -= cnt; + /* memory barrier */ + wmb(); + } + + /* Adjust ring index */ + WRT_REG_DWORD(ISP_ATIO_Q_OUT(vha), ha->tgt.atio_ring_index); +} + /* * qlt_24xx_process_atio_queue() - Process ATIO queue entries. 
* @ha: SCSI driver HA context @@ -6879,9 +7760,15 @@ qlt_24xx_process_atio_queue(struct scsi_qla_host *vha, uint8_t ha_locked) struct atio_from_isp *pkt; int cnt, i; + if (unlikely(qlt_op_target_mode)) { + qlt_27xx_process_nvme_atio_queue(vha, ha_locked); + return; + } + if (!ha->flags.fw_started) return; + pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; while ((ha->tgt.atio_ring_ptr->signature != ATIO_PROCESSED) || fcpcmd_is_corrupted(ha->tgt.atio_ring_ptr)) { pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; @@ -6907,6 +7794,7 @@ qlt_24xx_process_atio_queue(struct scsi_qla_host *vha, uint8_t ha_locked) (struct atio_from_isp *)pkt, ha_locked); } + cnt = 1; for (i = 0; i < cnt; i++) { ha->tgt.atio_ring_index++; if (ha->tgt.atio_ring_index == ha->tgt.atio_q_length) { @@ -6918,11 +7806,13 @@ qlt_24xx_process_atio_queue(struct scsi_qla_host *vha, uint8_t ha_locked) pkt->u.raw.signature = ATIO_PROCESSED; pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; } + /* memory barrier */ wmb(); } /* Adjust ring index */ WRT_REG_DWORD(ISP_ATIO_Q_OUT(vha), ha->tgt.atio_ring_index); + RD_REG_DWORD_RELAXED(ISP_ATIO_Q_OUT(vha)); } void @@ -7219,6 +8109,9 @@ qlt_probe_one_stage1(struct scsi_qla_host *base_vha, struct qla_hw_data *ha) INIT_DELAYED_WORK(&base_vha->unknown_atio_work, qlt_unknown_atio_work_fn); + /* NVMET */ + INIT_LIST_HEAD(&base_vha->purex_atio_list); + qlt_clear_mode(base_vha); rc = btree_init32(&ha->tgt.host_map); @@ -7445,13 +8338,25 @@ int __init qlt_init(void) goto out_mgmt_cmd_cachep; } + qla_tgt_purex_plogi_cachep = + kmem_cache_create("qla_tgt_purex_plogi_cachep", + sizeof(struct qlt_purex_plogi_ack_t), + __alignof__(struct qlt_purex_plogi_ack_t), 0, NULL); + + if (!qla_tgt_purex_plogi_cachep) { + ql_log(ql_log_fatal, NULL, 0xe06d, + "kmem_cache_create for qla_tgt_purex_plogi_cachep failed\n"); + ret = -ENOMEM; + goto out_plogi_cachep; + } + qla_tgt_mgmt_cmd_mempool = mempool_create(25, mempool_alloc_slab, mempool_free_slab, qla_tgt_mgmt_cmd_cachep); 
if (!qla_tgt_mgmt_cmd_mempool) { ql_log(ql_log_fatal, NULL, 0xe06e, "mempool_create for qla_tgt_mgmt_cmd_mempool failed\n"); ret = -ENOMEM; - goto out_plogi_cachep; + goto out_purex_plogi_cachep; } qla_tgt_wq = alloc_workqueue("qla_tgt_wq", 0, 0); @@ -7461,6 +8366,25 @@ int __init qlt_init(void) ret = -ENOMEM; goto out_cmd_mempool; } + + qla_nvmet_wq = alloc_workqueue("qla_nvmet_wq", 0, 0); + if (!qla_nvmet_wq) { + ql_log(ql_log_fatal, NULL, 0xe070, + "alloc_workqueue for qla_nvmet_wq failed\n"); + ret = -ENOMEM; + destroy_workqueue(qla_tgt_wq); + goto out_cmd_mempool; + } + + qla_nvmet_comp_wq = alloc_workqueue("qla_nvmet_comp_wq", 0, 0); + if (!qla_nvmet_comp_wq) { + ql_log(ql_log_fatal, NULL, 0xe071, + "alloc_workqueue for qla_nvmet_wq failed\n"); + ret = -ENOMEM; + destroy_workqueue(qla_nvmet_wq); + destroy_workqueue(qla_tgt_wq); + goto out_cmd_mempool; + } /* * Return 1 to signal that initiator-mode is being disabled */ @@ -7468,6 +8392,8 @@ int __init qlt_init(void) out_cmd_mempool: mempool_destroy(qla_tgt_mgmt_cmd_mempool); +out_purex_plogi_cachep: + kmem_cache_destroy(qla_tgt_purex_plogi_cachep); out_plogi_cachep: kmem_cache_destroy(qla_tgt_plogi_cachep); out_mgmt_cmd_cachep: @@ -7480,8 +8406,19 @@ void qlt_exit(void) if (!QLA_TGT_MODE_ENABLED()) return; + destroy_workqueue(qla_nvmet_comp_wq); + destroy_workqueue(qla_nvmet_wq); destroy_workqueue(qla_tgt_wq); mempool_destroy(qla_tgt_mgmt_cmd_mempool); kmem_cache_destroy(qla_tgt_plogi_cachep); + kmem_cache_destroy(qla_tgt_purex_plogi_cachep); kmem_cache_destroy(qla_tgt_mgmt_cmd_cachep); } + +void nvmet_release_sessions(struct scsi_qla_host *vha) +{ + struct qlt_purex_plogi_ack_t *pla, *tpla; + + list_for_each_entry_safe(pla, tpla, &vha->plogi_ack_list, list) + list_del(&pla->list); +} diff --git a/drivers/scsi/qla2xxx/qla_target.h b/drivers/scsi/qla2xxx/qla_target.h index 721da593b1bc..fcb4c9bb4fc1 100644 --- a/drivers/scsi/qla2xxx/qla_target.h +++ b/drivers/scsi/qla2xxx/qla_target.h @@ -322,6 +322,67 @@ 
struct atio7_fcp_cmnd { /* uint32_t data_length; */ } __packed; +struct fc_nvme_hdr { + union { + struct { + uint8_t scsi_id; +#define NVMEFC_CMD_IU_SCSI_ID 0xfd + uint8_t fc_id; +#define NVMEFC_CMD_IU_FC_ID 0x28 + }; + struct { + uint16_t scsi_fc_id; +#define NVMEFC_CMD_IU_SCSI_FC_ID 0x28fd + }; + }; + uint16_t iu_len; + uint8_t rsv1[3]; + uint8_t flags; +#define NVMEFC_CMD_WRITE 0x1 +#define NVMEFC_CMD_READ 0x2 + uint64_t conn_id; + uint32_t csn; + uint32_t dl; +} __packed; + +struct atio7_nvme_cmnd { + struct fc_nvme_hdr fcnvme_hdr; + + struct nvme_command nvme_cmd; + uint32_t rsv2[2]; +} __packed; + +#define ATIO_PURLS 0x56 +struct pt_ls4_rx_unsol { + uint8_t entry_type; /* 0x56 */ + uint8_t entry_count; + uint16_t rsvd0; + uint16_t rsvd1; + uint8_t vp_index; + uint8_t rsvd2; + uint16_t rsvd3; + uint16_t nport_handle; + uint16_t frame_size; + uint16_t rsvd4; + uint32_t exchange_address; + uint8_t d_id[3]; + uint8_t r_ctl; + uint8_t s_id[3]; + uint8_t cs_ctl; + uint8_t f_ctl[3]; + uint8_t type; + uint16_t seq_cnt; + uint8_t df_ctl; + uint8_t seq_id; + uint16_t rx_id; + uint16_t ox_id; + uint32_t param; + uint32_t desc0; +#define PT_LS4_PAYLOAD_OFFSET 0x2c +#define PT_LS4_FIRST_PACKET_LEN 20 + uint32_t desc_len; + uint32_t payload[3]; +}; /* * ISP queue - Accept Target I/O (ATIO) type entry IOCB structure. * This is sent from the ISP to the target driver. @@ -368,6 +429,21 @@ struct atio_from_isp { uint32_t signature; #define ATIO_PROCESSED 0xDEADDEAD /* Signature */ } raw; + /* FC-NVME */ + struct { + uint8_t entry_type; /* Entry type. */ + uint8_t entry_count; /* Entry count. 
*/ + uint8_t fcp_cmnd_len_low; + uint8_t fcp_cmnd_len_high:4; + uint8_t attr:4; + uint32_t exchange_addr; +#define ATIO_NVME_ATIO_CMD_OFF 32 +#define ATIO_NVME_FIRST_PACKET_CMDLEN (64 - ATIO_NVME_ATIO_CMD_OFF) + struct fcp_hdr fcp_hdr; + struct fc_nvme_hdr fcnvme_hdr; + uint8_t nvmd_cmd[8]; + } nvme_isp27; + struct pt_ls4_rx_unsol pt_ls4; } u; } __packed; @@ -836,6 +912,8 @@ struct qla_tgt { int modify_lun_expected; atomic_t tgt_global_resets_count; struct list_head tgt_list_entry; + dma_addr_t nvme_els_rsp; + void *nvme_els_ptr; }; struct qla_tgt_sess_op { @@ -848,6 +926,16 @@ struct qla_tgt_sess_op { struct rsp_que *rsp; }; +/* NVMET */ +struct qla_tgt_purex_op { + struct scsi_qla_host *vha; + struct atio_from_isp atio; + uint8_t *purex_pyld; + uint16_t purex_pyld_len; + struct work_struct work; + struct list_head cmd_list; +}; + enum trace_flags { TRC_NEW_CMD = BIT_0, TRC_DO_WORK = BIT_1, @@ -1112,4 +1200,6 @@ void qlt_send_resp_ctio(struct qla_qpair *, struct qla_tgt_cmd *, uint8_t, extern void qlt_abort_cmd_on_host_reset(struct scsi_qla_host *, struct qla_tgt_cmd *); +/* 0 for FCP and 1 for NVMET */ +extern int qlt_op_target_mode; #endif /* __QLA_TARGET_H */ From patchwork Wed Sep 26 16:25:34 2018 X-Patchwork-Submitter: "Madhani, Himanshu" X-Patchwork-Id: 10616223
X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 19A642B458 for ; Wed, 26 Sep 2018 16:32:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728431AbeIZWpx (ORCPT ); Wed, 26 Sep 2018 18:45:53 -0400 Received: from mail-eopbgr680068.outbound.protection.outlook.com ([40.107.68.68]:63712 "EHLO NAM04-BN3-obe.outbound.protection.outlook.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1728181AbeIZWpx (ORCPT ); Wed, 26 Sep 2018 18:45:53 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=CAVIUMNETWORKS.onmicrosoft.com; s=selector1-cavium-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=/EU4C/vWJxaCOjyxqXJF8zz2zX/zyhS0YiGLw/Qxhzk=; b=a4jJgmmf8qk8uqo1PXb6I+ymDANesW3OoSVvUFyIIRboQVnV8FRolwH++u/rJIRBqbRlzIAegufxuAIHbKaoFOmeMujFPrilKk+PNPvUt8j9u3ASzBzjIXHU5rSyplGvJKT8rJE5BJjPVKAKJqSqMpM5Qo1CKlfDKfB2lJL8DBE= Received: from CO2PR07CA0063.namprd07.prod.outlook.com (2603:10b6:100::31) by BL0PR07MB5489.namprd07.prod.outlook.com (2603:10b6:208:89::16) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.1122.16; Wed, 26 Sep 2018 16:32:03 +0000 Received: from DM3NAM05FT038.eop-nam05.prod.protection.outlook.com (2a01:111:f400:7e51::206) by CO2PR07CA0063.outlook.office365.com (2603:10b6:100::31) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384) id 15.20.1164.22 via Frontend Transport; Wed, 26 Sep 2018 16:32:02 +0000 Received-SPF: Pass (protection.outlook.com: domain of cavium.com designates 50.232.66.26 as permitted sender) receiver=protection.outlook.com; client-ip=50.232.66.26; helo=CAEXCH02.caveonetworks.com; Received: from CAEXCH02.caveonetworks.com 
(50.232.66.26) by DM3NAM05FT038.mail.protection.outlook.com (10.152.98.151) with Microsoft SMTP Server (version=TLS1_0, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256) id 15.20.1185.5 via Frontend Transport; Wed, 26 Sep 2018 16:32:02 +0000 Received: from dut1171.mv.qlogic.com (172.29.51.171) by CAEXCH02.caveonetworks.com (10.17.4.29) with Microsoft SMTP Server id 14.2.347.0; Wed, 26 Sep 2018 09:25:36 -0700 Received: from dut1171.mv.qlogic.com (localhost [127.0.0.1]) by dut1171.mv.qlogic.com (8.14.7/8.14.7) with ESMTP id w8QGPbuF024365; Wed, 26 Sep 2018 09:25:37 -0700 Received: (from root@localhost) by dut1171.mv.qlogic.com (8.14.7/8.14.7/Submit) id w8QGPbKV024364; Wed, 26 Sep 2018 09:25:37 -0700 From: Himanshu Madhani To: , , , , , , CC: Subject: [PATCH v2 4/5] qla2xxx_nvmet: Add SysFS node for FC-NVMe Target Date: Wed, 26 Sep 2018 09:25:34 -0700 Message-ID: <20180926162535.24314-5-himanshu.madhani@cavium.com> X-Mailer: git-send-email 2.12.0 In-Reply-To: <20180926162535.24314-1-himanshu.madhani@cavium.com> References: <20180926162535.24314-1-himanshu.madhani@cavium.com> MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-Forefront-Antispam-Report: CIP:50.232.66.26;IPV:NLI;CTRY:US;EFV:NLI;SFV:NSPM;SFS:(10009020)(346002)(396003)(376002)(136003)(39860400002)(2980300002)(438002)(189003)(199004)(107886003)(2906002)(69596002)(5660300001)(186003)(50226002)(72206003)(336012)(76176011)(6666003)(6346003)(4326008)(47776003)(34290500001)(48376002)(50466002)(80596001)(81156014)(8676002)(486006)(126002)(1076002)(44832011)(16586007)(86362001)(575784001)(446003)(81166006)(8936002)(51416003)(356003)(87636003)(36756003)(476003)(2616005)(305945005)(106466001)(26005)(42186006)(2201001)(316002)(11346002)(478600001)(110136005);DIR:OUT;SFP:1101;SCL:1;SRVR:BL0PR07MB5489;H:CAEXCH02.caveonetworks.com;FPR:;SPF:Pass;LANG:en;PTR:50-232-66-26-static.hfc.comcastbusiness.net;MX:1;A:1; X-Microsoft-Exchange-Diagnostics: 
From: Anil Gurumurthy

This patch adds a SysFS node for FC-NVMe Target configuration.

Signed-off-by: Anil Gurumurthy
Signed-off-by: Himanshu Madhani
---
 drivers/scsi/qla2xxx/qla_attr.c  | 33 +++++++++++++++++++++++++++++++++
 drivers/scsi/qla2xxx/qla_gs.c    |  2 +-
 drivers/scsi/qla2xxx/qla_init.c  |  3 ++-
 drivers/scsi/qla2xxx/qla_nvmet.c |  6 +++---
 4 files changed, 39 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
index a31d23905753..0d2d4f33701b 100644
--- a/drivers/scsi/qla2xxx/qla_attr.c
+++ b/drivers/scsi/qla2xxx/qla_attr.c
@@ -13,6 +13,7 @@
 #include

 static int qla24xx_vport_disable(struct fc_vport *, bool);
+extern void qlt_set_mode(struct scsi_qla_host *vha);

 /* SYSFS attributes --------------------------------------------------------- */

@@ -631,6 +632,37 @@ static struct bin_attribute sysfs_sfp_attr = {
 };

 static ssize_t
+qla2x00_sysfs_write_nvmet(struct file *filp, struct kobject *kobj,
+    struct bin_attribute *bin_attr,
+    char *buf, loff_t off, size_t count)
+{
+	struct scsi_qla_host *vha = shost_priv(dev_to_shost(container_of(kobj,
+	    struct device, kobj)));
+	struct qla_hw_data *ha = vha->hw;
+	scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev);
+
+	ql_log(ql_log_info, vha, 0x706e,
+	    "Bringing up target mode!! vha:%p\n", vha);
+	qlt_op_target_mode = 1;
+	qlt_set_mode(base_vha);
+	set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags);
+	qla2xxx_wake_dpc(vha);
+	qla2x00_wait_for_hba_online(vha);
+
+	return count;
+}
+
+static struct bin_attribute sysfs_nvmet_attr = {
+	.attr = {
+		.name = "nvmet",
+		.mode = 0200,
+	},
+	.size = 0,
+	.write = qla2x00_sysfs_write_nvmet,
+};
+
+
+static ssize_t
 qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj,
     struct bin_attribute *bin_attr,
     char *buf, loff_t off, size_t count)
@@ -943,6 +975,7 @@ static struct sysfs_entry {
 	{ "issue_logo", &sysfs_issue_logo_attr, },
 	{ "xgmac_stats", &sysfs_xgmac_stats_attr, 3 },
 	{ "dcbx_tlv", &sysfs_dcbx_tlv_attr, 3 },
+	{ "nvmet", &sysfs_nvmet_attr, },
 	{ NULL },
 };

diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
index 55dc11d91b35..ba58cfe7ff9b 100644
--- a/drivers/scsi/qla2xxx/qla_gs.c
+++ b/drivers/scsi/qla2xxx/qla_gs.c
@@ -698,7 +698,7 @@ qla2x00_rff_id(scsi_qla_host_t *vha, u8 type)
 		return (QLA_SUCCESS);

 	return qla_async_rffid(vha, &vha->d_id, qlt_rff_id(vha),
-	    FC4_TYPE_FCP_SCSI);
+	    type);
 }

 static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id,
diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
index 841541201671..01676345018f 100644
--- a/drivers/scsi/qla2xxx/qla_init.c
+++ b/drivers/scsi/qla2xxx/qla_init.c
@@ -5523,7 +5523,8 @@ qla2x00_configure_fabric(scsi_qla_host_t *vha)
 	 * will be newer than discovery_gen. */
 	qlt_do_generation_tick(vha, &discovery_gen);

-	if (USE_ASYNC_SCAN(ha)) {
+	if (USE_ASYNC_SCAN(ha) && !(vha->flags.nvmet_enabled)) {
+		/* If NVME target mode is enabled, go through regular scan */
 		rval = qla24xx_async_gpnft(vha, FC4_TYPE_FCP_SCSI,
 		    NULL);
 		if (rval)
diff --git a/drivers/scsi/qla2xxx/qla_nvmet.c b/drivers/scsi/qla2xxx/qla_nvmet.c
index 5335c0618f00..cc0fb83b8f69 100644
--- a/drivers/scsi/qla2xxx/qla_nvmet.c
+++ b/drivers/scsi/qla2xxx/qla_nvmet.c
@@ -546,7 +546,7 @@ static void qla_nvmet_send_resp_ctio(struct qla_qpair *qpair,
 	case NVMET_FCOP_READDATA:
 	case NVMET_FCOP_READDATA_RSP:
 		/* Populate the CTIO resp with the SGL present in the rsp */
-		ql_log(ql_log_info, vha, 0x1100c,
+		ql_dbg(ql_dbg_nvme, vha, 0x1100c,
 		    "op: %#x, ox_id=%x c_flags=%x transfer_length: %#x req_cnt: %#x, tot_dsds: %#x\n",
 		    rsp_buf->op, ctio->ox_id, c_flags,
 		    rsp_buf->transfer_length, req_cnt, tot_dsds);
@@ -632,7 +632,7 @@ static void qla_nvmet_send_resp_ctio(struct qla_qpair *qpair,
 	case NVMET_FCOP_WRITEDATA:
 		/* Send transfer rdy */
-		ql_log(ql_log_info, vha, 0x1100e,
+		ql_dbg(ql_dbg_nvme, vha, 0x1100e,
 		    "FCOP_WRITE: ox_id=%x c_flags=%x transfer_length: %#x req_cnt: %#x, tot_dsds: %#x\n",
 		    ctio->ox_id, c_flags, rsp_buf->transfer_length,
 		    req_cnt, tot_dsds);
@@ -707,7 +707,7 @@ static void qla_nvmet_send_resp_ctio(struct qla_qpair *qpair,
 		ctio->u.nvme_status_mode1.transfer_len =
 		    cpu_to_be32(ersp->xfrd_len);

-		ql_log(ql_log_info, vha, 0x1100f,
+		ql_dbg(ql_dbg_nvme, vha, 0x1100f,
 		    "op: %#x, rsplen: %#x\n", rsp_buf->op, rsp_buf->rsplen);
 	} else
From patchwork Wed Sep 26 16:25:35 2018
X-Patchwork-Submitter: "Madhani, Himanshu"
X-Patchwork-Id: 10616217
From: Himanshu Madhani
Subject: [PATCH v2 5/5] qla2xxx: Update driver version to 11.00.00.00-k
Date: Wed, 26 Sep 2018 09:25:35 -0700
Message-ID: <20180926162535.24314-6-himanshu.madhani@cavium.com>
In-Reply-To: <20180926162535.24314-1-himanshu.madhani@cavium.com>
References: <20180926162535.24314-1-himanshu.madhani@cavium.com>
Signed-off-by: Himanshu Madhani
---
 drivers/scsi/qla2xxx/qla_version.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_version.h b/drivers/scsi/qla2xxx/qla_version.h
index 12bafff71a1a..0d58aa629c08 100644
--- a/drivers/scsi/qla2xxx/qla_version.h
+++ b/drivers/scsi/qla2xxx/qla_version.h
@@ -7,9 +7,9 @@
 /*
  * Driver version
  */
-#define QLA2XXX_VERSION      "10.00.00.11-k"
+#define QLA2XXX_VERSION      "11.00.00.00-k"

-#define QLA_DRIVER_MAJOR_VER	10
+#define QLA_DRIVER_MAJOR_VER	11
 #define QLA_DRIVER_MINOR_VER	0
 #define QLA_DRIVER_PATCH_VER	0
 #define QLA_DRIVER_BETA_VER	0