From patchwork Wed Sep 26 04:03:35 2018
X-Patchwork-Submitter: "Madhani, Himanshu"
X-Patchwork-Id: 10615277
From: Himanshu Madhani
To: ,
CC: ,
Subject: [PATCH v2 1/5] qla2xxx_nvmet: Add files for FC-NVMe Target support
Date: Tue, 25 Sep 2018 21:03:35 -0700
Message-ID: <20180926040339.9715-2-himanshu.madhani@cavium.com>
In-Reply-To: <20180926040339.9715-1-himanshu.madhani@cavium.com>
References: <20180926040339.9715-1-himanshu.madhani@cavium.com>
From: Anil Gurumurthy

This patch adds initial files to enable NVMe Target support.

Signed-off-by: Anil Gurumurthy
Signed-off-by: Giridhar Malavali
Signed-off-by: Darren Trapp
Signed-off-by: Himanshu Madhani
--- drivers/scsi/qla2xxx/Makefile | 3 +- drivers/scsi/qla2xxx/qla_nvmet.c | 798 +++++++++++++++++++++++++++++++++++++++ drivers/scsi/qla2xxx/qla_nvmet.h | 129 +++++++ 3 files changed, 929 insertions(+), 1 deletion(-) create mode 100644 drivers/scsi/qla2xxx/qla_nvmet.c create mode 100644 drivers/scsi/qla2xxx/qla_nvmet.h diff --git a/drivers/scsi/qla2xxx/Makefile b/drivers/scsi/qla2xxx/Makefile index 17d5bc1cc56b..ec924733c10e 100644 --- a/drivers/scsi/qla2xxx/Makefile +++ b/drivers/scsi/qla2xxx/Makefile @@ -1,7 +1,8 @@ # SPDX-License-Identifier: GPL-2.0 qla2xxx-y := qla_os.o qla_init.o qla_mbx.o qla_iocb.o qla_isr.o qla_gs.o \ qla_dbg.o qla_sup.o qla_attr.o qla_mid.o qla_dfs.o qla_bsg.o \ - qla_nx.o qla_mr.o qla_nx2.o qla_target.o qla_tmpl.o qla_nvme.o + qla_nx.o qla_mr.o qla_nx2.o qla_target.o qla_tmpl.o qla_nvme.o \ + qla_nvmet.o obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx.o obj-$(CONFIG_TCM_QLA2XXX) += tcm_qla2xxx.o diff --git a/drivers/scsi/qla2xxx/qla_nvmet.c b/drivers/scsi/qla2xxx/qla_nvmet.c new file mode 100644 index 000000000000..5335c0618f00 --- /dev/null +++ b/drivers/scsi/qla2xxx/qla_nvmet.c @@ -0,0 +1,798 @@ +/* + * QLogic Fibre Channel HBA Driver + * Copyright (c) 2003-2017 QLogic Corporation + * + * See LICENSE.qla2xxx for copyright and licensing details.
+ */ + +#include +#include +#include +#include + +#include "qla_nvme.h" +#include "qla_nvmet.h" + +static void qla_nvmet_send_resp_ctio(struct qla_qpair *qpair, + struct qla_nvmet_cmd *cmd, struct nvmefc_tgt_fcp_req *rsp); +static void qla_nvmet_send_abts_ctio(struct scsi_qla_host *vha, + struct abts_recv_from_24xx *abts, bool flag); + +/* + * qla_nvmet_targetport_delete - + * Invoked by the nvmet to indicate that the target port has + * been deleted + */ +static void +qla_nvmet_targetport_delete(struct nvmet_fc_target_port *targetport) +{ + struct qla_nvmet_tgtport *tport = targetport->private; + + if (!IS_ENABLED(CONFIG_NVME_TARGET_FC)) + return; + + complete(&tport->tport_del); +} + +/* + * qlt_nvmet_ls_done - + * Invoked by the firmware interface to indicate the completion + * of an LS cmd + * Free all associated resources of the LS cmd + */ +static void qlt_nvmet_ls_done(void *ptr, int res) +{ + struct srb *sp = ptr; + struct srb_iocb *nvme = &sp->u.iocb_cmd; + struct nvmefc_tgt_ls_req *rsp = nvme->u.nvme.desc; + struct qla_nvmet_cmd *tgt_cmd = nvme->u.nvme.cmd; + + if (!IS_ENABLED(CONFIG_NVME_TARGET_FC)) + return; + + ql_log(ql_log_info, sp->vha, 0x11000, + "Done with NVME LS4 req\n"); + + ql_log(ql_log_info, sp->vha, 0x11001, + "sp: %p vha: %p, rsp: %p, cmd: %p\n", + sp, sp->vha, nvme->u.nvme.desc, nvme->u.nvme.cmd); + + rsp->done(rsp); + /* Free tgt_cmd */ + kfree(tgt_cmd->buf); + kfree(tgt_cmd); + qla2x00_rel_sp(sp); +} + +/* + * qla_nvmet_ls_rsp - + * Invoked by the nvme-t to complete the LS req. + * Prepare and send a response CTIO to the firmware. + */ +static int +qla_nvmet_ls_rsp(struct nvmet_fc_target_port *tgtport, + struct nvmefc_tgt_ls_req *rsp) +{ + struct qla_nvmet_cmd *tgt_cmd = + container_of(rsp, struct qla_nvmet_cmd, cmd.ls_req); + struct scsi_qla_host *vha = tgt_cmd->vha; + struct srb_iocb *nvme; + int rval = QLA_FUNCTION_FAILED; + srb_t *sp; + + ql_log(ql_log_info, vha, 0x11002, + "Dumping the NVMET-LS response buffer\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)rsp->rspbuf, rsp->rsplen); + + /* Alloc SRB structure */ + sp = qla2x00_get_sp(vha, NULL, GFP_ATOMIC); + if (!sp) { + ql_log(ql_log_info, vha, 0x11003, "Failed to allocate SRB\n"); + return -ENOMEM; + } + + sp->type = SRB_NVMET_LS; + sp->done = qlt_nvmet_ls_done; + sp->vha = vha; + sp->fcport = tgt_cmd->fcport; + + nvme = &sp->u.iocb_cmd; + nvme->u.nvme.rsp_dma = rsp->rspdma; + nvme->u.nvme.rsp_len = rsp->rsplen; + nvme->u.nvme.exchange_address = tgt_cmd->atio.u.pt_ls4.exchange_address; + nvme->u.nvme.nport_handle = tgt_cmd->atio.u.pt_ls4.nport_handle; + nvme->u.nvme.vp_index = tgt_cmd->atio.u.pt_ls4.vp_index; + + nvme->u.nvme.cmd = tgt_cmd; /* To be freed */ + nvme->u.nvme.desc = rsp; /* Call back to nvmet */ + + rval = qla2x00_start_sp(sp); + if (rval != QLA_SUCCESS) { + ql_log(ql_log_warn, vha, 0x11004, + "qla2x00_start_sp failed = %d\n", rval); + return rval; + } + + return 0; +} + +/* + * qla_nvmet_fcp_op - + * Invoked by the nvme-t to complete the IO. + * Prepare and send a response CTIO to the firmware. 
+ */ +static int +qla_nvmet_fcp_op(struct nvmet_fc_target_port *tgtport, + struct nvmefc_tgt_fcp_req *rsp) +{ + struct qla_nvmet_cmd *tgt_cmd = + container_of(rsp, struct qla_nvmet_cmd, cmd.fcp_req); + struct scsi_qla_host *vha = tgt_cmd->vha; + + if (!IS_ENABLED(CONFIG_NVME_TARGET_FC)) + return 0; + + /* Prepare and send CTIO 82h */ + qla_nvmet_send_resp_ctio(vha->qpair, tgt_cmd, rsp); + + return 0; +} + +/* + * qla_nvmet_fcp_abort_done + * free up the used resources + */ +static void qla_nvmet_fcp_abort_done(void *ptr, int res) +{ + srb_t *sp = ptr; + + qla2x00_rel_sp(sp); +} + +/* + * qla_nvmet_fcp_abort - + * Invoked by the nvme-t to abort an IO + * Send an abort to the firmware + */ +static void +qla_nvmet_fcp_abort(struct nvmet_fc_target_port *tgtport, + struct nvmefc_tgt_fcp_req *req) +{ + struct qla_nvmet_cmd *tgt_cmd = + container_of(req, struct qla_nvmet_cmd, cmd.fcp_req); + struct scsi_qla_host *vha = tgt_cmd->vha; + struct qla_hw_data *ha = vha->hw; + srb_t *sp; + + if (!IS_ENABLED(CONFIG_NVME_TARGET_FC)) + return; + + /* Alloc SRB structure */ + sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL); + if (!sp) { + ql_log(ql_log_info, vha, 0x11005, "Failed to allocate SRB\n"); + return; + } + + sp->type = SRB_NVMET_SEND_ABTS; + sp->done = qla_nvmet_fcp_abort_done; + sp->vha = vha; + sp->fcport = tgt_cmd->fcport; + + ha->isp_ops->abort_command(sp); + +} + +/* + * qla_nvmet_fcp_req_release - + * Delete the cmd from the list and free the cmd + */ +static void +qla_nvmet_fcp_req_release(struct nvmet_fc_target_port *tgtport, + struct nvmefc_tgt_fcp_req *rsp) +{ + struct qla_nvmet_cmd *tgt_cmd = + container_of(rsp, struct qla_nvmet_cmd, cmd.fcp_req); + scsi_qla_host_t *vha = tgt_cmd->vha; + unsigned long flags; + + if (!IS_ENABLED(CONFIG_NVME_TARGET_FC)) + return; + + spin_lock_irqsave(&vha->cmd_list_lock, flags); + list_del(&tgt_cmd->cmd_list); + spin_unlock_irqrestore(&vha->cmd_list_lock, flags); + + kfree(tgt_cmd); +} + +static struct nvmet_fc_target_template qla_nvmet_fc_transport = { + .targetport_delete = qla_nvmet_targetport_delete, + .xmt_ls_rsp = qla_nvmet_ls_rsp, + .fcp_op = qla_nvmet_fcp_op, + .fcp_abort = qla_nvmet_fcp_abort, + .fcp_req_release = qla_nvmet_fcp_req_release, + .max_hw_queues = 8, + .max_sgl_segments = 128, + .max_dif_sgl_segments = 64, + .dma_boundary = 0xFFFFFFFF, + .target_features = NVMET_FCTGTFEAT_READDATA_RSP | + NVMET_FCTGTFEAT_CMD_IN_ISR | + NVMET_FCTGTFEAT_OPDONE_IN_ISR, + .target_priv_sz = sizeof(struct nvme_private), +}; + +/* + * qla_nvmet_create_targetport - + * Create a targetport. 
Registers the template with the nvme-t + * layer + */ +int qla_nvmet_create_targetport(struct scsi_qla_host *vha) +{ + struct nvmet_fc_port_info pinfo; + struct qla_nvmet_tgtport *tport; + int error = 0; + + if (!IS_ENABLED(CONFIG_NVME_TARGET_FC)) + return 0; + + ql_dbg(ql_dbg_nvme, vha, 0xe081, + "Creating target port for :%p\n", vha); + + memset(&pinfo, 0, (sizeof(struct nvmet_fc_port_info))); + pinfo.node_name = wwn_to_u64(vha->node_name); + pinfo.port_name = wwn_to_u64(vha->port_name); + pinfo.port_id = vha->d_id.b24; + + error = nvmet_fc_register_targetport(&pinfo, + &qla_nvmet_fc_transport, &vha->hw->pdev->dev, + &vha->targetport); + + if (error) { + ql_dbg(ql_dbg_nvme, vha, 0xe082, + "Cannot register NVME transport:%d\n", error); + return error; + } + tport = (struct qla_nvmet_tgtport *)vha->targetport->private; + tport->vha = vha; + ql_dbg(ql_dbg_nvme, vha, 0xe082, + " Registered NVME transport:%p WWPN:%llx\n", + tport, pinfo.port_name); + return 0; +} + +/* + * qla_nvmet_delete - + * Delete a targetport. + */ +int qla_nvmet_delete(struct scsi_qla_host *vha) +{ + struct qla_nvmet_tgtport *tport; + + if (!IS_ENABLED(CONFIG_NVME_TARGET_FC)) + return 0; + + if (!vha->flags.nvmet_enabled) + return 0; + if (vha->targetport) { + tport = (struct qla_nvmet_tgtport *)vha->targetport->private; + + ql_dbg(ql_dbg_nvme, vha, 0xe083, + "Deleting target port :%p\n", tport); + init_completion(&tport->tport_del); + nvmet_fc_unregister_targetport(vha->targetport); + wait_for_completion_timeout(&tport->tport_del, 5); + + nvmet_release_sessions(vha); + } + return 0; +} + +/* + * qla_nvmet_handle_ls - + * Handle a link service request from the initiator. + * Get the LS payload from the ATIO queue, invoke + * nvmet_fc_rcv_ls_req to pass the LS req to nvmet. + */ +int qla_nvmet_handle_ls(struct scsi_qla_host *vha, + struct pt_ls4_rx_unsol *pt_ls4, void *buf) +{ + struct qla_nvmet_cmd *tgt_cmd; + uint32_t size; + int ret; + uint32_t look_up_sid; + fc_port_t *sess = NULL; + + if (!IS_ENABLED(CONFIG_NVME_TARGET_FC)) + return 0; + + look_up_sid = pt_ls4->s_id[2] << 16 | + pt_ls4->s_id[1] << 8 | pt_ls4->s_id[0]; + + ql_log(ql_log_info, vha, 0x11005, + "%s - Look UP sid: %#x\n", __func__, look_up_sid); + + sess = qla_nvmet_find_sess_by_s_id(vha, look_up_sid); + if (unlikely(!sess)) + WARN_ON(1); + + size = cpu_to_le16(pt_ls4->desc_len) + 8; + + tgt_cmd = kzalloc(sizeof(struct qla_nvmet_cmd), GFP_ATOMIC); + if (tgt_cmd == NULL) + return -ENOMEM; + + tgt_cmd->vha = vha; + tgt_cmd->ox_id = pt_ls4->ox_id; + tgt_cmd->buf = buf; + /* Store the received nphdl, rx_exh_addr etc */ + memcpy(&tgt_cmd->atio.u.pt_ls4, pt_ls4, sizeof(struct pt_ls4_rx_unsol)); + tgt_cmd->fcport = sess; + + ql_log(ql_log_info, vha, 0x11006, + "Dumping the PURLS-ATIO request\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)pt_ls4, sizeof(struct pt_ls4_rx_unsol)); + + ql_log(ql_log_info, vha, 0x11007, + "Sending LS to nvmet buf: %p, len: %#x\n", buf, size); + + ret = nvmet_fc_rcv_ls_req(vha->targetport, + &tgt_cmd->cmd.ls_req, buf, size); + + if (ret == 0) { + ql_log(ql_log_info, vha, 0x11008, + "LS req handled successfully\n"); + return 0; + } + ql_log(ql_log_warn, vha, 0x11009, + "LS req failed\n"); + + return ret; +} + +/* + * qla_nvmet_process_cmd - + * Handle NVME cmd request from the initiator. + * Get the NVME payload from the ATIO queue, invoke + * nvmet_fc_rcv_ls_req to pass the LS req to nvmet. + * On a failure send an abts to the initiator? 
+ */ +int qla_nvmet_process_cmd(struct scsi_qla_host *vha, + struct qla_nvmet_cmd *tgt_cmd) +{ + int ret; + struct atio7_nvme_cmnd *nvme_cmd; + + if (!IS_ENABLED(CONFIG_NVME_TARGET_FC)) + return 0; + + nvme_cmd = (struct atio7_nvme_cmnd *)&tgt_cmd->nvme_cmd_iu; + + ret = nvmet_fc_rcv_fcp_req(vha->targetport, &tgt_cmd->cmd.fcp_req, + nvme_cmd, tgt_cmd->cmd_len); + if (ret != 0) { + ql_log(ql_log_warn, vha, 0x1100a, + "%s-%d - Failed (ret: %#x) to process NVME command\n", + __func__, __LINE__, ret); + /* Send ABTS to initator ? */ + } + return 0; +} + +/* + * qla_nvmet_handle_abts + * Handle an abort from the initiator + * Invoke nvmet_fc_rcv_fcp_abort to pass the abts to the nvmet + */ +int qla_nvmet_handle_abts(struct scsi_qla_host *vha, + struct abts_recv_from_24xx *abts) +{ + uint16_t ox_id = cpu_to_be16(abts->fcp_hdr_le.ox_id); + unsigned long flags; + struct qla_nvmet_cmd *cmd = NULL; + + if (!IS_ENABLED(CONFIG_NVME_TARGET_FC)) + return 0; + + /* Retrieve the cmd from cmd list */ + spin_lock_irqsave(&vha->cmd_list_lock, flags); + list_for_each_entry(cmd, &vha->qla_cmd_list, cmd_list) { + if (cmd->ox_id == ox_id) + break; /* Found the cmd */ + } + spin_unlock_irqrestore(&vha->cmd_list_lock, flags); + if (!cmd) { + ql_log(ql_log_warn, vha, 0x1100b, + "%s-%d - Command not found\n", __func__, __LINE__); + /* Send a RJT */ + qla_nvmet_send_abts_ctio(vha, abts, 0); + return 0; + } + + nvmet_fc_rcv_fcp_abort(vha->targetport, &cmd->cmd.fcp_req); + /* Send an ACC */ + qla_nvmet_send_abts_ctio(vha, abts, 1); + + return 0; +} + +/* + * qla_nvmet_abts_done + * Complete the cmd back to the nvme-t and + * free up the used resources + */ +static void qla_nvmet_abts_done(void *ptr, int res) +{ + srb_t *sp = ptr; + + if (!IS_ENABLED(CONFIG_NVME_TARGET_FC)) + return; + + qla2x00_rel_sp(sp); +} +/* + * qla_nvmet_fcp_done + * Complete the cmd back to the nvme-t and + * free up the used resources + */ +static void qla_nvmet_fcp_done(void *ptr, int res) +{ + srb_t *sp = ptr; + struct nvmefc_tgt_fcp_req *rsp; + + rsp = sp->u.iocb_cmd.u.nvme.desc; + + if (res) { + rsp->fcp_error = NVME_SC_SUCCESS; + if (rsp->op == NVMET_FCOP_RSP) + rsp->transferred_length = 0; + else + rsp->transferred_length = rsp->transfer_length; + } else { + rsp->fcp_error = NVME_SC_DATA_XFER_ERROR; + rsp->transferred_length = 0; + } + rsp->done(rsp); + qla2x00_rel_sp(sp); +} + +/* + * qla_nvmet_send_resp_ctio + * Send the response CTIO to the firmware + */ +static void qla_nvmet_send_resp_ctio(struct qla_qpair *qpair, + struct qla_nvmet_cmd *cmd, struct nvmefc_tgt_fcp_req *rsp_buf) +{ + struct atio_from_isp *atio = &cmd->atio; + struct ctio_nvme_to_27xx *ctio; + struct scsi_qla_host *vha = cmd->vha; + struct qla_hw_data *ha = vha->hw; + struct fcp_hdr *fchdr = &atio->u.nvme_isp27.fcp_hdr; + srb_t *sp; + unsigned long flags; + uint16_t temp, c_flags = 0; + struct req_que *req = vha->hw->req_q_map[0]; + uint32_t req_cnt = 1; + uint32_t *cur_dsd; + uint16_t avail_dsds; + uint16_t tot_dsds, i, cnt; + struct scatterlist *sgl, *sg; + + spin_lock_irqsave(&ha->hardware_lock, flags); + + /* Alloc SRB structure */ + sp = qla2x00_get_sp(vha, cmd->fcport, GFP_ATOMIC); + if (!sp) { + ql_log(ql_log_info, vha, 0x1100c, "Failed to allocate SRB\n"); + spin_unlock_irqrestore(&ha->hardware_lock, flags); + return; + } + + sp->type = SRB_NVMET_FCP; + sp->name = "nvmet_fcp"; + sp->done = qla_nvmet_fcp_done; + sp->u.iocb_cmd.u.nvme.desc = rsp_buf; + sp->u.iocb_cmd.u.nvme.cmd = cmd; + + ctio = (struct ctio_nvme_to_27xx *)qla2x00_alloc_iocbs(vha, sp); + if 
(!ctio) { + ql_dbg(ql_dbg_nvme, vha, 0x3067, + "qla2x00t(%ld): %s failed: unable to allocate request packet", + vha->host_no, __func__); + spin_unlock_irqrestore(&ha->hardware_lock, flags); + return; + } + + ctio->entry_type = CTIO_NVME; + ctio->entry_count = 1; + ctio->handle = sp->handle; + ctio->nport_handle = cpu_to_le16(cmd->fcport->loop_id); + ctio->timeout = cpu_to_le16(QLA_TGT_TIMEOUT); + ctio->vp_index = vha->vp_idx; + ctio->initiator_id[0] = fchdr->s_id[2]; + ctio->initiator_id[1] = fchdr->s_id[1]; + ctio->initiator_id[2] = fchdr->s_id[0]; + ctio->exchange_addr = atio->u.nvme_isp27.exchange_addr; + temp = be16_to_cpu(fchdr->ox_id); + ctio->ox_id = cpu_to_le16(temp); + tot_dsds = ctio->dseg_count = cpu_to_le16(rsp_buf->sg_cnt); + c_flags = atio->u.nvme_isp27.attr << 9; + + if ((ctio->dseg_count > 1) && (rsp_buf->op != NVMET_FCOP_RSP)) { + /* Check for additional continuation IOCB space */ + req_cnt = qla24xx_calc_iocbs(vha, ctio->dseg_count); + ctio->entry_count = req_cnt; + + if (req->cnt < (req_cnt + 2)) { + cnt = (uint16_t)RD_REG_DWORD_RELAXED(req->req_q_out); + + if (req->ring_index < cnt) + req->cnt = cnt - req->ring_index; + else + req->cnt = req->length - + (req->ring_index - cnt); + + if (unlikely(req->cnt < (req_cnt + 2))) { + ql_log(ql_log_warn, vha, 0xfff, + "Running out of IOCB space for continuation IOCBs\n"); + goto err_exit; + } + } + } + + switch (rsp_buf->op) { + case NVMET_FCOP_READDATA: + case NVMET_FCOP_READDATA_RSP: + /* Populate the CTIO resp with the SGL present in the rsp */ + ql_log(ql_log_info, vha, 0x1100c, + "op: %#x, ox_id=%x c_flags=%x transfer_length: %#x req_cnt: %#x, tot_dsds: %#x\n", + rsp_buf->op, ctio->ox_id, c_flags, + rsp_buf->transfer_length, req_cnt, tot_dsds); + + avail_dsds = 1; + cur_dsd = (uint32_t *) + &ctio->u.nvme_status_mode0.dsd0[0]; + sgl = rsp_buf->sg; + + /* Load data segments */ + for_each_sg(sgl, sg, tot_dsds, i) { + dma_addr_t sle_dma; + cont_a64_entry_t *cont_pkt; + + /* Allocate additional continuation packets? */ + if (avail_dsds == 0) { + /* + * Five DSDs are available in the Cont + * Type 1 IOCB. 
+ */ + + /* Adjust ring index */ + req->ring_index++; + if (req->ring_index == req->length) { + req->ring_index = 0; + req->ring_ptr = req->ring; + } else { + req->ring_ptr++; + } + cont_pkt = (cont_a64_entry_t *) + req->ring_ptr; + *((uint32_t *)(&cont_pkt->entry_type)) = + cpu_to_le32(CONTINUE_A64_TYPE); + + cur_dsd = (uint32_t *) + cont_pkt->dseg_0_address; + avail_dsds = 5; + } + + sle_dma = sg_dma_address(sg); + *cur_dsd++ = cpu_to_le32(LSD(sle_dma)); + *cur_dsd++ = cpu_to_le32(MSD(sle_dma)); + *cur_dsd++ = cpu_to_le32(sg_dma_len(sg)); + avail_dsds--; + } + + ctio->u.nvme_status_mode0.transfer_len = + cpu_to_le32(rsp_buf->transfer_length); + ctio->u.nvme_status_mode0.relative_offset = + cpu_to_le32(rsp_buf->offset); + ctio->flags = cpu_to_le16(c_flags | 0x2); + + if (rsp_buf->op == NVMET_FCOP_READDATA_RSP) { + if (rsp_buf->rsplen == 12) { + ctio->flags |= + NVMET_CTIO_STS_MODE0 | + NVMET_CTIO_SEND_STATUS; + } else if (rsp_buf->rsplen == 32) { + struct nvme_fc_ersp_iu *ersp = + rsp_buf->rspaddr; + uint32_t iter = 4, *inbuf, *outbuf; + + ctio->flags |= + NVMET_CTIO_STS_MODE1 | + NVMET_CTIO_SEND_STATUS; + inbuf = (uint32_t *) + &((uint8_t *)rsp_buf->rspaddr)[16]; + outbuf = (uint32_t *) + ctio->u.nvme_status_mode1.nvme_comp_q_entry; + for (; iter; iter--) + *outbuf++ = cpu_to_be32(*inbuf++); + + ctio->u.nvme_status_mode1.rsp_seq_num = + cpu_to_be32(ersp->rsn); + ctio->u.nvme_status_mode1.transfer_len = + cpu_to_be32(ersp->xfrd_len); + } else + ql_log(ql_log_warn, vha, 0x1100d, + "unhandled resp len = %x\n", + rsp_buf->rsplen); + } + break; + + case NVMET_FCOP_WRITEDATA: + /* Send transfer rdy */ + ql_log(ql_log_info, vha, 0x1100e, + "FCOP_WRITE: ox_id=%x c_flags=%x transfer_length: %#x req_cnt: %#x, tot_dsds: %#x\n", + ctio->ox_id, c_flags, rsp_buf->transfer_length, + req_cnt, tot_dsds); + + ctio->flags = cpu_to_le16(c_flags | 0x1); + + avail_dsds = 1; + cur_dsd = (uint32_t *)&ctio->u.nvme_status_mode0.dsd0[0]; + sgl = rsp_buf->sg; + + /* Load data segments */ + for_each_sg(sgl, sg, tot_dsds, i) { + dma_addr_t sle_dma; + cont_a64_entry_t *cont_pkt; + + /* Allocate additional continuation packets? */ + if (avail_dsds == 0) { + /* + * Five DSDs are available in the Continuation + * Type 1 IOCB. 
+ */ + + /* Adjust ring index */ + req->ring_index++; + if (req->ring_index == req->length) { + req->ring_index = 0; + req->ring_ptr = req->ring; + } else { + req->ring_ptr++; + } + cont_pkt = (cont_a64_entry_t *)req->ring_ptr; + *((uint32_t *)(&cont_pkt->entry_type)) = + cpu_to_le32(CONTINUE_A64_TYPE); + + cur_dsd = (uint32_t *)cont_pkt->dseg_0_address; + avail_dsds = 5; + } + + sle_dma = sg_dma_address(sg); + *cur_dsd++ = cpu_to_le32(LSD(sle_dma)); + *cur_dsd++ = cpu_to_le32(MSD(sle_dma)); + *cur_dsd++ = cpu_to_le32(sg_dma_len(sg)); + avail_dsds--; + } + + ctio->u.nvme_status_mode0.transfer_len = + cpu_to_le32(rsp_buf->transfer_length); + ctio->u.nvme_status_mode0.relative_offset = + cpu_to_le32(rsp_buf->offset); + + break; + case NVMET_FCOP_RSP: + /* Send a response frame */ + ctio->flags = cpu_to_le16(c_flags); + if (rsp_buf->rsplen == 12) { + ctio->flags |= + NVMET_CTIO_STS_MODE0 | NVMET_CTIO_SEND_STATUS; + } else if (rsp_buf->rsplen == 32) { + struct nvme_fc_ersp_iu *ersp = rsp_buf->rspaddr; + uint32_t iter = 4, *inbuf, *outbuf; + + ctio->flags |= + NVMET_CTIO_STS_MODE1 | NVMET_CTIO_SEND_STATUS; + inbuf = (uint32_t *) + &((uint8_t *)rsp_buf->rspaddr)[16]; + outbuf = (uint32_t *) + ctio->u.nvme_status_mode1.nvme_comp_q_entry; + for (; iter; iter--) + *outbuf++ = cpu_to_be32(*inbuf++); + ctio->u.nvme_status_mode1.rsp_seq_num = + cpu_to_be32(ersp->rsn); + ctio->u.nvme_status_mode1.transfer_len = + cpu_to_be32(ersp->xfrd_len); + + ql_log(ql_log_info, vha, 0x1100f, + "op: %#x, rsplen: %#x\n", rsp_buf->op, + rsp_buf->rsplen); + } else + ql_log(ql_log_warn, vha, 0x11010, + "unhandled resp len = %x for op NVMET_FCOP_RSP\n", + rsp_buf->rsplen); + break; + } + + /* Memory Barrier */ + wmb(); + + qla2x00_start_iocbs(vha, vha->hw->req_q_map[0]); +err_exit: + spin_unlock_irqrestore(&ha->hardware_lock, flags); +} + +/* + * qla_nvmet_send_abts_ctio + * Send the abts CTIO to the firmware + */ +static void qla_nvmet_send_abts_ctio(struct scsi_qla_host *vha, + struct abts_recv_from_24xx *rabts, bool flag) +{ + struct abts_resp_to_24xx *resp; + srb_t *sp; + uint32_t f_ctl; + uint8_t *p; + + /* Alloc SRB structure */ + sp = qla2x00_get_sp(vha, NULL, GFP_ATOMIC); + if (!sp) { + ql_log(ql_log_info, vha, 0x11011, "Failed to allocate SRB\n"); + return; + } + + sp->type = SRB_NVMET_ABTS; + sp->name = "nvmet_abts"; + sp->done = qla_nvmet_abts_done; + + resp = (struct abts_resp_to_24xx *)qla2x00_alloc_iocbs(vha, sp); + if (!resp) { + ql_dbg(ql_dbg_nvme, vha, 0x3067, + "qla2x00t(%ld): %s failed: unable to allocate request packet", + vha->host_no, __func__); + return; + } + + resp->entry_type = ABTS_RESP_24XX; + resp->entry_count = 1; + resp->handle = sp->handle; + + resp->nport_handle = rabts->nport_handle; + resp->vp_index = rabts->vp_index; + resp->exchange_address = rabts->exchange_addr_to_abort; + resp->fcp_hdr_le = rabts->fcp_hdr_le; + f_ctl = cpu_to_le32(F_CTL_EXCH_CONTEXT_RESP | + F_CTL_LAST_SEQ | F_CTL_END_SEQ | + F_CTL_SEQ_INITIATIVE); + p = (uint8_t *)&f_ctl; + resp->fcp_hdr_le.f_ctl[0] = *p++; + resp->fcp_hdr_le.f_ctl[1] = *p++; + resp->fcp_hdr_le.f_ctl[2] = *p; + + resp->fcp_hdr_le.d_id[0] = rabts->fcp_hdr_le.s_id[0]; + resp->fcp_hdr_le.d_id[1] = rabts->fcp_hdr_le.s_id[1]; + resp->fcp_hdr_le.d_id[2] = rabts->fcp_hdr_le.s_id[2]; + resp->fcp_hdr_le.s_id[0] = rabts->fcp_hdr_le.d_id[0]; + resp->fcp_hdr_le.s_id[1] = rabts->fcp_hdr_le.d_id[1]; + resp->fcp_hdr_le.s_id[2] = rabts->fcp_hdr_le.d_id[2]; + + if (flag) { /* BA_ACC */ + resp->fcp_hdr_le.r_ctl = R_CTL_BASIC_LINK_SERV | R_CTL_B_ACC; + 
resp->payload.ba_acct.seq_id_valid = SEQ_ID_INVALID; + resp->payload.ba_acct.low_seq_cnt = 0x0000; + resp->payload.ba_acct.high_seq_cnt = 0xFFFF; + resp->payload.ba_acct.ox_id = rabts->fcp_hdr_le.ox_id; + resp->payload.ba_acct.rx_id = rabts->fcp_hdr_le.rx_id; + } else { + resp->fcp_hdr_le.r_ctl = R_CTL_BASIC_LINK_SERV | R_CTL_B_RJT; + resp->payload.ba_rjt.reason_code = + BA_RJT_REASON_CODE_UNABLE_TO_PERFORM; + } + /* Memory Barrier */ + wmb(); + + qla2x00_start_iocbs(vha, vha->hw->req_q_map[0]); +} diff --git a/drivers/scsi/qla2xxx/qla_nvmet.h b/drivers/scsi/qla2xxx/qla_nvmet.h new file mode 100644 index 000000000000..188ad2c5e3f1 --- /dev/null +++ b/drivers/scsi/qla2xxx/qla_nvmet.h @@ -0,0 +1,129 @@ +/* + * QLogic Fibre Channel HBA Driver + * Copyright (c) 2003-2017 QLogic Corporation + * + * See LICENSE.qla2xxx for copyright and licensing details. + */ +#ifndef __QLA_NVMET_H +#define __QLA_NVMET_H + +#include +#include +#include +#include + +#include "qla_def.h" + +struct qla_nvmet_tgtport { + struct scsi_qla_host *vha; + struct completion tport_del; +}; + +struct qla_nvmet_cmd { + union { + struct nvmefc_tgt_ls_req ls_req; + struct nvmefc_tgt_fcp_req fcp_req; + } cmd; + struct scsi_qla_host *vha; + void *buf; + struct atio_from_isp atio; + struct atio7_nvme_cmnd nvme_cmd_iu; + uint16_t cmd_len; + spinlock_t nvme_cmd_lock; + struct list_head cmd_list; /* List of cmds */ + struct work_struct work; + + struct scatterlist *sg; /* cmd data buffer SG vector */ + int sg_cnt; /* SG segments count */ + int bufflen; /* cmd buffer length */ + int offset; + enum dma_data_direction dma_data_direction; + uint16_t ox_id; + struct fc_port *fcport; +}; + +#define CTIO_NVME 0x82 /* CTIO FC-NVMe IOCB */ +struct ctio_nvme_to_27xx { + uint8_t entry_type; /* Entry type. */ + uint8_t entry_count; /* Entry count. */ + uint8_t sys_define; /* System defined. */ + uint8_t entry_status; /* Entry Status. */ + + uint32_t handle; /* System handle. */ + uint16_t nport_handle; /* N_PORT handle. */ + uint16_t timeout; /* Command timeout. */ + + uint16_t dseg_count; /* Data segment count. */ + uint8_t vp_index; /* vp_index */ + uint8_t addl_flags; /* Additional flags */ + + uint8_t initiator_id[3]; /* Initiator ID */ + uint8_t rsvd1; + + uint32_t exchange_addr; /* Exch addr */ + + uint16_t ox_id; /* Ox ID */ + uint16_t flags; +#define NVMET_CTIO_STS_MODE0 0 +#define NVMET_CTIO_STS_MODE1 BIT_6 +#define NVMET_CTIO_STS_MODE2 BIT_7 +#define NVMET_CTIO_SEND_STATUS BIT_15 + union { + struct { + uint8_t reserved1[8]; + uint32_t relative_offset; + uint8_t reserved2[4]; + uint32_t transfer_len; + uint8_t reserved3[4]; + uint32_t dsd0[2]; + uint32_t dsd0_len; + } nvme_status_mode0; + struct { + uint8_t nvme_comp_q_entry[16]; + uint32_t transfer_len; + uint32_t rsp_seq_num; + uint32_t dsd0[2]; + uint32_t dsd0_len; + } nvme_status_mode1; + struct { + uint32_t reserved4[4]; + uint32_t transfer_len; + uint32_t reserved5; + uint32_t rsp_dsd[2]; + uint32_t rsp_dsd_len; + } nvme_status_mode2; + } u; +} __packed; + +/* + * ISP queue - CTIO type FC NVMe from ISP to target driver + * returned entry structure. + */ +struct ctio_nvme_from_27xx { + uint8_t entry_type; /* Entry type. */ + uint8_t entry_count; /* Entry count. */ + uint8_t sys_define; /* System defined. */ + uint8_t entry_status; /* Entry Status. */ + uint32_t handle; /* System defined handle */ + uint16_t status; + uint16_t timeout; + uint16_t dseg_count; /* Data segment count. 
*/ + uint8_t vp_index; + uint8_t reserved1[5]; + uint32_t exchange_address; + uint16_t ox_id; + uint16_t flags; + uint32_t residual; + uint8_t reserved2[32]; +} __packed; + +int qla_nvmet_handle_ls(struct scsi_qla_host *vha, + struct pt_ls4_rx_unsol *ls4, void *buf); +int qla_nvmet_create_targetport(struct scsi_qla_host *vha); +int qla_nvmet_delete(struct scsi_qla_host *vha); +int qla_nvmet_handle_abts(struct scsi_qla_host *vha, + struct abts_recv_from_24xx *abts); +int qla_nvmet_process_cmd(struct scsi_qla_host *vha, + struct qla_nvmet_cmd *cmd); + +#endif

From patchwork Wed Sep 26 04:03:36 2018
X-Patchwork-Submitter: "Madhani, Himanshu"
X-Patchwork-Id: 10615275
From: Himanshu Madhani
To: ,
CC: ,
Subject: [PATCH v2 2/5] qla2xxx_nvmet: Add FC-NVMe Target Link Service request handling
Date: Tue, 25 Sep 2018 21:03:36 -0700
Message-ID: <20180926040339.9715-3-himanshu.madhani@cavium.com>
In-Reply-To: <20180926040339.9715-1-himanshu.madhani@cavium.com>
References: <20180926040339.9715-1-himanshu.madhani@cavium.com>
From: Anil Gurumurthy

This patch provides link service pass-through handling in the driver. The
feature is implemented mainly by the firmware; the driver exercises it
through an IOCB interface.
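[Review aid, not part of the submitted diff] For readers who do not have the
qla2xxx SRB machinery in mind, the sketch below condenses the LS response
path these patches build up: patch 1 supplies the nvmet-fc .xmt_ls_rsp
callback (qla_nvmet_ls_rsp), and this patch supplies qla_nvmet_ls(), which
turns that request into a PT_LS4 "responder" IOCB for the firmware. All
structure, field, and function names are taken from the diffs; the wrapper
ls_rsp_flow_sketch() itself is invented for illustration, and error handling
plus several IOCB fields are trimmed, so this is an outline rather than
compilable driver code.

/*
 * Hypothetical condensation of qla_nvmet_ls_rsp() (patch 1) and
 * qla_nvmet_ls() (this patch). Illustration only.
 */
static int ls_rsp_flow_sketch(struct scsi_qla_host *vha,
                              struct qla_nvmet_cmd *tgt_cmd,
                              struct nvmefc_tgt_ls_req *rsp)
{
        srb_t *sp = qla2x00_get_sp(vha, NULL, GFP_ATOMIC);

        if (!sp)
                return -ENOMEM;

        /* The SRB carries the DMA'd LS response buffer and the exchange
         * context that arrived with the unsolicited PT_LS4 ATIO. */
        sp->type = SRB_NVMET_LS;
        sp->done = qlt_nvmet_ls_done;   /* completes via rsp->done(rsp) */
        sp->u.iocb_cmd.u.nvme.rsp_dma = rsp->rspdma;
        sp->u.iocb_cmd.u.nvme.rsp_len = rsp->rsplen;
        sp->u.iocb_cmd.u.nvme.exchange_address =
                tgt_cmd->atio.u.pt_ls4.exchange_address;

        /*
         * qla2x00_start_sp() dispatches on SRB_NVMET_LS and calls
         * qla_nvmet_ls(), which fills a PT_LS4_REQUEST IOCB with
         * CF_LS4_RESPONDER set and dseg0 pointing at rsp->rspdma.
         */
        return qla2x00_start_sp(sp);
}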
Signed-off-by: Anil Gurumurthy Signed-off-by: Giridhar Malavali Signed-off-by: Darren Trapp Signed-off-by: Himanshu Madhani --- drivers/scsi/qla2xxx/qla_dbg.c | 1 + drivers/scsi/qla2xxx/qla_dbg.h | 2 ++ drivers/scsi/qla2xxx/qla_iocb.c | 42 ++++++++++++++++++++++++++++++++++++++++- 3 files changed, 44 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c index c7533fa7f46e..ed9c228f7d11 100644 --- a/drivers/scsi/qla2xxx/qla_dbg.c +++ b/drivers/scsi/qla2xxx/qla_dbg.c @@ -67,6 +67,7 @@ * | Target Mode Management | 0xf09b | 0xf002 | * | | | 0xf046-0xf049 | * | Target Mode Task Management | 0x1000d | | + * | NVME | 0x11000 | | * ---------------------------------------------------------------------- */ diff --git a/drivers/scsi/qla2xxx/qla_dbg.h b/drivers/scsi/qla2xxx/qla_dbg.h index 8877aa97d829..4ad97923e40b 100644 --- a/drivers/scsi/qla2xxx/qla_dbg.h +++ b/drivers/scsi/qla2xxx/qla_dbg.h @@ -367,6 +367,8 @@ ql_log_qp(uint32_t, struct qla_qpair *, int32_t, const char *fmt, ...); #define ql_dbg_tgt_tmr 0x00001000 /* Target mode task management */ #define ql_dbg_tgt_dif 0x00000800 /* Target mode dif */ +#define ql_dbg_nvme 0x00000400 /* NVME Target */ + extern int qla27xx_dump_mpi_ram(struct qla_hw_data *, uint32_t, uint32_t *, uint32_t, void **); extern int qla24xx_dump_ram(struct qla_hw_data *, uint32_t, uint32_t *, diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c index 4de910231ba6..cce32362cf21 100644 --- a/drivers/scsi/qla2xxx/qla_iocb.c +++ b/drivers/scsi/qla2xxx/qla_iocb.c @@ -2113,7 +2113,7 @@ __qla2x00_alloc_iocbs(struct qla_qpair *qpair, srb_t *sp) req_cnt = 1; handle = 0; - if (sp && (sp->type != SRB_SCSI_CMD)) { + if (sp && (sp->type != SRB_SCSI_CMD) && (sp->type != SRB_NVMET_FCP)) { /* Adjust entry-counts as needed. */ req_cnt = sp->iocbs; } @@ -3433,6 +3433,40 @@ qla24xx_prlo_iocb(srb_t *sp, struct logio_entry_24xx *logio) logio->vp_index = sp->fcport->vha->vp_idx; } +/* + * Build NVMET LS response + */ +static int +qla_nvmet_ls(srb_t *sp, struct pt_ls4_request *rsp_pkt) +{ + struct srb_iocb *nvme; + int rval = QLA_SUCCESS; + + nvme = &sp->u.iocb_cmd; + + rsp_pkt->entry_type = PT_LS4_REQUEST; + rsp_pkt->entry_count = 1; + rsp_pkt->control_flags = cpu_to_le16(CF_LS4_RESPONDER << CF_LS4_SHIFT); + rsp_pkt->handle = sp->handle; + + rsp_pkt->nport_handle = sp->fcport->loop_id; + rsp_pkt->vp_index = nvme->u.nvme.vp_index; + rsp_pkt->exchange_address = cpu_to_le32(nvme->u.nvme.exchange_address); + + rsp_pkt->tx_dseg_count = 1; + rsp_pkt->tx_byte_count = cpu_to_le16(nvme->u.nvme.rsp_len); + rsp_pkt->dseg0_len = cpu_to_le16(nvme->u.nvme.rsp_len); + rsp_pkt->dseg0_address[0] = cpu_to_le32(LSD(nvme->u.nvme.rsp_dma)); + rsp_pkt->dseg0_address[1] = cpu_to_le32(MSD(nvme->u.nvme.rsp_dma)); + + ql_log(ql_log_info, sp->vha, 0xffff, + "Dumping the NVME-LS response IOCB\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, sp->vha, 0x2075, + (uint8_t *)rsp_pkt, sizeof(*rsp_pkt)); + + return rval; +} + int qla2x00_start_sp(srb_t *sp) { @@ -3493,6 +3527,9 @@ qla2x00_start_sp(srb_t *sp) case SRB_NVME_LS: qla_nvme_ls(sp, pkt); break; + case SRB_NVMET_LS: + qla_nvmet_ls(sp, pkt); + break; case SRB_ABT_CMD: IS_QLAFX00(ha) ? 
qlafx00_abort_iocb(sp, pkt) : @@ -3518,6 +3555,9 @@ qla2x00_start_sp(srb_t *sp) case SRB_PRLO_CMD: qla24xx_prlo_iocb(sp, pkt); break; + case SRB_NVME_ELS_RSP: + qlt_send_els_resp(sp, pkt); + break; default: break; }

From patchwork Wed Sep 26 04:03:37 2018
X-Patchwork-Submitter: "Madhani, Himanshu"
X-Patchwork-Id: 10615279
From: Himanshu Madhani
To: ,
CC: ,
Subject: [PATCH v2 3/5] qla2xxx_nvmet: Add FC-NVMe Target handling
Date: Tue, 25 Sep 2018 21:03:37 -0700
Message-ID: <20180926040339.9715-4-himanshu.madhani@cavium.com>
In-Reply-To: <20180926040339.9715-1-himanshu.madhani@cavium.com>
References: <20180926040339.9715-1-himanshu.madhani@cavium.com>
Sender: linux-scsi-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-scsi@vger.kernel.org
From: Anil Gurumurthy

This patch adds the following code to the driver to support FC-NVMe Target:
- Updated ql2xenablenvme to allow FC-NVMe Target operation
- Added Link Service Request handling for NVMe Target
- Added passthru IOCB for LS4 request
- Added CTIO for sending response to FW
- Added FC4 Registration for FC-NVMe Target
- Added PUREX IOCB support for login processing in FC-NVMe Target mode
- Added Continuation IOCB for PUREX
- Added Session creation with PUREX IOCB in FC-NVMe Target mode

Signed-off-by: Anil Gurumurthy
Signed-off-by: Giridhar Malavali
Signed-off-by: Darren Trapp
Signed-off-by: Himanshu Madhani
---
 drivers/scsi/qla2xxx/qla_def.h    |  35 +-
 drivers/scsi/qla2xxx/qla_fw.h     | 263 ++++++++++
 drivers/scsi/qla2xxx/qla_gbl.h    |  17 +-
 drivers/scsi/qla2xxx/qla_gs.c     |  14 +-
 drivers/scsi/qla2xxx/qla_init.c   |  46 +-
 drivers/scsi/qla2xxx/qla_isr.c    | 112 ++++-
 drivers/scsi/qla2xxx/qla_mbx.c    | 101 +++-
 drivers/scsi/qla2xxx/qla_nvme.h   |  33 --
 drivers/scsi/qla2xxx/qla_os.c     |  77 ++-
 drivers/scsi/qla2xxx/qla_target.c | 977 +++++++++++++++++++++++++++++++++++++-
 drivers/scsi/qla2xxx/qla_target.h |  90 ++++
 11 files changed, 1697 insertions(+), 68 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h index 26b93c563f92..feda0b90f62e 100644 --- a/drivers/scsi/qla2xxx/qla_def.h +++ b/drivers/scsi/qla2xxx/qla_def.h @@ -480,6 +480,10 @@ struct srb_iocb { uint32_t dl; uint32_t timeout_sec; struct list_head entry; + uint32_t exchange_address; + uint16_t nport_handle; + uint8_t vp_index; + void *cmd; } nvme; struct { u16 cmd; @@ -490,7 +494,11 @@ struct srb_iocb { struct timer_list timer; void (*timeout)(void *); }; - +struct srb_nvme_els_rsp { + dma_addr_t dma_addr; + void *dma_ptr; + void *ptr; +}; /* Values for srb_ctx type */ #define SRB_LOGIN_CMD 1 #define SRB_LOGOUT_CMD 2 @@ -515,6 +523,11
@@ struct srb_iocb { #define SRB_PRLI_CMD 21 #define SRB_CTRL_VP 22 #define SRB_PRLO_CMD 23 +#define SRB_NVME_ELS_RSP 24 +#define SRB_NVMET_LS 25 +#define SRB_NVMET_FCP 26 +#define SRB_NVMET_ABTS 27 +#define SRB_NVMET_SEND_ABTS 28 enum { TYPE_SRB, @@ -545,10 +558,13 @@ typedef struct srb { int rc; int retry_count; struct completion comp; + struct work_struct nvmet_comp_work; + uint16_t comp_status; union { struct srb_iocb iocb_cmd; struct bsg_job *bsg_job; struct srb_cmd scmd; + struct srb_nvme_els_rsp snvme_els; } u; void (*done)(void *, int); void (*free)(void *); @@ -2273,6 +2289,15 @@ struct qlt_plogi_ack_t { void *fcport; }; +/* NVMET */ +struct qlt_purex_plogi_ack_t { + struct list_head list; + struct __fc_plogi rcvd_plogi; + port_id_t id; + int ref_count; + void *fcport; +}; + struct ct_sns_desc { struct ct_sns_pkt *ct_sns; dma_addr_t ct_sns_dma; @@ -3235,6 +3260,7 @@ enum qla_work_type { QLA_EVT_SP_RETRY, QLA_EVT_IIDMA, QLA_EVT_ELS_PLOGI, + QLA_EVT_NEW_NVMET_SESS, }; @@ -4229,6 +4255,7 @@ typedef struct scsi_qla_host { uint32_t qpairs_req_created:1; uint32_t qpairs_rsp_created:1; uint32_t nvme_enabled:1; + uint32_t nvmet_enabled:1; } flags; atomic_t loop_state; @@ -4274,6 +4301,7 @@ typedef struct scsi_qla_host { #define N2N_LOGIN_NEEDED 30 #define IOCB_WORK_ACTIVE 31 #define SET_ZIO_THRESHOLD_NEEDED 32 +#define NVMET_PUREX 33 unsigned long pci_flags; #define PFLG_DISCONNECTED 0 /* PCI device removed */ @@ -4314,6 +4342,7 @@ typedef struct scsi_qla_host { uint8_t fabric_node_name[WWN_SIZE]; struct nvme_fc_local_port *nvme_local_port; + struct nvmet_fc_target_port *targetport; struct completion nvme_del_done; struct list_head nvme_rport_list; @@ -4394,6 +4423,9 @@ typedef struct scsi_qla_host { uint16_t n2n_id; struct list_head gpnid_list; struct fab_scan scan; + /*NVMET*/ + struct list_head purex_atio_list; + struct completion purex_plogi_sess; } scsi_qla_host_t; struct qla27xx_image_status { @@ -4664,6 +4696,7 @@ struct sff_8247_a0 { !ha->current_topology) #include "qla_target.h" +#include "qla_nvmet.h" #include "qla_gbl.h" #include "qla_dbg.h" #include "qla_inline.h" diff --git a/drivers/scsi/qla2xxx/qla_fw.h b/drivers/scsi/qla2xxx/qla_fw.h index 50c1e6c62e31..67a42d153f64 100644 --- a/drivers/scsi/qla2xxx/qla_fw.h +++ b/drivers/scsi/qla2xxx/qla_fw.h @@ -723,6 +723,269 @@ struct ct_entry_24xx { uint32_t dseg_1_len; /* Data segment 1 length. */ }; +/* NVME-T changes */ +/* + * Fibre Channel Header + * Little Endian format. As received in PUREX and PURLS + */ +struct __fc_hdr { + uint16_t did_lo; + uint8_t did_hi; + uint8_t r_ctl; + uint16_t sid_lo; + uint8_t sid_hi; + uint8_t cs_ctl; + uint16_t f_ctl_lo; + uint8_t f_ctl_hi; + uint8_t type; + uint16_t seq_cnt; + uint8_t df_ctl; + uint8_t seq_id; + uint16_t rx_id; + uint16_t ox_id; + uint32_t param; +}; + +/* + * Fibre Channel LOGO acc + * In big endian format + */ +struct __fc_logo_acc { + uint8_t op_code; + uint8_t reserved[3]; +}; + +struct __fc_lsrjt { + uint8_t op_code; + uint8_t reserved[3]; + uint8_t reserved2; + uint8_t reason; + uint8_t exp; + uint8_t vendor; +}; + +/* + * Fibre Channel LOGO Frame + * Little Endian format. As received in PUREX + */ +struct __fc_logo { + struct __fc_hdr hdr; + uint16_t reserved; + uint8_t reserved1; + uint8_t op_code; + uint16_t sid_lo; + uint8_t sid_hi; + uint8_t reserved2; + uint8_t pname[8]; +}; + +/* + * Fibre Channel PRLI Frame + * Little Endian format. 
As received in PUREX + */ +struct __fc_prli { + struct __fc_hdr hdr; + uint16_t pyld_length; /* word 0 of prli */ + uint8_t page_length; + uint8_t op_code; + uint16_t common;/* word 1. 1st word of SP page */ + uint8_t type_ext; + uint8_t prli_type; +#define PRLI_TYPE_FCP 0x8 +#define PRLI_TYPE_NVME 0x28 + union { + struct { + uint32_t reserved[2]; + uint32_t sp_info; + } fcp; + struct { + uint32_t reserved[2]; + uint32_t sp_info; +#define NVME_PRLI_DISC BIT_3 +#define NVME_PRLI_TRGT BIT_4 +#define NVME_PRLI_INIT BIT_5 +#define NVME_PRLI_CONFIRMATION BIT_7 + uint32_t reserved1; + } nvme; + }; +}; + +/* + * Fibre Channel PLOGI Frame + * Little Endian format. As received in PUREX + */ +struct __fc_plogi { + uint16_t did_lo; + uint8_t did_hi; + uint8_t r_ctl; + uint16_t sid_lo; + uint8_t sid_hi; + uint8_t cs_ctl; + uint16_t f_ctl_lo; + uint8_t f_ctl_hi; + uint8_t type; + uint16_t seq_cnt; + uint8_t df_ctl; + uint8_t seq_id; + uint16_t rx_id; + uint16_t ox_id; + uint32_t param; + uint8_t rsvd[3]; + uint8_t op_code; + uint32_t cs_params[4]; /* common service params */ + uint8_t pname[8]; /* port name */ + uint8_t nname[8]; /* node name */ + uint32_t class1[4]; /* class 1 service params */ + uint32_t class2[4]; /* class 2 service params */ + uint32_t class3[4]; /* class 3 service params */ + uint32_t class4[4]; + uint32_t vndr_vers[4]; +}; + +#define IOCB_TYPE_ELS_PASSTHRU 0x53 + +/* ELS Pass-Through IOCB (IOCB_TYPE_ELS_PASSTHRU = 0x53) + */ +struct __els_pt { + uint8_t entry_type; /* Entry type. */ + uint8_t entry_count; /* Entry count. */ + uint8_t sys_define; /* System defined. */ + uint8_t entry_status; /* Entry Status. */ + uint32_t handle; + uint16_t status; /* when returned from fw */ + uint16_t nphdl; + uint16_t tx_dsd_cnt; + uint8_t vp_index; + uint8_t sof; /* bits 7:4 */ + uint32_t rcv_exchg_id; + uint16_t rx_dsd_cnt; + uint8_t op_code; + uint8_t rsvd1; + uint16_t did_lo; + uint8_t did_hi; + uint8_t sid_hi; + uint16_t sid_lo; + uint16_t cntl_flags; +#define ELS_PT_RESPONDER_ACC (1 << 13) + uint32_t rx_bc; + uint32_t tx_bc; + uint32_t tx_dsd[2]; /* Data segment 0 address. */ + uint32_t tx_dsd_len; /* Data segment 0 length. */ + uint32_t rx_dsd[2]; /* Data segment 1 address. */ + uint32_t rx_dsd_len; /* Data segment 1 length. */ +}; + +/* + * Reject a FCP PRLI + * + */ +struct __fc_prli_rjt { + uint8_t op_code; /* word 0 of prli rjt */ + uint8_t rsvd1[3]; + uint8_t rsvd2; /* word 1 of prli rjt */ + uint8_t reason; +#define PRLI_RJT_REASON 0x3 /* logical error */ + uint8_t expl; + uint8_t vendor; +#define PRLI_RJT_FCP_RESP_LEN 8 +}; + +/* + * Fibre Channel PRLI ACC + * Payload only + */ +struct __fc_prli_acc { +/* payload only. In big-endian format */ + uint8_t op_code; /* word 0 of prli acc */ + uint8_t page_length; +#define PRLI_FCP_PAGE_LENGTH 16 +#define PRLI_NVME_PAGE_LENGTH 20 + uint16_t pyld_length; + uint8_t type; /* word 1 of prli acc */ + uint8_t type_ext; + uint16_t common; +#define PRLI_EST_FCP_PAIR 0x2000 +#define PRLI_REQ_EXEC 0x0100 +#define PRLI_REQ_DOES_NOT_EXIST 0x0400 + union { + struct { + uint32_t reserved[2]; + uint32_t sp_info; + /* hard coding resp. 
target, rdxfr disabled.*/ +#define FCP_PRLI_SP 0x12 + } fcp; + struct { + uint32_t reserved[2]; + uint32_t sp_info; + uint16_t reserved2; + uint16_t first_burst; + } nvme; + }; +#define PRLI_ACC_FCP_RESP_LEN 20 +#define PRLI_ACC_NVME_RESP_LEN 24 + +}; + +/* + * ISP queue - PUREX IOCB entry structure definition + */ +#define PUREX_IOCB_TYPE 0x51 /* CT Pass Through IOCB entry */ +struct purex_entry_24xx { + uint8_t entry_type; /* Entry type. */ + uint8_t entry_count; /* Entry count. */ + uint8_t sys_define; /* System defined. */ + uint8_t entry_status; /* Entry Status. */ + + uint16_t reserved1; + uint8_t vp_idx; + uint8_t reserved2; + + uint16_t status_flags; + uint16_t nport_handle; + + uint16_t frame_size; + uint16_t trunc_frame_size; + + uint32_t rx_xchg_addr; + + uint8_t d_id[3]; + uint8_t r_ctl; + + uint8_t s_id[3]; + uint8_t cs_ctl; + + uint8_t f_ctl[3]; + uint8_t type; + + uint16_t seq_cnt; + uint8_t df_ctl; + uint8_t seq_id; + + uint16_t rx_id; + uint16_t ox_id; + uint32_t param; + + uint8_t pyld[20]; +#define PUREX_PYLD_SIZE 44 /* Number of bytes (hdr+pyld) in this IOCB */ +}; + +#define PUREX_ENTRY_SIZE (sizeof(purex_entry_24xx_t)) + +#define CONT_SENSE_DATA 60 +/* + * Continuation Status Type 0 (IOCB_TYPE_STATUS_CONT = 0x10) + * Section 5.6 FW Interface Spec + */ +struct __status_cont { + uint8_t entry_type; /* Entry type. - 0x10 */ + uint8_t entry_count; /* Entry count. */ + uint8_t entry_status; /* Entry Status. */ + uint8_t reserved; + + uint8_t data[CONT_SENSE_DATA]; +} __packed; + + /* * ISP queue - ELS Pass-Through entry structure definition. */ diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h index 3673fcdb033a..531f7b049caa 100644 --- a/drivers/scsi/qla2xxx/qla_gbl.h +++ b/drivers/scsi/qla2xxx/qla_gbl.h @@ -313,7 +313,10 @@ extern int qla2x00_set_fw_options(scsi_qla_host_t *, uint16_t *); extern int -qla2x00_mbx_reg_test(scsi_qla_host_t *); +qla2x00_set_purex_mode(scsi_qla_host_t *vha); + +extern int +qla2x00_mbx_reg_test(scsi_qla_host_t *vha); extern int qla2x00_verify_checksum(scsi_qla_host_t *, uint32_t); @@ -899,4 +902,16 @@ void qlt_remove_target_resources(struct qla_hw_data *); void qlt_clr_qp_table(struct scsi_qla_host *vha); void qlt_set_mode(struct scsi_qla_host *); +extern int qla2x00_get_plogi_template(scsi_qla_host_t *vha, dma_addr_t buf, + uint16_t length); +extern void qlt_dequeue_purex(struct scsi_qla_host *vha); +int qla24xx_post_nvmet_newsess_work(struct scsi_qla_host *vha, port_id_t *id, + u8 *port_name, void *pla); +int qlt_send_els_resp(srb_t *sp, struct __els_pt *pkt); +extern void nvmet_release_sessions(struct scsi_qla_host *vha); +struct fc_port *qla_nvmet_find_sess_by_s_id(scsi_qla_host_t *vha, + const uint32_t s_id); +void qla_nvme_cmpl_io(struct srb_iocb *); +void qla24xx_nvmet_abts_resp_iocb(struct scsi_qla_host *vha, + struct abts_resp_to_24xx *pkt, struct req_que *req); #endif /* _QLA_GBL_H */ diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c index a24b0c2a2f00..55dc11d91b35 100644 --- a/drivers/scsi/qla2xxx/qla_gs.c +++ b/drivers/scsi/qla2xxx/qla_gs.c @@ -646,9 +646,11 @@ static int qla_async_rftid(scsi_qla_host_t *vha, port_id_t *d_id) ct_req->req.rft_id.port_id[0] = vha->d_id.b.domain; ct_req->req.rft_id.port_id[1] = vha->d_id.b.area; ct_req->req.rft_id.port_id[2] = vha->d_id.b.al_pa; - ct_req->req.rft_id.fc4_types[2] = 0x01; /* FCP-3 */ - if (vha->flags.nvme_enabled) + if (!vha->flags.nvmet_enabled) + ct_req->req.rft_id.fc4_types[2] = 0x01; /* FCP-3 */ + + if (vha->flags.nvme_enabled || 
vha->flags.nvmet_enabled) ct_req->req.rft_id.fc4_types[6] = 1; /* NVMe type 28h */ sp->u.iocb_cmd.u.ctarg.req_size = RFT_ID_REQ_SIZE; @@ -691,6 +693,10 @@ qla2x00_rff_id(scsi_qla_host_t *vha, u8 type) return (QLA_SUCCESS); } + /* only single mode for now */ + if ((vha->flags.nvmet_enabled) && (type == FC4_TYPE_FCP_SCSI)) + return (QLA_SUCCESS); + return qla_async_rffid(vha, &vha->d_id, qlt_rff_id(vha), FC4_TYPE_FCP_SCSI); } @@ -2355,7 +2361,7 @@ qla2x00_fdmiv2_rpa(scsi_qla_host_t *vha) eiter->a.fc4_types[2], eiter->a.fc4_types[1]); - if (vha->flags.nvme_enabled) { + if (vha->flags.nvme_enabled || vha->flags.nvmet_enabled) { eiter->a.fc4_types[6] = 1; /* NVMe type 28h */ ql_dbg(ql_dbg_disc, vha, 0x211f, "NVME FC4 Type = %02x 0x0 0x0 0x0 0x0 0x0.\n", @@ -2559,7 +2565,7 @@ qla2x00_fdmiv2_rpa(scsi_qla_host_t *vha) "Port Active FC4 Type = %02x %02x.\n", eiter->a.port_fc4_type[2], eiter->a.port_fc4_type[1]); - if (vha->flags.nvme_enabled) { + if (vha->flags.nvme_enabled || vha->flags.nvmet_enabled) { eiter->a.port_fc4_type[4] = 0; eiter->a.port_fc4_type[5] = 0; eiter->a.port_fc4_type[6] = 1; /* NVMe type 28h */ diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c index 41e5358d3739..841541201671 100644 --- a/drivers/scsi/qla2xxx/qla_init.c +++ b/drivers/scsi/qla2xxx/qla_init.c @@ -19,6 +19,7 @@ #include #include "qla_target.h" +#include "qla_nvmet.h" /* * QLogic ISP2x00 Hardware Support Function Prototypes. @@ -1094,6 +1095,23 @@ int qla24xx_post_gpdb_work(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt) return qla2x00_post_work(vha, e); } +/* NVMET */ +int qla24xx_post_nvmet_newsess_work(struct scsi_qla_host *vha, port_id_t *id, + u8 *port_name, void *pla) +{ + struct qla_work_evt *e; + + e = qla2x00_alloc_work(vha, QLA_EVT_NEW_NVMET_SESS); + if (!e) + return QLA_FUNCTION_FAILED; + + e->u.new_sess.id = *id; + e->u.new_sess.pla = pla; + memcpy(e->u.new_sess.port_name, port_name, WWN_SIZE); + + return qla2x00_post_work(vha, e); +} + int qla24xx_async_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport, u8 opt) { srb_t *sp; @@ -3591,6 +3609,13 @@ qla2x00_setup_chip(scsi_qla_host_t *vha) rval = qla2x00_get_fw_version(vha); if (rval != QLA_SUCCESS) goto failed; + + if (vha->flags.nvmet_enabled) { + ql_log(ql_log_info, vha, 0xffff, + "Enabling PUREX mode\n"); + qla2x00_set_purex_mode(vha); + } + ha->flags.npiv_supported = 0; if (IS_QLA2XXX_MIDTYPE(ha) && (ha->fw_attributes & BIT_2)) { @@ -3811,11 +3836,14 @@ qla24xx_update_fw_options(scsi_qla_host_t *vha) /* Move PUREX, ABTS RX & RIDA to ATIOQ */ if (ql2xmvasynctoatio && (IS_QLA83XX(ha) || IS_QLA27XX(ha))) { - if (qla_tgt_mode_enabled(vha) || - qla_dual_mode_enabled(vha)) + if ((qla_tgt_mode_enabled(vha) || qla_dual_mode_enabled(vha)) && + qlt_op_target_mode) { + ql_log(ql_log_info, vha, 0xffff, + "Moving Purex to ATIO Q\n"); ha->fw_options[2] |= BIT_11; - else + } else { ha->fw_options[2] &= ~BIT_11; + } } if (IS_QLA25XX(ha) || IS_QLA83XX(ha) || IS_QLA27XX(ha)) { @@ -5463,7 +5491,8 @@ qla2x00_configure_fabric(scsi_qla_host_t *vha) &vha->dpc_flags)) break; } - if (vha->flags.nvme_enabled) { + if (vha->flags.nvme_enabled || + vha->flags.nvmet_enabled) { if (qla2x00_rff_id(vha, FC_TYPE_NVME)) { ql_dbg(ql_dbg_disc, vha, 0x2049, "Register NVME FC Type Features failed.\n"); @@ -5631,7 +5660,8 @@ qla2x00_find_all_fabric_devs(scsi_qla_host_t *vha) new_fcport->nvme_flag = 0; new_fcport->fc4f_nvme = 0; - if (vha->flags.nvme_enabled && + if ((vha->flags.nvme_enabled || + vha->flags.nvmet_enabled) && swl[swl_idx].fc4f_nvme) { 
new_fcport->fc4f_nvme = swl[swl_idx].fc4f_nvme; @@ -8457,6 +8487,12 @@ qla81xx_update_fw_options(scsi_qla_host_t *vha) ha->fw_options[2] |= BIT_11; else ha->fw_options[2] &= ~BIT_11; + + if (ql2xnvmeenable == 2 && qlt_op_target_mode) { + /* Enabled PUREX node */ + ha->fw_options[1] |= FO1_ENABLE_PUREX; + ha->fw_options[2] |= BIT_11; + } } if (qla_tgt_mode_enabled(vha) || diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c index d73b04e40590..5c1833f030a4 100644 --- a/drivers/scsi/qla2xxx/qla_isr.c +++ b/drivers/scsi/qla2xxx/qla_isr.c @@ -23,6 +23,8 @@ static void qla2x00_status_cont_entry(struct rsp_que *, sts_cont_entry_t *); static int qla2x00_error_entry(scsi_qla_host_t *, struct rsp_que *, sts_entry_t *); +extern struct workqueue_struct *qla_nvmet_comp_wq; + /** * qla2100_intr_handler() - Process interrupts for the ISP2100 and ISP2200. * @irq: @@ -1583,6 +1585,12 @@ qla24xx_els_ct_entry(scsi_qla_host_t *vha, struct req_que *req, sp->name); sp->done(sp, res); return; + case SRB_NVME_ELS_RSP: + type = "nvme els"; + ql_log(ql_log_info, vha, 0xffff, + "Completing %s: (%p) type=%d.\n", type, sp, sp->type); + sp->done(sp, 0); + return; default: ql_dbg(ql_dbg_user, vha, 0x503e, "Unrecognized SRB: (%p) type=%d.\n", sp, sp->type); @@ -2456,6 +2464,13 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt) return; } + if (sp->type == SRB_NVMET_LS) { + ql_log(ql_log_info, vha, 0xffff, + "Dump NVME-LS response pkt\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)pkt, 64); + } + if (unlikely((state_flags & BIT_1) && (sp->type == SRB_BIDI_CMD))) { qla25xx_process_bidir_status_iocb(vha, pkt, req, handle); return; @@ -2825,6 +2840,12 @@ qla2x00_error_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, sts_entry_t *pkt) "iocb type %xh with error status %xh, handle %xh, rspq id %d\n", pkt->entry_type, pkt->entry_status, pkt->handle, rsp->id); + ql_log(ql_log_info, vha, 0xffff, + "(%s-%d)Dumping the NVMET-ERROR pkt IOCB\n", + __func__, __LINE__); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)pkt, 64); + if (que >= ha->max_req_queues || !ha->req_q_map[que]) goto fatal; @@ -2918,6 +2939,23 @@ qla24xx_abort_iocb_entry(scsi_qla_host_t *vha, struct req_que *req, sp->done(sp, 0); } +/* + * Post a completion to the NVMET layer + */ + +static void qla_nvmet_comp_work(struct work_struct *work) +{ + srb_t *sp = container_of(work, srb_t, nvmet_comp_work); + + sp->done(sp, sp->comp_status); +} + +/** + * qla24xx_nvme_ls4_iocb() - Process LS4 completions + * @vha: SCSI driver HA context + * @pkt: LS4 req packet + * @req: Request Queue + */ void qla24xx_nvme_ls4_iocb(struct scsi_qla_host *vha, struct pt_ls4_request *pkt, struct req_que *req) { @@ -2929,11 +2967,78 @@ void qla24xx_nvme_ls4_iocb(struct scsi_qla_host *vha, if (!sp) return; + ql_log(ql_log_info, vha, 0xc01f, + "Dumping response pkt for SRB type: %#x\n", sp->type); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)pkt, 16); + comp_status = le16_to_cpu(pkt->status); - sp->done(sp, comp_status); + sp->comp_status = comp_status; + /* Queue the work item */ + INIT_WORK(&sp->nvmet_comp_work, qla_nvmet_comp_work); + queue_work(qla_nvmet_comp_wq, &sp->nvmet_comp_work); } /** + * qla24xx_nvmet_fcp_iocb() - Process FCP completions + * @vha: SCSI driver HA context + * @pkt: FCP completion from firmware + * @req: Request Queue + */ +static void qla24xx_nvmet_fcp_iocb(struct scsi_qla_host *vha, + struct ctio_nvme_from_27xx *pkt, struct req_que *req) 
+{ + srb_t *sp; + const char func[] = "NVMET_FCP_IOCB"; + uint16_t comp_status; + + sp = qla2x00_get_sp_from_handle(vha, func, req, pkt); + if (!sp) + return; + + if ((pkt->entry_status) || (pkt->status != 1)) { + ql_log(ql_log_info, vha, 0xc01f, + "Dumping response pkt for SRB type: %#x\n", sp->type); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)pkt, 16); + } + + comp_status = le16_to_cpu(pkt->status); + sp->comp_status = comp_status; + /* Queue the work item */ + INIT_WORK(&sp->nvmet_comp_work, qla_nvmet_comp_work); + queue_work(qla_nvmet_comp_wq, &sp->nvmet_comp_work); +} + +/** + * qla24xx_nvmet_abts_resp_iocb() - Process ABTS completions + * @vha: SCSI driver HA context + * @pkt: ABTS completion from firmware + * @req: Request Queue + */ +void qla24xx_nvmet_abts_resp_iocb(struct scsi_qla_host *vha, + struct abts_resp_to_24xx *pkt, struct req_que *req) +{ + srb_t *sp; + const char func[] = "NVMET_ABTS_RESP_IOCB"; + uint16_t comp_status; + + sp = qla2x00_get_sp_from_handle(vha, func, req, pkt); + if (!sp) + return; + + ql_log(ql_log_info, vha, 0xc01f, + "Dumping response pkt for SRB type: %#x\n", sp->type); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)pkt, 16); + + comp_status = le16_to_cpu(pkt->entry_status); + sp->comp_status = comp_status; + /* Queue the work item */ + INIT_WORK(&sp->nvmet_comp_work, qla_nvmet_comp_work); + queue_work(qla_nvmet_comp_wq, &sp->nvmet_comp_work); +} +/** * qla24xx_process_response_queue() - Process response queue entries. * @vha: SCSI driver HA context * @rsp: response queue @@ -3011,6 +3116,11 @@ void qla24xx_process_response_queue(struct scsi_qla_host *vha, qla24xx_nvme_ls4_iocb(vha, (struct pt_ls4_request *)pkt, rsp->req); break; + case CTIO_NVME: + qla24xx_nvmet_fcp_iocb(vha, + (struct ctio_nvme_from_27xx *)pkt, + rsp->req); + break; case NOTIFY_ACK_TYPE: if (pkt->handle == QLA_TGT_SKIP_HANDLE) qlt_response_pkt_all_vps(vha, rsp, diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c index e016ee9c6d8e..0269566acae2 100644 --- a/drivers/scsi/qla2xxx/qla_mbx.c +++ b/drivers/scsi/qla2xxx/qla_mbx.c @@ -61,6 +61,7 @@ static struct rom_cmd { { MBC_READ_SFP }, { MBC_GET_RNID_PARAMS }, { MBC_GET_SET_ZIO_THRESHOLD }, + { MBC_SET_RNID_PARAMS }, }; static int is_rom_cmd(uint16_t cmd) @@ -1109,12 +1110,15 @@ qla2x00_get_fw_version(scsi_qla_host_t *vha) * FW supports nvme and driver load parameter requested nvme. * BIT 26 of fw_attributes indicates NVMe support. 
*/ - if ((ha->fw_attributes_h & 0x400) && ql2xnvmeenable) { + if ((ha->fw_attributes_h & 0x400) && (ql2xnvmeenable == 1)) { vha->flags.nvme_enabled = 1; ql_log(ql_log_info, vha, 0xd302, "%s: FC-NVMe is Enabled (0x%x)\n", __func__, ha->fw_attributes_h); } + + if ((ha->fw_attributes_h & 0x400) && (ql2xnvmeenable == 2)) + vha->flags.nvmet_enabled = 1; } if (IS_QLA27XX(ha)) { @@ -1189,6 +1193,101 @@ qla2x00_get_fw_options(scsi_qla_host_t *vha, uint16_t *fwopts) return rval; } +#define OPCODE_PLOGI_TMPLT 7 +int +qla2x00_get_plogi_template(scsi_qla_host_t *vha, dma_addr_t buf, + uint16_t length) +{ + mbx_cmd_t mc; + mbx_cmd_t *mcp = &mc; + int rval; + + mcp->mb[0] = MBC_GET_RNID_PARAMS; + mcp->mb[1] = OPCODE_PLOGI_TMPLT << 8; + mcp->mb[2] = MSW(LSD(buf)); + mcp->mb[3] = LSW(LSD(buf)); + mcp->mb[6] = MSW(MSD(buf)); + mcp->mb[7] = LSW(MSD(buf)); + mcp->mb[8] = length; + mcp->out_mb = MBX_8|MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0; + mcp->in_mb = MBX_1|MBX_0; + mcp->buf_size = length; + mcp->flags = MBX_DMA_IN; + mcp->tov = MBX_TOV_SECONDS; + rval = qla2x00_mailbox_command(vha, mcp); + + ql_dbg(ql_dbg_mbx, vha, 0x118f, + "%s: %s rval=%x mb[0]=%x,%x.\n", __func__, + (rval == QLA_SUCCESS) ? "Success" : "Failed", + rval, mcp->mb[0], mcp->mb[1]); + + return rval; +} + +#define OPCODE_LIST_LENGTH 32 /* ELS opcode list */ +#define OPCODE_ELS_CMD 5 /* MBx1 cmd param */ +/* + * qla2x00_set_purex_mode + * Enable purex mode for ELS commands + * + * Input: + * vha = adapter block pointer. + * + * Returns: + * qla2x00 local function return status code. + * + * Context: + * Kernel context. + */ +int +qla2x00_set_purex_mode(scsi_qla_host_t *vha) +{ + int rval; + mbx_cmd_t mc; + mbx_cmd_t *mcp = &mc; + uint8_t *els_cmd_map; + dma_addr_t els_cmd_map_dma; + struct qla_hw_data *ha = vha->hw; + + ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1197, + "Entered %s.\n", __func__); + + els_cmd_map = dma_zalloc_coherent(&ha->pdev->dev, OPCODE_LIST_LENGTH, + &els_cmd_map_dma, GFP_KERNEL); + if (!els_cmd_map) { + ql_log(ql_log_warn, vha, 0x7101, + "Failed to allocate RDP els command param.\n"); + return QLA_MEMORY_ALLOC_FAILED; + } + + els_cmd_map[0] = 0x28; /* enable PLOGI and LOGO ELS */ + els_cmd_map[4] = 0x13; /* enable PRLI ELS */ + els_cmd_map[10] = 0x5; + + mcp->mb[0] = MBC_SET_RNID_PARAMS; + mcp->mb[1] = OPCODE_ELS_CMD << 8; + mcp->mb[2] = MSW(LSD(els_cmd_map_dma)); + mcp->mb[3] = LSW(LSD(els_cmd_map_dma)); + mcp->mb[6] = MSW(MSD(els_cmd_map_dma)); + mcp->mb[7] = LSW(MSD(els_cmd_map_dma)); + mcp->out_mb = MBX_7|MBX_6|MBX_3|MBX_2|MBX_1|MBX_0; + mcp->in_mb = MBX_1|MBX_0; + mcp->tov = MBX_TOV_SECONDS; + mcp->flags = MBX_DMA_OUT; + mcp->buf_size = OPCODE_LIST_LENGTH; + rval = qla2x00_mailbox_command(vha, mcp); + + ql_dbg(ql_dbg_mbx, vha, 0x118d, + "%s: %s rval=%x mb[0]=%x,%x.\n", __func__, + (rval == QLA_SUCCESS) ? 
"Success" : "Failed", + rval, mcp->mb[0], mcp->mb[1]); + + dma_free_coherent(&ha->pdev->dev, OPCODE_LIST_LENGTH, + els_cmd_map, els_cmd_map_dma); + + return rval; +} + /* * qla2x00_set_fw_options diff --git a/drivers/scsi/qla2xxx/qla_nvme.h b/drivers/scsi/qla2xxx/qla_nvme.h index 4941d107fb1c..0902c3a27adc 100644 --- a/drivers/scsi/qla2xxx/qla_nvme.h +++ b/drivers/scsi/qla2xxx/qla_nvme.h @@ -106,39 +106,6 @@ struct pt_ls4_request { uint32_t dseg1_address[2]; uint32_t dseg1_len; }; - -#define PT_LS4_UNSOL 0x56 /* pass-up unsolicited rec FC-NVMe request */ -struct pt_ls4_rx_unsol { - uint8_t entry_type; - uint8_t entry_count; - uint16_t rsvd0; - uint16_t rsvd1; - uint8_t vp_index; - uint8_t rsvd2; - uint16_t rsvd3; - uint16_t nport_handle; - uint16_t frame_size; - uint16_t rsvd4; - uint32_t exchange_address; - uint8_t d_id[3]; - uint8_t r_ctl; - uint8_t s_id[3]; - uint8_t cs_ctl; - uint8_t f_ctl[3]; - uint8_t type; - uint16_t seq_cnt; - uint8_t df_ctl; - uint8_t seq_id; - uint16_t rx_id; - uint16_t ox_id; - uint32_t param; - uint32_t desc0; -#define PT_LS4_PAYLOAD_OFFSET 0x2c -#define PT_LS4_FIRST_PACKET_LEN 20 - uint32_t desc_len; - uint32_t payload[3]; -}; - /* * Global functions prototype in qla_nvme.c source file. */ diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c index d21dd7700d5d..d10ef1577197 100644 --- a/drivers/scsi/qla2xxx/qla_os.c +++ b/drivers/scsi/qla2xxx/qla_os.c @@ -137,13 +137,17 @@ MODULE_PARM_DESC(ql2xenabledif, #if (IS_ENABLED(CONFIG_NVME_FC)) int ql2xnvmeenable = 1; +#elif (IS_ENABLED(CONFIG_NVME_TARGET_FC)) +int ql2xnvmeenable = 2; #else int ql2xnvmeenable; #endif module_param(ql2xnvmeenable, int, 0644); MODULE_PARM_DESC(ql2xnvmeenable, - "Enables NVME support. " - "0 - no NVMe. Default is Y"); + "Enables NVME support.\n" + "0 - no NVMe.\n" + "1 - initiator,\n" + "2 - target. 
Default is 1\n"); int ql2xenablehba_err_chk = 2; module_param(ql2xenablehba_err_chk, int, S_IRUGO|S_IWUSR); @@ -3421,6 +3425,9 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id) qlt_add_target(ha, base_vha); + if (ql2xnvmeenable == 2) + qla_nvmet_create_targetport(base_vha); + clear_bit(PFLG_DRIVER_PROBING, &base_vha->pci_flags); if (test_bit(UNLOADING, &base_vha->dpc_flags)) @@ -3701,6 +3708,8 @@ qla2x00_remove_one(struct pci_dev *pdev) qla_nvme_delete(base_vha); + qla_nvmet_delete(base_vha); + dma_free_coherent(&ha->pdev->dev, base_vha->gnl.size, base_vha->gnl.l, base_vha->gnl.ldma); @@ -5024,6 +5033,53 @@ static void qla_sp_retry(struct scsi_qla_host *vha, struct qla_work_evt *e) qla24xx_sp_unmap(vha, sp); } } +/* NVMET */ +static +void qla24xx_create_new_nvmet_sess(struct scsi_qla_host *vha, + struct qla_work_evt *e) +{ + unsigned long flags; + fc_port_t *fcport = NULL; + + spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags); + fcport = qla2x00_find_fcport_by_wwpn(vha, e->u.new_sess.port_name, 1); + if (fcport) { + ql_log(ql_log_info, vha, 0x11020, + "Found fcport: %p for WWN: %8phC\n", fcport, + e->u.new_sess.port_name); + fcport->d_id = e->u.new_sess.id; + + /* Session existing with No loop_ID assigned */ + if (fcport->loop_id == FC_NO_LOOP_ID) { + fcport->loop_id = qla2x00_find_new_loop_id(vha, fcport); + ql_log(ql_log_info, vha, 0x11021, + "Allocated new loop_id: %#x for fcport: %p\n", + fcport->loop_id, fcport); + fcport->fw_login_state = DSC_LS_PLOGI_PEND; + } + } else { + fcport = qla2x00_alloc_fcport(vha, GFP_KERNEL); + if (fcport) { + fcport->d_id = e->u.new_sess.id; + fcport->loop_id = qla2x00_find_new_loop_id(vha, fcport); + ql_log(ql_log_info, vha, 0x11022, + "Allocated new loop_id: %#x for fcport: %p\n", + fcport->loop_id, fcport); + + fcport->scan_state = QLA_FCPORT_FOUND; + fcport->flags |= FCF_FABRIC_DEVICE; + fcport->fw_login_state = DSC_LS_PLOGI_PEND; + + memcpy(fcport->port_name, e->u.new_sess.port_name, + WWN_SIZE); + + list_add_tail(&fcport->list, &vha->vp_fcports); + } + } + spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags); + + complete(&vha->purex_plogi_sess); +} void qla2x00_do_work(struct scsi_qla_host *vha) @@ -5129,6 +5185,10 @@ qla2x00_do_work(struct scsi_qla_host *vha) qla24xx_els_dcmd2_iocb(vha, ELS_DCMD_PLOGI, e->u.fcport.fcport, false); break; + /* FC-NVMe Target */ + case QLA_EVT_NEW_NVMET_SESS: + qla24xx_create_new_nvmet_sess(vha, e); + break; } if (e->flags & QLA_EVT_FLAG_FREE) kfree(e); @@ -6100,6 +6160,12 @@ qla2x00_do_dpc(void *data) set_bit(ISP_ABORT_NEEDED, &base_vha->dpc_flags); } + if (test_and_clear_bit(NVMET_PUREX, &base_vha->dpc_flags)) { + ql_log(ql_log_info, base_vha, 0x11022, + "qla2xxx-nvmet: Received a frame on the wire\n"); + qlt_dequeue_purex(base_vha); + } + if (test_and_clear_bit (ISP_ABORT_NEEDED, &base_vha->dpc_flags) && !test_bit(UNLOADING, &base_vha->dpc_flags)) { @@ -6273,6 +6339,13 @@ qla2x00_do_dpc(void *data) ha->nvme_last_rptd_aen); } } +#if (IS_ENABLED(CONFIG_NVME_TARGET_FC)) + if (test_and_clear_bit(NVMET_PUREX, &base_vha->dpc_flags)) { + ql_log(ql_log_info, base_vha, 0x11025, + "nvmet: Received a frame on the wire\n"); + qlt_dequeue_purex(base_vha); + } +#endif if (test_and_clear_bit(SET_ZIO_THRESHOLD_NEEDED, &base_vha->dpc_flags)) { diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c index a69ec4519d81..6f61d4e04902 100644 --- a/drivers/scsi/qla2xxx/qla_target.c +++ b/drivers/scsi/qla2xxx/qla_target.c @@ -40,6 +40,7 @@ #include #include "qla_def.h" +#include 
"qla_nvmet.h" #include "qla_target.h" static int ql2xtgt_tape_enable; @@ -78,6 +79,8 @@ int ql2x_ini_mode = QLA2XXX_INI_MODE_EXCLUSIVE; static int qla_sam_status = SAM_STAT_BUSY; static int tc_sam_status = SAM_STAT_TASK_SET_FULL; /* target core */ +int qlt_op_target_mode; + /* * From scsi/fc/fc_fcp.h */ @@ -149,11 +152,16 @@ static inline uint32_t qlt_make_handle(struct qla_qpair *); */ static struct kmem_cache *qla_tgt_mgmt_cmd_cachep; struct kmem_cache *qla_tgt_plogi_cachep; +static struct kmem_cache *qla_tgt_purex_plogi_cachep; static mempool_t *qla_tgt_mgmt_cmd_mempool; static struct workqueue_struct *qla_tgt_wq; +static struct workqueue_struct *qla_nvmet_wq; static DEFINE_MUTEX(qla_tgt_mutex); static LIST_HEAD(qla_tgt_glist); +/* WQ for nvmet completions */ +struct workqueue_struct *qla_nvmet_comp_wq; + static const char *prot_op_str(u32 prot_op) { switch (prot_op) { @@ -348,13 +356,653 @@ void qlt_unknown_atio_work_fn(struct work_struct *work) qlt_try_to_dequeue_unknown_atios(vha, 0); } +#define ELS_RJT 0x01 +#define ELS_ACC 0x02 + +struct fc_port *qla_nvmet_find_sess_by_s_id( + scsi_qla_host_t *vha, + const uint32_t s_id) +{ + struct fc_port *sess = NULL, *other_sess; + uint32_t other_sid; + + list_for_each_entry(other_sess, &vha->vp_fcports, list) { + other_sid = other_sess->d_id.b.domain << 16 | + other_sess->d_id.b.area << 8 | + other_sess->d_id.b.al_pa; + + if (other_sid == s_id) { + sess = other_sess; + break; + } + } + return sess; +} + +/* Send an ELS response */ +int qlt_send_els_resp(srb_t *sp, struct __els_pt *els_pkt) +{ + struct purex_entry_24xx *purex = (struct purex_entry_24xx *) + sp->u.snvme_els.ptr; + dma_addr_t udma = sp->u.snvme_els.dma_addr; + struct fc_port *fcport; + port_id_t port_id; + uint16_t loop_id; + + port_id.b.domain = purex->s_id[2]; + port_id.b.area = purex->s_id[1]; + port_id.b.al_pa = purex->s_id[0]; + port_id.b.rsvd_1 = 0; + + fcport = qla2x00_find_fcport_by_nportid(sp->vha, &port_id, 1); + if (fcport) + /* There is no session with the swt */ + loop_id = fcport->loop_id; + else + loop_id = 0xFFFF; + + ql_log(ql_log_info, sp->vha, 0xfff9, + "sp: %p, purex: %p, udma: %pad, loop_id: 0x%x\n", + sp, purex, &udma, loop_id); + + els_pkt->entry_type = ELS_IOCB_TYPE; + els_pkt->entry_count = 1; + + els_pkt->handle = sp->handle; + els_pkt->nphdl = cpu_to_le16(loop_id); + els_pkt->tx_dsd_cnt = cpu_to_le16(1); + els_pkt->vp_index = purex->vp_idx; + els_pkt->sof = EST_SOFI3; + els_pkt->rcv_exchg_id = cpu_to_le32(purex->rx_xchg_addr); + els_pkt->op_code = sp->cmd_type; + els_pkt->did_lo = cpu_to_le16(purex->s_id[0] | (purex->s_id[1] << 8)); + els_pkt->did_hi = purex->s_id[2]; + els_pkt->sid_hi = purex->d_id[2]; + els_pkt->sid_lo = cpu_to_le16(purex->d_id[0] | (purex->d_id[1] << 8)); + + if (sp->gen2 == ELS_ACC) + els_pkt->cntl_flags = cpu_to_le16(EPD_ELS_ACC); + else + els_pkt->cntl_flags = cpu_to_le16(EPD_ELS_RJT); + + els_pkt->tx_bc = cpu_to_le32(sp->gen1); + els_pkt->tx_dsd[0] = cpu_to_le32(LSD(udma)); + els_pkt->tx_dsd[1] = cpu_to_le32(MSD(udma)); + els_pkt->tx_dsd_len = cpu_to_le32(sp->gen1); + /* Memory Barrier */ + wmb(); + + ql_log(ql_log_info, sp->vha, 0x11030, "Dumping PLOGI ELS\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, sp->vha, 0xffff, + (uint8_t *)els_pkt, sizeof(*els_pkt)); + + return 0; +} + +static void qlt_nvme_els_done(void *s, int res) +{ + struct srb *sp = s; + + ql_log(ql_log_info, sp->vha, 0x11031, + "Done with NVME els command\n"); + + ql_log(ql_log_info, sp->vha, 0x11032, + "sp: %p vha: %p, dma_ptr: %p, dma_addr: %pad, len: 
%#x\n", + sp, sp->vha, sp->u.snvme_els.dma_ptr, &sp->u.snvme_els.dma_addr, + sp->gen1); + + qla2x00_rel_sp(sp); +} + +static int qlt_send_plogi_resp(struct scsi_qla_host *vha, uint8_t op_code, + struct purex_entry_24xx *purex, struct fc_port *fcport) +{ + int ret, rval, i; + dma_addr_t plogi_ack_udma = vha->vha_tgt.qla_tgt->nvme_els_rsp; + void *plogi_ack_buf = vha->vha_tgt.qla_tgt->nvme_els_ptr; + uint8_t *tmp; + uint32_t *opcode; + srb_t *sp; + + /* Alloc SRB structure */ + sp = qla2x00_get_sp(vha, fcport, GFP_KERNEL); + if (!sp) { + ql_log(ql_log_info, vha, 0x11033, + "Failed to allocate SRB\n"); + return -ENOMEM; + } + + sp->type = SRB_NVME_ELS_RSP; + sp->done = qlt_nvme_els_done; + sp->vha = vha; + + ql_log(ql_log_info, vha, 0x11034, + "sp: %p, vha: %p, plogi_ack_buf: %p\n", + sp, vha, plogi_ack_buf); + + sp->u.snvme_els.dma_addr = plogi_ack_udma; + sp->u.snvme_els.dma_ptr = plogi_ack_buf; + sp->gen1 = 116; + sp->gen2 = ELS_ACC; + sp->u.snvme_els.ptr = (struct purex_entry_24xx *)purex; + sp->cmd_type = ELS_PLOGI; + + tmp = (uint8_t *)plogi_ack_udma; + + tmp += 4; /* fw doesn't return 1st 4 bytes where opcode goes */ + + ret = qla2x00_get_plogi_template(vha, (dma_addr_t)tmp, (116/4 - 1)); + if (ret) { + ql_log(ql_log_warn, vha, 0x11035, + "Failed to get plogi template\n"); + return -ENOMEM; + } + + opcode = (uint32_t *) plogi_ack_buf; + *opcode = cpu_to_be32(ELS_ACC << 24); + + for (i = 0; i < 0x1c; i++) { + ++opcode; + *opcode = cpu_to_be32(*opcode); + } + + ql_dbg(ql_dbg_disc + ql_dbg_verbose, vha, 0xfff3, + "Dumping the PLOGI from fw\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_verbose, vha, 0x70cf, + (uint8_t *)plogi_ack_buf, 116); + + rval = qla2x00_start_sp(sp); + if (rval != QLA_SUCCESS) + qla2x00_rel_sp(sp); + + return 0; +} + +static struct qlt_purex_plogi_ack_t * +qlt_plogi_find_add(struct scsi_qla_host *vha, port_id_t *id, + struct __fc_plogi *rcvd_plogi) +{ + struct qlt_purex_plogi_ack_t *pla; + + list_for_each_entry(pla, &vha->plogi_ack_list, list) { + if (pla->id.b24 == id->b24) + return pla; + } + + pla = kmem_cache_zalloc(qla_tgt_purex_plogi_cachep, GFP_ATOMIC); + if (!pla) { + ql_dbg(ql_dbg_async, vha, 0x5088, + "qla_target(%d): Allocation of plogi_ack failed\n", + vha->vp_idx); + return NULL; + } + + pla->id = *id; + memcpy(&pla->rcvd_plogi, rcvd_plogi, sizeof(struct __fc_plogi)); + ql_log(ql_log_info, vha, 0xf101, + "New session(%p) created for port: %#x\n", + pla, pla->id.b24); + + list_add_tail(&pla->list, &vha->plogi_ack_list); + + return pla; +} + +static void __swap_wwn(uint8_t *ptr, uint32_t size) +{ + uint32_t *iptr = (uint32_t *)ptr; + uint32_t *optr = (uint32_t *)ptr; + uint32_t i = size >> 2; + + for (; i ; i--) + *optr++ = be32_to_cpu(*iptr++); +} + +static int abort_cmds_for_s_id(struct scsi_qla_host *vha, port_id_t *s_id); +/* + * Parse the PLOGI from the peer port + * Retrieve WWPN, WWNN from the payload + * Create and fc port if it is a new WWN + * else clean up the prev exchange + * Return a response + */ +static void qlt_process_plogi(struct scsi_qla_host *vha, + struct purex_entry_24xx *purex, void *buf) +{ + uint64_t pname, nname; + struct __fc_plogi *rcvd_plogi = (struct __fc_plogi *)buf; + struct qla_tgt *tgt = vha->vha_tgt.qla_tgt; + uint16_t loop_id; + unsigned long flags; + struct fc_port *sess = NULL, *conflict_sess = NULL; + struct qlt_purex_plogi_ack_t *pla; + port_id_t port_id; + int sess_handling = 0; + + port_id.b.domain = purex->s_id[2]; + port_id.b.area = purex->s_id[1]; + port_id.b.al_pa = purex->s_id[0]; + port_id.b.rsvd_1 = 0; + + if 
(IS_SW_RESV_ADDR(port_id)) { + ql_log(ql_log_info, vha, 0x11036, + "Received plogi from switch, just send an ACC\n"); + goto send_plogi_resp; + } + + loop_id = le16_to_cpu(purex->nport_handle); + + /* Clean up prev commands if any */ + if (sess_handling) { + ql_log(ql_log_info, vha, 0x11037, + "%s %d Cleaning up prev commands\n", + __func__, __LINE__); + abort_cmds_for_s_id(vha, &port_id); + } + + __swap_wwn(rcvd_plogi->pname, 4); + __swap_wwn(&rcvd_plogi->pname[4], 4); + pname = wwn_to_u64(rcvd_plogi->pname); + + __swap_wwn(rcvd_plogi->nname, 4); + __swap_wwn(&rcvd_plogi->nname[4], 4); + nname = wwn_to_u64(rcvd_plogi->nname); + + ql_log(ql_log_info, vha, 0x11038, + "%s %d, pname:%llx, nname:%llx port_id: %#x\n", + __func__, __LINE__, pname, nname, loop_id); + + /* Invalidate other sessions if any */ + spin_lock_irqsave(&tgt->ha->tgt.sess_lock, flags); + sess = qlt_find_sess_invalidate_other(vha, pname, + port_id, loop_id, &conflict_sess); + spin_unlock_irqrestore(&tgt->ha->tgt.sess_lock, flags); + + /* Add the inbound plogi(if from a new device) to the list */ + pla = qlt_plogi_find_add(vha, &port_id, rcvd_plogi); + + /* If there is no existing session, create one */ + if (unlikely(!sess)) { + ql_log(ql_log_info, vha, 0xf102, + "Creating a new session\n"); + init_completion(&vha->purex_plogi_sess); + qla24xx_post_nvmet_newsess_work(vha, &port_id, + rcvd_plogi->pname, pla); + wait_for_completion_timeout(&vha->purex_plogi_sess, 500); + /* Send a PLOGI response */ + goto send_plogi_resp; + } else { + /* Session existing with No loop_ID assigned */ + if (sess->loop_id == FC_NO_LOOP_ID) { + sess->loop_id = qla2x00_find_new_loop_id(vha, sess); + ql_log(ql_log_info, vha, 0x11039, + "Allocated new loop_id: %#x for fcport: %p\n", + sess->loop_id, sess); + } + sess->d_id = port_id; + + sess->fw_login_state = DSC_LS_PLOGI_PEND; + } +send_plogi_resp: + /* Send a PLOGI response */ + qlt_send_plogi_resp(vha, ELS_PLOGI, purex, sess); +} + +static int qlt_process_logo(struct scsi_qla_host *vha, + struct purex_entry_24xx *purex, void *buf) +{ + struct __fc_logo_acc *logo_acc; + dma_addr_t logo_ack_udma = vha->vha_tgt.qla_tgt->nvme_els_rsp; + void *logo_ack_buf = vha->vha_tgt.qla_tgt->nvme_els_ptr; + srb_t *sp; + int rval; + uint32_t look_up_sid; + fc_port_t *sess = NULL; + port_id_t port_id; + + port_id.b.domain = purex->s_id[2]; + port_id.b.area = purex->s_id[1]; + port_id.b.al_pa = purex->s_id[0]; + port_id.b.rsvd_1 = 0; + + if (!IS_SW_RESV_ADDR(port_id)) { + look_up_sid = purex->s_id[2] << 16 | purex->s_id[1] << 8 | + purex->s_id[0]; + ql_log(ql_log_info, vha, 0x11040, + "%s - Look UP sid: %#x\n", __func__, look_up_sid); + + sess = qla_nvmet_find_sess_by_s_id(vha, look_up_sid); + if (unlikely(!sess)) + WARN_ON(1); + } + + /* Alloc SRB structure */ + sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL); + if (!sp) { + ql_log(ql_log_info, vha, 0x11041, + "Failed to allocate SRB\n"); + return -ENOMEM; + } + + sp->type = SRB_NVME_ELS_RSP; + sp->done = qlt_nvme_els_done; + sp->vha = vha; + sp->fcport = sess; + + ql_log(ql_log_info, vha, 0x11042, + "sp: %p, vha: %p, logo_ack_buf: %p\n", + sp, vha, logo_ack_buf); + + logo_acc = (struct __fc_logo_acc *)logo_ack_buf; + memset(logo_acc, 0, sizeof(*logo_acc)); + logo_acc->op_code = ELS_ACC; + + /* Send response */ + sp->u.snvme_els.dma_addr = logo_ack_udma; + sp->u.snvme_els.dma_ptr = logo_ack_buf; + sp->gen1 = sizeof(struct __fc_logo_acc); + sp->gen2 = ELS_ACC; + sp->u.snvme_els.ptr = (struct purex_entry_24xx *)purex; + sp->cmd_type = ELS_LOGO; + + rval = 
qla2x00_start_sp(sp); + if (rval != QLA_SUCCESS) + qla2x00_rel_sp(sp); + + return 0; +} + +static int qlt_process_prli(struct scsi_qla_host *vha, + struct purex_entry_24xx *purex, void *buf) +{ + struct __fc_prli *prli = (struct __fc_prli *)buf; + struct __fc_prli_acc *prli_acc; + struct __fc_prli_rjt *prli_rej; + dma_addr_t prli_ack_udma = vha->vha_tgt.qla_tgt->nvme_els_rsp; + void *prli_ack_buf = vha->vha_tgt.qla_tgt->nvme_els_ptr; + srb_t *sp; + struct fc_port *sess = NULL; + int rval; + uint32_t look_up_sid; + port_id_t port_id; + + port_id.b.domain = purex->s_id[2]; + port_id.b.area = purex->s_id[1]; + port_id.b.al_pa = purex->s_id[0]; + port_id.b.rsvd_1 = 0; + + if (!IS_SW_RESV_ADDR(port_id)) { + look_up_sid = purex->s_id[2] << 16 | purex->s_id[1] << 8 | + purex->s_id[0]; + ql_log(ql_log_info, vha, 0x11043, + "%s - Look UP sid: %#x\n", __func__, look_up_sid); + + sess = qla_nvmet_find_sess_by_s_id(vha, look_up_sid); + if (unlikely(!sess)) + WARN_ON(1); + } + /* Alloc SRB structure */ + sp = qla2x00_get_sp(vha, NULL, GFP_KERNEL); + if (!sp) { + ql_log(ql_log_info, vha, 0x11044, + "Failed to allocate SRB\n"); + return -ENOMEM; + } + + sp->type = SRB_NVME_ELS_RSP; + sp->done = qlt_nvme_els_done; + sp->vha = vha; + sp->fcport = sess; + + ql_log(ql_log_info, vha, 0x11045, + "sp: %p, vha: %p, prli_ack_buf: %p, prli_ack_udma: %pad\n", + sp, vha, prli_ack_buf, &prli_ack_udma); + + memset(prli_ack_buf, 0, sizeof(struct __fc_prli_acc)); + + /* Parse PRLI */ + if (prli->prli_type == PRLI_TYPE_FCP) { + /* Send a RJT for FCP */ + prli_rej = (struct __fc_prli_rjt *)prli_ack_buf; + prli_rej->op_code = ELS_RJT; + prli_rej->reason = PRLI_RJT_REASON; + } else if (prli->prli_type == PRLI_TYPE_NVME) { + uint32_t spinfo; + + prli_acc = (struct __fc_prli_acc *)prli_ack_buf; + prli_acc->op_code = ELS_ACC; + prli_acc->type = PRLI_TYPE_NVME; + prli_acc->page_length = PRLI_NVME_PAGE_LENGTH; + prli_acc->common = cpu_to_be16(PRLI_REQ_EXEC); + prli_acc->pyld_length = cpu_to_be16(PRLI_ACC_NVME_RESP_LEN); + spinfo = NVME_PRLI_DISC | NVME_PRLI_TRGT; + prli_acc->nvme.sp_info = cpu_to_be32(spinfo); + } + + /* Send response */ + sp->u.snvme_els.dma_addr = prli_ack_udma; + sp->u.snvme_els.dma_ptr = prli_ack_buf; + + if (prli->prli_type == PRLI_TYPE_FCP) { + sp->gen1 = sizeof(struct __fc_prli_rjt); + sp->gen2 = ELS_RJT; + } else if (prli->prli_type == PRLI_TYPE_NVME) { + sp->gen1 = sizeof(struct __fc_prli_acc); + sp->gen2 = ELS_ACC; + } + + sp->u.snvme_els.ptr = (struct purex_entry_24xx *)purex; + sp->cmd_type = ELS_PRLI; + + rval = qla2x00_start_sp(sp); + if (rval != QLA_SUCCESS) + qla2x00_rel_sp(sp); + + return 0; +} + +static void *qlt_get_next_atio_pkt(struct scsi_qla_host *vha) +{ + struct qla_hw_data *ha = vha->hw; + void *pkt; + + ha->tgt.atio_ring_index++; + if (ha->tgt.atio_ring_index == ha->tgt.atio_q_length) { + ha->tgt.atio_ring_index = 0; + ha->tgt.atio_ring_ptr = ha->tgt.atio_ring; + } else { + ha->tgt.atio_ring_ptr++; + } + pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; + + return pkt; +} + +static void qlt_process_purex(struct scsi_qla_host *vha, + struct qla_tgt_purex_op *p) +{ + struct atio_from_isp *atio = &p->atio; + struct purex_entry_24xx *purex = + (struct purex_entry_24xx *)&atio->u.raw; + uint16_t len = purex->frame_size; + + ql_log(ql_log_info, vha, 0xf100, + "Purex IOCB: EC:%#x, Len:%#x ELS_OP:%#x oxid:%#x rxid:%#x\n", + purex->entry_count, len, purex->pyld[3], + purex->ox_id, purex->rx_id); + + switch (purex->pyld[3]) { + case ELS_PLOGI: + qlt_process_plogi(vha, purex, p->purex_pyld); + 
break; + case ELS_PRLI: + qlt_process_prli(vha, purex, p->purex_pyld); + break; + case ELS_LOGO: + qlt_process_logo(vha, purex, p->purex_pyld); + break; + default: + ql_log(ql_log_warn, vha, 0x11046, + "Unexpected ELS 0x%x\n", purex->pyld[3]); + break; + } +} + +void qlt_dequeue_purex(struct scsi_qla_host *vha) +{ + struct qla_tgt_purex_op *p, *t; + unsigned long flags; + + list_for_each_entry_safe(p, t, &vha->purex_atio_list, cmd_list) { + ql_log(ql_log_info, vha, 0xff1e, + "Processing ATIO %p\n", &p->atio); + + qlt_process_purex(vha, p); + spin_lock_irqsave(&vha->cmd_list_lock, flags); + list_del(&p->cmd_list); + spin_unlock_irqrestore(&vha->cmd_list_lock, flags); + kfree(p->purex_pyld); + kfree(p); + } +} + +static void qlt_queue_purex(scsi_qla_host_t *vha, + struct atio_from_isp *atio) +{ + struct qla_tgt_purex_op *p; + unsigned long flags; + struct purex_entry_24xx *purex = + (struct purex_entry_24xx *)&atio->u.raw; + uint16_t len = purex->frame_size; + uint8_t *purex_pyld_tmp; + + p = kzalloc(sizeof(*p), GFP_ATOMIC); + if (p == NULL) + goto out; + + p->vha = vha; + memcpy(&p->atio, atio, sizeof(*atio)); + + ql_dbg(ql_dbg_disc + ql_dbg_buffer, vha, 0xff11, + "Dumping the Purex IOCB received\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0xe012, + (uint8_t *)purex, 64); + + p->purex_pyld = kzalloc(sizeof(purex->entry_count) * 64, GFP_ATOMIC); + if (p->purex_pyld == NULL) { + kfree(p); + goto out; + } + purex_pyld_tmp = (uint8_t *)p->purex_pyld; + p->purex_pyld_len = len; + + if (len < PUREX_PYLD_SIZE) + len = PUREX_PYLD_SIZE; + + memcpy(p->purex_pyld, &purex->d_id, PUREX_PYLD_SIZE); + purex_pyld_tmp += PUREX_PYLD_SIZE; + len -= PUREX_PYLD_SIZE; + + while (len > 0) { + int cpylen; + struct __status_cont *cont_atio; + + cont_atio = (struct __status_cont *)qlt_get_next_atio_pkt(vha); + cpylen = len > CONT_SENSE_DATA ? 
CONT_SENSE_DATA : len; + ql_log(ql_log_info, vha, 0xff12, + "cont_atio: %p, cpylen: %#x\n", cont_atio, cpylen); + + memcpy(purex_pyld_tmp, &cont_atio->data[0], cpylen); + + purex_pyld_tmp += cpylen; + len -= cpylen; + } + + ql_dbg(ql_dbg_disc + ql_dbg_buffer, vha, 0xff11, + "Dumping the Purex IOCB(%p) received\n", p->purex_pyld); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0xe011, + (uint8_t *)p->purex_pyld, p->purex_pyld_len); + + INIT_LIST_HEAD(&p->cmd_list); + + spin_lock_irqsave(&vha->cmd_list_lock, flags); + list_add_tail(&p->cmd_list, &vha->purex_atio_list); + spin_unlock_irqrestore(&vha->cmd_list_lock, flags); + +out: + return; +} + +static void sys_to_be32_cpy(uint8_t *dest, uint8_t *src, uint16_t len) +{ + uint32_t *d, *s, i; + + d = (uint32_t *) dest; + s = (uint32_t *) src; + for (i = 0; i < len; i++) + d[i] = cpu_to_be32(s[i]); +} + +/* Prepare an LS req received from the wire to be sent to the nvmet */ +static void *qlt_nvmet_prepare_ls(struct scsi_qla_host *vha, + struct pt_ls4_rx_unsol *ls4) +{ + int desc_len = cpu_to_le16(ls4->desc_len) + 8; + int copy_len, bc; + void *buf; + uint8_t *cpy_buf; + int i; + struct __status_cont *cont_atio; + + ql_dbg(ql_dbg_tgt, vha, 0xe072, + "%s: desc_len:%d\n", __func__, desc_len); + + buf = kzalloc(desc_len, GFP_ATOMIC); + if (!buf) + return NULL; + + cpy_buf = buf; + bc = desc_len; + + if (bc < PT_LS4_FIRST_PACKET_LEN) + copy_len = bc; + else + copy_len = PT_LS4_FIRST_PACKET_LEN; + + sys_to_be32_cpy(cpy_buf, &((uint8_t *)ls4)[PT_LS4_PAYLOAD_OFFSET], + copy_len/4); + + bc -= copy_len; + cpy_buf += copy_len; + + cont_atio = (struct __status_cont *)ls4; + + for (i = 1; i < ls4->entry_count && bc > 0; i++) { + if (bc < CONT_SENSE_DATA) + copy_len = bc; + else + copy_len = CONT_SENSE_DATA; + + cont_atio = (struct __status_cont *)qlt_get_next_atio_pkt(vha); + + sys_to_be32_cpy(cpy_buf, (uint8_t *)&cont_atio->data, + copy_len/4); + cpy_buf += copy_len; + bc -= copy_len; + } + + ql_dbg(ql_dbg_disc + ql_dbg_buffer, vha, 0xc0f1, + "Dump the first 128 bytes of LS request\n"); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0x2075, + (uint8_t *)buf, 128); + + return buf; +} + static bool qlt_24xx_atio_pkt_all_vps(struct scsi_qla_host *vha, struct atio_from_isp *atio, uint8_t ha_locked) { - ql_dbg(ql_dbg_tgt, vha, 0xe072, - "%s: qla_target(%d): type %x ox_id %04x\n", - __func__, vha->vp_idx, atio->u.raw.entry_type, - be16_to_cpu(atio->u.isp24.fcp_hdr.ox_id)); + void *buf; switch (atio->u.raw.entry_type) { case ATIO_TYPE7: @@ -414,31 +1062,74 @@ static bool qlt_24xx_atio_pkt_all_vps(struct scsi_qla_host *vha, { struct abts_recv_from_24xx *entry = (struct abts_recv_from_24xx *)atio; - struct scsi_qla_host *host = qlt_find_host_by_vp_idx(vha, - entry->vp_index); - unsigned long flags; - if (unlikely(!host)) { - ql_dbg(ql_dbg_tgt, vha, 0xe00a, - "qla_target(%d): Response pkt (ABTS_RECV_24XX) " - "received, with unknown vp_index %d\n", - vha->vp_idx, entry->vp_index); + if (unlikely(atio->u.nvme_isp27.fcnvme_hdr.scsi_fc_id == + NVMEFC_CMD_IU_SCSI_FC_ID)) { + qla_nvmet_handle_abts(vha, entry); + break; + } + + { + struct abts_recv_from_24xx *entry = + (struct abts_recv_from_24xx *)atio; + struct scsi_qla_host *host = qlt_find_host_by_vp_idx + (vha, entry->vp_index); + unsigned long flags; + + if (unlikely(!host)) { + ql_dbg(ql_dbg_tgt, vha, 0xe00a, + "qla_target(%d): Response pkt (ABTS_RECV_24XX) received, with unknown vp_index %d\n", + vha->vp_idx, entry->vp_index); + break; + } + if (!ha_locked) + spin_lock_irqsave(&host->hw->hardware_lock, + 
flags); + qlt_24xx_handle_abts(host, + (struct abts_recv_from_24xx *)atio); + if (!ha_locked) + spin_unlock_irqrestore( + &host->hw->hardware_lock, flags); break; } - if (!ha_locked) - spin_lock_irqsave(&host->hw->hardware_lock, flags); - qlt_24xx_handle_abts(host, (struct abts_recv_from_24xx *)atio); - if (!ha_locked) - spin_unlock_irqrestore(&host->hw->hardware_lock, flags); - break; } - /* case PUREX_IOCB_TYPE: ql2xmvasynctoatio */ + /* NVME */ + case ATIO_PURLS: + { + struct scsi_qla_host *host = vha; + unsigned long flags; + + /* Received an LS4 from the init, pass it to the NVMEt */ + ql_log(ql_log_info, vha, 0x11047, + "%s %d Received an LS4 from the initiator on ATIO\n", + __func__, __LINE__); + spin_lock_irqsave(&host->hw->hardware_lock, flags); + buf = qlt_nvmet_prepare_ls(host, + (struct pt_ls4_rx_unsol *)atio); + if (buf) + qla_nvmet_handle_ls(host, + (struct pt_ls4_rx_unsol *)atio, buf); + spin_unlock_irqrestore(&host->hw->hardware_lock, flags); + } + break; + + case PUREX_IOCB_TYPE: /* NVMET */ + { + /* Received a PUREX IOCB */ + /* Queue the iocb and wake up dpc */ + qlt_queue_purex(vha, atio); + set_bit(NVMET_PUREX, &vha->dpc_flags); + qla2xxx_wake_dpc(vha); + break; + } default: ql_dbg(ql_dbg_tgt, vha, 0xe040, "qla_target(%d): Received unknown ATIO atio " "type %x\n", vha->vp_idx, atio->u.raw.entry_type); + ql_dump_buffer(ql_dbg_disc + ql_dbg_buffer, vha, 0xe011, + (uint8_t *)atio, sizeof(*atio)); break; } @@ -541,6 +1232,10 @@ void qlt_response_pkt_all_vps(struct scsi_qla_host *vha, break; } qlt_response_pkt(host, rsp, pkt); + if (unlikely(qlt_op_target_mode)) + qla24xx_nvmet_abts_resp_iocb(vha, + (struct abts_resp_to_24xx *)pkt, + rsp->req); break; } default: @@ -1623,6 +2318,11 @@ static void qlt_release(struct qla_tgt *tgt) vha->vha_tgt.target_lport_ptr) ha->tgt.tgt_ops->remove_target(vha); + if (tgt->nvme_els_ptr) { + dma_free_coherent(&vha->hw->pdev->dev, 256, + tgt->nvme_els_ptr, tgt->nvme_els_rsp); + } + vha->vha_tgt.qla_tgt = NULL; ql_dbg(ql_dbg_tgt_mgt, vha, 0xf00d, @@ -5648,6 +6348,101 @@ qlt_chk_qfull_thresh_hold(struct scsi_qla_host *vha, struct qla_qpair *qpair, return 1; } +/* + * Worker thread that dequeues the nvme cmd off the list and + * called nvme-t to process the cmd + */ +static void qla_nvmet_work(struct work_struct *work) +{ + struct qla_nvmet_cmd *cmd = + container_of(work, struct qla_nvmet_cmd, work); + scsi_qla_host_t *vha = cmd->vha; + + qla_nvmet_process_cmd(vha, cmd); +} +/* + * Handle the NVME cmd IU + */ +static void qla_nvmet_handle_cmd(struct scsi_qla_host *vha, + struct atio_from_isp *atio) +{ + struct qla_nvmet_cmd *tgt_cmd; + unsigned long flags; + struct qla_hw_data *ha = vha->hw; + struct fc_port *fcport; + struct fcp_hdr *fcp_hdr; + uint32_t s_id = 0; + void *next_pkt; + uint8_t *nvmet_cmd_ptr; + uint32_t nvmet_cmd_iulen = 0; + uint32_t nvmet_cmd_iulen_min = 64; + + /* Create an NVME cmd and queue it up to the work queue */ + tgt_cmd = kzalloc(sizeof(struct qla_nvmet_cmd), GFP_ATOMIC); + if (tgt_cmd == NULL) + return; + + tgt_cmd->vha = vha; + + fcp_hdr = &atio->u.nvme_isp27.fcp_hdr; + + /* Get the session for this command */ + s_id = fcp_hdr->s_id[0] << 16 | fcp_hdr->s_id[1] << 8 + | fcp_hdr->s_id[2]; + tgt_cmd->ox_id = fcp_hdr->ox_id; + + fcport = qla_nvmet_find_sess_by_s_id(vha, s_id); + if (unlikely(!fcport)) { + ql_log(ql_log_warn, vha, 0x11049, + "Cant' find the session for port_id: %#x\n", s_id); + kfree(tgt_cmd); + return; + } + + tgt_cmd->fcport = fcport; + + memcpy(&tgt_cmd->atio, atio, sizeof(*atio)); + + /* The FC-NMVE 
cmd covers 2 ATIO IOCBs */ + + nvmet_cmd_ptr = (uint8_t *)&tgt_cmd->nvme_cmd_iu; + nvmet_cmd_iulen = be16_to_cpu(atio->u.nvme_isp27.fcnvme_hdr.iu_len) * 4; + tgt_cmd->cmd_len = nvmet_cmd_iulen; + + if (unlikely(ha->tgt.atio_ring_index + atio->u.raw.entry_count > + ha->tgt.atio_q_length)) { + uint8_t i; + + memcpy(nvmet_cmd_ptr, &((uint8_t *)atio)[NVME_ATIO_CMD_OFF], + ATIO_NVME_FIRST_PACKET_CMDLEN); + nvmet_cmd_ptr += ATIO_NVME_FIRST_PACKET_CMDLEN; + nvmet_cmd_iulen -= ATIO_NVME_FIRST_PACKET_CMDLEN; + + for (i = 1; i < atio->u.raw.entry_count; i++) { + uint8_t cplen = min(nvmet_cmd_iulen_min, + nvmet_cmd_iulen); + + next_pkt = qlt_get_next_atio_pkt(vha); + memcpy(nvmet_cmd_ptr, (uint8_t *)next_pkt, cplen); + nvmet_cmd_ptr += cplen; + nvmet_cmd_iulen -= cplen; + } + } else { + memcpy(nvmet_cmd_ptr, &((uint8_t *)atio)[NVME_ATIO_CMD_OFF], + nvmet_cmd_iulen); + next_pkt = qlt_get_next_atio_pkt(vha); + } + + /* Add cmd to the list */ + spin_lock_irqsave(&vha->cmd_list_lock, flags); + list_add_tail(&tgt_cmd->cmd_list, &vha->qla_cmd_list); + spin_unlock_irqrestore(&vha->cmd_list_lock, flags); + + /* Queue the work item */ + INIT_WORK(&tgt_cmd->work, qla_nvmet_work); + queue_work(qla_nvmet_wq, &tgt_cmd->work); +} + /* ha->hardware_lock supposed to be held on entry */ /* called via callback from qla2xxx */ static void qlt_24xx_atio_pkt(struct scsi_qla_host *vha, @@ -5687,6 +6482,13 @@ static void qlt_24xx_atio_pkt(struct scsi_qla_host *vha, break; } + /* NVME Target*/ + if (unlikely(atio->u.nvme_isp27.fcnvme_hdr.scsi_fc_id + == NVMEFC_CMD_IU_SCSI_FC_ID)) { + qla_nvmet_handle_cmd(vha, atio); + break; + } + if (likely(atio->u.isp24.fcp_cmnd.task_mgmt_flags == 0)) { rc = qlt_chk_qfull_thresh_hold(vha, ha->base_qpair, atio, ha_locked); @@ -6537,6 +7339,14 @@ int qlt_add_target(struct qla_hw_data *ha, struct scsi_qla_host *base_vha) if (ha->tgt.tgt_ops && ha->tgt.tgt_ops->add_target) ha->tgt.tgt_ops->add_target(base_vha); + tgt->nvme_els_ptr = dma_alloc_coherent(&base_vha->hw->pdev->dev, 256, + &tgt->nvme_els_rsp, GFP_KERNEL); + if (!tgt->nvme_els_ptr) { + ql_dbg(ql_dbg_tgt, base_vha, 0xe066, + "Unable to allocate DMA buffer for NVME ELS request\n"); + return -ENOMEM; + } + return 0; } @@ -6831,6 +7641,7 @@ qlt_rff_id(struct scsi_qla_host *vha) u8 fc4_feature = 0; /* * FC-4 Feature bit 0 indicates target functionality to the name server. + * NVME FC-4 Feature bit 2 indicates discovery controller */ if (qla_tgt_mode_enabled(vha)) { fc4_feature = BIT_0; @@ -6868,6 +7679,76 @@ qlt_init_atio_q_entries(struct scsi_qla_host *vha) } +static void +qlt_27xx_process_nvme_atio_queue(struct scsi_qla_host *vha, uint8_t ha_locked) +{ + struct qla_hw_data *ha = vha->hw; + struct atio_from_isp *pkt; + int cnt; + uint32_t atio_q_in; + uint16_t num_atios = 0; + uint8_t nvme_pkts = 0; + + if (!ha->flags.fw_started) + return; + + pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; + while (num_atios < pkt->u.raw.entry_count) { + atio_q_in = RD_REG_DWORD(ISP_ATIO_Q_IN(vha)); + if (atio_q_in < ha->tgt.atio_ring_index) + num_atios = ha->tgt.atio_q_length - + (ha->tgt.atio_ring_index - atio_q_in); + else + num_atios = atio_q_in - ha->tgt.atio_ring_index; + if (num_atios == 0) + return; + } + + while ((num_atios) || fcpcmd_is_corrupted(ha->tgt.atio_ring_ptr)) { + pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; + cnt = pkt->u.raw.entry_count; + + if (unlikely(fcpcmd_is_corrupted(ha->tgt.atio_ring_ptr))) { + /* + * This packet is corrupted. The header + payload + * can not be trusted. 
There is no point in passing + * it further up. + */ + ql_log(ql_log_warn, vha, 0xd03c, + "corrupted fcp frame SID[%3phN] OXID[%04x] EXCG[%x] %64phN\n", + pkt->u.isp24.fcp_hdr.s_id, + be16_to_cpu(pkt->u.isp24.fcp_hdr.ox_id), + le32_to_cpu(pkt->u.isp24.exchange_addr), pkt); + + adjust_corrupted_atio(pkt); + qlt_send_term_exchange(ha->base_qpair, NULL, pkt, + ha_locked, 0); + } else { + qlt_24xx_atio_pkt_all_vps(vha, + (struct atio_from_isp *)pkt, ha_locked); + nvme_pkts++; + } + + /* Just move by one index since we have already accounted the + * additional ones while processing individual ATIOs + */ + ha->tgt.atio_ring_index++; + if (ha->tgt.atio_ring_index == ha->tgt.atio_q_length) { + ha->tgt.atio_ring_index = 0; + ha->tgt.atio_ring_ptr = ha->tgt.atio_ring; + } else + ha->tgt.atio_ring_ptr++; + + pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; + num_atios -= cnt; + /* memory barrier */ + wmb(); + } + + /* Adjust ring index */ + WRT_REG_DWORD(ISP_ATIO_Q_OUT(vha), ha->tgt.atio_ring_index); +} + /* * qlt_24xx_process_atio_queue() - Process ATIO queue entries. * @ha: SCSI driver HA context @@ -6879,9 +7760,15 @@ qlt_24xx_process_atio_queue(struct scsi_qla_host *vha, uint8_t ha_locked) struct atio_from_isp *pkt; int cnt, i; + if (unlikely(qlt_op_target_mode)) { + qlt_27xx_process_nvme_atio_queue(vha, ha_locked); + return; + } + if (!ha->flags.fw_started) return; + pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; while ((ha->tgt.atio_ring_ptr->signature != ATIO_PROCESSED) || fcpcmd_is_corrupted(ha->tgt.atio_ring_ptr)) { pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; @@ -6907,6 +7794,7 @@ qlt_24xx_process_atio_queue(struct scsi_qla_host *vha, uint8_t ha_locked) (struct atio_from_isp *)pkt, ha_locked); } + cnt = 1; for (i = 0; i < cnt; i++) { ha->tgt.atio_ring_index++; if (ha->tgt.atio_ring_index == ha->tgt.atio_q_length) { @@ -6918,11 +7806,13 @@ qlt_24xx_process_atio_queue(struct scsi_qla_host *vha, uint8_t ha_locked) pkt->u.raw.signature = ATIO_PROCESSED; pkt = (struct atio_from_isp *)ha->tgt.atio_ring_ptr; } + /* memory barrier */ wmb(); } /* Adjust ring index */ WRT_REG_DWORD(ISP_ATIO_Q_OUT(vha), ha->tgt.atio_ring_index); + RD_REG_DWORD_RELAXED(ISP_ATIO_Q_OUT(vha)); } void @@ -7219,6 +8109,9 @@ qlt_probe_one_stage1(struct scsi_qla_host *base_vha, struct qla_hw_data *ha) INIT_DELAYED_WORK(&base_vha->unknown_atio_work, qlt_unknown_atio_work_fn); + /* NVMET */ + INIT_LIST_HEAD(&base_vha->purex_atio_list); + qlt_clear_mode(base_vha); rc = btree_init32(&ha->tgt.host_map); @@ -7445,13 +8338,25 @@ int __init qlt_init(void) goto out_mgmt_cmd_cachep; } + qla_tgt_purex_plogi_cachep = + kmem_cache_create("qla_tgt_purex_plogi_cachep", + sizeof(struct qlt_purex_plogi_ack_t), + __alignof__(struct qlt_purex_plogi_ack_t), 0, NULL); + + if (!qla_tgt_purex_plogi_cachep) { + ql_log(ql_log_fatal, NULL, 0xe06d, + "kmem_cache_create for qla_tgt_purex_plogi_cachep failed\n"); + ret = -ENOMEM; + goto out_plogi_cachep; + } + qla_tgt_mgmt_cmd_mempool = mempool_create(25, mempool_alloc_slab, mempool_free_slab, qla_tgt_mgmt_cmd_cachep); if (!qla_tgt_mgmt_cmd_mempool) { ql_log(ql_log_fatal, NULL, 0xe06e, "mempool_create for qla_tgt_mgmt_cmd_mempool failed\n"); ret = -ENOMEM; - goto out_plogi_cachep; + goto out_purex_plogi_cachep; } qla_tgt_wq = alloc_workqueue("qla_tgt_wq", 0, 0); @@ -7461,6 +8366,25 @@ int __init qlt_init(void) ret = -ENOMEM; goto out_cmd_mempool; } + + qla_nvmet_wq = alloc_workqueue("qla_nvmet_wq", 0, 0); + if (!qla_nvmet_wq) { + ql_log(ql_log_fatal, NULL, 0xe070, + "alloc_workqueue for 
qla_nvmet_wq failed\n"); + ret = -ENOMEM; + destroy_workqueue(qla_tgt_wq); + goto out_cmd_mempool; + } + + qla_nvmet_comp_wq = alloc_workqueue("qla_nvmet_comp_wq", 0, 0); + if (!qla_nvmet_comp_wq) { + ql_log(ql_log_fatal, NULL, 0xe071, + "alloc_workqueue for qla_nvmet_wq failed\n"); + ret = -ENOMEM; + destroy_workqueue(qla_nvmet_wq); + destroy_workqueue(qla_tgt_wq); + goto out_cmd_mempool; + } /* * Return 1 to signal that initiator-mode is being disabled */ @@ -7468,6 +8392,8 @@ int __init qlt_init(void) out_cmd_mempool: mempool_destroy(qla_tgt_mgmt_cmd_mempool); +out_purex_plogi_cachep: + kmem_cache_destroy(qla_tgt_purex_plogi_cachep); out_plogi_cachep: kmem_cache_destroy(qla_tgt_plogi_cachep); out_mgmt_cmd_cachep: @@ -7480,8 +8406,19 @@ void qlt_exit(void) if (!QLA_TGT_MODE_ENABLED()) return; + destroy_workqueue(qla_nvmet_comp_wq); + destroy_workqueue(qla_nvmet_wq); destroy_workqueue(qla_tgt_wq); mempool_destroy(qla_tgt_mgmt_cmd_mempool); kmem_cache_destroy(qla_tgt_plogi_cachep); + kmem_cache_destroy(qla_tgt_purex_plogi_cachep); kmem_cache_destroy(qla_tgt_mgmt_cmd_cachep); } + +void nvmet_release_sessions(struct scsi_qla_host *vha) +{ + struct qlt_purex_plogi_ack_t *pla, *tpla; + + list_for_each_entry_safe(pla, tpla, &vha->plogi_ack_list, list) + list_del(&pla->list); +} diff --git a/drivers/scsi/qla2xxx/qla_target.h b/drivers/scsi/qla2xxx/qla_target.h index 721da593b1bc..fcb4c9bb4fc1 100644 --- a/drivers/scsi/qla2xxx/qla_target.h +++ b/drivers/scsi/qla2xxx/qla_target.h @@ -322,6 +322,67 @@ struct atio7_fcp_cmnd { /* uint32_t data_length; */ } __packed; +struct fc_nvme_hdr { + union { + struct { + uint8_t scsi_id; +#define NVMEFC_CMD_IU_SCSI_ID 0xfd + uint8_t fc_id; +#define NVMEFC_CMD_IU_FC_ID 0x28 + }; + struct { + uint16_t scsi_fc_id; +#define NVMEFC_CMD_IU_SCSI_FC_ID 0x28fd + }; + }; + uint16_t iu_len; + uint8_t rsv1[3]; + uint8_t flags; +#define NVMEFC_CMD_WRITE 0x1 +#define NVMEFC_CMD_READ 0x2 + uint64_t conn_id; + uint32_t csn; + uint32_t dl; +} __packed; + +struct atio7_nvme_cmnd { + struct fc_nvme_hdr fcnvme_hdr; + + struct nvme_command nvme_cmd; + uint32_t rsv2[2]; +} __packed; + +#define ATIO_PURLS 0x56 +struct pt_ls4_rx_unsol { + uint8_t entry_type; /* 0x56 */ + uint8_t entry_count; + uint16_t rsvd0; + uint16_t rsvd1; + uint8_t vp_index; + uint8_t rsvd2; + uint16_t rsvd3; + uint16_t nport_handle; + uint16_t frame_size; + uint16_t rsvd4; + uint32_t exchange_address; + uint8_t d_id[3]; + uint8_t r_ctl; + uint8_t s_id[3]; + uint8_t cs_ctl; + uint8_t f_ctl[3]; + uint8_t type; + uint16_t seq_cnt; + uint8_t df_ctl; + uint8_t seq_id; + uint16_t rx_id; + uint16_t ox_id; + uint32_t param; + uint32_t desc0; +#define PT_LS4_PAYLOAD_OFFSET 0x2c +#define PT_LS4_FIRST_PACKET_LEN 20 + uint32_t desc_len; + uint32_t payload[3]; +}; /* * ISP queue - Accept Target I/O (ATIO) type entry IOCB structure. * This is sent from the ISP to the target driver. @@ -368,6 +429,21 @@ struct atio_from_isp { uint32_t signature; #define ATIO_PROCESSED 0xDEADDEAD /* Signature */ } raw; + /* FC-NVME */ + struct { + uint8_t entry_type; /* Entry type. */ + uint8_t entry_count; /* Entry count. 
*/ + uint8_t fcp_cmnd_len_low; + uint8_t fcp_cmnd_len_high:4; + uint8_t attr:4; + uint32_t exchange_addr; +#define ATIO_NVME_ATIO_CMD_OFF 32 +#define ATIO_NVME_FIRST_PACKET_CMDLEN (64 - ATIO_NVME_ATIO_CMD_OFF) + struct fcp_hdr fcp_hdr; + struct fc_nvme_hdr fcnvme_hdr; + uint8_t nvmd_cmd[8]; + } nvme_isp27; + struct pt_ls4_rx_unsol pt_ls4; } u; } __packed; @@ -836,6 +912,8 @@ struct qla_tgt { int modify_lun_expected; atomic_t tgt_global_resets_count; struct list_head tgt_list_entry; + dma_addr_t nvme_els_rsp; + void *nvme_els_ptr; }; struct qla_tgt_sess_op { @@ -848,6 +926,16 @@ struct qla_tgt_sess_op { struct rsp_que *rsp; }; +/* NVMET */ +struct qla_tgt_purex_op { + struct scsi_qla_host *vha; + struct atio_from_isp atio; + uint8_t *purex_pyld; + uint16_t purex_pyld_len; + struct work_struct work; + struct list_head cmd_list; +}; + enum trace_flags { TRC_NEW_CMD = BIT_0, TRC_DO_WORK = BIT_1, @@ -1112,4 +1200,6 @@ void qlt_send_resp_ctio(struct qla_qpair *, struct qla_tgt_cmd *, uint8_t, extern void qlt_abort_cmd_on_host_reset(struct scsi_qla_host *, struct qla_tgt_cmd *); +/* 0 for FCP and 1 for NVMET */ +extern int qlt_op_target_mode; #endif /* __QLA_TARGET_H */
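For reference (this sketch is illustrative and not part of the patch): the ATIO fast path added above recognizes FC-NVMe traffic by comparing the combined scsi_id/fc_id bytes of the command IU against NVMEFC_CMD_IU_SCSI_FC_ID (0x28fd, i.e. the bytes 0xfd, 0x28 read as a little-endian 16-bit value), and qla_nvmet_handle_cmd() rebuilds the 24-bit source port ID from the three s_id bytes of the FCP header. The minimal user-space C sketch below reproduces that byte arithmetic; it borrows the NVMEFC_* names and the union layout of struct fc_nvme_hdr purely for readability and assumes a little-endian host.

#include <stdint.h>
#include <stdio.h>

#define NVMEFC_CMD_IU_SCSI_ID		0xfd
#define NVMEFC_CMD_IU_FC_ID		0x28
#define NVMEFC_CMD_IU_SCSI_FC_ID	0x28fd	/* 0xfd, 0x28 read as LE16 */

/* Two-byte prefix of the FC-NVMe command IU, mirroring struct fc_nvme_hdr. */
struct fc_nvme_hdr_sketch {
	union {
		struct {
			uint8_t scsi_id;
			uint8_t fc_id;
		};
		uint16_t scsi_fc_id;
	};
};

int main(void)
{
	struct fc_nvme_hdr_sketch hdr;
	uint8_t s_id[3] = { 0x01, 0x02, 0x03 };	/* example S_ID bytes from the FCP header */
	uint32_t port_id;

	hdr.scsi_id = NVMEFC_CMD_IU_SCSI_ID;
	hdr.fc_id = NVMEFC_CMD_IU_FC_ID;

	/* On a little-endian host the two bytes read back as 0x28fd. */
	printf("scsi_fc_id = %#x, is FC-NVMe cmd IU: %d\n",
	       (unsigned int)hdr.scsi_fc_id,
	       hdr.scsi_fc_id == NVMEFC_CMD_IU_SCSI_FC_ID);

	/* Same math as qla_nvmet_handle_cmd(): s_id[0] << 16 | s_id[1] << 8 | s_id[2] */
	port_id = (uint32_t)s_id[0] << 16 | s_id[1] << 8 | s_id[2];
	printf("port_id = 0x%06x\n", port_id);
	return 0;
}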
From patchwork Wed Sep 26 04:03:38 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Madhani, Himanshu" X-Patchwork-Id: 10615273 From: Himanshu Madhani To: , CC: , Subject: [PATCH v2 4/5] qla2xxx_nvmet: Add SysFS node for FC-NVMe Target Date: Tue, 25 Sep 2018 21:03:38 -0700 Message-ID: <20180926040339.9715-5-himanshu.madhani@cavium.com> X-Mailer: git-send-email 2.12.0 In-Reply-To: <20180926040339.9715-1-himanshu.madhani@cavium.com> References: <20180926040339.9715-1-himanshu.madhani@cavium.com>
From: Anil Gurumurthy This patch adds SysFS node for NVMe Target configuration Signed-off-by: Anil Gurumurthy Signed-off-by: Himanshu Madhani --- drivers/scsi/qla2xxx/qla_attr.c | 33 +++++++++++++++++++++++++++++++++ drivers/scsi/qla2xxx/qla_gs.c | 2 +-
drivers/scsi/qla2xxx/qla_init.c | 3 ++- drivers/scsi/qla2xxx/qla_nvmet.c | 6 +++--- 4 files changed, 39 insertions(+), 5 deletions(-) diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c index a31d23905753..0d2d4f33701b 100644 --- a/drivers/scsi/qla2xxx/qla_attr.c +++ b/drivers/scsi/qla2xxx/qla_attr.c @@ -13,6 +13,7 @@ #include static int qla24xx_vport_disable(struct fc_vport *, bool); +extern void qlt_set_mode(struct scsi_qla_host *vha); /* SYSFS attributes --------------------------------------------------------- */ @@ -631,6 +632,37 @@ static struct bin_attribute sysfs_sfp_attr = { }; static ssize_t +qla2x00_sysfs_write_nvmet(struct file *filp, struct kobject *kobj, + struct bin_attribute *bin_attr, + char *buf, loff_t off, size_t count) +{ + struct scsi_qla_host *vha = shost_priv(dev_to_shost(container_of(kobj, + struct device, kobj))); + struct qla_hw_data *ha = vha->hw; + scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev); + + ql_log(ql_log_info, vha, 0x706e, + "Bringing up target mode!! vha:%p\n", vha); + qlt_op_target_mode = 1; + qlt_set_mode(base_vha); + set_bit(ISP_ABORT_NEEDED, &vha->dpc_flags); + qla2xxx_wake_dpc(vha); + qla2x00_wait_for_hba_online(vha); + + return count; +} + +static struct bin_attribute sysfs_nvmet_attr = { + .attr = { + .name = "nvmet", + .mode = 0200, + }, + .size = 0, + .write = qla2x00_sysfs_write_nvmet, +}; + + +static ssize_t qla2x00_sysfs_write_reset(struct file *filp, struct kobject *kobj, struct bin_attribute *bin_attr, char *buf, loff_t off, size_t count) @@ -943,6 +975,7 @@ static struct sysfs_entry { { "issue_logo", &sysfs_issue_logo_attr, }, { "xgmac_stats", &sysfs_xgmac_stats_attr, 3 }, { "dcbx_tlv", &sysfs_dcbx_tlv_attr, 3 }, + { "nvmet", &sysfs_nvmet_attr, }, { NULL }, }; diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c index 55dc11d91b35..ba58cfe7ff9b 100644 --- a/drivers/scsi/qla2xxx/qla_gs.c +++ b/drivers/scsi/qla2xxx/qla_gs.c @@ -698,7 +698,7 @@ qla2x00_rff_id(scsi_qla_host_t *vha, u8 type) return (QLA_SUCCESS); return qla_async_rffid(vha, &vha->d_id, qlt_rff_id(vha), - FC4_TYPE_FCP_SCSI); + type); } static int qla_async_rffid(scsi_qla_host_t *vha, port_id_t *d_id, diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c index 841541201671..01676345018f 100644 --- a/drivers/scsi/qla2xxx/qla_init.c +++ b/drivers/scsi/qla2xxx/qla_init.c @@ -5523,7 +5523,8 @@ qla2x00_configure_fabric(scsi_qla_host_t *vha) * will be newer than discovery_gen. 
*/ qlt_do_generation_tick(vha, &discovery_gen); - if (USE_ASYNC_SCAN(ha)) { + if (USE_ASYNC_SCAN(ha) && !(vha->flags.nvmet_enabled)) { + /* If NVME target mode is enabled, go through regular scan */ rval = qla24xx_async_gpnft(vha, FC4_TYPE_FCP_SCSI, NULL); if (rval) diff --git a/drivers/scsi/qla2xxx/qla_nvmet.c b/drivers/scsi/qla2xxx/qla_nvmet.c index 5335c0618f00..cc0fb83b8f69 100644 --- a/drivers/scsi/qla2xxx/qla_nvmet.c +++ b/drivers/scsi/qla2xxx/qla_nvmet.c @@ -546,7 +546,7 @@ static void qla_nvmet_send_resp_ctio(struct qla_qpair *qpair, case NVMET_FCOP_READDATA: case NVMET_FCOP_READDATA_RSP: /* Populate the CTIO resp with the SGL present in the rsp */ - ql_log(ql_log_info, vha, 0x1100c, + ql_dbg(ql_dbg_nvme, vha, 0x1100c, "op: %#x, ox_id=%x c_flags=%x transfer_length: %#x req_cnt: %#x, tot_dsds: %#x\n", rsp_buf->op, ctio->ox_id, c_flags, rsp_buf->transfer_length, req_cnt, tot_dsds); @@ -632,7 +632,7 @@ static void qla_nvmet_send_resp_ctio(struct qla_qpair *qpair, case NVMET_FCOP_WRITEDATA: /* Send transfer rdy */ - ql_log(ql_log_info, vha, 0x1100e, + ql_dbg(ql_dbg_nvme, vha, 0x1100e, "FCOP_WRITE: ox_id=%x c_flags=%x transfer_length: %#x req_cnt: %#x, tot_dsds: %#x\n", ctio->ox_id, c_flags, rsp_buf->transfer_length, req_cnt, tot_dsds); @@ -707,7 +707,7 @@ static void qla_nvmet_send_resp_ctio(struct qla_qpair *qpair, ctio->u.nvme_status_mode1.transfer_len = cpu_to_be32(ersp->xfrd_len); - ql_log(ql_log_info, vha, 0x1100f, + ql_dbg(ql_dbg_nvme, vha, 0x1100f, "op: %#x, rsplen: %#x\n", rsp_buf->op, rsp_buf->rsplen); } else
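For reference, and not part of the patch: the "nvmet" node added by this patch is a write-only (mode 0200) binary attribute, and qla2x00_sysfs_write_nvmet() does not inspect the data written; any write switches the port into FC-NVMe target mode and schedules an ISP abort so the change takes effect. The minimal user-space C sketch below shows how an administrator might poke the node; the default sysfs path is an assumption made for illustration and depends on the SCSI host number and sysfs layout.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	/* Default path is an assumption; pass the real node as argv[1] if it differs. */
	const char *node = (argc > 1) ? argv[1] :
		"/sys/class/scsi_host/host0/device/nvmet";
	int fd = open(node, O_WRONLY);

	if (fd < 0) {
		perror(node);
		return 1;
	}
	/* The handler ignores the payload; a single byte is enough to trigger it. */
	if (write(fd, "1", 1) != 1) {
		perror("write");
		close(fd);
		return 1;
	}
	close(fd);
	printf("requested FC-NVMe target mode via %s\n", node);
	return 0;
}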
From patchwork Wed Sep 26 04:03:39 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Madhani, Himanshu" X-Patchwork-Id: 10615269 From: Himanshu Madhani To: , CC: , Subject: [PATCH v2 5/5] qla2xxx: Update driver version to 11.00.00.00-k Date: Tue, 25 Sep 2018 21:03:39 -0700 Message-ID: <20180926040339.9715-6-himanshu.madhani@cavium.com> X-Mailer: git-send-email 2.12.0 In-Reply-To: <20180926040339.9715-1-himanshu.madhani@cavium.com> References: <20180926040339.9715-1-himanshu.madhani@cavium.com>
Signed-off-by: Himanshu Madhani --- drivers/scsi/qla2xxx/qla_version.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/scsi/qla2xxx/qla_version.h b/drivers/scsi/qla2xxx/qla_version.h index 12bafff71a1a..0d58aa629c08 100644 --- a/drivers/scsi/qla2xxx/qla_version.h +++
b/drivers/scsi/qla2xxx/qla_version.h @@ -7,9 +7,9 @@ /* * Driver version */ -#define QLA2XXX_VERSION "10.00.00.11-k" +#define QLA2XXX_VERSION "11.00.00.00-k" -#define QLA_DRIVER_MAJOR_VER 10 +#define QLA_DRIVER_MAJOR_VER 11 #define QLA_DRIVER_MINOR_VER 0 #define QLA_DRIVER_PATCH_VER 0 #define QLA_DRIVER_BETA_VER 0
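A small consistency check, not part of the patch: the QLA2XXX_VERSION string and the numeric QLA_DRIVER_*_VER macros are bumped together here, and the stand-alone C sketch below rebuilds the string from the macros to show how the four fields map onto "11.00.00.00-k". The "%d.%02d.%02d.%02d-k" format is an assumption made for illustration.

#include <stdio.h>
#include <string.h>

#define QLA2XXX_VERSION      "11.00.00.00-k"
#define QLA_DRIVER_MAJOR_VER 11
#define QLA_DRIVER_MINOR_VER 0
#define QLA_DRIVER_PATCH_VER 0
#define QLA_DRIVER_BETA_VER  0

int main(void)
{
	char built[32];

	/* Rebuild the version string from the numeric macros. */
	snprintf(built, sizeof(built), "%d.%02d.%02d.%02d-k",
		 QLA_DRIVER_MAJOR_VER, QLA_DRIVER_MINOR_VER,
		 QLA_DRIVER_PATCH_VER, QLA_DRIVER_BETA_VER);

	printf("macro-built: %s, string: %s, consistent: %s\n",
	       built, QLA2XXX_VERSION,
	       strcmp(built, QLA2XXX_VERSION) == 0 ? "yes" : "no");
	return 0;
}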