From patchwork Wed Jun 21 20:48:43 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Madhani, Himanshu"
X-Patchwork-Id: 9802883
From: "Madhani, Himanshu"
Subject: [PATCH v2 3/6] qla2xxx: Add FC-NVMe F/W initialization and transport registration
Date: Wed, 21 Jun 2017 13:48:43 -0700
Message-ID: <20170621204846.21663-4-himanshu.madhani@cavium.com>
X-Mailer: git-send-email 2.12.0
In-Reply-To: <20170621204846.21663-1-himanshu.madhani@cavium.com>
References: <20170621204846.21663-1-himanshu.madhani@cavium.com>
MIME-Version: 1.0
Sender: linux-scsi-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-scsi@vger.kernel.org
X-Virus-Scanned: ClamAV using ClamSMTP

From: Duane Grigsby

This code provides the interfaces to register remote and local ports of
FC4 type 0x28 with the FC-NVMe transport and transports the requests
(FC-NVMe FC link services and FC-NVMe command IUs) to the fabric. It
also adds support for allocating hardware queues and aborting FC-NVMe
FC requests.
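
For context (not part of the diff below): the registration path follows the
standard Linux nvme-fc LLDD API from include/linux/nvme-fc-driver.h. The
following is a minimal, simplified sketch of the local-port registration that
qla_nvme_register_hba() in this patch performs; the helper name is
hypothetical and error handling is trimmed:

    #include <linux/nvme-fc-driver.h>

    /* Hypothetical, simplified helper mirroring qla_nvme_register_hba(). */
    static int example_register_localport(scsi_qla_host_t *vha,
                                          struct nvme_fc_port_template *tmpl)
    {
            struct nvme_fc_port_info pinfo = {
                    .node_name = wwn_to_u64(vha->node_name),
                    .port_name = wwn_to_u64(vha->port_name),
                    .port_role = FC_PORT_ROLE_NVME_INITIATOR,
                    .port_id   = vha->d_id.b24,
            };

            /*
             * The transport hands back a local port handle; remote ports
             * found during fabric discovery are then attached to it with
             * nvme_fc_register_remoteport().
             */
            return nvme_fc_register_localport(&pinfo, tmpl,
                    get_device(&vha->hw->pdev->dev), &vha->nvme_local_port);
    }
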
Signed-off-by: Darren Trapp Signed-off-by: Duane Grigsby Signed-off-by: Anil Gurumurthy Signed-off-by: Giridhar Malavali Signed-off-by: Himanshu Madhani Reviewed-by: Hannes Reinecke --- drivers/scsi/qla2xxx/Makefile | 2 +- drivers/scsi/qla2xxx/qla_dbg.c | 2 +- drivers/scsi/qla2xxx/qla_def.h | 6 + drivers/scsi/qla2xxx/qla_gbl.h | 11 + drivers/scsi/qla2xxx/qla_init.c | 8 + drivers/scsi/qla2xxx/qla_iocb.c | 36 ++ drivers/scsi/qla2xxx/qla_isr.c | 19 + drivers/scsi/qla2xxx/qla_mbx.c | 21 ++ drivers/scsi/qla2xxx/qla_nvme.c | 756 ++++++++++++++++++++++++++++++++++++++++ drivers/scsi/qla2xxx/qla_nvme.h | 132 +++++++ drivers/scsi/qla2xxx/qla_os.c | 40 ++- 11 files changed, 1024 insertions(+), 9 deletions(-) create mode 100644 drivers/scsi/qla2xxx/qla_nvme.c create mode 100644 drivers/scsi/qla2xxx/qla_nvme.h diff --git a/drivers/scsi/qla2xxx/Makefile b/drivers/scsi/qla2xxx/Makefile index 44def6bb4bb0..0b767a0bb308 100644 --- a/drivers/scsi/qla2xxx/Makefile +++ b/drivers/scsi/qla2xxx/Makefile @@ -1,6 +1,6 @@ qla2xxx-y := qla_os.o qla_init.o qla_mbx.o qla_iocb.o qla_isr.o qla_gs.o \ qla_dbg.o qla_sup.o qla_attr.o qla_mid.o qla_dfs.o qla_bsg.o \ - qla_nx.o qla_mr.o qla_nx2.o qla_target.o qla_tmpl.o + qla_nx.o qla_mr.o qla_nx2.o qla_target.o qla_tmpl.o qla_nvme.o obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx.o obj-$(CONFIG_TCM_QLA2XXX) += tcm_qla2xxx.o diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c index cf4f47603a91..d840529fc023 100644 --- a/drivers/scsi/qla2xxx/qla_dbg.c +++ b/drivers/scsi/qla2xxx/qla_dbg.c @@ -15,7 +15,7 @@ * | | | 0x015b-0x0160 | * | | | 0x016e | * | Mailbox commands | 0x1199 | 0x1193 | - * | Device Discovery | 0x2131 | 0x210e-0x2116 | + * | Device Discovery | 0x2134 | 0x210e-0x2116 | * | | | 0x211a | * | | | 0x211c-0x2128 | * | | | 0x212a-0x2130 | diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h index 4d889eb2993e..0dbcb84011b0 100644 --- a/drivers/scsi/qla2xxx/qla_def.h +++ b/drivers/scsi/qla2xxx/qla_def.h @@ -37,6 +37,7 @@ #include "qla_bsg.h" #include "qla_nx.h" #include "qla_nx2.h" +#include "qla_nvme.h" #define QLA2XXX_DRIVER_NAME "qla2xxx" #define QLA2XXX_APIDEV "ql2xapidev" #define QLA2XXX_MANUFACTURER "QLogic Corporation" @@ -423,6 +424,7 @@ struct srb_iocb { int rsp_len; dma_addr_t cmd_dma; dma_addr_t rsp_dma; + enum nvmefc_fcp_datadir dir; uint32_t dl; uint32_t timeout_sec; } nvme; @@ -452,6 +454,7 @@ struct srb_iocb { #define SRB_NACK_PRLI 17 #define SRB_NACK_LOGO 18 #define SRB_NVME_CMD 19 +#define SRB_NVME_LS 20 #define SRB_PRLI_CMD 21 enum { @@ -467,6 +470,7 @@ typedef struct srb { uint8_t cmd_type; uint8_t pad[3]; atomic_t ref_count; + wait_queue_head_t nvme_ls_waitQ; struct fc_port *fcport; struct scsi_qla_host *vha; uint32_t handle; @@ -2298,6 +2302,7 @@ typedef struct fc_port { struct work_struct nvme_del_work; atomic_t nvme_ref_count; + wait_queue_head_t nvme_waitQ; uint32_t nvme_prli_service_param; #define NVME_PRLI_SP_CONF BIT_7 #define NVME_PRLI_SP_INITIATOR BIT_5 @@ -4124,6 +4129,7 @@ typedef struct scsi_qla_host { struct nvme_fc_local_port *nvme_local_port; atomic_t nvme_ref_count; + wait_queue_head_t nvme_waitQ; struct list_head nvme_rport_list; atomic_t nvme_active_aen_cnt; uint16_t nvme_last_rptd_aen; diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h index 6fbee11c1a18..c6af45f7d5d6 100644 --- a/drivers/scsi/qla2xxx/qla_gbl.h +++ b/drivers/scsi/qla2xxx/qla_gbl.h @@ -10,6 +10,16 @@ #include /* + * Global functions prototype in qla_nvme.c source file. 
+ */ +extern void qla_nvme_register_hba(scsi_qla_host_t *); +extern int qla_nvme_register_remote(scsi_qla_host_t *, fc_port_t *); +extern void qla_nvme_delete(scsi_qla_host_t *); +extern void qla_nvme_abort(struct qla_hw_data *, srb_t *sp); +extern void qla24xx_nvme_ls4_iocb(scsi_qla_host_t *, struct pt_ls4_request *, + struct req_que *); + +/* * Global Function Prototypes in qla_init.c source file. */ extern int qla2x00_initialize_adapter(scsi_qla_host_t *); @@ -141,6 +151,7 @@ extern int ql2xiniexchg; extern int ql2xfwholdabts; extern int ql2xmvasynctoatio; extern int ql2xuctrlirq; +extern int ql2xnvmeenable; extern int qla2x00_loop_reset(scsi_qla_host_t *); extern void qla2x00_abort_all_cmds(scsi_qla_host_t *, int); diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c index 8a2586a04961..7286a80f796c 100644 --- a/drivers/scsi/qla2xxx/qla_init.c +++ b/drivers/scsi/qla2xxx/qla_init.c @@ -4513,6 +4513,11 @@ qla2x00_update_fcport(scsi_qla_host_t *vha, fc_port_t *fcport) fcport->deleted = 0; fcport->logout_on_delete = 1; + if (fcport->fc4f_nvme) { + qla_nvme_register_remote(vha, fcport); + return; + } + qla2x00_set_fcport_state(fcport, FCS_ONLINE); qla2x00_iidma_fcport(vha, fcport); qla24xx_update_fcport_fcp_prio(vha, fcport); @@ -4662,6 +4667,9 @@ qla2x00_configure_fabric(scsi_qla_host_t *vha) break; } while (0); + if (!vha->nvme_local_port && vha->flags.nvme_enabled) + qla_nvme_register_hba(vha); + if (rval) ql_dbg(ql_dbg_disc, vha, 0x2068, "Configure fabric error exit rval=%d.\n", rval); diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c index daa53235a28a..d40fa000615c 100644 --- a/drivers/scsi/qla2xxx/qla_iocb.c +++ b/drivers/scsi/qla2xxx/qla_iocb.c @@ -3155,6 +3155,39 @@ static void qla2x00_send_notify_ack_iocb(srb_t *sp, nack->u.isp24.vp_index = ntfy->u.isp24.vp_index; } +/* + * Build NVME LS request + */ +static int +qla_nvme_ls(srb_t *sp, struct pt_ls4_request *cmd_pkt) +{ + struct srb_iocb *nvme; + int rval = QLA_SUCCESS; + + nvme = &sp->u.iocb_cmd; + cmd_pkt->entry_type = PT_LS4_REQUEST; + cmd_pkt->entry_count = 1; + cmd_pkt->control_flags = CF_LS4_ORIGINATOR << CF_LS4_SHIFT; + + cmd_pkt->timeout = cpu_to_le16(nvme->u.nvme.timeout_sec); + cmd_pkt->nport_handle = cpu_to_le16(sp->fcport->loop_id); + cmd_pkt->vp_index = sp->fcport->vha->vp_idx; + + cmd_pkt->tx_dseg_count = 1; + cmd_pkt->tx_byte_count = nvme->u.nvme.cmd_len; + cmd_pkt->dseg0_len = nvme->u.nvme.cmd_len; + cmd_pkt->dseg0_address[0] = cpu_to_le32(LSD(nvme->u.nvme.cmd_dma)); + cmd_pkt->dseg0_address[1] = cpu_to_le32(MSD(nvme->u.nvme.cmd_dma)); + + cmd_pkt->rx_dseg_count = 1; + cmd_pkt->rx_byte_count = nvme->u.nvme.rsp_len; + cmd_pkt->dseg1_len = nvme->u.nvme.rsp_len; + cmd_pkt->dseg1_address[0] = cpu_to_le32(LSD(nvme->u.nvme.rsp_dma)); + cmd_pkt->dseg1_address[1] = cpu_to_le32(MSD(nvme->u.nvme.rsp_dma)); + + return rval; +} + int qla2x00_start_sp(srb_t *sp) { @@ -3211,6 +3244,9 @@ qla2x00_start_sp(srb_t *sp) case SRB_FXIOCB_BCMD: qlafx00_fxdisc_iocb(sp, pkt); break; + case SRB_NVME_LS: + qla_nvme_ls(sp, pkt); + break; case SRB_ABT_CMD: IS_QLAFX00(ha) ? 
qlafx00_abort_iocb(sp, pkt) : diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c index 477aea7c9a88..011faa1dc618 100644 --- a/drivers/scsi/qla2xxx/qla_isr.c +++ b/drivers/scsi/qla2xxx/qla_isr.c @@ -2828,6 +2828,21 @@ qla24xx_abort_iocb_entry(scsi_qla_host_t *vha, struct req_que *req, sp->done(sp, 0); } +void qla24xx_nvme_ls4_iocb(scsi_qla_host_t *vha, struct pt_ls4_request *pkt, + struct req_que *req) +{ + srb_t *sp; + const char func[] = "LS4_IOCB"; + uint16_t comp_status; + + sp = qla2x00_get_sp_from_handle(vha, func, req, pkt); + if (!sp) + return; + + comp_status = le16_to_cpu(pkt->status); + sp->done(sp, comp_status); +} + /** * qla24xx_process_response_queue() - Process response queue entries. * @ha: SCSI driver HA context @@ -2901,6 +2916,10 @@ void qla24xx_process_response_queue(struct scsi_qla_host *vha, case CTIO_CRC2: qlt_response_pkt_all_vps(vha, rsp, (response_t *)pkt); break; + case PT_LS4_REQUEST: + qla24xx_nvme_ls4_iocb(vha, (struct pt_ls4_request *)pkt, + rsp->req); + break; case NOTIFY_ACK_TYPE: if (pkt->handle == QLA_TGT_SKIP_HANDLE) qlt_response_pkt_all_vps(vha, rsp, diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c index 1eac67e8fdfd..0764b6172ed1 100644 --- a/drivers/scsi/qla2xxx/qla_mbx.c +++ b/drivers/scsi/qla2xxx/qla_mbx.c @@ -560,6 +560,8 @@ qla2x00_load_ram(scsi_qla_host_t *vha, dma_addr_t req_dma, uint32_t risc_addr, } #define EXTENDED_BB_CREDITS BIT_0 +#define NVME_ENABLE_FLAG BIT_3 + /* * qla2x00_execute_fw * Start adapter firmware. @@ -601,6 +603,9 @@ qla2x00_execute_fw(scsi_qla_host_t *vha, uint32_t risc_addr) } else mcp->mb[4] = 0; + if (ql2xnvmeenable && IS_QLA27XX(ha)) + mcp->mb[4] |= NVME_ENABLE_FLAG; + if (ha->flags.exlogins_enabled) mcp->mb[4] |= ENABLE_EXTENDED_LOGIN; @@ -941,6 +946,22 @@ qla2x00_get_fw_version(scsi_qla_host_t *vha) ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x1191, "%s: Firmware supports Exchange Offload 0x%x\n", __func__, ha->fw_attributes_h); + + /* bit 26 of fw_attributes */ + if ((ha->fw_attributes_h & 0x400) && ql2xnvmeenable) { + struct init_cb_24xx *icb; + + icb = (struct init_cb_24xx *)ha->init_cb; + /* + * fw supports nvme and driver load + * parameter requested nvme + */ + vha->flags.nvme_enabled = 1; + icb->firmware_options_2 &= cpu_to_le32(~0xf); + ha->zio_mode = 0; + ha->zio_timer = 0; + } + } if (IS_QLA27XX(ha)) { diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c new file mode 100644 index 000000000000..1da8fa8f641d --- /dev/null +++ b/drivers/scsi/qla2xxx/qla_nvme.c @@ -0,0 +1,756 @@ +/* + * QLogic Fibre Channel HBA Driver + * Copyright (c) 2003-2017 QLogic Corporation + * + * See LICENSE.qla2xxx for copyright and licensing details. 
+ */ +#include "qla_nvme.h" +#include "qla_def.h" +#include +#include +#include +#include + +static struct nvme_fc_port_template qla_nvme_fc_transport; + +static void qla_nvme_unregister_remote_port(struct work_struct *); + +int qla_nvme_register_remote(scsi_qla_host_t *vha, fc_port_t *fcport) +{ +#if (IS_ENABLED(CONFIG_NVME_FC)) + struct nvme_rport *rport; + int ret; + + if (fcport->nvme_flag & NVME_FLAG_REGISTERED) + return 0; + + if (!vha->flags.nvme_enabled) { + ql_log(ql_log_info, vha, 0x2100, + "%s: Not registering target since Host NVME is not enabled\n", + __func__); + return 0; + } + + if (!(fcport->nvme_prli_service_param & + (NVME_PRLI_SP_TARGET | NVME_PRLI_SP_DISCOVERY))) + return 0; + + INIT_WORK(&fcport->nvme_del_work, qla_nvme_unregister_remote_port); + rport = kzalloc(sizeof(*rport), GFP_KERNEL); + if (!rport) { + ql_log(ql_log_warn, vha, 0x2101, + "%s: unable to alloc memory\n", __func__); + return -ENOMEM; + } + + rport->req.port_name = wwn_to_u64(fcport->port_name); + rport->req.node_name = wwn_to_u64(fcport->node_name); + rport->req.port_role = 0; + + if (fcport->nvme_prli_service_param & NVME_PRLI_SP_INITIATOR) + rport->req.port_role = FC_PORT_ROLE_NVME_INITIATOR; + + if (fcport->nvme_prli_service_param & NVME_PRLI_SP_TARGET) + rport->req.port_role |= FC_PORT_ROLE_NVME_TARGET; + + if (fcport->nvme_prli_service_param & NVME_PRLI_SP_DISCOVERY) + rport->req.port_role |= FC_PORT_ROLE_NVME_DISCOVERY; + + rport->req.port_id = fcport->d_id.b24; + + ql_log(ql_log_info, vha, 0x2102, + "%s: traddr=pn-0x%016llx:nn-0x%016llx PortID:%06x\n", + __func__, rport->req.port_name, rport->req.node_name, + rport->req.port_id); + + ret = nvme_fc_register_remoteport(vha->nvme_local_port, &rport->req, + &fcport->nvme_remote_port); + if (ret) { + ql_log(ql_log_warn, vha, 0x212e, + "Failed to register remote port. Transport returned %d\n", + ret); + return ret; + } + + fcport->nvme_remote_port->private = fcport; + fcport->nvme_flag |= NVME_FLAG_REGISTERED; + atomic_set(&fcport->nvme_ref_count, 1); + init_waitqueue_head(&fcport->nvme_waitQ); + rport->fcport = fcport; + list_add_tail(&rport->list, &vha->nvme_rport_list); +#endif + return 0; +} + +/* Allocate a queue for NVMe traffic */ +static int qla_nvme_alloc_queue(struct nvme_fc_local_port *lport, unsigned int qidx, + u16 qsize, void **handle) +{ + struct scsi_qla_host *vha; + struct qla_hw_data *ha; + struct qla_qpair *qpair; + + if (!qidx) + qidx++; + + vha = (struct scsi_qla_host *)lport->private; + ha = vha->hw; + + ql_log(ql_log_info, vha, 0x2104, + "%s: handle %p, idx =%d, qsize %d\n", + __func__, handle, qidx, qsize); + + if (qidx > qla_nvme_fc_transport.max_hw_queues) { + ql_log(ql_log_warn, vha, 0x212f, + "%s: Illegal qidx=%d. 
Max=%d\n", + __func__, qidx, qla_nvme_fc_transport.max_hw_queues); + return -EINVAL; + } + + if (ha->queue_pair_map[qidx]) { + *handle = ha->queue_pair_map[qidx]; + ql_log(ql_log_info, vha, 0x2121, + "Returning existing qpair of %p for idx=%x\n", + *handle, qidx); + return 0; + } + + ql_log(ql_log_warn, vha, 0xffff, + "allocating q for idx=%x w/o cpu mask\n", qidx); + qpair = qla2xxx_create_qpair(vha, 5, vha->vp_idx, true); + if (qpair == NULL) { + ql_log(ql_log_warn, vha, 0x2122, + "Failed to allocate qpair\n"); + return -EINVAL; + } + *handle = qpair; + + return 0; +} + +static void qla_nvme_sp_ls_done(void *ptr, int res) +{ + srb_t *sp = ptr; + struct srb_iocb *nvme; + struct nvmefc_ls_req *fd; + struct nvme_private *priv; + + if (atomic_read(&sp->ref_count) == 0) { + ql_log(ql_log_warn, sp->fcport->vha, 0x2123, + "SP reference-count to ZERO on LS_done -- sp=%p.\n", sp); + return; + } + + if (!atomic_dec_and_test(&sp->ref_count)) + return; + + if (res) + res = -EINVAL; + + nvme = &sp->u.iocb_cmd; + fd = nvme->u.nvme.desc; + priv = fd->private; + priv->comp_status = res; + schedule_work(&priv->ls_work); + /* work schedule doesn't need the sp */ + qla2x00_rel_sp(sp); +} + +static void qla_nvme_sp_done(void *ptr, int res) +{ + srb_t *sp = ptr; + struct srb_iocb *nvme; + struct nvmefc_fcp_req *fd; + + nvme = &sp->u.iocb_cmd; + fd = nvme->u.nvme.desc; + + if (!atomic_dec_and_test(&sp->ref_count)) + return; + + if (!(sp->fcport->nvme_flag & NVME_FLAG_REGISTERED)) + goto rel; + + if (unlikely(nvme->u.nvme.comp_status || res)) + fd->status = -EINVAL; + else + fd->status = 0; + + fd->rcv_rsplen = nvme->u.nvme.rsp_pyld_len; + fd->done(fd); +rel: + qla2xxx_rel_qpair_sp(sp->qpair, sp); +} + +static void qla_nvme_ls_abort(struct nvme_fc_local_port *lport, + struct nvme_fc_remote_port *rport, struct nvmefc_ls_req *fd) +{ + struct nvme_private *priv = fd->private; + fc_port_t *fcport = rport->private; + srb_t *sp = priv->sp; + int rval; + struct qla_hw_data *ha = fcport->vha->hw; + + rval = ha->isp_ops->abort_command(sp); + if (rval != QLA_SUCCESS) + ql_log(ql_log_warn, fcport->vha, 0x2125, + "%s: failed to abort LS command for SP:%p rval=%x\n", + __func__, sp, rval); + + ql_dbg(ql_dbg_io, fcport->vha, 0x212b, + "%s: aborted sp:%p on fcport:%p\n", __func__, sp, fcport); +} + +static void qla_nvme_ls_complete(struct work_struct *work) +{ + struct nvme_private *priv = + container_of(work, struct nvme_private, ls_work); + struct nvmefc_ls_req *fd = priv->fd; + + fd->done(fd, priv->comp_status); +} + +static int qla_nvme_ls_req(struct nvme_fc_local_port *lport, + struct nvme_fc_remote_port *rport, struct nvmefc_ls_req *fd) +{ + fc_port_t *fcport = (fc_port_t *)rport->private; + struct srb_iocb *nvme; + struct nvme_private *priv = fd->private; + struct scsi_qla_host *vha; + int rval = QLA_FUNCTION_FAILED; + struct qla_hw_data *ha; + srb_t *sp; + + if (!(fcport->nvme_flag & NVME_FLAG_REGISTERED)) + return rval; + + vha = fcport->vha; + ha = vha->hw; + /* Alloc SRB structure */ + sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC); + if (!sp) + return rval; + + sp->type = SRB_NVME_LS; + sp->name = "nvme_ls"; + sp->done = qla_nvme_sp_ls_done; + atomic_set(&sp->ref_count, 1); + init_waitqueue_head(&sp->nvme_ls_waitQ); + nvme = &sp->u.iocb_cmd; + priv->sp = sp; + priv->fd = fd; + INIT_WORK(&priv->ls_work, qla_nvme_ls_complete); + nvme->u.nvme.desc = fd; + nvme->u.nvme.dir = 0; + nvme->u.nvme.dl = 0; + nvme->u.nvme.cmd_len = fd->rqstlen; + nvme->u.nvme.rsp_len = fd->rsplen; + nvme->u.nvme.rsp_dma = fd->rspdma; + 
nvme->u.nvme.timeout_sec = fd->timeout; + nvme->u.nvme.cmd_dma = dma_map_single(&ha->pdev->dev, fd->rqstaddr, + fd->rqstlen, DMA_TO_DEVICE); + dma_sync_single_for_device(&ha->pdev->dev, nvme->u.nvme.cmd_dma, + fd->rqstlen, DMA_TO_DEVICE); + + rval = qla2x00_start_sp(sp); + if (rval != QLA_SUCCESS) { + ql_log(ql_log_warn, vha, 0x700e, + "qla2x00_start_sp failed = %d\n", rval); + atomic_dec(&sp->ref_count); + wake_up(&sp->nvme_ls_waitQ); + return rval; + } + + return rval; +} + +static void qla_nvme_fcp_abort(struct nvme_fc_local_port *lport, + struct nvme_fc_remote_port *rport, void *hw_queue_handle, + struct nvmefc_fcp_req *fd) +{ + struct nvme_private *priv = fd->private; + srb_t *sp = priv->sp; + int rval; + fc_port_t *fcport = rport->private; + struct qla_hw_data *ha = fcport->vha->hw; + + rval = ha->isp_ops->abort_command(sp); + if (!rval) + ql_log(ql_log_warn, fcport->vha, 0x2127, + "%s: failed to abort command for SP:%p rval=%x\n", + __func__, sp, rval); + + ql_dbg(ql_dbg_io, fcport->vha, 0x2126, + "%s: aborted sp:%p on fcport:%p\n", __func__, sp, fcport); +} + +static void qla_nvme_poll(struct nvme_fc_local_port *lport, void *hw_queue_handle) +{ + struct scsi_qla_host *vha = lport->private; + unsigned long flags; + struct qla_qpair *qpair = (struct qla_qpair *)hw_queue_handle; + + /* Acquire ring specific lock */ + spin_lock_irqsave(&qpair->qp_lock, flags); + qla24xx_process_response_queue(vha, qpair->rsp); + spin_unlock_irqrestore(&qpair->qp_lock, flags); +} + +static int qla2x00_start_nvme_mq(srb_t *sp) +{ + unsigned long flags; + uint32_t *clr_ptr; + uint32_t index; + uint32_t handle; + struct cmd_nvme *cmd_pkt; + uint16_t cnt, i; + uint16_t req_cnt; + uint16_t tot_dsds; + uint16_t avail_dsds; + uint32_t *cur_dsd; + struct req_que *req = NULL; + struct scsi_qla_host *vha = sp->fcport->vha; + struct qla_hw_data *ha = vha->hw; + struct qla_qpair *qpair = sp->qpair; + struct srb_iocb *nvme = &sp->u.iocb_cmd; + struct scatterlist *sgl, *sg; + struct nvmefc_fcp_req *fd = nvme->u.nvme.desc; + uint32_t rval = QLA_SUCCESS; + + /* Setup qpair pointers */ + req = qpair->req; + tot_dsds = fd->sg_cnt; + + /* Acquire qpair specific lock */ + spin_lock_irqsave(&qpair->qp_lock, flags); + + /* Check for room in outstanding command list. */ + handle = req->current_outstanding_cmd; + for (index = 1; index < req->num_outstanding_cmds; index++) { + handle++; + if (handle == req->num_outstanding_cmds) + handle = 1; + if (!req->outstanding_cmds[handle]) + break; + } + + if (index == req->num_outstanding_cmds) { + rval = -1; + goto queuing_error; + } + req_cnt = qla24xx_calc_iocbs(vha, tot_dsds); + if (req->cnt < (req_cnt + 2)) { + cnt = IS_SHADOW_REG_CAPABLE(ha) ? *req->out_ptr : + RD_REG_DWORD_RELAXED(req->req_q_out); + + if (req->ring_index < cnt) + req->cnt = cnt - req->ring_index; + else + req->cnt = req->length - (req->ring_index - cnt); + + if (req->cnt < (req_cnt + 2)){ + rval = -1; + goto queuing_error; + } + } + + if (unlikely(!fd->sqid)) { + struct nvme_fc_cmd_iu *cmd = fd->cmdaddr; + if (cmd->sqe.common.opcode == nvme_admin_async_event) { + nvme->u.nvme.aen_op = 1; + atomic_inc(&vha->nvme_active_aen_cnt); + } + } + + /* Build command packet. */ + req->current_outstanding_cmd = handle; + req->outstanding_cmds[handle] = sp; + sp->handle = handle; + req->cnt -= req_cnt; + + cmd_pkt = (struct cmd_nvme *)req->ring_ptr; + cmd_pkt->handle = MAKE_HANDLE(req->id, handle); + + /* Zero out remaining portion of packet. 
*/ + clr_ptr = (uint32_t *)cmd_pkt + 2; + memset(clr_ptr, 0, REQUEST_ENTRY_SIZE - 8); + + cmd_pkt->entry_status = 0; + + /* Update entry type to indicate Command NVME IOCB */ + cmd_pkt->entry_type = COMMAND_NVME; + + /* No data transfer how do we check buffer len == 0?? */ + if (fd->io_dir == NVMEFC_FCP_READ) { + cmd_pkt->control_flags = + cpu_to_le16(CF_READ_DATA | CF_NVME_ENABLE); + vha->qla_stats.input_bytes += fd->payload_length; + vha->qla_stats.input_requests++; + } else if (fd->io_dir == NVMEFC_FCP_WRITE) { + cmd_pkt->control_flags = + cpu_to_le16(CF_WRITE_DATA | CF_NVME_ENABLE); + vha->qla_stats.output_bytes += fd->payload_length; + vha->qla_stats.output_requests++; + } else if (fd->io_dir == 0) { + cmd_pkt->control_flags = cpu_to_le16(CF_NVME_ENABLE); + } + + /* Set NPORT-ID */ + cmd_pkt->nport_handle = cpu_to_le16(sp->fcport->loop_id); + cmd_pkt->port_id[0] = sp->fcport->d_id.b.al_pa; + cmd_pkt->port_id[1] = sp->fcport->d_id.b.area; + cmd_pkt->port_id[2] = sp->fcport->d_id.b.domain; + cmd_pkt->vp_index = sp->fcport->vha->vp_idx; + + /* NVME RSP IU */ + cmd_pkt->nvme_rsp_dsd_len = cpu_to_le16(fd->rsplen); + cmd_pkt->nvme_rsp_dseg_address[0] = cpu_to_le32(LSD(fd->rspdma)); + cmd_pkt->nvme_rsp_dseg_address[1] = cpu_to_le32(MSD(fd->rspdma)); + + /* NVME CNMD IU */ + cmd_pkt->nvme_cmnd_dseg_len = cpu_to_le16(fd->cmdlen); + cmd_pkt->nvme_cmnd_dseg_address[0] = cpu_to_le32(LSD(fd->cmddma)); + cmd_pkt->nvme_cmnd_dseg_address[1] = cpu_to_le32(MSD(fd->cmddma)); + + cmd_pkt->dseg_count = cpu_to_le16(tot_dsds); + cmd_pkt->byte_count = cpu_to_le32(fd->payload_length); + + /* One DSD is available in the Command Type NVME IOCB */ + avail_dsds = 1; + cur_dsd = (uint32_t *)&cmd_pkt->nvme_data_dseg_address[0]; + sgl = fd->first_sgl; + + /* Load data segments */ + for_each_sg(sgl, sg, tot_dsds, i) { + dma_addr_t sle_dma; + cont_a64_entry_t *cont_pkt; + + /* Allocate additional continuation packets? */ + if (avail_dsds == 0) { + /* + * Five DSDs are available in the Continuation + * Type 1 IOCB. + */ + + /* Adjust ring index */ + req->ring_index++; + if (req->ring_index == req->length) { + req->ring_index = 0; + req->ring_ptr = req->ring; + } else { + req->ring_ptr++; + } + cont_pkt = (cont_a64_entry_t *)req->ring_ptr; + cont_pkt->entry_type = cpu_to_le32(CONTINUE_A64_TYPE); + + cur_dsd = (uint32_t *)cont_pkt->dseg_0_address; + avail_dsds = 5; + } + + sle_dma = sg_dma_address(sg); + *cur_dsd++ = cpu_to_le32(LSD(sle_dma)); + *cur_dsd++ = cpu_to_le32(MSD(sle_dma)); + *cur_dsd++ = cpu_to_le32(sg_dma_len(sg)); + avail_dsds--; + } + + /* Set total entry count. */ + cmd_pkt->entry_count = (uint8_t)req_cnt; + wmb(); + + /* Adjust ring index. */ + req->ring_index++; + if (req->ring_index == req->length) { + req->ring_index = 0; + req->ring_ptr = req->ring; + } else { + req->ring_ptr++; + } + + /* Set chip new ring index. 
*/ + WRT_REG_DWORD(req->req_q_in, req->ring_index); + +queuing_error: + spin_unlock_irqrestore(&qpair->qp_lock, flags); + return rval; +} + +/* Post a command */ +static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport, + struct nvme_fc_remote_port *rport, void *hw_queue_handle, + struct nvmefc_fcp_req *fd) +{ + fc_port_t *fcport; + struct srb_iocb *nvme; + struct scsi_qla_host *vha; + int rval = QLA_FUNCTION_FAILED; + srb_t *sp; + struct qla_qpair *qpair = (struct qla_qpair *)hw_queue_handle; + struct nvme_private *priv; + + if (!fd) { + ql_log(ql_log_warn, NULL, 0x2134, "NO NVMe FCP reqeust\n"); + return rval; + } + + priv = fd->private; + fcport = (fc_port_t *)rport->private; + if (!fcport) { + ql_log(ql_log_warn, NULL, 0x210e, "No fcport ptr\n"); + return rval; + } + + vha = fcport->vha; + if ((!qpair) || (!(fcport->nvme_flag & NVME_FLAG_REGISTERED))) + return -EBUSY; + + /* Alloc SRB structure */ + sp = qla2xxx_get_qpair_sp(qpair, fcport, GFP_ATOMIC); + if (!sp) + return -EIO; + + atomic_set(&sp->ref_count, 1); + init_waitqueue_head(&sp->nvme_ls_waitQ); + priv->sp = sp; + sp->type = SRB_NVME_CMD; + sp->name = "nvme_cmd"; + sp->done = qla_nvme_sp_done; + sp->qpair = qpair; + nvme = &sp->u.iocb_cmd; + nvme->u.nvme.desc = fd; + + rval = qla2x00_start_nvme_mq(sp); + if (rval != QLA_SUCCESS) { + ql_log(ql_log_warn, vha, 0x212d, + "qla2x00_start_nvme_mq failed = %d\n", rval); + atomic_dec(&sp->ref_count); + wake_up(&sp->nvme_ls_waitQ); + return -EIO; + } + + return rval; +} + +static void qla_nvme_localport_delete(struct nvme_fc_local_port *lport) +{ + struct scsi_qla_host *vha = lport->private; + + atomic_dec(&vha->nvme_ref_count); + wake_up_all(&vha->nvme_waitQ); + + ql_log(ql_log_info, vha, 0x210f, + "localport delete of %p completed.\n", vha->nvme_local_port); + vha->nvme_local_port = NULL; +} + +static void qla_nvme_remoteport_delete(struct nvme_fc_remote_port *rport) +{ + fc_port_t *fcport; + struct nvme_rport *r_port, *trport; + + fcport = (fc_port_t *)rport->private; + fcport->nvme_remote_port = NULL; + fcport->nvme_flag &= ~NVME_FLAG_REGISTERED; + atomic_dec(&fcport->nvme_ref_count); + wake_up_all(&fcport->nvme_waitQ); + + list_for_each_entry_safe(r_port, trport, + &fcport->vha->nvme_rport_list, list) { + if (r_port->fcport == fcport) { + list_del(&r_port->list); + break; + } + } + kfree(r_port); + + ql_log(ql_log_info, fcport->vha, 0x2110, + "remoteport_delete of %p completed.\n", fcport); +} + +static struct nvme_fc_port_template qla_nvme_fc_transport = { + .localport_delete = qla_nvme_localport_delete, + .remoteport_delete = qla_nvme_remoteport_delete, + .create_queue = qla_nvme_alloc_queue, + .delete_queue = NULL, + .ls_req = qla_nvme_ls_req, + .ls_abort = qla_nvme_ls_abort, + .fcp_io = qla_nvme_post_cmd, + .fcp_abort = qla_nvme_fcp_abort, + .poll_queue = qla_nvme_poll, + .max_hw_queues = 8, + .max_sgl_segments = 128, + .max_dif_sgl_segments = 64, + .dma_boundary = 0xFFFFFFFF, + .local_priv_sz = 8, + .remote_priv_sz = 0, + .lsrqst_priv_sz = sizeof(struct nvme_private), + .fcprqst_priv_sz = sizeof(struct nvme_private), +}; + +#define NVME_ABORT_POLLING_PERIOD 2 +static int qla_nvme_wait_on_command(srb_t *sp) +{ + int ret = QLA_SUCCESS; + + wait_event_timeout(sp->nvme_ls_waitQ, (atomic_read(&sp->ref_count) > 1), + NVME_ABORT_POLLING_PERIOD*HZ); + + if (atomic_read(&sp->ref_count) > 1) + ret = QLA_FUNCTION_FAILED; + + return ret; +} + +static int qla_nvme_wait_on_rport_del(fc_port_t *fcport) +{ + int ret = QLA_SUCCESS; + + wait_event_timeout(fcport->nvme_waitQ, + 
atomic_read(&fcport->nvme_ref_count), + NVME_ABORT_POLLING_PERIOD*HZ); + + if (atomic_read(&fcport->nvme_ref_count)) { + ret = QLA_FUNCTION_FAILED; + ql_log(ql_log_info, fcport->vha, 0x2111, + "timed out waiting for fcport=%p to delete\n", fcport); + } + + return ret; +} + +void qla_nvme_abort(struct qla_hw_data *ha, srb_t *sp) +{ + int rval; + + rval = ha->isp_ops->abort_command(sp); + if (!rval) { + if (!qla_nvme_wait_on_command(sp)) + ql_log(ql_log_warn, NULL, 0x2112, + "nvme_wait_on_comand timed out waiting on sp=%p\n", + sp); + } +} + +static void qla_nvme_abort_all(fc_port_t *fcport) +{ + int que, cnt; + unsigned long flags; + srb_t *sp; + struct qla_hw_data *ha = fcport->vha->hw; + struct req_que *req; + + spin_lock_irqsave(&ha->hardware_lock, flags); + for (que = 0; que < ha->max_req_queues; que++) { + req = ha->req_q_map[que]; + if (!req) + continue; + if (!req->outstanding_cmds) + continue; + for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) { + sp = req->outstanding_cmds[cnt]; + if ((sp) && ((sp->type == SRB_NVME_CMD) || + (sp->type == SRB_NVME_LS)) && + (sp->fcport == fcport)) { + atomic_inc(&sp->ref_count); + spin_unlock_irqrestore(&ha->hardware_lock, + flags); + qla_nvme_abort(ha, sp); + spin_lock_irqsave(&ha->hardware_lock, flags); + req->outstanding_cmds[cnt] = NULL; + sp->done(sp, 1); + } + } + } + spin_unlock_irqrestore(&ha->hardware_lock, flags); +} + +static void qla_nvme_unregister_remote_port(struct work_struct *work) +{ +#if (IS_ENABLED(CONFIG_NVME_FC)) + struct fc_port *fcport = container_of(work, struct fc_port, + nvme_del_work); + struct nvme_rport *rport, *trport; + + list_for_each_entry_safe(rport, trport, + &fcport->vha->nvme_rport_list, list) { + if (rport->fcport == fcport) { + ql_log(ql_log_info, fcport->vha, 0x2113, + "%s: fcport=%p\n", __func__, fcport); + nvme_fc_unregister_remoteport( + fcport->nvme_remote_port); + } + } +#endif +} + +void qla_nvme_delete(scsi_qla_host_t *vha) +{ +#if (IS_ENABLED(CONFIG_NVME_FC)) + struct nvme_rport *rport, *trport; + fc_port_t *fcport; + int nv_ret; + + list_for_each_entry_safe(rport, trport, &vha->nvme_rport_list, list) { + fcport = rport->fcport; + + ql_log(ql_log_info, fcport->vha, 0x2114, "%s: fcport=%p\n", + __func__, fcport); + + nvme_fc_unregister_remoteport(fcport->nvme_remote_port); + qla_nvme_wait_on_rport_del(fcport); + qla_nvme_abort_all(fcport); + } + + if (vha->nvme_local_port) { + nv_ret = nvme_fc_unregister_localport(vha->nvme_local_port); + if (nv_ret == 0) + ql_log(ql_log_info, vha, 0x2116, + "unregistered localport=%p\n", + vha->nvme_local_port); + else + ql_log(ql_log_info, vha, 0x2115, + "Unregister of localport failed\n"); + } +#endif +} + +void qla_nvme_register_hba(scsi_qla_host_t *vha) +{ +#if (IS_ENABLED(CONFIG_NVME_FC)) + struct nvme_fc_port_template *tmpl; + struct qla_hw_data *ha; + struct nvme_fc_port_info pinfo; + int ret; + + ha = vha->hw; + tmpl = &qla_nvme_fc_transport; + + WARN_ON(vha->nvme_local_port); + WARN_ON(ha->max_req_queues < 3); + + qla_nvme_fc_transport.max_hw_queues = + min((uint8_t)(qla_nvme_fc_transport.max_hw_queues), + (uint8_t)(ha->max_req_queues - 2)); + + pinfo.node_name = wwn_to_u64(vha->node_name); + pinfo.port_name = wwn_to_u64(vha->port_name); + pinfo.port_role = FC_PORT_ROLE_NVME_INITIATOR; + pinfo.port_id = vha->d_id.b24; + + ql_log(ql_log_info, vha, 0xffff, + "register_localport: host-traddr=pn-0x%llx:nn-0x%llx on portID:%x\n", + pinfo.port_name, pinfo.node_name, pinfo.port_id); + qla_nvme_fc_transport.dma_boundary = vha->host->dma_boundary; + + ret = 
nvme_fc_register_localport(&pinfo, tmpl, + get_device(&ha->pdev->dev), &vha->nvme_local_port); + if (ret) { + ql_log(ql_log_warn, vha, 0xffff, + "register_localport failed: ret=%x\n", ret); + return; + } + atomic_set(&vha->nvme_ref_count, 1); + vha->nvme_local_port->private = vha; + init_waitqueue_head(&vha->nvme_waitQ); +#endif +} diff --git a/drivers/scsi/qla2xxx/qla_nvme.h b/drivers/scsi/qla2xxx/qla_nvme.h new file mode 100644 index 000000000000..dfe56f207b28 --- /dev/null +++ b/drivers/scsi/qla2xxx/qla_nvme.h @@ -0,0 +1,132 @@ +/* + * QLogic Fibre Channel HBA Driver + * Copyright (c) 2003-2017 QLogic Corporation + * + * See LICENSE.qla2xxx for copyright and licensing details. + */ +#ifndef __QLA_NVME_H +#define __QLA_NVME_H + +#include +#include +#include +#include + +#define NVME_ATIO_CMD_OFF 32 +#define NVME_FIRST_PACKET_CMDLEN (64 - NVME_ATIO_CMD_OFF) +#define Q2T_NVME_NUM_TAGS 2048 +#define QLA_MAX_FC_SEGMENTS 64 + +struct srb; +struct nvme_private { + struct srb *sp; + struct nvmefc_ls_req *fd; + struct work_struct ls_work; + int comp_status; +}; + +struct nvme_rport { + struct nvme_fc_port_info req; + struct list_head list; + struct fc_port *fcport; +}; + +#define COMMAND_NVME 0x88 /* Command Type FC-NVMe IOCB */ +struct cmd_nvme { + uint8_t entry_type; /* Entry type. */ + uint8_t entry_count; /* Entry count. */ + uint8_t sys_define; /* System defined. */ + uint8_t entry_status; /* Entry Status. */ + + uint32_t handle; /* System handle. */ + uint16_t nport_handle; /* N_PORT handle. */ + uint16_t timeout; /* Command timeout. */ + + uint16_t dseg_count; /* Data segment count. */ + uint16_t nvme_rsp_dsd_len; /* NVMe RSP DSD length */ + + uint64_t rsvd; + + uint16_t control_flags; /* Control Flags */ +#define CF_NVME_ENABLE BIT_9 +#define CF_DIF_SEG_DESCR_ENABLE BIT_3 +#define CF_DATA_SEG_DESCR_ENABLE BIT_2 +#define CF_READ_DATA BIT_1 +#define CF_WRITE_DATA BIT_0 + + uint16_t nvme_cmnd_dseg_len; /* Data segment length. */ + uint32_t nvme_cmnd_dseg_address[2]; /* Data segment address. */ + uint32_t nvme_rsp_dseg_address[2]; /* Data segment address. */ + + uint32_t byte_count; /* Total byte count. */ + + uint8_t port_id[3]; /* PortID of destination port. */ + uint8_t vp_index; + + uint32_t nvme_data_dseg_address[2]; /* Data segment address. */ + uint32_t nvme_data_dseg_len; /* Data segment length. 
*/ +}; + +#define PT_LS4_REQUEST 0x89 /* Link Service pass-through IOCB (request) */ +struct pt_ls4_request { + uint8_t entry_type; + uint8_t entry_count; + uint8_t sys_define; + uint8_t entry_status; + uint32_t handle; + uint16_t status; + uint16_t nport_handle; + uint16_t tx_dseg_count; + uint8_t vp_index; + uint8_t rsvd; + uint16_t timeout; + uint16_t control_flags; +#define CF_LS4_SHIFT 13 +#define CF_LS4_ORIGINATOR 0 +#define CF_LS4_RESPONDER 1 +#define CF_LS4_RESPONDER_TERM 2 + + uint16_t rx_dseg_count; + uint16_t rsvd2; + uint32_t exchange_address; + uint32_t rsvd3; + uint32_t rx_byte_count; + uint32_t tx_byte_count; + uint32_t dseg0_address[2]; + uint32_t dseg0_len; + uint32_t dseg1_address[2]; + uint32_t dseg1_len; +}; + +#define PT_LS4_UNSOL 0x56 /* pass-up unsolicited rec FC-NVMe request */ +struct pt_ls4_rx_unsol { + uint8_t entry_type; + uint8_t entry_count; + uint16_t rsvd0; + uint16_t rsvd1; + uint8_t vp_index; + uint8_t rsvd2; + uint16_t rsvd3; + uint16_t nport_handle; + uint16_t frame_size; + uint16_t rsvd4; + uint32_t exchange_address; + uint8_t d_id[3]; + uint8_t r_ctl; + uint8_t s_id[3]; + uint8_t cs_ctl; + uint8_t f_ctl[3]; + uint8_t type; + uint16_t seq_cnt; + uint8_t df_ctl; + uint8_t seq_id; + uint16_t rx_id; + uint16_t ox_id; + uint32_t param; + uint32_t desc0; +#define PT_LS4_PAYLOAD_OFFSET 0x2c +#define PT_LS4_FIRST_PACKET_LEN 20 + uint32_t desc_len; + uint32_t payload[3]; +}; +#endif diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c index ef5211fd2154..3b75d760b99e 100644 --- a/drivers/scsi/qla2xxx/qla_os.c +++ b/drivers/scsi/qla2xxx/qla_os.c @@ -120,7 +120,11 @@ MODULE_PARM_DESC(ql2xmaxqdepth, "Maximum queue depth to set for each LUN. " "Default is 32."); +#if (IS_ENABLED(CONFIG_NVME_FC)) +int ql2xenabledif; +#else int ql2xenabledif = 2; +#endif module_param(ql2xenabledif, int, S_IRUGO); MODULE_PARM_DESC(ql2xenabledif, " Enable T10-CRC-DIF:\n" @@ -129,6 +133,16 @@ MODULE_PARM_DESC(ql2xenabledif, " 1 -- Enable DIF for all types\n" " 2 -- Enable DIF for all types, except Type 0.\n"); +#if (IS_ENABLED(CONFIG_NVME_FC)) +int ql2xnvmeenable = 1; +#else +int ql2xnvmeenable; +#endif +module_param(ql2xnvmeenable, int, 0644); +MODULE_PARM_DESC(ql2xnvmeenable, + "Enables NVME support. " + "0 - no NVMe. Default is Y"); + int ql2xenablehba_err_chk = 2; module_param(ql2xenablehba_err_chk, int, S_IRUGO|S_IWUSR); MODULE_PARM_DESC(ql2xenablehba_err_chk, @@ -267,6 +281,7 @@ static void qla2x00_clear_drv_active(struct qla_hw_data *); static void qla2x00_free_device(scsi_qla_host_t *); static void qla83xx_disable_laser(scsi_qla_host_t *vha); static int qla2xxx_map_queues(struct Scsi_Host *shost); +static void qla2x00_destroy_deferred_work(struct qla_hw_data *); struct scsi_host_template qla2xxx_driver_template = { .module = THIS_MODULE, @@ -695,7 +710,7 @@ qla2x00_sp_free_dma(void *ptr) } end: - if (sp->type != SRB_NVME_CMD) { + if ((sp->type != SRB_NVME_CMD) && (sp->type != SRB_NVME_LS)) { CMD_SP(cmd) = NULL; qla2x00_rel_sp(sp); } @@ -1700,15 +1715,23 @@ qla2x00_abort_all_cmds(scsi_qla_host_t *vha, int res) if (sp) { req->outstanding_cmds[cnt] = NULL; if (sp->cmd_type == TYPE_SRB) { - /* - * Don't abort commands in adapter - * during EEH recovery as it's not - * accessible/responding. 
- */ - if (GET_CMD_SP(sp) && + if ((sp->type == SRB_NVME_CMD) || + (sp->type == SRB_NVME_LS)) { + sp_get(sp); + spin_unlock_irqrestore( + &ha->hardware_lock, flags); + qla_nvme_abort(ha, sp); + spin_lock_irqsave( + &ha->hardware_lock, flags); + } else if (GET_CMD_SP(sp) && !ha->flags.eeh_busy && (sp->type == SRB_SCSI_CMD)) { /* + * Don't abort commands in + * adapter during EEH + * recovery as it's not + * accessible/responding. + * * Get a reference to the sp * and drop the lock. The * reference ensures this @@ -3534,6 +3557,9 @@ qla2x00_remove_one(struct pci_dev *pdev) return; set_bit(UNLOADING, &base_vha->dpc_flags); + + qla_nvme_delete(base_vha); + dma_free_coherent(&ha->pdev->dev, base_vha->gnl.size, base_vha->gnl.l, base_vha->gnl.ldma);
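
A note on the teardown path in qla_nvme_delete() above: port removal relies on
a refcount plus waitqueue hand-shake with the transport's localport_delete/
remoteport_delete callbacks, which drop nvme_ref_count and wake nvme_waitQ. A
minimal sketch of that kind of hand-shake is shown here (hypothetical helper,
not taken verbatim from the patch; it assumes the waiter blocks until the
count reaches zero):

    /* Hypothetical sketch of the unregister hand-shake used in teardown. */
    static int example_wait_for_rport_delete(fc_port_t *fcport)
    {
            /* remoteport_delete() decrements nvme_ref_count and wakes us. */
            wait_event_timeout(fcport->nvme_waitQ,
                               atomic_read(&fcport->nvme_ref_count) == 0,
                               2 * HZ);

            return atomic_read(&fcport->nvme_ref_count) ? -ETIMEDOUT : 0;
    }
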