From patchwork Fri Dec 23 19:17:07 2016
X-Patchwork-Submitter: "Dupuis, Chad"
X-Patchwork-Id: 9487657
"Dupuis, Chad" To: CC: , , , , Subject: [PATCH RFC 4/5] qedf: Add offload ELS request handling. Date: Fri, 23 Dec 2016 11:17:07 -0800 Message-ID: <1482520628-24207-5-git-send-email-chad.dupuis@cavium.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1482520628-24207-1-git-send-email-chad.dupuis@cavium.com> References: <1482520628-24207-1-git-send-email-chad.dupuis@cavium.com> MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-Forefront-Antispam-Report: CIP:50.232.66.26; IPV:CAL; CTRY:US; EFV:NLI; SFV:NSPM; SFS:(10009020)(6009001)(7916002)(39450400003)(2980300002)(428002)(189002)(199003)(101416001)(6666003)(110136003)(4720700003)(6916009)(2950100002)(33646002)(5660300001)(305945005)(356003)(80596001)(69596002)(626004)(48376002)(5003940100001)(50466002)(92566002)(76176999)(50986999)(36756003)(8936002)(81156014)(8676002)(81166006)(50226002)(4326007)(47776003)(86362001)(189998001)(107886002)(26826002)(2906002)(2351001)(38730400001)(106466001)(105586002)(42186005)(4001430100002)(87636001)(217873001); DIR:OUT; SFP:1101; SCL:1; SRVR:BLUPR07MB193; H:CAEXCH02.caveonetworks.com; FPR:; SPF:None; PTR:50-232-66-26-static.hfc.comcastbusiness.net; MX:1; A:1; LANG:en; X-Microsoft-Exchange-Diagnostics: 1; BL2FFO11FD009; 1:W6z24s4+1rdKo6yXhhW8mdEmNWYTw5bo7e0hxTD88RRuAwjQeSSKTyZAPoymDDy6TCZPf3y9gNi23tZUGnSJts0O/uPkeIHepYrmXVEwM7tyr5c2ilgT7zDnTu9K0tNnSI4RYLjugD7wwSO3+uc4wzIMb7fWpUOmm/hatK7aCwLQU2PqXgbxTye4WZpIy3yylDCEFJ6sxXDZqECVPbA84AzyUCJnM/f0SdPWA6j295B0alsdf24eukEhukaqbYq9d6UTvQwjXhuYRwNx07C4DzGpMn/M0MJnrg5ws03XExRV8bitqDXHEsxOnAQ9lNu/ERUqEg+TYtNR++Z15F4Jb6+GF+X5fNsOAcDuFADIrEYf5Qcz4bW3HX+EQndput+ID4aUOKUj5tz49KLLZWAb3TGyUGLF0d1rJx1AwiQBOqqemlUbV7ii7ktN7owdm4ff+zWlR/5s+DEUnKv7CDblE+hoBFPIepJCpPX/2szUFSDzMYkwYy51ql6ZpDrrh5m+Howrp6twpimTMsls5E5GGg== X-MS-Office365-Filtering-Correlation-Id: e3351ede-f742-40fa-53d5-08d42b685833 X-Microsoft-Antispam: UriScan:; BCL:0; PCL:0; RULEID:(22001); SRVR:BLUPR07MB193; X-Microsoft-Exchange-Diagnostics: 1; BLUPR07MB193; 3:ulMRdMJWadnIR1quWx4z/qrN0VGKakmWZWCI9pGbnBWlls+cClHZZPmHzzoW4JKFea3MG7Aqj/zRBU0hcYrNlMUi/IP9xqOe9H0dqs/7DsnkTdpRle2beFxg5YqXK23d/XLj007Cr/awKEdIrVDGm4oyPNCLgsk75tW6rKb8eH6vYGe77glyKvEsgtaGsBFX0S3NjkBWfRY+t856kgEE3JYiV2JToP7dggSM0zp0T3VJ7NpMrZuiJuq9z2l9L70m9im7GaLTe02HecU7v/y0+LFGXpzN3E6V2vmK7gP9BmlIvKym/LWFTv6Ye1samtk4qcOm7b5TL1RdYs1mUgiRsOKHfjB/wqgPgLTgx2tuQK6X6Sn0tskZGGQCNFVdtsBE; 25:YYme0VuQ3YNAO8bQ/u/r4q26ou0uUX5Y1+vRfEHF/LyDM8TqpIqgwcDy603NPNMsrMyKCcZzUW1h5qB3ajPx5kbOso+CMNmUMDOGaarfMRBgVgMO0xO7KMLl/kHbUs/f/+q2pM+khBNG8yfdv9lbGnsldjJuy37Sgb+lm9DEbJJ98aEUW75dPeqJ27qhTaxfswOjfUkl+/m6LS2aYaC161nqSwsqbr85iXuf63uThkxgmZWz1SdU5MY4llRy0lt4vLDRC2qa0od8UqXMjP/GatTyAEWlP2qeYZ+FPX4Figv5Qe9zjpb1ebSOG5jdjclKfXj2BnUUrGi8LrV9reLUJ0Xt6u2cWe36j6d0aQONlqWb92IWgDztPs+TyEaheRnFWS19GfE6SNBPYUmDofyNZFsdHWrTXSZAxuhiLtjDTjGn+r1388al1GggGJok4LC5uy/Lr5kDamq39HGb+xzGOw== X-Microsoft-Exchange-Diagnostics: 1; BLUPR07MB193; 31:jjYW5TEOEE1BavyotdCi2g62+8Rs3AggDPCLKarANE/Z5Gd3Wq4wx8wzJXbMvy/410zqoSIg6XMEnEEpGkASI/NUeoSiA/a+ol2yJIIabaNBvSyWw4Mc57suDjbFqE/PJDbW/RF644Pd/lc707WWs/a/uBRHO2eZGjYVsaxkW0xEaAU2MrSDbulW8o7URuEoOZkyLh37DoQoz4gZO+k1VsSXbbGGciTPkODWCZ573gmRfe7RwepotJuyd9dXCitACvQxbYOqKmujqt9oTKsClg==; 

This patch adds support for ELS requests that are handled by the firmware
for offloaded sessions.

Signed-off-by: Nilesh Javali
Signed-off-by: Manish Rangankar
Signed-off-by: Saurav Kashyap
Signed-off-by: Chad Dupuis
Reviewed-by: Hannes Reinecke
---
 drivers/scsi/qedf/qedf_els.c | 984 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 984 insertions(+)
 create mode 100644 drivers/scsi/qedf/qedf_els.c

diff --git a/drivers/scsi/qedf/qedf_els.c b/drivers/scsi/qedf/qedf_els.c
new file mode 100644
index 0000000..98a80b3
--- /dev/null
+++ b/drivers/scsi/qedf/qedf_els.c
@@ -0,0 +1,984 @@
+/*
+ * QLogic FCoE Offload Driver
+ * Copyright (c) 2016 Cavium Inc.
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+#include "qedf.h"
+
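+/*
+ * ELS requests (RRQ, ADISC, SRR, REC) for offloaded sessions are posted
+ * to the firmware as middle-path tasks.  The response is taken from the
+ * completion queue entry and handed to a per-request callback.
+ */
+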
+/* It's assumed that the lock is held when calling this function. */
+static int qedf_initiate_els(struct qedf_rport *fcport, unsigned int op,
+	void *data, uint32_t data_len,
+	void (*cb_func)(struct qedf_els_cb_arg *cb_arg),
+	struct qedf_els_cb_arg *cb_arg, uint32_t timer_msec)
+{
+	struct qedf_ctx *qedf = fcport->qedf;
+	struct fc_lport *lport = qedf->lport;
+	struct qedf_ioreq *els_req;
+	struct qedf_mp_req *mp_req;
+	struct fc_frame_header *fc_hdr;
+	struct fcoe_task_context *task;
+	int rc = 0;
+	uint32_t did, sid;
+	uint16_t xid;
+	uint32_t start_time = jiffies / HZ;
+	uint32_t current_time;
+
+	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "Sending ELS\n");
+
+	rc = fc_remote_port_chkready(fcport->rport);
+	if (rc) {
+		QEDF_ERR(&(qedf->dbg_ctx), "els 0x%x: rport not ready\n", op);
+		rc = -EAGAIN;
+		goto els_err;
+	}
+	if (lport->state != LPORT_ST_READY || !(lport->link_up)) {
+		QEDF_ERR(&(qedf->dbg_ctx), "els 0x%x: link is not ready\n",
+		    op);
+		rc = -EAGAIN;
+		goto els_err;
+	}
+
+	if (!(test_bit(QEDF_RPORT_SESSION_READY, &fcport->flags))) {
+		QEDF_ERR(&(qedf->dbg_ctx), "els 0x%x: fcport not ready\n", op);
+		rc = -EINVAL;
+		goto els_err;
+	}
+
+retry_els:
+	els_req = qedf_alloc_cmd(fcport, QEDF_ELS);
+	if (!els_req) {
+		current_time = jiffies / HZ;
+		if ((current_time - start_time) > 10) {
+			QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
+			    "els: Failed els 0x%x\n", op);
+			rc = -ENOMEM;
+			goto els_err;
+		}
+		/* mdelay() takes milliseconds; wait 20ms before retrying */
+		mdelay(20);
+		goto retry_els;
+	}
+
+	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "initiate_els els_req = "
+	    "0x%p cb_arg = %p xid = %x\n", els_req, cb_arg, els_req->xid);
+	els_req->sc_cmd = NULL;
+	els_req->cmd_type = QEDF_ELS;
+	els_req->fcport = fcport;
+	els_req->cb_func = cb_func;
+	cb_arg->io_req = els_req;
+	cb_arg->op = op;
+	els_req->cb_arg = cb_arg;
+	els_req->data_xfer_len = data_len;
+
+	/* Record which cpu this request is associated with */
+	els_req->cpu = smp_processor_id();
+	qedf_inc_percpu_requests(els_req->cpu);
+
+	mp_req = (struct qedf_mp_req *)&(els_req->mp_req);
+	rc = qedf_init_mp_req(els_req);
+	if (rc) {
+		QEDF_ERR(&(qedf->dbg_ctx), "ELS MP request init failed\n");
+		kref_put(&els_req->refcount, qedf_release_cmd);
+		goto els_err;
+	} else {
+		rc = 0;
+	}
+
+	/* Fill ELS Payload */
+	if ((op >= ELS_LS_RJT) && (op <= ELS_AUTH_ELS)) {
+		memcpy(mp_req->req_buf, data, data_len);
+	} else {
+		QEDF_ERR(&(qedf->dbg_ctx), "Invalid ELS op 0x%x\n", op);
+		els_req->cb_func = NULL;
+		els_req->cb_arg = NULL;
+		kref_put(&els_req->refcount, qedf_release_cmd);
+		rc = -EINVAL;
+	}
+
+	if (rc)
+		goto els_err;
+
+	/* Fill FC header */
+	fc_hdr = &(mp_req->req_fc_hdr);
+
+	did = fcport->rdata->ids.port_id;
+	sid = fcport->sid;
+
+	__fc_fill_fc_hdr(fc_hdr, FC_RCTL_ELS_REQ, sid, did,
+	    FC_TYPE_ELS, FC_FC_FIRST_SEQ | FC_FC_END_SEQ |
+	    FC_FC_SEQ_INIT, 0);
+
+	/* Obtain exchange id */
+	xid = els_req->xid;
+
+	/* Initialize task context for this IO request */
+	task = qedf_get_task_mem(&qedf->tasks, xid);
+	qedf_init_mp_task(els_req, task);
+
+	/* Put timer on original I/O request */
+	if (timer_msec)
+		qedf_cmd_timer_set(qedf, els_req, timer_msec);
+
+	qedf_add_to_sq(fcport, xid, 0, FCOE_TASK_TYPE_MIDPATH, 0);
+
+	/* Ring doorbell */
+	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "Ringing doorbell for ELS "
+	    "req\n");
+	qedf_ring_doorbell(fcport);
+els_err:
+	return rc;
+}
+
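+/*
+ * Process a midpath (ELS) completion from the firmware: stop the ELS
+ * timer, record the response length from the CQE and invoke the
+ * request's completion callback before dropping the command reference.
+ */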
QEDF_LOG_ELS, "Entered with xid = 0x%x" + " cmd_type = %d.\n", els_req->xid, els_req->cmd_type); + + /* Kill the ELS timer */ + cancel_delayed_work(&els_req->timeout_work); + + xid = els_req->xid; + task_ctx = qedf_get_task_mem(&qedf->tasks, xid); + sc_cmd = els_req->sc_cmd; + + /* Get ELS response length from CQE */ + mp_info = &cqe->cqe_info.midpath_info; + els_req->mp_req.resp_len = mp_info->data_placement_size; + + /* Parse ELS response */ + if ((els_req->cb_func) && (els_req->cb_arg)) { + els_req->cb_func(els_req->cb_arg); + els_req->cb_arg = NULL; + } + + kref_put(&els_req->refcount, qedf_release_cmd); +} + +static void qedf_rrq_compl(struct qedf_els_cb_arg *cb_arg) +{ + struct qedf_ioreq *orig_io_req; + struct qedf_ioreq *rrq_req; + struct qedf_ctx *qedf; + int refcount; + + rrq_req = cb_arg->io_req; + qedf = rrq_req->fcport->qedf; + + QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "Entered.\n"); + + orig_io_req = cb_arg->aborted_io_req; + + if (!orig_io_req) + goto out_free; + + if (rrq_req->event != QEDF_IOREQ_EV_ELS_TMO && + rrq_req->event != QEDF_IOREQ_EV_ELS_ERR_DETECT) + cancel_delayed_work_sync(&orig_io_req->timeout_work); + + refcount = atomic_read(&orig_io_req->refcount.refcount); + QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "rrq_compl: orig io = %p," + " orig xid = 0x%x, rrq_xid = 0x%x, refcount=%d\n", + orig_io_req, orig_io_req->xid, rrq_req->xid, refcount); + + /* This should return the aborted io_req to the command pool */ + if (orig_io_req) + kref_put(&orig_io_req->refcount, qedf_release_cmd); + +out_free: + kfree(cb_arg); +} + +/* Assumes kref is already held by caller */ +int qedf_send_rrq(struct qedf_ioreq *aborted_io_req) +{ + + struct fc_els_rrq rrq; + struct qedf_rport *fcport; + struct fc_lport *lport; + struct qedf_els_cb_arg *cb_arg = NULL; + struct qedf_ctx *qedf; + uint32_t sid; + uint32_t r_a_tov; + int rc; + + if (!aborted_io_req) { + QEDF_ERR(NULL, "abort_io_req is NULL.\n"); + return -EINVAL; + } + + fcport = aborted_io_req->fcport; + + /* Check that fcport is still offloaded */ + if (fcport->conn_id == -1) { + QEDF_ERR(NULL, "fcport is no longer offloaded.\n"); + return -EINVAL; + } + + if (!fcport->qedf) { + QEDF_ERR(NULL, "fcport->qedf is NULL.\n"); + return -EINVAL; + } + + qedf = fcport->qedf; + lport = qedf->lport; + sid = fcport->sid; + r_a_tov = lport->r_a_tov; + + QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "Sending RRQ orig " + "io = %p, orig_xid = 0x%x\n", aborted_io_req, + aborted_io_req->xid); + memset(&rrq, 0, sizeof(rrq)); + + cb_arg = kzalloc(sizeof(struct qedf_els_cb_arg), GFP_NOIO); + if (!cb_arg) { + QEDF_ERR(&(qedf->dbg_ctx), "Unable to allocate cb_arg for " + "RRQ\n"); + rc = -ENOMEM; + goto rrq_err; + } + + cb_arg->aborted_io_req = aborted_io_req; + + rrq.rrq_cmd = ELS_RRQ; + hton24(rrq.rrq_s_id, sid); + rrq.rrq_ox_id = htons(aborted_io_req->xid); + rrq.rrq_rx_id = + htons(aborted_io_req->task->tstorm_st_context.read_write.rx_id); + + rc = qedf_initiate_els(fcport, ELS_RRQ, &rrq, sizeof(rrq), + qedf_rrq_compl, cb_arg, r_a_tov); + +rrq_err: + if (rc) { + QEDF_ERR(&(qedf->dbg_ctx), "RRQ failed - release orig io " + "req 0x%x\n", aborted_io_req->xid); + kfree(cb_arg); + kref_put(&aborted_io_req->refcount, qedf_release_cmd); + } + return rc; +} + +static void qedf_process_l2_frame_compl(struct qedf_rport *fcport, + unsigned char *buf, + u32 frame_len, u16 l2_oxid) +{ + struct fc_lport *lport = fcport->qedf->lport; + struct fc_frame_header *fh; + struct fc_frame *fp; + u32 payload_len; + u32 crc; + + payload_len = frame_len - sizeof(struct 
+static void qedf_process_l2_frame_compl(struct qedf_rport *fcport,
+	unsigned char *buf,
+	u32 frame_len, u16 l2_oxid)
+{
+	struct fc_lport *lport = fcport->qedf->lport;
+	struct fc_frame_header *fh;
+	struct fc_frame *fp;
+	u32 payload_len;
+	u32 crc;
+
+	payload_len = frame_len - sizeof(struct fc_frame_header);
+
+	fp = fc_frame_alloc(lport, payload_len);
+	if (!fp) {
+		QEDF_ERR(&(fcport->qedf->dbg_ctx),
+		    "fc_frame_alloc failure.\n");
+		return;
+	}
+
+	/* Copy FC Frame header and payload into the frame */
+	fh = (struct fc_frame_header *)fc_frame_header_get(fp);
+	memcpy(fh, buf, frame_len);
+
+	/* Set the OXID we return to what libfc used */
+	if (l2_oxid != FC_XID_UNKNOWN)
+		fh->fh_ox_id = htons(l2_oxid);
+
+	/* Setup header fields */
+	fh->fh_r_ctl = FC_RCTL_ELS_REP;
+	fh->fh_type = FC_TYPE_ELS;
+	/* Last sequence, end sequence */
+	fh->fh_f_ctl[0] = 0x98;
+	hton24(fh->fh_d_id, lport->port_id);
+	hton24(fh->fh_s_id, fcport->rdata->ids.port_id);
+	fh->fh_rx_id = 0xffff;
+
+	/* Set frame attributes */
+	crc = fcoe_fc_crc(fp);
+	fc_frame_init(fp);
+	fr_dev(fp) = lport;
+	fr_sof(fp) = FC_SOF_I3;
+	fr_eof(fp) = FC_EOF_T;
+	fr_crc(fp) = cpu_to_le32(~crc);
+
+	/* Send completed request to libfc */
+	fc_exch_recv(lport, fp);
+}
+
+/*
+ * In instances where an ELS command times out we may need to restart the
+ * rport by logging out and then logging back in.
+ */
+void qedf_restart_rport(struct qedf_rport *fcport)
+{
+	struct fc_lport *lport;
+	struct fc_rport_priv *rdata;
+	u32 port_id;
+
+	if (!fcport)
+		return;
+
+	rdata = fcport->rdata;
+	if (rdata) {
+		lport = fcport->qedf->lport;
+		port_id = rdata->ids.port_id;
+		QEDF_ERR(&(fcport->qedf->dbg_ctx),
+		    "LOGO port_id=%x.\n", port_id);
+		mutex_lock(&lport->disc.disc_mutex);
+		fc_rport_logoff(rdata);
+		/* Recreate the rport and log back in */
+		rdata = fc_rport_create(lport, port_id);
+		if (rdata)
+			fc_rport_login(rdata);
+		mutex_unlock(&lport->disc.disc_mutex);
+	}
+}
+
+static void qedf_l2_els_compl(struct qedf_els_cb_arg *cb_arg)
+{
+	struct qedf_ioreq *els_req;
+	struct qedf_rport *fcport;
+	struct qedf_mp_req *mp_req;
+	struct fc_frame_header *fc_hdr;
+	unsigned char *buf;
+	void *resp_buf;
+	u32 resp_len, hdr_len;
+	u16 l2_oxid;
+	int frame_len;
+
+	l2_oxid = cb_arg->l2_oxid;
+	els_req = cb_arg->io_req;
+
+	if (!els_req) {
+		QEDF_ERR(NULL, "els_req is NULL.\n");
+		goto free_arg;
+	}
+
+	/*
+	 * If we are flushing the command just free the cb_arg as none of the
+	 * response data will be valid.
+	 */
+	if (els_req->event == QEDF_IOREQ_EV_ELS_FLUSH)
+		goto free_arg;
+
+	fcport = els_req->fcport;
+	mp_req = &(els_req->mp_req);
+	fc_hdr = &(mp_req->resp_fc_hdr);
+	resp_len = mp_req->resp_len;
+	resp_buf = mp_req->resp_buf;
+
+	/*
+	 * If a middle path ELS command times out, don't try to return
+	 * the command but rather do any internal cleanup and then let libfc
+	 * time out the command and clean up its internal resources.
+	 */
+	if (els_req->event == QEDF_IOREQ_EV_ELS_TMO) {
+		/*
+		 * If ADISC times out, libfc will timeout the exchange and then
+		 * try to send a PLOGI which will timeout since the session is
+		 * still offloaded. Force libfc to logout the session which
+		 * will upload the connection and allow the PLOGI response to
+		 * flow over the LL2 path.
+		 */
+		if (cb_arg->op == ELS_ADISC)
+			qedf_restart_rport(fcport);
+		return;
+	}
+
+	buf = kzalloc(QEDF_PAGE_SIZE, GFP_ATOMIC);
+	if (!buf) {
+		QEDF_ERR(&(fcport->qedf->dbg_ctx),
+		    "Unable to alloc mp buf.\n");
+		goto free_arg;
+	}
+	hdr_len = sizeof(*fc_hdr);
+	if (hdr_len + resp_len > QEDF_PAGE_SIZE) {
+		QEDF_ERR(&(fcport->qedf->dbg_ctx), "resp_len is "
+		    "beyond page size.\n");
+		goto free_buf;
+	}
+	memcpy(buf, fc_hdr, hdr_len);
+	memcpy(buf + hdr_len, resp_buf, resp_len);
+	frame_len = hdr_len + resp_len;
+
+	QEDF_INFO(&(fcport->qedf->dbg_ctx), QEDF_LOG_ELS,
+	    "Completing OX_ID 0x%x back to libfc.\n", l2_oxid);
+	qedf_process_l2_frame_compl(fcport, buf, frame_len, l2_oxid);
+
+free_buf:
+	kfree(buf);
+free_arg:
+	kfree(cb_arg);
+}
+
+int qedf_send_adisc(struct qedf_rport *fcport, struct fc_frame *fp)
+{
+	struct fc_els_adisc *adisc;
+	struct fc_frame_header *fh;
+	struct fc_lport *lport = fcport->qedf->lport;
+	struct qedf_els_cb_arg *cb_arg = NULL;
+	struct qedf_ctx *qedf;
+	uint32_t r_a_tov = lport->r_a_tov;
+	int rc;
+
+	qedf = fcport->qedf;
+	fh = fc_frame_header_get(fp);
+
+	cb_arg = kzalloc(sizeof(struct qedf_els_cb_arg), GFP_NOIO);
+	if (!cb_arg) {
+		QEDF_ERR(&(qedf->dbg_ctx), "Unable to allocate cb_arg for "
+		    "ADISC\n");
+		rc = -ENOMEM;
+		goto adisc_err;
+	}
+	cb_arg->l2_oxid = ntohs(fh->fh_ox_id);
+
+	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
+	    "Sending ADISC ox_id=0x%x.\n", cb_arg->l2_oxid);
+
+	adisc = fc_frame_payload_get(fp, sizeof(*adisc));
+
+	rc = qedf_initiate_els(fcport, ELS_ADISC, adisc, sizeof(*adisc),
+	    qedf_l2_els_compl, cb_arg, r_a_tov);
+
+adisc_err:
+	if (rc) {
+		QEDF_ERR(&(qedf->dbg_ctx), "ADISC failed.\n");
+		kfree(cb_arg);
+	}
+	return rc;
+}
+
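+/*
+ * SRR (Sequence Retransmission Request) completion handler: LS_ACC means
+ * the request was accepted and recovery proceeds; on LS_RJT the original
+ * I/O is aborted via ABTS.
+ */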
+static void qedf_srr_compl(struct qedf_els_cb_arg *cb_arg)
+{
+	struct qedf_ioreq *orig_io_req;
+	struct qedf_ioreq *srr_req;
+	struct qedf_mp_req *mp_req;
+	struct fc_frame_header *fc_hdr, *fh;
+	struct fc_frame *fp;
+	unsigned char *buf;
+	void *resp_buf;
+	u32 resp_len, hdr_len;
+	struct fc_lport *lport;
+	struct qedf_ctx *qedf;
+	int refcount;
+	u8 opcode;
+
+	srr_req = cb_arg->io_req;
+	qedf = srr_req->fcport->qedf;
+	lport = qedf->lport;
+
+	orig_io_req = cb_arg->aborted_io_req;
+
+	if (!orig_io_req) {
+		/* No original request to drop a reference on */
+		kfree(cb_arg);
+		return;
+	}
+
+	clear_bit(QEDF_CMD_SRR_SENT, &orig_io_req->flags);
+
+	if (srr_req->event != QEDF_IOREQ_EV_ELS_TMO &&
+	    srr_req->event != QEDF_IOREQ_EV_ELS_ERR_DETECT)
+		cancel_delayed_work_sync(&orig_io_req->timeout_work);
+
+	refcount = atomic_read(&orig_io_req->refcount.refcount);
+	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "Entered: orig_io=%p,"
+	    " orig_io_xid=0x%x, rec_xid=0x%x, refcount=%d\n",
+	    orig_io_req, orig_io_req->xid, srr_req->xid, refcount);
+
+	/* If a SRR times out, simply free resources */
+	if (srr_req->event == QEDF_IOREQ_EV_ELS_TMO)
+		goto out_free;
+
+	/* Normalize response data into struct fc_frame */
+	mp_req = &(srr_req->mp_req);
+	fc_hdr = &(mp_req->resp_fc_hdr);
+	resp_len = mp_req->resp_len;
+	resp_buf = mp_req->resp_buf;
+	hdr_len = sizeof(*fc_hdr);
+
+	buf = kzalloc(QEDF_PAGE_SIZE, GFP_ATOMIC);
+	if (!buf) {
+		QEDF_ERR(&(qedf->dbg_ctx),
+		    "Unable to alloc mp buf.\n");
+		goto out_free;
+	}
+
+	memcpy(buf, fc_hdr, hdr_len);
+	memcpy(buf + hdr_len, resp_buf, resp_len);
+
+	fp = fc_frame_alloc(lport, resp_len);
+	if (!fp) {
+		QEDF_ERR(&(qedf->dbg_ctx),
+		    "fc_frame_alloc failure.\n");
+		goto out_buf;
+	}
+
+	/* Copy FC Frame header and payload into the frame */
+	fh = (struct fc_frame_header *)fc_frame_header_get(fp);
+	memcpy(fh, buf, hdr_len + resp_len);
+
+	opcode = fc_frame_payload_op(fp);
+	switch (opcode) {
+	case ELS_LS_ACC:
+		QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
+		    "SRR success.\n");
+		break;
+	case ELS_LS_RJT:
+		QEDF_INFO(&qedf->dbg_ctx, QEDF_LOG_ELS,
+		    "SRR rejected.\n");
+		qedf_initiate_abts(orig_io_req, true);
+		break;
+	}
+
+	fc_frame_free(fp);
+out_buf:
+	kfree(buf);
+out_free:
+	/* Put reference for original command since SRR completed */
+	kref_put(&orig_io_req->refcount, qedf_release_cmd);
+	kfree(cb_arg);
+}
+
+static int qedf_send_srr(struct qedf_ioreq *orig_io_req, u32 offset, u8 r_ctl)
+{
+	struct fcp_srr srr;
+	struct qedf_ctx *qedf;
+	struct qedf_rport *fcport;
+	struct fc_lport *lport;
+	struct qedf_els_cb_arg *cb_arg = NULL;
+	u32 sid, r_a_tov;
+	int rc;
+
+	if (!orig_io_req) {
+		QEDF_ERR(NULL, "orig_io_req is NULL.\n");
+		return -EINVAL;
+	}
+
+	fcport = orig_io_req->fcport;
+
+	/* Check that fcport is still offloaded */
+	if (fcport->conn_id == -1) {
+		QEDF_ERR(NULL, "fcport is no longer offloaded.\n");
+		return -EINVAL;
+	}
+
+	if (!fcport->qedf) {
+		QEDF_ERR(NULL, "fcport->qedf is NULL.\n");
+		return -EINVAL;
+	}
+
+	/* Take reference until SRR command completion */
+	kref_get(&orig_io_req->refcount);
+
+	qedf = fcport->qedf;
+	lport = qedf->lport;
+	sid = fcport->sid;
+	r_a_tov = lport->r_a_tov;
+
+	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "Sending SRR orig_io=%p, "
+	    "orig_xid=0x%x\n", orig_io_req, orig_io_req->xid);
+	memset(&srr, 0, sizeof(srr));
+
+	cb_arg = kzalloc(sizeof(struct qedf_els_cb_arg), GFP_NOIO);
+	if (!cb_arg) {
+		QEDF_ERR(&(qedf->dbg_ctx), "Unable to allocate cb_arg for "
+		    "SRR\n");
+		rc = -ENOMEM;
+		goto srr_err;
+	}
+
+	cb_arg->aborted_io_req = orig_io_req;
+
+	srr.srr_op = ELS_SRR;
+	srr.srr_ox_id = htons(orig_io_req->xid);
+	srr.srr_rx_id = htons(orig_io_req->rx_id);
+	srr.srr_rel_off = htonl(offset);
+	srr.srr_r_ctl = r_ctl;
+
+	rc = qedf_initiate_els(fcport, ELS_SRR, &srr, sizeof(srr),
+	    qedf_srr_compl, cb_arg, r_a_tov);
+
+srr_err:
+	if (rc) {
+		QEDF_ERR(&(qedf->dbg_ctx), "SRR failed - release orig_io_req"
+		    "=0x%x\n", orig_io_req->xid);
+		kfree(cb_arg);
+		/* If we fail to queue SRR, send ABTS to orig_io */
+		qedf_initiate_abts(orig_io_req, true);
+		kref_put(&orig_io_req->refcount, qedf_release_cmd);
+	} else
+		/* Tell other threads that SRR is in progress */
+		set_bit(QEDF_CMD_SRR_SENT, &orig_io_req->flags);
+
+	return rc;
+}
+
+static void qedf_initiate_seq_cleanup(struct qedf_ioreq *orig_io_req,
+	u32 offset, u8 r_ctl)
+{
+	struct qedf_rport *fcport;
+	unsigned long flags;
+	struct qedf_els_cb_arg *cb_arg;
+
+	fcport = orig_io_req->fcport;
+
+	QEDF_INFO(&(fcport->qedf->dbg_ctx), QEDF_LOG_ELS,
+	    "Doing sequence cleanup for xid=0x%x offset=%u.\n",
+	    orig_io_req->xid, offset);
+
+	cb_arg = kzalloc(sizeof(struct qedf_els_cb_arg), GFP_NOIO);
+	if (!cb_arg) {
+		QEDF_ERR(&(fcport->qedf->dbg_ctx), "Unable to allocate cb_arg "
+		    "for sequence cleanup\n");
+		return;
+	}
+
+	/* Get reference for cleanup request */
+	kref_get(&orig_io_req->refcount);
+
+	orig_io_req->cmd_type = QEDF_SEQ_CLEANUP;
+	cb_arg->offset = offset;
+	cb_arg->r_ctl = r_ctl;
+	orig_io_req->cb_arg = cb_arg;
+
+	qedf_cmd_timer_set(fcport->qedf, orig_io_req,
+	    QEDF_CLEANUP_TIMEOUT * HZ);
+
+	spin_lock_irqsave(&fcport->rport_lock, flags);
+
+	qedf_add_to_sq(fcport, orig_io_req->xid, 0,
+	    FCOE_TASK_TYPE_SEQUENCE_CLEANUP, offset);
+	qedf_ring_doorbell(fcport);
+
+	spin_unlock_irqrestore(&fcport->rport_lock, flags);
+}
+
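+/*
+ * Completion handler for a firmware sequence cleanup task: once the
+ * sequence has been cleaned up, send the SRR for the offset and R_CTL
+ * recorded in the callback argument.
+ */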
+void qedf_process_seq_cleanup_compl(struct qedf_ctx *qedf,
+	struct fcoe_cqe *cqe, struct qedf_ioreq *io_req)
+{
+	int rc;
+	struct qedf_els_cb_arg *cb_arg;
+
+	cb_arg = io_req->cb_arg;
+
+	/* If we timed out just free resources */
+	if (io_req->event == QEDF_IOREQ_EV_ELS_TMO || !cqe)
+		goto free;
+
+	/* Kill the timer we put on the request */
+	cancel_delayed_work_sync(&io_req->timeout_work);
+
+	rc = qedf_send_srr(io_req, cb_arg->offset, cb_arg->r_ctl);
+	if (rc)
+		QEDF_ERR(&(qedf->dbg_ctx), "Unable to send SRR, I/O will "
+		    "abort, xid=0x%x.\n", io_req->xid);
+free:
+	kfree(cb_arg);
+	kref_put(&io_req->refcount, qedf_release_cmd);
+}
+
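+/*
+ * Reissue a SCSI command that was lost on the wire on a new exchange.
+ * The original sc_cmd is moved to a freshly allocated io_req so the
+ * command is not completed back to the upper layers, and the original
+ * exchange is aborted via ABTS once the new request is posted.
+ */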
+static bool qedf_requeue_io_req(struct qedf_ioreq *orig_io_req)
+{
+	struct qedf_rport *fcport;
+	struct qedf_ioreq *new_io_req;
+	unsigned long flags;
+	bool rc = false;
+
+	fcport = orig_io_req->fcport;
+	if (!fcport) {
+		QEDF_ERR(NULL, "fcport is NULL.\n");
+		goto out;
+	}
+
+	if (!orig_io_req->sc_cmd) {
+		QEDF_ERR(&(fcport->qedf->dbg_ctx), "sc_cmd is NULL for "
+		    "xid=0x%x.\n", orig_io_req->xid);
+		goto out;
+	}
+
+	new_io_req = qedf_alloc_cmd(fcport, QEDF_SCSI_CMD);
+	if (!new_io_req) {
+		QEDF_ERR(&(fcport->qedf->dbg_ctx), "Could not allocate new "
+		    "io_req.\n");
+		goto out;
+	}
+
+	new_io_req->sc_cmd = orig_io_req->sc_cmd;
+
+	/*
+	 * This keeps the sc_cmd struct from being returned to the tape
+	 * driver and being requeued twice. We do need to put a reference
+	 * for the original I/O request since we will not do a SCSI completion
+	 * for it.
+	 */
+	orig_io_req->sc_cmd = NULL;
+	kref_put(&orig_io_req->refcount, qedf_release_cmd);
+
+	spin_lock_irqsave(&fcport->rport_lock, flags);
+
+	/* kref for new command released in qedf_post_io_req on error */
+	if (qedf_post_io_req(fcport, new_io_req)) {
+		QEDF_ERR(&(fcport->qedf->dbg_ctx), "Unable to post io_req\n");
+		/* Return SQE to pool */
+		atomic_inc(&fcport->free_sqes);
+	} else {
+		QEDF_INFO(&(fcport->qedf->dbg_ctx), QEDF_LOG_ELS,
+		    "Reissued SCSI command from orig_xid=0x%x on "
+		    "new_xid=0x%x.\n", orig_io_req->xid, new_io_req->xid);
+		/*
+		 * Abort the original I/O but do not return SCSI command as
+		 * it has been reissued on another OX_ID.
+		 */
+		spin_unlock_irqrestore(&fcport->rport_lock, flags);
+		qedf_initiate_abts(orig_io_req, false);
+		goto out;
+	}
+
+	spin_unlock_irqrestore(&fcport->rport_lock, flags);
+out:
+	return rc;
+}
+
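+/*
+ * REC completion handler: uses the LS_RJT reason codes or the exchange
+ * status bits in the LS_ACC to decide whether to reissue the command,
+ * send an SRR, or initiate a sequence cleanup for the original I/O.
+ */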
+static void qedf_rec_compl(struct qedf_els_cb_arg *cb_arg)
+{
+	struct qedf_ioreq *orig_io_req;
+	struct qedf_ioreq *rec_req;
+	struct qedf_mp_req *mp_req;
+	struct fc_frame_header *fc_hdr, *fh;
+	struct fc_frame *fp;
+	unsigned char *buf;
+	void *resp_buf;
+	u32 resp_len, hdr_len;
+	struct fc_lport *lport;
+	struct qedf_ctx *qedf;
+	int refcount;
+	enum fc_rctl r_ctl;
+	struct fc_els_ls_rjt *rjt;
+	struct fc_els_rec_acc *acc;
+	u8 opcode;
+	u32 offset, e_stat;
+	struct scsi_cmnd *sc_cmd;
+	bool srr_needed = false;
+
+	rec_req = cb_arg->io_req;
+	qedf = rec_req->fcport->qedf;
+	lport = qedf->lport;
+
+	orig_io_req = cb_arg->aborted_io_req;
+
+	if (!orig_io_req) {
+		/* No original request to drop a reference on */
+		kfree(cb_arg);
+		return;
+	}
+
+	if (rec_req->event != QEDF_IOREQ_EV_ELS_TMO &&
+	    rec_req->event != QEDF_IOREQ_EV_ELS_ERR_DETECT)
+		cancel_delayed_work_sync(&orig_io_req->timeout_work);
+
+	refcount = atomic_read(&orig_io_req->refcount.refcount);
+	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "Entered: orig_io=%p,"
+	    " orig_io_xid=0x%x, rec_xid=0x%x, refcount=%d\n",
+	    orig_io_req, orig_io_req->xid, rec_req->xid, refcount);
+
+	/* If a REC times out, free resources */
+	if (rec_req->event == QEDF_IOREQ_EV_ELS_TMO)
+		goto out_free;
+
+	/* Normalize response data into struct fc_frame */
+	mp_req = &(rec_req->mp_req);
+	fc_hdr = &(mp_req->resp_fc_hdr);
+	resp_len = mp_req->resp_len;
+	acc = resp_buf = mp_req->resp_buf;
+	hdr_len = sizeof(*fc_hdr);
+
+	buf = kzalloc(QEDF_PAGE_SIZE, GFP_ATOMIC);
+	if (!buf) {
+		QEDF_ERR(&(qedf->dbg_ctx),
+		    "Unable to alloc mp buf.\n");
+		goto out_free;
+	}
+
+	memcpy(buf, fc_hdr, hdr_len);
+	memcpy(buf + hdr_len, resp_buf, resp_len);
+
+	fp = fc_frame_alloc(lport, resp_len);
+	if (!fp) {
+		QEDF_ERR(&(qedf->dbg_ctx),
+		    "fc_frame_alloc failure.\n");
+		goto out_buf;
+	}
+
+	/* Copy FC Frame header and payload into the frame */
+	fh = (struct fc_frame_header *)fc_frame_header_get(fp);
+	memcpy(fh, buf, hdr_len + resp_len);
+
+	opcode = fc_frame_payload_op(fp);
+
+	if (opcode == ELS_LS_RJT) {
+		rjt = fc_frame_payload_get(fp, sizeof(*rjt));
+		QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
+		    "Received LS_RJT for REC: er_reason=0x%x, "
+		    "er_explan=0x%x.\n", rjt->er_reason, rjt->er_explan);
+		/*
+		 * The following response(s) mean that we need to reissue the
+		 * request on another exchange. We need to do this without
+		 * informing the upper layers lest it cause an application
+		 * error.
+		 */
+		if ((rjt->er_reason == ELS_RJT_LOGIC ||
+		    rjt->er_reason == ELS_RJT_UNAB) &&
+		    rjt->er_explan == ELS_EXPL_OXID_RXID) {
+			QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
+			    "Handle CMD LOST case.\n");
+			qedf_requeue_io_req(orig_io_req);
+		}
+	} else if (opcode == ELS_LS_ACC) {
+		offset = ntohl(acc->reca_fc4value);
+		e_stat = ntohl(acc->reca_e_stat);
+		QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
+		    "Received LS_ACC for REC: offset=0x%x, e_stat=0x%x.\n",
+		    offset, e_stat);
+		if (e_stat & ESB_ST_SEQ_INIT) {
+			QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
+			    "Target has the seq init\n");
+			goto out_free_frame;
+		}
+		sc_cmd = orig_io_req->sc_cmd;
+		if (!sc_cmd) {
+			QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
+			    "sc_cmd is NULL for xid=0x%x.\n",
+			    orig_io_req->xid);
+			goto out_free_frame;
+		}
+		/* SCSI write case */
+		if (sc_cmd->sc_data_direction == DMA_TO_DEVICE) {
+			if (offset == orig_io_req->data_xfer_len) {
+				QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
+				    "WRITE - response lost.\n");
+				r_ctl = FC_RCTL_DD_CMD_STATUS;
+				srr_needed = true;
+				offset = 0;
+			} else {
+				QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
+				    "WRITE - XFER_RDY/DATA lost.\n");
+				r_ctl = FC_RCTL_DD_DATA_DESC;
+				/* Use data from warning CQE instead of REC */
+				offset = orig_io_req->tx_buf_off;
+			}
+		/* SCSI read case */
+		} else {
+			if (orig_io_req->rx_buf_off ==
+			    orig_io_req->data_xfer_len) {
+				QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
+				    "READ - response lost.\n");
+				srr_needed = true;
+				r_ctl = FC_RCTL_DD_CMD_STATUS;
+				offset = 0;
+			} else {
+				QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS,
+				    "READ - DATA lost.\n");
+				/*
+				 * For read case we always set the offset to 0
+				 * for sequence recovery task.
+				 */
+				offset = 0;
+				r_ctl = FC_RCTL_DD_SOL_DATA;
+			}
+		}
+
+		if (srr_needed)
+			qedf_send_srr(orig_io_req, offset, r_ctl);
+		else
+			qedf_initiate_seq_cleanup(orig_io_req, offset, r_ctl);
+	}
+
+out_free_frame:
+	fc_frame_free(fp);
+out_buf:
+	kfree(buf);
+out_free:
+	/* Put reference for original command since REC completed */
+	kref_put(&orig_io_req->refcount, qedf_release_cmd);
+	kfree(cb_arg);
+}
+
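+/*
+ * Send a REC (Read Exchange Concise) to the target to determine the
+ * state of an exchange that the firmware has flagged in error.
+ */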
+/* Assumes kref is already held by caller */
+int qedf_send_rec(struct qedf_ioreq *orig_io_req)
+{
+	struct fc_els_rec rec;
+	struct qedf_rport *fcport;
+	struct fc_lport *lport;
+	struct qedf_els_cb_arg *cb_arg = NULL;
+	struct qedf_ctx *qedf;
+	uint32_t sid;
+	uint32_t r_a_tov;
+	int rc;
+
+	if (!orig_io_req) {
+		QEDF_ERR(NULL, "orig_io_req is NULL.\n");
+		return -EINVAL;
+	}
+
+	fcport = orig_io_req->fcport;
+
+	/* Check that fcport is still offloaded */
+	if (fcport->conn_id == -1) {
+		QEDF_ERR(NULL, "fcport is no longer offloaded.\n");
+		return -EINVAL;
+	}
+
+	if (!fcport->qedf) {
+		QEDF_ERR(NULL, "fcport->qedf is NULL.\n");
+		return -EINVAL;
+	}
+
+	/* Take reference until REC command completion */
+	kref_get(&orig_io_req->refcount);
+
+	qedf = fcport->qedf;
+	lport = qedf->lport;
+	sid = fcport->sid;
+	r_a_tov = lport->r_a_tov;
+
+	memset(&rec, 0, sizeof(rec));
+
+	cb_arg = kzalloc(sizeof(struct qedf_els_cb_arg), GFP_NOIO);
+	if (!cb_arg) {
+		QEDF_ERR(&(qedf->dbg_ctx), "Unable to allocate cb_arg for "
+		    "REC\n");
+		rc = -ENOMEM;
+		goto rec_err;
+	}
+
+	cb_arg->aborted_io_req = orig_io_req;
+
+	rec.rec_cmd = ELS_REC;
+	hton24(rec.rec_s_id, sid);
+	rec.rec_ox_id = htons(orig_io_req->xid);
+	rec.rec_rx_id =
+	    htons(orig_io_req->task->tstorm_st_context.read_write.rx_id);
+
+	QEDF_INFO(&(qedf->dbg_ctx), QEDF_LOG_ELS, "Sending REC orig_io=%p, "
+	    "orig_xid=0x%x rx_id=0x%x\n", orig_io_req,
+	    orig_io_req->xid, rec.rec_rx_id);
+	rc = qedf_initiate_els(fcport, ELS_REC, &rec, sizeof(rec),
+	    qedf_rec_compl, cb_arg, r_a_tov);
+
+rec_err:
+	if (rc) {
+		QEDF_ERR(&(qedf->dbg_ctx), "REC failed - release orig_io_req"
+		    "=0x%x\n", orig_io_req->xid);
+		kfree(cb_arg);
+		kref_put(&orig_io_req->refcount, qedf_release_cmd);
+	}
+	return rc;
+}