From patchwork Fri Dec 21 17:33:44 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Himanshu Madhani
X-Patchwork-Id: 10740769
From: Himanshu Madhani
To: ,
CC: ,
Subject: [PATCH 1/2] qla2xxx: Add protection mask module parameters
Date: Fri, 21 Dec 2018 09:33:44 -0800
Message-ID: <20181221173345.15606-2-hmadhani@marvell.com>
X-Mailer: git-send-email 2.12.0
In-Reply-To: <20181221173345.15606-1-hmadhani@marvell.com>
References: <20181221173345.15606-1-hmadhani@marvell.com>
X-Mailing-List: linux-scsi@vger.kernel.org

From: "Martin K. Petersen"

Allow user to selectively enable/disable DIF/DIX protection capabilities mask.

Signed-off-by: Martin K. Petersen
Signed-off-by: Himanshu Madhani
---
 drivers/scsi/qla2xxx/qla_os.c | 36 ++++++++++++++++++++++++++++--------
 1 file changed, 28 insertions(+), 8 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index f0ffb0e5c113..deb923058d08 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -285,6 +285,20 @@ MODULE_PARM_DESC(qla2xuseresexchforels,
     "Reserve 1/2 of emergency exchanges for ELS.\n"
     " 0 (default): disabled");
 
+int ql2xprotmask;
+module_param(ql2xprotmask, int, 0644);
+MODULE_PARM_DESC(ql2xprotmask,
+    "Override DIF/DIX protection capabilities mask\n"
+    "Default is 0 which sets protection mask based on capabilities "
+    "reported by HBA firmware.\n");
+
+int ql2xprotguard;
+module_param(ql2xprotguard, int, 0644);
+MODULE_PARM_DESC(ql2xprotguard, "Override choice of DIX checksum\n"
+    " 0 -- Let HBA firmware decide\n"
+    " 1 -- Force T10 CRC\n"
+    " 2 -- Force IP checksum\n");
+
 /*
  * SCSI host template entry points
  */
@@ -3355,13 +3369,16 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
         "Registering for DIF/DIX type 1 and 3 protection.\n");
     if (ql2xenabledif == 1)
         prot = SHOST_DIX_TYPE0_PROTECTION;
-    scsi_host_set_prot(host,
-        prot | SHOST_DIF_TYPE1_PROTECTION
-        | SHOST_DIF_TYPE2_PROTECTION
-        | SHOST_DIF_TYPE3_PROTECTION
-        | SHOST_DIX_TYPE1_PROTECTION
-        | SHOST_DIX_TYPE2_PROTECTION
-        | SHOST_DIX_TYPE3_PROTECTION);
+    if (ql2xprotmask)
+        scsi_host_set_prot(host, ql2xprotmask);
+    else
+        scsi_host_set_prot(host,
+            prot | SHOST_DIF_TYPE1_PROTECTION
+            | SHOST_DIF_TYPE2_PROTECTION
+            | SHOST_DIF_TYPE3_PROTECTION
+            | SHOST_DIX_TYPE1_PROTECTION
+            | SHOST_DIX_TYPE2_PROTECTION
+            | SHOST_DIX_TYPE3_PROTECTION);
 
     guard = SHOST_DIX_GUARD_CRC;
 
@@ -3369,7 +3386,10 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
         (ql2xenabledif > 1 || IS_PI_DIFB_DIX0_CAPABLE(ha)))
         guard |= SHOST_DIX_GUARD_IP;
 
-    scsi_host_set_guard(host, guard);
+    if (ql2xprotguard)
+        scsi_host_set_guard(host, ql2xprotguard);
+    else
+        scsi_host_set_guard(host, guard);
 } else
     base_vha->flags.difdix_supported = 0;
 }
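A quick way to compute a value for the new ql2xprotmask parameter is to OR together the SHOST_* protection bits it is handed straight to scsi_host_set_prot(). The sketch below is illustrative only and is not part of this patch; it assumes the bit values defined for enum scsi_host_prot_capabilities in include/scsi/scsi_host.h (DIF types 1-3 in bits 0-2, DIX types 0-3 in bits 3-6).

/* Illustrative only -- not part of this patch. Assumes the SHOST_*
 * protection bits from include/scsi/scsi_host.h keep their usual values
 * (DIF types 1-3 in bits 0-2, DIX types 0-3 in bits 3-6).
 */
#include <stdio.h>

enum {
    SHOST_DIF_TYPE1_PROTECTION = 1 << 0,
    SHOST_DIF_TYPE2_PROTECTION = 1 << 1,
    SHOST_DIF_TYPE3_PROTECTION = 1 << 2,
    SHOST_DIX_TYPE0_PROTECTION = 1 << 3,
    SHOST_DIX_TYPE1_PROTECTION = 1 << 4,
    SHOST_DIX_TYPE2_PROTECTION = 1 << 5,
    SHOST_DIX_TYPE3_PROTECTION = 1 << 6,
};

int main(void)
{
    /* Example: advertise DIF type 1 plus DIX type 0 and type 1 only. */
    unsigned int mask = SHOST_DIF_TYPE1_PROTECTION |
                        SHOST_DIX_TYPE0_PROTECTION |
                        SHOST_DIX_TYPE1_PROTECTION;

    /* Prints ql2xprotmask=0x19, i.e. load with "ql2xprotmask=0x19". */
    printf("ql2xprotmask=0x%x\n", mask);
    return 0;
}

With ql2xprotmask left at 0 (the default) the probe path above still derives the mask from the HBA firmware capabilities; per the parameter description, ql2xprotguard=1 forces the T10 CRC guard and ql2xprotguard=2 forces the IP checksum guard.
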
From patchwork Fri Dec 21 17:33:45 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Himanshu Madhani
X-Patchwork-Id: 10740773
From: Himanshu Madhani
To: ,
CC: ,
Subject: [PATCH 2/2] qla2xxx: Fix DMA error when the DIF sg buffer crosses 4GB boundary
Date: Fri, 21 Dec 2018 09:33:45 -0800
Message-ID: <20181221173345.15606-3-hmadhani@marvell.com>
X-Mailer: git-send-email 2.12.0
In-Reply-To: <20181221173345.15606-1-hmadhani@marvell.com>
References: <20181221173345.15606-1-hmadhani@marvell.com>
X-Mailing-List: linux-scsi@vger.kernel.org

From: Giridhar Malavali

When an SGE buffer containing DIF information crosses a 4G boundary, it results in a DMA error. This patch fixes the issue by calculating the SGE buffer size and, if it crosses the 4G boundary, splitting the transfer into multiple SGE buffers to avoid the DMA error.

Signed-off-by: Giridhar Malavali
Signed-off-by: Himanshu Madhani
---
 drivers/scsi/qla2xxx/qla_attr.c   |  21 ++-
 drivers/scsi/qla2xxx/qla_def.h    |  28 ++++
 drivers/scsi/qla2xxx/qla_gbl.h    |   3 +-
 drivers/scsi/qla2xxx/qla_iocb.c   | 335 +++++++++++++++++++++++++++++++-------
 drivers/scsi/qla2xxx/qla_isr.c    |  11 ++
 drivers/scsi/qla2xxx/qla_os.c     | 168 ++++++++++++++++++-
 drivers/scsi/qla2xxx/qla_target.c |   2 +-
 drivers/scsi/qla2xxx/qla_target.h |   2 +
 8 files changed, 502 insertions(+), 68 deletions(-)

diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c index 00444dc79756..8b4dd72011bf 100644 --- a/drivers/scsi/qla2xxx/qla_attr.c +++ b/drivers/scsi/qla2xxx/qla_attr.c @@ -1002,7 +1002,7 @@ qla2x00_free_sysfs_attr(scsi_qla_host_t *vha, bool stop_beacon) /* Scsi_Host attributes.
*/ static ssize_t -qla2x00_drvr_version_show(struct device *dev, +qla2x00_driver_version_show(struct device *dev, struct device_attribute *attr, char *buf) { return scnprintf(buf, PAGE_SIZE, "%s\n", qla2x00_version_str); @@ -2059,7 +2059,21 @@ ql2xiniexchg_store(struct device *dev, struct device_attribute *attr, return strlen(buf); } -static DEVICE_ATTR(driver_version, S_IRUGO, qla2x00_drvr_version_show, NULL); +static ssize_t +qla2x00_dif_bundle_statistics_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + scsi_qla_host_t *vha = shost_priv(class_to_shost(dev)); + struct qla_hw_data *ha = vha->hw; + + return scnprintf(buf, PAGE_SIZE, + "cross=%llu read=%llu write=%llu kalloc=%llu dma_alloc=%llu unusable=%u\n", + ha->dif_bundle_crossed_pages, ha->dif_bundle_reads, + ha->dif_bundle_writes, ha->dif_bundle_kallocs, + ha->dif_bundle_dma_allocs, ha->pool.unusable.count); +} + +static DEVICE_ATTR(driver_version, S_IRUGO, qla2x00_driver_version_show, NULL); static DEVICE_ATTR(fw_version, S_IRUGO, qla2x00_fw_version_show, NULL); static DEVICE_ATTR(serial_num, S_IRUGO, qla2x00_serial_num_show, NULL); static DEVICE_ATTR(isp_name, S_IRUGO, qla2x00_isp_name_show, NULL); @@ -2112,6 +2126,8 @@ static DEVICE_ATTR(zio_threshold, 0644, static DEVICE_ATTR_RW(qlini_mode); static DEVICE_ATTR_RW(ql2xexchoffld); static DEVICE_ATTR_RW(ql2xiniexchg); +static DEVICE_ATTR(dif_bundle_statistics, 0444, + qla2x00_dif_bundle_statistics_show, NULL); struct device_attribute *qla2x00_host_attrs[] = { @@ -2150,6 +2166,7 @@ struct device_attribute *qla2x00_host_attrs[] = { &dev_attr_min_link_speed, &dev_attr_max_speed_sup, &dev_attr_zio_threshold, + &dev_attr_dif_bundle_statistics, NULL, /* reserve for qlini_mode */ NULL, /* reserve for ql2xiniexchg */ NULL, /* reserve for ql2xexchoffld */ diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h index 26b93c563f92..97f66b142ff2 100644 --- a/drivers/scsi/qla2xxx/qla_def.h +++ b/drivers/scsi/qla2xxx/qla_def.h @@ -314,6 +314,7 @@ struct srb_cmd { #define SRB_CRC_PROT_DMA_VALID BIT_4 /* DIF: prot DMA valid */ #define SRB_CRC_CTX_DSD_VALID BIT_5 /* DIF: dsd_list valid */ #define SRB_WAKEUP_ON_COMP BIT_6 +#define SRB_DIF_BUNDL_DMA_VALID BIT_7 /* DIF: DMA list valid */ /* To identify if a srb is of T10-CRC type. 
@sp => srb_t pointer */ #define IS_PROT_IO(sp) (sp->flags & SRB_CRC_CTX_DSD_VALID) @@ -1892,6 +1893,13 @@ struct crc_context { /* List of DMA context transfers */ struct list_head dsd_list; + /* List of DIF Bundling context DMA address */ + struct list_head ldif_dsd_list; + u8 no_ldif_dsd; + + struct list_head ldif_dma_hndl_list; + u32 dif_bundl_len; + u8 no_dif_bundl; /* This structure should not exceed 512 bytes */ }; @@ -4184,6 +4192,26 @@ struct qla_hw_data { uint16_t min_link_speed; uint16_t max_speed_sup; + /* DMA pool for the DIF bundling buffers */ + struct dma_pool *dif_bundl_pool; + #define DIF_BUNDLING_DMA_POOL_SIZE 1024 + struct { + struct { + struct list_head head; + uint count; + } good; + struct { + struct list_head head; + uint count; + } unusable; + } pool; + + unsigned long long dif_bundle_crossed_pages; + unsigned long long dif_bundle_reads; + unsigned long long dif_bundle_writes; + unsigned long long dif_bundle_kallocs; + unsigned long long dif_bundle_dma_allocs; + atomic_t nvme_active_aen_cnt; uint16_t nvme_last_rptd_aen; /* Last recorded aen count */ diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h index 3673fcdb033a..bcc17a7261e7 100644 --- a/drivers/scsi/qla2xxx/qla_gbl.h +++ b/drivers/scsi/qla2xxx/qla_gbl.h @@ -160,6 +160,7 @@ extern int ql2xautodetectsfp; extern int ql2xenablemsix; extern int qla2xuseresexchforels; extern int ql2xexlogins; +extern int ql2xdifbundlinginternalbuffers; extern int qla2x00_loop_reset(scsi_qla_host_t *); extern void qla2x00_abort_all_cmds(scsi_qla_host_t *, int); @@ -285,7 +286,7 @@ extern int qla24xx_walk_and_build_sglist_no_difb(struct qla_hw_data *, srb_t *, extern int qla24xx_walk_and_build_sglist(struct qla_hw_data *, srb_t *, uint32_t *, uint16_t, struct qla_tc_param *); extern int qla24xx_walk_and_build_prot_sglist(struct qla_hw_data *, srb_t *, - uint32_t *, uint16_t, struct qla_tc_param *); + uint32_t *, uint16_t, struct qla_tgt_cmd *); extern int qla24xx_get_one_block_sg(uint32_t, struct qla2_sgx *, uint32_t *); extern int qla24xx_configure_prot_mode(srb_t *, uint16_t *); extern int qla24xx_build_scsi_crc_2_iocbs(srb_t *, diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c index 032635321ad6..65ba0e36ee60 100644 --- a/drivers/scsi/qla2xxx/qla_iocb.c +++ b/drivers/scsi/qla2xxx/qla_iocb.c @@ -1098,88 +1098,300 @@ qla24xx_walk_and_build_sglist(struct qla_hw_data *ha, srb_t *sp, uint32_t *dsd, int qla24xx_walk_and_build_prot_sglist(struct qla_hw_data *ha, srb_t *sp, - uint32_t *dsd, uint16_t tot_dsds, struct qla_tc_param *tc) + uint32_t *cur_dsd, uint16_t tot_dsds, struct qla_tgt_cmd *tc) { - void *next_dsd; - uint8_t avail_dsds = 0; - uint32_t dsd_list_len; - struct dsd_dma *dsd_ptr; + struct dsd_dma *dsd_ptr = NULL, *dif_dsd, *nxt_dsd; struct scatterlist *sg, *sgl; - int i; - struct scsi_cmnd *cmd; - uint32_t *cur_dsd = dsd; - uint16_t used_dsds = tot_dsds; + struct crc_context *difctx = NULL; struct scsi_qla_host *vha; + uint dsd_list_len; + uint avail_dsds = 0; + uint used_dsds = tot_dsds; + bool dif_local_dma_alloc = false; + bool direction_to_device = false; + int i; if (sp) { - cmd = GET_CMD_SP(sp); + struct scsi_cmnd *cmd = GET_CMD_SP(sp); sgl = scsi_prot_sglist(cmd); vha = sp->vha; + difctx = sp->u.scmd.ctx; + direction_to_device = cmd->sc_data_direction == DMA_TO_DEVICE; + ql_dbg(ql_dbg_tgt + ql_dbg_verbose, vha, 0xe021, + "%s: scsi_cmnd: %p, crc_ctx: %p, sp: %p\n", + __func__, cmd, difctx, sp); } else if (tc) { vha = tc->vha; sgl = tc->prot_sg; + difctx = tc->ctx; + 
direction_to_device = tc->dma_data_direction == DMA_TO_DEVICE; } else { BUG(); return 1; } - ql_dbg(ql_dbg_tgt, vha, 0xe021, - "%s: enter\n", __func__); - - for_each_sg(sgl, sg, tot_dsds, i) { - dma_addr_t sle_dma; - - /* Allocate additional continuation packets? */ - if (avail_dsds == 0) { - avail_dsds = (used_dsds > QLA_DSDS_PER_IOCB) ? - QLA_DSDS_PER_IOCB : used_dsds; - dsd_list_len = (avail_dsds + 1) * 12; - used_dsds -= avail_dsds; - - /* allocate tracking DS */ - dsd_ptr = kzalloc(sizeof(struct dsd_dma), GFP_ATOMIC); - if (!dsd_ptr) - return 1; - - /* allocate new list */ - dsd_ptr->dsd_addr = next_dsd = - dma_pool_alloc(ha->dl_dma_pool, GFP_ATOMIC, - &dsd_ptr->dsd_list_dma); - - if (!next_dsd) { - /* - * Need to cleanup only this dsd_ptr, rest - * will be done by sp_free_dma() - */ - kfree(dsd_ptr); - return 1; + ql_dbg(ql_dbg_tgt + ql_dbg_verbose, vha, 0xe021, + "%s: enter (write=%u)\n", __func__, direction_to_device); + + /* if initiator doing write or target doing read */ + if (direction_to_device) { + for_each_sg(sgl, sg, tot_dsds, i) { + dma_addr_t sle_phys = sg_phys(sg); + + /* If SGE addr + len flips bits in upper 32-bits */ + if (MSD(sle_phys + sg->length) ^ MSD(sle_phys)) { + ql_dbg(ql_dbg_tgt + ql_dbg_verbose, vha, 0xe022, + "%s: page boundary crossing (phys=%llx len=%x)\n", + __func__, sle_phys, sg->length); + + if (difctx) { + ha->dif_bundle_crossed_pages++; + dif_local_dma_alloc = true; + } else { + ql_dbg(ql_dbg_tgt + ql_dbg_verbose, + vha, 0xe022, + "%s: difctx pointer is NULL\n", + __func__); + } + break; + } + } + ha->dif_bundle_writes++; + } else { + ha->dif_bundle_reads++; + } + + if (ql2xdifbundlinginternalbuffers) + dif_local_dma_alloc = direction_to_device; + + if (dif_local_dma_alloc) { + u32 track_difbundl_buf = 0; + u32 ldma_sg_len = 0; + u8 ldma_needed = 1; + + difctx->no_dif_bundl = 0; + difctx->dif_bundl_len = 0; + + /* Track DSD buffers */ + INIT_LIST_HEAD(&difctx->ldif_dsd_list); + /* Track local DMA buffers */ + INIT_LIST_HEAD(&difctx->ldif_dma_hndl_list); + + for_each_sg(sgl, sg, tot_dsds, i) { + u32 sglen = sg_dma_len(sg); + + ql_dbg(ql_dbg_tgt + ql_dbg_verbose, vha, 0xe023, + "%s: sg[%x] (phys=%llx sglen=%x) ldma_sg_len: %x dif_bundl_len: %x ldma_needed: %x\n", + __func__, i, sg_phys(sg), sglen, ldma_sg_len, + difctx->dif_bundl_len, ldma_needed); + + while (sglen) { + u32 xfrlen = 0; + + if (ldma_needed) { + /* + * Allocate list item to store + * the DMA buffers + */ + dsd_ptr = kzalloc(sizeof(*dsd_ptr), + GFP_ATOMIC); + if (!dsd_ptr) { + ql_dbg(ql_dbg_tgt, vha, 0xe024, + "%s: failed alloc dsd_ptr\n", + __func__); + return 1; + } + ha->dif_bundle_kallocs++; + + /* allocate dma buffer */ + dsd_ptr->dsd_addr = dma_pool_alloc + (ha->dif_bundl_pool, GFP_ATOMIC, + &dsd_ptr->dsd_list_dma); + if (!dsd_ptr->dsd_addr) { + ql_dbg(ql_dbg_tgt, vha, 0xe024, + "%s: failed alloc ->dsd_ptr\n", + __func__); + /* + * need to cleanup only this + * dsd_ptr rest will be done + * by sp_free_dma() + */ + kfree(dsd_ptr); + ha->dif_bundle_kallocs--; + return 1; + } + ha->dif_bundle_dma_allocs++; + ldma_needed = 0; + difctx->no_dif_bundl++; + list_add_tail(&dsd_ptr->list, + &difctx->ldif_dma_hndl_list); + } + + /* xfrlen is min of dma pool size and sglen */ + xfrlen = (sglen > + (DIF_BUNDLING_DMA_POOL_SIZE - ldma_sg_len)) ? 
+ DIF_BUNDLING_DMA_POOL_SIZE - ldma_sg_len : + sglen; + + /* replace with local allocated dma buffer */ + sg_pcopy_to_buffer(sgl, sg_nents(sgl), + dsd_ptr->dsd_addr + ldma_sg_len, xfrlen, + difctx->dif_bundl_len); + difctx->dif_bundl_len += xfrlen; + sglen -= xfrlen; + ldma_sg_len += xfrlen; + if (ldma_sg_len == DIF_BUNDLING_DMA_POOL_SIZE || + sg_is_last(sg)) { + ldma_needed = 1; + ldma_sg_len = 0; + } } + } - if (sp) { - list_add_tail(&dsd_ptr->list, - &((struct crc_context *) - sp->u.scmd.ctx)->dsd_list); + track_difbundl_buf = used_dsds = difctx->no_dif_bundl; + ql_dbg(ql_dbg_tgt + ql_dbg_verbose, vha, 0xe025, + "dif_bundl_len=%x, no_dif_bundl=%x track_difbundl_buf: %x\n", + difctx->dif_bundl_len, difctx->no_dif_bundl, + track_difbundl_buf); - sp->flags |= SRB_CRC_CTX_DSD_VALID; - } else { - list_add_tail(&dsd_ptr->list, - &(tc->ctx->dsd_list)); - *tc->ctx_dsd_alloced = 1; + if (sp) + sp->flags |= SRB_DIF_BUNDL_DMA_VALID; + else + tc->prot_flags = DIF_BUNDL_DMA_VALID; + + list_for_each_entry_safe(dif_dsd, nxt_dsd, + &difctx->ldif_dma_hndl_list, list) { + u32 sglen = (difctx->dif_bundl_len > + DIF_BUNDLING_DMA_POOL_SIZE) ? + DIF_BUNDLING_DMA_POOL_SIZE : difctx->dif_bundl_len; + + BUG_ON(track_difbundl_buf == 0); + + /* Allocate additional continuation packets? */ + if (avail_dsds == 0) { + ql_dbg(ql_dbg_tgt + ql_dbg_verbose, vha, + 0xe024, + "%s: adding continuation iocb's\n", + __func__); + avail_dsds = (used_dsds > QLA_DSDS_PER_IOCB) ? + QLA_DSDS_PER_IOCB : used_dsds; + dsd_list_len = (avail_dsds + 1) * 12; + used_dsds -= avail_dsds; + + /* allocate tracking DS */ + dsd_ptr = kzalloc(sizeof(*dsd_ptr), GFP_ATOMIC); + if (!dsd_ptr) { + ql_dbg(ql_dbg_tgt, vha, 0xe026, + "%s: failed alloc dsd_ptr\n", + __func__); + return 1; + } + ha->dif_bundle_kallocs++; + + difctx->no_ldif_dsd++; + /* allocate new list */ + dsd_ptr->dsd_addr = + dma_pool_alloc(ha->dl_dma_pool, GFP_ATOMIC, + &dsd_ptr->dsd_list_dma); + if (!dsd_ptr->dsd_addr) { + ql_dbg(ql_dbg_tgt, vha, 0xe026, + "%s: failed alloc ->dsd_addr\n", + __func__); + /* + * need to cleanup only this dsd_ptr + * rest will be done by sp_free_dma() + */ + kfree(dsd_ptr); + ha->dif_bundle_kallocs--; + return 1; + } + ha->dif_bundle_dma_allocs++; + + if (sp) { + list_add_tail(&dsd_ptr->list, + &difctx->ldif_dsd_list); + sp->flags |= SRB_CRC_CTX_DSD_VALID; + } else { + list_add_tail(&dsd_ptr->list, + &difctx->ldif_dsd_list); + tc->ctx_dsd_alloced = 1; + } + + /* add new list to cmd iocb or last list */ + *cur_dsd++ = + cpu_to_le32(LSD(dsd_ptr->dsd_list_dma)); + *cur_dsd++ = + cpu_to_le32(MSD(dsd_ptr->dsd_list_dma)); + *cur_dsd++ = dsd_list_len; + cur_dsd = dsd_ptr->dsd_addr; } - - /* add new list to cmd iocb or last list */ - *cur_dsd++ = cpu_to_le32(LSD(dsd_ptr->dsd_list_dma)); - *cur_dsd++ = cpu_to_le32(MSD(dsd_ptr->dsd_list_dma)); - *cur_dsd++ = dsd_list_len; - cur_dsd = (uint32_t *)next_dsd; + *cur_dsd++ = cpu_to_le32(LSD(dif_dsd->dsd_list_dma)); + *cur_dsd++ = cpu_to_le32(MSD(dif_dsd->dsd_list_dma)); + *cur_dsd++ = cpu_to_le32(sglen); + avail_dsds--; + difctx->dif_bundl_len -= sglen; + track_difbundl_buf--; } - sle_dma = sg_dma_address(sg); - - *cur_dsd++ = cpu_to_le32(LSD(sle_dma)); - *cur_dsd++ = cpu_to_le32(MSD(sle_dma)); - *cur_dsd++ = cpu_to_le32(sg_dma_len(sg)); - avail_dsds--; + ql_dbg(ql_dbg_tgt + ql_dbg_verbose, vha, 0xe026, + "%s: no_ldif_dsd:%x, no_dif_bundl:%x\n", __func__, + difctx->no_ldif_dsd, difctx->no_dif_bundl); + } else { + for_each_sg(sgl, sg, tot_dsds, i) { + dma_addr_t sle_dma; + + /* Allocate additional continuation 
packets? */ + if (avail_dsds == 0) { + avail_dsds = (used_dsds > QLA_DSDS_PER_IOCB) ? + QLA_DSDS_PER_IOCB : used_dsds; + dsd_list_len = (avail_dsds + 1) * 12; + used_dsds -= avail_dsds; + + /* allocate tracking DS */ + dsd_ptr = kzalloc(sizeof(*dsd_ptr), GFP_ATOMIC); + if (!dsd_ptr) { + ql_dbg(ql_dbg_tgt + ql_dbg_verbose, + vha, 0xe027, + "%s: failed alloc dsd_dma...\n", + __func__); + return 1; + } + + /* allocate new list */ + dsd_ptr->dsd_addr = + dma_pool_alloc(ha->dl_dma_pool, GFP_ATOMIC, + &dsd_ptr->dsd_list_dma); + if (!dsd_ptr->dsd_addr) { + /* need to cleanup only this dsd_ptr */ + /* rest will be done by sp_free_dma() */ + kfree(dsd_ptr); + return 1; + } + + if (sp) { + list_add_tail(&dsd_ptr->list, + &difctx->dsd_list); + sp->flags |= SRB_CRC_CTX_DSD_VALID; + } else { + list_add_tail(&dsd_ptr->list, + &difctx->dsd_list); + tc->ctx_dsd_alloced = 1; + } + + /* add new list to cmd iocb or last list */ + *cur_dsd++ = + cpu_to_le32(LSD(dsd_ptr->dsd_list_dma)); + *cur_dsd++ = + cpu_to_le32(MSD(dsd_ptr->dsd_list_dma)); + *cur_dsd++ = dsd_list_len; + cur_dsd = dsd_ptr->dsd_addr; + } + sle_dma = sg_dma_address(sg); + *cur_dsd++ = cpu_to_le32(LSD(sle_dma)); + *cur_dsd++ = cpu_to_le32(MSD(sle_dma)); + *cur_dsd++ = cpu_to_le32(sg_dma_len(sg)); + avail_dsds--; + } } /* Null termination */ *cur_dsd++ = 0; @@ -1187,7 +1399,6 @@ qla24xx_walk_and_build_prot_sglist(struct qla_hw_data *ha, srb_t *sp, *cur_dsd++ = 0; return 0; } - /** * qla24xx_build_scsi_crc_2_iocbs() - Build IOCB command utilizing Command * Type 6 IOCB types. diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c index 30d3090842f8..5c1de6ed825b 100644 --- a/drivers/scsi/qla2xxx/qla_isr.c +++ b/drivers/scsi/qla2xxx/qla_isr.c @@ -2725,6 +2725,17 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt) cp->device->vendor); break; + case CS_DMA: + ql_log(ql_log_info, fcport->vha, 0x3022, + "CS_DMA error: 0x%x-0x%x (0x%x) nexus=%ld:%d:%llu portid=%06x oxid=0x%x cdb=%10phN len=0x%x rsp_info=0x%x resid=0x%x fw_resid=0x%x sp=%p cp=%p.\n", + comp_status, scsi_status, res, vha->host_no, + cp->device->id, cp->device->lun, fcport->d_id.b24, + ox_id, cp->cmnd, scsi_bufflen(cp), rsp_info_len, + resid_len, fw_resid_len, sp, cp); + ql_dump_buffer(ql_dbg_tgt + ql_dbg_verbose, vha, 0xe0ee, + pkt, sizeof(*sts24)); + res = DID_ERROR << 16; + break; default: res = DID_ERROR << 16; break; diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c index deb923058d08..2bc4fcd0c797 100644 --- a/drivers/scsi/qla2xxx/qla_os.c +++ b/drivers/scsi/qla2xxx/qla_os.c @@ -299,6 +299,13 @@ MODULE_PARM_DESC(ql2xprotguard, "Override choice of DIX checksum\n" " 1 -- Force T10 CRC\n" " 2 -- Force IP checksum\n"); +int ql2xdifbundlinginternalbuffers; +module_param(ql2xdifbundlinginternalbuffers, int, 0644); +MODULE_PARM_DESC(ql2xdifbundlinginternalbuffers, + "Force using internal buffers for DIF information\n" + "0 (Default). 
Based on check.\n" + "1 Force using internal buffers\n"); + /* * SCSI host template entry points */ @@ -819,7 +826,44 @@ qla2xxx_qpair_sp_free_dma(void *ptr) ha->gbl_dsd_inuse -= ctx1->dsd_use_cnt; ha->gbl_dsd_avail += ctx1->dsd_use_cnt; mempool_free(ctx1, ha->ctx_mempool); + sp->flags &= ~SRB_FCP_CMND_DMA_VALID; + } + if (sp->flags & SRB_DIF_BUNDL_DMA_VALID) { + struct crc_context *difctx = sp->u.scmd.ctx; + struct dsd_dma *dif_dsd, *nxt_dsd; + + list_for_each_entry_safe(dif_dsd, nxt_dsd, + &difctx->ldif_dma_hndl_list, list) { + list_del(&dif_dsd->list); + dma_pool_free(ha->dif_bundl_pool, dif_dsd->dsd_addr, + dif_dsd->dsd_list_dma); + kfree(dif_dsd); + difctx->no_dif_bundl--; + } + + list_for_each_entry_safe(dif_dsd, nxt_dsd, + &difctx->ldif_dsd_list, list) { + list_del(&dif_dsd->list); + dma_pool_free(ha->dl_dma_pool, dif_dsd->dsd_addr, + dif_dsd->dsd_list_dma); + kfree(dif_dsd); + difctx->no_ldif_dsd--; + } + + if (difctx->no_ldif_dsd) { + ql_dbg(ql_dbg_tgt+ql_dbg_verbose, sp->vha, 0xe022, + "%s: difctx->no_ldif_dsd=%x\n", + __func__, difctx->no_ldif_dsd); + } + + if (difctx->no_dif_bundl) { + ql_dbg(ql_dbg_tgt+ql_dbg_verbose, sp->vha, 0xe022, + "%s: difctx->no_dif_bundl=%x\n", + __func__, difctx->no_dif_bundl); + } + sp->flags &= ~SRB_DIF_BUNDL_DMA_VALID; } + end: CMD_SP(cmd) = NULL; qla2xxx_rel_qpair_sp(sp->qpair, sp); @@ -4030,9 +4074,86 @@ qla2x00_mem_alloc(struct qla_hw_data *ha, uint16_t req_len, uint16_t rsp_len, "Failed to allocate memory for fcp_cmnd_dma_pool.\n"); goto fail_dl_dma_pool; } + + if (ql2xenabledif) { + u64 bufsize = DIF_BUNDLING_DMA_POOL_SIZE; + struct dsd_dma *dsd, *nxt; + uint i; + /* Creata a DMA pool of buffers for DIF bundling */ + ha->dif_bundl_pool = dma_pool_create(name, + &ha->pdev->dev, DIF_BUNDLING_DMA_POOL_SIZE, 8, 0); + if (!ha->dif_bundl_pool) { + ql_dbg_pci(ql_dbg_init, ha->pdev, 0x0024, + "%s: failed create dif_bundl_pool\n", + __func__); + goto fail_dif_bundl_dma_pool; + } + + INIT_LIST_HEAD(&ha->pool.good.head); + INIT_LIST_HEAD(&ha->pool.unusable.head); + ha->pool.good.count = 0; + ha->pool.unusable.count = 0; + for (i = 0; i < 128; i++) { + dsd = kzalloc(sizeof(*dsd), GFP_ATOMIC); + if (!dsd) { + ql_dbg_pci(ql_dbg_init, ha->pdev, + 0xe0ee, "%s: failed alloc dsd\n", + __func__); + return 1; + } + ha->dif_bundle_kallocs++; + + dsd->dsd_addr = dma_pool_alloc( + ha->dif_bundl_pool, GFP_ATOMIC, + &dsd->dsd_list_dma); + if (!dsd->dsd_addr) { + ql_dbg_pci(ql_dbg_init, ha->pdev, + 0xe0ee, + "%s: failed alloc ->dsd_addr\n", + __func__); + kfree(dsd); + ha->dif_bundle_kallocs--; + continue; + } + ha->dif_bundle_dma_allocs++; + + /* + * if DMA buffer crosses 4G boundary, + * put it on bad list + */ + if (MSD(dsd->dsd_list_dma) ^ + MSD(dsd->dsd_list_dma + bufsize)) { + list_add_tail(&dsd->list, + &ha->pool.unusable.head); + ha->pool.unusable.count++; + } else { + list_add_tail(&dsd->list, + &ha->pool.good.head); + ha->pool.good.count++; + } + } + + /* return the good ones back to the pool */ + list_for_each_entry_safe(dsd, nxt, + &ha->pool.good.head, list) { + list_del(&dsd->list); + dma_pool_free(ha->dif_bundl_pool, + dsd->dsd_addr, dsd->dsd_list_dma); + ha->dif_bundle_dma_allocs--; + kfree(dsd); + ha->dif_bundle_kallocs--; + } + + ql_dbg_pci(ql_dbg_init, ha->pdev, 0x0024, + "%s: dif dma pool (good=%u unusable=%u)\n", + __func__, ha->pool.good.count, + ha->pool.unusable.count); + } + ql_dbg_pci(ql_dbg_init, ha->pdev, 0x0025, - "dl_dma_pool=%p fcp_cmnd_dma_pool=%p.\n", - ha->dl_dma_pool, ha->fcp_cmnd_dma_pool); + "dl_dma_pool=%p fcp_cmnd_dma_pool=%p 
dif_bundl_pool=%p.\n", + ha->dl_dma_pool, ha->fcp_cmnd_dma_pool, + ha->dif_bundl_pool); } /* Allocate memory for SNS commands */ @@ -4197,6 +4318,24 @@ qla2x00_mem_alloc(struct qla_hw_data *ha, uint16_t req_len, uint16_t rsp_len, dma_free_coherent(&ha->pdev->dev, sizeof(struct sns_cmd_pkt), ha->sns_cmd, ha->sns_cmd_dma); fail_dma_pool: + if (ql2xenabledif) { + struct dsd_dma *dsd, *nxt; + + list_for_each_entry_safe(dsd, nxt, &ha->pool.unusable.head, + list) { + list_del(&dsd->list); + dma_pool_free(ha->dif_bundl_pool, dsd->dsd_addr, + dsd->dsd_list_dma); + ha->dif_bundle_dma_allocs--; + kfree(dsd); + ha->dif_bundle_kallocs--; + ha->pool.unusable.count--; + } + dma_pool_destroy(ha->dif_bundl_pool); + ha->dif_bundl_pool = NULL; + } + +fail_dif_bundl_dma_pool: if (IS_QLA82XX(ha) || ql2xenabledif) { dma_pool_destroy(ha->fcp_cmnd_dma_pool); ha->fcp_cmnd_dma_pool = NULL; @@ -4583,6 +4722,31 @@ qla2x00_mem_free(struct qla_hw_data *ha) if (ha->ctx_mempool) mempool_destroy(ha->ctx_mempool); + if (ql2xenabledif) { + struct dsd_dma *dsd, *nxt; + + list_for_each_entry_safe(dsd, nxt, &ha->pool.unusable.head, + list) { + list_del(&dsd->list); + dma_pool_free(ha->dif_bundl_pool, dsd->dsd_addr, + dsd->dsd_list_dma); + ha->dif_bundle_dma_allocs--; + kfree(dsd); + ha->dif_bundle_kallocs--; + ha->pool.unusable.count--; + } + list_for_each_entry_safe(dsd, nxt, &ha->pool.good.head, list) { + list_del(&dsd->list); + dma_pool_free(ha->dif_bundl_pool, dsd->dsd_addr, + dsd->dsd_list_dma); + ha->dif_bundle_dma_allocs--; + kfree(dsd); + ha->dif_bundle_kallocs--; + } + } + + if (ha->dif_bundl_pool) + dma_pool_destroy(ha->dif_bundl_pool); qlt_mem_free(ha); diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c index bceb8e882e7f..0df24968aeda 100644 --- a/drivers/scsi/qla2xxx/qla_target.c +++ b/drivers/scsi/qla2xxx/qla_target.c @@ -3230,7 +3230,7 @@ qlt_build_ctio_crc2_pkt(struct qla_qpair *qpair, struct qla_tgt_prm *prm) cur_dsd = (uint32_t *) &crc_ctx_pkt->u.bundling.dif_address; if (qla24xx_walk_and_build_prot_sglist(ha, NULL, cur_dsd, - prm->prot_seg_cnt, &tc)) + prm->prot_seg_cnt, cmd)) goto crc_queuing_error; } return QLA_SUCCESS; diff --git a/drivers/scsi/qla2xxx/qla_target.h b/drivers/scsi/qla2xxx/qla_target.h index 577e1786a3f1..f3de75000a08 100644 --- a/drivers/scsi/qla2xxx/qla_target.h +++ b/drivers/scsi/qla2xxx/qla_target.h @@ -928,6 +928,8 @@ struct qla_tgt_cmd { uint64_t lba; uint16_t a_guard, e_guard, a_app_tag, e_app_tag; uint32_t a_ref_tag, e_ref_tag; +#define DIF_BUNDL_DMA_VALID 1 + uint16_t prot_flags; uint64_t jiffies_at_alloc; uint64_t jiffies_at_free;