From patchwork Fri Mar 3 14:52:13 2017
From: Horia Geantă <horia.geanta@nxp.com>
To: Herbert Xu, Scott Wood, Roy Pledge
Cc: linux-arm-kernel@lists.infradead.org, linux-crypto@vger.kernel.org,
 Claudiu Manoil, Cristian Stoica, Dan Douglass, Vakul Garg,
 Alexandru Porosanu
Subject: [RFC 7/7] crypto: caam/qi - add ablkcipher and authenc algorithms
Date: Fri, 3 Mar 2017 16:52:13 +0200
Message-ID: <1488552733-20806-8-git-send-email-horia.geanta@nxp.com>
In-Reply-To: <1488552733-20806-1-git-send-email-horia.geanta@nxp.com>
References: <1488552733-20806-1-git-send-email-horia.geanta@nxp.com>

Add support to submit ablkcipher and authenc algorithms
via the QI backend:
-ablkcipher:
 cbc({aes,des,des3_ede}), ctr(aes), rfc3686(ctr(aes)), xts(aes)
-authenc:
 authenc(hmac(md5),cbc({aes,des,des3_ede})),
 authenc(hmac(sha*),cbc({aes,des,des3_ede}))
caam/qi being a new driver, let's give it some time to settle down
without interfering with the existing caam/jr driver. Accordingly, for
now all caam/qi algorithms (caamalg_qi module) are registered with a
lower priority than their caam/jr counterparts (caamalg module).

Signed-off-by: Vakul Garg
Signed-off-by: Alex Porosanu
Signed-off-by: Horia Geantă
---
 drivers/crypto/caam/Kconfig        |   20 +-
 drivers/crypto/caam/Makefile       |    1 +
 drivers/crypto/caam/caamalg.c      |    9 +-
 drivers/crypto/caam/caamalg_desc.c |   77 +-
 drivers/crypto/caam/caamalg_desc.h |   15 +-
 drivers/crypto/caam/caamalg_qi.c   | 2387 ++++++++++++++++++++++++++++++++++++
 drivers/crypto/caam/sg_sw_qm.h     |  107 ++
 7 files changed, 2600 insertions(+), 16 deletions(-)
 create mode 100644 drivers/crypto/caam/caamalg_qi.c
 create mode 100644 drivers/crypto/caam/sg_sw_qm.h
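[Since caamalg and caamalg_qi may register the same algorithm names, the
crypto core arbitrates by cra_priority: caam/jr registers at 3000 while
this module uses 2000 (CAAM_CRA_PRIORITY below). A minimal sketch of how
a caller could confirm which implementation won, assuming both modules
are loaded:

	struct crypto_aead *tfm;

	tfm = crypto_alloc_aead("authenc(hmac(sha1),cbc(aes))", 0, 0);
	if (!IS_ERR(tfm)) {
		/* Expected to name the caam/jr driver while caamalg is
		 * loaded, since priority 3000 beats 2000.
		 */
		pr_info("selected: %s\n",
			crypto_tfm_alg_driver_name(crypto_aead_tfm(tfm)));
		crypto_free_aead(tfm);
	}
]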
diff --git a/drivers/crypto/caam/Kconfig b/drivers/crypto/caam/Kconfig
index bc0d3569f8d9..e36aeacd7635 100644
--- a/drivers/crypto/caam/Kconfig
+++ b/drivers/crypto/caam/Kconfig
@@ -87,6 +87,23 @@ config CRYPTO_DEV_FSL_CAAM_CRYPTO_API
 	  To compile this as a module, choose M here: the module
 	  will be called caamalg.
 
+config CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI
+	tristate "Queue Interface as Crypto API backend"
+	depends on CRYPTO_DEV_FSL_CAAM_JR && FSL_DPAA && NET
+	default y
+	select CRYPTO_AUTHENC
+	select CRYPTO_BLKCIPHER
+	help
+	  Selecting this will use CAAM Queue Interface (QI) for sending
+	  & receiving crypto jobs to/from CAAM. This gives better performance
+	  than the job ring interface when the number of cores is greater
+	  than the number of job rings assigned to the kernel. The number
+	  of portals assigned to the kernel should also be more than the
+	  number of job rings.
+
+	  To compile this as a module, choose M here: the module
+	  will be called caamalg_qi.
+
 config CRYPTO_DEV_FSL_CAAM_AHASH_API
 	tristate "Register hash algorithm implementations with Crypto API"
 	depends on CRYPTO_DEV_FSL_CAAM_JR
@@ -136,4 +153,5 @@ config CRYPTO_DEV_FSL_CAAM_DEBUG
 	  information in the CAAM driver.
 
 config CRYPTO_DEV_FSL_CAAM_CRYPTO_API_DESC
-	def_tristate CRYPTO_DEV_FSL_CAAM_CRYPTO_API
+	def_tristate (CRYPTO_DEV_FSL_CAAM_CRYPTO_API || \
+		      CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI)
diff --git a/drivers/crypto/caam/Makefile b/drivers/crypto/caam/Makefile
index 2e60e45c2bf1..9e2e98856b9b 100644
--- a/drivers/crypto/caam/Makefile
+++ b/drivers/crypto/caam/Makefile
@@ -8,6 +8,7 @@ endif
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam.o
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_JR) += caam_jr.o
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API) += caamalg.o
+obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI) += caamalg_qi.o
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_DESC) += caamalg_desc.o
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_AHASH_API) += caamhash.o
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_RNG_API) += caamrng.o
diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 9bc80eb06934..398807d1b77e 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -266,8 +266,9 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
 
 	/* aead_encrypt shared descriptor */
 	desc = ctx->sh_desc_enc;
-	cnstr_shdsc_aead_encap(desc, &ctx->cdata, &ctx->adata, ctx->authsize,
-			       is_rfc3686, nonce, ctx1_iv_off);
+	cnstr_shdsc_aead_encap(desc, &ctx->cdata, &ctx->adata, ivsize,
+			       ctx->authsize, is_rfc3686, nonce, ctx1_iv_off,
+			       false);
 	dma_sync_single_for_device(jrdev, ctx->sh_desc_enc_dma,
 				   desc_bytes(desc), DMA_TO_DEVICE);
 
@@ -299,7 +300,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
 	desc = ctx->sh_desc_dec;
 	cnstr_shdsc_aead_decap(desc, &ctx->cdata, &ctx->adata, ivsize,
 			       ctx->authsize, alg->caam.geniv, is_rfc3686,
-			       nonce, ctx1_iv_off);
+			       nonce, ctx1_iv_off, false);
 	dma_sync_single_for_device(jrdev, ctx->sh_desc_dec_dma,
 				   desc_bytes(desc), DMA_TO_DEVICE);
 
@@ -333,7 +334,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
 	desc = ctx->sh_desc_enc;
 	cnstr_shdsc_aead_givencap(desc, &ctx->cdata, &ctx->adata, ivsize,
 				  ctx->authsize, is_rfc3686, nonce,
-				  ctx1_iv_off);
+				  ctx1_iv_off, false);
 	dma_sync_single_for_device(jrdev, ctx->sh_desc_enc_dma,
 				   desc_bytes(desc), DMA_TO_DEVICE);
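[The new ivsize and is_qi arguments threaded through here feed the
QI-specific prologue added in caamalg_desc.c below. A minimal sketch of
the matching call from the QI side, mirroring what caamalg_qi.c does
later in this patch, assuming an AES-CBC authenc context (ivsize is
AES_BLOCK_SIZE, no rfc3686 nonce):

	/* Sketch: same constructor, QI flavour -- is_qi = true makes it
	 * emit the assoclen/IV seq-load prologue shown below.
	 */
	cnstr_shdsc_aead_encap(ctx->sh_desc_enc, &ctx->cdata, &ctx->adata,
			       AES_BLOCK_SIZE /* ivsize */, ctx->authsize,
			       false /* is_rfc3686 */, NULL /* nonce */,
			       0 /* ctx1_iv_off */, true /* is_qi */);
]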
diff --git a/drivers/crypto/caam/caamalg_desc.c b/drivers/crypto/caam/caamalg_desc.c
index f3f48c10b9d6..6f9c7ec0e339 100644
--- a/drivers/crypto/caam/caamalg_desc.c
+++ b/drivers/crypto/caam/caamalg_desc.c
@@ -265,17 +265,19 @@ static void init_sh_desc_key_aead(u32 * const desc,
  *         split key is to be used, the size of the split key itself is
  *         specified. Valid algorithm values - one of OP_ALG_ALGSEL_{MD5, SHA1,
  *         SHA224, SHA256, SHA384, SHA512} ANDed with OP_ALG_AAI_HMAC_PRECOMP.
+ * @ivsize: initialization vector size
  * @icvsize: integrity check value (ICV) size (truncated or full)
  * @is_rfc3686: true when ctr(aes) is wrapped by rfc3686 template
  * @nonce: pointer to rfc3686 nonce
  * @ctx1_iv_off: IV offset in CONTEXT1 register
+ * @is_qi: true when called from caam/qi
  *
  * Note: Requires an MDHA split key.
  */
 void cnstr_shdsc_aead_encap(u32 * const desc, struct alginfo *cdata,
-			    struct alginfo *adata, unsigned int icvsize,
-			    const bool is_rfc3686, u32 *nonce,
-			    const u32 ctx1_iv_off)
+			    struct alginfo *adata, unsigned int ivsize,
+			    unsigned int icvsize, const bool is_rfc3686,
+			    u32 *nonce, const u32 ctx1_iv_off, const bool is_qi)
 {
 	/* Note: Context registers are saved. */
 	init_sh_desc_key_aead(desc, cdata, adata, is_rfc3686, nonce);
@@ -284,6 +286,25 @@ void cnstr_shdsc_aead_encap(u32 * const desc, struct alginfo *cdata,
 	append_operation(desc, adata->algtype | OP_ALG_AS_INITFINAL |
 			 OP_ALG_ENCRYPT);
 
+	if (is_qi) {
+		u32 *wait_load_cmd;
+
+		/* REG3 = assoclen */
+		append_seq_load(desc, 4, LDST_CLASS_DECO |
+				LDST_SRCDST_WORD_DECO_MATH3 |
+				(4 << LDST_OFFSET_SHIFT));
+
+		wait_load_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
+					    JUMP_COND_CALM | JUMP_COND_NCP |
+					    JUMP_COND_NOP | JUMP_COND_NIP |
+					    JUMP_COND_NIFP);
+		set_jump_tgt_here(desc, wait_load_cmd);
+
+		append_seq_load(desc, ivsize, LDST_CLASS_1_CCB |
+				LDST_SRCDST_BYTE_CONTEXT |
+				(ctx1_iv_off << LDST_OFFSET_SHIFT));
+	}
+
 	/* Read and write assoclen bytes */
 	append_math_add(desc, VARSEQINLEN, ZERO, REG3, CAAM_CMD_SZ);
 	append_math_add(desc, VARSEQOUTLEN, ZERO, REG3, CAAM_CMD_SZ);
@@ -338,6 +359,7 @@ EXPORT_SYMBOL(cnstr_shdsc_aead_encap);
  * @is_rfc3686: true when ctr(aes) is wrapped by rfc3686 template
  * @nonce: pointer to rfc3686 nonce
  * @ctx1_iv_off: IV offset in CONTEXT1 register
+ * @is_qi: true when called from caam/qi
  *
  * Note: Requires an MDHA split key.
  */
@@ -345,7 +367,7 @@ void cnstr_shdsc_aead_decap(u32 * const desc, struct alginfo *cdata,
 			    struct alginfo *adata, unsigned int ivsize,
 			    unsigned int icvsize, const bool geniv,
 			    const bool is_rfc3686, u32 *nonce,
-			    const u32 ctx1_iv_off)
+			    const u32 ctx1_iv_off, const bool is_qi)
 {
 	/* Note: Context registers are saved. */
 	init_sh_desc_key_aead(desc, cdata, adata, is_rfc3686, nonce);
@@ -354,6 +376,26 @@ void cnstr_shdsc_aead_decap(u32 * const desc, struct alginfo *cdata,
 	append_operation(desc, adata->algtype | OP_ALG_AS_INITFINAL |
 			 OP_ALG_DECRYPT | OP_ALG_ICV_ON);
 
+	if (is_qi) {
+		u32 *wait_load_cmd;
+
+		/* REG3 = assoclen */
+		append_seq_load(desc, 4, LDST_CLASS_DECO |
+				LDST_SRCDST_WORD_DECO_MATH3 |
+				(4 << LDST_OFFSET_SHIFT));
+
+		wait_load_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
+					    JUMP_COND_CALM | JUMP_COND_NCP |
+					    JUMP_COND_NOP | JUMP_COND_NIP |
+					    JUMP_COND_NIFP);
+		set_jump_tgt_here(desc, wait_load_cmd);
+
+		if (!geniv)
+			append_seq_load(desc, ivsize, LDST_CLASS_1_CCB |
+					LDST_SRCDST_BYTE_CONTEXT |
+					(ctx1_iv_off << LDST_OFFSET_SHIFT));
+	}
+
 	/* Read and write assoclen bytes */
 	append_math_add(desc, VARSEQINLEN, ZERO, REG3, CAAM_CMD_SZ);
 	if (geniv)
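[The wait-loop jump in these is_qi blocks is easy to misread; here is
the same sequence with per-command commentary. The flag glosses are my
reading of the CAAM descriptor documentation, not text from the patch:

	if (is_qi) {
		u32 *wait_load_cmd;

		/* Pull req->assoclen (carried as a standalone 4-byte
		 * entry in the QI input S/G table) into DECO MATH3;
		 * offset 4 places it in the low word of the 64-bit
		 * register.
		 */
		append_seq_load(desc, 4, LDST_CLASS_DECO |
				LDST_SRCDST_WORD_DECO_MATH3 |
				(4 << LDST_OFFSET_SHIFT));

		/* Jump-to-self until the DECO is calm (CALM) and no
		 * CCB/output/input/info-FIFO transfers are pending
		 * (NCP/NOP/NIP/NIFP), so MATH3 is valid before the
		 * descriptor goes on to consume it.
		 */
		wait_load_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
					    JUMP_COND_CALM | JUMP_COND_NCP |
					    JUMP_COND_NOP | JUMP_COND_NIP |
					    JUMP_COND_NIFP);
		set_jump_tgt_here(desc, wait_load_cmd);
	}
]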
@@ -423,21 +465,44 @@ EXPORT_SYMBOL(cnstr_shdsc_aead_decap);
  * @is_rfc3686: true when ctr(aes) is wrapped by rfc3686 template
  * @nonce: pointer to rfc3686 nonce
  * @ctx1_iv_off: IV offset in CONTEXT1 register
+ * @is_qi: true when called from caam/qi
  *
  * Note: Requires an MDHA split key.
  */
 void cnstr_shdsc_aead_givencap(u32 * const desc, struct alginfo *cdata,
 			       struct alginfo *adata, unsigned int ivsize,
 			       unsigned int icvsize, const bool is_rfc3686,
-			       u32 *nonce, const u32 ctx1_iv_off)
+			       u32 *nonce, const u32 ctx1_iv_off,
+			       const bool is_qi)
 {
 	u32 geniv, moveiv;
 
 	/* Note: Context registers are saved. */
 	init_sh_desc_key_aead(desc, cdata, adata, is_rfc3686, nonce);
 
-	if (is_rfc3686)
+	if (is_qi) {
+		u32 *wait_load_cmd;
+
+		/* REG3 = assoclen */
+		append_seq_load(desc, 4, LDST_CLASS_DECO |
+				LDST_SRCDST_WORD_DECO_MATH3 |
+				(4 << LDST_OFFSET_SHIFT));
+
+		wait_load_cmd = append_jump(desc, JUMP_JSL | JUMP_TEST_ALL |
+					    JUMP_COND_CALM | JUMP_COND_NCP |
+					    JUMP_COND_NOP | JUMP_COND_NIP |
+					    JUMP_COND_NIFP);
+		set_jump_tgt_here(desc, wait_load_cmd);
+	}
+
+	if (is_rfc3686) {
+		if (is_qi)
+			append_seq_load(desc, ivsize, LDST_CLASS_1_CCB |
+					LDST_SRCDST_BYTE_CONTEXT |
+					(ctx1_iv_off << LDST_OFFSET_SHIFT));
+
 		goto copy_iv;
+	}
 
 	/* Generate IV */
 	geniv = NFIFOENTRY_STYPE_PAD | NFIFOENTRY_DEST_DECO |
diff --git a/drivers/crypto/caam/caamalg_desc.h b/drivers/crypto/caam/caamalg_desc.h
index 95551737333a..8731e4a7ff05 100644
--- a/drivers/crypto/caam/caamalg_desc.h
+++ b/drivers/crypto/caam/caamalg_desc.h
@@ -12,6 +12,9 @@
 #define DESC_AEAD_ENC_LEN	(DESC_AEAD_BASE + 11 * CAAM_CMD_SZ)
 #define DESC_AEAD_DEC_LEN	(DESC_AEAD_BASE + 15 * CAAM_CMD_SZ)
 #define DESC_AEAD_GIVENC_LEN	(DESC_AEAD_ENC_LEN + 7 * CAAM_CMD_SZ)
+#define DESC_QI_AEAD_ENC_LEN	(DESC_AEAD_ENC_LEN + 3 * CAAM_CMD_SZ)
+#define DESC_QI_AEAD_DEC_LEN	(DESC_AEAD_DEC_LEN + 3 * CAAM_CMD_SZ)
+#define DESC_QI_AEAD_GIVENC_LEN	(DESC_AEAD_GIVENC_LEN + 3 * CAAM_CMD_SZ)
 
 /* Note: Nonce is counted in cdata.keylen */
 #define DESC_AEAD_CTR_RFC3686_LEN	(4 * CAAM_CMD_SZ)
@@ -45,20 +48,22 @@ void cnstr_shdsc_aead_null_decap(u32 * const desc, struct alginfo *adata,
 				 unsigned int icvsize);
 
 void cnstr_shdsc_aead_encap(u32 * const desc, struct alginfo *cdata,
-			    struct alginfo *adata, unsigned int icvsize,
-			    const bool is_rfc3686, u32 *nonce,
-			    const u32 ctx1_iv_off);
+			    struct alginfo *adata, unsigned int ivsize,
+			    unsigned int icvsize, const bool is_rfc3686,
+			    u32 *nonce, const u32 ctx1_iv_off,
+			    const bool is_qi);
 
 void cnstr_shdsc_aead_decap(u32 * const desc, struct alginfo *cdata,
 			    struct alginfo *adata, unsigned int ivsize,
 			    unsigned int icvsize, const bool geniv,
 			    const bool is_rfc3686, u32 *nonce,
-			    const u32 ctx1_iv_off);
+			    const u32 ctx1_iv_off, const bool is_qi);
 
 void cnstr_shdsc_aead_givencap(u32 * const desc, struct alginfo *cdata,
 			       struct alginfo *adata, unsigned int ivsize,
 			       unsigned int icvsize, const bool is_rfc3686,
-			       u32 *nonce, const u32 ctx1_iv_off);
+			       u32 *nonce, const u32 ctx1_iv_off,
+			       const bool is_qi);
 
 void cnstr_shdsc_gcm_encap(u32 * const desc, struct alginfo *cdata,
 			   unsigned int icvsize);
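[One consistency check worth making explicit: the new DESC_QI_* lengths
budget exactly the worst case of the is_qi prologue above, with
CAAM_CMD_SZ being one 32-bit command word; the breakdown below is my
arithmetic, not text from the patch:

	/* Assumed 3-word budget for the is_qi additions:
	 *   SEQ LOAD assoclen -> 1 * CAAM_CMD_SZ
	 *   wait jump         -> 1 * CAAM_CMD_SZ
	 *   SEQ LOAD IV       -> 1 * CAAM_CMD_SZ
	 * => DESC_QI_AEAD_ENC_LEN == DESC_AEAD_ENC_LEN + 3 * CAAM_CMD_SZ
	 */
]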
diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
new file mode 100644
index 000000000000..ea0e5b8b9171
--- /dev/null
+++ b/drivers/crypto/caam/caamalg_qi.c
@@ -0,0 +1,2387 @@
+/*
+ * Freescale FSL CAAM support for crypto API over QI backend.
+ * Based on caamalg.c
+ *
+ * Copyright 2013-2016 Freescale Semiconductor, Inc.
+ * Copyright 2016-2017 NXP
+ */
+
+#include "compat.h"
+
+#include "regs.h"
+#include "intern.h"
+#include "desc_constr.h"
+#include "error.h"
+#include "sg_sw_sec4.h"
+#include "sg_sw_qm.h"
+#include "key_gen.h"
+#include "qi.h"
+#include "jr.h"
+#include "caamalg_desc.h"
+
+/*
+ * crypto alg
+ */
+#define CAAM_CRA_PRIORITY	2000
+/* max key is sum of AES_MAX_KEY_SIZE, max split key size */
+#define CAAM_MAX_KEY_SIZE	(AES_MAX_KEY_SIZE + \
+				 SHA512_DIGEST_SIZE * 2)
+
+#define DESC_MAX_USED_BYTES	(DESC_QI_AEAD_GIVENC_LEN + \
+				 CAAM_MAX_KEY_SIZE)
+#define DESC_MAX_USED_LEN	(DESC_MAX_USED_BYTES / CAAM_CMD_SZ)
+
+struct caam_alg_entry {
+	int class1_alg_type;
+	int class2_alg_type;
+	bool rfc3686;
+	bool geniv;
+};
+
+struct caam_aead_alg {
+	struct aead_alg aead;
+	struct caam_alg_entry caam;
+	bool registered;
+};
+
+/*
+ * per-session context
+ */
+struct caam_ctx {
+	struct device *jrdev;
+	u32 sh_desc_enc[DESC_MAX_USED_LEN];
+	u32 sh_desc_dec[DESC_MAX_USED_LEN];
+	u32 sh_desc_givenc[DESC_MAX_USED_LEN];
+	u8 key[CAAM_MAX_KEY_SIZE];
+	dma_addr_t key_dma;
+	struct alginfo adata;
+	struct alginfo cdata;
+	unsigned int authsize;
+	struct device *qidev;
+	spinlock_t lock;	/* Protects multiple init of driver context */
+	struct caam_drv_ctx *drv_ctx[NUM_OP];
+};
+
+static int aead_set_sh_desc(struct crypto_aead *aead)
+{
+	struct caam_aead_alg *alg = container_of(crypto_aead_alg(aead),
+						 typeof(*alg), aead);
+	struct caam_ctx *ctx = crypto_aead_ctx(aead);
+	unsigned int ivsize = crypto_aead_ivsize(aead);
+	u32 ctx1_iv_off = 0;
+	u32 *nonce = NULL;
+	unsigned int data_len[2];
+	u32 inl_mask;
+	const bool ctr_mode = ((ctx->cdata.algtype & OP_ALG_AAI_MASK) ==
+			       OP_ALG_AAI_CTR_MOD128);
+	const bool is_rfc3686 = alg->caam.rfc3686;
+
+	if (!ctx->cdata.keylen || !ctx->authsize)
+		return 0;
+
+	/*
+	 * AES-CTR needs to load IV in CONTEXT1 reg
+	 * at an offset of 128bits (16bytes)
+	 * CONTEXT1[255:128] = IV
+	 */
+	if (ctr_mode)
+		ctx1_iv_off = 16;
+
+	/*
+	 * RFC3686 specific:
+	 *	CONTEXT1[255:128] = {NONCE, IV, COUNTER}
+	 */
+	if (is_rfc3686) {
+		ctx1_iv_off = 16 + CTR_RFC3686_NONCE_SIZE;
+		nonce = (u32 *)((void *)ctx->key + ctx->adata.keylen_pad +
+				ctx->cdata.keylen - CTR_RFC3686_NONCE_SIZE);
+	}
+
+	data_len[0] = ctx->adata.keylen_pad;
+	data_len[1] = ctx->cdata.keylen;
+
+	if (alg->caam.geniv)
+		goto skip_enc;
+
+	/* aead_encrypt shared descriptor */
+	if (desc_inline_query(DESC_QI_AEAD_ENC_LEN +
+			      (is_rfc3686 ? DESC_AEAD_CTR_RFC3686_LEN : 0),
+			      DESC_JOB_IO_LEN, data_len, &inl_mask,
+			      ARRAY_SIZE(data_len)) < 0)
+		return -EINVAL;
+
+	if (inl_mask & 1)
+		ctx->adata.key_virt = ctx->key;
+	else
+		ctx->adata.key_dma = ctx->key_dma;
+
+	if (inl_mask & 2)
+		ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
+	else
+		ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
+
+	ctx->adata.key_inline = !!(inl_mask & 1);
+	ctx->cdata.key_inline = !!(inl_mask & 2);
+
+	cnstr_shdsc_aead_encap(ctx->sh_desc_enc, &ctx->cdata, &ctx->adata,
+			       ivsize, ctx->authsize, is_rfc3686, nonce,
+			       ctx1_iv_off, true);
+
+skip_enc:
+	/* aead_decrypt shared descriptor */
+	if (desc_inline_query(DESC_QI_AEAD_DEC_LEN +
+			      (is_rfc3686 ? DESC_AEAD_CTR_RFC3686_LEN : 0),
+			      DESC_JOB_IO_LEN, data_len, &inl_mask,
+			      ARRAY_SIZE(data_len)) < 0)
+		return -EINVAL;
+
+	if (inl_mask & 1)
+		ctx->adata.key_virt = ctx->key;
+	else
+		ctx->adata.key_dma = ctx->key_dma;
+
+	if (inl_mask & 2)
+		ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
+	else
+		ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
+
+	ctx->adata.key_inline = !!(inl_mask & 1);
+	ctx->cdata.key_inline = !!(inl_mask & 2);
+
+	cnstr_shdsc_aead_decap(ctx->sh_desc_dec, &ctx->cdata, &ctx->adata,
+			       ivsize, ctx->authsize, alg->caam.geniv,
+			       is_rfc3686, nonce, ctx1_iv_off, true);
+
+	if (!alg->caam.geniv)
+		goto skip_givenc;
+
+	/* aead_givencrypt shared descriptor */
+	if (desc_inline_query(DESC_QI_AEAD_GIVENC_LEN +
+			      (is_rfc3686 ? DESC_AEAD_CTR_RFC3686_LEN : 0),
+			      DESC_JOB_IO_LEN, data_len, &inl_mask,
+			      ARRAY_SIZE(data_len)) < 0)
+		return -EINVAL;
+
+	if (inl_mask & 1)
+		ctx->adata.key_virt = ctx->key;
+	else
+		ctx->adata.key_dma = ctx->key_dma;
+
+	if (inl_mask & 2)
+		ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
+	else
+		ctx->cdata.key_dma = ctx->key_dma + ctx->adata.keylen_pad;
+
+	ctx->adata.key_inline = !!(inl_mask & 1);
+	ctx->cdata.key_inline = !!(inl_mask & 2);
+
+	cnstr_shdsc_aead_givencap(ctx->sh_desc_enc, &ctx->cdata, &ctx->adata,
+				  ivsize, ctx->authsize, is_rfc3686, nonce,
+				  ctx1_iv_off, true);
+
+skip_givenc:
+	return 0;
+}
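[The desc_inline_query()/inl_mask dance repeats verbatim for the enc,
dec and givenc descriptors above. A minimal sketch of the pattern
factored into a helper (set_key_placement() is a hypothetical name, not
part of the patch); bit 0 of the mask answers for the split
authentication key, bit 1 for the cipher key:

	static void set_key_placement(struct caam_ctx *ctx, u32 inl_mask)
	{
		if (inl_mask & 1)
			ctx->adata.key_virt = ctx->key;
		else
			ctx->adata.key_dma = ctx->key_dma;

		if (inl_mask & 2)
			ctx->cdata.key_virt = ctx->key + ctx->adata.keylen_pad;
		else
			ctx->cdata.key_dma = ctx->key_dma +
					     ctx->adata.keylen_pad;

		ctx->adata.key_inline = !!(inl_mask & 1);
		ctx->cdata.key_inline = !!(inl_mask & 2);
	}
]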
+
+static int aead_setauthsize(struct crypto_aead *authenc, unsigned int authsize)
+{
+	struct caam_ctx *ctx = crypto_aead_ctx(authenc);
+
+	ctx->authsize = authsize;
+	aead_set_sh_desc(authenc);
+
+	return 0;
+}
+
+static int aead_setkey(struct crypto_aead *aead, const u8 *key,
+		       unsigned int keylen)
+{
+	struct caam_ctx *ctx = crypto_aead_ctx(aead);
+	struct device *jrdev = ctx->jrdev;
+	struct crypto_authenc_keys keys;
+	int ret = 0;
+
+	if (crypto_authenc_extractkeys(&keys, key, keylen) != 0)
+		goto badkey;
+
+#ifdef DEBUG
+	dev_err(jrdev, "keylen %d enckeylen %d authkeylen %d\n",
+		keys.authkeylen + keys.enckeylen, keys.enckeylen,
+		keys.authkeylen);
+	print_hex_dump(KERN_ERR, "key in @" __stringify(__LINE__)": ",
+		       DUMP_PREFIX_ADDRESS, 16, 4, key, keylen, 1);
+#endif
+
+	ret = gen_split_key(jrdev, ctx->key, &ctx->adata, keys.authkey,
+			    keys.authkeylen, CAAM_MAX_KEY_SIZE -
+			    keys.enckeylen);
+	if (ret)
+		goto badkey;
+
+	/* append encryption key after the auth split key */
+	memcpy(ctx->key + ctx->adata.keylen_pad, keys.enckey, keys.enckeylen);
+	dma_sync_single_for_device(jrdev, ctx->key_dma, ctx->adata.keylen_pad +
+				   keys.enckeylen, DMA_TO_DEVICE);
+#ifdef DEBUG
+	print_hex_dump(KERN_ERR, "ctx.key@" __stringify(__LINE__)": ",
+		       DUMP_PREFIX_ADDRESS, 16, 4, ctx->key,
+		       ctx->adata.keylen_pad + keys.enckeylen, 1);
+#endif
+
+	ctx->cdata.keylen = keys.enckeylen;
+
+	ret = aead_set_sh_desc(aead);
+	if (ret)
+		goto badkey;
+
+	/* Now update the driver contexts with the new shared descriptor */
+	if (ctx->drv_ctx[ENCRYPT]) {
+		ret = caam_drv_ctx_update(ctx->drv_ctx[ENCRYPT],
+					  ctx->sh_desc_enc);
+		if (ret) {
+			dev_err(jrdev, "driver enc context update failed\n");
+			goto badkey;
+		}
+	}
+
+	if (ctx->drv_ctx[DECRYPT]) {
+		ret = caam_drv_ctx_update(ctx->drv_ctx[DECRYPT],
+					  ctx->sh_desc_dec);
+		if (ret) {
+			dev_err(jrdev, "driver dec context update failed\n");
+			goto badkey;
+		}
+	}
+
+	return ret;
+badkey:
+	crypto_aead_set_flags(aead, CRYPTO_TFM_RES_BAD_KEY_LEN);
+	return -EINVAL;
+}
+
+static int ablkcipher_setkey(struct crypto_ablkcipher *ablkcipher,
+			     const u8 *key, unsigned int keylen)
+{
+	struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher);
+	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(ablkcipher);
+	const char *alg_name = crypto_tfm_alg_name(tfm);
+	struct device *jrdev = ctx->jrdev;
+	unsigned int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+	u32 ctx1_iv_off = 0;
+	const bool ctr_mode = ((ctx->cdata.algtype & OP_ALG_AAI_MASK) ==
+			       OP_ALG_AAI_CTR_MOD128);
+	const bool is_rfc3686 = (ctr_mode && strstr(alg_name, "rfc3686"));
+	int ret = 0;
+
+	memcpy(ctx->key, key, keylen);
+#ifdef DEBUG
+	print_hex_dump(KERN_ERR, "key in @" __stringify(__LINE__)": ",
+		       DUMP_PREFIX_ADDRESS, 16, 4, key, keylen, 1);
+#endif
+	/*
+	 * AES-CTR needs to load IV in CONTEXT1 reg
+	 * at an offset of 128bits (16bytes)
+	 * CONTEXT1[255:128] = IV
+	 */
+	if (ctr_mode)
+		ctx1_iv_off = 16;
+
+	/*
+	 * RFC3686 specific:
+	 *	| CONTEXT1[255:128] = {NONCE, IV, COUNTER}
+	 *	| *key = {KEY, NONCE}
+	 */
+	if (is_rfc3686) {
+		ctx1_iv_off = 16 + CTR_RFC3686_NONCE_SIZE;
+		keylen -= CTR_RFC3686_NONCE_SIZE;
+	}
+
+	dma_sync_single_for_device(jrdev, ctx->key_dma, keylen, DMA_TO_DEVICE);
+	ctx->cdata.keylen = keylen;
+	ctx->cdata.key_virt = ctx->key;
+	ctx->cdata.key_inline = true;
+
+	/* ablkcipher encrypt, decrypt, givencrypt shared descriptors */
+	cnstr_shdsc_ablkcipher_encap(ctx->sh_desc_enc, &ctx->cdata, ivsize,
+				     is_rfc3686, ctx1_iv_off);
+	cnstr_shdsc_ablkcipher_decap(ctx->sh_desc_dec, &ctx->cdata, ivsize,
+				     is_rfc3686, ctx1_iv_off);
+	cnstr_shdsc_ablkcipher_givencap(ctx->sh_desc_givenc, &ctx->cdata,
+					ivsize, is_rfc3686, ctx1_iv_off);
+
+	/* Now update the driver contexts with the new shared descriptor */
+	if (ctx->drv_ctx[ENCRYPT]) {
+		ret = caam_drv_ctx_update(ctx->drv_ctx[ENCRYPT],
+					  ctx->sh_desc_enc);
+		if (ret) {
+			dev_err(jrdev, "driver enc context update failed\n");
+			goto badkey;
+		}
+	}
+
+	if (ctx->drv_ctx[DECRYPT]) {
+		ret = caam_drv_ctx_update(ctx->drv_ctx[DECRYPT],
+					  ctx->sh_desc_dec);
+		if (ret) {
+			dev_err(jrdev, "driver dec context update failed\n");
+			goto badkey;
+		}
+	}
+
+	if (ctx->drv_ctx[GIVENCRYPT]) {
+		ret = caam_drv_ctx_update(ctx->drv_ctx[GIVENCRYPT],
+					  ctx->sh_desc_givenc);
+		if (ret) {
+			dev_err(jrdev, "driver givenc context update failed\n");
+			goto badkey;
+		}
+	}
+
+	return ret;
+badkey:
+	crypto_ablkcipher_set_flags(ablkcipher, CRYPTO_TFM_RES_BAD_KEY_LEN);
+	return -EINVAL;
+}
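[For the rfc3686 template, the key blob handed to setkey is {KEY, NONCE},
which is what the keylen adjustment above accounts for. An illustration,
assuming a 16-byte AES key:

	/* rfc3686(ctr(aes)) key blob layout, per the code above:
	 *   key[0..15]  AES key -> ctx->cdata.keylen after the adjustment
	 *   key[16..19] nonce   -> CTR_RFC3686_NONCE_SIZE == 4 bytes
	 * The nonce is not stripped from ctx->key; the shared-descriptor
	 * constructor loads it into CONTEXT1 as {NONCE, IV, COUNTER},
	 * with the IV offset bumped to 16 + 4 = 20 (ctx1_iv_off).
	 */
]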
dev_err(jrdev, "driver dec context update failed\n"); + goto badkey; + } + } + + return ret; +badkey: + crypto_ablkcipher_set_flags(ablkcipher, CRYPTO_TFM_RES_BAD_KEY_LEN); + return 0; +} + +/* + * aead_edesc - s/w-extended aead descriptor + * @src_nents: number of segments in input scatterlist + * @dst_nents: number of segments in output scatterlist + * @iv_dma: dma address of iv for checking continuity and link table + * @qm_sg_bytes: length of dma mapped h/w link table + * @qm_sg_dma: bus physical mapped address of h/w link table + * @assoclen_dma: bus physical mapped address of req->assoclen + * @drv_req: driver-specific request structure + * @sgt: the h/w link table + */ +struct aead_edesc { + int src_nents; + int dst_nents; + dma_addr_t iv_dma; + int qm_sg_bytes; + dma_addr_t qm_sg_dma; + dma_addr_t assoclen_dma; + struct caam_drv_req drv_req; + struct qm_sg_entry sgt[0]; +}; + +/* + * ablkcipher_edesc - s/w-extended ablkcipher descriptor + * @src_nents: number of segments in input scatterlist + * @dst_nents: number of segments in output scatterlist + * @iv_dma: dma address of iv for checking continuity and link table + * @qm_sg_bytes: length of dma mapped h/w link table + * @qm_sg_dma: bus physical mapped address of h/w link table + * @drv_req: driver-specific request structure + * @sgt: the h/w link table + */ +struct ablkcipher_edesc { + int src_nents; + int dst_nents; + dma_addr_t iv_dma; + int qm_sg_bytes; + dma_addr_t qm_sg_dma; + struct caam_drv_req drv_req; + struct qm_sg_entry sgt[0]; +}; + +static struct caam_drv_ctx *get_drv_ctx(struct caam_ctx *ctx, + enum optype type) +{ + /* + * This function is called on the fast path with values of 'type' + * known at compile time. Invalid arguments are not expected and + * thus no checks are made. + */ + struct caam_drv_ctx *drv_ctx = ctx->drv_ctx[type]; + u32 *desc; + + if (unlikely(!drv_ctx)) { + spin_lock(&ctx->lock); + + /* Read again to check if some other core init drv_ctx */ + drv_ctx = ctx->drv_ctx[type]; + if (!drv_ctx) { + int cpu; + + if (type == ENCRYPT) + desc = ctx->sh_desc_enc; + else if (type == DECRYPT) + desc = ctx->sh_desc_dec; + else /* (type == GIVENCRYPT) */ + desc = ctx->sh_desc_givenc; + + cpu = smp_processor_id(); + drv_ctx = caam_drv_ctx_init(ctx->qidev, &cpu, desc); + if (likely(!IS_ERR_OR_NULL(drv_ctx))) + drv_ctx->op_type = type; + + ctx->drv_ctx[type] = drv_ctx; + } + + spin_unlock(&ctx->lock); + } + + return drv_ctx; +} + +static void caam_unmap(struct device *dev, struct scatterlist *src, + struct scatterlist *dst, int src_nents, + int dst_nents, dma_addr_t iv_dma, int ivsize, + enum optype op_type, dma_addr_t qm_sg_dma, + int qm_sg_bytes) +{ + if (dst != src) { + if (src_nents) + dma_unmap_sg(dev, src, src_nents, DMA_TO_DEVICE); + dma_unmap_sg(dev, dst, dst_nents, DMA_FROM_DEVICE); + } else { + dma_unmap_sg(dev, src, src_nents, DMA_BIDIRECTIONAL); + } + + if (iv_dma) + dma_unmap_single(dev, iv_dma, ivsize, + op_type == GIVENCRYPT ? 
+
+static void caam_unmap(struct device *dev, struct scatterlist *src,
+		       struct scatterlist *dst, int src_nents,
+		       int dst_nents, dma_addr_t iv_dma, int ivsize,
+		       enum optype op_type, dma_addr_t qm_sg_dma,
+		       int qm_sg_bytes)
+{
+	if (dst != src) {
+		if (src_nents)
+			dma_unmap_sg(dev, src, src_nents, DMA_TO_DEVICE);
+		dma_unmap_sg(dev, dst, dst_nents, DMA_FROM_DEVICE);
+	} else {
+		dma_unmap_sg(dev, src, src_nents, DMA_BIDIRECTIONAL);
+	}
+
+	if (iv_dma)
+		dma_unmap_single(dev, iv_dma, ivsize,
+				 op_type == GIVENCRYPT ? DMA_FROM_DEVICE :
+							 DMA_TO_DEVICE);
+	if (qm_sg_bytes)
+		dma_unmap_single(dev, qm_sg_dma, qm_sg_bytes, DMA_TO_DEVICE);
+}
+
+static void aead_unmap(struct device *dev,
+		       struct aead_edesc *edesc,
+		       struct aead_request *req)
+{
+	struct crypto_aead *aead = crypto_aead_reqtfm(req);
+	int ivsize = crypto_aead_ivsize(aead);
+
+	caam_unmap(dev, req->src, req->dst, edesc->src_nents, edesc->dst_nents,
+		   edesc->iv_dma, ivsize, edesc->drv_req.drv_ctx->op_type,
+		   edesc->qm_sg_dma, edesc->qm_sg_bytes);
+	dma_unmap_single(dev, edesc->assoclen_dma, 4, DMA_TO_DEVICE);
+}
+
+static void ablkcipher_unmap(struct device *dev,
+			     struct ablkcipher_edesc *edesc,
+			     struct ablkcipher_request *req)
+{
+	struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
+	int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+
+	caam_unmap(dev, req->src, req->dst, edesc->src_nents, edesc->dst_nents,
+		   edesc->iv_dma, ivsize, edesc->drv_req.drv_ctx->op_type,
+		   edesc->qm_sg_dma, edesc->qm_sg_bytes);
+}
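[A summary of the DMA directions these unmap helpers assume, mirroring
the map sites in the allocators below:

	/* src == dst : one BIDIRECTIONAL mapping of src
	 * src != dst : src TO_DEVICE, dst FROM_DEVICE
	 * IV         : FROM_DEVICE for GIVENCRYPT (hardware writes the
	 *              generated IV back), TO_DEVICE otherwise
	 * S/G table  : always TO_DEVICE
	 */
]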
+
+static void aead_done(struct caam_drv_req *drv_req, u32 status)
+{
+	struct device *qidev;
+	struct aead_edesc *edesc;
+	struct aead_request *aead_req = drv_req->app_ctx;
+	struct crypto_aead *aead = crypto_aead_reqtfm(aead_req);
+	struct caam_ctx *caam_ctx = crypto_aead_ctx(aead);
+	int ecode = 0;
+
+	qidev = caam_ctx->qidev;
+
+	if (unlikely(status)) {
+		caam_jr_strstatus(qidev, status);
+		ecode = -EIO;
+	}
+
+	edesc = container_of(drv_req, typeof(*edesc), drv_req);
+	aead_unmap(qidev, edesc, aead_req);
+
+	aead_request_complete(aead_req, ecode);
+	qi_cache_free(edesc);
+}
+
+/*
+ * allocate and map the aead extended descriptor
+ */
+static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
+					   bool encrypt)
+{
+	struct crypto_aead *aead = crypto_aead_reqtfm(req);
+	struct caam_ctx *ctx = crypto_aead_ctx(aead);
+	struct caam_aead_alg *alg = container_of(crypto_aead_alg(aead),
+						 typeof(*alg), aead);
+	struct device *qidev = ctx->qidev;
+	gfp_t flags = (req->base.flags & (CRYPTO_TFM_REQ_MAY_BACKLOG |
+		       CRYPTO_TFM_REQ_MAY_SLEEP)) ? GFP_KERNEL : GFP_ATOMIC;
+	int src_nents, mapped_src_nents, dst_nents = 0, mapped_dst_nents = 0;
+	struct aead_edesc *edesc;
+	dma_addr_t qm_sg_dma, iv_dma = 0;
+	int ivsize = 0;
+	unsigned int authsize = ctx->authsize;
+	int qm_sg_index = 0, qm_sg_ents = 0, qm_sg_bytes;
+	int in_len, out_len;
+	struct qm_sg_entry *sg_table, *fd_sgt;
+	struct caam_drv_ctx *drv_ctx;
+	enum optype op_type = encrypt ? ENCRYPT : DECRYPT;
+
+	drv_ctx = get_drv_ctx(ctx, op_type);
+	if (unlikely(IS_ERR_OR_NULL(drv_ctx)))
+		return (struct aead_edesc *)drv_ctx;
+
+	/* allocate space for base edesc and hw desc commands, link tables */
+	edesc = qi_cache_alloc(GFP_DMA | flags);
+	if (unlikely(!edesc)) {
+		dev_err(qidev, "could not allocate extended descriptor\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	if (likely(req->src == req->dst)) {
+		src_nents = sg_nents_for_len(req->src, req->assoclen +
+					     req->cryptlen +
+					     (encrypt ? authsize : 0));
+		if (unlikely(src_nents < 0)) {
+			dev_err(qidev, "Insufficient bytes (%d) in src S/G\n",
+				req->assoclen + req->cryptlen +
+				(encrypt ? authsize : 0));
+			qi_cache_free(edesc);
+			return ERR_PTR(src_nents);
+		}
+
+		mapped_src_nents = dma_map_sg(qidev, req->src, src_nents,
+					      DMA_BIDIRECTIONAL);
+		if (unlikely(!mapped_src_nents)) {
+			dev_err(qidev, "unable to map source\n");
+			qi_cache_free(edesc);
+			return ERR_PTR(-ENOMEM);
+		}
+	} else {
+		src_nents = sg_nents_for_len(req->src, req->assoclen +
+					     req->cryptlen);
+		if (unlikely(src_nents < 0)) {
+			dev_err(qidev, "Insufficient bytes (%d) in src S/G\n",
+				req->assoclen + req->cryptlen);
+			qi_cache_free(edesc);
+			return ERR_PTR(src_nents);
+		}
+
+		dst_nents = sg_nents_for_len(req->dst, req->assoclen +
+					     req->cryptlen +
+					     (encrypt ? authsize :
+							(-authsize)));
+		if (unlikely(dst_nents < 0)) {
+			dev_err(qidev, "Insufficient bytes (%d) in dst S/G\n",
+				req->assoclen + req->cryptlen +
+				(encrypt ? authsize : (-authsize)));
+			qi_cache_free(edesc);
+			return ERR_PTR(dst_nents);
+		}
+
+		if (src_nents) {
+			mapped_src_nents = dma_map_sg(qidev, req->src,
+						      src_nents, DMA_TO_DEVICE);
+			if (unlikely(!mapped_src_nents)) {
+				dev_err(qidev, "unable to map source\n");
+				qi_cache_free(edesc);
+				return ERR_PTR(-ENOMEM);
+			}
+		} else {
+			mapped_src_nents = 0;
+		}
+
+		mapped_dst_nents = dma_map_sg(qidev, req->dst, dst_nents,
+					      DMA_FROM_DEVICE);
+		if (unlikely(!mapped_dst_nents)) {
+			dev_err(qidev, "unable to map destination\n");
+			dma_unmap_sg(qidev, req->src, src_nents, DMA_TO_DEVICE);
+			qi_cache_free(edesc);
+			return ERR_PTR(-ENOMEM);
+		}
+	}
+
+	if ((alg->caam.rfc3686 && encrypt) || !alg->caam.geniv) {
+		ivsize = crypto_aead_ivsize(aead);
+		iv_dma = dma_map_single(qidev, req->iv, ivsize, DMA_TO_DEVICE);
+		if (dma_mapping_error(qidev, iv_dma)) {
+			dev_err(qidev, "unable to map IV\n");
+			caam_unmap(qidev, req->src, req->dst, src_nents,
+				   dst_nents, 0, 0, op_type, 0, 0);
+			qi_cache_free(edesc);
+			return ERR_PTR(-ENOMEM);
+		}
+	}
+
+	/*
+	 * Create S/G table: req->assoclen, [IV,] req->src [, req->dst].
+	 * Input is not contiguous.
+	 */
+	qm_sg_ents = 1 + !!ivsize + mapped_src_nents +
+		     (mapped_dst_nents > 1 ? mapped_dst_nents : 0);
+	sg_table = &edesc->sgt[0];
+	qm_sg_bytes = qm_sg_ents * sizeof(*sg_table);
+
+	edesc->src_nents = src_nents;
+	edesc->dst_nents = dst_nents;
+	edesc->iv_dma = iv_dma;
+	edesc->drv_req.app_ctx = req;
+	edesc->drv_req.cbk = aead_done;
+	edesc->drv_req.drv_ctx = drv_ctx;
+
+	edesc->assoclen_dma = dma_map_single(qidev, &req->assoclen, 4,
+					     DMA_TO_DEVICE);
+	if (dma_mapping_error(qidev, edesc->assoclen_dma)) {
+		dev_err(qidev, "unable to map assoclen\n");
+		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
+			   iv_dma, ivsize, op_type, 0, 0);
+		qi_cache_free(edesc);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	dma_to_qm_sg_one(sg_table, edesc->assoclen_dma, 4, 0);
+	qm_sg_index++;
+	if (ivsize) {
+		dma_to_qm_sg_one(sg_table + qm_sg_index, iv_dma, ivsize, 0);
+		qm_sg_index++;
+	}
+	sg_to_qm_sg_last(req->src, mapped_src_nents, sg_table + qm_sg_index, 0);
+	qm_sg_index += mapped_src_nents;
+
+	if (mapped_dst_nents > 1)
+		sg_to_qm_sg_last(req->dst, mapped_dst_nents, sg_table +
+				 qm_sg_index, 0);
+
+	qm_sg_dma = dma_map_single(qidev, sg_table, qm_sg_bytes, DMA_TO_DEVICE);
+	if (dma_mapping_error(qidev, qm_sg_dma)) {
+		dev_err(qidev, "unable to map S/G table\n");
+		dma_unmap_single(qidev, edesc->assoclen_dma, 4, DMA_TO_DEVICE);
+		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
+			   iv_dma, ivsize, op_type, 0, 0);
+		qi_cache_free(edesc);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	edesc->qm_sg_dma = qm_sg_dma;
+	edesc->qm_sg_bytes = qm_sg_bytes;
+
+	out_len = req->assoclen + req->cryptlen +
+		  (encrypt ? ctx->authsize : (-ctx->authsize));
+	in_len = 4 + ivsize + req->assoclen + req->cryptlen;
+
+	fd_sgt = &edesc->drv_req.fd_sgt[0];
+	dma_to_qm_sg_one_last_ext(&fd_sgt[1], qm_sg_dma, in_len, 0);
+
+	if (req->dst == req->src) {
+		if (mapped_src_nents == 1)
+			dma_to_qm_sg_one(&fd_sgt[0], sg_dma_address(req->src),
+					 out_len, 0);
+		else
+			dma_to_qm_sg_one_ext(&fd_sgt[0], qm_sg_dma +
+					     (1 + !!ivsize) * sizeof(*sg_table),
+					     out_len, 0);
+	} else if (mapped_dst_nents == 1) {
+		dma_to_qm_sg_one(&fd_sgt[0], sg_dma_address(req->dst), out_len,
+				 0);
+	} else {
+		dma_to_qm_sg_one_ext(&fd_sgt[0], qm_sg_dma + sizeof(*sg_table) *
+				     qm_sg_index, out_len, 0);
+	}
+
+	return edesc;
+}
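[To make the table construction above concrete, here is the qm_sg layout
for a hypothetical encrypt request with two source and two destination
segments:

	/* sgt[0] assoclen (the 4-byte buffer at edesc->assoclen_dma)
	 * sgt[1] IV
	 * sgt[2] src seg 0
	 * sgt[3] src seg 1  (final bit set by sg_to_qm_sg_last)
	 * sgt[4] dst seg 0
	 * sgt[5] dst seg 1  (final bit set)
	 *
	 * fd_sgt[1] (input) points at sgt[0] for in_len =
	 * 4 + ivsize + assoclen + cryptlen bytes; fd_sgt[0] (output)
	 * points at sgt[4], i.e. qm_sg_dma + qm_sg_index entries in.
	 */
]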
+
+static inline int aead_crypt(struct aead_request *req, bool encrypt)
+{
+	struct aead_edesc *edesc;
+	struct crypto_aead *aead = crypto_aead_reqtfm(req);
+	struct caam_ctx *ctx = crypto_aead_ctx(aead);
+	int ret;
+
+	if (unlikely(caam_congested))
+		return -EAGAIN;
+
+	/* allocate extended descriptor */
+	edesc = aead_edesc_alloc(req, encrypt);
+	if (IS_ERR_OR_NULL(edesc))
+		return PTR_ERR(edesc);
+
+	/* Create and submit job descriptor */
+	ret = caam_qi_enqueue(ctx->qidev, &edesc->drv_req);
+	if (!ret) {
+		ret = -EINPROGRESS;
+	} else {
+		aead_unmap(ctx->qidev, edesc, req);
+		qi_cache_free(edesc);
+	}
+
+	return ret;
+}
+
+static int aead_encrypt(struct aead_request *req)
+{
+	return aead_crypt(req, true);
+}
+
+static int aead_decrypt(struct aead_request *req)
+{
+	return aead_crypt(req, false);
+}
+
+static void ablkcipher_done(struct caam_drv_req *drv_req, u32 status)
+{
+	struct ablkcipher_edesc *edesc;
+	struct ablkcipher_request *req = drv_req->app_ctx;
+	struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
+	struct caam_ctx *caam_ctx = crypto_ablkcipher_ctx(ablkcipher);
+	struct device *qidev = caam_ctx->qidev;
+	int ecode = 0;
+#ifdef DEBUG
+	int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+
+	dev_err(qidev, "%s %d: status 0x%x\n", __func__, __LINE__, status);
+#endif
+
+	edesc = container_of(drv_req, typeof(*edesc), drv_req);
+
+	if (status) {
+		caam_jr_strstatus(qidev, status);
+		ecode = -EIO;
+	}
+
+#ifdef DEBUG
+	print_hex_dump(KERN_ERR, "dstiv  @" __stringify(__LINE__)": ",
+		       DUMP_PREFIX_ADDRESS, 16, 4, req->info,
+		       edesc->src_nents > 1 ? 100 : ivsize, 1);
+	dbg_dump_sg(KERN_ERR, "dst    @" __stringify(__LINE__)": ",
+		    DUMP_PREFIX_ADDRESS, 16, 4, req->dst,
+		    edesc->dst_nents > 1 ? 100 : req->nbytes, 1);
+#endif
+
+	ablkcipher_unmap(qidev, edesc, req);
+	qi_cache_free(edesc);
+
+	ablkcipher_request_complete(req, ecode);
+}
+
+static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+						       *req, bool encrypt)
+{
+	struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req);
+	struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher);
+	struct device *qidev = ctx->qidev;
+	gfp_t flags = (req->base.flags & (CRYPTO_TFM_REQ_MAY_BACKLOG |
+		       CRYPTO_TFM_REQ_MAY_SLEEP)) ? GFP_KERNEL : GFP_ATOMIC;
+	int src_nents, mapped_src_nents, dst_nents = 0, mapped_dst_nents = 0;
+	struct ablkcipher_edesc *edesc;
+	dma_addr_t iv_dma;
+	bool in_contig;
+	int ivsize = crypto_ablkcipher_ivsize(ablkcipher);
+	int dst_sg_idx, qm_sg_ents;
+	struct qm_sg_entry *sg_table, *fd_sgt;
+	struct caam_drv_ctx *drv_ctx;
+	enum optype op_type = encrypt ? ENCRYPT : DECRYPT;
+
+	drv_ctx = get_drv_ctx(ctx, op_type);
+	if (unlikely(IS_ERR_OR_NULL(drv_ctx)))
+		return (struct ablkcipher_edesc *)drv_ctx;
+
+	src_nents = sg_nents_for_len(req->src, req->nbytes);
+	if (unlikely(src_nents < 0)) {
+		dev_err(qidev, "Insufficient bytes (%d) in src S/G\n",
+			req->nbytes);
+		return ERR_PTR(src_nents);
+	}
+
+	if (unlikely(req->src != req->dst)) {
+		dst_nents = sg_nents_for_len(req->dst, req->nbytes);
+		if (unlikely(dst_nents < 0)) {
+			dev_err(qidev, "Insufficient bytes (%d) in dst S/G\n",
+				req->nbytes);
+			return ERR_PTR(dst_nents);
+		}
+
+		mapped_src_nents = dma_map_sg(qidev, req->src, src_nents,
+					      DMA_TO_DEVICE);
+		if (unlikely(!mapped_src_nents)) {
+			dev_err(qidev, "unable to map source\n");
+			return ERR_PTR(-ENOMEM);
+		}
+
+		mapped_dst_nents = dma_map_sg(qidev, req->dst, dst_nents,
+					      DMA_FROM_DEVICE);
+		if (unlikely(!mapped_dst_nents)) {
+			dev_err(qidev, "unable to map destination\n");
+			dma_unmap_sg(qidev, req->src, src_nents, DMA_TO_DEVICE);
+			return ERR_PTR(-ENOMEM);
+		}
+	} else {
+		mapped_src_nents = dma_map_sg(qidev, req->src, src_nents,
+					      DMA_BIDIRECTIONAL);
+		if (unlikely(!mapped_src_nents)) {
+			dev_err(qidev, "unable to map source\n");
+			return ERR_PTR(-ENOMEM);
+		}
+	}
+
+	iv_dma = dma_map_single(qidev, req->info, ivsize, DMA_TO_DEVICE);
+	if (dma_mapping_error(qidev, iv_dma)) {
+		dev_err(qidev, "unable to map IV\n");
+		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0,
+			   0, 0, 0, 0);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	if (mapped_src_nents == 1 &&
+	    iv_dma + ivsize == sg_dma_address(req->src)) {
+		in_contig = true;
+		qm_sg_ents = 0;
+	} else {
+		in_contig = false;
+		qm_sg_ents = 1 + mapped_src_nents;
+	}
+	dst_sg_idx = qm_sg_ents;
+
+	/* allocate space for base edesc and link tables */
+	edesc = qi_cache_alloc(GFP_DMA | flags);
+	if (unlikely(!edesc)) {
+		dev_err(qidev, "could not allocate extended descriptor\n");
+		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
+			   iv_dma, ivsize, op_type, 0, 0);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	edesc->src_nents = src_nents;
+	edesc->dst_nents = dst_nents;
+	edesc->iv_dma = iv_dma;
+	qm_sg_ents += mapped_dst_nents > 1 ? mapped_dst_nents : 0;
+	sg_table = &edesc->sgt[0];
+	edesc->qm_sg_bytes = qm_sg_ents * sizeof(*sg_table);
+	edesc->drv_req.app_ctx = req;
+	edesc->drv_req.cbk = ablkcipher_done;
+	edesc->drv_req.drv_ctx = drv_ctx;
+
+	if (!in_contig) {
+		dma_to_qm_sg_one(sg_table, iv_dma, ivsize, 0);
+		sg_to_qm_sg_last(req->src, mapped_src_nents, sg_table + 1, 0);
+	}
+
+	if (mapped_dst_nents > 1)
+		sg_to_qm_sg_last(req->dst, mapped_dst_nents, sg_table +
+				 dst_sg_idx, 0);
+
+	edesc->qm_sg_dma = dma_map_single(qidev, sg_table, edesc->qm_sg_bytes,
+					  DMA_TO_DEVICE);
+	if (dma_mapping_error(qidev, edesc->qm_sg_dma)) {
+		dev_err(qidev, "unable to map S/G table\n");
+		caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
+			   iv_dma, ivsize, op_type, 0, 0);
+		qi_cache_free(edesc);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	fd_sgt = &edesc->drv_req.fd_sgt[0];
+
+	if (!in_contig)
+		dma_to_qm_sg_one_last_ext(&fd_sgt[1], edesc->qm_sg_dma,
+					  ivsize + req->nbytes, 0);
+	else
+		dma_to_qm_sg_one_last(&fd_sgt[1], iv_dma, ivsize + req->nbytes,
+				      0);
+
+	if (req->src == req->dst) {
+		if (!in_contig)
+			dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma +
+					     sizeof(*sg_table), req->nbytes, 0);
+		else
+			dma_to_qm_sg_one(&fd_sgt[0], sg_dma_address(req->src),
+					 req->nbytes, 0);
+	} else if (mapped_dst_nents > 1) {
+		dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma + dst_sg_idx *
+				     sizeof(*sg_table), req->nbytes, 0);
+	} else {
+		dma_to_qm_sg_one(&fd_sgt[0], sg_dma_address(req->dst),
+				 req->nbytes, 0);
+	}
+
+	return edesc;
+}
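[The in_contig test above is the only case that avoids an input S/G
table: the IV got its own dma_map_single(), and if that buffer ends
exactly where the single source segment begins, the hardware can consume
IV||payload as one flat run:

	/* in_contig == true  <=>  mapped_src_nents == 1 &&
	 *                         iv_dma + ivsize == sg_dma_address(req->src)
	 * then fd_sgt[1] is a plain (non-extension) entry covering
	 * ivsize + req->nbytes bytes starting at iv_dma.
	 */
]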
mapped_dst_nents : 0; + sg_table = &edesc->sgt[0]; + edesc->qm_sg_bytes = qm_sg_ents * sizeof(*sg_table); + edesc->drv_req.app_ctx = req; + edesc->drv_req.cbk = ablkcipher_done; + edesc->drv_req.drv_ctx = drv_ctx; + + if (!in_contig) { + dma_to_qm_sg_one(sg_table, iv_dma, ivsize, 0); + sg_to_qm_sg_last(req->src, mapped_src_nents, sg_table + 1, 0); + } + + if (mapped_dst_nents > 1) + sg_to_qm_sg_last(req->dst, mapped_dst_nents, sg_table + + dst_sg_idx, 0); + + edesc->qm_sg_dma = dma_map_single(qidev, sg_table, edesc->qm_sg_bytes, + DMA_TO_DEVICE); + if (dma_mapping_error(qidev, edesc->qm_sg_dma)) { + dev_err(qidev, "unable to map S/G table\n"); + caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, + iv_dma, ivsize, op_type, 0, 0); + qi_cache_free(edesc); + return ERR_PTR(-ENOMEM); + } + + fd_sgt = &edesc->drv_req.fd_sgt[0]; + + if (!in_contig) + dma_to_qm_sg_one_last_ext(&fd_sgt[1], edesc->qm_sg_dma, + ivsize + req->nbytes, 0); + else + dma_to_qm_sg_one_last(&fd_sgt[1], iv_dma, ivsize + req->nbytes, + 0); + + if (req->src == req->dst) { + if (!in_contig) + dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma + + sizeof(*sg_table), req->nbytes, 0); + else + dma_to_qm_sg_one(&fd_sgt[0], sg_dma_address(req->src), + req->nbytes, 0); + } else if (mapped_dst_nents > 1) { + dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma + dst_sg_idx * + sizeof(*sg_table), req->nbytes, 0); + } else { + dma_to_qm_sg_one(&fd_sgt[0], sg_dma_address(req->dst), + req->nbytes, 0); + } + + return edesc; +} + +static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc( + struct skcipher_givcrypt_request *creq) +{ + struct ablkcipher_request *req = &creq->creq; + struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req); + struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher); + struct device *qidev = ctx->qidev; + gfp_t flags = (req->base.flags & (CRYPTO_TFM_REQ_MAY_BACKLOG | + CRYPTO_TFM_REQ_MAY_SLEEP)) ? 
+ GFP_KERNEL : GFP_ATOMIC; + int src_nents, mapped_src_nents, dst_nents, mapped_dst_nents; + struct ablkcipher_edesc *edesc; + dma_addr_t iv_dma; + bool out_contig; + int ivsize = crypto_ablkcipher_ivsize(ablkcipher); + struct qm_sg_entry *sg_table, *fd_sgt; + int dst_sg_idx, qm_sg_ents; + struct caam_drv_ctx *drv_ctx; + + drv_ctx = get_drv_ctx(ctx, GIVENCRYPT); + if (unlikely(IS_ERR_OR_NULL(drv_ctx))) + return (struct ablkcipher_edesc *)drv_ctx; + + src_nents = sg_nents_for_len(req->src, req->nbytes); + if (unlikely(src_nents < 0)) { + dev_err(qidev, "Insufficient bytes (%d) in src S/G\n", + req->nbytes); + return ERR_PTR(src_nents); + } + + if (unlikely(req->src != req->dst)) { + dst_nents = sg_nents_for_len(req->dst, req->nbytes); + if (unlikely(dst_nents < 0)) { + dev_err(qidev, "Insufficient bytes (%d) in dst S/G\n", + req->nbytes); + return ERR_PTR(dst_nents); + } + + mapped_src_nents = dma_map_sg(qidev, req->src, src_nents, + DMA_TO_DEVICE); + if (unlikely(!mapped_src_nents)) { + dev_err(qidev, "unable to map source\n"); + return ERR_PTR(-ENOMEM); + } + + mapped_dst_nents = dma_map_sg(qidev, req->dst, dst_nents, + DMA_FROM_DEVICE); + if (unlikely(!mapped_dst_nents)) { + dev_err(qidev, "unable to map destination\n"); + dma_unmap_sg(qidev, req->src, src_nents, DMA_TO_DEVICE); + return ERR_PTR(-ENOMEM); + } + } else { + mapped_src_nents = dma_map_sg(qidev, req->src, src_nents, + DMA_BIDIRECTIONAL); + if (unlikely(!mapped_src_nents)) { + dev_err(qidev, "unable to map source\n"); + return ERR_PTR(-ENOMEM); + } + + dst_nents = src_nents; + mapped_dst_nents = src_nents; + } + + iv_dma = dma_map_single(qidev, creq->giv, ivsize, DMA_FROM_DEVICE); + if (dma_mapping_error(qidev, iv_dma)) { + dev_err(qidev, "unable to map IV\n"); + caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, 0, + 0, 0, 0, 0); + return ERR_PTR(-ENOMEM); + } + + qm_sg_ents = mapped_src_nents > 1 ? 
mapped_src_nents : 0; + dst_sg_idx = qm_sg_ents; + if (mapped_dst_nents == 1 && + iv_dma + ivsize == sg_dma_address(req->dst)) { + out_contig = true; + } else { + out_contig = false; + qm_sg_ents += 1 + mapped_dst_nents; + } + + /* allocate space for base edesc and link tables */ + edesc = qi_cache_alloc(GFP_DMA | flags); + if (!edesc) { + dev_err(qidev, "could not allocate extended descriptor\n"); + caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, + iv_dma, ivsize, GIVENCRYPT, 0, 0); + return ERR_PTR(-ENOMEM); + } + + edesc->src_nents = src_nents; + edesc->dst_nents = dst_nents; + edesc->iv_dma = iv_dma; + sg_table = &edesc->sgt[0]; + edesc->qm_sg_bytes = qm_sg_ents * sizeof(*sg_table); + edesc->drv_req.app_ctx = req; + edesc->drv_req.cbk = ablkcipher_done; + edesc->drv_req.drv_ctx = drv_ctx; + + if (mapped_src_nents > 1) + sg_to_qm_sg_last(req->src, mapped_src_nents, sg_table, 0); + + if (!out_contig) { + dma_to_qm_sg_one(sg_table + dst_sg_idx, iv_dma, ivsize, 0); + sg_to_qm_sg_last(req->dst, mapped_dst_nents, sg_table + + dst_sg_idx + 1, 0); + } + + edesc->qm_sg_dma = dma_map_single(qidev, sg_table, edesc->qm_sg_bytes, + DMA_TO_DEVICE); + if (dma_mapping_error(qidev, edesc->qm_sg_dma)) { + dev_err(qidev, "unable to map S/G table\n"); + caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents, + iv_dma, ivsize, GIVENCRYPT, 0, 0); + qi_cache_free(edesc); + return ERR_PTR(-ENOMEM); + } + + fd_sgt = &edesc->drv_req.fd_sgt[0]; + + if (mapped_src_nents > 1) + dma_to_qm_sg_one_ext(&fd_sgt[1], edesc->qm_sg_dma, req->nbytes, + 0); + else + dma_to_qm_sg_one(&fd_sgt[1], sg_dma_address(req->src), + req->nbytes, 0); + + if (!out_contig) + dma_to_qm_sg_one_ext(&fd_sgt[0], edesc->qm_sg_dma + dst_sg_idx * + sizeof(*sg_table), ivsize + req->nbytes, + 0); + else + dma_to_qm_sg_one(&fd_sgt[0], sg_dma_address(req->dst), + ivsize + req->nbytes, 0); + + return edesc; +} + +static inline int ablkcipher_crypt(struct ablkcipher_request *req, bool encrypt) +{ + struct ablkcipher_edesc *edesc; + struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req); + struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher); + int ret; + + if (unlikely(caam_congested)) + return -EAGAIN; + + /* allocate extended descriptor */ + edesc = ablkcipher_edesc_alloc(req, encrypt); + if (IS_ERR(edesc)) + return PTR_ERR(edesc); + + ret = caam_qi_enqueue(ctx->qidev, &edesc->drv_req); + if (!ret) { + ret = -EINPROGRESS; + } else { + ablkcipher_unmap(ctx->qidev, edesc, req); + qi_cache_free(edesc); + } + + return ret; +} + +static int ablkcipher_encrypt(struct ablkcipher_request *req) +{ + return ablkcipher_crypt(req, true); +} + +static int ablkcipher_decrypt(struct ablkcipher_request *req) +{ + return ablkcipher_crypt(req, false); +} + +static int ablkcipher_givencrypt(struct skcipher_givcrypt_request *creq) +{ + struct ablkcipher_request *req = &creq->creq; + struct ablkcipher_edesc *edesc; + struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req); + struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher); + int ret; + + if (unlikely(caam_congested)) + return -EAGAIN; + + /* allocate extended descriptor */ + edesc = ablkcipher_giv_edesc_alloc(creq); + if (IS_ERR(edesc)) + return PTR_ERR(edesc); + + ret = caam_qi_enqueue(ctx->qidev, &edesc->drv_req); + if (!ret) { + ret = -EINPROGRESS; + } else { + ablkcipher_unmap(ctx->qidev, edesc, req); + qi_cache_free(edesc); + } + + return ret; +} + +#define template_ablkcipher template_u.ablkcipher +struct caam_alg_template { + char name[CRYPTO_MAX_ALG_NAME]; + 
+	char driver_name[CRYPTO_MAX_ALG_NAME];
+	unsigned int blocksize;
+	u32 type;
+	union {
+		struct ablkcipher_alg ablkcipher;
+	} template_u;
+	u32 class1_alg_type;
+	u32 class2_alg_type;
+};
+
+static struct caam_alg_template driver_algs[] = {
+	/* ablkcipher descriptor */
+	{
+		.name = "cbc(aes)",
+		.driver_name = "cbc-aes-caam-qi",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_GIVCIPHER,
+		.template_ablkcipher = {
+			.setkey = ablkcipher_setkey,
+			.encrypt = ablkcipher_encrypt,
+			.decrypt = ablkcipher_decrypt,
+			.givencrypt = ablkcipher_givencrypt,
+			.geniv = "<built-in>",
+			.min_keysize = AES_MIN_KEY_SIZE,
+			.max_keysize = AES_MAX_KEY_SIZE,
+			.ivsize = AES_BLOCK_SIZE,
+		},
+		.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+	},
+	{
+		.name = "cbc(des3_ede)",
+		.driver_name = "cbc-3des-caam-qi",
+		.blocksize = DES3_EDE_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_GIVCIPHER,
+		.template_ablkcipher = {
+			.setkey = ablkcipher_setkey,
+			.encrypt = ablkcipher_encrypt,
+			.decrypt = ablkcipher_decrypt,
+			.givencrypt = ablkcipher_givencrypt,
+			.geniv = "<built-in>",
+			.min_keysize = DES3_EDE_KEY_SIZE,
+			.max_keysize = DES3_EDE_KEY_SIZE,
+			.ivsize = DES3_EDE_BLOCK_SIZE,
+		},
+		.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
+	},
+	{
+		.name = "cbc(des)",
+		.driver_name = "cbc-des-caam-qi",
+		.blocksize = DES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_GIVCIPHER,
+		.template_ablkcipher = {
+			.setkey = ablkcipher_setkey,
+			.encrypt = ablkcipher_encrypt,
+			.decrypt = ablkcipher_decrypt,
+			.givencrypt = ablkcipher_givencrypt,
+			.geniv = "<built-in>",
+			.min_keysize = DES_KEY_SIZE,
+			.max_keysize = DES_KEY_SIZE,
+			.ivsize = DES_BLOCK_SIZE,
+		},
+		.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
+	},
+	{
+		.name = "ctr(aes)",
+		.driver_name = "ctr-aes-caam-qi",
+		.blocksize = 1,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = ablkcipher_setkey,
+			.encrypt = ablkcipher_encrypt,
+			.decrypt = ablkcipher_decrypt,
+			.geniv = "chainiv",
+			.min_keysize = AES_MIN_KEY_SIZE,
+			.max_keysize = AES_MAX_KEY_SIZE,
+			.ivsize = AES_BLOCK_SIZE,
+		},
+		.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CTR_MOD128,
+	},
+	{
+		.name = "rfc3686(ctr(aes))",
+		.driver_name = "rfc3686-ctr-aes-caam-qi",
+		.blocksize = 1,
+		.type = CRYPTO_ALG_TYPE_GIVCIPHER,
+		.template_ablkcipher = {
+			.setkey = ablkcipher_setkey,
+			.encrypt = ablkcipher_encrypt,
+			.decrypt = ablkcipher_decrypt,
+			.givencrypt = ablkcipher_givencrypt,
+			.geniv = "<built-in>",
+			.min_keysize = AES_MIN_KEY_SIZE +
+				       CTR_RFC3686_NONCE_SIZE,
+			.max_keysize = AES_MAX_KEY_SIZE +
+				       CTR_RFC3686_NONCE_SIZE,
+			.ivsize = CTR_RFC3686_IV_SIZE,
+		},
+		.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CTR_MOD128,
+	},
+	{
+		.name = "xts(aes)",
+		.driver_name = "xts-aes-caam-qi",
+		.blocksize = AES_BLOCK_SIZE,
+		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
+		.template_ablkcipher = {
+			.setkey = xts_ablkcipher_setkey,
+			.encrypt = ablkcipher_encrypt,
+			.decrypt = ablkcipher_decrypt,
+			.geniv = "eseqiv",
+			.min_keysize = 2 * AES_MIN_KEY_SIZE,
+			.max_keysize = 2 * AES_MAX_KEY_SIZE,
+			.ivsize = AES_BLOCK_SIZE,
+		},
+		.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_XTS,
+	},
+};
+
+static struct caam_aead_alg driver_aeads[] = {
+	/* single-pass ipsec_esp descriptor */
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(md5),cbc(aes))",
+				.cra_driver_name = "authenc-hmac-md5-"
+						   "cbc-aes-caam-qi",
+				.cra_blocksize = AES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = MD5_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_MD5 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(md5),"
+					    "cbc(aes)))",
+				.cra_driver_name = "echainiv-authenc-hmac-md5-"
+						   "cbc-aes-caam-qi",
+				.cra_blocksize = AES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = MD5_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_MD5 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha1),cbc(aes))",
+				.cra_driver_name = "authenc-hmac-sha1-"
+						   "cbc-aes-caam-qi",
+				.cra_blocksize = AES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = SHA1_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA1 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha1),"
+					    "cbc(aes)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha1-cbc-aes-caam-qi",
+				.cra_blocksize = AES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = SHA1_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA1 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		},
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha224),cbc(aes))",
+				.cra_driver_name = "authenc-hmac-sha224-"
+						   "cbc-aes-caam-qi",
+				.cra_blocksize = AES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = SHA224_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA224 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha224),"
+					    "cbc(aes)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha224-cbc-aes-caam-qi",
+				.cra_blocksize = AES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = SHA224_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA224 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha256),cbc(aes))",
+				.cra_driver_name = "authenc-hmac-sha256-"
+						   "cbc-aes-caam-qi",
+				.cra_blocksize = AES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = SHA256_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA256 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha256),"
+					    "cbc(aes)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha256-cbc-aes-"
+						   "caam-qi",
+				.cra_blocksize = AES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = SHA256_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA256 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha384),cbc(aes))",
+				.cra_driver_name = "authenc-hmac-sha384-"
+						   "cbc-aes-caam-qi",
+				.cra_blocksize = AES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = SHA384_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA384 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha384),"
+					    "cbc(aes)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha384-cbc-aes-"
+						   "caam-qi",
+				.cra_blocksize = AES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = SHA384_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA384 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha512),cbc(aes))",
+				.cra_driver_name = "authenc-hmac-sha512-"
+						   "cbc-aes-caam-qi",
+				.cra_blocksize = AES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = SHA512_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA512 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha512),"
+					    "cbc(aes)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha512-cbc-aes-"
+						   "caam-qi",
+				.cra_blocksize = AES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = AES_BLOCK_SIZE,
+			.maxauthsize = SHA512_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_AES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA512 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(md5),cbc(des3_ede))",
+				.cra_driver_name = "authenc-hmac-md5-"
+						   "cbc-des3_ede-caam-qi",
+				.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES3_EDE_BLOCK_SIZE,
+			.maxauthsize = MD5_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_MD5 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(md5),"
+					    "cbc(des3_ede)))",
+				.cra_driver_name = "echainiv-authenc-hmac-md5-"
+						   "cbc-des3_ede-caam-qi",
+				.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES3_EDE_BLOCK_SIZE,
+			.maxauthsize = MD5_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_MD5 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha1),"
+					    "cbc(des3_ede))",
+				.cra_driver_name = "authenc-hmac-sha1-"
+						   "cbc-des3_ede-caam-qi",
+				.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES3_EDE_BLOCK_SIZE,
+			.maxauthsize = SHA1_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA1 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		},
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha1),"
+					    "cbc(des3_ede)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha1-"
+						   "cbc-des3_ede-caam-qi",
+				.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES3_EDE_BLOCK_SIZE,
+			.maxauthsize = SHA1_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA1 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha224),"
+					    "cbc(des3_ede))",
+				.cra_driver_name = "authenc-hmac-sha224-"
+						   "cbc-des3_ede-caam-qi",
+				.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES3_EDE_BLOCK_SIZE,
+			.maxauthsize = SHA224_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA224 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		},
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha224),"
+					    "cbc(des3_ede)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha224-"
+						   "cbc-des3_ede-caam-qi",
+				.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES3_EDE_BLOCK_SIZE,
+			.maxauthsize = SHA224_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA224 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha256),"
+					    "cbc(des3_ede))",
+				.cra_driver_name = "authenc-hmac-sha256-"
+						   "cbc-des3_ede-caam-qi",
+				.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES3_EDE_BLOCK_SIZE,
+			.maxauthsize = SHA256_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA256 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		},
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha256),"
+					    "cbc(des3_ede)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha256-"
+						   "cbc-des3_ede-caam-qi",
+				.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES3_EDE_BLOCK_SIZE,
+			.maxauthsize = SHA256_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA256 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha384),"
+					    "cbc(des3_ede))",
+				.cra_driver_name = "authenc-hmac-sha384-"
"cbc-des3_ede-caam-qi", + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + }, + .setkey = aead_setkey, + .setauthsize = aead_setauthsize, + .encrypt = aead_encrypt, + .decrypt = aead_decrypt, + .ivsize = DES3_EDE_BLOCK_SIZE, + .maxauthsize = SHA384_DIGEST_SIZE, + }, + .caam = { + .class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC, + .class2_alg_type = OP_ALG_ALGSEL_SHA384 | + OP_ALG_AAI_HMAC_PRECOMP, + }, + }, + { + .aead = { + .base = { + .cra_name = "echainiv(authenc(hmac(sha384)," + "cbc(des3_ede)))", + .cra_driver_name = "echainiv-authenc-" + "hmac-sha384-" + "cbc-des3_ede-caam-qi", + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + }, + .setkey = aead_setkey, + .setauthsize = aead_setauthsize, + .encrypt = aead_encrypt, + .decrypt = aead_decrypt, + .ivsize = DES3_EDE_BLOCK_SIZE, + .maxauthsize = SHA384_DIGEST_SIZE, + }, + .caam = { + .class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC, + .class2_alg_type = OP_ALG_ALGSEL_SHA384 | + OP_ALG_AAI_HMAC_PRECOMP, + .geniv = true, + } + }, + { + .aead = { + .base = { + .cra_name = "authenc(hmac(sha512)," + "cbc(des3_ede))", + .cra_driver_name = "authenc-hmac-sha512-" + "cbc-des3_ede-caam-qi", + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + }, + .setkey = aead_setkey, + .setauthsize = aead_setauthsize, + .encrypt = aead_encrypt, + .decrypt = aead_decrypt, + .ivsize = DES3_EDE_BLOCK_SIZE, + .maxauthsize = SHA512_DIGEST_SIZE, + }, + .caam = { + .class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC, + .class2_alg_type = OP_ALG_ALGSEL_SHA512 | + OP_ALG_AAI_HMAC_PRECOMP, + }, + }, + { + .aead = { + .base = { + .cra_name = "echainiv(authenc(hmac(sha512)," + "cbc(des3_ede)))", + .cra_driver_name = "echainiv-authenc-" + "hmac-sha512-" + "cbc-des3_ede-caam-qi", + .cra_blocksize = DES3_EDE_BLOCK_SIZE, + }, + .setkey = aead_setkey, + .setauthsize = aead_setauthsize, + .encrypt = aead_encrypt, + .decrypt = aead_decrypt, + .ivsize = DES3_EDE_BLOCK_SIZE, + .maxauthsize = SHA512_DIGEST_SIZE, + }, + .caam = { + .class1_alg_type = OP_ALG_ALGSEL_3DES | OP_ALG_AAI_CBC, + .class2_alg_type = OP_ALG_ALGSEL_SHA512 | + OP_ALG_AAI_HMAC_PRECOMP, + .geniv = true, + } + }, + { + .aead = { + .base = { + .cra_name = "authenc(hmac(md5),cbc(des))", + .cra_driver_name = "authenc-hmac-md5-" + "cbc-des-caam-qi", + .cra_blocksize = DES_BLOCK_SIZE, + }, + .setkey = aead_setkey, + .setauthsize = aead_setauthsize, + .encrypt = aead_encrypt, + .decrypt = aead_decrypt, + .ivsize = DES_BLOCK_SIZE, + .maxauthsize = MD5_DIGEST_SIZE, + }, + .caam = { + .class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC, + .class2_alg_type = OP_ALG_ALGSEL_MD5 | + OP_ALG_AAI_HMAC_PRECOMP, + }, + }, + { + .aead = { + .base = { + .cra_name = "echainiv(authenc(hmac(md5)," + "cbc(des)))", + .cra_driver_name = "echainiv-authenc-hmac-md5-" + "cbc-des-caam-qi", + .cra_blocksize = DES_BLOCK_SIZE, + }, + .setkey = aead_setkey, + .setauthsize = aead_setauthsize, + .encrypt = aead_encrypt, + .decrypt = aead_decrypt, + .ivsize = DES_BLOCK_SIZE, + .maxauthsize = MD5_DIGEST_SIZE, + }, + .caam = { + .class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC, + .class2_alg_type = OP_ALG_ALGSEL_MD5 | + OP_ALG_AAI_HMAC_PRECOMP, + .geniv = true, + } + }, + { + .aead = { + .base = { + .cra_name = "authenc(hmac(sha1),cbc(des))", + .cra_driver_name = "authenc-hmac-sha1-" + "cbc-des-caam-qi", + .cra_blocksize = DES_BLOCK_SIZE, + }, + .setkey = aead_setkey, + .setauthsize = aead_setauthsize, + .encrypt = aead_encrypt, + .decrypt = aead_decrypt, + .ivsize = DES_BLOCK_SIZE, + .maxauthsize = SHA1_DIGEST_SIZE, + }, + .caam = { + .class1_alg_type = 
+			.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA1 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		},
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha1),"
+					    "cbc(des)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha1-cbc-des-caam-qi",
+				.cra_blocksize = DES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES_BLOCK_SIZE,
+			.maxauthsize = SHA1_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA1 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha224),cbc(des))",
+				.cra_driver_name = "authenc-hmac-sha224-"
+						   "cbc-des-caam-qi",
+				.cra_blocksize = DES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES_BLOCK_SIZE,
+			.maxauthsize = SHA224_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA224 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		},
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha224),"
+					    "cbc(des)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha224-cbc-des-"
+						   "caam-qi",
+				.cra_blocksize = DES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES_BLOCK_SIZE,
+			.maxauthsize = SHA224_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA224 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha256),cbc(des))",
+				.cra_driver_name = "authenc-hmac-sha256-"
+						   "cbc-des-caam-qi",
+				.cra_blocksize = DES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES_BLOCK_SIZE,
+			.maxauthsize = SHA256_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA256 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		},
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha256),"
+					    "cbc(des)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha256-cbc-des-"
+						   "caam-qi",
+				.cra_blocksize = DES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES_BLOCK_SIZE,
+			.maxauthsize = SHA256_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA256 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		},
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha384),cbc(des))",
+				.cra_driver_name = "authenc-hmac-sha384-"
+						   "cbc-des-caam-qi",
+				.cra_blocksize = DES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES_BLOCK_SIZE,
+			.maxauthsize = SHA384_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA384 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		},
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha384),"
+					    "cbc(des)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha384-cbc-des-"
+						   "caam-qi",
+				.cra_blocksize = DES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES_BLOCK_SIZE,
+			.maxauthsize = SHA384_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA384 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "authenc(hmac(sha512),cbc(des))",
+				.cra_driver_name = "authenc-hmac-sha512-"
+						   "cbc-des-caam-qi",
+				.cra_blocksize = DES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES_BLOCK_SIZE,
+			.maxauthsize = SHA512_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA512 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+		}
+	},
+	{
+		.aead = {
+			.base = {
+				.cra_name = "echainiv(authenc(hmac(sha512),"
+					    "cbc(des)))",
+				.cra_driver_name = "echainiv-authenc-"
+						   "hmac-sha512-cbc-des-"
+						   "caam-qi",
+				.cra_blocksize = DES_BLOCK_SIZE,
+			},
+			.setkey = aead_setkey,
+			.setauthsize = aead_setauthsize,
+			.encrypt = aead_encrypt,
+			.decrypt = aead_decrypt,
+			.ivsize = DES_BLOCK_SIZE,
+			.maxauthsize = SHA512_DIGEST_SIZE,
+		},
+		.caam = {
+			.class1_alg_type = OP_ALG_ALGSEL_DES | OP_ALG_AAI_CBC,
+			.class2_alg_type = OP_ALG_ALGSEL_SHA512 |
+					   OP_ALG_AAI_HMAC_PRECOMP,
+			.geniv = true,
+		}
+	},
+};
+
+struct caam_crypto_alg {
+	struct list_head entry;
+	struct crypto_alg crypto_alg;
+	struct caam_alg_entry caam;
+};
+
+static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam)
+{
+	struct caam_drv_private *priv;
+
+	/*
+	 * distribute tfms across job rings to ensure in-order
+	 * crypto request processing per tfm
+	 */
+	ctx->jrdev = caam_jr_alloc();
+	if (IS_ERR(ctx->jrdev)) {
+		pr_err("Job Ring Device allocation for transform failed\n");
+		return PTR_ERR(ctx->jrdev);
+	}
+
+	ctx->key_dma = dma_map_single(ctx->jrdev, ctx->key, sizeof(ctx->key),
+				      DMA_TO_DEVICE);
+	if (dma_mapping_error(ctx->jrdev, ctx->key_dma)) {
+		dev_err(ctx->jrdev, "unable to map key\n");
+		caam_jr_free(ctx->jrdev);
+		return -ENOMEM;
+	}
+
+	/* copy descriptor header template value */
+	ctx->cdata.algtype = OP_TYPE_CLASS1_ALG | caam->class1_alg_type;
+	ctx->adata.algtype = OP_TYPE_CLASS2_ALG | caam->class2_alg_type;
+
+	priv = dev_get_drvdata(ctx->jrdev->parent);
+	ctx->qidev = priv->qidev;
+
+	spin_lock_init(&ctx->lock);
+	ctx->drv_ctx[ENCRYPT] = NULL;
+	ctx->drv_ctx[DECRYPT] = NULL;
+	ctx->drv_ctx[GIVENCRYPT] = NULL;
+
+	return 0;
+}
+
+static int caam_cra_init(struct crypto_tfm *tfm)
+{
+	struct crypto_alg *alg = tfm->__crt_alg;
+	struct caam_crypto_alg *caam_alg = container_of(alg, typeof(*caam_alg),
+							crypto_alg);
+	struct caam_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	return caam_init_common(ctx, &caam_alg->caam);
+}
+
+static int caam_aead_init(struct crypto_aead *tfm)
+{
+	struct aead_alg *alg = crypto_aead_alg(tfm);
+	struct caam_aead_alg *caam_alg = container_of(alg, typeof(*caam_alg),
+						      aead);
+	struct caam_ctx *ctx = crypto_aead_ctx(tfm);
+
+	return caam_init_common(ctx, &caam_alg->caam);
+}
+
+static void caam_exit_common(struct caam_ctx *ctx)
+{
+	caam_drv_ctx_rel(ctx->drv_ctx[ENCRYPT]);
+	caam_drv_ctx_rel(ctx->drv_ctx[DECRYPT]);
+	caam_drv_ctx_rel(ctx->drv_ctx[GIVENCRYPT]);
+
+	dma_unmap_single(ctx->jrdev, ctx->key_dma, sizeof(ctx->key),
+			 DMA_TO_DEVICE);
+
+	caam_jr_free(ctx->jrdev);
+}
+
+static void caam_cra_exit(struct crypto_tfm *tfm)
+{
+	caam_exit_common(crypto_tfm_ctx(tfm));
+}
+
+static void caam_aead_exit(struct crypto_aead *tfm)
+{
+	caam_exit_common(crypto_aead_ctx(tfm));
+}
+
+static struct list_head alg_list;
+static void __exit caam_qi_algapi_exit(void)
+{
+	struct caam_crypto_alg *t_alg, *n;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(driver_aeads); i++) {
+		struct caam_aead_alg *t_alg = driver_aeads + i;
+
+		if (t_alg->registered)
+			crypto_unregister_aead(&t_alg->aead);
+	}
+
+	if (!alg_list.next)
+		return;
+
+	list_for_each_entry_safe(t_alg, n, &alg_list, entry) {
+		crypto_unregister_alg(&t_alg->crypto_alg);
+		list_del(&t_alg->entry);
+		kfree(t_alg);
+	}
+}
+
+static struct caam_crypto_alg *caam_alg_alloc(struct caam_alg_template
+					      *template)
+{
+	struct caam_crypto_alg *t_alg;
+	struct crypto_alg *alg;
+
+	t_alg = kzalloc(sizeof(*t_alg), GFP_KERNEL);
+	if (!t_alg)
+		return ERR_PTR(-ENOMEM);
+
+	alg = &t_alg->crypto_alg;
+
+	snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", template->name);
+	snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
+		 template->driver_name);
+	alg->cra_module = THIS_MODULE;
+	alg->cra_init = caam_cra_init;
+	alg->cra_exit = caam_cra_exit;
+	alg->cra_priority = CAAM_CRA_PRIORITY;
+	alg->cra_blocksize = template->blocksize;
+	alg->cra_alignmask = 0;
+	alg->cra_ctxsize = sizeof(struct caam_ctx);
+	alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY |
+			 template->type;
+	switch (template->type) {
+	case CRYPTO_ALG_TYPE_GIVCIPHER:
+		alg->cra_type = &crypto_givcipher_type;
+		alg->cra_ablkcipher = template->template_ablkcipher;
+		break;
+	case CRYPTO_ALG_TYPE_ABLKCIPHER:
+		alg->cra_type = &crypto_ablkcipher_type;
+		alg->cra_ablkcipher = template->template_ablkcipher;
+		break;
+	}
+
+	t_alg->caam.class1_alg_type = template->class1_alg_type;
+	t_alg->caam.class2_alg_type = template->class2_alg_type;
+
+	return t_alg;
+}
+
+static void caam_aead_alg_init(struct caam_aead_alg *t_alg)
+{
+	struct aead_alg *alg = &t_alg->aead;
+
+	alg->base.cra_module = THIS_MODULE;
+	alg->base.cra_priority = CAAM_CRA_PRIORITY;
+	alg->base.cra_ctxsize = sizeof(struct caam_ctx);
+	alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY;
+
+	alg->init = caam_aead_init;
+	alg->exit = caam_aead_exit;
+}
+
+static int __init caam_qi_algapi_init(void)
+{
+	struct device_node *dev_node;
+	struct platform_device *pdev;
+	struct device *ctrldev;
+	struct caam_drv_private *priv;
+	int i = 0, err = 0;
+	u32 cha_vid, cha_inst, des_inst, aes_inst, md_inst;
+	unsigned int md_limit = SHA512_DIGEST_SIZE;
+	bool registered = false;
+
+	dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
+	if (!dev_node) {
+		dev_node = of_find_compatible_node(NULL, NULL, "fsl,sec4.0");
+		if (!dev_node)
+			return -ENODEV;
+	}
+
+	pdev = of_find_device_by_node(dev_node);
+	of_node_put(dev_node);
+	if (!pdev)
+		return -ENODEV;
+
+	ctrldev = &pdev->dev;
+	priv = dev_get_drvdata(ctrldev);
+
+	/*
+	 * If priv is NULL, it's probably because the caam driver wasn't
+	 * properly initialized (e.g. RNG4 init failed). Thus, bail out here.
+	 */
+	if (!priv || !priv->qi_present)
+		return -ENODEV;
+
+	INIT_LIST_HEAD(&alg_list);
+
+	/*
+	 * Register crypto algorithms the device supports.
+	 * First, detect presence and attributes of DES, AES, and MD blocks.
+	 */
+	cha_vid = rd_reg32(&priv->ctrl->perfmon.cha_id_ls);
+	cha_inst = rd_reg32(&priv->ctrl->perfmon.cha_num_ls);
+	des_inst = (cha_inst & CHA_ID_LS_DES_MASK) >> CHA_ID_LS_DES_SHIFT;
+	aes_inst = (cha_inst & CHA_ID_LS_AES_MASK) >> CHA_ID_LS_AES_SHIFT;
+	md_inst = (cha_inst & CHA_ID_LS_MD_MASK) >> CHA_ID_LS_MD_SHIFT;
+
+	/* If MD is present, limit digest size based on LP256 */
+	if (md_inst && ((cha_vid & CHA_ID_LS_MD_MASK) == CHA_ID_LS_MD_LP256))
+		md_limit = SHA256_DIGEST_SIZE;
+
+	for (i = 0; i < ARRAY_SIZE(driver_algs); i++) {
+		struct caam_crypto_alg *t_alg;
+		struct caam_alg_template *alg = driver_algs + i;
+		u32 alg_sel = alg->class1_alg_type & OP_ALG_ALGSEL_MASK;
+
+		/* Skip DES algorithms if not supported by device */
+		if (!des_inst &&
+		    ((alg_sel == OP_ALG_ALGSEL_3DES) ||
+		     (alg_sel == OP_ALG_ALGSEL_DES)))
+			continue;
+
+		/* Skip AES algorithms if not supported by device */
+		if (!aes_inst && (alg_sel == OP_ALG_ALGSEL_AES))
+			continue;
+
+		t_alg = caam_alg_alloc(alg);
+		if (IS_ERR(t_alg)) {
+			err = PTR_ERR(t_alg);
+			dev_warn(priv->qidev, "%s alg allocation failed\n",
+				 alg->driver_name);
+			continue;
+		}
+
+		err = crypto_register_alg(&t_alg->crypto_alg);
+		if (err) {
+			dev_warn(priv->qidev, "%s alg registration failed\n",
+				 t_alg->crypto_alg.cra_driver_name);
+			kfree(t_alg);
+			continue;
+		}
+
+		list_add_tail(&t_alg->entry, &alg_list);
+		registered = true;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(driver_aeads); i++) {
+		struct caam_aead_alg *t_alg = driver_aeads + i;
+		u32 c1_alg_sel = t_alg->caam.class1_alg_type &
+				 OP_ALG_ALGSEL_MASK;
+		u32 c2_alg_sel = t_alg->caam.class2_alg_type &
+				 OP_ALG_ALGSEL_MASK;
+		u32 alg_aai = t_alg->caam.class1_alg_type & OP_ALG_AAI_MASK;
+
+		/* Skip DES algorithms if not supported by device */
+		if (!des_inst &&
+		    ((c1_alg_sel == OP_ALG_ALGSEL_3DES) ||
+		     (c1_alg_sel == OP_ALG_ALGSEL_DES)))
+			continue;
+
+		/* Skip AES algorithms if not supported by device */
+		if (!aes_inst && (c1_alg_sel == OP_ALG_ALGSEL_AES))
+			continue;
+
+		/*
+		 * Check support for AES algorithms not available
+		 * on LP devices.
+		 */
+		if (((cha_vid & CHA_ID_LS_AES_MASK) == CHA_ID_LS_AES_LP) &&
+		    (alg_aai == OP_ALG_AAI_GCM))
+			continue;
+
+		/*
+		 * Skip algorithms requiring message digests
+		 * if MD or MD size is not supported by device.
+		 */
+		if (c2_alg_sel &&
+		    (!md_inst || (t_alg->aead.maxauthsize > md_limit)))
+			continue;
+
+		caam_aead_alg_init(t_alg);
+
+		err = crypto_register_aead(&t_alg->aead);
+		if (err) {
+			pr_warn("%s alg registration failed\n",
+				t_alg->aead.base.cra_driver_name);
+			continue;
+		}
+
+		t_alg->registered = true;
+		registered = true;
+	}
+
+	if (registered)
+		dev_info(priv->qidev, "algorithms registered in /proc/crypto\n");
+
+	return err;
+}
+
+module_init(caam_qi_algapi_init);
+module_exit(caam_qi_algapi_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Support for crypto API using CAAM-QI backend");
+MODULE_AUTHOR("Freescale Semiconductor");
diff --git a/drivers/crypto/caam/sg_sw_qm.h b/drivers/crypto/caam/sg_sw_qm.h
new file mode 100644
index 000000000000..65b0f3c5670c
--- /dev/null
+++ b/drivers/crypto/caam/sg_sw_qm.h
@@ -0,0 +1,107 @@
+/*
+ * Copyright 2013-2016 Freescale Semiconductor, Inc.
+ * Copyright 2016-2017 NXP
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in the
+ *       documentation and/or other materials provided with the distribution.
+ *     * Neither the name of Freescale Semiconductor nor the
+ *       names of its contributors may be used to endorse or promote products
+ *       derived from this software without specific prior written permission.
+ *
+ *
+ * ALTERNATIVELY, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") as published by the Free Software
+ * Foundation, either version 2 of that License or (at your option) any
+ * later version.
+ *
+ * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __SG_SW_QM_H
+#define __SG_SW_QM_H
+
+#include <soc/fsl/qman.h>
+#include "regs.h"
+
+static inline void __dma_to_qm_sg(struct qm_sg_entry *qm_sg_ptr, dma_addr_t dma,
+				  u16 offset)
+{
+	qm_sg_entry_set64(qm_sg_ptr, dma);
+	qm_sg_ptr->__reserved2 = 0;
+	qm_sg_ptr->bpid = 0;
+	qm_sg_entry_set_off(qm_sg_ptr, offset);
+}
+
+static inline void dma_to_qm_sg_one(struct qm_sg_entry *qm_sg_ptr,
+				    dma_addr_t dma, u32 len, u16 offset)
+{
+	__dma_to_qm_sg(qm_sg_ptr, dma, offset);
+	qm_sg_entry_set_len(qm_sg_ptr, len);
+}
+
+static inline void dma_to_qm_sg_one_last(struct qm_sg_entry *qm_sg_ptr,
+					 dma_addr_t dma, u32 len, u16 offset)
+{
+	__dma_to_qm_sg(qm_sg_ptr, dma, offset);
+	qm_sg_entry_set_f(qm_sg_ptr, len);
+}
+
+static inline void dma_to_qm_sg_one_ext(struct qm_sg_entry *qm_sg_ptr,
+					dma_addr_t dma, u32 len, u16 offset)
+{
+	__dma_to_qm_sg(qm_sg_ptr, dma, offset);
+	qm_sg_entry_set_e(qm_sg_ptr, len);
+}
+
+static inline void dma_to_qm_sg_one_last_ext(struct qm_sg_entry *qm_sg_ptr,
+					     dma_addr_t dma, u32 len,
+					     u16 offset)
+{
+	__dma_to_qm_sg(qm_sg_ptr, dma, offset);
+	qm_sg_entry_set_ef(qm_sg_ptr, len);
+}
+
+/*
+ * convert scatterlist to h/w link table format
+ * but does not have final bit; instead, returns last entry
+ */
+static inline struct qm_sg_entry *
+sg_to_qm_sg(struct scatterlist *sg, int sg_count,
+	    struct qm_sg_entry *qm_sg_ptr, u16 offset)
+{
+	while (sg_count && sg) {
+		dma_to_qm_sg_one(qm_sg_ptr, sg_dma_address(sg),
+				 sg_dma_len(sg), offset);
+		qm_sg_ptr++;
+		sg = sg_next(sg);
+		sg_count--;
+	}
+	return qm_sg_ptr - 1;
+}
+
+/*
+ * convert scatterlist to h/w link table format
+ * scatterlist must have been previously dma mapped
+ */
+static inline void sg_to_qm_sg_last(struct scatterlist *sg, int sg_count,
+				    struct qm_sg_entry *qm_sg_ptr, u16 offset)
+{
+	qm_sg_ptr = sg_to_qm_sg(sg, sg_count, qm_sg_ptr, offset);
+	qm_sg_entry_set_f(qm_sg_ptr, qm_sg_entry_get_len(qm_sg_ptr));
+}
+
+#endif /* __SG_SW_QM_H */
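
For illustration only (not part of the patch), a minimal sketch of how the
sg_sw_qm.h helpers compose from a caller's point of view. build_qm_sg_table()
is a hypothetical function, kcalloc() stands in for the driver's actual
qi_cache_alloc()-based allocation, and the scatterlist is assumed to have
already been DMA-mapped with dma_map_sg():

static struct qm_sg_entry *build_qm_sg_table(struct scatterlist *sg,
					     int sg_count, gfp_t flags)
{
	struct qm_sg_entry *sg_table;

	/* one h/w link table entry per DMA-mapped segment */
	sg_table = kcalloc(sg_count, sizeof(*sg_table), flags);
	if (!sg_table)
		return NULL;

	/*
	 * sg_to_qm_sg_last() fills one entry per segment and sets the
	 * Final bit on the last one, as the QMan link table format requires.
	 */
	sg_to_qm_sg_last(sg, sg_count, sg_table, 0);

	return sg_table;
}

In the driver itself the table instead lives inside the extended descriptor
(edesc->sgt[]) and is mapped with dma_map_single() before being handed to
QMan, as in ablkcipher_giv_edesc_alloc() above.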