From patchwork Sat Sep 19 09:02:16 2015
X-Patchwork-Id: 7221941
X-Patchwork-Delegate: herbert@gondor.apana.org.au
From: Alex Porosanu <alexandru.porosanu@freescale.com>
Subject: [PATCH v3] crypto/caam: add backlogging support
Date: Sat, 19 Sep 2015 12:02:16 +0300
Message-ID: <1442653336-8527-1-git-send-email-alexandru.porosanu@freescale.com>
X-Mailing-List: linux-crypto@vger.kernel.org
The caam_jr_enqueue() function returns -EBUSY once there are no more
slots available in the JR, but it doesn't actually save the current
request. This breaks users that expect the request to be at least
queued for later execution when there is no more space for it. In
other words, all crypto transformations that request backlogging
(i.e. have CRYPTO_TFM_REQ_MAY_BACKLOG set) will hang; dm-crypt is one
such example.

This patch solves the issue by setting a threshold of remaining JR
slots; once it is crossed, caam_jr_enqueue() returns -EBUSY, but since
the HW job ring isn't actually full, the job is still enqueued.

Caveat: if the users of the driver don't obey the API contract, which
states that once -EBUSY is received no more requests are to be sent,
the driver will eventually reject the enqueues. For well-behaved
Crypto API users, like dm-crypt, this is not a problem, since the
processing thread sleeps once -EBUSY is received.
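For reference, the calling convention this relies on looks roughly as
follows from the user's side. This is an illustrative sketch only, not
part of the patch: the demo_* names are hypothetical, and the flow is
modeled on dm-crypt's crypt_convert()/kcryptd_async_done() handling of
the two-phase backlog completion.

#include <linux/completion.h>
#include <linux/crypto.h>

struct demo_ctx {			/* hypothetical per-request context */
	struct completion restart;	/* fires when a backlogged job enters the HW */
	struct completion done;		/* fires on final completion */
	int err;
};

/* Completion callback: invoked twice for a backlogged request. */
static void demo_complete(struct crypto_async_request *req, int err)
{
	struct demo_ctx *ctx = req->data;

	if (err == -EINPROGRESS) {
		/* The backlogged request has entered the hardware;
		 * the submitter may resume issuing requests. */
		complete(&ctx->restart);
		return;
	}

	ctx->err = err;
	complete(&ctx->done);
}

/* Submission path: sleep on -EBUSY instead of treating it as an error. */
static int demo_encrypt(struct demo_ctx *ctx, struct ablkcipher_request *req)
{
	int ret;

	init_completion(&ctx->restart);
	init_completion(&ctx->done);
	ablkcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
					demo_complete, ctx);

	ret = crypto_ablkcipher_encrypt(req);
	if (ret == -EBUSY)		/* backlogged: back off and wait */
		wait_for_completion(&ctx->restart);
	if (ret == -EBUSY || ret == -EINPROGRESS) {
		wait_for_completion(&ctx->done);	/* final status */
		ret = ctx->err;
	}

	return ret;
}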
Signed-off-by: Alex Porosanu <alexandru.porosanu@freescale.com>
---
v3:
- as per Herbert's observation, allow only as many MAY_BACKLOG
  transformations to be affined to a JR as there are backlogging
  slots; the total number of transformations that can be allocated
  this way equals the number of JRs times the number of "backlogging
  slots". In the standard configuration, this means 16 x 4 = 64
  transformations.

v2:
- added backlogging support for hash as well (caamhash)
- simplified some convoluted logic in *_done_* callbacks
- simplified backlogging entries addition in jr.c
- made the number of backlogging entries depend on the JR size
- fixed wrong function call for ablkcipher (backlogging instead of
  'normal')
---
 drivers/crypto/caam/caamalg.c  | 112 +++++++++++++++---
 drivers/crypto/caam/caamhash.c | 113 +++++++++++++++---
 drivers/crypto/caam/intern.h   |  13 +++
 drivers/crypto/caam/jr.c       | 258 ++++++++++++++++++++++++++++++++++-------
 drivers/crypto/caam/jr.h       |   7 ++
 5 files changed, 432 insertions(+), 71 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index ba79d63..65d797d 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -1815,6 +1815,9 @@ static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
 
 	edesc = container_of(desc, struct aead_edesc, hw_desc[0]);
 
+	if (err == -EINPROGRESS)
+		goto out_bklogged;
+
 	if (err)
 		caam_jr_strstatus(jrdev, err);
 
@@ -1822,6 +1825,7 @@ static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
 
 	kfree(edesc);
 
+out_bklogged:
 	aead_request_complete(req, err);
 }
 
@@ -1837,6 +1841,9 @@ static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
 
 	edesc = container_of(desc, struct aead_edesc, hw_desc[0]);
 
+	if (err == -EINPROGRESS)
+		goto out_bklogged;
+
 	if (err)
 		caam_jr_strstatus(jrdev, err);
 
@@ -1850,6 +1857,7 @@ static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
 
 	kfree(edesc);
 
+out_bklogged:
 	aead_request_complete(req, err);
 }
 
@@ -1864,10 +1872,12 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
 	dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
 #endif
-
 	edesc = (struct ablkcipher_edesc *)((char *)desc -
 		 offsetof(struct ablkcipher_edesc, hw_desc));
 
+	if (err == -EINPROGRESS)
+		goto out_bklogged;
+
 	if (err)
 		caam_jr_strstatus(jrdev, err);
 
@@ -1883,6 +1893,7 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
 	ablkcipher_unmap(jrdev, edesc, req);
 	kfree(edesc);
 
+out_bklogged:
 	ablkcipher_request_complete(req, err);
 }
 
@@ -1900,6 +1911,9 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
 	edesc = (struct ablkcipher_edesc *)((char *)desc -
 		 offsetof(struct ablkcipher_edesc, hw_desc));
 
+	if (err == -EINPROGRESS)
+		goto out_bklogged;
+
 	if (err)
 		caam_jr_strstatus(jrdev, err);
 
@@ -1915,6 +1929,7 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
 	ablkcipher_unmap(jrdev, edesc, req);
 	kfree(edesc);
 
+out_bklogged:
 	ablkcipher_request_complete(req, err);
 }
 
@@ -2294,7 +2309,15 @@ static int gcm_encrypt(struct aead_request *req)
 #endif
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc, aead_encrypt_done,
+					    req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
+	}
+
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -2338,7 +2361,15 @@ static int aead_encrypt(struct aead_request *req)
 #endif
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc, aead_encrypt_done,
+					    req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
+	}
+
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -2373,7 +2404,15 @@ static int gcm_decrypt(struct aead_request *req)
 #endif
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc, aead_decrypt_done,
+					    req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
+	}
+
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -2423,7 +2462,15 @@ static int aead_decrypt(struct aead_request *req)
 #endif
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc, aead_decrypt_done,
+					    req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
+	}
+
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -2575,7 +2622,15 @@ static int ablkcipher_encrypt(struct ablkcipher_request *req)
 			   desc_bytes(edesc->hw_desc), 1);
 #endif
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, ablkcipher_encrypt_done, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc,
+					    ablkcipher_encrypt_done, req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, ablkcipher_encrypt_done,
+				      req);
+	}
 
 	if (!ret) {
 		ret = -EINPROGRESS;
@@ -2612,15 +2667,22 @@ static int ablkcipher_decrypt(struct ablkcipher_request *req)
 		       DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
 		       desc_bytes(edesc->hw_desc), 1);
 #endif
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc,
+					    ablkcipher_decrypt_done, req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, ablkcipher_decrypt_done,
+				      req);
+	}
 
-	ret = caam_jr_enqueue(jrdev, desc, ablkcipher_decrypt_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
 		ablkcipher_unmap(jrdev, edesc, req);
 		kfree(edesc);
 	}
-
 	return ret;
 }
 
@@ -2757,7 +2819,15 @@ static int ablkcipher_givencrypt(struct skcipher_givcrypt_request *creq)
 			   desc_bytes(edesc->hw_desc), 1);
 #endif
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, ablkcipher_encrypt_done, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc,
+					    ablkcipher_encrypt_done, req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, ablkcipher_encrypt_done,
+				      req);
+	}
 
 	if (!ret) {
 		ret = -EINPROGRESS;
@@ -4215,9 +4285,10 @@ struct caam_crypto_alg {
 	struct caam_alg_entry caam;
 };
 
-static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam)
+static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam,
+			    bool bklog)
 {
-	ctx->jrdev = caam_jr_alloc();
+	ctx->jrdev = bklog ? caam_jr_alloc_bklog() : caam_jr_alloc();
 	if (IS_ERR(ctx->jrdev)) {
 		pr_err("Job Ring Device allocation for transform failed\n");
 		return PTR_ERR(ctx->jrdev);
@@ -4238,7 +4309,8 @@ static int caam_cra_init(struct crypto_tfm *tfm)
 		container_of(alg, struct caam_crypto_alg, crypto_alg);
 	struct caam_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	return caam_init_common(ctx, &caam_alg->caam);
+	return caam_init_common(ctx, &caam_alg->caam,
+				tfm->crt_flags & CRYPTO_TFM_REQ_MAY_BACKLOG);
 }
 
 static int caam_aead_init(struct crypto_aead *tfm)
@@ -4248,10 +4320,11 @@ static int caam_aead_init(struct crypto_aead *tfm)
 		container_of(alg, struct caam_aead_alg, aead);
 	struct caam_ctx *ctx = crypto_aead_ctx(tfm);
 
-	return caam_init_common(ctx, &caam_alg->caam);
+	return caam_init_common(ctx, &caam_alg->caam,
+				tfm->base.crt_flags & CRYPTO_TFM_REQ_MAY_BACKLOG);
 }
 
-static void caam_exit_common(struct caam_ctx *ctx)
+static void caam_exit_common(struct caam_ctx *ctx, bool bklog)
 {
 	if (ctx->sh_desc_enc_dma &&
 	    !dma_mapping_error(ctx->jrdev, ctx->sh_desc_enc_dma))
@@ -4272,17 +4345,22 @@ static void caam_exit_common(struct caam_ctx *ctx)
 				 ctx->enckeylen + ctx->split_key_pad_len,
 				 DMA_TO_DEVICE);
 
-	caam_jr_free(ctx->jrdev);
+	if (bklog)
+		caam_jr_free_bklog(ctx->jrdev);
+	else
+		caam_jr_free(ctx->jrdev);
 }
 
 static void caam_cra_exit(struct crypto_tfm *tfm)
 {
-	caam_exit_common(crypto_tfm_ctx(tfm));
+	caam_exit_common(crypto_tfm_ctx(tfm),
+			 tfm->crt_flags & CRYPTO_TFM_REQ_MAY_BACKLOG);
 }
 
 static void caam_aead_exit(struct crypto_aead *tfm)
 {
-	caam_exit_common(crypto_aead_ctx(tfm));
+	caam_exit_common(crypto_aead_ctx(tfm),
+			 tfm->base.crt_flags & CRYPTO_TFM_REQ_MAY_BACKLOG);
 }
 
 static void __exit caam_algapi_exit(void)
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 72acf8e..10bbd3c 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -645,6 +645,10 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
 
 	edesc = (struct ahash_edesc *)((char *)desc -
 		 offsetof(struct ahash_edesc, hw_desc));
+
+	if (err == -EINPROGRESS)
+		goto out_bklogged;
+
 	if (err)
 		caam_jr_strstatus(jrdev, err);
 
@@ -661,6 +665,7 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
 				   digestsize, 1);
 #endif
 
+out_bklogged:
 	req->base.complete(&req->base, err);
 }
 
@@ -680,6 +685,9 @@ static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
 	edesc = (struct ahash_edesc *)((char *)desc -
 		 offsetof(struct ahash_edesc, hw_desc));
 
+	if (err == -EINPROGRESS)
+		goto out_bklogged;
+
 	if (err)
 		caam_jr_strstatus(jrdev, err);
 
@@ -695,7 +703,7 @@ static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
 			       DUMP_PREFIX_ADDRESS, 16, 4, req->result,
 			       digestsize, 1);
 #endif
-
+out_bklogged:
 	req->base.complete(&req->base, err);
 }
 
@@ -715,6 +723,9 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
 	edesc = (struct ahash_edesc *)((char *)desc -
 		 offsetof(struct ahash_edesc, hw_desc));
 
+	if (err == -EINPROGRESS)
+		goto out_bklogged;
+
 	if (err)
 		caam_jr_strstatus(jrdev, err);
 
@@ -730,7 +741,7 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
 			       DUMP_PREFIX_ADDRESS, 16, 4, req->result,
 			       digestsize, 1);
 #endif
-
+out_bklogged:
 	req->base.complete(&req->base, err);
 }
 
@@ -750,6 +761,9 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
 	edesc = (struct ahash_edesc *)((char *)desc -
 		 offsetof(struct ahash_edesc, hw_desc));
 
+	if (err == -EINPROGRESS)
+		goto out_bklogged;
+
 	if (err)
 		caam_jr_strstatus(jrdev, err);
 
@@ -765,7 +779,7 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
 			       DUMP_PREFIX_ADDRESS, 16, 4, req->result,
 			       digestsize, 1);
 #endif
-
+out_bklogged:
 	req->base.complete(&req->base, err);
 }
 
@@ -870,7 +884,15 @@ static int ahash_update_ctx(struct ahash_request *req)
 					 desc_bytes(desc), 1);
 #endif
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc, ahash_done_bi,
+					    req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, req);
+	}
+
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -966,7 +988,15 @@ static int ahash_final_ctx(struct ahash_request *req)
 			       DUMP_PREFIX_ADDRESS, 16, 4, desc,
 			       desc_bytes(desc), 1);
 #endif
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc, ahash_done_ctx_src,
+					    req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
+	}
+
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1056,7 +1086,15 @@ static int ahash_finup_ctx(struct ahash_request *req)
 			       DUMP_PREFIX_ADDRESS, 16, 4, desc,
 			       desc_bytes(desc), 1);
 #endif
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc, ahash_done_ctx_src,
+					    req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
+	}
+
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1135,7 +1173,14 @@ static int ahash_digest(struct ahash_request *req)
 			       DUMP_PREFIX_ADDRESS, 16, 4, desc,
 			       desc_bytes(desc), 1);
 #endif
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc, ahash_done, req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	}
+
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1197,7 +1242,14 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 			       DUMP_PREFIX_ADDRESS, 16, 4, desc,
 			       desc_bytes(desc), 1);
 #endif
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc, ahash_done, req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	}
+
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1296,7 +1348,16 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 					 desc_bytes(desc), 1);
 #endif
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc,
+					    ahash_done_ctx_dst, req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
+				      req);
+	}
+
 	if (!ret) {
 		ret = -EINPROGRESS;
 		state->update = ahash_update_ctx;
@@ -1398,7 +1459,15 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 			       DUMP_PREFIX_ADDRESS, 16, 4, desc,
 			       desc_bytes(desc), 1);
 #endif
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc, ahash_done,
+					    req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	}
+
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1501,8 +1570,16 @@ static int ahash_update_first(struct ahash_request *req)
 					 desc_bytes(desc), 1);
 #endif
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
-			      req);
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG) {
+		ret = caam_jr_enqueue_bklog(jrdev, desc,
+					    ahash_done_ctx_dst, req);
+		if (ret == -EBUSY)
+			return ret;
+	} else {
+		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst,
+				      req);
+	}
+
 	if (!ret) {
 		ret = -EINPROGRESS;
 		state->update = ahash_update_ctx;
@@ -1768,7 +1845,11 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
 	 * Get a Job ring from Job Ring driver to ensure in-order
 	 * crypto request processing per tfm
 	 */
-	ctx->jrdev = caam_jr_alloc();
+	if (tfm->crt_flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
+		ctx->jrdev = caam_jr_alloc_bklog();
+	else
+		ctx->jrdev = caam_jr_alloc();
+
 	if (IS_ERR(ctx->jrdev)) {
 		pr_err("Job Ring Device allocation for transform failed\n");
 		return PTR_ERR(ctx->jrdev);
@@ -1815,8 +1896,10 @@ static void caam_hash_cra_exit(struct crypto_tfm *tfm)
 	    !dma_mapping_error(ctx->jrdev, ctx->sh_desc_finup_dma))
 		dma_unmap_single(ctx->jrdev, ctx->sh_desc_finup_dma,
 				 desc_bytes(ctx->sh_desc_finup), DMA_TO_DEVICE);
-
-	caam_jr_free(ctx->jrdev);
+	if (tfm->crt_flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
+		caam_jr_free_bklog(ctx->jrdev);
+	else
+		caam_jr_free(ctx->jrdev);
 }
 
 static void __exit caam_algapi_hash_exit(void)
diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h
index e2bcacc..6606200 100644
--- a/drivers/crypto/caam/intern.h
+++ b/drivers/crypto/caam/intern.h
@@ -11,6 +11,12 @@
 /* Currently comes from Kconfig param as a ^2 (driver-required) */
 #define JOBR_DEPTH (1 << CONFIG_CRYPTO_DEV_FSL_CAAM_RINGSIZE)
 
+/*
+ * If the user tries to enqueue a job and the number of slots available
+ * is less than this value, then the job will be backlogged (if the user
+ * allows for it) or it will be dropped.
+ */
+#define JOBR_THRESH ((JOBR_DEPTH / 32) ? JOBR_DEPTH / 32 : 2)
 
 /* Kconfig params for interrupt coalescing if selected (else zero) */
 #ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_INTC
@@ -33,6 +39,7 @@ struct caam_jrentry_info {
 	u32 *desc_addr_virt;	/* Stored virt addr for postprocessing */
 	dma_addr_t desc_addr_dma;	/* Stored bus addr for done matching */
 	u32 desc_size;	/* Stored size for postprocessing, header derived */
+	bool is_backlogged; /* True if the request has been backlogged */
 };
 
 /* Private sub-storage for a single JobR */
@@ -47,6 +54,12 @@ struct caam_drv_private_jr {
 	/* Number of scatterlist crypt transforms active on the JobR */
 	atomic_t tfm_count ____cacheline_aligned;
 
+	/*
+	 * Number of backlogging-enabled scatterlist crypt transforms active
+	 * on the JobR
+	 */
+	atomic_t bklog_tfm_count ____cacheline_aligned;
+
 	/* Job ring info */
 	int ringsize;	/* Size of rings (assume input = output) */
 	struct caam_jrentry_info *entinfo;	/* Alloc'ed 1 per ring entry */
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index f7e0d8d..0961038 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -168,6 +168,7 @@ static void caam_jr_dequeue(unsigned long devarg)
 	void (*usercall)(struct device *dev, u32 *desc, u32 status, void *arg);
 	u32 *userdesc, userstatus;
 	void *userarg;
+	bool is_backlogged;
 
 	while (rd_reg32(&jrp->rregs->outring_used)) {
 
@@ -201,6 +202,7 @@ static void caam_jr_dequeue(unsigned long devarg)
 		userarg = jrp->entinfo[sw_idx].cbkarg;
 		userdesc = jrp->entinfo[sw_idx].desc_addr_virt;
 		userstatus = jrp->outring[hw_idx].jrstatus;
+		is_backlogged = jrp->entinfo[sw_idx].is_backlogged;
 
 		/*
 		 * Make sure all information from the job has been obtained
@@ -231,6 +233,20 @@ static void caam_jr_dequeue(unsigned long devarg)
 
 		spin_unlock(&jrp->outlock);
 
+		if (is_backlogged)
+			/*
+			 * For backlogged requests, the user callback needs to
+			 * be called twice: once when the request's processing
+			 * starts (with a status of -EINPROGRESS) and once when
+			 * it is done. Since SEC cheats by enqueuing the request
+			 * in its HW ring but returning -EBUSY, the time when
+			 * the request's processing has started is not known.
+			 * Thus notify the user here. The second call is on the
+			 * normal path (i.e. the one that is taken even for
+			 * non-backlogged requests).
+			 */
+			usercall(dev, userdesc, -EINPROGRESS, userarg);
+
 		/* Finally, execute user's callback */
 		usercall(dev, userdesc, userstatus, userarg);
 	}
@@ -240,6 +256,60 @@ static void caam_jr_dequeue(unsigned long devarg)
 }
 
 /**
+ * caam_jr_alloc_bklog() - Alloc a job ring for someone to use for backloggable
+ * requests.
+ * Note: A maximum of JOBR_THRESH backloggable transformations can be allocated
+ * on a job ring.
+ *
+ * returns :  pointer to the newly allocated physical
+ *	      JobR dev can be written to if successful.
+ **/
+struct device *caam_jr_alloc_bklog(void)
+{
+	struct caam_drv_private_jr *jrpriv, *min_jrpriv = NULL;
+	struct device *dev = ERR_PTR(-ENODEV);
+	int min_tfm_cnt = INT_MAX;
+	int tfm_cnt, bklog_tfm_cnt;
+
+	spin_lock(&driver_data.jr_alloc_lock);
+
+	if (list_empty(&driver_data.jr_list)) {
+		spin_unlock(&driver_data.jr_alloc_lock);
+		return ERR_PTR(-ENODEV);
+	}
+
+	list_for_each_entry(jrpriv, &driver_data.jr_list, list_node) {
+		bklog_tfm_cnt = atomic_read(&jrpriv->bklog_tfm_count);
+		tfm_cnt = atomic_read(&jrpriv->tfm_count);
+
+		/*
+		 * Don't allow more than the # of available slots for
+		 * backlogging transformations on this JR.
+		 */
+		if (bklog_tfm_cnt == JOBR_THRESH)
+			continue;
+
+		if (tfm_cnt < min_tfm_cnt) {
+			min_tfm_cnt = tfm_cnt;
+			min_jrpriv = jrpriv;
+		}
+
+		if (!min_tfm_cnt)
+			break;
+	}
+
+	if (min_jrpriv) {
+		atomic_inc(&min_jrpriv->bklog_tfm_count);
+		atomic_inc(&min_jrpriv->tfm_count);
+		dev = min_jrpriv->dev;
+	}
+	spin_unlock(&driver_data.jr_alloc_lock);
+
+	return dev;
+}
+EXPORT_SYMBOL(caam_jr_alloc_bklog);
+
+/**
  * caam_jr_alloc() - Alloc a job ring for someone to use as needed.
  *
  * returns :  pointer to the newly allocated physical
@@ -280,6 +350,21 @@ struct device *caam_jr_alloc(void)
 EXPORT_SYMBOL(caam_jr_alloc);
 
 /**
+ * caam_jr_free_bklog() - Free a Job Ring on which a backloggable request
+ * has been allocated.
+ * @rdev - points to the dev that identifies the Job ring on which the
+ *	   backloggable request has been allocated.
+ **/
+void caam_jr_free_bklog(struct device *rdev)
+{
+	struct caam_drv_private_jr *jrpriv = dev_get_drvdata(rdev);
+
+	atomic_dec(&jrpriv->bklog_tfm_count);
+	atomic_dec(&jrpriv->tfm_count);
+}
+EXPORT_SYMBOL(caam_jr_free_bklog);
+
+/**
  * caam_jr_free() - Free the Job Ring
  * @rdev - points to the dev that identifies the Job ring to
  *	   be released.
@@ -292,6 +377,83 @@ void caam_jr_free(struct device *rdev)
 }
 EXPORT_SYMBOL(caam_jr_free);
 
+static inline int __caam_jr_enqueue(struct caam_drv_private_jr *jrp, u32 *desc,
+				    int desc_size, dma_addr_t desc_dma,
+				    void (*cbk)(struct device *dev, u32 *desc,
+						u32 status, void *areq),
+				    void *areq,
+				    bool can_be_backlogged)
+{
+	int head, tail;
+	struct caam_jrentry_info *head_entry;
+	int ret = 0, hw_slots, sw_slots;
+
+	spin_lock_bh(&jrp->inplock);
+
+	head = jrp->head;
+	tail = ACCESS_ONCE(jrp->tail);
+
+	head_entry = &jrp->entinfo[head];
+
+	/* Reset backlogging status here */
+	head_entry->is_backlogged = false;
+
+	hw_slots = rd_reg32(&jrp->rregs->inpring_avail);
+	sw_slots = CIRC_SPACE(head, tail, JOBR_DEPTH);
+
+	if (hw_slots <= JOBR_THRESH || sw_slots <= JOBR_THRESH) {
+		/*
+		 * The state below can be reached in three cases:
+		 * 1) A badly behaved backlogging user doesn't back off when
+		 *    told so by the -EBUSY return code
+		 * 2) More than JOBR_THRESH backlogging users send requests
+		 * 3) Due to the high system load, the entries reserved for the
+		 *    backlogging users are being filled (slowly) in between
+		 *    the successive calls to the user callback (the first one
+		 *    with -EINPROGRESS and the 2nd one with the real result).
+		 * The code below is a last-resort measure which will DROP
+		 * any request if there is physically no more space. This will
+		 * lead to data-loss for disk-related users.
+		 */
+		if (!hw_slots || !sw_slots) {
+			ret = -EIO;
+			goto out_unlock;
+		}
+
+		ret = -EBUSY;
+		if (!can_be_backlogged)
+			goto out_unlock;
+
+		head_entry->is_backlogged = true;
+	}
+
+	head_entry->desc_addr_virt = desc;
+	head_entry->desc_size = desc_size;
+	head_entry->callbk = (void *)cbk;
+	head_entry->cbkarg = areq;
+	head_entry->desc_addr_dma = desc_dma;
+
+	jrp->inpring[jrp->inp_ring_write_index] = desc_dma;
+
+	/*
+	 * Guarantee that the descriptor's DMA address has been written to
+	 * the next slot in the ring before the write index is updated, since
+	 * other cores may update this index independently.
+	 */
+	smp_wmb();
+
+	jrp->inp_ring_write_index = (jrp->inp_ring_write_index + 1) &
+				    (JOBR_DEPTH - 1);
+	jrp->head = (head + 1) & (JOBR_DEPTH - 1);
+
+	wr_reg32(&jrp->rregs->inpring_jobadd, 1);
+
+out_unlock:
+	spin_unlock_bh(&jrp->inplock);
+
+	return ret;
+}
+
 /**
  * caam_jr_enqueue() - Enqueue a job descriptor head. Returns 0 if OK,
  * -EBUSY if the queue is full, -EIO if it cannot map the caller's
@@ -326,8 +488,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
 		    void *areq)
 {
 	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
-	struct caam_jrentry_info *head_entry;
-	int head, tail, desc_size;
+	int desc_size, ret;
 	dma_addr_t desc_dma;
 
 	desc_size = (*desc & HDR_JD_LENGTH_MASK) * sizeof(u32);
@@ -337,51 +498,70 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
 		return -EIO;
 	}
 
-	spin_lock_bh(&jrp->inplock);
-
-	head = jrp->head;
-	tail = ACCESS_ONCE(jrp->tail);
-
-	if (!rd_reg32(&jrp->rregs->inpring_avail) ||
-	    CIRC_SPACE(head, tail, JOBR_DEPTH) <= 0) {
-		spin_unlock_bh(&jrp->inplock);
+	ret = __caam_jr_enqueue(jrp, desc, desc_size, desc_dma, cbk, areq,
+				false);
+	if (unlikely(ret))
 		dma_unmap_single(dev, desc_dma, desc_size, DMA_TO_DEVICE);
-		return -EBUSY;
-	}
 
-	head_entry = &jrp->entinfo[head];
-	head_entry->desc_addr_virt = desc;
-	head_entry->desc_size = desc_size;
-	head_entry->callbk = (void *)cbk;
-	head_entry->cbkarg = areq;
-	head_entry->desc_addr_dma = desc_dma;
-
-	jrp->inpring[jrp->inp_ring_write_index] = desc_dma;
-
-	/*
-	 * Guarantee that the descriptor's DMA address has been written to
-	 * the next slot in the ring before the write index is updated, since
-	 * other cores may update this index independently.
-	 */
-	smp_wmb();
+	return ret;
+}
+EXPORT_SYMBOL(caam_jr_enqueue);
 
-	jrp->inp_ring_write_index = (jrp->inp_ring_write_index + 1) &
-				    (JOBR_DEPTH - 1);
-	jrp->head = (head + 1) & (JOBR_DEPTH - 1);
+/**
+ * caam_jr_enqueue_bklog() - Enqueue a job descriptor head, returns 0 if OK, or
+ *	-EBUSY if the number of available entries in the Job Ring is less
+ *	than the threshold configured through JOBR_THRESH, and -EIO if it
+ *	cannot map the caller's descriptor or if there is really no more
+ *	space in the hardware job ring.
+ * @dev:   device of the job ring to be used. This device should have
+ *	   been assigned prior by caam_jr_register().
+ * @desc:  points to a job descriptor that executes our request. All
+ *	   descriptors (and all referenced data) must be in a DMAable
+ *	   region, and all data references must be physical addresses
+ *	   accessible to CAAM (i.e. within a PAMU window granted
+ *	   to it).
+ * @cbk:   pointer to a callback function to be invoked upon completion
+ *	   of this request. This has the form:
+ *	   callback(struct device *dev, u32 *desc, u32 stat, void *arg)
+ *	   where:
+ *	   @dev:    contains the job ring device that processed this
+ *		    response.
+ *	   @desc:   descriptor that initiated the request, same as
+ *		    "desc" being argued to caam_jr_enqueue().
+ *	   @status: untranslated status received from CAAM. See the
+ *		    reference manual for a detailed description of
+ *		    error meaning, or see the JRSTA definitions in the
+ *		    register header file.
+ *	   @areq:   optional pointer to an argument passed with the
+ *		    original request
+ * @areq:  optional pointer to a user argument for use at callback
+ *	   time.
+ **/
+int caam_jr_enqueue_bklog(struct device *dev, u32 *desc,
+			  void (*cbk)(struct device *dev, u32 *desc,
+				      u32 status, void *areq),
+			  void *areq)
+{
+	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
+	int desc_size, ret;
+	dma_addr_t desc_dma;
 
-	/*
-	 * Ensure that all job information has been written before
-	 * notifying CAAM that a new job was added to the input ring.
-	 */
-	wmb();
+	desc_size = (*desc & HDR_JD_LENGTH_MASK) * sizeof(u32);
+	desc_dma = dma_map_single(dev, desc, desc_size, DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, desc_dma)) {
+		dev_err(dev, "caam_jr_enqueue(): can't map jobdesc\n");
+		return -EIO;
+	}
 
-	wr_reg32(&jrp->rregs->inpring_jobadd, 1);
+	ret = __caam_jr_enqueue(jrp, desc, desc_size, desc_dma, cbk, areq,
+				true);
+	if (unlikely(ret && (ret != -EBUSY)))
+		dma_unmap_single(dev, desc_dma, desc_size, DMA_TO_DEVICE);
 
-	spin_unlock_bh(&jrp->inplock);
+	return ret;
 
-	return 0;
 }
-EXPORT_SYMBOL(caam_jr_enqueue);
+EXPORT_SYMBOL(caam_jr_enqueue_bklog);
 
 /*
  * Init JobR independent of platform property detection
diff --git a/drivers/crypto/caam/jr.h b/drivers/crypto/caam/jr.h
index 97113a6..7f0cd83 100644
--- a/drivers/crypto/caam/jr.h
+++ b/drivers/crypto/caam/jr.h
@@ -9,10 +9,17 @@
 
 /* Prototypes for backend-level services exposed to APIs */
 struct device *caam_jr_alloc(void);
+struct device *caam_jr_alloc_bklog(void);
 void caam_jr_free(struct device *rdev);
+void caam_jr_free_bklog(struct device *rdev);
 int caam_jr_enqueue(struct device *dev, u32 *desc,
 		    void (*cbk)(struct device *dev, u32 *desc, u32 status,
 				void *areq),
 		    void *areq);
+int caam_jr_enqueue_bklog(struct device *dev, u32 *desc,
+			  void (*cbk)(struct device *dev, u32 *desc, u32 status,
+				      void *areq),
+			  void *areq);
+
 #endif /* JR_H */
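As a footnote to the jr.c changes: the reservation policy implemented by
__caam_jr_enqueue() can be condensed into a small stand-alone model. The
sketch below is illustrative only and not part of the patch; CIRC_CNT()
and CIRC_SPACE() are copied from include/linux/circ_buf.h, and the
constants assume the default CONFIG_CRYPTO_DEV_FSL_CAAM_RINGSIZE of 9,
i.e. a ring depth of 512 and therefore a threshold of 512 / 32 = 16
(hence the "16 backlogging slots per JR" figure in the v3 changelog).

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Copied from include/linux/circ_buf.h */
#define CIRC_CNT(head, tail, size)   (((head) - (tail)) & ((size) - 1))
#define CIRC_SPACE(head, tail, size) CIRC_CNT((tail), ((head) + 1), (size))

#define JOBR_DEPTH  512	/* 1 << CONFIG_CRYPTO_DEV_FSL_CAAM_RINGSIZE (default 9) */
#define JOBR_THRESH ((JOBR_DEPTH / 32) ? JOBR_DEPTH / 32 : 2)	/* = 16 */

/* Mirrors the slot accounting of __caam_jr_enqueue() for one submission. */
static int enqueue_decision(int head, int tail, int hw_slots,
			    bool may_backlog, bool *enqueued)
{
	int sw_slots = CIRC_SPACE(head, tail, JOBR_DEPTH);

	*enqueued = false;
	if (hw_slots <= JOBR_THRESH || sw_slots <= JOBR_THRESH) {
		if (!hw_slots || !sw_slots)
			return -EIO;	/* physically full: job dropped */
		if (!may_backlog)
			return -EBUSY;	/* rejected, job NOT enqueued */
		*enqueued = true;	/* backlogged: enqueued anyway... */
		return -EBUSY;		/* ...but the caller must back off */
	}
	*enqueued = true;
	return 0;			/* normal enqueue */
}

int main(void)
{
	bool enq;
	int ret;

	/* 17 free HW slots: above the threshold, normal enqueue */
	ret = enqueue_decision(0, 0, 17, true, &enq);
	printf("ret=%d enqueued=%d\n", ret, enq);	/* ret=0 enqueued=1 */

	/* 16 free HW slots: inside the reserved band */
	ret = enqueue_decision(0, 0, 16, true, &enq);
	printf("ret=%d enqueued=%d\n", ret, enq);	/* ret=-16 enqueued=1 */
	ret = enqueue_decision(0, 0, 16, false, &enq);
	printf("ret=%d enqueued=%d\n", ret, enq);	/* ret=-16 enqueued=0 */

	/* no free HW slots at all: dropped regardless of backlogging */
	ret = enqueue_decision(0, 0, 0, true, &enq);
	printf("ret=%d enqueued=%d\n", ret, enq);	/* ret=-5 enqueued=0 */

	return 0;
}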