From patchwork Fri Apr 22 05:19:07 2022
X-Patchwork-Submitter: Steven Lee
X-Patchwork-Id: 12822744
From: Steven Lee
To: Cédric Le Goater, Peter Maydell, Andrew Jeffery, Joel Stanley,
    Thomas Huth, Laurent Vivier, Paolo Bonzini,
    "open list:ASPEED BMCs", "open list:All patches CC here"
Cc: jamin_lin@aspeedtech.com, troy_lee@aspeedtech.com, steven_lee@aspeedtech.com
Subject: [PATCH v5 1/3] aspeed/hace: Support HMAC Key Buffer register.
Date: Fri, 22 Apr 2022 13:19:07 +0800
Message-ID: <20220422051909.27540-2-steven_lee@aspeedtech.com>
In-Reply-To: <20220422051909.27540-1-steven_lee@aspeedtech.com>
References: <20220422051909.27540-1-steven_lee@aspeedtech.com>

Support HACE28: Hash HMAC Key Buffer Base Address Register.
Signed-off-by: Troy Lee
Signed-off-by: Steven Lee
Reviewed-by: Cédric Le Goater
---
 hw/misc/aspeed_hace.c         | 7 +++++++
 include/hw/misc/aspeed_hace.h | 1 +
 2 files changed, 8 insertions(+)

diff --git a/hw/misc/aspeed_hace.c b/hw/misc/aspeed_hace.c
index 10f00e65f4..59fe5bfca2 100644
--- a/hw/misc/aspeed_hace.c
+++ b/hw/misc/aspeed_hace.c
@@ -27,6 +27,7 @@
 
 #define R_HASH_SRC      (0x20 / 4)
 #define R_HASH_DEST     (0x24 / 4)
+#define R_HASH_KEY_BUFF (0x28 / 4)
 #define R_HASH_SRC_LEN  (0x2c / 4)
 
 #define R_HASH_CMD      (0x30 / 4)
@@ -210,6 +211,9 @@ static void aspeed_hace_write(void *opaque, hwaddr addr, uint64_t data,
     case R_HASH_DEST:
         data &= ahc->dest_mask;
         break;
+    case R_HASH_KEY_BUFF:
+        data &= ahc->key_mask;
+        break;
     case R_HASH_SRC_LEN:
         data &= 0x0FFFFFFF;
         break;
@@ -333,6 +337,7 @@ static void aspeed_ast2400_hace_class_init(ObjectClass *klass, void *data)
 
     ahc->src_mask = 0x0FFFFFFF;
     ahc->dest_mask = 0x0FFFFFF8;
+    ahc->key_mask = 0x0FFFFFC0;
     ahc->hash_mask = 0x000003ff; /* No SG or SHA512 modes */
 }
 
@@ -351,6 +356,7 @@ static void aspeed_ast2500_hace_class_init(ObjectClass *klass, void *data)
 
     ahc->src_mask = 0x3fffffff;
     ahc->dest_mask = 0x3ffffff8;
+    ahc->key_mask = 0x3FFFFFC0;
     ahc->hash_mask = 0x000003ff; /* No SG or SHA512 modes */
 }
 
@@ -369,6 +375,7 @@ static void aspeed_ast2600_hace_class_init(ObjectClass *klass, void *data)
 
     ahc->src_mask = 0x7FFFFFFF;
     ahc->dest_mask = 0x7FFFFFF8;
+    ahc->key_mask = 0x7FFFFFF8;
     ahc->hash_mask = 0x00147FFF;
 }
 
diff --git a/include/hw/misc/aspeed_hace.h b/include/hw/misc/aspeed_hace.h
index 94d5ada95f..2242945eb4 100644
--- a/include/hw/misc/aspeed_hace.h
+++ b/include/hw/misc/aspeed_hace.h
@@ -37,6 +37,7 @@ struct AspeedHACEClass {
 
     uint32_t src_mask;
     uint32_t dest_mask;
+    uint32_t key_mask;
     uint32_t hash_mask;
 };
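The new HACE28 register behaves like the existing source and digest address
registers: the write handler masks the guest-supplied address with a per-SoC
key_mask before storing it, so low-order bits are dropped on readback. Below
is a minimal qtest-style sketch of that behaviour. The register offset (0x28)
and the AST2600 mask (0x7FFFFFF8) come from this patch; the test name, base
address and scaffolding mirror the existing aspeed_hace-test.c cases and are
illustrative only, not part of the series.

/*
 * Illustrative sketch only: verify that a write to HACE28 (key buffer base)
 * is truncated by the AST2600 key_mask when read back, in the style of the
 * existing test_addresses() case in tests/qtest/aspeed_hace-test.c.
 */
#include "qemu/osdep.h"
#include "libqtest.h"

#define HACE_HASH_KEY_BUFF 0x28

static void check_key_buff_mask(void)
{
    QTestState *s = qtest_init("-machine ast2600-evb");
    const uint32_t base = 0x1e6d0000;   /* AST2600 HACE MMIO base */

    /* All-ones write: bits outside 0x7FFFFFF8 should read back as zero */
    qtest_writel(s, base + HACE_HASH_KEY_BUFF, 0xffffffff);
    g_assert_cmphex(qtest_readl(s, base + HACE_HASH_KEY_BUFF), ==, 0x7ffffff8);

    qtest_quit(s);
}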
here" Subject: [PATCH v5 2/3] aspeed/hace: Support AST2600 HACE Date: Fri, 22 Apr 2022 13:19:08 +0800 Message-ID: <20220422051909.27540-3-steven_lee@aspeedtech.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20220422051909.27540-1-steven_lee@aspeedtech.com> References: <20220422051909.27540-1-steven_lee@aspeedtech.com> MIME-Version: 1.0 X-Originating-IP: [192.168.70.100] X-ClientProxiedBy: TWMBX02.aspeed.com (192.168.0.24) To TWMBX02.aspeed.com (192.168.0.24) X-DNSRBL: X-MAIL: twspam01.aspeedtech.com 23M5703p058497 Received-SPF: pass client-ip=211.20.114.71; envelope-from=steven_lee@aspeedtech.com; helo=twspam01.aspeedtech.com X-Spam_score_int: -18 X-Spam_score: -1.9 X-Spam_bar: - X-Spam_report: (-1.9 / 5.0 requ) BAYES_00=-1.9, SPF_HELO_NONE=0.001, SPF_PASS=-0.001, T_SCC_BODY_TEXT_LINE=-0.01 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: jamin_lin@aspeedtech.com, troy_lee@aspeedtech.com, steven_lee@aspeedtech.com Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: "Qemu-devel" The aspeed ast2600 accumulative mode is described in datasheet ast2600v10.pdf section 25.6.4: 1. Allocating and initiating accumulative hash digest write buffer with initial state. * Since QEMU crypto/hash api doesn't provide the API to set initial state of hash library, and the initial state is already set by crypto library (gcrypt/glib/...), so skip this step. 2. Calculating accumulative hash digest. (a) When receiving the last accumulative data, software need to add padding message at the end of the accumulative data. Padding message described in specific of MD5, SHA-1, SHA224, SHA256, SHA512, SHA512/224, SHA512/256. * Since the crypto library (gcrypt/glib) already pad the padding message internally. * This patch is to remove the padding message which fed byguest machine driver. Signed-off-by: Troy Lee Signed-off-by: Steven Lee --- hw/misc/aspeed_hace.c | 132 ++++++++++++++++++++++++++++++++-- include/hw/misc/aspeed_hace.h | 4 ++ 2 files changed, 131 insertions(+), 5 deletions(-) diff --git a/hw/misc/aspeed_hace.c b/hw/misc/aspeed_hace.c index 59fe5bfca2..3164f99996 100644 --- a/hw/misc/aspeed_hace.c +++ b/hw/misc/aspeed_hace.c @@ -65,7 +65,6 @@ #define SG_LIST_ADDR_SIZE 4 #define SG_LIST_ADDR_MASK 0x7FFFFFFF #define SG_LIST_ENTRY_SIZE (SG_LIST_LEN_SIZE + SG_LIST_ADDR_SIZE) -#define ASPEED_HACE_MAX_SG 256 /* max number of entries */ static const struct { uint32_t mask; @@ -95,11 +94,104 @@ static int hash_algo_lookup(uint32_t reg) return -1; } -static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode) +/** + * Check whether the request contains padding message. + * + * @param s aspeed hace state object + * @param iov iov of current request + * @param req_len length of the current request + * @param total_msg_len length of all acc_mode requests(excluding padding msg) + * @param pad_offset start offset of padding message + */ +static bool has_padding(AspeedHACEState *s, struct iovec *iov, + hwaddr req_len, uint32_t *total_msg_len, + uint32_t *pad_offset) +{ + *total_msg_len = (uint32_t)(ldq_be_p(iov->iov_base + req_len - 8) / 8); + /* + * SG_LIST_LEN_LAST asserted in the request length doesn't mean it is the + * last request. The last request should contain padding message. + * We check whether message contains padding by + * 1. Get total message length. 
+     *      If the current message contains padding, the last 8 bytes are
+     *      the total message length.
+     *   2. Check whether the total message length is valid.
+     *      If it is valid, the value should be less than or equal to
+     *      total_req_len.
+     *   3. Current request len - padding_size gives the padding offset.
+     *      The padding message's first byte should be 0x80.
+     */
+    if (*total_msg_len <= s->total_req_len) {
+        uint32_t padding_size = s->total_req_len - *total_msg_len;
+        uint8_t *padding = iov->iov_base;
+        *pad_offset = req_len - padding_size;
+        if (padding[*pad_offset] == 0x80) {
+            return true;
+        }
+    }
+
+    return false;
+}
+
+static int reconstruct_iov(AspeedHACEState *s, struct iovec *iov, int id,
+                           uint32_t *pad_offset)
+{
+    int i, iov_count;
+    if (pad_offset != 0) {
+        s->iov_cache[s->iov_count].iov_base = iov[id].iov_base;
+        s->iov_cache[s->iov_count].iov_len = *pad_offset;
+        ++s->iov_count;
+    }
+    for (i = 0; i < s->iov_count; i++) {
+        iov[i].iov_base = s->iov_cache[i].iov_base;
+        iov[i].iov_len = s->iov_cache[i].iov_len;
+    }
+    iov_count = s->iov_count;
+    s->iov_count = 0;
+    s->total_req_len = 0;
+    return iov_count;
+}
+
+/**
+ * Generate iov for accumulative mode.
+ *
+ * @param s aspeed hace state object
+ * @param iov iov of the current request
+ * @param id index of the current iov
+ * @param req_len length of the current request
+ *
+ * @return count of iov
+ */
+static int gen_acc_mode_iov(AspeedHACEState *s, struct iovec *iov, int id,
+                            hwaddr *req_len)
+{
+    uint32_t pad_offset;
+    uint32_t total_msg_len;
+    s->total_req_len += *req_len;
+
+    if (has_padding(s, &iov[id], *req_len, &total_msg_len, &pad_offset)) {
+        if (s->iov_count) {
+            return reconstruct_iov(s, iov, id, &pad_offset);
+        }
+
+        *req_len -= s->total_req_len - total_msg_len;
+        s->total_req_len = 0;
+        iov[id].iov_len = *req_len;
+    } else {
+        s->iov_cache[s->iov_count].iov_base = iov->iov_base;
+        s->iov_cache[s->iov_count].iov_len = *req_len;
+        ++s->iov_count;
+    }
+
+    return id + 1;
+}
+
+static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode,
+                              bool acc_mode)
 {
     struct iovec iov[ASPEED_HACE_MAX_SG];
     g_autofree uint8_t *digest_buf;
     size_t digest_len = 0;
+    int niov = 0;
     int i;
 
     if (sg_mode) {
@@ -124,10 +216,16 @@ static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode)
                                                 MEMTXATTRS_UNSPECIFIED, NULL);
             addr &= SG_LIST_ADDR_MASK;
 
-            iov[i].iov_len = len & SG_LIST_LEN_MASK;
-            plen = iov[i].iov_len;
+            plen = len & SG_LIST_LEN_MASK;
             iov[i].iov_base = address_space_map(&s->dram_as, addr, &plen, false,
                                                 MEMTXATTRS_UNSPECIFIED);
+
+            if (acc_mode) {
+                niov = gen_acc_mode_iov(s, iov, i, &plen);
+
+            } else {
+                iov[i].iov_len = plen;
+            }
         }
     } else {
         hwaddr len = s->regs[R_HASH_SRC_LEN];
@@ -137,6 +235,25 @@ static void do_hash_operation(AspeedHACEState *s, int algo, bool sg_mode)
                                         &len, false,
                                         MEMTXATTRS_UNSPECIFIED);
         i = 1;
+
+        if (s->iov_count) {
+            /*
+             * In the Aspeed SDK kernel driver, sg_mode is disabled in
+             * hash_final(). Thus, if we receive a request with sg_mode
+             * disabled, we must check whether the cache is empty. If it is
+             * not, we should combine the cached iov and the current iov.
+             */
+            uint32_t total_msg_len;
+            uint32_t pad_offset;
+            s->total_req_len += len;
+            if (has_padding(s, iov, len, &total_msg_len, &pad_offset)) {
+                niov = reconstruct_iov(s, iov, 0, &pad_offset);
+            }
+        }
+    }
+
+    if (niov) {
+        i = niov;
     }
 
     if (qcrypto_hash_bytesv(algo, iov, i, &digest_buf, &digest_len, NULL) < 0) {
@@ -238,7 +355,8 @@ static void aspeed_hace_write(void *opaque, hwaddr addr, uint64_t data,
                           __func__, data & ahc->hash_mask);
             break;
         }
-        do_hash_operation(s, algo, data & HASH_SG_EN);
+        do_hash_operation(s, algo, data & HASH_SG_EN,
+                          ((data & HASH_HMAC_MASK) == HASH_DIGEST_ACCUM));
 
         if (data & HASH_IRQ_EN) {
             qemu_irq_raise(s->irq);
@@ -271,6 +389,8 @@ static void aspeed_hace_reset(DeviceState *dev)
     struct AspeedHACEState *s = ASPEED_HACE(dev);
 
     memset(s->regs, 0, sizeof(s->regs));
+    s->iov_count = 0;
+    s->total_req_len = 0;
 }
 
 static void aspeed_hace_realize(DeviceState *dev, Error **errp)
@@ -306,6 +426,8 @@ static const VMStateDescription vmstate_aspeed_hace = {
     .minimum_version_id = 1,
     .fields = (VMStateField[]) {
         VMSTATE_UINT32_ARRAY(regs, AspeedHACEState, ASPEED_HACE_NR_REGS),
+        VMSTATE_UINT32(total_req_len, AspeedHACEState),
+        VMSTATE_UINT32(iov_count, AspeedHACEState),
         VMSTATE_END_OF_LIST(),
     }
 };
diff --git a/include/hw/misc/aspeed_hace.h b/include/hw/misc/aspeed_hace.h
index 2242945eb4..40aebf1d6e 100644
--- a/include/hw/misc/aspeed_hace.h
+++ b/include/hw/misc/aspeed_hace.h
@@ -18,6 +18,7 @@
 OBJECT_DECLARE_TYPE(AspeedHACEState, AspeedHACEClass, ASPEED_HACE)
 
 #define ASPEED_HACE_NR_REGS (0x64 >> 2)
+#define ASPEED_HACE_MAX_SG  256 /* max number of entries */
 
 struct AspeedHACEState {
     SysBusDevice parent;
@@ -25,7 +26,10 @@ struct AspeedHACEState {
     MemoryRegion iomem;
     qemu_irq irq;
 
+    struct iovec iov_cache[ASPEED_HACE_MAX_SG];
     uint32_t regs[ASPEED_HACE_NR_REGS];
+    uint32_t total_req_len;
+    uint32_t iov_count;
 
     MemoryRegion *dram_mr;
     AddressSpace dram_as;
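To make the padding bookkeeping above easier to follow, here is a small
standalone sketch (plain C, not QEMU code) of the same calculation that
has_padding() performs: in accumulative mode the guest appends 0x80, zero
fill and a big-endian 64-bit bit length L, so the model can recover the
unpadded message length from the last 8 bytes of the final request and
locate the first padding byte. The helper names and the main() driver are
illustrative only.

#include <stdint.h>
#include <stdio.h>

/* Read a big-endian 64-bit value, as ldq_be_p() does in QEMU. */
static uint64_t ldq_be(const uint8_t *p)
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++) {
        v = (v << 8) | p[i];
    }
    return v;
}

/*
 * Mirror of the has_padding() logic: given the bytes of the current (final)
 * request and the running total of bytes received so far, recover the
 * unpadded message length and the offset of the first padding byte.
 */
static int find_padding(const uint8_t *req, size_t req_len,
                        size_t total_req_len,
                        size_t *msg_len, size_t *pad_offset)
{
    /* The last 8 bytes hold the message length L in bits, big endian. */
    *msg_len = (size_t)(ldq_be(req + req_len - 8) / 8);

    if (*msg_len <= total_req_len) {
        size_t padding_size = total_req_len - *msg_len;
        *pad_offset = req_len - padding_size;
        return req[*pad_offset] == 0x80;    /* first padding byte */
    }
    return 0;
}

int main(void)
{
    /* "abc" padded to one SHA-256 block: 0x80, zeros, then L = 24 bits. */
    uint8_t block[64] = { 'a', 'b', 'c', 0x80 };
    block[63] = 0x18;

    size_t msg_len, pad_offset;
    if (find_padding(block, sizeof(block), sizeof(block),
                     &msg_len, &pad_offset)) {
        printf("message length %zu, padding starts at %zu\n",
               msg_len, pad_offset);        /* prints 3 and 3 */
    }
    return 0;
}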
From patchwork Fri Apr 22 05:19:09 2022
X-Patchwork-Submitter: Steven Lee
X-Patchwork-Id: 12822742
From: Steven Lee
To: Cédric Le Goater, Peter Maydell, Andrew Jeffery, Joel Stanley,
    Thomas Huth, Laurent Vivier, Paolo Bonzini,
    "open list:ASPEED BMCs", "open list:All patches CC here"
Cc: jamin_lin@aspeedtech.com, troy_lee@aspeedtech.com, steven_lee@aspeedtech.com
Subject: [PATCH v5 3/3] tests/qtest: Add test for Aspeed HACE accumulative mode
Date: Fri, 22 Apr 2022 13:19:09 +0800
Message-ID: <20220422051909.27540-4-steven_lee@aspeedtech.com>
In-Reply-To: <20220422051909.27540-1-steven_lee@aspeedtech.com>
References: <20220422051909.27540-1-steven_lee@aspeedtech.com>

This adds two additional test cases for accumulative mode with SG
enabled.

The input vector was manually crafted as "abc" + a 1 bit + padding
zeros + L. The padding length depends on the algorithm's block size,
i.e. SHA512 (1024 bits) and SHA256 (512 bits). The expected results
were calculated with the command line sha512sum/sha256sum utilities
over the unpadded input, i.e. only the "abc" ASCII text.

Signed-off-by: Troy Lee
Signed-off-by: Steven Lee
Acked-by: Thomas Huth
Reviewed-by: Joel Stanley
---
 tests/qtest/aspeed_hace-test.c | 145 +++++++++++++++++++++++++++++++++
 1 file changed, 145 insertions(+)

diff --git a/tests/qtest/aspeed_hace-test.c b/tests/qtest/aspeed_hace-test.c
index 58aa22014d..85d705ec50 100644
--- a/tests/qtest/aspeed_hace-test.c
+++ b/tests/qtest/aspeed_hace-test.c
@@ -20,6 +20,7 @@
 #define HACE_ALGO_SHA512         (BIT(5) | BIT(6))
 #define HACE_ALGO_SHA384         (BIT(5) | BIT(6) | BIT(10))
 #define HACE_SG_EN               BIT(18)
+#define HACE_ACCUM_EN            BIT(8)
 
 #define HACE_STS                 0x1c
 #define HACE_RSA_ISR             BIT(13)
@@ -95,6 +96,57 @@ static const uint8_t test_result_sg_sha256[] = {
     0x55, 0x1e, 0x1e, 0xc5, 0x80, 0xdd, 0x6d, 0x5a, 0x6e, 0xcd, 0xe9, 0xf3,
     0xd3, 0x5e, 0x6e, 0x4a, 0x71, 0x7f, 0xbd, 0xe4};
 
+/*
+ * The accumulative mode requires firmware to provide internal initial state
+ * and message padding (including length L at the end of padding).
+ *
+ * This test vector is an ASCII text "abc" with padding message.
+ *
+ * Expected results were generated using command line utilities:
+ *
+ *  echo -n -e 'abc' | dd of=/tmp/test
+ *  for hash in sha512sum sha256sum; do $hash /tmp/test; done
+ */
+static const uint8_t test_vector_accum_512[] = {
+    0x61, 0x62, 0x63, 0x80, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18};
+
+static const uint8_t test_vector_accum_256[] = {
+    0x61, 0x62, 0x63, 0x80, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18};
+
+static const uint8_t test_result_accum_sha512[] = {
+    0xdd, 0xaf, 0x35, 0xa1, 0x93, 0x61, 0x7a, 0xba, 0xcc, 0x41, 0x73, 0x49,
+    0xae, 0x20, 0x41, 0x31, 0x12, 0xe6, 0xfa, 0x4e, 0x89, 0xa9, 0x7e, 0xa2,
+    0x0a, 0x9e, 0xee, 0xe6, 0x4b, 0x55, 0xd3, 0x9a, 0x21, 0x92, 0x99, 0x2a,
+    0x27, 0x4f, 0xc1, 0xa8, 0x36, 0xba, 0x3c, 0x23, 0xa3, 0xfe, 0xeb, 0xbd,
+    0x45, 0x4d, 0x44, 0x23, 0x64, 0x3c, 0xe8, 0x0e, 0x2a, 0x9a, 0xc9, 0x4f,
+    0xa5, 0x4c, 0xa4, 0x9f};
+
+static const uint8_t test_result_accum_sha256[] = {
+    0xba, 0x78, 0x16, 0xbf, 0x8f, 0x01, 0xcf, 0xea, 0x41, 0x41, 0x40, 0xde,
+    0x5d, 0xae, 0x22, 0x23, 0xb0, 0x03, 0x61, 0xa3, 0x96, 0x17, 0x7a, 0x9c,
+    0xb4, 0x10, 0xff, 0x61, 0xf2, 0x00, 0x15, 0xad};
+
 static void write_regs(QTestState *s, uint32_t base, uint32_t src,
                        uint32_t length, uint32_t out, uint32_t method)
@@ -307,6 +359,86 @@ static void test_sha512_sg(const char *machine, const uint32_t base,
     qtest_quit(s);
 }
 
+static void test_sha256_accum(const char *machine, const uint32_t base,
+                              const uint32_t src_addr)
+{
+    QTestState *s = qtest_init(machine);
+
+    const uint32_t buffer_addr = src_addr + 0x1000000;
+    const uint32_t digest_addr = src_addr + 0x4000000;
+    uint8_t digest[32] = {0};
+    struct AspeedSgList array[] = {
+        {  cpu_to_le32(sizeof(test_vector_accum_256) | SG_LIST_LEN_LAST),
+           cpu_to_le32(buffer_addr) },
+    };
+
+    /* Check engine is idle, no busy or irq bits set */
+    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0);
+
+    /* Write test vector into memory */
+    qtest_memwrite(s, buffer_addr, test_vector_accum_256, sizeof(test_vector_accum_256));
+    qtest_memwrite(s, src_addr, array, sizeof(array));
+
+    write_regs(s, base, src_addr, sizeof(test_vector_accum_256),
+               digest_addr, HACE_ALGO_SHA256 | HACE_SG_EN | HACE_ACCUM_EN);
+
+    /* Check hash IRQ status is asserted */
+    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0x00000200);
+
+    /* Clear IRQ status and check status is deasserted */
+    qtest_writel(s, base + HACE_STS, 0x00000200);
+    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0);
+
+    /* Read computed digest from memory */
+    qtest_memread(s, digest_addr, digest, sizeof(digest));
+
+    /* Check result of computation */
+    g_assert_cmpmem(digest, sizeof(digest),
+                    test_result_accum_sha256, sizeof(digest));
+
+    qtest_quit(s);
+}
+
+static void test_sha512_accum(const char *machine, const uint32_t base,
+                              const uint32_t src_addr)
+{
+    QTestState *s = qtest_init(machine);
+
+    const uint32_t buffer_addr = src_addr + 0x1000000;
+    const uint32_t digest_addr = src_addr + 0x4000000;
+    uint8_t digest[64] = {0};
+    struct AspeedSgList array[] = {
+        {  cpu_to_le32(sizeof(test_vector_accum_512) | SG_LIST_LEN_LAST),
+           cpu_to_le32(buffer_addr) },
+    };
+
+    /* Check engine is idle, no busy or irq bits set */
+    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0);
+
+    /* Write test vector into memory */
+    qtest_memwrite(s, buffer_addr, test_vector_accum_512, sizeof(test_vector_accum_512));
+    qtest_memwrite(s, src_addr, array, sizeof(array));
+
+    write_regs(s, base, src_addr, sizeof(test_vector_accum_512),
+               digest_addr, HACE_ALGO_SHA512 | HACE_SG_EN | HACE_ACCUM_EN);
+
+    /* Check hash IRQ status is asserted */
+    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0x00000200);
+
+    /* Clear IRQ status and check status is deasserted */
+    qtest_writel(s, base + HACE_STS, 0x00000200);
+    g_assert_cmphex(qtest_readl(s, base + HACE_STS), ==, 0);
+
+    /* Read computed digest from memory */
+    qtest_memread(s, digest_addr, digest, sizeof(digest));
+
+    /* Check result of computation */
+    g_assert_cmpmem(digest, sizeof(digest),
+                    test_result_accum_sha512, sizeof(digest));
+
+    qtest_quit(s);
+}
+
 struct masks {
     uint32_t src;
     uint32_t dest;
@@ -395,6 +527,16 @@ static void test_sha512_sg_ast2600(void)
     test_sha512_sg("-machine ast2600-evb", 0x1e6d0000, 0x80000000);
 }
 
+static void test_sha256_accum_ast2600(void)
+{
+    test_sha256_accum("-machine ast2600-evb", 0x1e6d0000, 0x80000000);
+}
+
+static void test_sha512_accum_ast2600(void)
+{
+    test_sha512_accum("-machine ast2600-evb", 0x1e6d0000, 0x80000000);
+}
+
 static void test_addresses_ast2600(void)
 {
     test_addresses("-machine ast2600-evb", 0x1e6d0000, &ast2600_masks);
@@ -454,6 +596,9 @@ int main(int argc, char **argv)
     qtest_add_func("ast2600/hace/sha512_sg", test_sha512_sg_ast2600);
     qtest_add_func("ast2600/hace/sha256_sg", test_sha256_sg_ast2600);
 
+    qtest_add_func("ast2600/hace/sha512_accum", test_sha512_accum_ast2600);
+    qtest_add_func("ast2600/hace/sha256_accum", test_sha256_accum_ast2600);
+
     qtest_add_func("ast2500/hace/addresses", test_addresses_ast2500);
     qtest_add_func("ast2500/hace/sha512", test_sha512_ast2500);
     qtest_add_func("ast2500/hace/sha256", test_sha256_ast2500);
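As a closing aside, the hand-crafted vectors above can also be regenerated
programmatically. The sketch below (plain C, not part of the series) builds
the single-block SHA-256 input exactly as test_vector_accum_256 expects: the
message, a 0x80 byte, zero fill, and the message length in bits as a
big-endian 64-bit value at the end of the block; a 128-byte block gives the
SHA-512 case. The helper name is illustrative, and it assumes the message
plus 9 bytes of padding fits in a single block.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Build one padded block: message bytes, then 0x80, zero fill, and the
 * message length in bits stored big-endian in the last 8 bytes.
 * block_size is 64 for SHA-256 and 128 for SHA-512.
 */
static void build_padded_block(const char *msg, uint8_t *block,
                               size_t block_size)
{
    size_t len = strlen(msg);
    uint64_t bits = (uint64_t)len * 8;

    memset(block, 0, block_size);
    memcpy(block, msg, len);
    block[len] = 0x80;                        /* first padding byte */
    for (int i = 0; i < 8; i++) {             /* big-endian length L */
        block[block_size - 1 - i] = (uint8_t)(bits >> (8 * i));
    }
}

int main(void)
{
    uint8_t block[64];

    build_padded_block("abc", block, sizeof(block));

    /* Dump in the same 8-bytes-per-line style as test_vector_accum_256 */
    for (size_t i = 0; i < sizeof(block); i++) {
        printf("0x%02x,%s", block[i], (i % 8 == 7) ? "\n" : " ");
    }
    return 0;
}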