From patchwork Tue Oct 1 15:37:12 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Sami Tolvanen, Herbert Xu, Alasdair Kergon, Mike Snitzer, Mikulas Patocka
Subject: [PATCH v7 1/7] crypto: shash - add support for finup_mb
Date: Tue, 1 Oct 2024 08:37:12 -0700
Message-ID: <20241001153718.111665-2-ebiggers@kernel.org>
In-Reply-To: <20241001153718.111665-1-ebiggers@kernel.org>
References: <20241001153718.111665-1-ebiggers@kernel.org>

From: Eric Biggers

Most cryptographic hash functions are serialized, in the sense that they have an internal block size and the blocks must be processed serially. (BLAKE3 is a notable exception that has tree-based hashing built-in, but all the more common choices such as the SHAs and BLAKE2 are serialized. ParallelHash and Sakura are parallel hashes based on SHA3, but SHA3 is much slower than SHA256 in software even with the ARMv8 SHA3 extension.) This limits the performance of computing a single hash.

Yet, computing multiple hashes simultaneously does not have this limitation. Modern CPUs are superscalar and often can execute independent instructions in parallel. As a result, on many modern CPUs, it is possible to hash two equal-length messages in about the same time as a single message, if all the instructions are interleaved.

Meanwhile, a very common use case for hashing in the Linux kernel is dm-verity and fs-verity. Both use a Merkle tree that has a fixed block size, usually 4096 bytes with an empty or 32-byte salt prepended. The hash algorithm is usually SHA-256. Usually, many blocks need to be hashed at a time. This is an ideal scenario for multibuffer hashing.

Linux actually used to support SHA-256 multibuffer hashing on x86_64, before it was removed by commit ab8085c130ed ("crypto: x86 - remove SHA multibuffer routines and mcryptd"). However, it was integrated with the crypto API in a weird way, where it behaved as an asynchronous hash that queued up and executed all requests on a global queue. This made it very complex, buggy, and virtually unusable.

This patch takes a new approach of just adding an API crypto_shash_finup_mb() that synchronously computes the hash of multiple equal-length messages, starting from a common state that represents the (possibly empty) common prefix shared by the messages. The new API is part of the "shash" algorithm type, as it does not make sense in "ahash". It does a "finup" operation rather than a "digest" operation in order to support the salt that is used by dm-verity and fs-verity. The data and output buffers are provided in arrays of length @num_msgs in order to make the API itself extensible to interleaving factors other than 2. (Though, initially only 2x will actually be used. There are some platforms in which a higher factor could help, but there are significant trade-offs.)
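
To make the intended usage concrete, here is a rough sketch of a caller (illustrative only, not part of this patch). The helper name, the fixed 4096-byte block size, and the salt handling are hypothetical; the sketch assumes the selected tfm advertises crypto_shash_mb_max_msgs() >= 2, as the SHA-256 implementations added later in this series do.

#include <crypto/hash.h>

/*
 * Illustrative sketch: hash two equal-length data blocks that share a
 * common salt prefix, interleaving the two computations when the
 * algorithm provides finup_mb.  Assumes crypto_shash_mb_max_msgs(tfm)
 * is at least 2; a real caller must not pass a larger num_msgs.
 */
static int example_hash_two_blocks(struct crypto_shash *tfm,
				   const u8 *salt, unsigned int salt_len,
				   const u8 *block1, const u8 *block2,
				   u8 *out1, u8 *out2)
{
	SHASH_DESC_ON_STACK(desc, tfm);
	const u8 *data[2] = { block1, block2 };
	u8 *outs[2] = { out1, out2 };
	int err;

	desc->tfm = tfm;

	/* Hash the (possibly empty) common prefix once. */
	err = crypto_shash_init(desc);
	if (err)
		return err;
	err = crypto_shash_update(desc, salt, salt_len);
	if (err)
		return err;

	/*
	 * Finish both messages from the common state.  If the
	 * implementation cannot interleave here (e.g. it returns
	 * -EOPNOTSUPP in a no-SIMD context), the crypto API falls back
	 * to two sequential finup operations.
	 */
	return crypto_shash_finup_mb(desc, data, 4096, outs, 2);
}

Because both messages are finished from one forked descriptor state, the salt only has to be hashed once, which matches the pattern used by dm-verity and fs-verity.
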
Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- crypto/shash.c | 58 +++++++++++++++++++++++++++++++++++++++++++ include/crypto/hash.h | 52 +++++++++++++++++++++++++++++++++++++- 2 files changed, 109 insertions(+), 1 deletion(-) diff --git a/crypto/shash.c b/crypto/shash.c index 301ab42bf8499..5ee5ce68c7b4f 100644 --- a/crypto/shash.c +++ b/crypto/shash.c @@ -73,10 +73,57 @@ int crypto_shash_finup(struct shash_desc *desc, const u8 *data, { return crypto_shash_alg(desc->tfm)->finup(desc, data, len, out); } EXPORT_SYMBOL_GPL(crypto_shash_finup); +static noinline_for_stack int +shash_finup_mb_fallback(struct shash_desc *desc, const u8 * const data[], + unsigned int len, u8 * const outs[], + unsigned int num_msgs) +{ + struct crypto_shash *tfm = desc->tfm; + SHASH_DESC_ON_STACK(desc2, tfm); + unsigned int i; + int err; + + for (i = 0; i < num_msgs - 1; i++) { + desc2->tfm = tfm; + memcpy(shash_desc_ctx(desc2), shash_desc_ctx(desc), + crypto_shash_descsize(tfm)); + err = crypto_shash_finup(desc2, data[i], len, outs[i]); + if (err) + return err; + } + return crypto_shash_finup(desc, data[i], len, outs[i]); +} + +int crypto_shash_finup_mb(struct shash_desc *desc, const u8 * const data[], + unsigned int len, u8 * const outs[], + unsigned int num_msgs) +{ + struct shash_alg *alg = crypto_shash_alg(desc->tfm); + int err; + + if (num_msgs == 1) + return crypto_shash_finup(desc, data[0], len, outs[0]); + + if (num_msgs == 0) + return 0; + + if (WARN_ON_ONCE(num_msgs > alg->mb_max_msgs)) + goto fallback; + + err = alg->finup_mb(desc, data, len, outs, num_msgs); + if (unlikely(err == -EOPNOTSUPP)) + goto fallback; + return err; + +fallback: + return shash_finup_mb_fallback(desc, data, len, outs, num_msgs); +} +EXPORT_SYMBOL_GPL(crypto_shash_finup_mb); + static int shash_default_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { struct shash_alg *shash = crypto_shash_alg(desc->tfm); @@ -312,10 +359,21 @@ static int shash_prepare_alg(struct shash_alg *alg) return -EINVAL; if ((alg->export && !alg->import) || (alg->import && !alg->export)) return -EINVAL; + if (alg->mb_max_msgs > 1) { + if (alg->mb_max_msgs > HASH_MAX_MB_MSGS) + return -EINVAL; + if (!alg->finup_mb) + return -EINVAL; + } else { + if (alg->finup_mb) + return -EINVAL; + alg->mb_max_msgs = 1; + } + err = hash_prepare_alg(&alg->halg); if (err) return err; base->cra_type = &crypto_shash_type; diff --git a/include/crypto/hash.h b/include/crypto/hash.h index 2d5ea9f9ff43e..38511727b2ff4 100644 --- a/include/crypto/hash.h +++ b/include/crypto/hash.h @@ -154,11 +154,13 @@ struct ahash_alg { struct shash_desc { struct crypto_shash *tfm; void *__ctx[] __aligned(ARCH_SLAB_MINALIGN); }; -#define HASH_MAX_DIGESTSIZE 64 +#define HASH_MAX_DIGESTSIZE 64 + +#define HASH_MAX_MB_MSGS 2 /* max value of crypto_shash_mb_max_msgs() */ /* * Worst case is hmac(sha3-224-generic). Its context is a nested 'shash_desc' * containing a 'struct sha3_state'. */ @@ -177,10 +179,19 @@ struct shash_desc { * @finup: see struct ahash_alg * @digest: see struct ahash_alg * @export: see struct ahash_alg * @import: see struct ahash_alg * @setkey: see struct ahash_alg + * @finup_mb: **[optional]** Multibuffer hashing support. Finish calculating + * the digests of multiple messages, interleaving the instructions to + * potentially achieve better performance than hashing each message + * individually. The num_msgs argument will be between 2 and + * @mb_max_msgs inclusively. 
If there are particular values of len + * or num_msgs, or a particular calling context (e.g. no-SIMD) that + * the implementation does not support with this function, then it + * must return -EOPNOTSUPP in those cases to cause the crypto API to + * fall back to repeated finups. * @init_tfm: Initialize the cryptographic transformation object. * This function is called only once at the instantiation * time, right after the transformation context was * allocated. In case the cryptographic hardware has * some special requirements which need to be handled @@ -192,10 +203,11 @@ struct shash_desc { * various changes set in @init_tfm. * @clone_tfm: Copy transform into new object, may allocate memory. * @descsize: Size of the operational state for the message digest. This state * size is the memory size that needs to be allocated for * shash_desc.__ctx + * @mb_max_msgs: Maximum supported value of num_msgs argument to @finup_mb * @halg: see struct hash_alg_common * @HASH_ALG_COMMON: see struct hash_alg_common */ struct shash_alg { int (*init)(struct shash_desc *desc); @@ -208,15 +220,19 @@ struct shash_alg { unsigned int len, u8 *out); int (*export)(struct shash_desc *desc, void *out); int (*import)(struct shash_desc *desc, const void *in); int (*setkey)(struct crypto_shash *tfm, const u8 *key, unsigned int keylen); + int (*finup_mb)(struct shash_desc *desc, const u8 * const data[], + unsigned int len, u8 * const outs[], + unsigned int num_msgs); int (*init_tfm)(struct crypto_shash *tfm); void (*exit_tfm)(struct crypto_shash *tfm); int (*clone_tfm)(struct crypto_shash *dst, struct crypto_shash *src); unsigned int descsize; + unsigned int mb_max_msgs; union { struct HASH_ALG_COMMON; struct hash_alg_common halg; }; @@ -750,10 +766,23 @@ static inline unsigned int crypto_shash_digestsize(struct crypto_shash *tfm) static inline unsigned int crypto_shash_statesize(struct crypto_shash *tfm) { return crypto_shash_alg(tfm)->statesize; } +/** + * crypto_shash_mb_max_msgs() - get max multibuffer interleaving factor + * @tfm: hash transformation object + * + * Return the maximum supported multibuffer hashing interleaving factor, i.e. + * the maximum num_msgs that can be passed to crypto_shash_finup_mb(). The + * return value will be between 1 and HASH_MAX_MB_MSGS inclusively. + */ +static inline unsigned int crypto_shash_mb_max_msgs(struct crypto_shash *tfm) +{ + return crypto_shash_alg(tfm)->mb_max_msgs; +} + static inline u32 crypto_shash_get_flags(struct crypto_shash *tfm) { return crypto_tfm_get_flags(crypto_shash_tfm(tfm)); } @@ -942,10 +971,31 @@ int crypto_shash_final(struct shash_desc *desc, u8 *out); * occurred */ int crypto_shash_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out); +/** + * crypto_shash_finup_mb() - multibuffer message hashing + * @desc: the starting state that is forked for each message. It contains the + * state after hashing a (possibly-empty) common prefix of the messages. + * @data: the data of each message (not including any common prefix from @desc) + * @len: length of each data buffer in bytes + * @outs: output buffer for each message digest + * @num_msgs: number of messages, i.e. the number of entries in @data and @outs. + * This can't be more than crypto_shash_mb_max_msgs(). + * + * This function provides support for hashing multiple messages with the + * instructions interleaved, if supported by the algorithm. This can + * significantly improve performance, depending on the CPU and algorithm. + * + * Context: Any context. 
+ * Return: 0 on success; a negative errno value on failure. + */ +int crypto_shash_finup_mb(struct shash_desc *desc, const u8 * const data[], + unsigned int len, u8 * const outs[], + unsigned int num_msgs); + static inline void shash_desc_zero(struct shash_desc *desc) { memzero_explicit(desc, sizeof(*desc) + crypto_shash_descsize(desc->tfm)); }

From patchwork Tue Oct 1 15:37:13 2024
From: Eric Biggers
To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel, Sami Tolvanen, Herbert Xu, Alasdair Kergon, Mike Snitzer, Mikulas Patocka
Subject: [PATCH v7 2/7] crypto: testmgr - add tests for finup_mb
Date: Tue, 1 Oct 2024 08:37:13 -0700
Message-ID: <20241001153718.111665-3-ebiggers@kernel.org>
In-Reply-To: <20241001153718.111665-1-ebiggers@kernel.org>
References: <20241001153718.111665-1-ebiggers@kernel.org>

From: Eric Biggers

Update the shash self-tests to test the new finup_mb method when CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y.
Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- crypto/testmgr.c | 73 +++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 66 insertions(+), 7 deletions(-) diff --git a/crypto/testmgr.c b/crypto/testmgr.c index ee8da628e9da4..8bd24238794c5 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -227,10 +227,11 @@ enum flush_type { /* finalization function for hash algorithms */ enum finalization_type { FINALIZATION_TYPE_FINAL, /* use final() */ FINALIZATION_TYPE_FINUP, /* use finup() */ + FINALIZATION_TYPE_FINUP_MB, /* use finup_mb() */ FINALIZATION_TYPE_DIGEST, /* use digest() */ }; /* * Whether the crypto operation will occur in-place, and if so whether the @@ -290,10 +291,14 @@ struct test_sg_division { * the @iv_offset * @key_offset: misalignment of the key, where 0 is default alignment * @key_offset_relative_to_alignmask: if true, add the algorithm's alignmask to * the @key_offset * @finalization_type: what finalization function to use for hashes + * @multibuffer_index: random number used to generate the message index to use + * for finup_mb (when finup_mb is used). + * @multibuffer_count: random number used to generate the num_msgs parameter to + * finup_mb (when finup_mb is used). * @nosimd: execute with SIMD disabled? Requires !CRYPTO_TFM_REQ_MAY_SLEEP. * This applies to the parts of the operation that aren't controlled * individually by @nosimd_setkey or @src_divs[].nosimd. * @nosimd_setkey: set the key (if applicable) with SIMD disabled? Requires * !CRYPTO_TFM_REQ_MAY_SLEEP. @@ -307,10 +312,12 @@ struct testvec_config { unsigned int iv_offset; unsigned int key_offset; bool iv_offset_relative_to_alignmask; bool key_offset_relative_to_alignmask; enum finalization_type finalization_type; + unsigned int multibuffer_index; + unsigned int multibuffer_count; bool nosimd; bool nosimd_setkey; }; #define TESTVEC_CONFIG_NAMELEN 192 @@ -1122,19 +1129,27 @@ static void generate_random_testvec_config(struct rnd_state *rng, if (prandom_bool(rng)) { cfg->req_flags |= CRYPTO_TFM_REQ_MAY_SLEEP; p += scnprintf(p, end - p, " may_sleep"); } - switch (prandom_u32_below(rng, 4)) { + switch (prandom_u32_below(rng, 8)) { case 0: + case 1: cfg->finalization_type = FINALIZATION_TYPE_FINAL; p += scnprintf(p, end - p, " use_final"); break; - case 1: + case 2: cfg->finalization_type = FINALIZATION_TYPE_FINUP; p += scnprintf(p, end - p, " use_finup"); break; + case 3: + case 4: + cfg->finalization_type = FINALIZATION_TYPE_FINUP_MB; + cfg->multibuffer_index = prandom_u32_state(rng); + cfg->multibuffer_count = prandom_u32_state(rng); + p += scnprintf(p, end - p, " use_finup_mb"); + break; default: cfg->finalization_type = FINALIZATION_TYPE_DIGEST; p += scnprintf(p, end - p, " use_digest"); break; } @@ -1289,10 +1304,37 @@ static inline int check_shash_op(const char *op, int err, pr_err("alg: shash: %s %s() failed with err %d on test vector %s, cfg=\"%s\"\n", driver, op, err, vec_name, cfg->name); return err; } +static int do_finup_mb(struct shash_desc *desc, + const u8 *data, unsigned int len, u8 *result, + const struct testvec_config *cfg, + const struct test_sglist *tsgl) +{ + struct crypto_shash *tfm = desc->tfm; + const u8 *unused_data = tsgl->bufs[XBUFSIZE - 1]; + u8 unused_result[HASH_MAX_DIGESTSIZE]; + const u8 *datas[HASH_MAX_MB_MSGS]; + u8 *outs[HASH_MAX_MB_MSGS]; + unsigned int num_msgs; + unsigned int msg_idx; + unsigned int i; + + num_msgs = 1 + (cfg->multibuffer_count % crypto_shash_mb_max_msgs(tfm)); + if (WARN_ON_ONCE(num_msgs > HASH_MAX_MB_MSGS)) + 
return -EINVAL; + msg_idx = cfg->multibuffer_index % num_msgs; + for (i = 0; i < num_msgs; i++) { + datas[i] = unused_data; + outs[i] = unused_result; + } + datas[msg_idx] = data; + outs[msg_idx] = result; + return crypto_shash_finup_mb(desc, datas, len, outs, num_msgs); +} + /* Test one hash test vector in one configuration, using the shash API */ static int test_shash_vec_cfg(const struct hash_testvec *vec, const char *vec_name, const struct testvec_config *cfg, struct shash_desc *desc, @@ -1365,11 +1407,14 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec, return -EINVAL; } goto result_ready; } - /* Using init(), zero or more update(), then final() or finup() */ + /* + * Using init(), zero or more update(), then either final(), finup(), or + * finup_mb(). + */ if (cfg->nosimd) crypto_disable_simd_for_test(); err = crypto_shash_init(desc); if (cfg->nosimd) @@ -1377,28 +1422,42 @@ static int test_shash_vec_cfg(const struct hash_testvec *vec, err = check_shash_op("init", err, driver, vec_name, cfg); if (err) return err; for (i = 0; i < tsgl->nents; i++) { + const u8 *data = sg_virt(&tsgl->sgl[i]); + unsigned int len = tsgl->sgl[i].length; + if (i + 1 == tsgl->nents && cfg->finalization_type == FINALIZATION_TYPE_FINUP) { if (divs[i]->nosimd) crypto_disable_simd_for_test(); - err = crypto_shash_finup(desc, sg_virt(&tsgl->sgl[i]), - tsgl->sgl[i].length, result); + err = crypto_shash_finup(desc, data, len, result); if (divs[i]->nosimd) crypto_reenable_simd_for_test(); err = check_shash_op("finup", err, driver, vec_name, cfg); if (err) return err; goto result_ready; } + if (i + 1 == tsgl->nents && + cfg->finalization_type == FINALIZATION_TYPE_FINUP_MB) { + if (divs[i]->nosimd) + crypto_disable_simd_for_test(); + err = do_finup_mb(desc, data, len, result, cfg, tsgl); + if (divs[i]->nosimd) + crypto_reenable_simd_for_test(); + err = check_shash_op("finup_mb", err, driver, vec_name, + cfg); + if (err) + return err; + goto result_ready; + } if (divs[i]->nosimd) crypto_disable_simd_for_test(); - err = crypto_shash_update(desc, sg_virt(&tsgl->sgl[i]), - tsgl->sgl[i].length); + err = crypto_shash_update(desc, data, len); if (divs[i]->nosimd) crypto_reenable_simd_for_test(); err = check_shash_op("update", err, driver, vec_name, cfg); if (err) return err; From patchwork Tue Oct 1 15:37:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13818278 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 711441CB31E; Tue, 1 Oct 2024 15:39:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727797161; cv=none; b=LSd1z3YJh30D/uNbXA7utIOs78iFX4k4VaCb1F3EpNRATuRSVNNhQGsAaYyD8K2GSL5oZIyvREdn7wNrE66THjmvqJh6zwU/CtbT6HJYnRMNnlJzsLWZ2pApRpQw0ojC+M8egpTbu+XRYFDLIci6lgosiHuLMV6Tiu30R/1Imzg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727797161; c=relaxed/simple; bh=krQhYNfW5Gw+qZwiXcHFZLiWRuJpJtrqUNKA2uFMrgM=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
b=oiKRW7Y2TVjgLbLDKXluXEY0a/CdPdA3PzyarPdOemPGsLKcfP1xr/dahIeerJPjGM8a7Fb02Nux1kPtODovmZOi+eiYSR326D7JCwBFeD481VMbz2995uAqxbvdPkbi4WwMESEtN1jxWfjq+ReFmUqoIcDGP3nGnLiCPm9gInE= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=XhGjEc4w; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="XhGjEc4w" Received: by smtp.kernel.org (Postfix) with ESMTPSA id BE883C4CED3; Tue, 1 Oct 2024 15:39:20 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1727797161; bh=krQhYNfW5Gw+qZwiXcHFZLiWRuJpJtrqUNKA2uFMrgM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=XhGjEc4wfa9HdSIMgqFZD3U5ly2Es+CDTY7RJNnuqJ5nt6L5j1v/aPVdBxNtjB4+x jTIpcTjwcjCV9m0d5Nlr+cgwaZAc41hp/K7bgAKuzsA3KUhHOq7H9oyD07XJwf8GHH 57lk3YJDYKz8Dd+bGrFtLtSQ0Fdosud24/wksP2Mnj5Bg5P7IfvUmjpJ9qAJ/9+Mr9 mUf9/FErhWmH3HKytuKBmTi5f35KlCB7Bj7VAUszgudMXmJEBN/Wx0DDiOG+kGG9ly sQL2vtb8n73bycYsDElsKWYQY5u7NTzRx0WLv8ZTpwsazKIOVP33QSePmcFA4oYfdw TzjEykmyYlGUA== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v7 3/7] crypto: x86/sha256-ni - add support for finup_mb Date: Tue, 1 Oct 2024 08:37:14 -0700 Message-ID: <20241001153718.111665-4-ebiggers@kernel.org> X-Mailer: git-send-email 2.46.2 In-Reply-To: <20241001153718.111665-1-ebiggers@kernel.org> References: <20241001153718.111665-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Add an implementation of finup_mb to sha256-ni, using an interleaving factor of 2. It interleaves a finup operation for two equal-length messages that share a common prefix. dm-verity and fs-verity will take advantage of this for greatly improved performance on capable CPUs. This increases the throughput of SHA-256 hashing 4096-byte messages by the following amounts on the following CPUs: Intel Ice Lake (server): 4% Intel Sapphire Rapids: 38% Intel Emerald Rapids: 38% AMD Zen 1 (Threadripper 1950X): 84% AMD Zen 4 (EPYC 9B14): 98% AMD Zen 5 (Ryzen 9 9950X): 64% For now, this seems to benefit AMD more than Intel. This seems to be because current AMD CPUs support concurrent execution of the SHA-NI instructions, but unfortunately current Intel CPUs don't, except for the sha256msg2 instruction. Hopefully future Intel CPUs will support SHA-NI on more execution ports. Zen 1 supports 2 concurrent sha256rnds2, and Zen 4 supports 4 concurrent sha256rnds2, which suggests that even better performance may be achievable on Zen 4 by interleaving more than two hashes. However, doing so poses a number of trade-offs, and furthermore Zen 5 goes back to supporting "only" 2 concurrent sha256rnds2. It's been reported that the method that achieves the highest SHA-256 throughput on Intel CPUs is actually computing 16 hashes simultaneously using AVX512. That method would be quite different to the SHA-NI method used in this patch. However, such a high interleaving factor isn't practical for the use cases being targeted in the kernel. 
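
For reference, throughput comparisons of this kind can be approximated with an in-kernel measurement along the following lines. This is only a sketch, not the benchmark actually used for this patch: the function name, buffers, and iteration count are made up, error handling is omitted, and it assumes a "sha256" tfm whose driver (e.g. sha256-ni) advertises crypto_shash_mb_max_msgs() >= 2.

#include <crypto/hash.h>
#include <crypto/sha2.h>
#include <linux/ktime.h>
#include <linux/printk.h>

/*
 * Rough sketch: compare hashing two 4096-byte messages one at a time
 * against hashing them as a pair with crypto_shash_finup_mb().
 */
static void example_compare_sha256_finup_mb(struct crypto_shash *tfm,
					     const u8 *buf1, const u8 *buf2)
{
	SHASH_DESC_ON_STACK(desc, tfm);
	u8 d1[SHA256_DIGEST_SIZE], d2[SHA256_DIGEST_SIZE];
	const u8 *data[2] = { buf1, buf2 };
	u8 *outs[2] = { d1, d2 };
	u64 t0, t1, t2;
	int i;

	desc->tfm = tfm;

	t0 = ktime_get_ns();
	for (i = 0; i < 100000; i++) {
		crypto_shash_init(desc);
		crypto_shash_finup(desc, buf1, 4096, d1);
		crypto_shash_init(desc);
		crypto_shash_finup(desc, buf2, 4096, d2);
	}
	t1 = ktime_get_ns();
	for (i = 0; i < 100000; i++) {
		crypto_shash_init(desc);
		crypto_shash_finup_mb(desc, data, 4096, outs, 2);
	}
	t2 = ktime_get_ns();

	pr_info("sequential: %llu ns, 2-way finup_mb: %llu ns\n",
		t1 - t0, t2 - t1);
}

Note that the generic fallback path simply performs sequential finups, so a measurable difference only appears when a driver implementing finup_mb is selected.
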
Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- arch/x86/crypto/sha256_ni_asm.S | 368 ++++++++++++++++++++++++++++ arch/x86/crypto/sha256_ssse3_glue.c | 39 +++ 2 files changed, 407 insertions(+) diff --git a/arch/x86/crypto/sha256_ni_asm.S b/arch/x86/crypto/sha256_ni_asm.S index d515a55a3bc1d..5e97922a24e4a 100644 --- a/arch/x86/crypto/sha256_ni_asm.S +++ b/arch/x86/crypto/sha256_ni_asm.S @@ -172,10 +172,378 @@ SYM_TYPED_FUNC_START(sha256_ni_transform) .Ldone_hash: RET SYM_FUNC_END(sha256_ni_transform) +#undef DIGEST_PTR +#undef DATA_PTR +#undef NUM_BLKS +#undef SHA256CONSTANTS +#undef MSG +#undef STATE0 +#undef STATE1 +#undef MSG0 +#undef MSG1 +#undef MSG2 +#undef MSG3 +#undef TMP +#undef SHUF_MASK +#undef ABEF_SAVE +#undef CDGH_SAVE + +// parameters for __sha256_ni_finup2x() +#define SCTX %rdi +#define DATA1 %rsi +#define DATA2 %rdx +#define LEN %ecx +#define LEN8 %cl +#define LEN64 %rcx +#define OUT1 %r8 +#define OUT2 %r9 + +// other scalar variables +#define SHA256CONSTANTS %rax +#define COUNT %r10 +#define COUNT32 %r10d +#define FINAL_STEP %r11d + +// rbx is used as a temporary. + +#define MSG %xmm0 // sha256rnds2 implicit operand +#define STATE0_A %xmm1 +#define STATE1_A %xmm2 +#define STATE0_B %xmm3 +#define STATE1_B %xmm4 +#define TMP_A %xmm5 +#define TMP_B %xmm6 +#define MSG0_A %xmm7 +#define MSG1_A %xmm8 +#define MSG2_A %xmm9 +#define MSG3_A %xmm10 +#define MSG0_B %xmm11 +#define MSG1_B %xmm12 +#define MSG2_B %xmm13 +#define MSG3_B %xmm14 +#define SHUF_MASK %xmm15 + +#define OFFSETOF_STATE 0 // offsetof(struct sha256_state, state) +#define OFFSETOF_COUNT 32 // offsetof(struct sha256_state, count) +#define OFFSETOF_BUF 40 // offsetof(struct sha256_state, buf) + +// Do 4 rounds of SHA-256 for each of two messages (interleaved). m0_a and m0_b +// contain the current 4 message schedule words for the first and second message +// respectively. +// +// If not all the message schedule words have been computed yet, then this also +// computes 4 more message schedule words for each message. m1_a-m3_a contain +// the next 3 groups of 4 message schedule words for the first message, and +// likewise m1_b-m3_b for the second. After consuming the current value of +// m0_a, this macro computes the group after m3_a and writes it to m0_a, and +// likewise for *_b. This means that the next (m0_a, m1_a, m2_a, m3_a) is the +// current (m1_a, m2_a, m3_a, m0_a), and likewise for *_b, so the caller must +// cycle through the registers accordingly. 
+.macro do_4rounds_2x i, m0_a, m1_a, m2_a, m3_a, m0_b, m1_b, m2_b, m3_b + movdqa (\i-32)*4(SHA256CONSTANTS), TMP_A + movdqa TMP_A, TMP_B + paddd \m0_a, TMP_A + paddd \m0_b, TMP_B +.if \i < 48 + sha256msg1 \m1_a, \m0_a + sha256msg1 \m1_b, \m0_b +.endif + movdqa TMP_A, MSG + sha256rnds2 STATE0_A, STATE1_A + movdqa TMP_B, MSG + sha256rnds2 STATE0_B, STATE1_B + pshufd $0x0E, TMP_A, MSG + sha256rnds2 STATE1_A, STATE0_A + pshufd $0x0E, TMP_B, MSG + sha256rnds2 STATE1_B, STATE0_B +.if \i < 48 + movdqa \m3_a, TMP_A + movdqa \m3_b, TMP_B + palignr $4, \m2_a, TMP_A + palignr $4, \m2_b, TMP_B + paddd TMP_A, \m0_a + paddd TMP_B, \m0_b + sha256msg2 \m3_a, \m0_a + sha256msg2 \m3_b, \m0_b +.endif +.endm + +// +// void __sha256_ni_finup2x(const struct sha256_state *sctx, +// const u8 *data1, const u8 *data2, int len, +// u8 out1[SHA256_DIGEST_SIZE], +// u8 out2[SHA256_DIGEST_SIZE]); +// +// This function computes the SHA-256 digests of two messages |data1| and +// |data2| that are both |len| bytes long, starting from the initial state +// |sctx|. |len| must be at least SHA256_BLOCK_SIZE. +// +// The instructions for the two SHA-256 operations are interleaved. On many +// CPUs, this is almost twice as fast as hashing each message individually due +// to taking better advantage of the CPU's SHA-256 and SIMD throughput. +// +SYM_FUNC_START(__sha256_ni_finup2x) + // Allocate 128 bytes of stack space, 16-byte aligned. + push %rbx + push %rbp + mov %rsp, %rbp + sub $128, %rsp + and $~15, %rsp + + // Load the shuffle mask for swapping the endianness of 32-bit words. + movdqa PSHUFFLE_BYTE_FLIP_MASK(%rip), SHUF_MASK + + // Set up pointer to the round constants. + lea K256+32*4(%rip), SHA256CONSTANTS + + // Initially we're not processing the final blocks. + xor FINAL_STEP, FINAL_STEP + + // Load the initial state from sctx->state. + movdqu OFFSETOF_STATE+0*16(SCTX), STATE0_A // DCBA + movdqu OFFSETOF_STATE+1*16(SCTX), STATE1_A // HGFE + movdqa STATE0_A, TMP_A + punpcklqdq STATE1_A, STATE0_A // FEBA + punpckhqdq TMP_A, STATE1_A // DCHG + pshufd $0x1B, STATE0_A, STATE0_A // ABEF + pshufd $0xB1, STATE1_A, STATE1_A // CDGH + + // Load sctx->count. Take the mod 64 of it to get the number of bytes + // that are buffered in sctx->buf. Also save it in a register with LEN + // added to it. + mov LEN, LEN + mov OFFSETOF_COUNT(SCTX), %rbx + lea (%rbx, LEN64, 1), COUNT + and $63, %ebx + jz .Lfinup2x_enter_loop // No bytes buffered? + + // %ebx bytes (1 to 63) are currently buffered in sctx->buf. Load them + // followed by the first 64 - %ebx bytes of data. Since LEN >= 64, we + // just load 64 bytes from each of sctx->buf, DATA1, and DATA2 + // unconditionally and rearrange the data as needed. 
+ + movdqu OFFSETOF_BUF+0*16(SCTX), MSG0_A + movdqu OFFSETOF_BUF+1*16(SCTX), MSG1_A + movdqu OFFSETOF_BUF+2*16(SCTX), MSG2_A + movdqu OFFSETOF_BUF+3*16(SCTX), MSG3_A + movdqa MSG0_A, 0*16(%rsp) + movdqa MSG1_A, 1*16(%rsp) + movdqa MSG2_A, 2*16(%rsp) + movdqa MSG3_A, 3*16(%rsp) + + movdqu 0*16(DATA1), MSG0_A + movdqu 1*16(DATA1), MSG1_A + movdqu 2*16(DATA1), MSG2_A + movdqu 3*16(DATA1), MSG3_A + movdqu MSG0_A, 0*16(%rsp,%rbx) + movdqu MSG1_A, 1*16(%rsp,%rbx) + movdqu MSG2_A, 2*16(%rsp,%rbx) + movdqu MSG3_A, 3*16(%rsp,%rbx) + movdqa 0*16(%rsp), MSG0_A + movdqa 1*16(%rsp), MSG1_A + movdqa 2*16(%rsp), MSG2_A + movdqa 3*16(%rsp), MSG3_A + + movdqu 0*16(DATA2), MSG0_B + movdqu 1*16(DATA2), MSG1_B + movdqu 2*16(DATA2), MSG2_B + movdqu 3*16(DATA2), MSG3_B + movdqu MSG0_B, 0*16(%rsp,%rbx) + movdqu MSG1_B, 1*16(%rsp,%rbx) + movdqu MSG2_B, 2*16(%rsp,%rbx) + movdqu MSG3_B, 3*16(%rsp,%rbx) + movdqa 0*16(%rsp), MSG0_B + movdqa 1*16(%rsp), MSG1_B + movdqa 2*16(%rsp), MSG2_B + movdqa 3*16(%rsp), MSG3_B + + sub $64, %rbx // rbx = buffered - 64 + sub %rbx, DATA1 // DATA1 += 64 - buffered + sub %rbx, DATA2 // DATA2 += 64 - buffered + add %ebx, LEN // LEN += buffered - 64 + movdqa STATE0_A, STATE0_B + movdqa STATE1_A, STATE1_B + jmp .Lfinup2x_loop_have_data + +.Lfinup2x_enter_loop: + sub $64, LEN + movdqa STATE0_A, STATE0_B + movdqa STATE1_A, STATE1_B +.Lfinup2x_loop: + // Load the next two data blocks. + movdqu 0*16(DATA1), MSG0_A + movdqu 0*16(DATA2), MSG0_B + movdqu 1*16(DATA1), MSG1_A + movdqu 1*16(DATA2), MSG1_B + movdqu 2*16(DATA1), MSG2_A + movdqu 2*16(DATA2), MSG2_B + movdqu 3*16(DATA1), MSG3_A + movdqu 3*16(DATA2), MSG3_B + add $64, DATA1 + add $64, DATA2 +.Lfinup2x_loop_have_data: + // Convert the words of the data blocks from big endian. + pshufb SHUF_MASK, MSG0_A + pshufb SHUF_MASK, MSG0_B + pshufb SHUF_MASK, MSG1_A + pshufb SHUF_MASK, MSG1_B + pshufb SHUF_MASK, MSG2_A + pshufb SHUF_MASK, MSG2_B + pshufb SHUF_MASK, MSG3_A + pshufb SHUF_MASK, MSG3_B +.Lfinup2x_loop_have_bswapped_data: + + // Save the original state for each block. + movdqa STATE0_A, 0*16(%rsp) + movdqa STATE0_B, 1*16(%rsp) + movdqa STATE1_A, 2*16(%rsp) + movdqa STATE1_B, 3*16(%rsp) + + // Do the SHA-256 rounds on each block. +.irp i, 0, 16, 32, 48 + do_4rounds_2x (\i + 0), MSG0_A, MSG1_A, MSG2_A, MSG3_A, \ + MSG0_B, MSG1_B, MSG2_B, MSG3_B + do_4rounds_2x (\i + 4), MSG1_A, MSG2_A, MSG3_A, MSG0_A, \ + MSG1_B, MSG2_B, MSG3_B, MSG0_B + do_4rounds_2x (\i + 8), MSG2_A, MSG3_A, MSG0_A, MSG1_A, \ + MSG2_B, MSG3_B, MSG0_B, MSG1_B + do_4rounds_2x (\i + 12), MSG3_A, MSG0_A, MSG1_A, MSG2_A, \ + MSG3_B, MSG0_B, MSG1_B, MSG2_B +.endr + + // Add the original state for each block. + paddd 0*16(%rsp), STATE0_A + paddd 1*16(%rsp), STATE0_B + paddd 2*16(%rsp), STATE1_A + paddd 3*16(%rsp), STATE1_B + + // Update LEN and loop back if more blocks remain. + sub $64, LEN + jge .Lfinup2x_loop + + // Check if any final blocks need to be handled. + // FINAL_STEP = 2: all done + // FINAL_STEP = 1: need to do count-only padding block + // FINAL_STEP = 0: need to do the block with 0x80 padding byte + cmp $1, FINAL_STEP + jg .Lfinup2x_done + je .Lfinup2x_finalize_countonly + add $64, LEN + jz .Lfinup2x_finalize_blockaligned + + // Not block-aligned; 1 <= LEN <= 63 data bytes remain. Pad the block. + // To do this, write the padding starting with the 0x80 byte to + // &sp[64]. Then for each message, copy the last 64 data bytes to sp + // and load from &sp[64 - LEN] to get the needed padding block. 
This + // code relies on the data buffers being >= 64 bytes in length. + mov $64, %ebx + sub LEN, %ebx // ebx = 64 - LEN + sub %rbx, DATA1 // DATA1 -= 64 - LEN + sub %rbx, DATA2 // DATA2 -= 64 - LEN + mov $0x80, FINAL_STEP // using FINAL_STEP as a temporary + movd FINAL_STEP, MSG0_A + pxor MSG1_A, MSG1_A + movdqa MSG0_A, 4*16(%rsp) + movdqa MSG1_A, 5*16(%rsp) + movdqa MSG1_A, 6*16(%rsp) + movdqa MSG1_A, 7*16(%rsp) + cmp $56, LEN + jge 1f // will COUNT spill into its own block? + shl $3, COUNT + bswap COUNT + mov COUNT, 56(%rsp,%rbx) + mov $2, FINAL_STEP // won't need count-only block + jmp 2f +1: + mov $1, FINAL_STEP // will need count-only block +2: + movdqu 0*16(DATA1), MSG0_A + movdqu 1*16(DATA1), MSG1_A + movdqu 2*16(DATA1), MSG2_A + movdqu 3*16(DATA1), MSG3_A + movdqa MSG0_A, 0*16(%rsp) + movdqa MSG1_A, 1*16(%rsp) + movdqa MSG2_A, 2*16(%rsp) + movdqa MSG3_A, 3*16(%rsp) + movdqu 0*16(%rsp,%rbx), MSG0_A + movdqu 1*16(%rsp,%rbx), MSG1_A + movdqu 2*16(%rsp,%rbx), MSG2_A + movdqu 3*16(%rsp,%rbx), MSG3_A + + movdqu 0*16(DATA2), MSG0_B + movdqu 1*16(DATA2), MSG1_B + movdqu 2*16(DATA2), MSG2_B + movdqu 3*16(DATA2), MSG3_B + movdqa MSG0_B, 0*16(%rsp) + movdqa MSG1_B, 1*16(%rsp) + movdqa MSG2_B, 2*16(%rsp) + movdqa MSG3_B, 3*16(%rsp) + movdqu 0*16(%rsp,%rbx), MSG0_B + movdqu 1*16(%rsp,%rbx), MSG1_B + movdqu 2*16(%rsp,%rbx), MSG2_B + movdqu 3*16(%rsp,%rbx), MSG3_B + jmp .Lfinup2x_loop_have_data + + // Prepare a padding block, either: + // + // {0x80, 0, 0, 0, ..., count (as __be64)} + // This is for a block aligned message. + // + // { 0, 0, 0, 0, ..., count (as __be64)} + // This is for a message whose length mod 64 is >= 56. + // + // Pre-swap the endianness of the words. +.Lfinup2x_finalize_countonly: + pxor MSG0_A, MSG0_A + jmp 1f + +.Lfinup2x_finalize_blockaligned: + mov $0x80000000, %ebx + movd %ebx, MSG0_A +1: + pxor MSG1_A, MSG1_A + pxor MSG2_A, MSG2_A + ror $29, COUNT + movq COUNT, MSG3_A + pslldq $8, MSG3_A + movdqa MSG0_A, MSG0_B + pxor MSG1_B, MSG1_B + pxor MSG2_B, MSG2_B + movdqa MSG3_A, MSG3_B + mov $2, FINAL_STEP + jmp .Lfinup2x_loop_have_bswapped_data + +.Lfinup2x_done: + // Write the two digests with all bytes in the correct order. 
+ movdqa STATE0_A, TMP_A + movdqa STATE0_B, TMP_B + punpcklqdq STATE1_A, STATE0_A // GHEF + punpcklqdq STATE1_B, STATE0_B + punpckhqdq TMP_A, STATE1_A // ABCD + punpckhqdq TMP_B, STATE1_B + pshufd $0xB1, STATE0_A, STATE0_A // HGFE + pshufd $0xB1, STATE0_B, STATE0_B + pshufd $0x1B, STATE1_A, STATE1_A // DCBA + pshufd $0x1B, STATE1_B, STATE1_B + pshufb SHUF_MASK, STATE0_A + pshufb SHUF_MASK, STATE0_B + pshufb SHUF_MASK, STATE1_A + pshufb SHUF_MASK, STATE1_B + movdqu STATE0_A, 1*16(OUT1) + movdqu STATE0_B, 1*16(OUT2) + movdqu STATE1_A, 0*16(OUT1) + movdqu STATE1_B, 0*16(OUT2) + + mov %rbp, %rsp + pop %rbp + pop %rbx + RET +SYM_FUNC_END(__sha256_ni_finup2x) + .section .rodata.cst256.K256, "aM", @progbits, 256 .align 64 K256: .long 0x428a2f98,0x71374491,0xb5c0fbcf,0xe9b5dba5 .long 0x3956c25b,0x59f111f1,0x923f82a4,0xab1c5ed5 diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c index e04a43d9f7d55..ff688bb1d5607 100644 --- a/arch/x86/crypto/sha256_ssse3_glue.c +++ b/arch/x86/crypto/sha256_ssse3_glue.c @@ -331,10 +331,15 @@ static void unregister_sha256_avx2(void) #ifdef CONFIG_AS_SHA256_NI asmlinkage void sha256_ni_transform(struct sha256_state *digest, const u8 *data, int rounds); +asmlinkage void __sha256_ni_finup2x(const struct sha256_state *sctx, + const u8 *data1, const u8 *data2, int len, + u8 out1[SHA256_DIGEST_SIZE], + u8 out2[SHA256_DIGEST_SIZE]); + static int sha256_ni_update(struct shash_desc *desc, const u8 *data, unsigned int len) { return _sha256_update(desc, data, len, sha256_ni_transform); } @@ -355,18 +360,52 @@ static int sha256_ni_digest(struct shash_desc *desc, const u8 *data, { return sha256_base_init(desc) ?: sha256_ni_finup(desc, data, len, out); } +static int sha256_ni_finup_mb(struct shash_desc *desc, + const u8 * const data[], unsigned int len, + u8 * const outs[], unsigned int num_msgs) +{ + struct sha256_state *sctx = shash_desc_ctx(desc); + + /* + * num_msgs != 2 should not happen here, since this algorithm sets + * mb_max_msgs=2, and the crypto API handles num_msgs <= 1 before + * calling into the algorithm's finup_mb method. + */ + if (WARN_ON_ONCE(num_msgs != 2)) + return -EOPNOTSUPP; + + if (unlikely(!crypto_simd_usable())) + return -EOPNOTSUPP; + + /* __sha256_ni_finup2x() assumes SHA256_BLOCK_SIZE <= len <= INT_MAX. */ + if (unlikely(len < SHA256_BLOCK_SIZE || len > INT_MAX)) + return -EOPNOTSUPP; + + /* __sha256_ni_finup2x() assumes the following offsets. 
*/ + BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0); + BUILD_BUG_ON(offsetof(struct sha256_state, count) != 32); + BUILD_BUG_ON(offsetof(struct sha256_state, buf) != 40); + + kernel_fpu_begin(); + __sha256_ni_finup2x(sctx, data[0], data[1], len, outs[0], outs[1]); + kernel_fpu_end(); + return 0; +} + static struct shash_alg sha256_ni_algs[] = { { .digestsize = SHA256_DIGEST_SIZE, .init = sha256_base_init, .update = sha256_ni_update, .final = sha256_ni_final, .finup = sha256_ni_finup, .digest = sha256_ni_digest, + .finup_mb = sha256_ni_finup_mb, .descsize = sizeof(struct sha256_state), + .mb_max_msgs = 2, .base = { .cra_name = "sha256", .cra_driver_name = "sha256-ni", .cra_priority = 250, .cra_blocksize = SHA256_BLOCK_SIZE, From patchwork Tue Oct 1 15:37:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13818279 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AC8701CB32C; Tue, 1 Oct 2024 15:39:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727797161; cv=none; b=AjK/MkhfroJAJR2c8F/qlnOAT+tYSMUR2wg36FOM6QyLOF6Oc2uXJZxHD3C82KR+tvMvKoskV3DxKRDMGT69Aj+hQlAm4QrZ4npPS0ImoYp6EkFNhVXOLp5IOSPSmZEri2O3AUsMZJOpK3p62zEXSbaU8otI5IgYgGmXA36LiDE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727797161; c=relaxed/simple; bh=6e2w/98sLW6Dl4hsS6z553fGZi0Q6OZIjpkmmNfynUQ=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=DOf3ymWtL22S01FCyP4LuEVf0r7W+3u0T2/26a8YWsLIhgqMP7/LH3IB6bFaZl+CkB6ybDiq8EM4XyrXnrpKvLIpRmfXkpAi+XJjy/VMeHWAJMYgenF8CUB+s6ilblYSYaw1AEGg+LHpL11/uHh/Z9XQQMPLCPHopDSpDDOhqS0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=pua0eS6T; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="pua0eS6T" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3B51EC4CED6; Tue, 1 Oct 2024 15:39:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1727797161; bh=6e2w/98sLW6Dl4hsS6z553fGZi0Q6OZIjpkmmNfynUQ=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=pua0eS6T8ElOqrlJUAJkAM/8LWQCbWuGGyveDb8+v0xQVXkAXNWDa+4DiD5rHcEFd 3ZODXeXcrZC7R9+7mE1T09B22DDLktEFRRt4VTQmV5CZVXRhg/HqyRxW9WNvLWpcIb MaSoIoG8IkWQjWvP47l45ANcdtVwW7tvdnbwFKCHQOx5XL7SFCWNJGeqX903FUHWkJ XYH+0ZHZ+Rz4KDP9QyBW33sN95tpcFAyG2Z8cAsKWpuePEJUcARYW+BqimmUEAvDUR qL94uYw7do34xw42/Sz8GJWcRGGkUWk2qnugW0/VTN1+zeKdnKQiN1Adp3CN/eMxYU K5Kvls9LR/0VQ== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v7 4/7] crypto: arm64/sha256-ce - add support for finup_mb Date: Tue, 1 Oct 2024 08:37:15 -0700 Message-ID: <20241001153718.111665-5-ebiggers@kernel.org> X-Mailer: git-send-email 2.46.2 In-Reply-To: <20241001153718.111665-1-ebiggers@kernel.org> References: 
<20241001153718.111665-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers Add an implementation of finup_mb to sha256-ce, using an interleaving factor of 2. It interleaves a finup operation for two equal-length messages that share a common prefix. dm-verity and fs-verity will take advantage of this for greatly improved performance on capable CPUs. This increases the throughput of SHA-256 hashing 4096-byte messages by the following amounts on the following CPUs: ARM Cortex-X1: 70% ARM Cortex-X3: 68% ARM Cortex-A76: 65% ARM Cortex-A715: 43% ARM Cortex-A510: 25% ARM Cortex-A55: 8% Reviewed-by: Ard Biesheuvel Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- arch/arm64/crypto/sha2-ce-core.S | 281 ++++++++++++++++++++++++++++++- arch/arm64/crypto/sha2-ce-glue.c | 40 +++++ 2 files changed, 315 insertions(+), 6 deletions(-) diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/crypto/sha2-ce-core.S index fce84d88ddb2c..fb5d5227e585c 100644 --- a/arch/arm64/crypto/sha2-ce-core.S +++ b/arch/arm64/crypto/sha2-ce-core.S @@ -68,22 +68,26 @@ .word 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5 .word 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3 .word 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208 .word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 + .macro load_round_constants tmp + adr_l \tmp, .Lsha2_rcon + ld1 { v0.4s- v3.4s}, [\tmp], #64 + ld1 { v4.4s- v7.4s}, [\tmp], #64 + ld1 { v8.4s-v11.4s}, [\tmp], #64 + ld1 {v12.4s-v15.4s}, [\tmp] + .endm + /* * int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src, * int blocks) */ .text SYM_FUNC_START(__sha256_ce_transform) - /* load round constants */ - adr_l x8, .Lsha2_rcon - ld1 { v0.4s- v3.4s}, [x8], #64 - ld1 { v4.4s- v7.4s}, [x8], #64 - ld1 { v8.4s-v11.4s}, [x8], #64 - ld1 {v12.4s-v15.4s}, [x8] + + load_round_constants x8 /* load state */ ld1 {dgav.4s, dgbv.4s}, [x0] /* load sha256_ce_state::finalize */ @@ -153,5 +157,270 @@ CPU_LE( rev32 v19.16b, v19.16b ) /* store new state */ 3: st1 {dgav.4s, dgbv.4s}, [x0] mov w0, w2 ret SYM_FUNC_END(__sha256_ce_transform) + + .unreq dga + .unreq dgav + .unreq dgb + .unreq dgbv + .unreq t0 + .unreq t1 + .unreq dg0q + .unreq dg0v + .unreq dg1q + .unreq dg1v + .unreq dg2q + .unreq dg2v + + // parameters for __sha256_ce_finup2x() + sctx .req x0 + data1 .req x1 + data2 .req x2 + len .req w3 + out1 .req x4 + out2 .req x5 + + // other scalar variables + count .req x6 + final_step .req w7 + + // x8-x9 are used as temporaries. + + // v0-v15 are used to cache the SHA-256 round constants. + // v16-v19 are used for the message schedule for the first message. + // v20-v23 are used for the message schedule for the second message. + // v24-v31 are used for the state and temporaries as given below. + // *_a are for the first message and *_b for the second. + state0_a_q .req q24 + state0_a .req v24 + state1_a_q .req q25 + state1_a .req v25 + state0_b_q .req q26 + state0_b .req v26 + state1_b_q .req q27 + state1_b .req v27 + t0_a .req v28 + t0_b .req v29 + t1_a_q .req q30 + t1_a .req v30 + t1_b_q .req q31 + t1_b .req v31 + +#define OFFSETOF_COUNT 32 // offsetof(struct sha256_state, count) +#define OFFSETOF_BUF 40 // offsetof(struct sha256_state, buf) +// offsetof(struct sha256_state, state) is assumed to be 0. + + // Do 4 rounds of SHA-256 for each of two messages (interleaved). 
m0_a + // and m0_b contain the current 4 message schedule words for the first + // and second message respectively. + // + // If not all the message schedule words have been computed yet, then + // this also computes 4 more message schedule words for each message. + // m1_a-m3_a contain the next 3 groups of 4 message schedule words for + // the first message, and likewise m1_b-m3_b for the second. After + // consuming the current value of m0_a, this macro computes the group + // after m3_a and writes it to m0_a, and likewise for *_b. This means + // that the next (m0_a, m1_a, m2_a, m3_a) is the current (m1_a, m2_a, + // m3_a, m0_a), and likewise for *_b, so the caller must cycle through + // the registers accordingly. + .macro do_4rounds_2x i, k, m0_a, m1_a, m2_a, m3_a, \ + m0_b, m1_b, m2_b, m3_b + add t0_a\().4s, \m0_a\().4s, \k\().4s + add t0_b\().4s, \m0_b\().4s, \k\().4s + .if \i < 48 + sha256su0 \m0_a\().4s, \m1_a\().4s + sha256su0 \m0_b\().4s, \m1_b\().4s + sha256su1 \m0_a\().4s, \m2_a\().4s, \m3_a\().4s + sha256su1 \m0_b\().4s, \m2_b\().4s, \m3_b\().4s + .endif + mov t1_a.16b, state0_a.16b + mov t1_b.16b, state0_b.16b + sha256h state0_a_q, state1_a_q, t0_a\().4s + sha256h state0_b_q, state1_b_q, t0_b\().4s + sha256h2 state1_a_q, t1_a_q, t0_a\().4s + sha256h2 state1_b_q, t1_b_q, t0_b\().4s + .endm + + .macro do_16rounds_2x i, k0, k1, k2, k3 + do_4rounds_2x \i + 0, \k0, v16, v17, v18, v19, v20, v21, v22, v23 + do_4rounds_2x \i + 4, \k1, v17, v18, v19, v16, v21, v22, v23, v20 + do_4rounds_2x \i + 8, \k2, v18, v19, v16, v17, v22, v23, v20, v21 + do_4rounds_2x \i + 12, \k3, v19, v16, v17, v18, v23, v20, v21, v22 + .endm + +// +// void __sha256_ce_finup2x(const struct sha256_state *sctx, +// const u8 *data1, const u8 *data2, int len, +// u8 out1[SHA256_DIGEST_SIZE], +// u8 out2[SHA256_DIGEST_SIZE]); +// +// This function computes the SHA-256 digests of two messages |data1| and +// |data2| that are both |len| bytes long, starting from the initial state +// |sctx|. |len| must be at least SHA256_BLOCK_SIZE. +// +// The instructions for the two SHA-256 operations are interleaved. On many +// CPUs, this is almost twice as fast as hashing each message individually due +// to taking better advantage of the CPU's SHA-256 and SIMD throughput. +// +SYM_FUNC_START(__sha256_ce_finup2x) + sub sp, sp, #128 + mov final_step, #0 + load_round_constants x8 + + // Load the initial state from sctx->state. + ld1 {state0_a.4s-state1_a.4s}, [sctx] + + // Load sctx->count. Take the mod 64 of it to get the number of bytes + // that are buffered in sctx->buf. Also save it in a register with len + // added to it. + ldr x8, [sctx, #OFFSETOF_COUNT] + add count, x8, len, sxtw + and x8, x8, #63 + cbz x8, .Lfinup2x_enter_loop // No bytes buffered? + + // x8 bytes (1 to 63) are currently buffered in sctx->buf. Load them + // followed by the first 64 - x8 bytes of data. Since len >= 64, we + // just load 64 bytes from each of sctx->buf, data1, and data2 + // unconditionally and rearrange the data as needed. 
+ add x9, sctx, #OFFSETOF_BUF + ld1 {v16.16b-v19.16b}, [x9] + st1 {v16.16b-v19.16b}, [sp] + + ld1 {v16.16b-v19.16b}, [data1], #64 + add x9, sp, x8 + st1 {v16.16b-v19.16b}, [x9] + ld1 {v16.4s-v19.4s}, [sp] + + ld1 {v20.16b-v23.16b}, [data2], #64 + st1 {v20.16b-v23.16b}, [x9] + ld1 {v20.4s-v23.4s}, [sp] + + sub len, len, #64 + sub data1, data1, x8 + sub data2, data2, x8 + add len, len, w8 + mov state0_b.16b, state0_a.16b + mov state1_b.16b, state1_a.16b + b .Lfinup2x_loop_have_data + +.Lfinup2x_enter_loop: + sub len, len, #64 + mov state0_b.16b, state0_a.16b + mov state1_b.16b, state1_a.16b +.Lfinup2x_loop: + // Load the next two data blocks. + ld1 {v16.4s-v19.4s}, [data1], #64 + ld1 {v20.4s-v23.4s}, [data2], #64 +.Lfinup2x_loop_have_data: + // Convert the words of the data blocks from big endian. +CPU_LE( rev32 v16.16b, v16.16b ) +CPU_LE( rev32 v17.16b, v17.16b ) +CPU_LE( rev32 v18.16b, v18.16b ) +CPU_LE( rev32 v19.16b, v19.16b ) +CPU_LE( rev32 v20.16b, v20.16b ) +CPU_LE( rev32 v21.16b, v21.16b ) +CPU_LE( rev32 v22.16b, v22.16b ) +CPU_LE( rev32 v23.16b, v23.16b ) +.Lfinup2x_loop_have_bswapped_data: + + // Save the original state for each block. + st1 {state0_a.4s-state1_b.4s}, [sp] + + // Do the SHA-256 rounds on each block. + do_16rounds_2x 0, v0, v1, v2, v3 + do_16rounds_2x 16, v4, v5, v6, v7 + do_16rounds_2x 32, v8, v9, v10, v11 + do_16rounds_2x 48, v12, v13, v14, v15 + + // Add the original state for each block. + ld1 {v16.4s-v19.4s}, [sp] + add state0_a.4s, state0_a.4s, v16.4s + add state1_a.4s, state1_a.4s, v17.4s + add state0_b.4s, state0_b.4s, v18.4s + add state1_b.4s, state1_b.4s, v19.4s + + // Update len and loop back if more blocks remain. + sub len, len, #64 + tbz len, #31, .Lfinup2x_loop // len >= 0? + + // Check if any final blocks need to be handled. + // final_step = 2: all done + // final_step = 1: need to do count-only padding block + // final_step = 0: need to do the block with 0x80 padding byte + tbnz final_step, #1, .Lfinup2x_done + tbnz final_step, #0, .Lfinup2x_finalize_countonly + add len, len, #64 + cbz len, .Lfinup2x_finalize_blockaligned + + // Not block-aligned; 1 <= len <= 63 data bytes remain. Pad the block. + // To do this, write the padding starting with the 0x80 byte to + // &sp[64]. Then for each message, copy the last 64 data bytes to sp + // and load from &sp[64 - len] to get the needed padding block. This + // code relies on the data buffers being >= 64 bytes in length. + sub w8, len, #64 // w8 = len - 64 + add data1, data1, w8, sxtw // data1 += len - 64 + add data2, data2, w8, sxtw // data2 += len - 64 + mov x9, 0x80 + fmov d16, x9 + movi v17.16b, #0 + stp q16, q17, [sp, #64] + stp q17, q17, [sp, #96] + sub x9, sp, w8, sxtw // x9 = &sp[64 - len] + cmp len, #56 + b.ge 1f // will count spill into its own block? + lsl count, count, #3 + rev count, count + str count, [x9, #56] + mov final_step, #2 // won't need count-only block + b 2f +1: + mov final_step, #1 // will need count-only block +2: + ld1 {v16.16b-v19.16b}, [data1] + st1 {v16.16b-v19.16b}, [sp] + ld1 {v16.4s-v19.4s}, [x9] + ld1 {v20.16b-v23.16b}, [data2] + st1 {v20.16b-v23.16b}, [sp] + ld1 {v20.4s-v23.4s}, [x9] + b .Lfinup2x_loop_have_data + + // Prepare a padding block, either: + // + // {0x80, 0, 0, 0, ..., count (as __be64)} + // This is for a block aligned message. + // + // { 0, 0, 0, 0, ..., count (as __be64)} + // This is for a message whose length mod 64 is >= 56. + // + // Pre-swap the endianness of the words. 
+.Lfinup2x_finalize_countonly: + movi v16.2d, #0 + b 1f +.Lfinup2x_finalize_blockaligned: + mov x8, #0x80000000 + fmov d16, x8 +1: + movi v17.2d, #0 + movi v18.2d, #0 + ror count, count, #29 // ror(lsl(count, 3), 32) + mov v19.d[0], xzr + mov v19.d[1], count + mov v20.16b, v16.16b + movi v21.2d, #0 + movi v22.2d, #0 + mov v23.16b, v19.16b + mov final_step, #2 + b .Lfinup2x_loop_have_bswapped_data + +.Lfinup2x_done: + // Write the two digests with all bytes in the correct order. +CPU_LE( rev32 state0_a.16b, state0_a.16b ) +CPU_LE( rev32 state1_a.16b, state1_a.16b ) +CPU_LE( rev32 state0_b.16b, state0_b.16b ) +CPU_LE( rev32 state1_b.16b, state1_b.16b ) + st1 {state0_a.4s-state1_a.4s}, [out1] + st1 {state0_b.4s-state1_b.4s}, [out2] + add sp, sp, #128 + ret +SYM_FUNC_END(__sha256_ce_finup2x) diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c index 0a44d2e7ee1f7..b37cffc4191fb 100644 --- a/arch/arm64/crypto/sha2-ce-glue.c +++ b/arch/arm64/crypto/sha2-ce-glue.c @@ -31,10 +31,15 @@ extern const u32 sha256_ce_offsetof_count; extern const u32 sha256_ce_offsetof_finalize; asmlinkage int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src, int blocks); +asmlinkage void __sha256_ce_finup2x(const struct sha256_state *sctx, + const u8 *data1, const u8 *data2, int len, + u8 out1[SHA256_DIGEST_SIZE], + u8 out2[SHA256_DIGEST_SIZE]); + static void sha256_ce_transform(struct sha256_state *sst, u8 const *src, int blocks) { while (blocks) { int rem; @@ -122,10 +127,43 @@ static int sha256_ce_digest(struct shash_desc *desc, const u8 *data, { sha256_base_init(desc); return sha256_ce_finup(desc, data, len, out); } +static int sha256_ce_finup_mb(struct shash_desc *desc, + const u8 * const data[], unsigned int len, + u8 * const outs[], unsigned int num_msgs) +{ + struct sha256_ce_state *sctx = shash_desc_ctx(desc); + + /* + * num_msgs != 2 should not happen here, since this algorithm sets + * mb_max_msgs=2, and the crypto API handles num_msgs <= 1 before + * calling into the algorithm's finup_mb method. + */ + if (WARN_ON_ONCE(num_msgs != 2)) + return -EOPNOTSUPP; + + if (unlikely(!crypto_simd_usable())) + return -EOPNOTSUPP; + + /* __sha256_ce_finup2x() assumes SHA256_BLOCK_SIZE <= len <= INT_MAX. */ + if (unlikely(len < SHA256_BLOCK_SIZE || len > INT_MAX)) + return -EOPNOTSUPP; + + /* __sha256_ce_finup2x() assumes the following offsets. 
*/ + BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0); + BUILD_BUG_ON(offsetof(struct sha256_state, count) != 32); + BUILD_BUG_ON(offsetof(struct sha256_state, buf) != 40); + + kernel_neon_begin(); + __sha256_ce_finup2x(&sctx->sst, data[0], data[1], len, outs[0], + outs[1]); + kernel_neon_end(); + return 0; +} + static int sha256_ce_export(struct shash_desc *desc, void *out) { struct sha256_ce_state *sctx = shash_desc_ctx(desc); memcpy(out, &sctx->sst, sizeof(struct sha256_state)); @@ -162,13 +200,15 @@ static struct shash_alg algs[] = { { .init = sha256_base_init, .update = sha256_ce_update, .final = sha256_ce_final, .finup = sha256_ce_finup, .digest = sha256_ce_digest, + .finup_mb = sha256_ce_finup_mb, .export = sha256_ce_export, .import = sha256_ce_import, .descsize = sizeof(struct sha256_ce_state), + .mb_max_msgs = 2, .statesize = sizeof(struct sha256_state), .digestsize = SHA256_DIGEST_SIZE, .base = { .cra_name = "sha256", .cra_driver_name = "sha256-ce", From patchwork Tue Oct 1 15:37:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13818280 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 258C61CB339; Tue, 1 Oct 2024 15:39:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727797162; cv=none; b=G7d/ZdkEHzRpHXoNsr79WKhU2DSqjo4bSjvvhvBCtR9WEz2W3Zm7bl+EYwuM/oGZIU3JYEkIipDQSyb0BQv/4ZGjz3nM/GiBZwzavg6qln5CsB0bV/Z+L6gmciCOgPl/QULY6TBJHTbLPbvCGqDNqld/AdBCmUOPxMxCX8KsLrg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727797162; c=relaxed/simple; bh=hJVS4brsMwQOzjfUUbc1xpujgYEHBqrksr15X431BTo=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=S6oQQBugLQ54wlys8qkQPBXdksnGxWhoXk3xygb2RB+66I8VW4SYcF7oS+KrzFZVD/J6N6EPxYctu2EDoYbvVDwulJT4o9pw2gjB+oVK8K13Lr08zmxcN9ookBFW9FopnncGfCLrH1P5F/qWfopS/iQDmVt9r6BZTQpmAp5xh40= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=nrNi6bDp; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="nrNi6bDp" Received: by smtp.kernel.org (Postfix) with ESMTPSA id ACA7AC4CECF; Tue, 1 Oct 2024 15:39:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1727797162; bh=hJVS4brsMwQOzjfUUbc1xpujgYEHBqrksr15X431BTo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=nrNi6bDp3du6HwiCctgr2368xqcWCsBAbDlB2DMVNDJVI/QvEAR1QY8Tuc9ENmC00 g+0uTj28u5pexuAd4yzs9kf4e5HBiqqp/+E8ZydErUpXUK2TvEMco985bI2R5Et/Wg paP+Pu4vFmrrNPCeWJROW74FtZOXDTyQr/ZGDlhs6P+MxLVpQDR9Jmr49qw1jdLiFm aN35kX+wRuJ1mE+4yifRwVJCt8YonhEGvUxaUF/DFLzxzJX6umyNQkXciut+cyN8eP L6wBboDWE8QwpNRDjnlzZ5wh6eeZ4XAfG550t2r+2C6gXmkLGGc9gopQsin/jb2mwd vuiwtRB9JBqmQ== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v7 5/7] fsverity: 
improve performance by using multibuffer hashing Date: Tue, 1 Oct 2024 08:37:16 -0700 Message-ID: <20241001153718.111665-6-ebiggers@kernel.org> X-Mailer: git-send-email 2.46.2 In-Reply-To: <20241001153718.111665-1-ebiggers@kernel.org> References: <20241001153718.111665-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers When supported by the hash algorithm, use crypto_shash_finup_mb() to interleave the hashing of pairs of data blocks. On some CPUs this nearly doubles hashing performance. The increase in overall throughput of cold-cache fsverity reads that I'm seeing on arm64 and x86_64 is roughly 35% (though this metric is hard to measure as it jumps around a lot). For now this is only done on the verification path, and only for data blocks, not Merkle tree blocks. We could use finup_mb on Merkle tree blocks too, but that is less important as there aren't as many Merkle tree blocks as data blocks, and that would require some additional code restructuring. We could also use finup_mb to accelerate building the Merkle tree, but verification performance is more important. Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- fs/verity/fsverity_private.h | 7 ++ fs/verity/hash_algs.c | 8 +- fs/verity/verify.c | 169 +++++++++++++++++++++++++++++------ 3 files changed, 153 insertions(+), 31 deletions(-) diff --git a/fs/verity/fsverity_private.h b/fs/verity/fsverity_private.h index b3506f56e180b..7535c9d9b516f 100644 --- a/fs/verity/fsverity_private.h +++ b/fs/verity/fsverity_private.h @@ -27,10 +27,15 @@ struct fsverity_hash_alg { /* * The HASH_ALGO_* constant for this algorithm. This is different from * FS_VERITY_HASH_ALG_*, which uses a different numbering scheme. */ enum hash_algo algo_id; + /* + * The maximum supported interleaving factor for multibuffer hashing, or + * 1 if the algorithm doesn't support multibuffer hashing + */ + int mb_max_msgs; }; /* Merkle tree parameters: hash algorithm, initial hash state, and topology */ struct merkle_tree_params { const struct fsverity_hash_alg *hash_alg; /* the hash algorithm */ @@ -150,8 +155,10 @@ static inline void fsverity_init_signature(void) } #endif /* !CONFIG_FS_VERITY_BUILTIN_SIGNATURES */ /* verify.c */ +#define FS_VERITY_MAX_PENDING_DATA_BLOCKS 2 + void __init fsverity_init_workqueue(void); #endif /* _FSVERITY_PRIVATE_H */ diff --git a/fs/verity/hash_algs.c b/fs/verity/hash_algs.c index 6b08b1d9a7d7c..f24d7c2954552 100644 --- a/fs/verity/hash_algs.c +++ b/fs/verity/hash_algs.c @@ -82,12 +82,16 @@ const struct fsverity_hash_alg *fsverity_get_hash_alg(const struct inode *inode, if (WARN_ON_ONCE(alg->digest_size != crypto_shash_digestsize(tfm))) goto err_free_tfm; if (WARN_ON_ONCE(alg->block_size != crypto_shash_blocksize(tfm))) goto err_free_tfm; - pr_info("%s using implementation \"%s\"\n", - alg->name, crypto_shash_driver_name(tfm)); + alg->mb_max_msgs = min(crypto_shash_mb_max_msgs(tfm), + FS_VERITY_MAX_PENDING_DATA_BLOCKS); + + pr_info("%s using implementation \"%s\"%s\n", + alg->name, crypto_shash_driver_name(tfm), + alg->mb_max_msgs > 1 ? 
" (multibuffer)" : ""); /* pairs with smp_load_acquire() above */ smp_store_release(&alg->tfm, tfm); goto out_unlock; diff --git a/fs/verity/verify.c b/fs/verity/verify.c index 4fcad0825a120..a07651c22520e 100644 --- a/fs/verity/verify.c +++ b/fs/verity/verify.c @@ -8,10 +8,31 @@ #include "fsverity_private.h" #include #include +struct fsverity_pending_block { + const void *data; + u64 pos; + u8 real_hash[FS_VERITY_MAX_DIGEST_SIZE]; +}; + +struct fsverity_verification_context { + struct inode *inode; + struct fsverity_info *vi; + unsigned long max_ra_pages; + + /* + * This is the queue of data blocks that are pending verification. We + * allow multiple blocks to be queued up in order to support multibuffer + * hashing, i.e. interleaving the hashing of multiple messages. On many + * CPUs this improves performance significantly. + */ + int num_pending; + struct fsverity_pending_block pending_blocks[FS_VERITY_MAX_PENDING_DATA_BLOCKS]; +}; + static struct workqueue_struct *fsverity_read_workqueue; /* * Returns true if the hash block with index @hblock_idx in the tree, located in * @hpage, has already been verified. @@ -77,23 +98,25 @@ static bool is_hash_block_verified(struct fsverity_info *vi, struct page *hpage, SetPageChecked(hpage); return false; } /* - * Verify a single data block against the file's Merkle tree. + * Verify the hash of a single data block against the file's Merkle tree. * * In principle, we need to verify the entire path to the root node. However, * for efficiency the filesystem may cache the hash blocks. Therefore we need * only ascend the tree until an already-verified hash block is seen, and then * verify the path to that block. * * Return: %true if the data block is valid, else %false. */ static bool verify_data_block(struct inode *inode, struct fsverity_info *vi, - const void *data, u64 data_pos, unsigned long max_ra_pages) + const struct fsverity_pending_block *dblock, + unsigned long max_ra_pages) { + const u64 data_pos = dblock->pos; const struct merkle_tree_params *params = &vi->tree_params; const unsigned int hsize = params->digest_size; int level; u8 _want_hash[FS_VERITY_MAX_DIGEST_SIZE]; const u8 *want_hash; @@ -113,23 +136,27 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi, * The index of the previous level's block within that level; also the * index of that block's hash within the current level. */ u64 hidx = data_pos >> params->log_blocksize; - /* Up to 1 + FS_VERITY_MAX_LEVELS pages may be mapped at once */ - BUILD_BUG_ON(1 + FS_VERITY_MAX_LEVELS > KM_MAX_IDX); + /* + * Up to FS_VERITY_MAX_PENDING_DATA_BLOCKS + FS_VERITY_MAX_LEVELS pages + * may be mapped at once. + */ + BUILD_BUG_ON(FS_VERITY_MAX_PENDING_DATA_BLOCKS + + FS_VERITY_MAX_LEVELS > KM_MAX_IDX); if (unlikely(data_pos >= inode->i_size)) { /* * This can happen in the data page spanning EOF when the Merkle * tree block size is less than the page size. The Merkle tree * doesn't cover data blocks fully past EOF. But the entire * page spanning EOF can be visible to userspace via a mmap, and * any part past EOF should be all zeroes. Therefore, we need * to verify that any data blocks fully past EOF are all zeroes. */ - if (memchr_inv(data, 0, params->block_size)) { + if (memchr_inv(dblock->data, 0, params->block_size)) { fsverity_err(inode, "FILE CORRUPTED! 
Data past EOF is not zeroed"); return false; } return true; @@ -219,54 +246,120 @@ verify_data_block(struct inode *inode, struct fsverity_info *vi, want_hash = _want_hash; kunmap_local(haddr); put_page(hpage); } - /* Finally, verify the data block. */ - if (fsverity_hash_block(params, inode, data, real_hash) != 0) - goto error; - if (memcmp(want_hash, real_hash, hsize) != 0) + /* Finally, verify the hash of the data block. */ + if (memcmp(want_hash, dblock->real_hash, hsize) != 0) goto corrupted; return true; corrupted: fsverity_err(inode, "FILE CORRUPTED! pos=%llu, level=%d, want_hash=%s:%*phN, real_hash=%s:%*phN", data_pos, level - 1, params->hash_alg->name, hsize, want_hash, - params->hash_alg->name, hsize, real_hash); + params->hash_alg->name, hsize, + level == 0 ? dblock->real_hash : real_hash); error: for (; level > 0; level--) { kunmap_local(hblocks[level - 1].addr); put_page(hblocks[level - 1].page); } return false; } +static void +fsverity_init_verification_context(struct fsverity_verification_context *ctx, + struct inode *inode, + unsigned long max_ra_pages) +{ + ctx->inode = inode; + ctx->vi = inode->i_verity_info; + ctx->max_ra_pages = max_ra_pages; + ctx->num_pending = 0; +} + +static void +fsverity_clear_pending_blocks(struct fsverity_verification_context *ctx) +{ + int i; + + for (i = ctx->num_pending - 1; i >= 0; i--) { + kunmap_local(ctx->pending_blocks[i].data); + ctx->pending_blocks[i].data = NULL; + } + ctx->num_pending = 0; +} + +static bool +fsverity_verify_pending_blocks(struct fsverity_verification_context *ctx) +{ + struct inode *inode = ctx->inode; + struct fsverity_info *vi = ctx->vi; + const struct merkle_tree_params *params = &vi->tree_params; + SHASH_DESC_ON_STACK(desc, params->hash_alg->tfm); + const u8 *data[FS_VERITY_MAX_PENDING_DATA_BLOCKS]; + u8 *real_hashes[FS_VERITY_MAX_PENDING_DATA_BLOCKS]; + int i; + int err; + + if (ctx->num_pending == 0) + return true; + + for (i = 0; i < ctx->num_pending; i++) { + data[i] = ctx->pending_blocks[i].data; + real_hashes[i] = ctx->pending_blocks[i].real_hash; + } + + desc->tfm = params->hash_alg->tfm; + if (params->hashstate) + err = crypto_shash_import(desc, params->hashstate); + else + err = crypto_shash_init(desc); + if (err) { + fsverity_err(inode, "Error %d importing hash state", err); + return false; + } + err = crypto_shash_finup_mb(desc, data, params->block_size, real_hashes, + ctx->num_pending); + if (err) { + fsverity_err(inode, "Error %d computing block hashes", err); + return false; + } + + for (i = 0; i < ctx->num_pending; i++) { + if (!verify_data_block(inode, vi, &ctx->pending_blocks[i], + ctx->max_ra_pages)) + return false; + } + + fsverity_clear_pending_blocks(ctx); + return true; +} + static bool -verify_data_blocks(struct folio *data_folio, size_t len, size_t offset, - unsigned long max_ra_pages) +fsverity_add_data_blocks(struct fsverity_verification_context *ctx, + struct folio *data_folio, size_t len, size_t offset) { - struct inode *inode = data_folio->mapping->host; - struct fsverity_info *vi = inode->i_verity_info; - const unsigned int block_size = vi->tree_params.block_size; + struct fsverity_info *vi = ctx->vi; + const struct merkle_tree_params *params = &vi->tree_params; + const unsigned int block_size = params->block_size; + const int mb_max_msgs = params->hash_alg->mb_max_msgs; u64 pos = (u64)data_folio->index << PAGE_SHIFT; if (WARN_ON_ONCE(len <= 0 || !IS_ALIGNED(len | offset, block_size))) return false; if (WARN_ON_ONCE(!folio_test_locked(data_folio) || folio_test_uptodate(data_folio))) 
return false; do { - void *data; - bool valid; - - data = kmap_local_folio(data_folio, offset); - valid = verify_data_block(inode, vi, data, pos + offset, - max_ra_pages); - kunmap_local(data); - if (!valid) + ctx->pending_blocks[ctx->num_pending].data = + kmap_local_folio(data_folio, offset); + ctx->pending_blocks[ctx->num_pending].pos = pos + offset; + if (++ctx->num_pending == mb_max_msgs && + !fsverity_verify_pending_blocks(ctx)) return false; offset += block_size; len -= block_size; } while (len); return true; @@ -284,11 +377,19 @@ verify_data_blocks(struct folio *data_folio, size_t len, size_t offset, * * Return: %true if the data is valid, else %false. */ bool fsverity_verify_blocks(struct folio *folio, size_t len, size_t offset) { - return verify_data_blocks(folio, len, offset, 0); + struct fsverity_verification_context ctx; + + fsverity_init_verification_context(&ctx, folio->mapping->host, 0); + + if (fsverity_add_data_blocks(&ctx, folio, len, offset) && + fsverity_verify_pending_blocks(&ctx)) + return true; + fsverity_clear_pending_blocks(&ctx); + return false; } EXPORT_SYMBOL_GPL(fsverity_verify_blocks); #ifdef CONFIG_BLOCK /** @@ -305,10 +406,12 @@ EXPORT_SYMBOL_GPL(fsverity_verify_blocks); * filesystems) must instead call fsverity_verify_page() directly on each page. * All filesystems must also call fsverity_verify_page() on holes. */ void fsverity_verify_bio(struct bio *bio) { + struct inode *inode = bio_first_folio_all(bio)->mapping->host; + struct fsverity_verification_context ctx; struct folio_iter fi; unsigned long max_ra_pages = 0; if (bio->bi_opf & REQ_RAHEAD) { /* @@ -321,17 +424,25 @@ void fsverity_verify_bio(struct bio *bio) * reduces the number of I/O requests made to the Merkle tree. */ max_ra_pages = bio->bi_iter.bi_size >> (PAGE_SHIFT + 2); } + fsverity_init_verification_context(&ctx, inode, max_ra_pages); + bio_for_each_folio_all(fi, bio) { - if (!verify_data_blocks(fi.folio, fi.length, fi.offset, - max_ra_pages)) { - bio->bi_status = BLK_STS_IOERR; - break; - } + if (!fsverity_add_data_blocks(&ctx, fi.folio, fi.length, + fi.offset)) + goto ioerr; } + + if (!fsverity_verify_pending_blocks(&ctx)) + goto ioerr; + return; + +ioerr: + fsverity_clear_pending_blocks(&ctx); + bio->bi_status = BLK_STS_IOERR; } EXPORT_SYMBOL_GPL(fsverity_verify_bio); #endif /* CONFIG_BLOCK */ /** From patchwork Tue Oct 1 15:37:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13818281 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 9D5691CB30B; Tue, 1 Oct 2024 15:39:22 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727797162; cv=none; b=VQ2Xp5dii9ZOQBtYhwQkfkQ25kbYrTr6asUI9aRAUKSpwT32WUtfsBsrg44Wf5ov8FWjJawFounXsAIG2P/aRzgpq4cFNPIKOfsolGoMNWnb5/JVInv0dpGv9ZVyRmHhPpKMZe7ZNFVkfDSLgQZQSXjU/UsarRJBMLEgz31f3gg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727797162; c=relaxed/simple; bh=X634+WQbQNIQ5hisrXRU0gBeQmzidV7yl9kCtngl8ow=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
b=Vlt3vNIInNYPji9Hlf/3XUnLW9sQur+c54moM1AnfhkOrv5adZPtYqX83vQ6AsA9/k+sjjzaY+k5FlBzj2o0A+/ClfTws7PN8MtJ2H5IZAgo9VU3AzPPNCpKqg8sMYaMNVSmHkky5bBBPrZV0Qx/18u5SIWxSlhdga9II+W/2wg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=bYF7D6O2; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="bYF7D6O2" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 299F0C4CED3; Tue, 1 Oct 2024 15:39:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1727797162; bh=X634+WQbQNIQ5hisrXRU0gBeQmzidV7yl9kCtngl8ow=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=bYF7D6O26qDqXuLNvpvTCaCQctnvRMYZ8LmTfjHa/ckDFPzoLKBb2sML7tddCkuII 9IMrGP0BnmhXQ2OgC0L5Oe7bn3QW2k73JCgrozE3KDYUFl7WB80Wg0FZkLoUxI0kQD Q5GLzpc4bDyoNQg0FIJBTRXQQirJo24L/ZXsv+44XehGW/RgX6PtmskE29KunZityJ 9HyfHPCPxo4XvJZCORm/Ab2Ddze+T+swR0Y4YQ/D5obffqqIHp8a+y9GRcHu1GGzL1 MZ1Q05FwKqjAVwIv7EYY5yeuBRPFHhzpYb7ruG6EsHa4OGd0/WEz3yFKNlVHx5KWpz pej5SQsSaIzpw== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v7 6/7] dm-verity: reduce scope of real and wanted digests Date: Tue, 1 Oct 2024 08:37:17 -0700 Message-ID: <20241001153718.111665-7-ebiggers@kernel.org> X-Mailer: git-send-email 2.46.2 In-Reply-To: <20241001153718.111665-1-ebiggers@kernel.org> References: <20241001153718.111665-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers In preparation for supporting multibuffer hashing where dm-verity will need to keep track of the real and wanted digests for multiple data blocks simultaneously, stop using the want_digest and real_digest fields of struct dm_verity_io from so many different places. Specifically: - Make various functions take want_digest as a parameter rather than having it be implicitly passed via the struct dm_verity_io. - Add a new tmp_digest field, and use this instead of real_digest when computing a hash solely for the purpose of immediately checking it. The result is that real_digest and want_digest are used only by verity_verify_io(). 
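To make the scope change concrete, here is a minimal standalone sketch (plain C with a toy FNV-1a stand-in, not the real verity_hash()/SHA-256 path) of the pattern this patch moves toward: the expected digest is an explicit parameter of the check, and the computed digest lives in a scratch buffer that never outlives the comparison, instead of being routed through shared per-io fields. All names in the sketch are illustrative assumptions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DIGEST_SIZE 8

/* Toy digest (FNV-1a) standing in for the real hash transform. */
static void toy_hash(const void *data, size_t len, uint8_t digest[DIGEST_SIZE])
{
	const uint8_t *p = data;
	uint64_t h = 0xcbf29ce484222325ULL;

	for (size_t i = 0; i < len; i++)
		h = (h ^ p[i]) * 0x100000001b3ULL;
	memcpy(digest, &h, DIGEST_SIZE);
}

/*
 * The caller passes the expected digest explicitly; the computed digest is a
 * local scratch value (the "tmp_digest" idea), so nothing about this check
 * leaks into long-lived state.
 */
static bool block_matches(const void *data, size_t len,
			  const uint8_t want_digest[DIGEST_SIZE])
{
	uint8_t tmp_digest[DIGEST_SIZE];

	toy_hash(data, len, tmp_digest);
	return memcmp(tmp_digest, want_digest, DIGEST_SIZE) == 0;
}

int main(void)
{
	const char block[] = "example data block";
	uint8_t want[DIGEST_SIZE];

	toy_hash(block, sizeof(block), want);
	printf("match: %d\n", block_matches(block, sizeof(block), want));
	return 0;
}

With this shape, real_digest and want_digest only need to exist where the final data-block comparison happens, which is what allows the next patch to turn them into per-pending-block fields.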
Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- drivers/md/dm-verity-fec.c | 19 +++++++++--------- drivers/md/dm-verity-fec.h | 5 +++-- drivers/md/dm-verity-target.c | 36 ++++++++++++++++++----------------- drivers/md/dm-verity.h | 1 + 4 files changed, 32 insertions(+), 29 deletions(-) diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c index 62b1a44b8dd2e..79f3794e197e6 100644 --- a/drivers/md/dm-verity-fec.c +++ b/drivers/md/dm-verity-fec.c @@ -185,15 +185,14 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io, */ static int fec_is_erasure(struct dm_verity *v, struct dm_verity_io *io, u8 *want_digest, u8 *data) { if (unlikely(verity_hash(v, io, data, 1 << v->data_dev_block_bits, - verity_io_real_digest(v, io), true))) + io->tmp_digest, true))) return 0; - return memcmp(verity_io_real_digest(v, io), want_digest, - v->digest_size) != 0; + return memcmp(io->tmp_digest, want_digest, v->digest_size) != 0; } /* * Read data blocks that are part of the RS block and deinterleave as much as * fits into buffers. Check for erasure locations if @neras is non-NULL. @@ -360,11 +359,11 @@ static void fec_init_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio) * (indicated by @offset) in fio->output. If @use_erasures is non-zero, uses * hashes to locate erasures. */ static int fec_decode_rsb(struct dm_verity *v, struct dm_verity_io *io, struct dm_verity_fec_io *fio, u64 rsb, u64 offset, - bool use_erasures) + const u8 *want_digest, bool use_erasures) { int r, neras = 0; unsigned int pos; r = fec_alloc_bufs(v, fio); @@ -386,27 +385,27 @@ static int fec_decode_rsb(struct dm_verity *v, struct dm_verity_io *io, pos += fio->nbufs << DM_VERITY_FEC_BUF_RS_BITS; } /* Always re-validate the corrected block against the expected hash */ r = verity_hash(v, io, fio->output, 1 << v->data_dev_block_bits, - verity_io_real_digest(v, io), true); + io->tmp_digest, true); if (unlikely(r < 0)) return r; - if (memcmp(verity_io_real_digest(v, io), verity_io_want_digest(v, io), - v->digest_size)) { + if (memcmp(io->tmp_digest, want_digest, v->digest_size)) { DMERR_LIMIT("%s: FEC %llu: failed to correct (%d erasures)", v->data_dev->name, (unsigned long long)rsb, neras); return -EILSEQ; } return 0; } /* Correct errors in a block. Copies corrected block to dest. */ int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, - enum verity_block_type type, sector_t block, u8 *dest) + enum verity_block_type type, const u8 *want_digest, + sector_t block, u8 *dest) { int r; struct dm_verity_fec_io *fio = fec_io(io); u64 offset, res, rsb; @@ -445,13 +444,13 @@ int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, /* * Locating erasures is slow, so attempt to recover the block without * them first. Do a second attempt with erasures if the corruption is * bad enough. 
*/ - r = fec_decode_rsb(v, io, fio, rsb, offset, false); + r = fec_decode_rsb(v, io, fio, rsb, offset, want_digest, false); if (r < 0) { - r = fec_decode_rsb(v, io, fio, rsb, offset, true); + r = fec_decode_rsb(v, io, fio, rsb, offset, want_digest, true); if (r < 0) goto done; } memcpy(dest, fio->output, 1 << v->data_dev_block_bits); diff --git a/drivers/md/dm-verity-fec.h b/drivers/md/dm-verity-fec.h index 09123a6129538..a6689cdc489db 100644 --- a/drivers/md/dm-verity-fec.h +++ b/drivers/md/dm-verity-fec.h @@ -66,12 +66,12 @@ struct dm_verity_fec_io { #define DM_VERITY_OPTS_FEC 8 extern bool verity_fec_is_enabled(struct dm_verity *v); extern int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, - enum verity_block_type type, sector_t block, - u8 *dest); + enum verity_block_type type, const u8 *want_digest, + sector_t block, u8 *dest); extern unsigned int verity_fec_status_table(struct dm_verity *v, unsigned int sz, char *result, unsigned int maxlen); extern void verity_fec_finish_io(struct dm_verity_io *io); @@ -97,10 +97,11 @@ static inline bool verity_fec_is_enabled(struct dm_verity *v) } static inline int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io, enum verity_block_type type, + const u8 *want_digest, sector_t block, u8 *dest) { return -EOPNOTSUPP; } diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index 36e4ddfe2d158..e40c5c083a931 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -286,16 +286,16 @@ static int verity_handle_err(struct dm_verity *v, enum verity_block_type type, /* * Verify hash of a metadata block pertaining to the specified data block * ("block" argument) at a specified level ("level" argument). * - * On successful return, verity_io_want_digest(v, io) contains the hash value - * for a lower tree level or for the data block (if we're at the lowest level). + * On successful return, want_digest contains the hash value for a lower tree + * level or for the data block (if we're at the lowest level). * * If "skip_unverified" is true, unverified buffer is skipped and 1 is returned. * If "skip_unverified" is false, unverified buffer is hashed and verified - * against current value of verity_io_want_digest(v, io). + * against current value of want_digest. */ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io, sector_t block, int level, bool skip_unverified, u8 *want_digest) { @@ -334,26 +334,26 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io, r = 1; goto release_ret_r; } r = verity_hash(v, io, data, 1 << v->hash_dev_block_bits, - verity_io_real_digest(v, io), !io->in_bh); + io->tmp_digest, !io->in_bh); if (unlikely(r < 0)) goto release_ret_r; - if (likely(memcmp(verity_io_real_digest(v, io), want_digest, + if (likely(memcmp(io->tmp_digest, want_digest, v->digest_size) == 0)) aux->hash_verified = 1; else if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) { /* * Error handling code (FEC included) cannot be run in a * tasklet since it may sleep, so fallback to work-queue. 
*/ r = -EAGAIN; goto release_ret_r; } else if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_METADATA, - hash_block, data) == 0) + want_digest, hash_block, data) == 0) aux->hash_verified = 1; else if (verity_handle_err(v, DM_VERITY_BLOCK_TYPE_METADATA, hash_block)) { struct bio *bio = @@ -412,11 +412,12 @@ int verity_hash_for_block(struct dm_verity *v, struct dm_verity_io *io, return r; } static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io, - sector_t cur_block, u8 *dest) + const u8 *want_digest, sector_t cur_block, + u8 *dest) { struct page *page; void *buffer; int r; struct dm_io_request io_req; @@ -436,16 +437,15 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io, r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT); if (unlikely(r)) goto free_ret; r = verity_hash(v, io, buffer, 1 << v->data_dev_block_bits, - verity_io_real_digest(v, io), true); + io->tmp_digest, true); if (unlikely(r)) goto free_ret; - if (memcmp(verity_io_real_digest(v, io), - verity_io_want_digest(v, io), v->digest_size)) { + if (memcmp(io->tmp_digest, want_digest, v->digest_size)) { r = -EIO; goto free_ret; } memcpy(dest, buffer, 1 << v->data_dev_block_bits); @@ -456,28 +456,29 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io, return r; } static int verity_handle_data_hash_mismatch(struct dm_verity *v, struct dm_verity_io *io, - struct bio *bio, sector_t blkno, - u8 *data) + struct bio *bio, + const u8 *want_digest, + sector_t blkno, u8 *data) { if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) { /* * Error handling code (FEC included) cannot be run in the * BH workqueue, so fallback to a standard workqueue. */ return -EAGAIN; } - if (verity_recheck(v, io, blkno, data) == 0) { + if (verity_recheck(v, io, want_digest, blkno, data) == 0) { if (v->validated_blocks) set_bit(blkno, v->validated_blocks); return 0; } #if defined(CONFIG_DM_VERITY_FEC) - if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA, blkno, - data) == 0) + if (verity_fec_decode(v, io, DM_VERITY_BLOCK_TYPE_DATA, want_digest, + blkno, data) == 0) return 0; #endif if (bio->bi_status) return -EIO; /* Error correction failed; Just return error */ @@ -564,12 +565,13 @@ static int verity_verify_io(struct dm_verity_io *io) if (v->validated_blocks) set_bit(cur_block, v->validated_blocks); kunmap_local(data); continue; } - r = verity_handle_data_hash_mismatch(v, io, bio, cur_block, - data); + r = verity_handle_data_hash_mismatch(v, io, bio, + verity_io_want_digest(v, io), + cur_block, data); kunmap_local(data); if (unlikely(r)) return r; } diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h index 754e70bb5fe09..347d3fc4fff4d 100644 --- a/drivers/md/dm-verity.h +++ b/drivers/md/dm-verity.h @@ -93,10 +93,11 @@ struct dm_verity_io { bool in_bh; struct work_struct work; struct work_struct bh_work; + u8 tmp_digest[HASH_MAX_DIGESTSIZE]; u8 real_digest[HASH_MAX_DIGESTSIZE]; u8 want_digest[HASH_MAX_DIGESTSIZE]; /* * This struct is followed by a variable-sized hash request of size From patchwork Tue Oct 1 15:37:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Eric Biggers X-Patchwork-Id: 13818282 X-Patchwork-Delegate: herbert@gondor.apana.org.au Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with 
ESMTPS id 194D61CB50F; Tue, 1 Oct 2024 15:39:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727797163; cv=none; b=MNje/ND0GURBAOiFxOi5TxlCbLsrgfSOg+C+Fbws+VCkN8Jd9PwF/C410sQlmpEM/gJxhxNzBWTKAYYEvwRPwsuq5fUdwTvCQvL/uyCBAZDTu4PVcZfdScFBTZS0rhA4s8UqKgq9V1mr5cOC4T5U/5pc4pbOV7kGdauzJEYgiuA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1727797163; c=relaxed/simple; bh=/qo20Qgesl0vfgq6f0N2WrRqbQOTpbi16iNpD/hqvL0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=HtTJT/GuZyT4uqN4T81dV34P013sWh+15iPE3uoVWV2asUfL9mcp3ICLSp441bm6ZX7vanrSFSKao3bb9O8BOF/VXPtGrUQHn39ZYTkWFoP8pJA6/CaXdZFbyN0Ek2V084Ocuc4w2Y+GhCuIzTNrX/IiglZdgIezVQzLINhGj4c= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=VVBmlxsy; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="VVBmlxsy" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9A98CC4CECF; Tue, 1 Oct 2024 15:39:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1727797162; bh=/qo20Qgesl0vfgq6f0N2WrRqbQOTpbi16iNpD/hqvL0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VVBmlxsyzc6tRrL0pa3RpEQCiP97Auxi85v0OjhHy+o8BmDltvHoaDAHe9xS8eakX TW7uIl34C497hZIYfLlLlBZe9bHmlWLpwOjLJQEjfzrCUB7ZiScihbBCvcxXkMpHeK DSCoNHcCmOR2nWGlthpUT56M86wOysVVAcZTW3EMtOdLgAECWkTHRKr0G7UTEw88MA WYx5Q7gBP5be3RD1rb9LvQfwQScKYfDaCX6cy1dWuanQfhLQdRnZXwtvBv3ieScg8V AFp+HS++QB+G+VgPlMDFzvfRz/W2Sp+R+b1yW6kWh3ZIRdhm5kSnNwLVTUCEh6NfD8 M10xO4gaIlsog== From: Eric Biggers To: linux-crypto@vger.kernel.org, fsverity@lists.linux.dev, dm-devel@lists.linux.dev Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel , Sami Tolvanen , Herbert Xu , Alasdair Kergon , Mike Snitzer , Mikulas Patocka Subject: [PATCH v7 7/7] dm-verity: improve performance by using multibuffer hashing Date: Tue, 1 Oct 2024 08:37:18 -0700 Message-ID: <20241001153718.111665-8-ebiggers@kernel.org> X-Mailer: git-send-email 2.46.2 In-Reply-To: <20241001153718.111665-1-ebiggers@kernel.org> References: <20241001153718.111665-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-crypto@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Eric Biggers When supported by the hash algorithm, use crypto_shash_finup_mb() to interleave the hashing of pairs of data blocks. On some CPUs this nearly doubles hashing performance. The increase in overall throughput of cold-cache dm-verity reads that I'm seeing on arm64 and x86_64 is roughly 35% (though this metric is hard to measure as it jumps around a lot). For now this is only done on data blocks, not Merkle tree blocks. We could use finup_mb on Merkle tree blocks too, but that is less important as there aren't as many Merkle tree blocks as data blocks, and that would require some additional code restructuring. 
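Before the diff, here is a small standalone model (plain C, toy hash, illustrative names only) of the batching control flow this patch adds to verity_verify_io(): queue up to a fixed number of pending blocks, flush them through one multibuffer hash step when the queue fills, and do a final flush for any leftovers. The real code performs the per-message hashing in a single crypto_shash_finup_mb() call and keeps the queue in struct dm_verity_io.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_PENDING 2
#define BLOCK_SIZE  16
#define DIGEST_SIZE 4

struct pending {
	const uint8_t *data;
	uint8_t want[DIGEST_SIZE];
};

/* Toy stand-in for hashing one message. */
static void xor_hash(const uint8_t *data, uint8_t out[DIGEST_SIZE])
{
	memset(out, 0, DIGEST_SIZE);
	for (size_t i = 0; i < BLOCK_SIZE; i++)
		out[i % DIGEST_SIZE] ^= data[i];
}

/* "Flush": hash all queued blocks and check each one against its want digest. */
static bool flush_pending(struct pending *q, int *num_pending)
{
	uint8_t real[MAX_PENDING][DIGEST_SIZE];

	for (int i = 0; i < *num_pending; i++)	/* finup_mb would do these together */
		xor_hash(q[i].data, real[i]);
	for (int i = 0; i < *num_pending; i++)
		if (memcmp(real[i], q[i].want, DIGEST_SIZE) != 0)
			return false;
	*num_pending = 0;
	return true;
}

static bool verify_blocks(const uint8_t *blocks, int nblocks,
			  const uint8_t (*want)[DIGEST_SIZE])
{
	struct pending q[MAX_PENDING];
	int num_pending = 0;

	for (int b = 0; b < nblocks; b++) {
		q[num_pending].data = blocks + (size_t)b * BLOCK_SIZE;
		memcpy(q[num_pending].want, want[b], DIGEST_SIZE);
		if (++num_pending == MAX_PENDING && !flush_pending(q, &num_pending))
			return false;
	}
	return flush_pending(q, &num_pending);	/* final partial flush */
}

The queue size in the real patch is clamped to what the hash implementation supports, so with a non-multibuffer algorithm the queue is flushed after every block and the behavior matches the old one-block-at-a-time path.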
Reviewed-by: Sami Tolvanen Acked-by: Ard Biesheuvel Signed-off-by: Eric Biggers --- drivers/md/dm-verity-target.c | 166 ++++++++++++++++++++++++++-------- drivers/md/dm-verity.h | 33 ++++--- 2 files changed, 147 insertions(+), 52 deletions(-) diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c index e40c5c083a931..d2ba1f0d2f154 100644 --- a/drivers/md/dm-verity-target.c +++ b/drivers/md/dm-verity-target.c @@ -182,22 +182,28 @@ static int verity_ahash_final(struct dm_verity *v, struct ahash_request *req, r = crypto_wait_req(crypto_ahash_final(req), wait); out: return r; } +static int verity_ahash(struct dm_verity *v, struct dm_verity_io *io, + const u8 *data, size_t len, u8 *digest, bool may_sleep) +{ + struct ahash_request *req = verity_io_hash_req(v, io); + struct crypto_wait wait; + + return verity_ahash_init(v, req, &wait, may_sleep) ?: + verity_ahash_update(v, req, data, len, &wait) ?: + verity_ahash_final(v, req, digest, &wait); +} + int verity_hash(struct dm_verity *v, struct dm_verity_io *io, const u8 *data, size_t len, u8 *digest, bool may_sleep) { int r; if (static_branch_unlikely(&ahash_enabled) && !v->shash_tfm) { - struct ahash_request *req = verity_io_hash_req(v, io); - struct crypto_wait wait; - - r = verity_ahash_init(v, req, &wait, may_sleep) ?: - verity_ahash_update(v, req, data, len, &wait) ?: - verity_ahash_final(v, req, digest, &wait); + r = verity_ahash(v, io, data, len, digest, may_sleep); } else { struct shash_desc *desc = verity_io_hash_req(v, io); desc->tfm = v->shash_tfm; r = crypto_shash_import(desc, v->initial_hashstate) ?: @@ -206,10 +212,38 @@ int verity_hash(struct dm_verity *v, struct dm_verity_io *io, if (unlikely(r)) DMERR("Error hashing block: %d", r); return r; } +static int verity_hash_mb(struct dm_verity *v, struct dm_verity_io *io, + const u8 *data[], size_t len, u8 *digests[], + int num_blocks) +{ + int r = 0; + + if (static_branch_unlikely(&ahash_enabled) && !v->shash_tfm) { + int i; + + /* Note: in practice num_blocks is always 1 in this case. */ + for (i = 0; i < num_blocks; i++) { + r = verity_ahash(v, io, data[i], len, digests[i], + !io->in_bh); + if (r) + break; + } + } else { + struct shash_desc *desc = verity_io_hash_req(v, io); + + desc->tfm = v->shash_tfm; + r = crypto_shash_import(desc, v->initial_hashstate) ?: + crypto_shash_finup_mb(desc, data, len, digests, num_blocks); + } + if (unlikely(r)) + DMERR("Error hashing blocks: %d", r); + return r; +} + static void verity_hash_at_level(struct dm_verity *v, sector_t block, int level, sector_t *hash_block, unsigned int *offset) { sector_t position = verity_position_at_level(v, block, level); unsigned int idx; @@ -457,13 +491,16 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io, } static int verity_handle_data_hash_mismatch(struct dm_verity *v, struct dm_verity_io *io, struct bio *bio, - const u8 *want_digest, - sector_t blkno, u8 *data) + struct pending_block *block) { + const u8 *want_digest = block->want_digest; + sector_t blkno = block->blkno; + u8 *data = block->data; + if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) { /* * Error handling code (FEC included) cannot be run in the * BH workqueue, so fallback to a standard workqueue. 
*/ @@ -487,10 +524,57 @@ static int verity_handle_data_hash_mismatch(struct dm_verity *v, return -EIO; } return 0; } +static void verity_clear_pending_blocks(struct dm_verity_io *io) +{ + int i; + + for (i = io->num_pending - 1; i >= 0; i--) { + kunmap_local(io->pending_blocks[i].data); + io->pending_blocks[i].data = NULL; + } + io->num_pending = 0; +} + +static int verity_verify_pending_blocks(struct dm_verity *v, + struct dm_verity_io *io, + struct bio *bio) +{ + const u8 *data[DM_VERITY_MAX_PENDING_DATA_BLOCKS]; + u8 *real_digests[DM_VERITY_MAX_PENDING_DATA_BLOCKS]; + int i; + int r; + + for (i = 0; i < io->num_pending; i++) { + data[i] = io->pending_blocks[i].data; + real_digests[i] = io->pending_blocks[i].real_digest; + } + + r = verity_hash_mb(v, io, data, 1 << v->data_dev_block_bits, + real_digests, io->num_pending); + if (unlikely(r)) + return r; + + for (i = 0; i < io->num_pending; i++) { + struct pending_block *block = &io->pending_blocks[i]; + + if (likely(memcmp(block->real_digest, block->want_digest, + v->digest_size) == 0)) { + if (v->validated_blocks) + set_bit(block->blkno, v->validated_blocks); + } else { + r = verity_handle_data_hash_mismatch(v, io, bio, block); + if (unlikely(r)) + return r; + } + } + verity_clear_pending_blocks(io); + return 0; +} + /* * Verify one "dm_verity_io" structure. */ static int verity_verify_io(struct dm_verity_io *io) { @@ -498,10 +582,13 @@ static int verity_verify_io(struct dm_verity_io *io) const unsigned int block_size = 1 << v->data_dev_block_bits; struct bvec_iter iter_copy; struct bvec_iter *iter; struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size); unsigned int b; + int r; + + io->num_pending = 0; if (static_branch_unlikely(&use_bh_wq_enabled) && io->in_bh) { /* * Copy the iterator in case we need to restart * verification in a work-queue. @@ -511,36 +598,38 @@ static int verity_verify_io(struct dm_verity_io *io) } else iter = &io->iter; for (b = 0; b < io->n_blocks; b++, bio_advance_iter(bio, iter, block_size)) { - int r; - sector_t cur_block = io->block + b; + sector_t blkno = io->block + b; + struct pending_block *block; bool is_zero; struct bio_vec bv; void *data; if (v->validated_blocks && bio->bi_status == BLK_STS_OK && - likely(test_bit(cur_block, v->validated_blocks))) + likely(test_bit(blkno, v->validated_blocks))) continue; - r = verity_hash_for_block(v, io, cur_block, - verity_io_want_digest(v, io), + block = &io->pending_blocks[io->num_pending]; + + r = verity_hash_for_block(v, io, blkno, block->want_digest, &is_zero); if (unlikely(r < 0)) - return r; + goto error; bv = bio_iter_iovec(bio, *iter); if (unlikely(bv.bv_len < block_size)) { /* * Data block spans pages. This should not happen, * since dm-verity sets dma_alignment to the data block * size minus 1, and dm-verity also doesn't allow the * data block size to be greater than PAGE_SIZE. 
*/ DMERR_LIMIT("unaligned io (data block spans pages)"); - return -EIO; + r = -EIO; + goto error; } data = bvec_kmap_local(&bv); if (is_zero) { @@ -550,34 +639,30 @@ static int verity_verify_io(struct dm_verity_io *io) */ memset(data, 0, block_size); kunmap_local(data); continue; } - - r = verity_hash(v, io, data, block_size, - verity_io_real_digest(v, io), !io->in_bh); - if (unlikely(r < 0)) { - kunmap_local(data); - return r; + block->data = data; + block->blkno = blkno; + if (++io->num_pending == v->mb_max_msgs) { + r = verity_verify_pending_blocks(v, io, bio); + if (unlikely(r)) + goto error; } + } - if (likely(memcmp(verity_io_real_digest(v, io), - verity_io_want_digest(v, io), v->digest_size) == 0)) { - if (v->validated_blocks) - set_bit(cur_block, v->validated_blocks); - kunmap_local(data); - continue; - } - r = verity_handle_data_hash_mismatch(v, io, bio, - verity_io_want_digest(v, io), - cur_block, data); - kunmap_local(data); + if (io->num_pending) { + r = verity_verify_pending_blocks(v, io, bio); if (unlikely(r)) - return r; + goto error; } return 0; + +error: + verity_clear_pending_blocks(io); + return r; } /* * Skip verity work in response to I/O error when system is shutting down. */ @@ -1217,14 +1302,15 @@ static int verity_setup_hash_alg(struct dm_verity *v, const char *alg_name) /* * Allocate the hash transformation object that this dm-verity instance * will use. The vast majority of dm-verity users use CPU-based * hashing, so when possible use the shash API to minimize the crypto - * API overhead. If the ahash API resolves to a different driver - * (likely an off-CPU hardware offload), use ahash instead. Also use - * ahash if the obsolete dm-verity format with the appended salt is - * being used, so that quirk only needs to be handled in one place. + * API overhead, especially when multibuffer hashing is used. If the + * ahash API resolves to a different driver (likely an off-CPU hardware + * offload), use ahash instead. Also use ahash if the obsolete + * dm-verity format with the appended salt is being used, so that quirk + * only needs to be handled in one place. */ ahash = crypto_alloc_ahash(alg_name, 0, v->use_bh_wq ? CRYPTO_ALG_ASYNC : 0); if (IS_ERR(ahash)) { ti->error = "Cannot initialize hash function"; @@ -1248,17 +1334,21 @@ static int verity_setup_hash_alg(struct dm_verity *v, const char *alg_name) ahash = NULL; v->shash_tfm = shash; v->digest_size = crypto_shash_digestsize(shash); v->hash_reqsize = sizeof(struct shash_desc) + crypto_shash_descsize(shash); - DMINFO("%s using shash \"%s\"", alg_name, driver_name); + v->mb_max_msgs = min(crypto_shash_mb_max_msgs(shash), + DM_VERITY_MAX_PENDING_DATA_BLOCKS); + DMINFO("%s using shash \"%s\"%s", alg_name, driver_name, + v->mb_max_msgs > 1 ? 
" (multibuffer)" : ""); } else { v->ahash_tfm = ahash; static_branch_inc(&ahash_enabled); v->digest_size = crypto_ahash_digestsize(ahash); v->hash_reqsize = sizeof(struct ahash_request) + crypto_ahash_reqsize(ahash); + v->mb_max_msgs = 1; DMINFO("%s using ahash \"%s\"", alg_name, driver_name); } if ((1 << v->hash_dev_block_bits) < v->digest_size * 2) { ti->error = "Digest size too big"; return -EINVAL; diff --git a/drivers/md/dm-verity.h b/drivers/md/dm-verity.h index 347d3fc4fff4d..b43ed93c7c438 100644 --- a/drivers/md/dm-verity.h +++ b/drivers/md/dm-verity.h @@ -59,10 +59,11 @@ struct dm_verity { unsigned char hash_per_block_bits; /* log2(hashes in hash block) */ unsigned char levels; /* the number of tree levels */ unsigned char version; bool hash_failed:1; /* set if hash of any block failed */ bool use_bh_wq:1; /* try to verify in BH wq before normal work-queue */ + unsigned char mb_max_msgs; /* max multibuffer hashing interleaving factor */ unsigned int digest_size; /* digest size for the current hash algorithm */ unsigned int hash_reqsize; /* the size of temporary space for crypto */ enum verity_mode mode; /* mode for handling verification errors */ unsigned int corrupted_errs;/* Number of errors for corrupted blocks */ @@ -78,10 +79,19 @@ struct dm_verity { struct dm_io_client *io; mempool_t recheck_pool; }; +#define DM_VERITY_MAX_PENDING_DATA_BLOCKS HASH_MAX_MB_MSGS + +struct pending_block { + void *data; + sector_t blkno; + u8 want_digest[HASH_MAX_DIGESTSIZE]; + u8 real_digest[HASH_MAX_DIGESTSIZE]; +}; + struct dm_verity_io { struct dm_verity *v; /* original value of bio->bi_end_io */ bio_end_io_t *orig_bi_end_io; @@ -94,12 +104,19 @@ struct dm_verity_io { struct work_struct work; struct work_struct bh_work; u8 tmp_digest[HASH_MAX_DIGESTSIZE]; - u8 real_digest[HASH_MAX_DIGESTSIZE]; - u8 want_digest[HASH_MAX_DIGESTSIZE]; + + /* + * This is the queue of data blocks that are pending verification. We + * allow multiple blocks to be queued up in order to support multibuffer + * hashing, i.e. interleaving the hashing of multiple messages. On many + * CPUs this improves performance significantly. + */ + int num_pending; + struct pending_block pending_blocks[DM_VERITY_MAX_PENDING_DATA_BLOCKS]; /* * This struct is followed by a variable-sized hash request of size * v->hash_reqsize, either a struct ahash_request or a struct shash_desc * (depending on whether ahash_tfm or shash_tfm is being used). To @@ -111,22 +128,10 @@ static inline void *verity_io_hash_req(struct dm_verity *v, struct dm_verity_io *io) { return io + 1; } -static inline u8 *verity_io_real_digest(struct dm_verity *v, - struct dm_verity_io *io) -{ - return io->real_digest; -} - -static inline u8 *verity_io_want_digest(struct dm_verity *v, - struct dm_verity_io *io) -{ - return io->want_digest; -} - extern int verity_hash(struct dm_verity *v, struct dm_verity_io *io, const u8 *data, size_t len, u8 *digest, bool may_sleep); extern int verity_hash_for_block(struct dm_verity *v, struct dm_verity_io *io, sector_t block, u8 *digest, bool *is_zero);