From 2f1f34c1bf7b309b296bc04321a09e6b5dba0673 Mon Sep 17 00:00:00 2001
From: Eric Biggers
Date: Sun, 22 Oct 2023 01:11:00 -0700
Subject: crypto: ahash - optimize performance when wrapping shash

The "ahash" API provides access to both CPU-based and hardware
offload-based implementations of hash algorithms.  Typically the former
are implemented as "shash" algorithms under the hood, while the latter
are implemented as "ahash" algorithms.  The "ahash" API provides access
to both.  Various kernel subsystems use the ahash API because they want
to support hashing hardware offload without using a separate API for it.

Yet, the common case is that a crypto accelerator is not actually being
used, and ahash is just wrapping a CPU-based shash algorithm.

This patch optimizes the ahash API for that common case by eliminating
the extra indirect call for each ahash operation on top of shash.

It also fixes the double-counting of crypto stats in this scenario
(though CONFIG_CRYPTO_STATS should *not* be enabled by anyone interested
in performance anyway...), and it eliminates redundant checking of
CRYPTO_TFM_NEED_KEY.

As a bonus, it also shrinks struct crypto_ahash.

Signed-off-by: Eric Biggers
Signed-off-by: Herbert Xu
---
 crypto/shash.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

(limited to 'crypto/shash.c')

diff --git a/crypto/shash.c b/crypto/shash.c
index 28092ed8415a..d5194221c88c 100644
--- a/crypto/shash.c
+++ b/crypto/shash.c
@@ -23,7 +23,13 @@ static inline struct crypto_istat_hash *shash_get_stat(struct shash_alg *alg)
 
 static inline int crypto_shash_errstat(struct shash_alg *alg, int err)
 {
-	return crypto_hash_errstat(&alg->halg, err);
+	if (!IS_ENABLED(CONFIG_CRYPTO_STATS))
+		return err;
+
+	if (err && err != -EINPROGRESS && err != -EBUSY)
+		atomic64_inc(&shash_get_stat(alg)->err_cnt);
+
+	return err;
 }
 
 int shash_no_setkey(struct crypto_shash *tfm, const u8 *key,
--
cgit v1.2.3
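
For readers less familiar with this API, below is a minimal sketch (not part
of the commit) of the kind of ahash caller the message above describes; the
helper name example_sha256 and its error handling are illustrative only.
Such a caller requests "sha256" through the ahash API so that hashing
hardware offload is supported transparently, yet in the common case it binds
to a wrapped CPU-based shash implementation, which is the path this patch
optimizes.

/*
 * Illustrative sketch only: a typical ahash user that would benefit from
 * this optimization.  The helper name and structure are hypothetical.
 */
#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

static int example_sha256(const void *data, unsigned int len, u8 *out)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	/* May bind to an offload driver or, more commonly, a wrapped shash. */
	tfm = crypto_alloc_ahash("sha256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	/* 'data' must be in linearly mapped memory (not on the stack). */
	sg_init_one(&sg, data, len);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &wait);
	ahash_request_set_crypt(req, &sg, out, len);

	/* Waits out -EINPROGRESS/-EBUSY if an async driver is in use. */
	err = crypto_wait_req(crypto_ahash_digest(req), &wait);

	ahash_request_free(req);
out_free_tfm:
	crypto_free_ahash(tfm);
	return err;
}

When a hardware driver backs the tfm, crypto_ahash_digest() may return
-EINPROGRESS or -EBUSY and crypto_wait_req() blocks until completion; when a
shash is being wrapped, the digest completes synchronously, which is the
common case whose per-operation overhead this patch reduces.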