author     Eric Biggers <ebiggers@google.com>        2019-05-30 10:50:39 -0700
committer  Herbert Xu <herbert@gondor.apana.org.au>  2019-06-06 14:38:57 +0800
commit     5c6bc4dfa515738149998bb0db2481a4fdead979 (patch)
tree       4e5e9129c4eb8fcc6dc9329ccaaa9dd6a7f6dd58
parent     67882e76492483bafa9b1b1648bb031e9abe5185 (diff)
crypto: ghash - fix unaligned memory access in ghash_setkey()
Changing ghash_mod_init() to be subsys_initcall made it start running
before the alignment fault handler has been installed on ARM. In kernel
builds where the keys in the ghash test vectors happened to be
misaligned in the kernel image, this exposed the longstanding bug that
ghash_setkey() is incorrectly casting the key buffer (which can have any
alignment) to be128 for passing to gf128mul_init_4k_lle().
Fix this by memcpy()ing the key to a temporary buffer.
Don't fix it by setting an alignmask on the algorithm instead, because
that would unnecessarily force alignment of the data too.
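A minimal stand-alone sketch of the two patterns, with hypothetical names (be128_demo, load_hi_buggy, load_hi_fixed; not part of the commit), illustrating why the cast can fault on a strict-alignment CPU while the memcpy() approach used in the patch cannot:

/* Hypothetical user-space demo, not kernel code: "key" may sit at any
 * byte offset, so reinterpreting it as a 16-byte structure is undefined
 * behaviour and can trap on strict-alignment CPUs such as ARM before the
 * alignment fault handler is installed. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct be128_demo {			/* stand-in for the kernel's be128 */
	uint64_t a, b;
};

/* Buggy pattern: cast an arbitrarily aligned buffer and dereference it. */
static uint64_t load_hi_buggy(const uint8_t *key)
{
	return ((const struct be128_demo *)key)->a;	/* may be misaligned */
}

/* Fixed pattern: copy into an aligned local first, as the patch does. */
static uint64_t load_hi_fixed(const uint8_t *key)
{
	struct be128_demo k;

	memcpy(&k, key, sizeof(k));	/* avoid violating alignment rules */
	return k.a;
}

int main(void)
{
	uint8_t buf[sizeof(struct be128_demo) + 1] = { 0 };
	const uint8_t *misaligned_key = buf + 1;	/* force misalignment */

	/* Safe on any architecture. */
	printf("%llu\n", (unsigned long long)load_hi_fixed(misaligned_key));

	/* load_hi_buggy(misaligned_key) may fault, or take a slow fixup,
	 * on strict-alignment hardware. */
	return 0;
}

As the message above notes, setting cra_alignmask instead would make the crypto API align the data passed to update() as well, not just the 16-byte key handled once in setkey().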
Fixes: 2cdc6899a88e ("crypto: ghash - Add GHASH digest algorithm for GCM")
Reported-by: Peter Robinson <pbrobinson@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Tested-by: Peter Robinson <pbrobinson@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-rw-r--r--  crypto/ghash-generic.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/crypto/ghash-generic.c b/crypto/ghash-generic.c
index e6307935413c..c8a347798eae 100644
--- a/crypto/ghash-generic.c
+++ b/crypto/ghash-generic.c
@@ -34,6 +34,7 @@ static int ghash_setkey(struct crypto_shash *tfm,
 			const u8 *key, unsigned int keylen)
 {
 	struct ghash_ctx *ctx = crypto_shash_ctx(tfm);
+	be128 k;
 
 	if (keylen != GHASH_BLOCK_SIZE) {
 		crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
@@ -42,7 +43,12 @@ static int ghash_setkey(struct crypto_shash *tfm,
 	}
 
 	if (ctx->gf128)
 		gf128mul_free_4k(ctx->gf128);
-	ctx->gf128 = gf128mul_init_4k_lle((be128 *)key);
+
+	BUILD_BUG_ON(sizeof(k) != GHASH_BLOCK_SIZE);
+	memcpy(&k, key, GHASH_BLOCK_SIZE); /* avoid violating alignment rules */
+	ctx->gf128 = gf128mul_init_4k_lle(&k);
+	memzero_explicit(&k, GHASH_BLOCK_SIZE);
+
 	if (!ctx->gf128)
 		return -ENOMEM;
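For context, a hedged sketch of how an arbitrarily aligned key can reach ghash_setkey() through the shash API; the test module below is hypothetical, but crypto_alloc_shash(), crypto_shash_setkey() and GHASH_BLOCK_SIZE are the kernel interfaces involved:

/* Hypothetical demonstration module, not part of the commit. */
#include <crypto/ghash.h>	/* GHASH_BLOCK_SIZE */
#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/slab.h>

static int __init ghash_misaligned_key_init(void)
{
	struct crypto_shash *tfm;
	u8 *buf;
	int err;

	tfm = crypto_alloc_shash("ghash", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	buf = kzalloc(GHASH_BLOCK_SIZE + 1, GFP_KERNEL);
	if (!buf) {
		crypto_free_shash(tfm);
		return -ENOMEM;
	}

	/* buf + 1 is deliberately misaligned; with the fix above,
	 * ghash_setkey() handles it without an unaligned be128 access. */
	err = crypto_shash_setkey(tfm, buf + 1, GHASH_BLOCK_SIZE);

	kfree(buf);
	crypto_free_shash(tfm);
	return err;
}

static void __exit ghash_misaligned_key_exit(void)
{
}

module_init(ghash_misaligned_key_init);
module_exit(ghash_misaligned_key_exit);
MODULE_LICENSE("GPL");

Nothing in the shash API guarantees key alignment, which is why the fix belongs in ghash_setkey() itself rather than in every caller.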