path: root/drivers/crypto/vmx/aes_ctr.c
author: Eric Biggers <ebiggers@google.com> 2019-04-09 23:46:32 -0700
committer: Herbert Xu <herbert@gondor.apana.org.au> 2019-04-18 22:14:58 +0800
commit 4a8108b70508df0b6c4ffa4a3974dab93dcbe851 (patch)
tree 541d1c7daad8179310c43e7faf0b59cf08446379 /drivers/crypto/vmx/aes_ctr.c
parent 767f015ea0b7ab9d60432ff6cd06b664fd71f50f (diff)
crypto: arm64/aes-neonbs - don't access already-freed walk.iv
If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free.

xts-aes-neonbs doesn't set an alignmask, so currently it isn't affected by this despite unconditionally accessing walk.iv. However this is more subtle than desired, and unconditionally accessing walk.iv has caused a real problem in other algorithms. Thus, update xts-aes-neonbs to start checking the return value of skcipher_walk_virt().

Fixes: 1abee99eafab ("crypto: arm64/aes - reimplement bit-sliced ARM/NEON implementation for arm64")
Cc: <stable@vger.kernel.org> # v4.11+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Diffstat (limited to 'drivers/crypto/vmx/aes_ctr.c')
0 files changed, 0 insertions, 0 deletions