path: root/crypto
Commit message | Author | Age | Files | Lines
...
* crypto: salsa20 - remove Salsa20 stream cipher algorithm  [Ard Biesheuvel, 2021-01-29, 6 files, -1403/+1]

    Salsa20 is not used anywhere in the kernel, is not suitable for disk
    encryption, and is widely considered to have been superseded by
    ChaCha20. So let's remove it.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Acked-by: Mike Snitzer <snitzer@redhat.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: tgr192 - remove Tiger 128/160/192 hash algorithms  [Ard Biesheuvel, 2021-01-29, 6 files, -876/+0]

    Tiger is never referenced anywhere in the kernel, and unlikely to be
    depended upon by userspace via AF_ALG. So let's remove it.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: rmd320 - remove RIPE-MD 320 hash algorithm  [Ard Biesheuvel, 2021-01-29, 6 files, -494/+1]

    RIPE-MD 320 is never referenced anywhere in the kernel, and unlikely
    to be depended upon by userspace via AF_ALG. So let's remove it.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: rmd256 - remove RIPE-MD 256 hash algorithm  [Ard Biesheuvel, 2021-01-29, 7 files, -441/+1]

    RIPE-MD 256 is never referenced anywhere in the kernel, and unlikely
    to be depended upon by userspace via AF_ALG. So let's remove it.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: rmd128 - remove RIPE-MD 128 hash algorithm  [Ard Biesheuvel, 2021-01-29, 7 files, -506/+1]

    RIPE-MD 128 is never referenced anywhere in the kernel, and unlikely
    to be depended upon by userspace via AF_ALG. So let's remove it.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86 - remove glue helper module  [Ard Biesheuvel, 2021-01-14, 2 files, -11/+0]

    All dependencies on the x86 glue helper module have been replaced by
    local instantiations of the new ECB/CBC preprocessor helper macros,
    so the glue helper module can be retired.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
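For context, the macro approach matters because an indirect call goes through a retpoline on x86 once the Spectre v2 mitigations are enabled. The following is a minimal, self-contained sketch of the idea only; the names are hypothetical and are not the kernel's actual ecb_cbc_helpers.h macros:

    /* Before: glue-helper style dispatch through a function pointer,
     * paying retpoline overhead on every call. */
    typedef void (*cipher_fn)(const void *ctx, unsigned char *dst,
                              const unsigned char *src);

    static void ecb_walk_indirect(cipher_fn fn, const void *ctx,
                                  unsigned char *buf,
                                  unsigned int nblocks, unsigned int bsize)
    {
        while (nblocks--) {
            fn(ctx, buf, buf);              /* indirect call */
            buf += bsize;
        }
    }

    /* After: a preprocessor helper expands into a direct call to the
     * named routine, so the compiler emits plain call instructions. */
    #define ECB_WALK_DIRECT(fn, ctx, buf, nblocks, bsize)        \
        do {                                                     \
            unsigned int __n = (nblocks);                        \
            unsigned char *__p = (buf);                          \
            while (__n--) {                                      \
                fn((ctx), __p, __p);         /* direct call */   \
                __p += (bsize);                                  \
            }                                                    \
        } while (0)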
* crypto: x86/twofish - drop dependency on glue helper  [Ard Biesheuvel, 2021-01-14, 1 file, -2/+0]

    Replace the glue helper dependency with implementations of ECB and
    CBC based on the new CPP macros, which avoid the need for indirect
    calls.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/cast6 - drop dependency on glue helper  [Ard Biesheuvel, 2021-01-14, 1 file, -1/+0]

    Replace the glue helper dependency with implementations of ECB and
    CBC based on the new CPP macros, which avoid the need for indirect
    calls.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/serpent - drop dependency on glue helper  [Ard Biesheuvel, 2021-01-14, 1 file, -3/+0]

    Replace the glue helper dependency with implementations of ECB and
    CBC based on the new CPP macros, which avoid the need for indirect
    calls.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/camellia - drop dependency on glue helper  [Ard Biesheuvel, 2021-01-14, 1 file, -2/+0]

    Replace the glue helper dependency with implementations of ECB and
    CBC based on the new CPP macros, which avoid the need for indirect
    calls.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/blowfish - drop CTR mode implementation  [Ard Biesheuvel, 2021-01-14, 1 file, -0/+1]

    Blowfish in counter mode is never used in the kernel, so there is no
    point in keeping an accelerated implementation around.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/des - drop CTR mode implementation  [Ard Biesheuvel, 2021-01-14, 1 file, -0/+1]

    DES or Triple DES in counter mode is never used in the kernel, so
    there is no point in keeping an accelerated implementation around.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/twofish - drop CTR mode implementation  [Ard Biesheuvel, 2021-01-14, 1 file, -0/+2]

    Twofish in CTR mode is never used by the kernel directly, and is
    highly unlikely to be relied upon by dm-crypt or algif_skcipher. So
    let's drop the accelerated CTR mode implementation, and instead, rely
    on the CTR template and the bare cipher.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
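As a hedged illustration of what "rely on the CTR template and the bare cipher" means in practice: callers keep requesting the composed name, and the API instantiates the generic ctr() template around the bare cipher when no monolithic driver registers it. The function below is a minimal sketch with error handling elided:

    #include <crypto/skcipher.h>
    #include <linux/err.h>

    /* Sketch: "ctr(twofish)" now resolves to ctr() wrapping the
     * (possibly still accelerated) bare "twofish" cipher. */
    static int ctr_twofish_demo(void)
    {
        struct crypto_skcipher *tfm;

        tfm = crypto_alloc_skcipher("ctr(twofish)", 0, 0);
        if (IS_ERR(tfm))
            return PTR_ERR(tfm);

        /* ... setkey, build an skcipher_request, encrypt/decrypt ... */

        crypto_free_skcipher(tfm);
        return 0;
    }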
* crypto: x86/cast6 - drop CTR mode implementation  [Ard Biesheuvel, 2021-01-14, 1 file, -0/+1]

    CAST6 in CTR mode is never used by the kernel directly, and is highly
    unlikely to be relied upon by dm-crypt or algif_skcipher. So let's
    drop the accelerated CTR mode implementation, and instead, rely on
    the CTR template and the bare cipher.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/cast5 - drop CTR mode implementation  [Ard Biesheuvel, 2021-01-14, 1 file, -0/+1]

    CAST5 in CTR mode is never used by the kernel directly, and is highly
    unlikely to be relied upon by dm-crypt or algif_skcipher. So let's
    drop the accelerated CTR mode implementation, and instead, rely on
    the CTR template and the bare cipher.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/serpent - drop CTR mode implementation  [Ard Biesheuvel, 2021-01-14, 1 file, -0/+3]

    Serpent in CTR mode is never used by the kernel directly, and is
    highly unlikely to be relied upon by dm-crypt or algif_skcipher. So
    let's drop the accelerated CTR mode implementation, and instead, rely
    on the CTR template and the bare cipher.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/camellia - drop CTR mode implementation  [Ard Biesheuvel, 2021-01-14, 1 file, -0/+1]

    Camellia in CTR mode is never used by the kernel directly, and is
    highly unlikely to be relied upon by dm-crypt or algif_skcipher. So
    let's drop the accelerated CTR mode implementation, and instead, rely
    on the CTR template and the bare cipher.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/twofish - switch to XTS template  [Ard Biesheuvel, 2021-01-14, 1 file, -0/+1]

    Now that the XTS template can wrap accelerated ECB modes, it can be
    used to implement Twofish in XTS mode as well, which turns out to be
    at least as fast, and sometimes even faster.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
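A sketch of the same pattern for XTS; note that XTS keys are double length (a data half plus a tweak half), and the key size and function name below are illustrative assumptions:

    #include <crypto/skcipher.h>
    #include <linux/err.h>

    /* Sketch: "xts(twofish)" is now instantiated by the xts() template
     * wrapping an ECB implementation, not by a monolithic XTS driver. */
    static int xts_twofish_demo(void)
    {
        static const u8 key[64];    /* two 256-bit halves: data + tweak */
        struct crypto_skcipher *tfm;
        int err;

        tfm = crypto_alloc_skcipher("xts(twofish)", 0, 0);
        if (IS_ERR(tfm))
            return PTR_ERR(tfm);

        err = crypto_skcipher_setkey(tfm, key, sizeof(key));
        crypto_free_skcipher(tfm);
        return err;
    }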
* crypto: x86/serpent - switch to XTS template  [Ard Biesheuvel, 2021-01-14, 1 file, -1/+1]

    Now that the XTS template can wrap accelerated ECB modes, it can be
    used to implement Serpent in XTS mode as well, which turns out to be
    at least as fast, and sometimes even faster.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/cast6 - switch to XTS template  [Ard Biesheuvel, 2021-01-14, 1 file, -1/+1]

    Now that the XTS template can wrap accelerated ECB modes, it can be
    used to implement CAST6 in XTS mode as well, which turns out to be at
    least as fast, and sometimes even faster.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/camellia - switch to XTS template  [Ard Biesheuvel, 2021-01-14, 1 file, -1/+1]

    Now that the XTS template can wrap accelerated ECB modes, it can be
    used to implement Camellia in XTS mode as well, which turns out to be
    at least as fast, and sometimes even faster.

    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/aes-ni-xts - rewrite and drop indirections via glue helper  [Ard Biesheuvel, 2021-01-08, 1 file, -1/+0]

    The AES-NI driver implements XTS via the glue helper, which consumes
    a struct with sets of function pointers which are invoked on chunks
    of input data of the appropriate size, as annotated in the struct.

    Let's get rid of this indirection, so that we can perform direct
    calls to the assembler helpers. Instead, let's adopt the arm64
    strategy, i.e., provide a helper which can consume inputs of any
    size, provided that the penultimate, full block is passed via the
    last call if ciphertext stealing needs to be applied.

    This also allows us to enable the XTS mode for i386.

    Tested-by: Eric Biggers <ebiggers@google.com> # x86_64
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reported-by: kernel test robot <lkp@intel.com>
    Reported-by: kernel test robot <lkp@intel.com>
    Reported-by: kernel test robot <lkp@intel.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: blake2b - update file comment  [Eric Biggers, 2021-01-03, 1 file, -13/+10]

    The file comment for blake2b_generic.c makes it sound like it's the
    reference implementation of BLAKE2b with only minor changes. But it's
    actually been changed a lot. Update the comment to make this clearer.

    Reviewed-by: David Sterba <dsterba@suse.com>
    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: blake2b - sync with blake2s implementation  [Eric Biggers, 2021-01-03, 1 file, -178/+48]

    Sync the BLAKE2b code with the BLAKE2s code as much as possible:

    - Move a lot of code into new headers <crypto/blake2b.h> and
      <crypto/internal/blake2b.h>, and adjust it to be like the
      corresponding BLAKE2s code, i.e. like <crypto/blake2s.h> and
      <crypto/internal/blake2s.h>.

    - Rename constants, e.g. BLAKE2B_*_DIGEST_SIZE => BLAKE2B_*_HASH_SIZE.

    - Use a macro BLAKE2B_ALG() to define the shash_alg structs.

    - Export blake2b_compress_generic() for use as a fallback.

    This makes it much easier to add optimized implementations of
    BLAKE2b, as optimized implementations can use the helper functions
    crypto_blake2b_{setkey,init,update,final}() and
    blake2b_compress_generic(). The ARM implementation will use these.

    But this change is also helpful because it eliminates unnecessary
    differences between the BLAKE2b and BLAKE2s code, so that the same
    improvements can easily be made to both. (The two algorithms are
    basically identical, except for the word size and constants.) It also
    makes it straightforward to add a library API for BLAKE2b in the
    future if/when it's needed.

    This change does make the BLAKE2b code slightly more complicated than
    it needs to be, as it doesn't actually provide a library API yet. For
    example, __blake2b_update() doesn't really need to exist yet; it
    could just be inlined into crypto_blake2b_update(). But I believe
    this is outweighed by the benefits of keeping the code in sync.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: blake2s - share the "shash" API boilerplate code  [Eric Biggers, 2021-01-03, 1 file, -67/+9]

    Add helper functions for shash implementations of BLAKE2s to
    include/crypto/internal/blake2s.h, taking advantage of
    __blake2s_update() and __blake2s_final() that were added by the
    previous patch to share more code between the library and shash
    implementations.

    crypto_blake2s_setkey() and crypto_blake2s_init() are usable as
    shash_alg::setkey and shash_alg::init directly, while
    crypto_blake2s_update() and crypto_blake2s_final() take an extra
    'blake2s_compress_t' function pointer parameter. This allows the
    implementation of the compression function to be overridden, which is
    the only part that optimized implementations really care about.

    The new functions are inline functions (similar to those in
    sha1_base.h, sha256_base.h, and sm3_base.h) because this avoids
    needing to add a new module blake2s_helpers.ko, they aren't *too*
    long, and this avoids indirect calls which are expensive these days.
    Note that they can't go in blake2s_generic.ko, as that would require
    selecting CRYPTO_BLAKE2S from CRYPTO_BLAKE2S_X86, which would cause a
    recursive dependency.

    Finally, use these new helper functions in the x86 implementation of
    BLAKE2s. (This part should be a separate patch, but unfortunately the
    x86 implementation used the exact same function names like
    "crypto_blake2s_update()", so it had to be updated at the same time.)

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: blake2s - remove unneeded includes  [Eric Biggers, 2021-01-03, 1 file, -2/+0]

    It doesn't make sense for the generic implementation of BLAKE2s to
    include <crypto/internal/simd.h> and <linux/jump_label.h>, as these
    are things that would only be useful in an architecture-specific
    implementation. Remove these unnecessary includes.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: blake2s - define shash_alg structs using macros  [Eric Biggers, 2021-01-03, 1 file, -61/+27]

    The shash_alg structs for the four variants of BLAKE2s are identical
    except for the algorithm name, driver name, and digest size. So,
    avoid code duplication by using a macro to define these structs.

    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
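The macro pattern looks roughly like the sketch below; the field values, priority, and the omitted setkey/init/update/final hookup are assumptions here, not the verbatim upstream definition:

    #include <crypto/internal/hash.h>
    #include <crypto/blake2s.h>

    /* Sketch: stamp out near-identical shash_alg structs, varying only
     * the names and digest size per BLAKE2s variant. */
    #define BLAKE2S_ALG_SKETCH(name, driver_name, digest_size)           \
        {                                                                \
            .base.cra_name         = name,                               \
            .base.cra_driver_name  = driver_name,                        \
            .base.cra_priority     = 100,                                \
            .base.cra_flags        = CRYPTO_ALG_OPTIONAL_KEY,            \
            .base.cra_blocksize    = BLAKE2S_BLOCK_SIZE,                 \
            .base.cra_module       = THIS_MODULE,                        \
            .digestsize            = digest_size,                        \
            .descsize              = sizeof(struct blake2s_state),       \
            /* .setkey/.init/.update/.final: shared helpers go here */   \
        }

    static struct shash_alg blake2s_algs_sketch[] = {
        BLAKE2S_ALG_SKETCH("blake2s-128", "blake2s-128-generic",
                           BLAKE2S_128_HASH_SIZE),
        BLAKE2S_ALG_SKETCH("blake2s-256", "blake2s-256-generic",
                           BLAKE2S_256_HASH_SIZE),
    };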
* crypto: remove cipher routines from public crypto API  [Ard Biesheuvel, 2021-01-03, 19 files, -3/+39]

    The cipher routines in the crypto API are mostly intended for
    templates implementing skcipher modes generically in software, and
    shouldn't be used outside of the crypto subsystem. So move the
    prototypes and all related definitions to a new header file under
    include/crypto/internal. Also, let's use the new module namespace
    feature to move the symbol exports into a new namespace
    CRYPTO_INTERNAL.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Acked-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
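Module namespaces are enforced at modpost time. A short sketch of both halves of the mechanism; the two exported symbols shown are examples of the moved cipher routines:

    #include <linux/export.h>
    #include <linux/module.h>

    /* Core side: export into the CRYPTO_INTERNAL namespace. */
    EXPORT_SYMBOL_NS_GPL(crypto_cipher_setkey, CRYPTO_INTERNAL);
    EXPORT_SYMBOL_NS_GPL(crypto_cipher_encrypt_one, CRYPTO_INTERNAL);

    /* User side (e.g. an skcipher mode template): without this import,
     * modpost rejects the module at build time. */
    MODULE_IMPORT_NS(CRYPTO_INTERNAL);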
* crypto: tcrypt - avoid signed overflow in byte count  [Ard Biesheuvel, 2021-01-03, 1 file, -10/+10]

    The signed long type used for printing the number of bytes processed
    in tcrypt benchmarks limits the range to +/- 2 GiB, which is not
    sufficient to cover the performance of common accelerated ciphers
    such as AES-NI when benchmarked with sec=1. So switch to u64 instead.

    While at it, fix up a missing printk->pr_cont conversion in the AEAD
    benchmark.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
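To make the failure mode concrete: on 32-bit builds 'long' is 32 bits, so a signed byte counter wraps just past 2 GiB, a rate AES-NI can exceed within a one-second run. A small hypothetical sketch:

    #include <linux/kernel.h>

    /* Sketch: a u64 byte counter cannot realistically wrap, while a
     * 32-bit signed long overflows (undefined behaviour) at 2 GiB. */
    static void tally_demo(void)
    {
        u64 bytes = 0;
        unsigned int i;

        for (i = 0; i < 1024; i++)
            bytes += 4 << 20;           /* 4 MiB per iteration */

        /* 4 GiB total: would have wrapped a 32-bit signed long. */
        pr_cont("%llu bytes processed\n", bytes);
    }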
* keys: Update comment for restrict_link_by_key_or_keyring_chain  [Andrew Zaborowski, 2021-02-16, 1 file, -3/+4]

    Add to the inline docs comment the bit of information that makes
    restrict_link_by_key_or_keyring_chain different from
    restrict_link_by_key_or_keyring.

    Signed-off-by: Andrew Zaborowski <andrew.zaborowski@intel.com>
    Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
    Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>
* X.509: Fix crash caused by NULL pointer  [Tianjia Zhang, 2021-01-20, 1 file, -1/+2]

    On the following call path, `sig->pkey_algo` is not assigned in
    asymmetric_key_verify_signature(), which causes a runtime crash in
    public_key_verify_signature():

      keyctl_pkey_verify
        asymmetric_key_verify_signature
          verify_signature
            public_key_verify_signature

    This patch simply checks for this situation and fixes the crash
    caused by the NULL pointer.

    Fixes: 215525639631 ("X.509: support OSCCA SM2-with-SM3 certificate verification")
    Reported-by: Tobias Markus <tobias@markus-regensburg.de>
    Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
    Signed-off-by: David Howells <dhowells@redhat.com>
    Reviewed-and-tested-by: Toke Høiland-Jørgensen <toke@redhat.com>
    Tested-by: João Fonseca <jpedrofonseca@ua.pt>
    Acked-by: Jarkko Sakkinen <jarkko@kernel.org>
    Cc: stable@vger.kernel.org # v5.10+
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
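A hedged sketch of the defensive check; the function shape and error code are assumptions rather than the verbatim upstream hunk:

    #include <crypto/public_key.h>
    #include <linux/errno.h>

    /* Sketch: refuse to proceed when pkey_algo was never assigned,
     * instead of dereferencing a NULL pointer deeper in the call path. */
    static int check_sig_has_algo(const struct public_key_signature *sig)
    {
        if (!sig || !sig->pkey_algo)
            return -ENOPKG;
        return 0;
    }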
* Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  [Linus Torvalds, 2021-01-18, 1 file, -0/+2]

    Pull crypto fixes from Herbert Xu:
     "A Kconfig dependency issue with omap-sham and a divide by zero in
      xor on some platforms"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
      crypto: omap-sham - Fix link error without crypto-engine
      crypto: xor - Fix divide error in do_xor_speed()
* crypto: xor - Fix divide error in do_xor_speed()  [Kirill Tkhai, 2021-01-08, 1 file, -0/+2]

    crypto: Fix divide error in do_xor_speed()

    From: Kirill Tkhai <ktkhai@virtuozzo.com>

    Latest (but not only latest) linux-next panics with divide error on
    my QEMU setup. The patch at the bottom of this message fixes the
    problem.

    xor: measuring software checksum speed
    divide error: 0000 [#1] PREEMPT SMP KASAN
    CPU: 3 PID: 1 Comm: swapper/0 Not tainted 5.10.0-next-20201223+ #2177
    RIP: 0010:do_xor_speed+0xbb/0xf3
    Code: 41 ff cc 75 b5 bf 01 00 00 00 e8 3d 23 8b fe 65 8b 05 f6 49 83 7d 85 c0 75 05 e8 84 70 81 fe b8 00 00 50 c3 31 d2 48 8d 7b 10 <f7> f5 41 89 c4 e8 58 07 a2 fe 44 89 63 10 48 8d 7b 08 e8 cb 07 a2
    RSP: 0000:ffff888100137dc8 EFLAGS: 00010246
    RAX: 00000000c3500000 RBX: ffffffff823f0160 RCX: 0000000000000000
    RDX: 0000000000000000 RSI: 0000000000000808 RDI: ffffffff823f0170
    RBP: 0000000000000000 R08: ffffffff8109c50f R09: ffffffff824bb6f7
    R10: fffffbfff04976de R11: 0000000000000001 R12: 0000000000000000
    R13: ffff888101997000 R14: ffff888101994000 R15: ffffffff823f0178
    FS:  0000000000000000(0000) GS:ffff8881f7780000(0000) knlGS:0000000000000000
    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 0000000000000000 CR3: 000000000220e000 CR4: 00000000000006a0
    Call Trace:
     calibrate_xor_blocks+0x13c/0x1c4
     ? do_xor_speed+0xf3/0xf3
     do_one_initcall+0xc1/0x1b7
     ? start_kernel+0x373/0x373
     ? unpoison_range+0x3a/0x60
     kernel_init_freeable+0x1dd/0x238
     ? rest_init+0xc6/0xc6
     kernel_init+0x8/0x10a
     ret_from_fork+0x1f/0x30
    ---[ end trace 5bd3c1d0b77772da ]---

    Fixes: c055e3eae0f1 ("crypto: xor - use ktime for template benchmarking")
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge tag 'char-misc-5.11-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc  [Linus Torvalds, 2021-01-10, 1 file, -1/+1]

    Pull char/misc driver fixes from Greg KH:
     "Here are some small char and misc driver fixes for 5.11-rc3. The
      majority here are fixes for the habanalabs drivers, but also in
      here are:

       - crypto driver fix

       - pvpanic driver fix

       - updated font file

       - interconnect driver fixes

      All of these have been in linux-next for a while with no reported
      issues"

    * tag 'char-misc-5.11-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (26 commits)
      Fonts: font_ter16x32: Update font with new upstream Terminus release
      misc: pvpanic: Check devm_ioport_map() for NULL
      speakup: Add github repository URL and bug tracker
      MAINTAINERS: Update Georgi's email address
      crypto: asym_tpm: correct zero out potential secrets
      habanalabs: Fix memleak in hl_device_reset
      interconnect: imx8mq: Use icc_sync_state
      interconnect: imx: Remove a useless test
      interconnect: imx: Add a missing of_node_put after of_device_is_available
      interconnect: qcom: fix rpmh link failures
      habanalabs: fix order of status check
      habanalabs: register to pci shutdown callback
      habanalabs: add validation cs counter, fix misplaced counters
      habanalabs/gaudi: retry loading TPC f/w on -EINTR
      habanalabs: adjust pci controller init to new firmware
      habanalabs: update comment in hl_boot_if.h
      habanalabs/gaudi: enhance reset message
      habanalabs: full FW hard reset support
      habanalabs/gaudi: disable CGM at HW initialization
      habanalabs: Revise comment to align with mirror list name
      ...
* crypto: asym_tpm: correct zero out potential secrets  [Greg Kroah-Hartman, 2020-12-31, 1 file, -1/+1]

    The function derive_pub_key() should be calling memzero_explicit()
    instead of memset() in case the compiler decides to optimize away the
    call to memset() because it "knows" no one is going to touch the
    memory anymore.

    Cc: stable <stable@vger.kernel.org>
    Reported-by: Ilil Blum Shem-Tov <ilil.blum.shem-tov@intel.com>
    Tested-by: Ilil Blum Shem-Tov <ilil.blum.shem-tov@intel.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Link: https://lore.kernel.org/r/X8ns4AfwjKudpyfe@kroah.com
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
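The distinction, as a small sketch:

    #include <linux/string.h>
    #include <linux/types.h>

    /* Sketch: a memset() of a dying stack buffer is a dead store the
     * optimizer may delete; memzero_explicit() contains a compiler
     * barrier, so the wipe of key material is guaranteed to happen. */
    static void scrub_demo(void)
    {
        u8 secret[32];

        /* ... derive and use the secret ... */

        memzero_explicit(secret, sizeof(secret));   /* survives -O2 */
    }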
* crypto: ecdh - avoid buffer overflow in ecdh_set_secret()  [Ard Biesheuvel, 2021-01-03, 1 file, -1/+2]

    Pavel reports that commit 17858b140bf4 ("crypto: ecdh - avoid
    unaligned accesses in ecdh_set_secret()") fixes one problem but
    introduces another: the unconditional memcpy() introduced by that
    commit may overflow the target buffer if the source data is invalid,
    which could be the result of intentional tampering.

    So check params.key_size explicitly against the size of the target
    buffer before validating the key further.

    Fixes: 17858b140bf4 ("crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()")
    Reported-by: Pavel Machek <pavel@denx.de>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
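The shape of the fix, as a hedged sketch with hypothetical struct names:

    #include <linux/errno.h>
    #include <linux/string.h>
    #include <linux/types.h>

    struct ecdh_params_demo { const void *key; unsigned int key_size; };
    struct ecdh_ctx_demo    { u64 private_key[8]; };

    /* Sketch: bound the caller-controlled key_size by the destination
     * buffer *before* the unconditional memcpy(), then validate. */
    static int ecdh_set_secret_sketch(struct ecdh_ctx_demo *ctx,
                                      const struct ecdh_params_demo *p)
    {
        if (!p->key || p->key_size > sizeof(ctx->private_key))
            return -EINVAL;

        memcpy(ctx->private_key, p->key, p->key_size);
        return 0;   /* ... then run the full key validity checks ... */
    }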
* crypto: aegis128 - avoid spurious references to crypto_aegis128_update_simd  [Ard Biesheuvel, 2020-12-04, 1 file, -2/+2]

    Geert reports that builds where CONFIG_CRYPTO_AEGIS128_SIMD is not
    set may still emit references to crypto_aegis128_update_simd(), which
    cannot be satisfied and therefore break the build. These references
    only exist in functions that can be optimized away, but apparently,
    the compiler is not always able to prove this.

    So add some explicit checks for CONFIG_CRYPTO_AEGIS128_SIMD to help
    the compiler figure this out.

    Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: seed - remove trailing semicolon in macro definition  [Tom Rix, 2020-12-04, 1 file, -1/+1]

    The macro use will already have a semicolon.

    Signed-off-by: Tom Rix <trix@redhat.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
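The pitfall being fixed, in miniature:

    static void do_round(int x) { (void)x; }

    #define BAD_STEP(x)   do_round(x);   /* trailing ';' baked in */
    #define GOOD_STEP(x)  do_round(x)    /* caller supplies ';'   */

    static void demo(int key)
    {
        /* With BAD_STEP, the macro's own ';' would terminate the if
         * statement early and orphan the else: a compile error. */
        if (key)
            GOOD_STEP(key);
        else
            GOOD_STEP(0);
    }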
* crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()  [Ard Biesheuvel, 2020-12-04, 1 file, -4/+5]

    ecdh_set_secret() casts a void* pointer to a const u64* in order to
    feed it into ecc_is_key_valid(). This is not generally permitted by
    the C standard, and leads to actual misalignment faults on ARMv6
    cores. In some cases, these are fixed up in software, but this still
    leads to performance hits that are entirely avoidable.

    So let's copy the key into the ctx buffer first, which we will do
    anyway in the common case, and which guarantees correct alignment.

    Cc: <stable@vger.kernel.org>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
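The underlying rule, as a sketch: memcpy() is alignment-agnostic, while dereferencing a misaligned u64* is undefined behaviour (and a real fault on ARMv6). Function name is hypothetical:

    #include <linux/errno.h>
    #include <linux/string.h>
    #include <linux/types.h>

    /* Sketch: copy the caller's buffer into naturally aligned storage
     * before reading it as an array of u64 words. */
    static int copy_key_aligned(u64 *dst, size_t nwords,
                                const void *src, unsigned int len)
    {
        if (len != nwords * sizeof(u64))
            return -EINVAL;

        memcpy(dst, src, len);  /* handles any source alignment */
        return 0;               /* dst is now safe to use as u64[] */
    }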
* crypto: tcrypt - include 1420 byte blocks in aead and skcipher benchmarks  [Ard Biesheuvel, 2020-11-27, 1 file, -37/+44]

    WireGuard and IPsec both typically operate on input blocks that are
    ~1420 bytes in size, given the default Ethernet MTU of 1500 bytes and
    the overhead of the VPN metadata.

    Many aead and skcipher implementations are optimized for power-of-2
    block sizes, and whether they perform well when operating on 1420
    byte blocks cannot be easily extrapolated from the performance on
    power-of-2 block sizes. So let's add 1420 bytes explicitly, and round
    it up to the next blocksize multiple of the algo in question if it
    does not support 1420 byte blocks.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
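If it helps to see the rounding concretely, a tiny sketch (function name hypothetical):

    #include <linux/kernel.h>

    /* Sketch: round 1420 up to the algorithm's block size multiple when
     * the algorithm cannot process partial blocks. */
    static unsigned int vpn_block_len(unsigned int bsize)
    {
        return roundup(1420, bsize);    /* e.g. 1424 for bsize = 16 */
    }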
* crypto: tcrypt - permit tcrypt.ko to be builtin  [Ard Biesheuvel, 2020-11-27, 1 file, -1/+1]

    When working on crypto algorithms, being able to run tcrypt quickly
    without booting an entire Linux installation can be very useful. For
    instance, QEMU/kvm can be used to boot a kernel from the command
    line, and having tcrypt.ko builtin would allow tcrypt to be executed
    to run benchmarks, or to run tests for algorithms that need to be
    instantiated from templates, without the need to make it past the
    point where the rootfs is mounted.

    So let's relax the requirement that tcrypt can only be built as a
    module when CONFIG_EXPERT is enabled.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: tcrypt - don't initialize at subsys_initcall time  [Ard Biesheuvel, 2020-11-27, 1 file, -1/+1]

    Commit c4741b2305979 ("crypto: run initcalls for generic
    implementations earlier") converted tcrypt.ko's module_init() to
    subsys_initcall(), but this was unintentional: tcrypt.ko currently
    cannot be built into the core kernel, and so the subsys_initcall()
    gets converted into module_init() under the hood. Given that
    tcrypt.ko does not implement a generic version of a crypto algorithm
    that has to be available early during boot, there is no point in
    running the tcrypt init code earlier than implied by module_init().

    However, for crypto development purposes, we will lift the
    restriction that tcrypt.ko must be built as a module, and when
    builtin, it makes sense for tcrypt.ko (which does its work inside the
    module init function) to run as late as possible. So let's switch to
    late_initcall() instead.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
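One annotation covers both build modes, since every *_initcall() level degrades to module_init() in a module build. A minimal sketch with a hypothetical init function:

    #include <linux/init.h>
    #include <linux/module.h>

    static int __init tcrypt_mod_init_demo(void)
    {
        return 0;   /* run the requested tests/benchmarks here */
    }

    /* Builtin: runs as late as possible; module: same as module_init(). */
    late_initcall(tcrypt_mod_init_demo);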
* crypto: aegis128 - expose SIMD code path as separate driver  [Ard Biesheuvel, 2020-11-27, 1 file, -77/+143]

    Wiring the SIMD code into the generic driver has the unfortunate side
    effect that the tcrypt testing code cannot distinguish them, and will
    therefore not use the latter to fuzz test the former, as it does for
    other algorithms.

    So let's refactor the code a bit so we can register two
    implementations: aegis128-generic and aegis128-simd.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Ondrej Mosnacek <omosnacek@gmail.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aegis128/neon - move final tag check to SIMD domain  [Ard Biesheuvel, 2020-11-27, 3 files, -18/+57]

    Instead of calculating the tag and returning it to the caller on
    decryption, use a SIMD compare and min across vector to perform the
    comparison. This is slightly more efficient, and removes the need on
    the caller's part to wipe the tag from memory if the decryption
    failed.

    While at it, switch to unsigned int when passing cryptlen and
    assoclen - we don't support input sizes where it matters anyway.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Ondrej Mosnacek <omosnacek@gmail.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aegis128/neon - optimize tail block handling  [Ard Biesheuvel, 2020-11-27, 1 file, -14/+75]

    Avoid copying the tail block via a stack buffer if the total size
    exceeds a single AEGIS block. In this case, we can use overlapping
    loads and stores and NEON permutation instructions instead, which
    leads to a modest performance improvement on some cores (< 5%), and
    is slightly cleaner.

    Note that we still need to use a stack buffer if the entire input is
    smaller than 16 bytes, given that we cannot use 16 byte NEON loads
    and stores safely in this case.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Reviewed-by: Ondrej Mosnacek <omosnacek@gmail.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: aegis128 - wipe plaintext and tag if decryption fails  [Ard Biesheuvel, 2020-11-27, 1 file, -6/+26]

    The AEGIS spec mentions explicitly that the security guarantees hold
    only if the resulting plaintext and tag of a failed decryption are
    withheld. So ensure that we abide by this.

    While at it, drop the unused struct aead_request *req parameter from
    crypto_aegis128_process_crypt().

    Reviewed-by: Ondrej Mosnacek <omosnacek@gmail.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: sha - split sha.h into sha1.h and sha2.h  [Eric Biggers, 2020-11-20, 4 files, -4/+4]

    Currently <crypto/sha.h> contains declarations for both SHA-1 and
    SHA-2, and <crypto/sha3.h> contains declarations for SHA-3. This
    organization is inconsistent, but more importantly SHA-1 is no longer
    considered to be cryptographically secure. So to the extent possible,
    SHA-1 shouldn't be grouped together with any of the other SHA
    versions, and usage of it should be phased out.

    Therefore, split <crypto/sha.h> into two headers <crypto/sha1.h> and
    <crypto/sha2.h>, and make everyone explicitly specify whether they
    want the declarations for SHA-1, SHA-2, or both.

    This avoids making the SHA-1 declarations visible to files that don't
    want anything to do with SHA-1. It also prepares for potentially
    moving sha1.h into a new insecure/ or dangerous/ directory.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Acked-by: Jason A. Donenfeld <Jason@zx2c4.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
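For callers, the change is mechanical; a sketch of before and after:

    /* Before: one header covered both families. */
    /* #include <crypto/sha.h> */

    /* After: state explicitly which family is needed. */
    #include <crypto/sha1.h>    /* SHA-1 only (legacy users) */
    #include <crypto/sha2.h>    /* SHA-224/256/384/512 */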
* crypto: Kconfig - CRYPTO_MANAGER_EXTRA_TESTS requires the manager  [Jason A. Donenfeld, 2020-11-13, 1 file, -1/+1]

    The extra tests in the manager actually require the manager to be
    selected too. Otherwise the linker gives errors like:

      ld: arch/x86/crypto/chacha_glue.o: in function `chacha_simd_stream_xor':
      chacha_glue.c:(.text+0x422): undefined reference to `crypto_simd_disabled_for_test'

    Fixes: 2343d1529aff ("crypto: Kconfig - allow tests to be disabled when manager is disabled")
    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: af_alg - avoid undefined behavior accessing salg_name  [Eric Biggers, 2020-11-06, 1 file, -3/+7]

    Commit 3f69cc60768b ("crypto: af_alg - Allow arbitrarily long
    algorithm names") made the kernel start accepting arbitrarily long
    algorithm names in sockaddr_alg. However, the actual length of the
    salg_name field stayed at the original 64 bytes.

    This is broken because the kernel can access indices >= 64 in
    salg_name, which is undefined behavior -- even though the memory that
    is accessed is still located within the sockaddr structure. It would
    only be defined behavior if the array were properly marked as
    arbitrary-length (either by making it a flexible array, which is the
    recommended way these days, or by making it an array of length 0 or
    1).

    We can't simply change salg_name into a flexible array, since that
    would break source compatibility with userspace programs that embed
    sockaddr_alg into another struct, or (more commonly) declare a
    sockaddr_alg like 'struct sockaddr_alg sa = { .salg_name = "foo" };'.

    One solution would be to change salg_name into a flexible array only
    when '#ifdef __KERNEL__'. However, that would keep userspace without
    an easy way to actually use the longer algorithm names.

    Instead, add a new structure 'sockaddr_alg_new' that has the flexible
    array field, and expose it to both userspace and the kernel. Make the
    kernel use it correctly in alg_bind().

    This addresses the syzbot report "UBSAN: array-index-out-of-bounds in
    alg_bind" (https://syzkaller.appspot.com/bug?extid=92ead4eb8e26a26d465e).

    Reported-by: syzbot+92ead4eb8e26a26d465e@syzkaller.appspotmail.com
    Fixes: 3f69cc60768b ("crypto: af_alg - Allow arbitrarily long algorithm names")
    Cc: <stable@vger.kernel.org> # v4.12+
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
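The two layouts, sketched for comparison; see include/uapi/linux/if_alg.h for the authoritative definitions:

    #include <linux/types.h>

    /* Original: fixed 64-byte name; indexing past 64 is UB even when
     * the enclosing allocation is larger. */
    struct sockaddr_alg {
        __u16   salg_family;
        __u8    salg_type[14];
        __u32   salg_feat;
        __u32   salg_mask;
        __u8    salg_name[64];
    };

    /* New: a flexible array member makes longer names well-defined. */
    struct sockaddr_alg_new {
        __u16   salg_family;
        __u8    salg_type[14];
        __u32   salg_feat;
        __u32   salg_mask;
        __u8    salg_name[];
    };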
* crypto: testmgr - WARN on test failure  [Eric Biggers, 2020-11-06, 1 file, -7/+13]

    Currently, by default crypto self-test failures only result in a
    pr_warn() message and an "unknown" status in /proc/crypto. Both of
    these are easy to miss. There is also an option to panic the kernel
    when a test fails, but that can't be the default behavior.

    A crypto self-test failure always indicates a kernel bug, however,
    and there's already a standard way to report (recoverable) kernel
    bugs -- the WARN() family of macros. WARNs are noisier and harder to
    miss, and existing test systems already know to look for them in
    dmesg or via /proc/sys/kernel/tainted.

    Therefore, call WARN() when an algorithm fails its self-tests.

    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
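A sketch of the reporting change; the message text and function name here are illustrative, not the upstream testmgr code:

    #include <linux/bug.h>

    /* Sketch: a failed self-test is a kernel bug, so report it the way
     * other kernel bugs are reported: WARN() taints the kernel and
     * leaves a stack trace that CI systems already scan for. */
    static void report_selftest_failure(const char *driver,
                                        const char *alg, int rc)
    {
        WARN(1, "alg: self-tests for %s (%s) failed (rc=%d)",
             driver, alg, rc);
    }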