path: root/include/crypto
Commit message | Author | Date | Files | Lines
* crypto: skcipher - remove remnants of internal IV generators (Eric Biggers, 2018-12-23, 3 files, -18/+0)
  Remove dead code related to internal IV generators, which are no longer used since they've been replaced with the "seqiv" and "echainiv" templates. The removed code includes:
  - The "givcipher" (GIVCIPHER) algorithm type. No algorithms are registered with this type anymore, so it's unneeded.
  - The "const char *geniv" member of aead_alg, ablkcipher_alg, and blkcipher_alg. A few algorithms still set this, but it isn't used anymore except to show via /proc/crypto and CRYPTO_MSG_GETALG. Just hardcode "<default>" or "<none>" in those cases.
  - The 'skcipher_givcrypt_request' structure, which is never used.
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - remove unused dump functions (Corentin Labbe, 2018-12-23, 1 file, -12/+0)
  This patch removes unused dump functions for crypto_user_stats. They are remnants of the copy/paste of crypto_user_base to crypto_user_stat that I forgot to remove.
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - fix use_after_free of struct xxx_request (Corentin Labbe, 2018-12-07, 7 files, -238/+55)
  All crypto_stats functions use the struct xxx_request for feeding stats, but in some cases this structure could already have been freed. To fix this, the needed parameters (len and alg) are now stored before the request is executed.
  Fixes: cac5818c25d0 ("crypto: user - Implement a generic crypto statistics")
  Reported-by: syzbot <syzbot+6939a606a5305e9e9799@syzkaller.appspotmail.com>
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - convert all stats from u32 to u64 (Corentin Labbe, 2018-12-07, 7 files, -33/+33)
  All the 32-bit fields need to be 64-bit, since in some cases UINT32_MAX crypto operations can be performed in a matter of seconds.
  Reported-by: Eric Biggers <ebiggers@kernel.org>
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - made crypto_user_stat optional (Corentin Labbe, 2018-12-07, 1 file, -0/+17)
  Even if CRYPTO_STATS is set to n, some parts of the statistics code were still compiled. This patch makes all of crypto_user_stat compile only when CRYPTO_STATS is enabled.
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: nhpoly1305 - add NHPoly1305 support (Eric Biggers, 2018-11-20, 1 file, -0/+74)
  Add a generic implementation of NHPoly1305, an ε-almost-∆-universal hash function used in the Adiantum encryption mode. CONFIG_NHPOLY1305 is not selectable by itself since there won't be any real reason to enable it without also enabling Adiantum support.
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: poly1305 - add Poly1305 core API (Eric Biggers, 2018-11-20, 1 file, -0/+16)
  Expose a low-level Poly1305 API which implements the ε-almost-∆-universal (εA∆U) hash function underlying the Poly1305 MAC and supports block-aligned inputs only.

  This is needed for Adiantum hashing, which builds an εA∆U hash function from NH and a polynomial evaluation in GF(2^{130}-5); this polynomial evaluation is identical to the one the Poly1305 MAC does. However, the crypto_shash Poly1305 API isn't very appropriate for this because its calling convention assumes it is used as a MAC, with a 32-byte "one-time key" provided for every digest.

  But by design, in Adiantum hashing the performance of the polynomial evaluation isn't nearly as critical as NH. So it suffices to just have some C helper functions. Thus, this patch adds such functions.
  Acked-by: Martin Willi <martin@strongswan.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
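  A minimal usage sketch of such helpers, assuming the core API follows the poly1305_core_* naming and the key/accumulator structures introduced in the companion patch below (check include/crypto/poly1305.h for the exact signatures):

    #include <crypto/poly1305.h>

    /*
     * Hash a block-aligned buffer with the raw εA∆U function; unlike the MAC,
     * no 's' key is added at the end.  The poly1305_core_* names are assumed
     * helpers, not verified signatures.
     */
    static void hash_aligned_blocks(const u8 raw_r_key[16], const void *src,
                                    unsigned int nblocks, void *digest)
    {
            struct poly1305_key key;        /* "r" key limbs */
            struct poly1305_state state;    /* accumulator limbs */

            poly1305_core_setkey(&key, raw_r_key);
            poly1305_core_init(&state);
            poly1305_core_blocks(&state, &key, src, nblocks);
            poly1305_core_emit(&state, digest);
    }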
* crypto: poly1305 - use structures for key and accumulator (Eric Biggers, 2018-11-20, 1 file, -2/+10)
  In preparation for exposing a low-level Poly1305 API which implements the ε-almost-∆-universal (εA∆U) hash function underlying the Poly1305 MAC and supports block-aligned inputs only, create structures poly1305_key and poly1305_state which hold the limbs of the Poly1305 "r" key and accumulator, respectively. These structures could actually have the same type (e.g. poly1305_val), but different types are preferable, to prevent misuse.
  Acked-by: Martin Willi <martin@strongswan.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chacha - add XChaCha12 support (Eric Biggers, 2018-11-20, 1 file, -0/+7)
  Now that the generic implementation of ChaCha20 has been refactored to allow varying the number of rounds, add support for XChaCha12, which is the XSalsa construction applied to ChaCha12.

  ChaCha12 is one of the three ciphers specified by the original ChaCha paper (https://cr.yp.to/chacha/chacha-20080128.pdf: "ChaCha, a variant of Salsa20"), alongside ChaCha8 and ChaCha20. ChaCha12 is faster than ChaCha20 but has a lower, but still large, security margin.

  We need XChaCha12 support so that it can be used in the Adiantum encryption mode, which enables disk/file encryption on low-end mobile devices where AES-XTS is too slow as the CPUs lack AES instructions. We'd prefer XChaCha20 (the more popular variant), but it's too slow on some of our target devices, so at least in some cases we do need the XChaCha12-based version. In more detail, the problem is that Adiantum is still much slower than we're happy with, and encryption still has a quite noticeable effect on the feel of low-end devices. Users and vendors push back hard against encryption that degrades the user experience, which always risks encryption being disabled entirely. So we need to choose the fastest option that gives us a solid margin of security, and here that's XChaCha12.

  The best known attack on ChaCha breaks only 7 rounds and has 2^235 time complexity, so ChaCha12's security margin is still better than AES-256's. Much has been learned about cryptanalysis of ARX ciphers since Salsa20 was originally designed in 2005, and it now seems we can be comfortable with a smaller number of rounds. The eSTREAM project also suggests the 12-round version of Salsa20 as providing the best balance among the different variants: combining very good performance with a "comfortable margin of security".

  Note that it would be trivial to add vanilla ChaCha12 in addition to XChaCha12. However, it's unneeded for now and therefore is omitted.

  As discussed in the patch that introduced XChaCha20 support, I considered splitting the code into separate chacha-common, chacha20, xchacha20, and xchacha12 modules, so that these algorithms could be enabled/disabled independently. However, since nearly all the code is shared anyway, I ultimately decided there would have been little benefit to the added complexity.
  Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Acked-by: Martin Willi <martin@strongswan.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chacha20-generic - refactor to allow varying number of rounds (Eric Biggers, 2018-11-20, 2 files, -42/+47)
  In preparation for adding XChaCha12 support, rename/refactor chacha20-generic to support different numbers of rounds. The justification for needing XChaCha12 support is explained in more detail in the patch "crypto: chacha - add XChaCha12 support".

  The only difference between ChaCha{8,12,20} is the number of rounds itself; all other parts of the algorithm are the same. Therefore, remove the "20" from all definitions, structures, functions, files, etc. that will be shared by all ChaCha versions.

  Also make ->setkey() store the round count in the chacha_ctx (previously chacha20_ctx). The generic code then passes the round count through to chacha_block(). There will be a ->setkey() function for each explicitly allowed round count; the encrypt/decrypt functions will be the same. I decided not to do it the opposite way (same ->setkey() function for all round counts, with different encrypt/decrypt functions) because that would have required more boilerplate code in architecture-specific implementations of ChaCha and XChaCha.
  Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Acked-by: Martin Willi <martin@strongswan.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
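  A rough sketch (not the exact kernel code) of the refactored context and the per-round-count ->setkey() wrappers described above:

    #include <asm/unaligned.h>
    #include <crypto/chacha.h>
    #include <crypto/internal/skcipher.h>
    #include <linux/kernel.h>

    struct chacha_ctx {
            u32 key[8];     /* 256-bit key as eight 32-bit words */
            int nrounds;    /* 8, 12, or 20 */
    };

    static int chacha_setkey(struct crypto_skcipher *tfm, const u8 *key,
                             unsigned int keysize, int nrounds)
    {
            struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
            int i;

            if (keysize != CHACHA_KEY_SIZE)
                    return -EINVAL;

            for (i = 0; i < ARRAY_SIZE(ctx->key); i++)
                    ctx->key[i] = get_unaligned_le32(key + i * sizeof(u32));

            ctx->nrounds = nrounds;
            return 0;
    }

    /* One thin wrapper per allowed round count; encrypt/decrypt are shared. */
    static int chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key,
                               unsigned int keysize)
    {
            return chacha_setkey(tfm, key, keysize, 20);
    }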
* crypto: chacha20-generic - add XChaCha20 support (Eric Biggers, 2018-11-20, 1 file, -1/+13)
  Add support for the XChaCha20 stream cipher. XChaCha20 is the application of the XSalsa20 construction (https://cr.yp.to/snuffle/xsalsa-20081128.pdf) to ChaCha20 rather than to Salsa20. XChaCha20 extends ChaCha20's nonce length from 64 bits (or 96 bits, depending on convention) to 192 bits, while provably retaining ChaCha20's security. XChaCha20 uses the ChaCha20 permutation to map the key and first 128 nonce bits to a 256-bit subkey. Then, it does the ChaCha20 stream cipher with the subkey and remaining 64 bits of nonce.

  We need XChaCha support in order to add support for the Adiantum encryption mode. Note that to meet our performance requirements, we actually plan to primarily use the variant XChaCha12. But we believe it's wise to first add XChaCha20 as a baseline with a higher security margin, in case there are any situations where it can be used. Supporting both variants is straightforward.

  Since XChaCha20's subkey differs for each request, XChaCha20 can't be a template that wraps ChaCha20; that would require re-keying the underlying ChaCha20 for every request, which wouldn't be thread-safe. Instead, we make XChaCha20 its own top-level algorithm which calls the ChaCha20 streaming implementation internally.

  Similar to the existing ChaCha20 implementation, we define the IV to be the nonce and stream position concatenated together. This allows users to seek to any position in the stream.

  I considered splitting the code into separate chacha20-common, chacha20, and xchacha20 modules, so that chacha20 and xchacha20 could be enabled/disabled independently. However, since nearly all the code is shared anyway, I ultimately decided there would have been little benefit to the added complexity of separate modules.
  Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Acked-by: Martin Willi <martin@strongswan.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
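  A heavily simplified sketch of the construction described above; hchacha20_block() mirrors the helper added in the HChaCha20 patch below, while chacha_init_state() and chacha_stream_xor() are hypothetical stand-ins for the shared ChaCha20 streaming code:

    #include <linux/string.h>
    #include <linux/types.h>

    /*
     * XChaCha20: derive a subkey with HChaCha20 from the key and the first
     * 128 nonce bits, then run plain ChaCha20 with the subkey and the
     * remaining 64 nonce bits.  Helper names here are illustrative only.
     */
    static void xchacha20_crypt(const u32 key[8], const u8 nonce[24],
                                u8 *dst, const u8 *src, unsigned int len)
    {
            u32 state[16];
            u32 subkey[8];
            u8 real_iv[16];

            chacha_init_state(state, key, nonce);   /* hypothetical helper */
            hchacha20_block(state, subkey);         /* see HChaCha20 patch below */

            /* IV = 64-bit stream position (starting at 0) || last 8 nonce bytes */
            memset(real_iv, 0, 8);
            memcpy(real_iv + 8, nonce + 16, 8);

            chacha_stream_xor(subkey, real_iv, dst, src, len); /* hypothetical */
    }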
* crypto: chacha20-generic - add HChaCha20 library function (Eric Biggers, 2018-11-20, 1 file, -0/+2)
  Refactor the unkeyed permutation part of chacha20_block() into its own function, then add hchacha20_block() which is the ChaCha equivalent of HSalsa20 and is an intermediate step towards XChaCha20 (see https://cr.yp.to/snuffle/xsalsa-20081128.pdf). HChaCha20 skips the final addition of the initial state, and outputs only certain words of the state. It should not be used for streaming directly.
  Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Acked-by: Martin Willi <martin@strongswan.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chacha20poly1305 - export CHACHAPOLY_IV_SIZE (Cristian Stoica, 2018-11-16, 1 file, -0/+1)
  Move CHACHAPOLY_IV_SIZE to header file, so it can be reused.
  Signed-off-by: Cristian Stoica <cristian.stoica@nxp.com>
  Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: streebog - register Streebog in hash info for IMA (Vitaly Chikunov, 2018-11-16, 1 file, -0/+1)
  Register Streebog hash function in Hash Info arrays to let IMA use it for its purposes.
  Cc: linux-integrity@vger.kernel.org
  Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
  Reviewed-by: Mimi Zohar <zohar@linux.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: streebog - add Streebog hash function (Vitaly Chikunov, 2018-11-16, 1 file, -0/+34)
  Add GOST/IETF Streebog hash function (GOST R 34.11-2012, RFC 6986) generic hash transformation.
  Cc: linux-integrity@vger.kernel.org
  Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
  Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* KEYS: asym_tpm: extract key size & public key [ver #2] (Denis Kenzior, 2018-10-26, 1 file, -0/+3)
  The parsed BER/DER blob obtained from user space contains a TPM_Key structure. This structure has some information about the key as well as the public key portion. This patch extracts this information for future use.
  Signed-off-by: Denis Kenzior <denkenz@gmail.com>
  Signed-off-by: David Howells <dhowells@redhat.com>
  Tested-by: Marcel Holtmann <marcel@holtmann.org>
  Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
  Signed-off-by: James Morris <james.morris@microsoft.com>
* KEYS: asym_tpm: add skeleton for asym_tpm [ver #2] (Denis Kenzior, 2018-10-26, 1 file, -0/+16)
  This patch adds the basic skeleton for the asym_tpm asymmetric key subtype.
  Signed-off-by: Denis Kenzior <denkenz@gmail.com>
  Signed-off-by: David Howells <dhowells@redhat.com>
  Tested-by: Marcel Holtmann <marcel@holtmann.org>
  Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
  Signed-off-by: James Morris <james.morris@microsoft.com>
* KEYS: Allow the public_key struct to hold a private key [ver #2] (David Howells, 2018-10-26, 1 file, -0/+1)
  Put a flag in the public_key struct to indicate if the structure is holding a private key. The private key must be held ASN.1 encoded in the format specified in RFC 3447 A.1.2. This is the form required by crypto/rsa.c. The software encryption subtype's verification and query functions then need to select the appropriate crypto function to set the key.
  Signed-off-by: David Howells <dhowells@redhat.com>
  Tested-by: Marcel Holtmann <marcel@holtmann.org>
  Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
  Reviewed-by: Denis Kenzior <denkenz@gmail.com>
  Tested-by: Denis Kenzior <denkenz@gmail.com>
  Signed-off-by: James Morris <james.morris@microsoft.com>
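  An illustrative sketch of what such a flag enables; the struct layout and field name here are assumed for illustration, not copied from the patch (see include/crypto/public_key.h for the real definition):

    #include <crypto/akcipher.h>
    #include <linux/types.h>

    /* Assumed shape of the flagged key blob described above. */
    struct public_key_example {
            void *key;
            u32 keylen;
            bool key_is_private;    /* blob is an RFC 3447 A.1.2 private key */
    };

    static int set_key_on_tfm(struct crypto_akcipher *tfm,
                              const struct public_key_example *pkey)
    {
            /* The software subtype picks the matching akcipher setter. */
            if (pkey->key_is_private)
                    return crypto_akcipher_set_priv_key(tfm, pkey->key, pkey->keylen);
            return crypto_akcipher_set_pub_key(tfm, pkey->key, pkey->keylen);
    }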
* KEYS: Provide missing asymmetric key subops for new key type ops [ver #2] (David Howells, 2018-10-26, 1 file, -2/+11)
  Provide the missing asymmetric key subops for new key type ops. This includes query, encrypt, decrypt and create signature. Verify signature already exists. Also provided are accessor functions for this:

    int query_asymmetric_key(const struct key *key, struct kernel_pkey_query *info);
    int encrypt_blob(struct kernel_pkey_params *params, const void *data, void *enc);
    int decrypt_blob(struct kernel_pkey_params *params, const void *enc, void *data);
    int create_signature(struct kernel_pkey_params *params, const void *data, void *enc);

  The public_key_signature struct gains an encoding field to carry the encoding for verify_signature().
  Signed-off-by: David Howells <dhowells@redhat.com>
  Tested-by: Marcel Holtmann <marcel@holtmann.org>
  Reviewed-by: Marcel Holtmann <marcel@holtmann.org>
  Reviewed-by: Denis Kenzior <denkenz@gmail.com>
  Tested-by: Denis Kenzior <denkenz@gmail.com>
  Signed-off-by: James Morris <james.morris@microsoft.com>
* crypto/morus(640,1280) - make crypto_...-algs static (valdis.kletnieks@vt.edu, 2018-10-05, 2 files, -2/+2)
  sparse complains thusly:

    CHECK   arch/x86/crypto/morus640-sse2-glue.c
    arch/x86/crypto/morus640-sse2-glue.c:38:1: warning: symbol 'crypto_morus640_sse2_algs' was not declared. Should it be static?
    CHECK   arch/x86/crypto/morus1280-sse2-glue.c
    arch/x86/crypto/morus1280-sse2-glue.c:38:1: warning: symbol 'crypto_morus1280_sse2_algs' was not declared. Should it be static?
    CHECK   arch/x86/crypto/morus1280-avx2-glue.c
    arch/x86/crypto/morus1280-avx2-glue.c:38:1: warning: symbol 'crypto_morus1280_avx2_algs' was not declared. Should it be static?

  and sparse is correct - these don't need to be global and polluting the namespace.
  Signed-off-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
  Acked-by: Ondrej Mosnacek <omosnacek@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: user - Implement a generic crypto statistics (Corentin Labbe, 2018-09-28, 8 files, -26/+303)
  This patch implements a generic way to get statistics about all crypto usages.
  Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: skcipher - Remove SKCIPHER_REQUEST_ON_STACK() (Kees Cook, 2018-09-28, 1 file, -5/+0)
  Now that all the users of the VLA-generating SKCIPHER_REQUEST_ON_STACK() macro have been moved to SYNC_SKCIPHER_REQUEST_ON_STACK(), we can remove the former.
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: null - Remove VLA usage of skcipher (Kees Cook, 2018-09-28, 2 files, -2/+2)
  In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size.
  [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: skcipher - Introduce crypto_sync_skcipher (Kees Cook, 2018-09-28, 1 file, -0/+75)
  In preparation for removal of VLAs due to skcipher requests on the stack via SKCIPHER_REQUEST_ON_STACK() usage, this introduces the infrastructure for the "sync skcipher" tfm, which is for handling the on-stack cases of skcipher, which are always non-ASYNC and have a known limited request size.

  The crypto API additions:

    struct crypto_sync_skcipher (wrapper for struct crypto_skcipher)
    crypto_alloc_sync_skcipher()
    crypto_free_sync_skcipher()
    crypto_sync_skcipher_setkey()
    crypto_sync_skcipher_get_flags()
    crypto_sync_skcipher_set_flags()
    crypto_sync_skcipher_clear_flags()
    crypto_sync_skcipher_blocksize()
    crypto_sync_skcipher_ivsize()
    crypto_sync_skcipher_reqtfm()
    skcipher_request_set_sync_tfm()
    SYNC_SKCIPHER_REQUEST_ON_STACK() (with tfm type check)

  Signed-off-by: Kees Cook <keescook@chromium.org>
  Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
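  A hedged usage sketch of the API listed above, loosely modeled on how callers were converted; the algorithm name and error handling are illustrative only:

    #include <crypto/skcipher.h>
    #include <linux/err.h>
    #include <linux/scatterlist.h>

    static int encrypt_buffer(const u8 *key, unsigned int keylen,
                              struct scatterlist *sg, unsigned int len, u8 *iv)
    {
            struct crypto_sync_skcipher *tfm;
            int err;

            tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            err = crypto_sync_skcipher_setkey(tfm, key, keylen);
            if (!err) {
                    /* Request lives on the stack with a fixed, bounded size. */
                    SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);

                    skcipher_request_set_sync_tfm(req, tfm);
                    skcipher_request_set_callback(req, 0, NULL, NULL);
                    skcipher_request_set_crypt(req, sg, sg, len, iv);
                    err = crypto_skcipher_encrypt(req);
                    skcipher_request_zero(req);
            }

            crypto_free_sync_skcipher(tfm);
            return err;
    }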
* crypto: chacha20 - Fix chacha20_block() keystream alignment (again) (Eric Biggers, 2018-09-21, 1 file, -2/+1)
  In commit 9f480faec58c ("crypto: chacha20 - Fix keystream alignment for chacha20_block()"), I had missed that chacha20_block() can be called directly on the buffer passed to get_random_bytes(), which can have any alignment. So, while my commit didn't break anything, it didn't fully solve the alignment problems.

  Revert my solution and just update chacha20_block() to use put_unaligned_le32(), so the output buffer need not be aligned. This is simpler, and on many CPUs it's the same speed. But, I kept the 'tmp' buffers in extract_crng_user() and _get_random_bytes() 4-byte aligned, since that alignment is actually needed for _crng_backtrack_protect() too.
  Reported-by: Stephan Müller <smueller@chronox.de>
  Cc: Theodore Ts'o <tytso@mit.edu>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
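  A simplified sketch of the output path after the change; the real chacha20_block() also runs the permutation and the final state addition, only the store step is shown here:

    #include <asm/unaligned.h>
    #include <linux/types.h>

    /* Write 16 keystream words to an output buffer of any alignment. */
    static void store_keystream(const u32 x[16], u8 *stream)
    {
            int i;

            for (i = 0; i < 16; i++)
                    put_unaligned_le32(x[i], stream + i * sizeof(u32));
    }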
* crypto: api - Introduce notifier for new crypto algorithms (Martin K. Petersen, 2018-09-04, 1 file, -0/+10)
  Introduce a facility that can be used to receive a notification callback when a new algorithm becomes available. This can be used by existing crypto registrations to trigger a switch from a software-only algorithm to a hardware-accelerated version. A new CRYPTO_MSG_ALG_LOADED state is introduced to the existing crypto notification chain, and the register/unregister functions are exported so they can be called by subsystems outside of crypto.
  Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
  Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
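  A hedged sketch of a consumer of this notification chain; crypto_register_notifier() and CRYPTO_MSG_ALG_LOADED come from the commit, while the work item and rescan logic are illustrative assumptions:

    #include <crypto/algapi.h>
    #include <linux/notifier.h>
    #include <linux/workqueue.h>

    static struct work_struct rescan_work;  /* assumed to be initialized elsewhere */

    static int crypto_alg_notify(struct notifier_block *nb, unsigned long msg,
                                 void *data)
    {
            /* A new algorithm was registered and passed its self-tests. */
            if (msg == CRYPTO_MSG_ALG_LOADED)
                    schedule_work(&rescan_work);    /* e.g. retry switching to a HW driver */
            return NOTIFY_OK;
    }

    static struct notifier_block crypto_alg_nb = {
            .notifier_call = crypto_alg_notify,
    };

    /* In module init/exit:
     *      crypto_register_notifier(&crypto_alg_nb);
     *      crypto_unregister_notifier(&crypto_alg_nb);
     */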
* crypto: x86 - remove SHA multibuffer routines and mcryptd (Ard Biesheuvel, 2018-09-04, 1 file, -114/+0)
  As it turns out, the AVX2 multibuffer SHA routines are currently broken [0], in a way that would have likely been noticed if this code were in wide use. Since the code is too complicated to be maintained by anyone except the original authors, and since the performance benefits for real-world use cases are debatable to begin with, it is better to drop it entirely for the moment.
  [0] https://marc.info/?l=linux-crypto-vger&m=153476243825350&w=2
  Suggested-by: Eric Biggers <ebiggers@google.com>
  Cc: Megha Dey <megha.dey@linux.intel.com>
  Cc: Tim Chen <tim.c.chen@linux.intel.com>
  Cc: Geert Uytterhoeven <geert@linux-m68k.org>
  Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Ingo Molnar <mingo@redhat.com>
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: api - Introduce generic max blocksize and alignmask (Kees Cook, 2018-09-04, 1 file, -1/+3)
  In the quest to remove all stack VLA usage from the kernel[1], this exposes a new general upper bound on crypto blocksize and alignmask (higher than for the existing cipher limits) for VLA removal, and introduces new checks. At present, the highest cra_alignmask in the kernel is 63. The highest cra_blocksize is 144 (SHA3_224_BLOCK_SIZE, 18 8-byte words). For the new blocksize limit, I went with 160 (20 8-byte words).
  [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
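  A sketch of the kind of fixed bound this enables; the macro names and values are taken from the commit text rather than verified against the header, and the helper is purely illustrative:

    #include <linux/bug.h>
    #include <linux/string.h>
    #include <linux/types.h>

    #define MAX_ALGAPI_BLOCKSIZE    160     /* 20 8-byte words */
    #define MAX_ALGAPI_ALIGNMASK    63

    /* Fixed-size stack buffer bounded by the new limit instead of a VLA. */
    static void xor_one_block(u8 *dst, const u8 *a, const u8 *b,
                              unsigned int blocksize)
    {
            u8 buf[MAX_ALGAPI_BLOCKSIZE];
            unsigned int i;

            if (WARN_ON(blocksize > sizeof(buf)))
                    return;
            for (i = 0; i < blocksize; i++)
                    buf[i] = a[i] ^ b[i];
            memcpy(dst, buf, blocksize);
    }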
* crypto: hash - Remove VLA usage (Kees Cook, 2018-09-04, 1 file, -1/+5)
  In the quest to remove all stack VLA usage from the kernel[1], this removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize()) by using the maximum allowable size (which is now more clearly captured in a macro), along with a few other cases. Similar limits are turned into macros as well. A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the largest digest size and that sizeof(struct sha3_state) (360) is the largest descriptor size. The corresponding maximums are reduced.
  [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
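  A usage sketch of the now fixed-size on-stack shash descriptor; the algorithm name and error handling are illustrative:

    #include <crypto/hash.h>
    #include <linux/err.h>

    static int sha256_digest_buffer(const u8 *data, unsigned int len, u8 *out)
    {
            struct crypto_shash *tfm;
            int err;

            tfm = crypto_alloc_shash("sha256", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            {
                    /* Sized via the new fixed maximum, so no VLA is generated. */
                    SHASH_DESC_ON_STACK(desc, tfm);

                    desc->tfm = tfm;
                    err = crypto_shash_digest(desc, data, len, out);
                    shash_desc_zero(desc);
            }

            crypto_free_shash(tfm);
            return err;
    }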
* crypto: cbc - Remove VLA usage (Kees Cook, 2018-09-04, 1 file, -1/+1)
  In the quest to remove all stack VLA usage from the kernel[1], this uses the upper bounds on blocksize. Since this is always a cipher blocksize, use the existing cipher max blocksize.
  [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: speck - remove Speck (Jason A. Donenfeld, 2018-09-04, 1 file, -62/+0)
  These are unused, undesired, and have never actually been used by anybody. The original authors of this code have changed their mind about its inclusion. While originally proposed for disk encryption on low-end devices, the idea was discarded [1] in favor of something else before that could really get going. Therefore, this patch removes Speck.
  [1] https://marc.info/?l=linux-crypto-vger&m=153359499015659
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  Acked-by: Eric Biggers <ebiggers@google.com>
  Cc: stable@vger.kernel.org
  Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: scatterwalk - remove scatterwalk_samebuf() (Eric Biggers, 2018-08-03, 1 file, -7/+0)
  scatterwalk_samebuf() is never used. Remove it.
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: scatterwalk - remove 'chain' argument from scatterwalk_crypto_chain() (Eric Biggers, 2018-08-03, 1 file, -7/+1)
  All callers pass chain=0 to scatterwalk_crypto_chain(). Remove this unneeded parameter.
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: drbg - in-place cipher operation for CTR (Stephan Müller, 2018-08-03, 1 file, -2/+0)
  The cipher implementations of the kernel crypto API favor in-place cipher operations. Thus, switch the CTR cipher operation in the DRBG to perform in-place operations. This is implemented by using the output buffer as input buffer and zeroizing it before the cipher operation to implement a CTR encryption of a NULL buffer.

  The speed improvement is quite visible with the following comparison using the LRNG implementation.

  Without the patch set:

      16 bytes|  12.267661 MB/s|   61338304 bytes | 5000000213 ns
      32 bytes|  23.603770 MB/s|  118018848 bytes | 5000000073 ns
      64 bytes|  46.732262 MB/s|  233661312 bytes | 5000000241 ns
     128 bytes|  90.038042 MB/s|  450190208 bytes | 5000000244 ns
     256 bytes| 160.399616 MB/s|  801998080 bytes | 5000000393 ns
     512 bytes| 259.878400 MB/s| 1299392000 bytes | 5000001675 ns
    1024 bytes| 386.050662 MB/s| 1930253312 bytes | 5000001661 ns
    2048 bytes| 493.641728 MB/s| 2468208640 bytes | 5000001598 ns
    4096 bytes| 581.835981 MB/s| 2909179904 bytes | 5000003426 ns

  With the patch set:

      16 bytes |  17.051142 MB/s |   85255712 bytes | 5000000854 ns
      32 bytes |  32.695898 MB/s |  163479488 bytes | 5000000544 ns
      64 bytes |  64.490739 MB/s |  322453696 bytes | 5000000954 ns
     128 bytes | 123.285043 MB/s |  616425216 bytes | 5000000201 ns
     256 bytes | 233.434573 MB/s | 1167172864 bytes | 5000000573 ns
     512 bytes | 384.405197 MB/s | 1922025984 bytes | 5000000671 ns
    1024 bytes | 566.313370 MB/s | 2831566848 bytes | 5000001080 ns
    2048 bytes | 744.518042 MB/s | 3722590208 bytes | 5000000926 ns
    4096 bytes | 867.501670 MB/s | 4337508352 bytes | 5000002181 ns

  Signed-off-by: Stephan Mueller <smueller@chronox.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux (Herbert Xu, 2018-08-03, 1 file, -1/+2)
  Merge mainline to pick up c7513c2a2714 ("crypto/arm64: aes-ce-gcm - add missing kernel_neon_begin/end pair").
| * Revert changes to convert to ->poll_mask() and aio IOCB_CMD_POLL (Linus Torvalds, 2018-06-28, 1 file, -1/+2)
  The poll() changes were not well thought out, and completely unexplained. They also caused a huge performance regression, because "->poll()" was no longer a trivial file operation that just called down to the underlying file operations, but instead did at least two indirect calls.

  Indirect calls are sadly slow now with the Spectre mitigation, but the performance problem could at least be largely mitigated by changing the "->get_poll_head()" operation to just have a per-file-descriptor pointer to the poll head instead. That gets rid of one of the new indirections. But that doesn't fix the new complexity that is completely unwarranted for the regular case.

  The (undocumented) reason for the poll() changes was some alleged AIO poll race fixing, but we don't make the common case slower and more complex for some uncommon special case, so this all really needs way more explanations and most likely a fundamental redesign.

  [ This revert is a revert of about 30 different commits, not reverted individually because that would just be unnecessarily messy - Linus ]
  Cc: Al Viro <viro@zeniv.linux.org.uk>
  Cc: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | crypto: drbg - eliminate constant reinitialization of SGL (Stephan Mueller, 2018-07-20, 1 file, -0/+1)
  The CTR DRBG requires two SGLs pointing to input/output buffers for the CTR AES operation. The used SGLs always have only one entry. Thus, the SGL can be initialized during allocation time, preventing a re-initialization of the SGLs during each call. The performance is increased by about 1 to 3 percent depending on the requested buffer size.
  Signed-off-by: Stephan Mueller <smueller@chronox.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: dh - add public key verification test (Stephan Mueller, 2018-07-09, 1 file, -0/+4)
  According to SP800-56A section 5.6.2.1, the public key to be processed for the DH operation shall be checked for appropriateness. The check shall cover the full verification test in case the domain parameter Q is provided as defined in SP800-56A section 5.6.2.3.1. If Q is not provided, the partial check according to SP800-56A section 5.6.2.3.2 is performed.

  The full verification test requires the presence of the domain parameter Q. Thus, the patch adds support to handle Q. It is permissible to not provide the Q value as part of the domain parameters. This implies that the interface is still backwards-compatible where so far only P and G are to be provided. However, if Q is provided, it is imported.

  Without the test, the NIST ACVP testing fails. After adding this check, the NIST ACVP testing passes. Testing without providing the Q domain parameter has been performed to verify the interface has not changed.
  Signed-off-by: Stephan Mueller <smueller@chronox.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
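  A hedged sketch of the parameter block with the optional Q value described above; the field layout is assumed from the commit text and renamed here for illustration, include/crypto/dh.h is authoritative:

    /* Domain parameters packed for the dh akcipher; q may be left NULL. */
    struct dh_params_example {
            void *key;              /* private key */
            void *p;                /* prime modulus */
            void *q;                /* optional subgroup order: enables the full
                                     * SP800-56A 5.6.2.3.1 public key check */
            void *g;                /* generator */
            unsigned int key_size;
            unsigned int p_size;
            unsigned int q_size;    /* 0 when q is not provided */
            unsigned int g_size;
    };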
* | crypto: vmac - separate tfm and request context (Eric Biggers, 2018-07-01, 1 file, -63/+0)
  syzbot reported a crash in vmac_final() when multiple threads concurrently use the same "vmac(aes)" transform through AF_ALG. The bug is pretty fundamental: the VMAC template doesn't separate per-request state from per-tfm (per-key) state like the other hash algorithms do, but rather stores it all in the tfm context. That's wrong.

  Also, vmac_final() incorrectly zeroes most of the state including the derived keys and cached pseudorandom pad. Therefore, only the first VMAC invocation with a given key calculates the correct digest.

  Fix these bugs by splitting the per-tfm state from the per-request state and using the proper init/update/final sequencing for requests.

  Reproducer for the crash:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
            int fd;
            struct sockaddr_alg addr = {
                    .salg_type = "hash",
                    .salg_name = "vmac(aes)",
            };
            char buf[256] = { 0 };

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, buf, 16);
            fork();
            fd = accept(fd, NULL, NULL);
            for (;;)
                    write(fd, buf, 256);
    }

  The immediate cause of the crash is that vmac_ctx_t.partial_size exceeds VMAC_NHBYTES, causing vmac_final() to memset() a negative length.
  Reported-by: syzbot+264bca3a6e8d645550d3@syzkaller.appspotmail.com
  Fixes: f1939f7c5645 ("crypto: vmac - New hash algorithm for intel_txt support")
  Cc: <stable@vger.kernel.org> # v2.6.32+
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: sha512_generic - add a sha384 0-length pre-computed hash (Antoine Tenart, 2018-06-22, 1 file, -0/+2)
  This patch adds the sha384 pre-computed 0-length hash so that device drivers can use it when a hardware engine does not support computing a hash from a 0-length input.
  Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | crypto: sha512_generic - add a sha512 0-length pre-computed hash (Antoine Tenart, 2018-06-22, 1 file, -0/+2)
  This patch adds the sha512 pre-computed 0-length hash so that device drivers can use it when a hardware engine does not support computing a hash from a 0-length input.
  Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
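  A sketch of the intended driver-side use: if the engine cannot hash an empty message, return the exported constant instead of programming the hardware. The symbol name is assumed to follow the existing shaXXX_zero_message_hash convention; see include/crypto/sha.h for the actual declaration.

    #include <crypto/sha.h>
    #include <linux/string.h>

    /* Returns true if the digest was produced without touching the engine. */
    static bool sha512_shortcut_empty_msg(unsigned int len, u8 *out)
    {
            if (len != 0)
                    return false;   /* fall through to the normal hardware path */

            memcpy(out, sha512_zero_message_hash, SHA512_DIGEST_SIZE);
            return true;
    }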
* Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2018-06-05, 6 files, -27/+308)
  Pull crypto updates from Herbert Xu:

  API:
  - Decryption test vectors are now automatically generated from encryption test vectors.

  Algorithms:
  - Fix unaligned access issues in crc32/crc32c.
  - Add zstd compression algorithm.
  - Add AEGIS.
  - Add MORUS.

  Drivers:
  - Add accelerated AEGIS/MORUS on x86.
  - Add accelerated SM4 on arm64.
  - Removed x86 assembly salsa implementation as it is slower than C.
  - Add authenc(hmac(sha*), cbc(aes)) support in inside-secure.
  - Add ctr(aes) support in crypto4xx.
  - Add hardware key support in ccree.
  - Add support for new Centaur CPU in via-rng.

  * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (112 commits)
    crypto: chtls - free beyond end rspq_skb_cache
    crypto: chtls - kbuild warnings
    crypto: chtls - dereference null variable
    crypto: chtls - wait for memory sendmsg, sendpage
    crypto: chtls - key len correction
    crypto: salsa20 - Revert "crypto: salsa20 - export generic helpers"
    crypto: x86/salsa20 - remove x86 salsa20 implementations
    crypto: ccp - Add GET_ID SEV command
    crypto: ccp - Add DOWNLOAD_FIRMWARE SEV command
    crypto: qat - Add MODULE_FIRMWARE for all qat drivers
    crypto: ccree - silence debug prints
    crypto: ccree - better clock handling
    crypto: ccree - correct host regs offset
    crypto: chelsio - Remove separate buffer used for DMA map B0 block in CCM
    crypt: chelsio - Send IV as Immediate for cipher algo
    crypto: chelsio - Return -ENOSPC for transient busy indication.
    crypto: caam/qi - fix warning in init_cgr()
    crypto: caam - fix rfc4543 descriptors
    crypto: caam - fix MC firmware detection
    crypto: clarify licensing of OpenSSL asm code
    ...
| * crypto: salsa20 - Revert "crypto: salsa20 - export generic helpers" (Eric Biggers, 2018-05-31, 1 file, -27/+0)
  This reverts commit eb772f37ae8163a89e28a435f6a18742ae06653b, as now the x86 Salsa20 implementation has been removed and the generic helpers are no longer needed outside of salsa20_generic.c. We could keep this just in case someone else wants to add a new optimized Salsa20 implementation. But given that we have ChaCha20 now too, I think it's unlikely. And this can always be reverted back.
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: morus - Add common SIMD glue code for MORUS (Ondrej Mosnacek, 2018-05-19, 2 files, -0/+274)
  This patch adds common glue code for optimized implementations of MORUS AEAD algorithms.
  Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: morus - Add generic MORUS AEAD implementations (Ondrej Mosnacek, 2018-05-19, 1 file, -0/+23)
  This patch adds the generic implementation of the MORUS family of AEAD algorithms (MORUS-640 and MORUS-1280). The original authors of MORUS are Hongjun Wu and Tao Huang.

  At the time of writing, MORUS is one of the finalists in CAESAR, an open competition intended to select a portfolio of alternatives to the problematic AES-GCM:
  https://competitions.cr.yp.to/caesar-submissions.html
  https://competitions.cr.yp.to/round3/morusv2.pdf
  Signed-off-by: Ondrej Mosnacek <omosnacek@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * crypto: sm4 - export encrypt/decrypt routines to other drivers (Ard Biesheuvel, 2018-05-05, 1 file, -0/+3)
  In preparation of adding support for the SIMD based arm64 implementation of SM4, which requires a fallback to non-SIMD code when invoked in certain contexts, expose the generic SM4 encrypt and decrypt routines to other drivers.
  Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Acked-by: Gilad Ben-Yossef <gilad@benyossef.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
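  A hedged sketch of a driver fallback using the newly exported routines; the signatures are assumed to mirror the generic implementation, see include/crypto/sm4.h for the exported declarations:

    #include <crypto/sm4.h>
    #include <linux/crypto.h>

    /* Called e.g. when the SIMD unit cannot be used in the current context. */
    static void sm4_fallback_crypt(struct crypto_tfm *tfm, u8 *dst,
                                   const u8 *src, bool encrypt)
    {
            if (encrypt)
                    crypto_sm4_encrypt(tfm, dst, src);
            else
                    crypto_sm4_decrypt(tfm, dst, src);
    }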
| * crypto: api - laying defines and checks for statically allocated buffers (Salvatore Mesoraca, 2018-04-21, 1 file, -0/+8)
  In preparation for the removal of VLAs[1] from crypto code, we create 2 new compile-time constants: all ciphers implemented in Linux have a block size less than or equal to 16 bytes, and the most demanding hardware requires 16-byte alignment for the block buffer. We also enforce these limits in crypto_check_alg when a new cipher is registered.
  [1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
  Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
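  A sketch of the limits and the registration-time check described above; the macro names are assumed from this patch series, and the real check lives inside crypto_check_alg():

    #include <linux/crypto.h>
    #include <linux/errno.h>

    #define MAX_CIPHER_BLOCKSIZE    16
    #define MAX_CIPHER_ALIGNMASK    15

    /* Reject ciphers whose block size or alignment exceed the static limits. */
    static int check_cipher_bounds(const struct crypto_alg *alg)
    {
            if (alg->cra_blocksize > MAX_CIPHER_BLOCKSIZE)
                    return -EINVAL;
            if (alg->cra_alignmask > MAX_CIPHER_ALIGNMASK)
                    return -EINVAL;
            return 0;
    }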
* | crypto: af_alg: convert to ->poll_mask (Christoph Hellwig, 2018-05-26, 1 file, -2/+1)
  Signed-off-by: Christoph Hellwig <hch@lst.de>
* crypto: api - Remove unused crypto_type lookup function (Herbert Xu, 2018-03-31, 1 file, -1/+0)
  The lookup function in crypto_type was only used for the implicit IV generators which have been completely removed from the crypto API. This patch removes the lookup function as it is now useless.
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: hash - Prevent use of req->result in ahash update (Kamil Konieczny, 2018-03-16, 1 file, -4/+7)
  Prevent improper use of the req->result field in ahash update, init, export and import functions in driver code. A driver should use the ahash request context if it needs to save internal state.
  Signed-off-by: Kamil Konieczny <k.konieczny@partner.samsung.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>