path: root/crypto
Commit message    Author    Age    Files    Lines
* crypto: shash - remove useless crypto_yield() in shash_ahash_digest()Eric Biggers2019-04-251-1/+0
| | | | | | | | The crypto_yield() in shash_ahash_digest() occurs after the entire digest operation already happened, so there's no real point. Remove it. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ccm - fix incompatibility between "ccm" and "ccm_base"Eric Biggers2019-04-191-26/+18
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | CCM instances can be created by either the "ccm" template, which only allows choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base", which allows choosing the ctr and cbcmac implementations, e.g. "ccm_base(ctr(aes-generic),cbcmac(aes-generic))". However, a "ccm_base" instance prevents a "ccm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "ccm". This can be used as a denial of service. Moreover, "ccm_base" instances are never tested by the crypto self-tests, even if there are compatible "ccm" tests. The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "ccm_base" instances set the same cra_name as "ccm" instances, e.g. "ccm(aes)" instead of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))". This requires extracting the block cipher name from the name of the ctr and cbcmac algorithms. It also requires starting to verify that the algorithms are really ctr and cbcmac using the same block cipher, not something else entirely. But it would be bizarre if anyone were actually using non-ccm-compatible algorithms with ccm_base, so this shouldn't break anyone in practice. Fixes: 4a49b499dfa0 ("[CRYPTO] ccm: Added CCM mode") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
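A minimal sketch of the naming scheme this fix describes (the gcm fix below follows the same pattern). The function and variable names here are illustrative, not the actual patch: both templates now advertise the same cra_name, while cra_driver_name still records the exact underlying implementations.

    #include <crypto/algapi.h>
    #include <linux/crypto.h>
    #include <linux/errno.h>
    #include <linux/kernel.h>

    /* Sketch only: instance naming after the fix. */
    static int ccm_set_names_sketch(struct crypto_instance *inst,
                                    const char *cipher_name,   /* e.g. "aes" */
                                    const char *ctr_drv_name,  /* e.g. "ctr(aes-generic)" */
                                    const char *mac_drv_name)  /* e.g. "cbcmac(aes-generic)" */
    {
            /* Both "ccm" and "ccm_base" instances get the same cra_name ... */
            if (snprintf(inst->alg.cra_name, CRYPTO_MAX_ALG_NAME,
                         "ccm(%s)", cipher_name) >= CRYPTO_MAX_ALG_NAME)
                    return -ENAMETOOLONG;

            /* ... while cra_driver_name still spells out the implementations. */
            if (snprintf(inst->alg.cra_driver_name, CRYPTO_MAX_ALG_NAME,
                         "ccm_base(%s,%s)", ctr_drv_name,
                         mac_drv_name) >= CRYPTO_MAX_ALG_NAME)
                    return -ENAMETOOLONG;

            return 0;
    }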
* crypto: gcm - fix incompatibility between "gcm" and "gcm_base"Eric Biggers2019-04-191-23/+11
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | GCM instances can be created by either the "gcm" template, which only allows choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base", which allows choosing the ctr and ghash implementations, e.g. "gcm_base(ctr(aes-generic),ghash-generic)". However, a "gcm_base" instance prevents a "gcm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "gcm". This can be used as a denial of service. Moreover, "gcm_base" instances are never tested by the crypto self-tests, even if there are compatible "gcm" tests. The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "gcm_base" instances set the same cra_name as "gcm" instances, e.g. "gcm(aes)" instead of "gcm_base(ctr(aes-generic),ghash-generic)". This requires extracting the block cipher name from the name of the ctr algorithm. It also requires starting to verify that the algorithms are really ctr and ghash, not something else entirely. But it would be bizarre if anyone were actually using non-gcm-compatible algorithms with gcm_base, so this shouldn't break anyone in practice. Fixes: d00aa19b507b ("[CRYPTO] gcm: Allow block cipher parameter") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: shash - fix missed optimization in shash_ahash_digest()Eric Biggers2019-04-181-1/+1
| | | | | | | | | | | shash_ahash_digest(), which is the ->digest() method for ahash tfms that use an shash algorithm, has an optimization where crypto_shash_digest() is called if the data is in a single page. But an off-by-one error prevented this path from being taken unless the user happened to provide extra data in the scatterlist. Fix it. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
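Roughly, the fast-path length check behaves as sketched below (not the exact kernel code): with '<' instead of '<=', a request whose data ends exactly at the page boundary was needlessly sent down the slower init/update/final path.

    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <linux/scatterlist.h>

    /* Sketch of the single-page fast-path test in shash_ahash_digest(). */
    static bool fits_in_one_page(struct scatterlist *sg, unsigned int nbytes)
    {
            unsigned int offset = sg->offset;
            unsigned int limit  = min(sg->length,
                                      (unsigned int)PAGE_SIZE - offset);

            return nbytes && nbytes <= limit;   /* was: nbytes < limit */
    }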
* crypto: cryptd - remove ability to instantiate ablkciphersEric Biggers2019-04-181-249/+0
| | | | | | | | | | | Remove cryptd_alloc_ablkcipher() and the ability of cryptd to create algorithms with the deprecated "ablkcipher" type. This has been unused since commit 0e145b477dea ("crypto: ablk_helper - remove ablk_helper"). Instead, cryptd_alloc_skcipher() is used. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: scompress - initialize per-CPU variables on each CPUSebastian Andrzej Siewior2019-04-181-2/+2
| | | | | | | | | | | | | | | | | In commit 71052dcf4be70 ("crypto: scompress - Use per-CPU struct instead multiple variables") I accidentally initialized the memory multiple times on a random CPU. I should have initialized the memory on every CPU, as had been done earlier. I didn't notice this because the scheduler didn't move the task to another CPU. Guenter managed to do that and the code crashed as expected. Allocate / free per-CPU memory on each CPU. Fixes: 71052dcf4be70 ("crypto: scompress - Use per-CPU struct instead multiple variables") Reported-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
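The gist of the fix, as a sketch (struct layout and buffer size are illustrative): iterate over every possible CPU instead of allocating only on whichever CPU happens to run the init code.

    #include <linux/errno.h>
    #include <linux/percpu.h>
    #include <linux/topology.h>
    #include <linux/vmalloc.h>

    /* Sketch: per-CPU scratch buffers must be set up for each CPU. */
    struct scratch { void *src; void *dst; };
    static DEFINE_PER_CPU(struct scratch, scratch);

    static int alloc_scratches_sketch(void)
    {
            int i;

            for_each_possible_cpu(i) {              /* not just the current CPU */
                    struct scratch *s = per_cpu_ptr(&scratch, i);

                    s->src = vmalloc_node(128 * 1024, cpu_to_node(i));
                    s->dst = vmalloc_node(128 * 1024, cpu_to_node(i));
                    if (!s->src || !s->dst)
                            return -ENOMEM;
            }
            return 0;
    }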
* crypto: run initcalls for generic implementations earlierEric Biggers2019-04-1883-83/+89
| | | | | | | | | | | | | | | | | | | | Use subsys_initcall for registration of all templates and generic algorithm implementations, rather than module_init. Then change cryptomgr to use arch_initcall, to place it before the subsys_initcalls. This is needed so that when both a generic and optimized implementation of an algorithm are built into the kernel (not loadable modules), the generic implementation is registered before the optimized one. Otherwise, the self-tests for the optimized implementation are unable to allocate the generic implementation for the new comparison fuzz tests. Note that on arm, a side effect of this change is that self-tests for generic implementations may run before the unaligned access handler has been installed. So, unaligned accesses will crash the kernel. This is arguably a good thing as it makes it easier to detect that type of bug. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
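In code terms the change is just the init-call level; a sketch with a placeholder init function name:

    #include <linux/init.h>

    static int __init my_generic_mod_init(void)   /* placeholder name */
    {
            /* would call crypto_register_alg() / crypto_register_shashes() */
            return 0;
    }

    /* Previously: module_init(my_generic_mod_init); now registered earlier,
     * so the generic implementation exists before the optimized ones are
     * self-tested.  cryptomgr itself moves to arch_initcall, earlier still. */
    subsys_initcall(my_generic_mod_init);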
* crypto: testmgr - fuzz AEADs against their generic implementationEric Biggers2019-04-181-0/+229
| | | | | | | | | | | | When the extra crypto self-tests are enabled, test each AEAD algorithm against its generic implementation when one is available. This involves: checking the algorithm properties for consistency, then randomly generating test vectors using the generic implementation and running them against the implementation under test. Both good and bad inputs are tested. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: testmgr - fuzz skciphers against their generic implementationEric Biggers2019-04-182-1/+198
| | | | | | | | | | | | | | | When the extra crypto self-tests are enabled, test each skcipher algorithm against its generic implementation when one is available. This involves: checking the algorithm properties for consistency, then randomly generating test vectors using the generic implementation and running them against the implementation under test. Both good and bad inputs are tested. This has already detected a bug in the skcipher_walk API, a bug in the LRW template, and an inconsistency in the cts implementations. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: testmgr - fuzz hashes against their generic implementationEric Biggers2019-04-181-4/+170
| | | | | | | | | | | | | | | When the extra crypto self-tests are enabled, test each hash algorithm against its generic implementation when one is available. This involves: checking the algorithm properties for consistency, then randomly generating test vectors using the generic implementation and running them against the implementation under test. Both good and bad inputs are tested. This has already detected a bug in the x86 implementation of poly1305, bugs in crct10dif, and an inconsistency in cbcmac. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: testmgr - add helpers for fuzzing against generic implementationEric Biggers2019-04-181-0/+128
| | | | | | | | Add some helper functions in preparation for fuzz testing algorithms against their generic implementation. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: testmgr - identify test vectors by name rather than numberEric Biggers2019-04-181-87/+96
| | | | | | | | | | | | In preparation for fuzz testing algorithms against their generic implementation, make error messages in testmgr identify test vectors by name rather than index. Built-in test vectors are simply "named" by their index in testmgr.h, as before. But (in later patches) generated test vectors will be given more descriptive names to help developers debug problems detected with them. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: testmgr - expand ability to test for errorsEric Biggers2019-04-182-50/+104
| | | | | | | | | | | | | | | | | | | | | Update testmgr to support testing for specific errors from setkey() and digest() for hashes; setkey() and encrypt()/decrypt() for skciphers and ciphers; and setkey(), setauthsize(), and encrypt()/decrypt() for AEADs. This is useful because algorithms usually restrict the lengths or format of the message, key, and/or authentication tag in some way. And bad inputs should be tested too, not just good inputs. As part of this change, remove the ambiguously-named 'fail' flag and replace it with 'setkey_error = -EINVAL' for the only test vector that used it -- the DES weak key test vector. Note that this tightens the test to require -EINVAL rather than any error code, but AFAICS this won't cause any test failure. Other than that, these new fields aren't set on any test vectors yet. Later patches will do so. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
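For example, the DES weak-key vector mentioned above would carry the new field roughly like this (a testmgr.h-style fragment; surrounding fields elided and the exact struct layout assumed):

    /* Sketch of a testmgr.h entry using the new expected-error field. */
    static const struct cipher_testvec des_weak_key_tv_sketch = {
            .key          = "\x01\x01\x01\x01\x01\x01\x01\x01",
            .klen         = 8,
            .setkey_error = -EINVAL,   /* the weak DES key must be rejected */
    };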
* crypto: ecrdsa - add EC-RDSA test vectors to testmgrVitaly Chikunov2019-04-182-0/+160
| | | | | | | | | Add testmgr test vectors for EC-RDSA algorithm for every of five supported parameters (curves). Because there are no officially published test vectors for the curves, the vectors are generated by gost-engine. Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: ecrdsa - add EC-RDSA (GOST 34.10) algorithmVitaly Chikunov2019-04-189-13/+1004
| Add the Elliptic Curve Russian Digital Signature Algorithm (GOST R 34.10-2012, RFC 7091, ISO/IEC 14888-3), one of the Russian (and since 2018 the CIS countries') cryptographic standard algorithms (called GOST algorithms). Only signature verification is supported, with the intent that it be used in the IMA.
| Summary of the changes:
| * crypto/Kconfig:
|   - EC-RDSA is added into the Public-key cryptography section.
| * crypto/Makefile:
|   - ecrdsa objects are added.
| * crypto/asymmetric_keys/x509_cert_parser.c:
|   - Recognize EC-RDSA and Streebog OIDs.
| * include/linux/oid_registry.h:
|   - EC-RDSA OIDs are added to the enum. Also, two currently unimplemented curve OIDs are added for possible later extension (so as not to change numbering and grouping).
| * crypto/ecc.c:
|   - Kenneth MacKay's copyright date is updated to 2014, because vli_mmod_slow, ecc_point_add, and ecc_point_mult_shamir are based on his code from micro-ecc.
|   - Functions needed for ecrdsa are EXPORT_SYMBOL'ed.
|   - New functions:
|     vli_is_negative - helper to determine the sign of a vli;
|     vli_from_be64 - unpack a big-endian array into a vli (used for a signature);
|     vli_from_le64 - unpack a little-endian array into a vli (used for a public key);
|     vli_uadd, vli_usub - add/sub a u64 value to/from a vli (used for increment/decrement);
|     mul_64_64 - optimized to use __int128 where appropriate; this speeds up point multiplication (and, as a consequence, signature verification) by a factor of 1.5-2;
|     vli_umult - multiply a vli by a small value (speeds up point multiplication by another factor of 1.5-2, depending on vli sizes);
|     vli_mmod_special - modular reduction for one form of Pseudo-Mersenne primes (used for the A curves);
|     vli_mmod_special2 - modular reduction for another form of Pseudo-Mersenne primes (used for the B curves);
|     vli_mmod_barrett - modular reduction using a pre-computed value (used for the C curve);
|     vli_mmod_slow - more general modular reduction which is much slower (used when the modulus is the subgroup order);
|     vli_mod_mult_slow - modular multiplication;
|     ecc_point_add - add two points;
|     ecc_point_mult_shamir - add two points multiplied by scalars in one combined multiplication (this gives a speedup by another factor of 2 compared to two separate multiplications);
|     ecc_is_pubkey_valid_partial - an additional sanity check is added.
|   - Updated vli_mmod_fast with a non-strict heuristic to call the optimal modular reduction function depending on the prime value.
|   - All computations for the previously defined (two NIST) curves should be unaffected.
| * crypto/ecc.h:
|   - Newly exported functions are documented.
| * crypto/ecrdsa_defs.h:
|   - Five curves are defined.
| * crypto/ecrdsa.c:
|   - Signature verification is implemented.
| * crypto/ecrdsa_params.asn1, crypto/ecrdsa_pub_key.asn1:
|   - Templates for the BER decoder for EC-RDSA parameters and public key.
| Cc: linux-integrity@vger.kernel.org Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
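As an illustration of the mul_64_64 speedup mentioned above, the __int128 variant is essentially the following (a sketch; the in-tree helper also keeps a portable fallback for compilers without 128-bit support):

    #include <linux/types.h>

    typedef struct { u64 m_low; u64 m_high; } uint128_t;

    /* 64x64 -> 128-bit multiply using the compiler's 128-bit type. */
    static uint128_t mul_64_64(u64 left, u64 right)
    {
            unsigned __int128 p = (unsigned __int128)left * right;
            uint128_t r;

            r.m_low  = (u64)p;
            r.m_high = (u64)(p >> 64);
            return r;
    }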
* crypto: ecc - make ecc into separate moduleVitaly Chikunov2019-04-185-23/+122
| | | | | | | | | | | | ecc.c have algorithms that could be used togeter by ecdh and ecrdsa. Make it separate module. Add CRYPTO_ECC into Kconfig. EXPORT_SYMBOL and document to what seems appropriate. Move structs ecc_point and ecc_curve from ecc_curve_defs.h into ecc.h. No code changes. Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: Kconfig - create Public-key cryptography sectionVitaly Chikunov2019-04-181-23/+25
| | | | | | | Group RSA, DH, and ECDH into Public-key cryptography config section. Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* X.509: parse public key parameters from x509 for akcipherVitaly Chikunov2019-04-186-21/+122
| | | | | | | | | | | | | | | | | | | | | | Some public key algorithms (like EC-DSA) keep important data, such as digest and curve OIDs, in the parameters field (possibly more for different EC-DSA variants). Thus, just setting a public key (as for RSA) is not enough. Append the parameters to the key stream for akcipher_set_{pub,priv}_key. The appended data is: (u32) algo OID, (u32) parameters length, parameters data. This affects neither the current akcipher API nor the RSA ciphers (they can ignore it). The idea of appending parameters to the key stream is Herbert Xu's. Cc: David Howells <dhowells@redhat.com> Cc: Denis Kenzior <denkenz@gmail.com> Cc: keyrings@vger.kernel.org Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Reviewed-by: Denis Kenzior <denkenz@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
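Schematically, the key stream passed to akcipher_set_{pub,priv}_key now looks like this (helper name and buffer handling are illustrative only):

    #include <linux/string.h>
    #include <linux/types.h>

    /* Sketch: key bytes, then (u32) algo OID, (u32) parameter length,
     * then the raw parameter data. */
    static unsigned int pack_key_and_params(u8 *buf,
                                            const u8 *key, u32 keylen,
                                            u32 algo_oid,
                                            const u8 *params, u32 paramlen)
    {
            u8 *p = buf;

            memcpy(p, key, keylen);             p += keylen;
            memcpy(p, &algo_oid, sizeof(u32));  p += sizeof(u32);
            memcpy(p, &paramlen, sizeof(u32));  p += sizeof(u32);
            memcpy(p, params, paramlen);        p += paramlen;

            return p - buf;
    }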
* KEYS: do not kmemdup digest in {public,tpm}_key_verify_signatureVitaly Chikunov2019-04-182-17/+2
| | | | | | | | | | | | | | | Treat (struct public_key_signature)'s digest the same as its signature (s). Since the digest should already be in kmalloc'd memory, do not kmemdup the digest value before calling {public,tpm}_key_verify_signature. The patch is split from the previous one as suggested by Herbert Xu. Suggested-by: David Howells <dhowells@redhat.com> Cc: David Howells <dhowells@redhat.com> Cc: keyrings@vger.kernel.org Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Reviewed-by: Denis Kenzior <denkenz@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: akcipher - new verify API for public key algorithmsVitaly Chikunov2019-04-184-78/+69
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The previous akcipher .verify() just `decrypts' the signature (using RSA encrypt, which uses the public key) to uncover the message hash, which was then compared at the upper level in public_key_verify_signature() with the expected hash value, which itself was never passed into verify(). This approach is incompatible with the EC-DSA family of algorithms, because, to verify a signature, an EC-DSA algorithm also needs the hash value as input; it is then used (together with a signature divided into halves `r||s') to produce a witness value, which is compared with `r' to determine whether the signature is correct. Thus, for EC-DSA, neither the requirements of .verify() itself nor its output expectations in public_key_verify_signature() were sufficient. Make an improved .verify() call which takes the hash value as input and performs the complete signature check without any output besides the status. Now, for top-level verification, only crypto_akcipher_verify() needs to be called and its return value inspected. Make sure that `digest' is in kmalloc'd memory (in place of `output') in {public,tpm}_key_verify_signature(), as insisted on by Herbert Xu; this will be changed in the following commit. Cc: David Howells <dhowells@redhat.com> Cc: keyrings@vger.kernel.org Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Reviewed-by: Denis Kenzior <denkenz@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: rsa - unimplement sign/verify for raw RSA backendsVitaly Chikunov2019-04-182-111/+2
| | | | | | | | | | | | | | | | | | | | In preparation for new akcipher verify call remove sign/verify callbacks from RSA backends and make PKCS1 driver call encrypt/decrypt instead. This also complies with the well-known idea that raw RSA should never be used for sign/verify. It only should be used with proper padding scheme such as PKCS1 driver provides. Cc: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Cc: qat-linux@intel.com Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Gary Hook <gary.hook@amd.com> Cc: Horia Geantă <horia.geanta@nxp.com> Cc: Aymen Sghaier <aymen.sghaier@nxp.com> Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Reviewed-by: Horia Geantă <horia.geanta@nxp.com> Acked-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: akcipher - default implementations for request callbacksVitaly Chikunov2019-04-181-0/+14
| | | | | | | | | | | | | Because, with the introduction of EC-RDSA and the change in how RSA handles sign/verify, an akcipher may not have all callbacks defined, check the presence of callbacks in crypto_register_akcipher() and provide a default implementation if a callback is not implemented. This was suggested by Herbert Xu instead of checking the presence of the callback on every request. Signed-off-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
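Conceptually the registration-time fixup looks like this (a sketch with assumed names; the default simply returns an error):

    #include <linux/errno.h>
    #include <crypto/akcipher.h>

    static int akcipher_default_op(struct akcipher_request *req)
    {
            return -ENOSYS;
    }

    /* Fill in any missing callbacks once, at registration time. */
    static void akcipher_prepare_alg_sketch(struct akcipher_alg *alg)
    {
            if (!alg->sign)
                    alg->sign = akcipher_default_op;
            if (!alg->verify)
                    alg->verify = akcipher_default_op;
            if (!alg->encrypt)
                    alg->encrypt = akcipher_default_op;
            if (!alg->decrypt)
                    alg->decrypt = akcipher_default_op;
    }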
* crypto: des_generic - Forbid 2-key in 3DES and add helpersHerbert Xu2019-04-181-7/+4
| | | | | | | | | | This patch adds a requirement to the generic 3DES implementation such that 2-key 3DES (K1 == K3) is no longer allowed in FIPS mode. We will also provide helpers that may be used by drivers that implement 3DES to make the same check. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
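The added FIPS restriction amounts to something like the following (an illustrative check only; the real helper also keeps the existing weak-key handling):

    #include <linux/errno.h>
    #include <linux/fips.h>
    #include <linux/string.h>
    #include <linux/types.h>
    #include <crypto/des.h>

    /* Reject 2-key 3DES (K1 == K3) when running in FIPS mode. */
    static int des3_forbid_2key(const u8 *key)
    {
            if (fips_enabled &&
                !memcmp(key, key + 2 * DES_KEY_SIZE, DES_KEY_SIZE))
                    return -EINVAL;
            return 0;
    }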
* crypto: salsa20 - don't access already-freed walk.ivEric Biggers2019-04-181-1/+1
| | | | | | | | | | | | | | | | | | | | | If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free. salsa20-generic doesn't set an alignmask, so currently it isn't affected by this despite unconditionally accessing walk.iv. However this is more subtle than desired, and it was actually broken prior to the alignmask being removed by commit b62b3db76f73 ("crypto: salsa20-generic - cleanup and convert to skcipher API"). Since salsa20-generic does not update the IV and does not need any IV alignment, update it to use req->iv instead of walk.iv. Fixes: 2407d60872dd ("[CRYPTO] salsa20: Salsa20 stream cipher") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: lrw - don't access already-freed walk.ivEric Biggers2019-04-181-1/+3
| | | | | | | | | | | | | | | | | | | | | If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free. Fix this in the LRW template by checking the return value of skcipher_walk_virt(). This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. When the extra self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free splat occurred during lrw(aes) testing. Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation") Cc: <stable@vger.kernel.org> # v4.20+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
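The pattern the fix enforces, sketched with placeholder names: never touch walk.iv until skcipher_walk_virt() has succeeded.

    #include <crypto/internal/skcipher.h>

    static int lrw_like_crypt_sketch(struct skcipher_request *req)
    {
            struct skcipher_walk walk;
            int err;

            err = skcipher_walk_virt(&walk, req, false);
            if (err)
                    return err;        /* walk.iv may already be freed here */

            /* ... derive the tweak from walk.iv and process the blocks ... */

            return err;
    }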
* crypto: testmgr - add panic_on_fail module parameterEric Biggers2019-04-081-2/+6
| | | | | | | | | | | | | | | | | | | Add a module parameter cryptomgr.panic_on_fail which causes the kernel to panic if any crypto self-tests fail. Use cases: - More easily detect crypto self-test failures by boot testing, e.g. on KernelCI. - Get a bug report if syzkaller manages to use the template system to instantiate an algorithm that fails its self-tests. The command-line option "fips=1" already does this, but it also makes other changes not wanted for general testing, such as disabling "unapproved" algorithms. panic_on_fail just does what it says. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
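A sketch of the knob (the parameter lives in the cryptomgr module, hence cryptomgr.panic_on_fail=1 on the kernel command line; the failure-reporting helper name is illustrative):

    #include <linux/kernel.h>
    #include <linux/moduleparam.h>

    static bool panic_on_fail;
    module_param(panic_on_fail, bool, 0444);

    static void report_selftest_failure(const char *driver)
    {
            if (panic_on_fail)
                    panic("alg: self-tests for %s failed", driver);

            pr_err("alg: self-tests for %s failed\n", driver);
    }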
* crypto: cts - don't support empty messagesEric Biggers2019-04-081-7/+11
| | | | | | | | | | | | | | | | | | | | | | | | | | | My patches to make testmgr fuzz algorithms against their generic implementation detected that the arm64 implementations of "cts(cbc(aes))" handle empty messages differently from the cts template. Namely, the arm64 implementations forbid (with -EINVAL) all messages shorter than the block size, including the empty message; but the cts template permits empty messages as a special case. No user should be CTS-encrypting/decrypting empty messages, but we need to keep the behavior consistent. Unfortunately, as noted in the source of OpenSSL's CTS implementation [1], there's no common specification for CTS. This makes it somewhat debatable what the behavior should be. However, all CTS specifications seem to agree that messages shorter than the block size are not allowed, and OpenSSL follows this in both CTS conventions it implements. It would also simplify the user-visible semantics to have empty messages no longer be a special case. Therefore, make the cts template return -EINVAL on *all* messages shorter than the block size, including the empty message. [1] https://github.com/openssl/openssl/blob/master/crypto/modes/cts128.c Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: streebog - fix unaligned memory accessesEric Biggers2019-04-081-12/+13
| | | | | | | | | | | Don't cast the data buffer directly to streebog_uint512, as this violates alignment rules. Fixes: fe18957e8e87 ("crypto: streebog - add Streebog hash function") Cc: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chacha20poly1305 - set cra_name correctlyEric Biggers2019-04-081-2/+2
| | | | | | | | | | | | | | | | | | | | | If the rfc7539 template is instantiated with specific implementations, e.g. "rfc7539(chacha20-generic,poly1305-generic)" rather than "rfc7539(chacha20,poly1305)", then the implementation names end up included in the instance's cra_name. This is incorrect because it then prevents all users from allocating "rfc7539(chacha20,poly1305)", if the highest priority implementations of chacha20 and poly1305 were selected. Also, the self-tests aren't run on an instance allocated in this way. Fix it by setting the instance's cra_name from the underlying algorithms' actual cra_names, rather than from the requested names. This matches what other templates do. Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539") Cc: <stable@vger.kernel.org> # v4.2+ Cc: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: skcipher - don't WARN on unprocessed data after slow walk stepEric Biggers2019-04-081-2/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | skcipher_walk_done() assumes it's a bug if, after the "slow" path is executed where the next chunk of data is processed via a bounce buffer, the algorithm says it didn't process all bytes. Thus it WARNs on this. However, this can happen legitimately when the message needs to be evenly divisible into "blocks" but isn't, and the algorithm has a 'walksize' greater than the block size. For example, ecb-aes-neonbs sets 'walksize' to 128 bytes and only supports messages evenly divisible into 16-byte blocks. If, say, 17 message bytes remain but they straddle scatterlist elements, the skcipher_walk code will take the "slow" path and pass the algorithm all 17 bytes in the bounce buffer. But the algorithm will only be able to process 16 bytes, triggering the WARN. Fix this by just removing the WARN_ON(). Returning -EINVAL, as the code already does, is the right behavior. This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: crct10dif-generic - fix use via crypto_shash_digest()Eric Biggers2019-04-081-7/+4
| | | | | | | | | | | | | | | | | | | | The ->digest() method of crct10dif-generic reads the current CRC value from the shash_desc context. But this value is uninitialized, causing crypto_shash_digest() to compute the wrong result. Fix it. Probably this wasn't noticed before because lib/crc-t10dif.c only uses crypto_shash_update(), not crypto_shash_digest(). Likewise, crypto_shash_digest() is not yet tested by the crypto self-tests because those only test the ahash API which only uses shash init/update/final. This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. Fixes: 2d31e518a428 ("crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework") Cc: <stable@vger.kernel.org> # v3.11+ Cc: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
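The essence of the fix, sketched: ->digest() must start from a fresh CRC rather than relying on (uninitialized) per-request state.

    #include <crypto/internal/hash.h>
    #include <linux/crc-t10dif.h>

    /* Sketch of a correct one-shot digest for crct10dif. */
    static int chksum_digest_sketch(struct shash_desc *desc, const u8 *data,
                                    unsigned int length, u8 *out)
    {
            *(__u16 *)out = crc_t10dif_generic(0, data, length);
            return 0;
    }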
* crypto: aes - Use ____cacheline_aligned for aes dataAndi Kleen2019-04-081-4/+4
| | | | | | | | | | | | | | __cacheline_aligned places data in a special section. It cannot be const at the same time because that section is not read-only. It doesn't give any MMU protection. Mark it ____cacheline_aligned to not place it in a special section, but just align it in .rodata. Cc: herbert@gondor.apana.org.au Suggested-by: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by: Andi Kleen <ak@linux.intel.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
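The distinction, illustrated with a made-up table (the attribute names are the real kernel macros):

    #include <linux/cache.h>
    #include <linux/types.h>

    /* ____cacheline_aligned only adds alignment, so the table can stay
     * const and live in .rodata.  __cacheline_aligned would also move it
     * into the .data..cacheline_aligned section, which is not read-only. */
    static const u32 example_tab[4] ____cacheline_aligned = {
            0x00000001, 0x00000002, 0x00000003, 0x00000004,
    };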
* crypto: scompress - Use per-CPU struct instead multiple variablesSebastian Andrzej Siewior2019-04-081-71/+54
| Two per-CPU variables are allocated as pointers to per-CPU memory which are then used as scratch buffers. We can be smarter about this and instead use a per-CPU struct which already contains the pointers; then we only need to allocate the scratch buffers. Add a lock to the struct. By doing so we can avoid the get_cpu() statement and gain lockdep coverage (if enabled) to ensure that the lock is always acquired in the right context. On non-preemptible kernels the lock vanishes. It is okay to use raw_cpu_ptr() in order to get a pointer to the struct since it is protected by the spinlock. The diffstat of this is negative, according to size(1) on scompress.o:
|    text    data     bss     dec     hex  filename
|    1847     160      24    2031     7ef  dbg_before.o
|    1754     232       4    1990     7c6  dbg_after.o
|    1799      64      24    1887     75f  no_dbg-before.o
|    1703      88       4    1795     703  no_dbg-after.o
| The overall size difference is also negative. The increase in the data section is only four bytes without lockdep. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
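A sketch of the resulting layout and usage (field and symbol names are illustrative; the real init code runs spin_lock_init() on each per-CPU lock):

    #include <linux/percpu.h>
    #include <linux/spinlock.h>

    struct scomp_scratch_sketch {
            spinlock_t  lock;
            void       *src;
            void       *dst;
    };

    static DEFINE_PER_CPU(struct scomp_scratch_sketch, scomp_scratch_sketch);

    static void scomp_use_scratch_sketch(void)
    {
            /* raw_cpu_ptr() is fine here: the spinlock, not preemption
             * disabling, protects the scratch buffers. */
            struct scomp_scratch_sketch *s = raw_cpu_ptr(&scomp_scratch_sketch);

            spin_lock(&s->lock);
            /* ... run the (de)compression via s->src and s->dst ... */
            spin_unlock(&s->lock);
    }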
* crypto: scompress - return proper error code for allocation failureSebastian Andrzej Siewior2019-04-081-1/+3
| | | | | | | | | | | | If scomp_acomp_comp_decomp() fails to allocate memory for the destination then we never copy back the data we compressed. It is probably best to return an error code instead of 0 in case of failure. I haven't found any user that is using acomp_request_set_params() without the `dst' buffer so there is probably no harm. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: fips - Grammar s/options/option/, s/to/the/Geert Uytterhoeven2019-03-281-2/+2
| | | | | | | Fixes: ccb778e1841ce04b ("crypto: api - Add fips_enable flag") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: Kconfig - fix typos AEGSI -> AEGISOndrej Mosnacek2019-03-221-3/+3
| | | | | | | Spotted while reviewing patches from Eric Biggers. Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: salsa20-generic - use crypto_xor_cpy()Eric Biggers2019-03-221-5/+4
| | | | | | | | In salsa20_docrypt(), use crypto_xor_cpy() instead of crypto_xor(). This avoids having to memcpy() the src buffer to the dst buffer. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: chacha-generic - use crypto_xor_cpy()Eric Biggers2019-03-221-5/+3
| | | | | | | | In chacha_docrypt(), use crypto_xor_cpy() instead of crypto_xor(). This avoids having to memcpy() the src buffer to the dst buffer. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
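The change in both the salsa20 and chacha generic drivers boils down to this pattern (sketch):

    #include <crypto/algapi.h>

    static void xor_keystream_sketch(u8 *dst, const u8 *src,
                                     const u8 *keystream, unsigned int bytes)
    {
            /* Before:
             *   memcpy(dst, src, bytes);
             *   crypto_xor(dst, keystream, bytes);
             * After: one combined copy-and-XOR.
             */
            crypto_xor_cpy(dst, src, keystream, bytes);
    }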
* crypto: testmgr - test the !may_use_simd() fallback codeEric Biggers2019-03-221-24/+92
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | All crypto API algorithms are supposed to support the case where they are called in a context where SIMD instructions are unusable, e.g. IRQ context on some architectures. However, this isn't tested for by the self-tests, causing bugs to go undetected. Now that all algorithms have been converted to use crypto_simd_usable(), update the self-tests to test the no-SIMD case. First, a bool testvec_config::nosimd is added. When set, the crypto operation is executed with preemption disabled and with crypto_simd_usable() mocked out to return false on the current CPU. A bool test_sg_division::nosimd is also added. For hash algorithms it's honored by the corresponding ->update(). By setting just a subset of these bools, the case where some ->update()s are done in SIMD context and some are done in no-SIMD context is also tested. These bools are then randomly set by generate_random_testvec_config(). For now, all no-SIMD testing is limited to the extra crypto self-tests, because it might be a bit too invasive for the regular self-tests. But this could be changed later. This has already found bugs in the arm64 AES-GCM and ChaCha algorithms. This would have found some past bugs as well. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: simd - convert to use crypto_simd_usable()Eric Biggers2019-03-221-4/+4
| | | | | | | | | Replace all calls to may_use_simd() in the shared SIMD helpers with crypto_simd_usable(), in order to allow testing the no-SIMD code paths. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: simd,testmgr - introduce crypto_simd_usable()Eric Biggers2019-03-221-1/+25
| | | | | | | | | | | So that the no-SIMD fallback code can be tested by the crypto self-tests, add a macro crypto_simd_usable() which wraps may_use_simd(), but also returns false if the crypto self-tests have set a per-CPU bool to disable SIMD in crypto code on the current CPU. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
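Typical call-site shape after this change (the SIMD and generic helpers here are placeholders; kernel_neon_begin/end are the arm64 variant, x86 uses kernel_fpu_begin/end):

    #include <crypto/internal/simd.h>
    #include <linux/types.h>
    #include <asm/neon.h>

    /* Placeholder prototypes for the two code paths. */
    void do_crypt_generic(u8 *dst, const u8 *src, unsigned int len);
    void do_crypt_simd(u8 *dst, const u8 *src, unsigned int len);

    static void do_crypt_sketch(u8 *dst, const u8 *src, unsigned int len)
    {
            if (!crypto_simd_usable()) {
                    do_crypt_generic(dst, src, len);   /* no-SIMD fallback */
                    return;
            }

            kernel_neon_begin();
            do_crypt_simd(dst, src, len);
            kernel_neon_end();
    }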
* crypto: chacha-generic - fix use as arm64 no-NEON fallbackEric Biggers2019-03-221-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The arm64 implementations of ChaCha and XChaCha are failing the extra crypto self-tests following my patches to test the !may_use_simd() code paths, which previously were untested. The problem is as follows: When !may_use_simd(), the arm64 NEON implementations fall back to the generic implementation, which uses the skcipher_walk API to iterate through the src/dst scatterlists. Due to how the skcipher_walk API works, walk.stride is set from the skcipher_alg actually being used, which in this case is the arm64 NEON algorithm. Thus walk.stride is 5*CHACHA_BLOCK_SIZE, not CHACHA_BLOCK_SIZE. This unnecessarily large stride shouldn't cause an actual problem. However, the generic implementation computes round_down(nbytes, walk.stride). round_down() assumes the round amount is a power of 2, which 5*CHACHA_BLOCK_SIZE is not, so it gives the wrong result. This causes the following case in skcipher_walk_done() to be hit, causing a WARN() and failing the encryption operation: if (WARN_ON(err)) { /* unexpected case; didn't process all bytes */ err = -EINVAL; goto finish; } Fix it by rounding down to CHACHA_BLOCK_SIZE instead of walk.stride. (Or we could replace round_down() with rounddown(), but that would add a slow division operation every time, which I think we should avoid.) Fixes: 2fe55987b262 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed") Cc: <stable@vger.kernel.org> # v5.0+ Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
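A worked example of the failure mode (round_down() and rounddown() are the standard kernel helpers; the values follow from the arm64 walk.stride of 5 * CHACHA_BLOCK_SIZE = 320, which is not a power of two):

    #include <linux/kernel.h>

    static unsigned int chacha_stride_example(void)
    {
            unsigned int nbytes = 400;

            /* round_down(400, 320) == (400 & ~319) == 128  -> wrong          */
            /* rounddown(400, 320)  == 320                  -> correct, but
             *                                                 costs a division */
            /* round_down(400, 64)  == 384                  -> the fix         */
            return round_down(nbytes, 64);
    }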
* crypto: testmgr - remove workaround for AEADs that modify aead_requestEric Biggers2019-03-221-3/+0
| | | | | | | | | | Now that all AEAD algorithms (that I have the hardware to test, at least) have been fixed to not modify the user-provided aead_request, remove the workaround from testmgr that reset aead_request::tfm after each AEAD encryption/decryption. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/morus1280 - convert to use AEAD SIMD helpersEric Biggers2019-03-221-1/+1
| | | | | | | | | | Convert the x86 implementations of MORUS-1280 to use the AEAD SIMD helpers, rather than hand-rolling the same functionality. This simplifies the code and also fixes the bug where the user-provided aead_request is modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/morus640 - convert to use AEAD SIMD helpersEric Biggers2019-03-221-1/+1
| | | | | | | | | | Convert the x86 implementation of MORUS-640 to use the AEAD SIMD helpers, rather than hand-rolling the same functionality. This simplifies the code and also fixes the bug where the user-provided aead_request is modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/aegis256 - convert to use AEAD SIMD helpersEric Biggers2019-03-221-1/+1
| | | | | | | | | | Convert the x86 implementation of AEGIS-256 to use the AEAD SIMD helpers, rather than hand-rolling the same functionality. This simplifies the code and also fixes the bug where the user-provided aead_request is modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/aegis128l - convert to use AEAD SIMD helpersEric Biggers2019-03-221-1/+1
| | | | | | | | | | Convert the x86 implementation of AEGIS-128L to use the AEAD SIMD helpers, rather than hand-rolling the same functionality. This simplifies the code and also fixes the bug where the user-provided aead_request is modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: x86/aegis128 - convert to use AEAD SIMD helpersEric Biggers2019-03-221-1/+1
| | | | | | | | | | Convert the x86 implementation of AEGIS-128 to use the AEAD SIMD helpers, rather than hand-rolling the same functionality. This simplifies the code and also fixes the bug where the user-provided aead_request is modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* crypto: simd - support wrapping AEAD algorithmsEric Biggers2019-03-221-0/+269
| | | | | | | | | | | | | | | | Update the crypto_simd module to support wrapping AEAD algorithms. Previously it only supported skciphers. The code for each is similar. I'll be converting the x86 implementations of AES-GCM, AEGIS, and MORUS to use this. Currently they each independently implement the same functionality. This will not only simplify the code, but it will also fix the bug detected by the improved self-tests: the user-provided aead_request is modified. This is because these algorithms currently reuse the original request, whereas the crypto_simd helpers build a new request in the original request's context. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* lib/lzo: separate lzo-rle from lzoDave Rodgman2019-03-073-3/+178
| | | | | | | | | | | | | | | | | | | | To prevent any issues with persistent data, separate lzo-rle from lzo so that it is treated as a separate algorithm, and lzo is still available. Link: http://lkml.kernel.org/r/20190205155944.16007-3-dave.rodgman@arm.com Signed-off-by: Dave Rodgman <dave.rodgman@arm.com> Cc: David S. Miller <davem@davemloft.net> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com> Cc: Matt Sealey <matt.sealey@arm.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Nitin Gupta <nitingupta910@gmail.com> Cc: Richard Purdie <rpurdie@openedhand.com> Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com> Cc: Sonny Rao <sonnyrao@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>