path: root/crypto
* Merge tag 'random-6.13-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random (Linus Torvalds, 44 hours ago; 1 file, -1/+1)

  Pull random number generator updates from Jason Donenfeld:

   "This contains a single series from Uros to replace uses of <linux/random.h>
    with prandom.h or other more specific headers as needed, in order to avoid
    a circular header issue. Uros' goal is to be able to use percpu.h from
    prandom.h, which will then allow him to define __percpu in percpu.h rather
    than in compiler_types.h"

  * tag 'random-6.13-rc1-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
      prandom: Include <linux/percpu.h> in <linux/prandom.h>
      random: Do not include <linux/prandom.h> in <linux/random.h>
      netem: Include <linux/prandom.h> in sch_netem.c
      lib/test_scanf: Include <linux/prandom.h> instead of <linux/random.h>
      lib/test_parman: Include <linux/prandom.h> instead of <linux/random.h>
      bpf/tests: Include <linux/prandom.h> instead of <linux/random.h>
      lib/rbtree-test: Include <linux/prandom.h> instead of <linux/random.h>
      random32: Include <linux/prandom.h> instead of <linux/random.h>
      kunit: string-stream-test: Include <linux/prandom.h>
      lib/interval_tree_test.c: Include <linux/prandom.h> instead of <linux/random.h>
      bpf: Include <linux/prandom.h> instead of <linux/random.h>
      scsi: libfcoe: Include <linux/prandom.h> instead of <linux/random.h>
      fscrypt: Include <linux/once.h> in fs/crypto/keyring.c
      mtd: tests: Include <linux/prandom.h> instead of <linux/random.h>
      media: vivid: Include <linux/prandom.h> in vivid-vid-cap.c
      drm/lib: Include <linux/prandom.h> instead of <linux/random.h>
      drm/i915/selftests: Include <linux/prandom.h> instead of <linux/random.h>
      crypto: testmgr: Include <linux/prandom.h> instead of <linux/random.h>
      x86/kaslr: Include <linux/prandom.h> instead of <linux/random.h>

| * crypto: testmgr: Include <linux/prandom.h> instead of <linux/random.h> (Uros Bizjak, 2024-10-03; 1 file, -1/+1)

  Substitute the inclusion of <linux/random.h> header with <linux/prandom.h>
  to allow the removal of legacy inclusion of <linux/prandom.h> from
  <linux/random.h>.

  Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
  Cc: Herbert Xu <herbert@gondor.apana.org.au>
  Cc: David S. Miller <davem@davemloft.net>
  Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>

* | Merge tag 'v6.13-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 44 hours ago; 23 files, -1055/+2330)

  Pull crypto updates from Herbert Xu:

   "API:
      - Add sig driver API
      - Remove signing/verification from akcipher API
      - Move crypto_simd_disabled_for_test to lib/crypto
      - Add WARN_ON for return values from driver that indicates memory corruption

    Algorithms:
      - Provide crc32-arch and crc32c-arch through Crypto API
      - Optimise crc32c code size on x86
      - Optimise crct10dif on arm/arm64
      - Optimise p10-aes-gcm on powerpc
      - Optimise aegis128 on x86
      - Output full sample from test interface in jitter RNG
      - Retry without padata when it fails in pcrypt

    Drivers:
      - Add support for Airoha EN7581 TRNG
      - Add support for STM32MP25x platforms in stm32
      - Enable iproc-r200 RNG driver on BCMBCA
      - Add Broadcom BCM74110 RNG driver"

  * tag 'v6.13-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (112 commits)
      crypto: marvell/cesa - fix uninit value for struct mv_cesa_op_ctx
      crypto: cavium - Fix an error handling path in cpt_ucode_load_fw()
      crypto: aesni - Move back to module_init
      crypto: lib/mpi - Export mpi_set_bit
      crypto: aes-gcm-p10 - Use the correct bit to test for P10
      hwrng: amd - remove reference to removed PPC_MAPLE config
      crypto: arm/crct10dif - Implement plain NEON variant
      crypto: arm/crct10dif - Macroify PMULL asm code
      crypto: arm/crct10dif - Use existing mov_l macro instead of __adrl
      crypto: arm64/crct10dif - Remove remaining 64x64 PMULL fallback code
      crypto: arm64/crct10dif - Use faster 16x64 bit polynomial multiply
      crypto: arm64/crct10dif - Remove obsolete chunking logic
      crypto: bcm - add error check in the ahash_hmac_init function
      crypto: caam - add error check to caam_rsa_set_priv_key_form
      hwrng: bcm74110 - Add Broadcom BCM74110 RNG driver
      dt-bindings: rng: add binding for BCM74110 RNG
      padata: Clean up in padata_do_multithreaded()
      crypto: inside-secure - Fix the return value of safexcel_xcbcmac_cra_init()
      crypto: qat - Fix missing destroy_workqueue in adf_init_aer()
      crypto: rsassa-pkcs1 - Reinstate support for legacy protocols
      ...

| * | crypto: rsassa-pkcs1 - Reinstate support for legacy protocols (Lukas Wunner, 11 days ago; 4 files, -5/+78)

  Commit 1e562deacecc ("crypto: rsassa-pkcs1 - Migrate to sig_alg backend") enforced
  that rsassa-pkcs1 sign/verify operations specify a hash algorithm. That is
  necessary because per RFC 8017 sec 8.2, a hash algorithm identifier must be
  prepended to the hash before generating or verifying the signature ("Full Hash
  Prefix").

  However the commit went too far in that it changed user space behavior:
  KEYCTL_PKEY_QUERY system calls now return -EINVAL unless they specify a hash
  algorithm. Intel Wireless Daemon (iwd) is one application issuing such system
  calls (for EAP-TLS).

  Closer analysis of the Embedded Linux Library (ell) used by iwd reveals that the
  problem runs even deeper: When iwd uses TLS 1.1 or earlier, it not only queries
  for keys, but performs sign/verify operations without specifying a hash
  algorithm. These legacy TLS versions concatenate an MD5 to a SHA-1 hash and omit
  the Full Hash Prefix:

  https://git.kernel.org/pub/scm/libs/ell/ell.git/tree/ell/tls-suites.c#n97

  TLS 1.1 was deprecated in 2021 by RFC 8996, but removal of support was
  inadvertent in this case. It probably should be coordinated with iwd maintainers
  first. So reinstate support for such legacy protocols by defaulting to hash
  algorithm "none" which uses an empty Full Hash Prefix.

  If it is later on decided to remove TLS 1.1 support but still allow
  KEYCTL_PKEY_QUERY without a hash algorithm, that can be achieved by reverting the
  present commit and replacing it with the following patch:

  https://lore.kernel.org/r/ZxalYZwH5UiGX5uj@wunner.de/

  It's worth noting that Python's cryptography library gained support for such
  legacy use cases very recently, so they do seem to still be a thing. The Python
  developers identified IKE version 1 as another protocol omitting the Full Hash
  Prefix:

  https://github.com/pyca/cryptography/issues/10226
  https://github.com/pyca/cryptography/issues/5495

  The author of those issues, Zoltan Kelemen, spent considerable effort searching
  for test vectors but only found one in a 2019 blog post by Kevin Jones. Add it
  to testmgr.h to verify correctness of this feature.

  Examination of wpa_supplicant as well as various IKE daemons (libreswan,
  strongswan, isakmpd, raccoon) has determined that none of them seems to use the
  kernel's Key Retention Service, so iwd is the only affected user space
  application known so far.

  Fixes: 1e562deacecc ("crypto: rsassa-pkcs1 - Migrate to sig_alg backend")
  Reported-by: Klara Modin <klarasmodin@gmail.com>
  Tested-by: Klara Modin <klarasmodin@gmail.com>
  Closes: https://lore.kernel.org/r/2ed09a22-86c0-4cf0-8bda-ef804ccb3413@gmail.com/
  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

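As background for the "empty Full Hash Prefix" above, here is a minimal userspace sketch of EMSA-PKCS1-v1_5 encoding per RFC 8017 sec 9.2. The function and buffer names are invented for illustration; with hash algorithm "none" the prefix is simply absent (prefix_len == 0) and the caller-supplied digest, e.g. the concatenated MD5 || SHA-1 blob of TLS 1.0/1.1, is emitted verbatim.

    #include <stddef.h>
    #include <string.h>

    /* EM = 0x00 || 0x01 || PS (>= 8 bytes of 0xFF) || 0x00 || prefix || digest */
    static int emsa_pkcs1_v1_5_encode(unsigned char *em, size_t em_len,
                                      const unsigned char *prefix, size_t prefix_len,
                                      const unsigned char *digest, size_t digest_len)
    {
            size_t t_len = prefix_len + digest_len;
            size_t ps_len;

            if (em_len < t_len + 11)
                    return -1;              /* digest too long for this key size */
            ps_len = em_len - t_len - 3;

            em[0] = 0x00;
            em[1] = 0x01;
            memset(em + 2, 0xff, ps_len);
            em[2 + ps_len] = 0x00;
            if (prefix_len)
                    memcpy(em + 3 + ps_len, prefix, prefix_len);
            memcpy(em + 3 + ps_len + prefix_len, digest, digest_len);
            return 0;
    }
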
| * | crypto: asymmetric_keys - Remove unused functions (Dr. David Alan Gilbert, 2024-11-02; 1 file, -63/+0)

  encrypt_blob(), decrypt_blob() and create_signature() were some of the functions
  added in 2018 by commit 5a30771832aa ("KEYS: Provide missing asymmetric key
  subops for new key type ops [ver #2]") however, they've not been used.

  Remove them.

  Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
  Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: api - move crypto_simd_disabled_for_test to lib (Eric Biggers, 2024-10-28; 1 file, -6/+0)

  Move crypto_simd_disabled_for_test to lib/ so that crypto_simd_usable() can be
  used by library code.

  This was discussed previously
  (https://lore.kernel.org/linux-crypto/20220716062920.210381-4-ebiggers@kernel.org/)
  but was not done because there was no use case yet. However, this is now needed
  for the arm64 CRC32 library code.

  Tested with:
      export ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
      echo CONFIG_CRC32=y > .config
      echo CONFIG_MODULES=y >> .config
      echo CONFIG_CRYPTO=m >> .config
      echo CONFIG_DEBUG_KERNEL=y >> .config
      echo CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=n >> .config
      echo CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y >> .config
      make olddefconfig
      make -j$(nproc)

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: crc32c - Provide crc32c-arch driver for accelerated library code (Ard Biesheuvel, 2024-10-28; 2 files, -22/+73)

  crc32c-generic is currently backed by the architecture's CRC-32c library code,
  which may offer a variety of implementations depending on the capabilities of
  the platform. These are not covered by the crypto subsystem's fuzz testing
  capabilities because crc32c-generic is the reference driver that the fuzzing
  logic uses as a source of truth.

  Fix this by providing a crc32c-arch implementation which is based on the arch
  library code if available, and modify crc32c-generic so it is always based on
  the generic C implementation. If the arch has no CRC-32c library code, this
  change does nothing.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: crc32 - Provide crc32-arch driver for accelerated library code (Ard Biesheuvel, 2024-10-28; 2 files, -24/+71)

  crc32-generic is currently backed by the architecture's CRC-32 library code,
  which may offer a variety of implementations depending on the capabilities of
  the platform. These are not covered by the crypto subsystem's fuzz testing
  capabilities because crc32-generic is the reference driver that the fuzzing
  logic uses as a source of truth.

  Fix this by providing a crc32-arch implementation which is based on the arch
  library code if available, and modify crc32-generic so it is always based on
  the generic C implementation. If the arch has no CRC-32 library code, this
  change does nothing.

  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: drbg - Use str_true_false() and str_enabled_disabled() helpers (Thorsten Blum, 2024-10-28; 1 file, -2/+3)

  Remove hard-coded strings by using the helper functions str_true_false() and
  str_enabled_disabled().

  Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

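The substitution pattern, sketched in kernel style (the drbg's actual log statements and variable names may differ; str_true_false() and str_enabled_disabled() are the helpers from <linux/string_choices.h>):

    #include <linux/string_choices.h>

    /* before: open-coded ternaries */
    pr_debug("DRBG: prediction resistance %s\n", pr ? "enabled" : "disabled");

    /* after: the helpers return the canonical string for a bool */
    pr_debug("DRBG: prediction resistance %s\n", str_enabled_disabled(pr));
    pr_debug("DRBG: reseed flag is %s\n", str_true_false(reseed));
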
| * | crypto: pcrypt - Call crypto layer directly when padata_do_parallel() return -EBUSY (Yi Yang, 2024-10-28; 1 file, -4/+8)

  Since commit 8f4f68e788c3 ("crypto: pcrypt - Fix hungtask for PADATA_RESET"),
  the pcrypt encryption and decryption operations return -EAGAIN when the CPU
  goes online or offline. In alg_test(), a WARN is generated when
  pcrypt_aead_decrypt() or pcrypt_aead_encrypt() returns -EAGAIN, and an
  unnecessary panic will occur when panic_on_warn is set to 1.

  Fix this issue by calling the crypto layer directly, without parallelization,
  in that case.

  Fixes: 8f4f68e788c3 ("crypto: pcrypt - Fix hungtask for PADATA_RESET")
  Signed-off-by: Yi Yang <yiyang13@huawei.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

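A hedged sketch of the fallback described above (field names such as ictx->psenc, ctx->cb_cpu and creq follow pcrypt's usual structure but are assumptions here, not a verbatim excerpt of the patch):

    err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu);
    if (!err)
            return -EINPROGRESS;
    if (err == -EBUSY) {
            /* padata is resetting (a CPU went on-/offline): skip the
             * parallel path and call the crypto layer synchronously. */
            return crypto_aead_encrypt(creq);
    }
    return err;
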
| * | crypto: ecdsa - Update Kconfig help text for NIST P521 (Lukas Wunner, 2024-10-28; 1 file, -1/+1)

  Commit a7d45ba77d3d ("crypto: ecdsa - Register NIST P521 and extend test
  suite") added support for ECDSA signature verification using NIST P521, but
  forgot to amend the Kconfig help text. Fix it.

  Fixes: a7d45ba77d3d ("crypto: ecdsa - Register NIST P521 and extend test suite")
  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: sig - Fix oops on KEYCTL_PKEY_QUERY for RSA keys (Lukas Wunner, 2024-10-26; 1 file, -12/+24)

  Commit a2471684dae2 ("crypto: ecdsa - Move X9.62 signature size calculation
  into template") introduced ->max_size() and ->digest_size() callbacks to
  struct sig_alg. They return an algorithm's maximum signature size and digest
  size, respectively. For algorithms which lack these callbacks,
  crypto_register_sig() was amended to use the ->key_size() callback instead.

  However the commit neglected to also amend sig_register_instance(). As a
  result, the ->max_size() and ->digest_size() callbacks remain NULL pointers
  if instances do not define them. A KEYCTL_PKEY_QUERY system call results in
  an oops for such instances:

      BUG: kernel NULL pointer dereference, address: 0000000000000000
      Call Trace:
       software_key_query+0x169/0x370
       query_asymmetric_key+0x67/0x90
       keyctl_pkey_query+0x86/0x120
       __do_sys_keyctl+0x428/0x480
       do_syscall_64+0x4b/0x110

  The only instances affected by this are "pkcs1(rsa, ...)".

  Fix by moving the callback checks from crypto_register_sig() to
  sig_prepare_alg(), which is also invoked by sig_register_instance(). Change
  the return type of sig_prepare_alg() from void to int to be able to return
  errors. This matches other algorithm types, see e.g. aead_prepare_alg() or
  ahash_prepare_alg().

  Fixes: a2471684dae2 ("crypto: ecdsa - Move X9.62 signature size calculation into template")
  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

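A minimal sketch of the fix's idea (structure inferred from the commit message, not copied from the patch): defaulting the new callbacks happens in sig_prepare_alg(), so both crypto_register_sig() and sig_register_instance() pick it up and KEYCTL_PKEY_QUERY never hits a NULL callback.

    static int sig_prepare_alg(struct sig_alg *alg)
    {
            if (!alg->key_size)
                    return -EINVAL;
            /* Fall back to the key size, as e.g. RSA signatures are key-sized. */
            if (!alg->max_size)
                    alg->max_size = alg->key_size;
            if (!alg->digest_size)
                    alg->digest_size = alg->key_size;
            return 0;
    }
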
| * | crypto: jitter - output full sample from test interface (Joachim Vandersmissen, 2024-10-19; 2 files, -17/+18)

  The Jitter RNG time delta is computed based on the difference of two
  high-resolution, 64-bit time stamps. However, the test interface added in
  69f1c387ba only outputs the lower 32 bits of those time stamps. To ensure all
  information is available during the evaluation process of the Jitter RNG,
  output the full 64-bit time stamps. Any clients collecting data from the test
  interface will need to be updated to take this change into account.

  Additionally, the size of the temporary buffer that holds the data for user
  space has been clarified. Previously, this buffer was
  JENT_TEST_RINGBUFFER_SIZE (= 1000) bytes in size, however that value
  represents the number of samples held in the kernel space ring buffer, with
  each sample taking 8 (previously 4) bytes. Rather than increasing the size to
  allow for all 1000 samples to be output, we keep it at 1000 bytes, but
  clarify that this means at most 125 64-bit samples will be output every time
  this interface is called.

  Reviewed-by: Stephan Mueller <smueller@chronox.de>
  Signed-off-by: Joachim Vandersmissen <git@jvdsn.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: ecrdsa - Fix signature size calculation (Lukas Wunner, 2024-10-05; 1 file, -0/+8)

  software_key_query() returns the curve size as maximum signature size for
  ecrdsa. However it should return twice as much.

  It's only the maximum signature size that seems to be off. The maximum digest
  size is likewise set to the curve size, but that's correct as it matches the
  checks in ecrdsa_set_pub_key() and ecrdsa_verify().

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: ecdsa - Support P1363 signature decoding (Lukas Wunner, 2024-10-05; 6 files, -1/+216)

  Alternatively to the X9.62 encoding of ecdsa signatures, which uses ASN.1 and
  is already supported by the kernel, there's another common encoding called
  P1363. It stores r and s as the concatenation of two big endian, unsigned
  integers. The name originates from IEEE P1363.

  Add a P1363 template in support of the forthcoming SPDM library (Security
  Protocol and Data Model) for PCI device authentication.

  P1363 is prescribed by SPDM 1.2.1 margin no 44:

   "For ECDSA signatures, excluding SM2, in SPDM, the signature shall be the
    concatenation of r and s. The size of r shall be the size of the selected
    curve. Likewise, the size of s shall be the size of the selected curve. See
    BaseAsymAlgo in NEGOTIATE_ALGORITHMS for the size of r and s. The byte order
    for r and s shall be in big endian order. When placing ECDSA signatures into
    an SPDM signature field, r shall come first followed by s."

  Link: https://www.dmtf.org/sites/default/files/standards/documents/DSP0274_1.2.1.pdf
  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

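A userspace sketch of the P1363 layout described above (the helper name is invented; it only shows the fixed-width, big-endian r || s concatenation, whereas X9.62 would wrap r and s as ASN.1 INTEGERs inside a SEQUENCE):

    #include <string.h>
    #include <stdint.h>

    /* out must hold 2 * curve_size bytes; r_len and s_len must be <= curve_size. */
    static void p1363_encode(uint8_t *out, size_t curve_size,
                             const uint8_t *r, size_t r_len,
                             const uint8_t *s, size_t s_len)
    {
            memset(out, 0, 2 * curve_size);              /* left-pad with zeros */
            memcpy(out + curve_size - r_len, r, r_len);
            memcpy(out + 2 * curve_size - s_len, s, s_len);
    }
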
| * | crypto: ecdsa - Move X9.62 signature size calculation into template (Lukas Wunner, 2024-10-05; 4 files, -34/+60)

  software_key_query() returns the maximum signature and digest size for a
  given key to user space. When it only supported RSA keys, calculating those
  sizes was trivial as they were always equivalent to the key size. However
  when ECDSA was added, the function grew somewhat complicated calculations
  which take the ASN.1 encoding and curve into account.

  This doesn't scale well and adjusting the calculations is easily forgotten
  when adding support for new encodings or curves. In fact, when NIST P521
  support was recently added, the function was initially not amended:

  https://lore.kernel.org/all/b749d5ee-c3b8-4cbd-b252-7773e4536e07@linux.ibm.com/

  Introduce a ->max_size() callback to struct sig_alg and take advantage of it
  to move the signature size calculations to ecdsa-x962.c.

  Introduce a ->digest_size() callback to struct sig_alg and move the maximum
  ECDSA digest size to ecdsa.c. It is common across ecdsa-x962.c and the
  upcoming ecdsa-p1363.c and thus inherited by both of them.

  For all other algorithms, continue using the key size as maximum signature
  and digest size.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: sig - Rename crypto_sig_maxsize() to crypto_sig_keysize() (Lukas Wunner, 2024-10-05; 7 files, -17/+17)

  crypto_sig_maxsize() is a bit of a misnomer as it doesn't return the maximum
  signature size, but rather the key size. Rename it as well as all
  implementations of the ->max_size callback. A subsequent commit introduces a
  crypto_sig_maxsize() function which returns the actual maximum signature
  size.

  While at it, change the return type of crypto_sig_keysize() from int to
  unsigned int for consistency with crypto_akcipher_maxsize(). None of the
  callers checks for a negative return value and an error condition can always
  be indicated by returning zero.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: ecdsa - Move X9.62 signature decoding into template (Lukas Wunner, 2024-10-05; 6 files, -70/+942)

  Unlike the rsa driver, which separates signature decoding and signature
  verification into two steps, the ecdsa driver does both in one. This
  restricts users to the one signature format currently supported (X9.62) and
  prevents addition of others such as P1363, which is needed by the forthcoming
  SPDM library (Security Protocol and Data Model) for PCI device
  authentication.

  Per Herbert's suggestion, change ecdsa to use a "raw" signature encoding and
  then implement X9.62 and P1363 as templates which convert their respective
  encodings to the raw one. One may then specify "x962(ecdsa-nist-XXX)" or
  "p1363(ecdsa-nist-XXX)" to pick the encoding.

  The present commit moves X9.62 decoding to a template. A separate commit is
  going to introduce another template for P1363 decoding.

  The ecdsa driver internally represents a signature as two u64 arrays of size
  ECC_MAX_BYTES. This appears to be the most natural choice for the raw format
  as it can directly be used for verification without having to further decode
  signature data or copy it around.

  Repurpose all the existing test vectors for "x962(ecdsa-nist-XXX)" and create
  a duplicate of them to test the raw encoding.

  Link: https://lore.kernel.org/all/ZoHXyGwRzVvYkcTP@gondor.apana.org.au/
  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Tested-by: Stefan Berger <stefanb@linux.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: ecdsa - Avoid signed integer overflow on signature decoding (Lukas Wunner, 2024-10-05; 1 file, -12/+7)

  When extracting a signature component r or s from an ASN.1-encoded integer,
  ecdsa_get_signature_rs() subtracts the expected length "bufsize" from the
  ASN.1 length "vlen" (both of unsigned type size_t) and stores the result in
  "diff" (of signed type ssize_t). This results in a signed integer overflow if
  vlen > SSIZE_MAX + bufsize.

  The kernel is compiled with -fno-strict-overflow, which implies -fwrapv,
  meaning signed integer overflow is not undefined behavior. And the function
  does check for overflow:

      if (-diff >= bufsize)
              return -EINVAL;

  So the code is fine in principle but not very obvious. In the future it might
  trigger a false-positive with CONFIG_UBSAN_SIGNED_WRAP=y.

  Avoid by comparing the two unsigned variables directly and erroring out if
  "vlen" is too large.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

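The hazard restated as a tiny sketch (hypothetical helper; the kernel function's exact bounds handling differs): subtracting two size_t values into a signed type can wrap, whereas comparing them first cannot.

    #include <stddef.h>

    /* Safer pattern: reject an oversized length before doing any subtraction,
     * so no signed intermediate value is ever needed. */
    static int leading_pad_bytes(size_t vlen, size_t bufsize)
    {
            if (vlen > bufsize)
                    return -1;              /* ASN.1 integer too large */
            return (int)(bufsize - vlen);   /* number of zero bytes to prepend */
    }
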
| * | crypto: sig - Move crypto_sig_*() API calls to include file (Lukas Wunner, 2024-10-05; 1 file, -46/+0)

  The crypto_sig_*() API calls lived in sig.c so far because they needed access
  to struct crypto_sig_type: This was necessary to differentiate between
  signature algorithms that had already been migrated from crypto_akcipher to
  crypto_sig and those that hadn't yet.

  Now that all algorithms have been migrated, the API calls can become static
  inlines in <crypto/sig.h> to mimic what <crypto/akcipher.h> is doing.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: akcipher - Drop sign/verify operations (Lukas Wunner, 2024-10-05; 5 files, -221/+55)

  A sig_alg backend has just been introduced and all asymmetric sign/verify
  algorithms have been migrated to it. The sign/verify operations can thus be
  dropped from akcipher_alg. It is now purely for asymmetric encrypt/decrypt.

  Move struct crypto_akcipher_sync_data from internal.h to akcipher.c and
  unexport crypto_akcipher_sync_{prep,post}(): They're no longer used by sig.c
  but only locally in akcipher.c.

  In crypto_akcipher_sync_{prep,post}(), drop various NULL pointer checks for
  data->dst as they were only necessary for the verify operation.

  In the crypto_sig_*() API calls, remove the forks that were necessary while
  algorithms were converted from crypto_akcipher to crypto_sig one by one.

  In struct akcipher_testvec, remove the "params", "param_len" and "algo"
  elements as they were only needed for the ecrdsa verify operation. Remove
  corresponding dead code from test_akcipher_one() as well.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: rsassa-pkcs1 - Avoid copying hash prefix (Lukas Wunner, 2024-10-05; 1 file, -8/+10)

  When constructing the EMSA-PKCS1-v1_5 padding for the sign operation, a
  buffer for the padding is allocated and the Full Hash Prefix is copied into
  it. The padding is then passed to the RSA decrypt operation as an sglist
  entry which is succeeded by a second sglist entry for the hash.

  Actually copying the hash prefix around is completely unnecessary. It can
  simply be referenced from a third sglist entry which sits in-between the
  padding and the digest.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

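A sketch of the three-entry scatterlist described above (buffer names are placeholders; sg_init_table() and sg_set_buf() are the standard kernel helpers):

    struct scatterlist in_sg[3];

    sg_init_table(in_sg, 3);
    sg_set_buf(&in_sg[0], padding, pad_len);         /* EMSA-PKCS1-v1_5 padding bytes */
    sg_set_buf(&in_sg[1], hash_prefix, prefix_len);  /* referenced, no longer copied  */
    sg_set_buf(&in_sg[2], digest, digest_len);       /* caller-supplied hash          */
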
| * | crypto: rsassa-pkcs1 - Harden digest length verification (Lukas Wunner, 2024-10-05; 1 file, -1/+19)

  The RSASSA-PKCS1-v1_5 sign operation currently only checks that the digest
  length is less than "key_size - hash_prefix->size - 11". The verify operation
  merely checks that it's more than zero.

  Actually the precise digest length is known because the hash algorithm is
  specified upon instance creation and the digest length is encoded into the
  final byte of the hash algorithm's Full Hash Prefix.

  So check for the exact digest length rather than solely relying on imprecise
  maximum/minimum checks. Keep the maximum length check for the sign operation
  as a safety net, but drop the now unnecessary minimum check for the verify
  operation.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

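Sketch of the exact-length check (field names are assumptions, not the patch; the point is only that the expected digest size is available as the final byte of the DER-encoded Full Hash Prefix):

    /* The Full Hash Prefix ends with the DER length octet of the digest. */
    if (dlen != hash_prefix->data[hash_prefix->size - 1])
            return -EINVAL;
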
| * | crypto: rsassa-pkcs1 - Migrate to sig_alg backend (Lukas Wunner, 2024-10-05; 8 files, -342/+475)

  A sig_alg backend has just been introduced with the intent of moving all
  asymmetric sign/verify algorithms to it one by one.

  Migrate the sign/verify operations from rsa-pkcs1pad.c to a separate
  rsassa-pkcs1.c which uses the new backend.

  Consequently there are now two templates which build on the "rsa"
  akcipher_alg:

    * The existing "pkcs1pad" template, which is instantiated as an
      akcipher_instance and retains the encrypt/decrypt operations of
      RSAES-PKCS1-v1_5 (RFC 8017 sec 7.2).

    * The new "pkcs1" template, which is instantiated as a sig_instance and
      contains the sign/verify operations of RSASSA-PKCS1-v1_5 (RFC 8017 sec
      8.2).

  In a separate step, rsa-pkcs1pad.c could optionally be renamed to
  rsaes-pkcs1.c for clarity. Additional "oaep" and "pss" templates could be
  added for RSAES-OAEP and RSASSA-PSS.

  Note that it's currently allowed to allocate a "pkcs1pad(rsa)" transform
  without specifying a hash algorithm. That makes sense if the transform is
  only used for encrypt/decrypt and continues to be supported. But for
  sign/verify, such transforms previously did not insert the Full Hash Prefix
  into the padding. The resulting message encoding was incompliant with
  EMSA-PKCS1-v1_5 (RFC 8017 sec 9.2) and therefore nonsensical.

  From here on in, it is no longer allowed to allocate a transform without
  specifying a hash algorithm if the transform is used for sign/verify
  operations. This simplifies the code because the insertion of the Full Hash
  Prefix is no longer optional, so various "if (digest_info)" clauses can be
  removed.

  There has been a previous attempt to forbid transform allocation without
  specifying a hash algorithm, namely by commit c0d20d22e0ad ("crypto:
  rsa-pkcs1pad - Require hash to be present"). It had to be rolled back with
  commit b3a8c8a5ebb5 ("crypto: rsa-pkcs1pad: Allow hash to be optional
  [ver #2]"), presumably because it broke allocation of a transform which was
  solely used for encrypt/decrypt, not sign/verify. Avoid such breakage by
  allowing transform allocation for encrypt/decrypt with and without specifying
  a hash algorithm (and simply ignoring the hash algorithm in the former case).

  So again, specifying a hash algorithm is now mandatory for sign/verify, but
  optional and ignored for encrypt/decrypt.

  The new sig_alg API uses kernel buffers instead of sglists, which avoids the
  overhead of copying signature and digest from sglists back into kernel
  buffers. rsassa-pkcs1.c is thus simplified quite a bit.

  sig_alg is always synchronous, whereas the underlying "rsa" akcipher_alg may
  be asynchronous. So await the result of the akcipher_alg, similar to
  crypto_akcipher_sync_{en,de}crypt().

  As part of the migration, rename "rsa_digest_info" to "hash_prefix" to adhere
  to the spec language in RFC 9580. Otherwise keep the code unmodified wherever
  possible to ease reviewing and bisecting. Leave several simplification and
  hardening opportunities to separate commits.

  rsassa-pkcs1.c uses modern __free() syntax for allocation of buffers which
  need to be freed by kfree_sensitive(), hence a DEFINE_FREE() clause for
  kfree_sensitive() is introduced herein as a byproduct.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: rsa-pkcs1pad - Deduplicate set_{pub,priv}_key callbacks (Lukas Wunner, 2024-10-05; 1 file, -28/+2)

  pkcs1pad_set_pub_key() and pkcs1pad_set_priv_key() are almost identical.

  The upcoming migration of sign/verify operations from rsa-pkcs1pad.c into a
  separate crypto_template will require another copy of the exact same
  functions. When RSASSA-PSS and RSAES-OAEP are introduced, each will need yet
  another copy.

  Deduplicate the functions into a single one which lives in a common header
  file for reuse by RSASSA-PKCS1-v1_5, RSASSA-PSS and RSAES-OAEP.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: ecrdsa - Migrate to sig_alg backend (Lukas Wunner, 2024-10-05; 4 files, -41/+28)

  A sig_alg backend has just been introduced with the intent of moving all
  asymmetric sign/verify algorithms to it one by one.

  Migrate ecrdsa.c to the new backend.

  One benefit of the new API is the use of kernel buffers instead of sglists,
  which avoids the overhead of copying signature and digest sglists back into
  kernel buffers. ecrdsa.c is thus simplified quite a bit.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: ecdsa - Migrate to sig_alg backend (Lukas Wunner, 2024-10-05; 4 files, -90/+55)

  A sig_alg backend has just been introduced with the intent of moving all
  asymmetric sign/verify algorithms to it one by one.

  Migrate ecdsa.c to the new backend.

  One benefit of the new API is the use of kernel buffers instead of sglists,
  which avoids the overhead of copying signature and digest sglists back into
  kernel buffers. ecdsa.c is thus simplified quite a bit.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: sig - Introduce sig_alg backend (Lukas Wunner, 2024-10-05; 3 files, -2/+269)

  Commit 6cb8815f41a9 ("crypto: sig - Add interface for sign/verify") began a
  transition of asymmetric sign/verify operations from crypto_akcipher to a new
  crypto_sig frontend. Internally, the crypto_sig frontend still uses
  akcipher_alg as backend, however:

   "The link between sig and akcipher is meant to be temporary. The plan is to
    create a new low-level API for sig and then migrate the signature code over
    to that from akcipher."
    https://lore.kernel.org/r/ZrG6w9wsb-iiLZIF@gondor.apana.org.au/

   "having a separate alg for sig is definitely where we want to be since there
    is very little that the two types actually share."
    https://lore.kernel.org/r/ZrHlpz4qnre0zWJO@gondor.apana.org.au/

  Take the next step of that migration and augment the crypto_sig frontend with
  a sig_alg backend to which all algorithms can be moved.

  During the migration, there will briefly be signature algorithms that are
  still based on crypto_akcipher, whilst others are already based on
  crypto_sig. Allow for that by building a fork into crypto_sig_*() API calls
  (i.e. crypto_sig_maxsize() and friends) such that one of the two backends is
  selected based on the transform's cra_type.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: ecdsa - Drop unused test vector elements (Lukas Wunner, 2024-10-05; 1 file, -105/+10)

  The ECDSA test vectors contain "params", "param_len" and "algo" elements even
  though ecdsa.c doesn't make any use of them. The only algorithm
  implementation using those elements is ecrdsa.c.

  Drop the unused test vector elements.

  For the curious, "params" is an ASN.1 SEQUENCE of OID_id_ecPublicKey and a
  second OID identifying the curve. For example:

      "\x30\x13\x06\x07\x2a\x86\x48\xce\x3d\x02\x01\x06\x08\x2a\x86\x48"
      "\xce\x3d\x03\x01\x01"

  ... decodes to:

      SEQUENCE (OID_id_ecPublicKey, OID_id_prime192v1)

  The curve OIDs used in those "params" elements are unsurprisingly:

      OID_id_prime192v1 (2a8648ce3d030101)
      OID_id_prime256v1 (2a8648ce3d030107)
      OID_id_ansip384r1 (2b81040022)
      OID_id_ansip521r1 (2b81040023)

  Those are just different names for secp192r1, secp256r1, secp384r1 and
  secp521r1, respectively, per RFC 8422 appendix A:

  https://www.rfc-editor.org/rfc/rfc8422#appendix-A

  The entries for secp384r1 and secp521r1 curves contain a useful code comment
  calling out the curve and hash. Add analogous code comments to secp192r1 and
  secp256r1 curve entries.

  Signed-off-by: Lukas Wunner <lukas@wunner.de>
  Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* | Merge tag 'v6.12-p3' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2024-10-16; 2 files, -13/+12)

  Pull crypto fixes from Herbert Xu:

   - Remove bogus testmgr ENOENT error messages
   - Ensure algorithm is still alive before marking it as tested
   - Disable buggy hash algorithms in marvell/cesa

  * tag 'v6.12-p3' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
      crypto: marvell/cesa - Disable hash algorithms
      crypto: testmgr - Hide ENOENT errors better
      crypto: api - Fix liveliness check in crypto_alg_tested

| * | crypto: testmgr - Hide ENOENT errors better (Herbert Xu, 2024-10-10; 1 file, -12/+11)

  The previous patch removed the ENOENT warning at the point of allocation, but
  the overall self-test warning is still there. Fix all of them by returning
  zero as the test result. This is safe because if the algorithm has gone away,
  then it cannot be marked as tested.

  Fixes: 4eded6d14f5b ("crypto: testmgr - Hide ENOENT errors")
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

| * | crypto: api - Fix liveliness check in crypto_alg_tested (Herbert Xu, 2024-10-10; 1 file, -1/+1)

  As algorithm testing is carried out without holding the main crypto lock, it
  is always possible for the algorithm to go away during the test. So before
  crypto_alg_tested updates the status of the tested alg, it checks whether
  it's still on the list of all algorithms. This is inaccurate because it may
  be off the main list but still on the list of algorithms to be removed.

  Updating the algorithm status is safe per se as the larval still holds a
  reference to it. However, killing spawns of other algorithms that are of
  lower priority is clearly a deficiency as it adds unnecessary churn.

  Fix the test by checking whether the algorithm is dead.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* | | move asm/unaligned.h to linux/unaligned.h (Al Viro, 2024-10-02; 26 files, -26/+26)

  asm/unaligned.h is always an include of asm-generic/unaligned.h; might as
  well move that thing to linux/unaligned.h and include that - there's nothing
  arch-specific in that header.

  auto-generated by the following:

      for i in `git grep -l -w asm/unaligned.h`; do
          sed -i -e "s/asm\/unaligned.h/linux\/unaligned.h/" $i
      done
      for i in `git grep -l -w asm-generic/unaligned.h`; do
          sed -i -e "s/asm-generic\/unaligned.h/linux\/unaligned.h/" $i
      done
      git mv include/asm-generic/unaligned.h include/linux/unaligned.h
      git mv tools/include/asm-generic/unaligned.h tools/include/linux/unaligned.h
      sed -i -e "/unaligned.h/d" include/asm-generic/Kbuild
      sed -i -e "s/__ASM_GENERIC/__LINUX/" include/linux/unaligned.h tools/include/linux/unaligned.h

* | KEYS: prevent NULL pointer dereference in find_asymmetric_key() (Roman Smirnov, 2024-09-20; 1 file, -3/+4)

  In find_asymmetric_key(), if all NULLs are passed in the id_{0,1,2}
  arguments, the kernel will first emit WARN but then have an oops because id_2
  gets dereferenced anyway.

  Add the missing id_2 check and move WARN_ON() to the final else branch to
  avoid duplicate NULL checks.

  Found by Linux Verification Center (linuxtesting.org) with Svace static
  analysis tool.

  Cc: stable@vger.kernel.org # v5.17+
  Fixes: 7d30198ee24f ("keys: X.509 public key issuer lookup without AKID")
  Suggested-by: Sergey Shtylyov <s.shtylyov@omp.ru>
  Signed-off-by: Roman Smirnov <r.smirnov@omp.ru>
  Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru>
  Reviewed-by: Jarkko Sakkinen <jarkko@kernel.org>
  Signed-off-by: Jarkko Sakkinen <jarkko@kernel.org>

* crypto: aegis128 - Fix indentation issue in crypto_aegis128_process_crypt() (Riyan Dhiman, 2024-09-13; 1 file, -2/+3)

  The code in crypto_aegis128_process_crypt() had an indentation issue where
  spaces were used instead of tabs. This commit corrects the indentation to use
  tabs, adhering to the Linux kernel coding style guidelines.

  Issue reported by checkpatch:
      - ERROR: code indent should use tabs where possible

  No functional changes are intended.

  Signed-off-by: Riyan Dhiman <riyandhiman14@gmail.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: testmgr - Hide ENOENT errors (Herbert Xu, 2024-09-06; 1 file, -1/+22)

  When a crypto algorithm with a higher priority is registered, it kills the
  spawns of all lower-priority algorithms. Thus it is to be expected for an
  algorithm to go away at any time, even during a self-test. This is now much
  more common with asynchronous testing.

  Remove the printk when an ENOENT is encountered during a self-test. This is
  not really an error since the algorithm being tested is no longer there
  (i.e., it didn't fail the test which is what we care about).

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: algboss - Pass instance creation error up (Herbert Xu, 2024-09-06; 1 file, -1/+2)

  Pass any errors we get during instance creation up through the larval.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: api - Fix generic algorithm self-test races (Herbert Xu, 2024-09-06; 1 file, -7/+8)

  On Fri, Aug 30, 2024 at 10:51:54AM -0700, Eric Biggers wrote:
  >
  > Given below in defconfig form, use 'make olddefconfig' to apply. The failures
  > are nondeterministic and sometimes there are different ones, for example:
  >
  > [ 0.358017] alg: skcipher: failed to allocate transform for cbc(twofish-generic): -2
  > [ 0.358365] alg: self-tests for cbc(twofish) using cbc(twofish-generic) failed (rc=-2)
  > [ 0.358535] alg: skcipher: failed to allocate transform for cbc(camellia-generic): -2
  > [ 0.358918] alg: self-tests for cbc(camellia) using cbc(camellia-generic) failed (rc=-2)
  > [ 0.371533] alg: skcipher: failed to allocate transform for xts(ecb(aes-generic)): -2
  > [ 0.371922] alg: self-tests for xts(aes) using xts(ecb(aes-generic)) failed (rc=-2)
  >
  > Modules are not enabled, maybe that matters (I haven't checked yet).

  Yes I think that was the key. This triggers a massive self-test run which
  executes in parallel and reveals a few race conditions in the system. I think
  it boils down to the following scenario:

      Base algorithm        X-generic, X-optimised
      Template              Y
      Optimised algorithm   Y-X-optimised

  Everything gets registered, and then the self-tests are started. When
  Y-X-optimised gets tested, it requests the creation of the generic
  Y(X-generic). Which then itself undergoes testing.

  The race is that after Y(X-generic) gets registered, but just before it gets
  tested, X-optimised finally finishes self-testing which then causes all
  spawns of X-generic to be destroyed. So by the time the self-test for
  Y(X-generic) comes along, it can no longer find the algorithm. This error
  then bubbles up all the way up to the self-test of Y-X-optimised which then
  fails.

  Note that there is some complexity that I've omitted here because when the
  generic self-test fails to find Y(X-generic) it actually triggers the
  construction of it again which then fails for various other reasons (these
  are not important because the construction should *not* be triggered at this
  point). So in a way the error is expected, and we should probably remove the
  pr_err for the case where ENOENT is returned for the algorithm that we're
  currently testing.

  The solution is two-fold. First when an algorithm undergoes self-testing it
  should not trigger its construction. Secondly if an instance larval fails to
  materialise due to it being destroyed by a more optimised algorithm coming
  along, it should obviously retry the construction.

  Remove the check in __crypto_alg_lookup that stops a larval from matching new
  requests based on differences in the mask. It is better to block new requests
  even if it is wrong and then simply retry the lookup. If this ends up being
  the wrong larval it will sort itself out during the retry.

  Reduce the CRYPTO_ALG_TYPE_MASK bits in type during larval creation as
  otherwise LSKCIPHER algorithms may not match SKCIPHER larvals.

  Also block the instance creation during self-testing in the function
  crypto_larval_lookup by checking for CRYPTO_ALG_TESTED in the mask field.

  Finally change the return value when crypto_alg_lookup fails in
  crypto_larval_wait to EAGAIN to redo the lookup.

  Fixes: 37da5d0ffa7b ("crypto: api - Do not wait for tests during registration")
  Reported-by: Eric Biggers <ebiggers@kernel.org>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: jitter - Use min() to simplify jent_read_entropy() (Thorsten Blum, 2024-08-30; 1 file, -4/+2)

  Use the min() macro to simplify the jent_read_entropy() function and improve
  its readability.

  Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: simd - Do not call crypto_alloc_tfm during registration (Herbert Xu, 2024-08-24; 1 file, -61/+15)

  Algorithm registration is usually carried out during module init, where as
  little work as possible should be carried out. The SIMD code violated this
  rule by allocating a tfm; this then triggers a full test of the algorithm
  which may dead-lock in certain cases.

  SIMD is only allocating the tfm to get at the alg object, which is in fact
  already available as it is what we are registering. Use that directly and
  remove the crypto_alloc_tfm call.

  Also remove some obsolete and unused SIMD API.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: api - Do not wait for tests during registration (Herbert Xu, 2024-08-24; 3 files, -31/+32)

  As registration is usually carried out during module init, this is a context
  where as little work as possible should be carried out. Testing may trigger
  module loads of underlying components, which could even lead back to the
  module that is registering at the moment. This may lead to dead-locks outside
  of the Crypto API.

  Avoid this by not waiting for the tests to complete. They will be scheduled
  but completion will be asynchronous. Any users will still wait for
  completion.

  Reported-by: Russell King <linux@armlinux.org.uk>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: api - Remove instance larval fulfilment (Herbert Xu, 2024-08-24; 3 files, -49/+23)

  In order to allow testing to complete asynchronously after the registration
  process, instance larvals need to complete prior to having a test result.
  Support this by redoing the lookup for instance larvals after completion.
  This should locate the pending test larval and then repeat the wait on that
  (if it is still pending).

  As the lookup is now repeated there is no longer any need to compute the
  fulfilment status and all that code can be removed.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: jitter - set default OSR to 3 (Stephan Mueller, 2024-08-24; 1 file, -1/+1)

  The user space Jitter RNG library uses the oversampling rate of 3 which
  implies that each time stamp is credited with 1/3 bit of entropy. To obtain
  256 bits of entropy, 768 time stamps need to be sampled. The increase in OSR
  is applied based on a report where the Jitter RNG is used on a system
  exhibiting a challenging environment to collect entropy.

  This OSR default value is now applied to the Linux kernel version of the
  Jitter RNG as well. The increase in the OSR from 1 to 3 also implies that the
  Jitter RNG is now slower by default.

  Reported-by: Jeff Barnes <jeffbarnes@microsoft.com>
  Signed-off-by: Stephan Mueller <smueller@chronox.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: rsa - Check MPI allocation errors (Herbert Xu, 2024-08-17; 1 file, -7/+12)

  Fixes: 6637e11e4ad2 ("crypto: rsa - allow only odd e and restrict value in FIPS mode")
  Fixes: f145d411a67e ("crypto: rsa - implement Chinese Remainder Theorem for faster private key operation")
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: dh - Check mpi_rshift errors (Herbert Xu, 2024-08-17; 1 file, -2/+2)

  Now that mpi_rshift can return errors, check them.

  Fixes: 35d2bf20683f ("crypto: dh - calculate Q from P for the full public key verification")
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

* crypto: chacha20poly1305 - Annotate struct chachapoly_ctx with __counted_by() (Thorsten Blum, 2024-08-17; 1 file, -1/+1)

  Add the __counted_by compiler attribute to the flexible array member salt to
  improve access bounds-checking via CONFIG_UBSAN_BOUNDS and
  CONFIG_FORTIFY_SOURCE.

  Reviewed-by: Kees Cook <kees@kernel.org>
  Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

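For reference, the annotation pattern on a flexible array member, shown on an assumed and simplified layout (the real struct chachapoly_ctx has further members; u8 and __counted_by() come from the kernel headers named below):

    #include <linux/compiler_attributes.h>  /* __counted_by() */
    #include <linux/types.h>                /* u8 */

    struct chachapoly_ctx_sketch {
            unsigned int saltlen;
            u8 salt[] __counted_by(saltlen); /* array bound now visible to UBSAN/FORTIFY */
    };
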
* crypto: xor - fix template benchmarking (Helge Deller, 2024-08-02; 1 file, -17/+14)

  Commit c055e3eae0f1 ("crypto: xor - use ktime for template benchmarking")
  switched from using jiffies to ktime-based performance benchmarking. This
  works nicely on machines which have a fine-grained ktime() clocksource as
  e.g. x86 machines with TSC. But other machines, e.g. my 4-way HP PARISC
  server, don't have such fine-grained clocksources, which is why it seems that
  800 xor loops take zero seconds, which then shows up in the logs as:

      xor: measuring software checksum speed
         8regs           : -1018167296 MB/sec
         8regs_prefetch  : -1018167296 MB/sec
         32regs          : -1018167296 MB/sec
         32regs_prefetch : -1018167296 MB/sec

  Fix this with some small modifications to the existing code to improve the
  algorithm to always produce correct results without introducing major delays
  for architectures with a fine-grained ktime() clocksource:

  a) Delay start of the timing until ktime() just advanced. On machines with a
     fast ktime() this should be just one additional ktime() call.

  b) Count the number of loops. Run at minimum 800 loops and finish earliest
     when the ktime() counter has progressed.

  With that the throughput can now be calculated more accurately under all
  conditions.

  Fixes: c055e3eae0f1 ("crypto: xor - use ktime for template benchmarking")
  Signed-off-by: Helge Deller <deller@gmx.de>
  Tested-by: John David Anglin <dave.anglin@bell.net>

  v2:
  - clean up coding style (noticed & suggested by Herbert Xu)
  - rephrased & fixed typo in commit message

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

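A userspace analogue of the measurement strategy in (a) and (b) above (clock_gettime() stands in for ktime(); this is an illustration of the approach, not the kernel's xor benchmark code):

    #include <stddef.h>
    #include <stdint.h>
    #include <time.h>

    static uint64_t now_ns(void)
    {
            struct timespec ts;

            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    }

    /* Returns approximate throughput in MB/s for fn() applied to len bytes. */
    static unsigned long bench(void (*fn)(void *buf, size_t len), void *buf, size_t len)
    {
            uint64_t start, end;
            unsigned long loops = 0;

            /* (a) wait for the clock to tick so a coarse clock cannot report 0 ns */
            end = now_ns();
            while ((start = now_ns()) == end)
                    ;

            /* (b) run at least 800 iterations and keep going until the clock advanced */
            do {
                    fn(buf, len);
                    loops++;
                    end = now_ns();
            } while (loops < 800 || end == start);

            /* bytes per nanosecond times 1000 gives MB/s */
            return (unsigned long)((uint64_t)loops * len * 1000 / (end - start));
    }
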
* crypto: testmgr - generate power-of-2 lengths more often (Eric Biggers, 2024-07-13; 1 file, -4/+12)

  Implementations of hash functions often have special cases when lengths are a
  multiple of the hash function's internal block size (e.g. 64 for SHA-256, 128
  for SHA-512). Currently, when the fuzz testing code generates lengths, it
  doesn't prefer any length mod 64 over any other. This limits the coverage of
  these special cases.

  Therefore, this patch updates the fuzz testing code to generate power-of-2
  lengths and divide messages exactly in half a bit more often.

  Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

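A hedged sketch of the kind of biased length generator the commit describes (not the testmgr code; rand() stands in for the kernel RNG, and max_len is assumed to be at least 1):

    #include <stdlib.h>

    /* Pick a test length in [1, max_len], preferring powers of two and exact
     * halves so block-boundary special cases get exercised more often. */
    static size_t pick_len(size_t max_len)
    {
            switch (rand() % 4) {
            case 0: {                       /* a power of two <= max_len */
                    size_t len = 1;

                    while (len * 2 <= max_len && (rand() & 1))
                            len *= 2;
                    return len;
            }
            case 1:                         /* exactly half of the buffer */
                    return max_len / 2 ? max_len / 2 : 1;
            default:                        /* otherwise uniform in [1, max_len] */
                    return (size_t)(rand() % max_len) + 1;
            }
    }
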
* crypto: deflate - Add aliases to deflate (Kyle Meyer, 2024-06-28; 1 file, -0/+1)

  iaa_crypto depends on the deflate compression algorithm that's provided by
  deflate. If the algorithm is not available because CRYPTO_DEFLATE=m and
  deflate is not inserted, iaa_crypto will request "crypto-deflate-generic".
  Deflate will not be inserted because "crypto-deflate-generic" is not a valid
  alias.

  Add deflate-generic and crypto-deflate-generic aliases to deflate.

  Signed-off-by: Kyle Meyer <kyle.meyer@hpe.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

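The fix boils down to adding a module alias; a sketch of the relevant lines in deflate.c (the pre-existing "deflate" alias is assumed; MODULE_ALIAS_CRYPTO() emits both the plain name and the "crypto-" prefixed form, which is the string iaa_crypto requests):

    MODULE_ALIAS_CRYPTO("deflate");          /* pre-existing */
    MODULE_ALIAS_CRYPTO("deflate-generic");  /* added: also provides "crypto-deflate-generic" */
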
* crypto: tcrypt - add skcipher speed for given alg (Sergey Portnoy, 2024-06-28; 1 file, -0/+9)

  Allow running the skcipher speed test for a given algorithm. Case 600 is
  modified to cover the ENCRYPT and DECRYPT directions.

  Example:

      modprobe tcrypt mode=600 alg="qat_aes_xts" klen=32

  If it succeeds, the performance numbers will be printed in dmesg:

      testing speed of multibuffer qat_aes_xts (qat_aes_xts) encryption
      test 0 (256 bit key, 16 byte blocks): 1 operation in 14596 cycles (16 bytes)
      ...
      test 6 (256 bit key, 4096 byte blocks): 1 operation in 8053 cycles (4096 bytes)

  Signed-off-by: Sergey Portnoy <sergey.portnoy@intel.com>
  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>