path: root/crypto
Commit message (Author, Date, Files, Lines changed)
* crypto: pcbc - remove bogus memcpy()s with src == dest (Eric Biggers, 2019-03-23, 1 file, -10/+4)
| | | | | | | | | | | | | | | | | | | | commit 251b7aea34ba3c4d4fdfa9447695642eb8b8b098 upstream. The memcpy()s in the PCBC implementation use walk->iv as both the source and destination, which has undefined behavior. These memcpy()'s are actually unneeded, because walk->iv is already used to hold the previous plaintext block XOR'd with the previous ciphertext block. Thus, walk->iv is already updated to its final value. So remove the broken and unnecessary memcpy()s. Fixes: 91652be5d1b9 ("[CRYPTO] pcbc: Add Propagated CBC template") Cc: <stable@vger.kernel.org> # v2.6.21+ Cc: David Howells <dhowells@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Maxim Zhukov <mussitantesmortem@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
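For context, a minimal user-space illustration (not the kernel patch, which simply deletes the copies since walk->iv already holds the final value): memcpy() is undefined when source and destination overlap, including when they are the same pointer, so a copy that may be a no-op must either be skipped or done with memmove().

	#include <string.h>

	/* Illustrative only: guard a potentially self-referential copy. */
	static void update_iv(unsigned char *dst, const unsigned char *src, size_t len)
	{
		if (dst != src)
			memcpy(dst, src, len);	/* memmove(dst, src, len) is also defined for overlap */
	}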
* crypto: ahash - fix another early termination in hash walk (Eric Biggers, 2019-03-23, 1 file, -7/+7)
| | | | | | | | | | | | | | | | | | | | commit 77568e535af7c4f97eaef1e555bf0af83772456c upstream. Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and "michael_mic", fail the improved hash tests because they sometimes produce the wrong digest. The bug is that in the case where a scatterlist element crosses pages, not all the data is actually hashed because the scatterlist walk terminates too early. This happens because the 'nbytes' variable in crypto_hash_walk_done() is assigned the number of bytes remaining in the page, then later interpreted as the number of bytes remaining in the scatterlist element. Fix it. Fixes: 900a081f6912 ("crypto: ahash - Fix early termination in hash walk") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: authenc - fix parsing key with misaligned rta_len (Eric Biggers, 2019-01-26, 1 file, -3/+11)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 8f9c469348487844328e162db57112f7d347c49f upstream. Keys for "authenc" AEADs are formatted as an rtattr containing a 4-byte 'enckeylen', followed by an authentication key and an encryption key. crypto_authenc_extractkeys() parses the key to find the inner keys. However, it fails to consider the case where the rtattr's payload is longer than 4 bytes but not 4-byte aligned, and where the key ends before the next 4-byte aligned boundary. In this case, 'keylen -= RTA_ALIGN(rta->rta_len);' underflows to a value near UINT_MAX. This causes a buffer overread and crash during crypto_ahash_setkey(). Fix it by restricting the rtattr payload to the expected size. Reproducer using AF_ALG: #include <linux/if_alg.h> #include <linux/rtnetlink.h> #include <sys/socket.h> int main() { int fd; struct sockaddr_alg addr = { .salg_type = "aead", .salg_name = "authenc(hmac(sha256),cbc(aes))", }; struct { struct rtattr attr; __be32 enckeylen; char keys[1]; } __attribute__((packed)) key = { .attr.rta_len = sizeof(key), .attr.rta_type = 1 /* CRYPTO_AUTHENC_KEYA_PARAM */, }; fd = socket(AF_ALG, SOCK_SEQPACKET, 0); bind(fd, (void *)&addr, sizeof(addr)); setsockopt(fd, SOL_ALG, ALG_SET_KEY, &key, sizeof(key)); } It caused: BUG: unable to handle kernel paging request at ffff88007ffdc000 PGD 2e01067 P4D 2e01067 PUD 2e04067 PMD 2e05067 PTE 0 Oops: 0000 [#1] SMP CPU: 0 PID: 883 Comm: authenc Not tainted 4.20.0-rc1-00108-g00c9fe37a7f27 #13 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014 RIP: 0010:sha256_ni_transform+0xb3/0x330 arch/x86/crypto/sha256_ni_asm.S:155 [...] Call Trace: sha256_ni_finup+0x10/0x20 arch/x86/crypto/sha256_ssse3_glue.c:321 crypto_shash_finup+0x1a/0x30 crypto/shash.c:178 shash_digest_unaligned+0x45/0x60 crypto/shash.c:186 crypto_shash_digest+0x24/0x40 crypto/shash.c:202 hmac_setkey+0x135/0x1e0 crypto/hmac.c:66 crypto_shash_setkey+0x2b/0xb0 crypto/shash.c:66 shash_async_setkey+0x10/0x20 crypto/shash.c:223 crypto_ahash_setkey+0x2d/0xa0 crypto/ahash.c:202 crypto_authenc_setkey+0x68/0x100 crypto/authenc.c:96 crypto_aead_setkey+0x2a/0xc0 crypto/aead.c:62 aead_setkey+0xc/0x10 crypto/algif_aead.c:526 alg_setkey crypto/af_alg.c:223 [inline] alg_setsockopt+0xfe/0x130 crypto/af_alg.c:256 __sys_setsockopt+0x6d/0xd0 net/socket.c:1902 __do_sys_setsockopt net/socket.c:1913 [inline] __se_sys_setsockopt net/socket.c:1910 [inline] __x64_sys_setsockopt+0x1f/0x30 net/socket.c:1910 do_syscall_64+0x4a/0x180 arch/x86/entry/common.c:290 entry_SYSCALL_64_after_hwframe+0x49/0xbe Fixes: e236d4a89a2f ("[CRYPTO] authenc: Move enckeylen into key itself") Cc: <stable@vger.kernel.org> # v2.6.25+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: cts - fix crash on short inputs (Eric Biggers, 2019-01-26, 1 file, -4/+4)
| | | | | | | | | | | | | | | | | | | | | | [It's a minimal fix for a bug that was fixed incidentally by a large refactoring in v4.8.] In the CTS template, when the input length is <= one block cipher block (e.g. <= 16 bytes for AES) pass the correct length to the underlying CBC transform rather than one block. This matches the upstream behavior and makes the encryption/decryption operation correctly return -EINVAL when 1 <= nbytes < bsize or succeed when nbytes == 0, rather than crashing. This was fixed upstream incidentally by a large refactoring, commit 0605c41cc53c ("crypto: cts - Convert to skcipher"). But syzkaller easily trips over this when running on older kernels, as it's easily reachable via AF_ALG. Therefore, this patch makes the minimal fix for older kernels. Cc: linux-crypto@vger.kernel.org Fixes: 76cb9521795a ("[CRYPTO] cts: Add CTS mode required for Kerberos AES support") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: lrw - Fix out-of-bounds access on counter overflow (Ondrej Mosnacek, 2018-11-22, 1 file, -1/+6)
| | | | | | | | | | | | | | | | commit fbe1a850b3b1522e9fc22319ccbbcd2ab05328d2 upstream. When the LRW block counter overflows, the current implementation returns 128 as the index to the precomputed multiplication table, which has 128 entries. This patch fixes it to return the correct value (127). Fixes: 64470f1b8510 ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode") Cc: <stable@vger.kernel.org> # 2.6.20+ Reported-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
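A self-contained sketch of the index computation involved, assuming the 128-bit block counter is kept as four 32-bit limbs (the helper names are illustrative, not the exact lrw.c code): the precomputed table has indices 0..127, so the wrap-around case must return 127 rather than 128.

	#include <stdint.h>

	/* lowest clear bit of x; only valid when x is not all ones */
	static int first_zero_bit(uint32_t x)
	{
		return __builtin_ctz(~x);
	}

	static int next_index(uint32_t counter[4])
	{
		int i, res = 0;

		for (i = 0; i < 4; i++) {
			if (counter[i] + 1 != 0)
				return res + first_zero_bit(counter[i]++);
			counter[i] = 0;
			res += 32;
		}
		/* counter wrapped from all ones to all zeros: clamp to the last entry */
		return 127;
	}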
* crypto: skcipher - Fix -Wstringop-truncation warnings (Stafford Horne, 2018-10-13, 2 files, -0/+3)
| | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit cefd769fd0192c84d638f66da202459ed8ad63ba ] As of GCC 9.0.0 the build is reporting warnings like: crypto/ablkcipher.c: In function ‘crypto_ablkcipher_report’: crypto/ablkcipher.c:374:2: warning: ‘strncpy’ specified bound 64 equals destination size [-Wstringop-truncation] strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "<default>", ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ sizeof(rblkcipher.geniv)); ~~~~~~~~~~~~~~~~~~~~~~~~~ This means the strncpy might create a non-null-terminated string. Fix this by explicitly performing '\0' termination. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Max Filippov <jcmvbkbc@gmail.com> Cc: Eric Biggers <ebiggers3@gmail.com> Cc: Nick Desaulniers <nick.desaulniers@gmail.com> Signed-off-by: Stafford Horne <shorne@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
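A minimal sketch of the pattern the fix applies, with an illustrative struct standing in for the real report structure: strncpy() leaves the destination unterminated when the source is at least as long as the bound, so the last byte is terminated explicitly.

	#include <string.h>

	struct report {
		char geniv[64];
	};

	static void fill_report(struct report *r, const char *geniv)
	{
		strncpy(r->geniv, geniv ? geniv : "<default>", sizeof(r->geniv));
		r->geniv[sizeof(r->geniv) - 1] = '\0';	/* explicit NUL termination */
	}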
* crypto: ablkcipher - fix crash flushing dcache in error path (Eric Biggers, 2018-08-17, 1 file, -31/+26)
| | | | | | | | | | | | | | | | | | | | | | | | | | | commit 318abdfbe708aaaa652c79fb500e9bd60521f9dc upstream. Like the skcipher_walk and blkcipher_walk cases: scatterwalk_done() is only meant to be called after a nonzero number of bytes have been processed, since scatterwalk_pagedone() will flush the dcache of the *previous* page. But in the error case of ablkcipher_walk_done(), e.g. if the input wasn't an integer number of blocks, scatterwalk_done() was actually called after advancing 0 bytes. This caused a crash ("BUG: unable to handle kernel paging request") during '!PageSlab(page)' on architectures like arm and arm64 that define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was page-aligned as in that case walk->offset == 0. Fix it by reorganizing ablkcipher_walk_done() to skip the scatterwalk_advance() and scatterwalk_done() if an error has occurred. Reported-by: Liu Chao <liuchao741@huawei.com> Fixes: bf06099db18a ("crypto: skcipher - Add ablkcipher_walk interfaces") Cc: <stable@vger.kernel.org> # v2.6.35+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: blkcipher - fix crash flushing dcache in error path (Eric Biggers, 2018-08-17, 1 file, -28/+26)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 0868def3e4100591e7a1fdbf3eed1439cc8f7ca3 upstream. Like the skcipher_walk case: scatterwalk_done() is only meant to be called after a nonzero number of bytes have been processed, since scatterwalk_pagedone() will flush the dcache of the *previous* page. But in the error case of blkcipher_walk_done(), e.g. if the input wasn't an integer number of blocks, scatterwalk_done() was actually called after advancing 0 bytes. This caused a crash ("BUG: unable to handle kernel paging request") during '!PageSlab(page)' on architectures like arm and arm64 that define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was page-aligned as in that case walk->offset == 0. Fix it by reorganizing blkcipher_walk_done() to skip the scatterwalk_advance() and scatterwalk_done() if an error has occurred. This bug was found by syzkaller fuzzing. Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE: #include <linux/if_alg.h> #include <sys/socket.h> #include <unistd.h> int main() { struct sockaddr_alg addr = { .salg_type = "skcipher", .salg_name = "ecb(aes-generic)", }; char buffer[4096] __attribute__((aligned(4096))) = { 0 }; int fd; fd = socket(AF_ALG, SOCK_SEQPACKET, 0); bind(fd, (void *)&addr, sizeof(addr)); setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16); fd = accept(fd, NULL, NULL); write(fd, buffer, 15); read(fd, buffer, 15); } Reported-by: Liu Chao <liuchao741@huawei.com> Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type") Cc: <stable@vger.kernel.org> # v2.6.19+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: vmac - separate tfm and request context (Eric Biggers, 2018-08-17, 1 file, -227/+181)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit bb29648102335586e9a66289a1d98a0cb392b6e5 upstream. syzbot reported a crash in vmac_final() when multiple threads concurrently use the same "vmac(aes)" transform through AF_ALG. The bug is pretty fundamental: the VMAC template doesn't separate per-request state from per-tfm (per-key) state like the other hash algorithms do, but rather stores it all in the tfm context. That's wrong. Also, vmac_final() incorrectly zeroes most of the state including the derived keys and cached pseudorandom pad. Therefore, only the first VMAC invocation with a given key calculates the correct digest. Fix these bugs by splitting the per-tfm state from the per-request state and using the proper init/update/final sequencing for requests. Reproducer for the crash: #include <linux/if_alg.h> #include <sys/socket.h> #include <unistd.h> int main() { int fd; struct sockaddr_alg addr = { .salg_type = "hash", .salg_name = "vmac(aes)", }; char buf[256] = { 0 }; fd = socket(AF_ALG, SOCK_SEQPACKET, 0); bind(fd, (void *)&addr, sizeof(addr)); setsockopt(fd, SOL_ALG, ALG_SET_KEY, buf, 16); fork(); fd = accept(fd, NULL, NULL); for (;;) write(fd, buf, 256); } The immediate cause of the crash is that vmac_ctx_t.partial_size exceeds VMAC_NHBYTES, causing vmac_final() to memset() a negative length. Reported-by: syzbot+264bca3a6e8d645550d3@syzkaller.appspotmail.com Fixes: f1939f7c5645 ("crypto: vmac - New hash algorithm for intel_txt support") Cc: <stable@vger.kernel.org> # v2.6.32+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: vmac - require a block cipher with 128-bit block size (Eric Biggers, 2018-08-17, 1 file, -0/+4)
| | | | | | | | | | | | | | | | | | commit 73bf20ef3df262026c3470241ae4ac8196943ffa upstream. The VMAC template assumes the block cipher has a 128-bit block size, but it failed to check for that. Thus it was possible to instantiate it using a 64-bit block size cipher, e.g. "vmac(cast5)", causing uninitialized memory to be used. Add the needed check when instantiating the template. Fixes: f1939f7c5645 ("crypto: vmac - New hash algorithm for intel_txt support") Cc: <stable@vger.kernel.org> # v2.6.32+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: authenc - don't leak pointers to authenc keys (Tudor-Dan Ambarus, 2018-08-09, 1 file, -0/+1)
| | | | | | | | | | | | | | [ Upstream commit ad2fdcdf75d169e7a5aec6c7cb421c0bec8ec711 ] In crypto_authenc_setkey we save pointers to the authenc keys in a local variable of type struct crypto_authenc_keys and we don't zeroize it after use. Fix this and don't leak pointers to the authenc keys. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
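A kernel-style sketch of the cleanup described above (function name and surrounding logic are illustrative, not the literal patch): the stack copy of the parsed keys is wiped with memzero_explicit(), which the compiler cannot optimise away, before the function returns.

	static int authenc_setkey_sketch(struct crypto_aead *authenc,
					 const u8 *key, unsigned int keylen)
	{
		struct crypto_authenc_keys keys;
		int err;

		err = crypto_authenc_extractkeys(&keys, key, keylen);
		if (err)
			goto out;

		/* ... program the inner auth and enc transforms from 'keys' ... */

	out:
		memzero_explicit(&keys, sizeof(keys));	/* don't leave key pointers on the stack */
		return err;
	}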
* crypto: authencesn - don't leak pointers to authenc keys (Tudor-Dan Ambarus, 2018-08-09, 1 file, -0/+1)
| | | | | | | | | | | | | | [ Upstream commit 31545df391d58a3bb60e29b1192644a6f2b5a8dd ] In crypto_authenc_esn_setkey we save pointers to the authenc keys in a local variable of type struct crypto_authenc_keys and we don't zeroize it after use. Fix this and don't leak pointers to the authenc keys. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* async_tx: Fix DMA_PREP_FENCE usage in do_async_gen_syndrome() (Anup Patel, 2018-04-13, 1 file, -3/+2)
| | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit baae03a0e2497f49704628fd0aaf993cf98e1b99 ] The DMA_PREP_FENCE is to be used when preparing a Tx descriptor whose output is to be used by the next/dependent Tx descriptor. The DMA_PREP_FENCE will not be set correctly in do_async_gen_syndrome() when calling dma->device_prep_dma_pq() under the following conditions: 1. ASYNC_TX_FENCE not set in submit->flags 2. DMA_PREP_FENCE not set in dma_flags 3. src_cnt (= (disks - 2)) is greater than dma_maxpq(dma, dma_flags) This patch fixes DMA_PREP_FENCE usage in do_async_gen_syndrome() taking inspiration from the do_async_xor() implementation. Signed-off-by: Anup Patel <anup.patel@broadcom.com> Reviewed-by: Ray Jui <ray.jui@broadcom.com> Reviewed-by: Scott Branden <scott.branden@broadcom.com> Acked-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
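A rough sketch of the pattern borrowed from do_async_xor() (variable names are illustrative and this is not the literal diff): the fence flag is derived from the submission flags each time the descriptor flags are built, just before dma->device_prep_dma_pq() is called.

	enum dma_ctrl_flags dma_flags = 0;

	if (submit->flags & ASYNC_TX_FENCE)
		dma_flags |= DMA_PREP_FENCE;	/* next descriptor depends on this one */

	tx = dma->device_prep_dma_pq(chan, dma_dest, dma_src, src_cnt,
				     coefs, len, dma_flags);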
* crypto: ahash - Fix early termination in hash walk (Herbert Xu, 2018-04-08, 1 file, -3/+4)
| | | | | | | | | | | | | | | | commit 900a081f6912a8985dc15380ec912752cb66025a upstream. When we have an unaligned SG list entry where there is no leftover aligned data, the hash walk code will incorrectly return zero as if the entire SG list has been processed. This patch fixes it by moving onto the next page instead. Reported-by: Eli Cooper <elicooper@gmx.com> Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: cryptd - pass through absence of ->setkey() (Eric Biggers, 2018-02-16, 1 file, -1/+2)
| | | | | | | | | | | | | | commit 841a3ff329713f796a63356fef6e2f72e4a3f6a3 upstream. When the cryptd template is used to wrap an unkeyed hash algorithm, don't install a ->setkey() method to the cryptd instance. This change is necessary for cryptd to keep working with unkeyed hash algorithms once we start enforcing that ->setkey() is called when present. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: hash - introduce crypto_hash_alg_has_setkey() (Eric Biggers, 2018-02-16, 1 file, -0/+11)
| | | | | | | | | | | | | | | | commit cd6ed77ad5d223dc6299fb58f62e0f5267f7e2ba upstream. Templates that use an shash spawn can use crypto_shash_alg_has_setkey() to determine whether the underlying algorithm requires a key or not. But there was no corresponding function for ahash spawns. Add it. Note that the new function actually has to support both shash and ahash algorithms, since the ahash API can be used with either. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: algapi - fix NULL dereference in crypto_remove_spawns() (Eric Biggers, 2018-01-17, 1 file, -0/+12)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 9a00674213a3f00394f4e3221b88f2d21fc05789 upstream. syzkaller triggered a NULL pointer dereference in crypto_remove_spawns() via a program that repeatedly and concurrently requests AEADs "authenc(cmac(des3_ede-asm),pcbc-aes-aesni)" and hashes "cmac(des3_ede)" through AF_ALG, where the hashes are requested as "untested" (CRYPTO_ALG_TESTED is set in ->salg_mask but clear in ->salg_feat; this causes the template to be instantiated for every request). Although AF_ALG users really shouldn't be able to request an "untested" algorithm, the NULL pointer dereference is actually caused by a longstanding race condition where crypto_remove_spawns() can encounter an instance which has had spawn(s) "grabbed" but hasn't yet been registered, resulting in ->cra_users still being NULL. We probably should properly initialize ->cra_users earlier, but that would require updating many templates individually. For now just fix the bug in a simple way that can easily be backported: make crypto_remove_spawns() treat a NULL ->cra_users list as empty. Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: mcryptd - protect the per-CPU queue with a lock (Sebastian Andrzej Siewior, 2018-01-02, 1 file, -13/+10)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 9abffc6f2efe46c3564c04312e52e07622d40e51 upstream. mcryptd_enqueue_request() grabs the per-CPU queue struct and protects access to it with disabled preemption. Then it schedules a worker on the same CPU. The worker in mcryptd_queue_worker() guards access to the same per-CPU variable with disabled preemption. If we take CPU-hotplug into account then it is possible that between queue_work_on() and the actual invocation of the worker the CPU goes down and the worker will be scheduled on _another_ CPU. And here the preempt_disable() protection does not work anymore. The easiest thing is to add a spin_lock() to guard access to the list. Another detail: mcryptd_queue_worker() is not processing more than MCRYPTD_BATCH invocation in a row. If there are still items left, then it will invoke queue_work() to proceed with more later. *I* would suggest to simply drop that check because it does not use a system workqueue and the workqueue is already marked as "CPU_INTENSIVE". And if preemption is required then the scheduler should do it. However if queue_work() is used then the work item is marked as CPU unbound. That means it will try to run on the local CPU but it may run on another CPU as well. Especially with CONFIG_DEBUG_WQ_FORCE_RR_CPU=y. Again, the preempt_disable() won't work here but lock which was introduced will help. In order to keep work-item on the local CPU (and avoid RR) I changed it to queue_work_on(). Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: salsa20 - fix blkcipher_walk API usage (Eric Biggers, 2017-12-20, 1 file, -7/+0)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit ecaaab5649781c5a0effdaf298a925063020500e upstream. When asked to encrypt or decrypt 0 bytes, both the generic and x86 implementations of Salsa20 crash in blkcipher_walk_done(), either when doing 'kfree(walk->buffer)' or 'free_page((unsigned long)walk->page)', because walk->buffer and walk->page have not been initialized. The bug is that Salsa20 is calling blkcipher_walk_done() even when nothing is in 'walk.nbytes'. But blkcipher_walk_done() is only meant to be called when a nonzero number of bytes have been provided. The broken code is part of an optimization that tries to make only one call to salsa20_encrypt_bytes() to process inputs that are not evenly divisible by 64 bytes. To fix the bug, just remove this "optimization" and use the blkcipher_walk API the same way all the other users do. Reproducer: #include <linux/if_alg.h> #include <sys/socket.h> #include <unistd.h> int main() { int algfd, reqfd; struct sockaddr_alg addr = { .salg_type = "skcipher", .salg_name = "salsa20", }; char key[16] = { 0 }; algfd = socket(AF_ALG, SOCK_SEQPACKET, 0); bind(algfd, (void *)&addr, sizeof(addr)); reqfd = accept(algfd, 0, 0); setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key)); read(reqfd, key, sizeof(key)); } Reported-by: syzbot <syzkaller@googlegroups.com> Fixes: eb6f13eb9f81 ("[CRYPTO] salsa20_generic: Fix multi-page processing") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: hmac - require that the underlying hash algorithm is unkeyed (Eric Biggers, 2017-12-20, 2 files, -3/+8)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit af3ff8045bbf3e32f1a448542e73abb4c8ceb6f1 upstream. Because the HMAC template didn't check that its underlying hash algorithm is unkeyed, trying to use "hmac(hmac(sha3-512-generic))" through AF_ALG or through KEYCTL_DH_COMPUTE resulted in the inner HMAC being used without having been keyed, resulting in sha3_update() being called without sha3_init(), causing a stack buffer overflow. This is a very old bug, but it seems to have only started causing real problems when SHA-3 support was added (requires CONFIG_CRYPTO_SHA3) because the innermost hash's state is ->import()ed from a zeroed buffer, and it just so happens that other hash algorithms are fine with that, but SHA-3 is not. However, there could be arch or hardware-dependent hash algorithms also affected; I couldn't test everything. Fix the bug by introducing a function crypto_shash_alg_has_setkey() which tests whether a shash algorithm is keyed. Then update the HMAC template to require that its underlying hash algorithm is unkeyed. Here is a reproducer: #include <linux/if_alg.h> #include <sys/socket.h> int main() { int algfd; struct sockaddr_alg addr = { .salg_type = "hash", .salg_name = "hmac(hmac(sha3-512-generic))", }; char key[4096] = { 0 }; algfd = socket(AF_ALG, SOCK_SEQPACKET, 0); bind(algfd, (const struct sockaddr *)&addr, sizeof(addr)); setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key)); } Here was the KASAN report from syzbot: BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:341 [inline] BUG: KASAN: stack-out-of-bounds in sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161 Write of size 4096 at addr ffff8801cca07c40 by task syzkaller076574/3044 CPU: 1 PID: 3044 Comm: syzkaller076574 Not tainted 4.14.0-mm1+ #25 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Call Trace: __dump_stack lib/dump_stack.c:17 [inline] dump_stack+0x194/0x257 lib/dump_stack.c:53 print_address_description+0x73/0x250 mm/kasan/report.c:252 kasan_report_error mm/kasan/report.c:351 [inline] kasan_report+0x25b/0x340 mm/kasan/report.c:409 check_memory_region_inline mm/kasan/kasan.c:260 [inline] check_memory_region+0x137/0x190 mm/kasan/kasan.c:267 memcpy+0x37/0x50 mm/kasan/kasan.c:303 memcpy include/linux/string.h:341 [inline] sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161 crypto_shash_update+0xcb/0x220 crypto/shash.c:109 shash_finup_unaligned+0x2a/0x60 crypto/shash.c:151 crypto_shash_finup+0xc4/0x120 crypto/shash.c:165 hmac_finup+0x182/0x330 crypto/hmac.c:152 crypto_shash_finup+0xc4/0x120 crypto/shash.c:165 shash_digest_unaligned+0x9e/0xd0 crypto/shash.c:172 crypto_shash_digest+0xc4/0x120 crypto/shash.c:186 hmac_setkey+0x36a/0x690 crypto/hmac.c:66 crypto_shash_setkey+0xad/0x190 crypto/shash.c:64 shash_async_setkey+0x47/0x60 crypto/shash.c:207 crypto_ahash_setkey+0xaf/0x180 crypto/ahash.c:200 hash_setkey+0x40/0x90 crypto/algif_hash.c:446 alg_setkey crypto/af_alg.c:221 [inline] alg_setsockopt+0x2a1/0x350 crypto/af_alg.c:254 SYSC_setsockopt net/socket.c:1851 [inline] SyS_setsockopt+0x189/0x360 net/socket.c:1830 entry_SYSCALL_64_fastpath+0x1f/0x96 Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* X.509: reject invalid BIT STRING for subjectPublicKey (Eric Biggers, 2017-12-16, 1 file, -0/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 0f30cbea005bd3077bd98cd29277d7fc2699c1da upstream. Adding a specially crafted X.509 certificate whose subjectPublicKey ASN.1 value is zero-length caused x509_extract_key_data() to set the public key size to SIZE_MAX, as it subtracted the nonexistent BIT STRING metadata byte. Then, x509_cert_parse() called kmemdup() with that bogus size, triggering the WARN_ON_ONCE() in kmalloc_slab(). This appears to be harmless, but it still must be fixed since WARNs are never supposed to be user-triggerable. Fix it by updating x509_cert_parse() to validate that the value has a BIT STRING metadata byte, and that the byte is 0 which indicates that the number of bits in the bitstring is a multiple of 8. It would be nice to handle the metadata byte in asn1_ber_decoder() instead. But that would be tricky because in the general case a BIT STRING could be implicitly tagged, and/or could legitimately have a length that is not a whole number of bytes. Here was the WARN (cleaned up slightly): WARNING: CPU: 1 PID: 202 at mm/slab_common.c:971 kmalloc_slab+0x5d/0x70 mm/slab_common.c:971 Modules linked in: CPU: 1 PID: 202 Comm: keyctl Tainted: G B 4.14.0-09238-g1d3b78bbc6e9 #26 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.11.0-20171110_100015-anatol 04/01/2014 task: ffff880033014180 task.stack: ffff8800305c8000 Call Trace: __do_kmalloc mm/slab.c:3706 [inline] __kmalloc_track_caller+0x22/0x2e0 mm/slab.c:3726 kmemdup+0x17/0x40 mm/util.c:118 kmemdup include/linux/string.h:414 [inline] x509_cert_parse+0x2cb/0x620 crypto/asymmetric_keys/x509_cert_parser.c:106 x509_key_preparse+0x61/0x750 crypto/asymmetric_keys/x509_public_key.c:174 asymmetric_key_preparse+0xa4/0x150 crypto/asymmetric_keys/asymmetric_type.c:388 key_create_or_update+0x4d4/0x10a0 security/keys/key.c:850 SYSC_add_key security/keys/keyctl.c:122 [inline] SyS_add_key+0xe8/0x290 security/keys/keyctl.c:62 entry_SYSCALL_64_fastpath+0x1f/0x96 Fixes: 42d5ec27f873 ("X.509: Add an ASN.1 decoder") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Revert "crypto: xts - Add ECB dependency" (Sasha Levin, 2017-11-21, 1 file, -1/+0)
| | | | | | | | | | This reverts commit 6145171a6bc0abdc3eca7a4b795ede467d2ba569. The commit fixes a bug that was only introduced in 4.10, thus is irrelevant for <=4.9. Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: xts - Add ECB dependency (Milan Broz, 2017-10-21, 1 file, -0/+1)
| | | | | | | | | | | | | | | [ Upstream commit 12cb3a1c4184f891d965d1f39f8cfcc9ef617647 ] Since commit f1c131b45410a202eb45cc55980a7a9e4e4b4f40 ("crypto: xts - Convert to skcipher"), the XTS mode is based on ECB, so the mode must select ECB, otherwise it can fail to initialize. Signed-off-by: Milan Broz <gmazyland@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: shash - Fix zero-length shash ahash digest crash (Herbert Xu, 2017-10-18, 1 file, -3/+5)
| | | | | | | | | | | | | | | | commit b61907bb42409adf9b3120f741af7c57dd7e3db2 upstream. The shash ahash digest adaptor function may crash if given a zero-length input together with a null SG list. This is because it tries to read the SG list before looking at the length. This patch fixes it by checking the length first. Reported-by: Stephan Müller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Stephan Müller <smueller@chronox.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: algif_skcipher - Load TX SG list after waiting (Herbert Xu, 2017-10-08, 1 file, -7/+7)
| | | | | | | | | | | | | | | | commit 4f0414e54e4d1893c6f08260693f8ef84c929293 upstream. We need to load the TX SG list in sendmsg(2) after waiting for incoming data, not before. [connoro@google.com: backport to 3.18, where the relevant logic is located in skcipher_recvmsg() rather than skcipher_recvmsg_sync()] Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Connor O'Brien <connoro@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: AF_ALG - remove SGL terminator indicator when chaining (Stephan Mueller, 2017-09-27, 1 file, -1/+3)
| | | | | | | | | | | | | | | | | | | | | | | | | | Fixed differently upstream as commit 2d97591ef43d ("crypto: af_alg - consolidation of duplicate code") The SGL is MAX_SGL_ENTS + 1 in size. The last SG entry is used for the chaining and is properly updated with the sg_chain invocation. During the filling-in of the initial SG entries, sg_mark_end is called for each SG entry. This is appropriate as long as no additional SGL is chained with the current SGL. However, when a new SGL is chained and the last SG entry is updated with sg_chain, the last but one entry still contains the end marker from the sg_mark_end. This end marker must be removed as otherwise a walk of the chained SGLs will cause a NULL pointer dereference at the last but one SG entry, because sg_next will return NULL. The patch only applies to all kernels up to and including 4.13. The patch 2d97591ef43d0587be22ad1b0d758d6df4999a0b added to 4.14-rc1 introduced a complete new code base which addresses this bug in a different way. Yet, that patch is too invasive for stable kernels and was therefore not marked for stable. Fixes: 8ff590903d5fc ("crypto: algif_skcipher - User-space interface for skcipher operations") Signed-off-by: Stephan Mueller <smueller@chronox.de> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
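A kernel-style sketch of the idea (the structure fields are illustrative, not the exact algif_skcipher layout): before the reserved last slot is turned into a chain entry, the stale end marker on the final data entry has to be cleared so that sg_next() keeps walking into the chained list.

	struct scatterlist *last = sgl->sg + sgl->cur - 1;	/* last data entry */

	sg_unmark_end(last);				/* drop the stale end marker */
	sg_chain(sgl->sg, sgl->cur + 1, new_sgl->sg);	/* chain via the reserved slot */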
* crypto: gcm - wait for crypto op not signal safe (Gilad Ben-Yossef, 2017-06-14, 1 file, -4/+2)
| | | | | | | | | | | | | | | | | commit f3ad587070d6bd961ab942b3fd7a85d00dfc934b upstream. crypto_gcm_setkey() was using wait_for_completion_interruptible() to wait for completion of async crypto op but if a signal occurs it may return before DMA ops of HW crypto provider finish, thus corrupting the data buffer that is kfree'ed in this case. Resolve this by using wait_for_completion() instead. Reported-by: Eric Biggers <ebiggers3@gmail.com> Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
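A condensed sketch of the corrected wait (the data pointer is illustrative): completion of the async cipher request must be waited for non-interruptibly, because returning early on a signal would free a buffer the hardware may still be writing to.

	err = crypto_ablkcipher_encrypt(req);
	if (err == -EINPROGRESS || err == -EBUSY) {
		/* was wait_for_completion_interruptible(); a signal must not
		 * cut the wait short while DMA is still in flight */
		wait_for_completion(&data->completion);
		err = data->err;
	}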
* crypto: ahash - Fix EINPROGRESS notification callback (Herbert Xu, 2017-04-22, 1 file, -29/+50)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | commit ef0579b64e93188710d48667cb5e014926af9f1b upstream. The ahash API modifies the request's callback function in order to clean up after itself in some corner cases (unaligned final and missing finup). When the request is complete ahash will restore the original callback and everything is fine. However, when the request gets an EBUSY on a full queue, an EINPROGRESS callback is made while the request is still ongoing. In this case the ahash API will incorrectly call its own callback. This patch fixes the problem by creating a temporary request object on the stack which is used to relay EINPROGRESS back to the original completion function. This patch also adds code to preserve the original flags value. Fixes: ab6bf4e5e5e4 ("crypto: hash - Fix the pointer voodoo in...") Reported-by: Sabrina Dubroca <sd@queasysnail.net> Tested-by: Sabrina Dubroca <sd@queasysnail.net> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: algif_hash - avoid zero-sized array (Jiri Slaby, 2017-04-18, 1 file, -1/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 6207119444595d287b1e9e83a2066c17209698f3 upstream. With this reproducer: struct sockaddr_alg alg = { .salg_family = 0x26, .salg_type = "hash", .salg_feat = 0xf, .salg_mask = 0x5, .salg_name = "digest_null", }; int sock, sock2; sock = socket(AF_ALG, SOCK_SEQPACKET, 0); bind(sock, (struct sockaddr *)&alg, sizeof(alg)); sock2 = accept(sock, NULL, NULL); setsockopt(sock, SOL_ALG, ALG_SET_KEY, "\x9b\xca", 2); accept(sock2, NULL, NULL); ==== 8< ======== 8< ======== 8< ======== 8< ==== one can immediatelly see an UBSAN warning: UBSAN: Undefined behaviour in crypto/algif_hash.c:187:7 variable length array bound value 0 <= 0 CPU: 0 PID: 15949 Comm: syz-executor Tainted: G E 4.4.30-0-default #1 ... Call Trace: ... [<ffffffff81d598fd>] ? __ubsan_handle_vla_bound_not_positive+0x13d/0x188 [<ffffffff81d597c0>] ? __ubsan_handle_out_of_bounds+0x1bc/0x1bc [<ffffffffa0e2204d>] ? hash_accept+0x5bd/0x7d0 [algif_hash] [<ffffffffa0e2293f>] ? hash_accept_nokey+0x3f/0x51 [algif_hash] [<ffffffffa0e206b0>] ? hash_accept_parent_nokey+0x4a0/0x4a0 [algif_hash] [<ffffffff8235c42b>] ? SyS_accept+0x2b/0x40 It is a correct warning, as hash state is propagated to accept as zero, but creating a zero-length variable array is not allowed in C. Fix this as proposed by Herbert -- do "?: 1" on that site. No sizeof or similar happens in the code there, so we just allocate one byte even though we do not use the array. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> (maintainer:CRYPTO API) Reported-by: Sasha Levin <sasha.levin@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
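The fix is essentially a one-liner; a sketch of its shape (assuming the variable-length array sits in hash_accept(), as the report above suggests): allocate at least one byte so the array bound can never be zero, even for algorithms such as digest_null whose statesize is 0.

	/* digest_null has a statesize of 0; a zero-length VLA is undefined */
	char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req)) ?: 1];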
* crypto: mcryptd - Fix load failure (Wang, Rui Y, 2017-04-18, 1 file, -0/+1)
| | | | | | | | | | | | | commit ddef482420b1ba8ec45e6123a7e8d3f67b21e5e3 upstream. mcryptd_create_hash() fails by returning -EINVAL, causing any driver using mcryptd to fail to load. It is because it needs to set its statesize properly. Signed-off-by: Rui Wang <rui.y.wang@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: cryptd - Assign statesize properly (Wang, Rui Y, 2017-04-18, 1 file, -0/+1)
| | | | | | | | | | | | | | | commit 1a07834024dfca5c4bed5de8f8714306e0a11836 upstream. cryptd_create_hash() fails by returning -EINVAL. It is because after 8996eafdc ("crypto: ahash - ensure statesize is non-zero") all ahash drivers must have a non-zero statesize. This patch fixes the problem by properly assigning the statesize. Signed-off-by: Rui Wang <rui.y.wang@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: improve gcc optimization flags for serpent and wp512 (Arnd Bergmann, 2017-04-18, 1 file, -0/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 7d6e9105026788c497f0ab32fa16c82f4ab5ff61 upstream. An ancient gcc bug (first reported in 2003) has apparently resurfaced on MIPS, where kernelci.org reports an overly large stack frame in the whirlpool hash algorithm: crypto/wp512.c:987:1: warning: the frame size of 1112 bytes is larger than 1024 bytes [-Wframe-larger-than=] With some testing in different configurations, I'm seeing large variations in stack frames size up to 1500 bytes for what should have around 300 bytes at most. I also checked the reference implementation, which is essentially the same code but also comes with some test and benchmarking infrastructure. It seems that recent compiler versions on at least arm, arm64 and powerpc have a partial fix for this problem, but enabling "-fsched-pressure", but even with that fix they suffer from the issue to a certain degree. Some testing on arm64 shows that the time needed to hash a given amount of data is roughly proportional to the stack frame size here, which makes sense given that the wp512 implementation is doing lots of loads for table lookups, and the problem with the overly large stack is a result of doing a lot more loads and stores for spilled registers (as seen from inspecting the object code). Disabling -fschedule-insns consistently fixes the problem for wp512, in my collection of cross-compilers, the results are consistently better or identical when comparing the stack sizes in this function, though some architectures (notable x86) have schedule-insns disabled by default. The four columns are: default: -O2 press: -O2 -fsched-pressure nopress: -O2 -fschedule-insns -fno-sched-pressure nosched: -O2 -no-schedule-insns (disables sched-pressure) default press nopress nosched alpha-linux-gcc-4.9.3 1136 848 1136 176 am33_2.0-linux-gcc-4.9.3 2100 2076 2100 2104 arm-linux-gnueabi-gcc-4.9.3 848 848 1048 352 cris-linux-gcc-4.9.3 272 272 272 272 frv-linux-gcc-4.9.3 1128 1000 1128 280 hppa64-linux-gcc-4.9.3 1128 336 1128 184 hppa-linux-gcc-4.9.3 644 308 644 276 i386-linux-gcc-4.9.3 352 352 352 352 m32r-linux-gcc-4.9.3 720 656 720 268 microblaze-linux-gcc-4.9.3 1108 604 1108 256 mips64-linux-gcc-4.9.3 1328 592 1328 208 mips-linux-gcc-4.9.3 1096 624 1096 240 powerpc64-linux-gcc-4.9.3 1088 432 1088 160 powerpc-linux-gcc-4.9.3 1080 584 1080 224 s390-linux-gcc-4.9.3 456 456 624 360 sh3-linux-gcc-4.9.3 292 292 292 292 sparc64-linux-gcc-4.9.3 992 240 992 208 sparc-linux-gcc-4.9.3 680 592 680 312 x86_64-linux-gcc-4.9.3 224 240 272 224 xtensa-linux-gcc-4.9.3 1152 704 1152 304 aarch64-linux-gcc-7.0.0 224 224 1104 208 arm-linux-gnueabi-gcc-7.0.1 824 824 1048 352 mips-linux-gcc-7.0.0 1120 648 1120 272 x86_64-linux-gcc-7.0.1 240 240 304 240 arm-linux-gnueabi-gcc-4.4.7 840 392 arm-linux-gnueabi-gcc-4.5.4 784 728 784 320 arm-linux-gnueabi-gcc-4.6.4 736 728 736 304 arm-linux-gnueabi-gcc-4.7.4 944 784 944 352 arm-linux-gnueabi-gcc-4.8.5 464 464 760 352 arm-linux-gnueabi-gcc-4.9.3 848 848 1048 352 arm-linux-gnueabi-gcc-5.3.1 824 824 1064 336 arm-linux-gnueabi-gcc-6.1.1 808 808 1056 344 arm-linux-gnueabi-gcc-7.0.1 824 824 1048 352 Trying the same test for serpent-generic, the picture is a bit different, and while -fno-schedule-insns is generally better here than the default, -fsched-pressure wins overall, so I picked that instead. 
default press nopress nosched alpha-linux-gcc-4.9.3 1392 864 1392 960 am33_2.0-linux-gcc-4.9.3 536 524 536 528 arm-linux-gnueabi-gcc-4.9.3 552 552 776 536 cris-linux-gcc-4.9.3 528 528 528 528 frv-linux-gcc-4.9.3 536 400 536 504 hppa64-linux-gcc-4.9.3 524 208 524 480 hppa-linux-gcc-4.9.3 768 472 768 508 i386-linux-gcc-4.9.3 564 564 564 564 m32r-linux-gcc-4.9.3 712 576 712 532 microblaze-linux-gcc-4.9.3 724 392 724 512 mips64-linux-gcc-4.9.3 720 384 720 496 mips-linux-gcc-4.9.3 728 384 728 496 powerpc64-linux-gcc-4.9.3 704 304 704 480 powerpc-linux-gcc-4.9.3 704 296 704 480 s390-linux-gcc-4.9.3 560 560 592 536 sh3-linux-gcc-4.9.3 540 540 540 540 sparc64-linux-gcc-4.9.3 544 352 544 496 sparc-linux-gcc-4.9.3 544 344 544 496 x86_64-linux-gcc-4.9.3 528 536 576 528 xtensa-linux-gcc-4.9.3 752 544 752 544 aarch64-linux-gcc-7.0.0 432 432 656 480 arm-linux-gnueabi-gcc-7.0.1 616 616 808 536 mips-linux-gcc-7.0.0 720 464 720 488 x86_64-linux-gcc-7.0.1 536 528 600 536 arm-linux-gnueabi-gcc-4.4.7 592 440 arm-linux-gnueabi-gcc-4.5.4 776 448 776 544 arm-linux-gnueabi-gcc-4.6.4 776 448 776 544 arm-linux-gnueabi-gcc-4.7.4 768 448 768 544 arm-linux-gnueabi-gcc-4.8.5 488 488 776 544 arm-linux-gnueabi-gcc-4.9.3 552 552 776 536 arm-linux-gnueabi-gcc-5.3.1 552 552 776 536 arm-linux-gnueabi-gcc-6.1.1 560 560 776 536 arm-linux-gnueabi-gcc-7.0.1 616 616 808 536 I did not do any runtime tests with serpent, so it is possible that stack frame size does not directly correlate with runtime performance here and it actually makes things worse, but it's more likely to help here, and the reduced stack frame size is probably enough reason to apply the patch, especially given that the crypto code is often used in deep call chains. Link: https://kernelci.org/build/id/58797d7559b5149efdf6c3a9/logs/ Link: http://www.larc.usp.br/~pbarreto/WhirlpoolPage.html Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=11488 Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149 Cc: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* Fix "crypto: algif_hash - Require setkey before accept(2)" (Greg Kroah-Hartman, 2017-02-08, 1 file, -4/+4)
| | | | | | | | | | | | Fix up commit cec8983e6d2cf52e60cd59583ae30f81f923f563 which was commit 6de62f15b581f920ade22d758f4c338311c2f0d4 upstream. The function prototypes were wrong. Someone was ignoring compiler warnings :( Cc: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* crypto: skcipher - Fix blkcipher walk OOM crash (Herbert Xu, 2016-10-05, 1 file, -1/+2)
| | | | | | | | | | | | | | | | | | | | [ Upstream commit acdb04d0b36769b3e05990c488dc74d8b7ac8060 ] When we need to allocate a temporary blkcipher_walk_next and it fails, the code is supposed to take the slow path of processing the data block by block. However, due to an unrelated change we instead end up dereferencing the NULL pointer. This patch fixes it by moving the unrelated bsize setting out of the way so that we enter the slow path as intended. Fixes: 7607bd8ff03b ("[CRYPTO] blkcipher: Added blkcipher_walk_virt_block") Cc: stable@vger.kernel.org Reported-by: xiakaixu <xiakaixu@huawei.com> Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
* crypto: cryptd - initialize child shash_desc on import (Ard Biesheuvel, 2016-10-05, 1 file, -2/+7)
| | | | | | | | | | | | | [ Upstream commit 0bd2223594a4dcddc1e34b15774a3a4776f7749e ] When calling .import() on a cryptd ahash_request, the structure members that describe the child transform in the shash_desc need to be initialized like they are when calling .init() Cc: stable@vger.kernel.org Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
* crypto: scatterwalk - Fix test in scatterwalk_done (Herbert Xu, 2016-08-22, 1 file, -1/+2)
| | | | | | | | | | | | | | | | | | | [ Upstream commit 5f070e81bee35f1b7bd1477bb223a873ff657803 ] When there is more data to be processed, the current test in scatterwalk_done may prevent us from calling pagedone even when we should. In particular, if we're on an SG entry spanning multiple pages where the last page is not a full page, we will incorrectly skip calling pagedone on the second last page. This patch fixes this by adding a separate test for whether we've reached the end of a page. Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
* crypto: gcm - Filter out async ghash if necessary (Herbert Xu, 2016-08-22, 1 file, -1/+3)
| | | | | | | | | | | | | | | [ Upstream commit b30bdfa86431afbafe15284a3ad5ac19b49b88e3 ] As it is if you ask for a sync gcm you may actually end up with an async one because it does not filter out async implementations of ghash. This patch fixes this by adding the necessary filter when looking for ghash. Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
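Not the literal gcm patch (which propagates the caller's sync requirement through the template's type/mask), but the general crypto API convention it relies on: passing CRYPTO_ALG_ASYNC in the mask with a zero type requests an implementation whose ASYNC bit is clear, i.e. a synchronous ghash.

	struct crypto_ahash *ghash;

	ghash = crypto_alloc_ahash("ghash", 0, CRYPTO_ALG_ASYNC);
	if (IS_ERR(ghash))
		return PTR_ERR(ghash);	/* no synchronous ghash available */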
* PKCS#7: pkcs7_validate_trust(): initialize the _trusted output argument (Nicolai Stange, 2016-07-12, 1 file, -0/+2)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit e54358915d0a00399c11c2c23ae1be674cba188a ] Despite what the DocBook comment to pkcs7_validate_trust() says, the *_trusted argument is never set to false. pkcs7_validate_trust() only positively sets *_trusted upon encountering a trusted PKCS#7 SignedInfo block. This is quite unfortunate since its callers, system_verify_data() for example, depend on pkcs7_validate_trust() clearing *_trusted on non-trust. Indeed, UBSAN splats when attempting to load the uninitialized local variable 'trusted' from system_verify_data() in pkcs7_validate_trust(): UBSAN: Undefined behaviour in crypto/asymmetric_keys/pkcs7_trust.c:194:14 load of value 82 is not a valid value for type '_Bool' [...] Call Trace: [<ffffffff818c4d35>] dump_stack+0xbc/0x117 [<ffffffff818c4c79>] ? _atomic_dec_and_lock+0x169/0x169 [<ffffffff8194113b>] ubsan_epilogue+0xd/0x4e [<ffffffff819419fa>] __ubsan_handle_load_invalid_value+0x111/0x158 [<ffffffff819418e9>] ? val_to_string.constprop.12+0xcf/0xcf [<ffffffff818334a4>] ? x509_request_asymmetric_key+0x114/0x370 [<ffffffff814b83f0>] ? kfree+0x220/0x370 [<ffffffff818312c2>] ? public_key_verify_signature_2+0x32/0x50 [<ffffffff81835e04>] pkcs7_validate_trust+0x524/0x5f0 [<ffffffff813c391a>] system_verify_data+0xca/0x170 [<ffffffff813c3850>] ? top_trace_array+0x9b/0x9b [<ffffffff81510b29>] ? __vfs_read+0x279/0x3d0 [<ffffffff8129372f>] mod_verify_sig+0x1ff/0x290 [...] The implication is that pkcs7_validate_trust() effectively grants trust when it really shouldn't have. Fix this by explicitly setting *_trusted to false at the very beginning of pkcs7_validate_trust(). Cc: <stable@vger.kernel.org> Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
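The shape of the fix, as described above (a sketch, with the validation loop elided): the output parameter is initialised at the top of pkcs7_validate_trust(), since the body only ever sets it on success.

	/* only a trusted SignedInfo block below may flip this to true */
	*_trusted = false;

	for (sinfo = pkcs7->signed_infos; sinfo; sinfo = sinfo->next) {
		/* ... existing trust validation ... */
	}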
* crypto: public_key: select CRYPTO_AKCIPHER (Arnd Bergmann, 2016-06-18, 1 file, -0/+1)
| | | | | | | | | | | | | | | | | | | [ Upstream commit bad6a185b4d6f81d0ed2b6e4c16307969f160b95 ] In some rare randconfig builds, we can end up with ASYMMETRIC_PUBLIC_KEY_SUBTYPE enabled but CRYPTO_AKCIPHER disabled, which fails to link because of the reference to crypto_alloc_akcipher: crypto/built-in.o: In function `public_key_verify_signature': :(.text+0x110e4): undefined reference to `crypto_alloc_akcipher' This adds a Kconfig 'select' statement to ensure the dependency is always there. Cc: <stable@vger.kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
* crypto: hash - Fix page length clamping in hash walk (Herbert Xu, 2016-05-17, 1 file, -1/+2)
| | | | | | | | | | | | | [ Upstream commit 13f4bb78cf6a312bbdec367ba3da044b09bf0e29 ] The crypto hash walk code is broken when supplied with an offset greater than or equal to PAGE_SIZE. This patch fixes it by adjusting walk->pg and walk->offset when this happens. Cc: <stable@vger.kernel.org> Reported-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
* crypto: algif_skcipher - Fix race condition in skcipher_check_key (Herbert Xu, 2016-04-19, 1 file, -3/+6)
| | | | | | | | | | | | commit 1822793a523e5d5730b19cc21160ff1717421bc8 upstream. We need to lock the child socket in skcipher_check_key as otherwise two simultaneous calls can cause the parent socket to be freed. Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
* crypto: algif_skcipher - Remove custom release parent function (Herbert Xu, 2016-04-19, 1 file, -40/+3)
| | | | | | | | | | | | commit d7b65aee1e7b4c87922b0232eaba56a8a143a4a0 upstream. This patch removes the custom release parent function as the generic af_alg_release_parent now works for nokey sockets too. Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
* crypto: algif_skcipher - Add nokey compatibility path (Herbert Xu, 2016-04-19, 1 file, -5/+144)
| | | | | | | | | | | | | commit a0fa2d037129a9849918a92d91b79ed6c7bd2818 upstream. This patch adds a compatibility path to support old applications that do accept(2) before setkey. Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> [backported to 3.18 by Milan Broz <gmazyland@gmail.com>] Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
* crypto: algif_skcipher - Require setkey before accept(2) (Herbert Xu, 2016-04-19, 1 file, -7/+42)
| | | | | | | | | | | | | | | | | commit dd504589577d8e8e70f51f997ad487a4cb6c026f upstream. Some cipher implementations will crash if you try to use them without calling setkey first. This patch adds a check so that the accept(2) call will fail with -ENOKEY if setkey hasn't been done on the socket yet. Cc: stable@vger.kernel.org Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Dmitry Vyukov <dvyukov@google.com> [backported to 3.18 by Milan Broz <gmazyland@gmail.com>] Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
* crypto: user - lock crypto_alg_list on alg dump (Mathias Krause, 2016-03-02, 1 file, -1/+5)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit 63e41ebc6630f39422d87f8a4bade1e793f37a01 ] We miss to take the crypto_alg_sem semaphore when traversing the crypto_alg_list for CRYPTO_MSG_GETALG dumps. This allows a race with crypto_unregister_alg() removing algorithms from the list while we're still traversing it, thereby leading to a use-after-free as show below: [ 3482.071639] general protection fault: 0000 [#1] SMP [ 3482.075639] Modules linked in: aes_x86_64 glue_helper lrw ablk_helper cryptd gf128mul ipv6 pcspkr serio_raw virtio_net microcode virtio_pci virtio_ring virtio sr_mod cdrom [last unloaded: aesni_intel] [ 3482.075639] CPU: 1 PID: 11065 Comm: crconf Not tainted 4.3.4-grsec+ #126 [ 3482.075639] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014 [ 3482.075639] task: ffff88001cd41a40 ti: ffff88001cd422c8 task.ti: ffff88001cd422c8 [ 3482.075639] RIP: 0010:[<ffffffff93722bd3>] [<ffffffff93722bd3>] strncpy+0x13/0x30 [ 3482.075639] RSP: 0018:ffff88001f713b60 EFLAGS: 00010202 [ 3482.075639] RAX: ffff88001f6c4430 RBX: ffff88001f6c43a0 RCX: ffff88001f6c4430 [ 3482.075639] RDX: 0000000000000040 RSI: fefefefefefeff16 RDI: ffff88001f6c4430 [ 3482.075639] RBP: ffff88001f713b60 R08: ffff88001f6c4470 R09: ffff88001f6c4480 [ 3482.075639] R10: 0000000000000002 R11: 0000000000000246 R12: ffff88001ce2aa28 [ 3482.075639] R13: ffff880000093700 R14: ffff88001f5e4bf8 R15: 0000000000003b20 [ 3482.075639] FS: 0000033826fa2700(0000) GS:ffff88001e900000(0000) knlGS:0000000000000000 [ 3482.075639] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 3482.075639] CR2: ffffffffff600400 CR3: 00000000139ec000 CR4: 00000000001606f0 [ 3482.075639] Stack: [ 3482.075639] ffff88001f713bd8 ffffffff936ccd00 ffff88001e5c4200 ffff880000093700 [ 3482.075639] ffff88001f713bd0 ffffffff938ef4bf 0000000000000000 0000000000003b20 [ 3482.075639] ffff88001f5e4bf8 ffff88001f5e4848 0000000000000000 0000000000003b20 [ 3482.075639] Call Trace: [ 3482.075639] [<ffffffff936ccd00>] crypto_report_alg+0xc0/0x3e0 [ 3482.075639] [<ffffffff938ef4bf>] ? __alloc_skb+0x16f/0x300 [ 3482.075639] [<ffffffff936cd08a>] crypto_dump_report+0x6a/0x90 [ 3482.075639] [<ffffffff93935707>] netlink_dump+0x147/0x2e0 [ 3482.075639] [<ffffffff93935f99>] __netlink_dump_start+0x159/0x190 [ 3482.075639] [<ffffffff936ccb13>] crypto_user_rcv_msg+0xc3/0x130 [ 3482.075639] [<ffffffff936cd020>] ? crypto_report_alg+0x3e0/0x3e0 [ 3482.075639] [<ffffffff936cc4b0>] ? alg_test_crc32c+0x120/0x120 [ 3482.075639] [<ffffffff93933145>] ? __netlink_lookup+0xd5/0x120 [ 3482.075639] [<ffffffff936cca50>] ? crypto_add_alg+0x1d0/0x1d0 [ 3482.075639] [<ffffffff93938141>] netlink_rcv_skb+0xe1/0x130 [ 3482.075639] [<ffffffff936cc4f8>] crypto_netlink_rcv+0x28/0x40 [ 3482.075639] [<ffffffff939375a8>] netlink_unicast+0x108/0x180 [ 3482.075639] [<ffffffff93937c21>] netlink_sendmsg+0x541/0x770 [ 3482.075639] [<ffffffff938e31e1>] sock_sendmsg+0x21/0x40 [ 3482.075639] [<ffffffff938e4763>] SyS_sendto+0xf3/0x130 [ 3482.075639] [<ffffffff93444203>] ? bad_area_nosemaphore+0x13/0x20 [ 3482.075639] [<ffffffff93444470>] ? 
__do_page_fault+0x80/0x3a0 [ 3482.075639] [<ffffffff939d80cb>] entry_SYSCALL_64_fastpath+0x12/0x6e [ 3482.075639] Code: 88 4a ff 75 ed 5d 48 0f ba 2c 24 3f c3 66 66 2e 0f 1f 84 00 00 00 00 00 55 48 85 d2 48 89 f8 48 89 f9 4c 8d 04 17 48 89 e5 74 15 <0f> b6 16 80 fa 01 88 11 48 83 de ff 48 83 c1 01 4c 39 c1 75 eb [ 3482.075639] RIP [<ffffffff93722bd3>] strncpy+0x13/0x30 To trigger the race run the following loops simultaneously for a while: $ while : ; do modprobe aesni-intel; rmmod aesni-intel; done $ while : ; do crconf show all > /dev/null; done Fix the race by taking the crypto_alg_sem read lock, thereby preventing crypto_unregister_alg() from modifying the algorithm list during the dump. This bug has been detected by the PaX memory sanitize feature. Cc: stable@vger.kernel.org Signed-off-by: Mathias Krause <minipli@googlemail.com> Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: PaX Team <pageexec@freemail.hu> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
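A kernel-style sketch of the fix (the dump-callback details are simplified): the registry semaphore is held for reading around the list walk, so that crypto_unregister_alg() cannot remove entries while the dump is still traversing them.

	down_read(&crypto_alg_sem);
	list_for_each_entry(alg, &crypto_alg_list, cra_list) {
		err = crypto_report_alg(alg, info);
		if (err)
			break;
	}
	up_read(&crypto_alg_sem);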
* X.509: Don't strip leading 00's from key ID when constructing key description (David Howells, 2016-02-15, 1 file, -4/+0)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | [ Upstream commit e7c87bef7de2417b219d4dbfe8d33a0098a8df54 ] Don't strip leading zeros from the crypto key ID when using it to construct the struct key description as the signature in kernels up to and including 4.2 matched this aspect of the key. This means that 1 in 256 keys won't actually match if their key ID begins with 00. The key ID is stored in the module signature as binary and so must be converted to text in order to invoke request_key() - but it isn't stripped at this point. Something like this is likely to be observed in dmesg when the key is loaded: [ 1.572423] Loaded X.509 cert 'Build time autogenerated kernel key: 62a7c3d2da278be024da4af8652c071f3fea33' followed by this when we try and use it: [ 1.646153] Request for unknown module key 'Build time autogenerated kernel key: 0062a7c3d2da278be024da4af8652c071f3fea33' err -11 The 'Loaded' line should show an extra '00' on the front of the hex string. This problem should not affect 4.3-rc1 and onwards because there the key should be matched on one of its auxiliary identities rather than the key struct's description string. Reported-by: Arjan van de Ven <arjan@linux.intel.com> Reported-by: Andy Whitcroft <apw@canonical.com> Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
* crypto: algif_hash - wait for crypto_ahash_init() to complete (Wang, Rui Y, 2016-02-15, 1 file, -1/+3)
| | | | | | | | | | | | [ Upstream commit fe09786178f9df713a4b2dd6b93c0a722346bf5e ] hash_sendmsg/sendpage() need to wait for the completion of crypto_ahash_init() otherwise it can cause panic. Cc: stable@vger.kernel.org Signed-off-by: Rui Wang <rui.y.wang@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
* crypto: shash - Fix has_key setting (Herbert Xu, 2016-02-15, 1 file, -4/+3)
| | | | | | | | | | | | | | | [ Upstream commit 00420a65fa2beb3206090ead86942484df2275f3 ] The has_key logic is wrong for shash algorithms as they always have a setkey function. So we should instead be testing against shash_no_setkey. Fixes: a5596d633278 ("crypto: hash - Add crypto_ahash_has_setkey") Cc: stable@vger.kernel.org Reported-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Tested-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
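A sketch of the corrected test (kernel-style, simplified): every shash provides a ->setkey pointer, with unkeyed algorithms getting the shash_no_setkey stub, so "has a key" must be decided by comparing against that stub rather than against NULL.

	/* shash_no_setkey is the stub installed for unkeyed algorithms */
	crt->has_setkey = alg->setkey != shash_no_setkey;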
* crypto: crc32c - Fix crc32c soft dependency (Jean Delvare, 2016-02-09, 1 file, -1/+0)
| | | | | | | | | | | | | | | | | | | | [ Upstream commit fd7f6727102a1ccf6b4c1dfcc631f9b546526b26 ] I don't think it makes sense for a module to have a soft dependency on itself. This seems quite cyclic by nature and I can't see what purpose it could serve. OTOH libcrc32c calls crypto_alloc_shash("crc32c", 0, 0) so it pretty much assumes that some incarnation of the "crc32c" hash algorithm has been loaded. Therefore it makes sense to have the soft dependency there (as crc-t10dif does.) Cc: stable@vger.kernel.org Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Jean Delvare <jdelvare@suse.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
* crypto: algif_hash - Fix race condition in hash_check_key (Herbert Xu, 2016-02-09, 1 file, -3/+6)
| | | | | | | | | | | [ Upstream commit ad46d7e33219218605ea619e32553daf4f346b9f ] We need to lock the child socket in hash_check_key as otherwise two simultaneous calls can cause the parent socket to be freed. Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>