| Commit message | Author | Age | Files | Lines |
|
All other error handling paths 'goto err_drop_spawn', except this one.
To avoid a resource leak, do the same here.
Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
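For readers unfamiliar with the convention, here is a minimal, self-contained C sketch of the goto-based unwinding pattern the fix restores; the helper names are invented for illustration and are not the actual crypto/lrw.c symbols.
    #include <stdlib.h>

    static int grab_spawn(void)  { return 0; }  /* stands in for crypto_grab_skcipher() */
    static void drop_spawn(void) { }            /* stands in for crypto_drop_skcipher() */

    /* Toy version of the pattern: every failure after the spawn has been
     * grabbed must 'goto err_drop_spawn', not just free the instance. */
    static int create_instance(void)
    {
            void *inst = malloc(64);
            int err;

            if (!inst)
                    return -1;

            err = grab_spawn();
            if (err)
                    goto err_free_inst;

            if (0 /* a later validation step fails */) {
                    err = -1;
                    goto err_drop_spawn;  /* the buggy path returned here directly */
            }

            return 0;  /* success: inst is handed off, not freed here */

    err_drop_spawn:
            drop_spawn();
    err_free_inst:
            free(inst);
            return err;
    }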
|
The variable len is set to zero, never read, and then later updated
to p - name, so the initial zeroing of len is clearly redundant and
can be removed.
Detected by clang scan-build:
" warning: Value stored to 'len' is never read"
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Fix checkpatch.pl warnings:
WARNING: void function return statements are not generally useful
FILE: crypto/rmd128.c:218:
FILE: crypto/rmd160.c:261:
FILE: crypto/rmd256.c:233:
FILE: crypto/rmd320.c:280:
FILE: crypto/tcrypt.c:385:
FILE: drivers/crypto/ixp4xx_crypto.c:538:
FILE: drivers/crypto/marvell/cesa.c:81:
FILE: drivers/crypto/ux500/cryp/cryp_core.c:1755:
Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Replace hard-coded GCM IV size values with the corresponding named constants.
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
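As a hedged illustration of the kind of change involved (the field and the old literal shown here are assumptions, with GCM_AES_IV_SIZE taken to be the named constant in question):
    /* before: bare magic number for the GCM IV length */
    .ivsize = 12,

    /* after: named constant, e.g. GCM_AES_IV_SIZE from <crypto/gcm.h> */
    .ivsize = GCM_AES_IV_SIZE,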
|
Add testmgr and tcrypt tests and vectors for SM3 secure hash.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Add OSCCA SM3 secure hash (OSCCA GM/T 0004-2012 SM3)
generic hash transformation.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
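Once the transform is registered it is also reachable from user space via AF_ALG; a minimal sketch (error handling omitted, assumes a kernel with the SM3 generic driver enabled):
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/if_alg.h>

    int main(void)
    {
            struct sockaddr_alg sa = {
                    .salg_family = AF_ALG,
                    .salg_type   = "hash",
                    .salg_name   = "sm3",
            };
            unsigned char digest[32];   /* SM3 produces a 256-bit digest */
            int tfmfd, opfd, i;

            tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
            opfd = accept(tfmfd, NULL, 0);

            write(opfd, "abc", 3);          /* hash the message "abc" */
            read(opfd, digest, sizeof(digest));

            for (i = 0; i < 32; i++)
                    printf("%02x", digest[i]);
            printf("\n");

            close(opfd);
            close(tfmfd);
            return 0;
    }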
|
When two adjacent TX SGLs are processed and parts of both are pulled
into the per-request TX SGL, the wrong per-request TX SGL entries were
updated.
This fixes a NULL pointer dereference that occurred when a cipher
implementation walked a TX SGL in which some entries were NULL.
Fixes: e870456d8e7c ("crypto: algif_skcipher - overhaul memory...")
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
During the change to use aligned buffers, the deallocation code path was
not updated correctly. The current code tries to free the aligned buffer
pointer and not the original buffer pointer as it is supposed to.
Thus, the code is updated to free the original buffer pointer and set
the aligned buffer pointer that is used throughout the code to NULL.
Fixes: 3cfc3b9721123 ("crypto: drbg - use aligned buffers")
CC: <stable@vger.kernel.org>
CC: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
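The underlying idea, as a small self-contained sketch (the structure and field names are made up for illustration, not taken from drbg.c):
    #include <stdlib.h>
    #include <stdint.h>

    struct scratch {
            uint8_t *buf;         /* pointer returned by the allocator */
            uint8_t *aligned_buf; /* aligned view used everywhere else */
    };

    static void scratch_free(struct scratch *s)
    {
            /* Free what was actually allocated, never the derived
             * aligned pointer. */
            free(s->buf);
            s->buf = NULL;
            s->aligned_buf = NULL;
    }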
|
When a page is assigned to a TX SGL, call get_page to increment the
reference counter. It is possible that one page is referenced in
multiple SGLs:
- in the global TX SGL in case a previous af_alg_pull_tsgl only
reassigned parts of a page to a per-request TX SGL
- in the per-request TX SGL as assigned by af_alg_pull_tsgl
Note, multiple requests can be active at the same time whose TX SGLs all
point to different parts of the same page.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
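In kernel terms the rule amounts to taking one page reference per SGL entry that points into the page; a rough sketch of the idea (not the literal af_alg code):
    /* Each SGL entry that refers to the page owns one reference, so the
     * page stays alive until every referring SGL has been torn down. */
    get_page(page);
    sg_set_page(sg, page, plen, offset);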
|
There are already helpers to (un)register multiple normal
and AEAD algos. Add one for ahashes too.
Signed-off-by: Lars Persson <larper@axis.com>
Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Merge the crypto tree to resolve the conflict between the temporary
and long-term fixes in algif_skcipher.
|
For asynchronous operation, SGs are allocated without a page mapped to
them or with a page that is not used (ref-counted). If the SGL is freed,
the code must only call put_page for an SG if there was a page assigned
and ref-counted in the first place.
This fixes a kernel crash when using io_submit with more than one iocb
using the sendmsg and sendpage (vmsplice/splice) interface.
Cc: <stable@vger.kernel.org>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
We failed to catch a bug in the chacha20 code after porting it to the
skcipher API. We would have caught it if any chunked tests had been
defined, so define some now so we will catch future regressions.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Commit 9ae433bc79f9 ("crypto: chacha20 - convert generic and x86 versions
to skcipher") ported the existing chacha20 code to use the new skcipher
API, and introduced a bug along the way. Unfortunately, the tcrypt tests
did not catch the error, and it was only found recently by Tobias.
Stefan kindly diagnosed the error, and proposed a fix which is similar
to the one below, with the exception that 'walk.stride' is used rather
than the hardcoded block size. This does not actually matter in this
case, but it's a better example of how to use the skcipher walk API.
Fixes: 9ae433bc79f9 ("crypto: chacha20 - convert generic and x86 ...")
Cc: <stable@vger.kernel.org> # v4.11+
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Reported-by: Tobias Brunner <tobias@strongswan.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
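A rough sketch of the resulting walk loop, reconstructed from the description above (treat it as an approximation of the patch, not a verbatim excerpt):
    err = skcipher_walk_virt(&walk, req, true);
    crypto_chacha20_init(state, ctx, walk.iv);

    while (walk.nbytes > 0) {
            unsigned int nbytes = walk.nbytes;

            /* Only the final chunk may be partial; earlier chunks are
             * rounded down to the walk's stride, not a hardcoded size. */
            if (nbytes < walk.total)
                    nbytes = round_down(nbytes, walk.stride);

            chacha20_docrypt(state, walk.dst.virt.addr,
                             walk.src.virt.addr, nbytes);
            err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
    }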
|
Consolidate following data structures:
skcipher_async_req, aead_async_req -> af_alg_async_req
skcipher_rsgl, aead_rsql -> af_alg_rsgl
skcipher_tsgl, aead_tsql -> af_alg_tsgl
skcipher_ctx, aead_ctx -> af_alg_ctx
Consolidate following functions:
skcipher_sndbuf, aead_sndbuf -> af_alg_sndbuf
skcipher_writable, aead_writable -> af_alg_writable
skcipher_rcvbuf, aead_rcvbuf -> af_alg_rcvbuf
skcipher_readable, aead_readable -> af_alg_readable
aead_alloc_tsgl, skcipher_alloc_tsgl -> af_alg_alloc_tsgl
aead_count_tsgl, skcipher_count_tsgl -> af_alg_count_tsgl
aead_pull_tsgl, skcipher_pull_tsgl -> af_alg_pull_tsgl
aead_free_areq_sgls, skcipher_free_areq_sgls -> af_alg_free_areq_sgls
aead_wait_for_wmem, skcipher_wait_for_wmem -> af_alg_wait_for_wmem
aead_wmem_wakeup, skcipher_wmem_wakeup -> af_alg_wmem_wakeup
aead_wait_for_data, skcipher_wait_for_data -> af_alg_wait_for_data
aead_data_wakeup, skcipher_data_wakeup -> af_alg_data_wakeup
aead_sendmsg, skcipher_sendmsg -> af_alg_sendmsg
aead_sendpage, skcipher_sendpage -> af_alg_sendpage
aead_async_cb, skcipher_async_cb -> af_alg_async_cb
aead_poll, skcipher_poll -> af_alg_poll
Split out the following common code from recvmsg:
af_alg_alloc_areq: allocation of the request data structure for the
cipher operation
af_alg_get_rsgl: creation of the RX SGL anchored in the request data
structure
The following changes, which do not affect functionality, have been
applied to synchronize the slightly different code bases of
algif_skcipher and algif_aead:
The wakeup in af_alg_wait_for_data is triggered when either more data
is received or the indicator that more data is to be expected is
released. The first is triggered by user space, the second is
triggered by the kernel upon finishing the processing of data
(i.e. the kernel is ready for more).
af_alg_sendmsg uses size_t in min_t calculation for obtaining len.
Return code determination is consistent with algif_skcipher. The
scope of the variable i is reduced to match algif_aead. The type of the
variable i is switched from int to unsigned int to match algif_aead.
af_alg_sendpage does not contain the superfluous err = 0 from
aead_sendpage.
af_alg_async_cb requires the number of output bytes to be stored in
areq->outlen before the AIO callback is triggered.
POLLIN / POLLRDNORM is now set when either no more data is expected or
the kernel has been supplied with data. This is consistent with the
wakeup from sleep when the kernel waits for data.
The request data structure is extended by the field last_rsgl, which
points to the last RX SGL list entry. This helps the recvmsg
implementation chain the RX SGL to other SG(L)s if needed. It is
currently used by algif_aead, which chains the tag SGL to the RX SGL
during decryption.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
When UBSAN is enabled, we get a very large stack frame for
__serpent_setkey, when the register allocator ends up using more registers
than it has, and has to spill temporary values to the stack. The code
was originally optimized for in-order x86-32 CPU implementations using
older compilers, but it now runs into a highly suboptimal case on all
CPU architectures, as seen by this warning:
crypto/serpent_generic.c: In function '__serpent_setkey':
crypto/serpent_generic.c:436:1: error: the frame size of 2720 bytes is larger than 2048 bytes [-Werror=frame-larger-than=]
Disabling -fsanitize=alignment would avoid that warning; presumably the
option turns off an optimization step that is required for getting the
register allocation right, but there is no easy way to do that on gcc-7
(gcc-8 introduces a function attribute for this).
I tried to figure out a way to modify the source code instead, and noticed
that the two stages of the setkey() function (keyiter and sbox) each are
fine by themselves, but not when combined into one function. Splitting
out the entire sbox into a separate function also happens to work fine
with all compilers I tried (arm, arm64 and x86).
The setkey function uses a strange way to handle offsets into the key
array, using both negative and positive index values, as well as adjusting
the array pointer back and forth. I have checked that this actually
makes no difference to modern compilers, but I left that untouched
to make the patch easier to review and to keep the code closer to
the reference implementation.
Link: https://patchwork.kernel.org/patch/9189575/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Use the NULL cipher to copy the AAD and PT/CT from the TX SGL
to the RX SGL. This allows an in-place crypto operation on the
RX SGL for encryption, because the TX data is always smaller than or
equal to the RX data (the RX data will hold the tag).
For decryption, a per-request TX SGL is created which will only hold
the tag value. As the RX SGL will have no space for the tag value and
an in-place operation will not write the tag buffer, the TX SGL with the
tag value is chained to the RX SGL. This now allows an in-place
crypto operation.
For example:
* without the patch:
kcapi -x 2 -e -c "gcm(aes)" -p 89154d0d4129d322e4487bafaa4f6b46 -k c0ece3e63198af382b5603331cc23fa8 -i 7e489b83622e7228314d878d -a afcd7202d621e06ca53b70c2bdff7fb2 -l 16 -u -s
00000000000000000000000000000000f4a3eacfbdadd3b1a17117b1d67ffc1f1e21efbbc6d83724a8c296e3bb8cda0c
* with the patch:
kcapi -x 2 -e -c "gcm(aes)" -p 89154d0d4129d322e4487bafaa4f6b46 -k c0ece3e63198af382b5603331cc23fa8 -i 7e489b83622e7228314d878d -a afcd7202d621e06ca53b70c2bdff7fb2 -l 16 -u -s
afcd7202d621e06ca53b70c2bdff7fb2f4a3eacfbdadd3b1a17117b1d67ffc1f1e21efbbc6d83724a8c296e3bb8cda0c
Tests covering this functionality have been added to libkcapi.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
If no data has been processed during recvmsg, return the error code.
This covers all errors received during non-AIO operations.
If any error occurs during a synchronous operation in addition to
-EIOCBQUEUED or -EBADMSG (like -ENOMEM), it should be relayed to the
caller.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
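The resulting rule has the usual shape (variable names are illustrative, not the actual af_alg locals):
    /* Report progress if any bytes were processed; only when nothing was
     * processed is the error code itself handed back to the caller. */
    return processed ? processed : err;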
|
There are quite a number of occurrences in the kernel of the pattern
if (dst != src)
memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);
or
crypto_xor(keystream, src, nbytes);
memcpy(dst, keystream, nbytes);
where crypto_xor() is preceded or followed by a memcpy() invocation
that is only there because crypto_xor() uses its output parameter as
one of the inputs. To avoid having to add new instances of this pattern
in the arm64 code, which will be refactored to implement non-SIMD
fallbacks, add an alternative implementation called crypto_xor_cpy(),
taking separate input and output arguments. This removes the need for
the separate memcpy().
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
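A minimal, self-contained sketch of what a three-operand XOR helper buys (an illustration of the idea only; the kernel's implementation works on wider word sizes):
    #include <stddef.h>
    #include <stdint.h>

    /* XOR src1 with src2 straight into dst, so callers no longer need a
     * memcpy() just to satisfy a two-operand, in-place XOR primitive. */
    static void xor_cpy(uint8_t *dst, const uint8_t *src1,
                        const uint8_t *src2, size_t len)
    {
            while (len--)
                    *dst++ = *src1++ ^ *src2++;
    }
With such a helper, the crypto_xor()/memcpy() pairs shown above collapse into a single call.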
|
In preparation for introducing crypto_xor_cpy(), which will use separate
operands for input and output, modify the __crypto_xor() implementation,
which it will share with the existing crypto_xor() and which provides the
actual functionality when the inline version is not used.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
The scompress code allocates 2 x 128 KB of scratch buffers for each CPU,
so that clients of the async API can use synchronous implementations
even from atomic context. However, on systems such as Cavium Thunderx
(which has 96 cores), this adds up to a non-negligible 24 MB. Also,
32-bit systems may prefer to use their precious vmalloc space for other
things, especially since there don't appear to be any clients for the
async compression API yet.
So let's defer allocation of the scratch buffers until the first time
we allocate an acompress cipher based on an scompress implementation.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
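The gist is an allocate-on-first-use pattern guarded by a lock; a self-contained user-space sketch of that pattern (not the scompress code itself):
    #include <pthread.h>
    #include <stdlib.h>

    static void *scratch;
    static pthread_mutex_t scratch_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Allocate the scratch memory only when the first user appears, so
     * systems that never use the feature never pay for it. */
    static void *get_scratch(size_t size)
    {
            pthread_mutex_lock(&scratch_lock);
            if (!scratch)
                    scratch = malloc(size);
            pthread_mutex_unlock(&scratch_lock);
            return scratch;
    }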
|
When allocating the per-CPU scratch buffers, we allocate the source
and destination buffers separately, but bail immediately if the second
allocation fails, without freeing the first one. Fix that.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
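The fix follows the standard paired-allocation pattern; a self-contained sketch with invented names:
    #include <stdlib.h>

    static int alloc_scratch_pair(void **src, void **dst, size_t size)
    {
            *src = malloc(size);
            if (!*src)
                    return -1;

            *dst = malloc(size);
            if (!*dst) {
                    /* Don't leak the first buffer when the second fails. */
                    free(*src);
                    *src = NULL;
                    return -1;
            }
            return 0;
    }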
|
Due to the use of per-CPU buffers, scomp_acomp_comp_decomp() executes
with preemption disabled, and so whether the CRYPTO_TFM_REQ_MAY_SLEEP
flag is set is irrelevant, since we cannot sleep anyway. So disregard
the flag, and use GFP_ATOMIC unconditionally.
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
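Sketched as a before/after, with the surrounding code paraphrased rather than quoted:
    /* before (paraphrased): pick the allocation mode from the request's
     * CRYPTO_TFM_REQ_MAY_SLEEP flag */
    gfp = may_sleep ? GFP_KERNEL : GFP_ATOMIC;

    /* after: preemption is disabled here, so sleeping is never allowed */
    gfp = GFP_ATOMIC;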
|
ecdh_ctx contained statically allocated buffers for the shared secret
and public key.
The shared secret and the public key were prone to concurrency issues
because they could be shared by multiple crypto requests.
The concurrency issue is fixed by replacing the per-tfm shared secret
and public key with per-request, dynamically allocated ones.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Remove xts(aes) speed tests with 2 x 192-bit keys, since implementations
adhering strictly to IEEE 1619-2007 standard cannot cope with key sizes
other than 2 x 128, 2 x 256 bits - i.e. AES-XTS-{128,256}:
[...]
tcrypt: test 5 (384 bit key, 16 byte blocks):
caam_jr 8020000.jr: key size mismatch
tcrypt: setkey() failed flags=200000
[...]
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Otherwise, we might be seeding the RNG using bad randomness, which is
dangerous. The one use of this function from within the kernel -- not
from userspace -- is being removed (keys/big_key), so that call site
isn't relevant in assessing this.
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
The updated memory management is described in the top part of the code.
As one benefit of the changed memory management, the AIO and synchronous
operation is now implemented in one common function. The AF_ALG
operation uses the async kernel crypto API interface for each cipher
operation. Thus, the only difference between the AIO and sync operation
types visible from user space is:
1. the callback function to be invoked when the asynchronous operation
is completed
2. whether to wait for the completion of the kernel crypto API operation
or not
The change includes the overhaul of the TX and RX SGL handling. The TX
SGL holding the data sent from user space to the kernel is now dynamic
similar to algif_skcipher. This dynamic nature allows a continuous
operation of a thread sending data and a second thread receiving the
data. These threads do not need to synchronize, as the kernel processes
as much data from the TX SGL as is needed to fill the RX SGL.
The caller reading the data from the kernel defines the amount of data
to be processed. Considering that the interface covers AEAD
authenticating ciphers, the reader must provide the buffer in the
correct size. Thus the reader defines the encryption size.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
The updated memory management is described in the top part of the code.
As one benefit of the changed memory management, the AIO and synchronous
operation is now implemented in one common function. The AF_ALG
operation uses the async kernel crypto API interface for each cipher
operation. Thus, the only difference between the AIO and sync operation
types visible from user space is:
1. the callback function to be invoked when the asynchronous operation
is completed
2. whether to wait for the completion of the kernel crypto API operation
or not
In addition, the code structure is adjusted to match the structure of
algif_aead for easier code assessment.
The user space interface changed slightly as follows: the old AIO
operation returned zero upon success and < 0 in case of an error to user
space. As all other AF_ALG interfaces (including the sync skcipher
interface) returned the number of processed bytes upon success and < 0
in case of an error, the new skcipher interface (regardless of AIO or
sync) returns the number of processed bytes in case of success.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
When authencesn is used together with digest_null a crash will
occur on the decrypt path. This is because normally we perform
a special setup to preserve the ESN, but this is skipped if there
is no authentication. However, on the post-authentication path
it always expects the preservation to be in place, thus causing
a crash when digest_null is used.
This patch fixes this by also skipping the post-processing when
there is no authentication.
Fixes: 104880a6b470 ("crypto: authencesn - Convert to new AEAD...")
Cc: <stable@vger.kernel.org>
Reported-by: Jan Tluka <jtluka@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
- fix new compiler warnings in cavium
- set post-op IV properly in caam (this fixes chaining)
- fix potential use-after-free in atmel in case of EBUSY
- fix sleeping in softirq path in chcr
- disable buggy sha1-avx2 driver (may overread and page fault)
- fix use-after-free on signals in caam
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: cavium - make several functions static
crypto: chcr - Avoid algo allocation in softirq.
crypto: caam - properly set IV after {en,de}crypt
crypto: atmel - only treat EBUSY as transient if backlog
crypto: af_alg - Avoid sock_graft call warning
crypto: caam - fix signals handling
crypto: sha1-ssse3 - Disable avx2
|
crypto: af_alg - Avoid sock_graft call warning
The newly added sock_graft warning triggers in af_alg_accept.
It's harmless as we're essentially doing sock->sk = sock->sk.
The sock_graft call is actually redundant because all the work
it does is subsumed by sock_init_data. However, it was added
to placate SELinux as it uses it to initialise its internal state.
This patch avoids the warning by making the SELinux call directly.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: David S. Miller <davem@davemloft.net>
|
Pull dmaengine updates from Vinod Koul:
- removal of AVR32 support in dw driver as AVR32 is gone
- new driver for Broadcom stream buffer accelerator (SBA) RAID driver
- add support for Faraday Technology FTDMAC020 in amba-pl08x driver
- IOMMU support in pl330 driver
- updates to bunch of drivers
* tag 'dmaengine-4.13-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (36 commits)
dmaengine: qcom_hidma: correct API violation for submit
dmaengine: zynqmp_dma: Remove max len check in zynqmp_dma_prep_memcpy
dmaengine: tegra-apb: Really fix runtime-pm usage
dmaengine: fsl_raid: make of_device_ids const.
dmaengine: qcom_hidma: allow ACPI/DT parameters to be overridden
dmaengine: fsldma: set BWC, DAHTS and SAHTS values correctly
dmaengine: Kconfig: Simplify the help text for MXS_DMA
dmaengine: pl330: Delete unused functions
dmaengine: Replace WARN_TAINT_ONCE() with pr_warn_once()
dmaengine: Kconfig: Extend the dependency for MXS_DMA
dmaengine: mxs: Use %zu for printing a size_t variable
dmaengine: ste_dma40: Cleanup scatterlist layering violations
dmaengine: imx-dma: cleanup scatterlist layering violations
dmaengine: use proper name for the R-Car SoC
dmaengine: imx-sdma: Fix compilation warning.
dmaengine: imx-sdma: Handle return value of clk_prepare_enable
dmaengine: pl330: Add IOMMU support to slave tranfers
dmaengine: DW DMAC: Handle return value of clk_prepare_enable
dmaengine: pl08x: use GENMASK() to create bitmasks
dmaengine: pl08x: Add support for Faraday Technology FTDMAC020
...
|
DMA_PREP_FENCE is to be used when preparing a Tx descriptor whose output
will be consumed by the next/dependent Tx descriptor.
DMA_PREP_FENCE was not being set correctly in do_async_gen_syndrome()
when calling dma->device_prep_dma_pq() under the following conditions:
1. ASYNC_TX_FENCE not set in submit->flags
2. DMA_PREP_FENCE not set in dma_flags
3. src_cnt (= (disks - 2)) is greater than dma_maxpq(dma, dma_flags)
This patch fixes DMA_PREP_FENCE usage in do_async_gen_syndrome() taking
inspiration from do_async_xor() implementation.
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Ray Jui <ray.jui@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
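The shape of the fix, following the do_async_xor() style the message points to (a paraphrase with abbreviated, illustrative argument names, not the literal diff):
    /* If the submitter asked for ordering, request a fence on this
     * descriptor so the dependent descriptor sees its output. */
    if (submit->flags & ASYNC_TX_FENCE)
            dma_flags |= DMA_PREP_FENCE;

    tx = dma->device_prep_dma_pq(chan, dma_dest, dma_src, src_cnt,
                                 coefs, len, dma_flags);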
|
Pull networking updates from David Miller:
"Reasonably busy this cycle, but perhaps not as busy as in the 4.12
merge window:
1) Several optimizations for UDP processing under high load from
Paolo Abeni.
2) Support pacing internally in TCP when using the sch_fq packet
scheduler for this is not practical. From Eric Dumazet.
3) Support multiple filter chains per qdisc, from Jiri Pirko.
4) Move to 1ms TCP timestamp clock, from Eric Dumazet.
5) Add batch dequeueing to vhost_net, from Jason Wang.
6) Flesh out more completely SCTP checksum offload support, from
Davide Caratti.
7) More plumbing of extended netlink ACKs, from David Ahern, Pablo
Neira Ayuso, and Matthias Schiffer.
8) Add devlink support to nfp driver, from Simon Horman.
9) Add RTM_F_FIB_MATCH flag to RTM_GETROUTE queries, from Roopa
Prabhu.
10) Add stack depth tracking to BPF verifier and use this information
in the various eBPF JITs. From Alexei Starovoitov.
11) Support XDP on qed device VFs, from Yuval Mintz.
12) Introduce BPF PROG ID for better introspection of installed BPF
programs. From Martin KaFai Lau.
13) Add bpf_set_hash helper for TC bpf programs, from Daniel Borkmann.
14) For loads, allow narrower accesses in bpf verifier checking, from
Yonghong Song.
15) Support MIPS in the BPF selftests and samples infrastructure, the
MIPS eBPF JIT will be merged in via the MIPS GIT tree. From David
Daney.
16) Support kernel based TLS, from Dave Watson and others.
17) Remove completely DST garbage collection, from Wei Wang.
18) Allow installing TCP MD5 rules using prefixes, from Ivan
Delalande.
19) Add XDP support to Intel i40e driver, from Björn Töpel
20) Add support for TC flower offload in nfp driver, from Simon
Horman, Pieter Jansen van Vuuren, Benjamin LaHaise, Jakub
Kicinski, and Bert van Leeuwen.
21) IPSEC offloading support in mlx5, from Ilan Tayari.
22) Add HW PTP support to macb driver, from Rafal Ozieblo.
23) Networking refcount_t conversions, From Elena Reshetova.
24) Add sock_ops support to BPF, from Lawrence Brakmo. This is useful
for tuning the TCP sockopt settings of a group of applications,
currently via CGROUPs"
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next: (1899 commits)
net: phy: dp83867: add workaround for incorrect RX_CTRL pin strap
dt-bindings: phy: dp83867: provide a workaround for incorrect RX_CTRL pin strap
cxgb4: Support for get_ts_info ethtool method
cxgb4: Add PTP Hardware Clock (PHC) support
cxgb4: time stamping interface for PTP
nfp: default to chained metadata prepend format
nfp: remove legacy MAC address lookup
nfp: improve order of interfaces in breakout mode
net: macb: remove extraneous return when MACB_EXT_DESC is defined
bpf: add missing break in for the TCP_BPF_SNDCWND_CLAMP case
bpf: fix return in load_bpf_file
mpls: fix rtm policy in mpls_getroute
net, ax25: convert ax25_cb.refcount from atomic_t to refcount_t
net, ax25: convert ax25_route.refcount from atomic_t to refcount_t
net, ax25: convert ax25_uid_assoc.refcount from atomic_t to refcount_t
net, sctp: convert sctp_ep_common.refcnt from atomic_t to refcount_t
net, sctp: convert sctp_transport.refcnt from atomic_t to refcount_t
net, sctp: convert sctp_chunk.refcnt from atomic_t to refcount_t
net, sctp: convert sctp_datamsg.refcnt from atomic_t to refcount_t
net, sctp: convert sctp_auth_bytes.refcnt from atomic_t to refcount_t
...
|
The refcount_t type and its corresponding API should be used instead of
atomic_t when the variable is used as a reference counter. This helps
avoid accidental refcounter overflows that might lead to use-after-free
situations.
This patch uses refcount_inc_not_zero() instead of
atomic_inc_not_zero_hint() due to the absence of a _hint() version of
the refcount API. If the hint() version must be used, we might need to
revisit the API.
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Hans Liljestrand <ishkamiel@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David Windsor <dwindsor@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
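The conversion itself is mechanical; a hedged before/after on a generic structure (not one of the actual ax25/sctp structures):
    #include <linux/refcount.h>

    struct obj {
            refcount_t refcnt;              /* was: atomic_t refcnt */
    };

    static struct obj *obj_get(struct obj *o)
    {
            /* was: atomic_inc_not_zero(&o->refcnt) */
            if (!refcount_inc_not_zero(&o->refcnt))
                    return NULL;
            return o;
    }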
|
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
"Algorithms:
- add private key generation to ecdh
Drivers:
- add generic gcm(aes) to aesni-intel
- add SafeXcel EIP197 crypto engine driver
- add ecb(aes), cfb(aes) and ecb(des3_ede) to cavium
- add support for CNN55XX adapters in cavium
- add ctr mode to chcr
- add support for gcm(aes) to omap"
* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (140 commits)
crypto: testmgr - Reenable sha1/aes in FIPS mode
crypto: ccp - Release locks before returning
crypto: cavium/nitrox - dma_mapping_error() returns bool
crypto: doc - fix typo in docs
Documentation/bindings: Document the SafeXel cryptographic engine driver
crypto: caam - fix gfp allocation flags (part II)
crypto: caam - fix gfp allocation flags (part I)
crypto: drbg - Fixes panic in wait_for_completion call
crypto: caam - make of_device_ids const.
crypto: vmx - remove unnecessary check
crypto: n2 - make of_device_ids const
crypto: inside-secure - use the base_end pointer in ring rollback
crypto: inside-secure - increase the batch size
crypto: inside-secure - only dequeue when needed
crypto: inside-secure - get the backlog before dequeueing the request
crypto: inside-secure - stop requeueing failed requests
crypto: inside-secure - use one queue per hw ring
crypto: inside-secure - update the context and request later
crypto: inside-secure - align the cipher and hash send functions
crypto: inside-secure - optimize DSE bufferability control
...
|
Merge the crypto tree to pull in fixes for the next merge window.
|
Initialise ctr_completion variable before use.
Cc: <stable@vger.kernel.org>
Signed-off-by: Harsh Jain <harshjain.prof@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
The combination of sha1 and aes was disabled in FIPS Mode
accidentally. This patch reenables it.
Fixes: 284a0f6e87b0 ("crypto: testmgr - Disable fips-allowed for...")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Stephan Müller <smueller@chronox.de>
|
The PKCS#1 RSA implementation is provided with a self test with RSA 2048
and SHA-256. This self test implicitly covers other RSA keys and other
hashes. Also, this self test implies that pkcs1pad(rsa) is FIPS
140-2 compliant.
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Otherwise, we enable all sorts of forgeries via timing attack.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Suggested-by: Stephan Müller <smueller@chronox.de>
Cc: stable@vger.kernel.org
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
By adding a struct device *dev to struct engine, we can store the device
used at registration time and thus use the dev_xxx functions instead of
pr_xxx.
Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Fix inconsistent format and spelling in hash tests error messages.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Use more common error logging style.
Signed-off-by: Karim Eshapa <karim.eshapa@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
mix_columns() contains a comment which shows the matrix used by the
MixColumns step of AES, but the last entry in this matrix was incorrect.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
The test considers a party that already has a private-public
key pair and a party that provides a NULL key. The kernel generates the
private-public key pair for the latter, computes the shared secret on
both ends, and verifies that it is the same.
The explicit private-public key pair was copied from
the previous test vector.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Add support for generating ecc private keys.
Generation of ecc private keys is helpful in a user-space to kernel
ecdh offload because the keys are not revealed to user-space. Private
key generation is also helpful to implement forward secrecy.
If the user provides a NULL ecc private key, the kernel will generate it
and further use it for ecdh.
Move ecdh's object files below drbg's. drbg must be present in the kernel
at the time of calling.
Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
Reviewed-by: Stephan Müller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
We forgot to set the error code on this path, so it could result in
returning NULL, which leads to a NULL dereference.
Fixes: db6c43bd2132 ("crypto: KEYS: convert public key and digsig asym to the akcipher api")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Initialise ctr_completion variable before use.
Signed-off-by: Harsh Jain <harshjain.prof@gmail.com>
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|