| Commit message | Author | Age | Files | Lines |
| |
Use subsys_initcall for registration of all templates and generic
algorithm implementations, rather than module_init. Then change
cryptomgr to use arch_initcall, to place it before the subsys_initcalls.
This is needed so that when both a generic and optimized implementation
of an algorithm are built into the kernel (not loadable modules), the
generic implementation is registered before the optimized one.
Otherwise, the self-tests for the optimized implementation are unable to
allocate the generic implementation for the new comparison fuzz tests.
Note that on arm, a side effect of this change is that self-tests for
generic implementations may run before the unaligned access handler has
been installed. So, unaligned accesses will crash the kernel. This is
arguably a good thing as it makes it easier to detect that type of bug.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
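A minimal sketch of the registration pattern described above, using a hypothetical "example" template (this is not the actual kernel diff, which applies the same one-line change to the existing generic modules):

/*
 * Sketch: a generic template now registers at subsys_initcall time.
 * Arch-optimized implementations keep using module_init() (device
 * initcall level), so by the time their self-tests run, the generic
 * version they are compared against is already registered.
 */
#include <linux/module.h>
#include <crypto/algapi.h>

static struct crypto_template crypto_example_tmpl = {
	.name = "example",	/* hypothetical template; .create omitted */
	.module = THIS_MODULE,
};

static int __init crypto_example_init(void)
{
	return crypto_register_template(&crypto_example_tmpl);
}

static void __exit crypto_example_exit(void)
{
	crypto_unregister_template(&crypto_example_tmpl);
}

subsys_initcall(crypto_example_init);	/* was: module_init(crypto_example_init); */
module_exit(crypto_example_exit);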
| |
My patches to make testmgr fuzz algorithms against their generic
implementation detected that the arm64 implementations of
"cts(cbc(aes))" handle empty messages differently from the cts template.
Namely, the arm64 implementations forbid (with -EINVAL) all messages
shorter than the block size, including the empty message; but the cts
template permits empty messages as a special case.
No user should be CTS-encrypting/decrypting empty messages, but we need
to keep the behavior consistent. Unfortunately, as noted in the source
of OpenSSL's CTS implementation [1], there's no common specification for
CTS. This makes it somewhat debatable what the behavior should be.
However, all CTS specifications seem to agree that messages shorter than
the block size are not allowed, and OpenSSL follows this in both CTS
conventions it implements. It would also simplify the user-visible
semantics to have empty messages no longer be a special case.
Therefore, make the cts template return -EINVAL on *all* messages
shorter than the block size, including the empty message.
[1] https://github.com/openssl/openssl/blob/master/crypto/modes/cts128.c
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
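A minimal sketch of the resulting length check at the top of the cts encrypt/decrypt paths (assuming the usual skcipher request fields; not the exact kernel hunk):

/* Sketch: reject all messages shorter than one block, including the
 * empty message, rather than special-casing zero-length input. */
unsigned int bsize = crypto_skcipher_blocksize(tfm);

if (req->cryptlen < bsize)
	return -EINVAL;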
| |
We avoid various VLAs[1] by using constant expressions for block size
and alignment mask.
[1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
Signed-off-by: Salvatore Mesoraca <s.mesoraca16@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
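A sketch of the general pattern; the constant below is illustrative (the actual series introduces a shared compile-time upper bound for cipher block sizes rather than sizing buffers from a runtime value):

/* Before: a VLA sized from a runtime query */
/* u8 d[crypto_skcipher_blocksize(tfm) * 2]; */

/* After: a fixed-size buffer bounded by a constant expression */
#define EXAMPLE_MAX_BLOCKSIZE 16	/* illustrative; the real bound covers all ciphers */
u8 d[EXAMPLE_MAX_BLOCKSIZE * 2];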
| |
Now that the -EBUSY return code only indicates backlog queueing,
we can safely remove the now-redundant check for the
CRYPTO_TFM_REQ_MAY_BACKLOG flag when -EBUSY is returned.
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| |
Instead of unconditionally forcing 4 byte alignment for all generic
chaining modes that rely on crypto_xor() or crypto_inc() (which may
result in unnecessary copying of data when the underlying hardware
can perform unaligned accesses efficiently), make those functions
deal with unaligned input explicitly, but only if the Kconfig symbol
HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.
For crypto_inc(), this simply involves making the 4-byte stride
conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
it typically operates on 16 byte buffers.
For crypto_xor(), an algorithm is implemented that simply runs through
the input using the largest strides possible if unaligned accesses are
allowed. If they are not, an optimal sequence of memory accesses is
emitted that takes the relative alignment of the input buffers into
account, e.g., if the relative misalignment of dst and src is 4 bytes,
the entire xor operation will be completed using 4 byte loads and stores
(modulo unaligned bits at the start and end). Note that all expressions
involving misalign are simply eliminated by the compiler when
HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
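A simplified sketch of the idea (not the kernel's exact implementation, which also emits alignment-aware access sequences when unaligned accesses are slow):

#include <linux/kernel.h>
#include <asm/unaligned.h>

/* Sketch: XOR src into dst using 8-byte strides when the architecture
 * handles unaligned accesses efficiently, falling back to bytes. */
static void xor_sketch(u8 *dst, const u8 *src, unsigned int len)
{
	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
		while (len >= 8) {
			u64 l = get_unaligned((const u64 *)dst) ^
				get_unaligned((const u64 *)src);

			put_unaligned(l, (u64 *)dst);
			dst += 8;
			src += 8;
			len -= 8;
		}
	}
	while (len--)
		*dst++ ^= *src++;
}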
| |
Continuing from this commit: 52f5684c8e1e
("kernel: use macros from compiler.h instead of __attribute__((...))")
I submitted 4 patches in total. They are part of a task I've taken up to
increase compiler portability in the kernel. I've cleaned up the
subsystems under /kernel, /mm, /block and /security; this patch targets
/crypto.
<linux/compiler.h> provides macros for various gcc-specific constructs,
e.g. __weak for __attribute__((weak)). I've replaced all instances of
gcc-specific attributes with the right macros for the crypto subsystem.
I had to make one additional change to compiler-gcc.h for the case where
one wants to use __attribute__((aligned)) without specifying an alignment
factor. From the gcc docs, this results in the largest alignment for that
data type on the target machine, so I've named the macro
__aligned_largest. Please advise if another name is more appropriate.
Signed-off-by: Gideon Israel Dsouza <gidisrael@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
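An illustrative before/after of the kind of cleanup this patch performs (hypothetical struct; the macro comes from <linux/compiler.h> / compiler-gcc.h):

#include <linux/compiler.h>
#include <linux/types.h>

/* Before: open-coded gcc attribute with no alignment factor */
struct example_ctx_before {
	u8 key[16];
} __attribute__((aligned));

/* After: the equivalent portability macro added by this patch */
struct example_ctx_after {
	u8 key[16];
} __aligned_largest;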
| |
Since commit 3a01d0ee2b99 ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_spawn_skcipher2() and
crypto_spawn_skcipher() are equivalent. So switch callers of
crypto_spawn_skcipher2() to crypto_spawn_skcipher() and remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| |
Since commit 3a01d0ee2b99 ("crypto: skcipher - Remove top-level
givcipher interface"), crypto_grab_skcipher2() and
crypto_grab_skcipher() are equivalent. So switch callers of
crypto_grab_skcipher2() to crypto_grab_skcipher() and remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| |
This patch converts cts over to the skcipher interface. It also
optimises the implementation to use one CBC operation for all but
the last block, which is then processed separately.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
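A conceptual sketch of the optimized flow; the helper names below are hypothetical (the real code builds an skcipher sub-request on the underlying CBC transform):

/* Sketch: one CBC pass over everything up to (but not including) the
 * final block, then a separate ciphertext-stealing step for the last
 * 1..blocksize bytes. */
static int cts_encrypt_sketch(struct skcipher_request *req, unsigned int bsize)
{
	unsigned int nbytes = req->cryptlen;
	unsigned int offset = rounddown(nbytes - 1, bsize);
	int err;

	err = cbc_encrypt_range(req, offset);	/* hypothetical helper */
	if (err)
		return err;

	return cts_final_block(req, offset, nbytes - offset);	/* hypothetical */
}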
| |
The cts algorithm as currently implemented assumes the underlying
algorithm is a CBC-mode algorithm. So this patch adds a check for that to
eliminate bogus combinations of cts with non-CBC modes.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
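A minimal sketch of such a check at instance-creation time (field layout assumed): reject any inner algorithm whose name is not of the form "cbc(...)".

/* Sketch: only allow cts to be instantiated on top of CBC. */
if (strncmp(alg->cra_name, "cbc(", 4) != 0)
	return -EINVAL;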
| |
The seqiv generator is completely inappropriate for cts as it's
designed for IPsec algorithms. Since cts users do not actually
use the IV generator we can just fall back to the default.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Maciej Żenczykowski <zenczykowski@gmail.com>
| |
This adds the module loading prefix "crypto-" to the template lookup
as well.
For example, attempting to load 'vfat(blowfish)' via AF_ALG now includes
the "crypto-" prefix at every level, correctly rejecting "vfat":
net-pf-38
algif-hash
crypto-vfat(blowfish)
crypto-vfat(blowfish)-all
crypto-vfat
Reported-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
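A sketch of the lookup pattern with the prefix applied (illustrative; the real change is in the crypto API's module request path):

/* Sketch: request modules only under the "crypto-" alias namespace so
 * that arbitrary strings such as "vfat" cannot pull in unrelated modules. */
request_module("crypto-%s", name);
request_module("crypto-%s-all", name);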
| |
Recently, in commit 13aa93c70e71 ("random: add and use memzero_explicit()
for clearing data"), we have found that GCC may optimize some memset()
cases away when it detects a stack variable is not being used anymore
and going out of scope. This can happen, for example, in cases when we
are clearing out sensitive information such as keying material or any
e.g. intermediate results from crypto computations, etc.
With the help of Coccinelle, we can figure out and fix such occurrences
in the crypto subsystem as well. Julia Lawall provided the following
Coccinelle program:
@@
type T;
identifier x;
@@

T x;
... when exists
    when any
-memset
+memzero_explicit
  (&x,
-0,
  ...)
... when != x
    when strict

@@
type T;
identifier x;
@@

T x[...];
... when exists
    when any
-memset
+memzero_explicit
  (x,
-0,
  ...)
... when != x
    when strict
Therefore, make use of the drop-in replacement memzero_explicit() for
exactly such cases instead of using memset().
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
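A concrete instance of the pattern the semantic patch rewrites (hypothetical local variable):

/* Sensitive stack data: clear it with memzero_explicit() so the compiler
 * cannot prove the buffer dead and elide the clearing. */
u8 round_keys[32];

/* ... key schedule / crypto computation using round_keys ... */

memzero_explicit(round_keys, sizeof(round_keys));
/* was: memset(round_keys, 0, sizeof(round_keys)); */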
| |
Replace PTR_ERR followed by ERR_PTR by ERR_CAST, to be more concise.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
// <smpl>
@@
expression err,x;
@@
- err = PTR_ERR(x);
if (IS_ERR(x))
- return ERR_PTR(err);
+ return ERR_CAST(x);
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
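A concrete before/after instance of the transformation (the crypto_get_attr_alg() caller shown is illustrative of the affected pattern):

struct crypto_alg *alg;

alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_BLKCIPHER,
			  CRYPTO_ALG_TYPE_MASK);
if (IS_ERR(alg))
	return ERR_CAST(alg);	/* was: return ERR_PTR(PTR_ERR(alg)); */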
| |
Steps to reproduce:
modprobe tcrypt # with CONFIG_DEBUG_SG=y
testing cts(cbc(aes)) encryption
test 1 (128 bit key):
------------[ cut here ]------------
kernel BUG at include/linux/scatterlist.h:65!
invalid opcode: 0000 [1] PREEMPT SMP DEBUG_PAGEALLOC
CPU 0
Modules linked in: tea xts twofish twofish_common tcrypt(+) [maaaany]
Pid: 16151, comm: modprobe Not tainted 2.6.26-rc4-fat #7
RIP: 0010:[<ffffffffa0bf032e>] [<ffffffffa0bf032e>] :cts:cts_cbc_encrypt+0x151/0x355
RSP: 0018:ffff81016f497a88 EFLAGS: 00010286
RAX: ffffe20009535d58 RBX: ffff81016f497af0 RCX: 0000000087654321
RDX: ffff8100010d4f28 RSI: ffff81016f497ee8 RDI: ffff81016f497ac0
RBP: ffff81016f497c38 R08: 0000000000000000 R09: 0000000000000011
R10: ffffffff00000008 R11: ffff8100010d4f28 R12: ffff81016f497ac0
R13: ffff81016f497b30 R14: 0000000000000010 R15: 0000000000000010
FS: 00007fac6fa276f0(0000) GS:ffffffff8060e000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007f12ca7cc000 CR3: 000000016f441000 CR4: 00000000000026e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff4ff0 DR7: 0000000000000400
Process modprobe (pid: 16151, threadinfo ffff81016f496000, task ffff8101755b4ae0)
Stack: 0000000000000001 ffff81016f496000 ffffffff80719f78 0000000000000001
0000000000000001 ffffffff8020c87c ffff81016f99c918 20646c756f772049
65687420656b696c 0000000000000020 0000000000000000 0000000033341102
Call Trace:
[<ffffffff8020c87c>] ? restore_args+0x0/0x30
[<ffffffffa04aa311>] ? :aes_generic:crypto_aes_expand_key+0x311/0x369
[<ffffffff802ab453>] ? check_object+0x15a/0x213
[<ffffffff802aad22>] ? init_object+0x6e/0x76
[<ffffffff802ac3ae>] ? __slab_free+0xfc/0x371
[<ffffffffa0bf05ed>] :cts:crypto_cts_encrypt+0xbb/0xca
[<ffffffffa07108de>] ? :crypto_blkcipher:setkey+0xc7/0xec
[<ffffffffa07110b8>] :crypto_blkcipher:async_encrypt+0x38/0x3a
[<ffffffffa2ce9341>] :tcrypt:test_cipher+0x261/0x7c6
[<ffffffffa2cfd9df>] :tcrypt:tcrypt_mod_init+0x9df/0x1b30
[<ffffffff80261e35>] sys_init_module+0x9e/0x1b2
[<ffffffff8020c15a>] system_call_after_swapgs+0x8a/0x8f
Code: 45 c0 e8 aa 24 63 df 48 c1 e8 0c 48 b9 00 00 00 00 00 e2 ff ff 48 8b 55 88 48 6b c0 68 48 01 c8 b9 21 43 65 87 48 39 4d 80 74 04 <0f> 0b eb fe f6 c2 01 74 04 0f 0b eb fe 83 e2 03 4c 89 ef 44 89
RIP [<ffffffffa0bf032e>] :cts:cts_cbc_encrypt+0x151/0x355
RSP <ffff81016f497a88>
---[ end trace e8bahiarjand37fd ]---
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
Implement a CTS wrapper for CBC mode, as required to support AES
encryption for Kerberos (RFC 3962).
Signed-off-by: Kevin Coffman <kwc@citi.umich.edu>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
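A small usage sketch with today's API (the original code used the blkcipher interface of the time); allocation is purely by name, so the wrapper composes with any CBC implementation:

/* Sketch: allocate the CTS-wrapped mode used by RFC 3962 Kerberos
 * enctypes. */
struct crypto_skcipher *tfm;

tfm = crypto_alloc_skcipher("cts(cbc(aes))", 0, 0);
if (IS_ERR(tfm))
	return PTR_ERR(tfm);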
|